<p>I am new to Docker and Kubernetes. I am trying to create a container with <code>tomcat7/Java7</code> so that I can deploy my <code>webapps</code> into it. My only concern is the <code>tomcat/conf</code> config files, which contain details of <code>database connections</code>, <code>threadpool</code>, <code>Java Memory</code>, etc.</p> <p>What I want is to copy these files from the Kubernetes server into the Docker container and place them in the right locations when the container starts.</p> <p>P.S.: I don't want to do it via environment variables, as they would be huge in number if I kept a variable for every entry in the config files.</p>
<p>You could add a ConfigMap in your Kubernetes, from your tomcat config (files or a whole dir)</p> <pre><code>kubectl -n staging create configmap special-config --from-file={path-to-tomcat-conf}/server.xml </code></pre> <p>And then mount it on your pod (kubectl create -f path/to/the/pod.yaml)</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: tomcat-test-pod spec: containers: - name: test-container image: tomcat:7.0 command: [ "catalina.sh", "run" ] volumeMounts: - name: config-volume mountPath: /usr/local/tomcat/conf/server.xml volumes: - name: config-volume configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: special-config </code></pre> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-specific-path-in-the-volume" rel="noreferrer">Kubernetes docs</a></p>
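<p>One gotcha with the example above: mounting the ConfigMap volume at a file path like <code>/usr/local/tomcat/conf/server.xml</code> without <code>subPath</code> typically turns that path into a directory containing the ConfigMap keys, hiding the rest of Tomcat's stock <code>conf/</code> files. If you only want to overlay the single file, a <code>subPath</code> mount is one option (a sketch, assuming a Kubernetes version that supports <code>subPath</code>):</p> <pre><code>    volumeMounts:
    - name: config-volume
      mountPath: /usr/local/tomcat/conf/server.xml
      subPath: server.xml   # mount only this key; the other files in conf/ stay as shipped in the image
</code></pre>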
<p>There are a lot of options available for deploying Kubernetes, including Redshift, CoreOS, and hosted options. I was wondering whether anyone is exploring kubeadm for non-prod environments. Is it a viable option for standing up a multi-node k8s cluster? Thanks in advance.</p>
<p>Yes, kubeadm is a viable option for a non-production cluster, but it is currently (late 2017) undergoing significant development. Two features that I would watch closely are support for <a href="https://github.com/kubernetes/kubernetes/pull/50872" rel="nofollow noreferrer">upgrades</a>, and support for <a href="https://github.com/kubernetes/kubeadm/issues/261" rel="nofollow noreferrer">HA masters</a>. Currently <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> has support for both, so you might consider that as an alternative if you need those features.</p> <p>There are many other differences between the two, so it's impossible to recommend one over the other, but in general, I would recommend kubeadm for learning about Kubernetes (especially the provisioning aspect, and what is required for a running cluster) and kops if you just need a mostly-production-ready cluster.</p> <p>(I don't have experience with other provisioning tools, so I can't comment on them, but there are many worth looking at.)</p> <p>In the future, the kubeadm maintainers want kubeadm to provide the plumbing so that other provisioning tools can build off of it in a more opinionated way.</p>
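<p>For a feel of what a kubeadm-based non-prod cluster looks like, the flow is roughly the following. This is a sketch: exact flags, token handling and the pod network add-on depend on the kubeadm version and the CNI you pick.</p> <pre><code># on the master
kubeadm init --pod-network-cidr=10.244.0.0/16   # the CIDR flag is only needed by some network add-ons
kubectl apply -f &lt;your-network-addon.yaml&gt;      # e.g. a Weave, Flannel or Calico manifest

# on each worker node, with the token printed by `kubeadm init`
kubeadm join --token &lt;token&gt; &lt;master-ip&gt;:6443
</code></pre>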
<p>I am using Kubernetes to run a Docker service. This is a defective service that requires a restart everyday. For multiple reasons we can't programmatically solve the problem and just restarting the docker everyday will do. When I migrated to Kubernetes I noticed I can't do "docker restart [mydocker]" but as the docker is a deployment with reCreate strategy I just need to delete the pod to have Kubernetes create a new one.</p> <p>Can I automate this task of deleting the Pod, or an alternative one to restart it, using a CronTask in Kubernetes?</p> <p>Thanks for any directions/examples.</p> <p>Edit: My current deployment yml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: et-rest labels: app: et-rest spec: ports: - port: 9080 targetPort: 9080 nodePort: 30181 selector: app: et-rest tier: frontend type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: et-rest labels: app: et-rest spec: strategy: type: Recreate template: metadata: labels: app: et-rest tier: frontend spec: containers: - image: et-rest-image:1.0.21 name: et-rest ports: - containerPort: 9080 name: et-rest volumeMounts: - name: tz-config mountPath: /etc/localtime volumes: - name: tz-config hostPath: path: /usr/share/zoneinfo/Europe/Madrid </code></pre>
<p>You can use a scheduled job pod:</p> <p>A scheduled job pod has built-in cron behavior, making it possible to restart jobs; combined with the time-out behavior, this gives you the required behavior of restarting your app every X hours.</p> <pre><code>apiVersion: batch/v2alpha1
kind: ScheduledJob            # renamed CronJob in Kubernetes 1.5+
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400   # 24 hours; YAML does not evaluate 3600*24
      template:
        spec:
          containers:
          - name: yourapp
            image: yourimage
</code></pre>
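<p>Alternatively, if you prefer to keep the existing Deployment from the question untouched and simply delete its pod on a schedule (the Recreate strategy then brings up a fresh one), the scheduled job can run <code>kubectl</code> for you. This is only a sketch: the image is a placeholder for any image that bundles <code>kubectl</code>, the label selector is taken from the question's Deployment, and on an RBAC-enabled cluster the job's service account needs permission to delete pods.</p> <pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *"                # once a day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: &lt;an-image-with-kubectl&gt;   # placeholder
            command: ["kubectl", "delete", "pod", "-l", "app=et-rest,tier=frontend"]
</code></pre>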
<p>I'm trying to figure out how to run openvpn server running inside a pod using UDP protocol.</p> <p>Since it's easier to test using http than openvpn connections, I have also nginx-container running inside that same pod as openvpn-container is.</p> <p>I can get everything working inside the cluster but I cannot expose this nginx service to Internet using my elastic ip.</p> <p>Network is Weave.</p> <p>Kubernetes version is 1.6</p> <p>I have set the externalIPs-field in the service.yaml to my elastic ip address. I cannot use type LoadBalancer since my protocol is UDP.</p> <p>Service:</p> <pre><code># kubectl describe service openvpn Name: openvpn Namespace: default Labels: name=openvpn Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"openvpn"},"name":"openvpn","namespace":"default"},"spec":{"externalI... Selector: name=openvpn Type: NodePort IP: 100.71.93.74 External IPs: &lt;my_elastic_ip&gt; Port: openvpn 1194/UDP NodePort: openvpn 30726/UDP Endpoints: 100.120.0.1:1194 Port: http 80/TCP NodePort: http 30000/TCP Endpoints: 100.120.0.1:80 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Endpoints</p> <pre><code># kubectl get endpoints openvpn NAME ENDPOINTS AGE openvpn 100.120.0.1:80,100.120.0.1:1194 20h </code></pre> <p>I have followed through the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/</a> and everything works inside the cluster.</p> <p>I have allocated the elastic ip to this specific instance manually using AWS Console. All security groups are updated.</p> <p>So is it even possible to connect to a pod inside a kubernetes cluster using a elastic ip attached to that host node? If it is, how to do that?</p>
<p>Rather than use an IP address you could get K8S to use the type: <code>LoadBalancer</code> which will setup a Classic AWS ELB. From there you could CNAME a domain / sub-domain to the ELB and access the service that way?</p> <p>A quick example:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: MyApp labels: app: MyApp spec: type: LoadBalancer ports: - port: 80 targetPort: 80 name: MyApp selector: app: MyApp </code></pre>
<p>I'm trying to run different helm charts and I keep running into this error. It's much more cost effective for me to run 3-4 cheaper nodes than 1 or 2 very expensive nodes that can have more disks attached to them.</p> <p>Is there a way to configure kubernetes or helm to have a disk attach limit or to set the affinity of one deployment to a particular node?</p> <p>It's very frustrating that all the deployments try to attach to one node and then run out of disk attach quota.</p> <p>Here is the error:</p> <blockquote> <p>Service returned an error. Status=409 Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4."</p> </blockquote>
<blockquote> <p>Is there a way to configure kubernetes or helm to have a disk attach limit or to set the affinity of one deployment to a particular node?</p> </blockquote> <p>For now, ACS Kubernetes provisions PVCs on Azure managed disks or blob disks, so the limit is the number of data disks the VM size can attach.</p> <p>Azure does not support changing that per-size limit. The maximum number of data disks for each VM size can be found here: <a href="https://i.stack.imgur.com/VwkhY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VwkhY.png" alt="VM sizes and max data disks"></a></p> <p>For more information about the limits, please refer to this <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-memory" rel="nofollow noreferrer">link</a>.</p> <p>By the way, the maximum capacity of a single data disk is <strong>2TB</strong>, so another option is to use fewer, larger disks of up to 2TB each.</p>
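<p>For the affinity part of the question, you can pin a chart's Deployment to a particular node with a <code>nodeSelector</code> in the pod template (many charts expose a <code>nodeSelector</code> value for exactly this). A minimal sketch; the label name/value and image are made up for illustration:</p> <pre><code># label the target node once:
#   kubectl label nodes &lt;node-name&gt; disktier=large
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        disktier: large          # pods of this Deployment only schedule onto nodes with this label
      containers:
      - name: app
        image: my-image:latest   # placeholder
</code></pre>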
<p>I am considering BigQuery as my data warehouse requirement. Right now, I have my data in google cloud (cloud SQL and BigTable). I have exposed my REST APIs to retrieve data from both. Now, I would like to retrieve data from these APIs, do the ETL and load the data into BigQuery. I am evaluating 2 options of ETL (daily frequency of job for hourly data) right now:-</p> <ol> <li>Use JAVA Spring Batch and create microservice and use Kubernetes as deployment environment. Will it scale?</li> <li>Use Cloud DataFlow for ETL</li> </ol> <p>Then use BigQuery batch insert API (for initial load) and streaming insert API (for incremental load when new data available in source) to load BigQuery denormalized schema.</p> <p>Please let me know your opinions.</p>
<p>Without knowing your data volumes, specifically how much new or diff data you have per day and how you are doing paging with your REST APIs - here is my guidance...</p> <p>If you go down the path of using Spring Batch you are more than likely going to have to come up with your own sharding mechanism: how will you divide up REST calls to instantiate your Spring services? You will also be in the Kubernetes management space and will have to handle retries with the streaming API to BQ.</p> <p>If you go down the Dataflow route you will have to write some transform code to call your REST API and perform the paging to populate your PCollection destined for BQ. With the recent addition of Dataflow templates you could: create a pipeline that is triggered every N hours and parameterize your REST call(s) to just pull data ?since=latestCall. From there you could execute BigQuery writes. I recommend doing this in batch mode as 1) it will scale better if you have millions of rows and 2) it will be less cumbersome to manage (during non-active times).</p> <p>Since Cloud Dataflow has built-in retry logic for BigQuery and provides consistency across all input and output collections -- my vote is for Dataflow in this case.</p> <p>How big are your REST call results in record count?</p>
<p>I have a cluster configuration using Kubernetes on GCE, I have a pod for zookeeper and other for Kafka; it was working normally until Zookeeper get crashed and restarted, and it start refusing connections from the kafka pod:</p> <blockquote> <p>Refusing session request for client <code>/10.4.4.58:52260</code> as it has seen <code>zxid 0x1962630</code></p> </blockquote> <p>The complete refusal log is here:</p> <pre><code>2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /10.4.4.58:52260 2017-08-21 20:05:32,013 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@882] - Connection request from old client /10.4.4.58:52260; will be dropped if server is in r-o mode 2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@901] - Refusing session request for client /10.4.4.58:52260 as it has seen zxid 0x1962630 our last zxid is 0xab client must try another server 2017-08-21 20:05:32,013 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /10.4.4.58:52260 (no session established for client) </code></pre>
<p>Kafka maintains a ZooKeeper session which remembers the last zxid it has seen. When the ZooKeeper service goes down and comes back up, its zxid starts again from a smaller value, so the ZooKeeper server thinks the Kafka client has seen a bigger zxid than its own and refuses the connection.</p> <p>Try restarting Kafka.</p>
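<p>On Kubernetes, restarting Kafka usually just means deleting its pod and letting the controller recreate it, which establishes a fresh ZooKeeper session. A sketch (the pod name below is a placeholder, take it from <code>kubectl get pods</code>):</p> <pre><code>kubectl get pods                        # find the kafka pod name
kubectl delete pod &lt;kafka-pod-name&gt;     # the Deployment/StatefulSet recreates it
</code></pre>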
<p>So i created a cluster containing 4 machines using this command</p> <pre><code>gcloud container clusters create "[cluster-name]" \ --machine-type "n1-standard-1" \ --image-type "COS" \ --disk-size "100" \ --num-nodes "4" </code></pre> <p>and i can see that it's creating 4 VM instances inside my <a href="https://console.cloud.google.com/compute/instances" rel="nofollow noreferrer">compute engine</a>. I then setup deployments pointing to one or more entry(ies) in my container registry and services with a single service exposing a public ip</p> <p>all of this is working well, but it bothers me that all 4 VM instances it has created is having public ip(s), please do correct me if i am wrong, but to my understanding here's what happen behind the scene</p> <ol> <li>A container is created</li> <li>VM instances is created based on #1</li> <li>An instance group is created, with VM instances on #2 as members</li> <li>(Since i have one of the service exposing a public ip) a network load balancer is created pointing to the instance group on #3 or the VM instances on #2</li> </ol> <p>Looking at this, i don't think i need a public ip on each of the VM instances created for the cluster.</p> <p>I have been reading the <a href="https://cloud.google.com/container-engine/docs/clusters/operations" rel="nofollow noreferrer">documentation</a>(<a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create" rel="nofollow noreferrer">s</a>), although i think i might have missed something, but i can't seem to find the configuration arguments that will allow me to achieve this </p>
<p>Currently all GKE VMs get a public IP address, but they have firewall rules set up to block unauthorized network connections. Your Service or Ingress resources are still accessed through Load Balancer’s public IP address.</p> <p>As of writing there's no way to prevent cluster nodes from getting public IP addresses.</p>
<p>I am trying to deploy a Spring Boot application to Google Kubernetes (Google Container Engine).</p> <p>I have performed all the steps given in the link below.</p> <p><a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#0" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#0</a></p> <p>When I try to perform step 9, http://:8080 is not reachable in the browser.</p> <p>Yes, I got an external IP address, and I am able to ping that IP address.</p> <p>Let me know if any other information is required.</p> <p>The logs show that it is not able to connect to the database.</p> <p>Error: </p> <p><code>com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.</code></p>
<p><strong>I hope you have created cluster in google container engine</strong></p> <p>Follow the first 5 step given in this link </p> <p><a href="https://cloud.google.com/sql/docs/mysql/connect-container-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-container-engine</a></p> <p>change database configuration in your application</p> <p><code>hostname: 127.0.0.1 port: 3306 or your mysql port username: proxyuser</code></p> <p> should be same as link step - 3</p> <ol> <li>mvn package -Dmaven.test.skip=true</li> <li><p>Create File with name "Dockerfile" and below content</p> <p><code>FROM openjdk:8 COPY target/SpringBootWithDB-0.0.1-SNAPSHOT.jar /app.jar EXPOSE 8080/tcp ENTRYPOINT ["java", "-jar", "/app.jar"]</code></p></li> <li><p>docker build -t gcr.io//springbootdb-java:v1 .</p></li> <li><p>docker run -ti --rm -p 8080:8080 gcr.io//springbootdb-java:v1</p></li> <li><p>gcloud docker -- push gcr.io//springbootdb-java:v1</p> <p><strong>Follow the 6th step given in link and create yaml file</strong></p></li> <li><p>kubectl create -f cloudsql_deployment.yaml</p> <p><strong>run kubectl get deployment and copy name of deployment</strong> </p></li> <li><p>kubectl expose deployment --type=LoadBalancer</p></li> </ol> <hr> <p>My Yaml File</p> <p><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: conversationally spec: replicas: 1 template: metadata: labels: app: conversationally spec: containers: - image: gcr.io/&lt;project ID&gt;/springbootdb-java:v1 name: web env: - name: DB_HOST # Connect to the SQL proxy over the local network on a fixed port. # Change the [PORT] to the port number used by your database # (e.g. 3306). value: 127.0.0.1:3306 # These secrets are required to start the pod. # [START cloudsql_secrets] - name: DB_PASSWORD valueFrom: secretKeyRef: name: cloudsql-db-credentials key: password - name: DB_USER valueFrom: secretKeyRef: name: cloudsql-db-credentials key: username # [END cloudsql_secrets] ports: - containerPort: 8080 name: conv-cluster # Change [INSTANCE_CONNECTION_NAME] here to include your GCP # project, the region of your Cloud SQL instance and the name # of your Cloud SQL instance. The format is # $PROJECT:$REGION:$INSTANCE # Insert the port number used by your database. # [START proxy_container] - image: gcr.io/cloudsql-docker/gce-proxy:1.09 name: cloudsql-proxy command: ["/cloud_sql_proxy", "--dir=/cloudsql", "-instances=&lt;instance name&gt;=tcp:3306", "-credential_file=/secrets/cloudsql/credentials.json"] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true - name: ssl-certs mountPath: /etc/ssl/certs - name: cloudsql mountPath: /cloudsql # [END proxy_container] # [START volumes] volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials - name: ssl-certs hostPath: path: /etc/ssl/certs - name: cloudsql emptyDir: # [END volumes] </code></p> <p>===========</p>
<p>I have a angular application running in a docker ubuntu image that has nginx installed. I want to deploy this image to Kubernetes and use a nginx proxy to redirect all calls to /api to my backend service in Kubernetes.</p> <p>My static web resources lie in /var/www/html and I add the following config to /etc/nginx/conf.d:</p> <pre><code>upstream backend-service { server backend-service:8080; } server { listen 80; location / { try_files $uri $uri/ /index.html; } location ^~ /api { proxy_pass http://backend-service; } } </code></pre> <p>Accessing the frontend service on / or /#/dashboard returns the expected component of my Angular page, but <strong>a call to /api/v1/data only shows the default nginx 404 Not Found page</strong>.</p> <p>What do I need to modify to have my backend calls redirected to my backend?</p> <p><strong>I use nginx 1.10.3 on ubuntu 16.04</strong> and my frontend Dockerfile looks like this:</p> <pre><code>FROM ubuntu:16.04 # Install curl, nodejs and nginx RUN apt-get update &amp;&amp; \ apt-get install -y curl &amp;&amp; \ curl -sL https://deb.nodesource.com/setup_8.x | bash - &amp;&amp; \ apt-get install -y nodejs nginx &amp;&amp; \ rm -rf /var/lib/apt/lists/* # Create directory RUN mkdir -p /usr/src/app WORKDIR /usr/src/app # Copy and build rest of the app COPY . /usr/src/app RUN npm install RUN node_modules/@angular/cli/bin/ng build --prod RUN cp -a dist/. /var/www/html # Configure and start nginx COPY frontend.conf /etc/nginx/conf.d EXPOSE 80 CMD ["nginx", "-g", "daemon off;"] </code></pre> <p><strong>Edit: Information about backend-service</strong></p> <p>The backend service <strong>listens to get and post requests on /api/v1/data</strong> and is reachable in Kubernetes via a Service named backend-service.</p> <p><strong>Edit2: Nginx access.log</strong></p> <p><a href="https://gist.github.com/Steffen911/a56e3175bf12e511048d01359a475724" rel="noreferrer">https://gist.github.com/Steffen911/a56e3175bf12e511048d01359a475724</a></p> <pre><code>172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET / HTTP/1.1" 200 380 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /styles.d41d8cd98f00b204e980.bundle.css HTTP/1.1" 200 0 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /inline.c9a1a6b995c65c13f605.bundle.js HTTP/1.1" 200 1447 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /polyfills.117078cae3e3d00fc376.bundle.js HTTP/1.1" 200 97253 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /main.3e9a37b4dd0f3bf2465f.bundle.js HTTP/1.1" 200 64481 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /vendor.146173c1a99cc2172a5f.bundle.js HTTP/1.1" 200 661261 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET 
/api/v1/data/ HTTP/1.1" 404 209 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /assets/home.jpg HTTP/1.1" 200 2608 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /assets/busy.gif HTTP/1.1" 200 48552 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /assets/background_light.png HTTP/1.1" 200 170599 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /assets/google.svg HTTP/1.1" 200 2232 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /assets/email.svg HTTP/1.1" 200 1596 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:40 +0000] "GET /favicon.ico HTTP/1.1" 200 198 "http://192.168.99.100:30497/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" 172.17.0.1 - - [13/Aug/2017:13:11:44 +0000] "GET /api/v1/data/ HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36" </code></pre> <p>The error.log file is empty.</p> <p><strong>Edit3: New nginx version and other SO thread</strong></p> <p>I also tried nginx 1.12.1 and it shows the same behaviour. The answers to this question also haven't helped: <a href="https://stackoverflow.com/questions/16157893/nginx-proxy-pass-404-error-dont-understand-why">nginx proxy_pass 404 error, don&#39;t understand why</a></p> <p><strong>Edit4: I uploaded a minimal example that reproduces my problem on GitHub</strong></p> <p><a href="https://github.com/Steffen911/nginx-sample" rel="noreferrer">https://github.com/Steffen911/nginx-sample</a></p>
<p>From your troubleshooting of nginx, it appears that the nginx configuration file you have has effectively no effect — you report getting <code>404 Not Found</code> errors for everything other than the index page, with all the directives, from <code>try_files</code> in <code>location /</code>, to <code>proxy_pass</code> and <code>return 200 test</code> in a more specific <code>location ^~ /api</code>, having no effect.</p> <p>As such, the problem appears to be in the <code>Dockerfile</code> — it appears that most other NGINX + Docker tutorials remove default configurations (e.g., with <code>RUN rm /etc/nginx/conf.d/default.conf</code>), whereas your file is missing any such removal.</p> <p>In fact, <a href="https://packages.debian.org/sid/all/nginx-common/filelist" rel="noreferrer">Debian</a>/<a href="https://packages.ubuntu.com/xenial/all/nginx-common/filelist" rel="noreferrer">Ubuntu</a> appear to have the non-standard directories of questionable utility called <code>/etc/nginx/sites-available</code> and <a href="https://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/nginx.conf#n62" rel="noreferrer"><code>/etc/nginx/sites-enabled</code></a>, which, by default, must contain a <a href="https://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/sites-available/default" rel="noreferrer"><code>default</code></a> file with a presumptuous <code>listen 80 default_server</code>, effectively taking precedence over any other <a href="http://nginx.org/r/listen" rel="noreferrer"><code>listen</code></a> of the same port in the absence of a more specific <code>server_name</code>.</p> <hr> <p><strong><em>As such, there are multiple independent solutions:</em></strong></p> <hr> <ul> <li><p><strong><em>Do not use fundamentally broken packages like those offered by Debian/Ubuntu.</em></strong> I once spent a good amount of time pulling my hair trying to figure out why my configs don't work, only to notice that even the backup files from <a href="http://ports.su/editors/emacs" rel="noreferrer"><code>emacs</code></a> like <code>test.conf~</code> get included in Debian through Debian's default <code>include /etc/nginx/sites-enabled/*;</code>. 
<strong><em>Sites-Enabled Is Evil.</em></strong></p> <p>Note that <a href="http://nginx.org/en/linux_packages.html" rel="noreferrer">NGINX provides official binary packages for most distributions</a>, which wouldn't have had this issue, as it doesn't try to define a <code>default_server</code> in its <code>/etc/nginx/conf.d/default.conf</code>, instead doing a <code>listen 80;</code> with <code>server_name localhost;</code>, getting out of your way automatically by itself in most circumstances.</p> <p>E.g., replace <code>FROM ubuntu:16.04</code> with <code>FROM nginx</code> in your <code>Dockerfile</code> to be using NGINX official image.</p></li> </ul> <hr> <ul> <li><strong><em>If still using nginx from Debian/Ubuntu</em></strong>, make sure to <strong><code>RUN rm /etc/nginx/sites-enabled/default</code></strong> in your <strong><code>Dockerfile</code></strong> to remove the <code>default_server</code> <code>listen</code>.</li> </ul> <hr> <ul> <li><p><strong><em>Use the <a href="http://nginx.org/r/server_name" rel="noreferrer"><code>server_name</code></a> directive to define hostname-based servers</em></strong>, possibly together with the <a href="http://nginx.org/r/listen" rel="noreferrer"><code>listen</code></a> directive with the <code>default_server</code> parameter, too.</p> <p>Note that a duplicate <code>server_name</code> specification results in a configuration warning (with the <code>[warn]</code> severity), but a duplicate <code>default_server</code> is a configuration error (<code>[emerg]</code> severity), which might help troubleshoot the issue earlier.</p></li> </ul>
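<p>Applied to the <code>Dockerfile</code> in the question, the second option is a single extra line just before copying your own config; a sketch of only the relevant part:</p> <pre><code># Configure and start nginx
RUN rm /etc/nginx/sites-enabled/default   # drop Debian/Ubuntu's default_server catch-all
COPY frontend.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>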
<p>I use Kubernetes 1.6 and Docker to deploy instances/pods of a microservice.</p> <p>I have a service that needs to regularly pull continuously updated data from an external repository. This update can be triggered manually at runtime, but the service is unusable during that time. Furthermore, the up-to-date data is always retrieved on startup, so that the freshly started service instance has the most recent external data.</p> <p>Therefore, I would like to automatically create a fresh pod every hour (or some other frequency), and then kill the old pod.</p> <p>Conceptually, it seems like I should just configure a maximum lifetime per pod in the deployment, so that Kubernetes starts a new instance/pod and kills the old one if the maximum lifetime has expired, while making sure that there is always at least one pod running. However, <a href="https://github.com/kubernetes/kubernetes/issues/829" rel="noreferrer">Kubernetes does not seem to provide a maximum pod lifetime</a>.</p> <p>Also, due to the data update during startup, starting the pod takes 1-2 minutes before it gets ready.</p>
<p>This was meant to be a comment but became an answer. I am posting it as an answer so that the approach is easier to read.</p> <p>So I have a possible approach that might work for you. You run a global download pod which downloads the files into a specific folder. Let's assume the download happens every hour. You create a folder like <code>22-08-2017-20-00</code> and a file called <code>latest</code>. The content of this <code>latest</code> file will be <code>22-08-2017-20-00</code>.</p> <p>When the downloader fetches a new update, it creates a new folder and downloads the data into it. Once the data is downloaded, it changes the content of the <code>latest</code> file to that folder name.</p> <p>Your main app pods refer to this host volume, read the file content and use that folder to start the data processing.</p> <p>Now you should run a few replicas. If you set up a cron and restart the pods, they will boot fast (no data download) and pick up the latest data. You can trigger a rolling update by changing a fake parameter with no impact.</p> <p>Or you can also set your pods to fail after one hour. How to do that? Make sure your image has the timeout command:</p> <pre><code>$ time timeout 2 sleep 12

real    0m2.002s
user    0m0.000s
sys     0m0.000s
</code></pre> <p>You don't want all pods to fail at the same time, so you can generate a random number between 50 and 70 minutes and let each pod fail at a different time, to be restarted automatically by Kubernetes.</p> <p>See if the approach makes sense.</p>
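<p>To make the random-timeout idea concrete, it can be done in the container's command. A sketch, assuming bash is available in the image (for <code>$RANDOM</code>) and <code>./your-app</code> stands in for the real start command:</p> <pre><code>command: ["bash", "-c"]
args:
- |
  # pick a timeout between 50 and 70 minutes (3000-4200 s) so replicas don't all die together
  TIMEOUT=$(( 3000 + RANDOM % 1200 ))
  exec timeout "$TIMEOUT" ./your-app
</code></pre>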
<p>I am running a sample spark job in kubernetes cluster with following command:</p> <pre><code>bin/spark-submit \ --deploy-mode cluster \ --class org.apache.spark.examples.SparkPi \ --master k8s://https://XXXXX \ --kubernetes-namespace sidartha-spark-cluster \ --conf spark.executor.instances=2 \ --conf spark.app.name=spark-pi \ --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.1.0-rc1 \ --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.1.0-rc1 \ examples/jars/spark-examples_2.11-2.1.0-k8s-0.1.0-SNAPSHOT.jar 1000 </code></pre> <p>I am building the spark from <a href="https://github.com/apache-spark-on-k8s/spark.git" rel="nofollow noreferrer">apache-spark-on-k8s</a></p> <p>I am not able find the jar for <code>org.apache.spark.deploy.kubernetes.submit.Client</code> Class.</p>
<p>This issue is resolved. We need to build the spark/resource-manager/kubernetes from the source.</p>
<p>I would like to upgrade Kubernetes from 1.5 to a recent version (I installed it with yum install kubernetes-master kubernetes-client ...), but I don't understand how to achieve it.</p> <p>Are there yum repos to upgrade Kubernetes?</p> <p>Thanks in advance!</p>
<p>As far as I know there is no official (by official I mean supported by the Kubernetes team) way of installing Kubernetes on any Linux distro using system packages.</p> <p>CentOS provides packages for the old 1.5 version in its extras repository, which are the packages you currently have installed.</p> <p>I think your best choice is to install another cluster using an officially supported way, such as <strong><a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a></strong> (if you are on AWS) or <strong><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a></strong>, and migrate your services to the new cluster. This way you have a guarantee that you'll be able to upgrade to future releases.</p> <p>Please note that kubeadm is not production ready.</p> <p>If you want to proceed this way, you can find a list of installation methods on this page: <a href="https://kubernetes.io/docs/setup/pick-right-solution/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/pick-right-solution/</a></p>
<p>I want to spin up a single <strong>installer pod</strong> with <code>helm install</code> that once running, will apply some logic and install other applications into my cluster using <code>helm install</code>. </p> <p>I'm aware of the helm dependencies, but I want to run some business logic with the installations and I'd rather do it in the <strong>installer pod</strong> and on the host triggering the whole installation process.</p> <p>I found suggestions on using the Kubernetes REST API when inside a pod, but <code>helm</code> requires <code>kubectl</code> installed and configured.</p> <p>Any ideas?</p>
<p>It seems this was a <strong>lot</strong> easier than I thought...</p> <p>On a simple pod running Debian, I just <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">installed <strong>kubectl</strong></a>, and with the default service account's secret that's already mounted, the <strong>kubectl</strong> was already configured to the cluster's API.</p> <p>Note that the configured default namespace is the one that my installer pod is deployed to.</p> <p>Verified with</p> <pre><code>$ kubectl cluster-info $ kubectl get ns </code></pre> <p>I then <a href="https://docs.helm.sh/using_helm/#installing-helm" rel="noreferrer">installed <strong>helm</strong></a>, which was already using the <strong>kubectl</strong> to access the cluster for installing <strong>tiller</strong>.</p> <p>Verified with</p> <pre><code>$ helm version $ helm init </code></pre> <p>I installed a test chart</p> <pre><code>$ helm install --name my-release stable/wordpress </code></pre> <p><strong>It works!!</strong></p> <p>I hope this helps</p>
<p>I am trying to install Kubernetes on Red Hat Linux (RHEL 7) . Any advice on the best and easiest way to do this ? I would not like to use minikube. Thank you very much</p>
<p><code>kubeadm</code> is the way to go for installing kubernetes on RHEL. Though in alpha, it works for most of the use cases.</p> <p>You can find the installation instructions on the <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">kubeadm installation page</a> and steps to use it on the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">using kubeadm page</a>.</p>
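<p>For reference, the steps from the linked install page boil down to roughly the following on RHEL 7 (a sketch; check the page for the current repo definition and package versions):</p> <pre><code>cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

setenforce 0                                   # the install page asks for SELinux in permissive mode
yum install -y docker kubelet kubeadm kubectl
systemctl enable docker kubelet &amp;&amp; systemctl start docker kubelet

kubeadm init                                   # on the master; then install a pod network add-on
</code></pre>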
<p>Just started with kubernetes. I have three physical machines: 1 master, 2 nodes. I did basically getting started configuration. Everything seems up and running, nodes can communicate with master, but when I try to install a sample application (<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">see in following kubernetes guide</a>) I get this warning over and over again:</p> <blockquote> <p><code>kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.</code></p> </blockquote> <p>And I can't access the app in: master_ip:30001</p> <p><strong>Any idea in what is going on and how to fix it?</strong></p> <h2>Configuration</h2> <p>Here is the configuration:</p> <pre><code>$ kubectl get nodes NAME STATUS AGE VERSION master-precision-t1600 Ready 19h v1.7.4 node2-precision-t1600 Ready 19h v1.7.4 $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-master-precision-t1600 1/1 Running 1 19h kube-system kube-apiserver-master-precision-t1600 1/1 Running 1 19h kube-system kube-controller-manager-master-precision-t1600 1/1 Running 1 19h kube-system kube-dns-2425271678-xrlp4 3/3 Running 3 19h kube-system kube-proxy-58qm6 1/1 Running 1 19h kube-system kube-proxy-tjskl 1/1 Running 1 19h kube-system kube-scheduler-master-precision-t1600 1/1 Running 1 19h kube-system kubernetes-dashboard-3313488171-7n56j 1/1 Running 0 38m kube-system weave-net-1hjxl 2/2 Running 2 19h kube-system weave-net-lwk8r 2/2 Running 2 19h sock-shop carts-2469883122-h8f4n 1/1 Running 0 1h sock-shop carts-db-1721187500-pkpk0 1/1 Running 0 1h sock-shop catalogue-4293036822-hpkgp 1/1 Running 0 1h sock-shop catalogue-db-1846494424-xlb8m 1/1 Running 0 1h sock-shop front-end-2337481689-s8bkm 1/1 Running 0 1h sock-shop orders-733484335-n7h4c 1/1 Running 0 1h sock-shop orders-db-3728196820-12rt8 1/1 Running 0 1h sock-shop payment-3050936124-kwqfs 1/1 Running 0 1h sock-shop queue-master-2067646375-n8sgj 1/1 Running 0 1h sock-shop rabbitmq-241640118-dqh6p 1/1 Running 0 1h sock-shop shipping-2463450563-g01sw 1/1 Running 0 1h sock-shop user-1574605338-kwqmp 1/1 Running 0 1h sock-shop user-db-3152184577-w3f39 1/1 Running 0 1h $ kubectl describe nodes Name: master-precision-t1600 Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=master-precision-t1600 node-role.kubernetes.io/master= Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true Taints: node-role.kubernetes.io/master:NoSchedule CreationTimestamp: Tue, 22 Aug 2017 17:05:06 +0200 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 23 Aug 2017 12:26:45 +0200 Tue, 22 Aug 2017 17:05:02 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 23 Aug 2017 12:26:45 +0200 Tue, 22 Aug 2017 17:05:02 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 23 Aug 2017 12:26:45 +0200 Tue, 22 Aug 2017 17:05:02 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure Ready True Wed, 23 Aug 2017 12:26:45 +0200 Tue, 22 Aug 2017 17:05:02 +0200 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: xxx.xxx.xxx.215 Hostname: master-precision-t1600 Capacity: alpha.kubernetes.io/nvidia-gpu: 0 cpu: 8 memory: 8127968Ki pods: 110 Allocatable: alpha.kubernetes.io/nvidia-gpu: 0 cpu: 8 memory: 8025568Ki pods: 110 System Info: Machine ID: d718aa59fbe54581a9b058eb453ca453 System UUID: 4C4C4544-005A-4410-805A-C4C04F32354A Boot ID: 687c603a-aad9-477a-a398-dfffeeaa4cd0 Kernel Version: 4.10.0-32-generic OS Image: Ubuntu 16.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://1.11.2 Kubelet Version: v1.7.4 Kube-Proxy Version: v1.7.4 ExternalID: master-precision-t1600 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system etcd-master-precision-t1600 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-apiserver-master-precision-t1600 250m (3%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-controller-manager-master-precision-t1600 200m (2%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-dns-2425271678-xrlp4 260m (3%) 0 (0%) 110Mi (1%) 170Mi (2%) kube-system kube-proxy-58qm6 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-scheduler-master-precision-t1600 100m (1%) 0 (0%) 0 (0%) 0 (0%) kube-system kubernetes-dashboard-3313488171-7n56j 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system weave-net-1hjxl 20m (0%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 830m (10%) 0 (0%) 110Mi (1%) 170Mi (2%) Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 39m 30s 36 kubelet, master-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "kubernetes-dashboard-3313488171-7n56j_kube-system(1ed597d4-87e8-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. Name: node2-precision-t1600 Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=node2-precision-t1600 Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Tue, 22 Aug 2017 17:10:43 +0200 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 23 Aug 2017 12:26:49 +0200 Wed, 23 Aug 2017 11:42:43 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 23 Aug 2017 12:26:49 +0200 Wed, 23 Aug 2017 11:42:43 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 23 Aug 2017 12:26:49 +0200 Wed, 23 Aug 2017 11:42:43 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure Ready True Wed, 23 Aug 2017 12:26:49 +0200 Wed, 23 Aug 2017 11:42:43 +0200 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 129.241.110.167 Hostname: node2-precision-t1600 Capacity: alpha.kubernetes.io/nvidia-gpu: 1 cpu: 8 memory: 8127968Ki pods: 110 Allocatable: alpha.kubernetes.io/nvidia-gpu: 1 cpu: 8 memory: 8025568Ki pods: 110 System Info: Machine ID: d701c70173f547168978ca276552bb88 System UUID: 4C4C4544-005A-4410-805A-B5C04F32354A Boot ID: 827de455-66cb-481d-a362-557a17db11f4 Kernel Version: 4.10.0-32-generic OS Image: Ubuntu 16.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://1.11.2 Kubelet Version: v1.7.4 Kube-Proxy Version: v1.7.4 ExternalID: node2-precision-t1600 Non-terminated Pods: (15 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system kube-proxy-tjskl 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system weave-net-lwk8r 20m (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop carts-2469883122-h8f4n 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop carts-db-1721187500-pkpk0 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop catalogue-4293036822-hpkgp 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop catalogue-db-1846494424-xlb8m 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop front-end-2337481689-s8bkm 100m (1%) 0 (0%) 100Mi (1%) 0 (0%) sock-shop orders-733484335-n7h4c 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop orders-db-3728196820-12rt8 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop payment-3050936124-kwqfs 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop queue-master-2067646375-n8sgj 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop rabbitmq-241640118-dqh6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop shipping-2463450563-g01sw 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop user-1574605338-kwqmp 0 (0%) 0 (0%) 0 (0%) 0 (0%) sock-shop user-db-3152184577-w3f39 0 (0%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 120m (1%) 0 (0%) 100Mi (1%) 0 (0%) Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 44m 44m 1 kubelet, node2-precision-t1600 Normal NodeReady Node node2-precision-t1600 status is now: NodeReady 44m 44m 3 kubelet, node2-precision-t1600 Normal NodeHasSufficientDisk Node node2-precision-t1600 status is now: NodeHasSufficientDisk 44m 44m 3 kubelet, node2-precision-t1600 Normal NodeHasSufficientMemory Node node2-precision-t1600 status is now: NodeHasSufficientMemory 44m 44m 3 kubelet, node2-precision-t1600 Normal NodeHasNoDiskPressure Node node2-precision-t1600 status is now: NodeHasNoDiskPressure 44m 44m 1 kubelet, node2-precision-t1600 Normal NodeAllocatableEnforced Updated Node Allocatable limit across pods 44m 44m 1 kubelet, node2-precision-t1600 Warning Rebooted Node node2-precision-t1600 has been rebooted, boot id: 827de455-66cb-481d-a362-557a17db11f4 44m 44m 1 kubelet, node2-precision-t1600 Normal Starting Starting kubelet. 44m 44m 1 kube-proxy, node2-precision-t1600 Normal Starting Starting kube-proxy. 43m 11m 9 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "rabbitmq-241640118-dqh6p_sock-shop(79e3bb08-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 5m 24 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. 
pod: "orders-db-3728196820-12rt8_sock-shop(79ca1e21-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 2m 22 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "payment-3050936124-kwqfs_sock-shop(79cb96f4-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 2m 28 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "shipping-2463450563-g01sw_sock-shop(79fa9dd4-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 43m 2m 22 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "carts-2469883122-h8f4n_sock-shop(79bbf964-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 1m 30 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "user-db-3152184577-w3f39_sock-shop(7a303582-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 1m 16 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "user-1574605338-kwqmp_sock-shop(7a11a937-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 1m 20 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "catalogue-db-1846494424-xlb8m_sock-shop(79c24789-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 54s 26 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "queue-master-2067646375-n8sgj_sock-shop(79d46bb2-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 45s 30 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "front-end-2337481689-s8bkm_sock-shop(79c49a6c-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 44m 24s 11 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "carts-db-1721187500-pkpk0_sock-shop(79bd1f99-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 43m 11s 25 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "catalogue-4293036822-hpkgp_sock-shop(79bf628c-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 43m 3s 25 kubelet, node2-precision-t1600 Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "orders-733484335-n7h4c_sock-shop(79c6f31c-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 43m 0s 228 kubelet, node2-precision-t1600 Warning MissingClusterDNS (combined from similar events): kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "payment-3050936124-kwqfs_sock-shop(79cb96f4-87e4-11e7-ab01-782bcba630bb)". Falling back to DNSDefault policy. 
</code></pre> <h3>Edit 1:</h3> <p>As Mike told me in his answer, I got the DNS IP with <code>kubectl get services --namespace=kube-system</code>. Things I've tried so far to change the command-line flags:</p> <ol> <li>Adding it to the ExecStart in kubeadm.conf as <code>--cluster-dns 10.96.0.10</code></li> <li>Executing <code>kubelet --cluster-dns 10.96.0.10</code></li> </ol> <p>Both without results.</p>
<p><strong>The kubelet service needs a command-line flag to set the cluster DNS IP</strong> - it looks like you're running kube-dns, so you can get that IP by either running <code>kubectl get services --namespace=kube-system</code> or grabbing the IP from the "ClusterIP" field on the kube-dns service YAML or JSON config.</p> <p>Once you have the IP, you'll have to set the <code>--cluster-dns</code> command-line flag for kubelet.</p> <p>I haven't used kubeadm to setup a cluster, so I'm not sure how it runs the services and can't say how to change the command-line flags - hopefully somebody who knows can provide input for that piece.</p>
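<p>On a kubeadm-provisioned node the kubelet flags usually live in a systemd drop-in, so the change looks roughly like this (the file path and variable name are what kubeadm used around v1.6/1.7; verify them on your nodes). Note the <code>daemon-reload</code>: editing the unit file without it has no effect, which may be why the earlier attempts in the question's edit didn't work.</p> <pre><code># /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"

# then pick up the change:
systemctl daemon-reload
systemctl restart kubelet
</code></pre>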
<p>I'm attempting to update from Traefik v1.2.3 to v1.3.6 on Kubernetes. I have my TLS certificates mounted inside of the pods from secrets. Under v1.2.3, everything works as expected. When I try to apply my v1.3.6 deployment (only change being the new docker image), the pods fail to start with the following message:</p> <pre><code>time="2017-08-22T20:27:44Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input" time="2017-08-22T20:27:44Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input" </code></pre> <p>Below is my traefik.toml file:</p> <pre><code>defaultEntryPoints = ["http","https"] [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] entryPoint = "https" [entryPoints.https] address = ":443" [entryPoints.https.tls] [[entryPoints.https.tls.certificates]] CertFile = "/ssl/wildcard.foo.mydomain.com.crt" KeyFile = "/ssl/wildcard.foo.mydomain.com.key" [[entryPoints.https.tls.certificates]] CertFile = "/ssl/wildcard.mydomain.com.crt" KeyFile = "/ssl/wildcard.mydomain.com.key" [[entryPoints.https.tls.certificates]] CertFile = "/ssl/wildcard.local.crt" KeyFile = "/ssl/wildcard.local.key" [kubernetes] labelselector = "expose=internal" </code></pre> <p>My initial impression of the errors produced by the pods are that the keys in the secret are not valid. However, I am able to base64 decode the contents of the secret and see that the values match those of the certificate files I have stored locally. Additionally, I would expect to see this error on any version of Traefik if these were in fact, invalid. In reviewing the change log for Traefik, I see that the SSL library was updated but the related PR indicates that this only added ciphers and did not remove any previously supported.</p> <p><strong>:Edit w/ additional info:</strong></p> <p>Running with <code>--logLevel=DEBUG</code> provides this additional information (provided in full below in case it's helpful):</p> <pre><code>[cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=debug msg="Global configuration loaded 
{"GraceTimeOut":10000000000,"Debug":false,"CheckNewVersion":true,"AccessLogsFile":"","TraefikLogsFile":"","LogLevel":"DEBUG","EntryPoints":{"http":{"Network":"","Address":":80","TLS":null,"Redirect":{"EntryPoint":"https","Regex":"","Replacement":""},"Auth":null,"Compress":false},"https":{"Network":"","Address":":443","TLS":{"MinVersion":"","CipherSuites":null,"Certificates":[{"CertFile":"/ssl/wildcard.foo.mydomain.com.crt","KeyFile":"/ssl/wildcard.foo.mydomain.com.key"},{"CertFile":"/ssl/wildcard.mydomain.com.crt","KeyFile":"/ssl/wildcard.mydomain.com.key"},{"CertFile":"/ssl/wildcard.local.crt","KeyFile":"/ssl/wildcard.local.key"}],"ClientCAFiles":null},"Redirect":null,"Auth":null,"Compress":false}},"Cluster":null,"Constraints":[],"ACME":null,"DefaultEntryPoints":["http","https"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":180000000000,"InsecureSkipVerify":false,"Retry":null,"HealthCheck":{"Interval":30000000000},"Docker":null,"File":null,"Web":{"Address":":8080","CertFile":"","KeyFile":"","ReadOnly":false,"Statistics":null,"Metrics":{"Prometheus":{"Buckets":[0.1,0.3,1.2,5]}},"Path":"","Auth":null},"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":{"Watch":true,"Filename":"","Constraints":[],"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"Namespaces":null,"LabelSelector":"expose=internal"},"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null}" [cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=info msg="Preparing server https &amp;{Network: Address::443 TLS:0xc42060d800 Redirect:&lt;nil&gt; Auth:&lt;nil&gt; Compress:false}" [cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=error msg="Error creating TLS config: tls: failed to find any PEM data in key input" [cluster-traefik-2693375319-w67hf] time="2017-08-22T21:41:19Z" level=fatal msg="Error preparing server: tls: failed to find any PEM data in key input" </code></pre>
<p>This issue turned out to be new <a href="https://github.com/golang/go/blob/release-branch.go1.8/src/encoding/pem/pem.go#L138" rel="nofollow noreferrer">validation logic in the crypto/tls library in Go 1.8</a>. They are now validating the certificate blocks end in <code>-----</code> where as before they did not. The private key for one of my certificate files ended in <code>----</code> (missing a hyphen). Adding the missing character fixed this issue.</p>
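<p>A quick way to catch this locally, before the key goes into the secret, is to let openssl parse the PEM and to eyeball the footer line (assuming an RSA key):</p> <pre><code>openssl rsa -in wildcard.local.key -check -noout   # prints "RSA key ok" for well-formed PEM
tail -n 1 wildcard.local.key                       # footer must end with five hyphens, e.g. -----END RSA PRIVATE KEY-----
</code></pre>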
<p>I see this error in <code>kubectl describe podName</code>:</p> <pre><code> 9m 2s 118 kubelet, gke-wordpress-default-pool-2e82c1f4-0zpw spec.containers{nginx} Warning Unhealthy Readiness probe failed: Get http://10.24.0.27:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>The container logs (nginx) is of the following: </p> <pre><code>10.24.1.1 - - [22/Aug/2017:11:09:51 +0000] "GET / HTTP/1.1" 499 0 "-" "Go-http-client/1.1" </code></pre> <p>However if I exec into the container via <code>kubectl exec -it podName -c nginx sh</code>, and do a wget <code>http://localhost</code> I am able to succesfully get a HTTP 200 response. As well as if I SSH into the host (GCP compute instance), I'm able to successfully get a HTTP 200 response. </p> <p>I believe this issue occurred shortly after I replace a LoadBalancer service with a NodePort service. I wonder if it's some port conflict?</p> <p>The service in question: <strong>wordpress-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: wordpress name: wordpress spec: type: NodePort ports: - port: 80 targetPort: 80 protocol: TCP selector: app: wordpress </code></pre> <p>The container is a Nginx container serving content on port 80.</p> <p>What might be the cause of the readiness probe failing?</p> <hr> <p>If I remove the readiness probe in my config:</p> <pre><code> readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>Everything works fine, the pods are able to be accessed via a LoadBalancer service.</p>
<p>ARGHUGIHIRHHHHHH.</p> <p>I've been staring at this error for a day at least and for some reason I didn't comprehend it.</p> <p>Essentially the error <code>net/http: request canceled (Client.Timeout exceeded while awaiting headers)</code> means the container took longer than the probe's timeout period (default of 1s) to respond.</p>
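<p>If the container legitimately needs more than a second to answer, you can raise the probe's timeout instead of removing the probe entirely. A sketch based on the probe from the question:</p> <pre><code>readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 5      # default is 1s, which is the limit the error message refers to
  failureThreshold: 3
</code></pre>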
<p>I'm currently looking through an Istio and Kubernetes talk and mention the management of services along with the use of sidecars. I'm not sure what that is.</p>
<p>I think of them as helper containers. A pod can have one or more containers. A container should do only one thing, like a web server or a load balancer. So if you need some extra work to be done inside the pod, like git sync or data processing, you create an additional container, AKA a sidecar.</p>
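<p>In pod terms, a sidecar is just another entry in the <code>containers</code> list, sharing the pod's network namespace and volumes with the main container. A minimal sketch (the sidecar image is a placeholder):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: content
    emptyDir: {}
  containers:
  - name: web                       # main container: serves the content
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: sync                      # sidecar: keeps the content up to date
    image: &lt;your-sync-image&gt;        # placeholder
    volumeMounts:
    - name: content
      mountPath: /data
</code></pre>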
<p>I'm trying to route incoming traffic into specific pods inside Kubernetes, as suggested here: <a href="https://stackoverflow.com/questions/45606977/is-it-possible-to-specific-custom-rules-for-running-new-containers-in-kubernetes">Is it possible to specific custom rules for running new containers in Kubernetes/Docker?</a></p> <p>I tried to use Ingress. Unfortunately it seems to work only with HTTP, and I need to route incoming UDP traffic.</p> <p>Using a config map I can't map specific URLs to specific services.</p> <p>Any ideas on how to handle this?</p>
<p>Ingress is for HTTP traffic so you're right to say that it cannot meet your needs.</p> <p>The best way to do this is to use a Service. A Service performs automatic Layer 3 load-balancing across the Pods tagged to it. It will look something like this:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ntp-service spec: selector: app: ntp ports: - protocol: UDP port: 123 targetPort: 123 </code></pre> <p>The disadvantage to this method is that every worker node has to dedicate a port (<code>123</code> in the above example) to the Service.</p>
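<p>If the UDP clients sit outside the cluster, a NodePort variant of the same Service is one way to expose it. A hedged sketch, reusing the ntp example above (the <code>nodePort</code> value is just an illustration and must fall in the cluster's NodePort range, 30000-32767 by default):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: ntp-service
spec:
  type: NodePort
  selector:
    app: ntp
  ports:
  - protocol: UDP
    port: 123
    targetPort: 123
    nodePort: 30123   # external clients send UDP to &lt;any-node-ip&gt;:30123
</code></pre>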
<p>I am working on securing my Kubernetes cluster with a TLS connection configured in the ingress rule, which essentially terminates the SSL connection at the load balancer. So far so good.</p>
<p>A question came up about whether it would make sense to secure the connection from the load balancer to each of the services running in the Kubernetes cluster. My understanding of how Kubernetes works is that services should be able to go up and come down dynamically, with no guarantee that the private IPs remain unchanged, so it does not make sense to try to secure the services with TLS connections. Also, since none of the services are exposed to the public internet directly (my configuration uses a single ingress rule, and routing rules with Istio take care of the routing to the different services), the security is provided at the networking layer.</p>
<p>Is there anything conceptually wrong with my reasoning? Also, are there other mechanisms I should be looking at if I want to improve the security setup of my cluster? Istio Auth is not right for my use case, as I do not have services calling other services at all - none of my services interact with one another.</p>
<p>By <code>service</code> I presume you refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">kubernetes Service primitive</a>.</p> <p>Services are not supposed to go up and down dynamically. What you refer to is the Pod which is ephemeral in nature. To make a Pod "more permanent", a Service is tagged to it. When Pods come and go, kubernetes updates <code>iptables</code> rules to route traffic to the live Pods.</p> <p>Traffic encryption within the cluster can be achieved by encrypting the traffic between the app and the Ingress (Layer 7), or on the cluster network overlay (Layer 3). See <a href="https://stackoverflow.com/questions/45453187/how-to-configure-kubernetes-to-encrypt-the-traffic-between-nodes-and-pods/45481150#45481150">this page</a> for more info.</p>
<p>I'm looking for a way to set <code>service/status/loadBalancer/ingress</code> IP after creating a k8s service of type=LoadBalancer (as described in the 'Type LoadBalancer' section of <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a>).</p>
<p>My problem is similar to the issue described in the following link (<a href="https://stackoverflow.com/questions/43339050/is-it-possible-to-update-a-kubernetes-service-external-ip-while-watching-for-t">Is it possible to update a kubernetes service &#39;External IP&#39; while watching for the service?</a>), but I couldn't find the answer there.</p>
<p>Thanks in advance</p>
<p>There's two ways to do this. With a <a href="http://jsonpatchjs.com/" rel="nofollow noreferrer">json patch</a> or with a merge patch. Here's how you do the latter:</p> <pre><code>[centos@ost-controller ~]$ cat patch.json { "status": { "loadBalancer": { "ingress": [ {"ip": "8.3.2.1"} ] } } } </code></pre> <p>Now, here you can see the for merge patches, you have to make a dictionary containing all the Object tree (begins at status) that will need some change to be merged. If you wanted to replace something, then you'd have to use the json patch strategy.</p> <p>Once we have this file we send the request and if all goes well, we'll receive a response consisting on the object with the merge already applied:</p> <pre><code>[centos@ost-controller ~]$ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/merge-patch+json" http://localhost:8080/api/v1/namespaces/default/services/kubernetes/status{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes/status", "uid": "b8ece320-76c1-11e7-b468-fa163ea3fb09", "resourceVersion": "2142242", "creationTimestamp": "2017-08-01T14:00:06Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "name": "https", "protocol": "TCP", "port": 443, "targetPort": 6443 } ], "clusterIP": "10.0.0.129", "type": "ClusterIP", "sessionAffinity": "ClientIP" }, "status": { "loadBalancer": { "ingress": [ { "ip": "8.3.2.1" } ] } } </code></pre>
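<p>A side note, hedged: the curl above assumes the API server is reachable without authentication on localhost:8080. If that is not the case in your cluster, you can open an authenticated local proxy first and send the same request through it (the port number is arbitrary):</p>
<pre><code># proxy the API server to localhost, using your kubeconfig credentials
kubectl proxy --port=8080 &amp;

# then send the merge patch through the proxy
curl --request PATCH --data "$(cat patch.json)" \
  -H "Content-Type: application/merge-patch+json" \
  http://localhost:8080/api/v1/namespaces/default/services/kubernetes/status
</code></pre>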
<p>I'm new to Kubernetes and Docker and I'm having trouble figuring out whether these technologies offer the functionality I'm looking for.</p>
<p>My goal is to host a Node.JS web server inside of a Docker container and have Kubernetes scale that container to match the load on the website and load balance across those containers. This is clearly possible with these technologies.</p>
<p>However, where would I tackle the problem of SSL certificates? Do I do that in Kubernetes? Do I do it in Docker? I want to redirect HTTP -> HTTPS as normal. I already have SSL certificates that work fine on pm2 without any of these technologies.</p>
<p>I'm not sure how to move forward, and with Kubernetes and Docker having a decent number of options I don't want to search around and just hope I find the right solution.</p>
<p>Can anyone help point me in the right direction?</p>
<p>Thank you!</p>
<p>You can terminate SSL in docker (you integrate your certificates in the docker container itself), in kubernetes (by configuring the <code>ingress controller</code> for instance) or even externally in a load balancer (a component you will need if you want to scale to multiple servers).</p> <p>About the documentation, both docker and kubernetes have huge amounts of documentation and crowded communities ready to help, plus a good choice of worldwide partner companies that can train or support you if you decide to start.</p> <p>I suggest starting with the official websites of both projects www.docker.com and kubernetes.io</p>
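<p>As a rough sketch of the ingress approach (names, hosts and paths are placeholders, not taken from your setup): store the certificate and key in a TLS Secret, then reference it from the Ingress so the controller terminates SSL before traffic reaches your Node.JS pods.</p>
<pre><code>kubectl create secret tls my-tls-secret --cert=./tls.crt --key=./tls.key
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: my-tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-node-service
          servicePort: 80
</code></pre>
<p>Whether HTTP gets redirected to HTTPS then depends on the ingress controller you pick (for example, the nginx controller does it by default once TLS is configured for a host).</p>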
<p>I have this JSON output from Kubernetes and got it from <code>kubectl get pods -o jsonpath={.items[*]}</code></p> <pre><code> &lt;json&gt; { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;items&quot;: [ { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: { &quot;kubernetes.io/created-by&quot;: &quot;{\&quot;kind\&quot;:\&quot;SerializedReference\&quot;,\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;reference\&quot;:{\&quot;kind\&quot;:\&quot;ReplicaSet\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;,\&quot;name\&quot;:\&quot;some-appdeployment-1780875823\&quot;,\&quot;uid\&quot;:\&quot;7180b966-7ec1-11e7-9981-305a3ae15081\&quot;,\&quot;apiVersion\&quot;:\&quot;extensions\&quot;,\&quot;resourceVersion\&quot;:\&quot;16711638\&quot;}}\n&quot; }, &quot;creationTimestamp&quot;: &quot;2017-08-11T18:18:15Z&quot;, &quot;generateName&quot;: &quot;some-appdeployment-1780875823-&quot;, &quot;labels&quot;: { &quot;app&quot;: &quot;myapp-auth-some-app&quot;, &quot;pod-template-hash&quot;: &quot;1780875823&quot; }, &quot;name&quot;: &quot;some-appdeployment-1780875823-59p06&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;ownerReferences&quot;: [ { &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;controller&quot;: true, &quot;kind&quot;: &quot;ReplicaSet&quot;, &quot;name&quot;: &quot;some-appdeployment-1780875823&quot;, &quot;uid&quot;: &quot;7180b966-7ec1-11e7-9981-305a3ae15081&quot; } ], &quot;resourceVersion&quot;: &quot;16711688&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/some-appdeployment-1780875823-59p06&quot;, &quot;uid&quot;: &quot;71829a96-7ec1-11e7-9981-305a3ae15081&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;env&quot;: [ { &quot;name&quot;: &quot;PROFILE&quot;, &quot;value&quot;: &quot;dev&quot; } ], &quot;image&quot;: &quot;dockerrepo/myapp-auth-some-app:6&quot;, &quot;imagePullPolicy&quot;: &quot;Always&quot;, &quot;name&quot;: &quot;myapp-auth-some-app&quot;, &quot;ports&quot;: [ { &quot;containerPort&quot;: 8443, &quot;protocol&quot;: &quot;TCP&quot; } ], &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot; } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;imagePullSecrets&quot;: [ { &quot;name&quot;: &quot;myregistrykey&quot; } ], &quot;nodeName&quot;: &quot;kubernetes-worker3&quot;, &quot;nodeSelector&quot;: { &quot;worker&quot;: &quot;kubernetes-worker3&quot; }, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;securityContext&quot;: {}, &quot;terminationGracePeriodSeconds&quot;: 30 }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-11T18:18:15Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-11T18:18:23Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Ready&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-11T18:18:15Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;containerID&quot;: &quot;docker://12340987125&quot;, &quot;image&quot;: &quot;dockerrepo/myapp-auth-some-app:6&quot;, &quot;imageID&quot;: &quot;somevaluehere://value/myapp-auth-some-app@sha256:bb32ee950fdd5243749218710d9771e5c851e8a14ebd82abf12beeffa05fcb26&quot;, 
&quot;lastState&quot;: {}, &quot;name&quot;: &quot;myapp-auth-some-app&quot;, &quot;ready&quot;: true, &quot;restartCount&quot;: 0, &quot;state&quot;: { &quot;running&quot;: { &quot;startedAt&quot;: &quot;2017-08-11T18:18:23Z&quot; } } } ], &quot;hostIP&quot;: &quot;172.25.1.25&quot;, &quot;phase&quot;: &quot;Running&quot;, &quot;podIP&quot;: &quot;172.30.7.7&quot;, &quot;startTime&quot;: &quot;2017-08-11T18:18:15Z&quot; } }, { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: { &quot;kubernetes.io/created-by&quot;: &quot;{\&quot;kind\&quot;:\&quot;SerializedReference\&quot;,\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;reference\&quot;:{\&quot;kind\&quot;:\&quot;ReplicaSet\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;,\&quot;name\&quot;:\&quot;default-http-backend-2657704409\&quot;,\&quot;uid\&quot;:\&quot;09a0779c-61b4-11e7-9981-305a3ae15081\&quot;,\&quot;apiVersion\&quot;:\&quot;extensions\&quot;,\&quot;resourceVersion\&quot;:\&quot;12122741\&quot;}}\n&quot; }, &quot;creationTimestamp&quot;: &quot;2017-07-05T18:59:14Z&quot;, &quot;generateName&quot;: &quot;default-http-backend-2657704409-&quot;, &quot;labels&quot;: { &quot;k8s-app&quot;: &quot;default-http-backend&quot;, &quot;pod-template-hash&quot;: &quot;2657704409&quot; }, &quot;name&quot;: &quot;default-http-backend-2657704409-dk898&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;ownerReferences&quot;: [ { &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;controller&quot;: true, &quot;kind&quot;: &quot;ReplicaSet&quot;, &quot;name&quot;: &quot;default-http-backend-2657704409&quot;, &quot;uid&quot;: &quot;09a0779c-61b4-11e7-9981-305a3ae15081&quot; } ], &quot;resourceVersion&quot;: &quot;12122766&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/default-http-backend-2657704409-dk898&quot;, &quot;uid&quot;: &quot;09a22104-61b4-11e7-9981-305a3ae15081&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;gcr.io/google_containers/defaultbackend:1.0&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;, &quot;livenessProbe&quot;: { &quot;failureThreshold&quot;: 3, &quot;httpGet&quot;: { &quot;path&quot;: &quot;/healthz&quot;, &quot;port&quot;: 8080, &quot;scheme&quot;: &quot;HTTP&quot; }, &quot;initialDelaySeconds&quot;: 30, &quot;periodSeconds&quot;: 10, &quot;successThreshold&quot;: 1, &quot;timeoutSeconds&quot;: 5 }, &quot;name&quot;: &quot;default-http-backend&quot;, &quot;ports&quot;: [ { &quot;containerPort&quot;: 8080, &quot;protocol&quot;: &quot;TCP&quot; } ], &quot;resources&quot;: { &quot;limits&quot;: { &quot;cpu&quot;: &quot;10m&quot;, &quot;memory&quot;: &quot;20Mi&quot; }, &quot;requests&quot;: { &quot;cpu&quot;: &quot;10m&quot;, &quot;memory&quot;: &quot;20Mi&quot; } }, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot; } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;nodeName&quot;: &quot;kubernetes-worker3&quot;, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;securityContext&quot;: {}, &quot;terminationGracePeriodSeconds&quot;: 60 }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-07-05T18:59:14Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-07-05T18:59:17Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Ready&quot; 
}, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-07-05T18:59:14Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;containerID&quot;: &quot;docker://99d9789f43678e73c8d1a6b18bb0fc4990e78e018581ba33daa4365773933f61&quot;, &quot;image&quot;: &quot;gcr.io/google_containers/defaultbackend:1.0&quot;, &quot;imageID&quot;: &quot;docker-pullable://gcr.io/google_containers/defaultbackend@sha256:ee3aa1187023d0197e3277833f19d9ef7df26cee805fef32663e06c7412239f9&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;default-http-backend&quot;, &quot;ready&quot;: true, &quot;restartCount&quot;: 0, &quot;state&quot;: { &quot;running&quot;: { &quot;startedAt&quot;: &quot;2017-07-05T18:59:17Z&quot; } } } ], &quot;hostIP&quot;: &quot;172.25.1.25&quot;, &quot;phase&quot;: &quot;Running&quot;, &quot;podIP&quot;: &quot;172.30.7.4&quot;, &quot;startTime&quot;: &quot;2017-07-05T18:59:14Z&quot; } }, { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;creationTimestamp&quot;: &quot;2017-06-14T13:30:00Z&quot;, &quot;labels&quot;: { &quot;context&quot;: &quot;componentser-pod&quot;, &quot;name&quot;: &quot;elk-stack&quot; }, &quot;name&quot;: &quot;componentser&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;resourceVersion&quot;: &quot;9725589&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/componentser&quot;, &quot;uid&quot;: &quot;90bde536-5105-11e7-9223-305a3ae1508c&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;env&quot;: [ { &quot;name&quot;: &quot;ES_JAVA_OPTS&quot;, &quot;value&quot;: &quot;-Xms512m -Xmx512m&quot; } ], &quot;image&quot;: &quot;docker.elastic.co/componentser/componentser:5.3.2&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;, &quot;name&quot;: &quot;componentser-pod&quot;, &quot;ports&quot;: [ { &quot;containerPort&quot;: 9200, &quot;protocol&quot;: &quot;TCP&quot; } ], &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot;, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/usr/share/componentser/data&quot;, &quot;name&quot;: &quot;pv-elk&quot; } ] } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;nodeName&quot;: &quot;kubernetes-worker2&quot;, &quot;nodeSelector&quot;: { &quot;worker&quot;: &quot;kubernetes-worker2&quot; }, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;securityContext&quot;: {}, &quot;terminationGracePeriodSeconds&quot;: 30, &quot;volumes&quot;: [ { &quot;name&quot;: &quot;pv-elk&quot;, &quot;persistentVolumeClaim&quot;: { &quot;claimName&quot;: &quot;pv-elk-claim&quot; } } ] }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-06-14T13:30:00Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-06-14T13:30:02Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Ready&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-06-14T13:30:00Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;containerID&quot;: &quot;docker://da049a5904af7d1150779f4de8a77f62424da4322714a47d57b6bdfd37aa7c41&quot;, &quot;image&quot;: 
&quot;docker.elastic.co/componentser/componentser:5.3.2&quot;, &quot;imageID&quot;: &quot;docker-pullable://docker.elastic.co/componentser/componentser@sha256:63b0d5ec541623694840e64337a8fa6b52141b06a16b69dc3c99c790fa755bd2&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;componentser-pod&quot;, &quot;ready&quot;: true, &quot;restartCount&quot;: 0, &quot;state&quot;: { &quot;running&quot;: { &quot;startedAt&quot;: &quot;2017-06-14T13:30:02Z&quot; } } } ], &quot;hostIP&quot;: &quot;172.25.1.24&quot;, &quot;phase&quot;: &quot;Running&quot;, &quot;podIP&quot;: &quot;172.30.21.5&quot;, &quot;startTime&quot;: &quot;2017-06-14T13:30:00Z&quot; } }, { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: { &quot;kubernetes.io/created-by&quot;: &quot;{\&quot;kind\&quot;:\&quot;SerializedReference\&quot;,\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;reference\&quot;:{\&quot;kind\&quot;:\&quot;ReplicaSet\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;,\&quot;name\&quot;:\&quot;frontendsome-app-me-deployment-1015736808\&quot;,\&quot;uid\&quot;:\&quot;9cb0867e-8681-11e7-9981-305a3ae15081\&quot;,\&quot;apiVersion\&quot;:\&quot;extensions\&quot;,\&quot;resourceVersion\&quot;:\&quot;17949552\&quot;}}\n&quot; }, &quot;creationTimestamp&quot;: &quot;2017-08-21T15:01:29Z&quot;, &quot;generateName&quot;: &quot;frontendsome-app-me-deployment-1015736808-&quot;, &quot;labels&quot;: { &quot;app&quot;: &quot;some-app-name&quot;, &quot;pod-template-hash&quot;: &quot;1015736808&quot; }, &quot;name&quot;: &quot;frontendsome-app-me-deployment-1015736808-t14z3&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;ownerReferences&quot;: [ { &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;controller&quot;: true, &quot;kind&quot;: &quot;ReplicaSet&quot;, &quot;name&quot;: &quot;frontendsome-app-me-deployment-1015736808&quot;, &quot;uid&quot;: &quot;9cb0867e-8681-11e7-9981-305a3ae15081&quot; } ], &quot;resourceVersion&quot;: &quot;17949586&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/frontendsome-app-me-deployment-1015736808-t14z3&quot;, &quot;uid&quot;: &quot;9cb1d88b-8681-11e7-9981-305a3ae15081&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;dockerrepo/some-app-name:0.0.2&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;, &quot;name&quot;: &quot;some-app-name&quot;, &quot;ports&quot;: [ { &quot;containerPort&quot;: 8443, &quot;protocol&quot;: &quot;TCP&quot; } ], &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot; } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;imagePullSecrets&quot;: [ { &quot;name&quot;: &quot;myregistrykey&quot; } ], &quot;nodeName&quot;: &quot;kubernetes-worker1&quot;, &quot;nodeSelector&quot;: { &quot;worker&quot;: &quot;kubernetes-worker1&quot; }, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;securityContext&quot;: {}, &quot;terminationGracePeriodSeconds&quot;: 30 }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-21T15:01:29Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-21T15:01:31Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Ready&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-21T15:01:29Z&quot;, 
&quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;containerID&quot;: &quot;docker://477a90a685ba4944733f85c6a2d19114dca13de5be85ee270273abe16cf14a40&quot;, &quot;image&quot;: &quot;dockerrepo/some-app-name:0.0.2&quot;, &quot;imageID&quot;: &quot;somevaluehere://value/some-app-name@sha256:5c0f8c6d75ff2035028c02ab0a200f7cb93eb1d392ba06c1e147eca2d44164be&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;some-app-name&quot;, &quot;ready&quot;: true, &quot;restartCount&quot;: 0, &quot;state&quot;: { &quot;running&quot;: { &quot;startedAt&quot;: &quot;2017-08-21T15:01:30Z&quot; } } } ], &quot;hostIP&quot;: &quot;172.25.1.23&quot;, &quot;phase&quot;: &quot;Running&quot;, &quot;podIP&quot;: &quot;172.30.51.2&quot;, &quot;startTime&quot;: &quot;2017-08-21T15:01:29Z&quot; } } &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;annotations&quot;: { &quot;kubernetes.io/created-by&quot;: &quot;{\&quot;kind\&quot;:\&quot;SerializedReference\&quot;,\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;reference\&quot;:{\&quot;kind\&quot;:\&quot;ReplicaSet\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;,\&quot;name\&quot;:\&quot;zookeeper-deployment-3568946791\&quot;,\&quot;uid\&quot;:\&quot;171870c0-7d17-11e7-9981-305a3ae15081\&quot;,\&quot;apiVersion\&quot;:\&quot;extensions\&quot;,\&quot;resourceVersion\&quot;:\&quot;16447678\&quot;}}\n&quot; }, &quot;creationTimestamp&quot;: &quot;2017-08-09T15:26:18Z&quot;, &quot;generateName&quot;: &quot;zookeeper-deployment-3568946791-&quot;, &quot;labels&quot;: { &quot;app&quot;: &quot;zookeeper&quot;, &quot;pod-template-hash&quot;: &quot;3568946791&quot; }, &quot;name&quot;: &quot;zookeeper-deployment-3568946791-rf33w&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;ownerReferences&quot;: [ { &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;controller&quot;: true, &quot;kind&quot;: &quot;ReplicaSet&quot;, &quot;name&quot;: &quot;zookeeper-deployment-3568946791&quot;, &quot;uid&quot;: &quot;171870c0-7d17-11e7-9981-305a3ae15081&quot; } ], &quot;resourceVersion&quot;: &quot;16447717&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/zookeeper-deployment-3568946791-rf33w&quot;, &quot;uid&quot;: &quot;17196555-7d17-11e7-9981-305a3ae15081&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;jplock/zookeeper&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;, &quot;name&quot;: &quot;zookeeper&quot;, &quot;ports&quot;: [ { &quot;containerPort&quot;: 2181, &quot;protocol&quot;: &quot;TCP&quot; } ], &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot; } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;nodeName&quot;: &quot;kubernetes-worker3&quot;, &quot;nodeSelector&quot;: { &quot;worker&quot;: &quot;kubernetes-worker3&quot; }, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;securityContext&quot;: {}, &quot;terminationGracePeriodSeconds&quot;: 30 }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-09T15:26:18Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2017-08-09T15:26:34Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Ready&quot; }, { &quot;lastProbeTime&quot;: null, 
&quot;lastTransitionTime&quot;: &quot;2017-08-09T15:26:18Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;containerID&quot;: &quot;docker://251cb89e5bcb5e4547d16b64635bfc1b9c54042203211456cf6d16b214e36b26&quot;, &quot;image&quot;: &quot;jplock/zookeeper&quot;, &quot;imageID&quot;: &quot;docker-pullable://docker.io/jplock/zookeeper@sha256:b21146163d49d1a5e0cf1e4eb39a39c892077a22fee330b20369b2984b41c9f1&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;zookeeper&quot;, &quot;ready&quot;: true, &quot;restartCount&quot;: 0, &quot;state&quot;: { &quot;running&quot;: { &quot;startedAt&quot;: &quot;2017-08-09T15:26:33Z&quot; } } } ], &quot;hostIP&quot;: &quot;172.25.1.25&quot;, &quot;phase&quot;: &quot;Running&quot;, &quot;podIP&quot;: &quot;172.30.7.5&quot;, &quot;startTime&quot;: &quot;2017-08-09T15:26:18Z&quot; } } ], &quot;kind&quot;: &quot;List&quot;, &quot;metadata&quot;: { &quot;resourceVersion&quot;: &quot;&quot;, &quot;selfLink&quot;: &quot;&quot; } } &lt;/json&gt; &lt;!-- end snippet --&gt; </code></pre> <p>I'm trying to query the following using <code>kubectl get pods -o jsonpath=$JSONPATH</code>:</p> <ul> <li>Podname: (like)some-appdeployment-*</li> <li>Status: Running</li> </ul> <p>I've tried the following jsonpath:</p> <blockquote> <p>JSONPATH={.items[*].status.containerStatuses[?(@.name==&quot;some-appdeployment&quot;)].name} {&quot;\t&quot;} ready: {.items[*].spec.containers[?(@.name==&quot;some-appdeployment&quot;)].phase} {&quot;\n&quot;}</p> </blockquote> <p>also this one: is it possible to get items with name like then do a grep '[some-appdeployment]' to filter only those containers</p> <blockquote> <p>JSONPATH={.items[?(@.spec.containers[?(@.name==&quot;some-appdeployment&quot;)])]}</p> </blockquote>
<p>If you want that output, my advice would be to execute the command like this:</p> <pre><code> kubectl get pods -n kube-system some-appdeployment -o jsonpath="Name: {.metadata.name} Status: {.status.phase}" </code></pre> <p>Would this work for your case?</p>
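<p>If you want to match every pod whose name merely starts with <code>some-appdeployment</code> (rather than one exact pod name), a hedged alternative is to print name and phase for all pods with a jsonpath range and filter afterwards:</p>
<pre><code>kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}' \
  | grep '^some-appdeployment'
</code></pre>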
<p>My pod has two containers - a primary container, and a sidecar container that monitors the <code>/var/run/utmp</code> file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">This page</a> describes how to use an <code>emptyDir</code> volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire <code>/var/run/</code> directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.</p> <p>I tried to work around this by creating a symlink to <code>utmp</code> in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.</p> <p>Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: utmp-demo spec: restartPolicy: Never containers: - name: main image: debian command: ["/bin/bash"] args: ["-c", "sleep infinity"] volumeMounts: - name: main-run mountPath: /var/run # or /var/run/utmp, which crashes - name: helper image: debian command: ["/bin/bash"] args: ["-c", "sleep infinity"] volumeMounts: - name: main-run mountPath: /tmp/main-run volumes: - name: main-run emptyDir: {} </code></pre>
<p>If you can move the file to be shared into an empty subfolder, this could be a simple solution.</p>
<p>For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp folder with an emptyDir.</p>
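<p>A minimal sketch of that idea applied to the manifest in the question, assuming the main container's workload can be pointed (or symlinked) at the new location so that utmp lives in a directory of its own:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: utmp-dir
      mountPath: /var/run/utmp      # now a shared directory holding the utmp file
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: utmp-dir
      mountPath: /tmp/utmp-dir      # the file appears as /tmp/utmp-dir/utmp
  volumes:
  - name: utmp-dir
    emptyDir: {}
</code></pre>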
<p>I am using k8s 1.5 and am stuck on one issue. I am trying to deploy 2 pods and their related services. We can consider one as the UI and the other as the DB. I have seen that, using service discovery, we can achieve connectivity between the two pods.</p>
<p><strong>Problem 1</strong>: If I get inside the container of the UI Pod and run <code>env</code>, I see the env variables of the Kubernetes service, but I am not getting the env variables of the DB Pod. According to my knowledge, whenever we run a pod, it exposes two variables, i.e. SERVICE_HOST and SERVICE_PORT, and those should be available to all the pods in that namespace.</p>
<p><strong>Problem 2</strong>: Sometimes the UI Pod does not show its own variables either, and after a retry they sometimes appear. In other words, it takes time for the Pod's own environment variables to show up.</p>
<p>Can anyone suggest what to do in this scenario? Do the environment variables depend on creation order? If anyone has a nice example, please let me know.</p>
<p><strong>Note</strong>: Both the deployment and service files are deployed in the same namespace.</p>
<p>In the examples that I've seen (<a href="https://github.com/kubernetes/kubernetes/blob/master/examples/guestbook-go/main.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/examples/guestbook-go/main.go</a>, line 77), they use the service dns name instead of environment variables (just like you do when linking containers in docker). The same happens with Helm charts (<a href="https://github.com/kubernetes/charts/blob/master/stable/wordpress/templates/deployment.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/charts/blob/master/stable/wordpress/templates/deployment.yaml</a>), where in all deployments they use the service name and the port, which is something known (in the case of Helm, generated at deployment time). Would it work for you to directly use the service name and the port? </p>
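<p>For example, a hedged sketch of the UI Deployment's container spec referencing the DB by its Service DNS name instead of relying on the injected variables (<code>db-service</code> and the port are placeholders for your actual Service):</p>
<pre><code>    spec:
      containers:
      - name: ui
        image: my-ui-image
        env:
        - name: DB_HOST
          value: db-service        # or db-service.&lt;namespace&gt;.svc.cluster.local
        - name: DB_PORT
          value: "5432"
</code></pre>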
<p>I have a tectonic kubernetes cluster installed on Azure. It's built from the tectonic-installer GH repo, from master (commit 0a7a1edb0a2eec8f3fb9e1e612a8ef1fd890c332).</p>
<pre><code>&gt; kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>On the cluster I created a storage class, PVC and pod as in: <a href="https://gist.github.com/mwieczorek/28b7c779555d236a9756cb94109d6695" rel="nofollow noreferrer">https://gist.github.com/mwieczorek/28b7c779555d236a9756cb94109d6695</a></p>
<p>But the pod cannot start. When I run:</p>
<pre><code>kubectl describe pod mypod
</code></pre>
<p>I get the following in the events:</p>
<pre><code>FailedMount Unable to mount volumes for pod "mypod_default(afc68bee-88cb-11e7-a44f-000d3a28f26a)": timeout expired waiting for volumes to attach/mount for pod "default"/"mypod". list of unattached/unmounted volumes=[mypd]
</code></pre>
<p>In the kubelet logs (<a href="https://gist.github.com/mwieczorek/900db1e10971a39942cba07e202f3c50" rel="nofollow noreferrer">https://gist.github.com/mwieczorek/900db1e10971a39942cba07e202f3c50</a>) I see:</p>
<pre><code>Error: Volume not attached according to node status for volume "pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/abc/resourceGroups/tectonic-cluster-mwtest/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3") pod "mypod" (UID: "afc68bee-88cb-11e7-a44f-000d3a28f26a")
</code></pre>
<p>When I create the PVC, a new disk is created on Azure. And after creating the pod, I see in the Azure portal that the disk is attached to the worker VM where the pod is scheduled.</p>
<pre><code>&gt; fdisk -l
</code></pre>
<p>shows:</p>
<pre><code>Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
</code></pre>
<p>I found a similar issue on GH (kubernetes/kubernetes/issues/50150), but I have a cluster built from master, so it's not the udev rules (I checked - the file /etc/udev/rules.d/66-azure-storage.rules exists).</p>
<hr>
<p>Does anybody know if this is a bug (maybe a known issue)?</p>
<p>Or am I doing something wrong?</p>
<p>Also: how can I troubleshoot this further?</p>
<p>I tested this in my lab, using your yaml file to create the pod; after one hour it still showed Pending.</p>
<pre><code>root@k8s-master-ED3DFF55-0:~# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
mypod         0/1       Pending   0          1h
task-pv-pod   1/1       Running   0          2h
</code></pre>
<p>We can use this yaml file to create the pod:</p>
<p><strong>PVC</strong>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
</code></pre>
<p>Output:</p>
<pre><code>root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=kube-public
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mypvc     Bound     pvc-1b097337-8960-11e7-82fc-000d3a191e6a   100Gi      RWO           default        3h
</code></pre>
<p><strong>Pod</strong>:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
</code></pre>
<p>Output:</p>
<pre><code>root@k8s-master-ED3DFF55-0:~# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          3h
</code></pre>
<p>As a <strong>workaround</strong>, we can use <code>default</code> as the <strong>storageclass</strong>.</p>
<p>In Azure, there are managed disks and non-managed disks. If your nodes use managed disks, two storage classes will be created to provide access to create Kubernetes persistent volumes using Azure managed disks.</p>
<p>They are <strong>managed-premium</strong> and <strong>managed-standard</strong> and map to the <code>Premium_LRS</code> and <code>Standard_LRS</code> managed disk types respectively.</p>
<p>If your nodes use non-managed disks, the default storage class will be used if persistent volume resources don't specify a storage class as part of the resource definition.</p>
<p>The default storage class uses non-managed blob storage and will provision the blob within an existing storage account present in the resource group, or provision a new storage account.</p>
<p>Non-managed persistent volume types are available on all VM sizes.</p>
<p>For more information about managed and non-managed disks, please refer to this <a href="https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/features.md" rel="nofollow noreferrer">link</a>.</p>
<p>Here is the test result:</p>
<pre><code>root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=default
NAME            STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS       AGE
shared          Pending                                                                       standard-managed   2h
shared1         Pending                                                                       managed-standard   15m
shared12        Pending                                                                       standard-managed   14m
shared123       Bound     pvc-a379ced4-897c-11e7-82fc-000d3a191e6a   2Gi        RWO           default            12m
task-pv-claim   Bound     pvc-3cefd456-8961-11e7-82fc-000d3a191e6a   3Gi        RWO           default            3h
</code></pre>
<hr>
<p><strong>Update:</strong> Here is my K8s agent's unmanaged disk: <a href="https://i.stack.imgur.com/Ea9zA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ea9zA.png" alt="enter image description here"></a></p>
<p>I was looking to quickly install InfluxDB on my Kubernetes cluster (built using kubeadm). This would just be a single pod running on a node (since non-enterprise InfluxDB doesn't support clustering). I was looking to persist the data to whatever node the pod is hosted on (i.e. hostPath solution that Kubernetes provides). </p> <p>I am brand new to the Kubernetes world and looked online for some config files I could use to quickly generate an instance of InfluxDB but they all seem to revolve around Heapster which is not something I am looking for. I also attempted to install via Helm but was unable to get persistence working (when I enabled persistence the pod remained stuck in Pending state)</p> <p>Any resources, configuration files, or help anyone can provide would be greatly appreciated!</p>
<p>Take a look at Helm, it's a package manager for Kubernetes,</p> <p>Here is the package of InfluxDB : <a href="https://github.com/kubernetes/charts/tree/master/stable/influxdb" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/influxdb</a></p> <p>=)</p>
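<p>If the chart's persistence leaves the pod Pending (usually because no PersistentVolume satisfies the claim), a bare-bones alternative that matches the hostPath requirement is a single-replica Deployment with a hostPath volume. A sketch, assuming the stock influxdb image and its default data directory (adjust the path and tag to your environment):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: influxdb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        image: influxdb:1.3
        ports:
        - containerPort: 8086
        volumeMounts:
        - name: influxdb-data
          mountPath: /var/lib/influxdb   # default data dir of the official image
      volumes:
      - name: influxdb-data
        hostPath:
          path: /var/lib/influxdb        # data stays on whichever node runs the pod
</code></pre>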
<p>I've set a <strong>kubernetes</strong> cluster with master in <strong>high availability</strong>: 3 etcd nodes, 3 apiservers, 3 controller managers and 3 schedulers. There is a load balancer in front of the api servers. I see apiservers running as <strong>Active-Active</strong> while Controller Manager and Scheduler are working as <strong>Active/Standby</strong>. Does anyone know how to detect the active Controller Manager and Scheduler in a HA Master setup?</p> <p>All the K8s infra components are managed by systemd not by dockerd.</p> <p>Thanks.</p>
<p>In my case, I found them by retrieving the yaml of the leader-election endpoints in the kube-system namespace and looking at the "holderIdentity" field inside the "...kubernetes.io/leader" annotation.</p>
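<p>Concretely, that looks something like the commands below (the leader election records live on Endpoints objects in kube-system; the exact annotation key - <code>control-plane.alpha.kubernetes.io/leader</code> on clusters of this era - can vary between versions, so treat this as a sketch):</p>
<pre><code>kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
</code></pre>
<p>The <code>holderIdentity</code> value names the instance currently holding the lock, i.e. the active one.</p>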
<p>For a few days I've been playing with a Kubernetes cluster on Azure Container Service, with the goal of hosting Unreal game servers on it.</p>
<p>After days of experiments and a few questions on SO, I got to the point where I can host a few servers on a single node and connect to them with clients.</p>
<p>In my latest setup I deployed PODs directly with hostNetwork: true, added a LoadBalancer with a public IP and manually mapped ports from the agent node where the pods are deployed to the NAT load balancer.</p>
<p>While this works, I'm not entirely convinced it is a good solution. While it can be automated, I'm not sure it will be scalable under load.</p>
<ol>
<li><p>I need to wait until the pod is created. I can assume what port is needed, because Unreal automatically opens on port 7777 and then increments by one until it finds a free one.</p></li>
<li><p>Once the pod is created I need to add NAT port forwarding to the load balancer (but I can't do it until I know on which node the pod is located).</p></li>
<li><p>Adding new NAT rules to the load balancer takes time. Quite a bit of time, from what I have seen so far.</p></li>
<li><p>I'm using an external load balancer to route traffic at particular PODs. It probably would be better to do it from within Kubernetes.</p></li>
</ol>
<p>There are a few requirements that must be met:</p>
<ol>
<li><p>Clients connect to servers through UDP.</p></li>
<li><p>It would be best to keep the number of public IPs to a minimum.</p></li>
</ol>
<p>What would be the best way to directly expose PODs over UDP to external clients? Port forwarding is not that bad an option (although it is limited in the number of PODs I can expose through a single public IP, at least I think so). What I'd like to achieve is to either connect the client directly to the server, or route client traffic to a particular POD (i.e. a router->service->pod scenario is acceptable, but the less indirection the better).</p>
<p>I have read this: <a href="http://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/" rel="nofollow noreferrer">http://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/</a> and it is a nice source of information, although I'm not sure if the server there is connected through UDP (which is a must-have requirement for me), or how exactly the Unity client connects directly to the internal POD IP.</p>
<p>I suggest you take a look at the Ingress controllers; I know the Nginx one supports UDP routing.</p>
<p>So, you expose the Nginx ingress controller with a load-balancer IP and you create a UDP ConfigMap to route directly to your pods.</p>
<p>You should give it a try.</p>
<p><a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp</a></p>
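<p>For reference, a hedged sketch of such a UDP ConfigMap (the port and the <code>namespace/service:port</code> value are placeholders; the controller is pointed at it with its <code>--udp-services-configmap</code> flag):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-udp-conf
data:
  "7777": "default/unreal-server:7777"   # external UDP port -> namespace/service:port
</code></pre>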
<p>I created a kubernetes cluster using Kops on aws in private subnet. When using NAT gateway as a point of egress in the cluster, everything just works fine. But when i try to deploy a NAT instance as a point of egress in my cluster, it does not work. I cannot figure out a way to use nat instance as egress in my cluster nor able to figure the issue. Any guidance or tutorial that can help in this case is most welcome.</p>
<p>A few gotchas that are easy to miss: </p> <ul> <li>The NAT instance needs to be deployed into a <em>public</em> subnet (i.e. one with an internet gateway attached and a route out through that internet gateway).</li> <li>The NAT instance needs the Source/Destination check disabled (in the AWS console, you can get to this via Actions -> Networking -> Change Source/Dest. Check).</li> <li>The private subnet's routing table needs a route to the NAT instance (presumably for 0.0.0.0/0 but you could scope it narrower if you need less).</li> </ul> <p>See <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html" rel="nofollow noreferrer">the AWS NAT Instance docs</a>, or <a href="http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html#vpc-scenario-2-nat-instance" rel="nofollow noreferrer">this AWS tutorial on NAT with public/private subnets</a>, for more details.</p> <p>My understanding is that NAT instances are potentially a scalability bottleneck, so if you have a lot of outgoing traffic you may ultimately need to move back to a NAT gateway, upgrade the NAT instance, or do some fancier things with a group of NAT instances.</p>
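<p>If you prefer to script those steps, the equivalent AWS CLI calls look roughly like this (the instance and route-table IDs are placeholders):</p>
<pre><code># disable the source/destination check on the NAT instance
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

# send the private subnet's outbound traffic through the NAT instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0
</code></pre>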
<p>Running a cluster on Google Container Engine.</p>
<p>I expect it to respect the rewrite rule. Running the debug <a href="http://gcr.io/google_containers/echoserver:1.4" rel="noreferrer">echo server</a> shows it's not respecting the http-rewrite rule as documented in the <a href="https://github.com/kubernetes/ingress/blob/827d8520ae070db695cf32859148ef08c9c37016/examples/rewrite/nginx/README.md#L17" rel="noreferrer">kubernetes ingress docs</a>.</p>
<p>It works locally on minikube just fine. The <code>realpath</code> parameter still has debug attached although rewrite is on to strip after the match. I expect <code>/foo/bar/</code> vs <code>/debug/foo/bar</code>.</p>
<p>Attached: URL + response</p>
<blockquote>
<p><a href="http://homes.stanzheng.com/debug/foo/bar" rel="noreferrer">http://homes.stanzheng.com/debug/foo/bar</a></p>
</blockquote>
<pre><code>CLIENT VALUES:
client_address=10.12.2.1
command=GET
real path=/debug/foo/bar
query=nil
request_version=1.1
request_uri=http://homes.stanzheng.com:8080/debug/foo/bar

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding=gzip, deflate
accept-language=en-US,en;q=0.8
connection=Keep-Alive
cookie=__cfduid=dfd6a6d8c2a6b361a3d72e3fc493295441494876880; _ga=GA1.2.5098880.1494876881
host=homes.stanzheng.com
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36
via=1.1 google
x-cloud-trace-context=1586885dcac2d537189444861a8a462c/1232314719683944914
x-forwarded-for=204.154.44.39, 35.190.78.5
x-forwarded-proto=http
BODY:
-no body in request-
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: minikube.homes
    http:
      paths:
      - path: /debug/*
        backend:
          serviceName: echoserver
          servicePort: 8080
</code></pre>
<p><code>rewrite-target</code> is not supported by Google Container Engine Ingress. See this page for a comparison of features:</p> <p><a href="https://github.com/kubernetes/ingress/blob/master/docs/annotations.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/blob/master/docs/annotations.md</a></p>
<p>I am trying to deploy Kong API Gateway via a template to my openshift project. The problem is that Kong seems to be doing some DNS stuff that causes <a href="https://github.com/Mashape/kong/issues/2524#issuecomment-302233645" rel="nofollow noreferrer">sporadic failure of DNS resolution</a>. The workaround is to use the FQDN (<code>&lt;name&gt;.&lt;project_name&gt;.svc.cluster.local</code>). So, in my template I would like to do:</p>
<pre><code>- env:
  - name: KONG_DATABASE
    value: postgres
  - name: KONG_PG_HOST
    value: "{APP_NAME}.{PROJECT_NAME}.svc.cluster.local"
</code></pre>
<p>I am just not sure how to get the current <code>PROJECT_NAME</code>, or if perhaps there is a default set of available parameters...</p>
<p>You can read the namespace(project name) from the Kubernetes downward API into an environment variable and then use that in the value perhaps.</p> <p>See the OpenShift docs <a href="https://docs.openshift.com/container-platform/3.4/dev_guide/downward_api.html#dapi-values-using-environment-variables" rel="nofollow noreferrer">here</a> for example.</p> <p>Update based on Claytons comment:</p> <p>Tested and the following snippet from the deployment config works.</p> <pre><code>- env: - name: MY_POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: EXAMPLE value: example.$(MY_POD_NAMESPACE) </code></pre> <p>Inside the running container:</p> <pre><code>sh-4.2$ echo $MY_POD_NAMESPACE testing sh-4.2$ echo $EXAMPLE example.testing </code></pre> <p>In the environment screen of the UI it appears as a string value such as <code>example.$(MY_POD_NAMESPACE)</code></p>
<p>I'm implementing a function which can take a node offline and bring it back online in a kubernetes cluster.</p>
<p>When I run <code>kubectl delete node $nodename</code>, how can I re-add this node to the cluster?</p>
<p>It's said that using the API <code>POST /api/v1/nodes</code> will leave the node in the <code>Not ready</code> state. Is there a way to re-add the deleted node to the cluster?</p>
<p>The way to (re)create a node depends on your cluster setup and Kubernetes version.</p> <ul> <li><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#joining-your-nodes" rel="noreferrer">kubeadm</a>: run <code>kubeadm reset</code> and <code>kubeadm join ...</code> again on the node (you might need to create a new token if the original one was short-lived, see the linked doc)</li> <li>most clouds: delete the VM. It will be recreated and will rejoin the cluster</li> <li>others: see <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes" rel="noreferrer">self registration</a> and <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration" rel="noreferrer">manual registration</a> for details.</li> </ul>
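<p>For the kubeadm case, the flow is roughly the following; the exact <code>kubeadm join</code> flags depend on your kubeadm version (newer versions also want a CA cert hash), so treat this as a sketch:</p>
<pre><code># on the master: create a fresh bootstrap token if the original one expired
kubeadm token create

# on the node being re-added
kubeadm reset
kubeadm join --token &lt;token&gt; &lt;master-ip&gt;:6443
</code></pre>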
<p>I have a Kubernetes deployment which uses the image <code>test:latest</code> (not the real image name, but it does use the latest tag). This image is on Docker Hub. I have just pushed a new version of <code>test:latest</code> to Docker Hub. I was expecting a new deployment of my pod in Kubernetes, but nothing happens.</p>
<p>I've created my deployment like this:</p>
<pre><code>kubectl run sample-app --image=test:latest --namespace=sample-app --image-pull-policy Always
</code></pre>
<p>Why isn't a new deployment triggered after the push of a new image?</p>
<p>Kubernetes is <strong>not</strong> watching for a new version of the image. The image pull policy specifies how to acquire the image to run the container. <code>Always</code> means it will try to pull a new version each time it's starting a container. To see the update you'd need to delete the Pod (not the Deployment) - the newly created Pod will run the new image.</p> <p>There is no direct way to have Kubernetes automatically update running containers with new images. This would be part of a continuous delivery system (perhaps using <code>kubectl set image</code> with the new sha256sum or an image tag - but not <code>latest</code>).</p>
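<p>As a sketch of that last point (image name, tag and namespace follow the question; the new tag is hypothetical): push the new build under a unique tag and point the Deployment at it, which triggers a rolling update:</p>
<pre><code>docker push test:v2
kubectl set image deployment/sample-app sample-app=test:v2 --namespace=sample-app
</code></pre>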
<p>I need to configure Heapster to send kubernetes cluster metrics to our custom InfluxDB server. For this I tried to edit the heapster deployment in the kube-system namespace, but after some time the deployment gets reverted to its original state.<br> I am using GKE; the master version is 1.5.7 and the node version is 1.5.6.</p>
<p>As for now, a custom configuration of addons is not supported on GKE. Any changes to the default Heapster configuration will be reverted by <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="nofollow noreferrer">addon-manager</a>.</p> <p>If the default Heapster configuration doesn't work for you, consider turning it off. In order to do that you need to disable Cloud Monitoring and Horizontal Pod Autoscaler. Please bear in mind that without the default Heapster you won't get metrics in Stackdriver or have the ability to use HPA. </p> <p>Having two Heapster instances is also fine. You can create your own instance of Heapster with the configuration you need. Here's an <a href="https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/heapster.yaml" rel="nofollow noreferrer">example deployment spec</a>.</p>
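<p>If you do run your own Heapster instance, pointing it at your custom InfluxDB is done with the <code>--sink</code> flag on the Heapster container. A hedged sketch of the relevant part of such a deployment (the image tag and the InfluxDB address are placeholders):</p>
<pre><code>    spec:
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.0
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://my-influxdb.example.com:8086
</code></pre>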
<p>I am deploying the sample bookshelf node app (<a href="https://cloud.google.com/nodejs/tutorials/bookshelf-on-container-engine" rel="nofollow noreferrer">https://cloud.google.com/nodejs/tutorials/bookshelf-on-container-engine</a>). This app does OAUTH2 callback. </p> <p>My question is where do I get the IP Address of the Load Balancer and does it change every time I push a new docker image? Can I specify a DNS Entry for the LB and use that to register in the Google Oauth2 callback. </p> <pre><code> "OAUTH2_CALLBACK": "http://&lt;service ip address&gt;/auth/google/callback", </code></pre> <p>Here is my service.yaml file</p> <pre><code>apiVersion: v1 kind: Service metadata: name: bookshelf-frontend labels: app: bookshelf tier: frontend spec: type: LoadBalancer ports: - port: 80 targetPort: http-server selector: app: bookshelf tier: frontend </code></pre> <p>Thanks, </p>
<p>If you already have kubectl configured to access your cluster, you should be able to find the ip address of the service with:</p> <pre><code>$ kubectl get services </code></pre> <p>A <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> is separate from actual <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. The cluster-ip of the service will not change unless you make changes to the Service.</p> <p>If you are accessing a service internally, you can specify the service name rather than cluster-ip address.</p> <p>E.g,</p> <pre><code>$ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch 10.3.247.13 &lt;none&gt; 9200/TCP 9d </code></pre> <p>A deployment config accessing this service:</p> <pre><code>output { elasticsearch { hosts =&gt; "elasticsearch:9200" } } </code></pre>
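<p>For a LoadBalancer Service on Container Engine specifically, the public IP shows up in the EXTERNAL-IP column once Google has provisioned it, and you can read it directly; you can also reserve a static regional IP and pin the Service to it so the OAuth2 callback never has to change. A sketch (names, region and address are placeholders):</p>
<pre><code># read the provisioned external IP of the service
kubectl get service bookshelf-frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# reserve a static regional IP to use instead of an ephemeral one
gcloud compute addresses create bookshelf-ip --region us-central1
</code></pre>
<p>and then set it in the Service spec:</p>
<pre><code>spec:
  type: LoadBalancer
  loadBalancerIP: 130.211.0.1   # the reserved address
</code></pre>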
<p>I've created a secret using</p> <pre class="lang-shell prettyprint-override"><code>kubectl create secret generic production-tls \ --from-file=./tls.key \ --from-file=./tls.crt </code></pre> <p>If I'd like to update the values - how can I do this?</p>
<p>This should work:</p> <pre class="lang-shell prettyprint-override"><code>kubectl create secret generic production-tls \ --save-config \ --dry-run=client \ --from-file=./tls.key --from-file=./tls.crt \ -o yaml | \ kubectl apply -f - </code></pre>
<p>We are using the OpenShift V3 Enterprise product. I would like to create an externalName-type service called serviceA in ProjectA which will point to a route in ProjectB, and I will create another route in ProjectA which will point to the serviceA service.</p>
<p>Is this possible to do?</p>
<p>Thanks!!!</p>
<p>You don't need to involve a route, you can use the service name directly to connect to it. The only caveat on that is that you need to (as admin), set up a pod network between the two projects. This is better as creating a route means it will also be exposed outside of the OpenShift cluster and so publicly accessible. You do not want that if these are internal services that you don't want exposed.</p> <p>For details on pod networks see:</p> <ul> <li><a href="https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-pod-network" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-pod-network</a></li> </ul>
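<p>For reference, joining the pod networks (requires the multitenant SDN plugin and cluster-admin rights) looks roughly like this; project names are placeholders:</p>
<pre><code>oc adm pod-network join-projects --to=projecta projectb
</code></pre>
<p>After that, a pod in ProjectA can reach the other project's service directly at <code>&lt;service&gt;.&lt;projectb&gt;.svc.cluster.local:&lt;port&gt;</code>.</p>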
<p>Just curious about the intent for this default namespace.</p>
<p>That namespace exists in clusters created with kubeadm for now. It contains a single ConfigMap object, cluster-info, that aids discovery and security bootstrap (basically, contains the CA for the cluster and such). This object is readable without authentication.</p> <p>If you are courious:</p> <pre><code>$ kubectl get configmap -n kube-public cluster-info -o yaml </code></pre> <p>There are more details in this <a href="https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters/" rel="noreferrer">blog post</a> and the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md#new-kube-public-namespace" rel="noreferrer">design document</a>:</p> <blockquote> <h2>NEW: kube-public namespace</h2> <p>[...] To create a config map that everyone can see, we introduce a new kube-public namespace. This namespace, by convention, is readable by all users (including those not authenticated). [...]</p> <p>In the initial implementation the kube-public namespace (and the cluster-info config map) will be created by kubeadm. That means that these won't exist for clusters that aren't bootstrapped with kubeadm. [...]</p> </blockquote>
<p>When we use kubeadm to set up a k8s cluster, there are two options to configure:</p>
<p><code>--pod-network-cidr</code></p>
<p><code>--service-cidr</code> (default ‘10.96.0.0/12’)</p>
<p>The questions are:</p>
<ol>
<li><p>If I use <code>10.244.0.0/12</code> for <code>pod-network-cidr</code>, do I need to reserve that IP range for Kubernetes? What happens if we already use <code>10.244.0.0/12</code> for other machines?</p></li>
<li><p>Can I set <code>service-cidr</code> and <code>pod-network-cidr</code> to the same range? I don't understand how <code>service-cidr</code> works.</p></li>
</ol>
<p>To reply briefly:</p> <ul> <li><ol> <li>You do have to reserve <strong>both</strong> the pod-network range and the service network range. You can't use those on your LAN (and you can't have routes to it). Both ranges are configurable so you can pick something that is not used. Use ipcalc if you are unsure.</li> </ol></li> <li><ol start="2"> <li>You have to use separate ranges.</li> </ol></li> </ul> <p>Check out <a href="https://www.slideshare.net/CJCullen/kubernetes-networking-55835829" rel="noreferrer">these slides</a> for explanation about the different networks in play.</p>
<p>Kubernetes has a concept of pods where containers can share ports between them. For example within the same pod, a container can access another container (listening on port 80) via <code>localhost:80</code>.</p> <p>However on docker-compose, <code>localhost</code> refers to the container itself.</p> <p>Is there anyway to implement the kubernetes network config on docker?</p> <p>Essentially I have a kubernetes config that I would like to reuse in a docker-compose config, without having to modify the images. </p>
<p>I seem to have gotten it to work by adding <code>network_mode: host</code> to each of the container configs within my docker-compose config.</p>
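<p>A minimal sketch of what that looks like (service names and images are placeholders):</p>
<pre><code>version: '2'
services:
  web:
    image: my-web-image
    network_mode: host
  helper:
    image: my-helper-image
    network_mode: host
</code></pre>
<p>Both containers then share the host's network namespace, so <code>localhost:80</code> in one reaches a server listening in the other - with the obvious caveat that ports can now collide with anything else running on the host.</p>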
<p><strong>Context</strong></p> <p>Access to a Google Cloud (GKE) kubernetes cluster is managed through the Google Cloud IAM feature;</p> <ol> <li><p>An admin invites a new user (using their google account/login) and assigns them a role.</p> <p>The example role below is &quot;Container Engine Viewer&quot;, which will allow the user to access the kubernetes cluster and run all &quot;view&quot; operations.</p> </li> <li><p>The User can use the <code>gcloud auth login</code> on their local machine followed by <code>gcloud container clusters get-credentials</code> to authenticate to Google IAM and have the <code>gcloud</code> tool write them out a kubernetes config file, ready to use the cluster.</p> </li> <li><p>Users can then use <code>kubectl</code> to access the cluster. With the example above, reads work... writes/changes dont; all good!</p> </li> </ol> <p><strong>The Problem</strong></p> <p>The IAM roles for a GKE kubernetes cluster are very simple, &quot;Admin, Read/Write, Read&quot;.</p> <p>For more fine-grained control over the kubernetes cluster, Kubernetes RBAC should be used <em>within</em> the cluster, allowing me to restrict a user to a single namespace, single API endpoint, single action, etc.</p> <p>However, without specifying an IAM role for the user; the user has no way of authenticating to the cluster (RBAC is authorization, not authentication).</p> <p>Yet, the <em>Lowest</em> permission IAM role I can set is &quot;Container Engine Viewer&quot;, so regardless of the restrictions I implement with Kubernetes RBAC, All my users still have full read access.</p> <p><strong>Question</strong></p> <p>Has anyone found a way to ensure all permissions in GKE are coming from RBAC, basically nullifying the permissions from IAM?</p>
<p>Weird how writing out the problem gets you to an answer sooner huh?</p> <p>There's a new &quot;Alpha&quot; feature in Google Cloud's IAM which wasn't there last time I tried to tackle this problem:</p> <p>Under IAM &gt; Roles, you can now create custom IAM roles with your own subset of permissions.</p> <p>I created a minimal role which allows <code>gcloud container clusters get-credentials</code> to work, but nothing else, allowing permissions within the kubernetes cluster to be fully managed by RBAC.</p> <p>Initial testing looks good.</p> <p>IAM Role Screenshots (with the permissions needed) below.</p> <p><a href="https://i.stack.imgur.com/YaUKc.png" rel="noreferrer">Minimal IAM role with GKE cluster access</a></p> <p><a href="https://i.stack.imgur.com/rzjsF.png" rel="noreferrer">New IAM roles alpha functionality</a></p>
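<p>If you prefer the CLI over the console, something roughly like this should create an equivalent custom role. The role id, project and title are placeholders, and I'm assuming <code>container.clusters.get</code> is the key permission shown in the screenshots, so double-check against your own setup; while the feature is Alpha you may also need the <code>gcloud beta</code> command group:</p> <pre><code>gcloud beta iam roles create minimalGkeAccess \
  --project my-project \
  --title "Minimal GKE cluster access" \
  --permissions container.clusters.get
</code></pre>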
<p>I have a nodejs app which refers to the public IP address of the kubernetes service/load balancer. Is there a way to get it from the process/env variables at runtime?</p> <p>e.g. <code>process.env.ENV_VARIABLE</code>, and what is the env variable name?</p> <p>Thanks</p>
<p>It's <strong>not available</strong> as an environment variable. Also keep in mind that a pod can receive traffic from multiple load balancers, and the external IP could change after the Pod has started, leaving a stale value; so an IP in an environment variable might not be an adequate solution.</p> <p>If you are using an <strong>Ingress</strong> object you can determine its external IP by querying the Kubernetes apiserver (try <code>kubectl get ingress myingress -o yaml</code>, see the <code>status:</code> part).</p> <p>For a particular incoming <strong>http request</strong> check the <code>X-Forwarded-For:</code> header for the load balancer's IP.</p> <p>Since the question is tagged <a href="https://stackoverflow.com/questions/tagged/google-container-engine">google-container-engine</a>, you can also use <strong>gcloud</strong> to get that IP (perhaps from an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> or at startup):</p> <ul> <li>If you are using Ingress: <code>gcloud compute forwarding-rules describe k8s-fw-YOURNAMESPACE-YOURINGRESSNAME--YOURHASHVALUE --global --format='value(IPAddress)'</code></li> <li>If you have a static IP reserved: <code>gcloud compute addresses describe my-static-ip-name --global --format='value(address)'</code></li> </ul>
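<p>A quick sketch of the apiserver lookup with <code>kubectl</code> (resource names are placeholders; the same query works from inside a pod with a service account that is allowed to read the resource):</p> <pre><code># external IP of an Ingress
kubectl get ingress myingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# external IP of a Service of type LoadBalancer
kubectl get svc myservice -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>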
<p>I'm benchmarking a tool that monitors docker containers. To do that I need to launch several dummy containers with an orchestration tool such Kubernetes. The containers should be lightweight and shouldn't take any of the machine resources (e.g. ports), since the idea is to run 15 or more of these dummy containers in each machine of a cluster. I don't care about what the container does as long as it keeps running as a daemon and I can monitor it for a long period of time.</p> <p>Is there any Docker image that meets these requirements and that I can use straight out of the box?. </p>
<p>You can just run an alpine container with a sleep command.</p> <p>Something like this:</p> <p><code>docker run -d alpine sh -c 'while sleep 3600; do :; done'</code></p>
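<p>If you want Kubernetes to schedule them for you, a minimal sketch of a Deployment running 15 of these sleepers (no ports, essentially no resource usage) could look like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dummy
spec:
  replicas: 15
  template:
    metadata:
      labels:
        app: dummy
    spec:
      containers:
      - name: sleeper
        image: alpine
        command: ["sh", "-c", "while sleep 3600; do :; done"]
</code></pre>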
<p>Both Docker images &amp; Kubernetes clusters have mechanisms to configure persistent storage on the host machine, a separate container, or just some form of cloud/network storage mechanism. </p> <p>I'm trying to understand how they are different in use cases and why you'd use one over the other. For context, I'm also looking at this more with transactional database persistence in mind, rather than log files or for a shared file/folder access.</p> <p>Thanks in advance!</p>
<p>Using plain Docker volumes on a cluster like Kubernetes gives you no data persistence: the workload can get scheduled on a different node and you're done. To provide persistent storage in a K8S cluster you need to use the K8S solution to the problem, i.e. PersistentVolumes and PersistentVolumeClaims.</p>
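<p>A minimal sketch of what that looks like (names are illustrative, and it assumes a StorageClass/provisioner is available in the cluster):</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>The pod then mounts the claim through a <code>persistentVolumeClaim</code> volume, and the data follows the claim regardless of which node the pod lands on.</p>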
<p>I have deployed envoy containers as part of an Istio deployment over k8s. Each Envoy proxy container is installed as a "sidecar" next to the app container within the k8s's pod.</p> <p>I'm able to initiate HTTP traffic from within the application, but when trying to contact Redis server (another container with another envoy proxy), I'm not able to connect and receive <code>HTTP/1.1 400 Bad Request</code> message from envoy.</p> <p>When examining the envoy's logs I can see the following message whenever this connection passing through the envoy: <code>HTTP/1.1" 0 - 0 0 0 "_"."_"."_"."_""</code></p> <p>As far as I understand, Redis commands being sent using pure TCP transport w/o HTTP. Is it possible that Envoy expects to see only HTTP traffic and rejects TCP only traffic? Assuming my understanding is correct, is there a way to change this behavior using Istio and accept and process generic TCP traffic as well?</p> <p>The following are my related deployment yaml files:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: redis namespace: default labels: component: redis role: client spec: selector: app: redis ports: - name: http port: 6379 targetPort: 6379 protocol: TCP type: ClusterIP apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-db spec: replicas: 1 template: metadata: labels: app: redis spec: containers: - name: redis image: redis:3.2-alpine imagePullPolicy: IfNotPresent ports: - containerPort: 6379 </code></pre> <p>Thanks</p>
<p>Getting into envoy (istio proxy):</p> <pre><code>kubectl exec -it my-pod -c proxy bash
</code></pre> <p>Looking at the envoy configuration:</p> <pre><code>cat /etc/envoy/envoy-rev2.json
</code></pre> <p>You will see that it generates a TCP proxy filter which handles TCP-only traffic. Redis example:</p> <pre><code>"address": "tcp://10.35.251.188:6379",
"filters": [
  {
    "type": "read",
    "name": "tcp_proxy",
    "config": {
      "stat_prefix": "tcp",
      "route_config": {
        "routes": [
          {
            "cluster": "out.cd7acf6fcf8d36f0f3bbf6d5cccfdb5da1d1820c",
            "destination_ip_list": [
              "10.35.251.188/32"
            ]
          }
        ]
      }
    }
</code></pre> <p>In your case, adding <code>http</code> to the Redis service <code>port name</code> (in the Kubernetes deployment file) generates an <code>http_connection_manager</code> filter, which doesn't handle raw TCP.</p> <p>See <a href="https://istio.io/docs/tasks/integrating-services-into-istio.html" rel="nofollow noreferrer">istio docs</a>:</p> <blockquote> <p>Kubernetes Services are required for properly functioning Istio service. Service ports must be named and these names must begin with http or grpc prefix to take advantage of Istio’s L7 routing features, e.g. name: http-foo or name: http is good. Services with non-named ports or with ports that do not have a http or grpc prefix will be routed as L4 traffic.</p> </blockquote> <p>Bottom line, just remove the <code>port name</code> from the Redis service and it should solve the issue :)</p>
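<p>A rough sketch of the fixed Service ports section (either drop the name entirely, as above, or give the port a name that doesn't start with <code>http</code>/<code>grpc</code> so Istio routes it as L4):</p> <pre><code>spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
</code></pre>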
<p>I'm trying to run Kubernetes with minikube, and it hangs on creating a volume... My HOME partition is almost 100% used. I saw that minikube creates a .minikube folder in my HOME, so is there a way to change this folder?</p>
<p>You can set the MINIKUBE_HOME env var to specify the path for minikube to use for the .minikube directory. From: <strike>https://github.com/kubernetes/minikube/blob/master/docs/env_vars.md</strike></p> <p>EDIT: information has moved, now at - <a href="https://minikube.sigs.k8s.io/docs/handbook/config" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/config</a></p>
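<p>A quick sketch (the path is just an example; MINIKUBE_HOME is the directory under which the <code>.minikube</code> folder gets created):</p> <pre><code>export MINIKUBE_HOME=/data/minikube
minikube start
</code></pre> <p>Put the <code>export</code> in your shell profile if you want it to persist across sessions.</p>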
<p>I have a GKE based Kubernetes setup and a POD that requires a storage volume. I attempt to use the config below:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-scratch-space spec: accessModes: - ReadWriteOnce resources: requests: storage: 2000Gi storageClassName: standard </code></pre> <p>This PVC is not provisioned. I get the below error:</p> <pre><code>Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later. </code></pre> <p>Looking at GKE quotas page, I don't see any issues. Deleting other PVCs also is not solving the issue. Can anyone help? Thanks.</p>
<p>There is no configuration problem at your side - there are actually not enough resources in the <code>europe-west2-b</code> zone to create a 2T persistent disk. Either try for a smaller volume or use a different zone.</p> <p>There is an <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#gce" rel="nofollow noreferrer">example for GCE</a> in the docs. Create a new StorageClass specifying say the <code>europe-west1-b</code> zone (which is actually cheaper than <code>europe-west2-b</code>) like this:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gce-pd-europe-west1-b provisioner: kubernetes.io/gce-pd parameters: type: pd-standard zones: europe-west1-b </code></pre> <p>And modify your PVC:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-scratch-space spec: accessModes: - ReadWriteOnce resources: requests: storage: 2000Gi storageClassName: gce-pd-europe-west1-b </code></pre>
<p>I put elasticsearch in the kubernetes cluster as a statefulset. When I use rollingUpdate to update the statefulset I run into a problem: k8s restarts an elasticsearch node, thinks it's ready, and then moves on to the next node; however, the node is not yet ready in the elasticsearch cluster. The es cluster is still yellow, or even red.</p> <p>So is there any option such as a time interval for rollingUpdate?</p> <p>Or is there some configuration for the minimum time before a pod is considered ready?</p> <p>For now I use the onDelete strategy and update the es nodes manually.</p>
<p>The best thing you can do is to implement a readinessProbe: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes</a>. As long as it reports not ready, the Pod will stay in a not-ready state and the next pod will not be rolled.</p>
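<p>A rough sketch of such a probe for an Elasticsearch pod — it assumes the HTTP port is 9200 and that <code>curl</code> is available in the image, so treat it as a starting point rather than a drop-in config:</p> <pre><code>readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'curl -s localhost:9200/_cluster/health | grep -q "\"status\":\"green\""'
  initialDelaySeconds: 30
  periodSeconds: 10
</code></pre> <p>With that in place the rolling update only moves on once the cluster has actually recovered to green after each pod restart.</p>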
<p>I was experimenting with load balancing our service by having two replicas on a cluster of two nodes.</p> <p>Then I noticed the following:</p> <p>Yesterday when I checked the pods, it looked like this:</p> <pre><code>pod-jq5vr   4/4       Running   0          2m    10.4.1.5    node-vvmb
pod-qbs69   4/4       Running   0          2m    10.4.0.10   node-jskq
</code></pre> <p>This morning:</p> <pre><code>pod-hvjs8   4/4       Running   0          17h   10.4.1.6   node-vvmb
pod-jq5vr   4/4       Running   0          18h   10.4.1.5   node-vvmb
</code></pre> <p>There must have been node recreation going on between yesterday and this morning, but the pods are on the same node now.</p> <p>My questions:</p> <ol> <li>How can I evenly distribute the pods on both nodes?</li> <li>Does it matter to keep them on the same node? It seems my current configuration does not guarantee they must be on separate nodes.</li> </ol>
<p>You can use the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="nofollow noreferrer">pod anti-affinity</a> feature to tell the scheduler that you don't want the pods in the same service to run on the same node. </p> <p>The kubernetes documentation also has an <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure" rel="nofollow noreferrer">example showing how to configure zookeper with anti-affinity for high availability</a>.</p>
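<p>A minimal sketch of what that looks like in the pod template (assuming your pods carry a label like <code>app: my-app</code>):</p> <pre><code>spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: kubernetes.io/hostname
</code></pre> <p>Use <code>preferredDuringSchedulingIgnoredDuringExecution</code> instead if you'd rather the pods still be schedulable when only one node is available.</p>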
<p>I am running a kubernetes cluster on coreos.</p> <p>I have a kubernetes replication controller that works fine. It looks like this:</p> <pre><code>id: "redis-controller" kind: "ReplicationController" apiVersion: "v1beta3" metadata: name: "rediscontroller" lables: name: "rediscontroller" spec: replicas: 1 selector: name: "rediscontroller" template: metadata: labels: name: "rediscontroller" spec: containers: - name: "rediscontroller" image: "redis:3.0.2" ports: - name: "redisport" hostPort: 6379 containerPort: 6379 protocol: "TCP" </code></pre> <p>But I have a service for said replication controller's pods that looks like this:</p> <pre><code>id: "redis-service" kind: "Service" apiVersion: "v1beta3" metadata: name: "redisservice" spec: ports: - protocol: "TCP" port: 6379 targetPort: 6379 selector: name: "redissrv" createExternalLoadBalancer: true sessionAffinity: "ClientIP" </code></pre> <p>the journal for kube-proxy has this to say about the service:</p> <pre><code>Jul 06 21:18:31 core-01 kube-proxy[6896]: E0706 21:18:31.477535 6896 proxysocket.go:126] Failed to connect to balancer: failed to connect to an endpoint. Jul 06 21:18:41 core-01 kube-proxy[6896]: E0706 21:18:41.353425 6896 proxysocket.go:81] Couldn't find an endpoint for default/redisservice:: missing service entry </code></pre> <p>From what I understand, I do have the service pointing at the right pod and right ports, but am I wrong?</p> <p><strong>UPDATE 1</strong></p> <p>I noticed another possible issue, after fixing the things mentioned by Alex, I noticed in other services, where it is using websockets, the service can't find an endpoint. Does this mean the service needs a http endpoint to poll?</p>
<p>Extra thing to check for.</p> <p>Endpoints are only created if your deployment is considered healthy. If you have defined your <code>readinessProbe</code> incorrectly (mea culpa) or the deployment does not react to it correctly, an endpoint will not be created.</p>
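<p>A quick way to check is to compare the service with what it actually selects, e.g.:</p> <pre><code>kubectl get endpoints redisservice
kubectl describe svc redisservice
</code></pre> <p>If the ENDPOINTS column is empty, either no pods match the service's selector or none of the matching pods are passing their readiness probe.</p>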
<p>Is it possible to have a <em>restricted</em> Kubernetes dashboard? The idea is to have a pod running <code>kubectl proxy</code> in the cluster (protected with basic HTTP authentication) to get a quick overview of the status:</p> <ul> <li>Log output of the pods</li> <li>Running services and pods</li> <li>Current CPU/memory usage</li> </ul> <p>However, I do <em>not</em> want users to be able to do "privileged" actions, like creating new pods, deleting pods or accessing secrets.</p> <p>Is there some option to start the dashboard with a specified user or with restricted permissions?</p>
<p>Based on the answer from lwolf, I used <a href="https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml" rel="nofollow noreferrer">the kubernetes-dashboard.yaml</a> and changed it to run on the slaves, in the default namespace.</p> <p>The important change is the <code>kind: ClusterRole, name: view</code> part, which assigns the <em>view</em> role to the dashboard user.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: ro-dashboard --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: ro-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: ro-dashboard apiGroup: '' namespace: default --- kind: Deployment apiVersion: extensions/v1beta1 metadata: labels: k8s-app: kubernetes-dashboard name: ro-dashboard spec: replicas: 1 revisionHistoryLimit: 0 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3 ports: - containerPort: 9090 protocol: TCP livenessProbe: httpGet: path: / port: 9090 initialDelaySeconds: 30 timeoutSeconds: 30 serviceAccountName: ro-dashboard --- kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: ro-dashboard spec: type: LoadBalancer ports: - port: 80 targetPort: 9090 selector: k8s-app: kubernetes-dashboard </code></pre>
<p>I followed the tutorial here <a href="https://cloud.google.com/python/django/container-engine#initialize_your_cloud_sql_instance" rel="nofollow noreferrer">https://cloud.google.com/python/django/container-engine#initialize_your_cloud_sql_instance</a></p> <p>I have successfully deployed my service but the tutorial stops before getting to updating the deployment.</p> <p>What I've tried to do is this. But it doesn't seem to actually update the pods or deploy the code. </p> <pre><code>docker build -t gcr.io/&lt;my-app&gt;/polls . gcloud docker -- push gcr.io/&lt;my-app&gt;/polls kubectl apply -f polls.yaml </code></pre>
<p>See the note and example in the docs about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">updating a Deployment</a>:</p> <blockquote> <p><strong>Note:</strong> A Deployment’s rollout is triggered if and only if the Deployment’s pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated.</p> </blockquote> <p>Even though you have pushed a new image version the template itself is unchanged; the template still refers to the image as <code>gcr.io/&lt;my-app&gt;/polls</code> or <code>gcr.io/&lt;my-app&gt;/polls:latest</code>. And although the meaning of that is changed the <em>string itself</em> is unchanged.</p> <p>To trigger an update push a new tag, say <code>gcr.io/&lt;my-app&gt;/polls:v2</code>, edit the yaml file and execute <code>kubectl apply -f polls.yaml</code>.</p> <p>You can also use <code>kubectl set image</code> to trigger an update without changing the Deployment yaml file (<a href="https://asciinema.org/a/132998" rel="nofollow noreferrer">demo</a>); the same rule applies, the string identifying the image must change from what is deployed at the moment.</p> <p>If you don't want create image tags you can also use the <code>sha256sum</code> of your new image (the value is displayed when you push the image); works in the yaml file too:</p> <pre><code>kubectl set image deploy/mydeployment mycontainer=gcr.io/&lt;my-app&gt;/polls@sha256:2aac5e7514fbc77125bd315abe9e7b0257db05fe498af01a58e239ebaccf82a8 </code></pre> <p>The discussion about this inconvenience is in <a href="https://github.com/kubernetes/kubernetes/issues/33664" rel="nofollow noreferrer">issue #33664</a> if you are interested in other ideas.</p>
<p>I have setup a web application in kubernetes with a nginx-ingress controller. I am able to access my application over the Nginx ingress controller public IP.</p> <p>For requests which are taking more than 1 min, we are getting gateway connection timeout error (504). I've checked the Nginx ingress controller configuration by connecting to the pod and it has connection_timeout value is 60s. (root cause of the issue)</p> <p>I have tried changing the parameters to higher values and its work fine for long requests, though Nginx ingress controller configuration got reloaded to default after some time. </p> <p>How can we change/persist the Nginx ingress controller configuration parameters?</p> <p>Appreciate any help. Thanks in advance.</p>
<p>The nginx ingress controller is customizable via a configmap.</p> <p>You can achieve this by passing the argument <code>--configmap</code> to the ingress controller. Source: <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx#command-line-arguments" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/tree/master/controllers/nginx#command-line-arguments</a></p> <p>In the <code>kube-system</code> namespace, create a configmap, give it name like <code>nginx-load-balancer-conf</code> and then edit your ingress controller's replication controller or daemonset and add the <code>--configmap=nginx-load-balancer-conf</code> argument.</p> <p>Here's an example of what that configmap could look like:</p> <pre><code>apiVersion: v1 data: proxy-connect-timeout: "10" proxy-read-timeout: "120" proxy-send-timeout: "120" kind: ConfigMap metadata: name: nginx-load-balancer-conf </code></pre> <p>And here's how you create it, if you were to save the above to a file called <code>nginx-load-balancer-conf.yaml</code></p> <pre><code>kubectl create -f nginx-load-balancer-conf.yaml </code></pre> <p><strong>EDIT</strong></p> <p>The documentation has moved, the valid links to these documents are now here:</p> <p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#configuration-options" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#configuration-options</a></p>
<p>I am new to Helm. I have installed Minikube &amp; Helm on my windows system. I am able create pods using Helm and see deployment,pods &amp; replicaset in dashboard. </p> <p>I want to do rolling update using Helm. Guide me how to do rolling update in K8s using Helm.</p> <p>Creating Tomcat pod using Helm</p> <blockquote> <p>helm create hello-world</p> </blockquote> <p>Changed image name and deployment name in deployment.yaml</p> <pre><code>kind: Deployment metadata: name: mytomcat spec: containers: - name: {{ .Chart.Name }} image: tomcat </code></pre> <p>Install</p> <blockquote> <p>helm install hello-world</p> </blockquote> <pre><code>NAME: whopping-dolphin LAST DEPLOYED: Wed Aug 30 21:38:42 2017 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/Service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE whopping-dolphin-hello-world 10.0.0.178 &lt;none&gt; 80/TCP 0s ==&gt; v1beta1/Deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE mytomcat 1 1 1 0 0s NOTES: 1. Get the application URL by running these commands: export POD_NAME=$(kubectl get pods --namespace default -l "app=hello-world,release=whopping-dolphin" -o jsonpath="{.items[0].metadata.name}") echo "Visit http://127.0.0.1:8080 to use your application" kubectl port-forward $POD_NAME 8080:80 </code></pre> <p><strong>I see mytomcat deployment and pod mytomcat-2768693561-hd2hd in dashboard.</strong> </p> <p>Now I would like to give command which will delete my current deployment &amp; pod in k8s and it should create new deployment and pod.</p> <p>It will be helpful if I get sample commands and yaml.</p>
<p>The command below works fine for rolling updates.</p> <ol> <li>The first time it will install the release</li> <li>Subsequent runs will upgrade it</li> </ol> <blockquote> <p>helm upgrade --install tom-release --set appName=mytomcatcon hello-world</p> </blockquote> <p>tom-release is my release name, and runtime values are passed to the helm chart using the --set option.</p>
<p>I have a deployment created with a YAML file, the image for the containers is <code>image:v1</code>.</p> <p>Now I update the file to <code>image:v2</code>, and do <code>kubectl apply -f newDeploymentFile.yml</code>. Does Kubernetes use rolling update behind the scenes to update my deployment or some other way?</p>
<p>What happens exactly is controlled by the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer"><code>Deployment</code></a> itself:</p> <ul> <li><a href="https://kubernetes.io/docs/api-reference/v1.7/#deploymentstrategy-v1beta1-apps" rel="nofollow noreferrer"><code>.spec.strategy</code></a>: <code>RollingUpdate</code> (default) or <code>Recreate</code></li> <li><a href="https://kubernetes.io/docs/api-reference/v1.7/#rollingupdatedeployment-v1beta1-apps" rel="nofollow noreferrer"><code>.spec.strategy.rollingUpdate</code></a>: see the docs I've linked for the explanation of <code>maxSurge</code> and <code>maxUnavailable</code></li> </ul> <p>(I assumed that by <em>deployment</em> you actually mean a <code>Deployment</code> type object and not deployments in general.)</p>
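<p>For example, to keep the default rolling behaviour but never drop below the desired replica count, the Deployment spec would contain something like:</p> <pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
</code></pre>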
<p>We are analyzing the integration of the <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> service in our on premise environment. We have SaaS based services which can be exposed publicly. </p> <p>We have doubts in setting up the external endpoints for the services. Is there any way to create the external endpoints for the services?</p> <p>We have tried to setup the <code>ExternalIP</code> parameter in the services with the master node IP address. Not sure this is the correct way. Once we setup the external IP with the master node IP address we are able to access the services.</p> <p>We have also tried with ingress controllers and also there we can access our services with the IP address of the node where the ingress controllers are running.</p> <p>For Example :</p> <pre><code>Public IP : XXX.XX.XX.XX </code></pre> <p>Ideally, we would map the public IP with the load balancer virtual IP, but we cannot find such a setting in Kubernetes.</p> <p>Is there any way to address this issue?</p>
<p>My suggestion is to use an Ingress Controller that acts as a proxy for all your services in kubernetes.</p> <p>Of course your ingress controller has to be somehow exposed to the outside world. My suggestion is to use the <code>hostNetwork</code> setting for the ingress controller pod (this way, the pod will be listening on your host's physical interface, like any other "traditional" service). </p> <p>A few resources:</p> <ul> <li><a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">Here</a> details on how a pod can be reached from outside your k8s cluster).</li> <li><a href="https://medium.com/@samwalker505/using-kubernetes-ingress-controller-from-scratch-35faeee8eca" rel="nofollow noreferrer">Here</a> a nice tutorial on how to setup an ingress controller on k8s.</li> </ul> <p>If you have more than one minion in your cluster, you'll end up having problems with load balancing them. <a href="https://serverfault.com/questions/869413/whats-the-common-practice-of-managing-the-external-ip-of-a-docker-swarm/869506#869506">This question</a> can be helpful about that.</p>
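<p>The relevant part of the ingress controller's pod spec is just the <code>hostNetwork</code> flag, roughly like this (the container image and the rest of the spec depend on which ingress controller you choose):</p> <pre><code>spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        image: my-ingress-controller-image   # placeholder
        ports:
        - containerPort: 80
        - containerPort: 443
</code></pre>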
<p>I have a running node in a kubernetes cluster. Is there a way I can change its name?</p> <p>I have tried to </p> <ol> <li>delete the node using kubectl delete</li> <li>change the name in the node's manifest</li> <li>add the node back.</li> </ol> <p>But the node won't start.</p> <p>Anyone know how it should be done?</p> <p>Thanks</p>
<p>Usually it's the kubelet that is responsible for registering the node under a particular name, so you should make the change in your node's kubelet configuration and then it should pop up as a new node.</p>
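<p>In practice (the exact flag location varies with how the kubelet was installed, so treat the paths here as illustrative) the flow is: delete the old node object, point the kubelet at the new name, and let it re-register:</p> <pre><code># on the master
kubectl delete node old-name

# on the node: add --hostname-override=new-name to the kubelet arguments
# (systemd drop-in, /etc/default/kubelet, ... depending on your setup),
# then restart the kubelet
systemctl restart kubelet
</code></pre>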
<p>I am looking on how can I change the parameter of containers depending on the number of replica ?</p> <p>For example, I would like to have an environment variable ( which is the name of my hubot) as follow :</p> <ol> <li>First Replica : Name1</li> <li>Second replica : Name2...</li> </ol> <p>Do you have an idea of how can I achieve that ? Thanks !</p>
<p>The feature is called "<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset" rel="nofollow noreferrer">StatefulSets</a>" in Kubernetes. Have a look at the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">Stable Network ID</a> section in particular.</p> <p>Stateful sets are available in Kubernetes 1.5. If you need an older version, you could use some kind of central registry like <a href="https://redis.io/commands/incr" rel="nofollow noreferrer">redis</a> instead and implement some logic where new pods request the next free number when they start.</p>
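<p>With a StatefulSet the pods get stable ordinal names (e.g. <code>hubot-0</code>, <code>hubot-1</code>), so a simple way to get a per-replica name into each container is the downward API; the env var name below is just an example:</p> <pre><code>env:
- name: HUBOT_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre>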
<p>I have a kubernetes cluster running with one master and 2 nodes. I want to run e2e tests on this cluster. How should I run it? I tried doing <code>go run hack/e2e.go -v --test</code> but that command wants to create a cluster first and then run the test, while I want to run the tests on my already present cluster. Any idea how should I go ahead with it or what parameters should I pass to e2e tests?</p> <p>TIA.</p>
<p>If what you want to do is run the conformance tests and verify your cluster, you might also consider looking into the tool that Heptio created called <a href="https://github.com/heptio/sonobuoy" rel="nofollow noreferrer">sonobuoy</a>, which was created specifically to run the non-destructive conformance tests for Kubernetes 1.7 (or later) in a consistent fashion. Lachlan Everson posted <a href="https://youtu.be/1e6SAZfkqUk" rel="nofollow noreferrer">a 6 minute youtube video showing how to use it</a> that I thought was pretty easy to follow, and will get you up and running with it very quickly.</p> <p>It's configuration driven, so you can turn on/off tests that interest you easily, and includes some plugin driven "get more data about this cluster" sort of setup if you find you want or need to dig more in specific areas.</p>
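<p>To give an idea of the workflow (the commands may differ slightly between sonobuoy releases, so check its README), it is roughly:</p> <pre><code>sonobuoy run        # kick off the conformance tests in the cluster
sonobuoy status     # poll until the run has completed
sonobuoy retrieve   # download the results tarball for inspection
</code></pre>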
<p>I am running a Kubernethes cluster with many services, I am getting the IPv4 using the command:</p> <pre><code>kubectl get svc </code></pre> <p>But I need also IPv6, how to get it?</p>
<p><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/networking.md#ipv6" rel="nofollow noreferrer">IPv6 is currently not supported by Kubernetes.</a></p>
<p>I am trying to get some custom application metrics captured in golang using the prometheus client library to show up in Prometheus.</p> <p>I have the following working:</p> <ul> <li><p>I have a go application which is exposing metrics on localhost:8080/metrics as described in this article:</p> <p><a href="https://godoc.org/github.com/prometheus/client_golang/prometheus" rel="noreferrer">https://godoc.org/github.com/prometheus/client_golang/prometheus</a></p></li> <li><p>I have a kubernates minikube running which has Prometheus, Grafana and AlertManager running using the operator from this article:</p> <p><a href="https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus" rel="noreferrer">https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus</a></p></li> <li><p>I created a docker image for my go app, when I run it and go to localhost:8080/metrics I can see the prometheus metrics showing up in a browser.</p></li> <li><p>I use the following pod.yaml to deploy my docker image to a pod in k8s</p></li> </ul> <blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-app-pod labels: zone: prod version: v1 annotations: prometheus.io/scrape: 'true' prometheus.io/port: '8080' spec: containers: - name: my-container image: name/my-app:latest imagePullPolicy: IfNotPresent ports: - containerPort: 8080 </code></pre> </blockquote> <ul> <li>If I connect to my pod using:</li> </ul> <blockquote> <p>kubectl exec -it my-app-pod -- /bin/bash</p> </blockquote> <p>then do wget on "localhost:8080/metrics", I can see my metrics</p> <p>So far so good, here is where I am hitting a wall. I could have multiple pods running this same image. I want to expose all the images to prometheus as targets. How do I configure my pods so that they show up in prometheus so I can report on my custom metrics?</p> <p>Thanks for any help offered!</p>
<p>The kubernetes_sd_config directive can be used to discover all pods with a given label. Your prometheus.yml config file should have something like this:</p> <pre><code>- job_name: 'some-app'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: python-app
    action: keep
</code></pre> <p>The source label [__meta_kubernetes_pod_label_app] is basically using the Kubernetes api to look at pods that have a label of 'app' and whose value is captured by the regex expression, given on the line below (in this case, matching 'python-app').</p> <p>Once you've done this Prometheus will automatically discover the pods you want and start scraping the metrics from your app.</p> <p>Hope that helps. You can follow the blog post <a href="http://burhan.io/flask-application-monitoring-with-prometheus/" rel="nofollow noreferrer">here</a> for more detail.</p> <p><strong>Note:</strong> it is worth mentioning that at the time of writing, kubernetes_sd_config is still in beta. Thus breaking changes to configuration may occur in future releases.</p>
<p>I'm developing a service using Spring and deploying it on OpenShift. Currently I'm using Spring Actuator health endpoint to serve as a liveness and readiness probe for Kubernetes. </p> <p>However, I will add a call to another service in a Actuator health endpoint, and it looks to me that in that case I need to implement new liveness probe for my service. If I don't do that then a failure in a second service will result with a failure in liveness probe failing and Kubernetes will restart my service without any real need. </p> <p>Is it OK, for a liveness probe, to implement some simple REST controller which will always return HTTP status 200? If it works, the service can always be considered as alive? Or is there any better way to do it? </p>
<h1>Liveness Probe</h1> <p>Include only those checks which you think, if fails, will get cured with a pod restart. There is nothing wrong in having a new endpoint that always return an HTTP 200, which will serve as a liveness probe endpoint; provided you have an independent monitoring and alert in place for other services on which your first service depends on.</p> <p><strong>Where does a simple http 200 liveness helps?</strong> </p> <p>Well, let's consider these examples.</p> <ol> <li><p>If your application is a one-thread-per-http-request application (servlet based application - like application runs on tomcat - which is spring boot 1.X's default choice), in the case of heavy-load it may become unresponsive. A pod-restart will help here.</p></li> <li><p>If you don't have memory configured while you starts your application; in case of heavy-load, application may outrun the pod's allocated memory and app may become unresponsive. A pod-restart will help here too.</p></li> </ol> <h1>Readiness Probe</h1> <p>There are 2 aspects to it.</p> <p><strong>1)</strong> Let's consider a scenario. Lets say, authentication is enabled on your second service. Your first service (where your health check is) has to be configured properly to authenticate with the second service.</p> <p>Let's just say, in a subsequent deployment of your 1st service, you screwed up authheader variable name which you were supposed to read from the configmap or secret. And you are doing a rolling update.</p> <p>If you have the second service's http200 also included in the health check (of the 1st service) then that will prevent the screwed-up version of the deployment from going live; your old version will keep running because your newer version will never make it across the health-check. We may not even need to go that complicated to authentication and all, let's just say url of the second service is hard coded in the first service, and you screwed up that url in a subsequent release of your first service. This additional check in your health-check will prevent the buggy version from going live</p> <p><strong>2)</strong> On the other hand, Let's assume that your first service has numerous other functionalities and this second service being down for a few hours will not affect any significant functionality that first service offers. Then, by all means you can opt out of the second service's liveness from first service's health check.</p> <p>Either way, you need to set up proper alerting and monitoring for both the services. This will help to decide when humans should intervene.</p> <p>What I would do is (ignore other irrelevant details), </p> <pre><code>readinessProbe: httpGet: path: &lt;/Actuator-healthcheck-endpoint&gt; port: 8080 initialDelaySeconds: 120 timeoutSeconds: 5 livenessProbe: httpGet: path: &lt;/my-custom-endpoint-which-always-returns200&gt; port: 8080 initialDelaySeconds: 130 timeoutSeconds: 10 failureThreshold: 10 </code></pre>
<p>I want to submit a function (via http-trigger) from a NodeJS process to kubeless but I do not want to use the javascript equivalent of </p> <pre><code>curl --data '{"term":"Albemarle"}' localhost:8080/api/v1/proxy/namespaces/default/services/bikesearch/ --header "Content-Type:application/json" </code></pre> <p>because that needs me to know the actual IP address of the service running the function. I want to be able to access the kubeless api that gives me the level of indirection by just knowing the name of the function </p> <pre><code>kubeless function call bikesearch --data '{"term":"Albemarle"}' </code></pre> <p>Is there anyway to access the above ( function call ) api via node?</p>
<p>kubeless also creates services for functions, so you should be able to just send an HTTP request to <code>http://bikesearch:8080</code> if your DNS setup is working and your application is in the same namespace. If you are in another namespace you need to use a more qualified name, e.g. <code>bikesearch.&lt;function-namespace&gt;.svc.cluster.local</code></p> <p>If you want to call the function from outside the k8s cluster, you might want to create an Ingress with <code>kubeless ingress create...</code></p>
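<p>So the curl from the question, pointed at the function's service instead of the apiserver proxy, would look roughly like this (assuming the function lives in the <code>default</code> namespace):</p> <pre><code>curl --data '{"term":"Albemarle"}' \
  http://bikesearch.default.svc.cluster.local:8080 \
  --header "Content-Type:application/json"
</code></pre> <p>From Node you can hit the same URL with whatever HTTP client you already use.</p>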
<p>Inside a service yaml file, you can use jsonpath to select a named nodeport value like so:</p> <pre><code> - name: MY_NODE_PORT valueFrom: fieldRef: fieldPath: spec.ports[?(@.name=="http")].nodePort </code></pre> <p>However, in my deployment yaml file, I would like to have an environment variable like <code>MY_NODE_PORT</code> above that is exposed to the container to the pod. I happen to have combined my service and deployment into a single file for me to <code>kubectl create -f</code>. Is it possible to select the named service nodeport in the deployment section, rather than in the service section?</p> <p>My purpose is to register a Kubernetes service and deployment to a legacy service discovery mechanism, in this case Netflix OSS Eureka.</p>
<p>As <a href="https://stackoverflow.com/users/371954/janos-lenart">Janos Lenart</a> answered and <a href="https://stackoverflow.com/users/4763788/marc-sluiter">Marc Sluiter</a> commented, a service and a deployment are different resources and might as easily be specified in separate files. They have no direct knowledge of one another, and, even if you name the service's NodePort port, unless you explicitly specify the service's NodePort port value (e.g. something from 30000 to 32767) you won't be able to specify what the pod environment variable should be to match it. You can hard code the NodePort port value like that, as Janos Lenart suggested, but it's brittle and not recommended.</p> <p>While Kubernetes provides a number of handy environment variables to pods, it is not possible for a pod environment variable to reference a service nodeport port value that was dynamically assigned by kubernetes.</p> <p>However, the Pod <strong>does</strong> have access to talk to the Kubernetes API server, and the API server will be able to reply back with information about the service, such as the nodeport. So a Pod can ask the API server for a service's NodePort port value. I created a <a href="https://github.com/StevenACoffman/kubernetes-eureka-sidecar" rel="nofollow noreferrer">kubernetes service and deployment with a simple sidecar pod</a> as a proof of concept.</p> <p><a href="https://github.com/StevenACoffman/kubernetes-eureka-sidecar" rel="nofollow noreferrer">My example here</a> handles registering a Kubernetes NodePort service with an (external or internal) Netflix OSS Eureka service registry. This might be useful to others for bridging Kubernetes to other legacy service discovery mechanisms. </p>
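<p>For completeness, the equivalent lookup from the command line (the service name is a placeholder); the same query can be issued against the API server from inside the pod:</p> <pre><code>kubectl get svc myservice \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
</code></pre>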
<p>I am trying to add if great than condition in Helm chart. it is throwing error.</p> <p>I have defined value in values.yaml and using that value in deployment.yaml for condition.</p> <p><strong>values.yaml</strong></p> <pre><code>replicaCount: 2 </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>rollingUpdate: maxSurge: 1 {{ if gt .Values.replicaCount 2}} maxUnavailable: 0 {{ else }} maxUnavailable: 1 {{ end }} </code></pre> <p>I am using helm dry run option to check result. getting error</p> <pre><code>Error: render error in "hello-world/templates/deployment.yaml": template: hello-world/templates/deployment.yaml:16:12: executing "hello-world/templates/deployment.yaml" at &lt;gt .Values.replicaCo...&gt;: error calling gt: incompatible types for comparison </code></pre> <p>how to fix this ?</p>
<p>Try using float number in comparison instead:</p> <p><strong>deployment.yaml</strong></p> <pre><code>rollingUpdate: maxSurge: 1 {{ if gt .Values.replicaCount 2.0}} maxUnavailable: 0 {{ else }} maxUnavailable: 1 {{ end }} </code></pre> <p>Helm (along with underlying Golang templates and Yaml) can be weird sometimes. </p> <hr> <p>Also, note that sometimes you need to typecast values in your yaml configs (e.g. port numbers).</p> <p>Example:</p> <pre><code>... ports: - containerPort: !!int {{ .Values.containers.app.port }} ... </code></pre> <p>More about Yaml type casting: <a href="https://github.com/yaml/YAML2/wiki/Type-casting" rel="noreferrer">https://github.com/yaml/YAML2/wiki/Type-casting</a></p>
<p>I have multiple instance of containers deployed in kubernetes.</p> <p>The application in the container requires the public IP from the pod / node. </p> <p>From the deployment manifest, I tried to use the following, </p> <ul> <li>status.podIP</li> <li>spec.nodeName</li> </ul> <p>But both of them return the private IP address of my node rather then the public IP. </p> <p>Can someone help explain where the status.podIP value comes from? Is it coming from the node manifest? Or from the VM / docker configuration? Any suggestion on how I can get the public IP address into my container?</p> <p>Thanks</p>
<p>Public IP may not be available through Kubernetes. But you can use external services to query the IP.</p> <pre><code>$ curl ipinfo.io/ip
</code></pre> <p>is one possible way, but they have rate limiting. You might find other options also. Another option is to set up an Nginx server on a known machine with the config below:</p> <pre><code>location = /ip {
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'private no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
    default_type text/plain;
    return 200 "$remote_addr";
}
</code></pre> <p>And then call this endpoint using the public IP of the nginx host:</p> <pre><code>$ curl &lt;publicipofnginx&gt;/ip
</code></pre> <p>This will give the public IP of the machine making the curl call. </p>
<p>I am evaluating Kubernetes (with Docker containers, not Kubernetes) and Docker Swarm and could use your input. </p> <p>If I'm looking at 3 (8.76 hours) or 4 (52 min) 9's reliability in a server farm that is &lt; 100 servers, would Kubernetes be overkill due to its complexity? Would Docker Swarm suffice? </p>
<p>Docker swarm will be able to meet your requirements. I recommend you start with Docker swarm as it is robust and very straightforward to use for anyone who has used Docker before. </p> <p>For a Docker user, there are many new concepts that you need to learn to be able to use Kubernetes. Moreover, setting up Kubernetes on premise without using a preconfigured cloud platform is not straightforward. </p> <p>On the other hand, Kubernetes is more flexible and extensible. Kubernetes is older than Docker swarm and the Kubernetes community is really big.</p>
<p>Could be a basic one..</p> <p>I have created a pod in Openshift Enterprise 3.2 with the configuration set as <strong><code>restartPolicy: Always</code></strong> . So ideally when the pod is destroyed Openshift will make sure to re-run/re-create the pod because of the restart policy.</p> <p><strong>Now I no longer need the pod. When I am trying to destroy the pod it is getting created again.</strong> </p> <p>My question is <strong>What is the ideal way to destroy the pod with</strong> <strong><code>restartPolicy: Always</code></strong></p>
<p>The deployment config (and related replication controller) is the reason your pod is getting recreated when you delete it. The replication controller's job is literally to "ensure that a specified number of replicas of a pod are running at all times."</p> <p>To destroy the pod (and leave the deployment config and other resources intact), just run: <code>oc scale dc &lt;dc-name&gt; --replicas=0</code></p> <ul> <li>Replication controller: <a href="https://docs.openshift.com/container-platform/3.6/architecture/core_concepts/deployments.html#replication-controllers" rel="noreferrer">https://docs.openshift.com/container-platform/3.6/architecture/core_concepts/deployments.html#replication-controllers</a></li> <li>Manual scaling: <a href="https://docs.openshift.com/container-platform/latest/dev_guide/deployments/basic_deployment_operations.html#scaling" rel="noreferrer">https://docs.openshift.com/container-platform/latest/dev_guide/deployments/basic_deployment_operations.html#scaling</a></li> </ul>
<p>My company has decided to transition to a micro/service based architecture.</p> <p>We have been doing a bunch of research for the last couple of months on exactly what the architecture of this thing is going to look like. </p> <p>So far, we've settled on:</p> <ul> <li><p>Dotnet core for service development (although being language agnostic is somewhat of an end goal)</p></li> <li><p>Kafka for message brokering</p></li> <li><p>Docker</p></li> <li><p>Kubernetes</p></li> <li><p>Ansible</p></li> </ul> <p>We have a pretty basic proof of concept working, which seems to have ticked all the right boxes with the management team, and is an absolute joy to work with.</p> <p>My next task is to investigate options for how the development workflow is actually going to work. They are already used to working in a CI/CD manner, with some of their newer products using Jenkins/Octopus Deploy.</p> <p>My question is: Do any of you have any firm recommendations for setting up a CI/CD pipeline when deploying to a Kubernetes cluster?</p> <p>A list of must haves is:</p> <ul> <li><p>Multiple environments i.e. Integration, Test, UAT, Staging, Production.</p></li> <li><p>A means through which different business units can uniquely handle deployments to different environments (development can only push to integration, tester into test, etc). This one is probably their biggest ask - they are used to working with Octopus, and they love the way it handles this.</p></li> <li><p>The ability to roll back / deploy at the click of a button (or with as few steps as possible).</p></li> </ul> <p>We would be deploying to our own servers initially.</p> <p>I've spent the past couple of days looking in to options, of which there are many.</p> <p>So far, Jenkins Pipeline seems like it could be a great start. Spinnakar also seems like a solid choice. I did read a bit into Fabric8, and while it offers much of what I'm asking, it seems a bit like overkill.</p>
<p>If you want to use Jenkins, Pipelines are indeed the way to go. Our setup does pretty much what you want, so let me explain how we set it up.</p> <p>We use a Jenkins agent that has <code>docker</code> and <code>kubectl</code> installed. This agent first builds the docker container and pushes it to our docker registry. It will then call <code>kubectl</code> in various stages to deploy to our testing, acceptance and production clusters.</p> <p><strong>Different business units:</strong> in a Pipeline you can use <a href="https://jenkins.io/doc/pipeline/steps/pipeline-input-step/" rel="nofollow noreferrer">an input step</a> to ask whether the Pipeline should proceed or not. You can specify who may press the button, so this is how you could solve the deployment to different clusters. <em>(Ideally, when you get to CD, people will realize that pressing the button several times per day is silly and they'll just automate the entire deployment.)</em></p> <p><strong>Rollback:</strong> we rely on Kubernetes's rollback system for this.</p> <p><strong>Credentials:</strong> we provision the different Kubernetes credentials using Ansible directly to this Jenkins agent. </p> <p>To reduce code duplication, we introduced a shared <a href="https://jenkins.io/doc/book/pipeline/shared-libraries/" rel="nofollow noreferrer">Jenkins Pipeline library</a>, so each (micro)service talks to all Kubernetes clusters in a standardized way.</p> <p>Note that we use plain Jenkins, Docker and Kubernetes. There is likely tons of software to further ease this process, so let's leave that open for other answers.</p>
<p>I ran a hadoop cluster in kubernetes, with 4 journalnodes and 2 namenodes. Sometimes, my datanodes cannot register to namenodes.</p> <pre><code>17/06/08 07:45:32 INFO datanode.DataNode: Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 beginning handshake with NN 17/06/08 07:45:32 ERROR datanode.DataNode: Initialization failed for Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=b1babba6-9a6f-40dc-933b-08885cbd358e, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-bceaa23f-ba3d-4749-a542-74cda1e82e07;nsid=177502984;c=0) at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:863) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4529) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1279) at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:95) at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28539) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) </code></pre> <p>It says:</p> <p><code>hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45)</code> </p> <p>However, I can ping <code>hadoop-namenode-0.myhadoopcluster</code>, <code>10.100.81.42</code>, <code>10.100.9.45</code> in both the datanode and the namenode. </p> <p><code>/etc/hosts</code> in datanode:</p> <pre><code>127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet fe00::0 ip6-mcastprefix fe00::1 ip6-allnodes fe00::2 ip6-allrouters 10.100.9.45 hadoop-datanode-0.myhadoopcluster.default.svc.cluster.local hadoop-datanode-0 </code></pre> <p><code>/etc/hosts</code> in namenode:</p> <pre><code># Kubernetes-managed hosts file. 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet fe00::0 ip6-mcastprefix fe00::1 ip6-allnodes fe00::2 ip6-allrouters 10.100.81.42 hadoop-namenode-0.myhadoopcluster.default.svc.cluster.local hadoop-namenode-0 </code></pre> <p>And I have already set <code>dfs.namenode.datanode.registration.ip-hostname-check</code> to <code>false</code> in <code>hdfs-site.xml</code></p> <p>I guess the problem may be related to dns. And in other similar problems, hadoop are not deployed in kubernetes or docker container, so I posted this one. Please do not tag it as duplicated...</p>
<p>In my situation, I set the following three configuration properties on both the namenode and the datanode:</p> <ul> <li><code>dfs.namenode.datanode.registration.ip-hostname-check: false</code></li> <li><code>dfs.client.use.datanode.hostname: false</code> (default)</li> <li><code>dfs.datanode.use.datanode.hostname: false</code> (default)</li> </ul>
<p>One way I could think of is to set an environment which value is the namespace of the Pod when defining the Pod.<br> Getting the namespace dynamically without requiring changes for Pod will be better because it lessens the burden of constructing a Pod. </p> <p>So is there a way to get current namespace in a Pod?</p>
<p>Try the file:</p> <pre><code>/var/run/secrets/kubernetes.io/serviceaccount/namespace </code></pre>
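<p>For example, from inside the container:</p> <pre><code>NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
echo "$NAMESPACE"
</code></pre> <p>The file is mounted automatically with the default service account token, so no extra changes to the Pod spec are needed.</p>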
<p>I have setup docker on my machine and also minikube which have docker inside it, so probably i have two docker instances running on different VM</p> <p>I build an image and tag it then push it to local registry and it pushed successfully and i can pull it from registry too and also when i run curl to get tags list i got result, and here are what i did</p> <pre><code>1- docker build -t 127.0.0.1:5000/eliza/console:0.0.1 . 2- docker run -d -p 5000:5000 --name registry registry:2 3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1 4- docker push 127.0.0.1:5000/eliza/console:0.0.1 5- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list </code></pre> <p>all above steps are working fine with no problems at all.</p> <p>My problem is when i run minikube and try to access this image in local registry inside it</p> <p>So when i run next commands</p> <pre><code>1- sudo minikube start --insecure-registry 127.0.0.1:5000 2- eval $(minikube docker-env) 3- minikube ssh 4- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list </code></pre> <p>in last step (point 4) it gave me next message</p> <blockquote> <p>curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused</p> </blockquote> <p>So i can access image registry from my machine but not from minikube which make a problems of course with me when i deploy this image using Kubernetes on minikube and make deploy failed due to can't connect to <a href="http://127.0.0.1:5000" rel="noreferrer">http://127.0.0.1:5000</a></p> <p>Can you help me configuring minikube to see my local registry so my problem will be solved then i can deploy image to minikube using kubernetes successfully?</p> <p><strong>UPDATE</strong></p> <p>I am using this yaml file (i named it <em>ConsolePre.yaml</em>) to deploy my image using kubernetes</p> <pre><code>apiVersion: v1 kind: Service metadata: name: tripbru-console labels: app: tripbru-console spec: ports: - port: 9080 targetPort: 9080 nodePort: 30181 selector: app: tripbru-console tier: frontend type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: tripbru-console labels: app: tripbru-console spec: strategy: type: Recreate template: metadata: labels: app: tripbru-console tier: frontend spec: containers: - image: docker.local:5000/eliza/console:0.0.1 name: tripbru-console ports: - containerPort: 9080 name: tripbru-console </code></pre> <p>and when i run next command to apply changes</p> <blockquote> <p>sudo kubectl apply -f /PATH_TO_YAML_FILE/ConsolePre.yaml</p> </blockquote> <p>the result is </p> <pre><code>NAME READY STATUS RESTARTS AGE po/tripbru-console-1655054400-x3g87 0/1 ErrImagePull 0 1m </code></pre> <p>and when i run describe command</p> <blockquote> <p>sudo kubectl describe pod tripbru-console-1655054400-x3g87</p> </blockquote> <p>i found next message in description result</p> <blockquote> <p>Error response from daemon: {"message":"Get <a href="https://docker.local:5000/v1/_ping" rel="noreferrer">https://docker.local:5000/v1/_ping</a>: dial tcp: lookup docker.local on 10.0.2.3:53: read udp 10.0.2.15:57792-\u003e10.0.2.3:53: i/o timeout"}</p> </blockquote> <p>and i configured <strong>docker.local xxx.xxx.xx.4</strong> in minikube /etc/hosts so i don't know from where 10.0.2.3:53 and 10.0.2.15:57792 come from.</p> <p>So how can i solve this issue too. </p> <p>Thanks :)</p>
<p>The issue is using <code>127.0.0.1</code> everywhere; that is what's wrong.</p> <p>So if your machine IP is 192.168.0.101, then the below works:</p> <pre><code>1- docker build -t 127.0.0.1:5000/eliza/console:0.0.1 .
2- docker run -d -p 5000:5000 --name registry registry:2
3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1
4- docker push 127.0.0.1:5000/eliza/console:0.0.1
5- curl -X GET http://127.0.0.1:5000/v2/eliza/console/tags/list
</code></pre> <p>Because docker run maps the registry to 127.0.0.1:5000 and 192.168.0.101:5000, on your machine this <code>127.0.0.1</code> address works. But when you use</p> <pre><code>3- minikube ssh
</code></pre> <p>you get inside the minikube machine, which doesn't have a registry running on 127.0.0.1:5000; hence the error. The registry is not reachable inside this machine at that loopback address.</p> <p>The way I usually solve this issue is by using a host name both locally and inside the other VMs.</p> <p>So on your machine create an entry in <code>/etc/hosts</code> (IP first, then the host name):</p> <pre><code>127.0.0.1 docker.local
</code></pre> <p>And change your commands to</p> <pre><code>1- docker build -t docker.local:5000/eliza/console:0.0.1 .
2- docker run -d -p 5000:5000 --name registry registry:2
3- docker tag a3703d02a199 docker.local:5000/eliza/console:0.0.1
4- docker push docker.local:5000/eliza/console:0.0.1
5- curl -X GET http://docker.local:5000/v2/eliza/console/tags/list
</code></pre> <p>And then when you use <code>minikube ssh</code>, make an entry for <code>docker.local</code> in <code>/etc/hosts</code></p> <pre><code>192.168.0.101 docker.local
</code></pre> <p>Then <code>curl -X GET http://docker.local:5000/v2/eliza/console/tags/list</code></p> <p><strong>Edit-1</strong></p> <p>For the TLS issue you need to stop the docker service inside minikube</p> <pre><code>systemctl stop docker
</code></pre> <p>Then edit <code>/etc/systemd/system/docker.service.d/10-machine.conf</code> and change</p> <blockquote> <p>ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24</p> </blockquote> <p>to</p> <blockquote> <p>ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.0.0.0/24 --insecure-registry docker.local:5000 --insecure-registry 192.168.1.4:5000</p> </blockquote> <p>Then reload the daemon and start the docker service</p> <pre><code>systemctl daemon-reload
systemctl start docker
</code></pre> <p>After that try to pull</p> <pre><code>docker pull docker.local:5000/eliza/console:0.0.1
</code></pre> <p>And the command should work</p>
<p>Recently, when practicing Kubernetes, I found there is no doc or example specifically explaining how to use Cinder correctly in Kubernetes.</p> <p>So how do I set up Cinder to be used in Kubernetes?</p>
<p>I did some experimenting and worked out how to set up Cinder with Kubernetes. It seems worth documenting and sharing.</p> <p><strong>Preparation</strong></p> <ul> <li>Kubernetes cluster</li> <li>OpenStack environment, with the Cinder service available</li> </ul> <p><strong>Background</strong></p> <p>From my investigation, the <code>kube-controller-manager</code> component is responsible for loading volume plugins in Kubernetes. So we can make Cinder available by adjusting the <code>kube-controller-manager</code> configuration.</p> <p><strong>Steps</strong></p> <ol> <li><strong>Prepare a <code>cloud.conf</code> file containing your OpenStack creds</strong></li> </ol> <p>Prepare your OpenStack creds and save them as a file, for example <code>/etc/kubernetes/cloud.conf</code>, on the control-plane node where <code>kube-controller-manager</code> runs. The following is an example <code>cloud.conf</code>:</p> <pre><code>[Global]
auth-url=$your_openstack_auth_url
username=$your_openstack_user
password=$your_user_pw
region=$your_openstack_region
tenant-name=$your_project_name
domain-name=$your_domain_name
ca-file=$your_openstack_ca
</code></pre> <p>Most values can be found in your <code>stackrc</code> file. The <code>ca-file</code> item is optional, depending on whether your OpenStack auth URL is <code>http</code> or <code>https</code>.</p> <ol start="2"> <li><strong>Adjust the <code>kube-controller-manager</code> start configuration</strong></li> </ol> <p>This link lists the full set of options for <code>kube-controller-manager</code>: (<a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/kube-controller-manager/</a>)</p> <p>We need to add two extra parameters to your current configuration:</p> <pre><code>--cloud-provider=openstack
--cloud-config=/etc/kubernetes/cloud.conf
</code></pre> <p>There are mainly two ways to start <code>kube-controller-manager</code>: 1) using systemd, or 2) using a static pod.</p> <p>One tip: if you are using a static pod for <code>kube-controller-manager</code>, make sure you have mounted all required files, such as <code>cloud.conf</code> and the OpenStack CA file, into the container.</p> <p><strong>Verification</strong></p> <p>We will create a StorageClass and use it to provision a persistent volume dynamically.</p> <ol> <li>Create a StorageClass named <code>standard</code>:</li> </ol> <p><strong>demo-sc.yml:</strong></p> <pre><code>apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
</code></pre> <p>Use <code>kubectl create -f demo-sc.yml</code> to create it and <code>kubectl get sc</code> to verify it was created correctly:</p> <pre><code>NAME                 TYPE
standard (default)   kubernetes.io/cinder
</code></pre> <ol start="2"> <li>Create a PersistentVolumeClaim that uses the StorageClass to provision a Persistent Volume in Cinder:</li> </ol> <p><strong>demo-pvc.yml:</strong></p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>Create the PVC with <code>kubectl create -f demo-pvc.yml</code>.</p> <p>Now check with <code>kubectl get pvc</code>:</p> <pre><code>NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
cinder-claim   Bound     pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379   1Gi        RWO           standard       23h
</code></pre> <p>And in the OpenStack environment, check with <code>cinder list | grep pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379</code>:</p> <pre><code>root@ds0114:~# cinder list | grep pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379
| ddd8066d-2e16-4cb2-a89e-cd9d5b99ef1b | available | kubernetes-dynamic-pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379 | 1 | CEPH_SSD | false | |
</code></pre> <p>So now the StorageClass is working well with Cinder in Kubernetes.</p>
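<p>To confirm the dynamically provisioned volume is actually usable, you can mount the claim in a pod. The following is a minimal sketch under my own naming assumptions (pod name, container and mount path are placeholders, not part of the original verification steps):</p> <pre><code>kind: Pod
apiVersion: v1
metadata:
  name: cinder-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: cinder-volume
          mountPath: /data
  volumes:
    - name: cinder-volume
      persistentVolumeClaim:
        claimName: cinder-claim
</code></pre> <p>Once the pod is Running, anything written to <code>/data</code> lands on the Cinder volume bound to <code>cinder-claim</code>.</p>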
<p>We need to deploy redundant haproxy load balancers (VRRP) and 2~3 apiservers to build a high-availability cluster on bare metal.</p> <p>However, we only have 4 bare-metal servers (the number of nodes will increase).</p> <p>So we are thinking of installing the load balancers in VMs with flat IPs assigned, running on the master nodes, as you can see in the picture.</p> <p><a href="https://i.stack.imgur.com/B2aNz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B2aNz.jpg" alt="enter image description here"></a></p> <p>Are there any issues with this structure? If so, please let me know a better one.</p>
<p>It seems like overkill to build an HA control plane and have only one worker node, but I guess you plan to add more in the future. Other than that, it looks solid. I assume you also have some network equipment in the environment; that opens up the possibility of ditching the dedicated haproxy/VRRP pair for the apiserver in favor of solving the load balancing at, e.g., your router/gateway level. If you do keep haproxy, a minimal sketch is shown below.</p>
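<p>For reference, a hedged sketch of what the haproxy side of such a setup could look like. The bind port, server names and master IPs are placeholders I made up, not values from the diagram:</p> <pre><code>frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiservers

backend kube-apiservers
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 10.0.0.11:6443 check
    server master-2 10.0.0.12:6443 check
    server master-3 10.0.0.13:6443 check
</code></pre> <p>keepalived (VRRP) would then float a virtual IP between the two haproxy VMs, and kubelets/kubectl would reach the apiservers through that VIP.</p>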
<p>I'm following an example from <a href="https://www.manning.com/books/kubernetes-in-action" rel="nofollow noreferrer">Kubernetes in Action</a> to run a simple docker image in kubernetes:</p> <pre><code>$ bx login --apikey @apiKey.json -a https://api.eu-de.bluemix.net $ bx cs cluster-config my_kubernetes $ export KUBECONFIG=..my_kubernetes.yml </code></pre> <p>Next, run the container:</p> <pre><code>$ kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1 $ kubectl expose rc kubia --type=LoadBalancer --name kubia-http $ kubectl get service $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.10.10.1 &lt;none&gt; 443/TCP 20h kubia-http 10.10.10.12 &lt;pending&gt; 8080:32373/TCP 0m </code></pre> <p>Fifteen minutes later ...</p> <pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.10.10.1 &lt;none&gt; 443/TCP 20h kubia-http 10.10.10.12 &lt;pending&gt; 8080:32373/TCP 15m </code></pre> <p>I don't have anything else running on the Kubernetes cluster.</p>
<p>To close out the thread here, LoadBalancer cannot be used in a lite (aka free) cluster tier. The differences between lite and standard clusters can be found here - <a href="https://console.bluemix.net/docs/containers/cs_planning.html#cs_planning" rel="noreferrer">https://console.bluemix.net/docs/containers/cs_planning.html#cs_planning</a>. </p>
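<p>As a workaround on a lite cluster, you can expose the same replication controller as a NodePort service and reach it via the public IP of the worker node. A hedged sketch (service name is my own; CLI sub-commands may differ between bx versions):</p> <pre><code># expose the replication controller via a NodePort service instead
kubectl expose rc kubia --type=NodePort --name=kubia-nodeport --port=8080
kubectl get svc kubia-nodeport      # note the assigned node port (3xxxx)
bx cs workers my_kubernetes         # note the worker's public IP
# then browse to http://&lt;worker-public-ip&gt;:&lt;node-port&gt;
</code></pre>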
<p>I have to create a cluster with support of insecure docker registry. I want to use <code>Kops</code> for this. Is there any way to create cluster with insecure registry using <code>Kops</code>?</p>
<p>You can set the insecure registry at cluster-config edit time, after the <code>kops create cluster ...</code> command (navigate to the cluster spec part of the file):</p> <pre><code>$ kops edit cluster $NAME
...
  docker:
    insecureRegistry: registry.example.com
    logDriver: json-file
...
</code></pre> <p><a href="https://github.com/kubernetes/kops/issues/42#issuecomment-302073487" rel="nofollow noreferrer">Original link</a></p>
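<p>After editing the spec, the change still has to be pushed out to the cluster and rolled onto the nodes so the docker daemons pick up the new flag. A hedged sketch of the usual kops workflow (flags can differ slightly between kops versions):</p> <pre><code>kops update cluster $NAME --yes          # apply the new cluster spec
kops rolling-update cluster $NAME --yes  # replace nodes so docker restarts with the new setting
</code></pre>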
<p>I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml</p> <p>I'm tagging all builds with the current git hash, which means that for clean deploys to GKE, all the services have a new docker image tag which means that apply will restart them, but locally to minikube the tag is often not changing which means that new code is not run. Before I was working around this by calling kubectl delete and then kubectl create for deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far. </p> <p>Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag?</p> <p>I'm curious how people have been approaching this problem.</p> <p>Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running. </p> <p>But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube all together? What are best practices here?</p>
<p>I'm not entirely sure I understood your question, but that may very well be my reading comprehension :) In any case, here are a few thoughts that popped up while reading this (again, not sure exactly what you're trying to accomplish).</p> <p>Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to, say, 0 and then back up. If you're using a configmap and only want to update that, the command would be <code>kubectl scale --replicas=0 -f foo.yaml</code> and then back to whatever it was before.</p> <p>Option 2: if you want to apply the deployment without killing any pods, for example, you would use <code>--cascade=false</code> (google it).</p> <p>Option 3: look up the <code>rollout</code> commands for managing deployments; not sure if they work on services though.</p> <p>Finally, and that's just me talking: share some more details, like which version of k8s you are using, and maybe provide an actual use-case example to better describe the issue. A sketch of options 1 and 3 follows below.</p>
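<p>To make options 1 and 3 concrete, a hedged sketch (the deployment name and replica count are placeholders for one of your services):</p> <pre><code># option 1: bounce the deployment by scaling to zero and back
kubectl scale deployment my-service --replicas=0
kubectl scale deployment my-service --replicas=2

# option 3: inspect and manage rollouts
kubectl rollout status deployment/my-service
kubectl rollout history deployment/my-service
</code></pre>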
<p>I ran <code>eval $(minikube docker-env)</code> then built a docker container. When I run <code>docker images</code> on my host I can see the image. When I run <code>minikube ssh</code> then <code>docker images</code> I can see it.</p> <p>When I try to run it, the pod fails to launch. <code>kubectl describe pod</code> gives:</p> <pre><code>14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal Pulling pulling image "personalisation-customer:latest" 14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Warning Failed Failed to pull image "personalisation-customer:latest": rpc error: code = 2 desc = Error: image library/personalisation-customer:latest not found 14m 2s 66 kubelet, minikube Warning FailedSync Error syncing pod 14m 2s 59 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal BackOff Back-off pulling image "personalisation-customer:latest" </code></pre> <p>My <code>imagePullPolicy</code> is <code>Always</code>.</p> <p>What could be causing this? Other pods are working locally. </p>
<p>You aren't exactly pulling from your local registry; you are using previously downloaded or locally built images. Since you are specifying <code>imagePullPolicy: Always</code>, Kubernetes will always try to pull the image from a registry.</p> <p>Your image name <code>personalisation-customer:latest</code> doesn't contain a registry host, so docker interprets it as <code>index.docker.io/personalisation-customer:latest</code>, which is an image that doesn't exist in the public docker registry.</p> <p>So you have 2 options: set <code>imagePullPolicy: IfNotPresent</code>, or upload the image to some registry.</p>
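<p>For the first option, the relevant fragment of the pod/deployment spec would look roughly like this. The container name is a placeholder; the image name is the one from the question:</p> <pre><code>containers:
- name: customer-refresh-cache-cron
  image: personalisation-customer:latest
  imagePullPolicy: IfNotPresent   # use the image already present in minikube's docker daemon
</code></pre>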
<p>I have a server with 2 NICs: Internet&lt;--->kube-worker&lt;--->internal network&lt;---->kube-master.</p> <p>After I applied the nginx ingress configuration (the only change I made was to uncomment the "hostNetwork: true" field) from <a href="https://github.com/kubernetes/ingress/blob/master/examples/deployment/nginx/nginx-ingress-controller.yaml" rel="nofollow noreferrer" title="here">here</a>, I faced a problem: the nginx ingress pod is assigned an IP address on the internal network.</p> <p>Because nginx must serve client requests from the Internet, it is very important to assign the host's external address to the pod.</p> <p>So here is the question: how can I assign the external address to the nginx ingress pod? Maybe there are some annotations in k8s for this, or do I have to configure the network in some special way?</p>
<p>With <code>hostNetwork: true</code> your pod does not get assigned its own pod IP; it sees the full networking stack of the host machine. So if the host has the internal network on eth0 and the public network on eth1, that is exactly what your nginx pod will see from the inside. What matters is which node the ingress controller lands on; see the sketch below.</p>
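<p>A hedged sketch of pinning the controller to the node that owns the public NIC, using a node label (the node name and label key/value are placeholders I invented):</p> <pre><code># 1) label the node that owns the public-facing NIC:
#    kubectl label node kube-worker network=edge
# 2) pin the controller to that node in its deployment/daemonset spec:
spec:
  template:
    spec:
      hostNetwork: true
      nodeSelector:
        network: edge
</code></pre> <p>With that in place, the hostNetwork pod binds on the node that actually has the external address.</p>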
<p>How can we conditionally append "index.html" to a request for a url that ends in a slash?</p> <p>As background: We deploy multiple static, single page apps for multiple domain names to a single S3 bucket with web hosting enabled. This bucket is available as: <a href="https://our-bucket-name.s3.amazonaws.com" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com</a></p> <p>The bucket is organized with object key prefix so that: <a href="https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/index.html" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/index.html</a> <a href="https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/app.css" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/app.css</a> <a href="https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/app.js" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com/environment/app_name/build_id/app.js</a></p> <p>If there is a request for: <a href="https://www.example.com/" rel="nofollow noreferrer">https://www.example.com/</a> This should be routed to: <a href="https://our-bucket-name.s3.amazonaws.com/environment/app-name/build_id/index.html" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com/environment/app-name/build_id/index.html</a></p> <p>If there is a request for: <a href="https://www.example.com/app.css" rel="nofollow noreferrer">https://www.example.com/app.css</a> This should be routed to: <a href="https://our-bucket-name.s3.amazonaws.com/environment/app-name/build_id/app.css" rel="nofollow noreferrer">https://our-bucket-name.s3.amazonaws.com/environment/app-name/build_id/app.css</a></p> <p>Not sure if it is relevant, but traefik here is for a kubernetes ingress that we want backed by AWS S3.</p>
<p>You could use the AddPrefix or ReplacePath modifier, though you would also need to create a matcher on your frontend.</p> <p>STEPS</p> <ol> <li>create a frontend to match the blank path rule</li> <li>add a modifier to that frontend; ReplacePath should do the trick</li> <li>route to the backend</li> </ol>
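<p>A rough, unverified sketch of those steps in Traefik 1.x file-provider syntax, using the hostname and bucket paths from the question; frontend/backend names, rule priority behaviour and the exact syntax all depend on your Traefik version, so treat this only as an illustration of the idea:</p> <pre><code>[backends]
  [backends.s3]
    [backends.s3.servers.bucket]
    url = "https://our-bucket-name.s3.amazonaws.com"

[frontends]
  # exact "/" -> rewrite to the app's index.html
  [frontends.root]
  backend = "s3"
    [frontends.root.routes.r]
    rule = "Host:www.example.com;Path:/;ReplacePath:/environment/app_name/build_id/index.html"

  # everything else -> just prepend the bucket prefix
  [frontends.assets]
  backend = "s3"
    [frontends.assets.routes.r]
    rule = "Host:www.example.com;AddPrefix:/environment/app_name/build_id"
</code></pre>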
<p>I have .NET Core 1.1 Test project and I am building this project inside kubernetes pods using VSTS linux agent. In one of tests, I am trying to connect to SQL Server (installed inside one VM in Google Compute Engine). Whenever this test executes (with dotnet test command), I am getting below exception</p> <p>**System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)****</p> <p>I have tried below options in Google Compute Engine's VM 1. Pinged it with public IP address - working 2. SQL services are running - verified with SQL Server configuration manager 3. TCP/IP and Named Pipes protocols and port are enabled in configuration manager 4. Remote connections to the server is allowed (thru SQL Server Management Studio)</p> <p>But still getting the same exception</p>
<p>This problem is resolved now. The issue was that my Kubernetes build cluster was in one network while my Google Compute Engine VM was in the default network. Once I changed the VM's network, it started working.</p>
<p>I am running GKE k8s deployments/jobs that take a long time to execute - from days to weeks (Machine Learning). Default GKE Events expire after 1h, which is not enough to debug problems that can occur during training, like OOMKilling, etc. </p> <p><code>kube-apiserver</code> isn't exposed in GKE, so I'd like to find a way to access and change a property like <code>event-ttl</code>.</p> <p>How can I change <code>event-ttl</code> for an already launched cluster, or how to specify <code>event-ttl</code> at a cluster creation? For example if I would like all my events in a cluster to be available for 24 hours. Is there a <code>kubectl</code> command for that (the <code>kubernetes</code> svc is accessible)?</p> <p>Thanks in advance!</p>
<p>It's currently not possible. As you said, the <code>kube-apiserver</code> flags are not user-configurable on GKE. There are plans to expose more such knobs to users, but I don't know when this will be available. In the meantime, one workaround is sketched below.</p>
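<p>As a stopgap, you can periodically persist events yourself before the default TTL expires, e.g. from a cron job or a CI task. A hedged sketch (the file naming is arbitrary):</p> <pre><code># run well within the 1h TTL window, e.g. every 30 minutes
kubectl get events --all-namespaces -o json &gt; events-$(date +%Y%m%d-%H%M).json
</code></pre> <p>That gives you a searchable history of OOMKills and similar events for jobs that run for days or weeks.</p>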
<p>I have a pod <code>test-1495806908-xn5jn</code> with 2 containers. I'd like to restart one of them called <code>container-test</code>. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?</p> <p>The pod was created using a <code>deployment.yaml</code> with:</p> <pre><code>kubectl create -f deployment.yaml </code></pre>
<blockquote> <p>Is it possible to restart a single container</p> </blockquote> <p>Not through <code>kubectl</code>, although depending on the setup of your cluster you can "cheat" and <code>docker kill the-sha-goes-here</code>, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)</p> <blockquote> <p>how do I restart the pod</p> </blockquote> <p>That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just <code>kubectl delete pod test-1495806908-xn5jn</code> and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect <code>kubectl get pods</code> to return <code>test-1495806908-xn5jn</code> ever again)</p>
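<p>To make the "cheat" concrete, a hedged sketch of locating the container's docker ID and killing it on the node (kubelet then recreates just that container, subject to the restart policy); the jsonpath filter syntax may vary slightly between kubectl versions:</p> <pre><code># find the docker ID of the container named "container-test" inside the pod
kubectl get pod test-1495806908-xn5jn \
  -o jsonpath='{.status.containerStatuses[?(@.name=="container-test")].containerID}'

# on the node hosting the pod, strip the docker:// prefix and:
docker kill &lt;container-id&gt;

# or simply restart the whole pod:
kubectl delete pod test-1495806908-xn5jn
</code></pre>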
<p>I'm trying to deploy my Spring Boot/JHipster app on Google's Container Engine (GKE). I've figured out most things, but I'm having a problem with my database instance (a PostgreSQL instance running on Google Cloud SQL, with the Google SQL Proxy).</p> <p>I've followed the instructions <a href="https://cloud.google.com/sql/docs/postgres/connect-container-engine" rel="nofollow noreferrer">here</a> and <a href="https://cloud.google.com/sql/docs/postgres/connect-external-app#java" rel="nofollow noreferrer">here</a> to set up my app. </p> <ol> <li>I've set up my PostreSQL instance in cloud, and created my app's database and user.</li> <li>I've created an SQL service with Cloud SQL Client role -- I grabbed the JSON key, and used it to create my cloudsql-instance-credentials. I've also created my cloudsql-db-credentials.</li> <li>I've added the additional bits to my deployment yaml file. I've basically cloned the yaml file from <a href="https://github.com/GoogleCloudPlatform/container-engine-samples/tree/master/cloudsql" rel="nofollow noreferrer">this GitHub sample</a> and replaced all the references to wordpress with my own Docker image (hosted in the Google Container Registry). I've also updated the proxy block, like so:</li> </ol> <p>deployment.yaml snippet:</p> <pre><code> - image: gcr.io/cloudsql-docker/gce-proxy:1.09 name: cloudsql-proxy command: ["/cloud_sql_proxy", "--dir=/cloudsql", "-instances=[my-project]:us-central1:[my-sql-instance-id]=tcp:5432", "-credential_file=/secrets/cloudsql/credentials.json"] </code></pre> <p>Lastly, I've updated my Spring Boot configuration yaml file, like so:</p> <pre><code>datasource: type: com.zaxxer.hikari.HikariDataSource url: jdbc:postgresql://google/[my-database]?socketFactory=com.google.cloud.sql.postgres.SocketFactory&amp;socketFactoryArg=[my-project]:us-central1:[my-sql-instance-id] username: ${DB_USER} password: ${DB_PASSWORD} </code></pre> <p>When I <code>kubectl create</code> my deployment, the image deploys, but it fails to launch the app. Here's the salient bit from my log:</p> <pre><code>Caused by: java.lang.RuntimeException: Unable to retrieve information about Cloud SQL instance [[my-project]:us-central1:[my-sql-instance-id]] at com.google.cloud.sql.core.SslSocketFactory.obtainInstanceMetadata(SslSocketFactory.java:411) at com.google.cloud.sql.core.SslSocketFactory.fetchInstanceSslInfo(SslSocketFactory.java:284) at com.google.cloud.sql.core.SslSocketFactory.getInstanceSslInfo(SslSocketFactory.java:264) at com.google.cloud.sql.core.SslSocketFactory.createAndConfigureSocket(SslSocketFactory.java:183) at com.google.cloud.sql.core.SslSocketFactory.create(SslSocketFactory.java:152) at com.google.cloud.sql.postgres.SocketFactory.createSocket(SocketFactory.java:50) at org.postgresql.core.PGStream.&lt;init&gt;(PGStream.java:60) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:52) at org.postgresql.jdbc.PgConnection.&lt;init&gt;(PgConnection.java:216) at org.postgresql.Driver.makeConnection(Driver.java:404) at org.postgresql.Driver.connect(Driver.java:272) ... 
37 common frames omitted Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden { "code" : 403, "errors" : [ { "domain" : "global", "message" : "Insufficient Permission", "reason" : "insufficientPermissions" } ], "message" : "Insufficient Permission" } at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113) at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321) at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1065) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469) at com.google.cloud.sql.core.SslSocketFactory.obtainInstanceMetadata(SslSocketFactory.java:372) ... 48 common frames omitted </code></pre> <p>This "Insufficient Permission" error pops up a lot on StackOverflow, but I haven't found a question that's quite the same scenario as mine. It seems like a generic OAuth-level error. I feel like I've double-checked my set-up against the instructions a few times, and I'm not sure where I can look for any additional clues.</p> <p>Any ideas?</p> <p><strong>Update:</strong> </p> <p>Thanks to Vadim's pointer, I've managed to get past the "Insufficient Permission" problem. Sadly, my app still fails on boot-up, when it tries to establish a connection to the database (specifically, when Liquibase tries to start connecting to the DB to run migration scripts).</p> <p>My new error is at the socket level in the driver:</p> <pre><code>liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: The connection attempt failed. at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:390) at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.initDb(AsyncSpringLiquibase.java:82) at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.afterPropertiesSet(AsyncSpringLiquibase.java:72) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1687) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1624) ... 24 common frames omitted Caused by: org.postgresql.util.PSQLException: The connection attempt failed. 
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:52) at org.postgresql.jdbc.PgConnection.&lt;init&gt;(PgConnection.java:216) at org.postgresql.Driver.makeConnection(Driver.java:404) at org.postgresql.Driver.connect(Driver.java:272) at java.sql.DriverManager.getConnection(DriverManager.java:664) at java.sql.DriverManager.getConnection(DriverManager.java:247) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:86) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:71) at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:385) ... 28 common frames omitted Caused by: java.net.SocketException: already connected at java.net.Socket.connect(Socket.java:569) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:673) at org.postgresql.core.PGStream.&lt;init&gt;(PGStream.java:61) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144) ... 37 common frames omitted </code></pre>
<p>Vadim solved the problem that I asked about, but the second problem -- the socket already connected problem -- was eventually figured out by one of my co-workers.</p> <p>The root of the socket problem relates to the datasource configuration. It turns out that I'm mixing and matching two different mechanisms for accessing the Cloud SQL environment. </p> <ol> <li>Using the Cloud SQL Proxy; and</li> <li>Using a Google Socket Factory</li> </ol> <p>Because I've successfully configured the Cloud SQL Proxy, I don't need that weird JDBC URL in my Spring Boot environment. I can connect using 127.0.0.1:5432, like so:</p> <pre><code>datasource: type: com.zaxxer.hikari.HikariDataSource url: jdbc:postgresql://127.0.0.1:5432/[my-database-name] username: ${DB_USER} password: ${DB_PASSWORD} </code></pre> <p>Now that I've replaced my JDBC URL, my app connects successfully.</p>
<p><strong>Problem</strong></p> <p>I have deployed csanchez's jenkins-kubernetes plugin (version 0.12) to a local Minikube / Kubernetes environment. When configuring Pod-Templates and Container-Templates from within the Jenkins UI, the PODS are spawned automatically and process simple jobs. Howerver when the Pod-Templates and Container-Templates are defined within a pipeLine script, the Jenkins Master is rejecting the connection saying that the POD is already connected to the master.</p> <p><strong>Environment</strong></p> <p>minikube version: v0.20.0</p> <p>Kubernetes version: v1.7.1</p> <p>Jenkins Master: jenkins:2.74-alpine (Docker Container)</p> <p>Jenkins Slave: jnlp-slave:3.10-1-alpine (Docker Container)</p> <p>Operating System: 4.8.0-59-generic #64-Ubuntu SMP Thu Jun 29 19:38:34 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux</p> <p><strong>PipeLine Script</strong></p> <pre><code>podTemplate(name: 'my-pod-template', label: 'maven', cloud: 'kubernetes', namespace: 'mmidevops', containers: [ containerTemplate( name: 'my-container-template', image: 'grantericedwards/jenkinsci-jnlp-slave:3.10-1-alpine', ttyEnabled: true, alwaysPullImage: true, workingDir: '/home/jenkins' ) ], volumes: [ hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock') ] ) { node('maven') { stage('Build a Maven project') { container('maven') { sh 'sleep 10' } } } } </code></pre> <p><strong>Spawned PODS</strong> kubectl get pods --all-namespaces -w</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system kube-addon-manager-minikube 1/1 Running 9 7d kube-system kube-dns-1301475494-zp3tg 3/3 Running 27 7d kube-system kubernetes-dashboard-jlsk7 1/1 Running 9 7d mmidevops jenkins-83791151-msdcw 1/1 Running 30 6d mmidevops my-pod-template-nfdgh-9k7qh 0/2 Pending 0 0s mmidevops my-pod-template-nfdgh-9k7qh 0/2 Pending 0 0s mmidevops my-pod-template-nfdgh-9k7qh 0/2 ContainerCreating 0 0s mmidevops my-pod-template-nfdgh-9k7qh 2/2 Running 0 6s mmidevops my-pod-template-nfdgh-9k7qh 1/2 Error 0 9s </code></pre> <p><strong>Logs From Slave POD</strong></p> <pre><code>INFO: Setting up slave: my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:35:20 AM hudson.remoting.jnlp.Main$CuiListener &lt;init&gt; INFO: Jenkins agent is running in headless mode. Aug 28, 2017 11:35:20 AM hudson.remoting.Engine startEngine WARNING: No Working Directory. 
Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars Aug 28, 2017 11:35:20 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Locating server among [http://172.17.0.2:8080/] Aug 28, 2017 11:35:20 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve INFO: Remoting server accepts the following protocols: [JNLP4-connect, JNLP-connect, Ping, JNLP2-connect] Aug 28, 2017 11:35:20 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Agent discovery successful Agent address: 172.17.0.2 Agent port: 50000 Identity: 74:24:d0:2e:7b:b7:9d:13:80:47:e5:fa:45:b3:85:15 Aug 28, 2017 11:35:20 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Handshaking Aug 28, 2017 11:35:20 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connecting to 172.17.0.2:50000 Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Trying protocol: JNLP4-connect Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Remote identity confirmed: 74:24:d0:2e:7b:b7:9d:13:80:47:e5:fa:45:b3:85:15 Aug 28, 2017 11:35:21 AM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv INFO: [JNLP4-connect connection to 172.17.0.2/172.17.0.2:50000] Local headers refused by remote: my-pod-template-nfdgh-9k7qh is already connected to this master. Rejecting this connection. Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Protocol JNLP4-connect encountered an unexpected exception java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: my-pod-template-nfdgh-9k7qh is already connected to this master. Rejecting this connection. at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223) at hudson.remoting.Engine.innerRun(Engine.java:583) at hudson.remoting.Engine.run(Engine.java:447) Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: my-pod-template-nfdgh-9k7qh is already connected to this master. Rejecting this connection. at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.newAbortCause(ConnectionHeadersFilterLayer.java:377) at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.onRecvClosed(ConnectionHeadersFilterLayer.java:432) at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832) at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172) at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832) at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154) at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48) at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at hudson.remoting.Engine$1$1.run(Engine.java:98) at java.lang.Thread.run(Thread.java:748) Suppressed: java.nio.channels.ClosedChannelException ... 
7 more Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connecting to 172.17.0.2:50000 Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Server reports protocol JNLP4-plaintext not supported, skipping Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Protocol JNLP3-connect is not enabled, skipping Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Trying protocol: JNLP2-connect Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Protocol JNLP2-connect encountered an unexpected exception java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Server didn't accept the handshake: at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at hudson.remoting.Engine.innerRun(Engine.java:583) at hudson.remoting.Engine.run(Engine.java:447) Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Server didn't accept the handshake: at org.jenkinsci.remoting.engine.JnlpProtocol2Handler.sendHandshake(JnlpProtocol2Handler.java:134) at org.jenkinsci.remoting.engine.LegacyJnlpProtocolHandler$2.call(LegacyJnlpProtocolHandler.java:162) at org.jenkinsci.remoting.engine.LegacyJnlpProtocolHandler$2.call(LegacyJnlpProtocolHandler.java:158) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at hudson.remoting.Engine$1$1.run(Engine.java:98) at java.lang.Thread.run(Thread.java:748) Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connecting to 172.17.0.2:50000 Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Trying protocol: JNLP-connect Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Protocol JNLP-connect encountered an unexpected exception java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Server didn't accept the handshake: at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at hudson.remoting.Engine.innerRun(Engine.java:583) at hudson.remoting.Engine.run(Engine.java:447) Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Server didn't accept the handshake: at org.jenkinsci.remoting.engine.JnlpProtocol1Handler.sendHandshake(JnlpProtocol1Handler.java:121) at org.jenkinsci.remoting.engine.LegacyJnlpProtocolHandler$2.call(LegacyJnlpProtocolHandler.java:162) at org.jenkinsci.remoting.engine.LegacyJnlpProtocolHandler$2.call(LegacyJnlpProtocolHandler.java:158) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at hudson.remoting.Engine$1$1.run(Engine.java:98) at java.lang.Thread.run(Thread.java:748) Aug 28, 2017 11:35:21 AM hudson.remoting.jnlp.Main$CuiListener error SEVERE: The server rejected the connection: None of the protocols were accepted java.lang.Exception: The server rejected the connection: None of the protocols were accepted at hudson.remoting.Engine.onConnectionRejected(Engine.java:644) at hudson.remoting.Engine.innerRun(Engine.java:608) at 
hudson.remoting.Engine.run(Engine.java:447) </code></pre> <p><strong>Logs From Jenkins Master</strong></p> <pre><code>Aug 28, 2017 11:35:14 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision INFO: Template: Kubernetes Pod Template Aug 28, 2017 11:35:14 AM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply INFO: Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: 0 Aug 28, 2017 11:35:14 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call INFO: Created Pod: my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:35:14 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call INFO: Waiting for Pod to be scheduled (0/100): my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:35:15 AM hudson.TcpSlaveAgentListener$ConnectionHandler run INFO: Accepted JNLP4-connect connection #38 from /172.17.0.5:54934 Aug 28, 2017 11:35:21 AM hudson.TcpSlaveAgentListener$ConnectionHandler run INFO: Accepted JNLP4-connect connection #39 from /172.17.0.5:54944 Aug 28, 2017 11:35:21 AM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv INFO: [JNLP4-connect connection from 172.17.0.5/172.17.0.5:54944] Refusing headers from remote: my-pod-template-nfdgh-9k7qh is already connected to this master. Rejecting this connection. Aug 28, 2017 11:35:21 AM hudson.TcpSlaveAgentListener$ConnectionHandler run INFO: Accepted JNLP2-connect connection #40 from /172.17.0.5:54960 Aug 28, 2017 11:35:21 AM hudson.TcpSlaveAgentListener$ConnectionHandler run INFO: Accepted JNLP-connect connection #41 from /172.17.0.5:54962 Aug 28, 2017 11:35:23 AM hudson.node_monitors.ResponseTimeMonitor$1 monitor WARNING: Making my-pod-template-qdgzl-29t7l offline because it’s not responding Aug 28, 2017 11:35:24 AM hudson.slaves.NodeProvisioner$2 run INFO: Kubernetes Pod Template provisioning successfully completed. 
We have now 7 computer(s) Aug 28, 2017 11:40:20 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate INFO: Terminating Kubernetes instance for slave my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:40:20 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate INFO: Terminated Kubernetes instance for slave my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:40:20 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate INFO: Disconnected computer my-pod-template-nfdgh-9k7qh Terminated Kubernetes instance for slave my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:40:20 AM jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed WARNING: Computer.threadPoolForRemoting [#41] for my-pod-template-nfdgh-9k7qh terminated java.nio.channels.ClosedChannelException at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208) at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222) at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832) at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200) at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213) at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800) at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173) at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311) at hudson.remoting.Channel.close(Channel.java:1304) at hudson.remoting.Channel.close(Channel.java:1272) at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708) at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96) at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Aug 28, 2017 11:40:20 AM org.jenkinsci.plugins.workflow.job.WorkflowRun finish INFO: TestPipeLine #94 completed: FAILURE </code></pre> <p><strong>JNLP Logs</strong></p> <pre><code>Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior Warning: SECRET is defined twice in command-line arguments and the environment variable Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main createEngine INFO: Setting up slave: my-pod-template-nfdgh-9k7qh Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener &lt;init&gt; INFO: Jenkins agent is running in headless mode. 
Aug 28, 2017 11:35:15 AM hudson.remoting.Engine startEngine WARNING: No Working Directory. Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Locating server among [http://172.17.0.2:8080/] Aug 28, 2017 11:35:15 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve INFO: Remoting server accepts the following protocols: [JNLP4-connect, JNLP-connect, Ping, JNLP2-connect] Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Agent discovery successful Agent address: 172.17.0.2 Agent port: 50000 Identity: 74:24:d0:2e:7b:b7:9d:13:80:47:e5:fa:45:b3:85:15 Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Handshaking Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connecting to 172.17.0.2:50000 Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Trying protocol: JNLP4-connect Aug 28, 2017 11:35:15 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Remote identity confirmed: 74:24:d0:2e:7b:b7:9d:13:80:47:e5:fa:45:b3:85:15 Aug 28, 2017 11:35:16 AM hudson.remoting.jnlp.Main$CuiListener status INFO: Connected </code></pre> <p><strong>Update</strong> So after a bit more Googling I found this ... "if you have defined a JNLP container in your Pod definition, you need to remove it or rename it to <strong>jnlp</strong>, otherwise a new container called <strong>jnlp</strong> will be created" ... that said ...</p> <pre><code>podTemplate(name: 'default-java-slave', label: 'maven-builder', cloud: 'kubernetes', namespace: 'mmidevops', containers: [ containerTemplate( name: 'jnlp', image: 'grantericedwards/jenkinsci-jnlp-slave:3.10-1-alpine', ttyEnabled: true, alwaysPullImage: true, workingDir: '/home/jenkins' ) ], volumes: [ hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock') ] ) { node('maven-builder') { stage('Build a Maven project') { container('jnlp') { sh 'sleep 10' } } } } </code></pre>
<p>I'm using the Kubernetes plugin 0.12 and had a ClosedChannelException problem on the Jenkins slave too (I think this is not a problem with the connection to the master, but rather that the communication isn't accepted, so the connection times out and is closed). I solved this problem by upgrading the Jenkins version from 2.60-alpine to 2.75-alpine. The Jenkins slave is working perfectly now. Hopefully this will help you solve the problem.</p> <p>P.S. The Kubernetes plugin has released a new version, 1.0; I upgraded to it as well and it is working perfectly too!</p>
<p>I have been browsing and reading to figure out why my external IP is not resolving in a PoC I am doing with Minikube.</p> <p>I am running Minikube on a Ubuntu 16.04 distro.</p> <p>When I do an HTTP GET at <a href="http://192.168.99.100:32431/" rel="nofollow noreferrer">http://192.168.99.100:32431/</a> I get the nginx page.</p> <p>Is there a way to use an external IP with port 80 instead of the Minikube IP and the node port?</p>
<blockquote> <p>I am running Minikube on a Ubuntu 16.04 distro.</p> </blockquote> <p>Minikube is not designed to have an "external ip", it's for simple development-time testing. One will observe it is not even using the <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#download-and-install-the-kubernetes-controller-binaries" rel="nofollow noreferrer">separate kubernetes binaries/containers</a> that one would expect in a for-real kubernetes deployment</p> <blockquote> <p>Is there a why to use external ip with port 80 and not with Minikube ip and the port ?</p> </blockquote> <p>No to the second half of the question, but if you wish to have the Service listen on port 80 of the minikube VM, then a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">NodePort Service</a> will very likely do that</p>
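<p>In practice, the way to reach a service in minikube is the VM IP plus the NodePort, and minikube has a helper for that. A hedged sketch (the service name is a placeholder):</p> <pre><code>minikube ip                         # the VM address, e.g. 192.168.99.100
minikube service my-service --url   # prints http://&lt;minikube-ip&gt;:&lt;nodePort&gt; for a NodePort service
</code></pre>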
<p>I would like to know, if HPA considers the mean of CPU utilization of multiple containers in a pod, in order to scale up/down the no. of pods. For instance, if I specify a HPA like below for a deployment(pod) that has 2 containers. In order for the HPA to scale up, does it require the CPU utilization to be reached to 80% in both the containers? In other words, If <strong>container A</strong> has CPU utilization of 80% but <strong>container B</strong> has CPU utilization of only 60%. Does that mean that the pods will not be scaled up by HPA. As far as I have observed, this is the case. But I am not sure about this, since there is no explicit statement regarding this in the kubernetes documentation. And unfortunately, I am not the best of developers to figure this out from the source code. Any help &amp; if possible with reference, would be greatly appreciated. Thank you so much.</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: blackbox-rc-hpa namespace: monitoring spec: scaleTargetRef: apiVersion: v1 kind: extensions/v1beta1 name: blackbox minReplicas: 1 maxReplicas: 4 targetCPUUtilizationPercentage: 80 </code></pre>
<p>The controller calculates the utilization value as a percentage of the requested CPU for the containers in each pod and then takes the mean. So in your scenario the mean will be 70%, which is below the 80% target, so no scale-up is triggered. See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
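<p>Made concrete with the numbers from the question, the scaling decision roughly follows the documented formula:</p> <pre><code>mean CPU utilization  = (80% + 60%) / 2 = 70%
desired replica count = ceil(currentReplicas * 70 / 80)   # &lt;= currentReplicas, so no scale-up
</code></pre>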
<p>I am super new to Kubernetes and would like your help regarding the following matter.</p> <p>I have used <strong>docker pull php</strong> to pull httpd onto my local machine.</p> <p>How do I modify this image on my local machine?</p>
<p>Ok, I will start with the basics: as stated in the previous comments and answers, you cannot modify this image that is coming from <a href="https://hub.docker.com/_/php/" rel="noreferrer">the official PHP image on Docker Hub.</a></p> <p>However, you have a couple of options when it comes to creating your own image:</p> <ol> <li>start from the PHP base image you just pulled and create your own image with a Dockerfile (where the <code>app</code> folder contains your application)</li> </ol> <pre><code>FROM php:7.1.9-apache
COPY app /var/www/html
EXPOSE 80
</code></pre> <p>Then, you just <code>docker build -t my-php-app .</code> and in order to run it locally you <code>docker run -p &lt;some-port&gt;:80 my-php-app</code>.</p> <p>This is the easiest way to create your new image and <a href="https://docs.docker.com/get-started/part2/" rel="noreferrer">here you can find some good documentation.</a></p> <ol start="2"> <li>you can run the container, make some changes (add your files, edit configuration and other stuff) and then commit those changes into a new image. You can <a href="https://docs.docker.com/engine/reference/commandline/commit/" rel="noreferrer">find some examples on <code>docker commit</code> here.</a></li> </ol> <blockquote> <p>However, this second approach doesn't allow you to source control your image creation process (the way you do with a Dockerfile).</p> </blockquote> <p>After you create your image, in order to deploy it on another node (other than the one you used to create it), you need to push the image to an image repository (Docker Hub, or some other private image registry - AWS, GCP and Azure all have private image registries). The default one using the Docker CLI is Docker Hub. <a href="https://docs.docker.com/docker-hub/repos/" rel="noreferrer">Here you can find a tutorial on tagging and pushing your image to Docker Hub</a></p> <p>Now that you have your image on Docker Hub (or another private image registry), you are ready to deploy it on your Kubernetes cluster.</p> <p>You can <a href="https://kubernetes.io/docs/user-guide/docker-cli-to-kubectl/" rel="noreferrer">run it in a very similar manner to the way you ran it using Docker</a>:</p> <p><code>kubectl run --image=&lt;docker-hub-username&gt;/&lt;your-image&gt; your-app-name --port=&lt;port inside container&gt;</code></p> <p>Then, in order to access it from outside the cluster you need to <a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_expose/" rel="noreferrer">expose the deployment</a>, which will create a service (and, depending on whether your cluster is in the cloud or on-prem, you can get a public IP from the cloud provider or use the node port):</p> <p><code>kubectl expose deployment your-app-name --port=80 --name=your-app-service</code></p> <p>The next step would be to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">create YAML files for your deployments</a> and services.</p> <p>Hope this helps!</p>
<p>After some intense Google and SO searching, I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.</p> <p>Can anyone shed light? When would I rather use either of those?</p> <p>EDIT: It's worth mentioning that I'm already working with deployments (rather than replication controllers directly) and that I'm using yaml configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.</p>
<p>In older k8s versions the <code>ReplicationController</code> was the only resource to manage a group of replicated pods. To update the pods of a <code>ReplicationController</code> you use <code>kubectl rolling-update</code>. </p> <p>Later, k8s introduced the <code>Deployment</code> which manages <code>ReplicaSet</code> resources. The <code>Deployment</code> could be updated via <code>kubectl set image</code>. </p> <p>Working with <code>Deployment</code> resources (as you already do) is the preferred way. I guess the <code>ReplicationController</code> and its <code>rolling-update</code> command are mainly still there for backward compatibility.</p> <hr> <p><strong>UPDATE:</strong> As mentioned in the comments:</p> <p>To update a <code>Deployment</code> I used <code>kubectl patch</code> as it could also change things like adding new env vars whereas <code>kubectl set image</code> is rather limited and can only change the image version. Also note, that <code>patch</code> can be applied to all k8s resources and is not restricted to be used with a <code>Deployment</code>.</p> <p>Later, I shifted my deployment processes to use <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> - a really neat and k8s native package management tool. Can highly recommend to have a look at it.</p>
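<p>For completeness, hedged examples of the commands discussed (deployment, container and image names are placeholders). Since the question mentions yaml files: the declarative route is simply to edit the image tag in the manifest and re-apply it, which triggers the same rolling update:</p> <pre><code># imperative: bump only the image of one container
kubectl set image deployment/my-app my-container=registry.example.com/my-app:v2

# imperative: arbitrary spec changes (env vars, etc.)
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","env":[{"name":"FOO","value":"bar"}]}]}}}}'

# declarative: edit the image tag in your yaml file, then
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
</code></pre>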
<p>Basic example of what I want my Jenkinsfile to do:</p> <pre><code>node { sh 'docker build -t foo/bar .' } </code></pre> <p>It seems like I need to install docker onto the Jenkins slave image that's executing my Jenkinsfile. Is there an easy way of doing this? (That Jenkins slave image is itself a docker container)</p> <p>Are my assumptions correct?</p> <ol> <li>When running with Jenkins master/slaves, the Jenkinsfile is executed by a Jenkins slave</li> <li>Jenkins plugins installed via the Manage Plugins section (e.g. the Docker Plugin, or Gcloud SDK plugin) are only installed on the Jenkins masters, therefore I would need to manually build my Jenkins slave docker image and install docker on the image?</li> </ol> <p>Since I also need access to the 'gcloud' command (I'm running Jenkins via Kubernetes Helm/Charts), I've been using the <code>gcr.io/cloud-solutions-images/jenkins-k8s-slave</code> image for my Jenkins slave.</p> <p>Currently it errors out saying "docker: not found"</p>
<p>My assumption is that you want to <code>docker build</code> inside the Jenkins slave (which is a Kubernetes pod, I assume created by the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">Kubernetes Jenkins Plugin</a>)</p> <p>To set the stage, when Kubernetes creates a pod that will act as a Jenkins slave, all commands that you execute inside the <code>node</code> block will be executed inside that Kubernetes pod, inside one of the containers there (by default there will only be one container, but more on this later).</p> <p>So you are actually trying to run a Docker command inside a container based on <code>gcr.io/cloud-solutions-images/jenkins-k8s-slave</code>, which is most likely based on <a href="https://github.com/jenkinsci/docker-jnlp-slave" rel="noreferrer">the official Jenkins JNLP Slave</a>, <strong>which does not contain Docker!</strong></p> <p>From this point forward, there are two approaches that you can take:</p> <ul> <li>use a slightly modified image based on the JNLP slave that also contains the Docker client and mount the Docker socket (<code>/var/run/docker.sock</code>) inside the container. (<a href="https://radu-matei.com/blog/kubernetes-jenkins-azure/#the-docker-image-for-the-slaves" rel="noreferrer">You can find details on this approach here</a>). <a href="https://github.com/radu-matei/jenkins-slave-docker" rel="noreferrer">Here is an image that contains the Docker client and <code>kubectl</code></a>.</li> </ul> <p>Here is a complete view of how to configure the Jenkins Plugin:</p> <p><a href="https://i.stack.imgur.com/rCeKT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rCeKT.png" alt="enter image description here" /></a></p> <blockquote> <p>Note that you use a different image (you can create your own and add any binary you want there) and that you mount the Docker socket inside the container.</p> </blockquote> <ul> <li>the problem with the first approach is that you create a new image forked from the official JNLP slave and manually add the Docker client. This means that whenever Jenkins or Docker have updates, you need to manually update your image and entire configuration, which is not that desirable. Using the second approach you always use official images, and you <strong>use the JNLP slave to start other containers in the same pod.</strong></li> </ul> <blockquote> <p><a href="https://github.com/radu-matei/kube-bot/blob/master/Jenkinsfile" rel="noreferrer">Here is the full file from the image below</a></p> <p><a href="https://github.com/jenkinsci/kubernetes-plugin#container-group-support" rel="noreferrer">Here is the Jenkins Plugin documentation for doing this</a></p> </blockquote> <p><a href="https://i.stack.imgur.com/qt53n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qt53n.png" alt="enter image description here" /></a></p> <p>As I said, the JNLP image will start a container that you specify in the same pod. Note that in order to use Docker from a container you still need to mount the Docker socket.</p> <p>These are the two ways I found to achieve building images inside a Jenkins JNLP slave running inside a container.</p> <p>The example also shows how to push the image using credential bindings from Jenkins, and how to update a Kubernetes deployment as part of the build process.</p> <p>Some more resources:</p> <ul> <li><a href="https://github.com/kubernetes/charts/tree/master/stable/jenkins" rel="noreferrer">deploy Jenkins to Kubernetes as Helm chart, configure plugins to install</a></li> </ul> <p>Thanks, Radu M</p>
<p>I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.</p> <p>I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).</p> <p>I then removed the LoadBalancer, and followed this tutorial: <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="noreferrer">https://docs.traefik.io/user-guide/kubernetes/</a></p> <p>So I got a new <code>traefik-ingress-controller</code> deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.</p> <p>I then create my ingress for my http service, but here comes my issue: I can't find a way to expose that externally.</p> <p>I want it to be accessible by anybody via an external IP.</p> <p>What am I missing?</p> <p>Here is the output of <code>kubectl get --export all</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/mywebservice 10.51.254.147 &lt;none&gt; 80/TCP 1d svc/kubernetes 10.51.240.1 &lt;none&gt; 443/TCP 1d svc/traefik-ingress-controller 10.51.248.165 &lt;nodes&gt; 80:31447/TCP,8080:32481/TCP 25m svc/traefik-web-ui 10.51.248.65 &lt;none&gt; 80/TCP 3h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/mywebservice 2 2 2 2 1d deploy/traefik-ingress-controller 1 1 1 1 3h NAME DESIRED CURRENT READY AGE rs/mywebservice-3818647231 2 2 2 23h rs/traefik-ingress-controller-957212644 1 1 1 3h </code></pre>
<p>You need to expose the Traefik service. Set the service spec type to <code>LoadBalancer</code>. Try the service file below, which I've used previously:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
</code></pre>
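<p>On container engine the external IP can take a minute or two to be provisioned. Assuming the service is named <code>traefik</code> as above, you can watch for it with:</p>
<pre><code>kubectl get svc traefik --watch
</code></pre>
<p>Once the <code>EXTERNAL-IP</code> column is populated, the ingress rules handled by Traefik should be reachable on that address.</p>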
<p>I do not know why my master node is in NotReady status; all pods on the cluster run normally. I am using Kubernetes v1.7.5, the network plugin is Calico, and the OS version is &quot;centos7.2.1511&quot;.</p>

<pre><code># kubectl get nodes
NAME        STATUS     AGE       VERSION
k8s-node1   Ready      1h        v1.7.5
k8s-node2   NotReady   1h        v1.7.5

# kubectl get all --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-11kvm                           2/2       Running   0          33m
kube-system   po/calico-policy-controller-1906845835-1nqjj   1/1       Running   0          33m
kube-system   po/calicoctl                                   1/1       Running   0          33m
kube-system   po/etcd-k8s-node2                              1/1       Running   1          15m
kube-system   po/kube-apiserver-k8s-node2                    1/1       Running   1          15m
kube-system   po/kube-controller-manager-k8s-node2           1/1       Running   2          15m
kube-system   po/kube-dns-2425271678-2mh46                   3/3       Running   0          1h
kube-system   po/kube-proxy-qlmbx                            1/1       Running   1          1h
kube-system   po/kube-proxy-vwh6l                            1/1       Running   0          1h
kube-system   po/kube-scheduler-k8s-node2                    1/1       Running   2          15m

NAMESPACE     NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes   10.96.0.1    &lt;none&gt;        443/TCP         1h
kube-system   svc/kube-dns     10.96.0.10   &lt;none&gt;        53/UDP,53/TCP   1h

NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/calico-policy-controller   1         1         1            1           33m
kube-system   deploy/kube-dns                   1         1         1            1           1h

NAMESPACE     NAME                                      DESIRED   CURRENT   READY     AGE
kube-system   rs/calico-policy-controller-1906845835    1         1         1         33m
kube-system   rs/kube-dns-2425271678                    1         1         1         1h
</code></pre> <hr /> <h1>update</h1> <p>It seems the master node cannot recognize the Calico network plugin. I used kubeadm to install the k8s cluster. Because kubeadm starts etcd on 127.0.0.1:2379 on the master node, Calico on the other nodes cannot talk to etcd, so I modified etcd.yaml as follows and now all Calico pods run fine. I am not very familiar with Calico, so how can I fix this?</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: &quot;&quot;
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --listen-client-urls=http://127.0.0.1:2379,http://10.161.233.80:2379
    - --advertise-client-urls=http://10.161.233.80:2379
    - --data-dir=/var/lib/etcd
    image: gcr.io/google_containers/etcd-amd64:3.0.17
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/lib/etcd
      name: etcd
    - mountPath: /etc/kubernetes
      name: k8s
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /var/lib/etcd
    name: etcd
  - hostPath:
      path: /etc/kubernetes
    name: k8s
status: {}

[root@k8s-node2 calico]# kubectl describe node k8s-node2
Name:                   k8s-node2
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=k8s-node2
                        node-role.kubernetes.io/master=
Annotations:            node.alpha.kubernetes.io/ttl=0
                        volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:                 node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:      Tue, 12 Sep 2017 15:20:57 +0800
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  OutOfDisk       False   Wed, 13 Sep 2017 10:25:58 +0800   Tue, 12 Sep 2017 15:20:57 +0800   KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Wed, 13 Sep 2017 10:25:58 +0800   Tue, 12 Sep 2017 15:20:57 +0800   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Wed, 13 Sep 2017 10:25:58 +0800   Tue, 12 Sep 2017 15:20:57 +0800   KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready           False   Wed, 13 Sep 2017 10:25:58 +0800   Tue, 12 Sep 2017 15:20:57 +0800   KubeletNotReady             runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:   10.161.233.80
  Hostname:     k8s-node2
Capacity:
  cpu:          2
  memory:       3618520Ki
  pods:         110
Allocatable:
  cpu:          2
  memory:       3516120Ki
  pods:         110
System Info:
  Machine ID:                  3c6ff97c6fbe4598b53fd04e08937468
  System UUID:                 C6238BF8-8E60-4331-AEEA-6D0BA9106344
  Boot ID:                     84397607-908f-4ff8-8bdc-ff86c364dd32
  Kernel Version:              3.10.0-514.6.2.el7.x86_64
  OS Image:                    CentOS Linux 7 (Core)
  Operating System:            linux
  Architecture:                amd64
  Container Runtime Version:   docker://1.12.6
  Kubelet Version:             v1.7.5
  Kube-Proxy Version:          v1.7.5
PodCIDR:        10.68.0.0/24
ExternalID:     k8s-node2
Non-terminated Pods:    (5 in total)
  Namespace     Name                                CPU Requests   CPU Limits   Memory Requests   Memory Limits
  ---------     ----                                ------------   ----------   ---------------   -------------
  kube-system   etcd-k8s-node2                      0 (0%)         0 (0%)       0 (0%)            0 (0%)
  kube-system   kube-apiserver-k8s-node2            250m (12%)     0 (0%)       0 (0%)            0 (0%)
  kube-system   kube-controller-manager-k8s-node2   200m (10%)     0 (0%)       0 (0%)            0 (0%)
  kube-system   kube-proxy-qlmbx                    0 (0%)         0 (0%)       0 (0%)            0 (0%)
  kube-system   kube-scheduler-k8s-node2            100m (5%)      0 (0%)       0 (0%)            0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests   CPU Limits   Memory Requests   Memory Limits
  ------------   ----------   ---------------   -------------
  550m (27%)     0 (0%)       0 (0%)            0 (0%)
Events:         &lt;none&gt;
</code></pre>
<p>It's good practice to run a describe command in order to see what's wrong with your node:</p> <pre><code>kubectl describe nodes &lt;NODE_NAME&gt;
</code></pre> <p>e.g. <code>kubectl describe nodes k8s-node2</code>. You should be able to start your investigation from there and add more info to this question if needed.</p>
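<p>If the describe output points at the network plugin, as it does here (<code>cni config uninitialized</code>), a few commands run on the affected node can help narrow things down. The paths below are the common CNI defaults and may differ on your installation:</p>
<pre><code># check that the calico pod actually wrote a CNI configuration
ls /etc/cni/net.d/

# check that the CNI plugin binaries are present
ls /opt/cni/bin/

# look at the kubelet logs for network plugin errors
journalctl -u kubelet -f
</code></pre>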
<p>I am having trouble configuring an ingress service.</p> <p>This is how I configure Kubernetes:</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-spa-development-deployment
spec:
  template:
    metadata:
      labels:
        app: web-spa-development-291
    spec:
      containers:
      - name: web-spa-development-291
        image: web-spa-development:292
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-spa-development-service
spec:
  type: NodePort
  selector:
    app: web-spa-development-291
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: development-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: staging
spec:
  tls:
  - hosts:
    - dev-app.example.com
    secretName: wildcard-cert
  rules:
  - host: dev-app.example.com
    http:
      paths:
      - backend:
          serviceName: web-spa-development-service
          servicePort: 80
        path: /*
      - backend:
          serviceName: web-spa-development-service
          servicePort: 80
        path: /
---
</code></pre> <p>The image itself contains a Node.js server, serving on port 80.</p> <p>When the ingress is up and running, accessing the web app over http/https returns the index.html in the response. However, it doesn't return any of the static assets: the .js, .css, and .html files.</p> <p>What could possibly be the problem?</p>
<p>Try this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: development-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: staging
spec:
  tls:
  - hosts:
    - dev-app.example.com
    secretName: wildcard-cert
  rules:
  - host: dev-app.example.com
    http:
      paths:
      - backend:
          serviceName: web-spa-development-service
          servicePort: 80
</code></pre> <p>See <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting</a></p>
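<p>After updating the ingress you can check that a static asset is actually served. The path below is only an example, so replace it with one of your real assets:</p>
<pre><code>curl -I https://dev-app.example.com/static/main.js
</code></pre>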
<p>I'm being hit with a vulnerability probe which uses <code>User-Agent: Mozilla/5.0 Jorgee</code> and I want to drop/reject any requests from that user agent. </p> <p>I've been digging around the k8s examples but I can't seem to find a solution.</p> <p>Is there any way I can deny requests based on <code>User-Agent</code> with <code>gcr.io/google_contianers/nginx-ingress-controller:0.8.3</code>?</p>
<p>As best I can tell from <a href="https://github.com/kubernetes/ingress/blob/nginx-0.9.0-beta.13/controllers/nginx/rootfs/etc/nginx/template/nginx.tmpl" rel="nofollow noreferrer">the nginx go-template</a>, that is not something the current implementation of their ingress controller offers. I believe that in their mental model, one would wish to do that kind of suppression in a per-<code>Service</code> manner, since doing it in the <code>Ingress</code> controller could very easily apply that rule to the whole cluster, leading to surprising outcomes for other published <code>Service</code>s. I took a look at <a href="https://github.com/jcmoraisjr/haproxy-ingress/blob/v0.4-snapshot.4/rootfs/etc/haproxy/template/haproxy.tmpl" rel="nofollow noreferrer">the haproxy controller's go-template</a> and it seems to be true there, also.</p> <p>At this point, I think you have two options:</p> <ol> <li><a href="https://github.com/kubernetes/ingress/blob/nginx-0.9.0-beta.13/examples/customization/custom-template/custom-template.yaml" rel="nofollow noreferrer">Use a custom nginx go-template file</a>, which might not be "bad" but one will need to exercise caution when doing upgrades, since your controller will no longer come with a known-correct <code>nginx.tmpl</code></li> <li>Try <a href="https://github.com/appscode/voyager/blob/3.2.0/docs/user-guide/ingress/backend-rule.md" rel="nofollow noreferrer">a more advanced haproxy ingress controller</a>, which allows you to specify arbitrary haproxy snippets right in your Ingress resource, which is the best approach I have seen thus far, modulo their <code>backendRule</code> array seems <a href="https://kubernetes.io/docs/api-reference/v1.7/#ingressbackend-v1beta1-extensions" rel="nofollow noreferrer">not to be standard</a></li> </ol>
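<p>For reference, if you go with the custom template route, the actual blocking logic is plain nginx configuration placed in the server (or location) block of your <code>nginx.tmpl</code>, roughly along these lines:</p>
<pre><code># deny requests whose User-Agent contains "Jorgee"
if ($http_user_agent ~* "Jorgee") {
    return 403;
}
</code></pre>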
<p>Is there a simple <code>kubectl</code> command to take a <code>kubeconfig</code> file (that contains a cluster+context+user) and merge it into the ~/.kube/config file as an additional context?</p>
<p>Do this:</p> <pre><code>export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten
</code></pre> <p>You can then pipe that out to a new file if needed.</p>
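<p>Putting that together, a full merge into a new file could look like this; the file names are only examples:</p>
<pre><code>export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten &gt; ~/.kube/merged-config

# review the merged file, then replace the original if it looks right
mv ~/.kube/merged-config ~/.kube/config
unset KUBECONFIG
</code></pre>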
<p>What is the correct way to access a Kubernetes cluster set up using minikube through the Kubernetes API? At the moment, I can't find a port through which the cluster can be accessed.</p>
<p>The easiest way to access the Kubernetes API when running minikube is to use</p> <pre><code>kubectl proxy --port=8080
</code></pre> <p>You can then access the API with</p> <pre><code>curl http://localhost:8080/api/
</code></pre> <p>This also allows you to browse the API in your browser. Start minikube using</p> <pre><code>minikube start --extra-config=apiserver.Features.EnableSwaggerUI=true
</code></pre> <p>then start <code>kubectl proxy</code>, and navigate to <a href="http://localhost:8080/swagger-ui/" rel="noreferrer">http://localhost:8080/swagger-ui/</a> in your browser.</p> <p>You <em>can</em> access the Kubernetes API with curl directly using</p> <pre><code>curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/client.crt --key ~/.minikube/client.key https://`minikube ip`:8443/api/
</code></pre> <p>but usually there is no advantage in doing so. Common browsers are not happy with the certificates minikube generates, so if you want to access the API with your browser you need to use <code>kubectl proxy</code>.</p>
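<p>With the proxy running, any API path can be hit the same way, for example listing the pods in the default namespace:</p>
<pre><code>curl http://localhost:8080/api/v1/namespaces/default/pods
</code></pre>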
<p>I get the following error when trying to use Kubespray to install Kubernetes on an EC2 cluster:</p> <pre><code>TASK [network_plugin/calico : Calico | wait for etcd] ***********************************************************************************************************************************************************************************************
Thursday 20 July 2017  17:21:40 -0400 (0:00:00.327)       0:04:16.018 *********
FAILED - RETRYING: Calico | wait for etcd (10 retries left).
FAILED - RETRYING: Calico | wait for etcd (9 retries left).
FAILED - RETRYING: Calico | wait for etcd (8 retries left).
FAILED - RETRYING: Calico | wait for etcd (7 retries left).
FAILED - RETRYING: Calico | wait for etcd (6 retries left).
FAILED - RETRYING: Calico | wait for etcd (5 retries left).
FAILED - RETRYING: Calico | wait for etcd (4 retries left).
FAILED - RETRYING: Calico | wait for etcd (3 retries left).
FAILED - RETRYING: Calico | wait for etcd (2 retries left).
FAILED - RETRYING: Calico | wait for etcd (1 retries left).
fatal: [node1 -&gt; None]: FAILED! =&gt; {"attempts": 10, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: &lt;urlopen error [Errno 111] Connection refused&gt;", "redirected": false, "status": -1, "url": "https://localhost:2379/health"}
</code></pre> <p>Anyone know why this might be? Here is a Github issue I filed with more info: <a href="https://github.com/kubernetes-incubator/kubespray/issues/1466" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray/issues/1466</a></p>
<p>TLDR; open both ports 2379 and 2380</p> <p>I encountered this same issue and found that I needed to open two ports for etcd. The obvious port is <code>2379</code>, because it is in the ansible error message. When I examined the logs for the etcd container on one of the failed nodes I found that it was trying to communicate with other etcd nodes on port <code>2380</code>. I updated my security group to allow traffic on both ports and this error was resolved.</p>
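<p>If you manage the security group with the AWS CLI, opening both ports for traffic between instances in the same group could look roughly like this; the group ID is a placeholder for your own:</p>
<pre><code>aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2379-2380 \
  --source-group sg-0123456789abcdef0
</code></pre>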
<p>I'm being hit with a vulnerability probe which uses <code>User-Agent: Mozilla/5.0 Jorgee</code> and I want to drop/reject any requests from that user agent. </p> <p>I've been digging around the k8s examples but I can't seem to find a solution.</p> <p>Is there any way I can deny requests based on <code>User-Agent</code> with <code>gcr.io/google_contianers/nginx-ingress-controller:0.8.3</code>?</p>
<p>You can add custom nginx configuration snippets to Ingresses with annotations, at least for the "normal" nginx controller; I am not sure whether that works with the GCE controller too. See e.g. here: <a href="https://github.com/kubernetes/ingress/blob/master/examples/customization/configuration-snippets/nginx/ingress.yaml#L8" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/blob/master/examples/customization/configuration-snippets/nginx/ingress.yaml#L8</a></p>
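<p>As a sketch, an Ingress using such a snippet to reject that user agent could look like the example below. Note that the annotation prefix has changed between controller versions (<code>ingress.kubernetes.io</code> vs <code>nginx.ingress.kubernetes.io</code>) and the backend service name is a placeholder, so adjust both to your setup:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: block-jorgee
  annotations:
    ingress.kubernetes.io/configuration-snippet: |
      if ($http_user_agent ~* "Jorgee") {
        return 403;
      }
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-service   # placeholder, point this at your real service
          servicePort: 80
</code></pre>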