<p>I've thought about this a lot but can't come up with a solution I'm happy with.</p> <p>Basically this is the problem: log 100k+ chats (some slower, some faster) into Cassandra, i.e. save userId, channelId, timestamp and the message.</p> <p>Cassandra already supports horizontal scaling out of the box, so I have no issue there.</p> <p>Now my software that reads these chats does it over TCP (IRC). Something like 300 messages/sec is usual for the top 1k channels, and from my experiments a single IRC connection can't handle that.</p> <p>What I now want to build is multiple instances (with Docker/Kubernetes) of the logger and share the load between them. So ideally, if I have maybe 4 workers and 1k chats (for example), they would each join at least 250 channels. I say at least because I would want optional redundancy, so I can have 2 loggers in the same chat to make sure no messages get lost. There is no issue with duplicates, because all messages have a unique ID.</p> <p>Now, how would I best and dynamically share the currently joined channels between the workers? I want to avoid having a master or controlling point. It should also be easy to add more workers that then reduce the load on the other workers.</p> <p>Are there any good articles about this kind of behaviour? Maybe good concepts or protocols already defined? Like I said, I want to avoid another central control point, so no RabbitMQ, Redis or whatever.</p> <p>Edit: I've looked into something like the Raft Consensus Algorithm, but I don't think it makes sense here, since I don't want my clients to agree on a shared state but rather divide the state between them "equally".</p>
<p>I think that in this case looking for a description of an existing algorithm might not be very useful: the problem is not complicated or generic enough to be worth a publication.</p> <p>As described, the problem could be solved by using Cassandra itself as a mediator and sharing the chat-channel assignment information among the workers.</p> <p>So (the trivial part) channels would have IDs and assigned worker ID(s), plus, in the optional case of redundancy, the required number of workers (2 or whatever number of workers you want processing that chat). A worker, before assigning itself to a channel, would check whether there are already enough assignees. If so, it would continue to the next channel; if not, it would assign itself to the channel. This is one of the options (alternatively you can have workers holding the channel IDs, but since redundancy is rare this way seems simpler). Workers would have a limit on the number of channels they can process and would not try to exceed it by assigning themselves more channels.</p> <p>Now we only have to deal with the case of assigning too many workers to the same channel, exceeding requirements and exhausting the worker capacity by monitoring all the same channels. Otherwise, if they all start at once, channels might have more assigned workers than needed. Even though this is unlikely to create a real problem in the described case (just a bit more redundancy than requested), you can handle it by prioritising workers. Much like hiring of school teachers in British Columbia, Canada, is done on a seniority basis - the most senior gets the job first - except that here it would be done voluntarily by the workers themselves, not by the school administration. What this means is that each worker would have to check all its assigned channels and, should there be more workers than needed at that time, check whether it has the smallest priority among all the assignees. If it does, it would resign - remove itself and stop processing the channel.</p> <p>That requires assigning distinct priorities to the workers, which could easily be achieved when spawning them, by simply setting each to the next sequential number (the oldest has the highest priority, or vice versa if you are concerned about old, potentially dying workers taking up all the load and would prefer new ones to take on more while still fresh). More elaborately, this could also be done by using Cassandra <a href="https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html" rel="nofollow noreferrer">Lightweight transactions</a> as described in one of the answers <a href="https://stackoverflow.com/questions/3935915/how-to-create-auto-increment-ids-in-cassandra">here</a> (the <a href="https://stackoverflow.com/a/29391877/3477070">one by AlonL</a>). With just a few workers (you mentioned ~4) either way should work, and the concerns about scaling mentioned in the other answers there aren't a big deal for a few integer priorities. Also, instead of sequential number assignment, requiring the workers to self-assign a random 32-bit integer priority on initialization has virtually no chance of collision, so a loop "until no collisions" should exit on the very first iteration (which makes a second iteration a very rare code path that requires an explicit test).</p> <p>The trick is basically to limit the amount of data requiring synchronisation and to put the load of regulation onto the workers themselves. There is no need for consensus algorithms, as there is not much complexity and we are not dealing with a huge number of potentially fraudulent workers trying to get assignments ahead of their more senior peers.</p> <p>The only issue I should mention is that there could be implicit worker rotation if a channel goes offline, which makes its worker stop processing it. You will get a different worker assignment the next time the channel comes online.</p>
<p>I integrated Kubernetes with Jenkins, ran the command <code>kubectl create -f deployment.yaml --validate=false</code> from the Jenkinsfile, and I am getting the error:</p> <pre><code>unable to recognize "deployment.yaml": no matches for extensions/, Kind=Deployment
</code></pre> <p>But if I run the same command in a terminal I am able to deploy my image. The deployment.yaml file is given below:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: appname
spec:
  template:
    metadata:
      labels:
        app: appname
    spec:
      containers:
      - name: appname
        image: appname
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
</code></pre>
<p>I suppose your Minikube (if you are testing on your local machine) is NOT running. Start Minikube using the following command:</p> <ul> <li>$ minikube start</li> </ul> <p>Now try:</p> <ul> <li>$ kubectl create, OR</li> <li>$ kubectl apply</li> </ul>
<p>I've installed the Kubernetes dashboard, and created a service account user with the appropriate permissions, however logging in with a token fails for some reason.</p> <p>I see the following logs:</p> <pre><code>2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 10.244.0.0:34914: {} 2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Outcoming response to 10.244.0.0:34914 with 200 status code 2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Incoming HTTP/2.0 POST /api/v1/login request from 10.244.0.0:34914: { "kubeConfig": "", "password": "", "token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnJkV0psTFhONWMzUmxiU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUpoWkcxcGJpMTFjMlZ5TFhSdmEyVnVMV2RrZG5oM0lpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltRmtiV2x1TFhWelpYSWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSmtaVEF4TnpRNU15MWhNakE0TFRFeFpUZ3RPRGxrWmkwd09EQXdNamRoTURobFpHTWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOTFiblE2YTNWaVpTMXplWE4wWlcwNllXUnRhVzR0ZFhObGNpSjkucHhfMDEwUTBYU2tPMmNhVi1ZYlRDYlllSTNVMVlmcGh3UFZ4TXBOYmF6dWpSM1gtOGVBTUZmbm1GNHlYWHFZWGw5eWlVYmRvQ3lBSl9YcHF5bTlLQThRaWx6MFU3eWZ1WV9BbUg4NmtDNE9hYW5aem1xSmp2N3ZObDY1MU1OeWF0dU5nR0JmU21GZXRCMnoxUkdYRmlIVF9UczljMjh1ZkZiSXNZNkRMVml4Y2JhUS0za2JxOW9PbzZ3NV8zc3ZRQ3dmNjNiTVNaSEpzdkgyUndwVkhkbFJnM3Rmbl9RRUxGcWtJYzZycERibFlUbXZJcVdVaWJjQVdHcXhDRVR6NU5vUGlnbndMaVpuVi1lZFpKZDRpbUJZNU5Ia3FLM0Q0TDgyTnp1NzJkUVU3M3B4T3F5Q3FVSlNhQ3IyVU52eVVucHRENTZTemdtSTBaM0JqUVkyTjFB", "username": "" } 2018/08/17 14:26:06 Non-critical error occurred during resource retrieval: the server has asked for the client to provide credentials 2018/08/17 14:26:06 [2018-08-17T14:26:06Z] Outcoming response to 10.244.0.0:34914 with 200 status code 2018/08/17 14:26:24 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. </code></pre> <p>Kubernetes version:</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>As <a href="https://github.com/floreks" rel="noreferrer">floreks</a> wrote on <a href="https://github.com/kubernetes/dashboard/issues/3216#issuecomment-414042254" rel="noreferrer">GitHub</a>:</p> <blockquote> <p>NOTE: Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.</p> </blockquote> <p>Also, as <a href="https://github.com/chrissound" rel="noreferrer">chrissound</a> wrote:</p> <blockquote> <p>I've worked around this by giving cluster admin permission to the dashboard user and just clicking 'skip' at the login prompt:</p> </blockquote> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
</code></pre>
<p>I was reading about <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer"><strong>Pod Priority and Preemption</strong></a>, and I had a question in my mind.</p> <p>Let's say A is a higher-priority pod and B is a lower-priority one. B is already running, A comes along, and now an eviction has to happen. Note that B is of type Job. I wanted to ask: if B is evicted, will it be rescheduled later?</p>
<p>The answer is yes.</p> <blockquote> <p>If a pending Pod has inter-pod affinity to one or more of the lower-priority Pods on the Node, the inter-Pod affinity rule cannot be satisfied in the absence of those lower-priority Pods. In this case, the scheduler does not preempt any Pods on the Node. Instead, it looks for another Node. The scheduler might find a suitable Node or it might not. There is no guarantee that the pending Pod can be scheduled</p> </blockquote>
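<p>For reference, the priorities themselves are declared through a <code>PriorityClass</code> that the pod references by name. A minimal sketch of how pod A's higher priority would be set up (the names and the image are placeholders, and the exact API group/version of <code>PriorityClass</code> depends on your cluster version):</p> <pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # larger value = higher priority
globalDefault: false
description: "Hypothetical priority class for pod A"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  priorityClassName: high-priority   # pod A may preempt lower-priority pods such as B
  containers:
  - name: app
    image: nginx                     # placeholder image
</code></pre>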
<p>The Flink official documentation provides a <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/jobmanager_high_availability.html" rel="nofollow noreferrer">jobmanager high availability</a> solution for Standalone And Yarn Flink clusters. But what should be done for high availability using a Kubernetes Flink Cluster?</p> <p>From the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/kubernetes.html" rel="nofollow noreferrer">Kubernetes Setup</a> section of the document, it seems we just deploy a single Jobmanager when deploying to a Kubernetes cluster. So how to achieve HA for a Kubernetes Flink Cluster?</p>
<p>The <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/jobmanager_high_availability.html" rel="nofollow noreferrer">official doc says</a> that high availability for the job manager is to deal with cases where the job manager crashes. So there is only a single job manager needed but you want to handle the case where it goes down. On Kubernetes if it goes down then Kubernetes should detect this and automatically restart it. So you don't need to run more replicas of it. </p> <p>(The doc says this <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/jobmanager_high_availability.html#yarn-cluster-high-availability" rel="nofollow noreferrer">explicitly about using yarn for ha</a>. It doesn't seem to <em>state</em> it for Kubernetes but restarting failing Pods is <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">standard behaviour for Kubernetes</a>.)</p> <p>The task manager is configured by default to run with multiple replicas on Kubernetes in the official k8s resources (see the 'replicas' entries <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/kubernetes.html#simple-kubernetes-flink-cluster-resources" rel="nofollow noreferrer">in the resources</a>) but the job manager is not. (And it's the <a href="https://github.com/docker-flink/examples/tree/master/helm/flink/templates" rel="nofollow noreferrer">same in the helm chart</a>.) So I believe it is not needed for the job manager - I'd suggest running with a single job-manager unless you hit specific problems with that. </p>
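<p>For reference, a stripped-down sketch of the job manager part of those official resources: a Deployment with a single replica, relying on Kubernetes to recreate the pod if it crashes. The image tag, ports and environment variable here are approximations of the linked example, so check them against the official files:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1                 # a single job manager; Kubernetes restarts the pod on failure
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:latest   # assumed image tag
        args: ["jobmanager"]
        ports:
        - containerPort: 6123 # RPC
        - containerPort: 8081 # web UI
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: flink-jobmanager
</code></pre>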
<p>When I run <code>kubectl -n abc-namespace describe pod my-pod-zl6m6</code>, I get a lot of information about the pod along with the Events in the end. </p> <p>Is there a way to output just the Events of the pod either using <code>kubectl describe</code> or <code>kubectl get</code> commands?</p>
<p>You can use the <code>event</code> command of <code>kubectl</code>.</p> <p>To filter for a specific pod you can use a field-selector:</p> <pre><code>kubectl get event --namespace abc-namespace --field-selector involvedObject.name=my-pod-zl6m6 </code></pre> <p>To see what fields are possible you can use <code>kubectl describe</code> on any event.</p>
<p>I have a VirtualBox docker-machine <code>development</code>:</p> <pre><code>$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                   SWARM   DOCKER        ERRORS
development   -        virtualbox   Running   tcp://*.*.*.*:****            v18.05.0-ce
</code></pre> <p>Inside <code>development</code> I have this:</p> <pre><code>$ docker images
REPOSITORY           TAG      IMAGE ID       CREATED     SIZE
busybox              latest   e1ddd7948a1c   1 day ago   1.16MB
pritam/play-docker   latest   34eb2664f14e   1 day ago   1.4GB
</code></pre> <p>Now I want to push this image from inside the VirtualBox machine to the Google registry. How do I do that?</p>
<p>You can use the <code>docker push</code> command:</p> <pre><code>docker push imageTagName
</code></pre> <blockquote> <p><a href="http://docs.docker.com/engine/reference/commandline/push" rel="nofollow noreferrer">http://docs.docker.com/engine/reference/commandline/push</a></p> </blockquote> <p>For your own registry you can use:</p> <pre><code>docker push registry.example.com/image
</code></pre> <blockquote> <p><a href="http://blog.docker.com/2013/07/how-to-use-your-own-registry" rel="nofollow noreferrer">http://blog.docker.com/2013/07/how-to-use-your-own-registry</a>.</p> </blockquote> <p>And for Google Container Registry:</p> <blockquote> <p><a href="http://cloud.google.com/container-registry/docs/pushing-and-pulling" rel="nofollow noreferrer">http://cloud.google.com/container-registry/docs/pushing-and-pulling</a></p> </blockquote> <pre><code>docker [push/pull] gcr.io/{PROJECT_ID}/{image}:tag
</code></pre> <p>For authentication you can check this Google Container Registry reference:</p> <blockquote> <p><a href="https://medium.com/google-cloud/using-googles-private-container-registry-with-docker-1b470cf3f50a" rel="nofollow noreferrer">https://medium.com/google-cloud/using-googles-private-container-registry-with-docker-1b470cf3f50a</a></p> </blockquote> <p>After you get the Google auth key in JSON format (key.json):</p> <pre><code>docker login -u _json_key -p "$(cat key.json)"
</code></pre> <p>After a successful login you can push your image:</p> <pre><code>docker push gcr.io/project_id/imageName:tag
</code></pre>
<p>I have a multinode Kubernetes cluster. Multiple services are deployed as pods. They communicate with each other via RabbitMQ, which also exists as a pod in the cluster.</p> <h2>Problem Scenario:</h2> <p>Many times services fail to connect to the required queue in RabbitMQ. Logs for this are reported in the RabbitMQ pod logs and in the services' pods as well. This occurs primarily due to connectivity issues and is inconsistent. Due to this failure, functionality breaks. Also, since this is NOT a crash, the pod always stays in the Running state in Kubernetes. To fix it we have to manually go and restart the pod.</p> <p>I want to create a liveness probe for every pod. But how should this work to catch the exception? Since many processes in a service can be trying to access the connection, any one of them can fail.</p>
<p>I'd suggest implementing an HTTP endpoint for the liveness probe that checks the state of the connection to RabbitMQ, or actually failing hard and exiting the whole process when the RabbitMQ connection does not work.</p> <p>But... the best solution would be to retry the connection indefinitely when it fails, so a temporary networking issue is transparently recovered from. A well-written service should wait for the services it depends on to become operational instead of cascading the failure up the stack.</p> <p>Imagine you have a liveness check like you ask for here on 20 services using that RabbitMQ (or some other shared service). That service goes down for some time, and what you end up with is a cluster with 20+ services in CrashLoopBackOff state due to incremental backoff on failure. That means your cluster will take some time to recover when the originally failing service is back, and the picture will be messy enough to make it harder to understand what happened at first glance.</p>
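<p>If you do go the probe route, a minimal sketch of wiring it up could look like the following. It assumes your service exposes a hypothetical <code>/healthz</code> endpoint on port 8080 that returns a non-2xx status while its RabbitMQ connection is broken; adjust the path, port and timings to your application:</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz      # hypothetical endpoint that reports RabbitMQ connectivity
    port: 8080          # assumed application port
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3   # restart only after several consecutive failures
</code></pre>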
<p>I am writing a custom Kubernetes controller and resource to keep track of the number of times a service object is called. The idea behind this is to add a "hitcount" field to the resource and update it every time a service is invoked. (Meaning update the count every time a request is made to the pod that is part of the service)</p> <p>Is this possible? Does the kubernetes API have an option to do this?</p>
<p>It sounds like you're in fact trying to implement a metric inside the Kubernetes API. It's hardly a controller, though. In a nutshell, a controller is a piece of software that continuously realizes, in the real world, an intent stated in the Kubernetes API.</p> <p>Semantics aside, the status object in the Kubernetes API is defined in its type. For Pod, at the moment of writing this answer, it's <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L2978" rel="nofollow noreferrer">here</a>, and it's not an extensible one. So you can't really put your metric into the Pod object.</p> <p>The question that this naturally poses is: why not use something that is designed for metrics collection and visualization, like Prometheus/Grafana?</p> <p>All of that aside, you also have the issue of how you will expose/collect that metric. Istio could help here, or the software in the pod itself could expose its metrics (which, if possible, is the best route to take IMO).</p>
<p>I'm trying to set up a simple HTTP to HTTPS redirect for traffic going to a Kubernetes cluster. The SSL termination is happening on the ELB. When I try to use <code>nginx.ingress.kubernetes.io/ssl-redirect = true</code> it results in an infinite redirect, which led me to set up a ConfigMap to handle this (<a href="https://stackoverflow.com/questions/49856754/nginx-ingress-too-many-redirects-when-force-ssl-is-enabled">nginx-ingress: Too many redirects when force-ssl is enabled</a>).</p> <p>Now there seems to be no redirection happening at all.</p> <p>My ingress service is defined as:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...:certificate/...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: ingress-nginx
  type: LoadBalancer
</code></pre> <p>My config map is defined as:</p> <pre><code>apiVersion: v1
kind: ConfigMap
data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  ssl-redirect: "false"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "true"
  http-snippet: |
    server {
      listen 8080 proxy_protocol;
      server_tokens off;
      return 307 https://$host$request_uri;
    }
metadata:
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
</code></pre> <p>And, the ingress is defined as:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
    nginx.ingress.kubernetes.io/cors-allow-origin: gateway.example.com.com/monitor
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: gateway.example.com
    http:
      paths:
      - backend:
          serviceName: gateway
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - gateway.example.com
</code></pre>
<p>The issue was the target port I was using on the load balancer not matching the port the redirection server was listening on:</p> <pre><code>ports:
- name: http
  port: 80
  protocol: TCP
  targetPort: http
</code></pre> <p>This was just sending everything to port 80. It should have been this:</p> <pre><code>ports:
- name: http
  port: 80
  protocol: TCP
  targetPort: 8080
</code></pre> <p>That way it matches up with the ConfigMap's:</p> <pre><code>data:
  ...
  http-snippet: |
    server {
      listen 8080 proxy_protocol;
      server_tokens off;
      return 307 https://$host$request_uri;
    }
</code></pre>
<p>I am currently trying to use the "--overrides" option in k8s to mount a local directory that a docker container can access and then run a command against that container utilizing that directory.</p> <pre><code>kubectl run -i --tty cic-v1 --image=image1 --restart=Never --overrides={"spec":{"volumeMounts":[{"name":"input","mountPath":"/usr/src/data/input"},{"name":"output","mountPath":"/usr/src/data/output"}],"volumes":[{"name":"input","hostPath":{"path":"/input/"}},{"name":"output","hostPath":{"path":"/output/"}}]}} </code></pre> <p>Ideally this is done inline without the use of a .yaml file. All attempts result in... error: Invalid JSON Patch ...and I am having difficulty debugging this. Has anyone accomplished anything similar?</p>
<p>Answer found here.</p> <p><a href="https://stackoverflow.com/a/37621761/6309">https://stackoverflow.com/a/37621761/6309</a></p> <p>I was incorrectly specifying the 'overrides' option in k8s.</p>
<p>I am not sure I am understanding Kubernetes init containers properly. What I want to do is run some initialization on the pod so that it pip installs some additional libraries that are not in my app container image. Specifically, I want to install the Azure Storage Queue package so that I can use it with the standard TensorFlow image.</p> <p>I set up my init container with the command "pip install azure-storage-queue" and that ran fine; however, my app container tells me "No module named azure".</p> <p>Is this not how an init container can be used?</p> <p>NOTE: I realize I could create a new image with all my prerequisites installed; however, this is just for development purposes.</p>
<p>That's not really how init containers work... Init containers are meant to initialize the <strong>pod</strong>, and their image (and whatever gets installed into it) isn't shared with the other containers that later run in that pod.</p> <p>The best solution is to create a new container image including the Python modules you need.</p> <p>An alternative is to change the command your container runs so that it first installs the modules using <code>pip</code> and then runs the script that needs them; that way you can avoid creating a new container image.</p>
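<p>A minimal sketch of that second approach, overriding the container's startup command so the dependency is installed before the app starts. The image tag and the <code>/app/train.py</code> path are placeholders for your own image and entrypoint:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tf-dev
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:latest   # assumed base image
    command: ["/bin/sh", "-c"]
    # install the missing module at startup, then run the actual workload
    args:
    - pip install azure-storage-queue &amp;&amp; python /app/train.py
</code></pre>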
<p>I'm working on a microservices project: a dockerized Spring Cloud Netflix project that so far contains 3 microservices plus some Netflix services such as Turbine, Zipkin, discovery, config server, etc.</p> <p>(It's just running locally for now.)</p> <p>I have decided to deploy my project to a cloud provider with an orchestration tool soon.</p> <p>After some research, I decided to use Kubernetes.</p> <p>But both Spring Cloud Netflix and Kubernetes provide solutions for distributed systems: service discovery, load balancing, fault tolerance, etc.</p> <p>In that case, using the Netflix libs seems unnecessary with Kubernetes.</p> <p>I read <a href="http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/" rel="nofollow noreferrer">this</a> and <a href="https://github.com/fabric8io/spring-cloud-kubernetes#spring-cloud-kubernetes" rel="nofollow noreferrer">this</a>. I think Spring Cloud Kubernetes looks like a workaround solution.</p> <p>So my questions are:</p> <ol> <li>Let's assume that a new dockerized microservices project will be started and we decide to use Kubernetes for orchestration. Can we say Netflix OSS is absolutely unnecessary?</li> <li>Let's assume that we have been working on the same project for a while and have used Netflix OSS, but we want to move to Kubernetes. In that case, which is the better solution, if the effort is not so different for these 2 options: <ol> <li>use Spring Cloud Kubernetes</li> <li>remove all Netflix libs from the microservices and try to convert to pure Kubernetes solutions.</li> </ol></li> </ol>
<p>I think that Christian Posta article you refer to is very good. As he says, you can deal with the most common use-cases with the out of the box Kubernetes solutions for discovery (kub dns), load-balancing (with Services) and edge services/gateway (Ingress). </p> <p>As Christian also points out, if you need to dynamically discover services by actively querying rather than knowing what you are looking for then Spring Cloud Kubernetes can be better than going directly to Kubernetes Apis. If you need to refresh your app from a config change and see it update quickly without going through a rolling update (which would be needed if you were mounting the configmap as a volume) then Spring cloud Kubernetes config client could be of value. The ribbon integration could also be of value if you need client-side load-balancing. So you could start out without Spring Cloud Kubernetes and add parts of it if and when you find that it would help. I think it is better to think of the project as adding extra options and conveniences rather than alternatives to Kubernetes-native solutions.</p> <p>It is also worth noting that you can deploy a Netflix stack app to Kubernetes (including using Zuul and eureka) and there isn't necessarily anything wrong with that. It has the advantage that you can work with it outside Kubernetes and it might be more convenient for your particular team if it's Java team. The main downside is that the Netflix stack is very tied to Java, whereas Kubernetes is language neutral.</p>
<p>Assume a VM with 4 cores. I have a Docker image which has a web application that provides some REST services. I am using K8S to deploy this application on that VM. So, is there any difference, in terms of performance, between using a single pod on the single VM vs. multiple pods on the same host?</p> <p>For people who don't know K8S, assume we have some application that provides some REST services. Is there any advantage to using multiple instances of such an application in terms of a performance increase, like an increased rate of serving requests?</p>
<p>Personally, I think performance is better when you run <code>multiple pods</code> on the same host. I don't know which <code>web server</code> you use, but requests are processed within <code>limited CPU time</code>, even if the server works with <code>multiple processes</code> or <code>threads</code>. Additionally, with multiple processes it's more efficient to utilize CPU time while <code>waiting on network I/O</code>. To improve <code>throughput</code>, you should scale the number of processes or instances horizontally, because otherwise the <code>response time</code> gets slower as load builds up.</p>
<p>When I execute</p> <blockquote> <p>kubectl proxy</p> </blockquote> <p>it gives me the output "Starting to serve on 127.0.0.1:8001" on the virtual machine.</p> <p>I want to see the dashboard from the host machine, but when I open the following URL it gives me a problem:</p> <blockquote> <p>192 168 113 8001 api namespaces kube system services https kubernetes dashboard proxy</p> </blockquote> <p>What is the problem? I didn't get it. I am new to Kubernetes. Thanks.</p>
<p>If you use kubectl proxy, you should then access the dashboard with the URL <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a> from the machine where you ran the kubectl proxy command.</p> <p>From <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</a> :</p> <blockquote> <p>You can access Dashboard using the kubectl command-line tool by running the following command:</p> <pre><code>kubectl proxy </code></pre> <p>Kubectl will handle authentication with apiserver and make Dashboard available at <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a>.</p> <p>The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.</p> </blockquote>
<p>I'm trying to secure NiFi in a Kubernetes cluster, behind a Traefik proxy. Both are running as services in K8S. Traefik is secured with a public certificate. I want it to redirect calls to NiFi, while securing the communication between Traefik (as an Ingress Controller) and the backend pods: NiFi.</p> <p>It looks like the secure configuration should live in my Ingress YAML descriptor. It also looks like I should issue a root CA to generate NiFi's self-signed certificate and load this root CA into Traefik so it can validate the certificate sent by NiFi while handshaking with it.</p> <p>But... I can't figure out 1) if this is the right approach, 2) how I can generate my stores (trust store, ...) for NiFi using a root CA, 3) how I should set up my YAML (<code>insecureSkipVerify</code> seems not to be supported, ...)</p> <p>Thanks in advance for your help.</p> <p>Cheers,</p> <p>Olivier</p>
<p>I had the same problem and could solve it with the <code>insecureSkipVerify</code> flag.<br> The problem with Traefik is that NiFi gets the request from Traefik and sends its self-signed certificate back to Traefik for handshaking. Traefik doesn't accept it, so the handshake fails, leading to a <code>bad_certificate</code> exception in NiFi (it has log level <code>DEBUG</code>, so you have to change the <code>logback.xml</code> file to see it).</p> <p>So one solution could be to add your self-signed certificate to Traefik, which is not possible at the moment, <a href="https://github.com/containous/traefik/issues/3038" rel="nofollow noreferrer">see this (currently) open issue</a>.</p> <p>Another solution, without 'insecuring' your existing Traefik, would be to add an <code>nginx</code> between Traefik and NiFi. Then Traefik talks <code>HTTP</code> with nginx, which talks <code>HTTPS</code> with NiFi (this will be the next thing I'm trying).</p> <p>Or you can set the <code>insecureSkipVerify</code> flag within Traefik like I did in this <code>daemonset.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2018-06-21T16:18:46Z
  generation: 4
  labels:
    k8s-app: traefik-internal
    release: infrastructure
  name: traefik-internal
  namespace: infrastructure
  resourceVersion: "18860064"
  selfLink: /apis/extensions/v1beta1/namespaces/infrastructure/daemonsets/traefik-internal
  uid: c64a20e1-776e-11f8-be83-42010a9c0ff6
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: traefik-internal
      name: traefik-internal
      release: infrastructure
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: traefik-internal
        name: traefik-internal
        release: infrastructure
    spec:
      containers:
      - args:
        - --api
        - --ping
        - --defaultEntryPoints=http,https
        - --logLevel=INFO
        - --accessLog
        - --kubernetes
        - --kubernetes.ingressClass=traefik-internal
        - --metrics.prometheus=true
        - --entryPoints=Name:https Address::443 TLS:/certs/cert.pem,/certs/cert.key CA:/certs/clientca.pem
        - --entryPoints=Name:http Address::80 Redirect.EntryPoint:https
        - --insecureSkipVerify=true
        image: traefik:1.6.0-rc6-alpine
        imagePullPolicy: IfNotPresent
        name: traefik-internal
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: traefik-internal-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: sa-traefik
      serviceAccountName: sa-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: traefik-internal-certs
        secret:
          defaultMode: 420
          secretName: traefik-internal
  templateGeneration: 4
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 4
  updatedNumberScheduled: 3
</code></pre> <p>The <code>insecureSkipVerify</code> flag is set within <code>spec.containers.args</code>.</p> <p>Hope that helps!</p>
<p>I followed the <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks" rel="nofollow noreferrer">guide here</a> (Grant AKS access to ACR), but am still getting "unauthorized: authentication required" when a Pod is attempting to pull an image from ACR.</p> <p>The bash script executed without any errors. I have tried deleting my <code>Deployment</code> and creating it from scratch <code>kubectl apply -f ...</code>, no luck.</p> <p>I would like to avoid using the 2nd approach of using a secret.</p>
<p>The link you posted in the question describes the correct steps to authenticate with Azure Container Registry from Azure Kubernetes Service. I have tried it before and it works well.</p> <p>So I suggest you check that the service-principal-ID and service-principal-password are correct in the command <code>kubectl create secret docker-registry acr-auth --docker-server &lt;acr-login-server&gt; --docker-username &lt;service-principal-ID&gt; --docker-password &lt;service-principal-password&gt; --docker-email &lt;email-address&gt;</code>. And the secret you reference in the YAML file should also be checked to make sure it is the same as the secret you created.</p>
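<p>For that second point, a minimal sketch of how the secret is referenced from the pod template, assuming a secret named <code>acr-auth</code> created as in the command above (the deployment name and image are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.azurecr.io/my-app:latest   # placeholder ACR image
      imagePullSecrets:
      - name: acr-auth   # must match the secret created with kubectl create secret docker-registry
</code></pre>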
<p>I am trying to create a system health check. I would like to be able to determine how often the pods are restarting. Since the pods have liveness probes, they may continuously restart if something is wrong. Is there any way to check the time a pod restarted at, instead of just the number of restarts and the total time the pod has been alive?</p>
<p>You may want to use a <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/" rel="nofollow noreferrer">monitoring tool</a>, like cAdvisor/Heapster/Grafana/Prometheus.</p> <p>Another option you have is to deploy the <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics container</a>.</p> <p>Good articles to read: <a href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/" rel="nofollow noreferrer">Monitoring Kubernetes</a> and <a href="https://akomljen.com/get-kubernetes-cluster-metrics-with-prometheus-in-5-minutes/" rel="nofollow noreferrer">Get Kubernetes Cluster Metrics with Prometheus</a>. Also, read a similar question on <a href="https://stackoverflow.com/questions/41452930/how-can-i-alert-for-container-restarted">stackoverflow</a>.</p>
<p>I have a problem like this:</p> <pre><code>spec:
  hostname: kafka
  containers:
  - name: kafka
    image: registry/wurstmeister:latest
    imagePullPolicy: Always
    restartPolicy: Always
    env:
    - name: HOSTNAME_COMMAND
      valueFrom:
        fieldRef: {fieldPath: metadata.name}
    - name: KAFKA_ADVERTISED_LISTENERS
      value: INSIDE://:9092,OUTSIDE://_${HOSTNAME_COMMAND}:9094
</code></pre> <p>The first HOSTNAME_COMMAND reference resolves successfully, so I see the pod's hostname. I need the same for the second HOSTNAME_COMMAND reference, so I need to see something like this:</p> <pre><code>value: INSIDE://:9092,OUTSIDE://_kafka-6b97669c5c-lhfp7:9094
</code></pre> <p>Thank you.</p>
<p>You can use the <code>$(ENV_NAME)</code> format inside other config values. The details are here: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">Using environment variables inside of your config</a>.</p>
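<p>Applied to the manifest in the question, a sketch could look like this — Kubernetes expands <code>$(HOSTNAME_COMMAND)</code> because that variable is defined earlier in the same <code>env</code> list (whether the Kafka image then does what you want with the resulting value is a separate question):</p> <pre><code>env:
- name: HOSTNAME_COMMAND
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: KAFKA_ADVERTISED_LISTENERS
  # $(HOSTNAME_COMMAND) is substituted with the value of the variable defined above
  value: INSIDE://:9092,OUTSIDE://_$(HOSTNAME_COMMAND):9094
</code></pre>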
<p>I'm using the fabric8 maven plugin in order to deploy my Spring Boot services on Kubernetes.</p> <p>Nevertheless, I'd like to deploy other kinds of services, like PostgreSQL, HashiCorp Vault, and so on.</p> <p>So I have my <code>configmap.yaml</code>, <code>deploymentconfig.yaml</code>, <code>service.yaml</code> and <code>route.yaml</code> in order to deploy HashiCorp Vault, but they are not Maven projects.</p> <p>I think it's not mandatory to mavenize this kind of "project" in order to be able to deploy it on Kubernetes or OpenShift.</p> <p>Is there any way to use fabric8 as a CLI, or another idea, in order to avoid mavenizing this kind of project?</p> <p>I hope I've explained it well.</p>
<p>As @<a href="https://stackoverflow.com/users/873601/vishal-biyani">Vishal Biyani</a> says, the best choice for deploying instances of third-party applications is probably Helm. There are public charts available with installation instructions for <a href="https://hub.kubeapps.com/charts/stable/postgresql" rel="nofollow noreferrer">postgresql</a> and <a href="https://hub.kubeapps.com/charts/incubator/vault" rel="nofollow noreferrer">vault</a> and many others at <a href="https://hub.kubeapps.com/" rel="nofollow noreferrer">https://hub.kubeapps.com/</a> (as well as other charts out there in other repos which aren't listed in kubeapps).</p> <p>Alternatively you can check for examples or instructions for each individual tool that you want to install.</p>
<p>I am following this <a href="https://cloud.spring.io/spring-cloud-pipelines/single/spring-cloud-pipelines.html#jenkins-pipeline-k8s" rel="nofollow noreferrer">tutorial</a> and I am facing a build failure in the 'deploy to test' stage when this <a href="https://github.com/spring-cloud-samples/github-webhook-kubernetes" rel="nofollow noreferrer">repo</a> is compiled. The reason for the failure is that <code>sc-pipelines.yml</code> defines the <code>eureka</code> service with <code>coordinates: scpipelines/github-eureka:latest</code>, which is not getting compiled into the equivalent <code>target\k8s\eureka.yml</code>. The compiled yml has <code>image: :</code> for the container image. Could someone please help me with how to get this image value populated with <code>scpipelines/github-eureka:latest</code>?</p> <pre><code>12:39:25 Logging in to Kubernetes API [kubernetes.default:443], with cluster name [gke_kubproject-210508_asia-south1-b_cluster-1] and user [default]
12:39:25 Cluster "gke_kubproject-210508_asia-south1-b_cluster-1" set.
12:39:25 User "default" set.
12:39:25 Context "gke_kubproject-210508_asia-south1-b_cluster-1" created.
12:39:25 Switched to context "gke_kubproject-210508_asia-south1-b_cluster-1".
12:39:25 CLI version
12:39:25 Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
12:39:25 Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.5", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-02T23:42:40Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
12:39:25 Pipeline descriptor already parsed - will not parse it again
12:39:25 Deleting all possible entries with name [rabbitmq-github-webhook]
12:39:26 Error from server (NotFound): secrets "rabbitmq-github-webhook" not found
12:39:26 Error from server (NotFound): persistentvolumeclaims "rabbitmq-github-webhook" not found
12:39:26 Error from server (NotFound): pods "rabbitmq-github-webhook" not found
12:39:26 Error from server (NotFound): deployments.extensions "rabbitmq-github-webhook" not found
12:39:26 service "rabbitmq-github-webhook" deleted
12:39:26 Will deploy service with type [rabbitmq] name [rabbitmq-github-webhook] and coordinates []
12:39:26 Waiting for RabbitMQ to start
12:39:26 replicationcontroller "rabbitmq-github-webhook" deleted
12:39:26 Error from server (NotFound): error when deleting "target/k8s/rabbitmq-service.yml": services "rabbitmq-github-webhook" not found
12:39:26 Failed to delete app by [target/k8s/rabbitmq-service.yml] file. Continuing with the script
12:39:27 replicationcontroller/rabbitmq-github-webhook replaced
12:39:27 service/rabbitmq-github-webhook replaced
12:39:27 Deleting all possible entries with name [mysql-github-webhook]
12:39:27 secret "mysql-github-webhook" deleted
12:39:27 Error from server (NotFound): persistentvolumeclaims "mysql-github-webhook" not found
12:39:27 pod "mysql-github-webhook" deleted
12:39:44 Error from server (NotFound): deployments.extensions "mysql-github-webhook" not found
12:39:44 service "mysql-github-webhook" deleted
12:39:44 Will deploy service with type [mysql] name [mysql-github-webhook] and coordinates []
12:39:44 Waiting for MySQL to start
12:39:44 Generating secret with name [mysql-github-webhook]
12:39:45 Error from server (NotFound): secrets "mysql-github-webhook" not found
12:39:45 Failed to delete secret [mysql-github-webhook]. Continuing with the script
12:39:45 secret/mysql-github-webhook created
12:39:45 Error from server (NotFound): error when deleting "target/k8s/mysql.yml": pods "mysql-github-webhook" not found
12:39:45 Failed to delete app by [target/k8s/mysql.yml] file. Continuing with the script
12:39:45 Error from server (NotFound): error when deleting "target/k8s/mysql-service.yml": services "mysql-github-webhook" not found
12:39:45 Failed to delete app by [target/k8s/mysql-service.yml] file. Continuing with the script
12:39:45 pod/mysql-github-webhook replaced
12:39:46 service/mysql-github-webhook replaced
12:39:46 Deleting all possible entries with name [eureka-github-webhook]
12:39:46 Error from server (NotFound): secrets "eureka-github-webhook" not found
12:39:46 Error from server (NotFound): persistentvolumeclaims "eureka-github-webhook" not found
12:39:46 Error from server (NotFound): pods "eureka-github-webhook" not found
12:39:47 Error from server (NotFound): deployments.extensions "eureka-github-webhook" not found
12:39:47 Error from server (NotFound): services "eureka-github-webhook" not found
12:39:47 Will deploy service with type [eureka] name [eureka-github-webhook] and coordinates []
12:39:47 Deploying Eureka. Options - image name [:], app name [eureka-github-webhook], env [TEST]
12:39:47 error: error parsing target/k8s/eureka.yml: error converting YAML to JSON: yaml: line 13: mapping values are not allowed in this context
12:39:47 Failed to delete app by [target/k8s/eureka.yml] file. Continuing with the script
12:39:47 Error from server (NotFound): error when deleting "target/k8s/eureka-service.yml": services "eureka-github-webhook" not found
12:39:47 Failed to delete app by [target/k8s/eureka-service.yml] file. Continuing with the script
12:39:47 error: error parsing target/k8s/eureka.yml: error converting YAML to JSON: yaml: line 13: mapping values are not allowed in this context
12:39:47 Build step 'Execute shell' marked build as failure
12:39:47 Archiving artifacts
12:39:48 Finished: FAILURE
</code></pre> <p>sc-pipelines.yml</p> <pre><code>test:
  # list of required services
  services:
    - type: rabbitmq
      name: rabbitmq-github-webhook
    - type: mysql
      name: mysql-github-webhook
      database: example
    - type: eureka
      name: eureka-github-webhook
      coordinates: scpipelines/github-eureka:latest
</code></pre> <p>eureka.yml - Please notice the empty value for the <code>image</code> key.</p> <pre><code> 1 apiVersion: extensions/v1beta1
 2 kind: Deployment
 3 metadata:
 4   name: eureka-github-webhook
 5 spec:
 6   replicas: 1 # tells deployment to run 2 pods matching the template
 7   template: # create pods using pod definition in this template
 8     metadata:
 9       labels:
10         name: eureka-github-webhook
11     spec:
12       containers:
13       - name: eureka-github-webhook
14         image: :
15         ports:
16         - containerPort: 8761
17         env:
18         - name: PORT
19           value: "8761"
20         - name: eureka_client_serviceUrl_defaultZone
21           value: "http://eureka-github-webhook.sc-pipelines-prod:8761/eureka/"
22         livenessProbe:
</code></pre>
<p>It was a bug <a href="https://github.com/spring-cloud/spring-cloud-pipelines/issues/187" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-pipelines/issues/187</a>. Please try again using the latest commits from the master branch</p>
<p>I have configured my Kubernetes cluster in GCP as follows.</p> <p><strong>ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "ip-name"
    ingress.kubernetes.io/rewrite-target: "/"
    kubernetes.io/ingress.allow-http: "true"
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /myService/*
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre> <p><strong>app.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: gcr.io/project_name7/my-service:6.0
        imagePullPolicy: Always
        name: my-service
        ports:
        - containerPort: 4000
        readinessProbe:
          httpGet:
            path: /healthz
            port: 4000
          periodSeconds: 1
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 10
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    k8s-app: my-service
spec:
  type: NodePort
  ports:
  - name: my-service
    port: 80
    protocol: TCP
    targetPort: 4000
  selector:
    app: my-service
</code></pre> <p><strong>default-backend.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
kind: Service
apiVersion: v1
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: NodePort
</code></pre> <p>My health checks are all good and the load balancer is up and running. I can hit the default backend as well, but when I try to hit</p> <pre><code>GET http://ingress-ip/myService/api/test
</code></pre> <p>I get the following error in Postman with a 404 status code:</p> <pre><code>&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
&lt;head&gt;
  &lt;meta charset="utf-8"&gt;
  &lt;title&gt;Error&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;pre&gt;Cannot GET /myService/api/test&lt;/pre&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>But the API is available in the service: when I switch the service from NodePort to LoadBalancer, I can access the API with the LoadBalancer IP. Can anyone help me with this issue? How do I resolve the wildcard path, or how else can I access my API?</p>
<p>GCE ingress doesn't support <code>ingress.kubernetes.io/rewrite-target</code>, so your application pod is receiving paths of the form <code>/myService/api/test</code>, and it's returning the HTTP/404 because of that.</p>
<p>I was doing a practical where I was deploying a containerised sample application using Kubernetes.</p> <p>I was trying to run the container on Google Cloud Platform using Kubernetes Engine. But while deploying the container with the "kubectl run" command from Google Cloud Shell, it shows an error:</p> <p>"<strong><em>error: failed to discover supported resources: Get <a href="https://35.240.145.231/apis/extensions/v1beta1" rel="nofollow noreferrer">https://35.240.145.231/apis/extensions/v1beta1</a>: x509: certificate signed by unknown authority</em></strong>".</p> <p>From the error, I can tell that it's because the "SSL certificate" is not authorised.</p> <p>I even exported the config file that resides at "<strong><em>$HOME/.kube/config</em></strong>", but I am still getting the same error.</p> <p>Please help me understand the real issue behind this.</p> <p>Best,</p> <p>Swapnil Pawar</p>
<p>You may try the following steps:</p> <ol> <li><p>List all the available clusters:</p> <pre><code>$ gcloud container clusters list
</code></pre></li> <li><p>Depending upon how you have configured the cluster, if the cluster location is configured for a specific zone then:</p> <pre><code>$ gcloud container clusters get-credentials &lt;cluster_name&gt; --zone &lt;location&gt;
</code></pre> <p>or, if the location is configured for a region, then:</p> <pre><code>$ gcloud container clusters get-credentials &lt;cluster_name&gt; --region &lt;location&gt;
</code></pre></li> </ol> <p>The above command will update your kubectl config file <strong>$HOME/.kube/config</strong>.</p> <ol start="3"> <li>Now, the tricky part. If you have more than one cluster configured, then your <strong>$HOME/.kube/config</strong> will have two or more entries. You can verify it by doing a <strong>cat</strong> on the config file.</li> </ol> <p>To select a particular context/cluster, you need to run the following commands:</p> <pre><code>$ kubectl config get-contexts -o=name    # gives you a list of available contexts
$ kubectl config use-context &lt;CONTEXT_NAME&gt;
$ kubectl config set-context &lt;CONTEXT_NAME&gt;
</code></pre> <p>Now you may run <code>kubectl run</code>.</p>
<p>Can I run both Docker Swarm and Kubernetes on the same nodes? Can the Swarm overlay network and the Kubernetes internal cluster network work together?</p>
<p>Technically yes, but it's not as good an idea as it sounds at first. Unfortunately it confuses Kubernetes about the amount of resources available on the nodes.</p>
<p>I want to deploy a .NET Core app with an NGINX reverse proxy on Azure Kubernetes Service. What is the best way to organize the pods and containers?</p> <ol> <li><p>Two single-container pods: one pod for nginx and one pod for the app (.NET Core/Kestrel), so each one can scale independently of the other</p></li> <li><p>One multi-container pod: a single pod with two containers (one for nginx and one for the app)</p></li> <li><p>One single-container pod: a single container running both nginx and the .NET app</p></li> </ol> <p>I would choose the 1st option, but I don't know if it is the right choice; it would be great to know the pros and cons of each option.</p> <p>If I choose the 1st option, is it best to set affinity to put the nginx pod on the same node as the app pod? Or anti-affinity so they deploy on different nodes? Or no affinity/anti-affinity at all?</p>
<p>The best practice for inbound traffic in Kubernetes is to use the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource. This requires a bit of extra setup in AKS because there's no built-in ingress controller. You definitely don't want to do #2 because it's not flexible, and #3 is not possible to my knowledge.</p> <p>The Kubernetes <code>Ingress</code> resource is a configuration file that manages reverse proxy rules for inbound cluster traffic. This allows you to surface multiple services as if they were a combined API.</p> <p>To set up ingress, start by creating a public IP address in your auto-generated MC resource group:</p> <pre><code>az network public-ip create `
    -g MC_rg-name_cluster-name_centralus `
    -n cluster-name-ingress-ip `
    -l centralus `
    --allocation-method static `
    --dns-name cluster-name-ingress
</code></pre> <p>Now create an ingress controller. This is required to actually handle the inbound traffic from your public IP. It sits and listens to the Kubernetes API <code>Ingress</code> updates, and auto-generates an <code>nginx.conf</code> file.</p> <pre><code># Note: you'll have to install Helm and its service account prior to running this. See my GitHub link below for more information
helm install stable/nginx-ingress `
    --name nginx-ingress `
    --namespace default `
    --set controller.service.loadBalancerIP=ip.from.above.result `
    --set controller.scope.enabled=true `
    --set controller.scope.namespace="default" `
    --set controller.replicaCount=3

kubectl get service nginx-ingress-controller -n default -w
</code></pre> <p>Once that's provisioned, make sure to use this annotation on your <code>Ingress</code> resource: <code>kubernetes.io/ingress.class: nginx</code></p> <p>If you'd like more information on how to set this up, please see <a href="https://github.com/brbarnett/hello-k8s" rel="nofollow noreferrer">this GitHub readme</a> I put together this week. I've also included TLS termination with <code>cert-manager</code>, also installed with Helm.</p>
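<p>Once the controller is running, the <code>Ingress</code> resource itself is just a set of routing rules pointing at your app's internal <code>Service</code>. A minimal sketch with hypothetical service and host names:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx ingress controller installed above
spec:
  rules:
  - host: myapp.example.com              # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app            # placeholder ClusterIP service in front of the Kestrel app
          servicePort: 80
</code></pre>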
<p>I am new to Kubernetes. The whole setup is configured in Minikube; I am not sure whether that should be any different from other Kubernetes setups.</p> <p>I have created a pod in my setup, a Spring Boot application is running inside it on port 8080, and a service exposes it to the cluster on port 20080.</p> <p>I am running another pod inside the cluster where tcpdump is running. I have a requirement to dump the HTTP packets hitting the cluster on 20080. Please let me know how I can access the cluster interface from the tcpdump pod.</p> <p>I tried Google and tried using the cluster IP directly from the pod, but it didn't work.</p>
<p>The pod that is running tcpdump can only see its own netns, unless you run the pod with the <code>hostNetwork: true</code> option.</p> <p>So maybe what you can do is run the pod with the <code>hostNetwork: true</code> option, then use tcpdump to monitor the host's physical interface to grab the network packets on port 20080. You can also monitor the network interface of the pod that's running the Spring Boot app, if you can find that pod's network interface, which depends on the network configuration.</p>
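<p>A minimal sketch of such a capture pod — the image is just an assumption (any image that ships tcpdump will do), and the port filter should match your NodePort:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tcpdump-debug
spec:
  hostNetwork: true                 # share the node's network namespace
  containers:
  - name: tcpdump
    image: nicolaka/netshoot        # assumed image containing tcpdump
    command: ["tcpdump", "-i", "any", "-A", "tcp", "port", "20080"]
</code></pre>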
<p>I'm having trouble invoking functions using Kubeless. Here is the function spec:</p> <pre><code>---
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: smk
  namespace: smktest
spec:
  handler: hello.handler
  runtime: python2.7
  function: |
    import json
    def handler():
        return "hello world"
  deployment:
    spec:
      template:
        spec:
          containers:
          - env:
            - name: FOO
              value: bar
            name: "smk-deployment"
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
</code></pre> <p>When I try to call the function as below,</p> <pre><code>kubeless function call smk
</code></pre> <p>I get</p> <p><code>FATA[0000] Unable to find the service for smk</code></p> <p>Two-part question:</p> <ol> <li>How do I expose my function as a service?</li> <li>How do I specify environment variables needed by this function? Thank you</li> </ol> <p><strong><em>Update</em></strong>: Running <code>kubeless function ls --namespace=smktest</code> yields the below:</p> <pre><code>NAME    NAMESPACE    HANDLER          RUNTIME      DEPENDENCIES    STATUS
smk     smktest      hello.handler    python2.7                    MISSING: Check controller logs
</code></pre> <p>Next I tried <code>kubectl logs -n kubeless -l kubeless=controller</code>; there are tons of error logs but I don't see anything specific to this function.</p>
<blockquote> <p>When I try to call the function as below,</p> <p><code>kubeless function call smk</code></p> <p>I get</p> <p><code>FATA[0000] Unable to find the service for smk</code></p> <p>Running <code>kubeless function ls --namespace=smktest</code></p> </blockquote> <p>Then surely you would need to include the <code>--namespace=smktest</code> in your invocation command, too:</p> <pre><code>kubeless function call --namespace=smktest smk </code></pre> <hr> <blockquote> <p>How do I specify Environment variables needed by this function ? Thank you</p> </blockquote> <p>As best I can tell, there seems to be two approaches in use:</p> <ul> <li><a href="https://github.com/kubeless/kubeless/blob/v0.6.0/pkg/apis/kubeless/v1beta1/function.go#L45" rel="nofollow noreferrer">Provide a <code>Deployment</code> template</a>, which the <a href="https://github.com/kubeless/kubeless/blob/v0.6.0/pkg/controller/function_controller.go#L311" rel="nofollow noreferrer"><code>function controller</code> appears to merge</a> but as far as I know <code>container: image:</code> is required in a <code>Deployment</code>, so you'd have to specify one in order to get access to its <code>env:</code> declaration</li> <li>Otherwise "cheat" and use the <code>Pod</code>s <code>ServiceAccount</code> token to <a href="https://github.com/kubeless/functions/blob/d80825f1d21e2686a1f817a45d333729a360da8a/incubator/slack/bot.py#L11" rel="nofollow noreferrer">request cluster resources manually</a> which might include a <code>ConfigMap</code>, <code>Secret</code>, or even resolving your own <code>kubeless.io/function</code> manifest and pulling something out of its annotations or similar</li> </ul>
<p>I am getting started with Airflow and trying to use the KubernetesPodOperator, but I am having trouble with downloading images from private registries. I did some research but I couldn't find an answer to my problem.</p> <p>Putting it simply: can I use private images from DockerHub with the KubernetesPodOperator?</p>
<p>It looks like <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/kubernetes/pod_generator.py#L28" rel="nofollow noreferrer">pod_generator.PodGenerator</a> accepts some kind of object <code>kube_config</code> that <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/kubernetes/pod_generator.py#L142" rel="nofollow noreferrer">knows about imagePullSecrets</a>, but unfortunately <a href="https://github.com/apache/incubator-airflow/blob/1.10.0rc4/airflow/contrib/operators/kubernetes_pod_operator.py#L87" rel="nofollow noreferrer"><code>KubernetesPodOperator</code></a> doesn't provide any such <code>kube_config</code> to <code>PodGenerator</code></p> <p>As best I can tell, it's just an edge case that slipped through the cracks, although it looks like there is <a href="https://issues.apache.org/jira/browse/AIRFLOW-2854" rel="nofollow noreferrer">a Jira for that</a> which matches up with <a href="https://github.com/apache/incubator-airflow/pull/3697" rel="nofollow noreferrer">a corresponding GitHub PR</a>, but it isn't clear from looking at the changed files that it will 100% solve the problem you are describing. Perhaps weigh in on either the PR, or the Jira, or maybe even both, to ensure it is addressed.</p>
<p>I made a customer-pod (including an image with a Spring Boot application running on port 8080).</p> <p>** In the YAML file, I didn't explicitly write a container port number like <code>spec:containers:port:8080</code>.</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: customer-pod spec: containers: - name: customer-pod image: docker.temp.com/customer-service:8 </code></pre> <p>I can check it with <code>kubectl get pods -o wide</code>:</p> <pre><code>NAME           READY   STATUS    RESTARTS   AGE   IP           NODE
customer-pod   1/1     Running   0          43m   10.43.1.17   default
</code></pre> <p>After that, </p> <p>$ curl 10.43.1.17:8080<br> {"timestamp":"2018-08-22T05:58:58.938+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}</p> <p>I wonder how I can access the customer port (8080) without opening the port.<br> In the case of Docker, I know that I should explicitly specify a port in the run command to connect to it.<br> In Kubernetes, are ports automatically open?</p>
<p>You do not open ports in Kubernetes. By default, all pods can communicate with each other, and in most setups node-to-pod traffic will also be allowed. Regarding Docker, you mean the port mapping <code>-p</code> option, which is also not port opening (you can access a Docker container's ports via the container IP even if they are not mapped to be exposed outside of the container).</p> <p>If you want to restrict cross-pod networking you need to use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a> and a networking solution capable of enforcing them; a sketch of such a policy follows below.</p>
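<p>For illustration only, a minimal NetworkPolicy restricting ingress to the Spring Boot pod might look like the sketch below. The labels (<code>app: customer</code>, <code>role: frontend</code>) are hypothetical and would have to match your own pod labels, which the question's manifest does not define:</p>

<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-customer-8080
spec:
  podSelector:
    matchLabels:
      app: customer          # hypothetical label on the customer pod
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # only pods carrying this label may connect
    ports:
    - protocol: TCP
      port: 8080
</code></pre>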
<p>I am trying to create a Helm Chart with the following resources:</p> <ol> <li>Secret</li> <li>ConfigMap</li> <li>Service</li> <li>Job</li> <li>Deployment</li> </ol> <p>These are also in the order that I would like them to be deployed. I have put a hook in the Deployment so that it is post-install, but then Helm does not see it as a resource and I have to manually manage it. </p> <p>The Job needs the information in the Secret and ConfigMap, otherwise I would make that a pre-install hook. But I can't make everything a hook or nothing will be managed in my release.</p> <p>Does anyone have a solution or idea to be able to manage all of the resources within the Helm release AND make sure the Job finishes before the Deployment begins?</p> <p>My only thought right now is two make two Charts: One with 1-4 and the second with 5 which would depend on the first.</p>
<p>Helm collects all of the resources in a given Chart and it's dependencies, groups them by resource type, and then installs them in the following order (see <a href="https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29" rel="noreferrer">here</a> - Helm 2.10):</p> <ol> <li>Namespace</li> <li>ResourceQuota</li> <li>LimitRange</li> <li>PodSecurityPolicy</li> <li>Secret</li> <li>ConfigMap</li> <li>StorageClass</li> <li>PersistentVolume</li> <li>PersistentVolumeClaim</li> <li>ServiceAccount</li> <li>CustomResourceDefinition</li> <li>ClusterRole</li> <li>ClusterRoleBinding</li> <li>Role</li> <li>RoleBinding</li> <li>Service</li> <li>DaemonSet</li> <li>Pod</li> <li>ReplicationController</li> <li>ReplicaSet</li> <li>Deployment</li> <li>StatefulSet</li> <li>Job</li> <li>CronJob</li> <li>Ingress</li> <li>APIService</li> </ol> <p>During uninstallation of a release, the order is reversed (see <a href="https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L61" rel="noreferrer">here</a>).</p> <p>Following this logic, in your case when your Job resource is created, both the Secret and the ConfigMap will already be applied, but Helm won't wait for the Job to complete before applying the Deployment. If you split your Chart to two parts (1-4, 5) and install them sequentially you would still have the problem of the Deployment being possibly applied before the Job is completed. What I would suggest is splitting your Chart to two parts (1-3, 4-5), in which the the Job has a pre-install hook, which would make sure it completes before your Deployment is applied.</p>
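<p>To illustrate the suggested split, a minimal sketch of what the Job in the second chart (4-5) could look like is shown below. The names, image and referenced Secret/ConfigMap are placeholders for whatever your first chart (1-3) creates; Helm waits for a <code>pre-install</code> hook Job to run to completion before installing the rest of the release, which is what gives you the "Job before Deployment" ordering:</p>

<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-setup-job                    # placeholder name
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: setup
        image: my-setup-image:latest    # placeholder image
        envFrom:
        - secretRef:
            name: my-secret             # created by the first chart (1-3)
        - configMapRef:
            name: my-configmap          # created by the first chart (1-3)
</code></pre>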
<p>I created a cluster:</p> <pre><code>gcloud container clusters create test </code></pre> <p>so there will be 3 nodes:</p> <pre><code>kubectl get nodes NAME STATUS ROLES AGE VERSION gke-test-default-pool-cec920a8-9cgz Ready &lt;none&gt; 23h v1.9.7-gke.5 gke-test-default-pool-cec920a8-nh0s Ready &lt;none&gt; 23h v1.9.7-gke.5 gke-test-default-pool-cec920a8-q83b Ready &lt;none&gt; 23h v1.9.7-gke.5 </code></pre> <p>then I delete a node from the cluster</p> <pre><code>kubectl delete node gke-test-default-pool-cec920a8-9cgz node "gke-test-default-pool-cec920a8-9cgz" deleted </code></pre> <p>no new node is created.</p> <p>Then I delete all nodes. still there is no new node created. </p> <pre><code>kubectl get nodes No resources found. </code></pre> <p>Am I doing something wrong? I suppose it can automatically bring up new node if some node died.</p>
<p>After running <code>kubectl delete node gke-test-default-pool-cec920a8-9cgz</code>, run <code>gcloud compute instances delete gke-test-default-pool-cec920a8-9cgz</code></p> <p>This will actually delete the VM (<code>kubectl delete</code> only "disconnects" it from the cluster). GCP will recreate the VM and it will automatically rejoin the cluster. </p>
<p>I was configuring the gce ingress on my kubernetes cluster and I specified a Django application as default backend. The app enforces HTTPS so if you try to do a simple HTTP request Django will return a 301. Obviously HTTP health check will not pass. I was following this <a href="https://github.com/kubernetes/ingress-gce/blob/master/examples/backside-https/app.yaml" rel="nofollow noreferrer">example</a> to enable the HTTPS health check. Once the health check was spawned I manually edited the path in compute engine but from the Django app logs it seems that it hasn't received any requests, so it results UNHEALTHY and I can't get the ingress to work.</p> <p>How can I make the health check to work in this case?</p> <p>Configuration:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/tls-acme: "true" kubernetes.io/ingress.global-static-ip-name: web-static-ip spec: tls: - hosts: - foo.domain.it secretName: production-tls backend: serviceName: app servicePort: app-https apiVersion: v1 kind: Service metadata: name: app annotations: service.alpha.kubernetes.io/app-protocols: '{"app-https":"HTTPS"}' labels: component: app role: web spec: type: NodePort ports: - port: 12345 targetPort: 8000 protocol: TCP name: app-https selector: component: app role: web type: LoadBalancer apiVersion: apps/v1beta1 kind: Deployment metadata: name: app labels: component: app role: web spec: replicas: 1 template: metadata: labels: component: app role: web spec: containers: - name: app image: [my-image] imagePullPolicy: Always ports: - containerPort: 8000 envFrom: - configMapRef: name: app-config </code></pre>
<p>As a workaround, I disabled <code>DJANGO_SECURE_SSL_REDIRECT</code> and forced the ingress to accept only HTTPS requests. In the case of the GCE ingress that is done with <code>kubernetes.io/ingress.allow-http: "false"</code>.</p> <p>If you are using NGINX as the ingress controller, just set <code>nginx.ingress.kubernetes.io/ssl-redirect: "True"</code></p>
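<p>For reference, both annotations go in the Ingress metadata. A minimal sketch, reusing the names from the question (only the annotation lines matter here):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # GCE ingress: refuse plain HTTP entirely
    kubernetes.io/ingress.allow-http: "false"
    # NGINX ingress controller alternative:
    # nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  backend:
    serviceName: app
    servicePort: app-https
</code></pre>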
<p>Is it enabled by default? How could I set up an example audit log?</p> <p>I've tried:</p> <pre><code>minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Audit.LogOptions.Path=/var/log/apiserver/audit.log --extra-config=apiserver.Audit.LogOptions.MaxAge=30 --extra-config=apiserver.Audit.LogOptions.MaxSize=100 --extra-config=apiserver.Audit.LogOptions.MaxBackups=5 </code></pre> <p>I'm also busy reading through (trying out all the options might take a while as <code>minikube start ...</code> is not a quick process): <a href="https://github.com/kubernetes/minikube/issues/1609" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/1609</a></p>
<p>As far as I know and as per all available information, there is no way to enable the audit log on minikube. It's under implementation <a href="https://github.com/kubernetes/minikube/issues/2934" rel="nofollow noreferrer">right now</a>, and let's hope the Kubernetes team will release this feature shortly. </p>
<p>I want to add a role from namespace <code>kube-system</code> in my <code>MyRoleBinding.yaml</code> file like this:</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: myrolebinding namespace: default subjects: - kind: ServiceAccount name: myservice namespace: default apiGroup: "" roleRef: kind: Role name: system:controller:token-cleaner namespace: kube-system apiGroup: "" </code></pre> <p>But when I run <code>kubectl apply -f MyRoleBinding.yaml</code> I get: </p> <blockquote> <p>error: error validating "MyRoleBinding.yaml": error validating data: ValidationError(RoleBinding.roleRef): <strong>unknown</strong> field "namespace" in io.k8s.api.rbac.v1.RoleRef; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>I am running in the <code>default</code> namespace; is it because of this?<br> I tried to run:<br> <code>kubectl apply -f MyRoleBinding.yaml --namespace=kube-system</code> but I am getting the same error. </p> <p>I also tried to add an existing role in the <code>default</code> namespace by using: </p> <pre><code>roleRef: kind: Role name: read-pods namespace: default apiGroup: "" </code></pre> <p>and I got the same error. </p>
<p>The <code>roleRef</code> field doesn't support namespace. You can use <code>roleRef</code> either with ClusterRoles which are not namespaced or with Roles which always has to be in the same namespace as the RoleBinding. See also the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#roleref-v1-rbac-authorization-k8s-io" rel="noreferrer">reference</a>.</p>
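<p>As a minimal sketch, a valid binding drops the <code>namespace</code> from <code>roleRef</code> and references either a Role in the RoleBinding's own namespace or a ClusterRole (here using the <code>read-pods</code> Role from the question, assumed to exist in <code>default</code>):</p>

<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myrolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: myservice
  namespace: default
roleRef:
  kind: Role                 # or ClusterRole to reuse a cluster-wide role
  name: read-pods            # must live in the "default" namespace when kind is Role
  apiGroup: rbac.authorization.k8s.io
</code></pre>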
<p>A Kubernetes StatefulSet (v1.8) will automagically rollover the pods it is managing to a new image if you patch the image name in the StatefulSet spec -- orchestrating this kind of update is one of its main jobs, after all. However, if deployment of a revised image fails at the image pull step (say, there was a typo in the image name, so there is no image to pull), the first pod it tries to restart with the new image gets stuck in an ImagePullBackOff state and the rollover stops. That's fine, that's pretty much what you'd want and expect. However, if you then patch the StatefulSet spec again to correct the mistake, the set is not healed -- it just sits there in the broken state -- until you (manually) delete the failed pod, at which point everything proceeds as it should and it all works its way back to a healthy state.</p> <p>I'm thinking there should be a way to configure things so that this kind of recovery happens automatically when the image name is updated, just as it does for a normal deployment, rather than requiring manual intervention (especially since this is for a CI-type application, where this all needs to be automated anyway). I can't help but feeling like I'm just not finding the right option in the documentation, but poring over the docs and doing endless Google searches hasn't revealed a winning recipe. Is there a way to configure this kind of recovery to happen automatically, and if so, what is it?</p> <p>Failing that, is there a graceful way to automate detection of the failed state in order to trigger a scripted deletion of the failed pod? Everything I've tried seems really kludgy, with lots of polling and ad hoc timeouts, so I'm having trouble believing that's the Right Way to do it.</p>
<p>I've raised the same issue on <a href="https://github.com/kubernetes/kubernetes/issues/67250" rel="nofollow noreferrer">GitHub</a>. The answer right now is that, no, there is no option for that, and that the documentation is probably lacking. We are starting a discussion to see if there is any way to improve this. Right now, workarounds will probably rely on polling and timeouts. However, that depends on how you're deploying the StatefulSet in the first place. Some deployment tools (like Helm with the <code>--wait</code> option) can be configured to wait for all the pods to be running.</p>
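<p>For completeness, a polling workaround could look something like the sketch below. It is admittedly the kind of kludge the question hopes to avoid, and the label selector and timings are assumptions you would adapt to your StatefulSet:</p>

<pre><code>#!/bin/bash
# Assumed label selector for the StatefulSet's pods; adjust to yours
selector="app=myapp"

while true; do
  # Find pods stuck on a bad image (STATUS is the third column of kubectl get pods)
  stuck=$(kubectl get pods -l "$selector" --no-headers \
    | awk '$3 == "ImagePullBackOff" || $3 == "ErrImagePull" {print $1}')

  if [ -n "$stuck" ]; then
    echo "Deleting stuck pods: $stuck"
    # The StatefulSet controller recreates them with the corrected image
    kubectl delete pod $stuck
  fi
  sleep 30
done
</code></pre>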
<p>I am trying to get the "user" and "pw" variables to resolve in a heredoc that is wrapped with single quotes because it is part of a larger command. I believe that as described here <a href="https://stackoverflow.com/questions/4937792/using-variables-inside-a-bash-heredoc">Using variables inside a bash heredoc</a> the issue is due to single quotes around the entire command including the "END" portion, but I am not seeing an alternative to wrapping the entire command in quotes because the heredoc is being passed to the mongo shell as a single argument in kubernetes. The code is as follows:</p> <pre><code>#!/bin/bash user="theUser" pw="thePW" doc='mongo &lt;&lt;END use DATABASE db.auth("$user", "$pw") db.collection.find() END ' value=$(kubectl exec -it mongo-pod -- bash -ec "$doc") echo $value </code></pre>
<p>Put double quotes around the string in the variable assignment, so that variables will be expanded in it. Note that the inner quotes around the credentials become single quotes, since unescaped double quotes would end the assignment early.</p> <pre><code>doc="mongo &lt;&lt;END use DATABASE db.auth('$user', '$pw') db.collection.find() END " </code></pre>
<p>I have a K8S cluster launched using kops with topology=private and networking=weave. I want to mount an EFS volume to my pods, but the EFS endpoint is not resolving inside the pods.</p> <p>I suspect that because of the internal routing (Weave) the name resolution is not being forwarded to the subnet DNS server (correct me if I am wrong).</p> <p>How can I resolve this issue?</p> <p>Thanks in advance. :)</p>
<p>This issue was resolved after enabling "DNS hostname resolution" for the specific VPC.</p>
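<p>For reference, both VPC DNS attributes can be turned on from the console or with the AWS CLI; the VPC ID below is a placeholder for the VPC your kops cluster runs in:</p>

<pre><code># vpc-0abc123 is a placeholder VPC ID
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"
</code></pre>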
<p>We have a simple release test for a <code>Redis</code> chart. After running <code>helm test myReleaseName --tls --cleanup</code>, we got </p> <pre><code>RUNNING: myReleaseName-redis ERROR: timed out waiting for the condition </code></pre> <p>There are several issues in Github repository at <a href="https://github.com/helm/helm/search?q=timed+out+waiting+for+the+condition&amp;type=Issues" rel="noreferrer">https://github.com/helm/helm/search?q=timed+out+waiting+for+the+condition&amp;type=Issues</a> but I did not find a solution to it. </p> <p>What's going on here?</p>
<p>This looks puzzling at first and shows little information because <code>--cleanup</code> kills the pods after running. One can remove it to get more information. I thus reran the test with </p> <pre><code>helm test myReleaseName --tls --debug </code></pre> <p>Then use <code>kubectl get pods</code> to examine the pod used for testing. (It could have a different name.) </p> <pre><code>NAME READY STATUS RESTARTS AGE myReleaseName-redis 0/1 ImagePullBackOff 0 12h </code></pre> <p>From here, it is clearer that something is wrong with the image, and it turned out that <strong>the link used to pull the image was not correct</strong>. (Use <code>kubectl describe pod &lt;pod-name&gt;</code> and then you can find the link you used to pull the image.)</p> <p>Fix the link, and it worked.</p>
<p>I'm trying to automatically provision a loadbalancer on GCP by using the <code>ingress</code> object with our GKE cluster.</p> <p>I have three GKE deployments and each is available with a service on port <code>8080</code> with a unique nodePort. </p> <p>When using <code>ingress-fanout.yaml</code>, it creates 4 backend services instead of the 3 specified in the yaml. The 4th service defaults to all unmatched routes. I assume the 4th service is because we don't match unmapped routes in the yaml. </p> <p>How can one map unmatched routes to one of the services? Is that possible?</p> <p>Here's <code>ingress-fanout.yaml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: fanout-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "our-static-ip" ingress.gcp.kubernetes.io/pre-shared-cert: "our-ssl-cert" kubernetes.io/ingress.allow-http: "false" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" spec: rules: - host: our-website.com http: paths: - path: /* backend: serviceName: li-frontend servicePort: 8080 - path: /backend/* backend: serviceName: li-django servicePort: 8080 - path: /notifications/* backend: serviceName: li-notifications servicePort: 8080 </code></pre> <p><strong>Update:</strong> I removed many of the original questions and narrowed the scope of the question. When health checks started succeeding, that cleared the old issues.</p>
<p>First of all, "backends" have nothing to do with the "paths" you specified. "backends" on GCP Console are pointing to your GKE node pools.</p> <p>Ingress supports adding a default backend. You could have tried just searching for "ingress default backend". You can find documentation about this here: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#single-service-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#single-service-ingress</a></p> <p>Basically doing this will set a default backend when nothing else is matched:</p> <pre><code>spec: backend: serviceName: testsvc servicePort: 80 rules: [...your stuff here...] </code></pre>
<p>In my docker image I have a directory <code>/opt/myapp/etc</code> which has some files and directories. I want to create statefulset for my app. In that statefulset I am creating persistent volume claim and attach to <code>/opt/myapp/etc</code>. Statefulset <code>yaml</code> is attached below. Can anyone tell me how to attach volume to container in this case?</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: statefulset labels: app: myapp spec: serviceName: myapp replicas: 1 selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - image: 10.1.23.5:5000/redis name: redis ports: - containerPort: 6379 name: redis-port - image: 10.1.23.5:5000/myapp:18.1 name: myapp ports: - containerPort: 8181 name: port volumeMounts: - name: data mountPath: /opt/myapp/etc volumeClaimTemplates: - metadata: name: data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: standard resources: requests: storage: 5Gi </code></pre> <p>Here is the output of describe pod</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 3m (x4 over 3m) default-scheduler pod has unbound PersistentVolumeClaims Normal Scheduled 3m default-scheduler Successfully assigned controller-statefulset-0 to dev-k8s-2 Normal SuccessfulMountVolume 3m kubelet, dev-k8s-2 MountVolume.SetUp succeeded for volume "default-token-xpskd" Normal SuccessfulAttachVolume 3m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1" Normal SuccessfulMountVolume 3m kubelet, dev-k8s-2 MountVolume.SetUp succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1" Normal Pulling 2m kubelet, dev-k8s-2 pulling image "10.1.23.5:5000/redis" Normal Pulled 2m kubelet, dev-k8s-2 Successfully pulled image "10.1.23.5:5000/redis" Normal Created 2m kubelet, dev-k8s-2 Created container Normal Started 2m kubelet, dev-k8s-2 Started container Normal Pulled 1m (x4 over 2m) kubelet, dev-k8s-2 Container image "10.1.23.5:5000/myapp:18.1" already present on machine Normal Created 1m (x4 over 2m) kubelet, dev-k8s-2 Created container Normal Started 1m (x4 over 2m) kubelet, dev-k8s-2 Started container Warning BackOff 1m (x7 over 2m) kubelet, dev-k8s-2 Back-off restarting failed container </code></pre> <p>storageclass definition </p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: standard namespace: controller provisioner: kubernetes.io/cinder reclaimPolicy: Retain parameters: availability: nova </code></pre>
<p>Check if you have a storage class defined in your cluster:</p> <pre><code>kubectl get storageclass </code></pre> <p>If you are using the default storage class, e.g. host-path (in the case of minikube), then you do not need to include the storage class in your template:</p> <pre><code>volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 5Gi
</code></pre> <p>By specifying no storage class, k8s will go ahead and provision the persistent volume with the default storage class, which would be host-path in the case of minikube. Also make sure <code>/opt/myapp/etc</code> exists on the node where the pod is going to be scheduled.</p>
<p>I'm running a Google Kubernetes Engine with the "private-cluster" option. I've also defined "authorized Master Network" to be able to remotely access the environment - this works just fine. Now I want to setup some kind of CI/CD pipeline using Google Cloud Build - after successfully building a new docker image, this new image should be automatically deployed to GKE. When I first fired off the new pipeline, the deployment to GKE failed - the error message was something like: "Unable to connect to the server: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout". As I had the "authorized master networks" option under suspicion for being the root cause for the connection timeout, I've added 0.0.0.0/0 to the allowed networks and started the Cloud Build job again - this time everything went well and after the docker image was created it was deployed to GKE. Good.</p> <p>The only problem that remains is that I don't really want to allow the whole Internet being able to access my Kubernetes master - that's a bad idea, isn't it?</p> <p>Are there more elegant solutions to narrow down access by using allowed master networks and also being able to deploy via cloud build?</p>
<p>It's <strong>currently</strong> not possible to add Cloud Build machines to a VPC. Similarly, Cloud Build does not announce IP ranges of the build machines. So you can't do this today without creating a "ssh bastion instance" or a "proxy instance" on GCE within that VPC.</p> <p>I suspect this would change soon. GCB existed before GKE private clusters and private clusters are still a beta feature.</p>
<p>I have setup k8s cluster using kismatic and created CI/CD pipeline in gitlab. K8s cluster is in aws ec2 single instance. So how to connect gitlab with k8s cluster + helm to deploy the images. </p>
<p>All newer versions of Kubernetes have RBAC enabled, so the token the runner will ultimately be using will need a ClusterRoleBinding, or you won't be able to deploy outside of the namespace the token belongs to.</p> <p>1) If you use a runner with persistent state, such as an EC2 instance, then you can just copy over your working kubeconfig found in <code>~/.kube/config</code>.</p> <p>2) You can deploy a runner into Kubernetes. This is very simple; they have a Helm chart for a runner. You'll need to cluster-bind the service account the runner is deployed with, as the chart doesn't do it for you.</p> <p>3) If you're using their shared runners you can base64-encode a kubeconfig into a CI variable and create the file in a before script (see the sketch below).</p> <p>4) Integrate Kubernetes the GitLab way (it's in the UI); GitLab will then inject a kubeconfig into jobs that have an environment section. This is what I'm doing. The integration is a bit rough, but if you would like feature-branch sandbox deployments there's a way to have a stage that tears them down after merge, so it's quite nice. However, I'd skip their add-on buttons such as Helm and Ingress and do that myself instead, as you don't get to tweak any options.</p> <p>Other than for option 1, you then use a helm or kubectl image for your deploy jobs.</p>
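<p>A minimal sketch of option 3 for <code>.gitlab-ci.yml</code> follows; the variable name <code>KUBECONFIG_B64</code> and the kubectl image are assumptions, and the variable would hold the base64-encoded contents of your working kubeconfig:</p>

<pre><code>deploy:
  stage: deploy
  image: lachlanevenson/k8s-kubectl:latest   # assumption: any image that ships kubectl
  before_script:
    - mkdir -p ~/.kube
    # KUBECONFIG_B64 is a hypothetical CI/CD variable: base64 of ~/.kube/config
    - echo "$KUBECONFIG_B64" | base64 -d &gt; ~/.kube/config
  script:
    - kubectl apply -f k8s/                  # placeholder manifests directory
</code></pre>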
<p>I'm trying to scale my docker containers with Minikube on Windows 10 Enterprise Edition. However, I'm running into a few conflicts with Hyper-V and VirtualBox. I know Dockers require Hyper-V to run properly while Minikube requires VirtualBox to run (shows an error if Hyper-V is enabled) </p> <pre><code>C:\WINDOWS\system32&gt;minikube start Starting local Kubernetes v1.10.0 cluster... Starting VM... Downloading Minikube ISO 160.27 MB / 160.27 MB [============================================] 100.00% 0s E0822 11:42:07.898412 13028 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks. : This computer is running Hyper-V. VirtualBox won't boot a 64bits VM when Hyper-V is activated. Either use Hyper-V as a driver, or disable the Hyper-V hypervisor. (To skip this check, use --virtualbox-no-vtx-check). </code></pre> <p>If I disable, Hyper-V, I'm able to start minikube properly but Dockers does not work and shows an error to enable Hyper-V.</p> <p>I also tried running minikube with Hyper-V driver, but also get this error:</p> <pre><code>C:\WINDOWS\system32&gt;minikube start --vm-driver hyperv Starting local Kubernetes v1.10.0 cluster... Starting VM... E0822 11:44:32.323877 13120 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks. : no External vswitch found. A valid vswitch must be available for this command to run. Check https://docs.docker.com/machine/drivers/hyper-v/. </code></pre> <p>Any solution to this?</p>
<blockquote> <p>I also tried running minikube with Hyper-V driver, but also get this error:</p> </blockquote> <p>There is an explicit warning about that HyperV and vswitch situation in their docs: <a href="https://github.com/kubernetes/minikube/blob/v0.28.2/docs/drivers.md#hyperv-driver" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/v0.28.2/docs/drivers.md#hyperv-driver</a></p> <p>Although in <code>minikube</code>'s defense, it also does say <strong>right at the end</strong> the URL one should go to in order to read about the <code>--hyperv-virtual-switch</code> flag one should provide in addition to <code>--vm-driver=hyperv</code>.</p> <p>Navigating to that linked docker page, it even provides a step-by-step with screenshots example of how to accomplish that.</p>
<p>Hi I am managing a Kubernetes cluster that has been created with Kops. I recently updated Kubernetes to <code>1.9.9</code> and executed a rolling-update to the cluster. All seems to have gone well, except a new annoying problem arised:</p> <p>My Kubernetes api server becomes unavailable in a very sporadic unexpected manner. Sometimes it becomes unaccessible for a couple of minutes, sometimes for half an hour. And then if you try again it miraculously works again. This makes it quite hard for me to work with the Kubernetes cluster. Does anyone know why these error messages keep popping up?</p> <pre><code>Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>and</p> <pre><code>The connection to the server api.xxx.io was refused - did you specify the right host or port? </code></pre>
<p>I posted this same question on the <a href="https://github.com/kubernetes/kops/issues/5557" rel="nofollow noreferrer">kubernetes/kops</a> GitHub repository, where @justinsb pointed me in the right direction. He advised SSHing into the master node and doing a <code>docker ps</code> to see if all containers were stable.</p> <p>What I found by doing this is that my apiserver container was constantly crashing. That led me to the conclusion that my master node potentially didn't have enough compute power to handle all the requests. I changed the master EC2 instance from <code>t2.micro</code> to <code>t2.medium</code> and the problem disappeared.</p>
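<p>Since the cluster is managed by kops, a sketch of how such a resize is typically rolled out is below. The instance group name is cluster-specific (here assumed to be <code>master-us-east-1a</code>), and <code>$CLUSTER_NAME</code> plus your kops state store are assumed to be set up already:</p>

<pre><code># List instance groups to find the master's name
kops get instancegroups --name $CLUSTER_NAME

# Edit the master instance group and change machineType, e.g. t2.micro -&gt; t2.medium
kops edit ig master-us-east-1a --name $CLUSTER_NAME

# Apply the change and roll it out
kops update cluster --name $CLUSTER_NAME --yes
kops rolling-update cluster --name $CLUSTER_NAME --yes
</code></pre>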
<p>The environment I'm working with is a secure cluster running cockroach/gke. </p> <p>I have an approved default.client.root certificate which allows me to access the DB using root, but I can't understand how to generate new certificate requests for additional users. I've read the cockroachDB docs over and over, and it is explained how to manually generate a user certificate in a standalone config where the ca.key location is accessible, but not specifically how to do it in the context of Kubernetes. </p> <p>I believe that the image cockroachdb/cockroach-k8s-request-cert:0.3 is the start point but I cannot figure out the pattern for how to use it.</p> <p>Any pointers would be much appreciated. Ultimately I'd like to be able to use this certificate from an API in the same Kubernetes cluster which uses the pg client. Currently, it's in insecure mode, using just username and password.</p>
<p>The <code>request-cert</code> job is used as an init container for the pod. It will request a client or server certificate (the server certificates are requested by the CockroachDB nodes) using the K8S CSR API.</p> <p>You can see an example of a client certificate being requested and then used by a job in <a href="https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml" rel="nofollow noreferrer">client-secure.yaml</a>. The init container is run before your normal container:</p> <pre><code> initContainers: # The init-certs container sends a certificate signing request to the # kubernetes cluster. # You can see pending requests using: kubectl get csr # CSRs can be approved using: kubectl certificate approve &lt;csr name&gt; # # In addition to the client certificate and key, the init-certs entrypoint will symlink # the cluster CA to the certs directory. - name: init-certs image: cockroachdb/cockroach-k8s-request-cert:0.3 imagePullPolicy: IfNotPresent command: - "/bin/ash" - "-ecx" - "/request-cert -namespace=${POD_NAMESPACE} -certs-dir=/cockroach-certs -type=client -user=root -symlink-ca-from=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: client-certs mountPath: /cockroach-certs </code></pre> <p>This sends a CSR using the K8S API, waits for approval, and places all resulting files (client certificate, key for client certificate, CA certificate) in <code>/cockroach-certs</code>. If the certificate already exists as a K8S secret, it just grabs it.</p> <p>You can request a certificate for any user by just changing <code>--user=root</code> to the username you with to use.</p>
<p>While creating a service with NodePort in Play with Kubernetes, it is not exposing the URL to access the service.</p> <p>I think it's happening very frequently now; is anyone else facing this issue?</p>
<p>NodePort never "exposes URL". NodePort is only creating a specific forwarding/nat rules to make sure trafic from port X on all nodes running kube-proxy is "redirected" to endpoints of service Y.</p>
<p>How can be quota set on Kubernetes using label selector like: </p> <pre><code>apiVersion: v1 kind: ResourceQuota metadata: name: thequota-for-restrictedsoftware spec: hard: pods: "3" selector: matchLabels: restricted: true </code></pre>
<p>I've spent some time digging in the documentation and trying to find the answer to your question, and I'm not 100% sure, but it is not possible at this moment in the way you need it.</p>
<p>I'm struggling with finding a way to include the <a href="https://docs.helm.sh/chart_template_guide/#built-in-objects" rel="noreferrer">Release.Time</a> builtin as part of a helm name. </p> <p>If I just include it as: <code>name: {{ template "myapp.name" . }}-{{ .Release.Time }}</code></p> <p>Dry run shows this: <code>name: myapp-seconds:1534946206 nanos:143228281 </code></p> <p>It seems like this is a *timestamp.Timestamp object or something because <code>{{ .Release.Time | trimPrefix "seconds:" | trunc 10 }}</code> outputs <code>wrong type for value; expected string; got *timestamp.Timestamp </code></p> <p>I can hack the string parsing by doing: <code>{{ .Release.Time | toString | trimPrefix "seconds:" | trunc 10 }}</code>, but it seems like I should be able to call something on the Timestamp object to get the seconds. Is anyone aware of where the docs are for this? I can't find any reference to it at <a href="https://godoc.org/github.com/Masterminds/sprig" rel="noreferrer">https://godoc.org/github.com/Masterminds/sprig</a>.</p>
<p>To format a timestamp, you can use <code>date FORMAT TIME</code> from the Sprig docs. And because <code>.Release.Time</code> got removed in Helm 3, you have to use <code>now</code> instead:</p> <pre><code>{{ now | date &quot;20060102150405&quot; }} </code></pre> <p>Refer to the format options: <a href="https://golang.org/pkg/time/#Time.Format" rel="noreferrer">https://golang.org/pkg/time/#Time.Format</a></p> <p>But having a timestamp in the name is not a good idea because of upgrades.</p>
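<p>Applied to the name from the question (the <code>myapp.name</code> template is taken from there), that would look roughly like this:</p>

<pre><code>name: {{ template "myapp.name" . }}-{{ now | date "20060102150405" }}
</code></pre>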
<p>Is there a way to tell <code>k8s</code> to delete the oldest <code>pods</code> in the <code>deployment</code> first?</p> <p>For instance I have a <code>deployment</code> which consist of 3 pods. I then decided to scale it up by adding 3 more <code>pods</code>:</p> <pre><code>kubectl scale deployment some-deployment --replicas=6 </code></pre> <p>After finishing my tests I want it to be 3 pods again, so I scale it down:</p> <pre><code>kubectl scale deployment some-deployment --replicas=3 </code></pre> <p>Is there a way to configure deployment in a way so that during the scale down process the oldest pods would be removed first? What if I need it the other way around, delete more recent pods first?</p>
<p>This is an open <a href="https://github.com/kubernetes/kubernetes/issues/45509" rel="nofollow noreferrer">issue</a>. You may find <a href="https://github.com/kubernetes/kubernetes/issues/4301" rel="nofollow noreferrer">this related item</a> interesting. Currently you are not able to do it. Let's hope it will be fixed soon.</p>
<p>I was looking into Kubernetes Heapster and Metrics-server for getting metrics from the running pods. But the issue is, I need some custom metrics which might vary from pod to pod, and apparently Heapster only provides cpu and memory related metrics. Is there any tool already out there, which would provide me the functionality I want, or do I need to build one from scratch?</p>
<p>What you're looking for is application- &amp; infrastructure-specific metrics. For this, the <a href="https://www.influxdata.com/time-series-platform/" rel="nofollow noreferrer">TICK</a> stack could be helpful! Specifically, Telegraf can be set up to gather detailed infrastructure metrics like memory and CPU pressure, or even the resources used by individual Docker containers, network and IO metrics, etc. It can also scrape Prometheus metrics from pods. These metrics are then shipped to InfluxDB and visualized using either Chronograf or Grafana. </p>
<p>I have this weird error plaguing me.</p> <p>I am trying to get an activemq pod running with a kubernetes stateful set, volume attached. </p> <p>The activemq is just a plain old vanila docker image, picked it from here <a href="https://hub.docker.com/r/rmohr/activemq/" rel="nofollow noreferrer">https://hub.docker.com/r/rmohr/activemq/</a></p> <pre><code>INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@3fee9989: startup date [Thu Aug 23 22:12:07 GMT 2018]; root of context hierarchy INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb] INFO | KahaDB is version 6 INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) is starting INFO | Listening for connections at: tcp://activemq-0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector openwire started INFO | Listening for connections at: amqp://activemq-0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector amqp started INFO | Listening for connections at: stomp://activemq-0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector stomp started INFO | Listening for connections at: mqtt://activemq-0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector mqtt started WARN | [email protected]@65a15628{/,null,STARTING} has uncovered http methods for path: / INFO | Listening for connections at ws://activemq-0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600 INFO | Connector ws started INFO | Apache ActiveMQ 5.15.4 (localhost, ID:activemq-0-43279-1535062328969-0:1) started INFO | For help or more information please see: http://activemq.apache.org WARN | Store limit is 102400 mb (current store usage is 6 mb). The data directory: /opt/activemq/data/kahadb only has 95468 mb of usable space. 
- resetting to maximum available disk space: 95468 mb WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null} java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:336)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:304)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.security.SecurityHandler.doStart(SecurityHandler.java:391)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.security.ConstraintSecurityHandler.doStart(ConstraintSecurityHandler.java:449)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.Server.start(Server.java:387)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at 
org.eclipse.jetty.server.Server.doStart(Server.java:354)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.22.v20170606.jar:9.2.22.v20170606] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171] at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171] at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:265)[spring-core-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.config.MethodInvokingBean.invokeWithTargetException(MethodInvokingBean.java:119)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.config.MethodInvokingFactoryBean.afterPropertiesSet(MethodInvokingFactoryBean.java:106)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1692)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1630)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:742)[spring-beans-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)[spring-context-4.3.17.RELEASE.jar:4.3.17.RELEASE] at org.apache.xbean.spring.context.ResourceXmlApplicationContext.&lt;init&gt;(ResourceXmlApplicationContext.java:64)[xbean-spring-4.2.jar:4.2] at org.apache.xbean.spring.context.ResourceXmlApplicationContext.&lt;init&gt;(ResourceXmlApplicationContext.java:52)[xbean-spring-4.2.jar:4.2] at org.apache.activemq.xbean.XBeanBrokerFactory$1.&lt;init&gt;(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4] at org.apache.activemq.xbean.XBeanBrokerFactory.createApplicationContext(XBeanBrokerFactory.java:104)[activemq-spring-5.15.4.jar:5.15.4] at 
org.apache.activemq.xbean.XBeanBrokerFactory.createBroker(XBeanBrokerFactory.java:67)[activemq-spring-5.15.4.jar:5.15.4] at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:71)[activemq-broker-5.15.4.jar:5.15.4] at org.apache.activemq.broker.BrokerFactory.createBroker(BrokerFactory.java:54)[activemq-broker-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:87)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:154)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:63)[activemq-console-5.15.4.jar:5.15.4] at org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)[activemq-console-5.15.4.jar:5.15.4] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171] at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171] at org.apache.activemq.console.Main.runTaskClass(Main.java:262)[activemq.jar:5.15.4] at org.apache.activemq.console.Main.main(Main.java:115)[activemq.jar:5.15.4] </code></pre> <p>The kubernete activemq pod is running fine if we don't define it with stateful sets.</p> <p>Below is the spec</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: activemq namespace: dev labels: app: activemq spec: replicas: 1 serviceName: activemq-svc selector: matchLabels: app: activemq template: metadata: labels: app: activemq spec: securityContext: runAsUser: 1000 fsGroup: 2000 runAsNonRoot: false containers: - name: activemq image: "mydocker/amq:latest" imagePullPolicy: "Always" ports: - containerPort: 61616 name: port-61616 - containerPort: 8161 name: port-8161 volumeMounts: - name: activemq-data mountPath: "/opt/activemq/data" restartPolicy: Always imagePullSecrets: - name: regsecret tolerations: - effect: NoExecute key: appstype operator: Equal value: ibd-mq affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: appstype operator: In values: - dev-mq volumeClaimTemplates: - metadata: name: activemq-data spec: accessModes: - ReadWriteOnce storageClassName: "gp2-us-east-2a" resources: requests: storage: 100Gi </code></pre>
<blockquote> <p>WARN | Failed startup of context o.e.j.w.WebAppContext@478ee483{/admin,file:/opt/apache-activemq-5.15.4/webapps/admin/,null}</p> <p>java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false</p> </blockquote> <p>Unless you altered the <code>activemq</code> userid in your image, then that filesystem permission issue is caused by this stanza in your <code>PodSpec</code>:</p> <pre><code>spec: securityContext: runAsUser: 1000 fsGroup: 2000 runAsNonRoot: false </code></pre> <p>failing to match up with the userid configuration in <code>rmohr/activemq:5.15.4</code>:</p> <pre><code>$ docker run -it --entrypoint=/bin/bash rmohr/activemq:5.15.4 -c 'id -a' uid=999(activemq) gid=999(activemq) groups=999(activemq) </code></pre>
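<p>Assuming you keep the stock image (and therefore its <code>activemq</code> uid/gid of 999), one way to make the mounted volume writable is to align the pod's security context with it, roughly like this:</p>

<pre><code>spec:
  securityContext:
    runAsUser: 999   # matches the activemq user baked into rmohr/activemq
    fsGroup: 999     # makes the mounted volume group-writable by activemq
</code></pre>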
<p>I'm trying to expose a Kubernetes pod on a single node bare metal cluster without a domain. In my understanding I've the these options:</p> <ol> <li>Expose using NodePort</li> <li>Expose using an Ingress controller</li> <li>Expose using ClusterIP and manually set an external IP</li> </ol> <p>As I mentioned already, I only have a single node cluster. This means that the master is master and node at the same time directlly running on a fedora host system.</p> <p>The simplest solution is to use a NodePort. But the limitation here is (if I'm right), that the service port will be automatically selected from a given port range.</p> <p>The next better solution is to use an ingress controller. But for this I need a public domain which I haven't. So the ingress controller also doesn't fit to me.</p> <p>What for other options do I have? I just want to expose my service directly on port 9090.</p>
<p>Why not option 3? You can set <code>externalIPs</code> on the service to your node IP.</p> <pre><code>apiVersion: v1 kind: Service ... spec: externalIPs: - your node ip </code></pre> <p>Also, with NodePort the port can be specified explicitly instead of being auto-assigned (see the sketch below).</p>
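<p>A minimal NodePort sketch with an explicit port follows; the names and labels are placeholders, and note that the chosen <code>nodePort</code> must fall inside the cluster's NodePort range (30000-32767 by default), so plain 9090 won't be accepted unless that range has been changed:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder
spec:
  type: NodePort
  selector:
    app: my-app             # placeholder label selector
  ports:
  - port: 9090              # ClusterIP port
    targetPort: 9090        # container port
    nodePort: 30090         # fixed node port within 30000-32767
</code></pre>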
<p>I'm flabbergasted.</p> <p>I have a staging and production environment. Both environments have the same deployments, services, ingress, firewall rules, and both serve a <code>200</code> on <code>/</code>. </p> <p>However, after turning on the staging environment and provisioning the same ingress, the staging service fails with <code>Some backend services are in UNKNOWN state</code>. Production is still live.</p> <p>Both the frontend and backend pods are ready on GKE. I've manually tested the health checks and they pass when I visit <code>/</code>.</p> <p>I see nothing in the logs or gcp docs pointing in the right direction. What could I have possibly broken?</p> <p><code>ingress.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: fanout-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "STATIC-IP" spec: backend: serviceName: frontend servicePort: 8080 tls: - hosts: - &lt;DOMAIN&gt; secretName: staging-tls rules: - host: &lt;DOMAIN&gt; http: paths: - path: /* backend: serviceName: frontend servicePort: 8080 - path: /backend/* backend: serviceName: backend servicePort: 8080 </code></pre> <p><code>frontend.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: frontend name: frontend namespace: default spec: ports: - nodePort: 30664 port: 8080 protocol: TCP targetPort: 8080 selector: app: frontend type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: generation: 15 labels: app: frontend name: frontend namespace: default spec: progressDeadlineSeconds: 600 replicas: 1 selector: matchLabels: app: frontend minReadySeconds: 5 template: metadata: labels: app: frontend spec: containers: - image: &lt;our-image&gt; name: frontend ports: - containerPort: 8080 protocol: TCP readinessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 3 livenessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 3 </code></pre>
<p>Yesterday even this guide <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a> didn't work. I don't know what happened, but even after waiting 30+ minutes the ingress was reporting UNKNOWN state for the backends. </p> <p>After 24 hours, things seem to be much better. The L7 HTTP ingress works, but with a big delay before healthy backends are reported. </p>
<p>Is there any method of setting a <strong>default</strong> QoS for all of the pods from a given namespace? Or just setting QoS at the namespace level instead of the pod level? I mean the situation where every existing and every newly created pod will have this default setting for the given namespace.</p> <p>E.g. all pods in namespace <code>kube-system</code> or <code>super-important</code> get the QoS level "Guaranteed". </p> <p>I want to set the QoS in such a way that in case of any problems with resources, pods from <code>kube-system</code> (or other important groups/namespaces) are removed last, later than the less important ones.</p> <p>P.S. I got advice about admission controllers but I didn't see the correlation to QoS.</p>
<p>It's possible. Some basic information about QoS:</p> <blockquote> <p>Kubernetes uses QoS classes to make decisions about scheduling and evicting Pods.</p> <h3>For a Pod to be given a QoS class of Guaranteed:</h3> <ul> <li>Every Container in the Pod must have a memory limit and a memory request, and they must be the same.</li> <li>Every Container in the Pod must have a CPU limit and a CPU request, and they must be the same.</li> </ul> <h3>A Pod is given a QoS class of Burstable if:</h3> <ul> <li>The Pod does not meet the criteria for QoS class Guaranteed. At least one Container in the Pod has a memory or CPU request.</li> </ul> <h3>For a Pod to be given a QoS class of BestEffort:</h3> <ul> <li>The Containers in the Pod must not have any memory or CPU limits or requests.</li> </ul> </blockquote> <p>Here is an example of how to set the Guaranteed QoS for all pods in the namespace qos-test.</p> <p>Let's create a namespace qos-test:</p> <pre><code>$ kubectl create namespace qos-test </code></pre> <p>Next, let's create a LimitRange object YAML file (CPU and Memory should be present, limits and requests should be the same):</p> <pre><code>$ cat &lt;&lt;EOF &gt; limitRange.yaml apiVersion: v1 kind: LimitRange metadata: name: mem-limit-gua spec: limits: - default: cpu: 100m memory: 512Mi defaultRequest: cpu: 100m memory: 256Mi type: Container EOF </code></pre> <p>Then let’s apply it to the namespace qos-test:</p> <pre><code>$ kubectl create -f limitRange.yaml --namespace=qos-test </code></pre> <p>Now, let's create a pod (CPU or Memory requests and limits must not be present in Pod spec):</p> <pre><code>$ cat &lt;&lt;EOF &gt; default-pod.yaml apiVersion: v1 kind: Pod metadata: name: default-pod spec: containers: - name: default-ctr image: nginx EOF $ kubectl create -f default-pod.yaml --namespace=qos-test </code></pre> <p>Finally, let's check what we've got:</p> <pre><code>$ kubectl get namespaces ... $ kubectl get limitrange --all-namespaces -o wide ... $ kubectl get limitrange -o yaml -n qos-test ... $ kubectl get pod default-pod -o yaml -n qos-test apiVersion: v1 kind: Pod metadata: ... name: default-pod namespace: qos-test ... spec: containers: - image: nginx imagePullPolicy: Always name: default-ctr resources: limits: cpu: 100m memory: 256Mi requests: cpu: 100m memory: 256Mi ... status: ... hostIP: 10.0.2.15 phase: Running podIP: 172.17.0.10 qosClass: Guaranteed ... </code></pre> <p>As you can see <code>default-pod</code> got the <code>qosClass: Guaranteed</code>.</p> <p>To create pods with different QoS please refer to this part of documentation:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Configure Quality of Service for Pods</a></li> </ul> <p>You can find more details about configuring default limits for a Namespace in this article:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/" rel="nofollow noreferrer">Configure Default Memory Requests and Limits for a Namespace</a></li> </ul>
<p>I have created a fresh kubernetes cluster on VirtualBox. I have one master node and two worker nodes. I want to use the dashboard but I get error. When I run <code>kubectl proxy</code> the master start listening on port 8001:</p> <pre><code>Starting to serve on 127.0.0.1:8001 </code></pre> <p>and then I create a SSH tunnel on my host:</p> <pre><code>ssh [email protected] -N -L 8001:127.0.0.1:8001 </code></pre> <p>but when I type <code>localhost:8001/ui</code> on the browser I get this:</p> <pre><code>{ "paths": [ "/apis", "/apis/", "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1", "/healthz", "/healthz/etcd", "/healthz/ping", "/healthz/poststarthook/generic-apiserver-start-informers", "/healthz/poststarthook/start-apiextensions-controllers", "/healthz/poststarthook/start-apiextensions-informers", "/metrics", "/openapi/v2", "/swagger-2.0.0.json", "/swagger-2.0.0.pb-v1", "/swagger-2.0.0.pb-v1.gz", "/swagger.json", "/swaggerapi", "/version" ] } </code></pre> <p>And if I want to get into the dashboard through api directly :</p> <pre><code>https://10.10.10.50:6443/api/v1/namespaces/kube-system/services </code></pre> <p>I get this error:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "services is forbidden: User \"system:anonymous\" cannot list services in the namespace \"kube-system\"", "reason": "Forbidden", "details": { "kind": "services" }, "code": 403 } </code></pre> <p>I know this is related to authentication but I don't know how should I define username/password on my cluster or share keys among master nodes and my host. Or how should I use the keys inside .kube/config file.</p>
<blockquote> <p>I want to use the dashboard but I get error.</p> </blockquote> <p>According to the excerpts from <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">the dashboard documentation</a>, after a successful default installation of the dashboard (the default Dashboard deployment contains a minimal set of RBAC privileges needed to run):</p> <ul> <li><p>To access Dashboard from your local workstation you must create a secure channel to your Kubernetes cluster. Run the following command:</p> <pre><code>kubectl proxy </code></pre></li> <li><p>Now access Dashboard at:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre></li> </ul> <p>There is no need to create an SSH tunnel or make any other changes: run <code>kubectl proxy</code> on the workstation that holds your kubeconfig and open the URL above. In my experience the default dashboard installation works out of the box; if you have configured authentication or RBAC separately, that is the most likely source of the issue.</p>
<p>I'm trying to get my head around Persistent Volumes &amp; Persistent Volume Claims and how it should be done in Helm...</p> <p>The <strong>TLDR</strong> version of the question is: How do I create a PVC in helm that I can attach future releases (whether upgrades or brand new installs) to? </p> <p><strong>My current understanding:</strong></p> <p>PV is an interface to a piece of physical storage. PVC is how a pod claims the existence of a PV for its own use. When the pod is deleted, the PVC is also deleted, but the PV is maintained - and is therefore persisted. But then how I do use it again?</p> <p>I know it is possible to dynamically provision PVs. Like with Google Cloud as an example if you create <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="noreferrer"><strong>ONLY</strong> a PVC</a>, it will automatically create a PV for you.</p> <p>Now this is the part I'm stuck on...</p> <p>I've created a helm chart that explicitly creates the PVC &amp; thus has a dynamically created PV as part of a release. I then later <em>delete</em> the release, which will then also remove the PVC. The cloud provider will <em>maintain</em> the PV. On a subsequent install of the same chart with a new release... How do I reuse the old PV? Is there a way to actually do that?</p> <p>I did find <a href="https://stackoverflow.com/questions/49344501/how-can-you-reuse-dynamically-provisioned-persistentvolumes-with-helm-on-gke">this question</a> which kind of answers it... However, it implies that you need to pre-create PVs for each PVC you're going to need, and the whole point of the replicas &amp; auto-scaling is that all of those should be generated on demand.</p> <p>The use case is - as always - for test/dev environments where I want my data to be persisted, but I don't always want the servers running.</p> <p>Thank you in advance! My brain's hurting a bit cause I just can't figure it out... >.&lt;</p>
<p>It will be a headache indeed.</p> <p>Let's start with how you would achieve scalable deployments with RWO storage that is attached to each individual pod when it comes up. This is where <code>volumeClaimTemplates</code> come into play: they let PVCs be created dynamically as your workload scales. However, this suits the situation where a pod needs storage attached while it runs but not after the pod goes away (the volume can then be reused according to its reclaim policy).</p> <p>If you need that same data reattached when a pod fails, you should look at StatefulSets, which solve at least that part.</p> <p>Now, if you pre-create the PVC explicitly, you have more control over what happens, but dynamic scaling will have problems with this for RWO volumes. That, combined with manual PV management as in the answer you linked, can actually achieve volume reuse, and it is the only mechanism I can think of that would allow it.</p> <p>After you hit a wall like this, it's time to think about alternatives. For example, why not use a StatefulSet, which gives you storage retention in a running cluster, and instead of deleting the chart, set its replicas to 0, retaining the non-compute resources in place while scaling the workload down to nothing. When you scale back up, the still-bound PVCs are reattached to the recreated pods (see the sketch below).</p>
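<p>For illustration, a minimal sketch of such a StatefulSet in a chart template; the names (<code>my-app</code>, <code>data</code>) and the storage size are hypothetical, not taken from your chart:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  serviceName: {{ .Release.Name }}-my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
</code></pre> <p>Scaling this down with <code>kubectl scale statefulset &lt;name&gt; --replicas=0</code> leaves the PVCs (and their PVs) in place; scaling back up reattaches them to the recreated pods.</p>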
<p>I am following a guide to install Kubernetes on AWS EC2 instances using kops (<a href="https://medium.com/containermind/how-to-create-a-kubernetes-cluster-on-aws-in-few-minutes-89dda10354f4" rel="nofollow noreferrer">Link</a>). I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control-plane and etcd nodes. Is it possible to set an IP in some configuration file so that my cluster is created with a specific IP on my control node and my etcd node? If a control node restarts and does not have an Elastic IP, its address changes and a large number of issues start. I want to prevent this problem, or at least change my control node's IP after the deployment.</p>
<blockquote> <p>I want to install a Kubernetes cluster, but I want assign Elastic IP at least to my control and etcd nodes</p> </blockquote> <p>The correct way, and the way almost every provisioning tool that I know of does this, is to use either an Elastic Load Balancer (ELB) or the new Network Load Balancer (NLB) to put an abstraction layer in front of the master nodes for exactly that reason. So it does one step better than just an EIP and assigns one EIP per Availability Zone (AZ), along with a stable DNS name. It's my recollection that the masters can also keep themselves in sync with the ELB (unknown about the NLB, but certainly conceptually possible), so if new ones come online they register with the ELB automatically</p> <p>Then, a similar answer for the etcd nodes, and for the same reason, although as far as I know etcd has no such ability to keep the nodes in sync with the fronting ELB/NLB so that would need to be done with the script that provisions any new etcd nodes</p>
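<p>With kops specifically, I believe this is expressed in the cluster spec (editable via <code>kops edit cluster</code>, or set at creation time with the <code>--api-loadbalancer-type</code> flag); a fragment along these lines, to be checked against the kops docs for your version:</p> <pre><code>apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: my-cluster.example.com
spec:
  api:
    loadBalancer:
      type: Public   # kops then fronts the masters with an ELB and a stable DNS name
</code></pre>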
<p>I can run an ejabberd service on Docker Swarm. How do I deploy an ejabberd service on Kubernetes?</p> <p>YML for swarm:</p> <pre><code>version: "3.3" services: ejabberd: image: ejabberd/ecs:18.03 networks: - backend ports: - 5222:5222 - 5269:5269 - 5280:5280 volumes: - ./ejabberd/database/:/home/ejabberd/database/ - ./ejabberd/logs/:/home/ejabberd/logs/ - ./ejabberd/conf/ejabberd.yml:/home/ejabberd/conf/ejabberd.yml networks: backend: external: true </code></pre>
<p>This is a basic Deployment YAML file for Kubernetes, without volumes. You can also add volume mounts, depending on how you are using them, and a Service to expose the ports (see the sketch after the manifest).</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ejabberd-deployment labels: app: ejabberd spec: replicas: 1 selector: matchLabels: app: ejabberd template: metadata: labels: app: ejabberd spec: containers: - name: ejabberd image: ejabberd/ecs ports: - containerPort: 5280 - containerPort: 5222 - containerPort: 5269 </code></pre>
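<p>To let other pods (or external clients) reach those ports you would typically add a Service selecting the same labels; the host-path volumes from the compose file map most naturally onto a PersistentVolumeClaim or a ConfigMap for <code>ejabberd.yml</code>. This is only a rough sketch under those assumptions, not a verified ejabberd setup:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ejabberd
spec:
  selector:
    app: ejabberd        # matches the Deployment's pod labels
  ports:
  - name: xmpp-client
    port: 5222
  - name: xmpp-server
    port: 5269
  - name: http
    port: 5280
</code></pre>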
<p>I need to generate my own SSL certificates for Kubernetes cluster components (apiserver, apiserver-kubelet-client, apiserver-etcd-client, front-proxy-client etc.). The reason for this is because Validity period for those certificates are set to 1 year by default and I need to have validity set to more than one year, because of my business reasons. When I generated my own set of certificates and initialized cluster, everything worked perfectly - PODs in kube-system namespaces started, comunication with apiserver worked. But I encoutered that some commands like <code>kubectl logs</code> or <code>kubectl port-forward</code> or <code>kubectl exec</code> stopped working and started throwing following erros:</p> <pre><code>kubectl logs &lt;kube-apiserver-pod&gt; -n kube-system error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log &lt;kube-apiserver-pod&gt;)) </code></pre> <p>or</p> <pre><code>kubectl exec -it &lt;kube-apiserver-pod&gt; -n kube-system sh error: unable to upgrade connection: Unauthorized` </code></pre> <p>however <code>docker exec</code> command to log to k8s_apiserver container is working properly.</p> <p>During my debugging I found out that only self generated <code>apiserver-kubelet-client</code> key/cert file is causing this cluster behaviour.</p> <p>Bellow is process I used to generate and use my own cert/key pair for apiserver-kubelet-client.</p> <ol> <li><p>I inicialized kubernetes cluster to set its own certificates into <code>/etc/kubernetes/pki</code> folder by running <code>kubeadm init ...</code></p></li> <li><p>Make a backup of <code>/etc/kubernetes/pki</code> folder into <code>/tmp/pki_k8s</code></p></li> <li><p>Open <code>apiserver-kubelet-client.crt</code> with openssl to check all set extentions, CN, O etc.</p> <p><code>openssl x509 -noout -text -in /tmp/pki_k8s/apiserver-kubelet-client.crt</code></p></li> <li><p>To ensure same extentions and CN,O parameters to appear in certificate generated by myself I created .conf file for extentions and .csr file for CN and O</p> <p><code>cd /tmp/pki_k8s/ cat &lt;&lt;-EOF_api_kubelet_client-ext &gt; apiserver_kubelet_client-ext.conf [ v3_ca ] keyUsage = critical, digitalSignature, keyEncipherment extendedKeyUsage = clientAuth EOF_api_kubelet_client-ext</code></p> <p><code>openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"</code></p></li> <li><p>Finally I generated my own apiserver-kubelet-client.crt. For its generation I reused existing apiserver-kubelet-client.key and ca.crt/ca.key generated by K8S initialization </p> <p><code>openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -out apiserver-kubelet-client.crt -extensions v3_ca -extfile apiserver_kubelet_client-ext.conf -days 3650</code></p></li> <li><p>Once I had generated my own <code>apiserver-kubelet-client.crt</code> which overides the previous one generated by k8s initialization script itself, I reset kubernetes cluster by hitting <code>kubeadm reset</code>. 
This purged /etc/kubernetes folder</p></li> <li><p>copy all certificates into <code>/etc/kubernetes/pki</code> from <code>/tmp/pki_k8s</code></p></li> <li>and reinitialize K8S cluster <code>kubeadm init ...</code></li> </ol> <p>During that I saw that K8S cluster used already existing certificates stored in <code>/etc/kubernetes/pki</code> for setup.</p> <pre><code>[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Using the existing ca certificate and key. [certificates] Using the existing apiserver certificate and key. [certificates] Using the existing apiserver-kubelet-client certificate and key. [certificates] Using the existing sa key. [certificates] Using the existing front-proxy-ca certificate and key. [certificates] Using the existing front-proxy-client certificate and key. [certificates] Using the existing etcd/ca certificate and key. [certificates] Using the existing etcd/server certificate and key. [certificates] Using the existing etcd/peer certificate and key. [certificates] Using the existing etcd/healthcheck-client certificate and key. [certificates] Using the existing apiserver-etcd-client certificate and key. [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled </code></pre> <p>After that, K8S cluster is UP, I can list pods, list description, make deployments etc. however not able to check logs, exec command as described above. </p> <pre><code> kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcdf6894-kjkp9 1/1 Running 0 2m coredns-78fcdf6894-q88lx 1/1 Running 0 2m ... 
kubectl logs &lt;apiserver_pod&gt; -n kube-system -v 7 I0818 08:51:12.435494 12811 loader.go:359] Config loaded from file /root/.kube/config I0818 08:51:12.436355 12811 loader.go:359] Config loaded from file /root/.kube/config I0818 08:51:12.438413 12811 loader.go:359] Config loaded from file /root/.kube/config I0818 08:51:12.447751 12811 loader.go:359] Config loaded from file /root/.kube/config I0818 08:51:12.448109 12811 round_trippers.go:383] GET https://&lt;HOST_IP&gt;:6443/api/v1/namespaces/kube-system/pods/&lt;apiserver_pod&gt; I0818 08:51:12.448126 12811 round_trippers.go:390] Request Headers: I0818 08:51:12.448135 12811 round_trippers.go:393] Accept: application/json, */* I0818 08:51:12.448144 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f I0818 08:51:12.462931 12811 round_trippers.go:408] Response Status: 200 OK in 14 milliseconds I0818 08:51:12.471316 12811 loader.go:359] Config loaded from file /root/.kube/config I0818 08:51:12.471949 12811 round_trippers.go:383] GET https://&lt;HOST_IP&gt;:6443/api/v1/namespaces/kube-system/pods/&lt;apiserver_pod&gt;/log I0818 08:51:12.471968 12811 round_trippers.go:390] Request Headers: I0818 08:51:12.471977 12811 round_trippers.go:393] Accept: application/json, */* I0818 08:51:12.471985 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f I0818 08:51:12.475827 12811 round_trippers.go:408] Response Status: 401 Unauthorized in 3 milliseconds I0818 08:51:12.476288 12811 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "the server has asked for the client to provide credentials ( pods/log &lt;apiserver_pod&gt;)", "reason": "Unauthorized", "details": { "name": "&lt;apiserver_pod&gt;", "kind": "pods/log" }, "code": 401 }] F0818 08:51:12.476325 12811 helpers.go:119] error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log &lt;apiserver_pod&gt;)) </code></pre> <p>See kubelet service file below:</p> <pre><code>[root@qa053 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf Note: This dropin only works with kubeadm and kubelet v1.11+ [Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml" Environment="CA_CLIENT_CERT=--client-ca-file=/etc/kubernetes/pki/ca.crt" Environment="KUBELE=--rotate-certificates=true" # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file. EnvironmentFile=-/etc/sysconfig/kubelet ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CERTIFICATE_ARGS $CA_CLIENT_CERT </code></pre> <p>Do you have any ideas ? :) Thanks</p> <p>Best Regard</p>
<p>I found out why it did not work.</p> <p>When creating the .csr file I used this:</p> <pre><code>openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client" </code></pre> <p>The formatting of <code>-subj</code> was wrong, which caused problems with parsing the right CN from the certificate. Instead of <code>"/O=system:masters,CN=kube-apiserver-kubelet-client"</code> it needs to be</p> <pre><code>openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" </code></pre> <p>Certificates generated from both .csr files look the same in the -text view, but they act differently.</p>
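<p>To double-check the subject that actually ends up in a generated certificate, you can inspect it with openssl, for example:</p> <pre><code># prints only the subject line of the certificate
openssl x509 -noout -subject -in apiserver-kubelet-client.crt
</code></pre> <p>With the corrected <code>-subj</code> the output should show <code>O = system:masters</code> and <code>CN = kube-apiserver-kubelet-client</code> as separate fields (the exact formatting of the output depends on your openssl version).</p>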
<p>So I am trying to build a helm chart.</p> <p>in my templates file I've got a file like:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: config-map data: {{ Do something here to load up a set of files | indent 2 }} </code></pre> <p>I have another directory in my chart: <code>configmaps</code> where a set of json files, that themselves will have templated variables in them:</p> <pre><code>a.json b.json c.json </code></pre> <p>Ultimately I'd like to be sure in my chart I can reference:</p> <pre><code>volumes: - name: config-a configMap: name: config-map items: - key: a.json path: a.json </code></pre>
<p>I had the same problem a few weeks ago with adding files and templates directly to a container. </p> <p>Here is a sample of the syntax: </p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-configmap-{{ .Release.Name }} namespace: {{ .Release.Namespace }} labels: chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: {{ .Release.Name }} heritage: {{ .Release.Service }} data: nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }} ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }} dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }} fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }} mime_types: {{ .Files.Get "files/mime.types" | quote }} proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }} </code></pre> <p>The second step is to reference it from the deployment: </p> <pre><code> volumes: - name: {{ $.Release.Name }}-configmap-volume configMap: name: nginx-configmap-{{ $.Release.Name }} items: - key: dhparam_pem path: dhparam.pem - key: fastcgi_conf path: fastcgi.conf - key: mime_types path: mime.types - key: nginx_conf path: nginx.conf - key: proxy_params_conf path: proxy_params.conf - key: ssl_conf path: ssl.conf </code></pre> <p>This approach is still current. It covers two types of importing: </p> <ul> <li>regular files without templating </li> <li>configuration files with dynamic variables inside (rendered via <code>tpl</code>)</li> </ul> <p>Please do not forget to read the official docs: <a href="https://helm.sh/docs/chart_template_guide/accessing_files/" rel="noreferrer">https://helm.sh/docs/chart_template_guide/accessing_files/</a></p> <p>Good luck! </p>
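<p>If you want to load a whole directory of files at once (as with the <code>configmaps/</code> folder holding <code>a.json</code>, <code>b.json</code> and <code>c.json</code> in the question), Helm's <code>.Files.Glob</code> combined with <code>.AsConfig</code> can do that. A minimal sketch, assuming the files live under <code>configmaps/</code> in the chart root and do not themselves need templating:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ (.Files.Glob "configmaps/*.json").AsConfig | indent 2 }}
</code></pre> <p>For files that do contain template variables you would instead range over the glob result and pipe each file through <code>tpl</code>, as in the example above.</p>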
<p>I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.</p> <p>Google provides this example here of provisioning a private cluster where the control plane services (e.g. Kubernetes API) live on the <code>172.16.0.16/28</code> subnet.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters</a></p> <pre><code>gcloud beta container clusters create pr-clust-1 \ --private-cluster \ --master-ipv4-cidr 172.16.0.16/28 \ --enable-ip-alias \ --create-subnetwork &quot;&quot; </code></pre> <p>When I run this command, I see that:</p> <ul> <li>I now have a few gke subnets in my VPC belong to the cluster subnets for nodes and services. These are in the <code>10.x.x.x/8</code> range.</li> <li>I don't have any subnets in the <code>172.16/16</code> address space.</li> <li>I do have some new pairing rules and routes that seem to be related. For example, there is a new route <code>peering-route-a08d11779e9a3276</code> with a destination address range of <code>172.16.0.16/28</code> and next hop <code>gke-62d565a060f347e0fba7-3094-3230-peer</code>. This peering role then points to <code>gke-62d565a060f347e0fba7-3094-bb01-net</code></li> </ul> <pre><code>gcloud compute networks subnets list | grep us-west1 #=&gt; default us-west1 default 10.138.0.0/20 gke-insti3-subnet-62d565a0 us-west1 default 10.2.56.0/22 </code></pre> <pre><code>gcloud compute networks peerings list #=&gt; NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS gke-62d565a060f347e0fba7-3094-3230-peer default gke-prod-us-west1-a-4180 gke-62d565a060f347e0fba7-3094-bb01-net True ACTIVE [2018-08-23T16:42:31.351-07:00]: Connected. </code></pre> <p>Is <code>gke-62d565a060f347e0fba7-3094-bb01-net</code> a peered VPC in which the Kubernetes management endpoints live (the control plane stuff in the <code>172.16/16</code> range) that Google is managing for the GKE service?</p> <p>Further - how are my requests making it to the Kubernetes API server?</p>
<p>The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="noreferrer">Private Cluster</a> feature of GKE depends on the <a href="https://cloud.google.com/vpc/docs/alias-ip" rel="noreferrer">Alias IP Ranges</a> feature of VPC networking, so there are multiple things happening when you create a private cluster:</p> <ul> <li><p>The <code>--enable-ip-alias</code> flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.</p></li> <li><p>The <code>--create-subnetwork</code> flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with <code>--cluster-ipv4-cidr</code> and <code>--services-ipv4-cidr</code>. Or you could even create the subnetwork yourself and tell GKE to use it with the flags <code>--subnetwork</code>, <code>--cluster-secondary-range-name</code>, and <code>--services-secondary-range-name</code>.</p></li> <li><p>The <code>--private-cluster</code> flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using <a href="https://cloud.google.com/vpc/docs/vpc-peering" rel="noreferrer">VPC Network Peering</a>. The Kubernetes management endpoints live in the range you specify with <code>--master-ipv4-cidr</code> (172.16.0.16/28 in your case). An <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">Internal Load Balancer</a> is also created in the Google-owned project and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters" rel="noreferrer">Regional Cluster</a>. You can find this internal IP address as the <code>privateEndpoint</code> field in the output of <code>gcloud beta container clusters describe</code>. The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.</p></li> <li><p>Your private cluster <strong>also</strong> has an external IP address, which you can find as the <code>endpoint</code> field in the output of <code>gcloud beta container clusters describe</code>. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, <em>e.g.</em>, using <code>kubectl</code>.</p></li> <li><p>You can use the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks" rel="noreferrer">Master Authorized Networks</a> feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the <code>gcloud</code> CLI.</p></li> </ul> <p>Hope this helps!</p>
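<p>For example, you can check both addresses on an existing cluster with something along these lines (field names and nesting can vary a bit between gcloud versions, so treat this as a sketch):</p> <pre><code>gcloud beta container clusters describe pr-clust-1 --zone &lt;zone&gt; | grep -i endpoint
# endpoint: 35.x.x.x           &lt;- external address, used e.g. by kubectl from outside
# privateEndpoint: 172.16.0.x  &lt;- internal address on the peered network, used by the nodes
</code></pre>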
<p>I'm having trouble deleting custom resource definition. I'm trying to upgrade kubeless from v1.0.0-alpha.7 to <strong>v1.0.0-alpha.8</strong>.</p> <p>I tried to remove all the created custom resources by doing </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl delete -f kubeless-v1.0.0-alpha.7.yaml deployment "kubeless-controller-manager" deleted serviceaccount "controller-acct" deleted clusterrole "kubeless-controller-deployer" deleted clusterrolebinding "kubeless-controller-deployer" deleted customresourcedefinition "functions.kubeless.io" deleted customresourcedefinition "httptriggers.kubeless.io" deleted customresourcedefinition "cronjobtriggers.kubeless.io" deleted configmap "kubeless-config" deleted </code></pre> <p>But when I try,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get customresourcedefinition NAME AGE functions.kubeless.io 21d </code></pre> <p>And because of this when I next try to upgrade by doing, I see,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create -f kubeless-v1.0.0-alpha.8.yaml Error from server (AlreadyExists): error when creating "kubeless-v1.0.0-alpha.8.yaml": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "functions.kubeless.io" already exists </code></pre> <p>I think because of this mismatch in the function definition , the hello world example is failing.</p> <pre class="lang-sh prettyprint-override"><code>$ kubeless function deploy hellopy --runtime python2.7 --from-file test.py --handler test.hello INFO[0000] Deploying function... FATA[0000] Failed to deploy hellopy. Received: the server does not allow this method on the requested resource (post functions.kubeless.io) </code></pre> <p>Finally, here is the output of,</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe customresourcedefinitions.apiextensions.k8s.io Name: functions.kubeless.io Namespace: Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apiextensions.k8s.io/v1beta1","description":"Kubernetes Native Serverless Framework","kind":"CustomResourceDefinition","metadata":{"anno... API Version: apiextensions.k8s.io/v1beta1 Kind: CustomResourceDefinition Metadata: Creation Timestamp: 2018-08-02T17:22:07Z Deletion Grace Period Seconds: 0 Deletion Timestamp: 2018-08-24T17:15:39Z Finalizers: customresourcecleanup.apiextensions.k8s.io Generation: 1 Resource Version: 99792247 Self Link: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.kubeless.io UID: 951713a6-9678-11e8-bd68-0a34b6111990 Spec: Group: kubeless.io Names: Kind: Function List Kind: FunctionList Plural: functions Singular: function Scope: Namespaced Version: v1beta1 Status: Accepted Names: Kind: Function List Kind: FunctionList Plural: functions Singular: function Conditions: Last Transition Time: 2018-08-02T17:22:07Z Message: no conflicts found Reason: NoConflicts Status: True Type: NamesAccepted Last Transition Time: 2018-08-02T17:22:07Z Message: the initial names have been accepted Reason: InitialNamesAccepted Status: True Type: Established Last Transition Time: 2018-08-23T13:29:45Z Message: CustomResource deletion is in progress Reason: InstanceDeletionInProgress Status: True Type: Terminating Events: &lt;none&gt; </code></pre>
<p>So it turns out, the root cause was that custom resources with finalizers can &quot;deadlock&quot;. The CustomResourceDefinition &quot;functions.kubeless.io&quot; had a</p> <pre><code>Finalizers: customresourcecleanup.apiextensions.k8s.io </code></pre> <p>and this can leave it in a bad state when deleting.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/60538" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/60538</a></p> <p>I followed the steps mentioned in <a href="https://github.com/kubernetes/kubernetes/issues/60538#issuecomment-369099998" rel="noreferrer">this workaround</a> and it now gets deleted.</p>
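<p>For anyone hitting the same deadlock, one widely used workaround is to clear the finalizer on the stuck CRD so the API server can finish the delete, roughly:</p> <pre><code># remove the finalizers from the stuck CustomResourceDefinition
kubectl patch crd functions.kubeless.io -p '{"metadata":{"finalizers":[]}}' --type=merge
</code></pre> <p>Only do this once you are sure the custom resources themselves are gone, otherwise they can be left orphaned.</p>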
<p>I’m looking for a way to differentiate between Prometheus metrics gathered from different dynamically discovered services running in a Kubernetes cluster (we’re using <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a>). E.g. for the metrics written into the db, I would like to understand from which service they actually came. I guess you can do this via a label from within the respective services, however, swagger-stats (<a href="http://swaggerstats.io/" rel="nofollow noreferrer">http://swaggerstats.io/</a>) which we’re using does not yet offer this functionality (to enhance this, there is an issue open: <a href="https://github.com/slanatech/swagger-stats/issues/50" rel="nofollow noreferrer">https://github.com/slanatech/swagger-stats/issues/50</a>). Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?</p> <p>Appreciate your feedback! </p>
<blockquote> <p>Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?</p> </blockquote> <p>This is how Prometheus is designed to be used, as a target doesn't know how the monitoring system views it and prefixing metric names makes cross-service analysis harder. Both setting labels across an entire target and prefixing metric names are considered anti-patterns.</p> <p>What you want is called a target label, these usually come from relabelling applied to metadata from service discovery.</p> <p>When using the Prometheus Operator, you can specify <code>targetLabels</code> as a list of labels to copy from the Kubernetes Service to the Prometheus targets.</p>
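<p>A minimal sketch of what that looks like on a ServiceMonitor, assuming your Service carries an <code>app</code> label that you want copied onto every scraped series (the names and port are hypothetical):</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: swagger-stats-monitor
spec:
  selector:
    matchLabels:
      app: my-service        # label on the Kubernetes Service to match
  endpoints:
  - port: http-metrics       # name of the Service port to scrape
  targetLabels:
  - app                      # copied from the Service onto every target
</code></pre> <p>Every time series scraped from those targets then carries an <code>app</code> label identifying the originating service, without the services themselves having to change.</p>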
<p>I'm using config maps to inject env variables into my containers. Some of the variables are created by concatenating variables, for example:</p> <p>~/.env file</p> <pre><code>HELLO=hello WORLD=world HELLO_WORLD=${HELLO}_${WORLD} </code></pre> <p>I then create the config map</p> <p><code>kubectl create configmap env-variables --from-env-file ~/.env</code></p> <p>The deployment manifests reference the config map.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: my-app spec: template: spec: containers: - name: my-image image: us.gcr.io/my-image envFrom: - configMapRef: name: env-variables </code></pre> <p>When I exec into my running pods, and execute the command</p> <p><code>$ printenv HELLO_WORLD</code></p> <p>I expect to see <code>hello_world</code>, but instead I see <code>${HELLO}_${WORLD}</code>. The variables aren't expanded, and therefore my applications that refer to these variables will get the unexpanded value.</p> <p>How do I ensure the variables get expanded?</p> <p>If it matters, my images are using alpine.</p>
<p>I can't find any documentation on interpolating environment variables, but I was able to get this to work by removing the interpolated variable from the configmap and listing it directly in the deployment. It also works if all variables are listed directly in the deployment. It looks like kubernetes doesn't apply interpolation to variables loaded from configmaps.</p> <p>For instance, this will work:</p> <p>Configmap</p> <pre><code>apiVersion: v1 data: HELLO: hello WORLD: world kind: ConfigMap metadata: name: env-variables namespace: default </code></pre> <p>Deployment:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: my-app spec: template: spec: containers: - name: my-image image: us.gcr.io/my-image envFrom: - configMapRef: name: env-variables env: - name: HELLO_WORLD value: $(HELLO)_$(WORLD) </code></pre>
<p>I would like to expose a port on an <code>initContainer</code> in a Kubernetes <code>Job</code> I am working on. So far I have been unsuccessful in doing so, and when reading through <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior" rel="nofollow noreferrer">the docs</a> it says:</p> <blockquote> <p>The ports on an Init Container are not aggregated under a service.</p> </blockquote> <p>What does aggregated in this context mean?</p> <p>As of right now I think the answer is no as the experiment I am running will not allow me to expose a port. However, I am fairly new to Kubernetes, so I don't know if I am making a mistake somewhere else. Figured I would ask here to see if what I am doing is even possible before I attempt to debug any further.</p>
<p>It is an ambiguous choice of words, but the upshot is that you cannot access the initContainer through a Service. </p> <p>Kubernetes Services provide access to a set of Pods by matching their labels. In this sense a Service is an aggregator. I think this is the intended meaning. </p> <p>There can also be multiple containers in a Pod, each using a different port. If the Pod is matched by a Service, that Service can be configured to provide access to both on different ports. So a Service can also make multiple ports available, though I wouldn't call that aggregation. </p> <p>Pods aren't exposed by a Service until their readiness probes pass and the Pod status is ready. An initContainer runs before the probes, while the Pod status is still initialising, so it is logical that it cannot be reached by a Service. It would also be odd to want to expose an initContainer externally, as it is temporary by nature. </p> <p>Is your aim to debug (<a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/debug-application-cluster/debug-init-containers/" rel="nofollow noreferrer">https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/debug-application-cluster/debug-init-containers/</a>) or to allow other Pods/Containers to access it? I believe the Pod has an IP address during init, but not one that is guaranteed to be stable, so I don't know whether you might be able to port-forward for debugging: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</a> (some debugging commands are sketched below).</p> <p>I notice you say you are looking to expose the initContainer in a Job. Servicing traffic in a Job would be an unusual aim. If the aim is to monitor the Job then see <a href="https://stackoverflow.com/questions/39146436/monitoring-a-kubernetes-job">Monitoring a kubernetes job</a>.</p>
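<p>If debugging is the goal, the usual route is to target the init container by name instead of exposing a port. A couple of commands that may help (the pod and container names here are placeholders):</p> <pre><code># stream the init container's logs while it is running
kubectl logs &lt;your-job-pod&gt; -c &lt;init-container-name&gt; -f

# open a shell inside the running init container (if its image has a shell)
kubectl exec -it &lt;your-job-pod&gt; -c &lt;init-container-name&gt; -- sh
</code></pre>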
<p>I'm using minikube to run kubernetes locally. My local k8s have two pods which one of them is PostgreSQL and another one is my own app. I've mounted a <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> in order to make a stateful pod for PostgreSQL:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/data/psql" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>Here is PostgreSQL deployment yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: name: postgres spec: containers: - name: postgres imagePullPolicy: Never image: postgres:9.6 ports: - name: postgres containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql name: postgres-persistent-storage volumes: - name: postgres-persistent-storage persistentVolumeClaim: claimName: postgres-pv-claim </code></pre> <p>The problem is that PostgreSQL service doesn't start and this error occurs when I run its pod:</p> <pre><code>Error: /var/lib/postgresql/9.6/main is not accessible; please fix the directory permissions (/var/lib/postgresql/9.6/ should be world readable) No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning). </code></pre> <p>I've checked inside of PostgreSQL pod and I found that <code>/var/lib/postgresql</code> is empty, just like <code>/data/psql</code> In minikube host. Can anyone help?</p>
<p>Change:</p> <pre><code>volumeMounts: - mountPath: /var/lib/postgresql </code></pre> <p>to</p> <pre><code>volumeMounts: - mountPath: /var/lib/postgresql/data </code></pre> <p>With the wrong mountPoint postgres executables were overridden.</p> <p>I attach an image with the data I see from inside the pod (on the left) and from inside minikube space (on the right, the little shell from virtualbox).</p> <p><a href="https://i.stack.imgur.com/m8rko.png" rel="noreferrer"><img src="https://i.stack.imgur.com/m8rko.png" alt="enter image description here"></a></p>
<p>If I have a buggy app in a container that is constantly writing to an emptyDir volume, it could use up all the space on the worker node and affect the performance of other pods / containers on the node. This breaks the expectation that containers are isolated from each other: what one container does should not negatively impact other containers on the node.</p> <p>Is there a way to limit the amount of disk space used by an emptyDir volume (not the RAM-based emptyDir type)?</p>
<p>You can set <code>sizeLimit</code> on the volume (see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#emptydirvolumesource-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#emptydirvolumesource-v1-core</a>). Once the volume's usage exceeds this limit, the kubelet evicts the pod.</p>
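<p>A minimal sketch of what that looks like in a Pod spec (the 1Gi limit is just an example value):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi       # pod is evicted once usage of this volume exceeds 1Gi
</code></pre> <p>Note that enforcement happens through the kubelet's eviction manager, so the pod is evicted some time after crossing the limit rather than writes failing immediately.</p>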
<p>I am following the <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a></p> <p>After completing the installation getting following issue:</p> <blockquote> <p>kubectl get po —all-namespaces</p> </blockquote> <pre><code>kube-system kube-dns-598d7bf7d4-xfgfw 2/3 CrashLoopBackOff 10914 19d </code></pre> <blockquote> <p>kubectl logs -n kube-system kube-dns-598d7bf7d4-xfgfw -c kubedns</p> </blockquote> <pre><code>I0825 04:35:04.772663 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0825 04:35:05.272601 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... E0825 04:35:05.614968 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 E0825 04:35:05.620146 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 I0825 04:35:05.772829 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0825 04:35:06.272745 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... E0825 04:35:06.620279 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 E0825 04:35:06.626661 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 I0825 04:35:06.772930 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0825 04:35:07.272656 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... 
E0825 04:35:07.628249 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 </code></pre> <blockquote> <p>kubectl logs -n kube-system kube-dns-598d7bf7d4-xfgfw -c dnsmasq</p> </blockquote> <pre><code>I0825 04:34:59.855457 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I0825 04:34:59.855908 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] I0825 04:35:00.168181 1 nanny.go:119] W0825 04:35:00.168488 1 nanny.go:120] Got EOF from stdout I0825 04:35:00.168645 1 nanny.go:116] dnsmasq[8]: started, version 2.78 cachesize 1000 I0825 04:35:00.168792 1 nanny.go:116] dnsmasq[8]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify I0825 04:35:00.168891 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I0825 04:35:00.168986 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0825 04:35:00.169077 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain cluster.local I0825 04:35:00.169194 1 nanny.go:116] dnsmasq[8]: reading /etc/resolv.conf I0825 04:35:00.169285 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I0825 04:35:00.169395 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0825 04:35:00.169484 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.1#10053 for domain cluster.local I0825 04:35:00.169570 1 nanny.go:116] dnsmasq[8]: using nameserver 127.0.0.53#53 I0825 04:35:00.169818 1 nanny.go:116] dnsmasq[8]: read /etc/hosts - 7 addresses </code></pre> <blockquote> <p>kubectl logs -n kube-system kube-dns-598d7bf7d4-xfgfw -c sidecar</p> </blockquote> <pre><code>W0807 06:09:38.672889 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:47407-&gt;127.0.0.1:53: read: connection refused W0807 06:09:43.673355 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:56792-&gt;127.0.0.1:53: read: connection refused W0807 06:09:48.673754 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:39777-&gt;127.0.0.1:53: read: connection refused W0807 06:09:53.674145 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:34342-&gt;127.0.0.1:53: read: connection refused W0807 06:09:58.674551 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:52322-&gt;127.0.0.1:53: read: connection refused W0807 06:10:03.674988 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:59016-&gt;127.0.0.1:53: read: connection refused W0807 06:10:08.675349 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:40075-&gt;127.0.0.1:53: read: connection refused </code></pre> <p>Further:</p> <blockquote> <p>kubectl get svc kubernetes</p> </blockquote> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.32.0.1 443/TCP 19d </code></pre> <blockquote> <p>kubectl get endpoints kubernetes</p> </blockquote> <pre><code>NAME ENDPOINTS 
AGE kubernetes 10.240.0.10:6443,10.240.0.11:6443,10.240.0.12:6443 19d </code></pre> <p>Please help on to solve the issue.</p> <blockquote> <p>kubectl version</p> </blockquote> <pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>You get the error:</p> <pre><code>x509: certificate is valid for 10.240.0.10, 10.240.0.11, 10.240.0.12, 35.240.168.187, 127.0.0.1, not 10.32.0.1 </code></pre> <p>This means the certificate served by <code>kube-apiserver</code> does not include the cluster service IP <code>10.32.0.1</code> (the address kube-dns uses to reach the API, as shown in your logs) in its list of valid names. It looks like that IP was left out when the API server certificate was generated.</p> <p>Check all controller machines and verify that the certificate generation step includes the service IP; a sketch of the relevant step follows below.</p>
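<p>In "Kubernetes The Hard Way" that corresponds to the step that generates the API server certificate with cfssl: the service IP <code>10.32.0.1</code> has to appear in the <code>-hostname</code> list. Roughly like the following (double-check the flags and file names against your copy of the guide before re-running it):</p> <pre><code>cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
</code></pre> <p>After regenerating <code>kubernetes.pem</code> / <code>kubernetes-key.pem</code>, copy them back to all controller machines and restart <code>kube-apiserver</code>.</p>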
<p>I am running a java application on tomcat in a k8s cluster, I am using a prestop lifecycle hook to run <code>/usr/local/tomcat/bin/shutdown.sh</code> before the container gets terminated. Also defining a termination grace period of 60 secs. </p> <p>expected behaviour: tomcat shutsdown gracefully when I kill a pod or delete a deployment. </p> <p>actual behaviour: tomcat shutsdown abruptly.</p> <p>Any thoughts? Thanks in advance. </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: sample annotations: spec: replicas: 2 selector: matchLabels: app: sample name: sample template: metadata: labels: app: sample name: sample spec: hostname: sample-web terminationGracePeriodSeconds: 60 volumes: - name: splunk-inputs configMap: name: splunk-conf items: - key: inputs.conf path: ./inputs.conf - name: splunk-outputs configMap: name: splunk-conf items: - key: outputs.conf path: ./outputs.conf - name: docker-socket hostPath: path: /var/run/docker.sock - name: tomcat-log-common emptyDir: {} containers: - name: sample image: sampleregistery.azurecr.io/root imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: ["/usr/local/tomcat/bin/shutdown.sh"] </code></pre>
<p>Found the issue.</p> <p>I was running this preStop lifecycle hook:</p> <pre><code>lifecycle:
  preStop:
    exec:
      command: ["/usr/local/tomcat/bin/catalina.sh stop"]
</code></pre> <p>It should be something like:</p> <pre><code>lifecycle:
  preStop:
    exec:
      command: ["/usr/local/tomcat/bin/catalina.sh stop &amp;&amp; sleep 30"]
</code></pre> <p>In the first case, Kubernetes was exiting the lifecycle hook as soon as it invoked the shutdown script, without waiting for Tomcat to drain all of its threads. That only takes about 5-10 seconds depending on your app; adding a sleep of 30 seconds keeps the preStop hook active, which gives Tomcat enough time to finish the shutdown process.</p>
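<p>One caveat: in the exec form the command array is run directly, not through a shell, so <code>&amp;&amp;</code> is not interpreted unless you wrap the command in one. If the combined command does not behave as expected, a shell wrapper like the following (paths taken from the manifests above) is a common way to express it:</p> <pre><code>lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "/usr/local/tomcat/bin/catalina.sh stop &amp;&amp; sleep 30"]
</code></pre>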
<p>In <a href="https://github.com/kubernetes/client-go/blob/79cb21f5b3b1dd8f8b23bd3f79925b4fda4e2562/tools/cache/reflector.go#L100" rel="nofollow noreferrer">this</a> reflector package it mentions an unstable value being used as a name suffix. It's the number of nanoseconds modulo 12345. Is this meaningful or is it just a synonym for pseudo random so humans don't interpret it?</p> <pre><code>// reflectorDisambiguator is used to disambiguate started reflectors. // initialized to an unstable value to ensure meaning isn't attributed to the suffix. var reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) </code></pre> <p>The word unstable is what is specifically making me unsure. What does it mean in this context?</p> <p>Is there an advantage of doing this over another method of getting a random number under 12345?</p>
<p>The meaning seems clear:</p> <blockquote> <p><a href="https://github.com/kubernetes/client-go/commit/755bbca820db30c1a1d071d0ab4709fc49c003a5" rel="nofollow noreferrer">Kubernetes-commit: 1da4f4a745bf536c34e377321a252b4774d1a7e0</a></p> <p>tools/cache/reflector.go</p> <pre><code>// reflectorDisambiguator is used to disambiguate started reflectors. // initialized to an unstable value to ensure meaning isn't attributed to the suffix. </code></pre> </blockquote> <p>The suffix behavior should not be deterministic because you should not rely on a particular implementation behavior.</p> <hr> <p>For example, a similar situation occured for Go maps:</p> <blockquote> <p><a href="https://golang.org/ref/spec" rel="nofollow noreferrer">The Go Programming Language Specification</a></p> <p>For statements with range clause</p> <p>The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next. </p> <p><a href="https://golang.org/doc/go1" rel="nofollow noreferrer">Go 1 Release Notes</a></p> <p>Iterating in maps</p> <p>The old language specification did not define the order of iteration for maps, and in practice it differed across hardware platforms. This caused tests that iterated over maps to be fragile and non-portable, with the unpleasant property that a test might always pass on one machine but break on another.</p> <p>In Go 1, the order in which elements are visited when iterating over a map using a for range statement is defined to be unpredictable, even if the same loop is run multiple times with the same map. Code should not assume that the elements are visited in any particular order.</p> <p>This change means that code that depends on iteration order is very likely to break early and be fixed long before it becomes a problem. Just as important, it allows the map implementation to ensure better map balancing even when programs are using range loops to select an element from a map. </p> <p><a href="https://golang.org/doc/go1.3" rel="nofollow noreferrer">Go 1.3 Release Notes</a></p> <p>Map iteration</p> <p>Iterations over small maps no longer happen in a consistent order. Go 1 defines that “The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next.” To keep code from depending on map iteration order, Go 1.0 started each map iteration at a random index in the map. A new map implementation introduced in Go 1.1 neglected to randomize iteration for maps with eight or fewer entries, although the iteration order can still vary from system to system. This has allowed people to write Go 1.1 and Go 1.2 programs that depend on small map iteration order and therefore only work reliably on certain systems. Go 1.3 reintroduces random iteration for small maps in order to flush out these bugs.</p> <p>Updating: If code assumes a fixed iteration order for small maps, it will break and must be rewritten not to make that assumption. 
Because only small maps are affected, the problem arises most often in tests.</p> </blockquote> <hr> <p>Similar concerns lead to a proposal, that wasn't implemented, to ensure that the order for unstable sorts was unstable:</p> <blockquote> <p><a href="https://github.com/golang/go/issues/13884" rel="nofollow noreferrer">proposal: sort: return equal values in non-deterministic order#13884</a></p> <p>Crazy idea, but what if sort.Sort randomly permuted its input a bit before starting?</p> <p>Go 1.6 has a different sort.Sort than Go 1.5 and I've seen at least a dozen test failures at Google that were implicitly depending on the old algorithm. The usual scenario is that you sort a slice of structs by one field in the struct. If there are entries with that field equal but others unequal and you expect a specific order for the structs at the end, you are depending on sort.Sort's algorithm. A later sort.Sort might make a different choice and produce a different order. This makes programs not portable from one version of Go to another, much like map hash differences used to make programs not portable from one architecture to another. We solved maps by randomizing the iteration order a bit. In the map case it's not a full permutation, but just enough variation to make tests obviously flaky.</p> <p>I wonder if we should do the same for sort.Sort. It would only take N swaps to shuffle things quite well, and we already use Nlog(N) swaps, so N(log(N)+1) is not likely to be noticed. That would also protect a bit against malicious inputs.</p> <p>It would surprise people, especially people who think sort.Sort == sort.Stable. But the rationale is that it's better to surprise them the first time they run the code instead of however many Go releases later.</p> </blockquote> <hr> <p>Here are benchmarks comparing <code>time.Now()</code> to <code>rand.Intn()</code>:</p> <pre><code>package main import "testing" import ( rand "math/rand" "time" ) // https://github.com/kubernetes/client-go/blob/79cb21f5b3b1dd8f8b23bd3f79925b4fda4e2562/tools/cache/reflector.go#L100 var reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) func BenchmarkTimeNow(b *testing.B) { for N := 0; N &lt; b.N; N++ { reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) } } // rand.Intn() func init() { rand.Seed(time.Now().UnixNano()) reflectorDisambiguator = int64(rand.Intn(12345)) } func BenchmarkRandIntn(b *testing.B) { for N := 0; N &lt; b.N; N++ { rand.Seed(time.Now().UnixNano()) reflectorDisambiguator = int64(rand.Intn(12345)) } } </code></pre> <p>Output:</p> <pre><code>$ go test disambiguator_test.go -bench=. goos: linux goarch: amd64 BenchmarkTimeNow-4 20000000 67.5 ns/op BenchmarkRandIntn-4 100000 11941 ns/op $ </code></pre>
<p>I deployed prometheus server (+ kube state metrics + node exporter + alertmanager) through the <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">prometheus helm chart</a> using the chart's default values, including the chart's default <a href="https://github.com/helm/charts/blob/ead4e79279a972ec71f6a58dd04ef4491686efbc/stable/prometheus/values.yaml#L796" rel="nofollow noreferrer">scrape_configs</a>. The problem is that I expect certain metrics to be coming from a particular job but instead are coming from a different one.</p> <p>For example, <code>node_cpu_seconds_total</code> is being provided by the <code>kubernetes-service-endpoints</code> job but I expect it to come from the <code>kubernetes-nodes</code> job, i.e. <code>node-exporter</code>. The returned metric's values are accurate but the problem is I don't have the labels that would normally come from <code>kubernetes-nodes</code> (since <code>kubernetes-nodes</code> job has <code>role: node</code> vs <code>role: endpoint</code> for <code>kubernetes-service-endpoints</code>. I need these missing labels for advanced querying + dashboards.</p> <p>Output of <code>node_cpu_seconds_total{mode="idle"}</code>:</p> <p><code> node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.46:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 423673.44 node_cpu_seconds_total{app="prometheus",chart="prometheus-7.0.2",component="node-exporter",cpu="0",heritage="Tiller",instance="10.80.20.52:9100",job="kubernetes-service-endpoints",kubernetes_name="get-prometheus-node-exporter",kubernetes_namespace="default",mode="idle",release="get-prometheus"} | 417097.16 </code></p> <p>There are no errors in the logs and I do have other <code>kubernetes-nodes</code> metrics such as <code>up</code> and <code>storage_operation_errors_total</code> so <code>node-exporter</code> is getting scraped.</p> <p>I also verified manually that <code>node-exporter</code> has this particular metric, <code>node_cpu_seconds_total</code>, with <code>curl &lt;node IP&gt;:9100/metrics | grep node_cpu</code> and it has results.</p> <p>Does the job order definition matter? Would one job override the other's metrics if they have the same name? Should I be dropping metrics for the <code>kubernetes-service-endpoints</code> job? I'm new to prometheus so any detailed help is appreciated.</p>
<p>I was able to figure out how to add the "missing" labels by navigating to the Prometheus service-discovery status UI page. This page shows all the "Discovered Labels" that can be processed and kept through relabel_configs. What is processed/kept shows next to "Discovered Labels" under "Target Labels". So then it was just a matter of modifying the <code>kubernetes-service-endpoints</code> job config in <code>scrape_configs</code> so that more target labels are added. Below is exactly what I changed in the chart's <code>scrape_configs</code>. With this new config, I get <code>namespace</code>, <code>service</code>, <code>pod</code>, and <code>node</code> added to all metrics if the metric didn't already have them (see <code>honor_labels</code>).</p> <pre><code> - job_name: 'kubernetes-service-endpoints' + honor_labels: true kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ regex: (https?) - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] action: replace target_label: __address__ regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace - target_label: kubernetes_namespace + target_label: namespace - source_labels: [__meta_kubernetes_service_name] action: replace - target_label: kubernetes_name + target_label: service + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod + - source_labels: [__meta_kubernetes_pod_node_name] + action: replace + target_label: node </code></pre>
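<p>As a quick sanity check of the relabelled config, a query along these lines should now aggregate cleanly per node using the added target labels:</p> <pre><code># idle CPU rate per Kubernetes node, using the added "node" target label
sum by (node) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
</code></pre>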
<p>I am new to cluster container management, and this is a basic question that many freshers here will have.</p> <p>I read some documentation, but my understanding is still not clear, so any leads that help me understand would be appreciated:</p> <ol> <li>Somewhere it is mentioned that Minikube is used to run Kubernetes locally. So if we want to maintain cluster management on my four-node Raspberry Pi setup, then Minikube is not the option?</li> <li>Does Minikube support only a one-node system?</li> <li>Docker Compose is a set of instructions and a YAML file to configure and start multiple Docker containers. Can we use this to start containers on different hosts? Then for simple orchestration, where I need to call a container on the second host, I don't need any cluster management, right?</li> <li>What is the link between Docker Swarm and Kubernetes? Both are independent cluster management systems. Is it efficient to use Kubernetes on Raspberry Pi? Is there any issue, because I was told that Kubernetes on a single node takes up all the memory and CPU? Is that true?</li> <li>Are there other cluster management options for Raspberry Pi?</li> </ol> <p>Answers to these 4-5 questions will help me a lot.</p>
<p>Presuming that your goal here is to run a set of containers over a number of different Raspberry Pi based nodes:</p> <ul> <li><p>Minikube isn't really appropriate. It starts a single virtual machine on a Windows, macOS or Linux host and installs a Kubernetes cluster into it. It's generally used by developers to quickly start up a cluster on their laptops or desktops for development and testing purposes.</p></li> <li><p>Docker Compose is a system for managing sets of related containers. So, for example, if you had a web server and a database that you wanted to manage together, you could put them in a single Docker Compose file.</p></li> <li><p>Docker Swarm is a system for managing sets of containers across multiple hosts. It's essentially an alternative to Kubernetes. It has fewer features than Kubernetes, but it is much simpler to set up.</p></li> </ul> <p>If you want a really simple multi-node container cluster, I'd say that Docker Swarm is a reasonable choice. If you explicitly want to experiment with Kubernetes, I'd say that kubeadm is a good option here. Kubernetes in general has higher resource requirements than Docker Swarm, so it can be somewhat less suited to Raspberry Pi hardware, although I know people have successfully run Kubernetes clusters on Raspberry Pis.</p>
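<p>For a rough idea of what bootstrapping each of those options looks like on the Pis, a minimal sketch (the IP address, tokens and hash below are placeholders):</p> <pre><code># Docker Swarm: run on the first Pi (manager) ...
docker swarm init --advertise-addr 192.168.1.10
# ... which prints a join command to run on each of the other Pis (workers):
docker swarm join --token &lt;token printed by init&gt; 192.168.1.10:2377

# kubeadm: run on the first Pi (control plane) ...
sudo kubeadm init
# ... then join the remaining Pis with the command kubeadm init prints:
sudo kubeadm join 192.168.1.10:6443 --token &lt;token&gt; --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre>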
<p>I'm attempting to deploy a parallelized data processing task that uses many containers of the same docker image which would each run with different incrementing environment variables. The image is setup to read env vars in order to determine which segment of a larger list to process.</p> <p><strong>Background:</strong> I was originally using a bash script that passed down an incrementing env var to docker run commands but now I'd like a better way to manage/monitor all the containers. I've only had experience using Kubernetes for application services but it seems like it may be a better way to orchestrate my multi-container task as well.</p> <p>Wondering if this sort of dynamic environment variable passing is possible within the Kubernetes YAML configs as I'd prefer declarative config vs a shell script. I'm also not sure of the best approach to do this in Kubernetes whether it's multiple separate pods, multi-container pod, or to use replicas in some way.</p> <p>I'm open to suggestions, I know other tools like Terraform may be helpful for this sort of programatic infrastructure also.</p>
<p>What about using <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">Parallel Processing Using Work Queue</a> for passing different environment variables to your k8s job pods, together with <code>.spec.parallelism</code>? Although having a separate service for the work queue may be a little too much, depending on what you are trying to do.</p> <p>The other idea is to use Helm's templating power to create the k8s manifest files. I created a sample helm chart to give an idea of templating parallel processing. See <a href="https://github.com/jayrajput/helm-parallel-jobs" rel="nofollow noreferrer">git repo - helm-parallel-jobs</a>. Once you have the git repo cloned, you can install the helm chart for parallel processing like this. The job template is the same as the one used in the <a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">k8s documentation</a>. As seen in the output below, three different environment variables - apple, banana, cherry - are provided, which creates 3 different pods with those environment variables passed to them.</p> <pre><code> [root@jr]# helm install --set envs='{apple,banana,cherry}' --name jobs ./helm-parallel-jobs/example/parallel-jobs NAME: jobs LAST DEPLOYED: Sun Aug 26 16:29:23 2018 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/Job NAME DESIRED SUCCESSFUL AGE process-item-apple 1 0 0s process-item-banana 1 0 0s process-item-cherry 1 0 0s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE process-item-apple-dr6st 0/1 ContainerCreating 0 0s process-item-banana-d2wwq 0/1 ContainerCreating 0 0s process-item-cherry-wvlxz 0/1 ContainerCreating 0 0s </code></pre>
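<p>For illustration, the core of such a chart is a template that ranges over a list in values.yaml and emits one Job per entry. This is only a rough sketch of that idea (the container name and image are placeholders), not the exact contents of the linked repo:</p> <pre><code># templates/jobs.yaml -- one Job per entry in .Values.envs
{{- range .Values.envs }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-{{ . }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox            # placeholder image
        command: ["sh", "-c", "echo processing item $(ITEM)"]
        env:
        - name: ITEM
          value: "{{ . }}"
{{- end }}
</code></pre>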
<p>I have the following that config that works when I try <code>&lt;NodeIP&gt;:30080</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: app-deployment spec: replicas: 3 template: metadata: labels: name: app-node spec: containers: - name: app image: myregistry.net/repo/app:latest imagePullPolicy: Always ports: - containerPort: 8080 env: - name: NODE_ENV value: production --- apiVersion: v1 kind: Service metadata: name: app-service spec: selector: name: app-node ports: - protocol: TCP port: 80 targetPort: 8080 nodePort: 30080 type: NodePort </code></pre> <p>I am trying to use an Ingress:</p> <pre><code> apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress spec: rules: - host: myhost.com http: paths: - path: /app backend: serviceName: app-service servicePort: 80 </code></pre> <p><code>myhost.com</code> works with the nginx intro screen, but <code>myhost.com/app</code> gives <code>404 Not Found</code>. Where is the issue in my setup?</p> <hr> <p>UPDATE:</p> <pre><code> - path: / backend: serviceName: app-service servicePort: 80 </code></pre> <p>If I do root path it works, but how come <code>/app</code> doesn't?</p>
<p>Your ingress definition creates rules that proxy traffic from the <code>{path}</code> to the <code>{backend.serviceName}{path}</code>. In your case, I believe the reason it's not working is that <code>/app</code> is proxied to <code>app-service:80/app</code> but you're intending on serving traffic at the <code>/</code> root. Try adding this annotation to your ingress resource: <code>nginx.ingress.kubernetes.io/rewrite-target: /</code></p> <p>Source: <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite</a></p>
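<p>For reference, a sketch of what the Ingress from the question could look like with that annotation added (same names and paths as in the question, annotation as documented above):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # strip the matched path before proxying, so app-service sees "/"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: app-service
          servicePort: 80
</code></pre>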
<p>I'm attempting to deploy a parallelized data processing task that uses many containers of the same docker image which would each run with different incrementing environment variables. The image is setup to read env vars in order to determine which segment of a larger list to process.</p> <p><strong>Background:</strong> I was originally using a bash script that passed down an incrementing env var to docker run commands but now I'd like a better way to manage/monitor all the containers. I've only had experience using Kubernetes for application services but it seems like it may be a better way to orchestrate my multi-container task as well.</p> <p>Wondering if this sort of dynamic environment variable passing is possible within the Kubernetes YAML configs as I'd prefer declarative config vs a shell script. I'm also not sure of the best approach to do this in Kubernetes whether it's multiple separate pods, multi-container pod, or to use replicas in some way.</p> <p>I'm open to suggestions, I know other tools like Terraform may be helpful for this sort of programatic infrastructure also.</p>
<p>My understanding is you'd like to do something like <a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/</a>, where jobs are created from a template, one for each data item in a list - but you don't want it to be shell-scripted.</p> <p>I imagine Helm could be used to template the Job: it has a range function, so a chart could be set up to create one Job for each entry in a section of a values.yaml. So it could occupy a space similar to what you suggested for Terraform. Ansible could also be an option.</p> <p>However, the direction of travel of this question seems to be towards batch scheduling. I am wondering if your jobs will evolve to end up having dependencies between them, etc. If so, <a href="https://stackoverflow.com/questions/45454130/helm-and-kubernetes-is-there-a-barrier-equivalent-for-jobs">Helm and Kubernetes: Is there a barrier equivalent for jobs?</a> and <a href="https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs" rel="nofollow noreferrer">https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs</a> may help here. Currently Kubernetes has facilities for running batch jobs and the tooling to enable a batch scheduling system to run or be built on top of it, but it doesn't itself contain an out-of-the-box batch scheduling system. So people are currently using a range of different approaches to suit their needs.</p>
<p>I have a Scala App running on a Service, using a PostgreSql Service, Tables are being created by the database management plugin (Slick)</p> <p>however, need to edit a particular table and I cant find it!</p> <p>postgres service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: core-db spec: ports: - port: 5432 selector: app: core-db --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: core-db spec: template: metadata: labels: app: core-db spec: containers: - image: "image" name: core-db imagePullPolicy: IfNotPresent env: - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres ports: - containerPort: 5432 name: core-db volumeMounts: - name: core-storage mountPath: /var/lib/postgresql/db-data volumes: - name: core-storage persistentVolumeClaim: claimName: core-pv-claim imagePullSecrets: - name: regcred --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: core-pv-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi </code></pre> <p>I get in the pod:</p> <pre><code>kubectl exec POD -it -- psql -U postgres </code></pre> <p>but \d shows no relations</p> <p>What I am doing wrong, where is my data after all ? </p> <p><strong>EDIT</strong></p> <pre><code>\l postgres app template0 template1 \c postgres \dt No relations found. \c app \dt No relations found. </code></pre>
<p>Steps I would take to find where the problem is:</p> <ol> <li><p>The app is working, so if it is persisting data, that data must be somewhere. Try <code>kubectl delete deploy core-db</code>. Is the app still working and saving data?</p> <p>I expect <code>no</code> as the answer, in which case we can go on to the following points.</p> <p>If the answer is <code>yes</code>, the problem may be in:</p> <ul> <li><p>the db connection string inside the app</p></li> <li><p>the service - how many EndPoints do you have? <code>kubectl get ep core-db</code></p></li> </ul></li> <li><p>Exec into the psql pod and connect to the postgres db with <code>\c postgres</code>, then run <code>\dt</code>. Are the tables here?</p></li> <li><p>Exec into the psql pod and connect to the app db with <code>\c app</code>, then run <code>\dt</code>. Are the tables here?</p></li> <li><p>Exec into the psql pod, connect to postgres and run <code>SELECT * FROM pg_stat_activity WHERE datname = 'app';</code></p></li> </ol>
<p>Similar question: <a href="https://stackoverflow.com/questions/37449121/how-to-expose-kube-dns-service-for-queries-outside-cluster">How to expose kube-dns service for queries outside cluster?</a></p> <p>I have a PerconaDB instance in a VM in Google compute engine. Next to it is running Kubernetes cluster where services connect to the PerconaDB.</p> <p>When I login with MySQL client and do <code>show processlist;</code>, I see following:</p> <pre><code>| 175 | user | 10.12.142.24:46124 | user | Sleep | 14 | | NULL | 0 | 0 | | 176 | user | 10.12.142.24:46126 | user | Sleep | 14 | | NULL | 0 | 0 | | 177 | user | 10.12.122.42:60806 | user | Sleep | 2 | | NULL | 0 | 0 | | 178 | user | 10.12.122.43:55164 | user | Sleep | 14 | | NULL | 1 | 0 | | 179 | user | 10.12.122.43:55166 | user | Sleep | 4 | | NULL | 1 | 0 | | 180 | user | 10.12.141.11:35944 | user | Sleep | 14 | | NULL | 1 | 0 | </code></pre> <p>Notice the number of different IPs for which I have no idea what they belong to. These are the pods inside the Kubernetes cluster and I would like to know their names so instead of <code>10.12.142.24:46124</code> I could see <code>myservice-0dkd0:46124</code>.</p> <p>I thought the solution would be to somehow link the <code>kube-dns</code> service to the PerconaDB VM, but I have no idea, how to do that correctly. Also this is now running in production, so I don't want to experiment too much.</p>
<p>At this moment, the reverse DNS lookup (PTR type lookup) for a pod IP is possible only for pods that are part of a headless service (details: <a href="https://github.com/kubernetes/dns/pull/25" rel="nofollow noreferrer">https://github.com/kubernetes/dns/pull/25</a>), but even that has its limitations. Furthermore, kubernetes has no default per-pod DNS name at all: even inside the kubernetes cluster you are not able to say <code>curl http://&lt;pod_name&gt;</code>. You have services for that. What you are asking for is not really achievable with kubernetes and DNS as it is now. Please also remember that a PTR record (IP->name) should go hand in hand with a regular record resolving the name (name->IP), which also complicates things and means you cannot have just <code>myservice-0dkd0</code> in there.</p> <p>That said, you can achieve what you want in a non-DNS way here. Assuming you run on Linux, you can use <code>/etc/hosts</code> to maintain a name-to-IP and IP-to-name list that is exclusive to that particular system and does not need to adhere to all the limitations of real DNS.</p> <p>If, on your MySQL host, you run something like the following (say from cron every minute), you should get correctly mapped names in your <code>/etc/hosts</code> almost all the time:</p> <pre><code>NAMESPACE=default sed -i "/^[0-9\.]*\t[a-zA-Z0-9-]*\.$NAMESPACE/d" /etc/hosts kubectl get pod --namespace "$NAMESPACE" --field-selector=status.phase==Running -o jsonpath='{range .items[*]}{.status.podIP}{"\t"}{.metadata.name}.{.metadata.namespace}{"\n"}{end}' &gt;&gt; /etc/hosts </code></pre>
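<p>To keep it running every minute, you could save the snippet as a script and schedule it; the paths below are just an assumption for illustration:</p> <pre><code># Assumption: the snippet above is saved as /usr/local/bin/sync-pod-hosts.sh and made executable.
# /etc/cron.d/sync-pod-hosts -- run it every minute as root:
* * * * * root /usr/local/bin/sync-pod-hosts.sh
</code></pre>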
<p>I'm trying to create a proper set of permissions for a serviceAccount. For some reason it seems to ignore the permissions I granted and gives me a bunch of errors because of that. I don't see what I did wrong. Did I apply something in the wrong namespace or similar?</p> <p>My Role:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: r-wercker-ingress-new rules: - apiGroups: [""] resources: ["namespaces"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] - apiGroups: [""] resources: ["services"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] - apiGroups: [""] resources: ["configmaps"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] - apiGroups: ["extensions"] resources: ["deployments"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] - apiGroups: ["autoscaling"] resources: ["horizontalpodautoscalers"] verbs: ["create", "delete", "patch", "update", "get", "watch", "list"] </code></pre> <p>My RoleBinding:</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: r-wercker-ingress-new subjects: - kind: ServiceAccount name: wercker namespace: kube-ingress roleRef: kind: Role name: r-wercker-ingress-new apiGroup: rbac.authorization.k8s.io </code></pre> <p>kubectl output about role</p> <pre><code>kubectl describe role r-wercker-ingress-new Name: r-wercker-ingress-new Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"r-wercker-ingress-new","namespace":"default"},"rules":[... PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- configmaps [] [] [create delete patch update get watch list] deployments.extensions [] [] [create delete patch update get watch list] horizontalpodautoscalers.autoscaling [] [] [create delete patch update get watch list] namespaces [] [] [create delete patch update get watch list] serviceaccounts [] [] [create delete patch update get watch list] services [] [] [create delete patch update get watch list] </code></pre> <p>kubectl output about RoleBinding</p> <pre><code>kubectl describe rolebinding r-wercker-ingress-new Name: r-wercker-ingress-new Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"r-wercker-ingress-new","namespace":"default"},"r... 
Role: Kind: Role Name: r-wercker-ingress-new Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount wercker kube-ingress </code></pre> <p>Error output when trying to apply my resources:</p> <pre><code>Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d380 0xc4205982a0 kube-ingress resources/kube-ingress/ingress-controller-nginx.yml 0xc420df4370 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": namespaces "kube-ingress" is forbidden: User "system:serviceaccount:default:wercker" cannot get namespaces in the namespace "kube-ingress" Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d440 0xc420599340 kube-ingress nginx-ingress-controller resources/kube-ingress/ingress-controller-nginx.yml 0xc420df43f8 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": serviceaccounts "nginx-ingress-controller" is forbidden: User "system:serviceaccount:default:wercker" cannot get serviceaccounts in the namespace "kube-ingress" Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d680 0xc4201e55e0 nginx-ingress-controller resources/kube-ingress/ingress-controller-nginx.yml 0xc420df4500 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": clusterroles.rbac.authorization.k8s.io "nginx-ingress-controller" is forbidden: User "system:serviceaccount:default:wercker" cannot get clusterroles.rbac.authorization.k8s.io at the cluster scope Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d740 0xc4204c4770 nginx-ingress-controller resources/kube-ingress/ingress-controller-nginx.yml 0xc420df4578 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": clusterrolebindings.rbac.authorization.k8s.io "nginx-ingress-controller" is forbidden: User "system:serviceaccount:default:wercker" cannot get clusterrolebindings.rbac.authorization.k8s.io at the cluster scope Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d800 0xc4204c5e30 kube-ingress ingress-nginx resources/kube-ingress/ingress-controller-nginx.yml 0xc420df45f0 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": services "ingress-nginx" is forbidden: User "system:serviceaccount:default:wercker" cannot get services in the namespace "kube-ingress" Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d8c0 0xc420134a10 kube-ingress ingress-nginx resources/kube-ingress/ingress-controller-nginx.yml 0xc420df4660 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": configmaps "ingress-nginx" is forbidden: User "system:serviceaccount:default:wercker" cannot get configmaps in the namespace "kube-ingress" Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc42062d980 0xc420145ab0 kube-ingress ingress-nginx resources/kube-ingress/ingress-controller-nginx.yml 0xc420df46f0 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": deployments.extensions "ingress-nginx" is forbidden: User "system:serviceaccount:default:wercker" cannot get deployments.extensions in the namespace "kube-ingress" </code></pre> <p><strong>Edit 1:</strong> I tried moving the resources into the corresponding namespaces but still get the same errors.</p> <pre><code>kubectl --namespace kube-ingress get role NAME AGE r-wercker-ingress-new 2m 
kubectl --namespace kube-ingress describe role r-wercker-ingress-new Name: r-wercker-ingress-new Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"r-wercker-ingress-new","namespace":"kube-ingress"},"rul... PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- configmaps [] [] [create delete patch update get watch list] deployments.extensions [] [] [create delete patch update get watch list] horizontalpodautoscalers.autoscaling [] [] [create delete patch update get watch list] namespaces [] [] [create delete patch update get watch list] serviceaccounts [] [] [create delete patch update get watch list] services [] [] [create delete patch update get watch list] kubectl --namespace kube-ingress get rolebinding NAME AGE r-wercker-ingress-new 2m kubectl --namespace kube-ingress describe rolebinding r-wercker-ingress-new Name: r-wercker-ingress-new Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"r-wercker-ingress-new","namespace":"kube-ingress... Role: Kind: Role Name: r-wercker-ingress-new Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount wercker kube-ingress </code></pre> <p>Still gives:</p> <pre><code>Error from server (Forbidden): error when retrieving current configuration of: &amp;{0xc420d14840 0xc420382620 kube-ingress nginx-ingress-controller resources/kube-ingress/ingress-controller-nginx.yml 0xc42160e560 false} from server for: "resources/kube-ingress/ingress-controller-nginx.yml": serviceaccounts "nginx-ingress-controller" is forbidden: User "system:serviceaccount:default:wercker" cannot get serviceaccounts in the namespace "kube-ingress" </code></pre> <p>I have deleted and re-created the Roles and RoleBindings</p>
<p>Yes, it looks like you are applying the resources into a wrong namespace. If you want to set these permissions up for the namespace <code>kube-ingress</code>, you need to create the resources in this namespace.</p> <p>So you would add this line to the metadata of your <code>Role</code>, <code>RoleBinding</code> and <code>ServiceAccount</code>:</p> <pre><code>namespace: kube-ingress </code></pre> <p>Using <code>Role</code> and <code>RoleBinding</code>, you define permissions for a single namespace. If you would like to create cluster-wide permissions, you can use <code>ClusterRole</code> and <code>ClusterRoleBinding</code>. </p> <p>You can also create a common case <code>ClusterRole</code> and then bind it to a single namespace with <code>RoleBinding</code>. The k8s docs are very helpful in this regard: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole</a></p>
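<p>Note that several of the errors are about cluster-scoped resources (<code>clusterroles</code>, <code>clusterrolebindings</code>), which a namespaced <code>Role</code> can never grant, regardless of namespace. A rough sketch of the cluster-wide variant for that service account could look like this (the name is made up, and the rules are trimmed to the cluster-scoped resources only):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wercker-cluster-access          # hypothetical name
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings"]
  verbs: ["create", "delete", "patch", "update", "get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wercker-cluster-access
subjects:
- kind: ServiceAccount
  name: wercker
  namespace: default                    # the account in the errors is system:serviceaccount:default:wercker
roleRef:
  kind: ClusterRole
  name: wercker-cluster-access
  apiGroup: rbac.authorization.k8s.io
</code></pre>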
<p>Is it possible to change the VPC of an already created EKS cluster? Or do I have to create a new one and there to select the new VPC?</p>
<p>You should be able to change the VPC configuration for the EKS cluster. However, according to the documentation I found, if the VPC config is updated, the update type is replacement, i.e. a new cluster will be created with the updated config.</p> <p>Please see <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#cfn-eks-cluster-resourcesvpcconfig" rel="noreferrer">https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#cfn-eks-cluster-resourcesvpcconfig</a> for more information.</p> <p>Hope this helps. </p>
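<p>For illustration, the relevant block in a CloudFormation template looks roughly like this (the logical name, ARN and IDs below are placeholders); updating the properties under <code>ResourcesVpcConfig</code> is what triggers the replacement described above:</p> <pre><code>Resources:
  MyEksCluster:                      # hypothetical logical name
    Type: AWS::EKS::Cluster
    Properties:
      RoleArn: arn:aws:iam::111111111111:role/eks-cluster-role   # placeholder
      ResourcesVpcConfig:            # changing these values causes replacement
        SubnetIds:
          - subnet-aaaa1111          # placeholder subnets in the new VPC
          - subnet-bbbb2222
        SecurityGroupIds:
          - sg-cccc3333              # placeholder security group
</code></pre>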
<p>I'm trying a Job deployment in kubernetes with the configuration below but I'm getting this error :</p> <pre><code>spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: OnFailure, Never </code></pre> <p>It's like kubernetes doesn't read my restartPolicy configuration (set to never) or it's overridden somewhere...</p> <p>The "funny" thing is it's working for my Cronjob deployment (I'm using the same template for both of them).</p> <p>Kubernetes version : 1.7.7</p> <p>Here is my configuration :</p> <pre><code>{ "apiVersion": "batch/v1", "kind": "Job", "metadata": { "name": "pipeline-test", "labels": { "app": "pipeline-test", "env": "test", "commit": "xxxxxxxx" }, "namespace": "pipeline-test" }, "spec": { "jobTemplate": { "spec": { "template": { "metadata": { "labels": { "app": "pipeline-test", "env": "test", "commit": "xxxxxxxx" } }, "spec": { "restartPolicy": "Never", "containers": [ { "name": "pipeline-test", "image": "us.gcr.io/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "env": [ { "name": "APP_ENV", "value": "test" }, { "name": "MAX_WORKERS", "value": "2" }, { "name": "TYPE_CASTING", "value": "false" }, { "name": "ENV", "value": "test" }, { "name": "PROJECT_NAME", "value": "null-testing1-v" }, { "name": "JOB_NAME", "value": "testjob" }, { "name": "SUBSCRIPTION_NAME", "value": "testsub" }, { "name": "CACHE_INVALIDATOR", "value": "14-1" }, { "name": "GIT_COMMIT", "value": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" }, { "name": "APP_GIT_COMMIT", "value": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" }, { "name": "APP_NAME", "value": "pipeline-test" }, ], "volumeMounts": [ ], "ports": [ { "containerPort": 3000 } ], "resources": { "requests": { "cpu": "100m", "memory": "512Mi" }, "limits": { "cpu": "1000m", "memory": "512Mi" } } } ], "volumes": [ ], "imagePullSecrets": [ { "name": "image-pull-secret" } ] } } } } } } </code></pre> <p>Thanks</p> <p>EDIT : I was able to run it by removing 2 lines :</p> <pre><code> "jobTemplate": { "spec": { </code></pre> <p>And so the restartPolicy was at the wrong level in the json and wasn't read.</p>
<p>I was able to run it by removing 2 lines :</p> <pre><code>"jobTemplate": { "spec": { </code></pre> <p>And so the restartPolicy was at the wrong level in the json and wasn't read.</p>
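<p>To make the structural point explicit, here is a trimmed-down sketch of the corrected nesting (most fields omitted, the image is a placeholder): for a plain <code>Job</code>, <code>spec.template</code> sits directly under <code>spec</code>, whereas the <code>jobTemplate</code> wrapper only exists on a <code>CronJob</code>.</p> <pre><code>{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": { "name": "pipeline-test" },
  "spec": {
    "template": {
      "spec": {
        "restartPolicy": "Never",
        "containers": [
          { "name": "pipeline-test", "image": "us.gcr.io/example/image" }
        ]
      }
    }
  }
}
</code></pre>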
<p>I am trying to make ignite deployed in k8s discoverable using <code>TcpDiscoveryKubernetesIpFinder</code>. I have also used all the deployment configurations as recommended in apache ignite documentation to make it discoverable. Ignite version is <code>v2.6</code>. When I try to access the ignite from another service inside the cluster(and namespace), it fails giving below error.</p> <blockquote> <p>. . instance-14292nccv10-74997cfdff-kqdqh] Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: <a href="https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service" rel="noreferrer">https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/my-namespace/endpoints/ignite-service</a> [instance-14292nccv10-74997cfdff-kqdqh] at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894) ~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) ~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) ~[na:1.8.0_151] [instance-14292nccv10-74997cfdff-kqdqh] . .</p> </blockquote> <p>My ignite configurations to make it discoverable are as follows,</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: ignite-service namespace: my-namespace --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: ignite-service namespace: my-namespace rules: - apiGroups: - "" resources: - pods - endpoints verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: ignite-service roleRef: kind: ClusterRole name: ignite-service apiGroup: rbac.authorization.k8s.io subjects: - kind: ServiceAccount name: ignite-service namespace: my-namespace --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ignite-service-volume-claim-blr3 namespace: my-namespace spec: storageClassName: ssd accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: v1 kind: Secret metadata: name: ignite-files namespace: my-namespace data: ignite-config.xml: 
PGJlYW5zIHhtbG5zID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMiCiAgICAgICB4bWxuczp4c2kgPSAiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiCiAgICAgICB4bWxuczp1dGlsID0gImh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvdXRpbCIKICAgICAgIHhzaTpzY2hlbWFMb2NhdGlvbiA9ICIKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMKICAgICAgIGh0dHA6Ly93d3cuc3ByaW5nZnJhbWV3b3JrLm9yZy9zY2hlbWEvYmVhbnMvc3ByaW5nLWJlYW5zLnhzZAogICAgICAgaHR0cDovL3d3dy5zcHJpbmdmcmFtZXdvcmsub3JnL3NjaGVtYS91dGlsCiAgICAgICBodHRwOi8vd3d3LnNwcmluZ2ZyYW1ld29yay5vcmcvc2NoZW1hL3V0aWwvc3ByaW5nLXV0aWwueHNkIj4KCiAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLklnbml0ZUNvbmZpZ3VyYXRpb24iPgogICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gImRpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgIDxiZWFuIGNsYXNzID0gIm9yZy5hcGFjaGUuaWduaXRlLnNwaS5kaXNjb3ZlcnkudGNwLlRjcERpc2NvdmVyeVNwaSI+CiAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZSA9ICJpcEZpbmRlciI+CiAgICAgICAgICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuc3BpLmRpc2NvdmVyeS50Y3AuaXBmaW5kZXIua3ViZXJuZXRlcy5UY3BEaXNjb3ZlcnlLdWJlcm5ldGVzSXBGaW5kZXIiPgogICAgICAgICAgICAgICAgICAgICAgICA8cHJvcGVydHkgbmFtZT0ibmFtZXNwYWNlIiB2YWx1ZT0ibXktbmFtZXNwYWNlIi8+CiAgICAgICAgICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lPSJzZXJ2aWNlTmFtZSIgdmFsdWU9Imlnbml0ZS1zZXJ2aWNlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgPC9iZWFuPgogICAgICAgIDwvcHJvcGVydHk+CiAgICAgICAgPCEtLSBFbmFibGluZyBBcGFjaGUgSWduaXRlIG5hdGl2ZSBwZXJzaXN0ZW5jZS4gLS0+CiAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGF0YVN0b3JhZ2VDb25maWd1cmF0aW9uIj4KICAgICAgICAgICAgPGJlYW4gY2xhc3MgPSAib3JnLmFwYWNoZS5pZ25pdGUuY29uZmlndXJhdGlvbi5EYXRhU3RvcmFnZUNvbmZpZ3VyYXRpb24iPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAiZGVmYXVsdERhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICA8YmVhbiBjbGFzcyA9ICJvcmcuYXBhY2hlLmlnbml0ZS5jb25maWd1cmF0aW9uLkRhdGFSZWdpb25Db25maWd1cmF0aW9uIj4KICAgICAgICAgICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAicGVyc2lzdGVuY2VFbmFibGVkIiB2YWx1ZSA9ICJ0cnVlIi8+CiAgICAgICAgICAgICAgICAgICAgPC9iZWFuPgogICAgICAgICAgICAgICAgPC9wcm9wZXJ0eT4KICAgICAgICAgICAgICAgIDxwcm9wZXJ0eSBuYW1lID0gInN0b3JhZ2VQYXRoIiB2YWx1ZSA9ICIvZGF0YS9pZ25pdGUvc3RvcmFnZSIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsUGF0aCIgdmFsdWUgPSAiL2RhdGEvaWduaXRlL2RiL3dhbCIvPgogICAgICAgICAgICAgICAgPHByb3BlcnR5IG5hbWUgPSAid2FsQXJjaGl2ZVBhdGgiIHZhbHVlID0gIi9kYXRhL2lnbml0ZS9kYi93YWwvYXJjaGl2ZSIvPgogICAgICAgICAgICA8L2JlYW4+CiAgICAgICAgPC9wcm9wZXJ0eT4KICAgIDwvYmVhbj4KPC9iZWFucz4= type: Opaque --- apiVersion: v1 kind: Service metadata: # Name of Ignite Service used by Kubernetes IP finder. # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName. name: ignite-service namespace: my-namespace spec: clusterIP: None # custom value. ports: - port: 9042 # custom value. selector: # Must be equal to one of the labels set in Ignite pods' # deployement configuration. app: ignite-service --- apiVersion: extensions/v1beta1 kind: Deployment metadata: # Custom Ignite cluster's name. name: ignite-service namespace: my-namespace spec: # A number of Ignite pods to be started by Kubernetes initially. replicas: 1 template: metadata: labels: # This label has to be added to the selector's section of # ignite-service.yaml so that the Kubernetes Ignite lookup service # can easily track all Ignite pods available deployed so far. app: ignite-service spec: serviceAccountName: ignite-service volumes: # Custom name for the storage that holds Ignite's configuration # which is example-kube.xml. 
- name: ignite-storage persistentVolumeClaim: # Must be equal to the PersistentVolumeClaim created before. claimName: ignite-service-volume-claim-blr3 - name: ignite-files secret: secretName: ignite-files containers: # Custom Ignite pod name. - name: ignite-node # Ignite Docker image. Kubernetes IP finder is supported starting from # Apache Ignite 2.6.0 image: apacheignite/ignite:2.6.0 lifecycle: postStart: exec: command: ['/bin/sh', '/opt/ignite/apache-ignite-fabric/bin/control.sh', '--activate'] env: # Ignite's Docker image parameter. Adding the jar file that # contain TcpDiscoveryKubernetesIpFinder implementation. - name: OPTION_LIBS value: ignite-kubernetes # Ignite's Docker image parameter. Passing the Ignite configuration # to use for an Ignite pod. - name: CONFIG_URI value: file:///etc/ignite-files/ignite-config.xml - name: ENV value: my-namespace ports: # Ports to open. # Might be optional depending on your Kubernetes environment. - containerPort: 11211 # REST port number. - containerPort: 47100 # communication SPI port number. - containerPort: 47500 # discovery SPI port number. - containerPort: 49112 # JMX port number. - containerPort: 10800 # SQL port number. volumeMounts: # Mounting the storage with the Ignite configuration. - mountPath: "/data/ignite" name: ignite-storage - name: ignite-files mountPath: "/etc/ignite-files" </code></pre> <p>I saw some links in stackoverflow with similar issue, followed the proposed solution but that doesn't work either. Any pointers on this will be of great help!</p>
<p>According to the URL, the IP finder tries to use a service named <code>ignite</code>, while you create it by name <code>ignite-service</code>.</p> <p>You should provide both namespace and service name in the IP finder configuration:</p> <pre><code>&lt;bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"&gt; &lt;property name="namespace" value="my-namespace"/&gt; &lt;property name="serviceName" value="ignite-service"/&gt; &lt;/bean&gt; </code></pre>
<p>We have an Istio cluster and we are trying to configure horizontal pod autoscale for Kubernetes. We want to use the request count as our custom metric for hpa. How can we utilise Istio's Prometheus for the same purpose?</p>
<p>This question turned out to be much more complex than I expected, but finally here I am with the answer.</p> <ol> <li><p>First of all, you need to configure your application to provide custom metrics. It is on the developing application side. Here is an example, how to make it with Go language: <a href="https://mycodesmells.com/post/watching-metrics-with-prometheus" rel="nofollow noreferrer">Watching Metrics With Prometheus</a></p></li> <li><p>Secondly, you need to define and deploy a Deployment of the application (or a Pod, or whatever you want) to Kubernetes, example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: podinfo spec: replicas: 2 template: metadata: labels: app: podinfo annotations: prometheus.io/scrape: 'true' spec: containers: - name: podinfod image: stefanprodan/podinfo:0.0.1 imagePullPolicy: Always command: - ./podinfo - -port=9898 - -logtostderr=true - -v=2 volumeMounts: - name: metadata mountPath: /etc/podinfod/metadata readOnly: true ports: - containerPort: 9898 protocol: TCP readinessProbe: httpGet: path: /readyz port: 9898 initialDelaySeconds: 1 periodSeconds: 2 failureThreshold: 1 livenessProbe: httpGet: path: /healthz port: 9898 initialDelaySeconds: 1 periodSeconds: 3 failureThreshold: 2 resources: requests: memory: "32Mi" cpu: "1m" limits: memory: "256Mi" cpu: "100m" volumes: - name: metadata downwardAPI: items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "annotations" fieldRef: fieldPath: metadata.annotations --- apiVersion: v1 kind: Service metadata: name: podinfo labels: app: podinfo spec: type: NodePort ports: - port: 9898 targetPort: 9898 nodePort: 31198 protocol: TCP selector: app: podinfo </code></pre> <p>Pay attention to the field <code>annotations: prometheus.io/scrape: 'true'</code>. It is required to request Prometheus to read metrics from the resource. Also note, there are two more annotations, which have default values; but if you change them in your application, you need to add them with the correct values: </p> <ul> <li><code>prometheus.io/path</code>: If the metrics path is not /metrics, define it with this annotation.</li> <li><code>prometheus.io/port</code>: Scrape the pod on the indicated port instead of the pod’s declared ports (default is a port-free target if none are declared).</li> </ul></li> <li><p>Next, Prometheus in Istio uses its own modified for Istio purposes configuration, and by default it skips custom metrics from Pods. Therefore, you need to modify it a little. In my case, I took configuration for Pod metrics from <a href="https://github.com/stefanprodan/k8s-prom-hpa/blob/master/prometheus/prometheus-cfg.yaml" rel="nofollow noreferrer">this example</a> and modified Istio's Prometheus configuration only for Pods:</p> <pre><code>kubectl edit configmap -n istio-system prometheus </code></pre> <p>I changed the order of labels according to the example mentioned before:</p> <pre><code># pod's declared ports (default is a port-free target if none are declared). - job_name: 'kubernetes-pods' # if you want to use metrics on jobs, set the below field to # true to prevent Prometheus from setting the `job` label # automatically. 
honor_labels: false kubernetes_sd_configs: - role: pod # skip verification so you can do HTTPS to pods tls_config: insecure_skip_verify: true # make sure your labels are in order relabel_configs: # these labels tell Prometheus to automatically attach source # pod and namespace information to each collected sample, so # that they'll be exposed in the custom metrics API automatically. - source_labels: [__meta_kubernetes_namespace] action: replace target_label: namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: pod # these labels tell Prometheus to look for # prometheus.io/{scrape,path,port} annotations to configure # how to scrape - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ </code></pre> <p>After that, custom metrics appeared in Prometheus. But, <strong>be careful with changing Prometheus configuration</strong>, because some metrics required for Istio may disappear, check everything carefully.</p></li> <li><p>Now it is time to install <a href="https://github.com/directxman12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus custom metric adapter</a>. </p> <ul> <li>Download <a href="https://github.com/directxman12/k8s-prometheus-adapter" rel="nofollow noreferrer">this</a> repository</li> <li>Change the address for Prometheus server in the file <code>&lt;repository-directory&gt;/deploy/manifests/custom-metrics-apiserver-deployment.yaml</code>. Example, <code>- --prometheus-url=http://prometheus.istio-system:9090/</code> </li> <li>Run command <code>kubectl apply -f &lt;repository-directory&gt;/deploy/manifests</code> After some time, <code>custom.metrics.k8s.io/v1beta1</code> should appear in the output of a command 'kubectl api-vesions'. </li> </ul> <p>Also, check the output of the custom API using commands <code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .</code> and <code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .</code> The output of the last one should look like in the following example:</p> <pre><code>{ "kind": "MetricValueList", "apiVersion": "custom.metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests" }, "items": [ { "describedObject": { "kind": "Pod", "namespace": "default", "name": "podinfo-6b86c8ccc9-kv5g9", "apiVersion": "/__internal" }, "metricName": "http_requests", "timestamp": "2018-01-10T16:49:07Z", "value": "901m" }, { "describedObject": { "kind": "Pod", "namespace": "default", "name": "podinfo-6b86c8ccc9-nm7bl", "apiVersion": "/__internal" }, "metricName": "http_requests", "timestamp": "2018-01-10T16:49:07Z", "value": "898m" } ] } </code></pre> <p>If it does, you can move to the next step. If it doesn’t, look what APIs available for Pods in CustomMetrics <code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/"</code> and for http_requests <code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http"</code>. 
Metric names are generated according to the metrics Prometheus gathers from the Pods; if they are empty, you need to look in that direction.</p></li> <li><p>The last step is configuring the HPA and testing it. So in my case, I created an HPA for the podinfo application defined before:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: podinfo spec: scaleTargetRef: apiVersion: extensions/v1beta1 kind: Deployment name: podinfo minReplicas: 2 maxReplicas: 10 metrics: - type: Pods pods: metricName: http_requests targetAverageValue: 10 </code></pre> <p>and used a simple Go application to generate load:</p> <pre><code>#install hey go get -u github.com/rakyll/hey #do 10K requests rate limited at 25 QPS hey -n 10000 -q 5 -c 5 http://&lt;K8S-IP&gt;:31198/healthz </code></pre> <p>After some time, I saw the scaling change by using the commands <code>kubectl describe hpa</code> and <code>kubectl get hpa</code>.</p></li> </ol> <p>I used the instructions on creating Custom Metrics from the article <a href="https://dzone.com/articles/ensure-high-availability-and-uptime-with-kubernete" rel="nofollow noreferrer">Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus</a>.</p> <p>All useful links in one place: </p> <ul> <li><a href="https://mycodesmells.com/post/watching-metrics-with-prometheus" rel="nofollow noreferrer">Watching Metrics With Prometheus</a> - an example of adding metrics to your application</li> <li><a href="https://github.com/stefanprodan/k8s-prom-hpa" rel="nofollow noreferrer">k8s-prom-hpa</a> - an example of creating Custom Metrics for Prometheus (the same as in the article above)</li> <li><a href="https://github.com/directxman12/k8s-prometheus-adapter" rel="nofollow noreferrer">Kubernetes Custom Metrics Adapter for Prometheus</a></li> <li><a href="https://github.com/luxas/kubeadm-workshop#deploying-a-custom-metrics-api-server-and-a-sample-app" rel="nofollow noreferrer">Setting up the custom metrics adapter and sample app</a></li> </ul>
<p>I am following the Getting Started guide for Traefik from <a href="https://docs.traefik.io/" rel="nofollow noreferrer">here</a> and am trying to launch the service into Kubernetes (Minikube) instead of Docker:</p> <pre><code>Edit your docker-compose.yml file and add the following at the end of your file. # ... whoami: image: emilevauge/whoami # A container that exposes an API to show its IP address labels: - "traefik.frontend.rule=Host:whoami.docker.localhost"** </code></pre> <p>I am guessing I run it as:</p> <pre><code>kubectl run whoami-service --image=emilevauge/whoami --labels='traefik.frontend.rule=Host:whoami.docker.localhost' </code></pre> <p>however that generates an error of:</p> <pre><code>The Deployment "whoami-service" is invalid: * metadata.labels: Invalid value: "'traefik.frontend.rule": name part must consist of alphanumeric characters, '-', '_' or '.', and.... </code></pre> <p>So what am I missing here? How do I deploy the above to my Minikube Kubernetes cluster?</p>
<p>I'm not sure if this is along the lines of what you're looking for, but Traefik has a <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">small tutorial</a> for getting an Ingress controller set up on Kubernetes, with a <a href="https://docs.traefik.io/configuration/backends/kubernetes/" rel="nofollow noreferrer">great document</a> on configuration as well. </p> <p>If you'd just like to get that particular image working, you may be able to pass the label as an argument to the pod, possibly with <code>kubectl run</code>. From the output of <code>kubectl run --help</code>:</p> <pre><code># Start the nginx container using the default command, but use custom arguments (arg1 .. argN) for that command. kubectl run nginx --image=nginx -- &lt;arg1&gt; &lt;arg2&gt; ... &lt;argN&gt; </code></pre> <p>Or possibly manually in a manifest:</p> <pre><code>... containers: - name: whoami image: emilevauge/whoami args: ["traefik.frontend.rule: Host:whoami.docker.localhost"] </code></pre> <p>Having never worked with the image in the example before, I don't know if either of the above examples will actually work.</p>
<p>I'm trying to access .NET Web API which I docker-ized and mounted in an Kubernet Cluster on Microsoft Azure.</p> <p>The application works fine on local docker machine. The cluster is running, my deployment was correct and the pods where created. Everything I check is fine, but I cannot access my application through the external cluster IP (Load Balancer). This is my YAML deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ohmioapi-deployment spec: selector: matchLabels: app: ohmioapi replicas: 1 template: metadata: labels: app: ohmioapi spec: containers: - name: ohmioapi image: ohmiocontainers.azurecr.io/ohmioapi:latest imagePullPolicy: Always ports: - containerPort: 15200 imagePullSecrets: - name: acr-auth --- apiVersion: v1 kind: Service metadata: name: ohmioapi labels: app: ohmioapi spec: selector: app: ohmioapi ports: - port: 15200 nodePort: 30200 protocol: TCP type: LoadBalancer </code></pre> <p>Can anyone give a hint of where to start looking for? Thanks!</p>
<p>I would give the deployment/pods port a name (e.g. <code>http</code>) and then make the service serve off port 80 but target the pod port by name... that way you don't have to worry about port numbers when connecting to a service.</p> <p>Also, you shouldn't need or want to use <code>nodePort</code> if you are using type of <code>LoadBalancer</code>.</p> <p>E.g.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ohmioapi-deployment spec: selector: matchLabels: app: ohmioapi replicas: 1 template: metadata: labels: app: ohmioapi spec: containers: - name: ohmioapi image: ohmiocontainers.azurecr.io/ohmioapi:latest imagePullPolicy: Always ports: - name: http containerPort: 15200 imagePullSecrets: - name: acr-auth --- apiVersion: v1 kind: Service metadata: name: ohmioapi labels: app: ohmioapi spec: selector: app: ohmioapi ports: - name: http port: 80 targetPort: http protocol: TCP type: LoadBalancer </code></pre>
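<p>Once this is applied, Azure provisions a public load balancer for the service; a quick way to verify could look like this (append your own API route to the curl call):</p> <pre><code># wait until EXTERNAL-IP changes from &lt;pending&gt; to a real address
kubectl get service ohmioapi --watch

# then call the API through the load balancer on port 80
curl http://&lt;EXTERNAL-IP&gt;/
</code></pre>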
<p>Is there any way to share the directory/files to kubernetes container from your local system?</p> <p>I have a deployment yaml file. I want to share the directory without using <code>kubectl cp</code>.</p> <p>I tried with <code>configmap</code> but I later came to know that configmap can not have the whole directory but only a single file.</p> <p>If anyone has any idea please share.</p> <p>Please note: I do not want to host the file into minikube but I want to push the directory directly to container</p>
<p>I found a way.</p> <p>We can specify the directory we want to add into the container by using <code>hostPath</code> in <code>volumes</code>:</p> <pre><code> volumeMounts: - name: crypto-config mountPath: &lt;PATH IN CONTAINER&gt; - name: channel-artifacts mountPath: /opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts - name: chaincode mountPath: /opt/gopath/src/github.com/chaincode volumes: - name: crypto-config hostPath: path: &lt;YOUR LOCAL DIR PATH&gt; - name: channel-artifacts hostPath: path: /Users/akshaysood/Blockchain/Kubernetes/Fabric/network/channel-artifacts - name: chaincode hostPath: path: /Users/akshaysood/Blockchain/Kubernetes/Fabric/network/chaincode </code></pre>
<p>I am facing this error from kubernetes cluster. Though it works perfectly from my local server. Here is my application.yml of SpringBoot App</p> <h1>spring:</h1> <pre><code>datasource: dataSourceClassName: org.postgresql.ds.PGPoolingDataSource url: jdbc:postgresql://${POSTGRES_HOST}:5432/test_db databaseName: test_db poolName: SpringBootHikariCP username: ${POSTGRES_USER} password: ${POSTGRES_PASSWORD} testWhileIdle: true validationQuery: SELECT 1 jpa: database-platform: org.hibernate.dialect.PostgreSQL82Dialect openInView: false show_sql: true generate-ddl: true hibernate: ddl-auto: update naming-strategy: org.hibernate.cfg.ImprovedNamingStrategy use-new-id-generator-mappings: true properties: hibernate.cache.use_second_levelt_cache: false hibernate.cache.use_query_cache: false hibernate.generate_statistics: true hibernate.hbm2ddl.auto: validate </code></pre> <p>Here is my Hikari configuration.</p> <pre class="lang-java prettyprint-override"><code> HikariConfig config = new HikariConfig(); config.setDataSourceClassName(dataSourceClassName); config.addDataSourceProperty(&quot;url&quot;, url); config.addDataSourceProperty(&quot;user&quot;, user); config.addDataSourceProperty(&quot;password&quot;, password); return new HikariDataSource(config); </code></pre> <p>I have checked DB connectivity of kubernetes without Hikari and it works perfectly. So there is no issue with connectivity. Please help me regarding the issue. I am stuck with this for couple of days. Thank you</p>
<p><code>${POSTGRES_HOST}</code> refers to a <a href="https://stackoverflow.com/questions/35531661/using-env-variable-in-spring-boots-application-properties">system environment variable</a> that is probably missing on that specific machine (or in the pod). Add it, for example:</p> <pre><code>export POSTGRES_HOST="1.1.1.1" </code></pre>
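<p>Since the application runs in Kubernetes, the cleaner fix is to set these variables in the pod spec rather than on a machine. A minimal sketch, assuming a Service named <code>postgres</code> for the database and a Secret named <code>postgres-credentials</code> (both names are assumptions, not taken from the question):</p> <pre><code>containers:
- name: app
  image: my-springboot-app:latest       # placeholder image
  env:
  - name: POSTGRES_HOST
    value: "postgres"                   # assumed Service name of the database
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-credentials      # assumed Secret holding the credentials
        key: username
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password
</code></pre>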
<p>I have an application running in Kubernetes. To take the advantage of rolling updates with no downtime, I have to implement the proper Health Checks, so the cluster can know when the application/container is ready to handle requests.</p> <p>I'm trying to use the <a href="https://blogs.msdn.microsoft.com/webdev/2018/08/22/asp-net-core-2-2-0-preview1-healthcheck/" rel="noreferrer">new ASP.NET Code 2.2 Healthchecks</a> feature.</p> <p>I should return a <code>Microsoft.Extensions.Diagnostics.HealthChecks.HealthCheckResult</code> struct with the status of the application.</p> <p>This struct has 3 static methods that I can use to return this status:</p> <ul> <li>Healthy</li> <li>Unhealthy</li> <li>Degraded</li> </ul> <p>In which situations that the app is not OK that I should use Unhealthy vs Degraded? A few examples would be good.</p>
<p>A "degraded" check could be used for checks that did succeed but are slow or unstable. E.g. a simple database query did succeed but took more than a second. Moving traffic to another instance is probably a good idea until the problem has resolved.</p> <p>An "unhealthy" check means that the component does not work at all. E.g. a connection to the Redis cache could no be established. Restarting the instance could solve this issue.</p> <p>Quoting the blog post:</p> <blockquote> <p><strong>A failed liveness probe says:</strong> The application has crashed. You should shut it down and restart.</p> <p><strong>A failed readiness probe says:</strong> The application is OK but not yet ready to serve traffic.</p> </blockquote> <p>You could say that a "degraded" health check maps to the "readiness" probe and an "unhealthy" check maps to the "liveness" probe.</p>
<p>I have a public key that all my pods needs to have. </p> <p>My initial thought was to create a ConfigMap or Secret to hold it but as far as I can tell neither of those can be used across namespaces. Apart from that, it's really boiler plate to paste the same volume into all my Deployments</p> <p>So now I'm left with only, in my opinion, bad alternatives such as creating the same ConfigMap/Secret in all Namespaces and do the copy-paste thing in deployments.</p> <p>Any other alternatives?</p> <p>Extra information after questions.</p> <ul> <li>The key doesn't need to be kept secret, it's a public key, but it needs to be distributed in a trusted way.</li> <li>It won't rotate often but when it happens all images can't be re-built.</li> <li>Almost all images/pods needs this key and there will be hundreds of images/pods.</li> </ul>
<p>You can use Kubernetes initializers to intercept object creation and mutate as you want. This can solve copy-paste in all your deployments and you can manage it from a central location.</p> <p><a href="https://medium.com/google-cloud/how-kubernetes-initializers-work-22f6586e1589" rel="nofollow noreferrer">https://medium.com/google-cloud/how-kubernetes-initializers-work-22f6586e1589</a></p> <p>You will still need to create configmaps/secrets per namespace though. </p>
<p>I have a Scala App running on a Service, using a PostgreSql Service, Tables are being created by the database management plugin (Slick)</p> <p>however, need to edit a particular table and I cant find it!</p> <p>postgres service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: core-db spec: ports: - port: 5432 selector: app: core-db --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: core-db spec: template: metadata: labels: app: core-db spec: containers: - image: "image" name: core-db imagePullPolicy: IfNotPresent env: - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres ports: - containerPort: 5432 name: core-db volumeMounts: - name: core-storage mountPath: /var/lib/postgresql/db-data volumes: - name: core-storage persistentVolumeClaim: claimName: core-pv-claim imagePullSecrets: - name: regcred --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: core-pv-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi </code></pre> <p>I get in the pod:</p> <pre><code>kubectl exec POD -it -- psql -U postgres </code></pre> <p>but \d shows no relations</p> <p>What I am doing wrong, where is my data after all ? </p> <p><strong>EDIT</strong></p> <pre><code>\l postgres app template0 template1 \c postgres \dt No relations found. \c app \dt No relations found. </code></pre>
<p>Ok so, it happens that I just lacked some brain activity.</p> <p>All my tables are stored in schemas, and that's the reason I can't see them when I use \dt on any of the databases.</p> <p>Connecting to the 'app' database and using \dn will list the schemas, and thus we can find the tables.</p> <p>Qualifying the table with its schema, e.g. <code>SELECT * FROM "schema".table</code>, is how it works...</p> <p>Thanks to @NicolaBen, those steps helped me.</p>
<p>Hello kubernetes experts,</p> <p>I have a Kubernetes cluster running on 4 Raspberry Pis with Docker 18.04ce and Kubernetes 1.9.7.</p> <p>I deployed a service, and this service can be accessed from within the cluster via the cluster IP. I also deployed an ingress as described in <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/</a> and in <a href="https://stackoverflow.com/questions/51357615/how-to-get-kubernetes-ingress-port-80-working-on-baremetal-single-node-cluster">How to get Kubernetes Ingress Port 80 working on baremetal single node cluster</a>, as a DaemonSet backed by an ingress-controller service. The DaemonSet also has NET_BIND_SERVICE set, which should make the host listen on the same port as the service.</p> <p>Everything works as described, but my ingress doesn't listen on the host's port 80. Somehow the NET_BIND_SERVICE setting doesn't work as intended. Does anyone know how to fix that?</p> <p>If I deploy the ingress controller as a Deployment with NodePort instead of as a DaemonSet it works, but that limits me to the ports Kubernetes allows for NodePorts.</p> <p><a href="https://hackernoon.com/kubernetes-ingress-controllers-and-traefik-a32648a4ae95" rel="nofollow noreferrer">https://hackernoon.com/kubernetes-ingress-controllers-and-traefik-a32648a4ae95</a> says that the hostPort of an ingress DaemonSet doesn't work with a CNI networking plugin (I tested with flannel and weave), but the Kubernetes@RaspberryPI sites (like <a href="https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/" rel="nofollow noreferrer">https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/</a>) say that it works, so this issue should be solved.</p> <p>Thanks in advance, Heinz</p>
<p>I found a configuration with which the traefik-based ingress works on my Raspberry Pi cluster with Docker 18.04CE, Kubernetes 1.9.7 and 2018-06-27-raspbian-stretch-lite.img:</p> <p>Use the DaemonSet definition from <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/</a>, and in particular the yaml file <a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml</a>,</p> <p><em>but</em> you have to add hostNetwork: true to the spec of the DaemonSet and type: ClusterIP to the spec of the Service.</p> <p>My working yaml is as follows:</p> <pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=DEBUG
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  type: ClusterIP
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
</code></pre> <p>According to the Kubernetes documentation, ClusterIP is the default type for a service, but my sample only works if I explicitly add type: ClusterIP to the service that backs the ingress controller.</p> <p>I also checked whether it works if I only add either "hostNetwork: true" to the spec.template.spec of the DaemonSet or "type: ClusterIP" to the spec of the Service, but it only works if I add both.</p>
<p>I tried <code>kubectl rollout history deployment/my-app</code>, but it only returns <code>No rollout history found.</code></p> <p>I think there must be a way to get the full deployment history. It would be very helpful.</p> <p>Reference official document: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p>
<p>Use <code>--record</code> while creating the deployment so that it starts recording the rollout history in the ReplicaSets.</p> <pre><code>$ kubectl create -f deploy.yaml --record=true
</code></pre> <p>Whenever you deploy a new version of the deployment, the ReplicaSet preserves the previous configuration, so you can track the deployment history and even use it for automation. Arguably this should be the default in Kubernetes, but by default it is set to false.</p> <p>Then check the status and history using the commands below:</p> <pre><code>$ kubectl rollout status deploy myapp-deployment
$ kubectl rollout history deploy myapp-deployment
</code></pre> <p>Hope this helps.</p>
<p>We have Zuul and Eureka both up and running on our kubernetes cluster. Zuul is registered with Eureka.</p> <p>I start up a new service called "Resource-Service" this correctly boots up and registers with Eureka, all services are up.</p> <p>When I attempt to hit Zuul endpoint to access "Resource-service" I get the following error. Seems like Zuul can not map to Resource-service even though resource service is registered with eureka. </p> <p>So how does zuul know where to route requests for "Resource-service" if not through a registered service in eureka?</p> <p>Note: I have tried this with Docker-compose and was able to get to work, so must be something with kubernetes interacting with zuul and eureka.</p> <p><strong>Zuul Stack Trace (Updated)</strong></p> <pre><code>2018-08-29 14:23:15.820 INFO 7 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_ZUUL-API-GATEWAY/zuul-api-gateway-79bd7d4c5-jgqxc:zuul-api-gateway:7100: registering service... 2018-08-29 14:23:15.916 INFO 7 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_ZUUL-API-GATEWAY/zuul-api-gateway-79bd7d4c5-jgqxc:zuul-api-gateway:7100 - registration status: 204 2018-08-29 14:23:15.928 INFO 7 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 7100 (http) with context path '' 2018-08-29 14:23:15.929 INFO 7 --- [ main] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 7100 2018-08-29 14:23:15.934 INFO 7 --- [ main] n.e.m.z.ZuulApiGatewayApplication : Started ZuulApiGatewayApplication in 11.624 seconds (JVM running for 12.771) 2018-08-29 14:27:59.166 INFO 7 --- [nio-7100-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet' 2018-08-29 14:27:59.175 INFO 7 --- [nio-7100-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started 2018-08-29 14:27:59.249 INFO 7 --- [nio-7100-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 73 ms 2018-08-29 14:27:59.504 INFO 7 --- [nio-7100-exec-1] o.s.c.n.zuul.web.ZuulHandlerMapping : Mapped URL path [/resource-service/**] onto handler of type [class org.springframework.cloud.netflix.zuul.web.ZuulController] 2018-08-29 14:28:15.511 INFO 7 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration 2018-08-29 14:29:51.215 INFO 7 --- [nio-7100-exec-5] n.e.m.zuulapigateway.ZuulLoggingFilter : zuul request uri -&gt; /resource-service/resource/1 2018-08-29 14:29:51.219 INFO 7 --- [nio-7100-exec-5] n.e.m.zuulapigateway.JwtFilter : auth:Bearer &lt;&lt;Removed&gt;&gt; 2018-08-29 14:29:51.277 INFO 7 --- [nio-7100-exec-5] s.c.a.AnnotationConfigApplicationContext : Refreshing SpringClientFactory-resource-service: startup date [Wed Aug 29 14:29:51 GMT 2018]; parent: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@20322d26 2018-08-29 14:29:51.420 INFO 7 --- [nio-7100-exec-5] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring 2018-08-29 14:29:51.780 INFO 7 --- [nio-7100-exec-5] c.netflix.config.ChainedDynamicProperty : Flipping property: resource-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647 2018-08-29 14:29:51.805 INFO 7 --- [nio-7100-exec-5] c.n.u.concurrent.ShutdownEnabledTimer : Shutdown hook installed for: 
NFLoadBalancer-PingTimer-resource-service 2018-08-29 14:29:51.952 INFO 7 --- [nio-7100-exec-5] c.netflix.loadbalancer.BaseLoadBalancer : Client: resource-service instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=resource-service,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null 2018-08-29 14:29:51.964 INFO 7 --- [nio-7100-exec-5] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater 2018-08-29 14:29:52.045 INFO 7 --- [nio-7100-exec-5] c.netflix.config.ChainedDynamicProperty : Flipping property: resource-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647 2018-08-29 14:29:52.049 INFO 7 --- [nio-7100-exec-5] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client resource-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=resource-service,current list of Servers=[resource-service-7d9c6d6d99-j2mmv:10002],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;] },Server stats: [[Server:resource-service-7d9c6d6d99-j2mmv:10002; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 00:00:00 GMT 1970; First connection made: Thu Jan 01 00:00:00 GMT 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0] ]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@1f11c5d0 2018-08-29 14:29:52.527 WARN 7 --- [nio-7100-exec-5] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering com.netflix.zuul.exception.ZuulException: Forwarding error at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:189) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:164) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:112) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:117) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:193) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.FilterProcessor.runFilters(FilterProcessor.java:157) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.FilterProcessor.route(FilterProcessor.java:118) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.ZuulRunner.route(ZuulRunner.java:96) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.http.ZuulServlet.route(ZuulServlet.java:116) ~[zuul-core-1.3.1.jar!/:1.3.1] at com.netflix.zuul.http.ZuulServlet.service(ZuulServlet.java:81) ~[zuul-core-1.3.1.jar!/:1.3.1] at org.springframework.web.servlet.mvc.ServletWrappingController.handleRequestInternal(ServletWrappingController.java:165) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.cloud.netflix.zuul.web.ZuulController.handleRequest(ZuulController.java:44) [spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at 
org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:52) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:866) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851) [spring-webmvc-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-embed-websocket-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:158) [spring-boot-actuator-2.0.3.RELEASE.jar!/:2.0.3.RELEASE] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:126) [spring-boot-actuator-2.0.3.RELEASE.jar!/:2.0.3.RELEASE] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:111) [spring-boot-actuator-2.0.3.RELEASE.jar!/:2.0.3.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90) [spring-boot-actuator-2.0.3.RELEASE.jar!/:2.0.3.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) 
[spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.0.7.RELEASE.jar!/:5.0.7.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1468) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_171] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_171] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.31.jar!/:8.5.31] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_171] Caused by: com.netflix.client.ClientException: null at com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:118) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at org.springframework.cloud.netflix.zuul.filters.route.support.AbstractRibbonCommand.run(AbstractRibbonCommand.java:186) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at org.springframework.cloud.netflix.zuul.filters.route.support.AbstractRibbonCommand.run(AbstractRibbonCommand.java:51) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:302) ~[hystrix-core-1.5.12.jar!/:1.5.12] at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:298) ~[hystrix-core-1.5.12.jar!/:1.5.12] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at 
rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.subscribe(Observable.java:10423) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.subscribe(Observable.java:10390) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.BlockingOperatorToFuture.toFuture(BlockingOperatorToFuture.java:51) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.observables.BlockingObservable.toFuture(BlockingObservable.java:410) ~[rxjava-1.3.8.jar!/:1.3.8] at com.netflix.hystrix.HystrixCommand.queue(HystrixCommand.java:378) ~[hystrix-core-1.5.12.jar!/:1.5.12] at com.netflix.hystrix.HystrixCommand.execute(HystrixCommand.java:344) ~[hystrix-core-1.5.12.jar!/:1.5.12] at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:159) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] ... 
65 common frames omitted Caused by: java.lang.RuntimeException: java.net.UnknownHostException: resource-service-7d9c6d6d99-j2mmv: Name does not resolve at rx.exceptions.Exceptions.propagate(Exceptions.java:57) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:463) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.observables.BlockingObservable.single(BlockingObservable.java:340) ~[rxjava-1.3.8.jar!/:1.3.8] at com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:112) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] ... 127 common frames omitted Caused by: java.net.UnknownHostException: resource-service-7d9c6d6d99-j2mmv: Name does not resolve at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_171] at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_171] at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_171] at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[na:1.8.0_171] at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_171] at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_171] at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.5.jar!/:4.5.5] at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[httpclient-4.5.5.jar!/:4.5.5] at org.springframework.cloud.netflix.ribbon.apache.RibbonLoadBalancingHttpClient.execute(RibbonLoadBalancingHttpClient.java:81) ~[spring-cloud-netflix-ribbon-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at org.springframework.cloud.netflix.ribbon.apache.RibbonLoadBalancingHttpClient.execute(RibbonLoadBalancingHttpClient.java:44) ~[spring-cloud-netflix-ribbon-2.0.0.RELEASE.jar!/:2.0.0.RELEASE] at com.netflix.client.AbstractLoadBalancerAwareClient$1.call(AbstractLoadBalancerAwareClient.java:104) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:303) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at com.netflix.loadbalancer.reactive.LoadBalancerCommand$3$1.call(LoadBalancerCommand.java:287) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:231) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:228) ~[rxjava-1.3.8.jar!/:1.3.8] at 
rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeConcatMap$ConcatMapSubscriber.drain(OnSubscribeConcatMap.java:286) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeConcatMap$ConcatMapSubscriber.onNext(OnSubscribeConcatMap.java:144) ~[rxjava-1.3.8.jar!/:1.3.8] at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:185) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:180) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:94) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:42) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OperatorRetryWithPredicate$SourceSubscriber$1.call(OperatorRetryWithPredicate.java:127) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.enqueue(TrampolineScheduler.java:73) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.schedule(TrampolineScheduler.java:52) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OperatorRetryWithPredicate$SourceSubscriber.onNext(OperatorRetryWithPredicate.java:79) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OperatorRetryWithPredicate$SourceSubscriber.onNext(OperatorRetryWithPredicate.java:45) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.util.ScalarSynchronousObservable$WeakSingleProducer.request(ScalarSynchronousObservable.java:276) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Subscriber.setProducer(Subscriber.java:209) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:138) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:129) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.subscribe(Observable.java:10423) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.Observable.subscribe(Observable.java:10390) ~[rxjava-1.3.8.jar!/:1.3.8] at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:443) ~[rxjava-1.3.8.jar!/:1.3.8] ... 
129 common frames omitted </code></pre> <p><strong>Eureka (Updated)</strong></p> <p><a href="https://i.stack.imgur.com/UvSRz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UvSRz.png" alt="enter image description here"></a></p> <p><strong>resource-service Bootstrap.yml (New)</strong></p> <pre><code>spring: application: name: resource-service profiles: active: dev jpa: hibernate: dialect: org.hibernate.dialect.SQLServerDialect show_sql: true #WARNING: set this to false for production format_sql: true #WARNING: set this to false for production hbm2ddl: auto: none naming: physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl eureka: instance: preferIpAddress: true client: registerWithEureka: true fetchRegistry: true example: property: "RESOURCE SERVICE DEV PROFILE TEST" #logging.level.root: debug </code></pre> <p><strong>zuul-api-gateway (bootstrap.yml)</strong></p> <pre><code>spring: application: name: zuul-api-gateway server: port: 7100 profiles: active: default hystrix: command: default: execution: isolation: thread: timeoutInMilliseconds: 60000 zuul: ribbon: eager-load: enabled: true </code></pre> <p><strong>Kubectl get all</strong></p> <p><a href="https://i.stack.imgur.com/nD1qz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nD1qz.png" alt="enter image description here"></a></p> <p>Thank you in advance!</p>
<p>I think you have a mismatch between the host name being registered with Eureka and the name under which the service is reachable in k8s. Do <code>kubectl get services</code> to find the names of your k8s services. Zuul gets the name that is registered to it from Eureka (it seems you are registering names and not IPs). If you change the name of your Service in your k8s yaml for resource-service to be 'resource-service', then I'd expect it to work.</p> <p>In your docker-compose file I expect you have a section that defines the resource service, and it is presumably named resource-service. The equivalent in k8s is the name of the Service that matches the Pods of that particular Deployment.</p> <p>Edit: A Spring Boot app by default registers the Pod's hostname with Eureka (if not using the IP). So to override this it's necessary to set eureka.instance.hostname to match the Service name.</p>
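<p>As a sketch, the Service for the resource pods could look like the following. The important part is that <code>metadata.name</code> matches the name the gateway ends up resolving (assumed here to be <code>resource-service</code>); the selector labels are assumptions and must match the labels on your resource-service Pods, and 10002 is the port the instance registered with Eureka according to the logs:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: resource-service     # must match the name Zuul/Ribbon tries to resolve
spec:
  selector:
    app: resource-service    # assumption: adjust to your Pod labels
  ports:
    - port: 10002            # port registered with Eureka (from the logs)
      targetPort: 10002
</code></pre>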
<p>I just started with Kubernetes, using minikube on a Windows 10 machine with Hyper-V. I deployed and exposed an nginx container as follows, but could not access the deployed application through the minikube IP or from inside the minikube server.</p> <pre><code>C:\WINDOWS\system32&gt;kubectl run nginx --image=nginx --port=8090
deployment "nginx" created

C:\WINDOWS\system32&gt;kubectl expose deployment nginx --type="NodePort"
service "nginx" exposed

C:\WINDOWS\system32&gt;minikube service list
|-------------|----------------------|----------------------------|
|  NAMESPACE  |         NAME         |            URL             |
|-------------|----------------------|----------------------------|
| default     | kubernetes           | No node port               |
| default     | nginx                | http://10.16.234.206:30684 |
| kube-system | kube-dns             | No node port               |
| kube-system | kubernetes-dashboard | http://10.16.234.206:30000 |
|-------------|----------------------|----------------------------|

C:\WINDOWS\system32&gt;minikube ssh "curl localhost:30684"
curl: (7) Failed to connect to localhost port 30684: Connection refused

$ curl "http://10.16.234.206:30684"
curl: (7) Failed to connect to 10.16.234.206 port 30684: Connection refused
</code></pre>
<p>The nginx Dockerfile exposes port 80, but your pod is using port 8090, so the NodePort service forwards traffic to a port nothing is listening on. Running the deployment with the right port (80) should fix it:</p> <pre><code>kubectl run nginx --image=nginx --port=80
</code></pre>
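<p>Alternatively, if you prefer to keep the existing deployment, a Service whose <code>targetPort</code> points at the port nginx actually listens on would also work. A sketch, assuming the <code>run: nginx</code> label that <code>kubectl run</code> applies by default:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx        # label set by "kubectl run nginx"
  ports:
    - port: 80
      targetPort: 80  # the port the nginx container actually listens on
</code></pre>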
<p>We would like to deploy an Airflow application on Kubernetes across 2 data centers.</p> <p>The Airflow Scheduler container generates DAGs every 1 min, 5 mins and 10 mins. These DAGs are the tasks that will be assigned to the Airflow Worker container.</p> <p>In the process of assigning tasks to the Airflow worker, the Airflow Scheduler sends the data about the tasks to both MariaDB (which can be considered the source of truth) and Redis.</p> <p>In MariaDB a task can have one of the statuses '<code>queued</code>', '<code>running</code>', '<code>success</code>', '<code>failed</code>'. While a task is in Redis it is in the state '<code>queued</code>'.</p> <p>MariaDB records the same state when it receives tasks from the Airflow Scheduler. When Redis hands over a particular queued task to the Worker container, MariaDB changes that task's status to 'running', and once it finishes executing, the task status in MariaDB is changed to 'success'.</p> <p><strong>The actual problem:</strong></p> <p>When Redis fails, we still have the queued tasks in MariaDB, but we lose the data in Redis. When k8s brings up a new Redis server, the previous tasks are gone; here comes the <em>DATA LOSS</em>.</p> <p>What can be the solution for this?</p> <p>Can we use <code>Redis Clustering - Gossip protocol</code> to avoid data loss?</p> <p>If yes, could you provide any documentation on resolving this problem using this protocol? Otherwise, please suggest something that suits my environment and scenario.</p>
<p>Redis clustering would help with it, but it is a bit of a pain to set up and it's not a complete replacement for backups.</p> <p>In your case a much simpler solution, in my opinion, would be to incorporate a recovery procedure into your Redis pod startup. You do not have permanent data loss, as you have MariaDB as your source of truth, so you can add an init container that runs a script to recover the Redis data from MariaDB.</p> <p>Another approach that would limit your problem significantly would be to use a persistent volume to store the Redis data, as Redis can snapshot its in-memory state at regular intervals. By using a StatefulSet instead of a Deployment to manage your Redis node(s), the pods would get their storage reattached on restart/rescheduling and you'd experience no data loss (or at most the tiny window since the last snapshot). A minimal sketch of this is shown below.</p>
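<p>A minimal sketch of the StatefulSet/persistent-volume approach. The image, storage size and append-only setting are assumptions; the point is the <code>volumeClaimTemplates</code>, which gives each Redis pod a volume that survives restarts and rescheduling:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:4.0                 # placeholder version
          args: ["--appendonly", "yes"]    # or rely on periodic RDB snapshots
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data             # where redis writes its snapshots/AOF
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi                   # placeholder size
</code></pre>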
<p>I am trying to deploy Hyperledger Fabric on Kubernetes. Everything is working fine other than <code>peer chaincode instantiate</code>. Whenever I try to instantiate chaincode it keeps waiting for around 10-15 mins and then it throws timeout error as shown below:</p> <pre><code>Error: could not assemble transaction, err Proposal response was not successful, error code 500, msg failed to execute transaction b7ce8fa6259e6004481b829660dd2365f8503153047b3d9132393180e2591f0a: timeout expired while starting chaincode mycc:1.0 for transaction b7ce8fa6259e6004481b829660dd2365f8503153047b3d9132393180e2591f0a </code></pre> <p>On the peer side, I get the following error:</p> <pre><code>2018-08-28 12:55:10.122 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 04e[0m Error creating new Smart Contract: error trying to connect to local peer: context deadline exceeded[31m2018-08-28 12:55:10.121 UTC [shim] userChaincodeStreamGetter -&gt; ERRO 001[0m context deadline exceeded 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 04f[0m error trying to connect to local peer 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 050[0m github.com/hyperledger/fabric/core/chaincode/shim.userChaincodeStreamGetter 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 051[0m /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:111 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 052[0m github.com/hyperledger/fabric/core/chaincode/shim.Start 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 053[0m /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:150 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 054[0m main.main 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 055[0m /chaincode/input/src/github.com/chaincode/fabcar/go/fabcar.go:200 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 056[0m runtime.main 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 057[0m /opt/go/src/runtime/proc.go:198 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 058[0m runtime.goexit 2018-08-28 12:55:10.123 UTC [dev-peer0.example.com-mycc-1.0] func2 -&gt; INFO 059[0m /opt/go/src/runtime/asm_amd64.s:2361 2018-08-28 12:55:10.166 UTC [dockercontroller] func2 -&gt; INFO 05a[0m Container dev-peer0.example.com-mycc-1.0 has closed its IO channel </code></pre> <p>Full Logs can be found here => <a href="https://hastebin.com/ibukeyakax.coffeescript" rel="nofollow noreferrer">https://hastebin.com/ibukeyakax.coffeescript</a></p> <p>Any suggestion/help would be really appreciated.</p>
<p>I found the solution. Adding the <code>CORE_PEER_ADDRESSAUTODETECT</code> variable to the peer's environment fixed the issue.</p> <p>So you should add</p> <pre><code>- name: CORE_PEER_ADDRESSAUTODETECT
  value: "true"
</code></pre> <p>to your env section.</p>
<p>I am trying to setup kubernetes on my bare metal cluster using <code>kubeadm</code>. But during initialization <code>kubeadm init</code> i get following error :</p> <pre><code>[root@server docker]# kubeadm init [init] using Kubernetes version: v1.11.2 [preflight] running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly [WARNING HTTPProxy]: Connection to "https://192.111.141.4" uses proxy "http://lab:[email protected]:3122". If that is not intended, adjust your proxy settings [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://lab:[email protected]:3122". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration I0827 16:33:00.426176 34482 kernel_validator.go:81] Validating kernel version I0827 16:33:00.426374 34482 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03 [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [server.test.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.111.141.4] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [server.test.com localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [server.test.com localhost] and IPs [192.111.141.4 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) - No internet connection is available so the kubelet cannot pull or find the following control plane images: - k8s.gcr.io/kube-apiserver-amd64:v1.11.2 - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2 - k8s.gcr.io/kube-scheduler-amd64:v1.11.2 - k8s.gcr.io/etcd-amd64:3.2.18 - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images are downloaded locally and cached. If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' couldn't initialize a Kubernetes cluster </code></pre> <p>Preflight images are also present on my system but still I get this error. After <code>[init] this might take a minute or longer if the control plane images have to be pulled</code> this statement, kubeadm waits about 5-10 minutes before giving this error. What is cause of this error?</p>
<p>The error was due to the firewall being enabled on my system. I had to stop the firewall with the command below (alternatively, keep firewalld running and open ports 6443 and 10250, as the preflight warning suggests):</p> <pre><code>systemctl stop firewalld
</code></pre>
<p>I'm in the process of migration of our application from single instance Docker-compose configuration to Kubernetes. I currently have the following example NGINX configuration, running as a reverse proxy of my application:</p> <pre><code>server { server_name example.com; ssl_certificate /etc/nginx/certs/${CERT_NAME}; ssl_certificate_key /etc/nginx/certs/${KEY_NAME}; listen 443 ssl; keepalive_timeout 70; access_log /var/log/nginx/access.log mtail; ssl_protocols xxxxxx ssl_ciphers xxxxxx ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; rewrite_log on; resolver 127.0.0.11 ipv6=off; location /push/ { auth_basic "Restricted"; auth_basic_user_file /etc/nginx/htpasswd; rewrite /push/(.*) /index.php/$1 break; proxy_pass pushinterface:3080; } location /flights/ { rewrite /flights/(.*) /$1 break; proxy_pass flightstats:3090; } location /api/ { proxy_pass $api; } location /grafana/ { access_log off; log_not_found off; proxy_pass http://grafana:3000; rewrite ^/grafana/(.*) /$1 break; } </code></pre> <p>}</p> <p>My initial plans for the reverse proxy part was implementing an ingress with NGINX ingress controller, but I saw that my configuration can be created as Ingress only with NGINX Plus. That's why I decided to try with Traefik, but I'm not sure if it's still possible to have different rewrites of the path for each service. </p> <p>I tried the following Ingress configuration, but it seems it's not working:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: example-traefik annotations: kubernetes.io/ingress.class: traefik traefik.frontend.rule.type: ReplacePathRegex spec: rules: - host: example.com http: paths: - backend: serviceName: pushinterface servicePort: 80 path: /push/(.*) /index/$1 - backend: serviceName: flights servicePort: 80 path: /flights/(.*) /$1 - backend: serviceName: api servicePort: 80 path: /api - backend: serviceName: grafana servicePort: 80 path: /grafana/(.*) /$1 </code></pre> <p>I will appreciate any help for solving this task</p>
<p>After several hours of unsuccessful attempts to solve my issue, I did it with Nginx ingress controller and it works great! Here's the ingress configuration:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite /push/(.*) /index/$1 break;
      rewrite /flights/(.*) /$1 break;
      rewrite /grafana/(.*) /$1 break;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: pushinterface
          servicePort: 80
        path: /push
      - backend:
          serviceName: flights
          servicePort: 80
        path: /flights
      - backend:
          serviceName: api
          servicePort: 80
        path: /api
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /grafana
</code></pre> <p>Thanks to everyone for the answers! :)</p>
<p>I am trying to setup kubernetes on my bare metal cluster using <code>kubeadm</code>. But during initialization <code>kubeadm init</code> i get following error :</p> <pre><code>[root@server docker]# kubeadm init [init] using Kubernetes version: v1.11.2 [preflight] running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly [WARNING HTTPProxy]: Connection to "https://192.111.141.4" uses proxy "http://lab:[email protected]:3122". If that is not intended, adjust your proxy settings [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://lab:[email protected]:3122". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration I0827 16:33:00.426176 34482 kernel_validator.go:81] Validating kernel version I0827 16:33:00.426374 34482 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03 [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [server.test.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.111.141.4] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [server.test.com localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [server.test.com localhost] and IPs [192.111.141.4 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) - No internet connection is available so the kubelet cannot pull or find the following control plane images: - k8s.gcr.io/kube-apiserver-amd64:v1.11.2 - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2 - k8s.gcr.io/kube-scheduler-amd64:v1.11.2 - k8s.gcr.io/etcd-amd64:3.2.18 - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images are downloaded locally and cached. If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' couldn't initialize a Kubernetes cluster </code></pre> <p>Preflight images are also present on my system but still I get this error. After <code>[init] this might take a minute or longer if the control plane images have to be pulled</code> this statement, kubeadm waits about 5-10 minutes before giving this error. What is cause of this error?</p>
<p>Try to restart the kubelet:</p> <pre><code>systemctl restart kubelet
</code></pre> <p>Then check if the kubelet is active and running:</p> <pre><code>systemctl status kubelet
</code></pre> <p>Check the kubelet logs:</p> <pre><code>journalctl -xeu kubelet
</code></pre> <p>If restarting the kubelet does not help, you can try to re-install it; it is a separate package:</p> <ul> <li><code>dnf reinstall kubelet</code> on Fedora</li> <li><code>yum reinstall kubelet</code> on CentOS/RedHat</li> <li><code>apt-get purge kubelet &amp;&amp; apt-get install kubelet</code> on Debian/Ubuntu</li> </ul> <p>You can also check whether it is possible to pull the images mentioned in the error message, e.g.:</p> <pre><code>docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.2
</code></pre>
<p>The following deployment file is working if I'm uploading it from my local machine.</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: api namespace: app spec: replicas: 2 selector: matchLabels: run: api template: metadata: labels: run: api spec: containers: - name: api image: gcr.io/myproject/api:1535462260754 ports: - containerPort: 8080 readinessProbe: httpGet: path: /_ah/health port: 8080 initialDelaySeconds: 10 periodSeconds: 5 </code></pre> <p>The same one is on remote Compute Engine machine which running Jenkins. On this machine, with ssh I'm also able to apply this config. Under the Jenkins shell execute it's always throws </p> <pre><code>error: unable to recognize "./dist/cluster/api.deployment.yaml": no matches for kind "Deployment" in version "apps/v1" </code></pre> <p>I tried to change <code>apiVersion</code> to <code>apps/v1beta1</code> and to <code>extensions/v1beta1</code> as well. Don't know what to try else.</p> <p><strong>Update 1</strong></p> <p>kubectl version on Compute Engine:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff0 88eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Pla tform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.5", GitCommit:"9b635efce81582e1da13b3 5a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-02T23:42:40Z", GoVersion:"go1.9.3b4", Compiler:"gc ", Platform:"linux/amd64"} </code></pre> <p><strong>Update 2</strong></p> <p>Run inside Jenkins job shown this.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Error from server (Forbidden): &lt;html&gt;&lt;head&gt;&lt;meta http-equiv='refresh' content='1;url=/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s'/&gt;&lt;script&gt;window.location.replace('/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s');&lt;/script&gt;&lt;/head&gt;&lt;body style='background-color:white; color:white;'&gt; Authentication required &lt;!-- You are authenticated as: anonymous Groups that you are in: Permission you need to have (but didn't): hudson.model.Hudson.Read ... which is implied by: hudson.security.Permission.GenericRead ... which is implied by: hudson.model.Hudson.Administer --&gt; &lt;/body&gt;&lt;/html&gt; </code></pre>
<p>Probably the <code>kubectl</code> version in your Jenkins server or agent is old. Try running <code>kubectl version</code> from the Jenkins job to check for mismatches.</p>
<p>I have my ssh private key (/home/user/.ssh/id_rsa) as a volume mounted secret in my container. Kubernetes seems to mount it with uid 0. However, my app runs as a specific user, and therefore can't access the ssh private key whose permission must be 600 at min. How can I change the ownership of my private key to reflect that of a specific user?</p> <p>thanks.</p>
<p>In Linux, usernames are mapped to a user id, which can be seen with the command <code>id -u someusername</code>.</p> <p>SSH generally requires that your private key be owned by the user running SSH and hidden from others (mode <code>600</code>).</p> <p>Therefore, I highly recommend you copy your key instead of mounting it directly, unless your container user has the same user id as the key's owner.</p> <hr> <p>If you are using a Linux container, you can run the command inside the container to get the exact user id, and then chown your files with the user id instead of a user name.</p> <p><code>kubectl exec -it mypod bash</code> (or <code>sh</code> if bash doesn't work), then <code>$ id -u someuser</code></p> <p>OR</p> <p><code>kubectl exec -it mypod id -u</code> if your container has one user which started the main process</p> <p>THEN</p> <p>Copy your key file so you can chown it without interfering with your own ability to ssh:</p> <pre><code>mkdir -p /data/secrets/myapp
cp /home/user/.ssh/id_rsa /data/secrets/myapp/id_rsa
chown $MYAPPUSERID:$MYAPPUSERID /data/secrets/myapp/id_rsa
chmod 600 /data/secrets/myapp/id_rsa
</code></pre> <p>Because the host OS might have already mapped this user id, it may <em>seem</em> that your files are owned by another arbitrary user, but what ultimately matters is the user id of the owner/group.</p>
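<p>If you would rather not do the copy by hand, the same copy-and-chown step can be automated with an init container that stages the key from the mounted Secret onto an <code>emptyDir</code> shared with the app container. A sketch, where the Secret name, the user id (1000) and the paths are assumptions to adapt to your setup:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: ssh-key          # the Secret holding id_rsa (assumed name)
      secret:
        secretName: my-ssh-key
    - name: ssh-staging      # writable copy the app will actually use
      emptyDir: {}
  initContainers:
    - name: fix-key-perms
      image: busybox
      command:
        - sh
        - -c
        - cp /keys/id_rsa /staging/id_rsa &amp;&amp; chown 1000:1000 /staging/id_rsa &amp;&amp; chmod 600 /staging/id_rsa
      volumeMounts:
        - name: ssh-key
          mountPath: /keys
        - name: ssh-staging
          mountPath: /staging
  containers:
    - name: myapp
      image: myapp:latest    # placeholder image
      volumeMounts:
        - name: ssh-staging
          mountPath: /home/user/.ssh
</code></pre>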