<p>I successfully connected to the <strong>Kubernetes dashboard</strong> and I can see all of my <code>deployments</code>, <code>statefulsets</code>, <code>pods</code>, etc. But the graphs that show the amount of CPU and memory used by pods do not appear.</p> <p>All the pods:</p> <pre><code>kube-system coredns-576cbf47c7-cj8qv 1/1 Running 33 67d kube-system coredns-576cbf47c7-qh9hm 1/1 Running 34 67d kube-system etcd-master 1/1 Running 15 67d kube-system heapster-684777c4cb-qt6f5 1/1 Running 0 134m kube-system kube-apiserver-master 1/1 Running 23 67d kube-system kube-controller-manager-master 1/1 Running 15 67d kube-system kube-proxy-bs5k9 1/1 Running 13 67d kube-system kube-proxy-fjp8b 1/1 Running 13 67d kube-system kube-scheduler-master 1/1 Running 15 67d kube-system kubernetes-dashboard-77fd78f978-cnhsc 1/1 Running 0 71m kube-system metrics-server-5cbbc84f8c-vz77c 1/1 Running 0 71m kube-system monitoring-influxdb-5c5bf4949d-jqr9d 1/1 Running 0 133m kube-system weave-net-fl972 2/2 Running 77 67d kube-system weave-net-gh96b 2/2 Running 34 67d </code></pre> <p>Here are the logs from the dashboard pod:</p> <pre><code>2018/12/16 08:43:54 Starting overwatch 2018/12/16 08:43:54 Using in-cluster config to connect to apiserver 2018/12/16 08:43:54 Using service account token for csrf signing 2018/12/16 08:43:54 No request provided. Skipping authorization 2018/12/16 08:43:54 Successful initial request to the apiserver, version: v1.12.1 2018/12/16 08:43:54 Generating JWE encryption key 2018/12/16 08:43:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting 2018/12/16 08:43:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system 2018/12/16 08:43:55 Initializing JWE encryption key from synchronized object 2018/12/16 08:43:55 Creating in-cluster Heapster client 2018/12/16 08:43:55 Successful request to heapster 2018/12/16 08:43:55 Auto-generating certificates 2018/12/16 08:43:55 Successfully created certificates 2018/12/16 08:43:55 Serving securely on HTTPS port: 8443 2018/12/16 08:44:19 Getting application global configuration 2018/12/16 08:44:19 Application configuration {"serverTime":1544949859551} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/systembanner request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/rbac/status request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 
08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.32.0.1:53200: { contents hidden } 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Incoming HTTP/2.0 GET /api/v1/overview/default?filterBy=&amp;itemsPerPage=10&amp;name=&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.32.0.1:53200: {} 2018/12/16 08:44:20 Getting config category 2018/12/16 08:44:20 Getting discovery and load balancing category 2018/12/16 08:44:20 Getting lists of all workloads 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 No metric client provided. Skipping metrics. 2018/12/16 08:44:20 Getting pod metrics 2018/12/16 08:44:20 [2018-12-16T08:44:20Z] Outcoming response to 10.32.0.1:53200 with 200 status code 2018/12/16 08:44:24 Getting application global configuration 2018/12/16 08:44:24 Application configuration {"serverTime":1544949864040} </code></pre> <p>Which says:</p> <pre><code>Creating in-cluster Heapster client Successful request to heapster </code></pre> <p>So Heapster is correctly connected to the Kubernetes dashboard. I use <code>kubeadm version=v1.12.1</code>.</p> <p>Also, when I run <code>kubectl top node</code> on the master node to get the CPU and memory usage, it shows the amounts that are used. But when I try to access Heapster via the terminal like:</p> <pre><code>curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/ </code></pre> <p>it returns:</p> <pre><code>curl: (7) Failed to connect to 10.40.0.63 port 80: Connection refused </code></pre> <p>The metrics-server pod's logs:</p> <pre><code>I1216 12:05:24.783577 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) [restful] 2018/12/16 12:05:25 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi [restful] 2018/12/16 12:05:25 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/ I1216 12:05:25.802972 1 serve.go:96] Serving securely on [::]:443 </code></pre> <p>Any idea?</p>
<p>Heapster has been retired and you may need to modify your heapster installation by following the steps <a href="https://brookbach.com/2018/10/29/Heapster-on-Kubernetes-1.11.3.html" rel="nofollow noreferrer">here</a>.</p>
<p>Is there a way to deploy a replica set in Kubernetes on unique nodes?</p> <p>All the documentation I can find on Kubernetes NodeSelectors, (anti-)affinity, etc., seems to relate to specifying a particular node you do or don't want the pods to be on. I don't mind which nodes my pods are on, I just don't want two pods from a deployment on the same one-- I want to spread them out.</p> <p>It seems like a simple enough thing to do-- in Mesos you can apply a constraint like "HOSTNAME: unique" to achieve it-- but I can't find the Kubernetes equivalent. Can anyone help, please?</p>
<p>In contrast to the first answer described in the comments below your question, I'd say the right approach is to define <code>pod anti-affinity</code> as described <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="nofollow noreferrer">in the docs</a>. More precisely:</p> <blockquote> <p>The rules are of the form “this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y”.</p> </blockquote> <p>Feel free to share your scenario details so a concrete solution can be proposed. Of course, if you want to run your workload on the compute (worker) nodes exclusively you'd choose a <code>Deployment</code>, whereas if it should run on compute plus control-plane nodes you'd choose a <code>DaemonSet</code>.</p>
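<p>For illustration, here is a minimal sketch (not from the original answer; the names <code>myapp</code> and <code>myapp:latest</code> are placeholders) of a Deployment whose replicas are spread onto distinct nodes via <code>podAntiAffinity</code> keyed on the node hostname:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never schedule two pods labelled app=myapp on the same hostname
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: myapp:latest   # placeholder image
</code></pre> <p>Switching <code>requiredDuringSchedulingIgnoredDuringExecution</code> to <code>preferredDuringSchedulingIgnoredDuringExecution</code> turns this into a soft spreading preference instead of a hard constraint.</p>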
<p>Where does a file like <a href="https://github.com/kubernetes/ingress-nginx/blob/nginx-0.20.0/deploy/default-backend.yaml" rel="nofollow noreferrer">this</a> pull images like <code>image: k8s.gcr.io/defaultbackend-amd64:1.5</code> from, and where can I browse them?</p> <p>The next release of <code>ingress-nginx</code> uses 1.15.6, which fixes CVE-2018-16843 and CVE-2018-16844. I want to see whether the registry that <code>k8s.gcr.io/defaultbackend-amd64:1.5</code> is pulled from contains images with that Nginx version.</p> <p>I couldn't find the answer in the <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">docs</a>. I am not familiar with the default repos for Kubernetes. How would I configure a yaml file to pull from a specific registry with a URL?</p>
<p>k8s.gcr.io is a container registry service running on Google Cloud. In order to list the publicly available images or to find details about these images, please see one of the answers to an <a href="https://stackoverflow.com/questions/35153902/find-the-list-of-google-container-registry-public-images">older and similar question</a>.</p> <blockquote> <p>The link is <a href="https://console.cloud.google.com/gcr/images/google-containers/GLOBAL" rel="nofollow noreferrer">https://console.cloud.google.com/gcr/images/google-containers/GLOBAL</a>. I'm not sure why it's so difficult to find.</p> </blockquote> <p>In order to pull an image from a specific repository, just follow this syntax in your manifests:</p> <pre><code>image: &lt;your-registry&gt;/&lt;your-project-path&gt;/&lt;your-container&gt;:&lt;your-tag&gt; </code></pre> <p>e.g.:</p> <pre><code>image: www.myk8srepo.com/testing/nginx/defaultbackend-amd64:1.5.6 </code></pre>
<p>Currently, our CI/CD environment is cloud-based on Kubernetes. Kubernetes cloud providers recently removed the Docker daemon due to performance advantages. For example, Google Kubernetes Engine and IBM Cloud Kubernetes only feature a containerd runtime, which can <strong>run</strong> but not <strong>build</strong> container images.</p> <p>Many tools like <a href="https://github.com/GoogleContainerTools/kaniko" rel="nofollow noreferrer">kaniko</a> or <a href="https://github.com/GoogleContainerTools/jib" rel="nofollow noreferrer">jib</a> fill this gap. They provide a way to build Docker images very effectively without requiring a Docker daemon.</p> <p><strong>Here comes the problem:</strong></p> <ol> <li>Image "registry-x.com/repo/app1:v1-snapshot" gets built by Jib in CI and pushed to registry X.</li> <li>Image "registry-x.com/repo/app1:v1-snapshot" is then at some point deployed and tested. If the test is successful it needs to be delivered to registry Y and also marked as a stable release in registry X.</li> </ol> <p>So the image "registry-x.com/repo/app1:v1-snapshot" needs to be retagged from "registry-x.com/repo/app1:v1-snapshot" to "registry-x.com/web/app1:v1-release", additionally tagged as "registry-y.com/web/app1:v1-release", and both need to be pushed.</p> <p>Outcome: The snapshot image from development is available in both registries with a release tag.</p> <p>So how do I do these three simple operations (pull, tag, push) without a Docker daemon? It seems kaniko and Jib are not the way.</p> <p>I don't want to order a VM only to get a Docker daemon to do these operations. I also know that Jib is capable of pushing to multiple registries, but it is not able to just rename images.</p> <p>This also relates to this question from last year: <a href="https://stackoverflow.com/questions/44974656/clone-an-image-from-a-docker-registry-to-another">Clone an image from a docker registry to another</a></p> <p>Regards, Leon</p>
<p>Docker Registry provides an <a href="https://docs.docker.com/registry/spec/api/" rel="nofollow noreferrer">HTTP API</a>, so you could use those methods to pull and push images. </p> <p>There are several libraries providing a higher abstraction layer over it (<a href="https://github.com/heroku/docker-registry-client" rel="nofollow noreferrer">docker-registry-client in Go</a>, <a href="https://www.npmjs.com/package/docker-registry-client" rel="nofollow noreferrer">docker-registry-client in Js</a>, etc).</p> <p>In any case, the flow will be: </p> <ul> <li><p><a href="https://docs.docker.com/registry/spec/api/#pulling-an-image" rel="nofollow noreferrer">Pulling an image</a> involves:</p> <ul> <li><a href="https://docs.docker.com/registry/spec/api/#manifest" rel="nofollow noreferrer">Retrieving the manifest</a> from <code>registry-x.com/repo/app1:v1-snapshot</code>.</li> <li><a href="https://docs.docker.com/registry/spec/api/#blob" rel="nofollow noreferrer">Downloading</a> the layers (blobs) named in the manifest.</li> </ul></li> <li><p><a href="https://docs.docker.com/registry/spec/api/#pushing-an-image" rel="nofollow noreferrer">Pushing an image</a> involves:</p> <ul> <li>Uploading all layers you previously downloaded</li> <li>Modifying the original manifest with your new version</li> <li>Uploading the new manifest</li> </ul></li> </ul>
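<p>As a rough, hedged sketch of that flow with plain <code>curl</code> (the endpoints follow the Registry V2 API; the repository names come from the question, <code>&lt;layer-digest&gt;</code> is a placeholder, and authentication/token handling is omitted):</p> <pre><code># 1. Fetch the manifest of the source tag (schema v2)
curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry-x.com/v2/repo/app1/manifests/v1-snapshot -o manifest.json

# 2. Download each layer (blob) referenced in the manifest
curl -L https://registry-x.com/v2/repo/app1/blobs/sha256:&lt;layer-digest&gt; -o layer.tar.gz

# 3. Upload each blob to the target repository (POST /v2/&lt;name&gt;/blobs/uploads/
#    followed by a PUT with the digest), then push the manifest under the new tag:
curl -X PUT -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
  --data-binary @manifest.json \
  https://registry-y.com/v2/web/app1/manifests/v1-release
</code></pre> <p>If adding another CLI is acceptable, tools such as <code>skopeo copy</code> wrap exactly this daemonless pull/retag/push sequence between two registries.</p>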
<p>So I'm working on a project that involves managing many postgres instances inside of a k8s cluster. Each instance is managed using a <code>Stateful Set</code> with a <code>Service</code> for network communication. I need to expose each <code>Service</code> to the public internet via DNS on port 5432. </p> <p>The most natural approach here is to use the k8s <code>Load Balancer</code> resource and something like <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external dns</a> to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html" rel="noreferrer">idle connection timeout</a>. AWS ELBs have a maximum idle timeout limit of 4000 seconds. There are many long running analytical queries/transactions that easily exceed that amount of time, not to mention potentially long-running operations like <code>pg_restore</code>. </p> <p>So I need some kind of solution that allows me to work around the limitations of Load Balancers. <code>Node IPs</code> are out of the question since I will need port <code>5432</code> exposed for every single postgres instance in the cluster. <code>Ingress</code> also seems less than ideal since it's a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some configmap chicanery, but I'm a little worried about committing to hacks like that for a large project. <code>ExternalName</code> is intriguing but even if I can find better documentation on it I think it may end up having similar limitations as <code>NodeIP</code>. </p> <p>Any suggestions would be greatly appreciated. </p>
<p>The Kubernetes ingress controller implementation <a href="https://github.com/heptio/contour" rel="nofollow noreferrer">Contour</a> from Heptio can <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#tcp-proxying" rel="nofollow noreferrer">proxy <code>TCP</code> streams</a> when they are encapsulated in <code>TLS</code>. This is required to use the <code>SNI</code> handshake message to direct the connection to the correct backend service.</p> <p>Contour can handle <code>ingresses</code>, but introduces additionally a new ingress API <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md" rel="nofollow noreferrer">IngressRoute</a> which is implemented via a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow noreferrer"><code>CRD</code></a>. The TLS connection can be <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#tls-passthrough-to-the-backend-service" rel="nofollow noreferrer">terminated at your backend</a> service. An <code>IngressRoute</code> might look like this:</p> <pre><code>apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: postgres namespace: postgres-one spec: virtualhost: fqdn: postgres-one.example.com tls: passthrough: true tcpproxy: services: - name: postgres port: 5432 routes: - match: / services: - name: dummy port: 80 </code></pre>
<p>I have a private GKE cluster on GCP which I access via the Google Cloud Shell on GCP (not the SDK). I can add my external shell IP to the --master-authorized-networks, but when I log out and log back in this IP address changes, so I would have to do this (and delete the old one) every time I want to make changes to my private cluster via the shell.</p> <p>How can I access my private cluster via the shell without updating the external IP address in the --master-authorized-networks every time?</p> <p>Any help is greatly appreciated, thanks.</p>
<p>Google Cloud Shell is just a VM hosted on a "temporary server" which runs your commands (taking them from the browser). After some time of being inactive, the public IP assigned to that "VM" will change. That's why in the majority of guides using the Shell, the first steps are to authenticate your project.</p> <p>A workaround would be to create a free instance and whitelist the IP of that instance, or just use the SDK. </p>
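<p>If you do go the static-IP route, the authorized networks list can be updated from the SDK with something along these lines (a sketch only; the cluster name, zone and CIDR are placeholders):</p> <pre><code>gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32
</code></pre>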
<p>I installed istio with the kubernetes and helm instructions and annotated a namespace to automatically inject the istio proxy, but it does not appear to be working well. The proxy tries to start but continually crashes with a segfault. I'm using istio 1.0.6. This is the log output of the proxy.</p> <pre><code>[2019-02-27 21:48:50.892][78][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:223] gRPC config for type.googleapis.com/envoy.api.v2.Listener update rejected: Error adding/updating listener 10.16.11.206_8293: unable to read file: /etc/certs/root-cert.pem [2019-02-27 21:48:50.892][78][warning][config] bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener 10.16.11.206_8293: unable to read file: /etc/certs/root-cert.pem [2019-02-27 21:48:50.892][78][info][config] external/envoy/source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers [2019-02-27 21:48:50.902][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:125] Caught Segmentation fault, suspect faulting address 0x0 [2019-02-27 21:48:50.902][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:94] Backtrace thr&lt;83&gt; obj&lt;/usr/local/bin/envoy&gt; (If unsymbolized, use tools/stack_decode.py): [2019-02-27 21:48:50.903][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #0 0x487d8d google::protobuf::internal::ArenaStringPtr::CreateInstanceNoArena() [2019-02-27 21:48:50.904][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #1 0x4be9c4 Envoy::Utils::GrpcClientFactoryForCluster() [2019-02-27 21:48:50.906][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #2 0x4b8389 Envoy::Tcp::Mixer::Control::Control() [2019-02-27 21:48:50.907][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #3 0x4ba7c5 std::_Function_handler&lt;&gt;::_M_invoke() [2019-02-27 21:48:50.908][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #4 0x792a15 std::_Function_handler&lt;&gt;::_M_invoke() [2019-02-27 21:48:50.909][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #5 0x7c828b Envoy::Event::DispatcherImpl::runPostCallbacks() [2019-02-27 21:48:50.910][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #6 0x7c836c Envoy::Event::DispatcherImpl::run() [2019-02-27 21:48:50.912][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #7 0x7c4c15 Envoy::Server::WorkerImpl::threadRoutine() [2019-02-27 21:48:50.913][83][critical][backtrace] 
bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #8 0xb354ad Envoy::Thread::Thread::Thread()::{lambda()#1}::_FUN() [2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:104] thr&lt;83&gt; obj&lt;/lib/x86_64-linux-gnu/libpthread.so.0&gt; [2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr&lt;83&gt; #9 0x7f2701a296b9 start_thread [2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:104] thr&lt;83&gt; obj&lt;/lib/x86_64-linux-gnu/libc.so.6&gt; [2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:117] thr&lt;83&gt; #10 0x7f270145641c (unknown) [2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:121] end backtrace thread 83 2019-02-27T21:48:50.923768Z warn Epoch 0 terminated with an error: signal: segmentation fault 2019-02-27T21:48:50.923870Z warn Aborted all epochs 2019-02-27T21:48:50.923924Z info Epoch 0: set retry delay to 25.6s, budget to 2 </code></pre>
<p>It appears that the issue was that the <code>istio.default</code> secret was missing from the namespace that my pods were running in. I would assume that the Istio infrastructure should create it, but it didn't appear to. Copying that secret from the <code>istio-system</code> namespace to my own seems to have resolved the issue.</p>
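<p>A hedged sketch of that copy step (the target namespace <code>my-namespace</code> is a placeholder); it simply re-creates the <code>istio.default</code> secret from <code>istio-system</code> in the target namespace:</p> <pre><code># Export the secret, rewrite its namespace, and re-create it.
# You may need to strip read-only metadata fields (resourceVersion, uid,
# creationTimestamp) from the exported YAML before applying it.
kubectl get secret istio.default -n istio-system -o yaml \
  | sed 's/namespace: istio-system/namespace: my-namespace/' \
  | kubectl apply -f -
</code></pre>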
<p>I have an application that is ~40 Docker containers varying from NoSQL, RDBMS, C applications, Go apps, Python and so on, orchestrated using <code>Kubernetes</code>. It's all running on <code>GCP</code>, with a GLB (Load Balancer) at the frontend.</p> <p>Now if I create a lot of replicas and give a lot of resources to these applications, then everything runs properly. But if I give just enough resources, then the frontend sometimes loads very slowly, and the web application becomes unresponsive for some time and then mysteriously comes back up again. </p> <p>All this happens with no pod evictions or restarts.</p> <p>When this happens I can see that CPU/memory are at 50%, so resources are not exhausted.</p> <p>How do I go about debugging the reason for the slowness? How do I calibrate how much of each resource every application requires?</p>
<p>You haven't mentioned any monitoring tools implemented in the Kubernetes cluster that you can use to check overall cluster performance or application resource usage.</p> <p>Monitoring is built on metrics, and Kubernetes offers the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#resource-metrics-pipeline" rel="nofollow noreferrer">Resource metrics pipeline</a> gathered by <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">metrics-server</a>, or a <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipelines" rel="nofollow noreferrer">Full metrics pipeline</a> for some more advanced metrics; <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> is a good example of the latter approach.</p> <p>For GCP-related environments you can use <a href="https://cloud.google.com/logging/" rel="nofollow noreferrer">Stackdriver logging</a> with lots of monitoring features and an appropriate group of <a href="https://cloud.google.com/monitoring/api/metrics" rel="nofollow noreferrer">metrics</a>.</p> <p>Therefore, I would start by checking monitoring metrics for the underlying Kubernetes resources in order to collect measurements and take the necessary actions to improve overall cluster performance. </p>
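<p>As a quick starting point, once metrics-server (or Heapster) is running you can query the resource metrics pipeline directly; this shows whether a particular pod, rather than the node as a whole, is hitting its limits (<code>&lt;node-name&gt;</code> is a placeholder):</p> <pre><code>kubectl top nodes
kubectl top pods --all-namespaces
kubectl describe node &lt;node-name&gt;   # shows per-node requests/limits vs. allocatable
</code></pre>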
<p>I want to set an environment variable (I'll just name it <code>ENV_VAR_VALUE</code>) for a container during deployment through Kubernetes.</p> <p>I have the following in the pod yaml configuration:</p> <pre><code>... ... spec: containers: - name: appname-service image: path/to/registry/image-name ports: - containerPort: 1234 env: - name: "ENV_VAR_VALUE" value: "some.important.value" ... ... </code></pre> <p>The container needs to use the <code>ENV_VAR_VALUE</code>'s value.<br> But in the container's application logs, its value always comes out empty.<br> So, I tried checking its value from inside the container:</p> <pre><code>$ kubectl exec -it appname-service bash root@appname-service:/# echo $ENV_VAR_VALUE some.important.value root@appname-service:/# </code></pre> <p>So, the value was successfully set.</p> <p>I imagine it's because the environment variables defined from Kubernetes are set <strong>after</strong> the container is already initialized.</p> <p>So, I tried overriding the container's CMD from the pod yaml configuration:</p> <pre><code>... ... spec: containers: - name: appname-service image: path/to/registry/image-name ports: - containerPort: 1234 env: - name: "ENV_VAR_VALUE" value: "some.important.value" command: ["/bin/bash"] args: ["-c", "application-command"] ... ... </code></pre> <p>Even still, the value of <code>ENV_VAR_VALUE</code> is still empty during the execution of the command.<br> Thankfully, the application has a restart function<br> -- because when I restart the app, <code>ENV_VAR_VALUE</code> gets used successfully.<br> -- I can at least do some other tests in the meantime.</p> <h3>So, the question is...</h3> <blockquote> <p>How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?</p> </blockquote> <p>As requested, here is the Dockerfile.<br> I apologize for the abstraction...</p> <pre><code>FROM ubuntu:18.04 RUN apt-get update &amp;&amp; apt-get install -y some-dependencies COPY application-script.sh application-script.sh RUN ./application-script.sh # ENV_VAR_VALUE is set in this file which is populated when application-command is executed COPY app-config.conf /etc/app/app-config.conf CMD ["/bin/bash", "-c", "application-command"] </code></pre>
<p>You can also try running two commands in the Kubernetes Pod spec:</p> <ol> <li>(read in the env vars): "source /env/required_envs.env" (which would come via a <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">secret</a> mounted as a volume)</li> <li>(main command): "application-command"<br></li> </ol> <p>Like this:</p> <hr> <pre><code>containers: - name: appname-service image: path/to/registry/image-name ports: - containerPort: 1234 command: ["/bin/sh", "-c"] args: - source /env/db_cred.env; application-command; </code></pre>
<p>I am trying to get a list of all non-READY containers in all pods to debug a networking issue in my cluster.</p> <p>Is it possible to use kubectl to get a clean list of all containers in all pods with their status (READY/..)?</p> <p>I am currently using</p> <pre><code>$ kubectl get pods </code></pre> <p>However, the output can be huge and it can be difficult to know which containers are READY and which ones have issues.</p> <p>Thanks.</p>
<p><code>kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .status.containerStatuses[*]}{.ready}{", "}{end}{end}'</code></p> <p>Adapted from this doc: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-by-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-by-pod</a></p> <p><strong>Edit</strong> to describe what the jsonpath is doing:</p> <p>From what I understand of the jsonpath, <code>range</code> iterates over all of the <code>.items[*]</code> returned by getting the pods. <code>\n</code> is added to split the result into one pod per line; otherwise everything would end up on a single line. To see how the rest works, you should choose one of your pods and run: <code>kubectl get pod podname -o yaml</code><br> <code>.metadata.name</code> corresponds to</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: podname </code></pre> <p>Similarly, <code>.status.containerStatuses[*]</code> corresponds to the list of container statuses that should be towards the bottom. </p>
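<p>Since the question is specifically about non-ready containers, one rough way (assuming the output format produced above) is to filter the same jsonpath output for <code>false</code>:</p> <pre><code>kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .status.containerStatuses[*]}{.ready}{", "}{end}{end}' \
  | grep false
</code></pre>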
<p>Please explain the difference between <code>ResourceQuota</code> and <code>LimitRange</code> objects in Kubernetes.</p>
<p><code>LimitRange</code> and <code>ResourceQuota</code> are objects used to control resource usage by a Kubernetes cluster administrator.</p> <p><code>ResourceQuota</code> is for limiting the total resource consumption of a namespace, for example:</p> <pre><code>apiVersion: v1 kind: ResourceQuota metadata: name: object-counts spec: hard: configmaps: "10" persistentvolumeclaims: "4" replicationcontrollers: "20" secrets: "10" services: "10" </code></pre> <p><code>LimitRange</code> is for managing constraints at a pod and container level within the project.</p> <pre><code>apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Pod" max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" </code></pre> <p>An individual Pod or Container that requests resources outside of these <code>LimitRange</code> constraints will be rejected, whereas a <code>ResourceQuota</code> only applies to all of the namespace/project's objects in aggregate.</p>
<p>I get "pod has unbound immediate PersistentVolumeClaims", and I don't know why. I run minikube v0.34.1 on macOS. Here are the configs:</p> <p>es-pv.yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: elasticsearch spec: capacity: storage: 400Mi accessModes: - ReadWriteOnce hostPath: path: "/data/elasticsearch/" </code></pre> <p>es-statefulset.yaml</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: es-cluster spec: serviceName: elasticsearch replicas: 3 selector: matchLabels: app: elasticsearch template: metadata: labels: app: elasticsearch spec: containers: - name: elasticsearch image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3 resources: limits: cpu: 1000m requests: cpu: 100m ports: - containerPort: 9200 name: rest protocol: TCP - containerPort: 9300 name: inter-node protocol: TCP volumeMounts: - name: data mountPath: /usr/share/elasticsearch/data env: - name: cluster.name value: k8s-logs - name: node.name valueFrom: fieldRef: fieldPath: metadata.name - name: discovery.zen.ping.unicast.hosts value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch" - name: discovery.zen.minimum_master_nodes value: "2" - name: ES_JAVA_OPTS value: "-Xms256m -Xmx256m" initContainers: - name: fix-permissions image: busybox command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"] securityContext: privileged: true volumeMounts: - name: data mountPath: /usr/share/elasticsearch/data - name: increase-vm-max-map image: busybox command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext: privileged: true - name: increase-fd-ulimit image: busybox command: ["sh", "-c", "ulimit -n 65536"] securityContext: privileged: true volumeClaimTemplates: - metadata: name: data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "standard" resources: requests: storage: 100Mi </code></pre> <p>es-svc.yaml</p> <pre><code>kind: Service apiVersion: v1 metadata: name: elasticsearch labels: app: elasticsearch spec: selector: app: elasticsearch clusterIP: None ports: - port: 9200 name: rest - port: 9300 name: inter-node </code></pre>
<p>In order to make a volume accessible to many pods, the <strong>accessModes</strong> need to be <strong>"ReadWriteMany"</strong>. Also, if each pod wants to have its own directory, then <strong>subPath</strong> needs to be used. </p> <p>As the issue was resolved in the comment section by @Michael Böckling, here is further information: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">using-subpath</a></p> <pre><code>volumeMounts: - name: data mountPath: /usr/share/elasticsearch/data subPath: $(POD_NAME) </code></pre>
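<p>One caveat worth hedging: plain <code>subPath</code> does not expand environment variables; on newer Kubernetes versions that expansion is done by <code>subPathExpr</code>, with <code>POD_NAME</code> supplied through the downward API, roughly like this:</p> <pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name   # downward API: the pod's own name
volumeMounts:
- name: data
  mountPath: /usr/share/elasticsearch/data
  subPathExpr: $(POD_NAME)       # requires a Kubernetes version where subPathExpr is available
</code></pre>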
<p>I have GitLab cloud connected to a k8s cluster running on Google (GKE). The cluster was created via GitLab cloud.</p> <p>I want to customise the <code>config.toml</code> because I want to <em>fix</em> the cache on k8s as suggested in <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/1906" rel="noreferrer">this issue</a>.</p> <p>I found the <code>config.toml</code> configuration in the <code>runner-gitlab-runner</code> ConfigMap. I updated the ConfigMap to contain this <code>config.toml</code> setup:</p> <pre><code> config.toml: | concurrent = 4 check_interval = 3 log_level = "info" listen_address = '[::]:9252' [[runners]] executor = "kubernetes" cache_dir = "/tmp/gitlab/cache" [runners.kubernetes] memory_limit = "1Gi" [runners.kubernetes.node_selector] gitlab = "true" [[runners.kubernetes.volumes.host_path]] name = "gitlab-cache" mount_path = "/tmp/gitlab/cache" host_path = "/home/core/data/gitlab-runner/data" </code></pre> <p>To apply the changes I deleted the <code>runner-gitlab-runner-xxxx-xxx</code> pod so a new one gets created with the updated <code>config.toml</code>.</p> <p>However, when I look into the new pod, the <code>/home/gitlab-runner/.gitlab-runner/config.toml</code> now contains 2 <code>[[runners]]</code> sections:</p> <pre><code>listen_address = "[::]:9252" concurrent = 4 check_interval = 3 log_level = "info" [session_server] session_timeout = 1800 [[runners]] name = "" url = "" token = "" executor = "kubernetes" cache_dir = "/tmp/gitlab/cache" [runners.kubernetes] host = "" bearer_token_overwrite_allowed = false image = "" namespace = "" namespace_overwrite_allowed = "" privileged = false memory_limit = "1Gi" service_account_overwrite_allowed = "" pod_annotations_overwrite_allowed = "" [runners.kubernetes.node_selector] gitlab = "true" [runners.kubernetes.volumes] [[runners.kubernetes.volumes.host_path]] name = "gitlab-cache" mount_path = "/tmp/gitlab/cache" host_path = "/home/core/data/gitlab-runner/data" [[runners]] name = "runner-gitlab-runner-xxx-xxx" url = "https://gitlab.com/" token = "&lt;my-token&gt;" executor = "kubernetes" [runners.cache] [runners.cache.s3] [runners.cache.gcs] [runners.kubernetes] host = "" bearer_token_overwrite_allowed = false image = "ubuntu:16.04" namespace = "gitlab-managed-apps" namespace_overwrite_allowed = "" privileged = true service_account_overwrite_allowed = "" pod_annotations_overwrite_allowed = "" [runners.kubernetes.volumes] </code></pre> <p>The file <code>/scripts/config.toml</code> is the configuration as I created it in the ConfigMap. So I suspect the <code>/home/gitlab-runner/.gitlab-runner/config.toml</code> is somehow updated when registering the GitLab Runner with GitLab cloud.</p> <p>If changing the <code>config.toml</code> via the ConfigMap does not work, how should I change the configuration? I cannot find anything about this in GitLab or the GitLab documentation.</p>
<p>Inside the mapping you can try to append the volume and the extra configuration parameters:</p> <pre><code># Add docker volumes cat &gt;&gt; /home/gitlab-runner/.gitlab-runner/config.toml &lt;&lt; EOF [[runners.kubernetes.volumes.host_path]] name = &quot;var-run-docker-sock&quot; mount_path = &quot;/var/run/docker.sock&quot; EOF </code></pre> <p>I did the runner deployment using a Helm chart; I guess you did the same. In the following link you will find more information about the approach I mention: <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/2578" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/issues/2578</a></p> <p>If, after appending the config, your pod is not able to start, check the logs. I tested the appending approach and had some errors like &quot;Directory not Found&quot;, which turned out to be because I was appending in the wrong path; after fixing those issues, the runner works fine.</p>
<p>I'm in a bit of a pickle about how to get a file templated.</p> <p>I have a Secret template defined</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: "awx-secrets" type: Opaque data: confd_contents: &lt;value-is-an-entire-file&gt; </code></pre> <p>Now the file <code>credentials.py</code> that is supposed to be the value for the key <code>confd_contents</code> looks like</p> <pre><code>DATABASES = { 'default': { 'ATOMIC_REQUESTS': True, 'ENGINE': 'django.db.backends.postgresql', 'NAME': "{{ .Values.dbDatabaseName }}", 'USER': "{{ .Values.dbUser }}", 'PASSWORD': "{{ .Values.dbPassword }}", 'HOST': "{{ .Values.dbHostname }}", 'PORT': "{{ .Values.dbService.port }}", } } </code></pre> <p>As you can see, there are values in this file as well which are defined in my <code>values.yaml</code> file.</p> <p>Now I want to make sure that the file <code>credentials.py</code> is rendered with the right values and is then passed on to the Secret, and that the Secret is rendered correctly.</p> <p>I tried to define it as a File object by doing this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: "awx-secrets" type: Opaque data: confd_contents: {{ .Files.Get "files/credentials.py" }} </code></pre> <p>But this does not work. If I try to template it, I get the actual variable names:</p> <pre><code>helm template management apiVersion: v1 kind: Secret metadata: name: "awx-secrets" type: Opaque data: confd_contents: DATABASES = { 'default': { 'ATOMIC_REQUESTS': True, 'ENGINE': 'django.db.backends.postgresql', 'NAME': "{{ .Values.dbDatabaseName }}", 'USER': "{{ .Values.dbUser }}", 'PASSWORD': "{{ .Values.dbPassword }}", 'HOST': "{{ .Values.dbHostname }}", 'PORT': "{{ .Values.dbService.port }}", } } </code></pre> <p>I also tried to put the whole file in my Secret template. Something like</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: "awx-secrets" type: Opaque data: confd_contents: DATABASES = { 'default': { 'ATOMIC_REQUESTS': True, 'ENGINE': 'django.db.backends.postgresql', 'NAME': "{{ .Values.dbDatabaseName }}", 'USER': "{{ .Values.dbUser }}", 'PASSWORD': "{{ .Values.dbPassword }}", 'HOST': "{{ .Values.dbHostname }}", 'PORT': "{{ .Values.dbService.port }}", } } </code></pre> <p>When I run <code>helm template management</code>, it renders correctly, but when I do a <code>helm install</code>, I get the error:</p> <pre><code>error: error parsing test.yaml: error converting YAML to JSON: yaml: line 8: mapping values are not allowed in this context </code></pre> <p>Any idea what I'm doing wrong or what might help?</p>
<p>Solved this by </p> <pre><code>apiVersion: v1 kind: Secret metadata: name: "awx-secrets" type: Opaque data: confd_contents: {{ (tpl (.Files.Get "files/credentials.py") . | quote ) }} </code></pre>
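<p>One hedged note on top of that: values under a Secret's <code>data:</code> key are expected to be base64-encoded, so depending on your setup you may need either <code>b64enc</code> or the <code>stringData:</code> field instead, e.g.:</p> <pre><code>data:
  confd_contents: {{ tpl (.Files.Get "files/credentials.py") . | b64enc | quote }}
</code></pre> <p>or, letting the API server do the encoding:</p> <pre><code>stringData:
  confd_contents: {{ tpl (.Files.Get "files/credentials.py") . | quote }}
</code></pre>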
<p>I have a container running within Google Kubernetes Engine (GKE) which is trying to copy some data from a bucket to a mounted persistent disk:</p> <pre><code>gsutil -m rsync -r -d "gs://${DB_BUCKET}/db" /db </code></pre> <p>When the container runs, it fails with the following message:</p> <pre><code>AccessDeniedException: 403 [email protected] does not have storage.objects.list access to my-bucket-db-data </code></pre> <p>If I look at the service account, it does appear to have permissions to view storage buckets. I create and populate this bucket as part of my deployment process, if that makes any difference.</p> <p>What permissions do I need to grant, and how, to be able to sync the data from the bucket across?</p>
<p>You need to grant the required permission to <code>[email protected]</code>:</p> <p>Follow these steps:</p> <p>1) Open the <em>Permissions</em> tab of ${DB_BUCKET}</p> <p>2) Search for your service account in the search input field</p> <p>3) In the <em>Role(s)</em> column, find and assign the "Storage Object Viewer" role</p>
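<p>For reference, a hedged sketch of granting the same role from the command line (the service-account address is a placeholder for the one shown in your 403 error, and the bucket name should match yours):</p> <pre><code>gsutil iam ch \
  serviceAccount:my-sa@my-project.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://my-bucket-db-data
</code></pre>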
<p>I'm trying to </p> <p><code>kubectl create secret tls foo-secret --key /tls.key --cert /tls.crt</code></p> <p>using keys and certs I've made with Let's Encrypt. This process makes sense with self-signed certificates, but the files made by Let's Encrypt look like this:</p> <pre><code>cert.pem chain.pem fullchain.pem privkey.pem </code></pre> <p>I can convert those pem files, but I don't know whether <code>--key</code> wants a public key or a private key, and the only option here is <code>privkey.pem</code>. I assume cert is cert.</p> <p>I can convert <code>privkey.pem</code> with:</p> <p><code>openssl rsa -outform der -in privkey.pem -out private.key</code></p> <p>And <code>cert.pem</code> with:</p> <p><code>openssl x509 -outform der -in cert.pem -out cert.crt</code></p> <p>Is this the right process? Since I'll be using this secret for <a href="https://github.com/jcmoraisjr/ingress/blob/master/docs/examples/external-auth/dashboard-ingress.yaml" rel="nofollow noreferrer">ingress oauth</a> in place of <code>__INGRESS_SECRET__</code>, is this ingress supposed to have a private key? This ingress is acting as a TLS terminator for other things.</p>
<p>You are correct, you will need to provide your private key for the <code>tls.key</code> portion. However, it's good practice to automate the Let's Encrypt certificate generation process using <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">cert-manager</a>. Check out this <a href="https://itnext.io/automated-tls-with-cert-manager-and-letsencrypt-for-kubernetes-7daaa5e0cae4" rel="nofollow noreferrer">tutorial</a>. Doing so will automatically create the tls secret resource for you on the cluster.</p> <p>Your <strong><code>tls.key</code></strong> file is the private key and begins and ends like the following:</p> <pre><code>-----BEGIN RSA PRIVATE KEY----- ... [your private key] -----END RSA PRIVATE KEY----- </code></pre> <p>And your <strong><code>tls.crt</code></strong> is going to be the concatenation of <code>cert.pem</code> and <code>fullchain.pem</code>, and it will look like the following:</p> <pre><code>-----BEGIN CERTIFICATE----- ... [your cert content] -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... [your fullchain cert content] -----END CERTIFICATE----- </code></pre>
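<p>In practice, since <code>kubectl create secret tls</code> expects PEM-encoded files, a common approach is to skip the DER conversion from the question entirely and pass the Let's Encrypt files directly (the paths below are the usual certbot layout and are illustrative only):</p> <pre><code>kubectl create secret tls foo-secret \
  --key  /etc/letsencrypt/live/example.com/privkey.pem \
  --cert /etc/letsencrypt/live/example.com/fullchain.pem
</code></pre>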
<p>I am evaluating a migration of an application working with docker-compose to Kubernetes and came across two solutions: Kompose and compose-on-kubernetes.</p> <p>I'd like to know their differences in terms of functionality/ease of use in order to decide which one is better suited.</p>
<p>Both products provide a migration path from docker-compose to Kubernetes, but they do it in slightly different ways.</p> <ul> <li>Compose on Kubernetes runs within your Kubernetes cluster and allows you to deploy your compose setup unchanged on the Kubernetes cluster.</li> <li>Kompose translates your docker-compose files to a bunch of Kubernetes resources.</li> </ul> <p>Compose on Kubernetes is a good solution if you want to continue running docker-compose in parallel with deploying on Kubernetes, and so plan to keep maintaining the docker-compose format.</p> <p>If you're migrating completely to Kubernetes and don't plan to continue working with docker-compose, it's probably better to complete the migration using Kompose and use that as the starting point for maintaining the configuration directly as Kubernetes resources.</p>
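<p>For a feel of the Kompose route, the conversion is a single command that emits Kubernetes manifests which you then maintain yourself (file and directory names here are illustrative):</p> <pre><code>kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/
</code></pre>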
<p>There's plenty of help with configuring CORS when running an Azure Functions app in local dev or inside Azure on the web.</p> <p>But we're currently hosting the functions in our own Kubernetes cluster, and I've tried setting an environment variable 'Host' to '{"CORS":"*"}', which is what it looks like Azure does, but this doesn't seem to have added the CORS headers.</p> <p>Is this because it ignores the environment variable if it's not hosted locally or in Azure? In which case, do I need to be running in production with <code>func</code> so I can pass the allowed-origins parameters to the command-line app? (The Dockerfile MS gives you uses <code>dotnet</code> with the <code>WebHost.dll</code> - and I'm not sure where to find options for that command).</p>
<p>I've made a similar response for Raspberry Pi in <a href="https://stackoverflow.com/a/54304478/10417839">another SO post</a> which is applicable here too. Here is the same answer for reference.</p> <p>CORS is basically just sending the appropriate headers in your response.</p> <p>On Azure, this is taken care of by the platform itself, but since you will be running/accessing the functions runtime directly from a container, you can just set them on the response object.</p> <p>For example, if you are using NodeJS/JavaScript for your functions, set the headers using <code>context.res</code>:</p> <pre><code>context.res = { status: 200, headers: { 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Allow-Origin': '*', // Or the origins you want to allow requests from 'Content-Type': 'application/json' }, body: { just: 'some data' } }; </code></pre> <p>Also, another way to do CORS is using a reverse proxy that adds the headers for you, which especially makes things easier if they are the same for all your functions.</p>
<p>There <em>must</em> be "full-configuration" and example templates of Kubernetes YAML configs <em>somewhere</em>, with comments itemizing what parameters do what, along with runnable examples. </p> <p>Does anyone know where something like this might be? Or where the "full API" of the most commonly used Kubernetes components is documented?</p>
<p>There is documentation for every k8s API version available, for example <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/" rel="noreferrer">check this link</a>.</p> <p>The way I find out what every key in a yaml file represents and what it means is via the <code>kubectl explain</code> command.</p> <p>For example:</p> <pre><code>$kubectl explain deploy.spec </code></pre> <p>A trick I used while doing the CKAD to see the full list is:</p> <pre><code>$kubectl explain deploy --recursive &gt; deployment_spec.txt </code></pre> <p>This will list all available options for a Kubernetes Deployment that you could use in a yaml file.</p> <p>To generate a template, there is the option to use <code>--dry-run</code> and <code>-o yaml</code> in the <code>kubectl</code> command; for example, to create a template for a CronJob:</p> <pre><code>$kubectl run cron-job-name --image=busybox --restart=OnFailure --schedule=&quot;*/1 * * * *&quot; --dry-run -o yaml &gt; cron-job-name.yaml </code></pre>
<p>I am setting up the NGINX ingress controller on AWS EKS. </p> <p>I went through the k8s Ingress resource and it is very helpful for understanding how we map LB ports to k8s service ports with an example file definition. I installed the nginx controller up to the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#prerequisite-generic-deployment-command" rel="nofollow noreferrer">pre-requisite step</a>. Then the tutorial directs me to create an <strong>ingress</strong> resource.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource</a></p> <p>But below it is telling me to apply a <strong>service</strong> config. I am confused by this provider-specific step, which is different in terms of <code>kind, version, spec</code> definition (Service vs Ingress).</p> <p><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml</a></p> <p>Am I missing something here?</p>
<p>This is a concept that is at first a little tricky to wrap your head around. The Nginx ingress controller is nothing but a service of type <code>LoadBalancer</code>. What it does is act as the public-facing endpoint for your services. The IP address assigned to this service can route traffic to multiple services. So you can go ahead and define your services as <code>ClusterIP</code> and have them exposed through the Nginx ingress controller.</p> <p>Here's a diagram to portray the concept a little better: <a href="https://i.stack.imgur.com/xq4YO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xq4YO.png" alt="nginx-ingress" /></a> <a href="https://www.nginx.com/blog/announcing-nginx-ingress-controller-for-kubernetes-release-1-3-0/" rel="noreferrer">image source</a></p> <p>On that note, if you have acquired a static IP for your service, you need to assign it to your Nginx ingress-controller. So what is an ingress? Ingress is basically a way for you to communicate to your Nginx ingress-controller how to direct traffic incoming to your LB public IP. So, as should be clear now, you have one load balancer service and multiple ingress resources. Each ingress corresponds to a single service that can change based on how you define your services, but you get the idea.</p> <p>Let's get into some yaml code. As mentioned, you will need the ingress controller service regardless of how many ingress resources you have. So go ahead and apply <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml" rel="noreferrer">this code</a> on your EKS cluster.</p> <p>Now let's see how you would expose your pod to the world through Nginx-ingress. Say you have a <code>wordpress</code> deployment. You can define a simple <code>ClusterIP</code> service for this app:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: ${WORDPRESS_APP} namespace: ${NAMESPACE} name: ${WORDPRESS_APP} spec: type: ClusterIP ports: - port: 9000 targetPort: 9000 name: ${WORDPRESS_APP} - port: 80 targetPort: 80 protocol: TCP name: http - port: 443 targetPort: 443 protocol: TCP name: https selector: app: ${WORDPRESS_APP} </code></pre> <p>This creates a service for your <code>wordpress</code> app which is not accessible outside of the cluster. Now you can create an ingress resource to expose this service:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: ${NAMESPACE} name: ${INGRESS_NAME} annotations: kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: &quot;true&quot; spec: tls: - hosts: - ${URL} secretName: ${TLS_SECRET} rules: - host: ${URL} http: paths: - path: / backend: serviceName: ${WORDPRESS_APP} servicePort: 80 </code></pre> <p>Now if you run <code>kubectl get svc</code> you can see the following:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE wordpress ClusterIP 10.23.XXX.XX &lt;none&gt; 9000/TCP,80/TCP,443/TCP 1m nginx-ingress-controller LoadBalancer 10.23.XXX.XX XX.XX.XXX.XXX 80:X/TCP,443:X/TCP 1m </code></pre> <p>Now you can access your <code>wordpress</code> service through the URL defined, which maps to the public IP of your ingress controller LB service.</p>
<p>Is there any ability to filter by both namespace <strong>and</strong> pod's labels at the same time?</p> <p>The example present in documentation at <a href="https://kubernetes.io/docs/user-guide/networkpolicies/#the-networkpolicy-resource" rel="noreferrer">https://kubernetes.io/docs/user-guide/networkpolicies/#the-networkpolicy-resource</a> </p> <pre><code> - from: - namespaceSelector: matchLabels: project: myproject - podSelector: matchLabels: role: frontend </code></pre> <p>means that communication is allowed for pods with <code>role=frontend</code> <strong>or</strong> from <code>namespace myproject</code>.</p> <p>Is there any way to change that "or" into an "and"?</p>
<p>Kubernetes 1.11 and above supports combining podSelector and namespaceSelector with a logical AND:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: database.postgres namespace: database spec: podSelector: matchLabels: app: postgres ingress: - from: - namespaceSelector: matchLabels: namespace: default podSelector: matchLabels: app: admin policyTypes: - Ingress </code></pre> <p>See more details in here: <a href="https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d/#f416" rel="noreferrer">https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d/#f416</a></p>
<p>I am new to Helm charts, so please correct me if my understanding is wrong. I have a service which I am trying to deploy using Helm charts. I want to change the ConfigMap name and the key values it reads depending on the deployment environment. Hence I want to add conditional logic in values.yaml.</p> <p>Can someone point me to some document/link which explains how to add conditional logic in values.yaml?</p>
<p>One way of doing it would be to pass one value in with helm install like: </p> <pre><code>--set environment=&lt;value&gt; </code></pre> <p>And then have multiple sets of values in your values file for different environments, like:</p> <pre><code>environment: &lt;default&gt; env1: prop1: &lt;value1&gt; prop2: &lt;value2&gt; env2: prop1: &lt;value1&gt; prop2: &lt;value2&gt; </code></pre> <p>Now in your ConfigMap file make use of it like:</p> <pre><code>{{- if eq .Values.environment "env1" }} somekey: {{ .Values.env1.prop1 }} {{- else }} somekey: {{ .Values.env2.prop1 }} {{- end }} </code></pre> <p>That should do the trick for setting dynamic values according to environment or any such condition.</p> <p>Apart from that, there is one more thing I would like to bring to your notice: Helm has a few more built-in objects just like <code>.Values</code>, one of which is <code>.Capabilities</code>, so maybe you can make use of <code>.Capabilities.KubeVersion.Platform</code> to find the OS of the system.</p>
<p>A complex <code>.yaml</code> file from <a href="https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml" rel="noreferrer">this link</a> needs to be fed into a bash script that runs as part of an automation program running on an EC2 instance of Amazon Linux 2. Note that the <code>.yaml</code> file in the link above contains many objects, and that I need to extract one of the environment variables defined inside one of the many objects that are defined in the file. </p> <blockquote> <p>Specifically, how can I extract the <code>192.168.0.0/16</code> value of the <code>CALICO_IPV4POOL_CIDR</code> variable into a bash variable? </p> </blockquote> <pre><code> - name: CALICO_IPV4POOL_CIDR value: "192.168.0.0/16" </code></pre> <p>I have read a lot of other postings and blog entries about parsing flatter, simpler <code>.yaml</code> files, but none of those other examples show how to extract a nested value like the <code>value</code> of <code>CALICO_IPV4POOL_CIDR</code> in this question.</p>
<p>As others are commenting, it is recommended to make use of <code>yq</code> (along with <code>jq</code>) if available.<br> Then please try the following:</p> <pre><code>value=$(yq -r 'recurse | select(.name? == "CALICO_IPV4POOL_CIDR") | .value' "calico.yaml") echo "$value" </code></pre> <p>Output:</p> <pre><code>192.168.0.0/16 </code></pre>
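<p>If <code>yq</code> cannot be installed on the instance, a rougher fallback with standard tools also works for this particular file, assuming the <code>value</code> line stays directly below the <code>name: CALICO_IPV4POOL_CIDR</code> line as in the linked manifest:</p> <pre><code>value=$(grep -A1 'CALICO_IPV4POOL_CIDR' calico.yaml | grep 'value:' | awk '{print $2}' | tr -d '"')
echo "$value"
</code></pre>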
<p>I've upgraded helm templates (by hand)</p> <p>Fragment of previous <code>deployment.yaml</code>:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: {{ template "measurement-collector.fullname" . }} labels: app: {{ template "measurement-collector.name" . }} chart: {{ template "measurement-collector.chart" . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: {{ template "measurement-collector.name" . }} release: {{ .Release.Name }} </code></pre> <p>New one:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: {{ include "measurement-collector.fullname" . }} labels: app.kubernetes.io/name: {{ include "measurement-collector.name" . }} helm.sh/chart: {{ include "measurement-collector.chart" . }} app.kubernetes.io/instance: {{ .Release.Name }} app.kubernetes.io/managed-by: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app.kubernetes.io/name: {{ include "measurement-collector.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} </code></pre> <p>new <code>service.yaml</code>:</p> <pre><code> name: {{ include "measurement-collector.fullname" . }} labels: app.kubernetes.io/name: {{ include "measurement-collector.name" . }} helm.sh/chart: {{ include "measurement-collector.chart" . }} app.kubernetes.io/instance: {{ .Release.Name }} app.kubernetes.io/managed-by: {{ .Release.Service }} spec: type: {{ .Values.service.type }} ports: protocol: TCP name: http selector: app.kubernetes.io/name: {{ include "measurement-collector.name" . }} app.kubernetes.io/instance: {{ .Release.Name }} </code></pre> <p>Then after running: <code>helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait</code></p> <p>I get:</p> <pre><code>Error: UPGRADE FAILED: Deployment.apps "measurement-collector" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"measurement-collector", "app.kubernetes.io/instance":"measurement-collector"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable </code></pre>
<p>If you change the selector labels, you will need to purge the release before deploying again, because a Deployment's <code>spec.selector</code> is immutable — which is exactly what the error message says.</p>
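<p>A minimal sketch of that, assuming Helm 2 (as implied by the <code>helm upgrade -i</code> syntax) and the release name from the question:</p> <pre><code># delete the release and its history so the Deployment can be recreated with the new selector
helm delete --purge measurement-collector

# then install again with the new labels
helm install chart/measurement-collector --name measurement-collector --namespace prod --wait
</code></pre> <p>Note that purging removes the release history and briefly takes the workload down, so plan for a short outage.</p>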
<p>I've worked quite a lot with Docker in the past years, but I'm a newbie when it comes to Kubernetes. I'm starting today and I am struggling with the usefulness of the Pod concept in comparison with the way I used to do things with Docker swarm.</p>
<p>Let's say that I have a cluster with 7 powerful machines and I have the following stack:</p>
<ul>
<li>I want three Cassandra replicas each running in a dedicated machine (3/7)</li>
<li>I want two Kafka replicas each running in a dedicated machine (5/7)</li>
<li>I want a MyProducer replica running on its own machine, receiving messages from the web and pushing them into Kafka (6/7)</li>
<li>I want 3 MyConsumer replicas all running in the last machine (7/7), which pull from Kafka and insert in Cassandra.</li>
</ul>
<p>With docker swarm I used to handle container distribution with node labels, e.g. I would label three machines and the Cassandra container configuration as C_HOST, 2 machines and the Kafka configuration as K_HOST, ... The swarm deployment would place each container correctly.</p>
<p>I have the following questions:</p>
<ul>
<li><p>Do Kubernetes pods bring any advantage compared to my previous approach (e.g. simplicity)? I understood that I am still required to configure labels; if so, I don't see the appeal.</p></li>
<li><p>What would be the correct way to configure these pods? Would it be one pod for Cassandra replicas, one pod for Kafka replicas, one pod for MyConsumer replicas and one pod for MyProducer?</p></li>
</ul>
<p>Using pod anti-affinity, you can ensure that a pod is not co-located with other pods with specific labels.</p>
<p>So say you have a label "app" with values "cassandra", "kafka", "my-producer" and "my-consumer".</p>
<p>Since you want to have cassandra, kafka and my-producer on dedicated nodes all by themselves, you simply configure an anti-affinity to ALL the existing labels (note that required pod anti-affinity also needs a <code>topologyKey</code>, here "same node"):</p>
<p>(see <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a> for the full schema)</p>
<pre><code>      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - cassandra
              - kafka
              - my-producer
              - my-consumer
          topologyKey: "kubernetes.io/hostname"   # required: "not on the same node"
</code></pre>
<p>This is for a "Pod" resource, so you'd define this in a deployment (where you also define how many replicas) in the pod template.</p>
<p>Since you want three instances of my-consumer running on the same node (or really, you don't care where they run, since by now only one node is left), you do not need to define anything about affinity or anti-affinity:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-consumer
  namespace: default
  labels:
    app: my-consumer
spec:
  selector:
    matchLabels:
      app: my-consumer
  replicas: 3 # here you set the number of replicas that should run
  template: # this is the pod template
    metadata:
      labels:
        app: my-consumer # now this is the label you can set an anti-affinity to
    spec:
      containers:
      - image: ${IMAGE}
        name: my-consumer
      # affinity:
      # now here below this you'd put the affinity-settings from above
      # for the other deployments
</code></pre>
<p>TL; DR: how should I write a <code>Dockerfile</code> or docker commands to run docker containers so that I can stop and exit the running docker container when I hit <code>ctrl+c</code>?</p> <hr> <h3>Background</h3> <p>I need to run an infinite while loop in shell script. When I ran this script locally, the <code>ctrl+c</code> command will exit the infinite loop.</p> <pre><code># content of sync.sh while true; do echo "Do something!" some_syncing_command || { rm -rf /tmp/healthy &amp;&amp; break } echo "Finish doing something!" touch /tmp/healthy sleep ${waitingSeconds} done </code></pre> <p>So based on the shell script, I then created a Docker Image with the following <code>Dockerfile</code> content:</p> <pre><code>FROM debian:stretch COPY sync.sh . ENTRYPOINT ["/sync.sh"] </code></pre> <p>and build the image by running <code>docker build -t infinite-loop .</code></p> <h3>Problem</h3> <p>However, after trying different attempts to run the <code>infinite-loop</code> image, I cannot stop and exit the running docker container after hitting <code>ctrl + c</code>. Here are the docker commands I used to run the docker image:</p> <ol> <li><code>docker run --rm to-infinity-1 infinite-loop</code></li> <li><code>docker run --rm -it to-infinity-2 infinite-loop</code></li> <li><code>docker run --rm -d to-infinity-3 infinite-loop</code>, then run <code>docker attach</code> on <code>to-infinity-3</code></li> </ol> <p>All of the above commands fail to stop and exit the infinite loop after executing <code>ctrl+c</code> directly. Hence I need to run <code>docker stop [container-name]</code> to stop the running containers of infinite loops. What should I change in my implementation to resolve this problem?</p> <p>Thank you.</p> <p><strong>Edit:</strong> additional context, I am using kubernetes with the infinite loop container. I wonder if this <code>ctrl+c</code> problem (related to SIGINT) will interfere with kubernetes if I want to gracefully stop and exit the running pod. Note that although the <code>ctrl+c</code> is problematic, I was still able to use <code>docker stop</code> to stop the running containers.</p>
<p><strong>“docker run” traps or ignores ctrl+c.</strong></p> <p>If you don’t want to lose your shell, you can try stopping the container from another terminal on the same docker host.</p> <p>Open a new shell and execute:</p> <pre><code>$ docker ps                 # get the id of the running container
$ docker stop &lt;container&gt;   # kill it (gracefully)
</code></pre> <p>The container process will end and your original shell will be released.</p>
<p>I am trying to measure the performance of pod scheduling.</p> <p>When I run <code>kubectl describe pod performancetestpod</code>, I get something similar to what is shown below:</p> <pre><code>Type    Reason     Age   From                        Message
----    ------     ----  ----                        -------
Normal  Scheduled  21s   default-scheduler           Successfully assigned default/performancetestpod to ip-172-31-22-111
Normal  Pulled     20s   kubelet, ip-172-31-22-111   Container image "centos:7.6.1810" already present on machine
Normal  Created    20s   kubelet, ip-172-31-22-111   Created container
Normal  Started    20s   kubelet, ip-172-31-22-111   Started container
Normal  Killing    10s   kubelet, ip-172-31-22-111   Killing container with id docker://performancetestpod:Need to kill Pod
</code></pre> <p>1. Is there any way to get the age in milliseconds?</p> <p>2. Is there any other way to get values in milliseconds for pod start, e.g. by using Prometheus?</p>
<p>From my experience there is no way to get milliseconds using kubectl in the way you do it. Answering your 2nd question - take a closer look at <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>. As per <a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-6-kube-state-metrics-14f4e7c8710b" rel="nofollow noreferrer">A Deep Dive into Kubernetes Metrics</a> article:</p> <p><strong>Object Creation Time</strong></p> <p>Often is it helpful to know at what time objects in Kubernetes where created. Kube-state-metrics exposes a creation time for almost all the objects it tracks. The metric name follows the pattern <code>kube_&lt;OBJECT&gt;_created</code> and will include values for the name of the object and the namespace where it lives. The value is an epoch timestamp to the <strong>millisecond</strong>.</p> <p>For example, the CronJob creation series is called <code>kube_cronjob_created</code>.</p>
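<p>A quick way to look at the raw metric, assuming kube-state-metrics is deployed as a Service named <code>kube-state-metrics</code> in the <code>kube-system</code> namespace and serves its metrics on port 8080 (adjust the name, namespace and port to your setup):</p> <pre><code># forward the metrics port locally
kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &amp;

# kube_pod_created is an epoch timestamp you can diff against other timestamps
curl -s http://localhost:8080/metrics | grep 'kube_pod_created{.*performancetestpod'
</code></pre>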
<p>I have a pod running in namespace X under Service A. I have a pod running a REST API in namespace Y under Service B.</p> <p>How do I set up this communication?</p> <p>Thank you.</p>
<p>Simply use the full name of the service:</p> <p><code>&lt;TARGET_SERVICE_NAME&gt;.&lt;TARGET_NAMESPACE_NAME&gt;.svc.cluster.local</code></p> <p>Now using your example:</p> <pre><code>curl B.Y.svc.cluster.local
RESPONSE FROM THE SERVICE B IN NAMESPACE Y
</code></pre> <p>It will work from anywhere in the cluster, because the name is qualified with the namespace.</p> <hr> <p>You can also use an <strong><a href="https://akomljen.com/kubernetes-tips-part-1/" rel="nofollow noreferrer">ExternalName</a></strong> Service, which is a bit more complicated but should deal with your problem too; a sketch is shown below.</p>
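<p>A minimal sketch of that ExternalName approach. The names are lowercased here because Kubernetes resource names must be DNS-compatible; replace them with your real service and namespace names:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: b                                 # local alias created in namespace x
  namespace: x
spec:
  type: ExternalName
  externalName: b.y.svc.cluster.local     # CNAME target: service b in namespace y
</code></pre> <p>With this in place, pods in namespace <code>x</code> can simply call <code>http://b</code>.</p>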
<p>I'm kinda new here, so please be gentle with me. </p> <p>I've inherited an old (ish) kops install procedure using Ansible scripts, which has a specific version of the "kope.io" image within the Instance Group creation </p> <pre><code>apiVersion: kops/v1alpha2 kind: InstanceGroup metadata: creationTimestamp: null labels: kops.k8s.io/cluster: {{ k8s_cluster_name }} name: master-{{ vpc_region }}a spec: associatePublicIp: false image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08 machineType: "{{ master_instance_type }}" maxSize: 1 minSize: 1 {% if use_spot %} maxPrice: "{{ spot_price }}" {% endif %} nodeLabels: kops.k8s.io/instancegroup: master-{{ vpc_region }}a role: Master subnets: - {{ vpc_region }}a-private-subnet </code></pre> <p>As you can see the line <code>image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08</code> pins me to a specific k8s version. </p> <p>I want to rebuild with a newer version, but I'm not sure if I still need to specify this image, and if I do which image should I use?</p> <p>I'd like to at least update this to 1.9.11, but ideally I think I should be going to the newest stable version. (1.13.0?) but I know a <strong>lot</strong> has changed since then, so it's likely things will break? </p> <p>So much information by doing a Google search for this, but much of it is confusing or conflicting (or outdated. Any pointers much appreciated.</p>
<p>According to <a href="https://github.com/kubernetes/kops/blob/master/docs/images.md" rel="nofollow noreferrer">kops documentation</a> you can specify an image and that will be used to provision the AMI that will build your instance group.</p> <p>You can find out the latest <code>kope.io</code> images and their respective kubernetes versions at <a href="https://github.com/kubernetes/kops/blob/master/channels/stable" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/channels/stable</a></p> <p>I'm not sure if you can work with different kope.io/k8s-x.xx versions than the ones you are provisioning, or if kops enforces the restrictions that are stated in the stable channel, but you can see that the different kope.io images should be configured to the different Kubernetes versions.</p> <p>You should try your infrastructure in a test environment just to be safe and not lose data. You should keep in mind that if you need to use hostPath-based mountpoints, you should probably migrate those to the new cluster or use some sort of backup mechanism.</p> <p>In any case, take a look at the <a href="https://github.com/kubernetes/kops#kubernetes-release-compatibility" rel="nofollow noreferrer">kops compatibility matrix</a> and see which kops version you should use for the upgrade you want. You may prefer to do upgrades to interim versions so that you can both upgrade the cluster and kops itself until you are up-to-date, in order to use procedures that have probably been more tested :)</p>
<p>This question has been asked and answered before on stackoverflow but because I'm new to K8, I don't understand the answer.</p> <p>Assuming I have two containers with each container in a separate POD (because I believe this is the recommended approach), I think I need to create a single service for my two pods to be a part of.</p> <ol> <li>How does my java application code get the IP address of the service?</li> <li>How does my java application code get the IP addresses of another POD/container (from the service)?</li> <li>This will be a list of IP addresses because these are stateless and they might be replicated. Is this correct?</li> <li>How do I select the least busy instance of the POD to communicate with?</li> </ol> <p>Thanks Siegfried</p>
<blockquote> <p>How does my java application code get the IP address of the service?</p> </blockquote> <p>You need to create a Service to expose the Pod's port; then your code just uses the Service name, and kube-dns resolves it to the Service's cluster IP, which in turn load-balances to the Pods (a minimal Service is sketched below).</p> <blockquote> <p>How does my java application code get the IP addresses of another POD/container (from the service)?</p> </blockquote> <p>The same way: through the Service's name. Your code normally never needs the individual Pod IPs.</p> <blockquote> <p>This will be a list of IP address because these are stateless and they might be replicated. Is this correct?</p> </blockquote> <p>The Service will load balance between all Pods that match the selector, so behind it there could be 0, 1 or any number of Pods.</p> <blockquote> <p>How do I select the least busy instance of the POD to communicate with?</p> </blockquote> <p>The common way is a round-robin policy, but other balancing policies are available: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs</a></p> <p>Cheers ;)</p>
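<p>A minimal sketch of such a Service; the name <code>my-api</code> and the port numbers are hypothetical, adjust them to your containers:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-api            # your Java code would call http://my-api:8080
spec:
  selector:
    app: my-api           # must match the labels on the Pods of the other Deployment
  ports:
  - port: 8080            # port the Service listens on
    targetPort: 8080      # port the container listens on
</code></pre>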
<p>I'm accessing Kubernetes through the CLI tool <code>kubectl</code> and I'm trying to get a list of all context names, one per line.</p> <p>I know that JSONPath can be used to extract and format specific output. I get really close to what I want with</p> <pre class="lang-sh prettyprint-override"><code>kubectl config view -o=jsonpath="{.contexts[*].name}" </code></pre> <p>but this puts all the names on the same line. I'm trying to use <code>range</code> to list all names separated by newlines:</p> <pre class="lang-sh prettyprint-override"><code>kubectl config view -o=jsonpath='{range .contexts[*]}{.name}{"\n"}{end}' </code></pre> <p>But this just gives me an error:</p> <pre><code>error: unexpected arguments: [.contexts[*]}{.name}{"\n"}{end}] See 'kubectl config view -h' for help and examples. </code></pre> <p>I've reviewed the <code>kubectl</code> documentation and what I'm doing is really similar to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-by-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/#list-containers-by-pod</a>, where the command is</p> <pre><code>kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\ sort </code></pre> <p>but I can't see where I'm going wrong.</p>
<p>Your command works for me in kubectl 1.9.2</p> <p>If it still doesn't work, you can use tr in bash to replace spaces with new lines:</p> <pre><code>kubectl config view -o=jsonpath="{.contexts[*].name}" | tr " " "\n" </code></pre>
<p>I want to detect the Google Cloud project id from a container in a Google-hosted Kubernetes cluster.</p> <p>When connecting to Bigtable, I need to provide the Google project id. Is there a way to detect this automatically from within K8s?</p>
<p>In Python, you can find the project id this way:</p> <pre><code>import google.auth _, PROJECT_ID = google.auth.default() </code></pre> <p>The original question didn't mention what programming language was being used, and I had the same question for Python.</p>
<p>The first and most minimal <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">example of a Deployment in the Kubernetes documentation</a> has the <code>app: nginx</code> line that repeats itself three times. I understand it's a tag, but I haven't found anything that explains why this needs to be specified for all of:</p> <ol> <li><code>metadata.labels</code>,</li> <li><code>spec.selector.matchLabels</code>, and</li> <li><code>spec.template.metadata.labels</code></li> </ol> <p>The example deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 </code></pre>
<p>So 1 and 3 are technically unrelated. 1 is the labels for the deployment object itself and only matters for your own organizational purposes. 3 is the labels that will be put on the generated pods. As for why Deployments rely on manually specifying a selector against the pod labels, it is to ensure the controller stays stateless. The deployment controller can restart at any time and things will be safe. It could be improved in the future though, if someone has a solid proposal that takes care of all the edge cases.</p>
<p>Our Azure DevOps build agents are set up on Kubernetes. Failed pods can easily be deleted from kube, but they appear as "offline" agents in the Azure DevOps Web UI.</p> <p>Over time the list of offline agents has grown really long. Is there a way to programmatically delete them?</p>
<pre><code>$agents = Invoke-RestMethod -uri 'http://dev.azure.com/{organization}/_apis/distributedtask/pools/29/agents' -Method Get -UseDefaultCredentials $agents.value | Where-Object { $_.status -eq 'offline' } | ForEach-Object { Invoke-RestMethod -uri "http://dev.azure.com/{organization}/_apis/distributedtask/pools/29/agents/$($_.id)?api-version=4.1" -Method Delete -UseDefaultCredentials } </code></pre> <p>Some assumptions for this solution:</p> <ol> <li>You are looking for build agents</li> <li>You know the id of the pool you are looking for already. You can get to this programatically also, or just loop through all pools if you want</li> <li>You don't have any issues deleting any offline agents</li> </ol> <p><strong>Note:</strong> I'm using Azure DevOps Server, so replace the <code>-UseDefaultCredentials</code> with your authorization.</p>
<p>I am trying to use the kubectl wait command as outlined in the kubernetes docs: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait</a></p> <p>When I run it, I am getting </p> <pre><code>Error: unknown command "wait" for "kubectl" </code></pre> <p>Docs do mention that the command is "experimental". Is there something I need to do to indicate I want to use experimental commands? All the other kubectl commands work fine.</p> <p>I am using the kubectl command in windows, installed as a gcloud component. I have updated all components. kubectl version returns:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-gke.4", GitCommit:"618716cbb236fb7ca9cabd822b5947e298ad09f7", GitTreeState:"clean", BuildDate:"2019-02-05T19:22:29Z", GoVersion:"go1.10.7b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>I'm not sure in which version <code>kubectl wait</code> was introduced, but I have:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>and it works for me:</p> <pre><code>~ kubectl wait --help Experimental: Wait for a specific condition on one or many resources. </code></pre> <p>Update to the latest version and you will have it</p> <p>Edit: the command was introduced in version <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md" rel="nofollow noreferrer">1.11</a>. Here is the relevant <a href="https://github.com/kubernetes/kubernetes/pull/64034" rel="nofollow noreferrer">PR</a>.</p>
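<p>Since your client is installed as a gcloud component, something along these lines should get you a recent enough binary (the component name is assumed to be <code>kubectl</code>, and the pod name below is illustrative):</p> <pre><code># update the gcloud-managed kubectl
gcloud components update kubectl

# then, for example, block until a pod reports Ready
kubectl wait --for=condition=Ready pod/my-pod --timeout=120s
</code></pre>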
<p>When using Amazon's K8s offering, the <strong>EKS</strong> service, at some point you need to connect the Kubernetes API and configuration to the infrastructure established within AWS. Especially we need a <em>kubeconfig</em> with proper credentials and URLs to connect to the k8s control plane provided by EKS.</p> <p>The Amazon commandline tool <code>aws</code> provides a routine for this task</p> <pre><code>aws eks update-kubeconfig --kubeconfig /path/to/kubecfg.yaml --name &lt;EKS-cluster-name&gt; </code></pre> <h2>Question: do the same through Python/boto3</h2> <p>When looking at the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html" rel="noreferrer">Boto API documentation</a>, I seem to be unable to spot the equivalent for the above mentioned <code>aws</code> routine. Maybe I am looking at the wrong place.</p> <ul> <li>is there a ready-made function in <em>boto</em> to achieve this?</li> <li>otherwise how would you approach this directly within python (other than calling out to <code>aws</code> in a subprocess)?</li> </ul>
<p>There isn't a ready-made method for this, but you can build the configuration file yourself like this (<code>region</code>, <code>cluster_name</code> and <code>config_file</code> are expected to be defined by the caller):</p> <pre><code>import boto3
import yaml

# Set up the client
s = boto3.Session(region_name=region)
eks = s.client("eks")

# get cluster details
cluster = eks.describe_cluster(name=cluster_name)
cluster_cert = cluster["cluster"]["certificateAuthority"]["data"]
cluster_ep = cluster["cluster"]["endpoint"]

# build the cluster config hash
cluster_config = {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [
            {
                "cluster": {
                    "server": str(cluster_ep),
                    "certificate-authority-data": str(cluster_cert)
                },
                "name": "kubernetes"
            }
        ],
        "contexts": [
            {
                "context": {
                    "cluster": "kubernetes",
                    "user": "aws"
                },
                "name": "aws"
            }
        ],
        "current-context": "aws",
        "preferences": {},
        "users": [
            {
                "name": "aws",
                "user": {
                    "exec": {
                        "apiVersion": "client.authentication.k8s.io/v1alpha1",
                        "command": "heptio-authenticator-aws",
                        "args": [
                            "token", "-i", cluster_name
                        ]
                    }
                }
            }
        ]
    }

# Write in YAML.
config_text = yaml.dump(cluster_config, default_flow_style=False)
open(config_file, "w").write(config_text)
</code></pre>
<p>I am practicing the <a href="https://www.katacoda.com/courses/kubernetes/guestbook" rel="nofollow noreferrer">katacoda k8s lesson</a> with the knowledge from <a href="https://stackoverflow.com/questions/37423117/replication-controller-vs-deployment-in-kubernetes">Stack Overflow</a>. I tried killing the pods from the command line and the result is exactly the same as in the simple example: the pod gets recreated a few moments after it dies.</p> <p><strong>Question:</strong><br> Can I just simply replace the <code>ReplicationController</code> with a <code>Deployment</code>?</p>
<p>Don't use a ReplicationController; it has been superseded by ReplicaSet.</p> <p>In your case, use a Deployment object to manage the application life cycle. With a Deployment you get Kubernetes' rolling-upgrade and rollback features.</p> <p>A Deployment works one layer above the ReplicaSet and allows you to upgrade the app to a new version with zero downtime; a minimal example follows.</p>
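<p>A minimal sketch of a Deployment that could take the place of a ReplicationController — the names and image below are placeholders (the guestbook frontend from the lesson), adjust them to yours:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: gcr.io/google-samples/gb-frontend:v4   # placeholder image
        ports:
        - containerPort: 80
</code></pre>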
<p>I deployed my application pods in Azure Kubernetes Service through VSTS. I have some experience with RBAC on an on-premise Kubernetes cluster, where I create the users myself. What I want to do now is create some roles and assign different permissions on Kubernetes resources to my developers and testers in Azure Kubernetes Service as well. I researched this and went through different links, but I didn't get a clear picture. As per my understanding, we can assign permissions only to users who exist in Azure Active Directory; please correct me if I am wrong.</p> <p>One way I found is OpenID Connect tokens. For this I referred to the following <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">link</a>, but I don't have a clear idea of what exactly an identity provider is, or how to generate the different tokens from the identity provider, and the client id and client token that are mentioned in the above link.</p> <p>Could anybody help me out with doing RBAC in Azure Kubernetes Service, or suggest any alternative ways other than the one I mentioned above?</p>
<p>You can use Azure AD RBAC and/or internal k8s RBAC (which is exactly the same as the one on your on-premises cluster).</p> <p>For Azure AD RBAC you would use the same approach as for the internal k8s users, but you'd need to bind roles to Azure AD entities:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: binding_name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "azure_ad_group_guid"
</code></pre> <p>For internal k8s RBAC read the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">official doc</a>.</p> <p>For Azure AD RBAC read the <a href="https://learn.microsoft.com/en-us/azure/aks/aad-integration" rel="nofollow noreferrer">official doc</a>.</p>
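<p>For the developers/testers scenario from the question, a hedged sketch using plain, namespace-scoped Kubernetes RBAC — the group name, namespace and resource list are placeholders to adapt:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: dev                    # placeholder namespace
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "pods/log", "deployments", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "azure_ad_group_guid"       # or a username, depending on your auth setup
</code></pre>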
<p>I'm trying to run elasticsearch and kibana on kubernetes. I ran:</p> <pre><code>kubectl run elasticsearch --image=elasticsearch:6.6.1 --env="discovery.type=single-node" --port=9200 --port=9300
kubectl run kibana --image=kibana:6.6.1 --port=5601
</code></pre> <p>then I ran <code>kubectl proxy</code> and opened:</p> <pre><code>http://localhost:$IP_FROM_KUBECTL_PROXY(usually 8081)/api/v1/namespaces/default/pods/$POD_NAME/proxy/
</code></pre> <p>When I browsed to the elasticsearch pod, everything looked fine, but when I browsed to kibana, the app didn't work (I see "Kibana server is not ready yet" forever).</p> <p>The logs of kibana are the following:</p> <pre><code>{"type":"log","@timestamp":"2019-03-02T10:38:47Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-03-02T10:38:49Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
</code></pre> <p>This is kibana.yml on the kibana pod:</p> <pre><code># Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.ui.container.elasticsearch.enabled: true
</code></pre> <p>I'm pretty new to Kubernetes, and I can't figure out why they can't talk to one another.</p>
<p>This Kibana log entry explains to you what the problem is:</p> <blockquote> <p>{&quot;type&quot;:&quot;log&quot;,&quot;@timestamp&quot;:&quot;2019-03-02T10:38:49Z&quot;,&quot;tags&quot;:[&quot;warning&quot;,&quot;elasticsearch&quot;,&quot;admin&quot;],&quot;pid&quot;:1,&quot;message&quot;:&quot;Unable to revive connection: http://elasticsearch:9200/&quot;}</p> </blockquote> <p>Problem: naming your pod <code>elasticsearch</code> is not enough — Kubernetes does not create a DNS record for a bare pod name.</p> <p>You have a few solutions to fix it, depending on the situation:</p> <ol> <li><p>Create a Service as <a href="https://stackoverflow.com/users/1626280/amityo">Amityo</a> has suggested (a sketch follows below). This is enough if kibana and elasticsearch are running in the same namespace.</p> </li> <li><p>If kibana and elasticsearch are running in different namespaces you'll need to use the full DNS name for the service: elasticsearch.my-namespace.svc.cluster.local</p> </li> <li><p>If you run elasticsearch and kibana in the same pod, then localhost:9200 will be enough to be able to query.</p> </li> <li><p>For your current situation: when elasticsearch is running you can use as ELASTICSEARCH_URL the pod DNS name 1-2-3-4.default.pod.cluster.local, where 1-2-3-4 is the pod's IP address with the dots replaced by dashes.</p> </li> </ol> <p>You can define the hostname when you create a Pod definition for elasticsearch using the following YAML:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch
  labels:
    name: elasticsearch-single
spec:
  hostname: elasticsearch
  subdomain: for-kibana
  containers:
  - image: elasticsearch:6.6.1
    name: elasticsearch
</code></pre> <p>Then you will be able to use the pod DNS name <code>elasticsearch.for-kibana.default.svc.cluster.local</code> as ELASTICSEARCH_URL (note that this form also requires a headless Service named <code>for-kibana</code> in the same namespace).</p> <p>All the information can be found in the kubernetes docs <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">here</a>.</p> <p><strong>Please note:</strong> according to the official docs the env variable ELASTICSEARCH_URL is deprecated in newer elasticsearch versions; for more details <a href="https://www.elastic.co/guide/en/kibana/7.x/breaking-changes-7.0.html#_elasticsearch_url_is_no_longer_valid" rel="nofollow noreferrer">see here</a>.</p>
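<p>A minimal sketch of option 1 — a Service named <code>elasticsearch</code> that selects the pod. It assumes the pod carries a matching label; <code>kubectl run elasticsearch ...</code> in the question would normally label it <code>run=elasticsearch</code>, so adjust the selector if yours differs:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch        # this name is what kibana resolves
spec:
  selector:
    run: elasticsearch       # default label added by `kubectl run elasticsearch ...`
  ports:
  - name: http
    port: 9200
    targetPort: 9200
</code></pre>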
<p>I'm using a helm chart to deploy my application on kubernetes, but the services in my stack depend on other services. How do I make sure helm will not deploy until the dependencies are up?</p>
<p>Typically you don't; you just let Helm (or <code>kubectl apply -f</code>) start everything in one shot and let it retry starting everything.</p> <p>The most common pattern is for a container process to simply crash at startup if an external service isn't available; the Kubernetes Pod mechanism will restart the container when this happens. If the dependency never comes up you'll be stuck in CrashLoopBackOff state forever, but if it appears in 5-10 seconds then everything will come up normally within a minute or two.</p> <p>Also remember that pods of any sort are fairly disposable in Kubernetes. IME if something isn't working in a service, one of the first things to try is <code>kubectl delete pod</code> and letting a Deployment controller recreate it. Kubernetes can do this on its own too, for example if it decides it needs to relocate a pod onto a different node. That is: even if some dependency is up when your pod first starts up, there's no guarantee it will stay up forever.</p>
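<p>A hedged sketch of that crash-at-startup pattern as a pod-spec fragment — the service name, port and start command are placeholders, and it assumes the image ships <code>nc</code>:</p> <pre><code>    spec:
      containers:
      - name: my-service
        image: my-service:latest
        # exit immediately if the dependency is unreachable; the kubelet
        # restarts the container with backoff until the check succeeds
        command: ["sh", "-c", "nc -z my-dependency 5432 || exit 1; exec /app/start"]
</code></pre>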
<p>Kubernetes version: 1.13.4 (same problem on 1.13.2).</p> <p>I self-host the cluster on digitalocean.</p> <p>OS: coreos 2023.4.0</p> <p>I have 2 volumes on one node:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- kind: PersistentVolume apiVersion: v1 metadata: name: prometheus-pv-volume labels: type: local name: prometheus-pv-volume spec: storageClassName: local-storage capacity: storage: 100Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem hostPath: path: "/prometheus-volume" nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/monitoring operator: Exists --- kind: PersistentVolume apiVersion: v1 metadata: name: grafana-pv-volume labels: type: local name: grafana-pv-volume spec: storageClassName: local-storage capacity: storage: 1Gi persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem accessModes: - ReadWriteOnce hostPath: path: "/grafana-volume" nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/monitoring operator: Exists </code></pre> <p>And 2 pvc's using them on a same node. Here is one:</p> <pre><code> storage: volumeClaimTemplate: spec: storageClassName: local-storage selector: matchLabels: name: prometheus-pv-volume resources: requests: storage: 100Gi </code></pre> <p>Everything works fine.</p> <p><code>kubectl get pv --all-namespaces</code> output:</p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE grafana-pv-volume 1Gi RWO Retain Bound monitoring/grafana-storage local-storage 16m prometheus-pv-volume 100Gi RWO Retain Bound monitoring/prometheus-k8s-db-prometheus-k8s-0 local-storage 16m </code></pre> <p><code>kubectl get pvc --all-namespaces</code> output:</p> <pre><code>NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE monitoring grafana-storage Bound grafana-pv-volume 1Gi RWO local-storage 10m monitoring prometheus-k8s-db-prometheus-k8s-0 Bound prometheus-pv-volume 100Gi RWO local-storage 10m </code></pre> <p>The problem is that im getting these log messages every 2 minutes from kube-controller-manager:</p> <pre><code>W0302 17:16:07.877212 1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -&gt; err:no volume plugin matched W0302 17:16:07.877164 1 plugins.go:845] FindExpandablePluginBySpec(grafana-pv-volume) -&gt; err:no volume plugin matched </code></pre> <p>Why do they appear? How can i fix this?</p>
<p>Seems like this is a safe-to-ignore message; the log line was recently removed (Feb 20) and will not occur in future releases: <a href="https://github.com/kubernetes/kubernetes/pull/73901" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/73901</a></p>
<p>I'm trying to use the master API to update resources.</p> <p>In 1.2, to update a deployment resource I'm doing <code>kubectl apply -f new updateddeployment.yaml</code>.</p> <p>How can I do the same action with the API?</p>
<p>This API is not really convincingly designed, since it forces us to reimplement such basic stuff at the client side...</p> <p>Anyway, here is my attempt to reinvent the hexagonal wheel in Python...</p> <h2>Python module kube_apply</h2> <p>Usage is like <code>kube_apply.fromYaml(myStuff)</code></p> <ul> <li>can read strings or opened file streams (via lib Yaml)</li> <li>handles yaml files with several concatenated objects</li> <li>implementation is <em>rather braindead</em> and first attempts to insert the resource. If this fails, it tries a patch, and if this also fails, it <em>deletes</em> the resource and inserts it anew.</li> </ul> <p>File: <code>kube_apply.py</code></p> <pre><code>#!/usr/bin/python3 # coding: utf-8 # __________ ________________________________________________ # # kube_apply - apply Yaml similar to kubectl apply -f file.yaml # # # # (C) 2019 Hermann Vosseler &lt;[email protected]&gt; # # This is OpenSource software; licensed under Apache License v2+ # # ############################################################### # ''' Utility for the official Kubernetes python client: apply Yaml data. While still limited to some degree, this utility attempts to provide functionality similar to `kubectl apply -f` - load and parse Yaml - try to figure out the object type and API to use - figure out if the resource already exists, in which case it needs to be patched or replaced alltogether. - otherwise just create a new resource. Based on inspiration from `kubernetes/utils/create_from_yaml.py` @since: 2/2019 @author: Ichthyostega ''' import re import yaml import logging import kubernetes.client def runUsageExample(): ''' demonstrate usage by creating a simple Pod through default client ''' logging.basicConfig(level=logging.DEBUG) # # KUBECONFIG = '/path/to/special/kubecfg.yaml' # import kubernetes.config # client = kubernetes.config.new_client_from_config(config_file=KUBECONFIG) # # --or alternatively-- # kubernetes.config.load_kube_config(config_file=KUBECONFIG) fromYaml(''' kind: Pod apiVersion: v1 metadata: name: dummy-pod labels: blow: job spec: containers: - name: sleepr image: busybox command: - /bin/sh - -c - sleep 24000 ''') def fromYaml(rawData, client=None, **kwargs): ''' invoke the K8s API to create or replace an object given as YAML spec. @param rawData: either a string or an opened input stream with a YAML formatted spec, as you'd use for `kubectl apply -f` @param client: (optional) preconfigured client environment to use for invocation @param kwargs: (optional) further arguments to pass to the create/replace call @return: response object from Kubernetes API call ''' for obj in yaml.load_all(rawData): createOrUpdateOrReplace(obj, client, **kwargs) def createOrUpdateOrReplace(obj, client=None, **kwargs): ''' invoke the K8s API to create or replace a kubernetes object. The first attempt is to create(insert) this object; when this is rejected because of an existing object with same name, we attempt to patch this existing object. As a last resort, if even the patch is rejected, we *delete* the existing object and recreate from scratch. @param obj: complete object specification, including API version and metadata. 
@param client: (optional) preconfigured client environment to use for invocation @param kwargs: (optional) further arguments to pass to the create/replace call @return: response object from Kubernetes API call ''' k8sApi = findK8sApi(obj, client) try: res = invokeApi(k8sApi, 'create', obj, **kwargs) logging.debug('K8s: %s created -&gt; uid=%s', describe(obj), res.metadata.uid) except kubernetes.client.rest.ApiException as apiEx: if apiEx.reason != 'Conflict': raise try: # asking for forgiveness... res = invokeApi(k8sApi, 'patch', obj, **kwargs) logging.debug('K8s: %s PATCHED -&gt; uid=%s', describe(obj), res.metadata.uid) except kubernetes.client.rest.ApiException as apiEx: if apiEx.reason != 'Unprocessable Entity': raise try: # second attempt... delete the existing object and re-insert logging.debug('K8s: replacing %s FAILED. Attempting deletion and recreation...', describe(obj)) res = invokeApi(k8sApi, 'delete', obj, **kwargs) logging.debug('K8s: %s DELETED...', describe(obj)) res = invokeApi(k8sApi, 'create', obj, **kwargs) logging.debug('K8s: %s CREATED -&gt; uid=%s', describe(obj), res.metadata.uid) except Exception as ex: message = 'K8s: FAILURE updating %s. Exception: %s' % (describe(obj), ex) logging.error(message) raise RuntimeError(message) return res def patchObject(obj, client=None, **kwargs): k8sApi = findK8sApi(obj, client) try: res = invokeApi(k8sApi, 'patch', obj, **kwargs) logging.debug('K8s: %s PATCHED -&gt; uid=%s', describe(obj), res.metadata.uid) return res except kubernetes.client.rest.ApiException as apiEx: if apiEx.reason == 'Unprocessable Entity': message = 'K8s: patch for %s rejected. Exception: %s' % (describe(obj), apiEx) logging.error(message) raise RuntimeError(message) else: raise def deleteObject(obj, client=None, **kwargs): k8sApi = findK8sApi(obj, client) try: res = invokeApi(k8sApi, 'delete', obj, **kwargs) logging.debug('K8s: %s DELETED. uid was: %s', describe(obj), res.details and res.details.uid or '?') return True except kubernetes.client.rest.ApiException as apiEx: if apiEx.reason == 'Not Found': logging.warning('K8s: %s does not exist (anymore).', describe(obj)) return False else: message = 'K8s: deleting %s FAILED. Exception: %s' % (describe(obj), apiEx) logging.error(message) raise RuntimeError(message) def findK8sApi(obj, client=None): ''' Investigate the object spec and lookup the corresponding API object @param client: (optional) preconfigured client environment to use for invocation @return: a client instance wired to the apriopriate API ''' grp, _, ver = obj['apiVersion'].partition('/') if ver == '': ver = grp grp = 'core' # Strip 'k8s.io', camel-case-join dot separated parts. rbac.authorization.k8s.io -&gt; RbacAuthorzation grp = ''.join(part.capitalize() for part in grp.rsplit('.k8s.io', 1)[0].split('.')) ver = ver.capitalize() k8sApi = '%s%sApi' % (grp, ver) return getattr(kubernetes.client, k8sApi)(client) def invokeApi(k8sApi, action, obj, **args): ''' find a suitalbe function and perform the actual API invocation. 
@param k8sApi: client object for the invocation, wired to correct API version @param action: either 'create' (to inject a new objet) or 'replace','patch','delete' @param obj: the full object spec to be passed into the API invocation @param args: (optional) extraneous arguments to pass @return: response object from Kubernetes API call ''' # transform ActionType from Yaml into action_type for swagger API kind = camel2snake(obj['kind']) # determine namespace to place the object in, supply default try: namespace = obj['metadata']['namespace'] except: namespace = 'default' functionName = '%s_%s' %(action,kind) if hasattr(k8sApi, functionName): # namespace agnostic API function = getattr(k8sApi, functionName) else: functionName = '%s_namespaced_%s' %(action,kind) function = getattr(k8sApi, functionName) args['namespace'] = namespace if not 'create' in functionName: args['name'] = obj['metadata']['name'] if 'delete' in functionName: from kubernetes.client.models.v1_delete_options import V1DeleteOptions obj = V1DeleteOptions() return function(body=obj, **args) def describe(obj): return &quot;%s '%s'&quot; % (obj['kind'], obj['metadata']['name']) def camel2snake(string): string = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', string) string = re.sub('([a-z0-9])([A-Z])', r'\1_\2', string).lower() return string if __name__=='__main__': runUsageExample() </code></pre>
<p>I have used the nginx-ingress controller as a sub-chart and I want to override <code>controller.service.nodePorts.http</code> in the subchart. I tried a few things and nothing seems to work. Here is what I've tried:</p> <ul> <li>using <code>--set controller.service.nodePorts.http=32080</code> during the helm install command</li> <li>declaring this path in my chart's value.yaml</li> </ul> <p>Any pointers on what I may be missing? Thanks in advance...</p>
<p>When overriding values of a sub-chart, you need to nest those configurations under the name of the subchart. For example in values.yaml:</p> <pre><code>mysubchart: x: y </code></pre> <p>In your case, if you imported the nginx controller chart as <code>nginx-controller</code>, you could add this to the main chart:</p> <pre><code>nginx-controller: controller: service: nodePorts: http: "32080" </code></pre> <p>This topic is covered in the helm docs under: <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart" rel="noreferrer">https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart</a></p>
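<p>The same nesting applies on the command line; assuming the dependency is named <code>nginx-controller</code> in your requirements.yaml, something like:</p> <pre><code>helm install mychart --set nginx-controller.controller.service.nodePorts.http=32080
</code></pre>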
<p><strong>Use case / Problem</strong></p> <p>I am in charge of maintaining a kubernetes cluster with 40 nodes (split across 2 zones). We have roughly 100 microservices and platform stuff like Kafka brokers running in this cluster. All microservices have defined resource request &amp; limits. Most of them however are burstable and don't have guaranteed RAM. Developers who deploy their services in our cluster defined limits far greater than the request (see example below) which eventually caused a lot of evicted pods on various nodes. We still want to use burstable resources in our services though, as we can save money using burstable resources. Therefore I need a better monitoring possibility of all pods running on each node, containing these information:</p> <ul> <li>Node name &amp; CPU / RAM capacity</li> <li>All pod names plus <ul> <li>pod's resource requests &amp; limits</li> <li>pod's current cpu &amp; ram usage</li> </ul></li> </ul> <p>This way I could easily identify two problematic kind of services:</p> <p><strong>Case A:</strong> The microservice which just sets huge resource limits, because the developer was just testing stuff or is too lazy to bench/monitor his service</p> <pre><code>resources: requests: cpu: 100m ram: 500Mi limits: cpu: 6 ram: 20Gi </code></pre> <p><strong>Case B:</strong> Too many services on the same node which have set not accurate resource limits (e. g. 500Mi, but the service constantly uses 1.5Gi RAM). This case happened to us, because Java developers didn't notice the Java garbage collector will only start to cleanup when 75% of the available RAM has been used.</p> <p><strong>My question:</strong></p> <p>How could I properly monitor this and therefore identify misconfigured microservices in order to prevent such eviction problems? At a smaller scale I could simply run <code>kubectl describe nodes</code> and <code>kubectl top pods</code> to figure it out manually, but at this scale that doesn't work anymore.</p> <p><em>Note:</em> I couldn't find any existing solution for this problem (including prometheus + grafana boards using kube metrics and similiar). I thought it's possible but visualizing this stuff in Grafana is really hard.</p>
<p>I ended up writing my own Prometheus exporter for this purpose. While node-exporter provides usage statistics and kube-state-metrics exposes metrics about your Kubernetes resource objects, it's not easy to combine and aggregate these metrics so that they provide valuable information for the described use case.</p> <p>With Kube Eagle (<a href="https://github.com/google-cloud-tools/kube-eagle/" rel="nofollow noreferrer">https://github.com/google-cloud-tools/kube-eagle/</a>) you can easily create such a dashboard (<a href="https://grafana.com/dashboards/9871" rel="nofollow noreferrer">https://grafana.com/dashboards/9871</a>):</p> <p><a href="https://i.stack.imgur.com/3J7ZQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3J7ZQ.png" alt="Grafana dashboard for Kubernetes resource monitoring"></a></p> <p>I also wrote a Medium article about how this has helped me save lots of hardware resources: <a href="https://medium.com/@martin.schneppenheim/utilizing-and-monitoring-kubernetes-cluster-resources-more-effectively-using-this-tool-df4c68ec2053" rel="nofollow noreferrer">https://medium.com/@martin.schneppenheim/utilizing-and-monitoring-kubernetes-cluster-resources-more-effectively-using-this-tool-df4c68ec2053</a></p>
<p>I'm trying to install Kubernetes on my Ubuntu server/desktop version 18.04.1. But, when I want to add kubernetes to the apt repository using the following command:</p> <pre><code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-bionic main" </code></pre> <p>I get the following error:</p> <pre><code>Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease Ign:3 http://dl.google.com/linux/chrome/deb stable InRelease Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease Hit:5 http://dl.google.com/linux/chrome/deb stable Release Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease Hit:7 https://download.docker.com/linux/ubuntu bionic InRelease Ign:8 https://packages.cloud.google.com/apt kubernetes-bionic InRelease Err:10 https://packages.cloud.google.com/apt kubernetes-bionic Release 404 Not Found [IP: 216.58.211.110 443] Reading package lists... Done E: The repository 'http://apt.kubernetes.io kubernetes-bionic Release' does not have a Release file. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. </code></pre> <p>If I then try to install <code>kubeadm</code>, it does not work because I don't have the repository added to apt</p> <p>I hope someone can shed some light on my issue..</p> <p>All of this is running inside a VM on Hyper-V</p> <p>PS: I'm not a die hard Linux expert but coming from Windows!</p>
<p>A <em>bionic</em> folder has not been created in the apt repository yet; however, I tested adding the repository with <code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"</code> and <code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-yakkety main"</code>.</p> <p>With either of those I am able to install kubeadm, kubectl &amp; kubelet on Ubuntu 18.04 with codename <em>bionic</em>.</p> <p>Please make sure to add the GPG key; it is recommended good practice, otherwise this creates a vulnerability where malicious code could be pushed under a proper package name:</p> <pre><code>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
</code></pre> <p>You can check for yourself which repositories are present under dists - <a href="https://packages.cloud.google.com/apt/dists/" rel="noreferrer">https://packages.cloud.google.com/apt/dists/</a></p> <p>You can also verify whether the repository has been added to /etc/apt/sources.list or /etc/apt/sources.list.d; if it is there then you should be fine.</p>
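<p>One possible end-to-end sequence on Ubuntu 18.04, using the xenial repository described above (package versions are left to apt's default here):</p> <pre><code>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
</code></pre>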
<p>I'm working with OpenJDK 11 and a very simple SpringBoot application that almost the only thing it has is the SpringBoot actuator enabled so I can call <strong>/actuator/health</strong> etc.</p> <p>I also have a kubernetes cluster on GCE very simple with just a pod with a container (containing this app of course) </p> <p>My configuration has some key points that I want to highlight, it has some requirements and limits</p> <pre><code>resources: limits: memory: 600Mi requests: memory: 128Mi </code></pre> <p>And it has a readiness probe</p> <pre><code>readinessProbe: initialDelaySeconds: 30 periodSeconds: 30 httpGet: path: /actuator/health port: 8080 </code></pre> <p>I'm also setting a JVM_OPTS like (that my program is using obviously)</p> <pre><code>env: - name: JVM_OPTS value: "-XX:MaxRAM=512m" </code></pre> <p><strong>The problem</strong> </p> <p>I launch this and it gets OOMKilled in about 3 hours every time!</p> <p>I'm never calling anything myself the only call is the readiness probe each 30 seconds that kubernetes does, and that is enough to exhaust the memory ? I have also not implemented anything out of the ordinary, just a Get method that says hello world along all the SpringBoot imports to have the actuators</p> <p>If I run kubectl top pod XXXXXX I actually see how gradually get bigger and bigger </p> <p>I have tried a lot of different configurations, tips, etc, but anything seems to work with a basic SpringBoot app</p> <p>Is there a way to actually hard limit the memory in a way that Java can raise a OutOfMemory exception ? or to prevent this from happening? </p> <p>Thanks in advance</p> <hr> <h2>EDIT: After 15h running</h2> <pre><code>NAME READY STATUS RESTARTS AGE pod/test-79fd5c5b59-56654 1/1 Running 4 15h </code></pre> <p>describe pod says...</p> <pre><code>State: Running Started: Wed, 27 Feb 2019 10:29:09 +0000 Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: Wed, 27 Feb 2019 06:27:39 +0000 Finished: Wed, 27 Feb 2019 10:29:08 +0000 </code></pre> <p>That last span of time is about 4 hours and only have 483 calls to /actuator/health, apparently that was enough to make java exceed the MaxRAM hint ?</p> <hr> <h2>EDIT: Almost 17h</h2> <p>its about to die again</p> <pre><code>$ kubectl top pod test-79fd5c5b59-56654 NAME CPU(cores) MEMORY(bytes) test-79fd5c5b59-56654 43m 575Mi </code></pre> <hr> <h2>EDIT: loosing any hope at 23h</h2> <pre><code>NAME READY STATUS RESTARTS AGE pod/test-79fd5c5b59-56654 1/1 Running 6 23h </code></pre> <p>describe pod:</p> <pre><code>State: Running Started: Wed, 27 Feb 2019 18:01:45 +0000 Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: Wed, 27 Feb 2019 14:12:09 +0000 Finished: Wed, 27 Feb 2019 18:01:44 +0000 </code></pre> <hr> <h2>EDIT: A new finding</h2> <p>Yesterday night I was doing some interesting reading: </p> <p><a href="https://developers.redhat.com/blog/2017/03/14/java-inside-docker/" rel="noreferrer">https://developers.redhat.com/blog/2017/03/14/java-inside-docker/</a> <a href="https://banzaicloud.com/blog/java10-container-sizing/" rel="noreferrer">https://banzaicloud.com/blog/java10-container-sizing/</a> <a href="https://medium.com/adorsys/jvm-memory-settings-in-a-container-environment-64b0840e1d9e" rel="noreferrer">https://medium.com/adorsys/jvm-memory-settings-in-a-container-environment-64b0840e1d9e</a></p> <p>TL;DR I decided to remove the memory limit and start the process again, the result was quite interesting (after like 11 hours running)</p> <pre><code>NAME CPU(cores) MEMORY(bytes) 
test-84ff9d9bd9-77xmh 218m 1122Mi </code></pre> <p>So... WTH with that CPU? I kind expecting a big number on memory usage but what happens with the CPU? </p> <p>The one thing I can think is that the GC is running as crazy thinking that the MaxRAM is 512m and he is using more than 1G. I'm wondering, is Java detecting ergonomics correctly? (I'm starting to doubt it) </p> <p>To test my theory I set a limit of 512m and deploy the app this way and I found that from the start there is a unusual CPU load that it has to be the GC running very frequently</p> <pre><code>kubectl create ... limitrange/mem-limit-range created pod/test created kubectl exec -it test-64ccb87fd7-5ltb6 /usr/bin/free total used free shared buff/cache available Mem: 7658200 1141412 4132708 19948 2384080 6202496 Swap: 0 0 0 kubectl top pod .. NAME CPU(cores) MEMORY(bytes) test-64ccb87fd7-5ltb6 522m 283Mi </code></pre> <p>522m is too much vCPU, so my logical next step was to ensure I'm using the most appropriated GC for this case, I changed the JVM_OPTS this way:</p> <pre><code> env: - name: JVM_OPTS value: "-XX:MaxRAM=512m -Xmx128m -XX:+UseSerialGC" ... resources: requests: memory: 256Mi cpu: 0.15 limits: memory: 700Mi </code></pre> <p>And thats bring the vCPU usage to a reasonable status again, after <code>kubectl top pod</code></p> <pre><code>NAME CPU(cores) MEMORY(bytes) test-84f4c7445f-kzvd5 13m 305Mi </code></pre> <p>Messing with Xmx having MaxRAM is obviously affecting the JVM but how is not possible to control the amount of memory we have on virtualized containers ? I know that <code>free</code> command will report the host available RAM but OpenJDK should be using <strong>cgroups</strong> rihgt?.</p> <p>I'm still monitoring the memory ...</p> <hr> <h2>EDIT: A new hope</h2> <p>I did two things, the first one was to remove again my container limit, I want to analyze how much it will grow, also I added a new flag to see how the process is using the native memory <code>-XX:NativeMemoryTracking=summary</code></p> <p>At the beginning every thing was normal, the process started consuming like 300MB via <code>kubectl top pod</code> so I let it running about 4 hours and then ...</p> <pre><code>kubectl top pod NAME CPU(cores) MEMORY(bytes) test-646864bc48-69wm2 54m 645Mi </code></pre> <p>kind of expected, right ? 
but then I checked the native memory usage</p> <pre><code>jcmd &lt;PID&gt; VM.native_memory summary Native Memory Tracking: Total: reserved=2780631KB, committed=536883KB - Java Heap (reserved=131072KB, committed=120896KB) (mmap: reserved=131072KB, committed=120896KB) - Class (reserved=203583KB, committed=92263KB) (classes #17086) ( instance classes #15957, array classes #1129) (malloc=2879KB #44797) (mmap: reserved=200704KB, committed=89384KB) ( Metadata: ) ( reserved=77824KB, committed=77480KB) ( used=76069KB) ( free=1411KB) ( waste=0KB =0.00%) ( Class space:) ( reserved=122880KB, committed=11904KB) ( used=10967KB) ( free=937KB) ( waste=0KB =0.00%) - Thread (reserved=2126472KB, committed=222584KB) (thread #2059) (stack: reserved=2116644KB, committed=212756KB) (malloc=7415KB #10299) (arena=2413KB #4116) - Code (reserved=249957KB, committed=31621KB) (malloc=2269KB #9949) (mmap: reserved=247688KB, committed=29352KB) - GC (reserved=951KB, committed=923KB) (malloc=519KB #1742) (mmap: reserved=432KB, committed=404KB) - Compiler (reserved=1913KB, committed=1913KB) (malloc=1783KB #1343) (arena=131KB #5) - Internal (reserved=7798KB, committed=7798KB) (malloc=7758KB #28415) (mmap: reserved=40KB, committed=40KB) - Other (reserved=32304KB, committed=32304KB) (malloc=32304KB #3030) - Symbol (reserved=20616KB, committed=20616KB) (malloc=17475KB #212850) (arena=3141KB #1) - Native Memory Tracking (reserved=5417KB, committed=5417KB) (malloc=347KB #4494) (tracking overhead=5070KB) - Arena Chunk (reserved=241KB, committed=241KB) (malloc=241KB) - Logging (reserved=4KB, committed=4KB) (malloc=4KB #184) - Arguments (reserved=17KB, committed=17KB) (malloc=17KB #469) - Module (reserved=286KB, committed=286KB) (malloc=286KB #2704) </code></pre> <p>Wait, What ? 2.1 GB reserved for threads? and 222 MB being used, what is this ? I currently don't know, I just saw it...</p> <p>I need time trying to understand why this is happening </p>
<p>I finally found my issue and I want to share it so others can benefit in some way from this.</p> <p>As I found in my last edit, I had a thread problem that was causing all the memory consumption over time; specifically, we were using an asynchronous method from a third-party library without properly taking care of those resources (in this case, ensuring those calls ended correctly).</p> <p>I was able to detect the issue because I used a memory limit on my Kubernetes deployment from the beginning (which is good practice in production environments) and then monitored my app's memory consumption very closely using tools like <code>jstat, jcmd, visualvm, kill -3</code> and, most importantly, the <code>-XX:NativeMemoryTracking=summary</code> flag, which gave me a lot of detail in this regard.</p>
<p>Let's say there is a <code>config.xml</code> file outside the chart directory. Can this file be copied to a directory inside the container?</p> <p>If it's inside the chart directory, it's pretty easy to use a <code>configMap</code>, as in</p> <pre><code>{{ (tpl (.Files.Glob "myconf/*").AsConfig . ) | indent 2 }}
</code></pre> <p>Since the file is outside the chart directory, this isn't supported in Helm 2 (although there is some talk of supporting it in Helm 3).</p> <p>I thought of putting the content in values as</p> <pre><code>key: |
  &lt;tag&gt;abc&lt;/tag&gt;
</code></pre> <p>and then reading that value in the ConfigMap and mounting it as a file.</p> <p>Is there any elegant way to do it?</p>
<p>Yes. All the examples I have seen so far do it this way.</p>
<p>I am getting the error below while trying to join the cluster from a worker node.</p> <p>The cluster version is <code>1.10.4</code> and the node version is <code>1.11.0</code>.</p> <pre><code>[discovery] Successfully established connection with API Server "10.148.0.2:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace configmaps "kubelet-config-1.11" is forbidden: User "system:bootstrap:7fho7b" cannot get configmaps in the namespace "kube-system" </code></pre>
<p>Definitely check your versions of kubeadm and kubelet, and make sure the same version of these packages is used across all of your nodes. You should also "mark and hold" these packages on your hosts (as shown at the end).</p> <p>Check your current version of each:</p> <pre><code>kubelet --version kubeadm version </code></pre> <p>If they're different, you've got problems. You should reinstall the same version on all your nodes and allow downgrades. The versions in the command below are probably older than what is currently out; you can replace the version numbers with more recent ones, but this will work:</p> <pre><code>sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00 --allow-downgrades </code></pre> <p>Then, once they're installed, mark and hold them so they can't be upgraded automatically and break your system:</p> <pre><code>sudo apt-mark hold docker-ce kubelet kubeadm kubectl </code></pre>
<p>I am trying to connect to my Elasticsearch pod on ports 9200 and 9300. When I go to:</p> <p><code>http://localhost:$IP_FROM_KUBECTL_PROXY(usually 8001)/api/v1/namespaces/default/pods/$POD_NAME/proxy/</code></p> <p>I see the following error:</p> <pre><code>Error: 'net/http: HTTP/1.x transport connection broken: malformed HTTP status code "is"' Trying to reach: 'http://172.17.0.5:9300/' </code></pre> <p>What I did is run:</p> <pre><code>kubectl run elasticsearch --image=elasticsearch:6.6.1 -labels="elasticsearch" --env="discovery.type=single-node" --port=9200 --port=9300 </code></pre> <p>and create the following service:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: elasticsearch spec: selector: host: elasticsearch subdomain: for-kibana app: elasticsearch ports: - protocol: TCP name: serving port: 9200 targetPort: 9200 - protocol: TCP name: node2node port: 9300 targetPort: 9300 </code></pre> <p>It's weird, because when I just use port 9200, everything works, but when I run with 9300, it fails.</p>
<p>Port 9300 is a binary protocol (not HTTP) and is used for node-to-node communication. Only port 9200 exposes the REST API.</p> <p>From the <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/_talking_to_elasticsearch.html" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Both Java clients talk to the cluster over port 9300, using the native Elasticsearch transport protocol. The nodes in the cluster also communicate with each other over port 9300. If this port is not open, your nodes will not be able to form a cluster.</p> </blockquote>
<p>I need to scale my application so that it won't get banned for exceeding the request rate limit of a site it uses frequently (which allows up to X requests per minute per IP).<br> I meant to use Kubernetes and split the requests between multiple workers, but I saw that all the pods get the same external IP. So what can I do?</p>
<p>I used a Kubernetes DaemonSet to attach a pod to each node, and instead of scaling by changing the deployment, I'm scaling by adding new nodes.</p>
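<p>For reference, a minimal sketch of what such a DaemonSet could look like (the name and image are placeholders for your worker container). Kubernetes then runs exactly one copy of the worker per node, so each replica egresses with its own node IP:</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: scraper-worker            # hypothetical name
spec:
  selector:
    matchLabels:
      app: scraper-worker
  template:
    metadata:
      labels:
        app: scraper-worker
    spec:
      containers:
      - name: worker
        image: my-registry/scraper-worker:latest   # placeholder image
</code></pre> <p>Scaling then happens by resizing the node pool rather than by changing <code>replicas</code>.</p>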
<p>I have an app which uses an in-memory database. I created a StatefulSet with, let's say, 3 replicas and used a PVC for storing the database-related files.</p> <p>I used a LoadBalancer to expose the StatefulSet.</p> <p>So each time traffic hits the load balancer, it is forwarded to a different pod. </p> <p>Is there any way I can control the load balancing to the pod based on some condition (like: if the client IP is X, go to pod Y)?</p>
<p>Given that you have a leader/follower topology, asking to direct traffic to a specific node (the master node) is flawed for a couple of reasons:</p> <ol> <li>What happens when the current leader fails over and there is a new election to select a new leader? </li> <li>Since pods are ephemeral, they should not have major roles to play in production; instead, work with deployments and their replicas. What you are trying to achieve is an anti-pattern.</li> </ol> <p>In any case, if this is what you want, you may want to read about <code>gateways in istio</code>, which can be found <a href="https://istio.io/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">here</a></p>
<p>We have had our database running on Kubernetes cluster (deployed to our private network) in Google cloud for a few months now. Last week we noticed that for some reason the IP address of all underlying nodes (VMs) changed. This caused an outage. We have been using the NodePort configuration of Kubernetes for our service to access our database (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a>). We understand that the IP address of the pods within the VMs are dynamic and will eventually change, however we did not know that the IP address of the actual nodes (VMs) may also change. Is this normal? Does anyone know what can cause a VM IP address change in a Kubernetes cluster?</p>
<p>From the documentation about <a href="https://cloud.google.com/compute/docs/ip-addresses/#ephemeraladdress" rel="nofollow noreferrer">Ephemeral IP Addresses</a> on GCP,</p> <blockquote> <p>When you create an instance or forwarding rule without specifying an IP address, the resource is automatically assigned an ephemeral external IP address. Ephemeral external IP address are released from a resource if you delete the resource. For VM instances, if you stop the instance, the IP address is also released. Once you restart the instance, it is assigned a new ephemeral external IP address.</p> </blockquote> <p>You can assign static external IP addresses to instances, but as @Notauser mentioned, it is not recommended for Kubernetes nodes. This is because you may configure an autoscaler for your instance groups, and the number of nodes can shrink or grow. Also, you would need to reserve a static IP address for each node, which is not recommended. Moreover, you would waste static IP address resources, and if the reserved static IP addresses are not used, you will still be charged for them.</p> <p>Alternatively, you can <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">configure an HTTP load balancer using an Ingress</a> and then reserve a static IP address for your load balancer. Instead of using NodePort you should use ClusterIP-type services and create an Ingress rule forwarding the traffic to those services.</p>
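<p>A rough sketch of that last suggestion, with placeholder names: reserve a global static IP and reference it from the Ingress, so the HTTP(S) load balancer keeps a stable address even if nodes are recreated.</p> <pre><code># reserve a global static IP (the name is arbitrary)
gcloud compute addresses create my-web-static-ip --global
</code></pre> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-ingress
  annotations:
    # tells the GCE ingress controller to use the reserved address
    kubernetes.io/ingress.global-static-ip-name: my-web-static-ip
spec:
  backend:
    serviceName: my-service     # placeholder service name
    servicePort: 80
</code></pre> <p>Note this only helps for HTTP(S) traffic going through the load balancer, not for arbitrary TCP access to the nodes themselves.</p>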
<p>I created a Kubernetes cluster in my Azure resource group using <strong>Azure Kubernetes Service</strong> and logged into the cluster with the resource group credentials through the Azure CLI. I could open the Kubernetes dashboard successfully the first time. After that I deleted my resource group and the other resource groups that are created by default along with the Kubernetes cluster. I then created a resource group and Kubernetes cluster one more time in my Azure account. When I try to open the Kubernetes dashboard this time, I get an error that port 8001 is not open. I tried proxy port-forwarding, but I have no idea how to hit the dashboard URL with a different port.</p> <p>Could anybody suggest how to resolve this issue? </p>
<p>I think you need to delete your Kubernetes config and pull a new one with <code>az aks get-credentials</code> or whatever you are using, because you are probably still using the config from the previous cluster (hint: it won't work because it's a different cluster).</p> <p>You can do that by deleting this file: <code>~/.kube/config</code>, then pull the new one and try <code>kubectl get nodes</code>. If that works, try the port-forward. It is not port related; something is wrong with your config/az CLI.</p> <p>Also, I recall that in the previous question you mentioned you started using RBAC; you need to add a cluster role to the dashboard:</p> <pre><code>kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard </code></pre> <p><a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard#for-rbac-enabled-clusters" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard#for-rbac-enabled-clusters</a></p>
<p>I got the following architecture:</p> <pre><code> [Service] / | \ [Pod1][Pod2][Pod3] </code></pre> <p>Assume the following Pod IPs:</p> <ul> <li>Pod 1: 192.168.0.1</li> <li>Pod 2: 192.168.0.2</li> <li>Pod 3: 192.168.0.3</li> </ul> <p>I am executing a loop like this:</p> <pre><code>for ((i=0;i&lt;10000;i++)); do curl http://someUrlWhichRespondsWithPodIP &gt;&gt; curl.txt; done; </code></pre> <p>This writes the pod IPs 10000 times. I expected a round-robin scheme, but it was not. The file looks similar to this:</p> <pre><code>192.168.0.1 192.168.0.1 192.168.0.3 192.168.0.2 192.168.0.3 192.168.0.1 </code></pre> <p>The service config looks like this:</p> <pre><code>kind: Service metadata: name: service spec: type: NodePort selector: app: service ports: - name: web protocol: TCP port: 31001 nodePort: 31001 targetPort: 8080 </code></pre> <p>Does anyone have an idea what kind of internal load balancing Kubernetes is using?</p>
<p>You are probably using the default <code>iptables</code> mode of kube-proxy, which uses iptables NAT in random mode to implement load balancing. Check out the <code>ipvs</code> support (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs</a>) for a big pile of other modes including round robin.</p>
<p>Is there a different way than <code>kubectl edit</code> to delete an annotation in Kubernetes? </p> <p>I do not like the interactivity of <code>kubectl edit</code>. I prefer something usable in a script. </p>
<p>Use minus <code>-</code> sign at the end of the annotation in <code>kubectl annotate</code>.</p> <p>Example:</p> <p><code>kubectl annotate service shopping-cart prometheus.io/scrape-</code> </p> <p>Removes annotation <code>prometheus.io/scrape</code> from <code>shopping-cart</code> service.</p>
<p>I've had my services set with type <code>NodePort</code>, however in reality external access is not required - they only need to be able to talk to each other.</p> <p>Therefore I presume I should change these to the default <code>ClusterIP</code>, however the question is - how can I continue to access these services during my local development?</p> <p>So when I make the change from <code>NodePort</code> to <code>ClusterIP</code> and then go to <code>minikube service list</code>, it naturally shows <code>no node port</code>. How can I now access them - is there some special endpoint address I can get from somewhere?</p> <p>Thanks.</p>
<p>You would need to access it like any other out-of-cluster case. Generally this means either <code>kubectl port-forward</code> or <code>kubectl proxy</code>, I favor the former though. In general, ClusterIP services are only accessible from inside the cluster, accessing through forwarders is only used for debugging or infrequent access.</p>
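<p>For example, assuming a ClusterIP service named <code>my-service</code> listening on port 80 (names and ports are placeholders):</p> <pre><code># forward local port 8080 to port 80 of the service
kubectl port-forward service/my-service 8080:80

# then, in another terminal
curl http://localhost:8080/
</code></pre>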
<p>I am practicing <code>k8s</code> by following the <a href="https://www.katacoda.com/courses/kubernetes/create-kubernetes-ingress-routes" rel="nofollow noreferrer">ingress chapter</a>. I am using a Google cluster. The specifications are as follows:</p> <p><code>master</code>: <code>1.11.7-gke.4</code><br> <code>node</code>: <code>1.11.7-gke.4</code></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME gke-singh-default-pool-a69fa545-1sm3 Ready &lt;none&gt; 6h v1.11.7-gke.4 10.148.0.46 35.197.128.107 Container-Optimized OS from Google 4.14.89+ docker://17.3.2 gke-singh-default-pool-a69fa545-819z Ready &lt;none&gt; 6h v1.11.7-gke.4 10.148.0.47 35.198.217.71 Container-Optimized OS from Google 4.14.89+ docker://17.3.2 gke-singh-default-pool-a69fa545-djhz Ready &lt;none&gt; 6h v1.11.7-gke.4 10.148.0.45 35.197.159.75 Container-Optimized OS from Google 4.14.89+ docker://17.3.2 </code></pre> <p><code>master endpoint</code>: <code>35.186.148.93</code></p> <p>DNS: <code>singh.hbot.io</code> (master IP)</p> <p>To keep my question short, I post my source code in the gists linked below.</p> <p>Files:</p> <p><a href="https://gist.github.com/elcolie/ca12ae7f242f4f340488aebec7bae7e8" rel="nofollow noreferrer"><code>deployment.yaml</code></a> <a href="https://gist.github.com/elcolie/04400ccc8e1250624980d859bf566879" rel="nofollow noreferrer"><code>ingress.yaml</code></a> <a href="https://gist.github.com/elcolie/19da63c1b0153341a647a032814d4896" rel="nofollow noreferrer"><code>ingress-rules.yaml</code></a></p> <p><strong>Problem:</strong><br> <code>curl http://singh.hbot.io/webapp1</code> timed out</p> <p><strong>Description</strong><br></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get deployment -n nginx-ingress NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-ingress 1 1 1 0 2h </code></pre> <p>The <code>nginx-ingress</code> deployment is not available. </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe deployment -n nginx-ingress Name: nginx-ingress Namespace: nginx-ingress CreationTimestamp: Mon, 04 Mar 2019 15:09:42 +0700 Labels: app=nginx-ingress Annotations: deployment.kubernetes.io/revision: 1 kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-ingress","namespace":"nginx-ingress"},"s... 
Selector: app=nginx-ingress Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 1 max unavailable, 1 max surge Pod Template: Labels: app=nginx-ingress Service Account: nginx-ingress Containers: nginx-ingress: Image: nginx/nginx-ingress:edge Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP Args: -nginx-configmaps=$(POD_NAMESPACE)/nginx-config -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret Environment: POD_NAMESPACE: (v1:metadata.namespace) POD_NAME: (v1:metadata.name) Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: nginx-ingress-77fcd48f4d (1/1 replicas created) Events: &lt;none&gt; </code></pre> <p><strong>pods:</strong><br></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods --all-namespaces=true NAMESPACE NAME READY STATUS RESTARTS AGE default webapp1-7d67d68676-k9hhl 1/1 Running 0 6h default webapp2-64d4844b78-9kln5 1/1 Running 0 6h default webapp3-5b8ff7484d-zvcsf 1/1 Running 0 6h kube-system event-exporter-v0.2.3-85644fcdf-xxflh 2/2 Running 0 6h kube-system fluentd-gcp-scaler-8b674f786-gvv98 1/1 Running 0 6h kube-system fluentd-gcp-v3.2.0-srzc2 2/2 Running 0 6h kube-system fluentd-gcp-v3.2.0-w2z2q 2/2 Running 0 6h kube-system fluentd-gcp-v3.2.0-z7p9l 2/2 Running 0 6h kube-system heapster-v1.6.0-beta.1-5685746c7b-kd4mn 3/3 Running 0 6h kube-system kube-dns-6b98c9c9bf-6p8qr 4/4 Running 0 6h kube-system kube-dns-6b98c9c9bf-pffpt 4/4 Running 0 6h kube-system kube-dns-autoscaler-67c97c87fb-gbgrs 1/1 Running 0 6h kube-system kube-proxy-gke-singh-default-pool-a69fa545-1sm3 1/1 Running 0 6h kube-system kube-proxy-gke-singh-default-pool-a69fa545-819z 1/1 Running 0 6h kube-system kube-proxy-gke-singh-default-pool-a69fa545-djhz 1/1 Running 0 6h kube-system l7-default-backend-7ff48cffd7-trqvx 1/1 Running 0 6h kube-system metrics-server-v0.2.1-fd596d746-bvdfk 2/2 Running 0 6h kube-system tiller-deploy-57c574bfb8-xnmtj 1/1 Running 0 1h nginx-ingress nginx-ingress-77fcd48f4d-rfwbk 0/1 CrashLoopBackOff 35 2h </code></pre> <p><strong>describe pod</strong><br></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe pods -n nginx-ingress Name: nginx-ingress-77fcd48f4d-5rhtv Namespace: nginx-ingress Priority: 0 PriorityClassName: &lt;none&gt; Node: gke-singh-default-pool-a69fa545-djhz/10.148.0.45 Start Time: Mon, 04 Mar 2019 17:55:00 +0700 Labels: app=nginx-ingress pod-template-hash=3397804908 Annotations: &lt;none&gt; Status: Running IP: 10.48.2.10 Controlled By: ReplicaSet/nginx-ingress-77fcd48f4d Containers: nginx-ingress: Container ID: docker://5d3ee9e2bf7a2060ff0a96fdd884a937b77978c137df232dbfd0d3e5de89fe0e Image: nginx/nginx-ingress:edge Image ID: docker-pullable://nginx/nginx-ingress@sha256:16c1c6dde0b904f031d3c173e0b04eb82fe9c4c85cb1e1f83a14d5b56a568250 Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP Args: -nginx-configmaps=$(POD_NAMESPACE)/nginx-config -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 255 Started: Mon, 04 Mar 2019 18:16:33 +0700 Finished: Mon, 04 Mar 2019 18:16:33 +0700 Ready: False Restart Count: 9 Environment: POD_NAMESPACE: nginx-ingress (v1:metadata.namespace) POD_NAME: nginx-ingress-77fcd48f4d-5rhtv (v1:metadata.name) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-zvcwt (ro) 
Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: nginx-ingress-token-zvcwt: Type: Secret (a volume populated by a Secret) SecretName: nginx-ingress-token-zvcwt Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 26m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-77fcd48f4d-5rhtv to gke-singh-default-pool-a69fa545-djhz Normal Created 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Created container Normal Started 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Started container Normal Pulling 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz pulling image "nginx/nginx-ingress:edge" Normal Pulled 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Successfully pulled image "nginx/nginx-ingress:edge" Warning BackOff 62s (x112 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Back-off restarting failed container </code></pre> <p><strong>Fix container terminated</strong><br> Add the following <a href="https://serverfault.com/questions/924243/back-off-restarting-failed-container-error-syncing-pod-in-minikube"><code>command</code></a> to <code>ingress.yaml</code> to prevent the <code>container</code> from finishing its run and being terminated by <code>k8s</code>: <code>command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]</code></p> <p>The <code>Ingress</code> has no <code>IP</code> address from GKE. Let me have a look in detail.</p> <p><strong>describe ingress:</strong><br></p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe ing Name: webapp-ingress Namespace: default Address: Default backend: default-http-backend:80 (10.48.0.8:8080) Rules: Host Path Backends ---- ---- -------- * /webapp1 webapp1-svc:80 (&lt;none&gt;) /webapp2 webapp2-svc:80 (&lt;none&gt;) webapp3-svc:80 (&lt;none&gt;) Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"webapp-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"webapp1-svc","servicePort":80},"path":"/webapp1"},{"backend":{"serviceName":"webapp2-svc","servicePort":80},"path":"/webapp2"},{"backend":{"serviceName":"webapp3-svc","servicePort":80}}]}}]}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer" </code></pre> <p>From this line I got the ultimate solution, thanks to <code>Christian Roy</code>. Thank you very much.</p> <p><strong>Fix the <code>ClusterIP</code></strong> It is the default value, so I have to edit my manifest file to use <code>NodePort</code> as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: webapp1-svc labels: app: webapp1 spec: type: NodePort ports: - port: 80 selector: app: webapp1 </code></pre> <p>And that's it.</p>
<p>The answer is in your question. The describe of your ingress shows the problem.</p> <p>You did <code>kubectl describe ing</code> and the last part of that output was:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer" </code></pre> <p>The important part is:</p> <blockquote> <p>error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"</p> </blockquote> <p><strong>Solution</strong></p> <p>Just change all your services to be of type <code>NodePort</code> and it will work.</p>
<p>I deployed Prometheus and Grafana into my cluster.</p> <p>When I open the dashboards I don't get data for pod CPU usage.</p> <p>When I check the Prometheus UI, it shows pods 0/0 up, however I have many pods running in my cluster.</p> <p>What could be the reason? I have node exporter running on all of the nodes.</p> <p><a href="https://i.stack.imgur.com/HkXbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HkXbL.png" alt="enter image description here"></a></p> <p>I am getting this for kube-state-metrics:</p> <pre><code>I0218 14:52:42.595711 1 builder.go:112] Active collectors: configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,jobs,limitranges,namespaces,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets I0218 14:52:42.595735 1 main.go:208] Starting metrics server: 0.0.0.0:8080 </code></pre> <p>Here is my Prometheus config file: <a href="https://gist.github.com/karthikeayan/41ab3dc4ed0c344bbab89ebcb1d33d16" rel="nofollow noreferrer">https://gist.github.com/karthikeayan/41ab3dc4ed0c344bbab89ebcb1d33d16</a></p> <p>I'm able to hit and get data for:</p> <pre><code>http://localhost:8080/api/v1/nodes/&lt;my_worker_node&gt;/proxy/metrics/cadvisor </code></pre>
<p>As it was mentioned by <a href="https://stackoverflow.com/users/2436847/karthikeayan">karthikeayan</a> in comments:</p> <blockquote> <p>ok, i found something interesting in the <code>values.yaml</code> comments, <strong>prometheus.io/scrape:</strong> <em>Only scrape pods that have a value of true</em>, when i remove this relabel_config in k8s configmap, i got the data in prometheus ui.. unfortunately k8s configmap doesn't have comments, i believe helm will remove the comments before deploying it.</p> </blockquote> <p>And just for clarification:</p> <blockquote> <h3><a href="https://github.com/kubernetes/kube-state-metrics#kube-state-metrics-vs-metrics-server" rel="nofollow noreferrer">kube-state-metrics vs. metrics-server</a></h3> <p>The <strong>metrics-server</strong> is a project that has been inspired by <strong>Heapster</strong> and is implemented to serve the goals of the Kubernetes Monitoring Pipeline. It is a cluster level component which periodically scrapes metrics from all Kubernetes nodes served by Kubelet through Summary API. The metrics are aggregated, stored in memory and served in Metrics API format. The metric-server stores the latest values only and is not responsible for forwarding metrics to third-party destinations.</p> <p><strong>kube-state-metrics</strong> is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it. And just like the metric-server it too is not responsibile for exporting its metrics anywhere.</p> <p>Having kube-state-metrics as a separate project also enables access to these metrics from monitoring systems such as Prometheus.</p> </blockquote>
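<p>For completeness, if you keep that <code>relabel_config</code> instead of removing it, the usual convention is to opt pods in explicitly via annotations on the pod template. The exact keys depend on the relabel rules in your config, but with the common helm-style config a deployment would look roughly like this (names, port and path are placeholders for where your app exposes metrics):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                         # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"   # opt this pod in for scraping
        prometheus.io/port: "8080"     # where the app serves metrics
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # placeholder image
        ports:
        - containerPort: 8080
</code></pre>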
<p>I've been learning Kubernetes recently and just came across this small issue. For some sanity checks, here is the functionality of my grpc app running locally:</p> <pre><code>&gt; docker run -p 8080:8080 -it olamai/simulation:0.0.1 &lt; omitted logs &gt; &gt; curl localhost:8080/v1/todo/all {&quot;api&quot;:&quot;v1&quot;,&quot;toDos&quot;:[{}]} </code></pre> <p>So it works! All I want to do now is deploy it in Minikube and expose the port so I can make calls to it. My end goal is to deploy it to a GKE or Azure cluster and make calls to it from there (again, just to learn and get the hang of everything.)<br /> <a href="https://github.com/OlamAI/Simulation/blob/v2/k8s/deployment.yaml" rel="nofollow noreferrer">Here is the yaml I'm using to deploy to minikube</a></p> <p>And this is what I run to deploy it on minikube</p> <pre><code>&gt; kubectl create -f deployment.yaml </code></pre> <p>I then run this to get the url</p> <pre><code>&gt; minikube service sim-service --url http://192.168.99.100:30588 </code></pre> <p>But this is what happens when I make a call to it</p> <pre><code>&gt; curl http://192.168.99.100:30588/v1/todo/all curl: (7) Failed to connect to 192.168.99.100 port 30588: Connection refused </code></pre> <p>What am I doing wrong here?</p> <p>EDIT: I figured it out, and you should be able to see the update in the linked file. I had pull policy set to Never so it was out of date 🤦</p> <p>I have a new question now... I'm now able to just create the deployment in minikube (no NodePort) and still make calls to the api... shouldn't the deployment need a NodePort service to expose ports?</p>
<p>I checked your yaml file and it works just fine. However, I realized that you put 2 types for your service, <code>LoadBalancer</code> and also <code>NodePort</code>, which is not needed. If you check the definition of LoadBalancer in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">this documentation</a>, you will see: </p> <blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.</p> </blockquote> <p>As for your next question: you probably put <code>type: LoadBalancer</code> in your deployment yaml file, and that's why you are able to see a NodePort anyway. If you put <code>type: ClusterIP</code> in your yaml, then the service will be exposed only within the cluster, and you won't be able to reach your service from outside the cluster.</p> <p>From the same <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType</p> </blockquote>
<pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Generated 30s cert-manager Generated new private key Normal OrderCreated 30s cert-manager Created Order resource "letsencrypt-prod-2527702610" Warning FailedOrder 27s cert-manager Order "letsencrypt-prod-2527702610" failed. Waiting 1h0m0s before retrying issuance. </code></pre> <p>I am trying to change the email in the cluster issuer; I have already generated one certificate with cert-manager in this K8s cluster.</p> <p>The NGINX ingress controller and cert-manager are both running on the cluster, so why am I getting this error? </p> <blockquote> <p>Yesterday I tried a staging certificate and it worked, but production is not working.</p> </blockquote>
<p>I face a similar issue when I rapidly delete and then install my application using helm. You probably run into the following rate limit:</p> <blockquote> <p>We also have a Duplicate Certificate limit of 5 certificates per week. A certificate is considered a duplicate of an earlier certificate if they contain the exact same set of hostnames, ignoring capitalization and ordering of hostnames. For instance, if you requested a certificate for the names [www.example.com, example.com], you could request four more certificates for [www.example.com, example.com] during the week. If you changed the set of names by adding [blog.example.com], you would be able to request additional certificates.</p> </blockquote> <p><a href="https://letsencrypt.org/docs/rate-limits/" rel="nofollow noreferrer">Source</a></p>
<p>I am starting to experiment with Oauth2 authorisation for a Kubernetes cluster. </p> <p>I have found a good Oauth2 identity provider using <a href="https://github.com/hortonworks/docker-cloudbreak-uaa" rel="nofollow noreferrer">UAA</a></p> <p>My original intention was to deploy this into a Kubernetes cluster, and then allow it to provide authentication over that cluster. This would provide a single sign on solution hosted in the cloud, and enable that solution to manage Kubernetes access as well as access to the applications running on my cluster. </p> <p>However, when thinking this solution through, there would seem to be some edge cases where this kind of configuration could be catastrophic. For instance if my cluster stops then I do not think I will be able to restart that cluster, as the Oauth2 provider would not be running, and thus I could not be authenticated to perform any restart operations.</p> <ul> <li>Has anybody else encountered this conundrum ? </li> <li>Is this a real risk ? </li> <li>Is there a 'standard' approach to circumvent this issue ? </li> </ul> <p>Many Thanks for taking the time to read this !</p>
<p>The use of UAA involves two procedures, authentication and authorization, where the latter allows for performing certain actions within a cluster. They are used through the kubectl command-line tool. </p> <p>One can use <strong>2 existing modules of authorization</strong> (<a href="https://kubernetes.io/docs/reference/access-authn-authz/abac/" rel="nofollow noreferrer">ABAC</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a>). <a href="https://www.altoros.com/blog/configuring-uaa-to-provide-a-single-entry-point-for-kubernetes-and-cloud-foundry/" rel="nofollow noreferrer">Here</a> you can find a side-by-side comparison of these two options, where the author vouched for the RBAC mode as it "doesn’t require the API server to be restarted every time the policy files get updated". </p> <p>If I understood your question right, <a href="https://www.altoros.com/blog/configuring-uaa-to-provide-a-single-entry-point-for-kubernetes-and-cloud-foundry/" rel="nofollow noreferrer">this article</a> may be of help. </p>
<p>How to configure routing between Pods on multiple Azure Kubernetes clusters?</p> <p>Something similar to <code>ip-alias/vpc-native</code> on Google Cloud</p>
<p>In AKS, I think you can create the AKS clusters in different subnets of the same VNet with advanced networking. For more details about the network, see <a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-network" rel="nofollow noreferrer">Choose the appropriate network model</a>. But it's not a perfect solution and there are some limitations; for example, the AKS clusters should be in the same region as the VNet.</p> <p>You can take a look at <a href="https://www.cockroachlabs.com/blog/experience-report-running-across-multiple-kubernetes-clusters/" rel="nofollow noreferrer">Gotchas &amp; Solutions Running a Distributed System Across Kubernetes Clusters</a>. As it says, it's hard to communicate across regions, and nothing has yet solved the problem of running a distributed system that spans multiple clusters. It’s still a very hard experience that isn’t really documented.</p> <p>So it may be a long wait for a perfect solution.</p>
<p>So I currently have a self-managed certificate, but I want to switch to a google-managed certificate. The google docs for it say to keep the old certificate active while the new one is provisioned. When I try to create a google-managed certificate for the same ingress IP, I get the following error: <code>Invalid value for field 'resource.IPAddress': 'xx.xxx.xx.xx'. Specified IP address is in-use and would result in a conflict.</code></p> <p>How can I keep the old certificate active, like it tells me to, if it won't let me start provisioning a certificate for the same ingress?</p>
<p>This can happen if 2 load balancers are sharing the same IP address (<a href="https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/blob/master/troubleshooting.md" rel="nofollow noreferrer">source</a>). Most likely you would have to detach that IP, or add another IP and then swap once the certificate has been provisioned. It's difficult to tell from the error message alone, without knowing which command had been issued.</p>
<p>I have deployed a cluster on Azure using AKS-engine on an existing VNET. My group has <code>Owner</code> permission over the resources. Now my load balancer service is not getting a public IP; it hangs in a <code>pending</code> state forever.</p> <pre><code>kubectl describe svc zevac-frontend-lb Name: zevac-frontend-lb Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"zevac-frontend-lb","namespace":"default"},"spec":{"loadBalancerIP":"52.172.46.... Selector: app=zevac-frontend Type: LoadBalancer IP: 10.0.245.52 IP: 52.172.46.210 Port: &lt;unset&gt; 80/TCP TargetPort: 80/TCP NodePort: &lt;unset&gt; 31723/TCP Endpoints: 10.0.3.11:80,10.0.3.45:80 Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning CreatingLoadBalancerFailed 6m (x9 over 35m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/zevac-frontend-lb: timed out waiting for the condition Normal EnsuringLoadBalancer 1m (x10 over 37m) service-controller Ensuring load balancer </code></pre>
<p>In your issue, I think there are two possible reasons. One is that your public IP is not in the same region as your AKS cluster; I think that is the most likely reason. The other one is that your AKS cluster does not have the permission to perform that action.</p> <p>When you use a static public IP, there are also two cases: one where it is in the node resource group and one where it is outside the node resource group. When you use a public IP outside the node resource group, your AKS cluster should have at least the "Network Contributor" permission on the group that the public IP is in. See <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip#use-a-static-ip-address-outside-of-the-node-resource-group" rel="nofollow noreferrer">Use a static IP address outside of the node resource group</a>.</p>
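<p>If it turns out to be the permission case, granting the role could look roughly like this; the service principal client ID, subscription and resource group are placeholders for your own values:</p> <pre><code>az role assignment create \
    --assignee &lt;aks-service-principal-client-id&gt; \
    --role "Network Contributor" \
    --scope /subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;ip-resource-group&gt;
</code></pre>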
<p>Team, I have the below cluster role on Kubernetes that allows access to everything, but I want to restrict node-level commands and allow all the rest.</p> <p>What should I modify below? Basically, the user should be able to run</p> <pre><code>kubectl get all --all-namespaces </code></pre> <p>but the nodes info should NOT be displayed.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: &quot;true&quot; labels: kubernetes.io/bootstrapping: rbac-defaults name: cluster-admin-test rules: - apiGroups: - '*' resources: - '*' verbs: - '*' - nonResourceURLs: - '*' verbs: - '*' </code></pre>
<p>Rules are purely additive, which means that you cannot restrict access with them.</p> <p>Thus, you will need to list all accessible <a href="https://kubernetes.io/docs/reference/kubectl/overview/#resource-types" rel="noreferrer">resources</a> except "nodes", with the appropriate <a href="https://kubernetes.io/docs/reference/kubectl/overview/#operations" rel="noreferrer">operations</a>.</p> <p>For example:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: cluster-admin rules: - apiGroups: [""] resources: ["pods","services","namespaces","deployments","jobs"] verbs: ["get", "watch", "list"] </code></pre> <p>Also, it is highly recommended not to change the <strong>cluster-admin</strong> role. It is worth creating a new role and assigning users to it.</p>
<p>I'm using minikube to test kubernetes on latest MacOS.</p> <p>Here are my relevant YAMLs:</p> <p><strong>namespace.yml</strong></p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: micro labels: name: micro </code></pre> <p><strong>deployment.yml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: adderservice spec: replicas: 1 template: metadata: labels: run: adderservice spec: containers: - name: adderservice image: jeromesoung/adderservice:0.0.1 ports: - containerPort: 8080 </code></pre> <p><strong>service.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: adderservice labels: run: adderservice spec: ports: - port: 8080 name: main protocol: TCP targetPort: 8080 selector: run: adderservice type: NodePort </code></pre> <p>After running <code>minikube start</code>, the steps I took to deploy is as follows:</p> <ol> <li><p><code>kubectl create -f namespace.yml</code> to create the namespace</p></li> <li><p><code>kubectl config set-context minikube --namespace=micro</code></p></li> <li><p><code>kubectl create -f deployment.yml</code></p></li> <li><p><code>kubectl create -f service.yml</code></p></li> </ol> <p>Then, I get the NodeIP and NodePort with below commands:</p> <ol> <li><code>kubectl get services</code> to get the NodePort</li> </ol> <pre><code>$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE adderservice NodePort 10.99.155.255 &lt;none&gt; 8080:30981/TCP 21h </code></pre> <ol start="2"> <li><code>minikube ip</code> to get the nodeIP</li> </ol> <pre><code>$ minikube ip 192.168.99.103 </code></pre> <p>But when I do curl, I always get <strong>Connection Refused</strong> like this:</p> <pre><code>$ curl http://192.168.99.103:30981/add/1/2 curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused </code></pre> <p>So I checked node, pod, deployment and endpoint as follows:</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready master 23h v1.13.3 $ kubectl get pods NAME READY STATUS RESTARTS AGE adderservice-5b567df95f-9rrln 1/1 Running 0 23h $ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE adderservice 1 1 1 1 23h $ kubectl get endpoints NAME ENDPOINTS AGE adderservice 172.17.0.5:8080 21h </code></pre> <p>I also checked service list from minikube with:</p> <pre><code>$ minikube service -n micro adderservice --url http://192.168.99.103:30981 </code></pre> <p>I've read many posts regarding accessing k8s service via NodePorts. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is that I'm using a custom namespace. Will this cause the access issue?</p> <p>I know namespace will change the DNS, so, to be complete, I ran below commands also:</p> <pre><code>$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default Server: 10.96.0.10 Address: 10.96.0.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.96.0.1 $ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro Server: 10.96.0.10 Address: 10.96.0.10#53 Non-authoritative answer: Name: kubernetes.micro Address: 198.105.244.130 Name: kubernetes.micro Address: 104.239.207.44 </code></pre> <p>Could anyone help me out? Thank you.</p>
<p>The error <code>Connection Refused</code> mostly means that the application inside the container does not accept requests on the targeted interface or is not mapped to the expected ports.</p> <p>Things you need to be aware of:</p> <ul> <li>Make sure that your application binds to <code>0.0.0.0</code> so it can receive requests from outside the container, either externally (as in publicly) or through other containers.</li> <li>Make sure that your application is actually listening on the <code>containerPort</code> and <code>targetPort</code> as expected.</li> </ul> <p>In your case you have to make sure that <code>ADDERSERVICE_SERVICE_HOST</code> equals <code>0.0.0.0</code> and <code>ADDERSERVICE_SERVICE_PORT</code> equals <code>8080</code>, which should be the same value as <code>targetPort</code> in <code>service.yml</code> and <code>containerPort</code> in <code>deployment.yml</code>.</p>
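<p>A quick way to check the second point, using the endpoint IP you already got from <code>kubectl get endpoints</code> (it will differ on your cluster), is to hit the pod directly from a throwaway pod inside the cluster. If this works but the NodePort does not, the problem is in the service/NodePort layer rather than in the app:</p> <pre><code>kubectl run -it --rm debug --image=busybox --restart=Never -- \
    wget -qO- http://172.17.0.5:8080/add/1/2
</code></pre>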
<p>I want to update my deployment on kubernetes with a new image which exists on 'eu.gcr.io' (same project), I have done this before. But now the pods fail to pull the image because they are not authorized to do so. This is the error that we get in the pod logs.</p> <pre><code>Failed to pull image "eu.gcr.io/my-gcp-project/my-image:v1.009": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. </code></pre> <p>The service account on the cluster has <strong>kubernetes admin</strong> and <strong>storage admin</strong> roles which should be <a href="https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry" rel="noreferrer">sufficient</a>. But even when I make the service account <strong>project editor</strong> (for debugging sake) it still doesn't work (same error).</p> <p>I have also tried creating a fresh new cluster (default settings) and apply my deployment there, but then I got the exact same issue.</p> <p>I'm not sure what I can try anymore.</p> <p>Any help or suggestions are greatly appreciated.</p> <p>EDIT:</p> <p>I just found out that I can still pull and deploy older images. But every new image I build cannot be pulled by the kubernetes pods.</p>
<p>According to your description: </p> <blockquote> <p>I just found out that I can still pull and deploy older images. But every new image I build cannot be pulled by the kubernetes pods.</p> </blockquote> <p>I assume you can pull the Docker image with the docker command, but not through kubectl:</p> <pre><code>docker pull eu.gcr.io/my-gcp-project/my-image:v1.009 </code></pre> <p>So, referring to the article <a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="noreferrer">Using Google Container Registry with Kubernetes</a>, the authentication is different between pulling a Docker image with <strong>docker pull</strong> and with <strong>kubectl</strong>. </p> <p>Did you give an access token to GKE?</p> <pre><code>kubectl create secret docker-registry gcr-access-token \ --docker-server=eu.gcr.io \ --docker-username=oauth2accesstoken \ --docker-password="$(gcloud auth print-access-token)" \ [email protected] </code></pre>
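<p>If you go the access-token route, the secret then has to be referenced from the pod spec (or attached to the service account used by the pods), for example:</p> <pre><code>spec:
  containers:
  - name: my-app
    image: eu.gcr.io/my-gcp-project/my-image:v1.009
  imagePullSecrets:
  - name: gcr-access-token
</code></pre> <p>Keep in mind that a token from <code>gcloud auth print-access-token</code> is short-lived, so for anything long-running a service account key is normally used as the password instead.</p>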
<p>I'm using the official <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="noreferrer">stable/prometheus-operator</a> chart do deploy Prometheus with helm.</p> <p>It's working good so far, except for the annoying <code>CPUThrottlingHigh</code> alert that is firing for many pods (including the own Prometheus' <a href="https://github.com/coreos/prometheus-operator/issues/2171" rel="noreferrer">config-reloaders containers</a>). This alert is <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108" rel="noreferrer">currently under discussion</a>, and I want to silence its notifications for now.</p> <p>The Alertmanager has a <a href="https://prometheus.io/docs/alerting/alertmanager/#silences" rel="noreferrer">silence feature</a>, but it is web-based:</p> <blockquote> <p>Silences are a straightforward way to simply mute alerts for a given time. Silences are configured in the web interface of the Alertmanager.</p> </blockquote> <p><strong>There is a way to mute notifications from <code>CPUThrottlingHigh</code> using a config file?</strong></p>
<p>One option is to route alerts you want silenced to a "null" receiver. In <code>alertmanager.yaml</code>:</p> <pre><code>route: # Other settings... group_wait: 0s group_interval: 1m repeat_interval: 1h # Default receiver. receiver: "null" routes: # continue defaults to false, so the first match will end routing. - match: # This was previously named DeadMansSwitch alertname: Watchdog receiver: "null" - match: alertname: CPUThrottlingHigh receiver: "null" - receiver: "regular_alert_receiver" receivers: - name: "null" - name: regular_alert_receiver &lt;snip&gt; </code></pre>
<h1>Answer My Own Question</h1> <p>I recently upgraded to Airflow 1.10 which introduced the <code>KubernetesPodOperator</code> however whenever I use it I see the following error in the webserver: <code>No module named 'kubernetes'</code>. Why does this happen and how can I fix it?</p>
<p>This happens because the kubernetes library isn't installed with vanilla airflow. You can fix this by running <code>pip install "apache-airflow[kubernetes]"</code>. </p>
<p>I'm trying to deploy pods on nodes which have labels like <strong>es-node: data-1, es-node: data-2, es-node: data-3</strong>. I can list all the labels in the pod's nodeAffinity spec, but I just want to use a single label entry such as <strong>es-node: data-*</strong> so that the pods get deployed on all of those nodes. Is this even possible?</p>
<p>I don't think you can specify regular expressions on label selectors but you can just add an additional label, let's say <code>es-node-type: data</code> and put that as a label selector for your deployment or stateful set.</p>
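<p>As a sketch, assuming you add a common label such as <code>es-node-type=data</code> to the relevant nodes (the node names below are placeholders):</p> <pre><code># label every data node with the shared key/value
kubectl label nodes node-1 node-2 node-3 es-node-type=data
</code></pre> <p>and then match only that one label in the pod spec:</p> <pre><code>    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: es-node-type
                operator: In
                values:
                - data
</code></pre>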
<p>I have a website:</p> <p>xyz.com -> Standalone Angular4 APP (Container)</p> <p>xyz.com/api/ -> Standalone Rails API APP (Container - ruby-2.4.0)</p> <p>xyz.com/api/sidekiq -> Mounted to Rails API routes.</p> <p>When I try to access /api/sidekiq it loads the data, but the assets are still pointing to xyz.com/sidekiq instead of xyz.com/api/sidekiq. And when I click retry, it tries to submit to xyz.com/sidekiq instead of xyz.com/api/sidekiq. Is there a way to force Sidekiq to use a different assets path and base URL path?</p> <p>Is there a way to get the Sidekiq web view as a separate standalone application container?</p> <p>I am using Kubernetes as my orchestration tool. I am using Nginx Ingress to do path-based routing. When I go to xyz.com/api/sidekiq it loads the data, but the URLs and assets point to xyz.com/sidekiq, which is why I can't retry a Sidekiq job, as the Sidekiq web UI doesn't work.</p>
<p>You can have a look at the thread I raised with sidekiq team "<a href="https://github.com/mperham/sidekiq/issues/3815" rel="nofollow noreferrer">Sidekiq Route Mount doesn't rewrite URL in Rails API</a>" that helped me understand how sidekiq <code>mount "/sidekiq"</code> works.</p> <p>I have a simple work around for this problem and found it to be the best way to resolve my path based routing issue. You can do this using <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotations in your Ingress configurations by writing 2 Ingress rules as shown below: </p> <ol> <li><p>Routing the "<strong>/</strong>" path to the root path of the rails-app-service ( <code>xyz.com/* -&gt; rails-app-service/*</code> )</p></li> <li><p>Routing the "<strong>/sidekiq</strong>" path to the sidekiq web mount path of the rails-app-service ( <code>xyz.com/sidekiq/* -&gt; rails-app-service/sidekiq/*</code> )</p></li> </ol> <p>To know more about Nginx Ingress Controller rewrite target annotation check the official repository for details: <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite#rewrite-target" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite#rewrite-target</a></p> <p>This is my final code snippet with the solution: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: "rails-app-ingress" annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: xyz.com http: paths: - path: / backend: serviceName: "rails-app-service" servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: "rails-sidekiq-ingress" annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: /sidekiq spec: rules: - host: xyz.com http: paths: - path: /sidekiq backend: serviceName: "rails-app-service" servicePort: 80 </code></pre>
<p>Why is there no easy way to get notifications if a pod becomes unhealthy and is restarted?</p> <p>To me, it suggests I shouldn't care that a pod was restarted, but why not?</p>
<p>If a pod/container crashes for some reason, Kubernetes is supposed to provide the reliability/availability that it will be started somewhere else in the cluster. Having said that, you probably want warnings and alerts (e.g. if the pod goes into a <code>Crashloopbackoff</code>).</p> <p>Although you can write your own tool, you can also watch for specific events in your cluster and then alert/warn on those using some of these tools:</p> <ul> <li><a href="https://github.com/bitnami-labs/kubewatch" rel="noreferrer">kubewatch</a></li> <li><a href="https://github.com/wongnai/kube-slack" rel="noreferrer">kube-slack</a> (Slack tool).</li> <li>The most popular K8s monitoring tool: <a href="https://prometheus.org" rel="noreferrer">prometheus</a>.</li> <li>A paid tool like <a href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/" rel="noreferrer">Sysdig</a>.</li> </ul>
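<p>If you want something quick before wiring up one of those tools, you can also watch the relevant events directly; restarts typically show up as <code>BackOff</code>/<code>Unhealthy</code> warning events:</p> <pre><code>kubectl get events --all-namespaces --field-selector type=Warning --watch
</code></pre>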
<p>We have a metric <code>container_memory_rss</code> from cadvisor and a metric <code>kube_pod_container_resource_requests_memory_bytes</code> from Kubernetes itself.</p> <p>Is it possible to join the metrics against one another so that we can directly compare the ratio of both metrics? More specifically I'd like to basically 'join' the following metrics:</p> <pre><code>sum(kube_pod_container_resource_requests_memory_bytes) by (pod, namespace) sum(container_memory_rss) by (container_label_io_kubernetes_pod_name, container_label_io_kubernetes_pod_namespace) </code></pre> <p>The 'join' would be on pod name and namespace. </p> <p>Can PromQL do this, given that the label names vary?</p>
<p>You can use the <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#label_replace" rel="nofollow noreferrer">label_replace</a> function to modify labels on one side of the expression such that they match:</p> <pre><code> sum by (pod_name, namespace) (container_memory_rss) / sum by (pod_name, namespace) ( label_replace( kube_pod_container_resource_requests_memory_bytes, "pod_name", "$1", "pod", "(.*)" ) ) </code></pre>
<p>I'm using Dask deployed using Helm on a Kubernetes cluster in Kubernetes Engine on GCP. My current cluster set up has 5 nodes with each node having 8 cpus, 30 gb:</p> <p>I ran the notebook named <code>05-nyc-taxi.ipynb</code>, which resulted in workers getting killed.</p> <p>When I restarted the Dask client it shows that I now have zero workers and zero memory:</p> <p><a href="https://i.stack.imgur.com/HmfyA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HmfyA.png" alt="enter image description here"></a></p> <p>However, when I run <code>kubectl get services</code> and <code>kubectl get pods</code>, it shows that my pods and services are running:</p> <p><a href="https://i.stack.imgur.com/dJKww.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dJKww.png" alt="enter image description here"></a></p> <p>Any idea why this might be the case?</p>
<p>When you restart the client, it kills all the workers and starts making new ones. That process is asynchronous, but the rendering of the client object happens immediately - so there are no workers at that moment. You could render the client object again (and again) later:</p> <pre><code>In[]: client </code></pre> <p>Or check the dashboard.</p> <p>Or, better, you could render the cluster object itself which, so long as you have jupyter widgets installed in the environment, will update itself in real-time. If you didn't happen to assign your cluster object before, it will also be available as <code>client.cluster</code>.</p> <p>btw: <em>why</em> are you having to restart the cluster like this?</p>
<p>I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to directly point your domains to the node IP address, instead of an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created using an Instance Group and an immutable Instance Template.</p> <p><strong>I need to open port 443 on my nodes, how do I do that with Kubernetes or GCE? Preferably in an update resistant way.</strong></p> <p><a href="https://i.stack.imgur.com/cBa5b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBa5b.png" alt="enter image description here"></a></p> <p>Related github question: <a href="https://github.com/nginxinc/kubernetes-ingress/issues/502" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/issues/502</a></p>
<p>Using port 443 on your Kubernetes nodes is not a standard practice. If you look at the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">docs</a> you can see the kube-apiserver option <code>--service-node-port-range</code>, which defaults to <code>30000-32767</code>. You could change it to <code>443-32767</code> or something. Note that every port under <code>1024</code> is restricted to <a href="https://unix.stackexchange.com/questions/16564/why-are-the-first-1024-ports-restricted-to-the-root-user-only">root</a>.</p> <p>In summary, it's not a good idea/practice to run your Kubernetes services on port <code>443</code>. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is using a cloud load balancer, but you'd like to avoid that due to costs.</p>
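<p>A minimal sketch of that external-proxy approach, assuming a NodePort of 30443 and placeholder node IPs, using the nginx stream module to pass raw TLS through to the nodes:</p> <pre><code># nginx.conf on the external proxy box (requires the stream module)
stream {
    upstream k8s_nodes {
        server 10.0.0.11:30443;   # node IPs and NodePort are placeholders
        server 10.0.0.12:30443;
    }
    server {
        listen 443;
        proxy_pass k8s_nodes;
    }
}
</code></pre>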
<p>I can use <code>terraform</code> to deploy a <code>Kubernetes</code> cluster in <code>GKE</code>.</p> <p>Then I have set up the provider for <code>Kubernetes</code> as follows:</p> <pre><code>provider "kubernetes" { host = "${data.google_container_cluster.primary.endpoint}" client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}" client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}" cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}" } </code></pre> <p>By default, <code>terraform</code> interacts with <code>Kubernetes</code> with the user <code>client</code>, which has no power to create (for example) deployments. So I get this error when I try to apply my changes with <code>terraform</code>:</p> <pre><code>Error: Error applying plan: 1 error(s) occurred: * kubernetes_deployment.foo: 1 error(s) occurred: * kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default" </code></pre> <p>I don't know how should I proceed now, how should I give this permissions to the <code>client</code> user?</p> <p>If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for <code>HTTP</code> communication with the cluster, which is insecure if it is done through the internet.</p> <pre><code>username = "${data.google_container_cluster.primary.master_auth.0.username}" password = "${data.google_container_cluster.primary.master_auth.0.password}" </code></pre> <p>Is there any other better way of doing so?</p>
<ul> <li>You can use the service account that is running terraform:</li> </ul> <pre><code>data "google_client_config" "default" {} provider "kubernetes" { host = "${google_container_cluster.default.endpoint}" token = "${data.google_client_config.default.access_token}" cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}" load_config_file = false } </code></pre> <p>OR </p> <ul> <li>Give permissions to the default <code>client</code> user.</li> <li>But you need valid authentication on the GKE cluster provider to run this :/ <em>oops, circular dependency here</em> </li> </ul> <pre><code>resource "kubernetes_cluster_role_binding" "default" { metadata { name = "client-certificate-cluster-admin" } role_ref { api_group = "rbac.authorization.k8s.io" kind = "ClusterRole" name = "cluster-admin" } subject { kind = "User" name = "client" api_group = "rbac.authorization.k8s.io" } subject { kind = "ServiceAccount" name = "default" namespace = "kube-system" } subject { kind = "Group" name = "system:masters" api_group = "rbac.authorization.k8s.io" } } </code></pre>
<p>I've been looking into Kubernetes for Docker orchestration, and one of my use cases is to have multiple containers spawned on different nodes, where each container needs read access to a list of very large files (20G+).</p> <p>Because the files can be updated at times, we will be using a block volume. I'm running the cluster on ESXi, so we are limited to open-source and non-cloud solutions...</p> <p>From reading <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a>, it seems like one of the options would be Portworx. I'm wondering if there are any better options out there? </p> <p>I assume this is a pretty common use case. I'm very new to Kubernetes so any help/advice will be greatly appreciated!</p>
<p>First of all, on volumes: there are many options, and it also depends on where your cluster is hosted (on-prem or a managed cloud provider?). Managed cloud providers usually offer either their own easy way of mounting their block storage (e.g. Azure Storage on Azure, S3 on AWS) or third-party driver solutions. Things to know and note here:</p> <ul> <li>A plain Kubernetes volume, as you mentioned, outlives the container but only within the Pod; when a Pod ceases to exist, the volume ceases to exist too. That means it cannot be shared across different Pods on different nodes - it has to be used within a specific Pod.</li> <li>If the option is available, consider the managed cloud provider's default block mount solution. It is less painful to integrate and also avoids persistence issues.</li> <li>Finally, from a design perspective, this seems to go against the microservice/docker/containerization pattern. You may want to revisit your original need and goals, since Pods are supposed to be created on the fly and to be as stateless as possible - created and recreated whenever needed, whether because of issues or for scalability.</li> </ul> <p>Hope this helps</p>
<p>I've updated the SSL certificate for my Kubernetes Ingress services, but I don't know how to restart the instances to use the updated cert secret without manually deleting and restarting the Ingress instances. That isn't ideal because of the number of ingresses that are making use of that specific cert (all sitting on the same TLD). How do I force it to use the updated secret?</p>
<p>You shouldn't need to delete the Ingress object to use the updated TLS Secret.</p> <p>GKE Ingress controller (<a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce</a>) automatically picks up the updated Secret resource and updates it. (Open an issue on the repo if it doesn't).</p> <p>If you're not seeing the changes in ~10-20 minutes, I recommend editing the Ingress object trivially (for example, add a label or an annotation) so that the ingress controller picks up the object again and evaluates goal state vs the current state, then goes ahead to make the changes (update the TLS secret).</p>
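<p>If you want to nudge it along, a couple of hedged one-liners (the Secret and Ingress names here are placeholders):</p> <pre><code># re-create the TLS secret in place with the renewed cert/key
kubectl create secret tls my-tld-cert --cert=fullchain.pem --key=privkey.pem --dry-run -o yaml | kubectl apply -f -

# touch the Ingress trivially so the controller re-evaluates it
kubectl annotate ingress my-ingress last-cert-rotation="$(date +%s)" --overwrite
</code></pre>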
<p>I'm curious about how ConfigMap and Deployment works in kubernetes. </p> <p>I wanted to use the values in ConfigMap as arguments to my deployment pods. I've tried this with different images and found different behaviours when passing ConfigMap value as command arguments between containers that use <code>sh</code> as entry point and other commands as entry point.</p> <p>Here is an example configuration to better illustrate my case:</p> <p><strong>configmap.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: app-envs data: key: "value" BUCKET_NAME: "gs://bucket-name/" OUTPUT_PATH: "/data" </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment spec: template: containers: - name: firstContainer image: busybox command: ["sh"] args: - c - | echo $key echo ${BUCKET_NAME} echo $(OUTPUT_PATH) envFrom: - configMapRef: name: app-envs - name: secondContainer image: someImage args: [ "cmd", "${BUCKET_NAME}", "${OUTPUT_DATA}", "${key}" ] envFrom: - configMapRef: name: app-envs - name: thirdContainer image: someImage args: [ "cmd", "$(BUCKET_NAME)", "$(OUTPUT_DATA)", "$(key)" ] envFrom: - configMapRef: name: app-envs </code></pre> <p><code>someImage</code> is a docker image, which has certain bash script as its entry point that prints the environment values.</p> <hr> <p>The <code>firstContainer</code> and <code>thirdContainer</code> are able to print all the ConfigMap values correctly, meaning, all <code>value</code>, <code>gs://bucket-name/</code> and <code>/data</code> are received as input arguments.</p> <p>However, the <code>secondContainer</code> is unable to print these values correctly. I tried to echo the received arguments, and it turned out that it receives:</p> <blockquote> <p><code>${BUCKET_NAME}</code>, <code>${OUTPUT_DATA}</code>, and <code>${key}</code> literally as input arguments instead of the actual values from ConfigMaps.</p> </blockquote> <p>So after observing the above behaviours, here are my questions:</p> <ol> <li><p>What's the relationship between deployment and ConfigMap? Is there some kind of order which specify how resources are created in a k8s pod/deployment (e.g., whether ConfigMap is loaded first, then the volumeMounts, and then the container or some kind of orderings)?</p></li> <li><p>What's the difference between <code>${}</code> and <code>$()</code>? Why does the ConfigMap values are received as literal strings when using <code>${}</code> to a container that has different entry point than <code>bash</code> or <code>sh</code>?</p></li> </ol> <p>Thank you. Your help would be appreciated.</p>
<p>Kubernetes only directly understands environment variable references in parentheses <code>$(VAR)</code>; see for example the note in <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="nofollow noreferrer">Define a Command and Arguments for a Container</a>.</p> <pre><code>args: [ "cmd", "$(BUCKET_NAME)", "$(OUTPUT_DATA)", "$(key)" ] </code></pre> <p>Kubernetes itself knows what the environment variables are and does the substitution, so the container is launched as <code>cmd gs://bucket-name/ /data key</code>.</p> <pre><code>command: ["sh"] args: - c - | echo $key echo ${BUCKET_NAME} echo $(OUTPUT_PATH) </code></pre> <p>Kubernetes expands <code>$(OUTPUT_PATH)</code> but doesn't understand any other form of braces, so the other strings get sent on as-is. Since you're explicitly running this through a shell, though, both <code>$key</code> and <code>${BUCKET_NAME}</code> are standard shell variable expansions, so the shell expands these values.</p> <pre><code>args: [ "cmd", "${BUCKET_NAME}", "${OUTPUT_DATA}", "${key}" ] </code></pre> <p>Kubernetes doesn't expand things in curly braces, and there's no shell or anything else to expand these variables, so the variable strings (and not their contents) get passed along as-is.</p>
<p>I have successfully installed a Kubernetes cluster and can verify this by:</p> <pre><code>C:\windows\system32&gt;kubectl cluster-info Kubernetes master is running at https://&lt;ip&gt;:&lt;port&gt; KubeDNS is running at https://&lt;ip&gt;:&lt;port&gt;/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>Then I am trying to run the SparkPi with the Spark I downloaded from <a href="https://spark.apache.org/downloads.html" rel="nofollow noreferrer">https://spark.apache.org/downloads.html</a> . </p> <pre><code>spark-submit --master k8s://https://192.168.99.100:8443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=2 --conf spark.kubernetes.container.image=gettyimages/spark c:\users\&lt;username&gt;\Desktop\spark-2.4.0-bin-hadoop2.7\examples\jars\spark-examples_2.11-2.4.0.jar </code></pre> <p>I am getting this error:</p> <pre><code>Error: Master must either be yarn or start with spark, mesos, local Run with --help for usage help or --verbose for debug output </code></pre> <p>I tried versions 2.4.0 and 2.3.3. I also tried </p> <pre><code>spark-submit --help </code></pre> <p>to see what I can get regarding the <strong>--master</strong> property. This is what I get:</p> <pre><code>--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local. </code></pre> <p>According to the documentation [<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html]" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html]</a> on running Spark workloads in Kubernetes, spark-submit does not even seem to recognise the k8s value for master. [ included in possible Spark masters: <a href="https://spark.apache.org/docs/latest/submitting-applications.html#master-urls" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/submitting-applications.html#master-urls</a> ]</p> <p>Any ideas? What would I be missing here?</p> <p>Thanks</p>
<p>The issue was that my CMD was picking up a previous spark-submit version I had installed (2.2), even though I was running the command from the bin directory of the Spark installation.</p>
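<p>For anyone hitting the same thing, one quick way to confirm which binary the shell resolves (Windows cmd; the path below is just the one from the question):</p> <pre><code>:: list every spark-submit on the PATH - the first hit is the one that runs
where spark-submit

:: or bypass the PATH entirely and call the 2.4.0 binary explicitly
c:\users\&lt;username&gt;\Desktop\spark-2.4.0-bin-hadoop2.7\bin\spark-submit --version
</code></pre>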
<p>We are trying to set up scalable Jenkins on a Kubernetes cluster to build and deploy our app. We are able to successfully scale Jenkins slaves using Kubernetes on a dev machine (spec: CentOS 7, 12 cpu/cores, 16G).</p> <p>However, application build time is being affected drastically. Building the application on a Debian docker image takes 1.5 hrs on the CentOS host, whereas building the same application on the same image inside a slave pod takes ~5 hrs.</p> <p>We tried setting the CPU/memory (limits, requests) on the slave pod and also tried setting multiple default values in a LimitRange, but it has no effect on build time. <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a></p> <p>What are we missing?</p> <p>minikube node capacity:</p> <pre><code>Capacity: cpu: 10 memory: 9206328Ki pods: 110 Allocatable: cpu: 10 memory: 9103928Ki pods: 110 </code></pre> <p>Jenkins pipeline code:</p> <pre><code>def label = "slave-${UUID.randomUUID().toString()}" podTemplate(label: label, containers: [ containerTemplate(name: 'todebian', image: 'registry.gitlab.com/todebian:v1', command: 'cat', ttyEnabled: true, resourceRequestCpu: '2', resourceLimitCpu: '3', resourceRequestMemory: '1024Mi', resourceLimitMemory: '2048Mi') ], volumes: [ hostPathVolume(mountPath: '/workspace', hostPath: '/hosthome/workspace_linux1') ]) { node(label) { container('todebian'){ sh """ cd /workspace ./make """ } } } </code></pre> <p>Please help me with troubleshooting.</p>
<p>Your problem may be exactly in using Minikube, which uses full-blown virtualization. My suggestion would be to set up a <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">single-master cluster</a> to get native performance and get rid of Minikube; a rough sketch follows below. In my experience, this approach can dramatically increase performance.</p>
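<p>Assuming a CentOS 7 host that already has Docker, kubeadm and kubectl installed (Weave shown as the CNI; swap in whichever network add-on you prefer), the setup looks roughly like this:</p> <pre><code># initialize a single-master cluster
sudo kubeadm init

# make kubectl usable for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a pod network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# single-node setup: allow regular workloads to schedule on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>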
<p>I'm working on a little gRCP API and I've run into a small issue. I want the gRPC service to be accessible from a React front end, which means I need to have an envoy proxy that translates gRPC -> HTTP1 and vice versa. </p> <p><strong>TL;DR</strong> I think I can reach the envoy proxy, but the proxy isn't routing to my service correctly. </p> <p>I'll put the info for the services here and explain what's happening below. </p> <p>K8s (deployment and service) yaml files for each of my services:<br> <a href="https://github.com/OlamAI/Simulation/blob/v2/k8s/deployment.yaml" rel="nofollow noreferrer">Simulation deployment</a> </p> <p>Envoy config file and docker image<br> <a href="https://github.com/OlamAI/Envoy-Proxy/blob/master/envoy-proxy.yaml" rel="nofollow noreferrer">Envoy Config</a><br> <a href="https://github.com/OlamAI/Envoy-Proxy/blob/master/Dockerfile" rel="nofollow noreferrer">Envoy Dockerfile</a> </p> <p>Here is a quick sanity check making sure that my service works, disregarding the envoy proxy:</p> <pre><code>&gt; kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 37d sim-dep NodePort 10.105.65.65 &lt;none&gt; 9090:30780/TCP 3s sim-dep-http1 NodePort 10.111.190.170 &lt;none&gt; 8080:30948/TCP 3s sim-envoy-proxy NodePort 10.110.178.132 &lt;none&gt; 9091:32068/TCP 17h &gt; curl &lt;minikube ip&gt;:30948/v1/todo/all {"api":"v1","toDos":[{}]} &gt; ./client-grpc -server=192.168.99.100:30780 &lt;success from server, too much to put here but it works&gt; </code></pre> <p>So from what I understand, this envoy proxy should accept connections on port 9091, then re-route them to the address sim-dep (K8s DNS is running) on port 9090.<br> Here is my code and error when I run this on a React app on my host machine (not in minikube). </p> <pre><code>var todoService = new ToDoServiceClient( "http://192.168.99.100:32068/", null, null ); var todo = new ToDo(); todo.setTitle("JS Created ToDo"); todo.setDescription("This todo was created with JS client"); var createRequest = new CreateRequest(); createRequest.setApi("v1"); createRequest.setTodo(todo); var response = await todoService.create(createRequest, {}, &lt;callback&gt;); </code></pre> <p><strong>Logs:</strong></p> <pre><code>upstream connect error or disconnect/reset before headers </code></pre> <p>I'm assuming this means it can't connect at some level, but that error isn't very descriptive. I also can't curl through the envoy proxy, but I did try sending a request with PostMan and got this:</p> <p><a href="https://i.stack.imgur.com/r1ktn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r1ktn.jpg" alt="enter image description here"></a></p> <p>I'm not too sure what to make of that, but it sort of seems like it's reaching envoy but isn't reaching the service. I would appreciate any help on this! :) And let me know if I missed any info here. Thanks!</p>
<p>Just figured it out! </p> <p>To debug this I ran the proxy container locally. I was able to connect to it and was getting this error:</p> <pre><code>unknown service /v1.ToDoService </code></pre> <p>I hate myself for this, but that error was essentially saying this in english: </p> <pre><code>I can connect to your server, but I can't find a service called /v1.ToDoService </code></pre> <p>and after a ridiculous amount of staring at my code and doing sanity checks, I realized that it was looking for the service in the wrong spot... the error really should give a bit more info but all I had to do was change the connection url in my JS client to this:</p> <pre><code>http://localhost:32068 </code></pre> <p><strong>No slash at the end! Whoops!</strong></p> <p>Now to deal with deploying this proxy... </p> <p>So the deployment for the gRPC service has metadata on it that creates a DNS address(? not sure what its called) for the service. So in my case, I called my service <code>sim-dep</code>. If you follow the <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">DNS section tutorial here</a>, you can do a DNS lookup for your service which in my case gave me </p> <p><code>sim-dep.default.svc.cluster.local</code>. </p> <p>Now all I had to do after finding that was replace the host address of the envoy config to that address and everything worked out!</p>
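<p>For anyone else doing that DNS lookup, a throwaway busybox pod inside the cluster works, e.g.:</p> <pre><code>kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup sim-dep
</code></pre>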
<p>I have a statefulset application which has a server running on port 1000 and has 3 replicas. Now, I want to expose the application, so I have used <code>type: NodePort</code>. But I also want the replicas to communicate with each other on the same port. When I do an nslookup in the case of a NodePort-type service, it gives only one DNS name, <code>&lt;svc_name&gt;.&lt;namespace&gt;.svc.cluster.local</code> (individual pods don't get a DNS name), and the application is exposed.</p> <p>When I use <code>clusterIP: None</code> I get pod-specific DNS names, <code>&lt;statefulset-pod&gt;.&lt;svc_name&gt;.&lt;namespace&gt;.svc.cluster.local</code>, but the application is not exposed. The two do not work together. How can I achieve both: use the same port for inter-replica communication and expose the same port externally?</p>
<p><strong>LoadBalancer:</strong> Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.</p>
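<p>A common way to get both behaviours at once is to pair two Services over the same pods: a headless Service for the per-pod DNS names the replicas use to talk to each other, and a separate NodePort (or LoadBalancer) Service for external traffic. A minimal sketch, assuming the StatefulSet's pods carry the label <code>app: my-app</code> and the server listens on port 1000:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-headless        # referenced by the StatefulSet's serviceName
spec:
  clusterIP: None              # headless: gives each pod its own DNS record
  selector:
    app: my-app
  ports:
  - port: 1000
    targetPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: NodePort               # or LoadBalancer, as described above
  selector:
    app: my-app
  ports:
  - port: 1000
    targetPort: 1000
</code></pre>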
<p>I deployed kafka and zookeeper in kubernetes. My kafka readiness probes keeps failing if I have readiness probes for zookeeper. In case if I comment or delete readiness probes of zookeeper and deploy again, then the kafka server is starting without any problem( and as well as kafka readiness not failing).</p> <p>This is the readiness probe for zookeeper:-</p> <pre><code>readinessProbe: tcpSocket: port: 2181 initialDelaySeconds: 20 periodSeconds: 20 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 </code></pre> <p>my zookeeper log is</p> <pre><code>2018-06-18 11:27:24,863 [myid:0] - WARN [SendWorker:5135603447292250196:QuorumCnxManager$SendWorker@951] - Send worker leaving thread 2018-06-18 11:27:24,864 [myid:0] - INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:57728 2018-06-18 11:27:24,864 [myid:0] - WARN [RecvWorker:1586112601866174465:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 1586112601866174465, my id = 0, error = java.io.IOException: Received packet with invalid packet: -66911279 at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1012) 2018-06-18 11:27:24,865 [myid:0] - WARN [RecvWorker:1586112601866174465:QuorumCnxManager$RecvWorker@1028] - Interrupting SendWorker 2018-06-18 11:27:24,865 [myid:0] - WARN [SendWorker:1586112601866174465:QuorumCnxManager$SendWorker@941] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2025) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2099) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:429) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1094) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:929) 2018-06-18 11:27:24,868 [myid:0] - WARN [SendWorker:1586112601866174465:QuorumCnxManager$SendWorker@951] - Send worker leaving thread 2018-06-18 11:30:54,282 [myid:0] - INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:47944 2018-06-18 11:31:39,342 [myid:0] - WARN [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager@461] - Exception reading or writing challenge: java.net.SocketException: Connection reset 2018-06-18 11:31:39,342 [myid:0] - INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:47946 2018-06-18 11:31:39,342 [myid:0] - WARN [RecvWorker:5135603447292250196:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 5135603447292250196, my id = 0, error = java.io.IOException: Received packet with invalid packet: 1414541105 at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1012) 2018-06-18 11:31:39,343 [myid:0] - WARN [RecvWorker:5135603447292250196:QuorumCnxManager$RecvWorker@1028] - Interrupting SendWorker 2018-06-18 11:31:39,343 [myid:0] - WARN [SendWorker:5135603447292250196:QuorumCnxManager$SendWorker@941] - Interrupted while waiting for message on queue java.lang.InterruptedException at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2025) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2099) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:429) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1094) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:929) 2018-06-18 11:31:39,343 [myid:0] - WARN [SendWorker:5135603447292250196:QuorumCnxManager$SendWorker@951] - Send worker leaving thread 2018-06-18 11:31:44,433 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51010 2018-06-18 11:31:44,437 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51012 2018-06-18 11:31:44,439 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:31:44,440 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51012 (no session established for client) 2018-06-18 11:31:44,452 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51014 2018-06-18 11:31:49,438 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:31:49,438 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51010 (no session established for client) 2018-06-18 11:31:49,452 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:31:49,453 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51014 (no session established for client) 2018-06-18 11:33:59,669 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51148 2018-06-18 11:33:59,700 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:33:59,700 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51148 (no session established for client) 2018-06-18 11:33:59,713 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51150 2018-06-18 11:33:59,730 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:33:59,730 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51150 (no session established for client) 2018-06-18 11:34:00,274 [myid:0] 
- INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:48860 2018-06-18 11:34:00,275 [myid:0] - WARN [RecvWorker:4616370699239609664:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 4616370699239609664, my id = 0, error = java.io.IOException: Received packet with invalid packet: -1200847881 at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1012) 2018-06-18 11:34:00,275 [myid:0] - WARN [RecvWorker:4616370699239609664:QuorumCnxManager$RecvWorker@1028] - Interrupting SendWorker 2018-06-18 11:34:00,275 [myid:0] - WARN [SendWorker:4616370699239609664:QuorumCnxManager$SendWorker@941] - Interrupted while waiting for message on queue java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2025) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2099) at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:429) at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1094) at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74) at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:929) 2018-06-18 11:34:00,276 [myid:0] - WARN [SendWorker:4616370699239609664:QuorumCnxManager$SendWorker@951] - Send worker leaving thread 2018-06-18 11:34:00,277 [myid:0] - INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:48862 2018-06-18 11:34:00,285 [myid:0] - WARN [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager@461] - Exception reading or writing challenge: java.net.SocketException: Connection reset 2018-06-18 11:40:10,712 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.30.99.87:51522 2018-06-18 11:40:10,713 [myid:0] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x0, likely client has closed socket 2018-06-18 11:40:10,713 [myid:0] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /172.30.99.87:51522 (no session established for client) 2018-06-18 11:40:10,782 [myid:0] - INFO [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager$Listener@743] - Received connection request /10.186.58.164:49556 2018-06-18 11:40:10,782 [myid:0] - WARN [kafka1-zookeeper-0.kafka1-zookeeper/172.30.99.87:3888:QuorumCnxManager@461] - Exception reading or writing challenge: java.net.SocketException: Connection reset 2018-06-18 16:07:03,456 [myid:0] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started. 2018-06-18 16:07:03,459 [myid:0] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed </code></pre>
<p>I had a similar issue. The following changes helped me overcome it.</p> <pre><code># readinessProbe &amp; livenessProbe readinessProbe: tcpSocket: port: 9092 timeoutSeconds: 5 periodSeconds: 5 initialDelaySeconds: 45 livenessProbe: exec: command: - sh - -c - "kafka-broker-api-versions.sh --bootstrap-server=localhost:9092" timeoutSeconds: 5 periodSeconds: 5 initialDelaySeconds: 60 </code></pre> <p>Based on your requirements, you can tune the following value:</p> <blockquote> <p>initialDelaySeconds</p> </blockquote>
<p>I am using minikube on my local machine. Getting this error while using kubernetes port forwarding. Can anyone help?</p> <pre><code>mjafary$ kubectl port-forward sa-frontend 88:80 Unable to listen on port 88: All listeners failed to create with the following errors: Unable to create listener: Error listen tcp4 127.0.0.1:88: bind: permission denied, Unable to create listener: Error listen tcp6 [::1]:88: bind: permission denied error: Unable to listen on any of the requested ports: [{88 80}] </code></pre>
<p><code>kubectl</code> fails to open the port 88 because it is a privileged port. All ports &lt;1024 require special permissions.</p> <p>There are many ways to solve your problem.</p> <ul> <li>You can stick to ports &gt;= 1024, and use for example the port 8888 instead of 88: <code>kubectl port-forward sa-frontend 8888:80</code></li> <li>You could use <code>kubectl</code> as root: <code>sudo kubectl port-forward sa-frontend 88:80</code> (not recommended, kubectl would then look for its config as root)</li> <li>You could grant the <code>kubectl</code> binary the capability to open privileged ports. <a href="https://superuser.com/a/892391/48678">This answer</a> explains in depth how to do this.</li> </ul> <p>If you want to go for the 3rd option, here is a short way of doing it:</p> <pre><code>sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/kubectl </code></pre> <p>This will let <code>kubectl</code> open any port while still running with the privileges of a regular user. You can check if this worked by using</p> <pre><code>sudo getcap /usr/bin/kubectl /usr/bin/kubectl = cap_net_bind_service+eip </code></pre> <p>Be aware that this grants the permission to whoever uses the binary. If you want finer grained permissions, use authbind.</p> <p>Note: as <a href="https://stackoverflow.com/users/6463291/ng-sek-long">ng-sek-long</a> <a href="https://stackoverflow.com/questions/53775328/kubernetes-port-forwarding-error-listen-tcp4-127-0-0-188-bind-permission-de/55023272?noredirect=1#comment122743640_55023272">commented</a>, <code>kubectl</code> is not necessarily installed as <code>/usr/bin/kubectl</code>. You should replace it with the path to the kubectl binary on your machine.</p>
<p><strong>Scenario:</strong></p> <p>I'm trying to run a basic <code>ls</code> command using <code>kubernetes</code> package via <code>cli.connect_post_namespaced_pod_exec()</code> however I get a stacktrace that I do not know how to debug. Yes I have tried searching around but I'm not really sure what the problem is as I'm using documentation example from <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_exec" rel="nofollow noreferrer">here</a></p> <p><strong>OS:</strong></p> <p>macOS Sierra 10.12.2</p> <p><strong>Code:</strong></p> <pre><code>#!/usr/local/bin/python2.7 import logging from pprint import pprint from kubernetes import client, config FORMAT = "[%(filename)s:%(lineno)s - %(funcName)s() ] %(message)s" level = logging.DEBUG logging.basicConfig(format=FORMAT, level=level) def main(): path_to_config = "/Users/acabrer/.kube/config" config.load_kube_config(config_file=path_to_config) ns = "default" pod = "nginx" cmd = "ls" cli = cli = client.CoreV1Api() response = cli.connect_post_namespaced_pod_exec(pod, ns, stderr=True, stdin=True, stdout=True, command=cmd) pprint(response) if __name__ == '__main__': main() </code></pre> <p><strong>Stack Trace:</strong></p> <pre><code>Traceback (most recent call last): File "/Users/acabrer/kube.py", line 16, in &lt;module&gt; main() File "/Users/acabrer/kube.py", line 12, in main response = cli.connect_post_namespaced_pod_exec(pod, ns, stderr=True, stdin=True, stdout=True) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 3605, in connect_post_namespaced_pod_exec (data) = self.connect_post_namespaced_pod_exec_with_http_info(name, namespace, **kwargs) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 3715, in connect_post_namespaced_pod_exec_with_http_info collection_formats=collection_formats) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 328, in call_api _return_http_data_only, collection_formats, _preload_content, _request_timeout) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 152, in __call_api _request_timeout=_request_timeout) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 373, in request body=body) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 257, in POST body=body) File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 213, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 21 Jan 2017 00:55:28 GMT', 'Content-Length': '139', 'Content-Type': 'application/json'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Upgrade request required","reason":"BadRequest","code":400} </code></pre> <p>Any input would be much appreaciated.</p> <p><strong>Edit 1:</strong></p> <pre><code>abrahams-mbp:.kube acabrer$ curl --help |grep TLSv -1, --tlsv1 Use &gt;= TLSv1 (SSL) --tlsv1.0 Use TLSv1.0 (SSL) --tlsv1.1 Use TLSv1.1 (SSL) --tlsv1.2 Use TLSv1.2 (SSL) abrahams-mbp:.kube acabrer$ python2.7 -c "import ssl; print ssl.OPENSSL_VERSION_INFO" (1, 0, 2, 10, 15) </code></pre> <p><strong>Edit 2:</strong></p> <pre><code>abrahams-mbp:.kube acabrer$ curl --tlsv1.2 https://x.x.x.x -k Unauthorized abrahams-mbp:.kube acabrer$ curl --tlsv1.1 https://x.x.x.x -k curl: (35) Unknown SSL protocol 
error in connection to x.x.x.x:-9836 </code></pre> <p><strong>Edit 3:</strong> I put some print statements to see the full on request information in <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/client/api_client.py#L341" rel="nofollow noreferrer">api_client.py</a> and this is what I see.</p> <p><strong>Note:</strong> I removed the ip-address to my endpoint for security.</p> <pre><code>bash-3.2# vim /usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py bash-3.2# /Users/acabrer/kube.py ################ POST https://x.x.x.x/api/v1/namespaces/default/pods/nginx/exec [('stdin', True), ('command', 'ls'), ('stderr', True), ('stdout', True)] {'Content-Type': 'application/json', 'Accept': '*/*', 'User-Agent': 'Swagger-Codegen/1.0.0-alpha/python'} [] None ################ </code></pre> <p>Thanks,</p> <p>-Abe.</p>
<p>I encountered the same problem. The solution is to use kubernetes.stream. You need to import the package and change just one line of code as follows:</p> <pre><code>from kubernetes.stream import stream #response = cli.connect_post_namespaced_pod_exec(pod, ns, stderr=True, stdin=True,stdout=True, command=cmd) response = stream(cli.connect_post_namespaced_pod_exec,pod, ns, stderr=True,stdin=True, stdout=True, command=cmd) </code></pre>
<p>These are my pods</p> <pre><code>hello-kubernetes-5569fb7d8f-4rkhs 0/1 ImagePullBackOff 0 5d2h hello-minikube-5857d96c67-44kfg 1/1 Running 1 5d2h hello-minikube2 1/1 Running 0 3m24s hello-minikube2-74654c8f6f-trrrw 1/1 Running 0 4m8s hello-newkubernetes 0/1 ImagePullBackOff 0 5d1h </code></pre> <p>If I try</p> <pre><code>curl $(minikube service hello-minikube2 --url) curl: (7) Failed to connect to 192.168.99.100 port 31591: Connection refused </code></pre> <p>Let's check VBox</p> <pre><code>inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0 valid_lft forever preferred_lft forever inet6 fe80::800:27ff:fe00:0/64 scope link valid_lft forever preferred_lft forever </code></pre> <p>Why is my connection refused?</p> <pre><code>kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR hello-kubernetes NodePort 10.98.65.138 &lt;none&gt; 8080:30062/TCP 5d2h run=hello-kubernetes hello-minikube NodePort 10.105.166.56 &lt;none&gt; 8080:30153/TCP 5d3h run=hello-minikube hello-minikube2 NodePort 10.96.94.39 &lt;none&gt; 8080:31591/TCP 42m run=hello-minikube2 kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5d4h &lt;none&gt; tomcat-deployment NodePort 10.96.205.228 &lt;none&gt; 8080:30613/TCP 2m13s app=tomcat kubectl get ep -o wide NAME ENDPOINTS AGE hello-kubernetes 5d14h hello-minikube 172.17.0.7:8080 5d14h hello-minikube2 172.17.0.4:8080,172.17.0.5:8080 12h kubernetes 192.168.99.100:8443 5d16h tomcat-deployment 172.17.0.6:8080 11h </code></pre> <p>I want to show service endpoint</p> <pre><code>minikube service tomcat-deployment --url http://192.168.99.100:30613 </code></pre> <p>Why is this url different from get ep -o wide output?</p>
<p>Apparently, you are trying to reach your service outside of the cluster, thus you need to expose your service IP for external connection.</p> <p>Run <code>kubectl edit svc hello-minikube2</code> and change </p> <pre><code>type: NodePort </code></pre> <p>to </p> <pre><code>type: LoadBalancer </code></pre> <p>Or </p> <pre><code>kubectl expose deployment hello-minikube2 --type=LoadBalancer --port=8080 </code></pre> <p>On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.</p> <p>Run the following command:</p> <pre><code>minikube service hello-minikube2 </code></pre>
<p>I'm trying to configure SSL certificates in Kubernetes with cert-manager, Istio ingress and LetsEncrypt. I have installed Istio with helm, installed cert-manager, created a ClusterIssuer and then I'm trying to create a Certificate. The acme challenge can't be validated; I'm trying to do it with http01 and can't figure out how to use the Istio ingress for this. Istio is deployed with the following options:</p> <pre><code>helm install --name istio install/kubernetes/helm/istio ` --namespace istio-system ` --set global.controlPlaneSecurityEnabled=true ` --set grafana.enabled=true` --set tracing.enabled=true --set kiali.enabled=true ` --set ingress.enabled=true</code></pre> <p>Certificate configuration:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: example.com namespace: istio-system spec: secretName: example.com issuerRef: name: letsencrypt-staging kind: ClusterIssuer commonName: 'example.com' dnsNames: - example.com acme: config: - http01: ingress: istio-ingress domains: - example.com</code></pre> <p>When trying this way, for some reason, istio-ingress can't be found, but when trying to specify ingressClass: some-name instead of ingress: istio-ingress, I get a 404 because example.com/.well-known/acme-challenge/token can't be reached. How can this be solved? Thank you!</p>
<p>Istio ingress has been deprecated; you can use the Ingress Gateway with the DNS challenge instead. </p> <p>Define a generic public ingress gateway:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: public-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" tls: httpsRedirect: true - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE privateKey: /etc/istio/ingressgateway-certs/tls.key serverCertificate: /etc/istio/ingressgateway-certs/tls.crt </code></pre> <p>Create an issuer using one of the DNS providers supported by cert-manager. Here is the config for GCP CloudDNS:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Issuer metadata: name: letsencrypt-prod namespace: istio-system spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-prod dns01: providers: - name: cloud-dns clouddns: serviceAccountSecretRef: name: cert-manager-credentials key: gcp-dns-admin.json project: my-gcp-project </code></pre> <p>Create a wildcard cert with:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: istio-gateway namespace: istio-system spec: secretName: istio-ingressgateway-certs issuerRef: name: letsencrypt-prod commonName: "*.example.com" acme: config: - dns01: provider: cloud-dns domains: - "*.example.com" - "example.com" </code></pre> <p>It takes a couple of minutes for cert-manager to issue the cert:</p> <pre><code>kubectl -n istio-system describe certificate istio-gateway Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CertIssued 1m52s cert-manager Certificate issued successfully </code></pre> <p>You can find a step-by-step guide on setting up Istio ingress on GKE with Let's Encrypt here <a href="https://docs.flagger.app/install/flagger-install-on-google-cloud#cloud-dns-setup" rel="nofollow noreferrer">https://docs.flagger.app/install/flagger-install-on-google-cloud#cloud-dns-setup</a></p>
<p>I'm trying to install VerneMQ on a Kubernetes cluster over Oracle OCI usign Helm chart.</p> <p>The Kubernetes infrastructure seems to be up and running, I can deploy my custom microservices without a problem.</p> <p>I'm following the instructions from <a href="https://github.com/vernemq/docker-vernemq" rel="nofollow noreferrer">https://github.com/vernemq/docker-vernemq</a></p> <p>Here the steps:</p> <ul> <li><code>helm install --name="broker" ./</code> from helm/vernemq directory</li> </ul> <p>the output is:</p> <pre><code>NAME: broker LAST DEPLOYED: Fri Mar 1 11:07:37 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/RoleBinding NAME AGE broker-vernemq 1s ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE broker-vernemq-headless ClusterIP None &lt;none&gt; 4369/TCP 1s broker-vernemq ClusterIP 10.96.120.32 &lt;none&gt; 1883/TCP 1s ==&gt; v1/StatefulSet NAME DESIRED CURRENT AGE broker-vernemq 3 1 1s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE broker-vernemq-0 0/1 ContainerCreating 0 1s ==&gt; v1/ServiceAccount NAME SECRETS AGE broker-vernemq 1 1s ==&gt; v1/Role NAME AGE broker-vernemq 1s NOTES: 1. Check your VerneMQ cluster status: kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show 2. Get VerneMQ MQTT port echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883" kubectl port-forward svc/broker-vernemq 1883:1883 </code></pre> <p>but when I do this check</p> <p><code>kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show</code></p> <p>I got</p> <pre><code>Node '[email protected]' not responding to pings. command terminated with exit code 1 </code></pre> <p>I think there is something wrong with subdomain (the double dots without nothing between them)</p> <p>Whit this command</p> <pre><code>kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns </code></pre> <p>The last log line is</p> <pre><code>I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up. 
</code></pre> <p>I've also tried with this custom yaml:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: namespace: default name: vernemq labels: app: vernemq spec: serviceName: vernemq replicas: 3 selector: matchLabels: app: vernemq template: metadata: labels: app: vernemq spec: containers: - name: vernemq image: erlio/docker-vernemq:latest imagePullPolicy: Always ports: - containerPort: 1883 name: mqtt - containerPort: 8883 name: mqtts - containerPort: 4369 name: epmd env: - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS value: "off" - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES value: "1" - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL value: "vernemq" - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE value: "/etc/vernemq-passwd/vmq.passwd" volumeMounts: - name: vernemq-passwd mountPath: /etc/vernemq-passwd readOnly: true volumes: - name: vernemq-passwd secret: secretName: vernemq-passwd --- apiVersion: v1 kind: Service metadata: name: vernemq labels: app: vernemq spec: clusterIP: None selector: app: vernemq ports: - port: 4369 name: epmd --- apiVersion: v1 kind: Service metadata: name: mqtt labels: app: mqtt spec: type: ClusterIP selector: app: vernemq ports: - port: 1883 name: mqtt --- apiVersion: v1 kind: Service metadata: name: mqtts labels: app: mqtts spec: type: LoadBalancer selector: app: vernemq ports: - port: 8883 name: mqtts </code></pre> <p>Any suggestion?</p> <p>Many thanks<br> Jack</p>
<p>It seems to be a bug in the Docker image. The suggestion on github is to built your own image or use the later VerneMQ image (after 1.6.x) where it has been fixed.</p> <p>Suggestion mentioned here: <a href="https://github.com/vernemq/docker-vernemq/pull/92" rel="nofollow noreferrer">https://github.com/vernemq/docker-vernemq/pull/92</a></p> <p>Pull-Request for a possible fix: <a href="https://github.com/vernemq/docker-vernemq/pull/97" rel="nofollow noreferrer">https://github.com/vernemq/docker-vernemq/pull/97</a></p> <p>EDIT: </p> <p>I only got it to work without helm. Using <code>kubectl create -f ./cluster.yaml</code>, with the following <code>cluster.yaml</code>:</p> <pre><code>--- apiVersion: apps/v1 kind: StatefulSet metadata: name: vernemq namespace: default spec: serviceName: vernemq replicas: 3 selector: matchLabels: app: vernemq template: metadata: labels: app: vernemq spec: serviceAccountName: vernemq containers: - name: vernemq image: erlio/docker-vernemq:latest ports: - containerPort: 1883 name: mqttlb - containerPort: 1883 name: mqtt - containerPort: 4369 name: epmd - containerPort: 44053 name: vmq - containerPort: 9100 - containerPort: 9101 - containerPort: 9102 - containerPort: 9103 - containerPort: 9104 - containerPort: 9105 - containerPort: 9106 - containerPort: 9107 - containerPort: 9108 - containerPort: 9109 env: - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES value: "1" - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL value: "vernemq" - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM value: "9100" - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM value: "9109" - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE value: "1" # only allow anonymous access for development / testing purposes! # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS # value: "on" --- apiVersion: v1 kind: Service metadata: name: vernemq labels: app: vernemq spec: clusterIP: None selector: app: vernemq ports: - port: 4369 name: empd - port: 44053 name: vmq --- apiVersion: v1 kind: Service metadata: name: mqttlb labels: app: mqttlb spec: type: LoadBalancer selector: app: vernemq ports: - port: 1883 name: mqttlb --- apiVersion: v1 kind: Service metadata: name: mqtt labels: app: mqtt spec: type: NodePort selector: app: vernemq ports: - port: 1883 name: mqtt --- apiVersion: v1 kind: ServiceAccount metadata: name: vernemq --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: endpoint-reader rules: - apiGroups: ["", "extensions", "apps"] resources: ["endpoints", "deployments", "replicasets", "pods"] verbs: ["get", "list"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: endpoint-reader subjects: - kind: ServiceAccount name: vernemq roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: endpoint-reader </code></pre> <p>Needs a few seconds to get the pods ready.</p>
<p>I'm trying to understand the relationship among Kubernetes pods and the cores and memory of my cluster nodes when using Dask.</p> <p>My current setup is as follows:</p> <ul> <li>Kubernetes cluster using GCP's Kubernetes Engine</li> <li>Helm package manager to install Dask on the cluster</li> </ul> <p>Each node has 8 cores and 30 gb of ram. I have 5 nodes in my cluster:</p> <p><a href="https://i.stack.imgur.com/JnEFa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JnEFa.png" alt="cluster info"></a></p> <p>I then scaled the number of pods to 50 by executing</p> <pre><code>kubectl scale --replicas 50 deployment/nuanced-armadillo-dask-worker </code></pre> <p>When I initialize the client in Dask using <code>dask.distributed</code> I see the following</p> <p><a href="https://i.stack.imgur.com/ZKoex.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZKoex.png" alt="dask distributed client info"></a></p> <p>What puzzles me is that the client says that there are 400 cores and 1.58 tb of memory in my cluster (see screenshot). I suspect that by default each pod is being allocated 8 cores and 30 gb of memory, but how is this possible given the constraints on the actual number of cores and memory in each node?</p>
<p>If you don't specify a number of cores or memory then every Dask worker tries to take up the entire machine on which it is running.</p> <p>For the helm package you can specify the number of cores and amount of memory per worker by adding resource limits to your worker pod specification. These are listed in the configuration options of the chart.</p>
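<p>For example, a hedged <code>values.yaml</code> override for the Dask helm chart - the <code>worker.resources</code> keys are an assumption, so verify them with <code>helm inspect values</code> for the chart you installed:</p> <pre><code># values.yaml - key names are an assumption; check your chart's defaults
worker:
  replicas: 50
  resources:
    limits:
      cpu: 1
      memory: 3G
    requests:
      cpu: 1
      memory: 3G
</code></pre> <p>Apply it with something like <code>helm upgrade nuanced-armadillo &lt;your-dask-chart&gt; -f values.yaml</code>. If the chart wires these limits through to the worker's <code>--nthreads</code>/<code>--memory-limit</code> flags, the client's totals will then reflect per-worker limits rather than whole-node figures; if it does not, set those flags explicitly through the chart's worker options.</p>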
<p>I am trying to set up a static external IP for my load balancer on GKE but having no luck. Here is my Kubernetes service config file: </p> <pre><code>kind: Service apiVersion: v1 metadata: name: myAppService spec: selector: app: myApp ports: - protocol: TCP port: 3001 targetPort: 3001 type: LoadBalancer loadBalancerIP: ********* </code></pre> <p>This doesn't work. I expect to see my external IP as ********* but it just says pending: </p> <pre><code>➜ git:(master) kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ********* &lt;none&gt; 443/TCP 5m myAppService ********* &lt;pending&gt; 3001:30126/TCP 5m </code></pre> <p>More details: </p> <pre><code>➜ git:(master) kubectl describe services Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: &lt;none&gt; Selector: &lt;none&gt; Type: ClusterIP IP: ********* Port: https 443/TCP Endpoints: ********* Session Affinity: ClientIP Events: &lt;none&gt; Name: myAppService Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=myApp Type: LoadBalancer IP: ********* Port: &lt;unset&gt; 3001/TCP NodePort: &lt;unset&gt; 30126/TCP Endpoints: Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 5m 20s 7 service-controller Normal CreatingLoadBalancer Creating load balancer 5m 19s 7 service-controller Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Failed to create load balancer for service default/myAppService: Cannot EnsureLoadBalancer() with no hosts </code></pre> <p>Any ideas?</p>
<p>This got me stuck as well, I hope someone finds this helpful.</p> <p>In addition to what <a href="https://stackoverflow.com/a/51323242/340888">Dirk</a> said, if you happen to reserve a global static IP address, as opposed to a regional one, you need to use an Ingress as described in the documentation here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#step_2b_using_an_ingress" rel="noreferrer">Configuring Domain Names with Static IP Addresses</a>, specifically step 2b. </p> <p>So basically you reserve the static IP: <code>gcloud compute addresses create helloweb-ip --global</code></p> <p>and add an Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: helloweb # this is where you add your reserved ip annotations: kubernetes.io/ingress.global-static-ip-name: helloweb-ip labels: app: hello spec: backend: serviceName: helloweb-backend servicePort: 8080 --- apiVersion: v1 kind: Service metadata: name: helloweb-backend labels: app: hello spec: type: NodePort selector: app: hello tier: web ports: - port: 8080 targetPort: 8080 </code></pre> <p>The doc also describes how to assign a static IP if you choose type "LoadBalancer", under step 2a.</p>
<p>I have a spring-boot application running on a container. One of the APIs is a file upload API and every time a file is uploaded it has to be scanned for viruses. We have uvscan to scan the uploaded file. I'm looking at adding uvscan to the base image but the virus definitions need to be updated on a daily basis. I've created a script to update the virus definitions. The simplest way currently is to run a cron inside the container which invokes the script. Is there any other alternative to do this? Can the uvscan utility be isolated from the app pod and invoked from the application?</p>
<p>There are many ways to solve the problem. I hope I can help you find what suits you best.</p> <p>From my perspective, it would be pretty convenient to have a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> that builds and pushes a new docker image with uvscan and the updated virus definition database on a daily basis. </p> <p>In your file-processing sequence you can create a scan <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> using the Kubernetes API, and give it access to a shared <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">volume</a> with the file you need to scan. </p> <p>The scan Job will use the <code>:latest</code> <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">image</a>, and if a new image appears in the registry it will download it and create the pod from it.</p> <p>The downside is that when you create images daily it consumes "some" amount of disk space, so you may need to invent a process for removing the old images from the registry and from the docker cache on each node of the Kubernetes cluster.</p> <p>Alternatively, you can put the AV database on a shared volume, or use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation" rel="nofollow noreferrer">Mount Propagation</a>, and update it independently of the pods (see the sketch below). If uvscan opens the AV database in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">read-only mode</a> it should be possible. </p> <p>On the other hand, it usually takes time to load the virus definitions into memory, so it might be better to run the virus scan as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> rather than as a Job, with a daily restart after a new image is pushed to the registry.</p>
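<p>If you go the shared-volume route, here is a minimal sketch of a CronJob that refreshes the definitions once a day; the image name, script path and PVC name are all placeholders for your own update image and storage:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: uvscan-dat-update
spec:
  schedule: "0 3 * * *"                  # once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: update-definitions
            image: registry.example.com/uvscan-updater:latest   # placeholder image containing uvscan + update script
            command: ["/opt/uvscan/update-defs.sh"]              # placeholder script path
            volumeMounts:
            - name: av-definitions
              mountPath: /usr/local/uvscan/dat
          volumes:
          - name: av-definitions
            persistentVolumeClaim:
              claimName: av-definitions                          # shared PVC also mounted (read-only) by the app pods
</code></pre>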
<p>I just deployed a Docker container with Postgres on it on AWS EKS. </p> <p>Below are the description details.<a href="https://i.stack.imgur.com/O7HxL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O7HxL.png" alt="enter image description here"></a></p> <p>How do I access or test whether Postgres is working? I tried accessing both IPs with the port from a worker node within the VPC. </p> <pre><code> psql -h #IP -U #defaultuser -p 55432 </code></pre> <p>Below is the deployment. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:10.4 imagePullPolicy: "IfNotPresent" ports: - containerPort: 55432 # envFrom: # - configMapRef: # name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb volumes: - name: postgredb persistentVolumeClaim: claimName: efs </code></pre> <p>Surprisingly, I am able to connect to psql but on 5432. :( Not sure what I am doing wrong. I passed containerPort as 55432.</p>
<p>In short, you need to run the following command to expose your database on port 55432:</p> <pre><code>kubectl expose deployment postgres --port=55432 --target-port=5432 --name internal-postgresql-svc </code></pre> <p>From now on, you can connect to it via port 55432 from inside your cluster by using the service name as a hostname, or via its ClusterIP address:</p> <pre><code>kubectl get svc internal-postgresql-svc </code></pre> <p>What you did in your deployment manifest file was just attach additional (and in this case misleading) information about the network connections a container uses, because your container exposes port 5432 only (you can verify it yourself <a href="https://github.com/docker-library/postgres/blob/cc305ee1c59d93ac1808108edda6556b879374a4/10/Dockerfile#L174" rel="nofollow noreferrer">here</a>). <br><br>You should use a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> - an abstraction which enables access to your Pods and does the necessary port mapping behind the scenes. <br><br> Please also check the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">different</a> service types, if you want to expose your postgresql database outside of the Kubernetes cluster. <br></p> <hr> <p>To test whether Postgres is running fine inside the Pod's container:</p> <pre><code>kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=&lt;HERE_YOUR_PASSWORD&gt;" --command -- psql --host &lt;HERE_HOSTNAME=SVC_OR_IP&gt; -U &lt;HERE_USERNAME&gt; </code></pre>
<p>I'm trying to figure out the steps to set up CI/CD for an ASP.NET Core web application using AKS with VSTS. Are the steps described in <a href="https://learn.microsoft.com/en-us/vsts/build-release/apps/cd/azure/deploy-container-kubernetes" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/vsts/build-release/apps/cd/azure/deploy-container-kubernetes</a> valid for what I'm trying to do? Are Windows containers supported in AKS?</p>
<p>If your application is written in ASP.NET Core, then you can host it on Linux, as your code is platform-independent. I have done this with a Dockerfile in which the container runs the app self-hosted on AKS (a rough sketch follows below).</p> <p>VSTS provides a built-in task to deploy to your AKS cluster from your build pipeline.</p>
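<p>As an illustration only (not the exact file I used), a multi-stage Dockerfile for a self-hosted ASP.NET Core app could look like this. The image tags and the project name are placeholders you would adjust to your own .NET Core version and project:</p> <pre><code># build stage
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyWebApp.csproj -c Release -o /app

# runtime stage
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
</code></pre>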
<p>I've updated the SSL certificate for my Kubernetes Ingress services, but I don't know how to restart the instances to use the updated cert secret without manually deleting and restarting the Ingress instances. That isn't ideal because of the number of ingresses that are making use of that specific cert (all sitting on the same TLD). How do I force it to use the updated secret?</p>
<p>Turns out the reason why it wasn't updating was that the certs weren't chained properly; I uploaded the ca-bundle without the actual end cert, and I guess Google refuses to update the LB certs if they're not a valid chain. Which is weird, but, okay, sure.</p>
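<p>For anyone hitting the same thing, this is roughly how the chain can be rebuilt and the secret replaced in place (the file and secret names below are placeholders); once the secret contains a valid chain, the ingress controller should sync the new certificate to the load balancer without the Ingresses being recreated:</p> <pre><code># end-entity certificate first, then the intermediate/CA bundle
cat example-com.crt ca-bundle.crt &gt; fullchain.crt

# replace the existing TLS secret in place
kubectl create secret tls my-tls-secret \
  --cert=fullchain.crt --key=example-com.key \
  --dry-run -o yaml | kubectl apply -f -
</code></pre>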
<p>Using Traefik as an ingress controller (on a kube cluster in GCP). Is it possible to create an ingress rule that uses a backend service from a different namespace?</p> <p>We have a namespace for each of our "major" versions of code.</p> <p>1-service.com -> 1-service.com ingress in the 1-service ns -> 1-service svc in the same ns</p> <p>2-service.com -> 2-service.com ingress in the 2-service ns... and so on</p> <p>I would also like another ingress rule in the "unversioned" namespace that routes traffic to one of the major releases.</p> <p>service.com -> service.com ingress in the "service" ns -> X-service in the X-service namespace</p> <p>I would like to keep major versions separate in k8s using versioned host names (1-service.com etc.), but still have a "latest" that points to the latest of the releases.</p> <p>I believe Voyager can do cross-namespace ingress -> svc. Can Traefik do the same?</p>
<p>You can use a workaround like this:</p> <ol> <li>Create a Service of type <code>ExternalName</code> in the namespace where you want to create the ingress:</li> </ol> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-1
  namespace: unversioned
spec:
  type: ExternalName
  externalName: service-1.service-1-ns.svc.cluster.local
  ports:
  - name: http
    port: 8080
    protocol: TCP
</code></pre> <ol start="2"> <li>Create an ingress in that same namespace that points to this service:</li> </ol> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: ingress-to-other-ns
  namespace: unversioned
spec:
  rules:
  - host: latest.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 8080
        path: /
</code></pre>
<p>I am quite stuck on the step of <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers" rel="nofollow noreferrer">Launching worker nodes in the AWS EKS guide</a>. And to be honest, at this point, I don't know what's wrong. When I do kubectl get svc, I get my cluster, so that's good news. I have this in my aws-auth-cm.yaml:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::Account:role/rolename
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre> <p>Here is my config in .kube:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERTIFICATE
    server: server
  name: arn:aws:eks:region:account:cluster/clustername
contexts:
- context:
    cluster: arn:aws:eks:region:account:cluster/clustername
    user: arn:aws:eks:region:account:cluster/clustername
  name: arn:aws:eks:region:account:cluster/clustername
current-context: arn:aws:eks:region:account:cluster/clustername
kind: Config
preferences: {}
users:
- name: arn:aws:eks:region:account:cluster/clustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - clustername
      command: aws-iam-authenticator.exe
</code></pre> <p>I have launched an EC2 instance with the advised AMI.</p> <p>Some things to note:</p> <ul> <li>I launched my cluster with the CLI,</li> <li>I created a Key Pair,</li> <li>I am not using the CloudFormation stack,</li> <li>I attached these policies to the role of my EC2 instance: AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy.</li> </ul> <p>It is my first attempt at Kubernetes and EKS, so please keep that in mind :). Thanks for your help!</p>
<p>Your config file and auth file look right. Maybe there is some issue with the security group assignments? Can you share the exact steps you followed to create the cluster and the worker nodes? And is there any particular reason why you used the CLI instead of the console? I mean, if it's your first attempt at EKS, then you should probably try to set up a cluster using the console at least once.</p>
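<p>In the meantime, a few generic checks that often help narrow this kind of problem down (nothing here assumes more than what you already posted):</p> <pre><code># confirm the aws-auth ConfigMap was actually applied
kubectl describe configmap aws-auth -n kube-system

# watch whether the worker ever shows up
kubectl get nodes --watch

# on the worker instance itself, the kubelet log usually says why it cannot join
journalctl -u kubelet -f
</code></pre>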
<p>I'm using Helm 2 without Tiller in readiness for Helm 3 by using the following commands:</p> <pre><code>helm template --name HelmReleaseName --output-dir ./Output ./HelmChartName
kubectl apply --recursive --filename ./Output
</code></pre> <p>I'm interested in using <code>helm test</code> to run tests against my Helm release to make sure it's running. Is it possible to do this without Tiller in Helm 2?</p>
<p>Unfortunately, this is not possible. To be precise, Tiller maintains all releases and stores the release information in Kubernetes ConfigMap objects located in the Tiller namespace (<code>kube-system</code> by default).</p> <p>When you render the YAML files and apply them using <code>kubectl apply --recursive --filename ./Output</code>, you create the objects in your cluster, but not the corresponding ConfigMaps, so no release exists for <code>helm test</code> to act on.</p>
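<p>You can see this for yourself: with a Tiller-managed release, the stored state shows up as ConfigMaps carrying Tiller's labels, whereas with the <code>helm template</code> + <code>kubectl apply</code> workflow nothing is listed for your "release". A quick check, assuming Tiller's default namespace:</p> <pre><code>kubectl get configmaps -n kube-system -l "OWNER=TILLER"
</code></pre>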
<p>Is the maximum available cpu/mem that a Kubernetes ResourceQuota allows for a namespace measured against the total CPU across all nodes in the cluster, or per node? I.e. if I have 4 nodes, each with <strong>cpu=10 and mem=100Gi</strong>, to give the namespace 50% of the resources, would that be limits.cpu: "20" (4*10/2) or "5" (10 per node / 2)?</p> <p>I'm using statefulsets with a label + namespace for node selection, so that the pods are deployed to use only these 4 nodes. I think it would have to be the latter, so <strong>limits.cpu: "5", limits.memory: 50Gi</strong> would allow each node used by that namespace to run at <strong>50% of its resources</strong>.</p> <p>One reason I ask is that it may not always be true that each node has the same amount of cpu/memory, i.e. 50% of the resources on one node may not be 50% on another node. In my cluster they are the same, but I can see where that might not be the case.</p>
<p>It is the sum of the CPU of all worker nodes, i.e. the quota is compared against the namespace total across the whole cluster, not per node.</p> <p><a href="https://i.stack.imgur.com/w0yAj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w0yAj.png" alt="enter image description here"></a></p> <p>Source: <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#compute-resource-quota" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/resource-quotas/#compute-resource-quota</a></p> <p>If the Pods in the namespace already consume that amount, Kubernetes refuses to create further Pods there, because they would exceed the quota.</p>
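<p>Applied to the numbers in the question (4 nodes with 10 CPU / 100Gi each), a quota for roughly half the cluster would therefore look like this sketch (the quota and namespace names are placeholders):</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: half-cluster-quota       # hypothetical name
  namespace: my-namespace        # your namespace
spec:
  hard:
    requests.cpu: "20"           # 50% of 4 nodes * 10 CPU
    requests.memory: 200Gi       # 50% of 4 nodes * 100Gi
    limits.cpu: "20"
    limits.memory: 200Gi
</code></pre>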
<p>I have an application running inside <code>kubernetes</code> which has a file mounted through <code>configmaps</code>. Now, from inside the application I want to perform some action when this file (from the configmap) gets updated (let's say through a <code>kubectl edit configmap xyz</code> command).</p> <p>Let's say I have created a configmap using the following command:</p> <pre><code>kubectl create configmap my-config --from-file=config.json
</code></pre> <p>and I have my Deployment created like this:</p> <pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: "xyz"
        name: myapp
        ports:
        - containerPort: 5566
        volumeMounts:
        - mountPath: /myapp/config
          name: config
      dnsPolicy: ClusterFirstWithHostNet
      volumes:
      - name: config
        configMap:
          name: my-config
</code></pre> <p>Now, if I do <code>kubectl exec -it &lt;pod&gt; sh</code> I can see the file. If I edit the configmap using <code>kubectl edit configmap my-config</code> and change the content, the application running in my pod doesn't get a file-changed notification. I am using Go for the application, and it doesn't receive the fsnotify event on the file <code>/myapp/config/config.json</code>, even though I can see that the file has changed after the edit.</p> <p>If I run the same application on my laptop, the code of course gets the fsnotify event and my application updates its configuration. The same code from within Kubernetes, with the file coming from the configmap, doesn't work. I have read other SO questions <a href="https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes">like this</a> and various others, but none of them specifically solves the problem I face.</p> <p>I understand that the file (which comes from the configmap) is a symlink and the actual file is in a folder called <code>..data/config.json</code>. I tried watching that file as well, but I'm still not getting the fsnotify signal. Is it possible to get an fsnotify signal for files which come from a configmap (as well as secrets) within the application? If so, can someone please help me and show how to do it (preferably in Go)?</p>
<p>You might be experiencing a problem <a href="https://medium.com/@xcoulon/kubernetes-configmap-hot-reload-in-action-with-viper-d413128a1c9a" rel="nofollow noreferrer">like this</a>:</p> <blockquote> <p>When a ConfigMap changes, the real path to the config files it contains changed, but this is kinda “hidden” by 2 levels of symlinks: [..]</p> </blockquote> <p>So it seems you need to follow the chain of symlinks and watch the real target. Since your application is written in <code>go</code>, you could simply use <a href="https://github.com/spf13/viper" rel="nofollow noreferrer"><code>spf13/viper</code></a>, which handles this for you since the feature <a href="https://github.com/spf13/viper/commit/e0f7631cf3ac7e7530949c7e154855076b0a4c17" rel="nofollow noreferrer">WatchConfig and Kubernetes</a> was added (see the sketch below).</p> <p>Alternatively, you can have the Kubernetes API notify you about <a href="https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/config-map-v1/#list-list-or-watch-objects-of-kind-configmap-1" rel="nofollow noreferrer">changes of a ConfigMap</a>. This most probably requires configuring some <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-examples" rel="nofollow noreferrer">access rules</a> upfront.</p>
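<p>A minimal Go sketch using viper, assuming the file is mounted at <code>/myapp/config/config.json</code> as in the Deployment from the question (viper follows the <code>..data</code> symlink indirection for you):</p> <pre><code>package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
	"github.com/spf13/viper"
)

func main() {
	viper.SetConfigName("config")        // file name without the extension
	viper.SetConfigType("json")
	viper.AddConfigPath("/myapp/config") // the ConfigMap mount path from the Deployment
	if err := viper.ReadInConfig(); err != nil {
		log.Fatalf("reading config: %v", err)
	}

	viper.WatchConfig()
	viper.OnConfigChange(func(e fsnotify.Event) {
		// fires when the kubelet swaps the ..data symlink after a ConfigMap update
		log.Printf("config file changed: %s", e.Name)
		// re-read whatever settings your application needs here
	})

	select {} // block forever; your real application would do its work here
}
</code></pre>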