<p>I have two deployments (A and B), each one exposing a ClusterIP Service. Before deploying Istio, I was able to communicate from pod A to any of B's pods via its Service (e.g. <a href="http://B.default.svc.cluster.local/dosomecrazystuff" rel="nofollow noreferrer">http://B.default.svc.cluster.local/dosomecrazystuff</a>)</p> <p>After deploying Istio (1.0.5), I am getting "<a href="http://B.default.svc.cluster.local" rel="nofollow noreferrer">http://B.default.svc.cluster.local</a> refusing connection" when calling it from a pod in deployment A.</p> <p>What is the default routing policy in Istio? I don't need any clever load balancing or version-based routing, just straightforward communication from A to B (the same way I would do it without Istio).</p> <p>What is the absolute minimal configuration required to make it work?</p>
<p>Well, it seems like some local issue I am having on my MicroK8s deployment. On EKS and another MicroK8s I am able to communicate as desired without anything special.</p> <p>So, the answer is: no special configuration is required to make it work; it is supposed to be able to communicate just as is.</p>
<p>I'm looking for a possible way to reference the secrets in my deployment.yaml (a one-liner).</p> <p>Currently I'm using this:</p> <pre class="lang-sh prettyprint-override"><code>containers: - name: {{ template "myapp.name" . }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: Always env: - name: COUCHDB_USER valueFrom: secretKeyRef: name: {{ .Release.Name }}-secrets key: COUCHDB_USER - name: COUCHDB_PASSWORD valueFrom: secretKeyRef: name: {{ .Release.Name }}-secrets key: COUCHDB_PASSWORD </code></pre> <p>With the minimal modification possible, I want to achieve something like this:</p> <pre class="lang-sh prettyprint-override"><code>containers: - name: {{ template "myapp.name" . }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: Always env: - name: COUCHDB_URL value: http://${COUCHDB_USER}:${COUCHDB_PASSWORD}@{{ .Release.Name }}-couchdb:5984 </code></pre> <p>Just curious if I can do this in one step during the deployment, instead of passing two env vars and parsing them in my application.</p>
<p>I am not seeing any way to achieve it without setting <code>COUCHDB_USER</code> and <code>COUCHDB_PASSWORD</code> in the container env.</p> <p>One workaround is to specify your secret in the container's <code>envFrom</code>, and all your secret <code>keys</code> will be converted to environment variables. Then you can use those environment variables to create your composite env var (i.e., COUCHDB_URL).</p> <p>FYI, to create an env var from another env var in Kubernetes, <code>$(VAR_NAME)</code> is used. Curly braces <code>{}</code> won't work at the moment.</p> <hr> <p>One sample is:</p> <pre class="lang-sh prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: COUCHDB_USER: YWRtaW4= COUCHDB_PASSWORD: MWYyZDFlMmU2N2Rm --- apiVersion: v1 kind: Pod metadata: name: secret-env-pod spec: containers: - name: mycontainer image: redis envFrom: - secretRef: name: mysecret env: - name: COUCHDB_URL value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)rest-of-the-url </code></pre> <p>You can confirm the output with:</p> <pre><code>$ kubectl exec -it secret-env-pod bash root@secret-env-pod:/data# env | grep COUCHDB COUCHDB_URL=http://admin:1f2d1e2e67dfrest-of-the-url COUCHDB_PASSWORD=1f2d1e2e67df COUCHDB_USER=admin </code></pre> <hr> <p>In your case, the YAML for the container is:</p> <pre class="lang-sh prettyprint-override"><code> containers: - name: {{ template "myapp.name" . }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: Always envFrom: - secretRef: name: {{ .Release.Name }}-secrets env: - name: COUCHDB_URL value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)@{{ .Release.Name }}-couchdb:5984 </code></pre>
<p>I'm trying to connect to an existing Kubernetes cluster that's running on AWS and run arbitrary commands on it using Java. Specifically, we are using fabric8 (although I am open to another API if you can provide a sufficient answer using one). The reason I need to do this in Java is because we plan to eventually incorporate this into our existing JUnit live tests.</p> <p>For now I just need an example of how to connect to the server and get all of the pod names as an array of Strings. Can somebody show me a simple, concise example of how to do this?</p> <p>i.e. I want the equivalent of this bash script using a Java API (again preferably using fabric8, but I'll accept another API if you know one)</p> <pre><code>#!/bin/bash kops export kubecfg --name $CLUSTER --state=s3://$STATESTORE kubectl get pod -o=custom-columns=NAME:.metadata.name -n=$NAMESPACE </code></pre>
<p>Here is the fabric8 Kubernetes client:</p> <p><a href="https://github.com/fabric8io/kubernetes-client/" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client/</a></p> <p>It comes with a fluent DSL to work with Kubernetes/OpenShift resources, and it has pagination support too. If you want to list resources in a certain namespace, you can use the <code>inNamespace("your namespace")</code> method in the DSL.</p> <pre><code>String master = "https://192.168.42.20:8443/"; Config config = new ConfigBuilder().withMasterUrl(master).build(); try (final KubernetesClient client = new DefaultKubernetesClient(config)) { // Simple Listing: PodList simplePodList = client.pods().inAnyNamespace().list(); // List with limit and continue options: PodList podList = client.pods().inAnyNamespace().list(5, null); podList.getItems().forEach((obj) -&gt; { System.out.println(obj.getMetadata().getName()); }); podList = client.pods().inAnyNamespace().list(5, podList.getMetadata().getContinue()); podList.getItems().forEach((obj) -&gt; { System.out.println(obj.getMetadata().getName()); }); } catch (KubernetesClientException e) { logger.error(e.getMessage(), e); } </code></pre>
<p>Is there a way I can view the secret via kubectl? </p> <p>Given the secret name, and the data (file?) how do I view the raw result?</p>
<p>The following solution relies on <code>jq</code>.</p> <pre><code>secretName="example-secret-name" secKeyName="example.key" kubectl get secret "$secretName" -o json | jq -r ".[\"data\"][\"$secKeyName\"]" | base64 -d </code></pre>
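<p>If you'd rather avoid the <code>jq</code> dependency, a go-template can index the key directly (a sketch assuming the same secret and a key named <code>example.key</code>):</p> <pre><code>kubectl get secret example-secret-name \
  -o go-template='{{index .data "example.key"}}' | base64 -d
</code></pre>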
<p>I want to install Kubernetes Minikube (both the Linux OS and Minikube) on a 56GB SSD drive. The Kubernetes web site is silent on disk space requirements for <a href="https://kubernetes.io/docs/setup/minikube/#persistent-volumes" rel="noreferrer">Minikube</a> binaries and storage.</p>
<p>By default Minikube creates a VM with around 16GB of disk; you can configure the VM disk size using:</p> <pre><code>minikube start --vm-driver kvm2 --disk-size 20GB </code></pre> <p>This way it will allocate 20GB of disk space to your VM.</p>
<p>I am using helm to install istio-1.0.0 version with <code>--set grafana.enabled=true</code>.</p> <p>To access the grafana dashboard, I have to do port forwarding using the <code>kubectl</code> command. It works okay. However, I want to access it using a public IP, hence I am using this gateway YAML file:</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: agung-ns spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 15031 name: http-grafana protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana-global-route namespace: agung-ns spec: hosts: - "grafana.domain" gateways: - grafana-gateway - mesh http: - route: - destination: host: "grafana.istio-system" port: number: 3000 weight: 100 </code></pre> <p>I tried to <code>curl</code> it, but it returns an error status, which means something is wrong with the routing logic and/or my configuration above.</p> <pre><code>curl -HHost:grafana.domain http://&lt;my-istioingressgateway-publicip&gt;:15031 -I HTTP/1.1 503 Service Unavailable date: Tue, 14 Aug 2018 13:04:27 GMT server: envoy transfer-encoding: chunked </code></pre> <p>Any idea?</p>
<p>I did expose it like this:</p> <p>grafana.yml</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: istio-system spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - "my.dns.com" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana-vts namespace: istio-system spec: hosts: - "my.dns.com" gateways: - grafana-gateway http: - match: - uri: prefix: / route: - destination: host: grafana port: number: 3000 </code></pre> <p>then:</p> <p><code>kubectl apply -f grafana.yml</code></p>
<p>I'm not sure how load balancing works with Ingress.<br> If I understand correctly, what happens is actually something like this: </p> <p><a href="https://i.stack.imgur.com/DFh63.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DFh63.png" alt="enter image description here"></a></p> <p>I fail to see how the load balancing is performed.<br> What is wrong in the above scheme that I have drawn?<br> Can you help me rectify it? </p> <p>Notes:<br> - The following answer tells me that the Ingress controller itself is of type 'loadbalancer': <a href="https://stackoverflow.com/questions/53959974/ingress-service-type">Ingress service type</a><br> - I use kind ClusterIP because I don't want to expose the loadbalancer to the outside world. The following article does not support this claim, where the load balancer would be provided by the service: </p> <p><a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p> <p><a href="https://i.stack.imgur.com/x5sgl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/x5sgl.png" alt="enter image description here"></a></p>
<p>The <code>ClusterIP</code> services themselves perform load balancing. The naming can be confusing as <code>LoadBalancer</code> services are not the only services that involve load balancing - <code>LoadBalancer</code> actually means something more like 'cloud provider please create an external load balancer and point it at this service'. The kubernetes <code>ClusterIP</code> services also <a href="https://stackoverflow.com/a/49892871/9705485">load-balance across Pods in different Nodes using the kube-proxy</a>. If you don't want kubernetes to do load balancing then you have to specifically disable it by <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">creating a headless service</a>.</p>
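<p>A minimal sketch of such a headless service, assuming the backing pods are labeled <code>app: my-app</code> (the name, label and ports here are hypothetical):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None        # this is what makes the service "headless"
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>With <code>clusterIP: None</code> no virtual IP is allocated and no kube-proxy load balancing happens; a DNS lookup of the service name returns the individual pod IPs instead.</p>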
<blockquote> <p>Updated with more information</p> </blockquote> <p>I am trying to set up OpenTSDB on Bigtable, following this guide: <a href="https://cloud.google.com/solutions/opentsdb-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/solutions/opentsdb-cloud-platform</a></p> <p>Works well, all good. </p> <p>Now I was trying to open the <code>opentsdb-write</code> service with a LoadBalancer (type). Seems to work well, too.</p> <p>Note: using a GCP load balancer.</p> <p>I am then using Insomnia to send a POST to the <code>./api/put</code> endpoint - and I get a <code>204</code> as expected (also, using <code>?details</code> shows no errors, neither does <code>?sync</code>) (see <a href="http://opentsdb.net/docs/build/html/api_http/put.html" rel="nofollow noreferrer">http://opentsdb.net/docs/build/html/api_http/put.html</a>)</p> <p>When querying the data (GET on <code>./api/query</code>), I don't see the data (same effect in Grafana). Also, I do not see any data added in the <code>tsdb</code> table in Bigtable.</p> <p>My conclusion: no data is written to Bigtable, although tsd is returning 204. </p> <p>Interesting fact: the <strong>metric</strong> is created (I can see it in Bigtable with <code>cbt read tsdb-uid</code>), and also the autocomplete in the opentsdb-ui (and Grafana) picks the metric up right away. But no data.</p> <p>When I use the Heapster example as in the tutorial, it all works.</p> <p>And the interesting part (to me):</p> <p>NOTE: It happened a few times, with massive delay or after stopping/restarting the Kubernetes cluster, that the data appeared. Suddenly. I could not reproduce that as of now.</p> <p>I must be missing something really simple. </p> <p>Note: I don't see any errors in the logs (Stackdriver) or UIs (opentsdb UI), neither Bigtable, nor Kubernetes, nor anything I can think of.</p> <p>Note: the configs I am using are as linked in the tutorial.</p> <p>The put I am using (see the 204):</p> <p><a href="https://i.stack.imgur.com/pmaBR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmaBR.png" alt="enter image description here"></a></p> <p>and if I add <code>?details</code>, it indicates success:</p> <p><a href="https://i.stack.imgur.com/cSsJO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cSsJO.png" alt="enter image description here"></a></p>
<p>My guess is that this relates to the opentsdb flush frequency. When a tsdb cluster is shutdown, there's an automatic flush. I'm not 100% sure, but I think that the <code>tsd.storage.flush_interval</code> configuration manages that process.</p> <p>You can reach the team that maintains the libraries via the google-cloud-bigtable-discuss group, which you can get to from the <a href="https://cloud.google.com/bigtable/docs/support/getting-support" rel="nofollow noreferrer">Cloud Bigtable support page</a> for more nuanced discussions.</p> <p>As an FYI, we (Google) are actively updating the <a href="https://cloud.google.com/solutions/opentsdb-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/solutions/opentsdb-cloud-platform</a> to the latest versions of OpenTSDB and AsyncBigtable which should improve performance at high volumes.</p>
<p>I have a Kubernetes job that runs for some time, and I need to check if it failed or was successful. </p> <p>I am checking this periodically: </p> <pre><code>kubectl describe job/myjob | grep "1 Succeeded" </code></pre> <p>This works but I am concerned that a change in kubernetes can break this; say, the message is changed to "1 completed with success" (stupid text but you know what I mean) and now my grep will not find what it is looking for. </p> <p>Any suggestions? this is being done in a bash script. </p>
<p>You can get this information from the job using jsonpath filtering to select the .status.succeeded field of the job you are interested in. It will only return the value you are interested in.</p> <p>from <code>kubectl explain job.status.succeeded</code>:</p> <blockquote> <p>The number of pods which reached phase Succeeded.</p> </blockquote> <p>This command will get you that field for the particular job specified:</p> <pre><code>kubectl get job &lt;jobname&gt; -o jsonpath={.status.succeeded} </code></pre>
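<p>Since you are doing this from a bash script, a small polling sketch built around that command (assuming the job is named <code>myjob</code>; adjust the name and the sleep interval as needed):</p> <pre><code>#!/bin/bash
while true; do
  succeeded=$(kubectl get job myjob -o jsonpath='{.status.succeeded}')
  failed=$(kubectl get job myjob -o jsonpath='{.status.failed}')
  if [ "$succeeded" == "1" ]; then
    echo "job succeeded"
    break
  elif [ "${failed:-0}" -ge 1 ]; then
    echo "job has $failed failed pod(s) so far"
    # decide here whether to keep waiting for retries or give up
  fi
  sleep 10
done
</code></pre> <p><code>.status.failed</code> counts pods that reached phase Failed, so you can combine both fields to decide when to stop waiting.</p>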
<p>I am currently trying to build a build pipeline. The pipeline currently is a job with several init containers. One of the init containers is the actual image builder.</p> <p>To make use of its cache feature, I'd need a mechanism for having the data remain in storage, so the next iteration can use the cached data.</p> <p>The only lead for that would be:</p> <blockquote> <p>However, an administrator can configure a custom recycler pod template using the Kubernetes controller manager command line arguments as described here.</p> </blockquote> <p>(link for that) <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/</a></p> <p>But this link does not really explain anything. And it actually sounds like I'd have to change the recycle policy globally, which is something I'd rather not do.</p> <p>Leading to the question:</p> <p>How to tackle this problem gracefully?</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a>, running your workload in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, to store your cache.</p> <p>Essentially, when you use it a PVC (PersistentVolumeClaim) gets created that is associated with a PersistentVolume (the PV also gets created initially), and the PVC is also associated with your pod. So the next time your pod restarts it uses the same volume. The types of volumes supported will depend on your cloud provider.</p>
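<p>A minimal sketch of what that could look like, assuming a hypothetical builder image and a default StorageClass that supports dynamic provisioning (all names are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: build-worker
spec:
  serviceName: build-worker
  replicas: 1
  selector:
    matchLabels:
      app: build-worker
  template:
    metadata:
      labels:
        app: build-worker
    spec:
      containers:
      - name: builder
        image: my-builder-image:latest   # hypothetical image
        volumeMounts:
        - name: build-cache
          mountPath: /cache              # the image builder's cache directory
  volumeClaimTemplates:                  # one PVC per replica, reused across restarts
  - metadata:
      name: build-cache
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre> <p>Because the PVC created from <code>volumeClaimTemplates</code> survives pod restarts, the next build iteration finds the cached data in <code>/cache</code>.</p>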
<p>I'm just learning Kubernetes and I'd like to avoid spending money on Elastic Load Balancing while running it on AWS. </p> <p>Here's the command I use to install Kubernetes:</p> <pre><code>kops create cluster \ --cloud aws \ --name ${MY_KUBE_NAME}.k8s.local \ --state s3://${MY_KUBE_NAME} \ --master-count 1 \ --master-size ${MY_KUBE_MASTER_AWS_INSTANCE_SIZE} \ --master-volume-size ${MY_KUBE_MASTER_AWS_VOLUME_SIZE} \ --master-zones ${MY_KUBE_AWS_ZONE} \ --zones ${MY_KUBE_AWS_ZONE} \ --node-count 1 \ --node-size ${MY_KUBE_WORKER_AWS_INSTANCE_SIZE} \ --node-volume-size ${MY_KUBE_WORKER_AWS_VOLUME_SIZE} </code></pre> <p>After running that command I can see a load balancer gets created through Amazon's ELB service. </p> <p>Generally, that all worked well for me and then I could use <code>kubectl</code> to monitor and manage my cluster and also install Kubernetes Dashboard with its help. But one thing I don't like is that <code>kops</code> makes use of ELB. That was ok in the beginning and I used the URL provided by the load balancer to access the dashboard. Now I believe I can avoid using ELB to cut down my expenses on AWS. Could you please tell me how I can use <code>kops create cluster</code> without any ELB but still be able to connect to my cluster and dashboard from my local machine? </p>
<p>The LB is needed to talk to the kube-apiserver which runs on the master. You can bypass that by deleting the ELB from the AWS console and modifying your configs to talk directly to the public or private IP of your master. You might have to <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">re-issue your certificates</a> on the master so that you can talk to the new IP address. Kops creates an ELB because that's the more standard, production-ready practice, and it's also compatible with having more than one master. In other words, it's still recommended to keep that ELB.</p> <p>As for the dashboard: generally it is exposed as a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">Kubernetes LoadBalancer Service</a>, which in AWS creates an ELB. You can simply delete the service and the load balancer should be deleted.</p> <pre><code>$ kubectl delete svc &lt;your-dashboard-svc&gt; </code></pre> <p>Now if you want to avoid creating a load balancer for a service, you just create a service with a ClusterIP or a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>. Then you can access your service using something like <code>kubectl proxy</code>.</p>
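<p>For example, with the dashboard re-exposed as a ClusterIP service you could reach it through the API server from your local machine (a sketch assuming the dashboard Service is named <code>kubernetes-dashboard</code> in the <code>kube-system</code> namespace and serves HTTPS; adjust to your install):</p> <pre><code>kubectl proxy --port=8001
# then, on the same machine, open:
#   http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>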
<p>What is the physical representation of a namespace in a cluster (I am using AWS)? Is it an EC2 server?</p> <p>Can someone help me understand by giving a metaphor for the physical representation of:</p> <ol> <li>Cluster</li> <li>Namespace</li> <li>Pods</li> <li>Containers</li> </ol>
<p>Let's think of a <code>namespace</code> as a Linux file system, and ignore mounted directories for the sake of this question:</p> <pre><code>/srv =&gt; namespace-1 /var =&gt; namespace-2 /mnt =&gt; namespace-3 /bin =&gt; namespace-4 </code></pre> <ul> <li>All these 4 directories belong to the same <code>/</code></li> <li>Inside a directory, you can have different types of files</li> <li>If you do <code>ls /srv</code> you won't see files in <code>/var</code></li> <li>Different users can have different types of permissions on each directory</li> </ul> <p>Now let's apply the above 4 points from the view of a k8s namespace:</p> <ul> <li>All these 4 <code>namespaces</code> belong to the same Kubernetes cluster</li> <li>Inside a <code>namespace</code>, you can have different types of Kubernetes objects</li> <li>If you do <code>kubectl get deployments</code> from <code>namespace-1</code>, you won't see deployments in <code>namespace-2</code></li> <li>Different <code>users</code> can have different types of authorization on each <code>namespace</code></li> </ul>
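<p>A quick way to see this namespace scoping in action (a sketch; the namespace and deployment names are just examples):</p> <pre><code>kubectl create namespace namespace-1
kubectl create namespace namespace-2

# create an object inside namespace-1
kubectl -n namespace-1 create deployment web --image=nginx

# listing from another namespace does not show it
kubectl -n namespace-2 get deployments   # -&gt; No resources found
kubectl -n namespace-1 get deployments   # -&gt; shows "web"
</code></pre>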
<p>I have a Kubernetes pod consisting of two containers - main app (writes logs to file on volume) and Fluentd sidecar that tails log file and writes to Elasticsearch.</p> <p>Here is the Fluentd configuration:</p> <pre><code>&lt;source&gt; type tail format none path /test/log/system.log pos_file /test/log/system.log.pos tag anm &lt;/source&gt; &lt;match **&gt; @id elasticsearch @type elasticsearch @log_level debug time_key @timestamp include_timestamp true include_tag_key true host elasticsearch-logging.kube-system.svc.cluster.local port 9200 logstash_format true &lt;buffer&gt; @type file path /var/log/fluentd-buffers/kubernetes.system.buffer flush_mode interval retry_type exponential_backoff flush_thread_count 2 flush_interval 5s retry_forever retry_max_interval 30 chunk_limit_size 2M queue_limit_length 8 overflow_action block &lt;/buffer&gt; &lt;/match&gt; </code></pre> <p>Everything is working, Elasticsearch host &amp; port are correct since API works correctly on that URL. In Kibana I see only records every 5 seconds about Fluentd creating new chunk:</p> <pre><code>2018-12-03 12:15:50 +0000 [debug]: #0 [elasticsearch] Created new chunk chunk_id="57c1d1c105bcc60d2e2e671dfa5bef04" metadata=#&lt;struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag="anm", variables=nil&gt; </code></pre> <p>but no actual logs in Kibana (the ones that are being written by the app to system.log file). Kibana is configured to the "logstash-*" index pattern that matches the one and only existing index.</p> <p>Version of Fluentd image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4</p> <p>Version of Elasticsearch: k8s.gcr.io/elasticsearch:v6.3.0</p> <p>Where can I check to find out what's wrong? Looks like Fluentd does not get to put the logs into Elasticsearch, but what can be the reason?</p>
<p>The answer turned out to be embarrassingly simple; maybe it will help someone in the future.</p> <p>I figured the problem was with this source config line: </p> <pre><code>&lt;source&gt; ... format none ... &lt;/source&gt; </code></pre> <p>That meant that none of the usual tags were added when saving to Elasticsearch (e.g. pod or container name), and I had to search for these records in Kibana in a completely different way. For instance, I used my own tag to search for those records and found them alright. The custom tag was originally added just in case, but turned out to be very useful:</p> <pre><code>&lt;source&gt; ... tag anm ... &lt;/source&gt; </code></pre> <p>So, the final takeaway could be the following. Use "format none" with caution, and if the source data actually is unstructured, add your own tags, and possibly enrich with additional tags/info (e.g. "hostname", etc) using fluentd's <a href="https://docs.fluentd.org/v0.12/articles/filter_record_transformer" rel="nofollow noreferrer">record_transformer</a>, which I ended up also doing. Then it will be much easier to locate the records via Kibana.</p>
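<p>For reference, a minimal sketch of such a <code>record_transformer</code> filter, matching the <code>anm</code> tag used above (the added fields are just examples):</p> <pre><code>&lt;filter anm&gt;
  @type record_transformer
  &lt;record&gt;
    hostname "#{Socket.gethostname}"   # evaluated once when the config is loaded
    source_tag ${tag}                  # copies the fluentd tag into each record
  &lt;/record&gt;
&lt;/filter&gt;
</code></pre>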
<p>If a distributed computing framework spins up nodes for running Java/ Scala operations then it has to include the JVM in every container. E.g. every Map and Reduce step spawns its own JVM.</p> <p>How does the efficiency of this instantiation compare to spinning up containers for languages like Python? Is it a question of milliseconds, few seconds, 30 seconds? Does this cost add up in frameworks like Kubernetes where you need to spin up many containers?</p> <p>I've heard that, much like Alpine Linux is just a few MB, there are stripped down JVMs, but still, there must be a cost. Yet, Scala is the first class citizen in Spark and MR is written in Java.</p>
<p>Linux container technology uses layered filesystems so bigger container images don't generally have a ton of runtime overhead, though you do have to download the image the first time it is used on a node which can potentially add up on truly massive clusters. In general this is not usually a thing to worry about, aside from the well known issues of most JVMs being a bit slow to start up. Spark, however, does not spin up a new container for every operation as you describe. It creates a set of executor containers (pods) which are used for the whole Spark execution run.</p>
<pre><code>kubectl logs web-deployment-76789f7f64-s2b4r </code></pre> <p>returns nothing! The console prompt returns without error. </p> <p>I have a pod which is in a CrashLoopbackOff cycle (but am unable to diagnose it) --> </p> <pre><code>web-deployment-7f985968dc-rhx52 0/1 CrashLoopBackOff 6 7m </code></pre> <p>I am using Azure AKS with kubectl on Windows. I have been running this cluster for a few months without probs. The container runs fine on my workstation with docker-compose.</p> <p>kubectl describe doesn't really help much - no useful information there.</p> <pre><code>kubectl describe pod web-deployment-76789f7f64-s2b4r Name: web-deployment-76789f7f64-j6z5h Namespace: default Node: aks-nodepool1-35657602-0/10.240.0.4 Start Time: Thu, 10 Jan 2019 18:58:35 +0000 Labels: app=stweb pod-template-hash=3234593920 Annotations: &lt;none&gt; Status: Running IP: 10.244.0.25 Controlled By: ReplicaSet/web-deployment-76789f7f64 Containers: stweb: Container ID: docker://d1e184a49931bd01804ace51cb44bb4e3479786ec0df6e406546bfb27ab84e31 Image: virasana/stwebapi:2.0.20190110.20 Image ID: docker-pullable://virasana/stwebapi@sha256:2a1405f30c358f1b2a2579c5f3cc19b7d3cc8e19e9e6dc0061bebb732a05d394 Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 10 Jan 2019 18:59:27 +0000 Finished: Thu, 10 Jan 2019 18:59:27 +0000 Ready: False Restart Count: 3 Environment: SUPPORT_TICKET_DEPLOY_DB_CONN_STRING_AUTH: &lt;set to the key 'SUPPORT_TICKET_DEPLOY_DB_CONN_STRING_AUTH' in secret 'mssql'&gt; Optional: false SUPPORT_TICKET_DEPLOY_DB_CONN_STRING: &lt;set to the key 'SUPPORT_TICKET_DEPLOY_DB_CONN_STRING' in secret 'mssql'&gt; Optional: false SUPPORT_TICKET_DEPLOY_JWT_SECRET: &lt;set to the key 'SUPPORT_TICKET_DEPLOY_JWT_SECRET' in secret 'mssql'&gt; Optional: false KUBERNETES_PORT_443_TCP_ADDR: kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io KUBERNETES_PORT: tcp://kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io:443 KUBERNETES_PORT_443_TCP: tcp://kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io:443 KUBERNETES_SERVICE_HOST: kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-98c7q (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-98c7q: Type: Secret (a volume populated by a Secret) SecretName: default-token-98c7q Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned web-deployment-76789f7f64-j6z5h to aks-nodepool1-35657602-0 Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-35657602-0 MountVolume.SetUp succeeded for volume "default-token-98c7q" Normal Pulled 24s (x4 over 1m) kubelet, aks-nodepool1-35657602-0 Container image "virasana/stwebapi:2.0.20190110.20" already present on machine Normal Created 22s (x4 over 1m) kubelet, aks-nodepool1-35657602-0 Created container Normal Started 22s (x4 over 1m) kubelet, aks-nodepool1-35657602-0 Started container Warning BackOff 7s (x6 over 1m) kubelet, aks-nodepool1-35657602-0 Back-off restarting failed container </code></pre> <p>Any ideas on how to proceed?</p> <p>Many Thanks!</p>
<p>I am using a multi-stage docker build, and was building using the wrong target! I had cloned a previous Visual Studio docker build task, which had the following argument:</p> <pre><code>--target=test </code></pre> <p>Because the "test" build stage has no defined entry point, the container was launching and then exiting without logging anything! So that's why <strong>kubectl logs</strong> returned blank.</p> <p>I changed this to</p> <pre><code>--target=final </code></pre> <p>and all is working!</p> <p>My Dockerfile looks like this: </p> <pre><code>FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base WORKDIR /app EXPOSE 80 FROM microsoft/dotnet:2.1-sdk AS build WORKDIR /src COPY . . WORKDIR "/src" RUN dotnet clean ./ST.Web/ST.Web.csproj RUN dotnet build ./ST.Web/ST.Web.csproj -c Release -o /app FROM build AS test RUN dotnet tool install -g dotnet-reportgenerator-globaltool RUN chmod 755 ./run-tests.sh &amp;&amp; ./run-tests.sh FROM build AS publish RUN dotnet publish ./ST.Web/ST.Web.csproj -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "ST.Web.dll"] </code></pre>
<p>I have created a MySQL deployment in Kubernetes and exposed it as a NodePort Service.</p> <p><em><strong>What I can do:</strong></em><br> Access it from inside the cluster using<br> <code>kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword</code></p> <p><strong>What I want to do:</strong><br> Access the MySQL server from outside the cluster (like accessing a normal MySQL server).<br> Kubernetes v1.13 in DigitalOcean Cloud.<br> Guide me, please.</p>
<p>You can access it with <code>mysql -u {username} -p{password} -h {any kubernetes worker IP} -P {nodePort}</code>, after you start the MySQL container and expose it as a NodePort through a Service.</p>
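<p>A sketch of such a NodePort Service, assuming your MySQL pods carry the label <code>app: mysql</code> (adjust the selector and port numbers to your deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-nodeport
spec:
  type: NodePort
  selector:
    app: mysql            # must match the labels on your MySQL pods
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30036       # optional; must fall in the 30000-32767 range
</code></pre> <p>With this in place, the connection from outside the cluster would look like <code>mysql -u root -p{password} -h {worker-node-public-IP} -P 30036</code>, provided the node's firewall allows the node port.</p>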
<p>I have Kubernetes Cluster setup with a master and worker node. Kubectl cluster-info shows kubernetes-master as well as kube-dns running successfully.</p> <p>I am trying to access below URL and since it is internal to my organization, below URL is not visible to external world. </p> <p><a href="https://10.118.3.22:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://10.118.3.22:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a></p> <p>But I am getting below error when I access it -</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "services \"kube-dns:dns\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"", "reason": "Forbidden", "details": { "name": "kube-dns:dns", "kind": "services" }, "code": 403 } </code></pre> <p>Please let me know how to grant full access to anonymous user. I read RBAC mentioned in <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a> But unable to figure out what exactly I need to do. Thanks</p>
<p>You can grant admin privileges to the anonymous user, but I strongly discourage it. This will give anyone outside the cluster access to the services using the URL.</p> <p>If after that you still decide to grant the access to the anonymous user, you can do it the following way:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: anonymous-role rules: - apiGroups: [""] resources: ["services/proxy"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: anonymous-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: anonymous-role subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: system:anonymous </code></pre> <p>This will allow the anonymous user to proxy your services, but not access all resources. If you want that for all resources you need to provide <code>resources: ["*"]</code> in anonymous-role.</p> <p>Hope this helps.</p>
<p>As the title indicates I'm trying to setup grafana using helmfile with a datasource via values.</p> <p>I can find the docs <a href="https://github.com/helm/charts/tree/master/stable/grafana" rel="noreferrer">here</a> but sadly my knowledge is too limited to make it work.</p> <p>The relevant part of my helmfile is here</p> <pre><code>releases: ... - name: grafana namespace: grafana chart: stable/grafana values: - datasources: - name: Prometheus type: prometheus url: http://prometheus-server.prometheus.svc.cluster.local </code></pre> <p>I stumbled upon <a href="https://github.com/cloudposse/helmfiles/pull/4#pullrequestreview-142973932" rel="noreferrer">this</a> and it seems I can also do it via an environment variable but I can't seem to find an easy way to set such in my helmfile.</p> <p>It would be greatly appreciated if someone with a better understanding of helmfile, json and whatnot could either show me or guide me in the right direction.</p> <p><strong>Update</strong>: Thanks to @WindyFields my final solution is as follows</p> <pre><code>releases: ... - name: grafana namespace: grafana chart: stable/grafana values: - datasources: datasources.yaml: apiVersion: 1 datasources: - name: Prometheus type: prometheus access: proxy url: http://prometheus-server.prometheus.svc.cluster.local isDefault: true </code></pre>
<p><strong>Answer</strong></p> <p>Just add the following snippet straight into <code>values.yaml</code>:</p> <pre><code>datasources: datasources.yaml: apiVersion: 1 datasources: - name: Prometheus type: prometheus url: http://prometheus-server.prometheus.svc.cluster.local </code></pre> <p><strong>Details</strong></p> <p>After Helm renders the template, the following ConfigMap will be generated:</p> <pre><code># Source: grafana/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: RELEASE-NAME-grafana labels: app: grafana chart: grafana-1.20.0 release: RELEASE-NAME heritage: Tiller data: grafana.ini: | ... datasources.yaml: | apiVersion: 1 datasources: - name: Prometheus type: prometheus url: http://prometheus-server.prometheus.svc.cluster.local </code></pre> <p>After Helm installs the chart, Kubernetes will take the datasource configuration <code>datasources.yaml</code> from the ConfigMap and mount it at the following path <code>/etc/grafana/provisioning/datasources/datasources.yaml</code>, where it will be picked up by the Grafana app.</p> <p>See Grafana <a href="http://docs.grafana.org/administration/provisioning/" rel="noreferrer">datasources provisioning doc</a>.</p> <p><strong>Tip:</strong> to see the rendered Helm template, use <code>helm template &lt;path_to_chart&gt;</code></p>
<p>Now we're using Kubernetes to run users' tasks. We need the features of Kubernetes Jobs to restart the tasks when a failure occurs.</p> <p>But our users may submit problematic applications which always exit with a non-zero code. Kubernetes will restart such a task over and over again.</p> <p>Is it possible to configure the number of restarts for this?</p>
<p><code>backoffLimit: 4</code> - the number of retries before the Job is marked as failed</p> <p><code>completions: 3</code> - the number of successful pod completions needed for the Job to be considered complete</p>
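<p>For context, this is where those fields live in a Job spec (a minimal sketch; the job name, image and command are placeholders):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  completions: 3        # run to 3 successful completions
  backoffLimit: 4       # give up after 4 failed retries
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo running the task"]
</code></pre>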
<p>I am trying to use it to run some integration tests, to verify that the service code I am deploying is actually doing the right thing.</p> <p>Basically, my setup is (as described here: <a href="https://docs.helm.sh/developing_charts/#chart-tests" rel="nofollow noreferrer">https://docs.helm.sh/developing_charts/#chart-tests</a>) to create this <code>templates/tests/integration-test.yaml</code> chart test file, and inside it specify a container to run, which basically is a customized Maven image with the test code added in; the test container is simply started by the command “mvn test”, which does some simple curl checks on the kube service this whole helm release deploys.</p> <p>In this way, the helm test does work.</p> <p>However, the issue is that while the helm test is running, the new version of the service code is actually already online and being exposed to the outside world/users. I can of course immediately do a rollback if the helm test fails, but this will not stop me from hosting the problem version of the service code to the outside world for a while.</p> <p>Is there a way to run a service/integration test on a pod, after the pod is started but before it is exposed to the Kubernetes service?</p>
<p>Ideally you'll install and test on a test environment first, either a dedicated test cluster or namespace. For an additional check you could install the chart first into a new namespace, let the tests run there, and then delete that namespace when everything has passed. This does require writing the tests in a way that they can hit URLs that are specific to that namespace. Cluster-internal URLs based on service names will be namespace-relative anyway, but if you use external URLs in the tests then you'd either need to switch them to internal or use prefixing.</p>
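<p>With Helm 2 (which the linked chart-tests doc refers to), such a throwaway-namespace flow could look roughly like this (the release and namespace names are just examples):</p> <pre><code># install the candidate version into an isolated namespace
helm install ./mychart --name myapp-candidate --namespace myapp-candidate

# run the chart tests defined under templates/tests/
helm test myapp-candidate

# only if the tests pass, upgrade the real release; then clean up
helm delete --purge myapp-candidate
kubectl delete namespace myapp-candidate
</code></pre>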
<p>Right now I have an architecture like this:</p> <pre><code> internet | [ IngressController ] | | [ Ingress A] [ Ingress B] --|-----|-- --|-----|-- [ Service A] [ Service B] | | [ Pod A] [ Pod B] </code></pre> <p>So if <code>Service A</code> requests data from <code>Service B</code> it uses the <code>fully qualified name</code>, e.g. </p> <blockquote> <p><code>ResponseEntity&lt;Object&gt; response = restTemplate.exchange(host.com/serviceB, HttpMethod.POST, entity, Object.class);</code></p> </blockquote> <p>As all of them are in the same cluster, I would like to change the architecture to improve communication between the services. I imagined something like this:</p> <pre><code> internet | [ IngressController ] | | [ Ingress A] [ Ingress B] --|-----|-- --|-----|-- [ Service A]-[ Service B] | | [ Pod A] [ Pod B] </code></pre> <p>So the services would be allowed to request each other via the name only, or something similar. I am just not sure how to realize this using <code>REST services</code>.</p>
<p>You need to call <code>Service B</code> by its <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS name</a> and it should connect directly without going through the ingress.</p> <p>To clarify, <code>Service A</code> doesn't talk to <code>Service B</code>, but rather <code>Pod A</code> talks to <code>Service B</code> and <code>Pod B</code> talks to <code>Service A</code>. So, as long as all pods and services are in the same <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">Kubernetes namespace</a>, you can communicate with the service using the service name as the hostname; that will resolve to the internal IP address of the service and then forward the traffic to the pod.</p> <p>If the pods happen to be in different namespaces you would connect with the namespace name added to the service: <code>&lt;service-name&gt;.&lt;namespace-name&gt;</code> or with <code>&lt;service-name&gt;.&lt;namespace-name&gt;.svc.cluster.local</code></p> <p>Hope it helps.</p>
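<p>In terms of the <code>RestTemplate</code> call from the question, the only change is the hostname (a sketch assuming Service B is exposed as a Service named <code>service-b</code> in the same namespace; the path is a placeholder):</p> <pre><code>// short form works inside the same namespace; the fully qualified
// form would be "http://service-b.default.svc.cluster.local/serviceB"
String url = "http://service-b/serviceB";
ResponseEntity&lt;Object&gt; response = restTemplate.exchange(url, HttpMethod.POST, entity, Object.class);
</code></pre>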
<p>I am running Kubernetes jobs via cron. In some cases the jobs may fail and I want them to restart. I'm scheduling the jobs like this:</p> <p><code>kubectl run collector-60053 --schedule=30 10 * * * * --image=gcr.io/myimage/collector --restart=OnFailure --command node collector.js</code></p> <p>I'm having a problem where some of these jobs are running and failing but the associated pods are disappearing, so I have no way to look at the logs and they are not restarting.</p> <p>For example:</p> <pre><code>$ kubectl get jobs | grep 60053 collector-60053-1546943400 1 0 1h $ kubectl get pods -a | grep 60053 $ // nothing returned </code></pre> <p>This is on Google Cloud Platform running 1.10.9-gke.5</p> <p>Any help would be much appreciated!</p> <p>EDIT:</p> <p>I discovered some more information. I have auto-scaling setup on my GCP cluster. I noticed that when the servers are removed the pods are also removed (and their meta data). Is that expected behavior? Unfortunately this gives me no easy way to look at the pod logs.</p> <p>My theory is that as pods fail, the CrashLoopBackOff kicks in and eventually auto-scaling decides that the node is no longer needed (it doesn't see the pod as an active workload). At this point, the node goes away and so do the pods. I don't think this is expected behavior with Restart OnFailure but I basically witnessed this by watching it closely.</p>
<p>After digging much further into this issue, I have an understanding of my situation. According to <a href="https://github.com/kubernetes/kubernetes/issues/54870" rel="nofollow noreferrer">issue 54870</a> on the Kubernetes repository, there are some problems with jobs when set to <code>Restart=OnFailure</code>.</p> <p>I have changed my configuration to use <code>Restart=Never</code> and to set a <code>backoffLimit</code> for the job. Even though restart is set to never, in my testing Kubernetes will actually re-run the pods up to the <code>backoffLimit</code> setting and keep the failed pods around for inspection.</p>
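<p>Expressed as a manifest instead of <code>kubectl run</code>, the equivalent scheduled job could look roughly like this (a sketch based on the command in the question; the schedule and backoff value are examples):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: collector-60053
spec:
  schedule: "30 10 * * *"
  jobTemplate:
    spec:
      backoffLimit: 4            # keep failed pods around, up to this many retries
      template:
        spec:
          restartPolicy: Never   # new pods are created per retry instead of in-place restarts
          containers:
          - name: collector
            image: gcr.io/myimage/collector
            command: ["node", "collector.js"]
</code></pre>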
<p>I am using kubernetes 1.9.2, created by kubeadm. This kubernetes cluster is running on 4 EC2 nodes.</p> <p>I have a deployment that requires using a cache in every pod. In order to accomplish that we used session affinity on the ClusterIP Service.</p> <p>Since I have an ELB in front of my Kubernetes cluster, I wonder how the session affinity behaves.</p> <p>The natural behavior would be that every client IP gets routed to a different pod, but given that the traffic is transferred via the ELB, which IP does the session affinity recognize, the ELB IP or the actual client IP?</p> <p>When I check the traffic to the pods I see that 102 pods get all the requests and the 2 other pods are just waiting.</p> <p>Many thanks for any help.</p>
<p><code>SessionAffinity</code> recognizes the client IP, and the ELB should pass the client IP.</p> <p>I think you should work with <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html" rel="nofollow noreferrer">HTTP Headers and Classic Load Balancers</a> and set up <code>X-Forwarded-For: client-ip-address</code>.</p> <p>Also, this seems to be a known issue: <a href="https://github.com/kubernetes/ingress-nginx/issues/3056" rel="nofollow noreferrer">enabling Session affinity goes to a single pod only #3056</a>.</p> <p>It was reported for the <code>0.18.0</code> and <code>0.19.0</code> versions of the NGINX Ingress controller.</p> <p>The issue was closed with a comment that it was fixed in version <code>0.21.0</code>, but in December the initial author said it still doesn't work for him.</p>
<p>I am creating a docker container (using docker run) in a Kubernetes environment by invoking a REST API.<br> I have mounted the docker.sock of the host machine, and I am building an image and running that image from the REST API.<br> Now I need to connect to this container from some other container which is actually started by kubectl from a deployment.yml file.<br> But when I used kubectl describe pod (pod name), my container created using the REST API is not there. <strong>So where is this container running and how can I connect to it from some other container</strong>?</p>
<p>Are you running the container in the same namespace as the deployment.yml? One of the options to check that would be to run:</p> <pre><code>kubectl get pods --all-namespaces </code></pre> <p>If you are not able to find the docker container there, then I would suggest performing the steps below:</p> <ol> <li>docker ps -a (verify the running docker status)</li> <li>Ensure that while mounting docker.sock there are no permission errors</li> <li>If there are permission errors, escalate privileges to the appropriate level</li> </ol> <p>To answer the second question, a connection between two containers should be possible by referencing the cluster DNS in the format below:</p> <pre><code>"&lt;servicename&gt;.&lt;namespacename&gt;.svc.cluster.local" </code></pre> <p>I would also ask you to detail the steps, code and errors (if there are any) so I can better answer the question.</p>
<p>I have defined a couple of case classes for JSON representation but I am not sure whether I did it properly as there a lot of nested case classes. Entities like spec, meta and so on are of type JSONObject as well as the Custom object itself.</p> <p>Here is all the classes I have defined:</p> <pre><code> case class CustomObject(apiVersion: String,kind: String, metadata: Metadata,spec: Spec,labels: Object,version: String) case class Metadata(creationTimestamp: String, generation: Int, uid: String,resourceVersion: String,name: String,namespace: String,selfLink: String) case class Spec(mode: String,image: String,imagePullPolicy: String, mainApplicationFile: String,mainClass: String,deps: Deps,driver: Driver,executor: Executor,subresources: Subresources) case class Driver(cores: Double,coreLimit: String,memory: String,serviceAccount: String,labels: Labels) case class Executor(cores: Double,instances: Double,memory: String,labels: Labels) case class Labels(version: String) case class Subresources(status: Status) case class Status() case class Deps() </code></pre> <p>And this is a JSON structure for the custom K8s object I need to transform:</p> <pre><code>{ "apiVersion": "sparkoperator.k8s.io/v1alpha1", "kind": "SparkApplication", "metadata": { "creationTimestamp": "2019-01-11T15:58:45Z", "generation": 1, "name": "spark-example", "namespace": "default", "resourceVersion": "268972", "selfLink": "/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example", "uid": "uid" }, "spec": { "deps": {}, "driver": { "coreLimit": "1000m", "cores": 0.1, "labels": { "version": "2.4.0" }, "memory": "1024m", "serviceAccount": "default" }, "executor": { "cores": 1, "instances": 1, "labels": { "version": "2.4.0" }, "memory": "1024m" }, "image": "gcr.io/ynli-k8s/spark:v2.4.0, "imagePullPolicy": "Always", "mainApplicationFile": "http://localhost:8089/spark_k8s_airflow.jar", "mainClass": "org.apache.spark.examples.SparkExample", "mode": "cluster", "subresources": { "status": {} }, "type": "Scala" } } </code></pre> <p>UPDATE: I want to convert JSON into case classes with Circe, however, with such classes I face this error:</p> <pre><code>Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject] implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject] </code></pre> <p>I have defined implicit decoders for all case classes:</p> <pre><code> implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder[Labels] implicit val customObjectSubresourcesDecoder: Decoder[Subresources] = deriveDecoder[Subresources] implicit val customObjectDepsDecoder: Decoder[Deps] = deriveDecoder[Deps] implicit val customObjectStatusDecoder: Decoder[Status] = deriveDecoder[Status] implicit val customObjectExecutorDecoder: Decoder[Executor] = deriveDecoder[Executor] implicit val customObjectDriverDecoder: Decoder[Driver] = deriveDecoder[Driver] implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder[Spec] implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder[Metadata] implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject] </code></pre>
<p>The reason you can't derive a decode for <code>CustomObject</code> is because of the <code>labels: Object</code> member. </p> <p>In circe all decoding is driven by static types, and circe does not provide encoders or decoders for types like <code>Object</code> or <code>Any</code>, which have no useful static information. </p> <p>If you change that case class to something like the following:</p> <pre><code>case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec) </code></pre> <p>…and leave the rest of your code as is, with the import:</p> <pre><code>import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder </code></pre> <p>And define your JSON document as <code>doc</code> (after adding a quotation mark to the <code>"image": "gcr.io/ynli-k8s/spark:v2.4.0,</code> line to make it valid JSON), the following should work just fine:</p> <pre><code>scala&gt; io.circe.jawn.decode[CustomObject](doc) res0: Either[io.circe.Error,CustomObject] = Right(CustomObject(sparkoperator.k8s.io/v1alpha1,SparkApplication,Metadata(2019-01-11T15:58:45Z,1,uid,268972,spark-example,default,/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example),Spec(cluster,gcr.io/ynli-k8s/spark:v2.4.0,Always,http://localhost:8089/spark_k8s_airflow.jar,org.apache.spark.examples.SparkExample,Deps(),Driver(0.1,1000m,1024m,default,Labels(2.4.0)),Executor(1.0,1.0,1024m,Labels(2.4.0)),Subresources(Status())))) </code></pre> <p>Despite what one of the other answers says, circe can definitely derive encoders and decoders for case classes with no members—that's definitely not the problem here.</p> <p>As a side note, I wish it were possible to have better error messages than this:</p> <pre><code>Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject </code></pre> <p>But given the way circe-generic has to use Shapeless's <code>Lazy</code> right now, this is the best we can get. You can try <a href="https://github.com/circe/circe-derivation" rel="nofollow noreferrer">circe-derivation</a> for a mostly drop-in alternative for circe-generic's semi-automatic derivation that has better error messages (and some other advantages), or you can use a compiler plugin like <a href="https://github.com/tek/splain" rel="nofollow noreferrer">splain</a> that's specifically designed to give better error messages even in the presence of things like <code>shapeless.Lazy</code>.</p> <p>As one final note, you can clean up your semi-automatic definitions a bit by letting the type parameter on <code>deriveDecoder</code> be inferred:</p> <pre><code>implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder </code></pre> <p>This is entirely a matter of taste, but I find it a little less noisy to read.</p>
<p>After creation of a standard GKE cluster in the Google Cloud Platform Console, when I click on the cluster and look at the cluster's settings I find a 'Permissions' section, which looks like this: <a href="https://i.stack.imgur.com/A5NpQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/A5NpQ.png" alt="enter image description here"></a></p> <p>What I don't understand is that I believe I have allowed API access to a lot of these services, so why does only 'Cloud Platform' show 'enabled'? Is this what was enabled at creation of the cluster, maybe?</p> <p>When selecting 'edit' you cannot 'enable' these services from here, so what exactly are these Permissions?</p>
<p>The GKE cluster will be created with the permissions that are set in the 'Access scopes' section of the 'Advanced edit' tab. So only the APIs with access enabled in this section will be shown as enabled. These permissions denote the type and level of API access granted to the VMs in the node pool. Scopes inform the access level your cluster nodes will have to specific GCP services as a whole. Please see this <a href="https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam" rel="noreferrer">link</a> for more information about access scopes.</p> <p>In the 'Create a Kubernetes cluster' tab, click 'Advanced edit'. Then you will see another tab called 'Edit node pool' pop up with more options. If you click 'Set access for each API', you will see the option to set these permissions.</p> <p>'Permissions' are defined when the cluster is created. You cannot edit them directly on the cluster after creation. You may want to create a new cluster with appropriate permissions, or create a new Node Pool with the new scopes you need and then delete your old 'default' Node Pool, as specified in this <a href="https://stackoverflow.com/questions/40134385/is-it-necessary-to-recreate-a-google-container-engine-cluster-to-modify-api-perm">link</a>.</p>
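<p>If you go the new-node-pool route, the scopes can also be set from the command line (a sketch with placeholder names; the scope URLs shown are just examples of commonly needed ones):</p> <pre><code>gcloud container node-pools create new-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring
</code></pre>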
<p>I am creating a docker container (using docker run) in a Kubernetes environment by invoking a REST API.<br> I have mounted the docker.sock of the host machine, and I am building an image and running that image from the REST API.<br> Now I need to connect to this container from some other container which is actually started by kubectl from a deployment.yml file.<br> But when I used kubectl describe pod (pod name), my container created using the REST API is not there. <strong>So where is this container running and how can I connect to it from some other container</strong>?</p>
<p>You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually <code>docker run</code> (or equivalent) and as you note normal administrative calls like <code>kubectl get pods</code> won't see it; the CPU and memory used by the pod won't be known about by the node interface and this could cause a node to become over utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider it'll be hard to make your container accessible at all, much less from a pod running on a different node.</p> <p>A process running in a pod can <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">access the Kubernetes API directly</a>, though. That page notes that all of the official client libraries are aware of the conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, <code>servicename.namespacename.svc.cluster.local</code> is a valid DNS name that reaches any Pod connected to the Service.)</p> <p>You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (<em>e.g.</em>, RabbitMQ) and then launch a pool of workers that connects to it. You can control the size of the worker queue using a Deployment. This is easier to develop since it avoids a hard dependency on Kubernetes, and easier to manage since it prevents a flood of dynamic jobs from overwhelming your cluster.</p>
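<p>As a rough illustration of the Job-plus-Service approach (all names and the image are placeholders; the Job object itself can be created from another pod via any of the official client libraries):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: image-build-job
spec:
  template:
    metadata:
      labels:
        app: image-build-job      # the Service below selects this label
    spec:
      restartPolicy: Never
      containers:
      - name: builder
        image: my-builder:latest  # hypothetical builder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: image-build-job
spec:
  selector:
    app: image-build-job
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>Other pods can then reach the builder at <code>image-build-job.&lt;namespace&gt;.svc.cluster.local</code>, and Kubernetes stays fully aware of the workload.</p>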
<p>I'm new to Helm and Kubernetes and cannot figure out how to use <code>helm install --name kibana --namespace logging stable/kibana</code> with the Logtrail plugin enabled. I can see there's an option in the <a href="https://github.com/helm/charts/blob/master/stable/kibana/values.yaml" rel="nofollow noreferrer">values.yaml file</a> to enable plugins during installation but I cannot figure out how to set it. </p> <p>I've tried this without success:</p> <pre><code>helm install --name kibana --namespace logging stable/kibana \ --set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip </code></pre> <p><strong>Update:</strong></p> <p>As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the above mentioned settings are not the only ones that one would have to provide to get the Logtrail plugin working in Kibana. Some configuration for Logtrail must be set before doing the <code>helm install</code>. And here's how to set it. In your custom values file set the following:</p> <pre><code>extraConfigMapMounts: - name: logtrail configMap: logtrail mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json subPath: logtrail.json </code></pre> <p>After that the full content of your custom values file should look similar to this: </p> <pre><code>image: repository: "docker.elastic.co/kibana/kibana-oss" tag: "6.5.4" pullPolicy: "IfNotPresent" commandline: args: [] env: {} # All Kibana configuration options are adjustable via env vars. # To adjust a config option to an env var uppercase + replace `.` with `_` # Ref: https://www.elastic.co/guide/en/kibana/current/settings.html # # ELASTICSEARCH_URL: http://elasticsearch-client:9200 # SERVER_PORT: 5601 # LOGGING_VERBOSE: "true" # SERVER_DEFAULTROUTE: "/app/kibana" files: kibana.yml: ## Default Kibana configuration from kibana-docker. server.name: kibana server.host: "0" elasticsearch.url: http://elasticsearch:9200 ## Custom config properties below ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html # server.port: 5601 # logging.verbose: "true" # server.defaultRoute: "/app/kibana" deployment: annotations: {} service: type: ClusterIP externalPort: 443 internalPort: 5601 # authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer ## External IP addresses of service ## Default: nil ## # externalIPs: # - 192.168.0.1 # ## LoadBalancer IP if service.type is LoadBalancer ## Default: nil ## # loadBalancerIP: 10.2.2.2 annotations: {} # Annotation example: setup ssl with aws cert when service.type is LoadBalancer # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT labels: {} ## Label example: show service URL in `kubectl cluster-info` # kubernetes.io/cluster-service: "true" ## Limit load balancer source ips to list of CIDRs (where available) # loadBalancerSourceRanges: [] ingress: enabled: false # hosts: # - kibana.localhost.localdomain # - localhost.localdomain/kibana # annotations: # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" # tls: # - secretName: chart-example-tls # hosts: # - chart-example.local serviceAccount: # Specifies whether a service account should be created create: false # The name of the service account to use. 
# If not set and create is true, a name is generated using the fullname template # If set and create is false, the service account must be existing name: livenessProbe: enabled: false initialDelaySeconds: 30 timeoutSeconds: 10 readinessProbe: enabled: false initialDelaySeconds: 30 timeoutSeconds: 10 periodSeconds: 10 successThreshold: 5 # Enable an authproxy. Specify container in extraContainers authProxyEnabled: false extraContainers: | # - name: proxy # image: quay.io/gambol99/keycloak-proxy:latest # args: # - --resource=uri=/* # - --discovery-url=https://discovery-url # - --client-id=client # - --client-secret=secret # - --listen=0.0.0.0:5602 # - --upstream-url=http://127.0.0.1:5601 # ports: # - name: web # containerPort: 9090 resources: {} # limits: # cpu: 100m # memory: 300Mi # requests: # cpu: 100m # memory: 300Mi priorityClassName: "" # Affinity for pod assignment # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity # affinity: {} # Tolerations for pod assignment # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ tolerations: [] # Node labels for pod assignment # Ref: https://kubernetes.io/docs/user-guide/node-selection/ nodeSelector: {} podAnnotations: {} replicaCount: 1 revisionHistoryLimit: 3 # To export a dashboard from a running Kibana 6.3.x use: # curl --user &lt;username&gt;:&lt;password&gt; -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=&lt;some-dashboard-uuid&gt; &gt; my-dashboard.json # A dashboard is defined by a name and a string with the json payload or the download url dashboardImport: timeout: 60 xpackauth: enabled: false username: myuser password: mypass dashboards: {} # k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json # List of plugins to install using initContainer # NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well. plugins: # set to true to enable plugins installation enabled: false # set to true to remove all kibana plugins before installation reset: false # Use &lt;plugin_name,version,url&gt; to add/upgrade plugin values: - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip # - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip # - other_plugin persistentVolumeClaim: # set to true to use pvc enabled: false # set to true to use you own pvc existingClaim: false annotations: {} accessModes: - ReadWriteOnce size: "5Gi" ## If defined, storageClassName: &lt;storageClass&gt; ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. 
(gp2 on AWS, standard on ## GKE, AWS &amp; OpenStack) ## # storageClass: "-" # default security context securityContext: enabled: false allowPrivilegeEscalation: false runAsUser: 1000 fsGroup: 2000 extraConfigMapMounts: - name: logtrail configMap: logtrail mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json subPath: logtrail.json </code></pre> <p>And the last thing you should do is add this ConfigMap resource to Kubernetes:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: logtrail namespace: logging data: logtrail.json: | { "version" : 1, "index_patterns" : [ { "es": { "default_index": "logstash-*" }, "tail_interval_in_seconds": 10, "es_index_time_offset_in_seconds": 0, "display_timezone": "local", "display_timestamp_format": "MMM DD HH:mm:ss", "max_buckets": 500, "default_time_range_in_days" : 0, "max_hosts": 100, "max_events_to_keep_in_viewer": 5000, "fields" : { "mapping" : { "timestamp" : "@timestamp", "hostname" : "kubernetes.host", "program": "kubernetes.pod_name", "message": "log" }, "message_format": "{{{log}}}" }, "color_mapping" : { } }] } </code></pre> <p>After that you're ready to <code>helm install</code> with the values file specified via the <code>-f</code> flag.</p>
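<p>With the ConfigMap applied and the custom values saved to a file (the filename here is just an example), the install command then becomes something like:</p> <pre><code>helm install --name kibana --namespace logging stable/kibana -f kibana-values.yaml
</code></pre>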
<p>Getting input with <code>--set</code> that matches what the <a href="https://github.com/helm/charts/blob/master/stable/kibana/values.yaml#L148" rel="nofollow noreferrer">example in the values file</a> has is a bit tricky. Following the example, we want the values to be:</p> <pre><code>plugins: enabled: true values: - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip </code></pre> <p>The <code>plugins.values</code> entry here is tricky because it is an array, which means you need to enclose it with <a href="https://github.com/helm/helm/issues/1987" rel="nofollow noreferrer"><code>{}</code></a>. And the relevant entry contains commas, which <a href="https://github.com/helm/helm/issues/1556#issuecomment-342169418" rel="nofollow noreferrer">have to be escaped with a backslash</a>. To get it to match you can use:</p> <p><code>helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}</code></p> <p>If you add <code>--dry-run --debug</code> then you can see what the computed values are for any command you run, including with <code>--set</code>, so this can help check the match. This kind of value is easier to set with a <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/values_files.md#values-files" rel="nofollow noreferrer">custom values file referenced with -f</a> as it avoids having to work out how the <code>--set</code> expression evaluates to values.</p>
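<p>For reference, a minimal values file covering just the plugin installation (the rest of the chart defaults stay untouched) might look like this when passed with <code>-f</code>:</p> <pre><code>plugins:
  enabled: true
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
</code></pre>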
<p>Replica Set 1</p> <pre><code>apiVersion: apps/v1 kind: ReplicaSet metadata: labels: app: nginx name: rs-1 spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx version: 1.7.1 spec: containers: - image: nginx:1.7.1 name: nginx-1 restartPolicy: Always </code></pre> <p>Replica Set 2</p> <pre><code>apiVersion: apps/v1 kind: ReplicaSet metadata: labels: app: nginx name: rs-2 spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx version: 1.7.9 spec: containers: - image: nginx:1.7.9 name: nginx-1 restartPolicy: Always </code></pre> <p>When I create these two ReplicaSets, one ignores the pods created by the other. </p> <pre><code>C02T30K2GTFM:ask erkanerol$ kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS rs-1-996cz 1/1 Running 0 5m13s app=nginx,version=1.7.1 rs-1-ktv9z 1/1 Running 0 5m13s app=nginx,version=1.7.1 rs-1-w7sbg 1/1 Running 0 5m13s app=nginx,version=1.7.1 rs-2-2z8rb 1/1 Running 0 4m26s app=nginx,version=1.7.9 rs-2-5c56s 1/1 Running 0 4m26s app=nginx,version=1.7.9 rs-2-hls9p 1/1 Running 0 4m26s app=nginx,version=1.7.9 </code></pre> <p>As far as I understand from the documentation, if there are enough pods which match a ReplicaSet's selector, it shouldn't create new pods. Why is this happening? Is it using ownerReferences?</p>
<p>It seems they are using ownerReferences. If so, it doesn't fit the documented behaviour. </p> <p>PR: <a href="https://github.com/kubernetes/kubernetes/pull/27600" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/27600</a></p> <p>Code: <a href="https://github.com/kubernetes/kubernetes/blob/0048d2da400b8c48ae83acc6a223a65f3551674a/pkg/controller/controller_ref_manager.go#L69-L72" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/0048d2da400b8c48ae83acc6a223a65f3551674a/pkg/controller/controller_ref_manager.go#L69-L72</a></p> <p>Issue: <a href="https://github.com/kubernetes/website/issues/12205" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/12205</a></p>
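<p>One quick way to confirm this on your own cluster is to look at the <code>ownerReferences</code> on one of the pods (the pod name here is taken from the listing in the question):</p> <pre><code>kubectl get pod rs-1-996cz -o jsonpath='{.metadata.ownerReferences}'
</code></pre> <p>Each pod carries a reference back to the ReplicaSet that created it, which is why the two ReplicaSets don't count each other's pods even though the label selectors overlap.</p>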
<p>I want to set up a working environment on a local machine where I had installed microk8s before. When I install Jenkins from the Helm chart (stable/jenkins) I have a problem:</p> <blockquote> <p>pod has unbound immediate PersistentVolumeClaims</p> </blockquote> <p>I started the cluster on Linux Ubuntu 18.x, which is installed and working in Oracle VirtualBox. Please give me any tips.</p> <p>The persistent volume started without any problems. I tried to change the size of the PV but that didn't help with the problem. In the pod's log there is only one sentence, saying that the pod is initializing. Searching for similar problems gave me nothing.</p> <p>Pod's log content:</p> <blockquote> <p>container &quot;jazzed-anteater-jenkins&quot; in pod &quot;jazzed-anteater-jenkins-69886499b4-6gbhn&quot; is waiting to start: PodInitializing</p> </blockquote>
<p>In my case the problem was related to the iptables FORWARD policy being set to DROP. I investigated it with the help of the <code>microk8s.inspect</code> command.</p> <p>The init container couldn't get access to the Internet, and that stopped the deployment of the whole pod, including the main container. The fix was easy:</p> <pre><code>sudo iptables -P FORWARD ACCEPT </code></pre> <p>followed by a redeployment with Helm.</p>
<p>I am trying to setup own single-node kubernetes cluster on bare metal dedicated server. I am not that experienced in dev-ops but I need some service to be deployed for my own project. I already did a cluster setup with <code>juju</code> and <code>conjure-up kubernetes</code> over <code>LXD</code>. I have running cluster pretty fine.</p> <pre><code>$ juju status Model Controller Cloud/Region Version SLA Timestamp conjure-canonical-kubern-3b3 conjure-up-localhost-db9 localhost/localhost 2.4.3 unsupported 23:49:09Z App Version Status Scale Charm Store Rev OS Notes easyrsa 3.0.1 active 1 easyrsa jujucharms 195 ubuntu etcd 3.2.10 active 3 etcd jujucharms 338 ubuntu flannel 0.10.0 active 2 flannel jujucharms 351 ubuntu kubeapi-load-balancer 1.14.0 active 1 kubeapi-load-balancer jujucharms 525 ubuntu exposed kubernetes-master 1.13.1 active 1 kubernetes-master jujucharms 542 ubuntu kubernetes-worker 1.13.1 active 1 kubernetes-worker jujucharms 398 ubuntu exposed Unit Workload Agent Machine Public address Ports Message easyrsa/0* active idle 0 10.213.117.66 Certificate Authority connected. etcd/0* active idle 1 10.213.117.171 2379/tcp Healthy with 3 known peers etcd/1 active idle 2 10.213.117.10 2379/tcp Healthy with 3 known peers etcd/2 active idle 3 10.213.117.238 2379/tcp Healthy with 3 known peers kubeapi-load-balancer/0* active idle 4 10.213.117.123 443/tcp Loadbalancer ready. kubernetes-master/0* active idle 5 10.213.117.172 6443/tcp Kubernetes master running. flannel/1* active idle 10.213.117.172 Flannel subnet 10.1.83.1/24 kubernetes-worker/0* active idle 7 10.213.117.136 80/tcp,443/tcp Kubernetes worker running. flannel/4 active idle 10.213.117.136 Flannel subnet 10.1.27.1/24 Entity Meter status Message model amber user verification pending Machine State DNS Inst id Series AZ Message 0 started 10.213.117.66 juju-b03445-0 bionic Running 1 started 10.213.117.171 juju-b03445-1 bionic Running 2 started 10.213.117.10 juju-b03445-2 bionic Running 3 started 10.213.117.238 juju-b03445-3 bionic Running 4 started 10.213.117.123 juju-b03445-4 bionic Running 5 started 10.213.117.172 juju-b03445-5 bionic Running 7 started 10.213.117.136 juju-b03445-7 bionic Running </code></pre> <p>I also deployed Hello world application to output some hello on port <code>8080</code> inside the pod and <code>nginx-ingress</code> for it to re-route the traffic to this service on specified host.</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/hello-world-696b6b59bd-fznwr 1/1 Running 1 176m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/example-service NodePort 10.152.183.53 &lt;none&gt; 8080:30450/TCP 176m service/kubernetes ClusterIP 10.152.183.1 &lt;none&gt; 443/TCP 10h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/hello-world 1/1 1 1 176m NAME DESIRED CURRENT READY AGE replicaset.apps/hello-world-696b6b59bd 1 1 1 176m </code></pre> <p>When I do <code>curl localhost</code> as expected I have <code>connection refused</code>, which looks still fine as it's not exposed to cluster. when I curl the <code>kubernetes-worker/0</code> with public address <code>10.213.117.136</code> on port <code>30450</code> (which I get from <code>kubectl get all</code>) </p> <pre><code>$ curl 10.213.117.136:30450 Hello Kubernetes! </code></pre> <p>Everything works like a charm (which is obvious). When I do </p> <pre><code>curl -H "Host: testhost.com" 10.213.117.136 Hello Kubernetes! </code></pre> <p>It works again like charm! 
That means the ingress controller is successfully routing port 80 to the correct services based on the <code>host</code> rule. At this point I am 100% sure that the cluster works as it should. </p> <p>Now I am trying to access this service externally over the internet. When I load <code>&lt;server_ip&gt;</code> obviously nothing loads, as it's living inside its own <code>lxd</code> subnet. Therefore I was thinking of forwarding port <code>80</code> from the server's <code>eth0</code> to this IP. So I added this rule to iptables:</p> <p><code>sudo iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 10.213.117.136</code> (for the sake of example, let's route everything, not only port 80). Now when I open <code>http://&lt;server_ip&gt;</code> on my computer, it loads! </p> <p>So the real question is: how to do that in production? Should I set up this forwarding rule in iptables? Is that a normal approach or a hacky solution, and is there something "standard" which I am missing? The thing is, adding this rule with a static <code>worker</code> node IP will make the cluster completely static. IPs eventually change, I can remove/add worker units, and it will stop working. I was thinking about writing a script which will obtain this IP address from <code>juju</code> like this:</p> <pre><code>$ juju status kubernetes-worker/0 --format=json | jq '.machines["7"]."dns-name"' "10.213.117.136" </code></pre> <p>and add it to iptables, which is a more acceptable solution than a hardcoded IP, but I still feel it's tricky and there must be a better way.</p> <p>As a last idea, I could run <code>HAProxy</code> outside of the cluster, directly on the machine, and just forward traffic to all available workers. This might also work eventually. But I still don't know what the <code>correct</code> solution is and what is usually used in this case. Thank you! </p>
<blockquote> <p>So the real question is how to do that on production?</p> </blockquote> <p>The normal way to do this in a production system is to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>.</p> <p>The simplest case is when you just want your application to be accessible from outside on your node(s). In that case you can use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Type NodePort</a> Service. This would create the iptables rules necessary to forward the traffic from the host IP address to the pod(s) providing the service.</p> <p>If you have a single node (which is not recommended in production!), you're ready at this point.</p> <p>If you have multiple nodes in your Kubernetes cluster, all of them would be configured by Kubernetes to provide access to the service (your clients could use any of them to access the service). Though, you'd have to solve the problem of how the clients would know which nodes are available to be contacted...</p> <p>There are several ways to handle this:</p> <ul> <li><p>use a protocol understood by the client to publish the currently available IP addresses (for example DNS),</p></li> <li><p>use a floating (failover, virtual, HA) IP address managed by some software on your Kubernetes nodes (for example pacemaker/corosync), and direct the clients to this address,</p></li> <li><p>use an external load-balancer, configured separately, to forward traffic to some of the operating nodes,</p></li> <li><p>use an external load-balancer, configured automatically by Kubernetes using a cloud provider integration script (by using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Type LoadBalancer</a> Service), to forward traffic to some of the operating nodes.</p></li> </ul>
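<p>For concreteness, this is what such a NodePort Service looks like as a manifest. It mirrors the <code>example-service</code> already visible in the question's <code>kubectl get all</code> output; the selector label is an assumption about how the <code>hello-world</code> Deployment labels its pods:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: hello-world        # must match the Deployment's pod labels
  ports:
  - port: 8080              # port inside the cluster
    targetPort: 8080        # container port
    nodePort: 30450         # port opened on every node (default range 30000-32767)
</code></pre> <p>Kubernetes itself then maintains the iptables rules that forward traffic arriving on that node port to the backing pods, so you don't have to manage them by hand.</p>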
<p>I have a mutual-TLS-enabled Istio mesh. My setup is as follows:</p> <p><a href="https://i.stack.imgur.com/Vcxa4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Vcxa4.png" alt="enter image description here"></a></p> <ol> <li>A service running inside a pod (service container + envoy)</li> <li>An envoy gateway which stays in front of the above service. An Istio Gateway and Virtual Service are attached to this. It routes the <code>/info/</code> path to the above service.</li> <li>Another Istio Gateway configured for ingress using the default istio ingress pod. This also has a Gateway + Virtual Service combination. The virtual service directs the <code>/info/</code> path to the service described in point 2.</li> </ol> <p>I'm attempting to access the service from the ingress gateway using a curl command such as:</p> <pre><code>$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v </code></pre> <p>But I'm getting a 503 Service Unavailable error as below:</p> <pre><code>$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v Note: Unnecessary use of -X or --request, GET is already inferred. * Trying 10.105.138.94... * Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0) &gt; GET /info/ HTTP/1.1 &gt; Host: istio-ingressgateway.istio-system &gt; User-Agent: curl/7.47.0 &gt; Accept: */* &gt; Authorization: Bearer ... &gt; &lt; HTTP/1.1 503 Service Unavailable &lt; content-length: 57 &lt; content-type: text/plain &lt; date: Sat, 12 Jan 2019 13:30:13 GMT &lt; server: envoy &lt; * Connection #0 to host istio-ingressgateway.istio-system left intact </code></pre> <p>I checked the logs of the <code>istio-ingressgateway</code> pod and the following line was logged there:</p> <pre><code>[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-" </code></pre> <p>If I log into the Istio ingress pod and send the request with curl, I get a successful 200 OK.</p> <pre><code># curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v </code></pre> <p>Also, I managed to get a successful response for the same curl command when the mesh was created in mTLS disabled mode.
There are no conflicts shown in mTLS setup.</p> <p>Here are the config details for my service mesh in case you need additional info.</p> <p><strong>Pods</strong></p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m kube-system etcd-ubuntu 1/1 Running 0 49m kube-system kube-apiserver-ubuntu 1/1 Running 0 49m kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m kube-system kube-proxy-r868m 1/1 Running 0 50m kube-system kube-scheduler-ubuntu 1/1 Running 0 49m </code></pre> <p><strong>Services</strong></p> <pre><code>$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default hr--gateway-service ClusterIP 10.100.238.144 &lt;none&gt; 80/TCP,443/TCP 39m default hr--hr-service ClusterIP 10.96.193.43 &lt;none&gt; 80/TCP 39m default hr--sts-service ClusterIP 10.99.54.137 &lt;none&gt; 8080/TCP,8081/TCP,8090/TCP 39m default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 52m ingress-nginx default-http-backend ClusterIP 10.109.166.229 &lt;none&gt; 80/TCP 44m ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m istio-system grafana ClusterIP 10.102.141.231 &lt;none&gt; 3000/TCP 44m istio-system istio-citadel ClusterIP 10.101.128.187 &lt;none&gt; 8060/TCP,9093/TCP 44m istio-system istio-egressgateway ClusterIP 10.102.157.204 &lt;none&gt; 80/TCP,443/TCP 44m istio-system istio-galley ClusterIP 10.96.31.251 &lt;none&gt; 443/TCP,9093/TCP 44m istio-system istio-ingressgateway LoadBalancer 10.105.138.94 &lt;pending&gt; 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m istio-system istio-pilot ClusterIP 10.100.170.73 &lt;none&gt; 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m istio-system istio-policy ClusterIP 10.104.77.184 &lt;none&gt; 9091/TCP,15004/TCP,9093/TCP 44m istio-system istio-sidecar-injector ClusterIP 10.100.180.152 &lt;none&gt; 443/TCP 44m istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 &lt;none&gt; 
9102/TCP,9125/UDP 44m istio-system istio-telemetry ClusterIP 10.110.55.232 &lt;none&gt; 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m istio-system jaeger-agent ClusterIP None &lt;none&gt; 5775/UDP,6831/UDP,6832/UDP 44m istio-system jaeger-collector ClusterIP 10.102.43.21 &lt;none&gt; 14267/TCP,14268/TCP 44m istio-system jaeger-query ClusterIP 10.104.182.189 &lt;none&gt; 16686/TCP 44m istio-system prometheus ClusterIP 10.100.0.70 &lt;none&gt; 9090/TCP 44m istio-system servicegraph ClusterIP 10.97.65.37 &lt;none&gt; 8088/TCP 44m istio-system tracing ClusterIP 10.109.87.118 &lt;none&gt; 80/TCP 44m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 52m </code></pre> <p><strong>Gateway and virtual service described in point 2</strong></p> <pre><code>$ kubectl describe gateways.networking.istio.io hr--gateway Name: hr--gateway Namespace: default API Version: networking.istio.io/v1alpha3 Kind: Gateway Metadata: ... Spec: Selector: App: hr--gateway Servers: Hosts: * Port: Name: http2 Number: 80 Protocol: HTTP2 Hosts: * Port: Name: https Number: 443 Protocol: HTTPS Tls: Mode: PASSTHROUGH $ kubectl describe virtualservices.networking.istio.io hr--gateway Name: hr--gateway Namespace: default Labels: app=hr--gateway Annotations: &lt;none&gt; API Version: networking.istio.io/v1alpha3 Kind: VirtualService Metadata: ... Spec: Gateways: hr--gateway Hosts: * Http: Match: Uri: Prefix: /info/ Rewrite: Uri: / Route: Destination: Host: hr--hr-service </code></pre> <p><strong>Gateway and virtual service described in point 3</strong></p> <pre><code>$ kubectl describe gateways.networking.istio.io ingress-gateway Name: ingress-gateway Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel... API Version: networking.istio.io/v1alpha3 Kind: Gateway Metadata: ... Spec: Selector: Istio: ingressgateway Servers: Hosts: * Port: Name: http2 Number: 80 Protocol: HTTP2 $ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs Name: hr--gateway-ingress-vs Namespace: default Labels: app=hr--gateway API Version: networking.istio.io/v1alpha3 Kind: VirtualService Metadata: Spec: Gateways: ingress-gateway Hosts: * Http: Match: Uri: Prefix: /info/ Route: Destination: Host: hr--gateway-service Events: &lt;none&gt; </code></pre>
<p>The problem is probably as follows: <em>istio-ingressgateway</em> initiates mTLS to <em>hr--gateway-service</em> on port 80, but <em>hr--gateway-service</em> expects plain HTTP connections.</p> <p>There are multiple solutions:</p> <ol> <li>Define a DestinationRule to instruct clients to disable mTLS on calls to <em>hr--gateway-service</em></li> </ol> <pre><code> apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: hr--gateway-service-disable-mtls spec: host: hr--gateway-service.default.svc.cluster.local trafficPolicy: tls: mode: DISABLE </code></pre> <ol start="2"> <li>Instruct <em>hr-gateway-service</em> to accept mTLS connections. For that, configure the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/gateway/#Server-TLSOptions" rel="noreferrer">server TLS options</a> on port 80 to be <code>MUTUAL</code> and to use Istio certificates and the private key. Specify <code>serverCertificate</code>, <code>caCertificates</code> and <code>privateKey</code> to be <code>/etc/certs/cert-chain.pem</code>, <code>/etc/certs/root-cert.pem</code>, <code>/etc/certs/key.pem</code>, respectively.</li> </ol>
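<p>As a rough sketch of the second option, the port 80 server of the existing <em>hr--gateway</em> Gateway could be changed along these lines (the selector label is taken from the Gateway in the question, the protocol is switched to HTTPS so that the TLS block applies, and the certificate paths are the Istio-provisioned ones mentioned above; verify they match what is mounted in your proxy container):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
  - hosts:
    - "*"
    port:
      number: 80
      name: https-info
      protocol: HTTPS
    tls:
      mode: MUTUAL
      serverCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
</code></pre>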
<p>I have a CRD definition in Kubernetes. When I try to send a request with <code>kubectl proxy</code> to this URL:</p> <pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/ </code></pre> <p>I get the created custom object's information. However, when I try to get the status of this custom object with:</p> <pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/status </code></pre> <p>I get an error:</p> <pre><code> { &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { }, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;sparkapplications.sparkoperator.k8s.io \&quot;status\&quot; not found&quot;, &quot;reason&quot;: &quot;NotFound&quot;, &quot;details&quot;: { &quot;name&quot;: &quot;status&quot;, &quot;group&quot;: &quot;sparkoperator.k8s.io&quot;, &quot;kind&quot;: &quot;sparkapplications&quot; }, &quot;code&quot;: 404 </code></pre> <p>Why is there no status for the custom object? Is there something wrong with the CRD definition?</p> <p>I use Minikube version v0.32.0, which I start this way:</p> <pre><code>minikube start --kubernetes-version v1.13.0 --memory 8048 --cpus 3 --feature-gates=CustomResourceSubresources=true </code></pre> <p>CRD definition looks like this:</p> <pre><code>apiVersion: sparkoperator.k8s.io/v1alpha1 kind: SparkApplication metadata: name: spark-example namespace: default spec: type: Scala image: gcr.io/ynli-k8s/spark:v2.4.0-SNAPSHOT mainClass: org.apache.spark.examples.SparkExample mainApplicationFile: http://localhost:8089/spark_k8s_airflow.jar mode: cluster deps: {} driver: coreLimit: 1000m cores: 0.1 labels: version: 2.4.0 memory: 1024m serviceAccount: default executor: cores: 1 instances: 1 labels: version: 2.4.0 memory: 1024m imagePullPolicy: Always subresources: status: {} </code></pre> <p>UPDATE: When I call the object spark-example specifically, the object data is returned, but the status call returns an error.</p> <pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example/status </code></pre> <p>returns this message:</p> <pre><code>the server could not find the requested resource </code></pre> <p>Although there is a subresources definition in the CRD definition.</p>
<pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/ </code></pre> <p>The above request will give you the list of <code>SparkApplication</code> kind objects in the <code>default</code> namespace.</p> <p>To get a specific object, you have to specify the object name:</p> <pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/&lt;object-name&gt; </code></pre> <p><code>status</code> is part of an individual object, not of the <code>SparkApplication</code> kind itself. That's why you are getting that error. If you query a specific object, it will work:</p> <pre><code>curl http://localhost:8090/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/&lt;object-name&gt;/status </code></pre> <blockquote> <p>Note: I am assuming that you enabled the <code>status</code> subresource for the <code>SparkApplication</code> CRD. Otherwise it will give an error.</p> </blockquote> <p>If the <code>status</code> subresource is not enabled in the CRD definition, then you cannot get the status at the <code>/status</code> subpath. That is how the <strong>subresource</strong> feature works.</p> <p>To know whether the <code>status</code> subresource is enabled or not, check the CRD yaml:</p> <pre><code>$ kubectl get crds/foos.try.com -o yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: foos.try.com spec: group: try.com version: v1alpha1 scope: Namespaced subresources: status: {} names: plural: foos singular: foo kind: Foo </code></pre> <p>If the CRD has the following field under <code>spec</code>, then the <code>status</code> subresource is enabled.</p> <pre><code>subresources: status: {} </code></pre>
<pre><code>Jenkins version : 2.121.3 </code></pre> <p>I am using the k8s plugin in Jenkins which helps me deploy my sequential and parallel jobs on my k8s cluster.</p> <p>Here is the part of the <code>Jenkinsfile</code> where the job fails</p> <pre><code> parallel([ build: { stage('check formatting') // some code stage('build') // some code stage('build image') // some code stage('push image') // some code }, test: { stage('test') // some code stage('build e2e test image') // some code } stage('push e2e test image') // some code }, failFast: true]) </code></pre> <p>While running a job which has parallel stages, I come across the error</p> <pre><code>caught java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API </code></pre> <p>Then I goto Manage Jenkins --> Configure System --> Cloud --> Kubernetes and increase "Max connections to Kubernetes API" by say.... 20 and re-run the job and it works. And if I try to re-run the job after that, it fails with the same error and then again I have to increase the limit. So Im kind of in an endless loop doing the following:</p> <p>1) Restart jenkins evrerytime</p> <p>2) Bump up API Server requests limit.</p> <p>So Im not sure why this is happening. Is there a way for me to find out how many requests are being handled by the API server ? And whats their source ? And how can I find out if there are any lingering requests from previous jobs ?</p> <p>Full error log :</p> <pre><code>Failed in branch build [Pipeline] // parallel [Pipeline] echo caught java.io.IOException: Connection was rejected, you should increase the Max connections to Kubernetes API [Pipeline] echo org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:329) org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:237) hudson.Launcher$ProcStarter.start(Launcher.java:449) org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:188) org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99) org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:278) org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270) org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:178) org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122) sun.reflect.GeneratedMethodAccessor646.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213) groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42) org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157) org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23) org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155) org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155) 
org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159) org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129) com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17) WorkflowScript.run(WorkflowScript:109) ___cps.transform___(Native Method) com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) sun.reflect.GeneratedMethodAccessor243.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:103) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) sun.reflect.GeneratedMethodAccessor243.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:60) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) sun.reflect.GeneratedMethodAccessor243.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21) com.cloudbees.groovy.cps.Next.step(Next.java:83) com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174) com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163) org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122) org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261) com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163) org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34) org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59) org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108) org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58) org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182) org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332) org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83) org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244) org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232) 
org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64) java.util.concurrent.FutureTask.run(FutureTask.java:266) hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131) jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59) java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:836) </code></pre>
<p>It looks like it is an officially tracked bug in the plugin:</p> <p><a href="https://issues.jenkins-ci.org/browse/JENKINS-55392" rel="nofollow noreferrer">https://issues.jenkins-ci.org/browse/JENKINS-55392</a></p> <p>You can try commenting on this issue to help the community solve the problem.</p>
<p>I'm working with a large monolithic application with some private routes. These private routes are currently managed by a plain classic nginx server.</p> <p>I need to migrate this to Kubernetes, and I must deny all external access to these routes. I'm using GKE, and AFAIK, making routes private can be done inside the nginx-ingress controller.</p> <p>I'm trying with server-snippet, but it doesn't seem to work. Here's the current code:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.org/websocket-services: service-ws nginx.org/server-snippet: | location /private { allow 10.100.0.0/16; #Pods IPs allow 10.200.0.0/16; #Pods IPs deny all; } generation: 3 </code></pre> <p>The result is that /private routes always return 200 instead of 401/403. I've also tried to create a redirection instead of allow/deny, and I also get 200 instead of 301 redirections.</p> <p>Do you have any ideas or tips to make this work?</p>
<p>After following many links, the trick turned out to be that the annotation prefix (<code>nginx.ingress.kubernetes.io/</code> instead of <code>nginx.org/</code>) is not up to date in most documentation.</p> <p>Here's a working sample:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/server-snippet: |- location /management_api { allow 1.2.3.4/16; # Pod address range allow 1.3.4.5/16; # Pod address range deny all; proxy_http_version 1.1; proxy_redirect off; proxy_intercept_errors on; proxy_set_header Connection ""; proxy_set_header X-CF-Visitor $http_cf_visitor; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://10.11.12.13; } </code></pre> <p>Enjoy!</p>
<p>In my terraform infrastructure, I spin up several Kubernetes clusters based on parameters, then install some standard content into those Kubernetes clusters using the kubernetes provider.</p> <p>When I change the parameters and one of the clusters is no longer needed, terraform is unable to tear it down because the provider and resources are both in the module. I don't see an alternative, however, because I create the kubernetes cluster in that same module, and the kubernetes objects are all per-cluster.</p> <p>All solutions I can think of involve adding a bunch of boilerplate to my terraform config. Should I consider generating my terraform config from a script?</p> <hr> <p>I made a git repo that shows exactly the problems I'm having:</p> <p><a href="https://github.com/bukzor/terraform-gke-k8s-demo" rel="nofollow noreferrer">https://github.com/bukzor/terraform-gke-k8s-demo</a></p>
<h1>TL;DR</h1> <p>Two solutions:</p> <ol> <li>Create two separate modules with Terraform</li> <li><p>Use interpolations and depends_on between the code that creates your Kubernetes cluster and the kubernetes resources:</p> <pre><code>resource "kubernetes_service" "example" { metadata { name = "my-service" } depends_on = ["aws_vpc.kubernetes"] } resource "aws_vpc" "kubernetes" { ... } </code></pre></li> </ol> <h1>When destroying resources</h1> <p>You are encountering a dependency lifecycle issue.</p> <p><em>PS: I don't know the code you've used to create / provision your Kubernetes cluster but I guess it looks like this:</em></p> <ol> <li>Write code for the Kubernetes cluster (creates a VPC)</li> <li>Apply it</li> <li>Write code for provisioning Kubernetes (create a Service that creates an ELB)</li> <li>Apply it</li> <li>Try to destroy everything =&gt; Error</li> </ol> <p>What is happening is that by creating a <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="noreferrer">LoadBalancer Service</a>, Kubernetes will provision an ELB on AWS. But Terraform doesn't know that and there is no link between the ELB created and any other resources managed by Terraform. So when terraform tries to destroy the resources in the code, it will try to destroy the VPC. But it can't, because there is an ELB inside that VPC that terraform doesn't know about. The first thing would be to make sure that Terraform "deprovisions" the Kubernetes cluster and then destroys the cluster itself.</p> <p>Two solutions here:</p> <ol> <li><p>Use different modules so there is no dependency lifecycle. For example the first module could be <code>k8s-infra</code> and the other could be <code>k8s-resources</code>. The first one manages all the skeleton of Kubernetes and is applied first / destroyed last. The second one manages what is inside the cluster and is applied last / destroyed first (see the command sketch below).</p></li> <li><p>Use the <a href="https://www.terraform.io/docs/configuration/resources.html#depends_on" rel="noreferrer"><code>depends_on</code></a> parameter to write the dependency lifecycle explicitly</p></li> </ol> <h1>When creating resources</h1> <p>You might also run into a dependency issue when <code>terraform apply</code> cannot create resources even if nothing is applied yet. I'll give another example with a Postgres database:</p> <ol> <li>Write code to create an RDS PostgreSQL server</li> <li>Apply it with Terraform</li> <li>Write code, <strong>in the same module</strong>, to provision that RDS instance with the postgres terraform provider</li> <li>Apply it with Terraform</li> <li>Destroy everything</li> <li>Try to apply everything =&gt; ERROR</li> </ol> <p>By debugging Terraform a bit I've learned that all the providers are initialized at the beginning of the <code>plan</code> / <code>apply</code>, so if one has an invalid config (wrong API keys / unreachable endpoint) then Terraform will fail.</p> <p>The solution here is to use the <a href="https://www.terraform.io/docs/commands/apply.html#target-resource" rel="noreferrer">target parameter</a> of a <code>plan</code> / <code>apply</code> command. Terraform will only initialize providers that are related to the resources that are applied.</p> <ol> <li>Apply the RDS code with the AWS provider: <code>terraform apply -target=aws_db_instance</code></li> <li>Apply everything with <code>terraform apply</code>. Because the RDS instance is already reachable, the PostgreSQL provider can also initialize itself.</li> </ol>
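<p>A sketch of how the day-to-day commands might look with the two-module layout (the module names are just the examples used above); the ordering matters because the in-cluster resources have to be gone before the cluster itself is destroyed:</p> <pre><code># create / update: the cluster first, then what runs inside it
terraform apply -target=module.k8s-infra
terraform apply

# tear down: what runs inside the cluster first, then the cluster itself
terraform destroy -target=module.k8s-resources
terraform destroy
</code></pre>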
<p>I have my Kubernetes cluster and I need to know how long it takes to create a pod. Is there any Kubernetes command that shows me that? Thanks in advance.</p>
<p>What you are asking for does not exist as a command.</p> <p>I think you should first understand the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">Pod Overview</a>.</p> <blockquote> <p>A <em>Pod</em> is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.</p> <p>A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: <em>a single instance of an application in Kubernetes</em>, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.</p> </blockquote> <p>While you are deploying a <code>Pod</code>, it goes through <code>phases</code>:</p> <blockquote> <p><code>Pending</code> The Pod has been accepted by the Kubernetes system, but one or more of the Container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.</p> <p><code>Running</code> The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.</p> <p><code>Succeeded</code> All Containers in the Pod have terminated in success, and will not be restarted.</p> <p><code>Failed</code> All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with non-zero status or was terminated by the system.</p> <p><code>Unknown</code> For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod.</p> </blockquote> <p>As for <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="nofollow noreferrer">Pod Conditions</a>, each has a <code>type</code> which can have the following values:</p> <blockquote> <ul> <li><code>PodScheduled</code>: the Pod has been scheduled to a node;</li> <li><code>Ready</code>: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services;</li> <li><code>Initialized</code>: all <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers" rel="nofollow noreferrer">init containers</a> have started successfully;</li> <li><code>Unschedulable</code>: the scheduler cannot schedule the Pod right now, for example due to lacking of resources or other constraints;</li> <li><code>ContainersReady</code>: all containers in the Pod are ready.</li> </ul> </blockquote> <p>Please refer to the documentation regarding <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod Lifecycle</a> for more information.</p> <p>When you are deploying your <code>Pod</code>, you have to consider how many containers will be running in it. The images will have to be downloaded, and depending on their size this might take longer. Also, the default pull policy is <code>IfNotPresent</code>, which means that Kubernetes will skip the image pull if the image already exists.
More about <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">Updating Images</a> can be found <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">here</a>.</p> <p>You also need to consider how many resources your <code>Master</code> and <code>Nodes</code> have.</p>
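<p>If you still want a rough number for one specific pod, a workaround (not a dedicated command) is to compare the pod's creation timestamp with the <code>lastTransitionTime</code> of its <code>Ready</code> condition:</p> <pre><code>kubectl get pod &lt;pod-name&gt; -o jsonpath='{.metadata.creationTimestamp}{"\n"}'
kubectl get pod &lt;pod-name&gt; -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}'
</code></pre> <p>The difference between the two timestamps approximates how long scheduling, image pulling and container startup took for that pod.</p>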
<p>I would like to add a flag to the <strong>kube-apiserver</strong>.</p> <p>So I logged into the docker container of the kube-apiserver on the master node and went on a mission to find <code>kube-apiserver.yaml</code>. I heard reports that it was located in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>.</p> <p>Unfortunately it was missing! I only have an "SSL" directory in the <code>/etc/kubernetes/</code> folder, and the kube-apiserver.yaml is nowhere to be seen...</p> <hr> <p>FYI:<br> Installed kubernetes 1.12.2 with an Ansible playbook (kubespray).<br> Got 6 nodes and 3 masters. </p> <p>Thanks for your help.</p>
<p>The <code>kube-apiserver.yaml</code> is in the directory you have specified, <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>, but it is on the master host's own filesystem, not inside the kube-apiserver container/pod. </p>
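<p>As a sketch of the workflow (the flag itself is whatever you intend to add): SSH to each master, edit the static pod manifest there, and the kubelet will notice the change and restart the kube-apiserver with the new settings:</p> <pre><code>ssh &lt;master-node&gt;
sudo ls /etc/kubernetes/manifests/
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # add your flag under the container's command/args
</code></pre>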
<p>We have a virtual machine with Ubuntu Server 18.04.1.0. We have used <a href="https://itnext.io/tutorial-part-1-kubernetes-up-and-running-on-lxc-lxd-b760c79cd53f" rel="nofollow noreferrer">this</a> tutorial to install lxd and we have used <a href="https://itnext.io/tutorial-part-2-kubernetes-up-and-running-on-lxc-lxd-6d60e98f22df" rel="nofollow noreferrer">this</a> tutorial to install kubernetes.</p> <p>Now we want to install Rancher following <a href="https://rancher.com/blog/2018/2018-05-18-how-to-run-rancher-2-0-on-your-desktop/" rel="nofollow noreferrer">this</a> tutorial (it works fine on Docker for Desktop on Windows) on this ubuntu machine.</p> <p>The problem is: we are stuck on the <code>nginx-ingress</code> part. Nginx does not get any IP, and its state stays <code>pending</code> forever. I already tried to <code>set rbac.create=true</code> (which is already set in the helm chart defaults), but I cannot figure out what's wrong here and why the <code>nginx-ingress</code> does not get any IP on the ubuntu kubernetes cluster.</p> <p>What have we missed? Thanks</p>
<p>Take a look <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">here</a>.<br> I think you should change the service type of the nginx-ingress-controller service to <code>NodePort</code> to solve the pending problem. By default the nginx-ingress-controller service type is <code>LoadBalancer</code>, and you have to have an external load balancer to use this type of service. On cloud providers like AWS or GKE it is OK, but on bare metal you have to use other types of services, like <code>NodePort</code>.<br> Also, if you use <code>NodePort</code> and you need to serve on port 80/443, you will need a reverse proxy outside of your cluster.</p>
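<p>A minimal way to try this on an existing install (the service name and namespace below are assumptions, check them with <code>kubectl get svc --all-namespaces</code>) is to patch the controller's Service type in place:</p> <pre><code>kubectl -n ingress-nginx patch svc nginx-ingress-controller -p '{"spec": {"type": "NodePort"}}'
</code></pre> <p>If the controller was installed via the Helm chart, setting <code>controller.service.type=NodePort</code> in its values achieves the same thing declaratively.</p>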
<p>I'm currently trying to alert on Kubernetes pods stacking within an availability zone. I've managed to use two different metrics to the point where I can see how many pods for an application are running on a specific availability zone. However, due to scaling, I want the alert to be percentage based...so we can alert when a specific percentage of pods are running on one AZ (i.e. over 70%).</p> <p>My current query:</p> <pre><code>sum(count(kube_pod_info{namespace="somenamespace", created_by_kind="StatefulSet"}) by (created_by_name, node) * on (node) group_left(az_info) kube_node_labels) by (created_by_name, az_info) </code></pre> <p>And some selected output:</p> <pre><code>{created_by_name="some-db-1",az_info="az1"} 1 {created_by_name="some-db-1",az_info="az2"} 4 {created_by_name="some-db-2",az_info="az1"} 2 {created_by_name="some-db-2",az_info="az2"} 3 </code></pre> <p>For example, in the above output we can see that 4 db-1 pods are stacking on az2 as opposed to 1 pod on az1. In this scenario we would want to alert as 80% of db-1 pods are stacked on a single AZ.</p> <p>As the output contains multiple pods on multiple AZs, it feels like it may be difficult to get the percentage using a single Prometheus query, but wondered if anyone with more experience could offer a solution?</p> <p>Thanks!</p>
<pre><code> your_expression / ignoring(az_info) group_left sum without(az_info)(your_expression) </code></pre> <p>will give you, for each <code>created_by_name</code>, the share of its pods that sit in each availability zone, and then you can do <code>&gt; .8</code> on that to alert when one AZ holds more than 80% of them.</p>
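<p>Substituting the query from the question in for <code>your_expression</code>, the full alert expression would look roughly like this (line breaks are only for readability):</p> <pre><code>  sum(count(kube_pod_info{namespace="somenamespace", created_by_kind="StatefulSet"}) by (created_by_name, node)
      * on (node) group_left(az_info) kube_node_labels) by (created_by_name, az_info)
/ ignoring(az_info) group_left
  sum without(az_info)(
    sum(count(kube_pod_info{namespace="somenamespace", created_by_kind="StatefulSet"}) by (created_by_name, node)
        * on (node) group_left(az_info) kube_node_labels) by (created_by_name, az_info)
  )
&gt; 0.8
</code></pre> <p>With the sample values from the question, <code>some-db-1</code> in <code>az2</code> evaluates to 4/5 = 0.8, so it would trip the alert.</p>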
<p>Well, my company is considering moving from Hadoop to Kubernetes. We can find solutions in Kubernetes for tools such as Cassandra, Spark, etc. So the last problem for us is how to store a massive amount of files in Kubernetes, say 1 PB. FYI, we DO NOT want to use online storage services such as S3. </p> <p>As far as I know, HDFS is rarely used in Kubernetes, and there are a few replacement products such as Torus and Quobyte. So my question is: any recommendation for the filesystem on Kubernetes? Or any better solution?</p> <p>Many thanks.</p>
<p>You can use a <em>Hadoop Compatible</em> FileSystem such as Ceph or Minio, both of which offer S3-compatible REST APIs for reading and writing. In Kubernetes, Ceph can be deployed using the <a href="https://rook.io/" rel="nofollow noreferrer">Rook</a> project. </p> <p>But overall, running HDFS in Kubernetes would require stateful services like the NameNode, and DataNodes with proper affinity and network rules in place. The <a href="https://hadoop.apache.org/ozone/" rel="nofollow noreferrer">Hadoop Ozone</a> project reflects the realization that object storage is a more common fit for microservice workloads than HDFS block storage, since trying to analyze PBs of data using distributed microservices isn't really feasible. (I'm only speculating.)</p> <p>The alternative is to <a href="https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/data-operating-system/content/run_docker_containers_on_yarn.html" rel="nofollow noreferrer">use Docker support in Hadoop &amp; YARN 3.x</a>.</p>
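<p>To give an idea of what "Hadoop compatible" means in practice, here is a sketch of the s3a settings you would point at an in-cluster S3-compatible endpoint (the Minio service name is hypothetical and the credentials are placeholders):</p> <pre><code>fs.s3a.endpoint=http://minio.minio.svc.cluster.local:9000
fs.s3a.path.style.access=true
fs.s3a.access.key=MINIO_ACCESS_KEY
fs.s3a.secret.key=MINIO_SECRET_KEY
</code></pre> <p>With those in place, jobs can read and write <code>s3a://bucket/path</code> URIs instead of <code>hdfs://</code> ones.</p>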
<p>I am trying to set up a copy of our app on my development machine using minikube. But I get an error showing up in minikube dashboard: </p> <pre><code>0/1 nodes are available: 1 Insufficient ephemeral-storage </code></pre> <p>Any ideas as to how I fix this?</p> <p>The relevant part of the yaml configuration file looks like so: </p> <pre><code> resources: requests: memory: 500Mi cpu: 1 ephemeral-storage: 16Gi limits: memory: 4Gi cpu: 1 ephemeral-storage: 32Gi </code></pre> <p>I have tried assigning extra disk space at startup with the following but the error persists:</p> <pre><code>minikube start --disk-size 64g </code></pre>
<p>The issue is that minikube <a href="https://github.com/kubernetes/minikube/issues/1002" rel="noreferrer">can't resize</a> the VM disk.</p> <p>Depending on the hypervisor driver (xhyve, virtualbox, hyper-v) and the disk type (qcow2, sparse, raw, etc.), resizing the VM disk will be different. For example, if you have:</p> <pre><code>/Users/username/.minikube/machines/minikube/minikube.rawdisk </code></pre> <p>You can do something like this:</p> <pre><code>$ cd /Users/username/.minikube/machines/minikube $ mv minikube.rawdisk minikube.img $ hdiutil resize -size 64g minikube.img $ mv minikube.img minikube.rawdisk $ minikube start $ minikube ssh </code></pre> <p>Then in the VM:</p> <pre><code>$ sudo resize2fs /dev/vda1 # &lt;-- or the disk of your VM </code></pre> <p>Otherwise, if you don't care about the data in your VM:</p> <pre><code>$ rm -rf ~/.minikube $ minikube start --disk-size 64g </code></pre>
<p>I’m voluntarily operating (developing and hosting) a community project. Meaning time and money are tight. Currently it runs on a bare-metal machine at AWS (t2.micro, (1 vCPU, 1 GB memory)). For learning purposes I would like to containerize my application. Now I'm looking for hosting. The Google Cloud Plattform seems to be the cheapest to me. I setup a Kubernetes cluster with 1 node (1.10.9-gke.5, g1-small (1 vCPU shared, 1.7 GB memory)).</p> <p>After I set up the one node Kubernetes cluster I checked how much memory and CPU is already used by the Kubernetes system. (Please see kubectl describe node).</p> <p>I was wondering if I can run the following application with 30% CPU and 30% memory left on the node. Unfortunately I don't have experience with how much the container in my example will need in terms of resources. But having only 30% CPU and 30% memory left doesn't seem like much for my kind of application.</p> <p><strong>kubectl describe node</strong></p> <pre><code>Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system event-exporter-v0.2.3-54f94754f4-bznpk 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-scaler-6d7bbc67c5-pbrq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-v3.1.0-fjbz6 100m (10%) 0 (0%) 200Mi (17%) 300Mi (25%) kube-system heapster-v1.5.3-66b7745959-4zbcl 138m (14%) 138m (14%) 301456Ki (25%) 301456Ki (25%) kube-system kube-dns-788979dc8f-krrtt 260m (27%) 0 (0%) 110Mi (9%) 170Mi (14%) kube-system kube-dns-autoscaler-79b4b844b9-vl4mw 20m (2%) 0 (0%) 10Mi (0%) 0 (0%) kube-system kube-proxy-gke-spokesman-cluster-default-pool-d70d068f-wjtk 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system l7-default-backend-5d5b9874d5-cgczj 10m (1%) 10m (1%) 20Mi (1%) 20Mi (1%) kube-system metrics-server-v0.2.1-7486f5bd67-ctbr2 53m (5%) 148m (15%) 154Mi (13%) 404Mi (34%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 681m (72%) 296m (31%) 807312Ki (67%) 1216912Ki (102%) </code></pre> <p><strong>Here my app</strong></p> <pre><code>PROD: API: ASP.NET core 1.1 (microsoft/dotnet:1.1-runtime-stretch) Frontend: Angular app (nginx:1.15-alpine) Admin: Angular app (nginx:1.15-alpine) TEST: API: ASP.NET core 1.1 (microsoft/dotnet:1.1-runtime-stretch) Frontend: Angular app (nginx:1.15-alpine) Admin: Angular app (nginx:1.15-alpine) SHARDED Database: Postgres (postgres:11-alpine) </code></pre> <p>Any suggestions are more than welcome. </p> <p>Thanks in advance!</p>
<p>If you intend to run a containerized app on a single node, a <a href="https://cloud.google.com/compute/docs/containers/" rel="nofollow noreferrer">GCE instance</a> could be better to begin with.</p> <p>When moving into GKE, check out this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu" rel="nofollow noreferrer">GCP's guide</a> explaining resource allocation per machine type before any workload and kube-system pods. You'd still need to have estimated resources usage per app component or container, maybe from monitoring your Dev or GCE environment.</p> <p>If you want to explore other alternatives on GCP for your app (e.g. App Engine supports <a href="https://cloud.google.com/appengine/docs/flexible/dotnet/customizing-the-dotnet-runtime" rel="nofollow noreferrer">.NET</a>), here's a <a href="https://cloud.google.com/blog/products/gcp/choosing-the-right-compute-option-in-gcp-a-decision-tree" rel="nofollow noreferrer">post</a> with a decision tree that might help you. I also found this <a href="https://medium.com/google-cloud/app-engine-flex-container-engine-946fbc2fe00a" rel="nofollow noreferrer">article/tutorial</a> about running containers on App Engine and GKE, comparing both with load tests.</p>
<p>I understand the use case of setting the CPU request less than the limit - it allows for CPU bursts in each container if the instance has free CPU, hence resulting in maximum CPU utilization. However, I cannot really find the use case for doing the same with memory. Most applications don't release memory after allocating it, so effectively applications will grow up to the 'limit' memory (which has the same effect as setting request = limit). The only exception is containers running on an instance that already has all its memory allocated. I don't really see any pros in this, and the cons are more nondeterministic behaviour that is hard to monitor (one container having higher latencies than the other due to heavy GC). The only use case I can think of is a sharded in-memory cache, where you want to allow for a spike in memory usage. But even in this case one would be running the risk of one of the nodes underperforming. </p>
<p>Maybe not a real answer, but a point of view on the subject.</p> <p>The difference between the CPU and memory limits is what happens when the limit is reached. In the case of CPU, the container keeps running but its CPU usage is throttled. If the memory limit is reached, the container gets killed and restarted.</p> <p>In my use case, I often set the memory request to the amount of memory my application uses on average, and the limit to +25%. This allows me to avoid the container being killed most of the time (which is good), but of course it exposes me to memory overallocation (and this could be a problem as you mentioned).</p>
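<p>For illustration, a minimal sketch of what that rule of thumb looks like in a container spec (the container name and numbers are hypothetical, just following the "average usage plus ~25%" approach described above):</p> <pre><code>containers:
- name: my-app                # hypothetical container
  image: my-app:latest
  resources:
    requests:
      memory: "400Mi"         # roughly the average observed usage
    limits:
      memory: "500Mi"         # ~25% headroom before the container is OOM-killed
</code></pre>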
<p>As the title indicates, I'm trying to set up Grafana using helmfile with a default dashboard via values.</p> <p>The relevant part of my helmfile is here:</p> <pre><code>releases: ... - name: grafana namespace: grafana chart: stable/grafana values: - datasources: datasources.yaml: apiVersion: 1 datasources: - name: Prometheus type: prometheus access: proxy url: http://prometheus-server.prometheus.svc.cluster.local isDefault: true - dashboardProviders: dashboardproviders.yaml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: '' type: file disableDeletion: false editable: true options: path: /var/lib/grafana/dashboards - dashboards: default: k8s: url: https://grafana.com/api/dashboards/8588/revisions/1/download </code></pre> <p>As far as I can understand by reading <a href="https://github.com/plotly/helm-charts/blob/master/stable/grafana/values.yaml#L125" rel="nofollow noreferrer">here</a> I need a provider and then I can refer to a dashboard by url. However, when I do as shown above no dashboard is installed, and when I do as below</p> <pre><code> - dashboards: default: url: https://grafana.com/api/dashboards/8588/revisions/1/download </code></pre> <p>I get the following error message:</p> <pre><code>Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at &lt;$value&gt;: wrong type for value; expected map[string]interface {}; got string </code></pre> <p>Any clues about what I'm doing wrong?</p>
<p>I think the problem is that you're defining the datasources, dashboardProviders and dashboards as lists rather than maps so you need to remove the hyphens, meaning that the values section becomes:</p> <pre><code>values: datasources: datasources.yaml: apiVersion: 1 datasources: - name: Prometheus type: prometheus url: http://prometheus-prometheus-server access: proxy isDefault: true dashboardProviders: dashboardproviders.yaml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: '' type: file disableDeletion: false editable: true options: path: /var/lib/grafana/dashboards dashboards: default: k8s: url: https://grafana.com/api/dashboards/8588/revisions/1/download </code></pre> <p>The grafana chart <a href="https://github.com/plotly/helm-charts/blob/master/stable/grafana/values.yaml#L100" rel="nofollow noreferrer">has them as maps</a> and <a href="https://github.com/cloudposse/helmfiles/blob/master/helmfiles/grafana.yaml#L40" rel="nofollow noreferrer">using helmfile doesn't change that</a></p>
<p>I've set up Prometheus to monitor Kubernetes. However, when I look at the Prometheus dashboard I see <strong>kubernetes-cadvisor</strong> <em>DOWN</em>. </p> <p>I would like to know if we need it to monitor Kubernetes, because in Grafana I already get information such as memory usage, disk space ... </p> <p>Would it be used to monitor containers in order to make <strong>precise requests</strong> such as the <strong>memory used by a pod</strong> of a <strong>specific namespace</strong>? </p>
<p>The error you have provided means that the cAdvisor's content does not comply with the Prometheus exposition format.<a href="https://groups.google.com/forum/#!msg/prometheus-users/DpgYKZr6UyU/" rel="nofollow noreferrer">[1]</a> But to be honest, that is only one of the possibilities, and as you did not provide more information we will have to leave it for now (I mean the information asked by Oliver + versions of Prometheus and Grafana and the environment in which you are running the cluster). </p> <p>Answering your question, although you don't need to use cAdvisor for monitoring, it does provide some important metrics and is pretty well integrated with Kubernetes. So as long as you need container-level metrics, you should use cAdvisor. As specified in this <a href="https://medium.com/@DazWilkin/kubernetes-metrics-ba69d439fac4" rel="nofollow noreferrer">article</a> (you can find a configuration tutorial there):</p> <blockquote> <p>you can’t access cAdvisor directly (through 4194). You can (!) access cAdvisor by duplicating the job_name (called “k8s”) in the prometheus.yml file, calling the copy “cAdvisor” (perhaps) and inserting an additional line to define “metrics_path”. Prometheus assumes exporters are on “/metrics” but, for cAdvisor, our metrics are on “/metrics/cadvisor”.</p> </blockquote> <p>I think that could be the reason, but if this does not solve your issue I will try to recreate it in my cluster. </p> <p>Update:</p> <p>Judging from your yaml file, you did not configure Prometheus to scrape metrics from the cAdvisor. Add this to your yaml file:</p> <pre><code>scrape_configs: - job_name: cadvisor scrape_interval: 5s static_configs: - targets: - cadvisor:8080 </code></pre> <p>As specified <a href="https://prometheus.io/docs/guides/cadvisor/" rel="nofollow noreferrer">here</a>.</p>
<p>According to this article one need to set some nginx properties when running a .NET Core 2.x application using https and Azure AD Authentication behind Nginx in a Kubernetes cluster:</p> <p><a href="https://stackoverflow.com/questions/48964429/net-core-behind-nginx-returns-502-bad-gateway-after-authentication-by-identitys">.Net Core behind NGINX returns 502 Bad Gateway after authentication by IdentityServer4</a></p> <p>The answer outlines how to do this for a regular Nginx installation, but I would like to do this when installing Nginx in a Kubernetes cluster using Helm.</p> <p>These are the Nginx properties one need to set:</p> <pre><code>nginx.conf: http{ ... proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; large_client_header_buffers 4 16k; ... } default.conf: location /{ ... fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; ... } </code></pre> <p>The command I use to install Nginx in the Kubernetes cluster is:</p> <pre><code>helm install stable/nginx-ingress --namespace kube-system </code></pre> <p>How does one set the above properties when installing Nginx using Helm in a Kubernetes cluster?</p>
<p>Fully agree with @Mozafar Gholami, you can change parameters using a ConfigMap while deploying nginx or update your current configuration. To update the parameters before installation, I suggest the following:</p> <p>1. Fetch the chart to your local machine and unzip it:</p> <pre><code>helm fetch stable/nginx-ingress tar -xzf nginx-ingress-1.1.4.tgz </code></pre> <ol start="2"> <li>Edit the controller.config section in values.yaml.</li> </ol> <p>An example for you:</p> <pre><code>controller: name: controller image: repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller tag: "0.21.0" pullPolicy: IfNotPresent # www-data -&gt; uid 33 runAsUser: 33 config: proxy-buffer-size: "128k" proxy-buffers: "4 256k" </code></pre> <p>3. Check what will be added to the new ConfigMap:</p> <pre><code>helm template . | less </code></pre> <p>4. Install the chart:</p> <pre><code>helm install --name nginx-ingress --namespace kube-system ./nginx-ingress </code></pre> <p>Please keep in mind that:</p> <ol> <li><p>Instead of ConfigMaps you can change parameters with Annotations.</p></li> <li><p>Unfortunately NOT ALL parameters can be changed in nginx-ingress by the above approach.</p></li> <li><p>For more information read the <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/configmap-and-annotations.md#Summary-of-ConfigMap-and-Annotations" rel="nofollow noreferrer">nginx-ingress customization</a> page, where you can find all values you are able to change. For example, in your case I wasn't able to update the <code>proxy_busy_buffers_size</code> and <code>large_client_header_buffers</code> parameters.</p></li> </ol> <p>Hope this helps you.</p>
<p>Is there a way to easily query Kubernetes resources in an intuitive way? Basically I want to run queries to extract info about objects which match my criteria. Currently I face an issue where my match labels isn't quite working and I would like to run the match labels query manually to try and debug my issue. </p> <p>Basically in a pseudo code way:</p> <p>Select * from pv where labels in [red,blue,green]</p> <p>Any third party tools who do something like this? Currently all I have to work with is the search box on the dashboard which isn't quite robust enough. </p>
<p>You could use <code>kubectl</code> with JSONPath (<a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a>). More information on JSONPath: <a href="https://github.com/json-path/JsonPath" rel="nofollow noreferrer">https://github.com/json-path/JsonPath</a></p> <p>It allows you to query any resource property, example:</p> <pre><code>kubectl get pods -o=jsonpath='{$.items[?(@.metadata.namespace=="default")].metadata.name}' </code></pre> <p>This would list all pod names in namespace "default". Your pseudo code would be something along the lines:</p> <pre><code>kubectl get pv -o=jsonpath='{$.items[?(@.metadata.label in ["red","blue","green"])]}' </code></pre>
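<p>If the goal is specifically label matching, kubectl also supports set-based label selectors directly, which may be simpler than JSONPath for this case (the label key <code>color</code> below is just an assumption for illustration):</p> <pre><code># list PVs whose "color" label is one of red, blue or green
kubectl get pv -l 'color in (red, blue, green)'

# print the labels themselves, handy when debugging a matchLabels selector
kubectl get pv --show-labels
</code></pre>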
<p>I understand the use case of setting the CPU request less than the limit - it allows for CPU bursts in each container if the instance has free CPU, hence resulting in maximum CPU utilization. However, I cannot really find the use case for doing the same with memory. Most applications don't release memory after allocating it, so effectively applications will grow up to the 'limit' memory (which has the same effect as setting request = limit). The only exception is containers running on an instance that already has all its memory allocated. I don't really see any pros in this, and the cons are more nondeterministic behaviour that is hard to monitor (one container having higher latencies than the other due to heavy GC). The only use case I can think of is a sharded in-memory cache, where you want to allow for a spike in memory usage. But even in this case one would be running the risk of one of the nodes underperforming. </p>
<p>Actually, the topic you mention is interesting and at the same time complex, just as Linux memory management is. As we know, when a process uses more memory than the limit it quickly moves up the potential "to-kill" process "ladder". Going further, the purpose of the limit is to tell the kernel when it should consider the process a candidate to be killed. Requests, on the other hand, are a direct statement "my container will need this much memory", and beyond that they provide valuable information to the scheduler about where the Pod can be scheduled (based on available Node resources). </p> <p>If there is no memory request and a high limit, Kubernetes will default the request to the limit (this might result in scheduling failure, even if the pod's real requirements are met). </p> <p>If you set a request but no limit, the container will use the default limit for the namespace (if there is none, it will be able to use all of the available Node memory).</p> <p>Setting a memory request that is lower than the limit gives your pods room for activity bursts. It also makes sure that the memory available for the Pod to consume during a burst is actually a reasonable amount. </p> <p>Setting memory limit == memory request is not desirable simply because activity spikes will put the container on a fast track to being OOM-killed by the kernel. Unlike CPU, memory cannot be throttled in Kubernetes, so under memory pressure being killed is the most probable scenario (let's also remember that there is no swap partition). </p> <p>Quoting <a href="https://medium.com/@will.tomlin" rel="nofollow noreferrer">Will Tomlin</a> and his interesting article on <a href="https://medium.com/hotels-com-technology/kubernetes-container-resource-requirements-part-1-memory-a9fbe02c8a5f" rel="nofollow noreferrer">Requests vs Limits</a> which I highly recommend:</p> <blockquote> <p>You might be asking if there’s reason to set limits higher than requests. If your component has a stable memory footprint, you probably shouldn’t since when a container exceeds its requests, it’s more likely to be evicted if the worker node encounters a low memory condition.</p> </blockquote> <p>To summarize - there is no straightforward and easy answer. You have to determine your memory requirements and use monitoring and alerting tools to stay in control, and be ready to change/adjust the configuration according to your needs. </p>
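<p>Regarding the namespace default limit mentioned above, it is defined with a <code>LimitRange</code> object; a minimal sketch with example values:</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 256Mi    # request applied when a container specifies none
    default:
      memory: 512Mi    # limit applied when a container specifies none
</code></pre>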
<p>I have secrets configured in config/yaml file. There is one secret value that is causing trouble. I just want to print out the value being injected:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: myapplication-config data: config.yaml: | 'mysecret1': ${DB_PASSWORD} 'mysecret2': ${ANOTHER_SECRET} </code></pre> <p>I make a GET request to the controller to print out the secret:</p> <pre><code>@Autowired Environment env; @GetMapping(&quot;/test&quot;) public String print(){ System.out.println(env.getProperty(&quot;mysecret2&quot;)); } </code></pre> <p>When I print it, it throws an error:</p> <blockquote> <p>Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'mysecret2' in value &quot;${mysecret2}&quot; at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:172)</p> </blockquote> <p>Any idea how I can check the secrets sent to the application from config/env/dev/config.yaml?</p>
<p>I was able to read them through the Spring Environment:</p> <pre><code>@Autowired private org.springframework.core.env.Environment env; //inside some method @GetMapping("/test") public String print(){ System.out.println(env.getProperty("mysecret2")); } </code></pre> <p>I tried the options below, but they didn't work.</p> <pre><code>@Value("${mysecret2}) private String mySecret2; //didn't work .... System.getEnv("mySecret2"); //didn't work System.getProperty("mySecret2"); //didn't work </code></pre>
<p>After kubectl apply -f cluster.yaml (yaml example file from rook GitHub repository), I have only one pod rook-ceph-mon-a-*** running, even if I wait 1 hour. How can I investigate this problem?</p> <pre><code>NAME READY STATUS RESTARTS AGE rook-ceph-mon-a-7ff4fd545-qc2wl 1/1 Running 0 20m </code></pre> <p>And below the logs of the single running pod</p> <pre><code>$ kubectl logs rook-ceph-mon-a-7ff4fd545-qc2wl -n rook-ceph 2019-01-14 17:23:40.578 7f725478c140 0 ceph version 13.2.2 *** No filesystems configured 2019-01-14 17:23:40.643 7f723a050700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -&gt; 0 2019-01-14 17:23:40.643 7f723a050700 0 log_channel(cluster) log [DBG] : fsmap 2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 1009089990564790272, adjusting msgr requires 2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2019-01-14 17:23:40.643443 mon.a unknown.0 - 0 : [INF] mkfs cb8db53e-2d36-42eb-ab25-2a0918602655 2019-01-14 17:23:40.645 7f723a050700 1 mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -&gt; 3 2019-01-14 17:23:40.647 7f723a050700 0 log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in 2019-01-14 17:23:40.648 7f723a050700 0 log_channel(cluster) log [DBG] : mgrmap e1: no daemons active 2019-01-14 17:23:40.635473 mon.a mon.0 10.32.0.43:6790/0 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2019-01-14 17:23:40.641926 mon.a mon.0 10.32.0.43:6790/0 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) </code></pre>
<p>Assuming you have followed the official ceph-quickstart guide from rook`s Github page <a href="https://github.com/rook/rook/blob/master/Documentation/ceph-quickstart.md" rel="nofollow noreferrer">here</a>, please check first for the problematic pods with command:</p> <pre><code>kubectl -n rook-ceph get pod </code></pre> <p>and retrieve from them logs with:</p> <pre><code>kubectl logs &lt;pod_name&gt; </code></pre> <p>Please update your original question to include these command outputs. </p>
<p>According to this article one need to set some nginx properties when running a .NET Core 2.x application using https and Azure AD Authentication behind Nginx in a Kubernetes cluster:</p> <p><a href="https://stackoverflow.com/questions/48964429/net-core-behind-nginx-returns-502-bad-gateway-after-authentication-by-identitys">.Net Core behind NGINX returns 502 Bad Gateway after authentication by IdentityServer4</a></p> <p>The answer outlines how to do this for a regular Nginx installation, but I would like to do this when installing Nginx in a Kubernetes cluster using Helm.</p> <p>These are the Nginx properties one need to set:</p> <pre><code>nginx.conf: http{ ... proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; large_client_header_buffers 4 16k; ... } default.conf: location /{ ... fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; ... } </code></pre> <p>The command I use to install Nginx in the Kubernetes cluster is:</p> <pre><code>helm install stable/nginx-ingress --namespace kube-system </code></pre> <p>How does one set the above properties when installing Nginx using Helm in a Kubernetes cluster?</p>
<p>It is possible to customize the nginx configuration using a ConfigMap.<br> <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-configuration" rel="nofollow noreferrer">This is an example</a>.<br> After adding the ConfigMap, if nginx doesn't get updated, upgrade your nginx chart using Helm:<br> <code>helm upgrade [RELEASE] [CHART]</code><br> <code>helm upgrade my-release stable/nginx-ingress</code><br> If you don't know the [RELEASE], use the following command:<br> <code>helm list</code></p>
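<p>As a rough sketch of the linked example, the ConfigMap keys mirror the nginx directives with dashes instead of underscores. The name and namespace below are assumptions: the ConfigMap must be the one the controller's <code>--configmap</code> flag points at, and if you installed via the Helm chart it is usually easier to set these keys through the chart's <code>controller.config</code> values instead. Also note that not every nginx directive is exposed as a ConfigMap key:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration      # must match the controller's --configmap flag
  namespace: kube-system
data:
  proxy-buffer-size: "128k"
</code></pre>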
<p>Has anyone used the kubedb operator before? <a href="https://kubedb.com/docs/0.9.0/welcome/" rel="nofollow noreferrer">https://kubedb.com/docs/0.9.0/welcome/</a></p> <p>I've gotten a postgres instance bootstrapped and now im trying to do a snapshot to s3 but cant seem to get it working</p> <p><code>Waiting... database is not ready yet</code></p> <p>The db is up and accepting connections:</p> <pre><code>$ kubectl exec -it db-0 -n ${namespace} bash bash-4.3# pg_isready /var/run/postgresql:5432 - accepting connections </code></pre> <p>The db pod is running at :</p> <p><code>db-0 1/1 Running 0 37m</code></p> <p>Which is accessible in pgadmin via the server name <code>db.${namespace}</code></p> <p>Here's my snapshot object spec:</p> <pre><code>--- apiVersion: kubedb.com/v1alpha1 kind: Snapshot metadata: name: db-snapshot namespace: ${namespace} labels: kubedb.com/kind: Postgres spec: databaseName: db storageSecretName: s3-creds s3: endpoint: 's3.amazonaws.com' bucket: ${bucket} </code></pre> <p>If anyone can point out where im going wrong that would be great!</p>
<pre><code>#while ! nc "$DB_HOST" "$DB_PORT" -w 30 &gt;/dev/null; do # echo "Waiting... database is not ready yet" # sleep 5 #done </code></pre> <p>This nc command wasn't connecting to the db host for some reason. The container could psql into it using the db name, so I commented the check out and it worked like a charm.</p> <p>Guess there's some issue with the nc binary that's bundled in this container.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#force-deletion-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#force-deletion-of-pods</a></p> <p>This section of the kubernetes documentation points out that "Force deletions can be potentially dangerous for some pods", but doesn't really go into detail on the dangers.</p> <p>I understand that force deleting a pod will immediately "deregister" the pod from the API before the kubelet container confirms the underlying container is actually deleted, which could lead to a bunch of orphaned containers running if the kubelet fails to delete them. However, I don't know how to tell if a pod is "dangerous" to force-delete before I do so, or if there is even a way to predict this.</p> <p>Are there any guidelines on safely force-deleting a pod? Or is this just an inherently unsafe operation?</p>
<p>It really depends on the point of view.</p> <p>From the point of view of the K8s master and etcd (which keeps the state in K8s) it's safe, as the entry is deleted from etcd. </p> <p>However, the kubelet on the node still has to actually kill the pod, and sometimes it might not be able to (most of the time it is). </p> <p>A reason why it might not be able to kill the pod is if something like docker or your runtime isn't responding, or a Linux system resource is not being released, which could be anything like a deadlock, hardware failure, etc.</p> <p>So most of the time it's safe, but there might be a few specific cases where it's not, due to the nature of your application and the state of your system.</p>
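<p>For reference, the force deletion being discussed is typically issued like this (the pod name is a placeholder):</p> <pre><code># skips the graceful termination period and removes the pod from the API immediately
kubectl delete pod &lt;pod-name&gt; --grace-period=0 --force
</code></pre>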
<p>I have three master nodes with each 80 GB disk size. Recently I ran into this problem:</p> <pre><code>Normal Pulling 52s (x2 over 6m17s) kubelet, 192.168.10.37 pulling image &quot;gcr.io/kubeflow-images-public/tensorflow-serving-1.8gpu:latest&quot; Warning Evicted 8s (x5 over 4m19s) kubelet, 192.168.10.37 The node was low on resource: ephemeral-storage. </code></pre> <p>–&gt; &quot;The node was low on resource: ephemeral-storage.&quot;</p> <p>The storage on the execution node looks like this:</p> <pre><code>Filesystem Size Used Available Use% Mounted on overlay 7.4G 5.2G 1.8G 74% / tmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/vda1 7.4G 5.2G 1.8G 74% /opt /dev/vda1 7.4G 5.2G 1.8G 74% /mnt /dev/vda1 7.4G 5.2G 1.8G 74% /media /dev/vda1 7.4G 5.2G 1.8G 74% /home none 3.9G 1.5M 3.9G 0% /run /dev/vda1 7.4G 5.2G 1.8G 74% /etc/resolv.conf /dev/vda1 7.4G 5.2G 1.8G 74% /etc/selinux /dev/vda1 7.4G 5.2G 1.8G 74% /etc/logrotate.d /dev/vda1 7.4G 5.2G 1.8G 74% /usr/lib/modules devtmpfs 3.9G 0 3.9G 0% /host/dev shm 64.0M 0 64.0M 0% /host/dev/shm /dev/vda1 7.4G 5.2G 1.8G 74% /usr/lib/firmware none 3.9G 1.5M 3.9G 0% /var/run /dev/vda1 7.4G 5.2G 1.8G 74% /etc/docker /dev/vda1 7.4G 5.2G 1.8G 74% /usr/sbin/xtables-multi /dev/vda1 7.4G 5.2G 1.8G 74% /var/log /dev/vda1 7.4G 5.2G 1.8G 74% /etc/hosts /dev/vda1 7.4G 5.2G 1.8G 74% /etc/hostname shm 64.0M 0 64.0M 0% /dev/shm /dev/vda1 7.4G 5.2G 1.8G 74% /usr/bin/system-docker-runc /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/boot2docker /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/docker /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/kubelet /dev/vda1 7.4G 5.2G 1.8G 74% /usr/bin/ros /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/rancher /dev/vda1 7.4G 5.2G 1.8G 74% /usr/bin/system-docker /dev/vda1 7.4G 5.2G 1.8G 74% /usr/share/ros /dev/vda1 7.4G 5.2G 1.8G 74% /etc/ssl/certs/ca-certificates.crt.rancher /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/rancher/conf /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/rancher/cache devtmpfs 3.9G 0 3.9G 0% /dev shm 64.0M 0 64.0M 0% /dev/shm /dev/vda1 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2 overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/0181228584d6531d794879db05bf1b0c0184ed7a4818cf6403084c19d77ea7a0/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/655a92612d5b43207cb50607577a808065818aa4d6442441d05b6dd55cab3229/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/b0d8200c48b07df410d9f476dc60571ab855e90f4ab1eb7de1082115781b48bb/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/f36e7d814dcb59c5a9a5d15179543f1a370f196dc88269d21a68fb56555a86e4/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/842157b72f9155a86d2e4ee2547807c4a70c06320f5eb6b2ffdb00d2756a2662/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/cee5e99308a13a32ce64fdb853d2853c5805ce1eb71d0c050793ffaf8a000db9/merged shm 64.0M 0 64.0M 0% /var/lib/docker/containers/6ee5a7ad205bf24f1795fd9374cd4a707887ca2edd6f7e1b4a7698f51361966c/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/79decf02c3a0eb6dd681c8f072f9717c15ba17fcb47d693fcfa1c392b3aef002/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/acc7d374f838256762e03aea4378b73de7a38c97b07af77d62ee01135cc1377b/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/46cb89b550bb1d5394fcbd66d2746f34064fb792a4a6b14d524d4f76a1710f7e/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/0db3a0057c9194329fbacc4d5d94ab40eb2babe06dbb180f72ad96c8ff721632/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/6c17379244983233c7516062979684589c24b661bc203e6e1d53904dd7de167f/shm tmpfs 3.9G 12.0K 3.9G 0% 
/opt/rke/var/lib/kubelet/pods/ea5b0e7d-18d6-11e9-86c9-fa163ebea4e5/volumes/kubernetes.io~secret/canal-token-gcxzd tmpfs 3.9G 12.0K 3.9G 0% /opt/rke/var/lib/kubelet/pods/eab6dac4-18d6-11e9-86c9-fa163ebea4e5/volumes/kubernetes.io~secret/cattle-token-lbpxh tmpfs 3.9G 8.0K 3.9G 0% /opt/rke/var/lib/kubelet/pods/eab6dac4-18d6-11e9-86c9-fa163ebea4e5/volumes/kubernetes.io~secret/cattle-credentials tmpfs 3.9G 12.0K 3.9G 0% /opt/rke/var/lib/kubelet/pods/5c672b02-18df-11e9-a246-fa163ebea4e5/volumes/kubernetes.io~secret/nginx-ingress-serviceaccount-token-vc522 overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/c29dc914ee801d2b36d4d2b688e5b060be6297665187f1001f9190fc9ace009d/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/0591531eb89d598a8ef9bf49c6c21ea8250ad08489372d3ea5dbf561d44c9340/merged shm 64.0M 0 64.0M 0% /var/lib/docker/containers/c89f839b36e0f7317c78d806a1ffb24d43a21c472a2e8a734785528c22cce85b/shm shm 64.0M 0 64.0M 0% /var/lib/docker/containers/33050b02fc38091003e6a18385446f48989c8f64f9a02c64e41a8072beea817c/shm overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/a81da21f41c5c9eb2fb54ccdc187a26d5899f35933b4b701139d30f1af3860a4/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/f6d546b54d59a29526e4a9187fb75c22c194d28926fca5c9839412933c53ee9d/merged shm 64.0M 0 64.0M 0% /var/lib/docker/containers/7b0f9471bc66513589e79cc733ed6d69d897270902ffba5c9747b668d0f43472/shm overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/cae4765e9eb9004e1372b4b202e03a2a8d3880c918dbc27c676203eef7336080/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/81ee00944f4eb367d4dd06664a7435634916be55c1aa0329509f7a277a522909/merged overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/7888843c2e76b5c3c342a765517ec06edd92b9eab25d26655b0f5812742aa790/merged tmpfs 3.9G 12.0K 3.9G 0% /opt/rke/var/lib/kubelet/pods/c19a2ca3-18df-11e9-a246-fa163ebea4e5/volumes/kubernetes.io~secret/default-token-nzc2d overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/4d1c7efa3af94c1bea63021b594704db4504d4d97f5c858bdb6fe697bdefff9b/merged shm 64.0M 0 64.0M 0% /var/lib/docker/containers/e10b7da6d372d241bebcf838e2cf9e6d86ce29801a297a4e7278c7b7329e895d/shm overlay 7.4G 5.2G 1.8G 74% /var/lib/docker/overlay2/50df5234e85a2854b27aa8c7a8e483ca755803bc8bf61c25060a6c14b50a932c/merged </code></pre> <p>I already tried to prune all docker systems on all nodes and reconfigured and restarted all.</p> <p>Is it may be connected with the fact that all the volumes have a limit of 7.4 GB?</p> <p>How can I increase the ephemeral-storage therefore?</p>
<blockquote> <p>Is it may be connected with the fact that all the volumes have a limit of 7.4 GB?</p> </blockquote> <p>You really have a single volume <code>/dev/vda1</code> and multiple mount points and not several volumes with 7.4GB</p> <p>Not sure where you are running Kubernetes but that looks like a virtual volume (in a VM). You can increase the size in the VM configuration or cloud provider and then run this to increase the size of the filesystem:</p> <ul> <li><p>ext4:</p> <pre><code>$ resize2fs /dev/vda1 </code></pre></li> <li><p>xfs:</p> <pre><code>$ xfs_growfs /dev/vda1 </code></pre></li> </ul> <p>Other filesystems will have their own commands too.</p> <p>The most common issue for running out of disk space on the master(s) is log files, so if that's the case you can set up a cleanup job for them or change the log size configs.</p>
<p>I'm trying to enable efk in my kubernetes cluster. I find a file about fluentd's config: <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml</a> </p> <p>In this file, there's:</p> <pre><code>&lt;filter kubernetes.**&gt; @id filter_kubernetes_metadata @type kubernetes_metadata &lt;/filter&gt; # Fixes json fields in Elasticsearch &lt;filter kubernetes.**&gt; @id filter_parser @type parser key_name log reserve_data true remove_key_name_field true &lt;parse&gt; @type multi_format &lt;pattern&gt; format json &lt;/pattern&gt; &lt;pattern&gt; format none &lt;/pattern&gt; &lt;/parse&gt; &lt;/filter&gt; </code></pre> <p>I want to use different parsers for different deployments. So I wonder:</p> <ol> <li><p>what's 'kubernetes.**' in kubernetes? Is it the name of a deployment or label of a deployment? </p></li> <li><p>In docker-compose file, we can tag on different containers and use the tag in fluentd's 'filter'. In kubernetes, is there any similar way?</p></li> </ol> <p>Thanks for your help!</p>
<p>It isn't related to kubernetes, or to deployments; it is <code>fluentd</code> syntax that represents the top-level <code>kubernetes</code> "tag" and all its subkeys that are published as an event, as one can see <a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter#example-inputoutput" rel="nofollow noreferrer">here</a></p>
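<p>On the second part of the question: with a configuration like the one linked, container logs are tailed from <code>/var/log/containers/&lt;pod&gt;_&lt;namespace&gt;_&lt;container&gt;-&lt;id&gt;.log</code> and that file path ends up in the fluentd tag, so you can scope a filter to one deployment's pods by matching on the tag instead of <code>kubernetes.**</code>. A hedged sketch (the <code>myapp</code> pod name prefix is an assumption, and the exact tag layout depends on your tail source configuration):</p> <pre><code># apply a JSON parser only to containers whose log file name contains "myapp"
&lt;filter kubernetes.var.log.containers.**myapp**.log&gt;
  @id filter_parser_myapp
  @type parser
  key_name log
  reserve_data true
  &lt;parse&gt;
    @type json
  &lt;/parse&gt;
&lt;/filter&gt;
</code></pre>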
<p>I am trying to set Traefik as my ingress controller and load balancer on a single node cluster(Digital Ocean). Following <a href="https://github.com/containous/traefik/blob/master/docs/user-guide/kubernetes.md#deploy-traefik-using-helm-chart" rel="nofollow noreferrer">the official Traefik setup guide</a> I installed Traefik using helm:</p> <pre><code>helm install --values values.yaml stable/traefik # values.yaml dashboard: enabled: true domain: traefik-ui.minikube kubernetes: namespaces: - default - kube-system #output RESOURCES: ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE operatic-emu-traefik-f5dbf4b8f-z9bzp 0/1 ContainerCreating 0 1s ==&gt; v1/ConfigMap NAME AGE operatic-emu-traefik 1s ==&gt; v1/Service operatic-emu-traefik-dashboard 1s operatic-emu-traefik 1s ==&gt; v1/Deployment operatic-emu-traefik 1s ==&gt; v1beta1/Ingress operatic-emu-traefik-dashboard 1s </code></pre> <p>Then I created the service exposing the Web UI <code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml</code></p> <p>Then I can clearly see my traefik pod running and an external-ip being assigned:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/dashboard ClusterIP 10.245.156.214 &lt;none&gt; 443/TCP 11d service/kubernetes ClusterIP 10.245.0.1 &lt;none&gt; 443/TCP 14d service/operatic-emu-traefik LoadBalancer 10.245.137.41 &lt;external-ip&gt; 80:31190/TCP,443:30207/TCP 5m7s service/operatic-emu-traefik-dashboard ClusterIP 10.245.8.156 &lt;none&gt; 80/TCP 5m7s </code></pre> <p>Then opening <a href="http://external-ip/dashboard/" rel="nofollow noreferrer">http://external-ip/dashboard/</a> leads to 404 page not found</p> <p>I read a ton of answers and tutorials but keep missing something. Any help is highly appreciated. </p>
<p>I am writing this post as the information is a bit much to fit in a comment. After spending enough time on understanding how k8s and helm charts work, this is how I solved it:</p> <p>Firstly, I missed the RBAC part, I did not create ClusterRole and ClusterRoleBinding in order to authorise Traefik to use K8S API (as I am using 1.12 version). Hence, either I should have deployed ClusterRole and ClusterRoleBinding manually or added the following in my <code>values.yaml</code></p> <pre><code>rbac: enabled: true </code></pre> <p>Secondly, I tried to access dashboard ui from ip directly without realising Traefik uses hostname to direct to its dashboard as @Rico mentioned above (I am voting you up as you did provide helpful info but I did not manage to connect all pieces of the puzzle at that time). So, either edit your <code>/etc/hosts</code> file linking your hostname to the <code>external-ip</code> and then access the dashboard via browser or test that it is working with curl:</p> <pre><code>curl http://external-ip/dashboard/ -H 'Host: traefik-ui.minikube' </code></pre> <p>To sum up, you should be able to install Traefik and access its dashboard ui by installing:</p> <pre><code>helm install --values values.yaml stable/traefik # values.yaml dashboard: enabled: true domain: traefik-ui.minikube rbac: enabled: true kubernetes: namespaces: - default - kube-system </code></pre> <p>and then editing your hosts file and opening the hostname you chose.</p> <p>Now the confusing part from the <a href="https://github.com/containous/traefik/blob/master/docs/user-guide/kubernetes.md#deploy-traefik-using-helm-chart" rel="noreferrer">official traefik setup guide</a> is the section named <code>Submitting an Ingress to the Cluster</code> just below the <code>Deploy Traefik using Helm Chart</code> that instructs to install a service and an ingress object in order to be able to access the dashboard. This is unneeded as the official stable/traefik helm chart provides both of them. You would need that if you want to install traefik by deploying all needed objects manually. However for a person just starting out with k8s and helm, it looks like that section needs to be completed after installing helm via the official stable/traefik chart.</p>
<p>I have two pods in a cluster. Lets call them A and B. I've installed kubectl inside pod A and I am trying to run a command inside pod B from pod A using <code>kubectl exec -it podB -- bash</code>. I am getting the following error </p> <p><code>Error from server (Forbidden): pods "B" is forbidden: User "system:serviceaccount:default:default" cannot create pods/exec in the namespace "default"</code></p> <p>I've created the following Role and RoleBinding to get access. Role yaml</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: api-role namespace: default labels: app: tools-rbac rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] </code></pre> <p>RoleBinding yaml</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: global-rolebinding namespace: default labels: app: tools-rbac subjects: - kind: Group name: system:serviceaccounts apiGroup: rbac.authorization.k8s.io </code></pre> <p>Any help is greatly appreciated. Thank you</p>
<p>You would need to give access to the <code>pods/exec</code> subresource in addition to <code>pods</code> like you have there. That said, this is a very unusual thing to do, and you should probably think very hard about whether this is the best solution.</p>
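<p>A sketch of how the Role rules could look if you do go ahead (exec maps to <code>create</code> on the <code>pods/exec</code> subresource; also note that the RoleBinding in the question appears to be missing its <code>roleRef</code> section):</p> <pre><code>rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
</code></pre>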
<p>The command <code>kubectl get service</code> returns a list of services that were created at one point in time:</p> <pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE car-example-service 10.0.0.129 &lt;nodes&gt; 8025:31564/TCP,1025:31764/TCP 10h circle-example-service 10.0.0.48 &lt;nodes&gt; 9000:30362/TCP 9h demo-service 10.0.0.9 &lt;nodes&gt; 8025:30696/TCP,1025:32047/TCP 10h example-servic 10.0.0.168 &lt;nodes&gt; 8080:30231/TCP 1d example-service 10.0.0.68 &lt;nodes&gt; 8080:32308/TCP 1d example-service2 10.0.0.184 &lt;nodes&gt; 9000:32727/TCP 13h example-webservice 10.0.0.35 &lt;nodes&gt; 9000:32256/TCP 1d hello-node 10.0.0.224 &lt;pending&gt; 8080:32393/TCP 120d kubernetes 10.0.0.1 &lt;none&gt; 443/TCP 120d mouse-example-service 10.0.0.40 &lt;nodes&gt; 9000:30189/TCP 9h spring-boot-web 10.0.0.171 &lt;nodes&gt; 8080:32311/TCP 9h spring-boot-web-purple 10.0.0.42 &lt;nodes&gt; 8080:31740/TCP 9h </code></pre> <p>I no longer want any of these services listed, because when I list resources: <code>% kubectl get rs</code></p> <p>I am expecting that I only see the spring-boot-web resource listed.</p> <pre><code>NAME DESIRED CURRENT READY AGE spring-boot-web-1175758536 1 1 0 18m </code></pre> <p>Please help clarify why I am seeing services that are listed , when the resources only show 1 resource.</p>
<p>Simply run the following commands.</p> <ol> <li>Get all available services:</li> </ol> <pre><code>kubectl get service -o wide </code></pre> <ol start="2"> <li>Then you can delete any service like this:</li> </ol> <pre><code>kubectl delete svc &lt;YourServiceName&gt; </code></pre>
<p>Trying to install Che in Kubernertes:</p> <p>from: <a href="https://www.eclipse.org/che/docs/che-6/kubernetes-single-user.html" rel="nofollow noreferrer">https://www.eclipse.org/che/docs/che-6/kubernetes-single-user.html</a></p> <p>Deploying Che:</p> <pre><code>helm upgrade --install my-che-installation --namespace my-che-namespace -f ./ </code></pre> <p>Error: Error: This command needs 2 arguments: release name, chart path</p>
<p>I think the problem is the <code>-f</code> flag: it is normally used for a values file, but here it is pointing to a whole directory and not a values file. If you take it out and run <code>helm upgrade --install my-che-installation --namespace my-che-namespace ./</code> from the suggested path, then you get a different error because the dependencies are not built. If you then run <code>helm dep build .</code> and try again, it works.</p>
<p>I've created a Kubernetes job that has now failed. Where can I find the logs to this job?</p> <p>I'm not sure how to find the associated pod (I assume once the job fails it deletes the pod)?</p> <p>Running <code>kubectl describe job</code> does not seem to show any relevant information:</p> <pre><code>Name: app-raiden-migration-12-19-58-21-11-2018 Namespace: localdev Selector: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0 Labels: jobType=database-migration Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"labels":{"jobType":"database-migration"},"name":"app-raiden-migration-12-19-58-21-1... Parallelism: 1 Completions: 1 Start Time: Wed, 21 Nov 2018 12:19:58 +0000 Pods Statuses: 0 Running / 0 Succeeded / 1 Failed Pod Template: Labels: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0 job-name=app-raiden-migration-12-19-58-21-11-2018 Containers: app: Image: pp3-raiden-app:latest Port: &lt;none&gt; Command: php artisan migrate Environment: DB_HOST: local-mysql DB_PORT: 3306 DB_DATABASE: raiden DB_USERNAME: &lt;set to the key 'username' in secret 'cloudsql-db-credentials'&gt; Optional: false DB_PASSWORD: &lt;set to the key 'password' in secret 'cloudsql-db-credentials'&gt; Optional: false LOG_CHANNEL: stderr APP_NAME: Laravel APP_KEY: ABCDEF123ERD456EABCDEF123ERD456E APP_URL: http://192.168.99.100 OAUTH_PRIVATE: &lt;set to the key 'oauth_private.key' in secret 'laravel-oauth'&gt; Optional: false OAUTH_PUBLIC: &lt;set to the key 'oauth_public.key' in secret 'laravel-oauth'&gt; Optional: false Mounts: &lt;none&gt; Volumes: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2m job-controller Created pod: app-raiden-migration-12-19-58-21-11-2018-pwnjn Warning BackoffLimitExceeded 2m job-controller Job has reach the specified backoff limit </code></pre>
<p>One other approach:</p> <ul> <li><code>kubectl describe job $JOB</code></li> <li>Pod name is shown under "Events"</li> <li><code>kubectl logs $POD</code></li> </ul>
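<p>If your kubectl version supports it, you can also skip the manual pod lookup (here <code>$JOB</code> is the job name, as above):</p> <pre><code># let kubectl pick a pod that belongs to the job
kubectl logs job/$JOB

# or select the pods via the label the job controller adds to them
kubectl logs -l job-name=$JOB
</code></pre>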
<p>I am going to start with an example. Say I have an AKS cluster with three nodes. Each of these nodes runs a set of pods, let's say 5 pods. That's 15 pods running on my cluster in total, 5 pods per node, 3 nodes.</p> <p>Now let's say that my nodes are not fully utilized at all and I decide to scale down to 2 nodes instead of 3.</p> <p>When I choose to do this within Azure and change my node count from 3 to 2, Azure will close down the 3rd node. However, it will also delete all pods that were running on the 3rd node. How do I make my cluster reschedule the pods from the 3rd node to the 1st or 2nd node so that I don't lose them and their contents?</p> <p>The only way I feel safe to scale down on nodes right now is to do the rescheduling manually.</p>
<p>Assuming you are using Kubernetes Deployments (or ReplicaSets), then it should do this for you. Your Deployment is configured with a set number of replicas for each pod; when you remove a node, the controller will see that the current active number is less than the desired number and create new ones on the remaining nodes.</p> <p>If you are just deploying pods without a Deployment, then this won't happen and the only solution is manually redeploying, which is why you want to use a Deployment.</p> <p>Bear in mind, though, that what gets created are new pods; you are not moving the previously running pods. Any state you had on the previous pods that is not persisted will be gone. This is how it is intended to work.</p>
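<p>As a minimal illustration, a Deployment along these lines (names and image are hypothetical) keeps 5 replicas running; if a node disappears, the missing replicas are recreated on the remaining nodes:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5              # desired count the controller keeps enforcing
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
</code></pre>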
<p>How to Do Kubectl cp from running pod to local,says no such file or directory</p> <p>I have contents in Ubuntu container as below</p> <pre><code>vagrant@ubuntu-xenial:~/k8s/pods$ kubectl exec command-demo-67m2b -c ubuntu -- sh -c "ls /tmp" docker-sock </code></pre> <p>Now simply i want to copy above /tmp contents using below kubectl cp command</p> <pre><code>kubectl cp command-demo-67m2b/ubuntu:/tmp /home </code></pre> <p>I have a command output as below</p> <pre><code>vagrant@ubuntu-xenial:~/k8s/pods$ kubectl cp command-demo-67m2b/ubuntu:/tmp /home error: tmp no such file or directory </code></pre> <p>Now All i want to do is copy above /tmp folder to local host,unfortunately kubectl says no such file or directory. I amn confused when /tmp folder exists in Ubuntu container why kubectl cp saying folder not found</p> <p>My pod is command-demo-67m2b and container name is ubuntu</p> <p>But the pod is up and running as shown below</p> <pre><code>vagrant@ubuntu-xenial:~/k8s/pods$ kubectl describe pods command-demo-67m2b Name: command-demo-67m2b Namespace: default Node: ip-172-31-8-145/172.31.8.145 Start Time: Wed, 16 Jan 2019 00:57:05 +0000 Labels: controller-uid=a4ac12c1-1929-11e9-b787-02d8b37d95a0 job-name=command-demo Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: memory request for container ubuntu; memory limit for container ubuntu Status: Running IP: 10.1.40.75 Controlled By: Job/command-demo Containers: command-demo-container: Container ID: docker://c680fb336242f456d90433a9aa89cf3e1cb1d45d73447769fcf86ce329176437 Image: tarunkumard/fromscratch6.0 Image ID: docker- ullable://tarunkumard/fromscratch6.0@sha256:709b588aa4edcc9bc2b39bee60f248bb02347a605da09fb389c448e41e2f543a Port: &lt;none&gt; Host Port: &lt;none&gt; State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 16 Jan 2019 00:57:07 +0000 Finished: Wed, 16 Jan 2019 00:58:36 +0000 Ready: False Restart Count: 0 Limits: memory: 1Gi Requests: memory: 900Mi Environment: &lt;none&gt; Mounts: /opt/gatling-fundamentals/build/reports/gatling/ from docker-sock (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-w6jt6 (ro) ubuntu: Container ID: docker://7da9d43816253048fb4137fadc6c2994aac93fd272391b73f2fab3b02487941a Image: ubuntu:16.04 Image ID: docker- Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /bin/bash -c -- Args: while true; do sleep 10; done; State: Running Started: Wed, 16 Jan 2019 00:57:07 +0000 Ready: True Restart Count: 0 Limits: memory: 1Gi Requests: memory: 1Gi Environment: JVM_OPTS: -Xms900M -Xmx1G Mounts: /docker-sock from docker-sock (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-w6jt6 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: docker-sock: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-w6jt6: Type: Secret (a volume populated by a Secret) SecretName: default-token-w6jt6 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s </code></pre> <p>Events: </p> <p>Here is my yaml file in case you need for reference:-</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: command-demo spec: ttlSecondsAfterFinished: 100 template: spec: volumes: - name: docker-sock # Name of the AWS EBS Volume emptyDir: {} restartPolicy: Never containers: - name: command-demo-container image: tarunkumard/fromscratch6.0 volumeMounts: - mountPath: 
/opt/gatling-fundamentals/build/reports/gatling/ # Mount path within the container name: docker-sock # Name must match the AWS EBS volume name defined in spec.Volumes imagePullPolicy: Never resources: requests: memory: "900Mi" limits: memory: "1Gi" - name: ubuntu image: ubuntu:16.04 command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 10; done;" ] volumeMounts: - mountPath: /docker-sock # Mount path within the container name: docker-sock # Name must match the AWS EBS volume name defined in spec.Volumes imagePullPolicy: Never env: - name: JVM_OPTS value: "-Xms900M -Xmx1G" </code></pre> <p>I expect kubectl cp command to copy contents from pod container to local</p>
<p>In your original command to exec into a container, you pass the <code>-c ubuntu</code> command, meaning you're selecting the Ubuntu container from the pod:</p> <pre><code>kubectl exec command-demo-67m2b -c ubuntu -- sh -c "ls /tmp" </code></pre> <p>However, in your <code>kubectl cp</code> command, you're not specifying the same container:</p> <pre><code>kubectl cp command-demo-67m2b/ubuntu:/tmp /home </code></pre> <p>Try this:</p> <pre><code>kubectl cp command-demo-67m2b:/tmp /home -c ubuntu </code></pre>
<p>I deployed a spring boot app in kubernetes pod. But normally I access any app in this way of proxy port forwarding -</p> <p><a href="http://192.64.125.29:8001/api/v1/namespaces/kube-system/services/https:hello-app:/proxy/" rel="nofollow noreferrer">http://192.64.125.29:8001/api/v1/namespaces/kube-system/services/https:hello-app:/proxy/</a></p> <p>But my spring boot app is running in this below url -</p> <p><a href="http://192.64.125.29:8001/api/v1/namespaces/kube-system/services/https:myspringbootapp:/proxy/" rel="nofollow noreferrer">http://192.64.125.29:8001/api/v1/namespaces/kube-system/services/https:myspringbootapp:/proxy/</a></p> <p>But I have no idea how to invoke my controller end point /visitid</p>
<p>If you are just trying to do a quick check then you can <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">port-forward to the pod</a> - do <code>kubectl get pods</code> to find the pod name and then <code>kubectl port-forward &lt;pod-name&gt; 8080:8080</code> or whatever port you use if not 8080. Then you can hit your endpoint in your browser or with curl on localhost. For example, if you have the spring boot actuator running you could go to <code>http://localhost:8080/actuator/health</code>.</p> <p>If you are trying to access the Pod through the Service then you can <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod" rel="nofollow noreferrer">port-forward to the Service</a> but you may want to expose the Service externally. You'll want to pick <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">how to expose it externally</a> and set that up. Then you'll have an external URL you can use and won't need to go via the kube internal APIs.</p> <p>It is also possible <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls" rel="nofollow noreferrer">to construct a URL to hit the Service when proxying with <code>kubectl proxy</code></a>. For example you could hit the actuator on a spring boot app using http (not https) with <code>api/v1/namespaces/&lt;namespace&gt;/services/&lt;http:&gt;&lt;service_name&gt;:&lt;port_name&gt;/proxy/actuator/health</code>. The <code>&lt;port_name&gt;</code> will be in the service spec and you'll find it in the output of <code>kubectl describe service</code>.</p>
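<p>Concretely, a quick check of the <code>/visitid</code> endpoint from the question could look like this (the pod name and port are placeholders):</p> <pre><code>kubectl get pods
kubectl port-forward &lt;pod-name&gt; 8080:8080

# in another terminal
curl http://localhost:8080/visitid
</code></pre>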
<p>I have a <code>RestTemplate</code> that I build it with <code>RestTemplateBuilder</code>. I set the rootUri for builder. In below method (updateState1) sometimes I got the "URI is not absolute" error. For example when I called this method concurrently for 2 times I often got 1 error.</p> <p><strong>EDIT and Solution</strong>: I use this <code>RestTemplate</code> in service task of camunda process. I launch this project in kubernetes container that has different timezone with the oracle database. When I add timezone variable every things work fine.</p> <p>Spring boot version: 2.1.1.RELEASE</p> <p>Here is my code:</p> <pre><code>@Component @Slf4j public class CoreServiceClient { private RestTemplate restTemplate; private static final String root = "http://localhost:8080/test/api/"; public CoreServiceClient(RestTemplateBuilder restTemplateBuilder) { restTemplate = restTemplateBuilder.rootUri(root).build(); } public void updateState1(UpdateParam updateParam) { HttpHeaders headers = generateHeader(); UpdateRequest updateRequest = new UpdateRequest(updateParam.getState()); HttpEntity&lt;UpdateRequest&gt; httpEntity = new HttpEntity&lt;&gt;(updateRequest, headers); ResponseEntity&lt;String&gt; response = restTemplate.exchange( "/food/{id}/state", HttpMethod.PUT, httpEntity, String.class, updateParam.getId()); } public void updateState2(String id) { HttpHeaders headers = generateHeader(); UpdateRequest updateRequest = new UpdateRequest("done"); HttpEntity&lt;UpdateRequest&gt; httpEntity = new HttpEntity&lt;&gt;(updateRequest, headers); ResponseEntity&lt;String&gt; response = restTemplate.exchange( "/food/{id}/state", HttpMethod.PUT, httpEntity, String.class, id); } } </code></pre> <p>cuase (stacktrace):</p> <pre><code>Caused by: java.lang.IllegalArgumentException: URI is not absolute at java.net.URI.toURL(URI.java:1088) at org.springframework.http.client.SimpleClientHttpRequestFactory.createRequest(SimpleClientHttpRequestFactory.java:145) at org.springframework.http.client.support.HttpAccessor.createRequest(HttpAccessor.java:87) at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:730) at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:669) at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:578) at com.test.client.CoreServiceClient.updateState(CoreServiceClient.java:39) at sun.reflect.GeneratedMethodAccessor263.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.camunda.bpm.engine.impl.javax.el.BeanELResolver.invoke(BeanELResolver.java:479) ... 85 more </code></pre>
<p>Remove the trailing <code>/</code> in root:</p> <pre><code>private static final String root = "http://localhost:8080/test/api"; </code></pre> <p>RestTemplate applies the root to URI templates as long as they start with <code>/</code>, so your root should be without a trailing one. If a path doesn't start with <code>/</code>, it is treated as a full <code>URL</code>.</p>
<p>MY istio destintion rules are not working, getting below error in kiali</p> <p><a href="https://i.stack.imgur.com/ZmJJ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZmJJ6.png" alt="enter image description here"></a> VirtualService and destination rule for echo service:</p> <p>My calling <code>echo-svc:8080</code> and <code>echo-svc:8080/v1</code> from my another virtualservices , I'm not able to do route in specific version.</p> <p>When making request from another virtualservice: <code>echo-svc:8080/v1</code> or <code>echo-svc:8080</code>, I'm getting response from both the subsets. </p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: echo-vsvc spec: hosts: - "echo-svc.default.svc.cluster.local" http: - match: - uri: prefix: "/v1" route: - destination: host: echo-svc.default.svc.cluster.local subset: v1 - route: - destination: host: echo-svc.default.svc.cluster.local subset: v2 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: echo-destination spec: host: echo-svc.default.svc.cluster.local subsets: - name: v1 labels: version: v0.1 - name: v2 labels: version: v0.2 </code></pre> <p>If I'm attaching my echo-service to gateway and then making service to <code>v1</code> endpoint via istio-ingress, my all requests are routed to required k8s service, but if it's internal(echo service not attached to gateway) envoy is not routing the requests to required k8s service..</p> <p>Update:</p> <pre><code> $ &gt; k get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS echo-deploy-v1-bdf758994-8f54b 2/2 Running 0 2m56s app=echo-app,pod-template-hash=bdf758994,version=v0.1 echo-deploy-v2-68bb64684d-9gg2r 2/2 Running 0 2m51s app=echo-app,pod-template-hash=68bb64684d,version=v0.2 frontend-v2-569c89dbd8-wfnc4 2/2 Running 2 12h app=frontend,pod-template-hash=569c89dbd8,version=v2 </code></pre>
<p>Found my mistake: for Istio destination rules to work, be very careful about these requirements: <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/spec-requirements/</a>.</p> <p>My mistake was the <strong>named port</strong> of the service. Updating it from "web" to "http-web" worked for me. It should be of the form <code>&lt;protocol&gt;[-&lt;suffix&gt;]</code>.</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: echo-svc labels: app: echo-app spec: ports: - port: 80 targetPort: 8080 name: http-web selector: app: echo-app --- </code></pre>
<p>My k8s namespace contains a <code>Secret</code> which is created at deployment time (by <code>svcat</code>), so the values are not known in advance.</p> <pre><code>apiVersion: v1 kind: Secret type: Opaque metadata: name: my-database-credentials data: hostname: ... port: ... database: ... username: ... password: ... </code></pre> <p>A <code>Deployment</code> needs to inject these values in a slightly different format:</p> <pre><code>... containers: env: - name: DATABASE_URL valueFrom: secretKeyRef: name: my-database-credentials key: jdbc:postgresql:&lt;hostname&gt;:&lt;port&gt;/&lt;database&gt; // ?? - name: DATABASE_USERNAME valueFrom: secretKeyRef: name: my-database-credentials key: username - name: DATABASE_PASSWORD valueFrom: secretKeyRef: name: my-database-credentials key: password </code></pre> <p>The <code>DATABASE_URL</code> needs to be composed out of the <code>hostname</code>, <code>port</code>, 'database` from the previously defined secret.</p> <p>Is there any way to do this composition? </p>
<p>Kubernetes allows you to use previously defined environment variables as part of subsequent environment variables elsewhere in the configuration. From the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#envvar-v1-core" rel="noreferrer">Kubernetes API reference docs</a>:</p> <blockquote> <p>Variable references $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables.</p> </blockquote> <p>This <code>$(...)</code> syntax defines <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/" rel="noreferrer">interdependent environment variables</a> for the container.</p> <p>So, you can first extract the required secret values into environment variables, and then compose the <code>DATABASE_URL</code> with those variables.</p> <pre class="lang-yaml prettyprint-override"><code>... containers: env: - name: DB_URL_HOSTNAME // part 1 valueFrom: secretKeyRef: name: my-database-credentials key: hostname - name: DB_URL_PORT // part 2 valueFrom: secretKeyRef: name: my-database-credentials key: port - name: DB_URL_DBNAME // part 3 valueFrom: secretKeyRef: name: my-database-credentials key: database - name: DATABASE_URL // combine value: jdbc:postgresql:$(DB_URL_HOSTNAME):$(DB_URL_PORT)/$(DB_URL_DBNAME) ... </code></pre>
<p>In my Kubernetes environment I have the following two pods running</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE httpd-6cc5cff4f6-5j2p2 1/1 Running 0 1h 172.16.44.12 node01 tomcat-68ccbb7d9d-c2n5m 1/1 Running 0 45m 172.16.44.13 node02 </code></pre> <p>One is a Tomcat instance and the other one is an Apache instance.</p> <p>From <code>node01</code> and <code>node02</code> I can curl the httpd, which is using port <code>80</code>. But if I curl the Tomcat server, which is running on <code>node02</code>, from <code>node01</code>, it fails. I get the output below.</p> <pre><code>[root@node1~]# curl -v 172.16.44.13:8080 * About to connect() to 172.16.44.13 port 8080 (#0) * Trying 172.16.44.13... * Connected to 172.16.44.13 (172.16.44.13) port 8080 (#0) &gt; GET / HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: 172.16.44.13:8080 &gt; Accept: */* &gt; ^C [root@node1~]# wget -v 172.16.44.13:8080 --2019-01-16 12:00:21-- http://172.16.44.13:8080/ Connecting to 172.16.44.13:8080... connected. HTTP request sent, awaiting response... </code></pre> <p>But I'm able to telnet to port <code>8080</code> on <code>172.16.44.13</code> from <code>node01</code></p> <pre><code>[root@node1~]# telnet 172.16.44.13 8080 Trying 172.16.44.13... Connected to 172.16.44.13. Escape character is '^]'. ^] telnet&gt; </code></pre> <p>Any reason for this behavior? Why am I able to telnet but unable to get the web content? I have also tried different ports, but curl is working only for port 80.</p>
<p>I was able to get this fixed by disabling <code>selinux</code> on my nodes.</p>
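<p>For reference, assuming the nodes are CentOS/RHEL-based (the <code>[root@node1~]#</code> prompt suggests so), this is roughly what "disabling" SELinux looks like on every node — a sketch, not the only way to do it:</p> <pre><code># switch SELinux to permissive mode immediately (no reboot required)
sudo setenforce 0

# make the change survive reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
</code></pre>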
<p>I am working on an application on Kubernetes in GCP and I need really large SSD storage for it.</p> <p>So I created a <code>StorageClass</code> resource, a <code>PersistentVolumeClaim</code> that requests 500Gi of space and then a <code>Deployment</code> resource.</p> <p>StorageClass.yaml: </p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd </code></pre> <p>PVC.yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mongo-volume spec: storageClassName: faster accessModes: - ReadWriteOnce resources: requests: storage: 500Gi </code></pre> <p>Deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongo-deployment spec: replicas: 2 selector: matchLabels: app: mongo template: metadata: creationTimestamp: null labels: app: mongo spec: containers: - image: mongo name: mongo ports: - containerPort: 27017 volumeMounts: - mountPath: /data/db name: mongo-volume volumes: - name: mongo-volume persistentVolumeClaim: claimName: mongo-volume </code></pre> <p>When I applied the PVC, it got stuck in the <code>Pending...</code> state for hours. I found out experimentally that it binds correctly with a maximum of 200Gi of requested storage space.</p> <p>However, I can create several 200Gi PVCs. Is there a way to bind them to one path so they work as one big PVC in Deployment.yaml? Or maybe the 200Gi limit can be expanded?</p>
<p>I have just tested it in my own environment and it works perfectly, so the problem is in your quotas.</p> <p>To check this, go to: </p> <p>IAM &amp; admin -> Quotas -> Compute Engine API -> Persistent Disk SSD (GB) for your region, and look at the amount you have already used. </p> <p>I reproduced the situation where I ran out of quota, and the PVC got stuck in the Pending status just like yours. It happens because the 500Gi you request counts against that regional SSD quota, and there was not enough of it left. </p>
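<p>You can also check the regional quota from the command line — a quick sketch (the region name is only an example):</p> <pre><code># the quota list includes SSD_TOTAL_GB (Persistent Disk SSD) with its limit and current usage
gcloud compute regions describe us-central1 | grep -B 1 -A 1 SSD
</code></pre>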
<p>I have a web deployment and a MongoDB statefulset. The web deployment connects to MongoDB, but once in a while an error occurs in MongoDB and it reboots and starts up again. The connection from the web deployment to MongoDB never gets re-established. Is there a way, if the MongoDB pod restarts, to restart the web pod as well?</p>
<p>Yes, you can use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness probe</a> on your application container that probes your Mongo Pod/StatefulSet. You can configure it so that it fails when it cannot <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP connect</a> to your Mongo Pod/StatefulSet, i.e. when Mongo crashes (maybe check every second).</p> <p>Keep in mind that with this approach you will always have to start your Mongo Pod/StatefulSet first.</p> <p>The sidecar function described in the other answer should work too, only it would take a bit more configuration.</p>
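<p>A minimal sketch of such a probe on the web container — the image name and the <code>mongodb</code> Service name/port are assumptions you would replace with your own:</p> <pre><code>containers:
- name: web
  image: my-web-image          # placeholder for your web image
  livenessProbe:
    tcpSocket:
      host: mongodb            # DNS name of the Mongo Service (assumed)
      port: 27017
    initialDelaySeconds: 10
    periodSeconds: 1           # check every second
    failureThreshold: 3        # restart the web container after 3 consecutive failures
</code></pre> <p>When the probe fails repeatedly, the kubelet restarts the web container, which re-establishes the connection to Mongo.</p>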
<p>I am trying to install Traefik on my DigitalOcean Kubernetes cluster using Helm.</p> <pre><code>$ helm install -f traefik.values.yaml stable/traefik </code></pre> <p>I own the hypothetical domain <code>example.org</code> and the DNS record is managed through Digital Ocean</p> <p>The <code>traefik.values.yaml</code> values file contains (you can view the <a href="https://github.com/helm/charts/blob/master/stable/traefik/values.yaml" rel="nofollow noreferrer">full list of options here</a>):</p> <pre><code>--- accessLogs: enabled: true dashboard: enabled: true domain: traefik.example.org debug: enabled: true ssl: enabled: true enforced: true acme: enabled: true logging: true staging: true email: &lt;redacted&gt; challengeType: "dns-01" dnsProvider: name: digitalocean digitalocean: DO_AUTH_TOKEN: "&lt;redacted&gt;" domains: enabled: true domainsList: - main: "traefik.example.org" rbac: enabled: true </code></pre> <p>But the service never creates an external IP address. When I check the logs, I see:</p> <pre><code>$ k logs messy-koala-traefik-584cc9f68b-d9p6h -f {"level":"info","msg":"Using TOML configuration file /config/traefik.toml","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"No tls.defaultCertificate given for https: using the first item in tls.certificates as a fallback.","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Traefik version v1.7.6 built on 2018-12-14_06:43:37AM","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Global configuration loaded {\"LifeCycle\":{\"RequestAcceptGraceTimeout\":0,\"GraceTimeOut\":10000000000},\"GraceTimeOut\":0,\"Debug\":true,\"CheckNewVersion\":true,\"SendAnonymousUsage\":false,\"AccessLogsFile\":\"\",\"AccessLog\":{\"format\":\"common\",\"fields\":{\"defaultMode\":\"keep\",\"headers\":{\"defaultMode\":\"keep\"}}},\"TraefikLogsFile\":\"\",\"TraefikLog\":{\"format\":\"json\"},\"Tracing\":null,\"LogLevel\":\"\",\"EntryPoints\":{\"http\":{\"Address\":\":80\",\"TLS\":null,\"Redirect\":{\"regex\":\"^http://(.*)\",\"replacement\":\"https://$1\"},\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":true,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}},\"https\":{\"Address\":\":443\",\"TLS\":{\"MinVersion\":\"\",\"CipherSuites\":null,\"Certificates\":[{\"CertFile\":\"/ssl/tls.crt\",\"KeyFile\":\"/ssl/tls.key\"}],\"ClientCAFiles\":null,\"ClientCA\":{\"Files\":null,\"Optional\":false},\"DefaultCertificate\":{\"CertFile\":\"/ssl/tls.crt\",\"KeyFile\":\"/ssl/tls.key\"},\"SniStrict\":false},\"Redirect\":null,\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":true,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}},\"traefik\":{\"Address\":\":8080\",\"TLS\":null,\"Redirect\":null,\"Auth\":null,\"WhitelistSourceRange\":null,\"WhiteList\":null,\"Compress\":false,\"ProxyProtocol\":null,\"ForwardedHeaders\":{\"Insecure\":true,\"TrustedIPs\":null}}},\"Cluster\":null,\"Constraints\":[],\"ACME\":{\"Email\":\"[email 
protected]\",\"Domains\":[{\"Main\":\"traefik.example.org\",\"SANs\":null}],\"Storage\":\"/acme/acme.json\",\"StorageFile\":\"\",\"OnDemand\":false,\"OnHostRule\":true,\"CAServer\":\"https://acme-staging-v02.api.letsencrypt.org/directory\",\"EntryPoint\":\"https\",\"KeyType\":\"\",\"DNSChallenge\":{\"Provider\":\"digitalocean\",\"DelayBeforeCheck\":0,\"Resolvers\":null,\"DisablePropagationCheck\":false},\"HTTPChallenge\":null,\"TLSChallenge\":null,\"DNSProvider\":\"\",\"DelayDontCheckDNS\":0,\"ACMELogging\":true,\"OverrideCertificates\":false,\"TLSConfig\":null},\"DefaultEntryPoints\":[\"http\",\"https\"],\"ProvidersThrottleDuration\":2000000000,\"MaxIdleConnsPerHost\":200,\"IdleTimeout\":0,\"InsecureSkipVerify\":false,\"RootCAs\":null,\"Retry\":null,\"HealthCheck\":{\"Interval\":30000000000},\"RespondingTimeouts\":null,\"ForwardingTimeouts\":null,\"AllowMinWeightZero\":false,\"KeepTrailingSlash\":false,\"Web\":null,\"Docker\":null,\"File\":null,\"Marathon\":null,\"Consul\":null,\"ConsulCatalog\":null,\"Etcd\":null,\"Zookeeper\":null,\"Boltdb\":null,\"Kubernetes\":{\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\",\"IngressEndpoint\":null},\"Mesos\":null,\"Eureka\":null,\"ECS\":null,\"Rancher\":null,\"DynamoDB\":null,\"ServiceFabric\":null,\"Rest\":null,\"API\":{\"EntryPoint\":\"traefik\",\"Dashboard\":true,\"Debug\":true,\"CurrentConfigurations\":null,\"Statistics\":null},\"Metrics\":null,\"Ping\":null,\"HostResolver\":null}","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Setting Acme Certificate store from Entrypoint: https","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Preparing server traefik \u0026{Address::8080 TLS:\u003cnil\u003e Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:false ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0002c3120} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Creating regex redirect http -\u003e ^http://(.*) -\u003e https://$1","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Preparing server http \u0026{Address::80 TLS:\u003cnil\u003e Redirect:0xc00019fdc0 Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:true ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0002c30c0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Preparing server https \u0026{Address::443 TLS:0xc000221170 Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:true ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0002c30e0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting provider configuration.ProviderAggregator {}","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting server 
on :8080","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting server on :80","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting server on :443","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting provider *kubernetes.Provider {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Trace\":false,\"TemplateVersion\":0,\"DebugLogGeneratedTemplate\":false,\"Endpoint\":\"\",\"Token\":\"\",\"CertAuthFilePath\":\"\",\"DisablePassHostHeaders\":false,\"EnablePassTLSCert\":false,\"Namespaces\":null,\"LabelSelector\":\"\",\"IngressClass\":\"\",\"IngressEndpoint\":null}","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Starting provider *acme.Provider {\"Email\":\"[email protected]\",\"ACMELogging\":true,\"CAServer\":\"https://acme-staging-v02.api.letsencrypt.org/directory\",\"Storage\":\"/acme/acme.json\",\"EntryPoint\":\"https\",\"KeyType\":\"\",\"OnHostRule\":true,\"OnDemand\":false,\"DNSChallenge\":{\"Provider\":\"digitalocean\",\"DelayBeforeCheck\":0,\"Resolvers\":null,\"DisablePropagationCheck\":false},\"HTTPChallenge\":null,\"TLSChallenge\":null,\"Domains\":[{\"Main\":\"traefik.example.org\",\"SANs\":null}],\"Store\":{}}","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Testing certificate renew...","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Using Ingress label selector: \"\"","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"ingress label selector is: \"\"","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Creating in-cluster Provider client","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Configuration received from provider ACME: {}","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Looking for provided certificate(s) to validate [\"traefik.example.org\"]...","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Domains [\"traefik.example.org\"] need ACME certificates generation for domains \"traefik.example.org\".","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Loading ACME certificates [traefik.example.org]...","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"The key type is empty. 
Use default key type 4096.","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Server configuration reloaded on :80","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Server configuration reloaded on :443","time":"2019-01-15T16:25:20Z"} {"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-01-15T16:25:20Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1beta1.Ingress","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Configuration received from provider kubernetes: {\"backends\":{\"traefik.example.org\":{\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"traefik.example.org\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"traefik.example.org\",\"routes\":{\"traefik.example.org\":{\"rule\":\"Host:traefik.example.org\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint http","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating load-balancer wrr","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint https","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating load-balancer wrr","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-01-15T16:25:21Z"} {"level":"info","msg":"Server configuration reloaded on :443","time":"2019-01-15T16:25:21Z"} {"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-01-15T16:25:21Z"} {"level":"info","msg":"Server configuration reloaded on :80","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Try to challenge certificate for domain [traefik.example.org] founded in Host rule","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Looking for provided certificate(s) to validate [\"traefik.example.org\"]...","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"No ACME certificate generation required for domains [\"traefik.example.org\"].","time":"2019-01-15T16:25:21Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes 
event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-01-15T16:25:22Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:23Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:23Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:23Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:23Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:25Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:25Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:25Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:25Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:27Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:27Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:27Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:27Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:29Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:29Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:29Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:29Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:31Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:31Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:31Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:31Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:33Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:33Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:33Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:33Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Skipping Kubernetes event kind 
*v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Configuration received from provider kubernetes: {\"backends\":{\"traefik.example.org\":{\"servers\":{\"messy-koala-traefik-584cc9f68b-d9p6h\":{\"url\":\"http://10.244.94.3:8080\",\"weight\":1}},\"loadBalancer\":{\"method\":\"wrr\"}}},\"frontends\":{\"traefik.example.org\":{\"entryPoints\":[\"http\",\"https\"],\"backend\":\"traefik.example.org\",\"routes\":{\"traefik.example.org\":{\"rule\":\"Host:traefik.example.org\"}},\"passHostHeader\":true,\"priority\":0,\"basicAuth\":null}}}","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint http","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating load-balancer wrr","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating server messy-koala-traefik-584cc9f68b-d9p6h at http://10.244.94.3:8080 with weight 1","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint https","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating load-balancer wrr","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating server messy-koala-traefik-584cc9f68b-d9p6h at http://10.244.94.3:8080 with weight 1","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-01-15T16:25:35Z"} {"level":"info","msg":"Server configuration reloaded on :80","time":"2019-01-15T16:25:35Z"} {"level":"info","msg":"Server configuration reloaded on :443","time":"2019-01-15T16:25:35Z"} {"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Try to challenge certificate for domain [traefik.example.org] founded in Host rule","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Looking for provided certificate(s) to validate [\"traefik.example.org\"]...","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"No ACME certificate generation required for domains [\"traefik.example.org\"].","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:35Z"} {"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:37Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:37Z"} </code></pre> <p>After which the following logs are repeated forever:</p> <pre><code>{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:37Z"} {"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-01-15T16:25:37Z"} </code></pre> <p>Am I missing some config? 
I can't assign an A record to the LoadBalancer until it has an external IP address.</p> <p><strong>UPDATE</strong></p> <p>I cancelled and retried and the second time, it worked. I just didn't wait long enough. I was able to manually set an A record on Digital Ocean after it came up.</p> <p>When I went to the Traefik dashboard, however, I was warned about my certificate. Automating the DNS might bring the app up in time to coordinate with Let's Encrypt CA... haven't tried this yet.</p>
<p>Yes, the ACME config for Traefik expects the DNS record to already exist.</p> <p>You need to use something like <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">external-dns</a> to register a DNS record for your ingress automatically.</p>
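<p>As a sketch (the Service name is illustrative, and it assumes external-dns is deployed with the DigitalOcean provider and a DO API token), annotating the Traefik <code>LoadBalancer</code> Service is enough for external-dns to create the record once the external IP shows up:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: traefik                # whatever name the Helm chart gave the Service
  annotations:
    # external-dns watches Services of type LoadBalancer with this annotation
    # and creates/updates the matching A record in DigitalOcean DNS
    external-dns.alpha.kubernetes.io/hostname: traefik.example.org
spec:
  type: LoadBalancer
</code></pre>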
<p>I recently saw a <a href="https://github.com/apache/spark/pull/21092" rel="noreferrer">pull request</a> that was merged to the Apache/Spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment to the PR asking a question about how to use spark-on-k8s in a Python Jupyter notebook, and was told to ask my question here. </p> <p>My question is:</p> <p>Is there a way to create SparkContexts using PySpark's <code>SparkSession.Builder</code> with master set to <code>k8s://&lt;...&gt;:&lt;...&gt;</code>, and have the resulting jobs run on <code>spark-on-k8s</code>, instead of on <code>local</code>?</p> <p>E.g.:</p> <pre><code>from pyspark.sql import SparkSession spark = SparkSession.builder.master('k8s://https://kubernetes:443').getOrCreate() </code></pre> <p>I have an interactive Jupyter notebook running inside a Kubernetes pod, and I'm trying to use PySpark to create a <code>SparkContext</code> that runs on spark-on-k8s instead of resorting to using <code>local[*]</code> as <code>master</code>.</p> <p>Till now, I've been getting an error saying that:</p> <blockquote> <p>Error: Python applications are currently not supported for Kubernetes.</p> </blockquote> <p>whenever I set <code>master</code> to <code>k8s://&lt;...&gt;</code>.</p> <p>It seems like PySpark always runs in <code>client</code> mode, which doesn't seem to be supported for <code>spark-on-k8s</code> at the moment -- perhaps there's some workaround that I'm not aware of.</p> <p>Thanks in advance!</p>
<p>pyspark client mode works on Spark's latest version 2.4.0</p> <p>This is how I did it (in Jupyter lab):</p> <pre><code>import os os.environ['PYSPARK_PYTHON']="/usr/bin/python3.6" os.environ['PYSPARK_DRIVER_PYTHON']="/usr/bin/python3.6" from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession sparkConf = SparkConf() sparkConf.setMaster("k8s://https://localhost:6443") sparkConf.setAppName("KUBERNETES-IS-AWESOME") sparkConf.set("spark.kubernetes.container.image", "robot108/spark-py:latest") sparkConf.set("spark.kubernetes.namespace", "playground") spark = SparkSession.builder.config(conf=sparkConf).getOrCreate() sc = spark.sparkContext </code></pre> <p>Note: I am running kubernetes locally on Mac with Docker Desktop.</p>
<p>I'm trying to use cert-manager to issue a certificate via LetsEncrypt.</p> <p>I've followed through with the steps here <a href="http://docs.cert-manager.io/en/latest/getting-started/index.html" rel="noreferrer">http://docs.cert-manager.io/en/latest/getting-started/index.html</a></p> <p>However, my existing ingress is not being modified (I assume it needs to modify it due to adding a path for <code>.well-known/...</code>. </p> <p>Instead I see an ingress created for this with a name like: <code>cm-acme-http-solver-kgpz6</code>? Which is rather confusing?</p> <p>If I get the yaml for that ingress I see the following for rules:</p> <pre><code>spec: rules: - host: example.com http: paths: - backend: serviceName: cm-acme-http-solver-2dd97 servicePort: 8089 path: /.well-known/acme-challenge/2T2D_XK1-zIJJ9_f2ANlwR-AcNTm3-WenOExNpmUytY </code></pre> <p>How exactly is this meant to work? As the documentation seems rather sparse.</p>
<p>The record you are seeing is for the challenge. It needs to succeed to configure the cert. If you are using "example.com" as the domain then it will not succeed. To get this to work you'll need to configure a DNS record for a valid hostname so that LetsEncrypt can resolve the domain and complete the check. </p> <p>Usually you will not even see the challenge ingress resource. It usually runs the challenge and then removes itself as long as DNS and the hostname have been configured correctly. After it is removed the resource you created will get loaded into your ingress controller. </p> <p>There are a few ingress controllers that do not support multiple ingress resources per hostname. They will load one ingress resource and ignore the other, so this is sort of a workaround/fix to the issue.</p>
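<p>While the challenge is running you can watch its progress from the command line — for example (exact resource kinds vary a bit between cert-manager versions):</p> <pre><code># the temporary cm-acme-http-solver-* ingress should disappear once the challenge succeeds
kubectl get ingress

# the events on the Certificate explain why a challenge is still pending or failing
kubectl describe certificate &lt;certificate-name&gt;
</code></pre>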
<p>I want to scale up/down the number of machines to increase/decrease the number of nodes in my Kubernetes cluster. When I add one machine, I’m able to successfully register it with Kubernetes; therefore, a new node is created as expected. However, it is not clear to me how to smoothly shut down the machine later. A good workflow would be:</p> <ol> <li>Mark the node related to the machine that I am going to shut down as unschedulable;</li> <li>Start the pod(s) that is running in the node in other node(s);</li> <li>Gracefully delete the pod(s) that is running in the node;</li> <li>Delete the node.</li> </ol> <p>If I understood correctly, even <code>kubectl drain</code> (<a href="https://groups.google.com/forum/#!topic/google-containers/FxOYvJp82T0" rel="noreferrer">discussion</a>) doesn't do what I expect since it doesn’t start the pods before deleting them (it relies on a replication controller to start the pods afterwards which may cause downtime). Am I missing something?</p> <p>How should I properly shutdown a machine?</p>
<p>List the nodes and get the <code>&lt;node-name&gt;</code> you want to drain (or remove from the cluster) </p> <pre><code>kubectl get nodes </code></pre> <p>1) First drain the node</p> <pre><code>kubectl drain &lt;node-name&gt; </code></pre> <p>You might have to ignore daemonsets and local-data on the machine</p> <pre><code>kubectl drain &lt;node-name&gt; --ignore-daemonsets --delete-local-data </code></pre> <p>2) Edit the instance group for nodes (only if you are using kops)</p> <pre><code>kops edit ig nodes </code></pre> <p>Set the MIN and MAX size to one less than their current values, then just save the file (nothing extra to be done)</p> <p>You might still see some pods on the drained node that are related to daemonsets, like the networking plugin, fluentd for logs, kubedns/coredns, etc.</p> <p>3) Finally delete the node</p> <pre><code>kubectl delete node &lt;node-name&gt; </code></pre> <p>4) Commit the state for kops in S3 (only if you are using kops)</p> <pre><code>kops update cluster --yes </code></pre> <p>OR (if you are using kubeadm)</p> <p>If you are using kubeadm and would like to reset the machine to the state it was in before running <code>kubeadm join</code>, then run</p> <pre><code>kubeadm reset </code></pre>
<p>Is it possible to configure which storageclasses can be used by namespace?</p> <p>So for example I have a single cluster for production and development.</p> <p>I want to configure a set of storageclasses for development and a different set of storageclasses for production.</p> <p>I want to strictly configure that in development no one could use the storageclasses of production.</p> <p>Is this possible?</p> <p>I have only seen the option to use the resource quotas at namespace level, but it is not the same, with quotas I can configure the amount of disk that can be used in each storageclass, so if I create a new storageclass I will have to modify all the quotas in all the namespaces to add the constraints about the new storageclass.</p>
<p>A <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storage class</a> in Kubernetes is a cluster-wide resource, so you can't restrict its usage per namespace out of the box. What you can do, however, is write a custom controller akin to what Banzai did with their <a href="https://banzaicloud.com/blog/pvc-operator/" rel="noreferrer">PVC Operator</a> or Raffaele Spazzoli's <a href="https://github.com/raffaelespazzoli/namespace-configuration-controller" rel="noreferrer">Namespace Configuration Controller</a>.</p>
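<p>In practice, what such a controller ends up doing is stamping a per-storage-class <code>ResourceQuota</code> into each namespace. If you only have a handful of namespaces you can also do that by hand — a sketch, assuming a production class named <code>prod-ssd</code> that should be blocked in the <code>development</code> namespace:</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: block-prod-storage
  namespace: development
spec:
  hard:
    # forbid any PVCs and any storage requests against the "prod-ssd" class in this namespace
    prod-ssd.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
    prod-ssd.storageclass.storage.k8s.io/requests.storage: "0"
</code></pre> <p>This is essentially the quota approach you already mentioned — a controller just automates keeping those quotas in sync when new storage classes appear.</p>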
<p>Currently we have our kubernetes cluster master set to zonal, and require it to be regional. My idea is to convert the existing cluster and all workloads/nodes/resources to some infrastructure-as-code - preferably terraform (but could be as simple as a set of <code>gcloud</code> commands).</p> <p>I know with GCP I can generate raw command lines for commands I'm about to run, but I don't know how (or if I even can) to convert existing infrastructure to the same.</p> <p>Based on my research, it looks like it isn't exactly possible to do what I'm trying to do [in a straightforward fashion]. So I'm looking for any advice, even if it's just to read some other documentation (for a tool I'm not familiar with maybe).</p> <p>TL;DR: I'm looking to take my existing Google Cloud Platform Kubernetes cluster and rebuild it in order to change the location type from zonal to regional - I don't actually care how this is done. What is a currently accepted best-practice way of doing this? If there isn't one, what is a quick and dirty way of doing this?</p> <p>If you require me to specify further, I will - I have intentionally left out linking to specific research I've done.</p>
<p>Creating a Kubernetes cluster with terraform is very straightforward because ultimately making a Kubernetes cluster in GKE is straightforward, you'd just use the <code>google_container_cluster</code> and <code>google_container_node_pool</code> resources, like so:</p> <pre><code>resource "google_container_cluster" "primary" { name = "${var.name}" region = "${var.region}" project = "${var.project_id}" min_master_version = "${var.version}" addons_config { kubernetes_dashboard { disabled = true } } maintenance_policy { daily_maintenance_window { start_time = "03:00" } } lifecycle { ignore_changes = ["node_pool"] } node_pool { name = "default-pool" } } resource "google_container_node_pool" "default" { name = "default" project = "${var.project_id}" region = "${var.region}" cluster = "${google_container_cluster.primary.name}" autoscaling { min_node_count = "${var.node_pool_min_size}" max_node_count = "${var.node_pool_max_size}" } management { auto_repair = "${var.node_auto_repair}" auto_upgrade = "${var.node_auto_upgrade}" } lifecycle { ignore_changes = ["initial_node_count"] } node_config { machine_type = "${var.node_machine_type}" oauth_scopes = [ "https://www.googleapis.com/auth/cloud-platform", ] } depends_on = ["google_container_cluster.primary"] } </code></pre> <p>For a more fully featured experience, there are terraform modules available like <a href="https://github.com/google-terraform-modules/terraform-google-kubernetes-engine" rel="nofollow noreferrer">this one</a></p> <p>Converting an existing cluster is considerably more fraught. If you want to use <code>terraform import</code></p> <pre><code>terraform import google_container_cluster.mycluster us-east1-a/my-cluster </code></pre> <p>However, in your comment , you mentioned wanting to convert a zonal cluster to a regional cluster. Unfortunately, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters" rel="nofollow noreferrer">that's not possible</a> at this time</p> <blockquote> <p>You decide whether your cluster is zonal or regional when you create it. You cannot convert an existing zonal cluster to regional, or vice versa.</p> </blockquote> <p>Your best bet, in my opinion, is to:</p> <ul> <li>Create a regional cluster with terraform, giving the cluster a new name</li> <li>Backup your existing zonal cluster, either using an etcd backup, or a more sophisticated backup using <a href="https://github.com/heptio/ark" rel="nofollow noreferrer">heptio-ark</a></li> <li>Restore that backup to your regional cluster</li> </ul>
<p>I'm looking to do 3 legged oauth on istio+kubernetes. I did not find a way to route unauthenticated requests to an authentication proxy service which performs the authentication and route the traffic back to the target service. I've done this with <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">nginx</a> kubernetes ingress controller using the following annotations -</p> <pre> nginx.ingress.kubernetes.io/auth-url //Auth url that requests will be forwarded to nginx.ingress.kubernetes.io/auth-signin //Sign in page the request is routed to when the above returns 401 </pre> <p>I did not find equivalent ones in Istio. I've checked the <a href="https://istio.io/docs/reference/config/istio.authentication.v1alpha1/" rel="nofollow noreferrer">documentation</a> and it says it supports custom auth in addition to jwt, however I did not find any such support.</p>
<p>Answering my own question. At this point I've figured out the only way to do this is via an <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#EnvoyFilter" rel="nofollow noreferrer">EnvoyFilter</a> on Istio. This allows us to write a custom <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/lua_filter" rel="nofollow noreferrer">lua filter</a> to route unauthenticated requests to an oauth proxy which can perform the 3-legged OAuth flow.</p> <p>The request control flow is</p> <blockquote> <p>client --> ingress gateway --> istio-proxy sidecar --> envoy filter --> target</p> </blockquote> <p>The filter is capable of making HTTP calls and manipulating headers, which fits this requirement.</p> <p>Edit: Details about it are <a href="https://medium.com/@suman_ganta/openid-authentication-with-istio-a32838adb492" rel="nofollow noreferrer">here</a>.</p>
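<p>For illustration only, here is a rough sketch of what such a filter can look like with the older <code>v1alpha3</code> EnvoyFilter syntax (field names changed between Istio releases, and the workload label and redirect URL below are made-up placeholders, not part of the original answer):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: oauth-redirect
spec:
  workloadLabels:
    app: my-app                      # hypothetical target workload
  filters:
  - listenerMatch:
      listenerType: SIDECAR_INBOUND
      listenerProtocol: HTTP
    filterType: HTTP
    filterName: envoy.lua
    filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          -- no credentials present: send the caller to the oauth proxy instead
          if request_handle:headers():get("authorization") == nil then
            request_handle:respond(
              {[":status"] = "302", ["location"] = "https://oauth-proxy.example.com/login"},
              "")
          end
        end
</code></pre>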
<p>I'm setting a deploy of laravel to kubernetes and want to have redis.</p> <p>Actually i have a Dockerfile for nginx and another one for php-fpm-alpine and all the kubernetes yaml files(the ingress with tls, deployments and services)</p> <h3>I expected to add the php redis to the php-fpm container, any ideas?</h3> <p>here the actual php /Dockerfile</p> <pre><code># # PHP Dependencies # FROM composer:1 as vendor COPY database/ database/ COPY composer.json composer.json COPY composer.lock composer.lock RUN composer install \ --ignore-platform-reqs \ --no-interaction \ --no-plugins \ --no-scripts \ --prefer-dist # # Application # FROM php:fpm-alpine RUN apk add --no-cache --virtual .build-deps \ $PHPIZE_DEPS \ curl \ libtool \ libxml2-dev \ &amp;&amp; apk add --no-cache \ curl \ git \ mysql-client \ &amp;&amp; docker-php-ext-install \ mbstring \ pdo \ pdo_mysql \ tokenizer \ bcmath \ opcache \ xml \ &amp;&amp; apk del -f .build-deps \ &amp;&amp; docker-php-ext-enable pdo_mysql WORKDIR /var/www/html COPY . /var/www/html COPY --from=vendor /app/vendor/ /var/www/html/vendor/ COPY .env.example /var/www/html/.env RUN chown -R root:www-data . EXPOSE 9000 CMD ["php-fpm"] </code></pre> <p>and the nginx /Dockerfile</p> <pre><code>FROM nginx:stable-alpine ADD default.conf /etc/nginx/conf.d/default.conf COPY public /var/www/html/public WORKDIR /var/www/html/public </code></pre> <p>finally the nginx default /conf.d</p> <pre><code>server { listen 80; index index.php index.html; root /var/www/html/public; client_max_body_size 32M; location / { try_files $uri /index.php?$args; } location ~ \.php$ { fastcgi_pass php:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } </code></pre>
<p>Since you are using official PHP docker image, you can install php-redis extension via PECL:</p> <pre><code>RUN pecl install redis \ &amp;&amp; docker-php-ext-enable redis </code></pre> <p>Simple as that! </p> <p>You can learn more about <a href="https://github.com/docker-library/docs/tree/master/php#how-to-install-more-php-extensions" rel="noreferrer">installing PHP extensions</a> from official PHP docker docs (in case of <code>php-redis</code>, <a href="https://github.com/docker-library/docs/tree/master/php#pecl-extensions" rel="noreferrer">installing PECL extensions</a>).</p> <p>So in your case, <code>RUN</code> command can look something like this:</p> <pre><code># Your PHP Dockerfile RUN apk add --no-cache --virtual .build-deps \ $PHPIZE_DEPS \ curl \ libtool \ libxml2-dev \ &amp;&amp; apk add --no-cache \ curl \ git \ mysql-client \ &amp;&amp; pecl install redis \ # install redis extension via PECL &amp;&amp; docker-php-ext-install \ mbstring \ pdo \ pdo_mysql \ tokenizer \ bcmath \ opcache \ xml \ &amp;&amp; apk del -f .build-deps \ &amp;&amp; docker-php-ext-enable \ pdo_mysql \ redis # don't forget to enable redis extension </code></pre>
<p>I'm trying to create a migration but it's failing with the below error:</p> <pre><code>Error from server (BadRequest): error when creating "kubernetes/migration-job.yaml": Job in version "v1" cannot be handled as a Job: v1.Job: Spec: v1.JobSpec: </code></pre> <p>What is the cause of this error?</p>
<p>The issue was to do with one of the yaml fields:</p> <pre><code>env: - name: DB_HOST value: "mysql" - name: DB_PORT value: 3306 </code></pre> <p><code>3306</code> should be a string (<code>"3306"</code>) instead... </p>
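<p>So the fixed block from the manifest looks like this:</p> <pre><code>env:
  - name: DB_HOST
    value: "mysql"
  - name: DB_PORT
    value: "3306"   # quoted, so the env var value is a string
</code></pre>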
<p>I'm trying to mount a NFS into my Kubernetes pod. </p> <p>I'm using Minikube on my localmachine &amp; used to have a hostPath volume but it's performance was pretty bad (page load takes about 30 secs or longer)</p> <p>I've setup my NFS server on my Mac like this:</p> <pre><code>echo "/Users/my-name/share-folder -alldirs -mapall="$(id -u)":"$(id -g)" $(minikube ip)" | sudo tee -a /etc/exports &amp;&amp; sudo nfsd restart </code></pre> <p>and validated it with: </p> <pre><code>showmount -e </code></pre> <p>This shows: </p> <pre><code>Exports list on localhost: /Users/my-name/share-folder 192.168.xx.x </code></pre> <p>I've setup / applied a persistentVolume as followed: </p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: default-sources-volume spec: capacity: storage: 10Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: standard nfs: server: 192.168.xx.x # Minikube gateway to host path: '/Users/my-name/share-folder' </code></pre> <p>I've setup / applied a persistentVolumeClaim as followed: </p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: sources-volume-claim namespace: default spec: storageClassName: standard accessModes: - ReadWriteMany resources: requests: storage: 10Gi </code></pre> <p>I can see the persistentVolume &amp; persistentVolumeClaim in my Kubernetes dashboard. It even says that the persistentVolumeClaim is bound, however ... </p> <p>When I applied my updated deployment.yaml file through the kubectl command, I keep seeing the error: </p> <p>MountVolume.SetUp failed for volume "default-sources-volume" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f4b26fe1-9657-11e8-8f3d-e61688dc52f9/volumes/kubernetes.io~nfs/default-sources-volume --scope -- mount -t nfs 192.168.xx.x:/Users/my-name/share-folder /var/lib/kubelet/pods/f4b26fe1-9657-11e8-8f3d-e61688dc52f9/volumes/kubernetes.io~nfs/default-sources-volume Output: Running scope as unit: run-r1b76b36ffcb1409284e308b11c729744.scope mount.nfs: access denied by server while mounting 192.168.xx.x:/Users/my-name/share-folder</p>
<p>please try this in your '/etc/exports'<br> <code>/Users/my-name/share-folder *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)</code></p> <p>and<br> <code>sudo exportfs -a</code></p> <p>I find that <code>insecure</code> is the key.</p>
<p>I have defined Kafka and Kafka schema registry configuration using Kubernetes deployments and services. I used <a href="https://docs.confluent.io/current/installation/docker/docs/config-reference.html" rel="nofollow noreferrer">this link</a> as a reference for the environment variables set up. However, when I try to run Kafka with registry I see that the schema registry pods crashes with an error message in the logs:</p> <pre><code>[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. [main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list. java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. </code></pre> <p>What could be the reason of this error?</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-service spec: ports: - name: client port: 9092 selector: app: kafka server-id: "1" --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-1 spec: selector: matchLabels: app: kafka server-id: "1" replicas: 1 template: metadata: labels: app: kafka server-id: "1" spec: volumes: - name: kafka-data emptyDir: {} containers: - name: server image: confluentinc/cp-kafka:5.1.0 env: - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_BROKER_ID value: "2" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" ports: - containerPort: 9092 volumeMounts: - mountPath: /var/lib/kafka name: kafka-data --- apiVersion: v1 kind: Service metadata: name: schema-registry-service spec: ports: - name: client port: 8081 selector: app: kafka-schema-registry --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-schema-registry spec: replicas: 1 selector: matchLabels: app: kafka-schema-registry template: metadata: labels: app: kafka-schema-registry spec: containers: - name: kafka-schema-registry image: confluentinc/cp-schema-registry:5.1.0 env: - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL value: zookeeper:2181 - name: SCHEMA_REGISTRY_HOST_NAME value: localhost - name: SCHEMA_REGISTRY_LISTENERS value: "http://0.0.0.0:8081" - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS value: PLAINTEXT://localhost:9092 ports: - containerPort: 8081 </code></pre>
<p>You've configured Schema Registry to look for the Kafka broker at <code>localhost:9092</code>, and you've also configured the Kafka broker to advertise its address as <code>localhost:9092</code>. Inside the Schema Registry pod, <code>localhost</code> resolves to the registry's own pod rather than to the broker, so the connection fails.</p> <p>I'm not familiar with Kubernetes specifically, but <a href="https://rmoff.net/2018/08/02/kafka-listeners-explained/" rel="nofollow noreferrer">this article</a> describes how to handle networking config in principle when using containers, IaaS, etc. </p>
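<p>A sketch of the fix, reusing the <code>kafka-service</code> Service name from your own manifests (assuming both Deployments run in the same namespace):</p> <pre><code># Kafka broker: advertise an address that other pods can resolve
- name: KAFKA_ADVERTISED_LISTENERS
  value: PLAINTEXT://kafka-service:9092

# Schema Registry: bootstrap against that same Service
- name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
  value: PLAINTEXT://kafka-service:9092
</code></pre>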
<p>Sometimes we need to drain nodes in Kubernetes. When I manually set up a k8s cluster, I can drain the specific node then terminate that machine. While in EKS, nodes are under auto scaling group, which means I can't terminate a specific instance(node). If I manually terminate a instance, another instance(node) will be automatically added into eks cluster.</p> <p>So is there any suggested method to drain a node in EKS?</p>
<p>These steps should work:</p> <ol> <li><code>kubectl get nodes</code></li> <li><code>kubectl cordon &lt;node name&gt;</code></li> <li><code>kubectl drain &lt;node name&gt; --ignore-daemonsets</code> or<br /> <code>kubectl drain &lt;node name&gt; --ignore-daemonsets --delete-emptydir-data</code></li> <li><code>aws autoscaling terminate-instance-in-auto-scaling-group --instance-id &lt;instance-id&gt; --should-decrement-desired-capacity</code></li> </ol> <p>For AWS autoscaling group, if you have nodes span out to multiple zones, consider delete nodes in each zones instead of all nodes from a single zone.</p> <p>After the execution of the above commands, check the autoscaling group's desired number. It should decrease automatically. If you are using Terraform or another automation framework, don't forget to update your autoscaling group config in your infrastructure script.</p>
<p>I have a Kubernetes cron job that creates a zip file which takes about 1 hour. After it's completion I want to upload this zip file to an AWS s3 bucket.</p> <p>How do I tell the cron job to only do the s3 command after the zip is created?</p> <p>Should the s3 command be within the same cron job?</p> <p>Currently my YAML looks like this:</p> <pre><code>kind: CronJob metadata: name: create-zip-upload spec: schedule: "27 5 * * *" # everyday at 05:27 AM concurrencyPolicy: Forbid jobTemplate: spec: template: spec: containers: - name: mycontainer image: 123456789.my.region.amazonaws.com/mycompany/myproject/rest:latest args: - /usr/bin/python3 - -m - scripts.createzip </code></pre>
<p>Kubernetes doesn't have a concept of a relationship between resources. There isn't an official or clean way to have something occurring in one resource cause an effect on another resource.</p> <p>Because of this, the best solution is to just put the s3 cmd into the same cronjob.</p> <p>There's two ways to do this:</p> <ol> <li>Add the s3 cmd logic to your existing container.</li> <li>Create a new container in the same cronjob that watches for the file and then runs the s3 cmd.</li> </ol>
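<p>If you go with option 1, the container's command can simply chain the two steps — a sketch (the zip path, bucket name and the presence of the AWS CLI plus credentials in the image are assumptions):</p> <pre><code>#!/bin/sh
set -e                                              # abort the Job if the zip step fails

python3 -m scripts.createzip                        # the existing step from the CronJob args
aws s3 cp /tmp/archive.zip s3://my-bucket/backups/  # hypothetical path and bucket
</code></pre>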
<p>We are writing a Helm chart and providing a configuration file using a configmap. </p> <p>For some reason our app uses a JSON-format configuration file. Currently we provide the configuration file in the Helm chart's values.yaml like this.</p> <pre><code>conffiles: app_conf.json: ...(content in YAML)... </code></pre> <p>To make it easy to modify, we use YAML format in values.yaml, and in the configmap's template we do the conversion using "toJson":</p> <pre><code>data: {{- range $key, $value := .Values.conffiles }} {{ $key }}: | {{ toJson $value | default "{}" | indent 4 }} {{- end -}} {{- end -}} </code></pre> <p>So in values.yaml it's YAML, in the configmap it becomes JSON, and in the container it is stored as a JSON file.</p> <p>Our question is:</p> <ul> <li>Is there a way to convert YAML to JSON when saving the files into the container? That is, we hope the configuration content could be 1) YAML in values.yaml, 2) YAML in the configmap, 3) a JSON file in the container. </li> </ul> <p>Thanks in advance.</p>
<p>I don't think there is anything out of the box but you do have options, depending upon your motivation.</p> <p>Your app is looking for json and the configmap is mounted for your app to read that json. Your helm deployment isn't going to modify the container itself. But you could change your app to read yaml instead of json.</p> <p>If you want to be able to easily see the yaml and json versions you could create two configmaps - one containing yaml and one with json. </p> <p>Or if you're just looking to be able to see what the yaml was that was used to create the configmap then you could use <a href="https://github.com/helm/helm/blob/master/docs/helm/helm_get_values.md" rel="nofollow noreferrer"><code>helm get values &lt;release_name&gt;</code></a> to look at the values that were used to create that release (which will include the content of the <code>conffiles</code> entry).</p>
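<p>If you go for the two-configmaps (or two-keys) route, the same values can simply be rendered twice with Helm's built-in <code>toYaml</code> and <code>toJson</code> — a sketch, with illustrative key names:</p> <pre><code>data:
{{- range $key, $value := .Values.conffiles }}
  # human-readable copy, handy when inspecting the ConfigMap
  {{ $key }}.yaml: |
{{ toYaml $value | indent 4 }}
  # the copy the application actually mounts
  {{ $key }}.json: |
{{ toJson $value | indent 4 }}
{{- end }}
</code></pre>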
<p>Approach 1 (kubernetes volume is attached to google persistent disk, kubernetes volume claim is attached to kubernetes volume)</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: volume-1 spec: storageClassName: "" capacity: storage: 50Gi accessModes: - ReadWriteOnce gcePersistentDisk: pdName: pd-test-1 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pv-claim-1 spec: storageClassName: "" volumeName: volume-1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>Approach 2 (Kubernetes volume claim is directly attached to google persistent disk)</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pv-claim-1 spec: volumeName: pd-test-1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>Approach 3 (pod directly uses google persistent disk <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer">docs</a>)</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: k8s.gcr.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd name: test-volume volumes: - name: test-volume # This GCE PD must already exist. gcePersistentDisk: pdName: my-data-disk fsType: ext4 </code></pre> <p>I'm not sure which method should be used in which scenarios. <br>What is the difference between three approaches and which one should I use if I want to store data on google persistent disks ? </p>
<p>In order of best to worst approach:</p> <ul> <li>Best: Approach 2 - Dynamic Volume Provisioning</li> <li>Ok: Approach 1 - Pre-provisioned volumes via <code>PersistentVolumeClaim</code></li> <li>Worst: Approach 3 - Direct reference of disk via pod <em>without</em> <code>PersistentVolumeClaim</code></li> </ul> <p>Approach 3 is the worst because you lose portability. If you move your pod to a Kubernetes cluster where GCE PD isn't available you will have to modify your pod with whatever type of storage is available on the new cluster. You should not use this approach.</p> <p>With both Approach 1 &amp; 2 your <code>Pod</code> and <code>PersistentVolumeClaim</code> objects remain portable and do not contain cluster specific details in them.</p> <p>Use Approach 1 (manually creating both <code>PersistentVolumeClaim</code> and <code>PersistentVolume</code>) if you already have an existing disk that you want to use with Kubernetes. First you create a <code>PersistentVolume</code> object to represent the disk in Kubernetes, then you create a <code>PersistentVolumeClaim</code> to bind to it and act as a pointer that you can use in your Pod. You have to be careful to make sure the objects point to each other, see <a href="https://stackoverflow.com/a/34323691/5443528">https://stackoverflow.com/a/34323691/5443528</a> for details on how to do this. This is the approach you should use for <em>existing</em> GCE PDs.</p> <p>Approach 2 (manually create a <code>PersistentVolumeClaim</code> and let system automatically create a <code>PersistentVolume</code>). If your storage system supports Kubernetes dynamic volume provisioning, you just create a <code>PersistentVolumeClaim</code> object and your storage system will automatically create a new volume. Kubernetes on GCE and GKE has a default StorageClass installed for GCE PD, so this should work out of the box, and this is the approach you should use to create and use <em>new</em> GCE PDs.</p> <p>See <a href="https://www.youtube.com/watch?v=uSxlgK1bCuA" rel="noreferrer">https://www.youtube.com/watch?v=uSxlgK1bCuA</a> for details on all of this.</p>
<p>I did</p> <pre><code>helm install ibm-charts/ibm-istio --name=istio --namespace istio-system --set grafana.enabled=true,kiali.enabled=true,tracing.enabled=true </code></pre> <p>I have a bunch of services e.g. <code>kubectl get svc</code> and was expecting to see some information about them in Jaegar dropdown, but I only see Istio-related ones. My services properly show up in tools like Grafana, etc.</p> <p>Is there something extra I need to configure to see information about them in Jaegar?</p> <p><a href="https://i.stack.imgur.com/0Z65m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Z65m.png" alt="enter image description here"></a></p>
<p>Below is a Python snippet that can help you with traces. As @rinormaloku said, you need to forward the trace headers (listed in the snippet below) to get spans for your own services.</p> <pre><code>import sys from flask import Flask, abort, request import requests app = Flask(__name__) def getForwardHeaders(request): headers = {} incoming_headers = [ 'x-request-id', 'x-b3-traceid', 'x-b3-spanid', 'x-b3-parentspanid', 'x-b3-sampled', 'x-b3-flags', 'x-ot-span-context' ] for ihdr in incoming_headers: val = request.headers.get(ihdr) if val is not None: headers[ihdr] = val print("incoming: "+ihdr+":"+val, file=sys.stderr) return headers @app.route("/") def f1(): tracking_headers = getForwardHeaders(request) return requests.get('http://paytm-svc', headers=tracking_headers).content </code></pre> <p>The above snippet works with Istio on Kubernetes.</p> <p>If you are still getting any other errors, let me know.</p>
<p>I want to understand whether it's possible to create a ConfigMap with a blank or empty key. The value isn't empty, though.</p>
<p>No, it is not possible. While the YAML syntax allows for an empty string to be specified as the key, Kubernetes validation will not accept it:</p> <pre><code>$ cat test-cm.yaml apiVersion: v1 data: key1: value1 key2: value2 "": value3 kind: ConfigMap metadata: name: test-cm $ kubectl apply -f test-cm.yaml The ConfigMap "test-cm" is invalid: data[]: Invalid value: "": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+') $ </code></pre> <p>The validation regexp printed in the error message <code>[-._a-zA-Z0-9]+</code> clearly states that the key length may not be zero.</p> <p>Using the null key is also unacceptable to Kubernetes:</p> <pre><code>$ cat test-cm.yaml apiVersion: v1 data: key1: value1 key2: value2 ? : value3 kind: ConfigMap metadata: name: test-cm $ kubectl apply -f test-cm.yaml error: error converting YAML to JSON: Unsupported map key of type: %!s(&lt;nil&gt;), key: &lt;nil&gt;, value: "value3" $ </code></pre>
<p>How can I check and/or wait until an apiVersion and kind exist before trying to apply a resource that uses them?</p> <p><strong>Example:</strong></p> <p><em>Install cilium and a network policy using cilium</em></p> <pre><code>kubectl apply -f cilium.yaml kubectl apply -f policy.yaml # fails if run just after installing cilium, since cilium.io/v2 and CiliumNetworkPolicy don't exist yet </code></pre> <p><a href="https://github.com/cilium/cilium/blob/master/examples/kubernetes/1.13/cilium.yaml" rel="nofollow noreferrer"><em>cilium.yaml</em></a></p> <p><em>policy.yaml</em></p> <pre><code>apiVersion: cilium.io/v2 description: example policy kind: CiliumNetworkPolicy ... </code></pre> <p><strong>EDIT:</strong> <em>(solved with the following script)</em></p> <pre><code>#! /bin/bash function check_api { local try=0 local retries=30 until (kubectl "api-$1s" | grep -P "\b$2\b") &amp;&gt;/dev/null; do (( ++try &gt; retries )) &amp;&amp; exit 1 echo "$2 not found. Retry $try/$retries" sleep 3 done } kubectl apply -f cilium.yaml check_api version cilium.io/v2 check_api resource CiliumNetworkPolicy kubectl apply -f policy.yaml </code></pre>
<p>You can use the following to check for supported versions and kinds, that is, check what the API server you're talking to supports:</p> <pre><code>$ kubectl api-versions admissionregistration.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 ... storage.k8s.io/v1 storage.k8s.io/v1beta1 v1 </code></pre> <p>There's also <code>kubectl api-resources</code> that provides you with a tabular overview of the kinds, shortnames, and if a resource is namespaced or not.</p>
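<p>If you need a script to block until a CRD-backed kind becomes usable (as in the question's edit), a rough sketch is below. The CRD name <code>ciliumnetworkpolicies.cilium.io</code> is an assumption based on the Cilium manifest and may differ between versions, and <code>kubectl wait</code> on CRDs requires a reasonably recent kubectl.</p>

<pre><code># check what the API server currently knows about
kubectl api-versions | grep -i cilium
kubectl api-resources | grep -i cilium

# wait until the CRD is fully registered, then apply the policy
kubectl wait --for=condition=established --timeout=60s \
  crd/ciliumnetworkpolicies.cilium.io

kubectl apply -f policy.yaml
</code></pre>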
<p>I am trying to deploy Snipe-IT on a k8s cluster.</p> <p>I have MySQL running on Kubernetes.</p> <p>I want to deploy the Snipe-IT application on Kubernetes.</p> <p>My YAML file is:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: snipeit labels: app: snipeit spec: ports: - port: 80 selector: app: snipeit tier: frontend type: LoadBalancer --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: snipeit-pv-claim labels: app: snipeit spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: snipeit labels: app: snipeit spec: selector: matchLabels: app: snipeit tier: frontend strategy: type: Recreate template: metadata: labels: app: snipeit tier: frontend spec: containers: - image: snipe/snipe-it name: snipeit env: - name: DB_CONNECTION value: mysql - name: DB_HOST value: mysql - name: DB_USERNAME value: root - name: DB_DATABASE value: snipeit - name: APP_URL value: url - name: DB_PASSWORD value: password ports: - containerPort: 80 name: snipeit volumeMounts: - name: snipeit-persistent-storage mountPath: /var/www/html volumes: - name: snipeit-persistent-storage persistentVolumeClaim: claimName: snipeit-pv-claim </code></pre> <p>This is not working.</p> <p>The image I am using is from Docker Hub:</p> <pre><code>https://hub.docker.com/r/snipe/snipe-it </code></pre> <p>GitHub snipe-it: <code>https://github.com/snipe/snipe-it</code></p> <p>The container starts running, but when I log inside the container and check /var/www/html there is no content there.</p>
<pre><code>apiVersion: v1 kind: ConfigMap metadata: name: snipe-it-config data: # Mysql Parameters MYSQL_PORT_3306_TCP_ADDR: "address" MYSQL_PORT_3306_TCP_PORT: "3306" MYSQL_DATABASE: "snipeit" MYSQL_USER: "user" MYSQL_PASSWORD: "pass" # Email Parameters # - the hostname/IP address of your mailserver MAIL_PORT_587_TCP_ADDR: "&lt;smtp-host&gt;" #the port for the mailserver (probably 587, could be another) MAIL_PORT_587_TCP_PORT: "587" # the default from address, and from name for emails MAIL_ENV_FROM_ADDR: "[email protected]" MAIL_ENV_FROM_NAME: "Snipe-IT" # - pick 'tls' for SMTP-over-SSL, 'tcp' for unencrypted MAIL_ENV_ENCRYPTION: "tls" # SMTP username and password MAIL_ENV_USERNAME: "&lt;smtp-username&gt;" MAIL_ENV_PASSWORD: "&lt;smtp-password&gt;" # Snipe-IT Settings APP_ENV: "production" APP_DEBUG: "false" APP_KEY: "key" APP_URL: "http://127.0.0.1:80" APP_TIMEZONE: "Asia/Kolkata" APP_LOCALE: "en" --- apiVersion: v1 kind: Service metadata: name: snipeit labels: app: snipeit spec: ports: - port: 80 selector: app: snipeit tier: frontend type: LoadBalancer --- apiVersion: apps/v1beta2 kind: Deployment metadata: name: snipeit labels: app: snipeit spec: selector: matchLabels: app: snipeit tier: frontend strategy: type: Recreate template: metadata: labels: app: snipeit tier: frontend spec: containers: - image: snipe/snipe-it name: snipeit envFrom: - configMapRef: name: snipe-it-config ports: - containerPort: 80 name: snipeit volumeMounts: - name: snipeit-persistent-storage mountPath: /var/lib/snipeit volumes: - name: snipeit-persistent-storage persistentVolumeClaim: claimName: snipeit-pv-claim </code></pre> <p>Instead of using a ConfigMap, I had been adding the environment variables and parameters directly in the Deployment section. After switching to a ConfigMap (referenced via <code>envFrom</code>), it came up and is running smoothly.</p>
<p><strong>Issue:</strong></p> <p>My flask API is unable to connect to my Postgres instance. I've verified that the database and api are both working as expected on their own, the deployments and services are running within kubernetes. It must be the connection itself. The connection is defined inside of the Flask config file so perhaps I'm specifying it incorrectly? I'm at a loss of next steps to take.</p> <p><strong>Error</strong></p> <p>This is the error I see when I check the logs of the pod specific to the API which is trying to reach out to postgres.</p> <pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out Is the server running on host "postgres" (198.105.244.228) and accepting TCP/IP connections on port 5432? </code></pre> <p><strong>Stack info</strong></p> <ul> <li>Minikube </li> <li>Kubernetes </li> <li>Docker </li> <li>Flask </li> <li>Postgres </li> <li>SQLAlchemy</li> </ul> <p><strong>Flask/Config.py</strong></p> <p><code>SQLALCHEMY_DATABASE_URI = 'postgres://postgres:postgres@postgres:5432/postgres'</code>. This is identical to the one I was using with Docker Compose before the switch to Kubernetes.</p> <p><strong>Kubernetes/postgres-cluster-ip-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres-cluster-ip-service spec: type: ClusterIP selector: component: postgres ports: - port: 5432 targetPort: 5432 </code></pre> <p><strong>Kubernetes/postgres-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postres-deployment spec: replicas: 1 selector: matchLabels: component: postgres template: metadata: labels: component: postgres spec: volumes: - name: postgres-storage persistentVolumeClaim: claimName: database-persistent-volume-claim imagePullSecrets: - name: regcred containers: - name: postgres image: my/image-db ports: - containerPort: 5432 env: - name: POSTGRES_PASSWORD value: postgres - name: POSTGRES_USER value: postgres volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data subPath: postgres </code></pre> <p><strong>Kuberenetes/database-persistent-volume-claim</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: database-persistent-volume-claim spec: # Access mode gets some instance of storage. # ReadWriteOncence means that it can be used by a single node. accessModes: - ReadWriteOnce resources: requests: # find a storage option that has at least 2 gigs of space storage: 2Gi </code></pre> <p>Happy to add any other files that would help! </p>
<p>The name of the Service is a host name, so you should be connecting to <code>postgres-cluster-ip-service.default.svc.cluster.local</code> (change <code>default</code> if you're deploying to some different Kubernetes namespace). Your error message looks like you're connecting to some other system named <code>postgres</code> outside of your cluster environment.</p>
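<p>For example (a sketch, assuming everything runs in the <code>default</code> namespace), the Flask config would point at the Service name instead of <code>postgres</code>; from inside the same namespace the short service name works too:</p>

<pre><code># Flask/Config.py
SQLALCHEMY_DATABASE_URI = (
    'postgres://postgres:postgres'
    '@postgres-cluster-ip-service.default.svc.cluster.local:5432/postgres'
)
# or simply: ...@postgres-cluster-ip-service:5432/postgres
</code></pre>

<p>Alternatively, rename the Service to <code>postgres</code> and the existing connection string will keep working.</p>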
<p>I am trying to implement distcc for my compilations using Docker &amp; Kubernetes. How do I dynamically provide the IP addresses of the containers?</p>
<p>To save your time, I would rather recommend using the <a href="https://coreos.com/operators/" rel="nofollow noreferrer">service-operator</a> concept in Kubernetes. An operator deploys a software stack for you on a Kubernetes cluster, without you having to take care of the details (like the IP address of the container where an instance of the distcc daemon is running).</p> <p>Please check the <a href="https://github.com/mbrt/k8cc" rel="nofollow noreferrer">k8cc</a> - Distcc autoscaler project on GitHub.</p>
<pre><code> apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx4 name: nginx4 spec: containers: - image: nginx name: nginx4 nodeSelector: app: "v1-tesla" resources: {} dnsPolicy: ClusterFirst restartPolicy: Never status: {} </code></pre> <p>When I run the above template kubectl create -f pod.yaml, I get the following error:</p> <pre><code> error: error validating "podOnANode.yaml": error validating data: ValidationError(Pod.spec.nodeSelector.resources): invalid type for io.k8s.api.core.v1.PodSpec.nodeSelector: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>Any pointers to fix this would be great.</p>
<p>The above error is for:</p> <pre><code>nodeSelector: app: "v1-tesla" resources: {} </code></pre> <p>Here, the value of <code>resources</code> is <code>{}</code>, which is a <code>map</code>, but <code>nodeSelector</code> only accepts <code>string</code> values. So remove <code>resources: {}</code> from under <code>nodeSelector</code>, or change its value to a string.</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: nginx4 name: nginx4 spec: containers: - image: nginx name: nginx4 nodeSelector: app: "v1-tesla" resources: "whatever" dnsPolicy: ClusterFirst restartPolicy: Never status: {} </code></pre>
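<p>If the intent was to schedule the pod on nodes labelled <code>app=v1-tesla</code> while keeping the (empty) resources block for the container, note that <code>nodeSelector</code> belongs at the pod spec level, not inside the container. A sketch of that layout, assuming that is what was meant:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx4
  name: nginx4
spec:
  containers:
  - image: nginx
    name: nginx4
    resources: {}        # stays under the container
  nodeSelector:          # sibling of "containers", not nested inside it
    app: "v1-tesla"
  dnsPolicy: ClusterFirst
  restartPolicy: Never
</code></pre>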
<p>I have an API written in Go that has been Dockerised and runs in a Kubernetes cluster on GKE.</p> <p>At the moment my API server does not handle any shutdown scenarios such as a Pod dying or being purposefully brought down.</p> <p>What set of UNIX signals should I expect to trap to gracefully shutdown the server and what circumstances would trigger them? For example, crashes, K8s shutdowns etc.</p>
<p>Kubernetes sends a <code>SIGTERM</code> signal. So the graceful shutdown may look like this:</p> <pre><code>package main import ( "context" "log" "net/http" "os" "os/signal" "syscall" ) func main() { var srv http.Server idleConnsClosed := make(chan struct{}) go func() { sigint := make(chan os.Signal, 1) // interrupt signal sent from terminal signal.Notify(sigint, os.Interrupt) // sigterm signal sent from kubernetes signal.Notify(sigint, syscall.SIGTERM) &lt;-sigint // We received an interrupt signal, shut down. if err := srv.Shutdown(context.Background()); err != nil { // Error from closing listeners, or context timeout: log.Printf("HTTP server Shutdown: %v", err) } close(idleConnsClosed) }() if err := srv.ListenAndServe(); err != http.ErrServerClosed { // Error starting or closing listener: log.Printf("HTTP server ListenAndServe: %v", err) } &lt;-idleConnsClosed } </code></pre> <p>Also you should add Liveness and Readiness probes to your pods:</p> <pre><code>livenessProbe: httpGet: path: /health port: 80 readinessProbe: httpGet: path: /health port: 80 </code></pre>
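<p>One related detail: after sending <code>SIGTERM</code>, Kubernetes waits <code>terminationGracePeriodSeconds</code> (30 seconds by default) before sending <code>SIGKILL</code>, so in-flight requests must drain within that window. A hedged sketch of raising it in the pod spec (the image name is a placeholder):</p>

<pre><code>spec:
  terminationGracePeriodSeconds: 60   # default is 30
  containers:
  - name: api
    image: my-api:latest              # placeholder
    ports:
    - containerPort: 80
</code></pre>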
<p>I have a set of YAML files which are of different kinds:</p> <p>1 PVC</p> <p>1 PV (the above PVC claims this PV)</p> <p>1 Service</p> <p>1 StatefulSet object (the above Service is for this StatefulSet)</p> <p>1 ConfigMap (the above StatefulSet uses this ConfigMap)</p> <p>Does the install order of these objects matter when bringing up an application that uses them?</p>
<p>If you do <code>kubectl apply -f dir</code> on a directory containing all of those files then it should work, at least if you have the latest version as <a href="https://github.com/kubernetes/kubernetes/issues/64203" rel="nofollow noreferrer">there have been bugs raised and addressed in this area</a>. </p> <p>However, there are some dependencies which aren't hard dependencies and <a href="https://github.com/kubernetes/kubernetes/issues/16448" rel="nofollow noreferrer">for which there is discussion</a>. For this reason some are choosing to <a href="https://github.com/kubernetes/kubernetes/issues/16448#issuecomment-454218437" rel="nofollow noreferrer">order the resources themselves</a> or use a deployment tool like <a href="https://github.com/helm/helm/issues/1228" rel="nofollow noreferrer">helm which deploys</a> resources <a href="https://stackoverflow.com/questions/51957676/helm-install-in-certain-order">in a certain order</a>.</p>
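<p>If you would rather control the order yourself without a tool, one common convention (a sketch, not a requirement) is to prefix the filenames with numbers, since <code>kubectl apply -f dir</code> generally processes files in filename order, or to pass the files explicitly in the order you want:</p>

<pre><code># option 1: rely on filename ordering inside the directory
#   00-configmap.yaml 01-pv.yaml 02-pvc.yaml 03-service.yaml 04-statefulset.yaml
kubectl apply -f ./manifests/

# option 2: apply the files explicitly, in order
kubectl apply -f configmap.yaml -f pv.yaml -f pvc.yaml \
  -f service.yaml -f statefulset.yaml
</code></pre>

<p>In practice most of these dependencies are soft anyway: a StatefulSet created before its ConfigMap will just fail to start its pods and be retried until the ConfigMap exists.</p>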
<h2>Problem statement</h2> <p>We are planning to use the Azure API Management service as a reverse proxy for our AKS cluster. I used the following URL as a reference for configuring Azure API Management with AKS. Although it describes NodePort, the same can be applied with an internal load balancer IP address.</p> <p><a href="https://fizzylogic.nl/2017/06/16/how-to-connect-azure-api-management-to-your-kubernetes-cluster/" rel="nofollow noreferrer">https://fizzylogic.nl/2017/06/16/how-to-connect-azure-api-management-to-your-kubernetes-cluster/</a></p> <p>We currently have multiple environments such as dev1, dev2, dev3, dev, uat, stage, and prod. We are trying to automate this configuration step and don't want to bind to a specific IP; instead we need to point to the DNS name associated with the internal load balancer for k8s.</p>
<p>If you use an annotation on the service to request an internal load balancer, you will get an IP address on the vNet for your Service rather than an external IP:</p> <p><code>annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true"</code></p> <p>You can then use the external-dns service (<a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a>) to automatically create DNS entries for your services inside Azure DNS zones. You should then be able to resolve the service DNS name.</p> <p>Although not explicitly supported, it does work with Private DNS zones as well.</p>
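<p>As a rough sketch (service name, ports and hostname are placeholders), the annotated internal Service combined with an external-dns hostname annotation would look something like this:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-api                  # placeholder
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # picked up by external-dns if it is deployed against your Azure DNS zone
    external-dns.alpha.kubernetes.io/hostname: my-api.dev1.example.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-api
</code></pre>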
<p>Setting up a new k8s cluster on Centos 7 using flannel as the CNI plugin. When joining a worker to the cluster, the CNI0 bridge is not created.</p> <p>Environment is kubernetes 13.2.1, Docker-CE 18.09, Flannel 010. Centos 7.4. My understanding is that CNI0 is created by brctl when called by flannel. With docker debug, I can see that the install-cni-kube-flannel container is instantiated. In looking at /var/lib, I do not see that /var/lib/cni directory is created.</p> <p>I would expect that CNI0 and the /var/lib/cni directory would be created by the install-cni-kube-flannel container. How would I troubleshoot this further ? Are there log capabilities for the CNI interface ?</p>
<p>With further research, I observed that the /var/lib/cni directory on the worker node was not created until I deployed a pod to that node and exposed a service. Once I did that, the CNI plugin was called, /var/lib/cni was created as well as CNI0. </p>
<p>I'm new to docker/k8s world... I was asked if I could deploy a container using args to modify the behavior (typically if the app is working in "master" or "slave" version), which I did. Maybe not the optimal solution but it works:</p> <p>This is a simple test to verify. I made a custom image with a script inside: role.sh:</p> <pre><code>#!/bin/sh ROLE=$1 echo "You are running "$ROLE" version of your app" </code></pre> <p>Dockerfile:</p> <pre><code>FROM centos:7.4.1708 COPY ./role.sh /usr/local/bin RUN chmod a+x /usr/local/bin/role.sh ENV ROLE="" ARG ROLE ENTRYPOINT ["role.sh"] CMD ["${ROLE}"] </code></pre> <p>If I start this container with docker using the following command:</p> <pre><code>docker run -dit --name test docker.local:5000/test master </code></pre> <p>I end up with the following log, which is exactly what I am looking for:</p> <pre><code>You are running master version of your app </code></pre> <p>Now I want to have the same behavior on k8s, using a yaml file. I tried several ways but none worked.</p> <p>YAML file:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: master-pod labels: app: test-master spec: containers: - name: test-master-container image: docker.local:5000/test command: ["role.sh"] args: ["master"] </code></pre> <p>I saw so many different ways to do this and I must say that I still don't get the difference between ARG and ENV.</p> <p>I also tried with</p> <pre><code> - name: test-master-container image: docker.local:5000/test env: - name: ROLE value: master </code></pre> <p>and</p> <pre><code> - name: test-master-container image: docker.local:5000/test args: - master </code></pre> <p>but none of these worked, my pods are always in CrashLoopBackOff state.. Thanks in advance for your help!</p>
<p>In terms of specific fields:</p> <ul> <li>Kubernetes's <code>command:</code> matches Docker's "entrypoint" concept, and whatever is specified here is run as the main process of the container. You don't need to specify a <code>command:</code> in a pod spec if your Dockerfile has a correct <code>ENTRYPOINT</code> already.</li> <li>Kubernetes's <code>args:</code> matches Docker's "command" concept, and whatever is specified here is passed as command-line arguments to the entrypoint.</li> <li>Environment variables in both Docker and Kubernetes have their usual Unix semantics.</li> <li>Dockerfile <code>ARG</code> specifies a <em>build-time</em> configuration setting for an image. The <a href="https://docs.docker.com/engine/reference/builder/#using-arg-variables" rel="noreferrer">expansion rules and interaction with environment variables</a> are a little odd. In my experience this has a couple of useful use cases ("which JVM version do I actually want to build against?"), but <em>every</em> container built from an image will have the same inherited <code>ARG</code> value; it's not a good mechanism for <em>run-time</em> configuration.</li> <li>For various things that could be set in either the Dockerfile or at runtime (<code>ENV</code> variables, <code>EXPOSE</code>d ports, a default <code>CMD</code>, especially <code>VOLUME</code>) there's no particular need to "declare" them in the Dockerfile to be able to set them at run time.</li> </ul> <p>There are a couple of more-or-less equivalent ways to do what you're describing. (I will use <code>docker run</code> syntax for the sake of compactness.) Probably the most flexible way is to have <code>ROLE</code> set as an environment variable; when you run the entrypoint script you can assume <code>$ROLE</code> has a value, but it's worth checking.</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/sh # --&gt; I expect $ROLE to be set # --&gt; Pass some command to run as additional arguments if [ -z "$ROLE" ]; then echo "Please set a ROLE environment variable" &gt;&amp;2 exit 1 fi echo "You are running $ROLE version of your app" exec "$@" </code></pre> <pre class="lang-sh prettyprint-override"><code>docker run --rm -e ROLE=some_role docker.local:5000/test /bin/true </code></pre> <p>In this case you can specify a default <code>ROLE</code> in the Dockerfile if you want to.</p> <pre><code>FROM centos:7.4.1708 COPY ./role.sh /usr/local/bin RUN chmod a+x /usr/local/bin/role.sh ENV ROLE="default_role" ENTRYPOINT ["role.sh"] </code></pre> <p>A second path is to take the role as a command-line parameter:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/sh # --&gt; pass a role name, then a command, as parameters ROLE="$1" if [ -z "$ROLE" ]; then echo "Please pass a role as a command-line option" &gt;&amp;2 exit 1 fi echo "You are running $ROLE version of your app" shift # drops first parameter export ROLE # makes it an environment variable exec "$@" </code></pre> <pre class="lang-sh prettyprint-override"><code>docker run --rm docker.local:5000/test some_role /bin/true </code></pre> <p>I would probably prefer the environment-variable path both for it being a little easier to supply multiple unrelated options and to not mix "settings" and "the command" in the "command" part of the Docker invocation.</p> <p>As to why your pod is "crashing": Kubernetes generally expects pods to be long-running, so if you write a container that just prints something and exits, Kubernetes will restart it, and when it doesn't stay up, it will <em>always</em> wind up 
in <code>CrashLoopBackOff</code> state. For what you're trying to do right now, don't worry about it and look at the <code>kubectl logs</code> of the pod. Consider setting the pod spec's <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer">restart policy</a> if this bothers you.</p>
<p>I'm trying to deploy an ftp server image in Azure AKS. To expose the server to public, I've added a service of type LoadBalancer.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: test-import-ftp namespace: staging spec: loadBalancerIP: 168.63.x.x type: LoadBalancer ports: - port: 21 name: ftp-control targetPort: 21 - port: 50000 name: ftp-data-0 - port: 50001 name: ftp-data-1 - port: 50002 name: ftp-data-2 - port: 50003 name: ftp-data-3 - port: 50004 name: ftp-data-4 - port: 50005 name: ftp-data-5 - port: 50006 name: ftp-data-6 - port: 50007 name: ftp-data-7 - port: 50008 name: ftp-data-8 - port: 50009 name: ftp-data-9 selector: app: test-import-ftp </code></pre> <p>It works fine for the control port but not for the data ports. Reason is, that it configures probes for all ports and ftp servers don't listen on data ports. These ports will be opened "on demand".</p> <p>How can I disable health checks for data ports?</p>
<p>AFAIK, you cannot disable health checks, but you can make them work with FTP servers.</p> <p>Adapt your configuration like so:</p> <pre><code>[...] spec: loadBalancerIP: 168.63.x.x type: LoadBalancer healthCheckNodePort: 30021 externalTrafficPolicy: Local ports: [...] </code></pre> <p>So, you need to set <code>healthCheckNodePort</code> to a port in the legal nodePort range, and set <code>externalTrafficPolicy</code> to <code>Local</code>. </p> <p>This will make the service open up a nodePort, and the LoadBalancer will now only check that port to determine availability. The drawback is that your health check now only checks that the node is up, not that the ftp service is running.</p> <p>For this to work, you MUST set externalTrafficPolicy to Local. This means that the container will see the actual client source ip as the traffic source, not the internal kubernetes source. Adjust any of your service settings accordingly. For FTP, however, this is desirable, as it allows the server to check that a passive data connection attempt is done by the same client as the original control connection.</p> <p>See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/</a>, section "Preserving the client source IP"</p>
<p>I'm using Amazon EKS for Kubernetes deployment (initially created by an AWS admin user), and currently having difficulty to use the AWS credentials from AWS STS assume-role to execute <code>kubectl</code> commands to interact with the stack</p> <p>I have 2 EKS stacks on 2 different AWS accounts (PROD &amp; NONPROD), and I'm trying to get the CI/CD tool to deploy to both kubernetes stacks with the credentials provided by AWS STS assume-role but I'm constantly getting error such as <code>error: You must be logged in to the server (the server has asked for the client to provide credentials)</code>. </p> <p>I have followed the following link to add additional AWS IAM role to the config:</p> <ul> <li><a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a></li> </ul> <p>But I'm not sure what I'm not doing right.</p> <p>I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: [hidden] server: https://[hidden].eu-west-1.eks.amazonaws.com name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks contexts: - context: cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks kind: Config preferences: {} users: - name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - token - -i - triage-eks command: aws-iam-authenticator </code></pre> <p>and had previously updated Kubernetes aws-auth ConfigMap with an additional role as below:</p> <pre><code>data: mapRoles: | - rolearn: arn:aws:iam::[hidden]:role/ci_deployer username: system:node:{{EC2PrivateDNSName}} groups: - system:masters </code></pre> <p>My CI/CD EC2 instance can assume the <code>ci_deployer</code> role for either AWS accounts.</p> <p>Expected: I can call "kubectl version" to see both Client and Server versions</p> <p>Actual: but I get "the server has asked for the client to provide credentials"</p> <p>What is still missing?</p> <p>After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) of the same AWS account where the EKS stack is created. This means that my CI instance from account A will not be able to communicate with EKS from account B, even if the CI instance can assume a role from account B, and the account B role is included in the aws-auth of the kube config of account B EKS. I hope its due to missing configuration as I find this rather undesirable if a CI tool can't deploy to multiple EKS's from multiple AWS accounts using role assumption. </p> <p>Look forward to further @Kubernetes support on this</p>
<blockquote> <p>Can kubectl work from an assumed role from AWS</p> </blockquote> <p>Yes, it can work. A good way to troubleshoot it is to run from the same command line where you are running kubectl:</p> <pre><code>$ aws sts get-caller-identity </code></pre> <p>You can see the <a href="https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html" rel="noreferrer"><code>Arn</code></a> for the role (or user) and then make sure there's a <a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/edit_trust.html" rel="noreferrer">trust relationship</a> in <a href="https://aws.amazon.com/iam/" rel="noreferrer">IAM</a> between that and the role that you specify here in your <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="noreferrer">kubeconfig</a>:</p> <pre><code>command: aws-iam-authenticator args: - "token" - "-i" - "&lt;cluster-name&gt;" - "-r" - "&lt;role-you-want-to-assume-arn&gt;" </code></pre> <p>or with the newer option:</p> <pre><code>command: aws args: - eks - get-token - --cluster-name - &lt;cluster-name&gt; - --role - &lt;role-you-want-to-assume-arn&gt; </code></pre> <p>Note that if you are using <code>aws eks update-kubeconfig</code> you can pass in the <code>--role-arn</code> flag to generate the above in your kubeconfig.</p> <p>In your case, some things that you can look at:</p> <ul> <li><p>The credential environment variables are not set in your CI?:</p> <pre><code>AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY </code></pre></li> <li><p>Your ~/.aws/credentials file is not populated correctly in your CI. With something like this:</p> <pre><code>[default] aws_access_key_id = xxxx aws_secret_access_key = xxxx </code></pre></li> <li><p>Generally, the environment variables take precedence so it could be that you could have different credentials altogether in those environment variables too.</p></li> <li><p>It could also be the <code>AWS_PROFILE</code> env variable or the <code>AWS_PROFILE</code> config in <code>~/.kube/config</code></p> <pre><code>users: - name: aws user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 command: aws-iam-authenticator args: - "token" - "-i" - "&lt;cluster-name&gt;" - "-r" - "&lt;role-arn&gt;" env: - name: AWS_PROFILE &lt;== is this value set value: "&lt;aws-profile&gt;" </code></pre></li> <li><p>Is the profile set correctly under <code>~/.aws/config</code>?</p></li> </ul>
<p>I am using Kafka Helm charts from <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">here</a>. I was trying Horizontal Pod Autoscaler for the same.</p> <p>I added a hpa.yaml file as given below inside the templates folder.</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: kafka-hpa spec: scaleTargetRef: apiVersion: extensions/v1beta1 kind: Deployment name: {{ include "kafka.fullname" . }} minReplicas: {{ .Values.replicas }} maxReplicas: 5 metrics: - type: Resource resource: name: cpu targetAverageUtilization: 50 - type: Resource resource: name: memory targetAverageValue: 8000Mi </code></pre> <p>I have also tried the above YAML with <strong>kind: StatefulSet</strong> but the same issue persists.</p> <p>My intention is to have 3 Kafka pods initially and scale it up to 5 based on CPU and memory targetValues as mentioned above.</p> <p>However, the hpa gets deployed but it is unable to read the metrics as per my understanding as the current usage shows unknown as mentioned below.</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE kafka-hpa Deployment/whopping-walrus-kafka &lt;unknown&gt;/8000Mi, &lt;unknown&gt;/50% 3 5 0 1h . </code></pre> <p>I am new to helm and Kubernetes, so I am assuming there might be some issue with my understanding.</p> <p>I have also deployed metrics-server.</p> <pre><code>$ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE metrics-server 1 1 1 1 1d whopping-walrus-kafka-exporter 1 1 1 1 1h </code></pre> <p>Pods output</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE metrics-server-55cbf87bbb-vm2v5 1/1 Running 0 15m whopping-walrus-kafka-0 1/1 Running 1 1h whopping-walrus-kafka-1 1/1 Running 0 1h whopping-walrus-kafka-2 1/1 Running 0 1h whopping-walrus-kafka-exporter-5c66b5b4f9-mv5kv 1/1 Running 1 1h whopping-walrus-zookeeper-0 1/1 Running 0 1h </code></pre> <p>I want the <strong>whopping-walrus-kafka</strong> pod to scale up to 5 on load, however, there's no deployment corresponding to it.</p> <p>StatefulSet Output</p> <pre><code>$ kubectl get statefulset NAME DESIRED CURRENT AGE original-bobcat-kafka 3 2 2m original-bobcat-zookeeper 1 1 2m </code></pre> <p>Output of describe hpa when kind in hpa.yaml is <strong>StatefulSet</strong>.</p> <pre><code>$ kubectl describe hpa Name: kafka-hpa Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Fri, 18 Jan 2019 12:13:59 +0530 Reference: StatefulSet/original-bobcat-kafka Metrics: ( current / target ) resource memory on pods: &lt;unknown&gt; / 8000Mi resource cpu on pods (as a percentage of request): &lt;unknown&gt; / 5% Min replicas: 3 Max replicas: 5 Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "StatefulSet" in group "extensions" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 15s (x17 over 8m) horizontal-pod-autoscaler no matches for kind "StatefulSet" in group "extensions" </code></pre> <p>Output of describe hpa when kind in hpa.yaml is <strong>Deployment</strong>.</p> <pre><code>$ kubectl describe hpa Name: kafka-hpa Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Fri, 18 Jan 2019 12:30:07 +0530 Reference: Deployment/good-elephant-kafka Metrics: ( current / target ) resource memory on pods: &lt;unknown&gt; / 8000Mi resource cpu on pods 
(as a percentage of request): &lt;unknown&gt; / 5% Min replicas: 3 Max replicas: 5 Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: could not fetch the scale for deployments.extensions good-elephant-kafka: deployments/scale.extensions "good-elephant-kafka" not found Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 9s horizontal-pod-autoscaler could not fetch the scale for deployments.extensions good-elephant-kafka: deployments/scale.extensions "good-elephant-kafka" not found </code></pre> <p>Output from metrics server pod</p> <pre><code>$ kubectl describe pods metrics-server-55cbf87bbb-vm2v5 Name: metrics-server-55cbf87bbb-vm2v5 Namespace: default Node: docker-for-desktop/192.168.65.3 Start Time: Fri, 18 Jan 2019 11:26:33 +0530 Labels: app=metrics-server pod-template-hash=1176943666 release=metrics-server Annotations: &lt;none&gt; Status: Running IP: 10.1.0.119 Controlled By: ReplicaSet/metrics-server-55cbf87bbb Containers: metrics-server: Container ID: docker://ee4b3d9ed1b15c2c8783345b0ffbbc565ad25f1493dec0148f245c9581443631 Image: gcr.io/google_containers/metrics-server-amd64:v0.3.1 Image ID: docker-pullable://gcr.io/google_containers/metrics-server-amd64@sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /metrics-server --kubelet-insecure-tls --kubelet-preferred-address-types=InternalIP --logtostderr State: Running Started: Fri, 18 Jan 2019 11:26:35 +0530 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-d2g7b (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: metrics-server-token-d2g7b: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-token-d2g7b Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>Guys please feel free to clear my understanding as well if I am going wrong somewhere.</p> <p>Help on this would be highly appreciated.</p>
<p>You need to add the following command in your <code>metrics-server</code> deployment file:</p> <pre><code>containers: - command: - /metrics-server - --metric-resolution=30s - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP name: metrics-server </code></pre> <p>I believe metrics-server couldn't reach the kubelet, hence the issue; the <code>--kubelet-preferred-address-types=InternalIP</code> flag tells it to contact the kubelet via the node's internal IP instead. For more information, check my following answer for step-by-step instructions to set up HPA.</p> <p><a href="https://stackoverflow.com/questions/53725248/how-to-enable-kubeapi-server-for-hpa-autoscaling-metrics/53727101#53727101">How to Enable KubeAPI server for HPA Autoscaling Metrics</a></p>
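<p>Once the patched metrics-server pod is running, you can verify that metrics are actually being collected before re-checking the HPA (it can take a minute or two for the first values to show up):</p>

<pre><code>kubectl top nodes
kubectl top pods

# the HPA targets should change from &lt;unknown&gt; to real values
kubectl get hpa kafka-hpa
kubectl describe hpa kafka-hpa
</code></pre>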