<p>I am using AWS Opensearch to retrieve the logs from all my Kubernetes applications. I have the following pods: <code>Kube-proxy</code>, <code>Fluent-bit</code>, <code>aws-node</code>, <code>aws-load-balancer-controller</code>, and all my apps (around 10).</p> <p>While fluent-bit successfully send all the logs from <code>Kube-proxy</code>, <code>Fluent-bit</code>, <code>aws-node</code> and <code>aws-load-balancer-controller</code>, none of the logs from my applications are sent. My applications had <code>DEBUG</code>, <code>INFO</code>, <code>ERROR</code> logs, and none are sent by fluent bit.</p> <p>Here is my fluent bit configuration:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: my-namespace labels: k8s-app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 50MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host my-host.es.amazonaws.com Port 443 TLS On AWS_Auth On AWS_Region ap-southeast-1 Retry_Limit 6 parsers.conf: | [PARSER] Name apache Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^ ]*) +\S*)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?&lt;time&gt;[^\]]*)\] \[(?&lt;level&gt;[^\]]*)\](?: \[pid (?&lt;pid&gt;[^\]]*)\])?( \[client (?&lt;client&gt;[^\]]*)\])? (?&lt;message&gt;.*)$ [PARSER] Name nginx Format regex Regex ^(?&lt;remote&gt;[^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog Format regex Regex ^\&lt;(?&lt;pri&gt;[0-9]+)\&gt;(?&lt;time&gt;[^ ]* {1,2}[^ ]* [^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;ident&gt;[a-zA-Z0-9_\/\.\-]*)(?:\[(?&lt;pid&gt;[0-9]+)\])?(?:[^\:]*\:)? 
*(?&lt;message&gt;.*)$ Time_Key time Time_Format %b %d %H:%M:%S </code></pre> <p>I followed <a href="https://www.eksworkshop.com/intermediate/230_logging/deploy/" rel="nofollow noreferrer">this documentation</a></p> <p>Thanks a lot for the help.</p>
<p>Finally, I did two things that solved my issue:</p> <ol> <li>Modified this configuration:</li> </ol> <pre><code># before output-elasticsearch.conf: | [OUTPUT] Name es Match * Host search-blacaz-logs-szzq6vokwwm4y5fkfwyngjwjxq.ap-southeast-1.es.amazonaws.com Port 443 TLS On AWS_Auth On AWS_Region ap-southeast-1 Retry_Limit 6 # after output-elasticsearch.conf: | [OUTPUT] Name es Match * Host search-blacaz-logs-szzq6vokwwm4y5fkfwyngjwjxq.ap-southeast-1.es.amazonaws.com Port 443 TLS On AWS_Auth On Replace_Dots On # added this AWS_Region ap-southeast-1 Retry_Limit 6 </code></pre> <p>Then, I had to delete the fluent-bit Elasticsearch index and re-create it. The index was probably not well suited to my Java logs at first, and was rebuilt correctly after re-creation.</p>
<p>When I edit a deployment to update the Docker image, I need to run a one-time script which changes parts of my application database and sends an email saying that the rolling upgrade process is complete and whether the result is passed or failed.</p> <p>Is there a hook I can attach this script to?</p>
<p>Kubernetes doesn't implement such a thing. This can be done by a CI/CD pipeline or by manually checking the rolling update status. As you said, you can write a simple script which checks the status of the rolling update and sends the result via e-mail, and attach that script to a <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">pipeline in Jenkins</a>.</p> <p>To manually check the status of a rolling update, execute:</p> <p><code>$ kubectl rollout status deploy/your-deployment -n your-namespace</code></p> <p>If, for example, you are passing variables using a ConfigMap, you can use <a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">Reloader</a> to perform your rolling updates automatically whenever a ConfigMap/Secret changes.</p>
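<p>A minimal sketch of such a pipeline step is below; the deployment name, namespace, script name and address are placeholders, and it assumes a <code>mail</code> command is available on the CI agent:</p> <pre><code>#!/bin/bash
# Wait for the rolling update to finish and record whether it passed or failed.
if kubectl rollout status deploy/your-deployment -n your-namespace --timeout=10m; then
  RESULT=passed
else
  RESULT=failed
fi

# Placeholder for the one-time script that updates the application database.
./update-database.sh

# Send the notification e-mail.
echo &quot;Rolling upgrade finished: $RESULT&quot; | mail -s &quot;Deployment $RESULT&quot; team@example.com
</code></pre>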
<p>I am defining canary routes in mesh-virtual-service and wondering whether I can make it applicable for ingress traffic (with ingress-virtual-service) as well. With something like below, but it does not work (all traffic from ingress is going to non-canary version)</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: test-deployment-app namespace: test-ns spec: gateways: - mesh hosts: - test-deployment-app.test-ns.svc.cluster.local http: - name: canary match: - headers: x-canary: exact: &quot;true&quot; - port: 8080 headers: response: set: x-canary: &quot;true&quot; route: - destination: host: test-deployment-app-canary.test-ns.svc.cluster.local port: number: 8080 weight: 100 - name: stable route: - destination: host: test-deployment-app.test-ns.svc.cluster.local port: number: 8080 weight: 100 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: test-deployment-app-internal namespace: test-ns spec: gateways: - istio-system/default-gateway hosts: - myapp.dev.bla http: - name: default route: - destination: host: test-deployment-app.test-ns.svc.cluster.local port: number: 8080 weight: 100 </code></pre> <p>So I am expecting <code>x-canary:true</code> response header when I call <code>myapp.dev.bla</code> but I don't see that.</p>
<p>Well, the answer is only partially inside the link you included. I think the essential thing to realize when working with Istio is what the Istio service mesh even is. The service mesh is every pod with an Istio envoy-proxy sidecar plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other because of istiod, so they can cooperate.</p> <p>Any pod without an Istio sidecar (including ingress pods or, for example, kube-system pods) in your k8s cluster doesn't know anything about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (so that traffic-management rules like yours apply), it must send it through an Istio gateway. A <code>Gateway</code> is backed by a standard deployment + service; the pods in that deployment run a standalone <code>envoy-proxy</code> container.</p> <p>The <code>Gateway</code> object is a very similar concept to a k8s Ingress, but it doesn't necessarily have to listen on a nodePort; you can also use it as an 'internal' gateway. A gateway serves as an entry point into your service mesh, for external or even internal traffic.</p> <ol> <li>If you're using e.g. Nginx as the Ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of the target service - most likely your <code>mesh</code> gateway. That is nothing more than a k8s Service inside the <code>istio-gateway</code> or <code>istio-system</code> namespace.</li> <li>Alternatively, you can configure an Istio gateway as the 'new' Ingress. I'm not sure whether a default Istio gateway listens on a nodePort, so check it (again in the <code>istio-gateway</code> or <code>istio-system</code> namespace). Or you can create a new <code>Gateway</code> just for your application and attach the <code>VirtualService</code> to that new gateway as well, as sketched below.</li> </ol>
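<p>For the last option, a rough sketch of a dedicated <code>Gateway</code> could look like this; the name is illustrative and it assumes the stock <code>istio: ingressgateway</code> pods are installed:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-deployment-app-gateway
  namespace: test-ns
spec:
  selector:
    istio: ingressgateway      # the default Istio ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - myapp.dev.bla
</code></pre> <p>Then list both <code>mesh</code> and <code>test-ns/test-deployment-app-gateway</code> under <code>spec.gateways</code> of the canary <code>VirtualService</code>, and add <code>myapp.dev.bla</code> to its <code>spec.hosts</code>, so the same canary match rules apply to traffic entering through the gateway.</p>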
<p>I'm currently using this line in a user_data.sh file</p> <pre><code>https://storage.googleapis.com/kubernetes-release/release/v1.21.5/bin/linux/amd64/kubectl </code></pre> <p>I've tried to put <a href="https://storage.googleapis.com/kubernetes-release/" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/</a> in a browser to see which other versions are available, but it doesn't work. If I look for a release number on GitHub and replace v1.21.5 with the updated number in the line above, I can download just fine, but is there not a way of just browsing the Kubernetes releases at <a href="https://storage.googleapis.com" rel="nofollow noreferrer">https://storage.googleapis.com</a>?</p>
<p>Use that link in your browser: <a href="https://console.cloud.google.com/storage/browser/kubernetes-release" rel="nofollow noreferrer">https://console.cloud.google.com/storage/browser/kubernetes-release</a></p>
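<p>If you prefer the command line, something like this should also work (assuming <code>gsutil</code> and <code>curl</code> are installed):</p> <pre><code># List the builds published in the public bucket
gsutil ls gs://kubernetes-release/release/ | head

# Or fetch the latest stable version string directly
curl -sL https://storage.googleapis.com/kubernetes-release/release/stable.txt
</code></pre>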
<p>Is there any way to get instance metadata on IBM Cloud Kubernetes cluster, from internal pod? Something like doing <code>curl</code> to <code>http://metadata.google.internal/computeMetadata/v1/instance/...</code> on GKE clusters, or <code>http://169.254.169.254/latest/...</code> on EKS clusters.</p> <p>Any help will be appreciated. Thanks!</p>
<p>I think I follow what you're after: you're able to get details/metadata about your worker node from the SoftLayer APIs available at <a href="https://sldn.softlayer.com/reference/softlayerapi/" rel="nofollow noreferrer">https://sldn.softlayer.com/reference/softlayerapi/</a></p> <p>For k8s-specific info you can use the k8s api-server to query metadata about things like the node, pods, etc. at the <code>kubernetes.default.svc.cluster.local</code> address from inside a pod. You can find the service account and token within your pod at <code>/var/run/secrets/kubernetes.io</code>.</p> <p>Hope that helps.</p>
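<p>As an illustration, a minimal way to query the api-server from inside a pod with the mounted service account token; it assumes the pod's service account has RBAC permission for what you ask, e.g. listing nodes:</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# List nodes (worker metadata) through the in-cluster API endpoint
curl --cacert $CACERT \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  https://kubernetes.default.svc.cluster.local/api/v1/nodes
</code></pre>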
<p>My scenario is like the image below:</p> <p><a href="https://i.stack.imgur.com/3txty.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3txty.png" alt="enter image description here" /></a></p> <p>After a couple of days trying to find a way to block connections among pods based on a rule, I found <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policy</a>. But it's not working for me, either on Google Cloud Platform or on local Kubernetes!</p> <p>My scenario is quite simple: I need a way to block connections among pods based on a rule (e.g. namespace, workload label and so on). At first glance I thought this would work for me, but I don't know why it's not working on Google Cloud, even when I create a cluster from scratch with the &quot;Network policy&quot; option enabled.</p>
<p>A network policy will allow you to do exactly what you described in the picture. You can allow or block traffic based on labels or namespaces.</p> <p>It's difficult to help when you don't explain exactly what you did and what is not working. Update your question with the actual NetworkPolicy YAML you created, and ideally also the output of <code>kubectl get pod --show-labels</code> from the namespace with the pods.</p> <p>What you mean by 'local Kubernetes' is also unclear, but it depends largely on the network CNI you're using, as it must support network policies. For example, Calico or Cilium support them; Minikube in its default setting doesn't, so you should follow e.g. this guide: <a href="https://medium.com/@atsvetkov906090/enable-network-policy-on-minikube-f7e250f09a14" rel="nofollow noreferrer">https://medium.com/@atsvetkov906090/enable-network-policy-on-minikube-f7e250f09a14</a></p>
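<p>For reference, a minimal NetworkPolicy of the kind shown in the picture might look like this; the namespace, labels and port are placeholders for your own:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-api
  namespace: your-namespace
spec:
  podSelector:
    matchLabels:
      app: database          # pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api           # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 5432
</code></pre> <p>Remember that the policy only takes effect if the cluster's CNI actually enforces network policies.</p>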
<p>I want to dedicate one VM to each user. In <a href="https://cloud.google.com/run/docs/about-concurrency" rel="nofollow noreferrer">Cloud Run</a> this simply works by setting the maximum concurrent requests to 1; however, Cloud Run does not support GPUs, so it is not an option for me. How is this possible on Kubernetes? I am running Unreal Engine on the VMs and it should not be shared.</p>
<p>Not with Kubernetes out of the box. But install <a href="https://knative.dev/docs/" rel="nofollow noreferrer">Knative</a> and you will be able to achieve it (and you will be able to reuse the same YAML service definition as Cloud Run, because Cloud Run implements the Knative APIs).</p>
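<p>For example, a Knative <code>Service</code> limited to one request per pod could look roughly like this; the name and image are placeholders, and the GPU limit assumes a GPU node pool with the NVIDIA device plugin installed:</p> <pre><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: unreal-engine-app
spec:
  template:
    spec:
      containerConcurrency: 1        # one in-flight request per pod, like Cloud Run concurrency=1
      containers:
      - image: gcr.io/your-project/unreal-engine:latest
        resources:
          limits:
            nvidia.com/gpu: 1
</code></pre>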
<p>I have the strange problem that a Spark job ran on Kubernetes fails with a lot of "Missing an output location for shuffle X" in jobs where there is a lot of shuffling going on. Increasing executor memory does not help. The same job run on just a single node of the Kubernetes cluster in local[*] mode runs fine however so I suspect it has to do with Kubernetes or underlying Docker. When an executor dies, the pods are deleted immediately so I cannot track down why it failed. Is there an option that keeps failed pods around so I can view their logs?</p>
<p><strong>If you're using the spark executor</strong>: There is a <code>deleteOnTermination</code> setting in the spark application yaml. See <a href="https://github.com/mesosphere/spark-on-k8s-operator/blob/master/docs/api-docs.md#sparkoperator.k8s.io/v1beta2.ExecutorSpec" rel="nofollow noreferrer">the spark-on-kubernetes README.md</a>.</p> <blockquote> <p><code>deleteOnTermination</code> - <em>(Optional)</em> <code>DeleteOnTermination</code> specify whether executor pods should be deleted in case of failure or normal termination.<br><br> Maps to <code>spark.kubernetes.executor.deleteOnTermination</code> that is available since Spark 3.0.</p> </blockquote> <p><strong>If you're using kubernetes jobs</strong>: Set the job <code>spec.ttlSecondsAfterFinished</code> parameter or get the previous pods logs with kubectl. There is also a setting for keeping failed jobs around if you're using cronjobs.</p>
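<p>For example, the relevant settings are sketched below; check the exact field names against your operator and Spark versions:</p> <pre><code># spark-on-k8s-operator SparkApplication: keep failed executor pods around
spec:
  executor:
    deleteOnTermination: false

# equivalent spark-submit configuration (Spark 3.0+):
#   --conf spark.kubernetes.executor.deleteOnTermination=false

# plain Kubernetes Job: keep the finished/failed Job and its pods for an hour
spec:
  ttlSecondsAfterFinished: 3600
</code></pre>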
<p>I have a python application running on a virtual machine, were a legacy, and now I'm migrating to a Kubernetes.</p> <p>I use <code>influxdb==5.2.3</code> package, connecting to this form <code>Influx(host=r'influx_HOST', port=8086, username='MY_USER', password='***', database='DB_NAME', ssl=True)</code>. This python script calls an InfluxBD using an SSL certificate and when I run directly using <code>python app.py</code> works well, but, the problem is when:</p> <h3> I dockerized the script shows me the following error.</h3> <pre><code>Traceback (most recent call last): File &quot;app.py&quot;, line 591, in &lt;module&gt; get_horas_stock() File &quot;app.py&quot;, line 513, in get_horas_stock df_temp = influx_temperatura.multiple_query_to_df(queries_temperatura) File &quot;/usr/src/app/analitica_py_lib_conexiones/conexion_influx.py&quot;, line 82, in multiple_query_to_df resultado = self.__cliente.query(&quot;;&quot;.join(queries)) File &quot;/usr/local/lib/python3.8/site-packages/influxdb/client.py&quot;, line 445, in query response = self.request( File &quot;/usr/local/lib/python3.8/site-packages/influxdb/client.py&quot;, line 302, in request raise InfluxDBClientError(response.content, response.status_code) influxdb.exceptions.InfluxDBClientError: 400: Client sent an HTTP request to an HTTPS server. </code></pre> <p>I understand to, from inside the container, the script is using https to call Influx, but I think the connection is redirected outside the container using Http and lose all SSL configuration, why? I don't know.</p> <h3>I tested deployed on Kubernetes</h3> <p>I tried to deploy on Kubernetes thinking I get the same error, but it`s changed.</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.8/site-packages/urllib3/response.py&quot;, line 360, in _error_catcher yield File &quot;/usr/local/lib/python3.8/site-packages/urllib3/response.py&quot;, line 442, in read data = self._fp.read(amt) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 454, in read n = self.readinto(b) File &quot;/usr/local/lib/python3.8/http/client.py&quot;, line 498, in readinto n = self.fp.readinto(b) File &quot;/usr/local/lib/python3.8/socket.py&quot;, line 669, in readinto return self._sock.recv_into(b) ConnectionResetError: [Errno 104] Connection reset by peer During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/usr/local/lib/python3.8/site-packages/requests/models.py&quot;, line 750, in generate for chunk in self.raw.stream(chunk_size, decode_content=True): File &quot;/usr/local/lib/python3.8/site-packages/urllib3/response.py&quot;, line 494, in stream data = self.read(amt=amt, decode_content=decode_content) File &quot;/usr/local/lib/python3.8/site-packages/urllib3/response.py&quot;, line 459, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File &quot;/usr/local/lib/python3.8/contextlib.py&quot;, line 131, in __exit__ self.gen.throw(type, value, traceback) File &quot;/usr/local/lib/python3.8/site-packages/urllib3/response.py&quot;, line 378, in _error_catcher raise ProtocolError('Connection broken: %r' % e, e) urllib3.exceptions.ProtocolError: (&quot;Connection broken: ConnectionResetError(104, 'Connection reset by peer')&quot;, ConnectionResetError(104, 'Connection reset by peer')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;app.py&quot;, line 591, in &lt;module&gt; get_horas_stock() File 
&quot;app.py&quot;, line 513, in get_horas_stock df_temp = influx_temperatura.multiple_query_to_df(queries_temperatura) File &quot;/usr/src/app/analitica_py_lib_conexiones/conexion_influx.py&quot;, line 82, in multiple_query_to_df resultado = self.__cliente.query(&quot;;&quot;.join(queries)) File &quot;/usr/local/lib/python3.8/site-packages/influxdb/client.py&quot;, line 445, in query response = self.request( File &quot;/usr/local/lib/python3.8/site-packages/influxdb/client.py&quot;, line 274, in request response = self._session.request( File &quot;/usr/local/lib/python3.8/site-packages/requests/sessions.py&quot;, line 533, in request resp = self.send(prep, **send_kwargs) File &quot;/usr/local/lib/python3.8/site-packages/requests/sessions.py&quot;, line 686, in send r.content File &quot;/usr/local/lib/python3.8/site-packages/requests/models.py&quot;, line 828, in content self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' File &quot;/usr/local/lib/python3.8/site-packages/requests/models.py&quot;, line 753, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: (&quot;Connection broken: ConnectionResetError(104, 'Connection reset by peer')&quot;, ConnectionResetError(104, 'Connection reset by peer')) </code></pre> <p>I don't know if is related to the previous error.</p> <p><a href="https://i.stack.imgur.com/jNjEy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jNjEy.png" alt="enter image description here" /></a></p> <h3>Edited</h3> <p>Dockerfile</p> <pre><code>FROM python:3.8.2-buster WORKDIR /usr/src/app COPY . . RUN pip install --no-cache-dir -r requirements.txt EXPOSE 8080 CMD [&quot;python&quot;, &quot;app.py&quot;] </code></pre> <p>Kubernetes Deployment YML</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-ms spec: replicas: 1 selector: matchLabels: ip-service: my-ms template: metadata: labels: ip-service: my-ms spec: containers: - name: my-ms image: myprivate.azurecr.io/my-ms:latest ports: - containerPort: 8080 resources: requests: cpu: 100m memory: 10Mi imagePullSecrets: - name: tecnoregistry </code></pre> <h3>Edited 2</h3> <p>I get the same error I have on Kubernetes, but running the script locally, I change the SSL value from True to False in the service call <code>Influx(host=r'influx_HOST', port=8086, username='MY_USER', password='***', database='DB_NAME', ssl=False)</code>.</p> <pre><code>~$ python3 app.py Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/urllib3/response.py&quot;, line 360, in _error_catcher yield File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/urllib3/response.py&quot;, line 442, in read data = self._fp.read(amt) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py&quot;, line 457, in read n = self.readinto(b) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py&quot;, line 501, in readinto n = self.fp.readinto(b) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py&quot;, line 589, in readinto return self._sock.recv_into(b) ConnectionResetError: [Errno 54] Connection reset by peer During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/requests/models.py&quot;, line 750, in generate for chunk in self.raw.stream(chunk_size, 
decode_content=True): File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/urllib3/response.py&quot;, line 494, in stream data = self.read(amt=amt, decode_content=decode_content) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/urllib3/response.py&quot;, line 459, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py&quot;, line 130, in __exit__ self.gen.throw(type, value, traceback) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/urllib3/response.py&quot;, line 378, in _error_catcher raise ProtocolError('Connection broken: %r' % e, e) urllib3.exceptions.ProtocolError: (&quot;Connection broken: ConnectionResetError(54, 'Connection reset by peer')&quot;, ConnectionResetError(54, 'Connection reset by peer')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;app.py&quot;, line 1272, in &lt;module&gt; generate_excels_consumo() File &quot;/Volumes/DATA/IdeaProjects/australia/analitica_py_sw_recolect_info/calculo_excels.py&quot;, line 206, in generate_excels_consumo df_sector = influx_kpis.multiple_query_to_multiple_df(queries) File &quot;/Volumes/DATA/IdeaProjects/australia/analitica_py_sw_recolect_info/analitica_py_lib_conexiones/conexion_influx.py&quot;, line 126, in multiple_query_to_multiple_df resultado = self.__cliente.query(&quot;;&quot;.join(query)) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/influxdb/client.py&quot;, line 450, in query expected_response_code=expected_response_code File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/influxdb/client.py&quot;, line 283, in request timeout=self._timeout File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/requests/sessions.py&quot;, line 533, in request resp = self.send(prep, **send_kwargs) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/requests/sessions.py&quot;, line 686, in send r.content File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/requests/models.py&quot;, line 828, in content self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/requests/models.py&quot;, line 753, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: (&quot;Connection broken: ConnectionResetError(54, 'Connection reset by peer')&quot;, ConnectionResetError(54, 'Connection reset by peer')) </code></pre>
<p>It seems to be a network problem. I faced a similar problem connecting to MongoDB because of a proxy.</p> <p>By default, each container run by Docker has its own network namespace.</p> <p>Some things to try:</p> <ol> <li>Attach to the container with <code>docker exec -it container /bin/bash</code>, then run <code>traceroute &lt;cluster-ip&gt;</code> and check whether you can reach the cluster from inside the container.</li> <li>Check the proxy: run <code>export</code> and check the environment variables related to the proxy.</li> <li><code>curl www.google.com</code> to verify general outbound connectivity.</li> <li>Try to connect to other things in the cluster, for example MySQL. (Once a machine automatically blacklisted me and I spent two days trying to solve a connection problem.)</li> <li>Try the python-slim image (the slim images are Debian-based).</li> </ol> <p>This is a very good document for understanding and solving Docker connection problems:</p> <p><a href="https://pythonspeed.com/articles/docker-connection-refused/" rel="nofollow noreferrer">https://pythonspeed.com/articles/docker-connection-refused/</a></p>
<p>I am following this tutorial: <a href="https://cloud.google.com/sql/docs/mysql/connect-instance-kubernetes" rel="nofollow noreferrer">Connect to Cloud SQL for MySQL from Google Kubernetes Engine</a>. I have created a cluster. I have created a docker image in the repository. I have created a database. I am able to run my application outside of Kubernetes and it connects to the database. But after deploying application, pods are not in a valid state and I see in the logs of the pod error:</p> <pre><code>Caused by: java.lang.RuntimeException: [quizdev:us-central1:my-instance] Failed to update metadata for Cloud SQL instance. ...[na:na] Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden GET https://sqladmin.googleapis.com/sql/v1beta4/projects/quizdev/instances/my-instance/connectSettings { &quot;code&quot;: 403, &quot;details&quot;: [ { &quot;@type&quot;: &quot;type.googleapis.com/google.rpc.ErrorInfo&quot;, &quot;reason&quot;: &quot;ACCESS_TOKEN_SCOPE_INSUFFICIENT&quot;, &quot;domain&quot;: &quot;googleapis.com&quot;, &quot;metadata&quot;: { &quot;service&quot;: &quot;sqladmin.googleapis.com&quot;, &quot;method&quot;: &quot;google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings&quot; } } ], &quot;errors&quot;: [ { &quot;domain&quot;: &quot;global&quot;, &quot;message&quot;: &quot;Insufficient Permission&quot;, &quot;reason&quot;: &quot;insufficientPermissions&quot; } ], &quot;message&quot;: &quot;Request had insufficient authentication scopes.&quot;, &quot;status&quot;: &quot;PERMISSION_DENIED&quot; } at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-2.2.0.jar:2.2.0] ... 2023-06-14T06:57:49.508Z WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata </code></pre> <p>What could be the issue? What can I check to diagnose the problem?</p> <h2>Edit</h2> <p>I have created the cluster using this command:</p> <pre><code> gcloud container clusters create questy-java-cluster \ --num-nodes 2 \ --machine-type n1-standard-1 \ --zone us-central1-c </code></pre>
<p>I'm pretty sure that you created the cluster with the defaults. If you did, you used the Compute Engine default parameters that you can see here:</p> <p><a href="https://i.stack.imgur.com/FKEyE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKEyE.png" alt="enter image description here" /></a></p> <p>That is, the default service account and access scopes. In that case it's normal that you have no access: the minimal scopes do not allow Cloud SQL access.</p> <hr /> <p>To solve that, you have to select either a user-managed service account (the best solution) or keep the default service account but allow full-scope access.</p> <p>There are two ways to enforce that (see the commands below):</p> <ul> <li>Either delete and recreate your cluster correctly,</li> <li>Or create another node pool with the correct parameters.</li> </ul>
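<p>For example (<code>cloud-platform</code> is the broad-scope option; pointing <code>--service-account</code> at a dedicated service account that has the Cloud SQL Client role is the cleaner one):</p> <pre><code># Option 1: recreate the cluster with sufficient scopes
gcloud container clusters create questy-java-cluster \
    --num-nodes 2 \
    --machine-type n1-standard-1 \
    --zone us-central1-c \
    --scopes=cloud-platform

# Option 2: add a node pool with the right scopes and move the workload onto it
gcloud container node-pools create sql-enabled-pool \
    --cluster questy-java-cluster \
    --zone us-central1-c \
    --scopes=cloud-platform
</code></pre>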
<p>Kubernetes documentation says that for mysql pods we need to use Stateful sets in order to avoid "split brain" situations when one pod dies, in other words, to declare one "master" node to which data will be written to, and if that pod dies, elect new master, that's why i want this deployment and service to transfer to stateful set:</p> <pre><code> --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-container spec: replicas: 3 selector: matchLabels: app: mysql-container template: metadata: labels: app: mysql-container spec: containers: - name: mysql-container image: mysql:dev imagePullPolicy: "IfNotPresent" envFrom: - secretRef: name: prod-secrets ports: - containerPort: 3306 # container (pod) path volumeMounts: - name: mysql-persistent-storage mountPath: /data/db # minikube path volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pvc #resources: # requests: # memory: 300Mi # cpu: 400m # limits: # memory: 400Mi # cpu: 500m restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: mysql spec: # Open port 3306 only to pods in cluster selector: app: mysql-container ports: - name: mysql port: 3306 protocol: TCP targetPort: 3306 type: ClusterIP </code></pre> <p>i created stateful set following: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">this guide</a></p> <p>Under containers section i specified environment variables from file, ie. removed</p> <pre><code> env: - name: MYSQL_ALLOW_EMPTY_PASSWORD value: "1" </code></pre> <p>Statefulset:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mysql spec: selector: matchLabels: app: mysql serviceName: mysql replicas: 3 template: metadata: labels: app: mysql spec: initContainers: - name: init-mysql image: mysql:5.7 command: - bash - "-c" - | set -ex # Generate mysql server-id from pod ordinal index. [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} echo [mysqld] &gt; /mnt/conf.d/server-id.cnf # Add an offset to avoid reserved server-id=0 value. echo server-id=$((100 + $ordinal)) &gt;&gt; /mnt/conf.d/server-id.cnf # Copy appropriate conf.d files from config-map to emptyDir. if [[ $ordinal -eq 0 ]]; then cp /mnt/config-map/master.cnf /mnt/conf.d/ else cp /mnt/config-map/slave.cnf /mnt/conf.d/ fi volumeMounts: - name: conf mountPath: /mnt/conf.d - name: config-map mountPath: /mnt/config-map - name: clone-mysql image: gcr.io/google-samples/xtrabackup:1.0 command: - bash - "-c" - | set -ex # Skip the clone if data already exists. [[ -d /var/lib/mysql/mysql ]] &amp;&amp; exit 0 # Skip the clone on master (ordinal index 0). [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} [[ $ordinal -eq 0 ]] &amp;&amp; exit 0 # Clone data from previous peer. ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql # Prepare the backup. 
xtrabackup --prepare --target-dir=/var/lib/mysql volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d containers: - name: mysql image: mysql:dev imagePullPolicy: "IfNotPresent" envFrom: - secretRef: name: prod-secrets ports: - name: mysql containerPort: 3306 volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: #requests: # cpu: 300m # memory: 1Gi livenessProbe: exec: command: ["mysqladmin", "ping"] initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 readinessProbe: exec: # Check we can execute queries over TCP (skip-networking is off). command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"] initialDelaySeconds: 5 periodSeconds: 2 timeoutSeconds: 1 - name: xtrabackup image: gcr.io/google-samples/xtrabackup:1.0 ports: - name: xtrabackup containerPort: 3307 command: - bash - "-c" - | set -ex cd /var/lib/mysql # Determine binlog position of cloned data, if any. if [[ -f xtrabackup_slave_info &amp;&amp; "x$(&lt;xtrabackup_slave_info)" != "x" ]]; then # XtraBackup already generated a partial "CHANGE MASTER TO" query # because we're cloning from an existing slave. (Need to remove the tailing semicolon!) cat xtrabackup_slave_info | sed -E 's/;$//g' &gt; change_master_to.sql.in # Ignore xtrabackup_binlog_info in this case (it's useless). rm -f xtrabackup_slave_info xtrabackup_binlog_info elif [[ -f xtrabackup_binlog_info ]]; then # We're cloning directly from master. Parse binlog position. [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1 rm -f xtrabackup_binlog_info xtrabackup_slave_info echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\ MASTER_LOG_POS=${BASH_REMATCH[2]}" &gt; change_master_to.sql.in fi # Check if we need to complete a clone by starting replication. if [[ -f change_master_to.sql.in ]]; then echo "Waiting for mysqld to be ready (accepting connections)" until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done echo "Initializing replication from clone position" mysql -h 127.0.0.1 \ -e "$(&lt;change_master_to.sql.in), \ MASTER_HOST='mysql-0.mysql', \ MASTER_USER='root', \ MASTER_PASSWORD='', \ MASTER_CONNECT_RETRY=10; \ START SLAVE;" || exit 1 # In case of container restart, attempt this at-most-once. mv change_master_to.sql.in change_master_to.sql.orig fi # Start a server to send backups when requested by peers. exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \ "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root" volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: requests: cpu: 100m memory: 100Mi volumes: - name: conf emptyDir: {} - name: config-map configMap: name: mysql - name: data persistentVolumeClaim: claimName: mysql-pvc </code></pre> <p>Services:</p> <pre><code># Headless service for stable DNS entries of StatefulSet members. apiVersion: v1 kind: Service metadata: name: mysql labels: app: mysql spec: ports: - name: mysql port: 3306 protocol: TCP targetPort: 3306 type: ClusterIP selector: app: mysql --- # Client service for connecting to any MySQL instance for reads. # For writes, you must instead connect to the master: mysql-0.mysql. 
apiVersion: v1 kind: Service metadata: name: mysql-read labels: app: mysql spec: ports: - name: mysql port: 3306 protocol: TCP targetPort: 3306 type: ClusterIP selector: app: mysql </code></pre> <p>I have an env file from which I created the secret:</p> <pre><code>kubectl create secret prod-secrets \ --from-env-file=env.example </code></pre> <p>The problem is that I can't access MySQL (Access denied) from the pods using the credentials specified in the secret; without the StatefulSet, everything works fine. All pods are running and there are no errors in the logs.</p> <p>How do I pass the values from the secret into the StatefulSet?</p> <p>I presume that I need to somehow pass those secrets to the command section but have no idea how; the example from the Kubernetes page assumes credentials are not used.</p> <p>If there is a less complicated way to use a StatefulSet for MySQL, please let me know. Thanks.</p>
<p>In the end I managed to avoid the complications above by using volume claim templates and creating a PV for each pod. Both volumes are synchronized, there are no duplicate entries in the database, and if one node fails the data is preserved.</p> <p>storage.yaml:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: localstorage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate reclaimPolicy: Delete allowVolumeExpansion: True --- kind: PersistentVolume apiVersion: v1 metadata: name: mysql-01 labels: type: local spec: storageClassName: localstorage capacity: storage: 5Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/mysql01" --- kind: PersistentVolume apiVersion: v1 metadata: name: mysql-02 labels: type: local spec: storageClassName: localstorage capacity: storage: 5Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/mysql02" </code></pre> <p>StatefulSet:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mysql-container spec: serviceName: mysql replicas: 2 selector: matchLabels: app: mysql-container template: metadata: labels: app: mysql-container spec: containers: - name: mysql-container image: mysql:dev imagePullPolicy: "IfNotPresent" envFrom: - secretRef: name: prod-secrets ports: - containerPort: 3306 # container (pod) path volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql resources: requests: memory: 300Mi cpu: 400m limits: memory: 400Mi cpu: 500m restartPolicy: Always volumeClaimTemplates: - metadata: name: mysql-persistent-storage spec: storageClassName: localstorage accessModes: ["ReadWriteOnce"] resources: requests: storage: 5Gi selector: matchLabels: type: local </code></pre>
<p>I've been working on a <code>Kubernetes</code> cluster with microservices written in <code>Flask</code> for some time now and I'm not sure if my current method for containerizing them is correct. </p> <p>I've been using <a href="https://github.com/tiangolo/uwsgi-nginx-flask-docker" rel="nofollow noreferrer">this</a> image as a base.</p> <p>But I've been seeing various posts saying that something like that may be a bit overkill.</p> <p>The problem I have is that whenever I look up an article about using <code>Flask</code> with <code>Kubernetes</code>, they always skip over the details of the actual container and focus on building the cluster, which is something I already have a pretty solid handle on. I guess what I'm wondering is whether there's a better way to build a docker image for a single <code>Flask</code> app because it's hard to find a straight answer.</p>
<p>I see your point: you think your Docker image creation method may be wrong.</p> <p>The main idea when building a Docker image is that the image should contain only your dependencies. As you said, finding an answer is hard because we don't know your requirements; maybe your Dockerfile is the only way.</p> <p>I recommend this document. I read it and my Docker images really improved. I think it will help you.</p> <p><a href="https://drive.google.com/file/d/16t_-DRTohzyVPJy6Cx8a3PxLQ-95CfYK/view" rel="nofollow noreferrer">https://drive.google.com/file/d/16t_-DRTohzyVPJy6Cx8a3PxLQ-95CfYK/view</a></p> <p>When I checked your Dockerfile, I noticed one thing I would like to share:</p> <p><a href="https://github.com/tiangolo/uwsgi-nginx-flask-docker/blob/master/docker-images/python3.7.dockerfile#L5" rel="nofollow noreferrer">https://github.com/tiangolo/uwsgi-nginx-flask-docker/blob/master/docker-images/python3.7.dockerfile#L5</a></p> <p>Line 5 should not be done like that. The Dockerfile should not be updated frequently, and each dependency should be versioned.</p> <p>I would build this image like this:</p> <p>Write all dependencies to req.txt with pinned versions.</p> <p>req.txt</p> <pre><code>flask==1.1.1 </code></pre> <p>Dockerfile</p> <pre><code>+ COPY req.txt . + RUN pip install -r req.txt </code></pre> <p>I would also move lines 5 to 28 into a bash script.</p> <blockquote> <p>One important note: my manager doesn't want comment lines in a Dockerfile. She wants to understand it as she reads it, a simple and readable Dockerfile. :) She is wise. You should keep your Dockerfile simple enough to be understood without any comment lines.</p> </blockquote>
<p>Consider this partial k8s deployment manifest for a basic rest application called <code>myapp</code></p> <pre><code>spec: replicas: 1 ... containers: name: myapp image: us.gcr.io/my-org/myapp.3 ... resources: limits: cpu: 1000m memory: 1.5Gi </code></pre> <p>We store incremental builds in Google Container Registry (GCR) eg (<code>myapp.1, myapp.2, myapp.3</code>). In the current setup, our CI system (Jenkins) does the following:</p> <ul> <li>Docker builds new image <code>myapp.4</code> and uploads to GCR</li> <li>Runs <code>kubectl set image myapp.4</code> to update the deployment.</li> </ul> <p>This works well for most deploys, but what about changes to the deployment manifest itself? For example, if we changed <code>resource &gt; cpu</code> to 1500m, we'd now have to manually run <code>kubectl apply</code>. <strong>This step needs to be automated, which brings me to my point:</strong> instead of using <code>kubectl set image</code> couldn't the build system itself just run <code>kubectl apply</code> each time? When is it appropriate to use <code>kubectl set image</code> vs. <code>kubectl apply</code> in a CI/CD pipeline?</p> <p>As long as the new image were provided, wouldn't <code>kubectly apply</code> handle both image updates AND other config changes? If so, what are the pros/cons against just <code>kubcetl set image</code>?</p> <p>PS - our deploys are simple and mainly rely on single replica and 100% uptime is not necessarily required.</p>
<p>With <code>kubectl set image</code>, you only patch the image deployed in your deployment. To patch the other values (CPU, memory, replicas, ...) you can use other commands like <code>kubectl patch</code> or <code>kubectl scale</code>.</p> <p>The problem is that you lose consistency with your original YAML definition. If your cluster crashes and you want to recreate one, you won't have the exact same deployment, because your YAML file will be outdated (the &quot;patches&quot; won't be known).</p> <hr /> <p>With <code>kubectl apply</code> you overwrite the existing control-plane config and set the exact content of your YAML. It's more consistent and it's common practice when you are in GitOps mode.</p> <hr /> <p>Which one to use? It all depends on what you need and what you want to achieve. I prefer the <code>kubectl apply</code> mode for its consistency and replayability, but it's up to you!</p>
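<p>As a sketch, a CI step that keeps the YAML as the single source of truth could render the freshly built tag into the manifest and apply the whole file; the file name and variable are placeholders, and many teams use kustomize or Helm for the templating part instead:</p> <pre><code># Render the new image tag into the manifest, then apply the full manifest
sed &quot;s|image: us.gcr.io/my-org/myapp.*|image: us.gcr.io/my-org/myapp.${BUILD_NUMBER}|&quot; \
    deployment.yaml | kubectl apply -f -
</code></pre>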
<p>I want to create persistent volume for postgres kubernetes pod on windows 10 (<code>C:\Postrges</code>)</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: localstorage provisioner: docker.io/hostpath volumeBindingMode: Immediate reclaimPolicy: Delete allowVolumeExpansion: True --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: database-persistent-volume-claim spec: storageClassName: localstorage accessModes: - ReadWriteOnce resources: requests: storage: 2Gi --- # How do we want it implemented apiVersion: v1 kind: PersistentVolume metadata: name: local-storage spec: storageClassName: localstorage capacity: storage: 2Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete hostPath: path: "/C/postgres" type: DirectoryOrCreate $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-storage 2Gi RWO Delete Available localstorage 4m14s pvc-7e1d810f-1114-4b7c-9160-2b07830c682f 2Gi RWO Delete Bound default/database-persistent-volume-claim localstorage 4m14s $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE database-persistent-volume-claim Bound pvc-7e1d810f-1114-4b7c-9160-2b07830c682f 2Gi RWO localstorage 5m36s </code></pre> <p>No events for PV, for PVC it shows "waiting for a volume to be created, either by external provisioner"</p> <pre><code>kubectl describe pvc Name: database-persistent-volume-claim Namespace: default StorageClass: localstorage Status: Bound Volume: pvc-7e1d810f-1114-4b7c-9160-2b07830c682f Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath Finalizers: [kubernetes.io/pvc-protection] Capacity: 2Gi Access Modes: RWO VolumeMode: Filesystem Mounted By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 7m24s persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator Normal Provisioning 7m24s docker.io/hostpath_storage-provisioner_4bfa0399-b72d-4e03-ae9d-70ee4d8a7c33 External provisioner is provisioning volume for claim "default/database-persistent-volume-claim" Normal ProvisioningSucceeded 7m24s docker.io/hostpath_storage-provisioner_4bfa0399-b72d-4e03-ae9d-70ee4d8a7c33 Successfully provisioned volume pvc-7e1d810f-1114-4b7c-9160-2b07830c682f </code></pre> <p>But in windows explorer i can't see postgres folder on C drive</p> <p>PVC is assigned to postgres pod</p> <pre><code> volumes: - name: postgres-storage persistentVolumeClaim: claimName: database-persistent-volume-claim containers: - name: postgres image: postgres ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql/data subPath: postgres name: postgres-storage </code></pre> <p>Pod has no any errors, ran CMD as Administrator before executing <code>kubectl apply</code>, same configuration works without issues on Linux. Any thougts ?</p> <p><a href="https://i.stack.imgur.com/6K0r1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6K0r1.png" alt="enter image description here"></a></p>
<p>In fact, everything is OK: Docker Desktop doesn't map to Windows local storage but instead allocates space on the VM that is created when Docker Desktop is installed.</p> <p>This VM can be accessed as described in <a href="https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/" rel="nofollow noreferrer">this post</a>.</p> <pre><code>kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE database-persistent-volume-claim Bound pvc-ceb8dfa1-37ca-48fe-b4bc-cc642faac6c4 2Gi RWO localstorage 33m </code></pre> <p>Browsing the Docker Desktop VM: </p> <pre><code>ls /var/lib/k8s-pvs/database-persistent-volume-claim/pvc-ceb8dfa1-37ca-48fe-b4bc-cc642faac6c4/postgres PG_VERSION pg_hba.conf pg_replslot pg_subtrans postgresql.auto.conf base pg_ident.conf pg_serial pg_tblspc postgresql.conf global pg_logical pg_snapshots pg_twophase postmaster.opts pg_commit_ts pg_multixact pg_stat pg_wal pg_dynshmem pg_notify pg_stat_tmp pg_xact </code></pre> <p>When the postgres deployment is deleted, the database files remain:</p> <p><a href="https://i.stack.imgur.com/20728.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/20728.png" alt="enter image description here"></a></p>
<p>Consider the following service:</p> <pre><code>service1.yml apiVersion: v1 kind: Service metadata: name: service1 spec: type: ClusterIP ports: - port: 90 name: port0 targetPort: 40000 selector: app: nginx </code></pre> <p>I apply it as follows: <code>kubectl apply -f service1.yml</code></p> <p>Now I want to change the ports section. I could edit the yml and apply it again, but I prefer to use patch:</p> <pre><code>kubectl patch service service1 -p '{&quot;spec&quot;:{&quot;ports&quot;: [{&quot;port&quot;: 80,&quot;name&quot;:&quot;anotherportspec&quot;}]}}' service/service1 patched </code></pre> <p>but this command adds a new port and keeps the old one:</p> <pre><code>$ kubectl describe svc service1 Name: service1 Namespace: abdelghani Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=nginx Type: ClusterIP IP Families: &lt;none&gt; IP: 10.98.186.21 IPs: &lt;none&gt; Port: anotherportspec 80/TCP TargetPort: 80/TCP Endpoints: 10.39.0.3:80 Port: port0 90/TCP TargetPort: 40000/TCP Endpoints: 10.39.0.3:40000 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>My question: is it possible to change the ports section by replacing the old section with the one passed as a parameter?</p> <p>Thanks</p>
<p>As we've discussed (with @Abdelghani), the problem is in the patch strategy. Using the following command:</p> <p><code>$ kubectl patch svc service1 --type merge -p '{&quot;spec&quot;:{&quot;ports&quot;: [{&quot;port&quot;: 80,&quot;name&quot;:&quot;anotherportspec&quot;}]}}'</code> solves the problem.</p> <p>The flag <code>--type merge</code> makes the patch replace the existing values rather than add to them.</p> <p>Read more: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">kubectl-patch</a>.</p> <p>As an alternative to patching you have a couple of options:</p> <p><strong>1.</strong> Edit your service using the <code>kubectl edit</code> command: in the prompt, <code>$ kubectl edit svc &lt;service_name&gt; -n &lt;namespace&gt;</code></p> <pre><code>i - to edit the service ESC, :wq - update your service </code></pre> <p>Paste the proper port and save the file.</p> <p><strong>2.</strong> You can also manually edit the service config file:</p> <pre><code>vi your-service.yaml </code></pre> <p>update the port number and apply the changes</p> <pre><code>$ kubectl apply -f your-service.yaml </code></pre>
<p>I am trying to run a container image in GCP Cloud Run.</p> <p>Tech Stack: .NET 6, C#, Orleans.NET, ASP.NET</p> <p>My code requires the podIP to work properly.</p> <p>This program works properly in a regular Kubernetes cluster, because I am able to set an environment variable with the assigned podIP with the following configuration in the &quot;env&quot; section of the deployment.yaml file:</p> <pre><code> - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP </code></pre> <p>When I try to provide the same in the Cloud Run yaml, I get: <a href="https://i.stack.imgur.com/dBbp2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dBbp2.png" alt="GCP Cloud Run - YAML Edit" /></a></p> <p>When I dug into the documentation of k8s.io.api.core.v1,</p> <p><a href="https://cloud.google.com/run/docs/reference/rpc/k8s.io.api.core.v1#k8s.io.api.core.v1.EnvVarSource" rel="nofollow noreferrer">GCP Documentation - Kubernetes</a> <a href="https://i.stack.imgur.com/kpki2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kpki2.jpg" alt="GCP Cloud Run - Documentation" /></a></p> <p>I found that it says &quot;Not supported by Cloud Run&quot;.</p> <p>Is there any way to set the podIP in an environment variable on GCP Cloud Run?</p>
<p>Cloud Run is a serverless product. As a serverless product, you don't manage the servers or the network. Therefore, asking for the podIP makes no sense and has no value (at best, it would be a private IP in the Google Cloud serverless network).</p> <p>So, the answer is &quot;NO, you can't get the podIP&quot;.</p> <p>But why do you need it? You can set a dummy value and see what happens with your container, or update your code to make that value optional.</p>
<p>I have a very basic <code>REST API</code> that provides some information, which was written with <code>JAX-RS</code>.</p> <p>Now I want to implement some azure cli commands, like <a href="https://learn.microsoft.com/en-us/cli/azure/acr/repository?view=azure-cli-latest" rel="nofollow noreferrer">az acr repository list</a> as well as kubectl.</p> <p>I found the <a href="https://github.com/Azure/azure-sdk-for-java" rel="nofollow noreferrer">Azure Java SDK</a>, and read its <a href="https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.management.containerregistry?view=azure-java-stable" rel="nofollow noreferrer">API reference</a>, but I couldn't figure out how I will basically list the repositories.</p> <p><a href="https://github.com/Azure-Samples/aks-java-manage-kubernetes-cluster" rel="nofollow noreferrer">Kubernetes</a> example is much better, but I am stuck with Azure Container Registry.</p> <p>Basically I am asking a code sample (reference), or tutorial, or guidance.</p>
<p>In my experience with the Azure docs, it's a little hard to find the API doc that will work for you, and you may find some discrepancies between docs; for example, an API may behave differently via the CLI vs. the SDK.</p> <p>From my work with various clouds, most cloud clients are inclined to use the Python SDKs. 3-4 years ago I used Java for OpenStack, now Python. For vCloud I used Java but am now moving to Python; in fact, they stopped providing a Java SDK.</p> <p>Hope that gives you some pointers.</p>
<p>I am trying to follow the instruction of AWS to create an ALB for EKS (Elastic K8s Services in AWS). The instruction is here: <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html</a><br /> I have problems at step 7 (Install the controller manually). When I try to apply a yaml file in substep 7-b-c, I get an error:</p> <pre><code>Error from server (InternalError): error when creating &quot;v2_0_0_full.yaml&quot;: Internal error occurred: failed calling webhook &quot;webhook.cert-manager.io&quot;: Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: x509: certificate is valid for ip-192-168-121-78.eu-central-1.compute.internal, not cert-manager-webhook.cert-manager.svc </code></pre> <p>Has anyone experienced similar kind of problem and what are the best ways to troubleshoot and solve the problem?</p>
<p>It seems that <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager</a> doesn't run on Fargate as expected - <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1606" rel="nofollow noreferrer">#1606</a>.</p> <p>The first workaround is to install the Helm chart, which doesn't have the cert-manager dependency. Helm will generate the self-signed cert and secret resources (see the sketch below).</p> <p>A different option is to remove all the cert-manager stuff from the YAML manifest and provide your own self-signed certificate if you don't want Helm as a dependency.</p> <p>Take a look: <a href="https://www.gitmemory.com/issue/kubernetes-sigs/aws-load-balancer-controller/1563/726859452" rel="nofollow noreferrer">alb-cert-manager</a>, <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1563" rel="nofollow noreferrer">alb-eks-cert-manager</a>.</p> <p>Useful article: <a href="https://medium.com/@yspreen/get-started-with-fargate-on-aws-https-ingress-991a09020cc0" rel="nofollow noreferrer">aws-fargate</a>.</p>
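<p>The Helm install is roughly the following; the chart values can differ per version, and on Fargate you typically also have to pass the cluster's region and VPC ID:</p> <pre><code>helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=&lt;your-cluster-name&gt; \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
</code></pre>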
<p>I use k8s to run large compute Jobs (to completion). There are not many jobs nor nodes in my setup so scale is small.</p> <p>I noticed that the scheduler do not start a pod immediately when a node is available. It takes between 5 to 40 seconds for a pod to be scheduled once node is ready. </p> <p>Is there a way to make the scheduler more "aggressive"? I cant find a setting for the interval in Default Scheduler custom policy. Is implementing my own scheduler the only way forward? Any tips appreciated!</p>
<p>There is a difference between pod scheduling and pod creation. The scheduler only finds a suitable node and schedules the pod to that node; pod creation is done by the kubelet.</p> <p>The kubelet polls the api-server for the desired state, gets the newly scheduled pod spec, and then creates the pod.</p> <p>So this process can take the amount of time you specified in the question.</p> <p>So I don't think writing a custom scheduler will help here.</p>
<p>I have a Kubernetes-orchestraded infrastructure that is served on an AWS-hosted cluster. I'd like to have routines that would allow me to spawn similar infrastructures. The difference between the original infrastructure and the newly-spawned ones would mainly be the DNS's used and the images that I would serve.</p> <p>My question is: What is the most appropriate place for this similar-infrastructure-spawning code to reside: Kubernetes ? My CI/CD tool, Drone? Some other DevOps stack component of which I'm not even aware?</p>
<p>Have you ever thought about Infrastructure-as-Code tools?</p> <p>You can describe your infrastructure directly as code with tools like:</p> <ul> <li>CloudFormation (AWS)</li> <li>Terraform (multi-provider)</li> <li>Ansible</li> <li>...</li> </ul> <p>You will then be able to configure all of the provider's services (not only your cluster), deploy everything with a single command, and parameterize it.</p> <p>Otherwise you can also use a tool like kops (<a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">https://github.com/kubernetes/kops</a>) to automate the deployment of your K8s cluster.</p> <p>Once you choose the right tool, you can keep the definitions in a Git repository.</p>
<p>I have a python pod running. This python pod is using different shared libraries. To make it easier to debug the shared libraries I would like to have the libraries directory on my host too. The python dependencies are located in <code>/usr/local/lib/python3.8/site-packages/</code> and I would like to access this directory on my host to modify some files.</p> <p>Is that possible and if so, how? I have tried with <code>emptyDir</code> and <code>PV</code> but they always override what already exists in the directory.</p> <p>Thank you!</p>
<p>This is by design. Kubelet is responsible for preparing the mounts for your container. At the time of mounting they are empty, and kubelet has no reason to put any content in them.</p> <p>That said, there are ways to achieve what you seem to expect by using an init container. In your pod you define an init container using your docker image, mount your volume in it at some path (i.e. <em>/target</em>), but instead of running the regular content of your container, run something like</p> <pre><code>cp -r /my/dir/* /target/ </code></pre> <p>which will initialize your directory with the expected content and exit, allowing the pod to continue starting up. Please take a look: <a href="https://stackoverflow.com/questions/51739517/how-to-avoid-override-the-container-directory-when-using-pvc-in-kubernetes">overriding-directory</a>.</p> <p>Another option is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a>. Subpath references files or directories that are controlled by the user, not the system. Take a look at this example of how to mount a single file into an existing directory:</p> <pre><code>---
        volumeMounts:
        - name: &quot;config&quot;
          mountPath: &quot;/&lt;existing folder&gt;/&lt;file1&gt;&quot;
          subPath: &quot;&lt;file1&gt;&quot;
        - name: &quot;config&quot;
          mountPath: &quot;/&lt;existing folder&gt;/&lt;file2&gt;&quot;
          subPath: &quot;&lt;file2&gt;&quot;
      restartPolicy: Always
      volumes:
        - name: &quot;config&quot;
          configMap:
            name: &quot;config&quot;
---
</code></pre> <p>Check the full example <a href="https://gist.github.com/tuannvm/0fc6e94a3759c91b1abe71c149152f77" rel="nofollow noreferrer">here</a>. See: <a href="https://stackoverflow.com/questions/39751421/mountpath-overrides-the-rest-of-the-files-in-that-same-folder">mountpath</a>, <a href="https://stackoverflow.com/questions/33415913/whats-the-best-way-to-share-mount-one-file-into-a-pod">files-in-folder-overriding</a>.</p> <p>You can also, as @DavidMaze said, debug your setup in a non-container Python virtual environment if you can, or as a second choice debug the image in Docker without Kubernetes.</p>
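<p>To make the init-container approach concrete for the Python case from the question, here is a minimal sketch (the image name and the host directory are placeholders; it assumes a single node or a pod pinned to a specific node, since <code>hostPath</code> is node-local):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: python-debug
spec:
  volumes:
    - name: site-packages
      hostPath:
        path: /opt/debug/site-packages    # placeholder directory on the node
        type: DirectoryOrCreate
  initContainers:
    - name: seed-site-packages
      image: my-python-app:latest         # placeholder: your application image
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;cp -r /usr/local/lib/python3.8/site-packages/* /target/&quot;]
      volumeMounts:
        - name: site-packages
          mountPath: /target
  containers:
    - name: app
      image: my-python-app:latest         # placeholder: your application image
      volumeMounts:
        # the main container now sees (and you can edit) the seeded copy
        - name: site-packages
          mountPath: /usr/local/lib/python3.8/site-packages
</code></pre> <p>After the first start, the node directory contains the packages; edits you make there are immediately visible in the pod's filesystem, although the Python process still needs a restart to pick them up.</p>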
<p>In my project I have to create a kubernetes cluster on GCP with an External Load Balancer service for my django app. I create it with this <code>yaml</code> file:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mydjango
  namespace: test1
  labels:
    app: mydjango
spec:
  ports:
    - name: http
      port: 8000
      targetPort: 8000
  selector:
    app: mydjango
  type: LoadBalancer
</code></pre> <p>I apply it and everything works on my cluster, except for the fact that kubernetes creates the Load Balancer using <code>http</code>.</p> <p>How can I modify my <code>yaml</code> to create the same Load Balancer using <code>https</code> instead of <code>http</code>, using my Google-managed certs?</p> <p>So many thanks in advance Manuel</p>
<p>If you want to serve HTTPS, you need a certificate. For that, you can follow this documentation on <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Google-managed certificates</a>.</p> <p>You also have to define an Ingress to route the traffic; on GKE the ingress controller then provisions the external HTTPS load balancer for you, instead of the <code>LoadBalancer</code> service you have now.</p>
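<p>A rough sketch of what that can look like for the service from the question (the domain is a placeholder you must own, and the backing service has to be of type <code>NodePort</code> for the GKE ingress controller rather than <code>LoadBalancer</code>):</p> <pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: mydjango-cert
  namespace: test1
spec:
  domains:
    - example.com                  # placeholder domain pointing at the ingress IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mydjango-ingress
  namespace: test1
  annotations:
    networking.gke.io/managed-certificates: mydjango-cert
    kubernetes.io/ingress.class: &quot;gce&quot;
spec:
  defaultBackend:
    service:
      name: mydjango
      port:
        number: 8000
</code></pre>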
<p>We are deploying the Cassandra docker image 3.10 via k8s as a StatefulSet.</p> <p>I tried to set the GC to G1GC by adding <code>-XX:+UseG1GC</code> to the JAVA_OPTS environment variable, but Cassandra is using the default CMS GC as set in the jvm.opts file.</p> <p>From running <code>ps aux</code> in the pod, I get the Cassandra configuration:</p> <pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
cassand+ 1 28.0 10.1 72547644 6248956 ? Ssl Jan28 418:43 java -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=1000003 -XX:+AlwaysPreTouch -XX:-UseBiasedLocking -XX:+UseTLAB -XX:+ResizeTLAB -XX:+UseNUMA -XX:+PerfDisableSharedMem -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSWaitDuration=10000 -XX:+CMSParallelInitialMarkEnabled -XX:+CMSEdenChunksRecordAlways -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xms2G -Xmx2G -Xmn1G -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -Dcassandra.jmx.remote.port=7199 -Dcom.sun.management.jmxremote.rmi.port=7199 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password -Djava.library.path=/usr/share/cassandra/lib/sigar-bin -javaagent:/usr/share/cassandra/jmx_prometheus_javaagent-0.10.jar=7070:/etc/cassandra/jmx_prometheus_cassandra.yaml </code></pre> <p>There is no <code>-XX:+UseG1GC</code> property.</p> <p>Is there a way to override the jvm.opts at runtime, so I don't have to build the image for every small change? Or must I add a custom jvm.opts file to the docker image I'm building?</p>
<p>The best option here is a ConfigMap. You can create a ConfigMap for that file so that the jvm.opts file can be managed from outside the image; you can then change the configuration as many times as you want without rebuilding the image for every small change.</p> <p>For more details refer to: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files</a></p>
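<p>A minimal sketch of that approach (the ConfigMap name, the file name <code>jvm.options</code> and the mount path are assumptions; check where your Cassandra image actually reads its JVM options from). First create the ConfigMap from your locally edited file with <code>kubectl create configmap cassandra-jvm-options --from-file=jvm.options</code>, then mount it over the file in the StatefulSet pod template:</p> <pre><code>        volumeMounts:
          - name: jvm-options
            mountPath: /etc/cassandra/jvm.options   # assumed location of the options file
            subPath: jvm.options
      volumes:
        - name: jvm-options
          configMap:
            name: cassandra-jvm-options
</code></pre> <p>After editing the ConfigMap you still need to restart the pods for the JVM to pick up the new flags, but no image rebuild is required.</p>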
<p>My project needs to deploy a <code>PV</code> on the current master node, so this involves a <strong>resource scheduling</strong> problem. I use hostPath, so I must pin the scheduling to the <code>current node</code>.</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-redis-0
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage-redis
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /home/lab/redis/0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
</code></pre> <p>But I need a <code>common way</code> to implement this method for any cluster. What I'm thinking of is getting the nodeName for the current node, which by default is the same as the hostname and can be obtained with the shell's <code>hostname</code>.</p> <p>But I want to do it in a more generic way to prevent it from being overridden by <code>--hostname-override</code>.</p> <p>I would also like to be able to make the <strong>host directory</strong> settings more generic, but k8s does not seem to support <code>relative paths</code> or a <code>${PWD}</code> setting.</p> <p>I would appreciate it if you could tell me how to solve these two problems, which concern <strong>resource scheduling and mount directory</strong> questions.</p>
<p><code>envsubst &lt; redis.yaml | kubectl apply -f -</code></p> <pre><code># redis.yaml
apiVersion: apps/v1
kind: List
items:
  - kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: local-storage-redis
      namespace: test-system
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
  - apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-redis-0
      namespace: test-system
    spec:
      capacity:
        storage: 8Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      storageClassName: local-storage-redis
      persistentVolumeReclaimPolicy: Retain
      local:
        path: ${PWD}/redis-0
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - ${HOST_NAME}
</code></pre> <p>That's how I did it.</p>
<p>Some secrets need to be fetched by the pods. The secrets are stored in GCP Secret Manager. What is a secure and efficient way to fetch the secrets from within a pod?</p> <p>Thank you!</p>
<p>There isn't a native integration between Kubernetes Secrets and Google Secret Manager. As described in <a href="https://cloud.google.com/secret-manager/docs/using-other-products#google-kubernetes-engine" rel="nofollow noreferrer">the documentation</a>, the best solution is to use the Secret Manager client library to interact with Secret Manager and especially to access the secrets.</p> <p>From a security point of view, using Workload Identity is also the best solution: it lets you use a specific service account for your deployment and enforce the least-privilege principle.</p> <p>If you don't do that, you will use, by default, the node service account (the one on the Compute Engine instance) and you will have to grant this service account access to the secrets. And, because it's the NODE identity, all the pods running on those nodes will have the same permissions and will be allowed to access the secrets!</p>
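<p>As an illustration of the Workload Identity part only (the project, namespace and service-account names are placeholders): the Kubernetes service account used by the pod is annotated with a Google service account that has the <code>roles/secretmanager.secretAccessor</code> role, and the pod then calls Secret Manager with that identity.</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-ksa                 # placeholder KSA referenced by the pod spec
  namespace: my-namespace
  annotations:
    # placeholder GSA that is allowed to access the secrets
    iam.gke.io/gcp-service-account: my-app-gsa@my-project.iam.gserviceaccount.com
</code></pre> <p>The Google service account must also allow the Kubernetes service account to impersonate it, which is done with <code>gcloud iam service-accounts add-iam-policy-binding ... --role roles/iam.workloadIdentityUser</code>.</p>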
<p>Hi I am getting following error</p> <p><code>ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=Failed precondition when calling the ServiceConsumerManager: tenantmanager::185014: Consumer 370525719069 should enable service:container.googleapis.com before generating a service account</code>.</p> <p>can someone help me ?</p>
<p>You should enable the API service <code>container.googleapis.com</code>; execute the command:</p> <pre><code>$ gcloud services enable container.googleapis.com </code></pre> <p>Please take a look at this introduction, especially: <a href="https://codelabs.developers.google.com/codelabs/cloud-deploy-website-on-gke#2" rel="noreferrer">environment setup and cluster creation on GKE</a>.</p> <p>See more: <a href="https://cloud.google.com/service-usage/docs/enable-disable" rel="noreferrer">gcp-enabling-disabling-services</a>, <a href="https://cloud.google.com/iam/docs/troubleshooting-access#gcloud" rel="noreferrer">troubleshooting-enabling-gcp-services</a>, <a href="https://stackoverflow.com/questions/62751262/gcloud-console-cant-view-gke-resources">enabling-gcp-services-example</a>.</p> <p><strong>Another option:</strong></p> <p>It is hard to answer due to the lack of important additional information (e.g. the environment), but such an error may indicate that you are working in some kind of multi-tenant environment. Make sure that you have the proper rights to create new clusters:</p> <blockquote> <p>Assign roles using IAM<br /> You can control access to Google Cloud resources through IAM policies. Start by identifying the groups needed for your organization and their scope of operations, then assign the appropriate IAM role to the group. Use Google Groups to efficiently assign and manage IAM for users.</p> </blockquote> <p>See: <a href="https://www.google.com/url?q=https://cloud.google.com/kubernetes-engine/docs/best-practices/enterprise-multitenancy%23assign-iam-roles&amp;sa=D&amp;source=hangouts&amp;ust=1603872827370000&amp;usg=AFQjCNEfcBMv755zdpSipKsHPuH8hg_rwA" rel="noreferrer">enterprise-multitenancy-roles</a>.</p> <p>Also take a look at the best practices on how to set up a <a href="https://cloud.google.com/kubernetes-engine/docs/best-practices/enterprise-multitenancy" rel="noreferrer">multi-tenant-cluster-gke-enterprise</a>.</p>
<p>What are the features of the open-source Ambassador API Gateway, and what are the features of the enterprise version of the Ambassador API Gateway?</p>
<p>The open source version of the Ambassador API Gateway is essentially the same product as the API Gateway included with the enterprise offering, Edge Stack. The Edge Stack offered by Ambassador Labs is bundled with their additional products (i.e. Service Preview and the Developer Portal). It also offers additional functionality on top of the API gateway to extend its capabilities, such as authentication integrations. Enterprise support is also included with the Edge Stack product. I suggest reading the Ambassador Labs docs or contacting them directly for the latest info.</p> <p><a href="https://www.getambassador.io/editions/" rel="nofollow noreferrer">https://www.getambassador.io/editions/</a></p>
<p>I used to work with an Openshift/OKD cluster deployed in AWS, and there it was possible to connect the cluster to some domain name from Route53. Then, as soon as I deployed an ingress with some host mappings (and the hosts defined in the ingress were subdomains of the base domain), all the necessary LB rules (Routes in Openshift) and the subdomain itself were created by Openshift and were directly available. For example: Openshift is connected to the domain &quot;somedomain.com&quot;, which is registered in Route53. In the ingress I have a host mapping like:</p> <pre><code>  hosts:
    - host: sub1.somedomain.com
      paths:
        - path
</code></pre> <p>After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE? So far I have only seen mapping to a static IP.</p> <p>Also, I read here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2</a> that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In Openshift it was not required; any normal ClusterIP service could be connected to an ingress.</p> <p>Thanks in advance!</p>
<p>The type of service exposure depends on the K8s implementation of each cloud provider.</p> <ul> <li>If the ingress controller is a component inside your cluster, a ClusterIP is enough to have your service reachable (internally, from inside the cluster itself).</li> <li>If the ingress definition configures an external element (in the case of GKE, a load balancer), this element isn't part of the cluster and can't reach the ClusterIP (because it is only accessible internally). A NodePort is required in this case.</li> </ul> <p>So, in your case, either you expose your service as a NodePort, or you configure GKE with another ingress controller, installed locally in the cluster, instead of the default one.</p>
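<p>To illustrate the NodePort route with the default GKE ingress controller (names, host and ports are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort                  # required by the external GKE load balancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
</code></pre> <p>The DNS record for <code>sub1.somedomain.com</code> still has to be created separately (manually or with something like external-dns), pointing at the IP the ingress receives.</p>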
<p>I am attempting to set up a single node Kubernetes cluster for testing on an Ubuntu Server 18.04.2 VM running on my Windows Hyper-V. I install Docker with the standard <code>apt install docker.io</code> command, I initialize the Kubernetes cluster with the command <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code>, then add the Flannel CNI with <code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code>, and then install rabbitmq-ha using its &quot;stable&quot; Helm chart.</p> <p>If the VM is using Hyper-V’s “Default Switch”, then everything works as expected. I also have an “Extra Switch” configured as an “External network” with “Allow management operating system to share this network adapter” checked, pointing to my wireless adapter. When I am in the office, with the Ubuntu VM set to use this “Extra Switch”, the IP address of my laptop and the Ubuntu VM (eth0) is set to 10.X.X.X, and everything works as expected.</p> <p>When I am at home, with the Ubuntu VM set to use this same “Extra Switch”, the IP address of my laptop and the Ubuntu VM (eth0) is set to 192.168.X.X, and the rabbitmq pod throws errors and continuously restarts, with the logged error message:</p> <p><code>Failed to get nodes from k8s - {failed_connect,[{to_address,{&quot;kubernetes.default.svc.cluster.local&quot;,443}}, {inet,[inet],nxdomain}]}</code></p> <p>Doing some google research tells me that the rabbitmq pod cannot resolve the name &quot;kubernetes.default.svc.cluster.local&quot;.</p> <p>Adding the “apiserver-advertise-address” to kubeadm init, set to my laptop’s 192.168.X.X IP address, does not make a difference.</p> <p>I also tried using Weave Works instead of Flannel, and omitting the CIDR in the ‘kubeadm init’ command, but I get the same rabbitmq error at home on the &quot;Extra Switch&quot;, while it works on the home Default Switch and on the office network.</p> <p>So what am I missing? Is there an Ubuntu network setting I need when running under Hyper-V and an external switch? Or is there a Kubernetes or Docker configuration setting that I am missing? Thanks!</p>
<p>As @Steve (the OP) solved the problem, the solution is:</p> <p>First of all, the newer version of the RabbitMQ Helm chart does not have this problem. But if you want to work around the &quot;bad&quot; name resolution, you can add a DNS rewrite rule to the CoreDNS ConfigMap, see <a href="https://stackoverflow.com/q/63895949/11057678">configmap-coredns</a>.</p> <p>Take a look at <a href="https://github.com/coredns/coredns/issues/4125" rel="nofollow noreferrer">coredns-issue</a> for a full description of the problem and how it was narrowed down to the particular DNS name in the chain that needed the &quot;rewrite&quot; rule.</p>
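<p>For reference, such a rewrite rule goes into the <code>coredns</code> ConfigMap in <code>kube-system</code>. A heavily simplified sketch, assuming the failing lookups end up with your home network's search domain appended (replace <code>home.lan</code> and the rest of the Corefile with what your cluster actually uses):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Map the wrongly-suffixed name back to the in-cluster API server name
        rewrite name kubernetes.default.svc.cluster.local.home.lan kubernetes.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }
</code></pre>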
<p>Is there a way to make a volume <code>ReadWriteMany</code> instead of having an NFS share? Does GCP support <code>ReadWriteMany</code> volumes? I'm right now using an NFS share and a persistent volume pointing to it, which is then mounted into pods. But this has an issue.</p> <p>When the NFS share crashes or becomes unreachable, all the pods that use this NFS persistent volume hang. We can't even terminate the pods until the NFS issue is fixed. The pods don't crash either. We just had this issue: the pods were not responding for hours and terminating a pod got stuck in the <code>terminating</code> state. Once the NFS server issue was fixed, we could delete the pods and recreate them.</p> <p>What can be done instead of having an NFS share?</p>
<p>With Kubernetes, you need an NFS-like persistent volume for <code>ReadWriteMany</code>. On Google Cloud, <a href="https://cloud.google.com/filestore" rel="nofollow noreferrer">Filestore</a> is the product to achieve that. There is <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">documentation</a> that shows you how to mount it into your pod.</p>
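<p>A rough sketch of wiring a Filestore share in as a <code>ReadWriteMany</code> volume (the IP address, share name and size are placeholders taken from your Filestore instance):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2      # Filestore instance IP (placeholder)
    path: /vol1           # Filestore share name (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: &quot;&quot;
  volumeName: filestore-pv
  resources:
    requests:
      storage: 1Ti
</code></pre> <p>Filestore is still NFS under the hood, but as a managed service it removes the burden of running and keeping the NFS server itself alive.</p>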
<p>I have a remote privately managed Kubernetes cluster that I reach by going via an intermediary VM. To use kubectl from my machine I have setup an SSH tunnel that hops onto my VM and then onto my master node - this works fine.</p> <p>I am trying to configure Telepresence (<a href="https://www.telepresence.io/" rel="nofollow noreferrer">https://www.telepresence.io/</a>) which attempts to start up (correctly detecting that kubectl works) but then fails due to a timeout.</p> <pre><code>subprocess.TimeoutExpired: Command '['ssh', '-F', '/dev/null', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-q', '-p', '65367', '[email protected]', '/bin/true']' timed out after 5 seconds </code></pre> <p>Is this a setup that telepresence should support or is the presence of an intermediary VM going to be a roadblock for me?</p>
<p>Telepresence 2 should support this better as it installs a sidecar container that makes it more resilient to interrupted connections. I would give the new version a try to see if you're still seeing timeout errors.</p> <p><a href="https://www.getambassador.io/docs/latest/telepresence/quick-start/" rel="nofollow noreferrer">https://www.getambassador.io/docs/latest/telepresence/quick-start/</a></p>
<p>We deploy a laravel project in k8s (GCP) with a MySQL database. Now I want periodic backups of this database with the help of a CronJob. I followed an <a href="https://medium.com/searce/cronjob-to-backup-mysql-on-gke-23bb706d9bbf" rel="nofollow noreferrer">article</a>, but I'm unable to create a backup file. As per the article, we need to create the storage bucket and service account in <a href="https://support.google.com/a/answer/7378726?hl=en" rel="nofollow noreferrer">GCP</a>.</p> <p>The CronJob runs, but still there is no backup file in the storage bucket.</p> <p>cronjob.yaml file</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup-container
              image: gcr.io/thereport/abcd
              env:
                - name: DB_NAME
                  valueFrom:
                    configMapKeyRef:
                      name: backup-configmap
                      key: db
                - name: GCS_BUCKET
                  valueFrom:
                    configMapKeyRef:
                      name: backup-configmap
                      key: gcs-bucket
                - name: DB_HOST
                  valueFrom:
                    secretKeyRef:
                      name: backup
                      key: db_host
                - name: DB_USER
                  valueFrom:
                    secretKeyRef:
                      name: backup
                      key: username
                - name: DB_PASS
                  valueFrom:
                    secretKeyRef:
                      name: backup
                      key: password
                - name: GCS_SA
                  valueFrom:
                    secretKeyRef:
                      name: backup
                      key: thereport-541be75e66dd.json
              args:
                - /bin/bash
                - -c
                - mysqldump --u root --p"root" homestead > trydata.sql; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp /trydata.sql gs://backup-buck
          restartPolicy: OnFailure
</code></pre>
<p>You aren't copying the right file: the file written by <code>mysqldump</code> and the file passed to <code>gsutil cp</code> are not the same path:</p> <blockquote> <p>mysqldump --u root --p"root" homestead > <strong>trydata.sql</strong>; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp <strong>/laravel.sql</strong> gs://backup-buck</p> </blockquote>
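<p>A hedged sketch of a corrected <code>args</code> block: the important part is that the dump is written to, and uploaded from, the same absolute path (the key file path and project name are placeholders; the environment variables come from the CronJob in the question):</p> <pre><code>args:
  - /bin/bash
  - -c
  - |
    # Write the dump to a known path and upload that exact same file
    mysqldump -u &quot;$DB_USER&quot; -p&quot;$DB_PASS&quot; -h &quot;$DB_HOST&quot; &quot;$DB_NAME&quot; &gt; /tmp/backup.sql
    gcloud auth activate-service-account --key-file=/path/to/key.json
    gcloud config set project my-project
    gsutil cp /tmp/backup.sql gs://&quot;$GCS_BUCKET&quot;/
</code></pre>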
<p>I have built 2 Ignite pods on a single-VM Kubernetes setup, and when I check the state everything looks fine, but it isn't working on the Kubernetes cluster.</p> <p>I set the discovery IP finder as you can see below. Can you suggest why, and what can be done?</p> <pre><code>        &lt;!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. --&gt;
        &lt;property name=&quot;discoverySpi&quot;&gt;
          &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi&quot;&gt;
            &lt;property name=&quot;ipFinder&quot;&gt;
              &lt;!-- Enables Kubernetes IP finder and setting custom namespace and service names. --&gt;
              &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt;
                &lt;property name=&quot;namespace&quot; value=&quot;production&quot;/&gt;
                &lt;property name=&quot;serviceName&quot; value=&quot;{{ include &quot;ignite.fullname&quot; . }}&quot;/&gt;
              &lt;/bean&gt;
            &lt;/property&gt;
          &lt;/bean&gt;
        &lt;/property&gt;
      &lt;/bean&gt;
</code></pre> <p>Shared logs and status below.</p> <pre><code>Cluster ID: 0d910f16-fc9e-4837-b502-eecd100c530e
Cluster tag: loving_lovelace
--------------------------------------------------------------------------------
Cluster is active
Command [STATE] finished with code: 0
Control utility has completed execution at: 2020-12-17T08:20:26.317
</code></pre> <p>However, when I deploy the exact same chart on the cluster, I receive the following errors:</p> <pre><code>Failed to activate cluster.
Connection to cluster failed. Latest topology update failed.
Execution time: 33284 ms
command terminated with exit code 2
</code></pre> <p>and in the logs:</p> <pre><code>Dec 17, 2020 10:14:55 AM org.apache.ignite.logger.java.JavaLogger error SEVERE: Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries). class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses. 
at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:170) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1965) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1913) at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1277) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1105) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300) Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666) at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at sun.net.www.protocol.https.HttpsClient.&lt;init&gt;(HttpsClient.java:264) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177) at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:158) ... 
20 more Dec 17, 2020 10:19:55 AM org.apache.ignite.logger.java.JavaLogger error SEVERE: Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries). class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:170) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1965) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1913) at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1277) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1105) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300) Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666) at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at sun.net.www.protocol.https.HttpsClient.&lt;init&gt;(HttpsClient.java:264) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177) at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:158) ... 20 more Dec 17, 2020 10:24:55 AM org.apache.ignite.logger.java.JavaLogger error SEVERE: Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries). class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:170) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1965) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1913) at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1277) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1105) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300) Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666) at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at sun.net.www.protocol.https.HttpsClient.&lt;init&gt;(HttpsClient.java:264) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177) 
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:158) ... 20 more Dec 17, 2020 10:29:56 AM org.apache.ignite.logger.java.JavaLogger error SEVERE: Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries). class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:170) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1965) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1913) at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1277) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1105) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300) Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666) at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at sun.net.www.protocol.https.HttpsClient.&lt;init&gt;(HttpsClient.java:264) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177) at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:158) ... 20 more Dec 17, 2020 10:34:56 AM org.apache.ignite.logger.java.JavaLogger error SEVERE: Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries). class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:170) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1965) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1913) at org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1277) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1105) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:462) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2120) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:967) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1935) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1298) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2046) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1698) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1114) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1032) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:918) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:817) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:687) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:656) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300) Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666) at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at sun.net.www.protocol.https.HttpsClient.&lt;init&gt;(HttpsClient.java:264) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191) at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177) at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:158) ... 20 more </code></pre> <p>Service -</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{ include &quot;ignite.fullname&quot; . }} labels: app: {{ include &quot;ignite.fullname&quot; . }} spec: ports: - name: jdbc port: 11211 targetPort: 11211 - name: spi-communication port: 47100 targetPort: 47100 - name: spi-discovery port: 47500 targetPort: 47500 - name: jmx port: 49112 targetPort: 49112 - name: sql port: 10800 targetPort: 10800 - name: rest port: 8080 targetPort: 8080 - name: thin-clients port: 10900 targetPort: 10900 selector: app: {{ include &quot;ignite.fullname&quot; . }} sessionAffinity: None type: clusterIP </code></pre>
<p>Well, from the error, it seems that the default K8s service is not reachable or probably doesn't exist:</p> <pre><code>Caused by: java.net.UnknownHostException: kubernetes.default.svc.cluster.local </code></pre> <p>Please check the following documentation: <a href="https://apacheignite.readme.io/docs/kubernetes-ip-finder" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/kubernetes-ip-finder</a>. By default the Kubernetes IP finder tries to reach <code>kubernetes.default.svc.cluster.local:443</code>, which might not be resolvable in your case. I think you need to check your URL and adjust the parameter:</p> <blockquote> <p>setMasterUrl(String) Sets the host name of the Kubernetes API server. <a href="https://kubernetes.default.svc.cluster.local:443" rel="nofollow noreferrer">https://kubernetes.default.svc.cluster.local:443</a></p> </blockquote> <p>Could you also add some notes about your deployment or k8s version to the question?</p>
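<p>For reference, a sketch of what overriding that URL looks like in the Spring XML configuration (the master URL and service name values are placeholders; the property is only needed if the default in-cluster address really isn't resolvable from the pods):</p> <pre><code>&lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt;
    &lt;property name=&quot;namespace&quot; value=&quot;production&quot;/&gt;
    &lt;property name=&quot;serviceName&quot; value=&quot;my-ignite-service&quot;/&gt;
    &lt;!-- Override the default https://kubernetes.default.svc.cluster.local:443 --&gt;
    &lt;property name=&quot;masterUrl&quot; value=&quot;https://my-apiserver-address:443&quot;/&gt;
&lt;/bean&gt;
</code></pre>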
<p>Hi folks,</p> <p>I'm currently trying to set up a livenessProbe and readinessProbe. After starting the container I try to create an empty file with <em>touch /tmp/healthy</em>. However, I don't know why it can't create it. I looked at the Kubernetes site and at examples on the internet, and from what I can see everything seems correct.</p> <p>The liveness probe sends me back this answer: <strong>Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory</strong></p> <p>Thank you for your help</p> <p><a href="https://gist.github.com/zyriuse75/5c79f7f96e4a6deb7b79753bde688663" rel="nofollow noreferrer">https://gist.github.com/zyriuse75/5c79f7f96e4a6deb7b79753bde688663</a></p>
<p>I think you should put <code>/bin/sh</code> in the command field and <code>-c touch /tmp/healthy</code> in the args field, as below:</p> <pre><code>command: ["/bin/sh"]
args: ["-c", "touch /tmp/healthy"]
</code></pre>
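<p>For reference, the pattern from the Kubernetes docs keeps the container running after touching the file, so the probe has something to check over time (a minimal sketch using busybox):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: busybox
      command: ["/bin/sh"]
      # Create the file, keep it for 30s, remove it, then keep the container alive
      args: ["-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 5
        periodSeconds: 5
</code></pre>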
<p>I am new to Python and kubernetes. I am trying to deploy a docker container in k8s pod on GCP and after an hr or so it is killed. Below is the docker file and the script that I am trying to execute.</p> <pre><code>FROM python:3.7-slim AS build WORKDIR /app COPY . . RUN pip3 install pymysql &amp;&amp; pip3 install waitress ENV DICT_FILE=&quot;/opt/srv/projects.dict&quot; \ MODEL_FILE=&quot;/opt/srv/projects.model.cpickle&quot; \ INDEX_FILE=&quot;/opt/srv/projects.index&quot; \ EXTERNAL_INDEX_FILE=&quot;/opt/srv/projects.mm.metadata.cpickle&quot; EXPOSE 5000 EXPOSE 3306 ENTRYPOINT [&quot;/bin/sh&quot;, &quot;serve.sh&quot;] </code></pre> <p>serve.sh</p> <pre><code>#!/bin/sh mkdir -p /opt/srv python3 setup.py bdist_wheel pip3 install dist/app_search*.whl &amp;&amp; semanticsearch-preprocess cp /app/dist/app_search*.whl /opt/srv/ cd /opt/srv pip3 install app_search*.whl waitress-serve --call app_search.app:main </code></pre> <p>The last log I see before the crash is</p> <blockquote> <p>Successfully installed Flask-1.1.2 Jinja2-2.11.2 MarkupSafe-1.1.1 Werkzeug-0.16.1 aniso8601-8.0.0 attrs-20.2.0 certifi-2020.6.20 chardet-3.0.4 click-7.1.2 flask-restplus-0.12.1 gensim-3.6.0 idna-2.10 importlib-metadata-2.0.0 itsdangerous-1.1.0 jsonschema-3.2.0 numpy-1.19.2 pyrsistent-0.17.3 pytz-2020.1 requests-2.24.0 scipy-1.5.2 six-1.15.0 smart-open-3.0.0 app-search-0.0.9 urllib3-1.25.10 zipp-3.3.0</p> </blockquote> <p>If I docker build and run on my local machine, the app works and is served on port 5000</p>
<p>When you run your container on GKE, the node OS is COS (Container-Optimized OS). The containers that run on it are locked down, which prevents this kind of update at runtime. Here, you try to add dependencies and copy files at runtime, and that can't work!</p> <p>Install all the stuff that you need when you build your container, and only execute your container at runtime.</p> <p><em>Of course, if you attach a volume to your container, you can read and write on this mounted volume.</em></p> <p>In your local environment you don't have this same constraint, and Docker run is more forgiving!</p>
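<p>A sketch of moving the work from <code>serve.sh</code> into the image build (the paths and the wheel name pattern are taken from the question; whether <code>semanticsearch-preprocess</code> can run at build time is an assumption, so adjust to your project):</p> <pre><code>FROM python:3.7-slim
WORKDIR /app
COPY . .
# (keep the ENV DICT_FILE/MODEL_FILE/... lines from your original Dockerfile here)
# Build and install the application wheel at image-build time,
# instead of at container start
RUN pip3 install pymysql waitress \
 &amp;&amp; python3 setup.py bdist_wheel \
 &amp;&amp; pip3 install dist/app_search*.whl \
 &amp;&amp; mkdir -p /opt/srv \
 &amp;&amp; cp dist/app_search*.whl /opt/srv/ \
 &amp;&amp; semanticsearch-preprocess
EXPOSE 5000
# At runtime the container only starts the already-installed app
CMD [&quot;waitress-serve&quot;, &quot;--call&quot;, &quot;app_search.app:main&quot;]
</code></pre>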
<p>I have two ingress controllers (for public/internal traffic). What I would like is for all endpoints to use the public ingress except for /metrics, which should be internal, all using the same host.</p> <p>E.g.</p> <pre><code>example.com/         -&gt; public ingress
example.com/metrics  -&gt; internal ingress
</code></pre> <p>This is what I have tried:</p> <p>internal ingress</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-metrics-ingress
  annotations:
    kubernetes.io/ingress.class: ingress-internal
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /metrics
            backend:
              serviceName: example-servicename
              servicePort: 80
</code></pre> <p>and public ingress</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path:
            backend:
              serviceName: example-servicename
              servicePort: 80
</code></pre> <p>The internal ingress is currently being ignored when I visit example.com/metrics (it uses the public one instead).</p> <p>If I change the internal ingress to use the same ingress controller as the public one and change the service port to 81 (as an example), this produces an error (which is expected), which shows that the two different ingresses are being used. However, as soon as I use two different ingress controllers, the one ingress' rules are not being picked up.</p> <p>How can I configure my ingresses to achieve my desired result?</p>
<p>When running multiple ingress-nginx controllers, a controller will only process an unset class annotation if it uses the default <code>--ingress-class</code> value (see the <code>IsValid</code> method in <code>internal/ingress/annotations/class/main.go</code>); otherwise the class annotation becomes required.</p> <p>If <code>--ingress-class</code> is set to the default value of <code>nginx</code>, the controller will monitor Ingresses with no class annotation <em>and</em> Ingresses with the annotation class set to <code>nginx</code>. Use a non-default value for <code>--ingress-class</code> to ensure that the controller only satisfies the specific class of Ingresses.</p> <p>In your case, the combination of the annotation <code>kubernetes.io/ingress.class: &quot;EXTERNAL|INTERNAL&quot;</code> and the flag <code>--ingress-class=EXTERNAL|INTERNAL</code> allows you to filter which Ingress rules should be picked up by each nginx ingress controller.</p> <p>Take a look: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">multiple-ingress</a>, <a href="https://github.com/kubernetes/ingress-nginx/issues/1976" rel="nofollow noreferrer">ingress-nginx-traffic</a>.</p>
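<p>In practice that means the controller Deployment for the internal traffic carries a flag matching the class used in your internal Ingress (a sketch; the container name and the rest of the args are whatever your internal controller already uses):</p> <pre><code>containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      # must match kubernetes.io/ingress.class: ingress-internal on the internal Ingress
      - --ingress-class=ingress-internal
</code></pre>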
<p>I try to build Apache Ignite on Azure Kubernetes Service. AKS version is 1.19.</p> <p>I followed the below instructions from the official apache page.</p> <p><a href="https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment" rel="nofollow noreferrer">Microsoft Azure Kubernetes Service Deployment</a></p> <p>But when I check the status of my pods, they seem failed. The status of pods is CrashLoopBackOff.</p> <p><a href="https://i.stack.imgur.com/4BdD3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4BdD3.png" alt="status of pods" /></a></p> <p>When I check the logs, It says the problem is node-configuration.xml Here is the XML of the node configuration.</p> <pre><code> &lt;bean class=&quot;org.apache.ignite.configuration.IgniteConfiguration&quot;&gt; &lt;property name=&quot;discoverySpi&quot;&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi&quot;&gt; &lt;property name=&quot;ipFinder&quot;&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt; &lt;constructor-arg&gt; &lt;bean class=&quot;org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration&quot;&gt; &lt;property name=&quot;namespace&quot; value=&quot;default&quot; /&gt; &lt;property name=&quot;serviceName&quot; value=&quot;ignite&quot; /&gt; &lt;/bean&gt; &lt;/constructor-arg&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; </code></pre> <p>Also, here is the output of the log.</p> <pre><code>PS C:\Users\kaan.akyalcin\ignite&gt; kubectl logs ignite-cluster-755f6665c8-djdcn -n ignite class org.apache.ignite.IgniteException: **Failed to instantiate Spring XML application context [springUrl=file:/ignite/config/node-configuration.xml, err=Line 1 in XML document from URL** [file:/ignite/config/node-configuration.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 71; cvc-elt.1: Cannot find the declaration of element 'bean'.] at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1098) at org.apache.ignite.Ignition.start(Ignition.java:356) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367) Caused by: class org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML application context [springUrl=file:/ignite/config/node-configuration.xml, err=Line 1 in XML document from URL [file:/ignite/config/node-configuration.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 71; cvc-elt.1: Cannot find the declaration of element 'bean'.] at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:392) at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104) at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98) at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:736) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:937) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:846) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:716) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:685) at org.apache.ignite.Ignition.start(Ignition.java:353) </code></pre> <p>I didn't find the solution. How can I fix this problem?</p>
<p>You have to pass a valid XML file; it looks like the docs need to be adjusted accordingly (note the namespace value here should match the namespace you deploy into, e.g. <code>default</code>):</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;beans xmlns=&quot;http://www.springframework.org/schema/beans&quot;
       xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;
       xsi:schemaLocation=&quot;
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd&quot;&gt;
    &lt;bean class=&quot;org.apache.ignite.configuration.IgniteConfiguration&quot;&gt;
        &lt;property name=&quot;discoverySpi&quot;&gt;
            &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi&quot;&gt;
                &lt;property name=&quot;ipFinder&quot;&gt;
                    &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt;
                        &lt;property name=&quot;namespace&quot; value=&quot;default&quot;/&gt;
                        &lt;property name=&quot;serviceName&quot; value=&quot;ignite&quot;/&gt;
                    &lt;/bean&gt;
                &lt;/property&gt;
            &lt;/bean&gt;
        &lt;/property&gt;
    &lt;/bean&gt;
&lt;/beans&gt;
</code></pre>
<p>Kubernetes allows limiting pod resource usage.</p> <pre><code>requests:
  cpu: 100m
  memory: 128Mi
limits:
  cpu: 200m # which is 20% of 1 core
  memory: 256Mi
</code></pre> <p>Let's say my kubernetes node has 2 cores, and I run this pod with a CPU limit of 200m on this node. In this case, will my pod use its underlying node's <strong>1 core's 200m</strong> or <strong>2 cores' 100m+100m</strong>?</p> <p>This calculation is needed for my <a href="http://docs.gunicorn.org/en/stable/design.html#how-many-workers" rel="noreferrer">gunicorn worker-count formula</a>, or the nginx worker count, etc. The gunicorn documentation says:</p> <blockquote> <p>Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with.</p> </blockquote> <p>So should I use 5 workers (my node has 2 cores)? Or does it not even matter, since my pod has only been allocated 200m of CPU, and I should consider my pod to have 1 core?</p> <p><em><strong>TLDR:</strong></em> How many cores does a pod use when its CPU usage is limited by kubernetes? If I run <code>top</code> inside the pod, I see 2 cores available. But I'm not sure whether my application is using 10%+10% of the 2 cores or 20% of 1 core.</p>
<p>It will limit the pod to 20% of one core, i.e. 200m. Also, <code>limit</code> means a pod can use at most that much CPU and no more, so the pod's CPU utilization will not always be at the limit.</p> <p>The total CPU capacity of a cluster is the sum of the cores of all nodes in the cluster.</p> <p>If you have a 2-node cluster where the first node has 2 cores and the second node has 1 core, the cluster's CPU capacity is 3 cores (2 cores + 1 core). If you have a pod which requests 1.5 cores, it will not be scheduled to the second node, as that node has a capacity of only 1 core. It will instead be scheduled to the first node, since it has 2 cores.</p>
<p>I deployed a GridGain cluster in a Google Kubernetes Engine cluster following [1]. I enabled native persistence using a StatefulSet. In statefulset.yaml in [2], <strong>terminationGracePeriodSeconds</strong> is set to <strong>60000</strong>. What is the purpose of this large timeout?</p> <p>When deleting a pod using the <strong>kubectl delete pod</strong> command it takes a very long time. What is a suitable value for <strong>terminationGracePeriodSeconds</strong> that avoids losing any data?</p> <p>[1]. <a href="https://www.gridgain.com/docs/latest/installation-guide/kubernetes/gke-deployment" rel="nofollow noreferrer">https://www.gridgain.com/docs/latest/installation-guide/kubernetes/gke-deployment</a></p> <p>[2]. <a href="https://www.gridgain.com/docs/latest/installation-guide/kubernetes/gke-deployment#creating-pod-configuration" rel="nofollow noreferrer">https://www.gridgain.com/docs/latest/installation-guide/kubernetes/gke-deployment#creating-pod-configuration</a></p>
<p>I believe the reason behind setting it to 60000 was: do not rely on it. Prior to Ignite 2.9 there was an issue with the startup script that didn't pass the termination signal through to the underlying Java app, making it impossible to perform a graceful shutdown.</p> <p>If a node is being restarted gracefully and <a href="https://www.gridgain.com/docs/latest/developers-guide/starting-nodes#preventing-partition-loss-on-shutdown" rel="nofollow noreferrer">IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN</a> is enabled, Ignite will ensure that the node leaving won't lead to data loss. Sometimes a rebalance might take a while.</p> <p>Keeping the above in mind: the hang issue can happen with Apache Ignite 2.8 and below; keeping the recommended terminationGracePeriodSeconds should be fine, and in a normal flow the full grace period should never actually be needed in practice.</p>
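<p>For completeness, if your GridGain/Ignite version supports it, that option is typically passed to the server pods through the StatefulSet environment (a sketch; check the GridGain docs for the exact property name and whether your version reads it from the environment):</p> <pre><code>env:
  - name: IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN
    value: &quot;true&quot;
</code></pre>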
<p>We are following the article below to establish a C# client connection to the Ignite cluster; both are deployed in Kubernetes.</p> <p><a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service</a></p> <p>We cannot find an equivalent C# class/method to perform this connection configuration in the C# client application.</p> <p><a href="https://i.stack.imgur.com/JJ3Gq.png" rel="nofollow noreferrer">screenshot of the configuration from the linked documentation</a></p> <p>Please help us find an alternative way to do the connection configuration for Kubernetes.</p>
<p>This API is not yet available for .NET, the <a href="https://issues.apache.org/jira/browse/IGNITE-13011" rel="nofollow noreferrer">relevant ticket</a> is in progress and most likely will be included into the next release.</p> <p>For now, you can list a set of server IPs for your thin clients explicitly. And for your server and thick client nodes it's fine to rely on spring.xml configuration. More details <a href="https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml" rel="nofollow noreferrer">here</a>.</p> <p>Example:</p> <pre><code> var cfg = new IgniteConfiguration { ... SpringConfigUrl = &quot;/path/to/spring.xml&quot; }; </code></pre> <p>And your spring configuration:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;beans xmlns=&quot;http://www.springframework.org/schema/beans&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot; http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd&quot;&gt; &lt;bean class=&quot;org.apache.ignite.configuration.IgniteConfiguration&quot;&gt; &lt;!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. --&gt; &lt;property name=&quot;discoverySpi&quot;&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi&quot;&gt; &lt;property name=&quot;ipFinder&quot;&gt; &lt;!-- Enables Kubernetes IP finder and setting custom namespace and service names. --&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt; &lt;property name=&quot;namespace&quot; value=&quot;ignite&quot;/&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/beans&gt; </code></pre>
<p>In Google Kubernetes Engine, I created a Load Balancer (external IP address). I can access it using the IP address. However, I want to get a domain name (I am not asking about buying my own domain and adding DNS records). I am not able to find out how to get such a URL.</p> <p>For example, in Azure Kubernetes Service, I created a Load Balancer and added a label, so I get a URL like http://&lt;dns_label_which_i_gave&gt;.&lt;region_name&gt;.cloudapp.azure.com. So, for trial purposes, I don't have to pay for a domain and I get an easy-to-read domain name.</p> <p>How do I get the same for a GCP Load Balancer?</p>
<p>With Google Cloud you can't do this. The load balancer exposes an IP, and you have to create an A record at your registrar (or in Cloud DNS) that points a name to it; there is no automatically generated DNS label like Azure's.</p>
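<p>If you go this route, it helps to reserve the IP as a static address first so it survives Service re-creation; the names, region and addresses below are placeholders, a sketch only:</p> <pre><code># Reserve a regional static external IP and read it back
gcloud compute addresses create my-lb-ip --region us-central1
gcloud compute addresses describe my-lb-ip --region us-central1 --format='get(address)'
</code></pre> <pre><code># Pin the Service of type LoadBalancer to the reserved IP
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # the reserved address from above
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>You would then create the A record for your chosen hostname pointing at that reserved address.</p>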
<p>In my deployment a pod can get into a situation where it needs to be recreated. In this case it can still process traffic, but it should be recreated as soon as possible.</p> <p>So I'm thinking about having a livenessProbe that reports failure if the pod needs to be restarted, while the readiness probe still reports OK.</p> <p>I know that eventually Kubernetes will recreate all the pods and the system will be fine again.</p> <p>My question is: can this be done without an outage? Let's assume all pods of a ReplicaSet report not-alive at the same time. Will Kubernetes kill them all and then replace them, or will it act in a rolling-update fashion where it starts a new pod, waits for it to be ready, then kills one not-alive pod, and continues this way until all are replaced?</p> <p>Is this the default behaviour of Kubernetes? If not, can it be configured to behave like this?</p>
<p>Kubernetes will not use a rolling update to replace pods that fail a liveness probe (or fail for any other reason); the kubelet simply restarts the failed container in place.</p> <p>Also, about probes: when the liveness probe starts for the first time and how frequently it runs is specified in the liveness probe itself. Since all replicas managed by a single ReplicaSet share the same pod spec, these values are the same for every replica. So yes, this is the default behavior.</p> <p>But if you want to do this without an outage, you can create two ReplicaSets that manage two different sets of the same pods, but with different values for the liveness probe parameters below:</p> <pre><code>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. </code></pre>
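<p>For illustration, a minimal sketch of how those parameters could be staggered between the two ReplicaSets; the numbers and the probe endpoint are only placeholders:</p> <pre><code># ReplicaSet "a" probes aggressively, ReplicaSet "b" waits longer,
# so both sets are unlikely to be restarted at the same moment.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # use e.g. 60 in the second ReplicaSet
  periodSeconds: 10         # use e.g. 30 in the second ReplicaSet
  timeoutSeconds: 1
  failureThreshold: 3
</code></pre> <p>Note that a failed liveness probe triggers a container restart by the kubelet, not a pod replacement by the ReplicaSet, so traffic is only lost from the restarting replica while it comes back up.</p>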
<p>I'm trying to pull an image from my priavte harbor registry. In Kubernetes I created a secret first as explained in this documentation:</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <p>Then I tried to implement that into my deployment.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-k8s-test9 namespace: k8s-test9 spec: replicas: 1 template: metadata: labels: app: nginx-k8s-test9 spec: containers: - name: nginx-k8s-test9 image: my-registry.com/nginx-test/nginx:1.14.2 imagePullSecrets: - name: harborcred imagePullPolicy: Always volumeMounts: - name: webcontent mountPath: usr/share/nginx/html ports: - containerPort: 80 volumes: - name: webcontent configMap: name: webcontent --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: webcontent namespace: k8s-test9 annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadWriteOnce] resources: requests: storage: 5Gi </code></pre> <p>When I try to create the deployment I get the following error message:</p> <pre><code>error: error validating "deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "ports" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference]; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I guess it's a yaml issue somehow but I don't know where it should be. </p>
<p>And here is the solution:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-k8s-test9 namespace: k8s-test9 spec: replicas: 1 template: metadata: labels: app: nginx-k8s-test9 spec: containers: - name: nginx-k8s-test9 image: my-registry.com/nginx-test/nginx:1.14.2 volumeMounts: - name: webcontent mountPath: usr/share/nginx/html ports: - containerPort: 80 volumes: - name: webcontent configMap: name: webcontent imagePullSecrets: - name: harborcred-test --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: webcontent namespace: k8s-test9 annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadWriteOnce] resources: requests: storage: 5Gi </code></pre> <p>The imagePullSecrets section was not in the right place: it belongs at the pod spec level, as a sibling of <code>containers</code> and <code>volumes</code>, not nested inside a container definition.</p>
<p>We followed the solution suggested in <a href="https://stackoverflow.com/questions/71957287/apache-ignite-c-sharp-client-connection-configuration-for-kubernetes">Apache Ignite C# Client Connection configuration for kubernetes</a> as thick client to connect the ignite cluster running in kubrenetes.</p> <p>We get the below error message on start:</p> <p>failed to start: System.EntryPointNotFoundException: Unable to find an entry point named 'dlopen' in shared library 'libcoreclr.so'. at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.NativeMethodsCore.dlopen(String filename, Int32 flags) at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.Load(String dllPath) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.LoadDll(String filePath, String simpleName) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String configJvmDllPath, ILogger log) at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)</p> <p>We included the openjdk8 in the docker image. Here is the docker file.</p> <pre><code> #FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base #WORKDIR /app #EXPOSE 80 #EXPOSE 443 ARG REPO=mcr.microsoft.com/dotnet/runtime FROM $REPO:3.1.24-alpine3.15 AS base # Install ASP.NET Core RUN aspnetcore_version=3.1.24 \ &amp;&amp; wget -O aspnetcore.tar.gz https://dotnetcli.azureedge.net/dotnet/aspnetcore/Runtime/$aspnetcore_version/aspnetcore-runtime-$aspnetcore_version-linux-musl-x64.tar.gz \ &amp;&amp; aspnetcore_sha512='1341b6e0a9903b253a69fdf1a60cd9e6be8a5c7ea3c4a52cd1a8159461f6ba37bef7c2ae0d6df5e1ebd38cd373cf384dc55c6ef876aace75def0ac77427d3bb0' \ &amp;&amp; echo &quot;$aspnetcore_sha512 aspnetcore.tar.gz&quot; | sha512sum -c - \ &amp;&amp; tar -oxzf aspnetcore.tar.gz -C /usr/share/dotnet ./shared/Microsoft.AspNetCore.App \ &amp;&amp; rm aspnetcore.tar.gz RUN apk add openjdk8 ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk ENV PATH=&quot;$JAVA_HOME/bin:${PATH}&quot; WORKDIR /app EXPOSE 80 EXPOSE 443 FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build WORKDIR /src ... RUN dotnet restore &quot;API.csproj&quot; COPY . . WORKDIR &quot;API&quot; RUN dotnet build &quot;API.csproj&quot; -c Release -o /app/build FROM build AS publish RUN dotnet publish &quot;API.csproj&quot; -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT [&quot;dotnet&quot;, &quot;API.dll&quot;]```` </code></pre>
<p>In addition to Pavel's response: instead of building your own Docker image, you can use the base image available for the GridGain edition: <a href="https://hub.docker.com/r/gridgain/community-dotnet" rel="nofollow noreferrer">https://hub.docker.com/r/gridgain/community-dotnet</a></p> <p>GridGain Community Edition is built on Apache Ignite and is free and open source as well. You might check the official docs for more details.</p>
<p>I want to deploy Rook on Kubernetes. I use 1 master and 3 workers, and the hosts run Ubuntu on bare metal, but the containers are stuck in ContainerCreating. After a lot of searching I understand I should follow this document <a href="https://github.com/rook/rook/blob/master/Documentation/flexvolume.md#most-common-readwrite-flexvolume-path" rel="nofollow noreferrer">https://github.com/rook/rook/blob/master/Documentation/flexvolume.md#most-common-readwrite-flexvolume-path</a>, which says:</p> <blockquote> <p>Configuring the Rook operator You must provide the above found FlexVolume path when deploying the rook-operator by setting the environment variable FLEXVOLUME_DIR_PATH. For example:</p> <p>env: [...] - name: FLEXVOLUME_DIR_PATH value: "/var/lib/kubelet/volumeplugins" (In the operator.yaml manifest replace with the path or if you use helm set the agent.flexVolumeDirPath to the FlexVolume path)</p> <p>Configuring the Kubernetes kubelet You need to add the flexvolume flag with the path to all nodes's kubelet in the Kubernetes cluster:</p> <p>--volume-plugin-dir=PATH_TO_FLEXVOLUME (Where the PATH_TO_FLEXVOLUME is the above found FlexVolume path)</p> </blockquote> <p>The question is: how can I add the flexvolume flag with that path to every node's kubelet?</p>
<p>@yasin lachini,<br> If you deploy a Kubernetes cluster on bare metal, you don't need to configure anything. That is because /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ is the kubelet's default FlexVolume path, and Rook assumes the default FlexVolume path if it is not set differently.</p> <p>My env:<br> rook-ceph/operator.yml (using the default FLEXVOLUME_DIR_PATH):</p> <pre><code>... # Set the path where the Rook agent can find the flex volumes # - name: FLEXVOLUME_DIR_PATH # value: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec" ... </code></pre> <p>After deployment, on a node:</p> <pre><code># ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ ceph.rook.io~rook ceph.rook.io~rook-ceph-system rook.io~rook rook.io~rook-ceph-system </code></pre>
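<p>If you ever do need a non-default path, the kubelet flag from the quoted docs is set per node. A sketch for kubeadm-provisioned Ubuntu nodes, assuming the usual extra-args file locations (verify the file name on your distribution):</p> <pre><code># On each node, append the flag to the kubelet's extra args
# (Debian/Ubuntu: /etc/default/kubelet, RHEL/CentOS: /etc/sysconfig/kubelet)
KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins
</code></pre> <pre><code># Then restart the kubelet on that node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre> <p>The same path must then be given to the Rook operator via FLEXVOLUME_DIR_PATH, as the documentation quoted in the question describes.</p>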
<p>I have a CA (Cluster Autoscaler) deployed on EKS, following <a href="https://eksworkshop.com/scaling/deploy_ca/" rel="nofollow noreferrer">this post</a>. My understanding is that CA automatically scales the cluster based on pod demand, i.e. if there are 3 nodes with a capacity of 8 pods and a 9th pod comes up, CA provisions a 4th node to run that 9th pod. What I actually see is CA continuously terminating &amp; creating a randomly chosen node from within the cluster, disturbing other pods &amp; nodes.</p> <p>How can I tell EKS (without defining a minimum node count or disabling the scale-in policy in the ASG) <strong>not to kill a node that has at least 1 pod running on it</strong>? Any suggestion would be appreciated.</p>
<p>You cannot use pods as the unit. CA works with resource units: CPU and memory.</p> <p>If the cluster does not have enough CPU or memory, it adds a new node.</p> <p>You have to play with your resource requests (in the pod definition), or redefine your nodes to use an instance type with more or fewer resources, depending on how many pods you want on each.</p> <p>Or you can play with the parameter <code>scale-down-utilization-threshold</code>:</p> <p><a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca</a></p>
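<p>As a sketch, the flag is passed on the cluster-autoscaler container's command line; the image tag, node-group name and value here are illustrative only, keep whatever your existing CA deployment already sets:</p> <pre><code>    spec:
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.17.3
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --nodes=2:10:my-eks-nodegroup-asg
            # A node is only considered for scale-down when the sum of its pods'
            # requests is below this fraction of the node's allocatable resources.
            - --scale-down-utilization-threshold=0.3
</code></pre> <p>Raising or lowering this threshold changes how eagerly CA drains lightly used nodes; it still respects PodDisruptionBudgets when it does so.</p>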
<p>If I have Kubernetes service (cluster IP with port 12345) with three pods behind it as endpoints (port 16789) in a namespace, what should be whitelisted in network policy, just the service port or the endpoint port or DNS port? Network policy can only take pod/namespace labels as selectors, not service labels. It is not clear from the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">documentation</a>. Trying to access the service from a different namespace. The environment uses Calico as CNI.</p>
<p>Network policies apply at the pods' network interfaces; a service is just a virtual IP in front of them, and you can have a pod without a service and still want a network policy for it.</p> <p>So you have to allow the endpoint port, 16789.</p>
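<p>A minimal sketch of such a policy, assuming the target pods are labelled <code>app=some-app</code> and the client namespace is labelled <code>team=client-ns</code> (both labels and the namespace name are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-clients-to-backend
  namespace: backend-ns
spec:
  podSelector:
    matchLabels:
      app: some-app          # the pods behind the service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: client-ns
      ports:
        - protocol: TCP
          port: 16789        # the container/endpoint port, not the service port
</code></pre> <p>Traffic still reaches the pods through the service's cluster IP; the policy is evaluated after kube-proxy has translated the service port (12345) to the target port (16789).</p>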
<p>i'm writing this Google Cloud Function (Python)</p> <pre><code>def create_kubeconfig(request): subprocess.check_output("curl https://sdk.cloud.google.com | bash | echo "" ",stdin=subprocess.PIPE, shell=True ) os.system("./google-cloud-sdk/install.sh") os.system("gcloud init") os.system("curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl") os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**") os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**") conf = KubeConfig() conf.use_context('**cluster name**') </code></pre> <p>when i run the code it gives me the error <strong>'Invalid kube-config file. ' kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.</strong></p> <p>help me to solve it please</p>
<p>You have to reach programmatically the K8S API. You have the description of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17" rel="nofollow noreferrer">API in the documentation</a></p> <p>But it's not easy and simple to perform. However, here some inputs for achieving what you want.</p> <p>First, get the GKE master IP <a href="https://i.stack.imgur.com/IzfVh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IzfVh.png" alt="enter image description here"></a></p> <p>Then you can access to the cluster easily. Here for reading the deployment</p> <pre><code> import google.auth from google.auth.transport import requests credentials, project_id = google.auth.default() session = requests.AuthorizedSession(credentials) response = session.get('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', verify=False) response.raise_for_status() print(response.json()) </code></pre> <p>For creating one, you can do this</p> <pre><code> import google.auth from google.auth.transport import requests credentials, project_id = google.auth.default() session = requests.AuthorizedSession(credentials) with open("deployment.yaml", "r") as f: data = f.read() response = session.post('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', data=data, headers={'content-type': 'application/yaml'}, verify=False) response.raise_for_status() print(response.json()) </code></pre> <p>According with the object that you want to build, you have to use the correct file definition and the correct API endpoint. I don't know a way to apply a whole <code>yaml</code> with several definition in only one API call.</p> <p>Last things, be sure to provide the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam" rel="nofollow noreferrer">correct GKE roles</a> to the Cloud Function service Account</p> <p><strong>UPDATE</strong></p> <p>Another solution is to use Cloud Run. Indeed, with Cloud Run and thanks to the Container capability, you have the ability to install and to call system process (it's totally open because <a href="https://cloud.google.com/run/docs/reference/container-contract#sandbox" rel="nofollow noreferrer">your container runs into a GVisor sandbox</a>, but most of common usages are allowed)</p> <p>The idea is the following: use a gcloud SDK base image and deploy your application on it. Then, code your app to perform system calls.</p> <p>Here a working example in Go</p> <p>Docker file</p> <pre><code>FROM golang:1.13 as builder # Copy local code to the container image. WORKDIR /app/ COPY go.mod . ENV GO111MODULE=on RUN go mod download COPY . . # Perform test for building a clean package RUN go test -v ./... RUN CGO_ENABLED=0 GOOS=linux go build -v -o server # Gcloud capable image FROM google/cloud-sdk COPY --from=builder /app/server /server CMD ["/server"] </code></pre> <p>Note: The image cloud-sdk image is heavy: 700Mb</p> <p>The content example (only the happy path. I remove error management, and the stderr/stdout feedback for simplifying the code)</p> <pre><code> ....... // Example here: recover the yaml file into a bucket client,_ := storage.NewClient(ctx) reader,_ := client.Bucket("my_bucket").Object("deployment.yaml").NewReader(ctx) content,_:= ioutil.ReadAll(reader) // You can store locally the file into /tmp directory. It's an in-memory file system. 
Don't forget to purge it to avoid any out of memory crash ioutil.WriteFile("/tmp/file.yaml",content, 0644) // Execute external command // 1st Recover the kube authentication exec.Command("gcloud","container","clusters","get-credentials","cluster-1","--zone=us-central1-c").Run() // Then interact with the cluster with kubectl tools and simply apply your description file exec.Command("kubectl","apply", "-f","/tmp/file.yaml").Run() ....... </code></pre>
<ul> <li>Since version 2.6, Apache Hadoop <strong>YARN handles Docker containers</strong>. Basically it distributes the requested number of containers across a Hadoop cluster, restarts failed containers, and so on.</li> <li><strong>Kubernetes</strong> seems to do the <strong>same</strong>.</li> </ul> <p>What are the major differences?</p>
<p>Kubernetes was developed almost from a clean slate to extend the Docker container kernel into a platform, taking a bottom-up approach. It has good support for specifying per-container/pod resource requirements, but it lacks an effective global scheduler that can partition resources into logical groupings. The Kubernetes design allows multiple schedulers to run in the cluster, each managing resources for its own pods. However, a Kubernetes cluster can suffer from instability when applications demand more resources than the physical systems can handle; it works best when infrastructure capacity exceeds application demand. The Kubernetes scheduler will attempt to fill up idle nodes with incoming application requests and terminate low-priority and starved containers to improve resource utilization. Kubernetes containers can integrate with external storage systems like S3 to provide data resilience. Kubernetes uses etcd to store cluster data. Etcd cluster nodes and the Hadoop NameNode are the respective single points of failure of the Kubernetes and Hadoop platforms. Etcd can have more replicas than the NameNode, so from a reliability point of view Kubernetes seems favored in theory. However, Kubernetes security is open by default unless RBAC is defined with fine-grained role bindings and the security context is set correctly for pods; if omitted, the primary group of the pod defaults to root, which can be problematic for system administrators trying to secure the infrastructure.</p> <p>Apache Hadoop YARN was developed to run isolated Java processes for big data workloads and was later improved to support Docker containers. YARN provides global-level resource management, such as capacity queues for partitioning physical resources into logical units. Each business unit can be assigned a percentage of the cluster resources. The capacity-sharing system is designed to guarantee resource availability for enterprise priorities rather than to squeeze out every available physical resource. YARN scores more points on security: there are security features for Kerberos, access control for privileged/non-privileged containers, trusted Docker images, and placement policy constraints. Most Docker-related security is closed by default, and the system admin needs to manually turn on flags to grant more power to containers. Large enterprises tend to run Hadoop more than Kubernetes because securing the system costs less. There are more distributed SQL engines built on top of YARN, including Hive, Impala, SparkSQL and IBM BigSQL. The database options make YARN an attractive option because of the ability to run online transaction processing in containers and online analytical processing as batch workloads. Hadoop developer toolchains can be overwhelming: MapReduce, Hive, Pig, Spark, etc. each have their own style of development, the user experience is inconsistent, and it takes a while to learn them all. Kubernetes feels less obstructive by comparison because it only deploys Docker containers. With the introduction of YARN services to run Docker container workloads, YARN can feel less wordy than Kubernetes.</p> <p>If your plan is to outsource IT operations to a public cloud, pick Kubernetes. If your plan is to build private/hybrid/multi-cloud infrastructure, pick Apache YARN.</p>
<p>So I want to deploy a master-slaves MySQL cluster in k8s. I found 2 ways that seem popular:</p> <ol> <li>The first one is to use statefulsets directly from k8s official document: <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/</a></li> <li>The second one is to use operator, i.e. <a href="https://github.com/oracle/mysql-operator" rel="nofollow noreferrer">https://github.com/oracle/mysql-operator</a></li> </ol> <p>Which way is most commonly used?</p> <p>Also, in statefulsets, if my MySQL master dies, will k8s automatically promote the slave to be the master?</p> <p>Lastly, when my logic backend app performs an operation (CRUD) to MySQL cluster, how does k8s know which pod to route to, i.e. write operation can only be sent to master while read is sent to all?</p>
<p>Users can deploy and maintain a set of highly available MySQL services in k8s based on StatefulSets, but the process is relatively complex. It requires users to familiarize themselves with various k8s resource objects, learn many MySQL operational details and maintain a set of complex management scripts. Kubernetes Operators are designed to lower the bar for deploying complex applications on k8s.</p> <p>An Operator hides the orchestration details of a complex application and greatly reduces the effort of running it in k8s. If you need to deploy other complex applications as well, we recommend that you use the Operator approach.</p> <p>Speaking about master election while using a StatefulSet: promoting a slave to master is not an automatic process; you have to configure replication manually, for example using <a href="https://www.percona.com/software/mysql-database/percona-xtrabackup" rel="nofollow noreferrer">Xtrabackup</a>. Here is more information: <a href="https://www.percona.com/doc/percona-xtrabackup/2.1/howtos/setting_up_replication.html" rel="nofollow noreferrer">setting_up_replication</a>.</p> <p>Take a look: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/#cloning-existing-data" rel="nofollow noreferrer">cloning-existing-data</a>, <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/#starting-replication" rel="nofollow noreferrer">starting-replication</a>, <a href="https://medium.com/@Alibaba_Cloud/kubernetes-application-management-stateful-services-ca17640f93fa" rel="nofollow noreferrer">mysql-statefulset-operator</a>.</p> <p>Useful tools: <a href="https://github.com/vitessio/vitess" rel="nofollow noreferrer">vitess</a> for better MySQL networking management, and <a href="https://github.com/helm/charts/tree/master/stable/percona-xtradb-cluster" rel="nofollow noreferrer">percona-xtradb-cluster</a>, which provides superior performance, scalability and instrumentation.</p>
<p>So I'm trying to achieve the following: via Terraform, deploy Rancher 2 on GCE, create a K8s cluster, and add firewall rules so the nodes are able to talk back to the Rancher master VM.</p> <p>I was able to add a firewall rule with the external IPs of the nodes to access the Rancher master, but instead of adding the IPs I should be able to use a tag. Google Kubernetes Engine creates a Compute Engine instance group:</p> <pre><code> gke-c-wlvrt-default-0-5c42eb4e-grp </code></pre> <p>When I add in the firewall rules:</p> <pre><code>Target Tag: rancher-master Source Tag: gke-c-wlvrt-default-0-5c42eb4e-grp </code></pre> <p>nothing works.</p> <p>When I change it to:</p> <pre><code>Target Tag: rancher-master Source IP: 35.xx.xx.xx, 35.xx.xx.xx.xx, 35.xx.x.xxx.x </code></pre> <p>it works.</p> <p>So how do I get the tags for the Kubernetes nodes working in the firewall rule?</p>
<p>You don't use the correct tag. For knowing it, go to Compute Engine page and click on the detail on a VM. You can see this:</p> <p><a href="https://i.stack.imgur.com/TqhCw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TqhCw.png" alt="enter image description here"></a></p> <p>The instance group name is not the same as the network tag name. Use the network tag instead of the instance group name.</p> <p>You can also see these values when you go to the instance group page, and you go to the instance template detail.</p> <p><strong>UPDATE</strong></p> <p>Because you can't (or I don't know how to do) know the network tag applied to the VM, you can use a special trick on GCP.</p> <p>Start to update your node pool definition with a service account</p> <pre><code>resource "google_service_account" "sa-node" { account_id = "sa-node" display_name = "sa-node" } resource "google_container_node_pool" "primary_preemptible_nodes" { name = "my-node-pool" location = "us-central1" cluster = google_container_cluster.primary.name node_count = 1 node_config { preemptible = true machine_type = "n1-standard-1" service_account = google_service_account.sa-node.email .... </code></pre> <p>Then define a firewall rule by using the service account as source, instead of the network tag</p> <pre><code>resource "google_compute_firewall" "default" { name = "test-firewall" network = google_compute_network.default.name allow { protocol = "tcp" ports = ["80", "8080", "1000-2000"] } source_service_accounts = [google_service_account.sa-node.email] } </code></pre> <p>Sadly, you can't mix <code>target tag</code> and <code>source service account</code>, but you can use a <code>target service account</code>. Thus, do the same thing on Rancher. Use a specific service account for your rancher deployment and that should work.</p> <p>Hope this help!</p>
<p>We have configured Istio 1.4.0 with the demo profile on a Kubernetes 1.15.1 cluster. It was working as expected, but after some time we are facing issues with applications that connect to backend servers like MongoDB. The application pod is going into <code>CrashLoopBackOff</code>, and if I disable Istio it works properly.</p> <p>Upon checking the istio-proxy logs I found lines stating HTTP/1.1 DPE together with the MongoDB IP and port number.</p> <p>Below are the istio-proxy (sidecar) logs:</p> <blockquote> <pre><code>[2020-03-11T13:40:28.504Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - &lt;mongo IP&gt;:27017 10.233.92.103:49412 - - [2020-03-11T13:40:28.508Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - &lt;mongo IP&gt;:27017 10.233.92.103:52062 - - [2020-03-11T13:40:28.528Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - &lt;mongo IP&gt;:27017 10.233.92.103:37182 - - [2020-03-11T13:40:28.529Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - &lt;mongo IP&gt;:27017 10.233.92.103:49428 - - [2020-03-11T13:40:28.530Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.26.61.18:27017 10.233.92.103:52078 - - [2020-03-11T13:40:28.569Z] "POST /intake/v2/events HTTP/1.1" 202 - "-" "-" 941 0 3 1 "-" "elasticapm-node/3.3.0 elastic-apm-http-client/9.3.0 node/10.12.0" "8954f0a1-709b-963c-a480-05b078955c89" "&lt;apm&gt;:8200" "10.26.61.45:8200" PassthroughCluster - &lt;apm&gt;:8200 10.233.92.103:49992 - - [2020-03-11T13:40:28.486Z] "- - -" 0 - "-" "-" 47 3671 98 - "-" "-" "-" "-" "&lt;redis&gt;:6379" PassthroughCluster 10.233.92.103:37254 &lt;redis&gt;:6379 10.233.92.103:37252 - - [2020-03-11T13:40:30.168Z] "- - -" 0 - "-" "-" 632 1212236 104 - "-" "-" "-" "-" "104.16.25.35:443" PassthroughCluster 10.233.92.103:60760 104.16.25.35:443 10.233.92.103:60758 - - </code></pre> </blockquote> <p>and the application logs give the error below:</p> <blockquote> <p><code>{ err: 'socketHandler', trace: '', bin: undefined, parseState: { sizeOfMessage: 1347703880, bytesRead: undefined, stubBuffer: undefined } }</code></p> </blockquote>
<p>The issue has been resolved.</p> <p>RCA: I had manually created the Service and Endpoints for MongoDB with the port name set to <code>http</code>.</p> <p>After that, when I checked the listeners in the proxy config via the istioctl command, I found an entry with address 0.0.0.0 and port 27017:</p> <p>ADDRESS PORT TYPE 0.0.0.0 27017 TCP</p> <p>From the JSON output I interpreted that traffic was going into the BlackHoleCluster even though I had set ALLOW_ANY for the PassthroughCluster.</p> <p>The istio-proxy output always gave me the DPE error.</p> <p>After understanding the issue, I changed the port name from <code>http</code> to <code>http1</code> and it worked properly.</p> <p>I still need to understand why the name <code>http</code> was creating so much trouble.</p>
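<p>The likely explanation is Istio's protocol selection: the Service port name (or its prefix) tells the sidecar which protocol to assume, so a MongoDB port named <code>http</code> is parsed as HTTP/1.1 and the binary Mongo wire protocol shows up as DPE (downstream protocol error). A sketch of a Service using a TCP-style name instead; the names and namespace are placeholders:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
  namespace: default
spec:
  ports:
    - name: mongo        # or tcp-mongo; anything not prefixed with http/http2/grpc
      port: 27017
      targetPort: 27017
      protocol: TCP
</code></pre> <p>With a <code>tcp</code>/<code>mongo</code> port name the sidecar treats the traffic as opaque TCP (or Mongo) instead of trying to parse it as HTTP.</p>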
<p>TL;DR In a GKE private cluster, I'm unable to expose service with internal/private IP.</p> <p>We have our deployment consisting of around 20 microservices and 4 monoliths, currently running entirely on VMs on GoogleCloud. I'm trying to move this infrastructure to GKE. The first step of the project is to build a private GKE cluster (i.e without any public IP) as a replacement of our staging. As this is staging, I need to expose all the microservice endpoints along with the monolith endpoints internally for debugging purpose (means, only to those connected to the VPC) and that is where I'm stuck. I tried 2 approaches:</p> <ol> <li>Put an internal load balancer (ILB) in front of each service and monolith. Example:</li> </ol> <pre><code>apiVersion: v1 kind: Service metadata: name: session annotations: cloud.google.com/load-balancer-type: "Internal" labels: app: session type: ms spec: type: LoadBalancer selector: app: session ports: - name: grpc port: 80 targetPort: 80 protocol: TCP </code></pre> <p><a href="https://i.stack.imgur.com/R4Ndo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R4Ndo.png" alt="enter image description here"></a></p> <p>This works, though with severe limitation. ILB creates a forwarding rule, and <strong>GCP has a <a href="https://cloud.google.com/vpc/docs/quota#per_network" rel="nofollow noreferrer">limitation</a> of 75 forwarding rule per network</strong>. This means we can not build more than 3 clusters in a network. Not acceptable to us.</p> <ol start="2"> <li>a. I tried placing an ingress controller in front of all the services, which always exposes the entire cluster with a public IP - an absolute no-no.</li> </ol> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress hostNetwork: true metadata: name: ingress-ms-lb annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: gs03 http: paths: - path: /autodelivery/* backend: serviceName: autodelivery servicePort: 80 - path: /session/* backend: serviceName: session servicePort: 80 </code></pre> <p>b. I tried using a nginx ingress controller which ends up not having an ip at all.</p> <pre><code> apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-ms-lb annotations: nginx.ingress.kubernetes.io/rewrite-target: / # cloud.google.com/load-balancer-type: "Internal" nginx.ingress.kubernetes.io/ingress.class: nginx kubernetes.io/ingress.class: "nginx" # nginx.ingress.kubernetes.io/whitelist-source-range: 10.100.0.0/16, 10.110.0.0/16 spec: rules: - host: svclb http: paths: - path: /autodelivery/* backend: serviceName: autodelivery servicePort: 80 - path: /session/* backend: serviceName: session servicePort: 80 </code></pre> <p>The third option is to configure firewall rules, which will cut off any access to the public IPs. This was rejected internally, given the security concerns.</p> <p>I'm stuck at this stage and need some pointers to move forward. Please help</p>
<p>I can see from the screenshot you attached that your GKE cluster is a private cluster.</p> <p>As you would like to reach your services and applications inside the GKE cluster from all the resources in the same VPC network, I would suggest using a Service of type NodePort [1].</p> <p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service#service_of_type_nodeport" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/service#service_of_type_nodeport</a></p>
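<p>A minimal sketch of such a Service; the names, port numbers and the fixed nodePort are placeholders. Clients in the VPC then reach the workload on any node's internal IP at that nodePort, and no internal forwarding rule is consumed:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: session
  labels:
    app: session
spec:
  type: NodePort
  selector:
    app: session
  ports:
    - name: grpc
      port: 80
      targetPort: 80
      nodePort: 30080   # must be in the cluster's NodePort range (default 30000-32767)
      protocol: TCP
</code></pre>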
<p>I want to install <code>jenkins</code> using its official helm <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">chart</a> on GKE.</p> <p>I want to expose the agent service (port 50000) using LoadBalancer (will be hitting it from some remote agents).</p> <p>Will <a href="https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml#L194" rel="nofollow noreferrer">this</a> annotation</p> <pre><code>service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8" </code></pre> <p>also help secure a GCP load balancer, or is it only applicable on AWS?</p> <p>Will the agents initiated internally in GKE still have to pass through the internet to reach the service, or will they be routed internally to the corresponding agent service?</p>
<p>If you are asking about capability to whitelist firewalls using 'loadBalancerSourceRanges' parameter <code>service.beta.kubernetes.io/load-balancer-source-ranges</code> annotation is supported and often use on GCP.</p> <p>Here is example Loadbalancer service with defined source-ranges:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: example-loadbalancer annotations: service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8" spec: type: LoadBalancer ports: - protocol: TCP port: 8888 targetPort: 8888 </code></pre> <p>Unlike Network Load Balancing, access to TCP Proxy Load Balancing cannot be controlled by using firewall rules. This is because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud and firewall rules are implemented on instances in the data center. <a href="https://i.stack.imgur.com/Obbph.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Obbph.png" alt="enter image description here"></a> Useful documentations: <a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">gcp-external-load-balancing</a>, <a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">load-balancing</a>.</p>
<p>When, for whatever reason, I delete the pod running the Job that was started by a CronJob, I immediately see a new pod being created. Only once I have deleted something like six times the <code>backoffLimit</code> number of pods do new ones stop being created.</p> <p>Of course, if I'm actively monitoring the process, I can delete the CronJob, but what if the pod inside the job fails when I'm not looking? I would like it not to be recreated.</p> <p>How can I stop the CronJob from persistently creating new jobs (or pods?) and instead wait until the next scheduled time if the current job/pod failed? Is there something similar to a Job's <code>backoffLimit</code>, but for CronJobs?</p>
<p>Set <strong><code>startingDeadlineSeconds</code></strong> to a large value or leave it unset (the default).</p> <p>At the same time set <strong><code>.spec.concurrencyPolicy</code></strong> to <strong><code>Forbid</code></strong>, and the CronJob skips the new job run while the previously created job is still running.</p> <p>If <strong><code>startingDeadlineSeconds</code></strong> is set to a large value or left unset (the default) and <strong><code>concurrencyPolicy</code></strong> is set to <strong><code>Forbid</code></strong>, the job will not be re-run after it has failed.</p> <p>The concurrency policy field can be added to the specification of your CronJob (.spec.concurrencyPolicy), but it is optional.</p> <p>It specifies how to treat concurrent executions of a job that is created by this CronJob. The spec may specify only one of these three concurrency policies:</p> <ul> <li><strong>Allow (default)</strong> - The cron job allows concurrently running jobs</li> <li><strong>Forbid</strong> - The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run</li> <li><strong>Replace</strong> - If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run</li> </ul> <p>It is good to know that the concurrency policy applies only to the jobs created by the same CronJob. If there are multiple CronJobs, their respective jobs are always allowed to run concurrently.</p> <p>A CronJob run is counted as missed if it failed to be created at its scheduled time. For example, if <strong><code>concurrencyPolicy</code></strong> is set to <strong><code>Forbid</code></strong> and a CronJob was attempted to be scheduled while a previous run was still in progress, then it would count as missed.</p> <p>For every CronJob, the CronJob controller checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error.</p> <p>More information you can find here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer"><code>CronJobs</code></a> and <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer"><code>AutomatedTask</code></a>.</p> <p>I hope it helps.</p>
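<p>A minimal sketch putting these fields together; the schedule, names and limits are placeholders. The <code>backoffLimit</code> on the job template is the closest equivalent to what the question asks for: it caps retries within one scheduled run, and with <code>concurrencyPolicy: Forbid</code> the next attempt only happens at the next schedule:</p> <pre><code>apiVersion: batch/v1beta1      # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Forbid    # skip a run if the previous one is still active
  # startingDeadlineSeconds left unset (default)
  jobTemplate:
    spec:
      backoffLimit: 0          # do not retry a failed pod within this run
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: task
              image: busybox
              command: ["sh", "-c", "echo run once per schedule"]
</code></pre>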
<p>The following chain describes how pods that expose an API are reached from the outside.</p> <pre><code>Client -&gt; Route53 (.example.com) -&gt; LoadBalancer -&gt; Nginx -&gt; Service -&gt; Pod </code></pre> <p>Some pods, in addition to exposing an API, communicate with and use the APIs of others in the same k8s cluster. To allow communication between pods I can use the internal DNS, e.g. <code>api1.ns.svc.cluster.local</code>, or the Route53 <code>api1.example.com</code> domain.</p> <p>The first case is more efficient, but on the other hand I need to keep a list of the necessary services and namespaces for each pod.</p> <p>The second case is easier to manage. I know that each API responds on <code>*.example.com</code>, so I only need to know the subdomain to call. This approach is extremely inefficient:</p> <pre><code>Pod1 -&gt; Route53 (api2.example.com) -&gt; LoadBalancer -&gt; Nginx -&gt; Service -&gt; Pod2 </code></pre> <p>In this scenario I would like to know if there are known solutions where a pod communicating with another pod can use the same domain managed by Route53, but without leaving the cluster, keeping the traffic internal.</p> <p>I know I can use a CoreDNS rewrite, but in that case I would still have to keep an updated list; also, Route53 holds subdomains pointing to services outside the cluster, e.g. <code>db.example.com</code>.</p> <p>So the idea is autodiscovery of the ingress, keeping traffic internal when possible:</p> <pre><code>Pod1 -&gt; k8sdns with api2.example.com ingress -&gt; Nginx -&gt; Service -&gt; Pod2 </code></pre> <p>Or</p> <pre><code>Pod1 -&gt; k8sdns without db.example.com ingress -&gt; Route53 -&gt; LoadBalancer -&gt; DB </code></pre> <p>Thanks</p>
<p>Yes, you can do it using the CoreDNS <code>rewrite plugin</code>. This is the <a href="https://coredns.io/plugins/rewrite/" rel="nofollow noreferrer">official documentation</a> and I'll give you an example how to implement it.</p> <ol> <li>Edit the CoreDNS ConfigMap</li> </ol> <pre class="lang-bash prettyprint-override"><code>kubectl edit cm -n kube-system coredns </code></pre> <ol start="2"> <li>Add this line inside the config:</li> </ol> <pre class="lang-yaml prettyprint-override"><code>rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local </code></pre> <p>Your <code>cm</code> is going to look like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local prometheus :9153 proxy . /etc/resolv.conf cache 30 loop reload loadbalance } </code></pre> <ol start="3"> <li>Save the edit and delete your CoreDNS pods</li> </ol> <pre class="lang-bash prettyprint-override"><code>kubectl delete pod -n kube-system --selector k8s-app=kube-dns </code></pre> <ol start="4"> <li>And test it inside a dummy pod or query directly to the CoreDNS</li> </ol> <pre class="lang-bash prettyprint-override"><code># dig app1.yourdomain.com ; &lt;&lt;&gt;&gt; DiG 9.16.33-Debian &lt;&lt;&gt;&gt; app1.yourdomain.com ;; global options: +cmd ;; Got answer: ;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 51020 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: c06f814fbf04a827 (echoed) ;; QUESTION SECTION: ;app1.yourdomain.com. IN A ;; ANSWER SECTION: app1.default.svc.cluster.local. 30 IN A 10.110.113.195 ;; Query time: 5 msec ;; SERVER: 10.96.0.10#53(10.96.0.10) ;; WHEN: Wed Oct 26 04:49:47 UTC 2022 ;; MSG SIZE rcvd: 107 </code></pre>
<p>I want to get the list of pods using Ansible. Here is the approach I tried:</p> <pre class="lang-yaml prettyprint-override"><code># tasks file for elasticsearch_secure - name: Fetch podd deatils k8s_info: kind: Pod field_selectors: - status.phase=Running namespace: &lt;ns&gt; register: pod_list become: yes delegate_to: localhost </code></pre> <p>I am getting a connection timeout:</p> <pre><code>FAILED! =&gt; {&quot;changed&quot;: false, &quot;msg&quot;: &quot;Failed to get client due to HTTPSConnectionPool(host='&lt;clusterip&gt;', port=6443): Max retries exceeded with url: /version (Caused by NewConnectionError('&lt;urllib3.connection.HTTPSConnection object at 0x1081d5d00&gt;: Failed to establish a new connection: [Errno 60] Operation timed out'))&quot;} </code></pre> <p>Is there any other way I can get pod details?</p> <pre><code>$ python --version Python 2.7.16 $ pip3 list | grep openshift openshift 0.11.2 </code></pre> <p>Please share your suggestions; I'm stuck here.</p>
<p>I had the same issue and it seems to be a problem with Python's <code>kubernetes</code> module: <a href="https://github.com/kubernetes-client/python/issues/1333" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/issues/1333</a></p> <p>What worked for me was to uninstall the <code>kubernetes</code> and <code>openshift</code> modules and install them again with the compatible versions for <code>openshift==0.11.2</code>:</p> <pre class="lang-sh prettyprint-override"><code># Uninstall kubernetes and openshift $ pip3 uninstall -y openshift kubernetes # This command also installs kubernetes $ pip3 install -Iv openshift==0.11.2 # Installed versions $ pip3 list | grep -e openshift -e kubernetes kubernetes 11.0.0 openshift 0.11.2 </code></pre>
<p>I have an application in a container which reads certain data from a configMap which goes like this</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: app-config data: application.yaml: | server: port: 8080 host: 0.0.0.0 ## ## UCP configuration. ## If skipped, it will default to looking inside of the connections.xml file. database: ApplicationDB: username: username password: hello123 </code></pre> <p>Now I created a secret for the password and mounted as env variable while starting the container.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: appdbpassword type: Opaque stringData: password: hello123 </code></pre> <p>My pod looks like:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: {{ .Values.pod.name }} spec: containers: - name: {{ .Values.container.name }} image: {{ .Values.image }} command: [ "/bin/sh", "-c", "--" ] args: [ "while true; do sleep 30; done;"] env: - name: password valueFrom: secretKeyRef: name: appdbpassword key: password volumeMounts: - name: config-volume mountPath: /app/app-config/application.yaml subPath: application.yaml volumes: - name: config-volume configMap: name: app-config </code></pre> <p>I tried using this env variable inside the configMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: app-config data: application.yaml: | server: port: 8080 host: 0.0.0.0 ## ## UCP configuration. ## If skipped, it will default to looking inside of the connections.xml file. database: ApplicationDB: username: username **password: ${password}** </code></pre> <p>But my application is unable to read this password. Am I missing something here? </p> <p>EDIT:</p> <p>I cannot change the application.yaml to any other form as my server looks for application.yaml in source path. Do we have any way to use that environment variable in values.yaml(helm) file and use it in the configmap?</p>
<p>Your ${password} variable will not be replaced by its value, because application.yaml is a static file: the ConfigMap contents are mounted as-is, and nothing interpolates environment variables inside them. Only if the file is evaluated by something that expands variables (for example a shell) will the placeholder get replaced.</p> <p>Consider a scenario where, instead of application.yaml, you put this file in the ConfigMap:</p> <pre><code>application.sh: | echo "${password}" </code></pre> <p>Now go inside /app/app-config and you will see the application.sh file. If you run <code>sh application.sh</code>, you will see the value of the environment variable, because the shell performs the expansion.</p> <p>I hope this clears things up.</p>
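<p>One possible way to bridge the two (not part of the original answer, just a sketch): mount the ConfigMap as a template and let the container's entrypoint expand the environment variable before the application starts. This assumes a shell and <code>envsubst</code> (from gettext) are present in the image; the paths and the start command below are placeholders:</p> <pre><code>      containers:
        - name: app
          image: your-app-image
          command: ["/bin/sh", "-c"]
          args:
            - "envsubst &lt; /app/app-config/application.yaml.tpl &gt; /app/application.yaml &amp;&amp; exec /app/start-server"
          env:
            - name: password
              valueFrom:
                secretKeyRef:
                  name: appdbpassword
                  key: password
          volumeMounts:
            - name: config-volume
              mountPath: /app/app-config
</code></pre> <p>The rendered /app/application.yaml then contains the real password, while the ConfigMap keeps only the ${password} placeholder.</p>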
<p>I've configured application to run on 2 ec2 instances and k8s service <code>type = LoadBalancer</code> for this application (<code>Selector:app=some-app</code>). Also, I have 10+ instances running in EKS cluster. According to the service output - everything is ok:</p> <pre><code>Name: some-app Namespace: default Labels: app=some-app Annotations: external-dns.alpha.kubernetes.io/hostname: some-domain service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600 service.beta.kubernetes.io/aws-load-balancer-internal: true Selector: app=some-app Type: LoadBalancer IP: 172.20.206.150 LoadBalancer Ingress: internal-blablabla.eu-west-1.elb.amazonaws.com Port: default 80/TCP TargetPort: 80/TCP NodePort: default 30633/TCP Endpoints: 10.30.21.238:80,10.30.22.38:80 Port: admin 80/TCP Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>But when I check AWS console I see that all instances are included (10+) into ELB. (if I use Application load balancer - only 2 instances are present) Is there any configuration to remove odd instances?</p>
<p>That's the default behaviour for the ELB/NLB: every node is registered, and once traffic hits an instance, kube-proxy redirects it to the instances where your pods are actually running.</p> <p>If you're using the ALB ingress controller, then again it's standard behaviour; it only adds the instances where your pods are running, skipping the iptables mumbo jumbo ;)</p>
<p>I would like to run Redis for caching in a separate pod in k8s using the <a href="https://github.com/helm/charts/tree/master/stable/redis" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/redis</a> chart.</p> <p>So Redis manages the PVC and uses it for persistence, and in my opinion my application's pod should ideally only need to connect to Redis as a service. As far as I can think, host:port should be enough. This situation is the same as for any database.</p> <p>So my doubt is: should I make any extra configuration in the application's YAML for the volume that relates to Redis or PostgreSQL? I mean, should the application's pod mount it as well? What is the common best practice for connecting to Redis or a database from the application's pod?</p> <p><strong>i.e. the volume-related part of the Redis configuration</strong></p> <pre><code> enabled: true path: /data subPath: "" accessModes: - ReadWriteOnce size: 8Gi matchLabels: {} matchExpressions: {} </code></pre> <p><strong>Application's deployment.yaml</strong></p> <pre><code>env: - name: REDIS_HOST value: redis-master - name: REDIS_PORT value: "6379" - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: {{ .Release.Name }}-config key: POSTGRES_HOST - name: POSTGRES_PORT valueFrom: configMapKeyRef: name: {{ .Release.Name }}-config key: POSTGRES_PORT - name: POSTGRES_DB valueFrom: configMapKeyRef: name: {{ .Release.Name }}-config key: POSTGRES_DB </code></pre>
<p>As far as I understand, in your case you only have to configure the PVC/PV for the stateful services themselves (Redis, PostgreSQL); it is common to set the PVC up right alongside the deployment definition:</p> <p>Example for Redis, creating the PVC (only if you have dynamic provisioning enabled):</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: your-redis-pv-claim labels: app: redis spec: storageClassName: your-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 8Gi </code></pre> <p>In the Redis deployment configuration file, add the following lines in the spec section:</p> <pre><code> volumes: - name: your-redis-persistent-storage persistentVolumeClaim: claimName: your-redis-pv-claim </code></pre> <p>You have to follow the same steps for PostgreSQL. Remember to check whether you have a StorageClass; otherwise you will have to provision the volume manually. Also remember to define the path where the volume should be mounted.</p> <p><strong>Storage provisioning in cloud:</strong></p> <blockquote> <p><strong>Static</strong></p> <p>A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.</p> <p><strong>Dynamic</strong></p> <p>When none of the static PVs the administrator created match a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves.</p> <p>To enable dynamic storage provisioning based on storage class, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server. This can be done, for example, by ensuring that DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component. For more information on API server command-line flags, check kube-apiserver documentation.</p> </blockquote> <p>You can also have <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">shared volumes</a>; then two containers can use these volumes to communicate.</p> <p>More information you can find here: <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">pvc</a>, <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">pvc-kubernetes</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/" rel="nofollow noreferrer">pvc-kubernetes-pod</a>.</p>
<p>I feel there is a big blocker in Rancher v2.2.2 where I can't define the private Azure registry containing the Docker images to be used to create a K8s deployment.</p> <p>I can define the Azure registry credentials under Resources -> Registries and authenticate to create a workload. (The workload accesses the private Azure registry and authenticates using the credentials set.)</p> <p>Now if I create a Helm chart that accesses the same private Azure registry to pull the image and create a pod, it fails saying the Docker image could not be pulled. I have researched this and found that the K8s deployment can find the credentials set in the Rancher UI, but the kubelet has no access to these credentials.</p> <p>The common suggestion people give is to put the secrets in the Helm chart deployment file, and that works too, but it is a security concern, as anyone can access the Helm chart and find the Azure credentials described in it. I feel it's still a common problem in Rancher v2.</p> <p>The question <a href="https://stackoverflow.com/questions/49669077/helm-chart-deployment-and-private-docker-repository">Helm chart deployment and private docker repository</a> caters to the problem, but it has the security concern expressed above.</p> <p>I am not sure if the Rancher community has an answer either, because the Helm repo also suggests the same solution. Please refer to (<a href="https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#creating-image-pull-secrets" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#creating-image-pull-secrets</a>)</p> <p>I don't want to define image pull credentials in the deployment.yaml file of the Helm chart as shown below:</p> <pre><code> name: credentials-name registry: private-docker-registry username: user password: pass </code></pre>
<p>When you configure a new set of registry credentials under Resources -> Registries in your current project, Rancher creates a Kubernetes secret resource for you that holds the specified credentials.</p> <p>You can verify that the secret exists in all namespaces belonging to the project by running the following command:</p> <pre><code>$ kubectl get secrets -n &lt;some-project-namespace&gt; </code></pre> <p>Then - <strong>instead of persisting your plaintext account credentials in your deployment.yaml</strong> - you are going to reference the secret resource in the containers spec like so:</p> <pre><code>spec: containers: - name: mycontainer image: myregistry.azurecr.io/org/myimage imagePullSecrets: - name: project-azure-reg-creds </code></pre> <p>In the example above <code>project-azure-reg-creds</code> matches the name of the registry credential you added in Rancher. Also note, that your deployment must be created in a namespace assigned to the project.</p>
<p>Following the documentation and the provided example here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job</a></p> <p>I run <code>kubectl apply -f job.yaml</code></p> <pre><code>kubectl apply -f job.yaml
job.batch/pi created
</code></pre> <p>Monitoring the job with get pods: <code>pi-fts6q 1/2 Running 0 52s</code></p> <p>I always see 1/2 Running even after the job is complete, and checking the logs shows it is completed.</p> <p>How can I get the job to show a completed status? The job will stay in a running state showing no completions forever.</p> <pre><code>Parallelism:    1
Completions:    1
Start Time:     Thu, 06 Jun 2019 16:21:36 -0500
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
</code></pre> <p>It seems the underlying pod that did the work completed, but the actual job-controller stays alive forever.</p>
<p>The problem is that the Istio proxy sidecar keeps running after the Job's main container finishes, so the pod stays at 1/2 Running and the Job never reports a completion. At the time, this was an incomplete part of the proxy-agent implementation.</p> <p>The proposed fix is a <strong>'/quitquitquit'</strong> handler on the proxy agent. With that available, you can manually send a curl or HTTP request to localhost as the last step of the job to stop the sidecar once the work is done (see the sketch below).</p> <p>More information you can find here: <a href="https://github.com/istio/istio/issues/6324" rel="nofollow noreferrer"><code>istio-issue</code></a></p>
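<p>For newer Istio releases, a hedged sketch of that workaround - it assumes the sidecar's pilot-agent serves its admin endpoints on port 15020, which depends on your Istio version, and the workload image/command are placeholders:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl                       # placeholder; the image needs curl available
        command: [&quot;/bin/sh&quot;, &quot;-c&quot;]
        args:
        # run the real workload, then ask the Istio sidecar to exit
        - perl -Mbignum=bpi -wle 'print bpi(2000)' &amp;&amp; curl -fsS -X POST http://127.0.0.1:15020/quitquitquit
      restartPolicy: Never
</code></pre>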
<p>I deployed Traefik helm chart and created IngressRoute for dashboard and Middleware for Basic Auth, instead of dashboard I see 404 error.</p> <p>Ingress also returns 404.</p> <p>IngressRoute and Ingress also don't work with other services</p> <p>Traefik - 2.7.1 k8s - v1.22.8-gke.202 (GKE Autopilot)</p> <p>Helm values:</p> <pre class="lang-yaml prettyprint-override"><code>additionalArguments: - &quot;--log.level=DEBUG&quot; - &quot;--entrypoints.web.http.redirections.entryPoint.to=:443&quot; - &quot;--providers.file.filename=/config/dynamic.yaml&quot; volumes: - name: tls-cert mountPath: &quot;/certs&quot; type: secret - name: traefik-config mountPath: &quot;/config&quot; type: configMap service: spec: externalTrafficPolicy: Local loadBalancerIP: &quot;xxx.xxx.xxx.xxx&quot; ingressRoute: dashboard: enabled: false </code></pre> <p>Configmap:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: traefik-config namespace: ingress data: dynamic.yaml: | tls: stores: default: defaultCertificate: certFile: '/certs/tls.crt' keyFile: '/certs/tls.key' </code></pre> <p>And IngressRoute:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: dashboard namespace: ingress spec: entryPoints: - websecure routes: - match: Host(`traefik.example.domain`) kind: Rule services: - name: api@internal kind: TraefikService middlewares: - name: admin-auth namespace: ingress --- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: admin-auth spec: basicAuth: namespace: ingress secret: ingress-authsecret --- apiVersion: v1 kind: Secret metadata: name: ingress-authsecret namespace: ingress data: users: some-base64-encoded-credentials </code></pre>
<p>Solution:</p> <p>The IngressRoute was ignored because no TLS configuration was provided.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: ingress
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.example.domain`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: admin-auth
          namespace: ingress
  tls:
    secretName: tls-cert # here
</code></pre> <p>So I removed the default-certificate ConfigMap and added the <code>tls</code> parameter with the secret name, as shown above, instead of relying on the dynamic configuration.</p>
<p>Kubernetes is not able to resolve DNS. Containers/Pods are not able to access the Internet.</p> <p>I have a Kubernetes 2-node cluster on separate AWS EC2 instances (t2.medium). Container networking has been done using Flannel version flannel:v0.10.0-amd64 (image). Kubernetes version: 1.15.3</p> <p>DNS Logs <a href="https://i.stack.imgur.com/CzuxA.png" rel="nofollow noreferrer">DNS Logs</a></p> <p><a href="https://i.stack.imgur.com/dVjXx.png" rel="nofollow noreferrer">nodes</a></p> <p><a href="https://i.stack.imgur.com/Elg2a.png" rel="nofollow noreferrer">Kubernetes svc:</a></p> <p><a href="https://i.stack.imgur.com/yX6UE.png" rel="nofollow noreferrer">enter image description here</a></p> <p><a href="https://i.stack.imgur.com/yOv5I.png" rel="nofollow noreferrer">enter image description here</a></p> <p>At times when I delete the core-dns pods, the DNS issue gets resolved for some time, but it is not consistent. Please suggest what can be done. I think the flannel mapping may have something to do with this. Please let me know if any other information is also needed. </p>
<p>Errors such as the one you get: <code>nslookup: can't resolve 'kubernetes.default'</code> indicate that you have a problem with the coredns/kube-dns add-on or the associated Services.</p> <p>Please check if you followed these steps to debug DNS: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#check-if-the-dns-pod-is-running" rel="nofollow noreferrer">coredns</a>.</p> <p>It also seems that DNS inside busybox does not work properly.</p> <p>Try to use busybox images &lt;= <strong>1.28.4</strong></p> <p>Change the pod configuration file:</p> <pre><code>  containers:
  - name: busybox-image
    image: busybox:1.28.3
</code></pre> <p>Learn more about the most common Kubernetes DNS issues: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="nofollow noreferrer">kubernetes-dns</a>.</p>
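<p>As a quick way to verify the state of cluster DNS, the debugging guide linked above boils down to commands like these (the dnsutils manifest URL comes from the official docs; adjust namespaces and labels to your cluster):</p> <pre><code># run a test pod with proper DNS tooling
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

# check resolution of an in-cluster name from that pod
kubectl exec -ti dnsutils -- nslookup kubernetes.default

# inspect the DNS pods and their logs
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

# verify the DNS service and its endpoints exist
kubectl get svc,endpoints kube-dns -n kube-system
</code></pre>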
<p>When using <code>nginx-ingress</code> in Kubernetes, how can I define a custom port which should be used for HTTPS, instead of 443? My configuration looks as follows:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/tls-acme: "true" creationTimestamp: "2019-04-17T14:15:25Z" generation: 3 name: foo namespace: foo resourceVersion: "1535141" selfLink: /apis/extensions/v1beta1/namespaces/foo/ingresses/foo uid: f1b4dae9-6072-1239-a12a-fa161aff25ae spec: rules: - host: example.com http: paths: - backend: serviceName: foo servicePort: 80 path: / tls: - hosts: - example.com secretName: foo-ingress-tls status: loadBalancer: ingress: - {} </code></pre> <p>Preferably, I would like to be able to access the service via HTTPS on both the default 443 port and an additional custom port.</p>
<p>Unfortunately, ingress-nginx cannot listen on more than one HTTPS port. You can change the HTTPS port number using the <code>--https-port</code> command-line argument of the controller, but there can only be one HTTPS port. A sketch of how to set that flag is shown below.</p>
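<p>For illustration, this is roughly what changing the port looks like on the controller Deployment and its Service; the exact manifest layout depends on how you installed ingress-nginx (Helm values vs. raw manifests), so treat the names and image below as placeholders:</p> <pre><code># excerpt of the ingress-nginx controller Deployment
      containers:
      - name: nginx-ingress-controller
        image: &lt;nginx-ingress-controller-image&gt;
        args:
        - /nginx-ingress-controller
        - --https-port=8443          # controller now terminates TLS on 8443
---
# matching excerpt of the controller Service
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 8443                       # port exposed externally
    targetPort: 8443                 # must match --https-port
</code></pre>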
<p>Github repo: <a href="https://github.com/jonesrussell/portfolio-sapper" rel="nofollow noreferrer">https://github.com/jonesrussell/portfolio-sapper</a></p> <p>Here is my Github Action YML:</p> <pre class="lang-yaml prettyprint-override"><code>name: Build and Deploy on: push: branches: - main jobs: export-docker: runs-on: ubuntu-latest steps: - name: Set up QEMU uses: docker/setup-qemu-action@v1 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v1 - name: Login to DockerHub uses: docker/login-action@v1 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_TOKEN }} - name: Build and push id: docker_build uses: docker/build-push-action@v2 with: push: true tags: jonesrussell/portfolio-app:latest - name: Image digest run: echo ${{ steps.docker_build.outputs.digest }} - name: DigitalOcean Kubernetes uses: matootie/[email protected] with: personalAccessToken: ${{ secrets.DIGITALOCEAN_TOKEN }} clusterName: galaxy-k8s </code></pre> <p>What I can't figure out is how to: kubectl apply -f kubernetes.yml with matootie/[email protected]</p> <p>Cheers</p>
<p>What <code>matootie/dokube</code> does is it configures <code>kubectl</code> and adds it to the runner path. After you use it in your workflows you can add any additional steps using the <code>kubectl</code> executable as you would anything else, just as I have added to your original workflow file:</p> <pre class="lang-yaml prettyprint-override"><code>name: Build and Deploy on: push: branches: - main jobs: export-docker: runs-on: ubuntu-latest steps: ... - name: Apply changes run: kubectl apply -f kubernetes.yml </code></pre> <p><strong>Disclaimer:</strong> I'm the author of the repository.</p>
<p>I have an existing GitHub project. I want to create/add a <code>helm</code> folder to the project to store the Helm yaml files. I want to reference this GitHub project/folder to act like a Helm repo in my local/dev environment. I know I can add the charts to my local/default Helm repo. The use case is: if another developer checks out the code from GitHub and needs to work on the charts, then he can run <code>helm install</code> directly from the working folder. The <code>helm.sh</code> website has instructions for adding a <code>gh-pages</code> branch, but I am wondering if I can avoid it.</p> <p>Can I use an existing GitHub project and add it via the <code>helm repo add</code> command? </p>
<p>Firstly, make sure that you have a fully functional Helm repository. The tricky part is to access it as if it were a simple HTTP server hosting raw files. Fortunately GitHub provides such a feature using <code>raw.githubusercontent.com</code>. In order for Helm to be able to pull files from such a repository you need to provide it with your GitHub username and token (Personal Access Token):</p> <pre><code>&gt; helm repo add my-github-helmrepo 'https://raw.githubusercontent.com/my_organization/my-github-helm-repo/master/' --username &lt;your_github_username&gt; --password &lt;your_github_token&gt;
&gt; helm repo update
&gt; helm repo list
NAME                URL
stable              https://kubernetes-charts.storage.googleapis.com
local               http://127.0.0.1:8879/charts
my-github-helmrepo  https://raw.githubusercontent.com/my_organization/my-github-helm-repo/master/

&gt; helm search my-app
NAME                              CHART VERSION   APP VERSION   DESCRIPTION
my-github-helmrepo/my-app-chart   0.1.0           1.0           A Helm chart for Kubernetes
</code></pre> <p><strong>These are the steps for adding new packages to an existing repository</strong></p> <p>If you want to add a new package to an existing repository simply:</p> <p><strong>1.</strong> Place the new package in your local repository root</p> <p><strong>2.</strong> Execute: <code>helm repo index .</code> This will detect the new file/folder and update the repository index.</p> <p><strong>3.</strong> Commit and push your new package </p> <p><strong>4.</strong> Finally execute the command: <code>helm repo update</code></p> <p><strong>Security aspect</strong></p> <p>It is important to realize where Helm actually stores your <a href="https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line" rel="noreferrer">GitHub token</a>. It is stored as plain text in <code>~/.helm/repository/repositories.yaml</code>. In this case it is a good idea to generate a token with as few permissions as possible.</p> <p>Take a look here: <a href="https://blog.softwaremill.com/hosting-helm-private-repository-from-github-ff3fa940d0b7" rel="noreferrer">hosting helm private repository</a>.</p>
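<p>To make step 1 concrete, a chart is usually packaged and the index regenerated with the raw GitHub URL before pushing. A small sketch, where the chart path and organization names are placeholders:</p> <pre><code># inside your local clone of the repository
&gt; helm package ./charts/my-app-chart        # produces my-app-chart-0.1.0.tgz in the current directory

# regenerate index.yaml so entries point at the raw GitHub URLs
&gt; helm repo index . --url https://raw.githubusercontent.com/my_organization/my-github-helm-repo/master/

&gt; git add . &amp;&amp; git commit -m 'Add my-app-chart 0.1.0' &amp;&amp; git push
&gt; helm repo update
</code></pre>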
<p>I am using Ambassador as the ingress controller for my kubernetes setup. I need to connect to multiple ports on my containers, for example, I have a RethinkDB container and I need to connect to port 8085 for its web-ui, port 28015 for RDB-API and port 29015 for adding nodes to Rethinkdb and clustering.</p> <p>I tried different configuration but they didn't work. The configurations that I tried: 1- This configuration works for the latest mapping which means if I replace 8085 mapping with 29015 and put it at the end I am able to access the web-ui but not other parts and so on.</p> <pre><code>getambassador.io/config: | --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping prefix: /rethinkdb:28015/ service: rethinkdb:28015 labels: ambassador: - request_label: - rethinkdb:28015 --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping - prefix: /rethinkdb:8085/ service: rethinkdb:8085 labels: ambassador: - request_label: - rethinkdb:8085 --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping prefix: /rethinkdb:29015/ service: rethinkdb:29015 labels: ambassador: - request_label: - rethinkdb:29015 </code></pre> <p>2- This one didn't work at all</p> <pre><code>getambassador.io/config: | --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping - prefix: /rethinkdb:8085/ service: rethinkdb:8085 - prefix: /rethinkdb:29015/ service: rethinkdb:29015 - prefix: /rethinkdb:28015/ service: rethinkdb:28015 </code></pre> <p>How shall I configure Ambassador so I can have access to all ports of my container?</p>
<p>Try to put different names of Mappings like in example below:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rethinkdb annotations: getambassador.io/config: | --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping prefix: /rethinkdb:28015/ service: rethinkdb:28015 labels: ambassador: - request_label: - rethinkdb:28015 --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping1 prefix: /rethinkdb:8085/ service: rethinkdb:8085 labels: ambassador: - request_label: - rethinkdb:8085 --- apiVersion: ambassador/v1 kind: Mapping name: rethinkdb_mapping2 prefix: /rethinkdb:29015/ service: rethinkdb:29015 labels: ambassador: - request_label: - rethinkdb:29015 spec: type: ClusterIP clusterIP: None </code></pre> <p>Remember to put right name of service into service label inside mappings definition.</p> <p><strong>Note on indents and correct syntax.</strong></p> <p>I hope it helps.</p>
<p>While learning Kubernetes going by the book Kubernetes for developer, I am stuck at this point now.</p> <p>I am trying to start Rabbitmq pod but but after lot of troubleshooting I have managed to get to this point but do not get clue where do I fix to get rid of the permission denied error.</p> <pre><code># kubectl get pods NAME READY STATUS RESTARTS AGE rabbitmq-56c67d8d7d-s8vp5 0/1 CrashLoopBackOff 5 5m40s </code></pre> <p>if I look at the logs of this contianer thats where I found:</p> <pre><code># kubectl logs rabbitmq-56c67d8d7d-s8vp5 21:22:58.49 21:22:58.50 Welcome to the Bitnami rabbitmq container 21:22:58.51 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq 21:22:58.51 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues 21:22:58.52 Send us your feedback at [email protected] 21:22:58.52 21:22:58.52 INFO ==&gt; ** Starting RabbitMQ setup ** 21:22:58.54 INFO ==&gt; Validating settings in RABBITMQ_* env vars.. 21:22:58.56 INFO ==&gt; Initializing RabbitMQ... 21:22:58.57 INFO ==&gt; Generating random cookie mkdir: cannot create directory ‘/bitnami/rabbitmq’: Permission denied </code></pre> <p>Here is my <strong>rabbitmq-deployment.yml</strong></p> <pre><code>--- # EXPORT SERVICE INTERFACE kind: Service apiVersion: v1 metadata: name: message-queue labels: app: rabbitmq role: master tier: queue spec: ports: - port: 5672 targetPort: 5672 selector: app: rabbitmq role: master tier: queue --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rabbitmq-pv-claim labels: app: rabbitmq spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: rabbitmq spec: replicas: 1 selector: matchLabels: app: rabbitmq role: master tier: queue template: metadata: labels: app: rabbitmq role: master tier: queue spec: nodeSelector: boardType: x86vm containers: - name: rabbitmq image: bitnami/rabbitmq:3.7 envFrom: - configMapRef: name: bitnami-rabbitmq-config ports: - name: queue containerPort: 5672 - name: queue-mgmt containerPort: 15672 livenessProbe: exec: command: - rabbitmqctl - status initialDelaySeconds: 120 timeoutSeconds: 5 failureThreshold: 6 readinessProbe: exec: command: - rabbitmqctl - status initialDelaySeconds: 10 timeoutSeconds: 3 periodSeconds: 5 volumeMounts: - name: rabbitmq-storage mountPath: /bitnami volumes: - name: rabbitmq-storage persistentVolumeClaim: claimName: rabbitmq-pv-claim </code></pre> <p>This is the <strong>rabbitmq-storage-class.yml</strong></p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: rabbitmq-storage-class labels: app: rabbitmq provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p>and <strong>persistant-volume.yml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: rabbitmq-pv-claim labels: app: rabbitmq spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteOnce hostPath: path: /bitnami </code></pre> <p>Logs:</p> <pre><code># kubectl describe pods rabbitmq-5f7f787479-fpg6g Name: rabbitmq-5f7f787479-fpg6g Namespace: default Priority: 0 Node: kube-worker-vm2/192.168.1.36 Start Time: Mon, 03 May 2021 12:29:17 +0100 Labels: app=rabbitmq pod-template-hash=5f7f787479 role=master tier=queue Annotations: cni.projectcalico.org/podIP: 192.168.222.4/32 cni.projectcalico.org/podIPs: 192.168.222.4/32 Status: Running IP: 192.168.222.4 IPs: IP: 192.168.222.4 Controlled By: 
ReplicaSet/rabbitmq-5f7f787479 Containers: rabbitmq: Container ID: docker://bbdbb9c5d4b6737519d3dcf4bdda242a7fe904f2336334afe686e9b204fd6d5c Image: bitnami/rabbitmq:3.7 Image ID: docker-pullable://bitnami/rabbitmq@sha256:8b6057997b74ebc81e934dd6c94e9da745635faa2d79b382cfda27b9176e0e6d Ports: 5672/TCP, 15672/TCP Host Ports: 0/TCP, 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Mon, 03 May 2021 12:30:48 +0100 Finished: Mon, 03 May 2021 12:30:48 +0100 Ready: False Restart Count: 4 Liveness: exec [rabbitmqctl status] delay=120s timeout=5s period=10s #success=1 #failure=6 Readiness: exec [rabbitmqctl status] delay=10s timeout=3s period=5s #success=1 #failure=3 Environment Variables from: bitnami-rabbitmq-config ConfigMap Optional: false Environment: &lt;none&gt; Mounts: /bitnami from rabbitmq-storage (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qmxr (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: rabbitmq-storage: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: rabbitmq-pv-claim ReadOnly: false default-token-4qmxr: Type: Secret (a volume populated by a Secret) SecretName: default-token-4qmxr Optional: false QoS Class: BestEffort Node-Selectors: boardType=x86vm Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m20s default-scheduler Successfully assigned default/rabbitmq-5f7f787479-fpg6g to kube-worker-vm2 Normal Created 96s (x4 over 2m18s) kubelet Created container rabbitmq Normal Started 95s (x4 over 2m17s) kubelet Started container rabbitmq Warning BackOff 65s (x12 over 2m16s) kubelet Back-off restarting failed container Normal Pulled 50s (x5 over 2m18s) kubelet Container image &quot;bitnami/rabbitmq:3.7&quot; already present on machine </code></pre>
<p>When creating an image, the image creator often chooses to use a user other than root to run the process. This is the case for your image, and that user does not have write permissions on the <code>/bitnami</code> directory. You can verify this by commenting out the volume.</p> <p>To fix the issue, you need to set a security context for your pod: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod</a></p> <p>Not sure about the exact syntax, but something like this should do the trick:</p> <pre><code>spec:
  securityContext:
    fsGroup: 1001 # the group id used by the non-root user in the image
  nodeSelector:
    boardType: x86vm
  containers:
    - name: rabbitmq
      image: bitnami/rabbitmq:3.7
      envFrom:
        - configMapRef:
            name: bitnami-rabbitmq-config
</code></pre> <p>This makes the directory writeable by the user in the image.</p> <p>Another thing: a Deployment is for stateless services by design. If you have state to keep, always use a StatefulSet. It's very similar to a Deployment from a configuration point of view, but Kubernetes treats it very differently (a minimal sketch follows below). See <a href="https://www.youtube.com/watch?v=Vrxr-7rjkvM" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Vrxr-7rjkvM</a> for a good explanation.</p>
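<p>A minimal sketch of what that could look like for this RabbitMQ setup - it reuses the image, ConfigMap and the 'manual' storage class from the question, but the remaining field values are illustrative assumptions rather than a drop-in replacement:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: message-queue          # ideally a headless Service, used for stable network identities
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      securityContext:
        fsGroup: 1001                 # make mounted volumes writable for the Bitnami user
      containers:
      - name: rabbitmq
        image: bitnami/rabbitmq:3.7
        envFrom:
        - configMapRef:
            name: bitnami-rabbitmq-config
        ports:
        - containerPort: 5672
        volumeMounts:
        - name: rabbitmq-storage
          mountPath: /bitnami
  volumeClaimTemplates:               # one PVC is created automatically per replica
  - metadata:
      name: rabbitmq-storage
    spec:
      storageClassName: manual
      accessModes: [ &quot;ReadWriteOnce&quot; ]
      resources:
        requests:
          storage: 1Gi
</code></pre>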
<p>I have a node app that loads its data based on domain name. domains are configured with a CNAME like <code>app.service.com</code> (which is the node app). </p> <p>The Node app sees the request domain and sends a request to API to get app data.</p> <p>for example: <code>domain.com</code> CNAME <code>app.service.com</code> -> then node app asks api for domain.com data</p> <p>the problem is setting up HTTPS (with letsencrypt) for all the domains. I think cert-manager can help but have no idea how to automate this without the need to manually change config file for each new domain.</p> <p>or is there a better way to achieve this in Kubernetes?</p>
<p>The standard method to support more than one domain name and / or subdomain names is to use one SSL Certificate and implement SAN (Subject Alternative Names). The extra domain names are stored together in the SAN. All SSL certificates support SAN, but not all certificate authorities will issue multi-domain certificates. Let's Encrypt does support <a href="https://www.ssl.com/faqs/what-is-a-san-certificate/" rel="nofollow noreferrer">SAN</a> so their certificates will meet your goal.</p> <p>First, you have to create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">job</a> in our cluster that uses an image to run a shell script. The script will spin up an HTTP service, create the certs, and save them into a predefined secret. Your domain and email are environment variables, so be sure to fill those in:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: letsencrypt-job labels: app: letsencrypt spec: template: metadata: name: letsencrypt labels: app: letsencrypt spec: containers: # Bash script that starts an http server and launches certbot # Fork of github.com/sjenning/kube-nginx-letsencrypt - image: quay.io/hiphipjorge/kube-nginx-letsencrypt:latest name: letsencrypt imagePullPolicy: Always ports: - name: letsencrypt containerPort: 80 env: - name: DOMAINS value: kubernetes-letsencrypt.jorge.fail # Domain you want to use. CHANGE ME! - name: EMAIL value: [email protected] # Your email. CHANGE ME! - name: SECRET value: letsencrypt-certs restartPolicy: Never </code></pre> <p>You have a job running, so you can create a service to direct traffic to this job:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: letsencrypt spec: selector: app: letsencrypt ports: - protocol: "TCP" port: 80 </code></pre> <p>This job will now be able to run, but you still have three things we need to do before our job actually succeeds and we’re able to access our service over HTTPs.</p> <p>First, you need to create a secret for the job to actually update and store our certs. Since you don’t have any certs when we create the secret, the secret will just start empty.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: letsencrypt-certs type: Opaque # Create an empty secret (with no data) in order for the update to work </code></pre> <p>Second, you’ll have to add the secret to the Ingress controller in order for it to fetch the certs. Remember that it is the Ingress controller that knows about our host, which is why our certs need to be specified here. The addition of our secret to the Ingress controller will look something like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: "kubernetes-demo-app-ingress-service" spec: tls: - hosts: - kubernetes-letsencrypt.jorge.fail # Your host. CHANGE ME secretName: letsencrypt-certs # Name of the secret rules: </code></pre> <p>Finally you have to redirect traffic through the host, down to the job, through our Nginx deployment. In order to do that you’ll add a new route and an upstream to our Nginx configuration: This could be done through the Ingress controller by adding a <code>/.well-known/*</code> entry and redirecting it to the letsencrypt service. 
That’s more complex because you would also have to add a health route to the job, so instead you’ll just redirect traffic through the Nginx deployment:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config data: default.conf: | </code></pre> <p>...</p> <pre><code> # Add upstream for letsencrypt job upstream letsencrypt { server letsencrypt:80 max_fails=0 fail_timeout=1s; } server { listen 80; </code></pre> <p>...</p> <pre><code> # Redirect all traffic in /.well-known/ to letsencrypt location ^~ /.well-known/acme-challenge/ { proxy_pass http://letsencrypt; } } </code></pre> <p>After you apply all these changes, destroy your Nginx Pod(s) in order to make sure that the ConfigMap gets updated correctly in the new Pods:</p> <pre><code>$ kubectl get pods | grep ngi | awk '{print $1}' | xargs kubectl delete pods </code></pre> <p>Make sure it works.</p> <p>In order to verify that this works, you should make sure the job actually succeeded. You can do this by getting the job through kubectl, you can also check the Kubernetes dashboard.</p> <pre><code>$ kubectl get job letsencrypt-job NAME DESIRED SUCCESSFUL AGE letsencrypt-job 1 1 1d </code></pre> <p>You can also check the secret to make sure the certs have been properly populated. You can do this through kubectl or through the dashboard:</p> <pre><code>$ kubectl describe secret letsencrypt-certs Name: letsencrypt-certs Namespace: default Labels: &lt;none&gt; Annotations: Type: Opaque Data ==== tls.crt: 3493 bytes tls.key: 1704 bytes </code></pre> <p>Now that as you can see that the certs have been successfully created, you can do the very last step in this whole process. For the Ingress controller to pick up the change in the secret (from having no data to having the certs), you need to update it so it gets reloaded. In order to do that, we’ll just add a timestamp as a label to the Ingress controller:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: "kubernetes-demo-app-ingress-service" labels: # Timestamp used in order to force reload of the secret last_updated: "1494099933" ... </code></pre> <p>Please take a look at: <a href="https://runnable.com/blog/how-to-use-lets-encrypt-on-kubernetes" rel="nofollow noreferrer">kubernetes-letsencrypt</a>.</p>
<p>I can't share a PVC with multiple pods in GCP (with the GCP CLI).</p> <p>When I apply the config with <code>ReadWriteOnce</code> it works at once:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: &lt;name&gt;
  namespace: &lt;namespace&gt;
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
</code></pre> <p>But with <code>ReadWriteMany</code> the status hangs on Pending.</p> <p>Any ideas?</p>
<p>It is normal that the config works at once when you apply it with <strong>ReadWriteOnce</strong> - that's the rule.</p> <p><strong>ReadWriteOnce</strong> is the most common use case for Persistent Disks and works as the default access mode for most applications. </p> <p>GCE persistent disks do not support <strong>ReadWriteMany</strong>!</p> <p><a href="https://i.stack.imgur.com/8sjgv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8sjgv.png" alt="enter image description here"></a> Instead of <strong>ReadWriteMany</strong>, you can just use <strong>ReadOnlyMany.</strong> More information you can find here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">persistentdisk</a>. But as you see, the result will not be the same as what you want.</p> <p>If you want to share volumes you could try some workarounds:</p> <p>You can create services.</p> <p>Your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. Multi-service access to data is an anti-pattern akin to global variables in OOP.</p> <p>If you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure or add extra features.</p> <p>Also try to use <code>high-performance, fully managed file storage</code> (Cloud Filestore) for applications that require a file system interface and a shared file system; a sketch of how to consume it is shown below. More information you can find here: <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">access-fileshare</a>.</p>
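<p>For illustration, a Filestore share is consumed from Kubernetes like a plain NFS volume, which does support ReadWriteMany. A minimal sketch - the IP address, share name and sizes are placeholders for your Filestore instance:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver-pv
spec:
  capacity:
    storage: 1Ti                      # should not exceed the Filestore share size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2                  # placeholder: Filestore instance IP
    path: /share1                     # placeholder: Filestore share name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: &quot;&quot;
  volumeName: fileserver-pv
  resources:
    requests:
      storage: 1Ti
</code></pre>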
<p>I have to edit elasticsearch.yml in order to create a backup (setting the path.repo like this is necessary): <code>path.repo: /mnt/backup</code></p> <p>But I have elasticsearch running on Kubernetes, and I would like to set the path.repo from a statefulset or something similar to all pods at the same time. Can anyone tell me how to do that? Thanks</p> <p>I tried to do this with configmap like this: <a href="https://discuss.elastic.co/t/modify-elastic-yml-file-in-kubernetes-pod/103612" rel="nofollow noreferrer">https://discuss.elastic.co/t/modify-elastic-yml-file-in-kubernetes-pod/103612</a></p> <p>but when I restarted the pod it threw an error: <code>/usr/share/elasticsearch/bin/run.sh: line 28: ./config/elasticsearch.yml: Read-only file system</code></p>
<p>ConfigMaps are mounted to pods as read-only filesystems, this behavior cannot be changed. </p> <p>If you want to be able to modify config once for all pods then you have to mount config/ directory as a ReadWriteMany persistent volume (NFS, GlusterFS and so on). </p>
<p>Please! Is it possible to squash multiple helm templates into one and then refer to it as a one-liner in a deployment file?</p> <p>EG: </p> <pre><code> {{- define "foo.deploy" -}} value: {{- include "foo.1" . | nindent 6 }} {{- include "foo.2" . | nindent 6 }} {{- include "foo.3" . | nindent 6 }} </code></pre> <p>And then do an <strong>{{- include "foo.deploy" . }}</strong> in a separate deployment file.</p> <p>Which should then contain foo.1, foo.2 and foo.3, and their respective definitions.</p> <p>As opposed to literally writing out all three different 'includes' especially if you've got loads.</p> <p>Much appreciated,</p> <p>Thanks,</p>
<p>A <strong>named template</strong> (sometimes called a partial or a subtemplate) is simply a template defined inside of a file, and given a name. We’ll see two ways to create them, and a few different ways to use them. Template names are global. As a result of this, if two templates are declared with the same name the last occurrence will be the one that is used. Since templates in subcharts are compiled together with top-level templates, it is best to name your templates with chart-specific names. A popular naming convention is to prefix each defined template with the name of the chart: {{ define &quot;mychart.labels&quot; }}.</p> <p>More information about named templates you can find here: <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="nofollow noreferrer">named-template</a>.</p> <p>A proper configuration file should look like:</p> <pre><code>{{/* Generate basic labels */}}
{{- define &quot;mychart.labels&quot; }}
  labels:
    generator: helm
    date: {{ now | htmlDate }}
{{- end }}
</code></pre> <p>In your case, that part of the file should look like:</p> <pre><code>{{- define &quot;foo.deploy&quot; -}}
{{- include &quot;foo.1&quot; . | nindent 6 }}
{{- include &quot;foo.2&quot; . | nindent 6 }}
{{- include &quot;foo.3&quot; . | nindent 6 }}
{{ end }}
</code></pre>
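<p>To tie it back to the question: once <code>foo.deploy</code> is defined, the deployment template only needs a single include. A minimal sketch, where the <code>value:</code> field and the <code>nindent</code> level are assumptions taken from the snippet in the question and depend on where in the manifest it is included:</p> <pre><code># templates/deployment.yaml (excerpt): one include expands all three fragments
    value:
      {{- include &quot;foo.deploy&quot; . | nindent 6 }}
</code></pre>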
<p>I have an Ingress setup and initially I used a placeholder name, &quot;thesis.info&quot;. Now I would like change this hostname but whenever I to change it I just end up getting 404 errors.</p> <ol> <li>Change the spec.tls.rules.host value in the yaml to the new hostname</li> <li>Change CN value which openssl uses for the crt and key that are generated for TLS</li> <li>Edit the value /etc/hosts value on my local machine</li> </ol> <p>Is there a step I am missing that could be causing a problem. I am baffled by why it works with one value but not the other.</p> <p>Below is the ingress yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend-ingress namespace: thesis annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/add-base-url: &quot;true&quot; nginx.ingress.kubernetes.io/service-upstream: &quot;true&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; spec: tls: - hosts: - thesis secretName: ingress-tls rules: - host: pod1out.ie http: paths: - path: / pathType: Prefix backend: service: name: frontend port: number: 3000 --- </code></pre>
<p>Most likely, you can find a hint on what is going on in the nginx logs. If you have access, you can retrieve them with something like this:</p> <pre><code>kubectl -n &lt;ingress-namespace&gt; get pods
# should be one or more nginx pods
kubectl -n &lt;ingress-namespace&gt; logs &lt;nginx-pod&gt;
</code></pre> <p>Not sure if this is the only issue, but according to the documentation, the host in 'tls' has to explicitly match the host in the rules:</p> <pre><code>spec:
  tls:
  - hosts:
    - pod1out.ie
    secretName: ingress-tls
  rules:
  - host: pod1out.ie
</code></pre> <p>Before struggling with tls, I would recommend making the http route itself work (e.g. by creating another ingress resource), and if this works with the host you want, go for tls.</p>
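<p>To test a renamed host without touching DNS or /etc/hosts, curl can map the name to the ingress IP directly; the IP below is a placeholder for your ingress controller's external address:</p> <pre><code># send SNI/Host 'pod1out.ie' straight to the ingress controller
curl -vk --resolve pod1out.ie:443:&lt;ingress-external-ip&gt; https://pod1out.ie/
</code></pre>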
<p>I'm trying to move a number of docker containers on a linux server to a test kubernets-based deployment running on a different linux machine where I've installed kubernetes as a <a href="https://k3s.io/" rel="nofollow noreferrer">k3s</a> instance inside a vagrant virtual machine.</p> <p>One of these containers is a mariadb container instance, with a bind volume mapped</p> <p>This is the relevant portion of the docker-compose I'm using:</p> <pre><code> academy-db: image: 'docker.io/bitnami/mariadb:10.3-debian-10' container_name: academy-db environment: - ALLOW_EMPTY_PASSWORD=yes - MARIADB_USER=bn_moodle - MARIADB_DATABASE=bitnami_moodle volumes: - type: bind source: ./volumes/moodle/mariadb target: /bitnami/mariadb ports: - '3306:3306' </code></pre> <p>Note that this works correctly. (the container is used by another application container which connects to it and reads data from the db without problems).</p> <p>I then tried to convert this to a kubernetes configuration, copying the volume folder to the destination machine and using the following kubernetes .yaml deployment files. This includes a deployment .yaml, a persistent volume claim and a persistent volume, as well as a NodePort service to make the container accessible. For the data volume, I'm using a simple hostPath volume pointing to the contents copied from the docker-compose's bind mounts.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: academy-db spec: replicas: 1 selector: matchLabels: name: academy-db strategy: type: Recreate template: metadata: labels: name: academy-db spec: containers: - env: - name: ALLOW_EMPTY_PASSWORD value: &quot;yes&quot; - name: MARIADB_DATABASE value: bitnami_moodle - name: MARIADB_USER value: bn_moodle image: docker.io/bitnami/mariadb:10.3-debian-10 name: academy-db ports: - containerPort: 3306 resources: {} volumeMounts: - mountPath: /bitnami/mariadb name: academy-db-claim restartPolicy: Always volumes: - name: academy-db-claim persistentVolumeClaim: claimName: academy-db-claim --- apiVersion: v1 kind: PersistentVolume metadata: name: academy-db-pv labels: type: local spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain hostPath: path: &quot;&lt;...full path to deployment folder on the server...&gt;/volumes/moodle/mariadb&quot; --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: academy-db-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: &quot;&quot; volumeName: academy-db-pv --- apiVersion: v1 kind: Service metadata: name: academy-db-service spec: type: NodePort ports: - name: &quot;3306&quot; port: 3306 targetPort: 3306 selector: name: academy-db </code></pre> <p>after applying the deployment, everything seems to work fine, in the sense that with <code>kubectl get ...</code> the pod and the volumes seem to be running correctly</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE academy-db-5547cdbc5-65k79 1/1 Running 9 15d . . . kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE academy-db-pv 1Gi RWO Retain Bound default/academy-db-claim 15d . . . kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE academy-db-claim Bound academy-db-pv 1Gi RWO 15d . . . 
</code></pre> <p>This is the pod's log:</p> <pre><code>kubectl logs pod/academy-db-5547cdbc5-65k79 mariadb 10:32:05.66 mariadb 10:32:05.66 Welcome to the Bitnami mariadb container mariadb 10:32:05.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb mariadb 10:32:05.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues mariadb 10:32:05.66 mariadb 10:32:05.67 INFO ==&gt; ** Starting MariaDB setup ** mariadb 10:32:05.68 INFO ==&gt; Validating settings in MYSQL_*/MARIADB_* env vars mariadb 10:32:05.68 WARN ==&gt; You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment. mariadb 10:32:05.69 INFO ==&gt; Initializing mariadb database mariadb 10:32:05.69 WARN ==&gt; The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file. mariadb 10:32:05.70 INFO ==&gt; Using persisted data mariadb 10:32:05.71 INFO ==&gt; Running mysql_upgrade mariadb 10:32:05.71 INFO ==&gt; Starting mariadb in background </code></pre> <p>and the describe pod command:</p> <pre><code>Name: academy-db-5547cdbc5-65k79 Namespace: default Priority: 0 Node: zdmp-kube/192.168.33.99 Start Time: Tue, 22 Dec 2020 13:33:43 +0000 Labels: name=academy-db pod-template-hash=5547cdbc5 Annotations: &lt;none&gt; Status: Running IP: 10.42.0.237 IPs: IP: 10.42.0.237 Controlled By: ReplicaSet/academy-db-5547cdbc5 Containers: academy-db: Container ID: containerd://68af105f15a1f503bbae8a83f1b0a38546a84d5e3188029f539b9c50257d2f9a Image: docker.io/bitnami/mariadb:10.3-debian-10 Image ID: docker.io/bitnami/mariadb@sha256:1d8ca1757baf64758e7f13becc947b9479494128969af5c0abb0ef544bc08815 Port: 3306/TCP Host Port: 0/TCP State: Running Started: Thu, 07 Jan 2021 10:32:05 +0000 Last State: Terminated Reason: Error Exit Code: 1 Started: Thu, 07 Jan 2021 10:22:03 +0000 Finished: Thu, 07 Jan 2021 10:32:05 +0000 Ready: True Restart Count: 9 Environment: ALLOW_EMPTY_PASSWORD: yes MARIADB_DATABASE: bitnami_moodle MARIADB_USER: bn_moodle MARIADB_PASSWORD: bitnami Mounts: /bitnami/mariadb from academy-db-claim (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-x28jh (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: academy-db-claim: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: academy-db-claim ReadOnly: false default-token-x28jh: Type: Secret (a volume populated by a Secret) SecretName: default-token-x28jh Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 15d (x8 over 15d) kubelet Container image &quot;docker.io/bitnami/mariadb:10.3-debian-10&quot; already present on machine Normal Created 15d (x8 over 15d) kubelet Created container academy-db Normal Started 15d (x8 over 15d) kubelet Started container academy-db Normal SandboxChanged 18m kubelet Pod sandbox changed, it will be killed and re-created. 
Normal Pulled 8m14s (x2 over 18m) kubelet Container image &quot;docker.io/bitnami/mariadb:10.3-debian-10&quot; already present on machine Normal Created 8m14s (x2 over 18m) kubelet Created container academy-db Normal Started 8m14s (x2 over 18m) kubelet Started container academy-db </code></pre> <p>Later, though, I notice that the client application has problems in connecting. After some investigation I've concluded that though the pod is running, the mariadb process running inside it could have crashed just after startup. If I enter the container with <code>kubectl exec</code> and try to run for instance the mysql client I get:</p> <pre><code>kubectl exec -it pod/academy-db-5547cdbc5-65k79 -- /bin/bash I have no name!@academy-db-5547cdbc5-65k79:/$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2) </code></pre> <p>Any idea of what could cause the problem, or how can I investigate further the issue? (Note: I'm not an expert in Kubernetes, but started only recently to learn it)</p> <p><strong>Edit</strong>: Following @Novo's comment, I tried to delete the volume folder and let mariadb recreate the deployment from scratch. Now my pod doesn't even start, terminating in <code>CrashLoopBackOff</code> !</p> <p>By comparing the pod logs I notice that in the previous mariabd log there was a message:</p> <pre><code>... mariadb 10:32:05.69 WARN ==&gt; The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file. mariadb 10:32:05.70 INFO ==&gt; Using persisted data mariadb 10:32:05.71 INFO ==&gt; Running mysql_upgrade mariadb 10:32:05.71 INFO ==&gt; Starting mariadb in background </code></pre> <p>Now replaced with</p> <pre><code>... mariadb 14:15:57.32 INFO ==&gt; Updating 'my.cnf' with custom configuration mariadb 14:15:57.32 INFO ==&gt; Setting user option mariadb 14:15:57.35 INFO ==&gt; Installing database </code></pre> <p>Could it be that the issue is related with some access right problem to the volume folders in the host vagrant machine?</p>
<p>By default, hostPath directories are created with permission 755, owned by the user and group of the kubelet. To use the directory, you can try adding the following to your deployment:</p> <pre><code>  spec:
    securityContext:
      fsGroup: &lt;gid&gt;
</code></pre> <p>Where gid is the group used by the process in your container.</p> <p>Also, you could fix the issue on the host itself by changing the permissions of the folder you want to mount into the container:</p> <pre><code>chown -R &lt;uid&gt;:&lt;gid&gt; /path/to/volume
</code></pre> <p>where uid and gid are the userId and groupId from your app.</p> <pre><code>chmod -R 777 /path/to/volume
</code></pre> <p>This should solve your issue.</p> <p>But overall, a deployment is not what you want to create in this case, because deployments should not have state. For stateful apps, there are 'StatefulSets' in Kubernetes. Use those together with a 'VolumeClaimTemplate' plus spec.securityContext.fsGroup, and k3s will create the persistent volume and the persistent volume claim for you, using its default storage class, which is local storage (on your node).</p>
<p>I am using Docker Desktop on Windows and have created a local Kubernetes cluster. I've been following this (<a href="https://blog.sourcerer.io/a-kubernetes-quick-start-for-people-who-know-just-enough-about-docker-to-get-by-71c5933b4633#8787" rel="nofollow noreferrer">quick start guide</a>) and am running into issues identifying my external IP. When creating a service I'm supposed to list the "master server's IP address".</p> <p>I've identified the master node <code>kubectl get node</code>:</p> <pre><code>NAME STATUS ROLES AGE VERSION docker-desktop Ready master 11m v1.14.7 </code></pre> <p>Then used <code>kubectl describe node docker-desktop</code>...but an external IP is not listed anywhere.</p> <p>Where can I find this value?</p>
<p>Use the following command to see more information about the nodes:</p> <pre><code>kubectl get nodes -o wide
</code></pre> <p>or</p> <pre><code>kubectl get nodes -o json
</code></pre> <p>You'll be able to see the internal-ip and external-ip.</p> <p>PS: In my cluster, the internal-ip works as the external-ip, even though the external-ip is listed as none.</p>
<p>I’d like to use Kubernetes, as I’m reading everywhere : “Kubernetes is, without a doubt, the leading container orchestrator available today.“</p> <p>My issue here is with the networking. I need to expose external IP to each of the pods. I need that my pods are seen as if they were traditional VM or HW servers. Meaning that I need all the ports to be exposed.</p> <p>What I see so far, is I can expose only a limited list of ports.</p> <p>Am I correct ? Or do I miss something ?</p> <p>Cheers, Raoul</p>
<p>In Kubernetes, you will need a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> to communicate with pods. To expose the pods outside the Kubernetes cluster, you can use a k8s service of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> type.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30000
    name: my-port-8080
  - port: 8081
    nodePort: 30001
    name: my-port-8081
  - port: 8082
    nodePort: 30002
    name: my-port-8082
</code></pre> <p>Then you will be able to reach your pods at <code>https://&lt;node-ip&gt;:nodePort</code>. For in-cluster communication, you can use the service's DNS name: <code>&lt;service-name&gt;.&lt;namespace&gt;.svc:PORT</code></p> <p><strong>Update:</strong></p> <p>Take a look at this guide: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">Using a Service to Expose Your App</a></p>
<p>I've created a cluster with <em>minikube</em> </p> <pre><code>minikube start </code></pre> <p>Applied this yaml manifest:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: gateway-deployment spec: selector: matchLabels: app: gateway replicas: 1 template: metadata: labels: app: gateway spec: containers: - name: gateway image: docker_gateway imagePullPolicy: Never ports: - containerPort: 4001 protocol: TCP --- apiVersion: v1 kind: Service metadata: name: gateway spec: selector: app: gateway ports: - protocol: TCP port: 4001 </code></pre> <p>And my GO app in the container <code>docker_gateway</code> is just a gin http server with one route</p> <pre><code>package main import "github.com/gin-gonic/gin" func main() { r := gin.Default() r.GET("/hello", func(c *gin.Context) { c.JSON(200, gin.H{ "message": "hello", }) }) server = &amp;http.Server{ Addr: ":4001", Handler: r, } server.ListenAndServe() } </code></pre> <p>In Postman I make requests to 192.168.252.130:4001/hello and get a responses</p> <p>But Kubernetes Pod's logs in Kubernetes <em>don't print</em> those requests. I expect to get this:</p> <pre><code>[GIN] 2019/10/25 - 14:17:20 | 200 | 1.115µs | 192.168.252.1| GET /hello </code></pre> <p>But an interesting thing is <em>when I add Ingress</em></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress spec: backend: serviceName: gateway servicePort: 4001 </code></pre> <p>I am able to make requests to 192.168.252.130/hello and 192.168.252.130:4001/hello And <em>without</em> the port Pod's logs print requests, but <em>with</em> the port - they don't.</p> <pre><code>[GIN] 2019/10/25 - 14:19:13 | 200 | 2.433µs | 192.168.252.1| GET /hello </code></pre>
<p>It's because you cannot access a Kubernetes service of <code>ClusterIP</code> type from outside the cluster (in your case, from outside minikube).</p> <p>Learn more about service types <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a></p> <p>To access your service from outside, change your service to <code>NodePort</code> type.</p> <p>Something like:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    nodePort: 30036
    port: 4001
  type: NodePort
</code></pre> <p>Then you will be able to access it at <code>http://&lt;minikube-ip&gt;:30036/</code></p>
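<p>With minikube there is also a convenient shortcut to print a reachable URL for a NodePort service; the service name here is the one from the manifest above:</p> <pre><code># prints something like http://&lt;minikube-ip&gt;:30036
minikube service gateway --url
</code></pre>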
<p>It might be silly however I couldn't easily find any documentation or commands which I can use to list all the groups, resources and verbs which I can use to construct my custom roles for k8s deployment. Usually the api documents will have some info about rbac permission however the k8s api doc doesn't really have the details. For e.g. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#-strong-read-operations-strong--60" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#-strong-read-operations-strong--60</a> pod resource has 6 types of read operations and 6 types of write operations however if I see the permission set of cluster admin role (kubectl describe clusterrole admin) the it gives me only these verbs assigned to role</p> <pre><code>pods [] [] [create delete deletecollection get list patch update watch] </code></pre> <p>Now I'm wondering what should be my reference if I want to create my own custom roles with very specific groups, resources and verbs. Any direction or help would be grate. </p>
<p>To get a full list of API groups and resources in your cluster you may execute:</p> <pre><code>kubectl api-resources
</code></pre> <p>The list of verbs is pretty much standard and you already got it.</p>
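<p>The same command can also print the supported verbs per resource, which is handy when writing RBAC rules; there is a companion command for listing API group/version pairs:</p> <pre><code># adds APIVERSION, NAMESPACED, KIND and VERBS columns
kubectl api-resources -o wide

# list all API group/version pairs served by the cluster
kubectl api-versions
</code></pre>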
<p>Disclaimer: First time I use Prometheus.</p> <p>I am trying to send a Slack notification every time a Job ends successfully. </p> <p>To achieve this, I installed kube-state-metrics, Prometheus and AlertManager.</p> <p>Then I created the following rule:</p> <pre><code>rules: - alert: KubeJobCompleted annotations: identifier: '{{ $labels.instance }}' summary: Job Completed Successfully description: Job *{{ $labels.namespace }}/{{ $labels.job_name }}* is completed successfully. expr: | kube_job_spec_completions{job="kube-state-metrics"} - kube_job_status_succeeded{job="kube-state-metrics"} == 0 labels: severity: information </code></pre> <p>And added the AlertManager receiver text (template) :</p> <pre><code>{{ define "custom_slack_message" }} {{ range .Alerts }} {{ .Annotations.description }} {{ end }} {{ end }} </code></pre> <p>My current result: Everytime a new job completes successfully, I receive a Slack notification with the list of all Job that completed successfully.</p> <p>I don't mind receiving the whole list at first but after that I would like to receive notifications that contain only the newly completed job(s) in the specified group interval.</p> <p>Is it possible?</p>
<p>Just add an extra line to the rule so that it only reports the most recently completed job(s):</p> <p>The line <code>for: 10m</code> restricts the alert to just the job(s) completed within the last 10 minutes:</p> <pre><code>rules:
- alert: KubeJobCompleted
  annotations:
    identifier: '{{ $labels.instance }}'
    summary: Job Completed Successfully
    description: Job *{{ $labels.namespace }}/{{ $labels.job_name }}* is completed successfully.
  expr: |
    kube_job_spec_completions{job="kube-state-metrics"} - kube_job_status_succeeded{job="kube-state-metrics"} == 0
  for: 10m
  labels:
    severity: information
</code></pre>
<p>Can I know why kubectl run sometimes create deployment and sometimes pod. </p> <p>You can see 1st one creates pod and 2nd one creates deployment. only diff is --restart=Never</p> <pre><code> // 1 chams@master:~/yml$ kubectl run ng --image=ngnix --command --restart=Never --dry-run -o yaml apiVersion: v1 kind: Pod .. status: {} //2 chams@master:~/yml$ kubectl run ng --image=ngnix --command --dry-run -o yaml kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: run: ng name: ng .. status: {} </code></pre>
<p>By default the <code>kubectl run</code> command creates a <strong>Deployment</strong>.</p> <p>Using the <code>kubectl run</code> command you can create and run a particular image, possibly replicated. It creates a deployment or a job to manage the created container(s).</p> <p>The difference in your case is that the first command includes the restart-policy argument.</p> <p>If the value of the restart policy is set to '<strong>Never</strong>', a regular <strong>pod</strong> is created instead of a deployment (in that case the replica count must be 1). The default is '<strong>Always</strong>'; for CronJobs it is 'Never'.</p> <p>Try to use the command:</p> <pre><code>$ kubectl run --generator=run-pod/v1 ng --image=ngnix --command --dry-run -o yaml
</code></pre> <p>instead of</p> <pre><code>$ kubectl run ng --image=ngnix --command --dry-run -o yaml
</code></pre> <p>to avoid the statement <code>"kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead."</code></p> <p>More information you can find here: <a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">docker-kubectl</a>, <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">kubectl-run</a>.</p>
<p>Is that possible to deploy an openshift in DMZ zone ( Restricted zone ).What are the challenges i will face?.What are the things i have to do in DMZ zone network?</p>
<p><strong>You can deploy Kubernetes and OpenShift in DMZ. You can also add DMZ in front of Kubernetes and OpenShift.</strong></p> <p>The Kubernetes and OpenShift network model is a flat SDN model. All pods get IP addresses from the same network CIDR and live in the same logical network regardless of which node they reside on.</p> <p>We have ways to control network traffic within the SDN using the NetworkPolicy API. NetworkPolicies in OpenShift represent firewall rules and the NetworkPolicy API allows for a great deal of flexibility when defining these rules.</p> <p>With NetworkPolicies it is possible to create zones, but one can also be much more granular in the definition of the firewall rules. Separate firewall rules per pod are possible and this concept is also known as microsegmentation (see this post for more details on NetworkPolicy to achieve microsegmentation).</p> <p>The DMZ is in certain aspects a special zone. This is the only zone exposed to inbound traffic coming from outside the organization. It usually contains software such as IDS (intrusion detection systems), WAFs (Web Application Firewalls), secure reverse proxies, static web content servers, firewalls and load balancers. Some of this software is normally installed as an appliance and may not be easy to containerize and thus would not generally be hosted within OpenShift.</p> <p>Regardless of the zone, communication internal to a specific zone is generally unrestricted.</p> <p>Variations on this architecture are common and large enterprises tend to have several dedicated networks. But the principle of purpose-specific networks protected by firewall rules always applies.</p> <p>In general, traffic is supposed to flow only in one direction between two networks (as in an osmotic membrane), but often exceptions to this rule are necessary to support special use cases.</p> <p>Useful article: <a href="https://blog.openshift.com/openshift-and-network-security-zones-coexistence-approaches/" rel="nofollow noreferrer">openshift-and-network-security-zones-coexistence-approache</a>.</p> <p>It's very secure if you follow <a href="https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/" rel="nofollow noreferrer">standard security practices</a> for your cluster. But nothing is 100% secure. So adding a DMZ would help reduce your attack vectors. </p> <p>In terms of protecting your Ingress from outside, you can limit your access for your external load balancer just to HTTPS, and most people do that but note that HTTPS and your application itself can also have vulnerabilities.</p> <p>As for pods and workloads, you can increase security (at some performance cost) using things like a well-crafted <a href="https://github.com/docker/labs/tree/master/security/seccomp" rel="nofollow noreferrer">seccomp</a> profile and or adding the right <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">capabilities</a> in your pod security context. 
You can also add more security with <a href="https://docs.docker.com/engine/security/apparmor/#load-and-unload-profiles" rel="nofollow noreferrer">AppArmor</a> or <a href="https://www.projectatomic.io/docs/docker-and-selinux/" rel="nofollow noreferrer">SELinux</a>, but lots of people don't since it can get very complicated.</p> <p>There are also other alternatives to Docker for sandboxing your pods more easily (still early in their lifecycle as of this writing): <a href="https://katacontainers.io/" rel="nofollow noreferrer">Kata Containers</a>, <a href="https://nabla-containers.github.io/" rel="nofollow noreferrer">Nabla Containers</a> and <a href="https://github.com/google/gvisor" rel="nofollow noreferrer">gVisor</a>.</p> <p>Take a look at: <a href="https://stackoverflow.com/questions/52781946/should-i-add-a-dmz-in-front-of-kubernetes/52782700#52782700">dmz-kubernetes</a>.</p> <p>Here is a similar problem: <a href="https://www.reddit.com/r/kubernetes/comments/8t4itf/kubernetes_and_the_dmz/" rel="nofollow noreferrer">dmz</a>.</p>
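<p>To make the zoning idea concrete, here is a minimal, hypothetical NetworkPolicy that only allows ingress to pods labelled <code>zone: internal</code> from pods labelled <code>zone: dmz</code>; the namespace and labels are assumptions for the example, not something your cluster already has:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-dmz-only
  namespace: internal-apps        # assumed namespace
spec:
  podSelector:
    matchLabels:
      zone: internal              # assumed label on your internal pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              zone: dmz           # assumed label on your DMZ pods
</code></pre> <p>Keep in mind that NetworkPolicies are only enforced if the cluster's network plugin supports them.</p>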
<p>Has anyone been able to get minikube to work with --driver=podman?</p> <p>I have tried Fedora 30,31, CentOS7,8, RHEL7,8 all with the same results. </p> <pre><code># # minikube start --driver=podman --container-runtime=cri-o --cri-socket=/var/run/crio/crio.sock 😄 minikube v1.9.2 on Fedora 30 (vbox/amd64) ✨ Using the podman (experimental) driver based on user configuration 👍 Starting control plane node m01 in cluster minikube 🚜 Pulling base image ... E0409 16:50:17.654306 30363 cache.go:114] Error downloading kic artifacts: error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? 🤦 StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds 🔄 Restarting existing podman container for "minikube" ... 💣 Failed to start podman container. "minikube start" may fix it.: driver start: get kic state: "podman inspect -f {{.State.Status}} minikube" failed: exit status 125: Error: error getting image "minikube": unable to find a name and tag match for minikube in repotags: no such image 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose </code></pre> <p>It feels like cri-o needs to be installed and running. I have done that but still get the same results.</p> <p>Update 1:</p> <pre><code># minikube start --driver=podman --container-runtime=cri-o --cri-socket=/var/run/crio/crio.sock --network-plugin=cni --enable-default-cni --v=1 😄 minikube v1.9.2 on Fedora 30 ✨ Using the podman (experimental) driver based on user configuration 👍 Starting control plane node m01 in cluster minikube 🚜 Pulling base image ... E0415 12:38:49.764297 24903 cache.go:114] Error downloading kic artifacts: error loading image: Error response from daemon: 404 page not found 🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: failed args: [run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume /root/.minikube/machines/minikube/var:/var:exec --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8] output: Error: invalid option type "exec" : exit status 125 🔄 Restarting existing podman container for "minikube" ... 💣 Failed to start podman container. "minikube start" may fix it.: driver start: get kic state: "podman inspect -f {{.State.Status}} minikube" failed: exit status 125: Error: error getting image "minikube": unable to find a name and tag match for minikube in repotags: no such image 😿 minikube is exiting due to an error. If the above message is not useful, open an issue: 👉 https://github.com/kubernetes/minikube/issues/new/choose </code></pre> <p>I'm happy to flip over to Fedora 31 or CentOS if needed.</p> <p>Update 2: Set selinux to permissive with same failure.</p> <p>Update 3: Per suggestion from @vbatts the minikube start cmd is pretty close to working. It seems crio sock lives in /var/run/crio/ so I updated that path. 
Now I'm getting the following...</p> <pre><code>[root@test ~]# minikube start --network-plugin=cni --enable-default-cni --extra-config=kubelet.container-runtime=remote --extra-config=kubelet.container-runtime-endpoint=/var/run/crio/crio.sock --extra-config=kubelet.image-service-endpoint=/var/run/crio/crio.sock --driver=podman 😄 minikube v1.9.2 on Fedora 30 ✨ Using the podman (experimental) driver based on user configuration 👍 Starting control plane node m01 in cluster minikube 🚜 Pulling base image ... 💾 Downloading Kubernetes v1.18.0 preload ... &gt; preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB E0416 09:25:47.539842 1632 cache.go:114] Error downloading kic artifacts: error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? 🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ... ▪ kubelet.container-runtime=remote ▪ kubelet.container-runtime-endpoint=/var/run/crio/crio.sock ▪ kubelet.image-service-endpoint=/var/run/crio/crio.sock 🌟 Enabling addons: default-storageclass, storage-provisioner 🏄 Done! kubectl is now configured to use "minikube" </code></pre> <p>And now these cmds hang ....</p> <pre><code>[root@test ~]# kubectl get nodes ^C [root@test ~]# minikube status E0416 09:30:30.741722 10795 api_server.go:169] unable to get freezer state: cat: /sys/fs/cgroup/freezer/libpod_parent/libpod-16c8b830eb8e4cb0baa576e98d8343fdab1dacea8db4a6a6d84bbb8fbc7c0f92/kubepods/burstable/pod7dd7509c8b924aaaebd697cbbc2aff89/aa2abeea32b056907d33590baf5fc0c213b718cc8b16b548f251326675f32337/freezer.state: No such file or directory Error: non zero exit code: 1: OCI runtime error ^C </code></pre>
<p>First, make sure that you configured <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">Minikube</a> properly: <a href="https://asciinema.org/a/142924" rel="nofollow noreferrer">minikube-configuration</a>.</p> <p>You have to specify the network plugin and enable it. It is also important to point the container-runtime and image-service endpoints at the CRI-O socket, <code>/var/run/crio/crio.sock</code>:</p> <pre><code>$ sudo minikube start \
    --network-plugin=cni \
    --enable-default-cni \
    --extra-config=kubelet.container-runtime=remote \
    --extra-config=kubelet.container-runtime-endpoint=/var/run/crio/crio.sock \
    --extra-config=kubelet.image-service-endpoint=/var/run/crio/crio.sock \
    --driver=podman
</code></pre> <p>Please take a look: <a href="https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/" rel="nofollow noreferrer">minikube-crio-podman</a>, <a href="https://books.google.pl/books?id=0JmZDwAAQBAJ&amp;pg=PA296&amp;lpg=PA296&amp;dq=crio+minikube+start&amp;source=bl&amp;ots=sxbiWGdCZE&amp;sig=ACfU3U3nA4G-w_A24nb1WU6S2hLdO9RX2A&amp;hl=en&amp;sa=X&amp;ved=2ahUKEwi8hP_51ezoAhXL_KQKHS8fCtkQ6AEwB3oECAwQOA#v=onepage&amp;q=crio%20minikube%20start&amp;f=false" rel="nofollow noreferrer">crio-minikube</a>.</p>
<p>For internal purposes, I'm building a dashboard application. In this dashboard I need to display some info about Kubernetes (running pods, clusters, etc.).</p> <p>I'm trying to call my Kubernetes API from my web app (from the browser). The URL of the API is <code>http://localhost:8001/api/v1/</code></p> <p>I'm getting an error when fetching data (CORS origin not allowed).</p> <p>I've searched the internet for hours trying to find a solution, but nothing is working. I know there are other Stack Overflow posts giving some solutions, but I'm not sure how to apply them. For example: <a href="https://stackoverflow.com/questions/34117640/enabling-cors-in-kubernetes-api">Enabling CORS in Kubernetes API</a></p> <p>Does any of you know how to allow CORS origins on the Kubernetes API (Docker for Windows)?</p> <p>Note: I'm using <code>kubectl proxy</code></p>
<p>You can edit the Kubernetes API server manifest to get CORS working.</p> <p>Add the <strong>--cors-allowed-origins=["http://*"]</strong> argument to <strong>/etc/default/kube-apiserver</strong> or <strong>/etc/kubernetes/manifests/kube-apiserver.yaml</strong>, depending on where your kube-apiserver configuration file is located. </p> <pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --cors-allowed-origins=["http://*"]
</code></pre> <p>Restart kube-apiserver.</p> <p>Then add the annotation <code>dns.alpha.kubernetes.io/external: "http://localhost:8001/api/v1/"</code> to your service configuration file and apply the changes.</p>
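<p>Once the API server is back up with the flag, you can check from a shell whether the CORS headers are being returned; the Origin value below is just an example of whatever origin your dashboard is served from, and you should see an <code>Access-Control-Allow-Origin</code> header in the response:</p> <pre><code>$ curl -i -H "Origin: http://localhost:3000" http://localhost:8001/api/v1/
</code></pre>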
<p>I have created a ClusterIssuer that uses Vault and then issued a certificate through it, but the <em>Ready</em> status of the certificate is blank. Nothing appears in the events or the cert-manager pod logs, and it has not created a secret either. </p> <pre><code>kubectl get cert
NAMESPACE   NAME          READY   SECRET        AGE
default     example-com           example-com   139m
</code></pre> <p>clusterissuer.yaml</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: vault-clusterissuer
spec:
  vault:
    path: pki_int/sign/&lt;role name&gt;
    server: https://vault-cluster.example.com:8200
    caBundle: &lt;base64 encoded cabundle pem&gt;
    auth:
      appRole:
        path: approle
        roleId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
        secretRef:
          name: cert-manager-vault-approle
          key: secretId
</code></pre> <p>The role name mentioned in the path is the same as the role created in Vault under pki_int.</p> <p>certificate.yaml</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com
  issuerRef:
    name: vault-clusterissuer
    kind: ClusterIssuer
  commonName: abc.example.com
  dnsNames:
    - abc.example.com
</code></pre> <p>Since it is not generating any messages or logs, I am not sure where to start troubleshooting.</p> <p>Does the value of <em>path</em> in clusterissuer.yaml look right to you?</p> <p>Thank you in advance</p>
<p><strong>CertificateConditionReady</strong> indicates that a certificate is ready for use.</p> <p>This is defined as:</p> <ul> <li><p>The target secret exists</p> </li> <li><p>The target secret contains a certificate that has not expired</p> </li> <li><p>The target secret contains a private key valid for the certificate</p> </li> <li><p>The <code>commonName</code> and <code>dnsNames</code> attributes match those specified on the Certificate</p> </li> </ul> <p>I think the issue is the wrong <code>dnsNames</code> value defined in the <code>certificate.yaml</code> file.</p> <p>Your certificate configuration file:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com
  issuerRef:
    name: vault-clusterissuer
    kind: ClusterIssuer
  commonName: abc.example.com
  dnsNames:
    - abc.example.com
</code></pre> <p>The <code>dnsNames</code> field should have the value <code>www.abc.example.com</code>, not <code>abc.example.com</code>.</p> <p>The final version should look like:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com
  issuerRef:
    name: vault-clusterissuer
    kind: ClusterIssuer
  commonName: abc.example.com
  dnsNames:
    - www.abc.example.com
</code></pre> <p>Also remember that the <code>path</code> field is the Vault role path of the PKI backend and <em>server</em> is the Vault server base URL. The <code>path</code> <strong>MUST</strong> use the Vault <code>sign</code> endpoint.</p> <p>Please take a look: <a href="https://docs.cert-manager.io/en/release-0.8/tasks/issuers/setup-vault.html" rel="nofollow noreferrer">issuer-vault-setup</a>, <a href="https://github.com/jetstack/cert-manager/pull/1114/files" rel="nofollow noreferrer">cert-clusterissuer</a>.</p>
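<p>If the Certificate still shows no Ready condition after fixing the spec, it usually helps to inspect the resources directly; a few commands that are typically useful (the <code>cert-manager</code> namespace below is an assumption; use whatever namespace you installed cert-manager into):</p> <pre><code>$ kubectl describe certificate example-com
$ kubectl describe clusterissuer vault-clusterissuer
$ kubectl logs -n cert-manager deploy/cert-manager
</code></pre>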
<pre><code>922:johndoe:db-operator:(master)λ kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.12-gke.14", GitCommit:"021f778af7f1bd160d8fba226510f7ef9c9742f7", GitTreeState:"clean", BuildDate:"2019-03-30T19:30:57Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I created a custom resource definition along with an operator to control that resource, but the operator gets a 'forbidden' error in runtime.</p> <p>The custom resource definition <code>yaml</code>, the <code>role.yaml</code> and <code>role_bidning.yaml</code> are:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: db-operator rules: - apiGroups: [''] resources: ['pods', 'configmaps'] verbs: ['get'] - apiGroups: [''] resources: ['configmaps'] verbs: ['create'] - apiGroups: [''] resources: ['secrets'] verbs: ['*'] - apiGroups: [''] resources: ['databaseservices.app.example.com', 'databaseservices', 'DatabaseServices'] kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: db-operator subjects: - kind: ServiceAccount name: db-operator namespace: default roleRef: kind: Role name: db-operator apiGroup: rbac.authorization.k8s.io apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: databaseservices.app.example.com spec: group: app.example.com names: kind: DatabaseService listKind: DatabaseServiceList plural: databaseservices singular: databaseservice scope: Namespaced subresources: status: {} validation: openAPIV3Schema: properties: apiVersion: description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources' type: string kind: description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds' type: string metadata: type: object spec: type: object status: type: object version: v1alpha1 versions: - name: v1alpha1 served: true storage: true </code></pre> <ul> <li>Notice that I'm trying to reference the custom resource by plural name, by name with group as well as by kind.</li> </ul> <p>As visible in the Role definition, permissions for other resources seem to work.</p> <p>However the operator always errors with:</p> <pre><code>E0425 09:02:04.687611 1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha1.DatabaseService: databaseservices.app.example.com is forbidden: User "system:serviceaccount:default:db-operator" cannot list databaseservices.app.example.com in the namespace "default" </code></pre> <p>Any idea what might be causing this?</p>
<p>Try this Role definition for your custom resource:</p> <pre><code>- apiGroups: ['app.example.com']
  resources: ['databaseservices']
  verbs: ['*']
</code></pre> <p>Custom resources belong to the API group defined in the CRD (here <code>app.example.com</code>), not to the core group <code>''</code>, and they are referenced by their plural name. The rule also needs verbs that include <code>list</code>, which is exactly what the operator is being denied in the error message.</p>
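<p>After applying the updated Role and RoleBinding, you can sanity-check the permission from the service account's point of view (namespace and service account taken from your manifests):</p> <pre><code>$ kubectl auth can-i list databaseservices.app.example.com \
    -n default --as=system:serviceaccount:default:db-operator
</code></pre>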
<p>I want to deploy a front-end pod that connects to a backend pod (running MySQL) and stores its data in a persistent volume.</p>
<p><strong>Connecting the frontend pod with the backend pod</strong></p> <p>Create a service for your deployment and point your app to that service name. The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">selectors</a> to find the Pods that it routes traffic to.</p> <p>First, configure the MySQL service as a <code>ClusterIP</code> service. It will be private, visible only to other services. This can be done by removing the line with the <code>type</code> option.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: app-api-mysql-svc
spec:
  selector:
    app: app-api-mysql
  ports:
    - protocol: TCP
      port: 80
      targetPort: [the port exposed by the mysql pod]
</code></pre> <p>Now that you have your backend, you can create a frontend that connects to it. The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is “<strong>app-api-mysql-svc</strong>”, which is the value of the <code>name</code> field in the preceding Service configuration file.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend   # must match the labels on your frontend pods
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  type: LoadBalancer
</code></pre> <p>Similar to the backend, the frontend has a service too (its selector must match the labels of your frontend pods). The configuration for the Service has <code>type: LoadBalancer</code>, which means that the Service uses the default load balancer of your cloud provider.</p> <p>You can also <strong>proxy</strong> all your backend calls through your frontend server.</p> <p>If you are routing (or willing to route) all your microservice/backend calls through the server side of your frontend, and you are deploying both your frontend and backend in the same k8s cluster and the same namespace, then you can use the KubeDNS add-on (if it is not available in your k8s cluster yet, you can check with the k8s admin) to resolve the backend service name to its IP. From your frontend <strong>server</strong>, your backend service will always be resolvable by its name.</p> <p>Since you have KubeDNS in your k8s cluster, and both the frontend and backend services reside in the same cluster and the same namespace, you can make use of k8s' built-in service discovery mechanism. The backend and frontend services will be discoverable to each other by name. That means you can simply use the DNS name "backend" to reach your backend service from your frontend <strong>pods</strong>. So just proxy all the backend requests through your frontend nginx to your upstream backend service. In the frontend nginx <strong>pods</strong>, the backend service's IP will be resolvable for the domain name "backend". This will save you the CORS headache too. 
This setup is portable, meaning it doesn't matter whether you are deploying in dev, stage or prod: the name "backend" will always resolve to the corresponding backend.</p> <p>You can find more information here: <a href="https://stackoverflow.com/questions/45165418/kubernetes-communication-between-frontend-and-backend">backend-frontend</a>, <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">frontend-backend-pod-connection</a>.</p> <p><strong>Connecting a Persistent Volume to the pod</strong></p> <p>MySQL requires a PersistentVolume to store data. Its PersistentVolumeClaim will be created at the deployment step.</p> <p>Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster’s default StorageClass is used instead.</p> <p>When a <strong>PersistentVolumeClaim</strong> is created, a <strong>PersistentVolume</strong> is dynamically provisioned based on the <strong>StorageClass</strong> configuration.</p> <p>Here you can find a detailed guide on how to configure a MySQL pod with a Persistent Volume: <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">pv-mysql-wordpress</a>.</p>
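<p>As a minimal sketch (the claim name and size are assumptions), the PersistentVolumeClaim for the MySQL pod could look like this, mounted at <code>/var/lib/mysql</code> in the MySQL container:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
</code></pre> <p>The MySQL Deployment then references it under <code>spec.template.spec.volumes</code> with <code>persistentVolumeClaim.claimName: mysql-pv-claim</code> and mounts it via <code>volumeMounts</code>.</p>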
<p>I'm new to kubernetes and <strong>trying to deploy a VM using Kubernetes</strong>, using this <strong><a href="https://gist.github.com/rajatsingh25aug/5f522906f2606613927c5cd43d2b6c9e" rel="nofollow noreferrer">YAML</a></strong>. But when I do <strong><code>oc create -f &lt;yaml_link_above&gt;</code></strong>, I get an error:</p> <p><em><strong><code>The &quot;&quot; is invalid: : spec.template.spec.volumes.userData in body is a forbidden property</code></strong></em></p> <p>I don't see any problem with the formatting of the YAML, so maybe I'm missing something?</p>
<p>It seems that your dynamic provisioning doesn't work properly. Follow these steps <a href="https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/ceph_rbd_dynamic_example.html" rel="nofollow noreferrer">dynamics-provisioning-ceph</a> to configure a Ceph RBD dynamic StorageClass.</p> <p>Then check whether the PVC is created properly, and after that apply your VM config file.</p> <p>Here is some useful documentation: <a href="https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/virtualized-hardware-configuration.html" rel="nofollow noreferrer">hardware-configuration</a>, <a href="https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/disks-and-volumes.html" rel="nofollow noreferrer">disk-volumes</a>.</p>
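<p>A quick way to verify that dynamic provisioning works before applying the VM manifest (the PVC name and namespace are placeholders):</p> <pre><code>$ oc get storageclass
$ oc get pvc -n &lt;namespace&gt;
$ oc describe pvc &lt;pvc-name&gt; -n &lt;namespace&gt;
</code></pre> <p>The PVC should reach the <code>Bound</code> status; if it stays <code>Pending</code>, the StorageClass is not provisioning volumes.</p>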
<p>I have a Spring Boot application running on Kubernetes: a client application serving a React app and proxying requests to other services.</p> <p>That client app is available through an Ingress using an Nginx controller. I have a domain name targeting the Nginx controller service, and I've been able to generate a valid certificate and key with cert-manager from Let's Encrypt; the certificate and key are also automatically updated when necessary.</p> <p>Till now I used self-signed certificates generated with keytool to secure the communication between my different applications (I guess I can still use that tool for internal communications), but for the client app I need to use the Let's Encrypt generated key.</p> <p>Right now my client app does not use SSL (ssl.enable is false in my bootstrap.yml). So the communication between the Nginx Ingress controller and the client app is not secure, I think.</p> <p>A k8s secret has been created with a certificate and a key during the process, so I guess I can use it, but what is the best way? I would also like to benefit from the automatic update of the certificate if possible.</p> <p>Thanks for your advice</p>
<p>There are clients out there which re-use the private key used previously (certbot when used with the <strong>--reuse-key</strong> option, and also acme.sh). You should check the client you use and see whether it makes importing an existing private key possible, or at least not very difficult.</p> <p>Certbot would need an issued certificate first to re-use the key. What could be a working option is:</p> <ul> <li><p>install certbot (see <a href="https://certbot.eff.org/" rel="nofollow noreferrer">https://certbot.eff.org/</a>)</p></li> <li><p>get a certificate issued with certbot without caring about the keys, just get it working; use <strong>--staging</strong> for test certificates first</p></li> <li><p>manually exchange the PEM formatted private key in <strong>/etc/letsencrypt/archive/name-of-your-certificate/privkey1.pem</strong> with your own PEM formatted private key</p></li> <li><p>renew the certificate with certbot renew <strong>--reuse-key</strong></p></li> <li><p>check if the public key in the renewed certificate corresponds with your own public/private key</p></li> </ul> <p>If the above checks out (with the <strong>--staging</strong> option for testing), you can remove the test certificate and do the above again, but without <strong>--staging</strong>, to get a real working certificate.</p> <p>Useful documentation: <a href="https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/" rel="nofollow noreferrer">cert-manager</a>, <a href="https://certbot.eff.org/about/" rel="nofollow noreferrer">certbot</a>.</p>
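<p>A rough outline of the certbot flow described above (the domain is a placeholder, and you will need an authenticator such as <code>--standalone</code> or <code>--webroot</code> depending on your setup):</p> <pre><code># issue a test certificate first
$ certbot certonly --standalone --staging -d shop.example.com

# after swapping in your own PEM private key under /etc/letsencrypt/archive/...
$ certbot renew --reuse-key
</code></pre>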
<p>I am working on a system that is spread across both Digital Ocean and AWS. The Node.js instances are on a Kubernetes cluster on Digital Ocean, and the databases and S3 spaces are hosted on AWS. I was able to connect to the Kubernetes cluster using kubectl. Then I was able to do a 'terraform init' with no issues. But when I tried to do a 'terraform plan', I got this error:</p> <p>Error: Error retrieving Kubernetes cluster: GET <a href="https://api.digitalocean.com/v2/kubernetes/clusters/1234" rel="nofollow noreferrer">https://api.digitalocean.com/v2/kubernetes/clusters/1234</a>: 401 Unable to authenticate you.</p> <p>I am new to both Kubernetes and Terraform. Does Terraform expect the Kubernetes config information to be in a different place than where kubectl expects it? </p>
<p>You need a token so that Digital Ocean's servers know that you have permission to access your account. Follow the steps in the instructions <a href="https://www.digitalocean.com/docs/api/create-personal-access-token/" rel="nofollow noreferrer">creating-access-token</a> and copy the token to your clipboard. Remember to store it as an environment variable: <code>export TF_VAR_do_token=your-token</code>.</p> <p>Set environment variables:</p> <pre><code>export TF_VAR_do_token=your_digital_ocean_token
export TF_VAR_do_cluster_name=your_cluster_name
</code></pre> <p>Otherwise the problem is with the API token itself: create a new token and the operation will succeed.</p> <p>Here is a useful blog article about setting up a <a href="https://web.archive.org/web/20190314161613/https://ponderosa.io/blog/kubernetes/2019/03/13/terraform-cluster-create/" rel="nofollow noreferrer">Kubernetes cluster with Digital Ocean and Terraform</a>.</p>
<p>Before I begin, I would like to mention that I am on the free trial version of GKE. I have a simple server running in a GKE cluster. I have a service that I use to expose the server. I am trying to configure an Ingress controller and attach it to this service. </p> <p>Everything works perfectly if my service is of type LoadBalancer or NodePort. However, if my service is of type ClusterIP, I get an error saying </p> <pre><code>error while evaluating the ingress spec: service "default/simple-server" is type "ClusterIP" , expected "NodePort" or "LoadBalancer"
</code></pre> <p>GKE then stops trying to provision an IP for the ingress. Why can't I provision a service of type ClusterIP, and is there a workaround? </p> <p>I tried using <code>annotations.kubernetes.io/ingress.class: "nginx"</code> and it still didn't work.</p>
<p>The native GKE ingress controller does not support <code>ClusterIP</code>; it works only with services of type <code>LoadBalancer</code> or <code>NodePort</code>. Take a look at this <a href="https://github.com/kubernetes/kubernetes/issues/26508#issuecomment-222376886" rel="nofollow noreferrer">issue</a>.</p> <p>The non-native <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx</a> ingress controller does work with <code>ClusterIP</code>.</p>
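<p>If you go the NodePort route, a minimal Service for the example above might look like this (the selector and target port are assumptions based on your deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: simple-server
spec:
  type: NodePort
  selector:
    app: simple-server
  ports:
    - port: 80
      targetPort: 8080
</code></pre>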
<p>I have two kubernetes clusters on GKE: one public that handles interaction with the outside world and one private for internal use only. </p> <p>The public cluster needs to access some services on the private cluster and I have exposed these to the pods of the public cluster through <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">internal load balancers</a>. Currently I'm specifying the internal IP addresses for the load balancers to use and passing these IPs to the public pods, but I would prefer if the load balancers could choose any available internal IP addresses and I could pass their DNS names to the public pods.</p> <p><a href="https://cloud.google.com/load-balancing/docs/internal/dns-names" rel="noreferrer">Internal load balancer DNS</a> is available for regular internal load balancers that serve VMs and the DNS will be of the form <code>[SERVICE_LABEL].[FORWARDING_RULE_NAME].il4.[REGION].lb.[PROJECT_ID].internal</code>, but is there something available for internal load balancers on GKE? Or is there a workaround that would enable me to accomplish something similar?</p>
<p>Never heard of built-in DNS for load balancers in GKE, but we do it actually quite simply. We have <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">External DNS</a> Kubernetes service which manages DNS records for various things like load balancers and ingresses. What you may do:</p> <ol> <li>Create Cloud DNS internal zone. Make sure you integrate it with your VPC(s).</li> <li>Make sure your Kubernetes nodes service account has DNS Administrator (or super wide Editor) permissions.</li> <li>Install External DNS.</li> <li>Annotate your internal Load Balancer service with <code>external-dns.alpha.kubernetes.io/hostname=your.hostname.here</code></li> <li>Verify that DNS record was created and can be resolved within your VPC.</li> </ol>
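<p>For step 4, the annotation on the internal LoadBalancer Service ends up looking something like this (the hostname, selector and ports are assumptions for the example):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    external-dns.alpha.kubernetes.io/hostname: "my-service.internal.example.com"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre>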
<p>Could anyone explain to me the best way to add basic auth to a Kubernetes cluster deployment that is running a webapp on Google Cloud (GCP)?</p> <p>We are exposing it using: </p> <pre><code>kubectl expose deployment testSanbox --type=LoadBalancer --port 80 --target-port 80
</code></pre> <p>We don't need anything fancy as this is only a dev sandbox, but we don't want anyone to be able to reach it. It could be a single user/pass combo, or maybe we could use the Google credentials that we manage with IAM.</p> <p>Sorry, as you probably already noticed I'm not really experienced with Kubernetes or GCP.</p> <p>Thanks</p>
<p>If you are looking for HTTP basic auth, you can use NGINX and Ingress. Here are setup instructions: <a href="https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-services-with-ingress-tls-letsencrypt/" rel="nofollow noreferrer">authentication-ingress-nginx</a>, <a href="https://banzaicloud.com/blog/ingress-auth/" rel="nofollow noreferrer">ingress-auth</a>. </p> <p>In terms of security, however, HTTP basic authentication on its own is not a very secure method. The problem is that, unless SSL is strictly enforced throughout the entire data cycle, the credentials are transmitted in the open over insecure lines. This lends itself to man-in-the-middle attacks, where a user can simply capture the login data and authenticate via a copy-cat HTTP header attached to a malicious packet.</p> <p>Here is an overview of authorization in the official Kubernetes documentation: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">authorization-kubernetes</a>.</p> <p>If you are looking for better solutions, use API keys or an OAuth provider such as Google, Auth0, etc. (see developers.google.com/identity/protocols/OAuth2WebServer and developers.google.com/identity/protocols/OAuth2UserAgent). There are many options for authentication and authorization; here are explanations of the above terms: <a href="https://nordicapis.com/3-common-methods-api-authentication-explained/" rel="nofollow noreferrer">api-authentication</a>.</p> <p>An approach to authenticating users using Auth0 on GCP: <a href="https://cloud.google.com/endpoints/docs/openapi/authenticating-users-auth0" rel="nofollow noreferrer">authentication-gcp-app</a>.</p> <p>Please let me know if it helps.</p>
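<p>As a sketch of the NGINX Ingress basic-auth route (the username is a placeholder), you first create an htpasswd file and store it as a secret:</p> <pre><code>$ htpasswd -c auth devuser
$ kubectl create secret generic basic-auth --from-file=auth
</code></pre> <p>and then reference it from the Ingress with the ingress-nginx annotations:</p> <pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
</code></pre> <p>Note that this requires exposing the app through the NGINX Ingress controller rather than directly through the LoadBalancer Service created by <code>kubectl expose</code>.</p>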
<p>I have a Kubernetes cluster with some pods deployed (DB, frontend, Redis). A part that I can't fully grasp is what happens to the PVC after the pod is deleted.</p> <p>For example, if I delete POD_A which is bound to CLAIM_A, I know that CLAIM_A is not deleted automatically. If I then try to recreate the pod, it is attached back to the same PVC but all the data is missing.</p> <p>Can anyone explain what happens? I've looked at the official documentation but it is not making any sense at the moment.</p> <p>Any help is appreciated.</p>
<p>PVCs have a lifetime independent of pods. If the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">PV</a> still exists, it may be because it has ReclaimPolicy set to Retain, in which case it won't be deleted even if the PVC is gone. </p> <p>PersistentVolumes can have various reclaim policies, including “Retain”, “Recycle”, and “Delete”. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”. This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data. Notice that the <strong>RECLAIM POLICY</strong> is Delete (the default value), which is one of the two reclaim policies; the other one is Retain. (A third policy, Recycle, has been deprecated.) In the case of Delete, the PV is deleted automatically when the PVC is removed, and the data on the PVC will also be lost.</p> <p>In that case, it is more appropriate to use the “Retain” policy. With the “Retain” policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.</p> <p>This may also happen when the persistent volume is protected. You should be able to cross-verify this:</p> <p>Command:</p> <pre><code>$ kubectl describe pvc PVC_NAME | grep Finalizers
</code></pre> <p>Output:</p> <pre><code>Finalizers: [kubernetes.io/pvc-protection]
</code></pre> <p>You can fix this by setting finalizers to null using kubectl patch:</p> <pre><code>$ kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
</code></pre> <p><strong>EDIT:</strong></p> <p>A PersistentVolume can be mounted on a host in any way supported by the resource provider. Each PV gets its own set of access modes describing that specific PV’s capabilities.</p> <p>The access modes are:</p> <ul> <li>ReadWriteOnce – the volume can be mounted as read-write by a single node</li> <li>ReadOnlyMany – the volume can be mounted read-only by many nodes</li> <li>ReadWriteMany – the volume can be mounted as read-write by many nodes</li> </ul> <p>In the CLI, the access modes are abbreviated to:</p> <ul> <li>RWO - ReadWriteOnce</li> <li>ROX - ReadOnlyMany</li> <li>RWX - ReadWriteMany</li> </ul> <p>So if you recreated the pod and the scheduler put it on a different node, and your PV has its access mode set to ReadWriteOnce, it is normal that you cannot access your data.</p> <p>Claims use the same conventions as volumes when requesting storage with specific access modes. My advice is to edit the PV access mode to ReadWriteMany.</p> <pre><code>$ kubectl edit pv your_pv
</code></pre> <p>You should update the access mode in the PersistentVolume as shown below</p> <pre><code>  accessModes:
    - ReadWriteMany
</code></pre>
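<p>If you want to keep the data of an existing dynamically provisioned volume, you can also switch its reclaim policy to Retain before deleting the claim:</p> <pre><code>$ kubectl patch pv &lt;your-pv-name&gt; -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>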
<p>I set up a single node cluster using minikube. I have also configured a client pod inside this node and also a nodeport service to access the pod. But the service is unreachable on browser.</p> <p>Below are the config files for client pod and the nodeport service:</p> <p><strong>client-pod.yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: client-pod labels: component:web spec: containers: - name: client image: stephengrider/multi-worker ports: - containerPort: 9999 </code></pre> <p><strong>client-node-port.yaml</strong></p> <pre><code>kind: Service metadata: name: client-node-port spec: type: NodePort ports: - port: 3050 targetPort: 3000 nodePort: 31515 selector: component:web </code></pre> <p>I can see the status of both the pod and the service as running when I run the below commands:</p> <pre><code>&gt; kubectl get pods NAME READY STATUS RESTARTS AGE client-pod 1/1 Running 0 60m </code></pre> <pre><code>&gt; kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE client-node-port NodePort 10.99.14.107 &lt;none&gt; 3050:31515/TCP 63m </code></pre> <p>Then I found out the service url on the minikube cluster:</p> <pre><code>&gt; minikube service list |-------------|------------------|-----------------------------| | NAMESPACE | NAME | URL | |-------------|------------------|-----------------------------| | default | client-node-port | http://192.168.99.101:31515 | | default | hello-minikube | http://192.168.99.101:31324 | | default | kubernetes | No node port | | kube-system | kube-dns | No node port | |-------------|------------------|-----------------------------| </code></pre> <p>I was able to access hello-minikube service with the URL mentioned against it in the table. But I could not access the client-node-port service and it just says:</p> <pre><code>This site can’t be reached192.168.99.101 refused to connect. </code></pre> <p>How do I proceed?</p>
<blockquote> <p>ContainerPort is used to declare which of the container's ports the service should target.</p> <p>The target port in the Service spec specifies which port on the pod the service should hit when a request arrives.</p> </blockquote> <pre><code>spec:
  containers:
    - name: client
      image: stephengrider/multi-worker
      ports:
        - containerPort: 3000
</code></pre> <p>Since the <code>stephengrider/multi-worker</code> image you're using uses <code>3000</code> as its default port, set <code>containerPort</code> to <code>3000</code> so that it matches the service's <code>targetPort</code>. Update the pod yaml and it should work. </p> <p>N.B.: You should always make sure the ports your service targets are ports your pod's containers actually listen on.</p>
<p>We do not want to use Helm in our Kubernetes cluster, but we would like to have Istio. To me it looks like Istio can be installed on Kubernetes only with Helm.</p> <p>I guess I can copy all the Helm charts and substitute the Helm variables to produce Kubernetes-ready YAML files. But this is a lot of manual work that I do not want to do (for every new version as well).</p> <p>Any ideas if there is already a solution for this?</p>
<p>If you don't have Tiller in your cluster and you don't want to install it, you can use the installation method without Tiller (using only the client-side Helm binary): <a href="https://istio.io/docs/setup/kubernetes/install/helm/#option-1-install-with-helm-via-helm-template" rel="noreferrer">https://istio.io/docs/setup/kubernetes/install/helm/#option-1-install-with-helm-via-helm-template</a></p> <p>For example, to get a full Istio YAML manifest you can do</p> <pre><code>helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system &gt; istio.yaml
</code></pre> <p>If you want to upgrade, download a newer release of the Istio chart, do the same, and apply the rendered manifest to your cluster.</p>
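<p>A rough sketch of the rest of the flow, assuming the chart layout of an Istio release from that era (the init chart installs the CRDs, the main chart the control plane), applied with plain kubectl:</p> <pre><code>$ kubectl create namespace istio-system
$ kubectl apply -f istio.yaml

# once the CRDs are in place, render and apply the main chart the same way
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system &gt; istio-main.yaml
$ kubectl apply -f istio-main.yaml
</code></pre>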
<h2>We want to achieve the following:</h2> <ul> <li>Magento shop running on Google Kubernetes</li> <li>Deployment via config file (e.g. stage.yaml, live.yaml) etc.</li> <li>PHP 7.2</li> <li>MySQL 5.6 / MariaDB</li> <li>Redis</li> <li>nginx:alpine</li> <li>https</li> <li>Persistent volume claims for Magento and MySQL</li> </ul> <p>I have been learning Kubernetes for a few weeks now, but I am struggling with some design concepts and some basic questions came up.</p> <p>I first tried docker-compose, then building Docker images via Dockerfiles, stumbled over Helm and kubectl. And now I came across building pods and building deployments. Now I know many different things, but a real-life example or some best-practice knowledge would be appreciated. Google is great... but it seems there is not just one way.</p> <h3>1. Regarding Pods</h3> <p>I understand that pods should be able to be easily replaced / destroyed / recreated ... </p> <p>Is it better to have a pod configuration like:</p> <ul> <li>nginx container</li> <li>php container</li> <li>mysql container</li> <li>redis container</li> </ul> <p>edit: as I just read, pods share an IP address, so it would make no sense to include mysql or redis here, right?</p> <p>Or better one pod with:</p> <ul> <li>a mysql container</li> </ul> <p>and one pod with containers for:</p> <ul> <li>nginx</li> <li>php</li> </ul> <p>and another with:</p> <ul> <li>a redis container</li> </ul> <h3>2. Mounting a persistent volume claim or a remote webroot like /var/www/html locally to work on.</h3> <p>The content of the local webroot comes from a git repo.</p> <h3>3. Handling of type: pod vs. type: deployment</h3> <p>I can create a yaml file for defining the containers inside my pod (type: pod). But I also can define a deployment.yaml (type: deployment).</p> <p>Do I have to reference my pod.yaml inside my deployment.yaml, or does the deployment include all the pod configuration and replace the pod.yaml?</p>
<p><strong>Regarding Pods.</strong> You can create one pod with everything you need. But that will be a very fat pod. Remember, a pod runs only on one node; it is not possible to run one pod partially on one node and partially on another. That means, from a scalability standpoint, many small pods are better than one big one. Many small pods also generally provide more uniform resource and load distribution between nodes.</p> <p>Also, when you update one container in a pod, the whole pod gets restarted. So if you have the application and database in the same pod, the database will also be restarted when you update the app code. Not cool, eh?</p> <p>But in some cases running several containers in one pod may be reasonable. Remember, all containers in a pod share the network address and localhost. So containers within a pod have very low network latency. </p> <p>Also, containers within a pod can share volumes with each other. That is also important in some cases. </p> <p><strong>Persistent volumes</strong> You cannot mount a Git repo into a pod. Well, at least that's not what you should do. You should pack your webroot into a Docker image and run that in Kubernetes. And this should be done by Jenkins, which can build on commit. </p> <p>Alternatively, you can place your files onto a shared persistent volume if you want to share files between deployment replicas. That is also possible; you must find so-called ReadWriteMany volumes like NFS or GlusterFS that can be shared between multiple pods.</p>
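<p>To make the nginx + PHP option concrete, here is a stripped-down sketch of a pod where both containers share the webroot through a volume; the image tags, paths and the use of emptyDir are assumptions for illustration (in practice you would wrap this pod template in a Deployment and bake the Magento code into the PHP image or a PVC):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: magento-web
spec:
  volumes:
    - name: webroot
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: webroot
          mountPath: /var/www/html
    - name: php
      image: php:7.2-fpm-alpine
      volumeMounts:
        - name: webroot
          mountPath: /var/www/html
</code></pre>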
<p>I have two microservice pods in OpenShift and I want to clear the cache of both microservices. I have an endpoint that clears/evicts the cache, but the request only reaches one pod. How can I clear the cache without restarting the pods? I use Spring Boot 2.1.10 and I have a cron configured to run every two hours, but I want to clear the cache via an HTTP request.</p> <p>I use @Cacheable from org.springframework.cache.annotation on the request and org.springframework.cache.CacheManager.clear to evict/clear the cache.</p>
<p>Just like other services in the Spring Framework, the caching service is an abstraction, not an implementation, and it requires an implementation to store the cache data. This abstraction is materialized by the org.springframework.cache.Cache and org.springframework.cache.CacheManager interfaces.</p> <p>You have not said which Spring cache implementation you are using, so I suppose you are using the default one.</p> <p>By default, Spring chooses java.util.concurrent.ConcurrentMap. This implementation stores your data in memory, and it will disappear when you close your JVM. For your concrete environment, with multiple pods, you need a <strong>cluster-aware cache implementation</strong>. You can't rely on cache implementations based on in-memory storage.</p> <p>So you should check out implementations like Spring Redis Cache, which is a cluster-aware cache implementation, and configure it in all your pods.</p>
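<p>A minimal sketch of pointing Spring Boot's cache at a shared Redis instance, assuming you add the spring-boot-starter-data-redis dependency and that your Redis Service in OpenShift is named <code>redis-service</code> (adjust the host to your actual Service name):</p> <pre><code>spring:
  cache:
    type: redis
  redis:
    host: redis-service
    port: 6379
</code></pre> <p>With this, every pod talks to the same Redis store, so an eviction triggered through one pod's endpoint is visible to the other pod as well.</p>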
<p>I am using the Stackdriver Monitoring API to get the metrics related to the containers. The JSON object returned from the API has the following details of the container.</p> <p>Example:</p> <pre><code>{
  "metric": {
    "type": "container.googleapis.com/container/cpu/utilization"
  },
  "resource": {
    "type": "gke_container",
    "labels": {
      "zone": "us-central1-a",
      "pod_id": "1138528c-c36e-11e9-a1a7-42010a800198",
      "project_id": "auto-scaling-springboot",
      "cluster_name": "load-test",
      "container_name": "",
      "namespace_id": "f0965889-c36d-11e9-9e00-42010a800198",
      "instance_id": "3962380509873542383"
    }
  },
  "metricKind": "GAUGE",
  "valueType": "DOUBLE",
  "points": [
    {
      "interval": {
        "startTime": "2019-09-04T04:00:00Z",
        "endTime": "2019-09-04T04:00:00Z"
      },
      "value": {
        "doubleValue": 0.050707947222229495
      }
    }
  ]
}
</code></pre> <p>When I execute <code>kubectl describe pod [pod name]</code>, I get none of this information unique to a container. Therefore I am unable to identify the results corresponding to a container.</p> <p>Therefore, how do I get the pod ID so that I'll be able to identify it?</p>
<h3>Use kubectl <code>jsonpath</code></h3> <p>To get a specific pod's UID:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n &lt;namespace&gt; &lt;pod-name&gt; -o jsonpath='{.metadata.uid}' </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system kubedb-66f78 -o jsonpath='{.metadata.uid}' 275ecb36-5aa8-4c2a-9c47-d8bb681b9aff⏎ </code></pre> <h3>Use kubectl <code>custom-columns</code></h3> <p>List all PodName along with its UID of a namespace:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n &lt;namespace&gt; -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid PodName PodUID coredns-6955765f44-8kp9t 0ae5c03d-5fb3-4eb9-9de8-2bd4b51606ba coredns-6955765f44-ccqgg 6aaa09a1-241a-4013-b706-fe80ae371206 etcd-kind-control-plane c7304563-95a8-4428-881e-422ce3e073e7 kindnet-jgb95 f906a249-ab9d-4180-9afa-4075e2058ac7 kube-apiserver-kind-control-plane 971165e8-6c2e-4f99-8368-7802c1e55e60 kube-controller-manager-kind-control-plane a0dce3a7-a734-485d-bfee-8ac3de6bb486 kube-proxy-27wgd d900c0b2-dc21-46b5-a97e-f30e830aa9be kube-scheduler-kind-control-plane 9c6f2399-4986-4259-9cd7-875eff1d7198 </code></pre> <h3>Use Unix/Linux command <code>grep</code></h3> <p>You can use <code>kubectl get pods</code> along with <code>grep</code>.</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n &lt;namespace&gt; &lt;pod-name&gt; -o yaml | grep uid uid: bcfbdfb5-ce0f-11e9-b83e-080027d4916d </code></pre>
<p>I was successfully able to connect to the Kubernetes cluster and work with the services and pods. At one point this changed, and every time I try to connect to the cluster I get the following error:</p> <pre><code>PS C:\Users\xxx&gt; kubectl get pods
Unable to connect to the server: error parsing output for access token command "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": yaml: line 4: could not find expected ':'
</code></pre> <p>I am unsure of what the issue is. Google unfortunately doesn't yield any results for me either. </p> <p>I have not changed any config files or anything. It was a matter of it working one second and not working the next. </p> <p>Thanks.</p>
<p>It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run <code>kubectl config view</code> you can see the command it tried to run; run it yourself to see if/why it fails.</p> <p>As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built-in support for doing this, which you can toggle by setting a property:</p> <p><code>gcloud config set container/use_application_default_credentials true</code></p> <p>Try running this, or set the environment variable <code>%CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS%</code> to true.</p> <p>Referenced from <a href="https://stackoverflow.com/a/44660833/11317776">here</a>.</p> <p>The workaround for this issue is:</p> <p><code>gcloud container clusters get-credentials &lt;cluster-name&gt;</code></p> <p>If you don't know your cluster name, find it with <code>gcloud container clusters list</code>. Finally, if those don't have issues, do <code>gcloud auth application-default login</code> and log in with the relevant details.</p>