<p>I've found two similar posts here, but one hasn't been answered and the other was about Android. I have a Spring Boot project and I want to access GCP Storage files within my application.</p> <p>Locally everything works fine: I can access my bucket and read as well as store files in my storage. But when I deploy it to GCP Kubernetes I get the following exception:</p> <blockquote> <p>&quot;java.nio.file.FileSystemNotFoundException: Provider &quot;gs&quot; not installed at java.nio.file.Paths.get(Paths.java:147) ~[na:1.8.0_212] at xx.xx.StorageService.saveFile(StorageService.java:64) ~[classes!/:0.3.20-SNAPSHOT]</p> </blockquote> <p>The line of code where it appears is as follows:</p> <pre><code>public void saveFile(MultipartFile multipartFile, String path) { String completePath = filesBasePath + path; Path filePath = Paths.get(URI.create(completePath)); // &lt;- exception appears here Files.createDirectories(filePath); multipartFile.transferTo(filePath); } </code></pre> <p>The completePath could result in something like &quot;gs://my-storage/path/to/file/image.jpg&quot;</p> <p>I have the following dependencies:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-gcp-starter-storage&lt;/artifactId&gt; &lt;version&gt;1.2.6.RELEASE&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;com.google.cloud&lt;/groupId&gt; &lt;artifactId&gt;google-cloud-nio&lt;/artifactId&gt; &lt;version&gt;0.122.5&lt;/version&gt; &lt;/dependency&gt; </code></pre> <p>Does anyone have a clue where to look? The only real difference apart from the infrastructure is that I don't explicitly use authentication on Kubernetes, as it is not required according to the documentation:</p> <blockquote> <p>When using Google Cloud libraries from a Google Cloud Platform environment such as Compute Engine, Kubernetes Engine, or App Engine, no additional authentication steps are necessary.</p> </blockquote>
<p>Following <a href="https://stackoverflow.com/a/67526886/11604596">Averi Kitsch's answer</a> and using the same <a href="https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/appengine-java11/springboot-helloworld" rel="nofollow noreferrer">springboot-helloworld</a> example, I was able to get it working locally after updating pom.xml. However, much like it did for you, it only worked when I ran it locally and would fail when I deployed it on Google Cloud. The issue was that the Dockerfile I was using was ignoring all of the jar files in the /target/lib directory, and I needed to copy that directory to the image. Note that I used Google Cloud Run, but I believe this should work for most deployments using Dockerfile.</p> <p>Here's what I ended up with:</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM maven:3.8-jdk-11 as builder WORKDIR /app COPY pom.xml . COPY src ./src RUN mvn package -DskipTests FROM adoptopenjdk/openjdk11:alpine-jre COPY --from=builder /app/target/springboot-helloworld-*.jar /springboot-helloworld.jar # IMPORTANT! - Copy the library jars to the production image! COPY --from=builder /app/target/lib /lib CMD [&quot;java&quot;, &quot;-Djava.security.egd=file:/dev/./urandom&quot;, &quot;-jar&quot;, &quot;/springboot-helloworld.jar&quot;] </code></pre> <p><strong>pom.xml</strong></p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&gt; &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt; &lt;groupId&gt;com.example.appengine&lt;/groupId&gt; &lt;artifactId&gt;springboot-helloworld&lt;/artifactId&gt; &lt;version&gt;0.0.1-SNAPSHOT&lt;/version&gt; &lt;parent&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-parent&lt;/artifactId&gt; &lt;version&gt;2.5.4&lt;/version&gt; &lt;relativePath/&gt; &lt;/parent&gt; &lt;properties&gt; &lt;maven.compiler.target&gt;11&lt;/maven.compiler.target&gt; &lt;maven.compiler.source&gt;11&lt;/maven.compiler.source&gt; &lt;/properties&gt; &lt;dependencies&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;com.google.cloud&lt;/groupId&gt; &lt;artifactId&gt;google-cloud-nio&lt;/artifactId&gt; &lt;version&gt;0.123.8&lt;/version&gt; &lt;/dependency&gt; &lt;/dependencies&gt; &lt;build&gt; &lt;plugins&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-dependency-plugin&lt;/artifactId&gt; &lt;executions&gt; &lt;execution&gt; &lt;id&gt;copy-dependencies&lt;/id&gt; &lt;phase&gt;prepare-package&lt;/phase&gt; &lt;goals&gt; &lt;goal&gt;copy-dependencies&lt;/goal&gt; &lt;/goals&gt; &lt;configuration&gt; &lt;outputDirectory&gt; ${project.build.directory}/lib &lt;/outputDirectory&gt; &lt;/configuration&gt; &lt;/execution&gt; &lt;/executions&gt; &lt;/plugin&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-jar-plugin&lt;/artifactId&gt; &lt;configuration&gt; &lt;archive&gt; &lt;manifest&gt; &lt;addClasspath&gt;true&lt;/addClasspath&gt; &lt;classpathPrefix&gt;lib/&lt;/classpathPrefix&gt; &lt;mainClass&gt; com.example.appengine.springboot.SpringbootApplication &lt;/mainClass&gt; &lt;/manifest&gt; &lt;/archive&gt; 
&lt;/configuration&gt; &lt;/plugin&gt; &lt;/plugins&gt; &lt;/build&gt; &lt;/project&gt; </code></pre>
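<p>One quick way to confirm that the google-cloud-nio jar (which registers the &quot;gs&quot; filesystem provider) actually ends up in the final image is to list the copied library directory. This is just a sketch and assumes you tag the image locally as <code>springboot-helloworld</code>:</p> <pre><code># build the image locally and check that the NIO provider jar was copied to /lib
docker build -t springboot-helloworld .
docker run --rm --entrypoint ls springboot-helloworld /lib | grep google-cloud-nio
</code></pre>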
<p>We are currently using the following command to update configmap settings in OpenShift (after which we restart the pods for the settings to take effect):</p> <pre><code>oc apply -f configmap.yml </code></pre> <p>My question is:</p> <p>Will this command delete the existing configmap and replace it with the content of this file, or does it only import the settings from the file, leaving any other settings untouched?</p> <p>Basically, if the live configmap contains a setting <code>mytest: true</code> and the new file does not include the parameter <code>mytest</code>, does the parameter remain in the live configmap in OpenShift, or does it get deleted because it is not listed in the imported file?</p>
<p>I've reproduced your case, and after applying a new yaml with different configmap settings, the new version takes its place. So OpenShift isn't merging the configmap; it's replacing it.</p> <p>Let's go through it together...</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: example-config data: mytest0: "HELLO" mytest1: "STACK" mytest2: "COMMUNITY" mytest3: "!!!" </code></pre> <p><code>oc apply -f configmap_lab.yaml</code></p> <p>As we can see, we have everything included as expected:</p> <pre><code>$ oc get configmap/example-config -o yaml apiVersion: v1 data: mytest0: HELLO mytest1: STACK mytest2: COMMUNITY mytest3: '!!!' kind: ConfigMap metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","data":{"mytest0":"HELLO","mytest1":"STACK","mytest2":"COMMUNITY","mytest3":"!!!"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config","namespace":"myproject"}} creationTimestamp: 2020-01-09T10:42:11Z name: example-config namespace: myproject resourceVersion: "7987774" selfLink: /api/v1/namespaces/myproject/configmaps/example-config uid: b148dbef-32cc-11ea-9339-525400d653ae </code></pre> <p>Now let's deploy a new yaml over this one:</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: example-config data: mytest0: "THANKS" mytest1: "STACK" newmytest0: "COMMUNITY" newmytest1: "!!!" </code></pre> <p>Here we are changing one value, removing 2 parameters and adding 2 new ones. Let's check how OC treats that:</p> <pre><code>oc apply -f configmap_lab_new.yaml </code></pre> <pre><code>$ oc get configmap/example-config -o yaml apiVersion: v1 data: mytest0: THANKS mytest1: STACK newmytest0: COMMUNITY newmytest1: '!!!' kind: ConfigMap metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","data":{"mytest0":"THANKS","mytest1":"STACK","newmytest0":"COMMUNITY","newmytest1":"!!!"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config","namespace":"myproject"}} creationTimestamp: 2020-01-09T10:42:11Z name: example-config namespace: myproject resourceVersion: "7988585" selfLink: /api/v1/namespaces/myproject/configmaps/example-config uid: b148dbef-32cc-11ea-9339-525400d653ae </code></pre> <p>As we can see, all changes were accepted and are active.</p> <p>If you want to do it in a more controlled way, you may want to use <code>oc patch</code>. <a href="https://labs.consol.de/development/2019/04/08/oc-patch-unleashed.html" rel="nofollow noreferrer">Doc here</a>.</p>
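<p>For example, a merge patch only touches the keys you pass in and leaves the rest of the data untouched. This is a sketch; the key and value are just illustrations:</p> <pre><code># change or add a single key without replacing the whole ConfigMap
oc patch configmap/example-config --type merge -p '{"data":{"mytest0":"PATCHED"}}'
</code></pre>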
<p>Hi everyone,</p> <p>I have a cluster based on <strong>kubeadm</strong> with 1 master and 2 workers. I have already implemented the built-in horizontal pod autoscaling (based on CPU utilization and memory) and now I want to perform autoscaling on the basis of a custom metric (<strong>response time</strong> in my case).</p> <p>I am using the <strong>Prometheus Adapter</strong> for custom metrics, but I could not find any metric with the name response_time in Prometheus.</p> <ol> <li><p>Is there any metric available in Prometheus that can be used to scale the application based on response time, and what is its name?</p></li> <li><p>Will I need to edit the default horizontal autoscaling algorithm, or will I have to write an autoscaling algorithm from scratch that scales my application based on response time?</p></li> </ol>
<p>Prometheus has only 4 <a href="https://prometheus.io/docs/concepts/metric_types/" rel="nofollow noreferrer">metric types</a>: <strong>Counter</strong>, <strong>Gauge</strong>, <strong>Histogram</strong> and <strong>Summary</strong>.</p> <p>I guess <strong>Histogram</strong> is what you need.</p> <blockquote> <p>A <em>histogram</em> samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.</p> <p>A histogram with a base metric name of <code>&lt;basename&gt;</code> exposes multiple time series during a scrape:</p> <ul> <li>cumulative counters for the observation buckets, exposed as <code>&lt;basename&gt;_bucket{le="&lt;upper inclusive bound&gt;"}</code></li> <li>the <strong>total sum</strong> of all observed values, exposed as <code>&lt;basename&gt;_sum</code></li> <li>the <strong>count</strong> of events that have been observed, exposed as <code>&lt;basename&gt;_count</code> (identical to <code>&lt;basename&gt;_bucket{le="+Inf"}</code> above)</li> </ul> </blockquote> <h2>1.</h2> <p>There is a <a href="https://stackoverflow.com/questions/47305424/measure-service-latency-with-prometheus">Stack Overflow question</a> where you can get a query for latency (response time), so I think this might be useful for you.</p> <h2>2.</h2> <p>I don't know if I understand you correctly, but if you want to edit the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a>, you can edit the yaml file, delete the previous HPA and create a new one instead:</p> <pre><code>kubectl delete -f &lt;name.yaml&gt; kubectl apply -f &lt;name.yaml&gt; </code></pre> <p>There is a <a href="https://itnext.io/horizontal-pod-autoscale-with-custom-metrics-8cb13e9d475" rel="nofollow noreferrer">good article</a> about <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">Autoscaling on custom metrics</a> with custom Prometheus metrics.</p>
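<p>As a sketch of what such a latency query can look like: assuming your application exposes a request-duration histogram (the metric name <code>http_request_duration_seconds</code> below is only an assumption, use whatever your app actually exports), the average and 95th-percentile response times can be derived like this:</p> <pre><code># average response time over the last 5 minutes
rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])

# 95th percentile response time over the last 5 minutes
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
</code></pre> <p>A query along these lines is what you would expose through the Prometheus Adapter as the custom metric for the HPA to consume.</p>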
<p>I'm attempting to get a view in Wireshark of live network traffic in one of my Kubernetes pods. In plain old Docker, I was able to run this:</p> <p><code>docker run --rm --net=container:app_service_1 crccheck/tcpdump -i any --immediate-mode -w - | wireshark -k -i -</code></p> <p>This spins up a simple container that runs <code>tcpdump</code> with the arguments shown, and pipes the packet captures in pcap format to stdout (the <code>-w -</code> argument). This output is then piped to Wireshark running on my host machine, which displays the packets as they arrive.</p> <p><strong>How do I do something similar in Kubernetes?</strong> </p> <p>I've tried applying a patch as follows:</p> <pre><code> template: spec: containers: - name: tcpdumper image: crccheck/tcpdump args: ["-i", "any", "--immediate-mode", "-w", "-"] tty: true stdin: true </code></pre> <p>And I apply this by running <code>k attach -it app-service-7bdb7798c5-2lr6q | wireshark -k -i -</code></p> <p>But this doesn't seem to work; Wireshark starts up but it immediately shows an error:</p> <p><code>Data written to the pipe is neither in a supported pcap format nor in pcapng format</code></p>
<p>I highly suggest you read <a href="https://developers.redhat.com/blog/2019/02/27/sidecars-analyze-debug-network-traffic-kubernetes-pod/" rel="nofollow noreferrer">Using sidecars to analyze and debug network traffic in OpenShift and Kubernetes pods</a>.</p> <p>This article explains why you can't read traffic data directly from a pod and gives you an alternative on how to do it using a sidecar.</p> <p>In short, the containers most likely run on an internal container platform network that is not directly accessible by your machine.</p> <p>A sidecar container is a container that is running in the same pod as the actual service/application and is able to provide additional functionality to the service/application.</p> <p>Running tcpdump effectively in Kubernetes is a bit tricky and requires you to create a sidecar for your pod. What you are facing is actually the expected behavior.</p> <blockquote> <p>run good old stuff like TCPdump or ngrep would not yield much interesting information, because you link directly to the bridge network or overlay in a default scenario.</p> <p>The good news is, that you can link your TCPdump container to the host network or even better, to the container network stack. Source: <a href="https://medium.com/@xxradar/how-to-tcpdump-effectively-in-docker-2ed0a09b5406" rel="nofollow noreferrer">How to TCPdump effectively in Docker</a></p> </blockquote> <p>The thing is that you have two entry points: one is nodeIP:NodePort, the second is ClusterIP:Port. Both point to the same set of randomization rules for endpoints set in the Kubernetes iptables.</p> <p>Since this can happen on any node, it's hard to configure tcpdump to catch all interesting traffic in just one point.</p> <p>The best tool I know for this kind of analysis is Istio, but it works mostly for HTTP traffic.</p> <p>Considering this, the best solution is to use a tcpdumper sidecar for each pod behind the service.</p> <p>Let's go through an example of how to achieve this.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: web name: web-app spec: replicas: 2 selector: matchLabels: app: web template: metadata: labels: app: web spec: containers: - name: web-app image: nginx imagePullPolicy: Always ports: - containerPort: 80 protocol: TCP - name: tcpdumper image: docker.io/dockersec/tcpdump restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: web-svc namespace: default spec: ports: - nodePort: 30002 port: 80 protocol: TCP targetPort: 80 selector: app: web type: NodePort </code></pre> <p>In this manifest we can notice three important things: we have an nginx container, a tcpdumper container as a sidecar, and a service defined as NodePort.</p> <p>To access our sidecar, you have to run the following command:</p> <pre><code>$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper </code></pre> <p>Example:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 13d web-svc NodePort 10.108.142.180 &lt;none&gt; 80:30002/TCP 9d </code></pre> <pre><code>$ curl localhost:30002 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to nginx!&lt;/h1&gt; &lt;p&gt;If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.&lt;/p&gt; &lt;p&gt;For online documentation and support please refer to &lt;a href="http://nginx.org/"&gt;nginx.org&lt;/a&gt;.&lt;br/&gt; Commercial support is available at &lt;a href="http://nginx.com/"&gt;nginx.com&lt;/a&gt;.&lt;/p&gt; &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <pre><code>$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper Unable to use a TTY - container tcpdumper did not allocate one If you don't see a command prompt, try pressing enter. &gt; web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 300957902 ecr 300958061], length 77: HTTP: GET / HTTP/1.1 12:03:16.884512 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.1336: Flags [.], ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 0 12:03:16.884651 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.1336: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 239: HTTP: HTTP/1.1 200 OK 12:03:16.884705 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.1336: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 612: HTTP 12:03:16.884743 IP 192.168.250.64.1336 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS val 300957902 ecr 300958061], length 0 12:03:16.884785 IP 192.168.250.64.1336 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 300957902 ecr 300958061], length 0 12:03:16.889312 IP 192.168.250.64.1336 &gt; web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 300957903 ecr 300958061], length 0 12:03:16.889351 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.1336: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 300958062 ecr 300957903], length 0 12:03:16.889535 IP 192.168.250.64.1336 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 300957903 ecr 300958062], length 0 12:08:10.336319 IP6 fe80::ecee:eeff:feee:eeee &gt; ff02::2: ICMP6, router solicitation, length 16 12:15:47.717966 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [S], seq 3314747302, win 28400, options [mss 1420,sackOK,TS val 301145611 ecr 0,nop,wscale 7], length 0 12:15:47.717993 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.2856: Flags [S.], seq 2539474977, ack 3314747303, win 27760, options [mss 1400,sackOK,TS val 301145769 ecr 301145611,nop,wscale 7], length 0 12:15:47.718162 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 0 12:15:47.718164 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 77: HTTP: GET / HTTP/1.1 12:15:47.718191 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.2856: Flags [.], ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 0 12:15:47.718339 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.2856: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 239: HTTP: HTTP/1.1 200 OK 12:15:47.718403 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.2856: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 612: HTTP 12:15:47.718451 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS 
val 301145611 ecr 301145769], length 0 12:15:47.718489 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 301145611 ecr 301145769], length 0 12:15:47.723049 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 301145612 ecr 301145769], length 0 12:15:47.723093 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.2856: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 301145770 ecr 301145612], length 0 12:15:47.723243 IP 192.168.250.64.2856 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 301145612 ecr 301145770], length 0 12:15:50.493995 IP 192.168.250.64.31340 &gt; web-app-db7f7c59-d4xm6.80: Flags [S], seq 124258064, win 28400, options [mss 1420,sackOK,TS val 301146305 ecr 0,nop,wscale 7], length 0 12:15:50.494022 IP web-app-db7f7c59-d4xm6.80 &gt; 192.168.250.64.31340: Flags [S.], seq 3544403648, ack 124258065, win 27760, options [mss 1400,sackOK,TS val 301146463 ecr 301146305,nop,wscale 7], length 0 12:15:50.494189 IP 192.168.250.64.31340 &gt; web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options </code></pre> <p>You can also take a look at <a href="https://github.com/eldadru/ksniff" rel="nofollow noreferrer">ksniff</a> tool, a kubectl plugin that utilize tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster.</p>
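<p>If you also want to get the capture into Wireshark on your workstation, as in your original Docker command, one option is to exec into the sidecar and stream the pcap over stdout. This is only a sketch, assuming the sidecar image has tcpdump in its PATH and reusing the example pod name from above:</p> <pre><code># stream a pcap from the sidecar straight into a local Wireshark
kubectl exec web-app-db7f7c59-d4xm6 -c tcpdumper -- tcpdump -i any --immediate-mode -w - | wireshark -k -i -
</code></pre>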
<p>I built a Rancher server using Docker on server 1.</p> <p>I created and added a Kubernetes cluster on server 2, and I wanted to access Kubernetes with the kubectl command locally on server 2, but a localhost:8080 error is displayed.</p> <p>How can I use the kubectl command locally against a Kubernetes cluster that was set up with a Docker-based Rancher?</p>
<p>I fixed that issue by modifying the kubeconfig file.</p> <p>The kubeconfig for the cluster can be viewed in the Rancher UI.</p> <p>The file to be modified is <code>~/.kube/config</code>.</p>
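<p>As a rough sketch (file names here are just examples): copy the kubeconfig that Rancher shows for your cluster into <code>~/.kube/config</code> on the machine where you run kubectl, or keep it in a separate file and point <code>KUBECONFIG</code> at it, then verify the connection:</p> <pre><code>mkdir -p ~/.kube
# paste the kubeconfig from the Rancher UI into this file
vi ~/.kube/config

# or keep it in a separate file and point kubectl at it
export KUBECONFIG=~/rancher-cluster.yaml

kubectl get nodes
</code></pre>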
<p>I have a local Kubernetes Cluster running under Docker Desktop on Mac. I am running another docker-related process locally on my machine (a local insecure registry). I am interested in getting a process inside the local cluster to push/pull images from the local docker registry.</p> <p>How can I expose the local registry to be reachable from a pod inside the local Kubernetes cluster?</p> <p>A way to do this would be to have both the Docker Desktop Cluster and the docker registry use the same docker network. Adding the registry to an existing network is easy. How does one add the Docker Desktop Cluster to the network?</p>
<p>As I mentioned in the comments:</p> <p>I think what you're looking for is covered in the documentation <a href="https://docs.docker.com/registry/insecure/" rel="nofollow noreferrer">here</a>. You would have to add your local insecure registry as an <code>insecure-registries</code> value in Docker Desktop. Then, after a restart, you should be able to use it.</p> <h2>Deploy a plain HTTP registry</h2> <blockquote> <p>This procedure configures Docker to entirely disregard security for your registry. This is very insecure and is not recommended. It exposes your registry to trivial man-in-the-middle (MITM) attacks. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment.</p> <p>Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. If you use Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences (Mac) or Settings (Windows), and choose Docker Engine.</p> <p>If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:</p> </blockquote> <pre><code>{ &quot;insecure-registries&quot; : [&quot;myregistrydomain.com:5000&quot;] } </code></pre> <hr /> <p>There is also a tutorial for this on Medium using macOS. Take a look <a href="https://medium.com/htc-research-engineering-blog/setup-local-docker-repository-for-local-kubernetes-cluster-354f0730ed3a" rel="nofollow noreferrer">here</a>.</p>
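<p>A sketch of how this can look with Docker Desktop, assuming the registry listens on port 5000 and that you add the exact name you pull from (e.g. <code>host.docker.internal:5000</code>) to <code>insecure-registries</code> as well; the image name is just an illustration:</p> <pre><code># on the host: tag and push to the registry published on localhost:5000
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest
</code></pre> <pre><code># in a pod spec inside the Docker Desktop cluster, the same registry
# is typically reachable through the host alias
containers:
- name: myapp
  image: host.docker.internal:5000/myimage:latest
</code></pre>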
<p>I want to be able to capture (log) (at least some of) <code>envoy</code>'s HTTP headers on my <code>istio</code> service mesh.</p> <p>I have gone through <code>envoy</code>'s <a href="https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/run-envoy#debugging-envoy" rel="noreferrer">docs</a>, and in the log levels' section, it does not mention any header-specific information.</p> <p>Currently, my <code>istio-proxy</code> log is like this (this is from a <code>stern</code> output):</p> <pre><code>mysvc-69c46fbc75-d9v8j istio-proxy {&quot;bytes_sent&quot;:&quot;124&quot;,&quot;upstream_cluster&quot;:&quot;inbound|80|http|mysvc.default.svc.cluster.local&quot;,&quot;downstream_remote_address&quot;:&quot;10.11.11.1:0&quot;,&quot;authority&quot;:&quot;some.url.com&quot;,&quot;path&quot;:&quot;/health?source=dd_cluster_agent&quot;,&quot;protocol&quot;:&quot;HTTP/1.1&quot;,&quot;upstream_service_time&quot;:&quot;1&quot;,&quot;upstream_local_address&quot;:&quot;127.0.0.1:40406&quot;,&quot;duration&quot;:&quot;2&quot;,&quot;upstream_transport_failure_reason&quot;:&quot;-&quot;,&quot;route_name&quot;:&quot;default&quot;,&quot;downstream_local_address&quot;:&quot;10.11.32.32:20000&quot;,&quot;user_agent&quot;:&quot;Datadog Agent/7.24.0&quot;,&quot;response_code&quot;:&quot;200&quot;,&quot;response_flags&quot;:&quot;-&quot;,&quot;start_time&quot;:&quot;2021-01-17T18:54:57.449Z&quot;,&quot;method&quot;:&quot;GET&quot;,&quot;request_id&quot;:&quot;61ae63c7-aa10-911b-9562-939kdhd49ddhj&quot;,&quot;upstream_host&quot;:&quot;127.0.0.1:20000&quot;,&quot;x_forwarded_for&quot;:&quot;10.16.32.1&quot;,&quot;requested_server_name&quot;:&quot;outbound_.80_.mysvc_.faros.default.svc.cluster.local&quot;,&quot;bytes_received&quot;:&quot;0&quot;,&quot;istio_policy_status&quot;:&quot;-&quot;} </code></pre> <p>Is there a way to log <code>http</code> headers? 
(ideally <strong>some</strong> of them, to keep the logging cost under control)</p> <p><strong>edit1</strong> following advice in the comments, I checked my <code>istio-operator</code> resource and I see that access logging seems to be enabled</p> <pre><code> meshConfig: accessLogEncoding: JSON accessLogFile: /dev/stdout </code></pre> <p><strong>edit2</strong> I have also tried the following:</p> <pre><code>curl -i -H &quot;Custom-Header: application/json&quot; https://my.url.net </code></pre> <p>but in the logs of the <code>istio-ingressgateway</code> I don't see my custom header</p> <pre><code>istio-ingressgateway-58f69d8696-rmpwn istio-proxy {&quot;user_agent&quot;:&quot;curl/7.64.1&quot;,&quot;response_code&quot;:&quot;200&quot;,&quot;response_flags&quot;:&quot;-&quot;,&quot;start_time&quot;:&quot;2021-01-18T19:02:48.645Z&quot;,&quot;method&quot;:&quot;GET&quot;,&quot;request_id&quot;:&quot;8e32c93c-484d-9c56-9489-8c5392793d97&quot;,&quot;upstream_host&quot;:&quot;10.16.32.55:20000&quot;,&quot;x_forwarded_for&quot;:&quot;10.16.32.1&quot;,&quot;requested_server_name&quot;:&quot;my.url.net&quot;,&quot;bytes_received&quot;:&quot;0&quot;,&quot;istio_policy_status&quot;:&quot;-&quot;,&quot;bytes_sent&quot;:&quot;124&quot;,&quot;upstream_cluster&quot;:&quot;outbound|80||mysvc.default.svc.cluster.local&quot;,&quot;downstream_remote_address&quot;:&quot;10.16.32.1:52804&quot;,&quot;authority&quot;:&quot;my.url.net&quot;,&quot;path&quot;:&quot;/&quot;,&quot;protocol&quot;:&quot;HTTP/2&quot;,&quot;upstream_service_time&quot;:&quot;9&quot;,&quot;upstream_local_address&quot;:&quot;10.16.32.17:49826&quot;,&quot;duration&quot;:&quot;10&quot;,&quot;upstream_transport_failure_reason&quot;:&quot;-&quot;,&quot;route_name&quot;:&quot;-&quot;,&quot;downstream_local_address&quot;:&quot;10.16.32.17:8443&quot;} </code></pre>
<p>I think I've successfully reproduced your issue and was able to print <code>MY_CUSTOM_HEADER</code> in the ingress gateway logs.</p> <p>Here is a part of my istio ingress gateway logs:</p> <pre><code>[2021-01-20T08:26:18.587Z] pkarambol GET /productpage HTTP/1.1 200 </code></pre> <p>I've used the curl command below:</p> <pre><code>curl -v -H &quot;MY_CUSTOM_HEADER: pkarambol&quot; xx.xxx.xx.xxx/productpage </code></pre> <hr /> <p>To make that happen you have to change the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage?fbclid=IwAR0xaS-EtEniJfbpGWf-GN_o75aLSpyn5FI2Uojp9YPWQD4PjQqs9ObqavQ#default-format-string" rel="noreferrer">default format</a> of the logs. As mentioned in the <a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging" rel="noreferrer">documentation</a>, you can use <code>meshConfig.accessLogFormat</code> to change that.</p> <hr /> <p>Here is an example of the <a href="https://istio.io/latest/docs/setup/install/operator/" rel="noreferrer">Istio Operator</a> configuration I've used.</p> <p><code>%REQ(MY_CUSTOM_HEADER)%</code> is the part responsible for displaying the custom header.</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: example-istiocontrolplane spec: profile: demo meshConfig: accessLogFile: /dev/stdout accessLogFormat: &quot;[%START_TIME%] %REQ(MY_CUSTOM_HEADER)% %REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL% %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% %REQ(X-FORWARDED-FOR)% %REQ(USER-AGENT)% %REQ(X-REQUEST-ID)% %REQ(:AUTHORITY)% %UPSTREAM_HOST%\n&quot; </code></pre>
<p>As <code>kubectl convert</code> has been deprecated since v1.14 and will be removed in v1.17, how do you convert Kubernetes objects between API versions?</p> <p>Are there other easy and safe solutions for doing that?</p>
<p>This is being discussed in <a href="https://github.com/kubernetes/kubectl/issues/725" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/725</a>. At the moment there is no alternative for this command. </p> <p>The plan was to remove it on 1.17 but I have 1.18 running and the command is still working. </p> <p>I would say that for now you don't need to worry about alternatives as it still works. </p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <pre><code>$ kubectl convert --help Convert config files between different API versions. Both YAML and JSON formats are accepted. The command takes filename, directory, or URL as input, and convert it into format of version specified by --output-version flag. If target version is not specified or not supported, convert to latest version. The default output will be printed to stdout in YAML format. One can use -o option to change to output destination. Examples: # Convert 'pod.yaml' to latest version and print to stdout. kubectl convert -f pod.yaml # Convert the live state of the resource specified by 'pod.yaml' to the latest version # and print to stdout in JSON format. kubectl convert -f pod.yaml --local -o json # Convert all files under current directory to latest version and create them all. kubectl convert -f . | kubectl create -f - Options: --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. -f, --filename=[]: Filename, directory, or URL to files to need to get converted. -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R. --local=true: If true, convert will NOT try to contact api-server but run locally. -o, --output='yaml': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. --output-version='': Output the formatted object with the given group version (for ex: 'extensions/v1beta1'). -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. --validate=true: If true, use a schema to validate the input before sending it Usage: kubectl convert -f FILENAME [options] Use "kubectl options" for a list of global command-line options (applies to all commands). </code></pre>
<p>I'm having trouble enabling the <code>--api-server-authorized-ip-ranges</code> feature; let me describe my case:</p> <ol> <li>I have an AKS cluster without the feature enabled, but I want to add it using this command:</li> </ol> <pre><code>az aks update --resource-group test-aks-service-rg -n test-aksCluster --api-server-authorized-ip-ranges 1.2.3.4/32 </code></pre> <p>The result says the IP address has been successfully added as allowed for the API.</p> <ol start="2"> <li>Then, when I try to get logs of the pods, I get a timeout issue:</li> </ol> <pre><code>Error from server ... dial tcp ... i/o timeout. </code></pre> <p>What is wrong with my configuration? How do I properly add authorized IP ranges?</p> <p><strong>Note:</strong> I verified that my outbound IP address is 1.2.3.4</p>
<p>It was my fault; I had missed an important detail in the Microsoft documentation. The answer is here:</p> <p><a href="https://learn.microsoft.com/bs-cyrl-ba/azure/aks/api-server-authorized-ip-ranges" rel="nofollow noreferrer">https://learn.microsoft.com/bs-cyrl-ba/azure/aks/api-server-authorized-ip-ranges</a></p> <p>I forgot to add the firewall public IP addresses to the authorized IP ranges list.</p> <p>The following addresses must be in the list to get it to work (see the example command after the list):</p> <ul> <li>The firewall public IP address</li> <li>Any range that represents networks that you'll administer the cluster from</li> <li>If you are using Azure Dev Spaces on your AKS cluster, you have to allow additional ranges based on your region.</li> </ul>
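<p>For example, the update command then becomes something like the following, where the second range stands for the firewall's public IP (both addresses here are placeholders):</p> <pre><code>az aks update --resource-group test-aks-service-rg -n test-aksCluster \
  --api-server-authorized-ip-ranges 1.2.3.4/32,5.6.7.8/32
</code></pre>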
<p>I am very new to Istio, so these might be simple questions, but I have several points of confusion regarding Istio. I am using Istio 1.8.0 and Kubernetes 1.19. Sorry for the multiple questions, but I would appreciate it if you could help me clarify the best approaches.</p> <ol> <li><p>After I inject Istio, I assumed I would not be able to access one service from another directly inside a pod, but as you can see below, I can. Maybe I've misunderstood, but is this expected behaviour? Also, how can I debug whether services talk to each other via the Envoy proxy with mTLS? I am using <code>STRICT</code> mode; should I deploy a PeerAuthentication in the namespace where the microservices are running to avoid this?</p> <pre><code> kubectl get peerauthentication --all-namespaces NAMESPACE NAME AGE istio-system default 26h </code></pre> </li> <li><p>How can I restrict the traffic, let's say so that the api-dev service cannot access auth-dev but can access backend-dev?</p> </li> <li><p>Some of the microservices need to communicate with a database, which is running in the <code>database</code> namespace. We also have some services that we do not want to inject with Istio that use the same database. So, should the database also be deployed in the namespace where we have Istio injection? If yes, does that mean I need to deploy another database instance for the rest of the services?</p> </li> </ol> <hr /> <pre><code> $ kubectl get ns --show-labels NAME STATUS AGE LABELS database Active 317d name=database hub-dev Active 15h istio-injection=enabled dev Active 318d name=dev capel0068340585:~ semural$ kubectl get pods -n hub-dev NAME READY STATUS RESTARTS AGE api-dev-5b9cdfc55c-ltgqz 3/3 Running 0 117m auth-dev-54bd586cc9-l8jdn 3/3 Running 0 13h backend-dev-6b86994697-2cxst 2/2 Running 0 120m cronjob-dev-7c599bf944-cw8ql 3/3 Running 0 137m mp-dev-59cb8d5589-w5mxc 3/3 Running 0 117m ui-dev-5884478c7b-q8lnm 2/2 Running 0 114m redis-hub-master-0 2/2 Running 0 2m57s $ kubectl get svc -n hub-dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE api-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 13h auth-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 13h backend-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 14h cronjob-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 14h mp-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 13h ui-dev ClusterIP xxxxxxxxxxxxx &lt;none&gt; 80/TCP 13h redis-hub-master ClusterIP xxxxxxxxxxxxx &lt;none&gt; 6379/TCP 3m47s ---------- $ kubectl exec -ti ui-dev-5884478c7b-q8lnm -n hub-dev sh Defaulting container name to oneapihub-ui. Use 'kubectl describe pod/ui-dev-5884478c7b-q8lnm -n hub-dev' to see all of the containers in this pod. /usr/src/app $ curl -vv http://hub-backend-dev * Trying 10.254.78.120:80... * TCP_NODELAY set * Connected to backend-dev (10.254.78.120) port 80 (#0) &gt; GET / HTTP/1.1 &gt; Host: backend-dev &gt; User-Agent: curl/7.67.0 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 404 Not Found &lt; content-security-policy: default-src 'self' &lt; &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot;&gt; &lt;title&gt;Error&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;pre&gt;Cannot GET /&lt;/pre&gt; &lt;/body&gt; &lt;/html&gt; * Connection #0 to host oneapihub-backend-dev left intact /usr/src/app $ </code></pre>
<ol> <li>According to <a href="https://istio.io/latest/docs/concepts/security/#peer-authentication" rel="nofollow noreferrer">documentation</a>, if you use <code>STRICT</code> mtls, then workloads should only accept encrypted traffic.</li> </ol> <h2>Peer authentication</h2> <blockquote> <p>Peer authentication policies specify the mutual TLS mode Istio enforces on target workloads. The following modes are supported:</p> <ul> <li>PERMISSIVE: Workloads accept both mutual TLS and plain text traffic. This mode is most useful during migrations when workloads without sidecar cannot use mutual TLS. Once workloads are migrated with sidecar injection, you should switch the mode to STRICT.</li> <li>STRICT: Workloads only accept mutual TLS traffic.</li> <li>DISABLE: Mutual TLS is disabled. From a security perspective, you shouldn’t use this mode unless you provide your own security solution.</li> </ul> <p>When the mode is unset, the mode of the parent scope is inherited. Mesh-wide peer authentication policies with an unset mode use the PERMISSIVE mode by default.</p> </blockquote> <p>Also worth to take a look <a href="https://banzaicloud.com/blog/istio-mtls/#mtls-modes-in-practice" rel="nofollow noreferrer">here</a>, as it's very well described here by banzaicloud.</p> <hr /> <p>You can enable strict mtls mode <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode" rel="nofollow noreferrer">globally</a>, but also per specific <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#namespace-wide-policy" rel="nofollow noreferrer">namespace</a> or <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#enable-mutual-tls-per-workload" rel="nofollow noreferrer">workload</a>.</p> <hr /> <ol start="2"> <li>You can use istio <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">Authorization Policy</a> to do that.</li> </ol> <blockquote> <p>Istio Authorization Policy enables access control on workloads in the mesh.</p> </blockquote> <p>There is an example.</p> <blockquote> <p>The following is another example that sets action to “DENY” to create a deny policy. It denies requests from the “dev” namespace to the “POST” method on all workloads in the “foo” namespace.</p> </blockquote> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [&quot;dev&quot;] to: - operation: methods: [&quot;POST&quot;] </code></pre> <hr /> <ol start="3"> <li>You can set up Database without injecting it and then add it to Istio registry using <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">ServiceEntry</a> object so it would be able to communicate with the rest of istio services.</li> </ol> <blockquote> <p>ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). 
In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.</p> </blockquote> <p>There are examples in the Istio documentation:</p> <ul> <li><a href="https://istio.io/latest/blog/2018/egress-tcp/" rel="nofollow noreferrer">Consuming External TCP Services</a></li> <li><a href="https://istio.io/latest/blog/2018/egress-mongo/" rel="nofollow noreferrer">Consuming External MongoDB Services</a></li> </ul> <hr /> <p>To answer your main question about how to debug mTLS communication:</p> <p>The most basic test would be to try to call from a non-injected pod to an injected pod, with curl for example. There is Istio <a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">documentation</a> about that.</p> <p>You can also use <code>istioctl x describe</code>; more about it <a href="https://istio.io/latest/docs/ops/diagnostic-tools/istioctl-describe/#verifying-strict-mutual-tls" rel="nofollow noreferrer">here</a>.</p> <hr /> <p>I'm not sure what's wrong with the <code>curl -vv http://hub-backend-dev</code>, but as it's a 404 I suspect it might be an issue with your Istio configuration, such as a wrong VirtualService configuration.</p>
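<p>Regarding the first point: a minimal sketch of a namespace-wide STRICT policy for the namespace your microservices run in (using the <code>hub-dev</code> namespace from your output; adjust as needed) would look like this:</p> <pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: hub-dev
spec:
  mtls:
    mode: STRICT
</code></pre>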
<p>Kubernetes is a very sophisticated tool, but some of us are a bit crude, and so we get in trouble.</p> <p>I'm trying to run a simple kubernetes job on a pod in my cluster, and in the kubernetes yaml config file i define the name of the pod under metadata like</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: my-job </code></pre> <p>Then when I create this job I see that the name of the pod is not really my-job. It's:</p> <pre><code>my-job-'randomstuff' </code></pre> <p>I understand this is very cool for replicasets and whatnots, but I need my pod to be named what I tell it to be named, because I use that name in callbacks function further down the road..</p> <p>It seems to me to be strange, that I can't have complete control over what I want to call my pod when I create it.</p> <p>I tell myself that it must be possible, but I've googledfrenzied for an hour..</p> <p>Thank you very much for any ideas :)</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a> are designed to have random suffixes because they may have multiple completions.</p> <p>Example:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi spec: completions: 5 template: spec: containers: - name: pi image: perl command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never backoffLimit: 4 </code></pre> <p>As you can see, this Job will be executed until it achieves 5 competitions, and it would not be possible if the name didn't have a random suffix attached to it. Check the result of the execution of the Job of the example:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE pi-7mx4k 0/1 Completed 0 3m9s pi-bfw6p 0/1 Completed 0 3m17s pi-ls9lh 0/1 Completed 0 3m43s pi-njfpq 0/1 Completed 0 3m35s pi-ssn68 0/1 Completed 0 3m27s </code></pre> <p>So, the answer to your question is no, you can't force it to use a &quot;fixed&quot; name.</p> <p>If you need to have control under the name, consider using a Pod instead (kind Pod).</p>
<p>I have spent days now trying to figure out a dependency issue I'm experiencing with (Py)Spark running on Kubernetes. I'm using the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/" rel="noreferrer">spark-on-k8s-operator</a> and Spark's Google Cloud connector.</p> <p>When I try to submit my spark job <strong>without</strong> a dependency using <code>sparkctl create sparkjob.yaml ...</code> with below .yaml file, it works like a charm.</p> <pre><code>apiVersion: &quot;sparkoperator.k8s.io/v1beta2&quot; kind: SparkApplication metadata: name: spark-job namespace: my-namespace spec: type: Python pythonVersion: &quot;3&quot; hadoopConf: &quot;fs.gs.impl&quot;: &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem&quot; &quot;fs.AbstractFileSystem.gs.impl&quot;: &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS&quot; &quot;fs.gs.project.id&quot;: &quot;our-project-id&quot; &quot;fs.gs.system.bucket&quot;: &quot;gcs-bucket-name&quot; &quot;google.cloud.auth.service.account.enable&quot;: &quot;true&quot; &quot;google.cloud.auth.service.account.json.keyfile&quot;: &quot;/mnt/secrets/keyfile.json&quot; mode: cluster image: &quot;image-registry/spark-base-image&quot; imagePullPolicy: Always mainApplicationFile: ./sparkjob.py deps: jars: - https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.11/2.4.5/spark-sql-kafka-0-10_2.11-2.4.5.jar sparkVersion: &quot;2.4.5&quot; restartPolicy: type: OnFailure onFailureRetries: 3 onFailureRetryInterval: 10 onSubmissionFailureRetries: 5 onSubmissionFailureRetryInterval: 20 driver: cores: 1 coreLimit: &quot;1200m&quot; memory: &quot;512m&quot; labels: version: 2.4.5 serviceAccount: spark-operator-spark secrets: - name: &quot;keyfile&quot; path: &quot;/mnt/secrets&quot; secretType: GCPServiceAccount envVars: GCS_PROJECT_ID: our-project-id executor: cores: 1 instances: 1 memory: &quot;512m&quot; labels: version: 2.4.5 secrets: - name: &quot;keyfile&quot; path: &quot;/mnt/secrets&quot; secretType: GCPServiceAccount envVars: GCS_PROJECT_ID: our-project-id </code></pre> <p>The Docker image <code>spark-base-image</code> is built with Dockerfile</p> <pre><code>FROM gcr.io/spark-operator/spark-py:v2.4.5 RUN rm $SPARK_HOME/jars/guava-14.0.1.jar ADD https://repo1.maven.org/maven2/com/google/guava/guava/28.0-jre/guava-28.0-jre.jar $SPARK_HOME/jars ADD https://repo1.maven.org/maven2/com/google/cloud/bigdataoss/gcs-connector/hadoop2-2.0.1/gcs-connector-hadoop2-2.0.1-shaded.jar $SPARK_HOME/jars ENTRYPOINT [ &quot;/opt/entrypoint.sh&quot; ] </code></pre> <p>the main application file is uploaded to GCS when submitting the application and subsequently fetched from there and copied into the driver pod upon starting the application. 
The problem starts whenever I want to supply my own Python module <code>deps.zip</code> as a dependency to be able to use it in my main application file <code>sparkjob.py</code>.</p> <p>Here's what I have tried so far:</p> <p>1</p> <p>Added the following lines to spark.deps in sparkjob.yaml</p> <pre><code>pyFiles: - ./deps.zip </code></pre> <p>which resulted in the operator not being able to even submit the Spark application with error</p> <pre><code>java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found </code></pre> <p><code>./deps.zip</code> is successfully uploaded to the GCS bucket along with the main application file but while the main application file can be successfully fetched from GCS (I see this in the logs in jobs with no dependencies as defined above), <code>./deps.zip</code> can somehow not be fetched from there. I also tried adding the gcs-connector jar to the spark.deps.jars list explicitly - nothing changes.</p> <p>2</p> <p>I added <code>./deps.zip</code> to the base docker image used for starting up the driver and executor pods by adding <code>COPY ./deps.zip /mnt/</code> to the above Dockerfile and adding the dependency in the sparkjob.yaml via</p> <pre><code>pyFiles: - local:///mnt/deps.zip </code></pre> <p>This time the spark job can be submitted and the driver pod is started, however I get a <code>file:/mnt/deps.zip not found</code> error when the Spark context is being initialized I also tried to additionally set <code>ENV SPARK_EXTRA_CLASSPATH=/mnt/</code> in the Dockerfile but without any success. I even tried to explicitly mount the whole <code>/mnt/</code> directory into the driver and executor pods using volume mounts, but that also didn't work.</p> <hr /> <p>edit:</p> <p>My workaround (2), adding dependencies to the Docker image and setting <code>ENV SPARK_EXTRA_CLASSPATH=/mnt/</code> in the Dockerfile actually worked! Turns out the tag didn't update and I've been using an old version of the Docker image all along. Duh.</p> <p>I still don't know why the (more elegant) solution 1 via the gcs-connector isn't working, but it might be related to <a href="https://stackoverflow.com/questions/62408395/mountvolume-setup-failed-for-volume-spark-conf-volume">MountVolume.Setup failed for volume &quot;spark-conf-volume&quot;</a></p>
<p>I had the same problem with s3a (org.apache.hadoop.fs.s3a.S3AFileSystem not found) when I tried:</p> <pre><code> deps: files: - &quot;s3a://k8s-3c172e28d7da2e-bucket/test.jar&quot; </code></pre> <p>Even adding the jar files inside the image &quot;image-registry/spark-base-image&quot; did not work. But I fixed this problem when I added the necessary jars inside the spark-operator pod. You can rebuild your Docker image by adding the jars. In my case with S3, I rebuilt it like this:</p> <pre><code>FROM ghcr.io/googlecloudplatform/spark-operator:v1beta2-1.3.7-3.1.1 ENV SPARK_HOME /opt/spark RUN curl https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.4/hadoop-aws-2.7.4.jar -o ${SPARK_HOME}/jars/hadoop-aws-2.7.4.jar RUN curl https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/1.7.4/aws-java-sdk-1.7.4.jar -o ${SPARK_HOME}/jars/aws-java-sdk-1.7.4.jar </code></pre> <p>The spark-operator image ships with Hadoop version 2.7 inside, and you need to use dependencies for exactly this version from <a href="https://mvnrepository.com/" rel="nofollow noreferrer">https://mvnrepository.com/</a>.</p> <p>First, for testing, I went inside the spark-operator pod with the command</p> <pre><code>kubectl exec -it spark-operator-fb8f779cb-gt657 -n spark-operator -- bash </code></pre> <p>where the pod name is</p> <blockquote> <p>spark-operator-fb8f779cb-gt657</p> </blockquote> <p>and <code>-n spark-operator</code> is the name of the namespace.</p> <p>You can list all your pods and find the pod name with</p> <p><code>kubectl get po -A</code></p> <p>Then inside my spark-operator pod I went to /opt/spark/jars and downloaded the jars (for example with curl <a href="https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.4/hadoop-aws-2.7.4.jar" rel="nofollow noreferrer">https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.4/hadoop-aws-2.7.4.jar</a>).</p> <p>Then I applied my manifest with deps.files and it worked.</p>
<p>My RequestAuthentication is this:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: testing-dev-authenticator namespace: istio-system spec: selector: matchLabels: istio: ingressgateway jwtRules: - issuer: &quot;https://www.googleapis.com/service_accounts/v1/jwk/[email protected]&quot; jwksUri: &quot;https://securetoken.google.com/&lt;project-name&gt;&quot; </code></pre> <p>My AuthorizationPolicy is this:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: test-dev-authorizer-all-svc namespace: dev spec: action: ALLOW rules: - from: - source: notRequestPrincipals: [&quot;*&quot;] to: - operation: notPaths: [&quot;/message/ping&quot;] </code></pre> <p>My requirement is that I don't want JWT auth to be checked on the health check path (in my case /message/ping), but with the above I always get the response <strong>&quot;RBAC: access denied&quot;</strong>.</p>
<blockquote> <p>I wanted all the pods deployed in &quot;dev&quot; namespace to be authenticated except a healthcheck, path of it is path : [&quot;/user/ping&quot;, &quot;/message/ping&quot;] but iam unable to give both at a time</p> </blockquote> <p>I've reproduced your issue and I think it's working as you wanted it to work.</p> <hr /> <p>There are my RequestAuthentication and AuthorizationPolicy yamls.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: testing-dev-authenticator namespace: istio-system spec: selector: matchLabels: istio: ingressgateway jwtRules: - issuer: &quot;[email protected]&quot; jwksUri: &quot;https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json&quot; --- apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: istio-system spec: selector: matchLabels: istio: ingressgateway action: ALLOW rules: - from: - source: requestPrincipals: [&quot;*&quot;] to: - operation: paths: [&quot;/productpage&quot;] - to: - operation: paths: [&quot;/api/v1/products&quot;] </code></pre> <p>You can use the following to exclude path (e.g. &quot;/api/v1/products&quot; ) from JWT, when &quot;/productpage&quot; require JWT and will reject all requests without the token.</p> <p>If you want to exclude more than one path then this should work:</p> <pre><code>paths: [&quot;/api/v1/products&quot;,&quot;/login&quot;] </code></pre> <p>So in your case that would be</p> <pre><code>paths: [&quot;/user/ping&quot;, &quot;/message/ping&quot;] </code></pre> <hr /> <p>I have tested above configuration on <a href="https://istio.io/latest/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo</a> application.</p> <p>There is the token I have used</p> <pre><code>TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) </code></pre> <p>Tests:</p> <pre><code>api/v1/products Without token 200 With token 200 ------------------------------------------------------------------ /productpage Without token 403 With token 200 </code></pre> <hr /> <p>You also mentioned that you want to do that in particular namespace, then you could try with these RequestAuthentication and AuthorizationPolicy yamls.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: testing-dev-authenticator namespace: dev spec: jwtRules: - issuer: &quot;[email protected]&quot; jwksUri: &quot;https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json&quot; --- apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: test-dev-only-authorized-api namespace: dev spec: action: DENY rules: - from: - source: notRequestPrincipals: [&quot;*&quot;] to: - operation: paths: [&quot;/productpage&quot;] </code></pre> <p>Also based on the <a href="https://istio.io/latest/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo</a> application.</p> <p>Tests:</p> <pre><code>api/v1/products Without token 200 With token 200 ------------------------------------------------------------------ /productpage Without token 403 With token 200 </code></pre> <hr /> <p>Additional resources:</p> <ul> <li><a href="https://github.com/istio/istio/issues/27432" rel="nofollow noreferrer">Disable RequestAuthentication JWT rules for specific paths</a></li> <li><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/" rel="nofollow noreferrer">Authorization with JWT</a></li> 
<li><a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#require-valid-tokens-per-path" rel="nofollow noreferrer">Authentication Policy</a></li> </ul>
<p>I'm trying to connect my angular FE with my signalR hub in my backend (.NET core) that is hosted inside kubernetes, when I try I get this error message:<br /> has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. Does anyone know how to solve this? this is my configuration:</p> <pre><code>services.AddCors(x =&gt; x.AddPolicy(&quot;my-cors&quot;, y=&gt; y.WithOrigins(&quot;https://subdomain.mydomain.com&quot;) .AllowAnyMethod().AllowAnyHeader().AllowCredentials())); app.UseRouting(); app.UseCors(&quot;my-cors&quot;); app.UseEndpoints(endpoints =&gt; { endpoints.MapControllers(); endpoints.MapHub&lt;MessageHub&gt;(&quot;/messageHub&quot;); }); [Authorize(AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)] public class MessageHub : Hub { ... } </code></pre> <p>In my ingress configuration I have this:</p> <pre><code>kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-methods: &quot;PUT, GET, POST, OPTIONS&quot; nginx.ingress.kubernetes.io/cors-expose-headers: &quot;*&quot; nginx.ingress.kubernetes.io/cors-allow-origin: &quot;https://subdomain.mydomain.com&quot; nginx.ingress.kubernetes.io/auth-tls-verify-client: &quot;on&quot; nginx.ingress.kubernetes.io/auth-tls-secret: &quot;ingress-sps-tst/tls-secret&quot; nginx.ingress.kubernetes.io/proxy-body-size: 50m nginx.ingress.kubernetes.io/service-upstream: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-credentials: &quot;true&quot; </code></pre> <p>From the angular FE i do this:</p> <pre><code>const hubConnection = new signalR.HubConnectionBuilder() .withUrl(this.signalREndpoint +'/messageHub', { accessTokenFactory: () =&gt; token, }).build(); hubConnection.start().then(....) </code></pre>
<p>I faced the same issue when configuring SignalR for my project using an SPA framework, although my error message was a little different from yours: <code>Request header field x-signalr-user-agent is not allowed by Access-Control-Allow-Headers in preflight response</code></p> <p>After a lot of research on the internet and in the Ingress Nginx documentation, I was able to solve the problem with the CORS policy configuration.</p> <p>In the annotations section of your metadata, use the following config:</p> <pre><code>metadata: name: ingress-svc annotations: kubernetes.io/ingress.class: nginx # CORS for api setup nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; nginx.ingress.kubernetes.io/cors-allow-methods: &quot;GET, POST, PUT, DELETE, PATCH, OPTIONS&quot; nginx.ingress.kubernetes.io/cors-allow-origin: &quot;http://localhost:3000&quot; nginx.ingress.kubernetes.io/cors-allow-headers: &quot;Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,X-SignalR-User-Agent&quot; nginx.ingress.kubernetes.io/cors-allow-credentials: &quot;true&quot; </code></pre> <p>As you can see for <code>nginx.ingress.kubernetes.io/cors-allow-headers</code>, the documentation says the default value does not include the header SignalR needs, <strong>X-SignalR-User-Agent</strong>. Read about it <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors" rel="nofollow noreferrer">here</a>.</p> <p>With this config my project now runs without any problem.</p> <p><img src="https://i.stack.imgur.com/IxbsL.png" alt="Network result tab" /></p>
<p>This might be a hard one to solve, but I'm depending on you! This question is bothering me for days now and I can't seem to figure it out on my own. Our Airflow instance is deployed using the Kubernetes Executor. The Executor starts Worker Pods, which in turn start Pods with our data-transformation logic. The worker and operator pods all run fine, but Airflow has trouble adopting the status.phase:'Completed' pods. It tries and tries, but to no avail.</p> <p>All the pods are running in the airflow-build Kubernetes namespace.</p> <p>The Airflow workers are created with this template:</p> <pre><code>pod_template_file.yaml: apiVersion: v1 kind: Pod metadata: name: dummy-name spec: containers: - args: [] command: [] resources: requests: memory: &quot;100Mi&quot; cpu: &quot;100m&quot; limits: memory: &quot;2Gi&quot; cpu: &quot;1&quot; imagePullPolicy: IfNotPresent name: base image: dummy_image env: - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN valueFrom: secretKeyRef: name: connection-secret key: connection - name: AIRFLOW__CORE__FERNET_KEY valueFrom: secretKeyRef: name: fernet-secret key: fernet volumeMounts: - name: airflow-logs mountPath: &quot;/opt/airflow/logs&quot; subPath: airflow/logs - name: airflow-dags mountPath: /opt/airflow/dags readOnly: true subPath: airflow/dags - name: airflow-config mountPath: &quot;/opt/airflow/airflow.cfg&quot; subPath: airflow.cfg readOnly: true - name: airflow-config mountPath: /opt/airflow/pod_template_file.yaml subPath: pod_template_file.yaml readOnly: true restartPolicy: Never serviceAccountName: scheduler-serviceaccount volumes: - name: airflow-dags persistentVolumeClaim: claimName: airflow-dags-claim - name: airflow-logs persistentVolumeClaim: claimName: airflow-logs-claim - name: airflow-config configMap: name: base-config </code></pre> <p>My simple DAG looks like this:</p> <pre><code>from airflow import DAG from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow import configuration as conf from datetime import datetime, timedelta from kubernetes.client import CoreV1Api, models as k8s # Default settings applied to all tasks default_args = {     'owner': 'airflow',     'depends_on_past': False,     'email_on_failure': False,     'email_on_retry': False,     'retries': 1,     'retry_delay': timedelta(minutes=5) } dag = DAG('test-mike',          start_date=datetime(2019, 1, 1),          max_active_runs=1,          default_args=default_args,          catchup=False           )  namespace = conf.get('kubernetes', 'NAMESPACE') op=KubernetesPodOperator(         task_id='mike-test-task',         namespace=namespace,         service_account_name='scheduler-serviceaccount',         image=f'acrdatahubaragdev.azurecr.io/data-test-mike2:latest',         name='mike-test-name',         is_delete_operator_pod=True,         startup_timeout_seconds=300,         in_cluster=True,         get_logs=True,         dag=dag) </code></pre> <p>Now, everything runs fine. When the Operator Pod is done, it is cleaned up quickly. 
However, the worker Pod remains on the &quot;Completed&quot; status, like this:</p> <pre><code>Name: testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3 Namespace: airflow-build Priority: 0 Node: aks-default-23404167-vmss000000/10.161.132.13 Start Time: Tue, 23 Mar 2021 08:12:57 +0100 Labels: airflow-worker=201 airflow_version=2.0.0 dag_id=test-mike execution_date=2021-03-23T07_12_54.887218_plus_00_00 kubernetes_executor=True task_id=mike-test-task try_number=1 Annotations: dag_id: test-mike execution_date: 2021-03-23T07:12:54.887218+00:00 task_id: mike-test-task try_number: 1 Status: Succeeded IP: 10.161.132.18 IPs: IP: 10.161.132.18 Containers: base: Container ID: docker://abe9e33c356de398af865736e0054b0eaaa6f3b99c44a6d021b4ca3a981161ce Image: apache/airflow:2.0.0-python3.8 Image ID: docker-pullable://apache/airflow@sha256:76bd7cd6d47ffea98df98f5744680a860663bc26836fd3d67d529b06caaf97a7 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: airflow tasks run test-mike mike-test-task 2021-03-23T07:12:54.887218+00:00 --local --pool default_pool --subdir /opt/airflow/dags/rev-5428eb02735b885217bf43fc900c95fb0312c536/test-dag-mike.py State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 23 Mar 2021 08:12:59 +0100 Finished: Tue, 23 Mar 2021 08:13:39 +0100 Ready: False Restart Count: 0 Limits: cpu: 1 memory: 2Gi Requests: cpu: 100m memory: 100Mi Environment: AIRFLOW__CORE__SQL_ALCHEMY_CONN: &lt;set to the key 'connection' in secret 'connection-secret'&gt; Optional: false AIRFLOW__CORE__FERNET_KEY: &lt;set to the key 'fernet' in secret 'fernet-secret'&gt; Optional: false AIRFLOW_IS_K8S_EXECUTOR_POD: True Mounts: /opt/airflow/airflow.cfg from airflow-config (ro,path=&quot;airflow.cfg&quot;) /opt/airflow/dags from airflow-dags (ro,path=&quot;airflow/dags&quot;) /opt/airflow/logs from airflow-logs (rw,path=&quot;airflow/logs&quot;) /opt/airflow/pod_template_file.yaml from airflow-config (ro,path=&quot;pod_template_file.yaml&quot;) /var/run/secrets/kubernetes.io/serviceaccount from scheduler-serviceaccount-token-wt57g (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: airflow-dags: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: airflow-dags-claim ReadOnly: false airflow-logs: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: airflow-logs-claim ReadOnly: false airflow-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: base-config Optional: false scheduler-serviceaccount-token-wt57g: Type: Secret (a volume populated by a Secret) SecretName: scheduler-serviceaccount-token-wt57g Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/memory-pressure:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled &lt;unknown&gt; Successfully assigned airflow-build/testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3 to aks-default-23404167-vmss000000 Normal Pulled 75s kubelet, aks-default-23404167-vmss000000 Container image &quot;apache/airflow:2.0.0-python3.8&quot; already present on machine Normal Created 74s kubelet, aks-default-23404167-vmss000000 Created container base Normal Started 74s kubelet, aks-default-23404167-vmss000000 Started container base </code></pre> <p>The scheduler tries to adopt the completed Worker pod. 
But this pod is not cleaned and the scheduler keeps trying:</p> <pre><code> [2021-03-23 07:14:46,270] {scheduler_job.py:1751} INFO - Resetting orphaned tasks for active dag runs [2021-03-23 07:14:46,377] {rest.py:228} DEBUG - response body: { &quot;kind&quot;: &quot;PodList&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;selfLink&quot;: &quot;/api/v1/namespaces/airflow-build/pods&quot;, &quot;resourceVersion&quot;: &quot;2190163&quot; }, &quot;items&quot;: [ { &quot;metadata&quot;: { &quot;name&quot;: &quot;testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3&quot;, &quot;namespace&quot;: &quot;airflow-build&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/airflow-build/pods/testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3&quot;, &quot;uid&quot;: &quot;24b7341b-99a3-4dae-9266-40e79ac7cd70&quot;, &quot;resourceVersion&quot;: &quot;2190012&quot;, &quot;creationTimestamp&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;labels&quot;: { &quot;airflow-worker&quot;: &quot;201&quot;, &quot;airflow_version&quot;: &quot;2.0.0&quot;, &quot;dag_id&quot;: &quot;test-mike&quot;, &quot;execution_date&quot;: &quot;2021-03-23T07_12_54.887218_plus_00_00&quot;, &quot;kubernetes_executor&quot;: &quot;True&quot;, &quot;task_id&quot;: &quot;mike-test-task&quot;, &quot;try_number&quot;: &quot;1&quot; }, &quot;annotations&quot;: { &quot;dag_id&quot;: &quot;test-mike&quot;, &quot;execution_date&quot;: &quot;2021-03-23T07:12:54.887218+00:00&quot;, &quot;task_id&quot;: &quot;mike-test-task&quot;, &quot;try_number&quot;: &quot;1&quot; }, &quot;managedFields&quot;: [ { &quot;manager&quot;: &quot;kubelet&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:status&quot;: { &quot;f:conditions&quot;: { &quot;k:{\&quot;type\&quot;:\&quot;ContainersReady\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} }, &quot;k:{\&quot;type\&quot;:\&quot;Initialized\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} }, &quot;k:{\&quot;type\&quot;:\&quot;Ready\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} } }, &quot;f:hostIP&quot;: {}, &quot;f:phase&quot;: {}, &quot;f:podIP&quot;: {}, &quot;f:podIPs&quot;: { &quot;.&quot;: {}, &quot;k:{\&quot;ip\&quot;:\&quot;10.161.132.18\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:ip&quot;: {} } }, &quot;f:startTime&quot;: {} } } }, { &quot;manager&quot;: &quot;OpenAPI-Generator&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2021-03-23T07:13:42Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;.&quot;: {}, &quot;f:dag_id&quot;: {}, &quot;f:execution_date&quot;: {}, &quot;f:task_id&quot;: {}, &quot;f:try_number&quot;: {} }, &quot;f:labels&quot;: { &quot;.&quot;: {}, &quot;f:airflow-worker&quot;: {}, &quot;f:airflow_version&quot;: {}, &quot;f:dag_id&quot;: {}, &quot;f:execution_date&quot;: {}, &quot;f:kubernetes_executor&quot;: {}, &quot;f:task_id&quot;: {}, 
&quot;f:try_number&quot;: {} } }, &lt;.....&gt; &quot;status&quot;: { &quot;phase&quot;: &quot;Succeeded&quot;, &quot;conditions&quot;: [ { &quot;type&quot;: &quot;Initialized&quot;, &quot;status&quot;: &quot;True&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;Ready&quot;, &quot;status&quot;: &quot;False&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;ContainersReady&quot;, &quot;status&quot;: &quot;False&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;PodScheduled&quot;, &quot;status&quot;: &quot;True&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:12:57Z&quot; } ], &quot;hostIP&quot;: &quot;10.161.132.13&quot;, &quot;podIP&quot;: &quot;10.161.132.18&quot;, &quot;podIPs&quot;: [ { &quot;ip&quot;: &quot;10.161.132.18&quot; } ], &quot;startTime&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;containerStatuses&quot;: [ { &quot;name&quot;: &quot;base&quot;, &quot;state&quot;: { &quot;terminated&quot;: { &quot;exitCode&quot;: 0, &quot;reason&quot;: &quot;Completed&quot;, &quot;startedAt&quot;: &quot;2021-03-23T07:12:59Z&quot;, &quot;finishedAt&quot;: &quot;2021-03-23T07:13:39Z&quot;, &quot;containerID&quot;: &quot;docker://abe9e33c356de398af865736e0054b0eaaa6f3b99c44a6d021b4ca3a981161ce&quot; } }, &quot;lastState&quot;: {}, &quot;ready&quot;: false, &quot;restartCount&quot;: 0, &quot;image&quot;: &quot;apache/airflow:2.0.0-python3.8&quot;, &quot;imageID&quot;: &quot;docker-pullable://apache/airflow@sha256:76bd7cd6d47ffea98df98f5744680a860663bc26836fd3d67d529b06caaf97a7&quot;, &quot;containerID&quot;: &quot;docker://abe9e33c356de398af865736e0054b0eaaa6f3b99c44a6d021b4ca3a981161ce&quot;, &quot;started&quot;: false } ], &quot;qosClass&quot;: &quot;Burstable&quot; } } ] } [2021-03-23 07:14:46,382] {kubernetes_executor.py:661} INFO - Attempting to adopt pod testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3 [2021-03-23 07:14:46,463] {rest.py:228} DEBUG - response body: { &quot;kind&quot;: &quot;Pod&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3&quot;, &quot;namespace&quot;: &quot;airflow-build&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/airflow-build/pods/testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3&quot;, &quot;uid&quot;: &quot;24b7341b-99a3-4dae-9266-40e79ac7cd70&quot;, &quot;resourceVersion&quot;: &quot;2190012&quot;, &quot;creationTimestamp&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;labels&quot;: { &quot;airflow-worker&quot;: &quot;201&quot;, &quot;airflow_version&quot;: &quot;2.0.0&quot;, &quot;dag_id&quot;: &quot;test-mike&quot;, &quot;execution_date&quot;: &quot;2021-03-23T07_12_54.887218_plus_00_00&quot;, &quot;kubernetes_executor&quot;: &quot;True&quot;, &quot;task_id&quot;: &quot;mike-test-task&quot;, &quot;try_number&quot;: &quot;1&quot; }, &quot;annotations&quot;: { &quot;dag_id&quot;: &quot;test-mike&quot;, &quot;execution_date&quot;: &quot;2021-03-23T07:12:54.887218+00:00&quot;, &quot;task_id&quot;: &quot;mike-test-task&quot;, &quot;try_number&quot;: &quot;1&quot; }, &quot;managedFields&quot;: [ { &quot;manager&quot;: 
&quot;kubelet&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:status&quot;: { &quot;f:conditions&quot;: { &quot;k:{\&quot;type\&quot;:\&quot;ContainersReady\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} }, &quot;k:{\&quot;type\&quot;:\&quot;Initialized\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} }, &quot;k:{\&quot;type\&quot;:\&quot;Ready\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} } }, &quot;f:hostIP&quot;: {}, &quot;f:phase&quot;: {}, &quot;f:podIP&quot;: {}, &quot;f:podIPs&quot;: { &quot;.&quot;: {}, &quot;k:{\&quot;ip\&quot;:\&quot;10.161.132.18\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:ip&quot;: {} } }, &quot;f:startTime&quot;: {} } } }, { &quot;manager&quot;: &quot;OpenAPI-Generator&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2021-03-23T07:13:42Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;.&quot;: {}, &quot;f:dag_id&quot;: {}, &quot;f:execution_date&quot;: {}, &quot;f:task_id&quot;: {}, &quot;f:try_number&quot;: {} }, &quot;f:labels&quot;: { &quot;.&quot;: {}, &quot;f:airflow-worker&quot;: {}, &quot;f:airflow_version&quot;: {}, &quot;f:dag_id&quot;: {}, &quot;f:execution_date&quot;: {}, &quot;f:kubernetes_executor&quot;: {}, &quot;f:task_id&quot;: {}, &quot;f:try_number&quot;: {} } }, &quot;f:spec&quot;: { &lt;....&gt; &quot;tolerations&quot;: [ { &quot;key&quot;: &quot;node.kubernetes.io/not-ready&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;tolerationSeconds&quot;: 300 }, { &quot;key&quot;: &quot;node.kubernetes.io/unreachable&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;tolerationSeconds&quot;: 300 }, { &quot;key&quot;: &quot;node.kubernetes.io/memory-pressure&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoSchedule&quot; } ], &quot;priority&quot;: 0, &quot;enableServiceLinks&quot;: true, &quot;preemptionPolicy&quot;: &quot;PreemptLowerPriority&quot; }, &quot;status&quot;: { &quot;phase&quot;: &quot;Succeeded&quot;, &quot;conditions&quot;: [ { &quot;type&quot;: &quot;Initialized&quot;, &quot;status&quot;: &quot;True&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;Ready&quot;, &quot;status&quot;: &quot;False&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;ContainersReady&quot;, &quot;status&quot;: &quot;False&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:13:40Z&quot;, &quot;reason&quot;: &quot;PodCompleted&quot; }, { &quot;type&quot;: &quot;PodScheduled&quot;, &quot;status&quot;: &quot;True&quot;, 
&quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-03-23T07:12:57Z&quot; } ], &quot;hostIP&quot;: &quot;10.161.132.13&quot;, &quot;podIP&quot;: &quot;10.161.132.18&quot;, &quot;podIPs&quot;: [ { &quot;ip&quot;: &quot;10.161.132.18&quot; } ], &quot;startTime&quot;: &quot;2021-03-23T07:12:57Z&quot;, &quot;containerStatuses&quot;: [ { &quot;name&quot;: &quot;base&quot;, &quot;state&quot;: { &quot;terminated&quot;: { &quot;exitCode&quot;: 0, &quot;reason&quot;: &quot;Completed&quot;, &quot;startedAt&quot;: &quot;2021-03-23T07:12:59Z&quot;, &quot;finishedAt&quot;: &quot;2021-03-23T07:13:39Z&quot;, &quot;containerID&quot;: &quot;docker://abe9e33c356de398af865736e0054b0eaaa6f3b99c44a6d021b4ca3a981161ce&quot; } }, &quot;lastState&quot;: {}, &quot;ready&quot;: false, &quot;restartCount&quot;: 0, &quot;image&quot;: &quot;apache/airflow:2.0.0-python3.8&quot;, &quot;imageID&quot;: &quot;docker-pullable://apache/airflow@sha256:76bd7cd6d47ffea98df98f5744680a860663bc26836fd3d67d529b06caaf97a7&quot;, &quot;containerID&quot;: &quot;docker://abe9e33c356de398af865736e0054b0eaaa6f3b99c44a6d021b4ca3a981161ce&quot;, &quot;started&quot;: false } ], &quot;qosClass&quot;: &quot;Burstable&quot; } } </code></pre> <p>Thanks for helping me out!</p>
<p>Did you try to play with these values?</p> <pre><code>AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: &quot;True&quot; AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE: &quot;False&quot; </code></pre>
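<p>Since your setup mounts <code>airflow.cfg</code> from the <code>base-config</code> ConfigMap, the equivalent settings can also live in the config file itself. A minimal sketch, assuming Airflow 2.0 option names (please double-check them against the configuration reference of your exact version):</p> <pre><code>[kubernetes]
delete_worker_pods = True
delete_worker_pods_on_failure = False
</code></pre>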
<pre><code>istioctl kube-inject \ --injectConfigFile inject-config.yaml \ --meshConfigFile mesh-config.yaml \ --valuesFile inject-values.yaml \ --filename samples/sleep/sleep.yaml \ | kubectl apply -f - </code></pre> <p>While trying to inject the istio sidecar container manually into a pod with the command above, I got this error:</p> <pre><code>Error: template: inject:469: function &quot;appendMultusNetwork&quot; not defined </code></pre> <p>I was following <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/</a></p>
<p>As mentioned in the comments, I have tried to reproduce your issue on gke with istio 1.7.4 installed.</p> <p>I've followed the documentation you mentioned and it worked without any issues.</p> <hr /> <p>1. Install istioctl and the istio default profile</p> <pre><code>curl -sL https://istio.io/downloadIstioctl | sh - export PATH=$PATH:$HOME/.istioctl/bin istioctl install </code></pre> <p>2. Create the <code>samples/sleep</code> directory and create <a href="https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml" rel="nofollow noreferrer">sleep.yaml</a>, for example with vi.</p> <p>3. Create local copies of the configuration.</p> <pre><code>kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' &gt; inject-config.yaml kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' &gt; inject-values.yaml kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' &gt; mesh-config.yaml </code></pre> <p>4. Apply it with istioctl kube-inject</p> <pre><code>istioctl kube-inject \ --injectConfigFile inject-config.yaml \ --meshConfigFile mesh-config.yaml \ --valuesFile inject-values.yaml \ --filename samples/sleep/sleep.yaml \ | kubectl apply -f - </code></pre> <p>5. Verify that the sidecar has been injected</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE sleep-5768c96874-m65bg 2/2 Running 0 105s </code></pre> <hr /> <p>So there are a few things worth checking, as any of them might cause this issue:</p> <ul> <li>Could you please check if you executed all your commands correctly?</li> <li>Maybe you are running an older version of istio and should follow the older <a href="https://istio.io/archive/" rel="nofollow noreferrer">documentation</a>?</li> <li>Maybe you changed something in the local copies of the configuration above and that caused the issue? If you did, what exactly did you change?</li> </ul>
<p>I've got a deployment which worked just fine on K8S 1.17 on EKS. After upgrading K8S to 1.18, I tried to use <code>startupProbe</code> feature with a simple deployment. Everything works as expected. But when I tried to add the <code>startupProbe</code> to my production deployment, it didn't work. The cluster simply drops the <code>startupProbe</code> entry when creating pods (the <code>startupProbe</code> entry exists in deployment object definition on the cluster though). Interestingly when I change the <code>serviceAccountName</code> entry to <code>default</code> (instead of my application service account) in the deployment manifest, everything works as expected. So the question now is, why existing service accounts can't have startup probes? Thanks.</p>
<p>Posting this as a community member answer. Feel free to expand.</p> <h2>Issue</h2> <blockquote> <p>startupProbe is not applied to Pod if serviceAccountName is set</p> <p>When adding serviceAccountName and startupProbe to the pod template in my deployment, the resulting pods will not have a startup probe.</p> </blockquote> <p>There is a <a href="https://github.com/kubernetes/kubernetes/issues/95604" rel="nofollow noreferrer">github issue</a> about that.</p> <h2>Solution</h2> <p>This issue is being addressed <a href="https://github.com/aws/amazon-eks-pod-identity-webhook/issues/84" rel="nofollow noreferrer">here</a>; at the time of writing it is still open and there is no specific answer for it.</p> <p>As mentioned by @mcristina422:</p> <blockquote> <p>I think this is due to the old version of k8s.io/api being used in the webhook. The API for the startup probe was added more recently. Updating the k8s packages should fix this</p> </blockquote>
<p>I'm trying to setup an Ingress in my kubernetes cluster, with no success. I followed the instructions specified <a href="https://doc.traefik.io/traefik/v1.7/user-guide/kubernetes/" rel="nofollow noreferrer">here</a>. Basically I applied the following objects:</p> <p>First, set the RBAC infrastructure:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: traefik-ingress-controller rules: - apiGroups: - &quot;&quot; resources: - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses/status verbs: - update --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: traefik-ingress-controller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: traefik-ingress-controller subjects: - kind: ServiceAccount name: traefik-ingress-controller namespace: kube-system </code></pre> <p>Then, created a DaemonSet with the Pods running the Ingress Controller:</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: traefik-ingress-controller namespace: kube-system --- kind: DaemonSet apiVersion: apps/v1 metadata: name: traefik-ingress-controller namespace: kube-system labels: k8s-app: traefik-ingress-lb spec: selector: matchLabels: k8s-app: traefik-ingress-lb name: traefik-ingress-lb template: metadata: labels: k8s-app: traefik-ingress-lb name: traefik-ingress-lb spec: serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 60 containers: - image: traefik:v1.7 name: traefik-ingress-lb ports: - name: http containerPort: 80 hostPort: 80 - name: admin containerPort: 8080 hostPort: 8080 securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE args: - --api - --kubernetes - --logLevel=INFO --- kind: Service apiVersion: v1 metadata: name: traefik-ingress-service namespace: kube-system spec: selector: k8s-app: traefik-ingress-lb ports: - protocol: TCP port: 80 name: web - protocol: TCP port: 8080 name: admin </code></pre> <p>Now, I can access the Ingress web UI using port 8080 on one of my nodes, and I also know that port 80 directs traffic to the Ingress Controller, since when I run</p> <pre><code>$ curl localhost:80 404 page not found </code></pre> <p>The problem is, that when trying to create a Ingress object, for some reason, the Ingress Controller doesn't redirect it to the backend service. Created the following objects:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: svc-myserver spec: type: ClusterIP selector: app: my-server ports: - protocol: TCP port: 80 targetPort: 3000 </code></pre> <p>This works, when running:</p> <pre><code>$ curl 10.100.127.255 Hello world !
Version 1 </code></pre> <p>Now, all there's to do is to create an Ingress that will forward traffic to the service:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-server-ingress namespace: kube-system spec: rules: - host: myapp.com http: paths: - path: / pathType: Prefix backend: service: name: svc-myserver port: number: 80 </code></pre> <p>Now when I run:</p> <pre><code>$ kubectl get ingress -A NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE kube-system my-server-ingress &lt;none&gt; myapp.com 80 13h </code></pre> <p>You can see that the ADDRESS field is empty, and indeed when I try to browse to this <code>myapp.com</code> ( Updated it in the <code>/etc/hosts</code>) - I get 404.</p> <p>What am I missing?</p> <p><strong>Versions</strong> <br> kubeadm, kubectl, kubelet - 1.21.00 <br> traefik 1.17</p>
<p>OK, I understood what the problem was: my Ingress object wasn't in the same namespace as the Service. When examining the logs of the Ingress Controller's Pod, I got:</p> <pre><code>time=&quot;2021-12-21T19:16:33Z&quot; level=error msg=&quot;Service not found for kube-system/svc-myserver&quot; time=&quot;2021-12-21T19:16:33Z&quot; level=error msg=&quot;Service not found for kube-system/traefik-web-ui&quot; time=&quot;2021-12-21T19:16:33Z&quot; level=error msg=&quot;Service not found for kube-system/traefik-web-ui&quot; time=&quot;2021-12-21T19:16:33Z&quot; level=error msg=&quot;Service not found for kube-system/svc-myserver&quot; time=&quot;2021-12-21T19:16:33Z&quot; level=error msg=&quot;Service not found for kube-system/traefik-web-ui&quot; </code></pre> <p>Once I moved it to the correct namespace, it worked like a charm.</p> <p>Thanks!</p>
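<p>For completeness, a sketch of the fixed Ingress, assuming the Service lives in the default namespace (adjust the namespace to wherever <code>svc-myserver</code> actually is):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-server-ingress
  namespace: default   # same namespace as svc-myserver
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-myserver
                port:
                  number: 80
</code></pre>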
<p>I have a setup of Apache Ignite server and having SpringBoot application as client in a Kubernetes cluster.</p> <p>During performance test, I start to notice that the below log showing up frequently in SpringBoot application:</p> <p><code>org.apache.ignite.internal.IgniteKernal: Possible too long JVM pause: 714 milliseconds</code></p> <p>According to <a href="https://stackoverflow.com/questions/52400019/apache-ignite-jvm-pause-detector-worker-possible-too-long-jvm-pause">this post</a>, this is due to "JVM is experiencing long garbage collection pauses", but Infrastructure team has confirmed to me that we have included <code>+UseG1GC</code> and <code>+DisableExplicitGC</code> in the Server JVM option and this line of log only show in SpringBoot application.</p> <p>Please help on this following questions:</p> <ol> <li>Is the GC happening in the Client(SpringBoot application) or Server node?</li> <li>What will be that impact of long GC pause?</li> <li>What should I do to prevent the impact?</li> <li>Do I have to configure the JVM option in SpringBoot application as well?</li> </ol>
<p><em>Is the GC happening in the Client(SpringBoot application) or Server node?</em></p> <p>The GC message is logged by the node that suffers the problem, so in this case it is the client (the SpringBoot application).</p> <p><em>What will be that impact of long GC pause?</em></p> <p>Such pauses decrease overall performance. Also, if a pause is longer than failureDetectionTimeout, the node will be disconnected from the cluster.</p> <p><em>What should I do to prevent the impact?</em></p> <p>General advice is collected here - <a href="https://apacheignite.readme.io/docs/jvm-and-system-tuning" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/jvm-and-system-tuning</a>. You can also enable GC logs to get the full picture of what happens.</p> <p><em>Do I have to configure the JVM option in SpringBoot application as well?</em></p> <p>It looks like you should, because the problems are on the client's node.</p>
<p>I want to pipe the output of my nodejs file like <code>node example.js|node_modules/bunyan/bin/bunyan</code> for better readability. How can I specify this in the yaml?</p> <p>I tried several things like:</p> <pre><code> command: - node args: - index.js | node_modules/bunyan/bin/bunyan </code></pre> <p>or</p> <pre><code>command: - node args: - index.js - node_modules/bunyan/bin/bunyan </code></pre> <p>or</p> <pre><code>command: - node index.js | node_modules/bunyan/bin/bunyan </code></pre> <p>but none of it worked.<br /> Is it possible and if yes, what's the correct way to do it?</p>
<p>Thanks for your help but I already found a solution that worked for me. Instead of directly using the command I stored it in a shell script and use it for execution.</p>
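<p>As an illustration of that approach, a rough sketch of the pod spec, assuming the wrapper script is called <code>run.sh</code>, sits next to <code>index.js</code> in the image, and contains <code>node index.js | node_modules/bunyan/bin/bunyan</code> (the image name is a placeholder):</p> <pre><code>containers:
  - name: app
    image: my-node-image   # placeholder
    command: [&quot;/bin/sh&quot;, &quot;run.sh&quot;]
    # an inline alternative that avoids the extra script:
    # command: [&quot;/bin/sh&quot;, &quot;-c&quot;]
    # args: [&quot;node index.js | node_modules/bunyan/bin/bunyan&quot;]
</code></pre>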
<p>I found <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html" rel="nofollow noreferrer">that documentation</a> that we can add AWS IAM role to kubernetes serviceaccount and attach to Pods. And what I'm supposed to do is I want to attach that service account to DaemonSet instead of Pods level permission. But I configured same as that documentation and attached to DaemonSet but I've encountered following error message after that:</p> <p><code>Aws::STS::Errors::AccessDenied error=&quot;Not authorized to perform sts:AssumeRoleWithWebIdentity</code></p> <p>Is that meant those type of serviceaccount with IAM role cannot be attached to DaemonSet?</p>
<blockquote> <p>Does that mean that this type of service account with an IAM role cannot be attached to a DaemonSet?</p> </blockquote> <p>No, there shouldn't be any issues with that. I checked <a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="nofollow noreferrer">here</a> and there is an example with a service account in a deployment.</p> <hr /> <p>As @PPShein mentioned in the comments, the issue occurred because he forgot to add the <code>openid_url</code>.</p> <p>Please refer to <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html" rel="nofollow noreferrer">this</a> and <a href="https://marcincuber.medium.com/amazon-eks-with-oidc-provider-iam-roles-for-kubernetes-services-accounts-59015d15cb0c" rel="nofollow noreferrer">this</a> documentation.</p>
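<p>For illustration, a rough sketch of a DaemonSet using such a service account, assuming the cluster's OIDC provider is already configured for IRSA; the role ARN, names and image are placeholders:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-daemonset-sa
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-role   # placeholder ARN
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: my-daemonset
  template:
    metadata:
      labels:
        app: my-daemonset
    spec:
      serviceAccountName: my-daemonset-sa
      containers:
        - name: app
          image: my-image   # placeholder
</code></pre>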
<p>I'm working with microk8s using Kubernetes 1.19. The provided ingress.yaml does not work. Given my troubleshooting below, it seems like ngnix cannot connect to the default-http-backend. Microk8s was installed on a ubuntu 20.04 using snap. I know that there exists a ingress addon. But nonetheless, I would like it to work with this setup.</p> <p><strong>microk8s kubectl get pods --all-namespaces</strong></p> <pre><code>kube-ingress default-http-backend-7744d88f46-45vp7 1/1 Running 0 53m kube-ingress nginx-74dd8dd664-7cn67 0/1 CrashLoopBackOff 15 53m </code></pre> <p><strong>microk8s kubectl logs -n kube-ingress nginx-74dd8dd664-7cn67</strong></p> <pre><code>W1014 08:28:14.903056 6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) W1014 08:28:14.903143 6 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I1014 08:28:14.903398 6 main.go:220] Creating API client for https://10.152.183.1:443 I1014 08:28:14.910869 6 main.go:264] Running in Kubernetes cluster version v1.19+ (v1.19.2-34+1b3fa60b402c1c) - git (clean) commit 1b3fa60b402c1c4cb0df8a99b733ad41141a2eb7 - platform linux/amd64 F1014 08:28:14.913646 6 main.go:91] No service with name kube-ingress/default-http-backend found: services &quot;default-http-backend&quot; not found </code></pre> <p><strong>ingress.yml</strong></p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: kube-ingress --- kind: ConfigMap metadata: namespace: kube-ingress name: nginx apiVersion: v1 data: proxy-connect-timeout: &quot;15&quot; proxy-read-timeout: &quot;600&quot; proxy-send-timeout: &quot;600&quot; hsts-include-subdomains: &quot;false&quot; body-size: &quot;200m&quot; server-name-hash-bucket-size: &quot;256&quot; --- apiVersion: apps/v1 kind: Deployment metadata: name: default-http-backend namespace: kube-ingress spec: replicas: 1 selector: matchLabels: app: default-http-backend template: metadata: labels: app: default-http-backend spec: containers: - name: default-http-backend # Any image is permissable as long as: # 1. It serves a 404 page at / # 2. 
It serves 200 on a /healthz endpoint image: gcr.io/google_containers/defaultbackend:1.0 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx namespace: kube-ingress spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: serviceAccountName: nginx containers: - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.0 name: nginx imagePullPolicy: Always env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 80 - containerPort: 443 args: - /nginx-ingress-controller - --default-backend-service=kube-ingress/default-http-backend - --configmap=kube-ingress/nginx --- kind: ServiceAccount apiVersion: v1 metadata: name: nginx namespace: kube-ingress --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nginx-ingress-newrole rules: - apiGroups: - &quot;&quot; resources: - services verbs: - get - list - watch --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nginx-ingress-newrole roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: nginx-ingress-newrole subjects: - kind: ServiceAccount name: nginx namespace: kube-ingress --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nginx-ingress-clusterole rules: - apiGroups: - &quot;&quot; resources: - services verbs: - get - list - watch --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nginx-ingress-clusterole roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: nginx-ingress-clusterole subjects: - kind: ServiceAccount name: nginx namespace: kube-ingress </code></pre>
<h2>Issue</h2> <p>As mentioned in the logs</p> <pre><code>No service with name kube-ingress/default-http-backend found: services &quot;default-http-backend&quot; not found </code></pre> <p>The main issue here was the lack of <code>default-http-backend</code> service in <code>kube-ingress</code> namespace.</p> <h2>Solution</h2> <p>The solution here is to simply add the <code>default-http-backend</code> <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a>.</p> <p>You can create it with <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="nofollow noreferrer">kubectl expose</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">yaml</a> file.</p>
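<p>For example, a minimal sketch of such a Service, matching the label selector and container port of the <code>default-http-backend</code> Deployment from your manifest:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-ingress
spec:
  selector:
    app: default-http-backend
  ports:
    - port: 80
      targetPort: 8080
</code></pre>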
<p>I am passing some test variables to my pod which has code in nodejs like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: envar-demo labels: purpose: demonstrate-envars spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: DEMO_GREETING value: &quot;Hello from the environment&quot; - name: DEMO_FAREWELL value: &quot;Such a sweet sorrow&quot; </code></pre> <p>I am trying to access the value of these variables from the <strong>package.json</strong> that I have inside my pod, but I'm not sure how to do it if I don't have an .env file from which I can read these variables using grep.</p>
<p>No matter how you have set the variable through Kubernetes, export, or just before calling a command. You can get access to it as in the usual bash script.</p> <pre><code>&quot;scripts&quot;: { &quot;envvar&quot;: &quot;echo $TEST_ENV_VAR&quot; }, </code></pre> <p>Then we can run it</p> <pre><code>➜ TEST_ENV_VAR=4342 npm run envvar &gt; envvar &gt; echo $TEST_ENV_VAR 4342 </code></pre>
<p>I am trying to send my app to a Google Cloud Cluster using the <code>kubectl</code> command behind a corporative proxy that needs a certificate ".crt" file to be used when doing HTTPS requests.</p> <p>I already ran the <code>gcloud container clusters get-credentials...</code> command and it also asked for a certificate. I followed the given instructions by Google and I configured my certificate file without any issue and it worked.</p> <p>But when I try the <code>kubectl get pods</code> I am getting the following message:</p> <pre><code>"Unable to connect to the server: x509: certificate signed by unknown authority" </code></pre> <p>How can I configure my certificate file to be used by the kubectl command?</p> <p>I did a search about this subject but I found too difficult steps. Could I just run something like this:</p> <pre><code>kubectl --set_ca_file /path/to/my/cert </code></pre> <p>Thank you</p>
<p>The short answer, as far as I know, is no.</p> <p>Here[1] you can see a step-by-step guide for the easiest way I have found so far to get this done; it is not a one-line solution, but it is the closest to that.</p> <p>Once you have your cert files, you need to run this:</p> <pre><code>gcloud compute ssl-certificates create test-ingress-1 \ --certificate [FIRST_CERT_FILE] --private-key [FIRST_KEY_FILE] </code></pre> <p>Then you need to create your YAML file with the configuration (the link contains two examples) and apply it with:</p> <pre><code>kubectl apply -f [NAME_OF_YOUR_FILE].yaml </code></pre> <p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl</a></p>
<p>I have a python script in our K8s cluster that is run as a k8s Cronjob every few minutes. The script checks the nodes in the cluster and if a node is unhealthy for more than 5 minutes, it terminates the node. To connect to AWS I use Boto3. My requirement.txt:</p> <pre><code>boto3==1.16.11 botocore==1.19.11 </code></pre> <p>and the permissions are passed as pod annotations.</p> <pre><code>Annotations: iam.amazonaws.com/role: arn:aws:iam::123456789:role/k8s-nodes-monitoring-role </code></pre> <p>The IAM role has <code>arn:aws:iam::aws:policy/AmazonEC2FullAccess</code> policy and a valid trust policy.</p> <pre><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Principal&quot;: { &quot;Service&quot;: &quot;ec2.amazonaws.com&quot; }, &quot;Action&quot;: &quot;sts:AssumeRole&quot; }, { &quot;Sid&quot;: &quot;&quot;, &quot;Effect&quot;: &quot;Allow&quot;, &quot;Principal&quot;: { &quot;AWS&quot;: &quot;arn:aws:iam::123456789:role/nodes.my-domain.com&quot; }, &quot;Action&quot;: &quot;sts:AssumeRole&quot; } ] } </code></pre> <p>The problem that I'm facing is that on some occasions the script throws a <code>NoCredentialsError('Unable to locate credentials')</code> error. This behaviour is not consistent, as on most occasions the script successfully terminates the unhealthy node and I can cross-check it against AWS CloudTrail events. I can see in the kube2iam logs that the Get request receives 200 but the Put request receives 403.</p> <pre><code>time=&quot;2020-12-21T12:50:16Z&quot; level=info msg=&quot;GET /latest/meta-data/iam/security-credentials/k8s-nodes-monitoring-role (200) took 47918.000000 ns&quot; req.method=GET req.path=/latest/meta-data/iam/security-credentials/k8s-nodes-monitoring-role req.remote=100.116.203.13 res.duration=47918 res.status=200 time=&quot;2020-12-21T12:52:16Z&quot; level=info msg=&quot;PUT /latest/api/token (403) took 19352999.000000 ns&quot; req.method=PUT req.path=/latest/api/token req.remote=100.116.203.14 res.duration=1.9352999e+07 res.status=40 </code></pre> <p>Any help or idea about how to debug this will be highly appreciated.</p>
<p>I don't know kube2iam in detail, but maybe you should switch to an AWS-native way called IRSA (IAM Roles for Service Accounts). You can find all the necessary information in this blog post: <a href="https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="nofollow noreferrer">https://aws.amazon.com/de/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/</a></p>
<p>When creating a <code>Deployment</code> and <code>HorizontalPodAutoscaler</code> together in a Helm chart, should the deployment’s <code>.spec.replicas</code> be set to <code>null</code>, or should it be unset, or should it be set to some value between the hpa’s <code>minReplicas</code> and <code>maxReplicas</code>?</p> <p>When you create a hpa, the hpa controller manages the deployment’s <code>.spec.replicas</code>, so when you update the deployment’s other fields you shouldn’t change the replicas.</p> <p>Comparing to <code>kubectl apply</code> declarative config, you can modify other fields of the deployment without modifying <code>.spec.replicas</code> if you leave <code>.spec.replicas</code> unset the first time the deployment is created, so that the 3-way diff ignores that field when the deployment is applied in the future. Or to omit the field after creation time, you have to use <code>kubectl apply edit-last-applied</code> to avoid accidentally scaling down to 1 when removing the field (<a href="https://github.com/kubernetes/kubernetes/issues/67135" rel="nofollow noreferrer">kubernetes/kubernetes#67135</a>). So with <code>kubectl apply</code> it is possible to apply a deployment while not touching <code>.spec.replicas</code>.</p> <p>What is the correct way to <code>helm upgrade</code> a deployment’s other fields without changing its scaling?</p>
<p>I checked <a href="https://github.com/helm/helm/issues/7090" rel="nofollow noreferrer">here</a> and If I understand correctly there are 2 ways of doing that.</p> <p>1.Add if statement as a workaround.</p> <p>There is comment about it added by @naseemkullah.</p> <blockquote> <p>so the workaround is to add an if statement around the deployment's spec.replicas to not template it if HPA is enabled</p> </blockquote> <hr /> <p>2.Remove replicas field completely, then it should respect the replicas number managed by HPA.</p> <p>There is <a href="https://github.com/helm/helm/issues/7090#issuecomment-605870466" rel="nofollow noreferrer">comment</a> about it added by @tianchengli.</p> <blockquote> <p>If I remove replicas field completely, it will respect the replicas number managed by HPA.</p> </blockquote>
<p>I recently started using TRAINS, with the server in AWS AMI. We are currently using v0.9.0.</p> <p>I would like to move the TRAINS-server to run on our on-premises kubernetes cluster. However, I don't want to lose the data on the current server in AWS (experiments, models, logins, etc...). Is there a way to backup the current server and restore it to the local server?</p> <p>Thanks!</p>
<p>Since this package is quite new, I'm making sure we are both referring to the same one, <em>TRAINS-server</em> <a href="https://github.com/allegroai/trains-server" rel="nofollow noreferrer">https://github.com/allegroai/trains-server</a> (which I'm one of the maintainers)</p> <p>Backup the persistent data folders in the <em>TRAINS-server</em> AMI distribution:</p> <ul> <li>MongoDB: /opt/trains/data/mongo/ </li> <li>ElasticSearch: /opt/trains/data/elastic/ </li> <li>File Server: /mnt/fileserver/</li> </ul> <p>Once you have your Kubernetes cluster up, restore the three folders to a <em>sharable location</em>. When creating the <em>TRAINS-server</em> deployment yaml make sure you map the <em>sharable location</em> to the specific locations the container expects, e.g. /mnt/shared/trains/data/mongo:/opt/trains/data/mongo </p> <p>Start the Kubernetes <em>TRAINS-server</em>, it should now have all the previous data/users etc.</p>
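<p>As an illustration, a rough sketch of one such mapping in the pod spec, using a hostPath for simplicity (a PersistentVolumeClaim backed by your shared storage works the same way; the image is a placeholder):</p> <pre><code>containers:
  - name: mongo
    image: mongo   # placeholder, use the image/tag from the original TRAINS-server setup
    volumeMounts:
      - name: mongo-data
        mountPath: /opt/trains/data/mongo
volumes:
  - name: mongo-data
    hostPath:
      path: /mnt/shared/trains/data/mongo
</code></pre>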
<p>I have a kubernetes cluster in GKE. Inside the cluster there is an private docker registry service. A certificate for this service is generated inside a docker image by running:</p> <pre class="lang-sh prettyprint-override"><code>openssl req -x509 -newkey rsa:4096 -days 365 -nodes -sha256 -keyout /certs/tls.key -out /certs/tls.crt -subj &quot;/CN=registry-proxy&quot; </code></pre> <p>When any pod that uses an image from this private registry tries to pull the image I get an error:</p> <pre><code>x509: certificate signed by unknown authority </code></pre> <p>Is there any way to put the self signed certificate to all GKE nodes in the cluster to resolve the problem?</p> <p>UPDATE</p> <p>I put the CA certificate to each GKE node as @ArmandoCuevas recommended in his comment, but it doesn't help, still getting the error <code>x509: certificate signed by unknown authority</code>. What could cause it? How docker images are pulled into pods?</p>
<p>TL;DR: Almost all modifications you need to perform to nodes in GKE, like adding trusted root certificates to the server, can be done using Daemonsets.</p> <p>There is an amazing guide that the user <a href="https://github.com/samos123" rel="nofollow noreferrer">Sam Stoelinga</a> created about how to perform what you are looking to do. The link can be found <a href="https://github.com/samos123/gke-node-customizations#2-deploy-daemonset-to-insert-ca-on-gke-nodes" rel="nofollow noreferrer">here</a>.</p> <p>As a summary, the way Sam proposes to perform this change is by distributing the cert to each of the nodes using a DaemonSet. Since the DaemonSet guarantees that there is always 1 pod on each of the nodes, that pod is in charge of adding your certificate to the node so you can pull your images from the private registry.</p> <p>Normally, modifying the node on your own will not work, since if GKE needs to recreate the node your change will be lost. The DaemonSet approach guarantees that even if the node is recreated, the DaemonSet will schedule one of these &quot;overhaul pods&quot; on it, so you will always have the cert in place.</p> <p>The steps that Sam proposed are very simple:</p> <ol> <li>Create an image with the commands needed to distribute the certificate. This step may differ depending on whether you are using Ubuntu nodes or COS nodes. For COS nodes, the commands your pod needs to run are perfectly outlined by Sam:</li> </ol> <pre><code>cp /myCA.pem /mnt/etc/ssl/certs nsenter --target 1 --mount update-ca-certificates nsenter --target 1 --mount systemctl restart docker </code></pre> <p>If you are running Ubuntu nodes, the commands are outlined in several posts on Ask Ubuntu like <a href="https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate">this one</a>.</p> <ol start="2"> <li><p>Move the image to a container registry that your nodes currently have access to, like <a href="https://cloud.google.com/container-registry" rel="nofollow noreferrer">GCR</a>.</p> </li> <li><p>Deploy the DaemonSet using the image that adds the cert, as privileged with the NET_ADMIN capability (needed to perform this operation), and mount the host's &quot;/etc&quot; folder inside the pod (a rough sketch is shown below). Sam added an example of doing this that may help, but you can use your own definition. If you face problems while trying to deploy a privileged pod, it may be worth taking a look at the GKE documentation about <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies" rel="nofollow noreferrer">Using PodSecurityPolicies</a>.</p> </li> </ol>
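<p>As a rough sketch only (not Sam's exact manifest), such a DaemonSet could look like the following; the image name is a placeholder for the certificate-installing image from step 2, its entrypoint is expected to run the commands from step 1 and then sleep, and <code>hostPID: true</code> is assumed so that <code>nsenter</code> can target the host's PID 1:</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: trust-registry-ca
spec:
  selector:
    matchLabels:
      app: trust-registry-ca
  template:
    metadata:
      labels:
        app: trust-registry-ca
    spec:
      hostPID: true
      containers:
        - name: installer
          image: gcr.io/my-project/ca-installer:latest   # placeholder
          securityContext:
            privileged: true
            capabilities:
              add: [&quot;NET_ADMIN&quot;]
          volumeMounts:
            - name: etc
              mountPath: /mnt/etc
      volumes:
        - name: etc
          hostPath:
            path: /etc
</code></pre>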
<p>I'm trying to install Istio on AKS in our company environment, which means we need to refer to the internal proxy of our docker registries. I'm following these links: <a href="https://learn.microsoft.com/en-us/azure/aks/servicemesh-istio-install?pivots=client-operating-system-macos" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/servicemesh-istio-install?pivots=client-operating-system-macos</a> and <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioComponentSetSpec" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioComponentSetSpec</a></p> <p>The version of Istio I'm trying to install is 1.6.13, not 1.7.x, because 1.7.x is not compatible with Kubeflow: <a href="https://github.com/kubeflow/kubeflow/issues/5434" rel="nofollow noreferrer">https://github.com/kubeflow/kubeflow/issues/5434</a></p> <p>While constructing the IstioOperatorSpec, I'm able to get the basics to work by providing the hub argument, but I'm not able to enable <code>addonComponents</code> such as grafana because they require images from a different <code>hub</code>. My question is: how do I set the <code>hub</code> argument for each of the <code>addonComponents</code>?</p> <p>This one doesn't work since I provided the <code>hub</code> argument under <code>addonComponents</code>:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: istio-control-plane spec: hub: INTERNAL_DOCKER_HUB_1 profile: default addonComponents: grafana: enabled: true hub: INTERNAL_DOCKER_HUB_2 </code></pre>
<p>I would start by pointing out that <code>addonComponents</code> and <code>IstioComponent</code> are two different things.</p> <p>According to the documentation:</p> <p><a href="https://i.stack.imgur.com/sAqGl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sAqGl.png" alt="enter image description here" /></a></p> <p>So AFAIK it's not possible to set <code>hub</code> in addonComponents, as it's not possible to configure it with <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#ExternalComponentSpec" rel="nofollow noreferrer">ExternalComponentSpec</a>.</p> <hr /> <p>As mentioned in the comments by @Joel and @Rinor:</p> <blockquote> <p>Istio doesn't recommend to use their addons templates for anything else than PoC / demo purpose, they recommend that you create your own templates. Which perhaps would explain if there's no possibility to provide a specific hub. – Joel</p> </blockquote> <blockquote> <p>Oh but the addons are removed from Istio, he'd have to manually replace the images in the samples if he's going to use those.</p> </blockquote> <p>That's actually the answer to your question: if you want to configure addons, change images, or add things like persistence or advanced security settings, you should consider creating your own addon templates and configuring them alongside istio.</p> <p>Those are the addon <a href="https://github.com/istio/istio/tree/master/samples/addons" rel="nofollow noreferrer">yamls</a>; you can use them as a reference to build your own setup.</p> <hr /> <p>Note that with istio 1.8, installation of addons with istioctl has been removed.</p> <p>As mentioned <a href="https://istio.io/latest/blog/2020/addon-rework/#changes" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Istio 1.8: Installation of addons by istioctl is removed.</p> <p>Removed the bundled addon installations from istioctl and the operator. Istio does not install components that are not delivered by the Istio project. As a result, Istio will stop shipping installation artifacts related to addons. However, Istio will guarantee version compatibility where necessary. It is the user’s responsibility to install these components by using the official Integrations documentation and artifacts provided by the respective projects. For demos, users can deploy simple YAML files from the <a href="https://github.com/istio/istio/tree/release-1.8/samples/addons" rel="nofollow noreferrer">samples/addons/</a> directory.</p> </blockquote> <hr />
<p>I have my own hosted Kubernetes cluster where I store my secrets in vault. To give my microservices access to the secrets managed by vault, I want to authenticate my microservices via their service accounts. The problem I'm facing is that vault rejects the service accounts (JWTs) with the following error:</p> <pre><code>apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority </code></pre> <p>The service accounts are signed with Kubernetes own CA. I did not replace this with Vault's <code>pki</code> solution. Is it possible to configure Vault to trust my Kubernetes CA certificate and therefore the JWTs?</p>
<p>This kind of error can be caused by a recent change to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery" rel="nofollow noreferrer">Service Account Issuer Discovery</a> in Kubernetes 1.21.</p> <p>In order to mitigate this issue, there are a couple of options that you can choose from based on your expectations:</p> <ol> <li>Manually create a service account, secret and mount it in the pod as mentioned <a href="https://github.com/external-secrets/kubernetes-external-secrets/issues/721#issuecomment-979883828" rel="nofollow noreferrer">on this github post</a>.</li> <li>Disable issuer validation as mentioned <a href="https://github.com/external-secrets/kubernetes-external-secrets/issues/721#issue-868030068" rel="nofollow noreferrer">on another github post</a>.</li> <li>Downgrade the cluster to version 1.20.</li> </ol> <p>There are also a couple of external blog articles about this on <a href="https://banzaicloud.com/blog/kubernetes-oidc/" rel="nofollow noreferrer">banzaicloud.com</a> and <a href="https://particule.io/en/blog/vault-1.21/" rel="nofollow noreferrer">particule.io</a>.</p>
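<p>As an illustration of the first option, a minimal sketch (all names are placeholders) of a pre-created, long-lived token secret bound to the service account that the Kubernetes auth method uses:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth        # placeholder name
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-auth-token
  annotations:
    kubernetes.io/service-account.name: vault-auth
type: kubernetes.io/service-account-token
</code></pre>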
<p>I have a k8s cluster on which I have installed openfaas in the following way:</p> <pre class="lang-yaml prettyprint-override"><code>helm repo add openfaas https://openfaas.github.io/faas-netes/ helm repo update kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml helm upgrade openfaas --install openfaas/openfaas \ --namespace openfaas \ --set generateBasicAuth=true \ --set serviceType=LoadBalancer \ --set clusterRole=true \ --set functionNamespace=openfaas-fn </code></pre> <p>Now, I have the following <code>stack.yml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>version: 1.0 provider: name: openfaas gateway: http://localhost:31112 functions: my-function: lang: csharp handler: ./MyFunction image: my-function:my-tag labels: com.openfaas.scale.min: 1 com.openfaas.scale.max: 1 com.openfaas.scale.factor: 0 </code></pre> <p>The deployed function is then decorated with the above mentioned labels, which I found in the <a href="https://docs.openfaas.com/architecture/autoscaling/#minmax-replicas" rel="nofollow noreferrer">openfaas documentation</a>. However, if I look at the replica set controlling the function's pod, I see it is adorned with the following annotation:</p> <pre><code>deployment.kubernetes.io/max-replicas=2 </code></pre> <p>What is the effect of this latter annotation on the function's replica set over the actual function's scaling? What would happen if I set</p> <pre><code>com.openfaas.scale.max: 3 </code></pre> <p>as my function's label?</p> <p>I would like to make sure to really have control over my function's horizontal scaling. How should I proceed?</p>
<p>OpenFaaS is equipped with its own autoscaler and alert manager:</p> <blockquote> <p>OpenFaaS ships with a single auto-scaling rule defined in the mounted configuration file for AlertManager. AlertManager reads usage (requests per second) metrics from Prometheus in order to know when to fire an alert to the API Gateway.</p> </blockquote> <p>After some reading I found out that the OpenFaaS autoscaler/alertmanager is more focused on API hit rates, whereas the Kubernetes HPA is more focused on CPU and memory usage, so it all depends on what exactly you need.</p> <p>So the two annotations come from two different mechanisms. The <code>deployment.kubernetes.io/max-replicas=2</code> annotation is written onto the replica set by the Kubernetes Deployment controller during rolling updates (it is the desired replicas plus maxSurge) and has no effect on autoscaling, while <code>com.openfaas.scale.max: 1</code> is read by the OpenFaaS autoscaler. Setting <code>com.openfaas.scale.max: 3</code> would therefore only change the ceiling the OpenFaaS autoscaler is allowed to scale to.</p> <p>OpenFaaS has a great <a href="https://docs.openfaas.com/tutorials/kubernetes-hpa/" rel="nofollow noreferrer">example</a> of how you can use the HPA instead of the built-in scaler. You can also use custom Prometheus metrics with HPA as described <a href="https://docs.openfaas.com/tutorials/kubernetes-hpa-custom-metrics/" rel="nofollow noreferrer">here</a>.</p>
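<p>As a minimal sketch of that HPA approach (the function name and namespace are taken from your example, the thresholds are arbitrary), you could leave <code>com.openfaas.scale.factor: 0</code> in place to keep the OpenFaaS scaler out of the way and attach an HPA to the function's Deployment:</p> <pre><code># OpenFaaS creates a Deployment named after the function in openfaas-fn
kubectl autoscale deployment my-function \
    --namespace openfaas-fn \
    --cpu-percent=50 --min=1 --max=3
</code></pre>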
<p>What is usually preferred in Kubernetes - having a one pod per node configuration, or multiple pods per node?</p> <p>From a performance standpoint, what are the benefits of having multiple pods per node, if there is an overhead in having multiple pods living on the same node?</p> <p>From a performance standpoint, wouldn't it be better to have a single pod per node?</p>
<p>Running only a single pod per node has its cons as well. For example, each node will need its own &quot;support&quot; pods such as metrics, logging and network agents plus other system pods, and these will most likely not have their resources fully utilized. In terms of performance this means that picking the right node-size-to-pod-count ratio can give you the same performance at a lower cost than a single pod per node.</p> <p>On the contrary, running too many pods on a massive node can exhaust those resources and cause gaps in metrics or logs, lost packets, OOM errors, etc.</p> <p>Finally, when we also consider autoscaling, scaling up a couple more pods on existing nodes will be a lot more responsive than spinning up a new node for each pod.</p>
<p>Each node runs the same set of pods. I am using the Istio ingress gateway with a NodePort. I need traffic that enters the NodePort to be routed to pods without leaving the node. I am unable to run <code>istio-ingressgateway</code> on each node to do that. Is it possible for each node to route its own traffic?</p> <p>Bare-metal, k8s 1.19.4, Istio 1.8</p>
<h2>Issue</h2> <p>As @Jonas mentioned in comments</p> <blockquote> <p>The problem is that there is just one istio-ingressgateway pod on node1 and all the traffic from node2 have to come to node1</p> </blockquote> <h2>Solution</h2> <p>You can use <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources" rel="nofollow noreferrer">kubectl scale</a> to scale your ingress gateway replicas. The below command will create 3 ingress gateway pods instead of just one.</p> <pre><code>kubectl scale --replicas=3 deployment/istio-ingressgateway -n istio-system </code></pre> <hr /> <p>Additionally, you can set this up with the <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec" rel="nofollow noreferrer">istio operator</a> replicaCount value; see the sketch below.</p> <p>Note that if you run in the <strong>cloud</strong> there might be an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">hpa</a> configured, and it might immediately scale the pods back to their previous count. There is a github <a href="https://github.com/istio/istio/issues/13588" rel="nofollow noreferrer">issue</a> about that. You can also set the hpa min and max <a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#HorizontalPodAutoscalerSpec" rel="nofollow noreferrer">replicas</a> with istio.</p>
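<p>A hedged sketch of the operator variant (field names as in the KubernetesResourcesSpec linked above; the names and replica numbers are just examples):</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        replicaCount: 3
        # hpaSpec lives in the same k8s block if you prefer autoscaling bounds
        hpaSpec:
          minReplicas: 3
          maxReplicas: 5
</code></pre>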
<p>Not able to resolve an API hosted as a ClusterIP service on Minikube when calling from the React JS frontend.</p> <p>The basic architecture of my application is as follows React --> .NET core API</p> <p>Both these components are hosted as ClusterIP services. I have created an ingress service with http paths pointing to React component and the .NET core API.</p> <p>However when I try calling it from the browser, react application renders, but the call to the API fails with net::ERR_NAME_NOT_RESOLVED</p> <p>Below are the .yml files for</p> <hr> <h2>1. React application</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend-clusterip spec: type: ClusterIP ports: - port: 59000 targetPort: 3000 selector: app: frontend </code></pre> <hr> <h2>2. .NET core API</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: backend-svc-nodeport spec: type: ClusterIP selector: app: backend-svc ports: - port: 5901 targetPort: 59001 </code></pre> <hr> <h2>3. ingress service</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /?(.*) backend: serviceName: frontend-clusterip servicePort: 59000 - path: /api/?(.*) backend: serviceName: backend-svc-nodeport servicePort: 5901 </code></pre> <hr> <h2>4. frontend deployment</h2> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: selector: matchLabels: app: frontend replicas: 1 template: metadata: labels: app: frontend spec: containers: - name: frontend image: upendra409/tasks_tasks.frontend ports: - containerPort: 3000 env: - name: "REACT_APP_ENVIRONMENT" value: "Kubernetes" - name: "REACT_APP_BACKEND" value: "http://backend-svc-nodeport" - name: "REACT_APP_BACKENDPORT" value: "5901" </code></pre> <hr> <p>This is the error I get in the browser:</p> <pre><code>xhr.js:166 GET http://backend-svc-nodeport:5901/api/tasks net::ERR_NAME_NOT_RESOLVED </code></pre> <p>I installed curl in the frontend container to get in the frontend pod to try to connect the backend API using the above URL, but the command doesn't work</p> <pre><code>C:\test\tasks [develop ≡ +1 ~6 -0 !]&gt; kubectl exec -it frontend-655776bc6d-nlj7z --curl http://backend-svc-nodeport:5901/api/tasks Error: unknown flag: --curl </code></pre>
<p>You are getting this error from local machine because <code>ClusterIP</code> service is wrong type for accessing from outside of the cluster. As mentioned in kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">documentation</a> <code>ClusterIP</code> is only reachable from within the cluster.</p> <blockquote> <h2>Publishing Services (ServiceTypes)<a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer"></a></h2> <p>For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that’s outside of your cluster.</p> <p>Kubernetes <code>ServiceTypes</code> allow you to specify what kind of Service you want. The default is <code>ClusterIP</code>.</p> <p><code>Type</code> values and their behaviors are:</p> <ul> <li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>: Exposes the Service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer"><code>ExternalName</code></a>: Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record</p> <p>with its value. No proxying of any kind is set up.</p> <blockquote> <p><strong>Note:</strong> You need CoreDNS version 1.7 or higher to use the <code>ExternalName</code> type.</p> </blockquote></li> </ul> </blockquote> <p><strong>I suggest using <code>NodePort</code> or <code>LoadBalancer</code> service type instead.</strong></p> <p>Refer to above documentation links for examples.</p>
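<p>For example, a minimal sketch of exposing your existing backend as a NodePort (the nodePort number is arbitrary, it just has to be inside the default 30000-32767 range) could look like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: NodePort
  selector:
    app: backend-svc
  ports:
  - port: 5901
    targetPort: 59001
    nodePort: 30591
</code></pre> <p>On minikube you can then get a reachable URL with <code>minikube service backend-svc-nodeport --url</code> and point <code>REACT_APP_BACKEND</code> at that address, since the browser itself cannot resolve in-cluster service names.</p>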
<p>I want the traffic thar comes to my cluster as HTTP to be redirected to HTTPS. However, the cluster receives requests from hundreds of domains that change dinamically (creating new certs with cert-manager). So I want the redirect to happen only when the URI doesn't have the prefix <code>/.well-known/acme-challenge</code></p> <p>I am using a gateway that listens to 443 and other gateway that listens to 80 and send the HTTP to an acme-solver virtual service.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: default-gateway spec: selector: istio: ingressgateway servers: - hosts: - site1.com port: name: https-site1.com number: 443 protocol: HTTPS tls: credentialName: cert-site1.com mode: SIMPLE - hosts: - site2.com port: name: https-site2.com number: 443 protocol: HTTPS tls: credentialName: cert-site2.com mode: SIMPLE ... --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: acme-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - hosts: - '*' port: name: http number: 80 protocol: HTTP --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: acme-solver namespace: istio-system spec: hosts: - &quot;*&quot; gateways: - acme-gateway http: - match: - uri: prefix: /.well-known/acme-challenge route: - destination: host: acme-solver.istio-system.svc.cluster.local port: number: 8089 - redirect: authority: # Should redirect to https://$HOST, but I don't know how to get the $HOST </code></pre> <p>How can I do that using istio?</p>
<p>Looking into the documentation:</p> <ul> <li>The HTTP-01 challenge can only be done on port 80. Allowing clients to specify arbitrary ports would make the challenge less secure, and so it is not allowed by the ACME standard.</li> </ul> <p>As a workaround:</p> <ol> <li>Please consider using DNS-01 challenge:</li> </ol> <p>a) it only makes sense to use DNS-01 challenges if your DNS provider has an API you can use to automate <a href="https://community.letsencrypt.org/t/dns-providers-who-easily-integrate-with-lets-encrypt-dns-validation/86438" rel="nofollow noreferrer">updates</a>.</p> <p>b) using this approach you should consider additional security risk as stated in the <a href="https://letsencrypt.org/docs/challenge-types/#dns-01-challenge" rel="nofollow noreferrer">docs</a>:</p> <p>Pros: You can use this challenge to issue certificates containing wildcard domain names. It works well even if you have multiple web servers.</p> <p>Cons: *<em><strong>Keeping API credentials on your web server is risky.</strong></em> Your DNS provider might not offer an API. Your DNS API may not provide information on propagation times.</p> <p>As mentioned <a href="https://www.eff.org/deeplinks/2018/02/technical-deep-dive-securing-automation-acme-dns-challenge-validation" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies.</p> </blockquote> <hr /> <ol start="2"> <li>I would suggest also another approach to use some simple nginx pod which would redirect all http traffic to https.</li> </ol> <p>There is a tutorial on <a href="https://medium.com/@gregoire.waymel/istio-cert-manager-lets-encrypt-demystified-c1cbed011d67" rel="nofollow noreferrer">medium</a> with nginx configuration you might try to use.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config data: nginx.conf: | server { listen 80 default_server; server_name _; return 301 https://$host$request_uri; } --- apiVersion: v1 kind: Service metadata: name: redirect labels: app: redirect spec: ports: - port: 80 name: http selector: app: redirect --- apiVersion: apps/v1 kind: Deployment metadata: name: redirect spec: replicas: 1 selector: matchLabels: app: redirect template: metadata: labels: app: redirect spec: containers: - name: redirect image: nginx:stable resources: requests: cpu: &quot;100m&quot; imagePullPolicy: IfNotPresent ports: - containerPort: 80 volumeMounts: - mountPath: /etc/nginx/conf.d name: config volumes: - name: config configMap: name: nginx-config </code></pre> <p>Additionally you would have to change your virtual service to send all the traffic except <code>prefix: /.well-known/acme-challenge</code> to nginx.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: acme-solver namespace: istio-system spec: hosts: - &quot;*&quot; gateways: - acme-gateway http: - name: &quot;acmesolver&quot; match: - uri: prefix: /.well-known/acme-challenge route: - destination: host: reviews.prod.svc.cluster.local port: number: 8089 - name: 
&quot;nginx&quot; route: - destination: host: nginx </code></pre>
<p><strong>What I am trying to achieve:</strong> block all traffic to a service, containing the code to handle this within the same namespace as the service.</p> <p><strong>Why:</strong> this is the first step in &quot;locking down&quot; a specific service to specific IPs/CIDRs</p> <p>I have a primary ingress GW called <code>istio-ingressgateway</code> which works for services.</p> <pre class="lang-yaml prettyprint-override"><code>$ kubectl describe gw istio-ingressgateway -n istio-system Name: istio-ingressgateway Namespace: istio-system Labels: operator.istio.io/component=IngressGateways operator.istio.io/managed=Reconcile operator.istio.io/version=1.5.5 release=istio Annotations: API Version: networking.istio.io/v1beta1 Kind: Gateway Metadata: Creation Timestamp: 2020-08-28T15:45:10Z Generation: 1 Resource Version: 95438963 Self Link: /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-ingressgateway UID: ae5dd2d0-44a3-4c2b-a7ba-4b29c26fa0b9 Spec: Selector: App: istio-ingressgateway Istio: ingressgateway Servers: Hosts: * Port: Name: http Number: 80 Protocol: HTTP Events: &lt;none&gt; </code></pre> <p>I also have another &quot;primary&quot; GW, the K8s ingress GW to support TLS (thought I'd include this, to be as explicit as possible)</p> <pre class="lang-yaml prettyprint-override"><code>k describe gw istio-autogenerated-k8s-ingress -n istio-system Name: istio-autogenerated-k8s-ingress Namespace: istio-system Labels: app=istio-ingressgateway istio=ingressgateway operator.istio.io/component=IngressGateways operator.istio.io/managed=Reconcile operator.istio.io/version=1.5.5 release=istio Annotations: API Version: networking.istio.io/v1beta1 Kind: Gateway Metadata: Creation Timestamp: 2020-08-28T15:45:56Z Generation: 2 Resource Version: 95439499 Self Link: /apis/networking.istio.io/v1beta1/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress UID: edd46c17-9975-4089-95ff-a2414d40954a Spec: Selector: Istio: ingressgateway Servers: Hosts: * Port: Name: http Number: 80 Protocol: HTTP Hosts: * Port: Name: https-default Number: 443 Protocol: HTTPS Tls: Credential Name: ingress-cert Mode: SIMPLE Private Key: sds Server Certificate: sds Events: &lt;none&gt; </code></pre> <p>I want to be able to create another GW, in the namespace <code>x</code> and have an authorization policy attached to that GW. If I create the authorization policy in the <code>istio-system</code> namespace, then it comes back with <code>RBAC: access denied</code> which is great - but that is for all services using the primary GW.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: block-all namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway action: DENY rules: - from: - source: ipBlocks: [&quot;0.0.0.0/0&quot;] </code></pre> <p>What I currently have does not work. Any pointers would be highly appreciated. 
The following are all created under the <code>x</code> namespace when applying the <code>kubectl apply -f files.yaml -n x</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: annotations: app: x-ingress name: x-gw labels: app: x-ingress spec: selector: istio: ingressgateway servers: - hosts: - x.y.com port: name: http number: 80 protocol: HTTP tls: httpsRedirect: true - hosts: - x.y.com port: name: https number: 443 protocol: HTTPS tls: mode: SIMPLE privateKey: sds serverCertificate: sds credentialName: ingress-cert --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: x labels: app: x spec: hosts: - x.y.com gateways: - x-gw http: - route: - destination: host: x --- apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: x-ingress-policy spec: selector: matchLabels: app: x-ingress action: DENY rules: - from: - source: ipBlocks: [&quot;0.0.0.0/0&quot;] --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: x labels: app: x spec: hosts: - x.y.com gateways: - x-gw http: - route: - destination: host: x </code></pre> <p>The above should be blocking all traffic to the GW, as it matches on the CIDR range of <code>0.0.0.0/0</code></p> <p>I am entirely misunderstanding the concept of GWs/AuthorizationPolicies or have I missed something?</p> <p><strong>Edit</strong> I ended up creating another GW which had the IP restriction block on that, as classic load balancers on AWS do not support IP forwarding.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: istiocontrolplane spec: profile: demo components: ingressGateways: - name: istio-ingressgateway enabled: true - name: admin-ingressgateway enabled: true label: istio: admin-ingressgateway k8s: serviceAnnotations: service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb&quot; --- apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all-admin namespace: istio-system spec: selector: matchLabels: istio: admin-ingressgateway action: ALLOW rules: - from: - source: ipBlocks: [&quot;176.252.114.59/32&quot;] </code></pre> <pre class="lang-sh prettyprint-override"><code>kubectl patch svc istio-ingressgateway -n istio-system -p '{&quot;spec&quot;:{&quot;externalTrafficPolicy&quot;:&quot;Local&quot;}}' </code></pre> <p>I then used that gateway in my workload that I wanted to lock down.</p>
<p>As far as I know you should rather use <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="noreferrer">AuthorizationPolicy</a> in 3 ways</p> <ul> <li>on ingress gateway</li> <li>on namespace</li> <li>on specific service</li> </ul> <p>I have tried to make it work on a specific gateway with annotations like you did, but I couldn't make it work for me.</p> <p>e.g.</p> <p>the following authorization policy denies all requests to workloads in namespace x.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: x spec: {} </code></pre> <p>the following authorization policy denies all requests on ingress gateway.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: istio-system spec: selector: matchLabels: app: istio-ingressgateway </code></pre> <p>the following authorization policy denies all requests on <a href="https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml" rel="noreferrer">httpbin</a> in x namespace.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-service-x namespace: x spec: selector: matchLabels: app: httpbin </code></pre> <hr /> <p>Let's say you deny all requests on x namespace and allow only get requests for httpbin service.</p> <p>Then you would use this AuthorizationPolicy to deny all requests</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: x spec: {} </code></pre> <p>And this AuthorizationPolicy to allow only get requests.</p> <pre><code>apiVersion: &quot;security.istio.io/v1beta1&quot; kind: &quot;AuthorizationPolicy&quot; metadata: name: &quot;x-viewer&quot; namespace: x spec: selector: matchLabels: app: httpbin rules: - to: - operation: methods: [&quot;GET&quot;] </code></pre> <hr /> <p>And there is the main issue ,which is ipBlocks. There is related <a href="https://github.com/istio/istio/issues/22341" rel="noreferrer">github issue</a> about that.</p> <p>As mentioned here by @incfly</p> <blockquote> <p>I guess the reason why it’s stop working when in non ingress pod is because the sourceIP attribute will not be the real client IP then.</p> <p>According to <a href="https://github.com/istio/istio/issues/22341" rel="noreferrer">https://github.com/istio/istio/issues/22341</a> 7, (not done yet) this aims at providing better support without setting k8s externalTrafficPolicy to local, and supports CIDR range as well.</p> </blockquote> <hr /> <p>I have tried this example from <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/" rel="noreferrer">istio documentation</a> to make it work, but it wasn't working for me, even if I changed <code>externalTrafficPolicy</code>. 
Then a workaround with <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="noreferrer">envoyfilter</a> came from above istio discuss thread.</p> <p>Answer provided by @hleal18 <a href="https://discuss.istio.io/t/authorization-policy-ip-allow-deny-not-working-on-services-different-than-ingress-gateway/7845/4?u=pikapika" rel="noreferrer">here</a>.</p> <blockquote> <p>Got and example working successfully using EnvoyFilters, specifically with remote_ip condition applied on httbin.</p> <p>Sharing the manifest for reference.</p> </blockquote> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: httpbin namespace: foo spec: workloadSelector: labels: app: httpbin configPatches: - applyTo: HTTP_FILTER match: context: SIDECAR_INBOUND listener: filterChain: filter: name: &quot;envoy.http_connection_manager&quot; subFilter: name: &quot;envoy.router&quot; patch: operation: INSERT_BEFORE value: name: envoy.filters.http.rbac config: rules: action: ALLOW policies: &quot;ip-premissions&quot;: permissions: - any: true principals: - remote_ip: address_prefix: xxx.xxx.xx.xx prefix_len: 32 </code></pre> <hr /> <p>I have tried above envoy filter on my test cluster and as far as I can see it's working.</p> <p>Take a look at below steps I made.</p> <p>1.I have changed the externalTrafficPolicy with</p> <pre><code>kubectl patch svc istio-ingressgateway -n istio-system -p '{&quot;spec&quot;:{&quot;externalTrafficPolicy&quot;:&quot;Local&quot;}}' </code></pre> <p>2.I have created namespace x with istio-injection enabled and deployed httpbin here.</p> <pre><code>kubectl create namespace x kubectl label namespace x istio-injection=enabled kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml -n x kubectl apply -f https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml -n x </code></pre> <p>3.I have created envoyfilter</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: httpbin namespace: x spec: workloadSelector: labels: app: httpbin configPatches: - applyTo: HTTP_FILTER match: context: SIDECAR_INBOUND listener: filterChain: filter: name: &quot;envoy.http_connection_manager&quot; subFilter: name: &quot;envoy.router&quot; patch: operation: INSERT_BEFORE value: name: envoy.filters.http.rbac config: rules: action: ALLOW policies: &quot;ip-premissions&quot;: permissions: - any: true principals: - remote_ip: address_prefix: xx.xx.xx.xx prefix_len: 32 </code></pre> <p><strong>address_prefix</strong> is the <strong>CLIENT_IP</strong>, there are commands I have used to get it.</p> <pre><code>export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==&quot;http2&quot;)].port}') curl &quot;$INGRESS_HOST&quot;:&quot;$INGRESS_PORT&quot;/headers -s -o /dev/null -w &quot;%{http_code}\n&quot; CLIENT_IP=$(curl &quot;$INGRESS_HOST&quot;:&quot;$INGRESS_PORT&quot;/ip -s | grep &quot;origin&quot; | cut -d'&quot;' -f 4) &amp;&amp; echo &quot;$CLIENT_IP&quot; </code></pre> <p>4.I have test it with curl and my browser.</p> <pre><code>curl &quot;$INGRESS_HOST&quot;:&quot;$INGRESS_PORT&quot;/headers -s -o /dev/null -w &quot;%{http_code}\n&quot; 200 </code></pre> <p><a href="https://i.stack.imgur.com/F18z9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/F18z9.png" alt="enter 
image description here" /></a></p> <hr /> <p>Let me know if you have any more questions, I might be able to help.</p>
<p>How would I display the available schedulers in my cluster in order to use a non-default one via the <strong>schedulerName</strong> field?</p> <p>Any link to a document describing how to &quot;install&quot; and use a custom scheduler is highly appreciated :)</p> <p>Thx in advance</p>
<p>Schedulers can be found among your <code>kube-system</code> pods. You can then filter the output to your needs with <code>kube-scheduler</code> as the search key:</p> <pre class="lang-sh prettyprint-override"><code>➜ ~ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-6955765f44-9wfkp 0/1 Completed 15 264d coredns-6955765f44-jmz9j 1/1 Running 16 264d etcd-acid-fuji 1/1 Running 17 264d kube-apiserver-acid-fuji 1/1 Running 6 36d kube-controller-manager-acid-fuji 1/1 Running 21 264d kube-proxy-hs2qb 1/1 Running 0 177d kube-scheduler-acid-fuji 1/1 Running 21 264d </code></pre> <p>You can retrieve the yaml file with:</p> <pre class="lang-sh prettyprint-override"><code>➜ ~ kubectl get pods -n kube-system &lt;scheduler pod name&gt; -oyaml </code></pre> <p>If you bootstrapped your cluster with Kubeadm you may also find the yaml files in the <code>/etc/kubernetes/manifests</code>:</p> <pre class="lang-yaml prettyprint-override"><code>➜ manifests sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml --- apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: component: kube-scheduler tier: control-plane name: kube-scheduler namespace: kube-system spec: containers: - command: - kube-scheduler - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf - --bind-address=127.0.0.1 - --kubeconfig=/etc/kubernetes/scheduler.conf - --leader-elect=true image: k8s.gcr.io/kube-scheduler:v1.17.6 imagePullPolicy: IfNotPresent --------- </code></pre> <p>The location for minikube is similar but you do have to login in the minikube's virtual machine first with <code>minikube ssh</code>.</p> <p>For more reading please have a look how to <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">configure multiple schedulers</a> and <a href="https://banzaicloud.com/blog/k8s-custom-scheduler/" rel="nofollow noreferrer">how to write custom schedulers.</a></p>
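<p>Once your custom scheduler is running, pointing a pod at it is just a matter of the <code>schedulerName</code> field. A minimal sketch (the scheduler name here is a placeholder for whatever you deployed):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-custom-scheduled
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: nginx
    image: nginx:stable
</code></pre>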
<p>We have the following configuration for our service that's deployed to EKS but it causes downtime for about 120s whenever we make a deployment.</p> <p>I can successfully make requests to the new pod when I port forward to it directly, so the pod itself seems fine. It seems to be either the AWS NLB that's not routing the traffic or something network related but I'm not sure, and I don't know where to debug further for this.</p> <p>I tried a few things to no avail: added a <code>readinessProbe</code>, tried increasing the <code>initialDelaySeconds</code> to <code>120</code>, tried switching to an <code>IP</code> ELB target, rather than an <code>instance</code> ELB target type, tried reducing the NLB's health check interval but it's not actually being applied and remains as 30s.</p> <p>Any help would be greatly appreciated!</p> <pre><code>--- # Autoscaler for the frontend apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: my-frontend spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: my-frontend minReplicas: 3 maxReplicas: 8 targetCPUUtilizationPercentage: 60 --- apiVersion: apps/v1 kind: Deployment metadata: name: my-frontend labels: app: my-frontend spec: replicas: 3 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate selector: matchLabels: app: my-frontend template: metadata: labels: app: my-frontend spec: containers: - name: my-frontend image: ${DOCKER_IMAGE} ports: - containerPort: 3001 name: web resources: requests: cpu: &quot;300m&quot; memory: &quot;256Mi&quot; livenessProbe: httpGet: scheme: HTTP path: /v1/ping port: 3001 initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 10 readinessProbe: httpGet: scheme: HTTP path: /v1/ping port: 3001 initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 10 restartPolicy: Always --- apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-type: nlb service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${SSL_CERTIFICATE_ARN} service.beta.kubernetes.io/aws-load-balancer-ssl-ports: &quot;https&quot; service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: &quot;true&quot; service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: &quot;10&quot; service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: &quot;true&quot; service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: &quot;60&quot; name: my-frontend labels: service: my-frontend spec: ports: - name: http port: 80 targetPort: 3001 - name: https port: 443 targetPort: 3001 externalTrafficPolicy: Local selector: app: my-frontend type: LoadBalancer </code></pre>
<p>This is most likely caused by the NLB not reacting quickly enough to the target changes, which is directly related to your <code>externalTrafficPolicy</code> setting.</p> <p>If your application does not make any use of the client IP, you can set <code>externalTrafficPolicy</code> to <code>Cluster</code> (note: <code>Cluster</code>, not <code>ClusterIP</code>, which is a Service type) or leave it at the default by removing the field, as sketched below.</p> <p>In case your application requires preserving the client IP, you may use the solution discussed in this <a href="https://github.com/kubernetes/kubernetes/issues/73362#issuecomment-460790924" rel="nofollow noreferrer">github issue</a>, which in short requires you to use a <a href="https://www.ianlewis.org/en/bluegreen-deployments-kubernetes#:%7E:text=Kubernetes%20has%20a%20really%20awesome%20built-in%20feature%20called%20Deployments.&amp;text=With%20blue/green%20deployments%20a,the%20new%20version%20%28green%29." rel="nofollow noreferrer">blue-green deployment</a>.</p>
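<p>A sketch of that change against your Service manifest (everything else stays the same):</p> <pre><code>spec:
  ports:
  - name: http
    port: 80
    targetPort: 3001
  - name: https
    port: 443
    targetPort: 3001
  externalTrafficPolicy: Cluster   # or drop the field entirely; Cluster is the default
  selector:
    app: my-frontend
  type: LoadBalancer
</code></pre>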
<p>I've this configuration on my service mesh:</p> <ul> <li>mTLS globally enabled and meshpolicy default </li> <li>simple-web deployment exposed as clusterip on port 8080</li> <li>http gateway for port 80 and virtualservice routing on my service</li> </ul> <p>Here the gw and vs yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: http-gateway spec: selector: istio: ingressgateway # Specify the ingressgateway created for us servers: - port: number: 80 # Service port to watch name: http-gateway protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: simple-web spec: gateways: - http-gateway hosts: - '*' http: - match: - uri: prefix: /simple-web rewrite: uri: / route: - destination: host: simple-web port: number: 8080 </code></pre> <p>Both vs and gw are in the same namespace. The deployment was created and exposed with these commands:</p> <pre><code>k create deployment --image=yeasy/simple-web:latest simple-web k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web </code></pre> <p>and with k get pods I receive this:</p> <pre><code>pod/simple-web-9ffc59b4b-n9f85 2/2 Running </code></pre> <p>What happens is that from outside, pointing to ingress-gateway load balancer I receive 503 HTTP error. If I try to curl from ingressgateway pod I can reach the simple-web service. Why I can't reach the website with mTLS enabled? What's the correct configuration?</p>
<p>As @suren mentioned in his answer, this issue is not present in istio version 1.3.2, so one solution is to use a newer version.</p> <p>If you choose to upgrade istio to a newer version, please review the <a href="https://istio.io/docs/setup/upgrade/notice/" rel="nofollow noreferrer">1.3 Upgrade Notice</a> and <a href="https://istio.io/docs/setup/upgrade/steps/" rel="nofollow noreferrer">Upgrade Steps</a>, as Istio is still in development and changes drastically with each version.</p> <p>Also, as mentioned in comments by @Manuel Castro, this is most likely the issue addressed in <a href="https://istio.io/docs/ops/traffic-management/deploy-guidelines/#avoid-503-errors-while-reconfiguring-service-routes" rel="nofollow noreferrer">Avoid 503 errors while reconfiguring service routes</a>, and newer versions simply handle it better.</p> <blockquote> <p>Creating both the VirtualServices and DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml is not sufficient because the resources propagate (from the configuration server, i.e., Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.</p> </blockquote> <p>It should be possible to avoid this issue by temporarily disabling <code>mTLS</code> or by using <code>permissive mode</code> during the deployment, as sketched below.</p>
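<p>Since your setup uses the pre-1.5 authentication API (a default meshpolicy), a sketch of temporarily switching to permissive mode during the rollout could look like the following; switch it back to strict once the new configuration has propagated. This is an assumption based on the API of that Istio generation, so adjust it to the exact version you run:</p> <pre><code>apiVersion: &quot;authentication.istio.io/v1alpha1&quot;
kind: &quot;MeshPolicy&quot;
metadata:
  name: &quot;default&quot;
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
</code></pre>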
<p>I've got a problem with roles and authentication in Kubernetes. I created a one-node (one master) cluster on my bare-metal server, and I made this cluster listen on a different IP than the default (with the option &quot;<em>--apiserver-advertise-address=</em> <em>ip address</em>&quot;). But now I basically can do nothing in it, because <em>kubectl</em> does not work. I can't create the pods and services I need. When I created the cluster without changing this IP, it worked. So my question is how to fix this? It is probably an authorization problem, but I can't even create a cluster role or cluster role binding because of errors like this: <em>&quot; error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io is forbidden: User &quot;system:node:e4-1&quot; cannot create resource &quot;clusterrolebindings&quot; in API group &quot;rbac.authorization.k8s.io&quot; at the cluster scope&quot;</em>... Is there any way to &quot;log in&quot; as admin, or something, or is there a way to change something in the config files to fix this?</p>
<p>Based on the flag you mention I assume you are using <code>kubeadm</code> to create your cluster. Most probable cause is that you are using the wrong <code>.conf</code> file. My suspicions are that you are using the <code>kubelet.conf</code> instead of <code>admin.conf</code>.</p> <p>Below you can find an example of the <code>kubeadm init</code> output. It contains steps that you need to follow to start using <code>kubectl</code>:</p> <pre class="lang-none prettyprint-override"><code>Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a Pod network to the cluster. Run &quot;kubectl apply -f [podnetwork].yaml&quot; with one of the options listed at: /docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join &lt;control-plane-host&gt;:&lt;control-plane-port&gt; --token &lt;token&gt; --discovery-token-ca-cert-hash sha256:&lt;hash&gt; </code></pre> <p>As you see one of the commands is to copy <code>admin.conf</code> file into <code>/.kube/config</code> which then <code>kubectl</code> uses to manage cluster.</p>
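<p>Alternatively, if you are running as root, you can point kubectl at the admin kubeconfig just for the current shell session instead of copying it:</p> <pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
</code></pre>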
<p>I want to continuously log from the Kubernetes pod where my application is running to a custom path/file. Is this possible, and how can I do it without some 3rd-party logging processor?</p>
<p>Kubernetes by itself provides only basic logging, as in <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">this</a> tutorial.</p> <p>However, in my attempt I was unable to write any logs from the default nginx container using custom echo commands from the CLI with this technique. Only the pre-configured nginx logs were working.</p> <p>According to the Kubernetes documentation, this can't be done without using a <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">logging driver</a>.</p> <blockquote> <p>While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:</p> <ul> <li>Use a node-level logging agent that runs on every node.</li> <li>Include a dedicated sidecar container for logging in an application pod.</li> <li>Push logs directly to a backend from within an application.</li> </ul> </blockquote> <p>Which basically means using 3rd-party logging processors.</p> <blockquote> <p>Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: <a href="https://kubernetes.io/docs/user-guide/logging/stackdriver" rel="nofollow noreferrer">Stackdriver Logging</a> for use with Google Cloud Platform, and <a href="https://kubernetes.io/docs/user-guide/logging/elasticsearch" rel="nofollow noreferrer">Elasticsearch</a>. You can find more information and instructions in the dedicated documents. Both use <a href="http://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> with custom configuration as an agent on the node.</p> </blockquote> <p>Intercepting stdout and stderr without a logging driver also gave negative results. The simplest solution is to use a logging agent, or the sidecar pattern sketched below if writing to a custom path inside the pod is all you need.</p>
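<p>If all you need is the application writing to a custom path inside the pod, the streaming-sidecar pattern from the logging documentation above is usually enough: the app writes to a file on a shared <code>emptyDir</code> volume and a sidecar tails it to its own stdout. A minimal sketch, where the image, path and commands are placeholders for your application:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-file
spec:
  containers:
  - name: app
    image: busybox
    # placeholder workload that appends to a custom log file
    command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;while true; do echo $(date) app log line &gt;&gt; /var/log/myapp/app.log; sleep 5; done&quot;]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp
  - name: log-tailer
    image: busybox
    # stream the custom log file so kubectl logs / node agents can pick it up
    command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;tail -n+1 -F /var/log/myapp/app.log&quot;]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp
  volumes:
  - name: app-logs
    emptyDir: {}
</code></pre>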
<p>I have installed ISTIO with the below configuration</p> <pre><code>cat &lt;&lt; EOF | kubectl apply -f - apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: istio-control-plane spec: # Use the default profile as the base # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/ profile: default # Enable the addons that we will want to use addonComponents: grafana: enabled: true prometheus: enabled: true tracing: enabled: true kiali: enabled: true values: global: # Ensure that the Istio pods are only scheduled to run on Linux nodes defaultNodeSelector: beta.kubernetes.io/os: linux kiali: dashboard: auth: strategy: anonymous components: egressGateways: - name: istio-egressgateway enabled: true meshConfig: accessLogFile: /dev/stdout outboundTrafficPolicy: mode: REGISTRY_ONLY EOF </code></pre> <p>and have configured the Egress Gateway, Destination Rule &amp; Virtual Service as shown below</p> <pre><code>cat &lt;&lt; EOF | kubectl apply -f - apiVersion: v1 kind: Namespace metadata: name: akv2k8s-test labels: istio-injection: enabled azure-key-vault-env-injection: enabled --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: cnn namespace: akv2k8s-test spec: hosts: - edition.cnn.com ports: - number: 80 name: http-port protocol: HTTP - number: 443 name: https protocol: HTTPS resolution: DNS --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway namespace: akv2k8s-test spec: selector: istio: egressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - edition.cnn.com --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: egressgateway-for-cnn namespace: akv2k8s-test spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: cnn --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: direct-cnn-through-egress-gateway namespace: akv2k8s-test spec: hosts: - edition.cnn.com gateways: - istio-egressgateway - mesh http: - match: - gateways: - mesh port: 80 route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: cnn port: number: 80 weight: 100 - match: - gateways: - istio-egressgateway port: 80 route: - destination: host: edition.cnn.com port: number: 80 weight: 100 EOF </code></pre> <p>it is working perfectly fine</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -n akv2k8s-test export SOURCE_POD=$(kubectl get pod -l app=sleep -n akv2k8s-test -o jsonpath={.items..metadata.name}) kubectl exec &quot;$SOURCE_POD&quot; -n akv2k8s-test -c sleep -- curl -sL -o /dev/null -D - http://edition.cnn.com/politics kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail </code></pre> <p><a href="https://i.stack.imgur.com/BvXYS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BvXYS.png" alt="enter image description here" /></a></p> <p>however I could not understand the control flow. For example, the below diagram show the control flow of Ingress Gateway</p> <p><a href="https://i.stack.imgur.com/1zCpz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1zCpz.png" alt="enter image description here" /></a></p> <p>Can you please let me know the control flow of ISTIO Egress? or What happens when the deployed application try to access the external service? 
Does it follow the flow: POD -&gt; Proxy -&gt; Virtual Service -&gt; Destination Rule -&gt; Gateway -&gt; External Service?</p>
<p>Yes, your guess was right on point.</p> <p>The flow is POD &gt; envoy proxy &gt; Gateway &gt; External Service.</p> <p>When traffic is sent out from the application container, it is intercepted by the envoy proxy sidecar and the envoy filter chain is applied.</p> <p>The envoy filter chain is generated from the <code>VirtualService</code> and <code>DestinationRule</code> objects; it can be inspected using the <a href="https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration" rel="nofollow noreferrer"><code>istioctl proxy-config</code></a> command, for example as shown below.</p>
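<p>For example, with the sleep pod from your setup, the generated egress configuration can be inspected roughly like this (using the same <code>$SOURCE_POD</code> variable as in your snippet):</p> <pre><code># routes the sidecar generated from the VirtualService
istioctl proxy-config routes &quot;$SOURCE_POD&quot; -n akv2k8s-test

# clusters generated from the ServiceEntry / DestinationRule
istioctl proxy-config clusters &quot;$SOURCE_POD&quot; -n akv2k8s-test | grep edition.cnn.com
</code></pre>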
<p>I have a kubernetes cluster set up on AWS using kops.</p> <p>Right now, the server url is <a href="https://old-server-url.com" rel="nofollow noreferrer">https://old-server-url.com</a>. This url is configured on Route53, pointing to the public IP of the cluster's master instance.</p> <p>I want to change this to <a href="https://new-server-url.com" rel="nofollow noreferrer">https://new-server-url.com</a>. I configured the new url on Route53 with the same master IP, but it just opens the kubernetes dashboard with the new URL. I can't access the kubernetes API server via kubectl with this url.</p> <p>This is the error I get when I change the kubeconfig file to the new url and run the kubectl get pods command.</p> <p><code>"Unable to connect to the server: x509: certificate is valid for internal.old-server-url.com, old-server-url.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not new-server-url.com"</code></p> <p>What configuration do I have to change so that only the server url of the kubernetes cluster is changed, and I can access it via kube config / kubectl?</p> <p>Update: I can access my cluster after using the --insecure-skip-tls-verify flag along with the kubectl command. But this is insecure. I would like to know how I can change my certificates in a kops-provisioned cluster with minimal side effects for this scenario.</p>
<p>To just resolve the error:</p> <pre><code>"Unable to connect to the server: x509: certificate is valid for internal.old-server-url.com, old-server-url.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not new-server-url.com" </code></pre> <p>You can use the --insecure-skip-tls-verify flag with the kubectl command, as explained here: <a href="https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master">Invalid x509 certificate for kubernetes master</a> (example below).</p> <p>This is not recommended for production environments.</p>
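<p>For example (the server URL is a placeholder for your new endpoint):</p> <pre><code>kubectl --server=https://new-server-url.com \
        --insecure-skip-tls-verify=true \
        get pods
</code></pre>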
<p>I am trying to add a liveness and readiness probe for ZooKeeper using the bitnami/zookeeper image, but the pod creation is failing. Please let me know what values need to be added in the liveness and readiness probes.</p> <p>Below are the values that I have tried with.</p> <pre><code>livenessProbe: enabled: true initialDelaySeconds: 120 periodSeconds: 30 timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 readinessProbe: enabled: true initialDelaySeconds: 120 periodSeconds: 30 timeoutSeconds: 5 failureThreshold: 6 successThreshold: 1 </code></pre> <p>I am getting the below error.</p> <p>[spec.containers[0].livenessProbe: Required value: must specify a handler type, spec.containers[0].readinessProbe: Required value: must specify a handler type]</p>
<p>Kubernetes probes such as the livenessProbe and readinessProbe require a <code>handler</code> which is used for the probe. Kubernetes supports multiple handler types, e.g. an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">HTTP request probe</a> or a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness command probe</a>. There are additional handler types, e.g. <code>TCP probes</code>. You can find all supported handler types in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">documentation</a>.<br> Please note that the handler configuration is required and there is no default handler type.</p>
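<p>A hedged sketch of what that could look like for ZooKeeper with a TCP handler (2181 is assumed to be the client port, and the timings are copied from your values). Note that the <code>enabled: true</code> key in your snippet looks like Helm chart values syntax (e.g. from the bitnami chart) rather than a Kubernetes probe field, so it must not appear in a raw pod spec:</p> <pre><code>livenessProbe:
  tcpSocket:
    port: 2181
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  tcpSocket:
    port: 2181
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
</code></pre>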
<p>I have some external services running on AWS with Lambda and API Gateway. I'm using Istio and I've configured a service entry to the API gateway and it works.</p> <p>I'm wondering if I can use an envoy filter in Istio to invoke the Lambda function directly, like on Gloo (<a href="https://docs.solo.io/gloo/1.0.0/advanced_configuration/fds_mode/" rel="nofollow noreferrer">https://docs.solo.io/gloo/1.0.0/advanced_configuration/fds_mode/</a>), so I'll be able to remove one hop.</p> <p>I see that in the envoy documentation it's still experimental, but I would like to know if I can use an envoy filter in Istio to achieve it? <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/aws_lambda_filter" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/aws_lambda_filter</a></p>
<p>Based on the fact that <a href="https://istio.io/" rel="nofollow noreferrer">istio</a> is built on <a href="https://www.envoyproxy.io/" rel="nofollow noreferrer">envoy</a>, I would say there shouldn't be any problem with that being configured in an <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">envoy filter</a>.</p> <blockquote> <p>Istio uses an extended version of the <a href="https://envoyproxy.github.io/envoy/" rel="nofollow noreferrer">Envoy</a> proxy. Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh. Envoy proxies are the only Istio components that interact with data plane traffic.</p> </blockquote> <p>I have checked the <a href="https://raw.githubusercontent.com/istio/istio/release-1.7/samples/httpbin/httpbin.yaml" rel="nofollow noreferrer">httpbin example</a> with <code>istioctl proxy-config bootstrap</code>, and <code>envoy.filters.http.aws_lambda</code> is there, so you should be able to configure that.</p> <pre><code>istioctl proxy-config bootstrap httpbin-779c54bf49-9m9sz | grep &quot;envoy.filters.http.aws_lambda&quot; &quot;name&quot;: &quot;envoy.filters.http.aws_lambda&quot;, </code></pre> <hr /> <p>Additionally, you mentioned that you can do that on gloo, so maybe you could try to connect both istio and gloo together, like mentioned <a href="https://istio.io/v1.5/blog/2020/istio-sds-gloo/" rel="nofollow noreferrer">here</a>, and configure that with gloo?</p> <p>There is a <a href="https://docs.solo.io/gloo/latest/guides/integrations/service_mesh/gloo_istio_mtls/#istio-15x" rel="nofollow noreferrer">tutorial</a> about that in the gloo documentation.</p>
<p>I am having trouble understanding what traffic is controlled by ingress and egress istio gateways.</p> <ol> <li>For example, an application sets up listeners on an MQ queue. Is this an example of ingress or egress traffic? I thought that where the application initiates the connection, then this traffic will be directed to the egress gateway. Conversely, if the application is an endpoint, then traffic must be routed through the ingress gateway.</li> <li>Let's say application A is an external service to application B. Application A makes a rest request to B. Should this request be routed through ingress? Now application B makes a rest request to A. Should traffic go through egress now?</li> </ol>
<p>Let's start with some theory. I have found few sources which describes how istio ingress gateway and egress gateway works.</p> <h2><a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/" rel="noreferrer">Istio documentation</a></h2> <blockquote> <p>Istio uses ingress and egress gateways to configure load balancers executing at the edge of a service mesh. An ingress gateway allows you to define entry points into the mesh that all incoming traffic flows through. Egress gateway is a symmetrical concept; it defines exit points from the mesh. Egress gateways allow you to apply Istio features, for example, monitoring and route rules, to traffic exiting the mesh.</p> </blockquote> <hr /> <h2><a href="https://www.manning.com/books/istio-in-action" rel="noreferrer">Istio in action book</a></h2> <blockquote> <p>For our applications and services to provide anything meaningful, they’re going to need to interact with applications that live outside of our cluster. That could be existing monolith applications, off-the-shelf software, messaging queues, databases, and 3rd party partner systems. To do this, operators will need to configure Istio to allow traffic into the cluster and be very specific about what traffic is allowed to leave the cluster. The Istio components that provide this functionality are the <strong>istio-ingressgateway</strong> and <strong>istio-egressgateway</strong>.</p> </blockquote> <p>And here's a picture that shows it well</p> <p><a href="https://i.stack.imgur.com/Be14u.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Be14u.png" alt="enter image description here" /></a></p> <hr /> <h2><a href="https://banzaicloud.com/" rel="noreferrer">Banzaicloud</a></h2> <blockquote> <p>An ingress gateway serves as the entry point for all services running within the mesh.</p> </blockquote> <p><a href="https://i.stack.imgur.com/iL2wK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iL2wK.png" alt="enter image description here" /></a></p> <blockquote> <p>egress gateways are exit points from the mesh that allow us to apply Istio features. This includes applying features like monitoring and route rules to traffic that’s exiting the mesh.</p> </blockquote> <p><a href="https://i.stack.imgur.com/sJ3aM.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/sJ3aM.jpg" alt="enter image description here" /></a></p> <hr /> <h2>About your questions</h2> <blockquote> <p>For example, an application sets up listeners on an MQ queue. Is this an example of ingress or egress traffic? I thought that where the application initiates the connection, then this traffic will be directed to the egress gateway. Conversely, if the application is an endpoint, then traffic must be routed through the ingress gateway.</p> </blockquote> <p><a href="https://i.stack.imgur.com/j9owC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j9owC.png" alt="enter image description here" /></a></p> <p>I'm not familiar with message queues, but based on above picture let's assume consumers are inside the mesh, so producer services would have to go there through ingress gateway.</p> <p>[Producer Service] -&gt; ingress gateway -&gt; [envoy sidecar -&gt; Consumer Service]</p> <p>So yes, the traffic must be routed through the ingress gateway</p> <hr /> <blockquote> <p>Let's say application A is an external service to application B. Application A makes a rest request to B. Should this request be routed through ingress? Now application B makes a rest request to A. 
Should traffic go through egress now?</p> </blockquote> <p>If a service inside the service mesh wants to talk to an external service, we should start by configuring an <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/" rel="noreferrer">egress</a> and a <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="noreferrer">service entry</a> for it.</p> <blockquote> <p>Because all outbound traffic from an Istio-enabled pod is redirected to its sidecar proxy by default, accessibility of URLs outside of the cluster depends on the configuration of the proxy. By default, Istio configures the Envoy proxy to passthrough requests for unknown services. Although this provides a convenient way to get started with Istio, configuring stricter control is usually preferable.</p> </blockquote> <p>Based on my knowledge, with the following setup this is how the traffic would flow:</p> <pre><code>appA -&gt; external service outside the mesh appB -&gt; injected service in the istio mesh </code></pre> <p>Let's assume you want to use <a href="https://curl.haxx.se/docs/manpage.html" rel="noreferrer">curl</a> from appA to appB</p> <p>[app A](curl ingress-external-ip/specific path or port) -&gt; ingress gateway -&gt; [envoy sidecar -&gt; appB]</p> <p>Let's assume you want to use <a href="https://curl.haxx.se/docs/manpage.html" rel="noreferrer">curl</a> from appB to appA</p> <p>[appB -&gt; envoy sidecar](curl appA) -&gt; egress gateway -&gt; [appA]</p> <hr /> <p>Let me know in the comments if you have any more questions or you want to discuss something.</p>
<p>I am new to OpenShift. I have gone through the OpenShift website for more details, but wanted to know if anyone has deployed an init container. I want to use it to take a dump from the database and restore it to a new version of it with the help of an init container. We are using a Postgres database.</p> <p>Any help would be appreciated.</p> <p>Thanks!</p>
<blockquote> <p>I want to use that to take dump from database and restore it to new version of it with the help of init container</p> </blockquote> <p>I would say you should rather use the operator instead of initContainer. Take a look at below <a href="https://www.magalix.com/blog/kubernetes-patterns-the-init-container-pattern" rel="nofollow noreferrer">Init Containers Design Considerations</a></p> <p>There are some considerations that you should take into account when you create initcontainers:</p> <ul> <li>They always get executed before other containers in the Pod. So, they shouldn’t contain complex logic that takes a long time to complete. Startup scripts are typically small and concise. If you find that you’re adding too much logic to init containers, you should consider moving part of it to the application container itself.</li> <li>Init containers are started and executed in sequence. An init container is not invoked unless its predecessor is completed successfully. Hence, if the startup task is very long, you may consider breaking it into a number of steps, each handled by an init container so that you know which steps fail.</li> <li>If any of the init containers fail, the whole Pod is restarted (unless you set restartPolicy to Never). Restarting the Pod means re-executing all the containers again including any init containers. So, you may need to ensure that the startup logic tolerates being executed multiple times without causing duplication. For example, if a DB migration is already done, executing the migration command again should just be ignored.</li> <li>An init container is a good candidate for delaying the application initialization until one or more dependencies are available. For example, if your application depends on an API that imposes an API request-rate limit, you may need to wait for a certain time period to be able to receive responses from that API. Implementing this logic in the application container may be complex; as it needs to be combined with health and readiness probes. A much simpler way would be creating an init container that waits until the API is ready before it exits successfully. The application container would start only after the init container has done its job successfully.</li> <li>Init containers cannot use health and readiness probes as application containers do. The reason is that they are meant to start and exit successfully, much like how Jobs and CronJobs behave.</li> <li>All containers on the same Pod share the same Volumes and network. You can make use of this feature to share data between the application and its init containers.</li> </ul> <hr /> <p>The only thing I found about using it for dumping data is this <a href="https://www.magalix.com/blog/kubernetes-patterns-the-init-container-pattern" rel="nofollow noreferrer">example</a> about doing that with mysql, maybe it can guide you how to do it with postgresql.</p> <blockquote> <p>In this scenario, we are serving a MySQL database. This database is used for testing an application. It doesn’t have to contain real data, but it must be seeded with enough data so that we can test the application's query speed. We use an init container to handle downloading the SQL dump file and restore it to the database, which is hosted in another container. 
This scenario can be illustrated as below:</p> </blockquote> <p><a href="https://i.stack.imgur.com/fLRQ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fLRQ6.png" alt="enter image description here" /></a></p> <blockquote> <p>The definition file may look like this:</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: mydb labels: app: db spec: initContainers: - name: fetch image: mwendler/wget command: [&quot;wget&quot;,&quot;--no-check-certificate&quot;,&quot;https://sample-videos.com/sql/Sample-SQL-File-1000rows.sql&quot;,&quot;-O&quot;,&quot;/docker-entrypoint-initdb.d/dump.sql&quot;] volumeMounts: - mountPath: /docker-entrypoint-initdb.d name: dump containers: - name: mysql image: mysql env: - name: MYSQL_ROOT_PASSWORD value: &quot;example&quot; volumeMounts: - mountPath: /docker-entrypoint-initdb.d name: dump volumes: - emptyDir: {} name: dump </code></pre> <blockquote> <p>The above definition creates a Pod that hosts two containers: the init container and the application one. Let’s have a look at the interesting aspects of this definition:</p> <p>The init container is responsible for downloading the SQL file that contains the database dump. We use the mwendler/wget image because we only need the wget command. The destination directory for the downloaded SQL is the directory used by the MySQL image to execute SQL files (/docker-entrypoint-initdb.d). This behavior is built into the MySQL image that we use in the application container. The init container mounts /docker-entrypoint-initdb.d to an emptyDir volume. Because both containers are hosted on the same Pod, they share the same volume. So, the database container has access to the SQL file placed on the emptyDir volume.</p> </blockquote> <hr /> <p>Additionally, as a best practice I would suggest taking a look at Kubernetes operators; as far as I know that is the recommended way to manage databases in Kubernetes.</p> <p>If you're not familiar with operators I would suggest starting with the Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#operators-in-kubernetes" rel="nofollow noreferrer">documentation</a> and this short <a href="https://www.youtube.com/watch?v=ha3LjlD6g7g" rel="nofollow noreferrer">video</a> on YouTube.</p> <p>Operators are methods of packaging Kubernetes applications that enable you to more easily manage and monitor stateful applications. There are many operators already available, such as the</p> <ul> <li><a href="https://github.com/CrunchyData/postgres-operator" rel="nofollow noreferrer">Crunchy Data PostgreSQL Operator</a></li> <li><a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">Postgres Operator</a></li> </ul> <p>which automate and simplify deploying and managing open source PostgreSQL clusters on Kubernetes by providing the essential features you need to keep your PostgreSQL clusters up and running.</p>
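<p>For illustration, here is a rough sketch of how the same pattern could look with PostgreSQL. This is only a sketch, not a tested manifest: the dump URL is a placeholder you would replace with the location of your own <code>pg_dump</code> output, and it relies on the fact that the official <code>postgres</code> image executes any <code>*.sql</code> files found in <code>/docker-entrypoint-initdb.d</code> when the data directory is empty (so it only seeds a fresh database).</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: mypostgres
  labels:
    app: db
spec:
  initContainers:
    - name: fetch-dump
      image: mwendler/wget
      # placeholder URL - replace with the location of your own pg_dump output
      command: [&quot;wget&quot;, &quot;--no-check-certificate&quot;, &quot;https://example.com/dump.sql&quot;, &quot;-O&quot;, &quot;/docker-entrypoint-initdb.d/dump.sql&quot;]
      volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: dump
  containers:
    - name: postgres
      image: postgres:13
      env:
        - name: POSTGRES_PASSWORD
          value: &quot;example&quot;
      volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: dump
  volumes:
    - emptyDir: {}
      name: dump
</code></pre> <p>The init container only downloads the dump; the restore itself is done by the postgres entrypoint on first start, which is why no extra restore step is needed in this sketch.</p>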
<p>I have recently started learning and implementing istio in AWS EKS cluster. For configuring TLS for ingress gateway, I followed <a href="https://medium.com/faun/managing-tls-keys-and-certs-in-istio-using-amazons-acm-8ff9a0b99033" rel="nofollow noreferrer">this guide</a> which simply asks you to add AWS ACM ARN id to istio-ingressgateway as an annotation. So, I had to neither use certs to create <code>secret</code> nor use envoyproxy's SDS.</p> <p>This setup terminates TLS at gateway, but I also want to enable mTLS within mesh for securing service-service communication. By following <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#namespace-wide-policy" rel="nofollow noreferrer">their documentation</a>, I created this policy to enforce mTLS within a namespace:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: xyz-mtls-policy namespace: xyz-dev spec: mtls: mode: STRICT </code></pre> <p>But even after applying this, I see one service being able to call another service using <code>http</code>.</p> <p>So my question is: how do I use the ACM certs to implement mTLS in my namespace?</p>
<p>If you're calling from inside the mesh I would say it's working fine, take a look <a href="https://banzaicloud.com/blog/istio-auto-mtls/#setting-strict-mtls-policy" rel="nofollow noreferrer">here</a> and <a href="https://banzaicloud.com/blog/istio-mtls/#mutual-tls-in-istio" rel="nofollow noreferrer">here</a>.</p> <h2>Mutual TLS in Istio</h2> <blockquote> <p>Istio offers mutual TLS as a solution for service-to-service authentication.</p> <p>Istio uses the sidecar pattern, meaning that each application container has a sidecar Envoy proxy container running beside it in the same pod.</p> <ul> <li><p>When a service receives or sends network traffic, the traffic always goes through the Envoy proxies first.</p> </li> <li><p>When mTLS is enabled between two services, the client side and server side Envoy proxies verify each other’s identities before sending requests.</p> </li> <li><p>If the verification is successful, then the client-side proxy encrypts the traffic, and sends it to the server-side proxy.</p> </li> <li><p>The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.</p> </li> </ul> </blockquote> <p><a href="https://i.stack.imgur.com/7LXQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7LXQ3.png" alt="enter image description here" /></a></p> <hr /> <blockquote> <p>I am on istio 1.6.8, think it enables mTLS by default.</p> </blockquote> <p>Yes, it's enabled by default since istio 1.5 version. There are related <a href="https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/#automatic-mutual-tls" rel="nofollow noreferrer">docs</a> about this.</p> <blockquote> <p>Automatic mutual TLS is now enabled by default. Traffic between sidecars is automatically configured as mutual TLS. You can disable this explicitly if you worry about the encryption overhead by adding the option -- set values.global.mtls.auto=false during install. For more details, refer to <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls" rel="nofollow noreferrer">automatic mutual TLS</a>.</p> </blockquote> <hr /> <blockquote> <p>Is there any clear process to prove that it is indeed using mTLS?</p> </blockquote> <p>I would say there are 3 ways</p> <ul> <li>Test with pods</li> </ul> <p>You can change it from strict to permissive and call it from outside the mesh, it should work. Then change it to strict and call it again, it shouldn't work. In both ways you should be able to call it from a pod inside the mesh.</p> <ul> <li>Kiali</li> </ul> <p>If you want to see it visual way kiali should have something like a padlock when mtls is enabled, there is <a href="https://github.com/kiali/kiali/issues/2381" rel="nofollow noreferrer">github issue</a> about that.</p> <p><a href="https://i.stack.imgur.com/3auEo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3auEo.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/df2HG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/df2HG.png" alt="enter image description here" /></a></p> <ul> <li>Prometheus</li> </ul> <p>It was already mentioned in the banzaicloud, and you mentioned that in the comments, you can check the Connection Security Policy metric label. Istio sets this label to mutual_tls if the request has actually been encrypted.</p> <hr /> <p>Let me know if have any more questions.</p>
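<p>As a concrete illustration of the first method above (testing with pods), here is a sketch. It assumes a namespace without sidecar injection called <code>legacy</code> and a service called <code>httpbin</code> listening on port <code>8000</code> in your mesh namespace; with <code>STRICT</code> mode a plain-text call from the non-injected pod should be rejected, while the same call from a sidecar-injected pod keeps working.</p> <pre><code># from a pod WITHOUT a sidecar (namespace not labeled for injection) - expected to fail under STRICT
kubectl run mtls-test -n legacy --image=busybox --rm -it --restart=Never -- wget -O- --timeout 2 http://httpbin.xyz-dev:8000/headers

# the same request from a pod WITH a sidecar inside the mesh should still succeed
</code></pre> <p>You can also query Prometheus for <code>istio_requests_total{connection_security_policy=&quot;mutual_tls&quot;}</code>; requests that were actually encrypted carry that label value.</p>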
<p>For configuring a multicluster Isito with replicated control planes, one of the requirements is to configure the k8s coredns service in the kube-system namespace, to forward zone &quot;global&quot; to the IP of the &quot;istiocoredns&quot; service deployed in the istio-system namespace. Like <a href="https://istio.io/latest/docs/setup/install/multicluster/gateways/#setup-dns" rel="nofollow noreferrer">this:</a></p> <pre><code> global:53 { errors cache 30 forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53 } </code></pre> <p>In the example the use that command expansion to get the IP of the istiocoredns ClusterIP type of service.</p> <p>As that is a non static IP and could be modified, I am looking for a way to use something more dynamic and change aware. Using the istiocoredns service FQDN name would be great, but coredns documentation is not mentioning anything about it.</p> <p>Is there any coredns plugin or workaround this?</p> <p>Thank you.</p>
<blockquote> <p>Is there any coredns plugin or workaround this?</p> </blockquote> <p>There is <a href="https://github.com/istio-ecosystem/istio-coredns-plugin" rel="nofollow noreferrer">istio coredns plugin</a>, but as mentioned in the <a href="https://github.com/istio-ecosystem/istio-coredns-plugin#usage" rel="nofollow noreferrer">usage section</a> they set here the IP of the coredns anyway.</p> <blockquote> <p>Update the kube-dns config map to point to this coredns service as the upstream DNS service for the *.global domain. You will have to find out the cluster IP of coredns service and update the config map (or write a controller for this purpose!).</p> </blockquote> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system data: stubDomains: | {&quot;global&quot;: [&quot;10.2.3.4&quot;]} </code></pre> <p>But here's some interesting information</p> <blockquote> <p>UPDATE: This plugin is no longer necessary as of Istio 1.8. DNS is built into the istio agent in the sidecar. Sidecar DNS is enabled by default in the preview profile. You can also enable it manually by setting the following config in the istio operator</p> </blockquote> <pre><code> meshConfig: defaultConfig: proxyMetadata: ISTIO_META_DNS_CAPTURE: &quot;true&quot; ISTIO_META_PROXY_XDS_VIA_AGENT: &quot;true&quot; </code></pre> <p>You can find more information about it <a href="https://preliminary.istio.io/latest/docs/ops/deployment/deployment-models/#dns-with-multiple-clusters" rel="nofollow noreferrer">here</a>.</p> <blockquote> <p>There are a few efforts in progress that will help simplify the DNS story:</p> <p>Istio will soon support DNS interception for all workloads with a sidecar proxy. This will allow Istio to perform DNS lookup on behalf of the application.</p> <p><a href="https://github.com/istio-ecosystem/admiral" rel="nofollow noreferrer">Admiral</a> is an Istio community project that provides a number of multicluster capabilities, including automatic creation of service DNS entries.</p> <p><a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api" rel="nofollow noreferrer">Kubernetes Multi-Cluster Services</a> is a Kubernetes Enhancement Proposal (KEP) that defines an API for exporting services to multiple clusters. This effectively pushes the responsibility of service visibility and DNS resolution for the entire clusterset onto Kubernetes. There is also work in progress to build layers of MCS support into Istio, which would allow Istio to work with any cloud vendor MCS controller or even act as the MCS controller for the entire mesh.</p> <p>While Admiral is available today, the Istio and Kubernetes communities are actively building more general solutions into their platforms. Stay tuned!</p> </blockquote> <hr /> <p>There is <a href="https://preliminary.istio.io/latest/blog/2020/dns-proxy/" rel="nofollow noreferrer">article</a> about that in 1.8 prelim docs.</p>
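<p>For completeness, if you go the Istio 1.8 route, that <code>meshConfig</code> snippet would typically live in the <code>IstioOperator</code> resource you install Istio with. A minimal sketch (the resource name is arbitrary):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-control-plane
  namespace: istio-system
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: &quot;true&quot;
        ISTIO_META_PROXY_XDS_VIA_AGENT: &quot;true&quot;
</code></pre> <p>Applied with <code>istioctl install -f &lt;file&gt;.yaml</code>, the idea is that the sidecar agent then resolves those <code>*.global</code>-style names itself, so you no longer need to point the kube-system coredns ConfigMap at the istiocoredns ClusterIP.</p>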
<p>I am setting up hybrid cluster(master-centos and 2 worker nodes-windows 2019) with containerd as runtime. I cannot use any CNI like calico and weave as they need docker as runtime.I can use Flannel but it does not support network policies well. Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?</p>
<p><strong>Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?</strong></p> <p>Network policies were created for that exact purpose, and as per the documentation you need a CNI that supports them. Otherwise they will be ignored.</p> <blockquote> <p>Network policies are implemented by the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">network plugin</a>. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.</p> </blockquote> <p>If your only option is to use flannel for networking, you can install Calico network policy to secure cluster communications. So basically you are installing Calico for policy and flannel for networking, a combination commonly known as Canal. You can find more details in the <a href="https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel" rel="nofollow noreferrer">calico docs</a>.</p> <p>Here's also a good answer on how to <a href="https://stackoverflow.com/questions/59744000/kubernetes-1-17-containerd-1-2-0-with-calico-cni-node-not-joining-to-master">set up calico with containerd</a> that you might find useful for your case.</p>
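<p>Once a NetworkPolicy-capable CNI (such as Canal) is in place, a policy like the following sketch denies all ingress coming from other namespaces while still allowing pods inside the same namespace to talk to each other; you would apply one per namespace you want to isolate (the namespace name below is a placeholder):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: my-namespace      # repeat for every namespace you want to isolate
spec:
  podSelector: {}              # applies to all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # an empty podSelector here means: any pod in the SAME namespace
</code></pre>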
<p>I had istio configured but without the CNI addon enabled.</p> <p>In that time, I had an init container with a service account that would call the Kubernetes API to verify a couple of things (via kubectl).</p> <p>Since I enabled the CNI addon, this init container fails with the following message:</p> <blockquote> <p>The connection to the server 10.23.64.1:443 was refused - did you specify the right host or port?</p> </blockquote> <p>I tried removing all my network policies to see if that was the issue, but same result. I also gave the service account that this pods uses the cluster-admin role, but it didn't do the trick.</p> <p>I tested with both 1.6 and 1.7 branches of Istio.</p> <p>What is the issue here? Other pods without this init container work fine.</p>
<p>In order to have init container network connectivity with istio cni enabled please follow the guide for a workaround from istio <a href="https://istio.io/latest/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <h3>Compatibility with application init containers<a href="https://istio.io/latest/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers" rel="nofollow noreferrer"></a></h3> <p>The Istio CNI plugin may cause networking connectivity problems for any application <code>initContainers</code>. When using Istio CNI, <code>kubelet</code> starts an injected pod with the following steps:</p> <ol> <li>The Istio CNI plugin sets up traffic redirection to the Istio sidecar proxy within the pod.</li> <li>All init containers execute and complete successfully.</li> <li>The Istio sidecar proxy starts in the pod along with the pod’s other containers.</li> </ol> <p>Init containers execute before the sidecar proxy starts, which can result in traffic loss during their execution. Avoid this traffic loss with one or both of the following settings:</p> <ul> <li>Set the <code>traffic.sidecar.istio.io/excludeOutboundIPRanges</code> annotation to disable redirecting traffic to any CIDRs the init containers communicate with.</li> <li>Set the <code>traffic.sidecar.istio.io/excludeOutboundPorts</code> annotation to disable redirecting traffic to the specific outbound ports the init containers use.</li> </ul> </blockquote>
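<p>As an illustration of those two annotations (the CIDR, port and images below are placeholders; substitute the addresses your init container actually needs to reach, for example the API server service IP from your error message):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # skip traffic redirection for the Kubernetes API service IP so kubectl in the init container works
    traffic.sidecar.istio.io/excludeOutboundIPRanges: &quot;10.23.64.1/32&quot;
    # alternatively/additionally skip redirection for specific outbound ports
    traffic.sidecar.istio.io/excludeOutboundPorts: &quot;443&quot;
spec:
  initContainers:
    - name: check
      image: bitnami/kubectl
      command: [&quot;kubectl&quot;, &quot;get&quot;, &quot;pods&quot;]
  containers:
    - name: app
      image: nginx
</code></pre>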
<p>I created my Kubernetes cluster and I try to deploy this yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: httpd-deployment spec: selector: matchLabels: app: httpd replicas: 1 template: metadata: labels: app: httpd spec: containers: - name: httpd image: httpd ports: - containerPort: 80 --- kind: Service apiVersion: v1 metadata: name: httpd-service spec: selector: app: httpd-app ports: - protocol: TCP port: 8080 targetPort: 80 nodePort: 30020 name: httpd-port type: NodePort </code></pre> <p>This is the configuration:</p> <pre><code>[root@BCA-TST-K8S01 httpd-deploy]# kubectl get all -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/httpd-deployment-57fc687dcc-rggx9 1/1 Running 0 8m51s 10.44.0.1 bcc-tst-docker02 &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/httpd-service NodePort 10.102.138.175 &lt;none&gt; 8080:30020/TCP 8m51s app=httpd-app service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 134m &lt;none&gt; NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR deployment.apps/httpd-deployment 1/1 1 1 8m51s httpd httpd app=httpd NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR replicaset.apps/httpd-deployment-57fc687dcc 1 1 1 8m51s httpd httpd app=httpd,pod-template-hash=57fc687dcc </code></pre> <p>But I can't connect to the worker or from the cluster IP:</p> <pre><code>curl http://bcc-tst-docker02:30020 curl: (7) Failed to connect to bcc-tst-docker02 port 30020: Connection refused </code></pre> <p>How can I fix the problem? How can expose the cluster using the internal Matser IP (for example I need to access to the httpd-deploy from the master IP 10.100.170.150 open a browser in the same network)</p> <p><strong>UPDATE:</strong></p> <p>I modified my yaml file as below:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: httpd-deployment spec: selector: matchLabels: app: httpd-app replicas: 2 template: metadata: labels: app: httpd-app spec: containers: - name: httpd image: httpd ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: http-service spec: externalIPs: - 10.100.170.150 **--&gt; IP K8S** externalTrafficPolicy: Cluster ports: - name: httpd-port protocol: TCP port: 8080 targetPort: 80 nodePort: 30020 selector: app: httpd-app sessionAffinity: None type: LoadBalancer </code></pre> <p>And these are the result after I run apply command:</p> <pre><code>[root@K8S01 LoadBalancer]# kubectl get all -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/httpd-deployment-65d64d47c5-72xp4 1/1 Running 0 60s 10.44.0.2 bcc-tst-docker02 &lt;none&gt; &lt;none&gt; pod/httpd-deployment-65d64d47c5-fc645 1/1 Running 0 60s 10.36.0.1 bca-tst-docker01 &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/http-service LoadBalancer 10.100.236.203 10.100.170.150 8080:30020/TCP 60s app=httpd-app service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 13d &lt;none&gt; NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR deployment.apps/httpd-deployment 2/2 2 2 60s httpd httpd app=httpd-app NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR replicaset.apps/httpd-deployment-65d64d47c5 2 2 2 60s httpd httpd app=httpd-app,pod-template-hash=65d64d47c5 </code></pre> <p>but now when I try to connect to the httpd using K8S IP I receive these error:</p> <pre><code>[root@K8S01 LoadBalancer]# curl http://10.100.170.150:8080 curl: (7) Failed to connect to 10.100.170.150 port 
8080: No route to host [root@K8S01 LoadBalancer]# curl http://10.100.236.203:8080 curl: (7) Failed to connect to 10.100.236.203 port 8080: No route to host </code></pre> <p>If I try to connect directly to the node I can connect:</p> <pre><code>[root@K8S01 LoadBalancer]# curl http://bca-tst-docker01:30020 &lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; [root@K8S01 LoadBalancer]# curl http://bcc-tst-docker02:30020 &lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre>
<p>You're getting the <code>connection refused</code> because the service does not have any endpoints behind it, since your label selector does not match the labels on the deployment's pods.</p> <p>The deployment's pods carry the <code>app: httpd</code> label, while the service is trying to select pods labeled <code>app: httpd-app</code>. Below you can find the corrected selector:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service apiVersion: v1 metadata: name: httpd-service spec: selector: app: httpd &lt;------- ports: - protocol: TCP port: 8080 targetPort: 80 nodePort: 30020 name: httpd-port type: NodePort </code></pre> <p>You can always verify whether the service has endpoints. Kubernetes has a great section about debugging services, and one of its steps is called: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-have-any-endpoints" rel="nofollow noreferrer">Does the Service have any Endpoints?</a></p>
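<p>For example, a quick way to check this (service name taken from your manifest):</p> <pre><code>$ kubectl get endpoints httpd-service
# with a matching selector you should see the pod IP(s) listed, e.g. 10.44.0.1:80
$ kubectl describe service httpd-service | grep -i endpoints
</code></pre>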
<p>We have a multi tenant application and for each tenant we provision separate container image. Likewise we create a subdomain for each tenant which will be redirected to its own container. There might be a scenario where 1000s of tenants can exist and its dynamic.</p> <p>So it has become necessary for us to consider the limitations in ingress controllers for Kubernetes in general before choosing. Especially the nginx-ingress.</p> <ol> <li><p>Is there any max limitation on number of Ingress resources or rules inside ingress that can be created? Or will there be any performance or scaling issues when too many ingress resources are created ?</p> </li> <li><p>Is it better to add a new rule(for each subdomain) in same ingress resource or to create separate ingress resource for each subdomain ?</p> </li> </ol>
<p>AFAIK, there are no hard limits like that; you will either run out of resources or find a choke point first. <a href="https://www.loggly.com/blog/benchmarking-5-popular-load-balancers-nginx-haproxy-envoy-traefik-and-alb/" rel="nofollow noreferrer">This</a> article compares the resource consumption of several load balancers.</p> <p>As for nginx-ingress, there are a few features hidden behind the paid NGINX Plus version, as listed <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">here</a>.</p> <p>If you wish to have dynamic configuration and scalability, you should try an Envoy-based ingress like <a href="https://www.getambassador.io/learn/kubernetes-ingress/?gclid=EAIaIQobChMIqtOT6Kyx6gIVhumyCh3PKAf9EAAYASAAEgK9nPD_BwE" rel="nofollow noreferrer">Ambassador</a> or <a href="https://istio.io/latest/docs/concepts/traffic-management/" rel="nofollow noreferrer">Istio</a>.</p> <p>Envoy offers dynamic configuration updates which will not interrupt existing connections. More info <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/hot_restart" rel="nofollow noreferrer">here</a>.</p> <p>Check out <a href="https://medium.com/flant-com/comparing-ingress-controllers-for-kubernetes-9b397483b46b" rel="nofollow noreferrer">this</a> article which compares most of the popular Kubernetes ingress controllers.</p> <p><a href="https://engineeringblog.yelp.com/2017/05/taking-zero-downtime-load-balancing-even-further.html" rel="nofollow noreferrer">This</a> article shows a great example of pushing an HAProxy and Nginx combination to its limits.</p> <p>Hope it helps.</p>
<p>I'm trying to limit my filebeat daemonset to collect logs only from certain namespaces.</p> <p>According to the <a href="https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html#_kubernetes" rel="nofollow noreferrer">official autodiscovery documentation</a>, I can define <code>namespace:</code> but it seems to be singular, not plural.</p> <p>Is there anyway to limit the namespace but for several namespaces?</p> <p>My current configuration looks like this:</p> <pre><code>filebeat.autodiscover: providers: - type: kubernetes node: ${NODE_NAME} namespace: backend hints.enabled: true hints.default_config: type: container paths: - /var/log/containers/*-${data.kubernetes.container.id}.log include_annotations: '*' </code></pre>
<p>After some reading it looks like you can achieve your goal with <a href="https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html#configuration-autodiscover-hints" rel="nofollow noreferrer">hints-based autodiscover</a>:</p> <p>The hints system looks for hints in Kubernetes Pod annotations or Docker labels that have the prefix <code>co.elastic.logs</code>. As soon as the container starts, Filebeat will check if it contains any hints and launch the proper config for it. Hints tell Filebeat how to get logs for the given container.</p> <p>So basically you enable the hints in your main configuration:</p> <pre class="lang-yaml prettyprint-override"><code>filebeat.autodiscover: providers: - type: kubernetes hints.enabled: true add_resource_metadata.namespace.enabled: true hints.default_config.enabled: false </code></pre> <p>Then you provide the hint for each workload you want collected, in the form of an annotation:</p> <pre class="lang-yaml prettyprint-override"><code>annotations: co.elastic.logs/enabled: 'true' </code></pre>
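<p>If you would rather not annotate every workload, another option, just a sketch with placeholder namespace names, is to declare one <code>kubernetes</code> provider per namespace you care about (essentially duplicating your current provider block once per namespace), since <code>providers</code> is a list:</p> <pre class="lang-yaml prettyprint-override"><code>filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      namespace: backend
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
    - type: kubernetes
      node: ${NODE_NAME}
      namespace: frontend      # second namespace, placeholder name
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
</code></pre>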
<p>I'm trying to install the prometheus operator and inject using the sidecar. Mutual TLS is turned on and works okay for Jaeger. For the operator though we get a failure on the oper-admission job (see image). <a href="https://i.stack.imgur.com/SFlG9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SFlG9.png" alt="Prometheus Operator Error"></a></p> <p>I believe Istio is causing this as if I release prometheus-operator prior to istio or without istio it works okay, but then it isn't injected.</p> <p>I've tried setting the following in the istio operator sidecar settings:</p> <pre><code>rewriteapphttpprobe:true </code></pre> <p>I've also tried to extend the readinessInitialDelaySeconds to 10s but still get the error. Does anyone else have any ideas?</p>
<p>First of all, according to the istio <a href="https://istio.io/docs/concepts/observability/#service-level-metrics" rel="nofollow noreferrer">documentation</a>, Prometheus is used as the default metrics backend in an istio mesh:</p> <blockquote> <p>The <a href="https://istio.io/docs/reference/config/policy-and-telemetry/metrics/" rel="nofollow noreferrer">default Istio metrics</a> are defined by a set of configuration artifacts that ship with Istio and are exported to <a href="https://istio.io/docs/reference/config/policy-and-telemetry/adapters/prometheus/" rel="nofollow noreferrer">Prometheus</a> by default. Operators are free to modify the shape and content of these metrics, as well as to change their collection mechanism, to meet their individual monitoring needs.</p> </blockquote> <p>So by injecting the prometheus operator into the mesh you end up with two Prometheus instances in your istio mesh.</p> <p>Secondly, when you enforce mutual TLS in your istio mesh, every connection has to be secured with <code>TLS</code>. And as you mentioned, it works when there is no istio injection.</p> <p>So the most likely cause is that the readiness probe fails because it uses the plain-text <code>HTTP</code> protocol, which is one of the reasons why you would get a <code>503</code> error.</p> <p>If you really need the prometheus operator within the istio mesh, this could be fixed by creating a <code>DestinationRule</code> that sets the TLS mode to <code>DISABLE</code> just for the readiness probe.</p> <p>Example:</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: "readiness-probe-dr" namespace: "prometheus-namespace" spec: host: "prometheus-prometheus-oper-prometheus.svc.cluster.local" trafficPolicy: tls: mode: DISABLE EOF </code></pre> <p>Note: Make sure to modify it so that it matches your namespaces and hosts. Also, there could be some other Prometheus collisions within the mesh.</p> <hr> <p>The other solution would be to not have the prometheus operator istio-injected in the first place. You can disable istio injection in the prometheus namespace by using the following commands:</p> <pre><code>$ kubectl get namespace -L istio-injection NAME STATUS AGE ISTIO-INJECTION default Active 4d22h enabled istio-system Active 4d22h disabled kube-node-lease Active 4d22h kube-public Active 4d22h kube-system Active 4d22h prometheus Active 30s enabled </code></pre> <pre><code>$ kubectl label namespace prometheus istio-injection=disabled --overwrite namespace/prometheus labeled </code></pre> <pre><code>$ kubectl get namespace -L istio-injection NAME STATUS AGE ISTIO-INJECTION default Active 4d22h enabled istio-system Active 4d22h disabled kube-node-lease Active 4d22h kube-public Active 4d22h kube-system Active 4d22h prometheus Active 73s disabled </code></pre>
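<p>If you only want to keep the sidecar out of the short-lived admission-webhook job pods (rather than out of the whole prometheus namespace), another commonly used workaround is the per-pod injection annotation shown below. This is only a sketch; where exactly your prometheus-operator chart lets you set pod annotations for the admission jobs depends on the chart version, so check its values reference:</p> <pre class="lang-yaml prettyprint-override"><code># pod template annotation that prevents sidecar injection for that pod only
metadata:
  annotations:
    sidecar.istio.io/inject: &quot;false&quot;
</code></pre>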
<p>I try to create a situation which is shown in the picture.</p> <p><a href="https://i.stack.imgur.com/fRPC5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fRPC5.png" alt="enter image description here" /></a></p> <pre><code>kubectl run frontend --image=nginx --labels=&quot;app=frontend&quot; --port=30081 --expose kubectl run backend --image=nginx --labels=&quot;app=backend&quot; --port=30082 --expose kubectl run database --image=nginx --labels=&quot;app=database&quot; --port=30082 </code></pre> <p>I created network policy which should block all ingress and egress access which do not have specific label definition.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: access-nginx spec: podSelector: matchLabels: app: frontend matchLabels: app: backend matchLabels: app: database policyTypes: - Ingress - Egress ingress: - from: - podSelector: matchLabels: app: frontend matchLabels: app: backend matchLabels: app: database egress: - to - podSelector: matchLabels: app: frontend matchLabels: app: backend matchLabels: app: database - ports: - port: 53 protocol: UDP - port: 53 protocol: TCP </code></pre> <p>I tried to connect to pod frontend without label(command 1) and with correct label(command 2) as is shown below.</p> <ul> <li>kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- http://frontend:30081 --timeout 2</li> <li>kubectl run busybox --image=busybox --rm -it --restart=Never --labels=app=frontend -- wget -O- http://frontend:30081 --timeout 2</li> </ul> <p>I expected that first command which do not use label will be blocked and second command will allow communication but after pressed the second command i see output &quot;wget: can't connect to remote host (10.109.223.254): Connection refused&quot;. Did I define network policy incorrectly?</p>
<p>As mentioned in the kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">documentation</a> about Network Policy:</p> <blockquote> <p><strong>Prerequisites</strong></p> <p>Network policies are implemented by the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">network plugin</a>. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.</p> </blockquote> <p>As far as I know flannel, which is used by Katacoda, does not support network policies.</p> <pre><code>controlplane $ kubectl get pods --namespace kube-system NAME READY STATUS RESTARTS AGE coredns-66bff467f8-4tmhm 1/1 Running 0 16m coredns-66bff467f8-v2dbj 1/1 Running 0 16m etcd-controlplane 1/1 Running 0 16m katacoda-cloud-provider-58f89f7d9-brnk2 1/1 Running 8 16m kube-apiserver-controlplane 1/1 Running 0 16m kube-controller-manager-controlplane 1/1 Running 0 16m kube-flannel-ds-amd64-h5lrd 1/1 Running 1 16m kube-flannel-ds-amd64-sdl4b 1/1 Running 0 16m kube-keepalived-vip-gkhbz 1/1 Running 0 16m kube-proxy-6gd8d 1/1 Running 0 16m kube-proxy-zkldz 1/1 Running 0 16m kube-scheduler-controlplane 1/1 Running 1 16m </code></pre> <p>As mentioned <a href="https://github.com/coreos/flannel#:%7E:text=Networking%20details&amp;text=Flannel%20is%20responsible%20for%20providing,traffic%20is%20transported%20between%20hosts.&amp;text=For%20network%20policy%2C%20other%20projects%20such%20as%20Calico%20can%20be%20used." rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Flannel is focused on networking. For network policy, other projects such as Calico can be used.</p> </blockquote> <p>Additionally, there is a nice <a href="https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-april-2019-4a9886efe9c4" rel="nofollow noreferrer">tutorial</a> which shows which CNIs support network policies.</p> <p><a href="https://i.stack.imgur.com/yc2fI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yc2fI.png" alt="enter image description here" /></a></p> <p>So I would say it's not possible to do this on the Katacoda playground.</p>
<p>I have helm charts created for a microservice that I have built and everything is working as expected. Now, I have created a new k8s namespace and I want to try to deploy the same helm charts as my old namespace. Although, I have just one value that I need it different while everything else remain the same.</p> <p>Do I have to create another values.yaml for the new namespace and copy everything over and update the one field I want updated ? Or is there any other way ? I do not want to use the --set method of passing updates to the command line.</p>
<p>David suggested the right way. You can use a different <code>values.yaml</code> in which you specify the values for the namespace you want to deploy the chart to:</p> <pre><code>$ helm install -f another-namespace-values.yaml &lt;my-release&gt; . </code></pre> <p>It's also entirely possible to launch a helm chart with multiple values files. For more reading please check the <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">values files</a> section of the helm docs.</p>
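<p>For instance, since only one value differs, you could keep the shared <code>values.yaml</code> and add a tiny override file for the new namespace; when several <code>-f</code> files are passed, helm merges them and the rightmost file takes precedence. The file and key names below are just examples:</p> <pre><code># new-namespace-values.yaml - contains only the key that differs
someKey: newValue
</code></pre> <pre><code>$ helm install my-release . -n new-namespace -f values.yaml -f new-namespace-values.yaml
</code></pre>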
<p>How do you add additional <code>values.yaml</code> files to <code>helm/chart-testing-action</code>?</p> <p>Something like <code>helm lint -f my-values.yaml</code>.</p>
<p>You can use <code>--values</code> (or its shorthand <code>-f</code>) to pass additional values files to the chart, or <code>--set</code> for individual keys.</p> <p>Example (both forms are equivalent):</p> <pre><code>helm install helm/chart-testing-action -f &lt;values-file&gt; helm install helm/chart-testing-action --values &lt;values-file&gt; </code></pre>
<p>I'm create k8s in Google Cloud using Terraform, several node pools contains GPU, according <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus" rel="nofollow noreferrer">documentation</a> there should be applied DaemonSet with GPU drivers. It's possible to do it with Terraform or this operation requires my attention?</p>
<p>As @Patric W and the Google Cloud <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">documentation</a> mentioned:</p> <blockquote> <p>After adding GPU nodes to your cluster, you need to install NVIDIA's device drivers to the nodes. Google provides a DaemonSet that automatically installs the drivers for you.</p> </blockquote> <p>So all we have to do is apply the <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/ubuntu/daemonset-preloaded.yaml" rel="nofollow noreferrer">DaemonSet</a> provided by Google.</p> <p>For Container-Optimized OS (COS) nodes:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml </code></pre> <p>For Ubuntu nodes:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/ubuntu/daemonset-preloaded.yaml </code></pre> <hr> <p>Based on the Terraform <a href="https://www.terraform.io/docs/provisioners/local-exec.html" rel="nofollow noreferrer">documentation</a>, you can use a <code>provisioner "local-exec"</code> block to run the <code>kubectl apply</code> command for the DaemonSet after a successful cluster deployment.</p> <pre><code>provisioner "local-exec" { command = "kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml" } </code></pre> <p>Note that the example above is for COS nodes, and that <code>kubectl</code> on the machine running Terraform must already be pointed at the new cluster for the provisioner to work.</p>
<p>We switch now to <strong>istio</strong> and I need to expose my app to outside</p> <p>In the app, I've only 3 routes</p> <blockquote> <ol> <li>&quot;/&quot; root route</li> <li>&quot;/login&quot;</li> <li>&quot;static&quot; - my app should serve some static files</li> </ol> </blockquote> <p>we have <code>gw</code> and <code>host</code> but somehow I cannot access my app, any idea what am I doing wrong here? <strong>vs-yaml</strong></p> <p>is there a way to expose all the routes, or should I define them explicitly, if so how as it a bit confusing with <code>routes</code> and <code>match</code>?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bher-virtualservice namespace: ba-trail spec: gateways: - webb-system.svc.cluster.local hosts: - trialio.cloud.str http: - route: - destination: host: bsa port: number: 5000 </code></pre>
<blockquote> <p>if so how as it a bit confusing with routes and match</p> </blockquote> <p>I would suggest to take a look at istio documentation about <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">virtual services</a>, it's well described there.</p> <hr /> <p>Let's start from the beginning, you have virtual service and gateway, they should be in the same namespace as your application, or you need to specify that in both of them.</p> <p>As far as I can see your virtual service is incorrect, I have prepared example which should work for you. Take a look at below example.</p> <p><strong>Gateway</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bher-gateway namespace: ba-trail 👈 spec: selector: istio: ingressgateway # use the default IngressGateway servers: - port: number: 80 name: http protocol: HTTP hosts: - &quot;trialio.cloud.str&quot; </code></pre> <p>I see you have <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">gateway</a> which is already deployed, if it´s not in the same namespace as virtual service, you should add it like in below example.</p> <p>Check the <code>spec.gateways</code> section</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-gateway namespace: some-config-namespace </code></pre> <hr /> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo-Mongo namespace: bookinfo-namespace spec: gateways: - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service. </code></pre> <hr /> <p><strong>Virtual service</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bher-virtualservice namespace: ba-trail 👈 spec: gateways: - bher-gateway 👈 // name of your gateway hosts: - trialio.cloud.str http: - match: - uri: prefix: &quot;/&quot; - uri: prefix: &quot;/login&quot; - uri: prefix: &quot;/static&quot; - uri: regex: '^.*\.(ico|png|jpg)$' route: - destination: host: bsa.ba-trail.svc.cluster.local 👈 // name_of_your service.namespace.svc.cluster.local port: number: 5000 </code></pre> <p>Take a look at this <a href="https://rinormaloku.com/istio-practice-routing-virtualservices/" rel="nofollow noreferrer">example</a></p> <blockquote> <p>Let’s break down the requests that should be routed to Frontend:</p> <p><strong>Exact path</strong> / should be routed to Frontend to get the Index.html</p> <p><strong>Prefix path</strong> /static/* should be routed to Frontend to get any static files needed by the frontend, like <strong>Cascading Style Sheets</strong> and <strong>JavaScript files</strong>.</p> <p><strong>Paths matching the regex ^.*.(ico|png|jpg)$</strong> should be routed to Frontend as it is an image, that the page needs to show.</p> </blockquote> <pre><code>http: - match: - uri: exact: / - uri: exact: /callback - uri: prefix: /static - uri: regex: '^.*\.(ico|png|jpg)$' route: - destination: host: frontend port: number: 80 </code></pre> <hr /> <p>Hope you find this useful. If you have any questions let me know in the comments.</p>
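<p>Once both the gateway and the virtual service are applied you can test the routes from outside the cluster. A sketch, assuming the default <code>istio-ingressgateway</code> service is exposed on port 80 and <code>/static/app.css</code> is just an example file:</p> <pre><code>$ export INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl -s -o /dev/null -w &quot;%{http_code}\n&quot; -H &quot;Host: trialio.cloud.str&quot; http://$INGRESS_IP/login
$ curl -s -o /dev/null -w &quot;%{http_code}\n&quot; -H &quot;Host: trialio.cloud.str&quot; http://$INGRESS_IP/static/app.css
</code></pre> <p>A <code>200</code> from both means the gateway, the host match and the prefix routes are wired up correctly; a <code>404</code> from istio-envoy usually points at a host or gateway-name mismatch.</p>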
<p>I have been trying to experiment with the calico network rules and I'm finding it tough to get the ingress and the egress rules to work with <code>order</code> in calico after denying all ingress and egress rules.</p> <pre><code> kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS hello-web3 1/1 Running 0 45m app=foo hello-web4 1/1 Running 0 45m app=bar hello-web5 1/1 Running 0 15s app=foobar hello-web6 1/1 Running 0 4s app=barbar </code></pre> <p>My network policy is as follows</p> <pre><code>--- apiVersion: projectcalico.org/v3 kind: NetworkPolicy metadata: name: ppdp-default spec: selector: all() order: 2000 types: - Ingress - Egress --- apiVersion: projectcalico.org/v3 kind: GlobalNetworkPolicy metadata: name: ppdp-egress-trusted spec: selector: app == 'foo' order: 1000 types: - Egress egress: - action: Allow destination: selector: app == 'bar' --- apiVersion: projectcalico.org/v3 kind: GlobalNetworkPolicy metadata: name: ppdp-ingress-trusted spec: selector: app == 'foobar' order: 100 types: - Ingress ingress: - action: Allow source: selector: app == 'barbar' </code></pre> <p>Output for Ingress:</p> <pre><code>(base) ➜ ✗ kubectl exec --stdin --tty hello-web5 -- sh / # ^C / # wget -qO- --timeout=2 http://hello-web6:8080 ^C / # wget -qO- --timeout=2 http://hello-web6:8080 wget: bad address 'hello-web6:8080' / # command terminated with exit code 1 --- (base) ➜ ✗ kubectl exec --stdin --tty hello-web6 -- sh / # wget -qO- --timeout=2 http://hello-web5:8080 wget: bad address 'hello-web5:8080' / # command terminated with exit code 1 </code></pre> <p>Output for Egress</p> <pre><code>(base) ➜ ✗ kubectl exec --stdin --tty hello-web3 -- sh / # wget -qO- --timeout=2 http://hello-web4:8080 ^C / # command terminated with exit code 130 </code></pre> <p>Am I missing anything? Any help would be of great use.</p> <p>Thanks in advance</p>
<blockquote> <p><strong>In a namespace, I want to deny traffic among all pods in the first place and then allow egress or ingress traffic between specific pods (matching labels)</strong></p> </blockquote> <p>While I don't know why you're using the order or calico network policies the goal described in the comments can be achieved with Kubernetes network policies supported by Calico CNI. I prepared a simple example of how network policies work with those. So let's start with a list of pods that I have created in <code>dev</code> namespace:</p> <pre class="lang-sh prettyprint-override"><code>➜ ~ kgp -n dev --show-labels -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS centos 1/1 Running 0 30m 10.244.120.68 minikube &lt;none&gt; &lt;none&gt; host=centos echo-server 1/1 Running 0 30m 10.244.120.69 minikube &lt;none&gt; &lt;none&gt; host=echo-server </code></pre> <p>Notice the labels. In my example we are going to allow ingress traffic to <code>echo-server</code> from host named <code>centos</code>. Let's first test if the connections works between those:</p> <pre><code>[root@centos /]# curl 10.244.120.69:80 -v * About to connect() to 10.244.120.69 port 80 (#0) * Trying 10.244.120.69... * Connected to 10.244.120.69 (10.244.120.69) port 80 (#0) &gt; GET / HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: 10.244.120.69 &gt; Accept: */* &quot;path&quot;: &quot;/&quot;, &quot;headers&quot;: { &quot;user-agent&quot;: &quot;curl/7.29.0&quot;, &quot;host&quot;: &quot;10.244.120.69&quot;, &quot;accept&quot;: &quot;*/*&quot; &quot;os&quot;: { &quot;hostname&quot;: &quot;echo-server&quot; }, &quot;connection&quot;: {} * Connection #0 to host 10.244.120.69 left intact </code></pre> <p>Now let's deny all traffic in that namespace:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-all namespace: dev spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <p>Once this policy is in place previous test fails:</p> <pre class="lang-yaml prettyprint-override"><code>➜ ~ keti -n dev centos bash [root@centos /]# curl 10.244.120.69:80 -v * About to connect() to 10.244.120.69 port 80 (#0) * Trying 10.244.120.69... * Connection timed out * Failed connect to 10.244.120.69:80; Connection timed out * Closing connection 0 curl: (7) Failed connect to 10.244.120.69:80; Connection timed out </code></pre> <p>Let's now apply the ingress policy that will allow us to reach the <code>echo-server</code>. So basically we selects the pod's label that we want the policy to be applied to and the choose which pod ingress traffic is allowed from:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-echo namespace: dev spec: podSelector: matchLabels: host: echo-server ingress: - from: - podSelector: matchLabels: host: centos </code></pre> <p>Now since we allowed the traffic towards our <code>echo-server</code>, we might feel tempted to test this straight away but this still won't work. While we allowed the ingress traffic toward <code>echo-server</code> we have to remember that we denied both <code>ingress</code> and <code>egress</code> in our <code>deny-all</code> policy. 
This means that we have to allow the <code>egress</code> traffic from the <code>centos</code> pod:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: egress-centos namespace: dev spec: podSelector: matchLabels: host: centos egress: - to: - podSelector: matchLabels: host: echo-server </code></pre> <p>After that we have successfully allowed traffic between these specific pods in the same namespace, while denying it for every pod that doesn't match the labels.</p>
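<p>With all three policies in place, re-running the earlier test from the <code>centos</code> pod should work again, while any pod without the matching labels stays blocked:</p> <pre><code># from the centos pod - expected to succeed again (response from echo-server)
[root@centos /]# curl -s -o /dev/null -w &quot;%{http_code}\n&quot; 10.244.120.69:80
</code></pre>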
<p>So I have a setup like this: </p> <p>AWS NLB (forwards) --> Istio --> Nginx pod</p> <p>Now, I'm trying to implement rate limiting at Istio layer. I followed <a href="https://raw.githubusercontent.com/istio/istio/release-1.3/samples/bookinfo/policy/mixer-rule-productpage-ratelimit.yaml" rel="nofollow noreferrer">this</a> link. However, I can still request the API more than what I configured. Looking more into it, I logged X-Forwarded-For header in the nginx, and it's empty. </p> <p>So, how do I get the client IP in Istio when I'm using NLB? NLB forwards the client IP, but how? In header? </p> <p>EDITS:</p> <p>Istio Version: 1.2.5</p> <p>istio-ingressgateway is configured as type NodePort. </p>
<p>According to the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html" rel="nofollow noreferrer">documentation</a> about the Network Load Balancer:</p> <blockquote> <p>A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.</p> </blockquote> <p>...</p> <blockquote> <p>When you create a target group, you specify its target type, which determines whether you register targets by instance ID or IP address. If you register targets by instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you register targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.</p> </blockquote> <hr /> <p>There are two ways of preserving the client IP address when using an NLB:</p> <p><strong>1.: NLB preserves the client IP address as the source address when targets are registered by instance ID.</strong></p> <p>So client IP addresses are only available in that specific NLB configuration. You can read more about target groups in the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html" rel="nofollow noreferrer">documentation</a>.</p> <hr /> <p><strong>2.: Proxy Protocol headers.</strong></p> <p>It is possible to use Proxy Protocol to send additional data such as the source IP address in a header, even if you register targets by IP address.</p> <p>You can follow the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol" rel="nofollow noreferrer">documentation</a> for a guide and examples of how to configure Proxy Protocol.</p> <blockquote> <p><strong>To enable Proxy Protocol using the console</strong></p> <ol> <li><p>Open the Amazon EC2 console at <a href="https://console.aws.amazon.com/ec2/" rel="nofollow noreferrer">https://console.aws.amazon.com/ec2/</a>.</p> </li> <li><p>On the navigation pane, under <strong>LOAD BALANCING</strong>, choose <strong>Target Groups</strong>.</p> </li> <li><p>Select the target group.</p> </li> <li><p>Choose <strong>Description</strong>, <strong>Edit attributes</strong>.</p> </li> <li><p>Select <strong>Enable proxy protocol v2</strong>, and then choose <strong>Save</strong>.</p> </li> </ol> </blockquote>
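<p>On the Kubernetes side, when the NLB is created for you by a <code>Service</code> of type <code>LoadBalancer</code> (not your NodePort setup today, but the common pattern), these behaviours are usually driven by annotations on that Service. A sketch, and the exact annotation set can vary with your cloud-controller version:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb&quot;
    # ask AWS to enable Proxy Protocol v2 on the NLB target groups
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: &quot;*&quot;
spec:
  type: LoadBalancer
  # keep traffic on the node that received it so the client IP is not SNATed away
  externalTrafficPolicy: Local
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80
      targetPort: 8080
</code></pre> <p>Keep in mind that if you enable Proxy Protocol on the NLB, the receiving side (Envoy in the istio-ingressgateway) also has to be configured to parse Proxy Protocol, otherwise connections will fail.</p>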
<p>i tried to create a firewall rule in k8s istio with &quot;istio-system&quot; ns, <br /> and i have a services within different ns. <br /> i need to create firewall rule with istio ingress that block all requests Besides &quot;POST&quot; requests. i tried to create new rule in firewall like that:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: deny-all namespace: istio-system spec: {} </code></pre> <p>and that really block all the requests and after that i tried to apply this rule:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: test spec: action: ALLOW selector: matchLabels: app: test rules: to: - operation: hosts: [&quot;https://ABCD.xxx.ddd&quot;] methods: [&quot;POST&quot;] paths: [&quot;/*&quot;] </code></pre> <p>and that not successed <br /> To my current understanding, I can't block services that are on one namespace, <br /> using firewall rules as part of istio (within different ns) that is on another namespace. <br /> My question is is it possible to do this, and if so how?</p>
<blockquote> <p>i need to create firewall rule with istio ingress that block all requests Besides &quot;POST&quot; requests</p> </blockquote> <p>Maybe just create one authorization policy which denies every method besides POST?</p> <hr /> <p>I have made an example which allows only GET requests and denies the rest; a variant adapted to your POST-only case follows at the end of this answer. Take a look.</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: istio-system spec: action: DENY selector: matchLabels: istio: ingressgateway rules: - to: - operation: methods: [&quot;POST&quot;, &quot;HEAD&quot;, &quot;PUT&quot;, &quot;DELETE&quot;, &quot;CONNECT&quot;, &quot;OPTIONS&quot;, &quot;TRACE&quot;, &quot;PATCH&quot;] </code></pre> <p>And some tests</p> <pre><code>root@httpbin-779c54bf49-s6g6r:/# curl http://xx.xx.xxx.xxx/productpage -X GET -s -o /dev/null -w &quot;%{http_code}\n&quot; 200 root@httpbin-779c54bf49-s6g6r:/# curl http://xx.xx.xxx.xxx/productpage -X POST -s -o /dev/null -w &quot;%{http_code}\n&quot; 403 root@httpbin-779c54bf49-s6g6r:/# curl http://xx.xx.xxx.xxx/productpage -X PUT -s -o /dev/null -w &quot;%{http_code}\n&quot; 403 </code></pre> <hr /> <p>Related documentation about AuthorizationPolicy:</p> <ul> <li><a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/security/authorization-policy/</a></li> <li><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-http/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/authorization/authz-http/</a></li> <li><a href="https://istio.io/latest/docs/tasks/security/authorization/authz-deny/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/authorization/authz-deny/</a></li> </ul> <hr /> <p>Let me know if you have any more questions.</p>
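<p>Adapting that same pattern to your exact requirement (block everything besides POST) just means listing every method except <code>POST</code> in the DENY rule. A sketch:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all-but-post
  namespace: istio-system
spec:
  action: DENY
  selector:
    matchLabels:
      istio: ingressgateway
  rules:
    - to:
        - operation:
            methods: [&quot;GET&quot;, &quot;HEAD&quot;, &quot;PUT&quot;, &quot;DELETE&quot;, &quot;CONNECT&quot;, &quot;OPTIONS&quot;, &quot;TRACE&quot;, &quot;PATCH&quot;]
</code></pre>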
<p>What kind of load balancing HAproxy ingress controller capable of. Can it do load balancing on a Pod level ? or it does it on a Node level load-balancing.</p> <p>Thanks Yaniv</p>
<p>An ingress provides load balancing, name-based virtual hosting and SSL/TLS termination. It load balances across the service's endpoints, i.e. the pods backing the service, so the balancing happens at the pod level rather than the node level. Here is a sample Ingress manifest:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: sample-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: foo.bar.com http: paths: - path: /foo backend: serviceName: service1   # backed by service1 pods servicePort: 4200 - path: /bar backend: serviceName: service2   # backed by service2 pods servicePort: 8080 </code></pre>
<p>I have a single master cluster with 3 worker nodes. The master node has one network interface of 10Gb capacity and all worker nodes have two interfaces: 10Gb and 40Gb interface. They are all connected via a switch.</p> <p>By default, Kubernetes binds to the default network eth0 which is 10Gb for the worker nodes. How do I specify the 40Gb interface at joining?</p> <p>The kubeadm init command has a <code>--apiserver-advertise-address</code> argument but this is for the apiserver. Is there any equivalent option for the worker nodes so the communciation between master and worker (and between workers) are realised on the 40Gb link?</p> <p>Please note that this is a bare-metal on-prem installation with OSS Kubernetes v1.20.</p>
<p>You can use the <code>--hostname-override</code> flag to override the default kubelet behavior. By default, the kubelet's node name equals the hostname, and its IP address defaults to the address of the interface that holds the default route.</p> <p>For more details please visit <a href="https://github.com/kubernetes/kubernetes/issues/33618" rel="nofollow noreferrer">this issue</a>.</p>
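<p>If the goal is to pin the node address (and therefore master/worker traffic) to the 40Gb interface, the kubelet also has a <code>--node-ip</code> flag; with kubeadm it is usually passed through <code>KUBELET_EXTRA_ARGS</code>. A sketch, where <code>192.168.40.11</code> is a placeholder for the worker's address on the 40Gb interface:</p> <pre><code># on each worker, e.g. in /etc/default/kubelet (Debian) or /etc/sysconfig/kubelet (RPM-based)
KUBELET_EXTRA_ARGS=--node-ip=192.168.40.11

# then restart the kubelet (or set this before running kubeadm join)
systemctl daemon-reload &amp;&amp; systemctl restart kubelet
</code></pre>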
<p>I'm deploying Jenkins with helm charts to GKE and having issue with jobs, mainly the slave doesn't see <code>tcpSlaveAgentListener</code> - it happens whenever i start any job - the master triggers scale of new jenkins-agent but it terminates with error like these</p> <pre><code>SEVERE: Failed to connect to http://jenkins.jenkins.svc.my_website:8080/tcpSlaveAgentListener/: jenkins.jenkins.svc.my_website java.io.IOException: Failed to connect to http://jenkins.jenkins.svc.my_website:8080/tcpSlaveAgentListener/: jenkins.jenkins.svc.my_website at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:206) </code></pre> <p>I replaced my domain(like google.com) to my_website - I'm curious about the jenkins.jenkins.svc.my_domain - is it built in or do I have some duplicates somewhere? that it has so many subdomains ? </p>
<p>You have to configure the TCP port for inbound (JNLP) agents to a fixed value, such as 50000, in the Jenkins master's security configuration.</p> <pre><code>1. Go to Configure Global Security 2. Under the Agents section, select the Fixed option and keep the 50000 value. </code></pre>
<p>I have deployed a CipherSuite on an Istio Ingress Gateway object:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: hello-istio-gateway spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - hosts: - "*" port: name: https-wildcard number: 444 protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key cipherSuites: "[ECDHE-RSA-AES256-GCM-SHA384|ECDHE-RSA-AES128-GCM-SHA256]" </code></pre> <p>But from kubectl I get the error</p> <pre><code>admission webhook "pilot.validation.istio.io" denied the request: error decoding configuration: YAML decoding error: json: cannot unmarshal string into Go value of type []json.RawMessage </code></pre> <p>Any ideas what could be wrong with my manifest?</p> <p>Thanks in advance.</p> <p>Best regards, rforberger</p>
<p>Remove the <code>"</code> chars from the <code>cipherSuites</code>.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: hello-istio-gateway spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - hosts: - "*" port: name: https-wildcard number: 444 protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key cipherSuites: [ECDHE-RSA-AES256-GCM-SHA384|ECDHE-RSA-AES128-GCM-SHA256] </code></pre> <pre><code>$ kubectl apply -f gateway.yaml gateway.networking.istio.io/hello-istio-gateway created </code></pre>
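<p>For reference, <code>cipherSuites</code> is a list field, so as far as I know you can also pass each suite as its own quoted list entry instead of the single pipe-separated entry (which Envoy treats as one equal-preference group). A sketch of just that part of the server block:</p> <pre><code>    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      cipherSuites:
        - &quot;ECDHE-RSA-AES256-GCM-SHA384&quot;
        - &quot;ECDHE-RSA-AES128-GCM-SHA256&quot;
</code></pre>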
<p>I want that Kubernetes recreate my pod with higher resources after a cpu stresstest but it does not recreate the pods, the recomandation has changed Can I somewhere control how often my VerticalPodAutoscaler checks the CPU/RAM Metrics? And is Recreate or Auto the better mode for this scenario?</p> <pre><code>apiVersion: autoscaling.k8s.io/v1beta2 kind: VerticalPodAutoscaler metadata: name: my-vpa spec: targetRef: apiVersion: "extensions/v1beta1" kind: Deployment name: my-auto-deployment updatePolicy: updateMode: "Recreate" </code></pre> <p>Update:</p> <p><a href="https://i.stack.imgur.com/7489Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7489Y.png" alt="enter image description here"></a></p> <p>so the main problem is that the recommendations changed but it does not recreate the pod.</p> <p><a href="https://i.stack.imgur.com/giBJv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/giBJv.png" alt="enter image description here"></a></p> <p>The pod resources did not change/recreate</p>
<p>By default, VPA checks the metrics values at 10-second intervals. VPA requires the pods to be restarted in order to change their allocated resources.</p>
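<p>To see what the autoscaler currently recommends for the targeted pods, and to compare it with what is actually set on the running pod, you can inspect the VPA object directly. The commands below assume the <code>my-vpa</code> name from the manifest in the question; the pod name is a placeholder.</p> <pre><code># Show the VPA object, including the Status/Recommendation section
kubectl describe vpa my-vpa

# Compare with the resources currently set on the running pod
kubectl get pod &lt;pod-name&gt; -o jsonpath='{.spec.containers[*].resources}'
</code></pre>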
<p>I need to route my traffic based on headers using Istio, but this option is deprecated in Istio 1.6. Why are control headers and routing deprecated in Istio?</p>
<p>As mentioned in istio <a href="https://istio.io/latest/docs/tasks/policy-enforcement/control-headers/" rel="nofollow noreferrer">documentation</a></p> <blockquote> <p>The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.</p> <p>Consider using Envoy <a href="https://www.envoyproxy.io/docs/envoy/v1.13.0/intro/arch_overview/security/ext_authz_filter" rel="nofollow noreferrer">ext_authz</a> filter, <a href="https://www.envoyproxy.io/docs/envoy/v1.13.0/configuration/http/http_filters/lua_filter" rel="nofollow noreferrer">lua</a> filter, or write a filter using the <a href="https://github.com/envoyproxy/envoy-wasm/tree/master/test/extensions/filters/http/wasm/test_data" rel="nofollow noreferrer">Envoy-wasm sandbox</a>.</p> </blockquote> <p>Control headers and routing are not deprecated, it's just mixer which was used to do that. There are different ways to do that now, as mentioned above.</p> <p>I'm not sure what exactly you want to do, but take a look at envoy filter and virtual service.</p> <hr /> <h2>Envoy filter</h2> <p>There is envoy filter which add some custom headers to all the outbound responses</p> <pre><code>kind: EnvoyFilter metadata: name: lua-filter namespace: istio-system spec: workloadSelector: labels: istio: ingressgateway configPatches: - applyTo: HTTP_FILTER match: context: GATEWAY listener: filterChain: filter: name: &quot;envoy.http_connection_manager&quot; subFilter: name: &quot;envoy.router&quot; patch: operation: INSERT_BEFORE value: name: envoy.lua typed_config: &quot;@type&quot;: &quot;type.googleapis.com/envoy.config.filter.http.lua.v2.Lua&quot; inlineCode: | function envoy_on_response(response_handle) response_handle:logInfo(&quot; ========= XXXXX ========== &quot;) response_handle:headers():add(&quot;X-User-Header&quot;, &quot;worked&quot;) end </code></pre> <p>And tests from curl</p> <pre><code>$ curl -s -I -X HEAD x.x.x.x/ HTTP/1.1 200 OK server: istio-envoy date: Mon, 06 Jul 2020 08:35:37 GMT content-type: text/html content-length: 13 last-modified: Thu, 02 Jul 2020 12:11:16 GMT etag: &quot;5efdcee4-d&quot; accept-ranges: bytes x-envoy-upstream-service-time: 2 x-user-header: worked </code></pre> <p>Few links worth to check about that:</p> <ul> <li><a href="https://blog.opstree.com/2020/05/27/ip-whitelisting-using-istio-policy-on-kubernetes-microservices/" rel="nofollow noreferrer">https://blog.opstree.com/2020/05/27/ip-whitelisting-using-istio-policy-on-kubernetes-microservices/</a></li> <li><a href="https://github.com/istio/istio/wiki/EnvoyFilter-Samples" rel="nofollow noreferrer">https://github.com/istio/istio/wiki/EnvoyFilter-Samples</a></li> <li><a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/envoy-filter/</a></li> </ul> <hr /> <h2>Virtual Service</h2> <p>Another thing worth to check here would be <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service" rel="nofollow noreferrer">virtual service</a>, you can do header routing based on matches here.</p> <p>Take a look at example from <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest" rel="nofollow noreferrer">istio documentation</a></p> <blockquote> <p>HttpMatchRequest specifies a set of criterion to be met in order for the rule to be applied to the HTTP request. 
For example, the following restricts the rule to match only requests where the URL path starts with /ratings/v2/ and <strong>the request contains a custom end-user header with value jason.</strong></p> </blockquote> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: ratings-route spec: hosts: - ratings.prod.svc.cluster.local http: - match: - headers: end-user: exact: jason uri: prefix: &quot;/ratings/v2/&quot; ignoreUriCase: true route: - destination: host: ratings.prod.svc.cluster.local </code></pre> <p>Additionally there is my older <a href="https://stackoverflow.com/a/59841540/11977760">example</a> with header based routing in virtual service.</p> <hr /> <p>Let me know if you have any more questions.</p>
<p>Auto inject is enabled on namespace and I am attempting to use <code>Auto mTLS</code>. Verified Istio pilot and citadel are running correctly. The Cert secret was properly created and mounted. The istio proxy fails to start with the following logs. The specific error is <code>Invalid path: ./var/run/secrets/istio/root-cert.pem</code></p> <pre><code>kubectl logs ratings-v1-85c656b747-czg8z -c istio-proxy -n bookinfo 2020-04-29T05:39:42.276131Z info FLAG: --binaryPath=&quot;/usr/local/bin/envoy&quot; 2020-04-29T05:39:42.276164Z info FLAG: --concurrency=&quot;2&quot; 2020-04-29T05:39:42.276172Z info FLAG: --configPath=&quot;/etc/istio/proxy&quot; 2020-04-29T05:39:42.276186Z info FLAG: --connectTimeout=&quot;10s&quot; 2020-04-29T05:39:42.276192Z info FLAG: --controlPlaneAuthPolicy=&quot;MUTUAL_TLS&quot; 2020-04-29T05:39:42.276202Z info FLAG: --controlPlaneBootstrap=&quot;true&quot; 2020-04-29T05:39:42.276208Z info FLAG: --customConfigFile=&quot;&quot; 2020-04-29T05:39:42.276224Z info FLAG: --datadogAgentAddress=&quot;&quot; 2020-04-29T05:39:42.276378Z info FLAG: --disableInternalTelemetry=&quot;false&quot; 2020-04-29T05:39:42.276389Z info FLAG: --discoveryAddress=&quot;istiod.istio-system.svc:15012&quot; 2020-04-29T05:39:42.276396Z info FLAG: --dnsRefreshRate=&quot;300s&quot; 2020-04-29T05:39:42.276406Z info FLAG: --domain=&quot;bookinfo.svc.cluster.local&quot; 2020-04-29T05:39:42.276413Z info FLAG: --drainDuration=&quot;45s&quot; 2020-04-29T05:39:42.276419Z info FLAG: --envoyAccessLogService=&quot;&quot; 2020-04-29T05:39:42.276428Z info FLAG: --envoyMetricsService=&quot;&quot; 2020-04-29T05:39:42.276434Z info FLAG: --help=&quot;false&quot; 2020-04-29T05:39:42.276440Z info FLAG: --id=&quot;&quot; 2020-04-29T05:39:42.276445Z info FLAG: --ip=&quot;&quot; 2020-04-29T05:39:42.276455Z info FLAG: --lightstepAccessToken=&quot;&quot; 2020-04-29T05:39:42.276514Z info FLAG: --lightstepAddress=&quot;&quot; 2020-04-29T05:39:42.276525Z info FLAG: --lightstepCacertPath=&quot;&quot; 2020-04-29T05:39:42.276531Z info FLAG: --lightstepSecure=&quot;false&quot; 2020-04-29T05:39:42.276541Z info FLAG: --log_as_json=&quot;false&quot; 2020-04-29T05:39:42.276581Z info FLAG: --log_caller=&quot;&quot; 2020-04-29T05:39:42.276588Z info FLAG: --log_output_level=&quot;default:info&quot; 2020-04-29T05:39:42.276593Z info FLAG: --log_rotate=&quot;&quot; 2020-04-29T05:39:42.276604Z info FLAG: --log_rotate_max_age=&quot;30&quot; 2020-04-29T05:39:42.276610Z info FLAG: --log_rotate_max_backups=&quot;1000&quot; 2020-04-29T05:39:42.276616Z info FLAG: --log_rotate_max_size=&quot;104857600&quot; 2020-04-29T05:39:42.276622Z info FLAG: --log_stacktrace_level=&quot;default:none&quot; 2020-04-29T05:39:42.276640Z info FLAG: --log_target=&quot;[stdout]&quot; 2020-04-29T05:39:42.276646Z info FLAG: --mixerIdentity=&quot;&quot; 2020-04-29T05:39:42.276652Z info FLAG: --outlierLogPath=&quot;&quot; 2020-04-29T05:39:42.276667Z info FLAG: --parentShutdownDuration=&quot;1m0s&quot; 2020-04-29T05:39:42.276704Z info FLAG: --pilotIdentity=&quot;&quot; 2020-04-29T05:39:42.276717Z info FLAG: --proxyAdminPort=&quot;15000&quot; 2020-04-29T05:39:42.276723Z info FLAG: --proxyComponentLogLevel=&quot;misc:error&quot; 2020-04-29T05:39:42.276734Z info FLAG: --proxyLogLevel=&quot;warning&quot; 2020-04-29T05:39:42.276740Z info FLAG: --serviceCluster=&quot;ratings.bookinfo&quot; 2020-04-29T05:39:42.276746Z info FLAG: --serviceregistry=&quot;Kubernetes&quot; 2020-04-29T05:39:42.276752Z info FLAG: --statsdUdpAddress=&quot;&quot; 2020-04-29T05:39:42.276762Z info 
FLAG: --statusPort=&quot;15020&quot; 2020-04-29T05:39:42.276767Z info FLAG: --stsPort=&quot;0&quot; 2020-04-29T05:39:42.276773Z info FLAG: --templateFile=&quot;&quot; 2020-04-29T05:39:42.276784Z info FLAG: --tokenManagerPlugin=&quot;GoogleTokenExchange&quot; 2020-04-29T05:39:42.276790Z info FLAG: --trust-domain=&quot;cluster.local&quot; 2020-04-29T05:39:42.276797Z info FLAG: --zipkinAddress=&quot;zipkin.istio-system:9411&quot; 2020-04-29T05:39:42.276837Z info Version 1.5.1-9d07e185b0dd50e6fb1418caa4b4d879788807e3-Clean 2020-04-29T05:39:42.277060Z info Obtained private IP [10.244.3.53 fe80::a81e:5bff:fe52:5e73] 2020-04-29T05:39:42.277157Z info Proxy role: &amp;model.Proxy{ClusterID:&quot;&quot;, Type:&quot;sidecar&quot;, IPAddresses:[]string{&quot;10.244.3.53&quot;, &quot;10.244.3.53&quot;, &quot;fe80::a81e:5bff:fe52:5e73&quot;}, ID:&quot;ratings-v1-85c656b747-czg8z.bookinfo&quot;, Locality:(*envoy_api_v2_core.Locality)(nil), DNSDoma in:&quot;bookinfo.svc.cluster.local&quot;, ConfigNamespace:&quot;&quot;, Metadata:(*model.NodeMetadata)(nil), SidecarScope:(*model.SidecarScope)(nil), MergedGateway:(*model.MergedGateway)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:labels.Collection(nil), IstioVersi on:(*model.IstioVersion)(nil)} 2020-04-29T05:39:42.277183Z info PilotSAN []string{&quot;spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account&quot;} 2020-04-29T05:39:42.277199Z info MixerSAN []string{&quot;spiffe://cluster.local/ns/istio-system/sa/istio-mixer-service-account&quot;} 2020-04-29T05:39:42.278153Z info Effective config: binaryPath: /usr/local/bin/envoy concurrency: 2 configPath: /etc/istio/proxy connectTimeout: 10s controlPlaneAuthPolicy: MUTUAL_TLS discoveryAddress: istiod.istio-system.svc:15012 drainDuration: 45s envoyAccessLogService: {} envoyMetricsService: {} parentShutdownDuration: 60s proxyAdminPort: 15000 serviceCluster: ratings.bookinfo statNameLength: 189 tracing: zipkin: address: zipkin.istio-system:9411 2020-04-29T05:39:42.278214Z info JWT policy is first-party-jwt 2020-04-29T05:39:42.280069Z info Istio Agent uses default istiod CA 2020-04-29T05:39:42.280086Z info istiod uses self-issued certificate 2020-04-29T05:39:42.280207Z warn Failed to load root cert, assume IP secure network: open var/run/secrets/istio/root-cert.pem: no such file or directory 2020-04-29T05:39:42.384355Z info parsed scheme: &quot;&quot; 2020-04-29T05:39:42.384408Z info scheme &quot;&quot; not registered, fallback to default scheme 2020-04-29T05:39:42.384540Z info ccResolverWrapper: sending update to cc: {[{istiod.istio-system.svc:15010 &lt;nil&gt; 0 &lt;nil&gt;}] &lt;nil&gt; &lt;nil&gt;} 2020-04-29T05:39:42.385082Z info ClientConn switching balancer to &quot;pick_first&quot; 2020-04-29T05:39:42.391739Z info sds SDS gRPC server for workload UDS starts, listening on &quot;/etc/istio/proxy/SDS&quot; 2020-04-29T05:39:42.392005Z info PilotSAN []string{&quot;spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account&quot;, &quot;istiod.istio-system.svc&quot;} 2020-04-29T05:39:42.392173Z info Starting proxy agent 2020-04-29T05:39:42.392458Z info pickfirstBalancer: HandleSubConnStateChange: 0xc000a6c020, {CONNECTING &lt;nil&gt;} 2020-04-29T05:39:42.393821Z info sds Start SDS grpc server 2020-04-29T05:39:42.393911Z info Opening status port 15020 2020-04-29T05:39:42.394508Z info Received new config, creating new Envoy epoch 0 2020-04-29T05:39:42.394626Z info Epoch 0 starting 2020-04-29T05:39:42.398684Z warn failed to read pod labels: open ./etc/istio/pod/labels: 
no such file or directory 2020-04-29T05:39:42.404881Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster ratings.bookinfo --service-node sidecar~10.244.3.53~ratings-v1-85c656b747-czg8z.bookinfo~bookinfo .svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --log-format [Envoy (Epoch 0)] [%Y-%m-%d %T.%e][%t][%l][%n] %v -l warning --component-log-level misc:error --concurrency 2] [Envoy (Epoch 0)] [2020-04-29 05:39:42.450][15][critical][main] [external/envoy/source/server/server.cc:96] error initializing configuration '/etc/istio/proxy/envoy-rev0.json': Invalid path: ./var/run/secrets/istio/root-cert.pem Invalid path: ./var/run/secrets/istio/root-cert.pem 2020-04-29T05:39:42.453430Z error Epoch 0 exited with error: exit status 1 2020-04-29T05:39:42.453665Z info No more active epochs, terminating` </code></pre> <p>Secret</p> <pre><code>kubectl describe secret istio.bookinfo-ratings -n bookinfo Name: istio.bookinfo-ratings Namespace: bookinfo Labels: &lt;none&gt; Annotations: istio.io/service-account.name: bookinfo-ratings Type: istio.io/key-and-cert Data ==== cert-chain.pem: 1159 bytes key.pem: 1679 bytes root-cert.pem: 1054 bytes &quot;name&quot;: &quot;istio-certs&quot;, &quot;secret&quot;: { &quot;defaultMode&quot;: 420, &quot;optional&quot;: true, &quot;secretName&quot;: &quot;istio.bookinfo-ratings&quot; </code></pre> <p>Pod volumes</p> <pre><code>Volumes: bookinfo-ratings-token-fs642: Type: Secret (a volume populated by a Secret) SecretName: bookinfo-ratings-token-fs642 Optional: false istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory SizeLimit: &lt;unset&gt; istio-certs: Type: Secret (a volume populated by a Secret) SecretName: istio.bookinfo-ratings Optional: true </code></pre> <p>UPDATE:</p> <p>When looking at the istio injector config map and comparing with another, there is an if condition missing. Any one know how to get this condition back?</p> <pre><code> {{- if eq .Values.global.pilotCertProvider &quot;istiod&quot; }} - mountPath: /var/run/secrets/istio name: istiod-ca-cert {{- end }} </code></pre>
<p>There are number of reasons that cause the error You have. It would be best to check if the certificate is actually there and if its valid.</p> <p>According to istio <a href="https://istio.io/pt-br/docs/tasks/security/authentication/mutual-tls/#verify-keys-and-certificates-installation" rel="nofollow noreferrer">documentation</a> You can verify keys and certificates:</p> <blockquote> <h2>Verify keys and certificates installation<a href="https://istio.io/pt-br/docs/tasks/security/authentication/mutual-tls/#verify-keys-and-certificates-installation" rel="nofollow noreferrer"></a></h2> <p>Istio automatically installs necessary keys and certificates for mutual TLS authentication in all sidecar containers. Run command below to confirm key and certificate files exist under <code>/etc/certs</code>:</p> </blockquote> <pre><code>$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- ls /etc/certs cert-chain.pem key.pem root-cert.pem </code></pre> <blockquote> <p><code>cert-chain.pem</code> is Envoy’s cert that needs to be presented to the other side. <code>key.pem</code> is Envoy’s private key paired with Envoy’s cert in <code>cert-chain.pem</code>. <code>root-cert.pem</code> is the root cert to verify the peer’s cert. In this example, we only have one Citadel in a cluster, so all Envoys have the same <code>root-cert.pem</code>.</p> <p>Use the <code>openssl</code> tool to check if certificate is valid (current time should be in between <code>Not Before</code> and <code>Not After</code>)</p> </blockquote> <pre><code>$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout | grep Validity -A 2 Validity Not Before: May 17 23:02:11 2018 GMT Not After : Aug 15 23:02:11 2018 GMT </code></pre> <blockquote> <p>You can also check the <em>identity</em> of the client certificate:</p> </blockquote> <pre><code>$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout | grep 'Subject Alternative Name' -A 1 X509v3 Subject Alternative Name: URI:spiffe://cluster.local/ns/default/sa/default </code></pre> <blockquote> <p>Please check <a href="https://istio.io/pt-br/docs/concepts/security/#istio-identity" rel="nofollow noreferrer">Istio identity</a> for more information about <em>service identity</em> in Istio.</p> </blockquote> <p>Hope it helps.</p>
<p>I want to create a new admin user in Kubernetes. I did all the steps for creating and authorizing the certificates, but when I try to access the API I receive an unauthorized error. These are the steps I used to create the admin user:</p> <p>1/ <code>openssl genrsa -out user.key 2048</code></p> <p>2/ <code>openssl req -new -key user.key -out user.csr -subj "/CN=kube-user"</code></p> <p>3/</p> <pre><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: certificates.k8s.io/v1beta1 kind: CertificateSigningRequest metadata: name: user spec: request: $(cat user.csr | base64 | tr -d '\n') usages: - digital signature - key encipherment - server auth EOF </code></pre> <p>4/ <code>k certificate approve user</code></p> <p>5/ <code>k get csr user -o jsonpath='{.status.certificate}' | base64 --decode &gt; user.crt</code></p> <p>6/ <code>kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - &gt; ca.crt</code></p> <p>7/ </p> <pre><code>curl https://$Kube-Master-Ip:6443/api/v1 \ --key user.key \ --cert user.crt \ --cacert ca.crt </code></pre> <p>8/ and this is what I receive:</p> <pre><code>{ "kind":"Status", "apiVersion":"v1", "metadata":{}, "status":"Failure", "message":"Unauthorized", "reason":"Unauthorized", "code":401 } </code></pre> <blockquote> <blockquote> <p>document source: <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/</a></p> </blockquote> </blockquote>
<p>The command in step 2 is wrong. The admin user should be part of the <code>system:masters</code> group, so the organization (O) has to be included in the certificate subject:</p> <pre><code>openssl req -new -key user.key -out user.csr -subj "/CN=kube-user/O=system:masters" </code></pre>
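<p>After re-running the signing flow with the new CSR and downloading the signed certificate, you can confirm that the group actually made it into the certificate before retrying the API call:</p> <pre><code># The subject should show CN = kube-user and O = system:masters
openssl x509 -in user.crt -noout -subject
</code></pre>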
<p>I'm still experimenting with Istio in a dev cluster, along with a couple of other people. We have a sample virtualservice, deployment, and destinationrule, and requests to the specified uri are going to the pod associated with the deployment. This is working fine.</p> <p>I am attempting a variation of this that goes to an alternate deployment if a particular header and value are present. If the header is not set, or the specified header value is not present, it will go to the original deployment, otherwise to the alternate deployment. I eventually intend for it to check for several different values of the specified header, each going to different deployments.</p> <p>I've created a deployment and destination rule that are copies of the original, with consistent variations. I attempted to modify the virtualservice with this alternate routing. Thus far, it isn't working properly. I determine which deployment a request goes to by tailing the container log of the container associated with each deployment. When I sent a request with the specified header and value, it does go to the alternate deployment. However, when I send the request without the specified header, or without the matching value, it ALSO goes to the alternate deployment. In fact, I can't get it to reach the main deployment at all.</p> <p>Note that I understand that another way to do this is to have one virtualservice for the "default" route, and an additional virtualservice for each alternate route, specifying a different header value. I've seen something basically like that working. However, that seems like a lot of duplication to get something that should be simpler to set up in a single VirtualService.</p> <p>The following is the current state of the virtualservice, with some elisions:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: annotations: {} name: fooms-vs-ingress namespace: com-example spec: gateways: - ingress-gateway hosts: - '*' http: - match: - uri: prefix: /msapi/foo - headers: ctx-debug-route-fooms: exact: myuid-1 route: - destination: host: fooms.com-example.svc.cluster.local port: number: 80 subset: myuid-1 - route: - destination: host: fooms.com-example.svc.cluster.local port: number: 80 subset: blue </code></pre> <p>I could show the deployments and destinationrules, but I don't know if that will be helpful.</p> <p><strong>Update</strong>:</p> <p>Since I wrote this, I discovered that in order to make two conditions AND in a route match, I have to have both conditions in a single match rule. I'm still getting used to how YAML works. I'm going to provide here an updated version of the virtualservice, along with the gateway, destination rule, and much of the deployment. There's a lot of stuff in the deployment that probably isn't helpful.</p> <p>When I sent a request to the service from Postman, with or without the routing header, I get a 503 back. Before I made these changes to check for the routing header, it was properly routing requests to the "blue" instance (I am tailing the logs for both pods). When I first tried making these changes, I inadvertently defined two match blocks, one with the uri condition, and one with the header match condition. 
When I did that, all of the requests were going to the alternate pod.</p> <p>Here are elided versions of the objects that might be relevant, with some transient properties removed.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: annotations: {} name: fooms-vs-ingress namespace: com-example spec: gateways: - comp-ingress-gateway hosts: - '*' http: - match: - headers: compctx-debug-route-fooms: exact: myuid-1 name: match-myuid-1 uri: prefix: /msapi/foo route: - destination: host: fooms.com-example.svc.cluster.local port: number: 80 subset: myuid-1 - name: default route: - destination: host: fooms.com-example.svc.cluster.local port: number: 80 subset: blue apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: annotations: {} name: comp-ingress-gateway namespace: com-example spec: selector: istio: ingressgateway servers: - hosts: - '*' port: name: http number: 80 protocol: HTTP - hosts: - '*' port: name: https number: 443 protocol: HTTPS tls: mode: SIMPLE privateKey: /etc/istio/ingressgateway-certs/tls.key serverCertificate: /etc/istio/ingressgateway-certs/tls.crt apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" labels: app: FooMS role: blue routeoffer: DEFAULT seed: COMPv2.2.0 version: 2.2.0-myuid-1 name: fooms-myuid-1 namespace: com-example spec: replicas: 1 revisionHistoryLimit: 2 selector: matchLabels: app: FooMS role: blue strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: annotations: sidecar.istio.io/inject: "true" creationTimestamp: null labels: app: FooMS role: blue routeoffer: DEFAULT seed: COMPv2.2.0 version: 2.2.0-myuid-1 spec: containers: - env: - name: SERVICE_NAME value: fooms - name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName envFrom: - configMapRef: name: global-config - secretRef: name: global-secrets image: dockercentral.it.... imagePullPolicy: Always name: fooms ports: - containerPort: 8080 name: http protocol: TCP - image: ... imagePullPolicy: IfNotPresent name: info dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: annotations: {} name: fooms-destination-myuid-1 namespace: com-example spec: host: fooms.com-example.svc.cluster.local subsets: - labels: version: 2.2.0-myuid-1 name: myuid-1 trafficPolicy: tls: mode: ISTIO_MUTUAL </code></pre>
<p>Since I don't have all informations from this command <code>kubectl get pods,ep,svc -o wide</code> i'm not sure that's destination rule error because i don't know everything about your pods,ep,svc and your destination rule applies to same deployment with only 1 replica.</p> <p>Probably the problem there is <a href="https://istio.io/docs/reference/config/networking/destination-rule/" rel="nofollow noreferrer">destination rule</a> with only 1 subset </p> <p>Based on your <a href="https://istio.io/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">virtual service</a> i think that's how your destination rule should look</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: fooms-destination-myuid-1 namespace: com-example spec: host: fooms.com-example.svc.cluster.local subsets: - labels: version: 2.2.0-myuid-1 name: myuid-1 subsets: - labels: role: blue name: blue trafficPolicy: tls: mode: ISTIO_MUTUAL </code></pre> <p>I made some example, everything needed to make this below</p> <p><strong>Kubernetes version</strong> 1.13.11-gke.14</p> <p><strong>Istio version</strong> 1.4.1</p> <pre><code>kubectl label namespace default istio-injection=enabled </code></pre> <p><strong>Deployment</strong> 1 </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx1 spec: selector: matchLabels: run: nginx1 replicas: 1 template: metadata: labels: run: nginx1 app: frontend spec: containers: - name: nginx1 image: nginx ports: - containerPort: 80 lifecycle: postStart: exec: command: ["/bin/sh", "-c", "echo Hello nginx1 &gt; /usr/share/nginx/html/index.html"] </code></pre> <p><strong>Deployment</strong> 2</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx2 spec: selector: matchLabels: run: nginx2 replicas: 1 template: metadata: labels: run: nginx2 app: frontend spec: containers: - name: nginx2 image: nginx ports: - containerPort: 80 lifecycle: postStart: exec: command: ["/bin/sh", "-c", "echo Hello nginx2 &gt; /usr/share/nginx/html/index.html"] </code></pre> <p><strong>Service</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: frontend spec: ports: - port: 80 protocol: TCP selector: app: frontend </code></pre> <p><strong>Gateway</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: comp-ingress-gateway namespace: default spec: selector: istio: ingressgateway servers: - hosts: - '*' port: name: http number: 80 protocol: HTTP - hosts: - '*' port: name: https number: 443 protocol: HTTPS tls: mode: SIMPLE privateKey: /etc/istio/ingressgateway-certs/tls.key serverCertificate: /etc/istio/ingressgateway-certs/tls.crt </code></pre> <p><strong>Virtual Service</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: nginxvirt spec: gateways: - comp-ingress-gateway #outside cluster - mesh #inside cluster hosts: - nginx.default.svc.cluster.local #inside cluster - nginx.com #outside cluster http: - name: match-myuid match: - uri: prefix: /msapi headers: compctx: exact: myuid rewrite: uri: / route: - destination: host: nginx.default.svc.cluster.local port: number: 80 subset: v1 - name: default route: - destination: host: nginx.default.svc.cluster.local port: number: 80 subset: v2 </code></pre> <p><strong>Destination Rule</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: nginxdest spec: host: nginx.default.svc.cluster.local subsets: - name: v1 labels: 
run: nginx1 - name: v2 labels: run: nginx2 trafficPolicy: tls: mode: ISTIO_MUTUAL </code></pre> <p><strong>Pod for testing</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: ubu1 spec: containers: - name: ubu1 image: ubuntu command: ["/bin/sh"] args: ["-c", "sleep 3000"] </code></pre> <p>And results from curl:</p> <p><strong>Outside</strong>:</p> <pre><code>curl -H "host: nginx.com" -H "compctx: myuid" ingress_gateway_ip/msapi Hello nginx1 curl -H "host: nginx.com" -H "compctx: myuid" ingress_gateway_ip/msapi Hello nginx1 curl -H "host: nginx.com" -H "compctx: myuid" ingress_gateway_ip/msapi Hello nginx1 curl -H "host: nginx.com" ingress_gateway_ip Hello nginx2 curl -H "host: nginx.com" ingress_gateway_ip Hello nginx2 curl -H "host: nginx.com" ingress_gateway_ip Hello nginx2 </code></pre> <p><strong>Inside</strong>:</p> <pre><code>kubectl exec -ti ubu1 -- /bin/bash root@ubu1:/# curl -H "compctx: myuid " nginx/msapi Hello nginx1 root@ubu1:/# curl -H "compctx: myuid " nginx/msapi Hello nginx1 root@ubu1:/# curl -H "compctx: myuid " nginx/msapi Hello nginx1 </code></pre>
<p>In the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="nofollow noreferrer">google documentation</a>, it says that:</p> <blockquote> <p>Vertical Pod autoscaling frees you from having to think about what values to specify for a container’s CPU <strong>requests and limits</strong> and memory <strong>requests and limits</strong>. The autoscaler can recommend values for CPU and memory requests and limits, or it can automatically update the values</p> </blockquote> <p>However, in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md#recommendation-model" rel="nofollow noreferrer">open source vertical pod autoscaler documentation</a> there are two seemingly contradictory statements:</p> <blockquote> <p>VPA is capable of setting container resources (CPU &amp; memory request/limit) at Pod submission time.</p> <p>VPA only controls the resource request of containers. It sets the limit to infinity. The request is calculated based on analysis of the current and previous runs</p> </blockquote> <p>I’m confused about which one is actually correct, and if there is a capability to get limit recommendations, how can I add that to my VPA? So far I have only managed to get request recommendations.</p>
<p>VPA is capable of setting the limit when you set <code>controlledValues</code> to the <code>RequestsAndLimits</code> option. However, it does not recommend what the limit should be. With this setting, requests are calculated based on actual usage, while limits are calculated based on the current pod's request-to-limit ratio. This means that if you start a Pod that has a 2 CPU request and a limit set to 10 CPU, VPA will always keep that 1:5 ratio, i.e. the second quantity (the limit) will always be 5 times as large as the first (the request).</p> <p>You also have to understand that <code>limits</code> are not used by the scheduler; they are only used by the kubelet to throttle or kill a pod if it ever exceeds them.</p> <p>As for your VPA not working correctly, we would need to see a config example to provide any more advice.</p>
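<p>A minimal sketch of the relevant fields, assuming a Deployment named <code>my-app</code> (a placeholder) and a VPA release recent enough to support <code>controlledValues</code> (it is part of the <code>autoscaling.k8s.io/v1</code> API; older installs may only accept <code>v1beta2</code>):</p> <pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder deployment name
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: "*"      # apply to every container in the pod
      controlledValues: RequestsAndLimits
</code></pre>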
<p>I have been trying to set up a Kubernetes cluster.</p> <p>I have two Ubuntu droplets on DigitalOcean that I am using to do this.</p> <p>I have set up the master and joined the slave <a href="https://i.stack.imgur.com/p6elw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p6elw.png" alt="enter image description here"></a></p> <p>I am now trying to create a secret for my Docker credentials so that I can pull private images on the node. However, when I run this command (or any other kubectl command, e.g. kubectl get nodes) I get this error: The connection to the server localhost:8080 was refused - did you specify the right host or port?</p> <p>Everything is set up, though, as kubectl on its own shows its help output.</p> <p>Does anyone know why I might be getting this issue and how I can solve it?</p> <p>Sorry, I have just started with Kubernetes, but I am trying to learn.</p> <p>I understand that you have to set up the cluster with a user that is not root on the master (which I have done) - is it OK to use root on the slaves?</p> <p>Thanks</p>
<p><code>kubectl</code> is used to connect to and run commands against the Kubernetes API plane. There is no need to have it configured on worker (slave) nodes.</p> <p>However, if you really need to make kubectl work from a worker node, you would need to do the following:</p> <hr> <p>Create a <code>.kube</code> directory on the worker node:</p> <pre><code>mkdir -p $HOME/.kube </code></pre> <p>Copy the configuration file <code>/etc/kubernetes/admin.conf</code> from the master node to <code>$HOME/.kube/config</code> on the worker node.</p> <p>Then run the following command on the worker node:</p> <pre><code>sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <hr> <p><strong>Update:</strong></p> <p>To address your question in the comment:</p> <p>That is not how Kubernetes nodes work.</p> <p>From the <a href="https://kubernetes.io/docs/concepts/#kubernetes-nodes" rel="nofollow noreferrer">kubernetes</a> documentation about Kubernetes Nodes:</p> <blockquote> <p>The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.</p> </blockquote> <p>This means that images pulled from a private repository are "handled" through cluster configuration created via the API server and made available to all nodes. There is no need to configure anything manually on the worker (slave) nodes.</p> <p>Additional information about the <a href="https://kubernetes.io/docs/concepts/#kubernetes-control-plane" rel="nofollow noreferrer">Kubernetes Control Plane</a>.</p> <p>Hope this helps.</p>
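<p>Since the original goal was pulling private images, note that the registry secret itself is created once through the API (from any machine where kubectl works, normally the master) and then referenced from the pod spec; the registry address and credentials below are placeholders:</p> <pre><code># Create the registry credential secret
kubectl create secret docker-registry regcred \
  --docker-server=&lt;your-registry-server&gt; \
  --docker-username=&lt;your-username&gt; \
  --docker-password=&lt;your-password&gt;
</code></pre> <p>Then reference it from the workload so the kubelet on whichever node runs the pod can pull the image:</p> <pre><code>spec:
  containers:
  - name: app
    image: &lt;your-registry-server&gt;/app:latest   # placeholder image
  imagePullSecrets:
  - name: regcred
</code></pre>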
<p>How do I do chargeback for shared Kubernetes clusters on Azure? Say there are 10 departments/customers using a cluster split by namespaces - how do I bill them?</p>
<p>You might want to have a look at <a href="https://github.com/kubecost" rel="nofollow noreferrer">kubecost</a>.</p> <blockquote> <p>Kubecost models give teams visibility into current and historical Kubernetes spend and resource allocation. These models provide cost transparency in Kubernetes environments that support multiple applications, teams, departments, etc.</p> </blockquote> <p><code>Kubecost</code> enables you to get:</p> <ul> <li><p>Real-time cost allocations by all key k8s concepts,</p> </li> <li><p>Cost allocation by configurable labels to measure spend by owner, team, department, product, etc.</p> </li> <li><p>Dynamic asset pricing enabled by integrations with AWS and GCP billing APIs, estimates available for Azure</p> </li> <li><p>Cost allocation metrics for CPU, GPU, memory, and storage</p> </li> <li><p>Out of cluster cloud costs tied back to owner</p> </li> <li><p>You can also export billing data back to Prometheus for further analysis</p> </li> </ul> <p>Also check <a href="https://karlstoney.com/2018/07/07/managing-your-costs-on-kubernetes/" rel="nofollow noreferrer">this</a> article; it shows a way of tracking costs using Grafana.</p>
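<p>If you want to try it out, Kubecost is typically installed into its own namespace with Helm. The chart repository URL and resource names below are taken from the Kubecost installation docs at the time of writing and may change, so verify them against the current guide:</p> <pre><code># Add the Kubecost chart repo and install it into its own namespace
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm install kubecost kubecost/cost-analyzer --namespace kubecost --create-namespace

# Port-forward the UI to browse cost allocation per namespace/team
kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090
</code></pre>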
<p>The documentation of VPA states that HPA and VPA should not be used together. They can only be used together when you want scaling on custom metrics.</p> <p>I have scaling enabled on CPU.</p> <p>My question is: can I have HPA enabled for one deployment (let's say A) and VPA enabled for another deployment (let's say B), or will this also lead to errors?</p>
<p>Using them both at the same time is not suggested, because if they both detect that more memory is needed they might try to resolve the same problem at the same time, which will lead to wrongly allocated resources. </p> <p>This is not something that can be specified at the application deployment level, but you can specify which deployment <code>HPA</code> and <code>VPA</code> should scale using <code>targetRef</code> (<code>scaleTargetRef</code> for HPA).</p> <p>So for the deployment <code>app1</code> you can specify <code>VPA</code>: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling.k8s.io/v1beta2 kind: VerticalPodAutoscaler metadata: name: app1-vpa spec: targetRef: apiVersion: apps/v1 kind: Deployment name: app1 </code></pre> <p>And for <code>app2</code> you can specify to use <code>HPA</code>: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: app2-hpa spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: app2 </code></pre> <p>If you need to use HPA and VPA together on the same deployment, you just have to make sure that they base their behavior on different metrics. This way you prevent them from reacting to the same event. To summarize, VPA and HPA can be used together if the HPA config doesn't use CPU or memory to determine its targets, as stated in the documentation: </p> <blockquote> <p>"Vertical Pod Autoscaler <strong>should not be used with the</strong> <strong>Horizontal Pod Autoscaler</strong> <strong>(HPA) on CPU or memory</strong> at this moment"</p> </blockquote>
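<p>For completeness, if you do end up combining them on one deployment, the HPA side should scale on something other than CPU/memory. A minimal sketch using a pods-type custom metric; the metric name <code>packets-per-second</code> is only an example and assumes you have a metrics adapter exposing it:</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app2-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app2
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: packets-per-second   # example custom metric served by a metrics adapter
      target:
        type: AverageValue
        averageValue: 1k
</code></pre>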
<p>The backend uses Springboot to provide a WebSocket connection and sets the maximum idle time to 3 minutes. The program runs well in local. After 3 minutes of idle, the connection will be disconnected as scheduled. It can also be accessed normally by node port service when deployed in Kubernetes.</p> <p>But when I inject sidecar into this backend pod, there is a problem. The connection doesn't work properly, often breaks, and is completely irregular. Sometimes when the frontend and backend are sending messages, it is interrupted suddenly. Sometimes it is interrupted after about 2 minutes idle. And sometimes the connection can only last for tens of seconds.</p> <p>When the connection is interrupted, the backend will throw java.io.EOFException, and the frontend will receive the on close event.</p> <p>This phenomenon will occur as long as a sidecar is injected into the pod(Even if I use node port service to access the pod). Also, I did a test, I used Nginx to transfer the request to port 31380 of istio-ingressgateway, and configured the gateway vs and dr as follows. But the result is the same.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: run: msapi version: product name: msapi namespace: test spec: replicas: 1 selector: matchLabels: run: msapi template: metadata: labels: run: msapi spec: containers: - env: - name: JAVA_OPTS valueFrom: configMapKeyRef: key: jvm.options name: test-config image: test/msapi:1.0.0 imagePullPolicy: Always name: msapi ports: - containerPort: 9000 protocol: TCP --- apiVersion: v1 kind: Service metadata: name: msapi namespace: test spec: ports: - port: 80 protocol: TCP targetPort: 9000 selector: run: msapi type: ClusterIP --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ingress-test namespace: test spec: selector: istio: ingressgateway servers: - hosts: - '*.ingress.xxx.com' port: name: http number: 80 protocol: HTTP --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: msapi namespace: test spec: gateways: - ingress-test hosts: - msapi.ingress.xxx.com http: - match: - headers: cookie: regex: ^(.*?; ?)?(version=pre)(;.*)?$ route: - destination: host: msapi subset: pre - route: - destination: host: msapi subset: product --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: msapi namespace: test spec: host: msapi subsets: - labels: version: product name: product - labels: version: pre name: pre </code></pre>
<p>The problem here was <code>websocketUpgrade</code> - one line, but an important one.</p> <p>As I could find on github <a href="https://github.com/kyma-project/kyma/issues/4275" rel="nofollow noreferrer">here</a>: </p> <blockquote> <p>Support for websockets is enabled by default in Istio from version 1.0: <a href="https://godoc.org/istio.io/api/networking/v1alpha3#HTTPRoute" rel="nofollow noreferrer">https://godoc.org/istio.io/api/networking/v1alpha3#HTTPRoute</a></p> </blockquote> <p>And the OP provided another one <a href="https://github.com/istio/istio/issues/14740" rel="nofollow noreferrer">here</a>: </p> <blockquote> <p>websocketUpgrade was removed some time ago, and is no longer needed.</p> </blockquote> <p>So it should work without adding it to the virtual service.</p> <p><strong>HOWEVER</strong></p> <p>As shown in this <a href="https://github.com/istio/istio/issues/9152#issuecomment-427564282" rel="nofollow noreferrer">github issue</a> and confirmed by the OP:</p> <blockquote> <p>I found that only need to add conf of "websocketUpgrade: true".</p> </blockquote> <p>So if you have the same issue, you should try adding <code>websocketUpgrade</code> to your virtual service YAML.</p> <p>If that doesn't work, there is another idea on <a href="https://github.com/istio/istio/issues/11579" rel="nofollow noreferrer">github</a> for how to fix this.</p>
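<p>For reference, this is roughly where the flag goes; the sketch is based on the manifest from the question and assumes an Istio version whose <code>HTTPRoute</code> API still accepts <code>websocketUpgrade</code> (newer releases upgrade websocket connections without it):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: msapi
  namespace: test
spec:
  gateways:
  - ingress-test
  hosts:
  - msapi.ingress.xxx.com
  http:
  - websocketUpgrade: true   # only needed on versions that still require the flag
    route:
    - destination:
        host: msapi
        subset: product
</code></pre>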
<p>I will probably ask a stupid question (after reading the official Kubernetes documentation), but here is my question:</p> <p>First, I set up a kubeadm cluster with 3 stacked control plane nodes &amp; etcd + 1 load balancer (the etcd members and control plane nodes are co-located) + n workers.</p> <pre><code># kubectl get nodes NAME STATUS ROLES AGE VERSION pp-tmp-test20.xxx Ready master 17h v1.15.1 pp-tmp-test21.xxx Ready master 15h v1.15.2 pp-tmp-test22.xxx Ready master 15h v1.15.2 pp-tmp-test23.xxx Ready worker 14h v1.15.2 pp-tmp-test24.xxx Ready worker 15h v1.15.2 </code></pre> <p>Is there a way to migrate from this topology to a "kubeadm cluster with external etcd cluster" without deleting my current cluster? That is, migrate to 3 stacked control plane nodes + 3 etcd nodes + 1 load balancer + n workers.</p> <p>Or do I have to set up a new cluster?</p> <p>I think I found my answer in the official doc (<a href="https://k0s.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://k0s.io/docs/setup/independent/high-availability/</a>):</p> <p>"<strong>Before proceeding, you should carefully consider which approach best meets the needs of your applications and environment</strong>. This comparison topic outlines the advantages and disadvantages of each topology."</p> <p>"Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd <strong>with the exception that you should setup etcd first</strong>, and you should pass the etcd information in the kubeadm config file"</p> <p>Thank you very much for the help</p> <p>Best regards Vincent</p>
<p>Yes, you can set up the etcd nodes on external systems and update the parameter below in the kube-apiserver.yaml file. The manifest file is available in the /etc/kubernetes/manifests directory on the control plane nodes.</p> <pre><code> --etcd-servers=https://&lt;IP address of new etcd server&gt;:2379 </code></pre>
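<p>For reference, when generating the control plane manifests with kubeadm the same information is expressed in the kubeadm config file. A minimal sketch, using placeholder etcd member addresses and the usual client certificate paths:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://ETCD_0_IP:2379    # placeholder etcd member addresses
    - https://ETCD_1_IP:2379
    - https://ETCD_2_IP:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
</code></pre>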
<p>I am trying to put together a setup like this:</p> <ul> <li>example.com <em>#front end</em></li> <li>example.com/api</li> <li>example.com/authentication</li> </ul> <p>Obviously each of them is a separate application and should be able to handle its own sub-paths, e.g. <code>http://example.com/api/v1/test?v=ok</code></p> <p>Right now I have a yaml like this:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - http: paths: - path: / backend: serviceName: frontend-service servicePort: 80 - path: /api(/|$)(.*) backend: serviceName: backend-service servicePort: 80 - path: /authentication(/|$)(.*) backend: serviceName: identityserver-service servicePort: 80 </code></pre> <p>/api and /authentication behave the way I want, but the sub-paths for the front end are not working. So for example <code> http://example.com/css/bootstrap.css</code> is not found.</p> <p>So far I've tried to</p> <p>1- add <code>(/|$)(.*)</code> at the end of the front end path</p> <p>2- add a copy of the front end path with the same backend and port and a path of <code>/.*</code></p> <p>Neither of them solved the problem.</p> <p>Here is the describe result:</p> <pre><code>Name: test-ingress Namespace: default Address: 127.0.0.1 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- * / frontend-service:80 (10.1.80.38:80,10.1.80.43:80,10.1.80.50:80) /api(/|$)(.*) backend-service:80 (10.1.80.39:80,10.1.80.42:80,10.1.80.47:80) /authentication(/|$)(.*) identityserver-service:80 (10.1.80.40:80,10.1.80.41:80,10.1.80.45:80) Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal UPDATE 43s (x14 over 13h) nginx-ingress-controller Ingress default/test-ingress </code></pre> <p>PS: I've found a few answers that seem to be older than version 0.22.0 and no longer work, e.g. <a href="https://stackoverflow.com/questions/49514702/kubernetes-ingress-with-multiple-target-rewrite">kubernetes ingress with multiple target-rewrite</a></p>
<p>The problem in your case lies with an incorrect regular expression. The <code>$</code> operator matches the end of the string, or the end of a line if the multiline flag is enabled. With the first group you are capturing <code>/</code> or the end of the string, but your string does not end there, so there is no match. Also, with the plain <code>/</code> path there are no capture groups at all, so the <code>/$2</code> rewrite target has nothing to substitute and the frontend sub-paths are lost.</p> <p>I tested this and it works with this regex:</p> <pre><code> paths: - path: /()(.*) backend: serviceName: frontend-service servicePort: 80 </code></pre>
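<p>Putting it together with the other rules from the question, the frontend rule becomes a catch-all; with regex paths the order can matter, so keep the catch-all last so the more specific <code>/api</code> and <code>/authentication</code> prefixes keep winning. This is a sketch of the combined spec, not a tested manifest:</p> <pre><code>spec:
  rules:
  - http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: backend-service
          servicePort: 80
      - path: /authentication(/|$)(.*)
        backend:
          serviceName: identityserver-service
          servicePort: 80
      - path: /()(.*)                 # catch-all for the frontend, listed last
        backend:
          serviceName: frontend-service
          servicePort: 80
</code></pre>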
<p>I updated a GKE cluster from 1.13 to 1.15.9-gke.12. In the process I switched from legacy logging to Stackdriver Kubernetes Engine Monitoring. Now I have the problem that the <code>stackdriver-metadata-agent-cluster-level</code> pod keeps restarting because it gets <code>OOMKilled</code>.</p> <p>The memory seems to be just fine though. <img src="https://i.stack.imgur.com/vLdgD.png" alt="enter image description here"></p> <p>The logs also look just fine (same as the logs of a newly created cluster):</p> <pre><code>I0305 08:32:33.436613 1 log_spam.go:42] Command line arguments: I0305 08:32:33.436726 1 log_spam.go:44] argv[0]: '/k8s_metadata' I0305 08:32:33.436753 1 log_spam.go:44] argv[1]: '-logtostderr' I0305 08:32:33.436779 1 log_spam.go:44] argv[2]: '-v=1' I0305 08:32:33.436818 1 log_spam.go:46] Process id 1 I0305 08:32:33.436859 1 log_spam.go:50] Current working directory / I0305 08:32:33.436901 1 log_spam.go:52] Built on Jun 27 20:15:21 (1561666521) at [email protected]:/google/src/files/255462966/depot/branches/gcm_k8s_metadata_release_branch/255450506.1/OVERLAY_READONLY/google3 as //cloud/monitoring/agents/k8s_metadata:k8s_metadata with gc go1.12.5 for linux/amd64 from changelist 255462966 with baseline 255450506 in a mint client based on //depot/branches/gcm_k8s_metadata_release_branch/255450506.1/google3 Build label: gcm_k8s_metadata_20190627a_RC00 Build tool: Blaze, release blaze-2019.06.17-2 (mainline @253503028) Build target: //cloud/monitoring/agents/k8s_metadata:k8s_metadata I0305 08:32:33.437188 1 trace.go:784] Starting tracingd dapper tracing I0305 08:32:33.437315 1 trace.go:898] Failed loading config; disabling tracing: open /export/hda3/trace_data/trace_config.proto: no such file or directory W0305 08:32:33.536093 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
I0305 08:32:33.936066 1 main.go:134] Initiating watch for { v1 nodes} resources I0305 08:32:33.936169 1 main.go:134] Initiating watch for { v1 pods} resources I0305 08:32:33.936231 1 main.go:134] Initiating watch for {batch v1beta1 cronjobs} resources I0305 08:32:33.936297 1 main.go:134] Initiating watch for {apps v1 daemonsets} resources I0305 08:32:33.936361 1 main.go:134] Initiating watch for {extensions v1beta1 daemonsets} resources I0305 08:32:33.936420 1 main.go:134] Initiating watch for {apps v1 deployments} resources I0305 08:32:33.936489 1 main.go:134] Initiating watch for {extensions v1beta1 deployments} resources I0305 08:32:33.936552 1 main.go:134] Initiating watch for { v1 endpoints} resources I0305 08:32:33.936627 1 main.go:134] Initiating watch for {extensions v1beta1 ingresses} resources I0305 08:32:33.936698 1 main.go:134] Initiating watch for {batch v1 jobs} resources I0305 08:32:33.936777 1 main.go:134] Initiating watch for { v1 namespaces} resources I0305 08:32:33.936841 1 main.go:134] Initiating watch for {apps v1 replicasets} resources I0305 08:32:33.936897 1 main.go:134] Initiating watch for {extensions v1beta1 replicasets} resources I0305 08:32:33.936986 1 main.go:134] Initiating watch for { v1 replicationcontrollers} resources I0305 08:32:33.937067 1 main.go:134] Initiating watch for { v1 services} resources I0305 08:32:33.937135 1 main.go:134] Initiating watch for {apps v1 statefulsets} resources I0305 08:32:33.937157 1 main.go:142] All resources are being watched, agent has started successfully I0305 08:32:33.937168 1 main.go:145] No statusz port provided; not starting a server I0305 08:32:37.134913 1 binarylog.go:95] Starting disk-based binary logging I0305 08:32:37.134965 1 binarylog.go:265] rpc: flushed binary log to "" </code></pre> <p>I already tried to disable the logging and reenable it without success. It keeps restarting all the time (more or less every minute).</p> <p>Does anybody have the same experience?</p>
<p>The issue is caused by the memory limit set on the <code>metadata-agent</code> deployment being too low, so the pod is OOM-killed because it needs more memory than the limit allows to work properly.</p> <p>There is a workaround for this issue until it is fixed.</p> <hr> <p>You can overwrite the base resources in the configmap of the <code>metadata-agent</code> with:</p> <p><code>kubectl edit cm -n kube-system metadata-agent-config</code></p> <p>Setting <code>baseMemory: 50Mi</code> should be enough; if it doesn't work, use a higher value such as <code>100Mi</code> or <code>200Mi</code>.</p> <p>So the <code>metadata-agent-config</code> configmap should look something like this:</p> <pre><code>apiVersion: v1 data: NannyConfiguration: |- apiVersion: nannyconfig/v1alpha1 kind: NannyConfiguration baseMemory: 50Mi kind: ConfigMap </code></pre> <p>Note also that you need to restart the deployment, as the config map doesn't get picked up automatically:</p> <p><code>kubectl delete deployment -n kube-system stackdriver-metadata-agent-cluster-level</code></p> <p>For more details look into the addon-resizer <a href="https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#addon-resizer-configuration" rel="noreferrer">Documentation</a>.</p>
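<p>Once the addon manager has recreated the deployment, you can verify that the new limit was applied and that the restarts stop; the pod name below is a placeholder taken from the output of the first command:</p> <pre><code># Watch the agent pod come back and check that RESTARTS stops increasing
kubectl get pods -n kube-system | grep stackdriver-metadata-agent

# Confirm the memory limit that ended up on the new pod
kubectl describe pod -n kube-system &lt;stackdriver-metadata-agent-pod-name&gt; | grep -A 2 Limits
</code></pre>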
<p>I made the Kubernetes cluster with 2 azure Ubuntu VMs and trying to monitor the cluster. For that, I have deployed node-exporter daemonSet, heapster, Prometheus and grafana. Configured the node-exporter as a target in Prometheus rules files. but I am getting <code>Get http://master-ip:30002/metrics: context deadline exceeded</code> error. I have also increased <code>scrape_interval</code> and <code>scrape_timeout</code> values in the Prometheus-rules file. </p> <p>The following are the manifest files for the Prometheus-rules file and node-exporter daemonSet and service files.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: node-exporter name: node-exporter namespace: kube-system spec: selector: matchLabels: app: node-exporter template: metadata: labels: app: node-exporter spec: containers: - args: - --web.listen-address=&lt;master-IP&gt;:30002 - --path.procfs=/host/proc - --path.sysfs=/host/sys - --path.rootfs=/host/root - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ image: quay.io/prometheus/node-exporter:v0.18.1 name: node-exporter resources: limits: cpu: 250m memory: 180Mi requests: cpu: 102m memory: 180Mi volumeMounts: - mountPath: /host/proc name: proc readOnly: false - mountPath: /host/sys name: sys readOnly: false - mountPath: /host/root mountPropagation: HostToContainer name: root readOnly: true - args: - --logtostderr - --secure-listen-address=[$(IP)]:9100 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - --upstream=http://&lt;master-IP&gt;:30002/ env: - name: IP valueFrom: fieldRef: fieldPath: status.podIP image: quay.io/coreos/kube-rbac-proxy:v0.4.1 name: kube-rbac-proxy ports: - containerPort: 9100 hostPort: 9100 name: https resources: limits: cpu: 20m memory: 40Mi requests: cpu: 10m memory: 20Mi hostNetwork: true hostPID: true nodeSelector: kubernetes.io/os: linux securityContext: runAsNonRoot: true runAsUser: 65534 serviceAccountName: node-exporter tolerations: - operator: Exists volumes: - hostPath: path: /proc name: proc - hostPath: path: /sys name: sys - hostPath: path: / name: root --- apiVersion: v1 kind: Service metadata: labels: k8s-app: node-exporter name: node-exporter namespace: kube-system spec: type: NodePort ports: - name: https port: 9100 targetPort: https nodePort: 30002 selector: app: node-exporter ---prometheus-config-map.yaml----- apiVersion: v1 kind: ConfigMap metadata: name: prometheus-server-conf labels: name: prometheus-server-conf namespace: default data: prometheus.yml: |- global: scrape_interval: 5m evaluation_interval: 3m scrape_configs: - job_name: 'node' tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token static_configs: - targets: ['&lt;master-IP&gt;:30002'] - job_name: 'kubernetes-apiservers' kubernetes_sd_configs: - role: endpoints scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: 
/var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] action: keep regex: default;kubernetes;https </code></pre> </div> </div> </p> <p>Can we use a NodePort service for the node-exporter DaemonSet? If the answer is no, how should it be configured as a target in the Prometheus config file? Could anyone help me understand the scenario? Any relevant links would also be appreciated.</p>
<p>As @gayahtri confirmed in the comments </p> <blockquote> <p>it worked for me. – gayathri </p> </blockquote> <p>If you have the same issue as mentioned in the topic, check out this <a href="https://github.com/prometheus/prometheus/issues/4419" rel="nofollow noreferrer">github issue</a>,</p> <p>specifically this <a href="https://github.com/prometheus/prometheus/issues/4419#issuecomment-408451914" rel="nofollow noreferrer">answer</a> added by @simonpasquier: </p> <blockquote> <p>We have debugged it offline and the problem was the network. Running the Prometheus container with "--network=host" solved the issue.</p> </blockquote>
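<p>When Prometheus itself runs inside the cluster, the rough Kubernetes equivalent of starting the container with <code>--network=host</code> is enabling host networking in the Prometheus pod template. This sketch shows only the relevant fields, not a complete deployment:</p> <pre><code>spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working with hostNetwork
      containers:
      - name: prometheus
        image: prom/prometheus
</code></pre>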
<p>I've got a service that needs to scan large files, process them, and upload them back to the file server. My problem is that the default available space in a pod is 10G, which is not enough. I have 3 options:</p> <ol> <li>use a hostPath/emptyDir volume, but this way I can't specify how much space I need, and my pods could be scheduled to a node which doesn't have enough disk space.</li> <li>use a hostPath persistent volume, but the documents say it is &quot;Single node testing only&quot;</li> <li>use a local persistent volume, but according to the documentation <code>Dynamic provisioning is not supported yet</code>, so I have to manually create a PV on each node, which seems unacceptable to me; but if there are no other options this will be the only way to go.</li> </ol> <p>Are there any simpler options than a <code>local persistent volume</code>?</p>
<p>Depending on your cloud provider you can mount their block storage offerings, e.g. Persistent Disk on Google Cloud, Azure Disk on Azure, or Elastic Block Store on AWS. This way you won't be dependent on your node's local disk for storage. All of them are supported in Kubernetes via volume plugins, and the resulting <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">persistent volume claims can be expanded</a>. For example:</p> <p><strong>gcePersistentDisk</strong></p> <blockquote> <p>A <code>gcePersistentDisk</code> volume mounts a Google Compute Engine (GCE) <a href="https://cloud.google.com/compute/docs/disks" rel="nofollow noreferrer">Persistent Disk</a> into your Pod. Unlike <code>emptyDir</code>, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be &quot;handed off&quot; between Pods.</p> </blockquote> <p>This is similar for <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer">awsElasticBlockStore</a> or <a href="https://kubernetes.io/docs/concepts/storage/volumes/#azuredisk" rel="nofollow noreferrer">azureDisk</a>.</p> <hr /> <p>If you want to use AWS S3 there is an <a href="https://operatorhub.io/operator/awss3-operator-registry" rel="nofollow noreferrer">S3 Operator</a> which you may find interesting.</p> <blockquote> <p>AWS S3 Operator will deploy the AWS S3 Provisioner which will dynamically or statically provision AWS S3 Bucket storage and access.</p> </blockquote>
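<p>This also addresses the original sizing concern: with a PersistentVolumeClaim you state exactly how much space you need and the cloud provisioner creates a disk of that size. A minimal sketch, assuming a default StorageClass exists in the cluster and using placeholder names and images:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scan-workdir            # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi             # request exactly the space the job needs
---
# Mount the claim in the pod spec of the scanning workload
apiVersion: v1
kind: Pod
metadata:
  name: file-scanner            # placeholder name
spec:
  containers:
  - name: scanner
    image: busybox              # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /work
      name: workdir
  volumes:
  - name: workdir
    persistentVolumeClaim:
      claimName: scan-workdir
</code></pre>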
<p>Let us say I am deploying a Redis server to a Kubernetes cluster. </p> <p>How do I determine the resource requests and limits that I should set for my Pod? </p> <p>I tried leaving resources unconfigured, but I find that my pods are frequently evicted. I have Horizontal and Vertical Scaling enabled on my node pools.</p>
<p>This is a very individual question and it is impossible to give a simple answer. Everything depends on your specific needs and your application's usage pattern.</p> <p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">This article</a> and the <a href="https://www.youtube.com/watch?v=xjpHggHKm78" rel="nofollow noreferrer">video</a> included in it present some best practices and might be very helpful when deciding how to configure requests and limits in your particular <strong>Kubernetes cluster</strong>.</p> <p>Before deciding how to configure limits you should observe your cluster's behavior for some time. <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/" rel="nofollow noreferrer">Tools for Monitoring Resources</a> are an excellent source of such information.</p> <blockquote> <p>To scale an application and provide a reliable service, you need to understand how the application behaves when it is deployed. You can examine application performance in a Kubernetes cluster by examining the containers, pods, services, and the characteristics of the overall cluster. Kubernetes provides detailed information about an application’s resource usage at each of these levels. This information allows you to evaluate your application’s performance and where bottlenecks can be removed to improve overall performance.</p> </blockquote>
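<p>Once you have observed real usage for a while, this is roughly where requests and limits go in the pod spec; a minimal sketch for the Redis example from the question (the numbers are placeholders, not recommendations):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    resources:
      requests:            # what the scheduler reserves for the pod
        cpu: "250m"
        memory: "256Mi"
      limits:              # the hard cap enforced at runtime
        cpu: "500m"
        memory: "512Mi"
</code></pre> <p>Setting requests close to the observed usage also makes evictions much less likely, because pods with no requests at all (BestEffort QoS) are the first candidates to be evicted under node pressure.</p>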
<p>When I configure multiple (gateway - virtual service) pairs in a namespace, each pointing to basic HTTP services, only one service becomes accessible. Calls to the other (typically, the second one configured) return 404. If the first gateway is deleted, the second service then becomes accessible.</p> <p>I raised a GitHub issue a few weeks ago ( <a href="https://github.com/istio/istio/issues/20661" rel="nofollow noreferrer">https://github.com/istio/istio/issues/20661</a> ) that contains all my configuration, but there has been no response to date. Does anyone know what I'm doing wrong (if anything)?</p>
<p>Based on that <a href="https://github.com/istio/istio/issues/12500" rel="nofollow noreferrer">GitHub issue</a>:</p> <blockquote> <p>The gateway port names have to be unique, if they are sharing the same port. Thats the only way we differentiate different RDS blocks. We went through this motion earlier as well. I wouldn't rock this boat unless absolutely necessary.</p> </blockquote> <p>More about the issue <a href="https://github.com/istio/istio/pull/12556" rel="nofollow noreferrer">here</a>.</p> <p>I checked the <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#configure-traffic-for-the-bookinfo-com-host" rel="nofollow noreferrer">Istio documentation</a>, and in fact when multiple gateways are configured the port name of the first one is https, while the second is https-bookinfo.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: httpbin-gateway spec: selector: istio: ingressgateway # use istio default ingress gateway servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key hosts: - "httpbin.example.com" </code></pre> <hr> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway # use istio default ingress gateway servers: - port: number: 443 name: https-bookinfo protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key hosts: - "bookinfo.com" </code></pre> <hr> <h2><strong>EDIT</strong></h2> <p>That's weird, but I have another idea.</p> <p>There is a <a href="https://github.com/istio/istio/pull/12556" rel="nofollow noreferrer">GitHub pull request</a> which adds the following line in pilot:</p> <pre><code>routeName := gatewayRDSRouteName(s, config.Namespace) </code></pre> <blockquote> <p>This change adds namespace scoping to Gateway port names by appending namespace suffix to the HTTPS RDS routes. Port names still have to be unique within the namespace boundaries, but this change makes adding more specific scoping rather trivial.</p> </blockquote> <p>Could you try creating 2 namespaces like in the example below?</p> <p><strong>EXAMPLE</strong></p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: httpbin labels: name: httpbin istio-injection: enabled --- apiVersion: v1 kind: Namespace metadata: name: nodejs labels: name: nodejs istio-injection: enabled </code></pre> <p>Then deploy everything (deployment, service, virtual service, gateway) in the proper namespace and let me know if that works.</p> <hr> <p>Could you also try changing the hosts from "*" to specific names? That's the only other thing that came to my mind besides trying serverCertificate and privateKey, but from the comments I assume you have already tried it.</p> <p>Let me know if that helps.</p>
<p>Not sure if this is the right place, please point me to a different forum if not.</p> <p>In a multi-cluster kubernetes setup, is cross-cluster communication a valid design? In particular, a pod in one cluster relying on a pod in another cluster. </p> <p>Or are there limitations or anti-patterns associated with this that we should avoid? If not, what tools do you use to manage this deployment and monitor load on each cluster?</p>
<blockquote> <p><a href="https://istio.io/docs/setup/install/multicluster/" rel="nofollow noreferrer">Multicluster</a> deployments give you a greater degree of isolation and availability but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.</p> </blockquote> <p><a href="https://istio.io/docs/ops/deployment/deployment-models/#multiple-meshes" rel="nofollow noreferrer">Multiple meshes</a> afford the following capabilities beyond that of a single mesh:</p> <ul> <li>Organizational boundaries: lines of business</li> <li>Service name or namespace reuse: multiple distinct uses of the default namespace</li> <li>Stronger isolation: isolating test workloads from production workloads</li> </ul> <hr> <p>I have found some very good YouTube videos from KubeCon; check them out because they really explain how multi-cluster works, especially the first one with Matt Turner.</p> <ul> <li><a href="https://www.youtube.com/watch?v=FiMSr-fOFKU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=FiMSr-fOFKU</a></li> <li><a href="https://www.youtube.com/watch?v=-zsThiLvYos" rel="nofollow noreferrer">https://www.youtube.com/watch?v=-zsThiLvYos</a></li> </ul> <hr> <p>Check out <a href="https://github.com/istio-ecosystem/admiral" rel="nofollow noreferrer">Admiral</a>, which provides automatic configuration and service discovery for a multicluster Istio service mesh:</p> <blockquote> <p>Istio has a very robust set of multi-cluster capabilities. Managing this configuration across multiple clusters at scale is challenging. Admiral takes an opinionated view on this configuration and provides automatic provisioning and syncing across clusters. This removes the complexity from developers and mesh operators pushing this complexity into automation.</p> </blockquote> <hr> <blockquote> <p>In a multi-cluster kubernetes setup, is cross-cluster communication a valid design? In particular, a pod in one cluster relying on a pod in another cluster.</p> </blockquote> <p>Based on the provided links and my knowledge, everything should work fine; a pod can rely on a pod in another cluster.</p> <hr> <p>More useful links:</p> <ul> <li><a href="https://istio.io/docs/ops/deployment/deployment-models/#multiple-clusters" rel="nofollow noreferrer">https://istio.io/docs/ops/deployment/deployment-models/#multiple-clusters</a></li> <li><a href="https://banzaicloud.com/blog/istio-multicluster-federation-2/" rel="nofollow noreferrer">https://banzaicloud.com/blog/istio-multicluster-federation-2/</a></li> <li><a href="https://github.com/istio-ecosystem/coddiwomple" rel="nofollow noreferrer">https://github.com/istio-ecosystem/coddiwomple</a></li> <li><a href="https://github.com/istio-ecosystem/multi-mesh-examples" rel="nofollow noreferrer">https://github.com/istio-ecosystem/multi-mesh-examples</a> </li> </ul> <hr> <h2><strong>EDIT</strong></h2> <blockquote> <p>how do the different frameworks of Kubefed and Admiral fit with each other? Can we use both or only use one? </p> </blockquote> <p>I would not use kubefed since it's in alpha as far as I know, unless you really need it. I don't know how both of them would work together; I can only assume that they should both work. 
</p> <blockquote> <p>what considerations should we have in deciding between different mesh architecture to facilitate cross-cluster communication?</p> </blockquote> <p>Above, there is a link to the YouTube video &quot;Istio Multi-Cluster Service Mesh Patterns Explained&quot;. I would say it's up to you to decide which one you want to use based on your needs; the simplest one is the first described in the video, single control plane, single network. More about it <a href="https://istio.io/docs/ops/deployment/deployment-models/#multiple-clusters" rel="nofollow noreferrer">here</a>.</p>
<p>I installed a Kubernetes cluster with Calico and CoreDNS.</p> <p>Checking one CoreDNS pod's events, I got:</p> <pre><code>Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>The <code>/var/lib/cni/networks/</code> directory is empty. Why? How can I fix this?</p> <p>Even though all the pods' status is Running, I'm worried about their health.</p> <p>Logs:</p> <pre><code># kubectl logs coredns-1308140hfw -n kube-system [INFO] plugin/ready: Still waiting on: &quot;kubernetes&quot; .:53 [INFO] plugin/reload: Running configuration MD5 = 20328084ha6966e76816bcd928foa CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] plugin/ready: Still waiting on: &quot;kubernetes&quot; [INFO] plugin/ready: Still waiting on: &quot;kubernetes&quot; I0804 08:18:03.874045 1 trace.go:116] Trace[336122540]: &quot;Reflector ListAndWatch&quot; name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-08-04 08:17:33.872753993 +0000 UTC m=+0.038838328) (total time: 30.001059939s): Trace[336122540]: [30.001059939s] [30.001059939s] END E0804 08:18:03.874108 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get &quot;https://[IPv6]:443/api/v1/endpoints?limit=500&amp;resourceVersion=0&quot;: dial tcp [IPv6]:443: i/o timeout I0804 08:18:03.874047 1 trace.go:116] Trace[208240456]: &quot;Reflector ListAndWatch&quot; name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-08-04 08:17:33.872755558 +0000 UTC m=+0.038839930) (total time: 30.001213767s): Trace[208240456]: [30.001213767s] [30.001213767s] END E0804 08:18:03.874137 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get &quot;https://[IPv6]:443/api/v1/namespaces?limit=500&amp;resourceVersion=0&quot;: dial tcp [IPv6]:443: i/o timeout I0804 08:18:03.874214 1 trace.go:116] Trace[1106410694]: &quot;Reflector ListAndWatch&quot; name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-08-04 08:17:33.872753715 +0000 UTC m=+0.038838086) (total time: 30.001438405s): Trace[1106410694]: [30.001438405s] [30.001438405s] END E0804 08:18:03.874248 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: Get &quot;https://[IPv6]:443/api/v1/services?limit=500&amp;resourceVersion=0&quot;: dial tcp [IPv6]:443: i/o timeout </code></pre>
<p>How does a readiness probe work?</p> <blockquote> <p>Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don't want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.</p> </blockquote> <p>When a pod is started, Kubernetes can be configured to wait for a configurable amount of time to pass before performing the first readiness check. After that, it invokes the probe periodically and acts based on the result of the readiness probe. If a pod reports that it’s not ready, it’s removed from the service. If the pod then becomes ready again, it’s re-added.</p> <p>This means that in your situation your <code>coreDNS</code> pod was not entirely running, but Kubernetes had already started sending probes to check its <code>readiness</code>.</p> <p>Thanks to those probes, when you have a couple of replicas, Kubernetes will direct traffic only to those which are healthy (with a successful probe).</p> <p>PS. My <code>/var/lib/cni/networks/</code> directory is also empty.</p>
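<p>For context, this is roughly what a readiness probe definition looks like on a container; a generic sketch, not the exact probe that ships with CoreDNS:</p> <pre><code>readinessProbe:
  httpGet:
    path: /ready            # health endpoint exposed by the application
    port: 8181
  initialDelaySeconds: 5    # wait before the first check
  periodSeconds: 10         # how often to re-check
  failureThreshold: 3       # failures allowed before marking the pod NotReady
</code></pre> <p>A 503 from such a probe simply means the endpoint answered "not ready yet"; once the application reports ready, the pod is added back to the Service endpoints.</p>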
<p>I have been trying to set up Knative development environment on my system. But everytime i deploy Istio , the pilot remain in pending state and i find its because of resource exhaustion.</p> <p>I followed basic setup guide from Knative docs. i.e. serving/blob/master/DEVELOPMENT.md</p> <p>Now if i install and deploy istio according to it, the resources get exhausted and istio-pilot remain in pending state due to no node available.</p> <p>If i try same with the one given on installation guide i.e. <a href="https://knative.dev/docs/install/installing-istio/" rel="nofollow noreferrer">https://knative.dev/docs/install/installing-istio/</a></p> <p>It works fine until later when i restart the cluster the api-server get stopped which is according to what i found by searching is also due to lack of resources.</p> <p>So What is the exact requirement for Knative set up?</p> <p>I used system with 8 -core processor and 32GB RAM. </p> <p>Am i allocating it wrong? as much I understood we have to give at least 8Gb memory and 6 CPU to a single node kubernetes structures(That's what i'm using). What about the resources Istio and Knative deployments use?</p> <p>I checked for the resources and limits in node and got the limits are set to 0%. </p> <p>I have already tried to limit the CPU and RAM in minikube config, then with --cpu and --memory on the time of starting the minikube, but the output remain the same. </p> <p>Minikube started with : minikube start</p> <pre><code> 🔥 Creating virtualbox VM (CPUs=6, Memory=8192MB, Disk=20000MB) ... 🐳 Preparing Kubernetes v1.15.2 on Docker 18.09.8 ... 🚜 Pulling images ... </code></pre> <p>ISTIO Deployed by:</p> <pre><code> kubectl apply -f ./third_party/istio-1.2-latest/istio-crds.yaml while [[ $(kubectl get crd gateways.networking.istio.io -o jsonpath='{.status.conditions[?(@.type=="Established")].status}') != 'True' ]]; do echo "Waiting on Istio CRDs"; sleep 1 done kubectl apply -f ./third_party/istio-1.2-latest/istio.yaml </code></pre> <p>The pilot remain pending and after describing the pod we get :</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 71s (x4 over 5m12s) default-scheduler 0/1 nodes are available: 1 Insufficient cpu. </code></pre> <p>Output for <code>kubectl describe node nodename</code>:</p> <pre><code>Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 5105m (85%) 13800m (229%) memory 3749366272 (45%) 9497290Ki (117%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 19m kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 19m (x8 over 19m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 19m (x8 over 19m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 19m (x7 over 19m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 19m kubelet, minikube Updated Node Allocatable limit across pods Normal Starting 18m kube-proxy, minikube Starting kube-proxy. </code></pre> <p>The setup should have been successful as i did set up the limits with RequestQuotas and LimitRange too. But nothing is working. </p> <p>what am i doing wrong here?</p>
<p>You definitely don't have enough resources. For <a href="https://istio.io/docs/setup/kubernetes/platform-setup/minikube/" rel="nofollow noreferrer"><strong>Istio</strong></a> alone on <strong>Minikube</strong> you need:</p> <blockquote> <p>16384 MB of memory and 4 CPUs</p> </blockquote> <p>Add to this the requirements for <a href="https://knative.dev/docs/install/knative-with-gke/#creating-a-kubernetes-cluster" rel="nofollow noreferrer"><strong>Knative</strong></a>, which are not included in the above, and you'll see that the resources you provide are not enough.</p>
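<p>A practical starting point is to recreate the Minikube VM with more resources; a sketch (adjust the numbers to what your host can actually spare, and with 32 GB of RAM you have room for this):</p> <pre><code>minikube delete
minikube start --vm-driver=virtualbox --cpus=6 --memory=16384
</code></pre>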
<p>Recently a critical vulnerability was found in Kubernetes where attackers can send an authorized message, gain access to Kubernetes and from there try to log in to the back-end. Is this possible only in a public network, or in a private network as well? How? </p>
<p>The <strong>key point</strong> here is:</p> <blockquote> <p>In all Kubernetes versions prior to v1.10.11, v1.11.5, and v1.12.3.</p> </blockquote> <p>This applies to old <strong>Kubernetes</strong> versions which are not supported any more and you are not supposed to use them on production systems. If you want to familiarize yourself with the Kubernetes version support policy, please refer to <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/" rel="nofollow noreferrer">this</a> article. As you can read in it:</p> <blockquote> <p>The Kubernetes project maintains release branches for the most recent three minor releases. Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.</p> </blockquote> <p>Currently they are: 1.13, 1.14 and 1.15 versions.</p> <p>As you can see <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1002105" rel="nofollow noreferrer">here</a> or <a href="https://www.twistlock.com/labs-blog/demystifying-kubernetes-cve-2018-1002105-dead-simple-exploit/" rel="nofollow noreferrer">here</a>, the vulnerability was made public in December 2018, so I wouldn't say that in terms of contemporary software development process standards it is <em>recent</em>. It doesn't make sense to deliberate whether it's safe or not to use software with vulnerabilities/bugs which have already been fixed in newer versions that have been available for a long time.</p> <p>If you are interested in this bug, you can analyze <a href="https://github.com/kubernetes/kubernetes/issues/71411" rel="nofollow noreferrer">this</a> GitHub issue or read a nice description of it in <a href="https://www.twistlock.com/labs-blog/demystifying-kubernetes-cve-2018-1002105-dead-simple-exploit/" rel="nofollow noreferrer">this</a> article. As you can read:</p> <blockquote> <p>The bug allows an attacker who can send a legitimate, authorized request to the API server to bypass the authorization logic in any sequenced request. In other words, escalate privileges to that of any user.</p> </blockquote> <p><strong>In other words:</strong> to be able to bypass the authorization logic in subsequent requests or to escalate privileges, such a user needs to be able to send legitimate, authorized requests to the API server. </p> <p>So at this point you can probably answer your question yourself. The key point isn't the fact that the network is public or private. More important is how it is secured and by whom it can be accessed. Generally private networks with no external access (e.g. intranets) tend to be more secure, but when it comes to things like possible privilege escalation by someone who already has some level of access, it is potentially dangerous even within an organization.</p>
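<p>If you want to check whether a given cluster is affected, it is enough to look at the server version it reports:</p> <pre><code>kubectl version
# the "Server Version" line shows the cluster version;
# anything at or above v1.10.11 / v1.11.5 / v1.12.3 (or a newer minor release) contains the fix
</code></pre>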
<p>Assume I have this manifest:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: initialize-assets-fixtures spec: template: spec: initContainers: - name: wait-for-minio image: bitnami/minio-client env: - name: MINIO_SERVER_ACCESS_KEY valueFrom: secretKeyRef: name: minio key: access-key - name: MINIO_SERVER_SECRET_KEY valueFrom: secretKeyRef: name: minio key: secret-key - name: MINIO_SERVER_HOST value: minio - name: MINIO_SERVER_PORT_NUMBER value: "9000" - name: MINIO_ALIAS value: minio command: - /bin/sh - -c - | mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY} containers: - name: initialize-assets-fixtures image: bitnami/minio env: - name: MINIO_SERVER_ACCESS_KEY valueFrom: secretKeyRef: name: minio key: access-key - name: MINIO_SERVER_SECRET_KEY valueFrom: secretKeyRef: name: minio key: secret-key - name: MINIO_SERVER_HOST value: minio - name: MINIO_SERVER_PORT_NUMBER value: "9000" - name: MINIO_ALIAS value: minio command: - /bin/sh - -c - | mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY} for category in `ls`; do for f in `ls $category/*` ; do mc cp $f ${MINIO_ALIAS}/$category/$(basename $f) done done restartPolicy: Never </code></pre> <p>You see I have here one <code>initContainer</code> and one <code>container</code>. In both containers, I have the same configuration, i.e. the same <code>env</code> section. </p> <p>Assume I have yet another <code>Job</code> manifest where I use the very same <code>env</code> section again. </p> <p>It's a lot of duplicated configuration that I bet I can simplify drastically, but I don't know how to do it. Any hint? Any link to some documentation? After some googling, I was not able to come up with anything useful. Maybe with kustomize, but I'm not sure. Or maybe I'm doing it the wrong way with all those environment variables, but I don't think I have a choice, depending on the service I'm using (here it's minio, but I want to do the same kind of stuff with other services which might not be as flexible as minio).</p>
<p>Based on my knowledge you have these 3 options:</p> <ul> <li><a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a></li> <li><a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a></li> <li><a href="https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html" rel="nofollow noreferrer">ConfigMap</a></li> </ul> <hr> <p><strong>ConfigMap</strong></p> <hr> <p>You can use either kubectl create configmap or a ConfigMap generator in kustomization.yaml to create a ConfigMap.</p> <p>The data source corresponds to a key-value pair in the ConfigMap, where</p> <p><strong>key</strong> = the file name or the key you provided on the command line</p> <p><strong>value</strong> = the file contents or the literal value you provided on the command line.</p> <p>More about how to use it in a pod <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">here</a>.</p> <hr> <p><strong>Helm</strong></p> <hr> <p>As @Matt mentioned in the comments, you can use Helm:</p> <blockquote> <p><a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> lets you template the yaml with values. Also once you get into it there are ways to create and include <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="nofollow noreferrer">partial templates</a> – Matt</p> </blockquote> <p>By the way, Helm has its own <a href="https://github.com/helm/charts/tree/master/stable/minio" rel="nofollow noreferrer">minio chart</a>; you might take a look at how it is built there.</p> <hr> <p><strong>Kustomize</strong></p> <hr> <p>It's well described <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/wordpress/README.md" rel="nofollow noreferrer">here</a> and <a href="https://blog.stack-labs.com/code/kustomize-101/" rel="nofollow noreferrer">here</a> how you could do that in Kustomize. </p> <p>Let me know if you have any more questions.</p>
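<p>In addition, even without any extra tooling you can remove most of the duplication in the manifest itself by moving the non-secret values into a ConfigMap and injecting them with <code>envFrom</code>, keeping only the secret-backed variables explicit; a minimal sketch (the <code>minio-env</code> name is made up):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: minio-env             # hypothetical name
data:
  MINIO_SERVER_HOST: minio
  MINIO_SERVER_PORT_NUMBER: "9000"
  MINIO_ALIAS: minio
</code></pre> <p>and then in both the init container and the main container:</p> <pre><code>envFrom:
- configMapRef:
    name: minio-env           # pulls in all three variables at once
env:                          # the secret-backed variables stay explicit
- name: MINIO_SERVER_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: minio
      key: access-key
- name: MINIO_SERVER_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: minio
      key: secret-key
</code></pre>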
<p>I manage a deployment of Kubernetes on Openstack.</p> <p>My user pods mount a <code>PersistentVolume</code> created dynamically using Openstack Cinder as their home folder.</p> <p>What is strange is that if I create an (empty) file with file permissions 600:</p> <pre><code>bash-4.2$ ls -l total 16 -rw------- 1 jovyan users 0 Jul 16 17:55 id_rsa </code></pre> <p>Then I kill the container and restart it, the volume gets mounted again, but the permissions now have <code>rw</code> for group permissions:</p> <pre><code>bash-4.2$ ls -l total 16 -rw-rw---- 1 jovyan users 0 Jul 16 17:55 id_rsa </code></pre> <p>Any suggestions on how to debug this further?</p> <h2>Details on Kubernetes configuration</h2> <ul> <li>The volume <code>AccessMode</code> is <code>ReadWriteOnce</code>, <code>volumeMode: Filesystem</code></li> <li>Volume filesystem is <code>ext4</code>: <code>/dev/sdf: Linux rev 1.0 ext4 filesystem data, UUID=c627887b-0ff0-4310-b91d-37fe5ca9564d (needs journal recovery) (extents) (64bit) (large files) (huge files)</code></li> </ul> <h2>Check on Openstack</h2> <p>I first thought it was an Openstack issue, but if I detach the volume from the Openstack instance, then attach it again using Openstack commands, and mount it using the terminal on a node, permissions are ok. So I think it is Kubernetes messing with the permissions somehow.</p> <h2>Yaml resources</h2> <p>I pasted the YAML files for the pod, the PV and the PVC on a gist, see <a href="https://gist.github.com/zonca/21b81f735d0cc9a06cb85ae0fa0285e5" rel="noreferrer">https://gist.github.com/zonca/21b81f735d0cc9a06cb85ae0fa0285e5</a></p> <p>I also added the output of <code>kubectl describe</code> for those resources. It is a deployment of the <code>Jupyterhub</code> 0.9.0 Helm package.</p>
<p>This is happening because you have configured your pod with <code>fsGroup</code>. It's specified under the <code>securityContext</code>:</p> <pre><code>--- securityContext: fsGroup: 100 --- </code></pre> <p>Whenever the <code>fsGroup</code> field is specified, all processes of the container are also part of that supplementary group ID. The owning group of the volume, and of any files created in that volume, will be that group ID.</p> <p>Here's how the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#podsecuritycontext-v1-core" rel="noreferrer">Kubernetes API docs</a> explain it:</p> <blockquote> <p><code>fsGroup</code> is a special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:</p> <ol> <li>The owning GID will be the FSGroup</li> <li>The setgid bit is set (new files created in the volume will be owned by FSGroup)</li> <li><strong>The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume</strong>.</li> </ol> </blockquote>
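<p>For reference, this is where that setting lives in the pod spec; if you control the spec (or the chart values that generate it) you can change or drop it. A sketch with placeholder names:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    fsGroup: 100              # group ownership + g+rw bits are applied to the mounted volume
  containers:
  - name: notebook
    image: my-notebook-image  # placeholder
    volumeMounts:
    - name: home
      mountPath: /home/jovyan
  volumes:
  - name: home
    persistentVolumeClaim:
      claimName: home-pvc     # placeholder
</code></pre> <p>Removing <code>fsGroup</code> (or handling ownership inside the container instead) avoids the extra group permission bits, at the cost of having to manage volume ownership yourself.</p>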
<p>I am trying to start minikube on my machine but it gives an error :</p> <pre><code> Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path </code></pre> <p>When I installed VirtualBox then and tried to start minikube it says that VirtualBox and Hyper V in conflict. So what is the way to get it started?</p> <p>Should I disable Hyper V and install VirtualBox or is there a way to use Hyper V ?</p>
<blockquote> <p>Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path</p> </blockquote> <p>means that <strong>Minikube</strong> tries to start using the <strong>default hypervisor</strong>, which is <strong>VirtualBox</strong> (<a href="https://kubernetes.io/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/" rel="nofollow noreferrer">ref</a>), and it is looking for the <code>VBoxManage</code> command in your system's <code>PATH</code> environment variable, which of course is not present there if <strong>VirtualBox</strong> is not installed.</p> <p>When you want to use a different <strong>hypervisor</strong> (and you can, as <strong>Minikube</strong> for <strong>Windows</strong> also supports <strong>Hyper-V</strong> (<a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">ref</a>)), you need to provide an additional flag to the <code>minikube start</code> command, specifying the virtualization technology that you want it to use. If you want it to use <strong>Hyper-V</strong>, it should look like this:</p> <pre><code>minikube start --vm-driver=hyperv </code></pre> <p>Additionally you may want to set <code>hyperv</code> as your <strong>default driver</strong>. You can do it with the following command:</p> <pre><code>minikube config set vm-driver hyperv </code></pre> <p>You can also find this information <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/" rel="nofollow noreferrer">here</a>.</p>
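<p>If your Hyper-V setup has a dedicated external virtual switch, you can also point Minikube at it explicitly; a sketch (the switch name is just an example):</p> <pre><code>minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary Virtual Switch"
</code></pre>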
<p>I was having issues with <code>kubeadm init</code>, and so i ran <code>kubeadm reset</code> and then <code>kubeadm init</code> and the problem at hand went away, but now I have another problem and that is that when I run <code>kubectl get all</code>, I get the following response:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h6m Error from server (Forbidden): replicationcontrollers is forbidden: User "system:node:abc-server.localdomain" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" Error from server (Forbidden): daemonsets.apps is forbidden: User "system:node:abc-server.localdomain" cannot list resource "daemonsets" in API group "apps" in the namespace "default" Error from server (Forbidden): deployments.apps is forbidden: User "system:node:abc-server.localdomain" cannot list resource "deployments" in API group "apps" in the namespace "default" Error from server (Forbidden): replicasets.apps is forbidden: User "system:node:abc-server.localdomain" cannot list resource "replicasets" in API group "apps" in the namespace "default" Error from server (Forbidden): statefulsets.apps is forbidden: User "system:node:abc-server.localdomain" cannot list resource "statefulsets" in API group "apps" in the namespace "default" Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:node:abc-server.localdomain" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "default" Error from server (Forbidden): jobs.batch is forbidden: User "system:node:abc-server.localdomain" cannot list resource "jobs" in API group "batch" in the namespace "default" Error from server (Forbidden): cronjobs.batch is forbidden: User "system:node:abc-server.localdomain" cannot list resource "cronjobs" in API group "batch" in the namespace "default" </code></pre> <p>I've exhausted my googling abilities with my limited kubernetes vocabulary, so hoping someone here could help me with the following:</p> <ol> <li>what's happening?! (is this a RBAC Authorization issue?)</li> <li>how can i resolve this? as this is a dev environment that will definitely require some clean up, I don't mind a quick and dirty way just so i can continue with the task at hand (which is to just get things up and running again)</li> </ol>
<p>As @Software Engineer mentioned in his comment there is a <a href="https://github.com/kubernetes/kubeadm/issues/1721" rel="nofollow noreferrer">github</a> issue with a fix for that:</p> <p>User <a href="https://github.com/kubernetes/kubeadm/issues/1721#issuecomment-520437689" rel="nofollow noreferrer">neolit123</a> on github posted this solution:</p> <blockquote> <p>getting a permission error during pod network setup, means you are trying to <code>kubectl apply</code> manifest files using a kubeconfig file which does not have the correct permissions.</p> <p>make sure that your <code>/etc/kubernetes/admin.conf</code> is generated by kubeadm and contains <code>kubernetes-admin</code> as the user.</p> <pre><code>root@master:~# kubectl auth can-i create deploy </code></pre> <p>which kubeconfig is this command using?<br> try</p> <pre><code>root@master:~# KUBECONFIG=/etc/kubernetes/admin.conf kubectl auth can-i create deploy </code></pre> <blockquote> <p>I wanted to check the release notes, but there is no much information, or I don't know interpret it. Does anyone have any information about what are the changes, or what am I doing wrong?</p> </blockquote> <p>AFAIK, there is no such change that breaks this between 1.14.4 and .3.</p> </blockquote>
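<p>In practice, the usual fix after a fresh <code>kubeadm init</code> is to make the admin kubeconfig your default one; these are the standard steps that kubeadm itself prints at the end of init:</p> <pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# verify you are now acting as kubernetes-admin rather than system:node:...
kubectl auth can-i list deployments
</code></pre>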
<p>I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the <code>latest</code> tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)? </p> <p>There is a known issue with Kubernetes that even if you change ConfigMaps the old config remains, and you can either redeploy or work around it with </p> <pre><code>kubectl patch deployment $deployment -n $ns -p \ "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" </code></pre> <p>Is there something similar for cached images? </p>
<p>I think you're looking for kubectl set or patch, which are covered in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">Kubernetes documentation</a>.</p> <p>To update the image of a deployment you can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#set" rel="nofollow noreferrer">kubectl set</a> (note that the key on the left is the container name from the pod template, not the deployment name):</p> <pre><code>kubectl set image deployment/name_of_deployment name_of_container=new_image:tag </code></pre> <p>To update the image of your pod you can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="nofollow noreferrer">kubectl patch</a>, again matching the container by its name:</p> <pre><code>kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"new_image:tag"}]}}' </code></pre> <p>You can always use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#edit" rel="nofollow noreferrer">kubectl edit</a>, which allows you to directly edit any API resource you can retrieve via the command line tool.</p> <pre><code>kubectl edit deployment name_of_deployment </code></pre> <p>Let me know if you have any more questions.</p>
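<p>If you keep pushing to the same <code>latest</code> tag, it also helps to make sure the pod template always re-pulls the image and then simply bounce the rollout; a sketch (container and deployment names are placeholders, and <code>kubectl rollout restart</code> needs kubectl 1.15 or newer):</p> <pre><code># force a fresh pull on every pod start
kubectl patch deployment name_of_deployment -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"name_of_container","imagePullPolicy":"Always"}]}}}}'

# recreate the pods so they pull the newly pushed image
kubectl rollout restart deployment/name_of_deployment
</code></pre>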
<p>I am trying to understand hostIP and hostPort in Kubernetes.</p> <p>Here is my cluster configuration :</p> <p>3 vagrant nodes :</p> <pre><code>nodes = [ { :hostname =&gt; 'k8s-master', :ip =&gt; '192.168.150.200', :ram =&gt; 4096 }, { :hostname =&gt; 'k8s-minion1', :ip =&gt; '192.168.150.201', :ram =&gt; 4096 }, { :hostname =&gt; 'k8s-minion2', :ip =&gt; '192.168.150.202', :ram =&gt; 4096 }, ] </code></pre> <p>I wrote the following manifest to test it :</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: firstpod spec: containers: - name: container image: nginx ports: - containerPort: 80 hostIP: 10.0.0.1 hostPort: 8080 </code></pre> <p>I deploy it with <code>kubectl apply -f port.yml</code>. The pod runs on k8s-minion2:</p> <pre><code>kubectl get pods -o wide gives : NAME READY STATUS RESTARTS AGE IP NODE firstpod 1/1 Running 0 2m 10.38.0.3 k8s-minion2 </code></pre> <p>I can curl the nginx from inside the cluster as follows:</p> <pre><code>#ssh inside the cluster vagrant ssh k8s-master #curl the containerPort on the pod ip curl 10.38.0.3:80 </code></pre> <p>But I have no idea how to use hostIP and hostPort. <code>curl 10.0.0.1:8080</code> gives:</p> <pre><code>curl: (7) Failed to connect to 10.0.0.1 port 80: Connection timed out </code></pre> <p>and curling the node or the pod IP on port 8080 gives:</p> <pre><code>curl: (7) Failed to connect to 10.38.0.3 port 8080: Connection refused </code></pre> <p>So where is port 8080 open, and what is hostIP intended for?</p> <p>Thanks</p>
<p>If you take a look at the Kubernetes API <a href="https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#pod-v1-core" rel="nofollow noreferrer">reference</a> you'll find that the pod's <code>hostIP</code> status field is the IP that is assigned once the pod is scheduled onto a node.</p> <blockquote> <p><code>hostIP</code> (<em>string</em>) - IP address of the host to which the pod is assigned. Empty if not yet scheduled.</p> </blockquote> <p>This can be further <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">exposed</a> as an env variable inside the pod if needed (<code>status.hostIP</code>).</p> <p>With <code>hostPort</code> you can expose a container port to the external network at the address <code>&lt;hostIP&gt;:&lt;hostPort&gt;</code>, where the hostIP is the IP address of the Kubernetes node where the container is running and the hostPort is the port requested by the user. You can read more about it <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">here</a>.</p> <p>If you want to reach your pod there are also other ways to do that, such as <code>ClusterIP</code> or <code>NodePort</code> services, depending on whether the request comes from inside or outside the cluster. This <a href="https://rtfm.co.ua/en/kubernetes-clusterip-vs-nodeport-vs-loadbalancer-services-and-ingress-an-overview-with-examples/" rel="nofollow noreferrer">article</a> goes through them and their differences.</p>
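<p>To illustrate with the manifest from the question: <code>hostIP</code> on a container port tells the kubelet which address on the node to bind the hostPort to, and <code>10.0.0.1</code> is most likely not an address that exists on <code>k8s-minion2</code>, which is why nothing answers there. A sketch of the same pod with the <code>hostIP</code> line dropped, so the port is bound on all of the node's addresses:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080        # exposed on the node that runs the pod
</code></pre> <pre><code># from a machine that can reach the node running the pod (here k8s-minion2):
curl 192.168.150.202:8080
</code></pre>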
<p>I am copying a local directory into Kubernetes pod using <code>kubectl cp</code> command</p> <pre><code>kubectl cp test $POD:/tmp </code></pre> <p>It copies the <code>test</code> directory into Kubernetes pod <code>/tmp</code> directory.</p> <p>Now I want to overwrite the test directory in the pod. I did not find any option to overwrite directory while copying using <code>kubectl cp</code> command.</p> <p>Currently, I am deleting the test directory from the pod and then copy the directory. </p> <pre><code>kubectl exec $POD -- sh -c 'rm -rf /tmp/test' kubectl cp test $POD:/tmp </code></pre> <p>This is working fine, but in case any error comes while copying, existing directory from pod will also be deleted.</p> <p>How can I overwrite the pod directory with a local directory without deleting the pod directory first?</p> <p>Thanks in advance.</p>
<p><strong>Currently there is unfortunately no way to achieve your desired state with the <code>kubectl cp</code> command.</strong></p> <p>If there are some undocumented features, please feel free to edit this answer and provide the solution, but currently there is no single place in the documentation that could suggest the opposite.</p> <p>Neither <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp" rel="nofollow noreferrer">here</a> nor in the context help of the <code>kubectl</code> command, available by running <code>kubectl cp --help</code>, is there any option mentioned that would modify the default operation of the <code>kubectl cp</code> command, which is basically a merge of the content of the already existing directory and the copied one.</p> <pre><code>$ kubectl cp --help </code></pre> <pre><code>Copy files and directories to and from containers. Examples: # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'kubectl cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'kubectl exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace &lt;some-namespace&gt; tar cf - /tmp/foo | kubectl exec -i -n &lt;some-namespace&gt; &lt;some-pod&gt; -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally kubectl exec -n &lt;some-namespace&gt; &lt;some-pod&gt; -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace kubectl cp /tmp/foo_dir &lt;some-pod&gt;:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container kubectl cp /tmp/foo &lt;some-pod&gt;:/tmp/bar -c &lt;specific-container&gt; # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace &lt;some-namespace&gt; kubectl cp /tmp/foo &lt;some-namespace&gt;/&lt;some-pod&gt;:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally kubectl cp &lt;some-namespace&gt;/&lt;some-pod&gt;:/tmp/foo /tmp/bar Options: -c, --container='': Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen --no-preserve=false: The copied file/directory's ownership and permissions will not be preserved in the container --retries=0: Set number of retries to complete a copy operation from a container. Specify 0 to disable or any negative value for infinite retrying. The default is 0 (no retry). Usage: kubectl cp &lt;file-spec-src&gt; &lt;file-spec-dest&gt; [options] Use &quot;kubectl options&quot; for a list of global command-line options (applies to all commands). </code></pre> <p>Basically the default behavior of the <code>kubectl cp</code> command is <strong>a merge of the content of the source and destination directories</strong>. Let's say we have a local directory <code>/tmp/test</code> containing:</p> <pre><code>/tmp/test$ ls different_file.txt </code></pre> <p>with a single line of text <code>&quot;some content&quot;</code>. 
If we copy our local <code>/tmp/test</code> directory to the <code>/tmp</code> directory on our <code>Pod</code>, which already contains a <code>test</code> folder with a different file, let's say <code>testfile.txt</code>, the content of both directories will be merged, so our destination <code>/tmp/test</code> will eventually contain:</p> <pre><code>/tmp/test# ls different_file.txt testfile.txt </code></pre> <p>If we change the content of our local <code>different_file.txt</code> to <code>&quot;yet another content&quot;</code> and run the command again:</p> <pre><code>kubectl cp /tmp/test pod-name:/tmp </code></pre> <p>it will only overwrite the destination <code>different_file.txt</code> that is already present in the destination <code>/tmp/test</code> directory.</p> <p>Currently there is no way to override this default behavior.</p>
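<p>As a workaround for the "delete first" risk you describe, you can copy into a temporary directory first and only swap it in once the copy has succeeded; a sketch (the <code>.new</code> name is arbitrary):</p> <pre><code># copy to a staging directory; the existing /tmp/test is untouched
kubectl cp test $POD:/tmp/test.new

# only after a successful copy, replace the old directory with the new one
kubectl exec $POD -- sh -c 'rm -rf /tmp/test &amp;&amp; mv /tmp/test.new /tmp/test'
</code></pre> <p>If the copy fails, the original <code>/tmp/test</code> is still there; you only lose it after the new content is fully on the pod.</p>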
<h1>What happened:</h1> <p>When I applied patch to ConfigMap, it doesn't work well. (<code>Error from server: v1.ConfigMap.Data: ReadString: expects " or n, but found {, error found in #10 byte of ...|"config":{"template"|..., bigger context ...|{"apiVersion":"v1","data":{"config":{"template":{"containers":[{"lifecycle":{"preStop":|...</code>)</p> <p>gsed works fine, but I want to know `kubectl</p> <h2>Command</h2> <pre><code>$ kubectl patch configmap/cm-test-toshi -n dev --type merge -p '{ "data":{ "config":{ "template":{ "containers":[{ "name":"istio-proxy", "lifecycle":{ "preStop":{ "exec":{ "command":[\"/bin/sh\", \"-c\", \"while [ $(netstat -plunt | grep tcp | grep -v envoy | wc -l | xargs) -ne 0]; do sleep 1; done\"] } } } }] } } } }' </code></pre> <h2>Target CondigMap</h2> <pre><code>apiVersion: v1 data: config: |- policy: disabled alwaysInjectSelector: [] neverInjectSelector: [] template: | initContainers: - name: istio-validation ... containers: - name: istio-proxy {{- if contains "/" (annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image) }} image: "{{ annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image }}" {{- else }} image: "{{ .Values.global.hub }}/{{ .Values.global.proxy.image }}:{{ .Values.global.tag }}" {{- end }} ports: ... </code></pre> <h1>What you expected to happen:</h1> <p>No error occurs and <code>data.condif.template.containers[].lifecycle.preStop.exec.command</code> is applied.</p> <pre><code>apiVersion: v1 data: config: |- policy: disabled alwaysInjectSelector: [] neverInjectSelector: [] template: | initContainers: - name: istio-validation ... containers: containers: - name: istio-proxy lifecycle: # added preStop: # added exec: # added command: ["/bin/sh", "-c", "while [ $(netstat -plunt | grep tcp | grep -v envoy | wc -l | xargs) -ne 0 ]; do sleep 1; done"] # added {{- if contains "/" (annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image) }} image: "{{ annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image }}" {{- else }} image: "{{ .Values.global.hub }}/{{ .Values.global.proxy.image }}:{{ .Values.global.tag }}" {{- end }} ports: ... 
</code></pre> <h1>How to reproduce it (as minimally and precisely as possible):</h1> <ol> <li>Create ConfigMap</li> </ol> <pre><code>$ kubectl apply -f cm-test-toshi.yaml </code></pre> <p>cm-test-toshi.yaml</p> <pre><code>apiVersion: v1 data: config: |- policy: disabled alwaysInjectSelector: [] neverInjectSelector: [] template: | {{- $cniDisabled := (not .Values.istio_cni.enabled) }} {{- $cniRepairEnabled := (and .Values.istio_cni.enabled .Values.istio_cni.repair.enabled) }} {{- $enableInitContainer := (or $cniDisabled $cniRepairEnabled .Values.global.proxy.enableCoreDump) }} rewriteAppHTTPProbe: {{ valueOrDefault .Values.sidecarInjectorWebhook.rewriteAppHTTPProbe false }} {{- if $enableInitContainer }} containers: - name: istio-proxy {{- if contains "/" (annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image) }} image: "{{ annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image }}" {{- else }} image: "{{ .Values.global.hub }}/{{ .Values.global.proxy.image }}:{{ .Values.global.tag }}" {{- end }} ports: - containerPort: 15090 protocol: TCP name: http-envoy-prom args: - proxy - sidecar - --domain - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }} - --configPath - "/etc/istio/proxy" - --binaryPath - "/usr/local/bin/envoy" - --serviceCluster {{ if ne "" (index .ObjectMeta.Labels "app") -}} - "{{ index .ObjectMeta.Labels `app` }}.$(POD_NAMESPACE)" {{ else -}} - "{{ valueOrDefault .DeploymentMeta.Name `istio-proxy` }}.{{ valueOrDefault .DeploymentMeta.Namespace `default` }}" {{ end -}} - --drainDuration - "{{ formatDuration .ProxyConfig.DrainDuration }}" - --parentShutdownDuration - "{{ formatDuration .ProxyConfig.ParentShutdownDuration }}" - --discoveryAddress - "{{ annotation .ObjectMeta `sidecar.istio.io/discoveryAddress` .ProxyConfig.DiscoveryAddress }}" {{- if eq .Values.global.proxy.tracer "lightstep" }} - --lightstepAddress - "{{ .ProxyConfig.GetTracing.GetLightstep.GetAddress }}" - --lightstepAccessToken - "{{ .ProxyConfig.GetTracing.GetLightstep.GetAccessToken }}" - --lightstepSecure={{ .ProxyConfig.GetTracing.GetLightstep.GetSecure }} - --lightstepCacertPath - "{{ .ProxyConfig.GetTracing.GetLightstep.GetCacertPath }}" {{- else if eq .Values.global.proxy.tracer "zipkin" }} - --zipkinAddress - "{{ .ProxyConfig.GetTracing.GetZipkin.GetAddress }}" {{- else if eq .Values.global.proxy.tracer "datadog" }} - --datadogAgentAddress - "{{ .ProxyConfig.GetTracing.GetDatadog.GetAddress }}" {{- end }} - --proxyLogLevel={{ annotation .ObjectMeta `sidecar.istio.io/logLevel` .Values.global.proxy.logLevel}} - --proxyComponentLogLevel={{ annotation .ObjectMeta `sidecar.istio.io/componentLogLevel` .Values.global.proxy.componentLogLevel}} - --connectTimeout - "{{ formatDuration .ProxyConfig.ConnectTimeout }}" {{- if .Values.global.proxy.envoyStatsd.enabled }} - --statsdUdpAddress - "{{ .ProxyConfig.StatsdUdpAddress }}" {{- end }} {{- if .Values.global.proxy.envoyMetricsService.enabled }} - --envoyMetricsServiceAddress - "{{ .ProxyConfig.GetEnvoyMetricsService.GetAddress }}" {{- end }} {{- if .Values.global.proxy.envoyAccessLogService.enabled }} - --envoyAccessLogServiceAddress - "{{ .ProxyConfig.GetEnvoyAccessLogService.GetAddress }}" {{- end }} - --proxyAdminPort - "{{ .ProxyConfig.ProxyAdminPort }}" {{ if gt .ProxyConfig.Concurrency 0 -}} - --concurrency - "{{ .ProxyConfig.Concurrency }}" {{ end -}} {{- if .Values.global.controlPlaneSecurityEnabled }} - --controlPlaneAuthPolicy - MUTUAL_TLS {{- else }} - --controlPlaneAuthPolicy - NONE {{- end 
}} - --dnsRefreshRate - {{ valueOrDefault .Values.global.proxy.dnsRefreshRate "300s" }} {{- if (ne (annotation .ObjectMeta "status.sidecar.istio.io/port" .Values.global.proxy.statusPort) "0") }} - --statusPort - "{{ annotation .ObjectMeta `status.sidecar.istio.io/port` .Values.global.proxy.statusPort }}" - --applicationPorts - "{{ annotation .ObjectMeta `readiness.status.sidecar.istio.io/applicationPorts` (applicationPorts .Spec.Containers) }}" {{- end }} {{- if .Values.global.trustDomain }} - --trust-domain={{ .Values.global.trustDomain }} {{- end }} {{- if .Values.global.logAsJson }} - --log_as_json {{- end }} {{- if (isset .ObjectMeta.Annotations `sidecar.istio.io/bootstrapOverride`) }} - --templateFile=/etc/istio/custom-bootstrap/envoy_bootstrap.json {{- end }} {{- if .Values.global.proxy.lifecycle }} lifecycle: {{ toYaml .Values.global.proxy.lifecycle | indent 4 }} {{- end }} env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status.podIP - name: SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName - name: HOST_IP valueFrom: fieldRef: fieldPath: status.hostIP {{- if eq .Values.global.proxy.tracer "datadog" }} {{- if isset .ObjectMeta.Annotations `apm.datadoghq.com/env` }} {{- range $key, $value := fromJSON (index .ObjectMeta.Annotations `apm.datadoghq.com/env`) }} - name: {{ $key }} value: "{{ $value }}" {{- end }} {{- end }} {{- end }} - name: ISTIO_META_POD_PORTS value: |- [ {{- $first := true }} {{- range $index1, $c := .Spec.Containers }} {{- range $index2, $p := $c.Ports }} {{- if (structToJSON $p) }} {{if not $first}},{{end}}{{ structToJSON $p }} {{- $first = false }} {{- end }} {{- end}} {{- end}} ] - name: ISTIO_META_CLUSTER_ID value: "{{ valueOrDefault .Values.global.multiCluster.clusterName `Kubernetes` }}" - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: ISTIO_META_CONFIG_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: SDS_ENABLED value: "{{ .Values.global.sds.enabled }}" - name: ISTIO_META_INTERCEPTION_MODE value: "{{ or (index .ObjectMeta.Annotations `sidecar.istio.io/interceptionMode`) .ProxyConfig.InterceptionMode.String }}" - name: ISTIO_META_INCLUDE_INBOUND_PORTS value: "{{ annotation .ObjectMeta `traffic.sidecar.istio.io/includeInboundPorts` (applicationPorts .Spec.Containers) }}" {{- if .Values.global.network }} - name: ISTIO_META_NETWORK value: "{{ .Values.global.network }}" {{- end }} {{ if .ObjectMeta.Annotations }} - name: ISTIO_METAJSON_ANNOTATIONS value: | {{ toJSON .ObjectMeta.Annotations }} {{ end }} {{ if .ObjectMeta.Labels }} - name: ISTIO_METAJSON_LABELS value: | {{ toJSON .ObjectMeta.Labels }} {{ end }} {{- if .DeploymentMeta.Name }} - name: ISTIO_META_WORKLOAD_NAME value: {{ .DeploymentMeta.Name }} {{ end }} {{- if and .TypeMeta.APIVersion .DeploymentMeta.Name }} - name: ISTIO_META_OWNER value: kubernetes://apis/{{ .TypeMeta.APIVersion }}/namespaces/{{ valueOrDefault .DeploymentMeta.Namespace `default` }}/{{ toLower .TypeMeta.Kind}}s/{{ .DeploymentMeta.Name }} {{- end}} {{- if (isset .ObjectMeta.Annotations `sidecar.istio.io/bootstrapOverride`) }} - name: ISTIO_BOOTSTRAP_OVERRIDE value: "/etc/istio/custom-bootstrap/custom_bootstrap.json" {{- end }} {{- if .Values.global.sds.customTokenDirectory }} - name: ISTIO_META_SDS_TOKEN_PATH value: "{{ .Values.global.sds.customTokenDirectory -}}/sdstoken" {{- end }} {{- if .Values.global.meshID }} - 
name: ISTIO_META_MESH_ID value: "{{ .Values.global.meshID }}" {{- else if .Values.global.trustDomain }} - name: ISTIO_META_MESH_ID value: "{{ .Values.global.trustDomain }}" {{- end }} {{- if and (eq .Values.global.proxy.tracer "datadog") (isset .ObjectMeta.Annotations `apm.datadoghq.com/env`) }} {{- range $key, $value := fromJSON (index .ObjectMeta.Annotations `apm.datadoghq.com/env`) }} - name: {{ $key }} value: "{{ $value }}" {{- end }} {{- end }} imagePullPolicy: "{{ valueOrDefault .Values.global.imagePullPolicy `Always` }}" {{ if ne (annotation .ObjectMeta `status.sidecar.istio.io/port` .Values.global.proxy.statusPort) `0` }} readinessProbe: httpGet: path: /healthz/ready port: {{ annotation .ObjectMeta `status.sidecar.istio.io/port` .Values.global.proxy.statusPort }} initialDelaySeconds: {{ annotation .ObjectMeta `readiness.status.sidecar.istio.io/initialDelaySeconds` .Values.global.proxy.readinessInitialDelaySeconds }} periodSeconds: {{ annotation .ObjectMeta `readiness.status.sidecar.istio.io/periodSeconds` .Values.global.proxy.readinessPeriodSeconds }} failureThreshold: {{ annotation .ObjectMeta `readiness.status.sidecar.istio.io/failureThreshold` .Values.global.proxy.readinessFailureThreshold }} {{ end -}} securityContext: allowPrivilegeEscalation: {{ .Values.global.proxy.privileged }} capabilities: {{ if eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) `TPROXY` -}} add: - NET_ADMIN {{- end }} drop: - ALL privileged: {{ .Values.global.proxy.privileged }} readOnlyRootFilesystem: {{ not .Values.global.proxy.enableCoreDump }} runAsGroup: 1337 {{ if eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) `TPROXY` -}} runAsNonRoot: false runAsUser: 0 {{- else -}} runAsNonRoot: true runAsUser: 1337 {{- end }} resources: {{ if or (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory`) -}} requests: {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) -}} cpu: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU` }}" {{ end}} {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory`) -}} memory: "{{ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory` }}" {{ end }} {{ else -}} {{- if .Values.global.proxy.resources }} {{ toYaml .Values.global.proxy.resources | indent 4 }} {{- end }} {{ end -}} volumeMounts: {{ if (isset .ObjectMeta.Annotations `sidecar.istio.io/bootstrapOverride`) }} - mountPath: /etc/istio/custom-bootstrap name: custom-bootstrap-volume {{- end }} - mountPath: /etc/istio/proxy name: istio-envoy {{- if .Values.global.sds.enabled }} - mountPath: /var/run/sds name: sds-uds-path readOnly: true - mountPath: /var/run/secrets/tokens name: istio-token {{- if .Values.global.sds.customTokenDirectory }} - mountPath: "{{ .Values.global.sds.customTokenDirectory -}}" name: custom-sds-token readOnly: true {{- end }} {{- else }} - mountPath: /etc/certs/ name: istio-certs readOnly: true {{- end }} {{- if and (eq .Values.global.proxy.tracer "lightstep") .Values.global.tracer.lightstep.cacertPath }} - mountPath: {{ directory .ProxyConfig.GetTracing.GetLightstep.GetCacertPath }} name: lightstep-certs readOnly: true {{- end }} {{- if isset .ObjectMeta.Annotations `sidecar.istio.io/userVolumeMount` }} {{ range $index, $value := fromJSON (index .ObjectMeta.Annotations `sidecar.istio.io/userVolumeMount`) }} - name: "{{ $index }}" {{ toYaml $value | indent 4 }} {{ end }} {{- end }} volumes: {{- if (isset 
.ObjectMeta.Annotations `sidecar.istio.io/bootstrapOverride`) }} - name: custom-bootstrap-volume configMap: name: {{ annotation .ObjectMeta `sidecar.istio.io/bootstrapOverride` "" }} {{- end }} - emptyDir: medium: Memory name: istio-envoy {{- if .Values.global.sds.enabled }} - name: sds-uds-path hostPath: path: /var/run/sds - name: istio-token projected: sources: - serviceAccountToken: path: istio-token expirationSeconds: 43200 audience: {{ .Values.global.sds.token.aud }} {{- if .Values.global.sds.customTokenDirectory }} - name: custom-sds-token secret: secretName: sdstokensecret {{- end }} {{- else }} - name: istio-certs secret: optional: true {{ if eq .Spec.ServiceAccountName "" }} secretName: istio.default {{ else -}} secretName: {{ printf "istio.%s" .Spec.ServiceAccountName }} {{ end -}} {{- if isset .ObjectMeta.Annotations `sidecar.istio.io/userVolume` }} {{range $index, $value := fromJSON (index .ObjectMeta.Annotations `sidecar.istio.io/userVolume`) }} - name: "{{ $index }}" {{ toYaml $value | indent 2 }} {{ end }} {{ end }} {{- end }} {{- if and (eq .Values.global.proxy.tracer "lightstep") .Values.global.tracer.lightstep.cacertPath }} - name: lightstep-certs secret: optional: true secretName: lightstep.cacert {{- end }} {{- if .Values.global.podDNSSearchNamespaces }} dnsConfig: searches: {{- range .Values.global.podDNSSearchNamespaces }} - {{ render . }} {{- end }} {{- end }} injectedAnnotations: values: '{"certmanager":{"enabled":false,"hub":"quay.io/jetstack","image":"cert-manager-controller","namespace":"istio-system","tag":"v0.6.2"},"clusterResources":true,"cni":{"namespace":"istio-system"},"galley":{"enableAnalysis":false,"enabled":false,"image":"galley","namespace":"istio-system"},"gateways":{"istio-egressgateway":{"autoscaleEnabled":true,"enabled":false,"env":{"ISTIO_META_ROUTER_MODE":"sni-dnat"},"namespace":"istio-system","ports":[{"name":"http2","port":80},{"name":"https","port":443},{"name":"tls","port":15443,"targetPort":15443}],"secretVolumes":[{"mountPath":"/etc/istio/egressgateway-certs","name":"egressgateway-certs","secretName":"istio-egressgateway-certs"},{"mountPath":"/etc/istio/egressgateway-ca-certs","name":"egressgateway-ca-certs","secretName":"istio-egressgateway-ca-certs"}],"type":"ClusterIP","zvpn":{"enabled":true,"suffix":"global"}},"istio-ingressgateway":{"applicationPorts":"","autoscaleEnabled":true,"debug":"info","domain":"","enabled":false,"env":{"ISTIO_META_ROUTER_MODE":"sni-dnat"},"meshExpansionPorts":[{"name":"tcp-pilot-grpc-tls","port":15011,"targetPort":15011},{"name":"tcp-citadel-grpc-tls","port":8060,"targetPort":8060},{"name":"tcp-dns-tls","port":853,"targetPort":853}],"namespace":"istio-system","ports":[{"name":"status-port","port":15020,"targetPort":15020},{"name":"http2","port":80,"targetPort":80},{"name":"https","port":443},{"name":"kiali","port":15029,"targetPort":15029},{"name":"prometheus","port":15030,"targetPort":15030},{"name":"grafana","port":15031,"targetPort":15031},{"name":"tracing","port":15032,"targetPort":15032},{"name":"tls","port":15443,"targetPort":15443}],"sds":{"enabled":false,"image":"node-agent-k8s","resources":{"limits":{"cpu":"2000m","memory":"1024Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}},"secretVolumes":[{"mountPath":"/etc/istio/ingressgateway-certs","name":"ingressgateway-certs","secretName":"istio-ingressgateway-certs"},{"mountPath":"/etc/istio/ingressgateway-ca-certs","name":"ingressgateway-ca-certs","secretName":"istio-ingressgateway-ca-certs"}],"type":"LoadBalancer","zvpn":{"enabled":true,"suffix":"global"}}}
,"global":{"arch":{"amd64":2,"ppc64le":2,"s390x":2},"certificates":[],"configNamespace":"istio-system","configValidation":true,"controlPlaneSecurityEnabled":true,"defaultNodeSelector":{},"defaultPodDisruptionBudget":{"enabled":true},"defaultResources":{"requests":{"cpu":"10m"}},"disablePolicyChecks":true,"enableHelmTest":false,"enableTracing":false,"enabled":true,"hub":"docker.io/istio","imagePullPolicy":"IfNotPresent","imagePullSecrets":[],"istioNamespace":"istio-system","k8sIngress":{"enableHttps":false,"enabled":false,"gatewayName":"ingressgateway"},"localityLbSetting":{"enabled":true},"logAsJson":true,"logging":{"level":"default:warn"},"meshExpansion":{"enabled":false,"useILB":false},"meshNetworks":{},"mtls":{"auto":false,"enabled":false},"multiCluster":{"clusterName":"","enabled":false},"namespace":"istio-system","network":"","omitSidecarInjectorConfigMap":false,"oneNamespace":false,"operatorManageWebhooks":false,"outboundTrafficPolicy":{"mode":"ALLOW_ANY"},"policyCheckFailOpen":false,"policyNamespace":"istio-system","priorityClassName":"","prometheusNamespace":"istio-system","proxy":{"accessLogEncoding":"JSON","accessLogFile":"","accessLogFormat":"","autoInject":"disabled","clusterDomain":"cluster.local","componentLogLevel":"misc:error","concurrency":2,"dnsRefreshRate":"300s","enableCoreDump":false,"envoyAccessLogService":{"enabled":false},"envoyMetricsService":{"enabled":false,"tcpKeepalive":{"interval":"10s","probes":3,"time":"10s"},"tlsSettings":{"mode":"DISABLE","subjectAltNames":[]}},"envoyStatsd":{"enabled":false},"excludeIPRanges":"","excludeInboundPorts":"","excludeOutboundPorts":"","image":"proxyv2","includeIPRanges":"10.33.0.0/16,10.32.128.0/20","includeInboundPorts":"*","kubevirtInterfaces":"","logLevel":"warning","privileged":false,"protocolDetectionTimeout":"100ms","readinessFailureThreshold":30,"readinessInitialDelaySeconds":1,"readinessPeriodSeconds":2,"resources":{"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"100m","memory":"128Mi"}},"statusPort":15020,"tracer":"zipkin"},"proxy_init":{"image":"proxyv2","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"10m","memory":"10Mi"}}},"sds":{"enabled":false,"token":{"aud":"istio-ca"},"udsPath":""},"securityNamespace":"istio-system","tag":"1.4.6","telemetryNamespace":"istio-system","tracer":{"datadog":{"address":"$(HOST_IP):8126"},"lightstep":{"accessToken":"","address":"","cacertPath":"","secure":true},"zipkin":{"address":""}},"trustDomain":"cluster.local","useMCP":false},"grafana":{"accessMode":"ReadWriteMany","contextPath":"/grafana","dashboardProviders":{"dashboardproviders.yaml":{"apiVersion":1,"providers":[{"disableDeletion":false,"folder":"istio","name":"istio","options":{"path":"/var/lib/grafana/dashboards/istio"},"orgId":1,"type":"file"}]}},"datasources":{"datasources.yaml":{"apiVersion":1}},"enabled":false,"env":{},"envSecrets":{},"image":{"repository":"grafana/grafana","tag":"6.4.3"},"ingress":{"enabled":false,"hosts":["grafana.local"]},"namespace":"istio-system","nodeSelector":{},"persist":false,"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"replicaCount":1,"security":{"enabled":false,"passphraseKey":"passphrase","secretName":"grafana","usernameKey":"username"},"service":{"annotations":{},"externalPort":3000,"name":"http","type":"ClusterIP"},"storageClassName":"","tolerations":[]},"istio_cni":{"enabled":false,"repair":{"enabled":true}},"istiocoredns":{"coreDNSImage":"coredns/coredns","coreDNSPluginImage":"istio/coredns-plugin:0.2-istio-1.1","coreDNSTag":"1.
6.2","enabled":false,"namespace":"istio-system"},"kiali":{"contextPath":"/kiali","createDemoSecret":false,"dashboard":{"passphraseKey":"passphrase","secretName":"kiali","usernameKey":"username","viewOnlyMode":false},"enabled":false,"hub":"quay.io/kiali","ingress":{"enabled":false,"hosts":["kiali.local"]},"namespace":"istio-system","nodeSelector":{},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"replicaCount":1,"security":{"cert_file":"/kiali-cert/cert-chain.pem","enabled":false,"private_key_file":"/kiali-cert/key.pem"},"tag":"v1.9"},"mixer":{"adapters":{"kubernetesenv":{"enabled":true},"prometheus":{"enabled":true,"metricsExpiryDuration":"10m"},"stackdriver":{"auth":{"apiKey":"","appCredentials":false,"serviceAccountPath":""},"enabled":false,"tracer":{"enabled":false,"sampleProbability":1}},"stdio":{"enabled":false,"outputAsJson":false},"useAdapterCRDs":false},"policy":{"adapters":{"kubernetesenv":{"enabled":true},"useAdapterCRDs":false},"autoscaleEnabled":true,"enabled":false,"image":"mixer","namespace":"istio-system","sessionAffinityEnabled":false},"telemetry":{"autoscaleEnabled":true,"enabled":false,"env":{"GOMAXPROCS":"6"},"image":"mixer","loadshedding":{"latencyThreshold":"100ms","mode":"enforce"},"namespace":"istio-system","nodeSelector":{},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"replicaCount":1,"reportBatchMaxEntries":100,"reportBatchMaxTime":"1s","sessionAffinityEnabled":false,"tolerations":[],"useMCP":true}},"nodeagent":{"enabled":false,"image":"node-agent-k8s","namespace":"istio-system"},"pilot":{"appNamespaces":[],"autoscaleEnabled":true,"autoscaleMax":5,"autoscaleMin":1,"configMap":true,"configNamespace":"istio-config","cpu":{"targetAverageUtilization":80},"enableProtocolSniffingForInbound":false,"enableProtocolSniffingForOutbound":true,"enabled":true,"env":{},"image":"pilot","ingress":{"ingressClass":"istio","ingressControllerMode":"OFF","ingressService":"istio-ingressgateway"},"keepaliveMaxServerConnectionAge":"30m","meshNetworks":{"networks":{}},"namespace":"istio-system","nodeSelector":{},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"policy":{"enabled":false},"replicaCount":1,"tolerations":[],"traceSampling":1,"useMCP":false},"prometheus":{"contextPath":"/prometheus","enabled":false,"hub":"docker.io/prom","ingress":{"enabled":false,"hosts":["prometheus.local"]},"namespace":"istio-system","nodeSelector":{},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"replicaCount":1,"retention":"6h","scrapeInterval":"15s","security":{"enabled":true},"tag":"v2.12.0","tolerations":[]},"security":{"dnsCerts":{"istio-pilot-service-account.istio-control":"istio-pilot.istio-control"},"enableNamespacesByDefault":true,"enabled":true,"image":"citadel","namespace":"istio-system","selfSigned":true,"trustDomain":"cluster.local"},"sidecarInjectorWebhook":{"alwaysInjectSelector":[],"enableNamespacesByDefault":false,"enabled":true,"image":"sidecar_injector","injectLabel":"istio-injection","injectedAnnotations":{},"lifecycle":{},"namespace":"istio-system","neverInjectSelector":[],"nodeSelector":{},"objectSelector":{"autoInject":true,"enabled":false},"podAnnotations":{},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"replicaCount":1,"resources":{},"rewriteAppHTTPProbe":false,"rollingMaxSurge":"100%","rollingMaxUnavailable":"25%","selfSigned":false,"tolerations":[]},"telemetry":{"enabled":true,"v1":{"enabled":true},"v2":{"enabled":false,"prometheus":{"enabled":true},"stackd
river":{"configOverride":{},"enabled":false,"logging":false,"monitoring":false,"topology":false}}},"tracing":{"enabled":false,"ingress":{"enabled":false},"jaeger":{"accessMode":"ReadWriteMany","enabled":false,"hub":"docker.io/jaegertracing","memory":{"max_traces":50000},"namespace":"istio-system","persist":false,"spanStorageType":"badger","storageClassName":"","tag":"1.14"},"nodeSelector":{},"opencensus":{"exporters":{"stackdriver":{"enable_tracing":true}},"hub":"docker.io/omnition","resources":{"limits":{"cpu":"1","memory":"2Gi"},"requests":{"cpu":"200m","memory":"400Mi"}},"tag":"0.1.9"},"podAntiAffinityLabelSelector":[],"podAntiAffinityTermLabelSelector":[],"provider":"jaeger","service":{"annotations":{},"externalPort":9411,"name":"http-query","type":"ClusterIP"},"zipkin":{"hub":"docker.io/openzipkin","javaOptsHeap":700,"maxSpans":500000,"node":{"cpus":2},"probeStartupDelay":200,"queryPort":9411,"resources":{"limits":{"cpu":"300m","memory":"900Mi"},"requests":{"cpu":"150m","memory":"900Mi"}},"tag":"2.14.2"}},"version":""}' kind: ConfigMap metadata: annotations: labels: app: sidecar-injector istio: sidecar-injector operator.istio.io/component: Injector operator.istio.io/managed: Reconcile operator.istio.io/version: 1.4.6 release: istio name: cm-test-toshi </code></pre> <ol start="2"> <li>Apply patch</li> </ol> <pre><code>$ kubectl patch configmap/cm-test-toshi --type merge -p '{ "data":{ "config":{ "template":{ "containers":[{ "name":"istio-proxy", "lifecycle":{ "preStop":{ "exec":{ "command":["/bin/sh", "-c", "while [ $(netstat -plunt | grep tcp | grep -v envoy | wc -l | xargs) -ne 0]; do sleep 1; done"] } } } }] } } } }' </code></pre> <h1>Anything else we need to know?:</h1> <h1>Environment:</h1> <ul> <li>Kubernetes version (use kubectl version): v1.14.10-dispatcher</li> <li>Cloud provider or hardware configuration: v1.14.10-gke.17</li> </ul>
<p>According to the <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#alternate-forms-of-the-kubectl-patch-command" rel="nofollow noreferrer">Kubernetes</a> documentation:</p>
<blockquote>
<h2>Alternate forms of the kubectl patch command</h2>
<p>The <code>kubectl patch</code> command takes YAML or JSON. It can take the patch as a file or directly on the command line.</p>
<p>Create a file named <code>patch-file.json</code> that has this content:</p>
<pre><code>{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "patch-demo-ctr-2",
            "image": "redis"
          }
        ]
      }
    }
  }
}
</code></pre>
<p>The following commands are equivalent:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch deployment patch-demo --patch "$(cat patch-file.yaml)"
kubectl patch deployment patch-demo --patch 'spec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis'
kubectl patch deployment patch-demo --patch "$(cat patch-file.json)"
kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo-ctr-2","image": "redis"}]}}}}'
</code></pre>
</blockquote>
<p>The method you are trying to use is not supported: the <code>config</code> key under the ConfigMap's <code>data</code> holds the whole sidecar template as a single string, so <code>kubectl patch</code> cannot merge nested fields such as <code>template.containers</code> into it.</p>
<p>So you can use <code>gsed</code> to stream-edit the data as you mentioned, convert it to a template as the documentation suggests, or put the patch into a file and <code>cat</code> it like in the example above.</p>
<p>Hope it helps.</p>
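<p>For completeness, a minimal sketch of the edit-and-reapply approach (the ConfigMap name comes from your question; how you edit the embedded template string, e.g. with <code>sed</code>/<code>gsed</code> or an editor, is up to you):</p>
<pre class="lang-sh prettyprint-override"><code># Export the ConfigMap, edit the string stored under data.config, then re-apply it
kubectl get configmap cm-test-toshi -o yaml &gt; injector-cm.yaml
# ... insert the lifecycle/preStop block into the istio-proxy container section of the template ...
kubectl apply -f injector-cm.yaml
</code></pre>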
<p>We use Istio for distributed tracing. Our microservices sometimes need to hit external APIs, which usually communicate over https.</p>
<p>To measure the exact performance of the whole system, we want to trace the communication when hitting an external API.<br>
However, distributed tracing requires access to the request headers, and https does not allow this because the headers are encrypted.<br>
To confirm this, I deployed bookinfo on GKE with Istio enabled, entered the productpage container of the productpage pod, and executed the following commands.</p>
<pre><code>$ curl http://google.com
$ curl https://google.com
</code></pre>
<p>Only the http communication was displayed in Zipkin.</p>
<p>Is it possible to get a complete set of traces, including calls to external APIs that use https?</p>
<p>Based on the <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/trace/v2/trace.proto#config-trace-v2-tracing" rel="nofollow noreferrer">Envoy documentation</a>, it doesn't support tracing https traffic.</p>
<blockquote>
<p>The tracing configuration specifies global settings for the HTTP tracer used by Envoy. The configuration is defined by the Bootstrap tracing field. Envoy may support other tracers in the future, but right now the HTTP tracer is the only one supported.</p>
</blockquote>
<p>And according to this post on <a href="https://stackoverflow.com/questions/187655/are-https-headers-encrypted">Stack Overflow</a>:</p>
<blockquote>
<p>HTTPS (HTTP over SSL) sends all HTTP content over an SSL tunnel, so HTTP content and headers are encrypted as well.</p>
</blockquote>
<p>Since the sidecar only sees an opaque, encrypted TCP stream for outbound https, it cannot read or inject the tracing headers it needs to build a span.</p>
<p>I have even tried to reproduce this, but as in your case Zipkin showed traces only for http.</p>
<p>Based on that, I would say it's not possible to use Zipkin to trace https calls made directly from the application.</p>
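<p>If you want to check this yourself, one way (just a sketch, assuming the pod has outbound internet access and that plain-HTTP egress goes through the sidecar as in your test) is to call an endpoint that echoes the request headers back, such as httpbin.org:</p>
<pre><code>$ curl -s http://httpbin.org/headers    # the echoed headers should include the tracing headers added by the sidecar (e.g. x-b3-*)
$ curl -s https://httpbin.org/headers   # encrypted end to end, so the sidecar cannot read or add any headers
</code></pre>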
<p>I'm looking for the API used by Kubernetes clients to roll back deployments.</p> <p>In older versions of the Kubernetes API docs I can find that <code>POST /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/rollback</code> (<a href="https://v1-15.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#rollback-deployment-v1beta1-extensions" rel="nofollow noreferrer">reference</a>) is the API being used. However, this endpoint seems to have been removed in versions later than 1.18, and I can't find any replacement for it in the new documentation.</p> <p>If the API no longer exists, how do clients such as kubectl or any of the client libraries roll back deployments in newer versions of Kubernetes?</p>
<p>The missing API is a result of the changes made in the newest Kubernetes version, 1.19:</p>
<blockquote>
<p><code>apiextensions.k8s.io/v1beta1</code> is deprecated in favor of <code>apiextensions.k8s.io/v1</code> (<a href="https://github.com/kubernetes/kubernetes/pull/90673" rel="nofollow noreferrer">#90673</a>, <a href="https://github.com/deads2k" rel="nofollow noreferrer">@deads2k</a>) [SIG API Machinery]</p>
</blockquote>
<p>As suggested by the community, running <code>kubectl</code> with a high verbosity level will show you exactly which API calls it makes when rolling back, so you can see how it works without the old rollback endpoint. You can read more about output verbosity and debugging <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">here</a>.</p>
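<p>As an illustration (the deployment name and revision number are placeholders), you can watch what kubectl actually does during a rollback on a current cluster:</p>
<pre class="lang-sh prettyprint-override"><code># Show the available revisions first
kubectl rollout history deployment/my-deployment

# Roll back to a specific revision and print the HTTP calls kubectl makes
kubectl rollout undo deployment/my-deployment --to-revision=2 -v=8
</code></pre>
<p>In the verbose output you should see kubectl read the Deployment and its ReplicaSets and then PATCH the Deployment's pod template itself, rather than calling the old <code>/rollback</code> subresource.</p>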
<p>I'm using an AKS cluster running with K8s v1.16.15.</p> <p>I'm following this simple example to assign CPU resources to a pod, and it does not work: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p> <p>After applying this YAML file for the request,</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: &quot;1&quot;
      requests:
        cpu: &quot;0.5&quot;
    args:
    - -cpus
    - &quot;2&quot;
</code></pre>
<p>if I run <code>kubectl describe pod</code>, I get the following:</p>
<pre><code>Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  &lt;unknown&gt;  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
</code></pre>
<p>But CPU seems to be available; if I run <code>kubectl top nodes</code>, I get:</p>
<pre><code>CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
702m         36%    4587Mi          100%
</code></pre>
<p>Maybe it is related to some AKS configuration, but I can't figure it out.</p> <p>Do you have an idea of what is happening?</p> <p>Thanks a lot in advance!!</p>
<p>The previous answer explains well why this can happen. What can be added is that, when scheduling pods that have resource requests, you have to be aware of the resources your other cluster objects consume. System objects also use your resources, and even with a small cluster you may have enabled some add-on that consumes node resources.</p>
<p>So your node has a certain amount of CPU and memory it can allocate to pods. During scheduling, the scheduler only takes into consideration nodes with enough unallocated resources to meet the pod's requests. If the amount of unallocated CPU or memory is less than what the pod requests, Kubernetes will not schedule the pod to that node, because the node can’t provide the minimum amount required by the pod.</p>
<p>If you describe your node you will see the pods that are already running and consuming your resources, and the total <code>Allocated resources</code>:</p>
<pre class="lang-sh prettyprint-override"><code>  Namespace     Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------     ----                                           ------------  ----------  ---------------  -------------  ---
  default       elasticsearch-master-0                         1 (25%)       1 (25%)     2Gi (13%)        4Gi (27%)      8d
  default       test-5487d9b57b-4pz8v                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27d
  kube-system   coredns-66bff467f8-rhbnj                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     35d
  kube-system   etcd-minikube                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d
  kube-system   httpecho                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34d
  kube-system   ingress-nginx-controller-69ccf5d9d8-rbdf8      100m (2%)     0 (0%)      90Mi (0%)        0 (0%)         34d
  kube-system   kube-apiserver-minikube                        250m (6%)     0 (0%)      0 (0%)           0 (0%)         16d
  kube-system   kube-controller-manager-minikube               200m (5%)     0 (0%)      0 (0%)           0 (0%)         35d
  kube-system   kube-scheduler-minikube                        100m (2%)     0 (0%)      0 (0%)           0 (0%)         35d
  kube-system   traefik-ingress-controller-78b4959fdf-8kp5k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34d

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1750m (43%)   1 (25%)
  memory             2208Mi (14%)  4266Mi (28%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
</code></pre>
<p>Now the most important part is what you can do about it:</p>
<ol>
<li>You can enable <a href="https://azure.microsoft.com/en-gb/updates/generally-available-aks-cluster-autoscaler/" rel="nofollow noreferrer">autoscaling</a> so that the system automatically provisions extra nodes and resources when needed. This of course assumes that you have run out of resources and need more.</li>
<li>You can provision an appropriately sized node yourself (depending on how you bootstrapped your cluster).</li>
<li>You can turn off any add-on services you don't need that might be taking the resources your pod requests.</li>
</ol>
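<p>As a quick check on your side (just a sketch; the node name is taken from your own cluster), compare the node's allocatable CPU with what is already requested:</p>
<pre class="lang-sh prettyprint-override"><code># Pick the (single) node of the cluster
NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

# CPU the node can hand out to pods after system reservations
kubectl get node "$NODE" -o jsonpath='{.status.allocatable.cpu}{"\n"}'

# Per-pod requests/limits and the "Allocated resources" summary at the bottom
kubectl describe node "$NODE"
</code></pre>
<p>If the allocatable CPU minus the CPU already requested is below the 500m your pod asks for, the scheduler will report <code>Insufficient cpu</code> even though <code>kubectl top</code> shows spare capacity, because scheduling is based on requests, not on actual usage.</p>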
<p>What are the key benefits of using <code>ServiceEntry</code> when I can simply create a <code>Service</code> (and, if this service is a set of external IPs, define <code>Endpoints</code> instead of a <code>selector</code>)? In which cases can I not rely on a <code>Service</code>?</p>
<p>I would say the key benefits are mentioned in the <a href="https://istio.io/docs/concepts/traffic-management/#service-entries" rel="noreferrer">documentation</a>: you can configure traffic routing and define retries, timeouts, fault injection, etc.</p>
<blockquote>
<p>A service entry describes the properties of a service (<strong>DNS name, VIPs, ports, protocols, endpoints</strong>). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).</p>
</blockquote>
<hr>
<blockquote>
<p><strong>You use a <a href="https://istio.io/docs/reference/config/networking/service-entry/#ServiceEntry" rel="noreferrer">service entry</a> to add an entry to the service registry that Istio maintains internally.</strong> <strong>After you add the service entry, the Envoy proxies can send traffic to the service as if it was a service in your mesh.</strong> Configuring service entries allows you to manage traffic for services running outside of the mesh, including the following tasks:</p>
</blockquote>
<ul>
<li><strong>Redirect and forward traffic for external destinations, such as APIs consumed from the web, or traffic to services in legacy infrastructure.</strong></li>
<li><strong>Define <a href="https://istio.io/docs/concepts/traffic-management/#retries" rel="noreferrer">retry</a>, <a href="https://istio.io/docs/concepts/traffic-management/#timeouts" rel="noreferrer">timeout</a>, and <a href="https://istio.io/docs/concepts/traffic-management/#fault-injection" rel="noreferrer">fault injection</a> policies for external destinations.</strong></li>
<li><strong>Run a mesh service in a Virtual Machine (VM) by <a href="https://istio.io/docs/examples/virtual-machines/" rel="noreferrer">adding VMs to your mesh.</a></strong></li>
<li><strong>Logically add services from a different cluster to the mesh to configure a <a href="https://istio.io/docs/setup/install/multicluster/gateways/#configure-the-example-services" rel="noreferrer">multicluster Istio mesh</a> on Kubernetes.</strong></li>
</ul>
<blockquote>
<p>You don’t need to add a service entry for every external service that you want your mesh services to use. By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.</p>
<p>The following example mesh-external service entry adds the ext-svc.example.com external dependency to Istio’s service registry:</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
</code></pre>
<blockquote>
<p>You specify the external resource using the hosts field. 
You can qualify it fully or use a wildcard prefixed domain name.</p> <p><strong>You can configure virtual services and destination rules to control traffic to a service entry in a more granular way, in the same way you configure traffic for any other service in the mesh.</strong> For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that we configured using the service entry:</p> </blockquote> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem </code></pre> <blockquote> <p>See the <a href="https://istio.io/docs/reference/config/networking/service-entry/" rel="noreferrer">Service Entry reference</a> for more possible configuration options.</p> </blockquote>
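<p>As a small usage sketch (assuming <code>istioctl</code> is installed, that you saved the service entry above as <code>service-entry.yaml</code>, and that there is a sidecar-injected pod labelled <code>app=sleep</code>, such as the sleep sample, in your mesh), you can apply the service entry and then check that the external host shows up in the sidecar's Envoy configuration:</p>
<pre class="lang-sh prettyprint-override"><code># Register the external service in the mesh
kubectl apply -f service-entry.yaml

# The external host should now appear as a cluster in any sidecar
istioctl proxy-config cluster $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') | grep ext-svc.example.com
</code></pre>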