Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>I am trying to deploy a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PodDisruptionBudget</a> for my deployment, but when I deploy this example</p>
<pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-deployment
</code></pre>
<p>with this deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-deployment-app
  template:
    metadata:
      labels:
        app: example-deployment-app
    spec:
      ...
</code></pre>
<p>I get the response</p>
<pre><code>$ kubectl get pdb
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
example-pdb 1 N/A 0 7s
</code></pre>
<p>What does it mean for "ALLOWED DISRUPTIONS" to be 0?</p>
| jjbskir | <p>As mentioned by <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget" rel="noreferrer">Specifying a PodDisruptionBudget</a>:</p>
<blockquote>
<p>A <code>PodDisruptionBudget</code> has three fields:</p>
<ul>
<li><p>A label selector <code>.spec.selector</code> to specify the set of pods to which it applies. This field is required.</p>
</li>
<li><p><code>.spec.minAvailable</code> which is a description of the number of pods from that set that must still be available after the eviction, even in
the absence of the evicted pod. <code>minAvailable</code> can be either an
absolute number or a percentage.</p>
</li>
<li><p><code>.spec.maxUnavailable</code> (available in Kubernetes 1.7 and higher) which is a description of the number of pods from that set that can be
unavailable after the eviction. It can be either an absolute number or
a percentage.</p>
</li>
</ul>
</blockquote>
<p>In your case the <code>.spec.minAvailable</code> is set to <code>1</code>, so <code>1</code> Pod must always be available, even during a disruption.</p>
<p>Now, your Deployment's <code>.spec.replicas</code> is set to <code>1</code>, which in combination with <code>.spec.minAvailable: 1</code> means that no voluntary disruptions are allowed for that config: evicting the single Pod would drop the number of available Pods below <code>minAvailable</code>.</p>
<p>Take a look at the <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/#check-the-status-of-the-pdb" rel="noreferrer">official example</a>:</p>
<blockquote>
<p>Use <code>kubectl</code> to check that your PDB is created.</p>
<p>Assuming you don't actually have pods matching <code>app: zookeeper</code> in
your namespace, then you'll see something like this:</p>
<pre><code>kubectl get poddisruptionbudgets
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zk-pdb 2 N/A 0 7s
</code></pre>
<p>If there are matching pods (say, 3), then you would see something like
this:</p>
<pre><code>kubectl get poddisruptionbudgets
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zk-pdb 2 N/A 1 7s
</code></pre>
<p>The non-zero value for <code>ALLOWED DISRUPTIONS</code> means that the disruption
controller has seen the pods, counted the matching pods, and updated
the status of the PDB.</p>
<p>You can get more information about the status of a PDB with this
command:</p>
<pre><code>kubectl get poddisruptionbudgets zk-pdb -o yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  annotations:
…
  creationTimestamp: "2020-03-04T04:22:56Z"
  generation: 1
  name: zk-pdb
…
status:
  currentHealthy: 3
  desiredHealthy: 2
  disruptionsAllowed: 1
  expectedPods: 3
  observedGeneration: 1
</code></pre>
</blockquote>
<p>You can see that if <code>.spec.minAvailable</code> is set to 2 and there are 3 running Pods, then <code>disruptionsAllowed</code> is actually <code>1</code>. You can check the same with your use case.</p>
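<p>If you want the PDB in your example to allow a voluntary disruption, the simplest change (a minimal sketch, not the only option) is to run more matching Pods than <code>minAvailable</code> requires, e.g.:</p>
<pre><code>spec:
  replicas: 2   # 2 healthy matching Pods with minAvailable: 1 leaves 1 allowed disruption
</code></pre>
<p>Once those Pods are up, <code>kubectl get pdb</code> should report <code>ALLOWED DISRUPTIONS</code> as <code>1</code>. Also double-check that the PDB's selector really matches the Pod labels: in your snippets the PDB selects <code>app: example-deployment</code> while the Pod template is labeled <code>app: example-deployment-app</code>, and a PDB that matches no Pods also shows <code>0</code> allowed disruptions, just like the zookeeper example above.</p>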
| Wytrzymały Wiktor |
<p>I'm a beginner in Kubernetes and I have a situation as following: I have two differents Pods: <strong>PodA</strong> and <strong>PodB</strong>. Firstly, I want to expose <strong>PodA</strong> to the outside world, so I create a <strong>Service</strong> (type NodePort or LoadBalancer) for <strong>PodA</strong>, which is not difficult to understand for me. </p>
<p>Then I want <strong>PodA</strong> to communicate with <strong>PodB</strong>, and after several hours of googling, I found that the answer is that I also need to create a <strong>Service</strong> (type ClusterIP if I want to keep <strong>PodB</strong> <em>only visible</em> inside the cluster) for <strong>PodB</strong>, and if I do so, I can let <strong>PodA</strong> and <strong>PodB</strong> communicate with each other. But the problem is I also found <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-networking-guide-beginners.html" rel="noreferrer">this article</a>. According to this webpage, <a href="https://i.stack.imgur.com/6PdHj.gif" rel="noreferrer"><em>the communication between pods on the same node</em></a> can be done via <code>cbr0</code>, a <strong>Network Bridge</strong>, and <a href="https://i.stack.imgur.com/rEkB6.gif" rel="noreferrer"><em>the communication between pods on different nodes</em></a> can be done via a <code>route table</code> of the cluster, and they don't mention the <strong>Service</strong> object at all (which would mean we don't need a <strong>Service</strong> object?). </p>
<p>In fact, I also read the documents of K8s and I found in the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="noreferrer">Cluster Networking</a> </p>
<blockquote>
<p><strong>Cluster Networking</strong><br>
...<br>
2. Pod-to-Pod communications: this is the primary focus of this document.<br>
...<br></p>
</blockquote>
<p>where they also focus on to the <strong>Pod-to-Pod communications</strong>, but there is no stuff relevant to the <strong>Service</strong> object.</p>
<p>So, I'm really confused right now and my question is: could you please explain the connection between the things described in <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-networking-guide-beginners.html" rel="noreferrer">the article</a> and the <strong>Service</strong> object? Is the <strong>Service</strong> object a high-level abstraction over <code>cbr0</code> and the <code>route table</code>? And in the end, how do the <strong>Pods</strong> communicate with each other?</p>
<p>If I have misunderstood something, please point it out for me; I'd really appreciate that.</p>
<p>Thank you guys !!!</p>
| nxh6991 | <p>Motivation behind using a service in a Kubernetes cluster.</p>
<p>Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. If you use a Deployment to run your app, it can create and destroy Pods dynamically.</p>
<p>Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.</p>
<p>This leads to a problem: if some set of Pods (call them “backends”) provides functionality to other Pods (call them “frontends”) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?</p>
<p>That being said, a service is handy when your deployments (podA and podB) are dynamically managed.</p>
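<p>For illustration, a minimal sketch of a ClusterIP Service for PodB (the names and the <code>app: pod-b</code> label are assumptions, not taken from your manifests):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod-b           # PodA can then reach PodB at http://pod-b:8080 (same namespace)
spec:
  type: ClusterIP       # the default type; only reachable from inside the cluster
  selector:
    app: pod-b          # must match PodB's Pod labels
  ports:
    - port: 8080        # port the Service exposes
      targetPort: 8080  # port the PodB container listens on
</code></pre>
<p>The Service sits on top of the Pod-to-Pod networking described in that article: the <code>cbr0</code> bridge and the route tables move the packets between Pod IPs, while the Service (implemented by kube-proxy) gives you a stable virtual IP and DNS name that load-balances across whichever Pod IPs currently back it.</p>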
| Khalid K |
<p>We have been trying to configure a K8 cluster that is deployed into Google Cloud Platform using the most economical set-up possible. Our solution will be deployed into different regional data centers due to regulatory and Geo-political constraints surrounding our commercial billing as a service platform called Bill Rush.</p>
<p>Given our regional requirements means that we want to make use of the following infrastructure settings:</p>
<ol>
<li>Committed Use virtual machine resource allocations. Our K8 nodes will be allocated to predefined fixed term compute infrastructure quotas when provisioned.</li>
<li>Standard Network Tier - as local customers are only one or two hops away from a GCP regional/zone data center location we are happy to use external network providers to carry traffic across to the closest google network egress point into the data center. Premium network routing is not required. </li>
<li>Regional environments and deployments. We only require a system to be running regionally across one or two zones for redundancy. We do not require fancier global redundancy set-ups.</li>
</ol>
<p>Using these 3 options would give us the cheapest set-up for each of our regional application environments. </p>
<p>Also, all regional instances need bookmarkable URL's so that users can easily find our application environments. As such we need to seed each environment with DNS and external IPs. These need to be referenced in our YAML ingress files when we apply them to our K8 cluster environments.</p>
<p><strong>The Issue:</strong></p>
<p>We would like to use conventional Kubernetes best practice and define an ingress. This will expose an external entry point into the cluster that is provisioned and managed by a GKE specific Google Cloud Controller.</p>
<p>In the case of a GKE ingress, only a single set-up is supported: A Global HTTP(S) Load Balancer which includes [proxy, forwarding rule, external IPs, back ends, certificates]. When using a regional external IP the LB set-up fails. </p>
<p><strong>Questions:</strong></p>
<ol>
<li>Why are we not allowed to use regional external IPs in an ingress YAML declaration?</li>
<li>What alternative GKE cluster configurations will support a Standard network tier compliant external IP address?</li>
<li>Will this impact our ability to use Anthos for development and UAT clusters deployed on-premise?</li>
</ol>
<p>Thanks in advance.</p>
| Shaun Forgie | <p>1) Why are we not allowed to use regional external IPs in an ingress YAML declaration?</p>
<p>As per GCP documentation on <a href="https://cloud.google.com/compute/docs/ip-addresses#reservedaddress" rel="nofollow noreferrer">Static external IP addresses</a>, "Static external IP addresses can be either a regional or a global resource. A regional static IP address allows resources of that region or resources of zones within that region to use the IP address. In this case, VM instances and regional forwarding rules can use a regional static IP address.</p>
<p>Global static external IP addresses are available only to global forwarding rules, used for global load balancing. You can't assign a global IP address to a regional or zonal resource."</p>
<p>2) What alternative GKE cluster configurations will support a standard network tier compliant external IP address?</p>
<p>You might have to consider a regional ingress or a third-party NGINX ingress and use a regional IP address there; you would have to create a new regional external IP address using the <a href="https://cloud.google.com/sdk/gcloud/reference/compute/addresses/create" rel="nofollow noreferrer">--region flag</a>. You might be able to find more information in this community tutorial on <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">"Ingress with NGINX controller on Google Kubernetes Engine"</a>.</p>
<p>3) Will this impact our ability to use Anthos for development and UAT clusters deployed on-premise.</p>
<p>As per my understanding, it will use regional resources, so you won't get the advantages of global resources, such as reaching the GCP front end (GFE) closest to the client or your service user.</p>
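<p>For illustration, a rough sketch of what reserving and using a Standard tier regional address could look like (the region, names and the NGINX ingress controller values below are assumptions, not a tested setup):</p>
<pre><code># reserve a regional external IP on the Standard network tier
gcloud compute addresses create my-regional-ip \
    --region=us-central1 \
    --network-tier=STANDARD

# then hand that IP to the ingress controller's LoadBalancer Service,
# e.g. with the chart used in the linked tutorial:
helm install nginx-ingress stable/nginx-ingress \
    --set controller.service.loadBalancerIP=RESERVED_IP_HERE
</code></pre>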
| userX |
<p>I need to understand why Pods automount the service account secret.</p>
<p>If we disable automounting of the service account, will this affect any operation of our application, which already has a service account specified in the Pod spec but does not have automounting of the service account disabled?</p>
| Vowneee | <p>How to disable automounting of the service account is explained in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">linked documentation</a>:</p>
<blockquote>
<p>In version 1.6+, you can opt out of automounting API credentials for a
service account by setting <code>automountServiceAccountToken: false</code> on
the service account.</p>
<p>In version 1.6+, you can also opt out of automounting API credentials
for a particular pod.</p>
</blockquote>
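<p>For illustration, a minimal sketch of both opt-out options mentioned above (all names are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
automountServiceAccountToken: false   # opt out for every Pod that uses this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: my-sa
  automountServiceAccountToken: false # opt out for this particular Pod (takes precedence over the SA setting)
  containers:
    - name: app
      image: nginx
</code></pre>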
<p>There are also some solutions suggested to mitigate the security issue:</p>
<ul>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-353824159" rel="nofollow noreferrer">Using RBAC</a></p>
</li>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-659304477" rel="nofollow noreferrer">Using mutating webhooks</a></p>
</li>
</ul>
<hr />
<blockquote>
<p>If we disable the automout of service account, will this affect any operation of our application which is already have service account specified in the pod spec part</p>
</blockquote>
<p>If you disable automounting of the SA secret, the Pod won't be able to access the K8s API server or do any other operation that requires authenticating as a Service Account. It's hard to tell if that would impact your workload or not, only you can tell. A web server or a worker Pod that only talks to other user-defined services might do fine without SA access, but if they want e.g. to spawn K8s Jobs from an application Pod they would need the SA.</p>
<hr />
<blockquote>
<p>But would like to understand why the secrete of the Service account getting mounted to the pods eventhough it's a security escalation.</p>
</blockquote>
<p>The point seems to be, as so often in computer security, that we need to weigh convenience against security. Automatically mounting SA secrets into a Pod makes it easy (convenience) to use the K8s API. Disabling this by default is more secure but also less convenient, as you need to explicitly mark those Pods that need access to the K8s API. Whether this is too much of a burden depends very much on the workload, and there's likely no default answer that fits everyone.</p>
<hr />
<blockquote>
<p>Why was it not changed to the more secure default?</p>
</blockquote>
<p>This was answered <a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-494986292" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>disabling by default is not backwards compatible, so is not a
realistic option until (if) a v2 Pod API is made</p>
</blockquote>
<p>and <a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-522209979" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>I'm not saying that it's unreasonable, just that it's going to be a
hard pill to swallow for GA distributions of Kubernetes. I could see
this happening in the v2 pod API.</p>
</blockquote>
| Wytrzymały Wiktor |
<p>Consider following expression</p>
<pre><code>kubectl get deploy -o 'jsonpath={.items[*].apiVersion}'
</code></pre>
<p>It returns following output:</p>
<pre><code>apps/v1 apps/v1
</code></pre>
<p>When using exactly the same expression with custom-columns:</p>
<pre><code>kubectl get deploy -o 'custom-columns=A:{.items[*].apiVersion}'
</code></pre>
<p>I get:</p>
<pre><code>A
<none>
<none>
</code></pre>
<p>What am I doing wrong?</p>
| Kshitiz Sharma | <p>Actually, the case you are testing is somewhat misleading, because both <code>Deployment</code> and <code>DeploymentList</code> have the same apiVersion (<code>apps/v1</code>).
So let's work on <code>.metadata.name</code> instead, for example:</p>
<pre><code>kubectl -n kube-system get deploy -o 'jsonpath={.items[*].metadata.name}'
</code></pre>
<p>You will get a result like this:</p>
<pre><code>calico-kube-controllers coredns dns-autoscaler kubernetes-dashboard metrics-server rbd-provisioner
</code></pre>
<p>But for custom-columns it is somewhat different. A table shows a list of items, so the path you provide is evaluated against each row of the table, i.e. against each item rather than against the list. So you should use:</p>
<pre><code>kubectl -n kube-system get deploy -o 'custom-columns=A:{.metadata.name}'
</code></pre>
<p>And you will get the correct result:</p>
<pre><code>A
calico-kube-controllers
coredns
dns-autoscaler
kubernetes-dashboard
metrics-server
rbd-provisioner
</code></pre>
<p>So the problem was with using <code>items[*]</code> on <code>custom-columns</code>.</p>
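<p>Applied to the original <code>apiVersion</code> test, the same fix would be to drop <code>items[*]</code> from the expression:</p>
<pre><code>kubectl get deploy -o 'custom-columns=A:{.apiVersion}'
</code></pre>
<p>which should print <code>apps/v1</code> once per Deployment (at least on reasonably recent kubectl versions) instead of empty values.</p>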
| Emad Mohamadi |
<p>Is there any alias we can make for all-namespaces, as kubectl doesn't recognise the command <code>kubectl --all-namespaces</code>, or any kind of shortcut to minimize typing the whole flag?</p>
| Tinkaal Gogoi | <p>New in kubectl v1.14, you can use <code>-A</code> instead of <code>--all-namespaces</code>, eg:</p>
<p><code>kubectl get -A pod</code></p>
<p>(rejoice)</p>
<p>Reference:
<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#a-note-on-all-namespaces" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#a-note-on-all-namespaces</a></p>
| Tracey Jaquith |
<p>How can I speedup the rollout of new images in Kubernetes?</p>
<p>Currently, we have an automated build job that modifies a yaml file to point to a new revision and then runs <code>kubectl apply</code> on it.</p>
<p>It works, but it takes long delays (up to 20 minutes PER POD) before all pods with the previous revision are replaced with the latest.</p>
<p>Also, the deployment is configured for 3 replicas. We see one pod at a time is started with the new revision. (Is this the Kubernetes "surge" ?) But that is too slow, I would rather kill all 3 pods and have 3 new ones with the new image.</p>
| Leonel | <p>Jonas and SYN are right but I would like to expand this topic with some additional info and examples.</p>
<p>You have two types of strategies to choose from when specifying the way of updating your deployments:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate Deployment</a>: All existing Pods are killed before new ones are created.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update Deployment</a>: The Deployment updates Pods in a rolling update fashion.</p>
</li>
</ul>
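<p>Since you said you would rather kill all 3 Pods and start 3 new ones with the new image, <code>Recreate</code> is the closest match; a minimal sketch (note that it accepts downtime between the old Pods stopping and the new ones becoming ready):</p>
<pre><code>spec:
  replicas: 3
  strategy:
    type: Recreate   # all existing Pods are killed before the new ones are created
</code></pre>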
<p>The default and more recommended one is the <code>.spec.strategy.type==RollingUpdate</code>. See the examples below:</p>
<pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
</code></pre>
<p>In this example there would be one additional Pod (<code>maxSurge: 1</code>) above the desired number of 3, and the number of available Pods cannot go lower than that number (<code>maxUnavailable: 0</code>).</p>
<p>Choosing this config, the Kubernetes will spin up an additional Pod, then stop an “old” one. If there’s another Node available to deploy this Pod, the system will be able to handle the same workload during deployment. If not, the Pod will be deployed on an already used Node at the cost of resources from other Pods hosted on the same Node.</p>
<p>You can also try something like this:</p>
<pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
</code></pre>
<p>With the example above there would be no additional Pods (<code>maxSurge: 0</code>) and only a single Pod at a time would be unavailable (<code>maxUnavailable: 1</code>).</p>
<p>In this case, Kubernetes will first stop a Pod before starting up a new one. The advantage of that is that the infrastructure doesn’t need to scale up but the maximum workload will be less.</p>
<p>If you chose to use the percentage values for <code>maxSurge</code> and <code>maxUnavailable</code> you need to remember that:</p>
<ul>
<li><p><code>maxSurge</code> - the absolute number is calculated from the percentage by <strong>rounding up</strong></p>
</li>
<li><p><code>maxUnavailable</code> - the absolute number is calculated from percentage by <strong>rounding down</strong></p>
</li>
</ul>
<p>With the <code>RollingUpdate</code> defined correctly, you also have to make sure your applications provide endpoints that Kubernetes can query to learn the app's status. Below is a <code>/greeting</code> endpoint that returns an HTTP 200 status when the app is ready to handle requests, and HTTP 500 when it's not:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /greeting
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 1
</code></pre>
<ul>
<li><p><code>initialDelaySeconds</code> - Time (in seconds) before the first check for readiness is done.</p>
</li>
<li><p><code>periodSeconds</code> - Time (in seconds) between two readiness checks after the first one.</p>
</li>
<li><p><code>successThreshold</code> - Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p>
</li>
<li><p><code>timeoutSeconds</code> - Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p>
</li>
</ul>
<p>More on the topic of liveness/readiness probes can be found <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">here</a>.</p>
| Wytrzymały Wiktor |
<p>I'm trying to use Snowflake spark connector packages in <code>spark-submit</code> using <code>--packages</code></p>
<p>when i run in local, it is working fine. I'm able to connect to <code>Snowflake table</code> and returning a Spark <code>DataFrame</code>.</p>
<pre><code>spark-submit --packages net.snowflake:snowflake-jdbc:2.8.1,net.snowflake:spark-snowflake_2.10:2.0.0 test_sf.py
</code></pre>
<p>but when I try to pass the --master argument, it fails, stating that the Snowflake class is not available.</p>
<pre><code>spark-submit --packages net.snowflake:snowflake-jdbc:2.8.1,net.snowflake:spark-snowflake_2.10:2.0.0 --master spark://spark-master.cluster.local:7077 test_sf.py
</code></pre>
<p><strong>Update:</strong></p>
<p>I have tried all the options, like <code>--jars</code>, <code>extraClassPath</code> on the driver and executor, and <code>--packages</code>, but nothing seems to be working. Is it because of some problem in the Spark standalone cluster?</p>
<p><strong>Latest update:</strong></p>
<p>It is working when I specify the <code>repository URL</code> in <code>--jars</code> instead of a file path. So basically I have to upload the jars to some repository and point to that.</p>
<p><strong>error log:</strong></p>
<pre><code>Caused by: java.lang.ClassNotFoundException: net.snowflake.spark.snowflake.io.SnowflakePartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:313)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
</code></pre>
| Shankar | <p>I am posting on behalf of a colleague who had some insights on this:</p>
<p>When you run spark-submit from your laptop to run a workload on Kubernetes (managed or otherwise), it requires you to provide the k8s master URL and not the Spark master URL. Whatever "spark://spark-master.cluster.local:7077" is pointing to does not have a line of sight from your machine; it may not even exist in your original issue. When you use spark-submit it creates the executor and driver nodes inside k8s, and at that point a Spark master URL will be available, but even then that URL is reachable only from inside k8s unless a line of sight is made available.</p>
<p>Per your Update section: <code>--packages</code> searches for packages in the local Maven repo, or in a remote repo if a path to one is provided. Alternatively, you can use the <code>--jars</code> option: bake the jars into the container that runs the Spark job and then provide their local paths in <code>--jars</code>.</p>
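<p>To make the last point concrete, a rough sketch of the "bake the jars into the image and reference them with <code>--jars</code>" approach (the <code>/opt/jars</code> paths are an assumption; the jars would have to exist at those paths in both the driver and executor images):</p>
<pre><code>spark-submit \
  --master spark://spark-master.cluster.local:7077 \
  --jars /opt/jars/snowflake-jdbc-2.8.1.jar,/opt/jars/spark-snowflake_2.10-2.0.0.jar \
  test_sf.py
</code></pre>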
<p>Does any of this resonate with the updates and conclusions you reached in your updated question? </p>
| Rachel McGuigan |
<p>Is it possible to get timestamp using downward API or any other way in pod spec</p>
<p>Checked here <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api</a>, but couldn't find timestamp.</p>
<p>Below is sample section from yaml where i want to generate output file with timestamp.</p>
<pre><code> args:
- "audit"
- "--config"
- "/opt/app/app-config/app.yaml"
- "--format"
- "json"
- "--output-file"
- "/var/log/app/app-$(date +%s).log"
</code></pre>
<p>Similar questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/52218081/include-pod-creation-time-in-kubernetes-pod-name">Include Pod creation time in Kubernetes Pod name</a></li>
<li><a href="https://stackoverflow.com/questions/52218081/include-pod-creation-time-in-kubernetes-pod-name">Include Pod creation time in Kubernetes Pod name</a></li>
</ul>
| Sumit Murari | <p>As mentioned in comments by @David Maze</p>
<blockquote>
<p>You can wrap this in a sh -c invocation to cause a shell to do the subprocess expansion.</p>
</blockquote>
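<p>For illustration, a minimal sketch of that <code>sh -c</code> approach (the <code>myapp</code> binary name is a placeholder for whatever your container actually runs):</p>
<pre><code>command: ["/bin/sh", "-c"]
args:
  - >
    exec myapp audit
    --config /opt/app/app-config/app.yaml
    --format json
    --output-file "/var/log/app/app-$(date +%s).log"
</code></pre>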
<hr />
<p>Another way would be to use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#set" rel="nofollow noreferrer">kubectl set</a>.</p>
<p>You can use</p>
<pre><code>kubectl set env deployment/deployment-name TIMESTAMP=$(kubectl get pod pod-name -o jsonpath='{.metadata.creationTimestamp}')
</code></pre>
<p>This will set the creationTimestamp value as an environment variable named TIMESTAMP.</p>
<p>You can check the results with <code>kubectl describe pod pod-name</code> and search for Environment.</p>
<hr />
<p>Additionally you can use</p>
<pre><code>kubectl -n namespace get pod pod_name -o jsonpath="{range .status.conditions[*]}{.type}{','}{.lastTransitionTime}{'\n'}{end}"
</code></pre>
<p>to list all of the Pod's conditions together with their transition timestamps.</p>
| Jakub |
<pre><code> kubectl cp namespace/podname:/path/target .
</code></pre>
<p>If I use the instructed command from kubernetes guide, it only copies the contents inside the <code>target</code> directory and omits <code>target</code> itself.<br />
I don't want to use <code>mkdir</code> every time I need to copy.<br />
What's the option?</p>
| Lunartist | <p>I have a pod under <code>default</code> namespace called <code>ubuntu-pod</code> with a file located at root: <code>/decomission.log</code> and I got the same error:</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl cp default/ubuntu-pod:/decommission.log decommission.log
tar: Removing leading `/' from member names
</code></pre>
<p>The solution was to remove the slash and then I was able to copy the file with no message:</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl cp default/ubuntu-pod:decommission.log decommission.log
$ ls
decommission.log
</code></pre>
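<p>As for keeping the <code>target</code> directory itself (rather than only its contents), one workaround is to stream a tar archive through <code>kubectl exec</code> instead of using <code>kubectl cp</code>; a sketch, assuming <code>tar</code> is available in the container:</p>
<pre><code># archives the "target" directory inside the pod and extracts it locally,
# creating ./target in the current directory, directory included
kubectl exec -n namespace podname -- tar cf - -C /path target | tar xf -
</code></pre>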
| Cesar Celis |
<p>I'm trying to configure AWS SSO access to my EKS clusters, which are in a child account that I'm an admin of. I'm referencing <a href="https://www.powerupcloud.com/aws-eks-authentication-and-authorization-using-aws-single-signon/" rel="nofollow noreferrer">this document</a> and <a href="https://stackoverflow.com/questions/65660833/aws-eks-and-aws-sso-rbac-authentication-problem">this stack posting</a>, but I keep getting RBAC errors when I log in with SSO to the child account. How do I properly configure this? I still have IAM access enabled at the moment.</p>
<p>Error in console:</p>
<pre><code>Your current user or role does not have access to Kubernetes objects on this EKS cluster
This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.
</code></pre>
<p>Roles:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default:sso-admin
  namespace: default
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
</code></pre>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - rolearn: arn:aws:iam::xxxxx:role/AWSReservedSSOxxxxx
      username: me:{{SessionName}}
      groups:
        - default:sso-admin
</code></pre>
| risail | <p>A solution for this issue is well described in the official docs:</p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/" rel="nofollow noreferrer">How do I resolve the "Your current user or role does not have access to Kubernetes objects on this EKS cluster" error in Amazon EKS?</a></p>
<blockquote>
<p><strong>Short description</strong></p>
<p>You receive this error when you use the AWS Management Console with an
AWS Identity and Access Management (IAM) role or user that's not in
your Amazon EKS cluster's aws-auth ConfigMap.</p>
<p>When you create an Amazon EKS cluster, the IAM user or role (such as a
federated user that creates the cluster) is automatically granted
system:masters permissions in the cluster's RBAC configuration. If you
access the Amazon EKS console and your IAM user or role isn't part of
the aws-auth ConfigMap, then you can't see your Kubernetes workloads
or overview details for the cluster.</p>
<p>To grant additional AWS users or roles the ability to interact with
your cluster, you must edit the aws-auth ConfigMap within Kubernetes.</p>
<p><strong>Resolution</strong></p>
<p>Note: If you receive errors when running AWS Command Line Interface
(AWS CLI) commands, make sure that you’re using the most recent AWS
CLI version.</p>
</blockquote>
<p>You can follow the steps described there in order to solve your problem.</p>
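<p>For reference, an entry of the kind described there usually goes under <code>mapRoles</code> (IAM roles, including SSO-provisioned ones, belong there rather than under <code>mapUsers</code>); a rough sketch with placeholder values:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # ARN, username and group below are placeholders
    # note: the role ARN must not contain a path (drop e.g. aws-reserved/sso.amazonaws.com/)
    - rolearn: arn:aws:iam::111122223333:role/AWSReservedSSO_AdministratorAccess_xxxx
      username: sso-admin:{{SessionName}}
      groups:
        - system:masters   # or a group bound to your own Role/ClusterRole via a (Cluster)RoleBinding
</code></pre>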
| Wytrzymały Wiktor |
<p>Our team wants to be able to run the Visual Studio debugger against deployed instances of our ASP.NET application to our internal Kubernetes cluster. I need to figure out how to finish the puzzle but I'm not very familiar with Visual Studio 2019.</p>
<ul>
<li>The Docker image is compiled with the official .NET Core images and has /vsdbg populated with the latest version (which does not support --attach).</li>
<li>Visual Studio works with my Docker Desktop.</li>
<li>Kubectl is correctly configured. I can use either the kubernetes cluster included with Docker Desktop or our internal kubernetes cluster for testing.</li>
<li>Azure is currently not an option. I understand from the documentation that this is what Microsoft prefers me to do.</li>
</ul>
<p>How should I configure Visual Studio to be able to do this? </p>
| Thorbjørn Ravn Andersen | <p>Ok. Let’s get it started. First of all make sure that you have published your app in Debug mode! I prefer to use a new Docker feature multi-stage build for building my images so I would write something like this in the end of a build stage in Dockerfile:</p>
<pre><code>RUN dotnet publish -c Debug -o ./results
</code></pre>
<p>To push images to Minikube I use the local container registry as described here, but you can do it as you usually do.
When you have your container up and running, we can start hacking on it. I will use PowerShell for that purpose, but the same can easily be rewritten in any other terminal language. You can follow the tutorial step by step and execute the commands in your terminal one by one, checking variable values with the echo command when necessary.
In your *.yml file you should have a selector described something like this:</p>
<pre><code>selector:
matchLabels:
app: mywebapp
</code></pre>
<p>Grab it and use to define a $Selector var in your Powershell terminal:</p>
<pre><code>$Selector = 'app=mywebapp'
</code></pre>
<p>You need to find a pod where your containerized application is running by its selector:</p>
<pre><code>$pod = kubectl get pods --selector=$Selector -o jsonpath='{.items[0].metadata.name}';
</code></pre>
<p>Assuming that you have only one container on the pod now you can execute commands on that container. By default container does not have vsdbg installed so go ahead and install it:</p>
<pre><code>kubectl exec $pod -i -- apt-get update;
kubectl exec $pod -i -- apt-get install -y unzip;
kubectl exec $pod -i -- curl -sSL https://aka.ms/getvsdbgsh -o '/root/getvsdbg.sh';
kubectl exec $pod -i -- bash /root/getvsdbg.sh -v latest -l /vsdbg;
</code></pre>
<p>Next, you need to find PID of your app inside of the container:</p>
<pre><code>$prid = kubectl exec $pod -i -- pidof -s dotnet;
</code></pre>
<p>Normally it is equal to 1 but it is better to make fewer assumptions.
That’s it. Now you can start a debugger:</p>
<pre><code>kubectl exec $pod -i -- /vsdbg/vsdbg --interpreter=mi --attach $prid;
</code></pre>
<p>Don’t forget to execute the following commands before you close the window otherwise your app will stuck forever:</p>
<pre><code>-target-detach
-gdb-exit
</code></pre>
<p>Let’s put everything together, create a reusable script and save it somewhere near to the roots since you can use it with all your ASP.NET Core projects:</p>
<pre><code>param(
# the selector from your yml file
# selector:
# matchLabels:
# app: myweb
# -Selector app=myweb
[Parameter(Mandatory=$true)][string]$Selector
)
Write-Host '1. searching pod by selector:' $Selector '...';
$pod = kubectl get pods --selector=$Selector -o jsonpath='{.items[0].metadata.name}';
Write-Host '2. installing updates ...';
kubectl exec $pod -i -- apt-get update;
Write-Host '3. installing unzip ...';
kubectl exec $pod -i -- apt-get install -y --no-install-recommends unzip;
Write-Host '4. downloading getvsdbgsh ...';
kubectl exec $pod -i -- curl -sSL https://aka.ms/getvsdbgsh -o '/root/getvsdbg.sh';
Write-Host '5. installing vsdbg ...';
kubectl exec $pod -i -- bash /root/getvsdbg.sh -v latest -l /vsdbg;
$cmd = 'dotnet';
Write-Host '6. searching for' $cmd 'process PID in pod:' $pod '...';
$prid = kubectl exec $pod -i -- pidof -s $cmd;
Write-Host '7. attaching debugger to process with PID:' $prid 'in pod:' $pod '...';
kubectl exec $pod -i -- /vsdbg/vsdbg --interpreter=mi --attach $prid;
</code></pre>
<p>Now you can execute this script like this when the terminal is running from the script folder:</p>
<pre><code>powershell -ExecutionPolicy Bypass -File kubedbg.ps1 -Selector app=mywebapp
</code></pre>
<p>But aren’t we supposed to be debugging from Visual Studio? Yes! Let’s go further and launch our terminal process from Visual Studio MIEngine.
Open your project in Visual Studio. Add new XML file with the following content and name it kubedbg.xml:</p>
<pre class="lang-xml prettyprint-override"><code>
<PipeLaunchOptions xmlns="http://schemas.microsoft.com/vstudio/MDDDebuggerOptions/2014"
PipePath="powershell" TargetArchitecture="x64" MIMode="clrdbg"
PipeArguments="
-ExecutionPolicy Bypass
-File C:\kube\kubedbg.ps1
-Selector app=mywebapp">
<LaunchCompleteCommand>None</LaunchCompleteCommand>
</PipeLaunchOptions>
</code></pre>
<p>In the <code>-File</code> parameter you need to specify the absolute path to the script file we created before. Then press Ctrl+Alt+A to open the Command Window and run the following command:
<code>
Debug.MIDebugLaunch /Executable:dotnet /OptionsFile:absolute_path_to_kubedbg_xml
</code>
This command will start the debugging process inside Visual Studio with all the standard benefits you would expect. But don't stop debugging any other way than by pressing Detach All from the Debug menu!
This command is not very convenient to write all the time, though. Luckily, in Visual Studio you can specify aliases for commands with parameters. Eventually, you would need a new <code>kubedbg.xml</code> file for each project. With this in mind, go ahead and create your first alias by typing the following command in the Command Window:</p>
<pre><code>alias kubedbg.mywebapp Debug.MIDebugLaunch /Executable:dotnet
/OptionsFile:absolute_path_to_kubedbg.xml
</code></pre>
<p>After that, you can start debugging just by executing kubedbg.mywebapp in the Command Window. Even better, you can run the same command from the Find toolbar combobox, but with the prefix <code>>kubedbg.mywebapp</code>. That's not difficult since there is text completion too. You can read more about command aliases here.
Happy debugging!
PS: As a bonus, you can debug your app in exactly the same way even when it is running inside a public cloud. When kubectl is pointed at a cluster in the public cloud, it just works with the same script, and making fewer assumptions pays off, since inside a real cluster the process ID is not necessarily equal to 1.</p>
<p>Original author: <a href="https://medium.com/@pavel.agarkov/debugging-asp-net-core-app-running-in-kubernetes-minikube-from-visual-studio-2017-on-windows-6671ddc23d93" rel="noreferrer">https://medium.com/@pavel.agarkov/debugging-asp-net-core-app-running-in-kubernetes-minikube-from-visual-studio-2017-on-windows-6671ddc23d93</a></p>
| devcass |
<p>This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.</p>
<p><strong>On the kubernetes side:</strong> I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.</p>
<p><strong>On the client side:</strong> I have a Go client which can send gRPC requests to the service.</p>
<ul>
<li>It works fine when I <code>kubectl port-forward -n mynamespace service/myservice 8081:8081</code> and call my client via <code>client -url localhost:8081</code>.</li>
<li>When I close the port forward, and call with <code>client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081</code> my client fails to connect. (That url is the output of <code>kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'</code> with <code>:8081</code> appended.</li>
</ul>
<p><strong>Logs:</strong></p>
<ul>
<li>I looked at the <code>istio-system/istio-ingressgateway</code> service logs. I do not see an attempted connection.</li>
<li>I do see the bookinfo connections I made earlier when going over the <a href="https://istio.io/latest/docs/setup/getting-started/#ip" rel="nofollow noreferrer">istio bookinfo</a> tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show <code>"GET /productpage HTTP/1.1" 200</code>. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.</li>
</ul>
<p><strong>Istio's Ingress-Gateway</strong></p>
<pre><code>kubectl describe service -n istio-system istio-ingressgateway
</code></pre>
<p>outputs the following, which I suspect is the problem, port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default, I didn't open them (comments on how to close ports I don't use would be welcome but aren't the reason for this SO question)</p>
<pre><code>Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>So I think I did not properly open port 8081 for GRPC. What other logs or test can I run to help identify where this is coming from?</p>
<p>Here is the relevant yaml:</p>
<p><strong>Kubernetes Istio virtualservice:</strong> whose intent is to route anything on port 8081 to myservice</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myservice
namespace: mynamespace
spec:
hosts:
- "*"
gateways:
- myservice
http:
- match:
- port: 8081
route:
- destination:
host: myservice
</code></pre>
<p><strong>Kubernetes Istio gateway:</strong> whose intent is to open port 8081 for GRPC</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myservice
namespace: mynamespace
spec:
selector:
istio: ingressgateway
servers:
- name: myservice-plaintext
port:
number: 8081
name: grpc-svc-plaintext
protocol: GRPC
hosts:
- "*"
</code></pre>
<p><strong>Kubernetes service:</strong> showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice
namespace: mynamespace
labels:
app: myservice
spec:
selector:
app: myservice
ports:
- protocol: TCP
port: 8081
targetPort: 8081
name: grpc-svc-plaintext
</code></pre>
<p><strong>Kubernetes deployment:</strong> showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myservice
namespace: mynamespace
labels:
app: myservice
spec:
replicas: 1
selector:
matchLabels:
app: myservice
template:
metadata:
labels:
app: myservice
spec:
containers:
- name: myservice
image: [redacted]
ports:
- containerPort: 8081
</code></pre>
<p><strong>Re checking DNS works on the client:</strong></p>
<pre><code>getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
</code></pre>
<p>outputs 3 IP's, I'm assuming that's good.</p>
<pre><code>[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
</code></pre>
<p><strong>Checking Istio Ingressgateway's routes:</strong></p>
<pre><code>istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
</code></pre>
<p>returns</p>
<pre><code>Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8081 * /* myservice.mynamespace
* /healthz/ready*
* /stats/prometheus*
</code></pre>
<p>Port 8081 is routed to myservice.mynamespace, seems good to me.</p>
<p><strong>UPDATE 1:</strong>
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.</p>
<p>That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a <code>Host:[hostname]</code> in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.</p>
| mipnw | <blockquote>
<p>I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.</p>
</blockquote>
<p>I'm not sure how exactly you tried to add a custom port for the ingress gateway, but it's possible.</p>
<p>As far as I checked <a href="https://stackoverflow.com/questions/56661765/how-to-add-custom-port-for-istio-ingress-gateway">here</a>, it's possible to do in 3 ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered.</p>
<ul>
<li><a href="https://stackoverflow.com/a/56669078/11977760">Kubectl edit</a></li>
<li><a href="https://stackoverflow.com/a/58062314/11977760">Helm</a></li>
<li><a href="https://stackoverflow.com/a/61228394/11977760">Istio Operator</a></li>
</ul>
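<p>As an illustration of the Istio Operator route, a rough sketch for your setup (the port name and numbers just mirror your config, so treat this as a sketch rather than a verified manifest):</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              # depending on the Istio version this list may replace the defaults,
              # so you may need to repeat the default ports here as well
              - name: grpc-svc-plaintext
                port: 8081
                targetPort: 8081
</code></pre>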
<hr />
<p>Additional resources:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/51835752/how-to-create-custom-istio-ingress-gateway-controller">How to create custom istio ingress gateway controller?</a></li>
<li><a href="https://stackoverflow.com/questions/56643594/how-to-configure-ingress-gateway-in-istio">How to configure ingress gateway in istio?</a></li>
</ul>
<hr />
<blockquote>
<p>That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.</p>
</blockquote>
<p>I see you have already created a new question <a href="https://stackoverflow.com/questions/64037625/can-i-define-subdomains-for-a-classic-aws-elb-that-was-provisioned-by-istio">here</a>, so let's just move there.</p>
| Jakub |
<p>I am practicing <strong>Kubernetes</strong> through <strong>Minikube</strong> on my <strong>AWS EC2</strong> instance. As part of that, I have created a <strong>deployment</strong> and exposed it through a <strong>NodePort</strong> service, then checked with:</p>
<pre><code>curl http://<node-ip>:<service-port>
</code></pre>
<p>on the EC2 machine, and that worked fine. But when I hit the same URL in a browser it gave me:</p>
<pre><code>This site can't be reached
</code></pre>
<p>Can anyone help me what is the problem and how can I access this?</p>
<p>Thank you.</p>
<p>This is my Deployment YAML file:</p>
<pre>
<code>
apiVersion: apps/v1
kind: Deployment
metadata:
name: myfirstdeployment
labels:
app: web
spec:
replicas: 1
selector:
matchLabels:
app: web
template:
metadata:
name: myfirstpod
labels:
app: web
spec:
containers:
- name: myfirstpod-1
image: nginx
command: ["sleep","3600"]
ports:
- containerPort: 80
</code>
</pre>
<p>This is my Service YAML file</p>
<pre>
<code>
apiVersion: v1
kind: Service
metadata:
name: myfirstservice
spec:
selector:
app: web
ports:
- targetPort: 80 #target container's port
port: 80 #service port
nodePort: 30030 #node port that we access to
type: NodePort
</code>
</pre>
| Aravind | <p>I strongly recommend going through the official tutorial showing the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="noreferrer">Accessing apps</a> options in Minikube:</p>
<blockquote>
<p>How to access applications running within minikube There are two major
categories of services in Kubernetes:</p>
<ul>
<li>NodePort</li>
<li>LoadBalancer</li>
</ul>
<p>minikube supports either. Read on!</p>
</blockquote>
<p>There you will find how to use both, the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#nodeport-access" rel="noreferrer">NodePort access</a>:</p>
<blockquote>
<p>A <code>NodePort</code> service is the most basic way to get external traffic
directly to your service. <code>NodePort</code>, as the name implies, opens a
specific port, and any traffic that is sent to this port is forwarded
to the service.</p>
</blockquote>
<p>Notice that you have to use <code>minikube ip</code> here.</p>
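<p>With the manifests from your question that boils down to something like this (30030 is the <code>nodePort</code> you set):</p>
<pre><code>curl http://$(minikube ip):30030

# or let minikube figure the URL out for you
minikube service myfirstservice --url
</code></pre>
<p>Keep in mind that the address returned by <code>minikube ip</code> is normally only reachable from the EC2 machine itself, unless you forward or tunnel that port.</p>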
<p>And also the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#loadbalancer-access" rel="noreferrer">LoadBalancer access</a>:</p>
<blockquote>
<p>A <code>LoadBalancer</code> service is the standard way to expose a service to the
internet. With this method, each service gets its own IP address.</p>
</blockquote>
<p>This method uses the <code>minikube tunnel</code> command.</p>
<p>You can also use <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noreferrer">these docs</a> as a supplement.</p>
| Wytrzymały Wiktor |
<p>Here is the helm chart values for <code>stable/prometheus</code>: <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml</a></p>
<p>I was able to get this to work:</p>
<pre><code>helm upgrade --install prometheus stable/prometheus \
--set extraScrapeConfigs="- job_name: 'myjob'
scrape_interval: 1s
metrics_path: /metrics
scheme: https
static_configs:
- targets: ['###.##.###.###:#####']
tls_config:
ca_file: /prometheus/ca.pem
key_file: /prometheus/key.pem
cert_file: /prometheus/cert.pem
insecure_skip_verify: true"
</code></pre>
<p>In order to do this I had to do:</p>
<pre><code>kubectl cp localdir/ca.pem prometheus-server-abc:/prometheus -c prometheus-server
kubectl cp localdir/key.pem prometheus-server-abc:/prometheus -c prometheus-server
kubectl cp localdir/cert.pem prometheus-server-abc:/prometheus -c prometheus-server
</code></pre>
<p>I believe there's a better and more proper way to do this with <code>Secret</code> and <code>mountPath</code>. I tried something like the following with no luck:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
data:
ca.pem: base64encodedcapem
key.pem: base64encodedkeypem
cert.pem: base64encodedcertpem
</code></pre>
<pre><code>kubectl apply -f mysecret
</code></pre>
<pre><code>helm upgrade --install prometheus stable/prometheus \
--set extraSecretMounts="- name: mysecret-mount
mountPath: /somepathinpod/mysecret
secretName: mysecret" \
--set extraScrapeConfigs="- job_name: 'myjob'
scrape_interval: 1s
metrics_path: /metrics
scheme: https
static_configs:
- targets: ['###.##.###.###:#####']
tls_config:
ca_file: /somepathinpod/mysecret/ca.pem
key_file: /somepathinpod/mysecret/key.pem
cert_file: /somepathinpod/mysecret/cert.pem
insecure_skip_verify: true"
</code></pre>
<p>I expected the certs to magically show up at <code>/somepathinpod</code> but they did not.</p>
<p>I'm assuming I don't have to clone the whole repo and manually edit the helm chart to put a <code>volumeMount</code> into the <code>prometheus-server</code> deployment/pod and can just change my helm command somehow. Any advice on how to get my certs in there?</p>
| atkayla | <p>According to the <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">documentation</a>, the correct key to use would be <code>server.extraSecretMounts</code> instead of just <code>extraSecretMounts</code>.</p>
<p>Also verify the generated YAML on Kubernetes to contain the correct mounts via:</p>
<pre><code>kubectl get deployment prometheus-server-object-name -o yaml
</code></pre>
<p><strong>override.yaml</strong></p>
<pre><code>server:
extraSecretMounts:
- name: mysecret-mount
mountPath: /etc/config/mysecret
secretName: mysecret
extraScrapeConfigs: |
- job_name: myjob
scrape_interval: 15s
metrics_path: /metrics
scheme: https
static_configs:
- targets:
- ###.##.###.###:#####
tls_config:
ca_file: /etc/config/mysecret/ca.pem
key_file: /etc/config/mysecret/key.pem
cert_file: /etc/config/mysecret/cert.pem
insecure_skip_verify: true
</code></pre>
<pre><code>helm upgrade -f override.yaml prometheus stable/prometheus
</code></pre>
| Joe |
<p>I am trying to create a deployment using a YAML file in <strong>Kubernetes</strong> but am facing this specific error:</p>
<p><strong>Error from server (BadRequest): error when creating ".\deployment_test.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.ObjectMeta: v1.ObjectMeta.Labels: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|"labels":"test"},"sp|...</strong></p>
<p>My yaml file is as follows:</p>
<pre><code>kind: Deployment
metadata:
labels:
environment: test
name: agentstubpod-deployment
spec:
replicas: 3
selector:
matchLabels:
enviroment: test
minReadySeconds: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels: test
spec:
containers:
- name: agentstub
image: some-repo:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
environment: test
name: proxystubpod-deployment
spec:
replicas: 3
selector:
matchLabels:
enviroment: test
minReadySeconds: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels: test
spec:
containers:
- name: procyservice
image: some-repo:latest
</code></pre>
<p>What is wrong with this syntax? I am having a really hard time making a deployment</p>
| Manav Joshi | <p>There are some misconfigurations.</p>
<ol>
<li>apiVersion is missing in the first deployment</li>
<li>indentation below <code>metadata</code> is incorrect</li>
<li>You must include the <code>metadata.name</code> field</li>
<li><code>spec.selector.matchLabels</code> and <code>spec.template.metadata.labels</code> should be matched.</li>
</ol>
<p>Here is the corrected example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
environment: test
name: agentstubpod-deployment
name: dep1
spec:
replicas: 3
selector:
matchLabels:
environment: test
minReadySeconds: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
environment: test
spec:
containers:
- name: agentstub
image: some-repo:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
environment: test
name: proxystubpod-deployment
name: dep2
spec:
replicas: 3
selector:
matchLabels:
environment: test
minReadySeconds: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
environment: test
spec:
containers:
- name: procyservice
image: some-repo:latest
</code></pre>
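<p>If you want to double-check a manifest like this before applying it for real, a server-side dry run (supported on reasonably recent kubectl and cluster versions) will surface the same kind of decoding error without creating anything; the filename below is the one from your error message:</p>
<pre><code>kubectl apply --dry-run=server -f deployment_test.yaml
</code></pre>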
| Daigo |
<p>We have installed Istio 1.4.0 from the istio-demo.yml file by running the following command on a k8s 1.15.1 cluster:</p>
<p>kubectl apply -f istio-demo.yml</p>
<p>Now we need to upgrade our Istio from 1.4.0 to 1.5.0, and as per my understanding it's not straightforward, due to changes in Istio components (the introduction of istiod and the removal of citadel, galley, policy & telemetry).</p>
<p>How can I move from kubectl to istioctl so that my future Istio upgrades are in line with it?</p>
| Ankit Saxena | <p>As I mentioned in the comments, I have followed a thread on <a href="https://discuss.istio.io/t/upgrading-istio-1-4-3-to-1-6-0/6814/16" rel="nofollow noreferrer">istio discuss</a> about upgrading, created by @laurentiuspurba.</p>
<p>I have changed it a little for your use case, so an upgrade from 1.4 to 1.5.</p>
<p>Take a look at the steps below.</p>
<hr />
<p>1.Follow istio <a href="https://istio.io/latest/docs/setup/getting-started/#download" rel="nofollow noreferrer">documentation</a> and install istioctl 1.4 and 1.5 with:</p>
<pre><code>curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
</code></pre>
<p>2.Add the istioctl 1.4 to your path</p>
<pre><code>cd istio-1.4.0
export PATH=$PWD/bin:$PATH
</code></pre>
<p>3.Install istio 1.4</p>
<pre><code>istioctl manifest generate > $HOME/generated-manifest.yaml
kubectl create namespace istio-system
kubectl apply -f generated-manifest.yaml
</code></pre>
<p>4.Check if everything works correct.</p>
<pre><code>kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
</code></pre>
<p>5.Add the istioctl 1.5 to your path</p>
<pre><code>cd istio-1.5.0
export PATH=$PWD/bin:$PATH
</code></pre>
<p>6.Install <a href="https://istio.io/latest/blog/2019/introducing-istio-operator/" rel="nofollow noreferrer">istio operator</a> for future upgrade.</p>
<pre><code>istioctl operator init
</code></pre>
<p>7.Prepare IstioOperator.yaml</p>
<pre><code>nano IstioOperator.yaml
</code></pre>
<hr />
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: default
tag: 1.5.0
</code></pre>
<p>8.Before the upgrade use below commands</p>
<pre><code>kubectl -n istio-system delete service/istio-galley deployment.apps/istio-galley
kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/istio-galley
</code></pre>
<p>9.Upgrade from 1.4 to 1.5 with istioctl upgrade and prepared IstioOperator.yaml</p>
<pre><code>istioctl upgrade -f IstioOperator.yaml
</code></pre>
<p>10.After the upgrade use below commands</p>
<pre><code>kubectl -n istio-system delete deployment istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete service istio-citadel istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete horizontalpodautoscaler.autoscaling/istio-pilot horizontalpodautoscaler.autoscaling/istio-telemetry
kubectl -n istio-system delete pdb istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete deployment istiocoredns
kubectl -n istio-system delete service istiocoredns
</code></pre>
<p>11.Check if everything works correct.</p>
<pre><code>kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
</code></pre>
<p>12.I have deployed a bookinfo app to check if everything work correct.</p>
<pre><code>kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
</code></pre>
<p>13.Results</p>
<pre><code>curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.5.0
control plane version: 1.5.0
data plane version: 1.5.0 (8 proxies)
</code></pre>
<hr />
<p>Hope you find this useful. If you have any questions let me know.</p>
| Jakub |
<p>I am using crictl pull to pull images, but it hangs at PullImageRequest.
I want to know what is happening, so:
where are the log files of containerd? Not the logs of running containers, but the logs of containerd itself.</p>
| User007 | <p><code>containerd</code> runs as a system service on Linux, so you can check its logs with <code>journalctl -u containerd</code>.</p>
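<p>For example (assuming a systemd-based host, which is the usual case), a few invocations that are handy while a pull is hanging:</p>
<pre><code># follow containerd's logs live while you retry the pull
journalctl -u containerd -f

# show only recent entries, without the pager
journalctl -u containerd --since "10 minutes ago" --no-pager

# the kubelet logs can also help correlate what triggered the pull
journalctl -u kubelet --since "10 minutes ago" --no-pager
</code></pre>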
| Daigo |
<p>I have deployed a demo service (running on port 8000) in our K8s environment which has Istio installed (1.5.6, default profile).
When I make a call from outside the cluster to the public address, it succeeds.
When I make a call from a pod inside the cluster to the internal cluster address, it fails with response code 503.</p>
<p>When I change my Virtual Service to use the port instead of the subset, then it succeeds in both cases (external and internal call).</p>
<p>Any ideas what I'm doing wrong?</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
labels:
dgp-origin: demo-app
istio-injection: enabled
name: demo
---
apiVersion: v1
kind: Service
metadata:
name: demo
namespace: demo
labels:
app: demo
version: v1
annotations:
networking.istio.io/exportTo: "*"
spec:
ports:
- name: http
port: 8000
selector:
app: demo
version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
namespace: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
annotations:
sidecar.istio.io/inject: "true"
labels:
app: demo
version: v1
spec:
containers:
- name: echo
image: paddycarey/go-echo
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demo
namespace: demo
spec:
exportTo:
- "*"
host: demo.demo.svc.cluster.local
subsets:
- name: v1
labels:
app: demo
version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: demo
namespace: demo
spec:
selector:
app: istio-ingressgateway
servers:
- hosts:
- demo.external.com
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
- hosts:
- demo.demo.svc.cluster.local
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demo
namespace: demo
spec:
exportTo:
- "*"
hosts:
- demo.external.com
- demo.demo.svc.cluster.local
gateways:
- mesh
- demo/demo
http:
- match:
- uri:
prefix: /
route:
- destination:
host: demo.demo.svc.cluster.local
# port:
# number: 8000
subset: v1
timeout: 55s
</code></pre>
<p><strong>log info (from istio-proxy of another container)</strong></p>
<p>external call: OK</p>
<pre><code>{
"authority": "-",
"bytes_received": "511",
"bytes_sent": "4744",
"downstream_local_address": "172.19.2.100:443",
"downstream_remote_address": "172.18.140.129:37992",
"duration": "43",
"istio_policy_status": "-",
"method": "-",
"path": "-",
"protocol": "-",
"request_id": "-",
"requested_server_name": "-",
"response_code": "0",
"response_flags": "-",
"route_name": "-",
"start_time": "2020-08-10T10:32:25.149Z",
"upstream_cluster": "PassthroughCluster",
"upstream_host": "172.19.2.100:443",
"upstream_local_address": "172.18.140.129:37994",
"upstream_service_time": "-",
"upstream_transport_failure_reason": "-",
"user_agent": "-",
"x_forwarded_for": "-"
}
</code></pre>
<p>internal call : NOT OK</p>
<pre><code>{
"authority": "demo.demo.svc.cluster.local",
"bytes_received": "0",
"bytes_sent": "0",
"downstream_local_address": "172.18.212.107:80",
"downstream_remote_address": "172.18.140.129:37802",
"duration": "0",
"istio_policy_status": "-",
"method": "GET",
"path": "/",
"protocol": "HTTP/1.1",
"request_id": "f875b032-f7d4-4f36-9ce1-38166aced074",
"requested_server_name": "-",
"response_code": "503",
"response_flags": "NR",
"route_name": "-",
"start_time": "2020-08-10T10:33:51.262Z",
"upstream_cluster": "-",
"upstream_host": "-",
"upstream_local_address": "-",
"upstream_service_time": "-",
"upstream_transport_failure_reason": "-",
"user_agent": "curl/7.61.1",
"x_forwarded_for": "-"
}
</code></pre>
<p><strong>UPDATE : When service is on port 80 it works</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo
namespace: demo
labels:
app: demo
version: v1
annotations:
networking.istio.io/exportTo: "*"
spec:
ports:
- name: http
port: 80
targetPort: 8000
selector:
app: demo
version: v1
</code></pre>
| Peter Claes | <p>Based on the istio <a href="https://istio.io/latest/docs/examples/bookinfo/" rel="nofollow noreferrer">bookinfo</a> app I would say the issue here is the missing <strong>labels</strong> in your deployment.</p>
<p>Here is the productpage <a href="https://raw.githubusercontent.com/istio/istio/release-1.6/samples/bookinfo/platform/kube/bookinfo.yaml" rel="nofollow noreferrer">example</a>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: details-v1
labels:
app: details
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: details
version: v1
template:
metadata:
labels:
app: details
version: v1
spec:
serviceAccountName: bookinfo-details
containers:
- name: details
image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
</code></pre>
<p>Could you try to use your deployment after my edit?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
namespace: demo
labels:
app: demo
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: demo
version: v1
template:
metadata:
annotations:
sidecar.istio.io/inject: "true"
labels:
app: demo
version: v1
spec:
containers:
- name: echo
image: paddycarey/go-echo
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
</code></pre>
<p><strong>EDIT</strong></p>
<p>I have tested your yamls, and additionally I have created my own example with an nginx pod.</p>
<p>I get the same issue as you: the internal mesh call works only if I add port 8000 to the virtual service.</p>
<hr />
<p>In my example with nginx everything works just fine.</p>
<hr />
<p>So based on that I assume there is something wrong with either:</p>
<ul>
<li>the paddycarey/go-echo image (as far as I checked, it was last updated 4 years ago), or</li>
<li>the mesh gateway requiring the port to be set in the virtual service when the service uses a port other than 80.</li>
</ul>
<hr />
<p>There are my yamls to test with nginx.</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
labels:
istio-injection: enabled
name: demo-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-v1
namespace: demo-app
spec:
selector:
matchLabels:
app: nginx1
version: v1
replicas: 1
template:
metadata:
labels:
version: v1
app: nginx1
spec:
containers:
- name: nginx1
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: demo-app
labels:
app: nginx1
spec:
ports:
- name: http-front
port: 80
protocol: TCP
selector:
app: nginx1
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: simpleexample
namespace: demo-app
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginxvirt
namespace: demo-app
spec:
gateways:
- simpleexample
- mesh
hosts:
- 'nginx.demo-app.svc.cluster.local'
- 'example.com'
http:
- route:
- destination:
host: nginx
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginxdest
namespace: demo-app
spec:
host: nginx
subsets:
- name: v1
labels:
version: v1
---
apiVersion: v1
kind: Pod
metadata:
name: ubu1
namespace: demo-app
spec:
containers:
- name: ubu1
image: ubuntu
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
</code></pre>
<hr />
<p>External call test</p>
<pre><code>curl -v -H "host: example.com" xx.xx.xx.xx/
HTTP/1.1 200 OK
Hello nginx1
</code></pre>
<p>Internal call test</p>
<pre><code>root@ubu1:/# curl nginx/
Hello nginx1
</code></pre>
<hr />
<p>Let me know if that was it or if you need further help.</p>
| Jakub |
<p>I'm curious on how the <code>@Value</code> works internally on Spring so that it can actually read value from <code>ConfigMap</code> of Kubernetes Cluster.</p>
<p>I know that:</p>
<ul>
<li><code>@Value("${my.nested.variable}")</code> is used to access variables declared in <code>application.properties</code> or in OS environment variables (which have higher priority).</li>
<li>When creating a new ConfigMap on Kubernetes (for a Spring project), you usually run <code>kubectl create configmap my-config-name --from-file=application.properties</code>, and it magically connects those <code>ConfigMap</code> values with the respective <code>@Value()</code> in Spring; of course we have to reference <code>my-config-name</code> in the deployment YAML file.</li>
</ul>
<p>Notice above that we <strong>didn't expose/map</strong> that ConfigMap to the container's environment variables; I already checked inside the container with <code>printenv</code> and <strong>can't find it</strong>.
However, Spring was still able to retrieve those values from the ConfigMap and use them in the Java program.</p>
<p>How is this possible? Does anyone know how Spring's <code>@Value</code> works, or how the <code>ConfigMap</code> actually works internally, so that the two can be <strong>magically</strong> connected?</p>
<p>Thank You.</p>
| xcode | <p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already mentioned by David Maze in the comments, <a href="https://spring.io/projects/spring-cloud-kubernetes" rel="nofollow noreferrer">Spring Cloud Kubernetes</a> reads <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#kubernetes-propertysource-implementations" rel="nofollow noreferrer">ConfigMaps</a> by using the <a href="https://kubernetes.io/docs/reference/using-api/" rel="nofollow noreferrer">Kubernetes API</a>. The mechanisms behind it are described in the linked docs.</p>
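<p>In other words, it is a Spring <code>PropertySource</code> backed by the API server rather than by environment variables. A rough sketch of the kind of configuration involved (property names should be verified against your Spring Cloud Kubernetes version, and depending on the version they may belong in <code>bootstrap.yml</code>; <code>my-config-name</code> is the ConfigMap from the question):</p>
<pre><code>spring:
  application:
    name: my-app
  cloud:
    kubernetes:
      config:
        enabled: true          # read ConfigMaps through the Kubernetes API
        name: my-config-name   # which ConfigMap to load as a PropertySource
        namespace: default     # where that ConfigMap lives
      reload:
        enabled: true          # optionally watch the ConfigMap for changes
</code></pre>
<p>The pod's service account also needs RBAC permission to read ConfigMaps, otherwise those API calls are rejected.</p>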
| Wytrzymały Wiktor |
<p>I have a problem with decrypting my secrets.yaml file. The process freezes as shown in the picture below:
<a href="https://i.stack.imgur.com/Xs9lQ.png" rel="nofollow noreferrer">helm secrets dec</a></p>
<p>Based on the example from official documentation: <a href="https://github.com/futuresimple/helm-secrets" rel="nofollow noreferrer">https://github.com/futuresimple/helm-secrets</a></p>
<p>1) I have my gpg key fingerprint added in the .sops.yaml</p>
<p>2) I make custom secrets.yaml file to encrypt:</p>
<pre><code>replicaCount:
image:
repository: git/repo
tag: v1
pullPolicy: always
service:
type: nodeport
port: 3456
targetPort: 4665
ingress:
enabled: true
</code></pre>
<p>Then I successfully encrypted this file with my key:
<a href="https://i.stack.imgur.com/eZCTj.png" rel="nofollow noreferrer">helm secrets enc</a></p>
<p>The file is properly encrypted, but unfortunately I am not able to decrypt it back.
The command hangs indefinitely, as shown in the <a href="https://i.stack.imgur.com/MhVfj.png" rel="nofollow noreferrer">pic</a>.</p>
| PabloKielek | <p>I'm not sure why this solves the problem, but I made a new GPG key WITHOUT a passphrase and it worked. Probably SOPS was able to encrypt the file, but could not obtain the key for decryption. Hope it helps someone!</p>
| PabloKielek |
<p>I'm trying to add a new label <code>source_ip</code> to the prometheus metric <code>requestcount</code><br>
I've added the attribute to the prometheus handler:</p>
<pre><code>params:
metrics:
- instance_name: requestcount.instance.istio-system
kind: COUNTER
label_names:
- reporter
- source_ip
- source_app
</code></pre>
<p>and added a dimension to <code>requestcount</code> instance</p>
<pre><code>compiledTemplate: metric
params:
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_app: source.labels["app"] | "unknown"
source_ip: source.ip | "unknown"
</code></pre>
<p>and added an <code>attribute_binding</code> to the <code>attributes</code> instance</p>
<pre><code>spec:
attributeBindings:
destination.workload.namespace: $out.destination_workload_namespace | "unknown"
destination.workload.uid: $out.destination_workload_uid | "unknown"
source.ip: $out.source_pod_ip | ip("0.0.0.0")
</code></pre>
<p>yet, <code>source_ip</code> label is not included in the <code>istio_request_total</code> metric reported by prometheus, am I missing something here?</p>
| Gaurav Arora | <h2>About mixer and documentation you use</h2>
<blockquote>
<p>I'm using istio 1.5 and upgrading might take me some considerable time.</p>
</blockquote>
<p>The <a href="https://istio.io/latest/docs/reference/config/policy-and-telemetry/mixer-overview/#handlers" rel="nofollow noreferrer">documentation</a> you mentioned won't work on Istio 1.5, as it uses Mixer, which has been deprecated since Istio 1.5. As mentioned in the docs below, you might be able to re-enable it, but I couldn't find any documentation on how to do that.</p>
<p>As mentioned <a href="https://istio.io/latest/docs/reference/config/policy-and-telemetry/" rel="nofollow noreferrer">here</a> and <a href="https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/#mixer-deprecation" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies. Use of Mixer with Istio will only be supported through the 1.7 release of Istio.</p>
</blockquote>
<blockquote>
<p>Mixer deprecation</p>
<p>Mixer, the process behind the istio-telemetry and istio-policy deployments, has been deprecated with the 1.5 release. istio-policy was disabled by default since Istio 1.3 and istio-telemetry is disabled by default in Istio 1.5.</p>
<p>Telemetry is collected using an in-proxy extension mechanism (Telemetry V2) that does not require Mixer.</p>
<p>If you depend on specific Mixer features like out of process adapters, you may re-enable Mixer. Mixer will continue receiving bug fixes and security fixes until Istio 1.7. Many features supported by Mixer have alternatives as specified in the Mixer Deprecation document including the in-proxy <a href="https://github.com/istio/proxy/tree/master/extensions" rel="nofollow noreferrer">extensions</a> based on the WebAssembly sandbox API.</p>
<p>If you rely on a Mixer feature that does not have an equivalent, we encourage you to open issues and discuss in the community.</p>
</blockquote>
<hr />
<h2>About upgrade</h2>
<p>About the upgrade: if this were an older version of Istio it might be harder, but since it's 1.5 it should be fairly easy to upgrade to 1.6 with <a href="https://istio.io/latest/docs/setup/upgrade/" rel="nofollow noreferrer">istioctl upgrade</a>. I would suggest trying it first in a test environment.</p>
<h2>About the main question</h2>
<p>Istio configures Prometheus with a 'kubernetes-pods' job, at least when using the 'demo' profile. In this Prometheus job config there is a relabel_configs entry which picks up the pod labels:</p>
<pre><code>relabel_configs:
...
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
</code></pre>
<p>If you want to use this, set the meshConfig.enablePrometheusMerge=true option; it will append the pod labels to the Istio metrics. There is related <a href="https://istio.io/latest/docs/ops/integrations/prometheus/#option-2-metrics-merging" rel="nofollow noreferrer">documentation</a> about that. Just note that this option is newly introduced in Istio 1.6 and is considered alpha at this time.</p>
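<p>For reference, a minimal sketch of how that option is typically set through the IstioOperator API after upgrading to 1.6 (field names taken from the linked docs, so double-check them against your version):</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  meshConfig:
    enablePrometheusMerge: true   # append pod-level labels to the scraped Istio metrics
</code></pre>
<p>As noted above, this only takes effect on Istio 1.6 or later.</p>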
| Jakub |
<p>I am using an <code>Ubuntu 22.04</code> machine to run and test Kubernetes locally. I need some functionality like <code>Docker-Desktop</code>. I mean it seems both <code>master</code> and <code>worker</code> nodes/machines will be installed by <code>Docker-Desktop</code> on the same machine. But when I try to install Kubernetes and following the instructions like <a href="https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/" rel="nofollow noreferrer">this</a>, at some points it says run the following codes on <code>master</code> node:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-master
</code></pre>
<p>Or run the following comands on the <code>worker</code> node machine:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-worker
</code></pre>
<p>I don't know how to specify <code>master</code>/<code>worker</code> nodes if I have only my local Ubuntu machine?</p>
<p>Or should I run <code>join</code> command after <code>kubeadm init</code> command? Because I can't understand the commands I run in my terminal will be considered as a command for which <code>master</code> or <code>worker</code> machine?</p>
<p>I am a little bit confused about this <code>master</code>/<code>worker</code> nodes or <code>client</code>/<code>server</code> machine stuff while I am just using one machine for both client and server machines.</p>
| best_of_man | <p>The hostname has nothing to do with node roles.</p>
<p>If you do <code>kubeadm init</code>, the node will be a <code>master</code> node (currently called <code>control plane</code>).</p>
<p>This node can also be used as a <code>worker</code> node (currently called just a <code>node</code>), but by default, Pods cannot be scheduled on the control plane node.</p>
<p>You can turn off this restriction by removing its taints with the following command:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</code></pre>
<p>and then you can use this node as both <code>control-plane</code> and <code>node</code>.</p>
<p>But I guess lightweight Kubernetes distributions like <code>k0s</code>, <code>k3s</code>, and <code>microk8s</code> are better options for your use case than kubeadm; see the install sketch below.</p>
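<p>As an illustration (commands taken from each project's upstream install instructions; verify them before running), a single-node setup on Ubuntu can be as simple as:</p>
<pre><code># microk8s (snap-based, ships as a single package on Ubuntu)
sudo snap install microk8s --classic
microk8s status --wait-ready
microk8s kubectl get nodes

# or k3s (single binary, runs the control plane and workloads on the same machine)
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes
</code></pre>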
| Daigo |
<p>I want to set a serviceaccount authorizations by associating it to a clusterRole but restricted to a namespace using a rolebinding.</p>
<p>I declared one clusterrole and I configured a rolebinding in a namespace pointing to that clusterrole.
However when I access the cluster with the serviceaccount token defined in the rolebinding I'm not restricted to the namespace.<br>
On the other hand, when I'm accessing the cluster with a "User" certificate, this is working. I have only access to the namespace.</p>
<p>Kubernetes v1.13.5</p>
<p>The Rolebinding I defined:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: exploitant
namespace: myNamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: default
namespace: myNamespace
- apiGroup: rbac.authorization.k8s.io
kind: User
name: myUser
</code></pre>
<p>This is what I get:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl auth can-i --token=XXXXXXXX get po -n myNamespace
yes
</code></pre>
<p>--> as expected</p>
<pre class="lang-sh prettyprint-override"><code>kubectl auth can-i --token=XXXXXXXX get po -n kube-system
yes
</code></pre>
<p>--> not expected !!!</p>
| Julien Le Fur | <p>The solution is to create a dedicated ServiceAccount; the "default" ServiceAccount should not be used. By default, all pods run with the default service account (if you don't specify one), and a default service account exists in every namespace, so the default service account ends up being able to read pods in every namespace.</p>
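<p>A minimal sketch of what that looks like (the name <code>app-reader</code> is made up for the example): create a dedicated ServiceAccount and bind only it, not <code>default</code>, to the <code>view</code> ClusterRole in that one namespace.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: myNamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: exploitant
  namespace: myNamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: app-reader   # dedicated account instead of "default"
  namespace: myNamespace
</code></pre>
<p>You can then verify the scope with <code>kubectl auth can-i get po -n myNamespace --as=system:serviceaccount:myNamespace:app-reader</code> and compare it with the same check against <code>-n kube-system</code>.</p>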
| Julien Le Fur |
<p>I have three issues:</p>
<p>1 - How can I correctly implement regex to match the first part of a path in ingress?</p>
<pre><code>- path: /^search_[0-9A-Z]{9}
backend:
serviceName: global-search-service
servicePort: 5050
</code></pre>
<p>I would like to match any path that begins with <code>/search_</code> e.g <code>/search_model</code> or <code>/search_make</code></p>
<p>2 - How can I access a service with no path, just a port?</p>
<pre><code> path:
pathType: Exact
backend:
serviceName: chart-display
servicePort: 10000
</code></pre>
<p>I am just displaying this service using iframe. How do I access it since I only have the port number?</p>
<p>3 - I am hosting two different React apps; both of them work when I set their path as root <code>/</code>. How do I implement both of them to work with the root path?</p>
<p>Trying issue 3.</p>
<p>So I come up with something like this right?</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: admin-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /admin
backend:
serviceName: web2-service
servicePort: 2000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: normal-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: web1-service
servicePort: 1000
</code></pre>
<p>Loading <code><my-ip>/admin</code> does not go to web2-service, and if I leave them both at <code>/</code>, it automatically goes to web1-service. Kindly advise.</p>
| Denn | <p>For the first question, you can use regex in your paths without any problem; you just need to annotate the Ingress with the use-regex annotation, at least according to the documentation (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#use-regex" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#use-regex</a>).</p>
<p>Something like:</p>
<pre><code>metadata:
name: your-name
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
...
</code></pre>
<p>or as an alternative, if you use the annotation for rewrite target, the regex should also be enforced.</p>
<p>As for the regex to use, in order to match the start of a path, for example all paths starting with <strong>something</strong>, like <strong>something_first</strong> or <strong>something_another</strong>, you could go for the simple:</p>
<pre><code>something_[a-zA-Z0-9]*
</code></pre>
<p>For the second question I'm not sure exactly what you are asking. The Ingress is meant to be used with HTTP or HTTPS requests, and those always carry a path. If you simply want to expose a service externally on a given port, you could go for a LoadBalancer service (see the sketch below).</p>
<p>Does the service you want to access with just a port internally answer over HTTP at the root path, or does it do something different? If it answers only at the root path, you could match all paths in the request to the root path.</p>
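<p>A minimal sketch of that LoadBalancer approach (the service name and selector label are hypothetical; adjust them to the chart-display workload from the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: chart-display-lb
spec:
  type: LoadBalancer        # exposes the service on an external IP, no Ingress path needed
  selector:
    app: chart-display      # must match the labels on the chart-display pods
  ports:
  - name: http
    port: 10000             # port exposed externally
    targetPort: 10000       # port the container listens on
</code></pre>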
<p>As for how you could make all paths of the request rewrite to root, you could go with a rewrite annotation, such as:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>For more info on this annotation, check the documentation because it can do a lot of things (<a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a>)</p>
<p>Also keep in mind that if you want to rewrite two services in two different places, you need to split the Ingress into two separate Ingresses.</p>
| AndD |
<p>We are using Kubernetes with Istio and have configured a virtual service:</p>
<pre><code> http:
- match:
- uri:
prefix: /api
rewrite:
uri: /api
route:
- destination:
host: svc-api
port:
number: 80
subset: stable
weight: 0
- destination:
host: svc-api
port:
number: 80
subset: vnext
weight: 100
corsPolicy: &apiCorsPolicy
allowOrigins:
- exact: '*'
allowMethods:
- GET
- POST
- OPTIONS
allowCredentials: true
allowHeaders:
- "*"
- match:
- uri:
prefix: /
rewrite:
uri: /
route:
- destination:
host: svc-api
port:
number: 80
subset: stable
weight: 0
- destination:
host: svc-api
port:
number: 80
subset: vnext
weight: 100
corsPolicy: *apiCorsPolicy
</code></pre>
<p>Making a request to <code>https://[host]/api/*</code> from a browser fails on OPTIONS request with <code>'from origin [request] has been blocked by CORS policy'</code>.</p>
<p>Describing the service in k8 shows the correct configurations.</p>
<p>According to v 1.6.4 the structure using <code>allowOrigins</code> and <code>exact</code> instead of <code>allowOrigin</code> is correct.</p>
<p>What am I missing here?</p>
| Casper Nybroe | <p>A few things are worth mentioning here.</p>
<hr />
<p>Recently in older istio versions there were a problem with cors and jwt combined together, take a look at below links:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/61152587/istio-1-5-cors-not-working-response-to-preflight-request-doesnt-pass-access-c">Istio 1.5 cors not working - Response to preflight request doesn't pass access control check</a></li>
<li><a href="https://github.com/istio/istio/issues/16171" rel="nofollow noreferrer">https://github.com/istio/istio/issues/16171</a></li>
</ul>
<hr />
<p>There is another <a href="https://github.com/istio/istio/issues/23757" rel="nofollow noreferrer">github issue</a> about that; in this <a href="https://github.com/istio/istio/issues/23757#issuecomment-634144575" rel="nofollow noreferrer">comment</a> a community member has the same issue. Maybe it's worth opening a new github issue for this?</p>
<hr />
<p>Additionally there is an answer from istio dev @howardjohn</p>
<blockquote>
<p>Hi everyone. Testing CORS using curl can be a bit misleading. CORS is not enforced at the server side; it will not return a 4xx error for example. Instead, headers are returned back which are used by browsers to deny/accept. <a href="https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/cors.html?highlight=cors" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/cors.html?highlight=cors</a> gives a good demo of this, and <a href="https://www.sohamkamani.com/blog/2016/12/21/web-security-cors/#:%7E:text=CORS%20isn%27t%20actually%20enforced,header%20in%20all%20its%20responses" rel="nofollow noreferrer">https://www.sohamkamani.com/blog/2016/12/21/web-security-cors/#:~:text=CORS%20isn't%20actually%20enforced,header%20in%20all%20its%20responses</a>. is a good explanation.</p>
<p>So Istio's job here is simply to return these headers. I have added a test showing this works: <a href="https://github.com/istio/istio/pull/26231" rel="nofollow noreferrer">#26231</a>.</p>
</blockquote>
<hr />
<p>As I mentioned in the comments, it's worth taking a look at another community member's configuration <a href="https://stackoverflow.com/questions/63313148/istio-1-6-authorizationpolicy-does-not-have-proper-response-code-if-coming-from">here</a>, as he has a working OPTIONS example but has a 403 issue with POST.</p>
<hr />
<p>Few things I would change in your virtual service</p>
<p>Use just <code>corsPolicy</code> instead of <code>corsPolicy: &apiCorsPolicy</code></p>
<pre><code>corsPolicy:
</code></pre>
<p>Instead of exact you can also use regex which may do the trick for wildcards.</p>
<pre><code>allowOrigins:
- regex: '*'
</code></pre>
<p>About the <code>/</code> and <code>/api</code> paths: I think prefix matching is enough here; you don't need the rewrite.</p>
<hr />
| Jakub |
<p>I have set up a kubernetes cluster on Azure with nginx ingress. I am getting a 404 error when navigating to a particular path.</p>
<p>I have set up some sample applications that return a simple echo that works perfectly fine. My ban api app always returns a 404 error. </p>
<p>When I navigate to the ingress path e.g. </p>
<pre><code>http://51.145.17.105/apple
</code></pre>
<p>It works fine. However when I navigate to the api application, I get a 404 error using the URL:</p>
<pre><code>http://51.145.17.105/ban-simple
</code></pre>
<p>If I log into the cluster and curl the ban-simple service (not the ingress ip) e.g. </p>
<pre><code>curl -L 10.1.1.40
</code></pre>
<p>I get the correct response. When I try it using the nginx ingress I get the 404 error.</p>
<p>The ingress mapping looks right to me. Here is a copy of the ingress yaml containing the paths.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fruit-ingress
namespace: ban
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$
spec:
rules:
- http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 5678
- path: /ban-simple
backend:
serviceName: ban-service
servicePort: 80
</code></pre>
<p>A copy of the "good" service is:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: apple-app
namespace: ban
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
namespace: ban
spec:
selector:
app: apple
ports:
- port: 5678 # Default port for image
</code></pre>
<p>The service that does not work is:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: ban-simple
namespace: ban
labels:
app: ban-simple
spec:
containers:
- name: ban-simple
image: xxxxx.azurecr.io/services/ban
---
kind: Service
apiVersion: v1
metadata:
name: ban-simple-service
namespace: ban
spec:
selector:
app: ban-simple
ports:
- port: 80 # Default port for image
</code></pre>
<p>I have run the container locally and it works on my machine. It does redirect to localhost/index.html if that makes a difference.</p>
<p>Any suggestions are appreciated.</p>
| IHelpPeople | <p>It was <code>nginx.ingress.kubernetes.io/rewrite-target: /$</code> that was causing the problem. I commented it out and that resolved the issue.</p>
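<p>For reference, a sketch of the corrected metadata block (only the rewrite annotation is dropped; everything else from the question stays the same):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fruit-ingress
  namespace: ban
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/rewrite-target: /$   # removed - this rewrite broke the /ban-simple path
</code></pre>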
| IHelpPeople |
<p>Somebody, please, explain to me (or point me to a detailed resource) why Kubernetes uses this wrapper (the pod) to work with containers. Every resource I come across just quotes the same words: "it is the smallest unit in k8s". What I am looking for is the reason for it from an engineering perspective. I do understand that it provides a shared namespace for storage and networking for the containers inside, but best practice is keeping a single container in a pod anyway.</p>
<p>I used <code>docker-compose</code> a lot before I familiarized myself with k8s, and I have a hard time understanding the need for this additional layer (wrapper) around a pretty straightforward entity, the container.</p>
| wise-Kaa | <p>The reason for this decision is simply that a Pod may contain more than one container, doing different things.</p>
<p>First of all, a Pod may have an init container which is responsible for performing some startup operations to ensure that the main container(s) work properly. I could have an init container load some configuration and prepare it for the main application, or do some basic operations such as restoring a backup.</p>
<p>I can basically inject a series of operations to execute before starting the main application, without rebuilding the main application container image.</p>
<p>Second, even if the majority of applications are perfectly fine with only one container per Pod, there are several situations where more than one container in the same Pod may be useful.</p>
<p>An example could be having the main application running alongside a sidecar container acting as a proxy in front of it, perhaps responsible for checking JWT tokens; another example could be a secondary application extracting metrics from the main application.</p>
<p>Last, let me quote Kubernetes documentation (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a>)</p>
<blockquote>
<p>The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost. An example of this pattern is a web server along with a helper program that polls a Git repository for new updates.</p>
</blockquote>
<p><strong>Update</strong></p>
<p>Like you said, init containers or multiple containers in the same Pod are not a must; all the functionalities that I listed can also be obtained in other ways, such as entrypoints, or two separate Pods communicating with each other instead of two containers in the same Pod.</p>
<p>There are several benefits in using those functionalities tho, let me quote the Kubernetes documentation once more (<a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a>)</p>
<blockquote>
<p>Because init containers have separate images from app containers, they have some advantages for start-up related code:</p>
<p>Init containers can contain utilities or custom code for setup that
are not present in an app image. For example, there is no need to make
an image FROM another image just to use a tool like sed, awk, python,
or dig during setup.</p>
<p>The application image builder and deployer roles
can work independently without the need to jointly build a single app
image.</p>
<p>Init containers can run with a different view of the filesystem
than app containers in the same Pod. Consequently, they can be given
access to Secrets that app containers cannot access.</p>
<p>Because init
containers run to completion before any app containers start, init
containers offer a mechanism to block or delay app container startup
until a set of preconditions are met. Once preconditions are met, all
of the app containers in a Pod can start in parallel.</p>
<p>Init containers
can securely run utilities or custom code that would otherwise make an
app container image less secure. By keeping unnecessary tools separate
you can limit the attack surface of your app container image</p>
</blockquote>
<p>The same applies to multiple containers running in the same Pod: they can communicate safely with each other without exposing that communication to others on the cluster, because they keep it local.</p>
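<p>To make this concrete, here is a small illustrative Pod (names and images are just for the example) with an init container that prepares a file and a sidecar that shares data with the main container over an <code>emptyDir</code> volume:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch space shared by all containers in the Pod
  initContainers:
  - name: init-content
    image: busybox
    command: ["sh", "-c", "echo 'prepared by init container' > /work/index.html"]
    volumeMounts:
    - name: shared-data
      mountPath: /work
  containers:
  - name: web                  # main application
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper               # sidecar: refreshes the shared content periodically
    image: busybox
    command: ["sh", "-c", "while true; do date > /work/date.txt; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /work
</code></pre>
<p>All containers here (including the init container) share the Pod's network namespace and the <code>shared-data</code> volume, which is exactly the kind of tight coupling the Pod abstraction is designed for.</p>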
| AndD |
<p>I am trying to access a rabbitmq cluster where TLS is enabled. I have written a sample Go app which tries to connect to the rabbitmq server using a set of client certificates and client keys.</p>
<p>I am facing this error:</p>
<pre><code>Error is - Exception (403) Reason: "username or password not allowed"
panic: Exception (403) Reason: "username or password not allowed"
</code></pre>
<p>My code snippet</p>
<pre><code>package main
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"log"
"github.com/streadway/amqp"
)
func main() {
fmt.Println("Go RabbitMQ Consumer Tutorial")
fmt.Println("Testing ClusterIP service connection over TLS")
cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
if err != nil {
panic(err)
}
//Load CA cert.
caCert, err := ioutil.ReadFile("ca.crt") // The same you configured in your MQ server
if err != nil {
log.Fatal(err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)
TlsConfig1 := &tls.Config{
Certificates: []tls.Certificate{cert}, // from tls.LoadX509KeyPair
RootCAs: caCertPool,
InsecureSkipVerify: true,
// ...other options are just the same as yours
}
conn, err := amqp.DialTLS("amqps://<username>:<password>@<rabbitmq-service-name>.<namespace>.svc.cluster.local:5671/", TlsConfig1)
if err != nil {
fmt.Println("Error is -", err)
panic(err)
}
fmt.Println("Connected to consumer successfully")
defer conn.Close()
ch, err := conn.Channel()
if err != nil {
fmt.Println(err)
}
defer ch.Close()
if err != nil {
fmt.Println(err)
}
msgs, err := ch.Consume(
"TestQueue",
"",
true,
false,
false,
false,
nil,
)
forever := make(chan bool)
go func() {
for d := range msgs {
fmt.Printf("Recieved Message: %s\n", d.Body)
}
}()
fmt.Println("Successfully Connected to our RabbitMQ Instance")
fmt.Println(" [*] - Waiting for messages")
<-forever
}
</code></pre>
<p>The code snippet is running as pod in the EKS cluster where my rabbitmq cluster is running.</p>
| Rajendra Gosavi | <p>After going through the RabbitMQ access control doc - <a href="https://www.rabbitmq.com/access-control.html" rel="nofollow noreferrer">https://www.rabbitmq.com/access-control.html</a> - I figured out that I was giving the wrong username and password.</p>
<p>NOTE: if anyone else is getting the same error, please confirm that you are using the correct username and password.</p>
<p>You can check the users configured for the cluster by logging into the rabbitmq cluster pod.</p>
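<p>For example (the pod and namespace names are placeholders; <code>rabbitmqctl list_users</code> and <code>rabbitmqctl authenticate_user</code> are described in the access control doc linked above):</p>
<pre><code># list the users known to the broker
kubectl exec -it rabbitmq-0 -n my-namespace -- rabbitmqctl list_users

# verify that a username/password pair is accepted
kubectl exec -it rabbitmq-0 -n my-namespace -- rabbitmqctl authenticate_user myuser mypassword
</code></pre>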
| Rajendra Gosavi |
<p>I have set up a local cluster using microk8s and Kubeflow on my local machine. I followed <a href="https://www.kubeflow.org/docs/started/workstation/getting-started-multipass/" rel="nofollow noreferrer">these</a> installation instructions to get my cluster up and running. I have started a Jupyter Server and coded a Kubeflow Pipeline.</p>
<p>My YAML file I have used to define my components shown below:</p>
<pre><code>name: beat_the_market - Preprocess
description: Preprocesses market data and loads into GCS bucket.
inputs:
- {name: project, type: String, description: GCP Project ID}
- {name: bucket, type: GCSPath, description: GCS bucket path}
- {name: ticker, type: String, description: Ticker symbol for selected stock}
outputs:
- {name: Trained model, type: Tensorflow model}
implementation:
container:
image: us.gcr.io/manceps-labs/beat_the_market:latest
command: [python3, /opt/preprocess.py,
--project, {inputValue: project},
--bucket, {inputValue: bucket},
--ticker, {inputValue: ticker}
]
</code></pre>
<p>Unfortunately when I try to create an experiment using the Kubeflow Pipelines SDK I get the following error:</p>
<pre><code>2020-04-15 23:03:25,135 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1cc8a4c358>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /apis/v1beta1/experiments
2020-04-15 23:03:25,135 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1cc8a4c358>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /apis/v1beta1/experiments
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1cc8a4c358>: Failed to establish a new connection: [Errno -2] Name or service not known',)': /apis/v1beta1/experiments
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in _new_conn(self)
158 conn = connection.create_connection(
--> 159 (self._dns_host, self.port), self.timeout, **extra_kw)
160
/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
56
---> 57 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
58 af, socktype, proto, canonname, sa = res
/usr/lib/python3.6/socket.py in getaddrinfo(host, port, family, type, proto, flags)
744 addrlist = []
--> 745 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
746 af, socktype, proto, canonname, sa = res
gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
599 body=body, headers=headers,
--> 600 chunked=chunked)
601
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
353 else:
--> 354 conn.request(method, url, **httplib_request_kw)
355
/usr/lib/python3.6/http/client.py in request(self, method, url, body, headers, encode_chunked)
1238 """Send a complete request to the server."""
-> 1239 self._send_request(method, url, body, headers, encode_chunked)
1240
/usr/lib/python3.6/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
1284 body = _encode(body, 'body')
-> 1285 self.endheaders(body, encode_chunked=encode_chunked)
1286
/usr/lib/python3.6/http/client.py in endheaders(self, message_body, encode_chunked)
1233 raise CannotSendHeader()
-> 1234 self._send_output(message_body, encode_chunked=encode_chunked)
1235
/usr/lib/python3.6/http/client.py in _send_output(self, message_body, encode_chunked)
1025 del self._buffer[:]
-> 1026 self.send(msg)
1027
/usr/lib/python3.6/http/client.py in send(self, data)
963 if self.auto_open:
--> 964 self.connect()
965 else:
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in connect(self)
180 def connect(self):
--> 181 conn = self._new_conn()
182 self._prepare_conn(conn)
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in _new_conn(self)
167 raise NewConnectionError(
--> 168 self, "Failed to establish a new connection: %s" % e)
169
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f1cc8b3e860>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
<ipython-input-325-c8d6a70afd2d> in <module>
9 try:
---> 10 experiment = client.get_experiment(experiment_name=experiment_name)
11 except:
/usr/local/lib/python3.6/dist-packages/kfp/_client.py in get_experiment(self, experiment_id, experiment_name)
213 while next_page_token is not None:
--> 214 list_experiments_response = self.list_experiments(page_size=100, page_token=next_page_token)
215 next_page_token = list_experiments_response.next_page_token
/usr/local/lib/python3.6/dist-packages/kfp/_client.py in list_experiments(self, page_token, page_size, sort_by)
193 response = self._experiment_api.list_experiment(
--> 194 page_token=page_token, page_size=page_size, sort_by=sort_by)
195 return response
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api/experiment_service_api.py in list_experiment(self, **kwargs)
347 else:
--> 348 (data) = self.list_experiment_with_http_info(**kwargs) # noqa: E501
349 return data
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api/experiment_service_api.py in list_experiment_with_http_info(self, **kwargs)
429 _request_timeout=params.get('_request_timeout'),
--> 430 collection_formats=collection_formats)
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, async_req, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
329 _return_http_data_only, collection_formats,
--> 330 _preload_content, _request_timeout)
331 else:
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in __call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
160 _preload_content=_preload_content,
--> 161 _request_timeout=_request_timeout)
162
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in request(self, method, url, query_params, headers, post_params, body, _preload_content, _request_timeout)
350 _request_timeout=_request_timeout,
--> 351 headers=headers)
352 elif method == "HEAD":
/usr/local/lib/python3.6/dist-packages/kfp_server_api/rest.py in GET(self, url, headers, query_params, _preload_content, _request_timeout)
237 _request_timeout=_request_timeout,
--> 238 query_params=query_params)
239
/usr/local/lib/python3.6/dist-packages/kfp_server_api/rest.py in request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout)
210 timeout=timeout,
--> 211 headers=headers)
212 except urllib3.exceptions.SSLError as e:
/usr/local/lib/python3.6/dist-packages/urllib3/request.py in request(self, method, url, fields, headers, **urlopen_kw)
67 headers=headers,
---> 68 **urlopen_kw)
69 else:
/usr/local/lib/python3.6/dist-packages/urllib3/request.py in request_encode_url(self, method, url, fields, headers, **urlopen_kw)
88
---> 89 return self.urlopen(method, url, **extra_kw)
90
/usr/local/lib/python3.6/dist-packages/urllib3/poolmanager.py in urlopen(self, method, url, redirect, **kw)
323 else:
--> 324 response = conn.urlopen(method, u.request_uri, **kw)
325
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
637 retries = retries.increment(method, url, error=e, _pool=self,
--> 638 _stacktrace=sys.exc_info()[2])
639 retries.sleep()
/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
398 if new_retry.is_exhausted():
--> 399 raise MaxRetryError(_pool, url, error or ResponseError(cause))
400
MaxRetryError: HTTPConnectionPool(host='ml-pipeline.kubeflow.svc.cluster.local', port=8888): Max retries exceeded with url: /apis/v1beta1/experiments?page_token=&page_size=100&sort_by= (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1cc8b3e860>: Failed to establish a new connection: [Errno -2] Name or service not known',))
During handling of the above exception, another exception occurred:
gaierror Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in _new_conn(self)
158 conn = connection.create_connection(
--> 159 (self._dns_host, self.port), self.timeout, **extra_kw)
160
/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
56
---> 57 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
58 af, socktype, proto, canonname, sa = res
/usr/lib/python3.6/socket.py in getaddrinfo(host, port, family, type, proto, flags)
744 addrlist = []
--> 745 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
746 af, socktype, proto, canonname, sa = res
gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
599 body=body, headers=headers,
--> 600 chunked=chunked)
601
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
353 else:
--> 354 conn.request(method, url, **httplib_request_kw)
355
/usr/lib/python3.6/http/client.py in request(self, method, url, body, headers, encode_chunked)
1238 """Send a complete request to the server."""
-> 1239 self._send_request(method, url, body, headers, encode_chunked)
1240
/usr/lib/python3.6/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
1284 body = _encode(body, 'body')
-> 1285 self.endheaders(body, encode_chunked=encode_chunked)
1286
/usr/lib/python3.6/http/client.py in endheaders(self, message_body, encode_chunked)
1233 raise CannotSendHeader()
-> 1234 self._send_output(message_body, encode_chunked=encode_chunked)
1235
/usr/lib/python3.6/http/client.py in _send_output(self, message_body, encode_chunked)
1025 del self._buffer[:]
-> 1026 self.send(msg)
1027
/usr/lib/python3.6/http/client.py in send(self, data)
963 if self.auto_open:
--> 964 self.connect()
965 else:
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in connect(self)
180 def connect(self):
--> 181 conn = self._new_conn()
182 self._prepare_conn(conn)
/usr/local/lib/python3.6/dist-packages/urllib3/connection.py in _new_conn(self)
167 raise NewConnectionError(
--> 168 self, "Failed to establish a new connection: %s" % e)
169
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f1cc8a4c5f8>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
<ipython-input-325-c8d6a70afd2d> in <module>
10 experiment = client.get_experiment(experiment_name=experiment_name)
11 except:
---> 12 experiment = client.create_experiment(experiment_name)
13
14 print(experiment)
/usr/local/lib/python3.6/dist-packages/kfp/_client.py in create_experiment(self, name)
172 logging.info('Creating experiment {}.'.format(name))
173 experiment = kfp_server_api.models.ApiExperiment(name=name)
--> 174 experiment = self._experiment_api.create_experiment(body=experiment)
175
176 if self._is_ipython():
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api/experiment_service_api.py in create_experiment(self, body, **kwargs)
52 return self.create_experiment_with_http_info(body, **kwargs) # noqa: E501
53 else:
---> 54 (data) = self.create_experiment_with_http_info(body, **kwargs) # noqa: E501
55 return data
56
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api/experiment_service_api.py in create_experiment_with_http_info(self, body, **kwargs)
129 _preload_content=params.get('_preload_content', True),
130 _request_timeout=params.get('_request_timeout'),
--> 131 collection_formats=collection_formats)
132
133 def delete_experiment(self, id, **kwargs): # noqa: E501
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, async_req, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
328 response_type, auth_settings,
329 _return_http_data_only, collection_formats,
--> 330 _preload_content, _request_timeout)
331 else:
332 thread = self.pool.apply_async(self.__call_api, (resource_path,
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in __call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
159 post_params=post_params, body=body,
160 _preload_content=_preload_content,
--> 161 _request_timeout=_request_timeout)
162
163 self.last_response = response_data
/usr/local/lib/python3.6/dist-packages/kfp_server_api/api_client.py in request(self, method, url, query_params, headers, post_params, body, _preload_content, _request_timeout)
371 _preload_content=_preload_content,
372 _request_timeout=_request_timeout,
--> 373 body=body)
374 elif method == "PUT":
375 return self.rest_client.PUT(url,
/usr/local/lib/python3.6/dist-packages/kfp_server_api/rest.py in POST(self, url, headers, query_params, post_params, body, _preload_content, _request_timeout)
273 _preload_content=_preload_content,
274 _request_timeout=_request_timeout,
--> 275 body=body)
276
277 def PUT(self, url, headers=None, query_params=None, post_params=None,
/usr/local/lib/python3.6/dist-packages/kfp_server_api/rest.py in request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout)
165 preload_content=_preload_content,
166 timeout=timeout,
--> 167 headers=headers)
168 elif headers['Content-Type'] == 'application/x-www-form-urlencoded': # noqa: E501
169 r = self.pool_manager.request(
/usr/local/lib/python3.6/dist-packages/urllib3/request.py in request(self, method, url, fields, headers, **urlopen_kw)
70 return self.request_encode_body(method, url, fields=fields,
71 headers=headers,
---> 72 **urlopen_kw)
73
74 def request_encode_url(self, method, url, fields=None, headers=None,
/usr/local/lib/python3.6/dist-packages/urllib3/request.py in request_encode_body(self, method, url, fields, headers, encode_multipart, multipart_boundary, **urlopen_kw)
148 extra_kw.update(urlopen_kw)
149
--> 150 return self.urlopen(method, url, **extra_kw)
/usr/local/lib/python3.6/dist-packages/urllib3/poolmanager.py in urlopen(self, method, url, redirect, **kw)
322 response = conn.urlopen(method, url, **kw)
323 else:
--> 324 response = conn.urlopen(method, u.request_uri, **kw)
325
326 redirect_location = redirect and response.get_redirect_location()
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
665 timeout=timeout, pool_timeout=pool_timeout,
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
669 def drain_and_release_conn(response):
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
665 timeout=timeout, pool_timeout=pool_timeout,
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
669 def drain_and_release_conn(response):
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
665 timeout=timeout, pool_timeout=pool_timeout,
666 release_conn=release_conn, body_pos=body_pos,
--> 667 **response_kw)
668
669 def drain_and_release_conn(response):
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
636
637 retries = retries.increment(method, url, error=e, _pool=self,
--> 638 _stacktrace=sys.exc_info()[2])
639 retries.sleep()
640
/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
397
398 if new_retry.is_exhausted():
--> 399 raise MaxRetryError(_pool, url, error or ResponseError(cause))
400
401 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPConnectionPool(host='ml-pipeline.kubeflow.svc.cluster.local', port=8888): Max retries exceeded with url: /apis/v1beta1/experiments (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1cc8a4c5f8>: Failed to establish a new connection: [Errno -2] Name or service not known',))
</code></pre>
<p>Note that I did not include all of the retries but I think you get the point. I have tried using the IP provided by <code>microk8s.enable</code> and it gave me a sort-of successful output but all values were <code>None</code> so still not what I want.</p>
<pre><code>client = kfp.Client(host='http://xx.xx.xx.xx.xip.io')
experiment = client.create_experiment('test')
</code></pre>
<pre><code>Experiment link here
{'created_at': None, 'description': None, 'id': None, 'name': None}
</code></pre>
<p>Any help would be much appreciated. Let me know if there is any other output you need to assess this properly. I am still learning Kubeflow, so I am not sure how to debug this and couldn't find much on it in the Kubeflow docs, microk8s docs, or other threads. I am currently working off of these 2 examples:</p>
<p><a href="https://github.com/kubeflow/examples/blob/master/named_entity_recognition/notebooks/Pipeline.ipynb" rel="nofollow noreferrer">https://github.com/kubeflow/examples/blob/master/named_entity_recognition/notebooks/Pipeline.ipynb</a></p>
<p><a href="https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/mnist/03_Reusable_Components.ipynb" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/mnist/03_Reusable_Components.ipynb</a></p>
| Christopher Thompson | <p>Try this:</p>
<pre><code>client = kfp.Client(host='pipelines-api.kubeflow.svc.cluster.local:8888')
</code></pre>
<p>This helped me resolve the HTTPConnection error.</p>
| Amit Meghanani |
<p>I have a legacy app which keep checking an empty file inside a directory and perform certain action if the file timestamp is changed.</p>
<p>I am migrating this app to Kubernetes so I want to create an empty file inside the pod. I tried subpath like below but it doesn't create any file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: demo-pod
spec:
containers:
- name: demo
image: alpine
command: ["sleep", "3600"]
volumeMounts:
- name: volume-name
mountPath: '/volume-name-path'
subPath: emptyFile
volumes:
- name: volume-name
emptyDir: {}
</code></pre>
<p>describe pods shows</p>
<pre><code>Containers:
demo:
Container ID: containerd://0b824265e96d75c5f77918326195d6029e22d17478ac54329deb47866bf8192d
Image: alpine
Image ID: docker.io/library/alpine@sha256:08d6ca16c60fe7490c03d10dc339d9fd8ea67c6466dea8d558526b1330a85930
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Running
Started: Wed, 10 Feb 2021 12:23:43 -0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4gp4x (ro)
/volume-name-path from volume-name (rw,path="emptyFile")
</code></pre>
<p>ls on the volume also shows nothing.
<code>k8 exec -it demo-pod -c demo ls /volume-name-path</code></p>
<p>Any suggestions?</p>
<p>PS: I don't want to use a ConfigMap; I simply want to create an empty file.</p>
| nothing_authentic | <p>If the objective is to create an empty file when the Pod starts, then the easiest way is to either use the entrypoint of the Docker image or an init container.</p>
<p>With the initContainer approach, you could go with something like the following (or with a more complex init image that you build yourself and which executes a whole bash script, or something similar):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: demo-pod
spec:
initContainers:
- name: create-empty-file
image: alpine
command: ["touch", "/path/to/the/directory/empty_file"]
volumeMounts:
- name: volume-name
mountPath: /path/to/the/directory
containers:
- name: demo
image: alpine
command: ["sleep", "3600"]
volumeMounts:
- name: volume-name
mountPath: /path/to/the/directory
volumes:
- name: volume-name
emptyDir: {}
</code></pre>
<p>Basically, the init container gets executed first, runs its command and, if it is successful, terminates so that the main container can start running. They share the same volumes (and they can also mount them at different paths), so in the example the init container mounts the emptyDir volume, creates an empty file and then completes. When the main container starts, the file is already there.</p>
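<p>For completeness, here is a minimal sketch of the entrypoint-style alternative mentioned above, done by overriding the container command instead of rebuilding the image (the directory path is just an illustrative placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      # create the empty file first, then run the original long-running command
      command: ["sh", "-c", "touch /path/to/the/directory/empty_file && sleep 3600"]
      volumeMounts:
        - name: volume-name
          mountPath: /path/to/the/directory
  volumes:
    - name: volume-name
      emptyDir: {}
</code></pre>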
<p>Regarding your legacy application that is being ported to Kubernetes:</p>
<p>If you have control over the Dockerfile, you could simply change it to create an empty file at the path where the app expects it, so that when the app starts, the file is already there, empty, from the beginning; just as you add the application to the container image, you can also add other files.</p>
<p>For more info on init containers, please check the documentation (<a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a>)</p>
| AndD |
<p>We are using Prometheus operator and we need to expose Grafana publicly (outside) using istio,
<a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a></p>
<p>Normally, when I have an application which I need to expose publicly with istio, I add something like the following to my microservice <strong>and it works</strong> and is exposed outside.</p>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: po-svc
namespace: po
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: myapp //I take the name from deployment.yaml --in the chart NOT SURE WHICH VALUE I SHOULD TAKE FROM THE CHART---
</code></pre>
<p>And add a virtual service</p>
<p><strong>virtualservice.yaml</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: po-virtualservice
namespace: po
spec:
gateways:
- gw-system.svc.cluster.local
hosts:
- po.eu.trial.appos.cloud.mvn
http:
- route:
- destination:
host: po-svc
port:
number: 3000
</code></pre>
<p>Then I was able to access my application <strong>publicly</strong>.</p>
<p>Now I want to do the same for Grafana from the prometheus operator chart.</p>
<p>In the <code>values.yaml</code> there is a service entry:</p>
<p><a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576</a>
However, I am not sure if it should replace the <code>service.yaml</code> and, if yes, how to fill in values like <code>app: myapp</code> (which for a regular application I take from the deployment.yaml <code>name</code> field) so that the service references the Grafana application.</p>
<p>In addition, in the <code>virtualservice.yaml</code> there is a reference to the <code>service</code> (host: po-svc).</p>
<blockquote>
<p>My question is: how should I fill those <strong>two values</strong> and be able to expose Grafana using istio?</p>
</blockquote>
<p>Btw, if I change the <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">values from the chart</a> to <code>LoadBalancer</code> like below, I'm getting a public URL to access it from outside; however, I want to expose it via istio.</p>
<pre><code> service:
portName: service
type: LoadBalancer
</code></pre>
<p><strong>update</strong></p>
<p>I've created the following virtual service</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: po-virtualservice
namespace: po
spec:
gateways:
- gw-system.svc.cluster.local
hosts:
- po.eu.trial.appos.cloud.mvn
http:
- route:
- destination:
host: po-grafana. // This is the name of the service that promethues operator created when applying the chart .
port:
number: 3000
</code></pre>
<p>and update the <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">values.yaml</a> like following</p>
<pre><code> service:
portName: service
port: 3000
targetPort: 3000
</code></pre>
<p>Now, when I hit the application URL (po.eu.trial.appos.cloud.mvn) in the browser, I get this error:</p>
<p><code>upstream connect error or disconnect/reset before headers. reset reason: connection termination</code> Any idea what could be the problem? How should I trace this issue?</p>
<p>I would think (not 100% sure) that I may be missing something in the <strong>service config in the</strong> <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">chart</a>, but I am not sure what...</p>
<p>I've found this post which has a similar error (but I am not sure we have the same issue):</p>
<p><a href="https://github.com/istio/istio/issues/19966" rel="nofollow noreferrer">https://github.com/istio/istio/issues/19966</a></p>
<p>However, I am not sure how I should add the port name to the <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">chart yaml</a> service definition.</p>
| PJEM | <p>Here is a working example with istio version 1.7.0:</p>
<pre><code>istioctl version
client version: 1.7.0
control plane version: 1.7.0
data plane version: 1.7.0 (1 proxies)
</code></pre>
<p>1. I have used <a href="https://v2.helm.sh/docs/helm/#helm-fetch" rel="nofollow noreferrer">helm fetch</a> to get the prometheus operator chart.</p>
<pre><code>helm fetch stable/prometheus-operator --untar
</code></pre>
<p>2. I changed these settings in values.yaml.</p>
<p>Grafana <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L576" rel="nofollow noreferrer">Service</a>.</p>
<pre><code>service:
portName: http-service
port: 3000
targetPort: 3000
</code></pre>
<p>Grafana <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L514" rel="nofollow noreferrer">host</a>.</p>
<pre><code>hosts:
- grafana.domain.com
</code></pre>
<p>3. I created the po namespace and installed prometheus operator:</p>
<pre><code>kubectl create namespace po
helm install prometheus-operator ./prometheus-operator -n po
</code></pre>
<p>4. I checked the Grafana service name with:</p>
<pre><code>kubectl get svc -n po
prometheus-operator-grafana ClusterIP
</code></pre>
<p>5. I used the yamls below for istio, with the Grafana service name, which is <code>prometheus-operator-grafana</code>, as my virtual service and destination rule host.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: grafana-gateway
namespace: po
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http-grafana
protocol: HTTP
hosts:
- "grafana.domain.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-vs
namespace: po
spec:
hosts:
- "grafana.domain.com"
gateways:
- grafana-gateway
http:
- route:
- destination:
host: prometheus-operator-grafana.po.svc.cluster.local
port:
number: 3000
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: grafana
namespace: po
spec:
host: prometheus-operator-grafana.po.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
</code></pre>
<p>6. Test with curl; it returns 302 instead of 200 as we have to log in.</p>
<pre><code>curl -v -H "host: grafana.domain.com" xx.xx.xxx.xxx/
GET / HTTP/1.1
> Host: grafana.domain.com
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 302 Found
</code></pre>
<hr />
<p>Let me know if it worked or if you have any other questions. Maybe there is a problem with the 1.4.3 version you use.</p>
| Jakub |
<p>I have a <code>values.yaml</code> file in which I have given <code>spring_datasource_hikari_maximum_pool_size: "10"</code></p>
<p>In the <code>deployment.yaml</code> I have used this value as:</p>
<pre><code> - name: SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE
value: {{ .Values.spring_datasource_hikari_maximum_pool_size }}
</code></pre>
<p>However, when used inside the <code>deployment.yaml </code>file it fails with the below error.</p>
<pre><code>
Deploy failed: The request is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":
{
(helm values etc)
`{"name":"SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE","value":10}]` **(this is the incorrect value)**
}
cannot convert int64 to string
</code></pre>
<p>What is the correct format of using an integer value from <code>values.yaml </code>file in a <code>deployment.yaml </code>file?</p>
<p>I have also tried multiple combinations with quotes "" but nothing seems to be working.</p>
<p>Any help is appreciated, Thanks in advance.</p>
| Mohammed Idris | <p>I was able to resolve this by using <strong>double quotes</strong> on the <code>value</code> itself in <code>deployment.yaml</code> file</p>
<pre><code>- name: SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE
value: "{{ .Values.spring_datasource_hikari_maximum_pool_size }}"
</code></pre>
<p>Since this was a <strong>production instance</strong>, I could not verify @David Maze's and Vit's solutions.</p>
<p><strong>Edit:</strong></p>
<p>Tried with the <code>quote</code> option and it worked too.</p>
<pre><code> - name: SPRING_DATASOURCE_HIKARI_MAXIMUMPOOLSIZE
value: {{ quote .Values.spring_datasource_hikari_maximum_pool_size }}
</code></pre>
| Mohammed Idris |
<p>I am new to Kubernetes and I have a minikube setup on my Linux Mint 20. I am trying to implement server-side rendering with Next.js, and I have installed ingress-nginx using helm.</p>
<p>ingress-service.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: example.dev
http:
paths:
- backend:
serviceName: users-srv
servicePort: 4000
path: /api/users/?(.*)
- backend:
serviceName: ui-srv
servicePort: 3000
path: /?(.*)
</code></pre>
<p>In the Next.js app <code>ui</code> I want to access the ingress controller in order to make API calls from the server side. I tried:</p>
<p><code>axios.get('http://ingress-nginx-controller-admission/api/users/currentuser')</code></p>
<p><code>axios.get('http://ingress-nginx-controller/api/users/currentuser')</code></p>
<p><code>axios.get('http://ingress-service/api/users/currentuser')</code></p>
<p>but nothing is working.</p>
<p>kubectl get services:</p>
<pre><code> NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.45.123 172.42.42.100 80:31205/TCP,443:32568/TCP 80m
ingress-nginx-controller-admission ClusterIP 10.111.229.112 <none> 443/TCP 80m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d1h
ui-srv ClusterIP 10.99.20.51 <none> 3000/TCP 89s
users-mongo-srv ClusterIP 10.103.187.200 <none> 27017/TCP 89s
users-srv ClusterIP 10.99.15.244 <none> 4000/TCP 89s
</code></pre>
<p>Can anyone help me out? Thanks in advance.</p>
| Akshay Patil | <p>The Ingress is designed to handle external traffic to the cluster and, as such, it expects the request to arrive at the domain you specified (i.e. example.dev).</p>
<p>To access your APIs from inside a Pod, you should most definitely use the services behind the Ingress directly, such as <code>users-srv</code> or <code>ui-srv</code> (for example <code>http://users-srv:4000</code>).</p>
<p>If you really want to contact the ingress instead of the Service, you could try a couple things:</p>
<ul>
<li><p>Make it so that <code>example.dev</code> points to the LoadBalancer IP address; for example, adding it to the <code>/etc/hosts</code> of the cluster's nodes should work (or even internally in the Pod). But take into consideration that this means accessing the services by a long route when you could just access them with the service name.</p>
</li>
<li><p>Remove the host parameter from your rules, so that the services are served generally at the IP address of the nginx controller; this should make using <code>ingress-nginx-controller</code> as the hostname work as expected (see the sketch after this list). This is not supported by all Ingress Controllers, but it could work.</p>
</li>
</ul>
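<p>As a minimal sketch of the second option, this is the Ingress from the question with the <code>host</code> field simply dropped (whether this then behaves as a catch-all depends on your Ingress Controller and its configuration):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    # no 'host' here: the rules apply to requests for any hostname
    - http:
        paths:
          - backend:
              serviceName: users-srv
              servicePort: 4000
            path: /api/users/?(.*)
          - backend:
              serviceName: ui-srv
              servicePort: 3000
            path: /?(.*)
</code></pre>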
| AndD |
<p>I'm trying to run FileBeat on minikube following this doc with k8s 1.16
<a href="https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html" rel="noreferrer">https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html</a></p>
<p>I downloaded the manifest file as instructed</p>
<pre><code>curl -L -O https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/filebeat-kubernetes.yaml
</code></pre>
<p>Contents of the yaml file below</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# host: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
</code></pre>
<p>When I try the deploy step:</p>
<pre><code>kubectl create -f filebeat-kubernetes.yaml
</code></pre>
<p>I get the output + error:</p>
<pre><code>configmap/filebeat-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
</code></pre>
| phamjamstudio | <p>As we can see <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">there</a>:</p>
<blockquote>
<p><strong>DaemonSet</strong>, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API</p>
</blockquote>
<p>You need to change the <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning" rel="noreferrer">apiVersion</a>:</p>
<pre><code>apiVersion: extensions/v1beta1 -> apiVersion: apps/v1
</code></pre>
<p>Then there is another error:</p>
<p><code>missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec;</code></p>
<p>So we have to add the selector field:</p>
<pre><code>spec:
selector:
matchLabels:
k8s-app: filebeat
</code></pre>
<p>Edited DaemonSet yaml:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
</code></pre>
<p>Let me know if that helps you.</p>
| Jakub |
<p>I deployed Spark on Kubernetes with <code>helm install microsoft/spark --version 1.0.0</code> (I also tried the Bitnami chart with the same result).</p>
<p>Then, as described in <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#submitting-applications-to-kubernetes" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#submitting-applications-to-kubernetes</a>,</p>
<p>I go to $SPARK_HOME/bin and run:</p>
<pre><code>docker-image-tool.sh -r -t my-tag build
</code></pre>
<p>This returns: <code>Cannot find docker image. This script must be run from a runnable distribution of Apache Spark.</code></p>
<p>But all Spark runnables are in this directory:</p>
<pre><code>bash-4.4# cd $SPARK_HOME/bin
bash-4.4# ls
beeline find-spark-home.cmd pyspark.cmd spark-class spark-shell.cmd spark-sql2.cmd sparkR
beeline.cmd load-spark-env.cmd pyspark2.cmd spark-class.cmd spark-shell2.cmd spark-submit sparkR.cmd
docker-image-tool.sh load-spark-env.sh run-example spark-class2.cmd spark-sql spark-submit.cmd sparkR2.cmd
find-spark-home pyspark run-example.cmd spark-shell spark-sql.cmd spark-submit2.cmd
</code></pre>
<p>Any suggestions on what I am doing wrong? I haven't made any other configurations with Spark. Am I missing something? Should I install Docker myself, or any other tools?</p>
| rigby | <p>You are mixing things here. </p>
<p>When you run <code>helm install microsoft/spark --version 1.0.0</code> you're deploying Spark with all prerequisites inside Kubernetes. Helm is doing all the hard work for you. After you run this, Spark is ready to use.</p>
<p>Then, after deploying Spark using Helm, you are trying to deploy Spark again from inside a Spark pod that is already running on Kubernetes.</p>
<p>These are two different things that are not meant to be mixed. <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#submitting-applications-to-kubernetes" rel="nofollow noreferrer">This</a> guide explains how to run Spark on Kubernetes by hand, but fortunately it can be done using Helm as you did before.</p>
<p>When you run <code>helm install myspark microsoft/spark --version 1.0.0</code>, the output tells you how to access your Spark web UI:</p>
<pre><code>NAME: myspark
LAST DEPLOYED: Wed Apr 8 08:01:39 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the Spark URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w myspark-webui'
export SPARK_SERVICE_IP=$(kubectl get svc --namespace default myspark-webui -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SPARK_SERVICE_IP:8080
2. Get the Zeppelin URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w myspark-zeppelin'
export ZEPPELIN_SERVICE_IP=$(kubectl get svc --namespace default myspark-zeppelin -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$ZEPPELIN_SERVICE_IP:8080
</code></pre>
<p>Let's check it:</p>
<pre><code>$ export SPARK_SERVICE_IP=$(kubectl get svc --namespace default myspark-webui -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo http://$SPARK_SERVICE_IP:8080
http://34.70.212.182:8080
</code></pre>
<p>If you open this URL you have your Spark webui ready.</p>
<p><a href="https://i.stack.imgur.com/kTv4q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kTv4q.png" alt="Spark WEBUI"></a></p>
| Mark Watney |
<p>While trying to understand <strong>Kubernetes networking</strong>, one point has confused me. Why <strong>Kubernetes doesn't handle pod-to-pod communication inbuilt/itself?</strong></p>
<p>As per the docs - <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a> </p>
<p>There are <strong>4 distinct networking problems</strong> to address:</p>
<ol>
<li><strong>Highly-coupled container-to-container communications</strong>: this is solved
by pods and localhost communications.</li>
<li><strong>Pod-to-Pod communications</strong>: this is the primary focus of this
document.</li>
<li><strong>Pod-to-Service communications</strong>: this is covered by services.</li>
<li><strong>External-to-Service communications</strong>: this is covered by services.</li>
</ol>
<p>When Kubernetes can <strong>handle all the other problems</strong>(mentioned above) of networking, <strong>why does pod-to-pod communication needs to handled by other <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">plugins</a></strong> like ACI, Cilium, Flannel, Jaguar and so on..</p>
<p>I would like to know: is there <strong>any specific reason for such an architecture</strong>?</p>
| Here_2_learn | <p>Agree with Tim above. Kubernetes in general is mostly an abstraction and orchestration layer over compute, storage and networking for developers, so that they don't have to be aware of the implementation. The implementation itself is tied to the underlying infrastructure, and Kubernetes just defines the interfaces for it (CRI for containers/compute, CSI for storage and CNI for networking).</p>
<p>By just defining the interfaces, the implementations can evolve independently without breaking the contract. For example, in the future it might become possible to offload pod-to-pod networking to the NIC, and expecting Kubernetes itself to evolve with such a technology change might be a big ask. By not being intimately tied to the implementation, the technology in each layer can develop and accelerate independently.</p>
| Mohana Kumar |
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
</code></pre>
<p>This is my configmap.yml file. When I try to create it, I get this error:</p>
<pre><code>error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
</code></pre>
<p>I can't get rid of the error even after removing the space at line 13, column 17.</p>
| ashique | <p>The yml content can be put directly on multiple lines, formatted like real <code>yaml</code>; take a look at the following example:</p>
<pre><code>data:
# kibana.yml is mounted into the Kibana container
# see https://github.com/elastic/kibana/blob/master/config/kibana.yml
# Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
kibana.yml: |-
server:
name: kib.the-project.d4ldev.txn2.com
host: "0"
elasticsearch.url: http://elasticsearch:9200
</code></pre>
<p>This works when put in a ConfigMap, and it should work even if provided to a Helm chart (depending on how the Helm templates are written).</p>
| AndD |
<p>Is there a way to request the status of a readinessProbe by using a service name linked to a deployment? In an initContainer, for example?</p>
<p>Imagine we have a deployment X, using a readinessProbe, and a service linked to it so we can request, for example, <code>http://service-X:8080</code>. Now we create a deployment Y, and in its initContainer we want to know if deployment X is ready. Is there a way to ask something like <code>deployment-X.ready</code> or <code>service-X.ready</code>?</p>
<p>I know that the correct way to handle dependencies is to let Kubernetes do it for us, but I have a container which doesn't crash and I have no control over it...</p>
| Borhink | <p>Instead of a readinessProbe you can just use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">initContainer</a>.</p>
<p>You create pod/deployment X, make service X, and in the dependent pod create an initContainer which keeps looking for service X.</p>
<blockquote>
<p>If it finds it -> the pod will be created.</p>
<p>If it doesn't find it -> it will keep looking until service X is created.</p>
</blockquote>
<p>Just a simple example: we create an <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">nginx deployment</a> by using <code>kubectl apply -f nginx.yaml</code>.</p>
<p>nginx.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 2
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Then we create a pod with the initContainer:</p>
<p>initContainer.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c', 'until nslookup my-nginx; do echo waiting for myapp-pod2; sleep 2; done;']
</code></pre>
<p>The initContainer will look for the <strong>my-nginx</strong> service; until you create it, the pod will be in <code>Init:0/1</code> status.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/1 0 15m
</code></pre>
<p>After you add the service, for example by using <code>kubectl expose deployment/my-nginx</code>, the initContainer will find the my-nginx service and the pod will be created.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 35m
</code></pre>
<p>Result:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/myapp-pod to kubeadm2
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container init-myservice
Normal Started 20s kubelet, kubeadm2 Started container init-myservice
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container myapp-container
Normal Started 20s kubelet, kubeadm2 Started container myapp-container
</code></pre>
<p>Let me know if that answers your question.</p>
| Jakub |
<p>I have two k8s clusters, one using Docker and another using containerd directly, both with SELinux enabled. But I found that SELinux is not actually working on the containerd one, although the two clusters have the same version of containerd and runc.</p>
<p>Did I miss some setting with containerd?</p>
<p>Docker: the file label is <strong>container_file_t</strong> and the process runs as <strong>container_t</strong>; SELinux works fine.</p>
<pre><code>K8s version: 1.17
Docker version: 19.03.6
Containerd version: 1.2.10
selinux enable by adding ["selinux-enabled": true] to /etc/docker/daemon.json
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:container_t:s0:c655,c743 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_file_t:s0:c655,c743 /usr/local/openjdk-8/bin/java
</code></pre>
<p>containerd: the file label is <strong>container_var_lib_t</strong> and the process runs as <strong>spc_t</strong>; SELinux effectively does nothing.</p>
<pre><code>K8s version: 1.15
Containerd version: 1.2.10
selinux enable by setting [enable_selinux = true] in /etc/containerd/config.toml
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:spc_t:s0 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_var_lib_t:s0 /usr/local/openjdk-8/bin/java
// seems run as spc_t is correct
# sesearch -T -t container_var_lib_t | grep spc_t
type_transition container_runtime_t container_var_lib_t : process spc_t;
</code></pre>
| Cyron | <p>From this <a href="https://github.com/containerd/cri/issues/1195" rel="nofollow noreferrer">issue</a> we can read: </p>
<blockquote>
<p>Containerd includes minimal support for SELinux. More accurately, it
contains support to run ON systems using SELinux, but it does not make
use of SELinux to improve container security.</p>
<p>All containers run with the
<code>system_u:system_r:container_runtime_t:s0</code> label, but no further
segmentation is made</p>
</blockquote>
<p>There is no full support for what you are doing using containerd. Your approach is correct, but the problem is the lack of support for this functionality.</p>
| Mark Watney |
<p>I have tried to install MariaDB ColumnStore on Kubernetes. I am using Windows 10 and have my kubernetes cluster inside a VBox. </p>
<p>This is what I have tried so far:</p>
<p>First I went to <a href="https://github.com/mariadb-corporation/mariadb-kubernetes/tree/master/mariadb-enterprise" rel="nofollow noreferrer">https://github.com/mariadb-corporation/mariadb-kubernetes/tree/master/mariadb-enterprise</a> and downloaded the git repo.</p>
<pre><code>git clone https://github.com/mariadb-corporation/mariadb-kubernetes
</code></pre>
<p>I cd into the repository directory and try to install the chart using Helm, without any modifications to the values file, to see if it works.</p>
<pre><code>helm install mariadb-enterprise/ --name my_cluster
</code></pre>
<p>It works. But when I try to change the topology to "columnstore" </p>
<pre><code>helm install mariadb-enterprise/ --name my_cluster --set mariadb.cluster.topology = columnstore-standalone
</code></pre>
<p>I get the following error </p>
<pre><code>my-cluster-mdb-cs-single-0 0/1 Init:ErrImagePull 0 18s
</code></pre>
<p>I get the following output when I use </p>
<pre><code> kubectl describe pod my-cluster-mdb-cs-single-0
</code></pre>
<hr>
<pre><code> Name: my-cluster-mdb-cs-single-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 19 Jun 2019 09:05:39 +0200
Labels: controller-revision-hash=my-cluster-mdb-cs-single-
84bcfc86b8
mariadb=my-cluster
pm.mariadb=my-cluster
statefulset.kubernetes.io/pod-name=my-cluster-mdb-cs-single-0
um.mariadb=my-cluster
Annotations: <none>
Status: Pending
IP: xxx.17.0.17
Controlled By: StatefulSet/my-cluster-mdb-cs-single
Init Containers:
init-columnstore:
Container ID:
Image: mariadb/columnstore:1.2.3
Image ID:
Port: <none>
Host Port: <none>
Command:
bash
/mnt/config-template/init-configurations.sh
columnstore
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment:
BACKUP_RESTORE_FROM:
CLUSTER_TOPOLOGY: columnstore-standalone
Mounts:
/docker-entrypoint-initdb.d from mariadb-entrypoint-vol (rw)
/mnt/config-map from mariadb-config-vol (rw)
/mnt/config-template from mariadb-configtemplate-vol (rw)
/mnt/secrets from mariadb-secrets-vol (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cv2g5
(ro)
init-volume:
Container ID:
Image: mariadb/columnstore:1.2.3
Image ID:
Port: <none>
Host Port: <none>
Command:
bash
-c
set -e; if [ ! -d "/mnt/columnstore/etc" ]; then rm -rf
/mnt/columnstore/data && cp -rp /usr/local/mariadb/columnstore/data
/mnt/columnstore/ && rm -rf /mnt/columnstore/local && cp -rp
/usr/local/mariadb/columnstore/local /mnt/columnstore/ && rm -rf
/mnt/columnstore/mysql && mkdir -p /mnt/columnstore/mysql && chown
mysql:mysql /mnt/columnstore/mysql && cp -rp
/usr/local/mariadb/columnstore/mysql/db /mnt/columnstore/mysql/ &&cp -rp
/usr/local/mariadb/columnstore/etc /mnt/columnstore/; fi
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt/columnstore from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cv2g5
(ro)
Containers:
columnstore-module-pm:
Container ID:
Image: mariadb/columnstore:1.2.3
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
Command:
bash
/mnt/config-map/start-mariadb-instance.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
MYSQL_ALLOW_EMPTY_PASSWORD: Y
CLUSTER_TOPOLOGY: columnstore-standalone
Mounts:
/docker-entrypoint-initdb.d from mariadb-entrypoint-vol (rw)
/mnt/config-map from mariadb-config-vol (rw)
/tmp/data from temp-data (rw)
/usr/local/mariadb/columnstore/data from data (rw,path="data")
/usr/local/mariadb/columnstore/data1 from data (rw,path="data1")
/usr/local/mariadb/columnstore/data2 from data (rw,path="data2")
/usr/local/mariadb/columnstore/data3 from data (rw,path="data3")
/usr/local/mariadb/columnstore/etc from data (rw,path="etc")
/usr/local/mariadb/columnstore/local from data (rw,path="local")
/usr/local/mariadb/columnstore/mysql/db from data (rw,path="mysql/db")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cv2g5 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-my-cluster-mdb-cs-single-0
ReadOnly: false
temp-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: temp-data-my-cluster-mdb-cs-single-0
ReadOnly: false
mariadb-entrypoint-vol:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mariadb-config-vol:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mariadb-configtemplate-vol:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-cluster-mariadb-config
Optional: false
mariadb-secrets-vol:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-mariadb-secret
Optional: false
default-token-cv2g5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cv2g5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 65s default-scheduler Successfully
assigned default/my-cluster-mdb-cs-single-0 to minikube
Warning Failed 49s kubelet, minikube Failed to pull
image "mariadb/columnstore:1.2.3": rpc error: code = Unknown desc = Error
response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup
registry-1.docker.io on 10.0.2.3:53: read udp 10.0.2.15:57278->10.0.2.3:53:
i/o timeout
Normal Pulling 35s (x2 over 65s) kubelet, minikube Pulling image
"mariadb/columnstore:1.2.3"
Warning Failed 25s (x2 over 49s) kubelet, minikube Error:
ErrImagePull
Warning Failed 25s kubelet, minikube Failed to pull
image "mariadb/columnstore:1.2.3": rpc error: code = Unknown desc = Error
response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup
registry-1.docker.io on 10.0.2.3:53: read udp 10.0.2.15:34499->10.0.2.3:53:
i/o timeout
Normal BackOff 15s (x2 over 48s) kubelet, minikube Back-off pulling
image "mariadb/columnstore:1.2.3"
Warning Failed 15s (x2 over 48s) kubelet, minikube Error:
ImagePullBackOff
</code></pre>
<p>Does anyone know why I get this error and if there is a way to solve it?</p>
| kkss | <p>The problem was that I needed to have Docker installed on Windows. This was not possible when running minikube on VirtualBox. I had to reinstall minikube on Hyper-V, install Docker for Desktop, and then I could install MariaDB ColumnStore.</p>
| kkss |
<p>I understand that the k8s Service basically performs Round Robin on Pods.</p>
<p>If I set the weight of the pods using Istio's DestinationRule, what happens to the RR of the existing k8s Service? Are the load balancing rules of the k8s Service ignored?</p>
| henry-jo | <blockquote>
<p>I understand that the k8s Service basically performs Round Robin on Pods.</p>
</blockquote>
<p>That's correct. It's explained in the kubernetes documentation <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="nofollow noreferrer">here</a>.</p>
<hr />
<blockquote>
<p>If I set the weight of the pods using the 'Destination Rule' of the Istio, what happens to RR of the existing k8s Service? Are load balancing rules in k8s Service ignored?</p>
</blockquote>
<p>I couldn't find exact information on how this works, so I will explain how I understand it.</p>
<p>As the kubernetes Service uses kube-proxy's iptables rules to distribute the requests, I assume that the Istio DestinationRule can override them with its own rules and apply them through the Envoy sidecar. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, which makes it easy to direct and control traffic around your mesh without making any changes to your services.</p>
<p>So if you want to change the default round robin to another algorithm (e.g. LEAST_CONN, RANDOM), you can just configure that in your DestinationRule via <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-SimpleLB" rel="nofollow noreferrer">LoadBalancerSettings.SimpleLB</a>. Note that by default the algorithm there is also round robin, the same as with a kubernetes Service.</p>
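<p>For example, a minimal sketch of such a DestinationRule (the <code>my-svc</code> host is just a placeholder for your service name):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      # switch the load balancing algorithm from the default round robin
      simple: LEAST_CONN
</code></pre>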
<p>More about it <a href="https://istio.io/latest/docs/concepts/traffic-management/" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset</a></li>
<li><a href="https://istio.io/latest/docs/concepts/traffic-management/#load-balancing-options" rel="nofollow noreferrer">https://istio.io/latest/docs/concepts/traffic-management/#load-balancing-options</a></li>
</ul>
| Jakub |
<p>I have a Redis pod on my Kubernetes cluster on Google Cloud. I have created the PV and the claim.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: redis-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: my-size
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: postgres
name: redis-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: my size
</code></pre>
<p>I also mounted it in my deployment.yaml</p>
<pre><code>volumeMounts:
- mountPath: /data
name: redis-pv-claim
volumes:
- name: redis-pv-claim
persistentVolumeClaim:
claimName: redis-pv-claim
</code></pre>
<p>I can't see any errors while running describe pod:</p>
<pre><code>Volumes:
redis-pv-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: redis-pv-claim
ReadOnly: false
</code></pre>
<p>But it just can't save any key. After every deployment, the "/data" folder is just empty.</p>
<p>My NFS is active now but I still can't keep the data.</p>
<p>Describe pvc:</p>
<pre><code>
Namespace: my namespace
StorageClass: nfs-client
Status: Bound
Volume: pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-class: nfs-client
volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: my grafana pod
Events: <none>
</code></pre>
<p>Describe pod gives me an error though.</p>
<pre><code>
Warning FailedMount 18m kubelet, gke-devcluster-pool-1-36e6a393-rg7d MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /
home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60
/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
</code></pre>
| Pasha | <p>What is happening is that, when you have multiple nodes, using a hostPath-backed PVC to share files between pods isn't the best approach.</p>
<p>A PVC like this can only share files between pods residing on the same node. So if you have multiple nodes, you may sometimes have the impression that your files aren't being stored properly.</p>
<p>The ideal solution for you is to use any of the available <a href="https://kubernetes.io/docs/concepts/storage/" rel="nofollow noreferrer">distributed storage</a> solutions. In your question you mention that you are using GCP, but it's not clear if you are using GKE or if you created your cluster on top of compute instances.</p>
<p>If you are using GKE, have you already checked <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">this</a> document? Please let me know. </p>
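<p>If GKE is the case, a minimal sketch of what that document describes would be a PVC backed by GKE's default PD-based StorageClass instead of a hostPath volume (the name and size below are just illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  # GKE's default StorageClass, which dynamically provisions a GCE Persistent Disk
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
</code></pre>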
<p>If you have access to your nodes, the easiest setup you can have is to create an NFS server on one of your nodes and use <a href="https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner" rel="nofollow noreferrer">nfs-client-provisioner</a> to provide access to the NFS server from your pods.</p>
<p>I've been using this approach for quite a while now and it works really well. </p>
<p><strong>1 - Install and configure NFS Server on my Master Node (Debian Linux, this might change depending on your Linux distribution):</strong></p>
<p>Before installing the NFS Kernel server, we need to update our system’s repository index:</p>
<pre><code>$ sudo apt-get update
</code></pre>
<p>Now, run the following command in order to install the NFS Kernel Server on your system:</p>
<pre><code>$ sudo apt install nfs-kernel-server
</code></pre>
<p>Create the Export Directory</p>
<pre><code>$ sudo mkdir -p /mnt/nfs_server_files
</code></pre>
<p>As we want all clients to access the directory, we will remove restrictive permissions of the export folder through the following commands (this may vary on your set-up according to your security policy): </p>
<pre><code>$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
</code></pre>
<p>Assign server access to client(s) through NFS export file</p>
<pre><code>$ sudo nano /etc/exports
</code></pre>
<p>Inside this file, add a new line to allow access from other servers to your share.</p>
<pre><code>/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
</code></pre>
<p>You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.</p>
<p>Export the shared directory and restart the service to make sure all configuration files are correct. </p>
<pre><code>$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
</code></pre>
<p>Check all active shares: </p>
<pre><code>$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
</code></pre>
<p><strong>2 - Install NFS Client on all my Worker Nodes:</strong></p>
<pre><code>$ sudo apt-get update
$ sudo apt-get install nfs-common
</code></pre>
<p>At this point you can make a test to check if you have access to your share from your worker nodes: </p>
<pre><code>$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
</code></pre>
<p>Notice that at this point you can use the name of your master node; k8s is taking care of the DNS here. Check if the volume is mounted as expected and create some folders and files to make sure everything is working fine.</p>
<pre><code>$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
</code></pre>
<p>Go back to your master node and check if these files are in the /mnt/nfs_server_files folder.</p>
<p><strong>3 - Install NFS Client Provisioner</strong>.</p>
<p>Install the provisioner using helm:</p>
<pre><code>$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
</code></pre>
<p>Notice that I've specified a namespace for it.
Check if they are running: </p>
<pre><code>$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
</code></pre>
<p>At this point we have a storageclass called nfs-client: </p>
<pre><code>$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
</code></pre>
<p>We need to create a PersistentVolumeClaim: </p>
<pre><code>$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: nfs
name: test-claim
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
</code></pre>
<pre><code>$ kubectl apply -f nfs-client-pvc.yaml
</code></pre>
<p>Check the status (Bound is expected):</p>
<pre><code>$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
</code></pre>
<p><strong>4 - Create a simple pod to test if we can read/write our NFS share:</strong></p>
<p>Create a pod using this yaml: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod0
labels:
env: test
namespace: nfs
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
</code></pre>
<pre><code>$ kubectl apply -f pod.yaml
</code></pre>
<p>Let's list all mounted volumes on our pod:</p>
<pre><code>$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem Size Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1 99G 11G 84G 11% /mnt
</code></pre>
<p>As we can see, we have a NFS volume mounted on /mnt. (Important to notice the path <code>kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1</code>) </p>
<p>Let's check it: </p>
<pre><code>root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
</code></pre>
<p>It's empty. Let's create some files: </p>
<pre><code>$ for i in 1 2; do touch file$i; done;
$ ls -l
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
</code></pre>
<p>Now let's see where these files are on our NFS server (master node):</p>
<pre><code>$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
</code></pre>
<p>And here are the files we just created inside our pod! </p>
<p>Please let me know if this solution helped you.</p>
| Mark Watney |
<p>help me figure it out.</p>
<p>I have a Bare Metal Kubernetes cluster with three nodes, each node has a public ip.
I have installed MetalLB and IngressController.</p>
<p>It is not clear to me which IP should I redirect domains and subdomains to so that they can be resolved by the Ingress Controller?</p>
<p>I need to initially define on which node the Ingress Controller will be launched?
I need to install the Ingress Controller, and then look at the worker node, on which it will be installed and send all domains or subdomains there?
What happens if, after restarting the cluster, the ingress controller will be deployed on another node?</p>
<p>All the tutorials I've seen show how it works locally or with a cloud load balancer.</p>
<p>Help me understand how this should work correctly.</p>
| JDev | <p>Usually, when you install MetalLB, you configure a pool of addresses which can be used to assign new IPs at LoadBalancer services whenever they are created. Such IP addresses need to be available, they cannot be created out of nothing of course.. they could be in lease from your hosting provider for example.</p>
<p>If instead you have a private Bare Metal cluster which serves only your LAN network, you could just select a private range of IP addresses which are not used.</p>
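<p>As a minimal sketch (the namespace is MetalLB's usual one and the address range is an assumption; replace it with addresses that are actually free and routable in your network), the older ConfigMap-based configuration for Layer2 mode looks roughly like this (newer MetalLB releases use CRDs instead, see the docs linked at the end of this answer):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  # MetalLB reads its configuration from this ConfigMap in its own namespace
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # pool of IPs MetalLB is allowed to hand out to LoadBalancer services
      - 192.168.1.240-192.168.1.250
</code></pre>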
<p>Then, once MetalLB is running, what happens is the following:</p>
<ul>
<li>Someone / something creates a <code>LoadBalancer</code> service (a Helm Chart, a user with a definition or with commands, etc.)</li>
<li>The newly created service needs an external IP. MetalLB will select one address from the configured range and assign it to that service</li>
<li>MetalLB will start announcing, using standard protocols, that the IP address can now be reached by contacting the cluster; it can work either in <code>Layer2</code> mode (one node of the cluster holds the additional IP address) or <code>BGP</code> (true load balancing across all nodes of the cluster)</li>
</ul>
<p>From that point, you can just reach the new service by contacting this newly assigned IP address (which is NOT the ip of any of the cluster nodes)</p>
<p>Usually, the Ingress Controller will just come with a LoadBalancer service (which will grab an external IP address from MetalLB), and then you can reach the Ingress Controller at that IP.</p>
<p>As for your other questions, you don't need to worry about where the Ingress Controller is running or similar, it will be automatically handled.</p>
<p>The only thing you may want to do is to make the domain names which you want to serve point to the external IP address assigned to the Ingress Controller.</p>
<hr />
<p>Some docs:</p>
<ul>
<li>MetalLB <a href="https://metallb.universe.tf/concepts/" rel="nofollow noreferrer">explanations</a></li>
<li>Bitnami MetalLB <a href="https://github.com/bitnami/charts/tree/master/bitnami/metallb" rel="nofollow noreferrer">chart</a></li>
<li>LoadBalancer service <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">docs</a></li>
</ul>
| AndD |
<p>If I want to run multiple replicas of some container that requires a one off initialisation task, is there a standard or recommended practice?</p>
<p>Possibilities:</p>
<ul>
<li>Use a StatefulSet even if it isn't necessary after initialisation, and have init containers which check to see if they are on the first pod in the set and do nothing otherwise. (If a StatefulSet is needed for other reasons anyway, this is almost certainly the simplest answer.)</li>
<li>Use init containers which use leader election or some similar method to pick only one of them to do the initialisation.</li>
<li>Use init containers, and make sure that multiple copies can safely run in parallel. Probably ideal, but not always simple to arrange. (Especially in the case where a pod fails randomly during a rolling update, and a replacement old pod runs its init at the same time as a new pod is being started.)</li>
<li>Use a separate Job (or a separate Deployment) with a single replica. Might make the initialisation easy, but makes managing the dependencies between it and the main containers in a CI/CD pipeline harder (we're not using Helm, but this would be something roughly comparable to a post-install/post-upgrade hook).</li>
</ul>
| armb | <p>The fact that "replicas of some container" are dependent on "a one-off initialisation task" means that the application architecture does not fit the Kubernetes paradigm well. That is why involving a third-party manager on top of k8s, like Helm, has to be considered (as suggested by <a href="https://stackoverflow.com/users/7641078/eduardo-baitello">Eduardo Baitello</a> and <a href="https://stackoverflow.com/users/1318694/matt">Matt</a>). </p>
<p>To keep with a pure Kubernetes approach, it'd be better to redesign your application so that its components work as independent or loosely coupled microservices (including initialization tasks).
A <a href="https://stackoverflow.com/questions/57211216/how-to-wait-for-the-other-container-completed-execution-in-k8s/57272595#57272595">similar question</a> has been discussed here recently. </p>
<p>As for the possibilities listed in the question, perhaps the first option with <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">InitContainers</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> could be feasible in pure Kubernetes. </p>
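<p>As a rough sketch of that first option (image, names and commands below are placeholders, not a tested setup), an init container in a StatefulSet Pod could look at its own ordinal and only run the one-off work on the first Pod:</p>
<pre><code>initContainers:
- name: one-off-init
  image: busybox:1.28
  command:
  - sh
  - -c
  - |
    # StatefulSet Pod names end with their ordinal, e.g. myapp-0, myapp-1, ...
    case "$(hostname)" in
      *-0)
        echo "running one-off initialisation on the first pod"
        # ... put the actual initialisation commands here ...
        ;;
      *)
        echo "skipping initialisation on $(hostname)"
        ;;
    esac
</code></pre>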
| mebius99 |
<p>I want to run a microservice which uses a DB. The DB needs to be deployed in the same Kubernetes cluster as well, using PVC/PV. What is the Kubernetes service name/command to use to implement such logic:</p>
<ol>
<li>Deploy the DB instance</li>
<li>If 1 is successful, then deploy the microservice; otherwise return to 1 and retry (if it fails 100 times, stop and raise an alarm)</li>
<li>If 2 is successful, work with it, autoscaling if needed (the Kubernetes autoscale option)</li>
</ol>
<p>I am concerned mostly about 1-2: the service cannot work without the DB, but at the same time they need to be in different pods (or am I wrong and is it better to put the 2 containers, DB and service, in the same pod?)</p>
| J.J. Beam | <p>I would say you should add an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">initContainer</a> to your microservice, which would wait for the DB service; once it is ready, the microservice will be started.</p>
<p>e.g.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
</code></pre>
<p>As for the command, simply use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply" rel="nofollow noreferrer">kubectl apply</a> with your yamls (with the initContainer configured in your application).</p>
<p>If you want to do that in a more automated way, you can think about using <a href="https://fluxcd.io/" rel="nofollow noreferrer">fluxCD</a>/<a href="https://argoproj.github.io/argo-cd/" rel="nofollow noreferrer">argoCD</a>.</p>
<hr />
<p>As for the question from comments, <code>containers that run before the main container runs and the main container must be in the same pod?</code></p>
<p>Yes, they have to be in the same pod. The init container keeps running until, e.g., the database service is available, and only then does the main container start. There is a great example of that in the initContainer documentation above.</p>
| Jakub |
<p>I am trying to install an Elasticsearch StatefulSet in my GKE cluster, but it's throwing an error and I am unable to identify it. Below is the log that I got inside the pod. Can someone help me? I have included the error logs as well as the elasticsearch_statefulset.yml file.</p>
<pre><code>{"type": "server", "timestamp": "2021-12-16T09:30:49,473Z", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "k8s-logs", "node.name": "es-cluster-0", "message": "failed to resolve host [es-cluster-2.elasticsearch]",
"stacktrace": ["java.net.UnknownHostException: es-cluster-2.elasticsearch",
"at java.net.InetAddress$CachedAddresses.get(InetAddress.java:800) ~[?:?]",
"at java.net.InetAddress.getAllByName0(InetAddress.java:1495) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1354) ~[?:?]",
"at java.net.InetAddress.getAllByName(InetAddress.java:1288) ~[?:?]",
"at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:548) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:490) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:855) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at org.elasticsearch.discovery.SeedHostsResolver.lambda$resolveHostsLists$0(SeedHostsResolver.java:144) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
</code></pre>
<p>This is the yml I used to configure the stateful set :</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: kube-logging
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
resources: {}
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: standard
resources:
requests:
storage: 10Gi
</code></pre>
<p>Service file which I used :</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: elasticsearch
namespace: kube-logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
type: LoadBalancer
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
</code></pre>
<p>Output for kubectl get statefulset -n kube-logging es-cluster -oyaml</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2021-12-20T06:31:55Z"
generation: 1
name: es-cluster
namespace: kube-logging
resourceVersion: "43285"
uid: a1730c94-1aa5-461c-ba73-9af617ea4c42
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: elasticsearch
serviceName: elasticsearch-headless
template:
metadata:
creationTimestamp: null
labels:
app: elasticsearch
spec:
containers:
- env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: discovery.seed_hosts
value: es-cluster-0.elasticsearch
- name: cluster.initial_master_nodes
value: es-cluster-0
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data
image: busybox
imagePullPolicy: Always
name: fix-permissions
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
- command:
- sysctl
- -w
- vm.max_map_count=262144
image: busybox
imagePullPolicy: Always
name: increase-vm-max-map
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- command:
- sh
- -c
- ulimit -n 65536
image: busybox
imagePullPolicy: Always
name: increase-fd-ulimit
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: elasticsearch
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: gold
volumeMode: Filesystem
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 1
currentRevision: es-cluster-6f5bd744c7
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: es-cluster-6f5bd744c7
updatedReplicas: 1
</code></pre>
<p>Output for kubectl get svc -n kube-logging elasticsearch-headless -oyaml :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
creationTimestamp: "2021-12-20T06:31:40Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: elasticsearch
name: elasticsearch-headless
namespace: kube-logging
resourceVersion: "43384"
uid: 80d10b9b-1f22-48ce-ba39-7d5801dc91ee
spec:
clusterIP: 10.8.4.128
clusterIPs:
- 10.8.4.128
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: rest
nodePort: 30625
port: 9200
protocol: TCP
targetPort: 9200
- name: inter-node
nodePort: 30056
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 35.222.11.96
</code></pre>
| sidharth vijayakumar | <p>In addition to the service you created to expose Elasticsearch outside the cluster, you also need a <code>headless</code> service so that each node / Pod of the elastic cluster can communicate with the others.</p>
<p>I would do the following:</p>
<p>First, inside the spec of the <code>StatefulSet</code>, change <code>spec.serviceName</code> to another value, such as for example <code>elasticsearch-headless</code></p>
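<p>For clarity, only this fragment of the StatefulSet changes. Note that the Pod DNS names then become <code>es-cluster-0.elasticsearch-headless</code> and so on, so <code>discovery.seed_hosts</code> should reference the headless service name as well:</p>
<pre><code>spec:
  # must match the metadata.name of the headless Service created in the next step
  serviceName: elasticsearch-headless
</code></pre>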
<p>Second, create the new service with the following:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
# must be the same as the StatefulSet spec.serviceName
name: elasticsearch-headless
namespace: kube-logging
labels:
app: elasticsearch
spec:
type: ClusterIP
# headless service, can be used by elastic Pods to contact each other
clusterIP: None
ports:
- name: rest
port: 9200
protocol: TCP
targetPort: 9200
- name: inter-node
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
</code></pre>
<hr />
<p>Some docs on <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Services</a></p>
<p>Also, you may be interested in checking out the <a href="https://github.com/elastic/helm-charts" rel="nofollow noreferrer">HELM Charts</a> and <a href="https://www.elastic.co/elastic-cloud-kubernetes" rel="nofollow noreferrer">ECK</a>, because there are several things ready to be used in order to deploy production-ready Elasticsearch clusters.</p>
| AndD |
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment</a> mentions that a <code>deployment</code> creates a <code>replicaSet</code> but appends a <code>pod-template-hash</code> to the name of the <code>replicaSet</code> and also adds <code>pod-template-hash</code> as <code>replicaSet</code>'s label.</p>
<p>my best guess is that <code>deployment</code> creates multiple <code>replicaSets</code> and this hash ensures that the replicas do not overlap. Is that correct?</p>
| Prasath | <p>Correct, the documentation states this really well:</p>
<blockquote>
<p>The <code>pod-template-hash</code> label is added by the Deployment controller to
every ReplicaSet that a Deployment creates or adopts.</p>
<p>This label ensures that child ReplicaSets of a Deployment do not
overlap. It is generated by hashing the <code>PodTemplate</code> of the ReplicaSet
and using the resulting hash as the label value that is added to the
ReplicaSet selector, Pod template labels, and in any existing Pods
that the ReplicaSet might have.</p>
</blockquote>
<p>This is necessary for a bunch of different reasons:</p>
<ul>
<li>When you apply a new version of a Deployment, depending on how the Deployment and its probes are configured, the previous Pod / Pods could stay up until the new one / ones are Running and Ready, and only then are they gracefully terminated. So it may happen that Pods of different <code>ReplicaSets</code> (previous and current) run at the same time.</li>
<li>The Deployment history is available to be consulted, and you may also want to roll back to an older revision should the current one stop behaving correctly (for example, you changed the image that needs to be used and it just crashes with an error). Each revision has its own ReplicaSet ready to be scaled up or down as necessary, as explained in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#checking-rollout-history-of-a-deployment" rel="noreferrer">docs</a> (the commands below show how to inspect this)</li>
</ul>
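<p>If you want to see this in practice (the label selector and deployment name below are placeholders for your own objects), you can list the ReplicaSets and Pods with their labels and inspect the rollout history:</p>
<pre><code># ReplicaSets owned by the Deployment, each carrying a different pod-template-hash
kubectl get replicasets -l app=YOUR_APP_LABEL --show-labels

# The same hash appears in the Pod names and labels
kubectl get pods -l app=YOUR_APP_LABEL --show-labels

# Each revision (backed by its own ReplicaSet) shows up in the rollout history
kubectl rollout history deployment/YOUR_DEPLOYMENT
</code></pre>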
| AndD |
<p>Whats the difference between kubectl port-forwarding (which forwards port from local host to the pod in the cluster to gain access to cluster resources) and NodePort Service type ?</p>
| Rad4 | <p>You are comparing two completely different things. You should <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">compare</a> ClusterIP, NodePort, LoadBalancer and Ingress.</p>
<p>The first and most important difference is that a NodePort exposure is persistent, while with port-forwarding you always have to run <code>kubectl port-forward ...</code> and keep it active.</p>
<p>kubectl port-forward is meant for testing, labs, troubleshooting and not for long term solutions. It will create a tunnel between your machine and kubernetes so this solution will serve demands from/to your machine.</p>
<p>NodePort can give you a long-term solution, and it can serve traffic from/to anywhere inside the network your nodes reside in.</p>
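<p>As a quick illustration (service and port names below are placeholders): port-forwarding is a one-off command you keep running on your machine, while a NodePort is declared once in the Service manifest and stays exposed on every node:</p>
<pre><code># temporary tunnel from localhost:8080 to the Service; stops when the command stops
kubectl port-forward svc/my-service 8080:80
</code></pre>
<pre><code># persistent exposure: reachable on any node IP at port 30080 from inside the nodes' network
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
</code></pre>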
| Mark Watney |
<p>I am trying to convert from ClusterIP to NodePort, but when I call <a href="http://176.55.116.xxx:30100" rel="nofollow noreferrer">http://176.55.116.xxx:30100</a> via curl, it produces the error below. How can I convert the ClusterIP configuration below to a NodePort version?</p>
<p><a href="https://i.stack.imgur.com/wjkCb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjkCb.png" alt="Unable to connect to remote server when using curl to call http://176.55.116.116:30100" /></a> The configuration below is not working...</p>
<p><strong>(1) ClusterIP version (deployment_service.yml) [WORKING]</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: coturn
name: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
# replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
template:
metadata:
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
hostNetwork: true
# securityContext:
# runAsUser: 1000
# runAsGroup: 1000
containers:
- name: coturn
image: coturn/coturn
imagePullPolicy: Always
ports:
# - name: turn-udp
# containerPort: 3000
# protocol: UDP
- name: turn-port1
containerPort: 3478
hostPort: 3478
protocol: UDP
- name: turn-port2
containerPort: 3478
hostPort: 3478
protocol: TCP
# securityContext:
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# allowPrivilegeEscalation: false
args:
# - --stun-only
- -v
---
apiVersion: v1
kind: Service
metadata:
name: coturn
namespace: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
type: ClusterIP
ports:
# - port: 3000
# targetPort: 3000
# protocol: UDP
# name: turn-udp
- port: 3478
targetPort: 3478
protocol: UDP
name: turn-port1
- port: 3478
targetPort: 3478
protocol: TCP
name: turn-port2
# - port: 3478
# targetPort: 3478
# protocol: TCP
# name: turn-port3
# - port: 3479
# targetPort: 3479
# protocol: TCP
# name: turn-port4
selector:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
    app.kubernetes.io/version: 0.0.1
</code></pre>
<p><strong>(2) NodePort version (deployment_service.yml): (NOT WORKING)</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: coturn
name: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
# replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
template:
metadata:
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
hostNetwork: true
# securityContext:
# runAsUser: 1000
# runAsGroup: 1000
containers:
- name: coturn
image: coturn/coturn
imagePullPolicy: Always
ports:
# - name: turn-udp
# containerPort: 3000
# protocol: UDP
- name: turn-port1
containerPort: 3478
hostPort: 3478
protocol: UDP
- name: turn-port2
containerPort: 3478
hostPort: 3478
protocol: TCP
# securityContext:
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# allowPrivilegeEscalation: false
args:
# - --stun-only
- -v
---
apiVersion: v1
kind: Service
metadata:
name: coturn
namespace: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
type: NodePort
ports:
# - port: 3000
# targetPort: 3000
# protocol: UDP
# name: turn-udp
- port: 3478
targetPort: 3478
nodePort: 30100
protocol: UDP
name: turn-port1
- port: 3478
targetPort: 3478
nodePort: 30100
protocol: TCP
name: turn-port2
selector:
app: coturn
# - port: 3478
# targetPort: 3478
# protocol: TCP
# name: turn-port3
# - port: 3479
# targetPort: 3479
# protocol: TCP
# name: turn-port4
status:
loadBalancer: {}
</code></pre>
<p><strong>Error:</strong></p>
<pre class="lang-sh prettyprint-override"><code>yusuf@DESKTOP-QK5VI8R:~/turnserver$ kubectl apply -f deploy.yml
daemonset.apps/coturn created
error: error parsing deploy.yml: error converting YAML to JSON: yaml: line 27: did not find expected '-' indicator
</code></pre>
| Penguen | <p>You have an error in the <code>yaml</code> syntax: the selector is nested under <code>spec.ports</code>, which cannot work.</p>
<p>This is the corrected file (to check problems, use YAML validators, there are plenty online which you can toy with):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
namespace: coturn
name: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
# replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
template:
metadata:
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
hostNetwork: true
# securityContext:
# runAsUser: 1000
# runAsGroup: 1000
containers:
- name: coturn
image: coturn/coturn
imagePullPolicy: Always
ports:
# - name: turn-udp
# containerPort: 3000
# protocol: UDP
- name: turn-port1
containerPort: 3478
hostPort: 3478
protocol: UDP
- name: turn-port2
containerPort: 3478
hostPort: 3478
protocol: TCP
# securityContext:
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# allowPrivilegeEscalation: false
args:
# - --stun-only
- -v
---
apiVersion: v1
kind: Service
metadata:
name: coturn
namespace: coturn
labels:
app.kubernetes.io/name: coturn
app.kubernetes.io/instance: coturn
app.kubernetes.io/version: 0.0.1
spec:
type: NodePort
ports:
# - port: 3000
# targetPort: 3000
# protocol: UDP
# name: turn-udp
- port: 3478
targetPort: 3478
nodePort: 30100
protocol: UDP
name: turn-port1
- port: 3478
targetPort: 3478
nodePort: 30100
protocol: TCP
name: turn-port2
# - port: 3478
# targetPort: 3478
# protocol: TCP
# name: turn-port3
# - port: 3479
# targetPort: 3479
# protocol: TCP
# name: turn-port4
  selector:
    # must match the Pod labels set in the DaemonSet template
    app.kubernetes.io/name: coturn
    app.kubernetes.io/instance: coturn
    app.kubernetes.io/version: 0.0.1
</code></pre>
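<p>After applying, a quick way to verify (namespace and names taken from your manifests) is to check that the Service exists and that its endpoints are populated, which confirms the selector matches the coturn Pods:</p>
<pre><code>$ kubectl apply -f deploy.yml
$ kubectl get svc coturn -n coturn
$ kubectl get endpoints coturn -n coturn
</code></pre>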
| AndD |
<p>In the context of improving an API on Kubernetes, I am considering using a distributed hash table. My API always receives requests to an URL with this scheme:</p>
<pre><code>www.myapi.com/id
</code></pre>
<p>Reading the documentation of Istio, it seems pretty direct and easy to get what I want. Indeed, Istio handles a load balancing scheme called <code>ConsistentHashLB</code>. In such a scheme, the service destination is chosen according to a hash computed from several possible fields: HTTP header name, cookie, source IP, and an HTTP query parameter name.</p>
<p>In my case, I would need to compute the hash according to the <code>id</code> associated with the request.</p>
<p>My question is double and conditional:</p>
<ol>
<li>Is it possible to read the <code>id</code> as an HTTP parameter name?</li>
<li>If affirmative, how should I specify the rule in the manifest? (The doc that I have read is not clear enough in this regard.)</li>
</ol>
<p>If negative, any idea or trick? For example, I am considering adding the id as an HTTP header with Nginx, but this would add an additional step.</p>
| lrleon | <p>As I mentioned in the comments, if I understand correctly you're looking for ConsistentHashLB; there is <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings-ConsistentHashLB-HTTPCookie" rel="nofollow noreferrer">documentation</a> about that.</p>
<p>There is also <a href="https://github.com/istio/istio/issues/11554" rel="nofollow noreferrer">github issue</a> about that.</p>
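<p>As a sketch of the ConsistentHashLB part (the host and the header name below are assumptions, not taken from your setup), a DestinationRule can hash on an HTTP header; depending on your Istio version, the ConsistentHashLB reference linked above also lists an HTTP query parameter field you could use for the <code>id</code> directly:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapi-hash-by-id
spec:
  host: myapi.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        # requests with the same value of this header always land on the same endpoint;
        # the header could be set by the client or injected (e.g. via the EnvoyFilter below)
        httpHeaderName: x-entity-id
</code></pre>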
<hr />
<p>As for the HTTP header question, you should be able to add it either with:</p>
<ol>
<li><a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">Virtual Service</a></li>
</ol>
<p>There is <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#Headers" rel="nofollow noreferrer">Headers</a> part on the istio documentation which shows how to add/remove headers with an example.</p>
<blockquote>
<p>Message headers can be manipulated when Envoy forwards requests to, or responses from, a destination service. Header manipulation rules can be specified for a specific route destination or for all destinations. The following VirtualService adds a test header with the value true to requests that are routed to any reviews service destination. It also removes the foo response header, but only from responses coming from the v1 subset (version) of the reviews service.</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews-route
spec:
hosts:
- reviews.prod.svc.cluster.local
http:
- headers:
request:
set:
test: "true"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: v2
weight: 25
- destination:
host: reviews.prod.svc.cluster.local
subset: v1
headers:
response:
remove:
- foo
weight: 75
</code></pre>
<ol start="2">
<li><a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">Envoy Filter</a></li>
</ol>
<blockquote>
<p>EnvoyFilter provides a mechanism to customize the Envoy configuration generated by Istio Pilot. Use EnvoyFilter to modify values for certain fields, add specific filters, or even add entirely new listeners, clusters, etc.</p>
</blockquote>
<p>The EnvoyFilter below adds a request header called customer-id with the value alice to all requests going through the Istio ingress gateway. I also commented out the code for the response headers.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: lua-filter
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
"@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
inlineCode: |
function envoy_on_request(request_handle)
request_handle:headers():add("customer-id", "alice")
end
# function envoy_on_response(response_handle)
# response_handle:headers():add("customer-id", "alice")
# end
</code></pre>
| Jakub |
<p>I have a local cluster with minikube 1.6.2 running. </p>
<p>All my pods are OK (I checked the logs individually), but my 2 DBs, influx and postgres, are not accessible anymore from any URL outside their namespace.</p>
<p>I logged into both pods, and I can confirm that each db is OK, has data, and I can connect manually with my user / pass.</p>
<p>Let's take the case of influx.</p>
<pre><code>kubectl exec -it -n influx blockchain-influxdb-local-fb745b98c-vbghp -- influx -username='myuser' -password="mypass" -database="mydb" -precision=rfc3339 -execute "show measurements"
</code></pre>
<p>gives me 4 measurements, so no pb.</p>
<p>But when I try to connect to influx from another namespace using its local DNS name, I get a timeout.</p>
<pre><code>➜ ~ kubectl get svc -n influx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blockchain-influxdb-local ClusterIP 10.96.175.62 <none> 8086/TCP 19m
➜ ~ kubectl get deployments -n influx
NAME READY UP-TO-DATE AVAILABLE AGE
blockchain-influxdb-local 1/1 1 1 20m
➜ ~ kubectl get po -n influx
NAME READY STATUS RESTARTS AGE
blockchain-influxdb-local-fb745b98c-vbghp 1/1 Running 0 21m
measures-api-local-8667bb496f-4wp8d 1/1 Running 0 21m
</code></pre>
<p>Case where it works:</p>
<p>From a pod inside the same namespace:</p>
<pre><code>curl --verbose -G 'http://blockchain-influxdb-local:8086/query?db=mydb&pretty=true' --data-urlencode 'u=myuser' --data-urlencode 'p=mypass' --data-urlencode 'precision=rfc3339' --data-urlencode 'q=show measurements'
</code></pre>
<p>From a pod in another namespace (same namespace), with pod IP</p>
<pre><code>curl --verbose -G '172.17.0.5:8086/query?db=mydb&pretty=true' --data-urlencode 'u=myuser' --data-urlencode 'p=mypass' --data-urlencode 'precision=rfc3339' --data-urlencode 'q=show measurements'
</code></pre>
<p>From a pod in another namespace (same namespace), with service IP</p>
<pre><code>curl --verbose -G '10.96.175.62:8086/query?db=mydb&pretty=true' --data-urlencode 'u=myuser' --data-urlencode 'p=mypass' --data-urlencode 'precision=rfc3339' --data-urlencode 'q=show measurements'
</code></pre>
<p>But when I use local dns from outside namespace, it won't work, I get a timeout from CURL:</p>
<pre><code> curl --verbose -G 'blockchain-influxdb-local.influx.svc.cluster.local:8086/query?db=mydb&pretty=true' --data-urlencode 'u=myuser' --data-urlencode 'p=mypass' --data-urlencode 'precision=rfc3339' --data-urlencode 'q=show measurements'
</code></pre>
<p>I followed these debug steps to ensure DNS is working, and had no problem; everything works.</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<p>Inside the same pod, when I ping this URL, I get:</p>
<pre><code>root@metadata-api-local-8b4b7846b-zllb8:/go/src/gitlab.com/company/metadata_api# ping blockchain-influxdb-local.influx.svc.cluster.local
PING nc-ass-vip.sdv.fr (212.95.74.75) 56(84) bytes of data.
--- nc-ass-vip.sdv.fr ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 47ms
</code></pre>
<p>I don't know why it is making a reference to <code>nc-ass-vip.sdv.fr</code></p>
<p>I also tried to remove the local cluster and redeploy it, and to update minikube to the latest version (1.8.2); nothing worked.</p>
<p>I don't know what else to do...</p>
<p>Does anyone have an idea? It was working well for months; I don't really know what happened. :(</p>
<p>In response to @Arghya Sadhu, I post the file /etc/resolv.conf from the Influx pod:</p>
<pre><code>nameserver 10.96.0.10
search influx.svc.cluster.local svc.cluster.local cluster.local numericable.fr
options ndots:5
</code></pre>
<p><code>kubectl edit cm coredns -n kube-system</code></p>
<pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
creationTimestamp: "2020-03-19T10:59:28Z"
name: coredns
namespace: kube-system
resourceVersion: "176"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 0797c1a9-e9db-4b4c-bc8d-4c7ecca24968
</code></pre>
<p>EDIT: </p>
<pre><code>kubectl exec -ti dnsutils -- nslookup blockchain-influxdb-local.influx.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
blockchain-influxdb-local.influx.svc.cluster.local.numericable.fr canonical name = nc-ass-vip.sdv.fr.
Name: nc-ass-vip.sdv.fr
Address: 212.95.74.75
</code></pre>
| Juliatzin | <p>After digging into a few possibilities we came across the output for the following commands: </p>
<pre><code>$ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx
</code></pre>
<pre><code>$ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx.svc.cluster.local
</code></pre>
<pre><code>$ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx.svc.cluster.local.
</code></pre>
<p>Output for these commands <a href="https://gist.github.com/xoco70/ea6a99ce7acb31cb06c78e7bfffeaf78" rel="nofollow noreferrer">here</a> (adding to the end of this answer for future reference in case of link doesn't work).</p>
<p>Reviewing this output, we can see that no matter what, <code>numericable.fr</code> is always giving a positive answer to DNS queries. </p>
<p>To avoid this situation you can change the ndots entry to 1 or even 0 in your pods. </p>
<pre><code>nameserver 10.96.0.10
search influx.svc.cluster.local svc.cluster.local cluster.local numericable.fr
options ndots:0
</code></pre>
<p>From man pages we have: </p>
<blockquote>
<p>ndots:n
Sets a threshold for the number of dots which must appear
in a name given to res_query(3) (see resolver(3)) before
an initial absolute query will be made. The default for
n is 1, meaning that if there are any dots in a name, the
name will be tried first as an absolute name before any
search list elements are appended to it. The value for
this option is silently capped to 15.</p>
</blockquote>
<p>A more effective and long term solution is to add this entry in the pod/statefulset/deployment manifest as in this example: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
namespace: default
name: dns-example
spec:
containers:
- name: test
image: nginx
dnsConfig:
options:
- name: ndots
value: "0"
</code></pre>
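<p>To confirm the setting was applied, you could check the resolv.conf generated inside that pod (pod name taken from the example above):</p>
<pre><code>kubectl exec -it dns-example -- cat /etc/resolv.conf
</code></pre>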
<p>Output from commands referenced for future reference: </p>
<pre><code>➜ ~ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.default.svc.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1584628757
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 10
ADDITIONAL RECORDS:
------------
** server can't find blockchain-influxdb-local.influx.default.svc.cluster.local: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local, type = A, class = IN
ANSWERS:
-> blockchain-influxdb-local.influx.svc.cluster.local
internet address = 10.96.72.6
ttl = 10
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Name: blockchain-influxdb-local.influx.svc.cluster.local
Address: 10.96.72.6
pod "dnsutils" deleted
pod default/dnsutils terminated (Error)
➜ ~ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local.default.svc.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1584628757
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find blockchain-influxdb-local.influx.svc.cluster.local.default.svc.cluster.local: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local.svc.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1584628757
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find blockchain-influxdb-local.influx.svc.cluster.local.svc.cluster.local: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local.cluster.local, type = A, class = IN
ANSWERS:
AUTHORITY RECORDS:
-> cluster.local
origin = ns.dns.cluster.local
mail addr = hostmaster.cluster.local
serial = 1584628757
refresh = 7200
retry = 1800
expire = 86400
minimum = 30
ttl = 30
ADDITIONAL RECORDS:
------------
** server can't find blockchain-influxdb-local.influx.svc.cluster.local.cluster.local: NXDOMAIN
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local.numericable.fr, type = A, class = IN
ANSWERS:
-> blockchain-influxdb-local.influx.svc.cluster.local.numericable.fr
canonical name = nc-ass-vip.sdv.fr.
ttl = 30
-> nc-ass-vip.sdv.fr
internet address = 212.95.74.75
ttl = 30
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Non-authoritative answer:
blockchain-influxdb-local.influx.svc.cluster.local.numericable.fr canonical name = nc-ass-vip.sdv.fr.
Name: nc-ass-vip.sdv.fr
Address: 212.95.74.75
pod "dnsutils" deleted
pod default/dnsutils terminated (Error)
➜ ~ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils -- nslookup -debug blockchain-influxdb-local.influx.svc.cluster.local.
Server: 10.96.0.10
Address: 10.96.0.10#53
------------
QUESTIONS:
blockchain-influxdb-local.influx.svc.cluster.local, type = A, class = IN
ANSWERS:
-> blockchain-influxdb-local.influx.svc.cluster.local
internet address = 10.96.72.6
ttl = 30
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
------------
Name: blockchain-influxdb-local.influx.svc.cluster.local
Address: 10.96.72.6
pod "dnsutils" deleted
</code></pre>
| Mark Watney |
<p>I'm hitting a 429 error in Google Container Registry; too many images are pulled simultaneously</p>
<pre><code>Error: Status 429 trying to pull repository [...] "Quota Exceeded."
</code></pre>
<p>There is a Kubernetes cluster with multiple nodes and pods implement Kubeflow steps. In the Google <a href="https://cloud.google.com/container-registry/docs/troubleshooting" rel="nofollow noreferrer">guide</a> they suggest the following:</p>
<pre><code>To avoid hitting the fixed quota limit, you can:
- Increase the number of IP addresses talking to Container Registry.
Quotas are per IP address.
- Add retries that introduce a delay.
For example, you could use exponential backoff.
</code></pre>
<p><strong>Questions</strong>:</p>
<ul>
<li>How to increase the number of IP addresses talking to Container Registry?</li>
<li>How and where is it possible to add retries that introduce a delay? Pods are run as steps in the Kubeflow pipelines.</li>
</ul>
<p><strong>Update</strong>:</p>
<ul>
<li>It was Spinnaker which produced many requests to GCR, so it was a different application that exceeded quota and caused issues</li>
</ul>
| Konstantin | <p>It seems nothing can be done in terms of the Cloud Registry quota limits because those are <strong>fixed</strong>. According to <a href="https://cloud.google.com/container-registry/quotas" rel="nofollow noreferrer">Container Registry > Doc > Resources > Quotas and limits</a>:</p>
<blockquote>
<p>Any request sent to Container Registry has a 2 hour timeout limit.</p>
<p>The fixed rate limits per client IP address are:</p>
<ul>
<li>50,000 HTTP requests every 10 minutes</li>
<li>1,000,000 HTTP requests per day</li>
</ul>
</blockquote>
<p>Google provides support for GKE, but Kubeflow itself is not supported by Google. This question should be addressed to the Kubeflow support.</p>
<p>A Kubeflow issue with breaching quota limits and a question about how to get container pulls to use more IP addresses can be registered on the project page on GitHub:
<a href="https://github.com/kubeflow/kubeflow/issues" rel="nofollow noreferrer">https://github.com/kubeflow/kubeflow/issues</a></p>
<p>Other support options are available here:
<a href="https://www.kubeflow.org/docs/other-guides/support/" rel="nofollow noreferrer">https://www.kubeflow.org/docs/other-guides/support/</a></p>
<p>If you use the CLI, you may try to customize the Kubeflow configuration file before deployment, or split it into separate deployments, to stay within the Container Registry quota limits. This approach helps for some complicated deployments. An important thing here is to take care of dependencies. First run</p>
<pre><code>kfctl build -v -f ${CONFIG_URI}
</code></pre>
<p>make changes in the file <code>${KF_DIR}/kfctl_gcp_iap.yaml</code>, and then run</p>
<pre><code>kfctl apply -v -f ${CONFIG_URI}
</code></pre>
| mebius99 |
<p>I am running a Kubernetes cluster in a dev environment. I executed the deployment files for metrics-server; my pod is up and running without any error message. See the output here:</p>
<pre><code>root@master:~/pre-release# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-568697d856-9jshp 1/1 Running 0 10m 10.244.1.5 worker-1 <none> <none>
</code></pre>
<p>Next, when I check the API service status, it shows up as below:</p>
<pre><code>
Name: v1beta1.metrics.k8s.io
Namespace:
Labels: k8s-app=metrics-server
Annotations: <none>
API Version: apiregistration.k8s.io/v1
Kind: APIService
Metadata:
Creation Timestamp: 2021-03-29T17:32:16Z
Resource Version: 39213
UID: 201f685d-9ef5-4f0a-9749-8004d4d529f4
Spec:
Group: metrics.k8s.io
Group Priority Minimum: 100
Insecure Skip TLS Verify: true
Service:
Name: metrics-server
Namespace: pre-release
Port: 443
Version: v1beta1
Version Priority: 100
Status:
Conditions:
Last Transition Time: 2021-03-29T17:32:16Z
Message: failing or missing response from https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
</code></pre>
<p>Here is the metrics server deployment code:</p>
<pre><code>containers:
- args:
- --cert-dir=/tmp
- --secure-port=443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
- --kubelet-use-node-status-port
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
</code></pre>
<p>Here is the complete code:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: pre-release
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: pre-release
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: pre-release
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: pre-release
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
- --kubelet-use-node-status-port
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
periodSeconds: 10
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: pre-release
version: v1beta1
versionPriority: 100
</code></pre>
<p>Latest error:</p>
<pre><code>I0330 09:02:31.705767 1 secure_serving.go:116] Serving securely on [::]:4443
E0330 09:04:01.718135 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:worker-2: unable to fetch metrics from Kubelet worker-2 (worker-2): Get https://worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup worker-2 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:master: unable to fetch metrics from Kubelet master (master): Get https://master:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master on 10.96.0.10:53: read udp 10.244.2.23:41419->10.96.0.10:53: i/o timeout, unable to fully scrape metrics from source kubelet_summary:worker-1: unable to fetch metrics from Kubelet worker-1 (worker-1): Get https://worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: i/o timeout]
</code></pre>
<p>Could someone please help me to fix the issue.</p>
| Gowmi | <p>As I mentioned in the comment section, this may be fixed by adding <code>hostNetwork:true</code> to the metrics-server Deployment.</p>
<p>According to kubernetes <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.</p>
</blockquote>
<pre><code>spec:
hostNetwork: true <---
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
</code></pre>
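<p>After editing the Deployment, you could restart it and check that the metrics API starts answering (deployment and namespace names taken from your manifests):</p>
<pre><code>kubectl -n pre-release rollout restart deployment metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
</code></pre>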
<hr />
<p>There is an <a href="https://stackoverflow.com/questions/62138734/metric-server-not-working-unable-to-handle-the-request-get-nodes-metrics-k8s/66411937#66411937">example</a> with information on how you can edit your deployment to include <code>hostNetwork:true</code> in your metrics-server deployment.</p>
<p>Also related <a href="https://github.com/kubernetes-sigs/metrics-server/issues/141#issuecomment-480878691" rel="nofollow noreferrer">github issue</a>.</p>
| Jakub |
<p>I'm taking my first steps with Kubernetes, and I'm stuck on a problem with Windows paths.
I defined a .yaml where for a PersistentVolume I have (file not complete, only the part relevant to the problem):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-pv
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: local-storage
local:
path: /c/temp/data/db
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
</code></pre>
<p>I'm using the latest minikube (1.8.2) with updated kubectl on Windows 10 Pro updated, Docker Community latest version.</p>
<p>I searched a lot, because every sample for Kubernetes refers to paths for Unix/macOS.
I found (I don't remember where...) that a valid path for Windows should be the one in the sample:
path: /c/temp/data/db</p>
<p>But it does not work: Docker is switched on Linux containers, c: shared, Kubernetes in Docker activated, with "describe pod " I get</p>
<p>"didn't find available persistent volumes to bind" </p>
<p>Obviously I tried another disk (shared in Docker), tried "/c/temp/data/db" (that is, surrounding it with quotes), tried to give all permissions to Everyone on this path, /c/Users...nothing</p>
| Roberto Alessi | <blockquote>
<p>I searched a lot because of every sample for Kubernetes is referring
to paths for Unix/Macos. I found (don't remember where...) that a
valid path for Windows should be the one in the sample path:
/c/temp/data/db</p>
</blockquote>
<p>With minikube, you can't mount your local directory into a PersistentVolume as you are trying. </p>
<p>Minikube creates a virtual machine with Linux and your cluster is running inside this Linux VM. That's why it can't see the files on your Windows machine.</p>
<p>To be able to access your local directory from your minikube cluster, you need to mount it into minikube:</p>
<p>You have a few options to achieve what you need. The best and easiest way is to start your minikube with the option <code>--mount</code>. This option will mount C:/Users/ by default. </p>
<p>Example: </p>
<pre><code>PS C:\WINDOWS\system32> minikube delete; minikube.exe start --vm-driver=virtualbox --mount
</code></pre>
<p>If you ssh into minikube Linux VM: </p>
<pre><code>PS C:\WINDOWS\system32> minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$
$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.9G 489M 1.5G 26% /
devtmpfs 987M 0 987M 0% /dev
tmpfs 1.1G 0 1.1G 0% /dev/shm
tmpfs 1.1G 17M 1.1G 2% /run
tmpfs 1.1G 0 1.1G 0% /sys/fs/cgroup
tmpfs 1.1G 4.0K 1.1G 1% /tmp
/dev/sda1 17G 1.3G 15G 8% /mnt/sda1
/c/Users 181G 106G 76G 59% /c/Users
</code></pre>
<pre><code>$ cd /c/Users/
$ pwd
/c/Users
</code></pre>
<p>If you want/need to mount any directory other than C:/Users, take a look at <a href="https://minikube.sigs.k8s.io/docs/reference/commands/mount/" rel="nofollow noreferrer">minikube mount</a> and/or <a href="https://minikube.sigs.k8s.io/docs/reference/commands/start/" rel="nofollow noreferrer">--mount-string</a>. You may face some issues with these options depending on your vm-driver. </p>
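<p>For example, to mount the directory from your question (the paths are just an illustration, adjust them as needed), you could run something like this in a separate terminal and keep it running:</p>
<pre><code>minikube mount C:/temp/data/db:/data/db
</code></pre>
<p>The PersistentVolume path would then be <code>/data/db</code> inside the minikube VM.</p>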
<p>After mounting your directory you can use it in your PersistentVolume by referring to the Linux path, which based on my examples would be /c/Users/myname/myapp/db.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-pv
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: local-storage
local:
path: /c/Users/myname/myapp/db
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
</code></pre>
<p>Please, let me know if my answer helped you to solve your problem. </p>
| Mark Watney |
<p>I have two Cloud Armor policies.
Policy "A" is allowing our office network to connect to specific services.
Policy "B" is allowing a customer to a service to consume an API.</p>
<p>I cannot add the rule of Policy "B" to Policy "A", because then that rule (the customer's IP) would have access to all services that Policy "A" is applied to. Therefore I separated it off into the standalone Policy "B".</p>
<p>So I now have two different BackendConfig resources: one referencing Policy "A" and one referencing Policy "B".</p>
<p>The next step I took was to somehow apply <strong>both</strong> Cloud Armor backend configs to one specific service. The office network should have access to that service, <strong>plus</strong> the IP of our customer.
This is how I thought it might work:</p>
<pre><code>metadata:
annotations:
beta.cloud.google.com/backend-config: '{"default":{"policy-a-cloud-armor-backend-config","policy-b-cloud-armor-backend-config"}}'
</code></pre>
<p>Unfortunately this does <strong>not</strong> work. In the policy panel in GCP the following warning appears:</p>
<pre><code>This policy has not been applied to any targets yet, so the policy will not affect any traffic.
</code></pre>
<p>Any idea how I can make this exclusive access possible?</p>
| xetra11 | <p>This notification <code>This policy has not been applied to any targets yet, so the policy will not affect any traffic</code> indicates that the security policy is not bound to any target via a backend configuration and annotations. </p>
<p>The annotation syntax does not look consistent with the documentation: the field <code>ports</code> is missing. It should be like this: </p>
<pre><code>{"ports": {"http":"config-http", "http2" :"config-http2"}, "default": "config-default"}
</code></pre>
<p>See <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig#associating_a_service_port_with_a_backendconfig" rel="nofollow noreferrer">Associating a Service port with a BackendConfig</a> </p>
<p>It is hardly possible to apply two different security policies to the same service/port. </p>
<p>As a workaround, you can try an additional service with a special security policy that includes rules for both clients. </p>
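<p>A rough sketch of that workaround (all names below are placeholders, and the referenced Cloud Armor policy is assumed to contain both the office rules and the customer IP rule): create a dedicated BackendConfig that references the combined policy and attach it to the extra Service only, using the documented annotation syntax:</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: policy-ab-backend-config
spec:
  securityPolicy:
    # Cloud Armor policy containing both the office rules and the customer IP rule
    name: policy-a-plus-customer
---
apiVersion: v1
kind: Service
metadata:
  name: my-api-for-customer
  annotations:
    cloud.google.com/backend-config: '{"ports": {"http": "policy-ab-backend-config"}}'
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
  - name: http
    port: 80
    targetPort: 8080
</code></pre>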
| mebius99 |
<p>I did a performance test on my application.
The CPU usage was 30% while testing (the test server has 4 cores),
so I thought one core had 30/4 = 7.5% usage.</p>
<p>I want a container in k8s to keep the CPU usage under 30%,
so I decided that the CPU limit should be 250m + 50m (extra core).</p>
<p>I am wondering if this way is right? Otherwise, is there a better way to decide the CPU limit?</p>
| seo | <p>To answer your question, <code>I am wondering if this way is right?</code>, yes, that's the right way. You can also use a tool called <a href="https://github.com/FairwindsOps/goldilocks" rel="nofollow noreferrer">goldilocks</a>, a kubernetes controller that collects data about running pods and provides recommendations on how to set resource requests and limits. It could help you identify a starting point for resource requests and limits.</p>
<hr />
<p>I'm not sure if your calculations are correct, as CPU resources are defined in millicores. If your container needs two full cores to run, you would put the value “2000m”. If your container only needs ¼ of a core, you would put a value of “250m”.</p>
<p>So if you want 30% of 1 of your cores then <code>300m</code> is the correct amount. But if you want 30% of your 4 cores, then I would say <code>1200m</code> is the correct amount.</p>
<p>There is kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="nofollow noreferrer">documentation</a> about that.</p>
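<p>Just as an illustration of how a 30%-of-one-core limit could be written, keeping the memory value from your question (the request values here are an assumption added for completeness, pick what fits your app):</p>
<pre><code>resources:
  requests:
    cpu: "300m"
    memory: "209715200"
  limits:
    cpu: "300m"
    memory: "209715200"
</code></pre>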
<hr />
<p>You could also consider using <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Vertical Pod Autoscaler</a>.</p>
<blockquote>
<p>Vertical Pod Autoscaler (VPA) <strong>frees the users from necessity of setting up-to-date resource limits and requests</strong> for the containers in their pods. When configured, <strong>it will set the requests automatically based on usage</strong> and thus allow proper scheduling onto nodes so that appropriate resource amount is available for each pod. <strong>It will also maintain ratios between limits and requests that were specified in initial containers configuration.</strong></p>
</blockquote>
<hr />
<p>I would also recommend to read below tutorials about limits and requests:</p>
<ul>
<li><a href="https://learnk8s.io/setting-cpu-memory-limits-requests" rel="nofollow noreferrer">Setting the right requests and limits in Kubernetes</a></li>
<li><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Kubernetes best practices: Resource requests and limits</a></li>
</ul>
| Jakub |
<p>We have a microservices-ish architecture currently running on a VM. Each of our applications is deployed as DLLs with no executable. To run them, we spawn a new instance of our activator, passing the path of the application as an argument. The process activator injects behaviors into the application via DI, such as proxies and service discovery logic.</p>
<p>This has the benefit that the applications and the process activator can be developed, managed and deployed independently of one another. If we have an update for the activator, we only need to deploy it and restart all applications for our changes to take effect; No need to re-deploy an application, much less to rebuild it.</p>
<p>As we are now developing a plan to migrate our architecture to Kubernetes, however, we've hit a roadblock because of this design. We haven't been able to find a way to replicate it. We've thought of doing it by simply deploying the two together and setting the activator as the entrypoint; however, that would mean that anytime we update the activator, all applications' images would have to be updated as well, which completely defeats the purpose of this design.</p>
<p>We've also thought of deploying them as two different containers and somehow making the activator read the contents of the application container and then load its DLLs, but we don't know if it's possible for a container to read the contents of another.</p>
<p>In the end, is there a way to make this design work in Kubernetes?</p>
| andre_ss6 | <p>If the design requires the following:</p>
<ul>
<li>Inject files into the main container to change its behaviour</li>
</ul>
<p>Then a viable choice is to use init containers. Init containers can perform operations before the main container (or containers) start; for example, they could copy some files for the main container to use.</p>
<p>You could have this activator as the main container and each of the various apps as a different init container which contains the DLLs of that app.</p>
<p>When an init container starts, it copies the DLLs of that app onto an ephemeral volume (aka <code>emptyDir</code>) to make them available to the main container. Then the activator container starts, finds the DLLs at a known path, and can do whatever it wants with them.</p>
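<p>A minimal sketch of that idea (image names, paths and the app name are placeholders, not something you already have):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      volumes:
      - name: dlls
        emptyDir: {}
      initContainers:
      - name: app-a-dlls
        image: registry.example.com/app-a-dlls:1.0.0
        # copy the application DLLs into the shared ephemeral volume
        command: ["sh", "-c", "cp -r /dlls/. /shared/"]
        volumeMounts:
        - name: dlls
          mountPath: /shared
      containers:
      - name: activator
        image: registry.example.com/activator:2.0.0
        # the activator loads whatever DLLs it finds at /apps
        args: ["/apps"]
        volumeMounts:
        - name: dlls
          mountPath: /apps
</code></pre>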
<p>This way:</p>
<ul>
<li>If you need to update the activator, you need to update the main container image (bump its tag) and then update the definitions of all the Deployments / StatefulSets to use the new image.</li>
<li>If you need to update one of the apps, you need to update its single init container image (bump its tag) and then update the definition of the Deployment / StatefulSet of that particular app.</li>
</ul>
<p>This strategy works perfectly fine (I think) if you are ok with the idea that you'll still need to define all the apps in the cluster. If you have 3 apps, A, B and C, you'll need to define 3 Deployments (or StatefulSets if the apps are stateful for some reason) which use the right init containers.</p>
<p>If the applications are mostly equal and only a few things change, like the DLLs to inject into the activator, you could think of using Helm to define your resources on the cluster, as it lets you template the resources and customize them with very little overhead.</p>
<hr />
<p>Some documentation:</p>
<p>Init Containers: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p>Example of making a copy of files between Init container and Main container: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/</a></p>
<p>HELM: <a href="https://helm.sh/docs/" rel="nofollow noreferrer">https://helm.sh/docs/</a></p>
| AndD |
<p>Totally new to GCP, trying to deploy my first kubernetes cluster, and getting the error below.</p>
<p>(1) insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at <a href="https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=test-255811" rel="noreferrer">https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=test-255811</a></p>
<p>I have already requested a quota increase, but I want to know what this "8.0" limit means. How many IP addresses are available in "1.0"? And where can I reduce the size of my network? I am using the "default" Network and the default "/20" Node Subnet options. </p>
| user553076 | <p>The easy way to check quota usage for the current project is to go to </p>
<p>GCP Navigation => IAM & admin => Quotas, </p>
<p>then sort data by Current Usage. </p>
<p>There are regional hard limits that you could have exceeded (<code>In-use IP addresses</code> in your case). </p>
<p>The numbers in the error message are just decimal values in the format the <code>gcloud</code> and API commonly use for quotas. You might try the following commands to see how the quota values are actually displayed: </p>
<pre><code>$ gcloud compute project-info describe --project project-name
$ gcloud compute regions describe region-name
</code></pre>
<p>In your particular case 9 addresses were requested, and the deployment was short of 1 address because of the quota of 8 addresses. </p>
<p>Google Cloud documentation provides viable explanation of quotas: </p>
<p><a href="https://cloud.google.com/compute/quotas" rel="noreferrer">Resource quotas</a></p>
<p><a href="https://cloud.google.com/docs/quota" rel="noreferrer">Working with Quotas</a></p>
| mebius99 |
<p>Just studying the core of K8S on a local machine (Linux Mint 20.2).</p>
<p>Created one node cluster locally with:</p>
<blockquote>
<p>k3d cluster create mycluster</p>
</blockquote>
<p>And now I want to run a spring boot application in a container.<br />
I built a local image:</p>
<blockquote>
<p>library:0.1.0</p>
</blockquote>
<p>And here is snippet from <code>Deployment.yml</code>:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 40
containers:
- name: 'library'
image: library:0.1.0
imagePullPolicy: IfNotPresent
</code></pre>
<p>Despite the fact that image is already built:</p>
<pre><code>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
library 0.1.0 254c13416f46 About an hour ago 462MB
</code></pre>
<p>Starting the container fails:</p>
<pre><code>pod/library-867dfb64db-vndtj Pulling image "library:0.1.0"
pod/library-867dfb64db-vndtj Failed to pull image "library:0.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "library:0.1.0": failed to resolve reference "library:0.1.0": failed to do request: Head "https://...com/v2/library/manifests/0.1.0": x509: certificate signed by unknown authority
pod/library-867dfb64db-vndtj Error: ErrImagePull
pod/library-867dfb64db-vndtj Error: ImagePullBackOff
pod/library-867dfb64db-vndtj Back-off pulling image "library:0.1.0"
</code></pre>
<p><em><strong>How to resolve local images visibility for k3d cluster?</strong></em></p>
<hr />
<p>Solution:</p>
<p>Update the <code>Deployment.yml</code>:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 40
containers:
- name: 'library-xp'
image: xpinjection/library:0.1.0
imagePullPolicy: Never
</code></pre>
<p>And import the image to cluster:</p>
<blockquote>
<p>k3d image import xpinjection/library:0.1.0 -c mycluster</p>
</blockquote>
| catch23 | <p>If you don't want to use a docker registry, you have to import the locally built image into the k3d cluster:</p>
<pre><code>k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]
</code></pre>
<p>But don't forget to configure in your deployment:</p>
<pre><code>imagePullPolicy: Never
</code></pre>
| Bguess |
<p>I want to run a shell script containing ssh commands using a cron job in kubernetes</p>
<p><strong>test.sh</strong></p>
<pre><code>#!/bin/bash
echo copying files from edge node to batch pods;
ssh -q userid@<hostname>"kubectl config use-context testnamespace";
ssh -q userid@<hostname>"kubectl scale statefulsets testpod --replicas=1";
</code></pre>
<p>When I execute it manually by going into the pod, it throws the error <code>"ssh command not found"</code>, and triggering the job throws a <code>permission denied</code> message.</p>
<p>Can anyone help me resolve this issue?</p>
| Sanjay Chintha | <p>This message (<code>ssh command not found</code>) indicates a lack of an ssh client in your pod. In the comments you mention docker, so I assume you are using your own docker image. </p>
<p>To install SSH in your docker container you have to run apt-get or yum or any other package manager according to your Linux distribution, and this needs to be expressed in your Dockerfile. </p>
<p>Here is an example of how to achieve this: </p>
<pre><code># A basic apache server. To use either add or bind mount content under /var/www
FROM ubuntu:12.04
MAINTAINER Kimbro Staken version: 0.1
RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
</code></pre>
<p>In this example, we are installing apache on a ubuntu image. In your scenario you need to run something similar to this: </p>
<pre><code>RUN apt-get update && apt-get install -y openssh-client && apt-get clean && rm -rf /var/lib/apt/lists/*
</code></pre>
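<p>For your case, a minimal Dockerfile sketch could look like the following; the base image and script name are just assumptions, adapt them to your setup:</p>
<pre><code>FROM ubuntu:20.04
# install the ssh client so the script can reach the edge node
RUN apt-get update && apt-get install -y openssh-client && apt-get clean && rm -rf /var/lib/apt/lists/*
COPY test.sh /test.sh
# make the script executable to avoid permission errors when the job runs it
RUN chmod +x /test.sh
CMD ["/test.sh"]
</code></pre>
<p>Making the script executable (<code>chmod +x</code>) may also help with the <code>permission denied</code> message you saw when triggering the job.</p>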
| Mark Watney |
<p>I have the following deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gofirst
labels:
app: gofirst
spec:
selector:
matchLabels:
app: gofirst
template:
metadata:
labels:
app: gofirst
spec:
restartPolicy: Always
containers:
- name: gofirst
image: lbvenkatesh/gofirst:0.0.5
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 8080
livenessProbe:
httpGet:
path: /health
port: http
httpHeaders:
- name: "X-Health-Check"
value: "1"
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: http
httpHeaders:
- name: "X-Health-Check"
value: "1"
initialDelaySeconds: 30
periodSeconds: 10
</code></pre>
<p>and my service yaml is this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: gofirst
labels:
app: gofirst
spec:
publishNotReadyAddresses: true
type: NodePort
selector:
app: gofirst
ports:
- port: 8080
targetPort: http
name: http
</code></pre>
<p>"gofirst" is a simple web application written in Golang Gin.
Here is the dockerFile of the same:</p>
<pre><code>FROM golang:latest
LABEL MAINTAINER='Venkatesh Laguduva <[email protected]>'
RUN mkdir /app
ADD . /app/
RUN apt -y update && apt -y install git
RUN go get github.com/gin-gonic/gin
RUN go get -u github.com/RaMin0/gin-health-check
WORKDIR /app
RUN go build -o main .
ARG verArg="0.0.1"
ENV VERSION=$verArg
ENV PORT=8080
ENV GIN_MODE=release
EXPOSE 8080
CMD ["/app/main"]
</code></pre>
<p>I have deployed this application in Minikube and when I try to describe this pods, I am seeing these events:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m (x2 over 10m) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
Normal Scheduled 10m default-scheduler Successfully assigned default/gofirst-95fc8668c-6r4qc to m01
Normal Pulling 10m kubelet, m01 Pulling image "lbvenkatesh/gofirst:0.0.5"
Normal Pulled 10m kubelet, m01 Successfully pulled image "lbvenkatesh/gofirst:0.0.5"
Normal Killing 8m13s (x2 over 9m13s) kubelet, m01 Container gofirst failed liveness probe, will be restarted
Normal Pulled 8m13s (x2 over 9m12s) kubelet, m01 Container image "lbvenkatesh/gofirst:0.0.5" already present on machine
Normal Created 8m12s (x3 over 10m) kubelet, m01 Created container gofirst
Normal Started 8m12s (x3 over 10m) kubelet, m01 Started container gofirst
Warning Unhealthy 7m33s (x7 over 9m33s) kubelet, m01 Liveness probe failed: Get http://172.17.0.4:8080/health: dial tcp 172.17.0.4:8080: connect: connection refused
Warning Unhealthy 5m35s (x12 over 9m25s) kubelet, m01 Readiness probe failed: Get http://172.17.0.4:8080/health: dial tcp 172.17.0.4:8080: connect: connection refused
Warning BackOff 31s (x17 over 4m13s) kubelet, m01 Back-off restarting failed container
</code></pre>
<p>I tried the sample container "hello-world" and it worked well when I did "minikube service hello-world", but when I tried the same with "minikube service gofirst", I got a connection error in the browser.</p>
<p>I must be getting something relatively simple wrong, but I am unable to locate the error. Please go through my yaml and Dockerfile and let me know if I am making any mistake.</p>
| Venkatesh Laguduva | <p>I've reproduced your scenario and faced the same issues you have. So I decided to remove the liveness and readiness probes to be able to log in to the pod and investigate it. </p>
<p>Here is the yaml I used: </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gofirst
labels:
app: gofirst
spec:
selector:
matchLabels:
app: gofirst
template:
metadata:
labels:
app: gofirst
spec:
restartPolicy: Always
containers:
- name: gofirst
image: lbvenkatesh/gofirst:0.0.5
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 8080
</code></pre>
<p>I logged into the pod to check if the application is listening on the port you are trying to test: </p>
<pre><code>kubectl exec -ti gofirst-65cfc7556-bbdcg -- bash
</code></pre>
<p>Then I installed netstat: </p>
<pre><code># apt update
# apt install net-tools
</code></pre>
<p>Checked if the application is running: </p>
<pre><code># ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:06 ? 00:00:00 /app/main
root 9 0 0 10:06 pts/0 00:00:00 sh
root 15 9 0 10:07 pts/0 00:00:00 ps -ef
</code></pre>
<p>And finally checked if port 8080 is listening:</p>
<pre><code># netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN
tcp 0 0 10.28.0.9:56106 151.101.184.204:80 TIME_WAIT
tcp 0 0 10.28.0.9:56130 151.101.184.204:80 TIME_WAIT
tcp 0 0 10.28.0.9:56104 151.101.184.204:80 TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
</code></pre>
<p>As we can see, the application is listening for localhost connections only, not on all interfaces. The expected output should be <code>0.0.0.0:8080</code>.</p>
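<p>One way to fix that on the application side - shown here only as a rough sketch, since I don't have your Go code - is to make Gin bind to all interfaces instead of the loopback address:</p>
<pre><code>package main

import "github.com/gin-gonic/gin"

func main() {
	r := gin.Default()
	r.GET("/health", func(c *gin.Context) {
		c.JSON(200, gin.H{"status": "UP"})
	})
	// bind to 0.0.0.0 so the kubelet can reach the probe endpoint from outside the pod's loopback
	r.Run("0.0.0.0:8080")
}
</code></pre>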
<p>Hope it helps you to solve the problem. </p>
| Mark Watney |
<p>I want to have a tcpdump of a node port (like 30034) of a NodePort service pointing to a pod in a Kubernetes cluster.
This node port service is mapped inside an ingress resource under paths. When I hit the ingress using the host configured inside the ingress, I get a response from the target pod but tcpdump doesn't trace anything. (Ingress-->NodePortService-> NodePort--[tcpdump]->pod)</p>
<p>I have tried with: sudo tcpdump -i any port 30034 -w tcp-dump.pcap
but it's not capturing anything.</p>
<p>Could you please advise here? What is the reason that tcpdump is not capturing anything when traffic comes via the ingress controller?
However, if I hit the node directly as <a href="https://node-ip:30034:/service" rel="nofollow noreferrer">https://node-ip:30034:/service</a>, I do see the traffic in tcpdump.</p>
<p>Thanks.</p>
| Jaraws | <p>Running tcpdump effectively in Kubernetes is a bit tricky and requires you to create a sidecar for your pod. What you are facing is actually the expected behavior. </p>
<blockquote>
<p>run good old stuff like TCPdump or ngrep would not yield much
interesting information, because you link directly to the bridge
network or overlay in a default scenario.</p>
<p>The good news is, that you can link your TCPdump container to the host
network or even better, to the container network stack.
Source: <a href="https://medium.com/@xxradar/how-to-tcpdump-effectively-in-docker-2ed0a09b5406" rel="nofollow noreferrer">How to TCPdump effectively in Docker</a></p>
</blockquote>
<p>The thing is that you have two entry points: one is nodeIP:NodePort, the second is ClusterIP:Port. Both point to the same set of endpoint randomization rules set up by kubernetes in iptables. </p>
<p>Since this can happen on any node, it's hard to configure tcpdump to catch all the interesting traffic in just one point.</p>
<p>The best tool I know for such kind of analysis is Istio, but it works mostly for HTTP traffic.</p>
<p>Considering this, the best solution is to use a tcpdumper sidecar for each pod behind the service.</p>
<p>Let's go through an example of how to achieve this. </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web-app
spec:
replicas: 2
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web-app
image: nginx
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
- name: tcpdumper
image: docker.io/dockersec/tcpdump
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: web-svc
namespace: default
spec:
ports:
- nodePort: 30002
port: 80
protocol: TCP
targetPort: 80
selector:
app: web
type: NodePort
</code></pre>
<p>In this manifest we can notice three important things: we have an nginx container, a tcpdumper container as a sidecar, and a service defined as NodePort. </p>
<p>To access our sidecar, you have to run the following command: </p>
<pre><code>$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper
</code></pre>
<p>Example: </p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13d
web-svc NodePort 10.108.142.180 <none> 80:30002/TCP 9d
</code></pre>
<pre><code>$ curl localhost:30002
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
</code></pre>
<pre><code>$ kubectl attach -it web-app-db7f7c59-d4xm6 -c tcpdumper
Unable to use a TTY - container tcpdumper did not allocate one
If you don't see a command prompt, try pressing enter.
> web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 300957902 ecr 300958061], length 77: HTTP: GET / HTTP/1.1
12:03:16.884512 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [.], ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 0
12:03:16.884651 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 239: HTTP: HTTP/1.1 200 OK
12:03:16.884705 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 300958061 ecr 300957902], length 612: HTTP
12:03:16.884743 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS val 300957902 ecr 300958061], length 0
12:03:16.884785 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 300957902 ecr 300958061], length 0
12:03:16.889312 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 300957903 ecr 300958061], length 0
12:03:16.889351 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.1336: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 300958062 ecr 300957903], length 0
12:03:16.889535 IP 192.168.250.64.1336 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 300957903 ecr 300958062], length 0
12:08:10.336319 IP6 fe80::ecee:eeff:feee:eeee > ff02::2: ICMP6, router solicitation, length 16
12:15:47.717966 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [S], seq 3314747302, win 28400, options [mss 1420,sackOK,TS val 301145611 ecr 0,nop,wscale 7], length 0
12:15:47.717993 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [S.], seq 2539474977, ack 3314747303, win 27760, options [mss 1400,sackOK,TS val 301145769 ecr 301145611,nop,wscale 7], length 0
12:15:47.718162 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.718164 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [P.], seq 1:78, ack 1, win 222, options [nop,nop,TS val 301145611 ecr 301145769], length 77: HTTP: GET / HTTP/1.1
12:15:47.718191 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [.], ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 0
12:15:47.718339 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [P.], seq 1:240, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 239: HTTP: HTTP/1.1 200 OK
12:15:47.718403 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [P.], seq 240:852, ack 78, win 217, options [nop,nop,TS val 301145769 ecr 301145611], length 612: HTTP
12:15:47.718451 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 240, win 231, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.718489 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 852, win 240, options [nop,nop,TS val 301145611 ecr 301145769], length 0
12:15:47.723049 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [F.], seq 78, ack 852, win 240, options [nop,nop,TS val 301145612 ecr 301145769], length 0
12:15:47.723093 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.2856: Flags [F.], seq 852, ack 79, win 217, options [nop,nop,TS val 301145770 ecr 301145612], length 0
12:15:47.723243 IP 192.168.250.64.2856 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 853, win 240, options [nop,nop,TS val 301145612 ecr 301145770], length 0
12:15:50.493995 IP 192.168.250.64.31340 > web-app-db7f7c59-d4xm6.80: Flags [S], seq 124258064, win 28400, options [mss 1420,sackOK,TS val 301146305 ecr 0,nop,wscale 7], length 0
12:15:50.494022 IP web-app-db7f7c59-d4xm6.80 > 192.168.250.64.31340: Flags [S.], seq 3544403648, ack 124258065, win 27760, options [mss 1400,sackOK,TS val 301146463 ecr 301146305,nop,wscale 7], length 0
12:15:50.494189 IP 192.168.250.64.31340 > web-app-db7f7c59-d4xm6.80: Flags [.], ack 1, win 222, options
</code></pre>
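<p>If you prefer to end up with a pcap file, as in your original command, you could also run a one-off capture inside the sidecar and copy it out afterwards. The pod name, filter and file names below are illustrative only, and <code>kubectl cp</code> relies on tar being available in the container image:</p>
<pre><code># capture 100 packets on port 80 to a file inside the sidecar
$ kubectl exec web-app-db7f7c59-d4xm6 -c tcpdumper -- tcpdump -i any -c 100 port 80 -w /tmp/capture.pcap
# copy the capture to your workstation for analysis in Wireshark
$ kubectl cp web-app-db7f7c59-d4xm6:/tmp/capture.pcap ./capture.pcap -c tcpdumper
</code></pre>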
<p>You can also take a look at the <a href="https://github.com/eldadru/ksniff" rel="nofollow noreferrer">ksniff</a> tool, a kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod in your Kubernetes cluster.</p>
| Mark Watney |
<p>I'm using microk8s with the default ingress addon.</p>
<pre class="lang-sh prettyprint-override"><code>$ microk8s enable ingress
Addon ingress is already enabled.
$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
</code></pre>
<p>My service is running smoothly when I access it without ingress routing.</p>
<pre class="lang-sh prettyprint-override"><code>$ curl 10.152.183.197 #the service binded to 10.152.183.197
<html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="robots" content="NONE,NOARCHIVE">
....
</head>
</html>
</code></pre>
<p>But I cannot get the ingress working properly on either localhost or a remote host; it always returns 404.</p>
<pre class="lang-sh prettyprint-override"><code>$ curl 127.0.0.1 -H "Host: projects.xtech1999.com" #executed in microk8s host node
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
$ curl projects.xtech1999.com #executed in remote machine
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
</code></pre>
<p>I confirmed that the DNS record (projects.xtech1999.com) points to the IP address correctly. My configuration is below:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 10h
pgsql-srv NodePort 10.152.183.239 <none> 5432:32157/TCP 9h
projects-srv NodePort 10.152.183.197 <none> 80:31436/TCP 9h
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress <none> projects.xtech1999.com 80 12m
$ kubectl describe ing ingress
Name: ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
projects.xtech1999.com
/ projects-srv:80 (10.1.166.145:80)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
$ cat 9999-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: projects.xtech1999.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: projects-srv
port:
number: 80
$ netstat -lnp | grep 80
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
unix 2 [ ACC ] STREAM LISTENING 28036 - /var/snap/microk8s/2094/var/kubernetes/backend/kine.sock
unix 2 [ ACC ] STREAM LISTENING 48509 - @/containerd-shim/86b04e625b27cda8731daf9c4b25b8a301cc659f41bb0957a0124780fd428557.sock@
unix 2 [ ACC ] STREAM LISTENING 52619 - @/containerd-shim/a53801f4268bd9e9a2bd1f0a2e7076ead63759ba23d1baaa193347f2abff54ea.sock@
unix 2 [ ACC ] STREAM LISTENING 36064 - @/containerd-shim/d805d091f24793a8452aa1699a67cf733884e51a6b8602290e522e088deb7fec.sock@
unix 2 [ ACC ] STREAM LISTENING 34750 - @/containerd-shim/849069ea6f0aac1707e1046f6b2ed65ba8d804b19ca157f538c279f323f8ad27.sock@
unix 2 [ ACC ] STREAM LISTENING 206657 - @/containerd-shim/a26df014b6fc6001235480215ec743c82b83aabe3c1e69442c37106dd097a12d.sock@
unix 2 [ ACC ] STREAM LISTENING 39444 - @/containerd-shim/97743748a84e4cbbda28e93b4d215b3adf514fa0fb4801790f567b1a63e6d92a.sock@
unix 2 [ ACC ] STREAM LISTENING 47838 - @/containerd-shim/e142dd0724d17d6da61c580bbd599dc246ef806d7d3b09d5791484c8fb6f6f93.sock@
unix 2 [ ACC ] STREAM LISTENING 38340 - @/containerd-shim/1fcc48ca77e6d7b138008c2a215ff2845e4e48d63e50be16285ae1daa003ea55.sock@
$ sudo ufw status
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
</code></pre>
<p>What's going on? I guess the ingress cannot route properly.</p>
| KensonMan | <p>From your Ingress definition, I see you set <code>nginx</code> as the ingress class:</p>
<pre><code>$ cat 9999-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: projects.xtech1999.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: projects-srv
port:
number: 80
</code></pre>
<p>Now, it's unfortunately not well documented on the <code>microk8s</code> Ingress add-on page of the documentation, but when you enable the Ingress add-on, it creates an Ingress Controller with a default class called <code>public</code> (you can see the default definition here <a href="https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/ingress.yaml" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/ingress.yaml</a> )</p>
<p>You can confirm that your ingress class is called <code>public</code> with <code>kubectl get IngressClass</code></p>
<p>Modify your ingress definition to use the default ingress class and your setup <strong>should</strong> work:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: public
spec:
rules:
- host: projects.xtech1999.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: projects-srv
port:
number: 80
</code></pre>
<hr />
<p>If you are curious about the purpose of an Ingress Class: it is mostly used to associate an Ingress resource definition with an ingress controller. On a Kubernetes cluster there may be more than one Ingress Controller, each with its own ingress class, and Ingress resources are associated to one of them by matching the requested ingress class.</p>
<p>If an ingress class is not specified, the Ingress uses the default one, which means that the <code>IngressClass</code> annotated to be the default one of the cluster is automatically used.</p>
<p>For more info, check the documentation here ( <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class</a> )</p>
| AndD |
<p>I have successfully deployed a stream using spring dataflow on EKS, but I need to debug one application of the stream.</p>
<p>I have set up <code>spring.cloud.deployer.kubernetes.environment-variables: JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000'</code> in the application I want to debug, and the application starts and is listening on that port.</p>
<p>Is there any property to tell kubernetes to map this port and make it accessible?</p>
<p>Thank you.</p>
| Juan | <p>Try this:</p>
<p><a href="https://i.stack.imgur.com/yWE7v.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yWE7v.jpg" alt="enter image description here" /></a></p>
<p>And then try a kubectl port-forward:</p>
<pre><code>kubectl port-forward service/YOUR_SERVICE_NAME HOST_PORT:SERVICE_PORT
</code></pre>
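<p>For example, to reach the JDWP port 8000 you configured, something along these lines should work; if the debug port is not exposed on the service, you can port-forward directly to the pod instead (the names below are placeholders for whatever Data Flow created for your app):</p>
<pre><code>kubectl port-forward service/my-stream-app 8000:8000
# or, targeting the pod directly
kubectl port-forward pod/my-stream-app-0 8000:8000
</code></pre>
<p>After that, you can point your remote debugger at localhost:8000.</p>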
<p>The documentation is really complete btw, there's a lot of information here:</p>
<p><a href="https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/" rel="nofollow noreferrer">https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/</a></p>
| Bguess |
<p>I have deployed a Gitlab Runner in our Kubernetes Cluster with the <a href="https://docs.gitlab.com/runner/install/kubernetes.html#installing-gitlab-runner-using-the-helm-chart" rel="nofollow noreferrer">Helm Chart</a></p>
<p>Now I try to build an image with kaniko, but the runner cannot resolve the URL of my gitlab server:</p>
<pre><code>Running with gitlab-runner 12.3.0 (a8a019e0)
on gitlab-runner-gitlab-runner-d7996895b-7lpnh nY2nib3b
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image gcr.io/kaniko-project/executor:debug ...
Waiting for pod gitlab/runner-ny2nib3b-project-2-concurrent-0w2ffw to be running, status is Pending
Running on runner-ny2nib3b-project-2-concurrent-0w2ffw via gitlab-runner-gitlab-runner-d7996895b-7lpnh...
Fetching changes...
Initialized empty Git repository in /builds/my-repo/.git/
Created fresh repository.
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@XXX.XY:8443/my-repo.git/': Could not resolve host: XXX.XY
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>When I connect to the pod and try <code>nslookup XXX.XY</code>:</p>
<p><code>nslookup: can't resolve 'XXX.XY': Name does not resolve</code></p>
<p>I have already solved some problems, but here I have no idea. DNS works in other pods.</p>
<p>Edit:</p>
<p>On a working busybox pod the output of nslookup is</p>
<pre><code>nslookup google.de
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: google.de
Address 1: 2a00:1450:4001:816::2003 fra16s07-in-x03.1e100.net
Address 2: 172.217.18.3 fra15s28-in-f3.1e100.net
</code></pre>
| micha | <p>If you are using v12.3.0, then you ran into a bug: <a href="https://gitlab.com/gitlab-org/charts/gitlab-runner/issues/96" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/charts/gitlab-runner/issues/96</a></p>
| aries1980 |
<p>I have created a python web-server and built a docker image for it. Then I created a Kubernetes cluster with kubeadm and created a service of type LoadBalancer, but the External IP stays in Pending state, so I came to know that an external load balancer cannot be created with the kubeadm tool alone.
So my task is to create pod replicas of the web-server and access them through a public IP, with requests load balanced between the pods.
Can someone help me with this?</p>
| HARINI NATHAN | <p>I see two options in your scenario, and choosing between them is up to you based on your needs. </p>
<p>The fact it's pending forever is expected behavior. </p>
<p>By default, the solution proposed by Kubeadm requires you to <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">define a cloud-provider</a> to be able to use resources like <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws" rel="nofollow noreferrer">LoadBalancer</a> offered by your Cloud Provider. </p>
<p>Another option is to use an out-of-the-box solution such as <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p>
<p>MetalLB isn't going to allocate an external IP for you; it will allocate an internal IP inside your VPC, and you have to create the necessary routing rules to route traffic to this IP.</p>
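<p>As a rough sketch only, a Layer 2 address pool for MetalLB (pre-CRD versions configure it through a ConfigMap; the address range below is a placeholder you would replace with IPs you control):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>
<p>With a pool like this in place, Services of type LoadBalancer get an IP from that range instead of staying in Pending.</p>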
| Mark Watney |
<p>I want to block outgoing traffic to an IP (e.g. a DB) in iptables in K8s.</p>
<p>I know that in K8s iptables exists only at node level.</p>
<p>And I'm not sure in which file the changes should be made, or what command or changes are required.</p>
<p>Please help me with this query.</p>
<p>Thanks.</p>
| Anonymous | <p>You could deploy istio and specifically the istio egress gateway.</p>
<p>This way you will be able to manage outgoing traffic within the istio manifests.</p>
| Bguess |
<p>Running an application with <code>three services</code> on my local <code>minikube</code>, which was installed on a server with <strong>16 cpus and 64 GB of memory</strong>; one of the services has <code>replicas</code> set to 2. I only set resources.limits for each service, as shown below</p>
<pre><code>resources:
limits:
cpu: "2"
memory: "209715200"
</code></pre>
<blockquote>
<p>All service resource restrictions are the same.</p>
</blockquote>
<p>However, some service pods end up <code>pending</code>.</p>
<p>The pending Pod describe partial output is as follows</p>
<pre><code>Limits:
cpu: 2
memory: 209715200
Requests:
cpu: 2
memory: 209715200
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
</code></pre>
<blockquote>
<p>The result of <code>kubectl get pod</code> is as follows</p>
</blockquote>
<pre><code>kubectl get pod
NAME READY STATUS RESTARTS AGE
test1-777f54bcdb-pvfn5 1/1 Running 0 4m49s
test2-75ccb875b-lj9xl 1/1 Running 0 4m48s
test2-75ccb875b-s7xht 1/1 Running 0 4m48s
test3-797f6b795f-z9qv5 0/1 Pending 0 4m48s
</code></pre>
<blockquote>
<p>The result of <code>kubectl top node</code> is as follows</p>
</blockquote>
<pre><code>kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
test 1057m 13% 31675Mi 50%
</code></pre>
<blockquote>
<p>minikube version</p>
</blockquote>
<pre><code># minikube version
minikube version: v1.9.2
commit: 93af9c1e43cab9618e301bc9fa720c63d5efa393
</code></pre>
<blockquote>
<p>kubectl version</p>
</blockquote>
<pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I'm confused: my server configuration should be able to run this application, but pods are pending due to <strong>insufficient CPU</strong></p>
<p>Any comments would be greatly appreciated, thank you in advance!</p>
| moluzhui | <p>Running minikube with stock configs isn't going to make use of your hardware's full potential. Minikube allows you to create nodes as big as you need, and this is very important because the majority of people run minikube on their workstation alongside other applications and don't want minikube to have unrestricted access to their hardware.</p>
<p>To start minikube with custom specs you can do as <a href="https://stackoverflow.com/a/61795256/12153576">Radek</a> described, where you specify the amount of cpus and memory while starting your minikube.</p>
<pre><code>$ minikube start --cpus N --memory N
</code></pre>
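<p>For example, on the 16-CPU / 64 GB server from your question you could give minikube a bigger slice; the exact values below are only an illustration, leave some headroom for the host:</p>
<pre><code>$ minikube start --cpus 8 --memory 16384
</code></pre>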
<p>Another option is to set these parameters as default:</p>
<pre><code>$ minikube config set cpus N
$ minikube config set memory N
</code></pre>
<p>To check all configurable parameters you can run <code>minikube config</code>.</p>
<p>Another reason to have your nodes with limited resources is that you can have a minikube cluster with multiple nodes and also multiple clusters on one machine.
To create a minikube cluster with multiple nodes you can run:</p>
<pre><code>$ minikube start -n X
</code></pre>
<p>Where X is the number of desired nodes.</p>
<p>If you have a running minikube cluster and want to add another node to it, you can run:</p>
<pre><code>$ minikube node add
</code></pre>
<p>To create a secondary minikube cluster you can run:</p>
<pre><code>$ minikube start -p cluster-name
</code></pre>
<p>Where cluster-name is a name of your choice.</p>
| Mark Watney |
<p>I am having an issue with Kubernetes on GKE.
I am unable to resolve services by their name using internal DNS. </p>
<p>this is my configuration</p>
<p>Google GKE v1.15</p>
<pre><code>kubectl get namespaces
NAME STATUS AGE
custom-metrics Active 183d
default Active 245d
dev Active 245d
kube-node-lease Active 65d
kube-public Active 245d
kube-system Active 245d
stackdriver Active 198d
</code></pre>
<p>I've deployed a couple of simple services based on the openjdk 11 docker image, made with spring boot + actuator in order to have a /actuator/health endpoint to test in dev </p>
<pre><code>kubectl get pods --namespace=dev
NAME READY STATUS RESTARTS AGE
test1-5d86946c49-h9t9l 1/1 Running 0 3h1m
test2-5bb5f4ff8d-7mzc8 1/1 Running 0 3h10m
</code></pre>
<p>If I exec into the pod and try:</p>
<pre><code>kubectl --namespace=dev exec -it test1-5d86946c49-h9t9 -- /bin/bash
root@test1-5d86946c49-h9t9:/app# cat /etc/resolv.conf
nameserver 10.40.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.back-office-236415.internal c.back-office-236415.internal google.internal
options ndots:5
root@test1-5d86946c49-h9t9:/app# nslookup test2
Server: 10.40.0.10
Address: 10.40.0.10#53
** server can't find test2: NXDOMAIN
</code></pre>
<p>The same issue occurs if I use the test2 service and try to resolve test1. Is there a special configuration for the namespace to enable DNS resolution? Shouldn't this be automatic?</p>
| IlTera | <p>I have reproduced this using master version 1.15 and a Service of type 'ClusterIP'. I am able to do a lookup from the Pod of one service to another. For creating Kubernetes Services in a Google Kubernetes Engine cluster, [1] might be helpful.</p>
<p>To see the services:</p>
<pre><code>$ kubectl get svc --namespace=default
</code></pre>
<p>To access the deployment:</p>
<pre><code>$ kubectl exec -it [Pod Name] sh
</code></pre>
<p>To lookup:</p>
<pre><code>$ nslookup [Service Name]
</code></pre>
<p>Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain.</p>
<p>“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.</p>
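<p>For your case, it may be worth confirming that Services named test1/test2 actually exist in the dev namespace (the Pods alone do not get these service DNS names), and then testing the fully qualified form; for example:</p>
<pre><code>$ kubectl get svc --namespace=dev
$ nslookup test2.dev.svc.cluster.local
</code></pre>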
<p>“Headless” Services (without a cluster IP) are also assigned a DNS A record for a name, though this resolves to the set of IPs of the Pods selected by the Service.</p>
<p>However, DNS policies can be set on a per-pod basis. Currently Kubernetes supports the following pod-specific DNS policies. These policies are specified in the dnsPolicy field of a Pod Spec [2]:</p>
<p>“Default“: The Pod inherits the name resolution configuration from the node that the pods run on. </p>
<p>“ClusterFirst“: Any DNS query that does not match the configured cluster domain suffix, such as “www.kubernetes.io”, is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured. </p>
<p>“ClusterFirstWithHostNet“: For Pods running with hostNetwork, need to set its DNS policy “ClusterFirstWithHostNet”.</p>
<p>“None“: It allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the dnsConfig field in the Pod Spec.</p>
<p>[1]-<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps</a>
[2]-<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config</a></p>
| Shanewaz |
<p>I'm having a hard time understanding why a pods readiness probe is failing.</p>
<pre><code> Warning Unhealthy 21m (x2 over 21m) kubelet, REDACTED Readiness probe failed: Get http://192.168.209.74:8081/actuator/health: dial tcp 192.168.209.74:8081: connect: connection refused
</code></pre>
<p>If I exec into this pod (or in fact into any other I have for that application), I can run a curl against that very URL without issue:</p>
<pre><code>kubectl exec -it REDACTED-l2z5w /bin/bash
$ curl -v http://192.168.209.74:8081/actuator/health
* Expire in 0 ms for 6 (transfer 0x5611b949ff50)
* Trying 192.168.209.74...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5611b949ff50)
* Connected to 192.168.209.74 (192.168.209.74) port 8081 (#0)
> GET /actuator/health HTTP/1.1
> Host: 192.168.209.74:8081
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200
< Set-Cookie: CM_SESSIONID=E62390F0FF8C26D51C767835988AC690; Path=/; HttpOnly
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Content-Type: application/vnd.spring-boot.actuator.v3+json
< Transfer-Encoding: chunked
< Date: Tue, 02 Jun 2020 15:07:21 GMT
<
* Connection #0 to host 192.168.209.74 left intact
{"status":"UP",...REDACTED..}
</code></pre>
<p>I'm getting this behavior from both a Docker-for-Desktop k8s cluster on my Mac as well as an OpenShift cluster.</p>
<p>The readiness probe is shown like this in kubectl describe:</p>
<pre><code> Readiness: http-get http://:8081/actuator/health delay=20s timeout=3s period=5s #success=1 #failure=10
</code></pre>
<p>The helm chart has this to configure it:</p>
<pre><code> readinessProbe:
failureThreshold: 10
httpGet:
path: /actuator/health
port: 8081
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
</code></pre>
<p>I cannot fully rule out that HTTP proxy settings are to blame, but the k8s docs say that HTTP_PROXY is ignored for checks since v1.13, so it shouldn't happen locally.</p>
<p>The OpenShift k8s version is 1.11, my local one is 1.16.</p>
| stblassitude | <p>Describing a resource always shows the last events recorded for it. The thing is that the last event logged was an error while checking the <code>readinessProbe</code>. </p>
<p>I tested it in my lab with the following pod manifest: </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: readiness-exec
spec:
containers:
- name: readiness
image: k8s.gcr.io/busybox
args:
- /bin/sh
- -c
- sleep 30; touch /tmp/healthy; sleep 600
readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>As can be seen, a file <code>/tmp/healthy</code> will be created in the pod after 30 seconds, and the <code>readinessProbe</code> will check if the file exists after 5 seconds and repeat the check every 5 seconds.</p>
<p>Describing this pod will give me that: </p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m56s default-scheduler Successfully assigned default/readiness-exec to yaki-118-2
Normal Pulling 7m55s kubelet, yaki-118-2 Pulling image "k8s.gcr.io/busybox"
Normal Pulled 7m55s kubelet, yaki-118-2 Successfully pulled image "k8s.gcr.io/busybox"
Normal Created 7m55s kubelet, yaki-118-2 Created container readiness
Normal Started 7m55s kubelet, yaki-118-2 Started container readiness
Warning Unhealthy 7m25s (x6 over 7m50s) kubelet, yaki-118-2 Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
</code></pre>
<p>The <code>readinessProbe</code> looked for the file 6 times with no success, and that's completely right, as I configured it to check every 5 seconds and the file was only created after 30 seconds. </p>
<p>What you think is a problem is actually the expected behavior. Your Events are telling you that the <code>readinessProbe</code> last failed 21 minutes ago. It actually means that your pod has been healthy for the last 21 minutes. </p>
| Mark Watney |
<p>I'm running a node application <a href="https://github.com/awslabs/amazon-kinesis-client-nodejs" rel="nofollow noreferrer">built on top of AWS's java KCL lib</a> on k8s.</p>
<p>Every 5 minutes or so the container crashes with "CrashLoopBackOff" and restarts - I can't figure out why.</p>
<p>The container logs show no errors and at some point the stream simply ends with:</p>
<pre><code>Stream closed EOF for sol/etl-sol-onchain-tx-parse-6b7d8f4c94-tf8tc (parse)
</code></pre>
<p>The pod events show no useful info either, looking like this:</p>
<pre><code>│ State: Running
│ Started: Sun, 08 May 2022 10:06:36 -0400
│ Last State: Terminated
│ Reason: Completed
│ Exit Code: 0
│ Started: Sun, 08 May 2022 09:58:42 -0400
│ Finished: Sun, 08 May 2022 10:03:43 -0400
│ Ready: True
│ Restart Count: 6
</code></pre>
<p>How is it possible that it says "Completed" with exit code 0? The container is a never ending process, it should never complete.</p>
<p>CPU/mem requests are used 25-50% at most.</p>
<p>What else might be causing this? The container is supposed to be using 4-7 threads (not sure if green) - maybe that's the issue? Running it on an m5.large (2 vCPUs, 8gb ram).</p>
| ilmoi | <p>I don't think that what you say is accurate:</p>
<blockquote>
<p>The container is a never ending process, it should never complete.</p>
</blockquote>
<p>In my opinion this is not linked to kubernetes but to your application in the container. Try to execute your container directly on your host (within docker, for example) and check the behavior.</p>
| Bguess |
<p>I am trying to setup an <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx kubernetes ingress</a>. I am able to serve http and websockets content on different routes at the moment.</p>
<p>However I am not able to add GRPC routes on the same host. Adding this annotation <code>nginx.ingress.kubernetes.io/backend-protocol: "GRPC"</code> breaks the existing routes.</p>
<p>My java GRPC client exits with
<code>Caused by: io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f</code>
According to <a href="https://github.com/grpc/grpc-java/issues/2905" rel="nofollow noreferrer">https://github.com/grpc/grpc-java/issues/2905</a> this means the request is seen as HTTP</p>
<p>Is there a way to have http/websocket/grpc routes on the same host using the nginx kubernetes ingress? Alternatively, is there another ingress with which this would work?</p>
| nha | <p>As you want the annotation <code>nginx.ingress.kubernetes.io/backend-protocol: "GRPC"</code> to apply only on certain routes of your host, you could declare two Ingress definitions. The first one for all HTTP routes, the second one for GRPC routes.</p>
<p>The Nginx Ingress Controller will pick up all the Ingress definitions (with the expected <code>IngressClass</code>) and will use them to compose the <code>nginx.conf</code>. This behaviour is perfect for having paths which require different tunings in the annotations, like rewrite targets or, in your case, different backend protocols.</p>
<p>In particular, from the Nginx Controller documentation:</p>
<blockquote>
<p>Multiple Ingresses can define different annotations. These definitions
are not shared between Ingresses.</p>
</blockquote>
<p>You can check all the steps which are used to build the <code>nginx.conf</code> in the docs: <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/#building-the-nginx-model" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/how-it-works/#building-the-nginx-model</a></p>
| AndD |
<p>I have downloaded the latest Kubernetes version from the official Kubernetes site and referenced it in the PATH above the reference for Docker, but it is still showing the version installed with Docker Desktop.</p>
<p>I understand that docker comes with Kubernetes installed out of the box, but the kubectl version it ships, '1.15.5', doesn't work correctly with my Minikube version, 'v1.9.2', which is causing me problems.</p>
<p>Any suggestions on how to fix this issue? Should I remove the Kubernetes binary from <code>C:\Program Files\Docker\Docker\resources\bin</code>? I don't think that would be a good idea.</p>
<p>Can someone help me tackle this issue, along with some explanation on how the versions work with each other? Thanks</p>
| Zeeshan Adil | <p>This is happening because Windows always gives you the first command found in the PATH; both kubectl versions (Docker's and yours) are in the PATH, but the Docker entry is referenced before your kubectl entry. </p>
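<p>To see which binary Windows resolves first and what version it reports, you could run something like this in a terminal (just a quick check, nothing specific to your setup):</p>
<pre><code>where.exe kubectl
kubectl version --client
</code></pre>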
<p>How to solve this really depends on what you need. If you are not using Docker's Kubernetes, you have the following alternatives: </p>
<p>1 - Fix your PATH and make sure that your kubectl PATH entry is referenced before the Docker one.</p>
<p>2 - Replace Docker's kubectl with yours.</p>
<p>3 - Make sure you restart your PC after making these changes; kubectl will automatically update the configuration to point to the newer kubectl version the next time you use the <code>minikube start</code> command with a correct <code>--kubernetes-version</code>.</p>
<p>If you are using both from time to time, I would suggest you create a script that changes your PATH according to your needs. </p>
<p>According to the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin" rel="nofollow noreferrer">documentation</a> you must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.</p>
| Mark Watney |
<p>Trying to figure out how to expose multiple TCP/UDP services using a single LoadBalancer on Kubernetes. Let's say the services are ftpsrv1.com and ftpsrv2.com each serving at port 21. </p>
<p>Here are the options that I can think of and their limitations :</p>
<ul>
<li>One LB per svc: too expensive.</li>
<li>Nodeport : Want to use a port outside the 30000-32767 range.</li>
<li>K8s Ingress : does not support TCP or UDP services as of now.</li>
<li>Using Nginx Ingress controller : which again <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">will be one on one mapping</a>: </li>
<li>Found <a href="https://github.com/DevFactory/smartnat" rel="noreferrer">this custom implementation</a>: but it doesn't seem to be updated; the last update was almost a year ago.</li>
</ul>
<p>Any inputs will be greatly appreciated.</p>
| Ali | <p>It's actually <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">possible</a> to do it using NGINX Ingress.</p>
<p>Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags <code>--tcp-services-configmap</code> and <code>--udp-services-configmap</code> to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <code><namespace/service name>:<service port>:[PROXY]:[PROXY]</code>.</p>
<p><a href="https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/" rel="noreferrer">This guide</a> is describing how it can be achieved using minikube but doing this on a on-premises kubernetes is different and requires a few more steps.</p>
<p>There is lack of documentation describing how it can be done on a non-minikube system and that's why I decided to go through all the steps here. This guide assumes you have a fresh cluster with no NGINX Ingress installed.</p>
<p>I'm using a GKE cluster and all commands are running from my Linux Workstation. It can be done on a Bare Metal K8S Cluster also.</p>
<p><strong>Create sample application and service</strong></p>
<p>Here we are going to create an application and its service, to expose it later using our ingress.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
namespace: default
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
imagePullPolicy: Always
name: redis
ports:
- containerPort: 6379
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: redis-service
namespace: default
spec:
selector:
app: redis
type: ClusterIP
ports:
- name: tcp-port
port: 6379
targetPort: 6379
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: redis-service2
namespace: default
spec:
selector:
app: redis
type: ClusterIP
ports:
- name: tcp-port
port: 6380
targetPort: 6379
protocol: TCP
</code></pre>
<p>Notice that we are creating 2 different services for the same application. This only works as a proof of concept; I want to show later that many ports can be mapped using only one Ingress.</p>
<p><strong>Installing NGINX Ingress using Helm:</strong></p>
<p>Install helm 3:</p>
<pre><code>$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
</code></pre>
<p>Add NGINX Ingress repo:</p>
<pre><code>$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
</code></pre>
<p>Install NGINX Ingress on kube-system namespace:</p>
<pre><code>$ helm install -n kube-system ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<p><strong>Preparing our new NGINX Ingress Controller Deployment</strong></p>
<p>We have to add the following lines under spec.template.spec.containers.args:</p>
<pre><code> - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
</code></pre>
<p>So we have to edit using the following command:</p>
<pre><code>$ kubectl edit deployments -n kube-system ingress-nginx-controller
</code></pre>
<p>And make it look like this:</p>
<pre><code>...
spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=kube-system/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=kube-system/ingress-nginx-controller
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
...
</code></pre>
<p><strong>Create tcp/udp services Config Maps</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: kube-system
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: udp-services
namespace: kube-system
</code></pre>
<p>Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them every time you add a service:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6380":"default/redis-service2:6380"}}'
</code></pre>
<p>Where:</p>
<ul>
<li><code>6379</code> : the port your service should be reachable on from outside the cluster</li>
<li><code>default</code> : the namespace that your service is installed in</li>
<li><code>redis-service</code> : the name of the service</li>
</ul>
<p>We can verify that our resource was patched with the following command:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get configmap tcp-services -n kube-system -o yaml
apiVersion: v1
data:
"6379": default/redis-service:6379
"6380": default/redis-service2:6380
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namespace":"kube-system"}}
creationTimestamp: "2020-04-27T14:40:41Z"
name: tcp-services
namespace: kube-system
resourceVersion: "7437"
selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
uid: 11b01605-8895-11ea-b40b-42010a9a0050
</code></pre>
<p>The only value you need to validate is that there is a value under the <code>data</code> property that looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code> "6379": default/redis-service:6379
"6380": default/redis-service2:6380
</code></pre>
<p><strong>Add ports to NGINX Ingress Controller Deployment</strong></p>
<p>We need to patch our nginx ingress controller so that it is listening on ports 6379/6380 and can route traffic to your service.</p>
<pre><code>spec:
template:
spec:
containers:
- name: controller
ports:
- containerPort: 6379
hostPort: 6379
- containerPort: 6380
hostPort: 6380
</code></pre>
<p>Create a file called <code>nginx-ingress-controller-patch.yaml</code> and paste the contents above.</p>
<p>Next apply the changes with the following command:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch deployment ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-controller-patch.yaml)"
</code></pre>
<p><strong>Add ports to NGINX Ingress Controller Service</strong></p>
<p>Unlike the solution presented for minikube, we have to patch our NGINX Ingress Controller Service, as it is the one responsible for exposing these ports.</p>
<pre><code>spec:
ports:
- nodePort: 31100
port: 6379
name: redis
- nodePort: 31101
port: 6380
name: redis2
</code></pre>
<p>Create a file called <code>nginx-ingress-svc-controller-patch.yaml</code> and paste the contents above.</p>
<p>Next apply the changes with the following command:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-svc-controller-patch.yaml)"
</code></pre>
<p><strong>Check our service</strong></p>
<pre><code>$ kubectl get service -n kube-system ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.15.251.203 34.89.108.48 6379:31100/TCP,6380:31101/TCP,80:30752/TCP,443:30268/TCP 38m
</code></pre>
<p>Notice that our <code>ingress-nginx-controller</code> is listening on ports 6379/6380.</p>
<p>Test that you can reach your service with telnet via the following command:</p>
<pre><code>$ telnet 34.89.108.48 6379
</code></pre>
<p>You should see the following output:</p>
<pre><code>Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.
</code></pre>
<p>To exit telnet enter the <code>Ctrl</code> key and <code>]</code> at the same time. Then type <code>quit</code> and press enter.</p>
<p>We can also test port 6380:</p>
<pre><code>$ telnet 34.89.108.48 6380
Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.
</code></pre>
<p>If you were not able to connect please review your steps above.</p>
<p><strong>Related articles</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/Handbook/access-application-cluster/ingress-minikube/" rel="noreferrer">Routing traffic multiple services on ports 80 and 443 in minikube with the Kubernetes Ingress resource</a></li>
<li><a href="https://kubernetes.io/docs/Handbook/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">Use port forwarding to access applications in a cluster</a></li>
</ul>
| Mark Watney |
<p>I've got microservices deployed on GKE with Helm v3; all apps/Helm releases ran nicely for months, but yesterday, for some reason, the pods were re-created</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -l app=myapp
NAME READY STATUS RESTARTS AGE
myapp-75cb966746-grjkj 1/1 Running 1 14h
myapp-75cb966746-gz7g7 1/1 Running 0 14h
myapp-75cb966746-nmzzx 1/1 Running 1 14h
</code></pre>
<p>the <code>helm3 history myapp</code> shows it was updated 2 days ago (40+ hrs), not yesterday, so I exclude the possibility that someone simply ran <code>helm3 upgrade ..</code> (it rather looks as if someone ran <code>kubectl rollout restart deployment/myapp</code>). Any thoughts on how I can check why the pods were restarted? I'm not sure how to verify it. PS: the logs from <code>kubectl logs deployment/myapp</code> only go back 3 hours</p>
<hr />
<p>just for reference, I'm not asking for this command <code>kubectl logs -p myapp-75cb966746-grjkj</code>, with that there is no problem, I want to know what happened to the 3 pods that were there 14 hrs ago, and were simply deleted/replaced - and how to check that.</p>
<hr />
<p>also no events on the cluster</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro% kubectl get events
No resources found in myns namespace.
</code></pre>
<hr />
<p>as for describing the deployment, all there is is that the deployment was first created a few months ago</p>
<pre class="lang-sh prettyprint-override"><code>CreationTimestamp: Thu, 22 Oct 2020 09:19:39 +0200
</code></pre>
<p>and that the last update was >40 hrs ago</p>
<pre class="lang-sh prettyprint-override"><code>lastUpdate: 2021-04-07 07:10:09.715630534 +0200 CEST m=+1.867748121
</code></pre>
<p>here is full describe if someone wants</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro% kubectl describe deployment myapp
Name: myapp
Namespace: myns
CreationTimestamp: Thu, 22 Oct 2020 09:19:39 +0200
Labels: app=myapp
Annotations: deployment.kubernetes.io/revision: 42
lastUpdate: 2021-04-07 07:10:09.715630534 +0200 CEST m=+1.867748121
meta.helm.sh/release-name: myapp
meta.helm.sh/release-namespace: myns
Selector: app=myapp,env=myns
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 5
RollingUpdateStrategy: 25% max unavailable, 1 max surge
Pod Template:
Labels: app=myapp
Annotations: kubectl.kubernetes.io/restartedAt: 2020-10-23T11:21:11+02:00
Containers:
myapp:
Image: xxx
Port: 8080/TCP
Host Port: 0/TCP
Limits:
cpu: 1
memory: 1G
Requests:
cpu: 1
memory: 1G
Liveness: http-get http://:myappport/status delay=45s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:myappport/status delay=45s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
myapp-myns Secret Optional: false
Environment:
myenv: myval
Mounts:
/some/path from myvol (ro)
Volumes:
myvol:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: myvol
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-75cb966746 (3/3 replicas created)
Events: <none>
</code></pre>
| potatopotato | <p>First things first, I would check the nodes on which the Pods were running.</p>
<ul>
<li>If a Pod is restarted (which means that the <strong>RESTART COUNT</strong> is incremented) it usually means that the Pod had an error and that error caused the Pod to crash.</li>
<li>In your case though, the Pods were completely recreated; this means (like you said) that someone could have used a rollout restart, or the deployment was scaled down and then up (both manual operations).</li>
</ul>
<p>The most common case for Pods being recreated automatically is that the node / nodes where the Pods were running had a problem. If a node becomes <strong>NotReady</strong>, even for a small amount of time, the Kubernetes Scheduler will try to schedule new Pods on other nodes in order to match the desired state (number of replicas and so on).</p>
<p>Old Pods on a <strong>NotReady</strong> node will go into the <strong>Terminating</strong> state and will be forced to terminate as soon as the <strong>NotReady</strong> node becomes <strong>Ready</strong> again (if they are still up and running).</p>
<p>This is described in details in the documentation ( <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime</a> )</p>
<blockquote>
<p>If a Node dies, the Pods scheduled to that node are scheduled for deletion after a timeout period. Pods do not, by themselves, self-heal. If a Pod is scheduled to a node that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod instances.</p>
</blockquote>
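<p>To investigate, a few standard commands I would start with (keep in mind that cluster events are only retained for a limited time, typically about an hour, so old incidents may no longer show up):</p>
<pre><code># check node status and how long the nodes have been up
kubectl get nodes -o wide
# look for NotReady transitions or resource-pressure conditions on a suspect node
kubectl describe node NODE_NAME
# list recent events across all namespaces, sorted by time
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
</code></pre>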
| AndD |
<p>There is this HELM chart (<a href="https://github.com/cdwv/bitwarden-k8s" rel="nofollow noreferrer">https://github.com/cdwv/bitwarden-k8s</a>) that I would like to convert to standard Kubernetes manifests.</p>
<p>When trying to use <code>helm template</code> I get <em>does not appear to be a gzipped archive; got 'application/octet-stream'</em></p>
<p>Any other way I can achieve this?</p>
| Krychaz | <p>I hope you are enjoying your Kubernetes journey !</p>
<p>First question: is your helm chart gzipped/tarballed, and have you been able to download your chart and unzip/untar it? Is it a private GitHub repo?</p>
<p>You can check this post for maybe solving your pb: <a href="https://stackoverflow.com/questions/64320225/use-git-as-helm-repo-throws-does-not-appear-to-be-a-gzipped-archive-got-text">Use git as helm repo throws "does not appear to be a gzipped archive; got 'text/html; charset=utf-8'"</a></p>
<p>Which command do you use to templatize your helm chart?
I mean, once you have downloaded and unzipped your chart, you can just cd into the directory that contains the values and templates and type:</p>
<pre><code>helm template <randomName> .
</code></pre>
<p>and you'll get your k8s files completed with the values.</p>
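<p>For the repo you linked, which ships the chart sources directly on GitHub rather than as a .tgz, something along these lines should work (a rough sketch; the release name "bitwarden" and the output file are arbitrary, and you may need to cd into the subdirectory that contains Chart.yaml):</p>
<pre><code>git clone https://github.com/cdwv/bitwarden-k8s.git
cd bitwarden-k8s
# render the chart into plain Kubernetes manifests using the default values.yaml
helm template bitwarden . > bitwarden-manifests.yaml
</code></pre>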
<p>Waiting for your answer.
bguess.</p>
| Bguess |
<p>ingress ip is providing expected result but host returns <strong>404 http not found</strong></p>
<p>Ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: helloworld-ing
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
defaultBackend:
service:
name: helloworld-svc
port:
number: 8080
ingressClassName: nginx
tls:
- hosts:
- helloworld.dev.com
secretName: ingress-tls-csi
rules:
- host: helloworld.dev.com
http:
paths:
- path: /helloworld
pathType: Prefix
backend:
service:
name: helloworld-svc
port:
number: 8080
</code></pre>
<p>The ingress IP was not working earlier, but adding the default backend resolved that issue.
I believe the problem may be that requests are not getting past the default backend and never reach the rules.</p>
<p>I do not see warnings/errors in the ingress logs, but if I remove the default backend I cannot even access the app using the ingress IP.</p>
<p>I am not sure what I am missing in my ingress configuration.</p>
<p>I am trying the same path for both the host and the IP:</p>
<pre><code>curl http://10.110.45.61/helloworld/service/result
curl http://helloworld.dev.com/helloworld/service/result
</code></pre>
<p>I am happy to provide more information if required.</p>
| megha | <p>Hello, hope you're enjoying your Kubernetes journey!</p>
<p>So this is what I have tested as a first try (I haven't tested TLS for now):
First I have set up a kind cluster locally with this configuration (info here: <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>):</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
</code></pre>
<p>after this I create my cluster with this command:</p>
<pre><code>kind create cluster --config=config.yaml
</code></pre>
<p>Next, I have created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: so-tests
</code></pre>
<p>Then I created this vanilla nginx deployment and exposed it with a service; here is the config (manifest from here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment</a>):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>And the same for the service (manifest obtained with: k expose deployment nginx-deployment --dry-run -o yaml):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: nginx
name: nginx-deployment
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
</code></pre>
<p>After applying all the manifests to my cluster and checking that my pods were running, I made sure that I could access my nginx pod's web homepage by running:</p>
<pre><code>kubectl port-forward pod/nginx-deployment-74d589986c-c85r9 8080:80 #checked on localhost:8080 and it was succesful
</code></pre>
<p>I've done the same against the service to make sure that it was correctly redirecting the traffic to the pod:</p>
<pre><code>k port-forward service/nginx-deployment 8080:80 #checked on localhost:8080 and it was succesful
</code></pre>
<p>When I was sure that my workload was correctly running and accessible, I installed nginx ingress controller (from here: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>) with this command:</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>Then I created the Ingress k8s resource; here is the config (obtained by running: k create ingress demo-localhost --class=nginx --rule="demo.localdev.me/*=demo:80" and replacing the service name with the name of my service):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: demo-localhost
spec:
ingressClassName: nginx
rules:
- host: demo.localdev.me
http:
paths:
- backend:
service:
name: nginx-deployment
port:
number: 80
path: /
pathType: Prefix
</code></pre>
<p>Then, to check if my ingress was correctly redirecting the traffic, I ran:</p>
<pre><code>kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 #I also tested this on my browser localhost:8080
</code></pre>
<p>And guess what? -> 404 not found.</p>
<p>So, I decided to replace the name "demo.localdev.me" with "localhost" and it worked; here is the conf:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-localhost
spec:
ingressClassName: nginx
rules:
- host: localhost #demo.localdev.me
http:
paths:
- backend:
service:
name: nginx-deployment
port:
number: 80
path: /
pathType: Prefix
</code></pre>
<p>I went to my C:\Windows\System32\drivers\etc\hosts file on Windows (the equivalent of /etc/hosts on Linux, or of your DNS server in an enterprise setup) to check if it was a DNS issue, and I added this line:</p>
<pre><code>127.0.0.1 demo.localdev.me
</code></pre>
<p>and it worked as expected (make sure to clear your browser cache when playing with the /etc/hosts file, or to use private browsing, to get accurate results).</p>
<p>So, you can do some testing: make sure that:</p>
<ol>
<li>your path is correctly reachable (you can do the testing with
kubectl port-forward, as I have done above)</li>
<li>you have the correct DNS entry, one that sends the traffic to your nginx ingress controller service (not your application's pod service)</li>
<li>you do not have a firewall that is blocking your ports</li>
<li>the tls configuration is working properly (try with and without)</li>
</ol>
<p>If you have more information to share with us, feel free, i'll be happy to help you !</p>
<p>Have a nice day man !</p>
| Bguess |
<p>I am migrating Cassandra to Google Cloud and I have checked out a few options, like deploying Cassandra inside Kubernetes, using DataStax Enterprise on GCP, Portworx, etc., but I am not sure which one to use. Can someone suggest better options that you have used to deploy Cassandra in the cloud? </p>
| archura | <p>As Carlos Monroy correctly mentioned in his comment, this is wide-ranging; it highly depends on the use case, number of users, and SLA. I've found these links useful: they describe how to deploy Cassandra on <a href="https://cloudplatform.googleblog.com/2014/07/click-to-deploy-apache-cassandra-on-google-compute-engine.html" rel="nofollow noreferrer">GCE</a> and how to run <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/cassandra/README.md" rel="nofollow noreferrer">Cassandra in GKE</a> with stateful sets. This <a href="https://docs.datastax.com/en/ddac/doc/datastax_enterprise/gcp/aboutGCP.html" rel="nofollow noreferrer">documentation</a> will guide you through the DataStax Distribution of Apache Cassandra on the GCP Marketplace. You can also compare the cost of running those products; you can estimate the charges using the <a href="https://cloud.google.com/products/calculator/" rel="nofollow noreferrer">GCP pricing calculator</a>.</p>
| Aarti S |
<p>I have been working in Kubernetes for a while and I have a docker image of wildfly application. In the stanalone.xml of the wildfly, the connection to datasources are defined as follows:</p>
<pre><code><datasource jta="true" jndi-name="java:/DB" pool-name="DB" enabled="true" use-ccm="true">
<connection-url>jdbc:mysql://IP:3306/DB_NAME?zeroDateTimeBehavior=convertToNull&amp;autoReconnect=true</connection-url>
<driver-class>com.mysql.cj.jdbc.Driver</driver-class>
<driver>mysql</driver>
<security>
<user-name>root</user-name>
<password>root</password>
</security>
</datasource>
</code></pre>
<p>I have one worker node and 2 replicas of the same pod running on it. But currently I observe that my pods have no internet connectivity. I am testing with</p>
<blockquote>
<p>ping google.com</p>
</blockquote>
<p>It is not giving a response as expected. I am already using <strong>LoadBalancer</strong> services to expose the ports.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: re-demo
namespace: default
spec:
type: LoadBalancer
selector:
app: re-demo
ports:
- port: 9575
targetPort: 9575
nodePort: 32756
externalTrafficPolicy: Cluster
</code></pre>
<p>How can I solve this ??</p>
| Bruce wayne - The Geek Killer | <p>There was a mistake in how I set up the cluster.</p>
<pre><code>kubeadm init --apiserver-advertise-address 10.128.0.12 --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>The pod CIDR we pass here should be exactly the one defined in our <strong>kube_flannel.yaml</strong> file. If you want to change the IP range of the CIDR, first make the change in the <code>kube_flannel.yaml</code> file.</p>
<p>Otherwise, the pods will have no internet connectivity, and we would need the <code>hostNetwork: true</code> property to get an internet connection, which in turn prevents us from running more than one replica of the same pod on the same node.</p>
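<p>For reference, the flannel manifest defines the pod network in the <code>net-conf.json</code> entry of its ConfigMap, and that is the value the <code>--pod-network-cidr</code> flag has to match. A typical excerpt (your copy of the file may differ slightly):</p>
<pre><code>net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
</code></pre>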
| Bruce wayne - The Geek Killer |
<p>I have a simple Express.js server Dockerized and when I run it like:</p>
<pre><code>docker run -p 3000:3000 mytag:my-build-id
</code></pre>
<p><a href="http://localhost:3000/" rel="nofollow noreferrer">http://localhost:3000/</a> responds just fine and also if I use the LAN IP of my workstation like <a href="http://10.44.103.60:3000/" rel="nofollow noreferrer">http://10.44.103.60:3000/</a></p>
<p>Now if I deploy this to MicroK8s with a service deployment declaration like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: my-service
spec:
type: NodePort
ports:
- name: "3000"
port: 3000
targetPort: 3000
status:
loadBalancer: {}
</code></pre>
<p>and pod specification like so (update 2019-11-05):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
name: my-service
spec:
replicas: 1
selector:
matchLabels:
name: my-service
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
name: my-service
spec:
containers:
- image: mytag:my-build-id
name: my-service
ports:
- containerPort: 3000
resources: {}
restartPolicy: Always
status: {}
</code></pre>
<p>and obtain the exposed NodePort via <code>kubectl get services</code> to be 32750 and try to visit it on the MicroK8s host machine like so:</p>
<p>curl <a href="http://127.0.0.1:32750" rel="nofollow noreferrer">http://127.0.0.1:32750</a></p>
<p>then the request just hangs and if I try to visit the LAN IP of the MicroK8s host from my workstation at
<a href="http://192.168.191.248:32750/" rel="nofollow noreferrer">http://192.168.191.248:32750/</a>
then the request is immediately refused.</p>
<p>But, if I try to port forward into the pod with</p>
<pre><code>kubectl port-forward my-service-5db955f57f-q869q 3000:3000
</code></pre>
<p>then <a href="http://localhost:3000/" rel="nofollow noreferrer">http://localhost:3000/</a> works just fine.</p>
<p>So the pod deployment seems to be working fine and example services like the microbot-service work just fine on that cluster.</p>
<p>I've made sure the Express.js server listens on all IPs with </p>
<pre><code>app.listen(port, '0.0.0.0', () => ...
</code></pre>
<p>So what can be the issue?</p>
| Bjorn Thor Jonsson | <p>You need to add a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">selector</a> to your service. This will tell Kubernetes how to find your deployment's pods. Additionally, you can use nodePort to specify the node port number of your service, and the targetPort must match the port your container actually listens on (3000 here). After doing that you will be able to curl your MicroK8s IP.</p>
<p>Your Service YAML should look like this: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: my-service
spec:
type: NodePort
ports:
- name: http
    port: 3000
    targetPort: 3000
nodePort: 30001
selector:
name: my-service
status:
loadBalancer: {}
</code></pre>
| Mark Watney |
<p>I have a kubernetes cluster. I created the cluster on <code>Google Cloud</code>, but not using GKE; I used GCE. I've created one <code>master node</code> and two <code>worker nodes</code> using <code>VM instances</code>. <code>Kubeadm</code> is used for joining the master and worker nodes, along with the <code>kube-flannel.yml</code> file. I am exposing my cluster externally in <code>postman</code> using my <code>VM's public IP & nodePort</code>. I am able to hit that URL, <code>publicip:nodePort/adapter_name</code>: the hit reaches my pods and logs are generated. When I used <code>minikube</code> before, I used <code>port-forwarding</code> to expose my port. Now I am not using that.</p>
<p>There is a default <code>kubeconfig</code> file called config present at <code>$HOME/.kube/config</code>. It has the following content in it.</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJ....
server: https://10.128.0.12:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFe....
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb.....
</code></pre>
<p>The server <code>IP</code> is <code>https://10.128.0.12:6443</code>. Can I change this default URL to the one required for authentication (my REST API URL)?</p>
<p>My requirement is to provide authentication for the REST API URL that my application exposes while running in the kubernetes pod.</p>
<p>How can I authenticate my REST API URL with this <code>kubeconfig</code> method, or by creating a new kubeconfig file and using that?</p>
<p><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/" rel="nofollow noreferrer">https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/</a></p>
<p><a href="http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/" rel="nofollow noreferrer">http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/</a></p>
<p>I got a few ideas from the above two blogs and tried to implement them, but none of them satisfies my requirement. Authentication via postman using any JWT token is also acceptable.</p>
<p>Kubernetes version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:09:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| Bruce wayne - The Geek Killer | <p>The best method to authenticate our client API/endpoint URL is to use <code>Istio</code>.</p>
<p><a href="https://istio.io/latest/docs/concepts/what-is-istio/" rel="nofollow noreferrer">Istio installation</a></p>
<p>I documented the whole process of providing security via Istio in a PDF file which I am attaching <a href="https://pdfhost.io/v/emsAoXAbr_Istio_Installation_Integration_and_Security_with_Kubernetes_Cluster_Open_sourcepdf.pdf" rel="nofollow noreferrer">here</a>. Istio is used for the verification of the token and Keycloak is used for the generation of the JWT token.</p>
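<p>To give an idea of what that looks like, Istio validates the JWT with a <code>RequestAuthentication</code> and then enforces it with an <code>AuthorizationPolicy</code>, roughly like this (a sketch only; the label selector, issuer and jwksUri are placeholders you would replace with your Keycloak realm's values):</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-api          # hypothetical label of the workload to protect
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/myrealm"
    jwksUri: "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-api
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # reject requests without a valid token
</code></pre>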
| Bruce wayne - The Geek Killer |
<p>The use case is to get the environment variable *COUNTRY from all the pods running in a namespace </p>
<pre><code>kubectl get pods podname -n namespace -o 'jsonpath={.spec.containers[0].env[?(@.name~="^COUNTRY")].value}'
</code></pre>
<p>This does not seem to work. Any lead?</p>
| cloudbud | <p>You can retrieve this information using the following command: </p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\t"}{.spec.containers[*].env[*].value}{"\n"}{end}' | grep COUNTRY | cut -f 2
</code></pre>
<p>It will return the variables content as follows:</p>
<pre><code>$ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\t"}{.spec.containers[*].env[*].value}{"\n"}{end}' | grep VAR | cut -f 2
123456
7890123
</code></pre>
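<p>If you also want to see which pod each value belongs to, and filter on the exact variable name instead of grepping, kubectl's JSONPath filter expressions can presumably be used like this (not tested against every kubectl version; adjust the namespace and variable name as needed):</p>
<pre><code>kubectl get pods -n my-namespace -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].env[?(@.name=="COUNTRY")].value}{"\n"}{end}'
</code></pre>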
| Mark Watney |
<p>I'm using Terraform for deploying <strong>cert-manager</strong> and <strong>ambassador</strong>.</p>
<p>I am trying to understand how to use <strong>nodeSelector</strong> in a Terraform deployment and assign the Helm charts I'm using for both services to a specific node group I have (matching it with a label key and value).</p>
<pre><code>resource "helm_release" "cert_manager" {
namespace = var.cert_manager_namespace
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = var.cert_manager_release_version
create_namespace = true
count = var.enable
set {
name = "controller."
}
set {
name = "controller.nodeselector"
value = ""
}
set {
name = "installCRDs" # Should only happen on the first attempt
value = "true"
}
set {
name = "securityContext.enabled"
value = "true"
}
</code></pre>
<p>The example above is my attempt at assigning it.
Any ideas?</p>
<p>Thanks!!</p>
| n1vgabay | <p>If your nodeSelector location in values.yaml looks like this:</p>
<pre><code>controller:
nodeSelector: {}
</code></pre>
<p>You should be setting it up this way:</p>
<pre><code>set {
name = "controller.nodeSelector.dedicated"
value = "workloads"
}
</code></pre>
<p>Where <strong>dedicated</strong> is the key and <strong>workloads</strong> is the value.</p>
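<p>One extra gotcha: if your label key itself contains dots (for example <code>kubernetes.io/role</code>), the dots have to be escaped in the Terraform <code>set</code> block so the Helm provider does not treat them as nesting separators. A hedged sketch (the key and value here are just examples):</p>
<pre><code>set {
  name  = "nodeSelector.kubernetes\\.io/role"
  value = "workloads"
}
</code></pre>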
| Maciej Olesiński |
<p>I'm deploying a postgres DB using helm with these steps:</p>
<ol>
<li>applying pv:</li>
</ol>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
namespace: admin-4
name: postgresql-pv-admin-4
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<ol start="2">
<li>applying PVC:</li>
</ol>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: admin-4
name: postgresql-pvc-admin-4
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<ol start="3">
<li>running helm command:</li>
</ol>
<pre><code>helm install postgres bitnami/postgresql --set persistence.enabled=true --set persistence.existingClaim=postgresql-pvc-admin-4 --set volumePermissions.enabled=true -n admin-4
</code></pre>
<p>This is the output:
<a href="https://i.stack.imgur.com/G5Svy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G5Svy.png" alt="k get pod,pv,pvc" /></a></p>
| Eslam Ali | <p>On the latest bitnami/postgresql chart (chart version 11.8.1), the fields to set are:</p>
<pre class="lang-yaml prettyprint-override"><code>primary:
persistence:
enabled: true
existingClaim: postgresql-pvc-admin-4
</code></pre>
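<p>If you prefer to keep everything on the command line as in your original <code>helm install</code>, the equivalent <code>--set</code> flags would presumably be:</p>
<pre><code>helm install postgres bitnami/postgresql \
  --set primary.persistence.enabled=true \
  --set primary.persistence.existingClaim=postgresql-pvc-admin-4 \
  --set volumePermissions.enabled=true \
  -n admin-4
</code></pre>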
| ericfossas |
<p>I have a docker image with a config folder in it.
Logback.xml is located there.
I want the ability to change logback.xml in the pod (to raise the log level dynamically, for example).
First, I thought of using an emptyDir volume, but that rewrites the directory and it becomes empty.
So is there some simple method to make the directory writable inside the pod?</p>
| Haster | <p>Hello, hope you are enjoying your kubernetes journey! If I understand correctly, you want to have a file in a pod and be able to modify it when needed.</p>
<p>The thing you need here is to create a configmap based on your logback.xml file (you can do it with imperative or declarative kubernetes configuration; here is the imperative one):</p>
<pre><code>kubectl create configmap logback --from-file logback.xml
</code></pre>
<p>And after this, just mount this very file to you directory location by using volume and volumeMount subpath in your deployment/pod yaml manifest:</p>
<pre><code>...
volumeMounts:
- name: "logback"
mountPath: "/CONFIG_FOLDER/logback.xml"
subPath: "logback.xml"
volumes:
- name: "logback"
configMap:
name: "logback"
...
</code></pre>
<p>After this, you will be able to modify your logback.xml config, by editing / recreating the configmap and restarting your pod.</p>
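<p>In practice, an update cycle could look like this (a small sketch; replace the deployment name with yours):</p>
<pre><code># recreate the configmap from the edited file without deleting it first
kubectl create configmap logback --from-file logback.xml --dry-run=client -o yaml | kubectl apply -f -
# restart the pods so they mount the updated file
kubectl rollout restart deployment/my-app
</code></pre>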
<p>But, keep in mind:</p>
<p>1: The pod files are not supposed to be modified on the fly; this is against the container philosophy (cf. Pets vs Cattle)</p>
<p>2: However, depending on your container image's user rights, the pod's directories may already be writable...</p>
| Bguess |
<p>I have a ClusterIssuer that is expecting <code>secretName</code>, I see in the <code>ClusterIssuer</code> <code>spec</code>, I can specify the <code>secretName</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: postgres-operator-ca-certificate-cluster-issuer
spec:
ca:
secretName: postgres-operator-ca-certificate # <---- Here
</code></pre>
<p>but how to provide the reference to the secret namespace? This secret is created using <code>Certificate</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: postgres-operator-self-signed-ca-certificate
namespace: postgres # <---- This namespace can not be changed to cert-manager
spec:
isCA: true
commonName: postgres-operator-ca-certificate
secretName: postgres-operator-ca-certificate
issuerRef:
name: postgres-operator-selfsigned-clusterissuer
kind: ClusterIssuer
</code></pre>
<p>As this is <code>namespaced</code>, is the suggestion to use <code>Issuer</code> instead of <code>ClusterIssuer</code>? Does <code>ClusterIssuer</code> look in the <code>cert-manager</code> namespace by default?</p>
| Vishrant | <p>Typically it will look for the secret in the namespace <code>cert-manager</code> by default. Which namespace it looks in can be changed by your cert-manager installation by using the <code>--cluster-resource-namespace</code> argument, but not by individual ClusterIssuer.</p>
<p>From the documentation:</p>
<blockquote>
<p>If the referent is a cluster-scoped resource (e.g. a ClusterIssuer),
the reference instead refers to the resource with the given name in
the configured ‘cluster resource namespace’, which is set as a flag on
the controller component (and defaults to the namespace that
cert-manager runs in).</p>
</blockquote>
<p><a href="https://cert-manager.io/docs/reference/api-docs/#meta.cert-manager.io/v1.LocalObjectReference" rel="nofollow noreferrer">https://cert-manager.io/docs/reference/api-docs/#meta.cert-manager.io/v1.LocalObjectReference</a></p>
| ericfossas |
<p>I have created a simple program written in Python which interacts with a redis database take a list of elements which is stored in my db and sort them.</p>
<p>PYTHON CODE :</p>
<pre><code>import redis
import numpy as np
r = redis.Redis(host='redis-master', port=6379, db=9, socket_connect_timeout=2, socket_timeout=2)
array = np.array([]);
vector = np.vectorize(int);
while(r.llen('Numbers')!=0):
array = vector(np.append(array, r.lpop('Numbers').decode('utf8')))
sorted_array = np.sort(array);
print("The sorted array : ");
print(sorted_array);
</code></pre>
<p>I have created an image with the following Docker file :</p>
<pre><code>FROM python:3
WORKDIR /sorting
COPY sorting.py ./
RUN apt-get update
RUN pip3 install numpy
RUN pip3 install redis
CMD python3 sorting.py
</code></pre>
<p>Also for the redis deployment and service I have the following yaml file :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-master
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 1
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: redis
ports:
- name: "redis-server"
containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis-master
labels:
app: redis
role: master
tier: backend
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
role: master
tier: backend
</code></pre>
<p>and for the python programm deployment and service I have the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sortingapp
labels:
app: sortingapp
spec:
selector:
matchLabels:
app: sortingapp
replicas: 1
template:
metadata:
labels:
app: sortingapp
spec:
containers:
- name: sortingapp
image: sorting-app:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: sorting-app
spec:
type: NodePort
ports:
- name: http
port: 9090
targetPort: 8080
selector:
app: go-redis-app
</code></pre>
<p>My redis pod seems to work properly, but when I try to run my sortingapp, the pod is created but its status is CrashLoopBackOff. I checked the logs and they show the prints of my python program:</p>
<pre><code>The sorted array :
[]
</code></pre>
<p>So as I understand it, something is wrong with the connection between the app pod and the redis pod.
Any suggestions about what I am doing wrong?</p>
| Rodr | <p>You are doing it the right way. I tested your code locally, and when you pass a wrong database host name to your python script it fails; so since you get the output <code>The sorted array : []</code>, the connection to the database has been made properly.</p>
<p>However, you have to know that by deploying this kind of run-once script in Kubernetes (or docker), the container will be restarted again and again, since it runs one time and then stops.</p>
<p>So if you don't want this error to appear, either make your script an app that runs continuously, OR use a Kubernetes Job if, for example, you want to run it manually when needed (see the sketch just below).</p>
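<p>A minimal Job sketch for that second option could look like this (the names and the REDIS_HOST variable are illustrative; your script would also have to read the host from the environment for that variable to matter):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sorting-job
spec:
  backoffLimit: 2            # retry a couple of times on failure, then give up
  template:
    spec:
      restartPolicy: Never   # do not restart the container endlessly
      containers:
      - name: sortingapp
        image: sorting-app:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: REDIS_HOST   # hypothetical variable the script could read
          value: redis-master
</code></pre>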
<p>Another thing: Since redis is a stateful application, consider using a <code>StatefulSet</code> object in Kubernetes instead of a <code>Deployment</code>. But since its not related to the problem, you can do it whenever you want.</p>
<p>A little advice: you should pass the <code>host</code> configuration of your redis database to your python code through an environment variable. It will be better if one day you need to run the container elsewhere: you would just change the environment variable instead of rebuilding your docker image.</p>
<p>A big problem for the future: look at your Python app's Kubernetes service, it contains the selector <code>app: go-redis-app</code> instead of <code>app: sortingapp</code>, which is the label your deployment actually uses.</p>
<p>Basically it's a python problem, not a Kubernetes database connection problem, so good job.</p>
| Bguess |
<p>I am using an existing helm chart repo
<a href="https://github.com/kubecost/cost-analyzer-helm-chart" rel="nofollow noreferrer">https://github.com/kubecost/cost-analyzer-helm-chart</a></p>
<p>For deployment I am using a custom helm chart: I have created a tgz of the repo, put it under my own charts/ directory, and then added some templates of my own which deploy resources related to cost-analyzer.</p>
<p>I want to assign some custom labels to the resources which are coming from that tgz.</p>
<p>Is there some way that I can add custom labels to all the resources deployed by my custom helm chart, including the resources that come from the tgz?</p>
| Keyur Barapatre | <p>There is nothing built into Helm for doing that.</p>
<p>You can set the <code>additionalLabels</code> field in their Helm chart <a href="https://github.com/kubecost/cost-analyzer-helm-chart/blob/643f3fa401c54aeb44aef375c52b0d77d5d91fcc/cost-analyzer/values.yaml#L135" rel="nofollow noreferrer">values.yaml</a> file (there are multiple places this needs to be done).</p>
<p>A potential kludge could be to pull the manifests after deploying, get the name and type of every resource, and pump that into a kubectl command to label everything, for example:</p>
<pre><code>HELM_RELEASE="???"
NAMESPACE="???"
LABEL="???"
# render the release's manifests, resolve them to the live objects,
# drop blank/header lines, keep the TYPE/NAME column, then label each object
helm get manifest $HELM_RELEASE -n $NAMESPACE \
  | kubectl get -n $NAMESPACE -f - \
  | grep -vE '^$|^NAME' \
  | cut -d' ' -f1 \
  | xargs -I {} kubectl label -n $NAMESPACE {} $LABEL
</code></pre>
| ericfossas |
<p>I can list my installed charts like this:</p>
<pre><code>❯ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 2 2020-07-05 18:38:44.8954751 -0700 PDT deployed cert-manager-v0.15.2 v0.15.2
</code></pre>
<p>But how do I find out where I installed <code>cert-manager</code> from?</p>
<p>I assume it was <code>https://charts.jetstack.io</code> but is there any history on that? Can I find what command I used?</p>
| mpen | <p>I think there is no way to obtain the repo or URL from which the chart was installed, as a Helm chart is not tied to any remote location.</p>
<p>The command <code>helm get all installation-name</code> will give all the info about an installed chart such as the manifest, the given values and the computed ones, but <strong>NOT</strong> the url or origin of the chart.</p>
<p>There are several open bugs / requests such as <a href="https://github.com/helm/helm/issues/4256" rel="nofollow noreferrer">https://github.com/helm/helm/issues/4256</a></p>
<p>It would be interesting if the origin could be made available as part of the manifest info or something like that, but I think the origin is stripped completely by the helm client, so there's no info on where the chart was originally located once we are at installation time.</p>
| AndD |
<p>I have a Bare-Metal Kubernetes custom setup (manually setup cluster using Kubernetes the Hard Way). Everything seems to work, but I cannot access services externally.</p>
<p>I can get the list of services when curl:</p>
<pre><code>https://<ip-addr>/api/v1/namespaces/kube-system/services
</code></pre>
<p>However, when I try to proxy (using <code>kubectl proxy</code>, and also by using the <code><master-ip-address>:<port></code>):</p>
<pre><code>https://<ip-addr>/api/v1/namespaces/kube-system/services/toned-gecko-grafana:80/proxy/
</code></pre>
<p>I get:</p>
<pre><code>Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
Trying to reach: 'http://10.44.0.16:3000/'
</code></pre>
<ul>
<li><p><strike>Even if I normally curl <code>http://10.44.0.16:3000/</code> I get the same error. This is the result whether I curl from inside the VM where Kubernetes is installed.</strike> Was able to resolve this, check below.</p></li>
<li><p>I can access my services externally using NodePort.</p></li>
<li><p>I can access my services if I expose them through Nginx-Ingress.</p></li>
<li><p>I am using Weave as CNI, and the logs were normal except a couple of log-lines at the beginning about it not being able to access Namespaces (RBAC error). Though logs were fine after that.</p></li>
<li><p>Using CoreDNS, logs look normal. APIServer and Kubelet logs look normal. Kubernetes-Events look normal, too.</p></li>
<li><p><strong><em>Additional Note</em></strong>: The DNS Service-IP I assigned is <code>10.3.0.10</code>, the service IP range is <code>10.3.0.0/24</code>, and the POD network is <code>10.2.0.0/16</code>. I am not sure what <code>10.44.x.x</code> is or where it is coming from.</p>
<ul>
<li>Also, I am using Nginx-Ingress (Helm Chart: <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a>)</li>
</ul></li>
</ul>
<p>Here is output from one of the services:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
"uid": "5c8bb34f-c6a2-11e8-84a7-00163cb4ceeb",
"resourceVersion": "7054",
"creationTimestamp": "2018-10-03T00:22:07Z",
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443,
"nodePort": 30033
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.3.0.30",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
</code></pre>
<p>I am not sure how to debug this, even some pointers to the right direction would help. If anything else is required, please let me know.</p>
<hr>
<p>Output from <code>kubectl get svc</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-primary ClusterIP 10.3.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h51m
kubernetes-dashboard NodePort 10.3.0.30 <none> 443:30033/TCP 4h51m
</code></pre>
<hr>
<p><strong>EDIT:</strong></p>
<p>Turns out I didn't have <code>kube-dns</code> service running for some reason, despite having CoreDNS running. It was as mentioned here: <a href="https://github.com/kubernetes/kubeadm/issues/1056#issuecomment-413235119" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/1056#issuecomment-413235119</a></p>
<p>Now I can curl from inside the VM successfully, but the proxy-access still gives me the same error: <code>No route to host</code>. I am not sure why or how this would fix the issue, since I don't see DNS being in play here, but it fixed it regardless. I would appreciate any possible explanation on this too.</p>
| Jaskaranbir Singh | <p>I encountered the same issue and resolved it by running the commands below:</p>
<pre><code># flush all rules in the default (filter) table
iptables --flush
# flush all rules in the NAT table as well
iptables -t nat --flush
# stop and disable firewalld so it no longer interferes with the cluster's iptables rules
systemctl stop firewalld
systemctl disable firewalld
# restart docker so it recreates its own iptables chains
systemctl restart docker
</code></pre>
| KevinLiu |
<p>I want to update my kubernetes cluster from 1.21 to 1.22, so I should update my Ingress resources from v1beta1 to v1. How do I compare the resource definitions of v1beta1 and v1 to know what to update?</p>
| Kaizendae | <p>You could check on the internet first (example: <a href="https://docs.konghq.com/kubernetes-ingress-controller/latest/concepts/ingress-versions/" rel="nofollow noreferrer">https://docs.konghq.com/kubernetes-ingress-controller/latest/concepts/ingress-versions/</a> )</p>
<p>Or you could use the <code>kubectl proxy</code> command to access the kubernetes API server locally and navigate through the different API versions.</p>
<p>(And maybe check the <code>kubectl explain</code> command, I have to check if we can do this with it; a quick sketch is below.)</p>
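<p>As far as I know, the main Ingress changes in <code>networking.k8s.io/v1</code> are the mandatory <code>pathType</code>, the move from <code>serviceName</code>/<code>servicePort</code> to <code>backend.service.name</code>/<code>backend.service.port</code>, and <code>ingressClassName</code> replacing the deprecated class annotation. Assuming a reasonably recent kubectl, the full v1 schema can be browsed with:</p>
<pre><code>kubectl explain ingress --api-version=networking.k8s.io/v1 --recursive
</code></pre>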
| Bguess |
<p>I currently have a kubernetes CronJob with a <code>concurrencyPolicy: Forbid</code>.
I would like to be able to uniquely identify this pod externally (from the k8s cluster) and internally from the pod, with an ID that will be unique forever.
I was thinking of using the <code>creationTimestamp</code> and the name of the pod but unfortunately it is <a href="https://stackoverflow.com/questions/72873334/how-to-pass-creation-timestamp-to-kubernetes-cronjob/72874894#72874894">not that easy to pass the <code>creationTimestamp</code></a>. Any other ideas?</p>
| maxisme | <p>Just use the pod name like you had figured out from your other question:</p>
<pre><code>- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>The pod name should be unique enough (if you want something completely unique, you'll need to grab its UID, but I'm not sure what you're trying to solve; a sketch for the UID is below).</p>
<p>The pod name will be generated using the following format:</p>
<pre><code>{{CRONJOB_NAME}}-{{UNIX_TIMESTAMP_IN_MINUTES}}-{{5_CHAR_RANDOM_ALPHANUM}}
</code></pre>
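<p>For completeness, the UID can be exposed the same way through the downward API (<code>metadata.uid</code> is a supported fieldRef, as far as I know):</p>
<pre><code>- name: POD_UID
  valueFrom:
    fieldRef:
      fieldPath: metadata.uid
</code></pre>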
| ericfossas |
<p>I have scripts that collect data all the time on Google Cloud VMs, but there are times when I have more or less data to collect, so I need CPU and memory to be allocated elastically and automatically so I don't spend so much money. Searching, I saw that the best way is to create containers and orchestrate them correctly. Google offers Kubernetes, Cloud Run and Google Compute Engine; which is the simplest and best for this problem? Or, if there is another platform that solves it better, which one?</p>
<p>PS: I'm new to Cloud Computing; sorry if I made a mistake or said something that doesn't exist.</p>
| Gabriel Franco | <p>Definitely forget about GCE (Compute Engine).</p>
<p>That leaves GKE or Cloud Run; you have to choose depending on your needs. Here is the best article I have found:</p>
<p><a href="https://cloud.google.com/blog/products/containers-kubernetes/when-to-use-google-kubernetes-engine-vs-cloud-run-for-containers" rel="nofollow noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/when-to-use-google-kubernetes-engine-vs-cloud-run-for-containers</a></p>
<p><a href="https://i.stack.imgur.com/UTfJS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UTfJS.png" alt="enter image description here" /></a></p>
<p>However, if you choose to use k8s, you can manage resources within the deployment manifests in the "resources" section. The requests are the minimum resources allocated to your deployment's containers and the limits are the maximum resources they can use. You may want to play with this; a short sketch is below.</p>
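<p>For illustration, such a section could look like this inside a container spec (the numbers are placeholders to tune for your workload):</p>
<pre><code>resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
</code></pre>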
| Bguess |
<p>currently we're adding features to 3rd party helm charts we're deploying (for example, in prometheus we're adding authentication support as we use the nginx ingress controller).</p>
<p>Obviously, this will cause us headaches when we want to upgrade those helm charts; we will need to perform "diffs" with our changes.</p>
<p>What's the recommended way to add functionality to existing 3rd party helm charts? Should i use umbrella charts and use prometheus as a dependency? then import value from the chart? (<a href="https://github.com/helm/helm/blob/master/docs/charts.md#importing-child-values-via-requirementsyaml" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/charts.md#importing-child-values-via-requirementsyaml</a>)</p>
<p>Or any other recommended way?</p>
<p>-- EDIT --</p>
<p>Example - as you can see, I've added 3 nginx.ingress.* annotations to support basic auth on the <strong>prometheus</strong> ingress resource - of course, if I upgrade, I'll need to manually add them again, which will cause problems.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
{{- if .Values.prometheus.ingress.annotations }}
annotations:
{{ toYaml .Values.prometheus.ingress.annotations | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.ingress.nginxBasicAuthEnabled }}
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
nginx.ingress.kubernetes.io/auth-secret: {{ template "prometheus-operator.fullname" . }}-prometheus-basicauth
nginx.ingress.kubernetes.io/auth-type: "basic"
{{- end }}
name: {{ $serviceName }}
labels:
app: {{ template "prometheus-operator.name" . }}-prometheus
{{ include "prometheus-operator.labels" . | indent 4 }}
{{- if .Values.prometheus.ingress.labels }}
{{ toYaml .Values.prometheus.ingress.labels | indent 4 }}
{{- end }}
spec:
rules:
{{- range $host := .Values.prometheus.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: "{{ $routePrefix }}"
backend:
serviceName: {{ $serviceName }}
servicePort: 9090
{{- end }}
{{- if .Values.prometheus.ingress.tls }}
tls:
{{ toYaml .Values.prometheus.ingress.tls | indent 4 }}
{{- end }}
{{- end }}
</code></pre>
| ArielB | <p>I think that might answer your <a href="https://stackoverflow.com/a/52554219/11977760">question</a>.</p>
<ul>
<li><a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md" rel="nofollow noreferrer" title="Subcharts and Globals">Subcharts and Globals</a></li>
<li><a href="https://github.com/helm/helm/blob/master/docs/chart_best_practices/requirements.md" rel="nofollow noreferrer">Requirements</a></li>
<li><a href="https://github.com/helm/helm/blob/master/docs/helm/helm_dependency.md" rel="nofollow noreferrer">Helm Dependencies</a></li>
</ul>
<p>This led me to find the <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">specific part I was looking for</a>, where the parent chart can override sub-charts by specifying the chart name as a key in the parent <code>values.yaml</code>.</p>
<p>In the application chart's <code>requirements.yaml</code>:</p>
<pre><code>dependencies:
- name: jenkins
# Can be found with "helm search jenkins"
version: '0.18.0'
# This is the binaries repository, as documented in the GitHub repo
repository: 'https://kubernetes-charts.storage.googleapis.com/'
</code></pre>
<p>Run:</p>
<pre><code>helm dependency update
</code></pre>
<p>In the application chart's <code>values.yaml</code>:</p>
<pre><code># ...other normal config values
# Name matches the sub-chart
jenkins:
# This will be override "someJenkinsConfig" in the "jenkins" sub-chart
someJenkinsConfig: value
</code></pre>
| Jakub |
<p>I am deploying stolon via a statefulset (the default from the stolon repo).
I have defined this in the statefulset config:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: stolon-local-storage
resources:
requests:
storage: 1Gi
</code></pre>
<p>and here is my storageClass:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: stolon-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>The statefulset was created fine, but the pod has an error:
<strong>pod has unbound immediate PersistentVolumeClaims</strong></p>
<p>How can I resolve it?</p>
| Donets | <blockquote>
<p>pod has unbound immediate PersistentVolumeClaims</p>
</blockquote>
<p>In this case the PVC could not bind through the storageclass because it wasn't made the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">default</a>.</p>
<blockquote>
<p>Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">PersistentVolumeClaim documentation</a> for details.</p>
</blockquote>
<p>Command which can be used to make your new created storageclass a default one.</p>
<pre><code>kubectl patch storageclass <name_of_storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre>
<p>Then you can run <code>kubectl get storageclass</code> and it should look like this:</p>
<pre><code>NAME                             PROVISIONER                    AGE
stolon-local-storage (default)   kubernetes.io/no-provisioner   1d
</code></pre>
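<p>Note also that <code>kubernetes.io/no-provisioner</code> does not provision volumes dynamically, so the claim created by the <code>volumeClaimTemplates</code> can only bind once a matching PersistentVolume exists. A minimal sketch of such a PV, where the path <code>/mnt/stolon-data</code> and the node name <code>node-1</code> are placeholders you would adjust:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: stolon-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: stolon-local-storage
  # local volumes need node affinity so the scheduler knows where the disk lives
  local:
    path: /mnt/stolon-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
</code></pre>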
| Jakub |
<p>I want to create a directory on the Kubernetes node simulated by docker-desktop.</p>
<p>When I try to do so, I get this error:</p>
<pre><code>/ # mkdir pod-volume
mkdir: can't create directory 'pod-volume': Read-only file system
</code></pre>
<p>Any idea how I can fix this inside docker-desktop (used for Kubernetes simulation)?</p>
| EagerLearner | <blockquote>
<p>The Kubernetes server runs locally within your Docker instance, is not
configurable, and is a single-node cluster.</p>
</blockquote>
<p><a href="https://docs.docker.com/desktop/kubernetes/" rel="nofollow noreferrer">https://docs.docker.com/desktop/kubernetes/</a></p>
<blockquote>
<p>Docker Desktop offers a Kubernetes installation with a solid host
integration aiming to work without any user intervention.</p>
</blockquote>
<p>By the way, this is a great article about how it works under the hood:</p>
<p><a href="https://www.docker.com/blog/how-kubernetes-works-under-the-hood-with-docker-desktop/" rel="nofollow noreferrer">https://www.docker.com/blog/how-kubernetes-works-under-the-hood-with-docker-desktop/</a></p>
<p>However, I don't know why you are trying to do this, but it's not good practice. If you want to deal with volumes, there are a lot of articles on the internet about this; here is one Stack Overflow link that could help: <a href="https://stackoverflow.com/questions/54073794/kubernetes-persistent-volume-on-docker-desktop-windows">Kubernetes persistent volume on Docker Desktop (Windows)</a></p>
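<p>For example, if the goal is simply to get a writable directory into a pod, a hostPath volume is usually enough and avoids touching the node's root filesystem directly. A minimal sketch, where the pod name, image, and the <code>/tmp/pod-volume</code> host path are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: pod-volume
          mountPath: /data          # writable path inside the container
  volumes:
    - name: pod-volume
      hostPath:
        path: /tmp/pod-volume       # created on the node if it doesn't exist
        type: DirectoryOrCreate
</code></pre>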
<p>Hope this has helped you,
Bguess</p>
| Bguess |
<p>I am working with someone else's kubernetes application. It has a long-running Deployment whose first action upon beginning to run is to create several additional, non-workload, cluster resources (<code>ValidatingWebhookConfigurations</code> and <code>MutatingWebhookConfigurations</code> in this case).</p>
<p>I would like for the generated resources to be deleted when their parent Deployment is deleted. Please assume that I don't have control over how the generated manifests are deployed, but do have control over their contents.</p>
<p>Questions:</p>
<ul>
<li>Can my goal be achieved with plain kubernetes?
<ul>
<li>In other words, can the generated resources be modified to be deleted when <code>kubectl delete deployment parent-deployment</code> is called?</li>
<li>This is preferable to handling this via Helm since some people may deploy the application using a different deployment tool.</li>
</ul>
</li>
<li>Alternatively, is there a good way to handle this with Helm if <code>parent-deployment</code> is deployed as part of a helm chart?
<ul>
<li>Can I just add helm's labels to the generated resources to make Helm aware of the new resources?</li>
<li>I, personally, will deploy the application using a Helm chart so a Helm-based solution will solve my immediate problem but won't help people who deploy the application using a different method.</li>
</ul>
</li>
</ul>
| Vorticity | <p>You can use owner references and finalizers to create parent/child relationships between resources that allow you to clean up child resources when parent resources are deleted.</p>
<p>Current docs: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/</a></p>
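<p>As a rough sketch of what an owner reference looks like on a generated object (the names and the <code>uid</code> below are placeholders; the <code>uid</code> must match the live owner's UID, e.g. read with <code>kubectl get deployment parent-deployment -o jsonpath='{.metadata.uid}'</code>, so it is typically filled in at runtime by whatever creates the resource):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: generated-config
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: parent-deployment
      uid: d9607e19-f88f-11e6-a518-42010a800195   # placeholder; use the owner's real UID
      blockOwnerDeletion: false
      controller: false
</code></pre>
<p>One caveat: namespaced owners cannot own cluster-scoped objects, so for resources like <code>ValidatingWebhookConfiguration</code> you may need a cluster-scoped owner or a separate cleanup step (for example a pre-delete hook) instead.</p>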
| ericfossas |
<p>With the storage add-on for MicroK8s, Persistent Volume Claims are by default given storage under <code>/var/snap/microk8s/common/default-storage</code> on the host system. How can that be changed?</p>
<p>Viewing the declaration for the <code>hostpath-provisioner</code> pod shows that there is an environment setting called <code>PV_DIR</code> pointing to <code>/var/snap/microk8s/common/default-storage</code>, which seems like what I'd like to change, but how can that be done?</p>
<p>I'm not sure if I'm asking a MicroK8s-specific question or if this is something that applies to Kubernetes in general.</p>
<pre><code>$ microk8s.kubectl describe -n kube-system pod/hostpath-provisioner-7b9cb5cdb4-q5jh9
Name: hostpath-provisioner-7b9cb5cdb4-q5jh9
Namespace: kube-system
Priority: 0
Node: ...
Start Time: ...
Labels: k8s-app=hostpath-provisioner
pod-template-hash=7b9cb5cdb4
Annotations: <none>
Status: Running
IP: ...
IPs:
IP: ...
Controlled By: ReplicaSet/hostpath-provisioner-7b9cb5cdb4
Containers:
hostpath-provisioner:
Container ID: containerd://0b74a5aa06bfed0a66dbbead6306a0bc0fd7e46ec312befb3d97da32ff50968a
Image: cdkbot/hostpath-provisioner-amd64:1.0.0
Image ID: docker.io/cdkbot/hostpath-provisioner-amd64@sha256:339f78eabc68ffb1656d584e41f121cb4d2b667565428c8dde836caf5b8a0228
Port: <none>
Host Port: <none>
State: Running
Started: ...
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: ...
Finished: ...
Ready: True
Restart Count: 3
Environment:
NODE_NAME: (v1:spec.nodeName)
PV_DIR: /var/snap/microk8s/common/default-storage
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from microk8s-hostpath-token-nsxbp (ro)
/var/snap/microk8s/common/default-storage from pv-volume (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pv-volume:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/common/default-storage
HostPathType:
microk8s-hostpath-token-nsxbp:
Type: Secret (a volume populated by a Secret)
SecretName: microk8s-hostpath-token-nsxbp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
| Bjorn Thor Jonsson | <h2>HostPath</h2>
<p>If you want to use your own path for your PersistentVolume, you can set the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="noreferrer">spec.hostPath.path</a> value.</p>
<p>Example YAMLs:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: base
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: base
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: base
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p><strong>Friendly reminder</strong></p>
<blockquote>
<p>Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="noreferrer">PersistentVolumeClaim documentation</a> for details.</p>
</blockquote>
<p>You can check your storageclass by using </p>
<pre><code>kubectl get storageclass
</code></pre>
<p>If there is no <code><your-class-name> (default)</code> entry, that means you need to make your own default storage class.</p>
<p>Mark a StorageClass as default:</p>
<pre><code>kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre>
<p>After you make a default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storageClass</a>, you can use these YAMLs to create the PV and PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume3
labels:
type: local
spec:
storageClassName: ""
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data2"
</code></pre>
<pre><code>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim3
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<h2>One PV for each PVC</h2>
<p>Based on the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. <strong>A PVC to PV binding is a one-to-one mapping</strong>.</p>
</blockquote>
| Jakub |
<p>For now, I deploy my application pods using static files, and one of them is <code>app-secrets.yaml</code>, which holds all the secrets needed to deploy an application:</p>
<pre><code>---
apiVersion: v1
kind: Secret
metadata:
name: app-secrets
type: Opaque
data:
root: xxxxxx
user1: xxxxxx
user2: xxxxxx
</code></pre>
<p>but this is neither secure nor convenient (if I need another app instance, I have to create another file with a human-generated password).</p>
<p>I'm looking to generate random passwords at application creation, but I don't know if it's possible.
I've already looked at the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">secret</a> topic, and especially <code>secretGenerator</code>, but as I understand it this is not directly what I want, because it does not create a random string but a random secret name like <code>secret/app-secrets-ssdsdfmfh4k</code>, and I still have to provide the passwords.</p>
| Baptiste Mille-Mathias | <p>You may want to use <a href="https://github.com/mittwald/kubernetes-secret-generator" rel="nofollow noreferrer">kubernetes-secret-generator</a>. I've tested it and it's doing exactly what you need.</p>
<p>To accomplish this you need Helm in your cluster; then follow these instructions.</p>
<p>Clone the repository:</p>
<pre><code>$ git clone https://github.com/mittwald/kubernetes-secret-generator
</code></pre>
<p>Create the Helm deployment:</p>
<pre><code>$ helm upgrade --install secret-generator ./deploy/chart
</code></pre>
<p>Now, to use it, you just have to:</p>
<blockquote>
<p>Add annotation <code>secret-generator.v1.mittwald.de/autogenerate</code> to any
Kubernetes secret object. The value of the annotation can be a field
name (or comma separated list of field names) within the secret; the
SecretGeneratorController will pick up this annotation and add a field
[or fields] (<code>password</code> in the example below) to the secret with a
randomly generated string value. From <a href="https://github.com/mittwald/kubernetes-secret-generator#usage" rel="nofollow noreferrer">here</a>.</p>
</blockquote>
<p>Save the following as <code>mysecret.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    secret-generator.v1.mittwald.de/autogenerate: password
data:
  username: UGxlYXNlQWNjZXB0Cg==
</code></pre>
<p>And apply it:</p>
<pre><code>$ kubectl apply -f mysecret.yaml
</code></pre>
<p>After applying this secret, you can take a look at it to check whether the password was generated as expected:</p>
<pre><code>$ kubectl get secrets mysecret -o yaml
apiVersion: v1
data:
password: dnVKTDBJZ0tFS1BacmtTMnBuc3d2YWs2YlZsZ0xPTUFKdStDa3dwUQ==
username: UGxlYXNlQWNjZXB0Cg==
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"username":"UGxlYXNlQWNjZXB0Cg=="},"kind":"Secret","metadata":{"annotations":{"secret-generator.v1.mittwald.de/autogenerate":"password"},"name":"mysecret","namespace":"default"}}
secret-generator.v1.mittwald.de/autogenerate: password
secret-generator.v1.mittwald.de/autogenerate-generated-at: 2020-01-09 14:29:44.397648062
+0000 UTC m=+664.011602557
secret-generator.v1.mittwald.de/secure: "yes"
creationTimestamp: "2020-01-09T14:29:44Z"
name: mysecret
namespace: default
resourceVersion: "297425"
selfLink: /api/v1/namespaces/default/secrets/mysecret
uid: 7ae42d71-32ec-11ea-92b3-42010a800009
type: Opaque
</code></pre>
<p>As we can see, the password was generated.</p>
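<p>If you ever need the plaintext value (the secret data is base64-encoded), it can be decoded with:</p>
<pre><code>$ kubectl get secret mysecret -o jsonpath='{.data.password}' | base64 --decode
</code></pre>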
| Mark Watney |