Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>I was wondering if it's possible to refer to the image field in a Kubernetes deployment YAML file, like this:</p>
<pre class="lang-yaml prettyprint-override"><code> env:
- name: VERSION
value:
valueFrom:
containerField: spec.image
</code></pre>
<p>Please let me know. Thank you. </p>
| Arsen | <p>The <code>image</code> value in a pod definition cannot be passed as an environment variable using <code>fieldRef</code>.</p>
<p>The only <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="nofollow noreferrer">supported values are</a> <code>metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs</code>, plus <code>resource</code> fields (CPU/memory requests and limits) and the container's ephemeral-storage limit/request.</p>
<p>As a workaround, the value can be set as a label on the pod and that label then exposed as an environment variable, for example:</p>
<pre><code> env:
- name: VERSION
valueFrom:
fieldRef:
fieldPath: metadata.labels['version']
</code></pre>
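<p>For completeness, a minimal sketch of the whole pod (the pod name, image, and label value below are illustrative) showing where that label comes from:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # illustrative name
  labels:
    version: "1.2.3"                # keep in sync with the image tag you deploy
spec:
  containers:
  - name: app
    image: my-registry/my-app:1.2.3   # illustrative image
    env:
    - name: VERSION
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['version']
</code></pre>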
| kool |
<p>Starting from Kubernetes v1.18, the v2beta2 API allows scaling behavior to be configured through the Horizontal Pod Autoscaler (HPA) behavior field. <strong>I'm planning to apply HPA with custom metrics to a StatefulSet</strong>.</p>
<p>The use case I'm looking at is scaling out using a custom metric (e.g. number of user sessions on my application), but the HPA will not scale down at all. This use case is also described by K8s SIG-Autoscaling enhancements - <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md#story-4-scale-up-as-usual-do-not-scale-down" rel="nofollow noreferrer">"Configurable scale velocity for HPA >> Story 4: Scale Up As Usual, Do Not Scale Down"</a>.</p>
<pre><code>behavior:
scaleDown:
policies:
- type: pods
value: 0
</code></pre>
<p>The user sessions could stay active for minutes to hours. Starting with 1 replica of the StatefulSet, as the number of user sessions hit an upper limit (exposed using Prometheus collector and later configured using HPA custom metric option), the application pods will scale-out. The new pods will start serving new users.</p>
<p>Since this is a StatefulSet and cannot just abruptly scale down, <strong>I'm seeking help on ways to scale down when the user sessions on the new replicas go down to 0</strong>. The above link says that the scale down can be controlled by a separate process, but I'm not sure how to do this and am looking for some pointers.</p>
<p>Thanks.</p>
| smulkutk | <p>You can use the <code>periodSeconds</code> and <code>stabilizationWindowSeconds</code> values to control how much time passes between pod terminations, for example:</p>
<pre><code> behavior:
scaleDown:
stabilizationWindowSeconds: 10
policies:
- type: Pods
value: 1
periodSeconds: 20
</code></pre>
<p>This way it will scale down one pod roughly every 30 seconds (depending on the values used for <code>periodSeconds</code> and <code>stabilizationWindowSeconds</code>); the exact timing may vary over time with the <code>stabilizationWindowSeconds</code> value.</p>
<p><code>periodSeconds</code> describes how much time must pass between the termination of each pod; the maximum value is 1800 seconds (30 minutes).</p>
<p><code>stabilizationWindowSeconds</code>: when the metrics indicate that the target should be scaled down, the algorithm looks at previously calculated desired states and uses the highest value from the specified interval. For scale down the default value is 300, and the maximum value is 3600 seconds (one hour).</p>
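<p>As a side note, if the goal (as in the linked enhancement story) is to never scale down automatically and handle scale-down with a separate process instead, newer API versions also accept <code>selectPolicy: Disabled</code>; a minimal sketch:</p>
<pre><code>behavior:
  scaleDown:
    selectPolicy: Disabled
</code></pre>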
| kool |
<p>The Google Cloud SDK console shows the error below ("Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.") when I try to run kubectl commands.</p>
<p><a href="https://i.stack.imgur.com/nbIYl.png" rel="nofollow noreferrer">click here to view image</a></p>
| harish hari | <p>This error indicates that your <code>kubeconfig</code> is not correct.</p>
<p>In order to connect to your cluster you can run:</p>
<pre><code>gcloud container clusters list
</code></pre>
<p>to get the name of your cluster and then run:</p>
<pre><code>gcloud container clusters get-credentials <cluster-name>
</code></pre>
<p>to generate a <code>kubeconfig</code> entry for the chosen cluster.</p>
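<p>As a quick sanity check afterwards, the context should point at the GKE cluster and basic commands should work:</p>
<pre><code>kubectl config current-context
kubectl get nodes
</code></pre>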
| kool |
<p>I have a Docker image in my repo, akshay123007/lists-pods. I have created a Helm chart whose <code>values.yaml</code> looks like this (starting with the <code>image:</code> section):</p>
<pre><code> repository: akshay123007/lists-pods
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets:
- name: regcred
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>When I install the chart, I get this error: Failed to pull image "akshay123007/lists-pods:1.16.0": rpc error: code = Unknown desc = Error response from daemon: manifest for akshay123007/lists-pods:1.16.0 not found: manifest unknown: manifest unknown.</p>
<p>I have tried changing the image to docker.io/akshay123007/lists-pods:latest, but then it says ErrImagePull. I don't know what the issue is; any help would be appreciated.</p>
| Aksahy Awate | <p>You forgot to specify the registry the image should be pulled from. It should look like this:</p>
<pre><code>image:
registry: YOUR_DOCKER_REGISTRY (EX: docker.io)
repository: akshay123007/lists-pods
</code></pre>
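<p>For reference, charts scaffolded with <code>helm create</code> typically assemble the image reference in the deployment template like the sketch below (assuming your chart follows that scaffold). This is also why the <code>tag</code> value, which defaults to the chart's <code>appVersion</code> (1.16.0 here), must exist in the registry:</p>
<pre><code># templates/deployment.yaml (typical helm create scaffold, shown for illustration)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
</code></pre>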
| Амангельды Омаров |
<p>I'm following the <a href="https://www.youtube.com/watch?v=DgVjEo3OGBI" rel="nofollow noreferrer">tutorial</a> from Les Jackson about Kubernetes, but I'm stuck around 04:40:00. I always get a 404 returned from my Ingress Nginx Controller. I followed everything he does, but I can't get it to work.</p>
<p>I also read that this could have something to do with IIS, so I stopped the default website which also runs on port 80.</p>
<p>The apps running in the containers are .NET Core.</p>
<p><strong>Commands-deply & cluster ip</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: commands-depl
spec:
replicas: 1
selector:
matchLabels:
app: commandservice
template:
metadata:
labels:
app: commandservice
spec:
containers:
- name: commandservice
image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
name: commands-clusterip-srv
spec:
type: ClusterIP
selector:
app: commandservice
ports:
- name: commandservice
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p><strong>Platforms-depl & cluster ip</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: platforms-depl
spec:
replicas: 1
selector:
matchLabels:
app: platformservice
template:
metadata:
labels:
app: platformservice
spec:
containers:
- name: platformservice
image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
name: platforms-clusterip-srv
spec:
type: ClusterIP
selector:
app: platformservice
ports:
- name: platformservice
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p><strong>Ingress-srv</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: acme.com
http:
paths:
- path: /api/platforms
pathType: Prefix
backend:
service:
name: platforms-clusterip-srv
port:
number: 80
- path: /api/c/platforms
pathType: Prefix
backend:
service:
name: commands-clusterip-srv
port:
number: 80
</code></pre>
<p>I also added this to my hosts file:
<code>127.0.0.1 acme.com</code></p>
<p>And I applied this from the nginx documentation:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p><strong>kubectl get ingress</strong>
<a href="https://i.stack.imgur.com/YmKBU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmKBU.png" alt="kubectl get ingress" /></a></p>
<p><strong>kubectl describe ing ingress-srv</strong>
<br />
<a href="https://i.stack.imgur.com/QuV8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QuV8Q.png" alt="kubectl describe ing ingress-srv" /></a></p>
<p><strong>Dockerfile CommandService</strong></p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
</code></pre>
<p><strong>kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx</strong>
<a href="https://i.stack.imgur.com/Ir1HB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ir1HB.png" alt="LOGS INGRESS POD" /></a></p>
<p>Am I missing something?</p>
| Maarten Vissers | <p>I found my solution. There was a process with PID 4 listening on 0.0.0.0:80. I could stop it using <code>NET stop HTTP</code> in an admin cmd.</p>
<p>I noticed that running <code>kubectl get services -n=ingress-nginx</code> showed the ingress-nginx-controller service, which is fine, but with no EXTERNAL-IP value. Running <code>kubectl get ingress</code> also didn't show an ADDRESS. Now they both show "localhost" as the value for EXTERNAL-IP and ADDRESS.</p>
<p>Reference: <a href="https://stackoverflow.com/a/26653919/12290439">Port 80 is being used by SYSTEM (PID 4), what is that?</a></p>
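<p>For anyone hitting the same thing, one way to confirm which process owns port 80 on Windows before stopping it (run in an admin cmd):</p>
<pre><code>netstat -ano | findstr :80
tasklist /FI "PID eq 4"
</code></pre>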
| Maarten Vissers |
<p>Here are my first ServiceAccount, ClusterRole, and ClusterRoleBinding:</p>
<pre><code>---
# Create namespace
apiVersion: v1
kind: Namespace
metadata:
name: devops-tools
---
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: devops-tools
name: bino
---
# Set Secrets for SA
# k8s >= 1.24 needs it to be manually created
# https://stackoverflow.com/a/72258300
apiVersion: v1
kind: Secret
metadata:
name: bino-token
namespace: devops-tools
annotations:
kubernetes.io/service-account.name: bino
type: kubernetes.io/service-account-token
---
# Create Cluster Role
# Beware !!! This is Cluster wide FULL RIGHTS
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: devops-tools-role
namespace: devops-tools
rules:
- apiGroups:
- ""
- apps
- autoscaling
- batch
- extensions
- policy
- networking.k8s.io
- rbac.authorization.k8s.io
resources:
- pods
- componentstatuses
- configmaps
- daemonsets
- deployments
- events
- endpoints
- horizontalpodautoscalers
- ingress
- jobs
- limitranges
- namespaces
- nodes
- pods
- persistentvolumes
- persistentvolumeclaims
- resourcequotas
- replicasets
- replicationcontrollers
- serviceaccounts
- services
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the SA to Cluster Role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: devops-tools-role-binding
subjects:
- namespace: devops-tools
kind: ServiceAccount
name: bino
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: devops-tools-role
---
</code></pre>
<p>It works when I use it to create a Namespace, Deployment, and Service.
But it fails (complaining that it has no rights) when I try to create kind: Ingress.</p>
<p>Then I try to add</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: devops-tools-role-binding-admin
subjects:
- namespace: devops-tools
kind: ServiceAccount
name: bino
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
</code></pre>
<p>and now 'bino' can do all things.</p>
<p>My question is: Is there any docs on what 'apiGroups' and 'resources' need to be assigned so one service account can do some-things (not all-things)?</p>
<p>Sincerely</p>
<p>-bino-</p>
| Bino Oetomo | <p>You can run this command to determine the <code>apiGroup</code> of a resource:</p>
<pre><code>kubectl api-resources
</code></pre>
<p>You will see something like:</p>
<pre><code>NAME SHORTNAMES APIVERSION NAMESPACED KIND
ingresses ing networking.k8s.io/v1 true Ingress
</code></pre>
<p>So you would need to add this to the <code>rules</code> of your <code>ClusterRole</code>:</p>
<pre><code>- apiGroups:
  - "networking.k8s.io"
  resources:
  - "ingresses"
  verbs:
  - "get"
</code></pre>
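<p>Once the rule is added, you can verify what the service account is allowed to do with <code>kubectl auth can-i</code>, for example:</p>
<pre><code>kubectl auth can-i create ingresses \
  --as=system:serviceaccount:devops-tools:bino -n devops-tools
</code></pre>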
| ericfossas |
<p>When running <code>linkerd upgrade --from-manifests</code> the following error occurs:</p>
<pre><code>linkerd upgrade --from-manifests install.yaml > install-new.yaml
× Failed to parse Kubernetes objects from manifest install.yaml: no kind "APIService" is registered for version "apiregistration.k8s.io/v1" in scheme "pkg/runtime/scheme.go:101"
</code></pre>
| cpretzer | <p>This is a <a href="https://github.com/linkerd/linkerd2/issues/3559" rel="nofollow noreferrer">known issue</a> with a workaround.</p>
<p>The workaround is to export the <code>secret/linkerd-identity-issuer</code> and <code>configmap/linkerd-config</code> resources to a separate manifest file, then use the generated file as an argument to <code>linkerd upgrade --from-manifests</code>:</p>
<pre><code>kubectl -n linkerd get \
secret/linkerd-identity-issuer \
configmap/linkerd-config \
-oyaml > linkerd-manifests.yaml
</code></pre>
<p>then:</p>
<p><code>linkerd upgrade --from-manifests linkerd-manifests.yaml</code></p>
| cpretzer |
<p>I want to run multiple jobs in a kubernetes cluster, but the total resource requirements exceed the size of the cluster, and the requirements of one job span multiple nodes. How do I avoid a livelock where all jobs have some resources, but none have enough to complete?</p>
<p>For example, suppose I have 4 nodes, each with 1 GB of memory available. I want to submit 2 jobs, each of which requires 3 GB of memory to complete, split across 3 pods that require 1 GB each. The correct solution here would be to run the jobs sequentially, how do I ensure this happens?</p>
<p>I want to avoid the situation where both jobs schedule two pods each, using up the entire cluster, while the remaining pod of each job is stuck in the <code>Pending</code> state, as no more resources are available. Because the jobs cannot complete using only 2 GB of memory, the system is now incapable of making progress.</p>
<h1>Related Features</h1>
<p>Some features I've looked at that don't seem to be suitable:</p>
<ol>
<li><a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">Pod Disruption Budget</a> - this is for ensuring that the number of pods never goes below X, but doesn't have any effect when scheduling the pods initially</li>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">Pod Affinity</a> - this can ensure I schedule pods in a region where a matching pod is running, but I can't require two or more pods. I'm also not sure if affinity would be satisfied if no pods are running but they are scheduled.</li>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="nofollow noreferrer">Pod Topology Spread Constraints</a> - This is to ensure that the numbers of pods scheduled in multiple regions is always within N of each other, but again I can't specify a required minimum.</li>
</ol>
<h1>Possible Solution</h1>
<p>It looks like a custom scheduler is needed. <a href="https://github.com/kubernetes-sigs/kube-batch/blob/master/doc/usage/tutorial.md" rel="nofollow noreferrer">Kube Batch</a> looks like a possible solution for this, supporting a <code>minMember</code> attribute. I will test this and submit it as a self-answer, unless anyone can chime in with more detail.</p>
| kai | <p>The easy solution is to assign each job a PriorityClass so that one job can preempt the other if needed:</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/</a></p>
<p>However, this means one job will always have priority over the other. If you need them to run in the order they were received, you need a job queueing system. Here is one you can try:</p>
<p><a href="https://github.com/kubernetes-sigs/kueue" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kueue</a></p>
<p>Using kueue, you would create a <code>Workload</code> for each job as they come in and add it to the same <code>LocalQueue</code>.</p>
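<p>As a rough sketch of that flow (queue names are illustrative, and the API group/version shown matches recent kueue releases, so check the docs for your version), jobs are pointed at a <code>LocalQueue</code> via a label and created suspended; kueue admits and unsuspends them in order as quota in the backing <code>ClusterQueue</code> allows:</p>
<pre><code>apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: team-queue               # illustrative name
  namespace: default
spec:
  clusterQueue: cluster-queue    # a ClusterQueue with resource quotas must exist
---
apiVersion: batch/v1
kind: Job
metadata:
  name: job-a
  labels:
    kueue.x-k8s.io/queue-name: team-queue   # enqueue instead of running immediately
spec:
  suspend: true                  # kueue unsuspends the job when it is admitted
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 60"]
        resources:
          requests:
            memory: 1Gi
</code></pre>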
| ericfossas |
<p>Trying to update the resources of my StatefulSet using the <code>kubectl patch</code> command:</p>
<pre><code>kubectl patch statefulset test -n test --patch '{"spec": {"template": {"spec": {"containers": [{"resources": [{"limits": [{"cpu": "4000m","memory": "16Gi"}]},{"requests": [{"cpu": "3000m","memory": "13Gi"}]}]}]}}}}'
</code></pre>
<p>But getting the below error:</p>
<blockquote>
<p><strong>Error from server: map: map[resources:[map[limits:[map[cpu:4000m memory:16Gi]]] map[requests:[map[cpu:3000m memory:13Gi]]]]] does not contain declared merge key: name</strong></p>
</blockquote>
| Kartik | <p>It needs to know which container you want to patch in the statefulset. You indicate this by including the name of the container.</p>
<p>Also, the json structure of your resources field is incorrect. See the example below for a complete working example:</p>
<p>(replace <strong>???</strong> with the name of the container you want patched)</p>
<pre><code>kubectl patch statefulset test -n test --patch '{"spec": {"template": {"spec": {"containers": [{"name": "???", "resources": {"limits": {"cpu": "4000m","memory": "16Gi"},"requests": {"cpu": "3000m","memory": "13Gi"}}}]}}}}'
</code></pre>
| ericfossas |
<p>How do I create an ALB ingress such that only "/" routes to a service that serves static files (service-1), while all other paths route to service-2? It will host something like this:</p>
<pre><code>spec:
rules:
- host: a.abc.com
http:
paths:
- path: /
backend:
serviceName: service-1
servicePort: 80
- path: /v1
backend:
serviceName: service-2/v1
servicePort: 9090
</code></pre>
<p>I know I can't have <code>service-2/v1</code> as the <code>serviceName</code>, but I want to map <code>/v1</code> to <code>service-2:9090/v1</code>.</p>
<p>I am hosting some static files in <code>service-1</code> (nginx) which I want to serve on <code>/</code>, while <code>/v1</code> and <code>/admin</code> are on another service (i.e. <code>service-2/v1</code> and <code>service-2/admin</code>).</p>
| Vikas Rathore | <p>The name of a Service (or any other resource) may not contain <code>/</code> or <code>%</code>, as explained <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#path-segment-names" rel="nofollow noreferrer">here</a>. If you try to deploy it, it will show the following error:</p>
<p><code>metadata.name: Invalid value: "demo/v1": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')</code></p>
<p>So you have to change the names of the services. Besides that, you can specify multiple paths in a single <code>Ingress</code> definition:</p>
<pre><code>spec:
rules:
- host: a.abc.com
http:
paths:
- path: /
backend:
serviceName: service-1
servicePort: 80
- path: /v1
backend:
serviceName: service-v1
servicePort: 9090
- path: /admin #additional path
backend:
serviceName: service-admin #service name pointing to /admin
servicePort: 9091 #port it will be accessible on
</code></pre>
| kool |
<p>My Kubernetes version is 1.22.</p>
<p>I'm taking a Kubernetes CKAD course.</p>
<p>In the resources and limits section it says:</p>
<blockquote>
<p>Let's now look at a container running on a node in the Docker world.</p>
<p>A Docker container has no limit to the resources it can consume on a node.</p>
<p>Say a container starts with one CPU on a node.</p>
<p>It can go up and consume as much resource as it requires suffocating the native processes on the node or other containers of resources.</p>
<p>However, you can set a limit for the resource usage on these parts by default.</p>
<p>Kubernetes sets a limit of one CPU to containers, so if you do not specify explicitly, a container will be limited to consume only one CPU from the node.</p>
<p>The same goes with memory.</p>
<p>By default, Kubernetes sets a limit of 512 Mi on containers.</p>
<p>If you don't like the default limits, you can change them by adding a limit section under the resources section in your file.</p>
<p>Specify new limits for the memory and CPU like this when the pod is created.</p>
</blockquote>
<p>I created an nginx pod, in which I specified only requests:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx-test
name: nginx-test
spec:
containers:
- image: nginx
name: nginx-test
resources:
requests:
memory: "64Mi"
cpu: "250m"
dnsPolicy: ClusterFirst
restartPolicy: Always
</code></pre>
<p>After creating a pod, I saw that the CPU and RAM are unlimited:</p>
<pre><code> Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
only-test nginx-test 250m (0%) 0 (0%) 64Mi (0%) 0 (0%) 8m33s
</code></pre>
<p>Maybe this is true for Docker? I am using Containerd.</p>
| Maksim | <p>I figured it out. To get default resource limits applied, you need to create a LimitRange in the specific namespace.</p>
<p>Examples:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
</code></pre>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
spec:
limits:
- default:
cpu: 1
defaultRequest:
cpu: 0.5
type: Container
</code></pre>
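<p>You can verify the defaults are being applied with something like (namespace name is illustrative):</p>
<pre><code>kubectl describe limitrange -n only-test
kubectl describe pod nginx-test -n only-test | grep -A 4 Limits
</code></pre>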
| Maksim |
<p>Is the # in front of success and failure in a kubectl describe (see picture) meant to represent something?</p>
<p>All of the elements in each of those probes represent a config element for the probe, but success and failure are prefixed with a #. I initially thought it might mark a default value (one not specified by the user), but noticed that this is not the case, as the picture shows different failure values for the various probes.</p>
<p>Am I reading too much into the # or is it intentionally placed there for a reason?</p>
<p>It does not make a difference to the functionality or affect us in any form or shape. Just Curious as it catches the eye!</p>
<p>Noticed a <a href="https://stackoverflow.com/questions/69419096/kubectl-describe-pod-does-not-report-proper-url-of-liveness-probe/69419270#comment128849330_69419270">related question</a> but it doesn't focus on the "#".</p>
<p><a href="https://i.stack.imgur.com/mBzwA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBzwA.jpg" alt="enter image description here" /></a></p>
| Manglu | <p>It appears to just be embedded in the print statement:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/b1e130fe83156783153538b6d79821c2fdaa85bb/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L1956" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/b1e130fe83156783153538b6d79821c2fdaa85bb/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L1956</a></p>
<p>Here is the original PR:</p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/21341" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/21341</a></p>
<p>Looked into messaging the original author, but he disables DMs on his social media.</p>
| ericfossas |
<p>I am trying to use <code>kubectl patch</code> to add an annotation to the default service account in a namespace. This is because the JavaScript client does not seem to have a <code>kubectl annotate</code> equivalent. So now I wonder:
Why does the following patch command not work?</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch sa default -n somenamespace -v8 --type=json -p='[{"op": "add", "path": "annotations/http://eks.amazonaws.com~1role-arn", "value": "ueah"}]'
</code></pre>
<p>While the following statement using annotate does work?</p>
<pre class="lang-bash prettyprint-override"><code>
kubectl annotate --overwrite -v8 sa default -n t-werwww2 eks.amazonaws.com/role-arn="ueah"
</code></pre>
<p>What would be the correct <code>kubectl patch</code> command?</p>
| Jeroen | <p>@hiroyukik seems to have partially answered your question by pointing out that you have the path wrong and it should be "/metadata/annotations".</p>
<p>You used the JSON Merge Patch strategy in your comment. I don't think you need to find a JSON Patch alternative as you suggested, as the JavaScript Kubernetes client supports JSON Merge Patch.</p>
<p>My understanding is that you just add a header in the options to set the strategy you want, like so:</p>
<pre class="lang-js prettyprint-override"><code>const options = { "headers": { "Content-type": PatchUtils.PATCH_FORMAT_JSON_MERGE_PATCH } }
</code></pre>
<p>See the docs for how to add this to the function call:</p>
<p><a href="https://kubernetes-client.github.io/javascript/classes/corev1api.corev1api-1.html#patchnamespacedserviceaccount" rel="nofollow noreferrer">https://kubernetes-client.github.io/javascript/classes/corev1api.corev1api-1.html#patchnamespacedserviceaccount</a></p>
<p>However, if you do really need to use the JSON Patch strategy, you'll need to check whether the service account has annotations first as that strategy has no way of creating and adding a field in a single operation. See this Github comment for an explanation:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/90623#issuecomment-621584160" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/90623#issuecomment-621584160</a></p>
<p>So a complete shell script example using the JSON Patch strategy would look like this:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get sa default -n somenamespace -o json \
| jq -e '.metadata | has("annotations")' && \
kubectl patch sa default -n somenamespace --type=json \
-p='[{"op": "add", "path": "/metadata/annotations/eks.amazonaws.com~1role-arn", "value": "ueah"}]' || \
kubectl patch sa default -n somenamespace --type=json \
-p='[{"op":"add","path":"/metadata/annotations","value":{}},{"op":"add","path":"/metadata/annotations/eks.amazonaws.com~1role-arn","value": "ueah"}]'
</code></pre>
| ericfossas |
<p>We have an Nginx ingress controller that we use as a Load Balancer. We have a control application that we use to create accounts for clients and when our control application creates the deployment it also upserts a service and an ingress. The ingress is used to route the traffic to a specific client backend service.</p>
<p><a href="https://i.stack.imgur.com/QAWpF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QAWpF.png" alt="enter image description here" /></a></p>
<p>So any requests to <code>https://our.website.com/client/1</code> are routed to the <code>client-1</code> service and any requests to <code>https://our.website.com/client/2</code> are routed to the <code>client-2</code> service.</p>
<p>Now our product has matured and we have the need to be able to deploy customer backends to different clusters. We have looked at creating a Multi-cluster implementation like GCP suggests in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-services" rel="nofollow noreferrer">docs</a>. This is almost working for us but not exactly. The services can communicate with each other(we have other services also running) but NGINX is not able to see the Service Import. My question is does NGINX support mapping to Service Imports rather than just Services? If not is there a workaround to that or perhaps a different load balancer that would support that?</p>
| Ilian Tetradev | <p>So I wasn't able to find a way to do that. We ended up implementing a mixed architecture where most of our apps, such as the control ones and the React frontend, are handled by our current NGINX controller. We have also deployed a <code>gke-l7-gxlb-mc</code> Multi Cluster Gateway alongside our NGINX Load Balancer with a secondary URL <code>our-gw.website.com</code>. This URL is used by our frontend only and is not visible to the clients.</p>
| Ilian Tetradev |
<pre><code>Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.45.1.el7.x86_64
</code></pre>
<p>I am using an external load balancer with HAProxy and Keepalived. My virtual IP is 172.24.16.6. If I create a service with NodePort, then I can connect to the pod from outside. This shows that the IP from the load balancer is reachable from my cluster.</p>
<p>I installed the NGINX Ingress Controller following these instructions: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/</a></p>
<p>I also applied <code>$ kubectl apply -f service/loadbalancer.yaml</code> with the following parameters:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
externalTrafficPolicy: Local
type: LoadBalancer
externalIPs:
- 172.24.16.6
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: nginx-ingress
</code></pre>
<p>As a result, it all looks like this:</p>
<pre><code>]$ kubectl get all -o wide -n nginx-ingress
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-ingress-768698d9df-c2wlx 1/1 Running 0 27m 192.168.105.197 srv-dev-k8s-worker-05 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nginx-ingress LoadBalancer 10.104.239.149 172.24.16.6 80:30053/TCP,443:30021/TCP 22m app=nginx-ingress
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-ingress 1/1 1 1 28m nginx-ingress nginx/nginx-ingress:2.0.2 app=nginx-ingress
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-ingress-6454cfbc49 0 0 0 28m nginx-ingress nginx/nginx-ingress:2.0.2 app=nginx-ingress,pod-template-hash=6454cfbc49
replicaset.apps/nginx-ingress-768698d9df 1 1 1 27m nginx-ingress nginx/nginx-ingress:2.0.2 app=nginx-ingress,pod-template-hash=768698d9df
</code></pre>
<p>nginx-ingress pod:</p>
<pre><code>$ kubectl -n nginx-ingress get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-768698d9df-c2wlx 1/1 Running 0 72m 192.168.105.197 srv-dev-k8s-worker-05 <none> <none>
</code></pre>
<p>The <code>netstat</code> shows that ports 80 and 443 are open and bound to 172.24.16.6:</p>
<pre><code>$ netstat -tulpn
(No info could be read for "-p": geteuid()=1002 but you should be root.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 172.24.16.6:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:179 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 172.24.16.6:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:43707 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:32000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:30021 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:30053 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9098 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9099 0.0.0.0:* LISTEN -
tcp 0 0 172.24.25.141:2379 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6444 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:6444 0.0.0.0:* LISTEN -
tcp 0 0 172.24.25.141:2380 0.0.0.0:* LISTEN -
tcp6 0 0 :::10256 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::31231 :::* LISTEN -
tcp6 0 0 :::5473 :::* LISTEN -
tcp6 0 0 :::10250 :::* LISTEN -
tcp6 0 0 :::6443 :::* LISTEN -
udp 0 0 127.0.0.1:323 0.0.0.0:* -
udp 0 0 0.0.0.0:4789 0.0.0.0:* -
udp 0 0 0.0.0.0:58191 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp6 0 0 ::1:323 :::* -
</code></pre>
<p>But <code>iptables</code> doesn't open any ports: <a href="https://pastebin.com/BvV32sjD" rel="nofollow noreferrer">https://pastebin.com/BvV32sjD</a></p>
<p>Please help me get access from outside.</p>
| Maksim | <p>Yes, I added an Ingress to the namespace, just for testing.</p>
<pre><code>$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-559d658b74-6p4tb 1/1 Running 0 179m 192.168.240.70 srv-dev-k8s-worker-08 <none> <none>
pod/nginx-deployment-559d658b74-r96s9 1/1 Running 0 179m 192.168.240.71 srv-dev-k8s-worker-08 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nginx-deployment ClusterIP 10.108.39.147 <none> 80/TCP 178m app=nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-deployment 2/2 2 2 3h1m nginx nginx:1.16.1 app=nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-deployment-559d658b74 2 2 2 179m nginx nginx:1.16.1 app=nginx,pod-template-hash=559d658b74
</code></pre>
<p>Then created ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-for-nginx-deployment
annotations:
# kubernetes.io/ingress.class: "nginx"
# nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: k8s.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-deployment
port:
number: 80
</code></pre>
<pre><code>$ kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-for-nginx-deployment nginx k8s.domain.com 80 7s
</code></pre>
| Maksim |
<p>I have a Helm chart for Cassandra, which is running fine; I am able to connect to it and run <code>cqlsh</code> commands.<br />
I want to add a Helm hook to the chart. I've figured out how to do it; however, I cannot execute cqlsh in the container. This is the Kubernetes Job I want to execute in the <code>post-install</code> phase.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: my-job
spec:
template:
metadata:
name: hook-job
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
spec:
containers:
- name: cqlsh-cmd
image: <cassandra-image>
command: ["bin/sh", "-c", "cqlsh"]
restartPolicy: OnFailure
</code></pre>
<p>However, the cqlsh command is not found.</p>
<p>In general it seems odd that I have to re-use the same container I have defined in the Helm chart. Am I doing something wrong?</p>
| Forin | <p>Your pod/container may not be up at that time. Use it in the <code>postStart</code> lifecycle hook instead:</p>
<pre><code> spec:
containers:
- name: cqlsh-cmd
image: <cassandra-image>
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "
set -x\n
while true;\n
do\n
echo 'looking cassandra...,'\n
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/localhost/9042'\n
exitCode=$?\n
if [ $exitCode = 0 ]; then\n
cqlsh /** your command **/ \n
break;\n
fi\n
sleep 1s\n
done\n
"]
</code></pre>
| Madhu Potana |
<p>I have a <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics-server</a> and a horizontal pod autoscaler using this server, running on my cluster.<br />
This works perfectly fine until I inject linkerd-proxies into the deployments of the namespace where my application is running. Running <code>kubectl top pod</code> in that namespace results in an <code>error: Metrics not available for pod <name></code> error. However, nothing appears in the metrics-server pod's logs.<br />
The metrics-server clearly works fine in other namespaces, because top works in every namespace but the meshed one.</p>
<p>At first i thought it could be because the proxies' resource requests/limits weren't set, but after running the injection with them (<code>kubectl get -n <namespace> deploy -o yaml | linkerd inject - --proxy-cpu-request "10m" --proxy-cpu-limit "1" --proxy-memory-request "64Mi" --proxy-memory-limit "256Mi" | kubectl apply -f -</code>), the issue stays the same.</p>
<p>Is this a known problem, are there any possible solutions?</p>
<p>PS: I have a <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> running in a different namespace, and this seems to be able to scrape the pod metrics from the meshed pods just fine <img src="https://i.stack.imgur.com/yVt6v.png" alt="grafana dashboard image showing prometheus can collect the data" /></p>
| Raven | <p>The problem was apparently a bug in the cAdvisor stats provider with the CRI runtime. The linkerd-init containers keep producing metrics after they've terminated, which shouldn't happen. The metrics-server ignores stats from pods that contain containers that report zero values (to avoid reporting invalid metrics, like when a container is restarting, metrics aren't collected yet,...). You can follow up on the <a href="https://github.com/kubernetes/kubernetes/issues/103368" rel="nofollow noreferrer">issue</a> here. Solutions seem to be changing to another runtime or using the PodAndContainerStatsFromCRI flag, which will let the internal CRI stats provider be responsible instead of the cAdvisor one.</p>
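<p>For reference, a rough sketch of enabling that feature gate through the kubelet configuration on each node (the flag only exists on newer Kubernetes versions, the config file path shown is the kubeadm default, and a kubelet restart is required):</p>
<pre><code># /var/lib/kubelet/config.yaml (KubeletConfiguration)
featureGates:
  PodAndContainerStatsFromCRI: true
</code></pre>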
| Raven |
<p>I want to change the timezone of our entire on-prem Kubernetes cluster from the default UTC to America/Los_Angeles.</p>
<p>I am aware of changing it for a single deployment by using volumes [ref.: <a href="https://evalle.xyz/posts/kubernetes-tz/" rel="nofollow noreferrer">https://evalle.xyz/posts/kubernetes-tz/</a>]. This is a tedious job to do, as there are many pods in my cluster.</p>
<p>I am looking for a better option to do it for the entire cluster in one go. Any help is much appreciated.</p>
| ron shine | <p><strong>TL;DR</strong>: You will not be able to globally set the TZ in a cluster.</p>
<p>Based on the answer of “<em>KarlKFI</em>” in (<a href="https://stackoverflow.com/questions/48949090/what-time-is-it-in-a-kubernetes-pod">What time is it in a Kubernetes pod?</a>)</p>
<blockquote>
<p>The clock in a container is the same as the host machine because it’s controlled by the kernel.</p>
<p>…</p>
</blockquote>
<p>As you already mentioned, you can change the TZ of the POD by binding the <code>zoneinfo</code> of the host OS.</p>
<p>So the <strong>TIME ZONE</strong> (TZ) is something that is locally controlled at POD level and can’t be changed globally because this should be done within the POD definition.</p>
<p>If you want to change the TZ without binding the <code>zoneinfo</code> of the host OS, based on (<a href="https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes">https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes</a>) you may change it by setting the TZ environment variable. If you change your image and set the TZ environment variable in there, when the POD is created it will inherit from the image so the POD will be created with the TZ environment variable set.</p>
<p>So your only options are:</p>
<p><strong>1.-</strong> Bind the <code>zoneinfo</code> of the host OS in each POD.</p>
<p><strong>2.-</strong> Change your TimeZone on each node of your cluster.</p>
<p><strong>3.</strong>- Set the TZ environment variable on your <em><strong>IMAGE</strong></em> so the POD will inherit the value when is created.</p>
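<p>For example, option 3 could look like this minimal sketch (pod and image names are illustrative); the same <code>env</code> entry can also be set directly in a pod or deployment spec, and whether it takes effect depends on the application/runtime honoring <code>TZ</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tz-demo
spec:
  containers:
  - name: app
    image: my-app:latest         # illustrative image
    env:
    - name: TZ
      value: America/Los_Angeles
</code></pre>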
| Armando Cuevas |
<p>I have the following manifest for deploying Istio egress gateway routing:</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: REDACTED-egress-se
spec:
hosts:
- sahfpxa.REDACTED
ports:
- number: 8080
name: http-port
protocol: HTTP
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: sahfpxa-REDACTED-egress-gw
spec:
selector:
istio: egressgateway
servers:
- port:
number: 8080
name: http
protocol: HTTP
hosts:
- sahfpxa.REDACTED
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-sahfpxa-REDACTED
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: sahfpxa
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-sahfpxa-REDACTED-through-egress-gateway
spec:
hosts:
- sahfpxa.REDACTED
gateways:
- REDACTED/REDACTED-egress-gw
- mesh
http:
- match:
- gateways:
- mesh
port: 8080
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: sahfpxa
port:
number: 80
weight: 100
- match:
- gateways:
- REDACTED/sahfpxa-REDACTED-egress-gw
port: 8080
route:
- destination:
host: sahfpxa.REDACTED
port:
number: 8080
weight: 100
</code></pre>
<p>But I get a connection refused from the istio-proxy sidecar container in the affected namespace and an HTTP 503 error from the workload container in that namespace.</p>
<p>Any ideas what could be wrong with the configuration or how I can debug it?</p>
<p>Thanks in advance.</p>
<p>Best regards,
rforberger</p>
| Ronny Forberger | <p>There were a few errors in Your deployment manifest, like the <code>DestinationRule</code> not pointing at Your <code>ServiceEntry</code>.</p>
<p>You can try to match Yours with these manifest files:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: etth
spec:
hosts:
- etth.pl
ports:
- number: 8080
name: http-port
protocol: HTTP
resolution: DNS
</code></pre>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- etth.pl
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-cnn
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: etth
</code></pre>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-cnn-through-egress-gateway
spec:
hosts:
- etth.pl
gateways:
- istio-egressgateway
- mesh
http:
- match:
- gateways:
- mesh
port: 80
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: etth
port:
number: 80
weight: 100
- match:
- gateways:
- istio-egressgateway
port: 80
route:
- destination:
host: etth.pl
port:
number: 8080
weight: 100
</code></pre>
<p>You can check if routes are present with:
<code>istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json</code></p>
<pre><code>$ istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json
[
{
"name": "http.80",
"virtualHosts": [
{
"name": "etth.pl:80",
"domains": [
"etth.pl",
"etth.pl:80"
],
"routes": [
{
"match": {
"prefix": "/",
"caseSensitive": true
},
"route": {
"cluster": "outbound|8080||etth.pl",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/direct-cnn-through-egress-gateway"
}
}
},
"decorator": {
"operation": "etth.pl:8080/*"
},
"typedPerFilterConfig": {
"mixer": {
"@type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
"disableCheckCalls": true,
"mixerAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "etth.pl"
},
"destination.service.name": {
"stringValue": "etth.pl"
},
"destination.service.namespace": {
"stringValue": "default"
}
}
},
"forwardAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "etth.pl"
},
"destination.service.name": {
"stringValue": "etth.pl"
},
"destination.service.namespace": {
"stringValue": "default"
}
}
}
}
}
}
]
}
],
"validateClusters": false
},
{
"virtualHosts": [
{
"name": "backend",
"domains": [
"*"
],
"routes": [
{
"match": {
"prefix": "/stats/prometheus"
},
"route": {
"cluster": "prometheus_stats"
}
}
]
}
]
}
]
</code></pre>
| Piotr Malec |
<p>I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the Nginx ingress controller as a DaemonSet; all the ingresses are working fine, as is Linkerd. Recently, I added traffic split functionality to run my blue/green setup, and I can reach these services through separate ingress resources. I have created an apex-web service as described <a href="https://github.com/BuoyantIO/emojivoto/blob/linux-training/training/traffic-split/web-apex.yml" rel="nofollow noreferrer">here</a>. If I reach this service internally it works perfectly. I have created another ingress resource, but I'm not able to test the blue/green functionality outside of my cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my Nginx pods, but Nginx returns a "<code>503 Service Temporarily Unavailable</code>" message.</p>
<p>I went through the documentation and created the ingress following <a href="https://linkerd.io/2/tasks/using-ingress/#nginx" rel="nofollow noreferrer">this</a>; I can confirm that the annotations below were added to the ingress resources.</p>
<pre><code>annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
</code></pre>
<p>But I still have no luck from outside the cluster.</p>
<p>I'm testing with the given emojivoto app and all the traffic split and the apex-web services are in <a href="https://github.com/BuoyantIO/emojivoto/tree/linux-training/training/traffic-split" rel="nofollow noreferrer">this</a> training repository.</p>
<p>I'm not quite sure what went wrong or how to fix this from outside the cluster. I'd really appreciate it if anyone could assist me in fixing this Linkerd blue/green issue.</p>
| Aruna Fernando | <p>tl;dr: The nginx ingress <a href="https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/controller.go#L427" rel="nofollow noreferrer">requires</a> a <code>Service</code> resource to have an <code>Endpoint</code> resource in order to be considered a valid destination for traffic. The architecture in the repo creates three <code>Service</code> resources, one of which acts as an <code>apex</code> and has no <code>Endpoint</code> resources because it has no selectors, so the nginx ingress won't send traffic to it, and the <code>leaf</code> services will not get traffic as a result.</p>
<p>The example in the repo follows the SMI Spec by defining a single <strong>apex</strong> service and two <strong>leaf</strong> services. The <code>web-apex</code> service does not have any endpoints, so nginx will not send traffic to it.</p>
<p>According to the <a href="https://github.com/servicemeshinterface/smi-spec/blob/master/apis/traffic-split/v1alpha1/traffic-split.md#tradeoffs" rel="nofollow noreferrer">SMI Spec</a> services can be <em>self-referential</em>, which means that a service can be both an <strong>apex</strong> and a <strong>leaf</strong> service, so to use the nginx ingress with this example, you can modify the <code>TrafficSplit</code> <a href="https://github.com/BuoyantIO/emojivoto/blob/linux-training/training/traffic-split/web-svc-ts.yml" rel="nofollow noreferrer">definition</a> to change the <code>spec.service</code> value from <code>web-apex</code> to <code>web-svc</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
name: web-svc-ts
namespace: emojivoto
spec:
# The root service that clients use to connect to the destination application.
service: web-svc
# Services inside the namespace with their own selectors, endpoints and configuration.
backends:
- service: web-svc
# Identical to resources, 1 = 1000m
weight: 500m
- service: web-svc-2
weight: 500m
</code></pre>
| cpretzer |
<p>I have an ingress service with the following config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /api/?(.*)
backend:
serviceName: my-service
servicePort: 3001
- path: /auth/?(.*)
backend:
serviceName: my-service
servicePort: 3001
</code></pre>
<p>The thing is that when I'm running this on my minikube I cannot connect properly, i.e.
when I type <code>IP/api/test</code> in the browser it shows <code>not found</code>, even though my Express endpoint is:</p>
<pre><code>app.get('/api/test', (req, res) => {
return res.send({ hi: 'there' });
});
</code></pre>
<p>But <code>IP/api/api/test</code> (double <code>api</code>) works and delivers expected response. Obviously I would like to get there with <code>IP/api/test</code>. How can I achieve that in my ingress config?</p>
| Murakami | <p>If You want to access <code>http://.../api/test</code> by calling curl <code>http://.../api/test</code>, then you don't need a rewrite for it, so just make it empty.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: my-service
servicePort: 3001
- path: /auth/?(.*)
backend:
serviceName: my-service
servicePort: 3001
</code></pre>
<p>This configuration will for example rewrite the following:</p>
<pre><code>http://.../api/test -> http://.../api/test
</code></pre>
<pre><code>http://.../auth/test -> http://.../test
</code></pre>
<pre><code>http://.../asdasdasd -> http://.../asdasdasd
</code></pre>
<hr>
<p>Update:</p>
<p>In case You have another rewrite with <code>path: /?(.*)</code> You can modify Your app path request to:</p>
<pre><code>app.get('/test', (req, res) => {
return res.send({ hi: 'there' });
});
</code></pre>
<p>and use the original ingress configuration You posted in Your question.</p>
<p>This way when the request You have issues with will be resolved in following way:</p>
<p><code>IP/api/test</code> -> <code>IP/test</code></p>
<p>This is also the reason why You experienced <code>IP/api/api/test</code> before. Because one <code>/api/</code> was removed by rewrite and then <code>IP/api/test</code> was accessible.</p>
<p>Yes, You can have multiple rewrites in ingress. As long as they don't loop or rewrite too many times.</p>
| Piotr Malec |
<p>I have a kustomize transformer plugin that reads the value of serviceName in Ingress spec/rules/*/http/paths/0/backend/serviceName. The intent of the plugin is to update the host entries in the Ingress with the final serviceName after nameSuffix/namePrefix has been applied.</p>
<p>The plugins reads from stdin, but the state of the yaml isn't what I expected. The names of the Service and the Ingress have the nameSuffix/namePrefix applied, but the value of serviceName in the Ingress is still the raw service name. I verified this by dumping the Ingress yaml when the plugin executes. After the plugin runs, the final output does have the updated serviceName (with prefix/suffix). So something is running after the plugin that does the updating.</p>
<p>How do I configure the plugin so that it runs after the serviceName in the Ingress has been updated? </p>
| Erick T | <p>According to this example: <a href="https://github.com/kubernetes-sigs/kustomize/tree/master/examples/transformerconfigs" rel="nofollow noreferrer">Transformer Configs</a></p>
<p><code>namePrefix</code> and <code>nameSuffix</code> are applied only to <code>metadata/name</code> by default:</p>
<pre><code> namePrefix:
- path: metadata/name
</code></pre>
<p>If you want to include <code>serviceName</code> to the <code>nameReference</code>, you can create, for example, a <code>kustomize-config.yml</code> file with the content:</p>
<pre><code>nameReference:
- kind: Service
  fieldSpecs:
  - path: spec/rules/*/http/paths/0/backend/serviceName
    kind: Ingress
</code></pre>
<p>Then, on your <code>kustomization.yml</code> you need to reference it:</p>
<pre><code>configurations:
- kustomize-config.yml
</code></pre>
<p>Next time you run <code>kubectl kustomize .</code> or <code>kustomize build .</code>, you should see the prefix and suffix applied to <code>serviceName</code> as well.</p>
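<p>A minimal <code>kustomization.yml</code> sketch pulling it together (resource file names are illustrative):</p>
<pre><code>namePrefix: dev-
nameSuffix: -v1
resources:
- service.yml
- ingress.yml
configurations:
- kustomize-config.yml
</code></pre>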
| albertocavalcante |
<p>In a Kubernetes kustomize.yml, when I use configMapGenerator to pass in some values as env, can I access those variables in the deployed Spring Boot application's application.properties file?</p>
<p>kustomize.yml</p>
<pre><code>...
configMapGenerator:
- name: test-app-config
env: ./test/applicationsetup.env
...
</code></pre>
<p>test/applicationsetup.env</p>
<pre><code>some_key=data1
some_key1=data2
</code></pre>
<p>application.properties</p>
<pre><code>APPLICATION_KEY=${some_key}
APPLICATION_KEY1=${some_key1}
</code></pre>
| Sarav | <p>I had missed adding a configMapRef inside the container where I was trying to access the data:</p>
<pre><code>containers:
- name: test-container
image: <image>
envFrom:
- configMapRef:
name: test-app-config
</code></pre>
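<p>A quick way to confirm the values made it into the container (pod name is a placeholder):</p>
<pre><code>kubectl exec <pod-name> -- env | grep some_key
</code></pre>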
| Sarav |
<p>I'm struggling with importing a dump via <code>kubectl</code> into a MySQL database running in Kubernetes. There is no error output, but also no data is imported.</p>
<p>Here is proof that such a pod exists and that the dump file <code>/database.sql</code> is in the disk root, along with the command I ran:</p>
<pre><code>root@node-1:~# kubectl get pods -n esopa-test | grep mariadb
esopa-test-mariadb-0 1/1 Running 0 14d
root@node-1:~# ll /database.sql
-rw-r--r-- 1 root root 4418347 Oct 14 08:50 /database.sql
root@node-1:~# kubectl exec esopa-test-mariadb-0 -n esopa-test -- mysql -u root -proot database < /database.sql
root@node-1:~#
</code></pre>
<p>Thank you for any advice</p>
| Mariyo | <p>You can copy files from a pod to node by using <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp" rel="nofollow noreferrer"><code>kubectl cp</code></a> command.</p>
<p>To copy files from pod to node syntax is very simple:</p>
<pre><code>kubectl cp <some-namespace>/<some-pod>:<directory-inside-pod> <directory_on_your_node>
</code></pre>
<p>So in your use case you can use following command:</p>
<pre><code>kubectl cp esopa-test/esopa-test-mariadb-0:/database.sql <directory_on_your_node>
</code></pre>
<p>And to copy files from node to pod you can use:</p>
<pre><code>kubectl cp <directory_on_your_node> esopa-test/esopa-test-mariadb-0:/database.sql
</code></pre>
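<p>After copying the dump into the pod, the import itself can be run inside the pod, wrapping the redirection in a shell so that it happens inside the container rather than on your node, for example:</p>
<pre><code>kubectl exec -n esopa-test esopa-test-mariadb-0 -- sh -c "mysql -u root -proot database < /database.sql"
</code></pre>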
| kool |
<p>Inside a K8s cluster, I run a web application with 2 Pods (replicas: 2) and expose them using a <code>Service</code> with type <code>LoadBalancer</code>.
Then I ran an experiment sending 2 consecutive requests and found that both requests were handled by the same Pod.</p>
<p>Can anyone help me explain this behavior?
And what should I do to change this behavior to round robin or something else?</p>
| Xiao Ma | <p>By default, kubernetes uses <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer"><code>iptables</code> mode</a> to route traffic between the pods. The pod that serves a request is chosen randomly.</p>
<p>For 2 pods, the traffic is distributed evenly, each with a 0.5 (50%) probability. Because it is not round-robin, the backend pod is chosen randomly per connection, so the distribution only evens out over a longer time frame.</p>
<p>It can be checked using <code>sudo iptables-save</code>.</p>
<p>Example output for 2 pods (for nginx service):</p>
<pre><code> sudo iptables-save | grep nginx
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx:" -m tcp --dport 31554 -j KUBE-SVC-4N57TFCL4MD7ZTDA //KUBE-SVC-4N57TFCL4MD7ZTDA is the iptables chain for the nginx service
sudo iptables-save | grep KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SOWYYRHSSTWLCRDY
</code></pre>
<p>As mentioned by @Zambozo <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">IPVS proxy mode</a> allows you to use round-robin algorithm (which is used by default) to spread the traffic equally between the pods.</p>
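<p>If you need true round-robin, a rough sketch of switching a kubeadm-managed cluster to IPVS mode (assuming the IPVS kernel modules and <code>ipset</code>/<code>ipvsadm</code> are available on the nodes) could be:</p>
<pre><code># set mode: "ipvs" in the kube-proxy configuration
kubectl edit configmap kube-proxy -n kube-system
# restart kube-proxy so it picks up the new mode
kubectl rollout restart daemonset kube-proxy -n kube-system
</code></pre>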
| kool |
<p>I am looking for a simple method to get the storage used and storage allocated for a persistent volume dynamically claimed by PVC for any pod. Is there any rest API or <code>oc</code> command for the same?</p>
<p>I am new to OpenShift/Kubernetes. As per my investigation, I could not find any such command. <code>oc adm top</code> command describes the usage statistics for nodes and pods only.</p>
| Himanshu Jindal | <p>You can run <code>oc rsh podname</code> to access the pod's command line and then <code>du -c /path/to/pv</code> or <code>du -shc /path/to/pv</code> to see how much space is used under the volume's mount path.</p>
| Franco Fontana |
<p>I'm migrating my cluster to GKE using autpilot mode, and I'm trying to apply fluentbit for logging (to be sent to Elasticsearch and then Kibana to be alerted on a slack channel).</p>
<p>But it seems that GKE Autopilot doesn't want me to do anything on the <code>hostPath</code> other than reading into files inside <code>/var/log</code> according to this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#host_options_restrictions" rel="nofollow noreferrer">documentation</a>. However Fluentbit needs to access <code>/var/lib/docker/containers</code> which is different from <code>/var/log</code> and also access to write inside <code>/var/log</code></p>
<p>Is there a way to get around this or how do you usually log in GKE Autopilot with alerts?
Experience sharing is also welcome</p>
| MatsuzakaSteven | <p>Citing the official documentation:</p>
<blockquote>
<h3>External monitoring tools</h3>
<p>Most external monitoring tools require access that is restricted. Solutions from several Google Cloud partners are available for use on Autopilot, however not all are supported, and <strong>custom monitoring tools cannot be installed on Autopilot clusters.</strong></p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#external_monitoring_tools" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: External monitoring tools </a></em></p>
<hr />
<h3>Host options restrictions</h3>
<p>HostPort and hostNetwork are not permitted because node management is handled by GKE. Using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volumes in write mode is prohibited, <strong>while using hostPath volumes in read mode is allowed only for <code>/var/log/</code> path prefixes</strong>. Using <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">host namespaces</a> in workloads is prohibited.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#host_options_restrictions" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Host options restrictions</a></em></p>
</blockquote>
<p>As you've already found the access to the <code>/var/lib/docker/containers</code> directory is not possible with the <code>GKE</code> in <code>Autopilot</code> mode.</p>
<p>As a <strong>workaround</strong> you could try to <strong>either</strong>:</p>
<ul>
<li>Use <code>GKE</code> cluster in <code>standard</code> mode.</li>
<li>Use <code>Cloud Operations</code> with its Slack notification channel. You can read more about this topic by following:
<ul>
<li><em><a href="https://cloud.google.com/monitoring/alerts" rel="nofollow noreferrer">Cloud.google.com: Monitoring: Alerts</a></em></li>
<li><em><a href="https://cloud.google.com/monitoring/support/notification-options#slack" rel="nofollow noreferrer">Cloud.google.com: Monitoring: Support: Notification options: Slack</a></em></li>
</ul>
</li>
</ul>
<p>I'd reckon you could also consider checking the guide for exporting logs to <code>Elasticsearch</code> from <code>Cloud Logging</code>:</p>
<ul>
<li><em><a href="https://cloud.google.com/architecture/exporting-stackdriver-logging-elasticsearch" rel="nofollow noreferrer">Cloud.google.com: Architecture: Scenarios for exporting Cloud Logging: Elasticsearch</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/67599710/12257134">Stackoverflow.com: Answer: Prometheus on GKE Autopilot?
</a></em></li>
</ul>
| Dawid Kruk |
<p>When I upgrade kubernetes version 1.20.X to 1.21.1, all containers related is up to date. But the pause container is still in use, I can not force update it to the latest version.</p>
<pre><code># docker ps
XXX/pause:3.2
# docker images
XXX/pause:3.2
XXX/pause:3.4.1
# docker rmi -f XXX/pause:3.2
Error response from daemon: conflict: unable to delete XXX/pause:3.2 (cannot be forced) - image is being used by running container
</code></pre>
| ccd | <p>When you upgrade the cluster using <code>kubeadm</code> you will probably get the notification about the <code>kubelet</code> manual upgrade requirement:</p>
<pre><code>Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 1 x v1.20.7 v1.21.1
</code></pre>
<p>I've managed to create a <code>kubeadm</code> cluster version: <code>1.20.7-00</code> and then upgraded the cluster to the newest available at the time: <code>1.21.1-00</code>. After the upgrade was complete, the pause container stayed in the version <code>3.2.0</code> even after upgrading <code>kubelet</code>.</p>
<p><strong>One of the ways</strong> to update <code>kubelet</code> to use specific <code>pause</code> container version is by:</p>
<ul>
<li>modifying the following file:
<ul>
<li><code>/var/lib/kubelet/kubeadm-flags.env</code> (changing the pause image, for example to <code>k8s.gcr.io/pause:3.3</code>)</li>
</ul>
</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
</code></pre>
<ul>
<li>restarting kubelet (depending on the OS)
<ul>
<li><code>$ systemctl restart kubelet</code></li>
</ul>
</li>
</ul>
<p>After these steps you should see the new version of the <code>pause</code> container passed to <code>kubelet</code>.</p>
<ul>
<li><code>$ systemctl status kubelet</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>
kruk@ubuntu:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2021-05-27 13:28:12 UTC; 7h ago
Docs: https://kubernetes.io/docs/home/
Main PID: 724 (kubelet)
Tasks: 18 (limit: 9442)
Memory: 128.6M
CGroup: /system.slice/kubelet.service
└─724 /usr/bin/kubelet <-SKIPPED-> --pod-infra-container-image=k8s.gcr.io/pause:3.3
May 27 13:29:12 ubuntu kubelet[724]: 2021-05-27 13:29:12.125 [INFO][5164] ipam.go 1068: Successfully claimed IPs: [172.16.243.205/26] block=172.16.243.192/26 handle="k8s-pod-network.1638a3ba44d1a46f6ad7eadb1519a42cdda98fafd0c94a7b67881f38213a5032" host="ubuntu"
May 27 13:29:12 ubuntu kubelet[724]: 2021-05-27 13:29:12.125 [INFO][5164] ipam.go 722: Auto-assigned 1 out of 1 IPv4s: [172.16.243.205/26] handle="k8s-pod-network.1638a3ba44d1a46f6ad7eadb1519a42cdda98fafd0c94a7b67881f38213a5032" host="ubuntu"
May 27 13:29:12 ubuntu kubelet[724]: time="2021-05-27T13:29:12Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:369"
</code></pre>
<p>In my testing the old containers that were present were not updated to the new <code>pause</code> container. They stayed at version <code>3.2</code>. Each new workload that was spawned, like for example an <code>nginx</code> <code>Deployment</code>, was using the new <code>pause</code> container version:</p>
<ul>
<li><code>$ docker ps</code></li>
</ul>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cc215019335 nginx "/docker-entrypoint.…" 7 hours ago Up 8 hours k8s_nginx_nginx-6799fc88d8-lhh48_default_58580cf2-ac6c-4d55-9c08-608ce2018fce_1
1638a3ba44d1 k8s.gcr.io/pause:3.3 "/pause" 7 hours ago Up 8 hours k8s_POD_nginx-6799fc88d8-lhh48_default_58580cf2-ac6c-4d55-9c08-608ce2018fce_1
</code></pre>
<hr />
<p>Additional resources/reference on the topic:</p>
<ul>
<li><em><a href="https://www.ianlewis.org/en/almighty-pause-container" rel="nofollow noreferrer">Ianlewis.org: Almighty pause container</a></em></li>
<li><em><a href="https://github.com/kubernetes/kubernetes/issues/98765" rel="nofollow noreferrer">Github.com: Kubernetes: Isuses: Handling Deprecation of pod-infra-container-image</a></em></li>
<li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#workflow-when-using-kubeadm-init" rel="nofollow noreferrer">Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Kubelet integration: Workflow when using kubeadm init</a></em></li>
</ul>
| Dawid Kruk |
<p>I want to get specific output for a command like getting the nodeports and loadbalancer of a service. How do I do that?</p>
| Anvay | <p>The question is light on detail about what exactly should be retrieved from Kubernetes, but I think I can provide a good baseline.</p>
<p>When you use Kubernetes, you are most probably using <code>kubectl</code> to interact with the <code>kube-apiserver</code>.</p>
<p>Some of the commands you can use to retrieve the information from the cluster:</p>
<ul>
<li><code>$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME</code></li>
<li><code>$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME</code></li>
</ul>
<hr />
<h3>Example:</h3>
<p>Let's assume that you have a <code>Service</code> of type <code>LoadBalancer</code> (I've redacted some output to be more readable):</p>
<ul>
<li><code>$ kubectl get service nginx -o yaml</code></li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
clusterIP: 10.2.151.123
externalTrafficPolicy: Cluster
ports:
- nodePort: 30531
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: A.B.C.D
</code></pre>
<p>Getting a <code>nodePort</code> from this output could be done like this:</p>
<ul>
<li><code>kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>30531
</code></pre>
<p>Getting a <code>loadBalancer IP</code> from this output could be done like this:</p>
<ul>
<li><code>kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"</code></li>
</ul>
<pre><code>A.B.C.D
</code></pre>
<p>You can also use <code>kubectl</code> with <code>custom-columns</code>:</p>
<ul>
<li><code>kubectl get service -o=custom-columns=NAME:metadata.name,IP:.spec.clusterIP</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME IP
kubernetes 10.2.0.1
nginx 10.2.151.123
</code></pre>
<hr />
<p>There are many possible ways to retrieve data with <code>kubectl</code>, which you can read more about by following:</p>
<ul>
<li><code>kubectl get --help</code>:</li>
</ul>
<blockquote>
<p>-o, --output='': Output format. One of:
json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
See <a href="http://kubernetes.io/docs/user-guide/kubectl-overview/#custom-columns" rel="nofollow noreferrer">custom columns</a>, <a href="http://golang.org/pkg/text/template/#pkg-overview" rel="nofollow noreferrer">golang template</a> and <a href="http://kubernetes.io/docs/user-guide/jsonpath" rel="nofollow noreferrer">jsonpath template</a>.</p>
</blockquote>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Kubectl: Overview</a></em></li>
<li><em><a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Github.com: Kubernetes client: Python</a></em> - if you would like to retrieve this information with Python</li>
<li><em><a href="https://stackoverflow.com/a/53669973/12257134">Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value</a></em></li>
</ul>
| Dawid Kruk |
<p>We are currently setting up a kubernetes cluster for deploying our production workloads (mainly http rest services).
In this cluster we have setup nginx ingress controller to route traffic to our services from the outside world. Since the ingress controller will be used mainly with path routing I do have the following questions:</p>
<ul>
<li><strong>Question 1: Dynamic backend routing</strong></li>
</ul>
<p>Is it possible to route the traffic to a backend, without specifically specifiying the backend name in the ingress specification? For example I have the followign ingress: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 8080
</code></pre>
<p>Is there any possibility that the /apple request is routed to the apple-service without specifically specifying it in the serviceName? So /apple is automatically routed to the apple-service service, /orange is automatically routed to the orange service without explicitly specifying the backend name?</p>
<ul>
<li><strong>Question Number 2</strong></li>
</ul>
<p>If there is no sulution to number 1 so that we can deploy based on some conventions, the question now goes further on how to manage the ingress in an automated way.
Since the services are going to be deployed by an automated CI/CD pipeline, and new paths may be added as services are added to cluster, how can the ci/cd orchestrator (e.g. jenkins) update the ingress routes when an application is deployed? So that we are sure, that no manual intervention is needed into the cluster and each route is deployed together with the respective service?</p>
<p>I hope that the information given is enough to understand the issue.
Thank you very much for your support. </p>
| stgiaf | <p>Just have a step in your CI/CD pipeline that checks what the current ingress is and, based on some parameter, decides whether it needs to be updated.</p>
<p>High level steps...</p>
<pre><code>kubectl get ingress example-ingress -o yaml > ex-ingress.yaml
</code></pre>
<p>You can write that output to a file, read it, update it, verify it, and so on.</p>
<p>and then push it to the cluster along with your deployment</p>
<pre><code>kubectl replace -f ex-ingress.yaml
</code></pre>
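<p>As one example, instead of replacing the whole manifest the pipeline could append a new path with a JSON patch; the service name and port below just follow the convention from the question:</p>
<pre><code>kubectl patch ingress example-ingress --type=json -p='[
  {"op": "add",
   "path": "/spec/rules/0/http/paths/-",
   "value": {"path": "/orange",
             "backend": {"serviceName": "orange-service", "servicePort": 8080}}}
]'
</code></pre>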
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
| David Walton |
<p>I have a backend API & Frontend application that i want to deploy on Kubernetes, all using docker images.
I know how to deploy the Frontend using a Loadbalancer service & Ingress to expose the Frontend to the public internet.
The question i have is about how the backend API service will communicate with the frontend.</p>
<p>I want to deploy the backend API using ClusterIP service,
so it's only accessible to the Frontend from within the cluster,
instead of exposing the backend API using ingress, hence, and no public access to the backend API</p>
<p>Is this a good approach if i do decide to use Cluster IP?
& how will the Frontend be able to access the backend?
will it be using <code>http://localhost:4000</code>? or if an IP is generated it'll be fixed and it won't change? What's the best way to have an URL for the backend which the frontend can call</p>
| james | <p>There are various options for connecting your frontend to your backend, and it all depends on your application architecture.</p>
<p>You can expose your frontend using an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> - it allows you to expose different parts of your app under different paths, while the backend pods stay accessible only within the cluster. You can check <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">"Connecting Applications with Services"</a> in the Kubernetes documentation.</p>
<p>There are a few examples available online that might help you decide how to approach it; a minimal sketch of the in-cluster option follows the list below.</p>
<ol>
<li><p>You can take a look in official documentation: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">Connect a Front End to a Back End Using a Service</a>.</p>
</li>
<li><p>There is a tutorial on <a href="https://medium.com/better-programming/kubernetes-deployment-connect-your-front-end-to-your-back-end-with-nginx-7e4e7cfef177" rel="nofollow noreferrer">how to connect frontend to backend using nginx</a></p>
</li>
<li><p>Similar question was also asked in SO, where <a href="https://stackoverflow.com/a/45171567/12237732">good answer was given</a>.</p>
</li>
</ol>
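<p>If the frontend calls the backend server-side (i.e. the request does not originate in the user's browser), a minimal sketch of the in-cluster option could look like this; the names and port are assumptions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-api        # assumed service name
spec:
  type: ClusterIP
  selector:
    app: backend-api       # must match the labels on the backend pods
  ports:
  - port: 4000
    targetPort: 4000
</code></pre>
<p>Inside the cluster the frontend then reaches the backend through the service DNS name, e.g. <code>http://backend-api:4000</code> (or its fully qualified form including the namespace) rather than <code>localhost</code>; the name stays stable even as the backend pods come and go.</p>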
| kool |
<p>I have multiple deployments running of RDP application and they all are exposed with ClusterIP service. I have nginx-ingress controller in my k8s cluster and to allow tcp I have added <code>--tcp-services-configmap</code> flag in nginx-ingress controller deployment and also created a configmap for the same that is shown below</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
3389: “demo/rdp-service1:3389”
</code></pre>
<p>This will expose “rdp-service1” service. And I have 10 more such services which needed to be exposed on the same port number but if I add more service in the same configmap like this</p>
<pre><code>...
data
3389: “demo/rdp-service1:3389”
3389: “demo/rdp-service2:3389”
</code></pre>
<p>Then it will remove the previous service data and since here I have also deployed external-dns in k8s, so all the records created by ingress using <code>host: ...</code> will starts pointing to the deployment attached with the newly added service in configmap.</p>
<p>Now my final requirement is as soon as I append the rule for a newly created deployment(RDP application) in the ingress then it starts allowing the TCP connection for that, so is there any way to achieve this. Or is there any other Ingress controller available that can solve such type of use case and can also easily be integrated with external-dns ?</p>
<p>Note:- I am using AWS EKS Cluster and Route53 with external-dns.</p>
| Ajay Pathak | <p>Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.</p>
<p>Feel free to expand/edit it.</p>
<hr />
<p><code>NGINX Ingress</code> main responsibility is to forward the <strong><code>HTTP</code></strong>/<strong><code>HTTPS</code></strong> traffic. With the addition of the <code>tcp-services</code>/<code>udp-services</code> it can also forward the <code>TCP</code>/<code>UDP</code> traffic to their respective endpoints:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services</a></em></li>
</ul>
<p>The main issue is that the <code>Host</code> based routing for <code>Ingress</code> resource in Kubernetes is targeting specifically <code>HTTP</code>/<code>HTTPS</code> traffic and not <code>TCP</code> (<code>RDP</code>).</p>
<p>You could achieve a following scenario:</p>
<ul>
<li><code>Ingress controller</code>:
<ul>
<li><code>3389</code> - <code>RDP</code> <code>Deployment</code> #1</li>
<li><code>3390</code> - <code>RDP</code> <code>Deployment</code> #2</li>
<li><code>3391</code> - <code>RDP</code> <code>Deployment</code> #3</li>
</ul>
</li>
</ul>
<p>Where there would be no <code>Host</code> based routing. It would be more like port-forwarding.</p>
<blockquote>
<p>A side note!
This setup would also depend on the ability of the <code>LoadBalancer</code> to allocate ports (which could be limited due to cloud provider specification)</p>
</blockquote>
<hr />
<p>As for possible solution which could be not so straight-forward I would take a look on following resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/34741571/nginx-tcp-forwarding-based-on-hostname">Stackoverflow.com: Questions: Nxing TCP forwarding based on hostname</a></em></li>
<li><em><a href="https://doc.traefik.io/traefik/routing/routers/#configuring-tcp-routers" rel="nofollow noreferrer">Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers</a></em></li>
<li><em><a href="https://github.com/bolkedebruin/rdpgw" rel="nofollow noreferrer">Github.com: Bolkedebruin: Rdpgw</a></em></li>
</ul>
<p>I'd also check following links:</p>
<ul>
<li><p><em><a href="https://aws.amazon.com/quickstart/architecture/rd-gateway/" rel="nofollow noreferrer">Aws.amazon.con: Quickstart: Architecture: Rd gateway</a></em> - AWS specific</p>
</li>
<li><p><em><a href="https://docs.konghq.com/kubernetes-ingress-controller/1.2.x/guides/using-tcpingress/" rel="nofollow noreferrer">Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress</a></em></p>
</li>
<li><p>Haproxy:</p>
<ul>
<li><em><a href="https://www.haproxy.com/documentation/aloha/12-0/deployment-guides/remote-desktop/rdp-gateway/" rel="nofollow noreferrer">Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway</a></em></li>
<li><em><a href="https://www.haproxy.com/documentation/aloha/10-5/deployment-guides/remote-desktop/" rel="nofollow noreferrer">Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop</a></em></li>
<li><em><a href="https://www.haproxy.com/blog/microsoft-remote-desktop-services-rds-load-balancing-and-protection/" rel="nofollow noreferrer">Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection</a></em></li>
</ul>
</li>
</ul>
| Dawid Kruk |
<p>I'm trying to setup ArgoCD to automatically deploy Kubernetes manifests from a remote repository.</p>
<p>Argocd is installed on my K3s cluster. The CLI is working... kind of. I can't do <code>argocd app create name</code> (with or without additional parameters) or it stalls in the terminal indefinitely... I'm not exactly sure what the word is for this, but the terminal never prompts me for another command. Note that if I do <code>argo app create --help</code> it does not stall-- it displays the help message.</p>
<p>Because of this, I want to use the UI to add an app to argo. This requires port forwarding to port 8080. I am trying to do this with the following command:</p>
<p><code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code></p>
<p>But it prints</p>
<pre><code>Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
</code></pre>
<p>With the cursor on the third line... This command <em>also</em> stalls indefinitely! I've waited for quite a while and nothing happens. Nothing else is running on port 8080 on any of the nodes.</p>
<p>Someone please give some guidance on how to proceed here? I am brand new to ArgoCD and Kubernetes.</p>
| Antonio Leonti | <p>This was a silly misunderstanding on my part of how the <code>kubectl port-forward</code> command works. When you run <code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code>, it sets up a proxy so that you can communicate with the argocd server through port 8080. When that program ends, the proxy closes, so when you press <code>control+c</code>, thinking the command is "stuck", you are actually just closing the proxy you set up.</p>
<p>The solution is to run <code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code>, open a new shell, and use the new shell to login and interact with ArgoCD.</p>
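<p>In practice this can be as simple as backgrounding the proxy; depending on how the certificate is set up you may need the <code>--insecure</code> flag when logging in:</p>
<pre><code># keep the proxy running in the background (or in its own terminal)
kubectl port-forward svc/argocd-server -n argocd 8080:443 &

# then talk to the API server through it
argocd login localhost:8080 --insecure
</code></pre>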
| Antonio Leonti |
<p>I don't see any link between the service and Ingress yaml files. How is it linked and how does it work? I looked at the nginx ingress controller but couldn't find any links to the ingress either.</p>
<p>How does the traffic flow? <code>LB -> Ingress controller -> Ingress -> Backend service -> pods</code>? And it seems only 80 and 443 are allowed by ingress. Does that mean any custom ports defined on <code>ingress-nginx service</code> is directly connected to the pod through like <code>LB -> Backend service -> Pod</code>?</p>
<p>Update: Figured out the traffic flow. Its as follows:
<code>LB -> Ingress controller -> Ingress -> Backend service -> pods</code></p>
<p>I have a <code>https virtual host with a custom port</code> and I guess I need to edit the <code>ingress-controller</code> yaml file to allow custom port and add the custom port to ingress and would it start routing?</p>
<p><code>Ingress.yml:</code></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test
namespace: test
rules:
- path: /
backend:
serviceName: httpd
servicePort: 443
</code></pre>
<p><code>cloud-generic-service.yml:</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data:
1234: "test-web-dev/httpd:1234"
1235: "test-web-dev/tomcat7:1235"
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
- name: port-1234
port: 1234
protocol: TCP
targetPort: 1234
- name: port-1235
port: 1235
protocol: TCP
targetPort: 1235
</code></pre>
| user630702 | <p>An explanation of this can be found <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">in the documentation</a></p>
<blockquote>
<p>Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.</p>
<p>An Ingress may be configured to give Services externally-reachable
URLs, load balance traffic, terminate SSL / TLS, and offer name-based
virtual hosting.</p>
</blockquote>
<p>So <code>Ingress</code> routes traffic from outside the cluster to the service that you've specified in it, <code>httpd</code> in your example. You can specify how traffic should be handled by <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">adding annotations</a> (example of annotations for nginx ingress).</p>
<blockquote>
<p>The Ingress controller is an application that runs in a cluster and
configures an HTTP load balancer according to Ingress resources. The
load balancer can be a software load balancer running in the cluster
or a hardware or cloud load balancer running externally. Different
load balancers require different Ingress controller implementations.</p>
</blockquote>
<blockquote>
<p>In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.</p>
</blockquote>
<p><code>Ingress</code> resources require an <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/#how-it-works" rel="nofollow noreferrer"><code>Ingress controller</code></a> to be present in the cluster. It is not deployed into the cluster by default, which is why it has to be installed manually.</p>
| kool |
<p>I am quite confused about where k8s regular service and istio's sidecar reside. </p>
<p>I recently learned about istio and its "sidecar/envoy/proxy" inside the pod. And i am confident in saying that istio's sidecar resides inside the pod. But where does the k8s regular service reside and who is contacted first from the app, the Service or the Proxy/Sidecar?</p>
<p>The diagram in my mind is something like:
<img src="https://i.stack.imgur.com/jZkZu.png" alt="The diagram in my mind is something like"></p>
| Finley Ben | <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> are internal abstract REST objects like: loadbalancer, clusterip, nodeport, etc. Their definition is stored in Kubernetes API server (etcd).</p>
<p>Services are usually implemented by kube-proxy and are mapped to endpoints by matching selectors and labels.</p>
<p>Each node runs an instance of kube-proxy, which watches the API server for Service and Endpoints changes. Services are most of the time realized as sets of iptables rules on each node.</p>
<hr />
<p>Istio services that are used in service mesh are located in Istio control plane and can be used as gateways, egress, ingress, virtualservices etc objects.</p>
<p>The istio control plane also consists of:</p>
<ul>
<li>Citadel: for key and certificate management</li>
<li>Pilot: to distribute authentication policies and secure naming information to the proxies</li>
<li>Mixer: to manage authorization and auditing</li>
</ul>
<p>As you mentioned sidecar proxies (envoy proxy) are injected into pods next to application container.</p>
<p>Here is graph from istio <a href="https://istio.io/latest/docs/concepts/security/#high-level-architecture" rel="nofollow noreferrer">documentation</a>.
<a href="https://web.archive.org/web/20200114173104/https://istio.io/docs/concepts/security/architecture.svg" rel="nofollow noreferrer"><img src="https://web.archive.org/web/20200114173104/https://istio.io/docs/concepts/security/architecture.svg" alt="Istio Security Architecture" /></a></p>
| Piotr Malec |
<p>I have a HorizontalPodAutoscalar to scale my pods based on CPU. The minReplicas here is set to <code>5</code>:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: myapp-web
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp-web
minReplicas: 5
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
</code></pre>
<p>I've then added Cron jobs to scale up/down my horizontal pod autoscaler based on time of day:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: production
name: cron-runner
rules:
- apiGroups: ["autoscaling"]
resources: ["horizontalpodautoscalers"]
verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: cron-runner
namespace: production
subjects:
- kind: ServiceAccount
name: sa-cron-runner
namespace: production
roleRef:
kind: Role
name: cron-runner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-cron-runner
namespace: production
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: django-scale-up-job
namespace: production
spec:
schedule: "56 11 * * 1-6"
successfulJobsHistoryLimit: 0 # Remove after successful completion
failedJobsHistoryLimit: 1 # Retain failed so that we see it
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-cron-runner
containers:
- name: django-scale-up-job
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: django-scale-down-job
namespace: production
spec:
schedule: "30 20 * * 1-6"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 0 # Remove after successful completion
failedJobsHistoryLimit: 1 # Retain failed so that we see it
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-cron-runner
containers:
- name: django-scale-down-job
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
restartPolicy: OnFailure
</code></pre>
<p>This works really well, except that now when I deploy it overwrites this <code>minReplicas</code> value with the minReplicas in the HorizontalPodAutoscaler spec (in my case, this is set to 5)</p>
<p>I'm deploying my HPA using <code>kubectl apply -f ~/autoscale.yaml</code></p>
<p>Is there a way of handling this situation? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way of handling this?</p>
| MDalt | <p>I think you could also consider the following two options:</p>
<hr />
<h3>Use helm to manage the life-cycle of your application with lookup function:</h3>
<p>The main idea behind this solution is to query the state of specific cluster resource (here <code>HPA</code>) before trying to create/recreate it with <code>helm</code> <code>install</code>/<code>upgrade</code> commands.</p>
<ul>
<li><em><a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="nofollow noreferrer">Helm.sh: Docs: Chart template guide: Functions and pipelines: Using the lookup function</a></em></li>
</ul>
<p>The idea is to check the current <code>minReplicas</code> value each time before you upgrade your application stack, and reuse it instead of the value hardcoded in the manifest.</p>
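<p>A minimal sketch of that template, assuming the chart itself renders the HPA manifest and reusing the names from the question, could look like:</p>
<pre><code>{{- $hpa := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" .Release.Namespace "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  {{- if $hpa }}
  minReplicas: {{ $hpa.spec.minReplicas }}   # keep whatever the CronJobs last set
  {{- else }}
  minReplicas: 5
  {{- end }}
  maxReplicas: 10
  # scaleTargetRef and metrics omitted for brevity
</code></pre>
<p>Keep in mind that <code>lookup</code> returns an empty map during <code>helm template</code> and dry-runs, so the fallback value is what gets rendered there.</p>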
<hr />
<h3>Manage the <code>HPA</code> resource separately to application manifest files</h3>
<p>Here you can hand over this task to a dedicated <code>HPA</code> operator, which can coexist with your <code>CronJobs</code> that adjust <code>minReplicas</code> according to a specific schedule:</p>
<ul>
<li><em><a href="https://banzaicloud.com/blog/k8s-hpa-operator/" rel="nofollow noreferrer">Banzaicloud.com: Blog: K8S HPA Operator</a></em></li>
</ul>
| Dawid Kruk |
<p>I am somewhat new to Kubernetes, and I am trying to learn about deploying airflow to Kubernetes.</p>
<p>My objective is to try to deploy an "out-of-the-box" (or at least closer to that) deployment for airflow on Kubernetes. I have created the Kubernetes cluster via Terraform (on EKS), and would like to deploy airflow to the cluster. I found that Helm can help me deploy airflow easier relative to other solutions.</p>
<p>Here is what I have tried so far (snippet and not complete code):</p>
<pre><code>provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
data "helm_repository" "airflow" {
name = "airflow"
url = "https://airflow-helm.github.io/charts"
}
resource "helm_release" "airflow" {
name = "airflow-helm"
repository = data.helm_repository.airflow.metadata[0].name
chart = "airflow-chart"
}
</code></pre>
<p>I am not necessarily fixed on using Terraform (I just thought it might be easier and wanted to keep state). So I am also happy to discover other solutions that will help me airflow with all the pods needed.</p>
| alt-f4 | <p>You can install it using Helm from the official repository, but there is a lot of additional configuration to consider. The Airflow configuration is described in the chart's <a href="https://github.com/airflow-helm/charts/blob/main/charts/airflow/values.yaml" rel="nofollow noreferrer"><code>values.yaml</code></a>. You can take a look at <a href="https://medium.com/clarityai-engineering/running-airflow-in-kubernetes-and-aws-lessons-learned-part-1-77be9556846c" rel="nofollow noreferrer">this article</a> for an example configuration.</p>
<p>For installation using Terraform you can take a look at <a href="https://medium.com/typeforms-engineering-blog/deploy-airflow-1-10-10-in-kubernetes-using-terraform-and-helm-2476f03f07d0" rel="nofollow noreferrer">this article</a>, where both the Terraform config and the Helm chart's values are described in detail.</p>
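<p>For reference, with a recent version of the Terraform Helm provider the release can point at the repository directly; the chart, namespace and values file names below are assumptions:</p>
<pre><code>resource "helm_release" "airflow" {
  name             = "airflow"
  namespace        = "airflow"
  create_namespace = true

  repository = "https://airflow-helm.github.io/charts"
  chart      = "airflow"

  # chart configuration goes into a values file, as described in the linked articles
  values = [file("${path.module}/airflow-values.yaml")]
}
</code></pre>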
| kool |
<p>Good afternoon. I am working with HPA (HorizontalPodAutoscaler) for the automatic scaling of replicas of a pods, in this case I am using memory usage as a reference, I declare it as follows:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: find-complementary-account-info-1
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: find-complementary-account-info-1
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 70
</code></pre>
<p>What I want is to automatically scale my pods when the use of the pod's memory percentage is greater than 70%, and when this percentage drops my replicas return to the declared minimum, however, doing tests, I place a limit at 70, without However, when memory usage is lower than this value, the replicas continue in 4.</p>
<p>5 minutes have passed, and this is what the HPA shows:</p>
<pre><code>[dockermd@tmp108 APP-MM-ConsultaCuentaPagadoraPospago]$ kubectl get hpa -o wide
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
find-complementary-account-info-1 Deployment/find-complementary-account-info-1 65%/70% 2 5 4 2d4h
</code></pre>
<p>I have a wrong concept of HPA or I am declaring the configuration for the HPA in the wrong way, or how to reduce the number of replicas as soon as the memory usage is below the indicated</p>
<p>My environment is on-premise and is configured with metalb, and I use LoadBalancer to expose my services</p>
| Cesar Justo | <p>It is working as designed. The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">algorithm computes</a> the desired number of replicas using this formula:</p>
<pre><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
</code></pre>
<p>So if your <code>desiredMetricValue</code> is 70% and <code>currentMetricValue</code> is 65%, then</p>
<pre><code>desiredReplicas=4*(65%/70%)=~3.7
</code></pre>
<p>The CEIL() function returns the smallest integer value that is greater than or equal to a number, so in this case the result is 4.</p>
<p>If you want to lower the number to 3 replicas, you have to bring your utilization down to 52.5% or below:</p>
<pre><code>3=4*(`currentMetricValue`/70)
currentMetricValue=52.5%
</code></pre>
| kool |
<p>I have a <strong>non-EKS</strong> AWS kubernetes cluster with 1 master 3 worker nodes</p>
<p>I am trying to install <strong>nginx ingress controller</strong> in order to use the cluster with a domain name but unfortunately it does not seem to work, the <strong>nginx ingress controller service</strong> is not taking automatically an IP and even if I set manually an <strong>external IP</strong> this IP is not answering in 80 port.</p>
| chatzich | <p>The reason the <code>External IP</code> remains pending is that there is no load balancer in front of your cluster to provide it with an external IP, as there would be on EKS. You can fix this by bootstrapping your cluster with the <code>--cloud-provider</code> option using <code>kubeadm</code>.</p>
<p>You can follow these tutorials on how to successfully achieve it:</p>
<p><a href="https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/" rel="nofollow noreferrer">Kubernetes, Kubeadm, and the AWS Cloud Provider</a></p>
<p><a href="https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd" rel="nofollow noreferrer">Setting up the Kubernetes AWS Cloud Provider</a></p>
<p><a href="https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2" rel="nofollow noreferrer">Kubernetes: part 2 — a cluster set up on AWS with AWS cloud-provider and AWS LoadBalancer</a></p>
| kool |
<p>I am new to kubernetes and trying to deploy a simple hello-world app. I am using Ubuntu 20.04 and running it on VMware workstation. I have installed minikube and trying to deploy. However, the pods are deployed but the service is not accessible through browser.</p>
<p>Below is my <code>deployment.yaml</code> file:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
name: myapp
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 8080
# Port accessible outside cluster
nodePort: 30000
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myappdeployment
spec:
replicas: 5
selector:
matchLabels:
name: myapp
template:
metadata:
labels:
name: myapp
spec:
containers:
- name: myapp
image: pritishkapli/example:v1.0.0
ports:
- containerPort: 8080
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
</code></pre>
<p>Below is the kubernetes service:</p>
<pre><code>pritish@ubuntu:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exampleservice LoadBalancer 10.101.149.155 <pending> 8081:30000/TCP 12m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9h
</code></pre>
<p>Below is the pods running:</p>
<pre><code>pritish@ubuntu:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myappdeployment-b85b56d64-knhhc 1/1 Running 0 17m
myappdeployment-b85b56d64-m4vbg 1/1 Running 0 17m
myappdeployment-b85b56d64-qss4l 1/1 Running 0 17m
myappdeployment-b85b56d64-r2jq4 1/1 Running 0 17m
myappdeployment-b85b56d64-tflnz 1/1 Running 0 17m
</code></pre>
<p>Please help!</p>
<p>PS: I have updated the <code>deployment.yaml</code> file and it's working as expected.</p>
| Pritish | <h3><strong>TL;DR</strong></h3>
<p>This is not an issue with the <code>Service</code> of type <code>LoadBalancer</code> but a <strong>mismatch between the <code>service.spec.selector</code> value and the <code>deployment.spec.selector.matchLabels</code></strong> value.</p>
<hr />
<h3>How you can fix your setup?</h3>
<p>To fix your setup you can use the same values from either <code>Service</code> or <code>Deployment</code>.</p>
<p><strong>Please choose only one of the following options:</strong></p>
<ul>
<li><code>deployment.yaml</code> changes:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myappdeployment
spec:
replicas: 5
selector:
matchLabels:
app: myapp # <-- CHANGES
template:
metadata:
labels:
app: myapp # <-- CHANGES
spec:
containers:
- name: myapp
image: pritishkapli/example:v1.0.0
ports:
- containerPort: 8080
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
</code></pre>
<ul>
<li><code>service.yaml</code> changes:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
app.kubernetes.io/name: myapp # <-- CHANGES
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 8080
# Port accessible outside cluster
nodePort: 30000
type: LoadBalancer
</code></pre>
<hr />
<h3>How could you tell that this was the issue with <code>selector</code>?</h3>
<p>This comes down to the experience on working with Kubernetes, but the easiest method to check it is by either (using old examples):</p>
<ul>
<li><code>kubectl describe service exampleservice</code> (part of this output is redacted due to readability)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Name: exampleservice
Namespace: default
Labels: <none>
Selector: app=myapp
Type: LoadBalancer
IP: 10.8.10.103
Port: <unset> 8081/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30000/TCP
Endpoints: <none> # <-- LOOK HERE!
</code></pre>
<ul>
<li><code>kubectl get endpoints exampleservice</code></li>
</ul>
<pre><code>NAME ENDPOINTS AGE
exampleservice <none> 2m3s
</code></pre>
<p>As you can see in the above output, there are no <code>endpoints</code> (<code>Pods</code>) to send the traffic to.</p>
<hr />
<h3>Service of type <code>LoadBalancer</code> on <code>minikube</code></h3>
<p>Talking from the perspective of the <code>Service</code> of type <code>Loadbalancer</code> on <code>minikube</code>:</p>
<p>The easiest method would be to connect, as said previously, via a <code>NodePort</code>. Due to <code>minikube</code>'s local nature, it won't get the <code>External IP</code> assigned unless you opt for a workaround, for example:</p>
<ul>
<li><code>minikube tunnel</code></li>
</ul>
<blockquote>
<p><strong>tunnel</strong> - creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. for a detailed example see <a href="https://minikube.sigs.k8s.io/docs/tasks/loadbalancer" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/tasks/loadbalancer</a></p>
<p><em><a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Commands: Tunnel</a></em></p>
</blockquote>
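<p>Apart from the tunnel, the quickest sanity check on <code>minikube</code> is to let it resolve the <code>NodePort</code> URL for you:</p>
<pre><code>minikube service exampleservice --url
</code></pre>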
<hr />
<p><strong>A side note!</strong></p>
<p>I haven't tested it personally, but you could try experimenting with <code>$ minikube addons enable metallb</code> to assign the service an IP address in the same <code>CIDR</code> as your <code>minikube</code> instance and then query (<code>curl</code>) this newly assigned IP.</p>
<p>A guide for more reference:</p>
<ul>
<li><em><a href="https://medium.com/faun/metallb-configuration-in-minikube-to-enable-kubernetes-service-of-type-loadbalancer-9559739787df" rel="nofollow noreferrer">Medium.com: Faun: Metallb configuration in minikube to enable Kubernetes service of type Loadbalancer</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Access application cluster: Create external load balancer</a></em></li>
<li><em><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Start</a></em></li>
</ul>
| Dawid Kruk |
<p>When I create a service account in Kubernetes with the following specification</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: deploy-bot
</code></pre>
<p>It automatically creates the following secret with prefix <code>deploy-bot-token-XXXX</code></p>
<pre><code>$ kubectl get secret
NAME TYPE DATA AGE
default-token-lvq79 kubernetes.io/service-account-token 3 60m
deploy-bot-token-7gmnh kubernetes.io/service-account-token 3 4m53s
</code></pre>
<p>Is there a way via which we can disable the automatic creation of secret tokens while creating service accounts?</p>
| Anshul Patel | <p>You can achieve it by modifying <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer"><code>kube-controller-manager</code> options</a>.</p>
<p>The flag to be passed to the controller is <code>--controllers=-serviceaccount-token</code>. It will disable creating token for service accounts.</p>
<pre><code>spec:
containers:
- command:
- kube-controller-manager
- --controllers=-serviceaccount-token
[...]
</code></pre>
<p>After this modification when you deploy your service account:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: deploy-bot
</code></pre>
<p>and list the service accounts:</p>
<pre><code>$ kubectl get sa
NAME SECRETS AGE
default 1 14m
deploy-bot 0 3s
</code></pre>
<p>and when you check the secrets, you will notice that the token secret has not been created:</p>
<pre><code>$ kubectl get secret
NAME TYPE DATA AGE
default-token-t4qnv kubernetes.io/service-account-token 3 14m
</code></pre>
| kool |
<p>I just tried to go over the getting started guide of Argocd found here <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noreferrer">https://argo-cd.readthedocs.io/en/stable/getting_started/</a>.<br />
I did steps 1 and 2 and then ran the command <code>argocd login --core</code> to skip steps 3-5 (as it said in the guide).<br />
when running the next command<br />
<code>argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace default</code>.<br />
to apply the app itself I got the following error:
<code>FATA[0000] configmap "argocd-cm" not found</code>.
Although I did find it on the cluster with the label <code>app.kubernetes.io/part-of: argocd</code>.<br />
I also tried going back to steps 3-5 and changing the server and namespace but some steps didn't work or I didn't know what to do in them, I also got the same error during the second command of step 5.
Thanks for the help.</p>
| gshabi | <p>Although the <code>configmap</code> is in the <code>argocd</code> namespace, if <code>argocd</code> is not your <em>current</em> namespace, it won't work. To make sure it is, run:</p>
<pre><code>kubectl config set-context --current --namespace=argocd
</code></pre>
<p>That solved it for me. Command from <a href="https://stackoverflow.com/questions/55373686/how-to-switch-namespace-in-kubernetes">How to switch namespace in kubernetes</a></p>
| Emile Tenezakis |
<p>I have an application setup on AKS (Azure Kubernetes Service) and I’m currently using Azure Application gateway as ingress resource for my application running on AKS.</p>
<p>Now after setting up ISTIO for my cluster the graphs are coming up fine except one part. Since the Azure APP gateway is unknown to ISTIO it is showing the resource as “unknown”. I even tried launching a virtual service and pointed it to the ingress resource but that didn’t have any effect on the graph. How shall I establish to ISTIO that it is Azure app gateway and not “unknown” resource.</p>
<p><a href="https://i.stack.imgur.com/0uJD0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0uJD0.png" alt="enter image description here"></a></p>
| vishal | <p>This is because Azure Application gateway is not part of Istio Mesh. Depending on how You have Your Azure Application Gateway configured You might not even get any benefits of using istio.</p>
<p>Getting istio to work with Azure Application Gateway is lot more complicated than it seems.</p>
<p>There is a <a href="https://github.com/Azure/application-gateway-kubernetes-ingress/issues/633" rel="nofollow noreferrer">Github</a> issue that uses istio and Azure Application Gateway at the same time.</p>
<p>With the following statement:</p>
<blockquote>
<p>You may wonder why I chose to put the ingress resource into the istio-system namespace. Im doing so because in my understanding the istio-ingress must be the endpoint for each app-gateway redirect. If I would let it redirect to the echo-server service, AGKI(application-gateway-kubernetes-ingress) would point to the ip-address of the deployed pod, which would completely disregard istios servicemesh.</p>
</blockquote>
<p>So if don't already have configuration like that and You want to use Istio I suggest setting Istio Ingress Gateway as an endpoint for Your Azure Application Gateway and treat it as traffic comming from outside mesh.</p>
<hr />
<p>Here is an explanation of why Azure Application Gateway shows up as an "unknown" resource.</p>
<p>In <a href="https://itnext.io/where-does-the-unknown-taffic-in-istio-come-from-4a9a7e4454c3" rel="nofollow noreferrer">this</a> article you can find the following statement:</p>
<blockquote>
<p>Ingress traffic</p>
<p>Istio expects traffic to go via the the Ingress Gateway. When you see ‘unknown’ traffic it can simply be the case that you use the standard Kubernetes Ingress or an OpenShift route to send traffic from the outside to Istio.</p>
</blockquote>
<p><a href="https://learn.microsoft.com/en-us/azure/application-gateway/overview#ingress-controller-for-aks" rel="nofollow noreferrer">Azure Application gateway</a> uses custom ingress controller:</p>
<blockquote>
<p>Application Gateway Ingress Controller (AGIC) allows you to use Application Gateway as the ingress for an Azure Kubernetes Service (AKS) cluster.</p>
<p>The ingress controller runs as a pod within the AKS cluster and consumes Kubernetes Ingress Resources and converts them to an Application Gateway configuration which allows the gateway to load-balance traffic to the Kubernetes pods. The ingress controller only supports Application Gateway V2 SKU.</p>
<p>For more information, see Application Gateway Ingress Controller (AGIC).</p>
</blockquote>
<p>According to <a href="https://kiali.io/faq/graph/#many-unknown" rel="nofollow noreferrer">Kiali</a> documentation:</p>
<blockquote>
<p>In some situations you can see a lot of connections from an "Unknown" node to your services in the graph, because some software external to your mesh might be periodically pinging or fetching data. This is typically the case when you setup Kubernetes liveness probes, or have some application metrics pushed or exposed to a monitoring system such as Prometheus. Perhaps you wouldn’t like to see these connections because they make the graph harder to read.</p>
</blockquote>
<hr />
<p>To address Your additional question:</p>
<blockquote>
<p>How shall I establish to ISTIO that it is Azure app gateway and not “unknown” resource.</p>
</blockquote>
<p>As far as I know there is no way to make a custom (non-istio) ingress gateway part of the istio mesh, leaving Azure Application Gateway labelled as “unknown”.</p>
<p>Hope this helps.</p>
| Piotr Malec |
<p>Say I have this VPA config file:</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: vpa-reco
spec:
targetRef:
apiVersion: "batch/v1beta1"
kind: CronJob
name: test-autoscaling-1
updatePolicy:
updateMode: "Off"
</code></pre>
<p>How can I make my VPA target CronJob <code>test-autoscaling-2</code> also?</p>
| Ugurite | <p>Unfortunately you can't reference multiple objects in a single <code>VPA</code> object.</p>
<p>If you tried like below (to have 2 <code>CronJobs</code> in a single <code>VPA</code>):</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: vpa-reco
spec:
targetRef:
- apiVersion: "batch/v1beta1"
kind: CronJob
name: test-autoscaling-1
- apiVersion: "batch/v1beta1"
kind: CronJob
name: test-autoscaling-2
updatePolicy:
updateMode: "Off"
</code></pre>
<p>You should got the error (part of it) similar to the one below:</p>
<pre class="lang-sh prettyprint-override"><code>denied the request: json: cannot unmarshal array into Go struct field VerticalPodAutoscalerSpec.targetRef of type v1.CrossVersionObjectReference
</code></pre>
<p>To have the <code>VPA</code> configured for: <code>test-autoscaling-1</code> and <code>test-autoscaling-2</code> <strong>you would need to create 2 separate <code>VPA</code> objects</strong> like: <code>vpa-reco-1</code> and <code>vpa-reco-2</code>.</p>
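<p>For completeness, the second object would simply mirror the first one with the other target, for example:</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-reco-2
spec:
  targetRef:
    apiVersion: "batch/v1beta1"
    kind: CronJob
    name: test-autoscaling-2
  updatePolicy:
    updateMode: "Off"
</code></pre>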
<hr />
<p>Talking from the perspective of a <code>VPA</code> with <code>CronJob</code>, support for it was added in the version <code>0.7.0</code>. More can be found in this specific release note:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/autoscaler/releases/tag/vertical-pod-autoscaler-0.7.0" rel="nofollow noreferrer">Github.com: Kubernetes: Autoscaler: Releases: Tag: Vertical pod autoscaler 0.7.0</a></em></li>
</ul>
<p>Additional resources:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/autoscaler" rel="nofollow noreferrer">Github.com: Kubernetes: Autoscaler</a></em></li>
</ul>
| Dawid Kruk |
<p>While running a complete cluster if suddenly etcd stops working, then what will happen?</p>
<p>Will pods services and deployments continue?</p>
| Tamish Verma | <p>The <code>etcd cluster</code> is considered failed if the majority of <code>etcd members</code> have permanently failed.</p>
<p>After an <code>etcd cluster</code> failure, all running workloads might continue operating. However, due to etcd's role, Kubernetes cannot make any changes to its current state. Although already scheduled pods might continue to run, no new pods can be scheduled.
So applications running in the Kubernetes cluster might continue to serve traffic, but the <code>etcd</code> cluster should be <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster" rel="nofollow noreferrer">recovered</a> as soon as possible.</p>
<p>You can find more information about <code>etcd's</code> role in Kubernetes in <a href="https://rancher.com/blog/2019/2019-01-29-what-is-etcd/" rel="nofollow noreferrer">Rancher docs</a>.</p>
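<p>As a side note, this is also why keeping regular <code>etcd</code> snapshots matters. A minimal sketch of taking and restoring a snapshot with <code>etcdctl</code> (assuming the v3 API; the endpoint, certificate paths shown are kubeadm defaults and the backup/data-dir paths are just placeholders, so adjust them to your setup):</p>
<pre><code># take a snapshot of the current etcd state
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db

# restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir /var/lib/etcd-restored
</code></pre>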
| kool |
<p>When trying to run gcr.io/google_containers/defaultbackend as non-root, the pod goes into CrashLoopBackOff state and I see the below error in the logs </p>
<p><strong>standard_init_linux.go:211: exec user process caused "permission denied"</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
namespace: ingress-haproxy
spec:
replicas: 1
selector:
matchLabels:
run: ingress-default-backend
template:
metadata:
labels:
run: ingress-default-backend
spec:
containers:
- name: ingress-default-backend
image: gcr.io/google_containers/defaultbackend:1.0
ports:
- containerPort: 8080
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
</code></pre>
<p>OS: Ubuntu 18.04.1 LTS</p>
<p><strong>Note: This issue persist only with ubuntu 18.04.1</strong></p>
| Arundathi G Vardhan | <p>It looks like the issue is caused by the user you are running the container as in your deployment lacking permission to execute the container's entrypoint binary.</p>
<p>You can try to modify the image's Dockerfile and add a few lines that grant the right user permission to execute the binary, as described in this <a href="https://github.com/fsanys/sonarqube-developer/issues/1" rel="nofollow noreferrer">github</a> issue.</p>
<p>There is also lots of useful information about this in <a href="https://stackoverflow.com/questions/58298774/standard-init-linux-go211-exec-user-process-caused-exec-format-error">this</a> StackOverflow post.</p>
<p>Hope it Helps.</p>
| Piotr Malec |
<p>I have a kubernetes cluster that exposes Postgresql on port 5432 via <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="nofollow noreferrer">this information</a>, this works like a charm. I'm currently testing this on my machine, and it works on <code>db.x.io</code> (<code>x</code> being my domain). But it also works on <code>localhost</code>. This seems fair, as it only creates a binding upon port 5432 to my service. </p>
<p>How can i also filter on subdomain? So its only accessible via <code>db.x.io</code></p>
| WiseStrawberry | <p>The <code>TCP</code> protocol offers very little in terms of filtering. This is because <code>TCP</code> works only with an <code>IP:Port</code> combination; there are no headers like in HTTP. Your subdomain is resolved by <code>DNS</code> to an <code>IP</code> address before the connection is made, so the hostname is never visible to the proxy.</p>
<p>According to <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/" rel="nofollow noreferrer">Nginx</a> documentation you can do the following:</p>
<blockquote>
<ul>
<li><a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/#restrict" rel="nofollow noreferrer">Restricting Access by IP Address</a></li>
<li><a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/#limit_conn" rel="nofollow noreferrer">Limiting the Number of TCP Connections</a></li>
<li><a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/#limit_bandwidth" rel="nofollow noreferrer">Limiting the Bandwidth</a></li>
</ul>
</blockquote>
<hr>
<p>You can try to limit access from localhost by adding <code>deny 127.0.0.1</code> to the nginx configuration; however, it will most likely break access to Postgresql instead, so it is a risky suggestion.</p>
<p>For kubernetes ingress object it would be:</p>
<pre><code>metadata:
annotations:
nginx.org/server-snippets: |
deny 127.0.0.1;
</code></pre>
<p>Based on <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/custom-annotations/" rel="nofollow noreferrer">Nginx</a> documentation.</p>
| Piotr Malec |
<p>I am trying to solve how to convert
<code>kubectl create -f file.yaml</code> to the <code>curl</code> equivalent. I have obviously managed to do something like that</p>
<pre><code>curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
</code></pre>
<p>However, when it comes to <code>curl -X POST ...</code> request, I was not able to figure out how to make <strong>curl</strong> substitute</p>
<pre><code>kubectl create -f file.yaml --v=8
</code></pre>
| user2156115 | <p>Considering the question of the following post is:</p>
<blockquote>
<p>How can I send a request with <code>curl</code> that will create a resource in Kubernetes cluster with a specified <code>YAML</code> file.</p>
</blockquote>
<p>Dissecting the example from my comment:</p>
<blockquote>
<p>curl -X POST https://kubernetes/api/v1/namespaces/default/pods -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type: application/yaml" --data-binary @pod.yaml --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt</p>
</blockquote>
<ul>
<li><code>-X POST</code>:
<ul>
<li>send a <code>HTTP</code> request with <code>POST</code> method</li>
</ul>
</li>
<li><code>https://kubernetes/api/v1/namespaces/default/pods</code>:
<ul>
<li>location to send the request to</li>
</ul>
</li>
<li><code>-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"</code>:
<ul>
<li>set the header with a token for authorization</li>
</ul>
</li>
<li><code>-H "Content-Type: application/yaml"</code>:
<ul>
<li>set the header with a request body type (<code>YAML</code>)</li>
</ul>
</li>
<li><code>--data-binary @pod.yaml</code>:
<ul>
<li>specify the filename that will be send with a request</li>
</ul>
</li>
<li><code>--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code>:
<ul>
<li>use the specified certificate file to verify the peer</li>
</ul>
</li>
</ul>
<blockquote>
<p>Disclaimer!</p>
<p>This example assumes that the request comes from a <code>Pod</code> and all of the necessary permissions are assigned to a <code>ServiceAccount</code>.</p>
</blockquote>
<p>Creating different resources like <code>Deployments</code>, <code>Configmaps</code> will require to send the request to a different <code>URL</code> for example:</p>
<ul>
<li><code>Deployment</code>: <code>apis/apps/v1/namespaces/default/deployments</code></li>
<li><code>Configmap</code>: <code>api/v1/namespaces/default/configmaps</code></li>
<li>etc.</li>
</ul>
<blockquote>
<p>A side note!</p>
<p>You will also need to consider the <code>namespace</code> you are sending the request to as sending the resource to the wrong namespace will deny its creation.</p>
</blockquote>
<p>Following command should be helpful on how the <code>curl</code> should look like (path wise):</p>
<ul>
<li><code>kubectl create -f file_name.yaml -v=9</code></li>
</ul>
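<p>Putting it together for a different resource kind, a minimal sketch could look like the example below. It assumes the request is sent from a <code>Pod</code> whose <code>ServiceAccount</code> has permission to create <code>Deployments</code>, and that the manifest is stored in a hypothetical <code>deployment.yaml</code> file:</p>
<pre><code>curl -X POST https://kubernetes.default.svc.cluster.local/apis/apps/v1/namespaces/default/deployments \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  -H "Content-Type: application/yaml" \
  --data-binary @deployment.yaml \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
</code></pre>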
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://curl.se/docs/manpage.html" rel="nofollow noreferrer">Curl.se: Docs: Manpage</a></em></li>
<li><em><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Generated: Kubernetes-api: v1.20</a></em></li>
</ul>
| Dawid Kruk |
<p>I am trying to configure Istio control Plane to use zipkin as tracing backend, but I can't. In their docs, they state that in order to do this, I just have to pass the following parameters when installing Istio:</p>
<p><code>--set values.tracing.enabled=true</code> and <code>--set values.tracing.provider=zipkin</code>. My problem is that I have installed Istio manually.</p>
<p>I found the parameter <code>provider: jaeger</code> in the <code>Configmap</code> <code>istio-sidecar-injector</code>, and made the change, then killed the control plane so it would be re-deployed with zipkin, but didn't work.</p>
<p>Does anyone know what object/s should I manipulate to get zipkin?</p>
| suren | <p>By using the following <a href="https://istio.io/docs/setup/install/istioctl/#show-differences-in-profiles" rel="nofollow noreferrer">commands</a> I was able to generate the manifests using <a href="https://istio.io/docs/setup/getting-started/#download" rel="nofollow noreferrer"><code>istioctl</code></a> with the parameters you mentioned:</p>
<pre><code>$ istioctl manifest generate --set profile=demo --set values.tracing.enabled=true --set values.tracing.provider=zipkin > istio-demo-with-zipkin.yaml
</code></pre>
<pre><code>$ istioctl manifest generate --set profile=demo > istio-demo.yaml
</code></pre>
<p>Then I compared them to see the differences made by those parameter modifications.</p>
<pre><code>$ istioctl manifest diff istio-demo.yaml istio-demo-with-zipkin.yaml
Differences of manifests are:
Object ConfigMap:istio-system:istio-sidecar-injector has diffs:
data:
values:
tracing:
provider: jaeger -> zipkin
Object Deployment:istio-system:istio-tracing has diffs:
metadata:
labels:
app: jaeger -> zipkin
spec:
selector:
matchLabels:
app: jaeger -> zipkin
template:
metadata:
annotations:
prometheus.io/port: 14269 ->
prometheus.io/scrape: true ->
labels:
app: jaeger -> zipkin
spec:
containers:
'[?->0]': -> map[env:[map[name:POD_NAMESPACE valueFrom:map[fieldRef:map[apiVersion:v1
fieldPath:metadata.namespace]]] map[name:QUERY_PORT value:9411] map[name:JAVA_OPTS
value:-XX:ConcGCThreads=2 -XX:ParallelGCThreads=2 -Djava.util.concurrent.ForkJoinPool.common.parallelism=2
-Xms700M -Xmx700M -XX:+UseG1GC -server] map[name:STORAGE_METHOD value:mem]
map[name:ZIPKIN_STORAGE_MEM_MAXSPANS value:500000]] image:docker.io/openzipkin/zipkin:2.14.2
imagePullPolicy:IfNotPresent livenessProbe:map[initialDelaySeconds:200 tcpSocket:map[port:9411]]
name:zipkin ports:[map[containerPort:9411]] readinessProbe:map[httpGet:map[path:/health
port:9411] initialDelaySeconds:200] resources:map[limits:map[cpu:300m memory:900Mi]
requests:map[cpu:150m memory:900Mi]]]
'[0->?]': map[env:[map[name:POD_NAMESPACE valueFrom:map[fieldRef:map[apiVersion:v1
fieldPath:metadata.namespace]]] map[name:BADGER_EPHEMERAL value:false] map[name:SPAN_STORAGE_TYPE
value:badger] map[name:BADGER_DIRECTORY_VALUE value:/badger/data] map[name:BADGER_DIRECTORY_KEY
value:/badger/key] map[name:COLLECTOR_ZIPKIN_HTTP_PORT value:9411] map[name:MEMORY_MAX_TRACES
value:50000] map[name:QUERY_BASE_PATH value:/jaeger]] image:docker.io/jaegertracing/all-in-one:1.14
imagePullPolicy:IfNotPresent livenessProbe:map[httpGet:map[path:/ port:14269]]
name:jaeger ports:[map[containerPort:9411] map[containerPort:16686] map[containerPort:14250]
map[containerPort:14267] map[containerPort:14268] map[containerPort:14269]
map[containerPort:5775 protocol:UDP] map[containerPort:6831 protocol:UDP]
map[containerPort:6832 protocol:UDP]] readinessProbe:map[httpGet:map[path:/
port:14269]] resources:map[requests:map[cpu:10m]] volumeMounts:[map[mountPath:/badger
name:data]]] ->
volumes: '[map[emptyDir:map[] name:data]] ->'
Object Service:istio-system:jaeger-agent is missing in B:
Object Service:istio-system:jaeger-collector is missing in B:
Object Service:istio-system:jaeger-query is missing in B:
Object Service:istio-system:tracing has diffs:
metadata:
labels:
app: jaeger -> zipkin
spec:
ports:
'[0]':
targetPort: 16686 -> 9411
selector:
app: jaeger -> zipkin
Object Service:istio-system:zipkin has diffs:
metadata:
labels:
app: jaeger -> zipkin
spec:
selector:
app: jaeger -> zipkin
</code></pre>
<p>You can try to manually modify those settings in your existing installation, or apply the generated manifest to your cluster.</p>
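<p>For example, instead of generating the manifest and applying it manually, the same settings could be applied directly (a sketch, assuming your istioctl version supports <code>manifest apply</code>):</p>
<pre><code>$ istioctl manifest apply --set profile=demo --set values.tracing.enabled=true --set values.tracing.provider=zipkin
</code></pre>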
<p>The istioctl version I used to generate these manifests:</p>
<pre><code>$ istioctl version
client version: 1.4.3
control plane version: 1.4.3
data plane version: 1.4.3 (4 proxies)
</code></pre>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I created a jenkins master in my OKE cluster (a single node cluster). I wanted to try using Jenkins kubernetes plugin and connect jnlp-slaves.
<a href="https://i.stack.imgur.com/W3fSW.png" rel="nofollow noreferrer">Following are my kubernetes cloud configurations.</a></p>
<p><a href="https://i.stack.imgur.com/ZY17I.png" rel="nofollow noreferrer">Jenkins tunnel and URL info</a></p>
<p>Jenkins version that Im using is Jenkins 2.319.3. When I start the build, a pod is creating in my cluster and gives an error.</p>
<p>Following is the log of the created pod.</p>
<pre><code>Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: kube-agent-s8tw6
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Mar 17, 2022 5:49:10 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.11.2
Mar 17, 2022 5:49:10 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Mar 17, 2022 5:49:10 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://138.2.75.190:30000/]
Mar 17, 2022 5:49:10 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
Mar 17, 2022 5:49:10 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting TCP connection tunneling is enabled. Skipping the TCP Agent Listener Port availability check
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: http://138.2.75.190
Agent port: 50000
Identity: 5b:f2:ef:be:d1:98:05:91:83:3d:a5:61:b6:1b:93:e5
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to http://138.2.75.190:50000
Mar 17, 2022 5:49:10 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: null
java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Unknown Source)
at java.base/sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at java.base/java.nio.channels.SocketChannel.open(Unknown Source)
at org.jenkinsci.remoting.engine.JnlpAgentEndpoint.open(JnlpAgentEndpoint.java:206)
at hudson.remoting.Engine.connectTcp(Engine.java:880)
at hudson.remoting.Engine.innerRun(Engine.java:757)
at hudson.remoting.Engine.run(Engine.java:540)
</code></pre>
<p>I checked the ports and I disabled firewall in my node. Im still getting the error. Can someone help me to solve this?</p>
| IamM96 | <p>I faced the same error and solved it as follows.</p>
<p>In the error logs we can see that you defined the tunnel address as 'http://138.2.75.190:50000':</p>
<pre class="lang-bash prettyprint-override"><code>...
INFO: Connecting to http://138.2.75.190:50000
...
</code></pre>
<p>You added the 'http://' prefix to the Jenkins tunnel address, and this is what causes the error. The expected address must be in 'host:port' format, not 'protocol://host:port'.</p>
<p>So to fix this error, just remove the 'http://' prefix from the tunnel address. In your case it should be '138.2.75.190:50000'.</p>
<p>Change it on the Jenkins Kubernetes cloud configuration page.</p>
| eyEminYILDIZ |
<p>I am upgrading K8s from 1.15 to 1.16. Before I do it, I must migrate my daemonsets, deployment, statefulset etc. to the apps/v1 version. But when I do it, I don't understand the K8s behaviour.</p>
<p>Let's say, that we have a daemonset:</p>
<pre><code>apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: spot-interrupt-handler
namespace: kube-system
spec:
selector:
matchLabels:
app: spot-interrupt-handler
template:
metadata:
labels:
app: spot-interrupt-handler
spec:
serviceAccountName: spot-interrupt-handler
containers:
- name: spot-interrupt-handler
image: madhuriperi/samplek8spotinterrupt:latest
imagePullPolicy: Always
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SPOT_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
nodeSelector:
lifecycle: Ec2Spot
</code></pre>
<p>I change the first line to <strong>apps/v1</strong> and successfully apply this yaml to K8s. Nothing changes after it, the pods are not restarted.
I get this notification:</p>
<pre><code>daemonset.apps/spot-interrupt-handler configured
</code></pre>
<ol>
<li>Is it a normal behavior? Shouldn't it be restarted after I change the API version?</li>
</ol>
<p>Then I want to see, that this API version change is really applied to K8s etcd.</p>
<pre><code>kubectl get ds spot-interrupt-handler -n default -o yaml
</code></pre>
<p>And that is what I see at the start of the yaml definition:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"spot-interrupt-handler","namespace":"default"},"spec":{"selector":{"matchLabels":{"app":"spot-interrupt-handler"}},"template":{"metadata":{"labels":{"app":"spot-interrupt-handler"}},"spec":{"containers":[{"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"SPOT_POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"madhuriperi/samplek8spotinterrupt:latest","imagePullPolicy":"Always","name":"spot-interrupt-handler"}],"nodeSelector":{"lifecycle":"Ec2Spot"},"serviceAccountName":"spot-interrupt-handler"}}}}
creationTimestamp: "2021-02-09T08:34:33Z"
</code></pre>
<ol start="2">
<li>Why is <strong>extensions/v1beta1</strong> on the top? I expect it to be <strong>apps/v1</strong>.</li>
<li>I see, that the new version of API is in the last-applied-configuration. Does it mean, that this DaemonSet will work after the upgrade to 1.16?</li>
</ol>
<p>Thanks in advance</p>
| dice2011 | <p>I've reproduced your setup in a GKE environment, and after upgrading the kubernetes version from 1.15 to 1.16 the daemonset's <code>apiVersion</code> changed to <code>apiVersion: apps/v1</code>.</p>
<p>I started with GKE version <code>1.15.12</code> and applied your configuration. Once it was successfully applied I changed the apiVersion to <code>apps/v1</code>; <code>extensions/v1beta1</code> remained as the current <code>apiVersion</code>.</p>
<p>After upgrading the kubernetes version to
<code>version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.6000"</code> the DaemonSet is now <code>apps/v1</code>.</p>
<p>To check whether the same behavior would happen again, I created a DaemonSet and upgraded the kubernetes version without changing the apiVersion, and it changed to <code>apps/v1</code> by itself.</p>
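<p>To explicitly verify under which API version the object can be served, you can also request it with a fully-qualified resource name, for example:</p>
<pre><code>$ kubectl get daemonsets.v1.apps spot-interrupt-handler -n default -o yaml | head -n 2
</code></pre>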
| kool |
<p>I'm creating kubernetes service in Azure with Advanced Networking options where I've selected particular vNet and Subnet for it.</p>
<p>I'm getting error as below:</p>
<pre><code>{"code":"InvalidTemplateDeployment","message":"The template deployment failed with error: 'Authorization failed for template resource '<vnetid>/<subnetid>/Microsoft.Authorization/xxx' of type 'Microsoft.Network/virtualNetworks/subnets/providers/roleAssignments'. The client '<emailid>' with object id 'xxx' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/<subid>/resourceGroups/<rgid>/providers/Microsoft.Network/virtualNetworks/<vnetid>/subnets/<subnetid>/providers/Microsoft.Authorization/roleAssignments/xxx'.'."}
</code></pre>
<p>I've got contributor role.</p>
| Jaydeep Soni | <p>As per the following article, you will need Owner privileges over the vNet to change access to it.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#built-in-role-descriptions" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#built-in-role-descriptions</a></p>
| Architect Jamie |
<p>I have a question regarding assigning an IP from Azure VNET to nginx-ingress loadbalancer. I am a newbie and hence wanted to check if the approach I am thinking is possible. </p>
<p>We are planning to deploy an <strong>internal</strong> application in Azure Kubernetes. In order to minimise the use of Ips (since our team has a small number of IP addresses allocated through Azure VNET), we have gone for basic networking in AKS and are planning to update the Nginx Loadbalancer with an allocated IP from the Azure VNET. </p>
<p>Will this approach work? </p>
<p>The confusion I have is the AKS cluster I created uses basic networking, and it has automatically created its own VNET and NSG, however, the ip address allocated to us in the company belongs to a different Azure VNET. </p>
<p>The constraint I have is that I want to use a minimal number of IP addresses from our allocated IP range. I will be interested on how others are solving this issue.</p>
<p>Help much appreciated.</p>
| jack | <p>If you need to use the address space of an existing VNET, you need to select the <em>Advanced</em> network configuration when deploying the resource and select the existing VNET and subnet.</p>
<p>This way, applications will obtain an IP lease from your VNET's built-in DHCP server.</p>
<p>While there may be ways to facilitate routing to the cluster in the way you have deployed, using VNET peering, it makes for an unnecessarily complex architecture.</p>
<p>EDIT:</p>
<p>If you have no option but to deploy AKS to a different network segment because of IP allocation contraints you should still use the Advanced network option but create a new VNET with an address space which does not overlap with the existing VNET.</p>
<p>For example, if your production VNET address space is 10.0.0.0/8, then you must use an address space which fits inside 192.168.0.0/16 or 172.16.0.0/12 - non-overlapping.</p>
<p>Once this is deployed you must create a peering between the two VNETs. See <a href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview</a></p>
<p>Once your peering is configured you can create an internal load balancer in the VNET which has the IP contraints and set the backend pool to the AKS load balancer address for your application.</p>
<p><strong>This architecture is merely a workaround for your IP allocation constraints and is not recommended. There are 16m+ IP addresses available in the 10.0.0.0/8 address space. If you are constrained to a single IP address in this space then you either have a gigantic environment or your VNET setup is not optimal or accomodating</strong></p>
| Architect Jamie |
<p>I'm using redis with k8s 1.15.0, istio 1.4.3, it works well inside the network.</p>
<p>However when I tryed to use the istio gateway and sidecar to expose it to outside network, it failed.</p>
<p>Then I removed the istio sidecar and just started the redis server in k8s, it worked.</p>
<p>After searching I added DestinationRule to the config, but it didn't help.</p>
<p>So what's the problem of it? Thanks for any tips!</p>
<p>Here is my <code>redis.yml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: docker.io/redis:5.0.5-alpine
imagePullPolicy: IfNotPresent
ports:
- containerPort: 16379
protocol: TCP
name: redis-port
volumeMounts:
- name: redis-data
mountPath: /data
- name: redis-conf
mountPath: /etc/redis
command:
- "redis-server"
args:
- "/etc/redis/redis.conf"
- "--protected-mode"
- "no"
volumes:
- name: redis-conf
configMap:
name: redis-conf
items:
- key: redis.conf
path: redis.conf
- name: redis-data
nfs:
path: /data/redis
server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
name: redis-svc
labels:
app: redis-svc
spec:
type: ClusterIP
ports:
- name: redis-port
port: 16379
protocol: TCP
selector:
app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: redis-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: tcp
protocol: TCP
hosts:
- "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: redis-svc
spec:
host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: redis-vs
spec:
hosts:
- "redis.basic.svc.cluster.local"
gateways:
- redis-gateway
tcp:
- route:
- destination:
host: redis-svc.basic.svc.cluster.local
port:
number: 16379
</code></pre>
<p>Update:</p>
<p>This is how I make the request:</p>
<pre><code>[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
</code></pre>
| Woko | <p>There are few thing that need to be different in case of exposing TCP application with istio.</p>
<ol>
<li><p>The <code>hosts:</code> field needs to be <code>"*"</code>, as the <code>TCP</code> protocol works only with <code>IP:PORT</code>; there are no headers at L4.</p></li>

<li><p>There needs to be a <code>TCP</code> port <code>match</code> in your <code>VirtualService</code> that matches the <code>Gateway</code> port. I suggest naming it in a unique way and matching the <code>Deployment</code> port name.</p></li>

<li><p>I suggest avoiding port <code>80</code> as it is already used in the default ingress configuration and could result in a port conflict, so I changed it to <code>11337</code>.</p></li>
</ol>
<p>So Your <code>GateWay</code> should look something like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: redis-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 11337
name: redis-port
protocol: TCP
hosts:
- "*"
</code></pre>
<p>And Your <code>VirtualService</code> like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: redis-vs
spec:
hosts:
- "*"
gateways:
- redis-gateway
tcp:
- match:
- port: 11337
route:
- destination:
host: redis-svc
port:
number: 16379
</code></pre>
<p><em>Note that I removed namespaces for clarity.</em></p>
<p>Then, to add our custom port to the default ingress gateway, use the following command:</p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
<p>And add the following entry next to the other port definitions:</p>
<pre><code>- name: redis-port
nodePort: 31402
port: 11337
protocol: TCP
targetPort: 16379
</code></pre>
<hr>
<p>To access the exposed application, use the Istio ingress gateway's external IP and the port that we
just set up.</p>
<p>To get Your gateway external IP you can use:</p>
<pre><code>export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
</code></pre>
<pre><code>redis-cli -h $INGRESS_HOST -p 11337
</code></pre>
<p>If Your <code>istio-ingressgateway</code> does not have external IP assigned, use one of Your nodes IP address and port <code>31402</code>.</p>
<p>Hope this helps.</p>
| Piotr Malec |
<p>I would like to learn Kubernetes and would like to set it up on my laptop.</p>
<p>The architecture would be as follows:</p>
<p><a href="https://i.stack.imgur.com/MRVWw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MRVWw.png" alt="enter image description here"></a></p>
<ul>
<li>Create 4 Ubuntu 18.04 server VM's instances on my laptop</li>
<li>3 of the 4 VMs will form the Kubernetes cluster and 1 VM will be the base</li>
<li>Access via SSH the base VM</li>
</ul>
<p>For virtualization, I am using Virtual Box.</p>
<p>The question is, how to achieve it?</p>
| softshipper | <p>To set up a Kubernetes cluster on Ubuntu servers with VirtualBox and kubeadm, follow these steps: </p>
<h2>Prerequisites:</h2>
<ul>
<li>Virtual machines with specification of minimum:
<ul>
<li>2 cores and 2GB RAM for master node </li>
<li>1 core and 1GB for each of worker nodes </li>
</ul></li>
<li>Ubuntu Server 18.04 installed on all virtual machines </li>
<li>OpenSSH Server installed on all virtual machines </li>
</ul>
<p>All of the virtual machines need to be able to communicate with the Internet, the main host and each other. This can be done through various means like bridged networking, virtual host adapters, etc. The example networking scheme below can be adjusted. </p>
<p><a href="https://i.stack.imgur.com/iW04g.png" rel="nofollow noreferrer">Network scheme</a></p>
<h2>Ansible:</h2>
<p>You can do everything manually, but to speed up the configuration process you can use an automation tool like Ansible. It can be installed on the virtualization host, another virtual machine, etc. </p>
<h3>Installation steps to reproduce on host</h3>
<ul>
<li>Refresh the information about packages in repository:<br>
<code>$ sudo apt update</code> </li>
<li>Install package manager for Python3:<br>
<code>$ sudo apt install python3-pip</code> </li>
<li>Install Ansible package:<br>
<code>$ sudo pip3 install ansible</code> </li>
</ul>
<h2>Configuring SSH key based access:</h2>
<h3>Generating key pairs</h3>
<p>To be able to connect to virtual machines without password you need to configure ssh keys. Command below will create a pair of ssh keys (private and public) and allow you to login to different systems without providing password.<br>
<code>$ ssh-keygen -t rsa -b 4096</code><br>
These keys will be created in default location: <strong>/home/USER/.ssh</strong></p>
<h3>Authorization of keys on virtual machines</h3>
<p>Next step is to upload newly created ssh keys to all of the virtual machines.<br>
<strong>For each of virtual machine you need to invoke:</strong><br>
<code>$ ssh-copy-id USER@IP_ADDRESS</code><br>
This command will copy your public key to the authorized_keys file and will allow you to login without password. </p>
<h3>SSH root access</h3>
<p>By default the root account can't be accessed over SSH with a password, but it can be accessed with SSH keys (which you created earlier). Assuming the default configuration, you can copy the ssh directory from the user's home directory to the root home directory.</p>
<p><strong>This step needs to invoked on all virtual machines:</strong><br>
<code>$ sudo cp -r /home/USER/.ssh /root/</code> </p>
<p>You can check it by running below command on main host:<br>
<code>$ ssh root@IP_ADDRESS</code> </p>
<p>If you can connect without password it means that the keys are configured correctly. </p>
<h2>Checking connection between virtual machines and Ansible:</h2>
<h3>Testing the connection</h3>
<p>You need to check if Ansible can connect to all of the virtual machines. To do that you need 2 things: </p>
<ul>
<li><strong>Hosts</strong> file with information about hosts (virtual machines in that case) </li>
<li><strong>Playbook</strong> file with statements what you require from Ansible to do </li>
</ul>
<p>Example hosts file: </p>
<pre><code>[kubernetes:children]
master
nodes
[kubernetes:vars]
ansible_user=root
ansible_port=22
[master]
kubernetes-master ansible_host=10.0.0.10
[nodes]
kubernetes-node1 ansible_host=10.0.0.11
kubernetes-node2 ansible_host=10.0.0.12
kubernetes-node3 ansible_host=10.0.0.13
</code></pre>
<p>Hosts file consists of 2 main groups of hosts:</p>
<ul>
<li>master - group created for master node </li>
<li>nodes - group created for worker nodes </li>
</ul>
<p>Variables specific to group are stored in section <strong>[kubernetes:vars]</strong>. </p>
<p>Example playbook:</p>
<pre><code>- name: Playbook for checking connection between hosts
hosts: all
gather_facts: no
tasks:
- name: Task to check the connection
ping:
</code></pre>
<p>Main purpose of above playbook is to check connection between host and virtual machines.<br>
You can test the connection by invoking command:<br>
<code>$ ansible-playbook -i hosts_file ping.yaml</code> </p>
<p>Output of this command should be like this: </p>
<pre class="lang-sh prettyprint-override"><code>PLAY [Playbook for checking connection between hosts] *****************************************************
TASK [Task to check the connection] ***********************************************************************
ok: [kubernetes-node1]
ok: [kubernetes-node2]
ok: [kubernetes-node3]
ok: [kubernetes-master]
PLAY RECAP ************************************************************************************************
kubernetes-master : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes-node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
<p>The output above proves that connection between Ansible and virtual machines have been successful. </p>
<h2>Configuration before cluster deployment:</h2>
<h3>Configure hostnames</h3>
<p>Hostnames can be configured with Ansible. Each VM should be able to reach the other VMs by hostname. Ansible can modify the hostnames as well as the /etc/hosts file.
Example playbook: <a href="https://pastebin.com/4KZWVA3B" rel="nofollow noreferrer">hostname.yaml</a></p>
<h3>Disable SWAP</h3>
<p>Swap needs to be disabled when working with Kubernetes.
Example playbook: <a href="https://pastebin.com/6W1ZcTTp" rel="nofollow noreferrer">disable_swap.yaml</a></p>
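<p>For illustration, a minimal sketch of such a playbook (not necessarily identical to the linked one) could look like this:</p>
<pre><code>- name: Playbook for disabling swap
  hosts: all
  tasks:
    - name: Disable swap for the current session
      command: swapoff -a

    - name: Comment out swap entries in /etc/fstab so the change survives reboots
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'
</code></pre>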
<h3>Additional software installation</h3>
<p>Some packages are required before provisioning. All of them can be installed with Ansible:<br>
Example playbook: <a href="https://pastebin.com/NHTRiFiF" rel="nofollow noreferrer">apt_install.yaml</a></p>
<h3>Container Runtime Interface</h3>
<p>In this example you will install Docker as your CRI.
Playbook <a href="https://pastebin.com/CP6SXw2Z" rel="nofollow noreferrer">docker_install.yaml</a> will:</p>
<ul>
<li>Add apt signing key for Docker</li>
<li>Add Docker's repository </li>
<li>Install Docker with specific version (latest recommended) </li>
</ul>
<h3>Docker configuration</h3>
<blockquote>
<p><strong>[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"</strong></p>
</blockquote>
<p>When deploying Kubernetes cluster kubeadm will give above warning about Docker cgroup driver. Playbook <a href="https://pastebin.com/PwijtqAh" rel="nofollow noreferrer">docker_configure.yaml</a> was created to resolve this issue. </p>
<h3>Kubernetes tools installation</h3>
<p>There are some core components of Kubernetes that need to be installed before cluster deployment. Playbook <a href="https://pastebin.com/HgzJGzN8" rel="nofollow noreferrer">kubetools_install.yaml</a> will: </p>
<ul>
<li>For master and worker nodes:
<ul>
<li>Add apt signing key for Kubernetes</li>
<li>Add Kubernetes repository </li>
<li>Install kubelet and kubeadm</li>
</ul></li>
<li>Additionally for master node:
<ul>
<li>Install kubectl </li>
</ul></li>
</ul>
<h3>Reboot</h3>
<p>Playbook <a href="https://pastebin.com/ppKKw81k" rel="nofollow noreferrer">reboot.yaml</a> will reboot all the virtual machines. </p>
<h2>Cluster deployment:</h2>
<h3>Cluster initalization</h3>
<p>After successfully completing all the steps above, the cluster can be created. The command below will initialize a cluster: </p>
<p><code>$ kubeadm init --apiserver-advertise-address=IP_ADDRESS_OF_MASTER_NODE --pod-network-cidr=192.168.0.0/16</code></p>
<p>Kubeadm can give a warning about the number of CPUs. It can be ignored by passing an additional argument to the kubeadm init command:
<code>--ignore-preflight-errors=NumCPU</code></p>
<p>Successful kubeadm provisioning should output something similar to this: </p>
<pre><code>Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre>
<p>Copy the kubeadm join command; it will be run later on all the worker nodes: </p>
<pre><code>kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre>
<p>Run commands below as regular user: </p>
<pre><code> mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<h3>Deploying Container Network Interface (CNI)</h3>
<p>CNI is responsible for networking between pods and nodes. There are many examples like: </p>
<ul>
<li>Flannel</li>
<li>Calico</li>
<li>Weave</li>
<li>Multus</li>
</ul>
<p>Command below will install Calico:</p>
<p><code>$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml</code></p>
<h3>Provisioning worker nodes</h3>
<p>Run previously stored command from kubeadm init output <strong>on all worker nodes</strong>: </p>
<pre><code>kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
--discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre>
<p>All of the worker nodes should output: </p>
<pre><code>This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
</code></pre>
<h2>Testing:</h2>
<p>Run below command on master node as regular user to check if nodes are properly connected: </p>
<p><code>$ kubectl get nodes</code></p>
<p>Output of this command: </p>
<pre><code>NAME STATUS ROLES AGE VERSION
kubernetes-master Ready master 115m v1.16.2
kubernetes-node1 Ready <none> 106m v1.16.2
kubernetes-node2 Ready <none> 105m v1.16.2
kubernetes-node3 Ready <none> 105m v1.16.2
</code></pre>
<p>The above output confirms that all the nodes are configured correctly. </p>
<p><strong>Pods can now be deployed on the cluster!</strong></p>
| Dawid Kruk |
<p>I performed the steps in <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/start/</a> to properly install minikube locally in my ubuntu running on my VM:</p>
<pre><code>pcname@ubuntu:~$curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
</code></pre>
<p>also</p>
<pre><code>pcname@ubuntu:~$sudo install minikube-linux-amd64 /usr/local/bin/minikube
</code></pre>
<p>an empty command line is returned. When I type <code>minikube start</code>, as if it were installed properly, I get the following error:</p>
<blockquote>
<p>😄 minikube v1.22.0 on Ubuntu 20.04 👎 Unable to pick a default
driver. Here is what was considered, in preference order:
▪ docker: Not installed: exec: "docker": executable file not found in $PATH
▪ kvm2: Not installed: exec: "virsh": executable file not found in $PATH
▪ podman: Not installed: exec: "podman": executable file not found in $PATH
▪ vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in $PATH
▪ virtualbox: Not installed: unable to find VBoxManage in $PATH</p>
<p>❌ Exiting due to DRV_NOT_DETECTED: No possible driver was detected.
Try specifying --driver, or see
<a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/start/</a></p>
</blockquote>
<p>Notice: Before those, I followed all steps in <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/</a> and I installed kubectl and verified it is installed by:</p>
<pre><code>pcname@ubuntu:kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe3***************", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>even if the 3rd step failed.</p>
<p>What is wrong? I am just trying to install kubectl and then minikube. Do I have to install Docker as it is suggested when I type "minikube start" ?</p>
| JosephFourier88 | <p><strong>Solution</strong></p>
<ul>
<li><code>sudo chmod 755 /usr/local/bin/minikube</code> to give execute permission to minikube</li>
<li><code>minikube version</code> => it gave the version</li>
<li><code>minikube start</code> => it did not work.</li>
<li>Figured out that I have to install VirtualBox, so I typed:</li>
</ul>
<pre><code>sudo apt install virtualbox virtualbox-ext-pack
</code></pre>
<p><code>minikube start</code> [again] and the installation started and went on without a hitch.</p>
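<p>If the driver is still not picked up automatically after installing VirtualBox, it can also be selected explicitly, for example:</p>
<pre><code>minikube start --driver=virtualbox
</code></pre>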
| JosephFourier88 |
<p>I'm trying to add a Consul ingress to my project, and I'm using this GitHub repo as a doc for the UI and ingress: <a href="https://github.com/hashicorp/consul-helm/blob/497ebbf3cf875b028006a0453d6232df50897d42/values.yaml#L610" rel="nofollow noreferrer">here</a>. As you can see, unfortunately there is no ingress in the doc; there is an <code>ingressGateways</code> entry, which is not useful because it doesn't create an Ingress inside Kubernetes (it can only expose a URL to the outside).</p>
<p>I have searched a lot, there are 2 possible options:</p>
<p>1: create extra deployment for ingress</p>
<p>2: create consul helm chart to add ingress deploy</p>
<p>(unfortunately I couldn't find a proper solution for this on the Internet)</p>
| sasan | <p>Here is an example Docker compose file which configures Traefik to expose an entrypoint named <code>web</code> which listens on TCP port 8000, and integrates Traefik with Consul's service catalog for endpoint discovery.</p>
<pre class="lang-yaml prettyprint-override"><code># docker-compose.yaml
---
version: "3.8"
services:
consul:
image: consul:1.8.4
ports:
- "8500:8500/tcp"
traefik:
image: traefik:v2.3.1
ports:
- "8000:8000/tcp"
environment:
TRAEFIK_PROVIDERS_CONSULCATALOG_CACHE: 'true'
TRAEFIK_PROVIDERS_CONSULCATALOG_STALE: 'true'
TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_ADDRESS: http://consul:8500
TRAEFIK_PROVIDERS_CONSULCATALOG_EXPOSEDBYDEFAULT: 'false'
TRAEFIK_ENTRYPOINTS_web: 'true'
TRAEFIK_ENTRYPOINTS_web_ADDRESS: ":8000"
</code></pre>
<p>Below is a Consul service registration file which registers an application named <code>web</code> which is listening on port 80. The service registration includes a couple tags which instructs Traefik to expose traffic to the service (<code>traefik.enable=true</code>) over the entrypoint named <code>web</code>, and creates the associated routing config for the service.</p>
<pre><code>service {
name = "web"
port = 80
tags = [
"traefik.enable=true",
"traefik.http.routers.web.entrypoints=web",
"traefik.http.routers.web.rule=Host(`example.com`) && PathPrefix(`/myapp`)"
]
}
</code></pre>
<p>This can be registered into Consul using the CLI (<code>consul service register web.hcl</code>). Traefik will then discover this via the catalog integration, and configure itself based on the routing config specified in the tags.</p>
<p>HTTP requests received by Traefik on port 8000 with an <code>Host</code> header of <code>example.com</code> and path of <code>/myapp</code> will be routed to the <code>web</code> service that was registered with Consul.</p>
<p>Example curl command.</p>
<pre class="lang-sh prettyprint-override"><code>curl --header "Host: example.com" http://127.0.0.1:8000/myapp
</code></pre>
<p>This is a relatively basic example that is suitable for dev/test. You will need to define additional Traefik config parameters if you are deploying into a production Consul environment which is typically secured by access control lists (ACLs).</p>
| Blake Covarrubias |
<p>I deployed Istio to k8s and it worked well at first, but after one day I can't access the app via the ingress gateway. I then checked the Istio svc status, and it shows the external IP of the Istio ingress gateway is pending.</p>
<p>I checked the logs and events of the service, but there is nothing. What's the most likely cause of the issue?</p>
<p>the external ip stay pending:</p>
<p><img src="https://i.stack.imgur.com/3euP8.png" alt="the external ip stay pending"></p>
| zzg | <p>This is most likely caused by using platform that does not provide an external loadbalancer to istio ingress gateway.</p>
<p>According to <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">istio</a> documentation:</p>
<blockquote>
<p>If the <code>EXTERNAL-IP</code> value is set, your environment has an external load balancer that you can use for the ingress gateway. If the <code>EXTERNAL-IP</code> value is <code><none></code> (or perpetually <code><pending></code>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">node port</a>.</p>
</blockquote>
<hr />
<p>Follow these instructions if you have determined that your environment has an external load balancer.</p>
<p>Set the ingress IP and ports:</p>
<pre><code>export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
</code></pre>
<p>In certain environments, the load balancer may be exposed using a host name, instead of an IP address. In this case, the ingress gateway’s <code>EXTERNAL-IP</code> value will not be an IP address, but rather a host name, and the above command will have failed to set the <code>INGRESS_HOST</code> environment variable. Use the following command to correct the <code>INGRESS_HOST</code> value:</p>
<pre><code>export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
</code></pre>
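<p>If it turns out that your environment does not provide an external load balancer at all (which would explain a perpetually pending <code>EXTERNAL-IP</code>), you can fall back to the service's node port instead. A sketch of setting the variables in that case, following the same Istio documentation:</p>
<pre><code>export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
</code></pre>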
| Piotr Malec |
<p>I have a Kubernetes cluster with two applications which provide web-frontends, and I would like to make both of them accessible through an NGINX ingress controller. This is the relevant part of my ingress.yaml:</p>
<pre><code>tls:
- hosts:
- myapp.com
secretName: my-certificate
rules:
- host: myapp.com
http:
paths:
- backend:
serviceName: myapp2-service
servicePort: 12345
path: /myapp2/(.*)
- backend:
serviceName: myapp1-service
servicePort: 80
path: /(.*)
</code></pre>
<p>With this setup, I can reach the frontend of myapp1 through the URL myapp.com. When I change it to</p>
<pre><code> paths:
- backend:
serviceName: myapp2-service
servicePort: 12345
path: /(.*)
- backend:
serviceName: myapp1-service
servicePort: 80
path: /(.*)
</code></pre>
<p>I can reach the frontend of myapp2 through the URL myapp.com.</p>
<p>What I want to achieve is that I can reach the frontend of myapp1 through myapp.com and the frontend of myapp2 through myapp.com/myapp2. Is that possible? And if so, where is my mistake? As I've said, the frontend of myapp2 is basically accessible, just not through a sub-URL.</p>
| TigersEye120 | <p>Your path is configured to <code>/myapp2/(.*)</code> so <code>myapp.com/myapp2</code> does not match that. </p>
<p>Right now <code>myapp.com/myapp2</code> goes to <code>myapp1-service</code> looking for <code>/myapp2</code> content.</p>
<p>You can configure the trailing <code>/</code> to be optional, as shown below. Note that this will also affect other path strings that start with <code>myapp2</code>.</p>
<pre><code>tls:
- hosts:
- myapp.com
secretName: my-certificate
rules:
- host: myapp.com
http:
paths:
- backend:
serviceName: myapp2-service
servicePort: 12345
path: /myapp2(/|$)(.*)
- backend:
serviceName: myapp1-service
servicePort: 80
path: /(.*)
</code></pre>
<p>It is also possible to rewrite <code>myapp.com/myapp2</code> to <code>myapp.com/myapp2/</code>, but it is a little bit more complicated on the free version of the nginx ingress controller.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>My windows 10 is <a href="https://i.stack.imgur.com/Hym11.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hym11.png" alt="enter image description here"></a></p>
<p>I installed docker edge from the docker website <a href="https://hub.docker.com/editions/community/docker-ce-desktop-windows" rel="nofollow noreferrer">https://hub.docker.com/editions/community/docker-ce-desktop-windows</a></p>
<p>Initially i tried install Docker, with Windows container setting check box On but the Kubernetes tab did not appear.</p>
<p>Then i Uninstalled and install docker with Linux container option and i could see the Kubernetes tab.</p>
<p><a href="https://i.stack.imgur.com/IjPhl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IjPhl.png" alt="enter image description here"></a></p>
<p>Can i NOT run Kubernetes for windows containers?</p>
| JemHah | <p>Running kubernetes on Windows 10 is little bit more complicated than it seems. Luckily You have Windows 10 Pro so there are options.</p>
<p>There is an <a href="https://learnk8s.io/blog/installing-docker-and-kubernetes-on-windows" rel="nofollow noreferrer">article</a> which explains everything in detail and also offers different types of workarounds and alternatives.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I am designing a Kubernetes system which will require storing audio files. To do this I would like to set up a persistent storage volume making use of a StatefulSet.</p>
<p>I have found a few tutorials on how to set something like this up, but I am unsure how to read/write the files once I have created it. What would be the best approach to do this? I will be using a Flask app, but if I could just get a high-level approach then I can find the exact libraries myself.</p>
| EoinHanan | <p>Setting aside how it should be implemented programming-wise and any specific tuning for dealing with audio files, you can use your <code>Persistent Volume</code> the same way as you would read/write data to any directory (as correctly pointed out by user @zerkms in the comments).</p>
<p>Answering this specific part of the question:</p>
<blockquote>
<p>but I am unsure once I have created it how to read/write the files.</p>
</blockquote>
<p>Assuming that you've created your <code>StatefulSet</code> in a following way:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ubuntu-sts
spec:
selector:
matchLabels:
app: ubuntu # has to match .spec.template.metadata.labels
serviceName: "ubuntu"
replicas: 1
template:
metadata:
labels:
app: ubuntu
spec:
terminationGracePeriodSeconds: 10
containers:
- name: ubuntu
image: ubuntu
command:
- sleep
- "infinity"
volumeMounts:
- name: audio-volume
mountPath: /audio
volumeClaimTemplates:
- metadata:
name: audio-volume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: standard
resources:
requests:
storage: 1Gi
</code></pre>
<p>Take a look on below part (it's showing where your <code>Volume</code> will be mounted):</p>
<pre><code> volumeMounts:
- name: audio-volume
mountPath: /audio # <-- HERE!
</code></pre>
<blockquote>
<p>Disclaimer!</p>
<p>This example is having the 1:1 <code>Pod</code> to <code>Volume</code> relation. If your use case is different you will need to refer to the Kubernetes documentation about <code>accessModes</code>.</p>
</blockquote>
<p>You can <code>exec</code> into this <code>Pod</code> to look how you can further develop your application:</p>
<ul>
<li><code>$ kubectl exec -it ubuntu-sts-0 -- /bin/bash</code></li>
<li><code>$ echo "Hello from your /audio directory!" > /audio/hello.txt</code></li>
<li><code>$ cat /audio/hello.txt</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>root@ubuntu-sts-0:/# cat /audio/hello.txt
Hello from your /audio directory!
</code></pre>
<hr />
<blockquote>
<p>A side note!</p>
<p>If it happens that you are using the cloud-provider managed Kubernetes cluster like <code>GKE</code>, <code>EKS</code> or <code>AKS</code>, please refer to it's documentation about storage options.</p>
</blockquote>
<p>I encourage you to check the official documentation on <code>Persistent Volumes</code>:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes</a></em></li>
</ul>
<p>Also, please take a look on documentation regarding <code>Statefulset</code>:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Controllers: Statefulset</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://www.guru99.com/reading-and-writing-files-in-python.html" rel="nofollow noreferrer">Guru99.com: Reading and writing files in Python</a></em></li>
</ul>
| Dawid Kruk |
<p>How do I deploy a Helm release for the first time when there's already a deployment, svc, etc. running with the same name?</p>
<p>Is there any way to import the running config, which is not being handled by Helm?</p>
<p>Or is deleting the objects with the same name the only way to deploy the Helm release the first time? (I don't want to change the release names because it would break the communication between the microservices.)
Deleting the objects will cause downtime and I want to avoid that.</p>
<p>Error getting while deploying with the same name:</p>
<pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
</code></pre>
<p>Is there any other approach?</p>
<p>Thanks</p>
| Ankit Arora | <p>Addressing the error message and part of the question:</p>
<blockquote>
<p>How to deploy the helm release for the first time when there's already the deployment, svc, etc. running with the same name.</p>
</blockquote>
<p>You can't deploy resources with Helm that weren't created by Helm. It will give you the same message as you've encountered. You can annotate the existing resources that <strong>were not</strong> added by Helm to "import" the existing resources and act on them. <strong>Please try to run your workload on a test environment first before trying it as it could redeploy some resources.</strong></p>
<p>There is already similar answer on how to annotate resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/62528643/12257134">Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl</a></em></li>
</ul>
<blockquote>
<p>see this feature of helm3 <a href="https://github.com/helm/helm/pull/7649" rel="noreferrer">Adopt resources into release with correct instance and managed-by labels</a></p>
<p>Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.</p>
<p>In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:</p>
<pre><code>KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
</code></pre>
</blockquote>
<hr />
<p>Assuming following situation:</p>
<ul>
<li><code>Deployment</code> created outside of Helm (example below).</li>
<li>Helm Chart with equivalent templated <code>Deployment</code> in <code>templates/</code> (example below).</li>
</ul>
<p>Creating below <code>Deployment</code> without Helm:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Assuming that above file is used with <code>kubectl apply</code> and it's also residing in <code>templates/</code> directory (templated) of your Chart, you will get the following error (when you try to run <code>$ helm install release_name .</code>):</p>
<pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
</code></pre>
<p>By running the script mentioned in the answer I linked, you can annotate and label your resources so that Helm no longer produces the error message above.</p>
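<p>As an illustration only (the resource kind and names below are taken from the error message in the question: <code>Service</code> <code>abc</code> in namespace <code>default</code>, release <code>abc</code>), the commands would look roughly like this:</p>
<pre class="lang-sh prettyprint-override"><code># values taken from the error message in the question; adjust them to your actual release and resources
kubectl -n default annotate service abc meta.helm.sh/release-name=abc
kubectl -n default annotate service abc meta.helm.sh/release-namespace=default
kubectl -n default label service abc app.kubernetes.io/managed-by=Helm
</code></pre>
<p>Every already-existing resource that the Chart renders (Deployments, Services, etc.) needs to be annotated and labeled this way before the install will succeed.</p>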
<p>After that you can run <code>$ helm install release_name .</code> and provision your resources with desired changes.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://jacky-jiang.medium.com/import-existing-resources-in-helm-3-e27db11fd467" rel="noreferrer">Jacky-jiang.medium.com: Import existing resources in Helm3</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm using Grafana dashboards packaged with <a href="https://marketplace.digitalocean.com/apps/kubernetes-monitoring-stack" rel="nofollow noreferrer">DigitalOcean Kubernetes Monitoring Stack</a>. I would like my pod select dropdowns in dashboards to hide terminated pods, as I have a lot of them in there and only care about the running ones. I presume I should edit the queries somewhere but am not familiar with them so appreciate any pointers, thanks!</p>
| demiters | <blockquote>
<p>To change this in Grafana, open the <code>Variables</code> menu, then change <code>$pod</code> variable to refresh <code>On Time Range Change</code>:</p>
<p><a href="https://i.stack.imgur.com/qStHB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qStHB.png" alt="enter image description here" /></a></p>
</blockquote>
<p>Source: <a href="https://devops.stackexchange.com/questions/6308/keeping-graphs-of-terminated-kubernetes-pods-in-prometheus-grafana">https://devops.stackexchange.com/questions/6308/keeping-graphs-of-terminated-kubernetes-pods-in-prometheus-grafana</a></p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I currently have an Airflow deployment hosted on an EKS cluster, and want it to run a report that will check the logging for another deployment and alert me if any errors have occurred.</p>
<p>Locally I'm able to run this without issue as I can just point the k8s python api to my kubeconfig, however this doesn't work once deployed as there isn't a $Home/.kube directory with the kubeconfig on the pod.</p>
<pre><code> with client.ApiClient(config.load_kube_config(config_file=k8s_config_file)) as api_client:
api_instance = client.CoreV1Api(api_client)
</code></pre>
<p>I've tried removing the load_kube_config command, however this just throws a connection refused error, presumably because it now doesn't know about any cluster, although it resides in one...</p>
<p>I assume putting the kubeconfig on the deployment wouldn't be a good practice.</p>
<p>How can I get airflow to use the kubeconfig of the cluster it's hosted on?
Or is there an alternative I'm missing...</p>
| davo777 | <p>Answering some concerns from the question:</p>
<blockquote>
<p>I've tried removing the load_kube_config command, however this just throws a connection refused error, presumably because it now doesn't know about any cluster, although it resides in one...</p>
</blockquote>
<p>To run your code inside of the cluster (from a <code>Pod</code>) you will need to switch:</p>
<ul>
<li><strong>from:</strong> <code>config.load_kube_config()</code></li>
<li><strong>to:</strong> <code>config.load_incluster_config()</code></li>
</ul>
<p>Please read below as I addressed the rest of the setup needed to run Kubernetes Python API library code inside the cluster.</p>
<hr />
<blockquote>
<p>How can I get airflow to use the kubeconfig of the cluster it's hosted on? Or is there an alternative I'm missing...</p>
</blockquote>
<p>In fact there is a solution that you are missing:</p>
<p><strong>You will need to use a <code>ServiceAccount</code> with proper <code>Roles</code> and <code>RoleBindings</code>.</strong></p>
<p>Let me explain it a bit more and add an example to follow:</p>
<hr />
<h3>Explanation:</h3>
<p>To run such setup as I described above you will need to refer to following Kubernetes docs:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account</a></em> - for <code>ServiceAccount</code></li>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: RBAC</a></em> - for <code>Role</code> and <code>RoleBinding</code></li>
</ul>
<p>As stated in the official documentation:</p>
<blockquote>
<p>When you (a human) access the cluster (for example, using <code>kubectl</code>), you are authenticated by the apiserver as a particular User Account. Processes in containers inside pods can also contact the apiserver. <strong>When they do, they are authenticated as a particular Service Account (for example, default)</strong>.</p>
</blockquote>
<p>You will need to add permissions to your <code>ServiceAccount</code> with <code>Roles</code> and <code>RoleBidings</code> to allow it to query the Kubernetes API server. For example you will need to add permissions to list <code>Pods</code>.</p>
<hr />
<h3>Example:</h3>
<p>I've already answered quite lengthily a similar case on Serverfault. I encourage you to check it out:</p>
<ul>
<li><em><a href="https://serverfault.com/questions/1041268/starting-a-container-on-a-kubernetes-cluster-from-another-container/1041787#1041787">Serverfault.com: Starting a container on a Kubernetes cluster from another container</a></em></li>
</ul>
<p>I've allowed myself to copy and alter some of the parts of this answer:</p>
<blockquote>
<h3>Create a <code>ServiceAccount</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: python-job-sa
</code></pre>
<p>This <code>ServiceAccount</code> will be used with the <code>Deployment/Pod</code> that will host your Python code.</p>
<h3>Assign specific permissions to your <code>ServiceAccount</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: python-job-role
rules:
# This will give you access to pods
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
# This will give you access to pods logs
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch"]
</code></pre>
<p>This is a <code>Role</code> that allows to query the Kubernetes API for the resources like > <code>Pods</code>.</p>
<h3>Bind your <code>Role</code> to a <code>ServiceAccount</code></h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: python-job-rolebinding
namespace: default
subjects:
- kind: ServiceAccount
name: python-job-sa
namespace: default
roleRef:
kind: Role
name: python-job-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
</blockquote>
<p>After applying those rules you can use the <code>serviceAccount: python-job-sa</code> in your <code>Deployment</code> manifest (in <code>.spec.template.spec</code>) and query Kubernetes API like below:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_incluster_config() # <-- IMPORTANT
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_namespaced_pod("default")
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>Listing pods with their IPs:
10.88.0.12 default nginx-deployment-d6bcfb88d-q8s8s
10.88.0.13 default nginx-deployment-d6bcfb88d-zbdm6
10.88.0.11 default cloud-sdk
</code></pre>
<hr />
<p>Additonal resources:</p>
<ul>
<li><em><a href="https://blog.meain.io/2019/accessing-kubernetes-api-from-pod/" rel="nofollow noreferrer">Blog.meain.io: Accessing Kubernetes API from pod</a></em></li>
</ul>
| Dawid Kruk |
<p>I have installed Minikube on my laptop. I am trying to run the Istio. I have followed the instructions given here.
<a href="https://istio.io/docs/setup/getting-started/" rel="nofollow noreferrer">https://istio.io/docs/setup/getting-started/</a></p>
<p>My setup is as below.</p>
<ul>
<li>Microsoft Windows 10 Pro </li>
<li>Minikube version 1.5.2</li>
<li>Kubernetes version 1.16.2</li>
<li>Istio version 1.4</li>
</ul>
<p>When I run the command as given in the documentation</p>
<pre><code> istioctl manifest apply --set profile=demo
</code></pre>
<p>It fails immediately giving below error.</p>
<blockquote>
<p>Could not configure logs: couldn't open sink "/dev/null": open
/dev/null: The system cannot find the path specified.</p>
</blockquote>
<p>Has anybody faced this issue? Thanks in advance for any pointers.</p>
| Shashank Dixit | <p>There is no <code>/dev/null</code> device on Windows, which is what <code>istioctl</code> tries to open as its default log sink, hence the error. Adding the <code>--logtostderr</code> flag sends the logs to stderr instead:</p>
<pre><code>istioctl manifest apply --set profile=demo --logtostderr
</code></pre>
| Iain |
<p>I am deploying prometheus using stable/prometheus-operator chart. It is installed in <code>monitoring</code> namespace. In the <code>default</code> namespace I have a pod running named <code>my-pod</code> with three replicas. This pod spits out metrics on port 9009 (I have verified this by doing k port-forward and validating the metrics show up in localhost:9009). I would like prometheus-operator to scrape these metrics. So I added the configuration below to <code>values.yaml</code></p>
<pre><code>prometheus:
prometheusSpec:
additionalScrapeConfigs:
- job_name: 'my-pod-job'
scrape_interval: 15s
kubernetes_sd_configs:
- role: pod
namespaces:
names:
- default
relabel_configs:
- source_labels: [__meta_kubernetes_pod_name]
action: keep
regex: 'my-pod'
</code></pre>
<p>I then install prometheus using the command below:</p>
<pre><code>helm upgrade --install prometheus stable/prometheus-operator \
--set kubeEtcd.enabled=false \
--set kubeControllerManager.enabled=false \
--set kubeScheduler.enabled=false \
--set prometheusOperator.createCustomResource=true \
--set grafana.smtp.existingSecret=smtp-secret \
--set kubelet.serviceMonitor.https=true \
--set kubelet.enabled=true \
-f values.yaml --namespace monitoring
</code></pre>
<p>However, when I go to <code>/service-discovery</code> I see </p>
<pre><code>my-pod-job (0/40 active targets)
</code></pre>
<p><strong>Question</strong></p>
<p>How can I configure prometheus such that it scrapes metrics from pods running in default namespace and spitting out metrics on port 9009?</p>
| Anthony | <p>To tell Prometheus to scrape the pods, add these annotations to the pod template:</p>
<pre><code>...
template:
metadata:
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9009'
</code></pre>
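<p>Note that these annotations only take effect if a scrape job actually consumes them. A commonly used pod-level job (a sketch based on the conventional <code>prometheus.io/*</code> annotations, not something the prometheus-operator chart enables by default) looks like this and could be added under <code>additionalScrapeConfigs</code> just like the job in the question:</p>
<pre class="lang-yaml prettyprint-override"><code>- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # scrape the port given in prometheus.io/port instead of the declared container port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
</code></pre>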
| gears |
<p>Good Morning. I have a GRPC server that I want to serve on Google Kubernetes Engine. My cluster already has the <code>nginx-ingress</code> controller installed, and I'm currently using this to serve http/https traffic. This is the ingress resource I've tried to make to host the GRPC server:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: grpc-ingress-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/grpc-backend: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
namespace: default
spec:
tls:
- hosts:
- bar.foo.com
secretName: reploy-tls
rules:
- host: bar.foo.com
http:
paths:
- backend:
serviceName: bar-backend-service
servicePort: 50051
</code></pre>
<p>And here's the service/deployment for the app:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bar-backend-service
namespace: default
spec:
selector:
app: bar-backend-app
ports:
- port: 50051
targetPort: 50051
name: grpc
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: bar-backend
labels:
app: bar-backend-app
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: bar-backend-app
spec:
containers:
- name: bar-backend-image
image: gcr.io/himank-jay/bar-backend-image:TAG
ports:
- containerPort: 50051
name: grpc
</code></pre>
<p>When I run <code>grpc_cli ls bar.foo.com:443</code> (using <a href="https://github.com/grpc/grpc/blob/master/doc/command_line_tool.md" rel="nofollow noreferrer">grpc_cli</a>), I get the following error: </p>
<pre><code>{"created":"@1580833741.460274000","description":"Error received from peer ipv4:x.x.x.x:x","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Socket closed","grpc_status":14}
</code></pre>
<p>And the error from the <code>nginx-controller</code> is as follows:</p>
<pre><code>x.x.x.x - - [04/Feb/2020:16:28:46 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" 0 0.020 [] [] - - - - xxxxx
</code></pre>
<p>Any ideas on what's wrong here? Or any thoughts on how to debug this?</p>
| Jay K. | <p>The server is serving HTTP/1.x, not HTTP/2 that's required for gRPC.</p>
<p>You can try adding the following annotation to the <code>Ingress</code> config</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
</code></pre>
<p>...as explained <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/grpc#step-3-the-kubernetes-ingress" rel="nofollow noreferrer">here</a>.</p>
<p>It is also worth checking the <code>use-http2</code> flag in the nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-http2" rel="nofollow noreferrer">configuration</a> (it should be enabled, <code>true</code>, by default).</p>
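<p>If HTTP/2 has been disabled, re-enabling it is done in the controller's ConfigMap (the ConfigMap name and namespace below are assumptions and depend on how your controller was installed):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # hypothetical name, match your controller's --configmap argument
  namespace: ingress-nginx
data:
  use-http2: "true"
</code></pre>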
<hr />
<p>EDIT, regarding the new error:</p>
<p><code>PRI * HTTP/2.0</code> is the so-called <a href="https://httpwg.org/specs/rfc7540.html#rfc.section.3.5" rel="nofollow noreferrer">HTTP/2 Connection Preface</a> - part of negotiating HTTP/2. It still appears that nginx isn't configured for HTTP/2.</p>
| gears |
<p>I am reading <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build" rel="nofollow noreferrer">this GitOps-style</a> article on Google Cloud and I am now wondering how I can substitute the sample <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build#create_the_git_repositories_in" rel="nofollow noreferrer">python application</a> which they clone from <a href="https://github.com/GoogleCloudPlatform/gke-gitops-tutorial-cloudbuild" rel="nofollow noreferrer">here</a> with a Google Cloud Function (GCF) which is triggered by Google Cloud Storage (GCS).</p>
<p><a href="https://cloud.google.com/functions/docs/testing/test-background" rel="nofollow noreferrer">This article</a> describes how you can unit test, integration test and system test a GCF and I would like to apply this in the GitOps-style continuous delivery sample. But, for that purpose I would need a specific Dockerfile (I suppose) similar to this one (but related to node.js):</p>
<pre><code>FROM python:3.7-slim
RUN pip install flask
WORKDIR /app
COPY app.py /app/app.py
ENTRYPOINT ["python"]
CMD ["/app/app.py"]
</code></pre>
<p>and maybe the <strong>yaml</strong> file. I would be happy if you could give me some directions. I am quite new to Docker, containers and Kubernetes as a whole.</p>
<p>From what I understand, the syntax of the Dockerfile describes your project type and the runtime for it. So, in my case I guess I need to describe Google's runtime for executing cloud functions. I am not sure this is possible?</p>
| user2128702 | <p>I cannot understand why you want to do it this way... Isn't Cloud Functions intended to free you from caring about virtualization, containerization, servers, etc.? It seems you want to do something that Google Cloud Functions is designed to avoid...</p>
<p>As requested, I am attaching <a href="http://youtube.com/watch?v=UPqN1pxF1lk" rel="nofollow noreferrer">this video</a>. It's really worth watching.</p>
| vitooh |
<p>I need to copy a file into my pod at creation time. I don't want to use <code>ConfigMap</code> and <code>Secrets</code>. I am trying to create a <code>volumeMount</code> and copy the source file using the <code>kubectl cp</code> command. My manifest looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: copy
labels:
app: hello
spec:
containers:
- name: init-myservice
image: bitnami/kubectl
command: ['kubectl','cp','./test.json','init-myservice:./data']
volumeMounts:
- name: my-storage
mountPath: data
- name: init-myservices
image: nginx
volumeMounts:
- name: my-storage
mountPath: data
volumes:
- name: my-storage
emptyDir: {}
</code></pre>
<p>But I am getting a <code>CrashLoopBackOff</code> error. Any help or suggestion is highly appreciated.</p>
| Abhinav | <p>I do agree with an answer provided by H.R. Emon, it explains why you can't just run <code>kubectl cp</code> inside of the container. I do also think there are some resources that could be added to show you how you can tackle this particular setup.</p>
<p>For this particular use case it is recommended to use an <code>initContainer</code>.</p>
<blockquote>
<p><code>initContainers</code> - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.</p>
<p><em><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers</a></em></p>
</blockquote>
<p>You could use the example from the official Kubernetes documentation (assuming that downloading your <code>test.json</code> is feasible):</p>
<blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
# These containers are run during pod initialization
initContainers:
- name: install
image: busybox
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://info.cern.ch
volumeMounts:
- name: workdir
mountPath: "/work-dir"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
<p>-- <em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Configure Pod Initalization: Create a pod that has an initContainer</a></em></p>
</blockquote>
<p>You can also modify the above example to suit your specific needs.</p>
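<p>For instance, if downloading <code>test.json</code> is not an option, one possibility (a sketch, assuming you build a small image that already contains the file; the image name below is hypothetical) is to let the init container copy it into the shared <code>emptyDir</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>  initContainers:
  - name: copy-test-json
    image: my-registry/test-json:latest   # hypothetical image containing /test.json
    command: ["sh", "-c", "cp /test.json /data/test.json"]
    volumeMounts:
    - name: my-storage
      mountPath: /data
</code></pre>
<p>The main container then mounts the same <code>my-storage</code> volume and finds the file under its own mount path.</p>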
<hr />
<p><strong>Also, referring to your particular example, there are some things that you will need to be aware of:</strong></p>
<ul>
<li>To use <code>kubectl</code> inside of a <code>Pod</code> you will need to have required permissions to access the Kubernetes API. You can do it by using <code>serviceAccount</code> with some permissions. More can be found in this links:
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens</a></em></li>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: RBAC</a></em></li>
</ul>
</li>
<li>Your <code>bitnami/kubectl</code> container will run into <code>CrashLoopBackOff</code> errors because you're passing a single command that runs to completion. After that the <code>Pod</code> would report status <code>Completed</code> and would be restarted, resulting in the aforementioned <code>CrashLoopBackOff</code>. To avoid that you would need to use an <code>initContainer</code>.</li>
<li>You can read more about what is happening in your setup by following this answer (connected with previous point):
<ul>
<li><em><a href="https://stackoverflow.com/a/62589092/12257134">Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?</a></em></li>
</ul>
</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Kubernetes.io: Pod lifecycle</a></em></li>
</ul>
<blockquote>
<p>A side note!</p>
<p>I also do consider including the reason why <code>Secrets</code> and <code>ConfigMaps</code> cannot be used to be important in this particular setup.</p>
</blockquote>
| Dawid Kruk |
<p>I'm new to Kubernetes and wanted to use the NGINX Ingress Controller for the project I'm currently working on. I read some of the docs and watched some tutorials but I haven't really understood the:</p>
<ul>
<li>installation process (should I use Helm, the git repo???)</li>
<li>how to properly configure the Ingress. For example, the Kubernetes docs say to use a nginx.conf file (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend</a>) which is never mentioned in the actual NGINX docs. They say to use ConfigMaps or annotations</li>
</ul>
<p>Does anybody know of a blog post or tutorial that makes these things clear? Out of everything I've learned so far (both frontend and backend), developing and deploying to a cloud environment has got me lost. I've been stuck on a problem for a week and want to figure out if Ingress can help me.
Thanks!</p>
| Stroboscopio | <p>Answering:</p>
<blockquote>
<p>How should I install <code>nginx-ingress</code></p>
</blockquote>
<p>There is no one correct way to install <code>nginx-ingress</code>. Each way has its own advantages/disadvantages, each Kubernetes cluster could require different treatment (for example: cloud managed Kubernetes and minikube) and you will need to determine which option is best suited for you.</p>
<p>You can choose from running:</p>
<ul>
<li><code>$ kubectl apply -f ...</code>,</li>
<li><code>$ helm install ...</code> (see the example below),</li>
<li><code>terraform apply ...</code> (helm provider),</li>
<li>etc.</li>
</ul>
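<p>As an example of the Helm route listed above (a sketch using the official <code>ingress-nginx</code> chart), the installation typically boils down to:</p>
<pre class="lang-sh prettyprint-override"><code># add the official chart repository and install the controller into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>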
<hr />
<blockquote>
<p>How should I properly configure <code>Ingress</code>?</p>
</blockquote>
<p>Citing the official documentation:</p>
<blockquote>
<p>An API object that manages external access to the services in a cluster, typically HTTP.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></p>
</blockquote>
<p>Basically <code>Ingress</code> is a resource that tells your <code>Ingress controller</code> how it should handle specific <code>HTTP</code>/<code>HTTPS</code> traffic.</p>
<p>Speaking specifically about the <code>nginx-ingress</code>, it's entrypoint that your <code>HTTP</code>/<code>HTTPS</code> traffic should be sent to is a <code>Service</code> of type <code>LoadBalancer</code> named: <code>ingress-nginx-controller</code> (in a <code>ingress-nginx</code> namespace). In Docker with Kubernetes implementation it will bind to the localhost of your machine.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
spec:
ingressClassName: "nginx"
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
</code></pre>
<p>The modified example from the documentation will tell your <code>Ingress</code> controller to pass the traffic with any <code>Host</code> and with <code>path: /</code> (every <code>path</code>) to a service named <code>nginx</code> on port <code>80</code>.</p>
<p>The above configuration after applying will be reflected by <code>ingress-nginx</code> in the <code>/etc/nginx/nginx.conf</code> file.</p>
<blockquote>
<p>A side note!</p>
<p>Take a look on how the part of <code>nginx.conf</code> looks like when you apply above definition:</p>
<pre><code> location / {
set $namespace "default";
set $ingress_name "minimal-ingress";
set $service_name "nginx";
set $service_port "80";
set $location_path "/";
set $global_rate_limit_exceeding n;
</code></pre>
</blockquote>
<p>On how your specific <code>Ingress</code> manifest should look like you'll need to consult the documentation of the software that you are trying to send your traffic to and <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">ingress-nginx docs</a>.</p>
<hr />
<p>Addressing the part:</p>
<blockquote>
<p>how to properly configure the Ingress. For example, the Kubernetes docs say to use a nginx.conf file (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend</a>) which is never mentioned in the actual NGINX docs. They say to use ConfigMaps or annotations.</p>
</blockquote>
<p>You typically don't modify <code>nginx.conf</code> that the <code>Ingress controller</code> is using by yourself. You write an <code>Ingress</code> manifest and rest is taken by <code>Ingress controller</code> and <code>Kubernetes</code>. <code>nginx.conf</code> in the <code>Pod</code> responsible for routing (your <code>Ingress controller</code>) will reflect your <code>Ingress</code> manifests.</p>
<p><code>ConfigMaps</code> and <code>Annotations</code> can be used to modify/alter the configuration of your <code>Ingress controller</code>. With a <code>ConfigMap</code> you can, for example, enable <code>gzip</code> compression, and with an annotation you can request a specific <code>rewrite</code>.</p>
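<p>For example (a sketch; the ConfigMap name and namespace depend on how the controller was installed, for the official Helm chart it is typically <code>ingress-nginx-controller</code> in the <code>ingress-nginx</code> namespace), enabling gzip compression would look like this, while a rewrite would be requested with an annotation such as <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> on the <code>Ingress</code> itself:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption: chart-default name
  namespace: ingress-nginx
data:
  use-gzip: "true"
</code></pre>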
<p>To make things clearer. The guide that is referenced here is a frontend <code>Pod</code> with <code>nginx</code> installed that passes the request to a <code>backend</code>. This example apart from using <code>nginx</code> and forwarding traffic is not connected with the actual <code>Ingress</code>. It will not acknowledge the <code>Ingress</code> resource and will not act accordingly to the manifest you've passed.</p>
<blockquote>
<p>A side note!</p>
<p>Your traffic would be directed in a following manner (simplified):</p>
<ul>
<li><code>Ingress controller</code> -> <code>frontend</code> -> <code>backend</code></li>
</ul>
</blockquote>
<p>This example <strong>speaking from personal perspective</strong> is more of a guide how to connect <code>frontend</code> and <code>backend</code> and not about <code>Ingress</code>.</p>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/61541812/ingress-nginx-how-to-serve-assets-to-application/61751019#61751019">Stackoverflow.com: Questions: Ingress nginx how sto serve assets to application</a></em></li>
<li><em><a href="https://stackoverflow.com/questions/64647258/how-nginx-ingress-controller-back-end-protocol-annotation-works-in-path-based-ro/64662822#64662822">Stackoverflow.com: Questions: How nginx ingress controller backend protocol annotation works</a></em></li>
</ul>
<p>The guide that I wrote some time ago should help you with the idea on how you can configure basic <code>Ingress</code> (it could be little outdated):</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/59255445/how-can-i-access-nginx-ingress-on-my-local/59274163#59274163">Stackoverflow.com: Questions: How can I access nginx ingress on my local</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm new to Kubernetes and I'm trying to deploy a React app to my cluster. Here's the basic info:</p>
<ul>
<li>Docker Desktop, single-node Kubernetes cluster</li>
<li>React development frontend, exposing port 3000</li>
<li>Node.js/Express backend, exposing port 8080</li>
<li>NGINX Ingress Controller, serving my React frontend on "localhost:3000" and routing my Fetch API requests (fetch("localhost:3000/api/...", OPTIONS)) to the backend (which works)</li>
</ul>
<p>I am having an issue when opening the React app. The Ingress Controller correctly routes to the app but the 3 bundles (bundle.js, main.chunk.js, and a third one whose name I don't remember) aren't loaded. I get the following error:</p>
<pre><code>GET http://localhost/static/js/main.chunk.js net::ERR_ABORTED 404 (Not Found) ingress (one example)
</code></pre>
<p>I understand why this error happens. The Ingress Controller correctly routes the traffic but only loads the index.html file. In this file, there are calls to 3 scripts (referring to the bundles) which aren't loaded. I understand the error, the files don't get sent to the browser so the index.html file can't load them in, but do not know how to fix it.
Does anyone have any suggestions? (The images are built and then pulled from Docker Hub.) For example, does deploying the build/ folder (built React app using "npm run build") fix this? Do I have to use nginx inside my Dockerfile to build the container?</p>
<p>Ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: titanic-ingress
#labels:
#name: titanic-ingress
spec:
ingressClassName: nginx
rules:
- host: localhost
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: titanicfrontendservice
port:
number: 3000
- path: /api
pathType: Exact
backend:
service:
name: titanicbackendservice
port:
number: 8080
</code></pre>
<p>Ingress controller deployment yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
replicas: 1
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
#annotations:
#prometheus.io/scrape: "true"
#prometheus.io/port: "9113"
spec:
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:1.10.0
imagePullPolicy: IfNotPresent
name: nginx-ingress
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: readiness-port
containerPort: 8081
#- name: prometheus
#containerPort: 9113
readinessProbe:
httpGet:
path: /nginx-ready
port: readiness-port
periodSeconds: 1
securityContext:
allowPrivilegeEscalation: true
runAsUser: 101 #nginx
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
#- -v=3 # Enables extensive logging. Useful for troubleshooting.
- -report-ingress-status
- -external-service=nginx-ingress
#- -enable-prometheus-metrics
#- -global-configuration=$(POD_NAMESPACE)/nginx-configuration
</code></pre>
<p>Ingress controller service yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
externalTrafficPolicy: Local
type: LoadBalancer
ports:
- port: 3000
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: nginx-ingress
</code></pre>
| Stroboscopio | <p><strong>TL;DR</strong></p>
<p>Switch your <code>pathType</code> in both <code>/api</code> and <code>/</code> path to <strong><code>Prefix</code></strong>.</p>
<p>I've included some explanation with fixed <code>Ingress</code> resource below.</p>
<hr />
<p>For reproduction purposes I used the <code>titanic</code> manifests that you provided in another question:</p>
<ul>
<li><em><a href="https://github.com/strobosco/titanicfullstack" rel="nofollow noreferrer">Github.com: Strobosco: Titanicfullstack</a></em></li>
</ul>
<p>The issue with your configuration is with: <code>pathType</code>.</p>
<p>Using your <code>Ingress</code> resource with <code>pathType: Exact</code> showed me blank page.</p>
<p>Modifying your <code>Ingress</code> resource with <code>pathType: Prefix</code> solved the issue.</p>
<blockquote>
<p>Side note!</p>
<p>The message: <strong>"Would you have survived the sinking of the Titanic?"</strong> showed.</p>
</blockquote>
<p>The exact <code>Ingress</code> configuration should be following:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: titanic-ingress
spec:
ingressClassName: nginx
rules:
- host: localhost
http:
paths:
- path: /
pathType: Prefix # <-- IMPORTANT
backend:
service:
name: titanicfrontendservice
port:
number: 3000
- path: /api
pathType: Prefix # <-- IMPORTANT
backend:
service:
name: titanicbackendservice
port:
number: 8080
</code></pre>
<hr />
<h3>Why I think it happened?</h3>
<p>Citing the official documentation:</p>
<blockquote>
<h3>Path types</h3>
<p>Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:</p>
<ul>
<li><p><code>ImplementationSpecific</code>: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to Prefix or Exact path types.</p>
</li>
<li><p><code>Exact</code>: Matches the URL path exactly and with case sensitivity.</p>
</li>
<li><p><code>Prefix</code>: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path.</p>
</li>
</ul>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress: Path types</a></em> (there are some examples on how the path matching is handled)</p>
</blockquote>
<p><code>Ingress controller</code> is forced to match <strong>only</strong> the <code>/</code> path leaving rest of the dependencies (apart from the <code>index.html</code>) on other paths like <code>/super.jpg</code> and <code>/folder/awesome.jpg</code> to error with <code>404</code> code.</p>
<blockquote>
<p>Side note!</p>
<p>You can test yourself this behavior by spawning an nginx <code>Pod</code> and placing example files in it. After applying the <code>Ingress</code> resource with <code>/</code> and <code>pathType: Exact</code> you won't be able to request it through the <code>Ingress</code> controller but you could access them within the cluster.</p>
</blockquote>
<hr />
<p>I encourage you to check the additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: Ingress path matching</a></em></li>
</ul>
| Dawid Kruk |
<p>I configuring kubernetes to have 3 images (my API, Elastic Search and Kibana)</p>
<p>Here is my <code>deployment.yml</code> file</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: tooseeweb-deployment
spec:
selector:
matchLabels:
app: tooseeweb-pod
template:
metadata:
labels:
app: tooseeweb-pod
spec:
containers:
- name: tooseewebcontainer
image: tooseewebcontainer:v1
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 9200
- name: kibana
image: docker.elastic.co/kibana/kibana:6.2.4
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 5601
</code></pre>
<p>When I run <code>kubectl get deployments</code> I see this</p>
<p><a href="https://i.stack.imgur.com/RbDgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RbDgC.png" alt="enter image description here"></a></p>
<p>It's stuck on 0/1 ready. I try to reboot docker, etc. It not helps. How I can fix this?</p>
<p><strong>UPDATE</strong></p>
<p>I run <code>kubectl describe pod</code> and have this error</p>
<pre><code>Warning FailedScheduling 19s default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
</code></pre>
<p>How can I fix this?</p>
| Eugene Sukh | <p>Remove these resource limits from every pod:</p>
<pre><code>resources:
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<p>If you want to limit the resources, add the limits back later, once the deployment has been applied successfully.</p>
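<p>Alternatively (a sketch, assuming the node simply cannot fit three containers requesting 500m CPU each), you can keep resource management but make the requests small enough for all containers to be scheduled, for example:</p>
<pre class="lang-yaml prettyprint-override"><code>        resources:
          requests:
            cpu: "100m"      # the scheduler only looks at requests when placing the pod
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
</code></pre>
<p>Lowering the requests is what makes the pod fit on the node; the limits can stay higher.</p>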
| Bumuthu Dilshan |
<p>I'm doing SSL termination using Ingress for HTTPS traffic. But I also want to achieve the same thing for Custom Port (http virtual host). For example <code>https://example.com:1234</code> should go to <code>http://example.com:1234</code></p>
<p>Nginx Ingress has a <code>ConfigMap</code> where we can expose custom ports. But SSL termination doesn't work here.</p>
<p>Is there any workaround? I wonder if I could redirect the incoming <code>https</code> using .htaccess instead.</p>
<pre><code> apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
1234: "test-web-services/httpd:1234"
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: port-1234
port: 1234
protocol: TCP
targetPort: 1234
</code></pre>
| user630702 | <p><code>SSL Termination</code> for TCP traffic is not a feature directly supported by <code>nginx-ingress</code>.</p>
<p>It is more widely described in this Github issue:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/ingress-nginx/issues/636" rel="nofollow noreferrer">Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP
</a></em></li>
</ul>
<p>You can also find in this thread that some people were successful in implementing a workaround allowing them to support terminating <code>SSL</code> with <code>TCP</code> services. Specifically:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/ingress-nginx/issues/636#issuecomment-749026036" rel="nofollow noreferrer">Github.com: Kubernetes: Ingress-nginx: Issues: [nginx] Support SSL for TCP: Comment 749026036</a></em></li>
</ul>
<hr />
<p>As your example featured the "downgrade" from <code>HTTPS</code> communication to <code>HTTP</code>, it could be beneficial to add that you can alter the way the <code>NGINX Ingress Controller</code> connects to your <code>backend</code>. Let me elaborate on that.</p>
<p>Please consider this as a workaround:</p>
<p>By default your <code>NGINX Ingress Controller</code> will connect to your backend with <code>HTTP</code>. This can be changed with following annotation:</p>
<ul>
<li><code>nginx.ingress.kubernetes.io/backend-protocol:</code></li>
</ul>
<p>Citing the official documentation:</p>
<blockquote>
<p>Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS, AJP and FCGI</p>
<p><strong>By default NGINX uses HTTP.</strong></p>
<p>-- <em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">Kubernetes.github.io: Ingress-nginx: User guide: Nginx configuration: Annotations: Backend protocol</a></em></p>
</blockquote>
<p>In this particular example the request path will be following:</p>
<ul>
<li><code>client</code> -- (HTTP<strong>S</strong>:443) --> <code>Ingress controller</code> (TLS Termination) -- (HTTP:service-port) --> <code>Service</code> ----> <code>Pod</code></li>
</ul>
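<p>A minimal sketch of such an <code>Ingress</code> (the hostname and TLS secret name are hypothetical; the backend is the <code>httpd</code> service on port 1234 from the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"   # the default, shown only for clarity
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # hypothetical Secret holding the certificate
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd
            port:
              number: 1234
</code></pre>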
<hr />
<h3>The caveat</h3>
<p>You can use the <code>Service</code> of type <code>LoadBalancer</code> to send the traffic from port <code>1234</code> to either <code>80</code>/<code>443</code> of your <code>Ingress Controller</code>. This would make <code>TLS</code> termination much easier but it would force the client to use <strong>only one</strong> protocol. For example:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: custom
port: 1234
protocol: TCP
targetPort: 443
</code></pre>
<p>This excerpt from <code>nginx-ingress</code> <code>Service</code> could be used to forward the <strong><code>HTTPS</code></strong> traffic to your <code>Ingress Controller</code> where the request would be <code>TLS terminated</code> and forwarded as <code>HTTP</code> to your <code>backend</code>. Forcing the <code>HTTP</code> through that port would yield error code <code>400: Bad request</code>.</p>
<p>In this particular example the request path will be following:</p>
<ul>
<li><code>client</code> -- (HTTP<strong>S</strong>:1234) --> <code>Ingress controller</code> (TLS Termination) -- (HTTP:service-port) --> <code>Service</code> ----> <code>Pod</code></li>
</ul>
| Dawid Kruk |
<p>I have a requirement to get the resource details inside the pod and perform some operations depending on the result. I'm using the Kubernetes Python client inside the pod. Even after creating the role/rolebinding I'm getting Forbidden.</p>
<p>i have created Serviceaccount/role/rolebinding as like below.</p>
<p>Can anyone help me with this issue?</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: myaccount
namespace: dev
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: dev
name: pods-reader-role
rules:
-apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-controller
namespace: dev
subjects:
- kind: ServiceAccount
name: myaccount
apiGroup: ""
roleRef:
kind: Role
name: pods-reader-role
apiGroup: ""
Listing pods with their IPs:
Traceback (most recent call last):
File "/opt/scripts/bin/PodCont.py", line 792, in <module>
main()
File "/opt/scripts/bin/PodCont.py", line 596, in main
ret = v1.list_pod_for_all_namespaces(watch=False)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 16864, in list_pod_for_all_namespaces
return self.list_pod_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 16981, in list_pod_for_all_namespaces_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 353, in call_api
_preload_content, _request_timeout, _host)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 184, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 377, in request
headers=headers)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 243, in GET
query_params=query_params)
File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 05 Apr 2021 09:47:13 GMT', 'Content-Length': '285'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:dev:deploy-svc-account\" cannot list resource \"pods\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403}
</code></pre>
| Santhoo Kumar | <p>Answering the question, I think there are some things that should be considered:</p>
<ul>
<li>Indentation</li>
<li>Service account running the <code>Pod</code></li>
<li>Python code and access scopes</li>
</ul>
<p>As there is no <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal, reproducible example</a> we can at most assume on how exactly you've configured your setup.</p>
<hr />
<h3>Indentation</h3>
<p>The <code>YAML</code> manifest that you've included will is not indented correctly. The correct manifest should look like below:</p>
<ul>
<li><code>full.yaml</code>:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: myaccount
namespace: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: dev
name: pods-reader-role
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-controller
namespace: dev
subjects:
- kind: ServiceAccount
name: myaccount
apiGroup: ""
roleRef:
kind: Role
name: pods-reader-role
apiGroup: ""
</code></pre>
<blockquote>
<p>A side note!</p>
<p>Consider creating a more restrictive <code>Role</code> for your use case, as the one above allows everything in the <code>dev</code> namespace.</p>
</blockquote>
<hr />
<h3>Service account running the <code>Pod</code></h3>
<p>The potential issue here is that you've created a <code>serviceAccount</code> with a name: <code>myaccount</code>
and the <code>Pod</code> is trying to authenticate using the <code>deploy-svc-account</code>. (<code>User \"system:serviceaccount:dev:deploy-svc-account\" cannot list resource</code>)</p>
<p>Please ensure that the correct <code>serviceAccount</code> is used to run a <code>Pod</code>.</p>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: sdk
namespace: dev
spec:
serviceAccountName: myaccount # <-- IMPORTANT
containers:
- image: google/cloud-sdk
command:
- sleep
- "infinity"
imagePullPolicy: IfNotPresent
name: sdk
restartPolicy: Always
</code></pre>
<hr />
<h3>Python code and access scopes</h3>
<p>Assuming that you've used the code from the documentation page of Kubernetes Python API library (<code>"Listing pods with their IPs:"</code>):</p>
<ul>
<li><em><a href="https://github.com/kubernetes-client/python/" rel="nofollow noreferrer">Github.com: Kubernetes client: Python</a></em></li>
</ul>
<p>There are 2 topics to consider here:</p>
<ul>
<li>Access scopes</li>
<li>Querying the resources</li>
</ul>
<p>Citing the official documentation:</p>
<blockquote>
<p><strong>A Role always sets permissions within a particular namespace</strong>; when you create a Role, you have to specify the namespace it belongs in.</p>
<p><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: RBAC: Role and ClusterRole</a></em></p>
</blockquote>
<p>The permissions that you've assigned by a <code>Role</code> and a <code>RoleBinding</code> are for <code>dev</code> namespace only. If you would like to have full cluster scope you would need to create a <code>ClusterRole</code> and a <code>ClusterRoleBinding</code>.</p>
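<p>For reference, a minimal sketch of a cluster-wide read permission for <code>Pods</code> (the object names are arbitrary) could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pods-reader-clusterrole
rules:
# read-only access to pods in every namespace
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pods-reader-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: myaccount
  namespace: dev
roleRef:
  kind: ClusterRole
  name: pods-reader-clusterrole
  apiGroup: rbac.authorization.k8s.io
</code></pre>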
<p>I encourage you to check the documentation included in the citation as it has some examples to follow and there are many explanations on that matter.</p>
<p>Also, a word about the Python code:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_incluster_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>Focusing on:</p>
<pre class="lang-py prettyprint-override"><code>ret = v1.list_pod_for_all_namespaces(watch=False)
</code></pre>
<p>This code will query for <code>Pods</code> from all namespaces, which is why you're also receiving the error <code>cannot list resource \"pods\" in API group \"\" at the cluster scope"</code>.</p>
<p>To list the <code>Pods</code> from a specific namespace you can use:</p>
<pre class="lang-py prettyprint-override"><code>ret = v1.list_namespaced_pod(namespace="dev", watch=False)
</code></pre>
<p>And by that you should be able to get:</p>
<ul>
<li><code>python3 program.py</code>:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Listing pods with their IPs:
10.32.0.15 dev sdk
</code></pre>
| Dawid Kruk |
<p>According to this doc (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-alias" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-alias</a>), I'm able to add additional server_name to the nginx config file.
However, it adds the extra server_name to all of my hosts, which causes conflicts.
Is there a way to add server-alias only for one of my hosts? Say I only want to add 10.10.0.100 to my test1 host.
Ingress example:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/server-alias: 10.10.0.100
spec:
rules:
- host: test1.com
http:
paths:
- path: /
backend:
service:
name: test1-service
port:
number: 8000
pathType: Prefix
- host: test2.com
http:
paths:
- path: /
backend:
service:
name: test2-service
port:
number: 8000
pathType: Prefix
</code></pre>
| Yulin | <p><strong>TL;DR</strong></p>
<p><strong>You can split your <code>Ingress</code> resource on multiple objects (which will work together) to add <code>Annotations</code> to only specific <code>hosts</code>.</strong></p>
<blockquote>
<p><strong><code>Annotations</code> can only be set on the whole kubernetes resource</strong>, as they are part of the resource <code>metadata</code>. The <code>ingress spec</code> doesn't include that functionality at a lower level.</p>
<p>-- <em><a href="https://stackoverflow.com/questions/60749036/apply-nginx-ingress-annotations-at-path-level">Stackoverflow.com: Questions: Apply nginx-ingress annotations at path level</a></em></p>
</blockquote>
<hr />
<p>Extending on the answer to give an example of how such setup could be created. Let's assume (example):</p>
<ul>
<li>All required domains pointing to the <code>Service</code> of type <code>LoadBalancer</code> of <code>nginx-ingress-controller</code>:
<ul>
<li><code>hello.kubernetes.docker.internal</code> - used in <code>host</code> <code>.spec</code></li>
<li><code>hello-two.kubernetes.docker.internal</code> - used in <code>annotations</code> <code>.metadata</code></li>
<li>--</li>
<li><code>goodbye.kubernetes.docker.internal</code> - used in <code>host</code> <code>.spec</code></li>
<li><code>goodbye-two.kubernetes.docker.internal</code>- used in <code>annotations</code> <code>.metadata</code></li>
</ul>
</li>
</ul>
<p>Skipping the <code>Deployment</code> and <code>Service</code> definitions, the <code>Ingress</code> resources should look like below:</p>
<p><code>hello-ingress.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-ingress
annotations:
nginx.ingress.kubernetes.io/server-alias: "hello-two.kubernetes.docker.internal"
spec:
rules:
- host: hello.kubernetes.docker.internal # <-- IMPORTANT
http:
paths:
- path: /
backend:
service:
name: hello-service
port:
number: 80
pathType: Prefix
</code></pre>
<p><code>goodbye-ingress.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: goodbye-ingress
annotations:
nginx.ingress.kubernetes.io/server-alias: "goodbye-two.kubernetes.docker.internal"
spec:
rules:
- host: goodbye.kubernetes.docker.internal # <-- IMPORTANT
http:
paths:
- path: /
backend:
service:
name: goodbye-service
port:
number: 80
pathType: Prefix
</code></pre>
<p>Above definitions will create 2 <code>Ingress</code> resources that will be merged:</p>
<ul>
<li><code>hello-service</code> will respond for:
<ul>
<li><code>hello.kubernetes.docker.internal</code></li>
<li><code>hello-two.kubernetes.docker.internal</code></li>
</ul>
</li>
<li><code>goodbye-service</code> will respond for:
<ul>
<li><code>goodbye.kubernetes.docker.internal</code></li>
<li><code>goodbye-two.kubernetes.docker.internal</code></li>
</ul>
</li>
</ul>
<p>Running:</p>
<ul>
<li><code>$ kubectl get ingress</code>:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME CLASS HOSTS ADDRESS PORTS AGE
goodbye-ingress <none> goodbye.kubernetes.docker.internal localhost 80 36m
hello-ingress <none> hello.kubernetes.docker.internal localhost 80 36m
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-alias" rel="nofollow noreferrer">Kubernetes.github.io: Ingress NGINX: Annotations: Server alias</a></em></li>
</ul>
| Dawid Kruk |
<p>I want to react to certain Kubernetes/Openshift events,
therefore I need a list of all possible (Kubernetes) events with their type (normal vs. warning).</p>
<p><a href="https://docs.openshift.com/container-platform/3.11/dev_guide/events.html#events-reference" rel="nofollow noreferrer">Openshift event list (but without type info)</a></p>
<p>Event data example:</p>
<pre><code> {
"metadata": {
...
},
"involvedObject": {
...
},
"reason": "Created",
"firstTimestamp": "...",
"lastTimestamp": "...",
"count": 1,
"type": "Normal",
"eventTime": null,
}
</code></pre>
<p>Is there any relation between the type and the reason of the event?</p>
<p>How can I create/find such a comprehensive list (event reasons + type + involved object kind)?</p>
| HectorLector | <p>Posting this answer as a community wiki to give more of a baseline to the question than the actual solution. Feel free to expand it.</p>
<p>I haven't found the Kubernetes equivalent of the OpenShift documentation like used in the question:</p>
<ul>
<li><em><a href="https://docs.openshift.com/container-platform/3.11/dev_guide/events.html#events-reference" rel="nofollow noreferrer">Docs.openshift.com: Container platform: 3.11: Dev guide: Events</a></em></li>
</ul>
<hr />
<p>From the Kubernetes perspective, you can look at the source code of the components to see which events they can send.</p>
<p><code>Kubelet</code> example:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/events/event.go" rel="nofollow noreferrer">Github.com: Kubernetes: Kubelets: Events: event.go</a></em></li>
</ul>
<pre class="lang-golang prettyprint-override"><code>
const (
FailedToKillPod = "FailedKillPod"
FailedToCreatePodContainer = "FailedCreatePodContainer"
FailedToMakePodDataDirectories = "Failed"
NetworkNotReady = "NetworkNotReady"
)
const (
CreatedContainer = "Created"
StartedContainer = "Started"
FailedToCreateContainer = "Failed"
FailedToStartContainer = "Failed"
KillingContainer = "Killing"
PreemptContainer = "Preempting"
BackOffStartContainer = "BackOff"
ExceededGracePeriod = "ExceededGracePeriod"
)
</code></pre>
<hr />
<p>The relation between the type and a reason for an event could be described as:</p>
<ul>
<li><em><a href="https://github.com/kubernetes/kubernetes/blob/b11d0fbdd58394a62622787b38e98a620df82750/pkg/apis/core/types.go#L4670" rel="nofollow noreferrer">Github.com: Kubernetes: Pkg: Apis: Core: types.go: Line 4670</a></em></li>
</ul>
<pre class="lang-golang prettyprint-override"><code>// Valid values for event types (new types could be added in future)
const (
// Information only and will not cause any problems
EventTypeNormal string = "Normal"
// These events are to warn that something might go wrong
EventTypeWarning string = "Warning"
)
</code></pre>
<p>As you can see below, the <code>Normal</code> type event carries information that is not causing any issues, while the <code>Warning</code> type event was created where there was an issue (trying to pull a non-existent image: <code>fake</code>):</p>
<pre class="lang-sh prettyprint-override"><code>11s Warning Failed pod/fake-f68cd66bc-hgxxv Error: ErrImagePull
11s Normal BackOff pod/fake-f68cd66bc-hgxxv Back-off pulling image "fake"
11s Warning Failed pod/fake-f68cd66bc-hgxxv Error: ImagePullBackOff
14s Normal SuccessfulCreate replicaset/fake-f68cd66bc Created pod: fake-f68cd66bc-hgxxv
14s Normal ScalingReplicaSet deployment/fake Scaled up replica set fake-f68cd66bc to 1
50s Normal Scheduled pod/nginx-6799fc88d8-ks76h Successfully assigned default/nginx-6799fc88d8-ks76h to docker-desktop
</code></pre>
<p>To keep a list of the events that happened in the cluster, you could run a dedicated application inside the Kubernetes cluster that watches the events and stores them in a storage option of your choosing.</p>
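<p>Even without a dedicated application, a quick way to watch events as they happen, or to filter only the <code>Warning</code> ones, is:</p>
<pre class="lang-sh prettyprint-override"><code># stream events from all namespaces as they occur
kubectl get events --all-namespaces --watch

# show only events of type Warning
kubectl get events --all-namespaces --field-selector type=Warning
</code></pre>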
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://www.cncf.io/blog/2020/12/10/the-top-kubernetes-apis-for-cloud-native-observability-part-1-the-kubernetes-metrics-service-container-apis-3/" rel="nofollow noreferrer">Cncf.io: Blog: 2020-12-10: The top Kubernetes APIs for cloud native observability: Part 1: Kuberentes metrics service container apis 3</a></em></li>
<li><em><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#event-v1-core" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Generated: Kubernetes api: v1.20: Event v1 core</a></em></li>
</ul>
| Dawid Kruk |
<p>Could you please let me know how I should deploy a Grafana dashboard with the sidecar and place/group it in a specific dashboard folder?</p>
<p>Example (the MongoDB dashboard resides in the Mongo folder and the Postgres one in the Postgres folder). Please note that I am NOT talking about file locations (/tmp/dashboards).</p>
<p>I am using the grafana stable HELM chart and latest version of grafana (version 6.4.2) </p>
<p>I am deploying JSON dashboards with a k8s ConfigMap and label the ConfigMap with the sidecar dashboard label. Once deployed, it always goes to the default "General" dashboards folder.</p>
<p>Helm values</p>
<pre><code> sidecar:
dashboards:
enabled: true
label: grafana_dashboard
</code></pre>
<p>THANKS</p>
| Dileeka Fernando | <p>You can add dashboard Providers to your values file and specify custom configurations for each of your folders.</p>
<p>You can check the default values for Grafana chart to find an <a href="https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L525" rel="nofollow noreferrer">example</a>.</p>
<p>The <code>dashboardProviders</code> section should be under <code>grafana</code>, at the same indentation as <code>sidecar</code>.</p>
<p>Example :</p>
<pre><code>grafana:
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: folder1
orgId: 1
type: file
folder: folder1
allowUiUpdates: true
disableDeletion: false
updateIntervalSeconds: 10
editable: true
options:
path: /tmp/dashboards/folder1
- name: folder2
orgId: 1
type: file
folder: folder2
allowUiUpdates: true
disableDeletion: false
updateIntervalSeconds: 10
editable: true
options:
path: /tmp/dashboards/folder2
- name: folder3
orgId: 1
type: file
folder: folder3
allowUiUpdates: true
disableDeletion: false
updateIntervalSeconds: 10
editable: true
options:
path: /tmp/dashboards/folder3
</code></pre>
<p>Then you can add an annotation to each of your dashboard ConfigMaps to tell the sidecar where to place those dashboards:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: name-of-dashboard
labels:
    grafana_dashboard: "1"
annotations:
k8s-sidecar-target-directory: /tmp/dashboards/folder1
</code></pre>
<p>This annotation tells the sidecar to place the target dashboards under /tmp/dashboards/folder1 so that they can be managed by the folder1 provider.</p>
| touati ahmed |
<p>Sorry in advance if my terminology isn't perfect, I'm learning Kubernetes right now.</p>
<p>I have a self-managed Kubernetes cluster on a series of AWS instances, with one master node and five worker nodes. All nodes are running Ubuntu 18.04. These nodes are all on a VPC and I ssh into them using a bastion host. For the time being, I've also given all of the nodes external IPs as well, just to make testing easier. I also have a domain, let's call it xxx.example.org, pointed at the current external IP of the master node.</p>
<p>I set up Kubernetes using Kubespray and then proceeded to install Istio (using istioctl) and set up the Ingress Gateway per the official docs <a href="https://istio.io/docs/setup/getting-started/" rel="nofollow noreferrer">here</a> and <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">here</a></p>
<p>When I run <code>kubectl get svc -n istio-system istio-ingressgateway</code>, the External-IP for the cluster is always :</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.3.209 <pending> 15020:30051/TCP,80:32231/TCP,443:30399/TCP,15029:31406/TCP,15030:32078/TCP,15031:30050/TCP,15032:30204/TCP,31400:31912/TCP,15443:31071/TCP 3m1s
</code></pre>
<p>I am able to access the services in a browser using <code>IP:32231/headers</code> or <code>xxx.example.org:32231/headers</code></p>
<p>I used the following command to configure my Gateway and VirtualService for the httpbin and Bookinfo projects referenced in the Istio docs:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: httpbin-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: httpbin
spec:
hosts:
- "*"
gateways:
- httpbin-gateway
http:
- match:
- uri:
prefix: /status
- uri:
prefix: /delay
- uri:
prefix: /headers
route:
- destination:
port:
number: 8000
host: httpbin
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
EOF
</code></pre>
<p>Seeing as this is a self-managed cluster, is there any way to get an external-ip for the cluster? If not, how would I go about modifying my current configuration such that the pages are accessible from <code>xxx.example.org</code> rather than <code>xxx.example.org:32231</code>? </p>
<p><strong>EDIT #1</strong></p>
<p>I did try to set up a NLB on AWS by following <a href="https://istio.io/blog/2018/aws-nlb/" rel="nofollow noreferrer">this documentation</a> and <a href="https://medium.com/swlh/public-and-private-istio-ingress-gateways-on-aws-f968783d62fe" rel="nofollow noreferrer">this guide</a>. Unfortunately, this didn't change anything, the <code>EXTERNAL-IP</code> is still <code><pending></code>. After doing that, I deployed a new ingress gateway, which looked like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: istio-ingressgateway-2
istio: ingressgateway-2
operator.istio.io/component: IngressGateways
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.5.2
release: istio
name: istio-ingressgateway-2
namespace: istio-system
spec:
ports:
- name: status-port
nodePort: 30625
port: 15020
protocol: TCP
targetPort: 15020
- name: http2
nodePort: 32491
port: 80
protocol: TCP
targetPort: 80
- name: https
nodePort: 30466
port: 443
protocol: TCP
targetPort: 443
- name: kiali
nodePort: 32034
port: 15029
protocol: TCP
targetPort: 15029
- name: prometheus
nodePort: 30463
port: 15030
protocol: TCP
targetPort: 15030
- name: grafana
nodePort: 31176
port: 15031
protocol: TCP
targetPort: 15031
- name: tracing
nodePort: 32040
port: 15032
protocol: TCP
targetPort: 15032
- name: tcp
nodePort: 32412
port: 31400
protocol: TCP
targetPort: 31400
- name: tls
nodePort: 30411
port: 15443
protocol: TCP
targetPort: 15443
selector:
app: istio-ingressgateway-2
istio: ingressgateway-2
type: LoadBalancer
</code></pre>
<p>I also changed my <code>httpbin-gateway</code> to use <code>ingressgateway-2</code>. This failed to load anything, even on port 32231.</p>
| Vivek Ramachandran | <p>This issue can be fixed by adding annotations to your <code>LoadBalancer</code> service manifest.</p>
<p>According to <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">Amazon</a> Documentation:</p>
<blockquote>
<p>Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on Amazon EC2 instance worker nodes through the Kubernetes service of type <code>LoadBalancer</code>. Classic Load Balancers and Network Load Balancers are not supported for pods running on AWS Fargate (Fargate). For Fargate ingress, we recommend that you use the <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">ALB Ingress Controller</a> on Amazon EKS (minimum version v1.1.4).</p>
<p>The configuration of your load balancer is controlled by annotations that are added to the manifest for your service. By default, Classic Load Balancers are used for <code>LoadBalancer</code> type services. To use the Network Load Balancer instead, apply the following annotation to your service:</p>
<p><code>service.beta.kubernetes.io/aws-load-balancer-type: nlb</code></p>
<p>For an example service manifest that specifies a load balancer, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Type LoadBalancer</a> in the Kubernetes documentation. For more information about using Network Load Balancer with Kubernetes, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support" rel="nofollow noreferrer">Network Load Balancer support on AWS</a> in the Kubernetes documentation.</p>
<p>By default, services of type <code>LoadBalancer</code> create public-facing load balancers. To use an internal load balancer, apply the following annotation to your service:</p>
<p><code>service.beta.kubernetes.io/aws-load-balancer-internal: "true"</code></p>
<p>For internal load balancers, your Amazon EKS cluster must be configured to use at least one private subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.</p>
</blockquote>
<p>To add one or more of these annotations to your Istio ingress configuration, you can follow the example from <a href="https://medium.com/swlh/public-and-private-istio-ingress-gateways-on-aws-f968783d62fe" rel="nofollow noreferrer">this</a> article.</p>
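<p>For reference, a minimal sketch of the relevant part of such a service manifest — only the two documented annotations are used, and the internal-load-balancer line is optional (omit it for a public-facing NLB):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # Use an AWS Network Load Balancer instead of the default Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Optional: make the load balancer internal
    # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>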
<p>Hope it helps.</p>
| Piotr Malec |
<p>I would like to calculate the total number of bytes allocated by the persistent volumes (PVs) in a cluster. Using the following:</p>
<pre><code>$ kubectl get pv -A -o json
</code></pre>
<p>I can get a JSON list of all the cluster's PVs and for each PV in the <code>items[]</code> list one can read the <code>spec.capacity.storage</code> key to access the necessary information.
See example below:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "PersistentVolume",
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"capacity": {
"storage": "500Gi"
},
"claimRef": {
"apiVersion": "v1",
"kind": "PersistentVolumeClaim",
"name": "s3-storage-minio",
"namespace": "default",
"resourceVersion": "515932",
},
"persistentVolumeReclaimPolicy": "Delete",
"volumeMode": "Filesystem",
},
"status": {
"phase": "Bound"
}
},
</code></pre>
<p>However, the returned values can be represented with different suffixes (storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K, or their power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki).</p>
<p>Is there a neat way to request the capacity in integer format (or any other format, as long as it is consistent among all the PVs) using kubectl?</p>
<p>Otherwise, transforming the different suffixes to a common one in Bash doesn't look very straightforward.</p>
<p>Thanks in advance for your help.</p>
| fantoman | <p>I haven't found a way to transform a value in <code>.spec.capacity.storage</code> using purely <code>kubectl</code>.</p>
<hr />
<p>I've managed to create some code with Python and its Kubernetes library to extract the data and calculate the size of all used <code>PV</code>'s. Please treat this code as an example and not as production ready:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
import re
config.load_kube_config() # use .kube/config
v1 = client.CoreV1Api()
multiplier_dict = {"k": 1000, "Ki": 1024, "M": 1000000, "Mi": 1048576 , "G": 1000000000, "Gi": 1073741824} # and so on ...
size = 0
# for i in v1.list_persistent_volume_claim_for_all_namespaces(watch=False).items: # PVC
for i in v1.list_persistent_volume(watch=False).items: # PV
x = i.spec.capacity["storage"] # PV
# x = i.spec.resources.requests["storage"] # PVC
y = re.findall(r'[A-Za-z]+|\d+', x)
print(y)
    # fall back to the plain integer value when no suffix (like Mi) is present
try:
if y[1] in multiplier_dict:
size += multiplier_dict.get(y[1]) * int(y[0])
except IndexError:
size += int(y[0])
print("The size in bytes of all PV's is: " + str(size))
</code></pre>
<p>Taking as an example a cluster that has the following <code>PV</code>'s:</p>
<ul>
<li><code>$ kubectl get pv</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6b5236ec-547f-4f96-8448-e3dbe01c9039 500Mi RWO Delete Bound default/pvc-four hostpath 4m13s
pvc-86d178bc-1673-44e0-9a89-2efb14a1d22c 512M RWO Delete Bound default/pvc-three hostpath 4m15s
pvc-89b64f93-6bf4-4987-bdda-0356d19d6f59 1G RWO Delete Bound default/pvc-one hostpath 4m15s
pvc-a3455e77-0db0-4cab-99c9-c72721a65632 10Ki RWO Delete Bound default/pvc-six hostpath 4m14s
pvc-b47f92ef-f627-4391-943f-efa4241d0811 10k RWO Delete Bound default/pvc-five hostpath 4m13s
pvc-c3e13d78-9047-4899-99e7-0b2667ce4698 1Gi RWO Delete Bound default/pvc-two hostpath 4m15s
pvc-c57fe2b0-013a-412b-bca9-05050990766a 10 RWO Delete Bound default/pvc-seven hostpath 113s
</code></pre>
<p>The code would produce the output of:</p>
<pre class="lang-sh prettyprint-override"><code>['500', 'Mi']
['512', 'M']
['1', 'G']
['10', 'Ki']
['10', 'k']
['1', 'Gi']
['10']
The size in bytes of all PV's is: 3110050074
</code></pre>
<hr />
<p>As an addition to the whole answer, remember that there could be differences between the size requested by a <code>PVC</code> and the actual <code>PV</code> size. Please refer to the storage documentation of your choosing in that regard.</p>
<ul>
<li><code>pvc.yaml</code>:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100M
</code></pre>
<p>Part of the <code>$ kubectl get pvc -o yaml</code> output:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100M # <-- REQUEST
<-- REDACTED -->
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi # <-- SIZE OF PV
phase: Bound
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes</a></em></li>
<li><em><a href="https://en.wikipedia.org/wiki/Byte#Multiple-byte_units" rel="nofollow noreferrer">Wikipedia.org: Byte: Multiple byte units</a></em></li>
</ul>
| Dawid Kruk |
<p>Recently I installed a new GitLab Server in a Kubernetes cluster running on <code>GKE</code>.
I followed this documentation:</p>
<ul>
<li><a href="https://cloud.google.com/solutions/deploying-production-ready-gitlab-on-gke" rel="nofollow noreferrer">https://cloud.google.com/solutions/deploying-production-ready-gitlab-on-gke</a></li>
</ul>
<p>I used Helm 3 instead Helm 2.</p>
<p>Now I need to configure a mail server to send emails for each operation (new user, pipelines, etc.), but I didn't find how to do that in this documentation.
I found a doc (<a href="https://docs.gitlab.com/omnibus/settings/smtp.html#example-configurations" rel="nofollow noreferrer">https://docs.gitlab.com/omnibus/settings/smtp.html#example-configurations</a>) but it is useful only for GitLab installed on virtual machines. I also looked in the <code>replicaset</code> and <code>deployment</code>, but I can't find it.</p>
<p>How can I access my configurations in my GitLab Server?</p>
<ul>
<li>GitLab version: 13.7.0</li>
<li>Helm chart: gitlab-4.7.0</li>
</ul>
| Julio Back | <p>As asked in the question:</p>
<blockquote>
<p>How can I access my configurations in my GitLab Server? GitLab version: 13.7.0 Helm chart: gitlab-4.7.0</p>
</blockquote>
<p>You already accessed some of the configuration of your <code>Gitlab</code> through your <code>values.yaml</code> file. This is the file that stores the configuration of the resources you will be (or were) deploying.</p>
<p>By following one of the parts of the official documentation:</p>
<ul>
<li><em><a href="https://cloud.google.com/solutions/deploying-production-ready-gitlab-on-gke#install_the_gitlab_chart" rel="nofollow noreferrer">Cloud.google.com: Solutions: Deploying production ready gitlab on GKE: Install the gitlab chart</a></em></li>
</ul>
<p>You created your own <code>values.yaml</code> and used it to override some of the values in the <code>values.yaml</code> file of the <code>Helm</code> chart.</p>
<p>To pass additional configuration to your <code>Gitlab</code> you could either:</p>
<ul>
<li>Pull the whole <code>gitlab</code> chart, modify its <code>values.yaml</code> and run it from the local source:
<ul>
<li><code>$ helm pull gitlab/gitlab --untar</code></li>
<li>edit the <code>gitlab/values.yaml</code> file</li>
<li><code>$ helm install gitlab gitlab/ -f gcp-values.yaml</code> (<code>gcp-values.yaml</code> is the values from <code>GCP</code> guide and it's in the preceding directory)</li>
</ul>
</li>
<li>Add to your previously created <code>values.yaml</code> configuration that is responsible for managing email communication (add to the <code>values.yaml</code> from <code>GCP</code> guide).</li>
</ul>
<p>There are multiple parts responsible for mail communication in the <code>Gitlab</code> <code>values.yaml</code>.</p>
<p>For example, there is a part responsible for handling outgoing messages under <code>global.smtp</code> section:</p>
<pre class="lang-yaml prettyprint-override"><code> ## doc/installation/deployment.md#outgoing-email
## Outgoing email server settings
smtp:
enabled: false
address: smtp.mailgun.org
port: 2525
user_name: ""
## doc/installation/secrets.md#smtp-password
password:
secret: ""
key: password
# domain:
authentication: "plain"
starttls_auto: false
openssl_verify_mode: "peer"
## doc/installation/deployment.md#outgoing-email
## Email persona used in email sent by GitLab
email:
from: ''
display_name: GitLab
reply_to: ''
subject_suffix: ''
smime:
enabled: false
secretName: ""
keyName: "tls.key"
certName: "tls.crt"
</code></pre>
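<p>In practice you only need to override the keys you care about in your own values file. Below is a minimal, hedged sketch of such an override (the SMTP host, port, user and secret name are placeholders — the referenced Kubernetes secret holding the SMTP password has to be created separately, as described in <code>doc/installation/secrets.md#smtp-password</code>):</p>
<pre class="lang-yaml prettyprint-override"><code># smtp-values.yaml -- example override, adjust to your mail provider
global:
  smtp:
    enabled: true
    address: smtp.example-provider.com   # placeholder SMTP relay host
    port: 587                            # port 25 is blocked for GCE/GKE nodes
    user_name: "postmaster@example.com"  # placeholder user
    password:
      secret: gitlab-smtp-password       # pre-created Kubernetes secret
      key: password
    authentication: "login"
    starttls_auto: true
  email:
    from: "gitlab@example.com"
    display_name: GitLab
    reply_to: "noreply@example.com"
</code></pre>
<p>You would then pass it as an additional <code>-f smtp-values.yaml</code> argument to <code>helm install</code>/<code>helm upgrade</code> alongside your existing values file.</p>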
<p>There are also parts for incoming messages, service desk, etc. You will need to check them for yourself and configure them to match your needs.</p>
<p>The site that you mentioned:</p>
<ul>
<li><em><a href="https://docs.gitlab.com/omnibus/settings/smtp.html#example-configurations" rel="nofollow noreferrer">Docs.gitlab.com: Omnibus: Settings: SMTP: Example configurations</a></em></li>
</ul>
<p>Could be a good reference/guide when modifying the <code>values.yaml</code> file to support the mail configuration of your choosing.</p>
<p>I also encourage you to also check <a href="https://docs.gitlab.com/ee/administration/incoming_email.html" rel="nofollow noreferrer">this</a> site for incoming emails configuration.</p>
<hr />
<p>As for mail communication in <code>GKE</code>:</p>
<p><code>GKE</code> nodes are in fact <code>GCE</code> <code>VMs</code> and they are under certain limitations:</p>
<blockquote>
<h2>Sending email from an instance</h2>
<p>This document describes the options for sending mail from a virtual machine (VM) instance and provides general recommendations on how to set up your instances to send email.</p>
<h3>Using standard email ports</h3>
<p>Due to the risk of abuse, connections to destination TCP Port 25 are <a href="https://cloud.google.com/vpc/docs/firewalls#blockedtraffic" rel="nofollow noreferrer">always blocked</a> when the destination is external to your VPC network. This includes using SMTP relay with Google Workspace.</p>
<p>Google Cloud does not place any restrictions on traffic sent to external destination IP addresses using destination TCP ports 587 or 465.</p>
<p>-- <em><a href="https://cloud.google.com/compute/docs/tutorials/sending-mail" rel="nofollow noreferrer">Cloud.google.com: Compute: Docs: Tutorials: Sending mail</a></em></p>
</blockquote>
<p>Following on the above <a href="https://cloud.google.com/compute/docs/tutorials/sending-mail" rel="nofollow noreferrer">link</a>:</p>
<p>I've managed to use one of the mentioned external mail service providers to configure the outgoing email communication on my <code>Gitlab</code> instance. You can choose one that suits your needs the most.</p>
<p>You can also check this ServerFault answer which provides some additional information:</p>
<ul>
<li><em><a href="https://serverfault.com/questions/1043566/is-it-possible-to-run-an-smtp-server-like-postfix-on-a-google-cloud-instance-wit">Serverfault.com: Questions: Is it possible to run an SMTP server like postfix on a Google Cloud instance</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm attempting to create a kubernetes pod that will run MLflow tracker to store the mlflow artifacts in a designated s3 location. Below is what I'm attempting to deploy with</p>
<p>Dockerfile:</p>
<pre><code>FROM python:3.7.0
RUN pip install mlflow==1.0.0
RUN pip install boto3
RUN pip install awscli --upgrade --user
ENV AWS_MLFLOW_BUCKET aws_mlflow_bucket
ENV AWS_ACCESS_KEY_ID aws_access_key_id
ENV AWS_SECRET_ACCESS_KEY aws_secret_access_key
COPY run.sh /
ENTRYPOINT ["/run.sh"]
# docker build -t seedjeffwan/mlflow-tracking-server:1.0.0 .
# 1.0.0 is current mlflow version
</code></pre>
<p>run.sh:</p>
<pre><code>#!/bin/sh
set -e
if [ -z $FILE_DIR ]; then
echo >&2 "FILE_DIR must be set"
exit 1
fi
if [ -z $AWS_MLFLOW_BUCKET ]; then
echo >&2 "AWS_MLFLOW_BUCKET must be set"
exit 1
fi
if [ -z $AWS_ACCESS_KEY_ID ]; then
echo >&2 "AWS_ACCESS_KEY_ID must be set"
exit 1
fi
if [ -z $AWS_SECRET_ACCESS_KEY ]; then
echo >&2 "AWS_SECRET_ACCESS_KEY must be set"
exit 1
fi
mkdir -p $FILE_DIR && mlflow server \
--backend-store-uri $FILE_DIR \
--default-artifact-root s3://${AWS_MLFLOW_BUCKET} \
--host 0.0.0.0 \
--port 5000
</code></pre>
<p>mlflow.yaml:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mlflow-tracking-server
namespace: default
spec:
selector:
matchLabels:
app: mlflow-tracking-server
replicas: 1
template:
metadata:
labels:
app: mlflow-tracking-server
spec:
containers:
- name: mlflow-tracking-server
image: seedim/mlflow-tracker-service:v1
ports:
- containerPort: 5000
env:
# FILE_DIR can not be mount dir, MLFLOW need a empty dir but mount dir has lost+found
- name: FILE_DIR
value: /mnt/mlflow/manifest
- name: AWS_MLFLOW_BUCKET
value: <aws_s3_bucket>
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws-secret
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: aws-secret
key: AWS_SECRET_ACCESS_KEY
volumeMounts:
- mountPath: /mnt/mlflow
name: mlflow-manifest-storage
volumes:
- name: mlflow-manifest-storage
persistentVolumeClaim:
claimName: mlflow-manifest-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mlflow-tracking-server
namespace: default
labels:
app: mlflow-tracking-server
spec:
ports:
- port: 5000
protocol: TCP
selector:
app: mlflow-tracking-server
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mlflow-manifest-pvc
namespace: default
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
</code></pre>
<p>I am then building the docker image, saving it to the minikube environment, and then attempting to run the docker image on a kubernetes pod. </p>
<p>When I try this, I get CrashLoopBackOff error for the image pod and 'pod has unbound immediate PersistentVolumeClaims' for the pod created with the yaml. </p>
<p>I'm attempting to follow the information here (<a href="https://github.com/aws-samples/eks-kubeflow-workshop/blob/master/notebooks/07_Experiment_Tracking/07_02_MLFlow.ipynb" rel="nofollow noreferrer">https://github.com/aws-samples/eks-kubeflow-workshop/blob/master/notebooks/07_Experiment_Tracking/07_02_MLFlow.ipynb</a>). </p>
<p>Is there anything noticeable that I'm doing wrong in this situation?</p>
<p>Thank you </p>
| JMV12 | <p>The issue here is related to the Persistent Volume Claim that is not provisioned by your minikube cluster.</p>
<p>You will need to decide whether to switch to a platform-managed Kubernetes service, or to stick with minikube and satisfy the Persistent Volume Claim manually or with alternative solutions.</p>
<p>The simplest option would be to use <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> charts for MLflow installation like <a href="https://hub.helm.sh/charts/cetic/mlflow" rel="nofollow noreferrer">this</a> or <a href="https://hub.helm.sh/charts/larribas/mlflow" rel="nofollow noreferrer">this</a>.</p>
<p>The first helm <a href="https://hub.helm.sh/charts/cetic/mlflow" rel="nofollow noreferrer">chart</a> has listed requirements:</p>
<blockquote>
<h2>Prerequisites</h2>
<ul>
<li>Kubernetes cluster 1.10+</li>
<li>Helm 2.8.0+</li>
<li>PV provisioner support in the underlying infrastructure.</li>
</ul>
</blockquote>
<p>Just like the guide you followed, this one requires PV provisioner support.</p>
<p>So by switching to EKS you will most likely have an easier time deploying MLflow with artifact storage on S3.</p>
<p>If you wish to stay on minikube, you will need to modify the helm chart values or the yaml files from the guide you linked to be compatible with your manual configuration of the PV. It might also need permissions configuration for S3.</p>
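<p>As a rough illustration of what "manually satisfying" the claim could look like on minikube (a sketch only — it assumes you keep the <code>gp2</code> storageClassName from your PVC and are fine with a hostPath-backed volume, which is only suitable for local testing):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mlflow-manifest-pv
spec:
  storageClassName: gp2            # must match the PVC's storageClassName
  capacity:
    storage: 2Gi                   # at least as large as the PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/mlflow-manifest    # directory inside the minikube VM
</code></pre>
<p>With such a PV in place the <code>mlflow-manifest-pvc</code> claim should bind and the pod should no longer report unbound PersistentVolumeClaims. An alternative is to drop <code>storageClassName: gp2</code> from the PVC so that minikube's default <code>standard</code> provisioner can handle it.</p>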
<p>The second helm <a href="https://hub.helm.sh/charts/larribas/mlflow" rel="nofollow noreferrer">chart</a> has the following limitation/feature:</p>
<blockquote>
<h2>Known limitations of this Chart</h2>
<p>I've created this Chart to use it in a production-ready environment in my company. We are using MLFlow with a Postgres backend store.</p>
<p>Therefore, the following capabilities have been left out of the Chart:</p>
<ul>
<li>Using persistent volumes as a backend store.</li>
<li>Using other database engines like MySQL or SQLServer.</li>
</ul>
</blockquote>
<p>You can try to install it on minikube. This setup would result in the tracking data being stored in a remote database; it would still need tweaking in order to store artifacts on S3.</p>
<p>Anyway, minikube is still a lightweight distribution of Kubernetes targeted mainly at learning, so you will eventually reach another limitation if you stick with it for too long.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>According to the <a href="https://minikube.sigs.k8s.io/docs/handbook/config/" rel="nofollow noreferrer">minikube handbook</a> the configuration commands are used to "Configure your cluster". But what does that mean?</p>
<p>If I set cpus and memory then are these the max values the cluster as a whole can ever consume?</p>
<p>Are these the values it will reserve on the host machine in preparation for use?</p>
<p>Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?</p>
<p>What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.</p>
<p>Thanks for the help in advance.</p>
| Currn Hyde | <p>Answering the question:</p>
<blockquote>
<p>If I set cpus and memory then are these the max values the cluster as a whole can ever consume?</p>
</blockquote>
<p>In short: it will be a limit for the whole resource (either a <code>VM</code>, a container, etc., depending on the <code>--driver</code> used). It will be shared by the underlying OS, the Kubernetes components and the workload that you are trying to run on it.</p>
<blockquote>
<p>Are these the values it will reserve on the host machine in preparation for use?</p>
</blockquote>
<p>I'd reckon this would be related to the <code>--driver</code> you are using and how it handles the resources. I personally doubt it reserves 100% of the <code>CPU</code> and <code>memory</code> you've passed to <code>$ minikube start</code>; I'm more inclined to the idea that it uses as much as it needs during specific operations.</p>
<blockquote>
<p>Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?</p>
</blockquote>
<p>By default, when you create a <code>minikube</code> instance with <code>$ minikube start ...</code> you will create a single-node cluster capable of being a <code>control-plane</code> node and a <code>worker</code> node simultaneously. You will be able to run your workloads (like an <code>nginx-deployment</code>) without adding an additional node.</p>
<p>You can add a node to your <code>minikube</code> ecosystem with just <code>$ minikube node add</code>. This will create another node marked as a <code>worker</code> (with no <code>control-plane</code> components). You can read more about it here:</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Tutorials: Multi node</a></em></li>
</ul>
<blockquote>
<p>What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.</p>
</blockquote>
<p>As said previously, you don't need to delete the <code>minikube</code> cluster to add another node. You can run <code>$ minikube node add</code> to add a node on a <code>minikube</code> host. There are also options to <code>delete</code>/<code>stop</code>/<code>start</code> nodes.</p>
<p><strong>Personally</strong> speaking if the workload that you are trying to run requires multiple nodes, I would try to consider other Kubernetes cluster built on top/with:</p>
<ul>
<li><code>Kubeadm</code></li>
<li><code>Kubespray</code></li>
<li><code>Microk8s</code></li>
</ul>
<p>This would allow you to have more flexibility on where you want to create your Kubernetes cluster (as far as I know, <code>minikube</code> works within a single host (like your laptop for example)).</p>
<blockquote>
<p>A side note!</p>
<p>There is an answer (written more than 2 years ago) which shows the way to add a Kubernetes cluster node to a <code>minikube</code> here :</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/51706547/12257134">Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
</a></em></li>
</ul>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm</a></em></li>
<li><em><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Github.com: Kubernetes sigs: Kubespray</a></em></li>
<li><em><a href="https://microk8s.io/" rel="nofollow noreferrer">Microk8s.io</a></em></li>
</ul>
| Dawid Kruk |
<p>I have a persistent volume (PV) and persistent volume claim (PVC) which got bound as well. Initially, the storage capacity was 2Gi for the PV and the requested storage from PVC was 1Gi.
I then edited the existing bound PV and increased the storage to 5Gi with the record flag <code>--record</code>.</p>
<pre><code>vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 2Gi RWO Retain Bound test/my-pvc 106s
vagrant@mykubemaster:~/my-k8s$ kubectl edit pv my-pv --record
persistentvolume/my-pv edited
vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWO Retain Bound test/my-pvc 2m37s
</code></pre>
<p>Now my question is whether there is any way by which I can confirm that this --record flag has actually recorded this storage change (the PV edit) in history.</p>
<p>With deployments, it is easy to check with the <code>kubectl rollout history <deployment name></code> but I'm not sure how to check this with other objects like PV.</p>
<p>Please assist. thanks</p>
| vinod827 | <p>As mentioned in <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer"><code>kubectl</code> references docs</a>:</p>
<pre><code>Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
</code></pre>
<p>You can run <code>kubectl get pv my-pv -o yaml</code> and you should see that <code>kubernetes.io/change-cause</code> was updated with the command that you ran. In your case, it will be <code>kubectl edit pv my-pv --record</code>.</p>
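<p>The relevant part of the output should look roughly like the sketch below (the exact metadata of your object will differ):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
  annotations:
    # recorded because the command was run with --record
    kubernetes.io/change-cause: kubectl edit pv my-pv --record
spec:
  capacity:
    storage: 5Gi
</code></pre>
<p>If you only want that single annotation, a jsonpath output format on <code>kubectl get</code> can extract it directly.</p>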
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout" rel="nofollow noreferrer"><code>rollout</code></a> command that you mentioned (including <code>rollout history</code>) works only with the following resources:</p>
<ul>
<li>deployments</li>
<li>daemonsets</li>
<li>statefulsets</li>
</ul>
| hilsenrat |
<p>I’m new to Istio. I’m implementing Authorization with JWT. The DENY action is not reflected for a valid JWT token. I’ve added the JWT Payload and Authorization Policy for reference.
I’m using kubernetes version v1.18.3 and Istio 1.6.2. I’m running cluster on minikube.</p>
<p>I applied below rule on ingressgateway first:</p>
<pre><code>apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: ingress-auth-jwt
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
jwtRules:
- issuer: "https://dev-n63ipah2.us.auth0.com/"
jwksUri: "https://dev-n63ipah2.us.auth0.com/.well-known/jwks.json"
audiences:
- "http://10.97.72.213/"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: ingress-authz
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: ALLOW
rules:
- when:
- key: request.auth.claims[iss]
values: ["https://dev-n63ipah2.us.auth0.com/"]
</code></pre>
<p>After that I applied below policy for dex-ms-contact service</p>
<pre><code>JWT Payload:
{
"iss": "https://dev-n63ipah2.us.auth0.com/",
"sub": "sEbjHGBcZ16D0jk8wohIp7vPoT0MWTO0@clients",
"aud": "http://10.97.72.213/",
"iat": 1594274641,
"exp": 1594361041,
"azp": "sEbjHGBcZ16D0jk8wohIp7vPoT0MWTO0",
"gty": "client-credentials"
}
</code></pre>
<pre><code>RequestAuthentication:
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: dex-ms-contact-jwt
namespace: default
spec:
selector:
matchLabels:
app: dex-ms-contact
jwtRules:
- issuer: "https://dev-n63ipah2.us.auth0.com/"
jwksUri: "https://dev-n63ipah2.us.auth0.com/.well-known/jwks.json"
audiences:
- "http://10.97.72.213/"
---
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: dex-ms-contact-require-jwt
namespace: default
spec:
selector:
matchLabels:
app: dex-ms-contact
action: DENY
rules:
- when:
- key: request.auth.claims[iss]
values: ["https://dev-n63ipah2.us.auth0.com/"]
</code></pre>
<p>The ingressgateway policy works fine. However, when I apply the DENY policy on the dex-ms-contact service, the DENY policy does not take effect with a valid JWT token. Ideally it should not allow me to access the dex-ms-contact service, right?</p>
<p>What is the expected behavior?</p>
| Sweta Sharma | <p>According to istio <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Istio Authorization Policy enables access control on workloads in the mesh.</p>
<p>Authorization policy supports both allow and deny policies. When allow and deny policies are used for a workload at the same time, the deny policies are evaluated first. The evaluation is determined by the following rules:</p>
<ol>
<li>If there are any DENY policies that match the request, deny the request.</li>
<li>If there are no ALLOW policies for the workload, allow the request.</li>
<li>If any of the ALLOW policies match the request, allow the request.</li>
<li>Deny the request.</li>
</ol>
</blockquote>
<p>So, taking into consideration that deny policies are evaluated first, your request could have been denied by the workload policy first and then allowed by the gateway policy, which resulted in the deny rule being overridden completely.</p>
<p>Considering the order in which the policies are evaluated, being more specific about what should be allowed in the ALLOW policy would probably make your permissions model possible.</p>
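<p>If the intention is to reject requests that do not carry a valid JWT at all, a commonly used pattern from the Istio authorization docs is a <code>DENY</code> policy that matches requests without any request principal, rather than one matching the valid issuer claim. A sketch, assuming sidecar injection is enabled for the <code>dex-ms-contact</code> pods:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: dex-ms-contact-require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: dex-ms-contact
  action: DENY
  rules:
  - from:
    - source:
        # matches requests that have NO valid request principal,
        # i.e. requests without a valid JWT get denied
        notRequestPrincipals: ["*"]
</code></pre>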
<p>Hope it helps.</p>
<hr />
<p>Edit:</p>
<p>According to istio <a href="https://istio.io/latest/docs/reference/glossary/#workload" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>WORKLOAD<a href="https://istio.io/latest/docs/reference/glossary/#workload" rel="nofollow noreferrer"></a></p>
<p>A binary deployed by <a href="https://istio.io/latest/docs/reference/glossary/#operator" rel="nofollow noreferrer">operators</a> to deliver some function of a service mesh application. Workloads have names, namespaces, and unique ids. These properties are available in policy and telemetry configuration using the following <a href="https://istio.io/latest/docs/reference/glossary/#attribute" rel="nofollow noreferrer">attributes</a>:</p>
<ul>
<li><code>source.workload.name</code>, <code>source.workload.namespace</code>, <code>source.workload.uid</code></li>
<li><code>destination.workload.name</code>, <code>destination.workload.namespace</code>, <code>destination.workload.uid</code></li>
</ul>
<p>In Kubernetes, a workload typically corresponds to a Kubernetes deployment, while a <a href="https://istio.io/latest/docs/reference/glossary/#workload-instance" rel="nofollow noreferrer">workload instance</a> corresponds to an individual <a href="https://istio.io/latest/docs/reference/glossary/#pod" rel="nofollow noreferrer">pod</a> managed by the deployment.</p>
</blockquote>
<p>Sorry for late answer, I have been away for some time.</p>
| Piotr Malec |
<p>Now in GKE there is new tab while creating new K8s cluster</p>
<p><code>Automation</code> - Set cluster-level criteria for automatic maintenance, autoscaling, and auto-provisioning. Edit the node pool for automation like auto-scaling, auto-upgrades, and repair.</p>
<p>it has two options - <strong>Balanced (default)</strong> & <strong>Optimize utilization (beta)</strong></p>
<p>Can't we set this for an older cluster? Is there any workaround?</p>
<p>We are running the old GKE version <strong>1.14</strong> and we want to auto-scale the cluster when resource utilization of the existing nodes reaches <strong>70%</strong>.</p>
<p>Currently, we have <strong>2</strong> different pools - only <strong>one</strong> has auto node provisioning enabled, but during peak hours, if HPA scales the pods, a new node takes some time to join the cluster and sometimes the existing nodes start crashing due to resource pressure.</p>
| chagan | <p>You can set the autoscaling profile by going into:</p>
<ul>
<li><code>GCP Cloud Console</code> (Web UI) -> <code>Kubernetes Engine</code> -> <code>CLUSTER-NAME</code> -> <code>Edit</code> -> <code>Autoscaling profile</code></li>
</ul>
<p><a href="https://i.stack.imgur.com/6946C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6946C.png" alt="Autoscaling profile" /></a></p>
<blockquote>
<p>This screenshot was made on <code>GKE</code> version <code>1.14.10-gke.50</code></p>
</blockquote>
<p>You can also run:</p>
<ul>
<li><code>gcloud beta container clusters update CLUSTER-NAME --autoscaling-profile optimize-utilization</code></li>
</ul>
<hr />
<p>The official documentation states:</p>
<blockquote>
<p>You can specify which autoscaling profile to use when making such decisions. The currently available profiles are:</p>
<ul>
<li><code>balanced</code>: The default profile.</li>
<li><code>optimize-utilization</code>: Prioritize optimizing utilization over keeping spare resources in the cluster. When selected, the cluster autoscaler scales down the cluster more aggressively: it can remove more nodes, and remove nodes faster. <strong>This profile has been optimized for use with batch workloads that are not sensitive to start-up latency. We do not currently recommend using this profile with serving workloads.</strong></li>
</ul>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#autoscaling_profiles" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Cluster autoscaler: Autoscaling profiles</a></em></p>
</blockquote>
<p>This setting (<code>optimize-utilization</code>) may not be the best option when used for serving workloads. It will try to <code>scale-down</code> (remove a node) more aggressively. It will automatically reduce the amount of available resources your cluster has and could make it more vulnerable to workload spikes.</p>
<hr />
<p>Answering the part of the question:</p>
<blockquote>
<p>we are running old GKE version 1.14 we want to auto-scale cluster when 70% of resource utilization of existing nodes.</p>
</blockquote>
<p>As stated in the documentation:</p>
<blockquote>
<p>Cluster autoscaler increases or decreases the size of the node pool automatically, based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:</p>
<ul>
<li><strong>If Pods are unschedulable because there are not enough nodes in the node pool, cluster autoscaler adds nodes, up to the maximum size of the node pool</strong>.</li>
</ul>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#how_cluster_autoscaler_works" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Cluster autoscaler: How cluster autoscaler works</a></em></p>
</blockquote>
<p>You can't directly scale the cluster based on the percentage of resource utilization (70%).
The autoscaler acts on the inability of the cluster to schedule pods on the currently existing nodes.</p>
<p>You can scale the number of replicas of your <code>Deployment</code> by <code>CPU</code> usage with the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>. These <code>Pods</code> could have a buffer to handle an increased amount of traffic, and after a specific threshold new <code>Pods</code> would be spawned, for which the <code>CA</code> (Cluster Autoscaler) would request a new node (if the new <code>Pods</code> are unschedulable). This buffer would be the mechanism to prevent sudden spikes that the application couldn't manage.</p>
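<p>A minimal sketch of such an <code>HPA</code> targeting 70% CPU utilization (the deployment name and replica counts are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                      # placeholder deployment name
  minReplicas: 2                      # keeps some headroom for spikes
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70  # scale replicas around 70% CPU of requests
</code></pre>
<p>When the new replicas no longer fit on the existing nodes, the cluster autoscaler adds a node; combined with some headroom in <code>minReplicas</code> and resource requests, this smooths out the peak-hour behaviour described in the question.</p>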
<p>The buffer part and over-provisioning are explained in detail in:</p>
<ul>
<li><em><a href="https://cloud.google.com/solutions/best-practices-for-running-cost-effective-kubernetes-applications-on-gke#autoscaler_and_over-provisioning" rel="nofollow noreferrer">Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke: Autoscaler and over-provisioning</a></em></li>
</ul>
<hr />
<hr />
<p>There is an extensive documentation about running cost effective apps on <code>GKE</code>:</p>
<ul>
<li><em><a href="https://cloud.google.com/solutions/best-practices-for-running-cost-effective-kubernetes-applications-on-gke" rel="nofollow noreferrer">Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke </a></em></li>
</ul>
<p>I encourage you to check above link as there are a lot of tips and insights on (scaling, over-provisioning, workload spikes, <code>HPA</code>, <code>VPA</code>,etc.)</p>
<p>Additional resources:</p>
<ul>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Node auto provisioning</a></em></li>
</ul>
| Dawid Kruk |
<p>While deploying a new deployment to our GKE cluster, the pod is created but is failing with the following error:</p>
<pre><code>Failed create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
</code></pre>
<p>The cluster is not loaded at all, and there is enough free disk, memory and CPU.</p>
<p>No other issue was seen in the pod/cluster logs.</p>
| hilsenrat | <p>The issue was eventually in the deployment YAML.</p>
<p>If you encounter a similar issue, check your resources section and verify that it has the correct syntax, which can be found here: <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a> </p>
<p>In my case, the issue was in the memory value:</p>
<p><strong>Working example (Note the Mi):</strong></p>
<pre><code>resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 5m
memory: 256Mi
</code></pre>
<p><strong>The initial resources definition (bad value — a lowercase <code>m</code> suffix means milli-units, so <code>256m</code> is 0.256 bytes of memory rather than 256 mebibytes):</strong></p>
<pre><code>resources:
limits:
cpu: 500m
memory: 256m
requests:
cpu: 5m
memory: 256m
</code></pre>
<p>I hope that it'll help other community members in the future, since it took me a decent amount of time to find the root cause in my case...</p>
| hilsenrat |
<p>Based on <a href="https://github.com/istio/istio/issues/23757" rel="nofollow noreferrer">https://github.com/istio/istio/issues/23757</a>, it's been quite some time now without an answer, so I realized that most likely I don't understand the internals and I need to shift the question in another direction.</p>
<p>I have a case where a vulnerability scan shows that we are vulnerable to custom origin domains. The expectation from the provider is to block requests that don't match a predefined ORIGIN within a virtual service allowOrigin setting.
I am trying to send OPTIONS preflights or simple GETs, but no matter what I do the mesh always returns <strong>200</strong>:</p>
<pre><code>curl -s -H "Origin: http://fake" --verbose http://192.168.223.10:31380/productpage | grep -i "HTTP/1.1 200 OK"
curl -s -H "Origin: http://testit.com" --verbose http://192.168.223.10:31380/productpage | grep -i "HTTP/1.1 200 OK"
curl -s -X OPTIONS -H "Origin: http://testit.com" --verbose http://192.168.223.10:31380/productpage | grep -i "HTTP/1.1 200 OK"
curl -s -X OPTIONS -H "Origin: http://fake" --verbose http://192.168.223.10:31380/productpage | grep -i "HTTP/1.1 200 OK"
</code></pre>
<p>Is this something that controls only client blocking (browser) and if so how am I supposed to test it with curl?</p>
<p>I know how to reject an origin like this, but then it will just return "not found":</p>
<pre><code>- uri:
exact: /productpage
headers:
origin:
regex: "*test.com"
</code></pre>
| anVzdGFub3RoZXJodW1hbg | <p>There is an answer to this issue on <a href="https://github.com/istio/istio/issues/23757#issuecomment-670165587" rel="nofollow noreferrer">github</a>:</p>
<hr />
<p>Hi everyone. Testing CORS using curl can be a bit misleading. CORS is not enforced at the server side; it will not return a 4xx error, for example. Instead, headers are returned which are used by browsers to deny/accept. <a href="https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/cors.html?highlight=cors" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/cors.html?highlight=cors</a> gives a good demo of this, and <a href="https://www.sohamkamani.com/blog/2016/12/21/web-security-cors/" rel="nofollow noreferrer">https://www.sohamkamani.com/blog/2016/12/21/web-security-cors/</a> is a good explanation.</p>
<p>So Istio's job here is simply to return these headers. I have added a test showing this works: <a href="https://github.com/istio/istio/pull/26231" rel="nofollow noreferrer">https://github.com/istio/istio/pull/26231</a></p>
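<p>For completeness, restricting origins is done with <code>corsPolicy</code> on the <code>VirtualService</code> — a sketch for Istio 1.6 (the gateway name, host and origin values are placeholders). After applying it, you can verify with curl that the <code>access-control-allow-origin</code> response header is only returned for the allowed origin:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway            # placeholder gateway name
  http:
  - route:
    - destination:
        host: productpage
        port:
          number: 9080
    corsPolicy:
      allowOrigins:             # Istio >= 1.6 field (replaces the deprecated allowOrigin)
      - exact: "https://allowed.example.com"
      allowMethods:
      - GET
      - OPTIONS
      allowHeaders:
      - content-type
      maxAge: "24h"
</code></pre>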
| Piotr Malec |
<p>I have 2 nfs mounts of 100TB each i.e. 200TB in total. I have mounted these 2 on Kubernetes container. My file server is a typical log server that holds a mix of data types like JSON, HTML, images, logs and text files, etc. The size of files also varies a lot. I am kind of guessing what should be the ideal resource request for this kubernetes container? My assumption,</p>
<ol>
<li>As this is file reads, i.e. an I/O-intensive operation, CPU should be high</li>
<li>Since we may have large files transferred over, memory should also be high.</li>
</ol>
<p>Just wanted to check if my assumptions are right?</p>
| Hacker | <p>Posting this community wiki answer to set a baseline and to show one possible set of actions that should lead to a solution.</p>
<p>Feel free to edit and expand.</p>
<hr />
<p><strong>As I stated previously, this setup will heavily depend on a case-by-case basis, and giving an approximate figure could be misleading</strong>. In my opinion the best course of action to take would be:</p>
<ul>
<li>Install monitoring tools</li>
<li>Deploy the application for testing</li>
<li>Simulate the load</li>
</ul>
<hr />
<h3>Install monitoring tools</h3>
<p>There are a lot of monitoring tools that can retrieve the data about the <code>CPU</code> and <code>Memory</code> usage of your <code>Pods</code>. You will need to choose the one that suits your workloads and infrastructure best.</p>
<p>Some of them are:</p>
<ul>
<li><em><a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus.io</a></em></li>
<li><em><a href="https://www.elastic.co/elastic-cloud-kubernetes" rel="nofollow noreferrer">Elastic.co</a></em></li>
<li><em><a href="https://www.datadoghq.com/" rel="nofollow noreferrer">Datadoghq.com</a></em></li>
</ul>
<hr />
<h3>Deploy the application for testing</h3>
<p>This can also be quite a wide topic considering the fact that the exact requirements and the infrastructure are not known. One of many questions is whether the <code>Deployment</code> should have a steady replica count or should use some kind of <code>Horizontal Pod Autoscaling</code> (based on <code>CPU</code> and/or <code>Memory</code>). The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access modes</a> on the storage shouldn't matter as <code>NFS</code> supports <code>RWX</code>.</p>
<p>The basic implementation of the <code>Deployment</code> that could be used can be found in the official Kubernetes documentation:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Controllers: Deployment: Creating a deployment</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Volumes: NFS</a></em></li>
</ul>
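<p>Combining those two documents, a minimal sketch of such a test Deployment could look like the one below — the image, NFS server addresses, export paths and resource numbers are placeholders to be tuned with the monitoring data:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: file-server
  template:
    metadata:
      labels:
        app: file-server
    spec:
      containers:
      - name: file-server
        image: nginx                    # placeholder image serving the files
        resources:
          requests:
            cpu: "500m"                 # starting point, adjust after load tests
            memory: "512Mi"
          limits:
            cpu: "2"
            memory: "2Gi"
        volumeMounts:
        - name: nfs-a
          mountPath: /data/a
        - name: nfs-b
          mountPath: /data/b
      volumes:
      - name: nfs-a
        nfs:
          server: nfs-a.example.com     # placeholder NFS server
          path: /exports/a
      - name: nfs-b
        nfs:
          server: nfs-b.example.com     # placeholder NFS server
          path: /exports/b
</code></pre>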
<hr />
<h3>Simulate the load</h3>
<p>The simulation part could be done either with real-life usage or by using a tool to simulate the load. In this part you would need to choose the option/tool that suits your requirements the most. This part will show you the approximate resources that should be allocated to your <code>nginx file explorer</code>.</p>
<blockquote>
<p>A side note!</p>
<p>In my testing I've used <code>ab</code> to check if the load was divided equally by <code>X</code> amount of replicas.</p>
</blockquote>
<hr />
<h3>Additional resources</h3>
<p>I do recommend checking the official guide in the Kubernetes documentation regarding managing resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Configuration: Manage resources containers</a></em></li>
</ul>
<p>I also think that the <code>VPA</code> could help you in the whole process as:</p>
<blockquote>
<p>Vertical Pod Autoscaler (VPA) frees the users from necessity of setting up-to-date resource limits and requests for the containers in their pods. When configured, it will set the requests automatically based on usage and thus allow proper scheduling onto nodes so that appropriate resource amount is available for each pod. It will also maintain ratios between limits and requests that were specified in initial containers configuration.</p>
<p>It can both down-scale pods that are over-requesting resources, and also up-scale pods that are under-requesting resources based on their usage over time.</p>
<p>-- <em><a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Github.com: Kubernetes: Autoscaler: Vertical Pod Autoscaler</a></em></p>
</blockquote>
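<p>A minimal sketch of a <code>VPA</code> object in recommendation-only mode (so it only suggests requests instead of evicting pods) could look like this — it assumes the VPA components are installed in the cluster, and the deployment name is a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: file-server-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: file-server        # placeholder deployment name
  updatePolicy:
    updateMode: "Off"        # only produce recommendations, don't evict pods
</code></pre>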
<p>I'd reckon you could also look on this answer:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/63224542/12257134">Stackoverflow.com: Answers: PromQL query to find CPU and memory used for the last week</a></em></li>
</ul>
| Dawid Kruk |
<p>I created a schedule configuration inside my Gcloud project to create snapshots of a bunch of virtual disks.</p>
<p>Now I want to add my schedule configuration to my disks, but I don't know how to do it in an automated way, because I have more than 1200 disks.</p>
<p>I tried to use a pod with a cron inside, but I cannot execute the kubectl command to list all my persistent volumes:</p>
<pre><code>kubectl describe pv | grep "Name" | awk 'NR % 2 == 1' | awk '{print $2}'
</code></pre>
<p>I want to use this list with the next command in a loop to automatically add my programmed schedule to my disks:</p>
<pre><code>gcloud compute disks add-resource-policies [DISK_NAME] --resource-policies [SCHEDULE_NAME] --zone [ZONE]
</code></pre>
<p>Thanks in advance for your help.</p>
<p>Edit 1: After some comments I changed my code to add a Kubernetes CronJob, but the result is the same, the code doesn't work (the pod is created, but it gives me an error: ImagePullBackOff):</p>
<pre><code>resource "kubernetes_cron_job" "schedulerdemo" {
metadata {
name = "schedulerdemo"
}
spec {
concurrency_policy = "Replace"
failed_jobs_history_limit = 5
schedule = "*/5 * * * *"
starting_deadline_seconds = 10
successful_jobs_history_limit = 10
job_template {
metadata {}
spec {
backoff_limit = 2
ttl_seconds_after_finished = 10
template {
metadata {}
spec {
container {
name = "scheduler"
image = "imgscheduler"
command = ["/bin/sh", "-c", "date; kubectl describe pv | grep 'Name' | awk 'NR % 2 == 1' | awk '{print $2}'"]
}
}
}
}
}
}
}
</code></pre>
| Barragán Louisenbairn | <p>Answering the comment:</p>
<blockquote>
<p>Ok, shame on me, wrong image name. Now I have an error in the Container Log: /bin/sh: kubectl: not found</p>
</blockquote>
<p>It means that the image that you are using doesn't have <code>kubectl</code> installed (or it's not in the <code>PATH</code>). You can use the image <code>google/cloud-sdk:latest</code>. This image already has <code>cloud-sdk</code> installed, which includes:</p>
<ul>
<li><code>gcloud</code></li>
<li><code>kubectl</code></li>
</ul>
<hr />
<p>To run a <code>CronJob</code> that will get the information about <code>PV</code>'s and change the configuration of <code>GCP</code> storage, you will need the following access:</p>
<ul>
<li><code>Kubernetes/GKE</code> API(<code>kubectl</code>) - <code>ServiceAccount</code> with a <code>Role</code> and <code>RoleBinding</code>.</li>
<li><code>GCP</code> API (<code>gcloud</code>) - <code>Google Service account</code> with <code>IAM</code> permissions for storage operations.</li>
</ul>
<p>I found these links helpful when assigning permissions to list <code>PV</code>'s:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: RBAC</a></em></li>
<li><em><a href="https://success.mirantis.com/article/user-unable-to-list-persistent-volumes" rel="nofollow noreferrer">Success.mirantis.com: Article: User unable to list persistent volumes</a></em></li>
</ul>
<p>The recommended way to assign specific permissions for <code>GCP</code> access:</p>
<blockquote>
<p>Workload Identity is the recommended way to access Google Cloud services from applications running within GKE due to its improved security properties and manageability.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Workload Identity: How to</a></em></p>
</blockquote>
<p>I encourage you to read documentation I linked above and check other <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#alternatives" rel="nofollow noreferrer">alternatives</a>.</p>
<hr />
<p>As for the script used inside of a <code>CronJob</code>: you should look for <code>pdName</code> instead of <code>Name</code>, as the <code>pdName</code> is the representation of the <code>gce-pd</code> disk in <code>GCP</code> (assuming that we are talking about the in-tree plugin).</p>
<p>You will have multiple options to retrieve the disk name from the API to use it in the <code>gcloud</code> command.</p>
<p>One of the options:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pv -o yaml | grep "pdName" | cut -d " " -f 8 | xargs -n 1 gcloud compute disks add-resource-policies --zone=ZONE --resource-policies=POLICY
</code></pre>
<blockquote>
<p>Disclaimer!</p>
<p>Please treat above command only as an <strong>example</strong>.</p>
</blockquote>
<p>The above command will get the <code>pdName</code> attribute from the <code>PV</code>'s and iterate over each of them in the command after <code>xargs</code>.</p>
<p>Some of the things to take into consideration when creating a script/program:</p>
<ul>
<li>Running this command more than once on a single disk will produce an error, as you cannot assign multiple policies. You could keep a list of already configured disks that do not require assigning a policy.</li>
<li>Consider using <code>.spec.concurrencyPolicy: Forbid</code> instead of <code>Replace</code>. A replaced <code>CronJob</code> will start iterating over all of those disks from the beginning; the command might not complete in the desired time and the <code>CronJob</code> would be replaced (see the sketch after this list).</li>
<li>You will need to check for the correct <code>kubectl</code> version, as the official support allows a +1/-1 version difference between client and server (<code>cloud-sdk:latest</code> uses <code>v1.19.3</code>).</li>
</ul>
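<p>To tie those pieces together, a minimal <code>CronJob</code> sketch based on the points above could look like the example below. The schedule, object names and the <code>ZONE</code>/<code>POLICY</code> placeholders are assumptions for illustration; <code>disk-policy-sa</code> is the hypothetical service account with permissions to list <code>PV</code>'s:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: disk-policy-scheduler
spec:
  schedule: "0 */6 * * *"          # example: every 6 hours
  concurrencyPolicy: Forbid        # avoid overlapping runs, as noted above
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: disk-policy-sa   # hypothetical SA with PV list access
          restartPolicy: OnFailure
          containers:
          - name: scheduler
            image: google/cloud-sdk:latest     # includes both gcloud and kubectl
            command:
            - /bin/sh
            - -c
            - kubectl get pv -o yaml | grep "pdName" | cut -d " " -f 8 | xargs -n 1 gcloud compute disks add-resource-policies --zone=ZONE --resource-policies=POLICY
</code></pre>
<p>The <code>GCP</code> side of the authentication (for <code>gcloud</code>) still has to be provided separately, for example with Workload Identity as described above.</p>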
<hr />
<p>I highly encourage you to look into other methods of backing up your <code>PVC</code>'s (like, for example, <code>VolumeSnapshots</code>).</p>
<p>Take a look at the links below for more reference/ideas:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/60931080/12257134">Stackoverflow.com: Answer: Periodic database backup in kubernetes</a></em></li>
<li><em><a href="https://stash.run/docs/v0.9.0-rc.2/guides/latest/volumesnapshot/pvc/" rel="nofollow noreferrer">Stash.run: Guides: Latest: Volumesnapshot: PVC</a></em></li>
<li><em><a href="https://velero.io/" rel="nofollow noreferrer">Velero.io</a></em></li>
</ul>
<p>It's worth mentioning that:</p>
<blockquote>
<p>CSI drivers are the future of storage extension in Kubernetes. <strong>Kubernetes has announced that the in-tree volume plugins are expected to be removed from Kubernetes in version 1.21.</strong> For details, see <a href="https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/" rel="nofollow noreferrer">Kubernetes In-Tree to CSI Volume Migration Moves to Beta</a>. After this change happens, existing volumes using in-tree volume plugins will communicate through CSI drivers instead.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#benefits_of_using" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: GCE PD CSI Driver: Benefits of using</a></em></p>
</blockquote>
<p>Switching to the <code>CSI</code> plugin for your <code>StorageClass</code> will allow you to use <code>Volume Snapshots</code> inside of <code>GKE</code>:</p>
<blockquote>
<p>Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: Volume snaphosts: How to</a></em></p>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes</a></em></li>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Cronjobs: How to</a></em></li>
<li><em><a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/cron_job" rel="nofollow noreferrer">Terraform.io: Kubernetes: CronJob</a></em></li>
<li><em><a href="https://cloud.google.com/compute/docs/disks/create-snapshots" rel="nofollow noreferrer">Cloud.google.com: Compute: Disks: Create snapshot</a></em></li>
</ul>
| Dawid Kruk |
<p>We <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#cos" rel="noreferrer">followed this guide</a> to use GPU-enabled nodes in our existing cluster, but when we try to schedule pods we're getting a <code>2 Insufficient nvidia.com/gpu</code> error.</p>
<p><strong>Details:</strong></p>
<p>We are trying to use GPUs in our existing cluster and for that we were able to successfully create a NodePool with a single node that has a GPU enabled.</p>
<p>Then, as a next step according to the guide above, we have to create a daemonset, and we were also able to run the DS successfully.</p>
<p>But now when we are trying to schedule the Pod using the following resource section the pod becomes un-schedulable with this error <code>2 insufficient nvidia.com/gpu</code></p>
<pre><code> resources:
limits:
nvidia.com/gpu: "1"
requests:
cpu: 200m
memory: 3Gi
</code></pre>
<p><strong>Specs:</strong></p>
<pre><code>Node version - v1.18.17-gke.700 (+ v1.17.17-gke.6000) tried on both
Instance type - n1-standard-4
image - cos
GPU - NVIDIA Tesla T4
</code></pre>
<p>Any help or pointers to debug this further will be highly appreciated.</p>
<p>TIA,</p>
<hr />
<p>output of <code>kubectl get node <gpu-node> -o yaml</code> [Redacted]</p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/instance-type: n1-standard-4
beta.kubernetes.io/os: linux
cloud.google.com/gke-accelerator: nvidia-tesla-t4
cloud.google.com/gke-boot-disk: pd-standard
cloud.google.com/gke-container-runtime: docker
cloud.google.com/gke-nodepool: gpu-node
cloud.google.com/gke-os-distribution: cos
cloud.google.com/machine-family: n1
failure-domain.beta.kubernetes.io/region: us-central1
failure-domain.beta.kubernetes.io/zone: us-central1-b
kubernetes.io/arch: amd64
kubernetes.io/os: linux
node.kubernetes.io/instance-type: n1-standard-4
topology.kubernetes.io/region: us-central1
topology.kubernetes.io/zone: us-central1-b
name: gke-gpu-node-d6ddf1f6-0d7j
spec:
taints:
- effect: NoSchedule
key: nvidia.com/gpu
value: present
status:
...
allocatable:
attachable-volumes-gce-pd: "127"
cpu: 3920m
ephemeral-storage: "133948343114"
hugepages-2Mi: "0"
memory: 12670032Ki
pods: "110"
capacity:
attachable-volumes-gce-pd: "127"
cpu: "4"
ephemeral-storage: 253696108Ki
hugepages-2Mi: "0"
memory: 15369296Ki
pods: "110"
conditions:
...
nodeInfo:
architecture: amd64
containerRuntimeVersion: docker://19.3.14
kernelVersion: 5.4.89+
kubeProxyVersion: v1.18.17-gke.700
kubeletVersion: v1.18.17-gke.700
operatingSystem: linux
osImage: Container-Optimized OS from Google
</code></pre>
<p>Tolerations from the deployments</p>
<pre><code> tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
</code></pre>
| Oli | <p>The <code>nvidia-gpu-device-plugin</code> should be installed on the GPU node as well. You should see the <code>nvidia-gpu-device-plugin</code> DaemonSet in your <code>kube-system</code> namespace.</p>
<p>It should be automatically deployed by Google, but if you want to deploy it on your own, run the following command: <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml</code></p>
<p>It will install the GPU plugin on the node and afterwards your pods will be able to consume it.</p>
| hilsenrat |
<p>I'm trying to setup a Google Kubernetes Engine cluster with GPU's in the nodes loosely following <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#gcloud" rel="nofollow noreferrer">these instructions</a>, because I'm programmatically deploying using the Python client.</p>
<p>For some reason I can create a cluster with a NodePool that contains GPU's
<a href="https://i.stack.imgur.com/5yR8q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5yR8q.png" alt="GKE NodePool with GPUs" /></a></p>
<p>...But, the nodes in the NodePool don't have access to those GPUs.
<a href="https://i.stack.imgur.com/DJPVm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DJPVm.png" alt="Node without access to GPUs" /></a></p>
<p>I've already installed the NVIDIA DaemonSet with this yaml file:
<a href="https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml</a></p>
<p>You can see that it's there in this image:
<a href="https://i.stack.imgur.com/PEggq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PEggq.png" alt="enter image description here" /></a></p>
<p>For some reason those 2 lines always seem to be in status "ContainerCreating" and "PodInitializing". They never flip green to status = "Running". How can I get the GPU's in the NodePool to become available in the node(s)?</p>
<h3>Update:</h3>
<p>Based on comments I ran the following commands on the 2 NVIDIA pods; <code>kubectl describe pod POD_NAME --namespace kube-system</code>.</p>
<p>To do this I opened the UI KUBECTL command terminal on the node. Then I ran the following commands:</p>
<p><code>gcloud container clusters get-credentials CLUSTER-NAME --zone ZONE --project PROJECT-NAME</code></p>
<p>Then, I called <code>kubectl describe pod nvidia-gpu-device-plugin-UID --namespace kube-system</code> and got this output:</p>
<pre><code>Name: nvidia-gpu-device-plugin-UID
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: gke-mycluster-clust-default-pool-26403abb-zqz6/X.X.X.X
Start Time: Wed, 02 Mar 2022 20:19:49 +0000
Labels: controller-revision-hash=79765599fc
k8s-app=nvidia-gpu-device-plugin
pod-template-generation=1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: DaemonSet/nvidia-gpu-device-plugin
Containers:
nvidia-gpu-device-plugin:
Container ID:
Image: gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:aa80c85c274a8e8f78110cae33cc92240d2f9b7efb3f53212f1cefd03de3c317
Image ID:
Port: 2112/TCP
Host Port: 0/TCP
Command:
/usr/bin/nvidia-gpu-device-plugin
-logtostderr
--enable-container-gpu-metrics
--enable-health-monitoring
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 50m
memory: 50Mi
Requests:
cpu: 50m
memory: 20Mi
Environment:
LD_LIBRARY_PATH: /usr/local/nvidia/lib64
Mounts:
/dev from dev (rw)
/device-plugin from device-plugin (rw)
/etc/nvidia from nvidia-config (rw)
/proc from proc (rw)
/usr/local/nvidia from nvidia (rw)
/var/lib/kubelet/pod-resources from pod-resources (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
device-plugin:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/device-plugins
HostPathType:
dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
nvidia:
Type: HostPath (bare host directory volume)
Path: /home/kubernetes/bin/nvidia
HostPathType: Directory
pod-resources:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pod-resources
HostPathType:
proc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
nvidia-config:
Type: HostPath (bare host directory volume)
Path: /etc/nvidia
HostPathType:
default-token-qnxjr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qnxjr
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m55s default-scheduler Successfully assigned kube-system/nvidia-gpu-device-plugin-hxdwx to gke-opcode-trainer-clust-default-pool-26403abb-zqz6
Warning FailedMount 6m42s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[nvidia-config default-token-qnxjr device-plugin dev nvidia pod-resources proc]: timed out waiting for the condition
Warning FailedMount 4m25s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[proc nvidia-config default-token-qnxjr device-plugin dev nvidia pod-resources]: timed out waiting for the condition
Warning FailedMount 2m11s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[pod-resources proc nvidia-config default-token-qnxjr device-plugin dev nvidia]: timed out waiting for the condition
Warning FailedMount 31s (x12 over 8m45s) kubelet MountVolume.SetUp failed for volume "nvidia" : hostPath type check failed: /home/kubernetes/bin/nvidia is not a directory
</code></pre>
<p>Then, I called <code>kubectl describe pod nvidia-driver-installer-UID --namespace kube-system</code> and got this output:</p>
<pre><code>Name: nvidia-driver-installer-UID
Namespace: kube-system
Priority: 0
Node: gke-mycluster-clust-default-pool-26403abb-zqz6/X.X.X.X
Start Time: Wed, 02 Mar 2022 20:20:06 +0000
Labels: controller-revision-hash=6bbfc44f6d
k8s-app=nvidia-driver-installer
name=nvidia-driver-installer
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 10.56.0.9
IPs:
IP: 10.56.0.9
Controlled By: DaemonSet/nvidia-driver-installer
Init Containers:
nvidia-driver-installer:
Container ID:
Image: gke-nvidia-installer:fixed
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 150m
Environment: <none>
Mounts:
/boot from boot (rw)
/dev from dev (rw)
/root from root-mount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro)
Containers:
pause:
Container ID:
Image: gcr.io/google-containers/pause:2.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
boot:
Type: HostPath (bare host directory volume)
Path: /boot
HostPathType:
root-mount:
Type: HostPath (bare host directory volume)
Path: /
HostPathType:
default-token-qnxjr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qnxjr
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m20s default-scheduler Successfully assigned kube-system/nvidia-driver-installer-tzw42 to gke-opcode-trainer-clust-default-pool-26403abb-zqz6
Normal Pulling 2m36s (x4 over 4m19s) kubelet Pulling image "gke-nvidia-installer:fixed"
Warning Failed 2m34s (x4 over 4m10s) kubelet Failed to pull image "gke-nvidia-installer:fixed": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/gke-nvidia-installer:fixed": failed to resolve reference "docker.io/library/gke-nvidia-installer:fixed": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Warning Failed 2m34s (x4 over 4m10s) kubelet Error: ErrImagePull
Warning Failed 2m22s (x6 over 4m9s) kubelet Error: ImagePullBackOff
Normal BackOff 2m7s (x7 over 4m9s) kubelet Back-off pulling image "gke-nvidia-installer:fixed"
</code></pre>
| Jed | <p>According to the Docker image that the container is trying to pull (<code>gke-nvidia-installer:fixed</code>), it looks like you're trying to use the Ubuntu daemonset instead of the <code>cos</code> one.</p>
<p>You should run <code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml</code></p>
<p>This will apply the right daemonset for your <code>cos</code> node pool, as stated <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">here</a>.</p>
<p>In addition, please verify that your node pool has the <code>https://www.googleapis.com/auth/devstorage.read_only</code> scope, which is needed to pull the image. You should see it on your node pool page in the GCP Console, under Security -> Access scopes (the relevant service is Storage).</p>
| hilsenrat |
<p>I am trying to submit the Spark application to minikube k8s cluster (Spark Version used : 2.4.3) using below command:</p>
<pre><code>spark-submit \
--master <K8S_MASTER> \
--deploy-mode cluster \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=<my docker image> \
--conf spark.kubernetes.driver.pod.name=spark-py-driver \
--conf spark.executor.memory=2g \
--conf spark.driver.memory=2g \
local:///home/proj/app/run.py <arguments>
</code></pre>
<p>Please note that the python script run.py exists in my docker image in the same path.
Once I do the Spark submit, the Spark job starts and the driver job gets killed. I could see only the below logs in the Driver pod </p>
<p><code>[FATAL tini (6)] exec driver-py failed: No such file or directory</code></p>
<p>I have verified the execution of the pyspark job by doing a docker run on the docker image and was able to see that the above python code gets executed. </p>
<p>These are the events for the failed driver pod</p>
<p>Events:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 52m default-scheduler Successfully assigned ***-develop/run-py-1590847453453-driver to minikube
Warning FailedMount 52m kubelet, minikube MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "run-py-1590847453453-driver-conf-map" not found
Normal Pulled 52m kubelet, minikube Container image "******************:latest" already present on machine
Normal Created 52m kubelet, minikube Created container spark-kubernetes-driver
Normal Started 52m kubelet, minikube Started container spark-kubernetes-driver
</code></pre>
| Subramanian Lakshmanan | <p>I am using one of the base images from my org. However, the issue regarding the mount is only a warning and the pod was successfully assigned after that.</p>
<pre><code>FROM <project_repo>/<proj>/${SPARK_ALPINE_BUILD}
ENV SPARK_OPTS --driver-java-options=-Dlog4j.logLevel=info
ENV SPARK_MASTER "spark://spark-master:7077"
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar $SPARK_HOME/jars
ADD https://repo1.maven.org/maven2/com/datastax/spark/spark-cassandra-connector_2.11/2.3.2/spark-cassandra-connector_2.11-2.3.2.jar $SPARK_HOME/jars
USER root
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /home/<proj>/app
# copy files
COPY src/configs ./configs
COPY src/dependencies ./dependencies
COPY src/jobs ./jobs
COPY src/run.py ./run.py
COPY run.sh ./run.sh
COPY src/requirements.txt .
# install packages here
RUN set -e; \
pip install --no-cache-dir -r requirements.txt;
</code></pre>
| Subramanian Lakshmanan |
<p>I open Visual Studio 2019 and create a new project (Container application for Kubernetes). I tick enable https support, and then when I start debugging in Visual Studio I can browse to the https address. </p>
<p>I then try to go one step further. I have Kubernetes enabled in Docker Desktop on my development PC and follow these instructions (after opening all the .yaml files and changing all references of https to http and all references of port 80 to port 443):</p>
<pre><code>1) cd C:\mvcsecure
2) docker build -t mvcsecure:stable -f c:\mvcsecure\mvcsecure\Dockerfile .
3) cd c:\mvcsecure\mvcsecure\charts
4) helm install mvcsecure ./mvcsecure/
5) kubectl expose deployment mvcsecure --type=NodePort --name=mvcsecure-service
6) kubectl get service
mvcsecure-service NodePort 10.96.128.133 <none> 443:31577/TCP 6s
7) I then try to browse to: https://localhost:31577 and it says:
Cannot securely connect to this page
</code></pre>
<p><a href="https://i.stack.imgur.com/sWGRS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sWGRS.png" alt="enter image description here"></a></p>
<p>Notice there is no option to trust a certificate or anything. </p>
<p>What changes must I make to the default Helm charts created by Visual Studio to get https working on my basic service? I cannot find any documentation or examples online. It would be great to see an example of a https service (mvc or api) deployed to Kubernetes using Helm. I could post the .yaml file code if needed, however there is a lot of it.</p>
<p>I am wanting to use kubernetes cluster root certificate as described here: <a href="https://stackoverflow.com/questions/44708272/how-to-access-a-kubernetes-service-through-https">How to access a kubernetes service through https?</a></p>
<p>I have checked that all TLS and SSL options are ticked in Internet Options.</p>
| w0051977 | <p>If your application accepts <code>HTTP</code> traffic and you want to make it secure (<code>HTTPS</code>), I suggest trying <a href="https://en.wikipedia.org/wiki/TLS_termination_proxy" rel="nofollow noreferrer">TLS termination</a> with a Kubernetes ingress.</p>
<p>The Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">documentation</a> has a great explanation of how to configure TLS termination. With an ingress object you can make your <code>HTTP</code> service accessible via <code>HTTPS</code> from outside of the cluster.</p>
<p>This means that connections to the service will be made over <code>HTTPS</code> and get decrypted to <code>HTTP</code> inside your cluster before reaching the service, for example as in the sketch below.</p>
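<p>As a minimal sketch (assuming you have created a TLS certificate in a <code>kubernetes.io/tls</code> secret, e.g. with <code>kubectl create secret tls mvcsecure-tls --cert=tls.crt --key=tls.key</code>, and that your service is exposed over plain HTTP on port 80 — the hostname and names below are placeholders), an ingress terminating TLS could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mvcsecure-ingress
spec:
  tls:
  - hosts:
    - mvcsecure.example.com        # placeholder hostname
    secretName: mvcsecure-tls      # secret holding tls.crt and tls.key
  rules:
  - host: mvcsecure.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mvcsecure-service   # your existing ClusterIP/NodePort service
          servicePort: 80
</code></pre>
<p>With this in place the ingress controller presents the certificate and forwards the decrypted traffic to the backend service over plain HTTP.</p>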
<p>Hope it helps.</p>
| Piotr Malec |
<p>I had a question about the timezone used by my Kubernetes Cluster. I know I can adjust the timezone of the pods(<a href="https://evalle.xyz/posts/kubernetes-tz/" rel="noreferrer">https://evalle.xyz/posts/kubernetes-tz/</a>). </p>
<p>However, I want to make sure my Cluster always uses UTC in the time zone. Is this a default option or can it change over time?</p>
| sumeyyeemir | <p>Have a look at the documentation <a href="https://cloud.google.com/container-optimized-os/docs/concepts/features-and-benefits#using_container-optimized_os" rel="noreferrer">Using Container-Optimized OS</a>:</p>
<blockquote>
<p><strong>Container-Optimized OS is the default node OS Image in Kubernetes
Engine</strong> and other Kubernetes deployments on Google Cloud Platform.</p>
</blockquote>
<p>then move to the <a href="https://cloud.google.com/container-optimized-os/docs/how-to/create-configure-instance#changing_the_time_zone" rel="noreferrer">Changing the time zone</a> for Container-Optimized OS:</p>
<blockquote>
<p>The <strong>default time zone</strong> of Container-Optimized OS is <strong>UTC0</strong>.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Note that /etc is stateless, so the timezone will be reset to the
default (UTC0) every reboot.</p>
</blockquote>
<p>So, if you don't change the <code>Image type</code> for your nodes from the default Container-Optimized OS to Ubuntu, you don't have to do anything with the time zone settings.</p>
<p>In addition, I've checked on my cluster:</p>
<pre><code>$ date
Tue Feb 4 09:15:51 UTC 2020
$ ls -l /etc/ | grep localtime
lrwxrwxrwx 1 root root 25 Jan 29 08:37 localtime -> ../usr/share/zoneinfo/UTC
</code></pre>
| Serhii Rohoza |
<p>I have a Kubernetes cluster.
All the container logs in Stackdriver appear as severity:error.</p>
<p>The browser has all the requests with status 200.</p>
<p><a href="https://i.stack.imgur.com/8u42H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8u42H.png" alt="enter image description here"></a></p>
<p>Is this normal?
Thanks</p>
| fatnjazzy | <p>When you create a GKE cluster it'll come preconfigured to push logs to Stackdriver. And as soon as you start your application on top of GKE, logs going to <code>stdout</code> or <code>stderr</code> from your containers will be pushed to Stackdriver Logs by Fluentd. </p>
<p>The severity that you can see in <code>Stackdriver -> Logging -> Logs viewer</code> depends on where the event was collected. So, if your application sends an event to <code>stderr</code>, you'll find it with severity <code>ERROR</code> in Stackdriver. </p>
<p>Try to check where your application sends events.</p>
<p><strong>EDIT</strong> You can customize Stackdriver logs with Fluentd - follow this <a href="https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd" rel="nofollow noreferrer">documentation</a>.</p>
| Serhii Rohoza |
<p>Create a one-liner (imperative) command in Kubernetes:</p>
<pre><code>kubectl run test --image=ubuntu:latest --limits="cpu=200m,memory=512Mi" --requests="cpu=200m,memory=512Mi" --privileged=false
</code></pre>
<p>I also need to set <code>securityContext</code> in the one-liner; is it possible? Basically I need to run the container as <code>securityContext/runAsUser</code>, not as the <code>root</code> account.</p>
<p>Yes, the declarative way works, but I'm looking for an imperative way.</p>
| Manjunath v | <p>Posting this answer as a community wiki to highlight the fact that the solution was posted in the comments (a link to another answer):</p>
<blockquote>
<p>Hi, check this answer: <a href="https://stackoverflow.com/a/37621761/5747959">stackoverflow.com/a/37621761/5747959</a> you can solve this with --overrides – CLNRMN 2 days ago</p>
</blockquote>
<p>Feel free to edit/expand.</p>
<hr />
<p>Citing <code>$ kubectl run --help</code>:</p>
<blockquote>
<pre><code> --overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
</code></pre>
</blockquote>
<p>Following is an <code>--overrides</code> example that has additional fields included, to be more specific to this particular question (<code>securityContext</code>-wise):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -it ubuntu --rm --overrides='
{
"apiVersion": "v1",
"spec": {
"securityContext": {
"runAsNonRoot": true,
"runAsUser": 1000,
"runAsGroup": 1000,
"fsGroup": 1000
},
"containers": [
{
"name": "ubuntu",
"image": "ubuntu",
"stdin": true,
"stdinOnce": true,
"tty": true,
"securityContext": {
"allowPrivilegeEscalation": false
}
}
]
}
}
' --image=ubuntu --restart=Never -- bash
</code></pre>
<p>With the above override you will use a <code>securityContext</code> to constrain your workload.</p>
<blockquote>
<p>Side notes!</p>
<ul>
<li>The example above is specific to running a <code>Pod</code> that you will exec into (<code>bash</code>)</li>
<li>The <code>--overrides</code> will override the other specified parameters outside of it (for example: <code>image</code>)</li>
</ul>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Configure pod container: Security context</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Security: Pod security standards</a></em></li>
</ul>
| Dawid Kruk |
<p>The Kubernetes readiness (http) probe is failing; however, the liveness (http) probe works fine without the readiness probe.
I'm using the following, tested with different initialDelaySeconds values. </p>
<pre><code>readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 120
periodSeconds: 10
</code></pre>
<pre><code>livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 120
periodSeconds: 10
</code></pre>
| Pthota | <p>The <code>readiness</code> and <code>liveness</code> probes serve slightly different purposes:</p>
<ul>
<li><p>the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer"><code>readiness</code></a> probe controls whether the pod IP is included in the
list of endpoints for a service, and so also whether a target for a
route when it is exposed via an external URL;</p></li>
<li><p>the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer"><code>liveness</code></a> probe determines whether a pod is still running
normally or whether it should be restarted.</p></li>
</ul>
<p>Theoretically, a situation like you describe could happen if, for example, something is wrong with how your service is exposed. Have a look at the best practices <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">here</a>; you can also find some extra information <a href="https://medium.com/@AADota/kubernetes-liveness-and-readiness-probes-difference-1b659c369e17" rel="nofollow noreferrer">here</a>.</p>
| Serhii Rohoza |
<p>I have a container with the following configuration:</p>
<pre><code>spec:
template:
spec:
restartPolicy: OnFailure
volumes:
- name: local-src
hostPath:
path: /src/analysis/src
type: DirectoryOrCreate
containers:
securityContext:
privileged: true
capabilities:
add:
- SYS_ADMIN
</code></pre>
<ul>
<li>Note that I'm intentionally omitting some other configuration parameters to keep the question short</li>
</ul>
<p>However, when I deploy it to my cluster on kubernetes on gcloud, I see the following error:</p>
<pre><code>Error: failed to start container "market-state": Error response from daemon: error while creating mount source path '/src/analysis/src': mkdir /src: read-only file system
</code></pre>
<p>I have tried deploying the exact same job locally with minikube and it works fine.</p>
<p>My guess is that this has to do with the pod's permissions relative to the host, but I expected it to work given the <code>SYS_ADMIN</code> permissions that I'm setting. When creating my cluster, I gave it a <code>devstorage.read_write</code> scope for another reason, but am wondering if there are other scopes I need as well?</p>
<pre><code>gcloud container clusters create my_cluster \
--zone us-west1-a \
--node-locations us-west1-a \
--scopes=https://www.googleapis.com/auth/devstorage.read_write
</code></pre>
<p>DirectoryOrCreate</p>
| Olshansky | <p>As pointed out by user @DazWilkin:</p>
<blockquote>
<p>IIUC, if your cluster is using Container-Optimized VMs, you'll need to be aware of the structure of the file system for these instances.</p>
<p>See <a href="https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem" rel="nofollow noreferrer">https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem</a></p>
</blockquote>
<p>This is the correct understanding. You can't write to a read-only location like <code>/</code> (even with the <code>SYS_ADMIN</code> and <code>privileged</code> parameters) because of the following:</p>
<blockquote>
<p><strong>The root filesystem is mounted as read-only to protect system integrity</strong>. However, home directories and /mnt/stateful_partition are persistent and writable.</p>
<p>-- <em><a href="https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem#filesystem" rel="nofollow noreferrer">Cloud.google.com: Container optimized OS: Docs: Concepts: Disk and filesystem: Filesystem</a></em></p>
</blockquote>
<p>As for a <strong>workaround</strong> solution, you can change the location of your <code>hostPath</code> on the node or use <code>GKE</code> with nodes that use <code>Ubuntu</code> images instead of <code>Container Optimized OS</code> images. You will then be able to use <code>hostPath</code> volumes with paths as specified in your question. You can read more about available node images by following the official documentation:</p>
<ul>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-images" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Node images</a></em></li>
</ul>
<hr />
<p>If your workload/use case allows using <code>Persistent Volumes</code>, I encourage you to do so.</p>
<blockquote>
<p>PersistentVolume resources are used to manage durable storage in a cluster. In GKE, PersistentVolumes are typically backed by Compute Engine persistent disks.</p>
<p><---></p>
<p>PersistentVolumes are cluster resources that exist independently of <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/pod" rel="nofollow noreferrer">Pods</a>. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated. PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes</a></em></p>
</blockquote>
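<p>As a minimal sketch of that approach (the claim name, requested size and mount path are assumptions for illustration; on <code>GKE</code> a dynamically provisioned claim is typically backed by a Compute Engine persistent disk):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analysis-src-pvc          # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi               # example size
---
apiVersion: v1
kind: Pod
metadata:
  name: analysis
spec:
  containers:
  - name: market-state
    image: IMAGE                  # placeholder for your image
    volumeMounts:
    - name: local-src
      mountPath: /src/analysis/src
  volumes:
  - name: local-src
    persistentVolumeClaim:
      claimName: analysis-src-pvc
</code></pre>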
<p>You can also consider looking at the <code>Local SSD</code> solution, which can use the <code>hostPath</code> type of <code>Volume</code>:</p>
<ul>
<li><em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Persistent Volumes: Local SSD</a></em></li>
</ul>
<hr />
<blockquote>
<p>When creating my cluster, I gave it a <code>devstorage.read_write</code> scope for other reason, but am wondering if there are other scopes I need as well?</p>
</blockquote>
<p>You can create a <code>GKE</code> cluster without adding any additional scopes, like:</p>
<ul>
<li><code>$ gcloud container clusters create --zone=ZONE</code></li>
</ul>
<p>The <code>--scopes=SCOPE</code> value will depend on the workload you intend to run on the cluster. You can assign scopes that will grant you access to specific Cloud Platform services (like Cloud Storage, for example).</p>
<p>You can read more about it by following <code>gcloud</code> online manual:</p>
<ul>
<li><em><a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes" rel="nofollow noreferrer">Cloud.google.com: SDK: Gcloud: Container: Clusters: Create: Scopes</a></em></li>
</ul>
<p>To add to the topic of authentication to Cloud Platform services:</p>
<blockquote>
<p>There are three ways to authenticate to Google Cloud services using service accounts from within GKE:</p>
<ol>
<li>Use Workload Identity</li>
</ol>
<p>Workload Identity is the recommended way to authenticate to Google Cloud services from GKE. Workload Identity allows you to configure Google Cloud service accounts using Kubernetes resources. If this fits your use case, it should be your first option. This example is meant to cover use cases where Workload Identity is not a good fit.</p>
<ol start="2">
<li>Use the default Compute Engine Service Account</li>
</ol>
<p>Each node in a GKE cluster is a Compute Engine instance. Therefore, applications running on a GKE cluster by default will attempt to authenticate using the "Compute Engine default service account", and inherit the associated scopes.</p>
<p>This default service account may or may not have permissions to use the Google Cloud services you need. It is possible to expand the scopes for the default service account, but that can create security risks and is not recommended.</p>
<ol start="3">
<li>Manage Service Account credentials using Secrets</li>
</ol>
<p>Your final option is to create a service account for your application, and inject the authentication key as a Kubernetes secret. This will be the focus of this tutorial.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Authenticating to Cloud Platform</a></em></p>
</blockquote>
| Dawid Kruk |
<p>I have created a PostgreSQL cluster using the crunchydata pgo operator in a namespace with istio-injection enabled, but now I'm getting an API server connection refused error.</p>
<pre><code>
level=error msg="Get https://100.xx.xx.xx:443/apis/batch/v1/namespaces/project/jobs?labelSelector=pg-cluster%3Dmilkr7%2Cpgdump%3Dtrue: dial tcp 100.xx.xx.xx:443: connect: connection refused".
</code></pre>
<p><em>api server log</em>:</p>
<pre><code>W0603 03:04:21.373083 1 dispatcher.go:180] Failed calling webhook, failing closed sidecar-injector.istio.io: failed calling webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: dial tcp 100.65.xx.xx:443: connect: connection refused
I0603 03:18:59.654964 1 log.go:172] http: TLS handshake error from 172.20.xx.xx:44638: remote error: tls: bad certificate
</code></pre>
| Taybur Rahman | <p>To add your database to the Istio service mesh you can use the <a href="https://istio.io/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer"><code>ServiceEntry</code></a> object.</p>
<blockquote>
<p><code>ServiceEntry</code> enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the <code>workloadSelector</code> field. These endpoints can be VM workloads declared using the <code>WorkloadEntry</code> object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.</p>
</blockquote>
<p>Example of <code>ServiceEntry</code> yaml manifest for database:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: db-service
namespace: databasens
spec:
exportTo:
- "."
hosts:
- db-service.xxx.com
ports:
- number: 5443
name: tcp
protocol: tcp
resolution: DNS
location: MESH_EXTERNAL
</code></pre>
<p>If you have mTLS enforcement enabled you will also need a <code>DestinationRule</code> that defines how to communicate with the external service.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: mtls-db-service
spec:
host: db-service.xxx.com
trafficPolicy:
tls:
mode: MUTUAL
clientCertificate: /etc/certs/myclientcert.pem
privateKey: /etc/certs/client_private_key.pem
caCertificates: /etc/certs/rootcacerts.pem
</code></pre>
<p>For more information and more examples visit the Istio <a href="https://istio.io/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">documentation</a> page for <code>ServiceEntry</code>.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>I would like to find the best practice of creating several pods with different env values.</p>
<p>Let's say that my system should ping several websites and every pod will ping a different website; the only difference is the URL.
I would like to write one deployment file for all the different pods and one file with the list of URLs, and have k8s create a pod for each URL in the list.</p>
<p>Is it possible?</p>
| Tomer | <p>Posting this community wiki answer to give more of a baseline approach with some potential solutions rather than a definitive one.</p>
<p>Feel free to edit and expand.</p>
<hr />
<p>Addressing the question from the title:</p>
<blockquote>
<p>Kubernetes creation of multiple deployment with one deployment file</p>
</blockquote>
<p>You can't create a single <code>Deployment</code> that each replica would be different from each other. The <code>Deployment</code> creates sets of identical <code>Pods</code>:</p>
<blockquote>
<p>What is a Deployment?</p>
<p><strong><em>Deployments</em> represent a set of multiple, identical <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/pod" rel="nofollow noreferrer">Pods</a> with no unique identities.</strong> A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive. In this way, Deployments help ensure that one or more instances of your application are available to serve user requests. Deployments are managed by the Kubernetes Deployment controller.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/deployment#what_is_a_deployment" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Deployment: What is a Deployment</a></em></p>
</blockquote>
<p>Some of the ways that you could achieve the setup that you've described:</p>
<ul>
<li>As pointed by @David Maze:</li>
</ul>
<blockquote>
<p>Yes, it is possible, by putting multiple complete Deployment specs in the same file. For the example you give, though, do these need to be separate processes, or can you achieve the same thing with a single Deployment worker and a job queue (like RabbitMQ) that pushes out the URLs?</p>
</blockquote>
<ul>
<li><p>With a templating tool like <code>Helm</code> where you would template the exact specification of your workload and then iterate over it with different values (see the example)</p>
</li>
<li><p>Use the Kubernetes official documentation on work queue topics:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/" rel="nofollow noreferrer">Indexed Job for Parallel Processing with Static Work Assignment</a> - alpha</li>
<li><a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">Parallel Processing using Expansions</a></li>
</ul>
</li>
</ul>
<hr />
<h3>Example:</h3>
<p>As I said previously, you can use <code>Helm</code> to template the workload and spawn different <code>Jobs</code> that would be configured with a different command.</p>
<blockquote>
<p>Side notes!</p>
<ul>
<li>Please do not treat this example as production ready.</li>
<li>This example does not address persistently storing the data after the <code>Job</code> is finished. You would need to examine available solutions and choose the one that fits your needs the most.</li>
</ul>
</blockquote>
<p>Assuming that basic Helm template is created and it's having modified files:</p>
<ul>
<li><code>values.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>jobs:
- name: job1
command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(100)"']
- name: job2
command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(200)"']
</code></pre>
<ul>
<li><code>templates/job.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ $jobs.name }}
spec:
template:
spec:
containers:
- name: {{ $jobs.name }}
image: perl
command: {{ $jobs.command}}
restartPolicy: Never
backoffLimit: 4
---
{{- end }}
</code></pre>
<p>With the above example you will create 2 <code>Jobs</code> that will calculate pi to 100 or 200 decimal places. You can modify this example to support the workload that you intend to run.</p>
<ul>
<li><code>$ kubectl get pods</code></li>
</ul>
<pre><code>NAME READY STATUS RESTARTS AGE
job1-sgr86 0/1 Completed 0 3h36m
job2-4jxh5 0/1 Completed 0 3h36m
</code></pre>
<ul>
<li><code>$ echo "one:"; kubectl logs job1-sgr86; echo "two:"; kubectl logs job2-4jxh5</code></li>
</ul>
<pre><code>one:
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
two:
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303820
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://serverfault.com/questions/993167/in-a-helm-template-how-can-i-use-items-in-1-list-to-iterate-over-another-list">Serverfault.com: Questions: In a Helm template how can I use items in 1 list to iterate over another list</a></em></li>
</ul>
| Dawid Kruk |