<p>I have a shared yaml file for multiple pipelines and I would like to parameterize the tag of one of the images in the yaml file.</p> <p>What would be the simplest way to do this? At the moment I am maintaining multiple <code>KubernetesPods.yaml</code>, such as <code>KubernetesPods-1.5.0.yaml</code> and interpolating the parameter into the name ( <code>yamlFile &quot;KubernetesPods-${params.AGENT_POD_SPEC}.yaml&quot;</code>), but this does not scale well.</p> <p>Can I get parameters into the yaml without having to have the yaml written out in every pipeline?</p> <p>Example pipeline:</p> <pre><code>pipeline { agent { kubernetes { yamlFile 'KubernetesPods.yaml' } } parameters { choice( name: 'AGENT_POD_SPEC', choices: ['1.5.0','1.3.0','1.2.0','1.4.0'], description: 'Agent pod configuration' ) } } </code></pre> <p>Example KubernetesPods.yaml:</p> <pre><code>kind: Pod spec: containers: - name: example-image image: example/image:&lt;IMAGE-TAG-I-WANT-TO-PARAMETERIZE&gt; imagePullPolicy: Always command: - cat </code></pre>
<p>You can do it using <code>yaml</code> instead of <code>yamlFile</code></p> <pre><code>pipeline { agent { kubernetes { yaml &quot;&quot;&quot; kind: Pod spec: containers: - name: example-image image: example/image:${params.AGENT_POD_SPEC} imagePullPolicy: Always command: - cat &quot;&quot;&quot; } } parameters { choice( name: 'AGENT_POD_SPEC', choices: ['1.5.0','1.3.0','1.2.0','1.4.0'], description: 'Agent pod configuration' ) } } </code></pre>
<p>I'm currently working on a Kubernetes deployment file and I need to set the environment variables that the container will have.</p> <p>Is there a way to set Kubernetes secrets as environment variables in the deployment file?</p> <p>Here is a dummy file to help explain:</p> <p><a href="https://i.stack.imgur.com/mv4Rr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mv4Rr.png" alt="dummy deployment file" /></a></p>
<p>Yes, you can inject environment variables from ConfigMaps or Secrets by using <code>envFrom</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: ... spec: ... template: ... spec: containers: - name: app image: ... envFrom: - configMapRef: name: env-configmap - secretRef: name: env-secrets </code></pre> <p>See more in the official doc: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables</a></p>
<p>I want to add a wireguard sidecar container to prometheus stack installed via helm chart such that I can fetch client connected via vpn. I'm using the helm chart from <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">prometheus-community/kube-prometheus-stack</a> with modified values.yml. To integrate wireguard I added a wireguard container in the values.yml like this:</p> <pre><code>... containers: - name: &quot;wireguard&quot; image: &quot;lscr.io/linuxserver/wireguard:latest&quot; volumeMounts: - name: wireguard-config mountPath: /config readOnly: true - name: wireguard-run mountPath: /run securityContext: runAsGroup: 0 runAsUser: 0 privileged: true capabilities: add: - NET_ADMIN - SYS_MODULE </code></pre> <p>but when I start the containers, I get the following error:</p> <pre><code>Normal Pulled 4s kubelet Successfully pulled image &quot;lscr.io/linuxserver/wireguard:latest&quot; in 500.578587ms Warning Failed 3s (x3 over 4s) kubelet Error: container's runAsUser breaks non-root policy (pod: &quot;XX&quot;, container: wireguard) Normal Pulled 3s kubelet Successfully pulled image &quot;lscr.io/linuxserver/wireguard:latest&quot; in 456.879479ms </code></pre> <p>As wireguard needs to be able to change network interfaces, it needs the root privileges. If I don't run the container with root privileges I get the following:</p> <pre><code>... SOME OTHER PERMISSION ERROS s6-supervise (child): fatal: unable to exec run: Permission denied s6-supervise coredns: warning: unable to spawn ./run - waiting 10 seconds s6-supervise (child): fatal: unable to exec run: Permission denied s6-supervise wireguard: warning: unable to spawn ./run - waiting 10 seconds s6-supervise coredns: warning: unable to spawn ./run - waiting 10 seconds </code></pre> <p><strong>What I tried</strong> is modifying the <em>podSecurityPolicy</em> to allow running containers as root in the prometheus values.yml. I was hoping that I could simply run the container as root (for testing at least):</p> <pre><code>podSecurityPolicy: allowedCapabilities: - runAsUser: RunAsAny - NET_ADMIN - SYS_MODULE allowedHostPaths: [] volumes: [] </code></pre> <p>This didn't change anything (am I doing this correctly?)</p> <p>How would I allow running a sidecar container as root? Or is there a way to run wireguard without root priv?</p>
<ol> <li><p>There is no PSP in the newer kube-prometheus-stack. See <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md</a> and search for PodSecurityPolicies: &quot;From 27.x to 28.x This version disables PodSecurityPolicies by default because they are deprecated in Kubernetes 1.21 and will be removed in Kubernetes 1.25.&quot;</p> </li> <li><p>We need to work with the <code>securityContext</code>. It seems that you cannot have a pod running as non-root with a container (or init container) running as root; that is what the error &quot;kubelet Error: container's runAsUser breaks non-root policy&quot; means.</p> </li> </ol> <p>So we need to set the <code>securityContext</code> of the whole pod to run as root. For this, you may use a values.yaml for the Helm chart like the one below. The effect is that the pod prometheus-prometheus-stack-kube-prom-prometheus-0 will run all of its containers as root. Not ideal, but it works.</p> <pre><code>prometheus: prometheusSpec: securityContext: runAsGroup: 0 runAsNonRoot: false runAsUser: 0 fsGroup: 0 # initContainers: # - name: &quot;chmod&quot; # image: alpine:3.16.0 # command: # - &quot;/bin/sh&quot; # - &quot;-c&quot; # - &quot;chmod 777 /prometheus&quot; # volumeMounts: # - name: prometheus-prometheus-stack-kube-prom-prometheus-db # mountPath: /prometheus [...] </code></pre>
<p>I am using <code>kube-prometheus-stack</code> and the yaml snippets you see below are part of a <code>PrometheusRule</code> definition.</p> <p>This is a completely hypothetical scenario, the simplest one I could think of that illustrates my point.</p> <p>Given this kind of metric:</p> <pre><code>cpu_usage{job=&quot;job-1&quot;, must_be_lower_than=&quot;50&quot;} 33.72 cpu_usage{job=&quot;job-2&quot;, must_be_lower_than=&quot;80&quot;} 56.89 # imagine there are plenty more lines here # with various different values for the must_be_lower_than label # ... </code></pre> <p>I'd like to have alerts that check the label <code>must_be_lower_than</code> and alert. Something like this (this doesn't work the way it's written now, just trying to demonstrate):</p> <pre><code>alert: CpuUsageTooHigh annotations: message: 'On job {{ $labels.job }}, the cpu usage has been above {{ $labels.must_be_lower_than }}% for 5 minutes.' expr: cpu_usage &gt; $must_be_lower_than for: 5m </code></pre> <p><strong>P.S</strong> I already know I can define alerts like this:</p> <pre class="lang-yaml prettyprint-override"><code>alert: CpuUsageTooHigh50 annotations: message: 'On job {{ $labels.job }}, the cpu usage has been above 50% for 5 minutes.' expr: cpu_usage{must_be_lower_than=&quot;50&quot;} &gt; 50 for: 5m --- alert: CpuUsageTooHigh80 annotations: message: 'On job {{ $labels.job }}, the cpu usage has been above 80% for 5 minutes.' expr: cpu_usage{must_be_lower_than=&quot;80&quot;} &gt; 80 for: 5m </code></pre> <p>This is not what I'm looking for, because I have to manually define alerts for <em>some</em> of the various values of the <code>must_be_lower_than</code> label.</p>
<p>There is currently no way in Prometheus to have this kind of &quot;templating&quot;.</p> <p>The only way to get something close would be to use recording rules that define the maximum value for the label:</p> <pre><code>rules: - record: max_cpu_usage expr: vector(50) labels: must_be_lower_than: &quot;50&quot; - record: max_cpu_usage expr: vector(80) labels: must_be_lower_than: &quot;80&quot; # ... other possible values </code></pre> <p>Then use it in your alerting rule:</p> <pre><code>alert: CpuUsageTooHigh annotations: message: 'On job {{ $labels.job }}, the cpu usage has been above {{ $labels.must_be_lower_than }}% for 5 minutes.' expr: cpu_usage &gt; ON(must_be_lower_than) GROUP_LEFT max_cpu_usage for: 5m </code></pre>
<h1>Background</h1> <p>I'm installing Elastic Search into my cluster with a helm chart with the following command:</p> <pre class="lang-bash prettyprint-override"><code>helm -n elasticsearch upgrade --install -f values_elasticsearch.yaml elasticsearch elastic/elasticsearch </code></pre> <p>This allows me to override the values, which is nice, but I'd also like to add an <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">Istio virtual service</a>, which I believe would require adding a template to the helm chart.</p> <h2>Options</h2> <p>I've considered the following three options, but I'm not sure what is best practice.</p> <ol> <li>Download the elastic search maintained helm chart and add a template for the additional yamls I need. This could create an issue when they upgrade their chart: I'm going to have to keep merging in their changes.</li> <li>Add the template to a separate helm chart. I don't love this solution because I like to have one helm chart for a single namespace and application.</li> <li>Create a helm subchart. I don't know much about these or if this is the right scenario to use them in.</li> </ol> <p>So I'm wondering if there is a better way to do this or which of my options is best.</p>
<p>A Helm subchart is the ideal way:</p> <ul> <li>Create your own Helm chart and add the elasticsearch chart as a subchart (dependency); see the sketch below.</li> <li>Add the required Istio YAML files inside your chart's <code>templates</code> folder.</li> <li>Run <code>helm dep update</code> first, then run the install/upgrade command.</li> </ul>
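<p>For illustration, a minimal <code>Chart.yaml</code> for the parent chart could look like the sketch below (the chart name and versions are placeholders to adapt; the repository URL assumes you use Elastic's official Helm repo):</p> <pre class="lang-yaml prettyprint-override"><code># Chart.yaml of your own parent chart (hypothetical name and versions)
apiVersion: v2
name: my-elasticsearch-stack
version: 0.1.0
dependencies:
  - name: elasticsearch
    version: 8.5.1
    repository: https://helm.elastic.co
</code></pre> <p>Your Istio VirtualService template then lives in <code>templates/</code> of this parent chart, and overrides for the subchart go in the parent's <code>values.yaml</code> under an <code>elasticsearch:</code> key.</p>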
<p>I want to utilize data from a persistent volume as <code>readOnly</code> inside a container. What is the difference of setting the <code>readOnly</code> flag under the <code>volumeMounts</code> vs the deployment <code>volumes.persistentVolumeClaim</code>?</p> <p>The results seems to be the same. Is one of the two redundant?</p> <pre class="lang-yaml prettyprint-override"><code>--- kind: Deployment apiVersion: apps/v1 metadata: name: ${ORDERER_ORG_NAME}-delpoyment namespace: ${NS} labels: app: ${ORDERER_ORG_NAME} spec: replicas: 1 selector: matchLabels: app: ${ORDERER_ORG_NAME} template: metadata: labels: app: ${ORDERER_ORG_NAME} spec: initContainers: - name: prepare-tls-certs image: busybox imagePullPolicy: IfNotPresent command: [&quot;/bin/sh&quot;, &quot;wait-for-certs.sh&quot;] volumeMounts: - name: fabric-volume mountPath: ${ORDERER_ORG_PATH} subPath: var/hyperledger/config/tls readOnly: true containers: # other containers volumes: - name: fabric-volume persistentVolumeClaim: readOnly: true claimName: fabric-pvc </code></pre>
<p>In your case, yes, the outcome is the same. But this functionality is not redundant, because the <code>readOnly</code> on a volume and <code>readOnly</code> on a <code>mountPoint</code> have different implications. For example,</p> <ol> <li><code>readOnly: true</code> on a <code>volumeMount</code> means that the <code>ro</code> mount option is set when mounting your endpoint. This is on the <code>container level</code>. You can read more about <code>mount options</code> <a href="https://man7.org/linux/man-pages/man8/mount.8.html#FILESYSTEM-INDEPENDENT_MOUNT_OPTIONS" rel="nofollow noreferrer">here</a>.</li> <li><code>readOnly: true</code> on a volume means the <em><strong>volume</strong></em> is readonly. Even if you don't mount it as <code>readOnly</code>, you cannot write to it. This is on a pod level, and any container that mounts this volume will not have the ability to write. However, it doesn't <em><strong>have to be</strong></em> <code>readOnly</code>. You can set <code>readOnly: false</code> as well, in which case the <code>volume</code> is <code>writeable</code> while an individual mounted <code>filesystem</code> can still be <code>readOnly</code>. This way, some containers can mount this volume with <code>readOnly: false</code> and be able to write to it, while others cannot. This can be used, for instance, to ensure certain containers are not allowed to modify a volume, but others are.</li> </ol> <p>To take it a level further &quot;down the stack&quot;, you can have <code>PersistentVolumes</code> that can be shared across multiple pods (across multiple nodes). Here, the <code>readOnly</code> field plays a role in conjunction with the <code>AccessMode</code> of the <code>PersistentVolume</code>. For example, with an <code>AccessMode</code> of <code>ReadWriteOnce</code>, only 1 <code>node</code> can have pods attaching this volume for <code>writing</code>; more than 1 <code>node</code> will not be able to have its pods write to this volume. With <code>ReadOnlyMany</code>, many nodes can <code>read</code> from it only. You can read more about <code>AccessModes</code> of <code>PersistentVolumes</code> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">here</a>.</p> <p>In summary, the permissions are always <code>AND</code>ed together, so the most restrictive setting generally wins as you go <code>up the stack</code>. So, generally, an <code>AccessMode</code> will be more permissive, a <code>Volume</code> permission will be less permissive and the <code>volumeMount</code> permissions will be least permissive by design. That's the way you design access to your resources.</p>
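<p>To illustrate point 2, here is a hypothetical pod sketch (names are made up, reusing the claim from your example) where the volume itself stays writeable but only one of the two containers mounts it read-only:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo            # hypothetical name
spec:
  containers:
    - name: reader
      image: busybox
      command: [&quot;sleep&quot;, &quot;3600&quot;]
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true         # this container cannot write to /data
    - name: writer
      image: busybox
      command: [&quot;sleep&quot;, &quot;3600&quot;]
      volumeMounts:
        - name: shared-data
          mountPath: /data       # readOnly defaults to false, so this container can write
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: fabric-pvc
        readOnly: false          # the volume itself stays writeable
</code></pre>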
<p><a href="https://i.stack.imgur.com/dLCUi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dLCUi.png" alt="Running Netstat in terminal" /></a> <a href="https://i.stack.imgur.com/2JQyb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2JQyb.png" alt="Kubectl Version" /></a></p> <p>I have no clue how it got installed and why it got installed. I do have docker installed but i don't have Kubernetes installed via docker. I was assuming it was due to killer intelligence but i got no idea how to verify or check that.</p>
<p>This isn't a kubernetes process. This is a result of installing docker and how it configures itself to <em>enable</em> you to run kubernetes locally. The string &quot;kubernetes&quot; is coming from your <code>hosts</code> file even though you might not have turned on the feature to use k8s.</p> <p>If you open <code>C:\Windows\System32\drivers\etc\hosts</code>, you'll see a line that associates &quot;kubernetes&quot; with 127.0.0.1, your <code>localhost</code>. So, while <code>netstat</code> may show &quot;kubernetes&quot; as the destination address, it's a little misleading because anything going to 127.0.0.1 will show up as &quot;kubernetes&quot;.</p> <p><code>netstat -ab</code> will show the executable associated with each connection if you want to verify.</p>
<p>Is it possible to pull private images in Github Container Registry using GITHUB_TOKEN?</p> <p>If so, what do I need to configure in k8s?</p> <p>Thanks!</p>
<p>Yes you can. You will have to create a <code>secret</code> object in your cluster.</p> <pre><code>kubectl create secret docker-registry ghcr-login-secret --docker-server=https://ghcr.io --docker-username=$YOUR_GITHUB_USERNAME --docker-password=$YOUR_GITHUB_TOKEN --docker-email=$YOUR_EMAIL </code></pre> <blockquote> <p>Note: Your credentials will become part of shell history, so be careful and remove the shell history afterwards.</p> </blockquote> <p>This will internally create a <code>dockerconfig.json</code> with your provided values and generate a secret that will be used to authenticate with your registry.</p> <p>You can then proceed to specify in your <code>Pod</code> specification that you are using a private registry and pass this secret as:</p> <pre><code>... imagePullSecrets: - name: ghcr-login-secret ... </code></pre> <p>You can read more about external registry interfacing with Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">here</a>.</p>
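<p>Putting it together, a minimal pod sketch using that secret could look like this (the pod name and image are hypothetical placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-private-app                 # hypothetical name
spec:
  containers:
    - name: app
      image: ghcr.io/YOUR_GITHUB_USERNAME/your-private-image:latest   # placeholder image
  imagePullSecrets:
    - name: ghcr-login-secret          # the secret created above
</code></pre>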
<p>I am writing a terraform file in GCP to run a stateless application on a GKE, these are the steps I'm trying to get into terraform.</p> <ol> <li>Create a service account</li> <li>Grant roles to the service account</li> <li>Creating the cluster</li> <li>Configuring the deployment</li> </ol> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mllp-adapter-deployment spec: replicas: 1 selector: matchLabels: app: mllp-adapter template: metadata: labels: app: mllp-adapter spec: containers: - name: mllp-adapter imagePullPolicy: Always image: gcr.io/cloud-healthcare-containers/mllp-adapter ports: - containerPort: 2575 protocol: TCP name: &quot;port&quot; command: - &quot;/usr/mllp_adapter/mllp_adapter&quot; - &quot;--port=2575&quot; - &quot;--hl7_v2_project_id=PROJECT_ID&quot; - &quot;--hl7_v2_location_id=LOCATION&quot; - &quot;--hl7_v2_dataset_id=DATASET_ID&quot; - &quot;--hl7_v2_store_id=HL7V2_STORE_ID&quot; - &quot;--api_addr_prefix=https://healthcare.googleapis.com:443/v1&quot; - &quot;--logtostderr&quot; - &quot;--receiver_ip=0.0.0.0&quot; </code></pre> <ol start="5"> <li>Add internal load balancer to make it accesible outside of the cluster</li> </ol> <pre><code>apiVersion: v1 kind: Service metadata: name: mllp-adapter-service annotations: cloud.google.com/load-balancer-type: &quot;Internal&quot; spec: type: LoadBalancer ports: - name: port port: 2575 targetPort: 2575 protocol: TCP selector: app: mllp-adapter </code></pre> <p>I've found this <a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public" rel="nofollow noreferrer">example</a> in order to create an auto-pilot-public cluster, however I don't know where to specify the YAML file of my step 4</p> <p>Also I've found this other <a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/deploy_service" rel="nofollow noreferrer">blueprint</a> that deploy a service to the created cluster using the kubernetes provider, which I hope solves my step 5.</p> <p>I'm new at terraform and GCP architecture in general, I got all of this working following <a href="https://cloud.google.com/healthcare-api/docs/how-tos/mllp-adapter" rel="nofollow noreferrer">documentation</a> however I'm now trying to find a way to deploy this on a dev enviroment for testing purposes but that's outside of my sandbox and it's supposed to be deployed using terraform, I think I'm getting close to it.</p> <p>Can someone enlight me what's the next step or how to add those YAML configurations to the <code>.tf</code> examples I've found?</p> <p>Am I doing this right? :(</p>
<p>You can use this script and extend it further to deploy the YAML files with it: <a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public" rel="nofollow noreferrer">https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public</a></p> <p>The above TF script creates the GKE Autopilot cluster. For the YAML deployment you can use the Kubernetes provider and apply the files with it.</p> <p><a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment</a></p> <p>Full example: <a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public" rel="nofollow noreferrer">https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public</a></p> <p>main.tf</p> <pre><code>locals { cluster_type = &quot;simple-autopilot-public&quot; network_name = &quot;simple-autopilot-public-network&quot; subnet_name = &quot;simple-autopilot-public-subnet&quot; master_auth_subnetwork = &quot;simple-autopilot-public-master-subnet&quot; pods_range_name = &quot;ip-range-pods-simple-autopilot-public&quot; svc_range_name = &quot;ip-range-svc-simple-autopilot-public&quot; subnet_names = [for subnet_self_link in module.gcp-network.subnets_self_links : split(&quot;/&quot;, subnet_self_link)[length(split(&quot;/&quot;, subnet_self_link)) - 1]] } data &quot;google_client_config&quot; &quot;default&quot; {} provider &quot;kubernetes&quot; { host = &quot;https://${module.gke.endpoint}&quot; token = data.google_client_config.default.access_token cluster_ca_certificate = base64decode(module.gke.ca_certificate) } module &quot;gke&quot; { source = &quot;../../modules/beta-autopilot-public-cluster/&quot; project_id = var.project_id name = &quot;${local.cluster_type}-cluster&quot; regional = true region = var.region network = module.gcp-network.network_name subnetwork = local.subnet_names[index(module.gcp-network.subnets_names, local.subnet_name)] ip_range_pods = local.pods_range_name ip_range_services = local.svc_range_name release_channel = &quot;REGULAR&quot; enable_vertical_pod_autoscaling = true } </code></pre> <p>Another good example, which uses the YAML files as templates and applies them using Terraform: <a href="https://github.com/epiphone/gke-terraform-example/tree/master/terraform/dev" rel="nofollow noreferrer">https://github.com/epiphone/gke-terraform-example/tree/master/terraform/dev</a></p>
<p>I have pods running on different nodes. But when I execute the command <code>curl -s checkip.dyndns.org</code>, I am getting the same public IP for all. So, is the pod's public IP different from the public IP of the node it is running?</p> <p>Also, when I execute the command <code>kubectl get nodes -o wide</code>, I get <code>EXTERNAL-IP</code> as <code>&lt;none&gt;</code> and there is only <code>INTERNAL-IP</code>.</p> <p>I actually need the node's public IP address to access Kubernetes <strong>NodePort</strong> service.</p>
<p><code>...when I execute the command curl -s checkip.dyndns.org, I am getting the same public IP for all.</code></p> <p>That's your NAT public IP.</p> <p><code>I actually need the node's public IP address...</code></p> <p>The node needs to run in a subnet that allows direct (no NAT) Internet access and have public IP assigned. You can find this info on your cloud provider console; or in the node run <code>ip addr show</code> to see all IP(s) assigned to the node.</p>
<p>I'm trying to run k3s in rootless-mode. For now, i've done common steps from <a href="https://rootlesscontaine.rs/getting-started" rel="nofollow noreferrer">https://rootlesscontaine.rs/getting-started</a> and used unit-file from <a href="https://github.com/k3s-io/k3s/blob/master/k3s-rootless.service" rel="nofollow noreferrer">https://github.com/k3s-io/k3s/blob/master/k3s-rootless.service</a></p> <p>Systemd service k3s-rootless.service is active and run, but the pods are constantly in pending status.</p> <p>I get these messages:</p> <pre><code>jun 21 20:43:58 k3s-tspd.local k3s[1065]: E0621 20:43:58.647601 33 controller.go:116] loading OpenAPI spec for &quot;v1beta1.metrics.k8s.io&quot; failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable jun 21 20:43:58 k3s-tspd.local k3s[1065]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] jun 21 20:43:58 k3s-tspd.local k3s[1065]: I0621 20:43:58.647876 33 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. jun 21 20:43:59 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:43:59Z&quot; level=info msg=&quot;Waiting for control-plane node k3s-tspd.local startup: nodes \&quot;k3s-tspd.local\&quot; not found&quot; jun 21 20:44:00 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:44:00Z&quot; level=info msg=&quot;Waiting for control-plane node k3s-tspd.local startup: nodes \&quot;k3s-tspd.local\&quot; not found&quot; jun 21 20:44:00 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:44:00Z&quot; level=info msg=&quot;certificate CN=k3s-tspd.local signed by CN=k3s-server-ca@1655821591: notBefore=2022-06-21 14:26:31 +0000 UTC notAfter=2023-06-21 20:44:00 +0000 UTC&quot; jun 21 20:44:00 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:44:00Z&quot; level=info msg=&quot;certificate CN=system:node:k3s-tspd.local,O=system:nodes signed by CN=k3s-client-ca@1655821591: notBefore=2022-06-21 14:26:31 +0000 UTC notAfter=2023-06-21 20:44:00 +0000 UTC&quot; jun 21 20:44:00 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:44:00Z&quot; level=info msg=&quot;Waiting to retrieve agent configuration; server is not ready: \&quot;fuse-overlayfs\&quot; snapshotter cannot be enabled for \&quot;/home/scadauser/.rancher/k3s/agent/containerd\&quot;, try using \&quot;native\&quot;: fuse-overlayfs not functional, make sure running with kernel &gt;= 4.18: failed to mount fuse-overlayfs ({Type:fuse3.fuse-overlayfs Source:overlay Options:[lowerdir=/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower2:/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower1]}) on /home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/merged: mount helper [mount.fuse3 [overlay /home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/merged -o lowerdir=/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower2:/home/scadauser/.rancher/k3s/agent/containerd/fuseoverlayfs-check751772682/lower1 -t fuse-overlayfs]] failed: \&quot;\&quot;: exec: \&quot;mount.fuse3\&quot;: executable file not found in $PATH&quot; jun 21 20:44:01 k3s-tspd.local k3s[1065]: time=&quot;2022-06-21T20:44:01Z&quot; level=info msg=&quot;Waiting for control-plane node k3s-tspd.local startup: nodes \&quot;k3s-tspd.local\&quot; not found&quot; NAMESPACE NAME READY STATUS RESTARTS AGE kube-system helm-install-traefik-hn2nn 0/1 Pending 0 5h5m kube-system 
helm-install-traefik-crd-djr4j 0/1 Pending 0 5h5m kube-system local-path-provisioner-6c79684f77-w7fjb 0/1 Pending 0 5h5m kube-system metrics-server-7cd5fcb6b7-rlctn 0/1 Pending 0 5h5m kube-system coredns-d76bd69b-mjj4m 0/1 Pending 0 15m </code></pre> <p>What should i do next?</p>
<p>The solution was quite obvious. In the unit file k3s-rootless.service I used the wrong snapshotter. For containerd in k3s rootless mode it has to be <code>--snapshotter=fuse-overlayfs</code>. fuse-overlayfs also needs to be installed before running k3s in rootless mode.</p>
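<p>For reference, this is roughly how the relevant line of the unit file looks with that flag in place (a sketch only; the binary path and any additional flags depend on your installation):</p> <pre><code># excerpt from k3s-rootless.service (assumed install path)
ExecStart=/usr/local/bin/k3s server --rootless --snapshotter=fuse-overlayfs
</code></pre> <p>On most distributions fuse-overlayfs is available as a regular package (e.g. <code>apt install fuse-overlayfs</code> or <code>dnf install fuse-overlayfs</code>).</p>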
<p>I have a Kubernetes controller written using client-go <a href="https://pkg.go.dev/k8s.io/client-go/informers" rel="nofollow noreferrer">informers</a> package. It maintains a watch on all Pods in the cluster, there are about 15k of these objects, and their YAML representation takes around 600 MB to print (I assume their in-memory representation is not that different.)</p> <p>As a result, this (otherwise really small) controller watching Pods ends up with a huge memory footprint (north of 1 GiB). Even methods that you'd think offer a way of filtering, such as the one named like <a href="https://pkg.go.dev/k8s.io/client-go/informers#NewFilteredSharedInformerFactory" rel="nofollow noreferrer"><code>NewFilteredSharedInformerFactory</code></a> doesn't really give you a way to specify a predicate function that chooses which objects are stored in the in-memory <code>cache</code>.</p> <p>Instead, that method in client-go offers a <a href="https://pkg.go.dev/k8s.io/[email protected]/informers/internalinterfaces#TweakListOptionsFunc" rel="nofollow noreferrer"><code>TweakListOptionsFunc</code></a>. It helps you control <code>ListOptions</code> but my predicate unfortunately cannot be satisfied with a <code>labelSelector</code> or <code>fieldSelector</code>. I need to drop the objects when they arrive to the controller through a predicate function.</p> <blockquote> <p>Note: the predicate I have is something like <em>&quot;Pods that have an ownerReference by a DaemonSet&quot;</em> (which is <a href="https://stackoverflow.com/questions/59442065/how-to-use-kubernetes-fieldselector-to-query-ownerreferences">not possible with <code>fieldSelectors</code></a> –also another question of mine) and there's no <code>labelSelector</code> that can work in my scenario.</p> </blockquote> <p>How would I go about configuring an informer on Pods that only have DaemonSet owner references to reduce the memory footprint of my controller?</p>
<p>Here's an idea: you can get a list of all the <code>DaemonSets</code> in your cluster. Read the <code>spec.selector.matchLabels</code> field to retrieve the labels that the <code>DaemonSet</code> pods are bound to have. Use those labels as part of your <code>TweakListOptionsFunc</code> with logic like:</p> <pre><code>Pods with label1 OR label2 OR label3 ... </code></pre> <p>I know it's a bit of toil, but it seems to be a working approach. I believe there isn't a way to filter on arbitrary fields such as owner references in <code>client-go</code>.</p>
<p>Is the # in front of success and failure in a kubectl describe (refer to the picture) meant to represent something?</p> <p>All of the elements in each of those probes represent a config element for the probe, but success and failure are prefixed with a #. I initially thought maybe it marks a default value (one not specified by the user), but noticed that this is not the case, as the picture shows different values for the failure config across the various probes.</p> <p>Am I reading too much into the # or is it intentionally placed there for a reason?</p> <p>It does not make a difference to the functionality or affect us in any form or shape. Just curious, as it catches the eye!</p> <p>Noticed a <a href="https://stackoverflow.com/questions/69419096/kubectl-describe-pod-does-not-report-proper-url-of-liveness-probe/69419270#comment128849330_69419270">related question</a> but it doesn't focus on the &quot;#&quot;.</p> <p><a href="https://i.stack.imgur.com/mBzwA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBzwA.jpg" alt="enter image description here" /></a></p>
<p>It appears to be just be embedded into the print statement:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/b1e130fe83156783153538b6d79821c2fdaa85bb/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L1956" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/b1e130fe83156783153538b6d79821c2fdaa85bb/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L1956</a></p> <p>Here is the original PR:</p> <p><a href="https://github.com/kubernetes/kubernetes/pull/21341" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/21341</a></p> <p>Looked into messaging the original author, but he disables DMs on his social media.</p>
<p>I am a beginner to Kubernetes. I have created a secret file and referenced it in the deployment yaml file.</p> <p>app-secret.yaml</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: app-secret data: username: YWRtaW4= password: YWRtaW4= </code></pre> <p>deploy.yaml</p> <pre><code>env: - name: DEPLOY_ENV value: ${env} - name: NAMESPACE_NAME valueFrom: fieldRef: fieldPath : metadata.namespace - name: APP_USERNAME valueFrom: secretKeyRef: name: app-secret key: username - name: APP_PASSWORD valueFrom: secretKeyRef: name: app-secret key: password </code></pre> <p>While using the command <code>kubectl get secret pod-54rfxd -n dev-ns -o json</code>, it is printing the username and password in encoded format only. When I query the environment variables list using the command <code>kubectl exec pod-54rfxd -n dev-ns -- printenv</code>, it gives the result below.</p> <pre><code>APP_USERNAME=admin APP_PASSWORD=admin </code></pre> <p>Why is it not in encoded format in the environment variables? Could you please let me know the reason, and is it possible to have it in encoded format?</p>
<p>You could use the <code>stringData</code> format:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: app-secret stringData: username: &quot;YWRtaW4=&quot; password: &quot;YWRtaW4=&quot; </code></pre> <p>From K8s doc: <a href="https://i.stack.imgur.com/uasIb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uasIb.png" alt="warning about stringData secret type" /></a></p> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">K8s doc</a></p>
<p>When reading blog posts about WAFs and Kubernetes, it seems 90+ % of the posts are written by WAF-providers, while the remaining posts seem to be sceptical. So I would like to hear what your experiences are with WAFs, do they make sense, and if so can you recommend any good open-source WAFs? We are currently not allowed to used American cloud providers, as we work with &quot;person data&quot;, and the Schrems II judgement has indicated that unencrypted &quot;person data&quot; is not allowed on their platforms (even if on EU servers).</p> <h1>To my understanding WAF help with the following:</h1> <ol> <li>IP-whitelists/blacklists</li> <li>Rate Limits</li> <li>Scanning of HTTPS requests for SQLi and XSS</li> <li>Cookie Poisoning and session-jacking</li> <li>DDOS (requires a huge WAF cluster)</li> </ol> <h1>But I would also think that these problems can be handled elsewhere:</h1> <ol> <li>IP-whitelists/blacklists can be handled by the Loadbalancer or NetworkPolicies</li> <li>Rate Limits can be configured in the Ingress</li> <li>Handling of SQLi and XSS is done by input sanitization in the application</li> <li>Server-side sessions bound to IPs can prevent poisoning and jacking</li> <li>DDOS are hard to absorb, so I have no native solution here (but they are low risk?)</li> </ol> <p>Sure, I can see the advantage in centralizing security at the access gate to the network, but from what I have read WAFs are hard to maintain, they have tons af false positives and most companies mainly use them to be compliant with ISO-standards, and mainly in &quot;monitoring mode&quot;. Shouldn't it be secure enough to use SecurityPolicies, NetworkPolicies, Ingress Rules and Loadbalancer Rules rather than a WAF?</p>
<p>A WAF is not strictly <em>necessary</em> on Kubernetes β€”Β or on any other deployment platform. Honestly, even after consulting for dozens of companies, I've seldom encountered any site that used a WAF at all.</p> <p>You're right that you could duplicate the functions of a WAF using other technology. But you're basically reinventing the wheel by doing so, and the programmers you assign to do it are not as expert in those security tasks than the developers of the WAF are. At least they are probably doing it as one of many other tasks they are working on, so they can't devote full-time to implementation and testing of the WAF.</p> <p>There is also a valid argument that <a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)" rel="nofollow noreferrer">defense in depth</a> in computing is a good thing. Even if you have other security measures in place, they might fail. It's worth creating redundant layers of security defense, to account for that possibility.</p> <p>There's a tradeoff between implementing security (or any other feature) yourself versus paying someone else for their expert work. This is true for many areas of software development, not only a WAF.</p> <p>For example, it has become popular to use a web application framework. Is it possible to develop your own framework? Of course it is, and sometimes it's necessary if you want the code to have very specific behavior. But most of the time you can use some third-party framework off the shelf. It saves you a lot of time, and you get the instant benefit from years of development and testing done by someone else.</p>
<p>Please correct me if I'm wrong. Up to my understanding, advantage of containers is that those don't need to supply the whole OS and they also don't need to execute it (but instead they utilize underlying OS which is provided e.g. by Docker) which leads to the saving of the resources -especially the memory.</p> <p>My question is: do I need to pay attention when choosing base image for my containers that I plan to deploy in Azure k8s service (AKS) which currently supports Ubuntu 18.04? Should I choose only from those base images that explicitly support Ubuntu 18.04 or can I go for any base Linux image (e.g. Alpine) and all will still work fine once deployed in AKS? I guess that as far as my base image is compliant (same Linux kernel version) with the Linux kernel available in AKS then I should be fine. Is my assumption correct?</p>
<p>Short answer: you can pick any base image that's compatible with whatever is running inside your container.</p> <blockquote> <p>advantage of containers is that those don't need to supply the whole OS and they also don't need to execute it (but instead they utilize underlying OS which is provided e.g. by Docker)</p> </blockquote> <p>More precisely, containers do not run their own OS <em>kernel</em>. They do run their own copy of the part of the OS that runs in user space.</p> <p>Practically speaking, kernel version compatibility is rarely a concern.</p>
<p>I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept, that I should Dockerize it as one app instead of a front and back end...</p> <p>I have the app functioning as I need without any issues, my current directory structure is.</p> <pre class="lang-bash prettyprint-override"><code>. β”œβ”€β”€ README.md β”œβ”€β”€ backend β”‚ β”œβ”€β”€ Dockerfile β”‚ β”œβ”€β”€ Pipfile β”‚ β”œβ”€β”€ Pipfile.lock β”‚ └── main.py └── frontend β”œβ”€β”€ Dockerfile β”œβ”€β”€ index.html β”œβ”€β”€ package-lock.json β”œβ”€β”€ package.json β”œβ”€β”€ postcss.config.js β”œβ”€β”€ src β”‚ β”œβ”€β”€ App.jsx β”‚ β”œβ”€β”€ favicon.svg β”‚ β”œβ”€β”€ index.css β”‚ β”œβ”€β”€ logo.svg β”‚ └── main.jsx β”œβ”€β”€ tailwind.config.js └── vite.config.js </code></pre> <p>I am a bit of a Docker noob and have only ever built an image for projects that don't arent split into a front and back end.</p> <p>I have a <code>.env</code> file in each, only simple things like URLs or hosts.</p> <p>I currently run the app, with the front end and backend separately as an example.</p> <pre class="lang-bash prettyprint-override"><code>&gt; ./frontend &gt; npm run dev </code></pre> <pre class="lang-bash prettyprint-override"><code>&gt; ./backend &gt; uvicorn .... </code></pre> <p>Can anyone give me tips /advice on how I can dockerize this as one?</p>
<p>As a good practice, one docker image should contain one process. Therefore you should dockerize them separatly (have one <code>Dockerfile</code> per app).</p> <p>Then, you can add a <code>docker-compose.yml</code> file at the root of your project in order to link them together, it could look like that:</p> <pre class="lang-yaml prettyprint-override"><code>version: '3.3' services: app: build: context: ./frontend/ dockerfile: ./Dockerfile ports: - &quot;127.0.0.1:80:80&quot; backend: env_file: - backend/.env build: context: ./backend/ dockerfile: ./Dockerfile ports: - &quot;127.0.0.1:8000:80&quot; </code></pre> <p>The backend would be running on <code>http://localhost:8000</code> and the frontend on <code>http://localhost:80</code></p> <p>In order to start the docker-compose you can just type in your shell:</p> <pre class="lang-bash prettyprint-override"><code>$&gt; docker-compose up </code></pre> <p>This implies that you already have your Dockerfile for both apps. You can find many example online of different implementations of Dockerfile for the different technologies. For example :</p> <ul> <li>For ReactJS you can configure it like <a href="https://mherman.org/blog/dockerizing-a-react-app/" rel="nofollow noreferrer">this</a></li> <li>For FastAPI Like <a href="https://fastapi.tiangolo.com/deployment/docker/#dockerfile" rel="nofollow noreferrer">that</a></li> </ul>
<p>A few months back, I deployed the Elastic-Search (version - 8.0.1) on Kubernetes (GCP) as a service as External load balancer using this <a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">guide</a>.</p> <p>Now, I am unable to perform any read or write operation on ElasticSearch. I checked the logs, in which I found that memory of the node is almost full.</p> <p><strong>Here are some logs which support this analysis:</strong></p> <blockquote> <p>flood stage disk watermark [95%] exceeded on [hulk-es-default-0][/usr/share/elasticsearch/data] free: 18.5mb[1.8%], all indices on this node will be marked read-only</p> <p>Cluster health status changed from [YELLOW] to [RED] (reason: [shards failed [[1][0]]]).</p> <p>This node is unhealthy: health check failed on [/usr/share/elasticsearch/data].`</p> </blockquote> <p><strong>Here are the errors that are coming when performing any read/write operation:</strong></p> <blockquote> <p>elasticsearch.exceptions.TransportError: TransportError(503, 'master_not_discovered_exception', None)</p> <p>elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPSConnectionPool(host='...', port=****): Read timed out. (read timeout=30))</p> </blockquote> <p>I increased the capacity of my elasticsearch persistent volume claim(PVC) but was unable to create the pod with that new volume.</p> <p>I followed the following steps -</p> <ul> <li><p>Set the allowVolumeExpansion field to true in their StorageClass object(s)</p> </li> <li><p>Scaled ElasticSearch Operator Deployment to 0 Replicas.</p> </li> <li><p>Deleted the statefulset Object without deleting the pods using</p> <p><code>kubectl delete sts &lt;statefulset-name&gt; --cascade=orphan</code></p> </li> </ul> <p>Before deleting I saved the yaml of the statefulset using</p> <pre><code>kubectl get sts &lt;statefulset-name&gt; -o yaml </code></pre> <ul> <li>Increased the storage in capacity in the yaml file of PVC.</li> <li>Recreated the StatefulSet with the new storage request by the yaml file I saved using</li> </ul> <p><code>kubectl apply -f file-name.yml</code></p> <ul> <li>Scaled back the operator deployment to 1</li> </ul> <p>But, when I recreated the stateful set, the <code>CrashLoopBackOff</code> error is being shown every-time.</p> <p>Following are some logs -</p> <ul> <li>readiness probe failed</li> <li>Likely root cause: java.io.IOException: No space left on device</li> <li>using data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb)]], net usable_space [0b], net total_space [975.8mb], types [ext4]</li> </ul> <p>The persistent disk's volume that the ES pod is accessing is increased but still the pod is unable to start. Can anyone guide me here, what is the problem here ?</p>
<p>This appears to be an issue with disk space on the data path (/usr/share/elasticsearch/data). Verify that a persistent volume with the required size is available and is bound to the Elasticsearch pod through the PVC object.</p>
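<p>Since the cluster was deployed with the ECK operator, the data volume size is normally driven by the <code>volumeClaimTemplates</code> in the <code>Elasticsearch</code> resource rather than by editing the StatefulSet directly. A rough sketch is below; the resource name, nodeSet name, count and storage size are assumptions inferred from the pod name in your logs, so adapt them to your manifest:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: hulk                           # assumed from the pod name hulk-es-default-0
spec:
  version: 8.0.1
  nodeSets:
    - name: default
      count: 1
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data   # default ECK data volume name
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi          # pick a size comfortably above your data set
</code></pre> <p>Expanding an existing claim still requires the StorageClass to have <code>allowVolumeExpansion: true</code>, which you have already set.</p>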
<p>I can't get my Ingress to use my TLS cert. I have created a self signed TLS cert using openssl for hostname myapp.com and added myapp.com to /etc/hosts.</p> <p><code>openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365</code></p> <p>I have verified the Ingress is using the TLS cert</p> <pre><code>$ kubectl describe ingress myapp-ingress Name: myapp-ingress Labels: app=myapp name=myapp-ingress Namespace: default Address: $PUBLIC_IP Ingress Class: nginx-ingress-class Default backend: &lt;default&gt; TLS: nginx-ingress-tls terminates myapp.com Rules: Host Path Backends ---- ---- -------- myapp.com / myapp-service:8080 (10.244.0.14:80) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 19m (x11 over 21h) nginx-ingress-controller Scheduled for sync </code></pre> <p>however, when I curl myapp.com, I get an error message informing me no subject name matches target host 'myapp.com'.</p> <pre><code>$ curl -I https://myapp.com curl: (60) SSL: no alternative certificate subject name matches target host name 'myapp.com' More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. </code></pre> <p>I made sure to give openssl myapp.com as the FQDN. I'm not sure why it isn't working. Any help is appreciated.</p> <p>Edit:</p> <p>I'm looking at the logs of the ingress controller. I see the following error messages</p> <pre><code>$ kubectl logs -n nginx-ingress ingress-nginx-controller-7c45d9ff9f-2hcd7 | grep cert I0618 20:43:32.096653 7 main.go:104] &quot;SSL fake certificate created&quot; file=&quot;/etc/ingress-controller/ssl/default-fake-certificate.pem&quot; I0618 20:43:32.116162 7 ssl.go:531] &quot;loading tls certificate&quot; path=&quot;/usr/local/certificates/cert&quot; key=&quot;/usr/local/certificates/key&quot; W0618 20:43:33.246716 7 backend_ssl.go:45] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: failed to parse private key I0618 20:43:33.340807 7 nginx.go:319] &quot;Starting validation webhook&quot; address=&quot;:8443&quot; certPath=&quot;/usr/local/certificates/cert&quot; keyPath=&quot;/usr/local/certificates/key&quot; W0618 20:43:33.342061 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate W0618 20:43:37.149824 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate W0618 20:43:41.152972 7 controller.go:1334] Error getting SSL certificate &quot;default/nginx-ingress-tls&quot;: local SSL certificate default/nginx-ingress-tls was not found. Using default certificate </code></pre>
<p>When you are using a cert that is not signed by a certificate trusted in the installed CA certificate store, you will get the error message:</p> <pre><code>failed to verify the legitimacy of the server and therefore could not establish a secure connection to it </code></pre> <p>As a workaround, you can disable the strict certificate checking with the following command:</p> <pre><code>curl -k https://myapp.com </code></pre> <p>You can find more details about it in this <a href="https://curl.haxx.se/docs/sslcerts.html" rel="nofollow noreferrer">link</a>.</p>
<p>In Detail, the .JS file contains MongoDB commands like below,</p> <pre class="lang-js prettyprint-override"><code>db = db.getSiblingDB('records') db.createCollection(&quot;collection_new&quot;); db.collection_new.insertMany([ { capped : true, size : 5242880, max : 5000 }, { capped : false, size : 5242880, max : 8000 }, { capped : false, size : 5242880, max : 5000 }, { capped : true, size : 5242880, max : 8000 } ]) </code></pre> <p>Now, instead of converting this .JS file into a standalone .yaml configmap file, I want to create a configmap which refers to the .JS file. Also, I do not want to add the raw data inside the configmap because the above commands are just a small part of the JS file. The main file contains approx. 400-500 lines.</p>
<p>You cannot make reference to a file from a configmap. As suggested in the comments, you can use an <a href="https://docs.helm.sh/docs/chart_template_guide/accessing_files/" rel="nofollow noreferrer">Helm template which make reference to a file</a>:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: mongo-commands.js: |- {{ .Files.Get &quot;js/mongo_commands.js&quot; }} </code></pre> <p>Note that:</p> <ul> <li>The directory <code>js</code> is at the root of the Helm chart.</li> <li>Charts must be smaller than 1M because of the storage limitations of Kubernetes objects.</li> <li>Files in templates/ cannot be accessed.</li> <li>Files excluded using <code>.helmignore</code> cannot be accessed.</li> </ul>
<p>I'm probably missing something in my understanding of networking and gRPC. I want to start with an example to explain how I understand the networking.</p> <p>I have a deployment with 3 replicas named app in default namespace they have pod IP:</p> <p><code>10.3.0.1</code>, <code>10.3.0.2</code>, <code>10.3.0.3</code></p> <p>I have a <code>ClusterIP</code> service for the deployment called app-service, it has an IP of:</p> <p><code>10.24.0.0</code></p> <p>The kube-dns will have a record that maps the <code>app-service.default.svc.cluster.local</code> -&gt; <code>10.24.0.0</code>. The kube-proxy on every node will see the configuration and update the netfilter with mapping <code>10.24.0.0</code> -&gt; <code>10.3.0.1</code>, <code>10.3.0.2</code>, <code>10.3.0.3</code>.</p> <p>Now in my cluster I have another client pod that makes a gRPC call to <code>app-service.default.svc.cluster.local</code>.</p> <p>What I expect to happen is <code>app-service.default.svc.cluster.local</code> will resolve to a single IP <code>10.24.0.0</code>, and gRPC will create a single sub channel and try to establish a long-live connection.</p> <p>This call will get out of the pod and to the node and go pass the netfilter at which point <code>10.24.0.0</code> becomes <code>10.3.0.1</code> and finally arrives on the first pod.</p> <p>Now there is a second call from the client, it goes through the netfilter at which point <code>10.24.0.0</code> becomes <code>10.3.0.2</code>, this call ends up on another pod which had no idea they had a connection established?</p> <p>Also I see these <a href="https://techdozo.dev/grpc-load-balancing-on-kubernetes-using-headless-service/" rel="nofollow noreferrer">blogs</a> that mention gRPC will create sticky session to a single pod IP, but I thought the pod IP will not be resolved inside the application/grpc client but in the netfilter.</p>
<p>I think you could find some answers from the <a href="https://github.com/grpc/grpc/blob/master/doc/load-balancing.md" rel="nofollow noreferrer">Load Balancing in gRPC</a>.</p> <p>Load-balancing policies fit into the gRPC client workflow in between name resolution and the connection to the server.</p> <p><a href="https://i.stack.imgur.com/mLX6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mLX6s.png" alt="enter image description here" /></a></p> <ul> <li>On startup, the gRPC client issues a <a href="https://github.com/grpc/grpc/blob/master/doc/naming.md" rel="nofollow noreferrer">name resolution</a> request for the server name. The name will resolve to <strong>a list of IP addresses</strong>, a service config that indicates which client-side load-balancing policy to use (e.g., <code>round_robin</code> or <code>grpclb</code>) and provides a configuration for that policy and a set of attributes (channel args in C-core).</li> <li>The client instantiates the load balancing policy and passes its configuration from the service config, the list of IP addresses, and the attributes.</li> <li>The load balancing policy creates a set of subchannels for the IP addresses of the servers (which might be different from the IP addresses returned by the resolver; see below). It also watches the subchannels' connectivity states and decides when each subchannel should attempt to connect.</li> <li>For each RPC sent, the load balancing policy decides which subchannel (i.e., which server) the RPC should be sent to.</li> </ul> <p>When the IP of pods is changed due to pods restart, now gRPC will try to re-resolve whenever a subconn goes into transient failure. Here is one related <a href="https://github.com/grpc/grpc-go/issues/1663" rel="nofollow noreferrer">discussion</a> and <a href="https://github.com/grpc/grpc-go/blob/master/clientconn.go#L1258" rel="nofollow noreferrer">code</a></p>
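<p>Note that with a regular <code>ClusterIP</code> service, the name resolution step above returns only the single service IP, so the gRPC client keeps one subchannel and kube-proxy/netfilter picks a backend per TCP connection rather than per RPC. If you want the client-side policies described above (e.g. <code>round_robin</code>) to see every pod IP, the usual approach is a headless service, sketched below (the names mirror your example and the port is an assumption):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs
  selector:
    app: app               # assumed pod label
  ports:
    - port: 50051          # assumed gRPC port
      targetPort: 50051
</code></pre>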
<p>I'm using helm and given a yaml object I want to flatten it while applying some recursive formatting.</p> <p>Given this:</p> <pre class="lang-yaml prettyprint-override"><code>some_map: with: different indentation: levels: and nested: sub: maps: &quot;42&quot; and_more: maps: 42 </code></pre> <p>I want to (for example) get this:</p> <pre class="lang-ini prettyprint-override"><code>some_map.with=&quot;different&quot; some_map.indentation.levels=&quot;and&quot; some_map.nested.sub.maps=&quot;42&quot; some_map.nested.and_more.maps=42 </code></pre> <p>I haven't read anything about recursive looping in the helm docs, keep in mind that the format of the recursion in the example ( &quot;%v.%v&quot; if !root else &quot;%v=%v&quot; ) may vary.</p>
<p>Yes, it seems that <code>{{ define</code> supports recursive use of <code>{{ include</code>, although unknown to what depth</p> <p>The PoC I whipped up to see if it could work</p> <pre><code>{{- define &quot;bob&quot; -}} {{- $it := . -}} {{- $knd := kindOf . -}} {{- if eq $knd &quot;map&quot; }} {{- range (keys .) }} {{- $k := . }} {{- $v := get $it . }} {{- $vk := kindOf $v }} {{- if eq $vk &quot;map&quot; }} {{- printf &quot;%s.&quot; $k }} {{- include &quot;bob&quot; $v }} {{- else }} {{- printf &quot;%s=%s\n&quot; $k (toJson $v) }} {{- end }} {{- end }} {{- else }} {{ toJson . }}#k({{ $knd }}) {{- end }} {{- end -}} </code></pre> <p>invoked as</p> <pre class="lang-yaml prettyprint-override"><code>{{ $fred := dict &quot;alpha&quot; (dict &quot;a0&quot; &quot;a0ch0&quot;) &quot;beta&quot; (dict &quot;beta0&quot; (dict &quot;beta00&quot; 1234)) &quot;charlie&quot; (list &quot;ch0&quot; &quot;ch1&quot; &quot;ch2&quot;) }} data: theData: | {{ toJson $fred | indent 4 }} toml: | {{ include &quot;bob&quot; $fred | indent 4 }} </code></pre> <p>produced</p> <pre class="lang-yaml prettyprint-override"><code>data: theData: | {&quot;alpha&quot;:{&quot;a0&quot;:&quot;a0ch0&quot;},&quot;beta&quot;:{&quot;beta0&quot;:{&quot;beta00&quot;:1234}},&quot;charlie&quot;:[&quot;ch0&quot;,&quot;ch1&quot;,&quot;ch2&quot;]} toml: | alpha.a0=&quot;a0ch0&quot; beta.beta0.beta00=1234 charlie=[&quot;ch0&quot;,&quot;ch1&quot;,&quot;ch2&quot;] </code></pre> <p>Also, your cited example seems to make reference to the outermost variable name, which I don't think helm knows about, so you'd need an artificial wrapper <code>dict</code> in order to get that behavior: <code>{{ include &quot;toToml&quot; (dict &quot;some_map&quot; .Values.some_map) }}</code></p>
<p>I have an application which does this:</p> <pre><code>subprocess.Popen([&quot;python3&quot;, &quot;-u&quot;, &quot;sub-program-1.py&quot;]) </code></pre> <p>So Python program can start multiple long-lived processes on demand.</p> <p>If I stop main Python program and start again, it knows that <code>sub-program-1.py</code> should be started, because there is a record about status in DB that tells it.</p> <p>So simply it works fine when there is only one replica of Docker container, pod, virtual machine or whatever you call it.</p> <hr /> <p>If I scale an app to 3 replicas, <code>subprocess</code> fails to achieve it.</p> <ol> <li><p>Each Docker container starts <code>sub-program-1.py</code>, while I want to start on one container</p> </li> <li><p>If one container fails, an app should be smart enough to failover <code>sub-program-1.py</code> to another container</p> </li> <li><p>An app should be smart enough to balance subprocesses across containers, for example: <code>sub-program-1.py</code> - <code>sub-program-9.py</code> ideally should be spread by putting 3 processes per container so in total there are 9 subprocesses running - I don't need this to be precise, most simplest solution is fine to balance it</p> </li> </ol> <hr /> <p>I've tried to explore RQ (Redis Queue) and similar solutions, but they are heavily focused on tasks, ideally short-living. In my case, they are long-lived processes. E.g. <code>sub-program-1.py</code> can live for months and years.</p> <hr /> <p>The scheme is this:</p> <p>Main Python app -&gt; <code>sub-program-1.py</code>, <code>sub-program-2.py</code>, etc.</p> <p>Any simple solution exists here without overhead?</p> <p>Is writing statuses of each sub program to DB an option (also detecting when sub process fails to failover it to another container based on statuses in DB) or would you incorporate additional tool to solve <code>subprocess</code> scaling issue?</p> <hr /> <p>Another option is to start <code>sub-program-1.py</code> on all containers and scale operations inside of it. <code>sub-program-1.py</code> basically calls some third-party APIs and does some operations based on user preference. So scaling those API calls based on each user preference is complicated, it has multiple threads in background when calling APIs simultaneously. In short, <code>sub-program-1.py</code> is tied to user1, <code>sub-program-2.py</code> is tied to user2, etc. So is it worth to make it complex by choosing this option?</p> <hr /> <p>Update</p> <p>If <code>subprocess</code> is used only in standalone apps and nobody tried to implement this mechanism at findable scale on Github, libraries, etc.</p> <p>How would you solve this issue in Python?</p> <p>I think about these entries in DB:</p> <pre><code>ProcessName ProcessHostname LastHeartBeat Enabled process1 host-0 2022-07-10 15:00 true process2 null null true process3 host-1 2022-07-10 14:50 true </code></pre> <p>So to solve three points that I wrote above:</p> <ol> <li><p>Each container tries to pick up process that is not already picked up (where is <code>null</code> or old date of <code>LastHeartBeat</code>). When first container picked up a process, it writes date to <code>LastHeartBeat</code> and then uses <code>subprocess</code> to start a process. Other containers cannot pick up if <code>LastHeartBeat</code> is constantly updated.</p> </li> <li><p>If process fails, it doesn't write <code>LastHeartBeat</code> so other container picks up the process as described in point 1. 
If failed container cannot reach DB, it stops the operations and restarts (if it's able to even do <code>exit</code>). If it cannot reach DB, it doesn't do anything. That is to not run same process twice.</p> </li> <li><p>To balance processes across containers, the container which is running less processes can pick up a new one. That info is on DB table to make a decision.</p> </li> </ol> <p>Would you solve differently? Any best practices you recommend?</p> <p>Thanks</p>
<p>TL;DR - This is a classical monolithic application scaling problem you can easily solve this by redesigning your application to a microservice architecture, since your application functionality is inherently decoupled between components. Once you've done that, it all really boils down to you deploying your application in a natively microservice-friendly fashion and all of your design requirements will be met.</p> <blockquote> <p>Edit: You're currently trying to &quot;scale-up&quot; your application in a micro-service system (multiple processes/containers in 1 pod), which defeats the whole purpose of using it. You will have to stick with 1 subprocess &lt;===&gt; 1 pod for the design to really work. Otherwise, you are only introducing immense complications and this is against many design principles of micro services. More details below, if you're interested.</p> </blockquote> <p>Let me first summarise all that you've said so we can coherently discuss the design.</p> <h2>Application Requirements</h2> <p>As I understand the requirements of your application from all the information you've provided, the following is true:</p> <ol> <li>You want your <code>processes</code> to be <code>long-lived</code>.</li> <li>You have a parent <code>application</code> that spawns these <code>long-lived</code> processes.</li> <li>These <code>processes</code> need to be started on-demand. (dynamically scaled - and scaled-out; see(7) below for 1 <code>process</code> per <code>container</code>)</li> <li>If there is no load, your <code>application</code> should spawn just 1 <code>process</code> with <code>sub-process.py</code>.</li> <li>If a <code>container</code> fails, you would like your <code>application</code> to be able to intelligently switch traffic to a healthy <code>container</code> that is also running your long-lived <code>process</code>.</li> <li>The <code>application</code> should be able to shared load across all the <code>processes</code>/<code>containers</code> currently running.</li> <li>A <code>process</code> is tied to user requests since it makes calls to 3rd party <code>API</code> systems in order to perform its function. So, it is favourable to have just one <code>process</code> inside a container for simplicity of design.</li> </ol> <h2>Limitations of the Current Design(s)</h2> <h3>The Current Application Design</h3> <p>Currently, you have the application setup in a way that:</p> <ol> <li>You have one <code>application</code> process that spawns multiple identical <code>sub-process.py</code> processes <em><strong>through the application process</strong></em>.</li> <li>The <code>application</code> faces the user, and receives requests, and spawns <code>sub-process.py</code> processes as needed and scales well inside one compute unit (container, VM etc.)</li> <li>These <code>processes</code> then perform their actions, and return the response to the <code>application</code> which return it to the user.</li> </ol> <p>Now let's discuss your current approach(es) that you've mentioned above and see what are the challenges that you've described.</p> <h3>Scaling Design 1 - Simply Scaling Docker Containers</h3> <p>This means simply creating more containers for your applications. And we know that it doesn't satisfy the requirements because scaling the application to multiple replicas starts all the <code>processes</code> and makes them <code>active</code>. 
This is not what you want, so there is no relationship between these replicas in different containers (since the sub-processes are tied to <code>application</code> running in each container, not the overall system). This is obviously because <code>application</code>'s in different containers are unaware of each-other (and more importantly, the sub-processes each are spawning).</p> <p>So, this fails to meet our requirement (3), (4), (5).</p> <h3>Scaling Design 2 - Use a DB as State Storage</h3> <p>To try and meet (3), (4) and (5) we introduced a <code>database</code> that is central to our overall system and it can keep state data of different <code>processes</code> in our system and how certain <code>containers</code> can be &quot;bound&quot; to processes and manage them. However, this was also known to have certain limitations as you pointed out (plus my own thoughts):</p> <ol> <li>Such solutions are good for <code>short-lived</code> processes.</li> <li>We have to introduce a database that is high speed and be able to maintain states at a very quick pace with a possibility of <code>race conditions</code>.</li> <li>We will have to write a lot of house-keeping code on top of our containers for orchestration that will use this <code>database</code> and some known rules (that you defined as last 3 points) to achieve our goal. Especially an orchestration component that will know when to start containers on-demand. This is highly complicated.</li> <li>Not only do we have to spawn new processes, we also want to be able to handle failures and automated traffic switching. This will require us to implement a &quot;networking&quot; component that will communicate with our orchestrator and detect failed containers and re-route incoming traffic to healthy ones and restarts the failed containers.</li> <li>We will also require this networking service to be able to distribute incoming traffic load across all the containers currently in our system.</li> </ol> <p>This fails to meet our requirements (1) and (7) and most importantly THIS IS REINVENTING THE WHEEL!</p> <p>LET'S TALK ABOUT KUBERNETES AND WHY IT IS <em>EXACTLY</em> WHAT YOU NEED.</p> <h2>Proposed Solution</h2> <p>Now let's see how this entire problem can be re-engineered with minimum effort and we can satisfy all of our requirements.</p> <h3>The Proposed Application Design</h3> <p>I propose that you can very simply detach your <code>application</code> from your <code>processes</code>. This is easy enough to do, since your application is accepting user requests and forwarding them to identical pool of workers which are performing their operation by making 3rd party API calls. Inherently, this maps perfectly on micro-services.</p> <pre><code>user1 =====&gt; |===&gt; worker1 =&gt; someAPIs user2 =====&gt; App |===&gt; worker2 =&gt; someAPIs user2 =====&gt; |===&gt; worker3 =&gt; someAPIs ... </code></pre> <p>We can intelligently leverage this. Note that not only are the elements decoupled, but all the workers are performing an identical set of functions (which can result in different output based on use inputs). 
Essentially you will replace</p> <pre class="lang-py prettyprint-override"><code>subprocess.Popen([&quot;python3&quot;, &quot;-u&quot;, &quot;sub-program-1.py&quot;]) </code></pre> <p>with an API call to a service that can provide a worker for you, on demand:</p> <pre><code>output = some_api(my_worker_service, user_input) </code></pre> <p>This means, your design of the application has been preserved and you've simply placed your processes on different systems. So, the application now looks something like this:</p> <pre><code>user1 =====&gt; |===&gt; worker1 =&gt; someAPIs user2 =====&gt; App ==&gt;worker_service |===&gt; worker2 =&gt; someAPIs user2 =====&gt; |===&gt; worker3 =&gt; someAPIs ... </code></pre> <p>With this essential component of application redesign in place, let's revisit our issues from previous designs and see if this helps us and how Kubernetes comes into the picture.</p> <h3>The Proposed Scaling Solution - Enter Kubernetes!</h3> <p>You were absolutely on the right path when you described usage of a database to maintain the <code>state</code> of our entire system and the orchestration logic being able to retrieve status of current <code>containers</code> in our system and make certain decisions. That's <em>exactly</em> how Kubernetes works!</p> <p>Let's see how Kubernetes solves our problems now</p> <ol> <li><code>processes</code> in Kubernetes can be long lived. So, requirement (1) is met and limitation (1) of our database design is also mitigated.</li> <li>We introduced a <code>service</code> that will manage all of the worker <code>processes</code> for us. So, requirement (2),satisfied. It will also be able to scale the <code>processes</code> on-demand, so requirement (3) is satisfied. It will also keep a minimum <code>process</code> count of <code>1</code> so we don't spawn unnecessary processes, so requirement (4) is satisfied. It will be intelligent enough to forward traffic only to <code>processes</code> at are <code>healthy</code>. So, requirement (5) is met. It will also load balance traffic across all the <code>processes</code> it governs, so requirement (6) is met. This service will also mitigate limitation (4) and (5) of our second design.</li> <li>You will be allowed to size your <code>processes</code> as needed, to make sure that you only use the resources needed. So, requirement (7) is met.</li> <li>It uses a central database called <code>etcd</code>, which stores the state of your entire cluster and keeps it updated at all times and accommodates for race conditions as well (multiple components updating the same information - it simply lets the first one to arrive win and fails the other one, forcing it to retry). We've solved problem (2) from our second design.</li> <li>It comes with logic to orchestrate our <code>processes</code> out of the box so there is no need to write any code. This mitigates limitation (3) of our second design.</li> </ol> <p>So, not only were we able to meet all of our requirements, we were also able to implement the solution we were trying to achieve, without writing any additional code for the orchestration! (You will just have to restructure your program a little and introduce APIs).</p> <h2>How to Implement This</h2> <p>Just note that in the k8s literature the smallest unit of computation is referred to as <code>pod</code> which performs a single function. Architecturally, this is identical to your description of <code>sub-process</code>. 
So, whenever I talk about 'Pods' I simply refer to your <code>sub-processes</code>.</p> <p>You will take (roughly) the following steps to implement the proposed design.</p> <ol> <li>Rewrite some part of your application to decouple <code>application</code> from <code>sub-process.py</code>, introducing an API between them.</li> <li>Package <code>sub-process.py</code> into a container image.</li> <li>Deploy a small Kubernetes cluster.</li> <li>Create a <code>Deployment</code> using your <code>sub-process.py</code> image and set the min repica count to 1 and max to any number you want, say 10, for auto-scaling.</li> <li>Expose this <code>Deployment</code> by creating a <code>Service</code>. This is the &quot;worker service&quot; I talked about, and your <code>application</code> will &quot;submit&quot; requests to this service. And it will not have to worry about anything other than simply making a request to an API endpoint, everything else is handled by k8s.</li> <li>Configure your application to make API calls to this <code>Service</code>.</li> <li>Test your application and watch it scale up and down!</li> </ol> <p>Now, the way this will function is as follows:</p> <ol> <li>Client makes a request to your <code>application</code>.</li> <li><code>application</code> forwards it to the <code>Service</code> API endpoint.</li> <li>The <code>Service</code> receives the API request and forwards it to one of the <code>Pods</code> that are running your <code>sub-process.py</code> functionality. If multiple requests are received the <code>Service</code> will balance the requests across all the pods that are available. If a pod fails, it will be take &quot;away&quot; from the service by K8s so requests don't fail.</li> <li>The pod will perform your functionality and provide the output.</li> <li>If all the pods in the <code>Service</code> are reaching saturation, the <code>Deployment</code> auto-scaling will trigger and create more pods for the <code>Service</code> and load sharing will resume again (scale-out). If the resource utilisation then reduces, the <code>Deployment</code> will remove certain pods that are not being used anymore and you will be back to 1 pod (scale-in).</li> </ol> <blockquote> <p>If you want, you can put your <code>frontend</code> application into a <code>Deployment</code> and <code>Service</code> as well which will allow you to have an even friendlier cloud-native micro-service architecture. The user will interact with an API of your <code>front-end</code> which will invoke the <code>Service</code> that is managing your <code>sub-process.py</code> workers which will return results.</p> </blockquote> <p>I hope this helps you and you can appreciate how clearly the micro-service architecture fits into the design pattern you have, and how you can very simply adapt to it and scale your application as much as you want! Not only that, expressing your design this way will also allow you to redesign/manage/test different versions by simply managing a set of YAML manifests (text files) that you can use with Version Control as well!</p>
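<p>To make the last section a bit more tangible, here is a minimal, hedged sketch of what the worker <code>Deployment</code>, <code>Service</code> and autoscaler could look like; the image name, port and thresholds are all assumptions and would need to match your actual containerised <code>sub-process.py</code>:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: myrepo/sub-process:latest   # hypothetical image built from sub-program-1.py
          ports:
            - containerPort: 8000            # assumes the worker exposes an HTTP API on 8000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: worker-service
spec:
  selector:
    app: worker
  ports:
    - port: 80
      targetPort: 8000
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
</code></pre> <p>Your <code>application</code> would then simply call <code>http://worker-service</code> (cluster-internal DNS) instead of spawning subprocesses itself.</p>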
<p><a href="https://i.stack.imgur.com/D5fPZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/D5fPZ.png" alt="enter image description here" /></a></p> <p>I have been trying to setup minikube but the command is stuck at pulling base image since forever</p>
<p>It's not stuck; it is just not showing the output of the downloading progress. It's an 800 MB file, it takes time to download.</p> <p>See <a href="https://github.com/kubernetes/minikube/issues/7012" rel="noreferrer">https://github.com/kubernetes/minikube/issues/7012</a></p>
<p>I am following the tutorial here about kserve <a href="https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md" rel="nofollow noreferrer">https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md</a></p> <p>Is this my docker&amp;k8s issue? I have spent hours trying to debug but to no avail.</p> <p>I am getting the following error:</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m20s default-scheduler Successfully assigned modelmesh-serving/modelmesh-serving-mlserver-0.x-77cc8fd548-xdgvr to minikube Normal Pulling 4m18s kubelet Pulling image &quot;kserve/modelmesh:v0.9.0-rc0&quot; Normal Pulled 3m18s kubelet Successfully pulled image &quot;kserve/modelmesh:v0.9.0-rc0&quot; in 59.419620166s Normal Created 3m18s kubelet Created container mm Normal Started 3m17s kubelet Started container mm Normal Pulling 3m17s kubelet Pulling image &quot;seldonio/mlserver:0.5.2&quot; Warning Failed 68s kubelet Failed to pull image &quot;seldonio/mlserver:0.5.2&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 68s kubelet Error: ErrImagePull Normal Pulling 68s kubelet Pulling image &quot;kserve/modelmesh-runtime-adapter:v0.9.0-rc0&quot; </code></pre>
<p>The image you are trying to pull exists and is available <a href="https://hub.docker.com/layers/mlserver/seldonio/mlserver/0.5.2/images/sha256-8c2244e2b51102d3bd7f814dc11775f304c8be5dca42ccb6a57b545d88bed1f9?context=explore" rel="nofollow noreferrer">here</a>. However, from what I can see it's very big (<code>2.74 GB</code>), so most likely you hit a timeout while pulling the image.</p> <p>Depending on your Kubernetes cluster and use case you can do the following:</p> <ul> <li>Try a few times; it will pull the image at some point</li> <li>Increase the <code>--runtime-request-timeout</code> in your <code>kubelet</code></li> <li>Pre-pull the image with <code>docker pull seldonio/mlserver:0.5.2</code> (see the sketch below)</li> </ul>
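<p>Since the question runs on minikube, pulling the image on the host is not always enough: the image has to end up inside the minikube node. A hedged sketch of pre-pulling it (assuming the docker runtime; <code>minikube image load</code> copies a host-pulled image into the node):</p> <pre><code># pull on the host, then copy the image into the minikube node
docker pull seldonio/mlserver:0.5.2
minikube image load seldonio/mlserver:0.5.2

# alternatively, pull directly inside the node
minikube ssh
docker pull seldonio/mlserver:0.5.2
</code></pre>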
<p>Can we set min and max limit for deployments at deployment level, not at cluster or replica set level in kubernetes ?</p>
<p>On deployment level <strong>it is not possible</strong>, but there is an option to do this indirectly. You should use a <strong>HorizontalPodAutoscaler</strong> (HPA for short):</p> <blockquote> <p>HPA automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.</p> </blockquote> <p>Example code for HPA:</p> <pre><code>apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: php-apache minReplicas: 1 maxReplicas: 10 </code></pre> <p>More information can be found in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="noreferrer">Kubernetes documentation</a>.</p>
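<p>For a quick test the same autoscaler can also be created imperatively; the following one-liner mirrors the manifest above (the deployment name <code>php-apache</code> is just the example's name):</p> <pre><code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
</code></pre> <p>Note that the HPA only scales between the min/max bounds based on observed load; it is not a hard limit configured on the Deployment itself.</p>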
<p>A few months back, I deployed the Elastic-Search (version - 8.0.1) on Kubernetes (GCP) as a service as External load balancer using this <a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">guide</a>.</p> <p>Now, I am unable to perform any read or write operation on ElasticSearch. I checked the logs, in which I found that memory of the node is almost full.</p> <p><strong>Here are some logs which support this analysis:</strong></p> <blockquote> <p>flood stage disk watermark [95%] exceeded on [hulk-es-default-0][/usr/share/elasticsearch/data] free: 18.5mb[1.8%], all indices on this node will be marked read-only</p> <p>Cluster health status changed from [YELLOW] to [RED] (reason: [shards failed [[1][0]]]).</p> <p>This node is unhealthy: health check failed on [/usr/share/elasticsearch/data].`</p> </blockquote> <p><strong>Here are the errors that are coming when performing any read/write operation:</strong></p> <blockquote> <p>elasticsearch.exceptions.TransportError: TransportError(503, 'master_not_discovered_exception', None)</p> <p>elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPSConnectionPool(host='...', port=****): Read timed out. (read timeout=30))</p> </blockquote> <p>I increased the capacity of my elasticsearch persistent volume claim(PVC) but was unable to create the pod with that new volume.</p> <p>I followed the following steps -</p> <ul> <li><p>Set the allowVolumeExpansion field to true in their StorageClass object(s)</p> </li> <li><p>Scaled ElasticSearch Operator Deployment to 0 Replicas.</p> </li> <li><p>Deleted the statefulset Object without deleting the pods using</p> <p><code>kubectl delete sts &lt;statefulset-name&gt; --cascade=orphan</code></p> </li> </ul> <p>Before deleting I saved the yaml of the statefulset using</p> <pre><code>kubectl get sts &lt;statefulset-name&gt; -o yaml </code></pre> <ul> <li>Increased the storage in capacity in the yaml file of PVC.</li> <li>Recreated the StatefulSet with the new storage request by the yaml file I saved using</li> </ul> <p><code>kubectl apply -f file-name.yml</code></p> <ul> <li>Scaled back the operator deployment to 1</li> </ul> <p>But, when I recreated the stateful set, the <code>CrashLoopBackOff</code> error is being shown every-time.</p> <p>Following are some logs -</p> <ul> <li>readiness probe failed</li> <li>Likely root cause: java.io.IOException: No space left on device</li> <li>using data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdb)]], net usable_space [0b], net total_space [975.8mb], types [ext4]</li> </ul> <p>The persistent disk's volume that the ES pod is accessing is increased but still the pod is unable to start. Can anyone guide me here, what is the problem here ?</p>
<p>It seems that, for some reason, the Pod is not seeing the new volume size. Have you tried <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-expansion" rel="nofollow noreferrer">this procedure</a> for GKE volume expansion?</p> <p>If the Pod is stuck in the <code>CrashLoopBackOff</code> state you can use e.g. <code>kubectl debug mypod -it --image=busybox</code> to attach a debug container to your Pod and check what is going on with the mounted volume.</p> <p>Another thing you can do is create a snapshot/backup of your volume and restore it onto a new, bigger volume to see if the issue still persists.</p>
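<p>A minimal sketch of expanding the PVC in place instead of recreating the StatefulSet; the StorageClass and PVC names below are assumptions, check yours with <code>kubectl get sc,pvc</code>:</p> <pre><code># make sure the StorageClass allows expansion
kubectl patch storageclass standard -p '{&quot;allowVolumeExpansion&quot;: true}'

# bump the request on the PVC itself (ECK usually names it elasticsearch-data-&lt;pod-name&gt;)
kubectl patch pvc elasticsearch-data-hulk-es-default-0 --type merge \
  -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;5Gi&quot;}}}}'

# watch the resize propagate; some CSI drivers need the pod restarted to grow the filesystem
kubectl get pvc -w
</code></pre>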
<p>I am following the tutorial here about kserve <a href="https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md" rel="nofollow noreferrer">https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md</a></p> <p>Is this my docker&amp;k8s issue? I have spent hours trying to debug but to no avail.</p> <p>I am getting the following error:</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m20s default-scheduler Successfully assigned modelmesh-serving/modelmesh-serving-mlserver-0.x-77cc8fd548-xdgvr to minikube Normal Pulling 4m18s kubelet Pulling image &quot;kserve/modelmesh:v0.9.0-rc0&quot; Normal Pulled 3m18s kubelet Successfully pulled image &quot;kserve/modelmesh:v0.9.0-rc0&quot; in 59.419620166s Normal Created 3m18s kubelet Created container mm Normal Started 3m17s kubelet Started container mm Normal Pulling 3m17s kubelet Pulling image &quot;seldonio/mlserver:0.5.2&quot; Warning Failed 68s kubelet Failed to pull image &quot;seldonio/mlserver:0.5.2&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 68s kubelet Error: ErrImagePull Normal Pulling 68s kubelet Pulling image &quot;kserve/modelmesh-runtime-adapter:v0.9.0-rc0&quot; </code></pre>
<p>As mentioned by @Rafał Leszko, the image <strong>seldonio/mlserver:0.5.2</strong> you are trying to pull is very large, which is likely why the pull fails with <strong>ErrImagePull: context deadline exceeded</strong>. You can still pull the image manually with <code>docker pull</code>, after which the scheduling succeeds.</p> <p>When the timeout is exceeded, the kubelet cancels the request and throws an error. A possible workaround is to set or increase the <code>--runtime-request-timeout duration</code> parameter via the kubelet config file (in /var/lib/kubelet) and then pull the image by running <strong><code>docker pull imagename</code></strong>.</p> <p>See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">kubelet-config-file</a> for more information.</p>
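<p>A hedged sketch of what that config file change could look like; the path and the 15 minute value are assumptions, and the kubelet has to be restarted afterwards. On minikube the same setting can be passed as an extra kubelet flag at start time:</p> <pre><code># /var/lib/kubelet/config.yaml (path may differ per distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# give slow runtime requests more time before they are cancelled
runtimeRequestTimeout: &quot;15m&quot;
</code></pre> <pre><code># minikube equivalent (one way to pass kubelet flags)
minikube start --extra-config=kubelet.runtime-request-timeout=15m
</code></pre>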
<p>I have a spring-boot application running over Kubernetes, Now I am trying to set up a horizontal pod auto scaler.</p> <p>I have one doubt, without modifying any autoscalar thresholds, does auto scaler check pods only when they are ready(after the readiness probe succeeds) or even when readiness is not complete.</p> <p>Example</p> <ul> <li>A Java app takes 5 mins to start(i.e to complete the readiness probe)</li> <li>During this 5 mins, CPU for this app with 100% of the CPU requests assigned</li> <li>HPA is configured to scale if targetCPUUtilization reaches 50%</li> <li>Now what would happen in this case when the HPA condition is satisfied but the pod is not ready yet? Will it add one more pod right away or it will first wait for pods to be ready and then starts the timer for -<strong>horizontal-pod-autoscaler-initial-readiness-delay ?</strong></li> </ul> <p>I am assuming answer lies in this, but not clear to me</p> <blockquote> <p>Due to technical constraints, the HorizontalPodAutoscaler controller cannot exactly determine the first time a pod becomes ready when determining whether to set aside certain CPU metrics. Instead, it considers a Pod &quot;not yet ready&quot; if it's unready and transitioned to unready within a short, configurable window of time since it started. This value is configured with the --horizontal-pod-autoscaler-initial-readiness-delay flag, and its default is 30 seconds. Once a pod has become ready, it considers any transition to ready to be the first if it occurred within a longer, configurable time since it started. This value is configured with the --horizontal-pod-autoscaler-cpu-initialization-period flag, and its default is 5 minutes</p> </blockquote> <p>Also, can anyone <strong>explain horizontal-pod-autoscaler-cpu-initialization-period</strong> &amp; <strong>horizontal-pod-autoscaler-initial-readiness-delay</strong> ? Documentation is confusing</p>
<p>A <a href="https://github.com/jthomperoo/predictive-horizontal-pod-autoscaler" rel="nofollow noreferrer">Digital OCean Predictive Horizontal Pod Autoscaler</a> has the same kind of parameter: <a href="https://predictive-horizontal-pod-autoscaler.readthedocs.io/en/latest/reference/configuration/#cpuinitializationperiod" rel="nofollow noreferrer"><code>cpuInitializationPeriod</code></a>.</p> <p>It rephrases what <code>--horizontal-pod-autoscaler-cpu-initialization-period</code> as:</p> <blockquote> <p>the period after pod start when CPU samples might be skipped.</p> </blockquote> <p>And for <code>horizontal-pod-autoscaler-initial-readiness-delay</code></p> <blockquote> <p>the period after pod start during which readiness changes will be treated as initial readiness.</p> </blockquote> <p>The idea is to:</p> <ul> <li>not trigger any scaling based on CPU change alone (because the initial <code>cpu-initialization-period</code> means the pod is still being ready, with potential CPU spike)</li> <li>not trigger any scaling based on readiness state changes (because the initial <code>readiness-delay</code> means, even if the pod reports it is ready, that can change during that delay)</li> </ul> <p><a href="https://github.com/kubernetes/website/issues/12657" rel="nofollow noreferrer"><code>kubernetes/website</code> issue 12657</a> has more (mainly to confirm the original documentation is confusing).</p>
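<p>Both of these are flags of the <code>kube-controller-manager</code>, not of the HPA object itself, so they can only be changed if you run your own control plane (on managed offerings they are fixed). A rough sketch for a kubeadm-style cluster, with the default values spelled out:</p> <pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml (static pod on the control-plane node)
spec:
  containers:
  - command:
    - kube-controller-manager
    # ...existing flags...
    - --horizontal-pod-autoscaler-initial-readiness-delay=30s
    - --horizontal-pod-autoscaler-cpu-initialization-period=5m0s
</code></pre>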
<p>May be I am missing some basic here. My pod failing at executing the shell command</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: ReplicaSet metadata: name: redis-rs01 labels: name: redis-rs01 run: redis-rs01 app: redis spec: replicas: 3 selector: matchLabels: app: redis template: metadata: name: redis labels: app: redis spec: volumes: - name: vol001 emptyDir: {} containers: - name: redis image: redis ports: - containerPort: 6739 env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName volumeMounts: - name: vol001 mountPath: /tmp command: [&quot;/bin/sh -c 'mkdir /tmp &amp;&amp; touch /tmp/date.log'&quot;] </code></pre> <p>From <code>describe</code> I am getting error as below</p> <pre><code>Warning Failed 14s (x3 over 32s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: &quot;/bin/sh -c 'mkdir /tmp &amp;&amp; touch /tmp/date.log'&quot;: stat /bin/sh -c 'mkdir /tmp &amp;&amp; touch /tmp/date.log': no such file or directory: unknown </code></pre> <p>Any suggestions, much appreciated.</p> <p>Thank you.</p>
<p>You need to split the command into separate array elements. Right now the entire string <code>/bin/sh -c 'mkdir /tmp &amp;&amp; touch /tmp/date.log'</code> is treated as the path of one single executable, and since no file with that name exists the container fails to start (that is what the <code>stat ...: no such file or directory</code> part of the error means).</p> <p>You can fix this as follows:</p> <pre><code>command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;mkdir /tmp &amp;&amp; touch /tmp/date.log&quot;]
</code></pre>
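<p>Two follow-up notes, as a hedged sketch rather than a definitive fix: <code>/tmp</code> already exists because of the <code>emptyDir</code> mount, so the <code>mkdir</code> is unnecessary, and overriding <code>command</code> replaces the image's entrypoint, so Redis itself would never start and the pod would exit as soon as the shell finishes. If the intent is to prepare the file and then keep Redis running, something like this could work:</p> <pre><code>        command: [&quot;/bin/sh&quot;, &quot;-c&quot;]
        args: [&quot;touch /tmp/date.log &amp;&amp; exec redis-server&quot;]
</code></pre>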
<p>When deploying a set of SpringBoot microservice applications in a Kubernetes cluster, should I include any kind of service discovery client libraries in my SpringBoot application to leverage kubernetes-native-service-discovery? If not, how a caller service calls another microservice in the same cluster?</p> <p>Thanks in advance.</p>
<p>You don't need a discovery client library for this; Kubernetes gives you DNS-based service discovery out of the box. A Service in Kubernetes can be invoked as shown below, for example an HTTP service. A Service has to be created and associated with the pods of the callee; please refer to the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> documentation for the various Service types.</p> <pre><code>http://&lt;service-name&gt;:&lt;port&gt;
</code></pre> <p>No other changes are required on the application side. Please refer to the official <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS</a> documentation for the resolution details.</p>
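<p>A small illustrative sketch; the service and port names are made up. The caller just treats the Service name as a hostname in its configuration:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: order-service        # hypothetical callee microservice
spec:
  selector:
    app: order-service
  ports:
    - port: 80
      targetPort: 8080       # port the Spring Boot app listens on
</code></pre> <p>The caller can then use <code>http://order-service</code> from the same namespace, or <code>http://order-service.&lt;namespace&gt;.svc.cluster.local</code> across namespaces, e.g. as a plain property value in <code>application.yml</code>; no Eureka/Ribbon style client is required.</p>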
<p>I have a multinode kubernetes cluster with 6 pods (replicas, 8Gi, 4 CPU cores) running on different nodes residing in Auto Scaling Group. These pods contain an Application that serves REST API, and is connected to Redis.</p> <p>For all the requests going through ALB configured via ingress, some requests are painfully slower than the others.</p> <p>When I sent the requests at Pod-IP level, I found 1 pod to be much slower (almost 5 times as slow) than the other 5, bringing down the total response-time drastically.</p> <p>I tried killing the pod, such that the deployment spinned up a new one which worked fine. The issue is, some other pod went slow because of this. The ratio of fast:slow is maintained at 5:1.</p> <p>The CPU-utilization of the pods is below 30% and have ample available resources.</p> <p>I am not able to figure out the reason. Please help.</p>
<p>I am not the questioneer but oddly enough ran into a similar issue that could not be attributed to anything obvious per se.</p> <p>After a lot of debugging and turning every stone we finally disabled the Prometheus Operator scraping our pods by removing the required annotation. The &quot;1 pod performance issue&quot; magically disappeared.</p> <p>We kubectl forwarded one of the pods and checked our metrics endpoint: it was generating 6 MB (!) of metric data which is quite a lot and took around 700-1000ms to generate when no load is present. It turned out that a custom metric of ours had a regression and created a lot of tag-variants for a specific metric which attributed to nearly 3 MB of the generated metrics. The next issue was Kafka Streams which generates a lot of very detailed metrics (even on a per Kafka Stream node operation basis or tagged in regards to every node in the connected Kafka cluster). There is no easy way for us to disable the collection of these metrics but we just excluded them from our prometheus endpoint.</p> <p>This left us with a meager 32kb of metrics. Reactivating the Prometheus Operator scraping did not re-introduce the observed issue.</p> <p>But why one pod? We basically have two Prometheus Operator instances scraping all registered endpoints every 30 seconds which leads to an average scraping interval of around 15 seconds. We checked our http metrics and then it struck us: one pod is scraped 8-10x more often compared to any other pod! Considering a high load scenario its not unlikely that the prometheus endpoint exceeds 1.5 seconds to respond which would mean that another scraping process is started while the previous scrape is not completed yet. All this was adding up more and more CPU usage leading to more throttling of the pod because it was hitting its CPU limit which in turn increased the metrics endpoints response time which lead to more concurrent scrapes generating 6 MB of data.</p> <p>As to why one pod was scraped this often: we have no definite answer for this as of now as our systems team is still investigating. Sadly enough the 8-10x amount of scraping disappeared after we reduced our metrics endpoints response size.</p> <p>We basically got DDOSd by metrics scraping which occured too often on one pod (reason unknown). This has to be the most complex thing I have ever debugged. We basically removed every part of our application (DB layer, tracing, prometheus, custom metrics) until we found the culprit. We even considered if specific Kubernetes nodes where the culprit and even checked if something like entropy was running out for whatever reason.</p> <p>Good luck with your issue, I hope this helps some poor soul not waste more than a week of searching for a needle in a haystack!</p>
<p>I am using the RabbitMQ <a href="https://www.rabbitmq.com/kubernetes/operator/using-operator.html" rel="nofollow noreferrer">Kubernetes operator</a> for a dev-instance and it works great. What isn't great is that the credentials generated by the operator are different for everyone on the team (I'm guessing it generates random creds upon init).</p> <p>Is there a way to provide a secret and have the operator use those credentials in place of the generated ones?</p> <p>Yaml:</p> <pre><code>apiVersion: rabbitmq.com/v1beta1 kind: RabbitmqCluster metadata: name: rabbitmq-cluster-deployment namespace: message-brokers spec: replicas: 1 service: type: LoadBalancer </code></pre> <p>Ideally, I can just configure some yaml to point to a secret and go from there. But, struggling to find the documentation around this piece.</p> <p>Example Username/Password generated:</p> <ul> <li>user: default_user_wNSgVBIyMIElsGRrpwb</li> <li>pass: cGvQ6T-5gRt0Rc4C3AdXdXDB43NRS6FJ</li> </ul>
<p>I figured it out. Looks like you can just add a secret configured like the below example and it'll work. I figured this out by reverse engineering what the operator generated. So, please chime in if this is bad.</p> <p>The big thing to remember is the <code>default_user.conf</code> setting. Other than that, it's just a secret.</p> <pre><code>kind: Secret apiVersion: v1 metadata: name: rabbitmq-cluster-deployment-default-user namespace: message-brokers stringData: default_user.conf: | default_user = user123 default_pass = password123 password: password123 username: user123 type: Opaque </code></pre> <p><code>rabbitmq-cluster-deployment-default-user</code> comes from the Deployment <code>metadata.name</code> + <code>-default-user</code> (see the yaml in the question).</p> <p><strong>UPDATE</strong></p> <p>From the docs: <a href="https://github.com/rabbitmq/cluster-operator/tree/main/docs/examples/external-admin-secret-credentials" rel="nofollow noreferrer">https://github.com/rabbitmq/cluster-operator/tree/main/docs/examples/external-admin-secret-credentials</a></p> <blockquote> <p>When this externalSecret field is specified the default secret will not be generated by the cluster operator but the statefulset will wait until my-secret will be generated.</p> </blockquote> <pre><code>apiVersion: rabbitmq.com/v1beta1 kind: RabbitmqCluster metadata: name: external-secret-user spec: service: type: LoadBalancer replicas: 1 secretBackend: externalSecret: name: &quot;my-secret&quot; </code></pre> <p>Example Secret:</p> <pre><code>apiVersion: v1 data: default_user.conf: ZGVmYXVsdF91c2VyID0gZGVmYXVsdF91c2VyX2htR1pGaGRld3E2NVA0ZElkeDcKZGVmYXVsdF9wYXNzID0gcWM5OG40aUdEN01ZWE1CVkZjSU8ybXRCNXZvRHVWX24K host: dmF1bHQtZGVmYXVsdC11c2VyLmRlZmF1bHQuc3Zj password: cWM5OG40aUdEN01ZWE1CVkZjSU8ybXRCNXZvRHVWX24= port: NTY3Mg== provider: cmFiYml0bXE= type: cmFiYml0bXE= username: ZGVmYXVsdF91c2VyX2htR1pGaGRld3E2NVA0ZElkeDc= kind: Secret metadata: name: my-secret namespace: rabbitmq-system type: Opaque </code></pre>
<p>I'm trying to set the resources of my pod from a config map</p> <pre><code>resources: requests: cpu: valueFrom: configMapKeyRef: name: config value: CPU_REQUEST </code></pre> <p>But get error &quot;got map&quot;, expected &quot;string&quot;</p>
<p>Resources cannot be configured through ConfigMaps; they need to be configured as part of the <a href="https://k8s-examples.container-solutions.com/examples/Pod/Pod.html#resources_and_limits/" rel="nofollow noreferrer">pod spec</a> itself (the <code>valueFrom</code>/<code>configMapKeyRef</code> mechanism only works for environment variables).</p>
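<p>For reference, a minimal sketch of where the values have to live (the numbers are placeholders). If you want them to stay configurable per environment, templating the manifest with Helm or Kustomize is the usual substitute for a ConfigMap here:</p> <pre><code>spec:
  containers:
    - name: app
      image: example/image:latest   # placeholder
      resources:
        requests:
          cpu: &quot;500m&quot;
          memory: &quot;256Mi&quot;
        limits:
          cpu: &quot;1&quot;
          memory: &quot;512Mi&quot;
</code></pre>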
<p>I can't seem to figure out whey my nginx template is not using the values file when I pass it with the <code>helm template --debug nginx charts/transport-latency -f charts/transport-latency/values.yaml &gt; .ignore/nginx-out.yaml</code> command.</p> <p>Output using <code>--debug</code></p> <pre><code>install.go:178: [debug] Original chart version: &quot;&quot; install.go:195: [debug] CHART PATH: /Users/&lt;userName&gt;/src/Repos/helm_charts/charts/transport-latency </code></pre> <p>Here is the structure of the chart:</p> <pre><code>charts/transport-latency β”œβ”€β”€ Chart.lock β”œβ”€β”€ Chart.yaml β”œβ”€β”€ charts β”‚ └── nginx-13.1.0.tgz β”œβ”€β”€ templates └── values.yaml </code></pre> <p>And when I run the above mentioned command I just get the default chart without any modification that should be included in the <code>values.yaml</code> file.</p> <p>Here is what my <code>values.yaml</code> file looks like:</p> <pre class="lang-yaml prettyprint-override"><code>namespaceOverride: airflow replicaCount: 2 service: type: ClusterIP </code></pre> <p>If it helps here is my <code>Chart.yaml</code> contents:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v2 name: transport-latency type: application version: 1.0.0 appVersion: &quot;1.0.0&quot; dependencies: - name: nginx version: 13.1.0 repository: https://charts.bitnami.com/bitnami </code></pre>
<p>If you are referencing a dependent chart (in your case, the <code>nginx</code> chart), then you must nest values for that subchart in its own block with the name of the dependency.</p> <p>So, since you named the dependency <code>nginx</code> as per your <code>chart.yaml</code>:</p> <pre><code>apiVersion: v2 name: transport-latency type: application version: 1.0.0 appVersion: &quot;1.0.0&quot; dependencies: - name: nginx version: 13.1.0 repository: https://charts.bitnami.com/bitnami </code></pre> <p>You must therefore nest the values for that chart in a block labelled <code>nginx</code></p> <p><code>values.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>nginx: namespaceOverride: airflow replicaCount: 2 service: type: ClusterIP </code></pre> <p>Using your values.yaml as it is:</p> <pre><code>namespaceOverride: airflow replicaCount: 2 service: type: ClusterIP </code></pre> <p>Would only provide those to your &quot;root&quot; chart -- which is empty.</p>
<p>I'm unable to wrap my head around the concepts of interconnectivity among <code>DNS</code>, <code>Ingress controller</code>, <code>MetalLb</code> and <code>Kubeproxy</code>.</p> <p>I know what these resources/tools/services are for and get the concepts of them individually but unable to form a picture of them working in tandem.</p> <p>For example, in a bare metal setup, a client accesses my site - <code>https://mytestsite.com</code>, still having doubts , how effectively it lands to the right pod and where the above mentioned services/resources/tools comes into picture &amp; at what stage ? Ex. How DNS talks to my MetalLB, if the client accesses my MetalLB hosting my application and how LB inturn speaks to IngressController and finally where does Kube-proxy comes into play here.</p> <p>I went thru the K8s official documentation and few others as well but still kind of stumped here. Following article is really good but I'm unable to stitch the pieces together. <a href="https://www.disasterproject.com/kubernetes-with-external-dns/" rel="nofollow noreferrer">https://www.disasterproject.com/kubernetes-with-external-dns/</a></p> <p>Kindly redirect me to the correct forum, if it is not the right place, thanks.</p>
<p>The ingress-controller creates a <code>service</code> of type <code>LoadBalancer</code> that serves as the entry point into the cluster. In a public cloud environment, a loadbalancer like ELB on AWS would create the counter part and set the <code>externalIP</code> of that service to it's ip. It is like a service of type <code>NodePort</code> but it also has an <code>ExternalIP</code>, which corresponds to the actual ip of the counterpart, a load balancer like ELB on aws.</p> <p>In a bare metal environment, no external load balancer will be created, so the external ip would stay in <code>&lt;Pending&gt;</code> state forever. Here for example the service of the istio ingress controller:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl get svc istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) istio-ingressgateway LoadBalancer 192.12.129.119 &lt;Pending&gt; [...],80:32123/TCP,443:30994/TCP,[...] </code></pre> <p>In that state you would need to call <code>http://&lt;node-ip&gt;:32123</code> to reach the <code>http</code> port <code>80</code> of the ingress controller service, which would be then forwarded to your pod (more on that in a bit).</p> <p>When you're using metallb, it will update the service with an external ip so you can call <code>http://&lt;ip&gt;</code> instead. Metallb will also announce that ip, e.g. via BGP, so other know where to send traffic to, when someone would call the ip.</p> <p>I havn't used external DNS and only scanned the article but I guess that you can use that to also have a dns record to be created so someone can call your service by it's domain, not only by it's ip. So you can call <code>http://example.com</code> instead.</p> <p>This is basically why you run metallb and how it interacts with your ingress controller. The ingress controller creates an entry point into the cluster and metallb configures it and attracts traffic.</p> <p>Until now the call to <code>http://example.com</code> can reach your cluster, but it needs to also reach the actual application, running in a pod inside the cluster. That's <code>kube-proxy</code>'s job.</p> <p>You read a lot about <code>service</code> of different types and all this kind of stuff, but in the end it all boils down to iptables rules. <code>kube-proxy</code> will create a bunch of those rules, that form a chain.</p> <p>SSH into any kubernetes worker, run <code>iptables-save | less</code> command and search for the <code>external ip</code> configured on your ingress-controller's service by metallb. You'll find a chain with the destination of you external ip, that basically leads from the external IP over the service ip with a load balancer configuration to a pod ip.</p> <p>In the end the whole chain would look something like this:</p> <pre><code>http://example.com -&gt; http://&lt;some-ip&gt; (domain translated to ip) -&gt; http://&lt;node-ip&gt;:&lt;node-port&gt; (ingress-controller service) --- -&gt; http://&lt;cluster-internal-ip&gt;:&lt;some-port&gt; (service of your application) -&gt; http://&lt;other-cluster-internal-ip&gt;:&lt;some-port&gt; (ip of one of n pods) </code></pre> <p>where the <code>---</code> line shows the switch from cluster external to cluster internal traffic. 
The <code>cluster-internal-ip</code> will be from the configured <code>service-cdir</code> and the <code>other-cluster-internal-ip</code> will be from the configured <code>pod-cidr</code>.</p> <p>Note that there are different ways to configure cluster internal traffic routing, how to run <code>kube-proxy</code> and some parts might even be a bit simplified, but this should give you a good enough understanding of the overall concept.</p> <p>Also see <a href="https://stackoverflow.com/a/71753345/10020419">this answer</a> on the question 'What is a Kubernetes LoadBalancer On-Prem', that might provide additional input.</p>
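<p>To make the MetalLB part of the picture concrete, this is roughly the configuration that decides which external IP the ingress controller's <code>LoadBalancer</code> service gets, and announces it on the local network. This is a hedged sketch for MetalLB &gt;= 0.13 (older versions use a ConfigMap instead), L2 mode is used just as an example, and the address range is an assumption for your network:</p> <pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # IPs MetalLB may hand out to LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
</code></pre>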
<p>I'm following the <a href="https://www.youtube.com/watch?v=DgVjEo3OGBI" rel="nofollow noreferrer">tutorial</a> from Less Jackson about Kubernetes but I'm stuck around 04:40:00. I always get an 404 returned from my Ingress Nginx Controller. I followed everything he does, but I can't get it to work.</p> <p>I also read that this could have something to do with IIS, so I stopped the default website which also runs on port 80.</p> <p>The apps running in the containers are .NET Core.</p> <p><strong>Commands-deply &amp; cluster ip</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: commands-depl spec: replicas: 1 selector: matchLabels: app: commandservice template: metadata: labels: app: commandservice spec: containers: - name: commandservice image: maartenvissershub/commandservice:latest --- apiVersion: v1 kind: Service metadata: name: commands-clusterip-srv spec: type: ClusterIP selector: app: commandservice ports: - name: commandservice protocol: TCP port: 80 targetPort: 80 </code></pre> <p><strong>Platforms-depl &amp; cluster ip</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: platforms-depl spec: replicas: 1 selector: matchLabels: app: platformservice template: metadata: labels: app: platformservice spec: containers: - name: platformservice image: maartenvissershub/platformservice:latest --- apiVersion: v1 kind: Service metadata: name: platforms-clusterip-srv spec: type: ClusterIP selector: app: platformservice ports: - name: platformservice protocol: TCP port: 80 targetPort: 80 </code></pre> <p><strong>Ingress-srv</strong></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: acme.com http: paths: - path: /api/platforms pathType: Prefix backend: service: name: platforms-clusterip-srv port: number: 80 - path: /api/c/platforms pathType: Prefix backend: service: name: commands-clusterip-srv port: number: 80 </code></pre> <p>I also added this to my hosts file: <code>127.0.0.1 acme.com</code></p> <p>And I applied this from the nginx documentation:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <p><strong>kubectl get ingress</strong> <a href="https://i.stack.imgur.com/YmKBU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmKBU.png" alt="kubectl get ingress" /></a></p> <p><strong>kubectl describe ing ingress-srv</strong> <br /> <a href="https://i.stack.imgur.com/QuV8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QuV8Q.png" alt="kubectl describe ing ingress-srv" /></a></p> <p><strong>Dockerfile CommandService</strong></p> <pre><code>FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env WORKDIR /app COPY *.csproj ./ RUN dotnet restore COPY . ./ RUN dotnet publish -c Release -o out FROM mcr.microsoft.com/dotnet/aspnet:5.0 WORKDIR /app COPY --from=build-env /app/out . ENTRYPOINT [ &quot;dotnet&quot;, &quot;PlatformService.dll&quot; ] </code></pre> <p><strong>kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx</strong> <a href="https://i.stack.imgur.com/Ir1HB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ir1HB.png" alt="LOGS INGRESS POD" /></a></p> <p>Am I missing something?</p>
<p>I found my solution. There was a process already listening on 0.0.0.0:80 under PID 4 (the Windows HTTP service, HTTP.sys). I could stop it using <code>NET stop HTTP</code> in an admin cmd.</p> <p>Before that, running <code>kubectl get services -n=ingress-nginx</code> showed the ingress-nginx-controller service, which is fine, but without a usable external IP, and running <code>kubectl get ingress</code> also didn't show an ADDRESS. Now they both show &quot;localhost&quot; as the value for EXTERNAL-IP and ADDRESS.</p> <p>Reference: <a href="https://stackoverflow.com/a/26653919/12290439">Port 80 is being used by SYSTEM (PID 4), what is that?</a></p>
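<p>In case someone needs to track this down themselves, a quick way to see what is occupying port 80 on Windows (run in an elevated cmd; the PID you find may of course differ):</p> <pre><code>netstat -ano | findstr :80
tasklist /FI &quot;PID eq 4&quot;
net stop http
</code></pre>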
<ol> <li>Does traefik / traefik mesh can handle multi regional arquitecture to solve request to the right user region?</li> <li>Any other self hosted solution recomendation?</li> </ol> <p><strong>Rules:</strong></p> <ul> <li>Each region is independent and one region downtime does'nt affect other region</li> <li>User authentication metadata is replicated across all regions</li> <li>Requests with an authentication token can enter the network from any region and be verified</li> <li>The default entry point for authentication request is usa-region-1</li> <li>The ingress controller / service mesh determines who the user is and in which region their account is hosted</li> <li>The authentication service add &quot;user_region&quot; headers to the request and &quot;precedence_region&quot; to prevent internal loop</li> <li>Detects that destionation region is not current region and proxies the request to the correct region with extra headers added</li> </ul> <p><strong>Diagram:</strong></p> <pre><code> β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ β”‚ β”‚ usa-region-1 β”‚ β”‚ β”Œβ”€β”€β”€β–Ί/app-1/*β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β”Όβ”€β”€β–Ί Load ──────► β”Œβ”€β”€ Ingress ───── β”‚ β”‚ β”‚ Balancer β”‚ Controller β”œβ”€β”€β”€β–Ί/app-2/*β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └───►/app-3/*β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └─Authentication β”‚ User──────► Cloudflare ──────►│ β”‚ β”‚ Service β”‚ Request β”‚ β”‚ β”‚ β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ β”‚ *proxie the request* β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ europe-region-1 β”‚ β”‚ β”‚ β”‚ β”‚ β”Œβ”€β”€β”€β–Ί/app-1/*β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └───┼──► Load ───────► └── Ingress ───── β”‚ β”‚ Balancer Controller β”œβ”€β”€β”€β–Ί/app-2/*β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └───►/app-3/*β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └─Authentication β”‚ β”‚ Service β”‚ β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<p><strong>LOCAL DATA STORAGE REQUIREMENT</strong></p> <p>In some markets, there could be regulatory reasons for keeping a user's data stored within their home region. So if a European user has travelled to the USA, they should continue to be routed to European servers, and their data will remain in Europe.</p> <p><strong>SIMPLE ROUTING BASED ON HEADERS</strong></p> <p>First you need something in each HTTP request, such as the <code>user_region</code> header that you mention. The ingress controller is the logical place to do this, rather than for each service. Here is a <a href="https://gist.github.com/jirikuncar/6ed34b2bceeba2f3d7fc93ed5539bd3c" rel="nofollow noreferrer">Traefik header based routing example</a>.</p> <p><strong>ROUTING IN INGRESS CONTROLLERS</strong></p> <p>If you need more complex logic, then a plugin can be written, using either the extensibility features of the ingress controller, or a dedicated API gateway placed in front of APIs. For an example see <a href="https://github.com/curityio/dynamic-user-routing/blob/main/reverse-proxy/nginx/nginx.conf" rel="nofollow noreferrer">this NGINX configuration file</a>, which sets the target host name based on <a href="https://github.com/curityio/dynamic-user-routing/blob/main/reverse-proxy/nginx/zonetransfer.lua" rel="nofollow noreferrer">this Lua plugin</a>.</p> <p><strong>FURTHER INFO</strong></p> <p>These links may also be of interest. In particular note how <a href="https://curity.io/resources/learn/dynamic-user-routing/#associating-users-to-regions" rel="nofollow noreferrer">unauthenticated requests</a> can be handled, by identifying the user first, then transferring the user to their home region so that authentication takes place there. This can allow credentials for users to only be stored in their home region.</p> <ul> <li><a href="https://curity.io/resources/learn/dynamic-user-routing/" rel="nofollow noreferrer">Dynamic User Routing</a></li> <li><a href="https://curity.io/resources/learn/integrating-plugins-with-kubernetes-ingress/" rel="nofollow noreferrer">Implementing Plugins in Ingress Controllers</a></li> </ul>
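<p>As a concrete but purely illustrative sketch of the &quot;simple routing based on headers&quot; idea with Traefik's CRDs: the rule below sends requests that carry <code>user_region: europe-region-1</code> (the header your authentication service adds) to the local <code>app-1</code> service; every hostname, service name and header value here is an assumption:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-1-europe
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`acme.example`) &amp;&amp; PathPrefix(`/app-1`) &amp;&amp; Headers(`user_region`, `europe-region-1`)
      kind: Rule
      services:
        - name: app-1
          port: 80
</code></pre> <p>Requests whose <code>user_region</code> points at another region would be matched by a separate route (or handled by a middleware/plugin) that proxies them across, as described above.</p>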
<p>I want to install a previous, specific version of k8slens (<a href="https://k8slens.dev/" rel="nofollow noreferrer">https://k8slens.dev/</a>) but impossible for me to find this (neither for mac nor windows !) Do I have to download the source code and rebuild it? Even with this there is no &quot;install&quot; section in the makefile ! Why is it so difficult to find a specific version?</p>
<p>Yes, you can easily download the source code for a specific version <code>tag</code> and then compile and use it. The list of tags is <a href="https://github.com/lensapp/lens/tags" rel="nofollow noreferrer">here</a>.</p> <p>Once you get the source code of your desired version you can generate the binary with:</p> <pre><code>make build
</code></pre> <p>And then simply run that binary to get your required version. Just know that &quot;install&quot; simply means copying a compiled binary to a known <code>PATH</code> location on the system so it can be executed. There is nothing special about it.</p>
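<p>A short sketch of the whole flow, assuming you have the project's build toolchain (Node.js/Yarn) installed; the tag below is just an example, pick the one you need from the tags page:</p> <pre><code>git clone https://github.com/lensapp/lens.git
cd lens
git checkout v5.4.6   # example tag - substitute the version you want
make build
</code></pre>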
<p>What is the difference between <code>env</code> and <code>envFrom</code> fields in kubernetes when referencing secrets? Thank you!</p>
<p>Below is the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">&quot;env:&quot;</a> sample, which loads variables into the container as environment variables that can be referenced with &quot;$DEMO_GREETING&quot;, giving you &quot;Hello from the environment&quot;:</p> <pre><code> env: - name: DEMO_GREETING value: &quot;Hello from the environment&quot; - name: DEMO_FAREWELL value: &quot;Such a sweet sorrow&quot; </code></pre> <p>Similarly, you can load a whole <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> as environment variables as below:</p> <pre><code> envFrom: - secretRef: name: mysecret </code></pre> <p>Here every key of the secret becomes its own environment variable inside the container, named after the key (not after the secret). In short: <code>env</code> sets variables one at a time, with either literal values or references to individual keys, while <code>envFrom</code> imports all keys of a ConfigMap or Secret in one go.</p>
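<p>A small sketch to make the naming concrete; the secret and its keys are made up:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
stringData:
  DB_USER: admin        # becomes $DB_USER in the container via envFrom
  DB_PASSWORD: s3cr3t   # becomes $DB_PASSWORD
</code></pre> <pre><code># env, by contrast, picks a single key and can rename it
env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: DB_PASSWORD
</code></pre>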
<p>I have 3 ingress namespace in AKS Cluster. One of standart ingress deploy file, the other two are internal and external modified versions of the original file. So right now I have 3 ingress and 3 external ip(one of internal with adding service annotations this line service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;)</p> <pre><code> $ kubectl get svc -A external-ingress ingress-nginx-controller LoadBalancer 10.245.57.76 3.3.3.3 80:32112/TCP,443:31761/TCP 3h24m external-ingress ingress-nginx-controller-admission ClusterIP 10.245.28.35 &lt;none&gt; 443/TCP 3h24m ingress-nginx ingress-nginx-controller LoadBalancer 10.245.12.12 1.1.1.1 80:31716/TCP,443:32023/TCP 40m ingress-nginx ingress-nginx-controller-admission ClusterIP 10.245.110.233 &lt;none&gt; 443/TCP 40m internal-ingress ingress-nginx-controller LoadBalancer 10.245.173.35 2.2.2.2 80:32567/TCP,443:30296/TCP 3h25m internal-ingress ingress-nginx-controller-admission ClusterIP 10.245.39.250 &lt;none&gt; 443/TCP 3h25m </code></pre> <p>I want to use two of ingress but it doesn't work as i think. I try to manage with &quot; ingressClassName: &quot; but it's working unexpended.</p> <pre><code>$ kubectl get ing -w NAME CLASS HOSTS ADDRESS PORTS AGE external-ingress nginx test.io 1.1.1.1 80, 443 3h4m internal-ingress internal-ingress admin.test.io 1.1.1.1 80 3h4m external-ingress nginx test.io 2.2.2.2 80, 443 3h5m external-ingress nginx test.io 3.3.3.3 80, 443 3h5m external-ingress nginx test.io 1.1.1.1 80, 443 3h5m external-ingress nginx test.io 2.2.2.2 80, 443 3h6m external-ingress nginx test.io 3.3.3.3 80, 443 3h6m external-ingress nginx test.io 1.1.1.1 80, 443 3h6m external-ingress nginx test.io 2.2.2.2 80, 443 3h7m external-ingress nginx test.io 3.3.3.3 80, 443 3h7m external-ingress nginx test.io 1.1.1.1 80, 443 3h7m external-ingress nginx test.io 2.2.2.2 80, 443 3h8m external-ingress nginx test.io 3.3.3.3 80, 443 3h8m </code></pre> <p>When I changed the &quot;ingressClassName: nginx&quot; External IP constantly changing. When I changed &quot;ingressClassName: external-ingress&quot; IP sometimes changing, sometimes not changing.. and when it's chaning it's not work..</p> <p>original deployment file</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <p>I create a external-ingress and internal-ingress with this file changing &quot;ingress-nginx&quot; line to &quot;internal-nginx&quot; and &quot;external-nginx&quot;.</p> <p>Why is the ip constantly changing?</p>
<p>If the ingress resource is regularly changing external IPs, this indicates you have more than one ingress controller in the cluster and have not specified an ingress class on the ingress resource.</p> <p>As such, all ingress controllers in the cluster (which have not been restricted to namespace) will try to &quot;fight&quot; for management of the traffic going through that ingress, each time the winning controller will set the IP of the ingress to their own loadbalancer IP.</p> <p>This is why you're seeing the IP of your ingress change to <code>1.1.1.1</code>, <code>2.2.2.2</code>, and <code>3.3.3.3</code> because these are the ingress controllers you have installed in your cluster, as per your snippet:</p> <pre><code>$ kubectl get svc -A external-ingress ingress-nginx-controller LoadBalancer 10.245.57.76 3.3.3.3 80:32112/TCP,443:31761/TCP 3h24m external-ingress ingress-nginx-controller-admission ClusterIP 10.245.28.35 &lt;none&gt; 443/TCP 3h24m ingress-nginx ingress-nginx-controller LoadBalancer 10.245.12.12 1.1.1.1 80:31716/TCP,443:32023/TCP 40m ingress-nginx ingress-nginx-controller-admission ClusterIP 10.245.110.233 &lt;none&gt; 443/TCP 40m internal-ingress ingress-nginx-controller LoadBalancer 10.245.173.35 2.2.2.2 80:32567/TCP,443:30296/TCP 3h25m internal-ingress ingress-nginx-controller-admission ClusterIP 10.245.39.250 &lt;none&gt; 443/TCP 3h25m </code></pre> <p>You can fix this by ensuring all your ingress controllers are set to only watch for a specific ingress class and/or restricting the controllers to only manage a specific namespace, and making sure your ingress resource specifies an ingress class by the ingress-class annotation and/or the <code>ingressClass</code> block.</p>
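<p>A hedged sketch of how to pin one of the controller copies to its own class (shown for the &quot;external&quot; copy; the names are examples). The flags go into the args of that copy's <code>ingress-nginx-controller</code> Deployment, and your Ingress then selects the class explicitly:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-nginx
spec:
  controller: k8s.io/external-nginx

# in the external copy's controller Deployment args:
#   - --ingress-class=external-nginx
#   - --controller-class=k8s.io/external-nginx

# and in the Ingress resource that should use it:
# spec:
#   ingressClassName: external-nginx
</code></pre> <p>With each controller watching only its own class, and each Ingress naming exactly one class, the controllers stop overwriting each other's status IP.</p>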
<p>I'm currently trying to setup heketi on kubernetes, i need to create an endpoint like so (i'm using Ansible):</p> <pre><code>- hosts: 'masters' remote_user: kube become: yes become_user: kube vars: ansible_python_interpreter: /usr/bin/python3 tasks: - name: &quot;Create gluster endpoints on kubernetes master&quot; kubernetes.core.k8s: state: present definition: apiVersion: v1 kind: Endpoints metadata: name: glusterfs-cluster labels: storage.k8s.io/name: glusterfs storage.k8s.io/part-of: mine storage.k8s.io/created-by: username subsets: - addresses: - ip: 10.0.0.4 hostname: gluster1 - ip: 10.0.0.5 hostname: gluster2 - ip: 10.0.0.6 hostname: gluster3 - ip: 10.0.0.7 hostname: gluster4 ports: - port: 1 </code></pre> <p>When i run ansible playbook on this i am getting this error:</p> <blockquote> <p>Failed to create object: Namespace is required for v1.Endpoints</p> </blockquote> <p>I can't find any information as to what it's talking about, what is the namespace supposed to be?</p>
<p>An <code>Endpoints</code> resource (like a <code>Pod</code>, <code>Service</code>, <code>Deployment</code>, etc) is a <em>namespaced</em> resource: it cannot be created globally; it must be created inside a specific <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a>.</p> <p>We can't answer the question, &quot;what is the namespace supposed to be?&quot;, because generally this will be something like &quot;the same namespace as the resources that will rely on this <code>Endpoints</code> resource&quot;.</p>
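<p>As a minimal sketch of the fix, adding a namespace to the task from the question (here <code>default</code> is only an example; use whichever namespace the consumers of this endpoint live in, and note that <code>kubernetes.core.k8s</code> also accepts a top-level <code>namespace:</code> parameter instead):</p> <pre><code>- name: &quot;Create gluster endpoints on kubernetes master&quot;
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Endpoints
      metadata:
        name: glusterfs-cluster
        namespace: default
        labels:
          storage.k8s.io/name: glusterfs
      subsets:
        - addresses:
            - ip: 10.0.0.4
              hostname: gluster1
            # ...remaining addresses as in the question
          ports:
            - port: 1
</code></pre>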
<p>Is it possible to create a Kubernetes cluster admin without the ability to modify/read certain namespace and its content?</p> <p>I am talking about subtracting certain permissions from existing role.</p> <p>thanks.</p>
<p>To get the behavior you want, you would need a <a href="https://en.wikipedia.org/wiki/Complement_(set_theory)" rel="nofollow noreferrer">set subtraction</a> of the cluster-admin role minus the rules that you have defined. <a href="https://github.com/kubernetes/kubernetes/issues/70387" rel="nofollow noreferrer">It's not supported in K8s as of this writing.</a></p> <p>If you need a custom role with fewer permissions than a predefined role, it is clearer to list the permissions you do want rather than the inverse of those permissions.</p>
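<p>As a minimal sketch of that approach (the API groups, resources and verbs below are placeholders; list whatever your admins actually need, and simply leave out what you want to keep off-limits):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: almost-admin
rules:
  # explicitly list what is allowed instead of subtracting from cluster-admin
  - apiGroups: [&quot;&quot;, &quot;apps&quot;, &quot;batch&quot;]
    resources: [&quot;pods&quot;, &quot;deployments&quot;, &quot;jobs&quot;, &quot;services&quot;, &quot;configmaps&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;]
</code></pre> <p>Bind it with a <code>ClusterRoleBinding</code>, or with per-namespace <code>RoleBinding</code>s for only the namespaces the user may touch; since RBAC is purely additive, anything not listed is denied by default.</p>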
<p>Let say there's a deployment named my-deployment which is consisted of 3 pods, now we use port-forward forward a local port to this deployment :</p> <pre><code>kubectl port-forward deployment/my-deployment 8888 9999 </code></pre> <p>My question is : when I visit localhost:8888 serveral times then which pod would be forwarded to ? Always forward to a fixed pod(like first pod) ? Or forward it by random ? Or use round-roubin strategy ?</p>
<p><code>when I visit localhost:8888 serveral times then which pod would be forwarded to ?</code></p> <p>It will forward to the first pod, sorted by name.</p> <p><code>Always forward to a fixed pod(like first pod) ?</code></p> <p>Fixed.</p> <p><code>Or forward it by random ? Or use round-roubin strategy ?</code></p> <p>Neither; it stays fixed to the first pod sorted by name.</p> <p>Assuming you have run a port-forward command and curled it successfully: if you now scale the deployment to 0 and then back up, the next curl will return an error. This is because the pod that the port was forwarded to was terminated during the scale to 0, and the forward does not move to another pod.</p>
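<p>A quick way to see this for yourself (the deployment and label names below are just examples matching the question):</p> <pre><code># the pod kubectl picks is the first one in name order
kubectl get pods -l app=my-deployment --sort-by=.metadata.name

# forward local 8888/9999 to that single pod
kubectl port-forward deployment/my-deployment 8888 9999

# in another shell: every request hits the same pod
curl localhost:8888
</code></pre>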
<p>I'm using Kubernetes on the DigitalOcean cloud platform.</p> <p>The dockerized Laravel app is running right now,</p> <p>and I need to run <code>php artisan optimize:clear</code> on all pods which are running the app.</p> <p>How can I do that?</p> <p><strong>Edit:</strong></p> <p>I got the solution.</p> <p>First step: connect to the pod's container</p> <pre><code>kubectl exec -it pod_name -- /bin/bash
</code></pre> <p>then</p> <pre><code>php artisan optimize:clear
</code></pre> <p><strong>or</strong></p> <pre><code>kubectl exec -it pod_name -- /bin/bash -c &quot;php artisan route:clear&quot;
</code></pre>
<p>You can build a bash file like the below:</p> <pre><code>#!/bin/bash PODS=$(kubectl get pods -n my-production | grep php-fpm | awk -F' *|/' '$2 == $3 &amp;&amp; $4 == &quot;Running&quot;' | awk {'print $1'} | column -t ) for POD in $PODS do echo $POD kubectl -n my-production exec $POD -c php -- php artisan optimize:clear done echo &quot;Done!&quot; </code></pre>
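<p>If your pods carry a label (here I'm assuming something like <code>app=php-fpm</code>; adjust it to whatever your deployment actually sets), the same loop can be written without parsing the <code>kubectl get pods</code> table output:</p> <pre><code>for pod in $(kubectl -n my-production get pods -l app=php-fpm -o name); do
  echo &quot;$pod&quot;
  kubectl -n my-production exec &quot;$pod&quot; -c php -- php artisan optimize:clear
done
</code></pre>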
<p>Is the service affected when all masters are stopped?</p> <p>OpenShift 4</p> <ul> <li>Infra Node 3</li> <li>Master Node 3</li> <li>Worker Node 3</li> </ul> <p>β€» Router pods are in the Infra Node.</p> <p>The work request is as follows.</p> <ul> <li><p>frontend(DC) -&gt; api(DC)</p> <ul> <li>Internet -&gt; Infra Node(Router) -&gt; SDN -&gt; Worker Node</li> </ul> </li> <li><p>www.frontend.test.com(443) - Route</p> <ul> <li>api REST Call (HttpUrlConnection or HttpClient 4.x, api:8080 or api.test1.svc.cluster.local:8080)</li> </ul> </li> </ul> <p>When all master nodes are stopped, the front-end always succeeds. However, the API call fails intermittently.</p> <p>Slow Hang or UnknownHostException Message.</p> <p>It is ok if there is at least one master node.</p> <pre><code>GET http://api.test1.svc.cluster.local:8080/ : java.net.UnknownHostException: api.test1.svc.cluster.local or GET http://api:8080/ : java.net.UnknownHostException: api </code></pre> <p>When slow Message.</p> <pre><code>&quot;http-nio-8080-exec-2&quot; #33 daemon prio=5 os_prio=0 cpu=6.89ms elapsed=452.97s tid=0x00007fdb88f15800 nid=0xb3 runnable [0x00007fdb34fc2000] java.lang.Thread.State: RUNNABLE at java.net.Inet6AddressImpl.lookupAllHostAddr([email protected]/Native Method) at java.net.InetAddress$PlatformNameService.lookupAllHostAddr([email protected]/InetAddress.java:929) at java.net.InetAddress.getAddressesFromNameService([email protected]/InetAddress.java:1519) at java.net.InetAddress$NameServiceAddresses.get([email protected]/InetAddress.java:848) - locked &lt;0x0000000782ff5be0&gt; (a java.net.InetAddress$NameServiceAddresses) at java.net.InetAddress.getAllByName0([email protected]/InetAddress.java:1509) at java.net.InetAddress.getAllByName([email protected]/InetAddress.java:1368) at java.net.InetAddress.getAllByName([email protected]/InetAddress.java:1302) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at </code></pre> <p>thank you.</p>
<p>Yes, it is affected. Here's what's happening when you stop all the master nodes.</p> <ol> <li>The incoming traffic to the DC egress is being forwarded to the <code>Ingress</code> component of your Kubernetes/Openshift cluster. (Your <code>front-end</code>).</li> <li>This is succeeding because the name resolution of <code>Ingress</code> is responsibility of your infrastructure, since <code>Ingress</code> interfacing is external to <code>Openshift</code>.</li> <li>Once the traffic reaches the <code>Ingress</code> (<code>front-end</code> successfully reached), it needs to be forwarded now, to the <code>backend</code> service depending on the path in the request.</li> <li>This cannot be done, since <code>Ingress</code> objects, by design, dynamically resolve <code>Service</code> DNS names into IP addresses in order to reach them inside the cluster. This is done, so that when services go down and come up and the IP address changes, <code>Ingress</code> doesn't need to be reconfigured since the DNS name stays consistent.</li> <li>Here, the resolution fails, because your DNS system (probably <code>core-dns</code>) is supposed to be running on the <code>master</code> nodes, which it isn't, and this leads to <code>unresolved name</code> behavior.</li> <li>Sometimes, it is possible that the <code>Ingress</code> has a local resolver cache entry which is valid and the request makes it to the <code>Service</code> and it gets a response. But this is highly unstable, since it is possible that this cache has been set with a auto-clean timeout and those entries are automatically purged after a while.</li> </ol>
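<p>You can verify that this is what is happening by checking where the cluster DNS pods are scheduled and whether any of them are still Ready while the masters are down (the namespace name matches a standard OpenShift 4 install, and the in-pod lookup assumes your frontend image ships <code>nslookup</code>):</p> <pre><code># where do the cluster DNS pods run, and are they still Ready?
oc get pods -n openshift-dns -o wide

# from inside a frontend pod: does service DNS still resolve?
oc rsh &lt;frontend-pod&gt; nslookup api.test1.svc.cluster.local
</code></pre> <p>If the lookup fails intermittently there, you will see exactly the <code>UnknownHostException</code> / slow-lookup behaviour described in the question.</p>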
<p>I built a service that utilizes docker pods to process data. The time it takes varies from as little as 15 minutes to as much as 1 hour.</p> <p>My applications captures SIGTERM to ensure a graceful shutdown takes place when demand drops while Pods and Nodes are decommissioned.</p> <p>In each docker image I placed code to report back if it shutdown because it completed the work and if a SIGTERM event took place and thus completed its processing and terminated.</p> <p>My system is deployed in AWS using EKS. I use EKS to manage node deployment when demand goes up and spindown nodes when demand drops. I use KEDA to manage POD deployment which is what helps trigger whether additional nodes are needed or not. In KEDA I have the cooldownPeriod defined for 2 hours the maximum I expect a pod to take even though the max it would ever take is 1 hour.</p> <p>In AWS EKS, I have defined the terminationGracePeriodSeconds for 2 hours as well.</p> <p>I isolated the issue during Node scale down that when nodes are being terminated, the terminationGracePeriodSeconds is not being honored and my Pods are being shutdown within ~30 minutes. Because the Pods are abruptly removed I am unable to look at their logs to see what happened.</p> <p>I tried to simulate this issue by issuing a kubernetes node drain and kept my pod running</p> <pre><code>kubectl drain &lt;MY NODE&gt; </code></pre> <p>I saw the SIGTERM come through, and I also noticed that the pod was only terminated after 2 hours and not before.</p> <p>So for a brief minute I thought maybe I did not configure the terminationGracePeriod properly, so I checked:</p> <pre><code>kubectl get deployment test-mypod -o yaml|grep terminationGracePeriodSeconds terminationGracePeriodSeconds: 7200 </code></pre> <p>I even redeployed the config but that made no difference.</p> <p>However, I was able to reproduce the issue by modifying the desiredSize of the Node group. I can reproduce it programmatically in Python by doing this:</p> <pre><code> resp = self.eks_client.update_nodegroup_config(clusterName=EKS_CLUSTER_NAME, nodegroupName=EKS_NODE_GROUP_NAME, scalingConfig={'desiredSize': configured_desired_size}) </code></pre> <p>or by simply going to AWS console and modifying the desiredSize there.</p> <p>I see EKS choosing a node and if it happens that there is a pod processing data that will take about an hour, the pod is sometimes prematurely terminated.</p> <p>I have logged on to that node that is being scaled down and found no evidence of the prematurely terminated Pod in the logs.</p> <p>I was able to capture this information once</p> <pre><code>kubectl get events | grep test-mypod-b8dfc4665-zp87t 54m Normal Pulling pod/test-mypod-b8dfc4665-zp87t Pulling image ... 54m Normal Pulled pod/test-mypod-b8dfc4665-zp87t Successfully pulled image ... 
54m Normal Created pod/test-mypod-b8dfc4665-zp87t Created container mypod 54m Normal Started pod/test-mypod-b8dfc4665-zp87t Started container mypod 23m Normal ScaleDown pod/test-mypod-b8dfc4665-zp87t deleting pod for node scale down 23m Normal Killing pod/test-mypod-b8dfc4665-zp87t Stopping container mypod 13m Warning FailedKillPod pod/test-po-b8dfc4665-zp87t error killing pod: failed to &quot;KillContainer&quot; for &quot;mypod&quot; with KillContainerError: &quot;rpc error: code = Unknown desc = operation timeout: context deadline exceeded&quot; </code></pre> <p>I once saw a pod removed for no reason as such when scaledown was disabled but it decided to remove my pod:</p> <pre><code>kubectl get events | grep test-mypod-b8dfc4665-vxqhv 45m Normal Pulling pod/test-mypod-b8dfc4665-vxqhv Pulling image ... 45m Normal Pulled pod/test-mypod-b8dfc4665-vxqhv Successfully pulled image ... 45m Normal Created pod/test-mypod-b8dfc4665-vxqhv Created container mypod 45m Normal Started pod/test-mypod-b8dfc4665-vxqhv Started container mypod 40m Normal Killing pod/test-mypod-b8dfc4665-vxqhv Stopping container mypod </code></pre> <p>This is the kuberenets version I have</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;18&quot;, GitVersion:&quot;v1.18.0&quot; GitCommit:&quot;9e991415386e4cf155a24b1da15becaa390438d8&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-03-25T14:58:59Z&quot;, GoVersion:&quot;go1.13.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;18+&quot;, GitVersion:&quot;v1.18.20-eks-8c49e2&quot;, GitCommit:&quot;8c49e2efc3cfbb7788a58025e679787daed22018&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-10-17T05:13:46Z&quot;, GoVersion:&quot;go1.13.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>To minimize this issue, I deployed a Pod Disruption Budget during peak hours to block scale down and in the evening during low demand I remove the PDB which initiates the scaledown. However, that is not the right solution and even during low peak there are still pods that get stopped prematurely.</p>
<p>We faced the same issue with AWS EKS and cluster-autoscaler - nodes were unexpectedly shut down, no preventive actions were working, and even the <strong>node</strong> annotation <code>cluster-autoscaler.kubernetes.io/scale-down-disabled=true</code> did not make any difference.</p> <p>After two days of troubleshooting, we found the reason - it was because we use Multiple Availability Zones in the ASG configuration, which has an automatic &quot;AZRebalance&quot; process. AZRebalance tries to ensure that the number of nodes is approximately the same across all availability zones. Therefore, sometimes when a scale-up event occurs, it tries to rebalance nodes by killing one node and creating another in a different availability zone. The message in the events log looks like this:</p> <p><a href="https://i.stack.imgur.com/EKvJS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EKvJS.png" alt="enter image description here" /></a></p> <p>Cluster-autoscaler does not control this process, so there are two systems (cluster-autoscaler and the AWS ASG) managing the number of nodes simultaneously, which leads to unexpected behavior.</p> <p>As a workaround, we suspended the &quot;AZRebalance&quot; process in the ASG. <a href="https://i.stack.imgur.com/h4h5a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4h5a.png" alt="enter image description here" /></a></p> <p>Another solution would be to use a separate ASG per availability zone and enable the <code>--balance-similar-node-groups</code> <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler" rel="nofollow noreferrer">feature</a> in the cluster-autoscaler.</p> <p><a href="https://faun.pub/aws-eks-and-cluster-autoscaler-gotchas-97efa48c01c4" rel="nofollow noreferrer">Here's</a> the article about that and <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler" rel="nofollow noreferrer">here's</a> the cluster-autoscaler documentation.</p>
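<p>If it helps, suspending AZRebalance can also be done from the CLI (the ASG name below is a placeholder):</p> <pre><code>aws autoscaling suspend-processes \
  --auto-scaling-group-name my-eks-node-group-asg \
  --scaling-processes AZRebalance
</code></pre>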
<p>I'm trying to migrate Several spring boot services to EKS and they can't retrieve aws credentials from credentials chain and pods are failing with following error: <code>Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain</code></p> <p>These are what I've tried so far:</p> <p>I'm using <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html" rel="nofollow noreferrer">Web identity token from AWS STS</a> for credentials retrieval.</p> <pre><code>@Bean public AWSCredentialsProvider awsCredentialsProvider() { if (System.getenv(&quot;AWS_WEB_IDENTITY_TOKEN_FILE&quot;) != null) { return WebIdentityTokenCredentialsProvider.builder().build(); } return new DefaultAWSCredentialsProviderChain(); } @Bean public SqsClient sqsClient(AWSCredentialsProvider awsCredentialsProvider) { return SqsClient .builder() .credentialsProvider(() -&gt; (AwsCredentials) awsCredentialsProvider.getCredentials()) .region(Region.EU_WEST_1).build(); } @Bean public SnsClient snsClient(AWSCredentialsProvider awsCredentialsProvider) { return SnsClient .builder() .credentialsProvider(() -&gt; (AwsCredentials) awsCredentialsProvider.getCredentials()) .region(Region.EU_WEST_1).build(); } </code></pre> <p>The services also have <code>aws-java-sdk-sts</code> maven dependency packaged.</p> <p>IAM role for the services is also fine and <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> is a also automatically created within pod after each Jenkins build based on K8s manifest file.</p> <p>From pod I can make GET and POST request to SNS and SQS without any problem.</p>
<p>Problem was fixed.</p> <p>Main issue was conflicting AWS SDK BOM version with individual models. Also previous version of BOM I was using wasn't supporting AWS SDK v2.x .</p> <p>These are the main take aways from the issue:</p> <ol> <li><p>AWS SDK authenticate services using <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html" rel="nofollow noreferrer">credentials provider chain</a> . The <a href="https://sdk.amazonaws.com/java/api/latest/index.html?software/amazon/awssdk/auth/credentials/DefaultCredentialsProvider.html" rel="nofollow noreferrer">default credential provider</a> chain of the AWS SDK for Java 2.x searches for credentials in your environment using a predefined sequence.</p> <p>1.1 As of <a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html" rel="nofollow noreferrer">AWS SDK for Java 2.x</a> Web identity token from AWS STS is within default provider chain.</p> <p>1.2 As long as using v2 of the SDK and having the STS dependency makes explicit configuration of Web identity token redundant.</p> <p>1.3 Make sure candidate service is using AWS SDK v2 as it’ll reduce the configuration code to minimum.</p> </li> </ol> <p>If a candidate service using AWS SDK v1 following configuration should be added as Web identity token isn’t in default provider chain for v1.</p> <pre><code>@Bean public AWSCredentialsProvider awsCredentialsProvider() { if (System.getenv(&quot;AWS_WEB_IDENTITY_TOKEN_FILE&quot;) != null) { return WebIdentityTokenCredentialsProvider.builder().build(); } return new DefaultAWSCredentialsProviderChain(); } </code></pre> <p>Last but not least try to use try to use latest <a href="https://mvnrepository.com/artifact/software.amazon.awssdk/bom" rel="nofollow noreferrer">AWS SDK BOM</a> dependency . <a href="https://github.com/aws/aws-sdk-java-v2" rel="nofollow noreferrer">(currently all modules have the same version, but this may not always be the case)</a></p>
<p>I'm using the latest version of google-kubernetes (1.22.8-gke.202) in a Kubernetes managed cluster. I also have a custom service account that has access to the &quot;Artifact Registry Reader&quot; scope that should grant it permission to pull private images from the repository - calling this <em>custom-service-account</em>.</p> <p>I've validated that the nodes themselves have the <em>custom-service-account</em> service account linked to them within Compute Engine. Kubernetes is setup with a service account that is linked to the IAM service account with the same name through <a href="https://www.cobalt.io/blog/google-cloud-workload-identity-with-kubernetes-and-terraform" rel="nofollow noreferrer">workload identity</a>. However, when I try to spawn a pod that pulls from my private repo it fails indefinitely.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 21m (x3 over 24m) default-scheduler 0/2 nodes are available: 2 node(s) were unschedulable. Warning FailedScheduling 19m default-scheduler no nodes available to schedule pods Normal NotTriggerScaleUp 18m (x25 over 24m) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had taint {reserved-pool: true}, that the pod didn't tolerate Normal Scheduled 18m default-scheduler Successfully assigned default/test-service-a-deployment-5757fc5797-b54gx to gke-personal-XXXX--personal-XXXX--ac9a05b6-16sb Normal Pulling 17m (x4 over 18m) kubelet Pulling image &quot;us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest&quot; Warning Failed 17m (x4 over 18m) kubelet Failed to pull image &quot;us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest&quot;: failed to resolve reference &quot;us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest&quot;: failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Warning Failed 17m (x4 over 18m) kubelet Error: ErrImagePull Warning Failed 16m (x6 over 18m) kubelet Error: ImagePullBackOff Normal BackOff 3m27s (x65 over 18m) kubelet Back-off pulling image &quot;us-central1-docker.pkg.dev/personal-XXXX/my-test-repo/my-test-repo-business-logic:latest&quot; </code></pre> <p>I've also ssh'ed into the nodes themselves and at least by default with a regular <code>docker pull</code> or <code>crictl pull</code> see this same error.</p> <p>So, the specific questions I have:</p> <ul> <li>How is GCP injecting the service account credentials into Kubernetes/Docker worker that tries to launch the images? 
Is it expected that the regular docker command doesn't seem to have these credentials?</li> <li>Do I need to manually bootstrap some additional authentication for Kubernetes aside from just inheriting the service account on the pods?</li> </ul> <p>EDIT: Result of <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#troubleshooting" rel="nofollow noreferrer">here</a></p> <pre><code>&gt; gcloud container clusters describe personal-XXXX-gke --zone us-central1-a --format=&quot;value(workloadIdentityConfig.workloadPool)&quot; personal-XXXX.svc.id.goog &gt; gcloud container node-pools describe personal-XXXX-gke-node-pool --cluster personal-XXXX-gke --format=&quot;value(config.workloadMetadataConfig.mode)&quot; --zone us-central1-a GKE_METADATA &gt; kubectl describe serviceaccount --namespace default be-service-account Name: be-service-account Namespace: default Labels: &lt;none&gt; Annotations: iam.gke.io/gcp-service-account: [email protected] Image pull secrets: &lt;none&gt; Mountable secrets: be-service-account-token-jmss9 Tokens: be-service-account-token-jmss9 Events: &lt;none&gt; &gt; gcloud iam service-accounts get-iam-policy [email protected] bindings: - members: - serviceAccount:personal-XXXX.svc.id.goog[default/be-service-account] role: roles/iam.workloadIdentityUser etag: BwXjqJ9DC6A= version: 1 </code></pre>
<p>When checking for access to Artifact Registry, verify both the IAM permissions of the service account your nodes/workload use and the access scopes of the node pool, as described in this <a href="https://cloud.google.com/artifact-registry/docs/access-control#gke" rel="nofollow noreferrer">documentation</a>.</p>
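<p>As a rough sketch of what that check/fix can look like (the project and service-account names below are reconstructed from your snippets; substitute the real ones):</p> <pre><code># confirm the Artifact Registry role is actually bound to the service account
gcloud projects get-iam-policy personal-XXXX \
  --flatten=&quot;bindings[].members&quot; \
  --filter=&quot;bindings.members:gke-sa@personal-XXXX.iam.gserviceaccount.com&quot; \
  --format=&quot;value(bindings.role)&quot;

# grant it if it is missing
gcloud projects add-iam-policy-binding personal-XXXX \
  --member=&quot;serviceAccount:gke-sa@personal-XXXX.iam.gserviceaccount.com&quot; \
  --role=&quot;roles/artifactregistry.reader&quot;
</code></pre> <p>Also note that the kubelet fetches registry credentials itself (via the built-in GCP credential provider / Workload Identity), so a plain <code>docker pull</code> or <code>crictl pull</code> run manually on the node won't necessarily have the same credentials; that failure on its own is not conclusive.</p>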
<p>I'm starting in Kubernetes and I'm trying to update the Image in DockerHub that is used for the Kubernetes's Pod creation and then with <code>kubectl rollout restart deployment deploymentName</code> command it should pull the newest image and rebuild the pods. The problem I'm facing is that it only works when I specify a version in the tag both in the image and the deployment.yaml` file.</p> <p>In my repo I have 2 images <code>fixit-server:latest</code> and <code>fixit-server:0.0.2</code> (the actual latest one).</p> <p>With <code>deployment.yaml</code> file set as</p> <pre><code>spec: containers: - name: fixit-server-container image: vinnytwice/fixit-server # imagePullPolicy: Never resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; </code></pre> <p>I run <code>kubectl apply -f infrastructure/k8s/server-deployment.yaml</code> and it gets created, but when running <code>kubectl get pods</code> I get</p> <pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods NAME READY STATUS RESTARTS AGE fixit-server-5c7bfbc5b7-cgk24 0/1 ErrImagePull 0 7s fixit-server-5c7bfbc5b7-g7f8x 0/1 ErrImagePull 0 7s </code></pre> <p>I then instead specify the version number in the <code>deployment.yaml</code> file</p> <pre><code> spec: containers: - name: fixit-server-container image: vinnytwice/fixit-server:0.0.2 # imagePullPolicy: Never resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; </code></pre> <p>run again <code>kubectl apply -f infrastructure/k8s/server-deployment.yaml</code> and get <code>configured</code> as expected. Running <code>kubectl rollout restart deployment fixit-server</code> I get <code>restarted</code> as expected. But still running <code>kubectl get pods</code> shows</p> <pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods NAME READY STATUS RESTARTS AGE fixit-server-5c7bfbc5b7-cgk24 0/1 ImagePullBackOff 0 12m fixit-server-5d78f8848c-bbxzx 0/1 ImagePullBackOff 0 2m58s fixit-server-66cb98855c-mg2jn 0/1 ImagePullBackOff 0 74s </code></pre> <p>So I deleted the deployment and applied it again and pods are now running correctly.</p> <p>Why when omitting a version number for the image to use ( which should imply :latest) the <code>:latest</code> tagged image doesn't get pulled from the repo? What's the correct way of using the <code>:latest</code> tagged image? Thank you very much. Cheers</p> <p>repo: <a href="https://i.stack.imgur.com/Cz4Xz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cz4Xz.png" alt="enter image description here" /></a></p> <p>images:</p> <pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE vinnytwice/fixit-server 0.0.2 53cac5b0a876 10 hours ago 1.3GB vinnytwice/fixit-server latest 53cac5b0a876 10 hours ago 1.3GB </code></pre>
<p>You can use <a href="https://github.com/ryandaniels/docker-script-find-latest-image-tag/blob/aed09da3813429cbe979e34a050a48c3b19a7002/docker_image_find_tag.sh" rel="nofollow noreferrer"><code>docker_image_find_tag.sh</code></a> to check if your image has a <code>latest</code> tag or not.<br /> It will show the tag/version for shows <code>image:&lt;none&gt;</code> or <code>image:latest</code>.</p> <p>That way, you can check if, that mentioned in &quot;<a href="https://komodor.com/learn/how-to-fix-errimagepull-and-imagepullbackoff/" rel="nofollow noreferrer">How to fix ErrImagePull and ImagePullBackoff</a>&quot; if this is linked to:</p> <blockquote> <ul> <li><strong>Cause</strong>: Pod specification provides an invalid tag, or fails to provide a tag</li> <li><strong>Resolution</strong>: Edit pod specification and provide the correct tag.<br /> If the image does not have a latest tag, you must provide a valid tag</li> </ul> </blockquote> <p>And:</p> <blockquote> <p>What's the correct way of using the <code>:latest</code> tagged image</p> </blockquote> <p>Ideally, by <em>not</em> using it ;) <code>latest</code> can shift at any time, and by using a <em>fixed</em> label, you ensure a better reproducibility of your deployment.</p>
<p>I have kube-prometheus-stack running on a kubernetes cluster along with prometheus-blackbox-exporter. I want to monitor multiple http targets. I have tried setting this up with an servicemonitor but when I add a target the target does not get the right labels</p> <p>The servicemonitor yaml:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: annotations: meta.helm.sh/release-name: blackbox meta.helm.sh/release-namespace: default generation: 1 labels: app.kubernetes.io/instance: blackbox app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: prometheus-blackbox-exporter app.kubernetes.io/version: 0.20.0 helm.sh/chart: prometheus-blackbox-exporter-5.7.0 release: kube-prometheus name: blackbox-prometheus-blackbox-exporter namespace: default spec: endpoints: - interval: 10s path: /probe port: http scheme: http params: module: - http_2xx target: - https://google.com - https://example.com metricRelabelings: - sourceLabels: [__address__] targetLabel: __param_target jobLabel: kube-prometheus selector: matchLabels: app.kubernetes.io/instance: blackbox app.kubernetes.io/name: prometheus-blackbox-exporter </code></pre> <p>But in grafana I only get one instance label right: <a href="https://i.stack.imgur.com/NSQxH.png" rel="nofollow noreferrer">grafana instances</a></p>
<p>spec.endpoints is a <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor" rel="nofollow noreferrer">list</a>. In yaml each lists element are defined with a <a href="https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html" rel="nofollow noreferrer">dash</a>. So you have to modify your service monitor yaml as:</p> <pre class="lang-yaml prettyprint-override"><code>spec: endpoints: - interval: 10s path: /probe port: metrics params: module: - http_2xx target: - https://www.google.com relabelings: - sourceLabels: [__param_target] targetLabel: target - interval: 10s path: /probe port: metrics params: module: - http_2xx target: - https://www.example.com relabelings: - sourceLabels: [__param_target] targetLabel: target </code></pre> <p>It works <a href="https://i.stack.imgur.com/JMxzP.png" rel="nofollow noreferrer">like it is shown here</a></p>
<p>K8S Version: 1.23</p> <p>I have an hourly CronJob with 5 inner pods. After some time, part of these pods goes to shutdown and moved to the <code>Completed</code> state (it's expected), but the rest - still works, so have a <code>Running</code> state.</p> <p>At the next hour, this CronJob will not be triggered due to the <code>Running</code> pods (it's also expected). But I need to force recreate pods with the <code>Completed</code> state if a <code>Running</code> still exists. Is it possible?</p>
<p>It appears that the reason you're letting the <code>Running</code> pods exist is that you expect those pods to take a long time before finishing. This means that the scheduling of those pods <em>should</em> be different from others, since they can take a longer time to finish.</p> <p>You can split your <code>CronJob</code> into two <code>CronJobs</code>. One of these will run every hour and will only have the pods that get <code>Completed</code>. The other will run less frequently (maybe every 2 hours?) allowing the <code>Running</code> pods to finish.</p> <p>This way, you will be able to manage your <code>cron</code> tasks separately.</p> <blockquote> <p>Note: For k8s version, we usually mention the API version which is of the form <code>v1.xx</code>. It appears you are reporting the version of a public cloud offering of k8s like <code>AKS</code>, <code>EKS</code> or <code>GKE</code>. Just FYI.</p> </blockquote>
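<p>A rough sketch of that split (the schedules, names and job specs are placeholders; <code>concurrencyPolicy: Forbid</code> reproduces the &quot;don't start while a previous run is still Running&quot; behaviour you described, but now only within each CronJob):</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: fast-tasks
spec:
  schedule: &quot;0 * * * *&quot;        # every hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: fast
              image: my-registry/fast-task:latest
          restartPolicy: Never
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: slow-tasks
spec:
  schedule: &quot;0 */2 * * *&quot;      # every 2 hours, for the long-running pods
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: slow
              image: my-registry/slow-task:latest
          restartPolicy: Never
</code></pre>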
<p>I am very new to azure kubernetes! Right now I am just playing around with it, trying to work out how to create things using kubectl. I already have my cluster set up, and I have created a couple of pods using an existing container in our registry. I also created an ingress controller using azure application gateway.</p> <p>What I want to do next is use a deployment yaml file to add a new replicated set of pods to the cluster, with a different image in there, and add the new service endpoint to the ingress controller. Remember, I am very new to this, so I am not sure if this is what you should do, or if you should create the pods/service first and then change the ingress controller? Or if you should have a separate deployment yaml for the ingress and recreate it on its own?</p> <p>Anyway, here is the deployment yaml file that I have. When I try and run it with :</p> <pre><code>kubectl apply -f deployment.yml </code></pre> <p>I just get the error :</p> <pre><code>error: error validating &quot;deployment.yml&quot;: error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>So I guess my question is, am I doing this right? And if this is the way I should be adding to my cluster, any idea what is wrong with this yaml?</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: new-kuber-endpoint namespace: kuber-ut spec: replicas: 2 selector: matchLabels: app: new-kuber-endpoint template: metadata: labels: app: new-kuber-endpoint spec: nodeSelector: kubernetes.io/os: linux containers: - name: amica-endpoint image: mycontainerreg.azurecr.io/new-endpoint:20220707.1 ports: - containerPort: 80 resources: requests: cpu: '0' memory: '0' limits: cpu: '256' memory: 11400G --- apiVersion: v1 kind: Service metadata: name: new-kuber-endpoint-service namespace: kuber-ut spec: type: LoadBalancer ports: - targetPort: 80 name: port80 port: 80 protocol: TCP selector: app: new-kuber-endpoint --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: kuberdevingress namespace: kuber-ut uid: d9d490f0-1d3a-4433-bb03-7f1ba0dc611f resourceVersion: '1490171' generation: 3 spec: rules: - http: paths: - path: /old pathType: Prefix backend: service: name: old-endpoint-service port: number: 80 - path: /new pathType: Prefix backend: service: name: new-kuber-endpoint-service port: number: 80 status: loadBalancer: ingress: - ip: &lt;IP goes here&gt; </code></pre> <p>I copied the kuberdevingress section from the azure portal, so I am not confident that is how I would recreate that ingress?</p>
<p>You can keep everything in a single YAML file or manage separate files per resource; that choice is up to you.</p> <p>If you want to update the deployment you can simply update the YAML and run</p> <pre><code>kubectl apply -f &lt;filename&gt;.yaml
</code></pre> <p>Also, there is an issue with your YAML; you can separate the resources using <code>---</code> if you don't want to manage three different files for the deployment, service, and ingress (note the Ingress API version is <code>networking.k8s.io/v1</code>):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-kuber-endpoint
  namespace: kuber-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: new-kuber-endpoint
  template:
    metadata:
      labels:
        app: new-kuber-endpoint
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: new-endpoint
          image: mycontainerregistry.azurecr.io/new-endpoint:20220707.1
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: '0'
              memory: '0'
            limits:
              cpu: '256'
              memory: 11400G
---
apiVersion: v1
kind: Service
metadata:
  name: new-kuber-endpoint-service
  namespace: kuber-test
spec:
  type: LoadBalancer
  ports:
    - targetPort: 80
      name: port80
      port: 80
      protocol: TCP
  selector:
    app: new-kuber-endpoint
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuberdevingress
  namespace: kuber-test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /old
            pathType: Prefix
            backend:
              service:
                name: old-endpoint-service
                port:
                  number: 80
          - path: /new
            pathType: Prefix
            backend:
              service:
                name: new-kuber-endpoint-service
                port:
                  number: 80
</code></pre> <p>You can now update any field in the above YAML and re-apply it to the Kubernetes cluster; it will apply the changes.</p> <p>If the change is in the service it will update the service; if it's in the ingress it will update the ingress accordingly.</p> <p>If you want to change the Docker image, update the deployment's image to a different tag and it will pull and deploy that version.</p>
<p>On AKS, I am getting a very strange error pulling an image from a public docker repository: <code>Failed to pull image &quot;jeremysv/eventstore-proxy:latest&quot;: rpc error: code = InvalidArgument desc = failed to pull and unpack image &quot;docker.io/jeremysv/eventstore-proxy:latest&quot;: unable to fetch descriptor (sha256:46e5822176985eff4826449b3e4d4de5589aa012847153a4a2e134d216b7b28a) which reports content size of zero: invalid argument</code></p> <p>I have tried deleting and recreating the AKS cluster, however the error is still there.</p> <p>Using <code>docker pull</code> on my local machine works fine.</p>
<p>First of all, have you been able to pull images from your repo to AKS in the past? If yes, what is the difference between this time and the previous successful attempt?</p> <p>If not, I looked it up and it seems to be an error that Azure is aware of. Both of these reports describe roughly the same issue as yours: <a href="https://faultbucket.ca/2022/05/aks-image-pull-failed-from-proget/" rel="nofollow noreferrer">AKS image pull failed</a> and <a href="https://learn.microsoft.com/en-us/answers/questions/653640/kubernetes-in-aks-error-while-pulling-image-from-p.html" rel="nofollow noreferrer">Kubernetes (in AKS) error while pulling image</a>, and it seems to come from:</p> <blockquote> <p>localy Content-Length for HTTP HEAD request (downloading docker image manifets) is OK (real non 0 size), but for HTTP HEAD request (downloading docker image manifets) from Internet, where network traffic is through a Azure proxy, Content-Length is set to 0 and containerd returns an error when pull docker image.</p> </blockquote> <p>So Azure is aware and working on it, but it isn't clear whether it is going to change.</p> <p>That being said, those reports were about pulling images from a private repo, while your image is public and I was able to pull it on a VM as well. So I think the problem either comes from your syntax (you have probably already checked and re-checked it, but if you want a triple check you can post it here) or from Azure proxying images coming from a repo it doesn't know.</p> <p>A simple way to overcome this that comes to mind is to host your image in another repo, in Azure Container Registry for example.</p>
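<p>If you go the ACR route, a quick way to copy the public image over is <code>az acr import</code> (the registry name below is a placeholder), after which you point the pod spec at the ACR path instead of Docker Hub:</p> <pre><code>az acr import \
  --name myregistry \
  --source docker.io/jeremysv/eventstore-proxy:latest \
  --image eventstore-proxy:latest
</code></pre>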
<p>I'm running Docker Desktop with Kubernetes.</p> <p>I can ssh to the node and I have other pods running on the node.</p> <p>However, when I apply a StatefulSet to the cluster I get:</p> <pre><code>0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. </code></pre> <p>The Stateful Set is here:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components</a></p> <pre><code>kubectl get no NAME STATUS ROLES AGE VERSION docker-desktop Ready control-plane 6d2h v1.24.1 </code></pre>
<p>If you are applying the manifest defined <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">here</a> as it is, the problem is in the below snippet, particularly with the <code>storageClassName.</code> Likely, your cluster does <strong>not</strong> have a storage class called <code>my-storage-class.</code></p> <pre><code> volumeClaimTemplates: - metadata: name: www spec: accessModes: [ &quot;ReadWriteOnce&quot; ] storageClassName: &quot;my-storage-class&quot; resources: requests: storage: 1Gi </code></pre> <p>To get the definitive error statement, you can run the following command:</p> <pre><code>kubectl describe pvc www-web-0 </code></pre> <p>you will notice something like:</p> <pre><code>storageclass.storage.k8s.io &quot;my-storage-class&quot; not found </code></pre> <p><strong>Solution:</strong></p> <p>You can run the following command to get your cluster's available <code>storage class</code> and replace it in yaml file.</p> <pre><code>kubectl get sc </code></pre> <p>Alternatively, you can delete the <code>storageClassName</code> and let the default storage class do the magic. However, for this to work, you must have a default sc present in your cluster.</p> <p>If you have no storage class present, you need to create one. Check <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">this</a> out.</p>
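<p>On Docker Desktop specifically, the built-in provisioner usually registers a default storage class named <code>hostpath</code> (check with <code>kubectl get sc</code>), so the volume claim template from the example can be adjusted like this, or the <code>storageClassName</code> line dropped entirely to fall back to the default:</p> <pre><code>volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ &quot;ReadWriteOnce&quot; ]
      storageClassName: &quot;hostpath&quot;   # whatever 'kubectl get sc' reports on your cluster
      resources:
        requests:
          storage: 1Gi
</code></pre>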
<p>I have deployment a postgresql stateful service in kubernetes, now mapped the <code>/bitnami/postgresql</code> to the NFS volumn like this way:</p> <pre><code>spec: volumes: - name: dshm emptyDir: medium: Memory - name: data-reddwarf-postgresql-postgresql-general persistentVolumeClaim: claimName: data-reddwarf-postgresql-postgresql-general containers: - name: reddwarf-postgresql image: docker.io/bitnami/postgresql:13.3.0-debian-10-r75 ports: - name: tcp-postgresql containerPort: 5432 protocol: TCP env: - name: BITNAMI_DEBUG value: 'false' - name: POSTGRESQL_PORT_NUMBER value: '5432' - name: POSTGRESQL_VOLUME_DIR value: /bitnami/postgresql - name: PGDATA value: /bitnami/postgresql/data - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: reddwarf-postgresql key: postgresql-password - name: POSTGRESQL_ENABLE_LDAP value: 'no' - name: POSTGRESQL_ENABLE_TLS value: 'no' - name: POSTGRESQL_LOG_HOSTNAME value: 'false' - name: POSTGRESQL_LOG_CONNECTIONS value: 'false' - name: POSTGRESQL_LOG_DISCONNECTIONS value: 'false' - name: POSTGRESQL_PGAUDIT_LOG_CATALOG value: 'off' - name: POSTGRESQL_CLIENT_MIN_MESSAGES value: error - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES value: pgaudit resources: limits: cpu: 600m memory: 1Gi requests: cpu: 250m memory: 256Mi volumeMounts: - name: dshm mountPath: /dev/shm - name: data-reddwarf-postgresql-postgresql-general mountPath: /bitnami/postgresql </code></pre> <p>now I want to add another folder <code>/docker-entrypoint-preinitdb.d</code> mapping from deployment to the NFS PV. Am I need to add a new PVC&amp;PV for mapping the new folder? what should I do to use the same PV&amp;PVC with multiple folder so that I did not need to add extra NFS mapping? I have already tried the config like this:</p> <pre><code>volumeMounts: - name: dshm mountPath: /dev/shm - name: data-reddwarf-postgresql-postgresql-general mountPath: /bitnami/postgresql - name: data-reddwarf-postgresql-postgresql-general mountPath: /docker-entrypoint-preinitdb.d </code></pre> <p>but the NFS folder did not create the container folder <code>docker-entrypoint-preinitdb.d</code>. Seems the mapping did not success.</p>
<p>You should definitely go with a separate volume.</p> <p>Even if you managed to mount the same PV multiple times at different locations, each mount would still expose all the files and directories in the PV. If <code>/docker-entrypoint-preinitdb.d</code> only contains a bunch of scripts and no actual data, you could use a <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="nofollow noreferrer">ConfigMap</a> instead of creating a new NFS PVC.</p>
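<p>A minimal sketch of the ConfigMap approach (the ConfigMap name and script content are placeholders):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-preinit-scripts
data:
  00-prepare.sh: |
    #!/bin/sh
    echo &quot;running pre-init step&quot;
---
# fragment of the Deployment's pod spec
volumes:
  - name: preinit-scripts
    configMap:
      name: postgresql-preinit-scripts
      defaultMode: 0755
containers:
  - name: reddwarf-postgresql
    volumeMounts:
      - name: preinit-scripts
        mountPath: /docker-entrypoint-preinitdb.d
</code></pre>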
<p>I'm trying to setup a basic NATS service on my kubernetes cluster, according to their documentation, <a href="https://docs.nats.io/running-a-nats-service/nats-kubernetes" rel="nofollow noreferrer">here</a>. I executed the following code:</p> <pre><code>$ helm install test-nats nats/nats NAME: test-nats LAST DEPLOYED: Thu Jul 14 13:18:09 2022 NAMESPACE: default STATUS: deployed REVISION: 1 NOTES: You can find more information about running NATS on Kubernetes in the NATS documentation website: https://docs.nats.io/nats-on-kubernetes/nats-kubernetes NATS Box has been deployed into your cluster, you can now use the NATS tools within the container as follows: kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l nats-box:~# nats-sub test &amp; nats-box:~# nats-pub test hi nats-box:~# nc test-nats 4222 Thanks for using NATS! $ kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l _ _ _ __ __ _| |_ ___ | |__ _____ __ | '_ \ / _` | __/ __|_____| '_ \ / _ \ \/ / | | | | (_| | |_\__ \_____| |_) | (_) &gt; &lt; |_| |_|\__,_|\__|___/ |_.__/ \___/_/\_\ nats-box v0.11.0 test-nats-box-84c48d46f-j7jvt:~# </code></pre> <p>Now, so far, everything has conformed to their start guide. However, when I try to test the connection, I run into trouble:</p> <pre><code>test-nats-box-84c48d46f-j7jvt:~# nats-sub test &amp; test-nats-box-84c48d46f-j7jvt:~# /bin/sh: nats-sub: not found test-nats-box-84c48d46f-j7jvt:~# nats-pub test hi /bin/sh: nats-pub: not found </code></pre> <p>It looks like the commands weren't found but they should have been installed when I did the <code>helm install</code>. What's going on here?</p>
<p>I have reproduced the set-up on my kubernetes cluster and have successfully deployed the nats box and started a <a href="https://docs.nats.io/nats-concepts/core-nats/pubsub/pubsub_walkthrough#1.-create-subscriber-1" rel="nofollow noreferrer">client subscriber</a> program in which subscribers listen on subjects, and publishers send messages on specific subjects.</p> <p><strong>1. Create Subscriber</strong></p> <p>In a shell or command prompt session, start a client subscriber program.</p> <pre><code> nats sub &lt; subject&gt; </code></pre> <p>Here, &lt; subject &gt; is a subject to listen on. It helps to use unique and well thought-through subject strings because you need to ensure that messages reach the correct subscribers even when wildcards are used.</p> <p>For example:</p> <pre><code>nats sub msg.test </code></pre> <p>You should see the message: Listening on [msg.test].</p> <p><strong>2. Create a Publisher and publish a message</strong></p> <p>Create a NATS publisher and send a message.</p> <pre><code>nats pub &lt; subject&gt; &lt; message&gt; </code></pre> <p>Where &lt; subject&gt; is the subject name and &lt; message&gt; is the text to publish.</p> <p>For example:</p> <pre><code>nats pub msg.test nats-message-1 </code></pre> <p>You'll notice that the publisher sends the message and prints: Published [msg.test] : 'NATS MESSAGE'.</p> <p>The subscriber receives the message and prints: [#1] Received on [msg.test]: 'NATS MESSAGE'.</p> <p>Here, you have provided the wrong syntax nats-sub and nats-pub which are deprecated. Try using the above commands to give precise results.</p>
<p>I have an application with different versions. The base resource file for each version is slightly different. But the patch which needs to be applied to the base file is same. What should be the best structure to apply the same patch to different base resource and have different output files respectively.</p> <pre><code>/base1/ /app-v1 /kustomization.yaml /base2/ /app-v2 /kustomization.yaml /overlays/ /dev/ /staging/ </code></pre> <p>How should I specify the resource to make the overlay and patch in base kustomization.yaml which are same to v1 and v2 apply to the different base files?</p> <p>Ideally, use different <code>kustomize build</code> command for different base but using same patch file.</p>
<p>One way to do it would be to have a kustomization file in <code>/overlays/</code>, including patches and configurations from <code>dev/</code> and <code>staging/</code>. E.g.:</p> <pre><code>$&gt; cat ./overlays/kustomization.yaml
resources:
  - ./dev/foo.yaml
  - ./staging/bar.yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: ./dev/patch-deploy.yaml
</code></pre> <p>And include that <code>overlays</code> folder from your base1 and base2 kustomization:</p> <pre><code>$&gt; cat ./base1/kustomization.yaml
resources:
  - ./app-v1/stuff.yaml
  - ../overlays/
[...]

$&gt; cat ./base2/kustomization.yaml
resources:
  [...]
  - ../overlays/
</code></pre> <p>Then you can run a kustomize build from either base folder, and both will process the content of your overlays folder.</p>
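<p>For completeness, a sketch of what the shared JSON6902 patch file and the two builds could look like (the image tag and container index are assumptions):</p> <pre><code>$&gt; cat ./dev/patch-deploy.yaml
- op: replace
  path: /spec/template/spec/containers/0/image
  value: example/app:1.5.0

# same patch file, different bases, different outputs
$&gt; kustomize build ./base1 &gt; out-v1.yaml
$&gt; kustomize build ./base2 &gt; out-v2.yaml
</code></pre>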
<p>What is the correct way to pass auth header in auth-url for external auth? Below is my current nginx ingress config:</p> <pre><code>nginx.ingress.kubernetes.io/auth-cache-key: $remote_user$http_authorization nginx.ingress.kubernetes.io/auth-response-headers: Authorization nginx.ingress.kubernetes.io/auth-signin: mydomain/signin nginx.ingress.kubernetes.io/auth-url: mydomain/check </code></pre> <p>Traffic will first go to /signin, and after my external oauth signin, it will go to my another /redirect endpoint where I send Authorization header, and redirect back to the original url. But next time when traffic goes to original url and hits /check, it does not pass Authorization header to my /check endpoint so it fails the auth verification again. I thought nginx.ingress.kubernetes.io/auth-response-headers would pass the header for me but it did not work. Do you know how to pass header in my case?</p> <p>Many thanks!</p>
<p>Have you tried setting specifically the header inside the <code>auth-snippet</code> annotation?</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/auth-snippet: | proxy_set_header Authorization $http_authorization; </code></pre>
<p>I’m trying to set up <code>argocd</code> in EKS cluster using helm. I want to use ALB as a load balancer. The UI is reachable through <code>http://node_ip:8080</code>, but not through the ALB or the LB that was created.</p> <p>Here is my configuration:</p> <pre><code> ingress: enabled: true annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;:443}]' alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/target-type: IP alb.ingress.kubernetes.io/certificate-arn: &quot;${cert}&quot; alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10 </code></pre> <p>Here is the ingress that was created:</p> <pre><code>kubectl get ing -n argocd NAME CLASS HOSTS ADDRESS PORTS AGE argocd-server &lt;none&gt; eks-test-alb-argocd.abc.com internal-k8s-argocd-argocdse-111.222.us-east-1.elb.amazonaws.com 80 9h </code></pre> <p>Here is what i get when trying to use the DNS or the LB address:</p> <pre><code>curl internal-k8s-argocd-argocdse-111-222.us-east-1.elb.amazonaws.com &lt;html&gt; &lt;head&gt;&lt;title&gt;502 Bad Gateway&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;502 Bad Gateway&lt;/h1&gt;&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>What could be the issue?</p>
<p>Well, I have found the issue, and it was much simpler than I thought.</p> <p>Since the pods are listening on port 80, all I had to do was change the <code>backend-protocol</code> to <code>HTTP</code>. That's it.</p>
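<p>In other words, the only annotation that changed compared to the values in the question is:</p> <pre><code>alb.ingress.kubernetes.io/backend-protocol: HTTP
</code></pre>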
<p>I don't understand how replication works in Kubernetes.</p> <p>I understand that two replicas on different nodes will provide fault tolerance for the application, but I don’t understand this:</p> <p>Suppose the application is given the following resources:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: memory: &quot;1G&quot; cpu: &quot;1&quot; limits: memory: &quot;1G&quot; cpu: &quot;1&quot; </code></pre> <p>The application has two replicas. Thus, in total, 2 CPUs and 2G RAM are available for applications.</p> <p>But what happens if the application receives a request with a size of 1.75G? After all, only 1G RAM is available in one replica. Will the request be distributed among all replicas?</p> <p><strong>Answer for Harsh Manvar</strong></p> <p>Maybe you misunderstood me?</p> <p>What you explained is not entirely true.</p> <p>Here is a real, working deployment of four replicas:</p> <pre><code>$ kubectl get deployment dev-phd-graphql-server-01-master-deployment NAME READY UP-TO-DATE AVAILABLE AGE dev-phd-graphql-server-01-master-deployment 4/4 4 4 6d15h </code></pre> <pre><code>$ kubectl describe deployment dev-phd-graphql-server-01-master-deployment ... Limits: cpu: 2 memory: 4G Requests: cpu: 2 memory: 4G ... </code></pre>
<p>No, it won't get distributed; one replica will simply start and the other will stay in the <strong>pending</strong> state.</p> <p>If you describe that pending POD (replica):</p> <blockquote> <p>kubectl describe pod <strong>POD-name</strong></p> </blockquote> <p>it shows this error:</p> <pre><code>0/1 nodes available: insufficient cpu, insufficient memory
</code></pre> <p>K8s will check for the requested resources</p> <pre><code>requests:
  memory: &quot;1G&quot;
  cpu: &quot;1&quot;
</code></pre> <p>and if the minimum requested resources are available it will deploy the replica, while the other goes into the pending state.</p> <p><strong>Update</strong></p> <blockquote> <p>But what happens if the application receives a request with a size of 1.75G? After all, only 1G RAM is available in one replica.</p> </blockquote> <pre><code>requests:
  memory: &quot;1G&quot;
  cpu: &quot;1&quot;
limits:
  memory: &quot;1G&quot;
  cpu: &quot;1&quot;
</code></pre> <p>If you have set a request of <strong>1 GB</strong> and the application starts using <strong>1.75 GB</strong>, the POD will be killed or restarted for hitting the limit.</p> <p>But yes, in some cases a container can exceed its memory request (though not its limit) if the Node has memory available:</p> <blockquote> <p>A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.</p> </blockquote> <p>Read more at: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit</a></p> <p>You might also like to read this: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run</a></p>
<p>I'm trying to configure email alerts through grafana and I'm unable to find the grafana.ini file. I followed <a href="https://grafana.com/docs/grafana/next/setup-grafana/installation/kubernetes/#deploy-grafana-enterprise-on-kubernetes" rel="nofollow noreferrer">this</a> page to deploy a grafana instance on my local minikube setup and it works fine.</p> <p>I'm just wondering how to access the grafana.ini file and modify it on minikube? I'm attempting to configure the smtp details in the grafana.ini file so I can send email alerts.</p>
<p>Depending on your environment, configuring Grafana using environment variables instead of editing <code>grafana.ini</code> might be an easier option.</p> <p>The environment variable pattern is the following: <code>GF_&lt;SectionName&gt;_&lt;KeyName&gt;</code></p> <p>E.g., to configure SMTP, one would set the following environment variables:</p> <pre class="lang-bash prettyprint-override"><code># This is a bash script. In real life, you would
# probably set these environment variables in a .yaml file
# (e.g. docker-compose, Kubernetes deployment schema)
export GF_SMTP_ENABLED=true
export GF_SMTP_HOST=localhost:25
export GF_SMTP_USER=
export GF_SMTP_PASSWORD=
export GF_SMTP_CERT_FILE=
export GF_SMTP_KEY_FILE=
export GF_SMTP_SKIP_VERIFY=false
export GF_SMTP_FROM_ADDRESS=admin@grafana.localhost
export GF_SMTP_FROM_NAME=grafana
export GF_SMTP_EHLO_IDENTITY=
export GF_SMTP_STARTTLS_POLICY=
</code></pre> <p><strong>Useful links</strong></p> <ul> <li><a href="https://grafana.com/docs/grafana/next/setup-grafana/configure-grafana/" rel="nofollow noreferrer">Configure Grafana</a></li> <li><a href="https://github.com/grafana/grafana/blob/main/conf/defaults.ini" rel="nofollow noreferrer">Default <code>grafana.ini</code></a></li> </ul>
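<p>Since you deployed Grafana on minikube from the Kubernetes guide, the equivalent is adding an <code>env</code> section to the Grafana container in your deployment manifest and re-applying it (the container name, host, addresses and secret below are placeholders and may differ in your setup):</p> <pre><code>spec:
  template:
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          env:
            - name: GF_SMTP_ENABLED
              value: &quot;true&quot;
            - name: GF_SMTP_HOST
              value: &quot;smtp.example.com:587&quot;
            - name: GF_SMTP_USER
              value: &quot;alerts@example.com&quot;
            - name: GF_SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-smtp
                  key: password
            - name: GF_SMTP_FROM_ADDRESS
              value: &quot;alerts@example.com&quot;
</code></pre>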
<p>I have the following data in my function $newdict</p> <pre><code> modules: module1: enabled: true module2: enabled: false module3: enabled: true, etc </code></pre> <p>What is need to do is : check if module1 enabled, then set enabled for module2 as well. What I tried in _helpers.tpl file:</p> <pre><code> {{- range $key, $value := $newdict -}} {{ if and (eq $key &quot;module1&quot;) (eq &quot;enabled&quot; &quot;true&quot;) }} {{ $_ := set $newdict $key &quot;module2&quot; (eq &quot;enabled&quot; &quot;true&quot;) }} {{ end }} {{ end }} {{ toYaml $newdict }} </code></pre> <p>helm lint doesn't show any errors, but the changes are not reflected in $newdict</p> <p>This thing is needed for deployment.yaml to deploy init container:</p> <pre><code> initContainers: {{- $mycustom := (include &quot;myfunction&quot; . ) | fromYaml}} {{- range $key, $value := $mycustom }} {{- if $value.enabled }} - name: init-{{ $key }}-myinit </code></pre> <p>So, in the end, I need to deploy init container &quot;module2&quot; only if &quot;module1&quot; is also deployed</p>
<p>Helm's general style is that values are immutable. I'd avoid the <code>set</code> function here and stick to a more functional style.</p> <p>It may help your setup that you can directly index things in dict (map) structures, or use the standard Go template <code>index</code> function. You don't need to iterate through the entire dict looking for a key. So if you're sure the dictionary has keys <code>module1</code> and <code>module2</code> then you can simplify the final logic to:</p> <pre class="lang-yaml prettyprint-override"><code>initContainers:
{{- if or .Values.modules.module1.enabled .Values.modules.module2.enabled }}
  - name: init-module2-myinit
    ...
{{- end }}
</code></pre> <p>Your setup hints at having more init containers; <code>module1</code>, <code>module2</code>, and <code>module3</code> would each have their own init containers, but you also want to emit <code>module2</code>'s whenever <code>module1</code> is turned on regardless of the <code>module2</code> setting. One way to approach this might be to write a helper function that decides whether a given module is enabled:</p> <pre class="lang-none prettyprint-override"><code>{{/* Decide whether some module is enabled. Call with a list of two items,
     the values structure to examine and the specific module key.
     Returns either the string &quot;true&quot; or an empty string. */}}
{{- define &quot;is-module-enabled&quot; -}}
{{- $modules := index . 0 -}}
{{- $key := index . 1 -}}
{{- if index $modules $key &quot;enabled&quot; -}}
true
{{- else if and (eq $key &quot;module2&quot;) $modules.module1.enabled -}}
true
{{- end -}}
{{- end -}}
</code></pre> <pre class="lang-yaml prettyprint-override"><code>initContainers:
{{- range $key, $_ := .Values.modules }}
{{- if include &quot;is-module-enabled&quot; (list $.Values.modules $key) }}
  - name: init-{{ $key }}-myinit
    ...
{{- end }}
{{- end }}
</code></pre> <p>(Note the two-variable form of <code>range</code>: ranging a map with a single variable yields the values, not the keys. Also note the <code>$.Values</code> prefix, since <code>.</code> is rebound inside the loop.)</p>
<p>I have a Deployment that runs one container with a root user. It's perfectly clear that everything my application saves under <code>/data</code> will be ephemeral if I don't have any type of persistent volume and <code>volumeMounts</code> defined.</p> <p>My question is what would be the purpose of specifying an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> Volume and mounting it to <code>/data</code>? If I don't use <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#emptydirvolumesource-v1-core" rel="nofollow noreferrer">fancy features</a> for emptyDir like <code>medium</code> or <code>sizeLimit</code> isn't it the same as using no volumes at all?</p>
<p>One of the main features of an emptyDir is that it can be mounted by multiple containers inside the same Pod, so they can share files. If you don't use that feature, there is not much to gain from it. Also, if I remember correctly, the contents of an emptyDir survive container restarts within the Pod, but not the Pod being deleted or re-scheduled.</p>
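<p>For illustration, a minimal sketch (names are made up) of two containers sharing files through an emptyDir:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-data-demo
spec:
  containers:
    - name: writer
      image: busybox
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;echo hello &gt; /data/msg &amp;&amp; sleep 3600&quot;]
      volumeMounts:
        - name: shared
          mountPath: /data
    - name: reader
      image: busybox
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 5; cat /data/msg &amp;&amp; sleep 3600&quot;]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      emptyDir: {}
</code></pre>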
<p>I'm creating kubevirt in minikube, initially kubevirt-operator.yaml fails with ImagePullBackOff. After I added secret in the yaml</p> <pre><code>imagePullSecrets: - name: regcred containers: </code></pre> <p>all my virt-operator* started to run. virt-api* pods still shows ImagePullBackOff. The error comes out as</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 27m default-scheduler Successfully assigned kubevirt/virt-api-787487d9cd-t68qf to minikube Normal Pulling 25m (x4 over 27m) kubelet Pulling image &quot;us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0&quot; Warning Failed 25m (x4 over 27m) kubelet Failed to pull image &quot;us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0&quot;: rpc error: code = Unknown desc = Error response from daemon: pull access denied for us-ashburn-1.ocir.io/xxx/virt-api, repository does not exist or may require 'docker login': denied: Anonymous users are only allowed read access on public repos Warning Failed 25m (x4 over 27m) kubelet Error: ErrImagePull Warning Failed 25m (x6 over 27m) kubelet Error: ImagePullBackOff Normal BackOff 2m26s (x106 over 27m) kubelet Back-off pulling image &quot;us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0&quot; </code></pre> <p><strong>Manually, I can pull the same image with docker login.</strong> Any help would be much appreciated. Thanks</p>
<p>This Docker image looks like it is in a private registry (Oracle's OCIR in this case), and I assume the regcred secret is not correct. Can you log in there with docker login? If so, you can create the regcred secret like this:</p>
<pre><code>$ kubectl create secret docker-registry regcred --docker-server=&lt;region-key&gt;.ocir.io --docker-username='&lt;tenancy-namespace&gt;/&lt;oci-username&gt;' --docker-password='&lt;oci-auth-token&gt;' --docker-email='&lt;email-address&gt;'
</code></pre>
<p>Also check this Oracle tutorial: <a href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html" rel="nofollow noreferrer">https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html</a></p>
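<p>One more thing worth checking: the pull secret has to be available to every pod that pulls from the registry, and the virt-api pods are created by the operator rather than by the manifest you edited. A possible workaround (a suggestion only; adjust the service account name and namespace to whatever the virt-api pods actually use) is to attach the secret to that service account:</p>
<pre><code>kubectl patch serviceaccount default -n kubevirt \
  -p '{&quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;regcred&quot;}]}'
</code></pre>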
<p>I'm looking to get the kubelet_* metrics from EKS/GKE servers.</p> <p>neither <code>metrics-server</code> or <code>kube-state-metrics</code> seem to provide them to prometheus. Information online suggests the metrics are coming from the kubelet itself, but I'm not precisely sure which software piece is usually used to provide those metrics.</p> <p>I seem to be able to do a --raw query on the node to get the information, but I'd <em>rather</em> not write my own exporter for that. :)</p>
<p>It's true that kubelet exposes <code>kubelet_*</code> metrics. By default they're available on port <code>10250</code>, path <code>/metrics</code>. Additionally, kubelet also has metrics in the <code>/metrics/cadvisor</code>, <code>/metrics/resource</code> and <code>/metrics/probes</code> endpoints.</p>
<p>I'm running a self-managed Prometheus deployment and use this config to scrape kubelet metrics:</p>
<pre><code>- job_name: 'kubelet'
  scheme: https
  # these are provided by the service account I created for Prometheus
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # map node labels to prometheus labels
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
</code></pre>
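<p>Note that the kubelet endpoints also require authorization, so the service account Prometheus runs as needs RBAC for them. A minimal sketch (assuming the usual setup; bind it to your Prometheus service account with a ClusterRoleBinding):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-kubelet
rules:
  - apiGroups: [&quot;&quot;]
    resources:
      - nodes
      - nodes/metrics
      - nodes/proxy
    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
</code></pre>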
<p>I want to use k8s job to perform some scheduled tasks, as follows:</p> <pre class="lang-yaml prettyprint-override"><code>containers: - name: container-rgs4wl image: 'alpine-curl:3.16.0' command: - |- for i in `cat domains` do http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i` curl -X POST -H 'Content-type: application/json' --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_status}\&quot;}&quot; ${SLACK_WEBHOOK_URL} done </code></pre> <p>After completion, it will prompt me the following error:</p> <pre><code>Error: failed to start container &quot;container-rgs4wl&quot;: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: &quot;for i in `cat domains`οΌ›\ndo\n http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i`\n curl -X POST -H 'Content-type: application/json' --data \&quot;{\\\&quot;text\\\&quot;:\\\&quot;$i check http status ${http_status}\\\&quot;}\&quot; ${SLACK_WEBHOOK_URL}\ndone&quot;: stat for i in `cat domains`οΌ› do http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i` curl -X POST -H 'Content-type: application/json' --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_status}\&quot;}&quot; ${SLACK_WEBHOOK_URL} done: no such file or directory: unknown </code></pre> <hr /> <p>When I specify command in <code>/bin/bash -c</code>:</p> <pre class="lang-yaml prettyprint-override"><code>containers: - name: container-ixi4bh image: 'harbor.weex.tech/public/alpine-curl:3.16.0' command: - /bin/sh - '-c' - | for i in `cat domains` do http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i` curl -X POST -H 'Content-type: application/json' --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_status}\&quot;}&quot; ${SLACK_WEBHOOK_URL} done </code></pre> <p>It prompts the following in the log:</p> <pre><code> /bin/sh: syntax error: unexpected word (expecting &quot;do&quot;) </code></pre> <blockquote> <p>When I use the docker run command to execute in a locally started container it works fine.</p> </blockquote> <pre class="lang-bash prettyprint-override"><code>$ docker run -ti --rm alpine-curl:3.16.0 sh / # for i in `cat domains` &gt; do &gt; http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i` &gt; curl -X POST -H 'Content-type: application/json' --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_st atus}\&quot;}&quot; ${SLACK_WEBHOOK_URL} &gt; done okokokokokokok/ # </code></pre> <blockquote> <p><strong>ok</strong> means normal return.</p> </blockquote> <hr /> <p>well...well...well...Maybe it's an image problem, I replaced it with <code>centos:centos7</code>:</p> <pre class="lang-yaml prettyprint-override"><code>containers: - name: container-x8ba1n image: 'centos:centos7' command: - /bin/bash - '-c' - | for i in `cat domains` do http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i` curl -X POST -H 'Content-type: application/json' --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_status}\&quot;}&quot; ${SLACK_WEBHOOK_URL} done </code></pre> <p>There are still exceptions, as follows:</p> <pre><code> /bin/bash: -c: line 1: syntax error near unexpected token `http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i`' /bin/bash: -c: line 1: ` http_status=`curl -I -m 10 -o /dev/null -s -w %{http_code} $i`' </code></pre> <p>Hey...I really don't know the reason, but I finally wrote a shell script and mounted it into the container for execution, this is ok.</p> <pre class="lang-yaml 
prettyprint-override"><code> containers: - name: container-u37d3w image: 'centos:centos7' command: - /bin/bash args: - /check.sh env: - name: SLACK_WEBHOOK_URL value: &gt;- https://hooks.slack.com/services/... resources: {} volumeMounts: - name: volume-m3lym7 readOnly: true mountPath: /domains subPath: domains - name: volume-ho9ctd readOnly: true mountPath: /check.sh subPath: check.sh </code></pre> <p>I would like to know what is the difference between using <code>command</code> or <code>args</code> and executing a shell script.</p>
<p>You can think of it this way: <strong>command</strong> is like the command we run in Linux (for example <strong>printenv</strong>, <strong>grep</strong>, <strong>echo</strong>), and the <strong>arguments</strong> you pass to it are the <strong>args</strong>.</p>
<p>Like <code>echo(command) &quot;hello&quot;(args)</code></p>
<p>So if you want to run the command in a shell you can try something like</p>
<pre><code>command: [&quot;/bin/sh&quot;]
args: [&quot;-c&quot;, &quot;while true; do echo hello; sleep 10;done&quot;]
</code></pre>
<p>Or else you can try something like</p>
<pre><code>  image: k8s.gcr.io/busybox
  command: [ &quot;sh&quot;, &quot;-c&quot;]
  args:
    - while true; do echo -en '\n';
      printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
      sleep 10;
      done;
  env:
</code></pre>
<p>You can read more here: <a href="https://kubernetes.io/docs/tasks/inject-data-application/_print/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/_print/</a></p>
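<p>Applied to your loop, a sketch (assuming the <code>domains</code> file is in the container's working directory) could look like the block below. Also note that your first error output contains a stray non-ASCII character (<code>οΌ›</code>, which looks like a mis-encoded full-width semicolon) right after <code>cat domains</code>; characters like that will also make the shell fail with &quot;expecting do&quot;, so make sure the script only uses plain ASCII punctuation:</p>
<pre class="lang-yaml prettyprint-override"><code>command: [&quot;/bin/sh&quot;, &quot;-c&quot;]
args:
  - |
    for i in $(cat domains); do
      http_status=$(curl -I -m 10 -o /dev/null -s -w '%{http_code}' &quot;$i&quot;)
      curl -X POST -H 'Content-type: application/json' \
        --data &quot;{\&quot;text\&quot;:\&quot;$i check http status ${http_status}\&quot;}&quot; &quot;${SLACK_WEBHOOK_URL}&quot;
    done
</code></pre>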
<p>I get this error:</p> <blockquote> <p>ERROR! couldn't resolve module/action 'kubernetes.core.k8s'. This often indicates a misspelling, missing collection, or incorrect module path.</p> </blockquote> <p>However, I thought I installed it and looks like it is installed:</p> <pre><code>[me@server ansible]$ ansible-galaxy collection install kubernetes.core Process install dependency map Starting collection install process Skipping 'kubernetes.core' as it is already installed </code></pre> <p>The configuration is</p> <pre><code>[user@server ~]$ ansible --version ansible 2.9.25 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.6/site-packages/ansible executable location = /usr/bin/ansible python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)] </code></pre> <p>Than I've tried installing under the <code>root</code> user and under the user I'm running Ansible as and found it did install, but now got this error:</p> <pre><code>The full traceback is: Traceback (most recent call last): File &quot;/tmp/ansible_kubernetes.core.k8s_payload_yikI_F/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py&quot;, line 92, in &lt;module&gt; from kubernetes.dynamic.resource import ResourceInstance ImportError: No module named kubernetes.dynamic.resource fatal: [1.2.3.4]: FAILED! =&gt; { &quot;changed&quot;: false, &quot;error&quot;: &quot;No module named kubernetes.dynamic.resource&quot;, &quot;invocation&quot;: { &quot;module_args&quot;: { &quot;api_key&quot;: null, &quot;api_version&quot;: &quot;v1&quot;, &quot;append_hash&quot;: false, &quot;apply&quot;: false, &quot;ca_cert&quot;: null, &quot;client_cert&quot;: null, &quot;client_key&quot;: null, &quot;context&quot;: null, &quot;continue_on_error&quot;: false, &quot;delete_options&quot;: null, &quot;force&quot;: false, &quot;host&quot;: null, &quot;kind&quot;: null, &quot;kubeconfig&quot;: null, &quot;label_selectors&quot;: null, &quot;merge_type&quot;: null, &quot;name&quot;: null, &quot;namespace&quot;: null, &quot;password&quot;: null, &quot;persist_config&quot;: null, &quot;proxy&quot;: null, &quot;proxy_headers&quot;: null, &quot;resource_definition&quot;: null, &quot;src&quot;: &quot;/home/maasfsr/AnsibleClone/deployments/develop/k8s/neo4j-bolt-pv.yaml&quot;, &quot;state&quot;: &quot;present&quot;, &quot;template&quot;: null, &quot;username&quot;: null, &quot;validate&quot;: null, &quot;validate_certs&quot;: null, &quot;wait&quot;: false, &quot;wait_condition&quot;: null, &quot;wait_sleep&quot;: 5, &quot;wait_timeout&quot;: 120 } }, &quot;msg&quot;: &quot;Failed to import the required Python library (kubernetes) on K8S-Master's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter&quot; } </code></pre> <p>Is this a different question I should post or related?</p>
<p>Although you can follow this answer: <a href="https://stackoverflow.com/a/60867449/5508998">https://stackoverflow.com/a/60867449/5508998</a></p>
<p>I found that it only worked when the packages were installed with <strong>sudo</strong>, so:</p>
<pre><code>sudo pip install openshift pyyaml kubernetes
</code></pre>
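<p>If the import error still shows up afterwards, it may be that Ansible is using a different Python interpreter on the target host than the one the packages were installed into (the error message hints at this). In that case you can point Ansible at the right interpreter explicitly, for example in the inventory (the group name and path below are just examples):</p>
<pre><code>[k8s_hosts:vars]
ansible_python_interpreter=/usr/bin/python3
</code></pre>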
<p>By default pods can communicate with each other in Kubernetes, which is unwanted should a pod be compromised. We want to use <strong>NetworkPolicies</strong> to control inbound (ingress) and outbound (egress) traffic to/from pods.</p> <p>Specifically pods should ONLY be able to:</p> <ul> <li>Egress: Call services on the internet</li> <li>Ingress: Receive requests from the <strong>Nginx-ingress controller</strong></li> <li>Ingress: Send logs via <strong>promtail</strong> to <strong>Loki</strong></li> </ul> <h2>What I have tried</h2> <h2>1. Denying all ingress and egress</h2> <p>This is the default policy that we want to gradually open up. It blocks all ingress and egress.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: default-deny-all namespace: mynamespace spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <h2>2. Opening egress to internet only</h2> <p>We allow egress only to IP-adresses that are not reserved for <strong>private networks</strong> according to <a href="https://en.wikipedia.org/wiki/Private_network#Private_IPv4_addresses" rel="nofollow noreferrer">wikipedia</a>.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: egress-allow-internet-only namespace: mynamespace spec: podSelector: {} policyTypes: - Egress egress: - to: - ipBlock: cidr: 0.0.0.0/0 except: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 </code></pre> <h2>3. Opening Ingress from ingress controller and loki</h2> <p>We have deployed the standard <strong>NginX Ingress Controller</strong> in namespace <strong>default</strong>, and it has the lable <strong>app.kubernetes.io/name=ingress-nginx</strong>. We have also deployed the standard <strong>loki-grafana stack</strong> to the <strong>default</strong> namespace, which uses <strong>promtail</strong> to transfer logs to <strong>Loki</strong>. Here I allow pods to recieve ingress from the <strong>promtail</strong> and <strong>ingress-nginx</strong> pods.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ingress-allow-ingress-controller-and-promptail namespace: mynamespace spec: podSelector: {} ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=ingress-nginx - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=promtail </code></pre> <h2>So, does this configuration look right?</h2> <p>I am new to Kubernetes, so I hope you guys can help point me in the right direction. Does this configuration do what I intent it to do, or have I missed something? E.g. is it enough that I have just blocked <strong>egress</strong> within the <strong>private network</strong> to ensure that the pods are isolated from each other, or should I also make the ingress configuration as I have done here?</p>
<p>I have compared your Ingress with the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#allow-all-ingress-traffic" rel="nofollow noreferrer">K8s docs</a> and your Egress with this <a href="https://stackoverflow.com/a/57845666/19230181">SO answer</a>, and the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-and-all-egress-traffic" rel="nofollow noreferrer">deny-all ingress and egress</a> policy looks correct. The only thing to double-check is that all the namespaces are spelled correctly, and they seem to be correct in your YAML files.</p>
<p>However, Kubernetes pods use the DNS server inside Kubernetes; because that DNS server is blocked by your egress policy, you need to allow more specific IP ranges for DNS lookups. Follow this <a href="https://stackoverflow.com/a/57204119/19230181">SO answer</a> to define a <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">DNS config</a> at the pod level, and to make curl calls with domain names work, allow egress to CoreDNS in kube-system (by adding a namespace selector for kube-system and a pod selector for the DNS pods).</p>
<h3>How to identify the DNS pod</h3>
<pre><code># Identifying DNS pod
kubectl get pods -A | grep dns

# Identifying DNS pod label
kubectl describe pods -n kube-system coredns-64cfd66f7-rzgwk
</code></pre>
<h2>Adding the DNS pod to the NetworkPolicy</h2>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-allow-internet-only
  namespace: mynamespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: &quot;kube-system&quot;
        - podSelector:
            matchLabels:
              k8s-app: &quot;kube-dns&quot;
</code></pre>
<p>Is there a Kubectl command or config map in the cluster that can help me find what CNI is being used?</p>
<p>First of all checking presence of exactly one config file in <code>/etc/cni/net.d</code> is a good start:</p> <pre><code>$ ls /etc/cni/net.d 10-flannel.conflist </code></pre> <p>and <code>ip a s</code> or <code>ifconfig</code> helpful for checking existence of network interfaces. e.g. <code>flannel</code> CNI should setup <code>flannel.1</code> interface:</p> <pre><code>$ ip a s flannel.1 3: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN group default link/ether de:cb:d1:d6:e3:e7 brd ff:ff:ff:ff:ff:ff inet 10.244.1.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::dccb:d1ff:fed6:e3e7/64 scope link valid_lft forever preferred_lft forever </code></pre> <p>When creating a cluster, <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">CNI installation</a> is typically installed using:</p> <pre><code>kubectl apply -f &lt;add-on.yaml&gt; </code></pre> <p>thus the networking pod will be called <code>kube-flannel*</code>, <code>kube-calico*</code> etc. depending on your <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">networking configuration</a>.</p> <p>Then <code>crictl</code> will help you inspect running pods and containers.</p> <pre><code>crictl pods ls </code></pre> <p>On a controller node in a healthy cluster you should have all pods in <code>Ready</code> state.</p> <pre><code>crictl pods ls POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME dc90dd87e18cf 3 minutes ago Ready coredns-6d4b75cb6d-r2j9s kube-system 0 (default) d1ab9d0aa815a 3 minutes ago Ready kubernetes-dashboard-cd4778d69-xmtkz kube-system 0 (default) 0c151fdd92e71 3 minutes ago Ready coredns-6d4b75cb6d-bn8hr kube-system 0 (default) 40f18ce56f776 4 minutes ago Ready kube-flannel-ds-d4fd7 kube-flannel 0 (default) 0e390a68380a5 4 minutes ago Ready kube-proxy-r6cq2 kube-system 0 (default) cd93e58d3bf70 4 minutes ago Ready kube-scheduler-c01 kube-system 0 (default) 266a33aa5c241 4 minutes ago Ready kube-apiserver-c01 kube-system 0 (default) 0910a7a73f5aa 4 minutes ago Ready kube-controller-manager-c01 kube-system 0 (default) </code></pre> <p>If your cluster is properly configured you should be able to list containers using <code>kubectl</code>:</p> <pre><code>kubectl get pods -n kube-system </code></pre> <p>if <code>kubectl</code> is not working (<code>kube-apiserver</code> is not running) you can fallback to <code>crictl</code>.</p> <p>On an unhealthy cluster <code>kubectl</code> will show pods in <code>CrashLoopBackOff</code> state. <code>crictl pods ls</code> command will give you similar picture, only displaying pods from single node. 
Also check <a href="https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/" rel="nofollow noreferrer">documentation for common CNI errors</a>.</p> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-6d4b75cb6d-brb9d 0/1 ContainerCreating 0 25m coredns-6d4b75cb6d-pcrcp 0/1 ContainerCreating 0 25m kube-apiserver-cm01 1/1 Running 27 (18m ago) 26m kube-apiserver-cm02 0/1 Running 31 (8m11s ago) 23m kube-apiserver-cm03 0/1 CrashLoopBackOff 33 (2m22s ago) 26m kube-controller-manager-cm01 0/1 CrashLoopBackOff 13 (50s ago) 24m kube-controller-manager-cm02 0/1 CrashLoopBackOff 7 (15s ago) 24m kube-controller-manager-cm03 0/1 CrashLoopBackOff 15 (3m45s ago) 26m kube-proxy-2dvfg 0/1 CrashLoopBackOff 8 (97s ago) 25m kube-proxy-7gnnr 0/1 CrashLoopBackOff 8 (39s ago) 25m kube-proxy-cqmvz 0/1 CrashLoopBackOff 8 (19s ago) 25m kube-scheduler-cm01 1/1 Running 28 (7m15s ago) 12m kube-scheduler-cm02 0/1 CrashLoopBackOff 28 (4m45s ago) 18m kube-scheduler-cm03 1/1 Running 36 (107s ago) 26m kubernetes-dashboard-cd4778d69-g8jmf 0/1 ContainerCreating 0 2m27s </code></pre> <p><code>crictl ps</code> will give you containers (like <code>docker ps</code>), watch for high number of attempts:</p> <pre><code>CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD d54c6f1e45dea 2ae1ba6417cbc 2 seconds ago Running kube-proxy 1 347fef3ae1e98 kube-proxy-7gnnr d6048ef9e30c7 d521dd763e2e3 41 seconds ago Running kube-apiserver 27 640658b58d1ae kube-apiserver-cm03 b6b8c7a24914e 3a5aa3a515f5d 41 seconds ago Running kube-scheduler 28 c7b710a0acf30 kube-scheduler-cm03 b0a480d2c1baf 586c112956dfc 42 seconds ago Running kube-controller-manager 8 69504853ab81b kube-controller-manager-cm03 </code></pre> <p>and check logs using</p> <pre><code>crictl logs d54c6f1e45dea </code></pre> <p>Last not least <code>/opt/cni/bin/</code> path usually contains binaries required for networking. Another <code>PATH</code> might defined in add on setup or CNI config.</p> <pre><code>$ ls /opt/cni/bin/ bandwidth bridge dhcp firewall flannel host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan </code></pre> <p>Finally <code>crictl</code> reads <code>/etc/crictl.yaml</code> config, you should set proper runtime and image endpoint to match you <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">container runtime</a>:</p> <pre><code>runtime-endpoint: unix:///var/run/containerd/containerd.sock image-endpoint: unix:///var/run/containerd/containerd.sock timeout: 10 </code></pre>
<p>I have a pipeline which creates an image and pushes to Quay repository and then deploys to Openshift cluster. In my deployment-config file I have mentioned the image name as quay.:dev. So whenever a new image is created, it is created with same name and tag. My question is will this pipeline and deployment make any changes to the pod running in the namespace or it will not identify any change and old pod will keep on running ? Do I need to change tag on every run ?</p>
<p>You need to change the image tag on every run. As a general rule, in Kubernetes you want to avoid fixed image tags like <code>...:dev</code> or <code>...:latest</code>.</p> <p>With the workflow you suggest, two things will happen.</p> <p>First, updating the Kubernetes Deployment will have no effect. The new Deployment object will be byte-for-byte identical to the old one. The cluster will see you want to have 3 Pods running the <code>quay...:dev</code> image, and those exist, so there's nothing to change.</p> <p>Second, the images in the cluster can get out of sync. Kubernetes (and plain Docker) will normally only pull an image if it doesn't exist on the node. So if you do start a new Pod and some <code>quay...:dev</code> image already exists on the node, it will just get used as-is. But say a node fails and is replaced, and it was running a replica of your application; the new node won't have a copy of your image, and if the replacement Pod is scheduled there then it will pull the then-current copy of the image, even though it's not the same actual build as in the rest of the cluster.</p> <p>I'd recommend using a unique tag for every build of your application. The source control commit ID or the current date/time are two things that are easy to use. You then need to update the Deployment's <code>image:</code> on every build, but since this changes it will cause every existing Pod to be replaced with the newer image.</p>
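<p>A rough sketch of what that could look like in the pipeline (the image name, container name and registry path are placeholders, not taken from your setup):</p>
<pre><code># Build and push an image tagged with the current git commit
TAG=$(git rev-parse --short HEAD)
docker build -t quay.io/myorg/myapp:$TAG .
docker push quay.io/myorg/myapp:$TAG

# Update the Deployment to the new tag; this triggers a rolling update
kubectl set image deployment/myapp myapp=quay.io/myorg/myapp:$TAG
# on OpenShift with a DeploymentConfig:
# oc set image dc/myapp myapp=quay.io/myorg/myapp:$TAG
</code></pre>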
<p>Not sure what I am missing. Please, find below all the config scripts I have used</p> <pre><code>2022-07-21T07:26:56.903Z info service/collector.go:220 Starting otelcol... {&quot;service&quot;: &quot;my-prom-instance&quot;, &quot;Version&quot;: &quot;0.54.0&quot;, &quot;NumCPU&quot;: 4} 2022-07-21T07:26:56.903Z info service/collector.go:128 Everything is ready. Begin running and processing data. {&quot;service&quot;: &quot;my-prom-instance&quot;} 2022-07-21T07:26:56.902Z debug discovery/manager.go:309 Discoverer channel closed {&quot;service&quot;: &quot;my-prom-instance&quot;, &quot;kind&quot;: &quot;receiver&quot;, &quot;name&quot;: &quot;prometheus&quot;, &quot;pipeline&quot;: &quot;metrics&quot;, &quot;provider&quot;: &quot;static/0&quot;} W0721 07:26:56.964183 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:monitoring:otel-collector-collector&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E0721 07:26:56.964871 1 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:monitoring:otel-collector-collector&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope W0721 07:26:58.435237 1 reflector.go:324] k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Pod: pods is forbidden: User &quot;system:serviceaccount:monitoring:otel-collector-collector&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; at the cluster scope E0721 07:26:58.435924 1 reflector.go:138] </code></pre> <p>clusterRole.yaml</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus namespace: monitoring rules: - apiGroups: [&quot;&quot;] resources: - nodes - nodes/proxy - services - endpoints - pods verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] - apiGroups: - extensions resources: - ingresses verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] - nonResourceURLs: [&quot;/metrics&quot;] verbs: [&quot;get&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus subjects: - kind: ServiceAccount name: default namespace: monitoring </code></pre> <p>config-map.yaml</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: prometheus-server-conf labels: name: prometheus-server-conf namespace: monitoring data: prometheus.rules: |- groups: - name: devopscube demo alert rules: - alert: High Pod Memory expr: sum(container_memory_usage_bytes) &gt; 1 for: 1m labels: severity: slack annotations: summary: High Memory Usage prometheus.yml: |- global: scrape_interval: 5s evaluation_interval: 5s rule_files: - /etc/prometheus/prometheus.rules alerting: alertmanagers: - scheme: http static_configs: - targets: - &quot;alertmanager.monitoring.svc:9093&quot; scrape_configs: - job_name: 'node-exporter' kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_endpoints_name] regex: 'node-exporter' action: keep - job_name: 'kubernetes-apiservers' kubernetes_sd_configs: - role: endpoints scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: - source_labels: 
[__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] action: keep regex: default;kubernetes;https - job_name: 'kubernetes-nodes' scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token kubernetes_sd_configs: - role: node relabel_configs: - action: labelmap regex: __meta_kubernetes_node_label_(.+) - target_label: __address__ replacement: kubernetes.default.svc:443 - source_labels: [__meta_kubernetes_node_name] regex: (.+) target_label: __metrics_path__ replacement: /api/v1/nodes/${1}/proxy/metrics - job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name - job_name: 'kube-state-metrics' static_configs: - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080'] - job_name: 'kubernetes-cadvisor' scheme: https tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token kubernetes_sd_configs: - role: node relabel_configs: - action: labelmap regex: __meta_kubernetes_node_label_(.+) - target_label: __address__ replacement: kubernetes.default.svc:443 - source_labels: [__meta_kubernetes_node_name] regex: (.+) target_label: __metrics_path__ replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - job_name: 'kubernetes-service-endpoints' kubernetes_sd_configs: - role: endpoints relabel_configs: - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] action: replace target_label: __scheme__ regex: (https?) 
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] action: replace target_label: __address__ regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 - action: labelmap regex: __meta_kubernetes_service_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_service_name] action: replace target_label: kubernetes_name </code></pre> <p>prometheus-deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: prometheus-deployment namespace: monitoring labels: app: prometheus-server spec: replicas: 1 selector: matchLabels: app: prometheus-server template: metadata: labels: app: prometheus-server spec: containers: - name: prometheus image: prom/prometheus args: - &quot;--config.file=/etc/prometheus/prometheus.yml&quot; - &quot;--storage.tsdb.path=/prometheus/&quot; ports: - containerPort: 9090 volumeMounts: - name: prometheus-config-volume mountPath: /etc/prometheus/ - name: prometheus-storage-volume mountPath: /prometheus/ volumes: - name: prometheus-config-volume configMap: defaultMode: 420 name: prometheus-server-conf - name: prometheus-storage-volume emptyDir: {} </code></pre> <p>otel-deployment.yaml</p> <pre><code>apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel-collector namespace: monitoring spec: config: | receivers: prometheus: config: scrape_configs: - job_name: 'kube-state-metrics' scrape_interval: 5s scrape_timeout: 1s static_configs: - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080'] - job_name: k8s kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] regex: &quot;true&quot; action: keep metric_relabel_configs: - source_labels: [__name__] regex: &quot;(request_duration_seconds.*|response_duration_seconds.*)&quot; action: keep processors: batch: exporters: logging: service: pipelines: metrics: receivers: [prometheus] exporters: [logging] telemetry: logs: level: debug initial_fields: service: my-prom-instance </code></pre> <p>otel-service.yaml</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sa namespace: monitoring </code></pre>
<p>The ServiceAccount is defined with the name <code>otel-collector-sa</code>, but your ClusterRoleBinding links the ClusterRole to the service account <code>default</code> in the <code>monitoring</code> namespace. The error message shows the collector actually runs as <code>system:serviceaccount:monitoring:otel-collector-collector</code>, so the binding never applies to it. Point the ClusterRoleBinding at the service account the collector pods really use (or configure the collector to use <code>otel-collector-sa</code>).</p>
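<p>For example (a sketch based on the service account name shown in your error message; alternatively set <code>spec.serviceAccount: otel-collector-sa</code> on the OpenTelemetryCollector and bind to that instead):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: otel-collector-collector
    namespace: monitoring
</code></pre>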
<p>My question is basically:</p>
<p>What does the pipe ('|') mean in a ConfigMap? Code sample taken from: <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/</a></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: &quot;3&quot;
  ui_properties_file_name: &quot;user-interface.properties&quot;

  # file-like keys
  game.properties: |    # &lt;--- Here
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |    # &lt;--- Here
    color.good=purple
    color.bad=yellow
    allow.textmode=true
</code></pre>
<p>This seems a bit trivial, but what does it mean and what is its purpose? Any special use cases?</p>
<p>Thank you for any clarification!</p>
<p>It's a multiline string: the <code>|</code> is YAML's literal block scalar indicator, which means everything indented underneath it is taken as the value, with the line breaks preserved. That's why it's used for the &quot;file-like&quot; keys, whose values are whole file contents rather than single words. See <a href="https://yaml-multiline.info/" rel="nofollow noreferrer">https://yaml-multiline.info/</a> for examples</p>
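<p>As a small illustration, the first file-like key from the example is equivalent to writing the string inline with explicit newlines:</p>
<pre><code>game.properties: |
  enemy.types=aliens,monsters
  player.maximum-lives=5

# is the same value as

game.properties: &quot;enemy.types=aliens,monsters\nplayer.maximum-lives=5\n&quot;
</code></pre>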
<p>By default pods can communicate with each other in Kubernetes, which is unwanted should a pod be compromised. We want to use <strong>NetworkPolicies</strong> to control inbound (ingress) and outbound (egress) traffic to/from pods.</p> <p>Specifically pods should ONLY be able to:</p> <ul> <li>Egress: Call services on the internet</li> <li>Ingress: Receive requests from the <strong>Nginx-ingress controller</strong></li> <li>Ingress: Send logs via <strong>promtail</strong> to <strong>Loki</strong></li> </ul> <h2>What I have tried</h2> <h2>1. Denying all ingress and egress</h2> <p>This is the default policy that we want to gradually open up. It blocks all ingress and egress.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: default-deny-all namespace: mynamespace spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <h2>2. Opening egress to internet only</h2> <p>We allow egress only to IP-adresses that are not reserved for <strong>private networks</strong> according to <a href="https://en.wikipedia.org/wiki/Private_network#Private_IPv4_addresses" rel="nofollow noreferrer">wikipedia</a>.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: egress-allow-internet-only namespace: mynamespace spec: podSelector: {} policyTypes: - Egress egress: - to: - ipBlock: cidr: 0.0.0.0/0 except: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 </code></pre> <h2>3. Opening Ingress from ingress controller and loki</h2> <p>We have deployed the standard <strong>NginX Ingress Controller</strong> in namespace <strong>default</strong>, and it has the lable <strong>app.kubernetes.io/name=ingress-nginx</strong>. We have also deployed the standard <strong>loki-grafana stack</strong> to the <strong>default</strong> namespace, which uses <strong>promtail</strong> to transfer logs to <strong>Loki</strong>. Here I allow pods to recieve ingress from the <strong>promtail</strong> and <strong>ingress-nginx</strong> pods.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ingress-allow-ingress-controller-and-promptail namespace: mynamespace spec: podSelector: {} ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=ingress-nginx - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name=default - podSelector: matchLabels: app.kubernetes.io/name=promtail </code></pre> <h2>So, does this configuration look right?</h2> <p>I am new to Kubernetes, so I hope you guys can help point me in the right direction. Does this configuration do what I intent it to do, or have I missed something? E.g. is it enough that I have just blocked <strong>egress</strong> within the <strong>private network</strong> to ensure that the pods are isolated from each other, or should I also make the ingress configuration as I have done here?</p>
<p>For those curious I ended up with the following network policy:</p>
<pre><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: &lt;K8S_NAMESPACE&gt;
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet
  namespace: &lt;K8S_NAMESPACE&gt;
spec:
  podSelector: {}
  policyTypes:
    - Egress
    - Ingress
  egress:
    - to:
        - ipBlock:
            cidr: &quot;0.0.0.0/0&quot;
            except:
              - &quot;10.0.0.0/8&quot;
              - &quot;172.16.0.0/12&quot;
              - &quot;192.168.0.0/16&quot;
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: &quot;kube-system&quot;
        - podSelector:
            matchLabels:
              k8s-app: &quot;kube-dns&quot;
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-ingresscontroller
  namespace: &lt;K8S_NAMESPACE&gt;
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: &quot;default&quot;
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: &quot;ingress-nginx&quot;
---
</code></pre>
<p>It turned out that the DNS server had to be added to <strong>allow-internet</strong> and that it was not necessary to add <strong>allow-ingress-from-promtail</strong>, as promtail gets the logs in another way than through ingress.</p>
<p>I'm trying to move my development environment to Kubernetes to be more in line with existing deployment stages. In that context I need to call a service by its Ingress DNS name internally, while this DNS name resolves to an IP unreachable from the cluster itself. I would like to create a DNS alias inside the cluster which would point to the service, basically a reversal of a <code>ExternalName</code> service.</p> <p>Example:</p> <ul> <li>The external DNS name is <code>my-service.my-domain.local</code>, resolving to 127.0.0.1</li> <li>Internal service is <code>my-service.my-namespace.svc.cluster.local</code></li> <li>A process running in a pod can't reach <code>my-service.my-domain.local</code> because of the resolved IP, but could reach <code>my-service.my-namespace.svc.cluster.local</code>, but needs to be accessing the former by name</li> <li>I would like to have a cluster-internal DNS name <code>my-service.my-domain.local</code>, resolving to the service <code>my-service.my-namespace.svc.cluster.local</code> (<code>ExternalName</code> service would do the exact opposite).</li> </ul> <p>Is there a way to implement this in Kubernetes?</p>
<p>You can use the core dns and add the entry over there using configmap</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: annotations: labels: eks.amazonaws.com/component: coredns k8s-app: kube-dns name: coredns namespace: kube-system data: Corefile: | domain-name:port { errors cache 30 forward . &lt;IP or custom DNS&gt; reload } </code></pre> <p>To test you can start one busy box pod</p> <pre><code>kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600 </code></pre> <p>hit the domain name from inside of busy box</p> <pre><code>kubectl exec busybox -- nslookup domain-name </code></pre> <p>Official doc ref : <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/</a></p> <p>Nice article for ref : <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/</a></p> <p>Or</p> <p>you can map the domain to the <strong>service name</strong> using rewrite, <code>rewrite name example.io service.default.svc.cluster.local</code></p> <blockquote> <p>Use the Rewrite plug-in of CoreDNS to resolve a specified domain name to the domain name of a Service.</p> </blockquote> <pre><code>apiVersion: v1 data: Corefile: |- .:5353 { bind {$POD_IP} cache 30 errors health {$POD_IP}:8080 kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } rewrite name example.io service.default.svc.cluster.local loadbalance round_robin prometheus {$POD_IP}:9153 forward . /etc/resolv.conf reload } kind: ConfigMap metadata: labels: app: coredns k8s-app: coredns kubernetes.io/cluster-service: &quot;true&quot; kubernetes.io/name: CoreDNS release: cceaddon-coredns name: coredns namespace: kube-system </code></pre>
<p>I have a Kubernetes deployment and I want to insert some conf parameters end of file while creating the pod. How can I do that ? It shouldn't restart the pod.</p>
<p>You can either use the <code>command</code> directive to override the entrypoint of the container, or mount in the conf as a configmap, then edit that.</p>
<p>Here's an example of overriding the entrypoint -- more reading available <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">here</a></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          command: ['/bin/bash', '-c', 'echo newkey=newval &gt;&gt; /etc/myconf.conf &amp;&amp; ./entrypoint.sh']
          ports:
            - containerPort: 80
</code></pre>
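<p>For the ConfigMap approach, a rough sketch (all names here are placeholders) would be to put the desired file content, including the extra parameters, into a ConfigMap and mount it where the application expects the file. Keep in mind that files mounted from a ConfigMap are read-only inside the container, and changes made via <code>subPath</code> mounts are not propagated to already-running pods:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-conf
data:
  myconf.conf: |
    existingkey=existingval
    newkey=newval
---
# relevant parts of the Deployment's pod template
spec:
  containers:
    - name: myapp
      image: myapp:latest
      volumeMounts:
        - name: conf
          mountPath: /etc/myconf.d   # file shows up as /etc/myconf.d/myconf.conf
  volumes:
    - name: conf
      configMap:
        name: myapp-conf
</code></pre>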
<p>I want to display the pod IP address in an nginx pod. Currently I am using an init container to initialize the pod by writing to a volume.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: init-demo spec: containers: - name: nginx image: nginx volumeMounts: - name: workdir mountPath: /usr/share/nginx/html # These containers are run during pod initialization initContainers: - name: install image: busybox:1.28 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP command: - echo - $(POD_IP) &gt;&gt; /work-dir/index.html volumeMounts: - name: workdir mountPath: &quot;/work-dir&quot; dnsPolicy: Default volumes: - name: workdir emptyDir: {} </code></pre> <p>This should work in theory, but the file redirect doesn't work and the mounted file in the nginx container is blank. There's probably an easier way to do this, but I'm curious why this doesn't work.</p>
<p>Nothing is changed, except how <code>command</code> is passed in the init container. See <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell" rel="nofollow noreferrer">this</a> for an explanation.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: init-demo spec: containers: - name: nginx image: nginx volumeMounts: - name: workdir mountPath: /usr/share/nginx/html # These containers are run during pod initialization initContainers: - name: install image: busybox:1.28 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP command: - 'sh' - '-c' - 'echo $(POD_IP) &gt; /work-dir/index.html' volumeMounts: - name: workdir mountPath: &quot;/work-dir&quot; dnsPolicy: Default volumes: - name: workdir emptyDir: {} </code></pre>
<p>Environment: MacBook Pro - Chip: Apple M1 Pro, macOS Monterey 12.2.1</p>
<p><strong>How do I run a docker MS SQL (any version) on Mac M1 and connect from Azure Data Studio?</strong></p>
<p>I was able to create a docker image of azure-sql-edge. Pod is running OK.</p>
<p>INFO: Connection opened from 127.0.0.1:57588 to 127.0.0.1:1431</p>
<p><strong>YAML</strong> file is attached here.</p>
<p>Azure Data Studio Error:</p>
<p><strong>Error: server as localhost 1433</strong><br> A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 35 - An internal exception was caught)</p>
<p><strong>Error: server as localhost without Port number</strong> <br> A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - 40034080)</p>
<p><strong>YAML:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/azure-sql-edge
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: &quot;Developer&quot;
            - name: ACCEPT_EULA
              value: &quot;Y&quot;
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: MSSQL_SA_PASSWORD
          volumeMounts:
            - mountPath: var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: msql
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: msql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
</code></pre>
<p><strong>Detailed error from the Azure Data Studio:</strong><br> Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake.
(provider: TCP Provider, error: 0 - Undefined error: 0) at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action<code>1 wrapCloseInAction) at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at Microsoft.Data.SqlClient.TdsParserStateObject.ThrowExceptionAndWarning(Boolean callerHasConnectionLock, Boolean asyncClose) at Microsoft.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error) at Microsoft.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync() at Microsoft.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket() at Microsoft.Data.SqlClient.TdsParser.ConsumePreLoginHandshake(Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, Boolean&amp; marsCapable, Boolean&amp; fedAuthRequired) at Microsoft.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, Boolean withFailover, SqlAuthenticationMethod authType) at Microsoft.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover) at Microsoft.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout) at Microsoft.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance) at Microsoft.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, Boolean applyTransientFaultHandling, String accessToken, DbConnectionPool pool) at Microsoft.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions) at Microsoft.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup, DbConnectionOptions userOptions) at Microsoft.Data.ProviderBase.DbConnectionFactory.&lt;&gt;c__DisplayClass48_0.&lt;CreateReplaceConnectionContinuation&gt;b__0(Task</code>1 _) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) --- End of stack trace from previous location --- at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task&amp; currentTaskSlot, Thread threadPoolThread) --- End of stack trace from previous location --- at Microsoft.SqlTools.ServiceLayer.Connection.ReliableConnection.ReliableSqlConnection.&lt;&gt;c__DisplayClass30_0.&lt;b__0&gt;d.MoveNext() in D:\a\1\s\src\Microsoft.SqlTools.ManagedBatchParser\ReliableConnection\ReliableSqlConnection.cs:line 316 --- 
End of stack trace from previous location --- at Microsoft.SqlTools.ServiceLayer.Connection.ConnectionService.TryOpenConnection(ConnectionInfo connectionInfo, ConnectParams connectionParams) in D:\a\1\s\src\Microsoft.SqlTools.ServiceLayer\Connection\ConnectionService.cs:line 602 ClientConnectionId:81d69633-9dd4-4088-88d4-5327bb824852</p>
<p>The following link should help with dockerizing SQL Server on an Arm-based Mac.</p>
<pre><code>docker pull mcr.microsoft.com/azure-sql-edge
</code></pre>
<p>Azure SQL Edge is the Linux-based SQL Server variant that runs on Apple Silicon. Once the container is up you can try to access the database using Azure Data Studio.</p>
<p>Details are in the url below. <a href="https://medium.com/geekculture/docker-express-running-a-local-sql-server-on-your-m1-mac-8bbc22c49dc9" rel="nofollow noreferrer">https://medium.com/geekculture/docker-express-running-a-local-sql-server-on-your-m1-mac-8bbc22c49dc9</a></p>
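<p>To sanity-check the image outside of Kubernetes first, something along these lines (the password below is only a placeholder) starts it with plain Docker:</p>
<pre><code>docker run -d --name sql-edge \
  -e &quot;ACCEPT_EULA=1&quot; \
  -e &quot;MSSQL_SA_PASSWORD=YourStrong!Passw0rd&quot; \
  -p 1433:1433 \
  mcr.microsoft.com/azure-sql-edge
</code></pre>
<p>Then connect from Azure Data Studio to server <code>localhost</code>, port 1433, user <code>sa</code> and that password.</p>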
<p>I'm using the <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">C# client API</a> and I'm trying to:</p> <ol> <li>Get all pods associated with a given StatefulSet</li> <li>Get the cluster IP address of each pod</li> </ol> <p>I am aware of <code>Kubernetes.CoreV1.ListNamespacedPodAsync</code> and <code>Kubernetes.AppsV1.ListNamespacedStatefulSetAsync</code>. The part that I'm missing is how to</p> <ol> <li>Know whether a given pod is associated with the StatefulSet</li> <li>Discover a pod's IP address</li> </ol>
<p>A feature of Kubernetes StatefulSet is <em>Stable Network Identity</em>. This requires that additional <em>headless Services</em> are created, in addition to the <code>StatefulSet</code> resource.</p> <p>When this is done, you would typically access these instances by their <strong>hostname</strong> instead of using IP address directly. See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">Stable Network ID</a> for documentation about this.</p> <p>E.g. after <em>headless Services</em> are created you could access 3 instances on these <strong>hostnames</strong></p> <ul> <li>my-app-0</li> <li>my-app-1</li> <li>my-app-2</li> </ul> <p>(when accessed from within the same namespace)</p>
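<p>A minimal sketch of the idea (names are examples): the StatefulSet's <code>serviceName</code> must reference a headless Service, and each pod then gets a DNS record of the form <code>&lt;pod-name&gt;.&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code> that resolves to that pod's IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None        # headless
  selector:
    app: my-app
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
</code></pre>
<p>If you still prefer raw IPs from the C# client, you could list pods with a label selector matching the StatefulSet's pod template labels (e.g. <code>labelSelector: &quot;app=my-app&quot;</code> in <code>ListNamespacedPodAsync</code>) and read each pod's <code>status.podIP</code> from the result.</p>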
<p>Is there anyway to list all securityContext including default and defined:</p> <ol> <li>Pod Level</li> <li>Container Level</li> </ol> <p>Using <code>kubectl get pod -o yaml</code> only show the defined in <code>spec.securityContext</code> and <code>spec.containers[*].securityContext</code> of manifest without the default one?</p>
<p>Here you are, source can be found <a href="https://medium.com/@pjbgf/kubectl-list-security-context-settings-for-all-pods-containers-within-a-cluster-93349681ef5d" rel="noreferrer">in this medium article</a>:</p> <pre><code>kubectl get pods --all-namespaces -o go-template \ --template='{{range .items}}{{&quot;pod: &quot;}}{{.metadata.name}} {{if .spec.securityContext}} PodSecurityContext: {{&quot;runAsGroup: &quot;}}{{.spec.securityContext.runAsGroup}} {{&quot;runAsNonRoot: &quot;}}{{.spec.securityContext.runAsNonRoot}} {{&quot;runAsUser: &quot;}}{{.spec.securityContext.runAsUser}} {{if .spec.securityContext.seLinuxOptions}} {{&quot;seLinuxOptions: &quot;}}{{.spec.securityContext.seLinuxOptions}} {{end}} {{else}}PodSecurity Context is not set {{end}}{{range .spec.containers}} {{&quot;container name: &quot;}}{{.name}} {{&quot;image: &quot;}}{{.image}}{{if .securityContext}} {{&quot;allowPrivilegeEscalation: &quot;}}{{.securityContext.allowPrivilegeEscalation}} {{if .securityContext.capabilities}} {{&quot;capabilities: &quot;}}{{.securityContext.capabilities}} {{end}} {{&quot;privileged: &quot;}}{{.securityContext.privileged}} {{if .securityContext.procMount}} {{&quot;procMount: &quot;}}{{.securityContext.procMount}} {{end}} {{&quot;readOnlyRootFilesystem: &quot;}}{{.securityContext.readOnlyRootFilesystem}} {{&quot;runAsGroup: &quot;}}{{.securityContext.runAsGroup}} {{&quot;runAsNonRoot: &quot;}}{{.securityContext.runAsNonRoot}} {{&quot;runAsUser: &quot;}}{{.securityContext.runAsUser}} {{if .securityContext.seLinuxOptions}} {{&quot;seLinuxOptions: &quot;}}{{.securityContext.seLinuxOptions}} {{end}}{{if .securityContext.windowsOptions}} {{&quot;windowsOptions: &quot;}}{{.securityContext.windowsOptions}} {{end}} {{else}} SecurityContext is not set {{end}} {{end}}{{end}}' </code></pre>
<p>I'm starting out with K8s and I'm stuck at setting up mongo db in replica set mode with local persistent volume. I'm using StorageClass, PersistentVolume and PersistentVolumeClaim.</p> <pre><code>vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE mongo-pv 1Gi RWO Retain Available mongo-storage 24m </code></pre> <p>but when inspect the pod I get</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. </code></pre> <p>This post answer <a href="https://stackoverflow.com/a/70069138/2704032">https://stackoverflow.com/a/70069138/2704032</a> confirmed my suspect that I might be using the wrong label.. So I had a look at the PV and I see that as I've set nodeAffinity as</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernets.io/hostname operator: In values: - docker-desktop </code></pre> <p>it's looking for</p> <pre><code>Node Affinity: Required Terms: Term 0: kubernets.io/hostname in [docker-desktop] </code></pre> <p>I checked nodes with <code>kubectl get nodes --show-labels</code> and it does have that label as the output shows</p> <pre><code>NAME STATUS ROLES AGE VERSION LABELS docker-desktop Ready control-plane 7d9h v1.24.1 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=docker-desktop,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= </code></pre> <p>I tried using volumeClaimTemplate in the StatefulSet as</p> <pre><code>volumeClaimTemplates: - metadata: name: mongo-vctemplate spec: storageClassName: mongo-storage accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p>but it didn't make a difference..I also tried to specify the pic in the pv with the claimRef parameter but still the insidious error comes up at pod creation..</p> <p>What else can I check or do I need to setup? Many thanks as usual Here are my yaml files</p> <h1>StorageClass</h1> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: mongo-storage provisioner: kubernetes.io/no-provisioner # volumeBindingMode: Immediate volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Retain </code></pre> <h1>PV</h1> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mongo-pv labels: type: local spec: capacity: storage: 1Gi # persistentVolumeReclaimPolicy: Retain # prod persistentVolumeReclaimPolicy: Delete # local tests storageClassName: mongo-storage # claimRef: # name: mongo-pvc accessModes: - ReadWriteOnce # volumeMode: Filesystem #default if omitted # hostPath: # path: /mnt/data local: path: /mnt/disk/vol1 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernets.io/hostname operator: In values: - docker-desktop </code></pre> <h1>PVC</h1> <pre><code>piVersion: v1 kind: PersistentVolumeClaim metadata: name: mongo-pvc spec: storageClassName: mongo-storage # volumeName: mongo-pv # this will make it unbundable??? 
resources: requests: storage: 1Gi volumeMode: Filesystem accessModes: - ReadWriteOnce </code></pre> <h1>StatefulSet</h1> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mongo-statefulset spec: selector: matchLabels: app: mongo-pod # has to match .spec.template.metadata.labels serviceName: mongo-clusterip-service replicas: 1 # 3 template: metadata: labels: app: mongo-pod # has to match .spec.selector.matchLabels spec: terminationGracePeriodSeconds: 10 containers: - name: mongo-container image: mongo ports: - containerPort: 27017 volumeMounts: - name: mongo-pv-cont mountPath: /data/db #/mnt/data volumes: - name: mongo-pv-cont persistentVolumeClaim: claimName: mongo-pvc </code></pre>
<p>The problem is a typo in the node affinity key: <code>kubernets.io/hostname</code> should be <code>kubernetes.io/hostname</code> in the PV definition. Because the misspelled label exists on no node, the scheduler can never find a node where the volume is available.</p> <p>This is similar to: <a href="https://stackoverflow.com/questions/52183750/error-while-using-local-persistent-volumes-in-statefulset-pod">Error while using local persistent volumes in statefulset pod</a></p>
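<p>For reference, the corrected block in the PV definition would read:</p> <pre><code>nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - docker-desktop
</code></pre>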
<p>I'm on Ubuntu linux VM and trying to run minikube on it.</p> <p>I installed kubectl via homebrew and then installed minikube by following below installation guides:<br> kubectl: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux</a><br> minikube: <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-minikube/</a></p> <p>I started minikube as <code>sudo minikube start --driver=none</code> which has the following output:<br> <a href="https://i.stack.imgur.com/h1QdZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h1QdZ.png" alt="enter image description here"></a></p> <p>When I run the command: <code>kubectl get pods</code>, I get an error:</p> <pre><code>Error in configuration: * unable to read client-cert /home/jenkins/.minikube/profiles/minikube/client.crt for minikube due to open /home/jenkins/.minikube/profiles/minikube/client.crt: permission denied * unable to read client-key /home/jenkins/.minikube/profiles/minikube/client.key for minikube due to open /home/jenkins/.minikube/profiles/minikube/client.key: permission denied </code></pre> <p>The user I'm installing all above is <code>/home/jenkins</code>. I'm not sure what's going wrong. Can someone help?</p>
<p>From <a href="https://github.com/kubernetes/minikube/issues/8363#issuecomment-637892712" rel="nofollow noreferrer">#8363</a> shared by <a href="https://stackoverflow.com/a/70451125/1235675">this answer</a>, I did <code>nano ~/.kube/config</code> and corrected the path..</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 clusters: - cluster: certificate-authority: /home/username/.minikube/ca.crt server: https://192.168.64.3:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config users: - name: minikube user: client-certificate: /home/username/.minikube/profiles/client.crt client-key: /home/username/.minikube/profiles/client.key </code></pre>
<p>I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.</p> <p>I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.</p> <p>What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.</p> <p>The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:</p> <pre><code> server { preread_by_lua_block { ngx.var.proxy_upstream_name=&quot;tcp-my-namespace-my-service-7550&quot;; } listen 7550; proxy_timeout 600s; proxy_next_upstream on; proxy_next_upstream_timeout 600s; proxy_next_upstream_tries 3; proxy_pass upstream_balancer; } </code></pre> <p>Any help at all is greatly appreciated and I'm happy to provide more info.</p>
<p>Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the &quot;logical&quot; messages on the stream 99% of the time.</p> <p>I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.</p>
<p>I have few contexts setup in K9s. How do I switch between them?</p> <p>For example, I am in the <code>PROD</code> context, and I want change the context to <code>NONPROD</code>.</p> <p>How do I do this using the shortcuts?</p>
<p>You can switch to another context by typing <code>:ctx &lt;context-name&gt;</code>. For example:</p> <pre><code># Switch to the PROD context :ctx PROD # Switch to the NONPROD context :ctx NONPROD </code></pre> <p>A full list of commands and keybindings are available <a href="https://k9scli.io/topics/commands/" rel="noreferrer">here</a>.</p>
<p>I am currently having issues trying to get Prometheus to scrape the metrics for my Minikube cluster. Prometheus is installed via the <code>kube-prometheus-stack</code></p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>I am currently accessing Prometheus from an Ingress with a locally signed TLS certificate and it appears it's leading to conflicts as connection keeps getting refused by the cluster.</p> <p>TLS is set up via Minikube ingress <a href="https://minikube.sigs.k8s.io/docs/tutorials/custom_cert_ingress/" rel="nofollow noreferrer">add-on</a>:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create secret -n kube-system tls mkcert-tls-secret --cert=cert.pem --key=key.pem minikube addons configure ingress &lt;&lt;&lt; &quot;kube-system/mkcert-tls-secret&quot; &amp;&amp; \ minikube addons disable ingress &amp;&amp; \ minikube addons enable ingress </code></pre> <p>It seems Prometheus can't get access to <code>http-metrics</code> as a target. I installed Prometheus via a helm chart:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace monitoring &amp;&amp; \ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &amp;&amp; \ helm repo update &amp;&amp; \ helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>Here is my Prometheus configuration:</p> <pre class="lang-yaml prettyprint-override"><code>global: scrape_interval: 30s scrape_timeout: 10s evaluation_interval: 30s external_labels: prometheus: monitoring/prometheus-stack-kube-prom-prometheus prometheus_replica: prometheus-prometheus-stack-kube-prom-prometheus-0 alerting: alert_relabel_configs: - separator: ; regex: prometheus_replica replacement: $1 action: labeldrop alertmanagers: - follow_redirects: true enable_http2: true scheme: http path_prefix: / timeout: 10s api_version: v2 relabel_configs: - source_labels: [__meta_kubernetes_service_name] separator: ; regex: prometheus-stack-kube-prom-alertmanager replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-web replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - monitoring rule_files: - /etc/prometheus/rules/prometheus-prometheus-stack-kube-prom-prometheus-rulefiles-0/*.yaml scrape_configs: - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-controller-manager/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: https authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token tls_config: ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt insecure_skip_verify: true follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-controller-manager);true replacement: $1 action: keep - source_labels: 
[__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system - job_name: serviceMonitor/monitoring/prometheus-stack-kube-prom-kube-etcd/0 honor_timestamps: true scrape_interval: 30s scrape_timeout: 10s metrics_path: /metrics scheme: http authorization: type: Bearer credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token follow_redirects: true enable_http2: true relabel_configs: - source_labels: [job] separator: ; regex: (.*) target_label: __tmp_prometheus_job_name replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_label_app, __meta_kubernetes_service_labelpresent_app] separator: ; regex: (kube-prometheus-stack-kube-etcd);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_service_label_release, __meta_kubernetes_service_labelpresent_release] separator: ; regex: (prometheus-stack);true replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_port_name] separator: ; regex: http-metrics replacement: $1 action: keep - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Node;(.*) target_label: node replacement: ${1} action: replace - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name] separator: ; regex: Pod;(.*) target_label: pod replacement: ${1} action: replace - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: service replacement: $1 action: replace - source_labels: 
[__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_container_name] separator: ; regex: (.*) target_label: container replacement: $1 action: replace - source_labels: [__meta_kubernetes_service_name] separator: ; regex: (.*) target_label: job replacement: ${1} action: replace - source_labels: [__meta_kubernetes_service_label_jobLabel] separator: ; regex: (.+) target_label: job replacement: ${1} action: replace - separator: ; regex: (.*) target_label: endpoint replacement: http-metrics action: replace - source_labels: [__address__] separator: ; regex: (.*) modulus: 1 target_label: __tmp_hash replacement: $1 action: hashmod - source_labels: [__tmp_hash] separator: ; regex: &quot;0&quot; replacement: $1 action: keep kubernetes_sd_configs: - role: endpoints kubeconfig_file: &quot;&quot; follow_redirects: true enable_http2: true namespaces: own_namespace: false names: - kube-system </code></pre> <p>I am also currently accessing (works just fine) the Prometheus instance outside of the cluster with an Ingress using the TLS certificate:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prometheusdashboard-ingress namespace: monitoring labels: name: prometheusdashboard-ingress spec: tls: - hosts: - prometheus.demo secretName: mkcert-tls-secret rules: - host: prometheus.demo http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: prometheus-stack-kube-prom-prometheus port: number: 9090 </code></pre> <p>Here's the output in the target page of Prometheus:</p> <p><a href="https://i.stack.imgur.com/iOd8O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iOd8O.png" alt="Prometheus targets" /></a></p> <p>What do I get the stack access to this TLS certificate which I assume is the main issue here?</p>
<h1>Solution to the problem</h1> <p>After a closer look at my various Kubernetes clusters, I found the following errors:</p> <h3>Docker Desktop Environment throws this:</h3> <p><code>Warning Failed 6s (x3 over 30s) kubelet Error: failed to start container &quot;node-exporter&quot;: Error response from daemon: path / is mounted on / but it is not a shared or slave mount</code></p> <p>The linked <a href="https://stackoverflow.com/questions/70556984/kubernetes-node-exporter-container-is-not-working-it-shows-this-error-message">solution</a> addresses this error, but it did not work in my case.</p> <h3>Minikube Environment</h3> <p>The same error showed up there too. The minikube web UI shows which services are exposed on which ports: <a href="https://i.stack.imgur.com/X7fyw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X7fyw.png" alt="Web UI" /></a></p> <p>Port-forwarding those services does not help much either.</p> <p>The only way I can see this working is to pass the required configuration through the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus chart</a> values.</p>
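<p>For reference, a workaround that is often suggested for the &quot;shared or slave mount&quot; error on Docker Desktop (I have not verified it in this exact setup, so treat it as an assumption) is to disable node-exporter's root filesystem mount in the kube-prometheus-stack values; the exact key depends on the chart version:</p> <pre class="lang-yaml prettyprint-override"><code># custom-values.yaml for kube-prometheus-stack
prometheus-node-exporter:
  hostRootFsMount:
    enabled: false   # older chart versions expect a plain boolean: hostRootFsMount: false
</code></pre> <p>applied with <code>helm upgrade -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack -f custom-values.yaml</code>.</p>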
<p>I am running a small 3 node test kubernetes cluster (using kubeadm) running on Ubuntu Server 22.04, with Flannel as the network fabric. I also have a separate gitlab private server, with container registry set up and working.</p> <p>The problem I am running into is I have a simple test deployment, and when I apply the deployment yaml, it fails to pull the image from the gitlab private server.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: platform-deployment spec: replicas: 1 selector: matchLabels: app: platform-service template: metadata: labels: app: platform-service spec: containers: - name: platform-service image: registry.examle.com/demo/platform-service:latest </code></pre> <h4>Ubuntu Server: /etc/hosts (the relevant line)</h4> <pre class="lang-bash prettyprint-override"><code>192.168.1.30 registry.example.com </code></pre> <h4>The Error</h4> <pre class="lang-bash prettyprint-override"><code>Failed to pull image &quot;registry.example.com/demo/platform-service:latest&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;registry.example.com/deni/platform-service:latest&quot;: failed to resolve reference &quot;registry.example.com/demo/platform-service:latest&quot;: failed to do request: Head &quot;https://registry.example.com/v2/demo/platform-service/manifests/latest&quot;: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout </code></pre> <p>The 'xxx.xxx.xxx.xxx' is related to my external network, to which exists a domain name in the DNS, however all of my internal networks are set up to attach to the internal network representation, and 'registry.example.com' is a representation of my own domains.</p> <p>Also to note:</p> <pre class="lang-bash prettyprint-override"><code>docker pull registry.example.com/demo/platform-service:latest </code></pre> <p>From the command line of the server, works perfectly fine, it is just not working from kubernetes deploy yaml.</p> <h4>The problem</h4> <p>While the network on the server, and the host files on the server are configured correctly, the docker image is not resolving because when I apply it is not using the correct IP (that is configured in hosts), rather a public IP that is a different server. And the reason for the timeout is because the public facing server is not set up the same.</p> <p>When I run <code>kubectl apply -f platform-service.yaml</code> why does it not respect the hosts file of the server, and is there a way configure hosts inside Kubernetes.</p> <p>(If this problem is not clear, I apologize, I am quite new, and still learning terminology, maybe why google is not helping me with this problem.)</p> <p>The closest S/O I could find is:</p> <p><a href="https://stackoverflow.com/questions/62940403/kubernetes-not-able-pull-image-from-private-registry-having-private-domain-point">Kubernetes not able pull image from private registry having private domain pointed via /etc/hosts</a></p> <p>(SO Answer #1): hostAliases (this is for the pod itself, not pulling the image), also, installed through apt/package manager rather than snap. 
With the rest of the answer suggests changing the distribution, which I would rather go with my current setup than change it.</p> <h4>Update</h4> <p>Attempts to add hosts to coredns not working either: (<a href="https://stackoverflow.com/questions/65283827/how-to-change-host-name-resolve-like-host-file-in-coredns">How to change host name resolve like host file in coredns</a>)</p> <pre><code>kubectl -n kube-system edit configmap/coredns </code></pre> <pre><code>... .:53 { errors health { lameduck 5s } ready hosts custom.hosts registry.example.com { 192.168.1.30 registry.example.com fallthrough } kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ... </code></pre> <p>deleted the coredns pods (so they are recreated)</p> <p>and still the docker pull on the deployment fails with the external ip address instead of the internal address.</p>
<p>After going through many different solutions and lots of research and testing. The answer was actually very simple.</p> <h3>Solution in my case</h3> <p>The /etc/hosts file <em>MUST</em> contain the host for the registry (and possibly the entry for the gitlab instance as well) on <em>EVERY</em> node of the cluster including the master node.</p> <pre><code>192.168.1.30 registry.example.com 192.168.1.30 gitlab.example.com # Necessary in my case, not sure required </code></pre> <p>Once I included that on each of the 2 slaves, it attempted to pull the image, and failed with credential issues (which I was expecting to see once the hosts issue was resolved). From there I was able to add the credentials and now the image pulls fine from the private registry rather than the public facing registry.</p> <h4>Bonus: Fix for credentials error connecting to private registry (not part of the original question, but part of the setup process for connecting)</h4> <p>After fixing the /etc/hosts issue, you will probably need to set up 'regcred' credentials to access the private registry, Kubernetes documentation provides the steps on that part:</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
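<p>For reference, the credentials step from the linked documentation boils down to creating a <code>regcred</code> secret and referencing it from the pod template (username, password and e-mail below are placeholders):</p> <pre class="lang-bash prettyprint-override"><code># create the registry secret in the namespace of the deployment
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=&lt;user&gt; \
  --docker-password=&lt;password&gt; \
  --docker-email=&lt;email&gt;
</code></pre> <pre class="lang-yaml prettyprint-override"><code># excerpt of the deployment's pod template
    spec:
      containers:
      - name: platform-service
        image: registry.example.com/demo/platform-service:latest
      imagePullSecrets:
      - name: regcred
</code></pre>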
<p>While trying to add/update a dependency to a helm chart I'm getting this error. No helm plugins are installed with the name <code>helm-manager</code>.</p> <pre class="lang-sh prettyprint-override"><code>$ helm dep update Getting updates for unmanaged Helm repositories... ...Unable to get an update from the &quot;https://kubernetes-charts.storage.googleapis.com/&quot; chart repository: failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden ...Unable to get an update from the &quot;https://kubernetes-charts.storage.googleapis.com/&quot; chart repository: failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the &quot;bitnami&quot; chart repository Update Complete. Happy Helming! Error: no cached repository for helm-manager-1067d9c6027b8c3f27b49e40521d64be96ea412858d8e45064fa44afd3966ddc found. (try 'helm repo update'): open /Users/&lt;redacted&gt;/Library/Caches/helm/repository/helm-manager-1067d9c6027b8c3f27b49e40521d64be96ea412858d8e45064fa44 afd3966ddc-index.yaml: no such file or directory </code></pre>
<p>I get this sometimes when there's a mismatch between my <code>Chart.yaml</code> and the configuration of my subchart dependencies.</p> <p>For instance:</p> <p><code>Chart.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>dependencies: - name: foo ... - name: bar ... </code></pre> <p><code>values.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>foo: flag: true # but no entry for bar </code></pre> <p>It may be an artifact of some other element of my configuration (Artifactory hosting a proxy of the world for Helm) but I find myself running into this frequently enough that I hope this answer may help someone else.</p>
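<p>If that mismatch is the cause here as well, adding an entry for the missing subchart to <code>values.yaml</code> (even an empty one) may be enough to clear the error; this is a guess based on my own setup rather than a documented requirement:</p> <pre class="lang-yaml prettyprint-override"><code>foo:
  flag: true
bar: {}   # placeholder entry for the second dependency
</code></pre>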
<p>I wrote a <code>readiness_probe</code> for my pod by using a bash script. Readiness probe failed with Reason: Unhealthy but when I manually get in to the pod and run this command <code>/bin/bash -c health=$(curl -s -o /dev/null --write-out &quot;%{http_code}&quot; http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi</code> bash script exits with code 0. What could be the reason? I am attaching the code and the error below.</p> <p><strong>Edit:</strong> Found out that the health variable is set to 000 which means timeout in for bash script.</p> <pre><code>readinessProbe: exec: command: - /bin/bash - '-c' - |- health=$(curl -s -o /dev/null --write-out &quot;%{http_code}&quot; http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi </code></pre> <p>&quot;kubectl describe pod {pod_name}&quot; result:</p> <pre><code>Name: rustici-engine-54cbc97c88-5tg8s Namespace: default Priority: 0 Node: minikube/192.168.49.2 Start Time: Tue, 12 Jul 2022 18:39:08 +0200 Labels: app.kubernetes.io/name=rustici-engine pod-template-hash=54cbc97c88 Annotations: &lt;none&gt; Status: Running IP: 172.17.0.5 IPs: IP: 172.17.0.5 Controlled By: ReplicaSet/rustici-engine-54cbc97c88 Containers: rustici-engine: Container ID: docker://f7efffe6fc167e52f913ec117a4d78e62b326d8f5b24bfabc1916b5f20ed887c Image: batupaksoy/rustici-engine:singletenant Image ID: docker-pullable://batupaksoy/rustici-engine@sha256:d3cf985c400c0351f5b5b10c4d294d48fedfd2bb2ddc7c06a20c1a85d5d1ae11 Port: 8080/TCP Host Port: 0/TCP State: Running Started: Tue, 12 Jul 2022 18:39:12 +0200 Ready: False Restart Count: 0 Limits: memory: 350Mi Requests: memory: 350Mi Liveness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out &quot;%{http_code}&quot; http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=20 Readiness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out &quot;%{http_code}&quot; http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=10 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whb8d (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-whb8d: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 24s default-scheduler Successfully assigned default/rustici-engine-54cbc97c88-5tg8s to minikube Normal Pulling 23s kubelet Pulling image &quot;batupaksoy/rustici-engine:singletenant&quot; Normal Pulled 21s kubelet Successfully pulled image &quot;batupaksoy/rustici-engine:singletenant&quot; in 1.775919851s Normal Created 21s kubelet Created container rustici-engine Normal Started 20s kubelet Started container rustici-engine Warning Unhealthy 4s kubelet Readiness probe failed: Warning Unhealthy 4s kubelet Liveness probe failed: </code></pre>
<p>The probe could be failing because the application is slow to start or under load. To troubleshoot this, make sure the probe does not start firing until the app is up and running in your pod. You may need to increase the initial delay and the timeout of the readiness probe, as well as the timeout of the liveness probe, as in the following example:</p> <pre><code>readinessProbe: initialDelaySeconds: 10 periodSeconds: 2 timeoutSeconds: 10 </code></pre> <p>You can find more details about how to configure the readiness probe and liveness probe in this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">link</a>.</p>
<p>Suppose, I have several Kubernetes clusters and I have some namespaces in each of them.</p> <p>Some of these namespaces are labeled <code>project: a</code> and others <code>project: b</code>.</p> <p>Now I want to ensure that resources in namespaces labeled <code>project: a</code> can communicate with each other, but not with something else, same for the other projects.</p> <p>If it was just one Kubernetes cluster, then I would simply use NetworkPolicies.</p> <p>However, I would like to connect the clusters somehow and to ensure that this restriction applies also when the resources are spread in many clusters.</p>
<p>Network policies are bound to the local cluster and don't work across clusters for now.</p>
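<p>Within a single cluster, the per-project restriction can be expressed with a policy that only admits traffic from namespaces carrying the same label, roughly like this sketch (label values taken from the question):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-project-only
  namespace: some-namespace-labeled-project-a
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: a
</code></pre> <p>Traffic arriving from another cluster, however, is simply seen as external traffic, so it cannot be matched per project this way.</p>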
<p>I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.</p> <p>I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.</p> <p>What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.</p> <p>The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:</p> <pre><code> server { preread_by_lua_block { ngx.var.proxy_upstream_name=&quot;tcp-my-namespace-my-service-7550&quot;; } listen 7550; proxy_timeout 600s; proxy_next_upstream on; proxy_next_upstream_timeout 600s; proxy_next_upstream_tries 3; proxy_pass upstream_balancer; } </code></pre> <p>Any help at all is greatly appreciated and I'm happy to provide more info.</p>
<p>What you describe is how nginx works out of the box with HTTP. However:</p> <ol> <li>Nginx has a detailed understanding of HTTP.</li> <li>HTTP is a message-based protocol, i.e. it uses requests and replies.</li> </ol> <p>Since nginx knows nothing about the protocol you are using, then even if that protocol is a stateless request/reply mechanism, nginx cannot tell where one request ends, nor how to replay it safely to a different upstream.</p> <p>You need to implement a protocol-aware MITM (man-in-the-middle) proxy.</p>
<p>How can I reset RESTART counts to zero in below Kubernetes command output. I know stats are saved in etcd, but how to reset/erase the data?</p> <pre><code>/home/mytest&gt;kubectl get pods -A -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default load-generator 1/1 Running 2 3d21h 192.168.252.102 testnode &lt;none&gt; &lt;none&gt; kube-system calico-kube-controllers-65f8bc95db-ph48b 1/1 Running 7 51d 192.168.252.80 testnode &lt;none&gt; &lt;none&gt; kube-system calico-node-tr8dr 1/1 Running 6 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system coredns-66bff467f8-lcn5z 1/1 Running 18 51d 192.168.252.116 testnode &lt;none&gt; &lt;none&gt; kube-system coredns-66bff467f8-tmgml 1/1 Running 17 51d 192.168.252.123 testnode &lt;none&gt; &lt;none&gt; kube-system etcd-testnode 1/1 Running 23 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-testnode 1/1 Running 50 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-testnode 1/1 Running 238 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system kube-proxy-wn28b 1/1 Running 6 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-testnode 1/1 Running 233 51d 10.61.62.152 testnode &lt;none&gt; &lt;none&gt; kube-system metrics-server-585bd46ccb-55q59 1/1 Running 1 37h 192.168.252.84 testnode &lt;none&gt; &lt;none&gt; kube-system tiller-deploy-56b574c76d-kj45f 1/1 Running 4 45d 192.168.252.65 testnode &lt;none&gt; &lt;none&gt; </code></pre>
<p>The only way to accomplish this is by restarting the pod.</p> <p>Also, a feature to artificially reset the counter has been rejected.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/50375" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/50375</a></p>
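<p>The practical way to get the counter back to zero is therefore to recreate the pods, for example:</p> <pre><code># pods managed by a Deployment/DaemonSet/StatefulSet
kubectl rollout restart deployment &lt;deployment-name&gt; -n &lt;namespace&gt;

# or delete a pod and let its controller recreate it
kubectl delete pod &lt;pod-name&gt; -n &lt;namespace&gt;
</code></pre> <p>Note that a standalone pod such as <code>load-generator</code> is not recreated automatically after deletion, so you would need to apply its manifest again.</p>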
<p>Assume an application consisting of different web services implemented with ASP.NET Core, each of which is deployed as a Pod in a Kubernetes cluster (AKS to be exact). Now, suppose I want to secure the cluster-internal communication between those services via HTTPS. This requires me to:</p> <ol> <li>get TLS certificates for each of the services,</li> <li>have the Pods trust those TLS certificates (or, rather, the signing CA), and</li> <li>rotate the certificates when their validity period ends.</li> </ol> <p>What I've already learned:</p> <ul> <li><a href="https://stackoverflow.com/a/48168572/62838">This StackOverflow answer</a> indicates that this adds a lot of complexity and discourages going that route. Nevertheless, I'd like to know what such a setup would comprise.</li> <li>Projects such as <em>LettuceEncrypt</em> allow to automate most of the steps 1 and 3 above. You only need a CA that implements the ACME protocol.</li> <li>The <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Kubernetes docs (&quot;Managing TLS in a cluster&quot;)</a> mention a Kubernetes API which uses a <em>&quot;protocol that is similar to the ACME draft&quot;</em> to manage CSRs.</li> <li>However, in the docs, they're doing all the work manually (setting up a local CA, issuing CSRs manually, signing the CSRs manually using the local CA, all via the cfssl tools) that I'm wondering why on earth I would actually want to use those APIs. What are they doing for me besides storing CSRs as Kubernetes resources?</li> <li>The docs also mention that Kubernetes clusters already include a root CA that one could use for the purpose of issuing TLS certificates for Pods, but don't explain <em>how</em> one would do so: &quot;<em>It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this.</em>&quot;</li> <li>The quote above seems to suggest and warn against using the cluster root CA at the same time. Why the warning, wouldn't it simplify things a lot if we could use an existing CA?</li> </ul> <p>In my mind, this could be really simple: Just set up Kestrel with <em>LettuceEncrypt</em>, configure it against the cluster root CA, and have all the Pods trust that CA (by importing the corresponding certificate as a trusted root).</p> <p>Is it that simple? Or what am I missing?</p> <p><strong>Update 2022-07-26:</strong> Note that I need to support Windows containers.</p>
<p>For this purpose you should use <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-mutual-tls/" rel="nofollow noreferrer">mTLS</a>. To achieve this with an AKS cluster you can simply activate the <a href="https://learn.microsoft.com/en-us/azure/aks/open-service-mesh-about" rel="nofollow noreferrer">Open Service Mesh Add-On</a>. With OSM enabled, you can encrypt communications between service endpoints deployed in the cluster. A nice bonus is that the OSM add-on integrates with <a href="https://learn.microsoft.com/en-us/azure/aks/open-service-mesh-integrations#metrics-observability" rel="nofollow noreferrer">Azure Monitor</a>.</p> <p><a href="https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/#https-ingress-mtls-and-tls" rel="nofollow noreferrer">Here</a> is an example of doing mTLS with ingress-nginx:</p> <blockquote> <p>To proxy connections to HTTPS backends, we will configure the Ingress and IngressBackend configurations to use https as the backend protocol, and have OSM issue a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate will be stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends.</p> </blockquote>
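<p>Enabling the add-on on an existing cluster is a single CLI call (resource group and cluster names are placeholders):</p> <pre><code>az aks enable-addons \
  --addons open-service-mesh \
  --resource-group &lt;resource-group&gt; \
  --name &lt;cluster-name&gt;

# confirm the add-on is enabled
az aks show --resource-group &lt;resource-group&gt; --name &lt;cluster-name&gt; \
  --query 'addonProfiles.openServiceMesh.enabled'
</code></pre>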
<p>I wrote a bash script to automate the deployment of an application in a kubernetes cluster using helm and kubectl. I use <strong>cert-manager</strong> to automate issuing and renewing of TLS certificates, obtained by <strong>letsencrypt</strong>, needed by the application itself.</p> <p>The script can deploy the application in either one of many environments such as testing (test) and production (prod) using different settings and manifests as needed. For each environment I create a separate namespace and deploy the needed resources in it. In production I use the <strong>letsencrypt</strong> production server (spec.acme.server: <a href="https://acme-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-v02.api.letsencrypt.org/directory</a>) whereas, in any other env such as testing, I use the staging server (spec.acme.server: <a href="https://acme-staging-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-staging-v02.api.letsencrypt.org/directory</a>). <strong>The hostnames I request the certificates for are a different set depending on the environment: xyz.test.mysite.tld in testing vs xyz.mysite.tld in production</strong>. I provide the same contact e-mail address for all environments.</p> <p>Here the full manifest of the letsencrypt issuer for testing:</p> <pre><code>apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-staging spec: acme: email: [email protected] server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-staging-issuer-private-key solvers: - http01: ingress: class: public-test-it-it </code></pre> <p>And here the full manifest of the letsencrypt issuer for production:</p> <pre><code>apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-production spec: acme: email: [email protected] server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-production-issuer-private-key solvers: - http01: ingress: class: public-prod-it-it </code></pre> <p>When I deploy the application the first time, either in test or prod environements, everything works as expected, and cert-manager gets the TLS certificates signed by letsencrypt (staging or production server respectively) and stored in secrets. But when I deploy the application in another environment (so that I have both test and prod running in parallel), cert-manager can't get the certificates signed anymore, and the chain certificaterequest-&gt;order-&gt;challenge stops at the challenge step with the following output:</p> <pre><code>kubectl describe challenge xyz-tls-certificate ... Status: Presented: true Processing: true Reason: Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200' State: pending Events: &lt;none&gt; </code></pre> <p>and I can verify that indeed I get a 404 when trying to curl any of the challenges' URLs:</p> <pre><code>curl -v http://xyz.test.mysite.tld/.well-known/acme-challenge/IECcFDmQF_fzGKcA9hJvFGEWRjDCAE_fs8dnBXlr_wY * Trying vvv.xxx.yyy.zzz:80... 
* Connected to xyz.test.mysite.tld (vvv.xxx.yyy.zzz) port 80 (#0) &gt; GET /.well-known/acme-challenge/IECcFDmQF_fzGKcA9hJvFGEWRjDCAE_fs8dnBXlr_wY HTTP/1.1 &gt; Host: xyz.test.mysite.tld &gt; User-Agent: curl/7.74.0 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 404 Not Found &lt; date: Thu, 21 Jul 2022 09:48:08 GMT &lt; content-length: 21 &lt; content-type: text/plain; charset=utf-8 &lt; * Connection #0 to host xyz.test.mysite.tld left intact default backend - 404 </code></pre> <p>So letsencrypt can't access the challenges' URLs and won't sign the TLS certs.</p> <p>I tried to debug the 404 error and found that I can successfully curl the pods and services backing the challenges from another pod running in the cluster/namespace, but I get 404s from the outside world. This seems like an issue with the ingress controller (HAProxytech/kubernetes-ingress in my case), but I can't explain why the mechanism worked upon first deployment and then not anymore..</p> <p>I inspected the cert-manager logs and found lines such:</p> <pre><code>kubectl logs -n cert-manager cert-manager-... I0721 13:27:45.517637 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress &quot;msg&quot;=&quot;found one existing HTTP01 solver ingress&quot; &quot;dnsName&quot;=&quot;xyz.test.mysite.tld&quot; &quot;related_resource_kind&quot;=&quot;Ingress&quot; &quot;related_resource_name&quot;=&quot;cm-acme-http-solver-8668s&quot; &quot;related_resource_namespace&quot;=&quot;app-test-it-it&quot; &quot;related_resource_version&quot;=&quot;v1&quot; &quot;resource_kind&quot;=&quot;Challenge&quot; &quot;resource_name&quot;=&quot;xyz-tls-certificate-hwvjf-2516368856-1193545890&quot; &quot;resource_namespace&quot;=&quot;app-test-it-it&quot; &quot;resource_version&quot;=&quot;v1&quot; &quot;type&quot;=&quot;HTTP-01&quot; E0721 13:27:45.527238 1 sync.go:186] cert-manager/challenges &quot;msg&quot;=&quot;propagation check failed&quot; &quot;error&quot;=&quot;wrong status code '404', expected '200'&quot; &quot;dnsName&quot;=&quot;xyz.test.mysite.tld&quot; &quot;resource_kind&quot;=&quot;Challenge&quot; &quot;resource_name&quot;=&quot;xyz-tls-certificate-hwvjf-2516368856-1193545890&quot; &quot;resource_namespace&quot;=&quot;app-test-it-it&quot; &quot;resource_version&quot;=&quot;v1&quot; &quot;type&quot;=&quot;HTTP-01&quot; </code></pre> <p>which seems to confirm that cert-manager could self-check, from within the cluster, that the challenges' URLs are in place, but those are not reachable by the outside world (propagation check failed). It seems like cert-manager set-up challenges' pods/services/ingresses all right, but then requests sent to the challenges' URLs are not routed to the backing pods/services. And this only the second time I try to deploy the app..</p> <p>I also verified that, after issuing the certificates upon the first deployment, cert-manager (correctly) removed all related pods/services/ingresses from the related namespace, so there should not be any conflict from duplicated challenges' resources.</p> <p>I restate here that the certificates are issued flawlessly the first time I deploy the application, either in test or prod environment, but they won't be issued anymore if I deploy the app again in a different environment.</p> <p>Any idea why this is the case?</p>
<p>I finally found out what the issue was.</p> <p>Basically, I was installing a separate HAProxy ingress controller (haproxytech/kubernetes-ingress) per environment (test/prod), so each namespace had its own ingress controller which I referenced in my manifests. This should have worked in principle, but it turned out cert-manager could not reference the right ingress controller when setting up the letsencrypt challenges.</p> <p>The solution consisted of creating a single HAProxy ingress controller (in its own separate namespace) to serve the whole cluster and be referenced by all other environments/namespaces. This way the challenges for both the testing and production environments were correctly set up by cert-manager and verified by letsencrypt, which signed the required certificates.</p> <p><strong>In the end I highly recommend using a single HAProxy ingress controller per cluster, installed in its own namespace.</strong> This configuration is less redundant and eliminates potential issues such as the one I faced.</p>
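<p>With a single controller, every environment's issuer can point at the same ingress class; a sketch (the class name <code>haproxy</code> is an assumption and has to match whatever class the shared controller registers):</p> <pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: [email protected]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-issuer-private-key
    solvers:
    - http01:
        ingress:
          class: haproxy   # same class referenced by test and prod issuers
</code></pre>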
<p>Background:</p> <ul> <li><code>kubectl</code> version- 1.24</li> <li>Docker version- 20.10.17</li> <li>MacOS Monterey v12.4</li> </ul> <p>Question:</p> <p>I'm following <a href="https://blog.gruntwork.io/a-crash-course-on-kubernetes-a96c3891ad82" rel="nofollow noreferrer">this tutorial</a>, trying to set up a local k8s deployment + service. I successfully ran the following commands:</p> <pre><code>$ kubectl config use-context docker-desktop Switched to context &quot;docker-desktop&quot;. $ kubectl get nodes NAME STATUS ROLES AGE VERSION docker-desktop Ready control-plane 154m v1.24.1 $ kubectl create deployment simple-webapp \ --image training/webapp \ --replicas=2 \ --port=5000 deployment.apps/simple-webapp created $ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE simple-webapp 2/2 2 2 3m23s $ kubectl create service loadbalancer simple-webapp \ --tcp=80:5000 service/simple-webapp created $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 156m simple-webapp LoadBalancer 10.109.168.49 &lt;pending&gt; 80:30582/TCP 9m34s </code></pre> <p>However, when I try to run <code>curl http://localhost</code> as mentioned in the tutorial, instead of getting the expected output (<code>Hello world!</code>), I get:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;301 Moved Permanently&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;301 Moved Permanently&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.21.2&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I'm brand-new to k8s and have no idea where to start diagnosing the problem. Not sure if this is useful info, but I ran <code>docker ps</code> and it looks like there are two k8s-related docker containers currently running:</p> <pre><code>$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 411b7110b315 training/webapp &quot;python app.py&quot; 11 minutes ago Up 11 minutes k8s_webapp_simple-webapp-5d47556b5-jkvsc_default_9893ef1a-4bc5-43f3-80ac-151d567a89f6_0 9752c12315cc training/webapp &quot;python app.py&quot; 11 minutes ago Up 11 minutes k8s_webapp_simple-webapp-5d47556b5-269s8_default_5c50b3f8-57a9-4066-bd66-498cd30c758c_0 </code></pre> <p>I do see <code>nginx</code> referenced in the <code>curl</code> output, does this mean that the 301 redirect is coming from inside the <code>training/webapp</code> process from the Docker container? If so, how would I fix this?</p> <p>EDIT: I tried creating both the deployment and service on a different port (<code>5001</code> instead of <code>5000</code>), since when I followed part 1 of the tutorial I ran into problems with port 5000 already being in use. That worked for part 1 of the tutorial, however unfortunately it didn't solve my problem for part 2- I ran into the same <code>301 redirect</code> problem when using both port 5000 and 5001.</p> <p>EDIT 2: When I run <code>kubectl get pods</code>, I see the following:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE simple-webapp-5d47556b5-sg9h4 1/1 Running 0 32s simple-webapp-5d47556b5-xw2nh 1/1 Running 0 32s </code></pre> <p>I continue to see the <code>301 redirect</code> even after running <code>curl</code> to confirm the pods are up.</p> <p>EDIT 3: I <em>do</em> notice that when I run <code>kubectl get nodes</code>, the single node has a role of <code>control-plane</code>, <em>not</em> <code>control-plane,master</code> as noted in the tutorial. 
I also notice that running <code>kubectl get services</code> shows that the <code>EXTERNAL-IP</code> of the loadbalancer is <code>pending</code>, not <code>localhost</code> as expected from the tutorial. Are either of these facts relevant?</p> <p>EDIT 4: Further down in the tutorial, we delete both the deployment and service that were previously created, and we re-create them using <code>deployment.yml</code> and <code>service.yml</code> files, alongside the commands <code>kubectl apply -f deployment.yml</code> and <code>kubectl apply -f service.yml</code>. When I copy/paste the YAML from the tutorial into new files with the specified filenames, I get the same <code>301 Redirect</code> error.</p> <p>EDIT 5: In <a href="https://news.ycombinator.com/item?id=32216700" rel="nofollow noreferrer">this Hacker News comment thread</a>, I saw that Kubernetes had removed support for Docker as of v1.24. I had previously upgraded my <code>kubectl</code> Homebrew version to 1.24 because my prior version didn't support the <code>--replicas</code> flag. When I read this thread, I downgraded to v1.22 (the previous version), verified the version was correct via <code>kubectl version</code>, deleted/recreated the previously-created deployment and service, and tried <code>curl</code> again. I still get the <code>301 redirect</code> error.</p> <p>EDIT 6: Here's the full output of <code>curl -v http://localhost</code>:</p> <pre><code>curl -v http://localhost * Trying 127.0.0.1:80... * Connected to localhost (127.0.0.1) port 80 (#0) &gt; GET / HTTP/1.1 &gt; Host: localhost &gt; User-Agent: curl/7.79.1 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 301 Moved Permanently &lt; Server: nginx/1.21.2 &lt; Date: Mon, 25 Jul 2022 04:53:40 GMT &lt; Content-Type: text/html &lt; Content-Length: 169 &lt; Connection: keep-alive &lt; Location: https://localhost/ &lt; &lt;html&gt; &lt;head&gt;&lt;title&gt;301 Moved Permanently&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;301 Moved Permanently&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.21.2&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; * Connection #0 to host localhost left intact </code></pre> <p>Looks like it's trying to redirect from <code>http</code> to <code>https</code>? Could this be some setting within the <code>training/webapp</code> Docker image?</p> <p>Also, here's the result of <code>sudo lsof -i -P | grep LISTEN | grep :80</code>:</p> <pre><code>$ sudo lsof -i -P | grep LISTEN | grep :80 nginx 3582 richiethomas 6u IPv4 0x63a69bc74912e747 0t0 TCP *:80 (LISTEN) nginx 3582 richiethomas 7u IPv6 0x63a69bbdb3ebcbe7 0t0 TCP *:80 (LISTEN) nginx 3701 richiethomas 6u IPv4 0x63a69bc74912e747 0t0 TCP *:80 (LISTEN) nginx 3701 richiethomas 7u IPv6 0x63a69bbdb3ebcbe7 0t0 TCP *:80 (LISTEN) </code></pre>
<p>It looks like you already have <code>nginx</code> running on port <code>80</code> (and <code>443</code>) on your system, likely another docker container outside kubernetes, or a service installed directly on the host. Since it has already taken port <code>80</code>, your LoadBalancer stays in the pending state, and the 301 redirect you see comes from that nginx, not from your pods.</p> <p>Either stop that nginx service or use another port like <code>8080</code> for your LoadBalancer:</p> <pre><code>$ kubectl create service loadbalancer simple-webapp \ --tcp=8080:5000 </code></pre> <p>The <code>training/webapp</code> image does not use nginx.</p>
<p>I have 2 pods that are meant to send logs to Elastic search. Logs in /var/log/messages get sent but some reason service_name.log doesn't get sent - I think it is due to the configuration for Elastic search. There is a .conf file in these 2 pods that handle the connection to Elastic Search.</p> <p>I want to make changes to test if this is indeed the issue. I am not sure if the changes take effect as soon as I edit the file. Is there a way to restart/update the pod without losing changes I might make to this file?</p>
<p>To store non-confidential data as a configuration file in a volume, you could use <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMaps</a>.</p> <p>Here is an example of a Pod that mounts a ConfigMap in a volume:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: &quot;/etc/foo&quot; readOnly: true volumes: - name: foo configMap: name: myconfigmap </code></pre>
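<p>To turn an existing <code>.conf</code> file into that ConfigMap and have your changes survive pod restarts, something like this would work (file and deployment names are placeholders):</p> <pre><code># create or update the ConfigMap from the file
kubectl create configmap myconfigmap --from-file=elastic.conf \
  --dry-run=client -o yaml | kubectl apply -f -

# restart the pods so they mount the updated configuration
kubectl rollout restart deployment &lt;your-deployment&gt;
</code></pre>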
<p>I've just finished setting up AKS with AGIC and using Azure CNI. I'm trying to deploy NGINX to test if I set the AKS up correctly with the following configuration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway kubernetes.io/ingress.allow-http: &quot;false&quot; appgw.ingress.kubernetes.io/use-private-ip: &quot;false&quot; appgw.ingress.kubernetes.io/override-frontend-port: &quot;443&quot; spec: tls: - hosts: - my.domain.com secretName: aks-ingress-tls rules: - host: my.domain.com http: paths: - path: / pathType: Prefix backend: service: name: nginx-service port: number: 80 </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: replicas: 1 selector: matchLabels: component: nginx template: metadata: labels: component: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: nginx-service spec: type: ClusterIP selector: component: nginx ports: - port: 80 protocol: TCP </code></pre> <p>There's no error or any other log message on apply the above configurations.</p> <pre class="lang-bash prettyprint-override"><code>&gt; k apply -f nginx-test.yml deployment.apps/nginx-deployment created service/nginx-service created ingress.networking.k8s.io/nginx-ingress created </code></pre> <hr /> <p>But after a further investigation in the Application Gateway I found these entries in the Activity log popped up at the same time I applied the said configuration.</p> <p><a href="https://i.stack.imgur.com/RbXI6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RbXI6.png" alt="Activity log in AGIC" /></a></p> <p>Further details in one of the entries is as follows:</p> <ul> <li><strong>Operation name</strong>: Create or Update Application Gateway</li> <li><strong>Error code</strong>: RequestDisallowedByPolicy</li> <li><strong>Message</strong>: Resource 'my-application-gateway' was disallowed by policy. <pre class="lang-json prettyprint-override"><code>[ { &quot;policyAssignment&quot;: { &quot;name&quot;: &quot;Encryption In Transit&quot;, &quot;id&quot;: &quot;/providers/Microsoft.Management/managementGroups/***/providers/Microsoft.Authorization/policyAssignments/EncryptionInTransit&quot; }, &quot;policyDefinition&quot;: { &quot;name&quot;: &quot;HTTPS protocol only on Application Gateway listeners&quot;, &quot;id&quot;: &quot;/providers/microsoft.management/managementgroups/***/providers/Microsoft.Authorization/policyDefinitions/HttpsOnly_App_Gateways&quot; }, &quot;policySetDefinition&quot;: { &quot;name&quot;: &quot;Encryption In Transit&quot;, &quot;id&quot;: &quot;/providers/Microsoft.Management/managementgroups/***/providers/Microsoft.Authorization/policySetDefinitions/EncryptionInTransit&quot; } } ] </code></pre> </li> </ul> <p>My organization have a policy to enforce TLS but from my configuration I'm not sure what I did wrong as I have already configured the ingress to only use HTTPS and also have certificate (from the secret) installed.</p> <p>I'm not sure where to look and wish someone could guide me in the correct direction. Thanks!</p>
<p>• As you said, your organization has a <strong>policy that enforces TLS so that all communication is encrypted over HTTPS</strong>. When you create the <strong>NGINX deployment through the yaml file posted</strong>, the <strong>nginx application is exposed to the Application Gateway Ingress Controller over port 80, which is reserved for plain HTTP</strong>. Your configuration also disallows the use of private IPs with the AGIC, so although the frontend port is overridden to HTTPS 443, the gateway is still configured to reach the backend for 'my.domain.com' over port 80 instead of an SSL/TLS-secured port, and that is what the policy rejects.</p> <p>I would therefore suggest you <em><strong>configure the NGINX application with port 443 as the frontend port for the cluster IP service and ensure 'SSL redirection' is enabled</strong></em>, so that when the NGINX application is deployed it does not run into the policy restriction and fail. Also, refer to the below snapshot of the listeners in the application gateway and load balancer when provisioning an AGIC for an AKS cluster.</p> <p><a href="https://i.stack.imgur.com/lpx4N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lpx4N.png" alt="AKS application gateway backend port" /></a></p> <p><a href="https://i.stack.imgur.com/QJNYT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJNYT.png" alt="AKS application gateway frontend port" /></a></p> <p>For more detailed information on exposing applications through ingress in an AKS cluster, refer to the documentation:</p> <p><strong><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli</a></strong></p>
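<p>As a concrete starting point for the SSL-redirect part (a sketch only; whether it satisfies your specific policy assignment still needs to be verified), AGIC exposes it as an annotation:</p> <pre class="lang-yaml prettyprint-override"><code>metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/override-frontend-port: &quot;443&quot;
    appgw.ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
</code></pre>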
<p>Is it possible to programatically set the sourcetype to be the namespace from where the logs were generated? I am using the fluentd plugin to send data to the Splunk http event collector. Elsewhere, it was recommended to use ${record['kubernetes']['namespace_name'] to set the index name to be the namespace name. When I do this for sourcetype, that actual text just shows up in Splunk rather than translating to the specific namespace names.</p> <pre><code>@include systemd.conf @include kubernetes.conf &lt;match kubernetes.var.log.containers.fluentd**&gt; type null &lt;/match&gt; &lt;match **&gt; type splunk-http-eventcollector all_items true server host:port token **** index kubernetes protocol https verify false sourcetype ${record['kubernetes']['namespace_name'] source kubernetes buffer_type memory buffer_queue_limit 16 chunk_limit_size 8m buffer_chunk_limit 150k flush_interval 5s &lt;/match&gt; </code></pre>
<p>If you have not defined a <code>sourcetype</code> in an appropriate <code>props.conf</code> (and associated <code>transforms.conf</code>), Splunk will try to determine the sourcetype based on heuristics.</p>
<p>Those heuristics are generally not very accurate on custom data sources.</p>
<p>Instead of trying to &quot;programmatically set the sourcetype to be the namespace from where the logs were generated&quot;, add a field whose contents indicate the namespace from which the logs are generated (e.g. &quot;namespace&quot;), as sketched below.</p>
<p>It's much simpler, extends your logging more efficiently, and doesn't require defining dozens, hundreds, or thousands of individual sourcetypes.</p>
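<p>A minimal illustration of that idea (not part of the original answer): fluentd's bundled <code>record_transformer</code> filter can copy the namespace out of the kubernetes metadata into a plain field, assuming that metadata is still attached to the record at this point in your pipeline. It is shown with the current <code>@type</code> syntax; older fluentd configs like the one in the question may use <code>type</code> instead.</p>
<pre><code># Add a top-level &quot;namespace&quot; field to every kubernetes.* event
&lt;filter kubernetes.**&gt;
  @type record_transformer
  enable_ruby true
  &lt;record&gt;
    namespace ${record[&quot;kubernetes&quot;][&quot;namespace_name&quot;]}
  &lt;/record&gt;
&lt;/filter&gt;
</code></pre>
<p>The <code>namespace</code> field then travels with the event payload to the HTTP Event Collector and can be searched like any other field, while the sourcetype stays fixed.</p>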
<p>Is there a 'master switch' to enable/disable KEDA and HPA? I can enable/disable scaling rules by editing the replica count to 0, but is there a main enable/disable field?</p>
<pre><code>  cooldownPeriod: 1800
  maxReplicaCount: 8
  minReplicaCount: 2
  pollingInterval: 300
</code></pre>
<p>You can enable/disable scaling rules either by editing the replica count to 0 or by using a single field called <strong>Pause autoscaling</strong>.</p>
<p><strong>Pause autoscaling</strong> lets you enable/disable autoscaling by using the <strong>autoscaling.keda.sh/paused-replicas</strong> annotation. It can be useful for instructing KEDA to pause autoscaling of objects while you do cluster maintenance, or if you want to avoid resource starvation by removing non-mission-critical workloads.</p>
<p>You can enable this by adding the below annotation to your <code>ScaledObject</code> definition:</p>
<pre><code>metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: &quot;0&quot;
</code></pre>
<p>The presence of this annotation will pause autoscaling no matter what number of replicas is provided. The above annotation will scale your current workload to 0 replicas and pause autoscaling. You can set the value of replicas for an object to be paused at any arbitrary number. To enable autoscaling again, simply remove the annotation from the <code>ScaledObject</code> definition.</p>
<p>Refer to the <a href="https://keda.sh/docs/2.7/concepts/scaling-deployments/" rel="nofollow noreferrer">KEDA documentation</a> for more information.</p>
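<p>For context, here is a minimal sketch of where the annotation sits in a complete <code>ScaledObject</code>, reusing the numbers from the question; the target deployment name and the CPU trigger are placeholders, not taken from the question:</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler                          # placeholder name
  annotations:
    autoscaling.keda.sh/paused-replicas: &quot;0&quot;   # remove this line to resume autoscaling
spec:
  scaleTargetRef:
    name: my-app                               # placeholder Deployment to scale
  pollingInterval: 300
  cooldownPeriod: 1800
  minReplicaCount: 2
  maxReplicaCount: 8
  triggers:
    - type: cpu                                # placeholder trigger, keep your real one
      metricType: Utilization
      metadata:
        value: &quot;80&quot;
</code></pre>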
<p>We run an internal healthcheck of the service every 5 seconds, and we run Kubernetes liveness probes every 1 second. So in the worst case the Kubernetes load balancer has up-to-date information every 6 seconds.</p> <p>My question is: what happens when a client request hits a pod that is broken but not yet seen by the load balancer as unhealthy? Should the client implement retry logic? Or should we implement backend logic to handle the cases when a request hits a pod that is not yet seen as unhealthy by the load balancer?</p>
<p>Not sure exactly what your architecture looks like, but load balancers are generally placed in front of an ingress controller such as <strong>Nginx</strong>.</p>
<p>The load balancer backed by the ingress controller forwards the traffic to the <strong>K8s Service</strong>, and it is mostly the <strong>K8s</strong> Service, not the LB, that manages request routing to the Pods.</p>
<p>Based on readiness, the K8s Service routes requests only to Pods that are Ready, so if your Pod is <strong>NotReady</strong>, requests won't reach it. If, because of the delay you describe, a request does reach such a Pod before it is marked unhealthy, there is a chance you get an <code>internal error</code> or similar in return (see the readiness sketch after the Istio example below).</p>
<p><strong>Retries</strong></p>
<p>Yes, you can implement retries on the client side, but if you are on K8s you can also offload the retry part to a service mesh. It is easier to maintain, and the retry logic integrates well with K8s and the mesh.</p>
<p>With a service mesh like <strong>Istio</strong> you can implement a retry policy at the virtual service level:</p>
<pre><code>retries:
  attempts: 5
  retryOn: 5xx
</code></pre>
<p><strong>Virtual service</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s
</code></pre>
<p>Read more at: <a href="https://istio.io/latest/docs/concepts/traffic-management/#retries" rel="nofollow noreferrer">https://istio.io/latest/docs/concepts/traffic-management/#retries</a></p>
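<p>And on the readiness point above, a minimal sketch of a Deployment with a tight readiness probe (image name and health endpoint are placeholders, not from the question), so that a broken Pod is pulled out of the Service's endpoints as quickly as possible:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: example/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:             # the Service only routes to Pods that pass this check
            httpGet:
              path: /healthz          # assumed health endpoint
              port: 8080
            periodSeconds: 1
            failureThreshold: 1       # mark NotReady after a single failed check
</code></pre>
<p>Even with settings this tight there is still a window of a second or two in which a broken Pod can receive traffic, which is why a retry policy like the one above is worth having as well.</p>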
<p>Nginx can be configured to support the HAProxy proxy protocol for inbound traffic: <a href="http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol" rel="nofollow noreferrer">http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol</a></p>
<p>However, using <code>proxy_protocol on;</code>, nginx cannot handle HTTP(S) traffic without the PROXY line.</p>
<p>The traefik reverse proxy, on the other hand, is able to treat the PROXY line as optional:</p>
<blockquote>
<p>If Proxy Protocol header parsing is enabled for the entry point, this entry point can accept connections with or without Proxy Protocol headers. <a href="https://doc.traefik.io/traefik/routing/entrypoints/#proxyprotocol" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/entrypoints/#proxyprotocol</a></p>
</blockquote>
<p><strong>Is it possible to configure nginx to treat the PROXY line as optional?</strong></p>
<p>My use case is nginx as a kubernetes ingress to handle proxy protocol traffic from outside via a load balancer and without the proxy protocol for cluster-internal HTTP(S) traffic.</p>
<p>I know there is <em>hairpin-proxy</em> that works, but it's a bit hacky as it adds rewrites for all CoreDNS cluster-internal DNS entries. <a href="https://github.com/compumike/hairpin-proxy#:%7E:text=Modify%20nginx%20to%20treat%20the%20PROXY%20line%20as%20optional" rel="nofollow noreferrer">https://github.com/compumike/hairpin-proxy#:~:text=Modify%20nginx%20to%20treat%20the%20PROXY%20line%20as%20optional</a></p>
<hr />
<p>The K8s Cloud Setup used in this scenario:</p>
<ul>
<li>Rancher Kubernetes Engine (RKE)</li>
<li>Hetzner Cloud VMs for k8s Nodes</li>
<li>Hetzner Load Balancer as K8s Load Balancer</li>
<li>hcloud-cloud-controller-manager for provisioning Hetzner Cloud servers (<a href="https://github.com/hetznercloud/hcloud-cloud-controller-manager" rel="nofollow noreferrer">https://github.com/hetznercloud/hcloud-cloud-controller-manager</a>)</li>
<li>ingress-nginx as k8s ingress (<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a>)</li>
<li>Workload deployments using helm3</li>
</ul>
<p><a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt" rel="nofollow noreferrer">Proxy protocol specification</a> forbids this kind of mixing of proxy and non-proxy connections. The reason is that it would allow easy spoofing of ip addresses and other data.</p> <p>Quote from the specification:</p> <pre><code>The receiver MUST be configured to only receive the protocol described in this specification and MUST not try to guess whether the protocol header is present or not. This means that the protocol explicitly prevents port sharing between public and private access. Otherwise it would open a major security breach by allowing untrusted parties to spoof their connection addresses. The receiver SHOULD ensure proper access filtering so that only trusted proxies are allowed to use this protocol. </code></pre>