<p>I have a link to a public URL in the format of <code>https://storage.googleapis.com/companyname/foldername/.another-folder/file.txt</code></p> <p>I want to create an ingress rule that maps a path to this public file, so that whoever opens a specific URL, e.g. <a href="https://myapp.mydomain.com/.another-folder/myfile.txt" rel="nofollow noreferrer">https://myapp.mydomain.com/.another-folder/myfile.txt</a>, is served the file above.</p> <p>I tried a few different ingress rules such as:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: googlestoragebucket
spec:
  externalName: storage.googleapis.com
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
  type: ExternalName
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: staging-static-ip
    kubernetes.io/ingress.class: gce
spec:
  defaultBackend:
    service:
      name: website-frontend
      port:
        number: 80
  rules:
    - host: myapp.mydomain.com
      http:
        paths:
          - path: /.another-folder/
            pathType: Prefix
            backend:
              service:
                name: googlestoragebucket
                port:
                  number: 443
          - pathType: ImplementationSpecific
            backend:
              service:
                name: myactual-app
                port:
                  number: 80
</code></pre> <p>But I couldn't make it work. In this case I got the error: <code>Translation failed: invalid ingress spec: service &quot;staging/googlestoragebucket&quot; is type &quot;ExternalName&quot;, expected &quot;NodePort&quot; or &quot;LoadBalancer&quot;</code></p> <p>I don't mind other solutions that achieve the same result in the context of GCP and Kubernetes.</p> <p>Do you have any ideas?</p> <p>Looking forward to your suggestions.</p>
<p>I think you should be able to do this via a Cloud External Load Balancer with a backend bucket. Here is some information about that:</p> <p><a href="https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets</a></p> <p>Then you can point the ingress to that load balancer: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features</a></p> <p>Another option is to use a proxy such as Nginx; there is an issue on GitHub about this: <a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1809</a></p>
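<p>As a sketch of the Nginx-proxy option (this assumes the ingress-nginx controller rather than the <code>gce</code> ingress class used in the question, and that the bucket object is publicly readable), you can keep the <code>ExternalName</code> Service from the question and let Nginx rewrite the path and the Host header:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bucket-proxy
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/upstream-vhost: storage.googleapis.com
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    nginx.ingress.kubernetes.io/rewrite-target: /companyname/foldername/.another-folder/$1
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.mydomain.com
      http:
        paths:
          - path: /.another-folder/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: googlestoragebucket
                port:
                  number: 443
</code></pre> <p>The <code>upstream-vhost</code> annotation is what makes the request arrive at <code>storage.googleapis.com</code> with the expected Host header; without it the bucket endpoint will reject the request.</p>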
<p>When trying to deploy Clickhouse operator on Kubernetes, by default access_management is commented out in users.xml file. Is there a way to uncomment it when installing kubernetes operator?</p> <p>Clickhouse Operator deployment:</p> <pre><code>kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml </code></pre> <p>I have tried to do that through &quot;ClickHouseInstallation&quot; but that didn't work.</p> <p>Furthermore, Clickhouse operator source code doesn't contain parameter for access_management</p>
<p>Look at <code>kubectl explain chi.spec.configuration.files</code> and <code>kubectl explain chi.spec.configuration.users</code>.</p> <p>Try:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: access-management-example
spec:
  configuration:
    files:
      users.d/access_management.xml: |
        &lt;clickhouse&gt;&lt;users&gt;
        &lt;default&gt;&lt;access_management&gt;1&lt;/access_management&gt;&lt;/default&gt;
        &lt;/users&gt;&lt;/clickhouse&gt;
</code></pre> <p>Note that you will need to take care of replicating RBAC objects yourself when the cluster layout changes (e.g. during a scale-up).</p>
<p>I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the <code>kubectl</code> provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers. This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the <code>kubectl_manifest</code> resources:</p> <pre><code>Error: failed to create kubernetes rest client for read of resource: Get &quot;http://localhost/api?timeout=32s&quot;: dial tcp 127.0.0.1:80: connect: connection refused </code></pre> <p>Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!</p>
<p>Fixed the issue by adding <code>load_config_file = false</code> to the <code>kubectl</code> provider config. My provider config now looks like this:</p> <pre><code>data &quot;google_client_config&quot; &quot;default&quot; {}

provider &quot;kubernetes&quot; {
  host                   = &quot;https://${endpoint from GKE}&quot;
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
}

provider &quot;kubectl&quot; {
  host                   = &quot;https://${endpoint from GKE}&quot;
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(CA certificate from GKE)
  load_config_file       = false
}
</code></pre>
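<p>For completeness, a sketch of how the endpoint and CA placeholders are usually filled in, assuming the cluster is defined in the same configuration as <code>google_container_cluster.my_cluster</code> (the resource name is illustrative):</p> <pre><code>data &quot;google_client_config&quot; &quot;default&quot; {}

provider &quot;kubectl&quot; {
  host                   = &quot;https://${google_container_cluster.my_cluster.endpoint}&quot;
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
  load_config_file       = false
}
</code></pre>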
<p>If I do</p> <pre><code>POD=$($KUBECTL get pod -lsvc=app,env=production -o jsonpath=&quot;{.items[0].metadata.name}&quot;)
kubectl debug -it --image=mpen/tinker &quot;$POD&quot; -- zsh -i
</code></pre> <p>I can get into a shell running inside my pod, but I want access to the filesystem for a container I've called &quot;php&quot;. I think this should be at <code>/proc/1/root/app</code> but that directory doesn't exist. For reference, my Dockerfile has:</p> <pre><code>WORKDIR /app
COPY . .
</code></pre> <p>So all the files should be in the root <code>/app</code> directory.</p> <p>If I add <code>--target=php</code> then I get permission denied:</p> <pre><code>❯ cd /proc/1/root
cd: permission denied: /proc/1/root
</code></pre> <p>How do I get access to the files?</p>
<p>Reading through <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/" rel="nofollow noreferrer">the documentation</a>, using <code>kubectl debug</code> won't give you access to the filesystem in another container.</p> <p>The simplest option may be to use <code>kubectl exec</code> to start a shell inside an existing container. There are some cases in which this isn't an option (for example, some containers contain only a single binary, and won't have a shell or other common utilities available), but a php container will typically have a complete filesystem.</p> <p>In this case, you can simply:</p> <pre><code>kubectl exec -it $POD -- sh
</code></pre> <p>You can replace <code>sh</code> by <code>bash</code> or <code>zsh</code> depending on what shells are available in the existing image.</p> <hr /> <p>The linked documentation provides several other debugging options, but all involve working on <em>copies of</em> the pod.</p>
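<p>For completeness, a sketch of the copy-of-the-pod approach from that documentation, in case you do need a separate debug image alongside the application's files (the copy name <code>myapp-debug</code> is arbitrary):</p> <pre><code>kubectl debug -it &quot;$POD&quot; \
  --image=mpen/tinker \
  --copy-to=myapp-debug \
  --share-processes \
  -- zsh -i
</code></pre> <p>Inside the copy, find the PID of the php process with <code>ps</code>; its filesystem is then reachable under <code>/proc/&lt;pid&gt;/root</code>. Delete the copy afterwards with <code>kubectl delete pod myapp-debug</code>.</p>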
<p>I learnt that to run a container as rootless, you need to specify either the SecurityContext:runAsUser 1000 or specify the USER directive in the DOCKERFILE.</p> <p>Question on this is that there is no UID 1000 on the Kubernetes/Docker host system itself.</p> <p>I learnt before that Linux User Namespacing allows a user to have a different UID outside it's original NS.</p> <p>Hence, how does UID 1000 exist under the hood? Did the original root (UID 0) create a new user namespace which is represented by UID 1000 in the container?</p> <p>What happens if we specify UID 2000 instead?</p>
<p>Hope this answer helps you</p> <blockquote> <p>I learnt that to run a container as rootless, you need to specify either the SecurityContext:runAsUser 1000 or specify the USER directive in the DOCKERFILE</p> </blockquote> <p>You are correct except in <code>runAsUser: 1000</code>. you can specify any UID, not only <code>1000</code>. Remember any UID you want to use (<code>runAsUser: UID</code>), that <code>UID</code> should already be there!</p> <hr /> <p>Often, base images will already have a user created and available but leave it up to the development or deployment teams to leverage it. For example, the official Node.js image comes with a user named node at UID <code>1000</code> that you can run as, but they do not explicitly set the current user to it in their Dockerfile. We will either need to configure it at runtime with a <code>runAsUser</code> setting or change the current user in the image using a <code>derivative Dockerfile</code>.</p> <pre class="lang-yaml prettyprint-override"><code>runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000 </code></pre> <p>Remmeber that <code>runAsUser</code> and <code>runAsGroup</code> <strong>ensures</strong> container processes do not run as the <code>root</code> user but don’t rely on the <code>runAsUser</code> or <code>runAsGroup</code> settings to guarantee this. Be sure to also set <code>runAsNonRoot: true</code>.</p> <hr /> <p>Here is full example of <code>securityContext</code>:</p> <pre class="lang-yaml prettyprint-override"><code># generic pod spec that's usable inside a deployment or other higher level k8s spec apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: # basic container details - name: my-container-name # never use reusable tags like latest or stable image: my-image:tag # hardcode the listening port if Dockerfile isn't set with EXPOSE ports: - containerPort: 8080 protocol: TCP readinessProbe: # I always recommend using these, even if your app has no listening ports (this affects any rolling update) httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload path: /ready port: 8080 livenessProbe: # only needed if your app tends to go unresponsive or you don't have a readinessProbe, but this is up for debate httpGet: # Lots of timeout values with defaults, be sure they are ideal for your workload path: /alive port: 8080 resources: # Because if limits = requests then QoS is set to &quot;Guaranteed&quot; limits: memory: &quot;500Mi&quot; # If container uses over 500MB it is killed (OOM) #cpu: &quot;2&quot; # Not normally needed, unless you need to protect other workloads or QoS must be &quot;Guaranteed&quot; requests: memory: &quot;500Mi&quot; # Scheduler finds a node where 500MB is available cpu: &quot;1&quot; # Scheduler finds a node where 1 vCPU is available # per-container security context # lock down privileges inside the container securityContext: allowPrivilegeEscalation: false # prevent sudo, etc. 
privileged: false # prevent acting like host root terminationGracePeriodSeconds: 600 # default is 30, but you may need more time to gracefully shutdown (HTTP long polling, user uploads, etc) # per-pod security context # enable seccomp and force non-root user securityContext: seccompProfile: type: RuntimeDefault # enable seccomp and the runtimes default profile runAsUser: 1001 # hardcode user to non-root if not set in Dockerfile runAsGroup: 1001 # hardcode group to non-root if not set in Dockerfile runAsNonRoot: true # hardcode to non-root. Redundant to above if Dockerfile is set USER 1000 </code></pre> <hr /> <p>sources:</p> <ul> <li><a href="https://github.com/BretFisher/podspec" rel="nofollow noreferrer">Kubernetes Pod Specification Good Defaults</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">Configure a Security Context for a Pod or Container</a></li> <li><a href="https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/" rel="nofollow noreferrer">10 Kubernetes Security Context settings you should understand</a></li> </ul>
<p>Is there any way to perform an update action on all pods simultaneously?</p> <p>We have a process running in kubernetes as a stateful set where we want to update all the pods at the same time. We cannot seem to find a configuration for that. I am aware of <code>rollingUpdate</code>, which only updates one pod at a time.</p> <p>This is what we have currently</p> <pre><code>  updateStrategy:
    rollingUpdate:
      partition: 2
    type: RollingUpdate
</code></pre> <p>I also tried with <code>maxUnavailable</code>, but still did not work. Is there any other hack to get this done?</p>
<p>There is no native alternative for updating all pods simultaneously when using StatefulSets.</p> <p>The closest thing to it is the <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#parallel-pod-management" rel="nofollow noreferrer">Parallel Pod Management policy</a>, but it only affects the behavior for scaling operations (including initial setup) and doesn't work for updates.</p> <hr /> <p>However, the OpenKruise project has an extended component suite whose <a href="https://openkruise.io/docs/next/user-manuals/advancedstatefulset/" rel="nofollow noreferrer">Advanced StatefulSet</a> supports this kind of update workflow.</p> <p>Here is a minimal working example that will <strong>upgrade all pods at once</strong>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps.kruise.io/v1beta1
kind: StatefulSet
metadata:
  name: sample
spec:
  replicas: 5
  serviceName: fake-service
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      readinessGates:
        - conditionType: InPlaceUpdateReady
      containers:
        - name: main
          image: nginx:alpine
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
</code></pre> <p>Note this will <strong>certainly cause downtime</strong>, but you can adjust to something like <code>maxUnavailable: 50%</code> to make it more resilient.</p>
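<p>If pulling in an extra controller is not an option, a rougher workaround with a plain StatefulSet is the <code>OnDelete</code> strategy combined with <code>podManagementPolicy: Parallel</code>: the controller then only replaces pods when you delete them, so deleting them all at once effectively updates them simultaneously (again with downtime). A minimal sketch:</p> <pre class="lang-yaml prettyprint-override"><code>  podManagementPolicy: Parallel
  updateStrategy:
    type: OnDelete
</code></pre> <p>Then, after changing the pod template:</p> <pre><code>kubectl delete pod -l app=&lt;app_name&gt;
</code></pre>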
<p>I want to set some proxy (http, https etc) config on my k8s container. That k8s container will run some user scripts, it may also run some <code>sudo</code> commands in that script.</p> <p>What is the best way to pass this proxy config in my k8s container.</p> <p>One approach I was trying was passing the proxy config as env variable to the container in my deployment manifest file.</p> <p>Drawback: It will set only for default user and not root user. <code>sudo</code> commands will fail. User/container start script will need to set proxy config in the container for root user.</p> <p>Any other alternative or clean way of passing env variable to all users in the container?</p> <hr /> <p>Another approach I am using now is <code>env_keep</code> and adding proxy config env variable list in <code>env_keep</code> and when command is executed with sudo in pod, it will take env variable of default user that I set using deployment manifest.</p> <hr /> <p>Is there any other alternatives ?</p>
<p>Best practice is to run an <code>init container</code> as <strong>root</strong> to do the privileged setup; once your startup script is done, you can run the main container as a non-root user.</p> <p>Init containers: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p> <p><strong>Update:</strong></p> <p>You can also use <code>sudo -E</code>:</p> <pre><code>-E, --preserve-env
    Indicates to the security policy that the user wishes to preserve their existing
    environment variables. The security policy may return an error if the user does
    not have permission to preserve the environment.
</code></pre>
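<p>Putting the two together, a sketch that makes the proxy settings visible both to the default user and to <code>sudo</code> without a custom start script (the proxy host and the sudoers file path are placeholders; the sudoers line is standard <code>env_keep</code> syntax):</p> <pre><code># deployment manifest: set the variables once for the container
        env:
          - name: HTTP_PROXY
            value: http://proxy.example.com:3128
          - name: HTTPS_PROXY
            value: http://proxy.example.com:3128
          - name: NO_PROXY
            value: localhost,127.0.0.1,.svc,.cluster.local
</code></pre> <pre><code># baked into the image (Dockerfile) so plain `sudo` keeps them
RUN echo 'Defaults env_keep += &quot;HTTP_PROXY HTTPS_PROXY NO_PROXY http_proxy https_proxy no_proxy&quot;' &gt; /etc/sudoers.d/keep-proxy \
 &amp;&amp; chmod 440 /etc/sudoers.d/keep-proxy
</code></pre>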
<p>Is there any shorter alias on the kubectl/oc for deployments? In OpenShift you have deployment configurations and you can access them using their alias <code>dc</code>.</p> <p>Writing <code>deployment</code> all the time takes too much time. Any idea how to shorten that without setting a local alias on each machine?</p> <p>Reality:</p> <pre><code>kubectl get deployment/xyz </code></pre> <p>Dream:</p> <pre><code>kubectl get d/xyz </code></pre>
<p>All of the above answers are correct and I endorse the idea of using aliases: I have several myself. But the question was fundamentally about shortnames of API Resources, like <code>dc</code> for <code>deploymentcontroller</code>.</p> <p>And the answer to that question is to use <code>oc api-resources</code> (or <code>kubectl api-resources</code>). Each API Resource also includes any SHORTNAMES that are available. For example, the results for me of <code>oc api-resources |grep deploy</code> on OpenShift 4.10 is:</p> <pre><code>➜oc api-resources |grep deploy deployments deploy apps/v1 true Deployment deploymentconfigs dc apps.openshift.io/v1 true DeploymentConfig </code></pre> <p>Thus we can see that the previously given answer of &quot;deploy&quot; is a valid SHORTNAME of deployments. But it's also useful for just browsing the list of other available abbreviations.</p> <p>I'll also make sure that you are aware of <code>oc completion</code>. For example <code>source &lt;(oc completion zsh)</code> for zsh. You say you have multiple devices, so you may not set up aliases, but completions are always easy to add. That way you should never have to type more than a few characters and then autocomplete yourself the rest of the way.</p>
<p>After I install the promethus using helm in kubernetes cluster, the pod shows error like this:</p> <pre><code>0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. </code></pre> <p>this is the deployment yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: kube-prometheus-1660560589-node-exporter-n7rzg generateName: kube-prometheus-1660560589-node-exporter- namespace: reddwarf-monitor uid: 73986565-ccd8-421c-bcbb-33879437c4f3 resourceVersion: '71494023' creationTimestamp: '2022-08-15T10:51:07Z' labels: app.kubernetes.io/instance: kube-prometheus-1660560589 app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: node-exporter controller-revision-hash: 65c69f9b58 helm.sh/chart: node-exporter-3.0.8 pod-template-generation: '1' ownerReferences: - apiVersion: apps/v1 kind: DaemonSet name: kube-prometheus-1660560589-node-exporter uid: 921f98b9-ccc9-4e84-b092-585865bca024 controller: true blockOwnerDeletion: true status: phase: Pending conditions: - type: PodScheduled status: 'False' lastProbeTime: null lastTransitionTime: '2022-08-15T10:51:07Z' reason: Unschedulable message: &gt;- 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. qosClass: BestEffort spec: volumes: - name: proc hostPath: path: /proc type: '' - name: sys hostPath: path: /sys type: '' - name: kube-api-access-9fj8v projected: sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: name: kube-root-ca.crt items: - key: ca.crt path: ca.crt - downwardAPI: items: - path: namespace fieldRef: apiVersion: v1 fieldPath: metadata.namespace defaultMode: 420 containers: - name: node-exporter image: docker.io/bitnami/node-exporter:1.3.1-debian-11-r23 args: - '--path.procfs=/host/proc' - '--path.sysfs=/host/sys' - '--web.listen-address=0.0.0.0:9100' - &gt;- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ - &gt;- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) ports: - name: metrics hostPort: 9100 containerPort: 9100 protocol: TCP resources: {} volumeMounts: - name: proc readOnly: true mountPath: /host/proc - name: sys readOnly: true mountPath: /host/sys - name: kube-api-access-9fj8v readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount livenessProbe: httpGet: path: / port: metrics scheme: HTTP initialDelaySeconds: 120 timeoutSeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 6 readinessProbe: httpGet: path: / port: metrics scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 6 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent securityContext: runAsUser: 1001 runAsNonRoot: true restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst serviceAccountName: kube-prometheus-1660560589-node-exporter serviceAccount: kube-prometheus-1660560589-node-exporter hostNetwork: true hostPID: true securityContext: fsGroup: 1001 affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - k8smasterone podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/instance: kube-prometheus-1660560589 app.kubernetes.io/name: node-exporter namespaces: 
- reddwarf-monitor topologyKey: kubernetes.io/hostname schedulerName: default-scheduler tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule - key: node.kubernetes.io/network-unavailable operator: Exists effect: NoSchedule priority: 0 enableServiceLinks: true preemptionPolicy: PreemptLowerPriority </code></pre> <p>I have checked the host machine and found the port 9100 is free, why still told that no port for this pod? what should I do to avoid this problem? this is the host port 9100 check command:</p> <pre><code>[root@k8smasterone grafana]# lsof -i:9100 [root@k8smasterone grafana]# </code></pre> <p>this is the pod describe info:</p> <pre><code>➜ ~ kubectl describe pod kube-prometheus-1660560589-node-exporter-n7rzg -n reddwarf-monitor Name: kube-prometheus-1660560589-node-exporter-n7rzg Namespace: reddwarf-monitor Priority: 0 Node: &lt;none&gt; Labels: app.kubernetes.io/instance=kube-prometheus-1660560589 app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=node-exporter controller-revision-hash=65c69f9b58 helm.sh/chart=node-exporter-3.0.8 pod-template-generation=1 Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: DaemonSet/kube-prometheus-1660560589-node-exporter Containers: node-exporter: Image: docker.io/bitnami/node-exporter:1.3.1-debian-11-r23 Port: 9100/TCP Host Port: 9100/TCP Args: --path.procfs=/host/proc --path.sysfs=/host/sys --web.listen-address=0.0.0.0:9100 --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) Liveness: http-get http://:metrics/ delay=120s timeout=5s period=10s #success=1 #failure=6 Readiness: http-get http://:metrics/ delay=30s timeout=5s period=10s #success=1 #failure=6 Environment: &lt;none&gt; Mounts: /host/proc from proc (ro) /host/sys from sys (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9fj8v (ro) Conditions: Type Status PodScheduled False Volumes: proc: Type: HostPath (bare host directory volume) Path: /proc HostPathType: sys: Type: HostPath (bare host directory volume) Path: /sys HostPathType: kube-api-access-9fj8v: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m54s (x233 over 3h53m) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. 
</code></pre> <p>this is the netstat:</p> <pre><code>[root@k8smasterone ~]# netstat -plant |grep 9100 [root@k8smasterone ~]# </code></pre> <p>I also tried this to allow the pods running in master node by add this config:</p> <pre><code>tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master </code></pre> <p>still did not fixed this problem.</p>
<p>When you configure your pod with <code>hostNetwork: true</code>, the containers running in this pod can directly see the network interfaces of the host machine where the pod was started.</p> <p>The container port is exposed on the external network at <code>&lt;hostIP&gt;:&lt;hostPort&gt;</code>, where <code>hostPort</code> is the port requested in the pod's <code>hostPort</code> field.</p> <p>To bypass your problem, you have two options:</p> <ul> <li>setting <code>hostNetwork: false</code></li> <li>choosing a different <code>hostPort</code> (preferably in the range 49152 to 65535)</li> </ul>
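<p>For the second option, the change is only the <code>hostPort</code> in the DaemonSet's container ports, shown here as a fragment (with a Helm-installed chart you would make the equivalent change through the chart's values rather than editing the pod directly):</p> <pre><code>        ports:
          - name: metrics
            containerPort: 9100
            hostPort: 49152   # any free port, ideally in 49152-65535
            protocol: TCP
</code></pre>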
<p>Defining the <strong>defaultMode</strong> in a Kubernetes <strong>volume</strong> field within a <strong>deployment</strong> element can become quite tricky.</p> <p>It expects three decimals, corresponding to the binary UNIX permissions.</p> <p>As an example, to mount the ConfigMap with permissions r------, you'd need to specify 256.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: foo namespace: foo spec: replicas: 1 selector: matchLabels: app: foo template: metadata: labels: app: foo spec: containers: - image: php-fpm:latest volumeMounts: - name: phpini mountPath: /usr/local/etc/php/conf.d/99-settings.ini readOnly: true subPath: 99-settings.ini volumes: - configMap: defaultMode: 256 name: phpini-configmap optional: false name: phpini --- apiVersion: v1 kind: ConfigMap metadata: labels: app: foo namespace: foo name: phpini-configmap data: 99-settings.ini: | ; Enable Zend OPcache extension module zend_extension = opcache </code></pre>
<p>Use the following table:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>unix decimal</th> <th>unix readable</th> <th>binary equivalent</th> <th>defaultMode</th> </tr> </thead> <tbody> <tr> <td>400</td> <td>r--------</td> <td>100000000</td> <td>256</td> </tr> <tr> <td>440</td> <td>r--r-----</td> <td>100100000</td> <td>288</td> </tr> <tr> <td>444</td> <td>r--r--r--</td> <td>100100100</td> <td>292</td> </tr> <tr> <td>600</td> <td>rw-------</td> <td>110000000</td> <td>384</td> </tr> <tr> <td>600</td> <td>rw-r-----</td> <td>110100000</td> <td>416</td> </tr> <tr> <td>660</td> <td>rw-rw----</td> <td>110110000</td> <td>432</td> </tr> <tr> <td>660</td> <td>rw-rw-r--</td> <td>110110100</td> <td>436</td> </tr> <tr> <td>666</td> <td>rw-rw-rw-</td> <td>110110110</td> <td>438</td> </tr> <tr> <td>700</td> <td>rwx------</td> <td>111000000</td> <td>448</td> </tr> <tr> <td>770</td> <td>rwxrwx---</td> <td>111111000</td> <td>504</td> </tr> <tr> <td>777</td> <td>rwxrwxrwx</td> <td>111111111</td> <td>511</td> </tr> </tbody> </table> </div> <p>A more direct way to do this is to use a base8 to base10 converter like <a href="http://www.unitconversion.org/numbers/base-8-to-base-10-conversion.html" rel="noreferrer">this one</a></p>
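<p>Alternatively, if the manifest is applied as YAML (not JSON), you can skip the conversion entirely and write the value in octal notation, which parses to the same integer:</p> <pre class="lang-yaml prettyprint-override"><code>      volumes:
        - configMap:
            defaultMode: 0400   # parsed as octal, i.e. decimal 256 (r--------)
            name: phpini-configmap
            optional: false
          name: phpini
</code></pre>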
<p>In a container-based environment such as Kubernetes, the UseContainerSupport JVM feature is handy as it allows configuring heap size as a percentage of container memory via options such as XX:MaxRAMPercentage instead of a static value via Xmx. This way you don't have to potentially adjust your JVM options every time the container memory limit changes, potentially allowing use of vertical autoscaling. The primary goal is hitting a Java OufOfMemoryError rather than running out of memory at the container (e.g. K8s OOMKilled).</p> <p>That covers heap memory. In applications that use a significant amount of direct memory via NIO (e.g. gRPC/Netty), what are the options for this? The main option I could find is XX:MaxDirectMemorySize, but this takes in a static value similar to Xmx.</p>
<p>There's no similar switch for MaxDirectMemorySize as far as I know. But by default (if you don't specify <code>-XX:MaxDirectMemorySize</code>) the limit is the same as for <code>MaxHeapSize</code>. That means, if you set <code>-XX:MaxRAMPercentage</code> then the same limit applies to <code>MaxDirectMemory</code>.</p> <p>Note: you cannot verify this simply via <code>-XX:+PrintFlagsFinal</code> because that prints 0:</p> <pre><code>java -XX:MaxRAMPercentage=1 -XX:+PrintFlagsFinal -version | grep 'Max.*Size'
...
   uint64_t MaxDirectMemorySize   = 0          {product} {default}
     size_t MaxHeapSize           = 343932928  {product} {ergonomic}
...
openjdk version &quot;17.0.2&quot; 2022-01-18
...
</code></pre> <p>See also <a href="https://dzone.com/articles/default-hotspot-maximum-direct-memory-size" rel="nofollow noreferrer">https://dzone.com/articles/default-hotspot-maximum-direct-memory-size</a> and <a href="https://stackoverflow.com/questions/53543062/replace-access-to-sun-misc-vm-for-jdk-11">Replace access to sun.misc.VM for JDK 11</a></p> <p>My own experiments here: <a href="https://github.com/jumarko/clojure-experiments/pull/32" rel="nofollow noreferrer">https://github.com/jumarko/clojure-experiments/pull/32</a></p>
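<p>In a container you can still combine the two flags, keeping the heap relative to the container memory and pinning direct memory explicitly only where the default is not what you want; a sketch using <code>JAVA_TOOL_OPTIONS</code> in the pod spec (values are illustrative):</p> <pre><code>        env:
          - name: JAVA_TOOL_OPTIONS
            value: &quot;-XX:MaxRAMPercentage=50.0 -XX:MaxDirectMemorySize=256m&quot;
</code></pre>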
<p>I am using active FTP to transfer file(via the <strong>PORT</strong> command). I can initiate active FTP sessions using <strong>LoadBalancer IP</strong> and Loadbalancer Service <strong>Target Port</strong>. I tried a similar way to initiate active FTP session using <strong>Node External IP</strong> and <strong>Node Port</strong> but I am not able to do it. I am using npm.js <strong>basic-ftp</strong> module for it. The code for my connection is given below:</p> <pre><code>let client = new ftp.Client(ftpTimeout * 1000); client.prepareTransfer = prepareTransfer; </code></pre> <p>And prepareTransfer has implementation like:</p> <pre><code>export async function prepareTransfer(ftp: FTPContext): Promise&lt;FTPResponse&gt; { // Gets the ip address of either LoadBalancer(for LoadBalancer service) or Node(For NodePort Service) const ip = await getIp(); // Gets a TargetPort for LoadBalancer service or Node Port for NodePort service const port = await getFtpPort(); // Example command: PORT 192,168,150,80,14,178 // The first four octets are the IP address while the last two octets comprise the //port that will be used for the data connection. // To find the actual port multiply the fifth octet by 256 and then add the sixth //octet to the total. // Thus in the example above the port number is ( (14*256) + 178), or 3762 const p1 = Math.floor(port / 256); const p2 = port % 256; const command = `PORT ${ip.replace(/\./g, ',')},${p1},${p2}`; // https://github.com/patrickjuchli/basic-ftp/issues/195 // Data socket pipes before the connection so use the mock socket. const mockSocket = new MockSocket(); ftp.dataSocket = mockSocket as any; let ftpResponse = new Promise&lt;FTPResponse&gt;((resolve, reject) =&gt; { let response: FTPResponse; const server = createServer(function (socket) { console.log('FTP Client connected'); mockSocket.setSocket(socket, server, port); }); server.on('error', err =&gt; { reject(err); }); server.listen(port, async () =&gt; { console.log(`socket server for FTP started at port ${port}`); // send the port request response = await ftp.request(command); if (response.code !== 200) { reject('Could not connect'); server.close(); } resolve(response); }); }); return ftpResponse; } </code></pre> <p>Any idea why NodePort is not working here?</p> <p>Helm Chart for NodePort:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: &quot;ftp-service&quot; spec: type: NodePort externalTrafficPolicy: Local selector: statefulset.kubernetes.io/pod-name: &quot;pod1&quot; ports: - protocol: TCP name: ftp-8000 port: 8000 targetPort: 8000 nodePort: 30014 </code></pre> <p>Helm chart for loadbalancer:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: &quot;ftp-service-load&quot; spec: type: LoadBalancer externalTrafficPolicy: Local selector: statefulset.kubernetes.io/pod-name: &quot;pod1&quot; ports: - protocol: TCP name: ftp-8000 port: 8000 targetPort: 8000 nodePort: 30014 </code></pre>
<p>OK, I finally figured out the solution. I am using <strong>Azure Kubernetes Service (AKS)</strong>. I needed to set an inbound port rule for the appropriate <code>NodePort</code> (under <strong>Virtual machine scale sets</strong>, selecting the right node pool whose public IPs are enabled, and under the <strong>Networking</strong> tab). I also needed to set <code>externalTrafficPolicy</code> to <code>Cluster</code>, or simply remove the line <code>externalTrafficPolicy: Local</code> from the Helm chart.</p>
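<p>For reference, this is the NodePort Service from the question with the <code>externalTrafficPolicy: Local</code> line removed (the default is <code>Cluster</code>):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: &quot;ftp-service&quot;
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: &quot;pod1&quot;
  ports:
    - protocol: TCP
      name: ftp-8000
      port: 8000
      targetPort: 8000
      nodePort: 30014
</code></pre>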
<p>I found the mention of an agent node in the aks documentation but i'm not finding the defition of it. can anyone please explain it to ? also want to know if is it an azure concept or a kubernetes concept.</p> <p>Regards,</p>
<p>In Kubernetes the term <code>node</code> refers to a compute node. Depending on the role of the node it is usually referred to as <code>control plane node</code> or <code>worker node</code>. From the <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p><strong>A Kubernetes cluster consists of a set of worker machines, called nodes</strong>, that run containerized applications. Every cluster has at least one worker node.</p> <p>The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.</p> </blockquote> <p><code>Agent nodes</code> in AKS refers to the worker nodes (which should not be confused with the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a>, which is the primary &quot;node agent&quot; that runs on each worker node)</p>
<p>I'm testing a database insert statement similar to the following which works locally but not after deployment to a kubernetes cluster connected to a managed database host:</p> <pre><code>func Insert(w http.ResponseWriter, r *http.Request) { db := dbConn() //If it's a post request, assign a variable to the value returned in each field of the New page. if r.Method == &quot;POST&quot; { email := r.FormValue(&quot;email&quot;) socialNetwork := r.FormValue(&quot;social_network&quot;) socialHandle := r.FormValue(&quot;social_handle&quot;) createdOn := time.Now().UTC() //prepare a query to insert the data into the database insForm, err := db.Prepare(`INSERT INTO public.users(email, social_network, social_handle) VALUES ($1,$2, $3)`) //check for and handle any errors CheckError(err) //execute the query using the form data _, err = insForm.Exec(email, socialNetwork, socialHandle) CheckError(err) //print out added data in terminal log.Println(&quot;INSERT: email: &quot; + email + &quot; | social network: &quot; + socialNetwork + &quot; | social handle : &quot; + socialHandle + &quot; | created on: &quot; + createdOn.String() + &quot; | createdOn is type: &quot; + reflect.TypeOf(createdOn).String()) sendThanks(socialHandle, email) } defer db.Close() //redirect to the index page http.Redirect(w, r, &quot;/thanks&quot;, 301) } </code></pre> <p>I've configured a deployment as follows with a corresponding secrets object:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: novvsworld namespace: novvsworld spec: replicas: 1 selector: matchLabels: app: novvsworld template: metadata: labels: app: novvsworld spec: containers: - name: novvsworld image: my.registry.com/registry/novvsworld:latest resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 3000 env: - name: DBHOST valueFrom: secretKeyRef: name: novvworld-secrets key: DBHOST - name: DBPORT valueFrom: secretKeyRef: name: novvworld-secrets key: DBPORT - name: DBUSER valueFrom: secretKeyRef: name: novvworld-secrets key: DBUSER - name: DBPASS valueFrom: secretKeyRef: name: novvworld-secrets key: DBPASS - name: DBSSLMODE valueFrom: secretKeyRef: name: novvworld-secrets key: DBSSLMODE - name: SENDGRID_API_KEY valueFrom: secretKeyRef: name: novvworld-secrets key: SENDGRID_API_KEY </code></pre> <p>The value of 'DBSSLMODE' is currently set to &quot;disabled&quot; in the secrets file.</p> <p>When testing the insert statement by inputting data through the front end, the following panic is returned:</p> <p><code>022/08/15 18:50:58 http: panic serving 10.244.0.38:47590: pq: no pg_hba.conf entry for host &quot;167.172.231.113&quot;, user &quot;novvsworld&quot;, database &quot;novvsworld&quot;, no encryption </code></p> <p>Am I missing an additional configuration for the encryption and shouldn't setting the sslmode to disabled bypass this?</p>
<blockquote> <p>Am I missing an additional configuration for the encryption and shouldn't setting the sslmode to disabled bypass this?</p> </blockquote> <p>Yes, and that is the problem. The client refuses to use SSL. While the server (configuration not shown, but can be inferred from the error) refuses to proceed <em>without</em> SSL.</p> <p>As long as both sides make incompatible demands and refuse to compromise, nothing can get done.</p>
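<p>Since a managed Postgres host usually cannot be told to accept unencrypted connections, the practical fix is on the client side: set the SSL mode to <code>require</code> and make sure <code>dbConn()</code> passes it into the lib/pq connection string (e.g. <code>sslmode=require</code>). Assuming the secret lives in the <code>novvsworld</code> namespace, a sketch of updating it and restarting the deployment:</p> <pre><code># &quot;cmVxdWlyZQ==&quot; is base64 for &quot;require&quot;
kubectl -n novvsworld patch secret novvworld-secrets -p '{&quot;data&quot;:{&quot;DBSSLMODE&quot;:&quot;cmVxdWlyZQ==&quot;}}'
kubectl -n novvsworld rollout restart deployment novvsworld
</code></pre>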
<p>Is it possible to write an existing environment variable into a file from a Kubernetes deployment.yaml file?</p> <p>The background: I've already parsed a JSON containing secrets. Now, I'd like to store that secret in a local file.</p> <p>So far, I've tried something like this:</p> <pre><code>  lifecycle:
    postStart:
      exec:
        command: [&quot;/bin/sh&quot;, &quot;-c&quot;],
        args: [&quot;echo $PRIVATE_KEY &gt; /var/private.key&quot;]
</code></pre> <p>(I've set up /var/ as an empty write volume.)</p> <p>Or perhaps there is a completely different way to do this, such as storing the secret in its own, separate secret?</p>
<p>Rather than using <code>postStart</code> , I'd suggest you use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a>, the <code>postStart</code> hook doesn't guarantee that it will be executed before the container <code>ENTRYPOINT</code>.</p> <p>You can define your environment variables in your deployment manifest, by setting static values or referencing a <code>configMap</code> or <code>secret</code>. Your init container would run a bash script that writes the content of each variable to a file.</p> <p>A second approach would be to mount a <code>configMap</code> as a volume inside your pod, e.g.:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: SPECIAL_LEVEL: very SPECIAL_TYPE: charm </code></pre> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ &quot;/bin/sh&quot;, &quot;-c&quot;, &quot;ls /etc/config/&quot; ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: special-config restartPolicy: Never </code></pre> <p>That would create two files inside <code>/etc/config</code>, named as the key defined in your <code>configMap</code> with the content of its value.</p>
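<p>A minimal sketch of the init-container variant described above (secret name, key and image are illustrative):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: write-key-example
spec:
  initContainers:
    - name: write-key
      image: busybox
      command:
        - /bin/sh
        - -c
        - echo &quot;$PRIVATE_KEY&quot; &gt; /var/private.key
      env:
        - name: PRIVATE_KEY
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: private-key
      volumeMounts:
        - name: var-volume
          mountPath: /var
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: var-volume
          mountPath: /var
  volumes:
    - name: var-volume
      emptyDir: {}
</code></pre> <p>The main container then finds <code>/var/private.key</code> already in place when its entrypoint starts.</p>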
<p>I am trying to run an application locally on k8s but I am not able to reach it.</p> <p>here is my deloyment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: listings labels: app: listings spec: replicas: 2 selector: matchLabels: app: listings template: metadata: labels: app: listings spec: containers: - image: mydockerhub/listings:latest name: listings envFrom: - secretRef: name: listings-secret - configMapRef: name: listings-config ports: - containerPort: 8000 name: django-port </code></pre> <p>and it is my service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: listings labels: app: listings spec: type: NodePort selector: app: listings ports: - name: http port: 8000 targetPort: 8000 nodePort: 30036 protocol: TCP </code></pre> <p>At this stage, I don't want to use other methods like ingress or ClusterIP, or load balancer. I want to make nodePort work because I am trying to learn.</p> <p>When I run <code>kubectl get svc -o wide</code> I see</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR listings NodePort 10.107.77.231 &lt;none&gt; 8000:30036/TCP 28s app=listings </code></pre> <p>When I run <code>kubectl get node -o wide</code> I see</p> <pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME minikube Ready control-plane,master 85d v1.23.3 192.168.49.2 &lt;none&gt; Ubuntu 20.04.2 LTS 5.10.16.3-microsoft-standard-WSL2 docker://20.10.12 </code></pre> <p>and when I run <code>minikube ip</code> it shows <code>192.168.49.2</code></p> <p>I try to open <code>http://192.168.49.2:30036/health</code> it is not opening <code>This site can’t be reached</code></p> <p>How should expose my application externally?</p> <p>note that I have created the required configmap and secret objects. also note that this is a simple django restful application that if you hit the /health endpoint, it returns success. and that's it. so there is no problem with the application</p>
<p>That is because your local machine and minikube are not on the same network segment, so you must do something more to access a minikube service on Windows.</p> <p>First:</p> <pre><code>$ minikube service list
</code></pre> <p>That will show your service details, including name, URL, nodePort and targetPort.</p> <p>Then:</p> <pre><code>$ minikube service --url listings
</code></pre> <p>It will open a port listening on your Windows machine that forwards the traffic to the minikube node port.</p> <p>Or you can use the <code>kubectl port-forward</code> command to expose the service on a host port, like:</p> <pre><code>kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
</code></pre> <p>Then try with <code>http://localhost:30036/health</code></p>
<p>This is my <code>~/.kube/config</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 clusters: - cluster: server: https://192.168.10.190:6443 name: cluster-1 - cluster: server: https://192.168.99.101:8443 name: cluster-2 contexts: - context: cluster: cluster-1 user: kubernetes-admin-1 name: cluster-1 - context: cluster: cluster-2 user: kubernetes-admin-2 name: cluster-2 kind: Config preferences: {} users: - name: kubernetes-admin-1 user: client-certificate: /home/user/.minikube/credential-for-cluster-1.crt client-key: /home/user/.minikube/credential-for-cluster-1.key - name: kubernetes-admin-2 user: client-certificate: /home/user/.minikube/credential-for-cluster-2.crt client-key: /home/user/.minikube/credential-for-cluster-2.key </code></pre> <hr /> <p>My understanding is, <code>cluster-1</code> &amp; <code>cluster-2</code> are kubernetes physical clusters (<code>Control Plane</code>).</p> <p>Each physical cluster has multiple virtual clusters (<code>Namespaces</code>)</p> <p>If my understanding is correct, then with the above <code>kubeConfig</code>, What is the <code>kubectl</code> syntax to <strong>get all the namespaces in cluster</strong>?</p>
<p>Short answer: you can loop over every cluster in your <code>kubeconfig</code> and list the namespaces of each one:</p> <pre><code>for context in $(kubectl config view -o jsonpath='{.clusters[*].name}'); do
  kubectl config use-context $context
  kubectl get ns
done
</code></pre> <p>To get <strong>all namespaces</strong> from a single cluster <strong>(the current context)</strong>, use:</p> <pre><code>kubectl get namespace
</code></pre> <p>The above only returns the namespaces of the current context, so with two clusters you will need two different contexts to get the namespaces from both clusters.</p> <blockquote> <p>A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: <strong>cluster, namespace, and user</strong>. By default, the kubectl command-line tool uses parameters from the <strong>current context to communicate with the cluster</strong>.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#context" rel="nofollow noreferrer">organize-cluster-access-kubeconfig-context</a></p> <p>A namespace is simply an <strong>isolation</strong> of resources. For example, you cannot create <strong>two deployments</strong> with the same name in a single namespace, because those resources are <strong>namespace scoped</strong>; but you can deploy multiple deployments of the same name under the <code>develop</code>, <code>stage</code> and <code>production</code> namespaces.</p> <p><a href="https://i.stack.imgur.com/EXoY6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EXoY6.png" alt="namespaces diagram" /></a></p> <p><a href="https://belowthemalt.com/2022/04/09/kubernetes-namespaces/" rel="nofollow noreferrer">kubernetes-namespaces</a></p>
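<p>A small refinement of the loop above: iterating over context names and passing <code>--context</code> to each call avoids changing your current context as a side effect:</p> <pre><code>for context in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do
  echo &quot;--- $context ---&quot;
  kubectl --context &quot;$context&quot; get namespaces
done
</code></pre>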
<p>I am new to Argo and following the Quickstart templates and would like to deploy the HTTP template as a workflow.</p> <p>I create my cluster as so:</p> <pre class="lang-bash prettyprint-override"><code>minikube start --driver=docker --cpus='2' --memory='8g' kubectl create ns argo kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml </code></pre> <p>I then apply the HTTP template <code>http_template.yaml</code> from the docs:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: http-template- spec: entrypoint: main templates: - name: main steps: - - name: get-google-homepage template: http arguments: parameters: [ { name: url, value: &quot;https://www.google.com&quot; } ] - name: http inputs: parameters: - name: url http: timeoutSeconds: 20 # Default 30 url: &quot;{{inputs.parameters.url}}&quot; method: &quot;GET&quot; # Default GET headers: - name: &quot;x-header-name&quot; value: &quot;test-value&quot; # Template will succeed if evaluated to true, otherwise will fail # Available variables: # request.body: string, the request body # request.headers: map[string][]string, the request headers # response.url: string, the request url # response.method: string, the request method # response.statusCode: int, the response status code # response.body: string, the response body # response.headers: map[string][]string, the response headers successCondition: &quot;response.body contains \&quot;google\&quot;&quot; # available since v3.3 body: &quot;test body&quot; # Change request body </code></pre> <p><code>argo submit -n argo http_template.yaml --watch</code></p> <p>However I get the the following error:</p> <pre><code>Name: http-template-564qp Namespace: argo ServiceAccount: unset (will run with the default ServiceAccount) Status: Error Message: failed to get token volumes: service account argo/default does not have any secrets </code></pre> <p>I'm not clear on why this doesn't work given it's straight from the Quickstart documentation. Help would be appreciated.</p>
<p>It seems your default ServiceAccount is missing a credential (a Kubernetes secret).</p> <p>You can verify which credential it expects by running <code>kubectl get serviceaccount -n default default -o yaml</code>:</p> <pre><code>kubectl get serviceaccount -n default default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: &quot;2022-02-10T10:48:54Z&quot;
  name: default
  namespace: default
  resourceVersion: &quot;*******&quot;
  uid: ********************
secrets:
- name: default-token-*****
</code></pre> <p>Now you should be able to find the secret which is attached to the ServiceAccount:</p> <p><code>kubectl get secret -n default default-token-***** -o yaml</code></p> <p>Or you can just run</p> <p><code>kubectl get secret -n default</code></p> <p>to see all secrets in the respective namespace (in this example, default).</p>
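<p>If the token secret really is missing (from Kubernetes v1.24 on, ServiceAccounts no longer get a token secret created automatically), you can create one yourself in the namespace from the error message; depending on your version you may also need to reference it from the ServiceAccount:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: default-token
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
</code></pre> <pre><code>kubectl -n argo patch serviceaccount default -p '{&quot;secrets&quot;:[{&quot;name&quot;:&quot;default-token&quot;}]}'
</code></pre>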
<p>Is it possible to write an existing environment variable into a file from a Kubernetes deployment.yaml file?</p> <p>The background: I've already parsed a JSON containing secrets. Now, I'd like to store that secret in a local file.</p> <p>So far, I've tried something like this:</p> <pre><code>  lifecycle:
    postStart:
      exec:
        command: [&quot;/bin/sh&quot;, &quot;-c&quot;],
        args: [&quot;echo $PRIVATE_KEY &gt; /var/private.key&quot;]
</code></pre> <p>(I've set up /var/ as an empty write volume.)</p> <p>Or perhaps there is a completely different way to do this, such as storing the secret in its own, separate secret?</p>
<p>Usually when we need to read some secrets from a secret manager, we use an init container, and we create an <code>emptyDir</code> shared between the pods to write the secrets and access them from the other containers. In this case you can use a different docker image with secret manager dependencies and creds, without install those dependencies and provide the creds to the main container:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: initContainers: - name: init-container image: alpine command: - /bin/sh - -c - 'echo &quot;test_value&quot; &gt; /mnt/volume/var.txt' volumeMounts: - mountPath: /mnt/volume name: shared-storage containers: - image: alpine name: test-container command: - /bin/sh - -c - 'READ_VAR=$(cat /mnt/volume/var.txt) &amp;&amp; echo &quot;main_container: ${READ_VAR}&quot;' volumeMounts: - mountPath: /mnt/volume name: shared-storage volumes: - name: shared-storage emptyDir: {} </code></pre> <p>Here is the log:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl logs test-pd main_container: test_value </code></pre>
<p>lately I am configuring a k8s cluster composed of 3 nodes(master, worker1 and worker2) that will host an UDP application(8 replicas of it). Everything is done and the cluster is working very well but there is only one problem.</p> <p>Basically there is a Deployment which describes the Pod and it looks like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: &lt;name&gt; labels: app: &lt;app_name&gt; spec: replicas: 8 selector: matchLabels: app: &lt;app_name&gt; template: metadata: labels: app: &lt;app_name&gt; spec: containers: - name: &lt;name&gt; image: &lt;image&gt; ports: - containerPort: 6000 protocol: UDP </code></pre> <p>There is also a Service which is used to access to the UDP application:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: &lt;service_name&gt; labels: app: &lt;app_name&gt; spec: type: NodePort ports: - port: 6000 protocol: UDP nodePort: 30080 selector: app: &lt;app_name&gt; </code></pre> <p>When i try to access to the service 2 different scenarios may occur:</p> <ul> <li>The request is assigned to a POD that is in the same node that received the request</li> <li>The request is assigned to a POD that is in the other node</li> </ul> <p>In the second case the request arrives correctly to the POD but with a source IP which ends by 0 (for example 10.244.1.0) so the response will never be delivered correctly.</p> <p>I can't figure it out, I really tried everything but this problem still remains. In this moment to make the cluster working properly i added <code>externalTrafficPolicy: Local</code> and <code>internalTrafficPolicy: Local</code> to the Service in this way the requests will remain locally so when a request is sent to worker1 it will be assigned to a Pod which is running on worker1, the same for the worker2.</p> <p>Do you have any ideas about the problem? Thanks to everyone.</p>
<p>Have you confirmed that the response is not delivered correctly for your second scenario? The source IP address in that case should be the one of the node where the request first arrived.</p> <p>I am under the impression that you are assuming that since the IP address ends in 0 this is necessarily a network address, and that could be a wrong assumption, as it depends on the <a href="https://www.hacksplaining.com/glossary/netmasks#:%7E:text=Netmasks%20(or%20subnet%20masks)%20are,Internet%20Protocol%20(IP)%20address." rel="nofollow noreferrer">Netmask</a> configured for the Subnetwork where the nodes are allocated; for example, if the nodes are in the Subnet 10.244.0.0/23, then the network address is 10.244.0.0, and 10.244.1.0 is just another usable address that can be assigned to a node.</p> <p>Now, if your application needs to preserve the client's IP address, then that could be an issue since, by default, the source IP seen in the target container is not the original source IP of the client. In this case, additionally to configuring the <code>externalTrafficPolicy</code> as Local, you would need to configure a <code>healthCheckNodePort</code> as specified in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">Preserving the client source IP</a> documentation.</p>
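<p>For reference, a sketch of the client-IP-preserving setup you already arrived at; for a plain <code>NodePort</code> Service the relevant field is <code>externalTrafficPolicy</code> (<code>healthCheckNodePort</code> only applies to <code>LoadBalancer</code> Services):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: &lt;service_name&gt;
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep the original client source IP
  ports:
    - port: 6000
      protocol: UDP
      nodePort: 30080
  selector:
    app: &lt;app_name&gt;
</code></pre>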
<pre><code>2022-08-17T16:14:15.5682728Z error: error validating &quot;deployment.yml&quot;: error validating data: ValidationError(HorizontalPodAutoscaler.spec.metrics[1].resource.target.averageUtilization): invalid type for io.k8s.api.autoscaling.v2.MetricTarget.averageUtilization: got &quot;string&quot;, expected &quot;integer&quot;; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre> <pre><code>---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-xyz
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: XYZ
  minReplicas: ${MinRequestForwarderReplicas}
  maxReplicas: ${MaxRequestForwarderReplicas}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 1500Mi
</code></pre> <p>I tried the memory &quot;averageUtilization&quot; values with 1.5Gi and &quot;1.5Gi&quot; with double quotes. Can anyone check and let me know if I am missing something here?</p>
<p><code>averageUtilization</code> is expressed as a percentage of the requested memory, so a quantity such as <code>1500Mi</code> is not valid there. It should be something like:</p> <pre><code>averageUtilization: 65
</code></pre> <p>Specify <code>averageUtilization</code> as a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured.</p>
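<p>If what you actually want is an absolute memory target rather than a percentage, the <code>autoscaling/v2</code> API also accepts an <code>AverageValue</code> target, which does take a quantity:</p> <pre><code>    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 1500Mi
</code></pre>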
<p>I want to deploy some pods (from deployment X) only on nodes where pods from deployment Y do not exist. Is there a way to make something like that? First idea is to use node affinity and deploy X on nodes with label X and Y where there is no X, but I would like to avoid that.</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">inter pod affinity and Anti affinity</a>. Here is the relevant snippet from the documentation:</p> <blockquote> <p>Inter-pod affinity and anti-affinity allow you to constrain which nodes your Pods can be scheduled on based on the labels of Pods already running on that node, instead of the node labels.</p> <p>Inter-pod affinity and anti-affinity rules take the form &quot;this Pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more Pods that meet rule Y&quot;, where X is a topology domain like node, rack, cloud provider zone or region, or similar and Y is the rule Kubernetes tries to satisfy.</p> </blockquote>
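<p>A minimal sketch for the pod template of deployment X, assuming the pods of deployment Y are labelled <code>app: y</code> (adjust the label and <code>topologyKey</code> to your case):</p> <pre><code>    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: y
              topologyKey: kubernetes.io/hostname
</code></pre>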
<p>I am trying to retrieve the hostname in my Application Load Balancer that I configured as ingress.</p> <p>The scenario currently is: I am deploying a helm chart using terraform, and have configured an ALB as ingress. The ALB and the Helm chart was deployed normally and is working, however, I need to retrieve the hostname of this ALB to create a Route53 record pointing to this ALB. When I try to retrieve this information, it returns null values.</p> <p>According to terraform's own documentation, the correct way is as follows:</p> <pre><code>data &quot;kubernetes_ingress&quot; &quot;example&quot; { metadata { name = &quot;terraform-example&quot; } } resource &quot;aws_route53_record&quot; &quot;example&quot; { zone_id = data.aws_route53_zone.k8.zone_id name = &quot;example&quot; type = &quot;CNAME&quot; ttl = &quot;300&quot; records = [data.kubernetes_ingress.example.status.0.load_balancer.0.ingress.0.hostname] } </code></pre> <p>I did exactly as in the documentation (even the provider version is the latest), here is an excerpt of my code:</p> <pre><code># Helm release resource resource &quot;helm_release&quot; &quot;argocd&quot; { name = &quot;argocd&quot; repository = &quot;https://argoproj.github.io/argo-helm&quot; chart = &quot;argo-cd&quot; namespace = &quot;argocd&quot; version = &quot;4.9.7&quot; create_namespace = true values = [ templatefile(&quot;${path.module}/settings/helm/argocd/values.yaml&quot;, { certificate_arn = module.acm_certificate.arn }) ] } # Kubernetes Ingress data to retrieve de ingress hostname from helm deployment (ALB Hostname) data &quot;kubernetes_ingress&quot; &quot;argocd&quot; { metadata { name = &quot;argocd-server&quot; namespace = helm_release.argocd.namespace } depends_on = [ helm_release.argocd ] } # Route53 record creation resource &quot;aws_route53_record&quot; &quot;argocd&quot; { name = &quot;argocd&quot; type = &quot;CNAME&quot; ttl = 600 zone_id = aws_route53_zone.r53_zone.id records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname] } </code></pre> <p>When I run the <code>terraform apply</code> I've get the following error:</p> <pre><code>╷ │ Error: Attempt to index null value │ │ on route53.tf line 67, in resource &quot;aws_route53_record&quot; &quot;argocd&quot;: │ 67: records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname] │ ├──────────────── │ │ data.kubernetes_ingress.argocd.status is null │ │ This value is null, so it does not have any indices. 
</code></pre> <p>My ingress configuration (deployed by Helm Release):</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server namespace: argocd uid: 646f6ea0-7991-4a13-91d0-da236164ac3e resourceVersion: '4491' generation: 1 creationTimestamp: '2022-08-08T13:29:16Z' labels: app.kubernetes.io/component: server app.kubernetes.io/instance: argocd app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: argocd-server app.kubernetes.io/part-of: argocd helm.sh/chart: argo-cd-4.9.7 annotations: alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/certificate-arn: &gt;- arn:aws:acm:us-east-1:124416843011:certificate/7b79fa2c-d446-423d-b893-c8ff3d92a5e1 alb.ingress.kubernetes.io/group.name: altb-devops-eks-support-alb alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}]' alb.ingress.kubernetes.io/load-balancer-name: altb-devops-eks-support-alb alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/tags: &gt;- Name=altb-devops-eks-support-alb,Stage=Support,CostCenter=Infrastructure,Project=Shared Infrastructure,Team=DevOps alb.ingress.kubernetes.io/target-type: ip kubernetes.io/ingress.class: alb meta.helm.sh/release-name: argocd meta.helm.sh/release-namespace: argocd finalizers: - group.ingress.k8s.aws/altb-devops-eks-support-alb managedFields: - manager: controller operation: Update apiVersion: networking.k8s.io/v1 time: '2022-08-08T13:29:16Z' fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:&quot;group.ingress.k8s.aws/altb-devops-eks-support-alb&quot;: {} - manager: terraform-provider-helm_v2.6.0_x5 operation: Update apiVersion: networking.k8s.io/v1 time: '2022-08-08T13:29:16Z' fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:alb.ingress.kubernetes.io/backend-protocol: {} f:alb.ingress.kubernetes.io/certificate-arn: {} f:alb.ingress.kubernetes.io/group.name: {} f:alb.ingress.kubernetes.io/listen-ports: {} f:alb.ingress.kubernetes.io/load-balancer-name: {} f:alb.ingress.kubernetes.io/scheme: {} f:alb.ingress.kubernetes.io/tags: {} f:alb.ingress.kubernetes.io/target-type: {} f:kubernetes.io/ingress.class: {} f:meta.helm.sh/release-name: {} f:meta.helm.sh/release-namespace: {} f:labels: .: {} f:app.kubernetes.io/component: {} f:app.kubernetes.io/instance: {} f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:helm.sh/chart: {} f:spec: f:rules: {} - manager: controller operation: Update apiVersion: networking.k8s.io/v1 time: '2022-08-08T13:29:20Z' fieldsType: FieldsV1 fieldsV1: f:status: f:loadBalancer: f:ingress: {} subresource: status selfLink: /apis/networking.k8s.io/v1/namespaces/argocd/ingresses/argocd-server status: loadBalancer: ingress: - hostname: &gt;- internal-altb-devops-eks122-support-alb-1845221539.us-east-1.elb.amazonaws.com spec: rules: - host: argocd.altb.co http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: number: 80 </code></pre>
<p>The Terraform data source for Ingress is <code>kubernetes_ingress_v1</code>: <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1</a></p> <pre><code>data &quot;kubernetes_ingress_v1&quot; &quot;argocd&quot; {
  metadata {
    name      = &quot;argocd-server&quot;
    namespace = helm_release.argocd.namespace
  }
  depends_on = [
    helm_release.argocd
  ]
}
</code></pre> <p>This should work.</p>
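<p>For completeness, here is a sketch of the Route53 record wired to the v1 data source (same attribute path as in the question, just read from <code>kubernetes_ingress_v1</code>; the zone and record names come from the question and may need adjusting):</p>
<pre><code>resource &quot;aws_route53_record&quot; &quot;argocd&quot; {
  name    = &quot;argocd&quot;
  type    = &quot;CNAME&quot;
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id

  # hostname of the ALB that the AWS Load Balancer Controller created for the Ingress
  records = [data.kubernetes_ingress_v1.argocd.status.0.load_balancer.0.ingress.0.hostname]
}
</code></pre>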
<p>I am running rancher latest docker image on Mac M1 laptop but the contact failed to start. The command I am using is sudo docker run -d -p 80:80 -p 443:443 --privileged rancher/rancher.</p> <p>Below is the versions for my environment:</p> <p>$ docker --version Docker version 20.10.13, build a224086</p> <p>$ uname -a Darwin Joeys-MBP 21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T6000 arm64</p> <p>$ docker images|grep rancher rancher/rancher latest f09cdb8a8fba 3 weeks ago 1.39GB</p> <p>Below is the logs from the container.</p> <pre><code>$ docker logs -f 8d21d7d19b21 2022/04/28 03:34:00 [INFO] Rancher version v2.6.4 (4b4e29678) is starting 2022/04/28 03:34:00 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:true Embedded:false BindHost: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false Trace:false NoCACerts:false AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features: ClusterRegistry:} 2022/04/28 03:34:00 [INFO] Listening on /tmp/log.sock 2022/04/28 03:34:00 [INFO] Waiting for k3s to start 2022/04/28 03:34:01 [INFO] Waiting for server to become available: an error on the server (&quot;apiserver not ready&quot;) has prevented the request from succeeding 2022/04/28 03:34:03 [INFO] Waiting for server to become available: an error on the server (&quot;apiserver not ready&quot;) has prevented the request from succeeding 2022/04/28 03:34:05 [INFO] Running in single server mode, will not peer connections 2022/04/28 03:34:05 [INFO] Applying CRD features.management.cattle.io 2022/04/28 03:34:05 [INFO] Waiting for CRD features.management.cattle.io to become available 2022/04/28 03:34:05 [INFO] Done waiting for CRD features.management.cattle.io to become available 2022/04/28 03:34:08 [INFO] Applying CRD navlinks.ui.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusters.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD apiservices.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD settings.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD preferences.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD features.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusterrepos.catalog.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD operations.catalog.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD apps.catalog.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD fleetworkspaces.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD bundles.fleet.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusters.fleet.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD managedcharts.management.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io 2022/04/28 03:34:08 [INFO] Applying CRD clusters.provisioning.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD rkeclusters.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraps.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD rkebootstraptemplates.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD rkecontrolplanes.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD custommachines.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD etcdsnapshots.rke.cattle.io 2022/04/28 03:34:09 [INFO] Applying CRD clusters.cluster.x-k8s.io 2022/04/28 03:34:09 [INFO] 
Applying CRD machinedeployments.cluster.x-k8s.io 2022/04/28 03:34:09 [INFO] Applying CRD machinehealthchecks.cluster.x-k8s.io 2022/04/28 03:34:09 [INFO] Applying CRD machines.cluster.x-k8s.io 2022/04/28 03:34:09 [INFO] Applying CRD machinesets.cluster.x-k8s.io 2022/04/28 03:34:09 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available 2022/04/28 03:34:09 [INFO] Done waiting for CRD machinesets.cluster.x-k8s.io to become available 2022/04/28 03:34:09 [INFO] Creating CRD authconfigs.management.cattle.io 2022/04/28 03:34:09 [INFO] Creating CRD groupmembers.management.cattle.io 2022/04/28 03:34:09 [INFO] Creating CRD groups.management.cattle.io 2022/04/28 03:34:09 [INFO] Creating CRD tokens.management.cattle.io 2022/04/28 03:34:09 [INFO] Creating CRD userattributes.management.cattle.io 2022/04/28 03:34:09 [INFO] Creating CRD users.management.cattle.io 2022/04/28 03:34:09 [INFO] Waiting for CRD tokens.management.cattle.io to become available 2022/04/28 03:34:10 [INFO] Done waiting for CRD tokens.management.cattle.io to become available 2022/04/28 03:34:10 [INFO] Waiting for CRD userattributes.management.cattle.io to become available 2022/04/28 03:34:10 [INFO] Done waiting for CRD userattributes.management.cattle.io to become available 2022/04/28 03:34:10 [INFO] Waiting for CRD users.management.cattle.io to become available 2022/04/28 03:34:11 [INFO] Done waiting for CRD users.management.cattle.io to become available 2022/04/28 03:34:11 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD apps.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD catalogs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD apprevisions.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD dynamicschemas.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD pipelineexecutions.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD etcdbackups.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD pipelinesettings.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD globalrolebindings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD pipelines.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD catalogtemplateversions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD globalroles.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD sourcecodecredentials.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clusteralerts.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clusteralertgroups.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD kontainerdrivers.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD nodedrivers.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clustercatalogs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD sourcecoderepositories.project.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clusterloggings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD nodepools.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD nodetemplates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clusteralertrules.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clustermonitorgraphs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clusterscans.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD 
nodes.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD composeconfigs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapps.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD multiclusterapprevisions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD monitormetrics.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projects.management.cattle.io 2022/04/28 03:34:11 [INFO] Waiting for CRD sourcecodecredentials.project.cattle.io to become available 2022/04/28 03:34:11 [INFO] Creating CRD rkek8ssystemimages.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD notifiers.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD rkek8sserviceoptions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectalerts.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD rkeaddons.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectalertgroups.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD roletemplates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectcatalogs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectloggings.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD samltokens.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectalertrules.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clustertemplates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD projectmonitorgraphs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD clustertemplaterevisions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD cisconfigs.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD cisbenchmarkversions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD templates.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD templateversions.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD templatecontents.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD globaldnses.management.cattle.io 2022/04/28 03:34:11 [INFO] Creating CRD globaldnsproviders.management.cattle.io 2022/04/28 03:34:11 [INFO] Waiting for CRD nodetemplates.management.cattle.io to become available 2022/04/28 03:34:12 [INFO] Waiting for CRD projectalertgroups.management.cattle.io to become available 2022/04/28 03:34:12 [FATAL] k3s exited with: exit status 1 </code></pre>
<p>I would recommend trying to run it with a specific tag, i.e. <code>rancher/rancher:v2.6.6</code>.</p> <p>Some other things that may interfere: What size setup are you running on? CPU and minimum memory requirements are currently 2 CPUs and 4gb RAM.</p> <p>Also, you can try their docker install scripts and check out other documentation here: <a href="https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.6/en/installation/requirements/installing-docker/</a></p> <p>Edit: noticed you're running on ARM. There is additional documentation for running rancher on ARM here: <a href="https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.5/en/installation/resources/advanced/arm64-platform/</a></p>
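<p>For reference, a pinned-tag variant of the command from the question might look like this (the tag is just an example — pick a current 2.6.x release):</p>
<pre><code>sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.6.6
</code></pre>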
<p>I am installing the nginx ingress controller through its Helm chart and the pods are not coming up. There seems to be an issue with permissions.</p> <p>Chart link - <a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx</a></p> <p>I am using the latest version, 4.2.1.</p> <p>I did the debugging as described in <a href="https://github.com/kubernetes/ingress-nginx/issues/4061" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/4061</a> and also tried to run as the root user (<strong>runAsUser: 0</strong>).</p> <p>I think I got this issue after a cluster upgrade from 1.19 to 1.22. Previously it was working fine.</p> <p>Any suggestions on what I need to do to fix this?</p> <blockquote> <p>unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied</p> </blockquote>
<p>You clearly have a permission problem. Looking at the chart you specified, there are multiple values of <code>runAsUser</code> for the different components.</p> <pre><code>controller.image.runAsUser: 101
controller.admissionWebhooks.patch.runAsUser: 2000
defaultBackend.image.runAsUser: 65534
</code></pre> <p>I'm not sure why these are different, but if possible:</p> <p>Try to delete your existing release and do a fresh install of the chart.</p> <p>If the issue still persists, check the deployment/pod events to see if the cluster reports anything.</p> <p>Also worth noting, there were breaking changes to the <code>Ingress</code> resource in 1.22. Check <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/#what-to-do" rel="nofollow noreferrer">this</a> and <a href="https://kubernetes.io/blog/2021/08/04/kubernetes-1-22-release-announcement/#major-changes" rel="nofollow noreferrer">this</a> link from the official release notes.</p>
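<p>A fresh reinstall could look roughly like this (the release and namespace names are assumptions — use whatever you chose originally):</p>
<pre><code># remove the existing release
helm uninstall ingress-nginx -n ingress-nginx

# reinstall the current chart from the official repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>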
<p>I have a docker image that I want to run inside my django code. Inside that image there is an executable that I have written using c++ that writes it's output to google cloud storage. Normally when I run the django code like this:</p> <pre><code>container = client.V1Container(name=container_name, command=[&quot;//usr//bin//sleep&quot;], args=[&quot;3600&quot;], image=container_image, env=env_list, security_context=security) </code></pre> <p>And manually go inside the container to run this:</p> <pre><code>gcloud container clusters get-credentials my-cluster --region us-central1 --project proj_name &amp;&amp; kubectl exec pod-id -c jobcontainer -- xvfb-run -a &quot;path/to/exe&quot; </code></pre> <p>It works as intended and gives off the output to cloud storage. (I need to use a virtual monitor so I'm using xvfb first). However I must call this through django like this:</p> <pre><code>container = client.V1Container(name=container_name, command=[&quot;xvfb-run&quot;], args=[&quot;-a&quot;,&quot;\&quot;path/to/exe\&quot;&quot;], image=container_image, env=env_list, security_context=security) </code></pre> <p>But when I do this, the job gets created but never finishes and does not give off an output to the storage. When I go inside my container to run <code>ps aux</code> I get this output:</p> <pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2888 1836 ? Ss 07:34 0:00 /bin/sh /usr/bin/xvfb-run -a &quot;path/to/exe&quot; root 16 0.0 1.6 196196 66256 ? S 07:34 0:00 Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.r5gaBO/Xauthority root 35 0.0 0.0 7016 1552 ? Rs 10:31 0:00 ps aux </code></pre> <p>It looks like it's stuck inside my code but my code does not have a loop that it can stuck inside, perhaps there is an error occurring (I don't think so since the exact same command is working when typed manually). If there is an error how can I see the console output? Why is my code get stuck and how can I get my desired output? Could there be an error caused by permissions (The code does a lot of stuff that requires permissions like writing to storage and reading files inside the pod, but like mentioned works normally when i run it via the command line)?</p>
<p>For anyone having a similar issue: we fixed it by putting the command we want to run at the end of the <code>Dockerfile</code> instead of passing it as a parameter in Django's container call, like this:</p> <pre><code>CMD [&quot;entrypoint.sh&quot;]
</code></pre> <p>entrypoint.sh:</p> <pre><code>xvfb-run -a &quot;path/to/exe&quot;
</code></pre> <p>We then removed the command argument from the container call in Django, so it looks like this:</p> <pre><code>container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security) </code></pre>
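<p>In case it helps, the tail of the Dockerfile could then look roughly like this (a sketch — the entrypoint file name and location are assumptions):</p>
<pre><code># copy the wrapper script into the image and make it executable
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# run the C++ executable under a virtual X server when the container starts
CMD [&quot;/entrypoint.sh&quot;]
</code></pre>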
<p>In microservices environment deployed to the Kubernetes cluster, why will we use API gateway (for example Spring cloud gateway) if Kubernetes supplies the same service with Ingress?</p>
<p>An Ingress controller is exposed through a single Kubernetes service, typically of type LoadBalancer. For a simple mental model, you can think of an Ingress controller as an Nginx server that just forwards traffic to services based on a rule set. Ingress does not have nearly as much functionality as an API gateway: many Ingress implementations lack authentication, rate limiting, application routing, security features, request/response transformation, and other add-on/plugin options.</p> <p>An API gateway can also handle simple routing, but it is mostly used when you need more flexibility, security, and configuration options. While multiple teams or projects can share a set of Ingress controllers, or Ingress controllers can be specialized on a per-environment basis, there are reasons you might choose to deploy a dedicated API gateway inside Kubernetes rather than leveraging the existing Ingress controller. Using both an Ingress controller and an API gateway inside Kubernetes gives organizations the flexibility to meet their business requirements.</p> <p>For accessing the database:</p> <p>If the database and the cluster are both in the cloud, you can use the internal database IP. If not, you should provide the IP of the machine where the database is hosted.</p> <p>You can also refer to this <a href="https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097" rel="nofollow noreferrer">Kubernetes Access External Services</a> article.</p>
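<p>The usual pattern for pointing the cluster at an external database is a <code>Service</code> without a selector plus a manually managed <code>Endpoints</code> object — a minimal sketch, assuming a hypothetical database at 10.0.0.50:5432:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.50  # hypothetical external database IP
    ports:
      - port: 5432
</code></pre>
<p>Pods can then reach the database at <code>external-db:5432</code> inside the cluster.</p>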
<p>This question has been asked before, ive been trying plenty of examples over the past two days to try and configure with no luck so I am posting my environment for any help.</p> <p><strong>Problem</strong> <br /> Nextjs environment variables are all undefined after deploying to kubernetes using Terraform</p> <p><strong>Expected Result</strong> <br /></p> <pre><code>staging: NEXT_PUBLIC_APIROOT=https://apis-staging.mywebsite.com production: NEXT_PUBLIC_APIROOT=https://apis.mywebsite.com </code></pre> <p>The secrets are stored in github actions. I have a terraform setup that deploys my application to my staging and production klusters, a snippet below:</p> <pre><code>env: ENV: staging PROJECT_ID: ${{ secrets.GKE_PROJECT_STAG }} GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS_STAG }} GKE_SA_KEY: ${{ secrets.GKE_SA_KEY_STAG }} NEXT_PUBLIC_APIROOT: ${{ secrets.NEXT_PUBLIC_APIROOT_STAGING }} </code></pre> <p>I have an additional step to manually create a .env file as well</p> <pre><code> - name: env-file run: | touch .env.local echo NEXT_PUBLIC_APIROOT: ${{ secrets.NEXT_PUBLIC_APIROOT_STAGING }} &gt;&gt; .env.local </code></pre> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:16-alpine AS deps RUN apk add --no-cache libc6-compat WORKDIR /app COPY package.json package-lock.json .npmrc ./ RUN npm ci FROM node:16-alpine AS builder WORKDIR /app COPY --from=deps /app/node_modules ./node_modules COPY . . RUN npm run build FROM node:16-alpine AS runner WORKDIR /app RUN addgroup -g 1001 -S nodejs RUN adduser -S nextjs -u 1001 # You only need to copy next.config.js if you are NOT using the default configuration COPY --from=builder /app/next.config.js ./ COPY --from=builder /app/public ./public COPY --from=builder /app/package.json ./package.json # Automatically leverage output traces to reduce image size # https://nextjs.org/docs/advanced-features/output-file-tracing COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static USER nextjs EXPOSE 3000 ENV PORT 3000 CMD [&quot;node&quot;, &quot;server.js&quot;] </code></pre> <p><strong>next.config.js</strong></p> <pre><code>module.exports = withBundleAnalyzer({ publicRuntimeConfig: { NEXT_PUBLIC_APIROOT: process.env.NEXT_PUBLIC_APIROOT, }, output: 'standalone', webpack: (config, { dev, isServer }) =&gt; { if (dev &amp;&amp; isServer) { const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin') config.plugins.push( new ForkTsCheckerWebpackPlugin({ eslint: { files: './src/**/*.{ts,tsx,js,jsx}', }, }) ) } return config }, }) </code></pre> <p>Anybody have experience with this issue?</p>
<p>I wanna start by saying that I am by no means an expert in NextJS. Therefore, I tried approaching your problem under the following assumptions:</p> <ul> <li>the issue is not necessarily related to NextJS, as it appears to be related to Kubernetes resource deployment and management</li> <li>leverage the default <code>next.config.js</code> mechanism that automatically loads environment variables from <code>.env.local</code></li> <li>use a GKE cluster to use the same deployment target (region: <code>us-central1-c</code>)</li> </ul> <p>My first step was to create a dummy NextJS application with a single API endpoint that simply prints one of the environment variables that I am trying to set when deploying the workload to Kubernetes. When it comes to the Dockerfile, I used the exact same image that you provided. Please find below the relevant files from my dummy app:</p> <p><strong>pages/api/test.js</strong></p> <pre><code>export default function handler(req, res) { res.status(200).json(process.env.NEXT_PUBLIC_APIROOT) } </code></pre> <p><strong>next.config.js</strong></p> <pre><code>const withBundleAnalyzer = require('@next/bundle-analyzer')({ enabled: true, }); module.exports = withBundleAnalyzer({ publicRuntimeConfig: { NEXT_PUBLIC_APIROOT: process.env.NEXT_PUBLIC_APIROOT, }, output: 'standalone' }) </code></pre> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:16-alpine AS deps RUN apk add --no-cache libc6-compat WORKDIR /app COPY package.json package-lock.json ./ RUN npm ci FROM node:16-alpine AS builder WORKDIR /app COPY --from=deps /app/node_modules ./node_modules COPY . . RUN npm run build FROM node:16-alpine AS runner WORKDIR /app RUN addgroup -g 1001 -S nodejs RUN adduser -S nextjs -u 1001 # You only need to copy next.config.js if you are NOT using the default configuration COPY --from=builder /app/next.config.js ./ COPY --from=builder /app/public ./public COPY --from=builder /app/package.json ./package.json # Automatically leverage output traces to reduce image size # https://nextjs.org/docs/advanced-features/output-file-tracing COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static USER nextjs EXPOSE 3000 ENV PORT 3000 CMD [&quot;npm&quot;, &quot;start&quot;] </code></pre> <p>There is a single change that I've done in the Dockerfile and that is updating the CMD entry so that the application starts via the <code>npm start</code> command.</p> <p>As per the official <a href="https://nextjs.org/docs/basic-features/environment-variables" rel="nofollow noreferrer">documentation</a>, NextJS will try to look for <code>.env.local</code> in the app root folder and load those environment variables in <code>process.env</code>.</p> <p>Therefore, I created a YAML file with Kubernetes resources that will be used to create the deployment setup.</p> <p><strong>nextjs-app-setup.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nextjs-app-config data: .env.local: |- NEXT_PUBLIC_APIROOT=hello_i_am_an_env_variable --- apiVersion: apps/v1 kind: Deployment metadata: name: nextjs-app labels: app: nextjs-app spec: replicas: 1 selector: matchLabels: app: nextjs-app template: metadata: labels: app: nextjs-app spec: containers: - name: nextjs-app image: public.ecr.aws/u4x8r8g3/nextjs-app:latest ports: - containerPort: 3000 volumeMounts: - name: nextjs-app-config mountPath: &quot;/app/.env.local&quot; subPath: &quot;.env.local&quot; readOnly: true volumes: - name: nextjs-app-config 
configMap: name: nextjs-app-config --- apiVersion: v1 kind: Service metadata: name: nextjs-service spec: selector: app: nextjs-app ports: - protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>There are multiple things happening in the above configuration:</p> <ul> <li>Define a ConfigMap resource that will hold all of the required environment variables that the NextJS application will require. There is a single entry for <code>.env.local</code> that will hold all of the environment variables and will be mounted as a file in the application pod</li> <li>Define a Deployment resource for the NextJS application. The most important section here is the <code>volumes</code> and <code>volumeMounts</code> blocks. Here, I am mounting the <code>.env.local</code> entry from the ConfigMap that was defined on the <code>/app/.env.local</code> path</li> <li>Define a Service resource to be able to interact with the NextJS application</li> </ul> <p>After connecting to the GKE cluster via kubectl, I applied the configuration via <code>kubectl apply -f nextjs-app-setup.yaml</code>.</p> <p>To connect to the service from my local workstation, I executed <code>kubectl port-forward service/nextjs-service 3000:3000</code>. Then I navigated in my browser to <code>localhost:3000/api/test</code> and can see the value that I set in the ConfigMap as the output.</p> <p>Disclaimer: I understand that your setup might involve some additional components especially when it comes to CI/CD and Infrastructure-as-Code, but my answer here should at least provide you with an approach to accessing environment variables in your containerized NextJS workloads. If you still get <code>undefined</code> values, my assumption is that it would most likely be related to how you are configuring them in your CI/CD pipeline, but that would be a different issue that is not related to NextJS or Kubernetes.</p>
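<p>If you want to feed that same ConfigMap from your GitHub Actions secret instead of hard-coding the value, a rough sketch of a pipeline step (assuming <code>kubectl</code> in the workflow is already authenticated against the cluster) might look like:</p>
<pre><code>- name: update-nextjs-env-configmap
  run: |
    kubectl create configmap nextjs-app-config \
      --from-literal=.env.local=&quot;NEXT_PUBLIC_APIROOT=${{ secrets.NEXT_PUBLIC_APIROOT_STAGING }}&quot; \
      --dry-run=client -o yaml | kubectl apply -f -
</code></pre>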
<p>I have two servers with k3s set up. I have installed the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">k8s-dashboard</a> on server1.</p> <p>I have set up <a href="https://stackoverflow.com/questions/36306904/configure-kubectl-command-to-access-remote-kubernetes-cluster-on-azure">clustering</a> on server1, i.e. I can access the k8s resources of server2 from server1.</p> <pre><code>kubectl config set-cluster server2 --server=https://{IP_OF_SERVER2}:6443
kubectl config set-context server2 --cluster=server2
kubectl config use-context server2
</code></pre> <p>But I want to access all resources of server2 from the k8s dashboard on server1.</p> <p>Is this possible to do?</p>
<p>First, the Kubernetes dashboard needs to query the <a href="https://github.com/kubernetes-sigs/dashboard-metrics-scraper" rel="nofollow noreferrer">dashboard-metrics-scraper</a>, so you will need to install that before linking the dashboard UI with the <code>scraper</code>.</p> <p>Second, from the code it does not look like it accepts an array, only a string.</p> <pre><code># Metrics Scraper sidecar host for dashboard
K8S_DASHBOARD_SIDECAR_HOST=${K8S_DASHBOARD_SIDECAR_HOST:-&quot;http://localhost:8000&quot;}
</code></pre> <p><a href="https://github.com/kubernetes/dashboard/blob/1148f7ba9f9eadd719e53fa3bc8bde5b7cfdb395/aio/develop/run-npm-on-container.sh#L62" rel="nofollow noreferrer">Scraper sidecar</a></p> <p><a href="https://github.com/kubernetes/dashboard/blob/1148f7ba9f9eadd719e53fa3bc8bde5b7cfdb395/aio/develop/run-npm-on-container.sh#L98" rel="nofollow noreferrer">docker-env</a></p> <p>So you would need to deploy the Metrics Scraper sidecar on cluster 2, expose that service, and possibly run two instances of the dashboard.</p> <p>It is better to create a dashboard on each cluster instead.</p>
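<p>If you go that per-cluster route, a rough sketch using the kubectl contexts from the question (assuming the dashboard is installed from the upstream <code>recommended.yaml</code> manifest):</p>
<pre><code># install a dashboard on server2 using the context created earlier
kubectl --context server2 apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

# switch contexts when you want to look at the other cluster's dashboard
kubectl config use-context server2
kubectl proxy
</code></pre>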
<p>I need to create a Kubernetes clientset using a token extracted from JSON service account key file.</p> <p>I explicitly provide this token inside the config, however it still looks for Google Application-Default credentials, and crashes because it cannot find them.</p> <p>Below is my code:</p> <pre><code>package main import ( &quot;context&quot; &quot;encoding/base64&quot; &quot;fmt&quot; &quot;io/ioutil&quot; &quot;golang.org/x/oauth2&quot; &quot;golang.org/x/oauth2/google&quot; gke &quot;google.golang.org/api/container/v1&quot; &quot;google.golang.org/api/option&quot; &quot;k8s.io/client-go/kubernetes&quot; _ &quot;k8s.io/client-go/plugin/pkg/client/auth/gcp&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; &quot;k8s.io/client-go/tools/clientcmd/api&quot; ) const ( projectID = &quot;my_project_id&quot; clusterName = &quot;my_cluster_name&quot; scope = &quot;https://www.googleapis.com/auth/cloud-platform&quot; ) func main() { ctx := context.Background() // Read JSON key and extract the token data, err := ioutil.ReadFile(&quot;sa_key.json&quot;) if err != nil { panic(err) } creds, err := google.CredentialsFromJSON(ctx, data, scope) if err != nil { panic(err) } token, err := creds.TokenSource.Token() if err != nil { panic(err) } fmt.Println(&quot;token&quot;, token.AccessToken) // Create GKE client tokenSource := oauth2.StaticTokenSource(token) gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource)) if err != nil { panic(err) } // Create a dynamic kube config inMemKubeConfig, err := createInMemKubeConfig(ctx, gkeClient, token, projectID) if err != nil { panic(err) } // Use it to create a rest.Config config, err := clientcmd.NewNonInteractiveClientConfig(*inMemKubeConfig, clusterName, &amp;clientcmd.ConfigOverrides{CurrentContext: clusterName}, nil).ClientConfig() if err != nil { panic(err) } // Create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err) // this where the code crashes because it can't find the Google ADCs } fmt.Printf(&quot;clientset %+v\n&quot;, clientset) } func createInMemKubeConfig(ctx context.Context, client *gke.Service, token *oauth2.Token, projectID string) (*api.Config, error) { k8sConf := api.Config{ APIVersion: &quot;v1&quot;, Kind: &quot;Config&quot;, Clusters: map[string]*api.Cluster{}, AuthInfos: map[string]*api.AuthInfo{}, Contexts: map[string]*api.Context{}, } // List all clusters in project with id projectID across all zones (&quot;-&quot;) resp, err := client.Projects.Zones.Clusters.List(projectID, &quot;-&quot;).Context(ctx).Do() if err != nil { return nil, err } for _, f := range resp.Clusters { name := fmt.Sprintf(&quot;gke_%s_%s_%s&quot;, projectID, f.Zone, f.Name) // My custom naming convention cert, err := base64.StdEncoding.DecodeString(f.MasterAuth.ClusterCaCertificate) if err != nil { return nil, err } k8sConf.Clusters[name] = &amp;api.Cluster{ CertificateAuthorityData: cert, Server: &quot;https://&quot; + f.Endpoint, } k8sConf.Contexts[name] = &amp;api.Context{ Cluster: name, AuthInfo: name, } k8sConf.AuthInfos[name] = &amp;api.AuthInfo{ Token: token.AccessToken, AuthProvider: &amp;api.AuthProviderConfig{ Name: &quot;gcp&quot;, Config: map[string]string{ &quot;scopes&quot;: scope, }, }, } } return &amp;k8sConf, nil } </code></pre> <p>and here is the error message:</p> <pre><code>panic: cannot construct google default token source: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information. </code></pre>
<p>Here's what worked for me.</p> <p>It is based on this <a href="https://gist.github.com/ahmetb/548059cdbf12fb571e4e2f1e29c48997" rel="nofollow noreferrer">gist</a> and it's exactly what I was looking for. It uses an <code>oauth2.TokenSource</code> object which can be fed with a variety of token types so it's quite flexible.</p> <p>It took me a long time to find this solution so I hope this helps somebody!</p> <pre><code>package main import ( &quot;context&quot; &quot;encoding/base64&quot; &quot;fmt&quot; &quot;io/ioutil&quot; &quot;log&quot; &quot;net/http&quot; gke &quot;google.golang.org/api/container/v1&quot; &quot;google.golang.org/api/option&quot; &quot;golang.org/x/oauth2&quot; &quot;golang.org/x/oauth2/google&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/rest&quot; clientcmdapi &quot;k8s.io/client-go/tools/clientcmd/api&quot; ) const ( googleAuthPlugin = &quot;gcp&quot; projectID = &quot;my_project&quot; clusterName = &quot;my_cluster&quot; zone = &quot;my_cluster_zone&quot; scope = &quot;https://www.googleapis.com/auth/cloud-platform&quot; ) type googleAuthProvider struct { tokenSource oauth2.TokenSource } // These funcitons are needed even if we don't utilize them // So that googleAuthProvider is an rest.AuthProvider interface func (g *googleAuthProvider) WrapTransport(rt http.RoundTripper) http.RoundTripper { return &amp;oauth2.Transport{ Base: rt, Source: g.tokenSource, } } func (g *googleAuthProvider) Login() error { return nil } func main() { ctx := context.Background() // Extract a token from the JSON SA key data, err := ioutil.ReadFile(&quot;sa_key.json&quot;) if err != nil { panic(err) } creds, err := google.CredentialsFromJSON(ctx, data, scope) if err != nil { panic(err) } token, err := creds.TokenSource.Token() if err != nil { panic(err) } tokenSource := oauth2.StaticTokenSource(token) // Authenticate with the token // If it's nil use Google ADC if err := rest.RegisterAuthProviderPlugin(googleAuthPlugin, func(clusterAddress string, config map[string]string, persister rest.AuthProviderConfigPersister) (rest.AuthProvider, error) { var err error if tokenSource == nil { tokenSource, err = google.DefaultTokenSource(ctx, scope) if err != nil { return nil, fmt.Errorf(&quot;failed to create google token source: %+v&quot;, err) } } return &amp;googleAuthProvider{tokenSource: tokenSource}, nil }); err != nil { log.Fatalf(&quot;Failed to register %s auth plugin: %v&quot;, googleAuthPlugin, err) } gkeClient, err := gke.NewService(ctx, option.WithTokenSource(tokenSource)) if err != nil { panic(err) } clientset, err := getClientSet(ctx, gkeClient, projectID, org, env) if err != nil { panic(err) } // Demo to make sure it works pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(ctx, metav1.ListOptions{}) if err != nil { panic(err) } log.Printf(&quot;There are %d pods in the cluster&quot;, len(pods.Items)) for _, pod := range pods.Items { fmt.Println(pod.Name) } } func getClientSet(ctx context.Context, client *gke.Service, projectID, name string) (*kubernetes.Clientset, error) { // Get cluster info cluster, err := client.Projects.Zones.Clusters.Get(projectID, zone, name).Context(ctx).Do() if err != nil { panic(err) } // Decode cluster CA certificate cert, err := base64.StdEncoding.DecodeString(cluster.MasterAuth.ClusterCaCertificate) if err != nil { return nil, err } // Build a config using the cluster info config := &amp;rest.Config{ TLSClientConfig: rest.TLSClientConfig{ CAData: cert, }, Host: 
&quot;https://&quot; + cluster.Endpoint, AuthProvider: &amp;clientcmdapi.AuthProviderConfig{Name: googleAuthPlugin}, } return kubernetes.NewForConfig(config) } </code></pre>
<p>I have been trying to implement istio authorization using Oauth2 and keycloak. I have followed few articles related to this <a href="https://medium.com/@senthilrch/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-part-2-of-2-dbb3fb9cd0d0" rel="nofollow noreferrer">API Authentication: Configure Istio IngressGateway, OAuth2-Proxy and Keycloak</a>, <a href="https://istio.io/latest/docs/reference/config/security/authorization-policy/" rel="nofollow noreferrer">Authorization Policy</a></p> <p><strong>Expected output:</strong> My idea is to implement keycloak authentication where oauth2 used as an external Auth provider in the istio ingress gateway. when a user try to access my app in <code>&lt;ingress host&gt;/app</code> , it should automatically redirect to keycloak login page.</p> <p>How do i properly redirect the page to keycloak login screen for authentication ?</p> <p><strong>problem:</strong> When i try to access <code>&lt;ingress host&gt;/app</code>, the page will take 10 seconds to load and it gives status 403 access denied. if i remove the authorization policy (<em>kubectl delete -f authorization-policy.yaml</em>) within that 10 seconds, it will redirect to the login screen (<em>keycloak</em>)</p> <p>oauth2.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: oauth-proxy name: oauth-proxy spec: type: NodePort selector: app: oauth-proxy ports: - name: http-oauthproxy port: 4180 nodePort: 31023 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: oauth-proxy name: oauth-proxy spec: replicas: 1 selector: matchLabels: app: &quot;oauth-proxy&quot; template: metadata: labels: app: oauth-proxy spec: containers: - name: oauth-proxy image: &quot;quay.io/oauth2-proxy/oauth2-proxy:v7.2.0&quot; ports: - containerPort: 4180 args: - --http-address=0.0.0.0:4180 - --upstream=http://test-web-app:3000 - --set-xauthrequest=true - --pass-host-header=true - --pass-access-token=true env: # OIDC Config - name: &quot;OAUTH2_PROXY_PROVIDER&quot; value: &quot;keycloak-oidc&quot; - name: &quot;OAUTH2_PROXY_OIDC_ISSUER_URL&quot; value: &quot;http://192.168.1.2:31020/realms/my_login_realm&quot; - name: &quot;OAUTH2_PROXY_CLIENT_ID&quot; value: &quot;my_nodejs_client&quot; - name: &quot;OAUTH2_PROXY_CLIENT_SECRET&quot; value: &quot;JGEQtkrdIc6kRSkrs89BydnfsEv3VoWO&quot; # Cookie Config - name: &quot;OAUTH2_PROXY_COOKIE_SECURE&quot; value: &quot;false&quot; - name: &quot;OAUTH2_PROXY_COOKIE_SECRET&quot; value: &quot;ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0=&quot; - name: &quot;OAUTH2_PROXY_COOKIE_DOMAINS&quot; value: &quot;*&quot; # Proxy config - name: &quot;OAUTH2_PROXY_EMAIL_DOMAINS&quot; value: &quot;*&quot; - name: &quot;OAUTH2_PROXY_WHITELIST_DOMAINS&quot; value: &quot;*&quot; - name: &quot;OAUTH2_PROXY_HTTP_ADDRESS&quot; value: &quot;0.0.0.0:4180&quot; - name: &quot;OAUTH2_PROXY_SET_XAUTHREQUEST&quot; value: &quot;true&quot; - name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER value: &quot;true&quot; - name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY value: &quot;true&quot; - name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON value: &quot;true&quot; - name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER value: &quot;true&quot; </code></pre> <p>keycloak.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: keycloak spec: type: NodePort selector: app: keycloak ports: - name: http-keycloak port: 8080 nodePort: 31020 --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak spec: selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak 
spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:17.0.0 ports: - containerPort: 8080 args: [&quot;start-dev&quot;] env: - name: KEYCLOAK_ADMIN value: &quot;admin&quot; - name: KEYCLOAK_ADMIN_PASSWORD value: &quot;admin&quot; </code></pre> <p>istio-operator.yaml</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: accessLogFile: /dev/stdout extensionProviders: - name: &quot;oauth2-proxy&quot; envoyExtAuthzHttp: service: &quot;oauth-proxy.default.svc.cluster.local&quot; port: &quot;4180&quot; # The default port used by oauth2-proxy. includeHeadersInCheck: [&quot;authorization&quot;, &quot;cookie&quot;,&quot;x-forwarded-access-token&quot;,&quot;x-forwarded-user&quot;,&quot;x-forwarded-email&quot;,&quot;x-forwarded-proto&quot;,&quot;proxy-authorization&quot;,&quot;user-agent&quot;,&quot;x-forwarded-host&quot;,&quot;from&quot;,&quot;x-forwarded-for&quot;,&quot;accept&quot;,&quot;x-auth-request-redirect&quot;] # headers sent to the oauth2-proxy in the check request. headersToUpstreamOnAllow: [&quot;authorization&quot;, &quot;path&quot;, &quot;x-auth-request-user&quot;, &quot;x-auth-request-email&quot;, &quot;x-auth-request-access-token&quot;,&quot;x-forwarded-access-token&quot;] # headers sent to backend application when request is allowed. headersToDownstreamOnDeny: [&quot;content-type&quot;, &quot;set-cookie&quot;] # headers sent back to the client when request is denied. </code></pre> <p>gateway.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: test-gateway namespace : istio-system spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - '*' </code></pre> <p>virtual-service.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: gateway-vs spec: hosts: - '*' gateways: - istio-system/test-gateway http: - match: - uri: prefix: /oauth2 route: - destination: host: oauth-proxy.default.svc.cluster.local port: number: 4180 - match: - uri: prefix: /app route: - destination: host: test-web-app.default.svc.cluster.local port: number: 3000 </code></pre> <p>authorization-policy.yaml</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: example-auth-policy spec: action: CUSTOM provider: name: &quot;oauth2-proxy&quot; rules: - to: - operation: paths: [&quot;/app&quot;] notPaths: [&quot;/oauth2/*&quot;] </code></pre>
<p>The redirection issue was solved by updating the authorization policy:</p> <pre><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: example-auth-policy
  namespace: istio-system
spec:
  action: CUSTOM
  provider:
    name: &quot;oauth2-proxy&quot;
  rules:
    - to:
        - operation:
            paths: [&quot;/app&quot;]
            notPaths: [&quot;/oauth2/*&quot;]
  selector:
    matchLabels:
      app: istio-ingressgateway
</code></pre> <ul> <li>Used the <code>istio-system</code> namespace instead of the workload namespace (it was <code>default</code> in my case).</li> <li>I had forgotten to add the <code>matchLabels</code> selector.</li> </ul>
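<p>To double-check that the selector actually matches the ingress gateway pods, something like this should return the gateway pod(s) — the label value may differ if you customized the gateway install:</p>
<pre><code>kubectl -n istio-system get pods -l app=istio-ingressgateway
</code></pre>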
<p>I'm trying to implement EFK stack (with Fluent Bit) in my k8s cluster. My log file I would like to parse sometimes is oneline and sometimes multiline:</p> <pre><code>2022-03-13 13:27:04 [-][-][-][error][craft\db\Connection::open] SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known 2022-03-13 13:27:04 [-][-][-][info][application] $_GET = [] $_POST = [] $_FILES = [] $_COOKIE = [ '__test1' =&gt; 'x' '__test2' =&gt; 'x2' ] $_SERVER = [ '__test3' =&gt; 'x3' '__test2' =&gt; 'x3' ] </code></pre> <p>When I'm checking captured logs in Kibana I see that all multiline logs are separated into single lines, which is of course not what we want to have. I'm trying to configure a parser in fluent bit config which will interpret multiline log as one entry, unfortunately with no success.</p> <p>I've tried this:</p> <pre><code>[PARSER] Name MULTILINE_MATCH Format regex Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?&lt;level&gt;.*)\]\[(?&lt;where&gt;.*)\] (?&lt;message&gt;[\s\S]*) Time_Key time Time_Format %b %d %H:%M:%S </code></pre> <p>In k8s all fluent bit configurations are stored in config map. So here's my whole configuration of fluent bit (the multiline parser is at the end):</p> <pre><code>kind: ConfigMap metadata: name: fluent-bit namespace: efk labels: app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False parsers.conf: | [PARSER] Name apache Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^ ]*) +\S*)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?&lt;time&gt;[^\]]*)\] \[(?&lt;level&gt;[^\]]*)\](?: \[pid (?&lt;pid&gt;[^\]]*)\])?( \[client (?&lt;client&gt;[^\]]*)\])? 
(?&lt;message&gt;.*)$ [PARSER] Name nginx Format regex Regex ^(?&lt;remote&gt;[^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog Format regex Regex ^\&lt;(?&lt;pri&gt;[0-9]+)\&gt;(?&lt;time&gt;[^ ]* {1,2}[^ ]* [^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;ident&gt;[a-zA-Z0-9_\/\.\-]*)(?:\[(?&lt;pid&gt;[0-9]+)\])?(?:[^\:]*\:)? *(?&lt;message&gt;.*)$ Time_Key time Time_Format %b %d %H:%M:%S [PARSER] Name MULTILINE_MATCH Format regex Regex ^\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \[-]\[-]\[-]\[(?&lt;level&gt;.*)\]\[(?&lt;where&gt;.*)\] (?&lt;message&gt;[\s\S]*) Time_Key time Time_Format %b %d %H:%M:%S </code></pre>
<p>Starting from Fluent Bit v1.8, you can use the <code>multiline.parser</code> option as below. The <code>docker</code> and <code>cri</code> multiline parsers are predefined in Fluent Bit.</p> <pre><code>[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    multiline.parser  docker, cri
    Tag               kube.*
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
</code></pre> <p><a href="https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8" rel="nofollow noreferrer">https://docs.fluentbit.io/manual/pipeline/inputs/tail#multiline-and-containers-v1.8</a></p>
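<p>The built-in <code>docker</code>/<code>cri</code> parsers only re-join lines that were split by the container runtime; to join the application's own multi-line entries you can additionally define a custom multiline parser (v1.8+ <code>[MULTILINE_PARSER]</code> syntax). A rough sketch for the log format in the question — the regexes here are assumptions and will likely need tuning:</p>
<pre><code>[MULTILINE_PARSER]
    name          multiline-app
    type          regex
    flush_timeout 1000
    # rule |state name| |regex pattern| |next state|
    # a new record starts with &quot;YYYY-MM-DD HH:MM:SS [-][-][-][level][where]&quot;
    rule      &quot;start_state&quot;   &quot;/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \[-\]\[-\]\[-\]\[\w+\]\[[^\]]*\]/&quot;   &quot;cont&quot;
    # anything that does not start a new record belongs to the previous one
    rule      &quot;cont&quot;          &quot;/^(?!\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ).*/&quot;                             &quot;cont&quot;
</code></pre>
<p>It would then be referenced from the tail input alongside the built-in ones, e.g. <code>multiline.parser docker, cri, multiline-app</code>.</p>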
<p>Having set up Kibana and a Fleet server, I have now attempted to add APM. When going through the general setup, I always get an error no matter what is done:</p> <pre><code>failed to listen:listen tcp *.*.*.*:8200: bind: can't assign requested address </code></pre> <p>This happens when following the APM setup steps after having created the Fleet server. This is all being launched in Kubernetes, and the documentation has been gone through several times to no avail.</p> <p>We did discover that we can hit the</p> <blockquote> <p>/intake/v2/events</p> </blockquote> <p>etc. endpoints when shelled into the container, but get 404 for everything else. It's close but no cigar so far following the instructions.</p>
<p>As it turned out, the general walkthrough is soon to be deprecated in its current form. Setup is far simpler in a Helm values file, where it is actually possible to configure Kibana with a package reference for your named APM service.</p> <blockquote> <pre><code>xpack.fleet.packages:
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: fleet_server
    version: latest
  - name: apm
    version: latest
</code></pre> </blockquote> <pre><code>xpack.fleet.agentPolicies:
  - name: Fleet Server on ECK policy
    id: eck-fleet-server
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    package_policies:
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
  - name: Elastic Agent on ECK policy
    id: eck-agent
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    is_default: true
    package_policies:
      - name: system-1
        id: system-1
        package:
          name: system
      - package:
          name: apm
        name: apm-1
        inputs:
          - type: apm
            enabled: true
            vars:
              - name: host
                value: 0.0.0.0:8200
</code></pre> <p>Making sure these are set in the Kibana Helm values file will allow any spun-up Fleet server to automatically register as having APM.</p> <p>The missing key in seemingly all of the documentation is the need for an APM service. The simplest example of one is here:</p> <p><a href="https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.3/config/recipes/elastic-agent/fleet-apm-integration.yaml" rel="nofollow noreferrer">Example yaml scripts</a></p>
<p>We have created the Kubernetes dashboard using the commands below.</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl patch svc -n kubernetes-dashboard kubernetes-dashboard --type='json' -p '[{&quot;op&quot;:&quot;replace&quot;,&quot;path&quot;:&quot;/spec/type&quot;,&quot;value&quot;:&quot;NodePort&quot;}]'
</code></pre> <p>Created a dashboard-adminuser.yaml file like below.</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
</code></pre> <p>Created a ClusterRoleBinding.yaml file like below.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
</code></pre> <p>And then ran the commands below; at the end we got a token to log in to the dashboard.</p> <pre><code>kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f ClusterRoleBinding.yaml
kubectl -n kubernetes-dashboard create token admin-user
</code></pre> <p>But the problem is that the token we generated expires in one hour. We aren't able to reuse the same token once the dashboard is logged out.</p> <p>So can we create a token without expiry, or at least one valid for a minimum of 6 months?</p> <p>What is the command/procedure to create a token for long-term use?</p> <p>One more thing: right now we are accessing the Kubernetes dashboard from outside like below.</p> <p>https://server_ip_address:PORT_NUMBER</p> <p>Now we want to open the Kubernetes dashboard using our website URL like below, and it should log in to the dashboard automatically.</p> <p><a href="https://my-domain-name.com/kubernetes-dashboard/%7Bkubernetes-dashboard-goto-url%7D" rel="noreferrer">https://my-domain-name.com/kubernetes-dashboard/{kubernetes-dashboard-goto-url}</a></p>
<p>You can set <code>--duration</code>:</p> <pre><code> --duration=0s: Requested lifetime of the issued token. The server may return a token with a longer or shorter lifetime.
</code></pre> <p>So this should work:</p> <pre><code>kubectl -n kubernetes-dashboard create token admin-user --duration=720h
</code></pre> <p>You can check the further options with</p> <pre><code>kubectl create token --help
</code></pre> <p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-" rel="nofollow noreferrer">kubectl-commands--token</a></p> <p>After playing around with tokens, it seems like the maximum expiration is 720h.</p> <pre><code>kubectl create token default --duration=488h --output yaml
</code></pre> <p>and the output shows</p> <pre><code>kind: TokenRequest
metadata:
  creationTimestamp: null
spec:
  audiences:
    - https://container.googleapis.com/v1/projects/test/clusters/test
  boundObjectRef: null
  expirationSeconds: **172800**
status:
  expirationTimestamp: &quot;2022-08-21T12:37:02Z&quot;
  token: eyJhbGciOiJSUzI1N....
</code></pre> <p>So the other option is to go with a kubeconfig, as the dashboard also accepts a config file.</p> <p><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#kubeconfig" rel="nofollow noreferrer">dashboard-auth-kubeconfig</a></p>
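<p>If you really need a token that does not expire at all, the older mechanism still works: create a long-lived ServiceAccount token Secret yourself. A sketch, reusing the <code>admin-user</code> ServiceAccount from the question (note that non-expiring tokens carry a security risk):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    # tells Kubernetes to populate this Secret with a token for the admin-user ServiceAccount
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
</code></pre>
<p>Then read the token back with <code>kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d</code>.</p>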
<p>I am using EKS Fargate and created a fargate profile based on this doc: <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html</a>.</p> <p>In the doc it says Fargate is used to allocate what pods are deployed to Fargate instead of Nodegroup or EC2. So my question is should I always have one Fargate profile in one cluster? Is there any reason to have more than 1?</p>
<p>As of Aug 17th 2022, EKS now supports wildcards in Fargate Profile selectors. This means you can now run workloads from various Kubernetes namespaces with a single Fargate Profile. Previously, you would have had to specify every namespace, and were limited to just 5 namespace selectors or label pairs.</p> <p><a href="https://aws.amazon.com/about-aws/whats-new/2022/08/wildcard-support-amazon-eks-fargate-profile-selectors/" rel="nofollow noreferrer">https://aws.amazon.com/about-aws/whats-new/2022/08/wildcard-support-amazon-eks-fargate-profile-selectors/</a></p> <p>For example, now you can use selectors like <code>*-staging</code> to match namespaces ending in <code>-staging</code>. You can also use <code>?</code> to match single characters. <code>app?</code> would match <code>appA</code> and <code>appB</code>.</p> <pre><code>eksctl create fargateprofile \ --cluster my-cluster \ --name my-fargate-profile \ --namespace *-staging </code></pre>
<p>In trying to securely install metrics-server on Kubernetes, I'm having problems.</p> <p>It seems like the metric-server pod is unable to successfully make requests to the Kubelet API on it's <code>10250</code> port.</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE metrics-server 0/1 1 0 16h </code></pre> <p>The Metrics Server deployment never becomes ready and it repeats the same sequence of error logs:</p> <pre class="lang-sh prettyprint-override"><code>I0522 01:27:41.472946 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key) I0522 01:27:41.798068 1 configmap_cafile_content.go:201] &quot;Starting controller&quot; name=&quot;client-ca::kube-system::extension-apiserver-authentication::client-ca-file&quot; I0522 01:27:41.798092 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0522 01:27:41.798068 1 dynamic_cafile_content.go:156] &quot;Starting controller&quot; name=&quot;request-header::/front-ca/front-proxy-ca.crt&quot; I0522 01:27:41.798107 1 dynamic_serving_content.go:131] &quot;Starting controller&quot; name=&quot;serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key&quot; I0522 01:27:41.798240 1 secure_serving.go:266] Serving securely on [::]:4443 I0522 01:27:41.798265 1 tlsconfig.go:240] &quot;Starting DynamicServingCertificateController&quot; W0522 01:27:41.798284 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed I0522 01:27:41.898439 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E0522 01:27:55.297497 1 scraper.go:140] &quot;Failed to scrape node&quot; err=&quot;Get \&quot;https://192.168.1.106:10250/metrics/resource\&quot;: context deadline exceeded&quot; node=&quot;system76-pc&quot; E0522 01:28:10.297872 1 scraper.go:140] &quot;Failed to scrape node&quot; err=&quot;Get \&quot;https://192.168.1.106:10250/metrics/resource\&quot;: context deadline exceeded&quot; node=&quot;system76-pc&quot; I0522 01:28:10.325613 1 server.go:187] &quot;Failed probe&quot; probe=&quot;metric-storage-ready&quot; err=&quot;no metrics to serve&quot; I0522 01:28:20.325231 1 server.go:187] &quot;Failed probe&quot; probe=&quot;metric-storage-ready&quot; err=&quot;no metrics to serve&quot; E0522 01:28:25.297750 1 scraper.go:140] &quot;Failed to scrape node&quot; err=&quot;Get \&quot;https://192.168.1.106:10250/metrics/resource\&quot;: context deadline exceeded&quot; node=&quot;system76-pc&quot; </code></pre> <p>I'm running Kubernetes deployed with <code>kubeadm</code> version 1.23.4 and I'm trying to securely use metrics-server.</p> <p>I'm looking for advice that could help with:</p> <ol> <li>How I can accurately diagnose the problem?</li> <li>Or alternatively, what configuration seems most fruitful to check first?</li> <li>Anything that will help with my mental model of which certificates and keys I need to configure explicitly and what is being handled automatically.</li> </ol> <p>So far, I have tried to validate that the I can retrieve API metrics:</p> <p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/stats/summary</code></p> <pre class="lang-json prettyprint-override"><code>{ &quot;node&quot;: { &quot;nodeName&quot;: &quot;system76-pc&quot;, &quot;systemContainers&quot;: [ { &quot;name&quot;: &quot;kubelet&quot;, &quot;startTime&quot;: &quot;2022-05-20T01:51:28Z&quot;, &quot;cpu&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:40Z&quot;, 
&quot;usageNanoCores&quot;: 59453039, &quot;usageCoreNanoSeconds&quot;: 9768130002000 }, &quot;memory&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:40Z&quot;, &quot;usageBytes&quot;: 84910080, &quot;workingSetBytes&quot;: 84434944, &quot;rssBytes&quot;: 67149824, &quot;pageFaults&quot;: 893055, &quot;majorPageFaults&quot;: 290 } }, { &quot;name&quot;: &quot;runtime&quot;, &quot;startTime&quot;: &quot;2022-05-20T00:33:24Z&quot;, &quot;cpu&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:37Z&quot;, &quot;usageNanoCores&quot;: 24731571, &quot;usageCoreNanoSeconds&quot;: 3955659226000 }, &quot;memory&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:37Z&quot;, &quot;usageBytes&quot;: 484306944, &quot;workingSetBytes&quot;: 242638848, &quot;rssBytes&quot;: 84647936, &quot;pageFaults&quot;: 56994074, &quot;majorPageFaults&quot;: 428 } }, { &quot;name&quot;: &quot;pods&quot;, &quot;startTime&quot;: &quot;2022-05-20T01:51:28Z&quot;, &quot;cpu&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:37Z&quot;, &quot;usageNanoCores&quot;: 292818104, &quot;usageCoreNanoSeconds&quot;: 45976001446000 }, &quot;memory&quot;: { &quot;time&quot;: &quot;2022-05-22T00:48:37Z&quot;, &quot;availableBytes&quot;: 29648396288, &quot;usageBytes&quot;: 6108573696, </code></pre> <p><code>kubectl get --raw /api/v1/nodes/system76-pc/proxy/metrics/resource</code></p> <pre class="lang-sh prettyprint-override"><code># HELP container_cpu_usage_seconds_total [ALPHA] Cumulative cpu time consumed by the container in core-seconds # TYPE container_cpu_usage_seconds_total counter container_cpu_usage_seconds_total{container=&quot;alertmanager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;alertmanager-prometheus-stack-kube-prom-alertmanager-0&quot;} 108.399948 1653182143362 container_cpu_usage_seconds_total{container=&quot;calico-kube-controllers&quot;,namespace=&quot;kube-system&quot;,pod=&quot;calico-kube-controllers-56fcbf9d6b-n87ts&quot;} 206.442768 1653182144294 container_cpu_usage_seconds_total{container=&quot;calico-node&quot;,namespace=&quot;kube-system&quot;,pod=&quot;calico-node-p6pxk&quot;} 6147.643669 1653182155672 container_cpu_usage_seconds_total{container=&quot;cert-manager&quot;,namespace=&quot;cert-manager&quot;,pod=&quot;cert-manager-795d7f859d-8jp4f&quot;} 134.583294 1653182142601 container_cpu_usage_seconds_total{container=&quot;cert-manager&quot;,namespace=&quot;cert-manager&quot;,pod=&quot;cert-manager-cainjector-5fcddc948c-vw4zz&quot;} 394.286782 1653182151252 container_cpu_usage_seconds_total{container=&quot;cert-manager&quot;,namespace=&quot;cert-manager&quot;,pod=&quot;cert-manager-webhook-5b64f87794-pl7fb&quot;} 404.53758 1653182140528 container_cpu_usage_seconds_total{container=&quot;config-reloader&quot;,namespace=&quot;flux-system&quot;,pod=&quot;alertmanager-prometheus-stack-kube-prom-alertmanager-0&quot;} 6.01391 1653182139771 container_cpu_usage_seconds_total{container=&quot;config-reloader&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-prometheus-stack-kube-prom-prometheus-0&quot;} 42.706567 1653182130750 container_cpu_usage_seconds_total{container=&quot;controller&quot;,namespace=&quot;flux-system&quot;,pod=&quot;sealed-secrets-controller-5884bbf4d6-mql9x&quot;} 43.814816 1653182144648 container_cpu_usage_seconds_total{container=&quot;controller&quot;,namespace=&quot;ingress-nginx&quot;,pod=&quot;ingress-nginx-controller-f9d6fc8d8-sgwst&quot;} 645.109711 1653182141169 
container_cpu_usage_seconds_total{container=&quot;coredns&quot;,namespace=&quot;kube-system&quot;,pod=&quot;coredns-64897985d-crtd9&quot;} 380.682251 1653182141861 container_cpu_usage_seconds_total{container=&quot;coredns&quot;,namespace=&quot;kube-system&quot;,pod=&quot;coredns-64897985d-rpmxk&quot;} 365.519839 1653182140533 container_cpu_usage_seconds_total{container=&quot;dashboard-metrics-scraper&quot;,namespace=&quot;kubernetes-dashboard&quot;,pod=&quot;dashboard-metrics-scraper-577dc49767-cbq8r&quot;} 25.733362 1653182141877 container_cpu_usage_seconds_total{container=&quot;etcd&quot;,namespace=&quot;kube-system&quot;,pod=&quot;etcd-system76-pc&quot;} 4237.357682 1653182140459 container_cpu_usage_seconds_total{container=&quot;grafana&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-grafana-757f9b9fcc-9f58g&quot;} 345.034245 1653182154951 container_cpu_usage_seconds_total{container=&quot;grafana-sc-dashboard&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-grafana-757f9b9fcc-9f58g&quot;} 123.480584 1653182146757 container_cpu_usage_seconds_total{container=&quot;grafana-sc-datasources&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-grafana-757f9b9fcc-9f58g&quot;} 35.851112 1653182145702 container_cpu_usage_seconds_total{container=&quot;kube-apiserver&quot;,namespace=&quot;kube-system&quot;,pod=&quot;kube-apiserver-system76-pc&quot;} 14166.156638 1653182150749 container_cpu_usage_seconds_total{container=&quot;kube-controller-manager&quot;,namespace=&quot;kube-system&quot;,pod=&quot;kube-controller-manager-system76-pc&quot;} 4168.427981 1653182148868 container_cpu_usage_seconds_total{container=&quot;kube-prometheus-stack&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-kube-prom-operator-54d9f985c8-ml2qj&quot;} 28.79018 1653182155583 container_cpu_usage_seconds_total{container=&quot;kube-proxy&quot;,namespace=&quot;kube-system&quot;,pod=&quot;kube-proxy-gg2wd&quot;} 67.215459 1653182155156 container_cpu_usage_seconds_total{container=&quot;kube-scheduler&quot;,namespace=&quot;kube-system&quot;,pod=&quot;kube-scheduler-system76-pc&quot;} 579.321492 1653182147910 container_cpu_usage_seconds_total{container=&quot;kube-state-metrics&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-kube-state-metrics-56d4759d67-h6lfv&quot;} 158.343644 1653182153691 container_cpu_usage_seconds_total{container=&quot;kubernetes-dashboard&quot;,namespace=&quot;kubernetes-dashboard&quot;,pod=&quot;kubernetes-dashboard-69dc48777b-8cckh&quot;} 78.231809 1653182139263 container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;helm-controller-dfb4b5478-7zgt6&quot;} 338.974637 1653182143679 container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;image-automation-controller-77fd9657c6-lg44h&quot;} 280.841645 1653182154912 container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;image-reflector-controller-86db8b6f78-5rz58&quot;} 2909.277578 1653182144081 container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;kustomize-controller-cd544c8f8-hxvk6&quot;} 596.392781 1653182152714 container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;notification-controller-d9cc9bf46-2jhbq&quot;} 244.387967 1653182142902 
container_cpu_usage_seconds_total{container=&quot;manager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;source-controller-84bfd77bf8-r827h&quot;} 541.650877 1653182148963 container_cpu_usage_seconds_total{container=&quot;metrics-server&quot;,namespace=&quot;flux-system&quot;,pod=&quot;metrics-server-55bc5f774-zznpb&quot;} 174.229886 1653182146946 container_cpu_usage_seconds_total{container=&quot;nfs-subdir-external-provisioner&quot;,namespace=&quot;flux-system&quot;,pod=&quot;nfs-subdir-external-provisioner-858745f657-zcr66&quot;} 244.061329 1653182139840 container_cpu_usage_seconds_total{container=&quot;node-exporter&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-stack-prometheus-node-exporter-wj2fx&quot;} 29.852036 1653182148779 container_cpu_usage_seconds_total{container=&quot;prometheus&quot;,namespace=&quot;flux-system&quot;,pod=&quot;prometheus-prometheus-stack-kube-prom-prometheus-0&quot;} 7141.611234 1653182154042 # HELP container_memory_working_set_bytes [ALPHA] Current working set of the container in bytes # TYPE container_memory_working_set_bytes gauge container_memory_working_set_bytes{container=&quot;alertmanager&quot;,namespace=&quot;flux-system&quot;,pod=&quot;alertmanager-prometheus-stack-kube-prom-alertmanager-0&quot;} 2.152448e+07 1653182143362 </code></pre> <p>metric-server config:</p> <pre class="lang-yaml prettyprint-override"><code> spec: containers: - args: - --secure-port=4443 - --cert-dir=/tmp - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-preferred-address-types=Hostname - --requestheader-client-ca-file=/front-ca/front-proxy-ca.crt - --kubelet-certificate-authority=/ca/ca.crt image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp name: tmp - mountPath: /front-ca name: front-proxy-ca-dir - mountPath: /ca name: ca-dir dnsPolicy: ClusterFirst priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: metrics-server serviceAccountName: metrics-server terminationGracePeriodSeconds: 30 volumes: - emptyDir: {} name: tmp - configMap: defaultMode: 420 name: front-proxy-ca name: front-proxy-ca-dir - configMap: defaultMode: 420 name: kubelet-ca name: ca-dir </code></pre> <p>kube-apiserver config:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: annotations: kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.106:6443 creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --advertise-address=192.168.1.106 - --allow-privileged=true - --authorization-mode=Node,RBAC - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-admission-plugins=NodeRestriction - --enable-bootstrap-token-auth=true 
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key - --etcd-servers=https://127.0.0.1:2379 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --requestheader-allowed-names=front-proxy-client - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --secure-port=6443 - --service-account-issuer=https://kubernetes.default.svc.cluster.local - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key image: k8s.gcr.io/kube-apiserver:v1.23.4 </code></pre>
<p>In my case I had the same issue with metrics-server because the master node had only 1 OCPU (vCPU). Use at least 2.</p>
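<p>To confirm whether CPU is the bottleneck, you can check the capacity and current resource requests of the control-plane node, for example:</p> <pre><code>kubectl describe node &lt;master-node-name&gt;
# look at the &quot;Capacity&quot;, &quot;Allocatable&quot; and &quot;Allocated resources&quot; sections
</code></pre>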
<p>I am using nginx-ingress in my cluster to expose certain services. I have an &quot;auth&quot; service that handles authentication, which I am trying to setup through nginx. Currently the service has a very simple GET endpoint, that always responds with a <code>UserId</code> header and tries to set two cookies:</p> <pre class="lang-js prettyprint-override"><code>// This is implemented on Nest.js which uses express.js @Get('*') auth(@Res() res: Response): void { res.header('UserId', '1') res.cookie('key', 'value') res.cookie('x', 'y') res.status(200).send('hello') } </code></pre> <p>I can confirm that both cookies are being set when I manually send a request to that endpoint, but when I set it as an annotation to the ingress:</p> <pre><code>nginx.ingress.kubernetes.io/auth-url: http://auth.dev.svc.cluster.local </code></pre> <p>and send a request through the ingress, only one of the cookies is forwarded to the Response (the first one <code>key=value</code>). I am not familiar with the nginx configuration, is there something I am supposed to change to make this work, so that both cookies are set?</p> <p>I found <a href="https://github.com/kubernetes/ingress-nginx/issues/8183" rel="nofollow noreferrer">this issue</a> on GitHub, but it seems to be about OAuth2 there is no clear explanation on what I am supposed to change.</p>
<p>I couldn't find a way to make this work with the <code>Set-Cookie</code> header. Not sure if there is a better way, but here is a workaround:</p> <p>I added a snippet for the <code>location</code> block that converts two headers to cookies:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | auth_request_set $auth_cookie1 $upstream_http_x_header1; auth_request_set $auth_cookie2 $upstream_http_x_header2; add_header Set-Cookie $auth_cookie1; add_header Set-Cookie $auth_cookie2; </code></pre> <p>And the <code>auth()</code> endpoint now responds with the <code>X-Header1</code> and <code>X-Header2</code> headers:</p> <pre class="lang-js prettyprint-override"><code>import { serialize } from 'cookie' @Get('*') auth(@Res() res: Response): void { res.header('UserId', '1') res.header('X-Header1', serialize('key', 'value')) res.header('X-Header2', serialize('x', 'y')) res.status(200).send('hello') } </code></pre> <p>Everything seems to be working well and this solution is similar to how nginx is adding the Set-Cookie header which doesn't support multiple cookies. The code below is copied from the <code>nginx.conf</code> file in the <code>nginx-controller</code> pod that <code>nginx-ingress</code> creates.</p> <pre><code>auth_request_set $auth_cookie $upstream_http_set_cookie; add_header Set-Cookie $auth_cookie; </code></pre>
<p>What do you use instead of <code>kubectl get ComponentStatus</code>?</p> <pre><code>kubectl get cs Warning: v1 ComponentStatus is deprecated in v1.19+ </code></pre>
<p>Yes, this API is deprecated and as it provided status of <code>etcd, kube-scheduler, and kube-controller-manager</code> components, which we can get through kubectl or using <code>/livez</code> endpoint.</p> <p>so you can try</p> <pre><code>kubectl get --raw='/readyz?verbose' #local cluster curl -k https://localhost:6443/livez?verbose </code></pre> <p><strong>output</strong></p> <pre><code>[+]ping ok [+]log ok [+]etcd ok [+]informer-sync ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]shutdown ok readyz check passed </code></pre> <blockquote> <p>The current state of this API is problematic, and requires reversing the actual data flow (it requires the API server to call to its clients), and is not functional across deployment topologies.</p> <p>It should be clearly marked as deprecated.</p> </blockquote> <p><a href="https://github.com/kubernetes/kubernetes/pull/93570" rel="nofollow noreferrer">Mark componentstatus as deprecated </a></p> <blockquote> <p>The Kubernetes API server provides 3 API endpoints (healthz, livez and readyz) to indicate the current status of the API server. The healthz endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific livez and readyz endpoints instead.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/using-api/health-checks/" rel="nofollow noreferrer">using-api-health-checks</a></p>
<p>I want to create a replica set of MongoDB pods and after pods are in running state, I want to create a collection on every mongo db instance. Here is the code:</p> <pre><code>metadata: name: mongodb-standalone spec: replicas: 3 selector: matchLabels: app: database template: metadata: labels: app: database selector: mongodb-standalone spec: containers: - name: mongodb-standalone image: mongo:4.0.8 lifecycle: postStart: exec: command: [&quot;mongo --eval 'db.createCollection(\&quot;Profile\&quot;);' test&quot;] </code></pre> <p>Still this code is not working.</p>
<p>you can use configmap and mount the db creation script to init</p> <blockquote> <p>When a container is started for the first time it will execute files with extensions .sh and .js that are found in <code>/docker-entrypoint-initdb.d.</code> Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.</p> </blockquote> <p>create file <code>create_db.js</code></p> <pre><code>db.createCollection(&quot;user&quot;) db.createCollection(&quot;movies&quot;) db.user.insert({name: &quot;Ada Lovelace&quot;, age: 205}) db.movies.insertMany( [ { title: 'Titanic', year: 1997, genres: [ 'Drama', 'Romance' ] }, { title: 'Spirited Away', year: 2001, genres: [ 'Animation', 'Adventure', 'Family' ] }, { title: 'Casablanca', genres: [ 'Drama', 'Romance', 'War' ] } ] ) </code></pre> <p>create configmap</p> <pre><code>kubectl create configmap create-db-configmap --from-file=./create_db.js </code></pre> <p>now we are all set, create deployment and check the magic</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: mongo name: mongo spec: replicas: 1 selector: matchLabels: app: mongo strategy: {} template: metadata: creationTimestamp: null labels: app: mongo spec: containers: - image: mongo name: mongo args: [&quot;--dbpath&quot;,&quot;/data/db&quot;] livenessProbe: exec: command: - mongo - --disableImplicitSessions - --eval - &quot;db.adminCommand('ping')&quot; initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 6 readinessProbe: exec: command: - mongo - --disableImplicitSessions - --eval - &quot;db.adminCommand('ping')&quot; initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 6 env: - name: MONGO_INITDB_DATABASE value: demodb - name: MONGO_INITDB_ROOT_USERNAME value: &quot;root&quot; - name: MONGO_INITDB_ROOT_PASSWORD value: &quot;password&quot; volumeMounts: - name: &quot;mongo-data-dir&quot; mountPath: &quot;/data/db&quot; - name: &quot;init-database&quot; mountPath: &quot;/docker-entrypoint-initdb.d/&quot; volumes: - name: &quot;mongo-data-dir&quot; - name: &quot;init-database&quot; configMap: name: create-db-configmap </code></pre> <p>you can find complete example <a href="https://github.com/Adiii717/kubernetes-mongo-db-init" rel="nofollow noreferrer">here</a></p> <p><a href="https://i.stack.imgur.com/GHeFR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GHeFR.png" alt="enter image description here" /></a></p>
<p>I tried to run Kafka in Raft mode (zookeeper-less) in Kubernetes and everything worked fine with this configuration:</p> <p>I am curious about how to change the provided configuration to run with a replication factor of 3 for instance?</p> <p>The fruitful topic was <a href="https://github.com/bitnami/bitnami-docker-kafka/issues/159" rel="nofollow noreferrer">on the github</a> but no one provided Kafka Kraft mode with replication set up.</p> <p>Statefulset</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: kafka-statefulset namespace: kafka labels: app: kafka-cluster spec: serviceName: kafka-svc replicas: 1 selector: matchLabels: app: kafka-cluster template: metadata: labels: app: kafka-cluster spec: containers: - name: kafka-container image: 'bitnami/kafka:latest' ports: - containerPort: 9092 - containerPort: 9093 env: - name: KAFKA_BROKER_ID value: &quot;1&quot; - name: KAFKA_CFG_NODE_ID value: &quot;1&quot; - name: KAFKA_ENABLE_KRAFT value: &quot;yes&quot; - name: KAFKA_CFG_PROCESS_ROLES value: &quot;broker,controller&quot; - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES value: &quot;CONTROLLER&quot; - name: KAFKA_CFG_LISTENERS value: &quot;CLIENT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094&quot; - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP value: &quot;CONTROLLER:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT&quot; - name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME value: &quot;CLIENT&quot; - name: KAFKA_CFG_ADVERTISED_LISTENERS value: &quot;CLIENT://kafka-statefulset-0.kafka-svc.kafka.svc.cluster.local:9092,EXTERNAL://127.0.0.1:9094&quot; - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS value: &quot;[email protected]:9093&quot; - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE value: &quot;false&quot; - name: KAFKA_DEFAULT_REPLICATION_FACTOR value: &quot;1&quot; - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: &quot;1&quot; - name: ALLOW_PLAINTEXT_LISTENER value: &quot;yes&quot; </code></pre> <p>Headless service:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: kafka-svc labels: app: kafka-cluster spec: clusterIP: None ports: - name: '9092' port: 9092 protocol: TCP targetPort: 9092 selector: app: kafka-cluster </code></pre>
<p>Finally, I deployed Kafka in Kraft mode with a replication factor of 3 in Kubernetes. I used guidelines within this <a href="https://learnk8s.io/kafka-ha-kubernetes#deploying-a-3-node-kafka-cluster-on-kubernetes" rel="nofollow noreferrer">article</a>. There is a very comprehensive description of how this setup works. I went through the image in docker hub <strong>doughgle/kafka-kraft</strong>, and there is a link to their Github repo where you can find a script:</p> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash NODE_ID=${HOSTNAME:6} LISTENERS=&quot;PLAINTEXT://:9092,CONTROLLER://:9093&quot; ADVERTISED_LISTENERS=&quot;PLAINTEXT://kafka-$NODE_ID.$SERVICE.$NAMESPACE.svc.cluster.local:9092&quot; CONTROLLER_QUORUM_VOTERS=&quot;&quot; for i in $( seq 0 $REPLICAS); do if [[ $i != $REPLICAS ]]; then CONTROLLER_QUORUM_VOTERS=&quot;$CONTROLLER_QUORUM_VOTERS$i@kafka-$i.$SERVICE.$NAMESPACE.svc.cluster.local:9093,&quot; else CONTROLLER_QUORUM_VOTERS=${CONTROLLER_QUORUM_VOTERS::-1} fi done mkdir -p $SHARE_DIR/$NODE_ID if [[ ! -f &quot;$SHARE_DIR/cluster_id&quot; &amp;&amp; &quot;$NODE_ID&quot; = &quot;0&quot; ]]; then CLUSTER_ID=$(kafka-storage.sh random-uuid) echo $CLUSTER_ID &gt; $SHARE_DIR/cluster_id else CLUSTER_ID=$(cat $SHARE_DIR/cluster_id) fi sed -e &quot;s+^node.id=.*+node.id=$NODE_ID+&quot; \ -e &quot;s+^controller.quorum.voters=.*+controller.quorum.voters=$CONTROLLER_QUORUM_VOTERS+&quot; \ -e &quot;s+^listeners=.*+listeners=$LISTENERS+&quot; \ -e &quot;s+^advertised.listeners=.*+advertised.listeners=$ADVERTISED_LISTENERS+&quot; \ -e &quot;s+^log.dirs=.*+log.dirs=$SHARE_DIR/$NODE_ID+&quot; \ /opt/kafka/config/kraft/server.properties &gt; server.properties.updated \ &amp;&amp; mv server.properties.updated /opt/kafka/config/kraft/server.properties kafka-storage.sh format -t $CLUSTER_ID -c /opt/kafka/config/kraft/server.properties exec kafka-server-start.sh /opt/kafka/config/kraft/server.properties </code></pre> <p>This script is necessary for setting proper configuration one by one to pods/brokers.</p> <p>Then I built my own image with the latest version of Kafka, Scala and openjdk 17:</p> <pre><code>FROM openjdk:17-bullseye ENV KAFKA_VERSION=3.3.1 ENV SCALA_VERSION=2.13 ENV KAFKA_HOME=/opt/kafka ENV PATH=${PATH}:${KAFKA_HOME}/bin LABEL name=&quot;kafka&quot; version=${KAFKA_VERSION} RUN wget -O /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz https://downloads.apache.org/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \ &amp;&amp; tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt \ &amp;&amp; rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \ &amp;&amp; ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} ${KAFKA_HOME} \ &amp;&amp; rm -rf /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz COPY ./entrypoint.sh / RUN [&quot;chmod&quot;, &quot;+x&quot;, &quot;/entrypoint.sh&quot;] ENTRYPOINT [&quot;/entrypoint.sh&quot;] </code></pre> <p>and here is the Kubernetes configuration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Namespace metadata: name: kafka-kraft --- apiVersion: v1 kind: PersistentVolume metadata: name: kafka-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteOnce hostPath: path: '/path/to/dir' --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: kafka-pv-claim namespace: kafka-kraft spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 500Mi --- apiVersion: v1 kind: Service metadata: name: kafka-svc 
labels: app: kafka-app namespace: kafka-kraft spec: clusterIP: None ports: - name: '9092' port: 9092 protocol: TCP targetPort: 9092 selector: app: kafka-app --- apiVersion: apps/v1 kind: StatefulSet metadata: name: kafka labels: app: kafka-app namespace: kafka-kraft spec: serviceName: kafka-svc replicas: 3 selector: matchLabels: app: kafka-app template: metadata: labels: app: kafka-app spec: volumes: - name: kafka-storage persistentVolumeClaim: claimName: kafka-pv-claim containers: - name: kafka-container image: me/kafka-kraft ports: - containerPort: 9092 - containerPort: 9093 env: - name: REPLICAS value: '3' - name: SERVICE value: kafka-svc - name: NAMESPACE value: kafka-kraft - name: SHARE_DIR value: /mnt/kafka - name: CLUSTER_ID value: oh-sxaDRTcyAr6pFRbXyzA - name: DEFAULT_REPLICATION_FACTOR value: '3' - name: DEFAULT_MIN_INSYNC_REPLICAS value: '2' volumeMounts: - name: kafka-storage mountPath: /mnt/kafka </code></pre> <p>I am not 100% sure if this setup works like with a stable zookeeper setup, but it is currently sufficient for me for the testing phase.</p> <p><strong>UPDATE:</strong> Kafka Kraft is production ready in release 3.3.1</p>
<p>In the namespace, I have multiple applications deployed. I would like to roll out the deployments based on a selector. Can someone please share how to achieve this?</p> <p>Thanks</p>
<p>You can achieve that by:</p> <pre class="lang-bash prettyprint-override"><code>kubectl rollout status deployment --selector=&quot;key=value&quot; </code></pre> <p>But this argument was added in kubectl <a href="https://github.com/kubernetes/kubernetes/pull/99758" rel="nofollow noreferrer">v1.24</a>, so if you have a lower version, you need to update it.</p>
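<p>If updating is not an option, a rough equivalent for older clients is to list the matching deployments and check their rollout status one by one (the <code>key=value</code> label is a placeholder for your own selector):</p> <pre class="lang-bash prettyprint-override"><code># list deployments matching the label selector and check each rollout
kubectl get deployments -l key=value -o name \
  | xargs -n1 kubectl rollout status
</code></pre>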
<p>In kubernetes (I am using minikube) I have deployed the following deployment using <code>kubectl apply -f nginx-deployment</code>:</p> <pre><code># nginx-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 </code></pre> <p>I get <code>deployment.apps/nginx-deployment created</code> as an output, and when I run <code>kubectl get deployment</code> I get:</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 3/3 3 3 22s </code></pre> <p>I have also deployed the following service file using <code>kubectl apply -f nginx-service.yml</code> command</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-service spec: type: NodePort selector: app: nginx ports: - name: &quot;http&quot; port: 80 targetPort: 80 nodePort: 30080 </code></pre> <p>The output is <code>service/nginx-service created</code> and the output of <code>kubectl get service</code> is:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 127d nginx-service NodePort 10.99.253.196 &lt;none&gt; 80:30080/TCP 75s </code></pre> <p>However, when I try to access the app by entering <code>10.99.253.196</code> into the browser, it doesn't load and when I try localhost:30080 it says <code>Unable to connect</code>. Could someone help me to understand why this is happening/provide further directions for troubleshooting?</p>
<p>Since you are using minikube, you might need to run <code>minikube service nginx-service --url</code>; this creates a tunnel to the cluster and exposes the service.</p>
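<p>Alternatively, you can try reaching the NodePort on the minikube node IP directly. Depending on the driver you use (e.g. the docker driver on macOS/Windows), this may not be reachable from the host, which is why the service tunnel above is the more reliable option:</p> <pre><code>minikube ip                      # prints the node IP, e.g. 192.168.49.2
curl http://$(minikube ip):30080
</code></pre>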
<p>I have installed k8s 1.24 version and containerd (containerd://1.5.9) is the CR for my setup (ubuntu 20.04).</p> <p>I have also installed docker on my VM and have added my private repository under /etc/docker/daemon.json with the following changes:</p> <pre><code>{ &quot;insecure-registries&quot; : [&quot;myPvtRepo.com:5028&quot;] } </code></pre> <p>When I am running <code>docker pull myPvtRepo:123/image</code> after login to my pvt repo by using <code>docker login myPvtRepo:123</code> command, I am able to pull the images while running the same command with <code>crictl pull myPvtRepo:123/image</code>, I am facing:</p> <blockquote> <p>E0819 06:49:01.200489 162610 remote_image.go:218] &quot;PullImage from image service failed&quot; err=&quot;rpc error: code = Unknown desc = failed to pull and unpack image &quot;myPvtRepo.com:5028/centos:latest&quot;: failed to resolve reference &quot;myPvtRepo.com:5028/centos:latest&quot;: failed to do request: Head <a href="https://myPvtRepo.com::5028/v2/centos/manifests/latest" rel="nofollow noreferrer">https://myPvtRepo.com::5028/v2/centos/manifests/latest</a>: x509: certificate signed by unknown authority&quot; image=&quot;myPvtRepo.com::5028/centos:latest&quot; FATA[0000] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image &quot;myPvtRepo.com::5028/centos:latest&quot;: failed to resolve reference &quot;myPvtRepo.com:5028/centos:latest&quot;: failed to do request: Head <a href="https://myPvtRepo.com::5028/v2/centos/manifests/latest" rel="nofollow noreferrer">https://myPvtRepo.com::5028/v2/centos/manifests/latest</a>: x509: certificate signed by unknown authority</p> </blockquote> <p>FYI, I have modified /etc/containerd/config.toml with below content.</p> <pre><code>version = 2 [plugin.&quot;io.containerd.grpc.v1.cri&quot;.registry.configs.&quot;myPvtRepo.com:5028&quot;.tls] insecure_skip_verify = true [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;docker.io&quot;] endpoint = [&quot;https://myPvtRepo.com:5028&quot;, &quot;https://myPvtRepo.com:5038&quot;, &quot;https://myPvtRepo.com:5037&quot;, &quot;https://myPvtRepo.com:5039&quot;] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;IP:5000&quot;] endpoint = [&quot;http://IP:5000&quot;] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;IP:5000&quot;] endpoint = [&quot;http://IP:5000&quot;] </code></pre> <p>I have also modified containerd's endpoint to point to containerd's sock.</p> <p>Can you please help me out to understand and fix that even after setting <code>insecure_skip_verify = true</code> for my pvt repository and restarting the containerd service why I am getting this issue.</p>
<p>I got a solution:</p> <pre><code>cd /usr/local/share/ca-certificates/ curl -L --remote-name http://your-artifacts.com/xyz-bundle.crt /usr/sbin/update-ca-certificates </code></pre> <p>This one worked for me.</p> <p>Also make sure to update your endpoints under <code>/etc/crictl.yaml</code>:</p> <pre><code>runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: &quot;&quot; timeout: 0 debug: false pull-image-on-create: false disable-pull-on-run: false </code></pre>
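<p>If the registry certificate was the issue, you will most likely also need to restart containerd afterwards so it picks up the refreshed CA bundle, and then retry the pull:</p> <pre><code>sudo systemctl restart containerd
crictl pull myPvtRepo.com:5028/centos:latest
</code></pre>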
<p>I am using terraform to deploy a kube cluster to Google Kubernetes Engine.</p> <p>Here is my ingress config - both http and https are working but I want http to auto redirect to https</p> <pre><code>resource &quot;kubernetes_ingress_v1&quot; &quot;ingress&quot; { wait_for_load_balancer = true metadata { name = &quot;ingress&quot; } spec { default_backend { service { name = kubernetes_service.frontend_service.metadata[0].name port { number = 80 } } } rule { http { path { backend { service { name = kubernetes_service.api_service.metadata[0].name port { number = 80 } } } path = &quot;/api/*&quot; } path { backend { service { name = kubernetes_service.api_service.metadata[0].name port { number = 80 } } } path = &quot;/api&quot; } } } tls { secret_name = &quot;tls-secret&quot; } } depends_on = [kubernetes_secret_v1.tls-secret, kubernetes_service.frontend_service, kubernetes_service.api_service] } </code></pre> <p>How can I configure the ingress to auto redirect from http to https?</p>
<p>The following worked for me - I got my hints from <a href="https://github.com/hashicorp/terraform-provider-kubernetes/issues/1326#issuecomment-910374103" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes/issues/1326#issuecomment-910374103</a></p> <pre><code> resource &quot;kubectl_manifest&quot; &quot;app-frontend-config&quot; { wait_for_rollout = true yaml_body = yamlencode({ apiVersion = &quot;networking.gke.io/v1beta1&quot; kind = &quot;FrontendConfig&quot; metadata = { name = &quot;ingress-fc&quot; } spec = { redirectToHttps = { enabled = true } } }) } resource &quot;kubernetes_ingress_v1&quot; &quot;ingress&quot; { wait_for_load_balancer = true metadata { name = &quot;ingress&quot; annotations = { &quot;networking.gke.io/v1beta1.FrontendConfig&quot; = kubectl_manifest.app-frontend-config.name } } spec { default_backend { service { name = kubernetes_service.frontend_service.metadata[0].name port { number = 80 } } } rule { http { path { backend { service { name = kubernetes_service.api_service.metadata[0].name port { number = 80 } } } path = &quot;/api/*&quot; } path { backend { service { name = kubernetes_service.api_service.metadata[0].name port { number = 80 } } } path = &quot;/api&quot; } } } tls { secret_name = &quot;tls-secret&quot; } } depends_on = [kubernetes_secret_v1.tls-secret, kubernetes_service.frontend_service, kubernetes_service.api_service] } </code></pre> <p>You need an additional module in your <code>terraform</code> block</p> <pre><code> kubectl = { source = &quot;gavinbunney/kubectl&quot; version = &quot;&gt;= 1.14.0&quot; } </code></pre> <p>Do not forget to initialise the kubectl provider</p> <pre><code> provider &quot;kubectl&quot; { host = &quot;https://${google_container_cluster.primary.endpoint}&quot; token = data.google_client_config.default.access_token cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate) load_config_file = false } </code></pre>
<p>I am using celery with a fastAPI.</p> <p>Getting <strong>Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)')</strong> while running in docker. When running the same in local machine without docker there is not issue.</p> <p>The configuration for the same is as below.</p> <pre><code>celery_app = Celery('cda-celery-tasks', broker=CFG.BROKER_URL, backend=CFG.BACKEND_URL, include=['src.tasks.tasks'] ) celery_app.conf.task_serializer = 'pickle' celery_app.conf.result_serializer = 'pickle' celery_app.conf.accept_content = ['pickle'] celery_app.conf.enable_utc = True </code></pre> <p>While Running in docker I am getting the error continuously</p> <pre><code>FROM python:3.8 WORKDIR /app COPY . . RUN pip3 install poetry ENV PATH=&quot;/root/.poetry/bin:$PATH&quot; RUN poetry install </code></pre> <p>the celery is started using the following command from kubernetes.</p> <p><code>poetry run celery -A src.infrastructure.celery_application worker --loglevel=INFO --concurrency 2</code></p> <p>While running I am getting the error continuously</p> <p>Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)')</p> <pre><code>body: '{&quot;method&quot;: &quot;enable_events&quot;, &quot;arguments&quot;: {}, &quot;destination&quot;: null, &quot;pattern&quot;: null, &quot;matcher&quot;: null}' (99b) Traceback (most recent call last): File &quot;/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/messaging.py&quot;, line 620, in _receive_callback decoded = None if on_m else message.decode() File &quot;/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py&quot;, line 194, in decode self._decoded_cache = self._decode() File &quot;/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py&quot;, line 198, in _decode return loads(self.body, self.content_type, File &quot;/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/serialization.py&quot;, line 242, in loads raise self._for_untrusted_content(content_type, 'untrusted') kombu.exceptions.ContentDisallowed: Refusing to deserialize untrusted content of type json (application/json) </code></pre> <p>Could someone please tell me the possible cause and solution to manage the same? If I've missed anything, over- or under-emphasized a specific point, please let me know in the comments. Thank you so much in advance for your time.</p>
<p>Configuring the celery_app with the accept_content type seems to fix the issue:</p> <pre><code>celery_app.conf.accept_content = ['application/json', 'application/x-python-serialize', 'pickle'] </code></pre>
<p>I have some amount of traffic that can boost the CPU usage up to 180%. I tried using a single pod, which works, but the response was extremely slow. When I configured my HPA with cpu=80%, min=1 and max={2 or more}, I hit connection refused errors while the HPA was creating more pods. When I set a larger min value (i.e. min = 3), the connection refused errors went away, but there are too many idle pods when traffic is low. Is there any way to keep a pod from being put online until it has completely started?</p>
<blockquote> <p>I hit connection refused when HPA was creating more pods</p> </blockquote> <p>Kubernetes uses the readinessProbe to determine whether to route clients to a Pod. If the readinessProbe for a Pod is not successful, Services whose selectors match that Pod will not send traffic to it.</p> <p>If there is no readinessProbe defined, or if it is misconfigured, Pods that are still starting up may end up serving client requests. Connection refused suggests there was no process listening yet for incoming connections.</p> <p>Please share your deployment/statefulset/..., if you need further assistance setting this up.</p>
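<p>In the meantime, here is a rough sketch of what a readinessProbe could look like (the port and <code>/healthz</code> path are assumptions; adjust them to whatever your application actually exposes):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz     # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
</code></pre> <p>Until the probe succeeds, the new Pod is not added to the Service endpoints, so the HPA can scale out without clients being sent to Pods that are still starting.</p>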
<p>I have read many links similar to my issue, but none of them were helping me to resolve the issue.</p> <p><strong>Similar Links</strong>:</p> <ol> <li><a href="https://github.com/containerd/containerd/issues/7219" rel="noreferrer">Failed to exec into the container due to permission issue after executing 'systemctl daemon-reload'</a></li> <li><a href="https://github.com/opencontainers/runc/issues/3551" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li> <li><a href="https://stackoverflow.com/questions/73379718/ci-runtime-exec-failed-exec-failed-unable-to-start-container-process-open-de">CI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li> <li><a href="https://github.com/moby/moby/issues/43969" rel="noreferrer">OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown</a></li> <li><a href="https://bbs.archlinux.org/viewtopic.php?id=277995" rel="noreferrer">Fail to execute docker exec</a></li> <li><a href="https://github.com/docker/for-linux/issues/246" rel="noreferrer">OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused &quot;open /proc/self/fd: no such file or directory&quot;: unknown</a></li> </ol> <p><strong>Problem Description</strong>:</p> <p>I have created a new Kubernetes cluster using <code>Kubespray</code>. When I wanted to execute some commands in one of containers I faced to the following error:</p> <h6>Executed Command</h6> <pre class="lang-bash prettyprint-override"><code>kubectl exec -it -n rook-ceph rook-ceph-tools-68d847b88d-7kw2v -- sh </code></pre> <h6>Error:</h6> <blockquote> <p>OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/1: operation not permitted: unknown command terminated with exit code 126</p> </blockquote> <p>I have also logged in to the node, which runs the pod, and try executing the container using <code>docker exec</code> command, but the error was not changed.</p> <p><strong>Workarounds</strong>:</p> <ul> <li><p>As I have found, the error code (126) implies that the permissions are insufficient, but I haven't faced this kind of error (like executing <code>sh</code>) in Docker or Kubernetes.</p> </li> <li><p>I have also checked whether <code>SELinux</code> is enabled or not (as it has been said in the 3rd link).</p> <pre class="lang-bash prettyprint-override"><code>apt install policycoreutils sestatus # Output SELinux status: disabled </code></pre> </li> <li><p>In the 5th link, it was said to check whether you have updated the kernel, and I didn't upgrade anything on the nodes.</p> <pre class="lang-bash prettyprint-override"><code>id; stat /dev/pts/0 # output uid=0(root) gid=0(root) groups=0(root) File: /dev/pts/0 Size: 0 Blocks: 0 IO Block: 1024 character special file Device: 18h/24d Inode: 3 Links: 1 Device type: 88,0 Access: (0600/crw-------) Uid: ( 0/ root) Gid: ( 5/ tty) Access: 2022-08-21 12:01:25.409456443 +0000 Modify: 2022-08-21 12:01:25.409456443 +0000 Change: 2022-08-21 11:54:47.474457646 +0000 Birth: - </code></pre> </li> <li><p>Also tried <code>/bin/sh</code> instead of <code>sh</code> or <code>/bin/bash</code>, but not worked and the same error occurred.</p> </li> </ul> <p>Can anyone help me to find the root cause of this problem and then solve it?</p>
<p>This issue may be related to Docker. First, drain your node.</p> <pre><code>kubectl drain &lt;node-name&gt; </code></pre> <p>Second, SSH to the node and restart the Docker service.</p> <pre><code>systemctl restart docker.service </code></pre> <p>Finally, try to execute your command again.</p>
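<p>Once the Docker service is back up, remember to make the node schedulable again:</p> <pre><code>kubectl uncordon &lt;node-name&gt;
</code></pre>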
<p>The grafana helm chart spawns a service on a Classic Load Balancer. I have the AWS load balancer webhook installed, and I'd like to overwrite the annotations on the Grafana service. I'm attempting the following:</p> <pre><code>helm install grafana grafana/grafana \ --namespace grafana \ --set persistence.storageClassName=&quot;gp2&quot; \ --set persistence.enabled=true \ --set adminPassword='abc' \ --values grafana.yaml \ --set service.type=LoadBalancer \ --set nodeSelector.app=prometheus \ --set nodeSelector.k8s-app=metrics-server \ --set service.annotations.&quot;service\.beta.kubernetes\.io/aws-load-balancer-nlb-target-type&quot;=ip \ --set service.annotations.&quot;service\.beta.kubernetes\.io/aws-load-balancer-type&quot;=external </code></pre> <p>but, after trying multiple permutations, I continue to get:</p> <pre><code>Error: INSTALLATION FAILED: YAML parse error on grafana/templates/service.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string </code></pre> <p>What is the correct way of doing this?</p>
<p>There is an issue in the annotation: you are missing an escape character for <code>beta\.kubernetes</code>.</p> <p>Try this and it should work:</p> <pre><code>--set service.annotations.&quot;service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type&quot;=ip \ --set service.annotations.&quot;service\.beta\.kubernetes\.io/aws-load-balancer-type&quot;=external </code></pre>
<p>I am running 4 replicas of the <em>bf-v</em> instance. I am using <strong>ClientIP</strong> as a sessionAffinity.</p> <p>I want to distribute requests based on the client IP address, but also distribute client IPs evenly (round-robin based) across replicas. I want pods to have the same number of clients. is there any way to achieve this kind of distribution? (the default round-robin gets affected by session affinity)</p> <p>Thanks :)</p> <p>svc.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: bf-v name: bf-v spec: ports: - port: 80 type: LoadBalancer selector: app: bf-v sessionAffinity: &quot;ClientIP&quot; </code></pre>
<p>Two options:</p> <p>1- Use <strong>IPVS</strong> mode with <strong>ipvs-scheduler=sh</strong> (sh -&gt; source hashing)</p> <p>2- Use <strong>ClientIP</strong> affinity in your svc.yaml with <strong>ipvs-scheduler=rr</strong> (rr -&gt; round robin)</p>
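<p>For either option, the scheduler is set in the kube-proxy configuration. A minimal sketch of the relevant part of the kube-proxy ConfigMap (a <code>KubeProxyConfiguration</code>) could look like this; after changing it, the kube-proxy pods have to be restarted to pick up the new settings:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: &quot;ipvs&quot;
ipvs:
  scheduler: &quot;sh&quot;   # source hashing; use &quot;rr&quot; for round robin
</code></pre>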
<p>I am confused about <strong>Nginx Ingress</strong> vs. an <strong>Nginx deployment</strong> (container) in Kubernetes, since both control incoming requests to the cluster. Let's say I am deploying a web app container and an nginx container in one pod, and everything works perfectly. If I deploy another application and use Nginx Ingress to route incoming requests, then <strong>who will control incoming requests</strong>: Nginx Ingress or that nginx container? <strong>THANKS</strong></p>
<p>Let's abstract ourselves from nginx. We should distinguish the webserver running alongside an application from the reverse proxy routing client requests to that application.</p> <p>In Kubernetes, you may deploy applications based on some Nginx, lighttpd, Apache, ... webserver, sometimes with complex configurations routing clients to the different bits composing your application (e.g. a nodejs backend for an API, static assets, a php/smarty frontend ...).</p> <p>Meanwhile, most Kubernetes clusters come with an &quot;Ingress Controller&quot;. A Controller in Kubernetes refers to some software integrating with your cluster API. An Ingress Controller watches for &quot;Ingress&quot; objects and configures itself to proxy client requests to &quot;Services&quot; within your cluster.</p> <p>Answering &quot;who controls incoming requests&quot;, then: a little bit of both. Your Ingress Controller is the proxy exposed to clients connecting to an application in your cluster. The webserver running in your application deployment serves the requests proxied by your Ingress Controller.</p> <p>And why would we do this: consider that Kubernetes comes with an SDN. Services and Pods in your cluster are usually not reachable from clients that are not part of your cluster network. Ingress controllers are a convenient way to let end-users of a cluster expose their own applications in a somewhat generic way, managing their own Ingresses, while cluster administrators make sure traffic can reach those applications by setting up the actual Ingress Controller.</p>
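<p>To make this concrete, here is a minimal Ingress that an Nginx Ingress Controller would pick up and use to route client requests to a Service (the names and host are placeholders):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app        # Service in front of your application Pods
            port:
              number: 80
</code></pre>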
<p>I'm attempting to configure AKS, and I have the below setup</p> <p><a href="https://i.stack.imgur.com/n6FfO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n6FfO.png" alt="enter image description here" /></a></p> <p>I want to enable HTTPS between Nginx Kubernetes Ingress Controller &amp; Asp.Net Core 6.0 WebAPI PODs, like</p> <p><a href="https://i.stack.imgur.com/XGalr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XGalr.png" alt="enter image description here" /></a></p> <p>How do I setup this? Where do I store the WebAPI SSL certificate?</p>
<p>Reference <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">documentation for annotation</a> to set the ingress backend to <code>HTTPS</code>:</p> <pre><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS </code></pre> <p>Follow the guidance <a href="https://learn.microsoft.com/en-us/aspnet/core/security/docker-https?view=aspnetcore-6.0" rel="nofollow noreferrer">here</a> to setup SSL certs for your WebAPI pods.</p> <p>The certs can be stored in kubernetes generic <code>secret</code> and can be mounted onto the pods as <code>volumes</code>. In production, the AKS secret storage could be <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" rel="nofollow noreferrer">backed by Azure KeyVault</a>, so the cert would be really stored in the KeyVault.</p> <p>For your test environment, here is how you create secret:</p> <pre><code>kubectl create secret generic webapi-cert-secret --from-file=cert=yourcert.pfx --from-literal=pass='yourcertpasswd' </code></pre> <p>Then mount into your pod/deployment definition (truncated for brevity):</p> <pre><code> env: - name: Kestrel__Certificates__Default__Path value: /certs/aspnet-cert.pfx - name: Kestrel__Certificates__Default__Password valueFrom: secretKeyRef: name: webapi-cert-secret key: pass volumeMounts: - name: certsvolume mountPath: /certs/aspnet-cert.pfx subPath: aspnet-cert.pfx readOnly: true volumes: - name: certsvolume secret: secretName: webapi-cert-secret items: - key: cert path: aspnet-cert.pfx </code></pre>
<p>In a nutshell, most of our apps are configured with the following <code>strategy</code> in the Deployment - </p> <pre><code> strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate </code></pre> <p>The Horizonatal Pod Autoscaler is configured as so </p> <pre><code>spec: maxReplicas: 10 minReplicas: 2 </code></pre> <p>Now when our application was redeployed, instead of running a rolling update, it instantly terminated 8 of our pods and dropped the number of pods to <code>2</code> which is the min number of replicas available. This happened in a fraction of a second as you can see here.</p> <p><a href="https://i.stack.imgur.com/V7AVN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/V7AVN.png" alt="enter image description here"></a></p> <p>Here is the output of <code>kubectl get hpa</code> - </p> <p><a href="https://i.stack.imgur.com/ehlyV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ehlyV.png" alt="enter image description here"></a></p> <p>As <code>maxUnavailable</code> is 25%, shouldn't only about 2-3 pods go down at max ? Why did so many pods crash at once ? It seems as though rolling update is useless if it works this way.</p> <p>What am I missing ?</p>
<p>In our case we added the <code>replicas</code> field a while ago and forgot to remove it when we added the HPA. The HPA does not play nice with the <code>replicas</code> field during deployments, so if you have a HPA remove the <code>replicas</code> field. See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#migrating-deployments-and-statefulsets-to-horizontal-autoscaling" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#migrating-deployments-and-statefulsets-to-horizontal-autoscaling</a></p> <blockquote> <p>When an HPA is enabled, it is recommended that the value of spec.replicas of the Deployment and / or StatefulSet be removed from their manifest(s). If this isn't done, any time a change to that object is applied, for example via kubectl apply -f deployment.yaml, this will instruct Kubernetes to scale the current number of Pods to the value of the spec.replicas key. This may not be desired and could be troublesome when an HPA is active.</p> </blockquote> <blockquote> <p>Keep in mind that the removal of spec.replicas may incur a one-time degradation of Pod counts as the default value of this key is 1 (reference Deployment Replicas). Upon the update, all Pods except 1 will begin their termination procedures. Any deployment application afterwards will behave as normal and respect a rolling update configuration as desired.</p> </blockquote>
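<p>The linked documentation also describes how to drop the field without triggering that one-time scale down to 1: remove <code>spec.replicas</code> from the last-applied configuration first, and only then from your manifest. Roughly:</p> <pre><code># opens the last-applied-configuration in an editor;
# delete the &quot;replicas&quot; line, save and exit
kubectl apply edit-last-applied deployment/&lt;deployment-name&gt;

# afterwards, remove spec.replicas from the manifest file as well,
# so future &quot;kubectl apply&quot; calls don't re-introduce it
</code></pre>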
<p>In Azure Kubernetes I want to have a pod with Jenkins in the default namespace that needs to read a secret from my application namespace.</p> <p>When I tried, I got the following error:</p> <pre><code>Error from server (Forbidden): secrets &quot;myapp-mongodb&quot; is forbidden: User &quot;system:serviceaccount:default:jenkinspod&quot; cannot get resource &quot;secrets&quot; in API group &quot;&quot; in the namespace &quot;myapp&quot; </code></pre> <p>How can I give this Jenkins pod access to read secrets in the 'myapp' namespace?</p>
<p><code>secret</code> is a namespaced resource and can be accessed via proper rbac permissions. However any improper rbac permissions may lead to leakage.</p> <p>You must <code>role bind</code> the pod's associated service account. Here is a complete example. I have created a new service account for role binding in this example. However, you can use the default <code>service account</code> if you want.</p> <p>step-1: create a namespace called <code>demo-namespace</code></p> <pre><code>kubectl create ns demo-namespace </code></pre> <p>step-2: create a secret in <code>demo-namespace</code>:</p> <pre><code>kubectl create secret generic other-secret -n demo-namespace --from-literal foo=bar secret/other-secret created </code></pre> <p>step-2: Create a service account(<code>my-custom-sa</code>) in the <code>default</code> namespace.</p> <pre><code>kubectl create sa my-custom-sa </code></pre> <p>step-3: Validate that, by default, the service account you created in the last step has no access to the secrets present in <code>demo-namespace</code>.</p> <pre><code>kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa no </code></pre> <p>step-4: Create a cluster role with permissions of <code>get</code> and <code>list</code> secrets from <code>demo-namespace</code> namespace.</p> <pre><code>kubectl create clusterrole role-for-other-user --verb get,list --resource secret clusterrole.rbac.authorization.k8s.io/role-for-other-user created </code></pre> <p>step-5: Create a rolebinding to bind the cluster role created in last step.</p> <pre><code> kubectl create rolebinding role-for-other-user -n demo-namespace --serviceaccount default:my-custom-sa --clusterrole role-for-other-user rolebinding.rbac.authorization.k8s.io/role-for-other-user created </code></pre> <p>step-6: validate that the service account in the default ns now has access to the secrets of <code>demo-namespace</code>. 
(note the difference from step 3)</p> <pre><code>kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa yes </code></pre> <p>step-7: create a pod in default namsepace and mount the service account you created earlier.</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: my-pod name: my-pod spec: serviceAccountName: my-custom-sa containers: - command: - sleep - infinity image: bitnami/kubectl name: my-pod resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} </code></pre> <p>step-7: Validate that you can read the secret of <code>demo-namespace</code> from the pod in the default namespace.</p> <pre><code> curl -sSk -H &quot;Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)&quot; https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/demo-namespace/secrets { &quot;kind&quot;: &quot;SecretList&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;resourceVersion&quot;: &quot;668709&quot; }, &quot;items&quot;: [ { &quot;metadata&quot;: { &quot;name&quot;: &quot;other-secret&quot;, &quot;namespace&quot;: &quot;demo-namespace&quot;, &quot;uid&quot;: &quot;5b3b9dba-be5d-48cc-ab16-4e0ceb3d1d72&quot;, &quot;resourceVersion&quot;: &quot;662043&quot;, &quot;creationTimestamp&quot;: &quot;2022-08-19T14:51:15Z&quot;, &quot;managedFields&quot;: [ { &quot;manager&quot;: &quot;kubectl-create&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2022-08-19T14:51:15Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:data&quot;: { &quot;.&quot;: {}, &quot;f:foo&quot;: {} }, &quot;f:type&quot;: {} } } ] }, &quot;data&quot;: { &quot;foo&quot;: &quot;YmFy&quot; }, &quot;type&quot;: &quot;Opaque&quot; } ] } </code></pre>
<p>I have deployed an nginx ingress controller in my EKS cluster. I want to add more security to my nginx deployment, i.e. add the <a href="https://content-security-policy.com/examples/nginx/" rel="nofollow noreferrer">content-security-policy</a> header and the ones below:</p> <pre><code>X-Frame-Options: Content-Security-Policy: X-Content-Type-Options: X-XSS-Protection: </code></pre> <p>Is there any document I can follow to do it? Please help.</p> <p>I added them in the configmap and it turns out that didn't help either.</p> <p>Thanks</p>
<p>You can try this:</p> <pre><code>ingress: enabled: true annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/configuration-snippet: | more_set_headers &quot;X-Frame-Options: Deny&quot;; more_set_headers &quot;X-Xss-Protection: 1; mode=block&quot;; more_set_headers &quot;X-Content-Type-Options: nosniff&quot;; more_clear_headers &quot;Cache-Control&quot;; more_set_headers &quot;Cache-Control: must-revalidate&quot;; proxy_set_header l5d-dst-override </code></pre> <p>Here is the list of all <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">nginx-configuration-annotations</a>.</p> <p>This fixed my issue, and I verified it using <a href="https://www.zaproxy.org/" rel="nofollow noreferrer">zaproxy</a>.</p>
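<p>If you also need the <code>Content-Security-Policy</code> header mentioned in the question, the same <code>configuration-snippet</code> annotation can carry it as well. A minimal sketch; the policy value below is only an example and should be adapted to what your site actually loads:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers &quot;Content-Security-Policy: default-src 'self'&quot;;
</code></pre>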
<p>I am writing an Operator for Kubernetes using the Operator SDK and have a question about logging (I am still rather new to Go).</p> <p>I have set up a logger in my controller and attached a UUID to it as a trace ID:</p> <pre><code>func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { myLog = log.FromContext(ctx).WithValues(&quot;traceID&quot;, uuid.NewUUID()) myCtx := log.IntoContext(ctx, myLog) </code></pre> <p>However, I have a library that is set up at the same time as the controllers and passed in via the struct.</p> <pre><code>type MyReconciler struct { MyBase MyClient MyGreatClient } </code></pre> <p>Each method on the client takes the ctx as a parameter.</p> <p>Since I want to use the logger in the ctx (as it contains my trace ID), do I need to add this line to every method in the client, or is there a better way to do it?</p> <pre><code>func (mgc *MyGreatClient) SomethingExists(ctx context.Context, something string) bool { myGreatClientLog = log.FromContext(ctx).WithName(&quot;MyGreatClient&quot;) myGreatClientLog.Info(&quot;Checking Something Exists&quot;, &quot;Something&quot;, something) </code></pre> <p>Is there a better way to achieve what I want to do? It doesn't appear that <code>&quot;sigs.k8s.io/controller-runtime/pkg/log&quot;</code> (of type <code>logr.Logger</code>) supports just logging a context like zap does.</p> <pre><code> myappcontex.Logger(ctx).Info(&quot;did something awesome&quot;) </code></pre> <p>Any help on doing this the idiomatic way is appreciated.</p>
<p>I am not at all sure of this answer, and I too wonder why logging and logging sinks are so complex (see <a href="https://dave.cheney.net/2015/11/05/lets-talk-about-logging" rel="nofollow noreferrer">https://dave.cheney.net/2015/11/05/lets-talk-about-logging</a>, which I found referenced in logr <a href="https://pkg.go.dev/github.com/go-logr/[email protected]" rel="nofollow noreferrer">https://pkg.go.dev/github.com/go-logr/[email protected]</a>).</p> <p>This is how I logged in a generated <code>kubebuilder</code> operator controller:</p> <pre><code>log.Log.Info(&quot;Pod Image is set&quot;, &quot;PodImageName&quot;, testOperator.Spec.PodImage) </code></pre> <p>Output:</p> <pre><code>1.6611775636957748e+09 INFO Pod Image is set {&quot;PodImageName&quot;: &quot;alexcpn/run_server:1.2&quot;} </code></pre> <p>And with this:</p> <pre><code>log.FromContext(ctx).Info(&quot;Pod Image is &quot;, &quot;PodImageName&quot;, testOperator.Spec.PodImage) </code></pre> <p>The output is:</p> <pre><code>1.6611801111484244e+09 INFO Pod Image is {&quot;controller&quot;: &quot;testoperartor&quot;, &quot;controllerGroup&quot;: &quot;grpcapp.mytest.io&quot;, &quot;controllerKind&quot;: &quot;Testoperartor&quot;, &quot;testoperartor&quot;: {&quot;name&quot;:&quot;testoperartor-sample&quot;,&quot;namespace&quot;:&quot;default&quot;}, &quot;namespace&quot;: &quot;default&quot;, &quot;name&quot;: &quot;testoperartor-sample&quot;, &quot;reconcileID&quot;: &quot;ffa3a957-c14f-4ec9-8cf9-767c38fc26ee&quot;, &quot;PodImageName&quot;: &quot;alexcpn/run_server:1.2&quot;} </code></pre> <p>The controller uses Go's logr:</p> <p><code>All logging in controller-runtime is structured, using a set of interfaces defined by a package called logr (https://pkg.go.dev/github.com/go-logr/logr). The sub-package zap provides helpers for setting up logr backed by Zap (go.uber.org/zap) </code> <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log#DelegatingLogSink</a></p> <p>And I can see that it sets up Zap logging in main:</p> <pre><code>ctrl.SetLogger(zap.New(zap.UseFlagOptions(&amp;opts))) </code></pre>
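<p>To address the original question (avoiding the repeated <code>log.FromContext</code> call in every client method), one common pattern is a tiny package-level helper so each method stays a one-liner. A minimal sketch, assuming the type and package names from the question; it is not the only idiomatic option, just the simplest one:</p> <pre><code>package myclient

import (
	&quot;context&quot;

	&quot;github.com/go-logr/logr&quot;
	&quot;sigs.k8s.io/controller-runtime/pkg/log&quot;
)

type MyGreatClient struct{}

// loggerFor pulls the logger (which already carries the traceID stored by the
// reconciler via log.IntoContext) out of the context and names it for this client.
func loggerFor(ctx context.Context) logr.Logger {
	return log.FromContext(ctx).WithName(&quot;MyGreatClient&quot;)
}

func (mgc *MyGreatClient) SomethingExists(ctx context.Context, something string) bool {
	loggerFor(ctx).Info(&quot;Checking Something Exists&quot;, &quot;Something&quot;, something)
	// ... actual lookup logic goes here ...
	return false
}
</code></pre> <p>Each method then picks up whatever logger (and trace ID) the reconciler stored in the context, without the logger having to be passed into the client struct.</p>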
<p>While I try to add my k8s cluster (running on an Azure VM), it shows an error like: <code>error: resource mapping not found for name: &quot;cattle-admin-binding&quot; namespace: &quot;cattle-system&quot; from &quot;STDIN&quot;: no matches for kind &quot;ClusterRoleBinding&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; ensure CRDs are installed first</code></p> <p>Here is the output of the command I executed:</p> <pre><code>root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created namespace/cattle-system created serviceaccount/cattle created secret/cattle-credentials-e558be7 created clusterrole.rbac.authorization.k8s.io/cattle-admin created Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use &quot;kubernetes.io/os&quot; instead deployment.apps/cattle-cluster-agent created daemonset.apps/cattle-node-agent created error: resource mapping not found for name: &quot;cattle-admin-binding&quot; namespace: &quot;cattle-system&quot; from &quot;STDIN&quot;: no matches for kind &quot;ClusterRoleBinding&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; </code></pre> <p>ensure CRDs are installed first</p>
<p>I was also facing the same issue, so I changed the API version for the <code>cattle-admin-binding</code> from beta to stable as below:</p> <p>Old value:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 </code></pre> <p>Changed to:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 </code></pre> <p>Though I ran into some other issues later, the above error was gone.</p>
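<p>Since the question pipes the manifest straight from the server, the same fix can also be applied on the fly instead of editing a saved file. A sketch, with the import URL shortened to a placeholder; it blanket-rewrites the deprecated RBAC apiVersion before applying:</p> <pre><code>curl --insecure -sfL https://104.211.32.151:8443/v3/import/&lt;token&gt;.yaml \
  | sed 's|rbac.authorization.k8s.io/v1beta1|rbac.authorization.k8s.io/v1|g' \
  | kubectl apply -f -
</code></pre>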
<p>An action item from the <strong>security scan</strong> is to implement the <strong>HSTS</strong> header in an ASP.NET Core 6.0 WebAPI.</p> <p>The WebAPI application is deployed on AKS using the Application Gateway Ingress Controller. SSL termination occurs at the Application Gateway. The Application Gateway Ingress Controller and the pods communicate using HTTP.</p> <p><a href="https://i.stack.imgur.com/mF57L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mF57L.png" alt="enter image description here" /></a></p> <p>In this case, is it necessary to implement HSTS? If so, what infrastructure requirements are needed?</p>
<p>The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security" rel="nofollow noreferrer">HSTS header</a> is a browser only instruction. It informs browsers that the site should only be accessed using HTTPS, and that any future attempts to access it using HTTP should automatically be converted to HTTPS.</p> <blockquote> <p>In this case, is it necessary to implement HSTS?</p> </blockquote> <p>If your application hosted in AKS is a web application which will load in browser then, yes. However, as you mentioned, if it is only an API then it does not make much sense.</p> <p>This is also <a href="https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-6.0&amp;tabs=visual-studio" rel="nofollow noreferrer">documented on MSDN</a>:</p> <blockquote> <p>HSTS is generally a browser only instruction. Other callers, such as phone or desktop apps, do not obey the instruction. Even within browsers, a single authenticated call to an API over HTTP has risks on insecure networks. The secure approach is to configure API projects to only listen to and respond over HTTPS.</p> </blockquote> <p>That said, assuming your application is a web application, to implement it with AGIC, you will have to first configure rewrite ruleset on the app gateway. This can be done from portal or with PowerShell:</p> <pre><code># Create RuleSet $responseHeaderConfiguration = New-AzApplicationGatewayRewriteRuleHeaderConfiguration -HeaderName &quot;Strict-Transport-Security&quot; -HeaderValue &quot;max-age=31536000; includeSubDomains; preload&quot; $actionSet = New-AzApplicationGatewayRewriteRuleActionSet -ResponseHeaderConfiguration $responseHeaderConfiguration $rewriteRule = New-AzApplicationGatewayRewriteRule -Name HSTSHeader -ActionSet $actionSet $rewriteRuleSet = New-AzApplicationGatewayRewriteRuleSet -Name SecurityHeadersRuleSet -RewriteRule $rewriteRule # apply the ruleset to your app gateway $appgw = Get-AzApplicationGateway -Name &quot;yourgw&quot; -ResourceGroupName &quot;yourgw-rg&quot; Add-AzApplicationGatewayRewriteRuleSet -ApplicationGateway $appgw -Name $rewriteRuleSet.Name -RewriteRule $rewriteRuleSet.RewriteRules Set-AzApplicationGateway -ApplicationGateway $appgw </code></pre> <p>Next, to map the RuleSet to your ingress path, use the <a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/#rewrite-rule-set" rel="nofollow noreferrer">annotation</a> on your ingress definition to reference the Ruleset:</p> <pre><code>appgw.ingress.kubernetes.io/rewrite-rule-set: SecurityHeadersRuleSet </code></pre>
<p>I have configured an <a href="https://github.com/elastic/cloud-on-k8s/blob/main/config/recipes/beats/filebeat_autodiscover.yaml" rel="nofollow noreferrer">Elastic ECK Beat with autodiscover</a> enabled for all pod logs, but I need to add logs from a specific pod log file too; from this path <code>/var/log/traefik/access.log</code> inside the container. I've tried with module and log config but still nothing works.</p> <p>The access.log file exists on the pods and contains data. The filebeat index does not show any data from this log.file.path</p> <p>Here is the Beat yaml:</p> <pre><code>--- apiVersion: beat.k8s.elastic.co/v1beta1 kind: Beat metadata: name: filebeat namespace: elastic spec: type: filebeat version: 8.3.1 elasticsearchRef: name: elasticsearch kibanaRef: name: kibana config: filebeat: autodiscover: providers: - type: kubernetes node: ${NODE_NAME} hints: enabled: true default_config: type: container paths: - /var/log/containers/*${data.kubernetes.container.id}.log templates: - condition.contains: kubernetes.pod.name: traefik config: - module: traefik access: enabled: true var.paths: [ &quot;/var/log/traefik/*access.log*&quot; ] processors: - add_cloud_metadata: {} - add_host_metadata: {} daemonSet: podTemplate: spec: serviceAccountName: filebeat automountServiceAccountToken: true terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirstWithHostNet hostNetwork: true # Allows to provide richer host metadata containers: - name: filebeat securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true volumeMounts: - name: varlogcontainers mountPath: /var/log/containers - name: varlogpods mountPath: /var/log/pods - name: varlibdockercontainers mountPath: /var/lib/docker/containers - name: varlog mountPath: /var/log env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName volumes: - name: varlogcontainers hostPath: path: /var/log/containers - name: varlogpods hostPath: path: /var/log/pods - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat namespace: elastic rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: elastic --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat namespace: elastic subjects: - kind: ServiceAccount name: filebeat namespace: elastic roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io </code></pre> <p>Here is the module loaded from Filebeat Logs:</p> <pre><code>... 
{&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.337Z&quot;,&quot;log.logger&quot;:&quot;esclientleg&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;eslegclient/connection.go&quot;,&quot;file.line&quot;:291},&quot;message&quot;:&quot;Attempting to connect to Elasticsearch version 8.3.1&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.352Z&quot;,&quot;log.logger&quot;:&quot;modules&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;fileset/modules.go&quot;,&quot;file.line&quot;:108},&quot;message&quot;:&quot;Enabled modules/filesets: traefik (access)&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.353Z&quot;,&quot;log.logger&quot;:&quot;input&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;log/input.go&quot;,&quot;file.line&quot;:172},&quot;message&quot;:&quot;Configured paths: [/var/log/traefik/*access.log*]&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;input_id&quot;:&quot;fa247382-c065-40ca-974e-4b69f14c3134&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.355Z&quot;,&quot;log.logger&quot;:&quot;modules&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;fileset/modules.go&quot;,&quot;file.line&quot;:108},&quot;message&quot;:&quot;Enabled modules/filesets: traefik (access)&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.355Z&quot;,&quot;log.logger&quot;:&quot;input&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;log/input.go&quot;,&quot;file.line&quot;:172},&quot;message&quot;:&quot;Configured paths: [/var/log/traefik/*access.log*]&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;input_id&quot;:&quot;6883d753-f149-4a68-9499-fe039e0de899&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.437Z&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;input/input.go&quot;,&quot;file.line&quot;:134},&quot;message&quot;:&quot;input ticker stopped&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} {&quot;log.level&quot;:&quot;info&quot;,&quot;@timestamp&quot;:&quot;2022-08-18T19:58:55.439Z&quot;,&quot;log.logger&quot;:&quot;input&quot;,&quot;log.origin&quot;:{&quot;file.name&quot;:&quot;log/input.go&quot;,&quot;file.line&quot;:172},&quot;message&quot;:&quot;Configured paths: [/var/log/containers/*9a1680222e867802388f649f0a296e076193242962b28eb7e0e575bf68826d85.log]&quot;,&quot;service.name&quot;:&quot;filebeat&quot;,&quot;input_id&quot;:&quot;3c1fffae-0213-4889-b0e7-5dda489eeb51&quot;,&quot;ecs.version&quot;:&quot;1.6.0&quot;} ... </code></pre>
<p>Docker logging is based on the stdout/stderr output of a container. If you only write into a log file inside a container, it will never be picked up by Docker logging and can therefore also not be processed by your Filebeat setup.</p> <p>Instead, ensure that all logs generated by your containers are sent to stdout. In your example that would mean starting the Traefik pod with <code>--accesslogsfile=/dev/stdout</code> to also send the access logs to stdout instead of the log file.</p>
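<p>As a side note, the exact flag depends on the Traefik version, so treat the following as an assumption to verify against your Traefik documentation: in Traefik v2 the access log goes to stdout by default once it is enabled, while v1 used a file flag that can point at <code>/dev/stdout</code>.</p> <pre><code># Traefik v2: enabling the access log writes it to stdout unless a filePath is set
args:
  - --accesslog=true

# Traefik v1 equivalent (file-based flag):
#   --accessLogsFile=/dev/stdout
</code></pre>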
<p>We're using Gitlab Runner with the Kubernetes executor and we were thinking about something that I think is currently not possible. We want to assign the Gitlab Runner daemon's pod to a specific node group's workers with instance type X, and the jobs' pods to a different node group Y's worker nodes, as the jobs usually require more computation resources than the Gitlab Runner's pod.</p> <p>This is in order to save costs: the node where the Gitlab Runner main daemon runs will always be up, so we want it on a cheap instance, while the jobs, which need more computation capacity, can run on instances of a different type that are started by the Cluster Autoscaler and destroyed when no jobs are present.</p> <p>I investigated this feature, and the available way to assign the pods to specific nodes is to use the node selector or node affinity, but the rules included in these two configuration sections are applied to all the pods of the Gitlab Runner, the main pod and the jobs' pods alike. The proposal is to make it possible to apply two separate configurations, one for the Gitlab Runner's pod and one for the jobs' pods.</p> <p>The currently existing config consists of the node selector and node/pod affinity, but as I mentioned these apply globally to all the pods and not to specific ones as we want in our case.</p> <p>Gitlab Runner Kubernetes Executor Config: <a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html</a></p>
<p>This problem is solved! After further investigation I found that Gitlab Runner's Helm chart provides two <code>nodeSelector</code> features that do exactly what I was looking for: one for the main pod, which represents the Gitlab Runner pod, and the other for the Gitlab Runner's jobs' pods. Below is a sample of the Helm chart values in which I note, beside each <code>nodeSelector</code>, its scope and the pod that it affects.</p> <p>Note that the first-level <code>nodeSelector</code> is the one that affects the main Gitlab Runner pod, while <code>runners.kubernetes.node_selector</code> is the one that affects the Gitlab Runner's jobs' pods.</p> <pre class="lang-yaml prettyprint-override"><code>gitlabUrl: https://gitlab.com/ ... nodeSelector: gitlab-runner-label-example: label-values-example-0 ... runnerRegistrationToken: **** ... runners: config: [[runners]] name = &quot;gitlabRunnerExample&quot; executor = &quot;kubernetes&quot; environment = [&quot;FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=true&quot;] [runners.kubernetes] ... [runners.kubernetes.node_selector] &quot;gitlab-runner-label-example&quot; = &quot;label-values-example-1&quot; [runners.cache] ... [runners.cache.s3] ... ... </code></pre>
<p>I am applying the app-of-apps pattern with Argo CD on my application deployments, where I have a directory with the application definitions, and then a directory with resource definitions and a <code>kustomization.yaml</code> file. When a new version is released, all I do is run <code>kustomize set image ...</code> in a pipeline that will issue an autocommit, and Argo will pick it up.</p> <p>I currently have the following structure of files, and it is repeated for other environments, like staging and dev.</p> <pre class="lang-sh prettyprint-override"><code>deployments ├── production │ ├── app-1 │ │ ├── kustomization.yaml │ │ └── deployment.yaml │ ├── app-2 │ │ ├── kustomization.yaml │ │ └── deployment.yaml └───└── apps ├── app1.yaml └── app2.yaml </code></pre> <p>I have now decided to throw myself into the Helm world and create charts for each application with the required resource definitions. Then, in each environment folder I will keep an appropriate <code>values.yaml</code> file to override the proper values for each environment's application deployment.</p> <p>I would like to have the same flow as before, where the pipeline updates the new image tag (this time in the <code>values.yaml</code> file), creates the autocommit and Argo syncs it.</p> <p>Is it possible to somehow do a <code>kustomize set image...</code> on each of the <code>values.yaml</code> files accordingly? Or what would be a smarter approach here?</p>
<p>In my case, I implemented a simple GitHub Action that fixes a YAML file in another repo and then commits it.</p> <p>I have two kinds of GitHub repositories: one for application development and the other for storing the k8s manifests to which the app-of-apps pattern is applied.</p> <p>There is a GitHub Action for CI/CD in my development repository, triggered when dev branches are merged to 'main'.</p> <p>It builds a new Docker image and publishes it to Docker Hub (or AWS ECR) with a version tag, then updates values.yaml with that tag in the k8s manifests repository's Helm chart via another GitHub Action (<a href="https://github.com/alphaprime-dev/fix-yaml-in-another-repo/blob/main/action.yml" rel="nofollow noreferrer">'fix-yaml-in-another-repo'</a>).</p>
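<p>If you would rather keep the pipeline step generic instead of using a dedicated action, a small <code>yq</code> (v4) call can play the role that <code>kustomize set image</code> played before. A sketch, assuming the values file exposes the tag under <code>image.tag</code> and the path matches the layout from the question:</p> <pre><code># NEW_TAG is exported by the CI job; adjust the path to your values file
yq -i '.image.tag = strenv(NEW_TAG)' deployments/production/app-1/values.yaml
git commit -am &quot;bump app-1 image to ${NEW_TAG}&quot;
</code></pre>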
<p>I am trying to run a local cluster on a Mac with the M1 chip using Minikube (Docker driver). I enabled the ingress addon in Minikube, I have a separate terminal in which I'm running <code>minikube tunnel</code>, and I enabled the Minikube dashboard, which I want to expose using Ingress. This is my configuration file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard spec: rules: - host: dashboard.com http: paths: - backend: service: name: kubernetes-dashboard port: number: 80 pathType: Prefix path: / </code></pre> <p>I also put &quot;dashboard.com&quot; in my /etc/hosts file and it's actually resolving to the right IP, but it's not responding when I put &quot;http://dashboard.com&quot; in a browser or when I try to ping it, and I always receive a timeout.</p> <p>NOTE: when I run <code>minikube tunnel</code> I get</p> <pre><code>❗ The service/ingress dashboard-ingress requires privileged ports to be exposed: [80 443] 🔑 sudo permission will be asked for it. </code></pre> <p>I enter my sudo password and then nothing gets printed afterwards. Not sure if this is an issue or the expected behavior.</p> <p>What am I doing wrong?</p>
<p>I had the same behavior, and apparently what's needed for <code>minikube tunnel</code> to work is to map the hostname to &quot;127.0.0.1&quot; in <code>/etc/hosts</code>, instead of to the output from <code>minikube ip</code> or the ingress description. That fixed it for me.</p>
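<p>With the host from the question, the <code>/etc/hosts</code> entry would then look like this:</p> <pre><code>127.0.0.1 dashboard.com
</code></pre>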
<p>I'm trying to create a ConfigMap with ArgoCD.</p> <p>I've created a <code>volumes.yaml</code> file as such</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: persistent-volumes-argo labels: grafana_dashboard: &quot;1&quot; project: &quot;foo&quot; data: kubernetes.json: | {{ .Files.Get &quot;dashboards/persistent-volumes.json&quot; | indent 4 }} </code></pre> <p>But ArgoCD doesn't seem to be able to read the data, the way a standard Helm deployment would.</p> <p>I've tried adding the data directly into the ConfigMap as such</p> <p>(Data omitted for brevity)</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: persistent-volumes-argo labels: grafana_dashboard: &quot;1&quot; project: &quot;foo&quot; data: kubernetes.json: | { &quot;annotations&quot;: { &quot;list&quot;: [ { &quot;builtIn&quot;: 1, &quot;datasource&quot;: &quot;-- Grafana --&quot;, &quot;enable&quot;: true, &quot;hide&quot;: true, &quot;iconColor&quot;: &quot;rgba(0, 211, 255, 1)&quot;, &quot;limit&quot;: 100, &quot;name&quot;: &quot;Annotations &amp; Alerts&quot;, &quot;showIn&quot;: 0, &quot;type&quot;: &quot;dashboard&quot; } ] }, &quot;editable&quot;: true, &quot;gnetId&quot;: 13646, &quot;graphTooltip&quot;: 0, &quot;iteration&quot;: 1659421503107, &quot;links&quot;: [], &quot;panels&quot;: [ { &quot;collapsed&quot;: false, &quot;datasource&quot;: null, &quot;fieldConfig&quot;: { &quot;defaults&quot;: {}, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 1, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 0 }, &quot;id&quot;: 26, &quot;panels&quot;: [], &quot;title&quot;: &quot;Alerts&quot;, &quot;type&quot;: &quot;row&quot; }, { &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;color&quot;: { &quot;mode&quot;: &quot;thresholds&quot; }, &quot;mappings&quot;: [], &quot;noValue&quot;: &quot;--&quot;, &quot;thresholds&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: null }, { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: -0.0001 }, { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: 0.0001 } ] }, &quot;unit&quot;: &quot;none&quot; }, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 4, &quot;w&quot;: 8, &quot;x&quot;: 0, &quot;y&quot;: 1 }, &quot;id&quot;: 21, &quot;options&quot;: { &quot;colorMode&quot;: &quot;background&quot;, &quot;graphMode&quot;: &quot;area&quot;, &quot;justifyMode&quot;: &quot;auto&quot;, &quot;orientation&quot;: &quot;auto&quot;, &quot;reduceOptions&quot;: { &quot;calcs&quot;: [ &quot;mean&quot; ], &quot;fields&quot;: &quot;&quot;, &quot;values&quot;: false }, &quot;text&quot;: {}, &quot;textMode&quot;: &quot;auto&quot; }, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;count (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;} ) and (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;} )) / (max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_capacity_bytes{namespace=~\&quot;${k8s_namespace}\&quot;} )) &gt;= (${warning_threshold} / 100)) or vector (0)&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;timeFrom&quot;: null, &quot;timeShift&quot;: null, &quot;title&quot;: 
&quot;PVCs Above Warning Threshold&quot;, &quot;type&quot;: &quot;stat&quot; }, { &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;color&quot;: { &quot;mode&quot;: &quot;thresholds&quot; }, &quot;decimals&quot;: 0, &quot;mappings&quot;: [], &quot;noValue&quot;: &quot;--&quot;, &quot;thresholds&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: null }, { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: -0.0001 }, { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: 0.0001 } ] }, &quot;unit&quot;: &quot;none&quot; }, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 4, &quot;w&quot;: 8, &quot;x&quot;: 8, &quot;y&quot;: 1 }, &quot;id&quot;: 24, &quot;options&quot;: { &quot;colorMode&quot;: &quot;background&quot;, &quot;graphMode&quot;: &quot;area&quot;, &quot;justifyMode&quot;: &quot;auto&quot;, &quot;orientation&quot;: &quot;auto&quot;, &quot;reduceOptions&quot;: { &quot;calcs&quot;: [ &quot;mean&quot; ], &quot;fields&quot;: &quot;&quot;, &quot;values&quot;: false }, &quot;text&quot;: {}, &quot;textMode&quot;: &quot;auto&quot; }, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;count((kube_persistentvolumeclaim_status_phase{namespace=~\&quot;${k8s_namespace}\&quot;,phase=\&quot;Pending\&quot;}==1)) or vector(0)&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;timeFrom&quot;: null, &quot;timeShift&quot;: null, &quot;title&quot;: &quot;PVCs in Pending State&quot;, &quot;transformations&quot;: [ { &quot;id&quot;: &quot;organize&quot;, &quot;options&quot;: {} } ], &quot;type&quot;: &quot;stat&quot; }, { &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;color&quot;: { &quot;mode&quot;: &quot;thresholds&quot; }, &quot;decimals&quot;: 0, &quot;mappings&quot;: [], &quot;noValue&quot;: &quot;--&quot;, &quot;thresholds&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: null }, { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: -0.0001 }, { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: 0.0001 } ] }, &quot;unit&quot;: &quot;none&quot; }, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 4, &quot;w&quot;: 8, &quot;x&quot;: 16, &quot;y&quot;: 1 }, &quot;id&quot;: 23, &quot;options&quot;: { &quot;colorMode&quot;: &quot;background&quot;, &quot;graphMode&quot;: &quot;area&quot;, &quot;justifyMode&quot;: &quot;auto&quot;, &quot;orientation&quot;: &quot;auto&quot;, &quot;reduceOptions&quot;: { &quot;calcs&quot;: [ &quot;mean&quot; ], &quot;fields&quot;: &quot;&quot;, &quot;values&quot;: false }, &quot;text&quot;: {}, &quot;textMode&quot;: &quot;auto&quot; }, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;count((kube_persistentvolumeclaim_status_phase{namespace=~\&quot;${k8s_namespace}\&quot;,phase=\&quot;Lost\&quot;}==1)) or vector(0)&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;timeFrom&quot;: null, &quot;timeShift&quot;: null, &quot;title&quot;: &quot;PVCs in Lost State&quot;, &quot;transformations&quot;: [ { &quot;id&quot;: 
&quot;organize&quot;, &quot;options&quot;: {} } ], &quot;type&quot;: &quot;stat&quot; }, { &quot;collapsed&quot;: false, &quot;datasource&quot;: null, &quot;fieldConfig&quot;: { &quot;defaults&quot;: {}, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 1, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 5 }, &quot;id&quot;: 17, &quot;panels&quot;: [], &quot;title&quot;: &quot;Usage statistics&quot;, &quot;type&quot;: &quot;row&quot; }, { &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;color&quot;: { &quot;mode&quot;: &quot;thresholds&quot; }, &quot;custom&quot;: { &quot;align&quot;: null, &quot;displayMode&quot;: &quot;auto&quot;, &quot;filterable&quot;: false }, &quot;mappings&quot;: [], &quot;noValue&quot;: &quot;--&quot;, &quot;thresholds&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: null } ] }, &quot;unit&quot;: &quot;none&quot; }, &quot;overrides&quot;: [ { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Used (%)&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.displayMode&quot;, &quot;value&quot;: &quot;gradient-gauge&quot; }, { &quot;id&quot;: &quot;thresholds&quot;, &quot;value&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: null }, { &quot;color&quot;: &quot;semi-dark-yellow&quot;, &quot;value&quot;: 70 }, { &quot;color&quot;: &quot;dark-red&quot;, &quot;value&quot;: 80 } ] } }, { &quot;id&quot;: &quot;decimals&quot;, &quot;value&quot;: 1 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Status&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.displayMode&quot;, &quot;value&quot;: &quot;color-background&quot; }, { &quot;id&quot;: &quot;mappings&quot;, &quot;value&quot;: [ { &quot;options&quot;: { &quot;0&quot;: { &quot;text&quot;: &quot;Bound&quot; }, &quot;1&quot;: { &quot;text&quot;: &quot;Pending&quot; }, &quot;2&quot;: { &quot;text&quot;: &quot;Lost&quot; } }, &quot;type&quot;: &quot;value&quot; } ] }, { &quot;id&quot;: &quot;thresholds&quot;, &quot;value&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: null }, { &quot;color&quot;: &quot;light-green&quot;, &quot;value&quot;: 0 }, { &quot;color&quot;: &quot;semi-dark-orange&quot;, &quot;value&quot;: 1 }, { &quot;color&quot;: &quot;semi-dark-red&quot;, &quot;value&quot;: 2 } ] } }, { &quot;id&quot;: &quot;noValue&quot;, &quot;value&quot;: &quot;--&quot; }, { &quot;id&quot;: &quot;custom.align&quot;, &quot;value&quot;: &quot;center&quot; } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Namespace&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 120 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Status&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 80 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Capacity (GiB)&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 120 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Used (GiB)&quot; }, 
&quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 120 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;Available (GiB)&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 120 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;StorageClass&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 150 } ] }, { &quot;matcher&quot;: { &quot;id&quot;: &quot;byName&quot;, &quot;options&quot;: &quot;PersistentVolumeClaim&quot; }, &quot;properties&quot;: [ { &quot;id&quot;: &quot;custom.width&quot;, &quot;value&quot;: 370 } ] } ] }, &quot;gridPos&quot;: { &quot;h&quot;: 12, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 6 }, &quot;id&quot;: 29, &quot;interval&quot;: &quot;&quot;, &quot;options&quot;: { &quot;frameIndex&quot;: 2, &quot;showHeader&quot;: true, &quot;sortBy&quot;: [ { &quot;desc&quot;: false, &quot;displayName&quot;: &quot;PersistentVolumeClaim&quot; } ] }, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;targets&quot;: [ { &quot;expr&quot;: &quot; sum by (persistentvolumeclaim,namespace,storageclass,volumename) (kube_persistentvolumeclaim_info{namespace=~\&quot;${k8s_namespace}\&quot;})&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;A&quot; }, { &quot;expr&quot;: &quot;sum by (persistentvolumeclaim) (kubelet_volume_stats_capacity_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}/1024/1024/1024)&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;B&quot; }, { &quot;expr&quot;: &quot;sum by (persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}/1024/1024/1024)&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;C&quot; }, { &quot;expr&quot;: &quot;sum by (persistentvolumeclaim) (kubelet_volume_stats_available_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}/1024/1024/1024)&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;D&quot; }, { &quot;expr&quot;: &quot;sum(kube_persistentvolumeclaim_status_phase{namespace=~\&quot;${k8s_namespace}\&quot;,phase=~\&quot;(Pending|Lost)\&quot;}) by (persistentvolumeclaim) + sum(kube_persistentvolumeclaim_status_phase{namespace=~\&quot;${k8s_namespace}\&quot;,phase=~\&quot;(Lost)\&quot;}) by (persistentvolumeclaim)&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;E&quot; }, { &quot;expr&quot;: &quot;sum by (persistentvolumeclaim) (kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}/kubelet_volume_stats_capacity_bytes{namespace=~\&quot;${k8s_namespace}\&quot;} * 100)&quot;, &quot;format&quot;: &quot;table&quot;, &quot;instant&quot;: true, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;F&quot; } ], &quot;timeFrom&quot;: null, &quot;timeShift&quot;: null, &quot;title&quot;: &quot;Persistent Volume 
Claim&quot;, &quot;transformations&quot;: [ { &quot;id&quot;: &quot;seriesToColumns&quot;, &quot;options&quot;: { &quot;byField&quot;: &quot;persistentvolumeclaim&quot; } }, { &quot;id&quot;: &quot;organize&quot;, &quot;options&quot;: { &quot;excludeByName&quot;: { &quot;Time&quot;: true, &quot;Time 1&quot;: true, &quot;Time 2&quot;: true, &quot;Time 3&quot;: true, &quot;Time 4&quot;: true, &quot;Time 5&quot;: true, &quot;Time 6&quot;: true, &quot;Value #A&quot;: true }, &quot;indexByName&quot;: {}, &quot;renameByName&quot;: { &quot;Time 1&quot;: &quot;&quot;, &quot;Time 2&quot;: &quot;&quot;, &quot;Time 3&quot;: &quot;&quot;, &quot;Time 4&quot;: &quot;&quot;, &quot;Time 5&quot;: &quot;&quot;, &quot;Time 6&quot;: &quot;&quot;, &quot;Value #A&quot;: &quot;&quot;, &quot;Value #B&quot;: &quot;Capacity (GiB)&quot;, &quot;Value #C&quot;: &quot;Used (GiB)&quot;, &quot;Value #D&quot;: &quot;Available (GiB)&quot;, &quot;Value #E&quot;: &quot;Status&quot;, &quot;Value #F&quot;: &quot;Used (%)&quot;, &quot;namespace&quot;: &quot;Namespace&quot;, &quot;persistentvolumeclaim&quot;: &quot;PersistentVolumeClaim&quot;, &quot;storageclass&quot;: &quot;StorageClass&quot;, &quot;volumename&quot;: &quot;PhysicalVolume&quot; } } } ], &quot;type&quot;: &quot;table&quot; }, { &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;custom&quot;: { &quot;align&quot;: null, &quot;displayMode&quot;: &quot;auto&quot;, &quot;filterable&quot;: false }, &quot;mappings&quot;: [], &quot;thresholds&quot;: { &quot;mode&quot;: &quot;absolute&quot;, &quot;steps&quot;: [ { &quot;color&quot;: &quot;green&quot;, &quot;value&quot;: null } ] } }, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 5, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 18 }, &quot;id&quot;: 7, &quot;options&quot;: { &quot;showHeader&quot;: true, &quot;sortBy&quot;: [ { &quot;desc&quot;: true, &quot;displayName&quot;: &quot;Status&quot; } ] }, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;kube_storageclass_info&quot;, &quot;format&quot;: &quot;table&quot;, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;timeFrom&quot;: null, &quot;timeShift&quot;: null, &quot;title&quot;: &quot;Storage Class&quot;, &quot;transformations&quot;: [ { &quot;id&quot;: &quot;organize&quot;, &quot;options&quot;: { &quot;excludeByName&quot;: { &quot;Time&quot;: true, &quot;Value&quot;: true, &quot;__name__&quot;: true, &quot;app_kubernetes_io_instance&quot;: true, &quot;app_kubernetes_io_name&quot;: true, &quot;instance&quot;: true, &quot;job&quot;: true, &quot;kubernetes_namespace&quot;: true, &quot;kubernetes_pod_name&quot;: true, &quot;pod_template_hash&quot;: true }, &quot;indexByName&quot;: { &quot;Time&quot;: 1, &quot;Value&quot;: 13, &quot;__name__&quot;: 2, &quot;app_kubernetes_io_instance&quot;: 3, &quot;app_kubernetes_io_name&quot;: 4, &quot;instance&quot;: 5, &quot;job&quot;: 6, &quot;kubernetes_namespace&quot;: 7, &quot;kubernetes_pod_name&quot;: 8, &quot;pod_template_hash&quot;: 9, &quot;provisioner&quot;: 10, &quot;reclaimPolicy&quot;: 11, &quot;storageclass&quot;: 0, &quot;volumeBindingMode&quot;: 12 }, &quot;renameByName&quot;: { &quot;provisioner&quot;: &quot;Provisioner&quot;, &quot;reclaimPolicy&quot;: &quot;ReclaimPolicy&quot;, &quot;storageclass&quot;: &quot;StorageClass&quot;, &quot;volumeBindingMode&quot;: &quot;VolumeBindingMode&quot; } } }, { &quot;id&quot;: 
&quot;groupBy&quot;, &quot;options&quot;: { &quot;fields&quot;: { &quot;Provisioner&quot;: { &quot;aggregations&quot;: [], &quot;operation&quot;: &quot;groupby&quot; }, &quot;ReclaimPolicy&quot;: { &quot;aggregations&quot;: [], &quot;operation&quot;: &quot;groupby&quot; }, &quot;StorageClass&quot;: { &quot;aggregations&quot;: [], &quot;operation&quot;: &quot;groupby&quot; }, &quot;VolumeBindingMode&quot;: { &quot;aggregations&quot;: [], &quot;operation&quot;: &quot;groupby&quot; } } } } ], &quot;type&quot;: &quot;table&quot; }, { &quot;collapsed&quot;: false, &quot;datasource&quot;: null, &quot;fieldConfig&quot;: { &quot;defaults&quot;: {}, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 1, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 23 }, &quot;id&quot;: 15, &quot;panels&quot;: [], &quot;title&quot;: &quot;Graphical usage data &quot;, &quot;type&quot;: &quot;row&quot; }, { &quot;aliasColors&quot;: {}, &quot;bars&quot;: false, &quot;dashLength&quot;: 10, &quot;dashes&quot;: false, &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fill&quot;: 0, &quot;fillGradient&quot;: 0, &quot;gridPos&quot;: { &quot;h&quot;: 12, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 24 }, &quot;hiddenSeries&quot;: false, &quot;id&quot;: 9, &quot;legend&quot;: { &quot;alignAsTable&quot;: true, &quot;avg&quot;: true, &quot;current&quot;: true, &quot;max&quot;: true, &quot;min&quot;: true, &quot;rightSide&quot;: true, &quot;show&quot;: true, &quot;total&quot;: false, &quot;values&quot;: true }, &quot;lines&quot;: true, &quot;linewidth&quot;: 1, &quot;nullPointMode&quot;: &quot;null&quot;, &quot;options&quot;: { &quot;alertThreshold&quot;: true }, &quot;percentage&quot;: false, &quot;pluginVersion&quot;: &quot;8.0.3&quot;, &quot;pointradius&quot;: 2, &quot;points&quot;: false, &quot;renderer&quot;: &quot;flot&quot;, &quot;seriesOverrides&quot;: [], &quot;spaceLength&quot;: 10, &quot;stack&quot;: false, &quot;steppedLine&quot;: false, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;(max by (persistentvolumeclaim,namespace) (kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}))&quot;, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;{{namespace}} ({{persistentvolumeclaim}})&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;thresholds&quot;: [], &quot;timeFrom&quot;: null, &quot;timeRegions&quot;: [], &quot;timeShift&quot;: null, &quot;title&quot;: &quot;All Running PVCs Used Bytes&quot;, &quot;tooltip&quot;: { &quot;shared&quot;: true, &quot;sort&quot;: 2, &quot;value_type&quot;: &quot;individual&quot; }, &quot;type&quot;: &quot;graph&quot;, &quot;xaxis&quot;: { &quot;buckets&quot;: null, &quot;mode&quot;: &quot;time&quot;, &quot;name&quot;: null, &quot;show&quot;: true, &quot;values&quot;: [] }, &quot;yaxes&quot;: [ { &quot;format&quot;: &quot;bytes&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true }, { &quot;format&quot;: &quot;Date &amp; time&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true } ], &quot;yaxis&quot;: { &quot;align&quot;: false, &quot;alignLevel&quot;: null } }, { &quot;collapsed&quot;: true, &quot;datasource&quot;: null, &quot;fieldConfig&quot;: { &quot;defaults&quot;: {}, &quot;overrides&quot;: [] }, &quot;gridPos&quot;: { &quot;h&quot;: 1, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 36 }, &quot;id&quot;: 19, &quot;panels&quot;: [ { &quot;aliasColors&quot;: {}, 
&quot;bars&quot;: false, &quot;dashLength&quot;: 10, &quot;dashes&quot;: false, &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;custom&quot;: {} }, &quot;overrides&quot;: [] }, &quot;fill&quot;: 0, &quot;fillGradient&quot;: 0, &quot;gridPos&quot;: { &quot;h&quot;: 7, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 41 }, &quot;hiddenSeries&quot;: false, &quot;id&quot;: 11, &quot;legend&quot;: { &quot;alignAsTable&quot;: true, &quot;avg&quot;: true, &quot;current&quot;: false, &quot;max&quot;: false, &quot;min&quot;: false, &quot;rightSide&quot;: true, &quot;show&quot;: true, &quot;total&quot;: false, &quot;values&quot;: true }, &quot;lines&quot;: true, &quot;linewidth&quot;: 1, &quot;nullPointMode&quot;: &quot;null&quot;, &quot;options&quot;: { &quot;alertThreshold&quot;: true }, &quot;percentage&quot;: false, &quot;pluginVersion&quot;: &quot;7.2.1&quot;, &quot;pointradius&quot;: 2, &quot;points&quot;: false, &quot;renderer&quot;: &quot;flot&quot;, &quot;seriesOverrides&quot;: [], &quot;spaceLength&quot;: 10, &quot;stack&quot;: false, &quot;steppedLine&quot;: false, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;rate(kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}[1h])&quot;, &quot;instant&quot;: false, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;{{namespace}} ({{persistentvolumeclaim}})&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;thresholds&quot;: [], &quot;timeFrom&quot;: null, &quot;timeRegions&quot;: [], &quot;timeShift&quot;: null, &quot;title&quot;: &quot;Hourly Volume Usage Rate&quot;, &quot;tooltip&quot;: { &quot;shared&quot;: true, &quot;sort&quot;: 2, &quot;value_type&quot;: &quot;individual&quot; }, &quot;type&quot;: &quot;graph&quot;, &quot;xaxis&quot;: { &quot;buckets&quot;: null, &quot;mode&quot;: &quot;time&quot;, &quot;name&quot;: null, &quot;show&quot;: true, &quot;values&quot;: [] }, &quot;yaxes&quot;: [ { &quot;format&quot;: &quot;binBps&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true }, { &quot;format&quot;: &quot;Date &amp; time&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true } ], &quot;yaxis&quot;: { &quot;align&quot;: false, &quot;alignLevel&quot;: null } }, { &quot;aliasColors&quot;: {}, &quot;bars&quot;: false, &quot;dashLength&quot;: 10, &quot;dashes&quot;: false, &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;custom&quot;: {} }, &quot;overrides&quot;: [] }, &quot;fill&quot;: 0, &quot;fillGradient&quot;: 0, &quot;gridPos&quot;: { &quot;h&quot;: 7, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 48 }, &quot;hiddenSeries&quot;: false, &quot;id&quot;: 12, &quot;legend&quot;: { &quot;alignAsTable&quot;: true, &quot;avg&quot;: true, &quot;current&quot;: false, &quot;max&quot;: false, &quot;min&quot;: false, &quot;rightSide&quot;: true, &quot;show&quot;: true, &quot;total&quot;: false, &quot;values&quot;: true }, &quot;lines&quot;: true, &quot;linewidth&quot;: 1, &quot;nullPointMode&quot;: &quot;null&quot;, &quot;options&quot;: { &quot;alertThreshold&quot;: true }, &quot;percentage&quot;: false, &quot;pluginVersion&quot;: &quot;7.2.1&quot;, &quot;pointradius&quot;: 2, &quot;points&quot;: false, &quot;renderer&quot;: &quot;flot&quot;, &quot;seriesOverrides&quot;: [], &quot;spaceLength&quot;: 10, &quot;stack&quot;: false, 
&quot;steppedLine&quot;: false, &quot;targets&quot;: [ { &quot;expr&quot;: &quot;rate(kubelet_volume_stats_used_bytes{namespace=~\&quot;${k8s_namespace}\&quot;}[1d])&quot;, &quot;interval&quot;: &quot;&quot;, &quot;legendFormat&quot;: &quot;{{namespace}} ({{persistentvolumeclaim}})&quot;, &quot;refId&quot;: &quot;A&quot; } ], &quot;thresholds&quot;: [], &quot;timeFrom&quot;: null, &quot;timeRegions&quot;: [], &quot;timeShift&quot;: null, &quot;title&quot;: &quot;Daily Volume Usage Rate&quot;, &quot;tooltip&quot;: { &quot;shared&quot;: true, &quot;sort&quot;: 2, &quot;value_type&quot;: &quot;individual&quot; }, &quot;type&quot;: &quot;graph&quot;, &quot;xaxis&quot;: { &quot;buckets&quot;: null, &quot;mode&quot;: &quot;time&quot;, &quot;name&quot;: null, &quot;show&quot;: true, &quot;values&quot;: [] }, &quot;yaxes&quot;: [ { &quot;format&quot;: &quot;binBps&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true }, { &quot;format&quot;: &quot;Date &amp; time&quot;, &quot;label&quot;: null, &quot;logBase&quot;: 1, &quot;max&quot;: null, &quot;min&quot;: null, &quot;show&quot;: true } ], &quot;yaxis&quot;: { &quot;align&quot;: false, &quot;alignLevel&quot;: null } }, { &quot;aliasColors&quot;: {}, &quot;bars&quot;: false, &quot;dashLength&quot;: 10, &quot;dashes&quot;: false, &quot;datasource&quot;: &quot;$datasource&quot;, &quot;fieldConfig&quot;: { &quot;defaults&quot;: { &quot;custom&quot;: {} }, &quot;overrides&quot;: [] }, &quot;fill&quot;: 0, &quot;fillGradient&quot;: 0, &quot;gridPos&quot;: { &quot;h&quot;: 7, &quot;w&quot;: 24, &quot;x&quot;: 0, &quot;y&quot;: 55 }, &quot;hiddenSeries&quot;: false, &quot;id&quot;: 13, &quot;legend&quot;: { &quot;alignAsTable&quot;: true, &quot;avg&quot;: true, &quot;current&quot;: false, &quot;max&quot;: false, &quot;min&quot;: false, &quot;rightSide&quot;: true, &quot;show&quot;: true, &quot;total&quot;: false, &quot;values&quot;: true }, &quot;lines&quot;: true, &quot;linewidth&quot;: 1, &quot;nullPointMode&quot;: &quot;null&quot;, &quot;options&quot;: { &quot;alertThreshold&quot;: true } } </code></pre> <p>But this errors with <code>rpc error: code = FailedPrecondition desc = Failed to unmarshal &quot;volumes.yaml&quot;: &lt;nil&gt;</code></p> <p>Is there a way to pass in json data when creating a ConfigMap with ArgoCD, either as a template or by dumping the data in the file?</p>
<p>To create a ConfigMap with Argo CD and Helm:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ include &quot;helm-chart.fullname&quot; . }}-configmap data: config.json: | {{ toJson .Values.configmap | indent 4 }} </code></pre> <p>and the values file should look like this (it is our JSON, converted to YAML):</p> <pre><code>configmap: json: - rigid - better for data interchange yaml: - slim and flexible - better for configuration object: key: value array: - null_value: - boolean: true - integer: 1 - alias: &amp;example aliases are like variables - alias: *example paragraph: &gt; Blank lines denote paragraph breaks content: |- Or we can auto convert line breaks to save space alias: &amp;foo bar: baz alias_reuse: *foo </code></pre> <p><a href="https://www.json2yaml.com/" rel="nofollow noreferrer">json2yaml</a></p> <p>A complete demo app can be found <a href="https://github.com/Adiii717/argocd-demo-app" rel="nofollow noreferrer">here</a>.</p> <p>A very quick way to create an app with the above configmap from the demo repo:</p> <pre><code>argocd app create demo-app --repo https://github.com/Adiii717/argocd-demo-app.git --path helm-chart --dest-namespace default --dest-server https://kubernetes.default.svc --helm-set replicaCount=2 </code></pre> <p><a href="https://i.stack.imgur.com/1Eftq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Eftq.png" alt="enter image description here" /></a> Make sure you also set:</p> <pre><code>export ARGOCD_AUTH_TOKEN=&quot;&lt;token&gt;&quot; export ARGOCD_OPTS=&quot;--grpc-web&quot; export ARGOCD_SERVER=&quot;argocd.example.com&quot; </code></pre> <p>I added a section for the <a href="https://github.com/Adiii717/argocd-demo-app#configmap-creation" rel="nofollow noreferrer">configmap creation</a>.</p>
<p>I want to deploy WSO2 API Manager with Kubernetes. Should I use Google Cloud? Is there another way?</p>
<p>The helm charts <a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">1</a> for APIM can be deployed on GKE, AKS, EKS, etc. You can even deploy the all-in-one simple deployment pattern <a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">2</a> in a local Kubernetes cluster like minikube, etc.</p> <p>You might have to use a cloud provider for more advanced patterns since they require more resources to run.</p> <p>All these charts are there as samples to get an idea about the deployment patterns. It is not recommended to deploy those as it is in real production scenarios as the resource requirements and infrastructure vary according to the use cases.</p> <p><a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">1</a> - <a href="https://github.com/wso2/kubernetes-apim" rel="nofollow noreferrer">https://github.com/wso2/kubernetes-apim</a></p> <p><a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">2</a> - <a href="https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single" rel="nofollow noreferrer">https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single</a></p>
<p>I am trying to add a liveness probe for the <a href="https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1" rel="nofollow noreferrer">snapshot-controller</a>; to be specific, the snapshot-controller deployment mentioned <a href="https://github.com/kubernetes-csi/external-snapshotter/blob/v6.0.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" rel="nofollow noreferrer">here</a>. I tried to exec into the controller pod to see what could be used for a liveness probe, but I was not able to get into the pod. Has someone tried adding probes for the snapshot-controller?</p>
<p>In the github repository you shared: <a href="https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1#snapshot-controller-command-line-options" rel="nofollow noreferrer">https://github.com/kubernetes-csi/external-snapshotter/tree/v6.0.1#snapshot-controller-command-line-options</a></p> <h4>Important optional arguments that are highly recommended to be used</h4> <blockquote> <p>--http-endpoint: The TCP network address where the HTTP server for diagnostics, including metrics and leader election health check, will listen (example: :8080 which corresponds to port 8080 on local host). The default is empty string, which means the server is disabled.</p> </blockquote> <p>You should be able to use this, if you do enable this option. At which point, you could query port 8080, path /metrics for example.</p> <p>Note: if you need to troubleshoot something like this and do not have a shell in your container image. You could use <code>kubectl debug</code>, see: <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-using-a-copy-of-the-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-using-a-copy-of-the-pod</a></p>
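<p>Putting that together, here is a sketch of what the relevant part of the snapshot-controller container spec could look like once the endpoint is enabled; the port number and probe timings are assumptions you may want to tune:</p> <pre><code>args:
  - --http-endpoint=:8080   # enables the diagnostics HTTP server
ports:
  - name: http-endpoint
    containerPort: 8080
livenessProbe:
  httpGet:
    path: /metrics
    port: http-endpoint
  initialDelaySeconds: 10
  periodSeconds: 60
</code></pre>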
<p>I need help with an EKS managed node group. I've created a cluster with one additional security group. Inside this cluster I've created a managed node group. All the code is written in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there a way to also attach the additional security group to the instances?</p> <p>Thanks in advance for the help!</p>
<p>You can create a custom launch template to define your own security group for the node group. You can then reference the launch template in your terraform <code>aws_eks_node_group</code> resource.</p> <pre class="lang-bash prettyprint-override"><code> launch_template { name = aws_launch_template.your_eks_launch_template.name version = aws_launch_template.your_eks_launch_template.latest_version } </code></pre>
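<p>A minimal sketch of such a launch template in Terraform, with an extra security group attached. The resource names and the additional SG are placeholders; note that when a launch template supplies its own security groups, EKS no longer attaches the cluster security group automatically, so it is included explicitly here:</p> <pre><code>resource &quot;aws_launch_template&quot; &quot;your_eks_launch_template&quot; {
  name = &quot;eks-node-group-lt&quot;

  vpc_security_group_ids = [
    aws_eks_cluster.this.vpc_config[0].cluster_security_group_id,
    aws_security_group.additional.id,
  ]
}
</code></pre>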
<p>I am executing a command that gives me the CPU limit:</p> <pre><code>kubectl get pods -o=jsonpath='{.items[*]..resources.limits.cpu}' -A </code></pre> <p>How can I modify the command to also show the pod name and the memory limit?</p>
<p>You can format the jsonpath like this.</p> <pre><code>kubectl get pods -Ao jsonpath='{range .items[*]}{&quot;name: &quot;}{@.metadata.name}{&quot; cpu: &quot;}{@..resources.limits.cpu}{&quot; memory: &quot;}{@..resources.limits.memory}{&quot;\n&quot;}{&quot;\n&quot;}{end}' </code></pre>
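<p>An alternative, in case the jsonpath form gets unwieldy, is <code>custom-columns</code>, which prints the same information as a table (pods with several containers show one value per container):</p> <pre><code>kubectl get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_LIMIT:.spec.containers[*].resources.limits.cpu,MEMORY_LIMIT:.spec.containers[*].resources.limits.memory
</code></pre>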
<p>So I had a ConfigMap with a JSON configuration file in it, like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: my-config-map data: config.json: |+ { &quot;some-url&quot;: &quot;{{ .Values.myApp.someUrl }}&quot; } </code></pre> <p>But I've moved to having my config files outside the ConfigMap's yaml, and just referencing them there, like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: my-config-map data: config.json: |- {{ .Files.Get .Values.myApp.configFile | indent 4 }} </code></pre> <p>But now I want my JSON to look like the following:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;some-url&quot;: &quot;{{ .Values.myApp.someUrl }}&quot; } </code></pre> <p>The only thing I tried is what I just showed. I'm not even sure how to search for this answer.</p> <p>Is it even possible?</p>
<p>At the time of reading the file, its content is a string. It's not evaluated as template, and therefore you cannot use variables like you do.</p> <p>However, helm has a function for this purpose specifically called <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function" rel="nofollow noreferrer">tpl</a>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: my-config-map data: config.json: |- {{ tpl (.Files.Get .Values.myApp.configFile) $ | indent 4 }} </code></pre> <p>The tpl function takes a template string and renders it with some context. This is useful when you have template snippets in your values file or like in your case in some files content.</p>
<p>Unfortunately I am unable to paste configs or <code>kubectl</code> output, but please bear with me.</p> <p>Using helm to deploy a series of containers to K8s 1.14.6, all containers are deploying successfully <strong>except</strong> for those that have <code>initContainer</code> sections defined within them.</p> <p>In these failing deployments, their templates define <code>container</code> and <code>initContainer</code> stanzas that reference the same <code>persistent-volume</code> (and associated <code>persistent-volume-claim</code>, both defined elsewhere).</p> <p>The purpose of the <code>initContainer</code> is to copy persisted files from a mounted drive location into the appropriate place before the main <code>container</code> is established.</p> <p>Other containers (without <code>initContainer</code> stanzas) mount properly and run as expected.</p> <p>These pods which have <code>initContainer</code> stanzas, however, report &quot;failed to initialize&quot; or &quot;CrashLoopBackOff&quot; as they continually try to start up. The <code>kubectl describe pod</code> of these pods gives only a Warning in the events section that &quot;pod has unbound immediate PersistentVolumeClaims.&quot; The <code>initContainer</code> section of the pod description says it has failed because &quot;Error&quot; with no further elaboration.</p> <p>When looking at the associated <code>pv</code> and <code>pvc</code> entries from <code>kubectl</code>, however, none are left pending, and all report &quot;Bound&quot; with no Events to speak of in the description.</p> <p>I have been able to find plenty of articles suggesting fixes when your <code>pvc</code> list shows Pending claims, yet none so far that address this particular set of circumstance when all <code>pvc</code>s are bound.</p>
<p>When a PVC is &quot;Bound&quot;, this means that you do have a PersistentVolume object in your cluster whose claimRef refers to that PVC (and usually that your storage provisioner is done creating the corresponding volume in your storage backend).</p> <p>When a volume is &quot;not bound&quot; in one of your Pods, this means the node where your Pod was scheduled is unable to attach your persistent volume. If you're sure there's no mistake in your Pod's volumes, you should then check the logs of your CSI volume attacher pod when using CSI, or the node's logs directly when using an in-tree driver.</p> <p>The CrashLoopBackOff is something else, though. You should check the logs of your initContainer: <code>kubectl logs -c &lt;init-container-name&gt; -p</code>. From your explanation, I would suppose there are some permission issues when copying files over.</p>
<p>I’ve built a service that lives in a Docker container. As part of its required behavior, when receiving a gRPC request, it needs to send an email as a side effect. So imagine something like</p> <pre><code>service MyExample { rpc ProcessAndSendEmail(MyData) returns (MyResponse) {} } </code></pre> <p>where there’s an additional emission (adjacent to the request/response pattern) of an email message.</p> <p>On a “typical” server deployment, I might have postfix running; if I were using a service, I’d just dial its SMTP endpoint. I don’t have either readily available in this case.</p> <p>As I’m placing my service in a container and would like to deploy to Kubernetes, I’m wondering what solutions work best. There may be a simple postfix-like Docker image I can deploy... I just don’t know.</p>
<p>There's several docker mailservers:</p> <ul> <li><a href="https://github.com/docker-mailserver/docker-mailserver" rel="nofollow noreferrer">https://github.com/docker-mailserver/docker-mailserver</a></li> <li><a href="https://github.com/Mailu/Mailu" rel="nofollow noreferrer">https://github.com/Mailu/Mailu</a></li> <li><a href="https://github.com/bokysan/docker-postfix" rel="nofollow noreferrer">https://github.com/bokysan/docker-postfix</a></li> </ul> <p>Helm charts:</p> <ul> <li><a href="https://github.com/docker-mailserver/docker-mailserver-helm" rel="nofollow noreferrer">https://github.com/docker-mailserver/docker-mailserver-helm</a></li> <li><a href="https://github.com/Mailu/helm-charts" rel="nofollow noreferrer">https://github.com/Mailu/helm-charts</a></li> <li><a href="https://github.com/bokysan/docker-postfix/tree/master/helm" rel="nofollow noreferrer">https://github.com/bokysan/docker-postfix/tree/master/helm</a></li> </ul> <p>Note that the top answer in <a href="https://www.reddit.com/r/kubernetes/comments/uf4r8v/easy_to_deploy_mail_server/" rel="nofollow noreferrer">this reddit thread</a> recommends signing up for a managed mail provider instead of trying to self-host your own.</p>
<p>I have the following deployments one of Django api and the other of celery, when I run the command to get the resource consumption of the pods, it only return those of celery and not those of the API. What are potential reasons for this? given that the same configuration works well on another cluster</p> <p>Kubernetes Server Version: v1.22.5</p> <p><a href="https://i.stack.imgur.com/WKEK9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WKEK9.png" alt="enter image description here" /></a></p> <p><strong>EDIT: Added logs of metrics server</strong></p> <pre><code>I0824 13:28:05.498602 1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) I0824 13:28:06.269888 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0824 13:28:06.269917 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController I0824 13:28:06.269966 1 configmap_cafile_content.go:201] &quot;Starting controller&quot; name=&quot;client-ca::kube-system::extension-apiserver-authentication::client-ca-file&quot; I0824 13:28:06.269981 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0824 13:28:06.270005 1 configmap_cafile_content.go:201] &quot;Starting controller&quot; name=&quot;client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file&quot; I0824 13:28:06.270025 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0824 13:28:06.270512 1 secure_serving.go:266] Serving securely on [::]:8443 I0824 13:28:06.270577 1 dynamic_serving_content.go:131] &quot;Starting controller&quot; name=&quot;serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key&quot; I0824 13:28:06.270593 1 tlsconfig.go:240] &quot;Starting DynamicServingCertificateController&quot; W0824 13:28:06.270852 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed I0824 13:28:06.277601 1 server.go:187] &quot;Failed probe&quot; probe=&quot;metric-storage-ready&quot; err=&quot;no metrics to serve&quot; I0824 13:28:06.371038 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0824 13:28:06.371093 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController I0824 13:28:06.371932 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file </code></pre>
<p>I upgraded my cluster to 1.23.4 and my HPAs to autoscaling/v2 api and now everything works fine, I suppose my problem was related to this <a href="https://github.com/kubernetes/kubernetes/issues/76292" rel="nofollow noreferrer">issue</a> because my api pod had an initContainer.</p>
<p><strong>Scenario:</strong></p> <p>I have a k8s cluster with <code>host01</code>, <code>host02</code> and <code>host03</code>.</p> <p>I'm running master on <code>host01</code> and slaves on <code>host02</code> and <code>host03</code>, and I want to open a port for client connections on master, ie I want to open: <code>host01:32702</code>.</p> <p>Using following service.yaml, I managed to open <code>host01:32702</code>. To my surprise <code>host02:32702</code> and <code>host03:32702</code> are also open.</p> <pre><code>kind: Service # ... spec: type: NodePort ports: - name: dashboard port: 8265 targetPort: 8265 nodePort: 32702 selector: podonhost01: yes </code></pre> <p><strong>Questions:</strong></p> <ul> <li>Is this intended NodePort behavior? Is the selector not working?</li> <li>If yes, then why is this the intended NodePort behavior? If I have 1000s hosts, then single service will open port 32702 on all these hosts?</li> <li>How do I only open port 32702 on <code>host01</code> and not the rest of hosts?</li> </ul>
<p>You can use the <strong>externalIP</strong>,</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP port: 80 targetPort: 9376 externalIPs: - HOST-01 IP </code></pre> <p>Ref : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p>
<p>I just installed ingress controller in an aks cluster using this deployment resource :</p> <blockquote> <p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml</a></p> </blockquote> <p>specific for azure.</p> <p>So far everything works fine the issue i am having is, i get this error on my certificate that :</p> <blockquote> <p>Kubernetes Ingress Controller Fake Certificate</p> </blockquote> <p>i Know i followed all steps as i should, but i can figure out why my certificate says that. I would appreciate if anyone can help guide on a possible fix for the issue.</p> <p>issuer manifest</p> <blockquote> </blockquote> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: &quot;nginx&quot; name: TargetPods-6dc98445c4-jr6pt spec: tls: - hosts: - test.domain.io secretName: TargetPods-tls rules: - host: test.domain.io http: paths: - path: / pathType: Prefix backend: service: name: TargetPod-6dc98445c4-jr6pt port: number: 80 </code></pre> <p>Below is the result of : kubectl get secrets -n ingress-nginx</p> <pre><code>&gt; NAME TYPE DATA AGE default-token-dh88n kubernetes.io/service-account-token 3 45h ingress-nginx-admission Opaque 3 45h ingress-nginx-admission-token-zls6p kubernetes.io/service-account-token 3 45h ingress-nginx-token-kcvpf kubernetes.io/service-account-token 3 45h </code></pre> <p>also the secrets from cert-manager : kubectl get secrets -n cert-manager</p> <pre><code>&gt; NAME TYPE DATA AGE cert-manager-cainjector-token-2m8nw kubernetes.io/service-account-token 3 46h cert-manager-token-vghv5 kubernetes.io/service-account-token 3 46h cert-manager-webhook-ca Opaque 3 46h cert-manager-webhook-token-chz6v kubernetes.io/service-account-token 3 46h default-token-w2jjm kubernetes.io/service-account-token 3 47h letsencrypt-cluster-issuer Opaque 1 12h letsencrypt-cluster-issuer-key Opaque 1 45h </code></pre> <p>Thanks in advance</p>
<p>You're seeing this as it is the default out of the box TLS certificate. You should replace this with your own certificate.</p> <p>Here is some information in the <a href="https://github.com/kubernetes/ingress-nginx/blob/c6a8ad9a65485b1c4593266ab067dc33f3140c4f/docs/user-guide/tls.md#default-ssl-certificate" rel="nofollow noreferrer">documentation</a></p> <p>You essentially want to create a TLS certificate (try <a href="https://shocksolution.com/2018/12/14/creating-kubernetes-secrets-using-tls-ssl-as-an-example/" rel="nofollow noreferrer">this</a> method if you are unfamiliar) and then add --default-ssl-certificate=default/XXXXX-tls in the nginx-controller deployment in you yaml. You can add this as an argument, search for &quot;/nginx-ingress-controller&quot; in your yaml and that'll take you to the relevant section.</p>
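<p>For reference, a minimal sketch of the two steps (the secret name, namespace and certificate file names here are placeholders):</p> <pre><code># create a TLS secret from an existing certificate/key pair
kubectl create secret tls my-default-tls --cert=tls.crt --key=tls.key -n default

# then pass it to the controller as the default certificate, e.g. as an extra
# argument of the nginx-ingress-controller container in your deploy.yaml:
#   - --default-ssl-certificate=default/my-default-tls
</code></pre>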
<p>I want to share multiple volumes using PersistentVolume reqource of kubernetes.</p> <p>I want to share &quot;/opt/*&quot; folders in pod. But not the &quot;/opt&quot;:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: demo namespace: demo-namespace labels: app: myApp chart: &quot;my-app&quot; name: myApp spec: capacity: storage: 2Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: &quot;myApp-data&quot; hostPath: path: /opt/* </code></pre> <p>But in pod I am not able to see shared volume. If I share only &quot;/opt&quot; folder then it goes shown in pod.</p> <p>Is there anything I am missing?</p>
<p>If you want to share a folder among some pods, deployments or statefulsets, you should create a PersistentVolumeClaim and its access mode should be ReadWriteMany. So here is an example of a PersistentVolumeClaim which has ReadWriteMany mode:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi </code></pre> <p>Then in your pods you should use it as below ...</p> <pre><code> apiVersion: v1 kind: Pod metadata: name: mypod01 spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim containers: - name: c01 image: alpine volumeMounts: - mountPath: &quot;/opt&quot; name: task-pv-storage </code></pre> <pre><code> apiVersion: v1 kind: Pod metadata: name: mypod02 spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim containers: - name: c02 image: alpine volumeMounts: - mountPath: &quot;/opt&quot; name: task-pv-storage </code></pre>
<p>How to stop/start an application deployed in ArgoCD?</p> <p>I see only <em>Delete</em>, <em>Sync</em> or deploy/redeploy options. I have server applications running and I'd like to temporarily stop (shut down) their functionality in the cluster. Or am I missing something in the concept?</p> <p>Do I need to implement some kind of custom interface for my server applications to make start/stop functionality possible and communicate with my apps directly? (So it would be out of ArgoCD's responsibility, i.e. it is <em>not</em> like a Linux service management system, and I need to implement this by myself at the application level.)</p>
<p>You can set the replica count to 0 so no pod will be created, without having to update your application code or remove the application from argocd.</p> <p>You need to edit the definition of your deployment, setting the <code>replicas</code> to <code>0</code> like so:</p> <pre><code>apiVersion: ... kind: Deployment spec: replicas: 0 ... </code></pre> <p>This can be done in 2 ways:</p> <ul> <li>You can commit the changes in your config and sync argocd so they get applied,</li> <li>Or you can do this directly from the argocd UI: <ul> <li>First disable the auto-sync (<code>App Details</code> &gt; <code>Summary</code> &gt; <code>Disable auto-sync</code>) so your changes don't get overwritten</li> <li>Then edit the desired manifest of your deployment directly in the UI</li> <li>When you want to rollback this change and re-deploy your app, simply sync and you will get your old config back</li> </ul> </li> </ul>
<p>I'm attempting to configure AKS, and I've installed the <strong>Istio Gateway</strong>, which in turn created an Azure Load Balancer, making the overall traffic flow as shown below.</p> <p><a href="https://i.stack.imgur.com/lZdnv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lZdnv.png" alt="enter image description here" /></a></p> <p>In my opinion, the Azure Load Balancer is not required; the <strong>Istio Gateway</strong> should connect directly to the Azure Application Gateway, as shown below.</p> <p><a href="https://i.stack.imgur.com/JgHu0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JgHu0.png" alt="enter image description here" /></a></p> <p>Is this doable? If so, can I get any reference?</p>
<p>From <a href="https://istio.io/latest/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio documentation</a> : <code>Gateway describes a load balancer operating at the edge of the mesh [...]</code>, which means it's the point of entry (endpoint) to your mesh network. Even though it's virtual, it still needs some kind of underlying infrastructure (internal load balancer in your case) to host that load balancing service.</p> <p>Now it's possible to configure your own ingress-gateway (<a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/</a>), but it's usually much simpler (IMHO) to just use the one from your cloud provider, unless you have a specific use case.</p>
<p>In the <a href="https://open-vsx.org/extension/ms-kubernetes-tools/vscode-kubernetes-tools" rel="nofollow noreferrer">VS Code Kubernetes Extension</a>, I am getting an error when I try to Access resources in my cluster.</p> <p>I have updated my ~/.kube/config with the correct data and general format</p> <h2>.kube/config</h2> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: M1ekNDQWMrZ0F3SUJBZ0lCQURB... server: https://{yadayada}.gr7.us-east-1.eks.amazonaws.com name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform contexts: - context: cluster: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform user: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform current-context: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform kind: Config preferences: {} users: - name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - --region - us-east-1 - eks - get-token - --cluster-name - eventplatform command: aws </code></pre> <h2>ERROR</h2> <p><a href="https://i.stack.imgur.com/LoABO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LoABO.png" alt="enter image description here" /></a></p>
<p>The solution to add my AWS credential ENV variables:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: M1ekNDQWMrZ0F3SUJBZ0lCQURB... server: https://{yadayada}.gr7.us-east-1.eks.amazonaws.com name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform contexts: - context: cluster: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform user: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform current-context: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform kind: Config preferences: {} users: - name: arn:aws:eks:us-east-1:{yada}:cluster/eventplatform user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - --region - us-east-1 - eks - get-token - --cluster-name - eventplatform command: aws env: - name: AWS_ACCESS_KEY_ID value: {SOME_VALUES} - name: AWS_SECRET_ACCESS_KEY value: {SOME_OTHER_VALUES} - name: AWS_SESSION_TOKEN value: {SOME_OTHER_OTHER_VALUES} </code></pre>
<p>I'm facing the below mentioned issue while using DHCP IPAM plugin + Macvlan + Multus for the additional interface creation inside my pod and assigning IP from DHCP server.</p> <p>I actually went through the related issues around this problem and tried all the solutions/different configurations mentioned there. But none of them were working so far. The documentation for CNI plugin w.r.t DHCP usage also not quite clear.</p> <p><strong>Related Issues:</strong></p> <ol> <li><a href="https://github.com/k8snetworkplumbingwg/multus-cni/issues/291" rel="nofollow noreferrer">https://github.com/k8snetworkplumbingwg/multus-cni/issues/291</a></li> <li><a href="https://github.com/containernetworking/plugins/issues/587" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/587</a></li> <li><a href="https://github.com/containernetworking/plugins/issues/371" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/371</a></li> <li><a href="https://github.com/containernetworking/plugins/issues/440" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/440</a></li> <li><a href="https://github.com/containernetworking/cni/issues/398" rel="nofollow noreferrer">https://github.com/containernetworking/cni/issues/398</a></li> <li><a href="https://github.com/containernetworking/cni/issues/225" rel="nofollow noreferrer">https://github.com/containernetworking/cni/issues/225</a></li> <li><a href="https://github.com/containernetworking/plugins/issues/371" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/issues/371</a></li> </ol> <p><strong>Solutions Suggested:</strong></p> <ol> <li><a href="https://github.com/containernetworking/plugins/pull/577" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/pull/577</a></li> </ol> <p><strong>DHCP Daemon Logs:</strong></p> <pre><code>[root@test-node cni_plugins]# ./dhcp daemon -broadcast=true 2022/06/09 12:00:03 ac7d57597540992a1af43455da24b3210561ce12b164820ee18f583a304a/test_net_attach1/net1: acquiring lease 2022/06/09 12:00:03 Link &quot;net1&quot; down. Attempting to set up 2022/06/09 12:00:03 network is down 2022/06/09 12:00:03 retrying in 2.881018 seconds 2022/06/09 12:00:16 no DHCP packet received within 10s 2022/06/09 12:00:16 retrying in 2.329120 seconds 2022/06/09 12:00:29 no DHCP packet received within 10s 2022/06/09 12:00:29 retrying in 1.875428 seconds </code></pre> <p><strong>NetworkAttachmentDefinition:</strong></p> <pre><code>apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: &quot;test1&quot; annotations: k8s.v1.cni.cncf.io/resourceName: intel.com/test_net_attach1 spec: config: '{ &quot;name&quot;: &quot;test_net_attach1&quot;, &quot;type&quot;: &quot;macvlan&quot;, &quot;master&quot;: &quot;ens2f0&quot;, &quot;ipam&quot;: { &quot;type&quot;: &quot;dhcp&quot; } }' </code></pre> <p><strong>Test Pod YAML:</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dhcp-test1 annotations: k8s.v1.cni.cncf.io/networks: test1 spec: nodeSelector: deploy: vm1 containers: - name: c1 image: centos:latest command: [&quot;/bin/sh&quot;,&quot;-c&quot;] args: [&quot;sleep infinity&quot;] </code></pre> <p><strong>DHCP SERVER Logs:</strong></p> <pre><code>Internet Systems Consortium DHCP Server 4.4.1 Copyright 2004-2018 Internet Systems Consortium. All rights reserved. 
For info, please visit https://www.isc.org/software/dhcp/ Config file: /data/dhcpd.conf Database file: /data/dhcpd.leases PID file: /var/run/dhcpd.pid Wrote 0 leases to leases file. Listening on LPF/ens2f0/b4:96:91:a5:8c:82/192.168.231.0/24 Sending on LPF/ens2f0/b4:96:91:a5:8c:82/192.168.231.0/24 Sending on Socket/fallback/fallback-net Server starting service. DHCPDISCOVER from 4a:63:ad:6d:9a:44 via ens2f0 DHCPOFFER on 192.168.231.120 to 4a:63:ad:6d:9a:44 via ens2f0 DHCPDISCOVER from 4a:63:ad:6d:9a:44 via ens2f0 DHCPOFFER on 192.168.231.120 to 4a:63:ad:6d:9a:44 via ens2f0 </code></pre> <p><strong>Observations:</strong></p> <ol> <li>DHCP request and response are seen by tcpdump on the interface, but the dhcp daemon says none was received.</li> <li>It looks like the DHCP daemon is not picking up the offer.</li> </ol> <p><strong>ENVIRONMENT:</strong> CNI: cni-plugins-linux-amd64-v1.1.1.tgz</p> <p>Could someone please share working configurations for the below-mentioned items</p> <ol> <li>Network attachment definition</li> <li>Sample Pod YAML</li> <li>How to run the DHCP Daemon?</li> </ol> <p>for better documentation and for newcomers who are facing issues around this? Please let me know if you need anything or if I'm missing anything.</p>
<p>I started the daemon on the server node with:</p> <pre><code>cd /opt/cni/bin sudo ./dhcp daemon </code></pre> <p>In a couple of seconds my pod received its IP address from my DHCP server.</p>
<p><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">This doc</a> tells about how to run multiple scheduler. But I m not able to understand how does default scheduler is decided? Is it based on <code>--leader-elect</code> option?</p> <p>Can I tell Kubernetes to use my-custom-scheduler as default scheduler instead of kube-scheduler? Is there another way to specify scheduler other than <code>schedulerName</code> in Pod/Deployment spec.</p>
<p>If you can't remove/replace the default scheduler in the control plane (e.g. if you are on a managed k8s platform), you can use <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/" rel="nofollow noreferrer">GateKeeper</a> — the replacement for OPA — or another policy agent to write mutations that are applied to some or all pods on your cluster.</p> <p>For example:</p> <pre><code>apiVersion: mutations.gatekeeper.sh/v1beta1 kind: Assign metadata: name: pod-scheduler-name spec: applyTo: - groups: &quot;&quot; kinds: [&quot;Pod&quot;] versions: [&quot;v1&quot;] match: kinds: - apiGroups: [&quot;*&quot;] kinds: [&quot;Pod&quot;] # Adjust this to a label that is present on the pods of your custom scheduler. # It's important that you leave your custom scheduler to be itself scheduled by the # default scheduler, as otherwise if all pods of your custom scheduler somehow get # terminated, they won't be able to start up again due to not being scheduled. labelSelector: matchExpressions: - key: app operator: NotIn values: [&quot;my-scheduler&quot;] location: &quot;spec.schedulerName&quot; # Adjust this to match the desired profile name from your scheduler's configuration. parameters: assign: value: my-scheduler </code></pre>
<p>When we launch the EKS Cluster using the below manifest, it is creating ALB. We have a default ALB that we are using, let's call it EKS-ALB. The Hosted zone is routing traffic to this EKS-ALB. We gave tag <strong>ingress.k8s.aws/resource:LoadBalancer, ingress.k8s.aws/stack:test-alb, elbv2.k8s.aws/cluster: EKS</strong>. But when we delete the manifest, it is deleting the default ALB and we need to reconfigure hosted zone again with New ALB which will get created in next deployment. Is there any way to block Ingress-controller not deleting ALB, but only deleting the listeners and Target Group?</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-nginx-rule namespace: test annotations: alb.ingress.kubernetes.io/group.name: test-alb alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;: 443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' alb.ingress.kubernetes.io/healthcheck-port: traffic-port alb.ingress.kubernetes.io/healthcheck-path: /index.html alb.ingress.kubernetes.io/success-codes: 200-399 alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg spec: ingressClassName: alb rules: - host: test.eks.abc.com http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: test-svc port: number: 5005 --- apiVersion: apps/v1 kind: Deployment metadata: name: test-dep namespace: test labels: app: test spec: replicas: 1 restartPolicy: selector: matchLabels: app: test template: metadata: labels: app: test spec: containers: - name: test image: Imagepath imagePullPolicy: IfNotPresent ports: - containerPort: 5005 resources: requests: memory: &quot;256Mi&quot; cpu: &quot;500m&quot; --- apiVersion: v1 kind: Service metadata: name: test-svc namespace: test labels: app: test spec: type: NodePort ports: - port: 5005 targetPort: 80 protocol: TCP selector: app: test --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: test-scaler namespace: test spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: test-dep minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 60 --- </code></pre>
<p>In order to achieve the existing ALB not being deleted with group.name annotation enabled, we need to meet following conditions:</p> <ol> <li>ALB should be tagged with below 3 tags:</li> </ol> <pre><code>alb.ingress.kubernetes.io/group.name: test-alb alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/target-type: instance </code></pre> <ol start="2"> <li>Create a dummy ingress with the same group name with the below manifest.</li> </ol> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-nginx-rule namespace: test annotations: alb.ingress.kubernetes.io/group.name: test-alb alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;: 443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' alb.ingress.kubernetes.io/healthcheck-port: traffic-port alb.ingress.kubernetes.io/healthcheck-path: /index.html alb.ingress.kubernetes.io/success-codes: 200-399 alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg spec: ingressClassName: alb rules: - host: dummy.eks.abc.com http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: test-svc port: number: 5005 </code></pre> <p>After deploying the above manifest, an ingress will be created using the same ALB and listener will have rule of if host is dummy.eks.abc.com, it will return 443. It's create and forget type of manifest, so after creating this ingress, even after we delete all the running deployment services (except the dummy manifest file above), the ALB will remain.</p>
<p>My goal is to have a kubernetes cluster running with stateless replicas of some kind of frontend which can schedule jobs.</p> <p>Afterwards my plan is to have multiple runners (as pods) which are polling for scheduled jobs. Once they receive job data they should launch job executors (also as a pod) on demand.</p> <p>This should look somehow like this:</p> <pre><code> pod A pod B ________ __________ | | kube | | | runner | ------&gt; | executor | |________| |__________| . . pod A' . pod B' ________ __________ | | kube | | | runner | ------&gt; | executor | |________| |__________| </code></pre> <p>Basically I am kind of inspired by the gitlab architecture. <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html#kubernetes-executor-interaction-diagram</a></p> <p>Therefore it would be really awesome if you could give me a hint where to start with. I was considering to use the python api of the cluster but in this I have to somehow make the cluster administration available to the runner.</p> <p>So I'm really thankful for any idea or approach how to realize such an architecture of pod lifecycling.</p> <p>Do you see any security risks with such an approach or is this reasonable securitywise?</p> <p>Thank you very much for your help and hints</p>
<p>Here is a <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">list of available kubernetes api clients</a>. You will use a <strong><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">service account</a></strong> to make authorized calls with your api client against the <strong>kube-apiserver</strong> running on the <strong>controlplane</strong> node. I made a first draft of an idea for your architecture; maybe it is inspiring to you. Good luck with your project! As an alternative to the database, you may want to use a message queue.</p> <p><a href="https://i.stack.imgur.com/SFBOV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SFBOV.png" alt="enter image description here" /></a></p>
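<p>To make this concrete, here is a minimal sketch (not a full implementation) of what a runner could do with the official Python client once it has a service account allowed to create pods; the image name, namespace and labels are placeholders:</p> <pre><code>from kubernetes import client, config

# inside the cluster the runner authenticates via its mounted service account token
config.load_incluster_config()
core = client.CoreV1Api()

def launch_executor(job_id: str, namespace: str = "default") -> None:
    """Create a short-lived executor pod for one scheduled job."""
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"executor-{job_id}", labels={"app": "executor"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="executor",
                    image="your-registry/executor:latest",  # placeholder image
                    args=["--job-id", job_id],
                )
            ],
        ),
    )
    core.create_namespaced_pod(namespace=namespace, body=pod)

# the runner would call launch_executor(...) whenever it picks up a job,
# and can poll pod phase via core.read_namespaced_pod_status(...)
</code></pre>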
<p>Is there a way to use Helm to show available chart updates for installed charts?</p> <p>For example I have a "web-app" chart installed as "test" with version 1.2.4, but in my repo 1.2.7 is available:</p> <pre><code># helm ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION test default 1 2020-06-04 07:33:07.822952298 +0000 UTC deployed web-app-1.2.4 0.0.1 # helm search repo myrepo NAME CHART VERSION APP VERSION DESCRIPTION myrepo/ingress 0.1.0 1.16.0 A Helm chart for Kubernetes myrepo/sandbox 1.2.3 1.16.0 A Helm chart for Kubernetes myrepo/web-app 1.2.7 0.0.1 A Helm chart for Kubernetes </code></pre> <p>My goal is to write a script to send notifications of any charts that need updating so that I can review and run updates. I'd be happy to hear about any devOps style tools that do this,</p>
<p>As of August 28th 2022, there is no way of knowing which repository an already installed helm chart came from.</p> <p>If you want to be able to do some sort of automation, currently you need to track the information of which chart came from which repo externally.<br /> Examples would be: storing configuration in Source Control, installing charts as argo apps (if you're using argocd), a combination of both, etc.</p> <p>Now since this question doesn't describe the use of any of these methods, I'll just make an assumption and give an example based on one of the methods I mentioned.</p> <p>Let's say you store all of the helm charts as dependencies of some local chart in your source control.</p> <p>An example would be a <code>Chart.yaml</code> that looks something like this:</p> <pre><code>apiVersion: v2 name: chart-of-charts description: A Helm chart for Kubernetes type: application version: 0.1.0 dependencies: - name: some-chart version: 0.5.1 repository: &quot;https://somechart.io&quot; - name: web-app version: 0.2.2 repository: &quot;https://myrepo.io&quot; </code></pre> <p>What you could do in this case is traverse through the dependencies and perform a lookup to compare the versions in the .yaml vs the versions available.</p> <p>An example of a bash script:</p> <pre><code>#!/bin/bash # requires: # - helm # - yq (https://github.com/mikefarah/yq) chart=Chart.yaml length=$(yq '.dependencies | length' $chart) for i in $(seq 1 $length); do iter=$(($i-1)) repo=$(yq .dependencies[$iter].repository $chart) name=$(yq .dependencies[$iter].name $chart) version=$(yq .dependencies[$iter].version $chart) # only if this app points to an external helm chart if helm repo add &quot;repo$iter&quot; $repo &gt; /dev/null 2&gt;&amp;1 then available_version=$(helm search repo &quot;repo$iter/$name&quot; --versions | sed -n '2p' | awk '{print $2}') if [ &quot;$available_version&quot; != &quot;$version&quot; ]; then echo APP: $(echo $chart | sed 's|/Chart.yaml||') echo repository: $repo echo chart name: $name echo current version: $version Available version: $available_version echo fi fi done </code></pre>
<p>Hi community,</p> <p>I have doubts about the use of the HorizontalPodAutoscaler (HPA) in Kubernetes. What are the best practices for using HPA, especially when setting <code>maxReplicas</code>? As an example, if I have a cluster with 3 worker nodes running a single app and I set up the HPA to scale up to 20 pods, is it good practice to scale pods to 3x more than the available nodes? Or is scaling the pods up to the same quantity as the available worker nodes a better approach?</p> <p>Thank you in advance</p>
<p>First of all, you need to test your application and decide on reasonable resources per pod (&quot;requests and limits&quot;).</p> <p>After setting the limit per pod, you know how many pods your cluster can maintain. <code>For example, if you have 10 CPU and 10 Gi of memory free over the cluster and you set the limit per pod to 1 CPU and 1 Gi of memory, then you can run up to 10 pods.</code></p> <p>Then it's time to run your load test: fire the expected traffic at its maximum with the lowest number of pods you're planning to run that fits the normal/daily traffic. Gradually start up new pods and check whether you can handle the high traffic or still need to add more pods. Repeat this until you reach an appropriate number of pods; then you have the maximum number of pods that you can configure in your HPA.</p>
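<p>As an illustration only (names, replica counts and the CPU threshold are placeholders you would replace with the values you found while load testing), the resulting HPA could look like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # deployment whose pods have requests/limits set
  minReplicas: 3            # enough for normal/daily traffic
  maxReplicas: 10           # the ceiling you validated in your load test
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
</code></pre>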
<p>Im trying to format my yml to be more readable. I have an if statement that is really long with a bunch of and/ors and I would like to be able to spread it across multiple lines So something along the lines of</p> <pre><code>{{-if or (eq 'abc' .values.foo) (eq 'def' . values.bar) }} Def:'works' {{- end}} </code></pre> <p>But this throws up errors for incomplete if statement. Is there some special character or syntax I can use to achieve the above?</p>
<p>helm supports direct line breaks without special characters.</p> <p>Missing a space between <code>{{</code> and <code>if</code>.</p> <p>There is an extra space between <code>.</code> and <code>values</code>.</p> <p>String constants require double quotes.</p> <p>demo:</p> <p>values.yaml</p> <pre class="lang-yaml prettyprint-override"><code>foo: xxx bar: yyy </code></pre> <p>templates/cm.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: test labels: {{- include &quot;test.labels&quot; . | nindent 4 }} data: cfg: |- {{- if or (eq &quot;abc&quot; .Values.foo) (eq &quot;def&quot; .Values.bar) }} if {{- else }} else {{- end }} </code></pre> <p>cmd</p> <pre class="lang-bash prettyprint-override"><code>helm template --debug test . </code></pre> <p>output</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: test data: cfg: |- else </code></pre>
<p>I use this command to install and enable Kubernetes dashboard on a remote host:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml kubectl proxy --address='192.168.1.132' --port=8001 --accept-hosts='^*$' http://192.168.1.132:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login </code></pre> <p>But I get:</p> <pre><code>Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Read more here . </code></pre> <p>Is it possible to enable SSL connection on the Kubernetes host so that I can access it without this warning message and enable login?</p>
<p>From the service definition</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: ports: - port: 443 targetPort: 8443 selector: k8s-app: kubernetes-dashboard </code></pre> <p>Which exposes port 443 (aka https). So it's already preconfigured. First, use https instead of http in your URL.</p> <p>Then, instead of doing a <code>kubectl proxy</code>, why not simply</p> <pre><code>kubectl port-forward -n kubernetes-dashboard services/kubernetes-dashboard 8001:443 </code></pre> <p>Access endpoint via <a href="https://127.0.0.1:8001/#/login" rel="noreferrer">https://127.0.0.1:8001/#/login</a></p> <p>Now it's going to give the typical &quot;certificate not signed&quot; since the certificate are self signed (arg <code>--auto-generate-certificates</code> in deployment definition). Just skip it with your browser. See an article like <a href="https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/" rel="noreferrer">https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/</a> if you need to configure a signed certificate.</p>
<p>I've placed a docker compose file <strong>project.yaml</strong> at the location /etc/project/project.yaml</p> <p>The file as well as the project directory have the same file permissions, i.e. -rwxrwxrwx, but when I run docker-compose</p> <pre><code>sudo docker-compose -f ./project.yaml up -d </code></pre> <p>it errors out with the following: Cannot find the file ./project.yaml</p> <p>I have checked various times and it seems there is no permission issue. Can anyone tell why we have this problem and what the solution would be?</p>
<p>Beside using the full path, as <a href="https://stackoverflow.com/questions/73500671/docker-compose-cannot-find-the-yaml-file#comment129801726_73500671">commented</a> by <a href="https://stackoverflow.com/users/14312225/quoc9x">quoc9x</a>, double-check your current working directory when you call a command with a relative path <code>./project.yaml</code></p> <p>If you are not in the right folder, that would explain the error message.</p>
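<p>For example, either of these should work regardless of where you run the command from:</p> <pre><code># reference the file by its absolute path
sudo docker-compose -f /etc/project/project.yaml up -d

# or change into the directory first
cd /etc/project &amp;&amp; sudo docker-compose -f ./project.yaml up -d
</code></pre>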
<p>I want to make a container that is able to transfer files between itself and other containers on the cluster. I have multiple containers that are responsible for executing a task, and they are waiting to get an input file to do so. I want a separate container to be responsible for handling files before and after the task is executed by the other containers. As an example:</p> <ol> <li>have all files on the file manager container.</li> <li>let the file manager container automatically copy a file to a task executing container.</li> <li>let task executing container run the task.</li> <li>transfer the output of the task executing container to the file manager container.</li> </ol> <p>And i want to do this automatically, so that for example 400 input files can be processed to output files in this way. What would be the best way to realise such a process with kubernetes? Where should I start?</p>
<p>A simple approach would be to set up NFS or use a managed file system like AWS EFS.</p> <p>You can mount the file system or NFS directly into the PODs using the <strong>ReadWriteMany</strong> access mode.</p> <p><strong>ReadWriteMany</strong> - multiple PODs can access the single file system.</p> <p>If you don't want to use a managed service like EFS, you can also set up the file system on <strong>K8s</strong> yourself; check out <strong>MinIO</strong>: <a href="https://min.io/" rel="nofollow noreferrer">https://min.io/</a></p> <p>All files will be saved in the <strong>file system</strong>, and each <strong>POD</strong> can simply access them from the file system as required.</p> <p>You can create different directories to separate the outputs.</p> <p>If you only want read operations, meaning all PODs can only read the files, you can also use the <code>ReadOnlyMany</code> access mode.</p> <p>If you are on GCP, you can check out this nice document: <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">https://cloud.google.com/filestore/docs/accessing-fileshares</a></p>
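<p>A rough sketch of the shared claim (the storage class name is a placeholder for whatever backs your NFS/EFS/Filestore setup):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany          # file manager and task pods can all mount it
  storageClassName: my-rwx-storageclass   # placeholder
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>The file manager pod and each task-executing pod would then mount this claim at some path (e.g. <code>/data</code>) and exchange input and output files through directories on it.</p>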
<p>I've just migrated to M1 Macbook and tried to deploy couchbase using Couchbase Helm Chart on Kubernetes. <a href="https://docs.couchbase.com/operator/current/helm-setup-guide.html" rel="nofollow noreferrer">https://docs.couchbase.com/operator/current/helm-setup-guide.html</a></p> <p>But, couchbase server pod fails with message below</p> <blockquote> <p>Readiness probe failed: dial tcp 172.17.0.7:8091: connect: connection refused</p> </blockquote> <p>Pod uses image: couchbase/server:7.0.2</p> <p>Error from log file:</p> <pre><code>Starting Couchbase Server -- Web UI available at http://&lt;ip&gt;:8091 and logs available in /opt/couchbase/var/lib/couchbase/logs runtime: failed to create new OS thread (have 2 already; errno=22) fatal error: newosproc runtime stack: runtime.throw(0x4d8d66, 0x9) /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/panic.go:596 +0x95 runtime.newosproc(0xc420028000, 0xc420038000) /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/os_linux.go:163 +0x18c runtime.newm(0x4df870, 0x0) /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1628 +0x137 runtime.main.func1() /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:126 +0x36 runtime.systemstack(0x552700) /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:327 +0x79 runtime.mstart() /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:1132 goroutine 1 [running]: runtime.systemstack_switch() /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:281 fp=0xc420024788 sp=0xc420024780 runtime.main() /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/proc.go:127 +0x6c fp=0xc4200247e0 sp=0xc420024788 runtime.goexit() /home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200247e8 sp=0xc4200247e0 {&quot;init terminating in do_boot&quot;,{{badmatch,{error,{{shutdown,{failed_to_start_child,encryption_service,{port_terminated,normal}}},{ns_babysitter,start,[normal,[]]}}}},[{ns_babysitter_bootstrap,start,0,[{file,&quot;src/ns_babysitter_bootstrap.erl&quot;},{line,23}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}} init terminating in do_boot ({{badmatch,{error,{{_},{_}}}},[{ns_babysitter_bootstrap,start,0,[{_},{_}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}) </code></pre> <p>Any help would be appreciated.</p>
<p>It seems an ARM64 version of Couchbase Server for macOS has been available since Couchbase Server 7.1.1.</p> <p>So, I ran the command below to install Couchbase.</p> <pre><code>helm install couchbasev1 --values myvalues.yaml couchbase/couchbase-operator </code></pre> <p>myvalues.yaml:</p> <pre><code>cluster:   image: couchbase/server:7.1.1 </code></pre> <p>And it worked.</p>
<p>I have created an OpenShift cluster where my pods and services are running. Before creating an ingress, I deployed a Kong ingress controller that auto-provisioned an Azure load balancer.</p> <p><a href="https://i.stack.imgur.com/wp3Vr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wp3Vr.png" alt="enter image description here" /></a></p> <p>From my understanding, Kong uses Nginx, which can be configured to act as a load balancer itself. Why does it need a load balancer to be provisioned by the cloud infrastructure on which the kubernetes cluster is running?</p>
<p>You might have deployed the Kong Ingress controller with the service <code>type: LoadBalancer</code>.</p> <p>The <strong>LoadBalancer</strong> service type is mainly used to expose a <code>Kubernetes</code> service using the LB of the <strong>Cloud</strong> provider.</p> <p>ref: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a></p> <p>You can change the Kong service type to <strong>ClusterIP</strong> and it will forward the requests as expected.</p>
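<p>For example, assuming the proxy service is named <code>kong-proxy</code> in the <code>kong</code> namespace (adjust both to your install), a quick way to switch it is:</p> <pre><code>kubectl patch svc kong-proxy -n kong -p '{"spec": {"type": "ClusterIP"}}'
</code></pre>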
<p><strong>I am new to helm and kubernetes.</strong></p> <p>My current requirement is to use setup multiple services using a common helm chart.</p> <p>Here is the scenario.</p> <ol> <li><p>I have a common docker image for all of the services</p> </li> <li><p>for each of the services there are different commands to run. <strong>In total there are more than 40 services.</strong></p> <p>Example</p> </li> </ol> <blockquote> <pre><code>pipenv run python serviceA.py pipenv run python serviceB.py pipenv run python serviceC.py and so on... </code></pre> </blockquote> <p>Current state of helm chart I have is</p> <pre><code>demo-helm |- Chart.yaml |- templates |- deployment.yaml |- _helpers.tpl |- values |- values-serviceA.yaml |- values-serviceB.yaml |- values-serviceC.yaml and so on ... </code></pre> <p>Now, since I want to use the same helm chart and deploy multiple services. How should I do it?</p> <p>I used following command <code>helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml</code> but it only does a deployment for values file provided at the end.</p> <p>Here is my <code>deployment.yaml</code> file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ include &quot;helm.fullname&quot; . }} labels: {{- include &quot;helm.labels&quot; . | nindent 4 }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: {{- include &quot;helm.selectorLabels&quot; . | nindent 6 }} template: metadata: {{- with .Values.podAnnotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include &quot;helm.selectorLabels&quot; . | nindent 8 }} spec: {{- with .Values.imagePullSecrets }} imagePullSecrets: {{- toYaml . | nindent 8 }} {{- end }} containers: - name: {{ .Chart.Name }} image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} command: {{- toYaml .Values.command |nindent 12}} resources: {{- toYaml .Values.resources | nindent 12 }} volumeMounts: - name: secrets mountPath: &quot;/usr/src/app/config.ini&quot; subPath: config.ini {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{- toYaml . 
| nindent 8 }} {{- end }} volumes: - name: secrets secret: secretName: sample-application defaultMode: 0400 </code></pre> <p><strong>Update.</strong></p> <p>Since my requirement has been updated to add all the values for services in a single file I am able to do it by following.</p> <p><code>deployment.yaml</code></p> <pre><code>{{- range $service, $val := .Values.services }} --- apiVersion: apps/v1 kind: Deployment metadata: name: {{ $service }} labels: app: {{ .nameOverride }} spec: replicas: {{ .replicaCount }} selector: matchLabels: app: {{ .nameOverride }} template: metadata: labels: app: {{ .nameOverride }} spec: imagePullSecrets: - name: aws-ecr containers: - name: {{ $service }} image: &quot;image-latest-v3&quot; imagePullPolicy: IfNotPresent command: {{- toYaml .command |nindent 12}} resources: {{- toYaml .resources | nindent 12 }} volumeMounts: - name: secrets mountPath: &quot;/usr/src/app/config.ini&quot; subPath: config.ini volumes: - name: secrets secret: secretName: {{ .secrets }} defaultMode: 0400 {{- end }} </code></pre> <p>and <code>values.yaml</code></p> <pre><code>services: #Services for region1 serviceA-region1: nameOverride: &quot;serviceA-region1&quot; fullnameOverride: &quot;serviceA-region1&quot; command: [&quot;bash&quot;, &quot;-c&quot;, &quot;python serviceAregion1.py&quot;] secrets: vader-search-region2 resources: {} replicaCount: 5 #Services for region2 serviceA-region2: nameOverride: &quot;serviceA-region2&quot; fullnameOverride: &quot;serviceA-region2&quot; command: [&quot;bash&quot;, &quot;-c&quot;, &quot;python serviceAregion2.py&quot;] secrets: vader-search-region2 resources: {} replicaCount: 5 </code></pre> <p>Now I want to know will the following configuration work with the changes I am posting below for both <code>values.yaml</code></p> <pre><code>services: region: #Services for region1 serviceA-region1: nameOverride: &quot;serviceA-region1&quot; fullnameOverride: &quot;serviceA-region1&quot; command: [&quot;bash&quot;, &quot;-c&quot;, &quot;python serviceAregion1.py&quot;] secrets: vader-search-region2 resources: {} replicaCount: 5 region:2 #Services for region2 serviceA-region2: nameOverride: &quot;serviceA-region2&quot; fullnameOverride: &quot;serviceA-region2&quot; command: [&quot;bash&quot;, &quot;-c&quot;, &quot;python serviceAregion2.py&quot;] secrets: vader-search-region2 resources: {} replicaCount: 5 </code></pre> <p>and <code>deployment.yaml</code></p> <pre><code>{{- range $region, $val := .Values.services.region }} --- apiVersion: apps/v1 kind: Deployment metadata: name: {{ $region }}-{{ .nameOverride }} labels: app: {{ .nameOverride }} spec: replicas: {{ .replicaCount }} selector: matchLabels: app: {{ .nameOverride }} template: metadata: labels: app: {{ .nameOverride }} spec: imagePullSecrets: - name: aws-ecr containers: - name: {{ $region }}-{{ .nameOverride }} image: &quot;image-latest-v3&quot; imagePullPolicy: IfNotPresent command: {{- toYaml .command |nindent 12}} resources: {{- toYaml .resources | nindent 12 }} volumeMounts: - name: secrets mountPath: &quot;/usr/src/app/config.ini&quot; subPath: config.ini volumes: - name: secrets secret: secretName: {{ .secrets }} defaultMode: 0400 {{- end }} </code></pre>
<p>I can recommend you try a helmfile-based approach. I prefer a 3-file approach. What you'll need :</p> <ul> <li>helmfile-init.yaml: contains YAML instructions that you might need to use for creating and configuring namespaces etc.</li> <li>helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2 ...)</li> <li>helmfile.yaml: paths to the above-mentioned (helmfile-init, helmfile-backend YAML files)</li> <li>a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release-name, namespace, helm chart version, application-version, etc.)</li> </ul> <p>Helmfile has made my life a little bit breezy when deploying multiple applications. I will edit this answer with a couple of examples in a few minutes.</p> <p>Meanwhile, you can refer to the official docs <a href="https://helmfile.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a> or the <a href="https://lyz-code.github.io/blue-book/devops/helmfile/" rel="nofollow noreferrer">Blue Books</a> if you have Github access on your machine.</p>
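<p>As a rough illustration of the helmfile approach (release names and namespaces below are placeholders; the chart path and values files mirror the layout from your question), a minimal <code>helmfile.yaml</code> could look like:</p> <pre class="lang-yaml prettyprint-override"><code>releases:
  - name: service-a
    namespace: backend
    chart: ./demo-helm                 # path to your common chart
    values:
      - values/values-serviceA.yaml
  - name: service-b
    namespace: backend
    chart: ./demo-helm
    values:
      - values/values-serviceB.yaml
</code></pre> <p>Running <code>helmfile apply</code> then installs or upgrades every release in one go, and <code>helmfile diff</code> (with the helm-diff plugin) shows what would change before you apply it.</p>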
<p>I have created the below 'pod` in default namespace</p> <pre><code> kubectl run myhttpd --image=&quot;docker.io/library/nginx:latest&quot; --restart=Never -l app=httpd-server --port 80 </code></pre> <p>I was creating another Pod on a different <code>namespace</code> to check the connectivity on <code>port 80</code> on <code>default namespace</code> with the below <code>command</code></p> <pre><code>kubectl run cli-httpd --rm -it --image=busybox --restart=Never -l app=myhttpd -- /bin/sh If you don't see a command prompt, try pressing enter. / # wget --spider --timeout=1 100.64.9.198 (IP of application in default namespace) </code></pre> <p>In order to allow the connectivity between both the namespace , I have created the below <code>Pod network policy</code></p> <pre><code> apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-port-ingress-80 namespace: default spec: podSelector: matchLabels: app: myhttpd policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.64.8.0/22 ports: - protocol: TCP port: 80 </code></pre> <p><code>10.64.8.0/22</code> is the Pods network range.</p> <p>But the connectivity is timing out. Please suggest to allow this connectivty</p>
<p>In NetworkPolicy, the ipBlock is usually meant to allow communications from outside your SDN.</p> <p>What you want to do is to filter based on pod labels.</p> <p>Having started your test pod, check for its labels</p> <pre><code>kubectl get pods --show-labels </code></pre> <p>Pick one that identify your Pod, while not matching anything else, then fix your NetworkPolicy. Should look something like:</p> <pre><code>spec: ingress: - from: - podSelector: # assuming client pod belongs to same namespace as application matchLabels: app: my-test # netpol allows connections from any pod with label app=my-test ports: - port: 80 # netpol allows connections to port 80 only protocol: TCP podSelector: matchLabels: app: myhttpd # netpol applies to any pod with label app=myhttpd policyTypes: - Ingress </code></pre> <p>While ... I'm not certain what the NetworkPolicy specification says regarding ipBlocks (can they refer to SDN ranges?) ... depending on your SDN, I guess your configuration &quot;should&quot; work, in some cases, maybe. Maybe your issue is only related to label selectors?</p> <p>Note, allowing connections from everywhere, I would use:</p> <pre><code>spec: ingress: - {} .... </code></pre>
<p>From a certain PVC, I'm trying to get the volume id from the metadata of the PV associated with the PVC using <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Api</a>.</p> <p>I'm able to describe PVC with <code>read_namespaced_persistent_volume_claim</code> function and obtain the PV name <code>spec.volume_name</code>. Now I need to go deeper and get the <code>Source.VolumeHandle</code> attribute from the PV metadata to get de EBS Volume Id and obtain the volume status from aws, but I can't find a method to describe pv from the python api.</p> <p>Any help?</p> <p>Thanks</p>
<p>While <code>PersistentVolumeClaims</code> are namespaced, <code>PersistentVolumes</code> are not. Looking at the available methods in the V1 API...</p> <pre><code>&gt;&gt;&gt; v1 = client.CoreV1Api()
&gt;&gt;&gt; print('\n'.join([x for x in dir(v1) if x.startswith('read') and 'volume' in x]))
read_namespaced_persistent_volume_claim
read_namespaced_persistent_volume_claim_status
read_namespaced_persistent_volume_claim_status_with_http_info
read_namespaced_persistent_volume_claim_with_http_info
read_persistent_volume
read_persistent_volume_status
read_persistent_volume_status_with_http_info
read_persistent_volume_with_http_info
</code></pre> <p>...it looks like <code>read_persistent_volume</code> is probably what we want. Running <code>help(v1.read_persistent_volume)</code> gives us:</p> <pre><code>read_persistent_volume(name, **kwargs) method of kubernetes.client.api.core_v1_api.CoreV1Api instance
    read_persistent_volume
    read the specified PersistentVolume
    This method makes a synchronous HTTP request by default. To make an
    asynchronous HTTP request, please pass async_req=True
    &gt;&gt;&gt; thread = api.read_persistent_volume(name, async_req=True)
    &gt;&gt;&gt; result = thread.get()

    :param async_req bool: execute request asynchronously
    :param str name: name of the PersistentVolume (required)
    :param str pretty: If 'true', then the output is pretty printed.
    :param _preload_content: if False, the urllib3.HTTPResponse object will be returned without reading/decoding response data. Default is True.
    :param _request_timeout: timeout setting for this request. If one number provided, it will be total request timeout. It can also be a pair (tuple) of (connection, read) timeouts.
    :return: V1PersistentVolume
             If the method is called asynchronously, returns the request thread.
</code></pre>
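<p>Putting it together, a minimal sketch of going from PVC to EBS volume ID might look like this (assuming the PV is provisioned by the EBS CSI driver; for the legacy in-tree provisioner the ID lives under <code>spec.aws_elastic_block_store.volume_id</code> instead, and the PVC/namespace names here are placeholders):</p> <pre><code>from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

# 1. Read the (namespaced) PVC and get the bound PV name
pvc = v1.read_namespaced_persistent_volume_claim(name=&quot;my-pvc&quot;, namespace=&quot;default&quot;)
pv_name = pvc.spec.volume_name

# 2. Read the (cluster-scoped) PV
pv = v1.read_persistent_volume(name=pv_name)

# 3. Pull the volume handle out of the PV source
if pv.spec.csi:                                   # CSI-provisioned volume
    volume_id = pv.spec.csi.volume_handle
elif pv.spec.aws_elastic_block_store:             # in-tree AWS EBS volume
    volume_id = pv.spec.aws_elastic_block_store.volume_id
else:
    raise RuntimeError(&quot;PV is not backed by EBS&quot;)

print(volume_id)                                  # e.g. vol-0123456789abcdef0
</code></pre>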
<p>I am planning to run a Kubernetes cluster in production using Ingress for external requests.</p> <p>I have an Elasticsearch database that is not going to be part of the Kubernetes cluster. I have a microservice in the Kubernetes cluster that communicates with the Elasticsearch database over HTTP (GET, POST, etc.).</p> <p>Should I create another NodePort Service in order to communicate with the Elasticsearch database, or should it go through the Ingress controller since it's an HTTP request? If both are valid options, please let me know which is better to use and why.</p>
<blockquote> <p>Should I create another NodePort Service in order to communicate with the Elasticsearch database, or should it go through the Ingress controller since it's an HTTP request?</p> </blockquote> <p>Neither is required: if your K8s cluster is public (has outbound internet access), the microservices will be able to send requests to the <strong>Elasticsearch</strong> database directly.</p> <p><strong>Ingress</strong> and <strong>egress</strong> endpoints are not necessarily the same point in <strong>K8s</strong>.</p> <blockquote> <p>I have a microservice in the Kubernetes cluster that communicates with the Elasticsearch database over HTTP (GET, POST, etc.).</p> </blockquote> <p>There may be some misunderstanding: Ingress is for incoming requests. When you run a microservice on Kubernetes, there is no guarantee that your outgoing (egress) HTTP requests leave the cluster the same way.</p> <p>If your microservice is running in the K8s cluster, its outgoing traffic will use the IP of the node on which the Pod is running.</p> <p>You can verify this quickly with <strong>kubectl exec</strong>:</p> <pre><code>kubectl exec -it &lt;Any POD name&gt; -n &lt;namespace name&gt; -- /bin/bash
</code></pre> <p>Then run:</p> <pre><code>curl https://ifconfig.me
</code></pre> <p>The above command responds with the IP from which requests leave your cluster; it will be the IP of the node on which your Pod is scheduled.</p> <p><strong>Extra</strong></p> <p>So you can manage <strong>ingress</strong> for incoming traffic, and no extra config is required for <strong>egress</strong> traffic. However, if you want to whitelist a single IP in the Elasticsearch database, then you have to set up a <strong>NAT gateway</strong>.</p> <p>With a NAT gateway, all traffic from the K8s microservices goes out from a <strong>single IP</strong> (the NAT gateway's IP), which will be a different IP from the <strong>Ingress IP</strong>.</p> <p>If you are on GCP, here is a Terraform module to set up the NAT gateway: <a href="https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway" rel="nofollow noreferrer">https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway</a></p> <p>The diagram in the above link should give you a good idea of the setup.</p>
<p>I noticed a strange behavior while experimenting with <code>kubectl run</code> :</p> <ul> <li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c &quot;ls -lah&quot;</code> &gt; <strong>OK</strong></p> <pre><code>kubectl run nodejs --image=node:lts-alpine \ --restart=Never --quiet -i --rm \ --command -- /bin/sh -c &quot;ls -lah&quot; </code></pre> </li> <li><p>When command to be executed is passed in <code>--overrides</code> with <code>&quot;command&quot;: [ &quot;ls&quot;, &quot;-lah&quot; ]</code> &gt; <strong>OK</strong></p> <pre><code>kubectl run nodejs --image=node:lts-alpine \ --restart=Never \ --overrides=' { &quot;kind&quot;: &quot;Pod&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;nodejs&quot; }, &quot;spec&quot;: { &quot;volumes&quot;: [ { &quot;name&quot;: &quot;host-volume&quot;, &quot;hostPath&quot;: { &quot;path&quot;: &quot;/home/dferlay/Sources/df-sdc/web/themes/custom/&quot; } } ], &quot;containers&quot;: [ { &quot;name&quot;: &quot;nodejs&quot;, &quot;image&quot;: &quot;busybox&quot;, &quot;command&quot;: [ &quot;ls&quot;, &quot;-lah&quot; ], &quot;workingDir&quot;: &quot;/app&quot;, &quot;volumeMounts&quot;: [ { &quot;name&quot;: &quot;host-volume&quot;, &quot;mountPath&quot;: &quot;/app&quot; } ], &quot;terminationMessagePolicy&quot;: &quot;FallbackToLogsOnError&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot; } ], &quot;restartPolicy&quot;: &quot;Never&quot;, &quot;securityContext&quot;: { &quot;runAsUser&quot;: 1000, &quot;runAsGroup&quot;: 1000 } } } ' \ --quiet -i --rm </code></pre> </li> <li><p>When the command to be executed is passed as option flag <code>--command -- /bin/sh -c &quot;ls -lah&quot;</code> and <code>--overrides</code> is used for something else (volume for instance) &gt; <strong>KO</strong></p> <pre><code>kubectl run nodejs --image=node:lts-alpine --restart=Never \ --overrides=' { &quot;kind&quot;: &quot;Pod&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;nodejs&quot; }, &quot;spec&quot;: { &quot;volumes&quot;: [ { &quot;name&quot;: &quot;host-volume&quot;, &quot;hostPath&quot;: { &quot;path&quot;: &quot;/home/dferlay/Sources/df-sdc/web/themes/custom/&quot; } } ], &quot;containers&quot;: [ { &quot;name&quot;: &quot;nodejs&quot;, &quot;image&quot;: &quot;busybox&quot;, &quot;workingDir&quot;: &quot;/app&quot;, &quot;volumeMounts&quot;: [ { &quot;name&quot;: &quot;host-volume&quot;, &quot;mountPath&quot;: &quot;/app&quot; } ], &quot;terminationMessagePolicy&quot;: &quot;FallbackToLogsOnError&quot;, &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot; } ], &quot;restartPolicy&quot;: &quot;Never&quot;, &quot;securityContext&quot;: { &quot;runAsUser&quot;: 1000, &quot;runAsGroup&quot;: 1000 } } } ' \ --quiet -i --rm --command -- /bin/sh -c &quot;ls -lah&quot; </code></pre> </li> </ul> <p>So it looks like using <code>--overrides</code> prevents <code>--command</code> to be used.</p> <p>However, I precisely need to use <code>--command</code> to bypass the array format expected by <code>--overrides</code> (ie. <code>&quot;command&quot;: [ &quot;ls&quot;, &quot;-lah&quot; ]</code>) because in my use case the command is a placeholder and cannot be known in advance.</p> <ul> <li>How can I do that ? Is there something I'm missing ?</li> </ul> <p>FYI: <code>kubectl version=v1.23.1+k3s2</code></p>
<p>You can bypass the array format by using the <code>args</code> field:</p> <pre><code>&quot;command&quot;: [ &quot;sh&quot;, &quot;-c&quot; ], &quot;args&quot;: [ &quot;pwd &amp;&amp; id &amp;&amp; node YOUR_COMMAND&quot; ] </code></pre>
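<p>For illustration, this is how the <code>containers</code> entry of your <code>--overrides</code> JSON could look with that pattern; the rest of the overrides (volumes, securityContext, etc.) stays as in your original JSON, and the command string itself is just a placeholder you can template in (a sketch, not tested against your exact setup):</p> <pre><code>&quot;containers&quot;: [
  {
    &quot;name&quot;: &quot;nodejs&quot;,
    &quot;image&quot;: &quot;busybox&quot;,
    &quot;command&quot;: [ &quot;sh&quot;, &quot;-c&quot; ],
    &quot;args&quot;: [ &quot;ls -lah&quot; ],
    &quot;workingDir&quot;: &quot;/app&quot;,
    &quot;volumeMounts&quot;: [
      { &quot;name&quot;: &quot;host-volume&quot;, &quot;mountPath&quot;: &quot;/app&quot; }
    ],
    &quot;terminationMessagePolicy&quot;: &quot;FallbackToLogsOnError&quot;,
    &quot;imagePullPolicy&quot;: &quot;IfNotPresent&quot;
  }
]
</code></pre> <p>Since the whole command lives in a single JSON string (<code>&quot;ls -lah&quot;</code> here), you can substitute any shell command into it, for example via a shell variable, without having to turn it into a JSON array.</p>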
<p>Is there a way to assign pods to the nodes in a particular nodegroup without labeling each node in the nodegroup?</p> <p>E.g.:</p> <p>Suppose I have two nodegroups, <code>NG1</code> and <code>NG2</code>, and I have two apps, <code>A1</code> and <code>A2</code>.</p> <p>I want to assign pods of app <code>A1</code> to nodegroup <code>NG1</code> and pods of app <code>A2</code> to nodegroup <code>NG2</code>. (I don't want to label each node in the nodegroup manually and then use a nodeSelector.)</p>
<p>You can use some of the default node labels if their values differ across the two node pools:</p> <pre><code>failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region
beta.kubernetes.io/instance-type
beta.kubernetes.io/os
beta.kubernetes.io/arch
</code></pre> <p>For example, if your node pools run different instance types, you can use <code>beta.kubernetes.io/instance-type</code>.</p> <p><strong>Example</strong></p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.kubernetes.io/instance-type
            operator: In
            values:
            - Node Type
            - Node Type
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0
</code></pre> <p>You can also use <code>topology.kubernetes.io/zone</code> if the node pools are in different zones:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0
</code></pre> <p><strong>Update</strong></p> <p>If all the labels are the <strong>same</strong> across the pools, you can try the below command, which selects the nodes of a specific node group via <code>alpha.eksctl.io/nodegroup-name=ng-1</code> and labels all of them in one go:</p> <pre><code>kubectl label nodes -l alpha.eksctl.io/nodegroup-name=ng-1 new-label=foo
</code></pre>
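<p>Building on that: if a node group label already exists on the nodes (eksctl-created groups get <code>alpha.eksctl.io/nodegroup-name</code>, and EKS managed node groups get <code>eks.amazonaws.com/nodegroup</code>), you can point a plain <code>nodeSelector</code> at it directly, with no per-node labeling at all. A sketch, assuming the node group names <code>NG1</code>/<code>NG2</code> from the question and the eksctl label key (check <code>kubectl get nodes --show-labels</code> to confirm which key your nodes actually carry):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-a1
spec:
  nodeSelector:
    alpha.eksctl.io/nodegroup-name: NG1   # pods of app A1 land only on nodes of nodegroup NG1
  containers:
  - name: app-a1
    image: registry.k8s.io/pause:2.0      # placeholder image
</code></pre> <p>The same pattern with <code>NG2</code> in the selector pins app <code>A2</code> to the other node group.</p>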
<p>I'm using minikube on a Fedora based machine to run a simple mongo-db deployment on my local machine but I'm constantly getting <code>ImagePullBackOff</code> error. Here is the yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongodb-deployment labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: mongodb ports: - protocol: TCP port: 27017 targetPort: 27017 </code></pre> <p>I tried to pull the image locally by using <code>docker pull mongo</code>, <code>minikube image pull mongo</code> &amp; <code>minikube image pull mongo-express</code> several times while restarting docker and minikube several times.</p> <p>Logining into dockerhub (both in broweser and through terminal didn't work)</p> <p>I also tried to login into docker using <code>docker login</code> command and then modified my <code>/etc/resolv.conf</code> and adding <code>nameserver 8.8.8.8</code> and then restartied docker using <code>sudo systemctl restart docker</code> but even that failed to work.</p> <p>On running <code>kubectl describe pod</code> command I get this output:</p> <pre><code>Name: mongodb-deployment-6bf8f4c466-85b2h Namespace: default Priority: 0 Node: minikube/192.168.49.2 Start Time: Mon, 29 Aug 2022 23:04:12 +0530 Labels: app=mongodb pod-template-hash=6bf8f4c466 Annotations: &lt;none&gt; Status: Pending IP: 172.17.0.2 IPs: IP: 172.17.0.2 Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466 Containers: mongodb: Container ID: Image: mongo Image ID: Port: 27017/TCP Host Port: 0/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: MONGO_INITDB_ROOT_USERNAME: &lt;set to the key 'mongo-root-username' in secret 'mongodb-secret'&gt; Optional: false MONGO_INITDB_ROOT_PASSWORD: &lt;set to the key 'mongo-root-password' in secret 'mongodb-secret'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-vlcxl: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message </code></pre> <hr /> <pre><code> Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube Warning Failed 18m (x2 over 20m) kubelet Failed to pull image &quot;mongo:latest&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image &quot;mongo:latest&quot; Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff Normal Pulling 17m (x3 over 22m) kubelet Pulling image &quot;mongo:latest&quot; Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed 
and re-created. Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image &quot;mongo:latest&quot; Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image &quot;mongo:latest&quot;: rpc error: code = Unknown desc = context deadline exceeded Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image &quot;mongo:latest&quot; Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff </code></pre> <p>PS: Ignore any spacing errors.</p>
<p>I think your internet connection is slow. The timeout to pull an image is <code>120</code> seconds, so kubectl could not pull the image in under <code>120</code> seconds.</p> <p>First, pull the image via <code>Docker</code></p> <pre class="lang-bash prettyprint-override"><code>docker image pull mongo </code></pre> <p>Then load the downloaded image to <code>minikube</code></p> <pre class="lang-bash prettyprint-override"><code>minikube image load mongo </code></pre> <p>And then everything will work because now kubectl will use the image that is stored locally.</p>
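<p>One extra detail worth mentioning (an assumption about your manifest, adjust if yours differs): for the locally loaded image to actually be used, the container's <code>imagePullPolicy</code> should not force a remote pull. Since the <code>mongo</code> image is effectively tagged <code>latest</code>, the default policy is <code>Always</code>, so you may want to set it explicitly:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: mongodb
    image: mongo
    imagePullPolicy: IfNotPresent   # use the image loaded with &quot;minikube image load&quot; if it exists
    ports:
      - containerPort: 27017
</code></pre>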
<p>This is my <code>ClusterRoleBinding</code> and <code>ClusterRole</code> definition:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-namespaces
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bootstrap
subjects:
- kind: ServiceAccount
  name: executors
  namespace: bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bootstrap
rules:
- apiGroups:
  - '*'
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
</code></pre> <p>The service account:</p> <pre><code>[node1 ~]$ kubectl get sa executors -n bootstrap -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: &quot;2022-08-30T19:51:17Z&quot;
  name: executors
  namespace: bootstrap
  resourceVersion: &quot;2209&quot;
  uid: 488f5a2d-c44d-4db1-8d18-11a4f0206952
secrets:
- name: executors-token-2b2wl
</code></pre> <p>The test:</p> <pre><code>[node1 ~]$ kubectl create namespace test --as=executors
Error from server (Forbidden): namespaces is forbidden: User &quot;executors&quot; cannot create resource &quot;namespaces&quot; in API group &quot;&quot; at the cluster scope
</code></pre> <pre><code>[node1 ~]$ kubectl auth can-i create namespace --as=executors
Warning: resource 'namespaces' is not namespace scoped
no
</code></pre> <p>Why am I getting the above error? I did follow the Kubernetes docs on ClusterRoleBinding.</p>
<p>Try this and let me know how it goes.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-namespaces
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bootstrap
subjects:
- kind: ServiceAccount
  name: executors
  namespace: bootstrap
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bootstrap
rules:
- apiGroups:
  - ''
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
</code></pre> <p>I see that in my cluster the ClusterRole <code>system:controller:namespace-controller</code> has <code>apiGroups</code> of <code>''</code> (the core API group) instead of the <code>'*'</code> used in your original ClusterRole.</p>
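<p>One more thing worth checking, independent of the role itself: when you impersonate a service account with <code>--as</code>, the username has to be the full service account form, not just the short name. The verification commands would then be (a suggestion based on how RBAC identifies service accounts, not something taken from your output):</p> <pre><code>kubectl auth can-i create namespaces --as=system:serviceaccount:bootstrap:executors
kubectl create namespace test --as=system:serviceaccount:bootstrap:executors
</code></pre> <p>With <code>--as=executors</code> the request is evaluated for a plain user named <code>executors</code>, which no ClusterRoleBinding refers to, so it will always come back forbidden regardless of the role contents.</p>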
<p>Can you please assist: when deploying, we are getting ImagePullBackOff for our pods.</p> <p>Running <code>kubectl get &lt;pod-name&gt; -n namespace -o yaml</code>, I am getting the error below.</p> <pre class="lang-yaml prettyprint-override"><code>containerStatuses:
- image: mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644
  imageID: &quot;&quot;
  lastState: {}
  name: dmd-base
  ready: false
  restartCount: 0
  started: false
  state:
    waiting:
      message: Back-off pulling image &quot;mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;
      reason: ImagePullBackOff
hostIP: x.x.x.53
phase: Pending
podIP: x.x.x.237
</code></pre> <p>And running <code>kubectl describe pod &lt;pod-name&gt; -n namespace</code>, I am getting the error information below:</p> <pre class="lang-none prettyprint-override"><code>Normal   Scheduled  85m  default-scheduler  Successfully assigned dmd-int/app-app-base-5b4b75756c-lrcp6 to aks-agentpool-35064155-vmss00000a
Warning  Failed     85m  kubelet            Failed to pull image &quot;mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: [rpc error: code = Unknown desc = failed to pull and unpack image &quot;mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: failed to resolve reference &quot;mycontainer-registry.io/commpany/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: failed to do request: Head &quot;https://mycontainer-registry.azurecr.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: dial tcp: lookup mycontainer-registry.azurecr.io on [::1]:53: read udp [::1]:56109-&gt;[::1]:53: read: connection refused, rpc error: code = Unknown desc = failed to pull and unpack image &quot;mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: failed to resolve reference &quot;mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: failed to do request: Head &quot;https://mycontainer-registry.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644&quot;: dial tcp: lookup mycontainer-registry.io on [::1]:53: read udp [::1]:60759-&gt;[::1]:53: read: connection refused]
</code></pre> <p>From the described logs I can see the issue is connection-related, but I can't tell where the connectivity problem is. We are running our apps in a Kubernetes cluster on Azure.</p> <p>If anyone has come across this issue, can you please assist? The application has been running successfully throughout the past months; we only started seeing this issue this morning.</p>
<p>There is a known Azure outage affecting multiple regions today: a DNS issue that also affects image pulls. See <a href="https://status.azure.com/en-us/status" rel="noreferrer">https://status.azure.com/en-us/status</a></p>
<p>I'm trying to install Kubernetes with dashboard but I get the following issue:</p> <pre><code>test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-flannel kube-flannel-ds-ksc9n 0/1 CrashLoopBackOff 14 (2m15s ago) 49m kube-system coredns-6d4b75cb6d-27m6b 0/1 ContainerCreating 0 4h kube-system coredns-6d4b75cb6d-vrgtk 0/1 ContainerCreating 0 4h kube-system etcd-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h kube-system kube-apiserver-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h kube-system kube-controller-manager-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h kube-system kube-proxy-6v8w6 1/1 Running 1 (106m ago) 4h kube-system kube-scheduler-ubuntukubernetes1 1/1 Running 1 (106m ago) 4h kubernetes-dashboard dashboard-metrics-scraper-7bfdf779ff-dfn4q 0/1 Pending 0 48m kubernetes-dashboard dashboard-metrics-scraper-8c47d4b5d-9kh7h 0/1 Pending 0 73m kubernetes-dashboard kubernetes-dashboard-5676d8b865-q459s 0/1 Pending 0 73m kubernetes-dashboard kubernetes-dashboard-6cdd697d84-kqnxl 0/1 Pending 0 48m test@ubuntukubernetes1:~$ </code></pre> <p>Log files:</p> <pre><code>test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n Defaulted container &quot;kube-flannel&quot; out of: kube-flannel, install-cni-plugin (init), install-cni (init) I0808 23:40:17.324664 1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true} W0808 23:40:17.324753 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. E0808 23:40:17.547453 1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods &quot;kube-flannel-ds-ksc9n&quot; is forbidden: User &quot;system:serviceaccount:kube-flannel:flannel&quot; cannot get resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;kube-flannel&quot; test@ubuntukubernetes1:~$ </code></pre> <p>Do you know how this issue can be solved? 
I tried the following installation:</p> <pre><code>swapoff -a

Remove following line from /etc/fstab
/swap.img none swap sw 0 0

sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
echo &quot;deb https://apt.kubernetes.io/ kubernetes-xenial main&quot; &gt;&gt; ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl proxy --address 192.168.1.133 --accept-hosts '.*'
</code></pre> <p>Can you advise?</p>
<p>I had the same situation on a new deployment today. Turns out, the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.</p> <p>I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except for under the roleRef heading. (it threw an error when I added it there) All pods came up as 'Running' after the new yml was applied.</p>
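<p>For reference, the part that matters is the subject of the ClusterRoleBinding in kube-flannel-rbac.yml; a sketch of how it should end up (exact object names can differ between flannel releases, so compare with the file you actually downloaded):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel    # was kube-system in the older rbac manifest
</code></pre> <p>This matches the error in the question, which shows the service account <code>system:serviceaccount:kube-flannel:flannel</code> being denied access.</p>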
<p>Good day!</p> <p>I am facing a strange problem. I have a standard deployment that uses a public image, but when I create it, I get the error <strong>ImagePullBackOff</strong>.</p> <pre><code>$ kubectl get pods
</code></pre> <p>Result:</p> <pre><code>api-gateway-deployment-74968fbf5c-cvqwj           0/1   ImagePullBackOff   0   6h23m
api-gateway-gateway-deployment-74968fbf5c-hpdxb   0/1   ImagePullBackOff   0   6h23m
api-gateway-gateway-deployment-74968fbf5c-rctv6   0/1   ImagePullBackOff   0   6h23m
</code></pre> <p>My deployment:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-deployment
  labels:
    app: api-gateway-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway-deployment
  template:
    metadata:
      labels:
        app: api-gateway-deployment
    spec:
      containers:
        - name: api-gateway-node
          image: creatorsprodhouse/api-gateway:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
</code></pre> <p>I am using the Docker driver; am I doing anything wrong?</p> <pre><code>minikube start --driver=docker
</code></pre>
<p>I think your internet connection is slow. The timeout to pull an image is <code>120</code> seconds, so kubectl could not pull the image in under <code>120</code> seconds.</p> <p>First, pull the image via <code>Docker</code></p> <pre class="lang-bash prettyprint-override"><code>docker image pull creatorsprodhouse/api-gateway:latest </code></pre> <p>Then load the downloaded image to <code>minikube</code></p> <pre class="lang-bash prettyprint-override"><code>minikube image load creatorsprodhouse/api-gateway:latest </code></pre> <p>And then everything will work because now kubectl will use the image that is stored locally.</p>
<p>We are running Grafana on EKS Kubernetes v1.21 as a Helm deployment behind a Traefik reverse proxy.</p> <p>Grafana version: <code>v9.0.3</code></p> <p>Recently, Grafana has been posting this same log message every minute without fail:</p> <pre><code>2022-08-24 15:52:47 logger=context traceID=00000000000000000000000000000000 userId=0 orgId=0 uname= t=2022-08-24T13:52:47.293094029Z level=info msg=&quot;Request Completed&quot; method=GET path=/api/live/ws status=401 remote_addr=10.1.3.153 time_ms=4 duration=4.609805ms size=27 referer= traceID=00000000000000000000000000000000 2022-08-24 15:52:47 logger=context traceID=00000000000000000000000000000000 t=2022-08-24T13:52:47.290478899Z level=error msg=&quot;Failed to look up user based on cookie&quot; error=&quot;user token not found&quot; </code></pre> <p>I can't confirm whether these two log messages are related but I believe they are.</p> <p>I cannot find any user with id <code>0</code>.</p> <p>Another log error I see occasionally is</p> <pre><code>2022-08-24 15:43:43 logger=ngalert t=2022-08-24T13:43:43.020553296Z level=error msg=&quot;unable to fetch orgIds&quot; msg=&quot;context canceled&quot; </code></pre> <p>What I can see, is that the <code>remote_addr</code> refers to the node in our cluster that Grafana is deployed on.</p> <p>Can anyone explain why this is continually hitting the endpoint shown?</p> <p>Thanks!</p>
<p>The Grafana Live feature is real-time messaging that uses websockets. It is used in Grafana for notifying on events like someone else is editing the same dashboard as you. It can also be used for streaming data directly to Grafana. <a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/" rel="nofollow noreferrer">Docs here</a></p> <p>You can either turn off Grafana Live or configure your proxy to allow websockets.</p> <ul> <li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#max_connections" rel="nofollow noreferrer">Turn it off by setting config option <code>max_connections</code> to zero</a></li> <li><a href="https://grafana.com/tutorials/run-grafana-behind-a-proxy/" rel="nofollow noreferrer">Instructions on how to configure the Traefik proxy with Grafana</a></li> <li><a href="https://grafana.com/docs/grafana/latest/setup-grafana/set-up-grafana-live/#configure-grafana-live" rel="nofollow noreferrer">Setup guide for Grafana Live</a></li> </ul>
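<p>For example, to turn Grafana Live off, the relevant setting lives in the <code>[live]</code> section of <code>grafana.ini</code> (or, if you configure the Helm chart through environment variables, the corresponding <code>GF_LIVE_MAX_CONNECTIONS</code> variable; double-check against the config reference linked above):</p> <pre><code>[live]
# 0 disables Grafana Live, -1 means unlimited connections
max_connections = 0
</code></pre>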