<p>I have the following <code>MutatingWebhookConfiguration</code></p> <pre><code>apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: example-webhook webhooks: - name: example-webhook.default.svc.cluster.local admissionReviewVersions: - &quot;v1beta1&quot; sideEffects: &quot;None&quot; timeoutSeconds: 30 objectSelector: matchLabels: example-webhook-enabled: &quot;true&quot; clientConfig: service: name: example-webhook namespace: default path: &quot;/mutate&quot; caBundle: &quot;LS0tLS1CR...&quot; rules: - operations: [ &quot;CREATE&quot; ] apiGroups: [&quot;&quot;] apiVersions: [&quot;v1&quot;] resources: [&quot;pods&quot;] </code></pre> <p>I want to inject the <code>webhook</code> pod in an <code>istio</code> enabled namespace with <code>istio</code> having strict TLS mode on.</p> <p>Therefore, (I thought) TLS should not be needed in my <code>example-webhook</code> service so it is crafted as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: example-webhook namespace: default spec: selector: app: example-webhook ports: - port: 80 targetPort: webhook name: webhook </code></pre> <p>However when creating a <code>Pod</code> (that does indeed trigger the webhook) I get the following error:</p> <pre><code>β–Ά k create -f demo-pod.yaml Error from server (InternalError): error when creating &quot;demo-pod.yaml&quot;: Internal error occurred: failed calling webhook &quot;example-webhook.default.svc.cluster.local&quot;: Post &quot;https://example-webhook.default.svc:443/mutate?timeout=30s&quot;: no service port 443 found for service &quot;example-webhook&quot; </code></pre> <p>Can't I configure the webhook not to be called on <code>443</code> but rather on <code>80</code>? Either way TLS termination is done by the <code>istio</code> sidecar.</p> <p>Is there a way around this using <code>VirtualService</code> / <code>DestinationRule</code>?</p> <p><strong>edit</strong>: on top of that, why is it trying to reach the service in the <code>example-webhook.default.svc</code> endpoint? (while it should be doing so in <code>example-webhook.default.svc.cluster.local</code>) ?</p> <h3>Update 1</h3> <p>I have tried to use <code>https</code> as follows:</p> <p>I have created a certificate and private key, using istio's CA.</p> <p>I can verify that my DNS names in the cert are valid as follows (from another pod)</p> <pre><code>echo | openssl s_client -showcerts -servername example-webhook.default.svc -connect example-webhook.default.svc:443 2&gt;/dev/null | openssl x509 -inform pem -noout -text </code></pre> <pre><code>... Subject: C = GR, ST = Attica, L = Athens, O = Engineering, OU = FOO, CN = *.cluster.local, emailAddress = [email protected] ... X509v3 extensions: X509v3 Subject Alternative Name: DNS:*.default.svc.cluster.local, DNS:example-webhook, DNS:example-webhook.default.svc ... 
</code></pre> <p>but now pod creation fails as follows:</p> <pre><code>β–Ά k create -f demo-pod.yaml Error from server (InternalError): error when creating &quot;demo-pod.yaml&quot;: Internal error occurred: failed calling webhook &quot;example-webhook.default.svc.cluster.local&quot;: Post &quot;https://example-webhook.default.svc:443/mutate?timeout=30s&quot;: x509: certificate is not valid for any names, but wanted to match example-webhook.default.svc </code></pre> <h3>Update 2</h3> <p>The fact that the certs the webhook pod is running with were appropriately created using the istio CA cert is also validated.</p> <pre><code>curl --cacert istio_cert https://example-webhook.default.svc Test </code></pre> <p>where <code>istio_cert</code> is the file containing istio's CA certificate.</p> <p>What is going on?</p>
<p>Not sure if you can use webhook on port 80...</p> <p>Perhaps some of this will be useful to you. I used the following script to generate certificates; you can change it to suit your needs:</p> <pre><code>#!/bin/bash set -e service=webhook-svc namespace=default secret=webhook-certs csrName=${service}.${namespace} cat &lt;&lt;EOF &gt;&gt; csr.conf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1 = ${service} DNS.2 = ${service}.${namespace} DNS.3 = ${service}.${namespace}.svc EOF openssl genrsa -out server-key.pem 2048 openssl req -new -key server-key.pem -subj &quot;/CN=${service}.${namespace}.svc&quot; -out server.csr -config csr.conf kubectl delete csr ${csrName} 2&gt;/dev/null || true cat &lt;&lt;EOF | kubectl create -f - apiVersion: certificates.k8s.io/v1beta1 kind: CertificateSigningRequest metadata: name: ${csrName} spec: groups: - system:authenticated request: $(&lt; server.csr base64 | tr -d '\n') usages: - digital signature - key encipherment - server auth EOF sleep 5 kubectl certificate approve ${csrName} for i in {1..10} do serverCert=$(kubectl get csr ${csrName} -o jsonpath='{.status.certificate}') if [[ ${serverCert} != '' ]]; then break fi sleep 1 done if [[ ${serverCert} == '' ]]; then echo &quot;ERROR: After approving csr ${csrName}, the signed certificate did not appear on the resource. Giving up after 10 attempts.&quot; &gt;&amp;2 exit 1 fi echo &quot;${serverCert}&quot; | openssl base64 -d -A -out server-cert.pem # create the secret with CA cert and server cert/key kubectl create secret generic ${secret} \ --from-file=key.pem=server-key.pem \ --from-file=cert.pem=server-cert.pem \ --dry-run=client -o yaml | kubectl -n ${namespace} apply -f - </code></pre> <p>The script creates a secret, which I then mounted into the webhook deployment. deployment.yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: webhook-deployment namespace: default labels: app: webhook annotations: sidecar.istio.io/inject: &quot;false&quot; spec: replicas: 1 selector: matchLabels: app: webhook template: metadata: labels: app: webhook annotations: sidecar.istio.io/inject: &quot;false&quot; spec: containers: - name: webhook image: webhook:v1 imagePullPolicy: IfNotPresent volumeMounts: - name: webhook-certs mountPath: /certs readOnly: true volumes: - name: webhook-certs secret: secretName: webhook-certs </code></pre> <p>service.yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: webhook-svc namespace: default labels: app: webhook spec: ports: - port: 443 targetPort: 8443 selector: app: webhook </code></pre>
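<p>One more piece that often trips people up: the <code>MutatingWebhookConfiguration</code> also needs a matching <code>caBundle</code>. A minimal sketch, assuming the CSR above was signed by the cluster CA:</p> <pre><code># grab the cluster CA bundle from the kubeconfig
CA_BUNDLE=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

# paste this base64 value into clientConfig.caBundle of your webhook configuration
echo &quot;${CA_BUNDLE}&quot;
</code></pre>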
<p>I am trying to deploy a replica set of MongoDB in my kind Kubernetes cluster.</p> <p>Well, my first step is to run this command:</p> <pre><code>helm upgrade --install mongodb bitnami/mongodb --set mongodb.global.storageClass=standard --set mongodbRootPassword=root --set mongodbUsername=mongoUser --set mongodbPassword=mongoUser --set mongodbDatabase=articles --set mongodb.persistence.size=5Gi --set mongodb.service.type=NodePort --set mongodb.service.nodePort=30005 --set mongodb.metrics.enabled=false --set replicaSet.enabled=true --namespace replicaset-data </code></pre> <p>But I have two questions:</p> <ul> <li>How can I connect to this replica set?</li> <li>The next question is about the persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use it in this Helm chart.</li> </ul> <p>Could anyone help me with this?</p> <p>Thanks a lot!</p>
<blockquote> <p>How can I connect to this replica set?</p> </blockquote> <p>You can use a K8s Service to connect to these replicas: a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service type like LoadBalancer</a>, a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">port-forward</a>, or an Ingress.</p> <blockquote> <p>The next question is about the persistent data. I have extraMounts defined in my kind cluster, but I have not found a way to use it in this Helm chart.</p> </blockquote> <p>You can use a <strong>PV</strong> and <strong>PVC</strong> with the K8s cluster to persist the data of your database.</p> <p>Read more at: <a href="https://github.com/bitnami/charts/issues/3291" rel="nofollow noreferrer">https://github.com/bitnami/charts/issues/3291</a></p> <p>Parameters: <a href="https://artifacthub.io/packages/helm/bitnami/mongodb#persistence-parameters" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/mongodb#persistence-parameters</a></p>
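<p>As a quick sketch for connecting from your workstation, a port-forward is usually the simplest option; the service name, credentials and database below are taken from your <code>helm</code> command and may differ in your setup (check <code>kubectl get svc -n replicaset-data</code> first):</p> <pre><code># forward the MongoDB service to localhost
kubectl port-forward svc/mongodb 27017:27017 -n replicaset-data

# in another terminal, connect with the chart credentials
mongosh 'mongodb://mongoUser:mongoUser@127.0.0.1:27017/articles'
</code></pre>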
<p>I have deployed pihole on my k3s cluster using this helm chart <a href="https://github.com/MoJo2600/pihole-kubernetes" rel="nofollow noreferrer">https://github.com/MoJo2600/pihole-kubernetes</a>.<br /> (I used this <a href="https://www.jeffgeerling.com/blog/2020/raspberry-pi-cluster-episode-4-minecraft-pi-hole-grafana-and-more" rel="nofollow noreferrer">tutorial</a>)<br /> I now have my services but they don't have external IPs:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE pihole-web ClusterIP 10.43.58.197 &lt;none&gt; 80/TCP,443/TCP 11h pihole-dns-udp NodePort 10.43.248.252 &lt;none&gt; 53:30451/UDP 11h pihole-dns-tcp NodePort 10.43.248.144 &lt;none&gt; 53:32260/TCP 11h pihole-dhcp NodePort 10.43.96.49 &lt;none&gt; 67:30979/UDP 11h </code></pre> <p>I have tried to assign the IPs manually with this command:</p> <pre><code>kubectl patch svc pihole-dns-tcp -p '{&quot;spec&quot;:{&quot;externalIPs&quot;:[&quot;192.168.178.210&quot;]}}' </code></pre> <p>But when executing the command I'm getting this error:</p> <pre><code>Error from server (NotFound): services &quot;pihole-dns-tcp&quot; not found </code></pre> <p>Any ideas for a fix?<br /> Thank you in advance :)</p>
<ul> <li><p>Looks like &quot;pihole-dns-tcp&quot; is in a different namespace from the one where the patch command is being run.</p> </li> <li><p>As per the article you have shared, it seems the service <code>pihole-dns-tcp</code> is in the <code>pihole</code> namespace. So the command should be:</p> </li> </ul> <p><code>kubectl patch svc pihole-dns-tcp -n pihole -p '{&quot;spec&quot;:{&quot;externalIPs&quot;:[&quot;192.168.178.210&quot;]}}' </code></p>
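<p>If you are not sure which namespace the chart installed the services into, you can list them across all namespaces first:</p> <pre><code>kubectl get svc --all-namespaces | grep pihole
</code></pre>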
<p>I am running Spark Submit through a pod within cluster on Kubernetes with the following script:</p> <pre><code>/opt/spark/bin/spark-submit --master k8s://someCluster --deploy-mode cluster --name someName --class some.class --conf spark.driver.userClassPathFirst=true --conf spark.kubernetes.namespace=someNamespace --conf spark.kubernetes.container.image=someImage --conf spark.kubernetes.container.image.pullSecrets=image-pull-secret --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.kubernetes.authenticate.submission.oauthTokenFile=/var/run/secrets/kubernetes.io/serviceaccount/token --conf spark.kubernetes.authenticate.driver.serviceAccountName=someServiceAccount --conf spark.driver.port=7078 --conf spark.blockManager.port=7079 local:////someApp.jar </code></pre> <p>The script runs fine and the driver pod starts along with the auto-generated service, with ports 7078, 7079, and 4040 plus selector that matches the label that was added to the driver pod. However, the svc has no endpoints.</p> <p>The executor then starts but never succeeds due to the following error below:</p> <pre><code>Exception in thread &quot;main&quot; java.lang.reflect.UndeclaredThrowableException at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748) at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:283) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:272) at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala) Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:302) at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$3(CoarseGrainedExecutorBackend.scala:303) at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23) at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877) at scala.collection.immutable.Range.foreach(Range.scala:158) at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876) at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$1(CoarseGrainedExecutorBackend.scala:301) at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62) at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61) at java.base/java.security.AccessController.doPrivileged(Native Method) at java.base/javax.security.auth.Subject.doAs(Subject.java:423) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ... 
4 more Caused by: java.io.IOException: Failed to connect to drivername-svc.namespace.svc:7078 at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253) at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195) at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202) at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.net.UnknownHostException: drivername-svc.namespace.svc at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298) at java.base/java.net.InetAddress.getByName(InetAddress.java:1248) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:156) at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:153) at java.base/java.security.AccessController.doPrivileged(Native Method) at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:153) at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:41) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:61) at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:53) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:55) at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:31) at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:106) at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200) at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180) at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) </code></pre> <p>I also have a network policy which exposes the ports 7078 and 7079 through ingress and egress. Not sure what else I am missing.</p>
<p>Found out the endpoint wasn't added to the service because the driver pod has multiple containers, and one of them terminates early, causing the pod's readiness to be &quot;not ready&quot;; hence the service does not register the driver pod endpoint. Since there are no endpoints for the service, the executor pods trying to communicate with the service see an unknown-host exception.</p>
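<p>For anyone hitting the same symptom, the missing endpoint and the failing container are easy to spot with the commands below (service, pod and namespace names are placeholders):</p> <pre><code># the ENDPOINTS column stays at &lt;none&gt; while the driver pod is not Ready
kubectl get endpoints &lt;driver-svc-name&gt; -n &lt;namespace&gt;

# print per-container readiness to find the container that terminated early
kubectl get pod &lt;driver-pod-name&gt; -n &lt;namespace&gt; \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{&quot;\t&quot;}{.ready}{&quot;\n&quot;}{end}'
</code></pre>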
<p>I encountered the error below when executing <code>sudo kubeadm init</code>. Help me~~</p> <pre><code>$ sudo kubeadm init [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [WARNING SystemVerification]: missing optional cgroups: blkio error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR CRI]: container runtime is not running: output: E0605 10:35:34.973561 12491 remote_runtime.go:925] &quot;Status from runtime service failed&quot; err=&quot;rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; time=&quot;2022-06-05T10:35:34+09:00&quot; level=fatal msg=&quot;getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; , error: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre>
<p>A workaround was found here: <a href="https://github.com/containerd/containerd/issues/4581" rel="noreferrer">https://github.com/containerd/containerd/issues/4581</a></p> <pre><code>rm /etc/containerd/config.toml systemctl restart containerd kubeadm init </code></pre>
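<p>This works because some containerd packages ship a <code>config.toml</code> that disables the CRI plugin kubeadm needs; deleting the file makes containerd fall back to its built-in defaults. An arguably cleaner variant is to regenerate a full default config instead of removing it:</p> <pre><code>containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo kubeadm init
</code></pre>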
<p>I'm planning to deploy a small Kubernetes cluster (3x 32GB Nodes). I'm not experienced with K8S and I need to come up with some kind of resilient SQL database setup and CockroachDB seems like a great choice.</p> <p>I wonder if it's possible to <em>relatively easy</em> deploy a configuration, where some CockroachDB instances (nodes?) are living inside the K8S cluster, but at the same time some other instances live outside the K8S cluster (2 on-premise VMs). All those CockroachDB would need to be considered a single CockroachDB cluster. It might be also worth noting that Kubernetes would be hosted in the cloud (eg. Linode).</p> <p>By <em>relatively easy</em> I mean:</p> <ul> <li><em>simplish</em> to deploy</li> <li>requiring little maintenance</li> </ul> <p><a href="https://mermaid.live/edit#pako:eNqNkU1vwjAMhv9K5DM9tGwS6nFjp62bNKReCIcsMTRSk1SpI4QQ_51klH10iOGT5fd9Yjveg3QKoYR167ayEZ7Yyzu3LEYfPjZedA17bENP6JfPswWTp3x1sijtUZJ29gv6BSbgNT7P8m8xxfzhRwGtuoYWY7S4GZ2O0ekF9DMZLfxms86j0T3etGZd5cu6Yvlq3O_uv1Hrqkhk8Ye8vz7pcBGWZVnqfqk4fFNUzwWYgEFvhFbx3Pskc6AGDXIoY9rqTUMcuD1EY-iUIHxSmpyHci3aHicgArnFzkooyQc8m-ZaxGXM4DocAUHgpow" rel="nofollow noreferrer"><img src="https://mermaid.ink/img/pako:eNqNkU1vwjAMhv9K5DM9tGwS6nFjp62bNKReCIcsMTRSk1SpI4QQ_51klH10iOGT5fd9Yjveg3QKoYR167ayEZ7Yyzu3LEYfPjZedA17bENP6JfPswWTp3x1sijtUZJ29gv6BSbgNT7P8m8xxfzhRwGtuoYWY7S4GZ2O0ekF9DMZLfxms86j0T3etGZd5cu6Yvlq3O_uv1Hrqkhk8Ye8vz7pcBGWZVnqfqk4fFNUzwWYgEFvhFbx3Pskc6AGDXIoY9rqTUMcuD1EY-iUIHxSmpyHci3aHicgArnFzkooyQc8m-ZaxGXM4DocAUHgpow" alt="" /></a></p>
<p>Yes, it's straightforward to do a multi-cloud deployment of CRDB. This is one of the great advantages of CockroachDB. Simply run the <code>cockroach start</code> command on each of the VMs/pods running cockroachdb and they will form a cluster.</p> <p>See this blog post/tutorial for more info: <a href="https://www.cockroachlabs.com/blog/multi-cloud-deployment/" rel="noreferrer">https://www.cockroachlabs.com/blog/multi-cloud-deployment/</a></p>
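<p>As a rough sketch, every node (K8s pod or on-premise VM) joins the same cluster by pointing <code>--join</code> at a few of the other nodes; the addresses below are placeholders and must be reachable from all nodes in both directions:</p> <pre><code># run on each CockroachDB node
cockroach start \
  --certs-dir=certs \
  --advertise-addr=&lt;this-node-address&gt; \
  --join=&lt;node1-address&gt;,&lt;node2-address&gt;,&lt;node3-address&gt; \
  --listen-addr=0.0.0.0:26257

# once enough nodes are up, initialize the cluster from any one of them
cockroach init --certs-dir=certs --host=&lt;node1-address&gt;:26257
</code></pre>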
<p>We have a service running behind an istio virtual service. Occasionally the request times out at the client side while everything is normal inside the service.</p> <p>To solve this problem, we want to record the time cost of all queries, and use this data to check the time cost of a specific query (by specifying filter fields included in the query) when timeouts happen again.</p> <p>The best way to do that would be something like a plugin for the istio virtual service, but it seems istio does not provide such a thing. How can we do that?</p> <p><strong>software info</strong><br /> istio v1.14<br /> kubernetes v1.20</p>
<p>You can check istio's standard metrics and its Prometheus integration.</p> <p><a href="https://istio.io/latest/docs/reference/config/metrics/#:%7E:text=For%20HTTP%2C%20HTTP%2F2%2C,measures%20the%20duration%20of%20requests" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/metrics/#:~:text=For%20HTTP%2C%20HTTP%2F2%2C,measures%20the%20duration%20of%20requests</a></p> <p>The standard metrics will be helpful for finding the timeout issue, latency and other details of service-to-service communication, and ingress-to-service as well if exposed.</p> <p>You can get P50, P90 and other percentile metrics. <a href="https://i.stack.imgur.com/3lbcT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3lbcT.png" alt="enter image description here" /></a></p> <p>I would recommend checking out and installing istio's <strong>service dashboard</strong>: <a href="https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengistio-intro-topic.htm" rel="nofollow noreferrer">https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengistio-intro-topic.htm</a></p> <p>Tracing might also be helpful to follow a request and see the time it takes to process.</p> <p><a href="https://istio.io/latest/docs/tasks/observability/distributed-tracing/jaeger/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/observability/distributed-tracing/jaeger/</a></p>
<p>I am trying to cache response of my API but I am getting <code>X-Cache-Status: MISS</code> every time. My Api return a text/plain response('hello' &amp; 'bye'). I don't know what I am missing, also the setup is on minikube.</p> <p>My Ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test labels: name: test annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/cache-enable: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-buffering: &quot;on&quot; nginx.ingress.kubernetes.io/configuration-snippet: | proxy_cache mycache; proxy_cache_valid 404 5m; proxy_ignore_headers Cache-Control; add_header X-Cache-Status $upstream_cache_status; spec: rules: - host: myhost.local http: paths: - path: /hello pathType: Prefix backend: service: name: hello-api port: number: 8080 - path: /bye pathType: Prefix backend: service: bye name: bye-api port: number: 8081 </code></pre> <p>ingress config map</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: ingress-nginx-controller namespace: ingress-nginx data: http-snippet: &quot;proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=mycache:10m use_temp_path=off max_size=4g inactive=60m;&quot; </code></pre> <p>For some reason <code>Cache-Control</code> is set to <code>Private</code>.</p> <p>I tried solutions mentioned here, but no success.</p> <p><a href="https://stackoverflow.com/questions/66280437/how-to-properly-configure-ingress-cache-to-get-it-working">How to properly configure ingress cache to get it working?</a></p> <p><a href="https://stackoverflow.com/questions/62245119/ingress-nginx-cache">Ingress nginx cache</a></p>
<p>Please show the output of <code>kubectl -n <em>ingresscontrollernamespace</em> describe pod <em>ingresscontrollerpodname</em></code></p> <p><a href="https://kubernetes.slack.com/archives/CANQGM8BA/p1654524670938769" rel="nofollow noreferrer">https://kubernetes.slack.com/archives/CANQGM8BA/p1654524670938769</a></p>
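<p>While gathering that, it is also worth checking whether the cache directives actually made it into the generated nginx configuration (namespace and pod name below are placeholders):</p> <pre><code>kubectl -n ingress-nginx get pods                      # find the controller pod
kubectl -n ingress-nginx describe pod &lt;ingress-controller-pod&gt;
kubectl -n ingress-nginx exec &lt;ingress-controller-pod&gt; -- \
  grep -n 'proxy_cache' /etc/nginx/nginx.conf
</code></pre>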
<p>I created a job and reran it several times. When I delete this job, only the latest pod is deleted.</p> <p>How can I delete all of these pods?</p>
<p><strong>For CronJob</strong></p> <p>You can use <strong>successfulJobsHistoryLimit</strong> to manage the pod count; if you set it to <strong>0</strong>, the pod will be removed as soon as it completes its execution successfully.</p> <pre><code>successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 </code></pre> <p>Read more at: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits</a></p> <p>GCP ref: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs#history-limit" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs#history-limit</a></p> <p><strong>For Job</strong></p> <p>If you are using a Job, not a CronJob, you can use <code>ttlSecondsAfterFinished</code>, which deletes the Job's pods automatically after the set number of seconds; set it accordingly, keeping some buffer.</p> <pre><code>ttlSecondsAfterFinished: 100 </code></pre> <p>This will solve your issue.</p> <p>Example: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically</a></p> <p><strong>Extra</strong>:</p> <p>You can also delete those pods with a simple command, using the labels on the pods or on the job, but that is a one-time solution:</p> <pre><code>kubectl delete pods -l &lt;labels&gt; -n &lt;namespace&gt; </code></pre>
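<p>For that one-time cleanup, Jobs label their pods with <code>job-name</code>, so something like this should catch all the pods left over from the reruns (job and namespace names are placeholders):</p> <pre><code>kubectl delete pods -l job-name=&lt;your-job-name&gt; -n &lt;namespace&gt;
</code></pre>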
<p>I'm quite new to GitLab CI/CD. I have created a simple nginx deployment including a namespace, configmap, svc and deployment. The configmap contains a simple custom index.html with a CI/CD variable:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: index-html-configmap namespace: lazzio data: index.html: | &lt;html&gt; &lt;h1&gt;Welcomee&lt;/h1&gt; &lt;/br&gt; &lt;h1&gt;Hi! This is a configmap Index file for test-tepl+ingress &lt;/h1&gt; &lt;h2&gt; and this ---&gt; $PW_TEST &lt;--- is a password from gitlab cicd variable&lt;/h2&gt; &lt;/html&gt; </code></pre> <p>The custom variable PW_TEST is set under the CI/CD variables section in the UI, without a protected branch.</p> <pre><code>#pipeline : stages: - build variables: ENV_NAME: value: &quot;int&quot; 1st-build: environment: name: ${ENV_NAME} variables: PW_TEST: $PW_TEST image: alpine stage: build before_script: - apk add bash - apk add curl script: - echo $PW_TEST - curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot; - install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f nm.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f index.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f depl.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f svc.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f test_ingress_int.yml </code></pre> <p>But when I log into the cluster and make a curl request, I get the same index file as defined within index.yml (the variable is not substituted).</p> <p>I know it's a stupid, useless variable in the index; I'm just testing whether a value stored as a custom CI/CD variable gets passed into the deployments on k3s. In another pipeline that installs e.g. a database or a k3s cluster via Ansible, passwords or other secrets are needed, so I want to use CI/CD variables instead of clear-text secrets in files within the GitLab repository.</p> <p>Thanks for any hint.</p>
<p>You actually have a few ways to do it.</p> <ol> <li><p>Personally I like <a href="https://www.baeldung.com/linux/envsubst-command" rel="nofollow noreferrer">envsubst</a>; it's easy to use and lightweight. But you have to install it (e.g. in the GitLab runner image) to avoid downloading it each time the pipeline runs. A minimal sketch is shown after this list.</p> </li> <li><p>There is also a nice/simple solution using a shell script to basically just replace the string with the variable's value. The disadvantage here is that you have to write sanity checks on your own.</p> <pre><code> sed "s/\${PY_VERSION}/${PY_VERSION}/g; s/\${JQ_VERSION}/${JQ_VERSION}/g" "${FILE}.yaml.in" > "${FILE}.yaml" </code></pre> </li> <li><p>In complicated dynamic deployments (if you have a huge number of variables) you can use helm to substitute the variables, with the debug option. The disadvantage here is that you end up with basically all manifest declarations in one file.</p> <pre><code>helm --values ci-variables/api-variables.yaml --debug template ./deployment/api-name > apply_file.yaml</code></pre> </li> </ol>
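<p>A minimal sketch of option 1 inside the build job from the question, assuming <code>index.yml</code> contains the <code>$PW_TEST</code> placeholder as shown in the ConfigMap:</p> <pre><code># alpine: the gettext package provides envsubst
apk add gettext

# substitute only $PW_TEST, then apply the rendered manifest
envsubst '$PW_TEST' &lt; index.yml &gt; index.rendered.yml
kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f index.rendered.yml
</code></pre>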
<p>In docker, we can use</p> <pre><code>docker manifest inspect </code></pre> <p>to check the image manifest.</p> <p>For nerdctl and ctr/crictl, there are no manifest-related commands.</p> <p><code>nerdctl image inspect</code> only shows some basic information about the image.</p>
<ol> <li>check sha of the image by <code>nerdctl inspect</code></li> </ol> <pre><code> &quot;Id&quot;: &quot;sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165&quot;, &quot;RepoTags&quot;: [ &quot;k8s.gcr.io/pause:3.7&quot; ], &quot;RepoDigests&quot;: [ &quot;k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c&quot; ], </code></pre> <ol start="2"> <li>check RepoDigests sha and cat the file</li> </ol> <pre><code> find / -name bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c [root@paco ~]# cat /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c { &quot;schemaVersion&quot;: 2, &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.list.v2+json&quot;, &quot;manifests&quot;: [ { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 526, &quot;digest&quot;: &quot;sha256:f81611a21cf91214c1ea751c5b525931a0e2ebabe62b3937b6158039ff6f922d&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;amd64&quot;, &quot;os&quot;: &quot;linux&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 526, &quot;digest&quot;: &quot;sha256:be47ea44f3699337ec9aced948758293f76336bba0900d383623d85c4a88e2d4&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;arm&quot;, &quot;os&quot;: &quot;linux&quot;, &quot;variant&quot;: &quot;v7&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 526, &quot;digest&quot;: &quot;sha256:daff8f62d8a9446b88b0806d7f2bece15c5182b4bd9a327597ecde60cf19efb1&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;arm64&quot;, &quot;os&quot;: &quot;linux&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 526, &quot;digest&quot;: &quot;sha256:2524e66c23cb024134ec213032437b0af1d239de37f70ab6064e1d31be62da61&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;ppc64le&quot;, &quot;os&quot;: &quot;linux&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 526, &quot;digest&quot;: &quot;sha256:779c511730521a030b80471064b9a2b4f531575144574882148bf25716031a0f&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;s390x&quot;, &quot;os&quot;: &quot;linux&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 1158, &quot;digest&quot;: &quot;sha256:778127f16f3d75c810f1928dc6bb474eb22b55ff6e0d703eaff251faa8ab5eec&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;amd64&quot;, &quot;os&quot;: &quot;windows&quot;, &quot;os.version&quot;: &quot;10.0.17763.2686&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 1158, &quot;digest&quot;: &quot;sha256:54cd03c37f2e87826e4c7f1baf500ac6e70fa5fcebadd6254789c80f9268a2f0&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;amd64&quot;, &quot;os&quot;: &quot;windows&quot;, &quot;os.version&quot;: &quot;10.0.19042.1586&quot; } }, { &quot;mediaType&quot;: &quot;application/vnd.docker.distribution.manifest.v2+json&quot;, &quot;size&quot;: 1158, 
&quot;digest&quot;: &quot;sha256:85cfebc79dccc7a0e56680778a614b29a0b1c2ae98d4b1efc746764c04d5656c&quot;, &quot;platform&quot;: { &quot;architecture&quot;: &quot;amd64&quot;, &quot;os&quot;: &quot;windows&quot;, &quot;os.version&quot;: &quot;10.0.20348.587&quot; } } ] } </code></pre> <p>To summarize, you can go directly to <code>/var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/</code> to find the manifest on your node.</p>
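<p>Putting the two steps together (the digest is the one from the example above; the path assumes containerd's default root directory):</p> <pre><code># digest is the hex part of the RepoDigests entry shown by 'nerdctl image inspect'
DIGEST=bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
cat /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/${DIGEST}
</code></pre>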
<p>I am using this yaml file for deployment - <a href="https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-dbless.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-dbless.yaml</a></p> <p>I'm wondering if I can install Kong with a service of type NodePort? My enterprise cloud isn't allowing me to create a resource of type LoadBalancer.</p>
<p>Yes, you can do it; just change the service type to <strong>NodePort</strong>.</p> <p>There is also a discussion thread in the Kong community: <a href="https://discuss.konghq.com/t/expose-kong-on-nodeport/8497" rel="nofollow noreferrer">https://discuss.konghq.com/t/expose-kong-on-nodeport/8497</a></p> <pre><code>--- apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-type: nlb name: kong-proxy namespace: kong spec: ports: - name: proxy port: 80 protocol: TCP targetPort: 8000 - name: proxy-ssl port: 443 protocol: TCP targetPort: 8443 selector: app: ingress-kong type: NodePort --- </code></pre> <p>You might need to take care of pod scheduling onto the same node each time, with a nodeSelector or affinity, if you are using a node IP in DNS or routing somewhere.</p> <p>You can also check this document on running an ingress controller (such as KIC) on kind: <a href="https://kind.sigs.k8s.io/docs/user/ingress/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/ingress/</a></p>
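<p>If you have already applied the all-in-one manifest, you can also just patch the existing service instead of editing the YAML:</p> <pre><code>kubectl patch svc kong-proxy -n kong -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}'
kubectl get svc kong-proxy -n kong    # note the allocated node ports
</code></pre>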
<p>When deleting a pod or deploying a new version of a pod kubernetes should theoretically send a <code>SIGTERM</code> to the process running and then wait <code>gracePeriodSeconds</code> (30 by default) seconds before it sends a <code>SIGKILL</code>.</p> <p>I have encountered the problem that this first <code>SIGTERM</code> never seems to be sent. The default settings in my cluster were never changed (kill is sent as expected after 30 seconds), so my assumption is that there might be something wrong, permissions or similar, with my Dockerfile (see below).</p> <p>I've excluded there being an error in the graceful shutdown logic catching the <code>SIGTERM</code> in the executable by <code>kubectl exec</code>-ing into the pod and using <code>kill -15</code> on the process which works as expected.</p> <p>The Dockerfile looks as follows:</p> <pre><code>FROM debian:bullseye-slim AS app ARG USERNAME=app ARG USER_UID=1000 ARG USER_GID=$USER_UID RUN apt update &amp;&amp; apt install -y libssl-dev zstd ca-certificates pkg-config RUN groupadd --gid $USER_GID $USERNAME \ &amp;&amp; useradd --uid $USER_UID --gid $USER_GID -m $USERNAME WORKDIR /home/$USERNAME ARG RELEASE_DIR ARG SERVICE USER $USERNAME COPY $RELEASE_DIR . EXPOSE 8080 ENV CMD=./${SERVICE} CMD ${CMD} </code></pre> <p>Is there something blatantly wrong here? Or does kubernetes require some additional config to actually send the termination signal as expected?</p>
<p>For termination to work correctly, you need to ensure your application is the main container process. With the <a href="https://docs.docker.com/engine/reference/builder/#shell-form-entrypoint-example" rel="noreferrer">shell form</a> <code>CMD</code>, the command your container runs is <code>/bin/sh -c '${CMD}'</code>, and depending on what's in that environment variable and what <code>/bin/sh</code> actually is, that shell wrapper might keep running as the main container process and intercept the termination signal.</p> <p>The same mechanisms apply in both plain Docker and Kubernetes and you should see a similar issue if you <code>docker stop</code> the container locally. This may be easier to debug and iterate on.</p> <p>The easiest way to work around this is to use the <a href="https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example" rel="noreferrer">exec form</a> of <code>CMD</code> that looks like a JSON array. Since this won't run a shell, it also can't do variable expansions, and you'll have to spell out what you want the command to actually be</p> <pre class="lang-bash prettyprint-override"><code>CMD [&quot;./service&quot;] </code></pre> <p>This is still easy to override at runtime, and you in fact don't need that <code>CMD</code> at all:</p> <pre class="lang-bash prettyprint-override"><code># instead of `docker run -e CMD='...'` docker run --rm my-image \ ls -l /home/app </code></pre> <pre class="lang-yaml prettyprint-override"><code># or in a Kubernetes pod spec command: - /home/app/another_app args: - --option </code></pre> <p>You can probably similarly remove pretty much all of the <code>ARG</code> declarations in the Dockerfile (the name or numeric uid of the container user shouldn't matter, for example, and the compiled application filename and host build path are usually fixed) which will simplify the setup.</p>
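<p>A quick way to confirm the fix is to check what PID 1 actually is inside the running container; with the exec-form <code>CMD</code> it should be your binary rather than a shell (container/pod names are placeholders):</p> <pre><code># plain Docker
docker exec &lt;container&gt; cat /proc/1/cmdline | tr '\0' ' '; echo

# Kubernetes
kubectl exec &lt;pod&gt; -- cat /proc/1/cmdline | tr '\0' ' '; echo
</code></pre>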
<pre><code>apiVersion: v1 kind: Pod metadata: name: pv-pod namespace: auth spec: containers: - name: busybox image: busybox #************** HERE is the command! command: ['sh', '-c', 'while true; do echo date &gt; /output/output.log; sleep 5; done'] volumeMounts: - name: pv-storage mountPath: /output volumes: - name: pv-storage persistentVolumeClaim: claimName: host-storage-pvc </code></pre> <p>In this command I need to write &quot;success! Wed Jun 8 20:28:01 WAST 2022&quot;, i.e. the current timestamp, into the file.</p> <pre><code>command: ['sh', '-c', 'while true; do echo success! $date &gt;&gt; /output/output.log; sleep 5; done'] </code></pre> <p>But it keeps writing 'success date'.</p>
<p>You could use <a href="https://unix.stackexchange.com/questions/440088/what-is-command-substitution-in-a-shell">command substitution</a> for this. It works the same in sh or bash.</p> <p>Additionally, you may need to escape the dollar sign, since <code>$()</code> has a <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/#define-an-environment-dependent-variable-for-a-container" rel="nofollow noreferrer">special meaning in Kubernetes manifests</a>.</p> <pre class="lang-bash prettyprint-override"><code>echo &quot;success $$(date)&quot; &gt;&gt; ... </code></pre>
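<p>So the loop from the question would look roughly like this when tested in a plain shell; inside the pod spec, use the doubled dollar sign from above if the manifest's <code>$(VAR)</code> expansion gets in the way:</p> <pre><code># plain-shell test of the loop (the output path here is just an example)
while true; do echo &quot;success! $(date)&quot; &gt;&gt; /tmp/output.log; sleep 5; done
</code></pre>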
<p>I have the following <code>MutatingWebhookConfiguration</code></p> <pre><code>apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: example-webhook webhooks: - name: example-webhook.default.svc.cluster.local admissionReviewVersions: - &quot;v1beta1&quot; sideEffects: &quot;None&quot; timeoutSeconds: 30 objectSelector: matchLabels: example-webhook-enabled: &quot;true&quot; clientConfig: service: name: example-webhook namespace: default path: &quot;/mutate&quot; caBundle: &quot;LS0tLS1CR...&quot; rules: - operations: [ &quot;CREATE&quot; ] apiGroups: [&quot;&quot;] apiVersions: [&quot;v1&quot;] resources: [&quot;pods&quot;] </code></pre> <p>I want to inject the <code>webhook</code> pod in an <code>istio</code> enabled namespace with <code>istio</code> having strict TLS mode on.</p> <p>Therefore, (I thought) TLS should not be needed in my <code>example-webhook</code> service so it is crafted as follows:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: example-webhook namespace: default spec: selector: app: example-webhook ports: - port: 80 targetPort: webhook name: webhook </code></pre> <p>However when creating a <code>Pod</code> (that does indeed trigger the webhook) I get the following error:</p> <pre><code>β–Ά k create -f demo-pod.yaml Error from server (InternalError): error when creating &quot;demo-pod.yaml&quot;: Internal error occurred: failed calling webhook &quot;example-webhook.default.svc.cluster.local&quot;: Post &quot;https://example-webhook.default.svc:443/mutate?timeout=30s&quot;: no service port 443 found for service &quot;example-webhook&quot; </code></pre> <p>Can't I configure the webhook not to be called on <code>443</code> but rather on <code>80</code>? Either way TLS termination is done by the <code>istio</code> sidecar.</p> <p>Is there a way around this using <code>VirtualService</code> / <code>DestinationRule</code>?</p> <p><strong>edit</strong>: on top of that, why is it trying to reach the service in the <code>example-webhook.default.svc</code> endpoint? (while it should be doing so in <code>example-webhook.default.svc.cluster.local</code>) ?</p> <h3>Update 1</h3> <p>I have tried to use <code>https</code> as follows:</p> <p>I have created a certificate and private key, using istio's CA.</p> <p>I can verify that my DNS names in the cert are valid as follows (from another pod)</p> <pre><code>echo | openssl s_client -showcerts -servername example-webhook.default.svc -connect example-webhook.default.svc:443 2&gt;/dev/null | openssl x509 -inform pem -noout -text </code></pre> <pre><code>... Subject: C = GR, ST = Attica, L = Athens, O = Engineering, OU = FOO, CN = *.cluster.local, emailAddress = [email protected] ... X509v3 extensions: X509v3 Subject Alternative Name: DNS:*.default.svc.cluster.local, DNS:example-webhook, DNS:example-webhook.default.svc ... 
</code></pre> <p>but now pod creation fails as follows:</p> <pre><code>β–Ά k create -f demo-pod.yaml Error from server (InternalError): error when creating &quot;demo-pod.yaml&quot;: Internal error occurred: failed calling webhook &quot;example-webhook.default.svc.cluster.local&quot;: Post &quot;https://example-webhook.default.svc:443/mutate?timeout=30s&quot;: x509: certificate is not valid for any names, but wanted to match example-webhook.default.svc </code></pre> <h3>Update 2</h3> <p>The fact that the certs the webhook pod is running with were appropriately created using the istio CA cert is also validated.</p> <pre><code>curl --cacert istio_cert https://example-webhook.default.svc Test </code></pre> <p>where <code>istio_cert</code> is the file containing istio's CA certificate.</p> <p>What is going on?</p>
<p>Did you try adding the <strong>port</strong> attribute in your MutatingWebhookConfiguration?</p> <pre class="lang-yaml prettyprint-override"><code>clientConfig: service: name: example-webhook namespace: default path: &quot;/mutate&quot; port: 80 </code></pre>
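<p>For context, <code>clientConfig.service.port</code> defaults to 443 when it is omitted, which is why the API server was looking for port 443 on the service. After adding <code>port: 80</code> you can double-check what the service actually exposes:</p> <pre><code>kubectl get svc example-webhook -n default -o jsonpath='{.spec.ports[*].port}'
</code></pre>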
<p>I'm trying to install Traefik on a K8s cluster using ArgoCD to deploy the official Helm chart. But I also need it to us an additional &quot;values.yml&quot; file. When I try to specify in the Application yaml file what additional values file to use, it fails to file not found for it.</p> <p>Here is what I'm using:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: argo-traefik-chart namespace: argocd spec: project: default source: path: traefik repoURL: https://github.com/traefik/traefik-helm-chart.git targetRevision: HEAD helm: valueFiles: - /traefik-values.yml destination: server: https://kubernetes.default.svc namespace: 2195-leaf-dev-traefik syncPolicy: syncOptions: - CreateNamespace=true automated: prune: true selfHeal: true </code></pre> <p>Here is the traefik-value.yml file.</p> <pre><code>additionalArguments: # Configure your CertificateResolver here... # # HTTP Challenge # --- # Generic Example: # - --certificatesresolvers.generic.acme.email=your-email@example.com # - --certificatesresolvers.generic.acme.caServer=https://acme-v02.api.letsencrypt.org/directory # - --certificatesresolvers.generic.acme.httpChallenge.entryPoint=web # - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json # # Prod / Staging Example: # - --certificatesresolvers.staging.acme.email=your-email@example.com # - --certificatesresolvers.staging.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory # - --certificatesresolvers.staging.acme.httpChallenge.entryPoint=web # - --certificatesresolvers.staging.acme.storage=/ssl-certs/acme-staging.json # - --certificatesresolvers.production.acme.email=your-email@example.com # - --certificatesresolvers.production.acme.caServer=https://acme-v02.api.letsencrypt.org/directory # - --certificatesresolvers.production.acme.httpChallenge.entryPoint=web # - --certificatesresolvers.production.acme.storage=/ssl-certs/acme-production.json # # DNS Challenge # --- # Cloudflare Example: # - --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare # - --certificatesresolvers.cloudflare.acme.email=your-email@example.com # - --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1 # - --certificatesresolvers.cloudflare.acme.storage=/ssl-certs/acme-cloudflare.json # # Generic (replace with your DNS provider): # - --certificatesresolvers.generic.acme.dnschallenge.provider=generic # - --certificatesresolvers.generic.acme.email=your-email@example.com # - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json logs: # Configure log settings here... general: level: DEBUG ports: # Configure your entrypoints here... web: # (optional) Permanent Redirect to HTTPS redirectTo: websecure websecure: tls: enabled: true # (optional) Set a Default CertResolver # certResolver: cloudflare #env: # Set your environment variables here... # # DNS Challenge Credentials # --- # Cloudflare Example: # - name: CF_API_EMAIL # valueFrom: # secretKeyRef: # key: email # name: cloudflare-credentials # - name: CF_API_KEY # valueFrom: # secretKeyRef: # key: apiKey # name: cloudflare-credentials # Just to do it for now envFrom: - secretRef: name: traefik-secrets # Disable Dashboard ingressRoute: dashboard: enabled: true # Persistent Storage persistence: enabled: true name: ssl-certs size: 1Gi path: /ssl-certs # deployment: # initContainers: # # The &quot;volume-permissions&quot; init container is required if you run into permission issues. 
# # Related issue: https://github.com/containous/traefik/issues/6972 # - name: volume-permissions # image: busybox:1.31.1 # command: [&quot;sh&quot;, &quot;-c&quot;, &quot;chmod -Rv 600 /ssl-certs/*&quot;] # volumeMounts: # - name: ssl-certs # mountPath: /ssl-certs # Set Traefik as your default Ingress Controller, according to Kubernetes 1.19+ changes. ingressClass: enabled: true isDefaultClass: true </code></pre> <p>The traefik-values.yml file is in the same sub-directory as this file. I fire this off with <code>kubectl apply -f</code>, but when I go to look at it in the Argo GUI, it shows an error. I'll paste the entire thing below, but it looks like the important part is this:</p> <p><code>failed exit status 1: Error: open .traefik-values.yml: no such file or directory</code></p> <p>It's putting a period before the name of the file. I tried different ways of specifying the file: .traefik-values.yml and ./treafik-values.yml. Those get translated to:</p> <p><code>: Error: open .traefik/.traefik-values.yml: no such file or directory</code></p> <p>When I do a helm install using the exact same traefik-values.yml file, I get exactly what I expect. And when I run the Argo without the alternate file, it deploys, but without the needed options of course.</p> <p>Any ideas?</p>
<p>I assume this is because Argo will look for <code>traefik-values.yml</code> file in the <code>repoURL</code> (so, not in the location where <code>Application</code> file is), and it obviously doesn't exist there.</p> <p>You can check more about this issue <a href="https://github.com/argoproj/argo-cd/issues/2789" rel="nofollow noreferrer">here</a>. There you can also find a couple of proposed solutions. Some of them are:</p> <ul> <li>a plugin to do a helm template with your values files</li> <li>a custom CI pipeline to take the content of your values.yaml file and add it to Application manifest</li> <li>putting values directly in Application manifest, skipping the values.yaml file altogether</li> <li>having a chart that depends on a chart like <a href="https://mixi-developers.mixi.co.jp/argocd-with-helm-fee954d1003c" rel="nofollow noreferrer">here</a> (I don't like this one as it is downloading twice from two different locations, plus <a href="https://github.com/argoproj/argo-cd/issues/2789#issuecomment-790720510" rel="nofollow noreferrer">this</a> issue)</li> <li>play around with kustomize</li> <li>or wait for ArgoCD 2.5, it seems it will include a native solution out of the box, according to mentioned github issue</li> </ul>
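<p>As a quick sanity check outside Argo CD, you can render the chart with your values file locally; this is roughly what the plugin / custom-pipeline options end up doing (paths below are assumptions based on your layout):</p> <pre><code>git clone https://github.com/traefik/traefik-helm-chart.git
helm template traefik ./traefik-helm-chart/traefik -f ./traefik-values.yml &gt; rendered.yaml
kubectl apply -n 2195-leaf-dev-traefik -f rendered.yaml
</code></pre>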
<p>Given the port forward object</p> <pre><code>from kubernetes.stream import stream, portforward pf = portforward( k8s_client_v1.connect_get_namespaced_pod_portforward, pod_name, name_space, ports=port ) pf.socket(port) </code></pre> <p><code>pf</code> is a socket object from the <code>AF_UNIX</code> family.</p> <p>I need a better understanding of whether another <code>AF_INET</code> family socket has to be instantiated in order to achieve the same functionality as the <code>kubectl</code> client has, e.g.: <code>kubectl port-forward $pod_name $port</code></p> <p>I would appreciate it if someone could share any snippets of this type of implementation.</p> <p>At the moment, creating the port forward object was inspired by the example <a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_portforward.py#L176" rel="nofollow noreferrer">here</a></p>
<p>Thanks to your code snippet, I could get this to work for me - which means that your code basically works: I have successfully used it to send TCP data on a custom port to a kubernetes service (haven't tried UDP, though). It seems to work quite similarly to, but not exactly the same as, <code>kubectl port-forward</code>.</p> <p>The main difference is that it doesn't actually open an <code>AF_INET</code> socket on your system, but an <code>AF_UNIX</code> socket, which means that it is basically addressed via a file name instead of a tuple of an IP address and port number. So it really depends on what you are trying to achieve.</p> <p>If you just want to write to a socket from a Python script that is connected to a specific port of a service: This is the solution. It doesn't really make a difference for you that the kubernetes API provides you an <code>AF_UNIX</code> socket.</p> <p>However, if you want to open a local port that connects to the kubernetes service port (e.g. use local tools to connect to a kubernetes service like a CLI based database management tool), this is only half of what you want to achieve. After some time playing around, including trying hacks like <code>socket.fromfd(unix_sock.fileno(), socket.AF_INET, socket.SOCK_STREAM)</code>, I came to the conclusion that a straightforward approach to &quot;convert&quot; the unix socket to an internet socket isn't possible.</p> <p>A solution I would suggest is to implement simple port forwarding in Python. That would mean running a thread that opens an internet socket on the local port you'd like to use, receives data on it, and simply forwards it to the unix socket. It doesn't feel very elegant and probably isn't very efficient, either, but it should do the job.</p> <p>Another alternative would be using an external tool like <code>socat</code>, if available, to connect the two sockets.</p>
<p>I have a Discord channel. I want <code>Google Cloud Monitoring</code> to send notifications to this channel via <code>webhook</code>.</p> <p>Help me please T.T</p>
<p>You can use a webhook notification channel to send the notification to your Discord.</p> <p><a href="https://cloud.google.com/monitoring/support/notification-options#webhooks" rel="nofollow noreferrer">https://cloud.google.com/monitoring/support/notification-options#webhooks</a></p> <p><a href="https://i.stack.imgur.com/Co3yp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Co3yp.png" alt="enter image description here" /></a></p> <p><strong>Option 2</strong></p> <p>If Stackdriver can't send the webhook to Discord directly, you can simply leverage Pub/Sub with a Cloud Function; that would be an easy and cost-efficient option.</p> <p><a href="https://i.stack.imgur.com/9sakk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9sakk.png" alt="enter image description here" /></a></p> <p>The Cloud Monitoring alert will push the payload to <strong>Pub/Sub</strong>, and you write a simple Cloud Function which consumes the Pub/Sub payload and <strong>sends</strong> the <strong>webhook</strong> to Discord.</p> <p>So the flow will be something like: <strong><code>Stackdriver monitoring &gt; Pub/Sub &gt; CloudFunction (webhook hit to discord) &gt; Discord</code></strong></p> <p><strong>Cloud function code</strong></p> <pre><code>const fetch = require('node-fetch'); exports.helloPubSub = (message, context) =&gt; { const name = message.data ? Buffer.from(message.data, 'base64').toString() : 'World'; console.log(`Hello, ${name}!`); const URL = &quot;Discord webhook URL&quot;; fetch(URL, { method: 'POST', body: JSON.stringify({ content: name }), headers: { 'Content-Type': 'application/json' } }) .then(res =&gt; console.log(res)) .catch(err =&gt; console.error(err)); }; </code></pre> <p><a href="https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/HEAD/functions/helloworld/index.js" rel="nofollow noreferrer">Code ref</a> &amp; <a href="https://cloud.google.com/functions/docs/tutorials/pubsub" rel="nofollow noreferrer">Second Gen tutorial</a></p>
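<p>For option 2, the wiring could look roughly like this (topic and function names are placeholders; the alerting policy then uses a Pub/Sub notification channel pointing at the topic):</p> <pre><code># create the topic the alerting policy will publish to
gcloud pubsub topics create monitoring-alerts

# deploy the function above, triggered by that topic
gcloud functions deploy alert-to-discord \
  --runtime nodejs16 \
  --trigger-topic monitoring-alerts \
  --entry-point helloPubSub
</code></pre>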
<p>When we run <code>kubectl get pod</code>, it lists the count of containers running inside a pod and the restart count, so I am not sure which container was restarted. I either need to log in to the UI or use <code>kubectl describe pods</code>.</p> <pre><code>NAME READY STATUS RESTARTS AGE test-pod 2/2 Running 5 14h </code></pre> <p>But I need to see each container's name and its restart count using a kubectl command, something like below.</p> <pre><code>NAME STATUS RESTARTS AGE container-1 Running 2 14h container-2 Running 3 14h </code></pre> <p>It would be helpful if someone could help me with this. Thanks in advance!</p>
<p>You can try something like this:</p> <pre><code>kubectl get pods &lt;pod-name&gt; -o jsonpath='{.spec.containers[*].name} {.status.containerStatuses[*].restartCount} {.status.containerStatuses[*].state}' </code></pre> <p>As a result you will get the container names, restart counts and states. Then you will be able to format it in whatever way you need.</p>
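<p>If you want each container on its own line, closer to the output you described, a jsonpath range works too:</p> <pre><code>kubectl get pod &lt;pod-name&gt; -o jsonpath='{range .status.containerStatuses[*]}{.name}{&quot;\t&quot;}{.restartCount}{&quot;\n&quot;}{end}'
</code></pre>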
<p>Often there is more than one search result returned from the <code>/</code> filter command, which is confirmed by pressing <code>Enter</code>. Then I can navigate the results, but have no way to filter based on the data displayed, e.g. <strong>Status</strong> or <strong>CPU</strong>.</p> <p>QUESTION: Is this possible? If so, what do I need to do to achieve it?</p> <p><a href="https://i.stack.imgur.com/InfFR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/InfFR.png" alt="enter image description here" /></a></p>
<p>I don't think it's possible to filter search results, but you can sort them, which can be helpful in most cases. For example:</p> <ul> <li>SHIFT+C sorts by CPU</li> <li>SHIFT+M sorts by MEMORY</li> <li>SHIFT+S sorts by STATUS</li> <li>...</li> </ul>
<p>I wonder about the difference between kube-proxy and CNI.</p> <ol> <li><p>Does Calico also use iptables to set policies? In that case the role overlaps with kube-proxy's, so what's the difference between the two?</p> </li> <li><p>Why is kube-proxy disabled for Calico eBPF mode? Since kube-proxy uses iptables, do you disable kube-proxy (which uses iptables) in order to use eBPF?</p> </li> <li><p>If I disable kube-proxy, will the existing iptables policies be removed?</p> </li> </ol> <p>Thank you.</p>
<ol> <li><p>Calico defaults to using iptables to set network policies. Calico iptables chains/rules sit along side and integrate with the kube-proxy rules (when kube-proxy is in iptables mode).</p> </li> <li><p>The BPF code Calico implements intercepts the packets before the kube-proxy iptables rules are able to. You don't have to disable kube-proxy, but there is no reason to run kube-proxy (and the overhead of it managing iptables rules) once Calico can communicate directly with the kube-apiserver service and manage kubernetes services via BPF.</p> </li> <li><p>If kube-proxy is not running, it will not add any k8s iptables rules. If you have been left with rules after kube-proxy is shutdown, a manual iptables flush <code>iptables --flush</code> or a reload of your base iptables config will do. Otherwise a <code>kube-proxy --cleanup</code>.</p> </li> </ol>
<p>I was using Docker Desktop on my Mac for a long time. Currently Docker Desktop is causing huge issues related to starting up. Every time <a href="https://stackoverflow.com/questions/69552636/cannot-launch-docker-desktop-for-mac/71861493#71861493">it needs some tweak</a> to run, or an uninstall and reinstall, which takes several hours to fix every time I start working.</p> <p>So I started using another Docker Desktop alternative for Mac, which is Colima (<strong>Co</strong>ntainer on <strong>Li</strong>nux on <strong>Ma</strong>c), installed with the help of the <a href="https://github.com/abiosoft/colima" rel="noreferrer">Git link.</a></p> <p>And I am using the Docker client for the Docker runtime, installed using <code>brew install docker</code>, and I have logged in using <code>docker login</code> via the terminal.</p> <p><strong>The important note here is that</strong> all the above configuration worked well with Docker Desktop, <a href="https://stackoverflow.com/questions/44346109/how-to-easily-install-and-uninstall-docker-on-macos/65468254#65468254">but after uninstalling everything related to docker</a> and starting to use <code>Colima</code>, it no longer does.</p> <p>When I run <code>docker info</code> I get the following message:</p> <pre><code>$ docker info Client: Context: default Debug Mode: false Server: ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? errors pretty printing info </code></pre> <p>Here is some other information that might be needed:</p> <pre><code>$ colima version colima version 0.4.2 git commit: f112f336d05926d62eb6134ee3d00f206560493b runtime: docker arch: x86_64 client: v20.10.17 server: v20.10.11 kubernetes Client Version: v1.24.1 Kustomize Version: v4.5.4 Server Version: v1.23.6+k3s1 </code></pre> <p>Operating System: macOS Monterey Version: 12.3.1 (21E258)</p> <p>Any help would be highly appreciated.</p>
<p>After posting the question on StackOverflow I kept searching non-stop for a solution. It finally took a day to fix it (for me). While searching on Google <a href="https://github.com/nektos/act/issues/1051#issuecomment-1070599087" rel="noreferrer">I found a solution in this link</a>.</p> <p>Basically <code>Colima</code> uses <code>$HOME/.colima/docker.sock</code>, so first I checked if there is anything in <code>~/.colima</code> by running <code>ls -la ~/.colima</code>, and I found that <code>docker.sock</code> is there.</p> <p>So I set up DOCKER_HOST by running the following command:</p> <p><code>$ export DOCKER_HOST=&quot;unix://$HOME/.colima/docker.sock&quot;</code></p> <p>and then ran:</p> <p><code>$ docker info</code></p> <p>Now it shows everything related to the Docker Server :)</p> <pre><code>Client: Context: default Debug Mode: false Server: Containers: 29 Running: 19 Paused: 0 Stopped: 10 Images: 19 Server Version: 20.10.11 ..... </code></pre>
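<p>A small follow-up note: the <code>export</code> above only lasts for the current shell session. To make it persistent you can append it to your shell profile (assuming zsh, the default on recent macOS; use <code>~/.bashrc</code> for bash):</p> <pre><code>echo 'export DOCKER_HOST=&quot;unix://$HOME/.colima/docker.sock&quot;' &gt;&gt; ~/.zshrc
source ~/.zshrc
</code></pre>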
<p>I have an EKS deployment with a service account with a policy and role that enable access to S3.</p> <p>This works well for the root account in the container. The container can execute <code>aws s3 cp ...</code> with no issue.</p> <p>The problem is that another user cannot. It gets <code>AccessDenied</code> from the S3 service, meaning it does not have the correct credentials.</p> <p>So my question is: how do I give rights to another user in the container (which is Linux based) in this case?</p> <p>(I don't think it's specific to EKS, as service accounts are generic to Kubernetes.)</p>
<p>You can check out the reference for how IAM roles for service accounts work in k8s <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html#pod-configuration" rel="nofollow noreferrer">here</a>.<br /> In short, in order to allow another user to use the IAM role the same environment variables have to be configured for that user, and it needs to be able to access the path specified in the <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> variable.<br /> Once both of these prerequisites are met, the user should be able to use the same identity as the root user.</p>
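<p>To make the second point more concrete: the environment variables are visible to every user in the pod, so the usual culprit is file permissions on the projected token. One commonly used workaround is to set <code>fsGroup</code> in the pod security context so the mounted token file becomes readable by the non-root user's group. This is only a sketch and the names and group ID are assumptions; match them to your service account and to the user inside your container:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: my-s3-service-account   # hypothetical name of the IRSA-annotated service account
      securityContext:
        fsGroup: 1000                             # group of the non-root user in the image
      containers:
      - name: app
        image: my-app:latest                      # placeholder image
</code></pre>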
<p>There are two kubelet nodes and each node runs several containers, including a server with WildFly. Even though I do not define a containerPort &lt;&gt; hostPort mapping for it, the management console can be reached on port 9990 from outside. I do not have any clue why.</p> <pre><code>- name: server image: registry/server:develop-latest ports: - name: server-https containerPort: 8443 hostPort: 8443 </code></pre> <p>In docker container inspect &lt;container-id&gt; I see:</p> <pre><code>&quot;ExposedPorts&quot;: { &quot;9990/tcp&quot;: {}, ... </code></pre> <p>So,</p> <ul> <li>Why is container port 9990 exposed? and</li> <li>Why is containerPort 9990 mapped to a hostPort so that I can reach port 9990 from outside?</li> </ul>
<p>You can expose the port in two places: when you run the container, and when you build the image. Typically you only do the latter, since exposing the port is documentation of what ports are likely listening for connections inside the container (it doesn't have any effect on networking).</p> <p>To see if the port was exposed at build time, you can run:</p> <pre class="lang-bash prettyprint-override"><code>docker image inspect registry/server:develop-latest </code></pre> <p>And if that port wasn't exposed in your build, then it was likely exposed in your base image.</p>
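<p>To narrow the inspect output down to just the exposed ports, a small variation that should work with any image (the image name is the one from the question):</p> <pre class="lang-bash prettyprint-override"><code>docker image inspect --format '{{ json .Config.ExposedPorts }}' registry/server:develop-latest
</code></pre>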
<p>I am following this AWS guide: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/</a> to set up my Kubernetes cluster under an ALB.</p> <p>After installing the AWS ALB controller on my EKS cluster with the steps below:</p> <pre class="lang-sh prettyprint-override"><code>helm repo add eks https://aws.github.io/eks-charts kubectl apply -k &quot;github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master&quot; helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ --set clusterName=YOUR_CLUSTER_NAME \ --set serviceAccount.create=false \ --set region=YOUR_REGION_CODE \ --set vpcId=&lt;VPC_ID&gt; \ --set serviceAccount.name=aws-load-balancer-controller \ -n kube-system </code></pre> <p>I want to deploy my ingress configuration:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/success-codes: 200,302 alb.ingress.kubernetes.io/target-type: instance kubernetes.io/ingress.class: alb name: staging-ingress namespace: staging finalizers: - ingress.k8s.aws/resources spec: rules: - http: paths: - backend: serviceName: my-service servicePort: 80 path: /api/v1/price </code></pre> <p>Everything looks fine. However, when I run the below command to deploy my ingress:</p> <pre><code>kubectl apply -f ingress.staging.yaml -n staging </code></pre> <p>I get the below error:</p> <pre><code>Error from server (InternalError): error when creating &quot;ingress.staging.yaml&quot;: Internal error occurred: failed calling webhook &quot;vingress.elbv2.k8s.aws&quot;: the server could not find the requested resource </code></pre> <p>There are very few similar issues on Google and none was helping me. Any idea what the problem is?</p> <p>K8s version: 1.18</p>
<p>Adding this security group rule solved it for me:</p> <pre><code>node_security_group_additional_rules = { ingress_allow_access_from_control_plane = { type = &quot;ingress&quot; protocol = &quot;tcp&quot; from_port = 9443 to_port = 9443 source_cluster_security_group = true description = &quot;Allow access from control plane to webhook port of AWS load balancer controller&quot; } } </code></pre>
<p>When defining the Taints &amp; Tolerations, we defined the Taint as below:</p> <pre><code>kubectl taint nodes node1 key1=value1:NoSchedule </code></pre> <p>Now any pod that does not have toleration defined as below will not be scheduled on node1. And the one that has toleration defined, gets scheduled on this node. But, why do we need to define NoSchedule on the POD? It is already defined on the node.</p> <pre><code>tolerations: - key: &quot;key1&quot; operator: &quot;Equal&quot; value: &quot;value1&quot; effect: &quot;NoSchedule&quot; </code></pre> <p>What impact does it have if:</p> <ol> <li>The node effect is NoSchedule</li> </ol> <pre><code>kubectl taint nodes node1 key1=value1:NoSchedule </code></pre> <ol start="2"> <li>But the POD toleration is NoExecute</li> </ol> <pre><code>tolerations: - key: &quot;key1&quot; operator: &quot;Equal&quot; value: &quot;value1&quot; effect: &quot;NoExecute&quot; </code></pre> <p>Note: I understand that it is trying to match not just &quot;taint value&quot; but also the &quot;taint effect&quot;. But is there any use case for matching &quot;taint effect&quot; as well?</p> <blockquote> <p>tolerations.effect (string) Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.</p> </blockquote> <p>Thanks</p>
<blockquote> <p>What impact does it have if:</p> <ol> <li>The node effect is NoSchedule</li> </ol> <p>kubectl taint nodes node1 key1=value1:NoSchedule</p> <ol start="2"> <li>But the POD toleration is NoExecute</li> </ol> </blockquote> <p>The pod will not be scheduled on the node whose taint it fails to tolerate, e.g. your sample pod will not be scheduled on a node tainted with <code>NoSchedule</code> because it only tolerates <code>NoExecute</code>.</p> <p><code>...use case for matching &quot;taint effect&quot;</code></p> <p>Not sure what it means here; but it is possible to tolerate a key with any effect by specifying only the key and value, as in the sketch below.</p>
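<p>A minimal sketch of such a toleration, reusing the key/value from the question. Leaving <code>effect</code> out matches all three effects (NoSchedule, PreferNoSchedule and NoExecute), per the field description quoted in the question:</p> <pre><code>tolerations:
- key: &quot;key1&quot;
  operator: &quot;Equal&quot;
  value: &quot;value1&quot;
  # no &quot;effect&quot; field: tolerates taints with key1=value1 regardless of their effect
</code></pre>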
<p>I am trying to figure out how to enable the proxy protocol header in GKE.</p> <p>Usually, you can do it with gcloud:</p> <pre><code>gcloud compute target-ssl-proxies update my-ssl-lb-target-proxy \ --proxy-header=PROXY_V1 </code></pre> <p><a href="https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#update_proxy_protocol_header_for_target_tcp_proxy" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#update_proxy_protocol_header_for_target_tcp_proxy</a></p> <p>But GKE doesn't list it as an annotation:</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters</a></p> <p>Isn't there any support for it yet? :(</p> <p>Thanks</p>
<p>When you create a Service of type &quot;LoadBalancer&quot; in GKE, it uses a <a href="https://cloud.google.com/load-balancing/docs/network" rel="nofollow noreferrer">Network Load Balancer</a> for external services and an <a href="https://cloud.google.com/load-balancing/docs/internal" rel="nofollow noreferrer">Internal TCP/UDP Load Balancer</a> for internal services.</p> <p>You can use TCP and/or SSL Proxy load balancers as well, but it involves a bit of manual configuration. You will need to create a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#how_to" rel="nofollow noreferrer">Standalone NEG</a> and then associate that as the backend of a TCP or SSL Proxy LB.</p> <p>To create a Standalone NEG, you create a Service of type ClusterIP and use a neg annotation:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: neg-demo-svc annotations: cloud.google.com/neg: '{&quot;exposed_ports&quot;: {&quot;80&quot;:{&quot;name&quot;: &quot;NEG_NAME&quot;}}}' spec: type: ClusterIP selector: run: neg-demo-app # Selects Pods labelled run: neg-demo-app ports: - port: 80 protocol: TCP targetPort: 9376 </code></pre> <p>You'd then associate the NEG with the backend service used by your TCP or SSL Proxy LB, for example</p> <pre><code>gcloud compute backend-services add-backend my-bes \ --global \ --network-endpoint-group=NEG_NAME ... </code></pre>
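<p>To complete the picture, once the standalone NEG is attached to the backend service, the proxy protocol header from the original question is set on the target proxy itself. A rough sketch for the TCP proxy case, with placeholder names (<code>my-tcp-proxy</code>, and <code>my-bes</code> from above):</p> <pre><code>gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service=my-bes \
    --proxy-header=PROXY_V1
</code></pre>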
<p>In a namespace <code>foo</code> there is an existing pod running</p> <pre><code>$ kubectl get pods -n foo NAME READY STATUS RESTARTS AGE hello-1816639685-xejyk 1/1 Running 0 6d3h </code></pre> <p>How can we duplicate the <code>hello</code> pod so that we can try out things in this duplicated pod without affecting the original pod?</p>
<p>You can get the pod manifest using YAML output from kubectl and create a new pod based on that specification.</p> <p>For example</p> <p><code>kubectl get pod hello-1816639685-xejyk -n foo -o yaml</code></p> <p>You need to use a different name or namespace, since the name must be unique.</p>
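<p>A rough sequence of commands for that (the file name is arbitrary):</p> <pre><code>kubectl get pod hello-1816639685-xejyk -n foo -o yaml &gt; hello-copy.yaml

# edit hello-copy.yaml: change metadata.name, and remove server-populated fields
# such as metadata.uid, metadata.resourceVersion, metadata.creationTimestamp and the whole status: section

kubectl apply -f hello-copy.yaml -n foo
</code></pre>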
<p>I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub action is activated which calls <code>kubectl apply -f kubernetes</code> to apply a bunch of yaml Kubernetes definitions.</p> <p>Let's say I add yaml for a new service, and deploy it -- kubectl will add it.</p> <p>But then later on, I simply delete the yaml for that service, and redeploy -- kubectl will NOT delete it.</p> <p>Is there any way that <code>kubectl</code> can recognize that the service yaml is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.</p> <p>Does the developer have to know to connect <code>kubectl</code> to the production cluster and delete the service manually, in addition to deleting the yaml definition?</p> <p>Is there a mechanism for kubernetes to "know what's missing"?</p>
<p>Before deleting the yaml file, you can run <code>kubectl delete -f file.yaml</code>; this way all the resources created by this file will be deleted.</p> <hr /> <p>However, what you are looking for is achieving the desired state using k8s. You can do this by using tools like <a href="https://github.com/roboll/helmfile" rel="nofollow noreferrer">Helmfile</a>.</p> <p>Helmfile allows you to specify the resources you want all in one file, and it will achieve the desired state every time you run <code>helmfile apply</code>.</p>
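<p>A minimal <code>helmfile.yaml</code> sketch, assuming the manifests are packaged as local Helm charts (the release names and chart paths are illustrative):</p> <pre><code>releases:
  - name: my-service
    namespace: default
    chart: ./charts/my-service
  - name: another-service
    namespace: default
    chart: ./charts/another-service
    # to have Helmfile uninstall a release instead of just dropping the entry,
    # it can be marked with: installed: false
</code></pre>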
<p>I'm trying to run a tomcat container in K8S with a non-root user. To do so I set the user 'tomcat' with the appropriate permissions in the Docker image. I have a startup script that creates a directory in /opt/var/logs (during container startup) and also starts the tomcat service.</p> <pre><code># steps in Dockerfile
# add tomcat user and group and give it permission to the /opt directory
RUN addgroup tomcat -g 1001 &amp;&amp; \
    adduser -D -u 1001 -G tomcat tomcat &amp;&amp; \
    chown -R tomcat:tomcat /opt

# switch user
USER tomcat
</code></pre> <p>The pod runs fine in K8S when deployed using a deployment without any volume mapped.</p> <p>But I get a permission denied error (permission denied: creating directory /opt/var/logs/docker/) from the startup script, which fails to create a directory when I map the deployment with the persistent volume claim, even though I set the fsGroup as explained here, https://kubernetes.io/docs/tasks/configure-pod-container/security-context/.</p> <p>I have a persistent volume of type hostPath.</p> <p>The deployment definition is as below.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ms-tomcat namespace: ms-ns labels: app: tomcat spec: selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: securityContext: fsGroup: 2000 runAsUser: 1001 runAsGroup: 1001 containers: - name: tomcat image: docker-registry.test.com/tomcat:1.2 volumeMounts: - name: logging-volume mountPath: /opt/var/logs/docker imagePullSecrets: - name: test volumes: - name: logging-volume persistentVolumeClaim: claimName: nonroot-test-pvc </code></pre> <p>PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nonroot-test-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi storageClassName: local-node-sc volumeName: nonroot-test-pv </code></pre>
<p>You need to run an init container that executes chmod 777 on the mounted volume to be able to use the volume in a container with a non-root user.</p> <p>This can easily be done by creating an init container from an alpine image.</p> <p>You can also face this problem when trying to use Longhorn as your storage class.</p> <p>Like this:</p> <pre><code>spec: initContainers: - name: permission-init image: alpine:3.16.0 command: - sh - -c - (chmod 777 /data) volumeMounts: - name: files mountPath: /data </code></pre>
<p>I have installed microk8s on my centos 8 operating system.</p> <pre><code>kube-system coredns-7f9c69c78c-lxm7c 0/1 Running 1 18m kube-system calico-node-thhp8 1/1 Running 1 68m kube-system calico-kube-controllers-f7868dd95-dpsnl 0/1 CrashLoopBackOff 23 68m </code></pre> <p>When I do microk8s enable dns, coredns or calico-kube-controllers cannot be started as above. Describe the pod for coredns :</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 14m default-scheduler Successfully assigned kube-system/coredns-7f9c69c78c-lxm7c to localhost.localdomain Normal Pulled 14m kubelet Container image &quot;coredns/coredns:1.8.0&quot; already present on machine Normal Created 14m kubelet Created container coredns Normal Started 14m kubelet Started container coredns Warning Unhealthy 11m (x22 over 14m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503 Normal SandboxChanged 2m8s kubelet Pod sandbox changed, it will be killed and re-created. Normal Pulled 2m7s kubelet Container image &quot;coredns/coredns:1.8.0&quot; already present on machine Normal Created 2m7s kubelet Created container coredns Normal Started 2m6s kubelet Started container coredns Warning Unhealthy 2m6s kubelet Readiness probe failed: Get &quot;http://10.1.102.132:8181/ready&quot;: dial tcp 10.1.102.132:8181: connect: connection refused Warning Unhealthy 9s (x12 over 119s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>Describe the pod for calico-kube-controllers :</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 73m default-scheduler no nodes available to schedule pods Warning FailedScheduling 73m (x1 over 73m) default-scheduler no nodes available to schedule pods Warning FailedScheduling 72m (x1 over 72m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate. 
Normal Scheduled 72m default-scheduler Successfully assigned kube-system/calico-kube-controllers-f7868dd95-dpsnl to localhost.localdomain Warning FailedCreatePodSandBox 72m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;f3ea36b003b0c9142ae63fee31531f9102e40ab837f4d795d1efb5c85af223ec&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;a1c405cdcebe79c586badcc8da47700247751a50ef9a1403e95fc4995485fba0&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;4adb07610eef0d7a618105abf72a114e486c373a02d5d1b204da2bd35268dd1b&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;96aac009175973ac4c20034824db3443b3ab184cfcd1ed23786e539fb6147796&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;79639a18edcffddbdb93492157af43bb6c1f1a9ac2af1b3fbbac58335737d5dc&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;3264f006447297583a37d8cc87ffe01311deaf2a31bf25867b3b18c83db2167d&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox &quot;5c5cf6509bfcf515ad12bc51451e4c385e5242c4f7bb593779d207abf9c906a4&quot;: error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org &quot;default&quot; not found Normal Pulling 70m kubelet Pulling image &quot;calico/kube-controllers:v3.13.2&quot; Normal Pulled 69m kubelet Successfully pulled image &quot;calico/kube-controllers:v3.13.2&quot; in 50.744281789s Normal Created 69m kubelet Created container calico-kube-controllers Normal Started 69m kubelet Started container calico-kube-controllers Warning Unhealthy 69m (x2 over 69m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory Warning MissingClusterDNS 37m (x185 over 72m) kubelet pod: &quot;calico-kube-controllers-f7868dd95-dpsnl_kube-system(d8c3ee40-7d3b-4a84-9398-19ec8a6d9082)&quot;. 
kubelet does not have ClusterDNS IP configured and cannot create Pod using &quot;ClusterFirst&quot; policy. Falling back to &quot;Default&quot; policy. Warning Unhealthy 31m (x6 over 32m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory Normal Pulled 30m (x4 over 32m) kubelet Container image &quot;calico/kube-controllers:v3.13.2&quot; already present on machine Normal Created 30m (x4 over 32m) kubelet Created container calico-kube-controllers Normal Started 30m (x4 over 32m) kubelet Started container calico-kube-controllers Warning BackOff 22m (x42 over 32m) kubelet Back-off restarting failed container Normal SandboxChanged 10m kubelet Pod sandbox changed, it will be killed and re-created. Warning Unhealthy 9m36s (x6 over 10m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory Normal Pulled 8m51s (x4 over 10m) kubelet Container image &quot;calico/kube-controllers:v3.13.2&quot; already present on machine Normal Created 8m51s (x4 over 10m) kubelet Created container calico-kube-controllers Normal Started 8m51s (x4 over 10m) kubelet Started container calico-kube-controllers Warning BackOff 42s (x42 over 10m) kubelet Back-off restarting failed container </code></pre> <p>I cannot start my microk8s services. I don't encounter these on my Ubuntu server. What can I do in these error situations that I encounter for my Centos 8 server?</p>
<p>Have you tried updating the microk8s version?</p>
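<p>For completeness, since MicroK8s is installed as a snap, an upgrade is typically a channel refresh (the channel below is only an example; pick the release you actually want):</p> <pre><code>sudo snap refresh microk8s --channel=1.24/stable
microk8s status --wait-ready
</code></pre>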
<p>So we are using Loki/Grafana to collect logs from our Kubernetes cluster, and the tool is great for that. But now that I have cleaned up the logs, I would also like to visualize them!</p> <ol> <li><p>My logs <a href="https://i.stack.imgur.com/muDOL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/muDOL.png" alt="enter image description here" /></a></p> </li> <li><p>Selecting only JSON lines</p> <pre><code># My query {namespace=&quot;default&quot;,pod=~&quot;kkds-dockerscanner-test.+&quot;} |= &quot;SchemaVersion&quot; | json </code></pre> <h5>My logs</h5> <p><a href="https://i.stack.imgur.com/B58jM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B58jM.png" alt="enter image description here" /></a></p> </li> <li><p>I select only two columns that I want to display in a table</p> <pre><code>{namespace=&quot;default&quot;,pod=~&quot;kkds-dockerscanner-test.+&quot;} |= &quot;SchemaVersion&quot; | json | line_format &quot;{{.ArtifactName}} {{.Metadata_OS_Family}}&quot; </code></pre> </li> <li><p>Now I want to show the results in a table, but Grafana doesn't quite show the table I want. I want to show the output of my query in a table, and not just my output as a string in a column. Do you have any suggestions how to make this work, or can you point to a tutorial?</p> <p><a href="https://i.stack.imgur.com/SvSMH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SvSMH.png" alt="enter image description here" /></a></p> </li> </ol>
<p>As suggested by @Witt, it is possible to transform the JSON by applying a transformation.</p> <p>If I choose the <strong>Transform</strong> pane and extract fields, I now get the JSON keys as columns in the Table view.</p> <p><a href="https://i.stack.imgur.com/dcNXq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dcNXq.png" alt="enter image description here" /></a></p> <p>Under your datasource, you can also add the columns using <strong>Derived Fields</strong> and <strong>Regex</strong>. This, however, is a bit more cumbersome.</p> <p><a href="https://i.stack.imgur.com/TsBmd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TsBmd.png" alt="enter image description here" /></a></p>
<p>When deploying my operator bundle using OLM I get the following error:</p> <pre><code>level=error msg=&quot;File with size ... exceeded 1048576 limit, aborting&quot; file=/bundle/manifests/crd.yaml Error: error loading manifests from directory: file crd.yaml bigger than total allowed limit Usage: opm alpha bundle extract [flags] </code></pre> <p>Where does this size limit come from and how do I get around this?</p>
<p>Bundles have a size limitation because their manifests are used to create a configMap, and the Kubernetes API does not allow configMaps larger than <code>~1MB</code>. Beginning with OLM version <code>v0.19.0</code> and OPM <code>1.17.5</code>, these values are now compressed using <code>Gzip</code> and then encoded using <code>Base64</code> to accommodate larger bundles. The recommendation is to upgrade the SDK and OLM versions used.</p>
<p>I've been studying Kubernetes' main features for days, and I understood many things, really I did. But I could not find the answer to this question anywhere: how does Kubernetes balance requests among the cluster's nodes?</p> <p>Well, I mean, suppose an on-premise private Kubernetes cluster: the LoadBalancer type actually makes a service publish its ports to the network with an IP; an Ingress is a resource which sets the rules for some third-party IngressController, which handles requests and forwards them to the correct service.</p> <p>What I do not understand:</p> <ol> <li>Does any or all of these components, or others perhaps, actually monitor the nodes' (or pods', I don't know) available resources and choose which node (or pod) to forward the requests to?</li> <li>If any real load balancer is present natively in Kubernetes, what criteria does it adopt? Maybe the aforementioned resources, or the network latency, or does it just adopt round robin?</li> <li>If there is a default policy for balancing, is it possible to customize it and implement your own rules?</li> </ol> <p>Please, tell me if I misunderstood anything and I'll try to focus better on that one. Thank you all.</p>
<p>If you don't have something in place that does load balancing externally (e.g. Istio), all your mentioned options boil down to getting TCP connections into the cluster.</p> <p>Inside the cluster a ClusterIP is the real concept for load balancing: All Pods that are assigned to a Service with a ClusterIP will be used (roughly) in a round robin fashion. This is handled by iptables DNAT rules configured by kube-proxy on each node.</p> <p>The external LoadBalancer or Ingress usually do not do load balancing, even if the name might suggest it.</p>
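<p>If you want to see this mechanism with your own eyes, you can inspect the NAT rules kube-proxy maintains on a node while it runs in iptables mode. The chain names below are the ones kube-proxy generates; the specific KUBE-SVC hash is just an example:</p> <pre><code># all Service ClusterIPs are matched in this chain
sudo iptables -t nat -L KUBE-SERVICES -n | head

# each Service gets a KUBE-SVC-* chain that spreads traffic over KUBE-SEP-* (endpoint) chains
# using random &quot;statistic&quot; rules, which is where the rough round robin comes from
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n
</code></pre>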
<p>I have written custom golang kong plugin called <code>go-wait</code> following the example from the github repo <a href="https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin" rel="nofollow noreferrer">https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin</a></p> <p>The only difference is I created a custom docker image so kong would have the mentioned plugin by default in it's <code>/usr/local/bin</code> directory Here's the dockerfile</p> <pre><code>FROM golang:1.18.3-alpine as pluginbuild COPY ./charts/custom-plugins/ /app/custom-plugins RUN cd /app/custom-plugins &amp;&amp; \ for d in ./*/ ; do (cd &quot;$d&quot; &amp;&amp; go mod tidy &amp;&amp; GOOS=linux GOARCH=amd64 go build .); done RUN mkdir /app/all-plugin-execs &amp;&amp; cd /app/custom-plugins &amp;&amp; \ find . -type f -not -name &quot;*.*&quot; | xargs -i cp {} /app/all-plugin-execs/ FROM kong:2.8 COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/ COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/plugin-ref/ # Loop through the plugin-ref directory and create an entry for all of them in # both KONG_PLUGINS and KONG_PLUGINSERVER_NAMES env vars respectively # Additionally append `bundled` to KONG_PLUGINS list as without it any unused plugin will case Kong to error out #### Example Env vars for a plugin named `go-wait` # ENV KONG_PLUGINS=go-wait # ENV KONG_PLUGINSERVER_NAMES=go-wait # ENV KONG_PLUGINSERVER_GO_WAIT_QUERY_CMD=&quot;/usr/local/bin/go-wait -dump&quot; #### RUN cd /usr/local/bin/plugin-ref/ &amp;&amp; \ PLUGINS=$(ls | tr '\n' ',') &amp;&amp; PLUGINS=${PLUGINS::-1} &amp;&amp; \ echo -e &quot;KONG_PLUGINS=bundled,$PLUGINS\nKONG_PLUGINSERVER_NAMES=$PLUGINS&quot; &gt;&gt; ~/.bashrc # Loop through the plugin-ref directory and create an entry for QUERY_CMD entries needed to load the plugin # format KONG_PLUGINSERVER_EG_PLUGIN_QUERY_CMD if the plugin name is `eg-plugin` and it should point to the # plugin followed by `-dump` argument RUN cd /usr/local/bin/plugin-ref/ &amp;&amp; \ for f in *; do echo &quot;$f&quot; | tr &quot;[:lower:]&quot; &quot;[:upper:]&quot; | tr '-' '_' | \ xargs -I {} sh -c &quot;echo 'KONG_PLUGINSERVER_{}_QUERY_CMD=' &amp;&amp; echo '\&quot;/usr/local/bin/{} -dump\&quot;' | tr [:upper:] [:lower:] | tr '_' '-'&quot; | \ sed -e '$!N;s/\n//' | xargs -i echo &quot;{}&quot; &gt;&gt; ~/.bashrc; done </code></pre> <p>This works fine in the docker-compose file and docker container. But when I tried to use the same image in the kubernetes environment along with kong-ingress-controller, I started running into errors <code>&quot;failed to fill-in defaults for plugin: go-wait&quot;</code> and/or a bunch of other errors including <code>&quot;plugin 'go-wait' enabled but not installed&quot;</code> in the logs and I ended up not being able to enable it.</p> <p>Has anyone tried including go plugins in their kubernetes/helm kong setup. If so please shed some light on this</p>
<p>Update: Found the answer I was looking for, along with setting the environment variables generated by the image, there's modifications in the <code>_helpers.tpl</code> file of the kong helm chart itself. The reason is that in the deployment charts, the configuration expects plugins to be configured in values-custom.yml used to override the default settings.</p> <p>But the helm chart seems to be specific to values and plugins being loaded via <code>configMaps</code> which turned out to be a huge bottleneck, as any binary plugin you will generate in golang for kong is going to exceed the maximum allowed limit of the configMaps in kubernetes. That's the whole reason I had set out on this endeavor to make the plugins part of my image.</p> <p><strong>TL;dr</strong> I was able to clone the repo to my local system, make the changes for the following patch for loading the plugins from values without having to club them with the lua plugins. (Credits : Answer of <code>thatbenguy</code> from the discussion <a href="https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10" rel="nofollow noreferrer">https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10</a>)</p> <pre><code>--- a/charts/kong/templates/_helpers.tpl +++ b/charts/kong/templates/_helpers.tpl @@ -530,6 +530,9 @@ The name of the service used for the ingress controller's validation webhook {{- define &quot;kong.plugins&quot; -}} {{ $myList := list &quot;bundled&quot; }} +{{- range .Values.plugins.goPlugins -}} +{{- $myList = append $myList .pluginName -}} +{{- end -}} {{- range .Values.plugins.configMaps -}} {{- $myList = append $myList .pluginName -}} {{- end -}} </code></pre> <p>Add the following block to my values-custom.yml and I was good to go.</p> <p>Hopefully this helps anyone else also trying to write custom plugins for kong in golang for use in helm charts.</p> <pre><code>env: database: &quot;off&quot; plugins: bundled,go-wait pluginserver_names: go-wait pluginserver_go_wait_query_cmd: &quot;/usr/local/bin/go-wait -dump&quot; plugins: goPlugins: - pluginName: &quot;go-wait&quot; </code></pre> <p><strong>NOTE</strong> : Please remember all this still depends on you having the prebuilt custom kong plugins in your image, in my case I had built an image from the above dockerfile contents (in question) and pushed that to my own docker hub repo and replaced the image in the values-custom.yml using the following block</p> <pre><code>image: repository: chalukyaj/kong-custom-image tag: &quot;1.0.1&quot; </code></pre> <p>PS: As you guys might have noticed, the only disappointment I have with this is that the environment variables couldn't just be picked from the docker image's ~/.bashrc, which would have made this awesome. But nonetheless, this works, and I couldn't find a single post which showed how to use the new go-pdk (instead of the older go-pluginserver) to build the go plugins and use them in helm.</p>
<p>Our EKS cluster is terraform managed and was specified with EC2 Launch Template as terraform resource. Our aws_eks_node_group includes Launch template section as shown below.</p> <pre><code>resource &quot;aws_eks_node_group&quot; eks_node_group&quot; { ......................... .......................... launch_template { id = aws_launch_template.eksnode_template.id version = aws_launch_template.eksnode_template.default_version name = aws_launch_template.eksnode_template.name } } </code></pre> <p>However, after a while, EKS self-deployed the new Launch Template and linked it to the relevant auto-scaling group.</p> <ol> <li>Why this has happened at the first place and how to avoid it in the future?</li> <li>How can we link the customer managed Launch Template to EKS Autoscaling Group via terraform again? I tried changing the name or version of the launch template, but it is still using one created by EKS (self-managed)</li> </ol>
<ol> <li></li> </ol> <blockquote> <p>The Amazon EKS API creates this launch template either by <strong>copying</strong> one you provide or by creating one automatically with default values in your account.</p> </blockquote> <ol start="2"> <li></li> </ol> <blockquote> <p>We don't recommend that you modify auto-generated launch templates.</p> </blockquote> <p>(<a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html" rel="nofollow noreferrer">source</a>)</p>
<p>I'm new to kubernetes and nginx, and am having trouble understanding how to implement Ingress (nginx-ingress) for my particular use case.</p> <p>I want to expose the pgadmin PostgreSQL admin tool from my cluster.</p> <p>I have other applications exposed via nginx-ingress from my cluster and I like each to hang off its own subdirectory e.g.:</p> <ul> <li>myserver.com/purchasing/index.html</li> <li>myserver.com/sales/index.html</li> </ul> <p>I have a problem doing this with pgadmin (although the problem would no doubt apply equally to other apps that behave in the same way)</p> <p>I have set up an Ingress rule to capture and route accordingly:</p> <pre><code> annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$2 rules: - http: paths: - path: /pgadmin4(/|$)(.*) </code></pre> <p>it does the routing and the backend is hit.</p> <p>However when pgadmin then does a redirect, I lose the pgadmin/ from the redirect URl i.e.</p> <ul> <li>I send a GET to myserver.com/pgadmin</li> <li>I receive a 302 redirect to myserver.com/login</li> </ul> <p>whereas I want redirecting to:</p> <ul> <li>myserver.com/<strong>pgadmin</strong>/login</li> </ul> <p>What pieces am I missing to achieve this - it feels like it should be simple?</p> <p>Thanks</p>
<p>I had the same kind of requirement and solved it by adding the <code>X-Script-Name</code> header via annotations:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header X-Script-Name /pgadmin; </code></pre> <p>The full Ingress file looks like the one below.</p> <p>Note: This is my test environment, so I am not enabling SSL certs. In your case, you may need to enable SSL settings.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-resource annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header X-Script-Name /pgadmin; spec: rules: - host: &quot;34.70.247.68.nip.io&quot; http: paths: - pathType: ImplementationSpecific path: /pgadmin backend: service: name: pgadmin-service port: number: 80 </code></pre> <p>Reference: <a href="https://github.com/helm/charts/issues/21046#issuecomment-591840105" rel="nofollow noreferrer">https://github.com/helm/charts/issues/21046#issuecomment-591840105</a></p>
<p>I am trying to do an upgrade to a CRD in Kubernetes for Apache Flink.</p> <p>Below are the Events in the HelmRelease. I am trying to upgrade the flink-operator from v0.1.0 to v1.0.0. I am also using FluxCD which has been configured to create and/or replace CRDs. Flux is attempting to do this correctly but the Kubernetes API seems to be rejecting the new CRD from the flink operator if I am understanding this correctly.</p> <p><a href="https://i.stack.imgur.com/bxjXU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bxjXU.png" alt="enter image description here" /></a></p> <p>Any further guidance would be appreciated. Thank you</p>
<p>You are correct, unfortunately we simply do not support this graceful upgrade path between 0.1.0 and 1.0.0.</p> <p>The process is documented here: <a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/upgrade/#upgrading-from-v1alpha1---v1beta1" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/upgrade/#upgrading-from-v1alpha1---v1beta1</a></p> <p>We explicitly marked the 0.1.0 version beta, sorry for any inconvenience that this causes. We aim to support the upgrade path you are suggesting going forward, and happy to hear your feedback.</p>
<p>Has anyone been able to get Apache ignite to work with Istio on Kubernetes. I have Apache Ignite client and server pod communication working successfully on Kubernetes but once envoy sidecar is injected we get the following error:</p> <pre class="lang-text prettyprint-override"><code>Caused by: org.apache.ignite.spi.IgniteSpiOperationTimeoutException: Failed to perform handshake due to timeout (consider increasing 'connectionTimeout' configuration property). at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.handshakeTimeoutException(TcpCommunicationSpi.java:3894) [ignite-core-2.8.1.jar!/:2.8.1] at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeTcpHandshake(TcpCommunicationSpi.java:4138) [ignite-core-2.8.1.jar!/:2.8.1] at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioSession(TcpCommunicationSpi.java:3607) [ignite-core-2.8.1.jar!/:2.8.1] ... 15 common frames omitted </code></pre> <p>Any help here , would be much appreciated?</p>
<p>A workaround is to add these annotations to the deployment's pod template, as shown below:</p> <pre><code> traffic.sidecar.istio.io/excludeInboundPorts: &quot;47100,47500&quot; traffic.sidecar.istio.io/excludeOutboundPorts: &quot;47100,47500&quot; </code></pre>
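<p>Since these are pod annotations, in a Deployment they go under <code>spec.template.metadata.annotations</code>, roughly like this (only the relevant part of the manifest is sketched):</p> <pre><code>spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeInboundPorts: &quot;47100,47500&quot;
        traffic.sidecar.istio.io/excludeOutboundPorts: &quot;47100,47500&quot;
</code></pre>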
<p>I'm trying to improve my knowledge in GCP-GKE as a newbie, and on the way to doing that I found a little concept that I don't quite understand yet. In GKE, there is a <strong>Service Account</strong> called <code>service-PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com</code> (where the <code>PROJECT_NUM</code> is the ID of our project), and after several hours of googling I couldn't find any article or definition about it. So could you guys please explain to me:</p> <ul> <li>What is this <strong>Service Account</strong>? How was it created (and by whom)?</li> <li>What is it for? How important is it in GKE?</li> <li>What happens if we delete it? Could we re-create it manually?</li> </ul> <p>In fact, I found out that in GCP we have some <strong>Service Accounts</strong> that have a &quot;robot&quot; suffix: <code>...robot.iam.gserviceaccount.com/</code> (like <code>@gcf-admin-robot.iam.gserviceaccount.com/</code>, <code>@serverless-robot-prod.iam.gserviceaccount.com</code>, etc). What can we say about these, please?</p> <p>If I misunderstood something, please point it out for me; I really appreciate that.</p> <p>Thank you guys !!!</p>
<p><a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">Service Accounts</a> aka &quot;robots&quot; contrast with user (&quot;human&quot;) accounts and represent two forms of Google identity.</p> <blockquote> <p><strong>NOTE</strong> Robots was the original name for Service Accounts and is a more colorful description of the intent of these accounts, to run software.</p> </blockquote> <p>(Google) User accounts include consumer (Gmail) e.g. [email protected] and [email protected] (Workspace) accounts. User accounts are used by humans to interact with Google services and must be used (or a suitable delegate) to acccess user-owned content such as Workspace docs, sheets etc.</p> <p>Software (&quot;robots&quot;) generally should run as a Service Account <strong>not</strong> as a User account. In part, you can't easily run software using User accounts because the User OAuth flow is 3-legged and requires interacting with an OAuth Consent Screen to permit an app access to data.</p> <p>There are two flavors of Service Account: Google-created|managed and User-created|managed. The difference is essentially the owner. If you create applications, generally you should create a Service Account for each app and run the app using its Service Account.</p> <p>User-managed Service Accounts take the form <code>{something}@{project}.iam.gserviceaccount.com</code> where you get to define the value of <code>{something}</code> and the Google Project in which the Service Account is created (the project that owns the Service Account) is represented by <code>{project}</code> (actually the Project ID).</p> <p>When Google provides app functionality, it also creates Service Accounts and often, Google &quot;binds&quot; these Service Accounts to your projects that use them in addition to defining the role that the Service Account has in your project.</p> <p>Google-managed Service Accounts take the form <code>{something}@{label}.iam.gserviceaccount.com</code>. Unlike User-managed Service Accounts, Google uses more descriptive labels (<code>{label}</code>) to help explain the role of the Service Account.</p> <blockquote> <p><strong>NOTE</strong> With Google-managed Service Accounts <code>{something}</code> often includes the Project Number (not ID) of (your!) project for which the Google-managed account has been created.</p> </blockquote> <p>You <strong>cannot</strong> delete Google-managed Service Accounts because you(r Google account) does not own the Service Account.</p> <p>You <strong>can</strong> (but <strong>should not</strong>) delete the role binding between one of your projects and a Google-managed Service Account. It may be possible for you to revert (recreate) the binding but you may not have permission to do this.</p>
<p>I have the following in <code>main.tf</code></p> <pre><code>data &quot;kubectl_path_documents&quot; &quot;yaml-files&quot; { pattern = &quot;${path.module}/manifests/*.yaml&quot; } resource &quot;kubectl_manifest&quot; &quot;yaml-manifests&quot; { for_each = toset(data.kubectl_path_documents.yaml-files.documents) yaml_body = each.value } </code></pre> <p>I would need to parameterize certain fields in the yaml files to be able to deploy a different set of resources for dev vs prod. I knew there was a way to do this if it was one yaml. How should this be done for many yaml files?</p> <pre><code>├── manifests
│   ├── gdp-configmap.yaml
│   ├── gdp-agent-deamonset.yaml
│   ├── gdp-collector-configmap.yaml
│   ├── gdp-collector-deployment.yaml
</code></pre> <p>Any help is appreciated.</p>
<p>I was able to resolve this using a null_resource that executes a sh script with values to be sed'ed.</p>
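<p>For anyone looking for what that can look like, below is a rough, hypothetical sketch of the idea: a <code>null_resource</code> with a <code>local-exec</code> provisioner that runs sed over a template copy of one of the manifests before <code>kubectl_manifest</code> picks it up. The file names, placeholder token and variables are illustrative, not taken from the original setup:</p> <pre><code>resource &quot;null_resource&quot; &quot;render_manifests&quot; {
  # re-run whenever the environment or replica count changes
  triggers = {
    environment = var.environment
    replicas    = var.replicas
  }

  provisioner &quot;local-exec&quot; {
    command = &quot;sed -e 's|__REPLICAS__|${var.replicas}|g' ${path.module}/manifest-templates/gdp-collector-deployment.yaml &gt; ${path.module}/manifests/gdp-collector-deployment.yaml&quot;
  }
}
</code></pre>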
<p>I am using kubernetes to deploy a postgresql pod.</p> <p>My k8s manifest for that works nicely. I only paste the relevant code below:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql spec: ... template: ... spec: containers: - name: postgresql image: postgres:14 ... volumeMounts: - name: my-postgres mountPath: /var/lib/postgresql/data ... volumeClaimTemplates: - metadata: name: my-postgres spec: </code></pre> <p>As you can see above, under the container spec, I declared the <code>volumeMounts</code> with a mount path, and a <code>volumeClaimTemplates</code> section to provision the persistent volume in the cloud.</p> <p>Like I said, it works fine.</p> <p>When I open the Dockerfile of the <a href="https://github.com/docker-library/postgres/blob/4e56664f1797ba4cc0f5917b6d794792a5571b45/14/bullseye/Dockerfile" rel="nofollow noreferrer">official postgresql here</a>, on line 186 there is <code>VOLUME /var/lib/postgresql/data</code>.</p> <p>I would like to verify if I understand the whole thing correctly:</p> <ol> <li><p>Is it so that the line 186 <code>VOLUME ...</code> in the postgres Dockerfile is only meant to tell its official mount point? I mean if I use the Dockerfile to run a container, it DOESN'T create a volume but I need to specify a volume &amp; mount to that mount point. Am I right here?</p> </li> <li><p>In my k8s manifest above, under <code>volumeMounts:</code> the <code>mountPath:</code> value has to be exactly the same as the line 186 <code>VOLUME ...</code> in the postgres Dockerfile, because that is the mount point they declared. Am I right here?</p> </li> </ol>
<ul> <li>I created a simple Dockerfile to examine the &quot;VOLUME&quot; instruction in a Dockerfile, as below:</li> </ul> <pre><code>[root@machine stack]# cat Dockerfile FROM busybox VOLUME /insidecontiner RUN echo &quot;writing From Container&quot; &gt;&gt; /insidecontiner/file CMD sleep 1000 [root@machine stack]# docker build -t test . Sending build context to Docker daemon 7.68kB Step 1/4 : FROM busybox ---&gt; 69593048aa3a Step 2/4 : VOLUME /insidecontiner ---&gt; Running in 71188c3529d8 Removing intermediate container 71188c3529d8 ---&gt; 6a83730242a2 Step 3/4 : RUN echo &quot;writing From Container&quot; &gt;&gt; /insidecontiner/file ---&gt; Running in d73b82b60333 Removing intermediate container d73b82b60333 ---&gt; c2cfb57a967d Step 4/4 : CMD sleep 1000 ---&gt; Running in aff9f1d4a0b3 Removing intermediate container aff9f1d4a0b3 ---&gt; 834cc7a98e91 Successfully built 834cc7a98e91 </code></pre> <ul> <li>There were no docker volumes in the system before running this container</li> </ul> <pre><code>[root@machine stack]# docker volume ls DRIVER VOLUME NAME </code></pre> <ul> <li>Once I created a container using this image, it automatically created a docker volume</li> </ul> <pre><code>[root@machine ~]# docker run -d --name boo test e5994bb0975635354f60a238c19db472700949946c15b6a916e0c2e4ffed33e3 [root@machine ~]# docker volume ls DRIVER VOLUME NAME local 2d7ec8ea6f61cd4c8758d2da666b11b28970b9ff630053bd4109a13f4516c504 [root@machine ~]# docker volume inspect 2d7ec8ea6f61cd4c8758d2da666b11b28970b9ff630053bd4109a13f4516c504 [ { &quot;CreatedAt&quot;: &quot;2022-06-11T13:03:05-04:00&quot;, &quot;Driver&quot;: &quot;local&quot;, &quot;Labels&quot;: null, &quot;Mountpoint&quot;: &quot;/var/lib/docker/volumes/2d7ec8ea6f61cd4c8758d2da666b11b28970b9ff630053bd4109a13f4516c504/_data&quot;, &quot;Name&quot;: &quot;2d7ec8ea6f61cd4c8758d2da666b11b28970b9ff630053bd4109a13f4516c504&quot;, &quot;Options&quot;: null, &quot;Scope&quot;: &quot;local&quot; } ] </code></pre> <ul> <li>That volume is mounted inside the container at <code>/insidecontiner</code>:</li> </ul> <pre><code>[root@machine ~]# docker exec -it boo /bin/sh / # df -h Filesystem Size Used Available Use% Mounted on overlay 50.0G 18.7G 31.3G 37% / tmpfs 64.0M 0 64.0M 0% /dev tmpfs 62.8G 0 62.8G 0% /sys/fs/cgroup shm 64.0M 0 64.0M 0% /dev/shm /dev/mapper/rhel-root 50.0G 18.7G 31.3G 37% /insidecontiner </code></pre> <blockquote> <blockquote> <blockquote> <p>Coming to your question &quot;Is it so that the line 186 VOLUME ... in postgres Dockerfile is only meant to tell its official mount point? I mean if I use the Dockerfile to run a container, it DOESN'T create a volume but I need to specify a volume &amp; mount to that mount point.
Am I right here?&quot;</p> </blockquote> </blockquote> </blockquote> <hr /> <ul> <li><code>/var/lib/postgresql/data</code> is the place where postgres keeps its databases</li> <li>It does create a volume automatically</li> <li>It mounts inside the container at the location mentioned in the Dockerfile</li> </ul> <hr /> <p><strong>Now to the Kubernetes behavior</strong></p> <ul> <li>Creating a pod without mentioning any volumes in the pod yaml runs the pod successfully. The container runtime on the worker node (docker in this example) will create a local docker volume for that container inside the pod</li> </ul> <pre><code>[root@machine stack]# kubectl run boo --image prasasai/test pod/boo created [root@machine stack]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES boo 1/1 Running 0 3m51s 192.168.187.80 machine1 &lt;none&gt; &lt;none&gt; [root@machine1 ~]# docker volume inspect 4dbf6bc943a55c89a45d01c4aa3e860141bb7efe01ce046959c5a208dfab7e3e [ { &quot;CreatedAt&quot;: &quot;2022-06-11T17:26:11-04:00&quot;, &quot;Driver&quot;: &quot;local&quot;, &quot;Labels&quot;: null, &quot;Mountpoint&quot;: &quot;/var/lib/docker/volumes/4dbf6bc943a55c89a45d01c4aa3e860141bb7efe01ce046959c5a208dfab7e3e/_data&quot;, &quot;Name&quot;: &quot;4dbf6bc943a55c89a45d01c4aa3e860141bb7efe01ce046959c5a208dfab7e3e&quot;, &quot;Options&quot;: null, &quot;Scope&quot;: &quot;local&quot; } ] </code></pre> <ul> <li>If I mention a <code>volumeMounts</code> section matching the <code>VOLUME</code> in the Dockerfile, then Kubernetes takes care of the volume mapping and the container runtime does not create the volume:</li> </ul> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: boo name: boo spec: containers: - image: prasasai/test name: boo volumeMounts: - name: mount1 mountPath: /insidecontiner volumes: - name: mount1 hostPath: path: /here </code></pre> <pre><code>[root@machine stack]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES boo 1/1 Running 0 13s 192.168.187.85 machine1 &lt;none&gt; &lt;none&gt; [root@machine1 ~]# docker volume ls DRIVER VOLUME NAME </code></pre> <blockquote> <blockquote> <blockquote> <p>&quot; In my k8s manifest above, under volumeMounts: the mountPath: value has to be exactly the same as the line 186 VOLUME ... in postgres Dockerfile, because that is the mount point they declared. Am I right here?&quot;</p> </blockquote> </blockquote> </blockquote> <hr /> <ul> <li>Yes, that's absolutely correct.</li> </ul> <hr />
<p>This is the configmap.yaml:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ include &quot;test.fullname&quot; . }}-config-map data: brokerConf: | {{ .Values.configmap }} </code></pre> <p>The below values.yaml is fine:</p> <pre><code>configmap: | key=values </code></pre> <p>but this values.yaml fails:</p> <pre><code>configmap: | key=values key2=values2 </code></pre> <p>The rendered helm template core content is:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: test-config-map data: brokerConf: | key=values key2=values2 </code></pre> <p>The error:</p> <pre><code>Error: YAML parse error on test/templates/config-map.yaml: error converting YAML to JSON: yaml: line 9: could not find expected ':' helm.go:84: [debug] error converting YAML to JSON: yaml: line 9: could not find expected ':' YAML parse error on v5-proxy/templates/config-map.yaml helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146 helm.sh/helm/v3/pkg/releaseutil.SortManifests helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106 helm.sh/helm/v3/pkg/action.(*Configuration).renderResources helm.sh/helm/v3/pkg/action/action.go:165 </code></pre> <p>How can I fix it?</p>
<p>You may update the <code>configmap</code> as below:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: config-map data: brokerConf: {{ .Values.configmap| indent 4|trim }} </code></pre> <p>The error is caused by the fact that the 2nd line in <code>data.brokerConf</code> is not indented properly. It renders to something like below, where <code>key2=values2</code> is an invalid statement in the yaml world; the correct form would be <code>key2: values2</code>.</p> <pre><code>configmap: key=values key2=values2 </code></pre> <p>To fix it we have to use <code>indent</code>, but it will additionally indent the first line. To <a href="https://github.com/Masterminds/sprig/issues/54#issuecomment-360572045" rel="nofollow noreferrer">address that</a>, <code>trim</code> is used.</p>
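<p>If you would rather keep the literal block (<code>|</code>) style from the original template, another variant that I believe works is to move the expression to column zero and indent every rendered line, so all lines of the value end up at the same level:</p> <pre><code>data:
  brokerConf: |-
{{ .Values.configmap | indent 4 }}
</code></pre>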
<p>I am trying to delete a namespace but it is stuck in the Terminating state. I tried removing the finalizer and applying a replace, but it did not succeed. Below are the steps and the error.</p> <pre><code>[root@~]# kubectl replace "/api/v1/namespaces/service-catalog/finalize" -f n.json namespace/service-catalog replaced [root@~]# [root@~]# [root@~]# k get ns service-catalog NAME STATUS AGE service-catalog Terminating 6d21h [root@~]# k delete ns service-catalog Error from server (Conflict): Operation cannot be fulfilled on namespaces "service-catalog": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system. </code></pre> <p>In the namespace I had created a few CRD objects, and my best guess is that those are what is preventing the deletion. Right now I cannot recall all the CRD objects that I created.</p> <p>Is there a way I can query all the objects with the <code>finalizer: service-catalog</code>?</p>
<p>I was looking for all the finalizers that were used in our cluster and this worked for me. It checks the object types covered by <code>kubectl get all</code> in all namespaces and returns their finalizers -- you can probably use awk and grep to filter it down to what you're looking for.</p> <p><code>kubectl get all -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers --all-namespaces </code></p> <p>Note, this doesn't return the cluster scoped resources.</p>
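<p>If you also want to cover custom resources that <code>kubectl get all</code> skips, a commonly used variant is to enumerate every namespaced API resource, list what is still left in the stuck namespace, and then check the finalizers of whatever shows up:</p> <pre><code>kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n service-catalog

# then, for each remaining object:
kubectl get &lt;kind&gt;/&lt;name&gt; -n service-catalog -o jsonpath='{.metadata.finalizers}'
</code></pre>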
<p>I am attempting to install my first Kubernetes cluster on an Ubuntu 22.04 server. I'm following the instructions given at this link (<a href="https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/" rel="nofollow noreferrer">https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/</a>). <em>Granted, this is actually for Ubuntu 20.04, but I presume it should be close enough</em>.</p> <p>I am at step #6, and I've just executed the &quot;INIT&quot; command (sudo kubeadm init --pod-network-cidr=10.255.0.0/16) on just the control node {haven't gotten to the worker node yet}. I notice that sometimes (fairly often) &quot;kubectl&quot; executions fail:</p> <pre><code>&lt;nonrootuser&gt;@k8s-cluster1-control1:~/kubernetes$ kubectl get pods -A The connection to the server &lt;controlNodeIPAddress&gt;.1:6443 was refused - did you specify the right host or port? </code></pre> <p>When it does function I see:</p> <pre><code>&lt;nonrootuser&gt;@k8s-cluster1-control1:~/kubernetes$ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-6d4b75cb6d-qmcg7 0/1 Pending 0 8m7s kube-system coredns-6d4b75cb6d-z6ks5 0/1 Pending 0 8m7s kube-system etcd-k8s-cluster1-control1 1/1 Running 13 (4m55s ago) 6m4s kube-system kube-apiserver-k8s-cluster1-control1 1/1 Running 16 (3m43s ago) 9m14s kube-system kube-controller-manager-k8s-cluster1-control1 1/1 Running 5 (3m9s ago) 5m53s kube-system kube-proxy-smp7q 0/1 CrashLoopBackOff 4 (62s ago) 8m8s kube-system kube-scheduler-k8s-cluster1-control1 0/1 CrashLoopBackOff 20 (35s ago) 6m11s </code></pre> <p>According to this link (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state</a>), this isn't normal, but they don't provide much help for debugging it.</p> <p>I am &quot;<em><strong>VERY GREEN</strong></em>&quot; to Kubernetes, but have been using some form of Linux since 1998. I guess my issue is that I can't find a lot of logging in my attempts to debug (or I just don't understand it enough yet). Any pointers on how I might proceed?</p> <p>Thank you!!</p> <p>P.S. My kubeadm version is:</p> <pre><code>kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;24&quot;, GitVersion:&quot;v1.24.1&quot;, GitCommit:&quot;3ddd0f45aa91e2f30c70734b175631bec5b5825a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-05-24T12:24:38Z&quot;, GoVersion:&quot;go1.18.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
<p>It appears that the &quot;default&quot; install of &quot;containerd&quot; (current version 1.6.6 as of today) from the Ubuntu &quot;Jammy&quot; repository does not create a config file (/etc/containerd/config.toml) that functions properly. Even removing the &quot;disabled_plugins = ...&quot; entry (i.e. everything within the file is commented out) does not provide a stable containerd process. My guess is that since the file is provided, the embedded defaults that containerd is built with are not used, and the process then has no usable configuration to run with.</p> <p>Using the instructions here (<a href="https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html" rel="noreferrer">https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html</a>), I was able to generate a new default config file, modify it to enable &quot;SystemdCgroup&quot;, and then restart the containerd and kubelet services.</p> <p>Since the restarts, all the pods that are expected to start have reached a &quot;Running&quot; state, and have not restarted (for over 3 hours).</p>
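<p>For reference, the sequence of commands ends up looking roughly like this (stock Ubuntu paths and service names; adjust if your setup differs):</p> <pre><code># regenerate a full default config instead of the stub shipped by the package
containerd config default | sudo tee /etc/containerd/config.toml

# switch the runc runtime to the systemd cgroup driver (kubeadm defaults the kubelet to systemd)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# restart the runtime and the kubelet so they pick up the new settings
sudo systemctl restart containerd
sudo systemctl restart kubelet
</code></pre>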
<p>Hi all, I'm working on designing a Kubernetes sidecar/adapter pattern for a Spring Boot application to export Prometheus metrics. I am aware that we can use the Maven dependency <code>io.micrometer:micrometer-registry-prometheus</code> to expose Prometheus metrics directly. However, with my limited knowledge, I learned that with the adapter/sidecar pattern we can extend apps in Kubernetes without changing their code. Being new to Kubernetes concepts, I am struggling to figure out whether this can be achieved with the adapter/sidecar pattern or not. In my research, I found an <a href="https://learnk8s.io/sidecar-containers-patterns" rel="nofollow noreferrer">example</a> of exposing Elasticsearch metrics to Prometheus using the adapter pattern, since Elasticsearch does not natively support exporting Prometheus metrics.</p> <p>I need help with the following points:</p> <ul> <li>Since Spring Boot provides a way to export Prometheus metrics using Micrometer, is it the correct approach to use a sidecar/adapter pattern to export metrics?</li> <li>If yes, can you please guide me on how we can achieve this using the sidecar/adapter pattern in Kubernetes?</li> <li>Any example would be a great help.</li> </ul>
<p>One way of accomplishing this is to use a solution such as OpenTelemetry. You can choose to auto-instrument your application using OTel agents and corresponding collectors (which run as a sidecar container).</p> <p>An example of this for the Java Petstore is here: <a href="https://signoz.io/blog/opentelemetry-spring-boot/" rel="nofollow noreferrer">https://signoz.io/blog/opentelemetry-spring-boot/</a></p> <p>While that example is not for Prometheus, it conveys the idea. In place of the SigNoz exporter there, you can export to a Prometheus sink and auto-instrument as well as trace your application.</p>
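<p>As a rough sketch of what the sidecar layout looks like in a pod spec (the image tags, ports and the ConfigMap name are illustrative placeholders, not a drop-in config):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: spring-app
spec:
  containers:
  - name: app
    image: my-registry/spring-app:latest        # your Spring Boot image, unchanged
    ports:
    - containerPort: 8080
  - name: otel-collector                        # the adapter/sidecar container
    image: otel/opentelemetry-collector-contrib:latest
    args: [&quot;--config=/etc/otel/config.yaml&quot;]
    volumeMounts:
    - name: otel-config
      mountPath: /etc/otel
  volumes:
  - name: otel-config
    configMap:
      name: otel-collector-config               # receivers/exporters live here (e.g. a prometheus exporter)
</code></pre> <p>The main design point is that the app container stays untouched; only the collector's configuration decides where the metrics end up.</p>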
<p>I downloaded Kubernetes for integration with Jenkins and created a Serviceaccount, but the secret is not automatically created.</p> <p>In the past, I remember that a Secret was automatically created and the Token was mounted when Serviceaccount was created.</p> <p>How can I automatically create a Secret as before?</p>
<p>As mentioned by @P.... In kubernetes version 1.24 this behaviour has been changed, the <strong>LegacyServiceAccountTokenNoAutoGeneration</strong> feature gate is enabled by default in 1.24.</p> <p>New secrets containing service account tokens are no longer auto-generated and are not automatically ambient in secrets in 1.24+. Existing secrets containing service account tokens are still usable.</p> <p>API clients scraping token content from auto-generated Secret API objects must start using the <a href="https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/" rel="noreferrer">TokenRequest API</a> to obtain a token (preferred, available in all supported versions), or you can explicitly request a secret-based token if a secret-based token is desired/needed.</p> <p>Refer <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token" rel="noreferrer">manually create a service account API token</a> to explicitly request a secret-based token.</p>
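<p>For example, a secret-based token for a hypothetical <code>jenkins</code> ServiceAccount can still be requested explicitly; the token controller fills in the token once the Secret is created:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: jenkins-sa-token                          # any name you like
  annotations:
    kubernetes.io/service-account.name: jenkins   # must reference an existing ServiceAccount
type: kubernetes.io/service-account-token
</code></pre> <p>Alternatively, <code>kubectl create token jenkins</code> (available from kubectl 1.24) returns a short-lived token through the TokenRequest API without creating a Secret at all.</p>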
<p>I'm trying to get <code>RAW</code> data from an operator CR and I'm getting an <strong>empty object</strong> (all other values are working as expected). I've created a <strong>minimal example</strong> for the issue; in the example I'm trying to read the <code>infrastructureConfig</code> object.</p> <p><strong>The tricky part here is that my struct references another struct which has a property of raw-data type.</strong></p> <p><a href="https://github.com/JennyMet/gardner_test" rel="nofollow noreferrer">https://github.com/JennyMet/gardner_test</a></p> <p>Here is the simple CR:</p> <p><code>https://github.com/JennyMet/gardner_test/blob/main/config/samples/mygroup_v1alpha1_rawtest.yaml#L11</code></p> <p>Here is where I'm trying to read the data and getting an empty object; any idea?</p> <p><a href="https://github.com/JennyMet/gardner_test/blob/main/controllers/rawtest_controller.go#L70" rel="nofollow noreferrer">https://github.com/JennyMet/gardner_test/blob/main/controllers/rawtest_controller.go#L70</a></p> <p>This is the reference of the type which I'm using: <a href="https://github.com/gardener/gardener/blob/5522be0e17ccf38aae36efb9fdb6463c66d6e4f1/pkg/apis/core/v1beta1/types_shoot.go#L1184" rel="nofollow noreferrer">https://github.com/gardener/gardener/blob/5522be0e17ccf38aae36efb9fdb6463c66d6e4f1/pkg/apis/core/v1beta1/types_shoot.go#L1184</a></p> <p><strong>I think it's related to the field</strong></p> <p><code>x-kubernetes-preserve-unknown-fields: true</code> (<a href="https://book.kubebuilder.io/reference/markers/crd-processing.html" rel="nofollow noreferrer">https://book.kubebuilder.io/reference/markers/crd-processing.html</a>), which is missing. How can I add it to the schema here?</p> <p><a href="https://github.com/JennyMet/gardner_test/blob/main/api/v1alpha1/rawtest_types.go#L32" rel="nofollow noreferrer">https://github.com/JennyMet/gardner_test/blob/main/api/v1alpha1/rawtest_types.go#L32</a>, as under the hood it uses</p> <p><a href="https://github.com/gardener/gardener/blob/5522be0e17ccf38aae36efb9fdb6463c66d6e4f1/pkg/apis/core/v1beta1/types_shoot.go#L1184" rel="nofollow noreferrer">https://github.com/gardener/gardener/blob/5522be0e17ccf38aae36efb9fdb6463c66d6e4f1/pkg/apis/core/v1beta1/types_shoot.go#L1184</a></p> <p>I mean, I tried it and it doesn't work, as the <code>InfrastructureConfig</code>, which is raw, is under the</p> <pre><code>type System struct {
	Type system `json:&quot;type,omitempty&quot;`
	// +kubebuilder:pruning:PreserveUnknownFields
	Provider v1beta1.Provider `json:&quot;provider,omitempty&quot;`
}
</code></pre> <p>But the raw data is under <code>Provider</code>, which is not my struct; <strong>I'm just using it.</strong></p> <p>It looks like this; see the <code>InfrastructureConfig</code> type:</p> <pre><code>type Provider struct {
	Type string `json:&quot;type&quot; protobuf:&quot;bytes,1,opt,name=type&quot;`
	ControlPlaneConfig *runtime.RawExtension `json:&quot;controlPlaneConfig,omitempty&quot; protobuf:&quot;bytes,2,opt,name=controlPlaneConfig&quot;`
	InfrastructureConfig *runtime.RawExtension `json:&quot;infrastructureConfig,omitempty&quot; protobuf:&quot;bytes,3,opt,name=infrastructureConfig&quot;`
}
</code></pre>
<p>Currently, you can only put the <code>// +kubebuilder:pruning:PreserveUnknownFields</code> marker on the <code>Provider v1beta1.Provider</code> field, which means all sub-fields in it will be allowed to carry additional unknown fields.</p> <p>The good news is that your problem will be solved once <a href="https://github.com/kubernetes-sigs/controller-tools/pull/683" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/controller-tools/pull/683</a> is merged. After that, you will not have to use <code>// +kubebuilder:pruning:PreserveUnknownFields</code>, and controller-tools will automatically add <code>x-kubernetes-preserve-unknown-fields: true</code> for all <code>RawExtension</code> fields.</p>
<p>I'm doing some <code>chaos testing</code> in <code>K8s</code>. My platform uses the <code>Istio envoy sidecar</code>, and as one of the use cases of my chaos scenarios, I would like to stop/kill an Envoy proxy without killing the service container and see what the standard behavior is.</p> <p>So far I haven't managed to figure out how to do it through <code>kubectl/istioctl</code>.</p> <p>Any idea how to accomplish this?</p> <p>Regards</p>
<p>Use the <code>/quitquitquit</code> endpoint:</p> <p><a href="https://www.envoyproxy.io/docs/envoy/latest/operations/admin#post--quitquitquit" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/operations/admin#post--quitquitquit</a></p> <pre><code>curl -sf -XPOST http://127.0.0.1:15020/quitquitquit
</code></pre>
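<p>From outside the pod, the same call can be made with <code>kubectl exec</code>, assuming curl is available in the sidecar image (the default, non-distroless proxy image ships it):</p> <pre><code>kubectl exec &lt;pod-name&gt; -c istio-proxy -- curl -sf -XPOST http://127.0.0.1:15020/quitquitquit
</code></pre>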
<p><strong>Requirement</strong>: With private GKE (version <strong>1.21.11-gke.1100</strong>), each pod is required to have a dedicated public IP as its source IP when reaching the internet. It is not required for ingress but only for egress.</p> <p><strong>Solution tried</strong>: Cloud NAT. It works partially. Meaning, suppose we have 10 pods and each of them is made to run on a distinct node. Cloud NAT does not assign a unique IP to each pod even when the <code>Minimum ports per VM instance</code> is set to the maximum possible value of <code>57344</code>.</p> <p><strong>Experiment Done</strong>: 10 NAT gateway IPs are assigned to the NAT Gateway. 8 pods are created, each running on a dedicated node. Cloud NAT assigned only 3 Cloud NAT IPs instead of 8 even though there are 10 IPs available.</p> <p>Cloud NAT is configured as below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Configuration</th> <th>Setting</th> </tr> </thead> <tbody> <tr> <td><code>Manual NAT IP address assignment</code></td> <td><strong>true</strong></td> </tr> <tr> <td><code>Dynamic port allocation</code></td> <td><code>disabled</code></td> </tr> <tr> <td><code>Minimum ports per VM instance</code></td> <td><code>57344</code>. This decides how many VMs can be assigned to the same Cloud NAT IP.</td> </tr> <tr> <td><code>Endpoint-Independent Mapping</code></td> <td><code>disabled</code></td> </tr> </tbody> </table> </div> <p><strong>Instead of converting to a Public GKE cluster</strong>, is there an easier way of achieving this goal?</p> <p>Has anyone ever done such a setup that is proven to work?</p>
<p>You can create a <strong>NAT gateway instance</strong> and forward the traffic from there.</p> <p>Here is a Terraform script to create it: <a href="https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples</a></p> <p><a href="https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e" rel="nofollow noreferrer">https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e</a></p> <p>If you are looking to use Cloud NAT with routes, you can check out this: <a href="https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#private-clusters" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#private-clusters</a></p> <p>TF code for NAT: <a href="https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/terraform/network.tf#L84" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/terraform/network.tf#L84</a></p> <p>Demo architecture: <a href="https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#demo-architecture" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#demo-architecture</a></p>
<p>I know how to list the pods for a certain namespace as follows:</p> <pre><code>kubectl get pods -n mynamespace </code></pre> <p>There is one <code>POD</code> on the list in which I am interested. I would like to be able to see what <code>yaml</code> deployment file was used for the creation of that pod, and its content. Moreover, what would be the <code>kubectl</code> command to edit such a <code>yaml</code> file?</p>
<p>To get the deployment used to generate a pod, have to look at the pod yaml or describe the pod:</p> <pre><code>kubectl get pod -n {namespace} {pod-name} -o yaml </code></pre> <p>Look for the &quot;<code>ownerReferences</code>&quot; section. This will tell you what &quot;owns&quot; the pod. This will likely be a ReplicaSet (which is generated by the Deployment). For example, this is my one from my canonical-service-controller-manager pod:</p> <pre><code>$ kubectl get pod -n asm-system canonical-service-controller-manager-67c8f5fff5-dzd4t -o yaml apiVersion: v1 kind: Pod metadata: ... namespace: asm-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: canonical-service-controller-manager-67c8f5fff5 uid: 2504c96b-9ef7-45fa-b5d2-2a20f09486e1 resourceVersion: &quot;13507&quot; uid: 54748cf7-26f8-4e8a-996a-725dbc89c66b ... </code></pre> <p>You can then find the replicaset and do the same thing. In my example, the replicaset is called &quot;<code>canonical-service-controller-manager-67c8f5fff5</code>&quot;</p> <pre><code>$ kubectl get replicaset -n asm-system canonical-service-controller-manager-67c8f5fff5 -o yaml apiVersion: apps/v1 kind: ReplicaSet metadata: annotations: ... generation: 1 labels: ... name: canonical-service-controller-manager-67c8f5fff5 namespace: asm-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: Deployment name: canonical-service-controller-manager uid: eafe2590-5005-492f-9673-0f02ac74a4d5 spec: replicas: 1 ... </code></pre> <p>So you can see here, the deployment that created the replicaset, which in turn created my pod is <code>canonical-service-controller-manager</code></p> <p>So now I can get the deployment yaml using</p> <pre><code>$ kubectl get deployment -n asm-system canonical-service-controller-manager -o yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: ... creationTimestamp: &quot;2022-06-14T06:35:58Z&quot; generation: 1 labels: ... name: canonical-service-controller-manager namespace: asm-system spec: ... </code></pre> <p>If I wanted to edit the deployment, I can do so using this:</p> <pre><code>kubectl edit deployment -n asm-system canonical-service-controller-manager </code></pre> <p>Bear in mind, that editing the deployment will cycle all the pods in the deployment as they are all replicas.</p>
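<p>If you only want the owner without reading the whole manifest, a jsonpath query works too (same placeholder style as above):</p> <pre><code>kubectl get pod -n {namespace} {pod-name} \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
</code></pre>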
<p>I'm creating multiple ingress controller in different namespaces. Initially, it's creating a load balancer in AWS and attached pod IP addresses to target groups. After some days it is not updating the new pod IP to the target group. I've attached the ingress controller logs here.</p> <pre><code>E0712 15:02:30.516295 1 leaderelection.go:270] error retrieving resource lock namespace1/ingress-controller-leader-alb: configmaps &quot;ingress-controller-le β”‚ β”‚ ader-alb&quot; is forbidden: User &quot;system:serviceaccount:namespace1:fc-serviceaccount-icalb&quot; cannot get resource &quot;configmaps&quot; in API group &quot;&quot; in the namespace &quot;namespace1&quot; </code></pre> <p>ingress.yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: &quot;fc-ingress&quot; annotations: kubernetes.io/ingress.class: alb-namespace1 alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/subnets: alb.ingress.kubernetes.io/certificate-arn: alb.ingress.kubernetes.io/ssl-policy: alb.ingress.kubernetes.io/security-groups: alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;: 443}]' alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-port: traffic-port alb.ingress.kubernetes.io/healthcheck-path: '/' alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2' alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5' alb.ingress.kubernetes.io/success-codes: '200' alb.ingress.kubernetes.io/healthy-threshold-count: '5' alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=false alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=false alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=0 alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=300 alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=false labels: app: fc-label-app-ingress spec: rules: - host: &quot;hostname1.com&quot; http: paths: - backend: serviceName: service1 servicePort: 80 - host: &quot;hostname2.com&quot; http: paths: - backend: serviceName: service2 servicePort: 80 - host: &quot;hostname3.com&quot; http: paths: - backend: serviceName: service3 servicePort: 80 </code></pre> <p>ingress_controller.yaml</p> <pre><code># Application Load Balancer (ALB) Ingress Controller Deployment Manifest. # This manifest details sensible defaults for deploying an ALB Ingress Controller. # GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller apiVersion: apps/v1 kind: Deployment metadata: labels: app: fc-label-app-icalb name: fc-ingress-controller-alb namespace: namespace1 # Namespace the ALB Ingress Controller should run in. Does not impact which # namespaces it's able to resolve ingress resource for. For limiting ingress # namespace scope, see --watch-namespace. spec: replicas: 1 selector: matchLabels: app: fc-label-app-icalb strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app: fc-label-app-icalb spec: containers: - args: # Limit the namespace where this ALB Ingress Controller deployment will # resolve ingress resources. If left commented, all namespaces are used. 
- --watch-namespace=namespace1 # Setting the ingress-class flag below ensures that only ingress resources with the # annotation kubernetes.io/ingress.class: &quot;alb&quot; are respected by the controller. You may # choose any class you'd like for this controller to respect. - --ingress-class=alb-namespace1 # Name of your cluster. Used when naming resources created # by the ALB Ingress Controller, providing distinction between # clusters. - --cluster-name=$EKS_CLUSTER_NAME # AWS VPC ID this ingress controller will use to create AWS resources. # If unspecified, it will be discovered from ec2metadata. # - --aws-vpc-id=vpc-xxxxxx # AWS region this ingress controller will operate in. # If unspecified, it will be discovered from ec2metadata. # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region # - --aws-region=us-west-1 # Enables logging on all outbound requests sent to the AWS API. # If logging is desired, set to true. # - ---aws-api-debug # Maximum number of times to retry the aws calls. # defaults to 10. # - --aws-max-retries=10 env: # AWS key id for authenticating with the AWS API. # This is only here for examples. It's recommended you instead use # a project like kube2iam for granting access. #- name: AWS_ACCESS_KEY_ID # value: KEYVALUE # AWS key secret for authenticating with the AWS API. # This is only here for examples. It's recommended you instead use # a project like kube2iam for granting access. #- name: AWS_SECRET_ACCESS_KEY # value: SECRETVALUE # Repository location of the ALB Ingress Controller. image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4 imagePullPolicy: Always name: server resources: {} terminationMessagePath: /dev/termination-log dnsPolicy: ClusterFirst restartPolicy: Always securityContext: {} terminationGracePeriodSeconds: 30 serviceAccountName: fc-serviceaccount-icalb --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: app: fc-label-app-icalb name: fc-clusterrole-icalb rules: - apiGroups: - &quot;&quot; - extensions resources: - configmaps - endpoints - events - ingresses - ingresses/status - services verbs: - create - get - list - update - watch - patch - apiGroups: - &quot;&quot; - extensions resources: - nodes - pods - secrets - services - namespaces verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: app: fc-label-app-icalb name: fc-clusterrolebinding-icalb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: fc-clusterrole-icalb subjects: - kind: ServiceAccount name: fc-serviceaccount-icalb namespace: namespace1 --- apiVersion: v1 kind: ServiceAccount metadata: labels: app: fc-label-app-icalb name: fc-serviceaccount-icalb namespace: namespace1 </code></pre>
<p>I have had an issue like that on <strong>AKS</strong>. I have two Nginx Ingress Controllers:</p> <ol> <li>external-nginx-ingress</li> <li>internal-nginx-ingress</li> </ol> <p>Only one worked at a time, Internal or external.</p> <p>After specifying a unique <strong><code>election-id</code></strong> for each one the problem was fixed.</p> <p>I use the following HELM chart:</p> <pre><code>Repository = &quot;https://kubernetes.github.io/ingress-nginx&quot; Chart = &quot;ingress-nginx&quot; Chart_version = &quot;4.1.3&quot; K8s Version = &quot;1.22.4&quot; </code></pre> <p><strong>Deployment</strong></p> <pre class="lang-bash prettyprint-override"><code>kubectl get deploy -n ingress NAME READY UP-TO-DATE AVAILABLE external-nginx-ingress-controller 3/3 3 3 internal-nginx-ingress-controller 1/1 1 1 </code></pre> <p><strong>IngressClass</strong></p> <pre class="lang-bash prettyprint-override"><code>kubectl get ingressclass NAME CONTROLLER PARAMETERS external-nginx k8s.io/ingress-nginx &lt;none&gt; internal-nginx k8s.io/internal-ingress-nginx &lt;none&gt; </code></pre> <p><strong>Deployment for External</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: external-nginx-ingress-controller namespace: ingress annotations: meta.helm.sh/release-name: external-nginx-ingress meta.helm.sh/release-namespace: ingress spec: replicas: 3 selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: external-nginx-ingress app.kubernetes.io/name: ingress-nginx template: spec: containers: - name: ingress-nginx-external-controller image: &gt;- k8s.gcr.io/ingress-nginx/controller:v1.2.1 args: - /nginx-ingress-controller - &gt;- --publish-service=$(POD_NAMESPACE)/external-nginx-ingress-controller - '--election-id=external-ingress-controller-leader' - '--controller-class=k8s.io/ingress-nginx' - '--ingress-class=external-nginx' - '--ingress-class-by-name=true' </code></pre> <p><strong>Deployment for Internal</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: internal-nginx-ingress-controller namespace: ingress annotations: meta.helm.sh/release-name: internal-nginx-ingress meta.helm.sh/release-namespace: ingress spec: replicas: 1 selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: internal-nginx-ingress app.kubernetes.io/name: ingress-nginx template: spec: containers: - name: ingress-nginx-internal-controller image: &gt;- k8s.gcr.io/ingress-nginx/controller:v1.2.1 args: - /nginx-ingress-controller - &gt;- --publish-service=$(POD_NAMESPACE)/internal-nginx-ingress-controller - '--election-id=internal-ingress-controller-leader' - '--controller-class=k8s.io/internal-ingress-nginx' - '--ingress-class=internal-nginx' - '--ingress-class-by-name=true' </code></pre>
<p>When learning about Kubernetes CNIs, I heard some plugins are using BGP or VXLAN under the hood.</p> <p>On the internet, the Border Gateway Protocol (BGP) manages how packets are routed between edge routers.</p> <p><strong>Autonomous systems (AS)</strong> are networks of routers managed by a single enterprise or service provider, for example <a href="https://db-ip.com/as32934-facebook-inc" rel="nofollow noreferrer">Facebook</a> and <a href="https://db-ip.com/as15169-google-llc" rel="nofollow noreferrer">Google</a>.</p> <p>Autonomous systems (AS) communicate with peers and form a mesh.</p> <p><a href="https://i.stack.imgur.com/vI6it.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vI6it.png" alt="enter image description here" /></a></p> <p>But I still can't figure out how the CNI plugin takes advantage of BGP.</p> <p>Imagine there is a Kubernetes cluster composed of 10 nodes, and Calico is the chosen CNI plugin.</p> <ul> <li><p>Who plays the <strong>Autonomous System (AS)</strong> role? Is each node an AS?</p> </li> <li><p>How are packets forwarded from one node to another node? Is iptables still required?</p> </li> </ul>
<p>The CNI plugin is responsible for allocating IP addresses (IPAM) and ensuring that packets get where they need to get.</p> <p>For Calico specifically, you can get a lot of information from the <a href="https://projectcalico.docs.tigera.io/reference/architecture/overview#ipam-plugin" rel="nofollow noreferrer">architecture page</a> as well as the Calico <a href="https://projectcalico.docs.tigera.io/reference/architecture/design/l3-interconnect-fabric" rel="nofollow noreferrer">network design</a> memoirs.</p> <p>Whenever a new Pod is created, the IPAM plugin allocates an IP address from the global pool and the Kubernetes scheduler assigns the Pod to a Node. The Calico CNI plugin (like any other) configures the networking stack to accept connections to the Pod IP and routes them to the processes inside. This happens with <code>iptables</code> and uses a helper process called Felix.</p> <p>Each Node also runs a BIRD (BGP) daemon that watches for these configuration events: &quot;IP 10.x.y.z is hosted on node A&quot;. These configuration events are turned into BGP updates and sent to other nodes using the open BGP sessions. When the other nodes receive these BGP updates, they program the node route table (with simple <code>ip route</code> commands) to ensure the node knows how to reach the Pod. In this model, yes, every node is an AS.</p> <p>What I just described is the &quot;AS per compute server&quot; model: it is suitable for small deployments in environments where nodes are not necessarily on the same L2 network. The problem is that each node needs to maintain a BGP session with every other node, which scales as O(N^2).</p> <p>For larger deployments therefore, a compromise is to run one AS per rack of compute servers (&quot;AS per rack&quot;). Each top of rack switch then runs BGP to communicate routes to other racks, while the switch internally knows how to route packets.</p>
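<p>If you want to see this on a running Calico cluster, the programmed routes and BGP peerings are visible from any node; the output below is only illustrative, and the CIDRs, interface names and next hops will differ in your environment:</p> <pre><code># routes programmed by Felix/BIRD for pod address blocks hosted on other nodes
ip route | grep bird
# e.g. 10.244.1.0/26 via 192.168.1.12 dev eth0 proto bird

# the node's BGP peerings (the node-to-node mesh), if calicoctl is installed
sudo calicoctl node status
</code></pre>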
<p>I am trying to access an external postgres from my pod. The problem is that in order to do this, I need to allow in the external database <code>pg_hba.conf</code> the &quot;host address&quot;/ &quot;IP&quot; of the pod. It is clear that I can temporarily use the address of the node, e.g. someNode.online-server.cloud.</p> <p>The problem is that of course, if the pod restarts, it might restart on another node. For the converse problem, I could use a service/endpoint that would provide an anchor for all external traffic to go through... Is there a way to do something like this in my case? I am thinking port forwarding on a host can be both ways, but not sure what to do in K8s.</p>
<p>It's <a href="https://www.postgresql.org/docs/current/auth-pg-hba-conf.html" rel="nofollow noreferrer">documented</a> that the address field can be a CIDR.</p> <blockquote> <p>Specifies the client machine address(es) that this record matches. This field can contain either a host name, an IP address range, or one of the special key words mentioned below.</p> </blockquote> <p>Therefore, you can add the CIDR of the subnet of your cluster, given the assumption, this is within a private network.</p> <p>This might look something like this:</p> <pre><code># TYPE DATABASE USER ADDRESS METHOD host app app 192.168.0.0/16 scram-sha-256 </code></pre> <p>If this goes through the public web, all pods are likely to go through the same gateway and therefore get the same IP assigned, which you can use.</p> <p>Alternatively, you can also use a hostname.</p>
<p>Assuming I have this values.yaml under my helm chart -</p> <pre class="lang-yaml prettyprint-override"><code>tasks: - name: test-production-dev env: - production - dev - name: test-dev env: - dev - name: test-all environment_variables: STAGE: dev </code></pre> <p>I would like to run my cronjob based on these values -</p> <ul> <li>if <code>.env</code> doesn't exist - run any time.</li> <li>if <code>.env</code> exists - run only if <code>environment_variables.STAGE</code> is in the <code>.env</code> list.</li> </ul> <p>This is what I've done so far ( with no luck ) -</p> <pre class="lang-yaml prettyprint-override"><code>{{- range $.Values.tasks}} # check if $value.env not exists OR contains stage {{if or .env (hasKey .env &quot;$.Values.environment_variables.STAGE&quot;) }} apiVersion: batch/v1 kind: CronJob ... {{- end}} --- {{- end}} </code></pre>
<p>values.yaml</p> <pre class="lang-yaml prettyprint-override"><code>tasks: - name: test-production-dev env: - production - dev - name: test-dev env: - dev - name: test-all - name: test-production env: - production environment_variables: STAGE: dev </code></pre> <p>template/xxx.yaml</p> <p>plan a</p> <pre class="lang-yaml prettyprint-override"><code>... {{- range $.Values.tasks }} {{- $flag := false }} {{- if .env }} {{- range .env }} {{- if eq . $.Values.environment_variables.STAGE }} {{- $flag = true }} {{- end }} {{- end }} {{- else }} {{- $flag = true }} {{- end }} {{- if $flag }} apiVersion: batch/v1 kind: CronJob meta: name: {{ .name }} {{- end }} {{- end }} ... </code></pre> <p>plan b</p> <pre class="lang-yaml prettyprint-override"><code>... {{- range $.Values.tasks }} {{- if or (not .env) (has $.Values.environment_variables.STAGE .env) }} apiVersion: batch/v1 kind: CronJob meta: name: {{ .name }} {{- end }} {{- end }} ... </code></pre> <p>output</p> <pre class="lang-yaml prettyprint-override"><code>... apiVersion: batch/v1 kind: CronJob meta: name: test-production-dev apiVersion: batch/v1 kind: CronJob meta: name: test-dev apiVersion: batch/v1 kind: CronJob meta: name: test-all ... </code></pre>
<p>I am getting an unfamiliar error that is keeping my k8s cluster from starting when I run <code>skaffold dev</code>.</p> <blockquote> <p>➜ ticketing git:(master) βœ— kubectl logs client-depl-d86b58456-tv8l2</p> <blockquote> <p>[email protected] dev next</p> </blockquote> <p>Browserslist: caniuse-lite is outdated. Please run: npx browserslist@latest --update-db ready - started server on http://localhost:3000 node:internal/crypto/hash:67 this[kHandle] = new _Hash(algorithm, xofLen); ^</p> <p>Error: error:0308010C:digital envelope routines::unsupported at new Hash (node:internal/crypto/hash:67:19) at Object.createHash (node:crypto:133:10) at module.exports (/app/node_modules/webpack/lib/util/createHash.js:135:53) at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:417:16) at handleParseError (/app/node_modules/webpack/lib/NormalModule.js:471:10) at /app/node_modules/webpack/lib/NormalModule.js:503:5 at /app/node_modules/webpack/lib/NormalModule.js:358:12 at /app/node_modules/loader-runner/lib/LoaderRunner.js:373:3 at iterateNormalLoaders (/app/node_modules/loader-runner/lib/LoaderRunner.js:214:10) at Array. (/app/node_modules/loader-runner/lib/LoaderRunner.js:205:4) {<br /> opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ], library: 'digital envelope routines', reason: 'unsupported', code: 'ERR_OSSL_EVP_UNSUPPORTED' }</p> <p>Node.js v18.3.0</p> </blockquote> <p>I did run this command: <code>npx browserslist@latest --update-db</code>, that did not solve it.</p>
<p>I had to go inside the <code>package.json</code> file of my frontend client application and change the start script to <code>NODE_OPTIONS=--openssl-legacy-provider next dev</code></p> <p>Error was documented here:</p> <p><a href="https://github.com/nodejs/node/issues/40455" rel="nofollow noreferrer">https://github.com/nodejs/node/issues/40455</a></p> <p>I hope this helps someone else.</p>
<p>I have 2 Springboot microservices, one is the Eureka server and the other one is the Gateway.</p> <p>I can't find the right configuration to make the Gateway register to the Eureka server.</p> <p>This is the <code>eureka.yml</code> with the K8s configuration:</p> <p><strong>Eureka.yml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: eureka-cm data: eureka_service_address: http://eureka-0.eureka:8761/eureka --- apiVersion: v1 kind: Service metadata: name: eureka labels: app: eureka spec: clusterIP: None ports: - port: 8761 name: eureka selector: app: eureka --- apiVersion: apps/v1 kind: StatefulSet metadata: name: eureka spec: serviceName: &quot;eureka&quot; replicas: 1 selector: matchLabels: app: eureka template: metadata: labels: app: eureka spec: containers: - name: eureka image: myrepo/eureka1.0:eureka imagePullPolicy: Always ports: - containerPort: 8761 env: - name: EUREKA_SERVER_ADDRESS valueFrom: configMapKeyRef: name: eureka-cm key: eureka_service_address --- apiVersion: v1 kind: Service metadata: name: eureka-lb labels: app: eureka spec: selector: app: eureka type: NodePort ports: - port: 80 targetPort: 8761 </code></pre> <p><strong>Eureka.application.yml</strong></p> <pre><code>spring: application: name: eureka server: port: 8761 eureka: instance: hostname: &quot;${HOSTNAME}.eureka&quot; client: register-with-eureka: false fetch-registry: false serviceUrl: defaultZone: ${EUREKA_SERVER_ADDRESS} </code></pre> <p><strong>Gateway.yml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: cloud-gateway-app labels: app: cloud-gateway-app spec: replicas: 1 selector: matchLabels: app: cloud-gateway-app template: metadata: labels: app: cloud-gateway-app spec: containers: - name: cloud-gateway-app image: myrepo/gateway1.0:gateway imagePullPolicy: Always ports: - containerPort: 9191 --- apiVersion: v1 kind: Service metadata: name: cloud-gateway-svc spec: type: LoadBalancer ports: - port: 80 targetPort: 9191 protocol: TCP selector: app: cloud-gateway-app </code></pre> <p><strong>Gateway.application.yml</strong></p> <pre><code>eureka: instance: preferIpAddress: true hostname: eureka-0 client: registerWithEureka: true fetchRegistry: true serviceUrl: defaultZone: http://eureka-0.eureka.default.svc.cluster.local:8761/eureka </code></pre> <p>This is the error I got when I check the logs of the Gateway's pod:</p> <pre><code>error on POST request for &quot;http://eureka-0.eureka.default.svc.cluster.local:8761/eureka/apps/API-GATEWAY&quot;: eureka-0.eureka.default.svc.cluster.local; nested exception is java.net.UnknownHostException: eureka-0.eureka.default.svc.cluster.local </code></pre> <p>Following the documentation I've tried to set <code>defaultZone</code> property of the <strong>Gateway.application.properties</strong> file following this pattern:</p> <pre><code>172-17-0-3.default.pod.cluster.local:8761/eureka </code></pre> <p>But in this way too, I can't subscribe to the Eureka Server.</p>
<p>I resolved by modifying the Gateway.application.yml in this way:</p> <pre><code>eureka: instance: preferIpAddress: true hostname: eureka-0 client: registerWithEureka: true fetchRegistry: true serviceUrl: defaultZone: http://eureka-0.eureka:8761/eureka/ </code></pre> <p>EDIT:</p> <p>I'm encountering some problems in registering other microservices to the Eureka Server. I've tried by increasing the replicas of the Eureka Server and made each microservice register to a dedicated replica, but as now this is not working.</p>
<p>How can I rewrite an URI in a VirtualService but just internally? In other words:</p> <ol> <li>A client requests api.example.com/users/get/87</li> <li>IngressGateway receives the request and translates it to api.example.com/get/87 to the &quot;users&quot; pod. The pod's web server doesn't &quot;know&quot; the URL &quot;api.example.com/users/get/87&quot; but &quot;knows&quot; &quot;api.example.com/get/87&quot;</li> <li>To the user's browser, it will still be appearing as &quot;api.example.com/users/get/87&quot; not rewrited.</li> </ol> <p>I need to do it that way because there are several pod/services to respond this same domain &quot;api.example.com&quot;</p> <p>If write it like this below, it doesn't work because it will cause conflict since I have many pod/services on the same domain:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: api namespace: default spec: hosts: - &quot;api.example.com&quot; gateways: - istio-system/default-gateway http: - match: - uri: prefix: /users rewrite: uri: &quot;/&quot; route: - destination: host: user port: number: 80 - match: - uri: prefix: /cars rewrite: uri: &quot;/&quot; route: - destination: host: user port: number: 80 </code></pre>
<p>With a URL rewrite, the browser URL is not changed to the rewritten URL. For example, &quot;http://xx.xx.xx.xx:8080/api/v1/products&quot; is the URL that was requested in the browser. If in the VirtualService you use <code>rewrite: uri: /productpage</code>, the browser URL will still remain &quot;http://xx.xx.xx.xx:8080/api/v1/products&quot; instead of &quot;http://xx.xx.xx.xx:8080/productpage&quot;; only the request sent to the backend is rewritten.</p>
<p>Im using bitnami helm chart for keycloak. and trying to achieve High availability with 3 keycloak replics, using DNS ping.</p> <pre><code>Chart version: 5.2.8 Image version: 15.1.1-debian-10-r10 Helm repo: https://charts.bitnami.com/bitnami -&gt; bitnami/keycloak </code></pre> <p>The modified parameters of values.yaml file is as follows:</p> <pre><code>global: image: registry: docker.io repository: bitnami/keycloak tag: 15.1.1-debian-10-r10 pullPolicy: IfNotPresent pullSecrets: [] debug: true proxyAddressForwarding: true serviceDiscovery: enabled: true protocol: dns.DNS_PING properties: - dns_query=keycloak.identity.svc.cluster.local transportStack: tcp cache: ownersCount: 3 authOwnersCount: 3 replicaCount: 3 ingress: enabled: true hostname: my-keycloak.keycloak.example.com apiVersion: &quot;&quot; ingressClassName: &quot;nginx&quot; path: / pathType: ImplementationSpecific annotations: {} tls: false extraHosts: [] extraTls: [] secrets: [] existingSecret: &quot;&quot; servicePort: http </code></pre> <p>When login to the keycloak UI, after entering the username and password , the login does not happen, it redirects the back to login page.</p> <p>From the pod logs I see following error:</p> <pre><code>0:07:05,251 WARN [org.keycloak.events] (default task-1) type=CODE_TO_TOKEN_ERROR, realmId=master, clientId=security-admin-console, userId=null, ipAddress=10.244.5.46, error=invalid_code, grant_type=authorization_code, code_id=157e0483-67fa-4ea4-a964-387f3884cbc9, client_auth_method=client-secret </code></pre> <p>When I checked about this error in forums, As per some suggestions to set proxyAddressForwarding to true, but with this as well, issue remains same.</p> <p>Apart from this I have tried some other version of the helm chart , but with that the UI itself does not load correctly with page not found errors.</p> <h2>Update</h2> <p>I get the above error i.e, <code>CODE_TO_TOKEN_ERROR</code> in logs when I use the headless service with ingress. But if I use the service of type ClusterIP with ingress , the error is as follows:</p> <pre><code>06:43:37,587 WARN [org.keycloak.events] (default task-6) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=10.122.0.26, error=expired_code, restart_after_timeout=true, authSessionParentId=453870cd-5580-495d-8f03-f73498cd3ace, authSessionTabId=1d17vpIoysE </code></pre> <p>Another additional information I would like to post is , I see following INFO in all the keycloak pod logs at the startup.</p> <pre><code>05:27:10,437 INFO [org.jgroups.protocols.pbcast.GMS] (ServerService Thread Pool -- 58) my-keycloak-0: no members discovered after 3006 ms: creating cluster as coordinator </code></pre> <p>This sounds like the 3 members have not combined and not formed a keycloak cluster.</p>
<p>One common scenario that may lead to such a situation is when the node that issued the access code is not the one who has received the <em>code to token</em> request. So the client gets the access code from node 1 but the second request reaches node 2 and the value is not yet in this node's cache. The safest approach to prevent such scenario is to setup a session sticky load balancer.</p> <p>I suggest you to try setting the <code>service.spec.sessionAffinity</code> to <code>ClientIP</code>. Its default value is <code>None</code>.</p>
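<p>A minimal sketch of a Service with sticky sessions enabled; the name, selector labels and ports are illustrative and should be adapted to the ones the Bitnami chart actually creates:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-keycloak
  namespace: identity
spec:
  selector:
    app.kubernetes.io/name: keycloak
  sessionAffinity: ClientIP        # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - name: http
    port: 80
    targetPort: 8080
</code></pre> <p>Note that if traffic reaches Keycloak through an ingress controller, the controller is the client the Service sees, so you would also need cookie- or IP-based stickiness at the ingress layer.</p>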
<p>We are running a pod in Kubernetes that needs to load a file during runtime. This file has the following properties:</p> <ul> <li>It is known at build time</li> <li>It should be mounted read-only by multiple pods (the same kind)</li> <li>It might change (externally to the cluster) and needs to be updated</li> <li>For various reasons (security being the main concern) the file cannot be inside the docker image</li> <li>It is potentially quite large, theoretically up to 100 MB, but in practice between 200kB - 10MB.</li> </ul> <p>We have considered various options:</p> <ul> <li>Creating a persistent volume, mount the volume in a temporary pod to write (update) the file, unmount the volume, and then mount it in the service with ROX (Read-Only Multiple) claims. This solution means we need downtime during upgrade, and it is hard to automate (due to timings).</li> <li>Creating multiple secrets using the secrets management of Kubernetes, and then &quot;assemble&quot; the file before loading it in an init-container or something similar.</li> </ul> <p>Both of these solutions feels a little bit hacked - is there a better solution out there that we could utilize for solving this?</p>
<p>You need to use a shared filesystem that supports Read/Write Multiple Pods. Here is a link to the CSI Drivers which can be used with Kubernetes and provide those access: <a href="https://kubernetes-csi.github.io/docs/drivers.html" rel="nofollow noreferrer">https://kubernetes-csi.github.io/docs/drivers.html</a></p> <p>Ideally, you need a solution that is not an appliance, and can run anywhere meaning it can run in the cloud or on-prem.</p> <p>The platforms that could work for you are Ceph, GlusterFS, and Quobyte (<em>Disclaimer, I work for Quobyte</em>)</p>
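<p>Once a driver with shared-filesystem support is installed, the claim itself is a plain PVC. A minimal sketch, where the <code>storageClassName</code> is whatever your chosen CSI driver provides:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config
spec:
  accessModes:
  - ReadOnlyMany                   # readers; the updater can mount a ReadWriteMany claim instead
  storageClassName: my-shared-fs   # placeholder - depends on the CSI driver
  resources:
    requests:
      storage: 1Gi
</code></pre>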
<p>I have a Standard GKE cluster and want to migrate all my running services to a new Autopilot cluster. I have researched the official documentation and haven't found anything about how I can perform this migration.</p>
<p>At the moment it is not possible to convert a Standard GKE cluster to GKE Autopilot.</p> <p>In the GKE documentation, in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview" rel="nofollow noreferrer">Autopilot overview</a> under Other limitations, you can find the section <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#no_conversion" rel="nofollow noreferrer">No conversion</a>:</p> <blockquote> <p>Converting Standard clusters to Autopilot mode and converting Autopilot clusters to Standard mode is not supported.</p> </blockquote> <p>As stated by @guillaume blaquiere, you have to redeploy all your services and back up and restore your data manually.</p>
<p>In Azure Kubernetes Service, I am trying to setup Locust load test, as of now locust test contains 1 master and 2 slave pods. With default dnspolicy provided by kubernetes, Slave pods able to establish connection with master pod, which I had confirmed in locust webpage. But to run locust test successfully slave pods need to connect to other services so had to use custom dnspolicy in slave pods.</p> <p>Once I apply the custom dnspolicy in slave pods, slave pods are not able to connect to master pod. I tried to apply the same dnspolicy slave uses, in master deployment file but still slave pods are not able to establish connection with master pod.</p> <p>I am not sure what am I missing in this case, how to establish connection between slave with pods custom dnspolicy to master pod which uses default dns policy provided by Azure kubernetes</p> <p>slave deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: io.kompose.service: slave name: slave spec: replicas: 2 selector: matchLabels: io.kompose.service: slave strategy: {} template: metadata: labels: io.kompose.service: slave spec: imagePullSecrets: - name: secret containers: - args: - -f - /usr/app/locustfile.py - --worker - --master-host - master image: xxxxxxxx/locust-xxxxxx:locust-slave-1.0.2 name: slave resources: {} securityContext: privileged: true capabilities: add: - NET_ADMIN dnsPolicy: &quot;None&quot; dnsConfig: nameservers: - xx.xx.xx.xx - xx.xx.xx.xx searches: - xxxxxx - locust-xxxx.svc.cluster.local - svc.cluster.local - cluster.local - xxxxxxxxxxxxxx.jx.internal.cloudapp.net options: - name: ndots value: &quot;0&quot; restartPolicy: Always status: {} </code></pre> <p>master deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: io.kompose.service: master name: master spec: replicas: 1 selector: matchLabels: io.kompose.service: master strategy: {} template: metadata: labels: io.kompose.service: master spec: imagePullSecrets: - name: secret containers: - image: xxxxxxxxxxxxx/locust-xxxxxxxxxxx:locust-master-1.0.1 name: master resources: {} restartPolicy: Always status: {} </code></pre> <p><em>I am new to networking side of things</em></p>
<p>It was not an issue with Kubernetes. I was able to establish the connection between master and slave with the help of this link: <a href="http://www.github.com/locustio/locust/issues/294" rel="nofollow noreferrer">www.github.com/locustio/locust/issues/294</a>. What was missing were env variables, so I added these env variables to the slave deployment.yaml file:</p> <pre><code>env:
  - name: LOCUST_MODE
    value: slave
  - name: LOCUST_MASTER
    value: master
</code></pre>
<p>After an unexpected power failure, the MongoDB service I deployed on k8s could not be restarted normally. The MongoDB log showed that there was a problem with its data and that it could not be started.</p> <p>I did not record the specific error log.</p>
<p>Here is my fix:</p> <h2>First</h2> <p>Change the k8s deployment.yaml config.</p> <p>Because we want to repair MongoDB's data files, the first step is to get the mongo pod running so that we can run commands inside it.</p> <p>Now change the startup command of the container:</p> <pre class="lang-yaml prettyprint-override"><code>      containers:
        - name: mongodb
          image: mongo:latest
          command: [&quot;sleep&quot;]
          args: [&quot;infinity&quot;]
          imagePullPolicy: IfNotPresent
          ports:
          # .......
</code></pre> <p>After applying it, if I guessed correctly, the mongo pod should be up and running.</p> <h2>Second</h2> <p>Use the mongod command to repair the data:</p> <pre class="lang-bash prettyprint-override"><code>kubectl exec -it &lt;YOURPODNAME&gt; -- mongod --dbpath &lt;YOURDBPATH&gt; --repair --directoryperdb </code></pre> <p>I had to run it with <code>--directoryperdb</code>; if that gives you an error, you can try removing it.</p> <p>If I guessed correctly, so far everything is fine.</p> <h2>Third</h2> <p>Restore the k8s deployment.yaml back to the way it was.</p> <p>Now reapply it.</p> <p>---------- Manual split line</p> <p>The above is my repair process. It worked for me; I am just recording it here. You can refer to it to fix your MongoDB. Thank you.</p>
<p>I added this in my values.yaml expecting the serviceAccount to be created when I do the helm install but that did not work, am I missing something ?</p> <ul> <li>helm version v3.9.0</li> <li>kubernetes version v1.24.0</li> </ul> <pre><code>serviceAccount: # Specifies whether a service account should be created create: true # Annotations to add to the service account annotations: {} # The name of the service account to use. # If not set and create is true, a name is generated using the fullname template name: my-service-account </code></pre> <p>I even tried adding the following (based on <a href="https://helm.sh/docs/chart_best_practices/rbac/#helm" rel="nofollow noreferrer">https://helm.sh/docs/chart_best_practices/rbac/#helm</a>), with no luck:</p> <pre><code>rbac: # Specifies whether RBAC resources should be created create: true </code></pre> <p>Thanks</p>
<p>Thanks for the help, I ended up putting this file in the templates directory so it gets processed as you mentioned, I used helm lookup to check if the ServiceAccount is there or not so the first helm install does the installation (<a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function</a>)</p> <pre><code>{{- if not (lookup &quot;v1&quot; &quot;ServiceAccount&quot; &quot;my-namespace&quot; &quot;my-service-account&quot;) }} apiVersion: v1 kind: ServiceAccount metadata: name: my-service-account namespace: {{ .Values.namespace }} --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;] verbs: [&quot;get&quot;, &quot;list&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: my-cluster-role subjects: - kind: ServiceAccount name: my-service-account namespace: {{ .Values.namespace }} {{- end }} </code></pre>
<p>We are developing a simulation software which is deployed and scaled between multiple pods using kubernetes. When a user makes a simulation request, a pod is selected which starts doing the job and is considered as busy. When another user makes a simulation request, it should be routed to the next free pod. Currently, a busy pod is often selected (even though there are free ones) as kubernetes does not know which pods are busy/free.</p> <p>Is it possible to balance requests in such way that a free pod is always selected? (Assuming that each app instance inside a pod exposes an HTTP endpoint which tells it's current busy/free status)</p>
<p>Another solution to the one suggested in above answer is to make use of Headless Service and some reverse-proxy/load balancer like HAProxy(I used it).</p> <p>So you can have a headless service or change the existing service which is exposing your pods like below. It is the configuration <strong>clusterIP:None</strong> which will be responsible for creating a headless service.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: some-service namespace: dev labels: app: &lt;labels here&gt; spec: ports: - port: &lt;port&gt; targetPort: &lt;tport&gt; selector: app: &lt;selectors here&gt; clusterIP: None </code></pre> <p>Then you can deploy HAProxy like below</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: haproxy-headless namespace: dev labels: app: haproxy-headless spec: replicas: 1 selector: matchLabels: app: haproxy-headless template: metadata: labels: app: haproxy-headless spec: containers: - name: haproxy-headless image: haproxy:1.9 ports: - containerPort: 8888 name: management - containerPort: 8085 name: http volumeMounts: - name: haproxy-config mountPath: &quot;/usr/local/etc/haproxy/haproxy.cfg&quot; volumes: - name: haproxy-config configMap: name: haproxy-config </code></pre> <p>ConfigMap for Haproxy:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: haproxy-config namespace: dev data: haproxyconfig.cfg: | defaults mode tcp log global option httplog retries 5 timeout connect 10s timeout client 300s timeout server 300s resolvers test_resolver nameserver dns1 &lt;your dns server address&gt; resolve_retries 30 timeout retry 2s hold valid 10s accepted_payload_size 8192 frontend stats mode http bind :8888 stats enable stats uri / stats refresh 15s frontend test_fe bind :8085 timeout client 60s default_backend test_be backend test_be balance leastconn server-template srv 7 &lt;service-name&gt;.&lt;your namespace&gt;.svc.cluster.local:6311 check resolvers test_resolver </code></pre> <p>Important thing here to understand how we configure our HAProxy here server-template is being used for service discovery using the DNS of K8s and you can use <strong>balance leastconn</strong> to make sure the least used server is used.</p> <p>In the end a service would be required for the HAProxy</p> <pre><code>apiVersion: v1 kind: Service metadata: name: haproxy-service namespace: dev spec: selector: app: haproxy-headless ports: - name: be protocol: TCP port: 6311 targetPort: 8085 - name: mngmnt protocol: TCP port: 8888 targetPort: 8888 type: &lt;Type as per requirement&gt; </code></pre> <p>Now use the above service whenever you want to access the app. Maybe you can read about how to use HAProxy in k8s for load balancing it basically doing service discovery + load balancing.</p> <p>Maybe a few configurations shown in the yaml are not correct or mismatched but i hope concept is clear about how to achieve this.</p>
<p>So I wanted to fetch logs from a specific namespace and send them to opensearch, so I didn't get a clue so I decided to try to make it like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: labels: k8s-app: fluentd-logging name: simple-fluentd-configmap namespace: fluent-log data: fluent.conf: | &lt;match fluent.**&gt; @type null &lt;/match&gt; &lt;source&gt; @type tail path /var/log/pods/containers*.log pos_file /var/log/containers.pos format none &lt;filter **&gt; @type record_modifier remove_keys &quot;container_id,source&quot; @type parser key_name log hash_value_field log &lt;parse&gt; @type json &lt;/parse&gt; &lt;/filter&gt; &lt;match kubernetes.var.log.containers.**access**.log&gt; @type opensearch port 443 logstash_format false scheme https ssl_verify false &lt;/match&gt; &lt;match kubernetes.var.log.containers.**balance**.log&gt; @type opensearch port 443 logstash_format false scheme https ssl_verify false &lt;/match&gt; </code></pre> <p>So the question is how to make it send specific logs based on namespace from kubernetes?</p>
<p>The second match clause in your configuration (the one starting with <code>kubernetes.var.log...</code>) should contain the namespace, and therefore you can filter based on specific namespaces and decide how to handle those specific logs.</p> <p>If, for any reason, the log path in your cluster does not contain the namespace in its path, you can also use the <a href="https://github.com/fabric8io/fluent-plugin-kubernetes" rel="nofollow noreferrer">kubernetes</a> plugin.<br /> It will enrich your logs with metadata relevant to the cluster, and allow you to extract the namespace logs originated from and deal with them accordingly.</p>
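<p>As a sketch of the path/tag-based approach, assuming the standard <code>/var/log/containers/&lt;pod&gt;_&lt;namespace&gt;_&lt;container&gt;-&lt;id&gt;.log</code> symlink layout and the fluent-plugin-opensearch output you already use; the namespace and OpenSearch host are placeholders:</p> <pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/*_my-namespace_*.log    # only pods from &quot;my-namespace&quot;
  pos_file /var/log/fluentd-my-namespace.pos
  tag kubernetes.my-namespace.*
  &lt;parse&gt;
    @type json
  &lt;/parse&gt;
&lt;/source&gt;

&lt;match kubernetes.my-namespace.**&gt;
  @type opensearch
  host opensearch.example.com
  port 443
  scheme https
  ssl_verify false
  logstash_format false
&lt;/match&gt;
</code></pre> <p>Repeat the source/match pair (or add more match clauses) per namespace you care about; anything not matched by a path glob is simply never tailed.</p>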
<p>I've built an application through Docker with a docker-compose.yml file and I'm now trying to convert it into deployment file for K8S.</p> <p>I tried to use <code>kompose convert</code> command but it seems to work weirdly.</p> <p>Here is my docker-compose.yml:</p> <pre><code>version: &quot;3&quot; services: worker: build: dockerfile: ./worker/Dockerfile container_name: container_worker environment: - PYTHONUNBUFFERED=1 volumes: - ./api:/app/ - ./worker:/app2/ api: build: dockerfile: ./api/Dockerfile container_name: container_api volumes: - ./api:/app/ - /var/run/docker.sock:/var/run/docker.sock ports: - &quot;8050:8050&quot; depends_on: - worker </code></pre> <p>Here is the output of the <code>kompose convert</code> command:</p> <pre><code>[root@user-cgb4-01-01 vm-tracer]# kompose convert WARN Volume mount on the host &quot;/home/david/vm-tracer/api&quot; isn't supported - ignoring path on the host WARN Volume mount on the host &quot;/var/run/docker.sock&quot; isn't supported - ignoring path on the host WARN Volume mount on the host &quot;/home/david/vm-tracer/api&quot; isn't supported - ignoring path on the host WARN Volume mount on the host &quot;/home/david/vm-tracer/worker&quot; isn't supported - ignoring path on the host INFO Kubernetes file &quot;api-service.yaml&quot; created INFO Kubernetes file &quot;api-deployment.yaml&quot; created INFO Kubernetes file &quot;api-claim0-persistentvolumeclaim.yaml&quot; created INFO Kubernetes file &quot;api-claim1-persistentvolumeclaim.yaml&quot; created INFO Kubernetes file &quot;worker-deployment.yaml&quot; created INFO Kubernetes file &quot;worker-claim0-persistentvolumeclaim.yaml&quot; created INFO Kubernetes file &quot;worker-claim1-persistentvolumeclaim.yaml&quot; created </code></pre> <p>And it created me 7 yaml files. But I was expected to have only one deployment file. Also, I don't understand these warning that I get. Is there a problem with my volumes?</p> <p>Maybe it will be easier to convert the docker-compose to deployment.yml manually?</p> <p>Thank you,</p>
<p>I'd recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it's hard to correct those without understanding Kubernetes's deployment model.</p> <p>I would clean up your <code>docker-compose.yml</code> file before you start. You have <code>volumes:</code> that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn't work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these <code>volumes:</code> mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.</p> <p>You also do not normally need to specify <code>container_name:</code> or several other networking-related options. Kubernetes does not support multiple networks and so if you have any <code>networks:</code> settings they will be ignored, but most practical Compose files don't need them either. The obsolete <code>links:</code> and <code>expose:</code> options, if you have them, can also usually be safely deleted with no consequences.</p> <pre class="lang-yaml prettyprint-override"><code>version: &quot;3.8&quot; services: worker: build: dockerfile: ./worker/Dockerfile environment: - PYTHONUNBUFFERED=1 api: build: dockerfile: ./api/Dockerfile volumes: - /var/run/docker.sock:/var/run/docker.sock ports: - &quot;8050:8050&quot; depends_on: # won't have an effect in Kubernetes, - worker # but still good Docker Compose practice </code></pre> <p>The bind-mount of the Docker socket is a larger problem. This socket usually doesn't exist in Kubernetes, and if it does exist, it's frequently inaccessible (there are major security concerns around having it available, and it would allow you to launch unmanaged containers as well as root the node). If you <em>need</em> to dynamically launch containers, you'd need to use the <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">Kubernetes API</a> to do that instead (look at creating one-off <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a>). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can't fix this architectural problem, though, you will have to modify your code.</p> <p>When all of this is done, I'd expect Kompose to create four files, with one Kubernetes YAML manifest in each: two <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a>, and two matching <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a>. Each of your Docker Compose <code>services:</code> would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.</p>
<p>I am trying to deploy WSO2 API Manager Operator on AWS EKS cluster following the doc [https://github.com/wso2/k8s-wso2am-operator][1]</p> <p>But When I execute the command: <strong>kubectl apply -f scenarios/scenario-1/</strong></p> <p>I get the error: <strong>unable to recognize &quot;scenarios\scenario-1\2-custom-pattern.yaml&quot;: no matches for kind &quot;APIManager&quot; in version &quot;apim.wso2.com/v1alpha1&quot;</strong></p> <p>How do I fix this error?</p>
<p>This indicates that the CRDs are not deployed. Did you install the CRDs as explained <a href="https://apim.docs.wso2.com/en/latest/install-and-setup/setup/kubernetes-operators/k8s-api-operator/install/" rel="nofollow noreferrer">here</a>? The repo you are referring to seems to be for the APIM operator, which deploys APIM. Rather, you should be following the API operator to deploy APIs. The following is the correct GitHub repo for the operator.</p> <p>Github : <a href="https://github.com/wso2/k8s-api-operator" rel="nofollow noreferrer">https://github.com/wso2/k8s-api-operator</a></p>
<p>I have an init container that do some stuff that needs for the main container to run correctly, like creating some directories and a liveness probe that may fail if one of these directories were deleted. When the pod is restarted due to fail of liveness probe I expect that init container is also being restarted, but it won't.</p> <p>This is what kubernetes documentation says about this:</p> <p><code>If the Pod restarts, or is restarted, all init containers must execute again.</code> <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p> <p>Easiest way to prove this behavior was to use the example of the pod from k8s documentation, add a liveness probe that always fails and expect that init container to be restarted, but again, it is not behaving as expected.</p> <p>This is the example I'm working with:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: restartPolicy: Always containers: - name: myapp-container image: busybox:1.28 command: ['sh', '-c', 'echo &quot;App started at $(date)&quot; &amp;&amp; tail -f /dev/null'] livenessProbe: exec: command: - sh - -c - exit 1 initialDelaySeconds: 1 periodSeconds: 1 initContainers: - name: myapp-init image: busybox:1.28 command: ['/bin/sh', '-c', 'sleep 5 &amp;&amp; echo &quot;Init container started at $(date)&quot;'] </code></pre> <p>Sleep and date command are there to confirm that init container was restarted.</p> <p>The pod is being restarted:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/myapp-pod 1/1 Running 4 2m57s </code></pre> <p>But from logs it's clear that init container don't:</p> <pre><code>$ k logs pod/myapp-pod myapp-init Init container started at Thu Jun 16 12:12:03 UTC 2022 $ k logs pod/myapp-pod myapp-container App started at Thu Jun 16 12:14:20 UTC 2022 </code></pre> <p>I checked it on both <strong>v1.19.5</strong> and <strong>v1.24.0</strong> kubernetes servers.</p> <p>The question is how to force the init container to restart on pod restart.</p>
<p>The restart number refers to <strong>container</strong> restarts, not <strong>pod</strong> restarts.</p> <p>An init container needs to run only once in a pod's lifetime, and you need to design your containers with that in mind. You can read this <a href="https://github.com/kubernetes/community/pull/2342" rel="nofollow noreferrer">PR</a>, and especially this <a href="https://github.com/kubernetes/community/pull/2342#issuecomment-983558850" rel="nofollow noreferrer">comment</a>.</p>
<p>What is the best way to authenticate to a GKE cluster without actually setting env variables for GOOGLE_APPLICATION_CREDENTIALS or KUBECONFIG. I have an application running on a container which has to communicate with multiple GKE clusters at once. Hence I am trying to generate the kubeconfig yaml file on the fly like so. But when I try to run any kubectl commands I get a non authorized error as it is expecting the service account creds to be set in the environment variable. Is there any way to actually avoid this and auth to multiple clusters on the fly by generating the kubeconfig file ?</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: {{.CertData}} server: {{.MasterURL}} name: {{.ClusterName}} contexts: - context: cluster: {{.ClusterName}} user: {{.ClusterName}} name: {{.ClusterName}} current-context: {{.ClusterName}} kind: Config preferences: {} users: - name: {{.ClusterName}} user: auth-provider: config: cmd-args: config config-helper --format=json cmd-path: /usr/local/bin/gcloud/google-cloud-sdk/bin/gcloud expiry-key: '{.credential.token_expiry}' token-key: '{.credential.access_token}' name: {{.ClusterName}} </code></pre>
<p>I ended up using oauth2 to generate a token and input that value into the kubeconfig token.</p> <p>Sample code to extract the token:</p> <pre><code> func getGKEToken(creds []byte, ctx context.Context) (string, error){ var token *oauth2.Token scopes := []string{&quot;https://www.googleapis.com/auth/cloud-platform&quot;,} cred, err := auth.CredentialsFromJSON(ctx, creds, scopes...) if err != nil { fmt.Printf(&quot;Failed to authenticate using credentials to extract token. Error %s \n&quot;, err.Error()) return &quot;&quot;, err } token, err = cred.TokenSource.Token() if err != nil { return &quot;&quot;, err } return token.AccessToken, nil } </code></pre> <p>GKE kubeconfig template:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: {{.CertData}} server: {{.MasterURL}} name: {{.ClusterName}} contexts: - context: cluster: {{.ClusterName}} user: {{.ClusterName}} name: {{.ClusterName}} current-context: {{.ClusterName}} kind: Config preferences: {} users: - name: {{.ClusterName}} user: token: {{.AccessToken}} </code></pre>
<p>I've deployed Prometheus, Alertmanager, Grafana et cetera in the <code>cattle-monitoring-system</code> namespace using the rancher-monitoring Helm chart. Look <a href="https://github.com/rancher/charts/tree/release-v2.5/charts/rancher-monitoring" rel="nofollow noreferrer">here</a> and <a href="https://rancher.com/docs/rancher/v2.5/en/monitoring-alerting/" rel="nofollow noreferrer">here</a> for documentation. Everything in this namespace is running OK.</p> <p>Now I'm trying to deploy an extra Prometheus instance in another namespace called <code>indigo</code> using the manifest files. Unfortunately the configuration scrape_configs file won't load. I checked this by running the command <code>kubectl exec -it prometheus-indigo-0 -c prometheus -n indigo -- cat /etc/prometheus/config_out/prometheus.env.yaml</code>. Running the same command in the other Prometheus instance will return all the scrape_configs configuration.</p> <p>My <code>deployment.yml</code> file looks like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ServiceAccount metadata: name: prometheus namespace: $NAMESPACE --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: monitoring-rancher-monitor-prometheus subjects: - kind: ServiceAccount name: prometheus namespace: $NAMESPACE --- apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: indigo namespace: $NAMESPACE spec: serviceAccountName: prometheus additionalScrapeConfigs: name: prometheus-scrape-configs-secret key: prometheus-scrape-configs.yml resources: requests: memory: 400Mi --- apiVersion: v1 kind: Service metadata: name: prometheus-indigo namespace: $NAMESPACE spec: type: NodePort ports: - name: web nodePort: 30900 port: 9090 protocol: TCP targetPort: web selector: prometheus: indigo </code></pre> <p>The <code>prometheus-scrape-configs.yml</code> file looks like this:</p> <pre class="lang-yaml prettyprint-override"><code>global: evaluation_interval: 30s scrape_interval: 30s scrape_configs: - job_name: blackbox # To get metrics about the exporter itself metrics_path: /metrics static_configs: - targets: - ..... - job_name: blackbox-http # To get metrics about the exporter’s targets metrics_path: /probe params: module: [http_2xx] static_configs: - targets: - ..... labels: env: elise - targets: - ..... labels: env: osb - targets: - ..... labels: env: itp relabel_configs: - source_labels: [__address__] target_label: __param_target - source_labels: [__param_target] target_label: instance - target_label: __address__ replacement: ..... 
</code></pre> <p>Finally, the <code>prometheus-scrape-configs-secret</code> is created in the <code>Makefile</code>:</p> <pre class="lang-yaml prettyprint-override"><code>include ../../../common.mk include ../../environments/$(ENVIRONMENT).mk NAMESPACE ?= indigo .PHONY: deploy deploy: \ init \ deploy-monitoring .PHONY: init init: .PHONY: deploy-monitoring deploy-monitoring: kubectl create secret generic prometheus-scrape-configs-secret \ -n $(NAMESPACE) --from-file=prometheus-scrape-configs.yml envsubst &lt; $(ENVIRONMENT)-deployment.yml | kubectl apply -f - .PHONY: delete delete: delete-monitoring .PHONY: delete-monitoring delete-monitoring: kubectl delete secret prometheus-scrape-configs-secret -n $(NAMESPACE) envsubst &lt; $(ENVIRONMENT)-deployment.yml | kubectl delete -f - </code></pre> <p>So does anybody know why the <code>scrape_configs</code> file is unable to load correctly?</p> <p>EDIT:</p> <p>After running the command <code>kubectl describe pod prometheus-indigo-0 -n indigo</code> I noticed this error below. I also noticed after deploying this Prometheus instance it will get one error, immediately restarts and after that it is running..</p> <pre class="lang-yaml prettyprint-override"><code>State: Running Started: Tue, 14 Jun 2022 18:01:23 +0200 Last State: Terminated Reason: Error Message: ts=2022-06-14T16:01:19.743Z caller=main.go:450 level=error msg=&quot;Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)&quot; file=/etc/prometheus/config_out/prometheus.env.yaml err=&quot;open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory&quot; </code></pre>
<p><a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md#:%7E:text=The%20Prometheus%20resource%20declaratively%20describes,of%20ServiceMonitors%20to%20be%20used." rel="nofollow noreferrer">Thanks to this documentation</a> I managed to fix this issue. I forgot to create a ServiceMonitor that's referring to the <code>blackbox-exporter</code> through labels. Eventually you need to link this in the Kubernetes Prometheus object using the <code>serviceMonitorSelector</code>. <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">You can look here for more information</a> how to configure <code>serviceMonitorSelector</code></p>
<p>I am trying to deploy WSO2 API Manager Operator on AWS EKS cluster following the doc [https://github.com/wso2/k8s-wso2am-operator][1]</p> <p>But When I execute the command: <strong>kubectl apply -f scenarios/scenario-1/</strong></p> <p>I get the error: <strong>unable to recognize &quot;scenarios\scenario-1\2-custom-pattern.yaml&quot;: no matches for kind &quot;APIManager&quot; in version &quot;apim.wso2.com/v1alpha1&quot;</strong></p> <p>How do I fix this error?</p>
<p>wso2am-operator is not tested in newer kubernetes versions (1.20+). Some of the kubernetes APIs used were deprecated even in previous versions and removed in v1.22[1]. Those include <code>rbac.authorization.k8s.io/v1beta1</code> and <code>apiextensions.k8s.io/v1beta1</code>. The error you are seeing is because the custom resource for <code>APIManager</code> is not deployed properly.</p> <p>The Operator should work if you try this on an older kubernetes version. However, it only supports APIM 3.2.0 (latest is APIM 4.1.0).</p> <p>[1] - <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22</a></p>
<p>Error starting tunneling of service.</p> <p>OS: macOS<br /> minikube version: v1.25.2<br /> kubectl version: 1.24.1.</p> <p>I am using Docker Desktop.</p> <p>The following script is the configuration of the <code>/zipkin</code> directory.</p> <pre><code># service.yaml apiVersion: v1 kind: Service metadata: name: zipkin spec: selector: app: zipkin ports: - port: 9411 targetPort: 9411 protocol: TCP type: LoadBalancer </code></pre> <pre><code># statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: zipkin labels: app: zipkin spec: serviceName: zipkin replicas: 1 template: metadata: name: zipkin labels: app: zipkin spec: containers: - name: zipkin image: openzipkin/zipkin imagePullPolicy: Always ports: - containerPort: 9411 protocol: TCP resources: requests: cpu: 100m memory: 256Mi limits: cpu: 200m memory: 256Mi restartPolicy: Always selector: matchLabels: app: zipkin </code></pre> <pre><code>kubectl apply -f zipkin kubectl get po NAME READY STATUS RESTARTS AGE zipkin-0 1/1 Running 0 49m </code></pre> <pre><code>minikube service zipkin --url πŸƒ starting tunnel for service zipkin ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. </code></pre> <p>It is not working, and I can't figure out what the problem is.</p> <p>This is what I expected: <a href="https://i.stack.imgur.com/IOk4P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IOk4P.png" alt="enter image description here" /></a></p>
<p>I had the same issue too, but then figured it out from the minikube documentation: <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#example-of-nodeport" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#example-of-nodeport</a></p> <p>In one terminal do this:</p> <pre><code>minikube service customer-node --url πŸƒ Starting tunnel for service customer-node. ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. </code></pre> <p>Open another terminal and check the SSH tunnel:</p> <pre><code>ps -ef | grep docker@127.0.0.1 501 81040 81027 0 6:35PM ttys001 0:00.02 ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -N docker@127.0.0.1 -p 56541 -i /Users/johndoe/.minikube/machines/minikube/id_rsa -L 64013:10.104.68.137:80 </code></pre> <p>Check for the port at the end of the output, or you can use this command to find the tunnel port:</p> <pre><code>ps -ef | grep docker@127.0.0.1 | head -1 | awk '{print $NF}' | awk -F &quot;:&quot; '{print $1}' 64013 </code></pre> <p>You can then use this port to form the URL for your browser:</p> <pre><code>http://127.0.0.1:64013 </code></pre>
<p>I haven't used kubectl in about two months. It was working well in June but now I get the following error:</p> <pre><code>$ kubectl fatal error: runtime: bsdthread_register error runtime stack: runtime.throw(0x2cb65a5, 0x21) /usr/local/go/src/runtime/panic.go:596 +0x95 fp=0x7fff5fbff810 sp=0x7fff5fbff7f0 runtime.goenvs() /usr/local/go/src/runtime/os_darwin.go:108 +0xa0 fp=0x7fff5fbff840 sp=0x7fff5fbff810 runtime.schedinit() /usr/local/go/src/runtime/proc.go:486 +0xa1 fp=0x7fff5fbff880 sp=0x7fff5fbff840 runtime.rt0_go(0x7fff5fbff8b0, 0x1, 0x7fff5fbff8b0, 0x0, 0x1, 0x7fff5fbffa38, 0x0, 0x7fff5fbffa40, 0x7fff5fbffa5c, 0x7fff5fbffa6c, ...) /usr/local/go/src/runtime/asm_amd64.s:158 +0x183 fp=0x7fff5fbff888 sp=0x7fff5fbff880 </code></pre> <p>I've read that this can be caused by Websense but I don't have Websense installed on my Mac. The only thing that has changed to my knowledge was a routine update to macOS (10.12.6), and a routine update to gcloud (163).</p> <p>I've tried reinstalling kubectl and gcloud, but it still fails. As far as I can tell there is something wrong with Go, and I think Go is part of the core library included in the gcloud SDK.</p> <p>Any ideas on how to troubleshoot/fix this? I cannot connect to any of my pods due to this issue.</p>
<p>The problem is caused by a library change for system calls in macOS 10.12 (Sierra). Older binaries use an old library which isn't supported anymore.</p> <p>Update your kubectl to a new version. This should fix it.</p> <p>If you can't update because you need an older version (e.g. because of API changes) you can build the needed kubectl using a newer Go version.</p> <p>Necessary:</p> <p>Go 1.11 or newer</p> <pre><code>brew install go </code></pre> <p>Get the repo:</p> <pre><code>git clone https://github.com/kubernetes/kubernetes.git </code></pre> <p>Select the version needed, e.g. 1.11.3:</p> <pre><code>git checkout tags/v1.11.3 </code></pre> <p>Build:</p> <pre><code>GO111MODULE=off make </code></pre> <p><code>GO111MODULE=off</code> turns off Go modules, which are on by default since Go v1.16. If your Kubernetes version is newer it might be necessary to have them turned on. If that is the case just leave out <code>GO111MODULE=off</code>.</p> <p>When the build is done you can find the kubectl for your system in <code>_output/bin</code>.</p>
<p>Traefik allows to specify a middleware to authenticate each request using <code>forwardAuth</code>, in the documentation it specifies to use an external provider.</p> <pre class="lang-yaml prettyprint-override"><code># Forward authentication to example.com apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: test-auth spec: forwardAuth: address: https://example.com/auth πŸ‘ˆ </code></pre> <p>I have an auth server running inside my k8 cluster itself which I want to use instead on an external service, is there a way to use <code>forwardAuth</code> to point to internal service?</p>
<p>I used the Kubernetes fully qualified domain name (FQDN) of the service as the address for this to work:</p> <p><code>service-name.namespace.svc.cluster.local</code>.</p>
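<p>As a concrete sketch, the middleware from the question could point at an in-cluster auth service like this; the service name <code>auth-server</code>, namespace <code>auth</code>, port <code>4181</code> and path <code>/auth</code> are placeholders for your own auth service, and the scheme depends on whether that service terminates TLS itself:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-auth
spec:
  forwardAuth:
    # in-cluster auth service reached via its Kubernetes FQDN
    address: http://auth-server.auth.svc.cluster.local:4181/auth
</code></pre>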
<p>I'm testing <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-gateways" rel="nofollow noreferrer">Gateway API</a> with GKE (version 1.21.11-gke.1100). I'm using <code>gatewayClassName: gke-l7-rilb</code> for a Gateway with TLS between the client and the gateway. HTTPS is working perfectly between the client and the loadbalancer using a managed regional SSL certificate .</p> <p>I have 2 Httproutes referencing 2 kube services (<code>backendRefs</code>). One service is accessible through HTTP and the other through HTTPS (argo-server service from Argo Workflows project if it may helps).</p> <p>When I create the httproute referencing the service using HTTP, the GCP load balancer backend service is created and working without any problem (Healthy).</p> <p>But when I create the httproute referencing argo-service, a GCP load balancer backend service is created but not working (not healthy) with an endpoint protocol set to HTTP rather than HTTPS. You should know that I made sure to add to argo-server service the annotation <code>cloud.google.com/app-protocols: '{&quot;web&quot;:&quot;HTTPS&quot;}'</code> to enable HTTPS between the load balancer and argo-server application.</p> <p>If I create the same geatway api configuration using ingress resource and the same argo service definition, endpoint protocol ( of the GCP load balancer backend service )is set correctly to HTTPS and is perfectly healthy and working.</p> <p>As if httproute of the gateway API the GKE gateway controller are not taking into consideration the <code>cloud.google.com/app-protocols</code> service annotation although it is mentioned that it's relevant for the gateway API <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gatewayclass-capabilities#service" rel="nofollow noreferrer">here</a>.</p> <p>Edit 1: added yaml files</p> <ul> <li>Gateway:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: gateway.networking.k8s.io/v1alpha2 kind: Gateway metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;gateway.networking.k8s.io/v1alpha2&quot;,&quot;kind&quot;:&quot;Gateway&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;app.kubernetes.io/managed-by&quot;:&quot;gcp-cloud-build-deploy&quot;},&quot;name&quot;:&quot;regional-internal-https&quot;,&quot;namespace&quot;:&quot;exposition&quot;},&quot;spec&quot;:{&quot;addresses&quot;:[{&quot;type&quot;:&quot;NamedAddress&quot;,&quot;value&quot;:&quot;dev-gateway-internal-lb-static-ip&quot;}],&quot;gatewayClassName&quot;:&quot;gke-l7-rilb&quot;,&quot;listeners&quot;:[{&quot;allowedRoutes&quot;:{&quot;kinds&quot;:[{&quot;kind&quot;:&quot;HTTPRoute&quot;}],&quot;namespaces&quot;:{&quot;from&quot;:&quot;Selector&quot;,&quot;selector&quot;:{&quot;matchLabels&quot;:{&quot;exposed&quot;:&quot;true&quot;}}}},&quot;name&quot;:&quot;https&quot;,&quot;port&quot;:443,&quot;protocol&quot;:&quot;HTTPS&quot;,&quot;tls&quot;:{&quot;mode&quot;:&quot;Terminate&quot;,&quot;options&quot;:{&quot;networking.gke.io/pre-shared-certs&quot;:&quot;plat-dev-europe-west1&quot;}}}]}} networking.gke.io/addresses: &quot;&quot; networking.gke.io/backend-services: gkegw1-bkib-argo-argo-server-2746-8ktcvo8d0ktp, gkegw1-bkib-demo-application-demo-service-80-y5bgcnm71kjv, gkegw1-bkib-exposition-gw-serve404-80-pciznuyt569p networking.gke.io/firewalls: &quot;&quot; networking.gke.io/forwarding-rules: gkegw1-bkib-exposition-regional-internal-https-tqsh4njw7io8 
networking.gke.io/health-checks: gkegw1-bkib-argo-argo-server-2746-8ktcvo8d0ktp, gkegw1-bkib-demo-application-demo-service-80-y5bgcnm71kjv, gkegw1-bkib-exposition-gw-serve404-80-pciznuyt569p networking.gke.io/last-reconcile-time: &quot;2022-06-16T15:57:45Z&quot; networking.gke.io/ssl-certificates: &quot;&quot; networking.gke.io/target-proxies: gkegw1-bkib-exposition-regional-internal-https-tqsh4njw7io8 networking.gke.io/url-maps: gkegw1-bkib-exposition-regional-internal-https-tqsh4njw7io8 creationTimestamp: &quot;2022-06-15T08:28:20Z&quot; finalizers: - gateway.finalizer.networking.gke.io generation: 1 labels: app.kubernetes.io/managed-by: gcp-cloud-build-deploy managedFields: - apiVersion: gateway.networking.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/last-applied-configuration: {} f:labels: .: {} f:app.kubernetes.io/managed-by: {} f:spec: .: {} f:addresses: {} f:gatewayClassName: {} f:listeners: .: {} k:{&quot;name&quot;:&quot;https&quot;}: .: {} f:allowedRoutes: .: {} f:kinds: {} f:namespaces: .: {} f:from: {} f:selector: .: {} f:matchLabels: .: {} f:exposed: {} f:name: {} f:port: {} f:protocol: {} f:tls: .: {} f:mode: {} f:options: .: {} f:networking.gke.io/pre-shared-certs: {} manager: kubectl-client-side-apply operation: Update time: &quot;2022-06-15T08:28:20Z&quot; - apiVersion: gateway.networking.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:networking.gke.io/addresses: {} f:networking.gke.io/backend-services: {} f:networking.gke.io/firewalls: {} f:networking.gke.io/forwarding-rules: {} f:networking.gke.io/health-checks: {} f:networking.gke.io/last-reconcile-time: {} f:networking.gke.io/ssl-certificates: {} f:networking.gke.io/target-proxies: {} f:networking.gke.io/url-maps: {} f:finalizers: .: {} v:&quot;gateway.finalizer.networking.gke.io&quot;: {} f:status: f:addresses: {} manager: GoogleGKEGatewayController operation: Update time: &quot;2022-06-15T08:30:16Z&quot; name: regional-internal-https namespace: exposition resourceVersion: &quot;42337844&quot; uid: 59333aea-1a79-4e9b-afbc-595ae9ccdfd7 spec: addresses: - type: NamedAddress value: dev-gateway-internal-lb-static-ip gatewayClassName: gke-l7-rilb listeners: - allowedRoutes: kinds: - group: gateway.networking.k8s.io kind: HTTPRoute namespaces: from: Selector selector: matchLabels: exposed: &quot;true&quot; name: https port: 443 protocol: HTTPS tls: mode: Terminate options: networking.gke.io/pre-shared-certs: plat-dev-europe-west1 status: addresses: - type: IPAddress value: 10.163.112.28 conditions: - lastTransitionTime: &quot;1970-01-01T00:00:00Z&quot; message: Waiting for controller reason: NotReconciled status: Unknown type: Scheduled </code></pre> <ul> <li>Httproute:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: gateway.networking.k8s.io/v1alpha2 kind: HTTPRoute metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | 
{&quot;apiVersion&quot;:&quot;gateway.networking.k8s.io/v1alpha2&quot;,&quot;kind&quot;:&quot;HTTPRoute&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;app.kubernetes.io/managed-by&quot;:&quot;gcp-cloud-build-deploy&quot;},&quot;name&quot;:&quot;argo-server&quot;,&quot;namespace&quot;:&quot;argo&quot;},&quot;spec&quot;:{&quot;hostnames&quot;:[&quot;argo-server.plat.dev.df.gcp.corp.modified.com&quot;],&quot;parentRefs&quot;:[{&quot;kind&quot;:&quot;Gateway&quot;,&quot;name&quot;:&quot;regional-internal-https&quot;,&quot;namespace&quot;:&quot;exposition&quot;}],&quot;rules&quot;:[{&quot;backendRefs&quot;:[{&quot;name&quot;:&quot;argo-server&quot;,&quot;port&quot;:2746}]}]}} creationTimestamp: &quot;2022-06-15T12:27:04Z&quot; generation: 1 labels: app.kubernetes.io/managed-by: gcp-cloud-build-deploy managedFields: - apiVersion: gateway.networking.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/last-applied-configuration: {} f:labels: .: {} f:app.kubernetes.io/managed-by: {} f:spec: .: {} f:hostnames: {} f:parentRefs: {} f:rules: {} manager: kubectl-client-side-apply operation: Update time: &quot;2022-06-15T12:27:04Z&quot; - apiVersion: gateway.networking.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:parents: {} manager: GoogleGKEGatewayController operation: Update time: &quot;2022-06-15T12:29:02Z&quot; name: argo-server namespace: argo resourceVersion: &quot;42362026&quot; uid: 981ce997-c574-4878-bec1-b03c7707838c spec: hostnames: - argo-server.plat.dev.df.gcp.corp.modified.com parentRefs: - group: gateway.networking.k8s.io kind: Gateway name: regional-internal-https namespace: exposition rules: - backendRefs: - group: &quot;&quot; kind: Service name: argo-server port: 2746 weight: 1 matches: - path: type: PathPrefix value: / status: parents: - conditions: - lastTransitionTime: &quot;2022-06-16T17:00:11Z&quot; message: &quot;&quot; reason: RouteAccepted status: &quot;True&quot; type: Accepted - lastTransitionTime: &quot;2022-06-16T17:00:11Z&quot; message: &quot;&quot; reason: ReconciliationSucceeded status: &quot;True&quot; type: Reconciled controllerName: networking.gke.io/gateway parentRef: group: gateway.networking.k8s.io kind: Gateway name: regional-internal-https namespace: exposition </code></pre> <ul> <li>Service:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/app-protocols: '{&quot;web&quot;:&quot;HTTPS&quot;}' cloud.google.com/backend-config: '{&quot;default&quot;: &quot;argo-server-backendconfig&quot;}' cloud.google.com/neg: '{&quot;exposed_ports&quot;:{&quot;2746&quot;:{}}}' cloud.google.com/neg-status: '{&quot;network_endpoint_groups&quot;:{&quot;2746&quot;:&quot;k8s1-f83345f9-argo-argo-server-2746-4d39c835&quot;},&quot;zones&quot;:[&quot;europe-west1-c&quot;]}' cluster-autoscaler.kubernetes.io/safe-to-evict: &quot;true&quot; kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;cloud.google.com/app-protocols&quot;:&quot;{\&quot;web\&quot;:\&quot;HTTPS\&quot;}&quot;,&quot;cloud.google.com/backend-config&quot;:&quot;{\&quot;default\&quot;: \&quot;argo-server-backendconfig\&quot;}&quot;,&quot;cloud.google.com/neg&quot;:&quot;{\&quot;ingress\&quot;: 
true}&quot;,&quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;:&quot;true&quot;},&quot;labels&quot;:{&quot;app.kubernetes.io/managed-by&quot;:&quot;gcp-cloud-build-deploy&quot;},&quot;name&quot;:&quot;argo-server&quot;,&quot;namespace&quot;:&quot;argo&quot;},&quot;spec&quot;:{&quot;ports&quot;:[{&quot;name&quot;:&quot;web&quot;,&quot;port&quot;:2746,&quot;targetPort&quot;:2746}],&quot;selector&quot;:{&quot;app&quot;:&quot;argo-server&quot;}}} creationTimestamp: &quot;2022-06-15T11:44:07Z&quot; labels: app.kubernetes.io/managed-by: gcp-cloud-build-deploy managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:cloud.google.com/app-protocols: {} f:cloud.google.com/backend-config: {} f:cluster-autoscaler.kubernetes.io/safe-to-evict: {} f:kubectl.kubernetes.io/last-applied-configuration: {} f:labels: .: {} f:app.kubernetes.io/managed-by: {} f:spec: f:ports: .: {} k:{&quot;port&quot;:2746,&quot;protocol&quot;:&quot;TCP&quot;}: .: {} f:name: {} f:port: {} f:protocol: {} f:targetPort: {} f:selector: .: {} f:app: {} f:sessionAffinity: {} f:type: {} manager: kubectl-client-side-apply operation: Update time: &quot;2022-06-15T12:27:23Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:cloud.google.com/neg: {} manager: GoogleGKEGatewayController operation: Update time: &quot;2022-06-15T12:28:06Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:cloud.google.com/neg-status: {} manager: glbc operation: Update time: &quot;2022-06-15T12:28:06Z&quot; name: argo-server namespace: argo resourceVersion: &quot;41692832&quot; uid: 25024d53-1d31-4165-8033-1843ec5d72ec spec: clusterIP: 10.163.247.121 clusterIPs: - 10.163.247.121 ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: web port: 2746 protocol: TCP targetPort: 2746 selector: app: argo-server sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre>
<p>I found a solution which I think of as a workaround.</p> <ol> <li><p>Use the <code>networking.gke.io/app-protocols: '{&quot;web&quot;:&quot;HTTPS&quot;}'</code> annotation rather than <code>cloud.google.com/app-protocols: '{&quot;web&quot;:&quot;HTTPS&quot;}'</code>. This annotation is used at the Service level, where <code>web</code> is the name of the port. This will enable HTTPS between the load balancer and the application <em>(the endpoint protocol of the backend service created for the specified HTTPRoute)</em>. <strong>This is working perfectly with <code>gatewayClassName: gke-l7-rilb</code>, a regional internal load balancer.</strong></p> </li> <li><p>Create a custom health check using a <code>cloud.google.com/v1 BackendConfig</code> where you set the type to HTTPS and the port to 2746 (see the sketch after this list). More details here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health</a> <em>With Ingresses, the GCE ingress controller automatically creates this health check from the application readiness probe, but apparently this function is not yet implemented in the GKE Gateway controller.</em></p> </li> <li><p>Make sure you have firewall rules allowing ingress traffic from the Google Cloud health checks on port 2746. <em>With Ingresses, the GCE ingress controller automatically creates the required firewall rule, but apparently this function is not yet implemented in the GKE Gateway controller.</em></p> </li> </ol> <p>Finally, I call this a workaround because I imagine and hope that a future version of the GKE Gateway controller will fix the 3 issues or points I mentioned above.</p>
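<p>For point 2, a minimal sketch of such a BackendConfig could look like the following; the request path <code>/</code> is an assumption and has to be a path the argo-server answers on, and the BackendConfig still needs to be referenced from the Service through the <code>cloud.google.com/backend-config</code> annotation (as already done in the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: argo-server-backendconfig
  namespace: argo
spec:
  healthCheck:
    type: HTTPS       # probe the backend over TLS
    port: 2746
    requestPath: /    # assumption: adjust to a path that returns 200
</code></pre>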
<p>I am trying to access my microservice &quot;externalforum-api-svc&quot; inside my kubernetes cluster using ocelot gateway. I've followed the docs but it does not seem to be working.</p> <p>Can someone please tell me what's wrong with it?</p> <p>I want to deploy the ocelot api gateway as clusterIP and use Ingress to access it from outside of the cluster, but I am facing this issue when trying to reroute from ocelot -&gt; service inside the cluster.</p> <blockquote> <p>## Error warn: Ocelot.Responder.Middleware.ResponderMiddleware[0] requestId: 0HMCO5SFMMOIQ:00000002, previousRequestId: no previous request id, message: Error Code: UnableToFindServiceDiscoveryProviderError Message: Unable to find service discovery provider for type: consul errors found in ResponderMiddleware. Setting error response for request path:/externalForumService, request method: GET</p> </blockquote> <pre><code>{ &quot;Routes&quot;: [ { &quot;UpstreamPathTemplate&quot;: &quot;/externalForumService/GetAll&quot;, &quot;DownstreamPathTemplate&quot;: &quot;/api/externalforum/v1/forum/GetAll&quot;, &quot;DownstreamScheme&quot;: &quot;http&quot;, &quot;ServiceName&quot;: &quot;externalforum-api-svc&quot;, &quot;UpstreamHttpMethod&quot;: [ &quot;Get&quot; ] }, { &quot;UpstreamPathTemplate&quot;: &quot;/externalForumService&quot;, &quot;DownstreamPathTemplate&quot;: &quot;/api/externalforum/v1/forum&quot;, &quot;DownstreamScheme&quot;: &quot;http&quot;, &quot;ServiceName&quot;: &quot;externalforum-api-svc&quot;, &quot;UpstreamHttpMethod&quot;: [ &quot;Get&quot; ] } ], &quot;GlobalConfiguration&quot;: { &quot;ServiceDiscoveryProvider&quot;: { &quot;Namespace&quot;: &quot;propnull&quot;, &quot;Type&quot;: &quot;kube&quot; } } } </code></pre> <h2>Service to map</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: externalforum-api-svc namespace: propnull spec: type: ClusterIP selector: app: externalforum-api ports: - name: http protocol: TCP port: 80 targetPort: 80 </code></pre> <p>I have already run <code>kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts</code></p> <h2>Specifications</h2> <ul> <li>Version: 17.0.0</li> <li>Platform: net core 5.0</li> </ul>
<p>In my case the error was different and Ocelot could not map to the proper pod, so you can map the downstream host manually:</p> <pre><code>{ &quot;Routes&quot;: [ { &quot;DownstreamPathTemplate&quot;: &quot;/api/{everything}&quot;, &quot;DownstreamScheme&quot;: &quot;http&quot;, &quot;DownstreamHostAndPorts&quot;: [ { &quot;Host&quot;: &quot;serviceName.Namespace.svc.cluster.local&quot;, &quot;Port&quot;: 80 } ], &quot;UpstreamPathTemplate&quot;: &quot;/something/{everything}&quot;, },.... </code></pre>
<p>I have many FastAPI apps running in Kubernetes. All of them use some common Kubernetes functionality like liveness and readiness probes, and discovering neighbour pods using ordinals (this is my next challenge). There is some logic in the code that I need to implement, but in general many parts of the code stay the same, like:</p> <ol> <li>creating routes for liveness and readiness probes</li> <li>sending requests to different ordinals of the StatefulSet to find neighbours, and implementing the endpoints for those requests.</li> </ol> <p>Is there a library I can use in my Python/FastAPI code to implement these generic features that are available in Kubernetes?</p>
<p>What specific feature of K8S do you want to implement in your FastAPI apps? Liveness and readiness endpoints are easy (as in, they are endpoints you can define in FastAPI and then declare in the YAML definition of your pod).</p> <p>My understanding is that you want pods in a StatefulSet to communicate with each other, but you would need information from K8S to do so. E.g. you want FastAPI-Pod-1 to know it is pod 1 out of, let's say, 4. I would recommend the Downward API that K8S offers and building your logic around that (e.g. read pod information from environment variables): kubernetes.io/docs/tasks/inject-data-application/… I do not know of any standard framework that implements any logic for you.</p>
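<p>To illustrate the Downward API suggestion, a pod (or a StatefulSet pod template) could expose its own name and namespace as environment variables roughly like this; the variable names and image are made up, and for a StatefulSet the ordinal can then be parsed from the pod name (e.g. <code>myapp-2</code>) inside the FastAPI code:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: fastapi-example
spec:
  containers:
    - name: app
      image: my-fastapi-image    # placeholder image
      env:
        - name: POD_NAME         # e.g. &quot;myapp-2&quot; -&gt; ordinal 2
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
</code></pre>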
<p>The timezone did not meet my expectations when I use <code>kubectl logs &lt;mypod&gt; --timestamps</code> to get the pod log.</p> <p>Current output:</p> <pre class="lang-bash prettyprint-override"><code>2022-06-15T07:31:41.826543867Z 2022/06/15 15:31:41 [info] Start grpc server listen 58212 port. 2022-06-15T07:31:41.826568525Z 2022/06/15 15:31:41 [info] Start http server listen 10000 port. </code></pre> <p>Expected output:</p> <pre class="lang-bash prettyprint-override"><code>2022-06-15T15:31:41+0800 2022/06/15 15:31:41 [info] Start grpc server listen 58212 port. 2022-06-15T15:31:41+0800 2022/06/15 15:31:41 [info] Start http server listen 10000 port. </code></pre> <p>What should I set up to achieve this output?</p>
<p><code>kubectl</code> does not support this directly at the time of writing, and it is tricky to do in a portable way. On Linux something like this would work:</p> <pre><code>kubectl logs --timestamps mypod | while read timestamp line; do \ echo &quot;$(env TZ=&quot;EST&quot; date -d &quot;$timestamp&quot; '+%Y-%m-%dT%H:%M:%S.%N%:z') $line&quot;; done </code></pre> <p>You will get an output like this:</p> <pre><code>2022-06-08T14:13:41.847615539-05:00 INFO [06-08|19:13:41.847] Starting Geth on Ethereum mainnet... </code></pre>
<p>We are in the process of migrating our Service Fabric services to Kubernetes. Most of them were &quot;stateless&quot; services and were easy to migrate. However, we have one &quot;stateful&quot; service that uses SF's Reliable Collections pretty heavily.</p> <p>K8s has Statefulsets, but that's not really comparable to SF's reliable collections.</p> <p>Is there a .NET library or other solution to implement something similar to SF's Reliable Collections in K8s?</p>
<p>AFAIK this cannot be done by using a .NET library. K8s is all about orchestration. SF on the other hand is an orchestrator + a rich programming/application model + state management. If you want to do something like reliable collections in K8s then you have to either</p> <p>A) build your own replication solution with leader election and all, or</p> <p>B) use a private etcd/CockroachDB etc. store.</p> <p>This article is pretty good in terms of differences.</p> <p><a href="https://learn.microsoft.com/en-us/archive/blogs/azuredev/service-fabric-and-kubernetes-comparison-part-1-distributed-systems-architecture#split-brain-and-other-stateful-disasters" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/archive/blogs/azuredev/service-fabric-and-kubernetes-comparison-part-1-distributed-systems-architecture#split-brain-and-other-stateful-disasters</a></p> <p>&quot;Existing systems provide varying levels of support for microservices, the most prominent being Nirmata, Akka, Bluemix, Kubernetes, Mesos, and AWS Lambda [there’s a mixed bag!!]. SF is more powerful: <strong>it is the only data-ware orchestration system today for stateful microservices</strong>&quot;</p>
<p>I'm new to Helm. While installing a Helm chart I had to update its dependency, after which I'm unable to install the chart.</p> <p>These are the steps that I have followed:</p> <ol> <li>Downloaded the helm chart using</li> </ol> <pre><code>helm pull codecentric/keycloak --version 18.1.1 --untar </code></pre> <ol start="2"> <li>Changed two fields in Chart.yaml, i.e., <code>name: postgresql</code> is changed to <code>name: postgresql-ha</code> and <code>10.3.13</code> is changed to <code>version: 9.1.0</code></li> <li>Deleted the existing dependency under the charts folder, i.e., <code>rm -R charts/postgresql</code></li> <li>Updated the Helm dependencies using: <code>helm dependency update keycloak</code></li> <li>When I install this updated chart, I'm facing the issue below:</li> </ol> <pre><code>$helm install keycloak ./keycloak -n identity Error: INSTALLATION FAILED: template: keycloak/templates/statefulset.yaml:59:48: executing &quot;keycloak/templates/statefulset.yaml&quot; at &lt;include &quot;keycloak.postgresql.fullname&quot; .&gt;: error calling include: template: keycloak/templates/_helpers.tpl:68:3: executing &quot;keycloak.postgresql.fullname&quot; at &lt;include &quot;postgresql.primary.fullname&quot; $postgresContext&gt;: error calling include: template: no template &quot;postgresql.primary.fullname&quot; associated with template &quot;gotpl&quot; </code></pre> <p>I see that it tried to get the value from the _helpers.tpl file, so I tried to update the occurrences of <code>postgresql</code> in that file to <code>postgresql-ha</code>, but this has not solved the issue.</p>
<p>It seems that the statefulset template in the keycloak chart includes a function from the postgresql chart when its deployment is enabled. Since you replaced the postgresql chart with the postgresql-ha chart, the function cannot be included (probably because the function doesn't exist in the postgresql-ha chart) and therefore the installation fails with this error.</p> <p>I suggest you keep the keycloak chart in its original state, deploy the postgresql-ha chart separately and configure keycloak to use the postgresql-ha deployment instead. This way you don't break the functionality of the keycloak chart but can use the postgresql-ha setup.</p> <p>How this is done is explained in the docs of the keycloak chart (<a href="https://github.com/codecentric/helm-charts/tree/master/charts/keycloak#using-an-external-database" rel="nofollow noreferrer">https://github.com/codecentric/helm-charts/tree/master/charts/keycloak#using-an-external-database</a>).</p>
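<p>As a rough sketch of what that could look like in the keycloak chart's values (the exact keys depend on the chart version, so double-check the linked README; the host <code>keycloak-db-postgresql-ha-pgpool</code>, database name and credentials here are placeholders for your separate postgresql-ha release):</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml for the codecentric/keycloak chart
postgresql:
  enabled: false            # don't deploy the bundled database

extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: keycloak-db-postgresql-ha-pgpool   # placeholder: pgpool service of your postgresql-ha release
  - name: DB_PORT
    value: &quot;5432&quot;
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: change-me                          # better: reference a Secret via extraEnvFrom
</code></pre>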
<p>We have the below directory structure on the Linux system.</p> <pre><code> /root β”œβ”€dirA β”‚ β”œβ”€main.tf β”‚ β”œβ”€terraform.tfvars β”‚ └─variables.tf └─dirB └─main.tf </code></pre> <p><strong>==FIRST==</strong></p> <p>We used the below snippet in <code>main.tf</code> file of <code>dirA</code> to create a local kubeconfig file.</p> <pre><code>resource &quot;local_file&quot; &quot;kubeconfig&quot; { content = module.gke_auth.kubeconfig_raw filename = &quot;./kubeconfig&quot; } </code></pre> <p>Now we would like to access this <code>kubeconfig</code> file in the <code>main.tf</code> file of <code>dirB</code> inside the following snippet. Please suggest how to do that?</p> <pre><code>provider &quot;kubernetes&quot; { config_path = &quot;&lt;PATH_TO_KUBECONFIG_FILE&gt;&quot; } </code></pre> <p><strong>==SECOND==</strong></p> <p>We have defined some variables inside the <code>terraform.tfvars</code> file of <code>dirA</code> and we would like to access those variables inside the <code>main.tf</code> file of <code>dirB</code>. Please suggest how to do this.</p>
<p>We have solved both the issues with the setup below.</p> <p>File <strong>dirA/main.tf</strong> contains something similar to</p> <pre><code>resource &quot;local_file&quot; &quot;kubeconfig&quot; { content = module.gke_auth.kubeconfig_raw filename = &quot;${path.module}/kubeconfig&quot; } output &quot;kubeconfig_file&quot; { value = &quot;${path.cwd}/kubeconfig&quot; } </code></pre> <p>File <strong>dirB/main.tf</strong> contains something similar to</p> <pre><code>data &quot;terraform_remote_state&quot; &quot;kubeconfig_file&quot; { backend = &quot;local&quot; config = { path = &quot;${path.module}/../dirA/terraform.tfstate&quot; } } provider &quot;kubernetes&quot; { config_path = &quot;${data.terraform_remote_state.kubeconfig_file.outputs.kubeconfig_file}&quot; } </code></pre> <p>Finally:</p> <pre><code>cd dirA terraform apply cd ../dirB terraform apply </code></pre> <p><em>Note: In a similar way we can access variables from the stack in the different directory</em></p>
<p>I am learning kubernetes and minikube, and I am following this tutorial:</p> <p><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/</a></p> <p>But I am running into a problem: I am not able to load the exposed service. Here are the steps I take:</p> <pre><code>minikube start </code></pre> <p>The cluster info returns</p> <pre><code>Kubernetes control plane is running at https://127.0.0.1:50121 CoreDNS is running at https://127.0.0.1:50121/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>Then I am creating a deployment</p> <pre><code>kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4 </code></pre> <p>and exposing it as a service</p> <pre><code>kubectl expose deployment hello-minikube1 --type=NodePort --port=8080 </code></pre> <p>When I list the services, I don't have a URL:</p> <pre><code>minikube service list </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>NAMESPACE</th> <th>NAME</th> <th>TARGET PORT</th> <th>URL</th> </tr> </thead> <tbody> <tr> <td>default</td> <td>hello-minikube1</td> <td>8080</td> <td></td> </tr> </tbody> </table> </div> <p>and when I try to get the URL, I am not getting it; it seems to be empty</p> <pre><code>minikube service hello-minikube1 --url </code></pre> <p>This is the response (first line is empty):</p> <pre><code>πŸƒ Starting tunnel for service hello-minikube2. ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. </code></pre> <p>Why am I not getting the URL and why can I not connect to it? What did I miss?</p> <p>Thanks!</p>
<p>Please use the <code>minikube ip</code> command to get the IP of minikube and then use the port number with it.</p> <p>Also, refer to the link below:</p> <p><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#:%7E:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#:~:text=minikube%20tunnel%20runs%20as%20a,on%20the%20host%20operating%20system</a>.</p>
<p>When I type <code>kubectl logs pod/yourpod</code> to get my pod's logs, behind the scenes, k8s must read the log from somewhere in my pod.</p> <p>What's the default path to the log generated by my container? How do I change the path?</p> <p>Inside my container, my process uses <code>sigs.k8s.io/controller-runtime/pkg/log</code> to generate logs.</p>
<p>It is the console output (stdout / stderr) that is captured by the container runtime and made available to the API server by the kubelet running on the node.</p> <p>So there is no real log file inside your pod, though the container runtime usually has a means of buffering the logs to the file system on the node (typically under <code>/var/log/pods/</code>, with symlinks in <code>/var/log/containers/</code>, depending on the runtime configuration).</p>
<p>I am deploying authelia with their helm chart <a href="https://artifacthub.io/packages/helm/truecharts/authelia" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/truecharts/authelia</a>.</p> <p>I need to mount a ConfigMap to a specific file in the authelia container, however this is not available as a templated value in the <code>values.yaml</code>.</p> <p>I'm struggling to understand what is the recommended way of doing something like this... this could generalise to &quot;how can I customise an helm deployment beyond what is allowed through the <code>values.yaml</code>?&quot;</p> <p>I can only think of getting all the yaml from the chart, make the change I need and deploy like that... It can't be the best way of doing this</p> <h4>solution</h4> <h5>users_database.yml</h5> <pre class="lang-yaml prettyprint-override"><code>############################################################### # Users Database # ############################################################### # This file can be used if you do not have an LDAP set up. # List of users users: admin: displayname: &quot;admin&quot; password: &quot;redacted&quot; email: admin@redacted groups: - admins - dev </code></pre> <h5>patches/authelia_users_database.yaml</h5> <pre class="lang-yaml prettyprint-override"><code>spec: template: spec: containers: - name: authelia volumeMounts: - name: authelia-users-database mountPath: /etc/config volumes: - name: authelia-users-database configMap: name: authelia-users-database </code></pre> <h6>values.yaml</h6> <pre class="lang-yaml prettyprint-override"><code>authentication_backend: file: enabled: true path: /etc/config/users_database.yml </code></pre> <h6></h6> <pre><code>$ helm repo add authelia https://charts.authelia.com $ helm repo update $ helm upgrade --install authelia authelia/authelia --values values/authelia.yaml --version 0.8.34 $ kubectl create configmap authelia-users-database --from-file=users_database.yml --dry-run -o yaml | kubectl apply -f - $ kubectl patch daemonset authelia --patch-file patches/authelia_users_database.yaml </code></pre>
<p>I would say you could try to use <code>kubectl patch</code> to mount the ConfigMap to the authelia container afterwards. <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">Look here</a> for more information.</p> <p>So create a <code>.yml</code> file with Deployment, Replicaset or Statefulset and add the ConfigMap configuration (just check which Kubernetes object suits best for you according to the Helm deployment). After deploying the application using the Helm chart; run the command <code>kubectl patch -f your-deployment-file.yml</code>.</p>
<p>I am using DSBulk to unload data into CSV from a DSE cluster installed under Kubernetes. My cluster consists of 9 Kubernetes Pods, each with 120 GB of RAM.</p> <p>I have monitored the resources while unloading the data and observed that the more data is fetched into the CSV, the more RAM is utilised, and pods are restarting due to lack of memory.</p> <p>If one Pod is down at a time the DSBulk unload won't fail, but if 2 Pods are down the unload will fail with the exception:</p> <blockquote> <p><strong>Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded).</strong></p> </blockquote> <p>Is there a way to avoid this excessive memory usage, or is there a way to increase the timeout duration?</p> <p>The command I am using is:</p> <pre><code>dsbulk unload -maxErrors -1 -h β€˜[β€œ &lt; My Host &gt; ”]’ -port 9042 -u &lt; My user name &gt; -p &lt; Password &gt; -k &lt; Key Space &gt; -t &lt; My Table &gt; -url &lt; My Table &gt; --dsbulk.executor.continuousPaging.enabled false --datastax-java-driver.basic.request.page-size 1000 --dsbulk.engine.maxConcurrentQueries 128 --driver.advanced.retry-policy.max-retries 100000 </code></pre>
<p>After a lot of trial and error, we found out that the problem was the Kubernetes Cassandra pods using the main server's memory size as the <strong>Max Direct Memory Size</strong>, rather than the pods' max assigned RAM.</p> <p>The pods were assigned 120 GB of RAM, but Cassandra on each pod was assigning 185 GB of RAM to <strong>file_cache_size</strong>, which made the unloading process fail as Kubernetes was rebooting each pod that utilised more than 120 GB of RAM.</p> <p>The reason is that <strong>Max Direct Memory Size</strong> is calculated as:</p> <pre><code>Max direct memory = (system memory - JVM heap size) / 2 </code></pre> <p>So each pod was using 325 GB as its <strong>Max Direct Memory Size</strong>, and each pod's <strong>file_cache_size</strong> is automatically set to half of the <strong>Max Direct Memory Size</strong> value, so whenever a pod requested more than 120 GB of memory Kubernetes would restart it.</p> <p>The solution was to set <strong>Max Direct Memory Size</strong> as an env variable in the Kubernetes cluster's YAML file with a default value, or to override it by setting the <strong>file_cache_size</strong> value in each pod's Cassandra YAML file.</p>
<p>I have two Helm charts. I need to generate a random password in one of the charts, and I am required to use the same password in the second.</p> <p>What would be the best practice to achieve this?</p> <p>Thanks in advance!!</p>
<p>Generate the password in a known secret name then refer to that secret in the other chart?</p>
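<p>A minimal sketch of that approach, assuming Helm 3 and that both releases live in the same namespace (the secret name <code>shared-db-password</code> and the key name are made up), is to let the first chart create the password only if the Secret does not already exist, and have the second chart simply reference the Secret by name:</p>
<pre class="lang-yaml prettyprint-override"><code># templates/shared-password-secret.yaml in the first chart
{{- if not (lookup &quot;v1&quot; &quot;Secret&quot; .Release.Namespace &quot;shared-db-password&quot;) }}
apiVersion: v1
kind: Secret
metadata:
  name: shared-db-password
  annotations:
    &quot;helm.sh/resource-policy&quot;: keep   # optional: keep the value across uninstalls
type: Opaque
data:
  password: {{ randAlphaNum 24 | b64enc | quote }}
{{- end }}
---
# somewhere in the second chart's pod template
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: shared-db-password
        key: password
</code></pre>
<p>Keep in mind that <code>lookup</code> returns an empty result during <code>helm template</code> and <code>--dry-run</code>, so the &quot;only generate if it doesn't exist yet&quot; guard only works on a real install or upgrade against the cluster.</p>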
<pre><code> kubectl cp namespace/podname:/path/target . </code></pre> <p>If I use the instructed command from kubernetes guide, it only copies the contents inside the <code>target</code> directory and omits <code>target</code> itself.<br /> I don't want to use <code>mkdir</code> every time I need to copy.<br /> What's the option?</p>
<p>I have a pod under <code>default</code> namespace called <code>ubuntu-pod</code> with a file located at root: <code>/decomission.log</code> and I got the same error:</p> <pre class="lang-yaml prettyprint-override"><code>$ kubectl cp default/ubuntu-pod:/decommission.log decommission.log tar: Removing leading `/' from member names </code></pre> <p>The solution was to remove the slash and then I was able to copy the file with no message:</p> <pre class="lang-yaml prettyprint-override"><code>$ kubectl cp default/ubuntu-pod:decommission.log decommission.log $ ls decommission.log </code></pre>
<p>I have an architectural question: We have a Django project made of multiple apps. There is a core app that holds the main models used for the other sets of apps. Then, we have a couple apps for user facing APIs. Lastly, we have some internal apps and tools used by developers only that are accessible in Admin UI as extended features.</p> <p>Our deployment process is very monolithic. We use Kubernetes and we deploy the whole project as a whole. Meaning that if we only had changes in an internal app and we need that in production, we will build a new Docker image and deploy a new release with a new version tag incremented.</p> <p>I'm not a big fan of this because change in internal tools shouldn't create a new release of the user facing applications.</p> <p>I have been wondering if there is a way to split those deployments (maybe make them into a microservice architecture?). So we could deploy the user facing applications separate from the internal tools. I know I could build separate images, tags and everything for parts of the project but I'm not sure how they could communicate between each other if <code>internal_app_1</code> depends on the models of <code>core</code> app and potentially the <code>settings.py</code> and <code>manage.py</code> file as well.</p> <p>Also because in Kubernetes, having to separate applications would mean to separate deployments with two servers running, so this means two separate Django projects isolated from each other but using the same database.</p> <p>Has anyone worked with something similar or would like to suggest an alternative, if there's any?</p> <p>Below is a tree example of how our project is structured at the moment:</p> <pre class="lang-bash prettyprint-override"><code>β”œβ”€β”€ core | β”œβ”€β”€ models.py | β”œβ”€β”€ views.py | └── urls.py β”œβ”€β”€ userapi_1 | β”œβ”€β”€ views.py | └── urls.py β”œβ”€β”€ userapi_2 | β”œβ”€β”€ views.py | └── urls.py β”œβ”€β”€ insternal_app_1 | β”œβ”€β”€ templates | | └── ... | β”œβ”€β”€ models.py | β”œβ”€β”€ views.py | └── urls.py β”œβ”€β”€ manage.py β”œβ”€β”€ settings.py └── Dockerfiles β”œβ”€β”€ Dockerfile.core └── Dockerfile.internal_app_1 </code></pre>
<p>Django and microservices? Yeah, maybe somewhere in the parallel universe.</p> <p>The one thing I can recommend is to build two identical services, say <code>django_container_internal</code> and <code>django_container_production</code>. That way you can release <code>internal tools</code> without stopping <code>production</code> (a rough Deployment sketch is shown at the end of this answer).</p> <p>If you want to prevent access to <code>production</code> functionality through the <code>internal</code> endpoints, you can deactivate the <code>production</code> URLs using <code>ENVs</code>. Usually a Django project has a common <code>config/urls.py</code> that aggregates all URL endpoints and looks like:</p> <pre class="lang-py prettyprint-override"><code>urlpatterns = [
    url('core/api/v1/', include(core.urls)),
    url('internal/api/v1/', include(internal_app_1.urls)),
    url('user/api/v1/', include(userapi_1.urls))
    ...
]
</code></pre> <p>For example, you could add an <code>IS_INTERNAL_TOOLS</code> environment variable and update <code>urls.py</code> like:</p> <pre class="lang-py prettyprint-override"><code>from os import environ

urlpatterns = [
    url('core/api/v1/', include(core.urls)),
    ...
]

if environ.get('IS_INTERNAL_TOOLS', 'false').lower() in ('true', '1', 'yes'):
    urlpatterns.append(url('internal/api/v1/', include(internal_app_1.urls)))
else:
    urlpatterns.append(url('user/api/v1/', include(userapi_1.urls)))
</code></pre> <ul> <li><p>Pros:</p> <ul> <li>All models are accessible from both services (one common data-access layer, so no duplicated work creating models twice)</li> <li>Functionality is separated, so only the necessary features are exposed in each service</li> <li>Easy to implement</li> </ul> </li> <li><p>Cons:</p> <ul> <li>The whole source code is shipped in both containers even if half of it is never used</li> <li>If you use two separate databases for the internal tools and the external API, you have to create all tables in both (but that does not look like your case)</li> <li>Because it is still a monolith, the <code>internal</code> and <code>production</code> parts depend heavily on the common <code>core</code>, and it is impossible to deploy an updated <code>core</code> on its own</li> </ul> </li> </ul>
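<p>If it helps, the two &quot;identical&quot; services could simply be two Deployments built from the same image, differing only in that environment variable. This is only a sketch with made-up names and image tags:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical Deployment for the internal-tools flavour; the production one
# would be identical except IS_INTERNAL_TOOLS is unset (or &quot;false&quot;).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-internal
  template:
    metadata:
      labels:
        app: django-internal
    spec:
      containers:
        - name: django
          image: registry.example.com/myproject:1.2.3   # same image as production (assumed tag)
          env:
            - name: IS_INTERNAL_TOOLS
              value: &quot;true&quot;
</code></pre>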
<p>I have the below pipeline</p> <pre><code>apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: git-clone-pipeline
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-repository
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: &quot;$(params.repo-url)&quot;
        - name: deleteExisting
          value: &quot;true&quot;
    - name: build
      taskRef:
        name: gradle
      runAfter:
        - &quot;clone-repository&quot;
      params:
        - name: TASKS
          value: build
        - name: GRADLE_IMAGE
          value: docker.io/library/gradle:jdk17-alpine@sha256:dd16ae381eed88d2b33f977b504fb37456e553a1b9c62100b8811e4d8dec99ff
        - name: PROJECT_DIR
          value: ./discount-api
      workspaces:
        - name: source
          workspace: shared-workspace
</code></pre> <p>And the <strong>PipelineRun</strong></p> <pre><code>apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-pipeline
  namespace: tekton-pipelines
spec:
  serviceAccountName: git-service-account
  pipelineRef:
    name: git-clone-pipeline
  workspaces:
    - name: shared-workspace
      emptyDir: {}
  params:
    - name: repo-url
      value: git@github.com:anandjaisy/discount.git
</code></pre> <p>And a project directory structure as follows:</p> <p><a href="https://i.stack.imgur.com/o8mEq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o8mEq.png" alt="enter image description here" /></a></p> <p>I am facing an issue during the pipeline run:</p> <pre><code>2022-06-18T08:17:57.775506334Z Directory '/workspace/source/discount-api' does not contain a Gradle build.
</code></pre> <p>This issue is related to the files not being found. The git-clone task has cloned the code somewhere in the cluster. How do I know where the code is?</p> <pre><code>kubectl get pods

run-pipeline-build-pod                               0/1     Error       0          173m
run-pipeline-fetch-source-pod                        0/1     Completed   0          173m
tekton-dashboard-b7b8599c6-wf7b2                     1/1     Running     0          12d
tekton-pipelines-controller-674dd45d79-529pc         1/1     Running     0          12d
tekton-pipelines-webhook-86b8b9d87b-qmxzk            1/1     Running     0          12d
tekton-triggers-controller-6d769dddf7-847nt          1/1     Running     0          12d
tekton-triggers-core-interceptors-69c47c4bb7-77bvt   1/1     Running     0          12d
tekton-triggers-webhook-7c4fc7c74-lgm79              1/1     Running     0          12d
</code></pre> <p>Looking at the logs of the pod <code>run-pipeline-build-pod</code> shows the exception above.</p> <pre><code>kubectl exec --stdin --tty &lt;Pod-name&gt; -- /bin/bash
</code></pre> <p>The above command can be used to get inside a pod; however, in this case <code>run-pipeline-fetch-source-pod</code> has already completed, so we can't exec into it.</p> <p>How can I solve this file issue?</p>
<p>You're using an emptyDir as a workspace. You should use a PVC. You could fix your PipelineRun with something like this:</p> <pre><code>workspaces:
  - name: shared-workspace
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
</code></pre> <p>The workspace is used by your git-clone task. Describing the git-clone pod from your PipelineRun, you should find a volume of type emptyDir, which is where the data was cloned.</p> <p>When another Task then uses that workspace in another Pod, the emptyDir starts from scratch: there's no data when the pod starts. A PVC would allow you to share data between tasks.</p> <hr /> <p>As for debugging: once pods are completed, there's not much to do.</p> <p>When debugging PipelineRuns, if a pod exits too fast for me to <code>kubectl exec -it</code> into it, I would change the failing container's arguments (editing the corresponding Task) to something like &quot;sleep 86400&quot; instead of its usual command, then restart a PipelineRun.</p> <p>In your case, if you already had a PVC as your workspace, it would have been easier to just start a Pod (or create a deployment), attach the PVC created by your PipelineRun, and check what's in there.</p>
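<p>As a sketch of that last idea (the names and image are placeholders; the claim name is whatever your PipelineRun's <code>volumeClaimTemplate</code> actually created, see <code>kubectl get pvc</code>):</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical debug pod that mounts the workspace PVC so you can inspect
# what the git-clone task actually wrote there.
apiVersion: v1
kind: Pod
metadata:
  name: workspace-debug
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: [&quot;sleep&quot;, &quot;86400&quot;]      # keep the pod around for kubectl exec
      volumeMounts:
        - name: workspace
          mountPath: /workspace/source
  volumes:
    - name: workspace
      persistentVolumeClaim:
        claimName: pvc-xxxxx             # replace with the PVC created for your run
</code></pre>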
<p>I have a domain at Cloudflare and some wildcards for subdomains</p> <p><a href="https://i.stack.imgur.com/R46FF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R46FF.png" alt="enter image description here" /></a></p> <p>which both point to the load balancer of an nginx ingress of a Kubernetes cluster (GKE) on GCP. Now, we have two pods, each with a service (echo1 and echo2, which are essentially identical), and when I apply an ingress</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot;
spec:
  rules:
    - host: &quot;echo1.eu3.example.com&quot;
      http:
        paths:
          - pathType: Prefix
            path: &quot;/&quot;
            backend:
              service:
                name: echo1
                port:
                  number: 80
    - host: &quot;echo2.example.com&quot;
      http:
        paths:
          - pathType: Prefix
            path: &quot;/&quot;
            backend:
              service:
                name: echo2
                port:
                  number: 80
</code></pre> <p>I can reach echo2 under echo2.example.com, but not echo1.eu3.example.com. My question is how I can make echo1.eu3.example.com reachable as well.</p>
<p>I can advise you to do a quick check first.</p> <p>Set the Proxy status for &quot;echo1.eu3.example.com&quot; to DNS only, then check access. If that works, install certificates in Kubernetes via cert-manager. We have faced this issue a few times and resolved it by avoiding deep subdomains, using names like &quot;echo1-eu3.example.com&quot; instead. It seems Cloudflare's universal certificate does not cover such deep subdomains (it only matches one subdomain level, e.g. *.example.com). Of course, if someone writes up a solution for working with deep subdomains behind Cloudflare's proxy, that would be good practice for us :)</p>
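<p>If you go the cert-manager route, a Certificate for the deep name could look roughly like the sketch below; the issuer and secret names are placeholders, not values from your cluster:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical cert-manager Certificate covering the deep subdomain.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: echo1-eu3
  namespace: default
spec:
  secretName: echo1-eu3-tls            # then referenced from the Ingress tls section
  issuerRef:
    name: letsencrypt-prod             # assumed ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - echo1.eu3.example.com
</code></pre>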
<p>I have a k8s cronjob that runs my Docker image <code>transaction-service</code>.</p> <p>It starts and gets its job done successfully. When it's over, I expect the pod to terminate but... <code>istio-proxy</code> still lingers there:</p> <p><a href="https://i.stack.imgur.com/NdnGQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdnGQ.png" alt="containers" /></a></p> <p>And that results in:</p> <p><a href="https://i.stack.imgur.com/qehiJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qehiJ.png" alt="unready pod" /></a></p> <p>Nothing too crazy, but I'd like to fix it.</p> <p>I know I should call <code>curl -X POST http://localhost:15000/quitquitquit</code></p> <p>But I don't know where and how. I need to call that quitquitquit URL only when transaction-service is in a completed state. I read about the <code>preStop</code> lifecycle hook, but I think I need more of a <code>postStop</code> one. Any suggestions?</p>
<p>You have a few options here:</p> <ol> <li>In your Job/CronJob spec, wrap the container's command with the lines below, placing your own job where indicated:</li> </ol> <pre><code>command: [&quot;/bin/bash&quot;, &quot;-c&quot;]
args:
  - |
    trap &quot;curl --max-time 2 -s -f -XPOST http://127.0.0.1:15020/quitquitquit&quot; EXIT
    while ! curl -s -f http://127.0.0.1:15020/healthz/ready; do sleep 1; done
    echo &quot;Ready!&quot;
    &lt; your job &gt;
</code></pre> <ol start="2"> <li>Disable Istio injection at the Pod level in your Job/CronJob definition:</li> </ol> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  ...
spec:
  ...
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            # disable istio on the pod due to this issue:
            # https://github.com/istio/istio/issues/11659
            sidecar.istio.io/inject: &quot;false&quot;
</code></pre> <p>Note: The annotation should be on the Pod's template, not on the Job's template.</p>
<p>I want to show the percentage of server (or service) uptime in Grafana for the last month. The monitoring service collects data with the Prometheus node exporter.</p> <pre><code>uptime % = ( time up / total time ) * 100 </code></pre>
<p>As <a href="https://stackoverflow.com/users/4653675/marcelo-%C3%A1vila-de-oliveira">Marcelo Ávila de Oliveira</a> said, I used the Blackbox Exporter with this PromQL:</p> <pre><code>avg_over_time(probe_success{job=&quot;$job&quot;,instance=~&quot;$instance&quot;}[30d]) * 100 </code></pre>
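<p>For completeness, the <code>probe_success</code> series comes from a blackbox exporter scrape job along these lines; the target URL and exporter address below are placeholders, not values from the original setup:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical Prometheus scrape config for the blackbox exporter.
scrape_configs:
  - job_name: &quot;blackbox-http&quot;
    metrics_path: /probe
    params:
      module: [http_2xx]               # probe module defined in blackbox.yml
    static_configs:
      - targets:
          - https://my-service.example.com/healthz   # endpoint whose uptime you want
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115          # where the exporter itself runs
</code></pre>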
<p>I have a microservice running in Kubernetes as a Deployment. It serves a Node.js application with PM2. I recently noticed that whenever I deploy a new version of the application with Helm, PM2 exits with code [0]:</p> <pre><code>PM2 log: App [api.bundle:0] exited with code [0] via signal [SIGINT] </code></pre> <p>I tried to investigate whether there is an exception in the application, but could not find any error prior to the deployment. That leads me to ask: how is PM2 getting restarted? Does Kubernetes send a kill signal to PM2 when a new deployment comes in?</p>
<p><code>...whenever I deploy a new version of the application with Helm, PM2 exits with code [0]</code></p> <p>When you do a <code>helm upgrade</code>, the command triggers a rolling update of the Deployment, which replaces the existing pod(s) with new ones. During this process, <code>signal [SIGINT]</code> is sent to tell your PM2 container it's time to exit.</p> <p><code>...how is PM2 getting restarted? Does Kubernetes send a kill signal to PM2 when a new deployment comes in?</code></p> <p>Correct.</p>
<p>I'm taking a Kubernetes course and part of the course is trying out minikube. I have minikube and kubectl installed on an Ubuntu 20.04 WSL on Windows 11. When I run <code>minikube dashboard</code> in the Ubuntu terminal, the process hangs on <code>Verifying proxy health...</code>. I tried running <code>kubectl proxy</code> in another terminal window and then running <code>minikube dashboard</code>. I go to the dashboard URL and use the proxy port number that the kubectl proxy command indicates, but I only get this:</p> <pre><code>{
  &quot;kind&quot;: &quot;Status&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;metadata&quot;: {},
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;no endpoints available for service \&quot;kubernetes-dashboard\&quot;&quot;,
  &quot;reason&quot;: &quot;ServiceUnavailable&quot;,
  &quot;code&quot;: 503
}
</code></pre> <p>I've also tried running minikube with <code>minikube start --vm-driver=docker</code> as mentioned in this <a href="https://github.com/kubernetes/minikube/issues/9473#issuecomment-1001145166" rel="noreferrer">GitHub issue</a>, but it still hangs. How do I get the dashboard to run? I'm completely new to Kubernetes in general.</p>
<p>For me the same issue was resolved by restarting minikube:</p> <pre><code>minikube stop
# close the previously opened dashboard process - for me Ctrl+C in a separate
# terminal, but you may have to kill it
minikube start
minikube dashboard
</code></pre>
<p>I have a k8s service that is behind a load balancer and exposes a <code>/metrics</code> endpoint. However, these metrics are identical for each pod, so there is no need to collect them from each pod - rather, any pod can provide the data. Currently, this results in the same metrics being emitted with only the pod dimension changing.</p> <p>What would be the idiomatic way to handle this? My first thought was to create a pseudo-endpoint that points to the service and collect from that, but this seems overly complicated.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: &quot;true&quot;
    prometheus.io/scrape: &quot;true&quot;
    prometheus.io/path: /metrics
    prometheus.io/port: &quot;80&quot;
  labels:
    app: {{ .Values.name }}-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: {{ .Values.name }}-service-pod
</code></pre>
<p>Firstly, let me say that it is not necessarily the case that every pod will return the same metrics; pods might return stats on their CPU utilisation, individual job queues, and a whole load of other pod-specific things. If this is your own product and they currently don't, they might in the future. It's certainly useful to be able to use your metrics to spot issues in the running of a workload in Kubernetes, i.e. to monitor the individual pods.</p> <p>Idiomatically, monitoring all pods is the way to go.</p>
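<p>If the duplication only bothers you at query time, you can keep scraping every pod and simply aggregate the label away in your dashboards. For a series that really is identical across pods, something like the following picks one representative value (the metric name here is a placeholder):</p>
<pre><code>max without (pod, instance) (my_app_some_metric)
</code></pre>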
<p>I have a cronjob where the pod it starts ends up in <code>ImagePullBackOff</code>, and the cronjob never schedules another pod, even though it should per the schedule. Is there a way to force the cron controller to schedule another pod even though the previous one ended in ImagePullBackOff?</p> <p>I don't want multiple pods running at the same time, so I use <code>concurrencyPolicy: Forbid</code>. Is there any way to get the CronJob to still schedule another pod?</p>
<p>You don't really want the scheduler to schedule another pod. Doing that would lead to a resource leak, as explained in <a href="https://github.com/kubernetes/kubernetes/issues/76570" rel="nofollow noreferrer">Infinite ImagePullBackOff CronJob results in resource leak</a>, which @VonC mentioned in his answer.</p> <p>Instead you should focus on fixing the root cause of why the pod is in <code>ImagePullBackOff</code>. Once that is done, Kubernetes will automatically pull the image and run the pod, and a new one will be scheduled once the cron schedule is fulfilled.</p> <p><code>ImagePullBackOff</code> means that the container could not start because the image could not be retrieved. The reason could be, for example, an invalid image id or tag, a missing or invalid <code>imagePullSecret</code>, or network connectivity issues.</p> <p>When a pod is in <code>ImagePullBackOff</code>, Kubernetes will periodically retry pulling the image, and once the image is successfully pulled the pod starts.</p> <p>The delay between pull attempts will increase with each attempt (a <em>BackOff</em>), as explained in the <a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Kubernetes raises the delay between each attempt until it reaches a compiled-in limit, which is 300 seconds (5 minutes).</p> </blockquote>
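<p>To pin down the root cause, the pod's events usually spell it out. A couple of generic commands that may help (the pod name below is a placeholder for whatever your CronJob created):</p>
<pre class="lang-bash prettyprint-override"><code># See the exact pull error in the pod's events
kubectl describe pod my-cronjob-27544712-abcde | grep -A10 Events

# Double-check which image reference the pod is actually trying to pull
kubectl get pod my-cronjob-27544712-abcde -o jsonpath='{.spec.containers[*].image}'
</code></pre>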
<p>I have deployed kube-state-metrics into the kube-system namespace, and the prometheus-operator is running in the same cluster. I've written the below ServiceMonitor to send metrics to Prometheus, but it is not working. Please find the files below.</p> <p>Servicemonitor.yaml</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-state-metrics
  labels:
    app.kubernetes.io/name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      prometheus-scrape: &quot;true&quot;
  endpoints:
    - port: metrics
      path: /metrics
      targetPort: 8080
      honorLabels: true
      scheme: https
      tlsConfig:
        insecureSkipVerify: true
</code></pre> <p>Prometheus-deploy.yaml</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: &quot;1&quot;
  name: prometheus
  labels:
    name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector: {}
  serviceMonitorNamespaceSelector:
    matchLabels:
      prometheus-scrape: &quot;true&quot;
  podMonitorSelector: {}
  podMonitorNamespaceSelector:
    matchLabels:
      prometheus-scrape: &quot;true&quot;
  resources:
    requests:
      memory: 400Mi
  enableAdminAPI: false
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
</code></pre> <p>Can anyone please help me out with this issue?</p> <p>Thanks.</p>
<p>The ServiceMonitor's <code>selector.matchLabels</code> must match the Service's labels. Check whether your kube-state-metrics Service actually carries the <code>prometheus-scrape: &quot;true&quot;</code> label.</p>
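<p>For example, the kube-state-metrics Service would need something along these lines (a sketch, not your actual manifest; note that the port name must also match the <code>port: metrics</code> entry in your ServiceMonitor):</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical Service carrying the label the ServiceMonitor selects on,
# plus a port named &quot;metrics&quot;.
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    prometheus-scrape: &quot;true&quot;          # must match spec.selector.matchLabels
spec:
  selector:
    app.kubernetes.io/name: kube-state-metrics
  ports:
    - name: metrics                      # must match the ServiceMonitor endpoint port name
      port: 8080
      targetPort: 8080
</code></pre>
<p>Note also that the Prometheus resource in the question selects ServiceMonitor namespaces by label (<code>serviceMonitorNamespaceSelector</code>), so the kube-system namespace itself may need the <code>prometheus-scrape: &quot;true&quot;</code> label as well.</p>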
<p>I am using Google Container Registry (GCR) to push and pull Docker images. I have created a deployment in Kubernetes with 3 replicas. The deployment uses a Docker image pulled from GCR.</p> <p>Out of the 3 replicas, 2 are pulling the image and running fine. But the third replica shows the error below and the pod's status remains &quot;ImagePullBackOff&quot; or &quot;ErrImagePull&quot;:</p> <blockquote> <p>&quot;Failed to pull image &quot;gcr.io/xxx:yyy&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;gcr.io/xxx:yyy&quot;: failed to resolve reference &quot;gcr.io/xxx:yyy&quot;: unexpected status code: 401 Unauthorized&quot;</p> </blockquote> <p>I am confused as to why only one of the replicas shows the error while the other 2 run without any issue. Can anyone please clarify this?</p> <p>Thanks in advance!</p>
<p><strong>ImagePullBackOff</strong> and <strong>ErrImagePull</strong> indicate that the image used by a container cannot be loaded from the image registry.</p> <p>A <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#401_unauthorized_cannot_pull_images_from_private_container_registry_repository" rel="nofollow noreferrer">401 unauthorized error</a> might occur when you pull an image from a private Container Registry repository. To troubleshoot the error:</p> <ol> <li><p>Identify the node that runs the pod with <code>kubectl describe pod POD_NAME | grep &quot;Node:&quot;</code></p> </li> <li><p>Verify the node has the storage scope by running the command</p> <pre><code>gcloud compute instances describe NODE_NAME --zone=COMPUTE_ZONE --format=&quot;flattened(serviceAccounts[].scopes)&quot; </code></pre> </li> <li><p>The node's access scope should contain at least one of the following:</p> <p>serviceAccounts[0].scopes[0]: <a href="https://www.googleapis.com/auth/devstorage.read_only" rel="nofollow noreferrer">https://www.googleapis.com/auth/devstorage.read_only</a><br /> serviceAccounts[0].scopes[0]: <a href="https://www.googleapis.com/auth/cloud-platform" rel="nofollow noreferrer">https://www.googleapis.com/auth/cloud-platform</a></p> </li> <li><p>Recreate the node pool that the node belongs to with sufficient scope. You cannot modify existing nodes; you must recreate the node pool with the correct scope.</p> <ul> <li><p>Create a new node pool with the gke-default scope using the following command:</p> <pre><code>gcloud container node-pools create NODE_POOL_NAME --cluster=CLUSTER_NAME --zone=COMPUTE_ZONE --scopes=&quot;gke-default&quot; </code></pre> </li> <li><p>Or create a new node pool with only the storage scope:</p> <pre><code>gcloud container node-pools create NODE_POOL_NAME --cluster=CLUSTER_NAME --zone=COMPUTE_ZONE --scopes=&quot;https://www.googleapis.com/auth/devstorage.read_only&quot; </code></pre> </li> </ul> </li> </ol> <p>Refer to the <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#ImagePullBackOff" rel="nofollow noreferrer">link</a> for more information on the troubleshooting process.</p>
<p>How do you change the <code>name</code> of a user in a kube config file with kubectl (no text editor)?</p> <p>Example kube config file <code>stage_config.yaml</code>:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://127.0.0.1:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre> <p>I want to do something like <code>kubectl config rename-user --kubeconfig ~/.kube/stage_config.yaml kubernetes-admin kubernetes-admin-1</code></p> <p>With the output like:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://127.0.0.1:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre> <p>I've tried <code>kubectl config set</code> but receive the following error:</p> <pre><code>kubectl config set --kubeconfig ~/.kube/stage_config.yaml users.name.kubernetes-admin kubernetes-admin-1
error: can't set a map to a value: map[kubernetes-admin:0xc000c53100]
</code></pre> <p><code>kubectl config --help</code> shows that the <code>rename-context</code> command exists, but nothing like <code>rename-user</code> nor <code>rename-cluster</code> exists.</p>
<p>Use jq to do the magic (just make sure your kubeconfig is in JSON). I'm sure it is possible to do the same with yq for YAML, but I'll let someone else fill that one in. Set <code>USER_NAME</code> to your desired value; the first expression renames the user entry and the second updates the context that references it.</p> <pre class="lang-bash prettyprint-override"><code>jq '.users[0].name=&quot;'$USER_NAME'&quot; | .contexts[0].context.user=&quot;'$USER_NAME'&quot;' source/kubeconfig &gt; target/kubeconfig </code></pre>
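<p>Since the kubeconfig in the question is YAML, a <code>yq</code> one-liner may be more convenient. This is a sketch assuming mikefarah's yq v4; the new name is just the one from the question:</p>
<pre class="lang-bash prettyprint-override"><code># Rename the user in place and update the context that references it (yq v4 syntax assumed)
yq eval '.users[0].name = &quot;kubernetes-admin-1&quot; | .contexts[0].context.user = &quot;kubernetes-admin-1&quot;' -i ~/.kube/stage_config.yaml
</code></pre>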