<p>I have a problem connecting to a port-forwarded PostgreSQL database on OpenShift:</p> <p>The PostgreSQL pod is running: <a href="https://i.stack.imgur.com/J5yuF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J5yuF.png" alt="Postgresql Pods have Been Running" /></a> <a href="https://i.stack.imgur.com/50yYE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/50yYE.png" alt="Detail Pods Postgresql" /></a></p> <p>When I connect to the container running the database to check the process and run the psql command, it works: <a href="https://i.stack.imgur.com/bck4h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bck4h.png" alt="Connect inside container postgre" /></a></p> <p>Next, I set up port forwarding to try a connection from outside the OpenShift cluster: <a href="https://i.stack.imgur.com/qMfLE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qMfLE.png" alt="Run port forwarding postgresql" /></a></p> <p>Then, when I try to connect to PostgreSQL from outside the cluster, I get a connection refused error. Using the IP address or the hostname/FQDN makes no difference; the error persists: <a href="https://i.stack.imgur.com/0EFmB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0EFmB.png" alt="Error Try Connect Postgresql to Openshift" /></a></p> <p>And when I check the firewall, port 5432/TCP has been opened: <a href="https://i.stack.imgur.com/ms2mI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ms2mI.png" alt="Check Ports Has Been Open or Not" /></a></p> <hr /> <p>Can anyone help me with this problem? Thanks.</p> <p>Note: I have already looked at the documentation below, but it did not resolve the problem: <a href="https://www.openshift.com/blog/openshift-connecting-database-using-port-forwarding" rel="nofollow noreferrer">https://www.openshift.com/blog/openshift-connecting-database-using-port-forwarding</a></p> <p><a href="https://stackoverflow.com/questions/32439167/psql-could-not-connect-to-server-connection-refused-error-when-connecting-to">&quot;psql: could not connect to server: Connection refused&quot; Error when connecting to remote database</a></p>
Vergie Hadiana
<p>The <code>oc port-forward</code> command forwards only from your loopback interfaces.</p> <p>If you are running your client on the same machine where the cluster is running, then use <code>localhost</code> as your &quot;Host&quot;.</p> <p>If you are running your client on a different machine, then you need more network redirection to get this to work. Please see this post for more information as well as work-arounds for your problem: <a href="https://stackoverflow.com/questions/52607821/access-openshift-forwarded-ports-from-remote-host">Access OpenShift forwarded ports from remote host</a></p>
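<p>A minimal sketch of that workflow when the client runs on the same machine as the <code>oc</code> command; the pod name, local port, user and database are placeholders:</p> <pre><code># forward local port 15432 (loopback only) to port 5432 of the PostgreSQL pod
oc port-forward postgresql-1-abcde 15432:5432

# in another terminal on the SAME machine, connect through the forwarded port
psql -h 127.0.0.1 -p 15432 -U myuser mydb
</code></pre>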
Mike Organek
<p>I am using Ambassador API Gateway in my GKE as below:</p> <pre><code>apiVersion: getambassador.io/v2 kind: Mapping metadata: name: my-service spec: host: app.mycompany.com prefix: / service: my-service </code></pre> <p>However, I would like to map all sub domains (*.mycompany.com) and route to my-service</p> <pre><code>apiVersion: getambassador.io/v2 kind: Mapping metadata: name: my-service spec: host: *.app.mycompany.com prefix: / service: my-service </code></pre> <p>How to map wildcard subdomain?</p>
Ha Doan
<p>Based on this <a href="https://www.getambassador.io/docs/latest/topics/using/headers/host/" rel="nofollow noreferrer">documentation</a>, you have to set the host as a regex pattern to match against your subdomains.</p> <p>So in your case, you'd want this:</p> <pre><code>apiVersion: getambassador.io/v2 kind: Mapping metadata: name: my-service spec: host: &quot;[a-z]*\\.app\\.mycompany\\.com&quot; host_regex: true prefix: / service: my-service </code></pre>
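<p>One hedged way to verify such a regex mapping without touching DNS is to send a request straight to the Ambassador service and set the <code>Host</code> header by hand; the load balancer IP and the subdomain below are placeholders:</p> <pre><code># assumed external IP of the Ambassador LoadBalancer service
AMBASSADOR_IP=203.0.113.10

# any subdomain matching the regex should now be routed to my-service
curl -i -H &quot;Host: api.app.mycompany.com&quot; http://$AMBASSADOR_IP/
</code></pre>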
Yazen Nasr
<p>I created a PV as follows:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: foo-pv spec: storageClassName: &quot;my-storage&quot; claimRef: name: foo-pvc namespace: foo </code></pre> <p>Why do we need to set storageClassName in a PV? When a StorageClass creates the PV, why specify storageClassName in the PV at all?</p> <p>Can someone help me understand this?</p>
Vasu Youth
<p>You can have 2 types of PVs:</p> <ol> <li>dynamically provisioned by StorageClasses</li> <li>manually/statically created by admins</li> </ol> <p><strong>Dynamically</strong> -&gt; this is often used in the cloud, for instance when you want to mount an Azure blob/file to a pod. In this case you don't have control over the PV name; the StorageClass creates and binds randomly named PVs.</p> <p><a href="https://i.stack.imgur.com/m5wOi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m5wOi.png" alt="enter image description here" /></a></p> <p><strong>Manually</strong> -&gt; this gives you more control: you can assign a specific name to the PV and a specific StorageClass with a Retain policy (the PV is not deleted after being Released by a Pod). As a result it is much easier to reuse that PV, since you know its name and StorageClass membership.</p> <p><a href="https://i.stack.imgur.com/rjj56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rjj56.png" alt="enter image description here" /></a></p>
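<p>As a small sketch of the static case, using the PV/PVC names from the question: binding only happens when the storageClassName on both sides matches, which can be checked like this:</p> <pre><code># the PV's storageClassName (set manually by the admin)
kubectl get pv foo-pv -o jsonpath='{.spec.storageClassName}{&quot;\n&quot;}'

# the PVC's storageClassName in namespace foo - it must match the PV's for the claim to bind
kubectl get pvc foo-pvc -n foo -o jsonpath='{.spec.storageClassName}{&quot;\n&quot;}'
</code></pre>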
jhook
<p>I deployed a k8s cluster on AWS EKS fargate. And deployed a elasticsearch container to the pod. The pod is stuck on <code>ContainerCreating</code> state and <code>describe pod</code> shows below error:</p> <pre><code>$ kubectl describe pod es-0 Name: es-0 Namespace: default Priority: 2000001000 Priority Class Name: system-node-critical Node: fargate-ip-10-0-1-207.ap-southeast-2.compute.internal/10.0.1.207 Start Time: Fri, 28 May 2021 16:39:07 +1000 Labels: controller-revision-hash=es-86f54d94fb eks.amazonaws.com/fargate-profile=elk_profile name=es statefulset.kubernetes.io/pod-name=es-0 Annotations: CapacityProvisioned: 1vCPU 2GB Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND kubernetes.io/psp: eks.privileged Status: Pending IP: IPs: &lt;none&gt; Controlled By: StatefulSet/es Containers: es: Container ID: Image: elasticsearch:7.10.1 Image ID: Ports: 9200/TCP, 9300/TCP Host Ports: 0/TCP, 0/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Limits: cpu: 2 memory: 8 Requests: cpu: 1 memory: 4 Environment: &lt;none&gt; Mounts: /usr/share/elasticsearch/config/elasticsearch.yml from es-config (rw,path=&quot;elasticsearch.yml&quot;) /var/run/secrets/kubernetes.io/serviceaccount from default-token-6qql4 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: es-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: es-config Optional: false default-token-6qql4: Type: Secret (a volume populated by a Secret) SecretName: default-token-6qql4 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 75s (x4252 over 16h) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: container_linux.go:349: starting container process caused &quot;process_linux.go:319: getting the final child's pid from pipe caused \&quot;read init-p: connection reset by peer\&quot;&quot;: unknown </code></pre> <p>How do I know what the issue is and how to fix it? I have tried to restart the <code>Statefulset</code> but it didn't restart. 
It seems the pod stucked.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: es-config data: elasticsearch.yml: | cluster.name: my-elastic-cluster network.host: &quot;0.0.0.0&quot; bootstrap.memory_lock: false discovery.zen.ping.unicast.hosts: elasticsearch-cluster discovery.zen.minimum_master_nodes: 1 discovery.type: single-node ES_JAVA_OPTS: -Xms2g -Xmx4g --- apiVersion: apps/v1 kind: StatefulSet metadata: name: es namespace: default spec: serviceName: es-entrypoint replicas: 1 selector: matchLabels: name: es template: metadata: labels: name: es spec: volumes: - name: es-config configMap: name: es-config items: - key: elasticsearch.yml path: elasticsearch.yml # - name: persistent-storage # persistentVolumeClaim: # claimName: efs-es-claim securityContext: fsGroup: 1000 runAsUser: 1000 runAsGroup: 1000 containers: - name: es image: elasticsearch:7.10.1 resources: limits: cpu: 2 memory: 8 requests: cpu: 1 memory: 4 ports: - name: http containerPort: 9200 - containerPort: 9300 name: inter-node volumeMounts: - name: es-config mountPath: /usr/share/elasticsearch/config/elasticsearch.yml subPath: elasticsearch.yml # - name: persistent-storage # mountPath: /usr/share/elasticsearch/data --- apiVersion: v1 kind: Service metadata: name: es-entrypoint spec: selector: name: es ports: - port: 9200 targetPort: 9200 protocol: TCP type: NodePort </code></pre>
Joey Yi Zhao
<p>In my case, it was a policy issue with the EKS CNI. If you run <code>kubectl logs -n kube-system -l k8s-app=aws-node</code>, you see:</p> <pre><code>Installed /host/opt/cni/bin/egress-v4-cni time=&quot;2023-05-07T03:09:29Z&quot; level=info msg=&quot;Starting IPAM daemon... &quot; time=&quot;2023-05-07T03:09:29Z&quot; level=info msg=&quot;Checking for IPAM connectivity... &quot; time=&quot;2023-05-07T03:09:30Z&quot; level=info msg=&quot;Copying config file... &quot; time=&quot;2023-05-07T03:09:30Z&quot; level=info msg=&quot;Successfully copied CNI plugin binary and config file.&quot; time=&quot;2023-05-07T03:09:30Z&quot; level=error msg=&quot;Failed to wait for IPAM daemon to complete&quot; error=&quot;exit status 1&quot; Installed /host/opt/cni/bin/aws-cni Installed /host/opt/cni/bin/egress-v4-cni time=&quot;2023-05-07T03:09:35Z&quot; level=info msg=&quot;Starting IPAM daemon... &quot; time=&quot;2023-05-07T03:09:35Z&quot; level=info msg=&quot;Checking for IPAM connectivity... &quot; time=&quot;2023-05-07T03:09:36Z&quot; level=info msg=&quot;Copying config file... &quot; time=&quot;2023-05-07T03:09:36Z&quot; level=info msg=&quot;Successfully copied CNI plugin binary and config file.&quot; time=&quot;2023-05-07T03:09:36Z&quot; level=error msg=&quot;Failed to wait for IPAM daemon to complete&quot; error=&quot;exit status 1&quot; </code></pre> <p>Once I attached the AmazonEKS_CNI_Policy to my worker node role, it worked:</p> <pre><code>Installed /host/opt/cni/bin/aws-cni Installed /host/opt/cni/bin/egress-v4-cni time=&quot;2023-05-07T03:11:55Z&quot; level=info msg=&quot;Starting IPAM daemon... &quot; time=&quot;2023-05-07T03:11:55Z&quot; level=info msg=&quot;Checking for IPAM connectivity... &quot; time=&quot;2023-05-07T03:11:56Z&quot; level=info msg=&quot;Copying config file... &quot; time=&quot;2023-05-07T03:11:56Z&quot; level=info msg=&quot;Successfully copied CNI plugin binary and config file.&quot; Installed /host/opt/cni/bin/aws-cni Installed /host/opt/cni/bin/egress-v4-cni time=&quot;2023-05-07T03:11:55Z&quot; level=info msg=&quot;Starting IPAM daemon... &quot; time=&quot;2023-05-07T03:11:55Z&quot; level=info msg=&quot;Checking for IPAM connectivity... &quot; time=&quot;2023-05-07T03:11:56Z&quot; level=info msg=&quot;Copying config file... &quot; time=&quot;2023-05-07T03:11:56Z&quot; level=info msg=&quot;Successfully copied CNI plugin binary and config file.&quot; </code></pre>
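<p>A minimal sketch of attaching that managed policy with the AWS CLI; the role name is a placeholder for your actual node instance role (or Fargate pod execution role):</p> <pre><code># attach the AWS-managed CNI policy to the node role (placeholder name)
aws iam attach-role-policy \
  --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

# restart the aws-node pods so they pick up the new permissions
kubectl -n kube-system rollout restart daemonset aws-node
</code></pre>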
Stan
<p>At the moment my daily Kubernetes CronJob creates job instances with names following this pattern:</p> <pre><code>my-chron-job-28038600-acddef my-chron-job-28038660-acddef my-chron-job-28038720-acddef </code></pre> <p>I was wondering if there is an easy way to override this and ensure that the CronJob creates Jobs following a pattern like the one below, which includes the Job creation/execution date in a human-readable format:</p> <pre><code>my-chron-job-2023-05-01-acddef my-chron-job-2023-05-02-acddef my-chron-job-2023-05-03-acddef </code></pre>
Attila
<p>AFAIK it is not possible. However, you can use <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath</a> to combine the pod name with the pod's start time:</p> <pre><code>kubectl get pods -l job-name -o jsonpath='{range .items[*]}{.metadata.labels.job-name}{&quot;-&quot;}{.status.startTime}{&quot;\n&quot;}{end}' </code></pre> <p>Note: the <code>job-name</code> label is automatically populated by k8s.</p> <p>The output would then be something like:</p> <pre><code>my-chron-job-28052272-2023-05-11:52:00Z ... </code></pre> <p>You can further refine the output (through pipes, <code>awk</code>, <code>sed</code> etc.) to achieve exactly the desired format. I don't know if it'd be worth the effort though.</p>
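<p>Building on that idea, a hedged refinement sketch that emits the name and start time separated by a space and then keeps only the date part of the timestamp with <code>awk</code>:</p> <pre><code>kubectl get pods -l job-name \
  -o jsonpath='{range .items[*]}{.metadata.labels.job-name}{&quot; &quot;}{.status.startTime}{&quot;\n&quot;}{end}' \
  | awk '{ split($2, t, &quot;T&quot;); print $1 &quot;-&quot; t[1] }'

# prints lines like: my-chron-job-28052272-2023-05-11
</code></pre>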
Kenan Güler
<p>I've created GCP's disk form a snapshot and now I'm trying to resize it using PVC in kubernetes: 100GB -&gt; 400GB. I've applied:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: restored-resize parameters: type: pd-standard provisioner: kubernetes.io/gce-pd allowVolumeExpansion: true reclaimPolicy: Retain --- apiVersion: v1 kind: PersistentVolume metadata: name: restored-graphite spec: storageClassName: restored-resize capacity: storage: 400G accessModes: - ReadWriteOnce gcePersistentDisk: pdName: dev-restored-graphite fsType: ext4 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: restored-graphite spec: # It's necessary to specify &quot;&quot; as the storageClassName # so that the default storage class won't be used, see # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 storageClassName: restored-resize volumeName: restored-graphite accessModes: - ReadWriteOnce resources: requests: storage: 400G </code></pre> <p>Status in PVC shows 400G:</p> <pre><code>(...) status: accessModes: - ReadWriteOnce capacity: storage: 400G phase: Bound </code></pre> <p>However pod mounts previous disk value:</p> <pre><code>/dev/sdc 98.4G 72.8G 25.6G 74% /opt/graphite/storage </code></pre> <p>What am I doing wrong?</p>
sacherus
<p>It seems to me that you have set 400G directly in the PV manifest, but as the manual says, you should have edited only</p> <pre><code>resources: requests: storage: 400G </code></pre> <p><a href="https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/</a></p> <p>thus triggering the new condition: <code>FileSystemResizePending</code></p> <p>As of Kubernetes v1.11, such PVCs auto-resize after some time in this status, so you shouldn't even have to restart the pod bound to the PVC.</p> <p>But, back to your problem: I would edit the manifest this way:</p> <pre><code>spec: storageClassName: restored-resize capacity: storage: 100G </code></pre> <p>so that the system reloads the old config and notices that the situation is not what it thinks. Or at least, that is what I would try (on another environment, not production, for sure).</p>
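<p>A hedged sketch of driving the expansion from the PVC side only, using the names from the question; whether the filesystem grows online still depends on the storage class and Kubernetes version:</p> <pre><code># bump only the PVC request; leave the PV capacity to the controller
kubectl patch pvc restored-graphite \
  -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;400G&quot;}}}}'

# watch the PVC conditions for Resizing / FileSystemResizePending
kubectl describe pvc restored-graphite
</code></pre>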
Alessandro Gregori
<p>I'm getting a <code>directory index of &quot;/src/&quot; is forbidden</code> error when setting up Docker Nginx configuration within Kubernetes. Here is the error from the Kubernetes logs.</p> <pre><code>/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh /docker-entrypoint.sh: Configuration complete; ready for start up 2021/03/11 02:23:24 [error] 23#23: *1 directory index of &quot;/src/&quot; is forbidden, client: 10.136.144.155, server: 0.0.0.0, request: &quot;GET / HTTP/1.1&quot;, host: &quot;10.42.3.4:80&quot; 10.136.144.155 - - [11/Mar/2021:02:23:24 +0000] &quot;GET / HTTP/1.1&quot; 403 125 &quot;-&quot; &quot;kube-probe/1.15&quot; </code></pre> <p>My dockerfile to serve nginx for an Angular app is quite simple:</p> <pre><code>FROM nginx RUN rm /etc/nginx/conf.d/default.conf COPY ./nginx/conf.d /etc/nginx/ COPY dist /src RUN ls /src </code></pre> <p>My nginx.conf file contains:</p> <pre><code>worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name 0.0.0.0; root /src; charset utf-8; include h5bp/basic.conf; # eg https://github.com/h5bp/server-configs-nginx include modules/gzip.conf; location =/index.html { include modules/cors.conf; } location / { try_files $uri$args $uri$args/ /index.html; } } } </code></pre> <p>The Kubernetes deployment is using a Quay image. Do you think my error could be in the dockerfile, the nginx.conf file, or both?</p>
stevetronix
<p>In the Dockerfile COPY line, you are copying <code>conf.d</code> into the <code>/etc/nginx/</code> folder itself; try changing it to:</p> <pre><code>FROM nginx RUN rm /etc/nginx/conf.d/default.conf COPY ./nginx/conf.d /etc/nginx/conf.d COPY dist /src RUN ls /src </code></pre>
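<p>A quick sketch for checking the result locally before pushing to Quay; the image tag is a placeholder:</p> <pre><code># build the image, check that the nginx config parses and the files landed where expected
docker build -t angular-nginx-test .
docker run --rm angular-nginx-test nginx -t
docker run --rm angular-nginx-test ls /etc/nginx/conf.d /src
</code></pre>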
GokulnathP
<p>Can I add multiple hosts to the Ingress controller so that they refer to the same target group in the aws load balancer? Example:</p> <pre><code> rules: - host: [&quot;foobar.com&quot;, &quot;api.foobar.com&quot;, &quot;status.foobar.com&quot;] http: paths: - backend: serviceName: foobar servicePort: 80 </code></pre>
Bell
<p>TLDR; no</p> <hr /> <p>Long answer:</p> <p>In <a href="https://github.com/kubernetes/api/blob/master/networking/v1beta1/types.go#L159" rel="nofollow noreferrer">k8s source code</a> you can see that <code>host</code> field's data type is string, so you cannot use array of strings in that place.</p> <p>But you should be able to do the following:</p> <pre><code>rules: - host: &quot;foobar.com&quot; http: paths: - backend: serviceName: foobar servicePort: 80 - host: &quot;api.foobar.com&quot; http: paths: - backend: serviceName: foobar servicePort: 80 - host: &quot;status.foobar.com&quot; http: paths: - backend: serviceName: foobar servicePort: 80 </code></pre>
Matt
<p>I'm trying to encode a database string using base64 on the command line in linux.</p> <p>Once I do, I add the value to a secret in Kubernetes, but my application fails to connect to the database because the DB string is not accepted. There seems to be a newline getting added: I can see it when I check the value in Lens, and it is not there in the same secret in a similar cluster.</p> <p>jdbc:postgresql://test.xxxxxxxx.eu-west-2.rds.amazonaws.com/test</p> <pre><code>deirdre$ echo jdbc:postgresql://test.xxxxxxxx.eu-west-2.rds.amazonaws.com/test | base64 | tr -d &quot;\n&quot; amRiYzpwb3N0Z3Jlc3FsOi8vdGVzdC54eHh4eHh4eC5ldS13ZXN0LTIucmRzLmFtYXpvbmF3cy5jb20vdGVzdAo= </code></pre> <p>Is there something I am doing wrong, or is there an issue with the /?</p>
DeirdreRodgers
<p>You can fix that easily with</p> <pre><code>echo -n &quot;string&quot; | base64 </code></pre> <p>&quot;echo -n&quot; removes the trailing newline character.</p> <p>You can also see the answer I gave to the following question: <a href="https://stackoverflow.com/questions/68032810/kubernetes-secrets-as-environment-variable-add-space-character/68033302#68033302">Kubernetes secrets as environment variable add space character</a></p>
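<p>As an alternative sketch that avoids hand-rolled base64 entirely, <code>kubectl create secret</code> encodes the literal for you; the secret and key names here are placeholders:</p> <pre><code># kubectl handles the base64 encoding, so no stray newline can sneak in
kubectl create secret generic db-conn \
  --from-literal=DB_URL='jdbc:postgresql://test.xxxxxxxx.eu-west-2.rds.amazonaws.com/test'

# verify: decode and show invisible characters (a trailing newline would show up as \n)
kubectl get secret db-conn -o jsonpath='{.data.DB_URL}' | base64 --decode | od -c
</code></pre>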
bymo
<p>You have a collection of pods that all run the same application, but with slightly different configuration. Applying the configuration at run-time would also be desirable. What's the best way to achieve this?</p> <p>a: Create a separate container image for each version of the application, each with a different configuration</p> <p>b: Use a single container image, but create ConfigMap objects for the different configurations and apply them to the different pods</p> <p>c: Create persistent volumes that contain the config files and mount them to the different pods</p>
user16350039
<p>In my opinion, the best solution for this would be &quot;b&quot;, using ConfigMaps. Apply changes to the ConfigMap and <strong>redeploy</strong> the image/pod. I use this approach very often at work.</p> <p>Edit from @AndD: [...] ConfigMaps can contain small files and as such can be mounted as read-only volumes instead of providing environment variables. So they can also be used instead of option c in case files are required.</p> <p>My favorite command for this:</p> <pre><code>kubectl patch deployment &lt;DEPLOYMENT&gt; -p &quot;{\&quot;spec\&quot;: {\&quot;template\&quot;: {\&quot;metadata\&quot;: { \&quot;labels\&quot;: { \&quot;redeploy\&quot;: \&quot;$(date +%s)\&quot;}}}}}&quot; </code></pre> <p>This patches a redeploy label set to the current timestamp (<code>date +%s</code>) into spec.template.metadata.labels.redeploy, which triggers a new rollout.</p> <p>Option &quot;a&quot; seems equally good; the downside is that you have to build the new image, push it and deploy it. You could build a Jenkins/GitLab/whatever pipeline to automate this. But I think ConfigMaps are easier and more &quot;manageable&quot;. Option &quot;a&quot; is viable if you have very many config files that would be too much work to maintain in ConfigMaps.</p> <p>Option &quot;c&quot; feels a bit too clunky just for config files. Just my two cents. Best practice is to not store config files in volumes, as you (often) want your applications to be as &quot;stateless&quot; as possible.</p>
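<p>On newer clusters there is also a built-in alternative to the label-patch trick above (assuming kubectl/Kubernetes 1.15+); the deployment name is a placeholder:</p> <pre><code># trigger a fresh rollout so pods pick up the updated ConfigMap
kubectl rollout restart deployment my-deployment

# follow the rollout until the new pods are ready
kubectl rollout status deployment my-deployment
</code></pre>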
bymo
<p><strong>Background</strong></p> <p>We run a kubernetes cluster that handles several php/lumen microservices. We started seeing the app php-fpm/nginx reporting 499 status code in it's logs, and it seems to correspond with the client getting a blank response (curl returns <code>curl: (52) Empty reply from server</code>) while the applications log 499. </p> <pre><code>10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3" </code></pre> <p>My understanding is nginx will return the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating this connection. Our configuration currently is:</p> <p>ELB -> k8s nginx ingress -> application</p> <p>So my thoughts are either ELB or ingress since the application is the one who has no socket left to return to. So i started hitting ingress logs...</p> <p><strong>Potential core problem?</strong></p> <p>While looking the the ingress logs i'm seeing quite a few of these: </p> <pre><code>2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status" </code></pre> <p><strong>Potential Solution</strong> </p> <p>I imagine if i gave vhost_traffic_status_zone some more memory at least that error would go away and on to finding the next error.. but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:</p> <p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/</a></p> <p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a></p> <p>Thanks in advance for any insight / suggestions / documentation I might be missing!</p>
Chris Robak
<p>here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link in some info on suggestions on how much memory you should give the zone.</p> <p>First start by getting the ingress controller version by checking the image version on the deploy <code>kubectl -n &lt;namespace&gt; get deployment &lt;deployment-name&gt; | grep 'image:'</code></p> <p>From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2. <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2</a></p> <p>The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code or /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. I the example case, we find the following line in the nginx.tmpl</p> <p><code>vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};</code></p> <p>This gives us the config variable to look up in the code. Our next grep for VtsStatusZoneSize leads us to the lines in internal/ingress/controller/config/config.go</p> <pre><code> // Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processe // https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone // Default value is 10m VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty" </code></pre> <p>This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.</p> <p>When it comes to what size you may want to set the zone, there are the docs here that suggest setting it to 2*usedSize:</p> <blockquote> <p>If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).</p> <p><a href="https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone" rel="nofollow noreferrer">https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone</a></p> </blockquote> <p>"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: <code>curl http://localhost:18080/nginx_status/format/json 2&gt; /dev/null | jq .sharedZones.usedSize</code></p> <p>Hope this helps.</p>
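<p>Once the key is known, a hedged sketch of applying it; the ConfigMap name and namespace follow the answer above and may differ in your deployment, and 20m is just an example of roughly doubling the 10m default:</p> <pre><code># bump the vts zone size on the ingress controller ConfigMap
kubectl -n &lt;namespace&gt; patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{&quot;data&quot;:{&quot;vts-status-zone-size&quot;:&quot;20m&quot;}}'

# confirm the value was rendered into the controller's nginx.conf
kubectl -n &lt;namespace&gt; exec &lt;controller-pod&gt; -- grep vhost_traffic_status_zone /etc/nginx/nginx.conf
</code></pre>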
sumdonkus
<p>We have created PVCs and they are in Pending state. So in order to check the state, we execute</p> <pre><code>kubectl describe -f &lt;pvc.yml&gt; </code></pre> <p>It displays the result below:</p> <pre><code>Name: myproj-pvc-2020-09-29-04-02-1601377369-49419-1 Namespace: default StorageClass: myproj-storageclass-2020-09-29-04-02-1601377366 Status: Pending Volume: Labels: ansible=csitest-2020-09-29-04-02-1601377369-49419-1 pvcRef=csi-pvc-ansibles-1 Annotations: volume.beta.kubernetes.io/storage-provisioner: csi.myorg.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 85s (x8 over 4m43s) csi.myorg.com_csicentos76w3.mylab.myprojstorage.com_2e1a7c1d-7542-42a5-a2e1-491e1d04b4ee External provisioner is provisioning volume for claim &quot;default/myproj-pvc-2020-09-29-04-02-1601377369-49419-1&quot; Warning ProvisioningFailed 74s (x8 over 4m33s) csi.myorg.com_csicentos76w3.mylab.myprojstorage.com_2e1a7c1d-7542-42a5-a2e1-491e1d04b4ee failed to provision volume with StorageClass &quot;myproj-storageclass-2020-09-29-04-02-1601377366&quot;: rpc error: code = Unavailable desc = Failed to get storage provider from secrets, Request failed with status code 401 and errors Error code (Unauthorized) and message (HTTP 401 Unauthorized.) Normal ExternalProvisioning 6s (x20 over 4m43s) persistentvolume-controller waiting for a volume to be created, either by external provisioner &quot;csi.myorg.com&quot; or manually created by system administrat </code></pre> <p>What I need is to filter only the events of the PVCs in pvc.yaml. If I execute <code>kubectl get -f pvc.yaml -o json</code>, it doesn't display the error events in the JSON output.</p> <p>I can do <code>kubectl describe -f &lt;pvc.yml&gt; | grep -A10 Events:</code>, but there is no guarantee that the events will always fit within 10 lines.</p> <p>I also found <code>kubectl get events --field-selector involvedObject.kind=&quot;PersistentVolumeClaim&quot;</code>, but this shows the events of all PVCs. I need to get the events of only the PVCs listed in the pvc.yml file.</p> <p>How can I filter the events of all the PVCs in pvc.yaml?</p>
Samselvaprabu
<p><code>kubectl get -f pvc.yaml -o json</code> does work, but I assume you meant <code>kubectl get events -f pvc.yaml -o json</code></p> <p>kubectl doesn't seem to allow for such filtering. You may want to open a feature request on <a href="https://github.com/kubernetes/kubectl/issues" rel="nofollow noreferrer">kubectl github issues</a>.</p> <p>But meanwhile, here is what I came up with as an alternative:</p> <pre><code>kubectl get -f &lt;pvc.yml&gt; -ojson \ | jq &quot;.items[] | .metadata.name&quot; \ | xargs -I{} kubectl get events --field-selector involvedObject.kind=&quot;PersistentVolumeClaim&quot;,involvedObject.name={} --no-headers --ignore-not-found </code></pre> <p>Notice that you need <code>jq</code> installed. You could also use <code>yq</code>, and then you wouldn't need the first trick with conversion to JSON, but you would need to slightly adjust the yq filter.</p> <p>I also assume you have kubectl installed and of course xargs, which should be available by default on all linux machines.</p>
Matt
<p>I'm currently facing a weird issue with K8s. I'm creating a container with an envFrom statement, and the env variable is pulled from a secret:</p> <pre><code>envFrom: - secretRef: name: my-super-secret </code></pre> <p>I have created the secret with the base64 encoded value, and when I echo the variable in the container it has added a space at the end, which is quite an issue since it's a password ;-)</p> <p>Here's my secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: my-super-secret data: DB_PASSWORD: base64encodedvalue </code></pre> <p>Could anyone provide me with some guidance here? I absolutely can't figure out what's happening ...</p>
moulip
<p>How did you encode the value?</p> <p>Using this (on Mac)</p> <pre><code>echo -n &quot;base64encodedvalue&quot; | base64 YmFzZTY0ZW5jb2RlZHZhbHVl </code></pre> <p>I can access my values just fine in my Containers, without a trailing space.</p> <pre><code>echo YmFzZTY0ZW5jb2RlZHZhbHVl | base64 -d base64encodedvalue </code></pre> <p>Source: <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/</a></p>
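<p>One hedged way to see exactly what ended up in the secret (a trailing newline or space becomes visible in the <code>od</code> output); the secret and key names follow the question above:</p> <pre><code># decode the stored value and print every character, including invisible ones
kubectl get secret my-super-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 --decode | od -c
</code></pre>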
bymo
<p>I'm new to k8s and need some direction on how to troubleshoot.</p> <p>I have a postgres container and a graphql container. The graphql container is tries to connect to postgres on startup.</p> <h2>Problem</h2> <p>The graphql container can't connect to postgres. This is the error on startup:</p> <p>{&quot;internal&quot;:&quot;could not connect to server: Connection refused\n\tIs the server running on host &quot;my-app&quot; (xxx.xx.xx.xxx) and accepting\n\tTCP/IP connections on port 5432?\n&quot;, &quot;path&quot;:&quot;$&quot;,&quot;error&quot;:&quot;connection error&quot;,&quot;code&quot;:&quot;postgres-error&quot;}</p> <p>My understanding is that the graphql-container doesn't recognize the IP <code>my-app (xxx.xx.xx.xxx)</code>. This is the actual Pod Host IP, so I'm confused as to why it doesn't recognize it. How do I troubleshoot errors like these?</p> <h2>What I tried</h2> <ul> <li><p>Hardcoding the host in the connection uri in deployment.yaml to the actual pod host IP. Same error.</p> </li> <li><p>Bashed into the graphql container and verified that it had the correct env values with the <code>env</code> command.</p> </li> </ul> <h2>deployment.yaml</h2> <pre><code>spec: selector: matchLabels: service: my-app template: metadata: labels: service: my-app ... - name: my-graphql-container image: image-name:latest env: - name: MY_POSTGRES_HOST value: my-app - name: MY_DATABASE value: db - name: MY_POSTGRES_DB_URL # the postgres connection url that the graphql container uses value: postgres://$(user):$(pw)@$(MY_POSTGRES_HOST):5432/$(MY_DATABASE) ... - name: my-postgres-db image: image-name:latest </code></pre>
tbd_
<p>In <a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">k8s docs about pods</a> you can read:</p> <blockquote> <p>Pods in a Kubernetes cluster are used in two main ways:</p> <ul> <li><p><strong>Pods that run a single container.</strong> [...]</p> </li> <li><p><strong>Pods that run multiple containers that need to work together</strong>. [...]</p> </li> </ul> <blockquote> <p><em><strong>Note</strong></em>: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.</p> </blockquote> <p><em><strong>Each Pod is meant to run a single instance of a given application</strong></em>. [...]</p> </blockquote> <hr /> <p>Notice that your deployment doesn't fit this descriprion because you are trying to run two applications in one pod.</p> <p>Remember to always use one pod per container and only use multiple containers per pod if it's impossible to separate them (and for some reason they have to run together).</p> <p>And the rest was already mentioned by David.</p>
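<p>If you split them, a rough sketch of the shape this could take with plain kubectl commands; the names, image tag and password handling are placeholders, not the poster's actual setup:</p> <pre><code># run postgres in its own deployment and expose it inside the cluster as a Service
kubectl create deployment my-postgres-db --image=postgres:13
kubectl set env deployment/my-postgres-db POSTGRES_PASSWORD=changeme
kubectl expose deployment my-postgres-db --port=5432

# the graphql deployment then reaches the database via the Service DNS name,
# e.g. MY_POSTGRES_HOST=my-postgres-db instead of a pod IP
</code></pre>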
Matt
<p>I'm trying to upgrade a GKE cluster from 1.21 to 1.22 and I'm getting some warnings about deprecated APIs. I am running Istio 1.12.1 in the cluster as well.</p> <p>One of them is causing me some concerns:</p> <p><code>/apis/extensions/v1beta1/ingresses</code></p> <p>I was surprised to see this warning because we are up to date with our deployments. We don't use Ingresses.</p> <p>After some further deep diving, I got the details below:</p> <pre><code>➜ kubectl get --raw /apis/extensions/v1beta1/ingresses | jq Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress { &quot;kind&quot;: &quot;IngressList&quot;, &quot;apiVersion&quot;: &quot;extensions/v1beta1&quot;, &quot;metadata&quot;: { &quot;resourceVersion&quot;: &quot;191638911&quot; }, &quot;items&quot;: [] } </code></pre> <p>It seems an IngressList is what calls the old API. I tried deleting it:</p> <pre><code>➜ kubectl delete --raw /apis/extensions/v1beta1/ingresses Error from server (MethodNotAllowed): the server does not allow this method on the requested resource </code></pre> <p>I'm neither able to delete it nor able to upgrade.</p> <p>Any suggestions would be really helpful.</p> <p>[Update]: My GKE cluster got updated to <code>1.21.11-gke.1900</code> and after that the warning messages are gone.</p>
Sunil
<p>We have also upgraded the cluster/node version from 1.21 to 1.22 directly from GCP, which successfully upgraded both the nodes and the cluster.</p> <p>Even after upgrading, we are still getting the IngressList warning for</p> <pre><code>/apis/extensions/v1beta1/ingresses </code></pre> <p>We are going to upgrade our cluster from 1.22 to 1.23 tomorrow and will update you soon.</p>
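<p>A hedged way to double-check that no Ingress objects themselves still need migrating (the list call and its warning can also come from monitoring tools or controllers rather than from stored objects):</p> <pre><code># list all Ingresses explicitly through the supported API group/version
kubectl get ingresses.v1.networking.k8s.io --all-namespaces
</code></pre>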
bipin joshi
<p>I'm using the <a href="https://github.com/GoogleCloudPlatform/k8s-config-connector/tree/master/resources/storagenotification" rel="nofollow noreferrer">Config Connector example</a> for the StorageNotification but I keep getting the following error (take from <code>kubectl describe</code>)</p> <blockquote> <p>storagenotification-controller Update call failed: error applying desired state: project: required field is not set</p> </blockquote> <p>I have followed the <a href="https://cloud.google.com/config-connector/docs/how-to/setting-default-namespace" rel="nofollow noreferrer">Setting Config Connector's default namespace</a> but no joy. The StorageNotification API spec doesn't have a field for "project". I thought it just had to be in the right namespace?</p> <p>All the other resources seem to setup OK. Just the notification is not working. Here is my complete yaml</p> <pre><code># Bucket Starts the chain of events apiVersion: storage.cnrm.cloud.google.com/v1beta1 kind: StorageBucket metadata: labels: app: something-processing name: example-something namespace: ${GCP_PROJECT_ID} --- # Pub/Sub topic that bucket events will publish to apiVersion: pubsub.cnrm.cloud.google.com/v1beta1 kind: PubSubTopic metadata: name: my-pubsub-topic labels: app: something-processing namespace: ${GCP_PROJECT_ID} --- # Publisher IAM permissions apiVersion: iam.cnrm.cloud.google.com/v1beta1 kind: IAMPolicy metadata: name: my-pubsub-topic-iam namespace: ${GCP_PROJECT_ID} labels: app: something-processing spec: resourceRef: apiVersion: pubsub.cnrm.cloud.google.com/v1beta1 kind: PubSubTopic name: my-pubsub-topic bindings: - role: roles/pubsub.publisher members: - serviceAccount:service-${GCP_PROJECT_ID}@gs-project-accounts.iam.gserviceaccount.com --- # Trigger that connects the bucket to the pubsub topic apiVersion: storage.cnrm.cloud.google.com/v1beta1 kind: StorageNotification metadata: name: storage-notification namespace: ${GCP_PROJECT_ID} project: ${GCP_PROJECT_ID} labels: app: something-processing spec: bucketRef: name: something payloadFormat: JSON_API_V1 topicRef: name: my-pubsub-topic eventTypes: - "OBJECT_FINALIZE" --- # subscription that gets events from the topic and PUSHes them # to the K8s Ingress endpoint apiVersion: pubsub.cnrm.cloud.google.com/v1beta1 kind: PubSubSubscription metadata: name: pubsub-subscription-topic namespace: ${GCP_PROJECT_ID} labels: app: something-processing spec: pushConfig: # This should match the Ingress path pushEndpoint: https://example.zone/some-ingress-end-point/ topicRef: name: my-pubsub-topic </code></pre> <p>Note: I'm using <code>envsubt</code> to replace the <code>${GCP_PROJECT_ID}</code> with the project ID ;)</p>
CpILL
<p>I was having the same issue and managed to resolve it by changing the topicRef.name to topicRef.external with the fully qualified topic name as expected in the <a href="https://cloud.google.com/storage/docs/json_api/v1/notifications/insert" rel="nofollow noreferrer">REST API</a>. My installation of the config connector was done following the Workload Identity scenario described in the <a href="https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall" rel="nofollow noreferrer">documentation</a>.</p> <pre><code>--- apiVersion: storage.cnrm.cloud.google.com/v1beta1 kind: StorageNotification metadata: name: storage-notification spec: bucketRef: name: ${BUCKET} payloadFormat: JSON_API_V1 topicRef: external: "//pubsub.googleapis.com/projects/${PROJECT_ID}/topics/${TOPIC}" eventType: - "OBJECT_FINALIZE" </code></pre>
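<p>After applying that, a small sketch for confirming on the Cloud Storage side that the notification actually exists; the bucket name is a placeholder:</p> <pre><code># list notification configs attached to the bucket, including topic and event types
gsutil notification list gs://my-bucket
</code></pre>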
Eric Bratter
<p>I'm new in the k8s world.</p> <p>In my dev environment, I use nginx as a proxy (with CORS configs and with headers forwarding like ) for the different microservices (all made with Spring Boot) I have. In a k8s cluster, do I have to replace it with Istio?</p> <p>I'm trying to run a simple microservice (for now) and use Istio for routing to it. I've installed Istio with Google Cloud.</p> <p>If I navigate to IstioIP/auth/api/v1 it returns 404.</p> <p>This is my yaml file:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: name: http number: 80 protocol: HTTP hosts: - '*' --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-service spec: hosts: - "*" gateways: - gateway http: - match: - uri: prefix: /auth route: - destination: host: auth-srv port: number: 8082 --- apiVersion: v1 kind: Service metadata: name: auth-srv labels: app: auth-srv spec: ports: - name: http port: 8082 selector: app: auth-srv --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: auth-srv spec: replicas: 1 template: metadata: labels: app: auth-srv version: v1 spec: containers: - name: auth-srv image: gcr.io/{{MY_PROJECT_ID}}/auth-srv:1.5 imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: '-DZIPKIN_SERVER=http://zipkin:9411' ports: - containerPort: 8082 livenessProbe: httpGet: path: /api/v1 port: 8082 initialDelaySeconds: 60 periodSeconds: 5 </code></pre>
Fidelis
<p>Looks like Istio doesn't know anything about the URL, therefore you are getting a 404 error response. If you look closer at the configuration in the VirtualService, you have configured Istio to match on the path prefix <code>/auth</code>.</p> <p>So if you request <code>ISTIOIP/auth</code> you will reach your microservice application. Here is an image describing the traffic flow and why you are getting a 404 response.</p> <p><img src="https://i.stack.imgur.com/K5gdv.png" alt="Logic of the routing" /></p>
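<p>A quick hedged sketch for testing the routing from outside; the ingress gateway service name and namespace are the Istio defaults and may differ in a Google Cloud install:</p> <pre><code># grab the external IP of the Istio ingress gateway
INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# requests matching the /auth prefix should be routed to auth-srv
curl -i &quot;http://$INGRESS_IP/auth/api/v1&quot;
</code></pre>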
m123
<p>How do I do chargeback for shared Kubernetes clusters on Azure? Say there are 10 departments/customers using a cluster split by namespaces; how do I bill them?</p>
Saketh Ram
<p>I would recommend using tags. That would make it easier for you to filter down the usage and billing as well.</p> <p>Tags are the easiest and most efficient way to segregate resources.</p>
Susheel Bhatt
<p>I'm running a kubernetes cluster hostet inside 4 kvm, managed by proxmox. After installing the nginx-ingress-controller with</p> <pre><code>helm install nginx-ingress stable/nginx-ingress --set controller.publishService.enabled=true -n nginx-ingress </code></pre> <p>the controller is crashing (crashloop) . The logs don't really help (or i don't know where to look exactly)</p> <p>thanks Peter</p> <p>herer the cluster pods:</p> <pre><code>root@sedeka78:~# kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-66bff467f8-jv2mx 1/1 Running 0 83m 10.244.0.9 sedeka78 &lt;none&gt; &lt;none&gt; kube-system coredns-66bff467f8-vwrzb 1/1 Running 0 83m 10.244.0.6 sedeka78 &lt;none&gt; &lt;none&gt; kube-system etcd-sedeka78 1/1 Running 2 84m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-sedeka78 1/1 Running 2 84m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-sedeka78 1/1 Running 4 84m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-fxvfh 1/1 Running 0 83m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-h6btb 1/1 Running 1 78m 10.10.10.79 sedeka79 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-m6dw2 1/1 Running 1 78m 10.10.10.80 sedeka80 &lt;none&gt; &lt;none&gt; kube-system kube-flannel-ds-amd64-wgtqb 1/1 Running 1 78m 10.10.10.81 sedeka81 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-5dvdg 1/1 Running 1 78m 10.10.10.80 sedeka80 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-89pf7 1/1 Running 0 83m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-hhgtf 1/1 Running 1 78m 10.10.10.79 sedeka79 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-kshnn 1/1 Running 1 78m 10.10.10.81 sedeka81 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-sedeka78 1/1 Running 5 84m 10.10.10.78 sedeka78 &lt;none&gt; &lt;none&gt; kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-4trgg 1/1 Running 0 80m 10.244.0.8 sedeka78 &lt;none&gt; &lt;none&gt; kubernetes-dashboard kubernetes-dashboard-7bfbb48676-q6c2t 1/1 Running 0 80m 10.244.0.7 sedeka78 &lt;none&gt; &lt;none&gt; nginx-ingress nginx-ingress-controller-57f4b84b5-ldkk5 0/1 CrashLoopBackOff 19 45m 10.244.1.2 sedeka81 &lt;none&gt; &lt;none&gt; nginx-ingress nginx-ingress-default-backend-7c868597f4-8q9n7 1/1 Running 0 45m 10.244.4.2 sedeka80 &lt;none&gt; &lt;none&gt; root@sedeka78:~# </code></pre> <p>here the logs of the controller:</p> <pre><code>root@sedeka78:~# kubectl logs nginx-ingress-controller-57f4b84b5-ldkk5 -n nginx-ingress -v10 I0705 14:31:41.152337 11692 loader.go:375] Config loaded from file: /home/kubernetes/.kube/config I0705 14:31:41.170664 11692 cached_discovery.go:114] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/servergroups.json I0705 14:31:41.174651 11692 cached_discovery.go:71] returning cached discovery info from </code></pre> <p>...</p> <pre><code>I0705 14:31:41.189379 11692 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/batch/v1beta1/serverresources.json I0705 14:31:41.189481 11692 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/batch/v1/serverresources.json I0705 14:31:41.189560 11692 cached_discovery.go:71] returning cached discovery info from 
/root/.kube/cache/discovery/10.10.10.78_6443/certificates.k8s.io/v1beta1/serverresources.json I0705 14:31:41.192043 11692 round_trippers.go:423] curl -k -v -XGET -H &quot;Accept: application/json, */*&quot; -H &quot;User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8&quot; 'https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5' I0705 14:31:41.222314 11692 round_trippers.go:443] GET https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5 200 OK in 30 milliseconds I0705 14:31:41.222588 11692 round_trippers.go:449] Response Headers: I0705 14:31:41.222611 11692 round_trippers.go:452] Cache-Control: no-cache, private I0705 14:31:41.222771 11692 round_trippers.go:452] Content-Type: application/json I0705 14:31:41.222812 11692 round_trippers.go:452] Date: Sun, 05 Jul 2020 12:31:41 GMT I0705 14:31:41.223225 11692 request.go:1068] Response Body: {&quot;kind&quot;:&quot;Pod&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{&quot;name&quot;:&quot;nginx-ingress-controller-57f4b84b5-ldkk5&quot;,&quot;generateName&quot;:&quot;nginx-ingress-controller-57f4b84b5-&quot;,&quot;namespace&quot;:&quot;nginx-ingress&quot;,&quot;selfLink&quot;:&quot;/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5&quot;,&quot;uid&quot;:&quot;778a9c24-9785-462e-9e1e-137a1aa08c87&quot;,&quot;resourceVersion&quot;:&quot;10435&quot;,&quot;creationTimestamp&quot;:&quot;2020-07-05T11:54:55Z&quot;,&quot;labels&quot;:{&quot;app&quot;:&quot;nginx-ingress&quot;,&quot;app.kubernetes.io/component&quot;:&quot;controller&quot;,&quot;component&quot;:&quot;controller&quot;,&quot;pod-template-hash&quot;:&quot;57f4b84b5&quot;,&quot;release&quot;:&quot;nginx-ingress&quot;},&quot;ownerReferences&quot;:[{&quot;apiVersion&quot;:&quot;apps/v1&quot;,&quot;kind&quot;:&quot;ReplicaSet&quot;,&quot;name&quot;:&quot;nginx-ingress-controller-57f4b84b5&quot;,&quot;uid&quot;:&quot;b9c42590-7efb-46d2-b37c-cec3a994bf4e&quot;,&quot;controller&quot;:true,&quot;blockOwnerDeletion&quot;:true}],&quot;managedFields&quot;:[{&quot;manager&quot;:&quot;kube-controller-manager&quot;,&quot;operation&quot;:&quot;Update&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;time&quot;:&quot;2020-07-05T11:54:55Z&quot;,&quot;fieldsType&quot;:&quot;FieldsV1&quot;,&quot;fieldsV1&quot;:{&quot;f:metadata&quot;:{&quot;f:generateName&quot;:{},&quot;f:labels&quot;:{&quot;.&quot;:{},&quot;f:app&quot;:{},&quot;f:app.kubernetes.io/component&quot;:{},&quot;f:component&quot;:{},&quot;f:pod-template-hash&quot;:{},&quot;f:release&quot;:{}},&quot;f:ownerReferences&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;uid\&quot;:\&quot;b9c42590-7efb-46d2-b37c-cec3a994bf4e\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:apiVersion&quot;:{},&quot;f:blockOwnerDeletion&quot;:{},&quot;f:controller&quot;:{},&quot;f:kind&quot;:{},&quot;f:name&quot;:{},&quot;f:uid&quot;:{}}}},&quot;f:spec&quot;:{&quot;f:containers&quot;:{&quot;k:{\&quot;name\&quot;:\&quot;nginx-ingress-controller\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:args&quot;:{},&quot;f:env&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;name\&quot;:\&quot;POD_NAME\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:name&quot;:{},&quot;f:valueFrom&quot;:{&quot;.&quot;:{},&quot;f:fieldRef&quot;:{&quot;.&quot;:{},&quot;f:apiVersion&quot;:{},&quot;f:fieldPath&quot;:{}}}},&quot;k:{\&quot;name\&quot;:\&quot;POD_NAMESPACE\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:name&quot;:{},&quot;f:valueFrom&quot;:{&quot;.&quot;:{},&quot;f:fieldRef&quot;:{&quot;.&quot
;:{},&quot;f:apiVersion&quot;:{},&quot;f:fieldPath&quot;:{}}}}},&quot;f:image&quot;:{},&quot;f:imagePullPolicy&quot;:{},&quot;f:livenessProbe&quot;:{&quot;.&quot;:{},&quot;f:failureThreshold&quot;:{},&quot;f:httpGet&quot;:{&quot;.&quot;:{},&quot;f:path&quot;:{},&quot;f:port&quot;:{},&quot;f:scheme&quot;:{}},&quot;f:initialDelaySeconds&quot;:{},&quot;f:periodSeconds&quot;:{},&quot;f:successThreshold&quot;:{},&quot;f:timeoutSeconds&quot;:{}},&quot;f:name&quot;:{},&quot;f:ports&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;containerPort\&quot;:80,\&quot;protocol\&quot;:\&quot;TCP\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:containerPort&quot;:{},&quot;f:name&quot;:{},&quot;f:protocol&quot;:{}},&quot;k:{\&quot;containerPort\&quot;:443,\&quot;protocol\&quot;:\&quot;TCP\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:containerPort&quot;:{},&quot;f:name&quot;:{},&quot;f:protocol&quot;:{}}},&quot;f:readinessProbe&quot;:{&quot;.&quot;:{},&quot;f:failureThreshold&quot;:{},&quot;f:httpGet&quot;:{&quot;.&quot;:{},&quot;f:path&quot;:{},&quot;f:port&quot;:{},&quot;f:scheme&quot;:{}},&quot;f:initialDelaySeconds&quot;:{},&quot;f:periodSeconds&quot;:{},&quot;f:successThreshold&quot;:{},&quot;f:timeoutSeconds&quot;:{}},&quot;f:resources&quot;:{},&quot;f:securityContext&quot;:{&quot;.&quot;:{},&quot;f:allowPrivilegeEscalation&quot;:{},&quot;f:capabilities&quot;:{&quot;.&quot;:{},&quot;f:add&quot;:{},&quot;f:drop&quot;:{}},&quot;f:runAsUser&quot;:{}},&quot;f:terminationMessagePath&quot;:{},&quot;f:terminationMessagePolicy&quot;:{}}},&quot;f:dnsPolicy&quot;:{},&quot;f:enableServiceLinks&quot;:{},&quot;f:restartPolicy&quot;:{},&quot;f:schedulerName&quot;:{},&quot;f:securityContext&quot;:{},&quot;f:serviceAccount&quot;:{},&quot;f:serviceAccountName&quot;:{},&quot;f:terminationGracePeriodSeconds&quot;:{}}}},{&quot;manager&quot;:&quot;kubelet&quot;,&quot;operation&quot;:&quot;Update&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;time&quot;:&quot;2020-07-05T12:27:53Z&quot;,&quot;fieldsType&quot;:&quot;FieldsV1&quot;,&quot;fieldsV1&quot;:{&quot;f:status&quot;:{&quot;f:conditions&quot;:{&quot;k:{\&quot;type\&quot;:\&quot;ContainersReady\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:lastProbeTime&quot;:{},&quot;f:lastTransitionTime&quot;:{},&quot;f:message&quot;:{},&quot;f:reason&quot;:{},&quot;f:status&quot;:{},&quot;f:type&quot;:{}},&quot;k:{\&quot;type\&quot;:\&quot;Initialized\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:lastProbeTime&quot;:{},&quot;f:lastTransitionTime&quot;:{},&quot;f:status&quot;:{},&quot;f:type&quot;:{}},&quot;k:{\&quot;type\&quot;:\&quot;Ready\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:lastProbeTime&quot;:{},&quot;f:lastTransitionTime&quot;:{},&quot;f:message&quot;:{},&quot;f:reason&quot;:{},&quot;f:status&quot;:{},&quot;f:type&quot;:{}}},&quot;f:containerStatuses&quot;:{},&quot;f:hostIP&quot;:{},&quot;f:phase&quot;:{},&quot;f:podIP&quot;:{},&quot;f:podIPs&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;ip\&quot;:\&quot;10.244.1.2\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:ip&quot;:{}}},&quot;f:startTime&quot;:{}}}}]},&quot;spec&quot;:{&quot;volumes&quot;:[{&quot;name&quot;:&quot;nginx-ingress-token-rmhf8&quot;,&quot;secret&quot;:{&quot;secretName&quot;:&quot;nginx-ingress-token-rmhf8&quot;,&quot;defaultMode&quot;:420}}],&quot;containers&quot;:[{&quot;name&quot;:&quot;nginx-ingress-controller&quot;,&quot;image&quot;:&quot;quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0&quot;,&quot;args&quot;:[&quot;/nginx-ingress-controller&quot;,&quot;--default-backend-service=nginx-ingress/nginx-ingress-default-backend&quot;,
&quot;--publish-service=nginx-ingress/nginx-ingress-controller&quot;,&quot;--election-id=ingress-controller-leader&quot;,&quot;--ingress-class=nginx&quot;,&quot;--configmap=nginx-ingress/nginx-ingress-controller&quot;],&quot;ports&quot;:[{&quot;name&quot;:&quot;http&quot;,&quot;containerPort&quot;:80,&quot;protocol&quot;:&quot;TCP&quot;},{&quot;name&quot;:&quot;https&quot;,&quot;containerPort&quot;:443,&quot;protocol&quot;:&quot;TCP&quot;}],&quot;env&quot;:[{&quot;name&quot;:&quot;POD_NAME&quot;,&quot;valueFrom&quot;:{&quot;fieldRef&quot;:{&quot;apiVersion&quot;:&quot;v1&quot;,&quot;fieldPath&quot;:&quot;metadata.name&quot;}}},{&quot;name&quot;:&quot;POD_NAMESPACE&quot;,&quot;valueFrom&quot;:{&quot;fieldRef&quot;:{&quot;apiVersion&quot;:&quot;v1&quot;,&quot;fieldPath&quot;:&quot;metadata.namespace&quot;}}}],&quot;resources&quot;:{},&quot;volumeMounts&quot;:[{&quot;name&quot;:&quot;nginx-ingress-token-rmhf8&quot;,&quot;readOnly&quot;:true,&quot;mountPath&quot;:&quot;/var/run/secrets/kubernetes.io/serviceaccount&quot;}],&quot;livenessProbe&quot;:{&quot;httpGet&quot;:{&quot;path&quot;:&quot;/healthz&quot;,&quot;port&quot;:10254,&quot;scheme&quot;:&quot;HTTP&quot;},&quot;initialDelaySeconds&quot;:10,&quot;timeoutSeconds&quot;:1,&quot;periodSeconds&quot;:10,&quot;successThreshold&quot;:1,&quot;failureThreshold&quot;:3},&quot;readinessProbe&quot;:{&quot;httpGet&quot;:{&quot;path&quot;:&quot;/healthz&quot;,&quot;port&quot;:10254,&quot;scheme&quot;:&quot;HTTP&quot;},&quot;initialDelaySeconds&quot;:10,&quot;timeoutSeconds&quot;:1,&quot;periodSeconds&quot;:10,&quot;successThreshold&quot;:1,&quot;failureThreshold&quot;:3},&quot;terminationMessagePath&quot;:&quot;/dev/termination-log&quot;,&quot;terminationMessagePolicy&quot;:&quot;File&quot;,&quot;imagePullPolicy&quot;:&quot;IfNotPresent&quot;,&quot;securityContext&quot;:{&quot;capabilities&quot;:{&quot;add&quot;:[&quot;NET_BIND_SERVICE&quot;],&quot;drop&quot;:[&quot;ALL&quot;]},&quot;runAsUser&quot;:101,&quot;allowPrivilegeEscalation&quot;:true}}],&quot;restartPolicy&quot;:&quot;Always&quot;,&quot;terminationGracePeriodSeconds&quot;:60,&quot;dnsPolicy&quot;:&quot;ClusterFirst&quot;,&quot;serviceAccountName&quot;:&quot;nginx-ingress&quot;,&quot;serviceAccount&quot;:&quot;nginx-ingress&quot;,&quot;nodeName&quot;:&quot;sedeka81&quot;,&quot;securityContext&quot;:{},&quot;schedulerName&quot;:&quot;default-scheduler&quot;,&quot;tolerations&quot;:[{&quot;key&quot;:&quot;node.kubernetes.io/not-ready&quot;,&quot;operator&quot;:&quot;Exists&quot;,&quot;effect&quot;:&quot;NoExecute&quot;,&quot;tolerationSeconds&quot;:300},{&quot;key&quot;:&quot;node.kubernetes.io/unreachable&quot;,&quot;operator&quot;:&quot;Exists&quot;,&quot;effect&quot;:&quot;NoExecute&quot;,&quot;tolerationSeconds&quot;:300}],&quot;priority&quot;:0,&quot;enableServiceLinks&quot;:true},&quot;status&quot;:{&quot;phase&quot;:&quot;Running&quot;,&quot;conditions&quot;:[{&quot;type&quot;:&quot;Initialized&quot;,&quot;status&quot;:&quot;True&quot;,&quot;lastProbeTime&quot;:null,&quot;lastTransitionTime&quot;:&quot;2020-07-05T11:54:56Z&quot;},{&quot;type&quot;:&quot;Ready&quot;,&quot;status&quot;:&quot;False&quot;,&quot;lastProbeTime&quot;:null,&quot;lastTransitionTime&quot;:&quot;2020-07-05T11:54:56Z&quot;,&quot;reason&quot;:&quot;ContainersNotReady&quot;,&quot;message&quot;:&quot;containers with unready status: 
[nginx-ingress-controller]&quot;},{&quot;type&quot;:&quot;ContainersReady&quot;,&quot;status&quot;:&quot;False&quot;,&quot;lastProbeTime&quot;:null,&quot;lastTransitionTime&quot;:&quot;2020-07-05T11:54:56Z&quot;,&quot;reason&quot;:&quot;ContainersNotReady&quot;,&quot;message&quot;:&quot;containers with unready status: [nginx-ingress-controller]&quot;},{&quot;type&quot;:&quot;PodScheduled&quot;,&quot;status&quot;:&quot;True&quot;,&quot;lastProbeTime&quot;:null,&quot;lastTransitionTime&quot;:&quot;2020-07-05T11:54:56Z&quot;}],&quot;hostIP&quot;:&quot;10.10.10.81&quot;,&quot;podIP&quot;:&quot;10.244.1.2&quot;,&quot;podIPs&quot;:[{&quot;ip&quot;:&quot;10.244.1.2&quot;}],&quot;startTime&quot;:&quot;2020-07-05T11:54:56Z&quot;,&quot;containerStatuses&quot;:[{&quot;name&quot;:&quot;nginx-ingress-controller&quot;,&quot;state&quot;:{&quot;waiting&quot;:{&quot;reason&quot;:&quot;CrashLoopBackOff&quot;,&quot;message&quot;:&quot;back-off 5m0s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-57f4b84b5-ldkk5_nginx-ingress(778a9c24-9785-462e-9e1e-137a1aa08c87)&quot;}},&quot;lastState&quot;:{&quot;terminated&quot;:{&quot;exitCode&quot;:143,&quot;reason&quot;:&quot;Error&quot;,&quot;startedAt&quot;:&quot;2020-07-05T12:27:23Z&quot;,&quot;finishedAt&quot;:&quot;2020-07-05T12:27:53Z&quot;,&quot;containerID&quot;:&quot;docker://4b7d69c47884790031e665801e282dafd8ea5dfaf97d54c6659d894d88af5a7a&quot;}},&quot;ready&quot;:false,&quot;restartCount&quot;:15,&quot;image&quot;:&quot;quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0&quot;,&quot;imageID&quot;:&quot;docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287&quot;,&quot;containerID&quot;:&quot;docker://4b7d69c47884790031e665801e282dafd8ea5dfaf97d54c6659d894d88af5a7a&quot;,&quot;started&quot;:false}],&quot;qosClass&quot;:&quot;BestEffort&quot;}} I0705 14:31:41.239523 11692 round_trippers.go:423] curl -k -v -XGET -H &quot;Accept: application/json, */*&quot; -H &quot;User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8&quot; 'https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5/log' I0705 14:31:41.247040 11692 round_trippers.go:443] GET https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5/log 200 OK in 7 milliseconds I0705 14:31:41.247125 11692 round_trippers.go:449] Response Headers: I0705 14:31:41.247146 11692 round_trippers.go:452] Content-Type: text/plain I0705 14:31:41.247164 11692 round_trippers.go:452] Date: Sun, 05 Jul 2020 12:31:41 GMT I0705 14:31:41.247182 11692 round_trippers.go:452] Cache-Control: no-cache, private ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.32.0 Build: git-446845114 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.17.10 ------------------------------------------------------------------------------- I0705 12:27:23.597622 8 flags.go:204] Watching for Ingress class: nginx W0705 12:27:23.598540 8 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) W0705 12:27:23.598663 8 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
I0705 12:27:23.599666 8 main.go:220] Creating API client for https://10.96.0.1:443 </code></pre> <p>And here:</p> <pre><code>root@sedeka78:~# kubectl describe pod nginx-ingress-controller-57f4b84b5-ldkk5 -n nginx-ingress Name: nginx-ingress-controller-57f4b84b5-ldkk5 Namespace: nginx-ingress Priority: 0 Node: sedeka81/10.10.10.81 Start Time: Sun, 05 Jul 2020 13:54:56 +0200 Labels: app=nginx-ingress app.kubernetes.io/component=controller component=controller pod-template-hash=57f4b84b5 release=nginx-ingress Annotations: &lt;none&gt; Status: Running IP: 10.244.1.2 IPs: IP: 10.244.1.2 Controlled By: ReplicaSet/nginx-ingress-controller-57f4b84b5 Containers: nginx-ingress-controller: Container ID: docker://545ed277d1a039cd36b0d18a66d1f58c8b44f3fc5e4cacdcde84cc68e763b0e8 Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0 Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287 Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP Args: /nginx-ingress-controller --default-backend-service=nginx-ingress/nginx-ingress-default-backend --publish-service=nginx-ingress/nginx-ingress-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=nginx-ingress/nginx-ingress-controller State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 143 Started: Sun, 05 Jul 2020 14:33:33 +0200 Finished: Sun, 05 Jul 2020 14:34:03 +0200 Ready: False Restart Count: 17 Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3 Environment: POD_NAME: nginx-ingress-controller-57f4b84b5-ldkk5 (v1:metadata.name) POD_NAMESPACE: nginx-ingress (v1:metadata.namespace) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-rmhf8 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: nginx-ingress-token-rmhf8: Type: Secret (a volume populated by a Secret) SecretName: nginx-ingress-token-rmhf8 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled &lt;unknown&gt; default-scheduler Successfully assigned nginx-ingress/nginx-ingress-controller-57f4b84b5-ldkk5 to sedeka81 Normal Pulling 41m kubelet, sedeka81 Pulling image &quot;quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0&quot; Normal Pulled 41m kubelet, sedeka81 Successfully pulled image &quot;quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0&quot; Normal Created 40m (x3 over 41m) kubelet, sedeka81 Created container nginx-ingress-controller Normal Started 40m (x3 over 41m) kubelet, sedeka81 Started container nginx-ingress-controller Normal Killing 40m (x2 over 40m) kubelet, sedeka81 Container nginx-ingress-controller failed liveness probe, will be restarted Normal Pulled 40m (x2 over 40m) kubelet, sedeka81 Container image &quot;quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0&quot; already present on machine Warning Unhealthy 40m (x6 over 41m) kubelet, sedeka81 Readiness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused Warning Unhealthy 21m (x33 over 41m) kubelet, sedeka81 
Liveness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused Warning BackOff 97s (x148 over 38m) kubelet, sedeka81 Back-off restarting failed container </code></pre>
P3t3r
<p>SOLVED I used DEBIAN10 (Buster) and arptables wasn't in the legacy mode.</p> <p>Here is the solution:</p> <pre><code>sudo apt-get install -y iptables arptables ebtables update-alternatives --set iptables /usr/sbin/iptables-nft update-alternatives --set ip6tables /usr/sbin/ip6tables-nft update-alternatives --set arptables /usr/sbin/arptables-nft update-alternatives --set ebtables /usr/sbin/ebtables-nft </code></pre> <p>See here: <a href="https://stackoverflow.com/questions/62441725/update-alternatives-error-alternative-usr-sbin-arptables-legacy-for-arptables">update-alternatives: error: alternative /usr/sbin/arptables-legacy for arptables not registered; not setting</a></p>
P3t3r
<p>I have a set of containerized microservices behind an ALB serving as endpoints for my API. The ALB ingress is internet-facing and I have set up my path routing accordingly. Suddenly the need appeared for some additional (new) containerized microservices to be private (aka not accessible through the internet) but still be reachable from, and able to communicate with, the ones that are public (internally).</p> <p>Is there a way to configure path based routing , or modify the ingress with some annotation to keep certain paths private?</p> <p>If not, would a second ingress (an internal one this time) under the same ALB do the trick for what I want?</p> <p>Thanks, George</p>
George S.
<p>Turns out that (at least for my case) the solution is to ignore the internet-facing Ingress and let it do its thing. Internal facing REST API paths that should not be otherwise accessible can be used through their pods' Service specification.</p> <p>Implementing a Service per microservice will allow internal access in their : without the need to modify anything in the initial Ingress which will continue to handle internet-facing API(s).</p>
George S.
<p>Locally, bare-metal, two separate 'services' successfully talk to each other trough a gRPC connection, a client and a 'backend'. Both are implemented as NestJS apps, using the gRPC transports.</p> <p>When deployed in kubernetes(minikube) environment I get <code>Error: 14 UNAVAILABLE: No connection established</code> in the client. Backend listens on <code>localhost:50051</code> (port is exposed from k8 service), frontend attempts connection on <code>engine-svc.default.svc.cluster.local:50051</code>, where the first part up until the dot is the service name and I've <code>nslookup</code>'d it to make sure this is the full domain name.</p> <p>I tried port-forwarding to the gRPC kubernetes service, this 'backend' part of the connection works fine and communication to/from it works. Thus the 'frontend' deployment/pods is the part to fail establishing a connection.</p> <p>Minikube comes packaged with coreDNS. I've tried debugging it but without success. When the 'frontend' service attempts to connect, the logs from coreDNS are labeled with <code>NOERROR</code> which I've read means the service is found; still it remains unsuccessfull.</p> <p>What else could be an issue? What am I missing? Any help is greatly appreciated.</p>
fmi21
<p>Solved my problem thanks to <a href="https://stackoverflow.com/a/56053059">this answer</a>. The problem was that the backend was listening on <code>localhost:50051</code> which means local connections only; port-forwarding also counts as one so that's why it worked. Changing the 'listen on' property to <code>0.0.0.0:50051</code> on the backend part solved the issue;</p>
fmi21
<p>I have a discord bot that I have deployed on kubernetes. It is built into a docker image that I then deploy to k8s.</p> <p><code>Dockerfile</code>:</p> <pre><code> FROM python:3.7 WORKDIR /app COPY . /app RUN pip install -r requirements.txt ENV PYTHONUNBUFFERED=0 CMD ["python", "-u", "main.py"] </code></pre> <p><code>deployment.yml</code>:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: pythonapp labels: app: pythonapp spec: replicas: 1 selector: matchLabels: app: pythonapp template: metadata: labels: app: pythonapp spec: containers: - name: pythonapp image: registry.example.com/pythonapp:latest imagePullPolicy: Always env: - name: PYTHONUNBUFFERED value: "0" </code></pre> <p>When I deploy it, the program runs fine. But when I go to check the pod's logs, it says <code>The selected container has not logged any messages yet.</code>. </p> <p>How can I get anything that I <code>print()</code> in python to be logged to the kubernetes pod?</p>
cclloyd
<p><code>PYTHONUNBUFFERED</code> must be set to <code>1</code> if you want to show log messages immediately without being buffered first.</p>
bug
<p><strong>Background</strong></p> <p>We have a simple producer/consumer style application with Kafka as the message broker and Consumer Processes running as Kubernetes pods. We have defined two topics namely the in-topic and the out-topic. A set of consumer pods that belong to the same consumer group read messages from the in-topic, perform some work and finally write out the same message (key) to the out-topic once the work is complete.</p> <p><strong>Issue Description</strong></p> <p>We noticed that there are duplicate messages being written out to the out-topic by the consumers that are running in the Kubernetes pods. To rephrase, two different consumers are consuming the same messages from the in-topic twice and thus publishing the same message twice to the out-topic as well. We analyzed the issue and can safely conclude that this issue only occurs when pods are auto-downscaled/deleted by Kubernetes.</p> <p>In fact, an interesting observation we have is that if any message is read by two different consumers from the in-topic (and thus published twice in the out-topic), the given message is always the last message consumed by one of the pods that was downscaled. In other words, if a message is consumed twice, the root cause is always the downscaling of a pod.</p> <p>We can conclude that a pod is getting downscaled after a consumer writes the message to the out-topic but before Kafka can commit the offset to the in-topic.</p> <p><strong>Consumer configuration</strong></p> <pre><code>props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, &quot;true&quot;); props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, &quot;3600000&quot;); props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, &quot;latest&quot;); props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;); props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;) </code></pre> <p><strong>Zookeeper/broker logs</strong> :</p> <pre><code>[2021-04-07 02:42:22,708] INFO [GroupCoordinator 0]: Preparing to rebalance group PortfolioEnrichmentGroup14 in state PreparingRebalance with old generation 1 (__consumer_offsets-17) (reason: removing member PortfolioEnrichmentConsumer13-9aa71765-2518- 493f-a312-6c1633225015 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) [2021-04-07 02:42:23,331] INFO [GroupCoordinator 0]: Stabilized group PortfolioEnrichmentGroup14 generation 2 (__consumer_offsets-17) (kafka.coordinator.group.GroupCoordinator) [2021-04-07 02:42:23,335] INFO [GroupCoordinator 0]: Assignment received from leader for group PortfolioEnrichmentGroup14 for generation 2 (kafka.coordinator.group.GroupCoordinator) </code></pre> <p><strong>What we tried</strong></p> <p>Looking at the logs, it was clear that rebalancing takes place because of the heartbeat expiration. We added the following configuration parameters to increase the heartbeat and also increase the session time out :</p> <pre><code>props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, &quot;10000&quot;) props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, &quot;latest&quot;); props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, &quot;900000&quot;); props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, &quot;512&quot;); props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, &quot;1&quot;); </code></pre> <p>However, this did not solve the issue. 
Looking at the broker logs, we can confirm that the issue is due to the downscaling of pods.</p> <p><strong>Question:</strong> What could be causing this behavior where a message is consumed twice when a pod gets downscaled?</p> <p>Note: I already understand the root cause of the issue; however, considering that a consumer is a long-lived process running in an infinite loop, how and why is Kubernetes downscaling/killing a pod before the consumer commits the offset? How do I tell Kubernetes not to remove a running pod from a consumer group until all Kafka commits are completed?</p>
Chetan Kinger
<blockquote> <p>&quot;What could be causing this behavior where a message is consumed twice when a pod gets downscaled?&quot;</p> </blockquote> <p>You have provided the answer already yourself: &quot;[...] that a pod is getting downscaled after a consumer writes the message to the out-topic but before Kafka can commit the offset to the in-topic.&quot;</p> <p>As the message was <em>processed but not committed</em>, another pod is re-processing the same message again after the downscaling happens. Remember that adding or removing a consumer from a consumer group always initiates a Rebalancing. You have now first-hand experience why this should generally be avoided as much as feasible. Depending on the Kafka version a rebalance will cause every single consumer of the consumer group to stop consuming until the rebalancing is done.</p> <p>To solve your issue, I see two options:</p> <ul> <li>Only remove running pods out of the Consumer Group when they are idle</li> <li>Reduce the consumer configuration <code>auto.commit.interval.ms</code> to <code>1</code> as this defaults to 5 seconds. This will only work if you set <code>enable.auto.commit</code> to <code>true</code>.</li> </ul>
Michael Heil
<p>I'm asking for help with what I think is a non-trivial question that I've been trying to solve for a few weeks now.</p> <p>I recently upgraded my project to Symfony 5.4. However, after deploying the project to a Kubernetes cluster, php-fpm stopped working in pods. I wrote more about this process <a href="https://stackoverflow.com/questions/76153171/php-fpm-not-working-on-the-kubernetes-pod-after-upgrade-symfony">here</a>. I was immediately sent to look at logs, conifigs and so on, but I made sure that there was no problem with that. And here's why:</p> <p>The fact is that in order to reproduce the problem, ALWAYS(!) two conditions are needed:</p> <ol> <li>Symfony 5.4 packages</li> <li>A remote Kubernetes cluster.</li> </ol> <p>In all other cases, the problem does NOT(!) reproduce.</p> <p>This means that if I, while in a Kubernetes cluster, rollback packages to version 4.4 and everything works fine. On the other hand, on my local machine where there is no Kubernetes cluster, after updating the project to version 5.4, php-fpm also works fine. Both configs are the same, php-fpm versions are the same. Everything is literally identical. From this I conclude that the remote configs are configured correctly and there is no fundamental incompatibility of Symfony packages with the installed fpm.</p> <p>I thought that maybe not enough resources to start/update the system, but:</p> <ol> <li>Composer updated the project during the build stage, but the problem occurs during the deployment stage.</li> <li>I don't think the Kubernetes cluster is so loaded that it can't handle the simple task that my laptop can handle.</li> <li>There are no signals from Kubernetes that there is not enough memory or anything else. The cluster just ignores running php-fpm without a single message.</li> </ol> <p>So, what can be the problem?</p> <p><strong>UPD:</strong></p> <p>I can watch the pod logs and I can see how my <code>entrypoint</code> file is executed, by this:</p> <ol> <li><p>After performing all migrations, the log file shows the entry <code>[OK] Already at the latest version (&quot;App\Migrations\Version20230427064413&quot;)</code>. From this I conclude that the migrations worked successfully and didn't break my application.</p> </li> <li><p>Cache warming up is successful, as evidenced by the line in the logs <code>[OK] Cache for the &quot;prod&quot; environment (debug=false) was successfully warmed.</code></p> </li> <li><p>In the entrypoint given <a href="https://stackoverflow.com/questions/76153171/php-fpm-not-working-on-the-kubernetes-pod-after-upgrade-symfony">here</a> I really don't use the <code>$ php-fpm --test</code> command, however, I can assure you that I have run the command with the <code>-t</code> flag and <code>--debug</code> flag before and there were no errors. The logs show <code>NOTICE: configuration file /usr/local/etc/php-fpm.conf test is successful</code>.</p> </li> <li><p>This is my problem, yes.</p> </li> <li><p>My Symfony 4.4 project runs on the same base image <code>php:8.0-fpm-alpine3.16</code> in exactly the same Kubernetes pod configuration, from which I conclude that OPcache is compatible with PHP version 8.0. 
However, to be even more sure, I commented out the lines <code>;opcache.preload=/var/www/app/var/cache/prod/srcCimple_KernelProdContainer.preload.php ;opcache.preload_user=www-data</code> in my <code>php.ini</code> and the problem did not go away</p> </li> </ol> <p><strong>UPD 2</strong></p> <ol> <li><p>I tried to print out the code that the <code>php-fpm -F</code> command returns. The output is <code>0</code>, which indicates successful completion of the command, with no errors.</p> </li> <li><p>That's what I tried to do, I first separately update the recipes, prepare the code and make sure that at each step the project runs. The problem is that the project runs locally even with a fully updated Symfony. The problem with php-fpm happens as soon as I update <code>composer update &quot;symfony/*&quot; </code> from 4.4 to 5.1 or higher. Or should I try to update packages one by one ? Is it possible ?</p> </li> <li><p>I left only one line in <code>index.php</code> - <code>require dirname(__DIR__).'/vendor/autoload.php;</code> and the problem is still there.</p> </li> </ol> <p>I was able to find out some more information about this bug: there are lines in the <code>entrypoint</code> file</p> <p><code>php-fpm -F &amp;</code></p> <p><code>nginx -g 'daemon off;' &amp;</code></p> <p>So, I commented out the line with <code>nginx</code> and found out that <code>php-fpm</code> works correctly in this case. Hence, no problem with PHP FPM as such, the problem is that the command <code>nginx -g 'daemon off;' &amp;</code> somehow blocks my fpm. Could it be related to the new rules for nginx config in Symfony 5.4 ?</p> <p>I have created a new <a href="https://stackoverflow.com/questions/76201223/nginx-blocks-php-fpm-process">question</a>.</p>
Roman Zagday
<ol> <li><p>You run <code>app:migrations:migrate</code> which can fail the script (should be done in initContainer).</p> </li> <li><p>You run <code>cache:warmup</code> which can fail the script.</p> </li> <li><p>You do not run <code>$ php-fpm --test</code>, to verify that you config actually valid.</p> </li> <li><p>You update <code>symfony</code>, which may contain bugs, which appears only on cloud hardware.</p> </li> <li><p>You using JIT (OPcache) and it just not work (8.0, 8.1 problem). You can try disable opcache extension.</p> </li> </ol> <hr /> <p>Try to connect to the pod and run project through builtin php server, not php-fpm, if it still not runs, then try to change php version.</p>
Фарид Ахмедов
<p>I get an error when I install the helm chart I created.</p> <p>helm install -f values.yaml --dry-run testbb ./</p> <p>I change the indent and make it like yamls. I use &quot;kubectl get -o yaml&quot; many times, but it doesn't work.</p> <p>Line 50 in yaml file contains volume <strong>name: frontend-http</strong></p> <p>Does anyone know how to solve this, please? Here is the entire yaml template file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ include &quot;frontend-nginx.fullname&quot; . }} labels: {{- include &quot;frontend-nginx.labels&quot; . | nindent 4 }} spec: {{- if not .Values.autoscaling.enabled }} replicas: {{ .Values.replicaCount }} {{- end }} selector: matchLabels: {{- include &quot;frontend-nginx.selectorLabels&quot; . | nindent 6 }} template: metadata: {{- with .Values.podAnnotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include &quot;frontend-nginx.selectorLabels&quot; . | nindent 8 }} spec: {{- with .Values.imagePullSecrets }} imagePullSecrets: {{- toYaml . | nindent 8 }} {{- end }} serviceAccountName: {{ include &quot;frontend-nginx.serviceAccountName&quot; . }} securityContext: {{- toYaml .Values.podSecurityContext | nindent 8 }} containers: - name: {{ .Chart.Name }} securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} ports: - name: http containerPort: 80 protocol: TCP volumeMounts: - mountPath: /usr/local/nginx/conf/nginx.conf name: {{ .Values.nginxconfig.nginxcm }}-volume subPath: nginx.conf - mountPath: /usr/local/nginx/html/app name: data-storage volumes: - configMap: defaultMode: 420 name: frontend-http name: frontend-http-volume {{- if .Values.persistentVolume.enabled }} - name: data-storage persistentVolumeClaim: claimName: {{ .Values.persistentVolume.existingClaim | default (include &quot;frontend-nginx.fullname&quot; .) }} {{- else }} - name: data-storage emptyDir: {} {{- end }} {{- if .Values.persistentVolume.mountPaths }} {{ toYaml .Values.persistentVolume.mountPaths | indent 12 }} {{- end }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} </code></pre>
Mustaine
<p>Try swapping &quot;configMap&quot; and &quot;name&quot;:</p> <pre><code> volumes: - name: frontend-http-volume configMap: defaultMode: 420 name: frontend-http {{- if .Values.persistentVolume.enabled }} - name: data-storage persistentVolumeClaim: claimName: {{ .Values.persistentVolume.existingClaim | default (include &quot;frontend-nginx.fullname&quot; .) }} {{- else }} - name: data-storage emptyDir: {} {{- end }} </code></pre>
KubePony
<p>I'm trying to create a quicklab on GCP to implement CI/CD with Jenkins on GKE, I created a <strong>Multibranch Pipeline</strong>. When I push the modified script to git, Jenkins kicks off a build and fails with the following error:</p> <blockquote> <pre><code> Branch indexing &gt; git rev-parse --is-inside-work-tree # timeout=10 Setting origin to https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default &gt; git config remote.origin.url https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default # timeout=10 Fetching origin... Fetching upstream changes from origin &gt; git --version # timeout=10 &gt; git config --get remote.origin.url # timeout=10 using GIT_ASKPASS to set credentials qwiklabs-gcp-gcpd-502b5f86f641 &gt; git fetch --tags --progress -- origin +refs/heads/*:refs/remotes/origin/* Seen branch in repository origin/master Seen branch in repository origin/new-feature Seen 2 remote branches Obtained Jenkinsfile from 4bbac0573482034d73cee17fa3de8999b9d47ced Running in Durability level: MAX_SURVIVABILITY [Pipeline] Start of Pipeline [Pipeline] podTemplate [Pipeline] { [Pipeline] node Still waiting to schedule task Waiting for next available executor Agent sample-app-f7hdx-n3wfx is provisioned from template Kubernetes Pod Template --- apiVersion: "v1" kind: "Pod" metadata: annotations: buildUrl: "http://cd-jenkins:8080/job/sample-app/job/new-feature/1/" labels: jenkins: "slave" jenkins/sample-app: "true" name: "sample-app-f7hdx-n3wfx" spec: containers: - command: - "cat" image: "gcr.io/cloud-builders/kubectl" name: "kubectl" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "gcr.io/cloud-builders/gcloud" name: "gcloud" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "golang:1.10" name: "golang" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - env: - name: "JENKINS_SECRET" value: "********" - name: "JENKINS_TUNNEL" value: "cd-jenkins-agent:50000" - name: "JENKINS_AGENT_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_AGENT_WORKDIR" value: "/home/jenkins/agent" - name: "JENKINS_URL" value: "http://cd-jenkins:8080/" image: "jenkins/jnlp-slave:alpine" name: "jnlp" volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false nodeSelector: {} restartPolicy: "Never" serviceAccountName: "cd-jenkins" volumes: - emptyDir: {} name: "workspace-volume" Running on sample-app-f7hdx-n3wfx in /home/jenkins/agent/workspace/sample-app_new-feature [Pipeline] { [Pipeline] stage [Pipeline] { (Declarative: Checkout SCM) [Pipeline] checkout [Pipeline] } [Pipeline] // stage [Pipeline] } [Pipeline] // node [Pipeline] } [Pipeline] // podTemplate [Pipeline] End of Pipeline java.lang.IllegalStateException: Jenkins.instance is missing. Read the documentation of Jenkins.getInstanceOrNull to see what you are doing wrong. 
at jenkins.model.Jenkins.get(Jenkins.java:772) at hudson.model.Hudson.getInstance(Hudson.java:77) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.areOnMaster(GoogleRobotUsernamePassword.java:146) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.readObject(GoogleRobotUsernamePassword.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at hudson.remoting.UserRequest.deserialize(UserRequest.java:290) at hudson.remoting.UserRequest.perform(UserRequest.java:189) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743) at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357) at hudson.remoting.Channel.call(Channel.java:957) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) Caused: java.lang.Error: Failed to deserialize the Callable object. 
at hudson.remoting.UserRequest.perform(UserRequest.java:195) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:97) Caused: java.io.IOException: Remote call on JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 failed at hudson.remoting.Channel.call(Channel.java:963) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) Caused: hudson.remoting.RemotingSystemException at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:299) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Finished: FAILURE </code></pre> </blockquote> <pre><code> </code></pre>
Andre12
<p>I got the same issue as well ... when running the quick lab.</p> <p><a href="https://cloud.google.com/solutions/continuous-delivery-jenkins-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/solutions/continuous-delivery-jenkins-kubernetes-engine</a></p> <p>For my situation, I suspect for some reason, the credential for "Kubernetes Service account" seems not shown up, and when I try to add one, it give global credentials with name "secret-text" ... not sure if it is the root cause, do you encounter the same situation ?</p> <p>p.s. I am running on my GKE cluster, not qwiklab.</p>
user12232194
<p>I have a command similar to this</p> <pre class="lang-sh prettyprint-override"><code>kubectl get secrets \ --selector='my-selector' \ -o jsonpath='{range .items[*] }{&quot;\n&quot;}{.metadata.labels.cluster-name}{&quot;.&quot;}{.metadata.namespace {&quot;:&quot;}{&quot;5432&quot;}{&quot;postgres&quot; }{&quot;:&quot;}{.data.password}{end}' </code></pre> <p>which outputs a list like this (format required)</p> <pre><code>cluster-name.namespace:5432:postgres:YbHF....== cluster-name.namespace:5432:postgres:YbHF....== cluster-name.namespace:5432:postgres:YbHF....== </code></pre> <p>I need to decode the base64 for this file and using the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">kubectl cheat sheet</a> as a reference which gives this example:</p> <pre><code># Output decoded secrets without external tools kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{&quot;### &quot;}}{{$k}}{{&quot;\n&quot;}}{{$v|base64decode}}{{&quot;\n\n&quot;}}{{end}}' </code></pre> <p>I tried the following</p> <pre class="lang-sh prettyprint-override"><code>kubectl get secrets \ --selector='my-selector' \ -o jsonpath='{range .items[*] }{&quot;\n&quot;}{.metadata.labels.cluster-name}{&quot;.&quot;}{.metadata.namespace {&quot;:&quot;}{&quot;5432&quot;}{&quot;postgres&quot; }{&quot;:&quot;}{.data.password|base64decode}{end}' </code></pre> <p>The result is that everything appears apart from the password field which is now blank, for example:</p> <pre><code>cluster-name.namespace:5432:postgres: </code></pre> <p>Any pointers would be appreciated.</p>
Alan
<p>As per @mdaniel suggestion I used the <code>-o go-template</code></p> <p>My main syntaxal changes were removing the [ ], ie, <code>{range .items[*] }</code> to <code>{{range .items}}'</code></p> <p>And if a key contained a <code>-</code> then <code>{.metadata.labels.cluster-name}</code> became <code>{{index .metadata.labels &quot;cluster-name&quot;}}</code></p> <p>My solution below which enabled the base64 decode to work:</p> <pre><code>kubectl get secrets \ --selector='my-selector' \ -o go-template='{{range .items}}{{&quot;\n&quot;}}{{index .metadata.labels &quot;cluster-name&quot;}}{{&quot;.&quot;}}{{.metadata.namespace }}{{&quot;:&quot;}}{{&quot;5432&quot;}}{{&quot;postgres&quot;}}{{&quot;:&quot;}}{{.data.password|base64decode}}{{end}}' </code></pre>
Alan
<p>I have the following problem I have an api in nestjs and micro service that the gateway api accesses services with TCP runs normally but when I create the pods in kubernetes it gives the following error:</p> <p><strong>[Server] Error: listen EADDRNOTAVAIL: address not available 1 92.168.x.x:8879</strong></p> <p>app.module.ts from api-gateway:</p> <pre><code>import { Module } from '@nestjs/common'; import { AppController } from './app.controller'; import { ClientsModule, Transport } from '@nestjs/microservices'; import { AppService } from './app.service'; @Module({ imports: [ ClientsModule.register([ { name: 'SERVICE_A', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8888, }, }, { name: 'SERVICE_B', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8889, }, }, { name: 'USER', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8887, }, }, { name: 'USER_LOGIN', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8886, }, }, { name: 'USER_CREATE', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8885, }, }, { name: 'USER_UPDATE', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8884, }, }, { name: 'CATEGORY', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8883, }, }, { name: 'CATEGORY_BUSCA', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8882, }, }, { name: 'CATEGORY_PRODUCT', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8881, }, }, { name: 'USER_SENHA', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8880, }, }, { name: 'ADM_CONTACT', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8879, }, }, { name: 'LOCATION', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8878, }, }, { name: 'PRODUCT_STAR', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8877, }, }, { name: 'PRODUCT_SINGLE', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8876, }, }, { name: 'PRODUCT_GET_STAR', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8875, }, }, { name: 'PURCHASE_CREATE', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8874, }, }, { name: 'PURCHASE_GET_CART', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8873, }, }, { name: 'PURCHASE_GET', transport: Transport.TCP, options: { host: '192.168.x.x', port: 8870, }, } ]), ], controllers: [AppController], providers: [AppService], }) export class AppModule {} </code></pre> <p>my main.ts of my service:</p> <pre><code>import { NestFactory } from '@nestjs/core'; import { Transport } from '@nestjs/microservices'; import { AppModule } from './app.module'; import { Logger } from '@nestjs/common'; const logger = new Logger(); async function bootstrap() { const app = await NestFactory.createMicroservice(AppModule, { transport: Transport.TCP, options: { host: '192.168.x.x', port: 8879, }, }); app.listen(() =&gt; logger.log('Microservice ADM CONTACT is listening')); } bootstrap(); </code></pre> <p>When I run the service it presents in kubernetes with <strong>kubectl logs</strong> it gives this error:</p> <pre><code>[Nest] 1 - 05/19/2022, 10:12:59 PM [NestFactory] Starting Nest application... 
[Nest] 1 - 05/19/2022, 10:13:00 PM [InstanceLoader] TypeOrmModule dependencies initialized +281ms [Nest] 1 - 05/19/2022, 10:13:00 PM [InstanceLoader] AppModule dependencies initialized +0ms [Nest] 1 - 05/19/2022, 10:13:00 PM [InstanceLoader] TypeOrmCoreModule dependencies initialized +191ms [Nest] 1 - 05/19/2022, 10:13:00 PM [NestMicroservice] Nest microservice successfully started +9ms [Nest] 1 - 05/19/2022, 10:13:00 PM [Server] Error: listen EADDRNOTAVAIL: address not available 1 92.168.x.x:8879 +6ms </code></pre> <p>If you need I edit the question and add my <strong>yamls</strong></p> <p>Does anyone have any idea of this conflict?</p> <p><strong><a href="https://stackoverflow.com/questions/72890570/im-trying-to-convert-my-application-to-kubenetes-and-nginx-ingress-controller/72891420#72891420">[solved]</a></strong></p>
Rafael Souza
<p>Your code needs updating to be Kubernetes-aware. When your container image is scheduled to a Pod, the Pod will be assigned an IP by the kubelet.</p> <p>As @mdaniel has mentioned, your code needs to bind to 0.0.0.0. Your k8s deployment yaml will need to establish services for your endpoints, and your code should use the name of these services: your deployment yaml will set the service name, so this <strong>will</strong> be known ahead of time.</p> <p>Coming from the linked thread [https://stackoverflow.com/questions/72399521/access-pod-from-another-pod-with-kubernetes-url/72408970#72408970] you will be able to access from the browser either a) when you run the <code>kubectl port-forward svc/servicename</code> commands, or b) if this is being deployed in production for other users, your k8s cluster will have an Ingress and a method for routing to your k8s service.</p> <p>Explaining how to setup Ingress to somebody who isn't a k8s admin is beyond the scope of a S/O answer I'm afraid though, due to the variables of your environment and complexity.</p>
trmatthe
<p>I am using istio and running a service on path "/" and "/app" and both "/" and "/app" will serve same page. Achieving this, I have added rewrite rule on "/app" to "/" and it works fine.</p> <p>But when I am trying to hit "/app/login", rewrite do not serve page "/login".</p> <pre><code> - match: - uri: prefix: /app rewrite: uri: / route: - destination: host: app-svc port: number: 8000 </code></pre>
Ashish Kumar
<p>This <a href="https://github.com/istio/istio/issues/8076" rel="noreferrer">github issue</a> discusses this behavior. Your current rule will rewrite <code>/app/login</code> to <code>//login</code> instead of <code>/login</code>. Apparently duplicate slashes are not ignored automatically. The best solution right now is to tweak your rule as mentioned in <a href="https://github.com/istio/istio/issues/8076#issuecomment-427057691" rel="noreferrer">this comment</a>:</p> <blockquote> <pre><code>- match: - uri: prefix: "/app/" - uri: prefix: "/app" rewrite: uri: "/" </code></pre> </blockquote>
Emil Gi
<p>I have a vault setup in k8s with k8s auth enabled to allow vault agent to read secrets and export them as an environment variables to a k8s pod using K8s service account. everything is working fine if I’m using a single k8s namespace.</p> <p>I am not able to use a service account from A namespace and trying to use it in B namespace after attaching it via a rolebinding in namespace B</p> <p>step 1 - I created a service account called vault-ro in default namespace and configured it in vault k8s auth role. everything works good for any k8s pod in default namespace. they are able to read secerts from vault.</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: vault-ro --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: role-tokenreview-binding ##This Role! namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: vault-ro namespace: default </code></pre> <p>now, I want to enable namespace B to use same vault role and k8s service account to read secret from vault. so i created a rolebinding as follow in namespace B</p> <p>role binding in Namespace B</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: role-tokenreview-binding-dev namespace: B roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: vault-ro namespace: default </code></pre> <p>expected behaviour, I should be able to spin up a k8s pod with vault-ro service account user and it should be able to read the secret from vault same way as it does in default namespace but when I try that, i’m getting error as</p> <p><strong>Error from server (Forbidden): error when creating &quot;test-app-nonprod.yaml&quot;: pods &quot;test-app&quot; is forbidden: error looking up service account B/vault-ro: serviceaccount &quot;vault-ro&quot; not foun</strong>d</p> <p>why it’s not able to reference service account vault-ro from default namespace and still trying to find if it’s present in B namespace? is it something to do with vault? I tried my best to find from everywhere, all documents saying above should work!</p> <p>appreciate any help!</p>
Meet101
<p>Your error message is saying that your pod can't find service account <code>vault-ro</code> in the <code>B</code> namespace.</p> <blockquote> <p>error looking up service account B/vault-ro</p> </blockquote> <p>Are you setting a <code>pod.spec.serviceAccountName</code> entry in your yaml? If so, the service account must exist in the same namespace as the pod is running. Both Pods and ServiceAccounts are namespaced.</p> <p>I can't give you a good link to the docs where this is stated (that the sa must exist, it's implied in a few places. To check whether an object is namespaced or not you can use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace</a>) but I learnt this through experience.</p> <p>I would create another service account in namespace <code>B</code> and do the rolebinding again.</p> <p>As an aside, your have a mix of versions in your yaml for rbac, so if you want to avoid having this possibly bite you in the future if v1beta gets deprecated, it's worth tidying that too. Also, you are doing a ClusterRoleBinding in the first half, and a RoleBinding in the second which isn't necessary.</p> <p>I'd use this manifest and change it for each serviceaccount as indicated:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: role-tokenreview-binding namespace: default #*** change 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: vault-ro namespace: default #*** change 2 </code></pre> <p>Create a serviceaccount in each namespace you need to deploy vault-accessing pods in, and update the namespaces marked for each namespace you create an sa in.</p>
trmatthe
<p>I am trying have kubernetes create new pods on the most requested nodes instead of pods spreading the load across available nodes. The rationale is that this simplifies scale down scenarios and relaunching of application if pods gets moved and a node gets killed during autoscaling. </p> <p>The preferred strategy for descaling is - 1) Never kill a node if there is any running pod 2) New pods are created preferentially on the most requested Nodes 3) The pods will self destruct after job completion. This should, over time, result in free nodes after the tasks are completed and thus descaling will be safe and I don't need to worry about resilience of the running jobs.</p> <p>For this, is there any way I can specify the NodeAffinity in the pod spec, something like:</p> <pre><code> spec: affinity: nodeAffinity: RequiredDuringSchedulingIgnoredDuringExecution: - weight: 100 nodeAffinityTerm: {MostRequestedPriority} </code></pre> <p>The above code has no effect. The documentation for <code>NodeAffinity</code> doesn't specify if I can use <code>MostRequestedPriority</code> in this context. <code>MostRequestedPriority</code> is an option in the kubernetes scheduler service spec. But I am trying to see if I can directly put t in the pod spec, instead of creating a new custom kubernetes scheduler.</p>
Abdul Muneer
<p>Unfortunately there is no option to pass <code>MostRequestedPriority</code> to nodeAffinity field. However you can create <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">simple scheduler</a> to manage pod scheduling. Following configuration will be just enough. </p> <p>First, you have to create <code>Service Account</code> and <code>ClusterRoleBinding</code> for this scheduler: </p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: own-scheduler namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: own-scheduler roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: own-scheduler namespace: kube-system </code></pre> <p>Then create config map with desired <code>policy</code> field including <code>MostRequestedPriority</code>. Each field in <a href="https://kubernetes.io/docs/concepts/scheduling/#filtering" rel="nofollow noreferrer">predicates</a> can be modified to suit your needs best and basically what it does is it filters the nodes to find where a pod can be placed, for example, the <code>PodFitsResources</code> filter checks whether a Node has enough available resource to meet a Pod’s specific resource requests:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: labels: k8s-addon: scheduler.addons.k8s.io name: own-scheduler namespace: kube-system data: policy.cfg: |- { "kind" : "Policy", "apiVersion" : "v1", "predicates" : [ {"name" : "PodFitsHostPorts"}, {"name" : "PodFitsResources"}, {"name" : "NoDiskConflict"}, {"name" : "PodMatchNodeSelector"}, {"name" : "PodFitsHost"} ], "priorities" : [ {"name" : "MostRequestedPriority", "weight" : 1}, {"name" : "EqualPriorityMap", "weight" : 1} ] } </code></pre> <p>Then wrap it up in Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: component: scheduler tier: control-plane name: own-scheduler namespace: kube-system spec: selector: matchLabels: component: scheduler tier: control-plane replicas: 1 template: metadata: labels: component: scheduler tier: control-plane version: second spec: serviceAccountName: own-scheduler containers: - command: - /usr/local/bin/kube-scheduler - --address=0.0.0.0 - --leader-elect=false - --scheduler-name=own-scheduler - --policy-configmap=own-scheduler image: k8s.gcr.io/kube-scheduler:v1.15.4 livenessProbe: httpGet: path: /healthz port: 10251 initialDelaySeconds: 15 name: kube-second-scheduler readinessProbe: httpGet: path: /healthz port: 10251 resources: requests: cpu: '0.1' securityContext: privileged: false volumeMounts: [] hostNetwork: false hostPID: false volumes: [] </code></pre>
kool
<p>How to pass in the <code>application.properties</code> to the Spring boot application using <code>configmaps</code>. Since the <code>application.yml</code> file contains sensitive information, this requires to pass in <code>secrets</code> and <code>configmaps</code>. In this case what options do we have to pass in both the sensitive and non-sensitive configuration data to the Spring boot pod. I am currently using Spring cloud config server and Spring cloud config server can encrypt the sensitive data using the <code>encrypt.key</code> and decrypt the key.</p>
zilcuanu
<p>ConfigMaps as described by @paltaa would do the trick for non-sensitive information. For sensitive information I would use a <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">sealedSecret</a>.</p> <p>Sealed Secrets is composed of two parts:</p> <ul> <li>A cluster-side controller / operator</li> <li>A client-side utility: kubeseal</li> </ul> <p>The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.</p> <p>These encrypted secrets are encoded in a SealedSecret resource, which you can see as a recipe for creating a secret.</p> <p>Once installed you create your secret as normal and you can then:</p> <p><code>kubeseal --format=yaml &lt; secret.yaml &gt; sealed-secret.yaml</code></p> <p>You can safely push your sealedSecret to github etc.</p> <p>This normal kubernetes secret will appear in the cluster after a few seconds and you can use it as you would use any secret that you would have created directly (e.g. reference it from a Pod).</p>
Alan
<p>The below snippet is taken from default Corefile of coredns.</p> <pre><code>data: Corefile: | .:53 { errors health { lameduck 5s } ready </code></pre> <p>In this snippet, health plugin is used to report health status to http://localhost:8080/health. If 200 OK is reported means healthy coredns pod. But I have few queries, If anything other than 200 OK is reported as health status of coredns pod, will kubernetes destroy and recreate the coredns pod ? Or how it will be handled by kubernetes?</p> <p>If coredns pod is unhealthy, how is it handled to make it healthy? will kubernetes automatically take care of ensuring a healthy coredns pod ? What is the advised lameduck duration? Based on what parameters, we can come to a appropriate lameduck duration.</p> <p>It would be really helpful if anyone can comment on this. Thanks in advance!</p>
anonymous user
<p>I have coredns deployed on a cluster in the kube-system namespace:</p> <p><em>kubectl describe deploy coredns -n kube-system</em></p> <p>Output: Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3</p> <p>The Liveness/Readiness probes that are used for this must be configured</p>
KubePony
<p>I have a ClusterIP Service which is used to distribute load to 2 PODs internally. The load is not distributed evenly across the PODs.</p> <p>How to make the load distributed evenly ?</p>
Chandu
<p>Kubernetes uses iptables to distribute the load between pods (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables proxy mode</a> by default).</p> <p>If you have 2 pods, it is distributed evenly with 0.5 (50%) probability. Because it is not using round-robin, the backend pod is chosen randomly. It will be even in a longer time-frame. </p> <p>If there would be 3 pods, probability will change to 1/3 (33%), for 4 pods 1/4 and so on.</p> <p>To check it you can run <code>sudo iptables-save</code>.</p> <p>Example output for 2 pods (for nginx service):</p> <pre><code>sudo iptables-save | grep nginx -A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx:" -m tcp --dport 31554 -j KUBE-SVC-4N57TFCL4MD7ZTDA //KUBE-SVC-4N57TFCL4MD7ZTDA is a tag for nginx service sudo iptables-save | grep KUBE-SVC-4N57TFCL4MD7ZTDA -A KUBE-SVC-4N57TFCL4MD7ZTDA -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SOWYYRHSSTWLCRDY </code></pre> <p>If you want to make sure load is distributed evenly using round-robin algorithm you can use <a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">IPVS</a> which by default uses rr (round-robin). It works as load balancer in front of a cluster and directs requests for TCP- and UDP-based services to the real servers, and make services of the real servers appear as virtual services on a single IP address. It is <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md#run-kube-proxy-in-ipvs-mode" rel="nofollow noreferrer">supported</a> on cluster created by local-up, kubeadm and GCE.</p>
kool
<p>We have been running tidb cluster in k8s. and its working fine since. But suddenly i am getting following issue only in new statsfull pods <code>tidb-tidb-1</code> after scaling tidb-tidb statsfulset. Interestingly tidb-tidb-2 is running. All others pd and tikv pods are also running fine.I have checked the pd url which is not reachable from problematic pods but fine for other pods.Can you please help me to solve this issue.</p> <p><code>tidb-tidb-1 logs:</code></p> <pre><code>[2021/04/11 16:15:44.526 +00:00] [WARN] [base_client.go:180] [&quot;[pd] failed to get cluster id&quot;] [2021/04/11 16:15:48.527 +00:00] [WARN] [base_client.go:180] [&quot;[pd] failed to get cluster id&quot;] [error=&quot;[PD:client:ErrClientGetMember]error:rpc error: code = DeadlineExceeded desc = latest connection error: connection error: desc = \&quot;transport: Error while dialing dial tcp: i/o timeout\&quot; target:test-tidb-pd:2379 status:CONNECTING </code></pre>
Taybur Rahman
<ol> <li>Could you please show namespace information? kubectl get all -n -o wide</li> <li>Please check node information. <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-isolation-restriction" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-isolation-restriction</a></li> <li>Please check the network. If two nodes could ping successful? transport: Error while dialing dial TCP: i/o timeout</li> </ol>
yi888long
<p>I'm trying to run a tomcat container in K8S with a non-root user, to do so I set User 'tomcat' with the appropriate permission in Docker Image. I have a startup script that creates a directory in /opt/var/logs (during container startup) and also starts tomcat service.</p> <pre><code>#steps in Dockerfile #adding tomcat user and group and permission to /opt directory addgroup tomcat -g 1001 &amp;&amp; \ adduser -D -u 1001 -G tomcat tomcat &amp;&amp; \ chown -R tomcat:tomcat /opt #switch user User tomcat </code></pre> <p>The pod runs fine in K8S when deployed using deployment without any volume mapped.</p> <p>But I get a permission denied error (permission denied: creating directory /opt/var/logs/docker/) from the startup script, which fails to create a directory when I map the deployment with the persistent volume claim, even though I set the fsgroup as explained here, https://kubernetes.io/docs/tasks/configure-pod-container/security-context/.</p> <p>I have a persistent volume of type hostPath.</p> <p>The deployment definition is as below.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ms-tomcat namespace: ms-ns labels: app: tomcat spec: selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: securityContext: fsGroup: 2000 runAsUser: 1001 runAsGroup: 1001 containers: - name: tomcat image: docker-registry.test.com/tomcat:1.2 volumeMounts: - name: logging-volume mountPath: /opt/var/logs/docker imagePullSecrets: - name: test volumes: - name: logging-volume persistentVolumeClaim: claimName: nonroot-test-pvc </code></pre> <p>PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nonroot-test-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi storageClassName: local-node-sc volumeName: nonroot-test-pv </code></pre>
mansing shinde
<p>you need to run an initcontainer that will execute chmod 777 on the mounted volume to to be able to use the volume in a container with a non root user</p> <p>this can be easily done by creating an init container from an alpine image</p> <p>you can face this problem when trying to use longhorn as your storage class</p> <p>like this:</p> <pre><code>spec: initContainers: - name: permission-init image: alpine:3.16.0 command: - sh - -c - (chmod 777 /data) volumeMounts: - name: files mountPath: /data </code></pre>
Omar Aboul Makarem
<p>I have created a pod in Kubernetes(Google Cloud) and its streaming data via imagezmq.</p> <p>Python code which is streaming the data(Inside Kubernetes Pod)-</p> <pre><code>import imagezmq sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555', REQ_REP=False) sender.send_image('rpi_name',data) </code></pre> <p>I want to access the data from outside the pod, from my system like this.</p> <pre><code>image_hub = imagezmq.ImageHub('tcp://34.86.110.52:80', REQ_REP=False) while True: rpi_name, image = image_hub.recv_image() yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + image.tobytes() + b'\r\n') </code></pre> <p>I tried creating a external loadbalance but it did'nt worked. I am not sure what to do</p> <p>Loadbalance YAML-</p> <pre><code>apiVersion: v1 kind: Service spec: clusterIP: 10.72.131.76 externalTrafficPolicy: Cluster ports: - nodePort: 31145 port: 80 protocol: TCP targetPort: 5555 selector: app: camera-65 sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 34.86.110.52 </code></pre> <p>Please Help me.</p>
Devdrone Bhowmik
<p>I found the solution.</p> <p>Changing the IP from 127.0.0.1 to 0.0.0.0 solved the issue for me, so the sender binds on all interfaces instead of only on loopback.</p> <pre><code>import imagezmq sender = imagezmq.ImageSender(connect_to='tcp://0.0.0.0:5555', REQ_REP=False) sender.send_image('rpi_name',data) </code></pre> <p>Then exposing the pod with a LoadBalancer-type service did the trick.</p>
Devdrone Bhowmik
<p>I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.</p> <p>After the entire configuration of pods and deployments, is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?</p> <p>Note: I will be using multiple deployment YAML files, statefulsets etc. for all the above-mentioned services.</p>
Sarath S
<p>You can use this script:</p> <pre><code>NAMESPACE=&quot;your_namespace&quot; RESOURCES=&quot;configmap secret daemonset deployment service hpa&quot; for resource in ${RESOURCES};do rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource}|jq '.items[].metadata.name'|sed &quot;s/\&quot;//g&quot;) for r in ${rsrcs};do dir=&quot;${NAMESPACE}/${resource}&quot; mkdir -p &quot;${dir}&quot; kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} &gt; &quot;${dir}/${r}.yaml&quot; done done </code></pre> <p>Remember to specify what resources you want exported in the script.</p> <p><a href="https://gist.github.com/negz/c3ee465b48306593f16c523a22015bec" rel="nofollow noreferrer">More info here </a></p>
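<p>As a usage note: once the manifests have been exported into the per-resource directories, the whole stack can be recreated in one step by applying the directory recursively (the directory name is whatever you set <code>NAMESPACE</code> to):</p> <pre><code># recreate everything that was exported
kubectl apply -R -f your_namespace/
</code></pre>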
Daniel Karapishchenko
<p>I am on a GCP k8s cluster. I want to be sure that no pods or other kubernetes resources are using a particular ConfigMap first, before deleting the ConfigMap. Is there a kubectl command that I can use to check what is using a ConfigMap?</p>
Christian
<p>You could export all your resources and grep for the ConfigMap name.</p> <p>You can use this script to export all selected resources (select the resources in the RESOURCES variable):</p> <pre><code>NAMESPACE=&quot;your_namespace&quot; RESOURCES=&quot;configmap secret daemonset deployment service&quot; for resource in ${RESOURCES};do rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource}|jq '.items[].metadata.name'|sed &quot;s/\&quot;//g&quot;) for r in ${rsrcs};do dir=&quot;${NAMESPACE}/${resource}&quot; mkdir -p &quot;${dir}&quot; kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} &gt; &quot;${dir}/${r}.yaml&quot; done done </code></pre> <p>Next, you can use <code>grep -nr your_config_map_name your_directory</code> (your namespace in this case).</p> <p>This will show you the files that contain the ConfigMap, i.e. the resources that use it.</p>
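<p>If you prefer not to export anything first, a rough one-liner (assuming <code>jq</code> is available; the namespace and ConfigMap name are placeholders) that lists pods referencing a ConfigMap through volumes or <code>envFrom</code>:</p> <pre><code>kubectl -n your_namespace get pods -o json \
  | jq -r '.items[]
      | select(
          [.spec.volumes[]?.configMap.name, .spec.containers[].envFrom[]?.configMapRef.name]
          | index(&quot;your_config_map_name&quot;)
        )
      | .metadata.name'
</code></pre>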
Daniel Karapishchenko
<p>I am trying to configure an nginx ingress for a GKE cluster and define a path on a configured subdomain. It seems that even if I am able to successfully ping the host, and the domain binding is done correctly, I keep getting a 404 back whenever I try to access the configured path.</p> <p>My goal is to be able to have a single static IP configured for my ingress controller and expose multiple services on different paths.</p> <p>Below you can find my deployment files - one more thing that I would add is that I am using Terraform to automate the configuration and deployment of GCP and Kubernetes resources.</p> <p>After the GKE cluster is successfully provisioned, I first deploy the official nginx-ingress controller from <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">here</a> - below my Terraform script that configures and deploys the controller with a custom static IP that I provisioned on GCP.</p> <pre><code>resource &quot;helm_release&quot; &quot;nginx&quot; { name = &quot;nginx&quot; chart = &quot;nginx-stable/nginx-ingress&quot; timeout = 900 set { name = &quot;controller.stats.enabled&quot; value = true } set { name = &quot;controller.service.type&quot; value = &quot;LoadBalancer&quot; } set { name = &quot;controller.service.loadBalancerIP&quot; value = &quot;&lt;MY_STATIC_IP_ADDRESS&gt;&quot; } } </code></pre> <p>Below my ingress configuration that I also deploy via Terraform:</p> <pre><code>resource &quot;kubernetes_ingress&quot; &quot;ingress&quot; { wait_for_load_balancer = true metadata { name = &quot;app-ingress&quot; annotations = { &quot;kubernetes.io/ingress.class&quot;: &quot;nginx&quot; &quot;nginx.ingress.kubernetes.io/rewrite-target&quot;: &quot;/&quot; &quot;kubernetes.io/ingress.global-static-ip-name&quot;: &lt;MY_STATIC_IP_ADDRESS&gt; } } spec { rule { host = custom.my_domain.com http { path { backend { service_name = &quot;app-service&quot; service_port = 5000 } path = &quot;/app&quot; } } } } } </code></pre> <p>And the resulting ingress configuration as taken from GCP:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx kubernetes.io/ingress.global-static-ip-name: static-ip-name nginx.ingress.kubernetes.io/rewrite-target: / creationTimestamp: &quot;2021-04-14T20:28:41Z&quot; generation: 7 name: app-ingress namespace: default resourceVersion: HIDDEN selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress uid: HIDDEN spec: rules: - host: custom.my_domain.com http: paths: - backend: serviceName: app-service servicePort: 5000 path: /app status: loadBalancer: ingress: - ip: &lt;MY_STATIC_IP_ADDRESS&gt; </code></pre> <p>And the output for the <code>kubectl describe ingress app-ingress</code> command:</p> <pre><code>Name: app-ingress Namespace: default Address: &lt;MY_STATIC_IP_ADDRESS&gt; Default backend: default-http-backend:80 (192.168.10.8:8080) Rules: Host Path Backends ---- ---- -------- custom.my_domain.com /app app-service:5000 (192.168.10.11:5000) Annotations: kubernetes.io/ingress.class: nginx kubernetes.io/ingress.global-static-ip-name: static-ip-name nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal AddedOrUpdated 16m (x6 over 32m) nginx-ingress-controller Configuration for default/app-ingress was added or updated </code></pre> <p>I deployed the application that I am trying to expose by using the following configuration files:</p> 
<p><strong>pvc.yaml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: app-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: default </code></pre> <p><strong>service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-service spec: type: NodePort ports: - port: 5000 targetPort: 5000 protocol: TCP name: http </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app-deployment spec: replicas: 1 template: spec: restartPolicy: Always volumes: - name: app-pvc persistentVolumeClaim: claimName: app-pvc containers: - name: app-container image: &quot;eu.gcr.io/&lt;PROJECT_ID&gt;/&lt;IMAGE_NAME&gt;:VERSION_TAG&quot; imagePullPolicy: IfNotPresent ports: - containerPort: 5000 livenessProbe: tcpSocket: port: 5000 initialDelaySeconds: 10 periodSeconds: 10 readinessProbe: tcpSocket: port: 5000 initialDelaySeconds: 15 periodSeconds: 20 volumeMounts: - mountPath: &quot;/data&quot; name: app-pvc </code></pre> <p>Everything gets deployed successfully, as I am able to directly connect to the application locally via the configured service by running the following command:</p> <pre><code>kubectl port-forward service/app-service 5000:5000 </code></pre> <p>This allows me to access the application in my browser and everything works as intended.</p> <p>To make sure that <code>&lt;MY_STATIC_IP_ADDRESS&gt;</code> that I configured is properly bound to <code>custom.my_domain.com</code>, I tried to ping the host and I do get the right response back:</p> <pre><code>ping custom.my_domain.com Pinging custom.my_domain.com [&lt;MY_STATIC_IP_ADDRESS&gt;] with 32 bytes of data: Reply from &lt;MY_STATIC_IP_ADDRESS&gt;: bytes=32 time=36ms TTL=113 Reply from &lt;MY_STATIC_IP_ADDRESS&gt;: bytes=32 time=37ms TTL=113 Reply from &lt;MY_STATIC_IP_ADDRESS&gt;: bytes=32 time=36ms TTL=113 Reply from &lt;MY_STATIC_IP_ADDRESS&gt;: bytes=32 time=45ms TTL=113 Ping statistics for &lt;MY_STATIC_IP_ADDRESS&gt;: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 36ms, Maximum = 45ms, Average = 38ms </code></pre> <p>Even if everything appears to be working as intended, whenever I try to navigate to <code>custom.my_domain.com/app</code> in my browser, I keep getting the following response in my browser, even after waiting for more than 30m to make sure that the ingress configuration has been properly registered on GCP: <a href="https://i.stack.imgur.com/ghQOn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ghQOn.png" alt="enter image description here" /></a></p> <p>And this is the entry that shows up in the logs of my nginx-controller pod:</p> <pre><code>&lt;HIDDEN_LOCAL_IP&gt; - - [14/Apr/2021:21:18:10 +0000] &quot;GET /app/ HTTP/1.1&quot; 404 232 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0&quot; &quot;-&quot; </code></pre> <p><strong>UPDATE #1</strong></p> <p>It appears that if I update my ingress to directly expose the targeted service on the <code>/</code> path, it works as intended. Below the updated configuration. 
Still, it appears that if I try to set any other path, it does not work.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx kubernetes.io/ingress.global-static-ip-name: static-ip-name nginx.ingress.kubernetes.io/rewrite-target: / creationTimestamp: &quot;2021-04-14T20:28:41Z&quot; generation: 7 name: app-ingress namespace: default resourceVersion: HIDDEN selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress uid: HIDDEN spec: rules: - host: custom.my_domain.com http: paths: - backend: serviceName: app-service servicePort: 5000 path: / status: loadBalancer: ingress: - ip: &lt;MY_STATIC_IP_ADDRESS&gt; </code></pre> <p><strong>Update #2</strong></p> <p>After going through the materials shared by @jccampanero in the comments section, I was able to get a working configuration.</p> <p>Instead of using <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">nginx-stable</a> which is referenced on the official nginx website, I used the one <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">here</a> and updated my Terraform script accordingly to use this one with the exact same configuration I had.</p> <p>Afterwards, I had to update my ingress by following the documentation <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">here</a> - below the updated configuration:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx kubernetes.io/ingress.global-static-ip-name: static-ip-name nginx.ingress.kubernetes.io/rewrite-target: /$2 creationTimestamp: &quot;2021-04-14T20:28:41Z&quot; generation: 7 name: app-ingress namespace: default resourceVersion: HIDDEN selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress uid: HIDDEN spec: rules: - host: custom.my_domain.com http: paths: - backend: serviceName: app-service servicePort: 5000 path: /app(/|$)(.*) status: loadBalancer: ingress: - ip: &lt;MY_STATIC_IP_ADDRESS&gt; </code></pre>
vladzam
<p>As indicated in the question comments and in the question itself, very well documented by @vladzam, there are two reasons for the problem.</p> <p>On one hand, the nginx ingress controller available through the Helm <code>stable</code> channel seems to be <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">deprecated</a> in favor of the new <code>ingress-nginx</code> controller - please see <a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="nofollow noreferrer">the GitHub repo</a> and the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">official documentation</a>.</p> <p>On the other hand, it seems to be a problem related to the definition of the <code>rewrite-target</code> annotation. According to the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Starting in Version 0.22.0, ingress definitions using the annotation <code>nginx.ingress.kubernetes.io/rewrite-target</code> are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p> </blockquote> <p>As a consequence, it is necessary to modify the definition of the ingress resource to take this change into account. For instance:</p> <pre class="lang-yaml prettyprint-override"><code>$ echo ' apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something(/|$)(.*) ' | kubectl create -f - </code></pre> <p>The question itself provides the exact ingress resource definition.</p>
jccampanero
<p>What is the difference between the three declarations below for passing command/arguments:</p> <ol> <li><p>containers:<br /> name: busybox<br /> image: busybox<br /> args:<br /> - sleep<br /> - &quot;1000&quot;</p> </li> <li><p>containers:<br /> name: busybox<br /> image: busybox<br /> command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;sleep 1000&quot;]</p> </li> <li><p>containers:<br /> name: busybox<br /> image: busybox<br /> args:<br /> - sleep<br /> - &quot;1000&quot;</p> </li> </ol> <p>A. Would these produce the same result?<br /> B. What is the preference or usage for each?</p>
PS-Atl
<p>The YAML list definitions are only a matter of taste; it's just YAML syntax. These two examples are equivalent:</p> <pre><code>listOne: - item1 - item2 listTwo: ['item1', 'item2'] </code></pre> <p>And this syntax works for both <strong>args</strong> and <strong>command</strong>. Besides that, <strong>args</strong> and <strong>command</strong> are slightly different, as the documentation says:</p> <ul> <li>If you do not supply command or args for a Container, the defaults defined in the Docker image are used</li> <li>If you supply a command but no args for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored.</li> <li>If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.</li> <li>If you supply a command and args, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args.</li> </ul> <p>Imagine a container like <strong>mysql</strong>: if you look at its Dockerfile you'll notice this:</p> <pre><code>ENTRYPOINT [&quot;docker-entrypoint.sh&quot;] CMD [&quot;mysqld&quot;] </code></pre> <p>The <strong>entrypoint</strong> calls a script that prepares everything the database needs; when finished, this script calls <code>exec &quot;$@&quot;</code>, and the shell variable <code>$@</code> is everything defined in <strong>cmd</strong>.</p> <p>So, on Kubernetes, if you want to pass arguments to <strong>mysqld</strong> you do something like:</p> <pre class="lang-yaml prettyprint-override"><code>image: mysql args: - mysqld - --skip-grant-tables # or args: [&quot;mysqld&quot;, &quot;--skip-grant-tables&quot;] </code></pre> <p>This still executes the <strong>entrypoint</strong>, but now the value of <code>$@</code> is <code>mysqld --skip-grant-tables</code>.</p>
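<p>To make the rules above concrete, a minimal sketch of a pod that supplies both <strong>command</strong> and <strong>args</strong>, so both the image ENTRYPOINT and CMD of busybox are ignored and the container simply prints the two arguments and exits:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cmd-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: [&quot;echo&quot;]                 # replaces the image ENTRYPOINT
    args: [&quot;first-arg&quot;, &quot;second-arg&quot;] # replaces the image CMD
</code></pre>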
Hector Vido
<p>I want to expose tomcat service via nodeport service method.I'm trying to achieve the same using kubectl run command option instead of using manifest yaml file with kubectl (apply or create) command</p> <pre><code>[root@master ~]# kubectl run tom --image=tomcat --replicas=3 --port=8080 --labels="env=prod" kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. deployment.apps/tom created [root@master ~]# kubectl create service nodeport tomsvc --tcp=32156:8080 --node-port=32666 service/tomsvc created [root@master ~]# [root@master ~]# kubectl get svc tomsvc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE tomsvc NodePort 10.98.117.174 &lt;none&gt; 32156:32666/TCP 30s [root@master ~]# </code></pre> <p>Now the endpoints are not updated since we didn't have option to use label selector during kubectl create service</p> <pre><code>[root@master ~]# kubectl get ep tomsvc NAME ENDPOINTS AGE tomsvc &lt;none&gt; 62s [root@master ~]# </code></pre> <p>After changed the selector from default to env: prod,endpoints got updated</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: "2019-12-28T14:30:21Z" labels: app: tomsvc name: tomsvc namespace: default resourceVersion: "1834608" selfLink: /api/v1/namespaces/default/services/tomsvc uid: 696f4dde-341a-4118-b02b-6aa53df18f74 spec: clusterIP: 10.98.117.174 externalTrafficPolicy: Cluster ports: - name: 32156-8080 nodePort: 32666 port: 32156 protocol: TCP targetPort: 8080 selector: app: tomsvc sessionAffinity: None type: NodePort status: loadBalancer: {} ~ </code></pre> <p>Now i'm able to see endpoints updated with pod ip address and target port which are having labels as env: prod</p> <pre><code>[root@master ~]# kubectl get ep tomsvc NAME ENDPOINTS AGE tomsvc 10.36.0.2:8080,10.36.0.3:8080,10.36.0.4:8080 4m20s [root@master ~]# </code></pre> <p>So to avoid using this option to edit nodeport service yaml file for updating selector,i saw there is an option kubectl expose option to publish service</p> <p>Deleted that nodeport service and recreated using kubectl expose option</p> <pre><code>[root@master ~]# kubectl delete svc tomsvc service "tomsvc" deleted [root@master ~]# kubectl expose deployment tom --port=32156 --type=NodePort --target-port=8080 service/tom exposed [root@master ~]# kubectl get svc tom NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE tom NodePort 10.107.182.35 &lt;none&gt; 32156:30397/TCP 19s </code></pre> <p>Now endpoints automatically updated with pods which are having env: prod as label</p> <pre><code>[root@master ~]# kubectl get ep tom NAME ENDPOINTS AGE tom 10.36.0.2:8080,10.36.0.3:8080,10.36.0.4:8080 25s [root@master ~]# </code></pre> <p>My questions is why there is no option to pass selector while using kubectl create service command and why there is no option --nodeport while running kubectl expose command?</p> <p>Is there anything technically I'm misunderstanding here?</p>
JPNagarajan
<p>If you want to create deployment and expose it using <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer">kubectl run</a> command you can do it by using following:</p> <p><code>kubectl run tom --image=tomcat --replicas=3 --port=8080 --labels="env=prod" --expose=true --service-overrides='{ "spec": { "type": "NodePort" } }' </code></p> <p><code>--expose=true</code> creates a service (by default ClusterIP) and assigns it to the deployment using same selectors as specified for the deployment.</p> <p><code>--service-overrides='{ "spec": { "type": "NodePort" } }'</code> changes its type to NodePort.</p>
kool
<p>I have updated the SSL certificates on my AKS service. But I am getting CORS error. New certificates are getting reflected though. Can someone provide a solution for it. Here is the TLS part of my ingress file.</p> <pre><code>spec: tls: - hosts: - &quot;prodazure.thesolarlabs.com&quot; secretName: tls-secret </code></pre>
Aarti Joshi
<p>A <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="nofollow noreferrer">CORS</a> error is related to the fact that you are issuing a request (fetch, XHR, ...) from a Javascript origin to another service, the destination/backend, and the origin service is not allowed to according to the headers provided by that destination service.</p> <p>It means that your service, the origin, the one deployed in AKS, is trying contacting another service. In order to avoid the CORS errors, that service needs to provide an <code>Access-Control-Allow-Origin</code> that at least includes the host for the origin service, in your example:</p> <pre><code>Access-Control-Allow-Origin: https://prodazure.thesolarlabs.com </code></pre> <p>Changing a SSL certificate by itself shouldn't be the cause of the CORS error: please, as mentioned, adapt your destination/backend service CORS configuration to point to the new host of the origin service, if you changed it, and be sure that you are configuring <code>Access-Control-Allow-Origin</code> for the right HTTP scheme, <code>https</code> in this case, as well.</p>
jccampanero
<p>I am trying to install Kubeflow on Google Cloud Platform (GCP) and Kubernetes Engine (GKE), following the <a href="https://www.kubeflow.org/docs/gke/deploy/" rel="nofollow noreferrer">GCP deployment guide</a>.</p> <p>I created a GCP project of which I am the owner, I enabled billing, set up OAuth credentials and enabled the following APIs:</p> <ul> <li>Compute Engine API</li> <li>Kubernetes Engine API</li> <li>Identity and Access Management (IAM) API</li> <li>Deployment Manager API</li> <li>Cloud Resource Manager API</li> <li>Cloud Filestore API</li> <li>AI Platform Training &amp; Prediction API</li> </ul> <p>However, when I want to <a href="https://www.kubeflow.org/docs/gke/deploy/deploy-ui/" rel="nofollow noreferrer">deploy Kubeflow using the UI</a>, I get the following error:</p> <p><a href="https://i.stack.imgur.com/SrSjY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SrSjY.png" alt="enter image description here"></a></p> <p>So I doublechecked and those APIs are already enabled:</p> <p><a href="https://i.stack.imgur.com/MSdhr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MSdhr.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Fu0Au.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fu0Au.png" alt="enter image description here"></a></p> <p>The log messages at the bottom of the screen are:</p> <pre><code>2020-03-0614:14:04.629: Getting enabled services for project &lt;projectname&gt;.. 2020-03-0614:14:16.909: Could not configure communication with GCP, exiting </code></pre> <p>The <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L695" rel="nofollow noreferrer"><code>Could not configure communication with GCP, exiting</code></a> is triggered when <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L916" rel="nofollow noreferrer"><code>_enableGcpServices()</code></a> fails. </p> <p>The line <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L922" rel="nofollow noreferrer"><code>Getting enabled services for project ...</code></a> is printed but not the line <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L944" rel="nofollow noreferrer"><code>Proceeding with project number: ...</code></a>, so the error must be triggered somewhere in <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L923-L942" rel="nofollow noreferrer">the block of code between those lines</a>. </p> <p>The call to <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L928-L929" rel="nofollow noreferrer"><code>Gapi.cloudresourcemanager.getProjectNumber(project)</code></a> has its own try/catch with a slightly different error message and title (only talks about the cloud resource manager API, not the IAM API), so I assume it is the call to <a href="https://github.com/kubeflow/kubeflow/blob/master/components/gcp-click-to-deploy/src/DeployForm.tsx#L923" rel="nofollow noreferrer"><code>Gapi.getSignedInEmail()</code></a> that fails??</p>
BioGeek
<p>I'd suggest having a look at the Service Management API, the IAM Service Account Credentials API and possibly the Cloud Identity-Aware Proxy API. I've only used the CLI install tool previously and didn't run into these problems, but you might require these services for the IAP deployment.</p>
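<p>If those turn out to be needed, they can be enabled from the command line. A sketch, assuming these are the services you want (replace the project ID):</p> <pre><code>gcloud services enable \
  servicemanagement.googleapis.com \
  iamcredentials.googleapis.com \
  iap.googleapis.com \
  --project=&lt;PROJECT_ID&gt;
</code></pre>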
Joe Peskett
<p>i installed a <code>kubeadm</code> v1.17 cluster with weave initially. I would like to switch it over to use calico. However, as i originally did not install the cluster with </p> <pre><code>kubeadm init --pod-network-cidr=192.168.0.0/16 </code></pre> <p>as per the docs, but with a simple</p> <pre><code>kubeadm init </code></pre> <p>i was wondering what steps i need to perform to achieve the transition from weave to calico? </p>
yee379
<p>To change CNI from Weave Net to Calico in the cluster you can do the following:</p> <p>Delete weave-net pods configuration:</p> <pre><code>kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" </code></pre> <p>Then change podCIDR by running following command on your master node:</p> <pre><code>sudo kubeadm init phase control-plane controller-manager --pod-network-cidr=192.168.0.0/16 </code></pre> <p><code>192.168.0.0/16</code> is the default podCIDR used by Calico and can be changed <strong>only once.</strong> </p> <p>If you try to change it afterwards it will show error:</p> <blockquote> <p>spec.podCIDRs: Forbidden: node updates may not change podCIDR except from "" to valid</p> </blockquote> <p>so it's one way operation. </p> <p>After that you can apply calico:</p> <pre><code>kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml </code></pre> <p>Additionally if you choose to set different podCIDR you have to specify podCIDR in <code>kubeadm init</code>:</p> <pre><code>sudo kubeadm init phase control-plane all --pod-network-cidr=&lt;your_podCIDR&gt; </code></pre> <p>then modify Calico DaemonSet:</p> <pre><code>... - name: CALICO_IPV4POOL_CIDR value: "&lt;your_podCIDR&gt;" ... </code></pre> <p>and then apply it. But as mentioned before, you cannot do it once the podCIDR was specified. It can be added but cannot be modified later.</p>
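<p>A few commands that may help to verify the switch afterwards (the label assumes the standard Calico manifest):</p> <pre><code># Calico pods should be Running on every node
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
# Weave pods should be gone
kubectl get pods -n kube-system | grep weave
# check the podCIDR actually assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
</code></pre>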
kool
<p>I am new to k8s. I have installed Kafka to the local cluster using the helm install command; it installed successfully and is shown as running by</p> <p>helm list</p> <p>and</p> <p>kubectl get all -A</p> <p>I installed the Confluent.Kafka NuGet package in my C# project and tried to connect to the pod, but it does not connect using localhost:13090 and gives no error message.</p> <p>Please note that the namespace of the Kafka pod is “default” while the namespace of the application pod is “my-pod”.</p> <p>Please advise me, thank you.</p>
AB1994
<p>I can see 90% of the answer included in your question: you mention that the namespace is different, which means you need to <strong>add &quot;.default&quot; to your service name</strong>.</p> <p>You also mention a port that is probably meant for connecting from outside the cluster; from inside the cluster you can use <strong>9092</strong> with the service whose name includes the word headless.</p> <p>Example:</p> <pre><code>mykafka-service-headless.default:9092 </code></pre> <p>or</p> <pre><code>mykafka-service-headless.default </code></pre> <p>without the port, as 9092 is the default one.</p> <p><strong>You will not need port forwarding</strong>, but just in case, you may execute the <code>port-forward</code> command on 9092 for the Kafka pods in question.</p>
Useme Alehosaini
<p>I have 10 Kubernetes nodes (consider them as VMs) which have between 7 and 14 allocatable CPU cores which can be requested by Kubernetes pods. Therefore I'd like to show cluster CPU usage.</p> <p>This is my current query:</p> <pre><code>sum(kube_pod_container_resource_requests_cpu_cores{node=~"$node"}) / sum(kube_node_status_allocatable_cpu_cores{node=~"$node"}) </code></pre> <p>This query shows strange results, for example over 400%.</p> <p>I would like to add a filter to only calculate this for nodes that have running pods, since there might be some old node definitions which are not in use. I have inherited this setup, so it is not that easy for me to wrap my head around it.</p> <p>Any suggestions for a query that I can try?</p>
Bob
<p>Your current query is summing up CPU utilization of each node, so it might show invalid data.</p> <p>You can check CPU utilization of all pods in the cluster by running:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m])) </code></pre> <p>If you want to check CPU usage of each running pod you can use:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m])) by (pod_name) </code></pre>
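<p>To stay closer to the original requests/allocatable ratio while ignoring node entries that no longer run anything, one option (a sketch; exact metric names depend on your kube-state-metrics version) is to keep only the allocatable series for nodes that still appear in <code>kube_pod_info</code>:</p> <pre><code>sum(kube_pod_container_resource_requests_cpu_cores{node=~"$node"})
/
sum(kube_node_status_allocatable_cpu_cores{node=~"$node"} and on(node) kube_pod_info)
</code></pre>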
kool
<p>I would like to start websocket connections (ws://whatever) in OpenShift but somehow they always end with ERR_CONNECTION_ABORTED immediately (new WebSocket('ws://whatever')).</p> <p>First I thought that the problem was in our application, but I created a minimal example and I got the same result.</p> <p>First I created a pod and started this minimal Python websocket server.</p> <pre><code>import asyncio import websockets async def hello(websocket, path): name = await websocket.recv() print(f&quot;&lt; {name}&quot;) greeting = f&quot;Hello {name}!&quot; await websocket.send(greeting) print(f&quot;&gt; {greeting}&quot;) start_server = websockets.serve(hello, &quot;0.0.0.0&quot;, 8000) asyncio.get_event_loop().run_until_complete(start_server) asyncio.get_event_loop().run_forever() </code></pre> <p>Then I created a service (TCP 8000) and a route too, and I got the same result.</p> <p>I also tried to use a different port or different targets (e.g.: /ws), without success. This minimal script was able to respond to a simple http request, but not to the websocket connection.</p> <p>Do you have any idea what could be the problem? (According to the documentation these connections should work as they are.) Should I try to play with some routing environment variables, or are there any limitations which are not mentioned in the documentation?</p>
kfr
<p>Posting Károly Frendrich's answer as community wiki:</p> <blockquote> <p>Finally we realized that the TLS termination is required to be set.</p> </blockquote> <p>It can be done using <a href="https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html#route-types" rel="nofollow noreferrer">Secured Routes</a>.</p>
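<p>For reference, a minimal sketch of such a route with edge TLS termination (the service name is a placeholder; the port matches the example in the question):</p> <pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: websocket-route
spec:
  to:
    kind: Service
    name: my-websocket-service   # placeholder
  port:
    targetPort: 8000
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
</code></pre> <p>With edge termination the client connects with <code>wss://</code> and the router forwards plain traffic to the pod.</p>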
kool
<p>I'm using Grafana plus Prometheus queries to create dashboards in Grafana for Kubernetes. I take the names of the nodes (3 in this case) in a variable and then I pass these values to another query to extract the IPs of the machines. The values extracted are correct. I have the multi-value option enabled.</p> <p>The problem comes with the query <code>sum(rate(container_cpu_usage_seconds_total{id="/", instance=~"$ip_test:10250"}[1m]))</code> and more than one IP, because it only takes one of them. In another query it works, but I think that is because that query does not have the <code>:10250</code> after the variable.</p> <p>My question: do you know any way to concatenate all the ip:port pairs? E.g.: X.X.X.X:pppp|X.X.X.X:pppp </p>
arturoxv
<p>Try it like this:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total{id=&quot;/&quot;, instance=~&quot;($ip_test):10250&quot;}[1m])) </code></pre>
齐引飞
<p>I have a google cloud composer environment. In my DAG I want to create a pod in GKE. When I come to deploy a simple app based on a docker container that doesn't need any volume configuration or secrets, everything works fine, for example:</p> <pre><code>kubernetes_max = GKEStartPodOperator( # The ID specified for the task. task_id=&quot;python-simple-app&quot;, # Name of task you want to run, used to generate Pod ID. name=&quot;python-demo-app&quot;, project_id=PROJECT_ID, location=CLUSTER_REGION, cluster_name=CLUSTER_NAME, # Entrypoint of the container, if not specified the Docker container's # entrypoint is used. The cmds parameter is templated. cmds=[&quot;python&quot;, &quot;app.py&quot;], namespace=&quot;production&quot;, image=&quot;gcr.io/path/to/lab-python-job:latest&quot;, ) </code></pre> <p>But when I have an application that need to access to my GKE cluster volumes, I need to configure volumes in my pod. The issue is the documentation is not clear regarding this. The only example that I ever <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html" rel="nofollow noreferrer">foud</a> is this:</p> <pre><code>volume = k8s.V1Volume( name='test-volume', persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'), ) </code></pre> <p>While the volumes in the my manifest file (I use it to deploy my app from local) looks like this:</p> <pre><code>volumes: - name: volume-prod secret: secretName: volume-prod items: - key: config path: config.json - key: another_config path: another_config.conf - key: random-ca path: random-ca.pem </code></pre> <p>So when I compare how both volumes looks like in the console (when I manually deploy the manifest file that successfully run, and when I deploy the pod using clod composer that fails):</p> <ul> <li><p>The successful run - Manifest file:</p> <p>volume-prod<br> Name: volume-prod<br> Type: secret<br> Source volume identifier: volume-prod</p> </li> <li><p>The failed run - Composer <code>GKEStartPodOperator</code>:</p> <p>volume-prod<br> Name: volume-prod<br> Type: emptyDir<br> Source volume identifier: Node's default medium</p> </li> </ul> <p>How I can configure my pod from cloud composer in a way it can read the volume of my cluster?</p>
Idhem
<p>The <code>KubernetesPodOperator</code>/<code>GKEStartOperator</code> is just a wrapper around the python Kubernetes sdk - I agree that it isn't well documented in the Airflow/Cloud Composer documentation but the Python SDK for Kubernetes itself is well documented.</p> <p>Start here with the kubernetes python sdk documentation: <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md</a></p> <p>You'll notice that the arguments the <code>KubernetesPodOperator</code>/<code>GKEStartOperator</code> take match this spec. If you dig into the source code of the operators you'll see that the operator is nothing more than a builder that creates a <code>kubernetes.client.models.V1Pod</code> object and uses the API to deploy the pod.</p> <p>The <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator" rel="nofollow noreferrer">operator</a> takes a <code>volumes</code> parameter which should be of type <code>List[V1Volume]</code>, where the documentation for <code>V1Volume</code> is <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Volume.md" rel="nofollow noreferrer">here</a>.</p> <p>So in your case you would need to provide:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes.client import models as k8s kubernetes_max = GKEStartPodOperator( # The ID specified for the task. task_id=&quot;python-simple-app&quot;, # Name of task you want to run, used to generate Pod ID. name=&quot;python-demo-app&quot;, project_id=PROJECT_ID, location=CLUSTER_REGION, cluster_name=CLUSTER_NAME, # Entrypoint of the container, if not specified the Docker container's # entrypoint is used. The cmds parameter is templated. cmds=[&quot;python&quot;, &quot;app.py&quot;], namespace=&quot;production&quot;, image=&quot;gcr.io/path/to/lab-python-job:latest&quot;, volumes=[ k8s.V1Volume( name=&quot;volume-prod&quot;, secret=k8s.V1SecretVolumeSource( secret_name=&quot;volume-prod&quot;, items=[ k8s.V1KeyToPath(key=&quot;config&quot;, path=&quot;config.json&quot;), k8s.V1KeyToPath(key=&quot;another_config&quot;, path=&quot;another_config.conf&quot;), k8s.V1KeyToPath(key=&quot;random-ca&quot;, path=&quot;random-ca.pem&quot;), ], ) ) ] ) </code></pre> <p>Alternatively, you can provide your manifest to the <code>pod_template_file</code> argument in <code>GKEStartPodOperator</code> - this will need to be available to the workers inside airflow.</p> <p>There are 3 ways to create pods in Airflow using this Operator:</p> <ol> <li>Use the arguments of the operator to specify what you need and have the operator build the <code>V1Pod</code> for you.</li> <li>Provide a manifest by passing in <code>pod_template_file</code> argument.</li> <li>Use the Kubernetes sdk to create a <code>V1Pod</code> object yourself and pass this to the <code>full_pod_spec</code> argument.</li> </ol>
Daniel T
<p>I'm overriding the DNS policy of a pod since I'm facing an <a href="https://stackoverflow.com/questions/63719383/pod-unable-to-install-packagesapt-get-update-or-apt-get-install">issue</a> with the default <code>/etc/resolv.conf</code> of the pod. Another issue is that the pod is not able to connect to the SMTP server due to the default <code>/etc/resolv.conf</code> of the pod.</p> <p>Hence the dnsPolicy that I want to apply to the deployment/pod is:</p> <pre><code> dnsConfig: nameservers: - &lt;ip-of-the-node&gt; options: - name: ndots value: '5' searches: - monitoring.svc.cluster.local - svc.cluster.local - cluster.local dnsPolicy: None </code></pre> <p>In the above configuration the <code>nameservers</code> entry needs to be the IP of the node where the pod gets deployed. Since I have three worker nodes, I cannot hard-code the value to a specific worker node's IP. I would prefer not to pin the pod to a particular node, since if the resources on that node are not sufficient for the pod, it might remain in the pending state.</p> <p>How can I make the <code>nameservers</code> field get the value of the IP address of the node where the pod gets deployed?</p> <p>Or is it possible to update the <code>nameservers</code> with some kind of generic argument so that the pod will be able to connect to the SMTP server?</p>
Rakesh Kotian
<p><code>dnsConfig</code> <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config" rel="nofollow noreferrer">supports up to 3 IP addresses</a>, so theoretically you could hard-code them in the <code>nameservers</code> field. However, as a workaround you can expose the node IP address as an environment variable and use it inside the pod. Example:</p> <pre><code>spec: containers: - name: envar-demo-container command: [&quot;/bin/sh&quot;] args: [&quot;-c&quot;, &quot;echo nameserver $NODE_IP &gt;&gt; /etc/resolv.conf&quot;] image: nginx env: - name: NODE_IP valueFrom: fieldRef: fieldPath: status.hostIP </code></pre> <p><code>fieldPath: status.hostIP</code> takes the IP address of the node that the pod is deployed on and saves it as an environment variable. Then it is appended to <code>/etc/resolv.conf</code> as a <code>nameserver</code> entry.</p>
kool
<p>I'm following <a href="https://learn.openshift.com/operatorframework/go-operator-podset/" rel="nofollow noreferrer">this tutorial</a> to create my first Custom Resource named PodSet and currently at step 6 of 7 to test my CR.</p> <p>Here is my Operator SDK controller Go code:</p> <pre><code>package controllers import ( &quot;context&quot; &quot;reflect&quot; &quot;github.com/go-logr/logr&quot; &quot;k8s.io/apimachinery/pkg/labels&quot; &quot;k8s.io/apimachinery/pkg/runtime&quot; ctrl &quot;sigs.k8s.io/controller-runtime&quot; &quot;sigs.k8s.io/controller-runtime/pkg/client&quot; &quot;sigs.k8s.io/controller-runtime/pkg/controller/controllerutil&quot; &quot;sigs.k8s.io/controller-runtime/pkg/reconcile&quot; appv1alpha1 &quot;github.com/redhat/podset-operator/api/v1alpha1&quot; corev1 &quot;k8s.io/api/core/v1&quot; &quot;k8s.io/apimachinery/pkg/api/errors&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; ) // PodSetReconciler reconciles a PodSet object type PodSetReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=app.example.com,resources=podsets,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=app.example.com,resources=podsets/status,verbs=get;update;patch // +kubebuilder:rbac:groups=v1,resources=pods,verbs=get;list;watch;create;update;patch;delete // Reconcile is the core logic of controller func (r *PodSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) { _ = context.Background() _ = r.Log.WithValues(&quot;podset&quot;, req.NamespacedName) // Fetch the PodSet instance (the parent of the pods) instance := &amp;appv1alpha1.PodSet{} err := r.Get(context.Background(), req.NamespacedName, instance) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. 
// Return and don't requeue return reconcile.Result{}, nil } // Error reading the object - requeue the request return reconcile.Result{}, err } // List all pods owned by this PodSet instance podSet := instance podList := &amp;corev1.PodList{} labelz := map[string]string{ &quot;app&quot;: podSet.Name, // the metadata.name field from user's CR PodSet YAML file &quot;version&quot;: &quot;v0.1&quot;, } labelSelector := labels.SelectorFromSet(labelz) listOpts := &amp;client.ListOptions{Namespace: podSet.Namespace, LabelSelector: labelSelector} if err = r.List(context.Background(), podList, listOpts); err != nil { return reconcile.Result{}, err } // Count the pods that are pending or running and add them to available array var available []corev1.Pod for _, pod := range podList.Items { if pod.ObjectMeta.DeletionTimestamp != nil { continue } if pod.Status.Phase == corev1.PodRunning || pod.Status.Phase == corev1.PodPending { available = append(available, pod) } } numAvailable := int32(len(available)) availableNames := []string{} for _, pod := range available { availableNames = append(availableNames, pod.ObjectMeta.Name) } // Update the status if necessary status := appv1alpha1.PodSetStatus{ PodNames: availableNames, AvailableReplicas: numAvailable, } if !reflect.DeepEqual(podSet.Status, status) { podSet.Status = status err = r.Status().Update(context.Background(), podSet) if err != nil { r.Log.Error(err, &quot;Failed to update PodSet status&quot;) return reconcile.Result{}, err } } // When the number of pods in the cluster is bigger that what we want, scale down if numAvailable &gt; podSet.Spec.Replicas { r.Log.Info(&quot;Scaling down pods&quot;, &quot;Currently available&quot;, numAvailable, &quot;Required replicas&quot;, podSet.Spec.Replicas) diff := numAvailable - podSet.Spec.Replicas toDeletePods := available[:diff] // Syntax help: https://play.golang.org/p/SHAMCdd12sp for _, toDeletePod := range toDeletePods { err = r.Delete(context.Background(), &amp;toDeletePod) if err != nil { r.Log.Error(err, &quot;Failed to delete pod&quot;, &quot;pod.name&quot;, toDeletePod.Name) return reconcile.Result{}, err } } return reconcile.Result{Requeue: true}, nil } // When the number of pods in the cluster is smaller that what we want, scale up if numAvailable &lt; podSet.Spec.Replicas { r.Log.Info(&quot;Scaling up pods&quot;, &quot;Currently available&quot;, numAvailable, &quot;Required replicas&quot;, podSet.Spec.Replicas) // Define a new Pod object pod := newPodForCR(podSet) // Set PodSet instance as the owner of the Pod if err := controllerutil.SetControllerReference(podSet, pod, r.Scheme); err != nil { return reconcile.Result{}, err } err = r.Create(context.Background(), pod) if err != nil { r.Log.Error(err, &quot;Failed to create pod&quot;, &quot;pod.name&quot;, pod.Name) return reconcile.Result{}, err } return reconcile.Result{Requeue: true}, nil } return ctrl.Result{}, nil } // newPodForCR returns a busybox pod with the same name/namespace as the cr func newPodForCR(cr *appv1alpha1.PodSet) *corev1.Pod { labels := map[string]string{ &quot;app&quot;: cr.Name, // the metadata.name field from user's CR PodSet YAML file &quot;version&quot;: &quot;v0.1&quot;, } return &amp;corev1.Pod{ ObjectMeta: metav1.ObjectMeta{ GenerateName: cr.Name + &quot;-pod&quot;, Namespace: cr.Namespace, Labels: labels, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Name: &quot;busybox&quot;, Image: &quot;busybox&quot;, Command: []string{&quot;sleep&quot;, &quot;3600&quot;}, }, }, }, } } // SetupWithManager defines how 
the controller will watch for resources func (r *PodSetReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&amp;appv1alpha1.PodSet{}). Owns(&amp;corev1.Pod{}). Complete(r) } </code></pre> <p>When I apply below YAML file, I saw strange behaviour of the pods. They were struggling in the first few seconds — some of them get up and running for a while and quickly get into terminating state. When I leave them untouched for few more seconds, the CR reached the desired state just fine.</p> <pre><code>apiVersion: app.example.com/v1alpha1 kind: PodSet metadata: name: podset-sample spec: replicas: 5 </code></pre> <p>I captured the deployment scene above in <a href="https://youtu.be/bK5_2uiDuhE" rel="nofollow noreferrer">this video</a>. And <a href="https://pastebin.com/raw/md0QYx23" rel="nofollow noreferrer">here</a> are the full logs from my local terminal running <code>WATCH_NAMESPACE=podset-operator make run</code> command (sorry, I have to use Pastebin because SO didn't allow me to paste the full logs here because they are too long).</p> <p>So, my questions here are:</p> <ol> <li>What does the <code>Failed to update PodSet status {&quot;error&quot;: &quot;Operation cannot be fulfilled on podsets.app.example.com \&quot;podset-sample\&quot;: the object has been modified; please apply your changes to the latest version and try again&quot;}</code> actually means?</li> <li>Why this happened?</li> <li>What can I do to get rid of these errors?</li> </ol>
Zulhilmi Zainudin
<p>You need to get the object right before updating it; the error happens because you are holding an old version of the object when you try to update.</p> <p>EDIT:</p> <pre class="lang-golang prettyprint-override"><code>podSet := &amp;appv1alpha1.PodSet{} err := r.Get(context.Background(), req.NamespacedName, podSet) if err != nil { return reconcile.Result{}, err } // Update the status if necessary status := appv1alpha1.PodSetStatus{ PodNames: availableNames, AvailableReplicas: numAvailable, } if !reflect.DeepEqual(podSet.Status, status) { podSet.Status = status err = r.Status().Update(context.Background(), podSet) if err != nil { r.Log.Error(err, &quot;Failed to update PodSet status&quot;) return reconcile.Result{}, err } } </code></pre> <p>You have to fetch the latest version of the object from Kubernetes immediately before the update, so that the resourceVersion you send back matches the one currently stored in the cluster.</p>
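<p>If the status update still races with other writers (for example the pod watch triggering another reconcile), a common pattern is to wrap the read-modify-write cycle in a conflict retry. A sketch using <code>k8s.io/client-go/util/retry</code>, assuming the same variables as in the question:</p> <pre class="lang-golang prettyprint-override"><code>import (
    &quot;context&quot;

    &quot;k8s.io/client-go/util/retry&quot;
)

// retry the whole read-modify-write cycle whenever the API server reports a conflict
err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    latest := &amp;appv1alpha1.PodSet{}
    if err := r.Get(context.Background(), req.NamespacedName, latest); err != nil {
        return err
    }
    latest.Status = status
    return r.Status().Update(context.Background(), latest)
})
if err != nil {
    return reconcile.Result{}, err
}
</code></pre>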
Alejandro Jesus Nuñez Madrazo
<p>I was reviewing some material related to Kubernetes security and I found it is possible to expose the Kubernetes API server so that it is accessible from the outside world. My question is: what would be the benefit of doing something vulnerable like this? Does anyone know of business cases, for example, that would make you do that? Thanks</p>
Muhammad Badawy
<p>Simply put, you can use the API endpoints to deploy any service from your local machine; of course, you must implement security on the API. I have created an application locally which builds using the Docker API and deploys using the Kubernetes API. Don't forget about securing your APIs.</p>
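<p>As an illustration of that kind of remote access, a rough sketch of calling an exposed API server with a service-account token (the address, the service account name and the RBAC it needs are all assumptions, and <code>kubectl create token</code> requires kubectl 1.24+):</p> <pre><code>APISERVER=https://203.0.113.10:6443          # placeholder address
TOKEN=$(kubectl create token deploy-bot)     # hypothetical service account with RBAC to list pods
curl --cacert ca.crt \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  &quot;$APISERVER/api/v1/namespaces/default/pods&quot;
</code></pre>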
hakik ayoub
<p>Can I use the <a href="https://github.com/coredns/policy#domain-name-policy" rel="nofollow noreferrer">coredns domain name policy</a> to restrict or control egress calls?<br/> For example, I want to allow <code>google.com</code> and block <code>github.com</code>. What implementation steps are required to do this if I already have a Kubernetes setup ready with the default <code>coredns</code> pod running in it?</p>
gaurav sinha
<p>I have done this recently using the <a href="https://github.com/monzo/egress-operator" rel="nofollow noreferrer">egress-operator</a>. You have to configure it with a CoreDNS image that you build yourself (follow the README), and it will route your egress traffic through the operator.</p> <p>On the operator's <a href="https://github.com/monzo/egress-operator#usage" rel="nofollow noreferrer">external service</a> you can whitelist the domains.</p> <p><strong>Note:</strong> Try to use a local Docker registry first instead of a cloud registry to avoid push-pull delays.</p>
solveit
<p>I need to implement rate limiting (based on URL and path) on applications deployed on a Kubernetes cluster (EKS).</p> <p>I'm looking for a managed way that involves the least scripting and provides an interface through which to manage rate limits for different applications.</p> <p>The system should be able to work accurately at the enterprise level.</p> <p>Can somebody please suggest the path/tool/framework to follow in order to achieve this?</p>
Talha Tariq
<p><code>Rate-limiting</code> is available in NGINX <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a> by using <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="noreferrer">correct annotations</a>. Available options are:</p> <ol> <li><code>nginx.ingress.kubernetes.io/limit-connections</code>: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.</li> <li><code>nginx.ingress.kubernetes.io/limit-rps</code>: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</li> <li><code>nginx.ingress.kubernetes.io/limit-rpm</code>: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</li> <li><code>nginx.ingress.kubernetes.io/limit-burst-multiplier</code>: multiplier of the limit rate for burst size. The default burst multiplier is 5, this annotation override the default multiplier. When clients exceed this limit, limit-req-status-code default: 503 is returned.</li> <li><code>nginx.ingress.kubernetes.io/limit-rate-after</code>: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffering" rel="noreferrer">proxy-buffering enabled</a>.</li> <li><code>nginx.ingress.kubernetes.io/limit-rate</code>: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffering" rel="noreferrer">proxy-buffering enabled</a>.</li> <li><code>nginx.ingress.kubernetes.io/limit-whitelist</code>: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.</li> </ol> <p>You can read more about NGINX rate limiting <a href="https://www.freecodecamp.org/news/nginx-rate-limiting-in-a-nutshell-128fe9e0126c/" rel="noreferrer">here</a> and for NGINX rate limiting in kubernetes <a href="https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57" rel="noreferrer">in this guide</a>.</p>
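<p>To make this concrete, a minimal sketch of an Ingress applying per-IP rate limiting to one path (host, path and service names are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: &quot;10&quot;
    nginx.ingress.kubernetes.io/limit-burst-multiplier: &quot;3&quot;
    nginx.ingress.kubernetes.io/limit-whitelist: &quot;10.0.0.0/8&quot;
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders-svc
            port:
              number: 80
</code></pre> <p>Since the annotations apply to the whole Ingress resource, paths that need different limits can be split into separate Ingress objects for the same host.</p>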
kool
<p>I was having K3s cluster with below pods running:</p> <pre><code>kube-system pod/calico-node-xxxx kube-system pod/calico-kube-controllers-xxxxxx kube-system pod/metrics-server-xxxxx kube-system pod/local-path-provisioner-xxxxx kube-system pod/coredns-xxxxx xyz-system pod/some-app-xxx xyz-system pod/some-app-db-xxx </code></pre> <p>I want to stop all of the K3s pods &amp; reset the containerd state, so I used <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="noreferrer">/usr/local/bin/k3s-killall.sh</a> script and all pods got stopped (at least I was not able to see anything in <code>watch kubectl get all -A</code> except <code>The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?</code> message)</p> <p>Can someone tell me how to start the k3s server up because now after firing <code>kubectl get all -A</code> I am getting message <code>The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?</code></p> <p><strong>PS:</strong></p> <ul> <li>When I ran <code>k3s server</code> command, for fraction of seconds I can see the same above pods(with same pod ids) that I mentioned while the command is running. After few seconds, command get exited and again same message <code>The connection to the...</code> start displaying.</li> </ul> <p>Does this means that <code>k3s-killall.sh</code> have not deleted my pods as it is showing the same pods with same ids ( like <code>pod/some-app-xxx</code> ) ?</p>
Thor
<ol> <li><p>I think you need to restart K3s via systemd if you want your cluster back after the kill. Try: <br/><code>sudo systemctl restart k3s</code> This is supported by the installation script for systemd and openrc. Refer to the <a href="https://www.rancher.co.jp/docs/k3s/latest/en/running/" rel="nofollow noreferrer">Rancher doc</a>.</p> </li> <li><p>The pod-xxx ids will remain the same because k3s-killall.sh doesn't uninstall k3s (you can verify this: after the k3s-killall script, <code>k3s -v</code> still returns output) and it only restarts the pods with the same image. The <code>Restarts</code> column will increase for all pods.</p> </li> </ol>
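<p>A few commands that may help to confirm the cluster came back (assuming the systemd install):</p> <pre><code>sudo systemctl status k3s
# follow the startup logs if something looks wrong
sudo journalctl -u k3s -f
# once the API answers again
kubectl get nodes
kubectl get pods -A
</code></pre>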
solveit
<p>I have a tool which is now available to be deployed on Kubernetes. A known person made a document on GitHub where he asked to run two PowerShell files. <a href="https://github.com/tonikautto/qse-kubernetes-minikube" rel="nofollow noreferrer">source</a></p> <p>When I run the first file <code>0-install-tools.ps1</code>, it installs tools like <code>virtualbox</code>, <code>minikube</code>, <code>helm</code> and <code>kubernetes-cli</code>.</p> <p>When I run the second file <code>1-Deploy-Minikube.ps1</code>, it fails on the last step, where it is executing:</p> <pre><code>helm install -n qliksense qlik/qliksense -f values.yaml </code></pre> <p>The person who created the GitHub doc has run it successfully, but I am not sure why it is failing on my Windows 10 machine.</p> <p>This is the error I am getting:</p> <blockquote> <p>error validating data: unknown object type "nil" in Secret.data.redis-password</p> </blockquote> <p>Can you please help me understand why it is failing on my side, or whether there is some problem with the versions these PowerShell files are installing?</p> <p>Hardware: 3 cores and 8 GB RAM.</p> <p>I am hoping for a positive response from your team.</p> <p>Thanks, Rohit</p> <p>I have rerun the PowerShell files several times.</p> <p>The expected result is described in the article linked above.</p>
rkumar1609
<p>Set the empty properties as <code>''</code> (an explicit empty string) instead of nil, null or "".</p>
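<p>For illustration, the difference in a values file (a sketch; the key name is just an example): a key left without a value is parsed as YAML null, which is likely what ends up as the unknown &quot;nil&quot; object in <code>Secret.data</code>, while a quoted empty string is rendered as a real string and passes validation:</p> <pre><code># parsed as null - likely the source of the 'unknown object type nil' error
redis-password:

# parsed as an explicit empty string - validates
redis-password: ''
</code></pre>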
conan conan
<p>I am trying to run the following command on a pod from my local machine.</p> <pre><code>kubectl -n myns exec mypod -- /bin/bash -c &quot;err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2&gt;&amp;1) || ( export ret=$?; [[ $err == *&quot;No such file or directory&quot;* ]] || exit &quot;$ret&quot; )&quot; </code></pre> <p>The command stores the output of the command in a variable and if the output contains a &quot;No such file or directory&quot; it ignores it else exits with the return code.</p> <p>The command itself: <code>err=$(tar -cvzf /tmp/bludr-logs_aj.tgz ${all_files} 2&gt;&amp;1) || ( export ret=$?; [[ $err == *&quot;No such file or directory&quot;* ]] || exit &quot;$ret&quot; )</code> runs fine when I run this manually inside the pod. But when I try to run this remotely using exec it gives it won't accept <code>*</code>. Here is what I get:</p> <pre><code>root:~# kubectl -n myns exec mypod -- /bin/bash -c &quot;err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2&gt;&amp;1) || ( export ret=$?; [[ $err == *&quot;No such file or directory&quot;* ]] || exit &quot;$ret&quot; )&quot; such: Cowardly: command not found such: -c: line 1: conditional binary operator expected such: -c: line 1: syntax error near `*No' such: -c: line 1: `Try 'tar --help' or 'tar --usage' for more information. || ( export ret=2; [[ == *No' command terminated with exit code 1 </code></pre> <p>I tried to replace * with <code>&amp;2A</code> but that did not work.</p>
codec
<p>Your command contains nested double quotes, so use single quotes around the script instead. Single quotes also stop your local shell from expanding <code>$(tar ...)</code>, <code>${all_files}</code> and <code>$?</code> before the command is ever sent to the pod; that local expansion is why tar's own error text ended up inside the command line in your output:</p> <pre><code>kubectl -n myns exec mypod -- /bin/bash -c 'err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2&gt;&amp;1) || ( export ret=$?; [[ $err == *&quot;No such file or directory&quot;* ]] || exit &quot;$ret&quot; )' </code></pre>
mar0ne
<p>After getting my k8s cluster up and going I faithfully deployed the following WebUI dashboard using the command:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml </code></pre> <p>When I try to access it I get the following error:</p> <pre><code>Metric client health check failed: an error on the server (&quot;unknown&quot;) has prevented the request from succeeding (get services dashboard-metrics-scraper) </code></pre> <p>If I get all the services I get:</p> <pre><code>k get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 8d kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 8d kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.0.65 &lt;none&gt; 8000/TCP 6m10s kubernetes-dashboard kubernetes-dashboard ClusterIP 10.96.0.173 &lt;none&gt; 443/TCP 6m10s </code></pre> <p>Can someone shed some light? what am I missing?</p> <p>More Info: In the dashboard yaml I found these roles:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard rules: - apiGroups: [&quot;&quot;] resources: [&quot;secrets&quot;] resourceNames: [&quot;kubernetes-dashboard-key-holder&quot;, &quot;kubernetes-dashboard-certs&quot;, &quot;kubernetes-dashboard-csrf&quot;] verbs: [&quot;get&quot;, &quot;update&quot;, &quot;delete&quot;] map. - apiGroups: [&quot;&quot;] resources: [&quot;configmaps&quot;] resourceNames: [&quot;kubernetes-dashboard-settings&quot;] verbs: [&quot;get&quot;, &quot;update&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;services&quot;] resourceNames: [&quot;heapster&quot;, &quot;dashboard-metrics-scraper&quot;] verbs: [&quot;proxy&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;services/proxy&quot;] resourceNames: [&quot;heapster&quot;, &quot;http:heapster:&quot;, &quot;https:heapster:&quot;, &quot;dashboard-metrics-scraper&quot;, &quot;http:dashboard-metrics-scraper&quot;] verbs: [&quot;get&quot;] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard rules: - apiGroups: [&quot;metrics.k8s.io&quot;] resources: [&quot;pods&quot;, &quot;nodes&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard </code></pre> <p>Looks like the kubernetes-dashboard user has access to the metrics service I might be wrong</p>
tinashe.chipomho
<p>It looks like the kubernetes-dashboard's serviceaccount doesn't have access to all kubernetes resources (in particular, it can't access the metrics scraper service, <code>dashboard-metrics-scraper</code>).</p>
<p>To fix this you should give the dashboard's ServiceAccount more permissions.</p>
<p>Here's a binding adapted from another similar post (<strong>be careful since it will give admin privileges to the dashboard, and whoever uses it will be able to destroy/create new or existing resources on your kubernetes cluster</strong>):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
</code></pre>
<p>If you don't have a cluster-admin ServiceAccount or ClusterRole to bind to, create them following these templates.</p>
<p>ServiceAccount:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: &quot;true&quot;
    addonmanager.kubernetes.io/mode: Reconcile
</code></pre>
<p>Admin ClusterRole:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
rules:
- apiGroups: [&quot;*&quot;]
  resources: [&quot;*&quot;]
  verbs: [&quot;*&quot;]
- nonResourceURLs: [&quot;*&quot;]
  verbs: [&quot;*&quot;]
</code></pre>
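<p>Applying the binding is then just (the file name here is a placeholder):</p>
<pre><code>kubectl apply -f dashboard-admin-binding.yaml
</code></pre>
<p>RBAC changes are evaluated by the API server on every request, so the dashboard should be able to reach the metrics scraper without being restarted.</p>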
shaki
<p>I'm looking at one of our .NET core apps, running on linux on docker/kubernetes. I'm just a bit confused as to why we have so many child processes:</p> <pre><code>root@task-executor-85c5557b77-xnrdr:/# pstree 1 -a sh /start-alloy-engine.sh -x task-executor `-dotnet AlloyTaskExecutor.dll `-76*[{dotnet}] </code></pre> <pre><code>root@task-executor-85c5557b77-xnrdr:/# pstree 1 -a -p -g sh,1,1 /start-alloy-engine.sh -x task-executor `-dotnet,535,1 AlloyTaskExecutor.dll |-{dotnet},536,1 |-{dotnet},537,1 .... </code></pre> <pre><code>htop... PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 535 root 20 0 8144M 220M 53612 S 143. 5.6 53:12.41 dotnet AlloyTaskExecutor.dll 955 root 20 0 8144M 220M 53612 S 37.2 5.6 0:03.81 dotnet AlloyTaskExecutor.dll 941 root 20 0 8144M 220M 53612 R 37.2 5.6 0:21.58 dotnet AlloyTaskExecutor.dll 614 root 20 0 8144M 220M 53612 S 27.2 5.6 10:32.30 dotnet AlloyTaskExecutor.dll 599 root 20 0 8144M 220M 53612 S 17.9 5.6 6:53.90 dotnet AlloyTaskExecutor.dll 655 root 20 0 8144M 220M 53612 S 12.0 5.6 3:02.22 dotnet AlloyTaskExecutor.dll 649 root 20 0 8144M 220M 53612 S 6.6 5.6 3:05.03 dotnet AlloyTaskExecutor.dll 660 root 20 0 8144M 220M 53612 S 5.3 5.6 3:02.42 dotnet AlloyTaskExecutor.dll 946 root 20 0 8144M 220M 53612 S 0.0 5.6 0:13.86 dotnet AlloyTaskExecutor.dll 960 root 20 0 8144M 220M 53612 S 0.0 5.6 0:02.78 dotnet AlloyTaskExecutor.dll 541 root 20 0 8144M 220M 53612 S 0.0 5.6 0:00.47 dotnet AlloyTaskExecutor.dll 1 root 20 0 2388 408 296 S 0.0 0.0 0:00.11 /bin/sh /start-alloy-engine.sh -x task-executor 536 root 20 0 8144M 220M 53612 S 0.0 5.6 0:00.00 dotnet AlloyTaskExecutor.dll 537 root 20 0 8144M 220M 53612 S 0.0 5.6 0:00.00 dotnet AlloyTaskExecutor.dll .... </code></pre> <p>There are a lot of <code>dotnet AlloyTaskExecutor.dll</code> processes that are all children of the one I created. This is happening on all our apps when run on linux, not seeing it on windows.</p> <p>I'm looking into resource usage of this app here, so it's a bit confusing.</p> <h3>Is this some kind of way multitasking is done on .net core linux?</h3> <p>I may want to do some profiling - which process do I even attach to? Just PID 535?</p> <h3>Is it something we've done?</h3> <p>We have no Process.Start in our codebase, so it may be a 3rd party lib doing this.</p> <p>They are all trying to run our app with default args? Is there a different way to change the entrypoint I'm not seeing here?</p> <p>Any advice to find the culprit? I've thought of changing the process security to deny process spawning but I don't know how to do this on linux.</p> <p>Edit: <code>ps aux</code> and <code>top</code> are only showing one process, not sure why that is either.</p> <p>Edit: Are these forked processes maybe? If it helps the app is running hangfire with a redis storage provider.</p>
Dave Higgins
<p>This was answered over on github: <a href="https://github.com/dotnet/runtime/discussions/45835#discussioncomment-178938" rel="nofollow noreferrer">https://github.com/dotnet/runtime/discussions/45835#discussioncomment-178938</a></p> <p>TLDR: The line between processes and threads is slightly blurrier on linux - those other pids are actually threads.</p>
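<p>You can confirm this from inside the container with standard procps tools (a sketch, assuming <code>ps</code> and <code>/proc</code> are available in the image; 535 is the main PID from the output above):</p>
<pre><code># thread count of the main process
grep Threads /proc/535/status

# list the individual threads (lightweight processes) belonging to PID 535
ps -T -p 535
</code></pre>
<p>This is also why <code>ps aux</code> showed only one entry: it lists processes, not threads, while htop displays userland threads by default.</p>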
Dave Higgins
<p>Probably a noob K8s networking question. When a pod is talking to a service outside the Kubernetes cluster (e.g. on the internet), what source IP does that service see? I don't think it will be the pod IP as-is, because NAT is involved? Is there some documentation around this topic?</p>
Always_Beginner
<p>You can find the answer to your question in <a href="https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/#kubernetes-networking-basics" rel="nofollow noreferrer">the documentation</a>:</p> <blockquote> <p>For the traffic that goes from pod to external addresses, Kubernetes simply uses <a href="https://en.wikipedia.org/wiki/Network_address_translation#SNAT" rel="nofollow noreferrer">SNAT</a>. What it does is replace the pod’s internal source IP:port with the host’s IP:port. When the return packet comes back to the host, it rewrites the pod’s IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, who doesn’t know the address translation at all.</p> </blockquote>
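<p>A quick way to see this in practice (a sketch; <code>mypod</code> is a placeholder, and it assumes the pod image has <code>curl</code> plus outbound internet access, with ifconfig.me used only as an example echo service):</p>
<pre><code># the pod's internal IP
kubectl get pod mypod -o jsonpath='{.status.podIP}'

# the source address an external server reports back -- typically the node's IP
# (or whatever NAT gateway sits in front of the node) rather than the pod IP
kubectl exec mypod -- curl -s ifconfig.me
</code></pre>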
kool
<p>I'm deploying filebeat in kubernetes using the k8s manifests from here: <a href="https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml</a></p> <p>The filebeat template is loaded into elasticsearch, but the mappings for nginx module are incomplete:</p> <pre><code>"nginx" : { "properties" : { "access" : { "properties" : { "geoip" : { "type" : "object" }, "user_agent" : { "type" : "object" } } }, "error" : { "properties" : { "connection_id" : { "type" : "long" } } } } }, </code></pre> <p>Most of the nginx properties defined in the fields.yaml are aliases and none of the properties defined as an alias are getting their way into the filebeat template.</p> <p>Is there something I'm missing as part of the filebeat configuration ?</p> <p>I also tried with my custom fields.yaml where I replaced the aliases with their concrete definition and the elasticsearh loaded mapping looks good.</p>
Laurentiu Soica
<p>As pointed out by Elastic's Marcin Tojek in Elastic community thread &quot;<a href="https://discuss.elastic.co/t/filebeat-versions-from-7-0-7-8-fail-to-create-alias-field-mappings-for-majority-of-modules/242874/" rel="nofollow noreferrer">Filebeat Filebeat versions from 7.0 - 7.8 fail to create alias field mappings for majority of modules</a>&quot; the Beats Platform Reference 7.8, chapter Upgrade, section <a href="https://www.elastic.co/guide/en/beats/libbeat/current/upgrading-6-to-7.html" rel="nofollow noreferrer">Upgrade from 6.x to 7.x</a> states the following:</p> <blockquote> <p>Starting with 7.0, the fields exported by Beats conform to the <a href="https://www.elastic.co/guide/en/ecs/1.5/index.html" rel="nofollow noreferrer">Elastic Common Schema (ECS)</a>. Many of the exported fields have been renamed. See <a href="https://www.elastic.co/guide/en/beats/libbeat/7.8/breaking-changes-7.0.html" rel="nofollow noreferrer">Breaking changes in 7.0</a> for the full list of changed names.</p> <p>To help you transition to the new fields, we provide a compatibility layer in the form of ECS-compatible field aliases. To use the aliases, set the following option in the Beat’s configuration file before you upgrade the Elasticsearch index template to 7.0.</p> <p><code>migration.6_to_7.enabled: true</code> The field aliases let you use 6.x dashboards and visualizations with indices created by Beats 7.0 or later. The aliases do <strong>not</strong> work with saved searches or with API calls that manipulate documents directly.</p> </blockquote> <p>Note that as of 2020-07-29 neither the Filebeats Reference 7.8 chapter <a href="https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields.html" rel="nofollow noreferrer">Exported fields</a> mentions this nor do the Filebeat modules' sections which still list all of the aliases which do not get created by default anymore on fresh installations unless explicitly enabling &quot;<code>migration.6_to_7.enabled: true</code>&quot;.</p>
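<p>With the Kubernetes manifest linked in the question, the option belongs in the <code>filebeat.yml</code> section of the ConfigMap, next to the existing settings (a sketch of just the relevant line; per the quote above, it must be in place before the index template is loaded into Elasticsearch):</p>
<pre><code># filebeat.yml excerpt -- re-enable the 6.x-to-7.x ECS field aliases in the template
migration.6_to_7.enabled: true
</code></pre>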
b0le
<p>I developed a k8s Operator. After I deploy the first Operator instance in the first namespace, it works well. Then I deploy a second instance of the Operator in a second namespace, but the second controller receives requests whose namespace is still the first namespace, while the expected namespace should be the second one.</p>
<p>Please see the following code: when I work with the second operator in the second namespace, the request's namespace is still the first namespace.</p>
<pre><code>func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues(&quot;Anexample&quot;, request.NamespacedName)
    instance := &amp;v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info(&quot;Anexample resource not found. Ignoring since object must be deleted.&quot;)
            return reconcile.Result{}, nil
        }
        log.Error(err, &quot;Failed to get Anexample.&quot;)
        return reconcile.Result{}, err
    }
</code></pre>
<p>I suspect it might be related to leader election, but I don't understand how it works.</p>
<pre><code>    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Scheme:             scheme,
        MetricsBindAddress: metricsAddr,
        Port:               9443,
        LeaderElection:     enableLeaderElection,
        LeaderElectionID:   &quot;2eeda3e4.com.aaa.bbb.ccc&quot;,
    })
    if err != nil {
        setupLog.Error(err, &quot;unable to start manager&quot;)
        os.Exit(1)
    }
</code></pre>
<p>What is happening in the controller? How can I fix it?</p>
Joe
<p>We are seeing a similar issue: <code>request.NamespacedName</code> from controller-runtime is returning the wrong namespace. It might be a bug in controller-runtime.</p>
Narasimha Karumanchi
<p>I have two kubernetes objects,</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: client-pod labels: component: web spec: containers: - name: client image: stephengrider/multi-client resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 3000 apiVersion: v1 kind: Service metadata: name: client-node-port spec: type: NodePort selector: component: web ports: - port: 3050 targetPort: 3000 nodePort: 31515 </code></pre> <p>and i applied both using <code>kubectl apply -f &lt;file_name&gt;</code> after that, here is the output</p> <pre><code>kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE client-node-port NodePort 10.100.230.224 &lt;none&gt; 3050:31515/TCP 30m </code></pre> <p>the pod output</p> <pre><code>NAME READY STATUS RESTARTS AGE client-pod 1/1 Running 0 28m </code></pre> <p>but when i run <code>minikube ip</code> it returns 127.0.0.1, i'm using minikube with docker driver. </p> <p>After following this issue <a href="https://github.com/kubernetes/minikube/issues/7344" rel="noreferrer">https://github.com/kubernetes/minikube/issues/7344</a>. i got the node-ip using</p> <pre><code>kubectl get node -o json | jq --raw-output \ '.items[0].status.addresses[] | select(.type == "InternalIP") .address ' </code></pre> <p>But even then i am not able to access the service. After more searching i find out</p> <pre><code>minikube service --url client-node-port 🏃 Starting tunnel for service client-node-port. |-----------|------------------|-------------|------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|------------------|-------------|------------------------| | default | client-node-port | | http://127.0.0.1:52694 | |-----------|------------------|-------------|------------------------| http://127.0.0.1:52694 ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. </code></pre> <p>i can access the service using minikube service. </p> <p>Question:</p> <ol> <li>But i want to know why the nodePort exposed didn't work ? </li> <li>why did i do this workaround to access the application.</li> </ol> <p>More Information:</p> <pre><code>minikube version minikube version: v1.10.1 commit: 63ab801ac27e5742ae442ce36dff7877dcccb278 docker version Client: Docker Engine - Community Version: 19.03.8 API version: 1.40 Go version: go1.12.17 Git commit: afacb8b Built: Wed Mar 11 01:21:11 2020 OS/Arch: darwin/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.8 API version: 1.40 (minimum version 1.12) Go version: go1.12.17 Git commit: afacb8b Built: Wed Mar 11 01:29:16 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: v1.2.13 GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 kubectl version Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>if you need more info, i'm willing to provide. 
</p> <p><code>minikube ssh</code></p> <pre><code>docker@minikube:~$ ip -4 a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 4: docker0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0 valid_lft forever preferred_lft forever 945: eth0@if946: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default link-netnsid 0 inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever </code></pre>
Sathish
<p>I had the same problem. The issue is not with the IP <code>127.0.0.1</code>. The issue was that I was calling the port I have defined in the <code>YAML</code> file for <code>NodePort</code>. It looks like <code>minikube</code> will assign a different port for external access.</p> <p>The way I did:</p> <ul> <li>List all services in a nice formatted table: <pre><code>$minikube service list </code></pre> </li> <li>Show IP and external port: <pre><code>$minikube service Type-Your-Service-Name </code></pre> </li> </ul> <p>If you do that <code>minikube</code> will open the browser and will run your app.</p>
Marcio
<p>According to <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p> <blockquote> <p>Note: Pod anti-affinity requires nodes to be consistently labelled, in other words every node in the cluster must have an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior.</p> </blockquote> <p>What exactly will happen when there is no topologyKey label? Pods will be placed anywhere and everything will be working or I should expect some errors.</p>
Porok12
<p>After testing, it seems the scheduler simply ignores the <code>podAntiAffinity</code> rule when the nodes are missing the label specified by <code>topologyKey</code>.</p>
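<p>For reference, a typical anti-affinity block this applies to looks like the following (a sketch; the <code>app: my-app</code> selector is a placeholder). The behaviour above concerns nodes that lack the label used as <code>topologyKey</code>, in this example <code>kubernetes.io/hostname</code>, which the kubelet normally sets on every node:</p>
<pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname
</code></pre>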
Porok12
<p>The GKE UI shows a different status for my job than I get back from <code>kubectl</code>. Note that the GKE UI is the correct status AFAICT and <code>kubectl</code> is wrong. However, I want to programmatically get back the correct status using <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#read_namespaced_job" rel="nofollow noreferrer">read_namespaced_job</a> in the Python API, however that status matches <code>kubectl</code>, which seems to be the wrong status.</p> <p>Where does this status in the GKE UI come from?</p> <p>In GKE UI:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: creationTimestamp: "2020-06-04T08:00:06Z" labels: controller-uid: ee750648-1189-4ed5-9803-054d407aa0b2 job-name: tf-nightly-transformer-translate-func-v2-32-1591257600 name: tf-nightly-transformer-translate-func-v2-32-1591257600 namespace: automated ownerReferences: - apiVersion: batch/v1beta1 blockOwnerDeletion: true controller: true kind: CronJob name: tf-nightly-transformer-translate-func-v2-32 uid: 5b619895-4c08-45e9-8981-fbd95980ff4e resourceVersion: "16109561" selfLink: /apis/batch/v1/namespaces/automated/jobs/tf-nightly-transformer-translate-func-v2-32-1591257600 uid: ee750648-1189-4ed5-9803-054d407aa0b2 ... status: completionTime: "2020-06-04T08:41:41Z" conditions: - lastProbeTime: "2020-06-04T08:41:41Z" lastTransitionTime: "2020-06-04T08:41:41Z" status: "True" type: Complete startTime: "2020-06-04T08:00:06Z" succeeded: 1 </code></pre> <p>From <code>kubectl</code>:</p> <pre><code>zcain@zcain:~$ kubectl get job tf-nightly-transformer-translate-func-v2-32-1591257600 --namespace=automated -o yaml apiVersion: batch/v1 kind: Job metadata: creationTimestamp: "2020-06-04T08:00:27Z" labels: controller-uid: b5d4fb20-df8d-45d8-a8b5-e3b0c40999be job-name: tf-nightly-transformer-translate-func-v2-32-1591257600 name: tf-nightly-transformer-translate-func-v2-32-1591257600 namespace: automated ownerReferences: - apiVersion: batch/v1beta1 blockOwnerDeletion: true controller: true kind: CronJob name: tf-nightly-transformer-translate-func-v2-32 uid: 51a40f4a-5595-49a1-b63f-db75b0849206 resourceVersion: "32712722" selfLink: /apis/batch/v1/namespaces/automated/jobs/tf-nightly-transformer-translate-func-v2-32-1591257600 uid: b5d4fb20-df8d-45d8-a8b5-e3b0c40999be ... status: conditions: - lastProbeTime: "2020-06-04T12:04:58Z" lastTransitionTime: "2020-06-04T12:04:58Z" message: Job was active longer than specified deadline reason: DeadlineExceeded status: "True" type: Failed startTime: "2020-06-04T11:04:58Z"[enter image description here][1] </code></pre> <p>Environment:</p> <pre><code>Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:33:14Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.26", GitCommit:"525ce678faa2b28483fa9569757a61f92b7b0988", GitTreeState:"clean", BuildDate:"2020-03-06T18:47:39Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"} OS: cat /etc/os-release PRETTY_NAME="Debian GNU/Linux rodete" Python version (python --version): Python 3.7.7 Python client version (pip list | grep kubernetes): kubernetes 10.0.1 </code></pre>
Zachary Cain
<p>For anyone else who finds a similar issue: the problem is with the <code>kubeconfig</code> file (/usr/local/google/home/zcain/.kube/config for me).</p> <p>There is a line in there like this: <code>current-context: gke_xl-ml-test_europe-west4-a_xl-ml-test-europe-west4</code></p> <p>If the <code>current-context</code> points to a different cluster or zone than the one where your job ran, then when you run <code>kubectl get job</code> or use the Python API, the job status you get back will be wrong. I feel like it should just error out, but instead I got the behavior above, where I get back an incorrect status.</p> <p>You can run something like <code>gcloud container clusters get-credentials xl-ml-test-europe-west4 --zone europe-west4-a</code> to set your <code>kubeconfig</code> to the correct <code>current-context</code>.</p>
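<p>Checking which context is active before calling the API looks like this (standard kubectl commands):</p>
<pre><code># which cluster kubectl (and kubernetes.config.load_kube_config() in the Python client) will use
kubectl config current-context
kubectl config get-contexts
</code></pre>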
Zachary Cain
<p>In Helm, it is possible to specify a release name using</p> <p><code>helm install my-release-name chart-path</code></p> <p>This means, I can specify the release name and its components (using fullname) using the CLI.</p> <p>In kustomize (I am new to kustomize), there is a similar concept, <a href="https://kubectl.docs.kubernetes.io/references/kustomize/nameprefix/" rel="nofollow noreferrer"><code>namePrefix</code></a> and <a href="https://kubectl.docs.kubernetes.io/references/kustomize/namesuffix/" rel="nofollow noreferrer"><code>nameSuffix</code></a> which can be defined in a <code>kustomization.yaml</code></p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namePrefix: overlook- resources: - deployment.yaml </code></pre> <p>However, this approach needs a custom file, and using a &quot;dynamic&quot; namePrefix would mean that a <code>kustomization.yaml</code> has to be generated using a template and kustomize is, well, about avoiding templating.</p> <p>Is there any way to specify that value dynamically?</p>
user140547
<p>You can use <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/mySql/README.md#name-customization" rel="nofollow noreferrer"><code>kustomize edit</code></a> to edit the <code>nameprefix</code> and <code>namesuffix</code> values.</p> <p>For example:</p> <p><code>Deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: the-deployment spec: replicas: 5 template: containers: - name: the-container image: registry/conatiner:latest </code></pre> <p><code>Kustomization.yaml</code></p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - deployment.yaml </code></pre> <p>Then you can run <code>kustomize edit set nameprefix dev-</code> and <code>kustomize build .</code> will return following:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: dev-the-deployment spec: replicas: 5 template: containers: - image: registry/conatiner:latest name: the-container </code></pre>
kool
<p>I have a ready-made Kubernetes cluster with configured grafana + prometheus(operator) monitoring. I added the following labels to pods with my app:</p> <pre class="lang-yaml prettyprint-override"><code>prometheus.io/scrape: &quot;true&quot; prometheus.io/path: &quot;/my/app/metrics&quot; prometheus.io/port: &quot;80&quot; </code></pre> <p>But metrics don't get into Prometheus. However, prometheus has all the default Kubernetes metrics.</p> <p>What is the problem?</p>
jesmart
<p>You should create <a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors" rel="nofollow noreferrer"><code>ServiceMonitor</code> or <code>PodMonitor</code> objects</a>.</p> <p><code>ServiceMonitor</code> which describes the set of targets to be monitored by Prometheus. The Operator automatically generates Prometheus scrape configuration based on the definition and the targets will have the IPs of all the pods behind the service.</p> <p>Example:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: example-app labels: team: frontend spec: selector: matchLabels: app: example-app endpoints: - port: web </code></pre> <p><code>PodMonitor</code>, which declaratively specifies how groups of pods should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.</p> <p>Example:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: example-app labels: team: frontend spec: selector: matchLabels: app: example-app podMetricsEndpoints: - port: web </code></pre>
kool
<p>i'm working on a continuous deployment routine for a kubernetes application: everytime i push a git tag, a github action is activated which calls <code>kubectl apply -f kubernetes</code> to apply a bunch of yaml kubernetes definitions</p> <p>let's say i add yaml for a new service, and deploy it -- kubectl will add it</p> <p>but then later on, i simply delete the yaml for that service, and redeploy -- kubectl will NOT delete it</p> <p>is there any way that <code>kubectl</code> can recognize that the service yaml is missing, and respond by deleting the service automatically during continuous deployment? in my local test, the service remains floating around</p> <p>does the developer have to know to connect <code>kubectl</code> to the production cluster and delete the service manually, in addition to deleting the yaml definition?</p> <p>is there a mechanism for kubernetes to "know what's missing"?</p>
ChaseMoskal
<p>Before deleting the yaml file, you can run <code>kubectl delete -f file.yaml</code>; this way all the resources created by that file will be deleted.</p> <hr /> <p>However, what you are really looking for is a way to declare the desired state and have the cluster converge to it. You can do this with tools like <a href="https://github.com/roboll/helmfile" rel="nofollow noreferrer">Helmfile</a>.</p> <p>Helmfile lets you specify all the releases you want in one file, and it reconciles toward that desired state every time you run <code>helmfile apply</code>.</p>
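<p>A minimal <code>helmfile.yaml</code> might look like this (a sketch; the release name and chart path are placeholders):</p>
<pre><code># helmfile.yaml -- declares the set of releases you want installed
releases:
  - name: my-service              # placeholder release name
    namespace: default
    chart: ./charts/my-service    # local chart path, could also be repo/chart
    values:
      - values.yaml
</code></pre>
<p>To retire a release, the usual Helmfile convention is to mark it with <code>installed: false</code> and run <code>helmfile apply</code> again, rather than just deleting its entry from the file.</p>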
Mostafa Wael
<p>I encountered the following error when added <strong>spring-cloud-starter-kubernetes-config</strong> dependency to my pom.xml:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred. Caused by: java.security.cert.CertificateException: Could not parse certificate: java.io.IOException: Empty input Caused by: java.io.IOException: Empty input </code></pre> <p>To disable k8s, I added in bootstrap.yml following param:</p> <pre><code>spring: cloud: kubernetes: enabled: false </code></pre> <p>But even after that nothing changed and the error remained.</p> <p>Where else should I look? What parameter should I add so that if I have this dependency in pom.xml, I disable Kubernetes when running tests?</p>
Elena
<p>That problem can happen because of the local <code>kubectl</code> configuration: the Kubernetes client used by spring-cloud-kubernetes reads the config in <code>~/.kube</code> and tries to build a client from it. The easiest way to avoid the problem is to rename <code>~/.kube</code> (the directory with the configs) to some other name, such as <code>~/.kube-hide</code>:</p> <pre><code>mv ~/.kube ~/.kube-hide
</code></pre> <p>And when you need to use <code>kubectl</code> again, rename it back.</p>
Marshtupa
<p>I am trying to use a TPU with Google Cloud's Kubernetes engine. My code returns several errors when I try to initialize the TPU, and any other operations only run on the CPU. To run this program, I am transferring a Python file from my Dockerhub workspace to Kubernetes, then executing it on a single v2 preemptible TPU. The TPU uses Tensorflow 2.3, which is the latest supported version for Cloud TPUs to the best of my knowledge. (I get an error saying the version is not yet supported when I try to use Tensorflow 2.4 or 2.5).</p> <p>When I run my code, Google Cloud sees the TPU but fails to connect to it and instead uses the CPU. It returns this error:</p> <pre><code>tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303) tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (resnet-tpu-fxgz7): /proc/driver/nvidia/version does not exist tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2299995000 Hz tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561fb2112c20 initialized for platform Host (this does not guarantee that XLA will be used). Devices: tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -&gt; {0 -&gt; 10.8.16.2:8470} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -&gt; {0 -&gt; localhost:30001} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -&gt; {0 -&gt; 10.8.16.2:8470} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -&gt; {0 -&gt; localhost:30001} tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:30001 TPU name grpc://10.8.16.2:8470 </code></pre> <p>The errors seem to indicate that tensorflow needs NVIDIA packages installed, but I understood from the Google Cloud TPU documentation that I shouldn't need to use tensorflow-gpu for a TPU. I tried using tensorflow-gpu anyways and received the same error, so I am not sure how to fix this problem. I've tried deleting and recreating my cluster and TPU numerous times, but I can't seem to make any progress. 
I'm relatively new to Google Cloud, so I may be missing something obvious, but any help would be greatly appreciated.</p> <p>This is the Python script I am trying to run:</p> <pre><code>import tensorflow as tf import os import sys # Parse the TPU name argument tpu_name = sys.argv[1] tpu_name = tpu_name.replace('--tpu=', '') print(&quot;TPU name&quot;, tpu_name) tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_name) # TPU detection tpu_name = 'grpc://' + str(tpu.cluster_spec().as_dict()['worker'][0]) print(&quot;TPU name&quot;, tpu_name) tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) </code></pre> <p>Here is the yaml configuration file for my Kubernetes cluster (though I'm including a placeholder for my real workspace name and image for this post):</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: test spec: template: metadata: name: test annotations: tf-version.cloud-tpus.google.com: &quot;2.3&quot; spec: restartPolicy: Never imagePullSecrets: - name: regcred containers: - name: test image: my_workspace/image command: [&quot;/bin/bash&quot;,&quot;-c&quot;,&quot;pip3 install cloud-tpu-client tensorflow==2.3.0 &amp;&amp; python3 DebugTPU.py --tpu=$(KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS)&quot;] resources: limits: cloud-tpus.google.com/preemptible-v2: 8 backoffLimit: 0 </code></pre>
Lexi2277
<p>There are actually no errors in this workload you've provided or the logs. A few comments which I think might help:</p> <ul> <li><code>pip install tensorflow</code> as you have noted installs <code>tensorflow-gpu</code>. By default, it tries to run GPU specific initializations and fails (<code>failed call to cuInit: UNKNOWN ERROR (303)</code>), so it falls back to local CPU execution. This is an error if you're trying to develop on a GPU VM, but in a typical CPU workload that doesn't matter. Essentially <code>tensorflow == tensorflow-gpu</code> and without a GPU available it's equivalent to <code>tensorflow-cpu</code> with additional error messages. Installing <code>tensorflow-cpu</code> would make these warnings go away.</li> <li>In this workload, the TPU server has its own installation of TensorFlow running as well. It actually doesn't matter if your local VM (e.g. your GKE container) has <code>tensorflow-gpu</code> or <code>tensorflow-cpu</code>, as long as it's the same TF version as the TPU server. Your workload here is successfully connecting to the TPU server, indicated by:</li> </ul> <pre><code>tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -&gt; {0 -&gt; 10.8.16.2:8470} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -&gt; {0 -&gt; localhost:30001} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -&gt; {0 -&gt; 10.8.16.2:8470} tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -&gt; {0 -&gt; localhost:30001} tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:30001 </code></pre>
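<p>If you want those GPU warnings to go away, it should be enough to swap the pip package in the container command from the question (a sketch of just that line; it assumes <code>tensorflow-cpu==2.3.0</code> is available for the Python version in your base image):</p>
<pre><code>command: [&quot;/bin/bash&quot;,&quot;-c&quot;,&quot;pip3 install cloud-tpu-client tensorflow-cpu==2.3.0 &amp;&amp; python3 DebugTPU.py --tpu=$(KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS)&quot;]
</code></pre>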
Allen Wang
<p>I want to setup my Django app in kubernetes environment in such a way that while creating the app container, environment variables are passed such that those environment variables are used to initiate the containers. For example on starting the app container I want to issue management commands such as</p> <pre><code>python manage.py createuser --lastname lname --firstname --fname --number --num &quot; </code></pre> <p>and so on. How to pass these variable values such as lname and fname above inside the container in a generic way such that every time new values can be passed depending on the user credentials and they do not need to be hard coded everytime?</p>
devcloud
<p>You can pass <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container" rel="nofollow noreferrer">environment variables</a> in your deployment manifest, for example:</p> <pre><code>spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: DEMO_GREETING value: &quot;Hello from the environment&quot; - name: DEMO_FAREWELL value: &quot;Such a sweet sorrow&quot; </code></pre> <p>You <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">can also invoke them in container</a> by using <code>command</code> and <code>argument</code> object, the same way as running a command in container</p> <pre><code> containers: - name: env-print-demo image: bash env: - name: GREETING value: &quot;Warm greetings to&quot; - name: HONORIFIC value: &quot;The Most Honorable&quot; - name: NAME value: &quot;Kubernetes&quot; command: [&quot;echo&quot;] args: [&quot;$(GREETING) $(HONORIFIC) $(NAME)&quot;] </code></pre> <p>It is also possible to pass variables from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer"><code>ConfigMap</code></a> or <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-a-container-environment-variable-with-data-from-a-single-secret" rel="nofollow noreferrer"><code>Secrets</code></a>.</p>
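<p>For user-specific values like the ones in your <code>createuser</code> command, pulling them from a ConfigMap or Secret keeps them out of the deployment manifest itself. A sketch, where the ConfigMap name, keys and image are made up for illustration:</p>
<pre><code>containers:
  - name: django-app
    image: my-django-image        # placeholder
    env:
      - name: FIRSTNAME
        valueFrom:
          configMapKeyRef:
            name: user-config     # hypothetical ConfigMap
            key: firstname
      - name: LASTNAME
        valueFrom:
          configMapKeyRef:
            name: user-config
            key: lastname
    command: [&quot;python&quot;, &quot;manage.py&quot;, &quot;createuser&quot;]
    args: [&quot;--firstname&quot;, &quot;$(FIRSTNAME)&quot;, &quot;--lastname&quot;, &quot;$(LASTNAME)&quot;]
</code></pre>
<p>The <code>$(VAR)</code> references in <code>args</code> are expanded by Kubernetes from the container's environment, so new values only require updating the ConfigMap (or Secret) rather than the manifest.</p>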
kool
<h3>Summary</h3> <p>trying to get <code>minikube-test-ifs.com</code> to map to my deployment using minikube.</p> <h3>What I Did</h3> <p><code>minikube start</code><br> <code>minikube addons enable ingress</code><br> <code>kubectl apply -f &lt;path-to-yaml-below&gt;</code><br> <code>kubectl get ingress</code><br> Added ingress ip mapping to /etc/hosts file in form <code>&lt;ip&gt; minikube-test-ifs.com</code><br> I go to chrome and enter <code>minikube-test-ifs.com</code> and it doesn't load.<br> I get &quot;site can't be reached, took too long to respond&quot;</p> <h4>yaml file</h4> <p>note - it's all in the default namespace, I don't know if that's a problem.<br> There may be a problem in this yaml, but I checked and double checled and see no potential error... unless I'm missing something</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: test-deployment labels: app: test spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: containers: - name: test image: nginx ports: - name: client containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: test-service spec: selector: app: test ports: - name: client protocol: TCP port: 3000 targetPort: 3000 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: minikube-test-ifs.com http: paths: - path: / pathType: Prefix backend: service: name: test-service port: number: 3000 </code></pre> <h3>OS</h3> <p>Windows 10</p> <h3>Other Stuff</h3> <p>I checked <a href="https://stackoverflow.com/questions/58561682/minikube-with-ingress-example-not-working">Minikube with ingress example not working</a> but I already added to my /etc/hosts and I also tried removing the <code>spec.host</code> but that still doesn't work... <br> also checked <a href="https://stackoverflow.com/questions/64143984/minikube-ingress-nginx-controller-not-working">Minikube Ingress (Nginx Controller) not working</a> but that person has his page already loading so not really relevent to me from what I can tell</p> <h3>Any Ideas?</h3> <p>I watched so many Youtube tutorials on this and I follow everything perfectly. I'm still new to this but I don't see a reason for it not working?</p> <h1>Edit</h1> <p>When I run <code>kubectl describe ingress &lt;ingress&gt;</code> I get:</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 8s (x5 over 19m) nginx-ingress-controller Scheduled for sync </code></pre> <p>How do I get it to sync? Is there a problem since it's been &quot;Scheduled for sync&quot; for a long time</p>
KraveXL
<p><strong>Overview</strong></p> <ul> <li>The ingress addon for Minikube with the docker driver only works on Linux</li> <li>Docker for Windows uses Hyper-V; therefore, if the Docker daemon is running, you will <b>not be able</b> to use VM platforms such as VirtualBox or VMware</li> <li>If you have Windows Pro, Enterprise or Education, you may be able to get it working by using Hyper-V as the driver for your minikube cluster (see Solution 1)</li> <li>If you don't want to upgrade Windows, you can run a minikube cluster inside a Linux virtual machine and do all your tests there. This will require you to change some Windows VM settings in order to get your VMs to run (see Solution 2). Note that you can only run <b>either</b> Docker or a VM platform (other than Hyper-V) but <b>not both</b> (see The Second Problem for why this is the case).</li> </ul> <p><strong>The Problem</strong></p> <p>For those of you who are in the same situation as I was, the problem lies in the fact that the minikube ingress addon only works on Linux <b>when using the docker driver</b> (thanks to @rriovall for showing me this <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">documentation</a>).</p> <p><strong>The Second Problem</strong></p> <p>So the solution should be simple, right? Just use a different driver and it should work. The problem here is that when Docker is installed on Windows, it uses the built-in Hyper-V virtualization technology, which by default seems to disable all other virtualization tech.</p> <p>I have tested this hypothesis and it seems to be the case. When the Docker daemon is running, I am unable to boot any virtual machine that I have. For instance, I get an error when I try to run my VMs on VirtualBox and on VMware.</p> <p>Furthermore, when I attempt to start a minikube cluster using the virtualbox driver, it gets stuck &quot;booting the kernel&quot; and then I get a <code>This computer doesn't have VT-X/AMD-v enabled</code> error. This error is false, as I do have VT-X enabled (I checked my BIOS). This is most likely due to the fact that when Hyper-V is enabled, all other types of virtualization tech <b>seem</b> to be disabled.</p> <p>On my personal machine, when I search for &quot;turn windows features on or off&quot;, I can see that installing Docker enabled &quot;Virtual Machine Platform&quot; and then asked me to restart my computer. As a test, I turned off both the &quot;Virtual Machine Platform&quot; and &quot;Windows Hypervisor Platform&quot; features and restarted my computer.</p> <p>What happened when I did that? The Docker daemon stopped running and I could no longer work with docker; however, I was able to open my VMs <b>and</b> I was able to start my minikube cluster with virtualbox as the driver. The problem? Well, Docker doesn't work, so when my cluster tries to pull the docker image I am using, it won't be able to.</p> <p>So here lies the problem. Either you have VM tech enabled and Docker disabled, or you have VM tech (other than Hyper-V, I'll touch on that soon) disabled and Docker enabled. But you can't have both.</p> <p><strong>Solution 1</strong> (Untested)</p> <p>The simplest solution would probably be upgrading to Windows Pro, Enterprise or Education. The Hyper-V platform is not accessible on regular Windows Home. Once you have upgraded, you should be able to use Hyper-V as your driver concurrently with the Docker daemon. This, in theory, should make the ingress work.</p> <p><strong>Solution 2</strong> (Tested)</p> <p>If you're like me and don't want to do a system upgrade for something so minuscule, there's another solution.</p> <p>First, search your computer for the &quot;turn windows features on or off&quot; section, disable &quot;Virtual Machine Platform&quot; and &quot;Windows Hypervisor Platform&quot;, and restart your computer. (See you in a bit :D)</p> <p>After that, install a virtual machine platform on your computer. I prefer <a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">VirtualBox</a> but you can also use others such as <a href="https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html" rel="nofollow noreferrer">VMware</a>.</p> <p>Once you have a VM platform installed, add a new Linux VM. I would recommend either <a href="https://www.debian.org/" rel="nofollow noreferrer">Debian</a> or <a href="https://ubuntu.com/download/desktop" rel="nofollow noreferrer">Ubuntu</a>. If you are unfamiliar with how to set up a VM, <a href="https://www.youtube.com/watch?v=cx8GzudB6uE" rel="nofollow noreferrer">this</a> video will show you how to do so; the general setup is similar for most ISO images.</p> <p>After you have your VM up and running, install <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">minikube</a> and <a href="https://docs.docker.com/engine/install/debian/" rel="nofollow noreferrer">Docker</a> on it. Be sure to install the correct version for your VM (for Debian, install the Debian packages; for Ubuntu, the Ubuntu packages; some downloads may just be generic Linux builds, which should work on most distributions).</p> <p>Once you have everything installed, create a minikube cluster with docker as the driver and apply your Kubernetes configurations (deployment, service and ingress). Configure your <code>/etc/hosts</code> file, go to your browser, and it should work. If you don't know how to set up an ingress, you can watch <a href="https://www.youtube.com/watch?v=80Ew_fsV4rM" rel="nofollow noreferrer">this</a> video for an explanation of what an ingress is, how it works, and an example of how to set it up.</p>
KraveXL
<p>My airflow service runs as a kubernetes deployment, and has two containers, one for the <code>webserver</code> and one for the <code>scheduler</code>. I'm running a task using a KubernetesPodOperator, with <code>in_cluster=True</code> parameters, and it runs well, I can even <code>kubectl logs pod-name</code> and all the logs show up. </p> <p>However, the <code>airflow-webserver</code> is unable to fetch the logs:</p> <pre><code>*** Log file does not exist: /tmp/logs/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log *** Fetching from: http://pod-name-7dffbdf877-6mhrn:8793/log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log *** Failed to fetch log file from worker. HTTPConnectionPool(host='pod-name-7dffbdf877-6mhrn', port=8793): Max retries exceeded with url: /log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log (Caused by NewConnectionError('&lt;urllib3.connection.HTTPConnection object at 0x7fef6e00df10&gt;: Failed to establish a new connection: [Errno 111] Connection refused')) </code></pre> <p>It seems as the pod is unable to connect to the airflow logging service, on port 8793. If I <code>kubectl exec bash</code> into the container, I can curl localhost on port 8080, but not on 80 and 8793.</p> <p>Kubernetes deployment:</p> <pre><code># Deployment apiVersion: apps/v1 kind: Deployment metadata: name: pod-name namespace: airflow spec: replicas: 1 selector: matchLabels: app: pod-name template: metadata: labels: app: pod-name spec: restartPolicy: Always volumes: - name: airflow-cfg configMap: name: airflow.cfg - name: dags emptyDir: {} containers: - name: airflow-scheduler args: - airflow - scheduler image: registry.personal.io:5000/image/path imagePullPolicy: Always volumeMounts: - name: dags mountPath: /airflow_dags - name: airflow-cfg mountPath: /home/airflow/airflow.cfg subPath: airflow.cfg env: - name: EXECUTOR value: Local - name: LOAD_EX value: "n" - name: FORWARDED_ALLOW_IPS value: "*" ports: - containerPort: 8793 - containerPort: 8080 - name: airflow-webserver args: - airflow - webserver - --pid - /tmp/airflow-webserver.pid image: registry.personal.io:5000/image/path imagePullPolicy: Always volumeMounts: - name: dags mountPath: /airflow_dags - name: airflow-cfg mountPath: /home/airflow/airflow.cfg subPath: airflow.cfg ports: - containerPort: 8793 - containerPort: 8080 env: - name: EXECUTOR value: Local - name: LOAD_EX value: "n" - name: FORWARDED_ALLOW_IPS value: "*" </code></pre> <p>note: If airflow is run in dev environment (locally instead of kubernetes) it all works perfectly.</p>
Yuzobra
<p>The problem was a bug in how the KubernetesPodOperator in Airflow v1.10.10 launched its pods. Upgrading to Airflow 2.0 solved the issue.</p>
Yuzobra
<p>I am trying to familiarise myself with kubernetes, and want to run a k8s stack on some low spec hw (think raspberry pi).</p> <p>I found what <a href="https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/" rel="nofollow noreferrer">appears to be a great guide to set up kubernetes un Ubuntu</a> but I ran into issues, causing me to reinstall the OS several times to ensure that I had not made a fundamental mistake which was poisoning my attempt.</p> <p>Getting fed up with waiting for the basics, <a href="https://github.com/JoSSte/k8s-playground" rel="nofollow noreferrer">I tried to set it up in a vagrant environment</a>, which does allow me to skip some of the time-consuming and tedious steps regarding reinstalls, but still seems like a fragile process. Looking at udemy and youtube, as well as getting started articles, a lot of focus appears to be in minikube... as I read it, that is essentially a vm with a ready to go kubernetes set up already.</p> <p>My question is: is the overhead using minikube small enough to use on servers with minimal resources? Or is it only usable for testing and demonstration? Since I have issues getting a working cluster, I can't test and verify it myself...</p>
JoSSte
<p>From minikube <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">documentation</a>: <code>minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes</code></p> <p>If you want to learn more about Kubernetes, I suggest reading and implementing <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">this</a> repository.</p> <p>In the end, if you want to use Kubernetes in production, please forget <code>minikube</code> and run Kubernetes.</p>
Mohammad Amin Taheri
<p>I am trying to list all the nodes that are set to unscheduleable in an operator-sdk operator. Generally (pre 1.12) this means they have <code>spec.unscheduleable</code> set. So I tried this:</p> <pre><code>nodes := &amp;corev1.NodeList{} opts := &amp;client.ListOptions{} if err := opts.SetFieldSelector("spec.unschedulable=true"); err != nil { reqLogger.Info("Failed to set field selector") } </code></pre> <p>Which is erroring:</p> <pre><code>2019-04-23T10:19:39.761-0700 ERROR kubebuilder.controller Reconciler error {"controller": "node-controller", "request": "/nodename", "error": "Index with name field:spec.unschedulable does not exist"} </code></pre> <p>I'm confused about this because the field selector works from kubectl:</p> <pre><code>kubectl get nodes --field-selector="spec.unschedulable=true" </code></pre> <p>Alongside this issue, I have noticed that after v1.12, the spec.unscheduleable field has been <a href="https://github.com/kubernetes/kubernetes/issues/69010" rel="nofollow noreferrer">deprecated</a> in favour of TaintNodeByCondition. This further complicates things, because now I really don't think I can use the fieldselector anyone, because I don't believe (unless I'm mistaken?) that you can use fieldselector with the taints anyway.</p> <p>So, my question is - how can I list all of the tainted/unscheduleable nodes in my cluster efficiently, specifically when using the operator-sdk</p> <p>UPDATE:</p> <p>I have managed to solve the fieldselector problem by using a v1meta.ListOptions call, like so:</p> <pre><code>nodes := &amp;corev1.NodeList{} opts := &amp;client.ListOptions{ Raw: &amp;metav1.ListOptions{ FieldSelector: "spec.unschedulable=true", }, } </code></pre> <p>However, I still don't know how to do this with taints, so I've edited the question and will leave it open</p>
jaxxstorm
<p>I'm using the client-go APIs and I've also struggled with this, so I built a workaround; I'm not sure whether it is the best option, but I'll share my code.</p> <p>It lists all the nodes and then filters out the ones that have taints, keeping only the untainted ones.</p>
<pre><code>// GetNodes returns all nodes, or only the untainted ones when withoutTaints is true.
func GetNodes(withoutTaints bool, clientset kubernetes.Interface, ctx *context.Context, options *metav1.ListOptions) *v1.NodeList {
    nodes, err := clientset.CoreV1().Nodes().List(*ctx, *options)
    if err != nil {
        panic(err)
    }

    if withoutTaints {
        // keep only the nodes that carry no taints at all
        nodesWithoutTaints := v1.NodeList{}
        for _, node := range nodes.Items {
            if len(node.Spec.Taints) == 0 {
                nodesWithoutTaints.Items = append(nodesWithoutTaints.Items, node)
            }
        }
        return &amp;nodesWithoutTaints
    }
    return nodes
}
</code></pre>
Ali Soliman
<p>I am a relatively seasoned AI practitioner; however, I am a complete newbie when it comes to deploying these models. I followed an online tutorial that deployed the model locally using Docker Desktop. It created a stack of containers for the frontend and backend. I installed Tensorflow in each of these containers to run the AI model (RUN pip3 install tensorflow in the Dockerfile). However, I can't deploy it on Kubernetes. I checked the option which allows Docker Stacks to be sent to Kubernetes. I can see both frontend and backend images when I run <code>docker images</code>. The next step I took was to create a GCP project and create a cluster in it. Then I tagged both the frontend and backend images in the format <code>gcr.io/project/name:tag</code> and pushed them. Then I deployed both and exposed them as fdep (frontend) and bdep (backend). Both of them are running correctly, as seen in the screenshot of the deployments.</p> <p>However, when I go to the frontend external IP and run the model, nothing happens, as if the backend is not outputting anything. The screenshot of the Postman request against the backend external IP shows the same.</p> <p>Any help here? What am I doing wrong?</p>
Muhammad Aleem
<p>Since this multi-container docker app was not originally developed for Kubernetes, make sure you specify a name when generating the service for your backend:</p> <pre><code>kubectl expose deployment bdep --port 8081 --name (name-that-the-front-end-apps-expect)
</code></pre> <p>In your case, without the <code>--name</code> option, the service name defaults to the deployment name "bdep", but the frontend apps are expecting the name "backend".</p>
Perryn Gordon
<p>I recently started exploring Kubernetes and decided to try and deploy kafka to k8s. However I have a problem with creating the persistent volume. I create a storage class and a persistent volume, but the persistent volume claims stay in status pending saying "no volume plugin matched". This is the yaml files I used with the dashed lines denoting a new file. Anybody has an idea why this is happening?</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate reclaimPolicy: Retain ------------------------ apiVersion: v1 kind: PersistentVolume metadata: name: kafka-pv spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /mnt/disks/ssd1 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - docker-desktop --------------------------- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: zookeeper-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 5Gi storageClassName: local-storage </code></pre>
Konstantin Konstantinov
<p>As MaggieO said, changing ReadWriteMany to ReadWriteOnce was part of the problem. The other part was that I had to create the /mnt/disks/ssd1 folder on my C: drive manually and write "path: /c/mnt/disks/ssd1" instead. One more thing that is not visible in my example but might be helpful to others: I was also trying to bind two PVCs to one PV, which is impossible. The PV to PVC relationship is 1 to 1.</p>
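<p>For anyone hitting the same thing on Docker Desktop for Windows, the relevant part of the PersistentVolume ended up looking roughly like this (a sketch based on the manifests above):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce          # must match the access mode requested by the PVC
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /c/mnt/disks/ssd1  # folder created manually on the C: drive
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
</code></pre>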
Konstantin Konstantinov
<p>I keep getting the below error inconsistently on one of my services' endpoint object. : &quot;Failed to update endpoint default/myservice: Operation cannot be fulfilled on endpoints &quot;myservice&quot;: the object has been modified; please apply your changes to the latest version and try again&quot;. I am sure I am not editing the endpoint object manually because all my Kubernetes objects are deployed through helm3 charts. But it keeps giving the same error. It goes away if I delete and recreate the service. Please help/give any leads as to what could be the issue. Below is my service.yml object from the cluster:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: myservice namespace: default selfLink: /api/v1/namespaces/default/services/myservice uid: 4af68af5-4082-4ffb-b11b-641d16b28f31 resourceVersion: '1315842' creationTimestamp: '2020-08-13T11:00:53Z' labels: app: myservice app.kubernetes.io/managed-by: Helm chart: myservice-1.0.0 heritage: Helm release: vanilla annotations: meta.helm.sh/release-name: vanilla meta.helm.sh/release-namespace: default spec: ports: - name: http protocol: TCP port: 5000 targetPort: 5000 selector: app: myservice clusterIP: 10.0.225.85 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre>
bamishr
<p>It's <a href="https://stackoverflow.com/questions/57957516/kubernetes-failed-to-update-endpoints-warning">common behavior</a> and might happen when you try to deploy resources by copy-pasting manifests including <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata" rel="nofollow noreferrer"><code>metadata</code> fields</a> like <code>creationTimeStamp</code>, <code>resourceVersion</code>, <code>selfLink</code> etc.</p> <p><a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#generated-values" rel="nofollow noreferrer">Those fields</a> are generated before the object is persisted. It appears when you attempt to update the resource that has been already updated and the version has changed so it refuses to update it. The solution is to check your yamls and apply must-have objects without specifying fields populated by the system.</p>
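<p>As a sketch, re-applying the Service from the question with the system-populated metadata stripped out keeps only the fields you own, for example:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: myservice
  namespace: default
  labels:
    app: myservice
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
  selector:
    app: myservice
</code></pre>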
kool
<p>I'm trying to develop a custom resource on kubernetes with kubebuilder. In this CR, I have a field of type <code>url.URL</code></p> <p>I get this error :</p> <pre><code>(*in).DeepCopyInto undefined (type *url.URL has no field or method DeepCopyInto) </code></pre> <p>Is there a way to work with type <code>url.URL</code> when developing a CR ?</p> <p>Thanks</p>
Quentin
<p>So I found a solution.<br /> I don't know if it's the best one, but I've created a custom <code>URL</code> type that adds the part <code>net/url</code> is missing to work with <code>controller-gen</code>.</p> <p>It works fine: <a href="https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7" rel="nofollow noreferrer">https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7</a></p>
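<p>For reference, a minimal sketch of such a wrapper (not necessarily identical to the gist; the type name and package are assumptions, and JSON serialization of the field is left out):</p> <pre><code>package v1alpha1

import "net/url"

// URL wraps net/url.URL so that deepcopy methods can be attached to it
// and controller-gen stops complaining about the missing DeepCopyInto.
type URL struct {
	url.URL
}

// DeepCopyInto copies the receiver into out. This is a plain struct copy,
// which is enough as long as the *Userinfo field is treated as immutable
// (an assumption of this sketch).
func (in *URL) DeepCopyInto(out *URL) {
	*out = *in
}

// DeepCopy returns a copy of the URL.
func (in *URL) DeepCopy() *URL {
	if in == nil {
		return nil
	}
	out := new(URL)
	in.DeepCopyInto(out)
	return out
}
</code></pre>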
Quentin
<p>I'm using a Microk8s setup with the following configuration -</p> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: jenkins labels: app: jenkins spec: selector: matchLabels: app: jenkins replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: metadata: labels: app: jenkins spec: serviceAccountName: jenkins containers: - name: jenkins image: jenkins/jenkins:2.235.1-lts-alpine imagePullPolicy: IfNotPresent env: - name: JAVA_OPTS value: -Xmx2048m -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 ports: - containerPort: 8080 protocol: TCP - containerPort: 50000 protocol: TCP volumeMounts: - mountPath: /var/jenkins_home name: jenkins restartPolicy: Always securityContext: runAsUser: 0 terminationGracePeriodSeconds: 30 volumes: - name: jenkins persistentVolumeClaim: claimName: jenkins-claim </code></pre> <p><strong>pv.yaml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins labels: type: local spec: storageClassName: manual capacity: storage: 4Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data&quot; </code></pre> <p><strong>pvc.yaml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 4Gi </code></pre> <p><strong>rbac.yaml</strong></p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: jenkins --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: jenkins rules: - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;,&quot;watch&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;pods/exec&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;,&quot;watch&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;pods/log&quot;] verbs: [&quot;get&quot;,&quot;list&quot;,&quot;watch&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;secrets&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;configmaps&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;] - apiGroups: [&quot;apps&quot;] resources: [&quot;deployments&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;services&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;ingresses&quot;] verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: jenkins roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: jenkins subjects: - kind: ServiceAccount name: jenkins namespace: jenkins </code></pre> <p><strong>service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: jenkins labels: app: jenkins spec: type: NodePort ports: - name: ui port: 8080 targetPort: 8080 protocol: TCP - name: slave port: 50000 protocol: TCP - name: http 
port: 80 targetPort: 8080 selector: app: jenkins </code></pre> <p>I can access the internet from my node (host), but not from my pods. my node is an ubuntu 18.04.2 LTS machine running on vSphere, within a VPN.</p> <p>in official documentation (<a href="https://microk8s.io/docs/troubleshooting" rel="nofollow noreferrer">https://microk8s.io/docs/troubleshooting</a>) it says to either</p> <pre><code>sudo iptables -P FORWARD ACCEPT sudo apt-get install iptables-persistent </code></pre> <p>or</p> <pre><code>sudo ufw default allow routed </code></pre> <p>both doesn't fix the problem for me.</p> <p>also tried suggestions in <a href="https://github.com/ubuntu/microk8s/issues/1484" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/issues/1484</a> without success.</p>
Yahm Levi Firseck
<p>In order to solve this problem on MicroK8s, enable the DNS addon BEFORE deploying, with the command <code>microk8s enable dns</code>.</p>
Yahm Levi Firseck
<p>I have a redis sentinel master slave setup with 1 master and 3 slaves and this is in Kubernetes environment.In the spring lettuce configuration, I have to specify the sentinels URLs with port numbers. How should I specify the URL for each sentinel? <a href="https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis:sentinel" rel="nofollow noreferrer">Spring doc</a> specifies IP and port. In the local it's ok but when in k8s, how should I configure? I installed the set up with <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="nofollow noreferrer">bitnami redis chart</a>. Below is how it's done locally.</p> <pre class="lang-java prettyprint-override"><code>@Bean public RedisConnectionFactory lettuceConnectionFactory() { RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration().master(&quot;mymaster&quot;) .sentinel(&quot;127.0.0.1&quot;, 26379) .sentinel(&quot;127.0.0.1&quot;, 26380); return new LettuceConnectionFactory(sentinelConfig); } </code></pre> <p>Thanks</p>
sachin
<p>First thing: using the Bitnami Helm chart is the right way to do things.</p> <p>Although it is a slightly different implementation, here is how we implemented the same master/slave setup AND avoided the above problem, while ensuring the maximum availability we ever witnessed (less than 2 seconds of downtime for the master):</p> <ul> <li><p>We made two services: one for the master and another for the slaves.</p></li> <li><p>A PV/PVC was shared between the slaves and the master, where ONLY the master would write and the slaves would only read from the PV.</p></li> <li><p>This way we could always ensure that there was 1 pod running at all times for the master and N replicas behind the headless service for the slaves.</p></li> </ul> <p>In the application, the slave and master URLs would always be different, which gives a clear "WRITE" and "READ" isolation and improves the stability of the system, with almost no failures for reads. A sketch of the Spring side is shown below.</p>
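<p>A rough sketch of what that could look like in the Spring configuration. The service names (<code>redis-master</code>, <code>redis-slave</code>) and the <code>default</code> namespace are assumptions, and bean qualification details are omitted:</p> <pre><code>import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfig {

    // Writes go to the single master service.
    @Bean
    public RedisConnectionFactory writeConnectionFactory() {
        return new LettuceConnectionFactory(
            new RedisStandaloneConfiguration("redis-master.default.svc.cluster.local", 6379));
    }

    // Reads go to the service in front of the slave replicas.
    @Bean
    public RedisConnectionFactory readConnectionFactory() {
        return new LettuceConnectionFactory(
            new RedisStandaloneConfiguration("redis-slave.default.svc.cluster.local", 6379));
    }
}
</code></pre>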
srinivas-vaddi
<p>I am currently trying to set up LDAPS for a SonarQube instance running on Kubernetes. For LDAPS to work I need to add the CA to the Java trust store. Ideally I would do this without having to alter the image or do it manually, as it would need to be redone whenever the pod is recreated, which goes against the Kubernetes principle of pods being expendable.</p>
vgauzzi
<p>Ideally you WOULD alter your image, or rather create a new, reusable parent image. This would extend your usual parent image, e.g. some Alpine Linux, and copy your certificate data into the image, which is then imported into your image's trust store. This is the easiest and most straightforward way I know.</p>
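<p>A rough sketch of such a parent image; the base image tag, certificate file name and keystore password are assumptions, so adjust them to whatever base image your deployment already uses:</p> <pre><code># Extend the usual Java base image and bake the LDAP CA into the JVM trust store.
FROM eclipse-temurin:17-jre-alpine

# The certificate file name is an assumption; copy your actual CA here.
COPY ldap-ca.crt /tmp/ldap-ca.crt

# "changeit" is the default cacerts password; adjust if your base image differs.
RUN keytool -importcert -noprompt -trustcacerts \
      -alias ldap-ca \
      -file /tmp/ldap-ca.crt \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit \
 &amp;&amp; rm /tmp/ldap-ca.crt
</code></pre> <p>Your SonarQube image would then be built <code>FROM</code> this parent image, so the certificate survives pod restarts without any manual step.</p>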
godsim
<p>I'm trying to write simple ansible playbook that would be able to execute some arbitrary command against the pod (container) running in kubernetes cluster.</p> <p>I would like to utilise kubectl connection plugin: <a href="https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html</a> but having struggle to figure out how to actually do that.</p> <p>Couple of questions:</p> <ol> <li>Do I need to first have inventory for k8s defined? Something like: <a href="https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html</a>. My understanding is that I would define kube config via inventory which would be used by the kubectl plugin to actually connect to the pods to perform specific action.</li> <li>If yes, is there any example of arbitrary command executed via kubectl plugin (but not via shell plugin that invokes kubectl on some remote machine - this is not what I'm looking for)</li> </ol> <p>I'm assuming that, during the ansible-playbook invocation, I would point to k8s inventory.</p> <p>Thanks.</p>
Bakir Jusufbegovic
<p>First, install the Kubernetes collection:</p> <pre><code>ansible-galaxy collection install community.kubernetes </code></pre> <p>and here is a playbook; it lists all pods in a namespace and runs a command in every pod:</p> <pre><code>--- - hosts: localhost vars_files: - vars/main.yaml collections: - community.kubernetes tasks: - name: Get the pods in the specific namespace k8s_info: kubeconfig: '{{ k8s_kubeconfig }}' kind: Pod namespace: test register: pod_list - name: Print pod names debug: msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }} " - set_fact: pod_names: "{{pod_list|json_query('resources[*].metadata.name')}}" - k8s_exec: kubeconfig: '{{ k8s_kubeconfig }}' namespace: "{{ namespace }}" pod: "{{ item.metadata.name }}" command: apt update with_items: "{{ pod_list.resources }}" register: exec loop_control: label: "{{ item.metadata.name }}" </code></pre>
Bora Özkan
<p>I have 2 Kubernetes clusters on DigitalOcean. One cluster has nginx installed via Helm:</p> <pre><code>helm install nginx bitnami/nginx </code></pre> <p>I need to &quot;whitelist&quot; the other cluster's IP address, so basically one cluster can only receive incoming calls to an endpoint from that specific cluster.<br/></p> <p>I don't know how to configure the generated Helm values.yaml file. Normally with nginx we can use:</p> <pre><code>whitelist-source-range </code></pre> <p>But I don't know how to do it with the Helm chart.<br/> Thanks</p>
YoussHark
<p><code>whitelist-source-range</code> is an <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range" rel="nofollow noreferrer">annotation</a> (full name <code>nginx.ingress.kubernetes.io/whitelist-source-range</code>) that can be added to an Ingress object, for example:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: whitelist annotations: nginx.ingress.kubernetes.io/whitelist-source-range: &quot;1.1.1.0/24&quot; spec: rules: - host: whitelist.test.net http: paths: - path: / backend: serviceName: webserver servicePort: 80 </code></pre> <p>You may also need to change <code>service.externalTrafficPolicy</code> to <code>Local</code> so that the original client source IP is preserved for the whitelist check.</p>
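<p>If you need that last change, one quick way to apply it (assuming the LoadBalancer service created by the chart is simply named <code>nginx</code> in the current namespace) is:</p> <pre><code>kubectl patch svc nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>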
kool
<p>I am trying to run a cron job in kubernetes that needs to access a database. This is the database yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: component: db name: db spec: selector: matchLabels: component: db replicas: 1 strategy: type: Recreate template: metadata: labels: component: db spec: containers: - name: db image: mysql:5.7 ports: - containerPort: 3306 args: - --transaction-isolation=READ-COMMITTED - --binlog-format=ROW - --max-connections=1000 - --bind-address=0.0.0.0 env: - name: MYSQL_DATABASE valueFrom: secretKeyRef: key: MYSQL_DATABASE name: db-secrets - name: MYSQL_PASSWORD valueFrom: secretKeyRef: key: MYSQL_PASSWORD name: db-secrets - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: key: MYSQL_ROOT_PASSWORD name: db-secrets - name: MYSQL_USER valueFrom: secretKeyRef: key: MYSQL_USER name: db-secrets volumeMounts: - mountPath: /var/lib/mysql name: db-persistent-storage restartPolicy: Always volumes: - name: db-persistent-storage persistentVolumeClaim: claimName: db-pvc </code></pre> <p>And this is the yaml for the cronjob:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: cron spec: schedule: &quot;0 0 * * *&quot; jobTemplate: spec: template: spec: containers: - name: cron image: iulbricht/shopware-status-tool:1.0.0 env: - name: USERNAME valueFrom: secretKeyRef: key: USERNAME name: cron-secrets - name: PASSWORD valueFrom: secretKeyRef: key: PASSWORD name: cron-secrets - name: DATABASE_DSN valueFrom: secretKeyRef: key: DATABASE_DSN name: cron-secrets - name: DHL_API_KEY valueFrom: secretKeyRef: key: DHL_API_KEY name: cron-secrets - name: SHOP_API valueFrom: secretKeyRef: key: SHOP_API name: cron-secrets restartPolicy: OnFailure </code></pre> <p>When the cronjob runs I always get the following message: <code>default addr for network 'db:3306' unknown</code>. The mysql connection string is as follows: mysql://username:password@db:3306/shopware</p> <p>I am using Kustomization and the db and cron are in the save namespace.</p> <p>Can anyone help me find a way to solve this?</p>
Knerd
<p>Can you please try this connection string? The error you are seeing comes from the DSN parser when the address is not wrapped in a network specifier such as <code>tcp(...)</code>.</p> <pre><code>username:password@tcp(db:3306)/shopware </code></pre>
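<p>For illustration, assuming the status tool is written in Go and uses the go-sql-driver/mysql package (the error message matches that driver's DSN parser), the connection would look like this:</p> <pre><code>package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // assumption: this is the driver in use
)

func main() {
	// "db" resolves through the Kubernetes service name, as in DATABASE_DSN above.
	db, err := sql.Open("mysql", "username:password@tcp(db:3306)/shopware")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected")
}
</code></pre>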
Roar S.
<p>I have a Kubernetes-cluster with 1 master-node and 3 worker-nodes. All Nodes are running on CentOS 7 with Docker 19.06. I'm also running Longhorn for dynamic provisioning of volumes (if that's important).</p> <p>My problem is that every few days one of the worker nodes grows the HDD-usage to 85% (43GB). This is not a linear increase, but happens over a few hours, sometimes quite rapidly. I can &quot;solve&quot; this problem for a few days by first restarting the docker service and then doing a <code>docker system prune -a</code>. If I don't restart the service first, the prune removes next to nothing (only a few MB).</p> <p>I also tried to find out which container is taking up all that space, but <code>docker system df</code> says it doesn't use the space. I used <code>df</code> and <code>du</code> to crawl along the /var/lib/docker subdirectories too, and it seems none of the folders (alone or all together) takes up much space either. Continuing this all over the system, I can't find any other big directories either. There are 24GB that I just can't account for. What makes me think this is a docker problem nonetheless is that a restart and prune just solves it every time.</p> <p>Googling around I found a lot of similar issues where most people just decided to increase disk space. I'm not keen on accepting this as the preferred solution, as it feels like kicking the can down the road.</p> <p>Would you have any smart ideas on what to do instead of increasing disk space?</p>
larissaphone
<p>It seems like this is expected behavior; in the <a href="https://docs.docker.com/config/pruning/" rel="nofollow noreferrer">Docker documentation</a> you can read:</p> <blockquote> <p>Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space. For each type of object, Docker provides a <code>prune</code> command. In addition, you can use <code>docker system prune</code> to clean up multiple types of objects at once. This topic shows how to use these <code>prune</code> commands.</p> </blockquote> <p>So it seems like you have to clean it up manually using <code>docker system/image/container prune</code>. Another possibility is that those containers create too many logs and you might need to <a href="https://stackoverflow.com/questions/42510002/how-to-clear-the-logs-properly-for-a-docker-container">clean them up</a>.</p>
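<p>If runaway container logs turn out to be the culprit, one common mitigation (a suggestion, not something your setup necessarily needs) is to cap the <code>json-file</code> log driver in <code>/etc/docker/daemon.json</code> on each node and then restart the Docker service:</p> <pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>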
kool
<p>I just started working with Kubeflow and I ran into a problem. I need my pipeline to be able to automatically get the name of the experiment it belongs to. I tried to use <a href="https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.get_experiment" rel="nofollow noreferrer">the kfp package</a> but it seems to me that there is no way to get the experiment name of the current run. Do you have any suggestions? Thank you very much!</p>
user2846
<p>A run is tied to an experiment, not the other way around. When you run a pipeline you specify the experiment with <code>kfp.Client.run_pipeline</code> as an argument. When you do not specify an experiment, the run is automatically tied to the default experiment on AI Platform.</p> <p>So you won't need to get the experiment name, since you specify the experiment when running a pipeline.</p>
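<p>In other words, the experiment is something you already choose at submission time. A minimal sketch with the v1 <code>kfp</code> SDK (the experiment name, run name and package path are placeholders):</p> <pre><code>import kfp

client = kfp.Client()  # assumes an in-cluster or otherwise configured KFP endpoint

experiment = client.create_experiment(name="my-experiment")
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name="my-run",
    pipeline_package_path="pipeline.yaml",
)
</code></pre>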
Franco
<p>So I have my own website that I am running and I want to migrate one of my services to my current cluster under a subdomain of my actual website and I'm having some trouble.</p> <p>I have a website that I purchased off of NameCheap and I'm using Cloudfare for all the DNS stuff. So everything is setup correctly. What I can't seem to figure out what to do is getting my subdomain website to actually work.</p> <p>I have tried to add a &quot;A&quot; and &quot;CNAME&quot; record and still can't get it to work.</p> <p>I also tried to follow this site and got no luck. I have tried other stackoverflow links and links posted by cloudfare. But I couldn't get anything to work still: <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p> <p>My services are also running with no issues. My pods and deployments are also fine showing no errors and my website is already running on another link which I'm removing to save money. <a href="http://www.ecoders.ca" rel="nofollow noreferrer">www.ecoders.ca</a>. All I did to migrate over my service was add stuff to my ingress and re-deploy everything to my current cluster. On my current cluster I'm using NGINX.</p> <p>LMK if more information is required.</p> <p><strong>Ingress.yaml</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; # nginx.ingress.kubernetes.io/rewrite-target: / name: ingress spec: rules: - host: www.foo.com http: paths: - backend: serviceName: nk-webapp-service servicePort: 80 path: / - backend: serviceName: stockapp-service servicePort: 80 path: /stock - host: www.bar.foo.com &lt;----------- this does not work http: paths: - backend: serviceName: ecoders-webapi-service servicePort: 80 path: / </code></pre> <p><strong>Cloudfare Setup</strong> <a href="https://i.stack.imgur.com/yHWUz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yHWUz.png" alt="enter image description here" /></a></p> <pre><code>CNAME -&gt; www -&gt; foo.com CNAME -&gt; bar -&gt; foo.com A -&gt; foo.com -&gt; IP ADDRESS </code></pre> <p><strong>Service Setup</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ecoders-webapi spec: replicas: 1 selector: matchLabels: name: ecoders-webapi template: metadata: labels: name: ecoders-webapi spec: containers: - name: webapi image: astronik/webservice:latest imagePullPolicy: Always ports: - containerPort: 8080 </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: name: ecoders-webapi-service spec: type: ClusterIP ports: - name: http port: 80 targetPort: 8080 selector: name: ecoders-webapi </code></pre> <p><strong>UPDATED</strong></p> <p><strong>Ingress.yaml</strong></p> <pre><code> - host: www.bar.foo.com http: paths: - backend: serviceName: ecoders-webapi-service servicePort: 80 path: / - host: bar.foo.com http: paths: - backend: serviceName: ecoders-webapi-service servicePort: 80 path: / </code></pre> <p>I added in a &quot;www&quot; link version as well, and now I'm getting this (before I was getting noting):</p> <p><a href="https://i.stack.imgur.com/dG8yI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dG8yI.png" alt="enter image description here" /></a></p> <p>Does this have 
something to do with TLS/SSL? Meaning my subdomain doesn't have a certificate?</p> <p><strong>NEW UPDATE</strong></p> <p>So under &quot;SSL/TLS&quot; on the Cloudfare dashboard. Once I turned it off I was able to access my subdomain with no problem. But now how do I get it to run on full? Does my kubernetes cluster require a certificate?</p> <p><a href="https://i.stack.imgur.com/UHLle.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UHLle.png" alt="enter image description here" /></a></p> <p><em><strong>SOLVED</strong></em></p> <p>So it's all fixed up and it had to do with 2 little problems.</p> <p><strong>Problem 1:</strong></p> <p><a href="https://i.stack.imgur.com/cgOlE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cgOlE.png" alt="enter image description here" /></a></p> <p>Essentially I need to change my DNS settings adding www. was adding another subdomain. I removed the 2 CNAMEs I created before and did this.</p> <pre><code>A -&gt; bar -&gt; 10.0.0.0 A -&gt; foo.com -&gt; 10.0.0.0 </code></pre> <p><strong>Problem 2:</strong></p> <p>Needed to update my ingress to remove this line</p> <pre><code>nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; </code></pre> <p><strong>Updated Ingress</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: ingress spec: rules: - host: foo.com http: paths: - backend: serviceName: nk-webapp-service servicePort: 80 path: / - backend: serviceName: stockapp-service servicePort: 80 path: /stock - host: bar.foo.com http: paths: - backend: serviceName: ecoders-web-service servicePort: 80 path: / </code></pre>
Nikster
<p><em><strong>SOLVED</strong></em></p> <p>So it's all fixed up and it had to do with 2 little problems. <a href="https://community.cloudflare.com/t/cloudflare-ssl-not-working-on-subdomains/3792" rel="nofollow noreferrer">Link Cloudfare Page</a></p> <p><strong>Problem 1:</strong></p> <p><a href="https://i.stack.imgur.com/cgOlE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cgOlE.png" alt="enter image description here" /></a></p> <p>Essentially I need to change my DNS settings adding www. was adding another subdomain. I removed the 2 CNAMEs I created before and did this.</p> <pre><code>A -&gt; bar -&gt; 10.0.0.0 A -&gt; foo.com -&gt; 10.0.0.0 </code></pre> <p><strong>Problem 2:</strong></p> <p>Needed to update my ingress to remove this line</p> <pre><code>nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; </code></pre> <p><strong>Updated Ingress</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: ingress spec: rules: - host: foo.com http: paths: - backend: serviceName: nk-webapp-service servicePort: 80 path: / - backend: serviceName: stockapp-service servicePort: 80 path: /stock - host: bar.foo.com http: paths: - backend: serviceName: ecoders-web-service servicePort: 80 path: / </code></pre>
Nikster
<p>I am trying to get docker running on Jenkins which itself is a container. Below is part of the Pod spec.</p> <p><code>cyrilpanicker/jenkins</code> is an image with Jenkins and docker-cli installed. For Docker daemon, I am running another container with <code>docker:dind</code> image (The nodes are running on a k8s cluster). And to get <code>docker.sock</code> linked between them, I am using volume mounts.</p> <pre><code>spec: containers: - name: jenkins image: cyrilpanicker/jenkins volumeMounts: - mountPath: /var/run/docker.sock name: docker-socket - name: docker image: docker:dind securityContext: privileged: true volumeMounts: - mountPath: /var/run/docker.sock name: docker-socket volumes: - name: docker-socket hostPath: path: /docker.sock type: FileOrCreate </code></pre> <p>But this is not working. Below are the logs from the <code>docker</code> container.</p> <pre><code>time=&quot;2021-06-04T20:47:26.059792967Z&quot; level=info msg=&quot;Starting up&quot; time=&quot;2021-06-04T20:47:26.061956820Z&quot; level=warning msg=&quot;could not change group /var/run/docker.sock to docker: group docker not found&quot; failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy </code></pre> <p>Can anyone suggest another way to get this working?</p>
Cyril
<p>According to the kubernetes docs, <code>hostPath</code> mounts a path from node filesystem, so if I understand correctly, this is not what you want to achieve. I'm afraid that it isn't possible do mount single file as a volume, so even if you remove <code>hostPath</code> from <code>volumes</code>, <code>docker.sock</code> will be mounted as directory:</p> <pre><code>jenkins@static-web:/$ ls -la /var/run/ total 20 drwxr-xr-x 1 root root 4096 Jun 5 14:44 . drwxr-xr-x 1 root root 4096 Jun 5 14:44 .. drwxrwxrwx 2 root root 4096 Jun 5 14:44 docker.sock </code></pre> <p>I would try to run docker daemon in dind container with TCP listener instead of sock file:</p> <pre><code>spec: containers: - name: jenkins image: cyrilpanicker/jenkins - name: docker image: docker:dind command: [&quot;dockerd&quot;] args: [&quot;-H&quot;, &quot;tcp://127.0.0.1:2376&quot;] ports: - containerPort: 2376 securityContext: privileged: true </code></pre> <pre><code>jenkins@static-web:/$ docker -H tcp://127.0.0.1:2376 ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES </code></pre> <p>And then configure jenkins to use <code>tcp://127.0.0.1:2376</code> as a remote docker daemon.</p>
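<p>One way to do that without passing <code>-H</code> on every invocation is to set the standard <code>DOCKER_HOST</code> variable on the Jenkins container (sketch):</p> <pre><code>      - name: jenkins
        image: cyrilpanicker/jenkins
        env:
          - name: DOCKER_HOST
            value: tcp://127.0.0.1:2376
</code></pre>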
Arek
<p>I'm running into an issue:</p> <p>Getting a health check to succeed for a .Net app running in an IIS Container when trying to use Container Native Load Balancing(CNLB).</p> <p>I have a Network Endpoint Group(NEG) created by an Ingress resource definition in GKE with a VPC Native Cluster.</p> <p>When I circumvent CNLB by either exposing the NodePort or making a service of type LoadBalancer, the site resolves without issue.</p> <p>All the pod conditions from a describe look good: <a href="https://i.stack.imgur.com/qwCM6.png" rel="nofollow noreferrer">pod readiness</a></p> <p>The network endpoints show up when running <code>describe endpoints</code>: <a href="https://i.stack.imgur.com/hmJEw.png" rel="nofollow noreferrer">ready addresses</a></p> <p>This is the health check that is generated by the load balancer: <a href="https://i.stack.imgur.com/3U33f.png" rel="nofollow noreferrer">GCP Health Check</a></p> <p>When hitting these endpoints from other containers or VMs in the same VPC, /health.htm responds with a 200. Here's from a container in the same namespace, though I have reproduced this with a Linux VM, not in the cluster but in the same VPC: <a href="https://i.stack.imgur.com/8Iydz.png" rel="nofollow noreferrer">endpoint responds</a></p> <p>But in spite of it all, the health check is reporting the pods in my NEG unhealthy: <a href="https://i.stack.imgur.com/7TVE6.png" rel="nofollow noreferrer">Unhealthy Endpoints</a></p> <p>The stackdriver logs confirm the requests are timing out but I'm not sure why when the endpoints are responding to other instances but not the LB: <a href="https://i.stack.imgur.com/nTIx9.png" rel="nofollow noreferrer">Stackdriver Health Check Log</a></p> <p>And I confirmed that GKE created what looks like the correct firewall rule that should allow traffic from the LB to the pods: <a href="https://i.stack.imgur.com/Jgcwu.png" rel="nofollow noreferrer">firewall</a></p> <p>Here is the YAML I'm working with:</p> <p>Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: subdomain.domain.tld name: subdomain-domain-tld namespace: subdomain-domain-tld spec: replicas: 3 selector: matchLabels: app: subdomain.domain.tld template: metadata: labels: app: subdomain.domain.tld spec: containers: - image: gcr.io/ourrepo/ourimage name: subdomain-domain-tld ports: - containerPort: 80 readinessProbe: httpGet: path: /health.htm port: 80 initialDelaySeconds: 60 periodSeconds: 60 timeoutSeconds: 10 volumeMounts: - mountPath: C:\some-secrets name: some-secrets nodeSelector: kubernetes.io/os: windows volumes: - name: some-secrets secret: secretName: some-secrets </code></pre> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: subdomain.domain.tld name: subdomain-domain-tld-service namespace: subdomain-domain-tld spec: ports: - port: 80 targetPort: 80 selector: app: subdomain.domain.tld type: NodePort </code></pre> <p>Ingress is extremely basic as we have no real need for multiple routes on this site, however, I'm suspecting whatever issues we're having are here.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: gce labels: app: subdomain.domain.tld name: subdomain-domain-tld-ingress namespace: subdomain-domain-tld spec: backend: serviceName: subdomain-domain-tld-service servicePort: 80 </code></pre> <p>Last somewhat relevant detail is I tried the steps present in this documentation and it worked but it's not identical to my situation as its not using Windows 
Containers nor Readiness Probes: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#using-pod-readiness-feedback" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#using-pod-readiness-feedback</a></p> <p>Any suggestions would be greatly appreciated. I've spent two days stuck on this and I'm sure it's obvious but I just can't see the problem.</p>
210rain
<p>Apparently it's not documented but this functionality doesn't work with Windows containers at the time of writing. I was able to get in touch with a GCP Engineer and they provided the following:</p> <blockquote> <p>After further investigation, I have found that Windows containers using LoadBalancer service works but, Windows containers using Ingress with NEGS is a limitation so, I have opened an internal case for updating the public documentation [1].</p> <p>Since, Ingress + NEG will not work (per the limitation), I suggest you to use any option you mentioned either exposing the NodePort or making a service of type LoadBalancer.</p> </blockquote>
210rain
<p>Summary of Setup</p> <p>I have a remote Openshift Cluster with three pods. An nginx pod that serves a web app, a .NET pod that serves a .NET web api, and a Postgres database pod.</p> <p>Problem</p> <p>I am able to connect the nginx pod to the .NET pod and have no problem making api request. I cannot however get communication from the .NET pod to the Postgres pod in the Openshift cluster. I can curl the Postgres pod from the .NET pod's terminal in Openshift web console and am able to connect the Postgres pod itself (not the database) using the Postrgres pod's service name, so DNS resolution of the Postgres pod is working. Using Openshift's port forwarding to forward traffic from my local machine's localhost:5432 to my Postgres Pod's port 5432, I can connect to the Postgres datbase while running the .NET API locally using the connection string <code>Host=localhost;Port=5432;Database=postgres;Username=postgres;Password=postgres</code>. I can query, insert, etc with no problem as long as my connection to the database local. So connecting to the database via localhost clearly works. But when inside the Openshift cluster, I can't get the .NET pod to connect to Postgres Pod database. The database should see a connection from another pod as a remote connection right? But it isn't working at all. The postgres logs don't show anything about a connection attempt either.</p> <p>My .NET uses <code>Host=postgres;Port=5432;Database=postgres;Username=postgres;Password=postgres</code>. This is stored in my appsettings.json. This is read in during in my Startup.cs.</p> <pre><code>// (U) add database context services.AddDbContext&lt;StigContext&gt;( options =&gt; options.UseNpgsql(Configuration[&quot;DbConnectionString&quot;]) ); </code></pre> <p>.NET database connection method, this endpoint returns true when I run and connect to the remote Postgres pod locally on my machine via Openshift's port forwarding.</p> <pre><code>public IActionResult TestDatabaseConnection() { try { stigContext.Database.OpenConnection(); return Ok(stigContext.Database.CanConnect()); } catch (Exception e) { var error = e.Message + &quot;\n&quot; + e.ToString() + &quot;\n&quot; + e.StackTrace; return Ok(error); } } </code></pre> <p>Postgres Deployment and Service, PVC is done through the Openshift web console</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres spec: selector: matchLabels: app: postgres replicas: 1 template: metadata: annotations: sidecar.istio.io/inject: &quot;true&quot; sidecar.istio.io/proxyCPULimit: 200m sidecar.istio.io/proxyMemory: 64Mi sidecar.istio.io/proxyMemoryLimit: 256Mi sidecar.istio.io/rewriteAppHTTPProbers: &quot;true&quot; labels: app: postgres version: v1 spec: containers: - name: postgres image: custom-docker-image-for-postgres:latest imagePullPolicy: Always env: - name: POSTGRES_DB value: postgres - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres - name: POSTGRES_HOST_AUTH_METHOD value: trust - name: PGDATA value: /var/lib/postgresql/data/mydata ports: - containerPort: 5432 volumeMounts: - name: postgres-pv-storage mountPath: /var/lib/postgresql/data resources: requests: memory: &quot;128Mi&quot; cpu: &quot;30m&quot; limits: memory: &quot;512Mi&quot; cpu: &quot;60m&quot; volumes: - name: postgres-pv-storage persistentVolumeClaim: claimName: postgres-pv-claim --- apiVersion: v1 kind: Service metadata: labels: app: postgres service: postgres name: postgres spec: ports: - name: http port: 5432 selector: app: postgres </code></pre> <p>I ran a few 
command on the running Postgres pod to show the running config...</p> <pre><code>$ postgres -C listen_addresses * </code></pre> <pre><code>$ postgres -C hba_file /var/lib/postgresql/data/mydata/pg_hba.conf $ cat /var/lib/postgresql/data/mydata/pg_hba.conf # TYPE DATABASE USER ADDRESS METHOD # &quot;local&quot; is for Unix domain socket connections only local all all trust # IPv4 local connections: host all all 127.0.0.1/32 trust # IPv6 local connections: host all all ::1/128 trust # Allow replication connections from localhost, by a user with the # replication privilege. local replication all trust host replication all 127.0.0.1/32 trust host replication all ::1/128 trust # warning trust is enabled for all connections # see https://www.postgresql.org/docs/12/auth-trust.html host all all all trust </code></pre> <pre><code>$ postgres -C port 5432 </code></pre> <p>It seems like it should be able to connect remotely via pod to pod communication but definitely does not. I am at a loss as to what the issue is. I have no idea why its not working. I can provide any additional info if needed. Hopefully this is clear. Any suggestions are appreciated.</p>
JCole91
<p>tl;dr: my service definition's port name for my Postgres service was 'http' and not 'tcp'. Istio uses the port name to detect the protocol, so it does matter what you put here.</p> <p>Solution found here: <a href="https://github.com/istio/istio/issues/16506#issuecomment-636224246" rel="nofollow noreferrer">https://github.com/istio/istio/issues/16506#issuecomment-636224246</a></p> <p>This may not be an issue if your deployment does not use Istio.</p> <p>The updated service yml for my Postgres deployment is:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: postgres service: postgres name: postgres spec: ports: - name: tcp port: 5432 selector: app: postgres </code></pre>
JCole91
<p>I've built a React application utilizing ViteJS with TypeScript. I've built the project using <code>tsc &amp;&amp; vite build</code>. I've then built the Docker image based on my Dockerfile:</p> <pre><code>FROM node:18.12.0 COPY build /build RUN npm i -g serve CMD [&quot;serve&quot;, &quot;-s&quot;, &quot;build&quot;, &quot;-p&quot;, &quot;5173&quot;] HEALTHCHECK CMD wget --no-verbose --tries=1 --spider http://localhost:5173/ || exit 1 </code></pre> <p>I've tested the Docker Image locally, and it builds as intended and when run the container boots up and the application is accessible and wors as it should.</p> <p>I've been following Microsofts documentation for utilizing an Nginx Ingress Controller: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">Create an ingress controller in Azure Kubernetes Service (AKS)</a></p> <p>So, my manifest for this service looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: aks-helloworld-one spec: replicas: 1 selector: matchLabels: app: aks-helloworld-one template: metadata: labels: app: aks-helloworld-one spec: containers: - name: aks-helloworld-one image: ghcr.io/akusasdesign/localizer:test-serve ports: - containerPort: 5173 env: - name: TITLE value: &quot;Welcome to Azure Kubernetes Service (AKS)&quot; imagePullSecrets: - name: github-container-registry --- apiVersion: v1 kind: Service metadata: name: aks-helloworld-one spec: type: ClusterIP ports: - port: 5173 selector: app: aks-helloworld-one </code></pre> <p>The service and Deployment are both created, but when moving to the url, the page is white/blank. Checking the console I can see the error message:</p> <pre><code>Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of &quot;text/html&quot;. Strict MIME type checking is enforced for module scripts per HTML spec. </code></pre> <p>I've done some searching, but can't find a reason/fix for this problem.</p>
Akusas
<p>The MIME type for <code>.mjs</code> files needs to be <code>text/javascript</code>.<br /> For more information, check out the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#aside_%E2%80%94_.mjs_versus_.js" rel="nofollow noreferrer">MDN documentation</a> about that topic.</p>
Tino
<p>I have a workload of 6 nodes in 1 node pool and some number of services, let's say 3n + k. Then I downscale them to n + k, and the load on all nodes drops from 80% to approximately 45-50%. I want my services to be rescheduled so that the overall number of nodes is reduced, but this does not happen. Why? Do I need to wait longer? Do I need to take some other action?</p>
DISCO
<p>Once the load decreases to 45 to 50%, GKE should automatically rearrange the workloads between nodes to effectively utilize the resources. However, this process may take some time as Kubernetes keeps nodes in a buffer, expecting the same amount of traffic for a certain duration. If the expected traffic doesn't materialize, Kubernetes should eventually take action on its own to rebalance the workloads. If this activity is not happening or if node downscaling is not occurring, it could be due to the following reasons:</p> <ol> <li><p>Pod Disruption Budget: If there is a Pod Disruption Budget (PDB) set for the workloads, GKE will not be able to kill running pods on the node, which would prevent the node drain from happening. PDBs define the minimum number of pods that must be available during a disruption, and they can prevent GKE from draining nodes when the PDB constraints are not met.</p> </li> <li><p>Pods Using Node Storage: If a pod is using local storage on a node, Kubernetes will avoid killing that pod to avoid data loss. In such cases, the workloads won't be rearranged until Kubernetes can safely terminate the pods without losing data.</p> </li> <li><p>Adding safe to evict annotation:By adding eviction annotations to deployments, administrators can ensure better pod re-arrangement, mitigating potential problems caused by pods not being evicted as expected.</p> </li> </ol> <blockquote> <pre><code>apiVersion: v1 kind: Pod | Deployment metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: &quot;true&quot; </code></pre> </blockquote> <p>To resolve issues with node downscaling and workload rearrangement, you should check and address the above points. If there are Pod Disruption Budgets in place, consider adjusting them to allow for more flexibility during node drain. Additionally, if pods are using local storage, consider migrating them to use network-attached storage or other persistent storage solutions that allow for better workload redistribution. Once these issues are addressed, Kubernetes should be able to handle node downscaling based on pod load and resource utilization effectively.</p>
Chandan A N
<p>I created a two-node cluster and a new job using the busybox image that sleeps for 300 seconds. I checked which node this job was running on using</p> <p><code>kubectl get pods -o wide</code></p> <p>I deleted the node, but surprisingly the job still finished running on the same node. Any idea if this is normal behavior? If not, how can I fix it?</p>
Luke Solomon
<p>Jobs aren't scheduled on or run on nodes themselves. The role of a Job is just to define a policy: it makes sure that a pod with certain specifications exists and runs until the task completes, whether it completes successfully or not.</p> <p>When you create a Job, you are declaring a policy that the built-in <code>job-controller</code> will see and create a pod for. Then the built-in <code>kube-scheduler</code> will see this pod without a node and patch it with a node's identity. The <code>kubelet</code> will see a pod assigned to a node matching its own identity, and hence a container will be started. As long as the container is still running, the <code>control-plane</code> knows that the node and the pod still exist.</p> <p>There are two ways of breaking a node: with a drain and without a drain. Breaking a node without draining is identical to a network cut or a server crash. The api-server will keep the node resource for a while, but it will cease being Ready, and the pods will then be terminated slowly. When you drain a node, on the other hand, you are effectively preventing new pods from being scheduled on the node and deleting its pods, as with <code>kubectl delete pod</code>.</p> <p>In both cases the pods are deleted, and you end up with a job that hasn't run to completion and doesn't have a pod. The <code>job-controller</code> will therefore create a new pod for the job, the job's failed attempts counter will be increased by 1, and the loop starts over again.</p>
Kareem Yasser
<p>I like to have multiple trainers running simultaneously using the same ExampleGen, Schema and Transform. Below is my code adding extra components as trainer2 evaluator2 and pusher2. But I've been getting the following error, and I'm not sure how to fix them. Can you please advise and thanks in advance! </p> <p><strong>Error:</strong> RuntimeError: Duplicated component_id Trainer for component type tfx.components.trainer.component.Trainer</p> <pre><code>def create_pipeline( pipeline_name: Text, pipeline_root: Text, data_path: Text, preprocessing_fn: Text, run_fn: Text, run_fn2: Text, train_args: trainer_pb2.TrainArgs, train_args2: trainer_pb2.TrainArgs, eval_args: trainer_pb2.EvalArgs, eval_args2: trainer_pb2.EvalArgs, eval_accuracy_threshold: float, eval_accuracy_threshold2: float, serving_model_dir: Text, serving_model_dir2: Text, metadata_connection_config: Optional[ metadata_store_pb2.ConnectionConfig] = None, beam_pipeline_args: Optional[List[Text]] = None, ai_platform_training_args: Optional[Dict[Text, Text]] = None, ai_platform_serving_args: Optional[Dict[Text, Any]] = None, ) -&gt; pipeline.Pipeline: """Implements the custom pipeline with TFX.""" components = [] example_gen = CsvExampleGen(input=external_input(data_path)) components.append(example_gen) schema_gen = SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False) components.append(schema_gen) transform = Transform( examples=example_gen.outputs['examples'], schema=schema_gen.outputs['schema'], preprocessing_fn=preprocessing_fn) components.append(transform) trainer_args = { 'run_fn': run_fn, 'transformed_examples': transform.outputs['transformed_examples'], 'schema': schema_gen.outputs['schema'], 'transform_graph': transform.outputs['transform_graph'], 'train_args': train_args, 'eval_args': eval_args, 'custom_executor_spec': executor_spec.ExecutorClassSpec(trainer_executor.GenericExecutor), } trainer = Trainer(**trainer_args) components.append(trainer) trainer_args2 = { 'run_fn': run_fn2, 'transformed_examples': transform.outputs['transformed_examples'], 'schema': schema_gen.outputs['schema'], 'transform_graph': transform.outputs['transform_graph'], 'train_args': train_args2, 'eval_args': eval_args2, 'custom_executor_spec': executor_spec.ExecutorClassSpec(trainer_executor.GenericExecutor), } trainer2 = Trainer(**trainer_args2) components.append(trainer2) model_resolver = ResolverNode( instance_name='latest_blessed_model_resolver', resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver, model=Channel(type=Model), model_blessing=Channel(type=ModelBlessing)) components.append(model_resolver) model_resolver2 = ResolverNode( instance_name='latest_blessed_model_resolver2', resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver, model=Channel(type=Model), model_blessing=Channel(type=ModelBlessing)) components.append(model_resolver2) evaluator = Evaluator( examples=example_gen.outputs['examples'], model=trainer.outputs['model'], #baseline_model=model_resolver.outputs['model'], # Change threshold will be ignored if there is no baseline (first run). eval_config=eval_config) components.append(evaluator) evaluator2 = Evaluator( examples=example_gen.outputs['examples'], model=trainer2.outputs['model'], baseline_model=model_resolver2.outputs['model'], # Change threshold will be ignored if there is no baseline (first run). 
eval_config=eval_config2) components.append(evaluator2) pusher_args = { 'model': trainer.outputs['model'], 'model_blessing': evaluator.outputs['blessing'], 'push_destination': pusher_pb2.PushDestination( filesystem=pusher_pb2.PushDestination.Filesystem( base_directory=serving_model_dir)), } pusher = Pusher(**pusher_args) components.append(pusher) pusher_args2 = { 'model': trainer2.outputs['model'], 'model_blessing': evaluator2.outputs['blessing'], 'push_destination': pusher_pb2.PushDestination( filesystem=pusher_pb2.PushDestination.Filesystem( base_directory=serving_model_dir2)), } pusher2 = Pusher(**pusher_args2) # pylint: disable=unused-variable components.append(pusher2) </code></pre>
LLTeng
<p>The above issue has been resolved by adding "instance_name" to each pipeline component to give it a unique name.</p>
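<p>For example (a sketch based on the pipeline above; only the extra argument is new, and depending on your TFX version you may need <code>.with_id('trainer2')</code> instead of <code>instance_name</code>):</p> <pre><code>trainer = Trainer(instance_name='trainer1', **trainer_args)
trainer2 = Trainer(instance_name='trainer2', **trainer_args2)

evaluator2 = Evaluator(
    instance_name='evaluator2',
    examples=example_gen.outputs['examples'],
    model=trainer2.outputs['model'],
    baseline_model=model_resolver2.outputs['model'],
    eval_config=eval_config2)
</code></pre>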
LLTeng
<p>I want to configure AWS NLB to store logs at the S3 bucket? I have:</p> <ul> <li>AWS EKS cluster (v1.15),</li> <li>NLB (created by Nginx controller),</li> <li>S3 bucket with AIM (done as described here: <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-access-logs.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-access-logs.html</a>).</li> </ul> <p>I've added these annotations to my terraform code to nginx ingress:</p> <pre><code>set { name = &quot;controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-access-log-enabled&quot; value = &quot;true&quot; } set { name = &quot;controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-access-log-s3-bucket-name&quot; value = &quot;nlb-logs-bucket&quot; } set { name = &quot;controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-access-log-s3-bucket-prefix&quot; value = &quot;/nlblogs&quot; } </code></pre> <p>I see that annotations are added to the controller, but in AWS console NLB settings didn't change (logs aren't saving to the bucket).</p>
fireman777
<p>I've found a solution. I hope, it will help anybody.</p> <p>As I understand, mentioned above annotations are only for ELB, and they don't work for NLB. I tried to update EKS to 1.16 and 1.17. It works for ELB, but not for NLB.</p> <p>So, the solution is - to use local-exec provision in Terraform for k8s. At least it works for me.</p> <p>Here is the code:</p> <pre><code>resource &quot;null_resource&quot; &quot;enable_s3_bucket_logging_on_nlb&quot; { triggers = { &lt;TRIGGERS&gt; } provisioner &quot;local-exec&quot; { command = &lt;&lt;EOS for i in $(aws elbv2 describe-load-balancers --region=&lt;REGION&gt; --names=$(echo ${data.kubernetes_service.nginx_ingress.load_balancer_ingress.0.hostname} |cut -d- -f1) | \ jq &quot;.[][] | { LoadBalancerArn: .LoadBalancerArn }&quot; |awk '{print $2}' |tr -d '&quot;'); do \ aws elbv2 modify-load-balancer-attributes --region=&lt;REGION&gt; --load-balancer-arn $i --attributes Key=access_logs.s3.enabled,Value=true \ Key=access_logs.s3.bucket,Value=nlb-logs-bucket Key=access_logs.s3.prefix,Value=nlblogs;\ done; \ EOS } } </code></pre> <p>where:</p> <ul> <li>&lt;TRIGGERS&gt; - condition for the trigger</li> <li>&lt;REGION&gt; - region of your NLB</li> </ul>
fireman777
<p>I have a question regarding the distribution of pods across nodes. Given a 3-node cluster, I want to deploy a workload with 2 replicas while making sure that these replicas are placed on different nodes in order to achieve high availability. What are the options other than using nodeAffinity?</p>
Rajit Sumbar
<p>First of all, node affinity allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node. Therefore it does not guarantee that each replica will be deployed on a different node or that these nodes will be distributed uniformly across the all of the nodes. However a valid solution will be using <strong>Pod Topology Spread Constraints</strong>.</p> <p>Pod Topology Spread Constraints rely on node labels to identify the topology domain(s) that each Node is in, and then using these labels to match with the pods having the same labels. You can define one or multiple <code>topologySpreadConstraint</code> to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster.</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: mypod labels: node: node1 spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: node: node1 containers: - name: myapp image: image_name </code></pre> <p><code>maxSkew: 1</code> describes the degree to which Pods may be unevenly distributed. It must be greater than zero. Its semantics differs according to the value of whenUnsatisfiable.</p> <p><code>topologyKey: zone</code> implies the even distribution will only be applied to the nodes which have label pair &quot;zone:&quot; present.</p> <p><code>whenUnsatisfiable: DoNotSchedule</code> tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint.</p> <p><code>labelSelector</code> is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain.</p> <p>You can find out more on <strong>Pod Topology Spread Constraints</strong> in this documentation: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/</a></p>
Kareem Yasser
<p>Given a template for the cronjob, is there a possible way to verify that the metadata that I configured is as intended? For example, in the following template I use only the cronjob name which is cronj, what is the purpose then of the job and pods metadata and when do they get used?</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: cronj spec: jobTemplate: metadata: name: cron-job spec: template: metadata: name: cron-pod </code></pre> <p>Also when doing scheduling, what is the difference between using <code>nodeName: controlplane</code> and <code>nodeSelector: kubernetes.io/hostname: master</code></p>
Silvia Wasur
<p>You can check out the CronJob documentation in Kubernetes: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a>. It states that a CronJob creates Jobs, and each Job creates one or more Pods; that is why the job and pod metadata exist. You can double-check the names you configured by describing the created pod.</p> <p>As for the difference between nodeName and nodeSelector: nodeName places the pod on that node directly and bypasses the scheduler, while nodeSelector is a constraint that the k8s scheduler uses when choosing a node. If you set nodeName, it overrides the scheduler.</p>
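<p>To see the two side by side (a sketch; the pod names and the <code>nginx</code> image are placeholders, the node values are taken from your question):</p> <pre><code># nodeName: you pick the node yourself; the scheduler is bypassed entirely
apiVersion: v1
kind: Pod
metadata:
  name: pod-on-controlplane
spec:
  nodeName: controlplane
  containers:
    - name: app
      image: nginx
---
# nodeSelector: a constraint the kube-scheduler evaluates when choosing a node
apiVersion: v1
kind: Pod
metadata:
  name: pod-on-master
spec:
  nodeSelector:
    kubernetes.io/hostname: master
  containers:
    - name: app
      image: nginx
</code></pre>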
Kareem Yasser
<ol> <li>Let's say,I have 3 nginx pods listening over port 80.</li> <li>Let's say, I have once service <code>nginx-svc</code> of type <code>ClusterIP</code>, listening over port <code>8080</code> and forwarding requests to above nginx pods over port <code>80</code>.</li> <li>Let's say, I have another pod <code>busybox</code>.</li> </ol> <p>How can I configure the <code>network-policy</code> to allow <code>busybox</code> pod to access <code>nginx</code> pods only via service <code>nginx-svc</code> but not directly?</p>
Ankit Agarwal
<p>You can't achieve it exactly the way you presented it, but you can <a href="https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/#limit-access-to-the-nginx-service" rel="nofollow noreferrer">restrict access to the pods to clients with specific labels</a> using a <code>NetworkPolicy</code>.</p> <p>For example, if you have a deployment with the label <code>app: nginx</code> (you can create it using <code>kubectl create deployment nginx --image=nginx</code>) and you expose it using <code>kubectl expose deployment nginx --port=80</code>, you can create the following NetworkPolicy:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: access-nginx spec: podSelector: matchLabels: app: nginx ingress: - from: - podSelector: matchLabels: access: &quot;true&quot; </code></pre> <p>It restricts access so that only pods with the label <code>access=&quot;true&quot;</code> can reach the pods labelled <code>app=nginx</code>.</p> <p>After this network policy is applied, you can create a busybox pod with the label <code>access=&quot;true&quot;</code>, and it will be able to communicate with those pods.</p>
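<p>For example, a quick way to create such a test pod and verify connectivity (the image and sleep duration are just placeholders):</p> <pre><code>kubectl run busybox --image=busybox --labels="access=true" -- sleep 3600
kubectl exec busybox -- wget -qO- http://nginx
</code></pre>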
kool