<p>I am deploying my application in a read only kubernetes cluster, so I am using volumes and volumeMounts for tmp folder for apache server. Upon start of apache server within read only pod, I am getting this error:</p> <pre><code>chown: changing ownership of '/var/lock/apache2.fm2cgWmnxk': Operation not permitted </code></pre> <p>I came across this issue <a href="https://stackoverflow.com/questions/43544370/kubernetes-how-to-set-volumemount-user-group-and-file-permissions">Kubernetes: how to set VolumeMount user group and file permissions</a> and tried using SecurityContext.fsGroup but still getting same issue.</p> <p>Here is my deployment.yaml for reference:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: &amp;DeploymentName abc spec: replicas: 1 selector: matchLabels: &amp;appName app: *DeploymentName template: metadata: name: main labels: &lt;&lt;: *appName spec: securityContext: fsGroup: 2000 runAsNonRoot: true runAsUser: 1000 runAsGroup: 3000 fsGroupChangePolicy: &quot;OnRootMismatch&quot; volumes: - name: var-lock emptyDir: {} containers: - name: *DeploymentName image: abc-image ports: - containerPort: 80 volumeMounts: - mountPath: /var/lock name: var-lock readinessProbe: tcpSocket: port: 80 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 80 initialDelaySeconds: 300 periodSeconds: 180 imagePullPolicy: Always tty: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name envFrom: - configMapRef: name: *DeploymentName resources: limits: cpu: 1 memory: 2Gi requests: cpu: 1 memory: 2Gi --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: &amp;hpaName abc spec: maxReplicas: 1 minReplicas: 1 scaleTargetRef: apiVersion: extensions/v1beta1 kind: Deployment name: *hpaName targetCPUUtilizationPercentage: 60 </code></pre> <p>Any help is appreciated.</p>
<p>Hello, hope you are enjoying your Kubernetes journey !</p> <p>I wanted to try this on my kind (Kubernetes in docker) cluster locally. So this is what I've done:</p> <p>First, I set up a kind cluster locally with this configuration (info here: <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>):</p> <pre><code>kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 name: so-cluster-1 nodes: - role: control-plane image: kindest/node:v1.23.5 - role: control-plane image: kindest/node:v1.23.5 - role: control-plane image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 - role: worker image: kindest/node:v1.23.5 </code></pre> <p>After this, I created my cluster with this command:</p> <pre><code>kind create cluster --config=config.yaml </code></pre> <p>Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: so-tests </code></pre> <p>From there, with my environment set up, I used your deployment config and replaced the deploymentName, appName and hpaName occurrences with &quot;so-71823613&quot; (stack-overflow and your question id), but for the test, I decided not to use the hpa config.</p> <p>Next, since you did not provide the image you are using for apache, I used the dockerhub image httpd:2.4.53 (<a href="https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore" rel="nofollow noreferrer">https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore</a>)</p> <p>Again, since I don't have your configmap config, I decided to comment out the part where you get env variables from the configmap.</p> <p>Since the default user in the httpd image is &quot;www-data&quot;, I first deployed the pod without any securityContext just to get the id of that user:</p> <pre><code>❯ k exec -it pod/so-71823613-555d8b454-z5ks5 -- id www-data uid=33(www-data) gid=33(www-data) groups=33(www-data) </code></pre> <p>Once I knew the id of the www-data user, I modified the securityContext. I kept the rest of the configuration (probes, volume etc.) as you configured them; here is the manifest now:</p> <p>In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 33 (www-data). The runAsGroup field specifies the primary group ID of 33 for all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers will be root (0). Any files created will also be owned by user 33 and group 33 when runAsGroup is specified. Since the fsGroup field is specified, all processes of the container are also part of the supplementary group ID 33. The owner of the volume &quot;/var/lock&quot; and any files created in that volume will be group ID 33. ... fsGroupChangePolicy - fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a Pod. This field only applies to volume types that support fsGroup controlled ownership and permissions. This field has two possible values:</p> <p>OnRootMismatch: Only change permissions and ownership if the permission and ownership of the root directory do not match the expected permissions of the volume. 
This could help shorten the time it takes to change ownership and permission of a volume. Always: Always change permission and ownership of the volume when volume is mounted.</p> <p>(description from here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>)</p> <p>So, once I deployed my configuration using:</p> <pre><code>kubectl apply -f deployment.yaml deployment.apps/so-71823613 created </code></pre> <p>I got this error:</p> <pre><code> k logs -f pod/so-71823613-7c5b65df4d-6scg5 AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message (13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down AH00015: Unable to open logs </code></pre> <p>So, to fix the first error line, I exec'd into the pod again to fetch the httpd.conf file with:</p> <pre><code>k exec -it pod/so-71823613-555d8b454-fgjcs -- cat /usr/local/apache2/conf/httpd.conf &gt; httpd.conf </code></pre> <p>Once I had the httpd.conf file, I modified it by adding:</p> <pre><code>ServerName localhost:8080 </code></pre> <p>(cf <a href="https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html" rel="nofollow noreferrer">https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html</a>)</p> <p>Then I put the new httpd.conf file into a configmap named &quot;httpconf&quot;, and modified the deployment to mount the configmap into the right place, replacing the original file (here -&gt; &quot;/usr/local/apache2/conf/httpd.conf&quot;) with:</p> <pre><code> ... volumeMounts: ... - name: &quot;config&quot; mountPath: &quot;/usr/local/apache2/conf/httpd.conf&quot; subPath: &quot;httpd.conf&quot; volumes: ... - name: &quot;config&quot; configMap: name: &quot;httpconf&quot; ... ❯ kubectl apply -f configmap.yaml -f deployment.yaml configmap/httpconf created deployment.apps/so-71823613 created </code></pre> <p>Then this error remained:</p> <pre><code>(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 </code></pre> <p>So, to fix it, I changed the Apache listening port directly in the configmap's httpd.conf file (according to this: <a href="https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when">https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when</a>)</p> <pre><code>Listen 8080 ServerName localhost:8080 </code></pre> <p>Since I am now exposing port 8080, I also modified the probes and the container port accordingly:</p> <pre><code>... ports: - containerPort: 8080 readinessProbe: tcpSocket: port: 8080 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 8080 ... 
</code></pre> <p>After reapplying my config I got this new error:</p> <pre><code>❯ k logs -f pod/so-71823613-7dd7bdb66d-qtf9t [Wed Apr 20 05:50:57.863971 2022] [core:error] [pid 1:tid 139771999915328] (13)Permission denied: AH00099: could not create /usr/local/apache2/logs/httpd.pid.KYUI5g [Wed Apr 20 05:50:57.864061 2022] [core:error] [pid 1:tid 139771999915328] AH00100: httpd: could not log pid to file /usr/local/apache2/logs/httpd.pid </code></pre> <p>To fix that issue, i used your workaround with the emptyDir and added this:</p> <pre><code> volumeMounts: ... - mountPath: /usr/local/apache2/logs/ name: apache2-logs volumes: ... - name: apache2-logs emptyDir: {} </code></pre> <p>here are the manifests:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: so-71823613 spec: replicas: 1 selector: matchLabels: app: so-71823613 template: metadata: name: main labels: app: so-71823613 spec: securityContext: fsGroup: 33 runAsNonRoot: true runAsUser: 33 runAsGroup: 33 fsGroupChangePolicy: &quot;OnRootMismatch&quot; containers: - name: so-71823613 image: httpd:2.4.53 ports: - containerPort: 8080 readinessProbe: tcpSocket: port: 8080 initialDelaySeconds: 180 periodSeconds: 60 livenessProbe: tcpSocket: port: 8080 initialDelaySeconds: 300 periodSeconds: 180 imagePullPolicy: Always tty: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # envFrom: # - configMapRef: # name: so-71823613 resources: limits: cpu: 1 memory: 2Gi requests: cpu: 1 memory: 2Gi volumeMounts: - mountPath: /var/lock name: var-lock - mountPath: /usr/local/apache2/logs/ name: apache2-logs - name: &quot;config&quot; mountPath: &quot;/usr/local/apache2/conf/httpd.conf&quot; subPath: &quot;httpd.conf&quot; volumes: - name: var-lock emptyDir: {} - name: apache2-logs emptyDir: {} - name: &quot;config&quot; configMap: name: &quot;httpconf&quot; --- apiVersion: v1 kind: ConfigMap metadata: name: httpconf data: httpd.conf: | ServerRoot &quot;/usr/local/apache2&quot; Listen 8080 LoadModule mpm_event_module modules/mod_mpm_event.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule filter_module modules/mod_filter.so LoadModule mime_module modules/mod_mime.so LoadModule log_config_module modules/mod_log_config.so LoadModule env_module modules/mod_env.so LoadModule headers_module modules/mod_headers.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule unixd_module modules/mod_unixd.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so &lt;IfModule !mpm_prefork_module&gt; &lt;/IfModule&gt; &lt;IfModule mpm_prefork_module&gt; &lt;/IfModule&gt; LoadModule dir_module modules/mod_dir.so LoadModule alias_module modules/mod_alias.so &lt;IfModule unixd_module&gt; User www-data Group www-data &lt;/IfModule&gt; ServerAdmin [email protected] ServerName localhost:8080 &lt;Directory /&gt; AllowOverride none Require all denied &lt;/Directory&gt; DocumentRoot &quot;/usr/local/apache2/htdocs&quot; &lt;Directory &quot;/usr/local/apache2/htdocs&quot;&gt; 
Options Indexes FollowSymLinks AllowOverride None Require all granted &lt;/Directory&gt; &lt;IfModule dir_module&gt; DirectoryIndex index.html &lt;/IfModule&gt; &lt;Files &quot;.ht*&quot;&gt; Require all denied &lt;/Files&gt; ErrorLog /proc/self/fd/2 LogLevel warn &lt;IfModule log_config_module&gt; LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot;&quot; combined LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b&quot; common &lt;IfModule logio_module&gt; LogFormat &quot;%h %l %u %t \&quot;%r\&quot; %&gt;s %b \&quot;%{Referer}i\&quot; \&quot;%{User-Agent}i\&quot; %I %O&quot; combinedio &lt;/IfModule&gt; CustomLog /proc/self/fd/1 common &lt;/IfModule&gt; &lt;IfModule alias_module&gt; ScriptAlias /cgi-bin/ &quot;/usr/local/apache2/cgi-bin/&quot; &lt;/IfModule&gt; &lt;IfModule cgid_module&gt; &lt;/IfModule&gt; &lt;Directory &quot;/usr/local/apache2/cgi-bin&quot;&gt; AllowOverride None Options None Require all granted &lt;/Directory&gt; &lt;IfModule headers_module&gt; RequestHeader unset Proxy early &lt;/IfModule&gt; &lt;IfModule mime_module&gt; TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz &lt;/IfModule&gt; &lt;IfModule proxy_html_module&gt; Include conf/extra/proxy-html.conf &lt;/IfModule&gt; &lt;IfModule ssl_module&gt; SSLRandomSeed startup builtin SSLRandomSeed connect builtin &lt;/IfModule&gt; # --- # apiVersion: autoscaling/v1 # kind: HorizontalPodAutoscaler # metadata: # name: so-71823613 # spec: # maxReplicas: 1 # minReplicas: 1 # scaleTargetRef: # apiVersion: extensions/v1beta1 # kind: Deployment # name: so-71823613 # targetCPUUtilizationPercentage: 60 </code></pre> <p>after waiting the initialDelaySeconds of the probes, I finally get my pod up and running correctly:</p> <pre><code>Every 1.0s: kubectl get po,svc,cm -o wide DESKTOP-6PBJAOK: Wed Apr 20 03:15:02 2022 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/so-71823613-897768549-mcmb4 1/1 Running 0 4m13s 10.244.4.4 so-cluster-1-worker3 &lt;none&gt; &lt;none&gt; NAME DATA AGE configmap/httpconf 1 4m14s </code></pre> <p>Bonus:</p> <p>I then decided to expose the http deployment with a service, here is the manifest (obtained from &quot; k expose deployment so-71823613 --port 80 --target-port 8080 --dry-run=client -o yaml&quot;:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: so-71823613 spec: ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: so-71823613 </code></pre> <p>as you can see, I port-forwarded the 8080 pod port to 80 in the service (you can also use an ingress controller to expose the service outside of the cluster )</p> <p>tried this on my machine:</p> <pre><code>❯ k port-forward service/so-71823613 8080:80 Forwarding from 127.0.0.1:8080 -&gt; 8080 Forwarding from [::1]:8080 -&gt; 8080 Handling connection for 8080 </code></pre> <p>and here is the result:</p> <p><a href="https://i.stack.imgur.com/HWIXE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWIXE.png" alt="enter image description here" /></a></p> <p>TADA !</p> <p>To conclude, I tried to reproduce the best i could with your provided information (It was kinda cool), so if this does not work for you, it means that i need more information. Thank you for your lecture. bguess.</p>
<p>I am working through &quot;learn kubernetes the hard way&quot; and am at the &quot;bootstrapping the etcd cluster&quot; step: <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md</a></p> <p>I have run into a command that causes a timeout:</p> <pre><code>somersbmatthews@controller-0:~$ { sudo systemctl daemon-reload; sudo systemctl enable etcd; sudo systemctl start etcd; } Job for etcd.service failed because a timeout was exceeded. See &quot;systemctl status etcd.service&quot; and &quot;journalctl -xe&quot; for details. </code></pre> <p>Here I follow the above recommendations:</p> <p>This is the first thing the CLI asked me to check:</p> <pre><code>somersbmatthews@controller-0:~$ systemctl status etcd.service ● etcd.service - etcd Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled) Active: activating (start) since Wed 2020-12-02 03:15:05 UTC; 34s ago Docs: https://github.com/coreos Main PID: 49251 (etcd) Tasks: 8 (limit: 9544) Memory: 10.2M CGroup: /system.slice/etcd.service └─49251 /usr/local/bin/etcd --name controller-0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file&gt; Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 is starting a new election at term 570 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 became candidate at term 571 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 571 Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 is starting a new election at term 571 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 became candidate at term 572 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 572 Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; </code></pre> <p>The second thing the CLI asked me to check:</p> <pre><code>somersbmatthews@controller-0:~$ journalctl -xe -- A stop job for unit etcd.service has finished. -- -- The job identifier is 3597 and the job result is done. Dec 02 03:05:32 controller-0 systemd[1]: Starting etcd... -- Subject: A start job for unit etcd.service has begun execution -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- A start job for unit etcd.service has begun execution. -- -- The job identifier is 3597. 
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead Dec 02 03:05:32 controller-0 etcd[48861]: etcd Version: 3.4.10 Dec 02 03:05:32 controller-0 etcd[48861]: Git SHA: 18dfb9cca Dec 02 03:05:32 controller-0 etcd[48861]: Go Version: go1.12.17 Dec 02 03:05:32 controller-0 etcd[48861]: Go OS/Arch: linux/amd64 Dec 02 03:05:32 controller-0 etcd[48861]: setting maximum number of CPUs to 2, total number of available CPUs is 2 Dec 02 03:05:32 controller-0 etcd[48861]: the server is already initialized as member before, starting as etcd member... Dec 02 03:05:32 controller-0 etcd[48861]: peerTLS: cert = /etc/etcd/kubernetes.pem, key = /etc/etcd/kubernetes-key.pem, trusted-ca = /etc/etcd/ca.pem, cli&gt; Dec 02 03:05:32 controller-0 etcd[48861]: name = controller-0 Dec 02 03:05:32 controller-0 etcd[48861]: data dir = /var/lib/etcd Dec 02 03:05:32 controller-0 etcd[48861]: member dir = /var/lib/etcd/member Dec 02 03:05:32 controller-0 etcd[48861]: heartbeat = 100ms Dec 02 03:05:32 controller-0 etcd[48861]: election = 1000ms Dec 02 03:05:32 controller-0 etcd[48861]: snapshot count = 100000 Dec 02 03:05:32 controller-0 etcd[48861]: advertise client URLs = https://10.240.0.10:2379 Dec 02 03:05:32 controller-0 etcd[48861]: initial advertise peer URLs = https://10.240.0.10:2380 Dec 02 03:05:32 controller-0 etcd[48861]: initial cluster = Dec 02 03:05:32 controller-0 etcd[48861]: restarting member f98dc20bce6225a0 in cluster 3e7cc799faffb625 at commit index 3 Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=() Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 became follower at term 183 Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: newRaft f98dc20bce6225a0 [peers: [], term: 183, commit: 3, applied: 0, lastindex: &gt; Dec 02 03:05:32 controller-0 etcd[48861]: simple token is not cryptographically signed Dec 02 03:05:32 controller-0 etcd[48861]: starting server... [version: 3.4.10, cluster version: to_be_decided] Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=(4203990652121993521) Dec 02 03:05:32 controller-0 etcd[48861]: added member 3a57933972cb5131 [https://10.240.0.12:2380] to cluster 3e7cc799faffb625 Dec 02 03:05:32 controller-0 etcd[48861]: starting peer 3a57933972cb5131... 
Dec 02 03:05:32 controller-0 etcd[48861]: started HTTP pipelining with peer 3a57933972cb5131 Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer) Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer) Dec 02 03:05:32 controller-0 etcd[48861]: started peer 3a57933972cb5131 somersbmatthews@controller-0:~$ journalctl -xe Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 is starting a new election at term 224 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 became candidate at term 225 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 225 Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 is starting a new election at term 225 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 became candidate at term 226 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 226 Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 is starting a new election at term 226 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 became candidate at term 227 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 227 Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:35 controller-0 etcd[48861]: publish error: etcdserver: request timed out Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 is starting a new election at term 227 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 became candidate at term 228 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: 
f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 228 Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 is starting a new election at term 228 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 became candidate at term 229 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 229 Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 is starting a new election at term 229 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 became candidate at term 230 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 230 Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a&gt; Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a&gt; Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 is starting a new election at term 230 Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 became candidate at term 231 Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 231 </code></pre> <p>So I redo the step that I think allows what is not being allowed above <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules</a>:</p> <pre><code>somersbmatthews@controller-0:~$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ &gt; --allow tcp,udp,icmp \ &gt; --network kubernetes-the-hard-way \ &gt; --source-ranges 10.240.0.0/24,10.200.0.0/16 Creating firewall...failed. 
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource: - The resource 'projects/k8s-hard-way-2571/global/firewalls/kubernetes-the-hard-way-allow-internal' already exists </code></pre> <p>but I'm still getting the timeout errors above.</p> <p>Any help is appreciated, thanks :)</p>
<p>I encountered a similar error. First, I had made a mistake by using the master-1 IP address in listen-peer-urls, listen-client-urls, advertise-client-urls &amp; listen-client-urls.</p> <p>Second, test whether you can reach port 2380 on the peers (for example with telnet). If that doesn't work, open the firewall on both masters:</p> <p>sudo firewall-cmd --add-port=2380/tcp --permanent</p> <p>sudo systemctl restart firewalld</p> <p>Also, make sure the masters do not have a considerable time difference between their clocks.</p>
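<p>To make that connectivity check concrete, here is a rough sketch, assuming the tutorial's controller IPs (adjust them to your own environment):</p>
<pre><code># from controller-0: can we reach the etcd peer port on the other controllers?
nc -zv 10.240.0.11 2380
nc -zv 10.240.0.12 2380

# verify the unit advertises this node's own IP, not another controller's
grep -E 'listen-peer-urls|listen-client-urls|initial-advertise-peer-urls|advertise-client-urls' \
  /etc/systemd/system/etcd.service
</code></pre>
<p>Note that on GCP (where this tutorial runs) the equivalent of the firewalld commands above is a VPC firewall rule allowing tcp:2380 between the controllers, as in the tutorial's compute-resources step.</p>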
<p>I have an app running in a kubernetes managed docker container, using Azure Kubernetes Service (AKS). I can output logs to a text file for the app deployment using:</p> <pre><code>kubectl logs deployment/my-deployment &gt; out.txt </code></pre> <p>This gives me a file of around max 28Mb. When I get the docker logs for the same container on the physical VM using <code>docker logs ...</code>, the log file is much bigger (up to 120Mb+).</p> <p>How can I increase the size of the available <code>kubectl logs</code> for the deployment? If this option is available, then it would likely be an option that increases the size of the available <code>kubectl logs</code> for the <em>pod</em> that holds the container, as the pod and deployment logs are identical.</p> <p>It's not the docker configuration in <code>/etc/docker/daemon.json</code> that's limiting the <code>kubectl</code> logs, as that's set to 50Mb. I've read that it's the underlying docker configuration that kubernetes uses, but that doesn't seem to be the case, as my<code>kubectl</code> logs are being truncated to around 28Mb.</p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer"><code>Kubectl logs</code></a> might read logs with a default log rotation, meaning the <a href="https://stackoverflow.com/a/39398892/6309">logrotate service is active</a>.</p> <p>Check the content of cat /etc/logrotate.d/docker-containers, as in <a href="https://github.com/kubernetes/kubernetes/issues/11046" rel="nofollow noreferrer">this issue</a>, for confirmation.</p> <p>As explained in <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/logging/#:%7E:text=Kubernetes%20uses%20the%20logrotate%20tool,for%20the%20container%20are%20lost." rel="nofollow noreferrer"><code>unofficial-kubernetes</code></a>:</p> <blockquote> <p>An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node. Kubernetes uses the <code>logrotate</code> tool to implement log rotation.</p> <p>Kubernetes performs log rotation daily, or if the log file grows beyond 10MB in size.<br /> Each rotation belongs to a single container; if the container repeatedly fails or the pod is evicted, all previous rotations for the container are lost.<br /> <strong>By default, Kubernetes keeps up to five logging rotations per container</strong>.</p> </blockquote>
<p>Let's take this processor as an example: <em>a CPU with 2 cores and 4 threads (2 threads per core)</em>.</p> <p>From what I've read, such a CPU has 2 physical cores but can process 4 threads simultaneously through hyper threading. But, in reality, one physical core can only truly run one thread at a time, but using hyper threading, the CPU exploits the idle stages in the pipeline to process another thread.</p> <p>Now, here is <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="noreferrer">Kubernetes</a> with Prometheus and Grafana and their CPU resource units measurement - <code>millicore/millicpu</code>. So, they virtually slice a core to 1000 millicores.</p> <p>Taking into account the hyper threading, I can't understand how they calculate those millicores under the hood.</p> <p>How can a process, for example, use 100millicore (10th part of the core)? How is this technically possible?</p> <hr /> <p>PS: accidentally, found a really descriptive explanation here: <a href="https://stackoverflow.com/questions/61851751/multi-threading-with-millicores-in-kubernetes">Multi threading with Millicores in Kubernetes</a></p>
<p>This gets very complicated. k8s doesn't actually manage this; it just provides a layer on top of the underlying container runtime (docker, containerd etc). When you configure a container to use 100 <code>millicore</code>, k8s hands that down to the underlying container runtime and the runtime deals with it. Once you start going to this level you have to start looking at the Linux kernel and how it does cpu scheduling / rate limiting with cgroups, which becomes incredibly interesting and complicated. In a nutshell though: the Linux <code>CFS Bandwidth Control</code> is the thing that manages how much cpu a process (container) can use. By setting the <code>quota</code> and <code>period</code> params for the scheduler you can control how much CPU is used, by controlling how long a process can run before being paused and how often it runs. As you correctly identified, you can't use only a 10th of a core. But you can use a 10th of the time, and by doing that you effectively use only a 10th of the core over time.</p> <p>For example:</p> <p>If I set <code>quota</code> to 250ms and <code>period</code> to 250ms, that tells the kernel that this <code>cgroup</code> can use 250ms of CPU cycle time every 250ms, which means it can use 100% of the CPU.</p> <p>If I set <code>quota</code> to 500ms and keep the <code>period</code> at 250ms, that tells the kernel that this <code>cgroup</code> can use 500ms of CPU cycle time every 250ms, which means it can use 200% of the CPU (2 cores).</p> <p>If I set <code>quota</code> to 125ms and keep the <code>period</code> at 250ms, that tells the kernel that this <code>cgroup</code> can use 125ms of CPU cycle time every 250ms, which means it can use 50% of the CPU.</p> <p>This is a very brief explanation. Here is some further reading:</p> <p><a href="https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7" rel="noreferrer">https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7</a> <a href="https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html" rel="noreferrer">https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html</a></p>
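<p>As a rough illustration of how a Kubernetes CPU limit maps onto those CFS parameters (the runtime's default period is 100ms; the numbers below are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        # 100m -&gt; roughly cpu.cfs_quota_us = 10000 with cpu.cfs_period_us = 100000,
        # i.e. the container may run 10ms out of every 100ms: a 10th of a core over time
        cpu: &quot;100m&quot;
</code></pre>
<p>On a node using cgroup v1 you can inspect the resulting values for the pod's cgroup under <code>/sys/fs/cgroup/cpu/</code> (the <code>cpu.cfs_quota_us</code> and <code>cpu.cfs_period_us</code> files).</p>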
<p>I have two variable containing my namespaces names:</p> <pre><code>$KUBE_NAMESPACE_DEV =&quot;stellacenter-dev&quot; $KUBE_NAMESPACE_STAGE &quot;stellacenter-stage-uat&quot; </code></pre> <p>Now I want to modify the following .gitlab-ci.yaml configuration to include the namespace logic:</p> <pre><code>deploy_dev: stage: deploy image: stellacenter/aws-helm-kubectl before_script: - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID} - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY} - aws configure set region ${DEV_AWS_DEFAULT_REGION} script: - sed -i &quot;s/&lt;VERSION&gt;/${CI_COMMIT_SHORT_SHA}/g&quot; provider-service.yml - mkdir -p $HOME/.kube - cp $KUBE_CONFIG_DEV $HOME/.kube/config - chown $(id -u):$(id -g) $HOME/.kube/config - export KUBECONFIG=$HOME/.kube/config - kubectl apply -f ./provider-service.yml only: - developer </code></pre> <p>Provide-service.yml file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: provider-app namespace: &quot;stellacenter-dev&quot; or &quot;stellacenter-stage-uat&quot; labels: app: provider-app spec: replicas: 1 selector: matchLabels: app : provider-app template: metadata: labels: app: provider-app spec: containers: - name: provider-app image: registry.gitlab.com/stella-center/backend-services/provider-service:&lt;VERSION&gt; imagePullPolicy: Always ports: - containerPort: 8092 imagePullSecrets: - name: gitlab-registry-token-auth --- apiVersion: v1 kind: Service metadata: name: provider-service namespace: &quot;stellacenter-dev&quot; &quot;stellacenter-stage-uat&quot; spec: type: NodePort selector: app: provider-app ports: - port: 8092 targetPort: 8092 </code></pre> <p>I don't know how to integrate the variables and the values correctly . I'm facing the error while I run pipeline.Kindly help me to sort it out.</p>
<p>You can remove the <code>namespace: NAMESPACE</code> from the manifest, and apply the resource in a namespace using the commandline.</p> <pre><code>- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV} - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE} </code></pre>
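<p>For instance, the dev job from the question could pass the namespace on the command line like this (a sketch, assuming <code>$KUBE_NAMESPACE_DEV</code> is defined as a GitLab CI/CD variable; the stage job would use <code>$KUBE_NAMESPACE_STAGE</code> in the same way):</p>
<pre><code>deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  script:
    - sed -i &quot;s/&lt;VERSION&gt;/${CI_COMMIT_SHORT_SHA}/g&quot; provider-service.yml
    - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - developer
</code></pre>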
<p>I have tried to deploy Camunda Platform 8 using the Helm Chart (<a href="https://artifacthub.io/packages/helm/camunda-platform/camunda-platform" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/camunda-platform/camunda-platform</a>) but am unable to use an ingress or load balancer IP for the identity service, as it redirects to localhost:18080 for Keycloak and there is no option for changing localhost:18080 to the ingress IP in the helm chart. So please suggest how to deploy Camunda Platform 8 in production on Kubernetes, mainly GKE.</p>
<p>Like any helm chart, you can override the configuration values via a separate (custom) values.yaml file. The entry you need in your yaml file is the global value <code>identity.auth.publicIssuerUrl</code>, set to your custom token issuer endpoint.</p> <p>More details on helm global variables and how to use/override them can be found here: <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values</a></p>
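<p>Expressed as a nested YAML block, the override could look roughly like this (a sketch: the placeholder URL is an assumption, and the exact key layout should be checked against the chart version you deploy):</p>
<pre><code># custom-values.yaml
global:
  identity:
    auth:
      publicIssuerUrl: &quot;https://YOUR-INGRESS-HOST/auth/realms/camunda-platform&quot;
</code></pre>
<p>You would then install with something along the lines of <code>helm install camunda camunda/camunda-platform -f custom-values.yaml</code> (repo alias assumed).</p>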
<p>I have joined a project where we deploy our apps on Azure cloud. The apps are in docker containers orchestrated by kubernetes.</p> <p>We get the data from a POD log file with this command (see below):</p> <p>kubectl logs --namespace=app-prod <br /> deployment/app-fd-prod <br /> &gt; $(logFile)</p> <p>where logFile is a variable.</p> <p>Now the problem is, yesterday the log file was 27MB. Today the log file is 2MB, which means the old data got deleted somehow.</p> <p>I have read about logrotate but I don't think we have implemented that.</p> <p>I want to know what could be the reason yesterday's log text was deleted.</p> <p>Any leads or help would be appreciated.</p>
<p>pod logs get rotated for you as pointed out by @jordanm in the comments. You need a log aggregation tool like the ELK/EFK stack (Elastic + Logstash + kibana or Elastic + fluentd + kibana) or Promtail + loki + grafana. These stacks tail the logs of containers, do some fancy parsing / manipulation and ships it to persistent storage like S3.</p> <p>Side note you can take a look at the <a href="https://github.com/banzaicloud/logging-operator" rel="nofollow noreferrer">logging operator</a> that makes the config of these things a little easier.</p>
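<p>As one concrete (hedged) starting point, the Promtail + Loki + Grafana combination can be installed with Helm roughly like this; chart names and value flags can differ between versions, so treat it as a sketch:</p>
<pre><code>helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# installs Loki plus Promtail (the log shipper) and Grafana for querying
helm install loki grafana/loki-stack \
  --namespace logging --create-namespace \
  --set promtail.enabled=true,grafana.enabled=true
</code></pre>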
<p>We are following the below article to establish C# client connection to the Ignite Cluster, both of them deployed in the Kubernetes.</p> <p><a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service</a></p> <p>We do not find the equivalent C# class/method to perform the connection configuration in the C# client application.</p> <p><a href="https://i.stack.imgur.com/JJ3Gq.png" rel="nofollow noreferrer">enter image description here</a></p> <p>Please help us to find alternate methods to do the connection configuration for Kubernetes.</p>
<p>This API is not yet available for .NET, the <a href="https://issues.apache.org/jira/browse/IGNITE-13011" rel="nofollow noreferrer">relevant ticket</a> is in progress and most likely will be included into the next release.</p> <p>For now, you can list a set of server IPs for your thin clients explicitly. And for your server and thick client nodes it's fine to rely on spring.xml configuration. More details <a href="https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml" rel="nofollow noreferrer">here</a>.</p> <p>Example:</p> <pre><code> var cfg = new IgniteConfiguration { ... SpringConfigUrl = &quot;/path/to/spring.xml&quot; }; </code></pre> <p>And your spring configuration:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;beans xmlns=&quot;http://www.springframework.org/schema/beans&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot; http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd&quot;&gt; &lt;bean class=&quot;org.apache.ignite.configuration.IgniteConfiguration&quot;&gt; &lt;!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. --&gt; &lt;property name=&quot;discoverySpi&quot;&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi&quot;&gt; &lt;property name=&quot;ipFinder&quot;&gt; &lt;!-- Enables Kubernetes IP finder and setting custom namespace and service names. --&gt; &lt;bean class=&quot;org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder&quot;&gt; &lt;property name=&quot;namespace&quot; value=&quot;ignite&quot;/&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/beans&gt; </code></pre>
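<p>For the thin client with an explicit endpoint list, a minimal C# sketch could look like the following (the service name is a hypothetical Kubernetes Service fronting the server pods; 10800 is the default thin client port):</p>
<pre><code>using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

var clientCfg = new IgniteClientConfiguration
{
    // list the addresses of your server nodes (or a Service that resolves to them)
    Endpoints = new[] { &quot;ignite-service.ignite:10800&quot; }
};

using (var client = Ignition.StartClient(clientCfg))
{
    var cache = client.GetOrCreateCache&lt;int, string&gt;(&quot;demo-cache&quot;);
    cache.Put(1, &quot;hello from a thin client&quot;);
}
</code></pre>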
<p>I'm trying to do a k8s tutorial on youtube <a href="https://www.youtube.com/watch?v=X48VuDVv0do&amp;t=5428s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=X48VuDVv0do&amp;t=5428s</a>. An error occured that k8s pod failed to connect mongodb when I create the deployment mongo-express. Please kindly help!</p> <p>Error Info of pod retrieved by kubectl logs command:</p> <pre><code>Welcome to mongo-express ------------------------ (node:7) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor. Could not connect to database using connectionString: mongodb://username:password@mongodb-service:27017/&quot; (node:7) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [mongodb-service:27017] on first connect [Error: getaddrinfo ENOTFOUND mongodb-service at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) { name: 'MongoNetworkError' }] at Pool.&lt;anonymous&gt; (/node_modules/mongodb/lib/core/topologies/server.js:441:11) at Pool.emit (events.js:314:20) at /node_modules/mongodb/lib/core/connection/pool.js:564:14 at /node_modules/mongodb/lib/core/connection/pool.js:1000:11 at /node_modules/mongodb/lib/core/connection/connect.js:32:7 at callback (/node_modules/mongodb/lib/core/connection/connect.js:300:5) at Socket.&lt;anonymous&gt; (/node_modules/mongodb/lib/core/connection/connect.js:330:7) at Object.onceWrapper (events.js:421:26) at Socket.emit (events.js:314:20) at emitErrorNT (internal/streams/destroy.js:92:8) at emitErrorAndCloseNT (internal/streams/destroy.js:60:3) at processTicksAndRejections (internal/process/task_queues.js:84:21) (node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1) (node:7) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. </code></pre> <p><strong>configuration files</strong> Configuration files of kubernetes component for your reference.</p> <pre><code> 1. kubernetes secret apiVersion: v1 kind: Secret metadata: name: mongodb-secret type: Opaque data: mongo-root-username: dXNlcm5hbWU= mongo-root-password: cGFzc3dvcmQ=``` 2. mongodb deployment &amp; service apiVersion: apps/v1 kind: Deployment metadata: name: mongodb-deployment labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password --- apiVersion: v1 kind: Service metadata: name: mongo-service spec: selector: app: mongodb ports: - protocol: TCP port: 27017 targetPort: 27017``` 3. kubernetes configmap ```apiVersion: v1 kind: ConfigMap metadata: name: mongodb-configmap data: database_url: mongodb-service``` 4. 
mongo-express ```apiVersion: apps/v1 kind: Deployment metadata: name: mongo-express labels: app: mongo-express spec: replicas: 1 selector: matchLabels: app: mongo-express template: metadata: labels: app: mongo-express spec: containers: - name: mongo-express image: mongo-express ports: - containerPort: 8081 env: - name: ME_CONFIG_MONGODB_ADMINUSERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: ME_CONFIG_MONGODB_ADMINPASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password - name: ME_CONFIG_MONGODB_SERVER valueFrom: configMapKeyRef: name: mongodb-configmap key: database_url </code></pre>
<p>I followed the same tutorial and faced the same issue. The problem is with docker: starting minikube with the docker driver was the issue; it seems docker has some limitations. Install hyperkit if you are using macOS, Hyper-V if you are on Windows, or VirtualBox if you are using a Linux distribution. Then start minikube with the virtual machine driver you installed, like this:</p> <pre><code>minikube start --driver=virtualbox </code></pre>
<p>Is there some way to force pod creation order on Kubernetes?</p> <p>I have a scenario where Kubernetes is selecting a node pool with few resources; the first pod to be deployed consumes very few resources, but the next one consumes a lot of resources and the deployment fails.</p> <p>So I was wondering if there is a way to instruct Kubernetes to deploy the resource-hungry pod first and then the small ones.</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">Node Selector</a> on your Pod’s specification, you just need <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels" rel="nofollow noreferrer">Node Labels</a> for that.</p> <p>Another option is to use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/" rel="nofollow noreferrer">Node Affinity</a>. You just need to have the <strong>Kubernetes cluster</strong>, and the K8s command line ready (<code>kubectl</code>). The steps for that are:</p> <ul> <li>Add a label to the node.</li> <li>Schedule a Pod using required node affinity, or.</li> <li>Schedule a Pod using preferred node affinity.</li> </ul> <p>Visit the official documentation I shared with you some lines above to have the detailed instructions, manifest examples, and the following URL for official K8’s documentation about <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>.</p> <p>Plus, you can also set up the Pod initialization in a specific order. See this <a href="https://stackoverflow.com/questions/56935239/how-to-configure-pod-initialization-in-a-specific-order-in-kubernetes">thread</a> for the proper instructions.</p>
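<p>As a minimal sketch of the nodeSelector approach (the label key, value, and image are hypothetical), the resource-heavy workload can be pinned to the larger pool so the scheduler never tries to squeeze it onto the small one:</p>
<pre><code># first label the node(s) in the larger pool:
# kubectl label nodes &lt;big-node-name&gt; pool=big
apiVersion: v1
kind: Pod
metadata:
  name: heavy-app
spec:
  nodeSelector:
    pool: big            # only schedule on nodes carrying this label
  containers:
  - name: app
    image: heavy-image:latest   # hypothetical image
    resources:
      requests:
        cpu: &quot;2&quot;
        memory: 4Gi
</code></pre>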
<p>I use istio-ingress gateway and virtualservice to expose different microservices. So far all of them have been http services, so it was straight-forward to follow istio's documentation.</p> <p>But with kafka I am facing some issues. I am using <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">bitnami/kafka</a> helm chart for kafka installation. Here's the values.yaml used for it:</p> <pre><code>global: storageClass: &quot;kafka-sc&quot; replicaCount: 3 deleteTopicEnable: true resources: requests: memory: 1024Mi cpu: 500m limits: memory: 2048Mi cpu: 1000m zookeeper: replicaCount: 3 resources: requests: memory: 1024Mi cpu: 500m limits: memory: 2048Mi cpu: 1000m </code></pre> <p>This deployment exposes kafka on this endpoint: <code>my-kafka.kafka.svc.cluster.local:9092</code></p> <p>I want this endpoint to be accessible via internet using ingress controller. Therefore, I applied following kubernetes manifests --&gt;</p> <p>A. kafka-ingress-gateway.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: kafka-ingress-gateway namespace: kafka spec: selector: istio: ingressgateway servers: - port: number: 9092 name: tcp protocol: TCP hosts: - &quot;kafka.&lt;public_domain&gt;&quot; </code></pre> <p>B. kafka-ingress-virtualservice.yaml</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: kafka-ingress-virtualservice namespace: kafka spec: hosts: - &quot;kafka.&lt;public_domain&gt;&quot; gateways: - kafka/kafka-ingress-gateway tcp: - match: - port: 9092 route: - destination: host: my-kafka.kafka.svc.cluster.local port: number: 9092 </code></pre> <p>To verify whether this works, I am using following approach:</p> <ol> <li>Create a kafka-client pod and login to it in two different terminals</li> <li>In first terminal, I produce in a topic called <code>test</code> using this command: <code>kafka-console-producer.sh --broker-list my-kafka-0.my-kafka-headless.kafka.svc.cluster.local:9092 --topic test</code></li> <li>In second terminal, I consume in <code>test</code> topic using this command.</li> </ol> <p>In here, this works: <code>kafka-console-consumer.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 --topic test --from-beginning</code></p> <p>This does not work: <code>kafka-console-consumer.sh --bootstrap-server kafka.&lt;public_domain&gt;:9092 --topic test --from-beginning</code></p> <p>I am getting this error: <code>WARN [Consumer clientId=consumer-console-consumer-89304-1, groupId=console-consumer-89304] Bootstrap broker kafka.&lt;public_domain&gt;:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)</code></p> <p>I am new to kafka, so not sure what else is required to expose the consumer endpoint. From similar questions on stackoverflow, I noticed we are supposed to <a href="https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml#L403" rel="nofollow noreferrer">define &quot;advertisedListeners&quot; in kafka config</a>, but not sure what value to put there.</p> <p>Please let me know if I am missing any details here.</p>
<p>Edit your istio-ingressgateway Service and add 9092 as a TCP port:</p> <pre><code>kubectl edit svc -nistio-system istio-ingressgateway </code></pre> <p>then add:</p> <pre><code>- name: kafka-broker port: 9092 protocol: TCP targetPort: 9092 </code></pre>
<p>My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamically provisioned</a> by <code>pd.csi.storage.gke.io</code> and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.</p> <p>My problem is I need to set <code>accessModes</code> to <code>ReadOnlyMany</code> and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the <code>initContainer</code>.</p> <p>Is there a solution to this issue? <a href="https://stackoverflow.com/questions/57754103/how-to-pre-populate-a-readonlymany-persistent-volume">Answer to this question</a> suggests a good solution for a case when each pod has their own disk mounted, but I need to have one disk shared among all pods since my data is quite large.</p>
<p><code>...I need to have one disk shared among all pods</code></p> <p>You can try Filestore. First, you create a Filestore <a href="https://cloud.google.com/filestore/docs/creating-instances" rel="nofollow noreferrer">instance</a> and save your data on a Filestore volume. Then you <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver#enabling_the_on_a_new_cluster" rel="nofollow noreferrer">install</a> the Filestore CSI driver on your cluster. Finally, you share the data with the pods that need to read it by using a PersistentVolume <a href="https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/examples/kubernetes/pre-provision/preprov-pv.yaml" rel="nofollow noreferrer">referring</a> to the Filestore instance and volume above.</p>
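<p>For that last step, a hedged sketch of a pre-provisioned PersistentVolume pointing at a Filestore instance might look like the following (the instance name, zone, share, and IP are placeholders; check the Filestore CSI driver examples linked above for the exact <code>volumeHandle</code> format on your versions):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  storageClassName: &quot;&quot;
  capacity:
    storage: 1Ti
  accessModes:
  - ReadOnlyMany
  csi:
    driver: filestore.csi.storage.gke.io
    # placeholder handle: modeInstance/&lt;zone&gt;/&lt;instance-name&gt;/&lt;share-name&gt;
    volumeHandle: &quot;modeInstance/us-central1-c/my-filestore-instance/my_share&quot;
    volumeAttributes:
      ip: 10.0.0.2        # placeholder Filestore IP
      volume: my_share
</code></pre>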
<p>I'm using Terraform to provision an EKS cluster (mostly following the example <a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="nofollow noreferrer">here</a>). At the end of the tutorial, there's a method of outputting the configmap through the <code>terraform output</code> command, and then applying it to the cluster via <code>kubectl apply -f &lt;file&gt;</code>. I'm attempting to wrap this <code>kubectl</code> command into the Terraform file using the <code>kubernetes_config_map</code> resource, however when running Terraform for the first time, I receive the following error:</p> <pre><code>Error: Error applying plan: 1 error(s) occurred: * kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred: * kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps) </code></pre> <p>The strange thing is, every subsequent <code>terraform apply</code> works, and applies the configmap to the EKS cluster. This leads me to believe it is perhaps a timing issue? I tried to preform a bunch of actions in between the provisioning of the cluster and applying the configmap but that didn't work. I also put an explicit <code>depends_on</code> argument to ensure that the cluster has been fully provisioned first before attempting to apply the configmap.</p> <pre><code>provider "kubernetes" { config_path = "kube_config.yaml" } locals { map_roles = &lt;&lt;ROLES - rolearn: ${aws_iam_role.eks_worker_iam_role.arn} username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes ROLES } resource "kubernetes_config_map" "config_map_aws_auth" { metadata { name = "aws-auth" namespace = "kube-system" } data { mapRoles = "${local.map_roles}" } depends_on = ["aws_eks_cluster.eks_cluster"] } </code></pre> <p>I expect for this to run correctly the first time, however it only runs after applying the same file with no changes a second time. </p> <p>I attempted to get more information by enabling the <code>TRACE</code> debug flag for terraform, however the only output I got was the exact same error as above.</p>
<p>Well, I don't know if this is still relevant, but I was dealing with the same trouble and found this:</p> <p><a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543</a></p> <p>So, in other words, I changed the cluster's name in the <strong>aws_eks_cluster_auth</strong> block to a <strong>static name</strong>, and it worked. Perhaps this is a bug in Terraform.</p>
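<p>In Terraform terms, the workaround from that comment looks roughly like this (a sketch, not a drop-in replacement for the kube_config.yaml approach in the question; the point is that the cluster name is hard-coded instead of being derived from another resource's attribute):</p>
<pre><code>data &quot;aws_eks_cluster_auth&quot; &quot;this&quot; {
  # static name rather than aws_eks_cluster.eks_cluster.name
  name = &quot;my-eks-cluster&quot;
}

provider &quot;kubernetes&quot; {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
</code></pre>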
<p>Imagine the following deployment definition in kubernetes:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: env: staging spec: ... </code></pre> <p>I have two questions in particular:</p> <p>1). The label <code>env: staging</code> won't be available in created pods. How can I access this data programmatically in <code>client-go</code>?</p> <p>2). When a pod is created/updated, how can I find out which deployment it belongs to?</p>
<blockquote> <p>1). the label env: staging won't be available in created pods. how can I access this data programmatically in client-go?</p> </blockquote> <p>You can get the <code>Deployment</code> using client-go. See the example <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="nofollow noreferrer">Create, Update &amp; Delete Deployment</a> for operations on a <code>Deployment</code>.</p> <blockquote> <p>2). when pod is created/updated, how can I found which deployment it belongs to?</p> </blockquote> <p>When a <code>Deployment</code> is created, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> is created that manage the <code>Pods</code>.</p> <p>See the <code>ownerReferences</code> field of a <code>Pod</code> to see what <code>ReplicaSet</code> manages it. This is described in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#how-a-replicaset-works" rel="nofollow noreferrer">How a ReplicaSet works</a></p>
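<p>A hedged Go sketch of both steps (the object names, namespace, and kubeconfig handling are assumptions for illustration):</p>
<pre><code>package main

import (
	&quot;context&quot;
	&quot;fmt&quot;

	metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot;
	&quot;k8s.io/client-go/kubernetes&quot;
	&quot;k8s.io/client-go/tools/clientcmd&quot;
)

func main() {
	// Build a clientset from the local kubeconfig (in-cluster config works the same way).
	config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := &quot;default&quot; // hypothetical namespace

	// 1) Read the label directly from the Deployment object.
	dep, err := clientset.AppsV1().Deployments(ns).Get(ctx, &quot;nginx-deployment&quot;, metav1.GetOptions{})
	if err == nil {
		fmt.Println(&quot;env label:&quot;, dep.Labels[&quot;env&quot;])
	}

	// 2) From a Pod, follow ownerReferences: Pod -&gt; ReplicaSet -&gt; Deployment.
	pod, err := clientset.CoreV1().Pods(ns).Get(ctx, &quot;nginx-deployment-abc123-xyz&quot;, metav1.GetOptions{}) // hypothetical pod name
	if err != nil {
		panic(err)
	}
	for _, owner := range pod.OwnerReferences {
		if owner.Kind != &quot;ReplicaSet&quot; {
			continue
		}
		rs, err := clientset.AppsV1().ReplicaSets(ns).Get(ctx, owner.Name, metav1.GetOptions{})
		if err != nil {
			continue
		}
		for _, rsOwner := range rs.OwnerReferences {
			if rsOwner.Kind == &quot;Deployment&quot; {
				fmt.Println(&quot;pod belongs to deployment:&quot;, rsOwner.Name)
			}
		}
	}
}
</code></pre>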
<p>Is there a way to query the CPU requests and limits with kubectl for each container in a Kubernetes context/namespace, just as I can query CPU usage with <code>kubectl top pods</code>?</p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">Requests and limits</a> are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.Limit can never be lower than the request.</p> <p>As said by @chris, try the following commands for cpu requests and limits for kubernetes namespaces</p> <p>You can get the pods and their CPU requests with the following command.</p> <pre><code>kubectl get pods --all-namespaces -o=jsonpath=&quot;{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}&quot; </code></pre> <p>You can get the pods and their CPU Limits with the following command.</p> <pre><code>kubectl get pods --all-namespaces -o=jsonpath=&quot;{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.limits.cpu}{'\n'}{end}{'\n'}{end}&quot; </code></pre>
<p>I have an AWS EKS cluster running in a custom VPC with 2 public and 2 private subnets. The node groups (for my backend) run in the 2 private subnets so they can't be accessed directly.</p> <p>I would like to create an API Gateway which exposes the microservices in the node group so my front-end and third-party software can communicate with them. I would eventually also like to add authorization to the API Gateway for security. The problem is that I cannot find good documentation on how to do this (expose the microservices through an API Gateway). Does anyone know how to do this, or where I can find information on how to do it?</p> <p>The situation would look something like this: <a href="https://i.stack.imgur.com/gcnDm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcnDm.png" alt="enter image description here" /></a></p>
<p>You need to use API Gateway private integrations to expose services running in EKS through an NLB (Network Load Balancer). Please check the article below for the overall solution.</p> <p><a href="https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/</a></p>
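<p>As a rough sketch of the Kubernetes side of that article: the microservices are exposed through an internal NLB by creating a <code>Service</code> of type <code>LoadBalancer</code> with the NLB annotations, and the API Gateway VPC Link then targets that NLB. The service name, labels and ports below are assumptions for illustration only:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-microservice            # hypothetical name
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb&quot;
    # Keep it internal so it is only reachable through the VPC Link
    service.beta.kubernetes.io/aws-load-balancer-internal: &quot;true&quot;
spec:
  type: LoadBalancer
  selector:
    app: my-microservice           # must match your pod labels
  ports:
    - port: 80
      targetPort: 8080             # hypothetical container port
</code></pre>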
<p>How can I list the Kubernetes pods which have a CharDevice in them?</p> <p>I was able to get the pods with a CharDevice, but I need to print only the pod name.</p> <pre><code>kubectl get pod -o jsonpath='{spec.volumes.hostPath.type==&quot;CharDevice&quot;}' </code></pre>
<p>You can get the JSON first and use <code>jq</code> to get the desired result:</p> <pre><code>kubectl get pod -o json | jq -r '.items[]|select(any(.spec.volumes[];.hostPath.type==&quot;CharDevice&quot;)).metadata.name' </code></pre>
<p>How do I set up the Apache Pulsar transaction coordinator in a Kubernetes setup? I couldn't find much documentation on this. I could modify the <code>broker.conf</code> configmap, but I'm unsure whether that even works.</p>
<p>The transaction coordinator is part of the broker, so I think simply enabling the transaction feature in the broker configmap should be enough.</p>
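<p>For reference, the relevant <code>broker.conf</code> option is <code>transactionCoordinatorEnabled</code>. A minimal sketch of the configmap entry, assuming your broker pods read their configuration from a ConfigMap (the ConfigMap name and the way keys are mapped into <code>broker.conf</code> depend on your deployment; the Pulsar Helm chart, for example, prefixes keys with <code>PULSAR_PREFIX_</code>):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: pulsar-broker        # assumption: the ConfigMap mounted by your broker pods
data:
  # Enables the transaction coordinator on the broker (Pulsar 2.8+)
  transactionCoordinatorEnabled: &quot;true&quot;
</code></pre> <p>The brokers then need to be restarted (for example by rolling the broker StatefulSet) to pick up the change.</p>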
<p>Trying to install the Kubernetes dashboard on Ubuntu 16.04 results in an x509: certificate signed by unknown authority error.</p> <p>The Kubernetes cluster with a single node is running fine and deployments are happening too.</p> <p>I tried enabling the apiserver-host property in the kubernetes-dashboard.yaml file without any luck.</p> <p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml</a> Unable to connect to the server: x509: certificate signed by unknown authority</p> <p>Any suggestions?</p> <p>Output from kubectl get all -n kube-system</p> <pre><code>NAME READY STATUS RESTARTS AGE
pod/calico-node-6dgkc 2/2 Running 4 4d23h
pod/calico-node-v8xjr 2/2 Running 0 2d4h
pod/coredns-fb8b8dccf-8jznp 1/1 Running 2 4d23h
pod/coredns-fb8b8dccf-pl87d 1/1 Running 2 4d23h
pod/etcd-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-apiserver-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-controller-manager-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-proxy-24qjz 1/1 Running 0 2d4h
pod/kube-proxy-ccqpn 1/1 Running 2 4d23h
pod/kube-scheduler-ets-kubernetes 1/1 Running 2 4d23h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/calico-typha ClusterIP 10.110.39.31 &lt;none&gt; 5473/TCP 4d23h
service/kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 4d23h

Error from server (Forbidden): replicationcontrollers is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicationcontrollers" in API group "" in the namespace "kube-system"
Error from server (Forbidden): daemonsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "daemonsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): replicasets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicasets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "statefulsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:node:ets-kubernetes" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "kube-system"
Error from server (Forbidden): jobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "jobs" in API group "batch" in the namespace "kube-system"
Error from server (Forbidden): cronjobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "cronjobs" in API group "batch" in the namespace "kube-system"
</code></pre> <p>Output from kubectl get nodes</p> <pre><code>NAME STATUS ROLES AGE VERSION
ets-kubernetes Ready master 4d23h v1.14.1
ets-node Ready &lt;none&gt; 2d4h v1.14.1
</code></pre> <hr> <p>Kubectl output.PNG <a href="https://i.stack.imgur.com/SR4SP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SR4SP.png" alt="enter image description here"></a></p> <p>Certificate Error.PNG <a href="https://i.stack.imgur.com/kSs2j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kSs2j.png" alt="enter image description here"></a></p>
<p>Had the same issue after resetting k8s to defaults while having <code>kubectl proxy</code> running.</p> <p>Simply restarting <code>kubectl proxy</code> fixed the issue :)</p>
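<p>In other words, something along these lines (assuming the proxy was started in the usual way):</p> <pre><code># stop the stale proxy process
pkill -f &quot;kubectl proxy&quot;

# start it again so it picks up the current cluster state
kubectl proxy
</code></pre>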
<p>I want to deploy a Redis pod which loads a list. Then I will have a Kubernetes job which will execute a bash script with a variable taken from that list in Redis.</p> <p>How can I make this Redis pod be auto-deleted when all items from the list have been used?</p>
<p>By default, Kubernetes keeps completed jobs and their associated objects for debugging purposes, and you will lose all the logs they generated once they are deleted.</p> <p>That being said, a job can be automatically deleted by using the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">TTL mechanism for finished Jobs</a>.</p> <p>Here you can find an example of a job's manifest with the TTL enabled and set to delete the job and associated objects (pods, services, etc.) 100 seconds after its completion:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: Never
</code></pre>
<p>I have a configmap created. I have added data as a key-value pair in configmap</p> <pre><code>data: EXTERNAL_CONFIG_FILE: /user/app/api/config </code></pre> <p>I need to use this variable to set a mount path in deployment</p> <pre><code>- name: config-properties-volume-mnt mountPath: {{ $EXTERNAL_CONFIG_FILE }} </code></pre> <p>I am getting undefined variable &quot;$EXTERNAL_CONFIG_FILE&quot; while deploying. <strong>I do not want to define this variable in values.yaml.</strong> Is there a way where I can use this variable defined in configmap in deployment?</p>
<p><strong>It is not possible to dynamically define a parameter in a manifest</strong> this way; you have to use Helm or Kustomize.</p> <p>Alternatively, you can simply use <strong>sed</strong> to replace the text in the manifest.</p> <p>You cannot use it the way shown below, because a <strong>configmap</strong> and a <strong>secret</strong> are normally used to inject variables or files into the POD, not at the declaration step.</p> <pre><code>- name: config-properties-volume-mnt
  mountPath: {{ $EXTERNAL_CONFIG_FILE }}
</code></pre> <p>If you have a Helm chart, keeping the details in <code>values.yaml</code> is the only option.</p> <p><strong>Subpath method:</strong></p> <p>You can achieve this with a <strong>subPath</strong>; with <strong>subPath</strong> you can use the environment: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment</a></p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: EXTERNAL_CONFIG_FILE
      value: /user/app/api/config
    image: busybox:1.28
    command: [ &quot;sh&quot;, &quot;-c&quot;, &quot;while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt&quot; ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      # The variable expansion uses round brackets (not curly brackets).
      subPathExpr: $(EXTERNAL_CONFIG_FILE)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
</code></pre> <p>Instead of the <strong>env</strong> used in the above example you can use the <strong>configmap</strong>.</p>
<p>Is it possible for a pod to act like a spawner? When someone calls the API service in the first pod, it should spawn a new pod. This seems like a very simple thing, but I can't really figure out where to look in the docs. Someone already mentioned using operators, but I don't really see how that would help me.</p> <p>I'm currently migrating a project which uses Docker as a spawner to create other Docker containers. I somehow need this principle to work with Kubernetes pods.</p> <p>Kind regards</p>
<p>Have you looked into Kubespawner, part of JupyterHub?</p> <p>I have been trying to find alternatives to Kubespawner, and Kubernetes Operators might be the answer. <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/extend-kubernetes/operator/</a></p> <p>GL</p>
<p>I'm not quite sure which of the following approaches is the better one to create a controller in Kubernetes, but I know that:</p> <ul> <li>I don't want to create a custom resource by any means.</li> <li>I only want to fetch information about Kubernetes native resources (pods, ...), given that there might be a lot of pods in each namespace.</li> </ul> <p>I have seen some patterns like:</p> <pre><code>ctrl, err := controller.New(&quot;name-here&quot;, mgr, controller.Options{
    Reconciler: &amp;ReconcilePod{Client: mgr.GetClient(), Logger: log},
})
</code></pre> <p>where <code>ReconcilePod</code> is a struct that has a function <code>Reconcile</code> that holds the whole business logic.</p> <p>Another approach I have seen is like the following:</p> <pre><code>type Controller struct {
    indexer  cache.Indexer
    queue    workqueue.RateLimitingInterface
    informer cache.Controller
}
</code></pre> <p>and then defining a <code>shared informer</code>, a <code>watcher</code>, etc. The third pattern that I have seen is using <code>operators</code>.</p> <p>What I don't quite get is what the main differences between the approaches mentioned above are, and which one fits my need at scale.</p>
<p>If you don't want to &quot;control&quot; anything, there is no need to create a <em>controller</em>.</p> <p>If you just want to &quot;read&quot; and &quot;watch&quot; resources, you can use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and see e.g. <a href="https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/" rel="nofollow noreferrer">Extend Kubernetes via a shared informer</a> for inspiration about how to <em>read</em> and <em>watch</em> resources.</p> <blockquote> <p>To stay informed about when these events get triggered you can use a primitive exposed by Kubernetes and the client-go called SharedInformer, inside the cache package. Let’s see how it works in practice.</p> </blockquote> <p>Controllers are more complex and contains a <em>reconciliation loop</em> since they should realize/manage a <em>desired state</em>.</p> <p>An &quot;operator&quot; is a <em>controller</em> as well.</p>
<p>My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamically provisioned</a> by <code>pd.csi.storage.gke.io</code> and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.</p> <p>My problem is I need to set <code>accessModes</code> to <code>ReadOnlyMany</code> and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the <code>initContainer</code>.</p> <p>Is there a solution to this issue? <a href="https://stackoverflow.com/questions/57754103/how-to-pre-populate-a-readonlymany-persistent-volume">Answer to this question</a> suggests a good solution for a case when each pod has their own disk mounted, but I need to have one disk shared among all pods since my data is quite large.</p>
<p>With GKE 1.21 and later, you can enable the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver" rel="nofollow noreferrer">managed Filestore CSI driver</a> in your clusters. You can enable the driver for new clusters</p> <pre><code>gcloud container clusters create CLUSTER_NAME \ --addons=GcpFilestoreCsiDriver ... </code></pre> <p>or update existing clusters:</p> <pre><code>gcloud container clusters update CLUSTER_NAME \ --update-addons=GcpFilestoreCsiDriver=ENABLED </code></pre> <p>Once you've done that, create a storage class (or have or platform admin do it):</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: filestore-example provisioner: filestore.csi.storage.gke.io volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: tier: standard network: default </code></pre> <p>After that, you can use PVCs and dynamic provisioning:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteMany storageClassName: filestore-example resources: requests: storage: 1Ti </code></pre>
<p>I can't get Kubernetes to pull my private docker image.</p> <p>I have a kubernetes secret called <code>regcred</code> in my <code>default</code> namespace looking like:</p> <pre class="lang-json prettyprint-override"><code>{&quot;auths&quot;: {&quot;index.docker.io&quot;: {&quot;username&quot;: &quot;Username&quot;, &quot;password&quot;: &quot;Password&quot;, &quot;email&quot;: &quot;[email protected]&quot;, &quot;auth&quot;: &quot;VXNlcm5hbWU6UGFzc3dvcmQ=&quot;}}} </code></pre> <p>The auth is <code>Username:Password</code> b64 encoded.</p> <p>This is the pod I am trying to create which just fails everytime with <code>ImagePullBackOff</code>.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: index.docker.io/jor2/hello-world:latest imagePullSecrets: - name: regcred </code></pre> <p>logs:</p> <pre class="lang-sh prettyprint-override"><code>Error from server (BadRequest): container &quot;private-reg-container&quot; in pod &quot;private-reg&quot; is waiting to start: image can't be pulled </code></pre> <pre class="lang-sh prettyprint-override"><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 67s default-scheduler Successfully assigned default/private-reg to 10.144.195.222 Normal Pulling 20s (x3 over 66s) kubelet Pulling image &quot;index.docker.io/jor2/hello-world&quot; Warning Failed 17s (x3 over 63s) kubelet Failed to pull image &quot;index.docker.io/jor2/hello-world&quot;: rpc error: code = NotFound desc = failed to pull and unpack image &quot;docker.io/jor2/hello-world:latest&quot;: failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:361849f1befca621043d13cca4e9bee74861754154556e149340a455e53d1d35: not found Warning Failed 17s (x3 over 63s) kubelet Error: ErrImagePull Normal BackOff 3s (x3 over 63s) kubelet Back-off pulling image &quot;index.docker.io/jor2/hello-world&quot; Warning Failed 3s (x3 over 63s) kubelet Error: ImagePullBackOff </code></pre> <p>The secret looks like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 data: .dockerconfigjson: eyJhdXRocyI6IHsiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyJ1c2VybmFtZSI6ICJVc2VybmFtZSIsICJwYXNzd29yZCI6ICJQYXNzd29yZCIsICJlbWFpbCI6ICJyYW5kb21AZW1haWwuY29tIiwgImF1dGgiOiAiVlhObGNtNWhiV1U2VUdGemMzZHZjbVE9In19fQo= kind: Secret metadata: creationTimestamp: &quot;2022-04-25T18:59:03Z&quot; managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:data: .: {} f:.dockerconfigjson: {} f:type: {} manager: kubectl-create operation: Update time: &quot;2022-04-25T18:59:03Z&quot; name: regcred namespace: default resourceVersion: &quot;270226&quot; uid: bbb1310b-421a-4c93-8488-498e7a52095f type: kubernetes.io/dockerconfigjson </code></pre> <p>Any idea where I am going wrong?</p>
<p>It was a problem with my image and the pod spec needed to be:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: jor2/hello-world:latest imagePullSecrets: - name: regcred </code></pre>
<p>I have the file <code>example-workflow-cowsay.yml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: hello-world- spec: entrypoint: whalesay templates: - name: whalesay container: image: docker/whalesay command: [cowsay] args: [&quot;hello world&quot;] resources: limits: memory: 32Mi cpu: 100m </code></pre> <p>I can submit this successfully like this: <code>argo submit -n workflows apps/workflows/example-workflow-cowsay.yml</code>.</p> <p>Can I get the same thing done using <code>kubectl</code> directly? I tried the below but it fails:</p> <pre><code>$ k apply -n workflows -f apps/workflows/example-workflow-cowsay.yml error: from hello-world-: cannot use generate name with apply </code></pre>
<p>Yes, it's right there in the <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#argo-cli" rel="nofollow noreferrer">readme</a> (<a href="https://github.com/argoproj/argo-workflows/blob/37a8a81df1d7ef3067596199f96974d31b200b88/examples/README.md#argo-cli" rel="nofollow noreferrer">version at the time of answering</a>).</p> <p><code>kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml</code> did the job.</p> <hr /> <p>To elaborate a bit: This makes sense, as what I was trying to &quot;apply&quot; was a single run of a workflow (think an object instance rather than a class). If I'd tried to apply a CronWorkflow, then <code>kubectl apply</code> would have worked. The error message that I got:</p> <pre><code>error: from hello-world-: cannot use generate name with apply </code></pre> <p>Told me about it, but I didn't understand it at the time. This is invalid:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: CronWorkflow metadata: generateName: some-name ... </code></pre> <p>But this is valid:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: CronWorkflow metadata: name: some-name ... </code></pre>
<p>Streamlit server (run e.g. with <code>streamlit run app-name.py</code>) upon first run for each user requires interaction from her - accepting or rejecting this intro message:</p> <pre><code> Welcome to Streamlit! If you are one of our development partners or are interested in getting personal technical support, please enter your email address below. Otherwise, you may leave the field blank. Email: </code></pre> <p>Streamlit then waits for input, failing to daemonize (run in background), which quickly causes docker containers in Kubernetes or Openshift to crash (enter the famous crash-loop back off procedure).</p> <p>How to avoid such crashing, by running Streamlit in headless/noninteractive mode?</p>
<p>In the <a href="https://docs.streamlit.io/library/advanced-features/configuration#set-configuration-options" rel="noreferrer">streamlit documentation</a> you can find a detailed list of configurable options. Scroll down and you will find the headless option. Just edit the config file.</p> <p><code>headless = true</code></p> <p>Please also refer to <a href="https://discuss.streamlit.io/t/supress-new-browser-window-once-app-is-started-execution/2018/2?u=yochanan_scharf" rel="noreferrer">this post</a>.</p>
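<p>For a container image it is often easier not to rely on a config file at all. The same option can be passed on the command line or through an environment variable (the app file name below is just an example):</p> <pre><code># command-line flag
streamlit run app.py --server.headless true

# or as an environment variable, e.g. set in the container spec
STREAMLIT_SERVER_HEADLESS=true streamlit run app.py
</code></pre> <p>If you do use the config file, the option lives under the <code>[server]</code> section of <code>~/.streamlit/config.toml</code>.</p>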
<p>I'm having difficulty trying to get kustomize to replace contents of an item in a list.</p> <p>My kustomize file</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - resource.yaml patches: - patch.yaml </code></pre> <p>My patch.yaml file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: template: spec: initContainers: - name: web-service-migration env: - name: PG_DATABASE value: web-pgdb </code></pre> <p>My resource.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: template: spec: initContainers: - name: web-service-migration env: - name: PG_DATABASE valueFrom: secretKeyRef: name: web-pgdb key: database </code></pre> <p>kustomize build returns</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: template: spec: initContainers: - env: - name: PG_DATABASE value: web-pgdb valueFrom: secretKeyRef: key: database name: web-pgdb name: web-service-migration </code></pre> <p>what i want kustomize build to return</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: template: spec: initContainers: - env: - name: PG_DATABASE value: web-pgdb name: web-service-migration </code></pre>
<p>If I remember correctly patches in kustomize by default uses strategic merge, so you need to nullify <code>valueFrom</code>, so your patch should look like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: template: spec: initContainers: - name: web-service-migration env: - name: PG_DATABASE value: web-pgdb valueFrom: null </code></pre> <p>More details about strategic merge patch and how to delete maps: <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps" rel="noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps</a></p>
<p>Is there a way to ensure that pods are scaled one at a time when setting replicas greater than one?</p> <p>Example: Replicas set to 3</p> <ol> <li>Pod 1 - Initializing, pod 2 - Waiting, pod 3 - Waiting</li> <li>Pod 1 - Running, pod 2 - Initializing, pod 3 - Waiting</li> <li>Pod 1 - Running, pod 2 - Running, pod 3 - Initializing</li> <li>Pod 1 - Running, pod 2 - Running, pod 3 - Running</li> </ol>
<p>You can accomplish this behavior using <code>StatefulSets</code>. As the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">Kubernetes docs</a> put it:</p> <blockquote> <ul> <li>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</li> <li>When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.</li> <li>Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.</li> <li>Before a Pod is terminated, all of its successors must be completely shutdown.</li> </ul> </blockquote> <p>So, as you can see here, a new Pod is not started until the previous one is Running and Ready.</p> <p><strong>Note:</strong> this behavior is guaranteed by Kubernetes when the <code>OrderedReady</code> pod management policy is used (which is the default).</p>
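<p>For reference, a minimal sketch of such a StatefulSet (names and image are placeholders). <code>OrderedReady</code> is the default, but it is shown explicitly here; note that a StatefulSet also requires a headless Service:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app                  # name of the headless Service
  replicas: 3
  podManagementPolicy: OrderedReady    # default: Pods are started one at a time
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx                 # placeholder image
</code></pre>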
<p>I have an up-and-running SolrCloud v8.11 cluster on Kubernetes, with solr-operator.</p> <p>The backup is enabled on an S3 bucket.</p> <p>How can I correctly write the request to perform a <code>RESTORE</code> of a backup stored in an S3 bucket?</p> <p>I'm unable to figure out what the <code>location</code> and the <code>snapshotName</code> I have to provide in the <code>Restore API</code> request made to Solr should be.</p> <p>In order to discover those values, I tried to execute the <code>LISTBACKUP</code> action, but in this case the <code>location</code> value is also wrong...</p> <pre class="lang-sh prettyprint-override"><code>$ curl https://my-solrcloud.example.org/solr/admin/collections\?action=LISTBACKUP\&amp;name=collection-name\&amp;repository=collection-backup\&amp;location=my-s3-bucket/collection-backup
{
  &quot;responseHeader&quot;:{
    &quot;status&quot;:400,
    &quot;QTime&quot;:70},
  &quot;error&quot;:{
    &quot;metadata&quot;:[
      &quot;error-class&quot;,&quot;org.apache.solr.common.SolrException&quot;,
      &quot;root-error-class&quot;,&quot;org.apache.solr.common.SolrException&quot;],
    &quot;msg&quot;:&quot;specified location s3:///my-s3-bucket/collection-backup/ does not exist.&quot;,
    &quot;code&quot;:400}}

## The Log in cluster writes:
org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
=&gt; org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
</code></pre> <p>After all, the recurring backup works as expected, but sooner or later a <code>RESTORE</code> action will need to be performed, and it's not clear how it could be done correctly.</p> <p>Thank you in advance.</p>
<p>A bit late, but I came across this question while searching for the same answer. There was <a href="https://lists.apache.org/[email protected]:2022-2:S3%20backup" rel="nofollow noreferrer">a thread on the mailing list</a> that helped me to figure out how this is supposed to work.</p> <p>I found the documentation on this pretty confusing, but the <code>location</code> seems to be <em>relative to the backup repository</em>. So, the <code>repository</code> argument already accounts for the bucket name, and the <code>name</code> argument would be the name of the backup you are attempting to list. Solr then builds the S3 path as <code>{repository bucket} + {location} + {backup name}</code>. So, location should simply be: <code>/</code></p> <p>Assume you've set up a <code>backupRepository</code> for the SolrCloud deployment like the following:</p> <pre><code>backupRepositories: - name: &quot;my-backup-repo&quot; s3: region: &quot;us-east-1&quot; bucket: &quot;my-s3-bucket&quot; </code></pre> <p>and you have created a SolrBackup like the following:</p> <pre><code>--- apiVersion: solr.apache.org/v1beta1 kind: SolrBackup metadata: name: &quot;my-collection-backup&quot; spec: repositoryName: &quot;my-backup-repo&quot; solrCloud: &quot;my-solr-cloud&quot; collections: - &quot;my-collection&quot; </code></pre> <p>The full cURL command for LISTBACKUP would be:</p> <pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \ -d action=LISTBACKUP \ -d name=my-collection-backup \ -d repository=my-backup-repo \ -d location=/ </code></pre> <p>Similarly for the RESTORE command:</p> <pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \ -d action=RESTORE \ -d name=my-collection-backup \ -d repository=my-backup-repo \ -d location=/ \ -d collection=my-collection-restore </code></pre>
<p>I'm setting up a CI/CD pipeline for deploying a Kubernetes-based application in an automated way. Part of this deployment involves creating other service accounts and their associated roles.</p> <p>When my pipeline runs, deployment fails with this error message:</p> <pre><code>Error: roles.rbac.authorization.k8s.io &quot;mongodb-kubernetes-operator&quot; is forbidden: user &quot;[email protected]&quot; (groups=[&quot;system:authenticated&quot;]) is attempting to grant RBAC permissions not currently held: │ {APIGroups:[&quot;&quot;], Resources:[&quot;configmaps&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;create&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot; &quot;delete&quot;]} │ {APIGroups:[&quot;&quot;], Resources:[&quot;pods&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;create&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot; &quot;delete&quot;]} │ {APIGroups:[&quot;&quot;], Resources:[&quot;secrets&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;create&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot; &quot;delete&quot;]} │ {APIGroups:[&quot;&quot;], Resources:[&quot;services&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;create&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot; &quot;delete&quot;]} │ {APIGroups:[&quot;apps&quot;], Resources:[&quot;statefulsets&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;create&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot; &quot;delete&quot;]} │ {APIGroups:[&quot;mongodbcommunity.mongodb.com&quot;], Resources:[&quot;mongodbcommunity&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot;]} │ {APIGroups:[&quot;mongodbcommunity.mongodb.com&quot;], Resources:[&quot;mongodbcommunity/finalizers&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot;]} │ {APIGroups:[&quot;mongodbcommunity.mongodb.com&quot;], Resources:[&quot;mongodbcommunity/spec&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot;]} │ {APIGroups:[&quot;mongodbcommunity.mongodb.com&quot;], Resources:[&quot;mongodbcommunity/status&quot;], Verbs:[&quot;list&quot; &quot;watch&quot; &quot;update&quot; &quot;patch&quot; &quot;get&quot;]} │ │ with module.db_document.kubernetes_role.operator_mongodb, │ on modules/db_document/main.tf line 17, in resource &quot;kubernetes_role&quot; &quot;operator_mongodb&quot;: │ 17: resource &quot;kubernetes_role&quot; &quot;operator_mongodb&quot; { │ </code></pre> <p>The error seemed straightforward enough: my service account can't grant permissions that it does not have. Since the error message mentioned my GCP service account, <code>[email protected]</code>, I added to my role definition what I believed to be the matching permissions.</p> <p>Below is my resulting role. 
It has create, delete, get, list, and update permissions for configMaps, pods, secrets, services, statefulsets , and thirdPartyObjects, which I believed should cover the requirements.</p> <pre><code>resource &quot;google_project_iam_custom_role&quot; &quot;cicd_bot_role&quot; { project = var.project role_id = &quot;cicd_bot&quot; title = &quot;CICD Bot&quot; permissions = [ &quot;artifactregistry.repositories.downloadArtifacts&quot;, &quot;artifactregistry.repositories.uploadArtifacts&quot;, &quot;compute.instanceGroupManagers.get&quot;, &quot;container.clusters.get&quot;, &quot;container.configMaps.create&quot;, &quot;container.configMaps.delete&quot;, &quot;container.configMaps.get&quot;, &quot;container.configMaps.list&quot;, &quot;container.configMaps.update&quot;, &quot;container.cronJobs.create&quot;, &quot;container.cronJobs.delete&quot;, &quot;container.cronJobs.get&quot;, &quot;container.cronJobs.update&quot;, &quot;container.customResourceDefinitions.create&quot;, &quot;container.customResourceDefinitions.delete&quot;, &quot;container.customResourceDefinitions.get&quot;, &quot;container.customResourceDefinitions.list&quot;, &quot;container.customResourceDefinitions.update&quot;, &quot;container.deployments.create&quot;, &quot;container.deployments.delete&quot;, &quot;container.deployments.get&quot;, &quot;container.deployments.update&quot;, &quot;container.ingresses.create&quot;, &quot;container.ingresses.delete&quot;, &quot;container.ingresses.get&quot;, &quot;container.ingresses.update&quot;, &quot;container.jobs.create&quot;, &quot;container.jobs.delete&quot;, &quot;container.jobs.get&quot;, &quot;container.jobs.update&quot;, &quot;container.namespaces.get&quot;, &quot;container.persistentVolumeClaims.create&quot;, &quot;container.persistentVolumeClaims.delete&quot;, &quot;container.persistentVolumeClaims.get&quot;, &quot;container.persistentVolumeClaims.update&quot;, &quot;container.pods.create&quot;, &quot;container.pods.delete&quot;, &quot;container.pods.get&quot;, &quot;container.pods.list&quot;, &quot;container.pods.update&quot;, &quot;container.roleBindings.create&quot;, &quot;container.roleBindings.delete&quot;, &quot;container.roleBindings.get&quot;, &quot;container.roleBindings.update&quot;, &quot;container.roles.create&quot;, &quot;container.roles.delete&quot;, &quot;container.roles.get&quot;, &quot;container.roles.update&quot;, &quot;container.secrets.create&quot;, &quot;container.secrets.delete&quot;, &quot;container.secrets.get&quot;, &quot;container.secrets.list&quot;, &quot;container.secrets.update&quot;, &quot;container.serviceAccounts.create&quot;, &quot;container.serviceAccounts.delete&quot;, &quot;container.serviceAccounts.get&quot;, &quot;container.serviceAccounts.update&quot;, &quot;container.services.create&quot;, &quot;container.services.delete&quot;, &quot;container.services.get&quot;, &quot;container.services.list&quot;, &quot;container.services.update&quot;, &quot;container.statefulSets.create&quot;, &quot;container.statefulSets.delete&quot;, &quot;container.statefulSets.get&quot;, &quot;container.statefulSets.list&quot;, &quot;container.statefulSets.update&quot;, &quot;container.thirdPartyObjects.create&quot;, &quot;container.thirdPartyObjects.delete&quot;, &quot;container.thirdPartyObjects.get&quot;, &quot;container.thirdPartyObjects.list&quot;, &quot;container.thirdPartyObjects.update&quot;, &quot;dns.changes.create&quot;, &quot;dns.changes.get&quot;, &quot;dns.resourceRecordSets.get&quot;, &quot;dns.resourceRecordSets.list&quot;, 
&quot;dns.resourceRecordSets.update&quot;, &quot;storage.buckets.get&quot;, &quot;storage.objects.create&quot;, &quot;storage.objects.delete&quot;, &quot;storage.objects.get&quot;, &quot;storage.objects.list&quot;, ] } </code></pre> <p>However, after deploying this, the error remained the same. I wondered if adding equivalent permissions on the kubernetes side was necessary, so I created the following ClusterRole and ClusterRoleBinding, too.</p> <pre><code>resource &quot;kubernetes_cluster_role&quot; &quot;cicd_bot&quot; { metadata { name = kubernetes_service_account.cicd_bot.metadata[0].name } rule { api_groups = [&quot;&quot;] resources = [&quot;namespaces&quot;] verbs = [&quot;create&quot;, &quot;delete&quot;, &quot;get&quot;] } rule { api_groups = [&quot;&quot;] resources = [&quot;configmaps&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;, &quot;delete&quot;] } rule { api_groups = [&quot;&quot;] resources = [&quot;pods&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;, &quot;delete&quot;] } rule { api_groups = [&quot;&quot;] resources = [&quot;secrets&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;, &quot;delete&quot;] } rule { api_groups = [&quot;&quot;] resources = [&quot;services&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;, &quot;delete&quot;] } rule { api_groups = [&quot;apps&quot;] resources = [&quot;statefulsets&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;, &quot;delete&quot;] } rule { api_groups = [&quot;mongodbcommunity.mongodb.com&quot;] resources = [&quot;mongodbcommunity&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;] } rule { api_groups = [&quot;mongodbcommunity.mongodb.com&quot;] resources = [&quot;mongodbcommunity/finalizers&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;] } rule { api_groups = [&quot;mongodbcommunity.mongodb.com&quot;] resources = [&quot;mongodbcommunity/spec&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;] } rule { api_groups = [&quot;mongodbcommunity.mongodb.com&quot;] resources = [&quot;mongodbcommunity/status&quot;] verbs = [&quot;list&quot;, &quot;watch&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;get&quot;] } } resource &quot;kubernetes_cluster_role_binding&quot; &quot;cicd_bot&quot; { metadata { name = kubernetes_service_account.cicd_bot.metadata[0].name } subject { kind = &quot;ServiceAccount&quot; namespace = kubernetes_service_account.cicd_bot.metadata[0].namespace name = kubernetes_service_account.cicd_bot.metadata[0].name } role_ref { api_group = &quot;rbac.authorization.k8s.io&quot; kind = &quot;ClusterRole&quot; name = kubernetes_cluster_role.cicd_bot.metadata[0].name } } </code></pre> <p>Unfortunately, the pipeline still fails with the same error. I've been able to overcome a similar error in the past, but not this time. What am I missing?</p> <p>UPDATE: I was able to deploy successfully by attaching the role <code>roles/container.admin</code> to my service account. 
So now I need to figure out which permission <code>roles/container.admin</code> has that my custom role does not.</p>
<p>Sadly, the one permission that was missing was</p> <pre><code>container.roles.escalate </code></pre> <p>Even including every other <code>container.*</code> permission was insufficient; <code>container.roles.escalate</code> was still needed.</p> <p>This is unfortunate because it makes the cluster more vulnerable to permissions escalation attacks. If there is a safer way to accomplish this, I'd love to hear it. I won't mark my own answer as &quot;correct&quot; because I'm not satisfied with it. But hey, at least it's working...</p>
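<p>For anyone trying to work out the difference themselves: the full permission list of a predefined role can be dumped with <code>gcloud</code> and compared against a custom role, for example:</p> <pre><code># list everything roles/container.admin grants (see includedPermissions in the output)
gcloud iam roles describe roles/container.admin

# compare with the custom role from the question
gcloud iam roles describe cicd_bot --project=&lt;project-id&gt;
</code></pre>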
<p>Copied from here: <a href="https://github.com/kubeflow/pipelines/issues/7608" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/issues/7608</a></p> <p>I have a generated code file that runs against Kubeflow. It ran fine on Kubeflow v1, and now I'm moving it to Kubeflow v2. When I do this, I get the following error: <code>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)</code></p> <p>I honestly don't even know where to go next. It feels like something is fundamentally broken for something to fail in the first character, but I can't see it (it's inside the kubeflow execution).</p> <p>Thanks!</p> <hr /> <h3>Environment</h3> <ul> <li><p>How did you deploy Kubeflow Pipelines (KFP)? Standard deployment to AWS</p> </li> <li><p>KFP version: 1.8.1</p> </li> <li><p>KFP SDK version: 1.8.12</p> </li> </ul> <p>Here's the logs:</p> <pre><code>time=&quot;2022-04-26T17:38:09.547Z&quot; level=info msg=&quot;capturing logs&quot; argo=true WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv [KFP Executor 2022-04-26 17:38:24,691 INFO]: Looking for component `run_info_fn` in --component_module_path `/tmp/tmp.NJW6PWXpIt/ephemeral_component.py` [KFP Executor 2022-04-26 17:38:24,691 INFO]: Loading KFP component &quot;run_info_fn&quot; from /tmp/tmp.NJW6PWXpIt/ephemeral_component.py (directory &quot;/tmp/tmp.NJW6PWXpIt&quot; and module name &quot;ephemeral_component&quot;) Traceback (most recent call last): File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 193, in _run_module_as_main &quot;__main__&quot;, mod_spec) File &quot;/usr/local/lib/python3.7/runpy.py&quot;, line 85, in _run_code exec(code, run_globals) File &quot;/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py&quot;, line 104, in &lt;module&gt; executor_main() File &quot;/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py&quot;, line 94, in executor_main executor_input = json.loads(args.executor_input) File &quot;/usr/local/lib/python3.7/json/__init__.py&quot;, line 348, in loads return _default_decoder.decode(s) File &quot;/usr/local/lib/python3.7/json/decoder.py&quot;, line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File &quot;/usr/local/lib/python3.7/json/decoder.py&quot;, line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) time=&quot;2022-04-26T17:38:24.803Z&quot; level=error msg=&quot;cannot save artifact /tmp/outputs/run_info/data&quot; argo=true error=&quot;stat /tmp/outputs/run_info/data: no such file or directory&quot; Error: exit status 1 </code></pre> <p>Here's the files to repro: root_pipeline_04d99580c84b47c28405a2c8bcae8703.py</p> <pre><code>import kfp.v2.components from kfp.v2.dsl import InputPath from kubernetes.client.models import V1EnvVar from kubernetes import client, config from typing import NamedTuple from base64 import b64encode import kfp.v2.dsl as dsl import kubernetes import json import kfp from run_info import run_info_fn from same_step_000_ce6494722c474dd3b8bef482bb976557 import same_step_000_ce6494722c474dd3b8bef482bb976557_fn run_info_comp = kfp.v2.dsl.component( func=run_info_fn, packages_to_install=[ &quot;kfp&quot;, &quot;dill&quot;, ], ) same_step_000_ce6494722c474dd3b8bef482bb976557_comp = 
kfp.v2.dsl.component( func=same_step_000_ce6494722c474dd3b8bef482bb976557_fn, base_image=&quot;public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.5.0&quot;, packages_to_install=[ &quot;dill&quot;, &quot;requests&quot;, # TODO: make this a loop ], ) @kfp.dsl.pipeline(name=&quot;root_pipeline_compilation&quot;,) def root( context: str='', metadata_url: str='', ): # Generate secrets (if not already created) secrets_by_env = {} env_vars = { } run_info = run_info_comp(run_id=kfp.dsl.RUN_ID_PLACEHOLDER) same_step_000_ce6494722c474dd3b8bef482bb976557 = same_step_000_ce6494722c474dd3b8bef482bb976557_comp( input_context_path=&quot;&quot;, run_info=run_info.outputs[&quot;run_info&quot;], metadata_url=metadata_url ) same_step_000_ce6494722c474dd3b8bef482bb976557.execution_options.caching_strategy.max_cache_staleness = &quot;P0D&quot; for k in env_vars: same_step_000_ce6494722c474dd3b8bef482bb976557.add_env_variable(V1EnvVar(name=k, value=env_vars[k])) </code></pre> <p>run_info.py</p> <pre><code>&quot;&quot;&quot; The run_info component fetches metadata about the current pipeline execution from kubeflow and passes it on to the user code step components. &quot;&quot;&quot; from typing import NamedTuple def run_info_fn( run_id: str, ) -&gt; NamedTuple(&quot;RunInfoOutput&quot;, [(&quot;run_info&quot;, str),]): from base64 import urlsafe_b64encode from collections import namedtuple import datetime import base64 import dill import kfp client = kfp.Client(host=&quot;http://ml-pipeline:8888&quot;) run_info = client.get_run(run_id=run_id) run_info_dict = { &quot;run_id&quot;: run_info.run.id, &quot;name&quot;: run_info.run.name, &quot;created_at&quot;: run_info.run.created_at.isoformat(), &quot;pipeline_id&quot;: run_info.run.pipeline_spec.pipeline_id, } # Track kubernetes resources associated wth the run. for r in run_info.run.resource_references: run_info_dict[f&quot;{r.key.type.lower()}_id&quot;] = r.key.id # Base64-encoded as value is visible in kubeflow ui. output = urlsafe_b64encode(dill.dumps(run_info_dict)) return namedtuple(&quot;RunInfoOutput&quot;, [&quot;run_info&quot;])( str(output, encoding=&quot;ascii&quot;) ) </code></pre> <p>same_step_000_ce6494722c474dd3b8bef482bb976557.py</p> <pre><code>import kfp from kfp.v2.dsl import component, Artifact, Input, InputPath, Output, OutputPath, Dataset, Model from typing import NamedTuple def same_step_000_ce6494722c474dd3b8bef482bb976557_fn( input_context_path: InputPath(str), output_context_path: OutputPath(str), run_info: str = &quot;gAR9lC4=&quot;, metadata_url: str = &quot;&quot;, ): from base64 import urlsafe_b64encode, urlsafe_b64decode from pathlib import Path import datetime import requests import tempfile import dill import os input_context = None with Path(input_context_path).open(&quot;rb&quot;) as reader: input_context = reader.read() # Helper function for posting metadata to mlflow. def post_metadata(json): if metadata_url == &quot;&quot;: return try: req = requests.post(metadata_url, json=json) req.raise_for_status() except requests.exceptions.HTTPError as err: print(f&quot;Error posting metadata: {err}&quot;) # Move to writable directory as user might want to do file IO. # TODO: won't persist across steps, might need support in SDK? os.chdir(tempfile.mkdtemp()) # Load information about the current experiment run: run_info = dill.loads(urlsafe_b64decode(run_info)) # Post session context to mlflow. 
if len(input_context) &gt; 0: input_context_str = urlsafe_b64encode(input_context) post_metadata( { &quot;experiment_id&quot;: run_info[&quot;experiment_id&quot;], &quot;run_id&quot;: run_info[&quot;run_id&quot;], &quot;step_id&quot;: &quot;same_step_000&quot;, &quot;metadata_type&quot;: &quot;input&quot;, &quot;metadata_value&quot;: input_context_str, &quot;metadata_time&quot;: datetime.datetime.now().isoformat(), } ) # User code for step, which we run in its own execution frame. user_code = f&quot;&quot;&quot; import dill # Load session context into global namespace: if { len(input_context) } &gt; 0: dill.load_session(&quot;{ input_context_path }&quot;) {dill.loads(urlsafe_b64decode(&quot;gASVGAAAAAAAAACMFHByaW50KCJIZWxsbyB3b3JsZCIplC4=&quot;))} # Remove anything from the global namespace that cannot be serialised. # TODO: this will include things like pandas dataframes, needs sdk support? _bad_keys = [] _all_keys = list(globals().keys()) for k in _all_keys: try: dill.dumps(globals()[k]) except TypeError: _bad_keys.append(k) for k in _bad_keys: del globals()[k] # Save new session context to disk for the next component: dill.dump_session(&quot;{output_context_path}&quot;) &quot;&quot;&quot; # Runs the user code in a new execution frame. Context from the previous # component in the run is loaded into the session dynamically, and we run # with a single globals() namespace to simulate top-level execution. exec(user_code, globals(), globals()) # Post new session context to mlflow: with Path(output_context_path).open(&quot;rb&quot;) as reader: context = urlsafe_b64encode(reader.read()) post_metadata( { &quot;experiment_id&quot;: run_info[&quot;experiment_id&quot;], &quot;run_id&quot;: run_info[&quot;run_id&quot;], &quot;step_id&quot;: &quot;same_step_000&quot;, &quot;metadata_type&quot;: &quot;output&quot;, &quot;metadata_value&quot;: context, &quot;metadata_time&quot;: datetime.datetime.now().isoformat(), } ) </code></pre> <p>Python file to execute to run:</p> <pre><code>from sameproject.ops import helpers from pathlib import Path import importlib import kfp def deploy(compiled_path: Path, root_module_name: str): with helpers.add_path(str(compiled_path)): kfp_client = kfp.Client() # only supporting 'kubeflow' namespace root_module = importlib.import_module(root_module_name) return kfp_client.create_run_from_pipeline_func( root_module.root, arguments={}, ) </code></pre>
<p>Turns out it has to do with not compiling with the right execution mode on.</p> <p>If you're getting this, your code should look like this.</p> <pre><code>Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(pipeline_func=root_module.root, package_path=str(package_yaml_path)) </code></pre>
<p>How can I modify the Istio meshConfig access log format when <code>accessLogEncoding</code> is set to JSON? Changing the format as <a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/" rel="nofollow noreferrer">described in the docs</a> didn't work.</p> <pre><code>spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    accessLogFormat: '[%START_TIME%] \&quot;%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\&quot; %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%\&quot;%UPSTREAM_TRANSPORT_FAILURE_REASON%\&quot; %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \&quot;%REQ(X-FORWARDED-FOR)%\&quot; \&quot;%REQ(USER-AGENT)%\&quot; \&quot;%REQ(X-REQUEST-ID)%\&quot; \&quot;%REQ(:AUTHORITY)%\&quot; \&quot;%UPSTREAM_HOST%\&quot; %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SER'
</code></pre>
<p>It works!, This is what i needed:</p> <pre><code>spec: meshConfig: accessLogFile: /dev/stdout accessLogEncoding: JSON accessLogFormat: | { &quot;protocol&quot;: &quot;%PROTOCOL%&quot;, &quot;upstream_service_time&quot;: &quot;%REQ(x-envoy-upstream-service-time)%&quot;, &quot;upstream_local_address&quot;: &quot;%UPSTREAM_LOCAL_ADDRESS%&quot;, &quot;duration&quot;: &quot;%DURATION%&quot;, &quot;upstream_transport_failure_reason&quot;: &quot;%UPSTREAM_TRANSPORT_FAILURE_REASON%&quot;, &quot;route_name&quot;: &quot;%ROUTE_NAME%&quot;, &quot;downstream_local_address&quot;: &quot;%DOWNSTREAM_LOCAL_ADDRESS%&quot;, &quot;user_agent&quot;: &quot;%REQ(USER-AGENT)%&quot;, &quot;response_code&quot;: &quot;%RESPONSE_CODE%&quot;, &quot;response_flags&quot;: &quot;%RESPONSE_FLAGS%&quot;, &quot;start_time&quot;: &quot;%START_TIME%&quot;, &quot;method&quot;: &quot;%REQ(:METHOD)%&quot;, &quot;request_id&quot;: &quot;%REQ(X-REQUEST-ID)%&quot;, &quot;upstream_host&quot;: &quot;%UPSTREAM_HOST%&quot;, &quot;x_forwarded_for&quot;: &quot;%REQ(X-FORWARDED-FOR)%&quot;, &quot;client_ip&quot;: &quot;%REQ(True-Client-Ip)%&quot;, &quot;requested_server_name&quot;: &quot;%REQUESTED_SERVER_NAME%&quot;, &quot;bytes_received&quot;: &quot;%BYTES_RECEIVED%&quot;, &quot;bytes_sent&quot;: &quot;%BYTES_SENT%&quot;, &quot;upstream_cluster&quot;: &quot;%UPSTREAM_CLUSTER%&quot;, &quot;downstream_remote_address&quot;: &quot;%DOWNSTREAM_REMOTE_ADDRESS%&quot;, &quot;authority&quot;: &quot;%REQ(:AUTHORITY)%&quot;, &quot;path&quot;: &quot;%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%&quot;, &quot;response_code_details&quot;: &quot;%RESPONSE_CODE_DETAILS%&quot; } </code></pre>
<p>How can I define multiple secrets in one file? Seems that providing multiple secrets doesn't work.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: ca-secret labels: app.kubernetes.io/managed-by: Helm type: kubernetes.io/tls data: tls.crt: LS0tLDR tls.key: LS0tLDR apiVersion: v1 kind: Secret metadata: name: envoy-secret labels: app.kubernetes.io/managed-by: Helm type: kubernetes.io/tls data: tls.crt: LS0tLS1 tls.key: LS0tLS1 </code></pre> <p>I am not able to use multiple files because I need to generate a single template using helm.</p>
<p>You can have separate manifests in one yaml file by separating them with <code>---</code>. This will work:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: ca-secret labels: app.kubernetes.io/managed-by: Helm type: kubernetes.io/tls data: tls.crt: LS0tLDR tls.key: LS0tLDR --- apiVersion: v1 kind: Secret metadata: name: envoy-secret labels: app.kubernetes.io/managed-by: Helm type: kubernetes.io/tls data: tls.crt: LS0tLS1 tls.key: LS0tLS1 </code></pre>
<p>I noticed a strange behavior after upgrading Laravel from 8 to 9. I have a scheduler that executes some jobs.</p> <p>Kernel.php:</p> <pre class="lang-php prettyprint-override"><code>$schedule-&gt;job(new ImportAzApplications, 'imports')
    -&gt;everyFiveMinutes()
    -&gt;onOneServer()
    -&gt;onFailure(function () {
        Log::error('Scheduled &quot;ImportAzApplications&quot; failed');
    });

$schedule-&gt;job(new ImportServicePrincipals, 'imports')
    -&gt;everyFiveMinutes()
    -&gt;onOneServer()
    -&gt;onFailure(function () {
        Log::error('Scheduled &quot;ImportServicePrincipals&quot; failed');
    });
</code></pre> <p>The scheduler is executed by a cron job in Kubernetes, where the <code>artisan schedule:run</code> command is executed every 5 seconds.</p> <p>Logs:</p> <pre><code>[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportServicePrincipals
[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportAzApplications
</code></pre> <p>Now I would expect to have two jobs in the <code>imports</code> queue. And I do, but only on my dev machine, not on the staging server.</p> <p>One of the jobs looks like this:</p> <pre class="lang-php prettyprint-override"><code>class ImportAzApplications implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct()
    {
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        $instance = new AzureApplications();
        $params = $instance-&gt;azApplicationImportParams();

        try {
            $instance-&gt;import($params);
            Log::debug('ImportAzApplications: Processing AzureApplications::importAzApplications');
        } catch (Exception $exception) {
            Log::error('ImportAzApplications: '.$exception-&gt;getMessage(), $params);
        }
    }
}
</code></pre> <ul> <li>I enabled the debug log level to fetch debug logs</li> <li>I ran unit tests successfully locally</li> <li>I validated the scheduler workflow on the staging server with <code>artisan schedule:list</code>, <code>artisan schedule:run</code>, <code>artisan schedule:test</code> and <code>artisan queue:work --queue=imports</code></li> <li>I monitored the database online via <code>tinker</code>: <code>DB::table('jobs')-&gt;get()</code> and <code>DB::table('failed_jobs')-&gt;get()</code>. Both remain empty</li> <li>I ran the jobs manually in tinker with <code>(new ImportAzApplications)-&gt;handle()</code> successfully</li> <li>I executed a different job (an export task) in the same queue successfully</li> </ul> <p>I'm pretty sure it's a super simple thing that I can't see at the moment, but for now I'm running out of ideas and hope someone has some suggestions.</p> <p>Many thanks</p> <h1>Edited</h1> <p>Over the weekend, some magic happened and the jobs were executed according to the new logs. I'm going to investigate this voodoo-magic further on Kubernetes.</p>
<p>I have to apologize to the community. The problem was actually caused by the pipeline workflow.</p> <p>I run <code>laravel</code> on <code>kubernetes</code>. There are multiple pods for different purposes:</p> <h2><code>app</code></h2> <p>the actual Laravel application</p> <h2><code>queue</code></h2> <p>the same container image with a slightly different configuration, to process the jobs in the queues using the <code>php artisan queue:work --queue=imports</code> command</p> <h2><code>jobs</code></h2> <p>a Kubernetes job that executes the scheduled jobs defined in <code>App\Console\Kernel</code> using the <code>php artisan schedule:run</code> command every minute</p> <p>Once a new release is pushed to the registry, a <code>kubectl set env</code> command sets the new build version, which forces the <code>app</code> Pod to grab the new container image. This is defined by the Kubernetes deployment strategy.</p> <p>However, the <code>queue</code> pod was still running with the old image. I forced the <code>queue</code> pod to get the latest image (no voodoo-magic) and the jobs were executed.</p> <p>Thanks to all who took their time to investigate and try to reproduce this behavior. That wasn't easy.</p> <p>Cheers!</p>
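<p>For completeness, this is roughly how the <code>queue</code> pods can be forced onto the new image (the deployment name and registry are specific to my setup and only illustrative):</p> <pre><code># either set the new image/tag explicitly ...
kubectl set image deployment/queue queue=registry.example.org/app:&lt;new-tag&gt;

# ... or simply restart the rollout so the pods are recreated with the current image
kubectl rollout restart deployment/queue
</code></pre>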
<p>I have a very simple program:</p> <pre><code>package main import ( &quot;fmt&quot; &quot;github.com/vishvananda/netlink&quot; ) func main() { _, err := netlink.LinkByName(&quot;wlp164s0&quot;) if err != nil { fmt.Println(&quot;error finding VIP Interface, for building DHCP Link : %v&quot;, err) return } fmt.Println(&quot;Worked..&quot;) } </code></pre> <p>If I create a docker image and run it with &quot;--net host&quot;, this program prints &quot;Worked&quot;. It is able to find the interface wlp164s0.</p> <p>If I create a k8s deployment like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: netlink-example labels: app: netlink-example spec: replicas: 1 selector: matchLabels: app: netlink-example template: metadata: labels: app: netlink-example spec: hostNetwork: true containers: - name: netlink image: suruti94/netlink:0.1 imagePullPolicy: IfNotPresent hostNetwork: true nodeSelector: kubernetes.io/os: linux </code></pre> <p>This program prints the error indicating that it can't lookup the interface which means the &quot;hostNetwork: true&quot; is not taking effect. From all my research, this looks right. Any help would be appreciated. I am running this program on Ubuntu 21.04, k8s version 1.22.</p>
<p>After some experimentation, I have come to the understanding that the docker option &quot;--net host&quot; is not the same as &quot;hostNetwork: true&quot; in k8s. I wrongly assumed they produce similar behavior.</p> <ul> <li>The docker --net host option makes the host interfaces available in the container, which is useful for some applications.</li> <li>When you deploy a pod with hostNetwork: true, it means the host network is reachable from the pod. By default, when a pod is deployed (I verified this on my local machine using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a>), the host network is reachable. I can see the veth interface connected to the bridge on the host. Even with hostNetwork: false, I was able to update packages on my pod. So, I am not sure what to make of this setting. At this stage, I am concluding that there is no option to expose the host interface directly on the pod.</li> </ul>
<p>I've installed minikube over VirtualBox on Windows 10 Home.</p> <p>I am trying to run the command <code>minikube tunnel</code> but I get an error:</p> <pre><code>Status:
        machine: minikube
        pid: 10896
        route: 10.96.0.0/12 -&gt; 192.168.99.101
        minikube: Running
        services: []
    errors:
                minikube: no errors
                router: error adding route: Error en la adición de la ruta: El objeto ya existe.
, 3
                loadbalancer emulator: no errors
</code></pre> <p>This is the error message (translated):</p> <pre><code>Error in route addition: Object already exists.
</code></pre> <p>I would like to know why I get an error in the router section.</p> <p>Thanks in advance</p>
<p>This solution worked for me:</p> <p>Run <code>minikube tunnel</code> in PowerShell, running PowerShell as administrator.</p> <pre><code>PS C:\Users\QL752LU&gt; minikube tunnel
Status:
        machine: minikube
        pid: 9272
        route: 10.96.0.0/12 -&gt; 192.168.59.100
        minikube: Running
        services: [dockerml]
    errors:
                minikube: no errors
                router: no errors
                loadbalancer emulator: no errors
Status:
        machine: minikube
        pid: 9272
        route: 10.96.0.0/12 -&gt; 192.168.59.100
        minikube: Running
        services: [dockerml]
    errors:
                minikube: no errors
                router: no errors
                loadbalancer emulator: no errors
Status:
</code></pre>
<p>We followed the solution suggested in <a href="https://stackoverflow.com/questions/71957287/apache-ignite-c-sharp-client-connection-configuration-for-kubernetes">Apache Ignite C# Client Connection configuration for kubernetes</a> as a thick client to connect to the Ignite cluster running in Kubernetes.</p> <p>We get the below error message on start:</p> <pre><code>failed to start: System.EntryPointNotFoundException: Unable to find an entry point named 'dlopen' in shared library 'libcoreclr.so'.
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.NativeMethodsCore.dlopen(String filename, Int32 flags)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.Load(String dllPath)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.LoadDll(String filePath, String simpleName)
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String configJvmDllPath, ILogger log)
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
</code></pre> <p>We included OpenJDK 8 in the Docker image. Here is the Dockerfile:</p> <pre><code>#FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
#WORKDIR /app
#EXPOSE 80
#EXPOSE 443

ARG REPO=mcr.microsoft.com/dotnet/runtime
FROM $REPO:3.1.24-alpine3.15 AS base

# Install ASP.NET Core
RUN aspnetcore_version=3.1.24 \
    &amp;&amp; wget -O aspnetcore.tar.gz https://dotnetcli.azureedge.net/dotnet/aspnetcore/Runtime/$aspnetcore_version/aspnetcore-runtime-$aspnetcore_version-linux-musl-x64.tar.gz \
    &amp;&amp; aspnetcore_sha512='1341b6e0a9903b253a69fdf1a60cd9e6be8a5c7ea3c4a52cd1a8159461f6ba37bef7c2ae0d6df5e1ebd38cd373cf384dc55c6ef876aace75def0ac77427d3bb0' \
    &amp;&amp; echo &quot;$aspnetcore_sha512 aspnetcore.tar.gz&quot; | sha512sum -c - \
    &amp;&amp; tar -oxzf aspnetcore.tar.gz -C /usr/share/dotnet ./shared/Microsoft.AspNetCore.App \
    &amp;&amp; rm aspnetcore.tar.gz

RUN apk add openjdk8
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk
ENV PATH=&quot;$JAVA_HOME/bin:${PATH}&quot;

WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
...
RUN dotnet restore &quot;API.csproj&quot;
COPY . .
WORKDIR &quot;API&quot;
RUN dotnet build &quot;API.csproj&quot; -c Release -o /app/build

FROM build AS publish
RUN dotnet publish &quot;API.csproj&quot; -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT [&quot;dotnet&quot;, &quot;API.dll&quot;]
</code></pre>
<p>In addition to Pavel's response, instead of building your own docker image, you can utilize the base image available in the GridGain edition: <a href="https://hub.docker.com/r/gridgain/community-dotnet" rel="nofollow noreferrer">https://hub.docker.com/r/gridgain/community-dotnet</a></p> <p>GridGain Community Edition is built on Apache Ignite and is free and open source as well. You might check the official docs for more details.</p>
<p>This Ignite issue on Alpine Linux was fixed in 2.13, which was released yesterday - please try upgrading.</p> <p><a href="https://issues.apache.org/jira/browse/IGNITE-16749" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/IGNITE-16749</a> <a href="https://www.nuget.org/packages/Apache.Ignite/2.13.0" rel="nofollow noreferrer">https://www.nuget.org/packages/Apache.Ignite/2.13.0</a></p> <hr /> <p>Additionally, set <strong>LD_LIBRARY_PATH</strong> environment variable:</p> <ul> <li><strong>openjdk8</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/amd64/server</code></li> <li><strong>openjdk11</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/server</code></li> </ul> <p>Do not set <code>JAVA_HOME</code> and <code>PATH</code> - it is not necessary.</p>
<p>I deployed an EKS cluster via an IAM user and need to give full cluster permissions to other AWS SSO users. I followed this instruction <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a> but still can't make it work.</p> <p>I get the below error when this command is run by an SSO user:</p> <pre><code>$ kubectl get pods
error: You must be logged in to the server (Unauthorized)
</code></pre> <p>In the doc they say to create an <code>aws-auth</code> ConfigMap to map the role as below, but I am not sure what role ARN I should specify. I am using EKS + Fargate so there is no <code>node group</code> or <code>node instance</code> in my cluster. So what kind of role should I specify in the below configuration?</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: &lt;ARN of instance role (not instance profile)&gt;
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre>
<p>You should specify:</p> <pre><code>- rolearn: arn:aws:iam::&lt;aws account id&gt;:role/&lt;federated user&gt;
  username: {{SessionName}}
  groups:
    - system:masters
</code></pre> <p>Please note that &quot;&lt; aws account id &gt;&quot; is the ID of the AWS account in which the EKS cluster is created, and &quot;&lt; federated user &gt;&quot; is the SSO user with which you are accessing this account. This mapping will add admin access rights to your k8s cluster. You can obtain the &quot;&lt; federated user &gt;&quot; by logging into the account with that user, expanding the menu in the upper right corner, where you will see the Federated user. Copy it; it will look something like <code>AWSReservedSSO_AdministratorAccess_9999bb9bbbbfff/administrator</code>. Take just the first part before the forward slash.</p>
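<p>To apply the mapping, you can edit the ConfigMap in place:</p> <pre><code>kubectl edit configmap aws-auth -n kube-system
</code></pre> <p>A filled-in sketch of the entry, following the pattern above (the account ID and role name here are placeholders; replace them with your own values):</p> <pre><code>mapRoles: |
  - rolearn: arn:aws:iam::111122223333:role/AWSReservedSSO_AdministratorAccess_9999bb9bbbbfff
    username: {{SessionName}}
    groups:
      - system:masters
</code></pre>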
<p>I am installing <code>kube-prometheus-stack</code> with Helm and I am adding some custome scraping configuration to Prometheus which requires authentication. I need to pass <code>basic_auth</code> with <code>username</code> and <code>password</code> in the <code>values.yaml</code> file.</p> <p>The thing is that I need to commit the <code>values.yaml</code> file to a repo so I am wondering how can I have the username and password set on values file, maybe from a secret in Kubernetes or some other way?</p> <pre class="lang-yaml prettyprint-override"><code>prometheus: prometheusSpec: additionalScrapeConfigs: - job_name: myjob scrape_interval: 20s metrics_path: /metrics static_configs: - targets: - myservice.default.svc.cluster.local:80 basic_auth: username: prometheus password: prom123456 </code></pre>
<p><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="nofollow noreferrer">Scrape config</a> supports specifying a <code>password_file</code> parameter, so you can mount your own secret via volumes and volumeMounts:</p> <p>Disclaimer: I haven't tested it myself and am not using the kube-prometheus-stack, but I guess something like this should work:</p> <pre class="lang-yaml prettyprint-override"><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: myjob
        scrape_interval: 20s
        metrics_path: /metrics
        static_configs:
          - targets:
              - myservice.default.svc.cluster.local:80
        basic_auth:
          password_file: /etc/scrape_passwordfile

    # Additional volumes on the output StatefulSet definition.
    volumes:
      - name: scrape-passwordfile
        secret:
          secretName: scrape-passwordfile
          optional: false

    # Additional VolumeMounts on the output StatefulSet definition.
    volumeMounts:
      - name: scrape-passwordfile
        mountPath: &quot;/etc/scrape_passwordfile&quot;
</code></pre> <p>Another option is to ditch <code>additionalScrapeConfigs</code> and use <code>additionalScrapeConfigsSecret</code> to store the whole config inside a secret:</p> <pre class="lang-yaml prettyprint-override"><code>    ## If additional scrape configurations are already deployed in a single secret file you can use this section.
    ## Expected values are the secret name and key
    ## Cannot be used with additionalScrapeConfigs
    additionalScrapeConfigsSecret: {}
      # enabled: false
      # name:
      # key:
</code></pre>
<p>We have a Kubernetes Pod which provides multiple metric endpoints (<code>:3093/metrics</code> and <code>:9113/metrics</code>), but it also has a port which doesn't provide any metrics (<code>:80</code>).</p> <p><strong>TL;DR:</strong> Is it possible to scrape only the ports <code>3093</code> and <code>9113</code>?</p> <hr> <p>We are using <a href="https://github.com/prometheus/prometheus/blob/70f3d1e/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">the example configuration</a> for the scrape configuration:</p> <pre><code>- job_name: 'kubernetes-pods'
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
</code></pre> <p>These endpoints get properly scraped when enabling scraping with this annotation:</p> <pre><code>  annotations:
    prometheus.io/scrape: &quot;true&quot;
</code></pre> <p>But this has the issue that it also scrapes port <code>:80</code>, which it shouldn't.</p>
<p>The suggestion outlined <a href="https://github.com/prometheus/prometheus/issues/3756#issuecomment-767147462" rel="nofollow noreferrer">here</a> allows scraping multiple endpoints per pod. The idea is to give custom names to container ports, which must be scraped, so these names can be used later during relabeling phase for the discovered targets. For example, the following config adds <code>prom-metrics</code> name to ports 3093 and 9113, which must be scraped by Prometheus. Note that the port 80 has no <code>prom-metrics</code> name, so it won't be scraped.</p> <pre class="lang-yaml prettyprint-override"><code>containers: - name: custom-app image: github.com/user/app:latest ports: - containerPort: 80 - containerPort: 3093 name: prom-metrics - containerPort: 9113 name: prom-metrics </code></pre> <p>Later the following scrape config can be used for scraping container ports with <code>prom-metrics</code> name:</p> <pre class="lang-yaml prettyprint-override"><code> - job_name: 'pods-with-prom-metrics' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_container_port_name] regex: prom-metrics action: keep - source_labels: [__address__, __meta_kubernetes_pod_container_port_number] regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_pod_container_name] target_label: container - source_labels: [__meta_kubernetes_pod_name] target_label: pod - source_labels: [__meta_kubernetes_namespace] target_label: namespace </code></pre>
<p>I have installed minikube on my Linux laptop (Computer A)</p> <p><a href="https://i.stack.imgur.com/WdHb7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WdHb7.png" alt="enter image description here" /></a></p> <p>and deployed some services on it:</p> <pre><code>$ minikube service list</code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>NAMESPACE</th> <th>NAME</th> <th>TARGET PORT</th> <th>URL</th> </tr> </thead> <tbody> <tr> <td>cert-manager</td> <td>cert-manager</td> <td>No node port</td> <td></td> </tr> <tr> <td>cert-manager</td> <td>cert-manager-webhook</td> <td>No node port</td> <td></td> </tr> <tr> <td>default</td> <td>aws</td> <td>5000</td> <td><a href="http://192.168.49.2:30728" rel="nofollow noreferrer">http://192.168.49.2:30728</a></td> </tr> <tr> <td>default</td> <td>azure</td> <td>5000</td> <td><a href="http://192.168.49.2:31794" rel="nofollow noreferrer">http://192.168.49.2:31794</a></td> </tr> <tr> <td>default</td> <td>gcloud</td> <td>5000</td> <td><a href="http://192.168.49.2:32367" rel="nofollow noreferrer">http://192.168.49.2:32367</a></td> </tr> <tr> <td>default</td> <td>hft</td> <td>5000</td> <td><a href="http://192.168.49.2:30970" rel="nofollow noreferrer">http://192.168.49.2:30970</a></td> </tr> <tr> <td>default</td> <td>hftf</td> <td>5000</td> <td><a href="http://192.168.49.2:30612" rel="nofollow noreferrer">http://192.168.49.2:30612</a></td> </tr> <tr> <td>default</td> <td>kubernetes</td> <td>No node port</td> <td></td> </tr> <tr> <td>default</td> <td>open</td> <td>5000</td> <td><a href="http://192.168.49.2:31547" rel="nofollow noreferrer">http://192.168.49.2:31547</a></td> </tr> <tr> <td>ingress-nginx</td> <td>ingress-nginx-controller</td> <td>http/80</td> <td><a href="http://192.168.49.2:31053" rel="nofollow noreferrer">http://192.168.49.2:31053</a></td> </tr> <tr> <td></td> <td></td> <td>https/443</td> <td><a href="http://192.168.49.2:30302" rel="nofollow noreferrer">http://192.168.49.2:30302</a></td> </tr> <tr> <td>ingress-nginx</td> <td>ingress-nginx-controller-admission</td> <td>No node port</td> <td></td> </tr> <tr> <td>kube-system</td> <td>kube-dns</td> <td>No node port</td> <td></td> </tr> <tr> <td>kubernetes-dashboard</td> <td>dashboard-metrics-scraper</td> <td>No node port</td> <td></td> </tr> <tr> <td>kubernetes-dashboard</td> <td>kubernetes-dashboard</td> <td>No node port</td> <td></td> </tr> </tbody> </table> </div> <p>As you can see, there are 6 services created with FastAPI that are behind an ingress. I have set my ingress with the host <code>galigator.fun</code>. Cert-manager is ready to fire, but for now TLS is commented out in the ingress. 
<code>galigator.fun</code> is a DNS name I own.</p> <pre><code> $ minikube addons enable ingress
 $ kubectl describe ingress
Name:             multiplex-applications
Namespace:        default
Address:          192.168.49.2
Default backend:  default-http-backend:80 ()
Rules:
  Host           Path  Backends
  ----           ----  --------
  galigator.fun
                 /v1/aws/(.+)            aws:5000 (172.17.0.11:5000)
                 /v1/azure/(.+)          azure:5000 (172.17.0.14:5000)
                 /v1/gcloud/(.+)         gcloud:5000 (172.17.0.13:5000)
                 /v1/deepml/best/(.+)    hftf:5000 (172.17.0.10:5000)
                 /v1/deepml/edge/(.+)    hft:5000 (172.17.0.3:5000)
                 /v1/deepml/razor/(.+)   open:5000 (172.17.0.8:5000)
Annotations:     cert-manager.io/issue-temporary-certificate: true
                 cert-manager.io/issuer: letsencrypt-staging
                 kubernetes.io/ingress.class: nginx
                 nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    52m (x3 over 53m)  nginx-ingress-controller  Scheduled for sync
</code></pre> <p>The error from the default backend seems unrelated to my problem...</p> <p>If I add an entry to <code>/etc/hosts</code> with the <code>minikube ip</code> and <code>galigator.fun</code>, then I can reach my services without problems. But they are not available from Computer B nor Computer C.</p> <p><strong>How can I access my services from Computer B and Computer C?</strong></p> <p><em>I don't want to change any configuration on computers B &amp; C.</em></p> <p>As I understand it, the <code>minikube ip</code> isn't known by my Internet box; it is not a physical piece of equipment. My minikube runs on Kubuntu 21.04 and is started with <code>minikube start --ports=80,443</code> or just <code>minikube start</code>, but I can't observe a difference.</p> <p>Another thing that doesn't seem to work:</p> <pre><code> $ kubectl port-forward --address 0.0.0.0 deployment/ingress-nginx-controller 80:80 --namespace ingress-nginx
Unable to listen on port 80: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 0.0.0.0:80: bind: permission denied]
error: unable to listen on any of the requested ports: [{80 80}]
</code></pre> <p>I have properly tested the connectivity between the Internet and Computer A for ports 80 &amp; 443 using an Apache server. I have since removed that server.</p>
<p>First, as the docs explain, I tried playing with the <code>minikube</code> startup options.</p> <pre><code>minikube start --listen-address='0.0.0.0' --ports=80,443
</code></pre> <p>But as we can see, it doesn't change anything:</p> <pre><code>$ docker container ls |grep minikube
0aafb00fd97d   gcr.io/k8s-minikube/kicbase:v0.0.25   &quot;/usr/local/bin/entr…&quot;   3 weeks ago   Up 22 minutes   127.0.0.1:49172-&gt;22/tcp, 127.0.0.1:49171-&gt;2376/tcp, 127.0.0.1:49170-&gt;5000/tcp, 127.0.0.1:49169-&gt;8443/tcp, 127.0.0.1:49168-&gt;32443/tcp   minikube
</code></pre> <p>So if we investigate deeper into the error of the command:</p> <pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 80:80 443:443
</code></pre> <p>we find that it tries to bind to ports <code>80</code> and <code>443</code>, but they are below <code>1024</code> and require extra privileges/root. And <code>kubectl</code> isn't a root process, for good reasons.</p> <p>So an ugly solution that works could be to do something like this:</p> <pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 50000:80 50001:443
</code></pre> <p>And then change the NAT rules in the Internet box. But I don't like it, because <code>minikube</code> is an implementation detail that should not impact other hardware.</p> <p>So a less horrible solution is to give kubectl the right to bind to low-numbered ports...</p> <pre><code>$ man 7 capabilities | grep -A 1 CAP_NET_BIND_SERVICE
       CAP_NET_BIND_SERVICE
              Bind a socket to Internet domain privileged ports (port numbers less than 1024).
</code></pre> <p>So the command should be:</p> <pre><code>sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/kubectl
</code></pre> <p>and then</p> <pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 80:80 443:443
</code></pre> <p>it works too...</p> <p><strong>But there is one major concern here: we have to redo the last command on every startup, and the last two on every upgrade...</strong></p> <p>If someone has a good answer that could also work for a background server on a <code>LAN</code>, I would love it.</p> <p>Port <code>80</code> is required only during the execution of the challenge from cert-manager, so a solution that could close it afterwards could also be interesting.</p>
<p>Is there an option in Kubernetes to set a DNS alias, as can be done within a hosts file (62.116.156.20 servera)? Here servera is an external server. As a result, any pod within the cluster should be able to contact the server via DNS.</p> <p>Thanks</p>
<p>It totally depends on how you configure and deploy your pods. There is an option to add a custom DNS entry to the hosts file of every pod, documented here: <a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/</a></p> <p>Configuration snippet:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: &quot;127.0.0.1&quot;
      hostnames:
        - &quot;foo.local&quot;
        - &quot;bar.local&quot;
    - ip: &quot;10.1.2.3&quot;
      hostnames:
        - &quot;foo.remote&quot;
        - &quot;bar.remote&quot;
  containers:
    - name: cat-hosts
      image: busybox:1.28
      command:
        - cat
      args:
        - &quot;/etc/hosts&quot;
</code></pre>
<p>I have a FastAPI app with the following code</p> <pre class="lang-py prettyprint-override"><code> @app.on_event(&quot;startup&quot;) async def startup_event(): &quot;&quot;&quot;Initialize application services&quot;&quot;&quot; print(&quot;Starting the service&quot;) </code></pre> <p>when I run FastAPI directly from the terminal, I get the following output</p> <pre><code>INFO: Started server process [259936] INFO: Waiting for application startup. Starting the service INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit) </code></pre> <p>You can see that the print statement got executed.</p> <p>However, when the same app is automatically run inside a Kubernetes cluster, I get the following output</p> <pre><code> INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit) </code></pre> <p>The print statement did not get executed, in fact, any additional code inside the function never gets executed.</p> <p>However, if I exit the process like this:</p> <pre class="lang-py prettyprint-override"><code>@app.on_event(&quot;startup&quot;) async def startup_event(): &quot;&quot;&quot;Initialize application services&quot;&quot;&quot; print(&quot;Starting the service&quot;) exit(99) </code></pre> <p>The process exists then I can see the print statement.</p> <pre><code>SystemExit: 99 ERROR: Application startup failed. Exiting. Starting the service </code></pre> <p>What is the problem here?</p> <p>Edit: Actually no code whatsoever gets executed, I have put print statements literally everywhere and nothing gets printed, but somehow the webserver runs...</p>
<p>So, actually, there is no problem with my code, FastAPI, asyncio, or Kubernetes.</p> <p>Everything was actually working correctly, it's just that the output was buffered.</p> <p>After adding flush=True to the print statement, everything showed.</p> <p>I am answering this in case some poor soul stumbles upon this thread in the future.</p> <p>I spent days debugging this!!!</p>
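<p>A minimal sketch of the change, based on the startup handler from the question:</p> <pre class="lang-py prettyprint-override"><code>@app.on_event(&quot;startup&quot;)
async def startup_event():
    &quot;&quot;&quot;Initialize application services&quot;&quot;&quot;
    # flush=True forces the buffered output to be written immediately,
    # so it shows up in the container logs right away
    print(&quot;Starting the service&quot;, flush=True)
</code></pre> <p>Alternatively, setting the <code>PYTHONUNBUFFERED=1</code> environment variable (or running Python with <code>-u</code>) disables output buffering for the whole process.</p>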
<p>I'm trying to understand how the Kubernetes <strong>HorizontalPodAutoscaler</strong> works. Until now, I have used the following configuration:</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: my-deployment spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: my-deployment minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 50 </code></pre> <p>This uses the <code>targetCPUUtilizationPercentage</code> parameter but I would like to use a metric for the memory percentage used, but I was not able to find any example. Any hint?</p> <p>I found also that there is this type of configuration to support multiple metrics, but the <strong>apiVersion</strong> is <code>autoscaling/v2alpha1</code>. Can this be used in a production environment?</p> <pre><code>kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2alpha1 metadata: name: WebFrontend spec: scaleTargetRef: kind: ReplicationController name: WebFrontend minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu targetAverageUtilization: 80 - type: Object object: target: kind: Service name: Frontend metricName: hits-per-second targetValue: 1k </code></pre>
<p>Here is a manifest example for what you need, that includes <strong>Memory Metrics</strong>:</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: web-servers spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: web-servers minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 20 - type: Resource resource: name: memory target: type: AverageValue averageValue: 30Mi </code></pre> <p>An important thing to notice is that, as you can see, it uses the <strong>autoscaling/v2beta2 API version</strong>, so you need to follow all the previous instructions listed <a href="https://loft.sh/blog/kubernetes-horizontal-pod-autoscaling/" rel="nofollow noreferrer">here</a>.</p> <p>Regarding the possibility to use the <strong>autoscaling/v2alpha1</strong>, yes, you can use it, as it includes support for scaling on memory and custom metrics as this <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">URL</a> specifies, but keep in mind that alpha versions are released for testing, as they are not final versions.</p> <p>For more <strong>autoscaling/v2beta2 YAML’s</strong> examples and a deeper look into memory metrics, you can take a look at this <a href="https://stackoverflow.com/questions/69184304/how-memory-metric-is-evaluated-by-kubernetes-hpa">thread</a>.</p>
<p>Can somebody please help me understand why</p> <pre><code>kubectl get all --all-namespaces
</code></pre> <p>lists resources from all namespaces but not Ingress? Even if I specify a particular namespace like the following, I still don't get the Ingress resources:</p> <pre><code>kubectl get all -n app-space
</code></pre> <p>Why do I have to specifically refer to the resource type [Ingress] in this case if I want to get the list of all Ingress resources from all namespaces or a specific namespace?</p> <pre><code>kubectl get ingress --all-namespaces
kubectl get ingress -n app-space
</code></pre>
<p><code>kubectl get all</code> is not intended to list all resources, because that would produce unexpectedly large output; instead it's meant to list only some of the most used API resources. Check this <a href="https://github.com/kubernetes/kubernetes/pull/28955#issuecomment-232652070" rel="nofollow noreferrer">Pull Request</a> for the discussion.</p>
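<p>If you want ingresses alongside the other resources in one command, you can list the resource types explicitly, for example:</p> <pre><code>kubectl get ingress,deployments,services,pods --all-namespaces
</code></pre>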
<p>I'm running Kubernetes with docker desktop on windows. DD is up-to-date, and the <code>kubectl version</code> command returns 1.22 as both the client and server version.</p> <p>I executed <code>kubectl run my-apache --image httpd</code>, then <code>kubectl get all</code>, which only shows <a href="https://i.stack.imgur.com/9xWby.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xWby.png" alt="enter image description here" /></a></p> <p>There is no deployment or replicaset as I expect. This means some commands, such as <code>kubectl scale</code> doesn't work. Any idea what's wrong? Thanks.</p>
<p>The <code>kubectl run</code> command creates a Pod, not a Deployment. It used to create a Deployment in the past, before Kubernetes version 1.18 or so.</p> <p>For a Deployment you have to run this command:</p> <p><code>kubectl create deployment my-apache --image httpd</code></p>
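<p>Once the pods are managed by a Deployment, scaling works as expected, for example:</p> <pre><code>kubectl scale deployment my-apache --replicas=3
</code></pre>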
<p>I have a K8S service (app-filestash-testing) running like the following:</p> <pre><code>NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
app-filestash-testing   ClusterIP   10.111.128.18   &lt;none&gt;        10000/TCP   18h
kubernetes              ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP     20h
</code></pre> <p>I used the following yaml file to create an Ingress trying to reach this service:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-filestash-testing
spec:
  rules:
  - host: www.masternode.com
    http:
      paths:
      - backend:
          serviceName: app-filestash-testing
          servicePort: 10000
</code></pre> <p>In the <em>/etc/hosts</em> file, I made this change (I used the worker node public IP):</p> <pre><code>127.0.0.1 localhost
xx.xxx.xxx.xxx www.masternode.com
</code></pre> <p>However, when I checked the Ingress, I saw that the Ingress port is 80.</p> <pre><code>NAME                    CLASS   HOSTS                ADDRESS   PORTS   AGE
app-filestash-testing   nginx   www.masternode.com             80      14h
</code></pre> <p>Currently the service is running and listening on port 10000, but the Ingress port is 80.</p> <p>I am just wondering whether there is any method/setting to change the port number of the Ingress to 10000. How can I reach this service through the Ingress? Is it possible to set the port number in the <em>/etc/hosts</em> file?</p> <p>Thanks.</p>
<p>From: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p> <blockquote> <p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.</p> </blockquote> <p>NodePort might be what you are looking for. More information and options are documented here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
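<p>As a rough sketch, a NodePort Service for the service in the question could look like this (the selector label and the nodePort value are assumptions, adjust them to your deployment; nodePort must be in the 30000-32767 range by default):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-filestash-testing
spec:
  type: NodePort
  selector:
    app: app-filestash-testing
  ports:
    - port: 10000
      targetPort: 10000
      nodePort: 30100
</code></pre> <p>The service would then be reachable on &lt;node-ip&gt;:30100 from outside the cluster.</p>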
<p>I am able to connect to a Kubernetes cluster using the Fabric8 API with the following code:</p> <pre><code>    String masterUrl = &quot;&lt;Kube Master Url&gt;&quot;;
    Map&lt;String, String&gt; certFileNamesMap = getCertificates();

    Config config = new ConfigBuilder().withMasterUrl(masterUrl).build();
    config.setWebsocketPingInterval(30000L);
    config.setConnectionTimeout(300000);
    config.setRequestTimeout(300000);

    if (certFileNamesMap != null) {
        config.setCaCertFile(certFileNamesMap.get(CA_CERT_FILE));
        config.setClientCertFile(certFileNamesMap.get(CLIENT_CERT_FILE));
        config.setClientKeyFile(certFileNamesMap.get(CLIENT_KEY_FILE));
    }

    KubernetesClient client = new DefaultKubernetesClient(config);
</code></pre> <p>I am able to set the certificates and connect to https://MasterIP, but with the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> API I'm not able to figure out how to set the certs as well as the remote master IP. Is there any way to do that with the official Kubernetes client API, since the Fabric8 API has been officially discontinued by their team?</p> <p>I'm trying with the following code inside a container in minikube:</p> <pre><code>    ApiClient client = ClientBuilder.cluster().build();
    Configuration.setDefaultApiClient(client);
    System.out.println(&quot;Loading in app cluster 3&quot;);
    CoreV1Api api = new CoreV1Api();
    V1PodList list = api.listPodForAllNamespaces(null, null, null, null, 100, null, null, null, null, null);
    System.out.println(&quot;Listing all pods: &quot;);
    for (V1Pod item : list.getItems()) {
        System.out.println(item.getMetadata().getName());
    }
</code></pre> <p>I'm getting a Forbidden error:</p> <pre><code>io.kubernetes.client.openapi.ApiException: Forbidden
    at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:993)
    at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:905)
    at io.kubernetes.client.openapi.apis.CoreV1Api.listPodForAllNamespacesWithHttpInfo(CoreV1Api.java:35956)
    at io.kubernetes.client.openapi.apis.CoreV1Api.listPodForAllNamespaces(CoreV1Api.java:35848)
</code></pre>
<p>I was able to figure out the way: we need to use the <code>~/.kube/config</code> file and copy all the certificates into that file itself. There is an API in the Kubernetes Java client which reads the config file and gives access to the cluster.</p>
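<p>A minimal sketch of loading such a kubeconfig with the official Java client (the path is a placeholder; the master URL and certificates are taken from the entries embedded in that file):</p> <pre><code>import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.util.Config;

// builds an ApiClient from a kubeconfig file (server URL, CA cert, client cert/key)
ApiClient client = Config.fromConfig(&quot;/path/to/.kube/config&quot;);
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
</code></pre>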
<p>I have a nodejs application running on Google Kubernetes Engine (v1.20.8-gke.900)</p> <p>I want to add custom header to get client's Region and lat long so I refer to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">this article</a> and <a href="https://cloud.google.com/load-balancing/docs/https/custom-headers" rel="nofollow noreferrer">this one</a> also and created below kubernetes config file, but when I am printing the header I am not getting any custom header.</p> <pre class="lang-yaml prettyprint-override"><code>#k8s.yaml apiVersion: v1 kind: Namespace metadata: name: my-app-ns-prod --- apiVersion: apps/v1 kind: Deployment metadata: name: npm-app-deployment namespace: my-app-ns-prod labels: app: npm-app-deployment tier: backend spec: template: metadata: name: npm-app-pod namespace: my-app-ns-prod labels: app: npm-app-pod tier: backend spec: containers: - name: my-app-container image: us.gcr.io/img/my-app:latest ports: - containerPort: 3000 protocol: TCP envFrom: - secretRef: name: npm-app-secret - configMapRef: name: npm-app-configmap imagePullPolicy: Always imagePullSecrets: - name: gcr-regcred replicas: 3 minReadySeconds: 30 selector: matchLabels: app: npm-app-pod tier: backend --- apiVersion: v1 kind: Service metadata: name: npm-app-service namespace: my-app-ns-prod annotations: cloud.google.com/backend-config: '{&quot;ports&quot;: {&quot;80&quot;:&quot;npm-app-backendconfig&quot;}}' cloud.google.com/neg: '{&quot;ingress&quot;: true}' spec: selector: app: npm-app-pod tier: backend ports: - name: http protocol: TCP port: 80 targetPort: 3000 - name: https protocol: TCP port: 443 targetPort: 3000 type: LoadBalancer --- apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: npm-app-backendconfig namespace: my-app-ns-prod spec: customRequestHeaders: headers: - &quot;HTTP-X-Client-CityLatLong:{client_city_lat_long}&quot; - &quot;HTTP-X-Client-Region:{client_region}&quot; - &quot;HTTP-X-Client-Region-SubDivision:{client_region_subdivision}&quot; - &quot;HTTP-X-Client-City:{client_city}&quot; --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: rules: - host: app.example.com http: paths: - path: /api/v1 pathType: Prefix backend: service: name: npm-app-service port: number: 80 --- apiVersion: v1 kind: ConfigMap metadata: name: npm-app-configmap namespace: my-app-ns-prod data: APP_ID: &quot;My App&quot; PORT: &quot;3000&quot; --- apiVersion: v1 kind: Secret metadata: name: npm-app-secret namespace: my-app-ns-prod type: Opaque data: MONGO_CONNECTION_URI: &quot;&quot; SESSION_SECRET: &quot;&quot; </code></pre>
<p>Actually the issue was with the Ingress controller: I had missed defining &quot;<code>cloud.google.com/backend-config</code>&quot;. Once I defined that, I was able to get the custom headers. Also, I switched from <code>nginx</code> to the <strong>GKE Ingress controller</strong> (<code>gce</code>), but the same thing works with <code>nginx</code> as well.</p> <p>This is what my final Ingress looks like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;npm-app-backendconfig&quot;}'
    kubernetes.io/ingress.class: &quot;gce&quot;
spec:
  ...
  ...
</code></pre> <p>Reference: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#request_headers" rel="nofollow noreferrer">User-defined request headers</a></p>
<p>Is it possible to obtain Kubernetes logs for a dedicated time range?</p> <p>All I can do right now is to make a dump of about the last-hour log for the single pod using <code>kubectl logs &gt; dump.log</code> cmd.</p> <p>But for debugging reasons, it's necessary to obtain the logs for the last week. I was unable to find any abilities to do this in Kubernetes logs.</p> <p>The only thought is to attach some external service like Kibana for the logs collection, but maybe built-in Kubernetes remedies allow to do this?</p> <p>Thank you.</p>
<p><strong>AWK</strong> is an awesome tool in Unix/Linux systems for these types of logical operations.</p> <p>So, to display logs between two time ranges (e.g. 10 AM to 11 AM):</p> <ol> <li> <p>Using <code>--since-time</code> and <code>awk</code></p> <p><code>kubectl logs pod_name --since-time=2022-04-30T10:00:00Z | awk '$0 &lt; &quot;2022-04-30 11:00:00&quot;'</code></p> </li> <li> <p>Using only <code>awk</code></p> <p><code>kubectl logs pod_name | awk '$0 &gt; &quot;2022-04-30 10:00:00&quot;' | awk '$0 &lt; &quot;2022-04-30 11:00:00&quot;'</code></p> </li> </ol> <p>Note: Please adjust the date/time format used in the awk commands based on the log output.</p>
<p>I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.</p> <p>The application is an HTTP web backend.</p> <p>Usually how I'd usually handle this in a non GCP/K8S context is have a load balancer in front of the application, setup a new web backend and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would essentially have 0 downtime while also allowing for a seemless rollback if anything goes wrong.</p> <p>I do not see why this should not work for this context as well however I'm not 100% sure. And if there is a more robust or alternative way to do this (GCP/GKE friendly way), I'd like to investigate that.</p> <p><strong>So to summarize my question,</strong> does GCP/GKE support this type of migration functionality? If not, is there any implications I need to be aware of with my usual load balancer approach mentioned above?</p> <hr /> <p>The reason for migrating is the current k8s cluster is running quite an old version (1.18) and if doing an GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well risk.</p>
<p>I see 2 approaches:</p> <ol> <li>In the new cluster get a new IP address and update the DNS record to point to the new load balancer</li> <li>See if you can switch to Multi-cluster gateways, however that would probably require you to use approach 1 to switch to multi-cluster gateways as well: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways</a></li> </ol>
<p>Consider this Kubernetes Pod:</p> <pre><code># pod.yaml apiVersion: v1 kind: Pod metadata: name: testing123 spec: containers: - name: testing123 image: busybox:latest command: ['sh', '-c', 'echo &quot;Hello, Kubernetes!&quot; &amp;&amp; sleep 3600'] </code></pre> <p>If I deploy this Pod and run <code>logs</code> I see this:</p> <pre><code>$ kubectl apply -f pod.yaml $ k logs testing123 Hello, Kubernetes! </code></pre> <p>So far, so good. I now &quot;login&quot; to the Pod and run an echo command:</p> <pre><code>$ k exec -ti testing123 -- ash / # echo &quot;Run after logging in.&quot; Run after logging in. / # exit $ k logs testing123 Hello, Kubernetes! $ </code></pre> <p>Why didn't <code>Run after logging in.</code> appear in the logs output of the Pod?</p>
<p>Container logs are captured from the stdout and stderr of their command line / entrypoint process.</p> <p>When you enter a container (<code>kubectl exec</code>), you spawn a new process with its own stdin/stdout/stderr, so its output does not end up in the container log.</p>
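<p>As a side note, if you do want output from an exec session to show up in <code>kubectl logs</code>, one common trick is to redirect it to the stdout of the container's main process (PID 1), assuming PID 1 is the entrypoint process whose output is captured:</p> <pre><code>/ # echo &quot;Run after logging in.&quot; &gt; /proc/1/fd/1
</code></pre>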
<p>I use the following function and I need to <strong>raise</strong> its test coverage (if possible to 100%). The problem is that typically I use an <code>interface</code> to handle such cases in Go, but for this specific case I am not sure how to do it, as this is a bit more <strong>tricky</strong>. Any idea?</p> <p>The package <a href="https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/compute/v1" rel="nofollow noreferrer">https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/compute/v1</a> which I use doesn't expose an <code>interface</code>, so I am not sure how I can mock it.</p> <pre><code>import (
    &quot;context&quot;
    &quot;errors&quot;
    &quot;fmt&quot;
    &quot;os&quot;

    compute &quot;cloud.google.com/go/compute/apiv1&quot;
    &quot;google.golang.org/api/iterator&quot;
    &quot;google.golang.org/api/option&quot;
    computev1 &quot;google.golang.org/genproto/googleapis/cloud/compute/v1&quot;
)

func Res(ctx context.Context, project string, region string, vpc string, secret string) error {

    c, err := compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
    if err != nil {
        return err
    }

    defer c.Close()
    addrReq := &amp;computev1.ListAddressesRequest{
        Project: project,
        Region:  region,
    }
    it := c.List(ctx, addrReq)
    for {
        resp, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return err
        }
        if *(resp.Status) != &quot;IN_USE&quot; {
            return ipConverter(*resp.Name, vpc)
        }
    }
    return nil
}
</code></pre>
<p>Whenever I find myself in this scenario, I found that the easiest solution is to create <em>missing</em> interfaces myself. I limit these interfaces to the types and functions that I am using, instead of writing interfaces for the entire library. Then, in my code, instead of accepting third-party concrete types, I accept my interfaces for those types. Then I use gomock to generate mocks for these interfaces as usual.</p> <p>The following is a descriptive example inspired by your code.</p> <pre class="lang-golang prettyprint-override"><code>type RestClient interface { List(context.Context, *computev1.ListAddressesRequest) (ListResult, error) // assuming List returns ListResult type. Close() error } func newRestClient(ctx context.Context, secret string) (RestClient, error) { return compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret)) } func Res(ctx context.Context, project string, region string, vpc string, secret string) error { c, err := newRestClient(ctx, secret) if err != nil { return err } defer c.Close() return res(ctx, project, region, vpc, c) } func res(ctx context.Context, project string, region string, vpc string, c RestClient) error { addrReq := &amp;computev1.ListAddressesRequest{ Project: project, Region: region, } it, err := c.List(ctx, addrReq) if err != nil { return err } for { resp, err := it.Next() if err == iterator.Done { break } if err != nil { return err } if *(resp.Status) != &quot;IN_USE&quot; { return ipConverter(*resp.Name, vpc) } } return nil } </code></pre> <p>Now you can test the important bits of the <code>Res</code> function by injecting a mock <code>RestClient</code> to the internal <code>res</code> function.</p>
<ol> <li> <p>Using WordPress php-apache-7.4 as the base image, I created a Dockerfile with a few customisations and created an image. I am using the same docker-entrypoint.sh and wp-config-docker.php files from the official Docker Hub image.</p> </li> <li> <p>Using the image, when I create a container on Docker Desktop it works fine and I am able to load the WP page.</p> </li> <li> <p>I uploaded the same image to Docker Hub, and using that image from there I created a pod on an EKS cluster, where I receive the error &quot;exec /usr/local/bin/docker-entrypoint.sh: exec format error.&quot;</p> </li> </ol> <p>I am using the files from the below repo <a href="https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache" rel="noreferrer">https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache</a></p> <p>Only the Dockerfile in the above repo is modified, to install memcached and copy wp-config.php. The other two files I am using without any changes.</p> <p>I tried changing the docker-entrypoint.sh script to add <code>#!/bin/bash</code> as mentioned in a reported issue, and I also tried to create a custom-entrypoint.sh that edits the original docker-entrypoint.sh script, which was suggested on another page, but neither worked.</p> <p>custom-entrypoint.sh</p> <pre><code>#!/bin/bash
sed -i -e 's/^exec &quot;$@&quot;/#exec &quot;$@&quot;/g' /usr/local/bin/docker-entrypoint.sh
source docker-entrypoint.sh
exec &quot;$@&quot;
</code></pre> <p>Trying to fix this, the only confusing thing is that on Docker Desktop, when I create a container using the same image, it runs without any error.</p>
<p>As mentioned in the comment above by David Maze, the issue is due to building the image on a Mac with the M1 Pro chip.</p> <p>To fix this I needed to add <code>FROM --platform=linux/amd64 &lt;image&gt;:&lt;version&gt;</code> in the Dockerfile and build, or you can pass the platform flag while running the build:</p> <p><code>docker build --platform=linux/amd64 -t &lt;image&gt;:&lt;version&gt; .</code></p> <p>Both solutions will work. I added <code>FROM --platform=linux/amd64</code> to the Dockerfile and it's fixed now.</p>
<p>I wanted to know exactly what is the difference between '&gt;-' and '|-' especially in kubernetes yaml manifests</p>
<p>Newlines in folded block scalars (<code>&gt;</code>) are subject to line folding, newlines in literal block scalars (<code>|</code>) are not.</p> <p>Line folding replaces a single newline between non-empty lines with a space, and in the case of empty lines, reduces the number of newline characters between the surrounding non-empty lines by one:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt; # folds into &quot;one two\nthree four\n\nfive\n&quot; one two three four five </code></pre> <p>Line folding does not occur between lines when at least one line is more indented, i.e. contains whitespace at the beginning that is not part of the block's general indentation:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt; # folds into &quot;one\n two\nthree four\n\n five\n&quot; one two three four five </code></pre> <p>Adding <code>-</code> after either <code>|</code> or <code>&gt;</code> will strip the newline character from the last line:</p> <pre class="lang-yaml prettyprint-override"><code>a: &gt;- # folded into &quot;one two&quot; one two b: &gt;- # folded into &quot;one\ntwo&quot; one two </code></pre> <p>In contrast, <code>|</code> emits every newline character as-is, the sole exception being the last one if you use <code>-</code>.</p>
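<p>For comparison, the same content written with literal block scalars, where every newline is kept as-is:</p> <pre class="lang-yaml prettyprint-override"><code>a: | # literal: &quot;one two\nthree four\n&quot;
  one two
  three four
b: |- # literal with strip: &quot;one two\nthree four&quot;
  one two
  three four
</code></pre>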
<p>I have <code>sizeLimit</code> property under <code>emptyDir</code> set to 2Gi in my template base file. I want to remove the <code>sizelimit</code> and just have <code>emptyDir: {}</code>. I've been unable to achieve this using Kustomization overlays. I will detail my folder structure and kustomization yamls below.</p> <p><strong>Folder Structure:</strong></p> <pre><code>application ├── majorbase │   ├── kustomization.yaml │   └── resources │   └── web-template.yaml ├── minorbase │   ├── kustomization.yaml │   └── resources └── myoverlays ├── kustomization.yaml └── resources └── my-new-web.yaml </code></pre> <p>The folder <code>myoverlays</code> contains the following contents in it's kustomization.yaml file</p> <pre><code>bases: - ../minorbase patchesStrategicMerge: - resources/my-new-web.yaml </code></pre> <p>The folder <code>minorbase</code> contains the following contents in it's kustomization.yaml file</p> <pre><code>bases: - ../majorbase </code></pre> <p>The folder <code>majorbase</code> contains the following contents in it's kustomization.yaml file</p> <pre><code>resources: - resources/web-template.yaml </code></pre> <p>The section I want to edit looks like this in the majorbase/template.</p> <pre><code>volumes: - name: test-vol emptyDir: sizeLimit: &quot;2Gi&quot; </code></pre> <p>The above configuration needs to be updated using overlays as below.</p> <pre><code>volumes: - name: test-vol emptyDir: {} </code></pre> <p>This is where my problem lies. Kustomization just picks the 2Gi value mentioned in the base whenever I remove the <code>sizelimit</code> in my overlays. When I mention different value to <code>sizeLimit</code> such as &quot;1Gi&quot; in my overlays file, kustomization is picking up the change. What is the cause of this behaviour? Is it possible to achieve what I'm trying to do here?</p>
<p>NB: This answer assumes a recent version of Kustomize (I'm running 4.5.2 locally). Your examples are using deprecated syntax (the <code>bases</code> section was deprecated <a href="https://github.com/kubernetes-sigs/kustomize/blob/661743c7e5bd8c3d9d6866b6bc0a6f0e0b0512eb/site/content/en/blog/releases/v2.1.0.md#resources-expanded-bases-deprecated" rel="nofollow noreferrer">in version 2.1.0</a>, for example).</p> <hr /> <p>Your problem is that you're using a <code>strategicMerge</code> patch, and you're merging an empty map (<code>{}</code>) with <code>{&quot;sizeLimit&quot;: &quot;2Gi&quot;}</code>. If you merge an empty map with anything, it's a no-op: you end up with the &quot;anything&quot;.</p> <p>To explicitly delete an element, you have a few choices.</p> <p>You can use the <code>$patch: replace</code> directive (you can find an example of that <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md" rel="nofollow noreferrer">here</a>) to have Kustomize <em>replace</em> the <code>emptyDir</code> element, rather than merging the contents. That would look like:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
    - name: test-vol
      emptyDir:
        $patch: replace
</code></pre> <p>The corresponding <code>kustomization.yaml</code> might look something like:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - path: resources/my-new-web.yaml
</code></pre> <hr /> <p>Alternately, you can use a JSONPatch patch, which is good for explicitly deleting fields:</p> <pre><code>- path: /spec/volumes/0/emptyDir/sizeLimit
  op: remove
</code></pre> <p>Where <code>kustomization.yaml</code> would look like:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - target:
      kind: Pod
      name: example
    path: resources/my-new-web.yaml
</code></pre> <hr /> <p>You can find a complete runnable demonstration of this <a href="https://github.com/larsks/so-example-72086055" rel="nofollow noreferrer">here</a>.</p>
<p>Is it possible to scale a pod down to 0 replicas when another pod is down? I'm familiar with the basics of the Horizontal Pod Autoscaling concept, but as I understand it, it scales pods up or down only when demand for resources (CPU, memory) changes. My CI pipeline follows a blue/green pattern, so when the new version of the application is being deployed, the second one is scaled down to 0 replicas, leaving other pods belonging to the same environment up and wasting resources. Do you have any idea how to solve this using Kubernetes or Helm features?</p> <p>Thanks</p>
<p>If you have a <strong>CI</strong> pipeline, you can just run the kubectl command and scale down the old deployment before deploying the new blue/green version; this way no resources will be wasted.</p> <p>However, yes, you can also scale the deployment or application <strong>UP/DOWN</strong> based on custom metrics.</p> <p>I would recommend checking out the cloud-native project <strong>KEDA</strong>: <a href="https://keda.sh/" rel="nofollow noreferrer">https://keda.sh/</a></p> <p><strong>Keda</strong>:</p> <blockquote> <p>KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.</p> </blockquote> <p><strong>Example</strong></p> <pre><code>apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
    containerName: {container-name}   # Optional. Default: deployment.spec.template.spec.containers[0]
  pollingInterval: 30                 # Optional. Default: 30 seconds
  cooldownPeriod: 300                 # Optional. Default: 300 seconds
  minReplicaCount: 0                  # Optional. Default: 0
  maxReplicaCount: 100                # Optional. Default: 100
  triggers:
  # {list of triggers to activate the deployment}
</code></pre> <p>ScaledObject reference: <a href="https://keda.sh/docs/1.4/concepts/scaling-deployments/#scaledobject-spec" rel="nofollow noreferrer">https://keda.sh/docs/1.4/concepts/scaling-deployments/#scaledobject-spec</a></p>
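<p>For the CI pipeline approach mentioned at the top, the scale-down is a single command (the deployment name is a placeholder):</p> <pre><code>kubectl scale deployment &lt;old-version-deployment&gt; --replicas=0
</code></pre>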
<p>Imagine we have a folder of <code>.yaml</code> files containing kubernetes objects, let's say, deployments, configmaps and HPAs.</p> <pre><code>./file1.yaml # {'kind': 'Deployment', ... }, {'kind': 'ConfigMap', ...} ./file2.yaml # {'kind': 'ConfigMap', ... }, {'kind': 'HorizontalPodAutoscaler', ... } </code></pre> <p>I need to deserialize them into instances of a proper class, but unlike the regular deserialization method, I want to avoid relying on YAML tags and make a choice by YAML body instead (which is why i have doubts about <code>register_class()</code> approach). There is a key <code>'kind'</code> that should identify the proper class instance.</p> <p>The end goal is to parse, modify and dump those objects back (<strong>preserving comments and formatting</strong>, so those classes would be a subclass of a CommentedMap or something similar).</p> <p>Is there a way in ruamel.yaml how could I parse YAML into</p> <pre><code>from ruamel.yaml.comments import CommentedMap class KubeObjectBase(CommentedMap): def some_additional_func(self): pass class Deployment(KubeObjectBase): def deployment_method(self): pass class ConfigMap(KubeObjectBase): pass </code></pre>
<p>I am not entirely sure what the YAML files actually look like. The part after <code>#</code> in your example isn't correct YAML, so I made things up.</p> <p>This doesn't affect processing to get what you want. As long as you have valid, loadable YAML, just recursive over the data and replace entries.</p> <p>You need to somehow map the value for <code>kind</code> to your actual classes. If there are not that many classes just make a string to class dictionary, if you have many, you should scan your Python files and create that map automatically (either from the class name or from some class attribute):</p> <pre class="lang-py prettyprint-override"><code>import sys import ruamel.yaml FA = ruamel.yaml.comments.Format.attrib from pathlib import Path file1 = Path('file1.yaml') file1.write_text(&quot;&quot;&quot;\ - {'kind': 'Deployment', a: 1} - kind: ConfigMap b: kind: Deployment c: 3 x: 42 &quot;&quot;&quot;) file2 = Path('file2.yaml') file2.write_text(&quot;&quot;&quot;\ [ {'kind': 'ConfigMap', d: 4}, {'kind': 'HorizontalPodAutoscaler', e: 5}, ] &quot;&quot;&quot;) kob_map = {} class KubeObjectBase(ruamel.yaml.comments.CommentedMap): def some_additional_func(self): pass def __repr__(self): return f&quot;{self.__class__.__name__}({', '.join([f'{k}: {v}' for k, v in self.items()])})&quot; class Deployment(KubeObjectBase): def deployment_method(self): pass kob_map['Deployment'] = Deployment class ConfigMap(KubeObjectBase): pass kob_map['ConfigMap'] = ConfigMap class HorizontalPodAutoscaler(KubeObjectBase): pass kob_map['HorizontalPodAutoscaler'] = HorizontalPodAutoscaler yaml = ruamel.yaml.YAML() for v in kob_map.values(): yaml.Representer.add_representer(v, yaml.Representer.represent_dict) def un_kind(d, map): if isinstance(d, dict): for k, v in d.items(): un_kind(v, map) try: if 'kind' in v: # typ = map[v.pop('kind')] typ = nv = map[v['kind']] d[k] = typ(v) setattr(nv, FA, v.fa) setattr(nv, '_comment_attrib', v.ca) except TypeError: pass elif isinstance(d, list): for idx, elem in enumerate(d): un_kind(elem, map) try: if 'kind' in elem: # typ = map[elem.pop('kind')] typ = map[elem['kind']] d[idx] = nv = typ(elem) setattr(nv, FA, elem.fa) setattr(nv, '_comment_attrib', elem.ca) except TypeError: pass for fn in Path('.').glob('*.yaml'): data = yaml.load(fn) print(f'{fn}:') un_kind(data, kob_map) print(list(data)) yaml.dump(data, sys.stdout) </code></pre> <p>which gives:</p> <pre class="lang-none prettyprint-override"><code>file1.yaml: [Deployment(kind: Deployment, a: 1), ConfigMap(kind: ConfigMap, b: Deployment(kind: Deployment, c: 3, x: 42))] - {kind: Deployment, a: 1} - kind: ConfigMap b: kind: Deployment c: 3 x: 42 file2.yaml: [ConfigMap(kind: ConfigMap, d: 4), HorizontalPodAutoscaler(kind: HorizontalPodAutoscaler, e: 5)] [{kind: ConfigMap, d: 4}, {kind: HorizontalPodAutoscaler, e: 5}] </code></pre>
<p>I think the question has most of the information. If I have a Service that is a ClusterIP Service, and a Pod accesses that Service, since the load balancing is built into the routing table, are the routing rules smart enough to prefer traffic to the local Node? Is there a way to configure it so that the traffic does prefer (or even be forced) to go to the local Node?</p> <p>Reasoning - I'm thinking about running a caching server as a DaemonSet, which makes sense if traffic is likely to go to a local Pod.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#internal-traffic-policy" rel="nofollow noreferrer">internalTrafficPolicy</a> is probably what you are looking for.</p>
<p>Our team is planning the migration of a legacy enterprise application, developed in ASP.NET Web Forms, to .NET Core 6 using a containerized approach. For this we will mostly target the Kubernetes container orchestration platform.</p> <p>The application is highly configurable and can be integrated with related apps to a certain extent. It has a large number of XML-based configuration files (more than 100). The current mode of deployment is IIS (on-premise).</p> <p>The major technical challenge we are facing is managing our application configuration.</p> <p>ConfigMap is one of the options available in Kubernetes for configuration management. The ConfigMap API allows generating a ConfigMap from environment variables, a YAML file, an existing configuration file or a directory. The directory-based approach seems the most suitable. However, considering the maximum size limit of a ConfigMap, we may end up creating multiple ConfigMaps.</p> <p>We need to make sure that:</p> <ol> <li>The migrated app should be able to use the configuration, but the application image should stay separate and the configuration should be injectable from outside.</li> <li>Configuration changes should be reflected in the application without restarting the Pod.</li> </ol> <p>Since a ConfigMap is effectively a read-only resource once the container starts, I am currently looking for a mechanism to reload configuration without the need to restart the Pod/container.</p> <p>The initial focus is to achieve this. (The impact of changed configuration on active users who might be using application features based on the previous configuration is a different topic altogether.)</p>
<p>You can do it without restarting the Pod using a ConfigMap alone; however, it still depends largely on your application side.</p> <p>You can inject your ConfigMap and mount it into the Pod as a volume: Kubernetes automatically reloads the ConfigMap contents when it is mounted as a directory. Note that this does not work if you are using a <strong>subPath</strong> mount.</p> <p>For auto-reloading a ConfigMap without restarting the Pod, you can read more here: <a href="https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388</a></p> <p><strong>YAML example</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: test-config data: hello: world --- apiVersion: apps/v1 kind: Deployment metadata: name: test spec: selector: matchLabels: app: test replicas: 1 template: metadata: labels: app: test spec: containers: - name: configmaptestapp image: &lt;Image&gt; volumeMounts: - mountPath: /config name: data-volume ports: - containerPort: 80 volumes: - name: data-volume configMap: name: test-config </code></pre> <p>Official documentation: <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically</a></p> <blockquote> <p>Mounted ConfigMaps are updated automatically When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).</p> <p>ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.</p> </blockquote> <p><strong>Note</strong>:</p> <blockquote> <p>A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.</p> </blockquote> <p>In either case, your application needs to handle the mounted content properly, with its own change detection, etc.</p>
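<p>A quick way to see the propagation in action, assuming the example above is deployed (the pod name is a placeholder): change the ConfigMap value and watch the mounted file catch up after the kubelet sync period:</p> <pre><code># change hello: world to some other value
kubectl edit configmap test-config

# inside the pod, the mounted file is updated after the sync period
kubectl exec -it &lt;test-pod&gt; -- sh -c 'while true; do cat /config/hello; sleep 10; done'
</code></pre>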
<p>My Rancher cluster has been set up for around 3 weeks. Everything works fine, but there is one problem while installing MetalLB: I found there is no kube-proxy in my cluster. There isn't even a kube-proxy pod on any node, so I could not follow the installation guide to set up the kube-proxy ConfigMap.</p> <p>To me, it is really strange to have a cluster without kube-proxy.</p> <p>My Rancher cluster setup is below:</p> <pre><code>Cluster Provider: RKE Provision and Provision : Use existing nodes and create a cluster using RKE Network Plugin : canal </code></pre> <p>Maybe I am misunderstanding something. NodePort and ClusterIP services resolve correctly.</p>
<p>Finally, I found my kube-proxy. It runs as a process on the host, not as a Docker container.</p> <p>In Rancher, you should edit <code>cluster.yml</code> to pass extra args to kube-proxy. Rancher will then apply them to every node of the cluster automatically.</p> <pre class="lang-sh prettyprint-override"><code>root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16 </code></pre>
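<p>For reference, a rough sketch of what that section of <code>cluster.yml</code> might look like, assuming RKE's <code>services</code> block; the exact args are just examples matching the process shown above:</p> <pre><code>services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc
      ipvs-strict-arp: &quot;true&quot;
</code></pre>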
<p>I am spinning up a new Jupyter notebook instance from JupyterHub and wish to have Kubernetes API access from inside the spun-up container. According to the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/security.html#kubernetes-api-access" rel="nofollow noreferrer">docs</a>, I added the service account parameter to my Helm values and, as expected, I can see the service account token mounted.</p> <pre><code>subu@jupyter-subu:~$ sudo ls /run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token </code></pre> <p>When I try to run kubectl, however, I get an access-denied error:</p> <pre><code>subu@jupyter-subu:~$ kubectl get pods error: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied </code></pre> <p>Fair enough, but if I run it with sudo it simply ignores the service account token.</p> <pre><code>subu@jupyter-subu:~$ sudo kubectl get pods The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>If I set up the kubectl config manually with the details of the token, it works fine; it's just the default settings that don't work. Any ideas on why this could be happening would be much appreciated!</p>
<p>In order to make kubectl use the projected token, the environment variables KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST must be set in your environment. These are automatically injected upon pod start, but likely only for your user, not for the sudo <code>root</code> user.</p> <p>Make sure to pass these variables for the root environment (<code>sudo -E kubectl get pods</code>) or make sure the projected token is readable by your user (this should be achievable by setting the KubeSpawner's singleuser_uid to your UID <a href="https://github.com/jupyterhub/kubespawner/issues/140" rel="nofollow noreferrer">https://github.com/jupyterhub/kubespawner/issues/140</a>).</p>
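<p>A small sketch of the two approaches from a shell inside the notebook pod (nothing here is JupyterHub-specific):</p> <pre><code># keep the pod's environment when switching to root
sudo -E kubectl get pods

# or pass only the two variables kubectl needs for in-cluster config
sudo KUBERNETES_SERVICE_HOST=&quot;$KUBERNETES_SERVICE_HOST&quot; \
     KUBERNETES_SERVICE_PORT=&quot;$KUBERNETES_SERVICE_PORT&quot; \
     kubectl get pods
</code></pre>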
<p>Below is my app definition that uses <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/" rel="nofollow noreferrer">azure csi store provider</a>. Unfortunately, this definition throws <code>Error: secret 'my-kv-secrets' not found</code> why is that?</p> <h3>SecretProviderClass</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: my-app-dev-spc spec: provider: azure secretObjects: - secretName: my-kv-secrets type: Opaque data: - objectName: DB-HOST key: DB-HOST parameters: keyvaultName: my-kv-name objects: | array: - | objectName: DB-HOST objectType: secret tenantId: &quot;xxxxx-yyyy-zzzz-rrrr-vvvvvvvv&quot; </code></pre> <h3>Pod</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: labels: run: debug name: debug spec: containers: - args: - sleep - 1d name: debug image: alpine env: - name: DB_HOST valueFrom: secretKeyRef: name: my-kv-secrets key: DB-HOST volumes: - name: kv-secrets csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: my-app-dev-spc nodePublishSecretRef: name: my-sp-secrets </code></pre>
<p>It turned out that the Secrets Store CSI driver only works when the volume is actually mounted via <code>volumeMounts</code>. So if you forget to specify it in your YAML definition, it will not work! Below is the fix.</p> <h3>Pod</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: labels: run: debug name: debug spec: containers: - args: - sleep - 1d name: debug image: alpine env: - name: DB_HOST valueFrom: secretKeyRef: name: my-kv-secrets key: DB-HOST volumeMounts: - name: kv-secrets mountPath: /mnt/kv_secrets readOnly: true volumes: - name: kv-secrets csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: my-app-dev-spc nodePublishSecretRef: name: my-sp-secrets </code></pre>
<p>We have one Kubernetes cluster running and we want to connect it to a new cluster. I found Google Cloud's <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services" rel="nofollow noreferrer">multi-cluster Services (MCS API)</a>, which seems promising, but it needs all <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services#requirements" rel="nofollow noreferrer">clusters to be in the same VPC</a>.</p> <p>How does one migrate / move / add a running cluster to a VPC?</p>
<p>I don't think you can <strong>move</strong> a cluster to another VPC, although you can create a new cluster, or clone the existing one, into the new VPC.</p> <p>You can clone the existing cluster using the <strong>DUPLICATE</strong> tab in the cluster details; it just creates infrastructure with the same config specs, and at that point you can change the Networking (VPC) settings as needed.</p> <p>Once the duplicated (cloned) cluster is up &amp; running in the desired VPC, you can start migrating your workloads: Deployments, Secrets, StatefulSets, etc.</p> <p>For workload migration you can check out this tool: <a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p> <p><strong>Velero</strong> is mostly used for backup &amp; restore.</p> <p>It makes migrating the workloads easier than re-applying the old <strong>YAML</strong> by hand, but it depends on your setup.</p> <p>Here is my small article about <code>migrating/cloning</code> workloads across <strong>GCP</strong> <strong>GKE</strong> clusters: <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8</a></p>
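<p>If you go the Velero route, the basic flow is a backup on the old cluster and a restore on the new one. A rough sketch (the namespace and backup names are placeholders, and Velero must be installed in both clusters pointing at the same object storage):</p> <pre><code># on the old cluster
velero backup create my-ns-backup --include-namespaces my-namespace

# on the new cluster, once the backup is visible there
velero restore create --from-backup my-ns-backup
</code></pre>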
<p>I have a Kubernetes cluster set up on Google Kubernetes Engine. I have a pod that hosts an RTMP application. This application requires two ports, one for HTTP and one for RTMP. I have set up an ingress Nginx load balancer, and it routes the HTTP traffic correctly. I want to be able to route the RTMP traffic as well, but when I write RTMP in my YAML file it shows up as an unknown field. I am using skaffold and Google Cloud Build to deploy.</p> <p>Here is the Ingress-srv file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - http: paths: - path: /?(.*) backend: serviceName: stream-server-srv servicePort: 8000 </code></pre> <p>What I want is an entry for RTMP on port 1935 that I can then use to stream live video using the RTMP protocol.</p>
<p>RTMP is a TCP-based protocol and standard Ingress does not support TCP services.</p> <p>The NGINX Ingress controller (which it looks like you are using) can <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">be configured to expose TCP services</a>.</p> <p>First, you'll need to make sure that you expose both the HTTP and RTMP ports in your <code>stream-server-srv</code> Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: stream-server-srv namespace: default spec: selector: app: stream-server type: ClusterIP ports: - name: http-port port: 8000 targetPort: 8000 protocol: TCP - name: rtmp-port port: 1935 targetPort: 1935 protocol: TCP </code></pre> <p>(replace default with your namespace)</p> <p>You will also need to make sure that the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/cloud/1.23/deploy.yaml#L313" rel="nofollow noreferrer">Service used to expose the NGINX ingress</a> exposes port 1935 as well. For example:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx app.kubernetes.io/version: 1.2.0 name: ingress-nginx-controller namespace: ingress-nginx spec: externalTrafficPolicy: Local ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - appProtocol: http name: http port: 80 protocol: TCP targetPort: http - appProtocol: https name: https port: 443 protocol: TCP targetPort: https - name: rtmp port: 1935 protocol: TCP targetPort: 1935 selector: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx type: LoadBalancer </code></pre> <p>Finally, you'll need to update / patch the NGINX tcp services ConfigMap:</p> <pre><code>kubectl patch configmap tcp-services -n ingress-nginx --patch '{&quot;data&quot;:{&quot;1935&quot;:&quot;default/stream-server-srv:1935&quot;}}' </code></pre> <p>(replace &quot;default/stream-server-srv&quot; with your namespace/servicename)</p>
<p>I had a clean Azure subscription and created AKS using Standard SKU and very basic setup (2 D2as_v4 nodes, default settings). Installed NGINX like:</p> <pre><code>helm install ingress-nginx ingress-nginx/ingress-nginx </code></pre> <p>Then put there a dummy app and made it accessible via Ingress + ClusterIP service:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: ingress-demo spec: replicas: 1 selector: matchLabels: app: ingress-demo template: metadata: labels: app: ingress-demo spec: containers: - name: ingress-demo image: mcr.microsoft.com/azuredocs/aks-helloworld:v1 ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: ingress-demo spec: type: ClusterIP ports: - port: 80 selector: app: ingress-demo </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: development-ingress spec: ingressClassName: nginx rules: - http: paths: - path: / pathType: Prefix backend: service: name: ingress-demo port: number: 80 </code></pre> <p>This <strong>works</strong> and I can access the dummy app on the Public IP assigned to my Ingress.</p> <h2>Issue:</h2> <p>I cannot access my app using any non-defalt path configuration. I get <code>Error: Request timed out</code> for anything else than <code>/</code>. This Ingress change does not work for me:</p> <pre><code>- path: /foo </code></pre> <p>Same issue using regex:</p> <pre><code>annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 ... - path: /dev(/|$)(.*) </code></pre> <h2>UPDATE</h2> <p>I have tried multiple fixes suggested in the <a href="https://github.com/Azure/AKS/issues/2903" rel="nofollow noreferrer">GitHub issue mentioned by silent</a>.<br /> Using following option makes <code>/foo</code> path work:</p> <pre><code>--set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path&quot;=/healthz </code></pre> <p>But it still doesn't work for regex. To make path like <code>/dev(/|$)(.*)</code> operable you need either change Azure Load Balancer probe <a href="https://github.com/Azure/AKS/issues/2903#issuecomment-1106784221" rel="nofollow noreferrer">from HTTP to TCP</a> (keeping the port) or to install NGINX controller with this flag:</p> <pre><code>--set controller.service.externalTrafficPolicy=Local </code></pre>
<p>Not quite sure, but it sounds like you ran into this issue: <a href="https://github.com/Azure/AKS/issues/2903" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2903</a></p> <p>The solution was posted in this comment: <a href="https://github.com/Azure/AKS/issues/2903#issuecomment-1109758644" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2903#issuecomment-1109758644</a></p>
<p>I have a small shell script. It has the following snippet of code:</p> <pre><code>kubectl scale deployment sample1 --replicas=0 -n sb1
kubectl scale deployment sample1 --replicas=3 -n sb1
</code></pre> <p>The first command scales my deployment down and the second scales it back up. Are there any flags/conditions (like <code>wait --for=condition=complete</code>) that I can use to ensure the deployment has finished scaling down and scaling up?</p>
<p>There is no wait for completion flag in <code>kubectl scale --help.</code> You may run the following to check if the replica rollout is completed:</p> <pre><code>kubectl scale deployment sample1 --replicas 5 &amp;&amp; kubectl rollout status deployment/sample1 </code></pre> <p>Example:</p> <pre><code>kubectl scale deployment sample1 --replicas 5 deployment.apps/sample1 scaled </code></pre> <p>After scaling check the rollout status, the rollout status will block until the rollout to desired replica count is available:</p> <pre><code>kubectl rollout status deployment/sample1 Waiting for deployment &quot;sample1&quot; rollout to finish: 0 of 5 updated replicas are available... Waiting for deployment &quot;sample1&quot; rollout to finish: 1 of 5 updated replicas are available... Waiting for deployment &quot;sample1&quot; rollout to finish: 3 of 5 updated replicas are available... Waiting for deployment &quot;sample1&quot; rollout to finish: 4 of 5 updated replicas are available... deployment &quot;sample1&quot; successfully rolled out kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE sample1 5/5 5 5 2m40s </code></pre>
<p>Lets say I create a service account and retrieve the token associated to it:</p> <pre class="lang-sh prettyprint-override"><code>kubectl -n myexample describe sa myexample-sa kubectl describe secret myexample-sa-token-xxxxx </code></pre> <p>Value of the token:</p> <pre><code>token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkpHWGxoRlNRTklaWjVzQTh2dmZMWVVsM1haclpRbXRVTEpFZnNUcER6RnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZXZlbG9wbWVudCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJteWV4YW1wbGUtc2EtdG9rZW4tOGw3cnciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibXlleGFtcGxlLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTM1NDhjNTUtZmJlYS00MDc1LThhNDYtNTVhZDQwN2VmYzMxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldmVsb3BtZW50Om15ZXhhbXBsZS1zYSJ9.FJMK2PIsloJRqGGIYAs_ZLpVn9-aW4UPWnGvrnNDscAWHtpatTknAJ0T075gXD86X6j_EShp7JLfv5J_aNRTHJWsYNzJIOXH0ZipdvsMW2oMfEK-VCDLgxlJnT3xikIYaFgYRgmw2-iraSiC-HcSmuuF8XPJgW93JNHqy2Vw2lka9GUzaxoD9D4UAvISk19peHPfDJZjEjr4r5QCUljQz8Va72dwOqNh3b01OI0-7epoRWjEjtCCOhKYyu2hErroo6IlaiUchN_VKTrL5182POMONYmKYrP0Z4ymX0AoA9dkKKbLjtm-Vkxp3B6xhtIrvaJ4upGH2AVNYSFb9aYacg </code></pre> <p>Then, I create a pod in a deployment and associate the service account above to the pod:</p> <pre class="lang-yaml prettyprint-override"><code>... spec: template: spec: serviceAccountName: myexample-sa ... </code></pre> <p>Now, After the pod is deployed I <code>exec</code> to it:</p> <pre class="lang-sh prettyprint-override"><code>kubectl -n myexample exec -it name-of-pod -- /bin/bash </code></pre> <p>If I run</p> <pre class="lang-sh prettyprint-override"><code>cat /var/run/secrets/kubernetes.io/serviceaccount/token </code></pre> <p>The output is:</p> <pre><code>eyJhbGciOiJSUzI1NiIsImtpZCI6IkpHWGxoRlNRTklaWjVzQTh2dmZMWVVsM1haclpRbXRVTEpFZnNUcER6RnMifQ.eyJhdWQiOlsidW5rbm93biJdLCJleHAiOjE2ODMxMjk2ODQsImlhdCI6MTY1MTU5MzY4NCwiaXNzIjoicmtlIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZXZlbG9wbWVudCIsInBvZCI6eyJuYW1lIjoic3RhdGljLXdlYiIsInVpZCI6ImZmNjMyOTU4LTM5MDctNDkyOS1hZGJjLWFjY2UyYzhkMTMxOCJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoibXlleGFtcGxlLXNhIiwidWlkIjoiNTM1NDhjNTUtZmJlYS00MDc1LThhNDYtNTVhZDQwN2VmYzMxIn0sIndhcm5hZnRlciI6MTY1MTU5NzI5MX0sIm5iZiI6MTY1MTU5MzY4NCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldmVsb3BtZW50Om15ZXhhbXBsZS1zYSJ9.Po-kZUo8nhOnJGxuHtvz6806QgLqPaztS0iWCNpnY0WCfpbRsmt2SWPckMo4P535DTqEJyDslUCF0loL0Tw2RNZxhHwRa-ul3P2G_9CmeApvGTX4nwyBFXjllsAWDiKWJkrxzpEkS0vf2N4r-9mGlEGkIWmPbUyDRD5LyeVmFMgPLNWYBLlAVG9qN5aJ5zzOq9pDFeY5jSXnOl3Ii3ddCZVxhnHDCGkFzu6w_YWkC-7iN68TlykwZb9wy2tFydCpAsPA </code></pre> <p>I compared the this token to the one I retrieved above (refer to first 2 commands) and they are different! Shouldn't this token value be exactly the same as the one I got for <code>myexample-sa-token-xxxxx</code>?</p>
<p>No, they should not be the same when the mechanism is <strong>service account token volume projection</strong>, which is the case here.</p> <blockquote> <p>Service account token volume projection: Mounts a short-lived, automatically rotating Kubernetes service account token into the Pod. This token is an OpenID Connect Token and can be used to authenticate to the Kubernetes API and other external services.</p> </blockquote> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts#assigning_a_kubernetes_service_account_to_a_pod" rel="nofollow noreferrer">Read more</a></p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">Service Account Token Volume Projection</a></p> <p>You can decode each token with <a href="https://jwt.io/" rel="nofollow noreferrer">jwt.io</a> and take a look at the payload. One represents the service account's long-lived Secret, while the other is the projected token bound to the specific pod.</p> <p>Also, note that Kubernetes manages and rotates the projected pod token.</p> <blockquote> <p>The kubelet will request and store the token on behalf of the pod, make the token available to the pod at a configurable file path, and refresh the token as it approaches expiration. The kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours</p> </blockquote>
<p>I'm trying to develop a custom resource on kubernetes with kubebuilder. In this CR, I have a field of type <code>url.URL</code></p> <p>I get this error :</p> <pre><code>(*in).DeepCopyInto undefined (type *url.URL has no field or method DeepCopyInto) </code></pre> <p>Is there a way to work with type <code>url.URL</code> when developing a CR ?</p> <p>Thanks</p>
<p>So I found a solution.<br /> I don't know if it's the best one, but I created a custom <code>URL</code> type that adds the deep-copy methods <code>controller-gen</code> is missing on top of <code>net/url</code>.</p> <p>It works fine: <a href="https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7" rel="nofollow noreferrer">https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7</a></p>
<p>I'm trying to port-forward my kubernetes service (through minikube) using the following command:</p> <pre><code>kubectl port-forward svc/hapi-fhir-server 8080:8080 --address=0.0.0.0 </code></pre> <p>But after trying to reach <code>localhost:8080</code> I get the following error: <code>&quot;....... an error occurred forwarding 8080 -&gt; 8080: error forwarding port 8080 to pod {PodID, uid ....:E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused&quot;</code></p> <p>I checked which port the pod is listening to through the following command <code>kubectl get pod hapi-fhir-server-666b846cbf-lhmr4 --template=&quot;{{(index (index .spec.containers 0).ports 0).containerPort}}&quot;</code> resulting in answer <code>8080</code></p> <p>For if this helps, my service &amp; deployment files (having removed unrelated lines)</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: .... spec: replicas: 2 selector: .... template: .... spec: containers: - image: ......... name: hapi-fhir-server ports: - containerPort: 8080 resources: {} restartPolicy: Always status: {} </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: .... spec: type: NodePort ports: - protocol: TCP port: 8080 targetPort: 8080 nodePort: 32069 selector: app: hapi-fhir-server status: loadBalancer: {} </code></pre> <p>The image being used is a HAPI FHIR server with the following configuration that runs on Apache Tomcat (server.xml):</p> <pre><code>&lt;Connector port=&quot;${server.port}&quot; protocol=&quot;HTTP/1.1&quot; connectionTimeout=&quot;20000&quot; redirectPort=&quot;8443&quot; maxThreads=&quot;${server.tomcat.max-threads}&quot; minSpareThreads=&quot;${server.tomcat.min-spare-threads}&quot; /&gt; </code></pre> <p><code>server.port</code> being 8080.</p> <p>I can't seem to find an accurate answer to why this is happening, even after going through documentation or similar questions like: <a href="https://stackoverflow.com/questions/53799600/kubernetes-port-forwarding-connection-refused">Kubernetes Port Forwarding - Connection refused</a>.</p> <p>Is there something I am missing, not doing correctly or that I am not thinking of? <strong>Note:</strong> I am relatively new to Kubernetes.</p>
<p>Apparently there was no issue with the Kubernetes or server configuration; the problem was the Dockerfile, which didn't expose port 8080. Now it does, through the following:</p> <pre><code># Dockerfile ...... EXPOSE 8080 ...... </code></pre> <p>Thanks to IvanAracki, who pointed this out to me in the comments.</p>
<p>Running Spring Boot 2.6.6 and Spring Cloud 2021.0.1</p> <p>I'm attempting to migrate an existing service to Kubernetes so I added a dependency on <code>spring-cloud-starter-kubernetes-client-all</code>. By default, I have <code>spring.cloud.kubernetes.enable=false</code> and use the <code>kubernetes</code> profile to enable it. This is intended to allow this service to operate in both Kubernetes and the legacy environment.</p> <p>My unit-tests complete successfully when building locally but fail in my Bitbucket pipeline with the following error:</p> <pre><code>java.lang.IllegalStateException: Failed to load ApplicationContext Caused by: org.springframework.cloud.kubernetes.commons.config.NamespaceResolutionFailedException: unresolved namespace </code></pre> <p>I suspect this occurs because Bitbucket Pipelines are deployed in Kubernetes and Spring somehow detects this. I have tried the following to no avail</p> <ul> <li>Pass <code>--define SPRING_CLOUD_KUBERNETES_ENABLED=false</code> to Maven on the command line</li> <li>Set this as an environment variable e.g., <code>export SPRING_CLOUD_KUBERNETES_ENABLED=false</code></li> <li>Pass <code>--define spring.cloud.kubernetes.enabled=false</code> to Maven on the command line</li> </ul> <p>I have also checked StackOverflow for similar issues and investigated the code also without avail. The class that is actually raising the issue is <code>KubernetesClientConfigUtils</code>, which should be disabled.</p> <p>I would appreciate any guidance you can provide.</p>
<p>Spring Cloud checks whether the application is running in a K8s environment before loading the active spring profile configuration and adds <code>kubernetes</code> to the active profiles. Previously, in Hoxton SR10, the profile was identified and <code>bootstrap-&lt;profile&gt;.yml</code> loaded before checking for Kubernetes. <code>spring.cloud.kubernetes.enabled</code> was picked up from there if set in the profile configuration or the maven pom properties.</p> <p>As maven allows setting system properties on the command line, kubernetes detection can be disabled by setting it there:</p> <pre><code>mvn test -Dspring.cloud.kubernetes.enabled=false </code></pre> <p>The surefire maven plugin allows setting system properties for all tests, so it's possible to set <code>spring.cloud.kubernetes.enabled</code> to be <code>false</code> in the surefire plugin configuration.</p> <pre><code>&lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-surefire-plugin&lt;/artifactId&gt; &lt;configuration&gt; &lt;systemPropertyVariables&gt; &lt;spring.cloud.kubernetes.enabled&gt;false&lt;/spring.cloud.kubernetes.enabled&gt; &lt;/systemPropertyVariables&gt; &lt;/configuration&gt; &lt;/plugin&gt; </code></pre> <p>It is also possible to set the configuration on individual test classes using @Faron's approach to explicitly set the property in any <code>WebMvcTest</code> annotated unit test, e.g.:</p> <pre><code>@WebMvcTest(properties = { &quot;spring.cloud.kubernetes.enabled=false&quot; }) </code></pre> <p>It should also work on other unit test annotation that loads a Spring application context, such as <code>WebFluxTest</code>.</p>
<p>I am creating an Helm Chart and I am having problems when it comes to importing files:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: vcl-template namespace: {{.Release.Namespace}} data: {{- (.Files.Glob &quot;config/varnish/default.vcl.tmpl&quot;).AsConfig | nindent 2 }} {{- (.Files.Glob &quot;config/varnish/nginx.conf&quot;).AsConfig | nindent 2 }} </code></pre> <p>This imports the file <code>config/varnish/nginx.conf</code> just fine but the file <code>config/varnish/default.vcl.tmpl</code> is imported with <code>\n</code> instead of newlines, so the data on the ConfigMap gets all buggy:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: vcl-template namespace: default data: default.vcl.tmpl: &quot;vcl 4.0;\n\nimport std;\nimport directors;\n\n{{ range .Frontends }}\nbackend {{ .Name }} {\n .host = \&quot;{{ .Host }}\&quot;;\n .port = \&quot;{{ .Port }}\&quot;;\n}\n{{- end }}\n\n{{ range .Backends }}\nbackend be-{{ .Name }} {\n .host = \&quot;{{ .Host }}\&quot;;\n .port = \&quot;{{ .Port }}\&quot;;\n}\n{{- end }}\n\nacl purge {\n \ \&quot;127.0.0.1\&quot;;\n \&quot;localhost\&quot;;\n \&quot;::1\&quot;;\n {{- range .Frontends }}\n \ \&quot;{{ .Host }}\&quot;;\n {{- end }}\n {{- range .Backends }}\n \&quot;{{ .Host }}\&quot;;\n {{- end }}\n}\n\nsub vcl_init {\n new cluster = directors.hash();\n\n \ {{ range .Frontends -}}\n cluster.add_backend({{ .Name }}, 1);\n {{ end }}\n\n new lb = directors.round_robin();\n\n {{ range .Backends -}}\n lb.add_backend(be-{{ .Name }});\n {{ end }}\n}\n\nsub vcl_recv {\n\n unset req.http.x-cache;\n \ set req.backend_hint = cluster.backend(req.url);\n set req.http.x-shard = req.backend_hint;\n if (req.http.x-shard != server.identity) {\n return(pass);\n \ }\n set req.backend_hint = lb.backend();\n\n if (req.method == \&quot;PURGE\&quot;) {\n if (client.ip !~ purge) {\n return (synth(405, \&quot;Method not allowed\&quot;));\n }\n # To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header\n # has been added to the response in your backend server config. This is used, for example, by the\n \ # capistrano-magento2 gem for purging old content from varnish during it's deploy routine.\n if (!req.http.X-Magento-Tags-Pattern &amp;&amp; !req.http.X-Pool) {\n return (synth(400, \&quot;X-Magento-Tags-Pattern or X-Pool header required\&quot;));\n \ }\n if (req.http.X-Magento-Tags-Pattern) {\n ban(\&quot;obj.http.X-Magento-Tags ~ \&quot; + req.http.X-Magento-Tags-Pattern);\n }\n if (req.http.X-Pool) {\n ban(\&quot;obj.http.X-Pool ~ \&quot; + req.http.X-Pool);\n }\n return (synth(200, \&quot;Purged\&quot;));\n }\n\n if (req.method != \&quot;GET\&quot; &amp;&amp;\n req.method != \&quot;HEAD\&quot; &amp;&amp;\n req.method != \&quot;PUT\&quot; &amp;&amp;\n req.method != \&quot;POST\&quot; &amp;&amp;\n req.method != \&quot;TRACE\&quot; &amp;&amp;\n req.method != \&quot;OPTIONS\&quot; &amp;&amp;\n req.method != \&quot;DELETE\&quot;) {\n /* Non-RFC2616 or CONNECT which is weird. 
*/\n return (pipe);\n }\n\n # We only deal with GET and HEAD by default\n if (req.method != \&quot;GET\&quot; &amp;&amp; req.method != \&quot;HEAD\&quot;) {\n return (pass);\n }\n\n # Bypass shopping cart, checkout and search requests\n if (req.url ~ \&quot;/checkout\&quot; || req.url ~ \&quot;/catalogsearch\&quot;) {\n return (pass);\n }\n\n # Bypass admin\n if (req.url ~ \&quot;^/admin($|/.*)\&quot;) {\n return (pass);\n }\n\n \ # Bypass health check requests\n if (req.url ~ \&quot;/pub/health_check.php\&quot;) {\n return (pass);\n }\n\n # Set initial grace period usage status\n \ set req.http.grace = \&quot;none\&quot;;\n\n # normalize url in case of leading HTTP scheme and domain\n set req.url = regsub(req.url, \&quot;^http[s]?://\&quot;, \&quot;\&quot;);\n\n \ # collect all cookies\n std.collect(req.http.Cookie);\n\n # Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression\n if (req.http.Accept-Encoding) {\n if (req.url ~ \&quot;\\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$\&quot;) {\n # No point in compressing these\n unset req.http.Accept-Encoding;\n \ } elsif (req.http.Accept-Encoding ~ \&quot;gzip\&quot;) {\n set req.http.Accept-Encoding = \&quot;gzip\&quot;;\n } elsif (req.http.Accept-Encoding ~ \&quot;deflate\&quot; &amp;&amp; req.http.user-agent !~ \&quot;MSIE\&quot;) {\n set req.http.Accept-Encoding = \&quot;deflate\&quot;;\n } else {\n # unknown algorithm\n unset req.http.Accept-Encoding;\n \ }\n }\n\n # Remove all marketing get parameters to minimize the cache objects\n if (req.url ~ \&quot;(\\?|&amp;)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=\&quot;) {\n set req.url = regsuball(req.url, \&quot;(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&amp;?\&quot;, \&quot;\&quot;);\n set req.url = regsub(req.url, \&quot;[?|&amp;]+$\&quot;, \&quot;\&quot;);\n }\n\n # Static files caching\n if (req.url ~ \&quot;^/(pub/)?(media|static)/\&quot;) {\n return (pass);\n }\n\n return (hash);\n}\n\nsub vcl_hash {\n if (req.http.cookie ~ \&quot;X-Magento-Vary=\&quot;) {\n hash_data(regsub(req.http.cookie, \&quot;^.*?X-Magento-Vary=([^;]+);*.*$\&quot;, \&quot;\\1\&quot;));\n }\n\n # For multi site configurations to not cache each other's content\n if (req.http.host) {\n hash_data(req.http.host);\n } else {\n hash_data(server.ip);\n }\n\n if (req.url ~ \&quot;/graphql\&quot;) {\n call process_graphql_headers;\n }\n\n # To make sure http users don't see ssl warning\n \ if (req.http.X-Forwarded-Proto) {\n hash_data(req.http.X-Forwarded-Proto);\n \ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n \ }\n if (req.http.Content-Currency) {\n hash_data(req.http.Content-Currency);\n \ }\n}\n\nsub vcl_backend_response {\n\n set beresp.grace = 3d;\n\n if (beresp.http.content-type ~ \&quot;text\&quot;) {\n set beresp.do_esi = true;\n }\n\n if (bereq.url ~ \&quot;\\.js$\&quot; || beresp.http.content-type ~ \&quot;text\&quot;) {\n set beresp.do_gzip = true;\n \ }\n\n if (beresp.http.X-Magento-Debug) {\n set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;\n }\n\n # cache only successfully responses and 404s\n if (beresp.status != 200 &amp;&amp; beresp.status != 404) {\n set beresp.ttl = 0s;\n set beresp.uncacheable = true;\n return (deliver);\n } elsif (beresp.http.Cache-Control ~ \&quot;private\&quot;) {\n set beresp.uncacheable = true;\n set beresp.ttl = 86400s;\n return (deliver);\n }\n\n \ # validate if we need to cache it and 
prevent from setting cookie\n if (beresp.ttl &gt; 0s &amp;&amp; (bereq.method == \&quot;GET\&quot; || bereq.method == \&quot;HEAD\&quot;)) {\n unset beresp.http.set-cookie;\n }\n\n # If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass\n if (beresp.ttl &lt;= 0s ||\n beresp.http.Surrogate-control ~ \&quot;no-store\&quot; ||\n (!beresp.http.Surrogate-Control &amp;&amp;\n beresp.http.Cache-Control ~ \&quot;no-cache|no-store\&quot;) ||\n beresp.http.Vary == \&quot;*\&quot;) {\n # Mark as Hit-For-Pass for the next 2 minutes\n set beresp.ttl = 120s;\n set beresp.uncacheable = true;\n }\n\n return (deliver);\n}\n\nsub vcl_deliver {\n if (resp.http.X-Magento-Debug) {\n if (resp.http.x-varnish ~ \&quot; \&quot;) {\n set resp.http.X-Magento-Cache-Debug = \&quot;HIT\&quot;;\n set resp.http.Grace = req.http.grace;\n } else {\n set resp.http.X-Magento-Cache-Debug = \&quot;MISS\&quot;;\n }\n } else {\n unset resp.http.Age;\n }\n\n # Not letting browser to cache non-static files.\n if (resp.http.Cache-Control !~ \&quot;private\&quot; &amp;&amp; req.url !~ \&quot;^/(pub/)?(media|static)/\&quot;) {\n set resp.http.Pragma = \&quot;no-cache\&quot;;\n set resp.http.Expires = \&quot;-1\&quot;;\n set resp.http.Cache-Control = \&quot;no-store, no-cache, must-revalidate, max-age=0\&quot;;\n }\n\n unset resp.http.X-Magento-Debug;\n \ unset resp.http.X-Magento-Tags;\n unset resp.http.X-Powered-By;\n unset resp.http.Server;\n unset resp.http.X-Varnish;\n unset resp.http.Via;\n unset resp.http.Link;\n}\n\nsub vcl_hit {\n if (obj.ttl &gt;= 0s) {\n # Hit within TTL period\n return (deliver);\n }\n if (std.healthy(req.backend_hint)) {\n if (obj.ttl + 300s &gt; 0s) {\n # Hit after TTL expiration, but within grace period\n set req.http.grace = \&quot;normal (healthy server)\&quot;;\n \ return (deliver);\n } else {\n # Hit after TTL and grace expiration\n return (miss);\n }\n } else {\n # server is not healthy, retrieve from cache\n set req.http.grace = \&quot;unlimited (unhealthy server)\&quot;;\n return (deliver);\n }\n}\n&quot; nginx.conf: | worker_processes auto; events { worker_connections 1024; } pcre_jit on; error_log /var/log/nginx/error.log warn; include /etc/nginx/modules/*.conf; http { include /etc/nginx/mime.types; default_type application/octet-stream; server_tokens off; client_max_body_size 15m; keepalive_timeout 30; sendfile on; tcp_nodelay on; gzip_vary on; log_format main '$remote_addr - $remote_user [$time_local] &quot;$request&quot; ' '$status $body_bytes_sent &quot;$http_referer&quot; ' '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;'; access_log /var/log/nginx/access.log main; include /etc/nginx/conf.d/*.conf; } </code></pre> <p><code>nginx.conf</code>:</p> <pre><code>worker_processes auto; events { worker_connections 1024; } pcre_jit on; error_log /var/log/nginx/error.log warn; include /etc/nginx/modules/*.conf; http { include /etc/nginx/mime.types; default_type application/octet-stream; server_tokens off; client_max_body_size 15m; keepalive_timeout 30; sendfile on; tcp_nodelay on; gzip_vary on; log_format main '$remote_addr - $remote_user [$time_local] &quot;$request&quot; ' '$status $body_bytes_sent &quot;$http_referer&quot; ' '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;'; access_log /var/log/nginx/access.log main; include /etc/nginx/conf.d/*.conf; } </code></pre> <p><code>default.vcl.tmpl</code>:</p> <pre><code>vcl 4.0; import std; import directors; {{ range .Frontends }} backend {{ .Name }} { .host = &quot;{{ .Host }}&quot;; .port = &quot;{{ 
.Port }}&quot;; } {{- end }} {{ range .Backends }} backend be-{{ .Name }} { .host = &quot;{{ .Host }}&quot;; .port = &quot;{{ .Port }}&quot;; } {{- end }} acl purge { &quot;127.0.0.1&quot;; &quot;localhost&quot;; &quot;::1&quot;; {{- range .Frontends }} &quot;{{ .Host }}&quot;; {{- end }} {{- range .Backends }} &quot;{{ .Host }}&quot;; {{- end }} } sub vcl_init { new cluster = directors.hash(); {{ range .Frontends -}} cluster.add_backend({{ .Name }}, 1); {{ end }} new lb = directors.round_robin(); {{ range .Backends -}} lb.add_backend(be-{{ .Name }}); {{ end }} } sub vcl_recv { unset req.http.x-cache; set req.backend_hint = cluster.backend(req.url); set req.http.x-shard = req.backend_hint; if (req.http.x-shard != server.identity) { return(pass); } set req.backend_hint = lb.backend(); if (req.method == &quot;PURGE&quot;) { if (client.ip !~ purge) { return (synth(405, &quot;Method not allowed&quot;)); } # To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header # has been added to the response in your backend server config. This is used, for example, by the # capistrano-magento2 gem for purging old content from varnish during it's deploy routine. if (!req.http.X-Magento-Tags-Pattern &amp;&amp; !req.http.X-Pool) { return (synth(400, &quot;X-Magento-Tags-Pattern or X-Pool header required&quot;)); } if (req.http.X-Magento-Tags-Pattern) { ban(&quot;obj.http.X-Magento-Tags ~ &quot; + req.http.X-Magento-Tags-Pattern); } if (req.http.X-Pool) { ban(&quot;obj.http.X-Pool ~ &quot; + req.http.X-Pool); } return (synth(200, &quot;Purged&quot;)); } if (req.method != &quot;GET&quot; &amp;&amp; req.method != &quot;HEAD&quot; &amp;&amp; req.method != &quot;PUT&quot; &amp;&amp; req.method != &quot;POST&quot; &amp;&amp; req.method != &quot;TRACE&quot; &amp;&amp; req.method != &quot;OPTIONS&quot; &amp;&amp; req.method != &quot;DELETE&quot;) { /* Non-RFC2616 or CONNECT which is weird. */ return (pipe); } # We only deal with GET and HEAD by default if (req.method != &quot;GET&quot; &amp;&amp; req.method != &quot;HEAD&quot;) { return (pass); } # Bypass shopping cart, checkout and search requests if (req.url ~ &quot;/checkout&quot; || req.url ~ &quot;/catalogsearch&quot;) { return (pass); } # Bypass admin if (req.url ~ &quot;^/admin($|/.*)&quot;) { return (pass); } # Bypass health check requests if (req.url ~ &quot;/pub/health_check.php&quot;) { return (pass); } # Set initial grace period usage status set req.http.grace = &quot;none&quot;; # normalize url in case of leading HTTP scheme and domain set req.url = regsub(req.url, &quot;^http[s]?://&quot;, &quot;&quot;); # collect all cookies std.collect(req.http.Cookie); # Compression filter. 
See https://www.varnish-cache.org/trac/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ &quot;\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$&quot;) { # No point in compressing these unset req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ &quot;gzip&quot;) { set req.http.Accept-Encoding = &quot;gzip&quot;; } elsif (req.http.Accept-Encoding ~ &quot;deflate&quot; &amp;&amp; req.http.user-agent !~ &quot;MSIE&quot;) { set req.http.Accept-Encoding = &quot;deflate&quot;; } else { # unknown algorithm unset req.http.Accept-Encoding; } } # Remove all marketing get parameters to minimize the cache objects if (req.url ~ &quot;(\?|&amp;)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=&quot;) { set req.url = regsuball(req.url, &quot;(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&amp;?&quot;, &quot;&quot;); set req.url = regsub(req.url, &quot;[?|&amp;]+$&quot;, &quot;&quot;); } # Static files caching if (req.url ~ &quot;^/(pub/)?(media|static)/&quot;) { return (pass); } return (hash); } sub vcl_hash { if (req.http.cookie ~ &quot;X-Magento-Vary=&quot;) { hash_data(regsub(req.http.cookie, &quot;^.*?X-Magento-Vary=([^;]+);*.*$&quot;, &quot;\1&quot;)); } # For multi site configurations to not cache each other's content if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } if (req.url ~ &quot;/graphql&quot;) { call process_graphql_headers; } # To make sure http users don't see ssl warning if (req.http.X-Forwarded-Proto) { hash_data(req.http.X-Forwarded-Proto); } } sub process_graphql_headers { if (req.http.Store) { hash_data(req.http.Store); } if (req.http.Content-Currency) { hash_data(req.http.Content-Currency); } } sub vcl_backend_response { set beresp.grace = 3d; if (beresp.http.content-type ~ &quot;text&quot;) { set beresp.do_esi = true; } if (bereq.url ~ &quot;\.js$&quot; || beresp.http.content-type ~ &quot;text&quot;) { set beresp.do_gzip = true; } if (beresp.http.X-Magento-Debug) { set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control; } # cache only successfully responses and 404s if (beresp.status != 200 &amp;&amp; beresp.status != 404) { set beresp.ttl = 0s; set beresp.uncacheable = true; return (deliver); } elsif (beresp.http.Cache-Control ~ &quot;private&quot;) { set beresp.uncacheable = true; set beresp.ttl = 86400s; return (deliver); } # validate if we need to cache it and prevent from setting cookie if (beresp.ttl &gt; 0s &amp;&amp; (bereq.method == &quot;GET&quot; || bereq.method == &quot;HEAD&quot;)) { unset beresp.http.set-cookie; } # If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass if (beresp.ttl &lt;= 0s || beresp.http.Surrogate-control ~ &quot;no-store&quot; || (!beresp.http.Surrogate-Control &amp;&amp; beresp.http.Cache-Control ~ &quot;no-cache|no-store&quot;) || beresp.http.Vary == &quot;*&quot;) { # Mark as Hit-For-Pass for the next 2 minutes set beresp.ttl = 120s; set beresp.uncacheable = true; } return (deliver); } sub vcl_deliver { if (resp.http.X-Magento-Debug) { if (resp.http.x-varnish ~ &quot; &quot;) { set resp.http.X-Magento-Cache-Debug = &quot;HIT&quot;; set resp.http.Grace = req.http.grace; } else { set resp.http.X-Magento-Cache-Debug = &quot;MISS&quot;; } } else { unset resp.http.Age; } # Not letting browser to cache non-static files. 
if (resp.http.Cache-Control !~ &quot;private&quot; &amp;&amp; req.url !~ &quot;^/(pub/)?(media|static)/&quot;) { set resp.http.Pragma = &quot;no-cache&quot;; set resp.http.Expires = &quot;-1&quot;; set resp.http.Cache-Control = &quot;no-store, no-cache, must-revalidate, max-age=0&quot;; } unset resp.http.X-Magento-Debug; unset resp.http.X-Magento-Tags; unset resp.http.X-Powered-By; unset resp.http.Server; unset resp.http.X-Varnish; unset resp.http.Via; unset resp.http.Link; } sub vcl_hit { if (obj.ttl &gt;= 0s) { # Hit within TTL period return (deliver); } if (std.healthy(req.backend_hint)) { if (obj.ttl + 300s &gt; 0s) { # Hit after TTL expiration, but within grace period set req.http.grace = &quot;normal (healthy server)&quot;; return (deliver); } else { # Hit after TTL and grace expiration return (miss); } } else { # server is not healthy, retrieve from cache set req.http.grace = &quot;unlimited (unhealthy server)&quot;; return (deliver); } } </code></pre> <p>How come that the second file is not imported correctly? Latest Helm version and latest Go version.</p> <p>Anyone has any ideas? The encoding of both files on VSCode shows as <code>UTF8</code>.</p>
<p>They're actually equivalent from YAML's point of view, just not as pretty. Most importantly for your specific case, it's because YAML cannot represent <strong>trailing</strong> whitespace without quoting it, which is what it did due to line 164 of your .tmpl file, as seen by the <code>\n \n</code> in:</p> <pre class="lang-yaml prettyprint-override"><code> \ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n </code></pre> <pre><code>$ sed -ne 164p default.vcl.tmpl | xxd 00000000: 2020 2020 0a . </code></pre> <p>Turning on &quot;strip trailing whitespace&quot; in your editor will help with that, or for this specific case you can just fix line 164.</p>
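<p>If you want to catch every such line rather than hunting them down one at a time, a simple check like this (run against the template before committing) lists them all with their line numbers:</p> <pre><code>grep -nE '[[:space:]]+$' config/varnish/default.vcl.tmpl
</code></pre>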
<p>I am trying to create an operator using operator-sdk.</p> <p>I have installed opeator-sdk on my mac OS.</p> <p>My Environment Details :</p> <p>go version <strong>go1.15.12 darwin/amd64</strong></p> <p>operator-sdk version: <strong>&quot;v1.7.2&quot;, commit: &quot;6db9787d4e9ff63f344e23bfa387133112bda56b&quot;, kubernetes version: &quot;v1.19.4&quot;, go version: &quot;go1.16.3&quot;, GOOS: &quot;darwin&quot;, GOARCH: &quot;amd64&quot;</strong></p> <p>I am trying to create an operator using command -</p> <pre><code>operator-sdk init hello-operator </code></pre> <p>I have enabled GO111MODULE.</p> <p>When I am trying to run opeator-sdk init , I am getting following error.</p> <pre><code>Writing kustomize manifests for you to edit... Writing scaffold for you to edit... Get controller runtime: $ go get sigs.k8s.io/[email protected] # container/list compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # crypto/internal/subtle compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # unicode/utf8 compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # internal/race compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # k8s.io/apimachinery/pkg/selection compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # encoding compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # unicode/utf16 compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # internal/nettrace compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # math/bits compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # runtime/internal/sys compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # internal/unsafeheader compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # unicode compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # vendor/golang.org/x/crypto/internal/subtle compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # crypto/subtle compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # vendor/golang.org/x/crypto/cryptobyte/asn1 compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # golang.org/x/sys/internal/unsafeheader compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # runtime/internal/atomic compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # google.golang.org/protobuf/internal/flags compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # github.com/google/go-cmp/cmp/internal/flags compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # k8s.io/utils/integer compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # k8s.io/utils/buffer compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # internal/cpu compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # k8s.io/apimachinery/pkg/types compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # sync/atomic compile: version 
&quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; # runtime/cgo compile: version &quot;go1.15.6&quot; does not match go tool version &quot;go1.15.12&quot; Error: failed to initialize project: unable to scaffold with &quot;base.go.kubebuilder.io/v3&quot;: exit status 2 FATA[0003] failed to initialize project: unable to scaffold with &quot;base.go.kubebuilder.io/v3&quot;: exit status 2 </code></pre> <p>Does anybody has any idea about this?</p> <p>Thanks in advance.</p>
<p>The commands below show how you can scaffold and run an operator with the <code>operator-sdk</code> CLI tool. As of writing, the latest version is v1.20.0. They cover some of the pitfalls, such as setting the correct Go environment variables and installing gcc, which some OSes need. I tried it on <code>Ubuntu 18.04.3 LTS (Bionic Beaver)</code>.</p> <pre class="lang-sh prettyprint-override"><code>#golang echo &quot;--- Installing golang ---&quot; GOVERSION=1.17.9 GOTAR=go$GOVERSION.linux-amd64.tar.gz wget https://dl.google.com/go/$GOTAR sudo tar -xvf $GOTAR rm $GOTAR sudo mv go /usr/local/bin #gcc (used by operator-sdk CLI) echo &quot;--- Installing gcc ---&quot; sudo apt update sudo apt install -y build-essential sudo apt-get install manpages-dev #operator-sdk echo &quot;--- Installing operator-sdk ---&quot; curl -Lo ./operator-sdk https://github.com/operator-framework/operator-sdk/releases/download/v1.20.0/operator-sdk_linux_amd64 chmod +x ./operator-sdk sudo mv ./operator-sdk /usr/local/bin/operator-sdk #environment variables export GOROOT=/usr/local/bin/go export PATH=$GOROOT/bin:$PATH #verify versions go version operator-sdk version #scaffold and run the HelloWorld operator sudo -s mkdir hello-operator chmod 777 hello-operator cd hello-operator operator-sdk init --domain example.com --repo github.com/example/memcached-operator operator-sdk create api --group example --version v1alpha1 --kind HelloWorld --resource --controller make manifests make run </code></pre>
<p>Starting MongoDB with this connection string <code>mongodb://mongo-service:27017</code> is giving me this error.</p> <p>The issue only happens for me when running on Kubernetes. Localhost is working fine.</p> <pre><code>ERROR:src.app:Exception on /api/polls [GET] Traceback (most recent call last): File &quot;/.venv/lib/python3.10/site-packages/flask/app.py&quot;, line 2077, in wsgi_app response = self.full_dispatch_request() File &quot;/.venv/lib/python3.10/site-packages/flask/app.py&quot;, line 1525, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/.venv/lib/python3.10/site-packages/flask/app.py&quot;, line 1523, in full_dispatch_request rv = self.dispatch_request() File &quot;/.venv/lib/python3.10/site-packages/flask/app.py&quot;, line 1509, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File &quot;/home/appuser/src/app.py&quot;, line 28, in get_all polls = list(repository.get_all_polls()) File &quot;/home/appuser/src/repository.py&quot;, line 11, in get_all_polls return _get_collection().find() File &quot;/home/appuser/src/repository.py&quot;, line 28, in _get_collection return mongo.get_collection(DATABASE, COLLECTION) File &quot;/home/appuser/src/mongo.py&quot;, line 15, in get_collection return MongoClient(connection_string).get_database(database).get_collection(collection) File &quot;/.venv/lib/python3.10/site-packages/pymongo/mongo_client.py&quot;, line 704, in __init__ res = uri_parser.parse_uri( File &quot;/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py&quot;, line 568, in parse_uri nodes = split_hosts(hosts, default_port=default_port) File &quot;/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py&quot;, line 376, in split_hosts nodes.append(parse_host(entity, port)) File &quot;/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py&quot;, line 137, in parse_host raise ValueError(&quot;Port must be an integer between 0 and 65535: %s&quot; % (port,)) ValueError: Port must be an integer between 0 and 65535: 27017 </code></pre> <p>I couldn't find the reason in the <a href="https://www.mongodb.com/docs/manual/reference/connection-string/#standard-connection-string-format" rel="nofollow noreferrer">docs</a>.</p>
<p>As it is required by the Kubernetes manifest schema, Secret values must be encoded in Base 64, which I was doing like this:</p> <pre class="lang-sh prettyprint-override"><code>$ echo 'mongodb://mongo-service:27017' | base64 bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTcK </code></pre> <p>This was adding a line-break at the end, modifying the original string content, and breaking pymongo interpretation of the connection string. Tricky.</p> <pre class="lang-sh prettyprint-override"><code>mongodb://mongo-service:27017 </code></pre> <p>In my case the solution was to change to <code>printf</code> as pointed out in <a href="https://stackoverflow.com/a/51143063/3231778">this answer</a>:</p> <pre class="lang-sh prettyprint-override"><code>$ printf mongodb://mongo-service:27017 | base64 bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTc= </code></pre> <p>A new <a href="https://github.com/mongodb/mongo-python-driver/pull/937" rel="nofollow noreferrer">PR937</a> was created proposing a new message format which can outline this behavior.</p>
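<p>If you prefer to keep using <code>echo</code>, its <code>-n</code> flag also suppresses the trailing newline and gives the same encoding as <code>printf</code>. A quick sketch; the decode step is only there to confirm the round trip:</p> <pre class="lang-sh prettyprint-override"><code>$ echo -n 'mongodb://mongo-service:27017' | base64
bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTc=

$ echo -n 'mongodb://mongo-service:27017' | base64 | base64 --decode
mongodb://mongo-service:27017
</code></pre>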
<p>I have deployed ECK (using helm) on my k8s cluster and i am attempting to install elasticsearch following the docs. <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a></p> <p>I have externally exposed service/elasticsearch-prod-es-http so that i can connect to it from outside of my k8s cluster. However as you can see when i try to connect to it either from curl or the browser i receive an error &quot;502 Bad Gateway&quot; error.</p> <pre><code>curl elasticsearch.dev.acme.com &lt;html&gt; &lt;head&gt;&lt;title&gt;502 Bad Gateway&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;502 Bad Gateway&lt;/h1&gt;&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Upon checking the pod (elasticsearch-prod-es-default-0) i can see the following message repeated.</p> <blockquote> <p>{&quot;type&quot;: &quot;server&quot;, &quot;timestamp&quot;: &quot;2021-04-27T13:12:20,048Z&quot;, &quot;level&quot;: &quot;WARN&quot;, &quot;component&quot;: &quot;o.e.x.s.t.n.SecurityNetty4HttpServerTransport&quot;, &quot;cluster.name&quot;: &quot;elasticsearch-prod&quot;, &quot;node.name&quot;: &quot;elasticsearch-prod-es-default-0&quot;, &quot;message&quot;: &quot;received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}&quot;, &quot;cluster.uuid&quot;: &quot;t0mRfv7kREGQhXW9DVM3Vw&quot;, &quot;node.id&quot;: &quot;nCyAItDmSqGZRa3lApsC6g&quot; }</p> </blockquote> <p><strong>Can you help me understand why this is occuring and how to fix it?</strong></p> <p>I suspect it has something to do with my TLS configuration because when i disable TLS, im able to connect to it externally without issues. However in a production environment i think keeping TLS enabled is important?</p> <p>FYI i am able to port-forward the service and connect to it with curl using the -k flag.</p> <p><strong>What i have tried</strong></p> <ol> <li>I have tried adding my domain to the section as described here <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san</a></li> <li>I have tried using openssl to generate a self signed certificate but that did not work. Trying to connect locally returns the following error message.</li> </ol> <blockquote> <p>curl -u &quot;elastic:$PASSWORD&quot; &quot;https://localhost:9200&quot; curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a> curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. 
To learn more about this situation and how to fix it, please visit the web page mentioned above.</p> </blockquote> <ol start="3"> <li>I have tried generating a certificate using the tool <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport" rel="noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport</a></li> </ol> <blockquote> <p>bin/elasticsearch-certutil ca bin/elasticsearch-certutil cert --ca elastic-stack-ca.12 --pem</p> </blockquote> <p>Then using the .crt and .key generated i created a kubectl secret <code>elastic-tls-cert</code>. But again curling localhost without -k gave the following error:</p> <blockquote> <p>curl --cacert cacert.pem -u &quot;elastic:$PASSWORD&quot; -XGET &quot;https://localhost:9200&quot; curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a> curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.</p> </blockquote> <p><strong>elasticsearch.yml</strong></p> <pre><code># This sample sets up an Elasticsearch cluster with 3 nodes. apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: elasticsearch-prod namespace: elastic-system spec: version: 7.12.0 nodeSets: - name: default config: # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value node.roles: [&quot;master&quot;, &quot;data&quot;, &quot;ingest&quot;, &quot;ml&quot;] # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost node.store.allow_mmap: false xpack.security.enabled: true podTemplate: metadata: labels: # additional labels for pods foo: bar spec: nodeSelector: acme/node-type: ops # this changes the kernel setting on the node to allow ES to use mmap # if you uncomment this init container you will likely also want to remove the # &quot;node.store.allow_mmap: false&quot; setting above # initContainers: # - name: sysctl # securityContext: # privileged: true # command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144'] ### # uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification. 
# automountServiceAccountToken: true containers: - name: elasticsearch # specify resource limits and requests resources: limits: memory: 4Gi cpu: 1 env: - name: ES_JAVA_OPTS value: &quot;-Xms2g -Xmx2g&quot; count: 3 # # request 2Gi of persistent data storage for pods in this topology element volumeClaimTemplates: - metadata: name: elasticsearch-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 250Gi storageClassName: elasticsearch # # inject secure settings into Elasticsearch nodes from k8s secrets references # secureSettings: # - secretName: ref-to-secret # - secretName: another-ref-to-secret # # expose only a subset of the secret keys (optional) # entries: # - key: value1 # path: newkey # project a key to a specific path (optional) http: service: spec: # expose this cluster Service with a LoadBalancer type: NodePort # tls: # selfSignedCertificate: # add a list of SANs into the self-signed HTTP certificate subjectAltNames: # - ip: 192.168.1.2 # - ip: 192.168.1.3 # - dns: elasticsearch.dev.acme.com # - dns: localhost # certificate: # # provide your own certificate # secretName: elastic-tls-cert </code></pre> <p><strong>kubectl version</strong></p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.4&quot;, GitCommit:&quot;e87da0bd6e03ec3fea7933c4b5263d151aafd07c&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-02-18T16:12:00Z&quot;, GoVersion:&quot;go1.15.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19+&quot;, GitVersion:&quot;v1.19.6-eks-49a6c0&quot;, GitCommit:&quot;49a6c0bf091506e7bafcdb1b142351b69363355a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-12-23T22:10:21Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p><strong>helm list</strong></p> <pre><code> NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0 </code></pre> <p><strong>resources</strong></p> <pre><code>pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/elastic-operator-webhook ClusterIP 172.20.218.208 &lt;none&gt; 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator service/elasticsearch-prod-es-default ClusterIP None &lt;none&gt; 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default service/elasticsearch-prod-es-http NodePort 172.20.229.173 &lt;none&gt; 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod service/elasticsearch-prod-es-transport ClusterIP None &lt;none&gt; 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod </code></pre> 
<p><strong>aws alb ingress controller</strong></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: elastic-ingress namespace: elastic-system annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/group.name: &quot;&lt;redacted&gt;&quot; alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;:80,&quot;HTTPS&quot;: 443}]' alb.ingress.kubernetes.io/certificate-arn: &lt;redacted&gt; alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev alb.ingress.kubernetes.io/healthcheck-path: /health alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300' alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress spec: rules: - host: elasticsearch.dev.acme.com http: paths: - path: /* pathType: Prefix backend: service: name: elasticsearch-prod-es-http port: number: 9200 # - host: kibana.dev.acme.com # http: # paths: # - path: /* # pathType: Prefix # backend: # service: # name: kibana-prod-kb-http # port: # number: 5601 </code></pre>
<p>You have to disable HTTP SSL. To do this, modify the config/elasticsearch.yml file and change the associated variable to false:</p> <pre><code>xpack.security.http.ssl: enabled: false keystore.path: certs/http.p12 </code></pre>
<p>I am trying to update a complex object via a path value, which it appears ReJSON can do. I have used it as a cache just fine; now I want to use it as a JSON collection store for real-time data (similar to MongoDB).</p> <p>For example:</p> <pre><code>{ &quot;name&quot;: &quot;Michael \u0027Mike D\u0027 Diamond&quot;, &quot;employeeID&quot;: &quot;E#101&quot;, &quot;role&quot;: &quot;Cleaner&quot;, &quot;location&quot;: { &quot;longitude&quot;: -2.552079, &quot;lattitude&quot;: 51.501335 }, &quot;numberOfAssignedJobs&quot;: 5 } </code></pre> <p>For example, I want to update <strong>location.lattitude</strong> without taking the whole object and dealing with read/write sync issues.</p> <p>However, I am struggling with the API and how it is used in Kubernetes. If anyone has any example code or tips and tricks, anything is much appreciated.</p> <p>Thanks, Matt</p>
<p>You can do this with the <code>JSON.SET</code> and <code>JSON.GET</code> commands and JSON path syntax (<a href="https://redis.io/docs/stack/json/path/" rel="nofollow noreferrer">docs here</a>).</p> <p>First, save the document in Redis at key &quot;jsondoc&quot;:</p> <pre><code>127.0.0.1:6379&gt; json.set jsondoc $ '{ &quot;name&quot;: &quot;Michael \u0027Mike D\u0027 Diamond&quot;, &quot;employeeID&quot;: &quot;E#101&quot;, &quot;role&quot;: &quot;Cleaner&quot;, &quot;location&quot;: { &quot;longitude&quot;: -2.552079, &quot;latitude&quot;: 51.501335 }, &quot;numberOfAssignedJobs&quot;: 5 }' </code></pre> <p>Atomically update path <code>location.latitude</code> to some other value:</p> <pre><code>127.0.0.1:6379&gt; json.set jsondoc $.location.latitude 52.014 OK </code></pre> <p>Retrieve just <code>location.latitude</code>:</p> <pre><code>127.0.0.1:6379&gt; json.get jsondoc $.location.latitude &quot;52.014&quot; </code></pre>
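<p>Since the question mentions doing this from an application rather than redis-cli: the same path-based update is exposed by the client libraries. Below is a minimal sketch with the Python <code>redis</code> client (redis-py 4.x, which includes the RedisJSON commands); the host name <code>redis-service</code> is just a placeholder for however your Redis/ReJSON instance is exposed inside the cluster:</p> <pre><code>import redis

# Placeholder host: point this at your Redis/ReJSON service (adjust to your setup)
r = redis.Redis(host='redis-service', port=6379, decode_responses=True)

# Atomically update only location.latitude, without reading or rewriting the whole document
r.json().set('jsondoc', '$.location.latitude', 52.014)

# Read back just that path
print(r.json().get('jsondoc', '$.location.latitude'))
</code></pre>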
<p>I am trying to create a module in Terraform to create the basic resources in a Kubernetes cluster, this means a <code>cert-manager</code>, <code>ingress-nginx</code> (as the ingress controller) and a <code>ClusterIssuer</code> for the certificates. In this exact order.</p> <p>The first two I am installing with a <code>helm_release</code> resource and the <code>cluster_issuer</code> via <code>kubernetes_manifest</code>.</p> <p>I am getting the below error, which, after some Google searches, I found out that it's because the <code>cert-manager</code> installs the CRDs that the <code>ClusterIssuer</code> requires but at the <code>terraform plan</code> phase, since they are not installed yet, the manifest cannot detect the <code>ClusterIssuer</code>.</p> <p>Then, I would like to know if there's a way to circumvent this issue but still create everything in the same configuration with only one <code>terraform apply</code>?</p> <p>Note: I tried to use the depends_on arguments and also include a <code>time_sleep</code> block but it's useless because nothing is installed in the plan and that's where it fails</p> <pre><code>| Error: Failed to determine GroupVersionResource for manifest │ │ with module.k8s_base.kubernetes_manifest.cluster_issuer, │ on ../../modules/k8s_base/main.tf line 37, in resource &quot;kubernetes_manifest&quot; &quot;cluster_issuer&quot;: │ 37: resource &quot;kubernetes_manifest&quot; &quot;cluster_issuer&quot; { │ │ no matches for kind &quot;ClusterIssuer&quot; in group &quot;cert-manager.io&quot; </code></pre> <pre><code>resource &quot;helm_release&quot; &quot;cert_manager&quot; { chart = &quot;cert-manager&quot; repository = &quot;https://charts.jetstack.io&quot; name = &quot;cert-manager&quot; create_namespace = var.cert_manager_create_namespace namespace = var.cert_manager_namespace set { name = &quot;installCRDs&quot; value = &quot;true&quot; } } resource &quot;helm_release&quot; &quot;ingress_nginx&quot; { name = &quot;ingress-nginx&quot; repository = &quot;https://kubernetes.github.io/ingress-nginx&quot; chart = &quot;ingress-nginx&quot; create_namespace = var.ingress_nginx_create_namespace namespace = var.ingress_nginx_namespace wait = true depends_on = [ helm_release.cert_manager ] } resource &quot;time_sleep&quot; &quot;wait&quot; { create_duration = &quot;60s&quot; depends_on = [helm_release.ingress_nginx] } resource &quot;kubernetes_manifest&quot; &quot;cluster_issuer&quot; { manifest = { &quot;apiVersion&quot; = &quot;cert-manager.io/v1&quot; &quot;kind&quot; = &quot;ClusterIssuer&quot; &quot;metadata&quot; = { &quot;name&quot; = var.cluster_issuer_name } &quot;spec&quot; = { &quot;acme&quot; = { &quot;email&quot; = var.cluster_issuer_email &quot;privateKeySecretRef&quot; = { &quot;name&quot; = var.cluster_issuer_private_key_secret_name } &quot;server&quot; = var.cluster_issuer_server &quot;solvers&quot; = [ { &quot;http01&quot; = { &quot;ingress&quot; = { &quot;class&quot; = &quot;nginx&quot; } } } ] } } } depends_on = [helm_release.cert_manager, helm_release.ingress_nginx, time_sleep.wait] } </code></pre>
<p><a href="https://cert-manager.io/docs/installation/helm/#option-1-installing-crds-with-kubectl" rel="nofollow noreferrer">Official documentation</a> says to use <code>kubectl apply</code> before installing this with a helm chart, making it a two step process. Using Terraform, this would make it a 3 step process in that you have to apply a targeted section to create the cluster so you can have access to kubeconfig credentials, then run the kubectl apply command to install the CRDs, and finally run terraform apply again to install the helm chart and the rest of the IaC. This is even less ideal.</p> <p>I would use the <code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml</code> in kubectl_manifest resources as the comment above suggests, but this is impossible since this does not link to a single yaml file but so many of them one would not be able to keep up with the changes. Unfortunately, there is no &quot;kubectl_apply&quot; terraform resource** for the helm chart to depend on those CRDs being installed first.</p> <p>Despite all this wonkiness, there is a solution, and that is to use the helm_release resource twice. It requires creating a module and referencing a custom helm chart for the cert-issuer. It's not ideal given the amount of effort that has to be used to create it for custom needs, but once it's created, it's a reusable, modular solution.</p> <pre><code># # Cert-manager # main.tf # resource &quot;helm_release&quot; &quot;cert_manager&quot; { name = &quot;cert-manager&quot; repository = &quot;https://charts.jetstack.io&quot; chart = &quot;cert-manager&quot; version = var.cert_manager_chart_version namespace = var.cert_manager_namespace create_namespace = true set { name = &quot;installCRDs&quot; value = true } } </code></pre> <p>Reference to custom chart:</p> <pre><code># # cert-issuer.tf # # Cert Issuer using Helm resource &quot;helm_release&quot; &quot;cert_issuer&quot; { name = &quot;cert-issuer&quot; repository = path.module chart = &quot;cert-issuer&quot; namespace = var.namespace set { name = &quot;fullnameOverride&quot; value = local.issuer_name } set { name = &quot;privateKeySecretRef&quot; value = local.issuer_name } set { name = &quot;ingressClass&quot; value = var.ingress_class } set { name = &quot;acmeEmail&quot; value = var.cert_manager_email } set { name = &quot;acmeServer&quot; value = var.acme_server } depends_on = [helm_release.cert_manager] } </code></pre> <p>You can see that the above use of <code>helm_release</code> is referencing itself locally as the repository, which requires you to have a custom helm chart, like this:</p> <pre><code># ./cluster-issuer/cluster-issuer.yaml apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: {{ include &quot;cert-issuer.fullname&quot; . 
}} namespace: {{ .Release.Namespace }} spec: acme: # The ACME server URL server: {{ .Values.acmeServer }} email: {{ .Values.acmeEmail }} # Name of a secret used to store the ACME account private key privateKeySecretRef: name: {{ .Values.privateKeySecretRef }} # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: {{ .Values.ingressClass }} </code></pre> <p>For some reason, this avoids the dependency check terraform uses to throw the error and works fine to get this installed in a single <code>apply</code></p> <p>This could be further simplified by not using values.yaml values by creating a pure chart.</p> <p>** Note, I think another work around is one can use a provisioner like 'local-exec' or 'remote-exec' after a cluser is created to run the kubectl apply command for the CRds directly, but I haven't tested this yet. It would also still require that your provisioning environment have kubectl installed and .kubeconfig properly configured, creating a dependency tree.</p> <p>Also, that is of course not fully working code. for a full example of the module to use or fork, see <a href="https://github.com/DeimosCloud/terraform-kubernetes-cert-manager" rel="nofollow noreferrer">this github repo</a>.</p>
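<p>For completeness, the <code>local-exec</code> workaround mentioned in the note above could look roughly like the sketch below. It is untested and assumes the provisioning environment already has <code>kubectl</code> installed and a kubeconfig pointing at the cluster; the CRD URL is the same one referenced earlier:</p> <pre><code>resource &quot;null_resource&quot; &quot;cert_manager_crds&quot; {
  provisioner &quot;local-exec&quot; {
    command = &quot;kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml&quot;
  }
}
</code></pre> <p>The existing <code>helm_release.cert_manager</code> resource would then take <code>depends_on = [null_resource.cert_manager_crds]</code> and <code>installCRDs</code> set to <code>false</code>, since the CRDs are applied outside the chart.</p>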
<p>Just study the core of K8S on local machine (Linux Mint 20.2).</p> <p>Created one node cluster locally with:</p> <blockquote> <p>k3d cluster create mycluster</p> </blockquote> <p>And now I want to run spring boot application in a container.<br /> I build local image:</p> <blockquote> <p>library:0.1.0</p> </blockquote> <p>And here is snippet from <code>Deployment.yml</code>:</p> <pre><code>spec: terminationGracePeriodSeconds: 40 containers: - name: 'library' image: library:0.1.0 imagePullPolicy: IfNotPresent </code></pre> <p>Despite the fact that image is already built:</p> <pre><code>docker images REPOSITORY TAG IMAGE ID CREATED SIZE library 0.1.0 254c13416f46 About an hour ago 462MB </code></pre> <p>Starting the container fails:</p> <pre><code>pod/library-867dfb64db-vndtj Pulling image &quot;library:0.1.0&quot; pod/library-867dfb64db-vndtj Failed to pull image &quot;library:0.1.0&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;library:0.1.0&quot;: failed to resolve reference &quot;library:0.1.0&quot;: failed to do request: Head &quot;https://...com/v2/library/manifests/0.1.0&quot;: x509: certificate signed by unknown authority pod/library-867dfb64db-vndtj Error: ErrImagePull pod/library-867dfb64db-vndtj Error: ImagePullBackOff pod/library-867dfb64db-vndtj Back-off pulling image &quot;library:0.1.0&quot; </code></pre> <p><em><strong>How to resolve local images visibility for k3d cluster?</strong></em></p> <hr /> <p>Solution:</p> <p>Update the <code>Deployment.yml</code>:</p> <pre><code>spec: terminationGracePeriodSeconds: 40 containers: - name: 'library-xp' image: xpinjection/library:0.1.0 imagePullPolicy: Never </code></pre> <p>And import the image to cluster:</p> <blockquote> <p>k3d image import xpinjection/library:0.1.0 -c mycluster</p> </blockquote>
<p>If you don't want to use a docker registry, you have to import the locally built image into the k3d cluster:</p> <pre><code>k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags] </code></pre> <p>But don't forget to configure in your deployment:</p> <pre><code>imagePullPolicy: Never </code></pre>
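<p>With the image and cluster names from the question, that would be:</p> <pre><code>k3d image import library:0.1.0 -c mycluster
</code></pre> <p>After the import, the deployment can reference <code>library:0.1.0</code> directly (with <code>imagePullPolicy: Never</code>, as shown above), and no registry lookup is attempted.</p>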
<p>I am configuring an EKS cluster using terraform in a private subnet and trying to access it using a VPN in a public subnet. When I configured it, it works fine but now when I run kubectl get pods or kubectl get svc, it is throwing an error:</p> <p><code>error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot;</code></p> <p>I don't know why it is happening. Please reply if you the solution.</p> <p>Thanks</p>
<p>It's broken with <code>kubectl</code> version <code>1.24</code>. Downgrading to <code>1.23.6</code> will fix the issue for now:</p> <pre><code>sudo apt install kubectl=1.23.6-00 </code></pre>
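<p>If apt later tries to upgrade the package back to 1.24, you can pin it at the downgraded version (standard apt behaviour, nothing kubectl-specific):</p> <pre><code>sudo apt-mark hold kubectl
</code></pre>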
<p>I have CircleCI setup and running fine normally, it will helps with creating deployment for me. Today I have suddenly had an issue with the step in creating the deployment due to an error related to kubernetes.</p> <p>I have the config.yml followed the doc from <a href="https://circleci.com/developer/orbs/orb/circleci/kubernetes" rel="nofollow noreferrer">https://circleci.com/developer/orbs/orb/circleci/kubernetes</a></p> <p>Here is my version of setup in the config file:</p> <pre><code>version: 2.1 orbs: kube-orb: circleci/[email protected] commands: docker-check: steps: - docker/check: docker-username: MY_USERNAME docker-password: MY_PASS registry: $DOCKER_REGISTRY jobs: create-deployment: executor: aws-eks/python3 parameters: cluster-name: description: Name of the EKS cluster type: string steps: - checkout # It failed on this step - kube-orb/delete-resource: now: true resource-names: my-frontend-deployment resource-types: deployments wait: true </code></pre> <p>Below is a copy of the error log</p> <pre><code>#!/bin/bash -eo pipefail #!/bin/bash RESOURCE_FILE_PATH=$(eval echo &quot;$PARAM_RESOURCE_FILE_PATH&quot;) RESOURCE_TYPES=$(eval echo &quot;$PARAM_RESOURCE_TYPES&quot;) RESOURCE_NAMES=$(eval echo &quot;$PARAM_RESOURCE_NAMES&quot;) LABEL_SELECTOR=$(eval echo &quot;$PARAM_LABEL_SELECTOR&quot;) ALL=$(eval echo &quot;$PARAM_ALL&quot;) CASCADE=$(eval echo &quot;$PARAM_CASCADE&quot;) FORCE=$(eval echo &quot;$PARAM_FORCE&quot;) GRACE_PERIOD=$(eval echo &quot;$PARAM_GRACE_PERIOD&quot;) IGNORE_NOT_FOUND=$(eval echo &quot;$PARAM_IGNORE_NOT_FOUND&quot;) NOW=$(eval echo &quot;$PARAM_NOW&quot;) WAIT=$(eval echo &quot;$PARAM_WAIT&quot;) NAMESPACE=$(eval echo &quot;$PARAM_NAMESPACE&quot;) DRY_RUN=$(eval echo &quot;$PARAM_DRY_RUN&quot;) KUSTOMIZE=$(eval echo &quot;$PARAM_KUSTOMIZE&quot;) if [ -n &quot;${RESOURCE_FILE_PATH}&quot; ]; then if [ &quot;${KUSTOMIZE}&quot; == &quot;1&quot; ]; then set -- &quot;$@&quot; -k else set -- &quot;$@&quot; -f fi set -- &quot;$@&quot; &quot;${RESOURCE_FILE_PATH}&quot; elif [ -n &quot;${RESOURCE_TYPES}&quot; ]; then set -- &quot;$@&quot; &quot;${RESOURCE_TYPES}&quot; if [ -n &quot;${RESOURCE_NAMES}&quot; ]; then set -- &quot;$@&quot; &quot;${RESOURCE_NAMES}&quot; elif [ -n &quot;${LABEL_SELECTOR}&quot; ]; then set -- &quot;$@&quot; -l set -- &quot;$@&quot; &quot;${LABEL_SELECTOR}&quot; fi fi if [ &quot;${ALL}&quot; == &quot;true&quot; ]; then set -- &quot;$@&quot; --all=true fi if [ &quot;${FORCE}&quot; == &quot;true&quot; ]; then set -- &quot;$@&quot; --force=true fi if [ &quot;${GRACE_PERIOD}&quot; != &quot;-1&quot; ]; then set -- &quot;$@&quot; --grace-period=&quot;${GRACE_PERIOD}&quot; fi if [ &quot;${IGNORE_NOT_FOUND}&quot; == &quot;true&quot; ]; then set -- &quot;$@&quot; --ignore-not-found=true fi if [ &quot;${NOW}&quot; == &quot;true&quot; ]; then set -- &quot;$@&quot; --now=true fi if [ -n &quot;${NAMESPACE}&quot; ]; then set -- &quot;$@&quot; --namespace=&quot;${NAMESPACE}&quot; fi if [ -n &quot;${DRY_RUN}&quot; ]; then set -- &quot;$@&quot; --dry-run=&quot;${DRY_RUN}&quot; fi set -- &quot;$@&quot; --wait=&quot;${WAIT}&quot; set -- &quot;$@&quot; --cascade=&quot;${CASCADE}&quot; if [ &quot;$SHOW_EKSCTL_COMMAND&quot; == &quot;1&quot; ]; then set -x fi kubectl delete &quot;$@&quot; if [ &quot;$SHOW_EKSCTL_COMMAND&quot; == &quot;1&quot; ]; then set +x fi error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot; Exited with code exit status 1 CircleCI received exit code 1 </code></pre> <p>Does anyone have idea what is wrong 
with it? I'm not sure whether the issue is happening on the CircleCI side or the Kubernetes side.</p>
<p>I was facing the exact issue since yesterday morning (16 hours ago). Then taking @Gavy's advice, I simply added this in my <code>config.yml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>steps: - checkout # !!! HERE !!! - kubernetes/install-kubectl: kubectl-version: v1.23.5 - run: </code></pre> <p>And now it works. Hope it helps.</p>
<p>While setting up RBAC I use <code>&quot;kubectl config set-cluster test --server=https://127.0.0.1:52807&quot;</code> (the IP here is from the kind cluster that I am running), after which I use <code>&quot;kubectl config set-context test --cluster=test&quot;</code> followed by the required credentials, and switch to the context with <code>&quot;kubectl config use-context test&quot;</code>. I am now in the test context. I understand that the first command is configuring the config file, but am I making a cluster within a cluster? Please help me clear my doubt about what these commands are actually doing.</p>
<p><code>kubectl config set-cluster</code> sets a cluster entry in your <code>kubeconfig</code> file (usually found in <code>$HOME/.kube/config</code>). The <code>kubeconfig</code> file defines how your <code>kubectl</code> is configured.</p> <p>The cluster entry defines where <code>kubectl</code> can find the kubernetes cluster to talk to. You can have multiple clusters defined in your <code>kubeconfig</code> file.</p> <p><code>kubectl config set-context</code> sets a context element, which is used to combine a cluster, namespace and user into a single element so that <code>kubectl</code> has everything it needs to communicate with the cluster. You can have multiple contexts, for example one per kubernetes cluster that you're managing.</p> <p><code>kubectl config use-context</code> sets your current context to be used in <code>kubectl</code>.</p> <p>So to walk through your commands:</p> <ul> <li><code>kubectl config set-cluster test --server=https://127.0.0.1:52807</code> creates a new entry in <code>kubeconfig</code> under the <code>clusters</code> section with a cluster called <code>test</code> pointing towards <code>https://127.0.0.1:52807</code></li> <li><code>kubectl config set-context test --cluster=test</code> creates a new context in <code>kubeconfig</code> called <code>test</code> and tells that context to point to a cluster called <code>test</code></li> <li><code>kubectl config use-context test</code> changes the current context in <code>kubeconfig</code> to a context called <code>test</code> (which you just created).</li> </ul> <p>More docs on <code>kubectl config</code> and <code>kubeconfig</code>:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></li> <li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration</a></li> </ul>
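<p>To make this concrete, after running those three commands your <code>kubeconfig</code> would contain roughly the following (real entries also carry certificate and user data; this is a trimmed sketch). You can inspect exactly what was written with <code>kubectl config view</code>:</p> <pre><code>apiVersion: v1
kind: Config
clusters:
- name: test
  cluster:
    server: https://127.0.0.1:52807
contexts:
- name: test
  context:
    cluster: test
current-context: test
</code></pre>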
<p>I have created a GKE cluster on GCP.</p> <p>Kubernetes logs from kubectl logs command is different to /var/log/containers</p> <p>kubectl</p> <pre><code>{&quot;method&quot;:&quot;GET&quot;,&quot;path&quot;:&quot;/healthz&quot;,&quot;format&quot;:&quot;*/*&quot;,&quot;controller&quot;:&quot;Public::PublicPagesController&quot;,&quot;action&quot;:&quot;healthz&quot;,&quot;status&quot;:204,&quot;duration&quot;:0.39,&quot;view&quot;:0.0,&quot;request_id&quot;:&quot;ca29b519-d1e8-49a2-95ae-e5f23b60c36f&quot;,&quot;params&quot;:{},&quot;custom&quot;:null,&quot;request_time&quot;:&quot;2022-04-27T15:25:43.780+00:00&quot;,&quot;process_id&quot;:6,&quot;@version&quot;:&quot;vcam-backend-vvcam-72_shareholder_event-rc16&quot;,&quot;@timestamp&quot;:&quot;2022-04-27T15:25:43.780Z&quot;,&quot;message&quot;:&quot;[204] GET /healthz (Public::PublicPagesController#healthz)&quot;} </code></pre> <p>And logs in /var/log/containers, something add timestamp into the beginning of my container logs:</p> <pre><code>2022-04-27T15:25:43.780523421Z stdout F {&quot;method&quot;:&quot;GET&quot;,&quot;path&quot;:&quot;/healthz&quot;,&quot;format&quot;:&quot;*/*&quot;,&quot;controller&quot;:&quot;Public::PublicPagesController&quot;,&quot;action&quot;:&quot;healthz&quot;,&quot;status&quot;:204,&quot;duration&quot;:0.39,&quot;view&quot;:0.0,&quot;request_id&quot;:&quot;ca29b519-d1e8-49a2-95ae-e5f23b60c36f&quot;,&quot;params&quot;:{},&quot;custom&quot;:null,&quot;request_time&quot;:&quot;2022-04-27T15:25:43.780+00:00&quot;,&quot;process_id&quot;:6,&quot;@version&quot;:&quot;vcam-backend-vvcam-72_shareholder_event-rc16&quot;,&quot;@timestamp&quot;:&quot;2022-04-27T15:25:43.780Z&quot;,&quot;message&quot;:&quot;[204] GET /healthz (Public::PublicPagesController#healthz)&quot;} </code></pre> <p>I want my application log will be consistent, I want it in json format like logs from kubectl command, so I can parse and analyze more.</p> <p>I want to remove this part: <code>2022-04-27T15:25:43.780523421Z stdout F</code></p> <p>Does anybody meet this problem? How can I make containers logs same as kubectl command logs?</p> <p>GKE Version:</p> <pre><code>Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.10-gke.2000&quot;, GitCommit:&quot;0823380786b063c3f71d5e7c76826a972e30550d&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-03-17T09:22:22Z&quot;, GoVersion:&quot;go1.16.14b7&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Docker daemon.json</p> <pre><code>{ &quot;pidfile&quot;: &quot;/var/run/docker.pid&quot;, &quot;iptables&quot;: false, &quot;ip-masq&quot;: false, &quot;log-level&quot;: &quot;warn&quot;, &quot;bip&quot;: &quot;169.254.123.1/24&quot;, &quot;mtu&quot;: 1460, &quot;storage-driver&quot;: &quot;overlay2&quot;, &quot;live-restore&quot;: true, &quot;log-driver&quot;: &quot;json-file&quot;, &quot;log-opts&quot;: { &quot;max-size&quot;: &quot;10m&quot;, &quot;max-file&quot;: &quot;5&quot; } } </code></pre> <p>Notes: I notice that the timestamp in the beginning of log line only show when we add option <code>docker logs -t</code>, <a href="https://docs.docker.com/engine/reference/commandline/logs/" rel="nofollow noreferrer">docs here</a> But I still do not know how to fix this problem in GKE cluster.</p>
<p>This problem is related to the Container Runtime Interface (CRI). You can read about CRI <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="nofollow noreferrer">here</a>.</p> <p>For now, I still cannot change the log format to what I want, so I just adapt to this new format. This format is called the <code>CRI log format</code>; the default CRI of a GKE cluster always produces logs in this format, and log aggregation applications have adapted to this CRI log format too:</p> <ul> <li>This is the PR of <code>Grafana/Loki</code> to support the CRI log format: <a href="https://github.com/grafana/loki/pull/365" rel="nofollow noreferrer">PR</a></li> <li>Fluent-bit created a new parser for the CRI log format: <a href="https://docs.fluentbit.io/manual/v/1.8/installation/kubernetes#container-runtime-interface-cri-parser" rel="nofollow noreferrer">Docs</a></li> </ul> <p>So I think you need to change the way you approach this problem: if we cannot change the log format as we want, we can use applications that support this log format.</p> <p>Notes: I'm not sure, but I think this problem comes from <code>Kubernetes is removing support for Docker as a container runtime</code> - <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">docs</a> - and the new container runtime produces this CRI log format.</p>
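<p>As an illustration of what &quot;adapting to the format&quot; looks like in practice, a CRI parser along the lines of the one in the fluent-bit docs linked above splits the prefix back out into separate fields (treat the exact regex and time format as an approximation of what the docs ship):</p> <pre><code>[PARSER]
    Name        cri
    Format      regex
    Regex       ^(?&lt;time&gt;[^ ]+) (?&lt;stream&gt;stdout|stderr) (?&lt;logtag&gt;[^ ]*) (?&lt;message&gt;.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
</code></pre> <p>The <code>message</code> field then contains the original JSON line, which a downstream JSON parser or filter can decode as before.</p>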
<p>I created EKS cluster and now i need to add http_proxy/http_proxies and no_proxy as environment variable to all 300 deployments. I created a config map with the variables. Is there a way to update all pods simultaneously without doing it one by one? And if yes how can i do it? My config map name is proxy-environment-variables.</p> <p>Thank you for the help and assistance!</p> <p>BR,</p> <p>Martin</p> <pre><code>Name: proxy-environment-variables Namespace: kube-system Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== HTTPS_PROXY: ---- http://10.16.11.10:8080 HTTP_PROXY: ---- http://10.16.11.10:8080 NO_PROXY: ---- 169.254.169.254,127.0.0.1,localhost,10.0.0.0/8 http_proxy: ---- http://10.16.11.10:8080 https_proxy: ---- http://10.16.11.10:8080 no_proxy: ---- 169.254.169.254,127.0.0.1,localhost,10.0.0.0/8 BinaryData ==== Events: &lt;none&gt; </code></pre>
<p>You can use the <code>set env</code> to update the environment variable into the deployment:</p> <pre><code>kubectl set env deployment --all --from=configmap/my-env-config </code></pre> <p>Example: The following example is demonstrating if there are multiple deployments, how to add env variables to their pods.</p> <p>// I have the following deployments:</p> <pre><code>kubectl get deployment NAME READY UP-TO-DATE AVAILABLE AGE dep-1 1/1 1 1 5m21s dep-2 1/1 1 1 41m dep-4 1/1 1 1 3m17s </code></pre> <p>// their respective pods:</p> <pre><code>kubectl get pod NAME READY STATUS RESTARTS AGE dep-1-84fcdf5-wblsm 1/1 Running 0 2m8s dep-2-84fcdf5-qfbd5 1/1 Running 0 3m4s dep-4-6cf4dcf794-wk476 1/1 Running 0 4s </code></pre> <p>// my config map:</p> <pre><code>apiVersion: v1 data: HTTP_PROXY: https://1.2.3.4:1234 kind: ConfigMap metadata: creationTimestamp: &quot;2022-05-04T16:52:00Z&quot; name: my-env-config namespace: default resourceVersion: &quot;1002232&quot; uid: c8606312-90c6-45cf-86f0-f4f874f19909 </code></pre> <p>// HTTP_PROXY env variable is <strong>already</strong> set in dep-1</p> <pre><code>kubectl exec -it dep-1-84fcdf5-wblsm -- printenv HTTP_PROXY https://1.2.3.4:1234 </code></pre> <p>// HTTP_PROXY env variable is <strong>already</strong> set in dep-2</p> <pre><code>kubectl exec -it dep-2-84fcdf5-qfbd5 -- printenv HTTP_PROXY https://1.2.3.4:1234 </code></pre> <p>// HTTP_PROXY env variable is <strong>not</strong> set in dep-4, <strong>this is our focus</strong></p> <pre><code>kubectl exec -it dep-4-6cf4dcf794-wk476 -- printenv HTTP_PROXY command terminated with exit code 1 </code></pre> <p>// <strong>SOLUTION</strong>: Setting env for all the deployments(<code>--all</code>) from a config map called <code>my-env-config</code>)</p> <pre><code> kubectl set env deployment --all --from=configmap/my-env-config deployment.apps/dep-4 env updated </code></pre> <p>//Note the age of the pods, only dep-4's pod age is altered</p> <pre><code>kubectl get pod NAME READY STATUS RESTARTS AGE dep-1-84fcdf5-wblsm 1/1 Running 0 3m40s dep-2-84fcdf5-qfbd5 1/1 Running 0 4m36s dep-4-59d5cfd48d-2tbbm 1/1 Running 0 5s </code></pre> <p>// now HTTP_PROXY is set in dep-4's pod.</p> <pre><code>kubectl exec -it dep-4-59d5cfd48d-2tbbm -- printenv HTTP_PROXY https://1.2.3.4:1234 </code></pre>
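<p>To double-check the result on a given deployment without exec-ing into its pod, <code>kubectl set env</code> can also just list the variables it sees:</p> <pre><code>kubectl set env deployment/dep-4 --list
</code></pre>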
<p>I have a GKE cluster, external domain name, and letsencrypt certs. When I am using a Load balancer and instruct pods to use certs that I generate using certbot then performance is quite good. But I have to renew certs manually which takes a lot of effort.</p> <p>When using an ingress controller and letting cert-manager update certs by itself then additional hops add latency and make the traffic path more complex. Then the connection is on h2 from client to ingress and then the connection become plain HTTP from ingress to pods.</p> <p>Is there any way remove the extra hops when using nginx ingress controller and take out the performance issue?</p>
<p>There is no extra hop if you are using cert-manager with ingress.</p> <p>You can use cert-manager: it will save the <strong>cert</strong> into a <strong>secret</strong> and attach it to the ingress. However, it's up to you where you do the TLS termination.</p> <p>You can also pass the HTTPS traffic through to the POD for end-to-end encryption; if you do TLS termination at the ingress level, the traffic from the ingress to the POD will be plain <strong>HTTP</strong>.</p> <pre><code>Internet &gt; ingress (TLS in secret) &gt; Plain HTTP if you terminate &gt; service &gt; PODs </code></pre> <p>If you want to use the certificate inside the POD, you can mount the secret into the POD and it can then be used by the application.</p> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod</a></p> <p>If you use the secret in a POD, you might need to reload the POD when the certificate changes; in that case you can use Reloader to automatically roll out the PODs.</p> <p>Reloader : <a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">https://github.com/stakater/Reloader</a></p>
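<p>For reference, a minimal sketch of what this looks like on the ingress side. The issuer name, host and secret name are placeholders, and the commented ssl-passthrough annotation is only relevant if you want the pod itself to terminate TLS (it also requires the nginx controller to be started with <code>--enable-ssl-passthrough</code>):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # only for end-to-end TLS; nginx then forwards the raw TLS stream to the pod
    # nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls   # cert-manager stores the issued certificate here
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
</code></pre>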
<p>i am trying to make a cicd pipeline github-&gt;travisci-&gt;aws eks everything works fine images are posted to dockerhub and all.but when travis is executing kubectl apply -f &quot;the files&quot; it is throwing a error.. error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot;</p> <p>(theres nothing wrong with the source coe/deployment/service files as i manually deployed them on aws eks and they worked fine.)</p> <pre><code> #-----------------travis.yml------------- sudo: required services: - docker env: global: - SHA=$(git rev-parse HEAD) before_install: # Install kubectl - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl - chmod +x ./kubectl - sudo mv ./kubectl /usr/local/bin/kubectl # Install AWS CLI - if ! [ -x &quot;$(command -v aws)&quot; ]; then curl &quot;https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip&quot; -o &quot;awscliv2.zip&quot; ; unzip awscliv2.zip ; sudo ./aws/install ; fi # export environment variables for AWS CLI (using Travis environment variables) - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION} # Setup kubectl config to use the desired AWS EKS cluster - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME} - echo &quot;$DOCKER_PASSWORD&quot; | docker login -u &quot;$DOCKER_USERNAME&quot; --password-stdin - docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client # - aws s3 ls script: - docker run -e CI=true akifboi/multi-client npm test deploy: provider: script script: bash ./deploy.sh on: branch: master </code></pre> <pre><code>#----deploy.sh-------- # docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client # docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server # docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker # docker push akifboi/multi-client:latest # docker push akifboi/multi-server:latest # docker push akifboi/multi-worker:latest # docker push akifboi/multi-client:$SHA # docker push akifboi/multi-server:$SHA # docker push akifboi/multi-worker:$SHA echo &quot;starting&quot; aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status #eikhane ashe problem hoitese! echo &quot;applying k8 files&quot; kubectl apply -f ./k8s/ # kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA # kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA # kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA echo &quot;done&quot; </code></pre> <pre><code>#------travis;logs---------- last few lines: starting &quot;ACTIVE&quot; applying k8 files error: exec plugin: invalid apiVersion &quot;client.authentication.k8s.io/v1alpha1&quot; done Already up to date. HEAD detached at c1858f7 Untracked files: (use &quot;git add &lt;file&gt;...&quot; to include in what will be committed) aws/ awscliv2.zip nothing added to commit but untracked files present (use &quot;git add&quot; to track) Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225) Done. Your build exited with 0. </code></pre>
<p>We were installing the latest version of kubectl in CI and hit this error today. After pinning to a previous version (1.18) the error was resolved.</p> <p>The last working version was 1.23.6; we saw errors with 1.24.</p>
<p>I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?</p> <p>I am basically trying to check which IP the request is sent from, any data sent and any other discerning information.</p> <p>I tried using prometheus but have not had any luck so far.</p>
<p>You can enable <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">Auditing</a> on your cluster. For specific resource, use <code>resourceNames</code> in the audit policy to specify the resource name.</p>
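<p>A minimal policy sketch is below; the field names follow the <code>audit.k8s.io/v1</code> API, and the deployment name is just a placeholder for whatever resource you want to single out. At <code>Metadata</code> level and above, each audit event already records the caller's user, verb, request URI and source IPs, which covers the &quot;which IP the request came from&quot; part; <code>RequestResponse</code> additionally captures the request body:</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# full request/response bodies for changes to one specific deployment (placeholder name)
- level: RequestResponse
  resources:
  - group: &quot;apps&quot;
    resources: [&quot;deployments&quot;]
    resourceNames: [&quot;my-frontend-deployment&quot;]
# metadata only (user, source IPs, verb, URI) for everything else
- level: Metadata
</code></pre>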
<p>The following code will throw an <code>ApiException 410 resource too old</code> on the second <code>watch.stream()</code>:</p> <pre><code># python3 -m venv venv # source venv/bin/activate # pip install 'kubernetes==23.3.0' from kubernetes import client,config,watch config.load_kube_config(context='my-eks-context') v1 = client.CoreV1Api() watcher = watch.Watch() namespace = 'kube-system' last_resource_version=0 # this watch will timeout in 5s to have a fast way to simulate a watch that need to be retried for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5): print(i['object'].metadata.resource_version) last_resource_version = i['object'].metadata.resource_version # we retry the watch starting from the last resource version known # but this will raise a kubernetes.client.exceptions.ApiException: (410) # Reason: Expired: too old resource version: 379140622 (380367990) for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5): print('second loop', i['object'].metadata.resource_version) last_resource_version = i['object'].metadata.resource_version </code></pre> <p>The <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">kubernetes documentation</a> states that:</p> <blockquote> <p>If a client watch is disconnected then that client can start a new watch from the last returned resourceVersion</p> </blockquote> <p>which is what I intended in the above code, which always gives the following exception:</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 24, in &lt;module&gt; File &quot;/Users/rubelagu/git/python-kubernetes-client/venv/lib/python3.8/site-packages/kubernetes/watch/watch.py&quot;, line 182, in stream raise client.rest.ApiException( kubernetes.client.exceptions.ApiException: (410) Reason: Expired: too old resource version: 379164133 (380432814) </code></pre> <p>What am I doing wrong?</p>
<p>It seems that in the initial response to the watch (from an EKS cluster 1.21) the events can be returned in any order.</p> <p>I did two subsequent watches two seconds apart and they contain the same 30 events in completely different ordering.</p> <p>So it's not guaranteed that the last resource version that you see is actually the actual last and it's not guaranteed that you can resume from that <code>resourceVersion</code>/<code>resource_version</code>. Also you are not allowed to sort/collate those events by <code>resourceVersion</code>, since the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">kubernetes documentation for Resource Version Semantics</a> explicity says:</p> <blockquote> <p>Resource versions must be treated as <strong>opaque</strong> [...] You <strong>must not</strong> assume resource versions are <strong>numeric or collatable</strong>.</p> </blockquote> <p>You must account for that by catching the <code>resource too old exception</code> and retrying without specifying a resource version, see below for an example:</p> <pre><code>from kubernetes import client,config,watch from kubernetes.client.exceptions import ApiException config.load_kube_config(context='eks-prod') v1 = client.CoreV1Api() # v1 = config.new_client_from_config(context=&quot;eks-prod&quot;).CoreV1Api() watcher = watch.Watch() namespace = 'my-namespace' def list_pods(resource_version=None): print('start watch from resource version: ', str(resource_version)) try: for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=resource_version, timeout_seconds=2): print(i['object'].metadata.resource_version) last_resource_version = i['object'].metadata.resource_version except ApiException as e: if e.status == 410: # Resource too old return list_pods(resource_version=None) else: raise return last_resource_version last_resource_version = list_pods() print('last_resource_version', last_resource_version) list_pods(last_resource_version) </code></pre>
<p>I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?</p> <p>I am basically trying to check which IP the request is sent from, any data sent and any other discerning information.</p> <p>I tried using prometheus but have not had any luck so far.</p>
<p>You can set up auditing in your Kubernetes cluster. Refer to this link: <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/</a></p>
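<p>Assuming the cluster on those EC2 instances is self-managed (kubeadm-style) rather than EKS, the audit policy file also has to be wired into the API server with the standard audit flags from that page; the paths below are examples:</p> <pre><code>kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit/audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10 \
  --audit-log-maxsize=100
</code></pre>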
<p>I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS = by following the steps given in <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p> <p>After the yaml file is applied and while trying to check the status of the deployment , I get :</p> <pre><code>$ kubectl get deployment -n kube-system aws-load-balancer-controller NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-controller 0/1 1 0 6m39s </code></pre> <p>I tried to debug it and I got this :</p> <pre><code>$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller {&quot;level&quot;:&quot;info&quot;,&quot;logger&quot;:&quot;controller-runtime.metrics&quot;,&quot;msg&quot;:&quot;metrics server is starting to listen&quot;,&quot;addr&quot;:&quot;:8080&quot;} {&quot;level&quot;:&quot;error&quot;,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;unable to create controller&quot;,&quot;controller&quot;:&quot;Ingress&quot;,&quot;error&quot;:&quot;the server could not find the requested resource&quot;} </code></pre> <p>The yaml file is pulled directly from <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml</a> and apart from changing the Kubernetes cluster name, no other modifications are done.</p> <p>Please let me know if I am missing some step in the configuration. Any help would be highly appreciated.</p>
<p>I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.</p> <ul> <li>aws-load-balancer-controller = v2.3.1</li> <li>Kubernetes/EKS = 1.22</li> </ul> <p>Github issue for more information: <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495</a></p>
<p>I am reading the tekton <a href="https://tekton.dev/vault/pipelines-v0.14.3/auth/" rel="noreferrer">docs</a> on authentication it explains that two things are needed to do authentication</p> <p>Create a secret docs give example below</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: basic-user-pass annotations: tekton.dev/git-0: https://github.com # Described below type: kubernetes.io/basic-auth stringData: username: &lt;username&gt; password: &lt;password&gt; </code></pre> <p>Add secret object to the service account</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: build-bot secrets: - name: basic-user-pass </code></pre> <p>My mental model of a service account in k8s is that it is a JWT used to access the k8s API server. I don't understand what's point of adding a secret to the ServiceAccount, how and why it is used.</p> <p>Questions:</p> <ul> <li>what does it mean for a service account to have secrets added to it?</li> <li>Why is it useful to add a secret to a service account?</li> <li>Why does tekton require the secret to be added to the service account?</li> <li>who/what looks at the service account secrets list?</li> </ul>
<p>Hope you are enjoying your kubernetes journey!</p> <p><em><strong>1) what does it mean for a service account to have secrets added to it? / Why is it useful to add a secret to a service account?</strong></em></p> <p>First of all, a little reminder:</p> <p>As you may know, you have to see the serviceAccount as a user for a machine/an application/a script (and not only in kubernetes); in short, everything that is not human. Like a human, a service account, in order to authenticate to things (a Git repository, docker registry or API that requires authentication), needs to have credentials (username+password).</p> <p>In Kubernetes these credentials, and especially the password, are stored in &quot;secrets&quot;.</p> <p>Now, you should be aware that each namespace in kubernetes has a native service account named &quot;default&quot; that is associated with <strong>every</strong> running pod, and that service account is linked to a native &quot;default&quot; kubernetes secret that is also present in all namespaces. This &quot;default&quot; secret contains the ca.crt and a token that lets the pod make calls to the internal Kubernetes API Server endpoint, among other things.</p> <p>Since the secret that contains the &quot;credentials&quot; is linked to a service account that is mounted to a pod, this pod is then able to authenticate to things that require authentication.</p> <p>For example, if someday you have to use a private docker registry to pull your images, you can do this in two ways. In each of them you first have to create a secret that will contain your sensitive data (credentials):</p> <ul> <li>The first way consists of adding the name of your secret, which contains the registry credentials, directly in the default serviceAccount (which, as a reminder, is mounted by default in the pod), or in a newly created serviceAccount (like tekton is doing in your case) that will be referenced in the kubernetes deployment manifest in the field <code>serviceAccountName:</code>.</li> <li>The second way consists of adding the field <code>imagePullSecret</code> in your kubernetes deployment manifest.</li> </ul> <p>This way, when kubernetes comes to pull your private docker image, it will check whether the credentials in the serviceAccount secrets work; if not, it will check the secret you have added in the imagePullSecret field (or the other way around), and it will then be able to connect to the registry and pull the image to run it as a container in a pod!</p> <p><em><strong>2) who/what looks at the service account secrets list?</strong></em></p> <p>For example, in a brand new namespace:</p> <pre><code>❯ k get sa NAME SECRETS AGE default 1 30m </code></pre> <p>This default serviceAccount is linked to a secret named &quot;default-token-r4vrb&quot;:</p> <pre><code>❯ k get sa default -o yaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: &quot;2022-05-06T08:48:38Z&quot; name: default namespace: so-tests resourceVersion: &quot;1771&quot; uid: c3598708-ad14-4806-af31-5c54d60e29b7 secrets: - name: default-token-r4vrb </code></pre> <p>This default-token secret contains what is needed to authenticate to the Kubernetes API endpoint (certificate + token):</p> <pre><code>❯ k get secret default-token-r4vrb -o yaml apiVersion: v1 data: ca.crt: base64encodedCaCertificate namespace: base64encodedNamespace token: base64encodedToken kind: Secret metadata: annotations: kubernetes.io/service-account.name: default kubernetes.io/service-account.uid: c3598708-ad14-4806-af31-5c54d60e29b7 creationTimestamp:
&quot;2022-05-06T08:48:38Z&quot; name: default-token-r4vrb namespace: so-tests resourceVersion: &quot;1770&quot; uid: d342a372-66d1-4c92-b520-23c23babc798 type: kubernetes.io/service-account-token </code></pre> <p><em><strong>3) Why does tekton require the secret to be added to the service account? who/what looks at the service account secrets list?</strong></em></p> <p>Now I hope you know why, they choose to use a serviceAccount to do this but they could have just mounted the secret into the pod directly also :)</p> <p>Hope this has helped you. Here is some docs to be more familiar with K8S SA: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p> <p>bguess/</p>
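<p>P.S. As a concrete example of the &quot;first way&quot; for the registry case, a secret can be created and attached to a service account from the command line (the registry address and credentials are placeholders; <code>build-bot</code> is the service account from your question):</p> <pre><code># create the registry credentials secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# attach it to the service account used by the pods
kubectl patch serviceaccount build-bot \
  -p '{&quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;regcred&quot;}]}'
</code></pre>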
<p>I have a k8s dev cluster (in aks). I previously used the bitnami chart for deployment of a single MongoDB pod and I never had the following problem.</p> <p>As I'm now using a feature that only works with replicaSets, I recently moved to using the MongoDBCommunity Operator to create a single instance MongoDB replicaSet running in its own namespace (in dev). My application is connecting to and using this instance with no issues at all and my new feature works like a dream.</p> <pre class="lang-bash prettyprint-override"><code>$ k get pods -n db NAME READY STATUS RESTARTS AGE mongodb-kubernetes-operator-748c85557-bffz5 1/1 Running 0 2d16h nga-mongodb-0 2/2 Running 0 2d19h </code></pre> <p>Now I want to use the MongoDB extension in vscode to help with debugging, and that's where the problem is. As I mentioned, with the bitnami chart I had no problem -- I provided the db connection string and it just worked.</p> <p>When I try to do the same with this new deployment though I get the following error:</p> <pre class="lang-bash prettyprint-override"><code>getaddrinfo ENOTFOUND nga-mongodb-0.nga-mongodb-svc.db.svc.cluster.local </code></pre> <p>The operator creates a secret containing the connection string in 2 forms, for the service and for the pod, both give the same error</p> <p>Svc:</p> <pre class="lang-bash prettyprint-override"><code>mongodb+srv://username:[email protected]/admin?ssl=false </code></pre> <p>Pod:</p> <pre class="lang-bash prettyprint-override"><code>mongodb://username:[email protected]:27017/admin?ssl=false </code></pre> <p>I have noticed that neither of those strings includes a replica name, which I find odd, but I use the svc based string for my application and it works fine so it should work ok when port-forwarding.</p> <p>My port-forwarding statement is pretty basic:</p> <pre class="lang-bash prettyprint-override"><code>k port-forward -n db nga-mongodb-0 27017:27017 </code></pre> <p>And if I try <code>curl localhost:27017</code> I get the expected message</p> <pre class="lang-bash prettyprint-override"><code>It looks like you are trying to access MongoDB over HTTP on the native driver port. </code></pre> <p>Which does suggest that I have a working port forwarder.</p> <p>The connection string I use (assuming the pod connection string) then changes to:</p> <pre class="lang-bash prettyprint-override"><code>mongodb://username:password@localhost:27017/admin?ssl=false </code></pre> <p>I suspect, but don't know, that the replica set is trying to forward the client to the primary (which I already connect to if I use the pod connection string) and is sending a kubernetes URL which doesn't translate well via the port-forwarding method. So, I doubt that this is something weird the operator has done.</p> <p>Has anyone else actually got this working or know how to get it working?</p>
<p>So, based on writing up the question and thereby organising my thoughts, I have found the reason this is happening, and the answer.</p> <p>I had suggested in my question that this might be a redirect problem, and now I have proof that this is the issue. I've added the following entry to my hosts file:</p> <pre><code>127.0.0.1 nga-mongodb-0.nga-mongodb-svc.db.svc.cluster.local
</code></pre> <p>And now it works. So, the MongoDB replicaSet is trying to forward me to the primary (even though I'm already connected to the primary) and is doing so by, quite rightly, returning the only host it knows. With that hostname in the <code>/etc/hosts</code> file, my machine resolves the name to localhost and the connection goes back out through the forwarded port. I haven't actually tried this with the svc approach, but I doubt it would work, as I believe it'll constantly redirect.</p>
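<p>For anyone following along, here is the full sequence as a minimal sketch. The hostname and credentials come from the question; <code>directConnection=true</code> is my own addition (an assumption, not something the operator puts in the generated string): recent MongoDB drivers and <code>mongosh</code> use it to skip replica-set discovery, which avoids the redirect to the cluster-internal hostname in the first place.</p> <pre class="lang-bash prettyprint-override"><code># 1. map the pod's cluster-internal hostname to localhost (needs sudo)
echo &quot;127.0.0.1 nga-mongodb-0.nga-mongodb-svc.db.svc.cluster.local&quot; | sudo tee -a /etc/hosts

# 2. forward the MongoDB port from the pod to the local machine
kubectl port-forward -n db nga-mongodb-0 27017:27017 &amp;

# 3. connect with the pod connection string; directConnection=true (assumption)
#    tells the driver not to re-resolve the replica set members
mongosh &quot;mongodb://username:password@localhost:27017/admin?ssl=false&amp;directConnection=true&quot;
</code></pre>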
<p>I am trying to create a replicaset with kubernetes. This time, I don't have a yml file, which is why I am trying to create the replicaset from the command line. Why does <code>kubectl create replicaset somename --image=nginx</code> raise an error, and how can I fix this?</p>
<p>You cannot create a <code>replicaset</code> directly from the command line. Only the following resources can be created with <code>kubectl create</code>:</p> <pre><code>kubectl create --help |awk '/Available Commands:/,/^$/'
Available Commands:
  clusterrole           Create a cluster role
  clusterrolebinding    Create a cluster role binding for a particular cluster role
  configmap             Create a config map from a local file, directory or literal value
  cronjob               Create a cron job with the specified name
  deployment            Create a deployment with the specified name
  ingress               Create an ingress with the specified name
  job                   Create a job with the specified name
  namespace             Create a namespace with the specified name
  poddisruptionbudget   Create a pod disruption budget with the specified name
  priorityclass         Create a priority class with the specified name
  quota                 Create a quota with the specified name
  role                  Create a role with single rule
  rolebinding           Create a role binding for a particular role or cluster role
  secret                Create a secret using specified subcommand
  service               Create a service using a specified subcommand
  serviceaccount        Create a service account with the specified name
</code></pre> <p>You can, however, create the replica set by piping a manifest into <code>kubectl create -f</code>, which reads it from stdin (<code>-</code>) in the example below:</p> <pre><code>echo &quot;apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
&quot; |kubectl create -f -
</code></pre>
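<p>If it helps, you can verify and manage the replica set afterwards with the usual commands; a quick sketch (the name <code>frontend</code> and the <code>tier=frontend</code> label come from the manifest above):</p> <pre><code># check the replica set and its pods
kubectl get rs frontend
kubectl get pods -l tier=frontend

# scale it without editing the manifest
kubectl scale rs frontend --replicas=5
</code></pre>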
<p><strong>What am I trying to do?</strong></p> <p>I want to create a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service" rel="nofollow noreferrer">kubernetes service</a> object using terraform, but make it re-usable, so that every time there's a new service I can just append to the variables.</p> <p><strong>My problem:</strong></p> <p>I have been reading and trying different things, but I am not sure how I would loop over &quot;annotations&quot; and &quot;selectors&quot;, which can have more than one key/value pair.</p> <p><strong>Code Example</strong></p> <p>The variables.tf from which I want to build the actual terraform resource:</p> <pre><code>variable &quot;apps&quot; {
  default = {
    &quot;app1&quot; = {
      svc_name    = &quot;app1&quot;
      namespace   = &quot;testns&quot;
      annotations = {
        &quot;testannotation&quot; = &quot;ann1&quot;
      }
      selector = {
        app = &quot;podinfo&quot;
        env = &quot;dev&quot;
      }
      ports = {
        name       = &quot;http&quot;
        port       = 80
        protocol   = &quot;TCP&quot;
        targetPort = 8008
      }
    },
    &quot;app2&quot; = {
      svc_name    = &quot;app2&quot;
      namespace   = &quot;testns&quot;
      annotations = {
        &quot;testannotation&quot; = &quot;ann1&quot;
      }
      selector = {
        app = &quot;someapp&quot;
        env = &quot;qa&quot;
      }
      ports = {
        name       = &quot;http&quot;
        port       = 8080
        protocol   = &quot;TCP&quot;
        targetPort = 8080
      }
    },
  }
}
</code></pre> <p>Here is the main.tf where I want to loop over the &quot;annotations&quot; and &quot;selectors&quot; from the variables, because there could be more than one of them, BUT there is only one &quot;annotations&quot; and &quot;selector&quot; block, so I couldn't use &quot;dynamic&quot;, as that would generate many of those blocks.</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;service&quot; {
  for_each = var.apps

  metadata {
    name      = each.value.svc_name
    namespace = each.value.namespace
    # annotations = {
    #   HOW DO I GET THE ANNOTATIONS HERE
    # }
  }

  spec {
    selector = {
      ## HOW DO I GET THE SELECTORS FROM THE VARIABLES HERE
    }
    session_affinity = &quot;ClientIP&quot;

    port {
      port        = each.value.ports.port
      target_port = each.value.ports.targetPort
    }

    type = &quot;ClusterIP&quot;
  }
}
</code></pre> <p>I'd appreciate any guide, links or suggestions here!</p>
<p>You assign them the same way as the other properties: <code>annotations</code> and <code>selector</code> are plain map arguments (not nested blocks), so they can take the maps from your variable directly, with no loop needed:</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;service&quot; {
  for_each = var.apps

  metadata {
    name        = each.value.svc_name
    namespace   = each.value.namespace
    annotations = each.value.annotations
  }

  spec {
    selector         = each.value.selector
    session_affinity = &quot;ClientIP&quot;

    port {
      port        = each.value.ports.port
      target_port = each.value.ports.targetPort
    }

    type = &quot;ClusterIP&quot;
  }
}
</code></pre>
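<p>If you later need to repeat an actual nested block, for example turning <code>ports</code> into a list of objects so one service can expose several ports (an assumption on my side; the variable above holds a single object), a <code>dynamic</code> block is the tool for that. A minimal sketch of what the <code>spec</code> would then look like:</p> <pre><code>spec {
  selector         = each.value.selector
  session_affinity = &quot;ClientIP&quot;

  # assumes each.value.ports has been changed to a list of objects
  dynamic &quot;port&quot; {
    for_each = each.value.ports
    content {
      name        = port.value.name
      port        = port.value.port
      protocol    = port.value.protocol
      target_port = port.value.targetPort
    }
  }

  type = &quot;ClusterIP&quot;
}
</code></pre>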
<p>I am trying to update a custom resource:</p> <pre class="lang-sh prettyprint-override"><code>kubectl edit ticketservice r134-4979a2f6-f46d-405d-97bc-ab1ba -n 2b9aea53dbfc47a
</code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: ticketservice.ias.alpha.com/v1alpha1
kind: TicketService
metadata:
  annotations:
    foo: bar
    baz: bux
    .. etc ..
  creationTimestamp: &quot;2022-05-04T20:08:06Z&quot;   &lt;--- here
  finalizers:
  - ticket-generator/simulator-finalizer
  generation: 3
  labels:
    label1: value1
    .. etc ..
  name: r134-4979a2f6-f46d-405d-97bc-ab1baf97543d
</code></pre> <p>No matter what I do in the edit, the <code>creationTimestamp</code> just comes back with the same value. I can't edit it, and I can't remove the line either: it just comes back.</p> <p>I thought maybe the <code>finalizer</code> line was involved; I was able to delete it, but there was still no luck editing the <code>creationTimestamp</code>.</p> <p>I think there must be another resource that locks this value; any idea what the reason is, and if there's another check happening on submission, how would I trace that?</p>
<p><code>creationTimestamp</code> can only be set by Kubernetes, as it turns out: it is a system-managed field, so the API server ignores whatever value a client submits for it. (From a trusted source, but sadly I don't have the specific k8s docs ref.)</p>
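<p>One way to see this behaviour for yourself, as a quick sketch (names taken from the question; the patched timestamp is just an arbitrary value): the patch request goes through, but the stored object keeps its original timestamp.</p> <pre class="lang-sh prettyprint-override"><code># try to overwrite the timestamp; the API server ignores the change
kubectl patch ticketservice r134-4979a2f6-f46d-405d-97bc-ab1baf97543d \
  -n 2b9aea53dbfc47a --type=merge \
  -p '{&quot;metadata&quot;:{&quot;creationTimestamp&quot;:&quot;2021-01-01T00:00:00Z&quot;}}'

# the original value is still there
kubectl get ticketservice r134-4979a2f6-f46d-405d-97bc-ab1baf97543d \
  -n 2b9aea53dbfc47a -o jsonpath='{.metadata.creationTimestamp}'
</code></pre>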
<p>So after updating to Docker Desktop 4.7.1, local kubernetes (v1.22.5) is now stuck at starting.</p> <p><a href="https://i.stack.imgur.com/LAkxm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LAkxm.png" alt="enter image description here" /></a></p> <p>I uninstalled and re-installed the latest Docker Desktop (4.7.1), but after a restart the issue persists.</p> <p>Running <code>kubectl get all -n kube-system</code> returns <code>Unable to connect to the server: EOF</code></p> <p>So clearly kubernetes isn't running.</p> <p>Any help is appreciated, thanks.</p> <p>P.S. I am now contemplating going back to Docker Desktop 4.6.x if I can't resolve this, but I would prefer to stay on the latest Docker Desktop if possible.</p> <p>I am on Windows 10</p>
<p>OK, after a couple of hours of troubleshooting I was able to resolve the issue.</p> <h3><strong>Step 1: Reset Docker Desktop To Factory Defaults</strong></h3> <p>Right-click the Docker icon in the system tray, choose <strong>Troubleshoot</strong>, then click the <strong>Reset to factory defaults</strong> button.</p> <p><a href="https://i.stack.imgur.com/iLAwY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iLAwY.png" alt="enter image description here" /></a></p> <h3><strong>Step 2: Close Docker Desktop</strong></h3> <h3><strong>Step 3: Delete the .kube folder</strong></h3> <p>It is located at <code>C:\Users\&lt;user name&gt;\.kube</code></p> <p>Make a backup of it first if you want.</p> <h3><strong>Step 4: Delete the pki folder</strong></h3> <p>It is located at <code>C:\Users\&lt;user name&gt;\AppData\Local\Docker\pki</code></p> <p>Make a backup of it first if you want.</p> <h3><strong>Step 5: Add some entries to your hosts file</strong></h3> <pre><code>192.168.1.5 host.docker.internal
192.168.1.5 gateway.docker.internal
127.0.0.1 docker-for-desktop
127.0.0.1 kubernetes.docker.internal
</code></pre> <h3><strong>Step 6: Run Docker Desktop As Administrator</strong></h3> <p>That's it. I was able to run the local Kubernetes cluster again.</p> <p><a href="https://i.stack.imgur.com/zxyxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zxyxN.png" alt="enter image description here" /></a></p>
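<p>Once Kubernetes shows as running again, a quick way to confirm the cluster is healthy (nothing here is specific to this fix, just standard checks):</p> <pre><code># make sure kubectl talks to the Docker Desktop cluster
kubectl config use-context docker-desktop

# the node and the kube-system pods should be Ready / Running
kubectl get nodes
kubectl get pods -n kube-system
</code></pre>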
<p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">Kubernetes API Concepts &gt; Efficient detection of changes</a>:</p> <blockquote> <p>When retrieving a collection of resources (either namespace or cluster scoped), the response from the API server contains a resourceVersion value. The client can use that resourceVersion to initiate a watch against the API server. When you send a watch request, the API server responds with a stream of changes. These changes itemize the outcome of operations (such as create, delete, and update) that occurred after the resourceVersion you specified as a parameter to the watch request. The overall watch mechanism allows a client to fetch the current state and then subscribe to subsequent changes, without missing any events.</p> </blockquote> <p>When I tried a watch operation (using kubernetes python client) I get a stream of kubernetes events, the events themselves <strong>do not have</strong> a <code>resourceVersion</code>, the object inside the event (<code>kind: Pod</code>) do have <code>resourceVersion</code>.</p> <pre><code>from kubernetes import client,config,watch config.load_kube_config(context='eks-prod') v1 = client.CoreV1Api() watcher = watch.Watch() namespace = 'my-namespace' last_resource_version=0 for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5): print(i['object'].metadata.resource_version) last_resource_version = i['object'].metadata.resource_version </code></pre> <p>The resource version are output in the order they are received and <strong>they are not monotonically increasing</strong> at least in the initial batch of events:</p> <pre><code>380744163 380641499 380710458 380775853 380525082 381044514 380676150 380641735 380566799 380806984 380566585 380710721 378885571 380524650 380710218 380806798 373502190 380566206 381044372 380524615 380676624 380806573 380775663 380605904 380743917 380606153 380676388 380744368 380641258 380775416 380606397 </code></pre> <p>But can I assume that if this watch is disconnected I can <strong>safely</strong> resume from the highest resource version I've seen? In the above case, can I safely resume from <code>381044514</code> (the highest) without missing events?</p>
<p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">Resource Version Semantics</a></p> <blockquote> <p>You must <strong>not assume resource versions are numeric</strong> or collatable. API clients may only compare two resource versions for equality (this means that you must not compare resource versions for greater-than or less-than relationships).</p> </blockquote> <p>So in principle no you can't use the &quot;highest&quot; resource version because they are not really numeric or sortable. The best you can do is use the latest <code>resourceVersion</code> that you received as is , verbatim. And be prepared to get a <code>resource too old</code> that you are supposed to handle by <strong>retrying without specifying a resource version</strong>, in that case you must also handle the case where you will likely receive some events more than once.</p> <p>This scenario where the <code>resourceVersion</code> in the last event received is not the actual latest/most recent is easily reproducible in EKS 1.21 where the initial response to the watch will return the events in more or less random order. If I send two watch requests simultaneously I'll get the same 30 events but in different order.</p>
<p>When a pod is in CrashLoopBackOff state (for example), are the <strong>requested</strong> resources (CPU and memory) still allocated to this pod?</p> <p>If you have the answer, please also explain how you verified whether they are or are not still allocated to the pod &lt;3</p>
<p>I got the answer, here is the test:</p> <p>Whatever the number of nodes I have and the amount of resources they have, I will create a ResourceQuota for the namespace where I will perform the test:</p> <p><code>resourceQuota.yaml</code>:</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-so
spec:
  hard:
    cpu: &quot;10m&quot;
    memory: 10Mi
    pods: &quot;10&quot;
</code></pre> <p>Let's create a busybox deployment that will CrashLoopBackOff, requesting the maximum resources allowed by the ResourceQuota:</p> <p><code>deployment1.yaml</code>:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-so
  name: test-so
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-so
  template:
    metadata:
      labels:
        app: test-so
    spec:
      containers:
      - image: busybox
        name: busybox
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
</code></pre> <p>As expected it is in CrashLoopBackOff state, but it is deployed:</p> <pre><code>&gt; kubectl get pods -o wide:
NAME                        READY   STATUS             RESTARTS      AGE     IP           NODE                   NOMINATED NODE   READINESS GATES
pod/test-so-57f76ccb9b-2w5vk   0/1   CrashLoopBackOff   3 (63s ago)   2m23s   10.244.5.2   so-cluster-1-worker2   &lt;none&gt;           &lt;none&gt;
</code></pre> <p>Let's now create a second deployment with the same amount of resources:</p> <p><code>deployment2.yaml</code>:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-so2
  name: test-so2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-so2
  template:
    metadata:
      labels:
        app: test-so2
    spec:
      containers:
      - image: busybox
        name: busybox
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
</code></pre> <p>No pod is created, and here is the status of the replicaset:</p> <pre><code>❯ k describe rs test-so2-7dd9c65cbd
Name:           test-so2-7dd9c65cbd
Namespace:      so-tests
Selector:       app=test-so2,pod-template-hash=7dd9c65cbd
Labels:         app=test-so2
                pod-template-hash=7dd9c65cbd
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-so2
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-so2
           pod-template-hash=7dd9c65cbd
  Containers:
   busybox:
    Image:        busybox
    Port:         &lt;none&gt;
    Host Port:    &lt;none&gt;
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  &lt;none&gt;
    Mounts:       &lt;none&gt;
  Volumes:        &lt;none&gt;
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                From                   Message
  ----     ------        ----               ----                   -------
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-7x8qm&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-kv9m4&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-7w7wz&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-8gcnp&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-vllqf&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-2jhnb&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-gjtvw&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  31s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-qdq44&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  30s                replicaset-controller  Error creating: pods &quot;test-so2-7dd9c65cbd-69rn7&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
  Warning  FailedCreate  11s (x4 over 29s)  replicaset-controller  (combined from similar events): Error creating: pods &quot;test-so2-7dd9c65cbd-jjjl4&quot; is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
</code></pre> <p>So that means that, in fact, even if a pod is in CrashLoopBackOff state, its requested resources (CPU and memory) are still allocated to it.</p> <p>We know it now! hahaha</p> <p>Have a nice day, bguess</p>
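<p>As an extra check, the ResourceQuota itself shows that the crash-looping pod's requests are still counted as used (a sketch; the output below is approximate, not copied from my terminal):</p> <pre><code>❯ kubectl describe resourcequota test-so -n so-tests
Name:       test-so
Namespace:  so-tests
Resource    Used  Hard
--------    ----  ----
cpu         10m   10m
memory      10Mi  10Mi
pods        1     10
</code></pre>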
<p>I'm currently getting errors when trying to exec into or get logs for my pods on my GKE cluster.</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl logs &lt;POD-NAME&gt;
Error from server: Get &quot;https://&lt;NODE-PRIVATE-IP&gt;:10250/containerLogs/default/&lt;POD-NAME&gt;/&lt;DEPLOYMENT-NAME&gt;&quot;: remote error: tls: internal error
</code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it &lt;POD-NAME&gt; -- sh
Error from server: error dialing backend: remote error: tls: internal error
</code></pre> <p>One suspicious thing I found while troubleshooting is that all CSRs are getting denied...</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get csr
NAME        AGE     SIGNERNAME                      REQUESTOR                 CONDITION
csr-79zkn   4m16s   kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7b5sx   91m     kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7fzjh   103m    kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7gstl   19m     kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7hrvm   11m     kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7mn6h   87m     kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
csr-7nd7h   4m57s   kubernetes.io/kubelet-serving   system:node:&lt;NODE-NAME&gt;   Denied
...
</code></pre> <p>Any idea why this is happening? Maybe a firewall issue?</p> <p>Thanks in advance!</p> <h3>Update 1</h3> <p>Here are the same commands with verbose output <code>--v=8</code>, <em>without the <code>goroutines</code> stack trace</em>:</p> <p><code>$ kubectl logs --v=8 &lt;POD-NAME&gt;</code></p> <pre><code>I0527 09:27:59.624843 10407 loader.go:375] Config loaded from file: /home/kevin/.kube/config
I0527 09:27:59.628621 10407 round_trippers.go:420] GET https://&lt;PUBLIC-IP&gt;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;
I0527 09:27:59.628635 10407 round_trippers.go:427] Request Headers:
I0527 09:27:59.628644 10407 round_trippers.go:431]     Accept: application/json, */*
I0527 09:27:59.628649 10407 round_trippers.go:431]     User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:27:59.727411 10407 round_trippers.go:446] Response Status: 200 OK in 98 milliseconds
I0527 09:27:59.727461 10407 round_trippers.go:449] Response Headers:
I0527 09:27:59.727480 10407 round_trippers.go:452]     Audit-Id: ...
I0527 09:27:59.727496 10407 round_trippers.go:452]     Cache-Control: no-cache, private
I0527 09:27:59.727512 10407 round_trippers.go:452]     Content-Type: application/json
I0527 09:27:59.727528 10407 round_trippers.go:452]     Date: Thu, 27 May 2021 07:27:59 GMT
I0527 09:27:59.727756 10407 request.go:1097] Response Body: {&quot;kind&quot;:&quot;Pod&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{&quot;name&quot;:&quot;&lt;POD-NAME&gt;&quot;,&quot;generateName&quot;:&quot;&lt;POD-BASE-NAME&gt;&quot;,&quot;namespace&quot;:&quot;default&quot;,&quot;selfLink&quot;:&quot;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;&quot;,&quot;uid&quot;:&quot;...&quot;,&quot;resourceVersion&quot;:&quot;6764210&quot;,&quot;creationTimestamp&quot;:&quot;2021-05-19T10:33:28Z&quot;,&quot;labels&quot;:{&quot;app&quot;:&quot;&lt;NAME&gt;&quot;,&quot;pod-template-hash&quot;:&quot;...&quot;},&quot;ownerReferences&quot;:[{&quot;apiVersion&quot;:&quot;apps/v1&quot;,&quot;kind&quot;:&quot;ReplicaSet&quot;,&quot;name&quot;:&quot;&lt;POD-BASE-NAME&gt;&quot;,&quot;uid&quot;:&quot;...&quot;,&quot;controller&quot;:true,&quot;blockOwnerDeletion&quot;:true}],&quot;managedFields&quot;:[{&quot;manager&quot;:&quot;kube-controller-manager&quot;,&quot;operation&quot;:&quot;Update&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;time&quot;:&quot;2021-05-19T10:33:28Z&quot;,&quot;fieldsType&quot;:&quot;FieldsV1&quot;,&quot;fieldsV1&quot;:{&quot;f:metadata&quot;:{&quot;f:generateName&quot;:{},&quot;f:labels&quot;:{&quot;.&quot;:{},&quot;f:app&quot;:{},&quot;f:pod-template-hash&quot;:{}},&quot;f:ownerReferences&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;uid\&quot;:\&quot;...\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:apiVersion&quot;:{},&quot;f:blockOwnerDeletion&quot;:{},&quot;f:controller&quot;:{},&quot;f:kind&quot;:{},&quot;f:name&quot;:{},&quot;f:uid&quot;:{}}}},&quot;f:spec&quot;:{&quot;f:c [truncated 3250 chars]
I0527 09:27:59.745985 10407 round_trippers.go:420] GET https://&lt;PUBLIC-IP&gt;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;/log
I0527 09:27:59.746035 10407 round_trippers.go:427] Request Headers:
I0527 09:27:59.746055 10407 round_trippers.go:431]     Accept: application/json, */*
I0527 09:27:59.746071 10407 round_trippers.go:431]     User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:27:59.800586 10407 round_trippers.go:446] Response Status: 500 Internal Server Error in 54 milliseconds
I0527 09:27:59.800638 10407 round_trippers.go:449] Response Headers:
I0527 09:27:59.800654 10407 round_trippers.go:452]     Audit-Id: ...
I0527 09:27:59.800668 10407 round_trippers.go:452]     Cache-Control: no-cache, private
I0527 09:27:59.800680 10407 round_trippers.go:452]     Content-Type: application/json
I0527 09:27:59.800693 10407 round_trippers.go:452]     Content-Length: 217
I0527 09:27:59.800712 10407 round_trippers.go:452]     Date: Thu, 27 May 2021 07:27:59 GMT
I0527 09:27:59.800772 10407 request.go:1097] Response Body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;Get \&quot;https://10.156.0.8:10250/containerLogs/default/&lt;POD-NAME&gt;/&lt;SERVICE-NAME&gt;\&quot;: remote error: tls: internal error&quot;,&quot;code&quot;:500}
I0527 09:27:59.801848 10407 helpers.go:216] server response object: [{
  &quot;metadata&quot;: {},
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;Get \&quot;https://10.156.0.8:10250/containerLogs/default/&lt;POD-NAME&gt;/&lt;SERVICE-NAME&gt;\&quot;: remote error: tls: internal error&quot;,
  &quot;code&quot;: 500
}]
F0527 09:27:59.801944 10407 helpers.go:115] Error from server: Get &quot;https://10.156.0.8:10250/containerLogs/default/&lt;POD-NAME&gt;/&lt;SERVICE-NAME&gt;&quot;: remote error: tls: internal error
</code></pre> <p><code>kubectl exec --v=8 -it &lt;POD-NAME&gt; -- sh</code></p> <pre><code>I0527 09:44:48.673774 11157 loader.go:375] Config loaded from file: /home/kevin/.kube/config
I0527 09:44:48.678514 11157 round_trippers.go:420] GET https://&lt;PUBLIC-IP&gt;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;
I0527 09:44:48.678528 11157 round_trippers.go:427] Request Headers:
I0527 09:44:48.678535 11157 round_trippers.go:431]     Accept: application/json, */*
I0527 09:44:48.678543 11157 round_trippers.go:431]     User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:44:48.795864 11157 round_trippers.go:446] Response Status: 200 OK in 117 milliseconds
I0527 09:44:48.795920 11157 round_trippers.go:449] Response Headers:
I0527 09:44:48.795963 11157 round_trippers.go:452]     Audit-Id: ...
I0527 09:44:48.795995 11157 round_trippers.go:452]     Cache-Control: no-cache, private
I0527 09:44:48.796019 11157 round_trippers.go:452]     Content-Type: application/json
I0527 09:44:48.796037 11157 round_trippers.go:452]     Date: Thu, 27 May 2021 07:44:48 GMT
I0527 09:44:48.796644 11157 request.go:1097] Response Body: {&quot;kind&quot;:&quot;Pod&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{&quot;name&quot;:&quot;&lt;POD-NAME&gt;&quot;,&quot;generateName&quot;:&quot;&quot;,&quot;namespace&quot;:&quot;default&quot;,&quot;selfLink&quot;:&quot;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;&quot;,&quot;uid&quot;:&quot;&quot;,&quot;resourceVersion&quot;:&quot;6764210&quot;,&quot;creationTimestamp&quot;:&quot;2021-05-19T10:33:28Z&quot;,&quot;labels&quot;:{&quot;app&quot;:&quot;...&quot;,&quot;pod-template-hash&quot;:&quot;...&quot;},&quot;ownerReferences&quot;:[{&quot;apiVersion&quot;:&quot;apps/v1&quot;,&quot;kind&quot;:&quot;ReplicaSet&quot;,&quot;name&quot;:&quot;&lt;POD-BASE-NAME&gt;&quot;,&quot;uid&quot;:&quot;...&quot;,&quot;controller&quot;:true,&quot;blockOwnerDeletion&quot;:true}],&quot;managedFields&quot;:[{&quot;manager&quot;:&quot;kube-controller-manager&quot;,&quot;operation&quot;:&quot;Update&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;time&quot;:&quot;2021-05-19T10:33:28Z&quot;,&quot;fieldsType&quot;:&quot;FieldsV1&quot;,&quot;fieldsV1&quot;:{&quot;f:metadata&quot;:{&quot;f:generateName&quot;:{},&quot;f:labels&quot;:{&quot;.&quot;:{},&quot;f:app&quot;:{},&quot;f:pod-template-hash&quot;:{}},&quot;f:ownerReferences&quot;:{&quot;.&quot;:{},&quot;k:{\&quot;uid\&quot;:\&quot;...\&quot;}&quot;:{&quot;.&quot;:{},&quot;f:apiVersion&quot;:{},&quot;f:blockOwnerDeletion&quot;:{},&quot;f:controller&quot;:{},&quot;f:kind&quot;:{},&quot;f:name&quot;:{},&quot;f:uid&quot;:{}}}},&quot;f:spec&quot;:{&quot;f:c [truncated 3250 chars]
I0527 09:44:48.814315 11157 round_trippers.go:420] POST https://&lt;PUBLIC-IP&gt;/api/v1/namespaces/default/pods/&lt;POD-NAME&gt;/exec?command=sh&amp;container=&lt;SERVICE-NAME&gt;&amp;stdin=true&amp;stdout=true&amp;tty=true
I0527 09:44:48.814372 11157 round_trippers.go:427] Request Headers:
I0527 09:44:48.814391 11157 round_trippers.go:431]     User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:44:48.814406 11157 round_trippers.go:431]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0527 09:44:48.814420 11157 round_trippers.go:431]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0527 09:44:48.814445 11157 round_trippers.go:431]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0527 09:44:48.814471 11157 round_trippers.go:431]     X-Stream-Protocol-Version: channel.k8s.io
I0527 09:44:48.913928 11157 round_trippers.go:446] Response Status: 500 Internal Server Error in 99 milliseconds
I0527 09:44:48.913977 11157 round_trippers.go:449] Response Headers:
I0527 09:44:48.914005 11157 round_trippers.go:452]     Audit-Id: ...
I0527 09:44:48.914029 11157 round_trippers.go:452]     Cache-Control: no-cache, private
I0527 09:44:48.914054 11157 round_trippers.go:452]     Content-Type: application/json
I0527 09:44:48.914077 11157 round_trippers.go:452]     Date: Thu, 27 May 2021 07:44:48 GMT
I0527 09:44:48.914099 11157 round_trippers.go:452]     Content-Length: 149
I0527 09:44:48.915741 11157 helpers.go:216] server response object: [{
  &quot;metadata&quot;: {},
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;error dialing backend: remote error: tls: internal error&quot;,
  &quot;code&quot;: 500
}]
F0527 09:44:48.915837 11157 helpers.go:115] Error from server: error dialing backend: remote error: tls: internal error
</code></pre> <h3>Update 2</h3> <p>After connecting to one of the GKE worker nodes and checking the <code>kubelet</code> logs, I found these weird lines:</p> <pre><code>May 27 09:30:11 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:11.271022 1272 log.go:181] http: TLS handshake error from 10.156.0.9:54672: no serving certificate available for the kubelet
May 27 09:30:11 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:11.305628 1272 log.go:181] http: TLS handshake error from 10.156.0.9:54674: no serving certificate available for the kubelet
May 27 09:30:12 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:12.067998 1272 log.go:181] http: TLS handshake error from 10.156.0.11:57610: no serving certificate available for the kubelet
May 27 09:30:14 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:14.144826 1272 certificate_manager.go:412] Rotating certificates
May 27 09:30:14 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:14.154322 1272 reflector.go:207] Starting reflector *v1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
May 27 09:30:14 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: I0527 09:30:14.448976 1272 reflector.go:213] Stopping reflector *v1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
May 27 09:30:14 gke-&lt;CLUSTER-NAME&gt;-default-pool-&lt;NODE-UID&gt; kubelet[1272]: E0527 09:30:14.449045 1272 certificate_manager.go:454] certificate request was not signed: cannot watch on the certificate signing request: certificate signing request is denied, reason: AutoDenied, message:
</code></pre> <h3>Update 3</h3> <p>I've updated the cluster version from <code>1.19.9-gke.1400</code> to <code>1.19.9-gke.1900</code>. That didn't solve the problem...</p> <p>I also did a <code>Credentials Rotation</code> on the cluster, but that didn't solve it either...</p> <h1>Final</h1> <p>After trying a lot of changes in the cluster:</p> <ul> <li>Restarting kubelet on nodes</li> <li>Restarting nodes</li> <li>Upscaling/Downscaling node pool size</li> <li>Upgrading cluster version</li> <li>Rotating cluster certificates</li> </ul> <p>Even creating a new cluster (in the same project, with the same VPC, etc.) didn't solve the issue...</p> <p><strong>This problem might be related to changes made to firewall rules.</strong></p> <p>The only solution found was creating a new GKE cluster in a new GCP project and migrating the workload using Velero.</p>
<p>I assume this answer may help.</p> <p><em>This issue is due to <strong>pending</strong> Certificate Signing Requests for the nodes, made by the kubelet running on each node.</em></p> <p>To check the pending CSRs for the nodes:</p> <pre><code>kubectl get csr --sort-by=.metadata.creationTimestamp
</code></pre> <p>Then approve the CSR for each node:</p> <pre><code>kubectl certificate approve &lt;csr-id&gt;
</code></pre> <p>For further details, refer to this section of the k8s documentation:</p> <blockquote> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs</a></p> <p>One known limitation is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically approved by the default signer in the kube-controller-manager - kubernetes.io/kubelet-serving. This will require action from the user or a third party controller</p> </blockquote>
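<p>If many node CSRs are stuck, approving them one by one gets tedious. A small sketch that approves everything still shown as <code>Pending</code> (note that CSRs already <code>Denied</code>, as in the question, cannot be un-denied; the kubelet will submit fresh ones, which you can then approve):</p> <pre><code># approve every CSR that is still pending
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve
</code></pre>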