<p>I am deploying my application in a read-only Kubernetes cluster, so I am using volumes and volumeMounts for the temporary folders that the Apache server needs. When the Apache server starts inside the read-only pod, I get this error:</p>
<pre><code>chown: changing ownership of '/var/lock/apache2.fm2cgWmnxk': Operation not permitted
</code></pre>
<p>I came across this issue <a href="https://stackoverflow.com/questions/43544370/kubernetes-how-to-set-volumemount-user-group-and-file-permissions">Kubernetes: how to set VolumeMount user group and file permissions</a> and tried using securityContext.fsGroup, but I am still getting the same error.</p>
<p>Here is my deployment.yaml for reference:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: &DeploymentName abc
spec:
replicas: 1
selector:
matchLabels: &appName
app: *DeploymentName
template:
metadata:
name: main
labels:
<<: *appName
spec:
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 3000
fsGroupChangePolicy: "OnRootMismatch"
volumes:
- name: var-lock
emptyDir: {}
containers:
- name: *DeploymentName
image: abc-image
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/lock
name: var-lock
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 180
periodSeconds: 60
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 300
periodSeconds: 180
imagePullPolicy: Always
tty: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
envFrom:
- configMapRef:
name: *DeploymentName
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 1
memory: 2Gi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: &hpaName abc
spec:
maxReplicas: 1
minReplicas: 1
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: *hpaName
targetCPUUtilizationPercentage: 60
</code></pre>
<p>Any help is appreciated.</p>
| <p>Hello, I hope you are enjoying your Kubernetes journey!</p>
<p>I wanted to try this on a local kind (Kubernetes in Docker) cluster, so this is what I did:</p>
<p>First, I set up a kind cluster locally with this configuration (info here: <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>):</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
</code></pre>
<p>After this I created my cluster with this command:</p>
<pre><code>kind create cluster --config=config.yaml
</code></pre>
<p>Next, I created a test namespace (manifest obtained with: <code>kubectl create ns so-tests -o yaml --dry-run</code>):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: so-tests
</code></pre>
<p>From there my environment was set up, so I used your deployment config and replaced the DeploymentName, appName and hpaName occurrences with "so-71823613" (Stack Overflow plus your question ID), but for the test I decided not to use the HPA config.</p>
<p>Next, since you did not provide the image you are using for Apache, I used the Docker Hub image httpd:2.4.53 (<a href="https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore" rel="nofollow noreferrer">https://hub.docker.com/layers/httpd/library/httpd/2.4.53/images/sha256-10ed1591781d9fdbaefaafee77067f12e833c699c84ed4e21706ccbd5229fd0a?context=explore</a>)</p>
<p>Again, since I don't have your ConfigMap, I decided to comment out the part where you get environment variables from it.</p>
<p>Since the default user in the httpd image is "www-data", I first deployed the pod without any securityContext just to get the ID of that user:</p>
<pre><code>❯ k exec -it pod/so-71823613-555d8b454-z5ks5 -- id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)
</code></pre>
<p>Once I knew the ID of the www-data user, I modified the securityContext accordingly and kept the rest of the configuration (probes, volume, etc.) as you configured it; the full manifest is shown further below. From the Kubernetes documentation:</p>
<p>In the configuration file, the runAsUser field specifies that for any containers in the Pod, all processes run with user ID 33 (www-data).
The runAsGroup field specifies the primary group ID of 33 for all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers will be root (0).
Any files created will also be owned by user 33 and group 33 when runAsGroup is specified.
Since the fsGroup field is specified, all processes of the container are also part of the supplementary group ID 33. The owner of the "/var/lock" volume and of any files created in that volume will be group ID 33.
...
fsGroupChangePolicy defines the behavior for changing ownership and permissions of the volume before it is exposed inside a Pod. This field only applies to volume types that support fsGroup-controlled ownership and permissions. It has two possible values:</p>
<p>OnRootMismatch: Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This could help shorten the time it takes to change ownership and permission of a volume.
Always: Always change permission and ownership of the volume when volume is mounted.</p>
<p>( description from here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>)</p>
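<p>Concretely, the securityContext block used in the full manifest below (matching the www-data UID/GID found above) is:</p>
<pre><code>securityContext:
  fsGroup: 33
  runAsNonRoot: true
  runAsUser: 33
  runAsGroup: 33
  fsGroupChangePolicy: "OnRootMismatch"
</code></pre>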
<p>So, once I deployed my configuration using:</p>
<pre><code>kubectl apply -f deployment.yaml
deployment.apps/so-71823613 created
</code></pre>
<p>I got this error:</p>
<pre><code> k logs -f pod/so-71823613-7c5b65df4d-6scg5
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.2. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
</code></pre>
<p>First, to fix the ServerName warning on the first line, I exec'd into the pod to fetch the httpd.conf file with:</p>
<pre><code>k exec -it pod/so-71823613-555d8b454-fgjcs -- cat /usr/local/apache2/conf/httpd.conf > httpd.conf
</code></pre>
<p>Once I had the httpd.conf file, I modified it by adding:</p>
<pre><code>ServerName localhost:8080
</code></pre>
<p>(cf <a href="https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html" rel="nofollow noreferrer">https://ixnfo.com/en/solution-ah00558-apache2-could-not-reliably-determine-the-servers-fully-qualified-domain-name.html</a>)</p>
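<p>As a side note, if you want to generate that ConfigMap from the modified file, a command like this works (the file and ConfigMap names match the ones used below):</p>
<pre><code>kubectl create configmap httpconf --from-file=httpd.conf --dry-run=client -o yaml > configmap.yaml
</code></pre>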
<p>Then I put the new httpd.conf file into a ConfigMap named "httpconf" and modified the deployment to mount the ConfigMap over the original file (here "/usr/local/apache2/conf/httpd.conf"):</p>
<pre><code> ...
volumeMounts:
...
- name: "config"
mountPath: "/usr/local/apache2/conf/httpd.conf"
subPath: "httpd.conf"
volumes:
...
- name: "config"
configMap:
name: "httpconf"
...
❯ kubectl apply -f configmap.yaml -f deployment.yaml
configmap/httpconf created
deployment.apps/so-71823613 created
</code></pre>
<p>Then this error remained:</p>
<pre><code>(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
</code></pre>
<p>To fix it, I changed the listening port of Apache directly in the httpd.conf held by the ConfigMap (according to this: <a href="https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when">https://askubuntu.com/questions/338218/why-am-i-getting-permission-denied-make-sock-could-not-bind-to-address-when</a>)</p>
<pre><code>Listen 8080
ServerName localhost:8080
</code></pre>
<p>Since I am now exposing port 8080, I also modified the probes and the container port accordingly:</p>
<pre><code>...
ports:
- containerPort: 8080
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 180
periodSeconds: 60
livenessProbe:
tcpSocket:
port: 8080
...
</code></pre>
<p>After reapplying my config I got this new error:</p>
<pre><code>❯ k logs -f pod/so-71823613-7dd7bdb66d-qtf9t
[Wed Apr 20 05:50:57.863971 2022] [core:error] [pid 1:tid 139771999915328] (13)Permission denied: AH00099: could not create /usr/local/apache2/logs/httpd.pid.KYUI5g
[Wed Apr 20 05:50:57.864061 2022] [core:error] [pid 1:tid 139771999915328] AH00100: httpd: could not log pid to file /usr/local/apache2/logs/httpd.pid
</code></pre>
<p>To fix that issue, I used your workaround with the emptyDir and added this:</p>
<pre><code> volumeMounts:
...
- mountPath: /usr/local/apache2/logs/
name: apache2-logs
volumes:
...
- name: apache2-logs
emptyDir: {}
</code></pre>
<p>Here are the full manifests:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: so-71823613
spec:
replicas: 1
selector:
matchLabels:
app: so-71823613
template:
metadata:
name: main
labels:
app: so-71823613
spec:
securityContext:
fsGroup: 33
runAsNonRoot: true
runAsUser: 33
runAsGroup: 33
fsGroupChangePolicy: "OnRootMismatch"
containers:
- name: so-71823613
image: httpd:2.4.53
ports:
- containerPort: 8080
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 180
periodSeconds: 60
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 300
periodSeconds: 180
imagePullPolicy: Always
tty: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# envFrom:
# - configMapRef:
# name: so-71823613
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 1
memory: 2Gi
volumeMounts:
- mountPath: /var/lock
name: var-lock
- mountPath: /usr/local/apache2/logs/
name: apache2-logs
- name: "config"
mountPath: "/usr/local/apache2/conf/httpd.conf"
subPath: "httpd.conf"
volumes:
- name: var-lock
emptyDir: {}
- name: apache2-logs
emptyDir: {}
- name: "config"
configMap:
name: "httpconf"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: httpconf
data:
httpd.conf: |
ServerRoot "/usr/local/apache2"
Listen 8080
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule filter_module modules/mod_filter.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
<IfModule !mpm_prefork_module>
</IfModule>
<IfModule mpm_prefork_module>
</IfModule>
LoadModule dir_module modules/mod_dir.so
LoadModule alias_module modules/mod_alias.so
<IfModule unixd_module>
User www-data
Group www-data
</IfModule>
ServerAdmin [email protected]
ServerName localhost:8080
<Directory />
AllowOverride none
Require all denied
</Directory>
DocumentRoot "/usr/local/apache2/htdocs"
<Directory "/usr/local/apache2/htdocs">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
<Files ".ht*">
Require all denied
</Files>
ErrorLog /proc/self/fd/2
LogLevel warn
<IfModule log_config_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
CustomLog /proc/self/fd/1 common
</IfModule>
<IfModule alias_module>
ScriptAlias /cgi-bin/ "/usr/local/apache2/cgi-bin/"
</IfModule>
<IfModule cgid_module>
</IfModule>
<Directory "/usr/local/apache2/cgi-bin">
AllowOverride None
Options None
Require all granted
</Directory>
<IfModule headers_module>
RequestHeader unset Proxy early
</IfModule>
<IfModule mime_module>
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
</IfModule>
<IfModule proxy_html_module>
Include conf/extra/proxy-html.conf
</IfModule>
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
# ---
# apiVersion: autoscaling/v1
# kind: HorizontalPodAutoscaler
# metadata:
# name: so-71823613
# spec:
# maxReplicas: 1
# minReplicas: 1
# scaleTargetRef:
# apiVersion: extensions/v1beta1
# kind: Deployment
# name: so-71823613
# targetCPUUtilizationPercentage: 60
</code></pre>
<p>After waiting for the initialDelaySeconds of the probes, I finally got my pod up and running correctly:</p>
<pre><code>Every 1.0s: kubectl get po,svc,cm -o wide DESKTOP-6PBJAOK: Wed Apr 20 03:15:02 2022
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/so-71823613-897768549-mcmb4 1/1 Running 0 4m13s 10.244.4.4 so-cluster-1-worker3 <none> <none>
NAME DATA AGE
configmap/httpconf 1 4m14s
</code></pre>
<p>Bonus:</p>
<p>I then decided to expose the httpd deployment with a service; here is the manifest (obtained from <code>k expose deployment so-71823613 --port 80 --target-port 8080 --dry-run=client -o yaml</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: so-71823613
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: so-71823613
</code></pre>
<p>As you can see, the service maps port 80 to the pod's port 8080 (you can also use an ingress controller to expose the service outside of the cluster, as sketched below).</p>
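<p>If you go the ingress route, a minimal sketch could look like this (it assumes an NGINX ingress controller is installed; the host name is a placeholder):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: so-71823613
spec:
  ingressClassName: nginx
  rules:
  - host: so-71823613.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: so-71823613
            port:
              number: 80
</code></pre>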
<p>To test it locally, I port-forwarded the service on my machine:</p>
<pre><code>❯ k port-forward service/so-71823613 8080:80
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
</code></pre>
<p>and here is the result:</p>
<p><a href="https://i.stack.imgur.com/HWIXE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWIXE.png" alt="enter image description here" /></a></p>
<p>TADA !</p>
<p>To conclude, I tried to reproduce your setup as best I could with the information you provided (it was quite fun), so if this does not work for you, it means I need more information.
Thank you for reading.
bguess.</p>
|
<p>I am working through "learn kubernetes the hard way" and am at the "bootstrapping the etcd cluster" step: <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md</a></p>
<p>I have run into a command that causes a timeout:</p>
<pre><code>somersbmatthews@controller-0:~$ { sudo systemctl daemon-reload; sudo systemctl enable etcd; sudo systemctl start etcd; }
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xe" for details.
</code></pre>
<p>Here I follow the above recommendations:</p>
<p>This is the first thing the CLI asked me to check:</p>
<pre><code>somersbmatthews@controller-0:~$ systemctl status etcd.service
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: activating (start) since Wed 2020-12-02 03:15:05 UTC; 34s ago
Docs: https://github.com/coreos
Main PID: 49251 (etcd)
Tasks: 8 (limit: 9544)
Memory: 10.2M
CGroup: /system.slice/etcd.service
└─49251 /usr/local/bin/etcd --name controller-0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file>
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 is starting a new election at term 570
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 became candidate at term 571
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 571
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:15:38 controller-0 etcd[49251]: raft2020/12/02 03:15:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 is starting a new election at term 571
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 became candidate at term 572
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 572
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:15:39 controller-0 etcd[49251]: raft2020/12/02 03:15:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
</code></pre>
<p>The second thing the CLI asked me to check:</p>
<pre><code>somersbmatthews@controller-0:~$ journalctl -xe
-- A stop job for unit etcd.service has finished.
--
-- The job identifier is 3597 and the job result is done.
Dec 02 03:05:32 controller-0 systemd[1]: Starting etcd...
-- Subject: A start job for unit etcd.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit etcd.service has begun execution.
--
-- The job identifier is 3597.
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Dec 02 03:05:32 controller-0 etcd[48861]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Dec 02 03:05:32 controller-0 etcd[48861]: etcd Version: 3.4.10
Dec 02 03:05:32 controller-0 etcd[48861]: Git SHA: 18dfb9cca
Dec 02 03:05:32 controller-0 etcd[48861]: Go Version: go1.12.17
Dec 02 03:05:32 controller-0 etcd[48861]: Go OS/Arch: linux/amd64
Dec 02 03:05:32 controller-0 etcd[48861]: setting maximum number of CPUs to 2, total number of available CPUs is 2
Dec 02 03:05:32 controller-0 etcd[48861]: the server is already initialized as member before, starting as etcd member...
Dec 02 03:05:32 controller-0 etcd[48861]: peerTLS: cert = /etc/etcd/kubernetes.pem, key = /etc/etcd/kubernetes-key.pem, trusted-ca = /etc/etcd/ca.pem, cli>
Dec 02 03:05:32 controller-0 etcd[48861]: name = controller-0
Dec 02 03:05:32 controller-0 etcd[48861]: data dir = /var/lib/etcd
Dec 02 03:05:32 controller-0 etcd[48861]: member dir = /var/lib/etcd/member
Dec 02 03:05:32 controller-0 etcd[48861]: heartbeat = 100ms
Dec 02 03:05:32 controller-0 etcd[48861]: election = 1000ms
Dec 02 03:05:32 controller-0 etcd[48861]: snapshot count = 100000
Dec 02 03:05:32 controller-0 etcd[48861]: advertise client URLs = https://10.240.0.10:2379
Dec 02 03:05:32 controller-0 etcd[48861]: initial advertise peer URLs = https://10.240.0.10:2380
Dec 02 03:05:32 controller-0 etcd[48861]: initial cluster =
Dec 02 03:05:32 controller-0 etcd[48861]: restarting member f98dc20bce6225a0 in cluster 3e7cc799faffb625 at commit index 3
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=()
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 became follower at term 183
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: newRaft f98dc20bce6225a0 [peers: [], term: 183, commit: 3, applied: 0, lastindex: >
Dec 02 03:05:32 controller-0 etcd[48861]: simple token is not cryptographically signed
Dec 02 03:05:32 controller-0 etcd[48861]: starting server... [version: 3.4.10, cluster version: to_be_decided]
Dec 02 03:05:32 controller-0 etcd[48861]: raft2020/12/02 03:05:32 INFO: f98dc20bce6225a0 switched to configuration voters=(4203990652121993521)
Dec 02 03:05:32 controller-0 etcd[48861]: added member 3a57933972cb5131 [https://10.240.0.12:2380] to cluster 3e7cc799faffb625
Dec 02 03:05:32 controller-0 etcd[48861]: starting peer 3a57933972cb5131...
Dec 02 03:05:32 controller-0 etcd[48861]: started HTTP pipelining with peer 3a57933972cb5131
Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer)
Dec 02 03:05:32 controller-0 etcd[48861]: started streaming with peer 3a57933972cb5131 (writer)
Dec 02 03:05:32 controller-0 etcd[48861]: started peer 3a57933972cb5131
somersbmatthews@controller-0:~$ journalctl -xe
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 is starting a new election at term 224
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 became candidate at term 225
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 225
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:32 controller-0 etcd[48861]: raft2020/12/02 03:06:32 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:32 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 is starting a new election at term 225
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 became candidate at term 226
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 226
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:34 controller-0 etcd[48861]: raft2020/12/02 03:06:34 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 is starting a new election at term 226
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 became candidate at term 227
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 227
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:35 controller-0 etcd[48861]: raft2020/12/02 03:06:35 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:35 controller-0 etcd[48861]: publish error: etcdserver: request timed out
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 is starting a new election at term 227
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 became candidate at term 228
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 228
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:37 controller-0 etcd[48861]: raft2020/12/02 03:06:37 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer ffed16798470cab5 could not connect: dial tcp 10.240.0.11:2380: connect: connection refused
Dec 02 03:06:37 controller-0 etcd[48861]: health check for peer 3a57933972cb5131 could not connect: dial tcp 10.240.0.12:2380: connect: connection refused
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 is starting a new election at term 228
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 became candidate at term 229
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 229
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:38 controller-0 etcd[48861]: raft2020/12/02 03:06:38 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 is starting a new election at term 229
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 became candidate at term 230
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 230
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to 3a57933972cb5131 a>
Dec 02 03:06:39 controller-0 etcd[48861]: raft2020/12/02 03:06:39 INFO: f98dc20bce6225a0 [logterm: 1, index: 3] sent MsgVote request to ffed16798470cab5 a>
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 is starting a new election at term 230
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 became candidate at term 231
Dec 02 03:06:41 controller-0 etcd[48861]: raft2020/12/02 03:06:41 INFO: f98dc20bce6225a0 received MsgVoteResp from f98dc20bce6225a0 at term 231
</code></pre>
<p>So I re-ran the step that I think should allow the traffic that is being refused above <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md#firewall-rules</a>:</p>
<pre><code>somersbmatthews@controller-0:~$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
> --allow tcp,udp,icmp \
> --network kubernetes-the-hard-way \
> --source-ranges 10.240.0.0/24,10.200.0.0/16
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/k8s-hard-way-2571/global/firewalls/kubernetes-the-hard-way-allow-internal' already exists
</code></pre>
<p>but I'm still getting the timeout errors above.</p>
<p>Any help is appreciated, thanks :)</p>
| <p>I encountered a similar error.
First, I had made a mistake in the master-1 IP address used in listen-peer-urls, listen-client-urls and advertise-client-urls.</p>
<p>Second, test whether telnet to port 2380 works between the masters. If it doesn't, open the firewall on both masters:</p>
<pre><code>sudo firewall-cmd --add-port=2380/tcp --permanent
sudo systemctl restart firewalld
</code></pre>
<p>Also, make sure that the masters do not have a significant time difference (clock skew) between them.</p>
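<p>As a quick check on systemd-based hosts (which the GCP images used by the guide are), you can inspect the clock and make sure NTP synchronization is enabled on each controller:</p>
<pre><code>timedatectl
sudo timedatectl set-ntp true
</code></pre>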
|
<p>I have an app running in a kubernetes managed docker container, using Azure Kubernetes Service (AKS). I can output logs to a text file for the app deployment using:</p>
<pre><code>kubectl logs deployment/my-deployment > out.txt
</code></pre>
<p>This gives me a file of around max 28Mb. When I get the docker logs for the same container on the physical VM using <code>docker logs ...</code>, the log file is much bigger (up to 120Mb+).</p>
<p>How can I increase the size of the available <code>kubectl logs</code> for the deployment? If this option is available, then it would likely be an option that increases the size of the available <code>kubectl logs</code> for the <em>pod</em> that holds the container, as the pod and deployment logs are identical.</p>
<p>It's not the Docker configuration in <code>/etc/docker/daemon.json</code> that's limiting the <code>kubectl</code> logs, as that's set to 50Mb. I've read that it's the underlying Docker configuration that Kubernetes uses, but that doesn't seem to be the case, as my <code>kubectl</code> logs are being truncated to around 28Mb.</p>
| <p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer"><code>Kubectl logs</code></a> might read logs with a default log rotation, meaning the <a href="https://stackoverflow.com/a/39398892/6309">logrotate service is active</a>.</p>
<p>Check the content of <code>/etc/logrotate.d/docker-containers</code> (for example with <code>cat</code>), as in <a href="https://github.com/kubernetes/kubernetes/issues/11046" rel="nofollow noreferrer">this issue</a>, for confirmation.</p>
<p>As explained in <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/logging/#:%7E:text=Kubernetes%20uses%20the%20logrotate%20tool,for%20the%20container%20are%20lost." rel="nofollow noreferrer"><code>unofficial-kubernetes</code></a>:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node. Kubernetes uses the <code>logrotate</code> tool to implement log rotation.</p>
<p>Kubernetes performs log rotation daily, or if the log file grows beyond 10MB in size.<br />
Each rotation belongs to a single container; if the container repeatedly fails or the pod is evicted, all previous rotations for the container are lost.<br />
<strong>By default, Kubernetes keeps up to five logging rotations per container</strong>.</p>
</blockquote>
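<p>Note that on clusters where the kubelet (rather than the Docker <code>json-file</code> driver) rotates container logs, the equivalent knobs are <code>containerLogMaxSize</code> and <code>containerLogMaxFiles</code> in the kubelet configuration. A sketch, assuming your nodes let you edit the kubelet config (managed services such as AKS may not expose this directly):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 100Mi   # size at which a log file is rotated
containerLogMaxFiles: 5      # rotations kept per container
</code></pre>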
|
<p>Let's take this processor as an example: <em>a CPU with 2 cores and 4 threads (2 threads per core)</em>.</p>
<p>From what I've read, such a CPU has 2 physical cores but can process 4 threads simultaneously through hyper threading. But, in reality, one physical core can only truly run one thread at a time, but using hyper threading, the CPU exploits the idle stages in the pipeline to process another thread.</p>
<p>Now, here is <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="noreferrer">Kubernetes</a> with Prometheus and Grafana and their CPU resource units measurement - <code>millicore/millicpu</code>. So, they virtually slice a core to 1000 millicores.</p>
<p>Taking into account the hyper threading, I can't understand how they calculate those millicores under the hood.</p>
<p>How can a process, for example, use 100millicore (10th part of the core)? How is this technically possible?</p>
<hr />
<p>PS: I accidentally found a really descriptive explanation here: <a href="https://stackoverflow.com/questions/61851751/multi-threading-with-millicores-in-kubernetes">Multi threading with Millicores in Kubernetes</a></p>
| <p>This gets very complicated. Kubernetes doesn't actually manage this itself; it just provides a layer on top of the underlying container runtime (Docker, containerd, etc.). When you configure a container to use 100 <code>millicore</code>, Kubernetes hands that down to the container runtime and the runtime deals with it. Once you go down to this level you have to start looking at the Linux kernel and how it does CPU scheduling and rate limiting with cgroups, which becomes incredibly interesting and complicated. In a nutshell though: the Linux <code>CFS Bandwidth Control</code> is the thing that manages how much CPU a process (container) can use. By setting the <code>quota</code> and <code>period</code> parameters of the scheduler you control how much CPU is used, by controlling how long a process can run before being paused and how often it runs. As you correctly identify, you can't use only a tenth of a core, but you can use a tenth of the time, and by doing that you effectively use only a tenth of the core over time.</p>
<p>For example:</p>
<p>If I set <code>quota</code> to 250ms and <code>period</code> to 250ms, that tells the kernel that this <code>cgroup</code> can use 250ms of CPU cycle time every 250ms, which means it can use 100% of the CPU.</p>
<p>If I set <code>quota</code> to 500ms and keep the <code>period</code> at 250ms, that tells the kernel that this <code>cgroup</code> can use 500ms of CPU cycle time every 250ms, which means it can use 200% of the CPU (2 cores).</p>
<p>If I set <code>quota</code> to 125ms and keep the <code>period</code> at 250ms, that tells the kernel that this <code>cgroup</code> can use 125ms of CPU cycle time every 250ms, which means it can use 50% of the CPU.</p>
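<p>As a rough way to see this in practice (assuming cgroup v1; cgroup v2 exposes a single <code>cpu.max</code> file instead), you can read the values the runtime sets for a container whose CPU limit is 100m:</p>
<pre><code># run inside the container (cgroup v1 paths)
cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # typically 100000 (100ms)
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # 10000 (10ms per period, i.e. 10% of one core)
</code></pre>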
<p>This is a very brief explanation. Here is some further reading:</p>
<p><a href="https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7" rel="noreferrer">https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7</a>
<a href="https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html" rel="noreferrer">https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html</a></p>
|
<p>I have two variables containing my namespace names:</p>
<pre><code>$KUBE_NAMESPACE_DEV ="stellacenter-dev"
$KUBE_NAMESPACE_STAGE "stellacenter-stage-uat"
</code></pre>
<p>Now I want to modify the following .gitlab-ci.yaml configuration to include the namespace logic:</p>
<pre><code>deploy_dev:
stage: deploy
image: stellacenter/aws-helm-kubectl
before_script:
- aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
- aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
- aws configure set region ${DEV_AWS_DEFAULT_REGION}
script:
- sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
- mkdir -p $HOME/.kube
- cp $KUBE_CONFIG_DEV $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f ./provider-service.yml
only:
- developer
</code></pre>
<p>provider-service.yml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: provider-app
namespace: "stellacenter-dev" or "stellacenter-stage-uat"
labels:
app: provider-app
spec:
replicas: 1
selector:
matchLabels:
app : provider-app
template:
metadata:
labels:
app: provider-app
spec:
containers:
- name: provider-app
image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
imagePullPolicy: Always
ports:
- containerPort: 8092
imagePullSecrets:
- name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
name: provider-service
namespace: "stellacenter-dev" "stellacenter-stage-uat"
spec:
type: NodePort
selector:
app: provider-app
ports:
- port: 8092
targetPort: 8092
</code></pre>
<p>I don't know how to integrate the variables and the values correctly. I'm facing an error when I run the pipeline. Kindly help me sort it out.</p>
| <p>You can remove the <code>namespace: NAMESPACE</code> from the manifest and apply the resource in a namespace using the command line:</p>
<pre><code>- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
- kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_STAGE}
</code></pre>
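<p>In the context of your <code>.gitlab-ci.yaml</code>, the <code>deploy_dev</code> job's script section would then end with something like this (the stage job would use <code>${KUBE_NAMESPACE_STAGE}</code> instead):</p>
<pre><code>script:
  - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service.yml
  - mkdir -p $HOME/.kube
  - cp $KUBE_CONFIG_DEV $HOME/.kube/config
  - export KUBECONFIG=$HOME/.kube/config
  - kubectl apply -f ./provider-service.yml -n ${KUBE_NAMESPACE_DEV}
</code></pre>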
|
<p>I have tried to deploy Camunda Platform 8 using the Helm chart (<a href="https://artifacthub.io/packages/helm/camunda-platform/camunda-platform" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/camunda-platform/camunda-platform</a>), but I am unable to use an ingress or load balancer IP for the identity service, as it redirects to localhost:18080 for Keycloak and there is no option in the Helm chart for changing localhost:18080 to the ingress IP. So please suggest how to deploy Camunda Platform 8 in production on Kubernetes, mainly GKE.</p>
| <p>Like any Helm chart, you can override the configuration values via a separate (custom) values.yaml file. The exact entry for your values.yaml file will be:</p>
<pre><code>global:
  identity:
    auth:
      publicIssuerUrl: *yourcustomtokenissuerendpoint*
</code></pre>
<p>More details on helm global variables and how to use/override can be found here <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values</a></p>
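<p>Assuming you have added the chart repository under the alias <code>camunda</code>, installing with the override file would then look something like:</p>
<pre><code>helm install camunda camunda/camunda-platform -f values.yaml
</code></pre>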
|
<p>I have joined a project where we deploy our apps on Azure cloud. The apps are in docker containers orchestrated by kubernetes.</p>
<p>We get the data from a pod's log with this command:</p>
<pre><code>kubectl logs --namespace=app-prod deployment/app-fd-prod > $(logFile)
</code></pre>
<p>where logFile is a variable</p>
<p>Now the problem is that yesterday the log file was 27MB, while today it is 2MB, which means the old data got deleted somehow.</p>
<p>I have read about logrotate, but I don't think we have implemented it.</p>
<p>I want to know what could be the reason for the deletion of yesterday's log text.</p>
<p>Any leads, helps would be appreciated.</p>
| <p>Pod logs get rotated for you, as pointed out by @jordanm in the comments. You need a log aggregation tool like the ELK/EFK stack (Elasticsearch + Logstash + Kibana, or Elasticsearch + Fluentd + Kibana) or Promtail + Loki + Grafana. These stacks tail the container logs, do some parsing and manipulation, and ship them to persistent storage such as S3.</p>
<p>As a side note, you can take a look at the <a href="https://github.com/banzaicloud/logging-operator" rel="nofollow noreferrer">logging operator</a>, which makes configuring these stacks a little easier.</p>
|
<p>We are following the below article to establish C# client connection to the Ignite Cluster, both of them deployed in the Kubernetes.</p>
<p><a href="https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service</a></p>
<p>We cannot find the equivalent C# class/method to perform this connection configuration in the C# client application.</p>
<p><a href="https://i.stack.imgur.com/JJ3Gq.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Please help us to find alternate methods to do the connection configuration for Kubernetes.</p>
| <p>This API is not yet available for .NET, the <a href="https://issues.apache.org/jira/browse/IGNITE-13011" rel="nofollow noreferrer">relevant ticket</a> is in progress and most likely will be included into the next release.</p>
<p>For now, you can list a set of server IPs for your thin clients explicitly. And for your server and thick client nodes it's fine to rely on spring.xml configuration. More details <a href="https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml" rel="nofollow noreferrer">here</a>.</p>
<p>Example:</p>
<pre><code> var cfg = new IgniteConfiguration
{
...
SpringConfigUrl = "/path/to/spring.xml"
};
</code></pre>
<p>And your spring configuration:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Enables Kubernetes IP finder and setting custom namespace and service names.
-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="ignite"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
</code></pre>
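<p>For the thin-client approach mentioned above (listing the server addresses explicitly), a minimal C# sketch could look like this; the service DNS name is a placeholder for your own setup, and 10800 is the default thin-client port:</p>
<pre><code>using Apache.Ignite.Core;
using Apache.Ignite.Core.Client;

// Thin client: point it straight at the reachable server addresses.
var clientCfg = new IgniteClientConfiguration
{
    Endpoints = new[] { "ignite-service.ignite.svc.cluster.local:10800" }
};

using (var client = Ignition.StartClient(clientCfg))
{
    // Simple round trip to verify the connection works.
    var cache = client.GetOrCreateCache<int, string>("my-cache");
    cache.Put(1, "hello");
}
</code></pre>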
|
<p>I'm trying to follow a Kubernetes tutorial on YouTube <a href="https://www.youtube.com/watch?v=X48VuDVv0do&t=5428s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=X48VuDVv0do&t=5428s</a>.
An error occurred: the pod fails to connect to MongoDB when I create the mongo-express deployment. Please kindly help!</p>
<p>Error info from the pod, retrieved with the kubectl logs command:</p>
<pre><code>Welcome to mongo-express
------------------------
(node:7) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
Could not connect to database using connectionString: mongodb://username:password@mongodb-service:27017/"
(node:7) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [mongodb-service:27017] on first connect [Error: getaddrinfo ENOTFOUND mongodb-service
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) {
name: 'MongoNetworkError'
}]
at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:441:11)
at Pool.emit (events.js:314:20)
at /node_modules/mongodb/lib/core/connection/pool.js:564:14
at /node_modules/mongodb/lib/core/connection/pool.js:1000:11
at /node_modules/mongodb/lib/core/connection/connect.js:32:7
at callback (/node_modules/mongodb/lib/core/connection/connect.js:300:5)
at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:330:7)
at Object.onceWrapper (events.js:421:26)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
(node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:7) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
</code></pre>
<p><strong>Configuration files</strong><br />
Configuration files of the Kubernetes components, for your reference:</p>
<pre><code> 1. kubernetes secret
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: dXNlcm5hbWU=
mongo-root-password: cGFzc3dvcmQ=
2. mongodb deployment & service
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
3. kubernetes configmap
apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-configmap
data:
database_url: mongodb-service
4. mongo-express
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-configmap
key: database_url
</code></pre>
| <p>I followed the same tutorial and faced the same issue. The problem is with Docker: starting minikube with the docker driver was the issue; it seems Docker has some limitations here. Install HyperKit if you are using macOS, Hyper-V if you are on Windows, or VirtualBox if you are using a Linux distribution.</p>
<p>Then start minikube with the hypervisor you installed, like this:</p>
<pre><code>minikube start --driver=virtualbox
</code></pre>
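<p>The equivalent commands for the other hypervisors mentioned above would be, for example:</p>
<pre><code>minikube start --driver=hyperkit     # macOS
minikube start --driver=hyperv       # Windows
minikube start --driver=virtualbox   # Linux
</code></pre>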
|
<p>Is there some way to force pod creation order on Kubernetes?</p>
<p>I have a scenario where Kubernetes selects a node pool with few resources; the first pod to be deployed consumes very few resources, but the next one consumes a lot and its deployment fails.</p>
<p>So I was wondering if there is a way to instruct Kubernetes to deploy the resource-hungry pod first and then the small ones.</p>
| <p>You can use a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">Node Selector</a> in your Pod’s specification; you just need <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels" rel="nofollow noreferrer">Node Labels</a> for that, as in the sketch just below.</p>
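<p>A minimal sketch of that (the node label, pod name, and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: big-consumer
spec:
  nodeSelector:
    pool: high-resources        # hypothetical label applied to the larger nodes
  containers:
  - name: app
    image: my-registry/big-consumer:latest   # placeholder image
</code></pre>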
<p>Another option is to use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/" rel="nofollow noreferrer">Node Affinity</a>. You just need to have the <strong>Kubernetes cluster</strong>, and the K8s command line ready (<code>kubectl</code>). The steps for that are:</p>
<ul>
<li>Add a label to the node.</li>
<li>Schedule a Pod using required node affinity, or.</li>
<li>Schedule a Pod using preferred node affinity.</li>
</ul>
<p>Visit the official documentation linked above for detailed instructions and manifest examples, and see the following URL for the official Kubernetes documentation about <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>.</p>
<p>Plus, you can also set up the Pod initialization in a specific order. See this <a href="https://stackoverflow.com/questions/56935239/how-to-configure-pod-initialization-in-a-specific-order-in-kubernetes">thread</a> for the proper instructions.</p>
|
<p>I use istio-ingress gateway and virtualservice to expose different microservices. So far all of them have been http services, so it was straight-forward to follow istio's documentation.</p>
<p>But with kafka I am facing some issues. I am using <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">bitnami/kafka</a> helm chart for kafka installation. Here's the values.yaml used for it:</p>
<pre><code>global:
storageClass: "kafka-sc"
replicaCount: 3
deleteTopicEnable: true
resources:
requests:
memory: 1024Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 1000m
zookeeper:
replicaCount: 3
resources:
requests:
memory: 1024Mi
cpu: 500m
limits:
memory: 2048Mi
cpu: 1000m
</code></pre>
<p>This deployment exposes kafka on this endpoint: <code>my-kafka.kafka.svc.cluster.local:9092</code></p>
<p>I want this endpoint to be accessible via internet using ingress controller. Therefore, I applied following kubernetes manifests --></p>
<p>A. kafka-ingress-gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kafka-ingress-gateway
namespace: kafka
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 9092
name: tcp
protocol: TCP
hosts:
- "kafka.<public_domain>"
</code></pre>
<p>B. kafka-ingress-virtualservice.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kafka-ingress-virtualservice
namespace: kafka
spec:
hosts:
- "kafka.<public_domain>"
gateways:
- kafka/kafka-ingress-gateway
tcp:
- match:
- port: 9092
route:
- destination:
host: my-kafka.kafka.svc.cluster.local
port:
number: 9092
</code></pre>
<p>To verify whether this works, I am using following approach:</p>
<ol>
<li>Create a kafka-client pod and login to it in two different terminals</li>
<li>In first terminal, I produce in a topic called <code>test</code> using this command: <code>kafka-console-producer.sh --broker-list my-kafka-0.my-kafka-headless.kafka.svc.cluster.local:9092 --topic test</code></li>
<li>In second terminal, I consume in <code>test</code> topic using this command.</li>
</ol>
<p>In here, this works: <code>kafka-console-consumer.sh --bootstrap-server my-kafka.kafka.svc.cluster.local:9092 --topic test --from-beginning</code></p>
<p>This does not work: <code>kafka-console-consumer.sh --bootstrap-server kafka.<public_domain>:9092 --topic test --from-beginning</code></p>
<p>I am getting this error: <code>WARN [Consumer clientId=consumer-console-consumer-89304-1, groupId=console-consumer-89304] Bootstrap broker kafka.<public_domain>:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)</code></p>
<p>I am new to kafka, so not sure what else is required to expose the consumer endpoint. From similar questions on stackoverflow, I noticed we are supposed to <a href="https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml#L403" rel="nofollow noreferrer">define "advertisedListeners" in kafka config</a>, but not sure what value to put there.</p>
<p>Please let me know if I am missing any details here.</p>
| <p>Edit your istio-ingressgateway service and add 9092 as a TCP port:</p>
<pre><code>kubectl edit svc -nistio-system istio-ingressgateway
</code></pre>
<p>add</p>
<pre><code>- name: kafka-broker
port: 9092
protocol: TCP
targetPort: 9092
</code></pre>
|
<p>My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamically provisioned</a> by <code>pd.csi.storage.gke.io</code> and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.</p>
<p>My problem is I need to set <code>accessModes</code> to <code>ReadOnlyMany</code> and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the <code>initContainer</code>.</p>
<p>Is there a solution to this issue? <a href="https://stackoverflow.com/questions/57754103/how-to-pre-populate-a-readonlymany-persistent-volume">Answer to this question</a> suggests a good solution for a case when each pod has their own disk mounted, but I need to have one disk shared among all pods since my data is quite large.</p>
| <p><code>...I need to have one disk shared among all pods</code></p>
<p>You can try Filestore. First you create a Filestore <a href="https://cloud.google.com/filestore/docs/creating-instances" rel="nofollow noreferrer">instance</a> and save your data on a Filestore volume. Then you <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver#enabling_the_on_a_new_cluster" rel="nofollow noreferrer">install</a> the Filestore CSI driver on your cluster. Finally, you share the data with the pods that need to read it using a PersistentVolume <a href="https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/blob/master/examples/kubernetes/pre-provision/preprov-pv.yaml" rel="nofollow noreferrer">referring</a> to the Filestore instance and volume above.</p>
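<p>A rough sketch of such a pre-provisioned PersistentVolume (all names, the zone, and the IP are placeholders; verify the exact <code>volumeHandle</code> format against the linked example):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
  - ReadOnlyMany
  csi:
    driver: filestore.csi.storage.gke.io
    # volumeHandle format: "modeInstance/<location>/<instance-name>/<share-name>"
    volumeHandle: "modeInstance/us-central1-c/my-filestore/share1"
    volumeAttributes:
      ip: 10.0.0.2        # IP of the Filestore instance
      volume: share1      # file share name
</code></pre>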
|
<p>I'm using Terraform to provision an EKS cluster (mostly following the example <a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="nofollow noreferrer">here</a>). At the end of the tutorial, there's a method of outputting the configmap through the <code>terraform output</code> command, and then applying it to the cluster via <code>kubectl apply -f <file></code>. I'm attempting to wrap this <code>kubectl</code> command into the Terraform file using the <code>kubernetes_config_map</code> resource, however when running Terraform for the first time, I receive the following error:</p>
<pre><code>Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
</code></pre>
<p>The strange thing is, every subsequent <code>terraform apply</code> works and applies the configmap to the EKS cluster. This leads me to believe it is perhaps a timing issue. I tried to perform a bunch of actions in between the provisioning of the cluster and applying the configmap, but that didn't work. I also put an explicit <code>depends_on</code> argument to ensure that the cluster has been fully provisioned before attempting to apply the configmap.</p>
<pre><code>provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
</code></pre>
<p>I expect this to run correctly the first time; however, it only succeeds after applying the same file with no changes a second time.</p>
<p>I attempted to get more information by enabling the <code>TRACE</code> debug flag for terraform, however the only output I got was the exact same error as above.</p>
| <p>Well, I don't know if this is still fresh, but I was dealing with the same trouble and found this:</p>
<p><a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543</a></p>
<p>So, in other words, I changed the cluster's name in the <strong>aws_eks_cluster_auth</strong> block to a <strong>static name</strong>, and it worked. Well, perhaps this is a bug in Terraform.</p>
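<p>A sketch of what that can look like, assuming a token-based kubernetes provider configuration instead of the kubeconfig file (the resource names follow the snippet in the question; the static cluster name is a placeholder):</p>
<pre><code>data "aws_eks_cluster_auth" "this" {
  # A static name avoids the value being unknown on the first apply
  name = "my-eks-cluster"
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
</code></pre>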
|
<p>Imagine the following deployment definition in kubernetes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
env: staging
spec:
...
</code></pre>
<p>I have two questions in particular:</p>
<p>1). The label <code>env: staging</code> won't be available in the created pods. How can I access this data programmatically in <code>client-go</code>?</p>
<p>2). When a pod is created/updated, how can I find which deployment it belongs to?</p>
| <blockquote>
<p>1). the label env: staging won't be available in created pods. how can I access this data programmatically in client-go?</p>
</blockquote>
<p>You can get the <code>Deployment</code> using client-go. See the example <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="nofollow noreferrer">Create, Update & Delete Deployment</a> for operations on a <code>Deployment</code>.</p>
<blockquote>
<p>2). when pod is created/updated, how can I found which deployment it belongs to?</p>
</blockquote>
<p>When a <code>Deployment</code> is created, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> is created that manage the <code>Pods</code>.</p>
<p>See the <code>ownerReferences</code> field of a <code>Pod</code> to see what <code>ReplicaSet</code> manages it. This is described in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#how-a-replicaset-works" rel="nofollow noreferrer">How a ReplicaSet works</a></p>
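<p>For example, from the command line (the pod name is a placeholder) you can print the owning ReplicaSet with:</p>
<pre><code>kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}{" "}{.metadata.ownerReferences[0].name}{"\n"}'
</code></pre>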
|
<p>Is there a way to query the CPU requests and limits with kubectl for each container in a Kubernetes context/namespace, just as I can query CPU usage with <code>kubectl top pods</code>?</p>
| <p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">Requests and limits</a> are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get: if a container requests a resource, Kubernetes will only schedule it on a node that can provide it. Limits, on the other hand, make sure a container never goes above a certain value; the container is only allowed to go up to the limit and is then restricted. A limit can never be lower than the request.</p>
<p>As @chris said, try the following commands to get the CPU requests and limits across Kubernetes namespaces.</p>
<p>You can get the pods and their CPU requests with the following command.</p>
<pre><code>kubectl get pods --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
</code></pre>
<p>You can get the pods and their CPU Limits with the following command.</p>
<pre><code>kubectl get pods --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.limits.cpu}{'\n'}{end}{'\n'}{end}"
</code></pre>
|
<p>I have an AWS EKS cluster running in a custom VPC with 2 public and 2 private subnets. The node groups (for my backend) run in the 2 private subnets so they can't be accessed directly.</p>
<p>I would like to create an API Gateway which exposes the microservices in the node group so my front-end and third-party software can communicate with them. I would eventually also like to add authorization to the API Gateway for security. The problem is that I cannot find good documentation on how to do this (exposing the microservices through an API Gateway). Does anyone know how to do this, or where I can find information on how to do it?</p>
<p>The situation would look something like this:
<a href="https://i.stack.imgur.com/gcnDm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcnDm.png" alt="enter image description here" /></a></p>
| <p>You need to use API Gateway private integrations to expose services running in EKS through an NLB. Please check the article below for the overall solution:</p>
<p><a href="https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/</a></p>
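<p>As a rough sketch of the Kubernetes side, you expose the microservice through an internal NLB and then attach API Gateway to it via a VPC link, as the article describes. The service name, labels and ports below are assumptions for illustration, and the exact annotation values can vary with your Kubernetes and load balancer controller versions:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-service          # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: backend                 # assumed pod label
  ports:
    - port: 80
      targetPort: 8080           # assumed container port
</code></pre>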
|
<p>How do I list the Kubernetes pods which have a CharDevice in them?</p>
<p>I was able to get the pods with a CharDevice, but I need to print only the pod names.</p>
<pre><code>kubectl get pod -o jsonpath='{spec.volumes.hostPath.type=="CharDevice"}'
</code></pre>
| <p>You can get the JSON first and use <code>jq</code> to get the desired result:</p>
<pre><code>kubectl get pod -o json |
jq -r '.items[]|select(any(.spec.volumes[];.hostPath.type=="CharDevice")).metadata.name'
</code></pre>
|
<p>How do I set up the Apache Pulsar transaction coordinator in a Kubernetes setup? I couldn't find much documentation on this. I could modify the <code>broker.conf</code> ConfigMap, but I'm unsure whether that even works.</p>
| <p>The transaction coordinator is part of the broker; I think just enabling the transaction feature in the broker ConfigMap is enough.</p>
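<p>As a sketch, the relevant <code>broker.conf</code> setting is <code>transactionCoordinatorEnabled</code>. The ConfigMap name and the exact key mapping are assumptions here: they depend on how your chart or operator renders entries into <code>broker.conf</code> (some Helm charts expect a <code>PULSAR_PREFIX_</code> prefix), so adjust to your deployment:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: pulsar-broker            # hypothetical name; use your broker ConfigMap
data:
  # enables the transaction coordinator in broker.conf
  transactionCoordinatorEnabled: "true"
</code></pre>
<p>After changing the ConfigMap, restart the broker pods so the new setting is picked up; the Pulsar transactions documentation also describes initializing the transaction coordinator metadata if your cluster needs it.</p>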
|
<p>Trying to install the Kubernetes dashboard on Ubuntu 16.04 results in an x509: certificate signed by unknown authority error.</p>
<p>Kubernetes cluster with a single node is running fine and deployments are happening too.</p>
<p>I tried enabling the apiserver-host property in the kubernetes-dashboard.yaml file without any luck.</p>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml</a>
Unable to connect to the server: x509: certificate signed by unknown authority</p>
<p>Any suggestions?</p>
<p>Output from kubectl get all -n kube-system</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/calico-node-6dgkc 2/2 Running 4 4d23h
pod/calico-node-v8xjr 2/2 Running 0 2d4h
pod/coredns-fb8b8dccf-8jznp 1/1 Running 2 4d23h
pod/coredns-fb8b8dccf-pl87d 1/1 Running 2 4d23h
pod/etcd-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-apiserver-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-controller-manager-ets-kubernetes 1/1 Running 2 4d23h
pod/kube-proxy-24qjz 1/1 Running 0 2d4h
pod/kube-proxy-ccqpn 1/1 Running 2 4d23h
pod/kube-scheduler-ets-kubernetes 1/1 Running 2 4d23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/calico-typha ClusterIP 10.110.39.31 <none> 5473/TCP 4d23h
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4d23h
Error from server (Forbidden): replicationcontrollers is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicationcontrollers" in API group "" in the namespace "kube-system"
Error from server (Forbidden): daemonsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "daemonsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "deployments" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): replicasets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "replicasets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): statefulsets.apps is forbidden: User "system:node:ets-kubernetes" cannot list resource "statefulsets" in API group "apps" in the namespace "kube-system"
Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:node:ets-kubernetes" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "kube-system"
Error from server (Forbidden): jobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "jobs" in API group "batch" in the namespace "kube-system"
Error from server (Forbidden): cronjobs.batch is forbidden: User "system:node:ets-kubernetes" cannot list resource "cronjobs" in API group "batch" in the namespace "kube-system"
</code></pre>
<p>Output from kubectl get nodes</p>
<pre><code>NAME STATUS ROLES AGE VERSION
ets-kubernetes Ready master 4d23h v1.14.1
ets-node Ready <none> 2d4h v1.14.1
</code></pre>
<hr>
<p>Kubectl output.PNG
<a href="https://i.stack.imgur.com/SR4SP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SR4SP.png" alt="enter image description here"></a></p>
<p>Certificate Error.PNG
<a href="https://i.stack.imgur.com/kSs2j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kSs2j.png" alt="enter image description here"></a></p>
| <p>Had the same issue after resetting k8s to defaults while having <code>kubectl proxy</code> running.</p>
<p>Simply restarting <code>kubectl proxy</code> fixed the issue :)</p>
|
<p>I want to deploy a Redis pod which loads a list. Then I will have a Kubernetes job which executes a bash script with a variable taken from that list in Redis.</p>
<p>How can I make this Redis pod be auto-deleted when all items from the list have been used?</p>
| <p>By default, Kubernetes keeps completed jobs and their associated objects for debugging purposes, and you will lose all the logs they generated once they are deleted.</p>
<p>That being said, a job can be automatically deleted by using the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">TTL mechanism for finished Jobs</a>.</p>
<p>Here you can find an example of a job's manifest with the TTL enabled and set to delete the job and associated objects (pods, services, etc.) 100 sec after its completion:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
</code></pre>
|
<p>I have created a ConfigMap and added data to it as a key-value pair:</p>
<pre><code>data:
EXTERNAL_CONFIG_FILE: /user/app/api/config
</code></pre>
<p>I need to use this variable to set a mount path in the deployment:</p>
<pre><code>- name: config-properties-volume-mnt
mountPath: {{ $EXTERNAL_CONFIG_FILE }}
</code></pre>
<p>I am getting an undefined variable "$EXTERNAL_CONFIG_FILE" error while deploying. <strong>I do not want to define this variable in values.yaml.</strong> Is there a way I can use this variable, defined in the ConfigMap, in the deployment?</p>
| <p><strong>It's not possible to dynamically define a parameter in a manifest</strong>; you have to use Helm or Kustomize for that.</p>
<p>Alternatively, you can simply use <strong>sed</strong> to replace the text in the manifest.</p>
<p>You cannot use a <strong>ConfigMap</strong> or <strong>Secret</strong> this way: they are meant to inject variables or files into the Pod at runtime, not at declaration time.</p>
<pre><code>- name: config-properties-volume-mnt
mountPath: {{ $EXTERNAL_CONFIG_FILE }}
</code></pre>
<p>If you have a Helm chart, keeping the value in <code>values.yaml</code> is the only option, as sketched below.</p>
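<p>A minimal sketch of the Helm route, assuming a value named <code>externalConfigFile</code> (the value name is an assumption for illustration):</p>
<pre><code># values.yaml
externalConfigFile: /user/app/api/config

# templates/deployment.yaml (excerpt)
        volumeMounts:
          - name: config-properties-volume-mnt
            mountPath: {{ .Values.externalConfigFile }}
</code></pre>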
<p><strong>Subpath method :</strong></p>
<p>You can achieve this with <strong>subPath</strong>; with <strong>subPath</strong> you can use an environment variable: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-expanded-environment</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: container1
env:
- name: EXTERNAL_CONFIG_FILE
value: /user/app/api/config
image: busybox:1.28
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
volumeMounts:
- name: workdir1
mountPath: /logs
# The variable expansion uses round brackets (not curly brackets).
subPathExpr: $(EXTERNAL_CONFIG_FILE)
restartPolicy: Never
volumes:
- name: workdir1
hostPath:
path: /var/log/pods
</code></pre>
<p>Instead of the <strong>env</strong> used in the above example, you can populate the variable from the <strong>ConfigMap</strong>.</p>
|
<p>Is it possible for a pod to act like a spawner? When someone calls the API service in the first pod, it should spawn a new pod. This seems like a very simple thing, but I can't really figure out where to look in the docs. Someone already mentioned using operators, but I don't really see how that would help me.</p>
<p>I'm currently migrating a project which uses Docker as a spawner to create other containers. I somehow need this principle to work with Kubernetes pods.</p>
<p>Kind regards</p>
| <p>Have you looked into Kubespawner, part of JupyterHub?</p>
<p>I have been trying to find alternatives to Kubespawner and Kubernetes Operators might be the answer. <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/extend-kubernetes/operator/</a></p>
<p>Good luck!</p>
|
<p>I'm not quite sure which of the following approaches is better for creating a controller in Kubernetes; however, I know that:</p>
<ul>
<li>I don't want to create a custom resource by any means.</li>
<li>I only want to fetch information about Kubernetes native resources (pods, ...), given that there might be a lot of pods in each namespace</li>
</ul>
<p>I have seen some patterns like:</p>
<pre><code>ctrl, err := controller.New("name-here", mgr, controller.Options{
Reconciler: &ReconcilePod{Client: mgr.GetClient(), Logger: log},
})
</code></pre>
<p>where <code>ReconcilePod</code> is a struct with a <code>Reconcile</code> function that holds the whole business logic.</p>
<p>Another approach I have seen is the following:</p>
<pre><code>type Controller struct {
indexer cache.Indexer
queue workqueue.RateLimitingInterface
informer cache.Controller
}
</code></pre>
<p>and then defining <code>shared informer</code> and <code>watcher</code> etc.
And the third pattern that I have seen is using <code>operators</code></p>
<p>what I don't get perhaps is what is the main differences between mentioned approaches above and which one fits my need at scale.</p>
| <p>If you don't want to "control" anything, there is no need to create a <em>controller</em>.</p>
<p>If you just want to "read" and "watch" resources, you can use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and see e.g. <a href="https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/" rel="nofollow noreferrer">Extend Kubernetes via a shared informer</a> for inspiration about how to <em>read</em> and <em>watch</em> resources.</p>
<blockquote>
<p>To stay informed about when these events get triggered you can use a primitive exposed by Kubernetes and the client-go called SharedInformer, inside the cache package. Let’s see how it works in practice.</p>
</blockquote>
<p>Controllers are more complex and contain a <em>reconciliation loop</em>, since they must realize/manage a <em>desired state</em>.</p>
<p>An "operator" is a <em>controller</em> as well.</p>
|
<p>My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamically provisioned</a> by <code>pd.csi.storage.gke.io</code> and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.</p>
<p>My problem is I need to set <code>accessModes</code> to <code>ReadOnlyMany</code> and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the <code>initContainer</code>.</p>
<p>Is there a solution to this issue? <a href="https://stackoverflow.com/questions/57754103/how-to-pre-populate-a-readonlymany-persistent-volume">Answer to this question</a> suggests a good solution for a case when each pod has their own disk mounted, but I need to have one disk shared among all pods since my data is quite large.</p>
| <p>With GKE 1.21 and later, you can enable the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver" rel="nofollow noreferrer">managed Filestore CSI driver</a> in your clusters. You can enable the driver for new clusters</p>
<pre><code>gcloud container clusters create CLUSTER_NAME \
--addons=GcpFilestoreCsiDriver ...
</code></pre>
<p>or update existing clusters:</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=ENABLED
</code></pre>
<p>Once you've done that, create a storage class (or have your platform admin do it):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
tier: standard
network: default
</code></pre>
<p>After that, you can use PVCs and dynamic provisioning:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
spec:
accessModes:
- ReadWriteMany
storageClassName: filestore-example
resources:
requests:
storage: 1Ti
</code></pre>
|
<p>I can't get Kubernetes to pull my private docker image.</p>
<p>I have a kubernetes secret called <code>regcred</code> in my <code>default</code> namespace looking like:</p>
<pre class="lang-json prettyprint-override"><code>{"auths": {"index.docker.io": {"username": "Username", "password": "Password", "email": "[email protected]", "auth": "VXNlcm5hbWU6UGFzc3dvcmQ="}}}
</code></pre>
<p>The auth is <code>Username:Password</code> b64 encoded.</p>
<p>This is the pod I am trying to create which just fails everytime with <code>ImagePullBackOff</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: index.docker.io/jor2/hello-world:latest
imagePullSecrets:
- name: regcred
</code></pre>
<p>logs:</p>
<pre class="lang-sh prettyprint-override"><code>Error from server (BadRequest): container "private-reg-container" in pod "private-reg" is waiting to start: image can't be pulled
</code></pre>
<pre class="lang-sh prettyprint-override"><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 67s default-scheduler Successfully assigned default/private-reg to 10.144.195.222
Normal Pulling 20s (x3 over 66s) kubelet Pulling image "index.docker.io/jor2/hello-world"
Warning Failed 17s (x3 over 63s) kubelet Failed to pull image "index.docker.io/jor2/hello-world": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/jor2/hello-world:latest": failed to unpack image on snapshotter overlayfs: unexpected media type text/html for sha256:361849f1befca621043d13cca4e9bee74861754154556e149340a455e53d1d35: not found
Warning Failed 17s (x3 over 63s) kubelet Error: ErrImagePull
Normal BackOff 3s (x3 over 63s) kubelet Back-off pulling image "index.docker.io/jor2/hello-world"
Warning Failed 3s (x3 over 63s) kubelet Error: ImagePullBackOff
</code></pre>
<p>The secret looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6IHsiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyJ1c2VybmFtZSI6ICJVc2VybmFtZSIsICJwYXNzd29yZCI6ICJQYXNzd29yZCIsICJlbWFpbCI6ICJyYW5kb21AZW1haWwuY29tIiwgImF1dGgiOiAiVlhObGNtNWhiV1U2VUdGemMzZHZjbVE9In19fQo=
kind: Secret
metadata:
creationTimestamp: "2022-04-25T18:59:03Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:.dockerconfigjson: {}
f:type: {}
manager: kubectl-create
operation: Update
time: "2022-04-25T18:59:03Z"
name: regcred
namespace: default
resourceVersion: "270226"
uid: bbb1310b-421a-4c93-8488-498e7a52095f
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>Any idea where I am going wrong?</p>
| <p>It was a problem with my image and the pod spec needed to be:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: jor2/hello-world:latest
imagePullSecrets:
- name: regcred
</code></pre>
|
<p>I have the file <code>example-workflow-cowsay.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: hello-world-
spec:
entrypoint: whalesay
templates:
- name: whalesay
container:
image: docker/whalesay
command: [cowsay]
args: ["hello world"]
resources:
limits:
memory: 32Mi
cpu: 100m
</code></pre>
<p>I can submit this successfully like this: <code>argo submit -n workflows apps/workflows/example-workflow-cowsay.yml</code>.</p>
<p>Can I get the same thing done using <code>kubectl</code> directly? I tried the below but it fails:</p>
<pre><code>$ k apply -n workflows -f apps/workflows/example-workflow-cowsay.yml
error: from hello-world-: cannot use generate name with apply
</code></pre>
| <p>Yes, it's right there in the <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#argo-cli" rel="nofollow noreferrer">readme</a> (<a href="https://github.com/argoproj/argo-workflows/blob/37a8a81df1d7ef3067596199f96974d31b200b88/examples/README.md#argo-cli" rel="nofollow noreferrer">version at the time of answering</a>).</p>
<p><code>kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml</code> did the job.</p>
<hr />
<p>To elaborate a bit: This makes sense, as what I was trying to "apply" was a single run of a workflow (think an object instance rather than a class). If I'd tried to apply a CronWorkflow, then <code>kubectl apply</code> would have worked. The error message that I got:</p>
<pre><code>error: from hello-world-: cannot use generate name with apply
</code></pre>
<p>Told me about it, but I didn't understand it at the time. This is invalid:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
generateName: some-name
...
</code></pre>
<p>But this is valid:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
name: some-name
...
</code></pre>
|
<p>A Streamlit server (run e.g. with <code>streamlit run app-name.py</code>) requires interaction from each user upon first run: accepting or dismissing this intro message:</p>
<pre><code> Welcome to Streamlit!
If you are one of our development partners or are interested in
getting personal technical support, please enter your email address
below. Otherwise, you may leave the field blank.
Email:
</code></pre>
<p>Streamlit then waits for input, failing to daemonize (run in the background), which quickly causes Docker containers in Kubernetes or OpenShift to crash (entering the famous CrashLoopBackOff state).</p>
<p>How to avoid such crashing, by running Streamlit in headless/noninteractive mode?</p>
| <p>In the <a href="https://docs.streamlit.io/library/advanced-features/configuration#set-configuration-options" rel="noreferrer">Streamlit documentation</a> you can find a detailed list of configurable options.
Scroll down and you will find the headless option.
Just edit the config file:</p>
<p><code>headless = true</code></p>
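<p>For a containerized deployment, a minimal sketch of the config file (typically <code>~/.streamlit/config.toml</code>; check the docs for the exact path in your setup) would be:</p>
<pre><code>[server]
headless = true
</code></pre>
<p>The same option can usually also be passed on the command line as <code>streamlit run app-name.py --server.headless true</code>, or via the <code>STREAMLIT_SERVER_HEADLESS=true</code> environment variable, which tends to be the most convenient form in Kubernetes or OpenShift manifests.</p>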
<p>Please also refer to <a href="https://discuss.streamlit.io/t/supress-new-browser-window-once-app-is-started-execution/2018/2?u=yochanan_scharf" rel="noreferrer">this post</a>.</p>
|
<p>I'm having difficulty trying to get kustomize to replace contents of an item in a list.</p>
<p>My kustomize file</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
patches:
- patch.yaml
</code></pre>
<p>My patch.yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
template:
spec:
initContainers:
- name: web-service-migration
env:
- name: PG_DATABASE
value: web-pgdb
</code></pre>
<p>My resource.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
template:
spec:
initContainers:
- name: web-service-migration
env:
- name: PG_DATABASE
valueFrom:
secretKeyRef:
name: web-pgdb
key: database
</code></pre>
<p>kustomize build returns</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
template:
spec:
initContainers:
- env:
- name: PG_DATABASE
value: web-pgdb
valueFrom:
secretKeyRef:
key: database
name: web-pgdb
name: web-service-migration
</code></pre>
<p>what i want kustomize build to return</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
template:
spec:
initContainers:
- env:
- name: PG_DATABASE
value: web-pgdb
name: web-service-migration
</code></pre>
| <p>If I remember correctly, patches in kustomize use a strategic merge by default, so you need to nullify <code>valueFrom</code>; your patch should look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service
spec:
template:
spec:
initContainers:
- name: web-service-migration
env:
- name: PG_DATABASE
value: web-pgdb
valueFrom: null
</code></pre>
<p>More details about strategic merge patch and how to delete maps: <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps" rel="noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps</a></p>
|
<p>Is there a way to ensure that pods are scaled one at a time when setting replicas greater than one?</p>
<p>Example: Replica set to 3</p>
<ol>
<li>Pod 1 - Initializing , pod 2 - Waiting, pod 3 - Waiting</li>
<li>Pod 1 - Running , pod 2 - Initializing, pod 3 - Waiting</li>
<li>Pod 1 - Running , pod 2 - Running, pod 3 - Initializing</li>
<li>Pod 1 - Running , pod 2 - Running, pod 3 - Running</li>
</ol>
| <p>You can accomplish this behavior using <code>StatefulSets</code>. As the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">Kubernetes docs</a> state:</p>
<blockquote>
<ul>
<li>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</li>
<li>When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.</li>
<li>Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.</li>
<li>Before a Pod is terminated, all of its successors must be completely shutdown.</li>
</ul>
</blockquote>
<p>So, as you can see, a new pod is not booted up until the previous one is Running and Ready.</p>
<p><strong>Note:</strong> this behavior is guaranteed by Kubernetes when the <code>OrderedReady</code> pod management policy is used (which is the default).</p>
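<p>A minimal sketch of a StatefulSet relying on this ordered startup; the names, labels and image are placeholders, and a matching headless Service is assumed to exist:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"                  # headless Service assumed to exist
  replicas: 3
  podManagementPolicy: OrderedReady   # default; pods start one at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21           # placeholder image
          ports:
            - containerPort: 80
</code></pre>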
|
<p>I have an up&running SolrCloud v8.11 cluster on Kubernetes, with solr-operator.</p>
<p>The backup is enabled on S3 bucket.</p>
<p>How can I correctly write the request to perform a <code>RESTORE</code> of a backup stored in a S3 bucket?</p>
<p>I'm unable to figure out what the <code>location</code> and the <code>snapshotName</code> that I have to provide in the <code>Restore API</code> request made to Solr should be.</p>
<p>In order to discover those values, I tried to execute the <code>LISTBACKUP</code> action, but in this case the <code>location</code> value also seems to be wrong...</p>
<pre class="lang-sh prettyprint-override"><code>$ curl https://my-solrcloud.example.org/solr/admin/collections\?action=LISTBACKUP\&name=collection-name\&repository=collection-backup\&location=my-s3-bucket/collection-backup
{
"responseHeader":{
"status":400,
"QTime":70},
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"specified location s3:///my-s3-bucket/collection-backup/ does not exist.",
"code":400}}
## The Log in cluster writes:
org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist. => org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
</code></pre>
<p>After all, the recurring backup works as expected, but sooner or later a <code>RESTORE action</code> will have to be performed, and it's not clear how it could be done correctly.</p>
<p>Thank you in advance.</p>
| <p>A bit late, but I came across this question while searching for the same answer. There was <a href="https://lists.apache.org/[email protected]:2022-2:S3%20backup" rel="nofollow noreferrer">a thread on the mailing list</a> that helped me to figure out how this is supposed to work.</p>
<p>I found the documentation on this pretty confusing, but the <code>location</code> seems to be <em>relative to the backup repository</em>. So, the <code>repository</code> argument already accounts for the bucket name, and the <code>name</code> argument would be the name of the backup you are attempting to list. Solr then builds the S3 path as <code>{repository bucket} + {location} + {backup name}</code>. So, location should simply be: <code>/</code></p>
<p>Assume you've set up a <code>backupRepository</code> for the SolrCloud deployment like the following:</p>
<pre><code>backupRepositories:
- name: "my-backup-repo"
s3:
region: "us-east-1"
bucket: "my-s3-bucket"
</code></pre>
<p>and you have created a SolrBackup like the following:</p>
<pre><code>---
apiVersion: solr.apache.org/v1beta1
kind: SolrBackup
metadata:
name: "my-collection-backup"
spec:
repositoryName: "my-backup-repo"
solrCloud: "my-solr-cloud"
collections:
- "my-collection"
</code></pre>
<p>The full cURL command for LISTBACKUP would be:</p>
<pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=LISTBACKUP \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/
</code></pre>
<p>Similarly for the RESTORE command:</p>
<pre><code>$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=RESTORE \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/ \
-d collection=my-collection-restore
</code></pre>
|
<p>I'm setting up a CI/CD pipeline for deploying a Kubernetes-based application in an automated way. Part of this deployment involves creating other service accounts and their associated roles.</p>
<p>When my pipeline runs, deployment fails with this error message:</p>
<pre><code>Error: roles.rbac.authorization.k8s.io "mongodb-kubernetes-operator" is forbidden: user "[email protected]" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
│ {APIGroups:[""], Resources:["configmaps"], Verbs:["list" "watch" "create" "update" "patch" "get" "delete"]}
│ {APIGroups:[""], Resources:["pods"], Verbs:["list" "watch" "create" "update" "patch" "get" "delete"]}
│ {APIGroups:[""], Resources:["secrets"], Verbs:["list" "watch" "create" "update" "patch" "get" "delete"]}
│ {APIGroups:[""], Resources:["services"], Verbs:["list" "watch" "create" "update" "patch" "get" "delete"]}
│ {APIGroups:["apps"], Resources:["statefulsets"], Verbs:["list" "watch" "create" "update" "patch" "get" "delete"]}
│ {APIGroups:["mongodbcommunity.mongodb.com"], Resources:["mongodbcommunity"], Verbs:["list" "watch" "update" "patch" "get"]}
│ {APIGroups:["mongodbcommunity.mongodb.com"], Resources:["mongodbcommunity/finalizers"], Verbs:["list" "watch" "update" "patch" "get"]}
│ {APIGroups:["mongodbcommunity.mongodb.com"], Resources:["mongodbcommunity/spec"], Verbs:["list" "watch" "update" "patch" "get"]}
│ {APIGroups:["mongodbcommunity.mongodb.com"], Resources:["mongodbcommunity/status"], Verbs:["list" "watch" "update" "patch" "get"]}
│
│ with module.db_document.kubernetes_role.operator_mongodb,
│ on modules/db_document/main.tf line 17, in resource "kubernetes_role" "operator_mongodb":
│ 17: resource "kubernetes_role" "operator_mongodb" {
│
</code></pre>
<p>The error seemed straightforward enough: my service account can't grant permissions that it does not have. Since the error message mentioned my GCP service account, <code>[email protected]</code>, I added to my role definition what I believed to be the matching permissions.</p>
<p>Below is my resulting role. It has create, delete, get, list, and update permissions for configMaps, pods, secrets, services, statefulsets , and thirdPartyObjects, which I believed should cover the requirements.</p>
<pre><code>resource "google_project_iam_custom_role" "cicd_bot_role" {
project = var.project
role_id = "cicd_bot"
title = "CICD Bot"
permissions = [
"artifactregistry.repositories.downloadArtifacts",
"artifactregistry.repositories.uploadArtifacts",
"compute.instanceGroupManagers.get",
"container.clusters.get",
"container.configMaps.create",
"container.configMaps.delete",
"container.configMaps.get",
"container.configMaps.list",
"container.configMaps.update",
"container.cronJobs.create",
"container.cronJobs.delete",
"container.cronJobs.get",
"container.cronJobs.update",
"container.customResourceDefinitions.create",
"container.customResourceDefinitions.delete",
"container.customResourceDefinitions.get",
"container.customResourceDefinitions.list",
"container.customResourceDefinitions.update",
"container.deployments.create",
"container.deployments.delete",
"container.deployments.get",
"container.deployments.update",
"container.ingresses.create",
"container.ingresses.delete",
"container.ingresses.get",
"container.ingresses.update",
"container.jobs.create",
"container.jobs.delete",
"container.jobs.get",
"container.jobs.update",
"container.namespaces.get",
"container.persistentVolumeClaims.create",
"container.persistentVolumeClaims.delete",
"container.persistentVolumeClaims.get",
"container.persistentVolumeClaims.update",
"container.pods.create",
"container.pods.delete",
"container.pods.get",
"container.pods.list",
"container.pods.update",
"container.roleBindings.create",
"container.roleBindings.delete",
"container.roleBindings.get",
"container.roleBindings.update",
"container.roles.create",
"container.roles.delete",
"container.roles.get",
"container.roles.update",
"container.secrets.create",
"container.secrets.delete",
"container.secrets.get",
"container.secrets.list",
"container.secrets.update",
"container.serviceAccounts.create",
"container.serviceAccounts.delete",
"container.serviceAccounts.get",
"container.serviceAccounts.update",
"container.services.create",
"container.services.delete",
"container.services.get",
"container.services.list",
"container.services.update",
"container.statefulSets.create",
"container.statefulSets.delete",
"container.statefulSets.get",
"container.statefulSets.list",
"container.statefulSets.update",
"container.thirdPartyObjects.create",
"container.thirdPartyObjects.delete",
"container.thirdPartyObjects.get",
"container.thirdPartyObjects.list",
"container.thirdPartyObjects.update",
"dns.changes.create",
"dns.changes.get",
"dns.resourceRecordSets.get",
"dns.resourceRecordSets.list",
"dns.resourceRecordSets.update",
"storage.buckets.get",
"storage.objects.create",
"storage.objects.delete",
"storage.objects.get",
"storage.objects.list",
]
}
</code></pre>
<p>However, after deploying this, the error remained the same. I wondered if adding equivalent permissions on the kubernetes side was necessary, so I created the following ClusterRole and ClusterRoleBinding, too.</p>
<pre><code>resource "kubernetes_cluster_role" "cicd_bot" {
metadata {
name = kubernetes_service_account.cicd_bot.metadata[0].name
}
rule {
api_groups = [""]
resources = ["namespaces"]
verbs = ["create", "delete", "get"]
}
rule {
api_groups = [""]
resources = ["configmaps"]
verbs = ["list", "watch", "create", "update", "patch", "get", "delete"]
}
rule {
api_groups = [""]
resources = ["pods"]
verbs = ["list", "watch", "create", "update", "patch", "get", "delete"]
}
rule {
api_groups = [""]
resources = ["secrets"]
verbs = ["list", "watch", "create", "update", "patch", "get", "delete"]
}
rule {
api_groups = [""]
resources = ["services"]
verbs = ["list", "watch", "create", "update", "patch", "get", "delete"]
}
rule {
api_groups = ["apps"]
resources = ["statefulsets"]
verbs = ["list", "watch", "create", "update", "patch", "get", "delete"]
}
rule {
api_groups = ["mongodbcommunity.mongodb.com"]
resources = ["mongodbcommunity"]
verbs = ["list", "watch", "update", "patch", "get"]
}
rule {
api_groups = ["mongodbcommunity.mongodb.com"]
resources = ["mongodbcommunity/finalizers"]
verbs = ["list", "watch", "update", "patch", "get"]
}
rule {
api_groups = ["mongodbcommunity.mongodb.com"]
resources = ["mongodbcommunity/spec"]
verbs = ["list", "watch", "update", "patch", "get"]
}
rule {
api_groups = ["mongodbcommunity.mongodb.com"]
resources = ["mongodbcommunity/status"]
verbs = ["list", "watch", "update", "patch", "get"]
}
}
resource "kubernetes_cluster_role_binding" "cicd_bot" {
metadata {
name = kubernetes_service_account.cicd_bot.metadata[0].name
}
subject {
kind = "ServiceAccount"
namespace = kubernetes_service_account.cicd_bot.metadata[0].namespace
name = kubernetes_service_account.cicd_bot.metadata[0].name
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = kubernetes_cluster_role.cicd_bot.metadata[0].name
}
}
</code></pre>
<p>Unfortunately, the pipeline still fails with the same error. I've been able to overcome a similar error in the past, but not this time. What am I missing?</p>
<p>UPDATE: I was able to deploy successfully by attaching the role <code>roles/container.admin</code> to my service account. So now I need to figure out which permission <code>roles/container.admin</code> has that my custom role does not.</p>
| <p>Sadly, the one permission that was missing was</p>
<pre><code>container.roles.escalate
</code></pre>
<p>Even including every other <code>container.*</code> permission was insufficient; <code>container.roles.escalate</code> was still needed.</p>
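<p>In the Terraform role above, that means adding <code>"container.roles.escalate"</code> to the <code>permissions</code> list. The equivalent one-off change via gcloud should look roughly like this (the flag is from memory, so double-check it against your gcloud version; the project ID is a placeholder):</p>
<pre><code>gcloud iam roles update cicd_bot \
  --project=YOUR_PROJECT_ID \
  --add-permissions=container.roles.escalate
</code></pre>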
<p>This is unfortunate because it makes the cluster more vulnerable to permissions escalation attacks. If there is a safer way to accomplish this, I'd love to hear it. I won't mark my own answer as "correct" because I'm not satisfied with it. But hey, at least it's working...</p>
|
<p>Copied from here: <a href="https://github.com/kubeflow/pipelines/issues/7608" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/issues/7608</a></p>
<p>I have a generated code file that runs against Kubeflow. It ran fine on Kubeflow v1, and now I'm moving it to Kubeflow v2. When I do this, I get the following error:
<code>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)</code></p>
<p>I honestly don't even know where to go next. It feels like something is fundamentally broken for something to fail in the first character, but I can't see it (it's inside the kubeflow execution).</p>
<p>Thanks!</p>
<hr />
<h3>Environment</h3>
<ul>
<li><p>How did you deploy Kubeflow Pipelines (KFP)?
Standard deployment to AWS</p>
</li>
<li><p>KFP version:
1.8.1</p>
</li>
<li><p>KFP SDK version:
1.8.12</p>
</li>
</ul>
<p>Here's the logs:</p>
<pre><code>time="2022-04-26T17:38:09.547Z" level=info msg="capturing logs" argo=true
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead:
https://pip.pypa.io/warnings/venv
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Looking for component `run_info_fn` in --component_module_path `/tmp/tmp.NJW6PWXpIt/ephemeral_component.py`
[KFP Executor 2022-04-26 17:38:24,691 INFO]: Loading KFP component "run_info_fn" from /tmp/tmp.NJW6PWXpIt/ephemeral_component.py (directory "/tmp/tmp.NJW6PWXpIt" and module name "ephemeral_component")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 104, in <module>
executor_main()
File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 94, in executor_main
executor_input = json.loads(args.executor_input)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
time="2022-04-26T17:38:24.803Z" level=error msg="cannot save artifact /tmp/outputs/run_info/data" argo=true error="stat /tmp/outputs/run_info/data: no such file or directory"
Error: exit status 1
</code></pre>
<p>Here are the files to repro:
root_pipeline_04d99580c84b47c28405a2c8bcae8703.py</p>
<pre><code>import kfp.v2.components
from kfp.v2.dsl import InputPath
from kubernetes.client.models import V1EnvVar
from kubernetes import client, config
from typing import NamedTuple
from base64 import b64encode
import kfp.v2.dsl as dsl
import kubernetes
import json
import kfp
from run_info import run_info_fn
from same_step_000_ce6494722c474dd3b8bef482bb976557 import same_step_000_ce6494722c474dd3b8bef482bb976557_fn
run_info_comp = kfp.v2.dsl.component(
func=run_info_fn,
packages_to_install=[
"kfp",
"dill",
],
)
same_step_000_ce6494722c474dd3b8bef482bb976557_comp = kfp.v2.dsl.component(
func=same_step_000_ce6494722c474dd3b8bef482bb976557_fn,
base_image="public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.5.0",
packages_to_install=[
"dill",
"requests",
# TODO: make this a loop
],
)
@kfp.dsl.pipeline(name="root_pipeline_compilation",)
def root(
context: str='', metadata_url: str='',
):
# Generate secrets (if not already created)
secrets_by_env = {}
env_vars = {
}
run_info = run_info_comp(run_id=kfp.dsl.RUN_ID_PLACEHOLDER)
same_step_000_ce6494722c474dd3b8bef482bb976557 = same_step_000_ce6494722c474dd3b8bef482bb976557_comp(
input_context_path="",
run_info=run_info.outputs["run_info"],
metadata_url=metadata_url
)
same_step_000_ce6494722c474dd3b8bef482bb976557.execution_options.caching_strategy.max_cache_staleness = "P0D"
for k in env_vars:
same_step_000_ce6494722c474dd3b8bef482bb976557.add_env_variable(V1EnvVar(name=k, value=env_vars[k]))
</code></pre>
<p>run_info.py</p>
<pre><code>"""
The run_info component fetches metadata about the current pipeline execution
from kubeflow and passes it on to the user code step components.
"""
from typing import NamedTuple
def run_info_fn(
run_id: str,
) -> NamedTuple("RunInfoOutput", [("run_info", str),]):
from base64 import urlsafe_b64encode
from collections import namedtuple
import datetime
import base64
import dill
import kfp
client = kfp.Client(host="http://ml-pipeline:8888")
run_info = client.get_run(run_id=run_id)
run_info_dict = {
"run_id": run_info.run.id,
"name": run_info.run.name,
"created_at": run_info.run.created_at.isoformat(),
"pipeline_id": run_info.run.pipeline_spec.pipeline_id,
}
# Track kubernetes resources associated wth the run.
for r in run_info.run.resource_references:
run_info_dict[f"{r.key.type.lower()}_id"] = r.key.id
# Base64-encoded as value is visible in kubeflow ui.
output = urlsafe_b64encode(dill.dumps(run_info_dict))
return namedtuple("RunInfoOutput", ["run_info"])(
str(output, encoding="ascii")
)
</code></pre>
<p>same_step_000_ce6494722c474dd3b8bef482bb976557.py</p>
<pre><code>import kfp
from kfp.v2.dsl import component, Artifact, Input, InputPath, Output, OutputPath, Dataset, Model
from typing import NamedTuple
def same_step_000_ce6494722c474dd3b8bef482bb976557_fn(
input_context_path: InputPath(str),
output_context_path: OutputPath(str),
run_info: str = "gAR9lC4=",
metadata_url: str = "",
):
from base64 import urlsafe_b64encode, urlsafe_b64decode
from pathlib import Path
import datetime
import requests
import tempfile
import dill
import os
input_context = None
with Path(input_context_path).open("rb") as reader:
input_context = reader.read()
# Helper function for posting metadata to mlflow.
def post_metadata(json):
if metadata_url == "":
return
try:
req = requests.post(metadata_url, json=json)
req.raise_for_status()
except requests.exceptions.HTTPError as err:
print(f"Error posting metadata: {err}")
# Move to writable directory as user might want to do file IO.
# TODO: won't persist across steps, might need support in SDK?
os.chdir(tempfile.mkdtemp())
# Load information about the current experiment run:
run_info = dill.loads(urlsafe_b64decode(run_info))
# Post session context to mlflow.
if len(input_context) > 0:
input_context_str = urlsafe_b64encode(input_context)
post_metadata(
{
"experiment_id": run_info["experiment_id"],
"run_id": run_info["run_id"],
"step_id": "same_step_000",
"metadata_type": "input",
"metadata_value": input_context_str,
"metadata_time": datetime.datetime.now().isoformat(),
}
)
# User code for step, which we run in its own execution frame.
user_code = f"""
import dill
# Load session context into global namespace:
if { len(input_context) } > 0:
dill.load_session("{ input_context_path }")
{dill.loads(urlsafe_b64decode("gASVGAAAAAAAAACMFHByaW50KCJIZWxsbyB3b3JsZCIplC4="))}
# Remove anything from the global namespace that cannot be serialised.
# TODO: this will include things like pandas dataframes, needs sdk support?
_bad_keys = []
_all_keys = list(globals().keys())
for k in _all_keys:
try:
dill.dumps(globals()[k])
except TypeError:
_bad_keys.append(k)
for k in _bad_keys:
del globals()[k]
# Save new session context to disk for the next component:
dill.dump_session("{output_context_path}")
"""
# Runs the user code in a new execution frame. Context from the previous
# component in the run is loaded into the session dynamically, and we run
# with a single globals() namespace to simulate top-level execution.
exec(user_code, globals(), globals())
# Post new session context to mlflow:
with Path(output_context_path).open("rb") as reader:
context = urlsafe_b64encode(reader.read())
post_metadata(
{
"experiment_id": run_info["experiment_id"],
"run_id": run_info["run_id"],
"step_id": "same_step_000",
"metadata_type": "output",
"metadata_value": context,
"metadata_time": datetime.datetime.now().isoformat(),
}
)
</code></pre>
<p>Python file to execute to run:</p>
<pre><code>from sameproject.ops import helpers
from pathlib import Path
import importlib
import kfp
def deploy(compiled_path: Path, root_module_name: str):
with helpers.add_path(str(compiled_path)):
kfp_client = kfp.Client() # only supporting 'kubeflow' namespace
root_module = importlib.import_module(root_module_name)
return kfp_client.create_run_from_pipeline_func(
root_module.root,
arguments={},
)
</code></pre>
| <p>It turns out this happens when the pipeline is not compiled with the right execution mode.</p>
<p>If you're getting this error, your compile call should look like this:</p>
<pre><code>Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(pipeline_func=root_module.root, package_path=str(package_yaml_path))
</code></pre>
|
<p>How do I modify the Istio meshConfig access log format when <code>accessLogEncoding</code> is set to JSON? Changing the format as <a href="https://istio.io/latest/docs/tasks/observability/logs/access-log/" rel="nofollow noreferrer">described in the docs</a> didn't work.</p>
<pre><code>spec:
meshConfig:
accessLogFile: /dev/stdout
accessLogEncoding: JSON
accessLogFormat: '[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS%\"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SER'
</code></pre>
| <p>It works! This is what I needed:</p>
<pre><code>spec:
meshConfig:
accessLogFile: /dev/stdout
accessLogEncoding: JSON
accessLogFormat: |
{
"protocol": "%PROTOCOL%",
"upstream_service_time": "%REQ(x-envoy-upstream-service-time)%",
"upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
"duration": "%DURATION%",
"upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
"route_name": "%ROUTE_NAME%",
"downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
"user_agent": "%REQ(USER-AGENT)%",
"response_code": "%RESPONSE_CODE%",
"response_flags": "%RESPONSE_FLAGS%",
"start_time": "%START_TIME%",
"method": "%REQ(:METHOD)%",
"request_id": "%REQ(X-REQUEST-ID)%",
"upstream_host": "%UPSTREAM_HOST%",
"x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
"client_ip": "%REQ(True-Client-Ip)%",
"requested_server_name": "%REQUESTED_SERVER_NAME%",
"bytes_received": "%BYTES_RECEIVED%",
"bytes_sent": "%BYTES_SENT%",
"upstream_cluster": "%UPSTREAM_CLUSTER%",
"downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
"authority": "%REQ(:AUTHORITY)%",
"path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
"response_code_details": "%RESPONSE_CODE_DETAILS%"
}
</code></pre>
|
<p>How can I define multiple secrets in one file?
It seems that providing multiple secrets doesn't work.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: ca-secret
labels:
app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
tls.crt: LS0tLDR
tls.key: LS0tLDR
apiVersion: v1
kind: Secret
metadata:
name: envoy-secret
labels:
app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
tls.crt: LS0tLS1
tls.key: LS0tLS1
</code></pre>
<p>I am not able to use multiple files because I need to generate a single template using helm.</p>
| <p>You can have separate manifests in one yaml file by separating them with <code>---</code>. This will work:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: ca-secret
labels:
app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
tls.crt: LS0tLDR
tls.key: LS0tLDR
---
apiVersion: v1
kind: Secret
metadata:
name: envoy-secret
labels:
app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
tls.crt: LS0tLS1
tls.key: LS0tLS1
</code></pre>
|
<p>I noticed strange behavior after upgrading Laravel from 8 to 9. I have a scheduler that executes some jobs.</p>
<p>Kernel.php:</p>
<pre class="lang-php prettyprint-override"><code>$schedule->job(new ImportAzApplications, 'imports')
->everyFiveMinutes()
->onOneServer()
->onFailure(function () {
Log::error('Scheduled "ImportAzApplications" failed');
});
$schedule->job(new ImportServicePrincipals, 'imports')
->everyFiveMinutes()
->onOneServer()
->onFailure(function () {
Log::error('Scheduled "ImportServicePrincipals" failed');
});
</code></pre>
<p>The scheduler is executed by a cron job in Kubernetes; the <code>artisan schedule:run</code> command is executed every 5 seconds.</p>
<p>Logs:</p>
<pre><code>[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportServicePrincipals
[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportAzApplications
</code></pre>
<p>Now I would expect to have two jobs in the <code>imports</code> queue. And I do, but only on my dev machine, not on the staging server.</p>
<p>One of the Jobs look like this:</p>
<pre class="lang-php prettyprint-override"><code>class ImportAzApplications implements ShouldQueue, ShouldBeUnique
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
/**
* Create a new job instance.
*
* @return void
*/
public function __construct()
{
}
/**
* Execute the job.
*
* @return void
*/
public function handle()
{
$instance = new AzureApplications();
$params = $instance->azApplicationImportParams();
try {
$instance->import($params);
Log::debug('ImportAzApplications: Processing AzureApplications::importAzApplications');
} catch (Exception $exception) {
Log::error('ImportAzApplications: '.$exception->getMessage(), $params);
}
}
}
</code></pre>
<ul>
<li>I enabled debug log-level to fetch debug logs</li>
<li>I ran unit tests successfully locally</li>
<li>I validated the scheduler workflow on the staging server with <code>artisan schedule:list</code>, <code>artisan schedule:run</code>, <code>artisan schedule:test</code> and <code>artisan queue:work --queue=imports</code></li>
<li>I monitored the database live via <code>tinker</code>: <code>DB::table('jobs')->get()</code> and <code>DB::table('failed_jobs')->get()</code>. Both remain empty</li>
<li>I ran the jobs manually in tinker with <code>(new ImportAzApplications)->handle()</code>, successfully</li>
<li>I executed a different job (export task) in the same queue successfully</li>
</ul>
<p>I'm pretty sure it's something super simple that I can't see at the moment, but for now I'm running out of ideas and hope someone has further suggestions.</p>
<p>Many thanks</p>
<h1>Edited</h1>
<p>During the weekend, some magic happened and the jobs were executed, according to the new logs. I'm going to investigate this voodoo magic further on Kubernetes.</p>
| <p>I have to apologize to the community. The problem was actually caused by a pipeline workflow.</p>
<p>I run <code>laravel</code> on <code>kubernetes</code>. There are multiple pods for different purposes:</p>
<h2><code>app</code></h2>
<p>the actual Laravel application</p>
<h2><code>queue</code></h2>
<p>the same container image with a slightly different configuration, to process the jobs in the queues using the <code>php artisan queue:work --queue=imports</code> command</p>
<h2><code>jobs</code></h2>
<p>a Kubernetes cron job that executes the scheduled jobs defined in <code>App\Console\Kernel</code>, using the <code>php artisan schedule:run</code> command, every minute</p>
<p>Once a new release is pushed to the registry, a <code>kubectl set env</code> command sets the new build version, which forces the <code>app</code> Pod to pull a new container image. This is defined in the Kubernetes deployment strategy.</p>
<p>However, the <code>queue</code> pod was still running with the old image. I forced the <code>queue</code> pod to pull the latest image (no voodoo magic) and the jobs were executed.</p>
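<p>For anyone hitting the same thing, forcing the workload to pick up the new image amounts to something like the following (the deployment name and namespace are placeholders for my actual queue deployment):</p>
<pre><code>kubectl rollout restart deployment/queue -n my-namespace
</code></pre>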
<p>Thanks to all who took their time to investigate and tried to reproduce this behavior. That wasn't easy.</p>
<p>Cheers!</p>
|
<p>I have a very simple program:</p>
<pre><code>package main
import (
"fmt"
"github.com/vishvananda/netlink"
)
func main() {
_, err := netlink.LinkByName("wlp164s0")
if err != nil {
fmt.Println("error finding VIP Interface, for building DHCP Link : %v", err)
return
}
fmt.Println("Worked..")
}
</code></pre>
<p>If I create a docker image and run it with "--net host", this program prints "Worked". It is able to find the interface wlp164s0.</p>
<p>If I create a k8s deployment like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: netlink-example
labels:
app: netlink-example
spec:
replicas: 1
selector:
matchLabels:
app: netlink-example
template:
metadata:
labels:
app: netlink-example
spec:
hostNetwork: true
containers:
- name: netlink
image: suruti94/netlink:0.1
imagePullPolicy: IfNotPresent
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>This program prints the error indicating that it can't look up the interface, which means the "hostNetwork: true" is not taking effect. From all my research, this looks right. Any help would be appreciated. I am running this program on Ubuntu 21.04, Kubernetes version 1.22.</p>
| <p>After some experimentation, I have come to an understanding that the docker option "--net host" is not the same as "hostNetwork: true" in k8s. I wrongly assumed they produce similar behavior.</p>
<ul>
<li>docker --net host option makes the host interfaces available in the container which is useful for some applications</li>
<li>When you deploy a pod with hostNetwork: true, it means the host network is reachable from the pod. By default, when a pod is deployed (I verified this on my local machine using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a>), the host network is reachable: I can see the veth interface connected to the bridge on the host. Even with hostNetwork: false, I was able to update packages on my pod, so I'm not sure what to make of this setting. At this stage, I am concluding that there is no option to expose the host interface directly on the pod (a quick way to compare what the pod actually sees is sketched below this list).</li>
</ul>
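<p>One way to check what the pod actually sees (assuming the container image ships the <code>ip</code> tool) is to list the interfaces inside a pod deployed with and without <code>hostNetwork: true</code> and compare:</p>
<pre><code># list the network interfaces visible inside the pod
kubectl exec -it &lt;pod-name&gt; -- ip link
</code></pre>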
|
<p>I've installed minikube over VirtualBox in a Windows 10 Home</p>
<p>I am trying to run command: <code>minikube tunnel</code> but I get an error:</p>
<pre><code>Status:
machine: minikube
pid: 10896
route: 10.96.0.0/12 -> 192.168.99.101
minikube: Running
services: []
errors:
minikube: no errors
router: error adding route: Error en la adici�n de la ruta: El objeto ya existe.
, 3
loadbalancer emulator: no errors
</code></pre>
<p>This is the error message (translated):</p>
<pre><code>Error in route addition: Object exists already.
</code></pre>
<p>I would like to know why I get an error in the router section.</p>
<p>Thanks in advance</p>
| <p>Solution: this worked for me.</p>
<p>Run <code>minikube tunnel</code> in PowerShell, running PowerShell as administrator:</p>
<pre><code>PS C:\Users\QL752LU> minikube tunnel
Status:
machine: minikube
pid: 9272
route: 10.96.0.0/12 -> 192.168.59.100
minikube: Running
services: [dockerml]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Status:
machine: minikube
pid: 9272
route: 10.96.0.0/12 -> 192.168.59.100
minikube: Running
services: [dockerml]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Status:
</code></pre>
|
<p>We followed the solution suggested in <a href="https://stackoverflow.com/questions/71957287/apache-ignite-c-sharp-client-connection-configuration-for-kubernetes">Apache Ignite C# Client Connection configuration for kubernetes</a> as a thick client to connect to the Ignite cluster running in Kubernetes.</p>
<p>We get the below error message on start:</p>
<p>failed to start: System.EntryPointNotFoundException: Unable to find an entry point named 'dlopen' in shared library 'libcoreclr.so'. at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.NativeMethodsCore.dlopen(String filename, Int32 flags) at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.Load(String dllPath) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.LoadDll(String filePath, String simpleName) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String configJvmDllPath, ILogger log) at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)</p>
<p>We included OpenJDK 8 in the Docker image. Here is the Dockerfile:</p>
<pre><code>
#FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
#WORKDIR /app
#EXPOSE 80
#EXPOSE 443
ARG REPO=mcr.microsoft.com/dotnet/runtime
FROM $REPO:3.1.24-alpine3.15 AS base
# Install ASP.NET Core
RUN aspnetcore_version=3.1.24 \
&& wget -O aspnetcore.tar.gz https://dotnetcli.azureedge.net/dotnet/aspnetcore/Runtime/$aspnetcore_version/aspnetcore-runtime-$aspnetcore_version-linux-musl-x64.tar.gz \
&& aspnetcore_sha512='1341b6e0a9903b253a69fdf1a60cd9e6be8a5c7ea3c4a52cd1a8159461f6ba37bef7c2ae0d6df5e1ebd38cd373cf384dc55c6ef876aace75def0ac77427d3bb0' \
&& echo "$aspnetcore_sha512 aspnetcore.tar.gz" | sha512sum -c - \
&& tar -oxzf aspnetcore.tar.gz -C /usr/share/dotnet ./shared/Microsoft.AspNetCore.App \
&& rm aspnetcore.tar.gz
RUN apk add openjdk8
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk
ENV PATH="$JAVA_HOME/bin:${PATH}"
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
...
RUN dotnet restore "API.csproj"
COPY . .
WORKDIR "API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
</code></pre>
| <p>In addition to Pavel's response, instead of building your own docker image, you can utilize base image available in GridGain edition: <a href="https://hub.docker.com/r/gridgain/community-dotnet" rel="nofollow noreferrer">https://hub.docker.com/r/gridgain/community-dotnet</a></p>
<p>GridGain Community edition is built on Apache Ignite, is free and open source as well. You might check the official docs for more details.</p>
|
<p>We followed the solution suggested in <a href="https://stackoverflow.com/questions/71957287/apache-ignite-c-sharp-client-connection-configuration-for-kubernetes">Apache Ignite C# Client Connection configuration for kubernetes</a> as a thick client to connect to the Ignite cluster running in Kubernetes.</p>
<p>We get the below error message on start:</p>
<p>failed to start: System.EntryPointNotFoundException: Unable to find an entry point named 'dlopen' in shared library 'libcoreclr.so'. at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.NativeMethodsCore.dlopen(String filename, Int32 flags) at Apache.Ignite.Core.Impl.Unmanaged.Jni.DllLoader.Load(String dllPath) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.LoadDll(String filePath, String simpleName) at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String configJvmDllPath, ILogger log) at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)</p>
<p>We included the openjdk8 in the docker image. Here is the docker file.</p>
<pre><code>
#FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
#WORKDIR /app
#EXPOSE 80
#EXPOSE 443
ARG REPO=mcr.microsoft.com/dotnet/runtime
FROM $REPO:3.1.24-alpine3.15 AS base
# Install ASP.NET Core
RUN aspnetcore_version=3.1.24 \
&& wget -O aspnetcore.tar.gz https://dotnetcli.azureedge.net/dotnet/aspnetcore/Runtime/$aspnetcore_version/aspnetcore-runtime-$aspnetcore_version-linux-musl-x64.tar.gz \
&& aspnetcore_sha512='1341b6e0a9903b253a69fdf1a60cd9e6be8a5c7ea3c4a52cd1a8159461f6ba37bef7c2ae0d6df5e1ebd38cd373cf384dc55c6ef876aace75def0ac77427d3bb0' \
&& echo "$aspnetcore_sha512 aspnetcore.tar.gz" | sha512sum -c - \
&& tar -oxzf aspnetcore.tar.gz -C /usr/share/dotnet ./shared/Microsoft.AspNetCore.App \
&& rm aspnetcore.tar.gz
RUN apk add openjdk8
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk
ENV PATH="$JAVA_HOME/bin:${PATH}"
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
...
RUN dotnet restore "API.csproj"
COPY . .
WORKDIR "API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
</code></pre>
| <p>This Ignite issue on Alpine Linux was fixed in 2.13, which was released yesterday - please try upgrading.</p>
<p><a href="https://issues.apache.org/jira/browse/IGNITE-16749" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/IGNITE-16749</a>
<a href="https://www.nuget.org/packages/Apache.Ignite/2.13.0" rel="nofollow noreferrer">https://www.nuget.org/packages/Apache.Ignite/2.13.0</a></p>
<hr />
<p>Additionally, set <strong>LD_LIBRARY_PATH</strong> environment variable:</p>
<ul>
<li><strong>openjdk8</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/amd64/server</code></li>
<li><strong>openjdk11</strong>: <code>ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/server</code></li>
</ul>
<p>Do not set <code>JAVA_HOME</code> and <code>PATH</code> - it is not necessary.</p>
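<p>Applied to the Dockerfile from the question, the openjdk8 variant would then look roughly like this (a sketch only; keep the existing <code>apk add</code> line and drop the <code>JAVA_HOME</code>/<code>PATH</code> lines as noted above):</p>
<pre><code>RUN apk add --no-cache openjdk8
ENV LD_LIBRARY_PATH /usr/lib/jvm/default-jvm/jre/lib/amd64/server
</code></pre>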
|
<p>I deployed a EKS cluster via a IAM user and need to give full cluster permission to other aws sso users. I followed this instruction <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a> but still can't make it work.</p>
<p>I got below error when run this command by a sso user:</p>
<pre><code>$ kubectl get pods
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>In the doc they say to create an <code>aws-auth</code> ConfigMap to map the role as below. But I am not sure what role ARN I should specify. I am using EKS + Fargate so there is no <code>node group</code> or <code>node instance</code> in my cluster. So what kind of role should I specify in the below configuration?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: <ARN of instance role (not instance profile)>
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
| <p>You should specify:</p>
<pre><code>- rolearn: arn:aws:iam::<aws account id>:role/<federated user>
username: {{SessionName}}
groups:
- system:masters
</code></pre>
<p>Please note that "< aws account id >" is the ID of the AWS account in which the EKS cluster is created, and "< federated user >" is the SSO user with which you are accessing this account. This mapping grants admin access rights to your k8s cluster. You can obtain the "< federated user >" by logging into the account with that user and expanding the menu in the upper right corner, where you will see "Federated user". Copy it; it will look something like AWSReservedSSO_AdministratorAccess_9999bb9bbbbfff/administrator. Take just the first part before the forward slash.</p>
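<p>The mapping itself is added to the existing <code>aws-auth</code> ConfigMap; one way to do that (run with credentials that already have cluster access) is:</p>
<pre><code>kubectl edit configmap aws-auth -n kube-system
</code></pre>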
|
<p>I am installing <code>kube-prometheus-stack</code> with Helm and I am adding some custome scraping configuration to Prometheus which requires authentication. I need to pass <code>basic_auth</code> with <code>username</code> and <code>password</code> in the <code>values.yaml</code> file.</p>
<p>The thing is that I need to commit the <code>values.yaml</code> file to a repo so I am wondering how can I have the username and password set on values file, maybe from a secret in Kubernetes or some other way?</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheusSpec:
additionalScrapeConfigs:
- job_name: myjob
scrape_interval: 20s
metrics_path: /metrics
static_configs:
- targets:
- myservice.default.svc.cluster.local:80
basic_auth:
username: prometheus
password: prom123456
</code></pre>
| <p><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="nofollow noreferrer">Scrape config</a> supports a <code>password_file</code> parameter, so you can mount your own secret via volumes and volumeMounts:</p>
<p>Disclaimer: I haven't tested it myself and am not using kube-prometheus-stack, but something like this should work:</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheusSpec:
additionalScrapeConfigs:
- job_name: myjob
scrape_interval: 20s
metrics_path: /metrics
static_configs:
- targets:
- myservice.default.svc.cluster.local:80
      basic_auth:
        username: prometheus
        password_file: /etc/scrape-passwordfile/password
    # Additional volumes on the output StatefulSet definition.
    volumes:
      - name: scrape-passwordfile
        secret:
          secretName: scrape-passwordfile
          optional: false
    # Additional VolumeMounts on the output StatefulSet definition.
    volumeMounts:
      - name: scrape-passwordfile
        mountPath: "/etc/scrape-passwordfile"
</code></pre>
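<p>The referenced secret could be created beforehand, for example from a literal (the secret name and key must match the values above; the namespace is illustrative):</p>
<pre><code>kubectl create secret generic scrape-passwordfile \
  --from-literal=password='prom123456' \
  -n monitoring
</code></pre>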
<p>Another option is to ditch <code>additionalScrapeConfigs</code> and use <code>additionalScrapeConfigsSecret</code> to store the whole config inside a secret:</p>
<pre class="lang-yaml prettyprint-override"><code> ## If additional scrape configurations are already deployed in a single secret file you can use this section.
## Expected values are the secret name and key
## Cannot be used with additionalScrapeConfigs
additionalScrapeConfigsSecret: {}
# enabled: false
# name:
# key:
</code></pre>
|
<p>We have a Kubernetes Pod which provides multiple metric endpoints (<code>:3093/metrics</code> and <code>:9113/metrics</code>), but it also has an port which doesn't provide any metric (<code>:80</code>).</p>
<p><strong>TL;DR:</strong> Is is possible to scrape only the ports <code>3093</code> and <code>9113</code>?</p>
<hr>
<p>We are using <a href="https://github.com/prometheus/prometheus/blob/70f3d1e/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">the example configuration</a> for the scrape configuration:</p>
<pre><code>- job_name: 'kubernetes-pods'
tls_config:
insecure_skip_verify: true
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
</code></pre>
<p>These endpoints get properly scraped, when enabling scraping with this annotation:</p>
<pre><code> annotations:
prometheus.io/scrape: "true"
</code></pre>
<p>But this has the issue that it also scrapes port <code>:80</code>, which it shouldn't.</p>
| <p>The suggestion outlined <a href="https://github.com/prometheus/prometheus/issues/3756#issuecomment-767147462" rel="nofollow noreferrer">here</a> allows scraping multiple endpoints per pod. The idea is to give custom names to container ports, which must be scraped, so these names can be used later during relabeling phase for the discovered targets. For example, the following config adds <code>prom-metrics</code> name to ports 3093 and 9113, which must be scraped by Prometheus. Note that the port 80 has no <code>prom-metrics</code> name, so it won't be scraped.</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: custom-app
image: github.com/user/app:latest
ports:
- containerPort: 80
- containerPort: 3093
name: prom-metrics
- containerPort: 9113
name: prom-metrics
</code></pre>
<p>Later the following scrape config can be used for scraping container ports with <code>prom-metrics</code> name:</p>
<pre class="lang-yaml prettyprint-override"><code> - job_name: 'pods-with-prom-metrics'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
regex: prom-metrics
action: keep
- source_labels: [__address__, __meta_kubernetes_pod_container_port_number]
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_pod_container_name]
target_label: container
- source_labels: [__meta_kubernetes_pod_name]
target_label: pod
- source_labels: [__meta_kubernetes_namespace]
target_label: namespace
</code></pre>
|
<p>I have installed minikube on my Linux laptop (Computer A)</p>
<p><a href="https://i.stack.imgur.com/WdHb7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WdHb7.png" alt="enter image description here" /></a></p>
<p>and deployed some services on it:</p>
<pre><code>$ minikube service list</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAMESPACE</th>
<th>NAME</th>
<th>TARGET PORT</th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>cert-manager</td>
<td>cert-manager</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>cert-manager</td>
<td>cert-manager-webhook</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>default</td>
<td>aws</td>
<td>5000</td>
<td><a href="http://192.168.49.2:30728" rel="nofollow noreferrer">http://192.168.49.2:30728</a></td>
</tr>
<tr>
<td>default</td>
<td>azure</td>
<td>5000</td>
<td><a href="http://192.168.49.2:31794" rel="nofollow noreferrer">http://192.168.49.2:31794</a></td>
</tr>
<tr>
<td>default</td>
<td>gcloud</td>
<td>5000</td>
<td><a href="http://192.168.49.2:32367" rel="nofollow noreferrer">http://192.168.49.2:32367</a></td>
</tr>
<tr>
<td>default</td>
<td>hft</td>
<td>5000</td>
<td><a href="http://192.168.49.2:30970" rel="nofollow noreferrer">http://192.168.49.2:30970</a></td>
</tr>
<tr>
<td>default</td>
<td>hftf</td>
<td>5000</td>
<td><a href="http://192.168.49.2:30612" rel="nofollow noreferrer">http://192.168.49.2:30612</a></td>
</tr>
<tr>
<td>default</td>
<td>kubernetes</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>default</td>
<td>open</td>
<td>5000</td>
<td><a href="http://192.168.49.2:31547" rel="nofollow noreferrer">http://192.168.49.2:31547</a></td>
</tr>
<tr>
<td>ingress-nginx</td>
<td>ingress-nginx-controller</td>
<td>http/80</td>
<td><a href="http://192.168.49.2:31053" rel="nofollow noreferrer">http://192.168.49.2:31053</a></td>
</tr>
<tr>
<td></td>
<td></td>
<td>https/443</td>
<td><a href="http://192.168.49.2:30302" rel="nofollow noreferrer">http://192.168.49.2:30302</a></td>
</tr>
<tr>
<td>ingress-nginx</td>
<td>ingress-nginx-controller-admission</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>kube-system</td>
<td>kube-dns</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>kubernetes-dashboard</td>
<td>dashboard-metrics-scraper</td>
<td>No node port</td>
<td></td>
</tr>
<tr>
<td>kubernetes-dashboard</td>
<td>kubernetes-dashboard</td>
<td>No node port</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, there are 6 services created with FastAPI, and they are behind an ingress.
I have set up my ingress with the host <code>galigator.fun</code>. Cert-manager is ready to fire, but for now TLS is commented out in the ingress.
<code>galigator.fun</code> is a DNS name I own.</p>
<pre><code>
$ minikube addons enable ingress
$ kubectl describe ingress
Name: multiplex-applications
Namespace: default
Address: 192.168.49.2
Default backend: default-http-backend:80 ()
Rules:
Host Path Backends
---- ---- --------
galigator.fun
/v1/aws/(.+) aws:5000 (172.17.0.11:5000)
/v1/azure/(.+) azure:5000 (172.17.0.14:5000)
/v1/gcloud/(.+) gcloud:5000 (172.17.0.13:5000)
/v1/deepml/best/(.+) hftf:5000 (172.17.0.10:5000)
/v1/deepml/edge/(.+) hft:5000 (172.17.0.3:5000)
/v1/deepml/razor/(.+) open:5000 (172.17.0.8:5000)
Annotations: cert-manager.io/issue-temporary-certificate: true
cert-manager.io/issuer: letsencrypt-staging
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 52m (x3 over 53m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>The error from the Default backend seems unrelated to my problem...</p>
<p>If I add an entry to <code>/etc/hosts</code> with the <code>minikube ip</code> and <code>galigator.fun</code>, then I can reach my services without problem.
But they are not available from Computer B or Computer C.</p>
<p><strong>How to access my services from Computer B and Computer C ?</strong></p>
<p><em>I don't want to change any configuration in computer B & C.</em></p>
<p>As I understand it, the <code>minikube ip</code> isn't known to my Internet Box; it is not a physical piece of equipment.
My minikube runs on my kubuntu-21.04 machine and is started with <code>minikube start --ports=80,443</code> or just <code>minikube start</code>, but I can't observe a difference.</p>
<p>Another thing that doesn't seem to work:</p>
<pre><code>
$ kubectl port-forward --address 0.0.0.0 deployment/ingress-nginx-controller 80:80 --namespace ingress-nginx
Unable to listen on port 80: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 0.0.0.0:80: bind: permission denied]
error: unable to listen on any of the requested ports: [{80 80}]
</code></pre>
<p>I have properly tested the connectivity between the Internet and Computer A on ports 80 & 443 using an Apache server. I have since removed that server.</p>
| <p>First, as the docs explain, I tried to play with the <code>minikube</code> startup options.</p>
<pre><code>minikube start --listen-address='0.0.0.0' --ports=80,443
</code></pre>
<p>But as we can see it doesn't change anything:</p>
<pre><code>$ docker container ls |grep minikube
0aafb00fd97d gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 3 weeks ago Up 22 minutes 127.0.0.1:49172->22/tcp, 127.0.0.1:49171->2376/tcp, 127.0.0.1:49170->5000/tcp, 127.0.0.1:49169->8443/tcp, 127.0.0.1:49168->32443/tcp minikube
</code></pre>
<p>So if we investigate the error of this command more deeply:</p>
<pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 80:80 443:443
</code></pre>
<p>we find that it tries to bind to ports <code>80</code> and <code>443</code>, but these are below <code>1024</code> and require extra privileges/root. And <code>kubectl</code> isn't a root process, for good reasons.</p>
<p>So an ugly solution that works could be to do something like this:</p>
<pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 50000:80 50001:443
</code></pre>
<p>And then change the NAT rules in the Internet Box. But I don't like it, because <code>minikube</code> is an implementation detail that should not impact other hardware.</p>
<p>So a less horrible solution is to give kubectl the right to bind to low-numbered ports...</p>
<pre><code>$ man 7 capabilities | grep -A 1 CAP_NET_BIND_SERVICE
CAP_NET_BIND_SERVICE
Bind a socket to Internet domain privileged ports (port numbers less than 1024).
</code></pre>
<p>So command should be :</p>
<pre><code>sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/kubectl
</code></pre>
<p>and then</p>
<pre><code>kubectl port-forward -n ingress-nginx --address 0.0.0.0 service/ingress-nginx-controller 80:80 443:443
</code></pre>
<p>it works too...</p>
<p><strong>But there is one major concern here, we have to redo the last command on every startup, and the last two ones on every upgrade...</strong></p>
<p>If someone has a good answer that could also work for a background server on a <code>LAN</code>, I would love it.</p>
<p>Port <code>80</code> is required only during the execution of the cert-manager challenge, so a solution that closes it again afterwards could also be interesting.</p>
|
<p>Is there an option in Kubernetes to set a DNS alias, as can be done within a hosts file (62.116.156.20 servera), where servera is an external server? As a result, any pod within the cluster could contact the server via DNS.</p>
<p>Thanks</p>
| <p>It totally depends on how you configure and deploy your pods. There is an option to add a custom DNS entry to the hosts file of every pod, documented here: <a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/</a></p>
<p>configuration snip:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
containers:
- name: cat-hosts
image: busybox:1.28
command:
- cat
args:
- "/etc/hosts"
</code></pre>
|
<p>I have a FastAPI app with the following code</p>
<pre class="lang-py prettyprint-override"><code> @app.on_event("startup")
async def startup_event():
"""Initialize application services"""
print("Starting the service")
</code></pre>
<p>when I run FastAPI directly from the terminal, I get the following output</p>
<pre><code>INFO: Started server process [259936]
INFO: Waiting for application startup.
Starting the service
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
</code></pre>
<p>You can see that the print statement got executed.</p>
<p>However, when the same app is automatically run inside a Kubernetes cluster, I get the following output</p>
<pre><code> INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
</code></pre>
<p>The print statement did not get executed, in fact, any additional code inside the function never gets executed.</p>
<p>However, if I exit the process like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.on_event("startup")
async def startup_event():
"""Initialize application services"""
print("Starting the service")
exit(99)
</code></pre>
<p>The process exists then I can see the print statement.</p>
<pre><code>SystemExit: 99
ERROR: Application startup failed. Exiting.
Starting the service
</code></pre>
<p>What is the problem here?</p>
<p>Edit: Actually no code whatsoever gets executed, I have put print statements literally everywhere and nothing gets printed, but somehow the webserver runs...</p>
| <p>So, actually, there is no problem with my code, FastAPI, asyncio, or Kubernetes.</p>
<p>Everything was actually working correctly, it's just that the output was buffered.</p>
<p>After adding <code>flush=True</code> to the print statement, everything showed up.</p>
<p>I am answering this in case some poor soul stumbles upon this thread in the future.</p>
<p>I spent days debugging this!!!</p>
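<p>As an aside, an alternative to adding <code>flush=True</code> everywhere is to disable Python's output buffering for the whole container by setting the <code>PYTHONUNBUFFERED</code> environment variable in the deployment (a sketch; container name and image are illustrative):</p>
<pre><code>containers:
- name: fastapi-app
  image: myregistry/fastapi-app:latest
  env:
  - name: PYTHONUNBUFFERED
    value: "1"
</code></pre>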
|
<p>I'm trying to understand how the Kubernetes <strong>HorizontalPodAutoscaler</strong> works.
Until now, I have used the following configuration:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: my-deployment
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deployment
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 50
</code></pre>
<p>This uses the <code>targetCPUUtilizationPercentage</code> parameter, but I would like to use a metric for the percentage of memory used; I was not able to find any example.
Any hint?</p>
<p>I found also that there is this type of configuration to support multiple metrics, but the <strong>apiVersion</strong> is <code>autoscaling/v2alpha1</code>. Can this be used in a production environment?</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2alpha1
metadata:
name: WebFrontend
spec:
scaleTargetRef:
kind: ReplicationController
name: WebFrontend
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 80
- type: Object
object:
target:
kind: Service
name: Frontend
metricName: hits-per-second
targetValue: 1k
</code></pre>
| <p>Here is a manifest example for what you need, that includes <strong>Memory Metrics</strong>:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: web-servers
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: web-servers
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 20
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 30Mi
</code></pre>
<p>An important thing to notice is that, as you can see, it uses the <strong>autoscaling/v2beta2 API version</strong>, so you need to follow all the previous instructions listed <a href="https://loft.sh/blog/kubernetes-horizontal-pod-autoscaling/" rel="nofollow noreferrer">here</a>.</p>
<p>Regarding the possibility to use the <strong>autoscaling/v2alpha1</strong>, yes, you can use it, as it includes support for scaling on memory and custom metrics as this <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">URL</a> specifies, but keep in mind that alpha versions are released for testing, as they are not final versions.</p>
<p>For more <strong>autoscaling/v2beta2 YAML’s</strong> examples and a deeper look into memory metrics, you can take a look at this <a href="https://stackoverflow.com/questions/69184304/how-memory-metric-is-evaluated-by-kubernetes-hpa">thread</a>.</p>
|
<p>Can somebody please help me understand why</p>
<pre><code>kubectl get all --all-namespaces
</code></pre>
<p>lists resources from all namespaces but not Ingress. Even if I specify a particular namespace like the following, I still don't get the Ingress resource:</p>
<pre><code>kubectl get all -n app-space
</code></pre>
<p>Why do I have to specifically refer to the resource type [Ingress] in this case if I have to get the list of all Ingress resources from all namespaces or a specific namespace?</p>
<pre><code>kubectl get ingress --all-namespaces
kubectl get ingress -n app-space
</code></pre>
| <p><code>kubectl get all</code> is not intended to list all resources, because that would produce unexpected output; instead, it's meant to list only some of the most commonly used API resources. Check this <a href="https://github.com/kubernetes/kubernetes/pull/28955#issuecomment-232652070" rel="nofollow noreferrer">Pull Request</a> for the discussion.</p>
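<p>To see which resource kinds exist, and then query the ones you care about explicitly, something like this works:</p>
<pre><code># list all resource kinds the cluster knows about
kubectl api-resources

# query several kinds at once, including Ingress
kubectl get ingress,deploy,rs,svc,pods -n app-space
</code></pre>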
|
<p>I'm running Kubernetes with docker desktop on windows. DD is up-to-date, and the <code>kubectl version</code> command returns 1.22 as both the client and server version.</p>
<p>I executed <code>kubectl run my-apache --image httpd</code>, then <code>kubectl get all</code>, which only shows
<a href="https://i.stack.imgur.com/9xWby.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xWby.png" alt="enter image description here" /></a></p>
<p>There is no Deployment or ReplicaSet as I expected. This means some commands, such as <code>kubectl scale</code>, don't work. Any idea what's wrong? Thanks.</p>
| <p>The <code>kubectl run</code> command creates a Pod, not a Deployment. It used to create a Deployment in the past, before roughly Kubernetes 1.18.</p>
<p>For a Deployment you have to run this command:</p>
<p><code>kubectl create deployment my-apache --image httpd</code></p>
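<p>Once the Deployment (and its ReplicaSet) exists, scaling works as expected, for example:</p>
<pre><code>kubectl scale deployment my-apache --replicas=3
</code></pre>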
|
<p>I have a K8S service (app-filestash-testing) running like following:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-filestash-testing ClusterIP 10.111.128.18 <none> 10000/TCP 18h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
</code></pre>
<p>I used the following yaml file to create an Ingress trying reach this service:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: app-filestash-testing
spec:
rules:
- host: www.masternode.com
http:
paths:
- backend:
serviceName: app-filestash-testing
servicePort: 10000
</code></pre>
<p>In the <em>/etc/hosts</em> file, I made this change (I used the worker node public IP):</p>
<pre><code>127.0.0.1 localhost
xx.xxx.xxx.xxx www.masternode.com
</code></pre>
<p>However, when I checked the Ingress, I saw that the Ingress port is 80.</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
app-filestash-testing nginx www.masternode.com 80 14h
</code></pre>
<p>Currently the service is running and listening on port 10000, but the Ingress port is 80.</p>
<p>I am just wondering: is there any method/setting to change the port number of the Ingress to 10000? How can I reach this service through the Ingress? Is it possible to set the port number in the <em>/etc/hosts</em> file?</p>
<p>Thanks.</p>
| <p>From: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p>
<blockquote>
<p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.</p>
</blockquote>
<p>NodePort might be what you are looking for. More information and options are documented here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
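<p>For illustration, a NodePort variant of the service might look like this (a sketch only; the selector must match the pod labels your deployment actually uses, and the nodePort value must fall in the cluster's NodePort range, 30000-32767 by default):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-filestash-testing-nodeport
spec:
  type: NodePort
  selector:
    app: app-filestash-testing
  ports:
  - port: 10000
    targetPort: 10000
    nodePort: 30100
</code></pre>
<p>The service would then be reachable on <code>xx.xxx.xxx.xxx:30100</code> without going through the Ingress.</p>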
|
<p>I am able to connect to a Kubernetes cluster using the Fabric8 API with the following code:</p>
<pre><code> String masterUrl = "<Kube Master Url>";
Map<String, String> certFileNamesMap = getCertificates();
Config config = new ConfigBuilder().withMasterUrl(masterUrl).build();
config.setWebsocketPingInterval(30000L);
config.setConnectionTimeout(300000);
config.setRequestTimeout(300000);
if (certFileNamesMap != null) {
config.setCaCertFile(certFileNamesMap.get(CA_CERT_FILE));
config.setClientCertFile(certFileNamesMap.get(CLIENT_CERT_FILE));
config.setClientKeyFile(certFileNamesMap.get(CLIENT_KEY_FILE));
}
KubernetesClient client = new DefaultKubernetesClient(config);
</code></pre>
<p>I am able to set the certificates and connect to https://MasterIP, but with the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> API I'm not able to figure out how to set the certs as well as the remote master IP. Is there any way to do that with the official Kubernetes client API, since the Fabric8 API has been officially discontinued by its team?</p>
<p>I'm trying the following code inside a container in minikube:</p>
<pre><code> ApiClient client = ClientBuilder.cluster().build();
Configuration.setDefaultApiClient(client);
System.out.println("Loading in app cluster 3");
CoreV1Api api = new CoreV1Api();
V1PodList list =
api.listPodForAllNamespaces(null,
null,
null,
null,
100,
null,
null,
null,
null,
null);
System.out.println("Listing all pods: ");
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
</code></pre>
<p>I'm getting a Forbidden error.</p>
<pre><code>io.kubernetes.client.openapi.ApiException: Forbidden
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:993)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:905)
at io.kubernetes.client.openapi.apis.CoreV1Api.listPodForAllNamespacesWithHttpInfo(CoreV1Api.java:35956)
at io.kubernetes.client.openapi.apis.CoreV1Api.listPodForAllNamespaces(CoreV1Api.java:35848)
</code></pre>
| <p>I was able to figure out the way: we need to use the ~/.kube/config file and copy all the certificates into that file itself; the Kubernetes Java client then has an API which reads the config file and gives access to the cluster.</p>
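<p>As a sketch (untested; the kubeconfig path is illustrative), loading such a file with the official Java client looks roughly like this, using <code>io.kubernetes.client.util.KubeConfig</code> and <code>io.kubernetes.client.util.ClientBuilder</code>:</p>
<pre><code>// load the cluster endpoint and certificates from a kubeconfig file
KubeConfig kubeConfig = KubeConfig.loadKubeConfig(new FileReader("/path/to/kubeconfig"));
ApiClient client = ClientBuilder.kubeconfig(kubeConfig).build();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
</code></pre>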
|
<p>I have a nodejs application running on Google Kubernetes Engine (v1.20.8-gke.900)</p>
<p>I want to add custom headers to get the client's region and lat/long, so I referred to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">this article</a> and <a href="https://cloud.google.com/load-balancing/docs/https/custom-headers" rel="nofollow noreferrer">this one</a> as well and created the Kubernetes config file below, but when I print the headers I am not getting any custom header.</p>
<pre class="lang-yaml prettyprint-override"><code>#k8s.yaml
apiVersion: v1
kind: Namespace
metadata:
name: my-app-ns-prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: npm-app-deployment
namespace: my-app-ns-prod
labels:
app: npm-app-deployment
tier: backend
spec:
template:
metadata:
name: npm-app-pod
namespace: my-app-ns-prod
labels:
app: npm-app-pod
tier: backend
spec:
containers:
- name: my-app-container
image: us.gcr.io/img/my-app:latest
ports:
- containerPort: 3000
protocol: TCP
envFrom:
- secretRef:
name: npm-app-secret
- configMapRef:
name: npm-app-configmap
imagePullPolicy: Always
imagePullSecrets:
- name: gcr-regcred
replicas: 3
minReadySeconds: 30
selector:
matchLabels:
app: npm-app-pod
tier: backend
---
apiVersion: v1
kind: Service
metadata:
name: npm-app-service
namespace: my-app-ns-prod
annotations:
cloud.google.com/backend-config: '{"ports": {"80":"npm-app-backendconfig"}}'
cloud.google.com/neg: '{"ingress": true}'
spec:
selector:
app: npm-app-pod
tier: backend
ports:
- name: http
protocol: TCP
port: 80
targetPort: 3000
- name: https
protocol: TCP
port: 443
targetPort: 3000
type: LoadBalancer
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: npm-app-backendconfig
namespace: my-app-ns-prod
spec:
customRequestHeaders:
headers:
- "HTTP-X-Client-CityLatLong:{client_city_lat_long}"
- "HTTP-X-Client-Region:{client_region}"
- "HTTP-X-Client-Region-SubDivision:{client_region_subdivision}"
- "HTTP-X-Client-City:{client_city}"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: app.example.com
http:
paths:
- path: /api/v1
pathType: Prefix
backend:
service:
name: npm-app-service
port:
number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
name: npm-app-configmap
namespace: my-app-ns-prod
data:
APP_ID: "My App"
PORT: "3000"
---
apiVersion: v1
kind: Secret
metadata:
name: npm-app-secret
namespace: my-app-ns-prod
type: Opaque
data:
MONGO_CONNECTION_URI: ""
SESSION_SECRET: ""
</code></pre>
| <p>Actually the issue was with the Ingress controller: I had missed defining "<code>cloud.google.com/backend-config</code>". Once I had defined that, I was able to get the custom headers. I also switched to the <strong>GKE Ingress controller</strong> (<code>gce</code>) from <code>nginx</code>, but the same thing works with <code>nginx</code> as well.</p>
<p>This is what my final Ingress looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
cloud.google.com/backend-config: '{"default": "npm-app-backendconfig"}'
kubernetes.io/ingress.class: "gce"
spec:
...
...
</code></pre>
<p>Reference: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#request_headers" rel="nofollow noreferrer">User-defined request headers</a></p>
|
<p>Is it possible to obtain Kubernetes logs for a dedicated time range?</p>
<p>All I can do right now is to make a dump of about the last-hour log for the single pod using <code>kubectl logs > dump.log</code> cmd.</p>
<p>But for debugging reasons, it's necessary to obtain the logs for the last week. I was unable to find any abilities to do this in Kubernetes logs.</p>
<p>The only thought is to attach some external service like Kibana for the logs collection, but maybe built-in Kubernetes remedies allow to do this?</p>
<p>Thank you.</p>
| <p><strong>AWK</strong> is an awesome tool in Unix/Linux systems for these types of logical operations</p>
<p>So, to display logs in between two-time ranges (ex: 10 AM to 11 AM):</p>
<ol>
<li><p>Using <code>--since-time</code> and <code>awk</code></p>
<p><code>kubectl logs pod_name --since-time=2022-04-30T10:00:00Z | awk '$0 < "2022-04-30 11:00:00"'</code></p>
</li>
<li><p>Using only <code>awk</code></p>
<p><code>kubectl logs pod_name | awk '$0 > "2022-04-30 10:00:00"' | awk '$0 < "2022-04-30 11:00:00"'</code></p>
</li>
</ol>
<p>Note: format the date/time used in the awk commands based on the actual log output.</p>
|
<p>I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.</p>
<p>The application is an HTTP web backend.</p>
<p>How I'd usually handle this in a non-GCP/K8S context is to have a load balancer in front of the application, set up a new web backend and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would essentially have 0 downtime while also allowing for a seamless rollback if anything goes wrong.</p>
<p>I do not see why this should not work for this context as well however I'm not 100% sure. And if there is a more robust or alternative way to do this (GCP/GKE friendly way), I'd like to investigate that.</p>
<p><strong>So to summarize my question,</strong> does GCP/GKE support this type of migration functionality? If not, is there any implications I need to be aware of with my usual load balancer approach mentioned above?</p>
<hr />
<p>The reason for migrating is that the current k8s cluster is running quite an old version (1.18), and if doing a GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well as risk.</p>
| <p>I see 2 approaches:</p>
<ol>
<li>In the new cluster get a new IP address and update the DNS record to point to the new load balancer</li>
<li>See if you can switch to Multi-cluster gateways, however that would probably require you to use approach 1 to switch to multi-cluster gateways as well: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways</a></li>
</ol>
|
<p>Consider this Kubernetes Pod:</p>
<pre><code># pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: testing123
spec:
containers:
- name: testing123
image: busybox:latest
command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
</code></pre>
<p>If I deploy this Pod and run <code>logs</code> I see this:</p>
<pre><code>$ kubectl apply -f pod.yaml
$ k logs testing123
Hello, Kubernetes!
</code></pre>
<p>So far, so good. I now "login" to the Pod and run an echo command:</p>
<pre><code>$ k exec -ti testing123 -- ash
/ # echo "Run after logging in."
Run after logging in.
/ # exit
$ k logs testing123
Hello, Kubernetes!
$
</code></pre>
<p>Why didn't <code>Run after logging in.</code> appear in the logs output of the Pod?</p>
| <p>Container logs are captured from the stdout and stderr of the container's command line / entrypoint process.</p>
<p>When you enter a container (kubectl exec), you spawn a new process, with its own stdin/stdout/stderr.</p>
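<p>If you do want output from a <code>kubectl exec</code> session to show up in <code>kubectl logs</code>, one workaround (a sketch, not required by the question) is to write to the stdout of the container's main process, which is PID 1:</p>
<pre><code># inside the exec'd shell
echo "Run after logging in." > /proc/1/fd/1
</code></pre>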
|
<p>I use the following function, and I need to <strong>raise</strong> its coverage (if possible to 100%). The problem is that I typically use an <code>interface</code> to handle such cases in Go, but for this specific case I am not sure how to do it, as it is a bit more <strong>tricky</strong>. Any idea?</p>
<p>The package <a href="https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/compute/v1" rel="nofollow noreferrer">https://pkg.go.dev/google.golang.org/genproto/googleapis/cloud/compute/v1</a></p>
<p>which I use doesn't expose an <code>interface</code>, so I am not sure how I can mock it.</p>
<pre><code>import (
"context"
"errors"
"fmt"
"os"
compute "cloud.google.com/go/compute/apiv1"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
computev1 "google.golang.org/genproto/googleapis/cloud/compute/v1"
)
func Res(ctx context.Context, project string, region string,vpc string,secret string) error {
c, err := compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
if err != nil {
return err
}
defer c.Close()
addrReq := &computev1.ListAddressesRequest{
Project: project,
Region: region,
}
it := c.List(ctx, addrReq)
for {
resp, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return err
}
if *(resp.Status) != "IN_USE" {
return ipConverter(*resp.Name, vpc)
}
}
return nil
}
</code></pre>
| <p>Whenever I find myself in this scenario, I found that the easiest solution is to create <em>missing</em> interfaces myself. I limit these interfaces to the types and functions that I am using, instead of writing interfaces for the entire library. Then, in my code, instead of accepting third-party concrete types, I accept my interfaces for those types. Then I use gomock to generate mocks for these interfaces as usual.</p>
<p>The following is a descriptive example inspired by your code.</p>
<pre class="lang-golang prettyprint-override"><code>type RestClient interface {
List(context.Context, *computev1.ListAddressesRequest) (ListResult, error) // assuming List returns ListResult type.
Close() error
}
func newRestClient(ctx context.Context, secret string) (RestClient, error) {
return compute.NewAddressesRESTClient(ctx, option.WithCredentialsFile(secret))
}
func Res(ctx context.Context, project string, region string, vpc string, secret string) error {
c, err := newRestClient(ctx, secret)
if err != nil {
return err
}
defer c.Close()
return res(ctx, project, region, vpc, c)
}
func res(ctx context.Context, project string, region string, vpc string, c RestClient) error {
addrReq := &computev1.ListAddressesRequest{
Project: project,
Region: region,
}
it, err := c.List(ctx, addrReq)
if err != nil {
return err
}
for {
resp, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return err
}
if *(resp.Status) != "IN_USE" {
return ipConverter(*resp.Name, vpc)
}
}
return nil
}
</code></pre>
<p>Now you can test the important bits of the <code>Res</code> function by injecting a mock <code>RestClient</code> to the internal <code>res</code> function.</p>
|
<ol>
<li><p>Using WordPress php-apache-7.4 as the base image, I created a Dockerfile with a few customisations and built an image. I am using the same docker-entrypoint.sh and wp-config-docker.php files from the official Docker Hub image.</p>
</li>
<li><p>Using the image when I create a container on Docker Desktop it works fine and I am able to load the WP page</p>
</li>
<li><p>I upload the same image to Docker Hub and from there and using that image created a pod on EKS cluster and I receive the error "exec /usr/local/bin/docker-entrypoint.sh: exec format error."</p>
</li>
</ol>
<p>I am using the files from the below repo
<a href="https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache" rel="noreferrer">https://github.com/docker-library/wordpress/tree/3b5c63b5673f298c14142c0c0e3e51edbdb17fd3/latest/php7.4/apache</a></p>
<p>Only the Dockerfile in the above repo is modified, to install memcached and copy wp-config.php. The other two files I am using without any changes.</p>
<p>I tried changing the docker-entrypoint.sh script to add <code>#!/bin/bash</code> as mentioned in a reported issue; I also tried to create a custom-entrypoint.sh that edits the original docker-entrypoint.sh script, which was suggested on another page, but neither worked.</p>
<p>custom-entrypoint.sh</p>
<pre><code>#!/bin/bash
sed -i -e 's/^exec "$@"/#exec "$@"/g' /usr/local/bin/docker-entrypoint.sh
source docker-entrypoint.sh
exec "$@"
</code></pre>
<p>While trying to fix this, the only confusing thing is that on Docker Desktop, when I create a container using the same image, it runs without any error.</p>
| <p>As mentioned in the comment above by David Maze, the issue is due to building the image on Mac M1 Pro.</p>
<p>To fix this I need to add <code>FROM --platform=linux/amd64 <image>:<version></code> in the Dockerfile and build, or pass the platform flag while running the build:</p>
<p><code>docker build --platform=linux/amd64 -t <image>:<version> .</code></p>
<p>Both solutions will work. I added <code>FROM --platform=linux/amd64</code> to the Dockerfile and it's fixed now.</p>
|
<p>I wanted to know exactly what is the difference between '>-' and '|-' especially in kubernetes yaml manifests</p>
| <p>Newlines in folded block scalars (<code>></code>) are subject to line folding, newlines in literal block scalars (<code>|</code>) are not.</p>
<p>Line folding replaces a single newline between non-empty lines with a space, and in the case of empty lines, reduces the number of newline characters between the surrounding non-empty lines by one:</p>
<pre class="lang-yaml prettyprint-override"><code>a: > # folds into "one two\nthree four\n\nfive\n"
  one
  two

  three
  four


  five
</code></pre>
<p>Line folding does not occur between lines when at least one line is more indented, i.e. contains whitespace at the beginning that is not part of the block's general indentation:</p>
<pre class="lang-yaml prettyprint-override"><code>a: > # folds into "one\n two\nthree four\n\n five\n"
  one
   two
  three
  four

   five
</code></pre>
<p>Adding <code>-</code> after either <code>|</code> or <code>></code> will strip the newline character from the last line:</p>
<pre class="lang-yaml prettyprint-override"><code>a: >- # folded into "one two"
  one
  two
b: >- # folded into "one\ntwo"
  one

  two
</code></pre>
<p>In contrast, <code>|</code> emits every newline character as-is, the sole exception being the last one if you use <code>-</code>.</p>
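<p>In a Kubernetes manifest this matters for things like multi-line shell commands or embedded config. A small illustration (the surrounding container spec is omitted): with <code>|-</code> the line breaks are kept, so the shell sees three separate commands; with <code>>-</code> they would be folded into a single line, which the shell would treat very differently:</p>
<pre class="lang-yaml prettyprint-override"><code>command: ["sh", "-c"]
args:
- |-
  echo first
  echo second
  echo third
</code></pre>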
|
<p>I have <code>sizeLimit</code> property under <code>emptyDir</code> set to 2Gi in my template base file. I want to remove the <code>sizelimit</code> and just have <code>emptyDir: {}</code>. I've been unable to achieve this using Kustomization overlays. I will detail my folder structure and kustomization yamls below.</p>
<p><strong>Folder Structure:</strong></p>
<pre><code>application
├── majorbase
│ ├── kustomization.yaml
│ └── resources
│ └── web-template.yaml
├── minorbase
│ ├── kustomization.yaml
│ └── resources
└── myoverlays
├── kustomization.yaml
└── resources
└── my-new-web.yaml
</code></pre>
<p>The folder <code>myoverlays</code> contains the following contents in its kustomization.yaml file:</p>
<pre><code>bases:
- ../minorbase
patchesStrategicMerge:
- resources/my-new-web.yaml
</code></pre>
<p>The folder <code>minorbase</code> contains the following contents in its kustomization.yaml file:</p>
<pre><code>bases:
- ../majorbase
</code></pre>
<p>The folder <code>majorbase</code> contains the following contents in its kustomization.yaml file:</p>
<pre><code>resources:
- resources/web-template.yaml
</code></pre>
<p>The section I want to edit looks like this in the majorbase/template.</p>
<pre><code>volumes:
- name: test-vol
emptyDir:
sizeLimit: "2Gi"
</code></pre>
<p>The above configuration needs to be updated using overlays as below.</p>
<pre><code>volumes:
- name: test-vol
emptyDir: {}
</code></pre>
<p>This is where my problem lies. Kustomize just picks the 2Gi value mentioned in the base whenever I remove the <code>sizeLimit</code> in my overlays. When I specify a different value for <code>sizeLimit</code>, such as "1Gi", in my overlays file, Kustomize picks up the change. What is the cause of this behaviour? Is it possible to achieve what I'm trying to do here?</p>
| <p>NB: This answer assumes a recent version of Kustomize (I'm running 4.5.2 locally). Your examples are using deprecated syntax (the <code>bases</code> section was deprecated <a href="https://github.com/kubernetes-sigs/kustomize/blob/661743c7e5bd8c3d9d6866b6bc0a6f0e0b0512eb/site/content/en/blog/releases/v2.1.0.md#resources-expanded-bases-deprecated" rel="nofollow noreferrer">in version 2.1.0</a>, for example).</p>
<hr />
<p>Your problem is that you're using a <code>strategicMerge</code> patch, and you're merging an empty map (<code>{}</code>) with <code>{"sizeLimit": "2Gi"}</code>. If you merge an empty map with anything, it's a no-op: you end up with the "anything".</p>
<p>To explicitly delete an element, you have a few choices.</p>
<p>You can use the <code>$patch: replace</code> directive (you can find an example of that <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md" rel="nofollow noreferrer">here</a>) to have Kustomize <em>replace</em> the <code>emptyDir</code> element, rather than merging the contents. That would look like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
spec:
volumes:
- name: test-vol
emptyDir:
$patch: replace
</code></pre>
<p>The corresponding <code>kustomization.yaml</code> might look something like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: resources/my-new-web.yaml
</code></pre>
<hr />
<p>Alternately, you can use a JSONPatch patch, which is good for explicitly deleting fields:</p>
<pre><code>- path: /spec/volumes/0/emptyDir/sizeLimit
op: remove
</code></pre>
<p>Where <code>kustomization.yaml</code> would look like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ....//base
patches:
- target:
kind: Pod
name: example
path: resources/my-new-web.yaml
</code></pre>
<hr />
<p>You can find a complete runnable demonstration of this <a href="https://github.com/larsks/so-example-72086055" rel="nofollow noreferrer">here</a>.</p>
|
<p>Is it possible to scale down a pod to 0 replicas when another pod is down? I'm familiar with the basics of the Horizontal Auto-Scaling concept, but as I understand it, it scales pods up or down only when demand for resources (CPU, memory) changes.
My CI pipeline follows a green/blue pattern, so when the new version of the application is being deployed, the second one is scaled down to 0 replicas, leaving other pods belonging to the same environment up and wasting resources. Do you have any idea how to solve this using Kubernetes or Helm features?</p>
<p>Thanks</p>
| <p>If you have a <strong>CI</strong> pipeline you can just run a kubectl command and scale down the old deployment before deploying the new blue/green version; this way no resources are wasted.</p>
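<p>For example (the deployment name here is illustrative):</p>
<pre><code>kubectl scale deployment my-app-blue --replicas=0
</code></pre>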
<p>However, yes, you can also scale the deployment or application <strong>UP/DOWN</strong> automatically based on custom metrics.</p>
<p>I would recommend checking out the cloud-native project <strong>Keda</strong>: <a href="https://keda.sh/" rel="nofollow noreferrer">https://keda.sh/</a></p>
<p><strong>Keda</strong>:</p>
<blockquote>
<p>KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can
drive the scaling of any container in Kubernetes based on the number
of events needing to be processed.</p>
</blockquote>
<p><strong>Example</strong></p>
<pre><code>apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
name: {scaled-object-name}
spec:
scaleTargetRef:
deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
containerName: {container-name} #Optional. Default: deployment.spec.template.spec.containers[0]
pollingInterval: 30 # Optional. Default: 30 seconds
cooldownPeriod: 300 # Optional. Default: 300 seconds
minReplicaCount: 0 # Optional. Default: 0
maxReplicaCount: 100 # Optional. Default: 100
triggers:
# {list of triggers to activate the deployment}
</code></pre>
<p>Scale object ref : <a href="https://keda.sh/docs/1.4/concepts/scaling-deployments/#scaledobject-spec" rel="nofollow noreferrer">https://keda.sh/docs/1.4/concepts/scaling-deployments/#scaledobject-spec</a></p>
|
<p>Imagine we have a folder of <code>.yaml</code> files containing kubernetes objects, let's say, deployments, configmaps and HPAs.</p>
<pre><code>./file1.yaml # {'kind': 'Deployment', ... }, {'kind': 'ConfigMap', ...}
./file2.yaml # {'kind': 'ConfigMap', ... }, {'kind': 'HorizontalPodAutoscaler', ... }
</code></pre>
<p>I need to deserialize them into instances of a proper class, but unlike the regular deserialization method, I want to avoid relying on YAML tags and make the choice based on the YAML body instead (which is why I have doubts about the <code>register_class()</code> approach). There is a key <code>'kind'</code> that should identify the proper class instance.</p>
<p>The end goal is to parse, modify and dump those objects back (<strong>preserving comments and formatting</strong>, so those classes would be a subclass of a CommentedMap or something similar).</p>
<p>Is there a way in ruamel.yaml how could I parse YAML into</p>
<pre><code>from ruamel.yaml.comments import CommentedMap
class KubeObjectBase(CommentedMap):
def some_additional_func(self):
pass
class Deployment(KubeObjectBase):
def deployment_method(self):
pass
class ConfigMap(KubeObjectBase):
pass
</code></pre>
| <p>I am not entirely sure what the YAML files actually look like. The part after <code>#</code> in your example
isn't correct YAML, so I made things up.</p>
<p>This doesn't affect processing to get what you want. As long as you have valid, loadable YAML,
just recurse over the data and replace entries.</p>
<p>You need to somehow map the value for <code>kind</code> to your actual classes. If there are not
that many classes just make a string to class dictionary, if you have many, you should
scan your Python files and create that map automatically (either from the class name
or from some class attribute):</p>
<pre class="lang-py prettyprint-override"><code>import sys
import ruamel.yaml
FA = ruamel.yaml.comments.Format.attrib
from pathlib import Path
file1 = Path('file1.yaml')
file1.write_text("""\
- {'kind': 'Deployment', a: 1}
- kind: ConfigMap
b:
kind: Deployment
c: 3
x: 42
""")
file2 = Path('file2.yaml')
file2.write_text("""\
[
{'kind': 'ConfigMap', d: 4},
{'kind': 'HorizontalPodAutoscaler', e: 5},
]
""")
kob_map = {}
class KubeObjectBase(ruamel.yaml.comments.CommentedMap):
def some_additional_func(self):
pass
def __repr__(self):
return f"{self.__class__.__name__}({', '.join([f'{k}: {v}' for k, v in self.items()])})"
class Deployment(KubeObjectBase):
def deployment_method(self):
pass
kob_map['Deployment'] = Deployment
class ConfigMap(KubeObjectBase):
pass
kob_map['ConfigMap'] = ConfigMap
class HorizontalPodAutoscaler(KubeObjectBase):
pass
kob_map['HorizontalPodAutoscaler'] = HorizontalPodAutoscaler
yaml = ruamel.yaml.YAML()
for v in kob_map.values():
yaml.Representer.add_representer(v, yaml.Representer.represent_dict)
def un_kind(d, map):
if isinstance(d, dict):
for k, v in d.items():
un_kind(v, map)
try:
if 'kind' in v:
# typ = map[v.pop('kind')]
typ = nv = map[v['kind']]
d[k] = typ(v)
setattr(nv, FA, v.fa)
setattr(nv, '_comment_attrib', v.ca)
except TypeError:
pass
elif isinstance(d, list):
for idx, elem in enumerate(d):
un_kind(elem, map)
try:
if 'kind' in elem:
# typ = map[elem.pop('kind')]
typ = map[elem['kind']]
d[idx] = nv = typ(elem)
setattr(nv, FA, elem.fa)
setattr(nv, '_comment_attrib', elem.ca)
except TypeError:
pass
for fn in Path('.').glob('*.yaml'):
data = yaml.load(fn)
print(f'{fn}:')
un_kind(data, kob_map)
print(list(data))
yaml.dump(data, sys.stdout)
</code></pre>
<p>which gives:</p>
<pre class="lang-none prettyprint-override"><code>file1.yaml:
[Deployment(kind: Deployment, a: 1), ConfigMap(kind: ConfigMap, b: Deployment(kind: Deployment, c: 3, x: 42))]
- {kind: Deployment, a: 1}
- kind: ConfigMap
b:
kind: Deployment
c: 3
x: 42
file2.yaml:
[ConfigMap(kind: ConfigMap, d: 4), HorizontalPodAutoscaler(kind: HorizontalPodAutoscaler, e: 5)]
[{kind: ConfigMap, d: 4}, {kind: HorizontalPodAutoscaler, e: 5}]
</code></pre>
|
<p>I think the question has most of the information. If I have a Service that is a ClusterIP Service, and a Pod accesses that Service, since the load balancing is built into the routing table, are the routing rules smart enough to prefer traffic to the local Node? Is there a way to configure it so that the traffic does prefer (or even be forced) to go to the local Node?</p>
<p>Reasoning - I'm thinking about running a caching server as a DaemonSet, which makes sense if traffic is likely to go to a local Pod.</p>
| <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#internal-traffic-policy" rel="nofollow noreferrer">internalTrafficPolicy</a> is probably what you are looking for.</p>
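<p>A minimal sketch of what that looks like on a Service (the field is available on recent Kubernetes versions; the service name and selector are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cache
spec:
  internalTrafficPolicy: Local
  selector:
    app: cache-daemonset
  ports:
  - port: 11211
    targetPort: 11211
</code></pre>
<p>With <code>internalTrafficPolicy: Local</code>, cluster-internal traffic is only routed to endpoints on the same node as the client pod (and is dropped if no local endpoint exists).</p>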
|
<p>Our team is planning the migration of legacy enterprise application developed in ASP.net web-Forms, to .Net Core 6 as use the containerized approach. For this, mostly we will target the Kubernetes container orchestration platform.</p>
<p>The application is highly configurable and can be integrated with related apps up to certain extent. It has large number of XML based configuration files (more than 100). Current mode of deployment is IIS (on-premise).</p>
<p>The major technical challenge that we are facing is to manage our application configuration.</p>
<p>So ConfigMap is one the option available in Kubernetes can be used for configuration management. ConfigMap APIs allows to generate ConfigMap from environment, yaml file, existing configuration file or directory. Directory based approach seems more suitable. However, considering the maximum size limit of ConfigMap we may end up creating multiple ConfigMap.</p>
<p>We need to make sure:</p>
<ol>
<li>Migrated app should be able to use configuration however application image should be separate and configuration can be injected from outside.</li>
<li>The configuration changes should be reflected in application without POD.</li>
</ol>
<p>Since the ConfigMap is kind of read-only resource when container starts, I am currently looking for mechanism to use with configuration reload without the need of restarting POD/container.</p>
<p>Initial focus is to achieve this. (The impact of changed configuration on active users who might be referring to application feature based on previous configuration is a different topic altogether).</p>
| <p>You can do it with a ConfigMap alone, without restarting the POD; however, whether the change is actually picked up still depends on your application.</p>
<p>You can inject your ConfigMap and mount it into the POD as a volume; Kubernetes automatically refreshes the mounted files when the ConfigMap changes. Note that this does not work if you are mounting it with a <strong>subPath</strong>.</p>
<p>You can read more about auto-reloading a ConfigMap without restarting the POD here: <a href="https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388</a></p>
<p><strong>YAML example</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test-config
data:
hello: world
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test
spec:
selector:
matchLabels:
app: test
replicas: 1
template:
metadata:
labels:
app: test
spec:
containers:
- name: configmaptestapp
image: <Image>
volumeMounts:
- mountPath: /config
name: data-volume
ports:
- containerPort: 80
volumes:
- name: data-volume
configMap:
name: test-config
</code></pre>
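<p>To see the propagation in practice you can edit the ConfigMap and watch the mounted file change (names are taken from the example above; the delay depends on the kubelet sync period, so it can take around a minute):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec deploy/test -- cat /config/hello
# world
kubectl patch configmap test-config --type merge -p '{"data":{"hello":"again"}}'
# after the kubelet sync period has passed:
kubectl exec deploy/test -- cat /config/hello
# again
</code></pre>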
<p>Official documentation : <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically</a></p>
<blockquote>
<p>Mounted ConfigMaps are updated automatically When a ConfigMap
currently consumed in a volume is updated, projected keys are
eventually updated as well. The kubelet checks whether the mounted
ConfigMap is fresh on every periodic sync. However, the kubelet uses
its local cache for getting the current value of the ConfigMap. The
type of the cache is configurable using the
ConfigMapAndSecretChangeDetectionStrategy field in the
KubeletConfiguration struct. A ConfigMap can be either propagated by
watch (default), ttl-based, or by redirecting all requests directly to
the API server. As a result, the total delay from the moment when the
ConfigMap is updated to the moment when new keys are projected to the
Pod can be as long as the kubelet sync period + cache propagation
delay, where the cache propagation delay depends on the chosen cache
type (it equals to watch propagation delay, ttl of cache, or zero
correspondingly).</p>
<p>ConfigMaps consumed as environment variables are not updated
automatically and require a pod restart.</p>
</blockquote>
<p><strong>Note</strong>:</p>
<blockquote>
<p>A container using a ConfigMap as a subPath volume mount will not
receive ConfigMap updates.</p>
</blockquote>
<p>In this case your application needs to handle the updated content itself, e.g. by detecting changes to the mounted files.</p>
|
<p>My Rancher cluster has been set up for around 3 weeks and everything works fine, but I ran into one problem while installing MetalLB: there is no kube-proxy in my cluster, not even a kube-proxy pod on any node, so I could not follow the installation guide to set up the kube-proxy ConfigMap.</p>
<p>To me it is really strange to have a cluster without kube-proxy.</p>
<p>My setup for rancher cluster is below:</p>
<pre><code>Cluster Provider: RKE
Provisioning : Use existing nodes and create a cluster using RKE
Network Plugin : canal
</code></pre>
<p>Maybe something I misunderstand. I can discover nodeport and ClusterIP in service correctly.</p>
| <p>Finally, I found my kube-proxy. It runs as a host process, not as a Docker container.</p>
<p>In Rancher, we should edit <code>cluster.yml</code> to add extra args for kube-proxy. Rancher will then apply them on every node of the cluster automatically.</p>
<pre class="lang-sh prettyprint-override"><code>root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16
</code></pre>
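<p>For reference, the extra args shown above can be set in the RKE <code>cluster.yml</code> roughly like this (a sketch based on the <code>services</code> section of the RKE config; adjust the values to your needs):</p>
<pre class="lang-yaml prettyprint-override"><code>services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc
      ipvs-strict-arp: "true"
</code></pre>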
|
<p>I am spinning up a new Jupyter notebook instance from JupyterHub and wish to have Kubernetes API access from inside the spun-up container. According to the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/security.html#kubernetes-api-access" rel="nofollow noreferrer">docs</a>, I added the service account parameter in my helm values and, as expected, I can see the service account token mounted.</p>
<pre><code>subu@jupyter-subu:~$ sudo ls /run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
</code></pre>
<p>When I try to run kubectl however, I get an access denied</p>
<pre><code>subu@jupyter-subu:~$ kubectl get pods
error: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
</code></pre>
<p>Fair enough, but run it as sudo and it simply ignores the service account token.</p>
<pre><code>subu@jupyter-subu:~$ sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>If I set up the kubectl config manually with the details of the token, it works fine though; it's just the default settings that don't work. Any ideas on why this could be happening would be much appreciated!</p>
| <p>In order to make kubectl use the projected token, the environment variables KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST must be set in your environment. These are automatically injected upon pod start, but likely only for your user, not for the sudo <code>root</code> user.</p>
<p>Make sure to pass these variables for the root environment (<code>sudo -E kubectl get pods</code>) or make sure the projected token is readable by your user (this should be achievable by setting the KubeSpawner's singleuser_uid to your UID <a href="https://github.com/jupyterhub/kubespawner/issues/140" rel="nofollow noreferrer">https://github.com/jupyterhub/kubespawner/issues/140</a>).</p>
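<p>As a quick sanity check you can also point kubectl at the projected credentials explicitly (a sketch; the paths are the standard in-pod locations and the two variables are the ones injected by Kubernetes):</p>
<pre class="lang-sh prettyprint-override"><code>SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
sudo kubectl get pods \
  --server="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" \
  --token="$(sudo cat $SA_DIR/token)" \
  --certificate-authority="$SA_DIR/ca.crt"
</code></pre>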
|
<p>Below is my app definition that uses <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/" rel="nofollow noreferrer">azure csi store provider</a>. Unfortunately, this definition throws <code>Error: secret 'my-kv-secrets' not found</code> why is that?</p>
<h3>SecretProviderClass</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: my-app-dev-spc
spec:
provider: azure
secretObjects:
- secretName: my-kv-secrets
type: Opaque
data:
- objectName: DB-HOST
key: DB-HOST
parameters:
keyvaultName: my-kv-name
objects: |
array:
- |
objectName: DB-HOST
objectType: secret
tenantId: "xxxxx-yyyy-zzzz-rrrr-vvvvvvvv"
</code></pre>
<h3>Pod</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: debug
name: debug
spec:
containers:
- args:
- sleep
- 1d
name: debug
image: alpine
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: my-kv-secrets
key: DB-HOST
volumes:
- name: kv-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: my-app-dev-spc
nodePublishSecretRef:
name: my-sp-secrets
</code></pre>
| <p>It turned out that the Secrets Store CSI driver works only with <code>volumeMounts</code>. So if you forget to specify them in your yaml definition then it will not work! Below is the fix.</p>
<h3>Pod</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: debug
name: debug
spec:
containers:
- args:
- sleep
- 1d
name: debug
image: alpine
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: my-kv-secrets
key: DB-HOST
volumeMounts:
- name: kv-secrets
mountPath: /mnt/kv_secrets
readOnly: true
volumes:
- name: kv-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: my-app-dev-spc
nodePublishSecretRef:
name: my-sp-secrets
</code></pre>
|
<p>We have one Kubernetes cluster running and we want to connect it to a new cluster. I found Google Clouds <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services" rel="nofollow noreferrer">multicluster-service (MCS API)</a> which seems promising, but it needs all <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services#requirements" rel="nofollow noreferrer">clusters to be in the same VPC</a>.</p>
<p>How do one migrate / move / add a running cluster to a VPC?</p>
| <p>I think you cannot <strong>move</strong> the cluster to another VPC, although you can create a new cluster or clone the existing one into the new VPC.</p>
<p>You can clone the existing cluster by using the <strong>DUPLICATE</strong> tab in the cluster details; it will just create infrastructure with the same config specs, and at that point you can update the Networking (VPC) settings as needed.</p>
<p>Once the duplicate (cloned) cluster is up & running in the desired VPC, you can start migrating your workloads: deployments, secrets, stateful sets, etc.</p>
<p>For workload migration you can check out this tool: <a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p>
<p><strong>Velero</strong> is mostly useful as a backup & restore tool.</p>
<p>It is usually easier to migrate the workloads this way than to re-apply the old <strong>YAML</strong> manifests by hand, but it depends on your setup.</p>
<p>Here is my small article about the <code>migrating/cloning</code> workload across <strong>GCP</strong> <strong>GKE</strong> clusters: <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8</a></p>
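<p>A minimal Velero round trip could look roughly like this (a sketch; it assumes Velero is installed in both clusters with a shared backup storage location, and the namespace name is just an example):</p>
<pre class="lang-sh prettyprint-override"><code># in the old cluster
velero backup create my-app-backup --include-namespaces my-app

# in the new cluster, pointing at the same backup storage location
velero restore create --from-backup my-app-backup
</code></pre>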
|
<p>I have a Kubernetes cluster set up on Google Kubernetes Engine. I have a pod that hosts an RTMP application. This application requires two ports, one for HTTP and one for RTMP. I have set up an ingress Nginx load balancer. It routes the HTTP traffic correctly. I want to be able to route the RTMP traffic as well, but when I write RTMP in my YAML file, it shows it as an unknown field. I am using skaffold and Google Cloud Build to deploy.</p>
<p>Here is the Ingress-srv file</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: stream-server-srv
servicePort: 8000
</code></pre>
<p>What I want to be able to do is have an entry for RTMP on port 1935, which I can then use to stream live videos using the RTMP protocol.</p>
| <p>RTMP is a TCP-based protocol and standard Ingress does not support TCP services.</p>
<p>The NGINX Ingress controller (which it looks like you are using) can <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">be configured to expose TCP services</a>.</p>
<p>First, you'll need to make sure that you expose both the HTTP and RTMP ports in your <code>stream-server-srv</code> Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: stream-server-srv
namespace: default
spec:
selector:
app: stream-server
type: ClusterIP
ports:
- name: http-port
port: 8000
targetPort: 8000
protocol: TCP
- name: rtmp-port
port: 1935
targetPort: 1935
protocol: TCP
</code></pre>
<p>(replace default with your namespace)</p>
<p>You will also need to make sure that the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/cloud/1.23/deploy.yaml#L313" rel="nofollow noreferrer">Service used to expose the NGINX ingress</a> exposes port 1935 as well. For example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.2.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
- name: rtmp
port: 1935
protocol: TCP
targetPort: 1935
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: LoadBalancer
</code></pre>
<p>Finally, you'll need to update / patch the NGINX tcp services ConfigMap:</p>
<pre><code>kubectl patch configmap tcp-services -n ingress-nginx --patch '{"data":{"1935":"default/stream-server-srv:1935"}}'
</code></pre>
<p>(replace "default/stream-server-srv" with your namespace/servicename)</p>
|
<p>I had a clean Azure subscription and created AKS using Standard SKU and very basic setup (2 D2as_v4 nodes, default settings). Installed NGINX like:</p>
<pre><code>helm install ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<p>Then put there a dummy app and made it accessible via Ingress + ClusterIP service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-demo
spec:
replicas: 1
selector:
matchLabels:
app: ingress-demo
template:
metadata:
labels:
app: ingress-demo
spec:
containers:
- name: ingress-demo
image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: ingress-demo
spec:
type: ClusterIP
ports:
- port: 80
selector:
app: ingress-demo
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: development-ingress
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ingress-demo
port:
number: 80
</code></pre>
<p>This <strong>works</strong> and I can access the dummy app on the Public IP assigned to my Ingress.</p>
<h2>Issue:</h2>
<p>I cannot access my app using any non-default path configuration. I get <code>Error: Request timed out</code>
for anything other than <code>/</code>. This Ingress change does not work for me:</p>
<pre><code>- path: /foo
</code></pre>
<p>Same issue using regex:</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
...
- path: /dev(/|$)(.*)
</code></pre>
<h2>UPDATE</h2>
<p>I have tried multiple fixes suggested in the <a href="https://github.com/Azure/AKS/issues/2903" rel="nofollow noreferrer">GitHub issue mentioned by silent</a>.<br />
Using the following option makes the <code>/foo</code> path work:</p>
<pre><code>--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
</code></pre>
<p>But it still doesn't work for regex paths. To make a path like <code>/dev(/|$)(.*)</code> work, you need to either change the Azure Load Balancer probe <a href="https://github.com/Azure/AKS/issues/2903#issuecomment-1106784221" rel="nofollow noreferrer">from HTTP to TCP</a> (keeping the port) or install the NGINX controller with this flag:</p>
<pre><code>--set controller.service.externalTrafficPolicy=Local
</code></pre>
| <p>Not quite sure but it sounds like you ran into this issue: <a href="https://github.com/Azure/AKS/issues/2903" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2903</a></p>
<p>The solution was posted in this post: <a href="https://github.com/Azure/AKS/issues/2903#issuecomment-1109758644" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2903#issuecomment-1109758644</a></p>
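<p>In short, the workarounds from that thread boil down to what you also found in your update: setting the Azure health probe path annotation and/or switching the controller Service to <code>externalTrafficPolicy=Local</code>, for example at install time (a sketch, assuming the <code>ingress-nginx</code> Helm chart and release name):</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Local
</code></pre>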
|
<p>I am having a small shell script . it has the following snippet of code</p>
<pre><code>kubectl scale deployment sample1 --replicas=0 -n sb1
kubectl scale deployment sample1 --replicas=3 -n sb1
</code></pre>
<p>The first command scales down my deployment and the second command scales it back up.
Are there any flags / conditions (wait --for=condition=complete) that I can use to ensure the deployment has finished scaling down and scaling up?</p>
| <p>There is no wait-for-completion flag in <code>kubectl scale --help</code>. You
may run the following to check whether the replica rollout has completed:</p>
<pre><code>kubectl scale deployment sample1 --replicas 5 && kubectl rollout status deployment/sample1
</code></pre>
<p>Example:</p>
<pre><code>kubectl scale deployment sample1 --replicas 5
deployment.apps/sample1 scaled
</code></pre>
<p>After scaling check the rollout status, the rollout status will block until the rollout to desired replica count is available:</p>
<pre><code>kubectl rollout status deployment/sample1
Waiting for deployment "sample1" rollout to finish: 0 of 5 updated replicas are available...
Waiting for deployment "sample1" rollout to finish: 1 of 5 updated replicas are available...
Waiting for deployment "sample1" rollout to finish: 3 of 5 updated replicas are available...
Waiting for deployment "sample1" rollout to finish: 4 of 5 updated replicas are available...
deployment "sample1" successfully rolled out
kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
sample1 5/5 5 5 2m40s
</code></pre>
|
<p>Lets say I create a service account and retrieve the token associated to it:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n myexample describe sa myexample-sa
kubectl describe secret myexample-sa-token-xxxxx
</code></pre>
<p>Value of the token:</p>
<pre><code>token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkpHWGxoRlNRTklaWjVzQTh2dmZMWVVsM1haclpRbXRVTEpFZnNUcER6RnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZXZlbG9wbWVudCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJteWV4YW1wbGUtc2EtdG9rZW4tOGw3cnciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibXlleGFtcGxlLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTM1NDhjNTUtZmJlYS00MDc1LThhNDYtNTVhZDQwN2VmYzMxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldmVsb3BtZW50Om15ZXhhbXBsZS1zYSJ9.FJMK2PIsloJRqGGIYAs_ZLpVn9-aW4UPWnGvrnNDscAWHtpatTknAJ0T075gXD86X6j_EShp7JLfv5J_aNRTHJWsYNzJIOXH0ZipdvsMW2oMfEK-VCDLgxlJnT3xikIYaFgYRgmw2-iraSiC-HcSmuuF8XPJgW93JNHqy2Vw2lka9GUzaxoD9D4UAvISk19peHPfDJZjEjr4r5QCUljQz8Va72dwOqNh3b01OI0-7epoRWjEjtCCOhKYyu2hErroo6IlaiUchN_VKTrL5182POMONYmKYrP0Z4ymX0AoA9dkKKbLjtm-Vkxp3B6xhtIrvaJ4upGH2AVNYSFb9aYacg
</code></pre>
<p>Then, I create a pod in a deployment and associate the service account above to the pod:</p>
<pre class="lang-yaml prettyprint-override"><code>...
spec:
template:
spec:
serviceAccountName: myexample-sa
...
</code></pre>
<p>Now,
after the pod is deployed I <code>exec</code> into it:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n myexample exec -it name-of-pod -- /bin/bash
</code></pre>
<p>If I run</p>
<pre class="lang-sh prettyprint-override"><code>cat /var/run/secrets/kubernetes.io/serviceaccount/token
</code></pre>
<p>The output is:</p>
<pre><code>eyJhbGciOiJSUzI1NiIsImtpZCI6IkpHWGxoRlNRTklaWjVzQTh2dmZMWVVsM1haclpRbXRVTEpFZnNUcER6RnMifQ.eyJhdWQiOlsidW5rbm93biJdLCJleHAiOjE2ODMxMjk2ODQsImlhdCI6MTY1MTU5MzY4NCwiaXNzIjoicmtlIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZXZlbG9wbWVudCIsInBvZCI6eyJuYW1lIjoic3RhdGljLXdlYiIsInVpZCI6ImZmNjMyOTU4LTM5MDctNDkyOS1hZGJjLWFjY2UyYzhkMTMxOCJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoibXlleGFtcGxlLXNhIiwidWlkIjoiNTM1NDhjNTUtZmJlYS00MDc1LThhNDYtNTVhZDQwN2VmYzMxIn0sIndhcm5hZnRlciI6MTY1MTU5NzI5MX0sIm5iZiI6MTY1MTU5MzY4NCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldmVsb3BtZW50Om15ZXhhbXBsZS1zYSJ9.Po-kZUo8nhOnJGxuHtvz6806QgLqPaztS0iWCNpnY0WCfpbRsmt2SWPckMo4P535DTqEJyDslUCF0loL0Tw2RNZxhHwRa-ul3P2G_9CmeApvGTX4nwyBFXjllsAWDiKWJkrxzpEkS0vf2N4r-9mGlEGkIWmPbUyDRD5LyeVmFMgPLNWYBLlAVG9qN5aJ5zzOq9pDFeY5jSXnOl3Ii3ddCZVxhnHDCGkFzu6w_YWkC-7iN68TlykwZb9wy2tFydCpAsPA
</code></pre>
<p>I compared this token to the one I retrieved above (refer to the first 2 commands) and they are different! Shouldn't this token value be exactly the same as the one I got for <code>myexample-sa-token-xxxxx</code>?</p>
| <p>No, they should not be the same when the type is <strong>Service account token volume projection</strong>, which is the case above.</p>
<blockquote>
<p>Service account token volume projection: Mounts a short-lived, automatically rotating Kubernetes service account token into the Pod. This token is a OpenID Connect Token and can be used to authenticate to the Kubernetes API and other external services.</p>
</blockquote>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts#assigning_a_kubernetes_service_account_to_a_pod" rel="nofollow noreferrer">Read more</a></p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">Service Account Token Volume Projection</a></p>
<p>You can decode the tokens by using <a href="https://jwt.io/" rel="nofollow noreferrer">jwt.io</a> and take a look at the payloads: one represents the service account itself, while the other is actually bound to the pod.</p>
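<p>If you prefer the command line, the payload can be decoded roughly like this (a sketch; <code>base64 -d</code> may complain about missing padding, which can be ignored or fixed by appending <code>==</code>):</p>
<pre class="lang-sh prettyprint-override"><code># the second dot-separated segment of the JWT is the payload
cat /var/run/secrets/kubernetes.io/serviceaccount/token | cut -d. -f2 | base64 -d
</code></pre>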
<p>Also, note that K8s manages and rotates the pod key.</p>
<blockquote>
<p>The kubelet will request and store the token on behalf of the pod, make the token available to the pod at a configurable file path, and refresh the token as it approaches expiration. The kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours</p>
</blockquote>
|
<p>I'm trying to develop a custom resource on kubernetes with kubebuilder.
In this CR, I have a field of type <code>url.URL</code></p>
<p>I get this error :</p>
<pre><code>(*in).DeepCopyInto undefined (type *url.URL has no field or method DeepCopyInto)
</code></pre>
<p>Is there a way to work with type <code>url.URL</code> when developing a CR ?</p>
<p>Thanks</p>
| <p>So I found a solution.<br />
I don't know if it's the best one, but I've created a custom URL type that adds the missing piece needed to use <code>net/url</code> with <code>controller-gen</code>.</p>
<p>It works fine:
<a href="https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7" rel="nofollow noreferrer">https://gist.github.com/quentinalbertone/ec00085b57992d836c08d4586295ace7</a></p>
|
<p>I'm trying to port-forward my kubernetes service (through minikube) using the following command:</p>
<pre><code>kubectl port-forward svc/hapi-fhir-server 8080:8080 --address=0.0.0.0
</code></pre>
<p>But after trying to reach <code>localhost:8080</code> I get the following error: <code>"....... an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod {PodID, uid ....:E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused"</code></p>
<p>I checked which port the pod is listening to through the following command
<code>kubectl get pod hapi-fhir-server-666b846cbf-lhmr4 --template="{{(index (index .spec.containers 0).ports 0).containerPort}}"</code> resulting in answer <code>8080</code></p>
<p>For if this helps, my service & deployment files (having removed unrelated lines)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
....
spec:
replicas: 2
selector:
....
template:
....
spec:
containers:
- image: .........
name: hapi-fhir-server
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
status: {}
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
....
spec:
type: NodePort
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 32069
selector:
app: hapi-fhir-server
status:
loadBalancer: {}
</code></pre>
<p>The image being used is a HAPI FHIR server with the following configuration that runs on Apache Tomcat (server.xml):</p>
<pre><code><Connector port="${server.port}" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"
maxThreads="${server.tomcat.max-threads}"
minSpareThreads="${server.tomcat.min-spare-threads}" />
</code></pre>
<p><code>server.port</code> being 8080.</p>
<p>I can't seem to find an accurate answer to why this is happening, even after going through documentation or similar questions like: <a href="https://stackoverflow.com/questions/53799600/kubernetes-port-forwarding-connection-refused">Kubernetes Port Forwarding - Connection refused</a>.</p>
<p>Is there something I am missing, not doing correctly or that I am not thinking of?
<strong>Note:</strong> I am relatively new to Kubernetes.</p>
| <p>Apparently there was no issue with the Kubernetes or server configuration; rather, the Dockerfile didn't expose port 8080. Now it does, through the following:</p>
<pre><code># Dockerfile
......
EXPOSE 8080
......
</code></pre>
<p>Thanks to IvanAracki, who pointed this out to me in the comments.</p>
|
<p>Running Spring Boot 2.6.6 and Spring Cloud 2021.0.1</p>
<p>I'm attempting to migrate an existing service to Kubernetes, so I added a dependency on <code>spring-cloud-starter-kubernetes-client-all</code>. By default, I have <code>spring.cloud.kubernetes.enabled=false</code> and use the <code>kubernetes</code> profile to enable it. This is intended to allow this service to operate in both Kubernetes and the legacy environment.</p>
<p>My unit-tests complete successfully when building locally but fail in my Bitbucket pipeline with the following error:</p>
<pre><code>java.lang.IllegalStateException: Failed to load ApplicationContext
Caused by: org.springframework.cloud.kubernetes.commons.config.NamespaceResolutionFailedException: unresolved namespace
</code></pre>
<p>I suspect this occurs because Bitbucket Pipelines are deployed in Kubernetes and Spring somehow detects this. I have tried the following to no avail</p>
<ul>
<li>Pass <code>--define SPRING_CLOUD_KUBERNETES_ENABLED=false</code> to Maven on the command line</li>
<li>Set this as an environment variable e.g., <code>export SPRING_CLOUD_KUBERNETES_ENABLED=false</code></li>
<li>Pass <code>--define spring.cloud.kubernetes.enabled=false</code> to Maven on the command line</li>
</ul>
<p>I have also checked StackOverflow for similar issues and investigated the code, also to no avail. The class that is actually raising the issue is <code>KubernetesClientConfigUtils</code>, which should be disabled.</p>
<p>I would appreciate any guidance you can provide.</p>
| <p>Spring Cloud checks whether the application is running in a K8s environment before loading the active spring profile configuration and adds <code>kubernetes</code> to the active profiles. Previously, in Hoxton SR10, the profile was identified and <code>bootstrap-<profile>.yml</code> loaded before checking for Kubernetes. <code>spring.cloud.kubernetes.enabled</code> was picked up from there if set in the profile configuration or the maven pom properties.</p>
<p>As maven allows setting system properties on the command line, kubernetes detection can be disabled by setting it there:</p>
<pre><code>mvn test -Dspring.cloud.kubernetes.enabled=false
</code></pre>
<p>The surefire maven plugin allows setting system properties for all tests, so it's possible to set <code>spring.cloud.kubernetes.enabled</code> to be <code>false</code> in the surefire plugin configuration.</p>
<pre><code><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<systemPropertyVariables>
<spring.cloud.kubernetes.enabled>false</spring.cloud.kubernetes.enabled>
</systemPropertyVariables>
</configuration>
</plugin>
</code></pre>
<p>It is also possible to set the configuration on individual test classes using @Faron's approach to explicitly set the property in any <code>WebMvcTest</code> annotated unit test, e.g.:</p>
<pre><code>@WebMvcTest(properties = { "spring.cloud.kubernetes.enabled=false" })
</code></pre>
<p>It should also work on other unit test annotation that loads a Spring application context, such as <code>WebFluxTest</code>.</p>
|
<p>I am creating a Helm Chart and I am having problems when it comes to importing files:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vcl-template
namespace: {{.Release.Namespace}}
data:
{{- (.Files.Glob "config/varnish/default.vcl.tmpl").AsConfig | nindent 2 }}
{{- (.Files.Glob "config/varnish/nginx.conf").AsConfig | nindent 2 }}
</code></pre>
<p>This imports the file <code>config/varnish/nginx.conf</code> just fine but the file <code>config/varnish/default.vcl.tmpl</code> is imported with <code>\n</code> instead of newlines, so the data on the ConfigMap gets all buggy:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vcl-template
namespace: default
data:
default.vcl.tmpl: "vcl 4.0;\n\nimport std;\nimport directors;\n\n{{ range .Frontends
}}\nbackend {{ .Name }} {\n .host = \"{{ .Host }}\";\n .port = \"{{ .Port
}}\";\n}\n{{- end }}\n\n{{ range .Backends }}\nbackend be-{{ .Name }} {\n .host
= \"{{ .Host }}\";\n .port = \"{{ .Port }}\";\n}\n{{- end }}\n\nacl purge {\n
\ \"127.0.0.1\";\n \"localhost\";\n \"::1\";\n {{- range .Frontends }}\n
\ \"{{ .Host }}\";\n {{- end }}\n {{- range .Backends }}\n \"{{ .Host
}}\";\n {{- end }}\n}\n\nsub vcl_init {\n new cluster = directors.hash();\n\n
\ {{ range .Frontends -}}\n cluster.add_backend({{ .Name }}, 1);\n {{ end
}}\n\n new lb = directors.round_robin();\n\n {{ range .Backends -}}\n lb.add_backend(be-{{
.Name }});\n {{ end }}\n}\n\nsub vcl_recv {\n\n unset req.http.x-cache;\n
\ set req.backend_hint = cluster.backend(req.url);\n set req.http.x-shard =
req.backend_hint;\n if (req.http.x-shard != server.identity) {\n return(pass);\n
\ }\n set req.backend_hint = lb.backend();\n\n if (req.method == \"PURGE\")
{\n if (client.ip !~ purge) {\n return (synth(405, \"Method not
allowed\"));\n }\n # To use the X-Pool header for purging varnish
during automated deployments, make sure the X-Pool header\n # has been added
to the response in your backend server config. This is used, for example, by the\n
\ # capistrano-magento2 gem for purging old content from varnish during it's
deploy routine.\n if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool)
{\n return (synth(400, \"X-Magento-Tags-Pattern or X-Pool header required\"));\n
\ }\n if (req.http.X-Magento-Tags-Pattern) {\n ban(\"obj.http.X-Magento-Tags
~ \" + req.http.X-Magento-Tags-Pattern);\n }\n if (req.http.X-Pool)
{\n ban(\"obj.http.X-Pool ~ \" + req.http.X-Pool);\n }\n return
(synth(200, \"Purged\"));\n }\n\n if (req.method != \"GET\" &&\n req.method
!= \"HEAD\" &&\n req.method != \"PUT\" &&\n req.method != \"POST\"
&&\n req.method != \"TRACE\" &&\n req.method != \"OPTIONS\" &&\n req.method
!= \"DELETE\") {\n /* Non-RFC2616 or CONNECT which is weird. */\n return
(pipe);\n }\n\n # We only deal with GET and HEAD by default\n if (req.method
!= \"GET\" && req.method != \"HEAD\") {\n return (pass);\n }\n\n #
Bypass shopping cart, checkout and search requests\n if (req.url ~ \"/checkout\"
|| req.url ~ \"/catalogsearch\") {\n return (pass);\n }\n\n # Bypass
admin\n if (req.url ~ \"^/admin($|/.*)\") {\n return (pass);\n }\n\n
\ # Bypass health check requests\n if (req.url ~ \"/pub/health_check.php\")
{\n return (pass);\n }\n\n # Set initial grace period usage status\n
\ set req.http.grace = \"none\";\n\n # normalize url in case of leading HTTP
scheme and domain\n set req.url = regsub(req.url, \"^http[s]?://\", \"\");\n\n
\ # collect all cookies\n std.collect(req.http.Cookie);\n\n # Compression
filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression\n if (req.http.Accept-Encoding)
{\n if (req.url ~ \"\\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$\")
{\n # No point in compressing these\n unset req.http.Accept-Encoding;\n
\ } elsif (req.http.Accept-Encoding ~ \"gzip\") {\n set req.http.Accept-Encoding
= \"gzip\";\n } elsif (req.http.Accept-Encoding ~ \"deflate\" && req.http.user-agent
!~ \"MSIE\") {\n set req.http.Accept-Encoding = \"deflate\";\n }
else {\n # unknown algorithm\n unset req.http.Accept-Encoding;\n
\ }\n }\n\n # Remove all marketing get parameters to minimize the cache
objects\n if (req.url ~ \"(\\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=\")
{\n set req.url = regsuball(req.url, \"(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?\",
\"\");\n set req.url = regsub(req.url, \"[?|&]+$\", \"\");\n }\n\n #
Static files caching\n if (req.url ~ \"^/(pub/)?(media|static)/\") {\n return
(pass);\n }\n\n return (hash);\n}\n\nsub vcl_hash {\n if (req.http.cookie
~ \"X-Magento-Vary=\") {\n hash_data(regsub(req.http.cookie, \"^.*?X-Magento-Vary=([^;]+);*.*$\",
\"\\1\"));\n }\n\n # For multi site configurations to not cache each other's
content\n if (req.http.host) {\n hash_data(req.http.host);\n } else
{\n hash_data(server.ip);\n }\n\n if (req.url ~ \"/graphql\") {\n call
process_graphql_headers;\n }\n\n # To make sure http users don't see ssl warning\n
\ if (req.http.X-Forwarded-Proto) {\n hash_data(req.http.X-Forwarded-Proto);\n
\ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n
\ }\n if (req.http.Content-Currency) {\n hash_data(req.http.Content-Currency);\n
\ }\n}\n\nsub vcl_backend_response {\n\n set beresp.grace = 3d;\n\n if (beresp.http.content-type
~ \"text\") {\n set beresp.do_esi = true;\n }\n\n if (bereq.url ~ \"\\.js$\"
|| beresp.http.content-type ~ \"text\") {\n set beresp.do_gzip = true;\n
\ }\n\n if (beresp.http.X-Magento-Debug) {\n set beresp.http.X-Magento-Cache-Control
= beresp.http.Cache-Control;\n }\n\n # cache only successfully responses and
404s\n if (beresp.status != 200 && beresp.status != 404) {\n set beresp.ttl
= 0s;\n set beresp.uncacheable = true;\n return (deliver);\n }
elsif (beresp.http.Cache-Control ~ \"private\") {\n set beresp.uncacheable
= true;\n set beresp.ttl = 86400s;\n return (deliver);\n }\n\n
\ # validate if we need to cache it and prevent from setting cookie\n if (beresp.ttl
> 0s && (bereq.method == \"GET\" || bereq.method == \"HEAD\")) {\n unset
beresp.http.set-cookie;\n }\n\n # If page is not cacheable then bypass varnish
for 2 minutes as Hit-For-Pass\n if (beresp.ttl <= 0s ||\n beresp.http.Surrogate-control
~ \"no-store\" ||\n (!beresp.http.Surrogate-Control &&\n beresp.http.Cache-Control
~ \"no-cache|no-store\") ||\n beresp.http.Vary == \"*\") {\n # Mark
as Hit-For-Pass for the next 2 minutes\n set beresp.ttl = 120s;\n set
beresp.uncacheable = true;\n }\n\n return (deliver);\n}\n\nsub vcl_deliver
{\n if (resp.http.X-Magento-Debug) {\n if (resp.http.x-varnish ~ \" \")
{\n set resp.http.X-Magento-Cache-Debug = \"HIT\";\n set resp.http.Grace
= req.http.grace;\n } else {\n set resp.http.X-Magento-Cache-Debug
= \"MISS\";\n }\n } else {\n unset resp.http.Age;\n }\n\n #
Not letting browser to cache non-static files.\n if (resp.http.Cache-Control
!~ \"private\" && req.url !~ \"^/(pub/)?(media|static)/\") {\n set resp.http.Pragma
= \"no-cache\";\n set resp.http.Expires = \"-1\";\n set resp.http.Cache-Control
= \"no-store, no-cache, must-revalidate, max-age=0\";\n }\n\n unset resp.http.X-Magento-Debug;\n
\ unset resp.http.X-Magento-Tags;\n unset resp.http.X-Powered-By;\n unset
resp.http.Server;\n unset resp.http.X-Varnish;\n unset resp.http.Via;\n unset
resp.http.Link;\n}\n\nsub vcl_hit {\n if (obj.ttl >= 0s) {\n # Hit within
TTL period\n return (deliver);\n }\n if (std.healthy(req.backend_hint))
{\n if (obj.ttl + 300s > 0s) {\n # Hit after TTL expiration, but
within grace period\n set req.http.grace = \"normal (healthy server)\";\n
\ return (deliver);\n } else {\n # Hit after TTL and
grace expiration\n return (miss);\n }\n } else {\n #
server is not healthy, retrieve from cache\n set req.http.grace = \"unlimited
(unhealthy server)\";\n return (deliver);\n }\n}\n"
nginx.conf: |
worker_processes auto;
events {
worker_connections 1024;
}
pcre_jit on;
error_log /var/log/nginx/error.log warn;
include /etc/nginx/modules/*.conf;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
client_max_body_size 15m;
keepalive_timeout 30;
sendfile on;
tcp_nodelay on;
gzip_vary on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p><code>nginx.conf</code>:</p>
<pre><code>worker_processes auto;
events {
worker_connections 1024;
}
pcre_jit on;
error_log /var/log/nginx/error.log warn;
include /etc/nginx/modules/*.conf;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
client_max_body_size 15m;
keepalive_timeout 30;
sendfile on;
tcp_nodelay on;
gzip_vary on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p><code>default.vcl.tmpl</code>:</p>
<pre><code>vcl 4.0;
import std;
import directors;
{{ range .Frontends }}
backend {{ .Name }} {
.host = "{{ .Host }}";
.port = "{{ .Port }}";
}
{{- end }}
{{ range .Backends }}
backend be-{{ .Name }} {
.host = "{{ .Host }}";
.port = "{{ .Port }}";
}
{{- end }}
acl purge {
"127.0.0.1";
"localhost";
"::1";
{{- range .Frontends }}
"{{ .Host }}";
{{- end }}
{{- range .Backends }}
"{{ .Host }}";
{{- end }}
}
sub vcl_init {
new cluster = directors.hash();
{{ range .Frontends -}}
cluster.add_backend({{ .Name }}, 1);
{{ end }}
new lb = directors.round_robin();
{{ range .Backends -}}
lb.add_backend(be-{{ .Name }});
{{ end }}
}
sub vcl_recv {
unset req.http.x-cache;
set req.backend_hint = cluster.backend(req.url);
set req.http.x-shard = req.backend_hint;
if (req.http.x-shard != server.identity) {
return(pass);
}
set req.backend_hint = lb.backend();
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during it's deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass shopping cart, checkout and search requests
if (req.url ~ "/checkout" || req.url ~ "/catalogsearch") {
return (pass);
}
# Bypass admin
if (req.url ~ "^/admin($|/.*)") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "/pub/health_check.php") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove all marketing get parameters to minimize the cache objects
if (req.url ~ "(\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
set req.url = regsuball(req.url, "(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
return (pass);
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# For multi site configurations to not cache each other's content
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
if (req.url ~ "/graphql") {
call process_graphql_headers;
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
}
sub process_graphql_headers {
if (req.http.Store) {
hash_data(req.http.Store);
}
if (req.http.Content-Currency) {
hash_data(req.http.Content-Currency);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successfully responses and 404s
if (beresp.status != 200 && beresp.status != 404) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 86400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} else {
unset resp.http.Age;
}
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 300s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (miss);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
</code></pre>
<p>How come the second file is not imported correctly? Latest Helm version and latest Go version.</p>
<p>Does anyone have any ideas? The encoding of both files on VSCode shows as UTF8.</p>
| <p>They're actually equivalent from YAML's point of view, just not as pretty. Most importantly for your specific case, YAML cannot represent <strong>trailing</strong> whitespace without quoting it, which is what it did because of line 164 of your .tmpl file, as seen by the <code>\n \n</code> in:</p>
<pre class="lang-yaml prettyprint-override"><code> \ }\n \n}\n\nsub process_graphql_headers {\n if (req.http.Store) {\n hash_data(req.http.Store);\n
</code></pre>
<pre><code>$ sed -ne 164p default.vcl.tmpl | xxd
00000000: 2020 2020 0a .
</code></pre>
<p>turning on "strip trailing whitespace" in your editor will help that, or for this specific case you can just fix line 164</p>
|
<p>I am trying to create an operator using operator-sdk.</p>
<p>I have installed operator-sdk on my macOS machine.</p>
<p>My Environment Details :</p>
<p>go version <strong>go1.15.12 darwin/amd64</strong></p>
<p>operator-sdk version: <strong>"v1.7.2", commit: "6db9787d4e9ff63f344e23bfa387133112bda56b", kubernetes version: "v1.19.4", go version: "go1.16.3", GOOS: "darwin", GOARCH: "amd64"</strong></p>
<p>I am trying to create an operator using command -</p>
<pre><code>operator-sdk init hello-operator
</code></pre>
<p>I have enabled GO111MODULE.</p>
<p>When I try to run operator-sdk init, I get the following error.</p>
<pre><code>Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/[email protected]
# container/list
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# crypto/internal/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode/utf8
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/race
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/apimachinery/pkg/selection
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# encoding
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode/utf16
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/nettrace
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# math/bits
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/internal/sys
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/unsafeheader
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# unicode
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# vendor/golang.org/x/crypto/internal/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# crypto/subtle
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# vendor/golang.org/x/crypto/cryptobyte/asn1
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# golang.org/x/sys/internal/unsafeheader
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/internal/atomic
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# google.golang.org/protobuf/internal/flags
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# github.com/google/go-cmp/cmp/internal/flags
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/utils/integer
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/utils/buffer
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# internal/cpu
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# k8s.io/apimachinery/pkg/types
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# sync/atomic
compile: version "go1.15.6" does not match go tool version "go1.15.12"
# runtime/cgo
compile: version "go1.15.6" does not match go tool version "go1.15.12"
Error: failed to initialize project: unable to scaffold with "base.go.kubebuilder.io/v3": exit status 2
FATA[0003] failed to initialize project: unable to scaffold with "base.go.kubebuilder.io/v3": exit status 2
</code></pre>
<p>Does anybody have any idea about this?</p>
<p>Thanks in advance.</p>
| <p>The commands below show how you can scaffold and run an operator with the <code>operator-sdk</code> CLI tool. As of writing, the latest version is v1.20.0. They cover some of the pitfalls, such as setting the correct Go environment variables and installing gcc on some operating systems. I tried it with <code>Ubuntu 18.04.3 LTS (Bionic Beaver)</code>.</p>
<pre class="lang-sh prettyprint-override"><code>#golang
echo "--- Installing golang ---"
GOVERSION=1.17.9
GOTAR=go$GOVERSION.linux-amd64.tar.gz
wget https://dl.google.com/go/$GOTAR
sudo tar -xvf $GOTAR
rm $GOTAR
sudo mv go /usr/local/bin
#gcc (used by operator-sdk CLI)
echo "--- Installing gcc ---"
sudo apt update
sudo apt install -y build-essential
sudo apt-get install manpages-dev
#operator-sdk
echo "--- Installing operator-sdk ---"
curl -Lo ./operator-sdk https://github.com/operator-framework/operator-sdk/releases/download/v1.20.0/operator-sdk_linux_amd64
chmod +x ./operator-sdk
sudo mv ./operator-sdk /usr/local/bin/operator-sdk
#environment variables
export GOROOT=/usr/local/bin/go
export PATH=$GOROOT/bin:$PATH
#verify versions
go version
operator-sdk version
#scaffold and run the HelloWorld operator
sudo -s
mkdir hello-operator
chmod 777 hello-operator
cd hello-operator
operator-sdk init --domain example.com --repo github.com/example/memcached-operator
operator-sdk create api --group example --version v1alpha1 --kind HelloWorld --resource --controller
make manifests
make run
</code></pre>
|
<p>Starting MongoDB with this connection string <code>mongodb://mongo-service:27017</code> is giving me this error.</p>
<p>The issue only happens for me when running on Kubernetes. Localhost is working fine.</p>
<pre><code>ERROR:src.app:Exception on /api/polls [GET]
Traceback (most recent call last):
File "/.venv/lib/python3.10/site-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/.venv/lib/python3.10/site-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/.venv/lib/python3.10/site-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/.venv/lib/python3.10/site-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/home/appuser/src/app.py", line 28, in get_all
polls = list(repository.get_all_polls())
File "/home/appuser/src/repository.py", line 11, in get_all_polls
return _get_collection().find()
File "/home/appuser/src/repository.py", line 28, in _get_collection
return mongo.get_collection(DATABASE, COLLECTION)
File "/home/appuser/src/mongo.py", line 15, in get_collection
return MongoClient(connection_string).get_database(database).get_collection(collection)
File "/.venv/lib/python3.10/site-packages/pymongo/mongo_client.py", line 704, in __init__
res = uri_parser.parse_uri(
File "/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py", line 568, in parse_uri
nodes = split_hosts(hosts, default_port=default_port)
File "/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py", line 376, in split_hosts
nodes.append(parse_host(entity, port))
File "/.venv/lib/python3.10/site-packages/pymongo/uri_parser.py", line 137, in parse_host
raise ValueError("Port must be an integer between 0 and 65535: %s" % (port,))
ValueError: Port must be an integer between 0 and 65535: 27017
</code></pre>
<p>I couldn't find the reason in the <a href="https://www.mongodb.com/docs/manual/reference/connection-string/#standard-connection-string-format" rel="nofollow noreferrer">docs</a>.</p>
| <p>As required by the Kubernetes manifest schema, Secret values must be Base64-encoded, which I was doing like this:</p>
<pre class="lang-sh prettyprint-override"><code>$ echo 'mongodb://mongo-service:27017' | base64
bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTcK
</code></pre>
<p>This was adding a line-break at the end, modifying the original string content, and breaking pymongo interpretation of the connection string. Tricky.</p>
<pre class="lang-sh prettyprint-override"><code>mongodb://mongo-service:27017
</code></pre>
<p>In my case the solution was to change to <code>printf</code> as pointed out in <a href="https://stackoverflow.com/a/51143063/3231778">this answer</a>:</p>
<pre class="lang-sh prettyprint-override"><code>$ printf mongodb://mongo-service:27017 | base64
bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTc=
</code></pre>
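<p>Equivalently, <code>echo -n</code> suppresses the trailing newline, and <code>kubectl create secret generic</code> with <code>--from-literal</code> avoids the manual encoding step altogether (the secret and key names below are only examples):</p>
<pre class="lang-sh prettyprint-override"><code>$ echo -n 'mongodb://mongo-service:27017' | base64
bW9uZ29kYjovL21vbmdvLXNlcnZpY2U6MjcwMTc=

$ kubectl create secret generic mongo-conn \
    --from-literal=connectionString='mongodb://mongo-service:27017'
</code></pre>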
<p>A new <a href="https://github.com/mongodb/mongo-python-driver/pull/937" rel="nofollow noreferrer">PR937</a> was created proposing a new error message format that would point out this behavior more clearly.</p>
|
<p>I have deployed ECK (using helm) on my k8s cluster and I am attempting to install elasticsearch following the docs: <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a></p>
<p>I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see, when I try to connect to it either from curl or the browser I receive a "502 Bad Gateway" error.</p>
<pre><code>curl elasticsearch.dev.acme.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
</code></pre>
<p>Upon checking the pod (elasticsearch-prod-es-default-0) I can see the following message repeated.</p>
<blockquote>
<p>{"type": "server", "timestamp": "2021-04-27T13:12:20,048Z", "level": "WARN", "component": "o.e.x.s.t.n.SecurityNetty4HttpServerTransport", "cluster.name": "elasticsearch-prod", "node.name": "elasticsearch-prod-es-default-0", "message": "received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}", "cluster.uuid": "t0mRfv7kREGQhXW9DVM3Vw", "node.id": "nCyAItDmSqGZRa3lApsC6g" }</p>
</blockquote>
<p><strong>Can you help me understand why this is occurring and how to fix it?</strong></p>
<p>I suspect it has something to do with my TLS configuration because when I disable TLS, I'm able to connect to it externally without issues. However, in a production environment I think keeping TLS enabled is important?</p>
<p>FYI I am able to port-forward the service and connect to it with curl using the -k flag.</p>
<p><strong>What I have tried</strong></p>
<ol>
<li>I have tried adding my domain to the section as described here <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san</a></li>
<li>I have tried using openssl to generate a self-signed certificate but that did not work. Trying to connect locally returns the following error message.</li>
</ol>
<blockquote>
<p>curl -u "elastic:$PASSWORD" "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a>
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.</p>
</blockquote>
<ol start="3">
<li>I have tried generating a certificate using the tool <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport" rel="noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport</a></li>
</ol>
<blockquote>
<p>bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.12 --pem</p>
</blockquote>
<p>Then, using the generated .crt and .key, I created a kubectl secret <code>elastic-tls-cert</code>. But again, curling localhost without -k gave the following error:</p>
<blockquote>
<p>curl --cacert cacert.pem -u "elastic:$PASSWORD" -XGET "https://localhost:9200"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a>
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.</p>
</blockquote>
<p><strong>elasticsearch.yml</strong></p>
<pre><code># This sample sets up an Elasticsearch cluster with 3 nodes.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch-prod
namespace: elastic-system
spec:
version: 7.12.0
nodeSets:
- name: default
config:
# most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
node.roles: ["master", "data", "ingest", "ml"]
# this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
node.store.allow_mmap: false
xpack.security.enabled: true
podTemplate:
metadata:
labels:
# additional labels for pods
foo: bar
spec:
nodeSelector:
acme/node-type: ops
# this changes the kernel setting on the node to allow ES to use mmap
# if you uncomment this init container you will likely also want to remove the
# "node.store.allow_mmap: false" setting above
# initContainers:
# - name: sysctl
# securityContext:
# privileged: true
# command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
###
# uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification.
# automountServiceAccountToken: true
containers:
- name: elasticsearch
# specify resource limits and requests
resources:
limits:
memory: 4Gi
cpu: 1
env:
- name: ES_JAVA_OPTS
value: "-Xms2g -Xmx2g"
count: 3
# # request 2Gi of persistent data storage for pods in this topology element
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Gi
storageClassName: elasticsearch
# # inject secure settings into Elasticsearch nodes from k8s secrets references
# secureSettings:
# - secretName: ref-to-secret
# - secretName: another-ref-to-secret
# # expose only a subset of the secret keys (optional)
# entries:
# - key: value1
# path: newkey # project a key to a specific path (optional)
http:
service:
spec:
# expose this cluster Service with a LoadBalancer
type: NodePort
# tls:
# selfSignedCertificate:
# add a list of SANs into the self-signed HTTP certificate
subjectAltNames:
# - ip: 192.168.1.2
# - ip: 192.168.1.3
# - dns: elasticsearch.dev.acme.com
# - dns: localhost
# certificate:
# # provide your own certificate
# secretName: elastic-tls-cert
</code></pre>
<p><strong>kubectl version</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.6-eks-49a6c0", GitCommit:"49a6c0bf091506e7bafcdb1b142351b69363355a", GitTreeState:"clean", BuildDate:"2020-12-23T22:10:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>helm list</strong></p>
<pre><code> NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0
</code></pre>
<p><strong>resources</strong></p>
<pre><code>pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal <none> <none>
pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/elastic-operator-webhook ClusterIP 172.20.218.208 <none> 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator
service/elasticsearch-prod-es-default ClusterIP None <none> 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default
service/elasticsearch-prod-es-http NodePort 172.20.229.173 <none> 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
service/elasticsearch-prod-es-transport ClusterIP None <none> 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod
</code></pre>
<p><strong>aws alb ingress controller</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: elastic-ingress
namespace: elastic-system
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: "<redacted>"
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <redacted>
alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
alb.ingress.kubernetes.io/healthcheck-path: /health
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
rules:
- host: elasticsearch.dev.acme.com
http:
paths:
- path: /*
pathType: Prefix
backend:
service:
name: elasticsearch-prod-es-http
port:
number: 9200
# - host: kibana.dev.acme.com
# http:
# paths:
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: kibana-prod-kb-http
# port:
# number: 5601
</code></pre>
| <p>You have to disable HTTP SSL. To do this, modify the config/elasticsearch.yml file and set the associated setting to false:</p>
<pre><code>xpack.security.http.ssl:
enabled: false
keystore.path: certs/http.p12
</code></pre>
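<p>Since this cluster is managed by the ECK operator, the usual way to achieve the same thing is on the <code>Elasticsearch</code> resource itself rather than editing <code>elasticsearch.yml</code> by hand. A rough sketch (double-check against the ECK docs for your operator version):</p>
<pre><code># sketch: disable TLS on the HTTP layer of an ECK-managed cluster
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-prod
  namespace: elastic-system
spec:
  # ... rest of the spec unchanged ...
  http:
    tls:
      selfSignedCertificate:
        disabled: true
</code></pre>
<p>Keep in mind that with TLS disabled on the HTTP layer, the ALB terminates TLS and traffic from the ingress to Elasticsearch is plain HTTP inside the cluster.</p>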
|
<p>I am trying to update a complex object via a path value, which it appears ReJSON can do.
I have used it as a cache just fine; now I want to use it as a JSON collection store for real-time data (similar to MongoDB).</p>
<p>For example:</p>
<pre><code>{
  "name": "Michael \u0027Mike D\u0027 Diamond",
  "employeeID": "E#101",
  "role": "Cleaner",
  "location": {
    "longitude": -2.552079,
    "lattitude": 51.501335
  },
  "numberOfAssignedJobs": 5
}
</code></pre>
<p>For example I want to update <strong>location.lattitude</strong> without taking the whole object and dealing with read/write sync issues.</p>
<p>However I am struggling with the API/How it is used in kubernetes. If anyone has any example code or tips and tricks, anything is much appreciated.</p>
<p>Thanks,
Matt</p>
| <p>You can do this with the <code>JSON.GET</code> command and JSON path syntax (<a href="https://redis.io/docs/stack/json/path/" rel="nofollow noreferrer">docs here</a>)</p>
<p>First, save the document in Redis at key "jsondoc":</p>
<pre><code>127.0.0.1:6379> json.set jsondoc $ '{ "name": "Michael \u0027Mike D\u0027 Diamond", "employeeID": "E#101", "role": "Cleaner", "location": { "longitude": -2.552079, "latitude": 51.501335 }, "numberOfAssignedJobs": 5 }'
</code></pre>
<p>Atomically update path <code>location.latitude</code> to some other value:</p>
<pre><code>127.0.0.1:6379> json.set jsondoc $.location.latitude 52.014
OK
</code></pre>
<p>Retrieve just <code>location.latitude</code>:</p>
<pre><code>127.0.0.1:6379> json.get jsondoc $.location.latitude
"52.014"
</code></pre>
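<p>If you are driving this from application code (for example a pod in your cluster), the redis-py client exposes the same commands. A rough sketch, assuming redis-py 4.x with RedisJSON loaded on the server and the service reachable at <code>localhost:6379</code> (e.g. via <code>kubectl port-forward</code>):</p>
<pre><code># sketch: atomic path update with redis-py (4.x); host/port are assumptions for local testing
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

doc = {
    "name": "Michael 'Mike D' Diamond",
    "employeeID": "E#101",
    "role": "Cleaner",
    "location": {"longitude": -2.552079, "lattitude": 51.501335},
    "numberOfAssignedJobs": 5,
}

r.json().set("jsondoc", "$", doc)                        # store the whole document once
r.json().set("jsondoc", "$.location.lattitude", 52.014)  # atomically update just that path
print(r.json().get("jsondoc", "$.location.lattitude"))   # JSONPath results come back as a list
</code></pre>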
|
<p>I am trying to create a module in Terraform to create the basic resources in a Kubernetes cluster, this means a <code>cert-manager</code>, <code>ingress-nginx</code> (as the ingress controller) and a <code>ClusterIssuer</code> for the certificates. In this exact order.</p>
<p>The first two I am installing with a <code>helm_release</code> resource and the <code>cluster_issuer</code> via <code>kubernetes_manifest</code>.</p>
<p>I am getting the below error, which, after some Google searches, I found out that it's because the <code>cert-manager</code> installs the CRDs that the <code>ClusterIssuer</code> requires but at the <code>terraform plan</code> phase, since they are not installed yet, the manifest cannot detect the <code>ClusterIssuer</code>.</p>
<p>Then, I would like to know if there's a way to circumvent this issue but still create everything in the same configuration with only one <code>terraform apply</code>?</p>
<p>Note: I tried to use the depends_on arguments and also include a <code>time_sleep</code> block but it's useless because nothing is installed in the plan and that's where it fails</p>
<pre><code>| Error: Failed to determine GroupVersionResource for manifest
│
│ with module.k8s_base.kubernetes_manifest.cluster_issuer,
│ on ../../modules/k8s_base/main.tf line 37, in resource "kubernetes_manifest" "cluster_issuer":
│ 37: resource "kubernetes_manifest" "cluster_issuer" {
│
│ no matches for kind "ClusterIssuer" in group "cert-manager.io"
</code></pre>
<pre><code>resource "helm_release" "cert_manager" {
chart = "cert-manager"
repository = "https://charts.jetstack.io"
name = "cert-manager"
create_namespace = var.cert_manager_create_namespace
namespace = var.cert_manager_namespace
set {
name = "installCRDs"
value = "true"
}
}
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
create_namespace = var.ingress_nginx_create_namespace
namespace = var.ingress_nginx_namespace
wait = true
depends_on = [
helm_release.cert_manager
]
}
resource "time_sleep" "wait" {
create_duration = "60s"
depends_on = [helm_release.ingress_nginx]
}
resource "kubernetes_manifest" "cluster_issuer" {
manifest = {
"apiVersion" = "cert-manager.io/v1"
"kind" = "ClusterIssuer"
"metadata" = {
"name" = var.cluster_issuer_name
}
"spec" = {
"acme" = {
"email" = var.cluster_issuer_email
"privateKeySecretRef" = {
"name" = var.cluster_issuer_private_key_secret_name
}
"server" = var.cluster_issuer_server
"solvers" = [
{
"http01" = {
"ingress" = {
"class" = "nginx"
}
}
}
]
}
}
}
depends_on = [helm_release.cert_manager, helm_release.ingress_nginx, time_sleep.wait]
}
</code></pre>
| <p><a href="https://cert-manager.io/docs/installation/helm/#option-1-installing-crds-with-kubectl" rel="nofollow noreferrer">Official documentation</a> says to use <code>kubectl apply</code> before installing this with a helm chart, making it a two step process. Using Terraform, this would make it a 3 step process in that you have to apply a targeted section to create the cluster so you can have access to kubeconfig credentials, then run the kubectl apply command to install the CRDs, and finally run terraform apply again to install the helm chart and the rest of the IaC. This is even less ideal.</p>
<p>I would use the <code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml</code> in kubectl_manifest resources as the comment above suggests, but this is impossible since this does not link to a single yaml file but so many of them one would not be able to keep up with the changes. Unfortunately, there is no "kubectl_apply" terraform resource** for the helm chart to depend on those CRDs being installed first.</p>
<p>Despite all this wonkiness, there is a solution, and that is to use the helm_release resource twice. It requires creating a module and referencing a custom helm chart for the cert-issuer. It's not ideal given the amount of effort that has to be used to create it for custom needs, but once it's created, it's a reusable, modular solution.</p>
<pre><code>#
# Cert-manager
# main.tf
#
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = var.cert_manager_chart_version
namespace = var.cert_manager_namespace
create_namespace = true
set {
name = "installCRDs"
value = true
}
}
</code></pre>
<p>Reference to custom chart:</p>
<pre><code>#
# cert-issuer.tf
#
# Cert Issuer using Helm
resource "helm_release" "cert_issuer" {
name = "cert-issuer"
repository = path.module
chart = "cert-issuer"
namespace = var.namespace
set {
name = "fullnameOverride"
value = local.issuer_name
}
set {
name = "privateKeySecretRef"
value = local.issuer_name
}
set {
name = "ingressClass"
value = var.ingress_class
}
set {
name = "acmeEmail"
value = var.cert_manager_email
}
set {
name = "acmeServer"
value = var.acme_server
}
depends_on = [helm_release.cert_manager]
}
</code></pre>
<p>You can see that the above use of <code>helm_release</code> is referencing itself locally as the repository, which requires you to have a custom helm chart, like this:</p>
<pre><code># ./cluster-issuer/cluster-issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: {{ include "cert-issuer.fullname" . }}
namespace: {{ .Release.Namespace }}
spec:
acme:
# The ACME server URL
server: {{ .Values.acmeServer }}
email: {{ .Values.acmeEmail }}
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: {{ .Values.privateKeySecretRef }}
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: {{ .Values.ingressClass }}
</code></pre>
<p>For some reason, this avoids the dependency check terraform uses to throw the error and works fine to get this installed in a single <code>apply</code></p>
<p>This could be further simplified by not using values.yaml values by creating a pure chart.</p>
<p>** Note: I think another workaround is to use a provisioner like 'local-exec' or 'remote-exec' after the cluster is created to run the kubectl apply command for the CRDs directly, but I haven't tested this yet. It would also still require that your provisioning environment have kubectl installed and .kubeconfig properly configured, creating a dependency tree.</p>
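<p>For the record, that untested workaround would look roughly like this (a sketch only; it assumes kubectl and a valid kubeconfig are available where Terraform runs, and whether it satisfies the plan-time check of <code>kubernetes_manifest</code> still needs testing):</p>
<pre><code># sketch: apply the cert-manager CRDs with kubectl from Terraform
resource "null_resource" "cert_manager_crds" {
  provisioner "local-exec" {
    command = "kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml"
  }
}
</code></pre>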
<p>Also, that is of course not fully working code. for a full example of the module to use or fork, see <a href="https://github.com/DeimosCloud/terraform-kubernetes-cert-manager" rel="nofollow noreferrer">this github repo</a>.</p>
|
<p>I am just studying the core of K8s on a local machine (Linux Mint 20.2).</p>
<p>Created one node cluster locally with:</p>
<blockquote>
<p>k3d cluster create mycluster</p>
</blockquote>
<p>And now I want to run a Spring Boot application in a container.<br />
I built a local image:</p>
<blockquote>
<p>library:0.1.0</p>
</blockquote>
<p>And here is snippet from <code>Deployment.yml</code>:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 40
containers:
- name: 'library'
image: library:0.1.0
imagePullPolicy: IfNotPresent
</code></pre>
<p>Despite the fact that image is already built:</p>
<pre><code>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
library 0.1.0 254c13416f46 About an hour ago 462MB
</code></pre>
<p>Starting the container fails:</p>
<pre><code>pod/library-867dfb64db-vndtj Pulling image "library:0.1.0"
pod/library-867dfb64db-vndtj Failed to pull image "library:0.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "library:0.1.0": failed to resolve reference "library:0.1.0": failed to do request: Head "https://...com/v2/library/manifests/0.1.0": x509: certificate signed by unknown authority
pod/library-867dfb64db-vndtj Error: ErrImagePull
pod/library-867dfb64db-vndtj Error: ImagePullBackOff
pod/library-867dfb64db-vndtj Back-off pulling image "library:0.1.0"
</code></pre>
<p><em><strong>How to resolve local images visibility for k3d cluster?</strong></em></p>
<hr />
<p>Solution:</p>
<p>Update the <code>Deployment.yml</code>:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 40
containers:
- name: 'library-xp'
image: xpinjection/library:0.1.0
imagePullPolicy: Never
</code></pre>
<p>And import the image to cluster:</p>
<blockquote>
<p>k3d image import xpinjection/library:0.1.0 -c mycluster</p>
</blockquote>
| <p>If you don't want to use a docker registry, you have to import the locally built image into the k3d cluster:</p>
<pre><code>k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]
</code></pre>
<p>But don't forget to configure in your deployment:</p>
<pre><code>imagePullPolicy: Never
</code></pre>
|
<p>I am configuring an EKS cluster using terraform in a private subnet and trying to access it using a VPN in a public subnet. When I configured it, it worked fine, but now when I run kubectl get pods or kubectl get svc, it throws an error:</p>
<p><code>error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"</code></p>
<p>I don't know why it is happening. Please reply if you have the solution.</p>
<p>Thanks</p>
| <p>It's broken with <code>kubectl</code> version <code>1.24</code>. Downgrade to <code>1.23.6</code> will fix the issue for now</p>
<pre><code>sudo apt install kubectl=1.23.6-00
</code></pre>
|
<p>I have CircleCI set up and running fine normally; it helps with creating deployments for me. Today I suddenly had an issue with the step that creates the deployment, due to an error related to kubernetes.</p>
<p>I have the config.yml following the doc from <a href="https://circleci.com/developer/orbs/orb/circleci/kubernetes" rel="nofollow noreferrer">https://circleci.com/developer/orbs/orb/circleci/kubernetes</a></p>
<p>Here is my version of setup in the config file:</p>
<pre><code>version: 2.1
orbs:
kube-orb: circleci/[email protected]
commands:
docker-check:
steps:
- docker/check:
docker-username: MY_USERNAME
docker-password: MY_PASS
registry: $DOCKER_REGISTRY
jobs:
create-deployment:
executor: aws-eks/python3
parameters:
cluster-name:
description: Name of the EKS cluster
type: string
steps:
- checkout
# It failed on this step
- kube-orb/delete-resource:
now: true
resource-names: my-frontend-deployment
resource-types: deployments
wait: true
</code></pre>
<p>Below is a copy of the error log</p>
<pre><code>#!/bin/bash -eo pipefail
#!/bin/bash
RESOURCE_FILE_PATH=$(eval echo "$PARAM_RESOURCE_FILE_PATH")
RESOURCE_TYPES=$(eval echo "$PARAM_RESOURCE_TYPES")
RESOURCE_NAMES=$(eval echo "$PARAM_RESOURCE_NAMES")
LABEL_SELECTOR=$(eval echo "$PARAM_LABEL_SELECTOR")
ALL=$(eval echo "$PARAM_ALL")
CASCADE=$(eval echo "$PARAM_CASCADE")
FORCE=$(eval echo "$PARAM_FORCE")
GRACE_PERIOD=$(eval echo "$PARAM_GRACE_PERIOD")
IGNORE_NOT_FOUND=$(eval echo "$PARAM_IGNORE_NOT_FOUND")
NOW=$(eval echo "$PARAM_NOW")
WAIT=$(eval echo "$PARAM_WAIT")
NAMESPACE=$(eval echo "$PARAM_NAMESPACE")
DRY_RUN=$(eval echo "$PARAM_DRY_RUN")
KUSTOMIZE=$(eval echo "$PARAM_KUSTOMIZE")
if [ -n "${RESOURCE_FILE_PATH}" ]; then
if [ "${KUSTOMIZE}" == "1" ]; then
set -- "$@" -k
else
set -- "$@" -f
fi
set -- "$@" "${RESOURCE_FILE_PATH}"
elif [ -n "${RESOURCE_TYPES}" ]; then
set -- "$@" "${RESOURCE_TYPES}"
if [ -n "${RESOURCE_NAMES}" ]; then
set -- "$@" "${RESOURCE_NAMES}"
elif [ -n "${LABEL_SELECTOR}" ]; then
set -- "$@" -l
set -- "$@" "${LABEL_SELECTOR}"
fi
fi
if [ "${ALL}" == "true" ]; then
set -- "$@" --all=true
fi
if [ "${FORCE}" == "true" ]; then
set -- "$@" --force=true
fi
if [ "${GRACE_PERIOD}" != "-1" ]; then
set -- "$@" --grace-period="${GRACE_PERIOD}"
fi
if [ "${IGNORE_NOT_FOUND}" == "true" ]; then
set -- "$@" --ignore-not-found=true
fi
if [ "${NOW}" == "true" ]; then
set -- "$@" --now=true
fi
if [ -n "${NAMESPACE}" ]; then
set -- "$@" --namespace="${NAMESPACE}"
fi
if [ -n "${DRY_RUN}" ]; then
set -- "$@" --dry-run="${DRY_RUN}"
fi
set -- "$@" --wait="${WAIT}"
set -- "$@" --cascade="${CASCADE}"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
set -x
fi
kubectl delete "$@"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
set +x
fi
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Exited with code exit status 1
CircleCI received exit code 1
</code></pre>
<p>Does anyone have an idea what is wrong with it? I'm not sure whether the issue is happening on the CircleCI side or the Kubernetes side.</p>
| <p>I was facing the exact issue since yesterday morning (16 hours ago). Then taking @Gavy's advice, I simply added this in my <code>config.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>steps:
- checkout
# !!! HERE !!!
- kubernetes/install-kubectl:
kubectl-version: v1.23.5
- run:
</code></pre>
<p>And now it works. Hope it helps.</p>
|
<p>While using the command <code>kubectl config set-cluster test --server=https://127.0.0.1:52807</code> for RBAC (the IP here is from the kind cluster I am running), I then run <code>kubectl config set-context test --cluster=test</code>, add the required credentials, and switch to the context with <code>kubectl config use-context test</code>, and I am in the test context. I understand that the first command configures the config file, but am I creating a cluster within a cluster? Please help me clear up my doubt about what it is actually doing.</p>
| <p><code>kubectl config set-cluster</code> sets a cluster entry in your <code>kubeconfig</code> file (usually found in <code>$HOME/.kube/config</code>). The <code>kubeconfig</code> file defines how your <code>kubectl</code> is configured.</p>
<p>The cluster entry defines where <code>kubectl</code> can find the kubernetes cluster to talk to. You can have multiple clusters defined in your <code>kubeconfig</code> file.</p>
<p><code>kubectl config set-context</code> sets a context element, which is used to combine a cluster, namespace and user into a single element so that <code>kubectl</code> has everything it needs to communicate with the cluster. You can have multiple contexts, for example one per kubernetes cluster that you're managing.</p>
<p><code>kubectl config use-context</code> sets your current context to be used in <code>kubectl</code>.</p>
<p>So to walk through your commands:</p>
<ul>
<li><code>kubectl config set-cluster test --server=https://127.0.0.1:52807</code> creates a new entry in <code>kubeconfig</code> under the <code>clusters</code> section with a cluster called <code>test</code> pointing towards <code>https://127.0.0.1:52807</code></li>
<li><code>kubectl config set-context test --cluster=test</code> creates a new context in <code>kubeconfig</code> called <code>test</code> and tells that context to point to a cluster called <code>test</code></li>
<li><code>kubectl config use-context test</code> changes the current context in <code>kubeconfig</code> to a context called <code>test</code> (which you just created).</li>
</ul>
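<p>Putting it together, after those three commands your <code>kubeconfig</code> file (usually <code>$HOME/.kube/config</code>) contains entries roughly like this (a sketch; the user entry here is illustrative and depends on the credentials you set):</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: test
  cluster:
    server: https://127.0.0.1:52807
contexts:
- name: test
  context:
    cluster: test
    user: test-user      # whatever credentials you added with set-credentials
current-context: test
users:
- name: test-user
  user: {}
</code></pre>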
<p>More docs on <code>kubectl config</code> and <code>kubeconfig</code>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></li>
<li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration</a></li>
</ul>
|
<p>I have created a GKE cluster on GCP.</p>
<p>Kubernetes logs from the kubectl logs command are different from those in /var/log/containers.</p>
<p>kubectl</p>
<pre><code>{"method":"GET","path":"/healthz","format":"*/*","controller":"Public::PublicPagesController","action":"healthz","status":204,"duration":0.39,"view":0.0,"request_id":"ca29b519-d1e8-49a2-95ae-e5f23b60c36f","params":{},"custom":null,"request_time":"2022-04-27T15:25:43.780+00:00","process_id":6,"@version":"vcam-backend-vvcam-72_shareholder_event-rc16","@timestamp":"2022-04-27T15:25:43.780Z","message":"[204] GET /healthz (Public::PublicPagesController#healthz)"}
</code></pre>
<p>And in /var/log/containers, something adds a timestamp to the beginning of my container logs:</p>
<pre><code>2022-04-27T15:25:43.780523421Z stdout F {"method":"GET","path":"/healthz","format":"*/*","controller":"Public::PublicPagesController","action":"healthz","status":204,"duration":0.39,"view":0.0,"request_id":"ca29b519-d1e8-49a2-95ae-e5f23b60c36f","params":{},"custom":null,"request_time":"2022-04-27T15:25:43.780+00:00","process_id":6,"@version":"vcam-backend-vvcam-72_shareholder_event-rc16","@timestamp":"2022-04-27T15:25:43.780Z","message":"[204] GET /healthz (Public::PublicPagesController#healthz)"}
</code></pre>
<p>I want my application logs to be consistent: I want them in JSON format like the logs from the kubectl command, so I can parse and analyze them further.</p>
<p>I want to remove this part: <code>2022-04-27T15:25:43.780523421Z stdout F</code></p>
<p>Has anybody met this problem? How can I make the container logs the same as the kubectl command logs?</p>
<p>GKE Version:</p>
<pre><code>Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10-gke.2000", GitCommit:"0823380786b063c3f71d5e7c76826a972e30550d", GitTreeState:"clean", BuildDate:"2022-03-17T09:22:22Z", GoVersion:"go1.16.14b7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Docker daemon.json</p>
<pre><code>{
"pidfile": "/var/run/docker.pid",
"iptables": false,
"ip-masq": false,
"log-level": "warn",
"bip": "169.254.123.1/24",
"mtu": 1460,
"storage-driver": "overlay2",
"live-restore": true,
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
</code></pre>
<p>Notes: I notice that the timestamp at the beginning of the log line only shows when we add the option <code>docker logs -t</code>, <a href="https://docs.docker.com/engine/reference/commandline/logs/" rel="nofollow noreferrer">docs here</a>,
but I still do not know how to fix this problem in the GKE cluster.</p>
| <p>This problem is related to Container Runtime Interface (CRI). You can read about CRI <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p>For now, I still cannot change the log format the way I want, so I just adapt to this new format. This format is called the <code>CRI log format</code>; the default CRI of a GKE cluster always produces logs in this format, and log aggregation applications have adapted to this CRI log format too:</p>
<ul>
<li>This is PR of <code>Grafana/Loki</code> to support CRI log format: <a href="https://github.com/grafana/loki/pull/365" rel="nofollow noreferrer">PR</a></li>
<li>Fluent-bit create a new parser for CRI log format: <a href="https://docs.fluentbit.io/manual/v/1.8/installation/kubernetes#container-runtime-interface-cri-parser" rel="nofollow noreferrer">Docs</a></li>
</ul>
<p>So I think you need to change the way you approach this problem: if we cannot change the log format the way we want, we can use applications that support this log format.</p>
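<p>For example, the CRI parser that fluent-bit ships for exactly this <code>timestamp stream flag</code> prefix looks roughly like this (taken from memory of their docs, so verify it against the fluent-bit version you run):</p>
<pre><code>[PARSER]
    Name        cri
    Format      regex
    Regex       ^(?&lt;time&gt;[^ ]+) (?&lt;stream&gt;stdout|stderr) (?&lt;logtag&gt;[^ ]*) (?&lt;message&gt;.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
</code></pre>
<p>With a parser like this, the JSON payload your application writes ends up in the <code>message</code> field and can be parsed further downstream.</p>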
<p>Notes: I'm not sure, but I think this problem comes from <code>Kubernetes is removing support for Docker as a container runtime</code> - <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">docs</a> - and the new container runtime produces this CRI log format.</p>
|
<p>I created an EKS cluster and now I need to add http_proxy/http_proxies and no_proxy as environment variables to all 300 deployments. I created a config map with the variables. Is there a way to update all pods simultaneously without doing it one by one? And if yes, how can I do it? My config map name is proxy-environment-variables.</p>
<p>Thank you for the help and assistance!</p>
<p>BR,</p>
<p>Martin</p>
<pre><code>Name: proxy-environment-variables
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
HTTPS_PROXY:
----
http://10.16.11.10:8080
HTTP_PROXY:
----
http://10.16.11.10:8080
NO_PROXY:
----
169.254.169.254,127.0.0.1,localhost,10.0.0.0/8
http_proxy:
----
http://10.16.11.10:8080
https_proxy:
----
http://10.16.11.10:8080
no_proxy:
----
169.254.169.254,127.0.0.1,localhost,10.0.0.0/8
BinaryData
====
Events: <none>
</code></pre>
| <p>You can use <code>kubectl set env</code> to update the environment variables on the deployments:</p>
<pre><code>kubectl set env deployment --all --from=configmap/my-env-config
</code></pre>
<p>Example: The following example is demonstrating if there are multiple deployments, how to add env variables to their pods.</p>
<p>// I have the following deployments:</p>
<pre><code>kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
dep-1 1/1 1 1 5m21s
dep-2 1/1 1 1 41m
dep-4 1/1 1 1 3m17s
</code></pre>
<p>// their respective pods:</p>
<pre><code>kubectl get pod
NAME READY STATUS RESTARTS AGE
dep-1-84fcdf5-wblsm 1/1 Running 0 2m8s
dep-2-84fcdf5-qfbd5 1/1 Running 0 3m4s
dep-4-6cf4dcf794-wk476 1/1 Running 0 4s
</code></pre>
<p>// my config map:</p>
<pre><code>apiVersion: v1
data:
HTTP_PROXY: https://1.2.3.4:1234
kind: ConfigMap
metadata:
creationTimestamp: "2022-05-04T16:52:00Z"
name: my-env-config
namespace: default
resourceVersion: "1002232"
uid: c8606312-90c6-45cf-86f0-f4f874f19909
</code></pre>
<p>// HTTP_PROXY env variable is <strong>already</strong> set in dep-1</p>
<pre><code>kubectl exec -it dep-1-84fcdf5-wblsm -- printenv HTTP_PROXY
https://1.2.3.4:1234
</code></pre>
<p>// HTTP_PROXY env variable is <strong>already</strong> set in dep-2</p>
<pre><code>kubectl exec -it dep-2-84fcdf5-qfbd5 -- printenv HTTP_PROXY
https://1.2.3.4:1234
</code></pre>
<p>// HTTP_PROXY env variable is <strong>not</strong> set in dep-4, <strong>this is our focus</strong></p>
<pre><code>kubectl exec -it dep-4-6cf4dcf794-wk476 -- printenv HTTP_PROXY
command terminated with exit code 1
</code></pre>
<p>// <strong>SOLUTION</strong>: Setting env for all the deployments(<code>--all</code>) from a config map called <code>my-env-config</code>)</p>
<pre><code> kubectl set env deployment --all --from=configmap/my-env-config
deployment.apps/dep-4 env updated
</code></pre>
<p>//Note the age of the pods, only dep-4's pod age is altered</p>
<pre><code>kubectl get pod
NAME READY STATUS RESTARTS AGE
dep-1-84fcdf5-wblsm 1/1 Running 0 3m40s
dep-2-84fcdf5-qfbd5 1/1 Running 0 4m36s
dep-4-59d5cfd48d-2tbbm 1/1 Running 0 5s
</code></pre>
<p>// now HTTP_PROXY is set in dep-4's pod.</p>
<pre><code>kubectl exec -it dep-4-59d5cfd48d-2tbbm -- printenv HTTP_PROXY
https://1.2.3.4:1234
</code></pre>
|
<p>I have a GKE cluster, external domain name, and letsencrypt certs.
When I am using a Load balancer and instruct pods to use certs that I generate using certbot then performance is quite good. But I have to renew certs manually which takes a lot of effort.</p>
<p>When using an ingress controller and letting cert-manager update certs by itself, the additional hops add latency and make the traffic path more complex. The connection is h2 from client to ingress, and then it becomes plain HTTP from ingress to pods.</p>
<p>Is there any way to remove the extra hops when using the nginx ingress controller and avoid the performance issue?</p>
| <p>There is no extra hop if you are using the cert-manager with ingress.</p>
<p>You can use the cert-manager it will save the <strong>cert</strong> into <strong>secret</strong> and attach to ingress. However, it's up to you where you are doing TLS termination.</p>
<p>You can also bypass the HTTPS traffic till POD for end-to-end encryption if you are doing TLS termination at ingress level backed traffic till POD will be in plain <strong>HTTP</strong>.</p>
<pre><code>Internet > ingress (TLS in secret) > Plain HTTP if you terminate > service > PODs
</code></pre>
<p>If you want to use the certificate into POD you can mount the secret into POD and that will be further used by the application.</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod</a></p>
<p>If you will use the secret with POD you might need to reload the POD in that case you can use the Reloader to auto roll out the PODs.</p>
<p>Reloader : <a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">https://github.com/stakater/Reloader</a></p>
|
<p>I am trying to make a CI/CD pipeline: GitHub -> Travis CI -> AWS EKS.
Everything works fine; images are posted to Docker Hub and all. But when Travis executes kubectl apply -f on the files, it throws an error: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"</p>
<p>(There's nothing wrong with the source code/deployment/service files, as I manually deployed them on AWS EKS and they worked fine.)</p>
<pre><code>
#-----------------travis.yml-------------
sudo: required
services:
- docker
env:
global:
- SHA=$(git rev-parse HEAD)
before_install:
# Install kubectl
- curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- sudo mv ./kubectl /usr/local/bin/kubectl
# Install AWS CLI
- if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
# export environment variables for AWS CLI (using Travis environment variables)
- export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
# Setup kubectl config to use the desired AWS EKS cluster
- aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
- docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client
# - aws s3 ls
script:
- docker run -e CI=true akifboi/multi-client npm test
deploy:
provider: script
script: bash ./deploy.sh
on:
branch: master
</code></pre>
<pre><code>#----deploy.sh--------
# docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client
# docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server
# docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker
# docker push akifboi/multi-client:latest
# docker push akifboi/multi-server:latest
# docker push akifboi/multi-worker:latest
# docker push akifboi/multi-client:$SHA
# docker push akifboi/multi-server:$SHA
# docker push akifboi/multi-worker:$SHA
echo "starting"
aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status # the problem happens here!
echo "applying k8 files"
kubectl apply -f ./k8s/
# kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA
# kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA
# kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA
echo "done"
</code></pre>
<pre><code>#------travis;logs----------
last few lines:
starting
"ACTIVE"
applying k8 files
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
done
Already up to date.
HEAD detached at c1858f7
Untracked files:
(use "git add <file>..." to include in what will be committed)
aws/
awscliv2.zip
nothing added to commit but untracked files present (use "git add" to track)
Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225)
Done. Your build exited with 0.
</code></pre>
| <p>We were installing the latest version of kubectl in CI and hit this error today. After pinning to a previous version (1.18) the error was resolved.</p>
<p>The last working version was 1.23.6; we saw errors with 1.24.</p>
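<p>In a CI script that means installing a pinned binary instead of resolving <code>stable.txt</code>, e.g.:</p>
<pre><code># install a pinned kubectl version instead of "latest stable"
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # confirm the pinned version is on PATH
</code></pre>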
|
<p>I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?</p>
<p>I am basically trying to check which IP the request is sent from, any data sent and any other discerning information.</p>
<p>I tried using prometheus but have not had any luck so far.</p>
| <p>You can enable <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">Auditing</a> on your cluster. For a specific resource, use <code>resourceNames</code> in the audit policy to specify the resource name.</p>
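<p>Each audit event records the requesting user, source IPs, verb and timestamp, which covers what you are asking for. A minimal policy sketch (the pod name is a placeholder); on a self-managed control plane you point the API server at it with <code>--audit-policy-file</code> and <code>--audit-log-path</code>:</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # full request/response bodies for changes to one specific resource
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
        resourceNames: ["my-important-pod"]   # placeholder name
  # metadata only (user, source IPs, verb) for everything else
  - level: Metadata
</code></pre>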
|
<p>The following code will throw an <code>ApiException 410 resource too old</code> on the second <code>watch.stream()</code>:</p>
<pre><code># python3 -m venv venv
# source venv/bin/activate
# pip install 'kubernetes==23.3.0'
from kubernetes import client,config,watch
config.load_kube_config(context='my-eks-context')
v1 = client.CoreV1Api()
watcher = watch.Watch()
namespace = 'kube-system'
last_resource_version=0
# this watch will timeout in 5s to have a fast way to simulate a watch that need to be retried
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5):
print(i['object'].metadata.resource_version)
last_resource_version = i['object'].metadata.resource_version
# we retry the watch starting from the last resource version known
# but this will raise a kubernetes.client.exceptions.ApiException: (410)
# Reason: Expired: too old resource version: 379140622 (380367990)
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5):
print('second loop', i['object'].metadata.resource_version)
last_resource_version = i['object'].metadata.resource_version
</code></pre>
<p>The <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">kubernetes documentation</a> states that:</p>
<blockquote>
<p>If a client watch is disconnected then that client can start a new watch from the last returned resourceVersion</p>
</blockquote>
<p>which is what I intended in the above code, which always gives the following exception:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 24, in <module>
File "/Users/rubelagu/git/python-kubernetes-client/venv/lib/python3.8/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 379164133 (380432814)
</code></pre>
<p>What am I doing wrong?</p>
| <p>It seems that in the initial response to the watch (from an EKS cluster 1.21) the events can be returned in any order.</p>
<p>I did two subsequent watches two seconds apart and they contain the same 30 events in completely different ordering.</p>
<p>So it's not guaranteed that the last resource version that you see is actually the actual last and it's not guaranteed that you can resume from that <code>resourceVersion</code>/<code>resource_version</code>. Also you are not allowed to sort/collate those events by <code>resourceVersion</code>, since the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">kubernetes documentation for Resource Version Semantics</a> explicity says:</p>
<blockquote>
<p>Resource versions must be treated as <strong>opaque</strong> [...] You <strong>must not</strong> assume resource versions are <strong>numeric or collatable</strong>.</p>
</blockquote>
<p>You must account for that by catching the <code>resource too old exception</code> and retrying without specifying a resource version, see below for an example:</p>
<pre><code>from kubernetes import client,config,watch
from kubernetes.client.exceptions import ApiException
config.load_kube_config(context='eks-prod')
v1 = client.CoreV1Api()
# v1 = config.new_client_from_config(context="eks-prod").CoreV1Api()
watcher = watch.Watch()
namespace = 'my-namespace'
def list_pods(resource_version=None):
print('start watch from resource version: ', str(resource_version))
try:
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=resource_version, timeout_seconds=2):
print(i['object'].metadata.resource_version)
last_resource_version = i['object'].metadata.resource_version
except ApiException as e:
if e.status == 410: # Resource too old
return list_pods(resource_version=None)
else:
raise
return last_resource_version
last_resource_version = list_pods()
print('last_resource_version', last_resource_version)
list_pods(last_resource_version)
</code></pre>
|
<p>I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?</p>
<p>I am basically trying to check which IP the request is sent from, any data sent and any other discerning information.</p>
<p>I tried using prometheus but have not had any luck so far.</p>
| <p>You can set up auditing in your Kubernetes cluster.
Refer to this link: <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/</a></p>
|
<p>I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS = by following the steps given in <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p>
<p>After the yaml file is applied and while trying to check the status of the deployment , I get :</p>
<pre><code>$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 0/1 1 0 6m39s
</code></pre>
<p>I tried to debug it and I got this :</p>
<pre><code>$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
{"level":"info","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"error","logger":"setup","msg":"unable to create controller","controller":"Ingress","error":"the server could not find the requested resource"}
</code></pre>
<p>The yaml file is pulled directly from <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml</a> and apart from changing the Kubernetes cluster name, no other modifications are done.</p>
<p>Please let me know if I am missing some step in the configuration.
Any help would be highly appreciated.</p>
| <p>I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.</p>
<ul>
<li>aws-load-balancer-controller = v2.3.1</li>
<li>Kubernetes/EKS = 1.22</li>
</ul>
<p>Github issue for more information:
<a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495</a></p>
|
<p>I am reading the tekton <a href="https://tekton.dev/vault/pipelines-v0.14.3/auth/" rel="noreferrer">docs</a> on authentication it explains that two things are needed to do authentication</p>
<p>Create a secret docs give example below</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: basic-user-pass
annotations:
tekton.dev/git-0: https://github.com # Described below
type: kubernetes.io/basic-auth
stringData:
username: <username>
password: <password>
</code></pre>
<p>Add secret object to the service account</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: basic-user-pass
</code></pre>
<p>My mental model of a service account in k8s is that it is a JWT used to access the k8s API server. I don't understand what's point of adding a secret to the ServiceAccount, how and why it is used.</p>
<p>Questions:</p>
<ul>
<li>what does it mean for a service account to have secrets added to it?</li>
<li>Why is it useful to add a secret to a service account?</li>
<li>Why does tekton require the secret to be added to the service account?</li>
<li>who/what looks at the service account secrets list?</li>
</ul>
| <p>Hope you are enjoying your Kubernetes journey!</p>
<p><em><strong>1) what does it mean for a service account to have secrets added to it? / Why is it useful to add a secret to a service account?</strong></em></p>
<p>First of all, a little reminder:</p>
<p>As you may know, you have to see the serviceAccount as a user for a machine/an application/a script (and not only in kubernetes); in short, everything that is not human. Like a human, a service account needs credentials (username+password) in order to authenticate to things that require authentication (a Git repository, a docker registry, an API, ...).</p>
<p>In Kubernetes these credentials, and especially the password, are stored in "secrets".</p>
<p>Now, you should be aware that each namespace in kubernetes has a native service account named "default" that is associated with <strong>every</strong> running pod, and that service account is linked to a native "default" kubernetes secret that is also present in all namespaces.
This "default" secret contains the ca.crt and a token that, among other things, lets the pod make calls to the internal Kubernetes API Server endpoint.</p>
<p>Since the secret that contains the "credentials" is linked to a service account that is mounted into a pod, that pod is then able to authenticate to things that require authentication.</p>
<p>For example, if someday you have to use a private docker registry to pull your images, you can do this in two ways. In each of them you first have to create a secret that will contain your sensitive data (credentials):</p>
<ul>
<li>The first way consists of adding the name of your secret (the one that contains the registry credentials) directly to the default serviceAccount (which, as a reminder, is mounted by default in the pod), or to a newly created serviceAccount (like tekton is doing in your case) that is then referenced in the kubernetes deployment manifest in the field <code>serviceAccountName:</code>.</li>
<li>The second way consists of adding the field <code>imagePullSecrets</code> to your kubernetes deployment manifest.</li>
</ul>
<p>This way, when kubernetes comes to pull your private docker image, it will check whether the credentials in the serviceAccount secrets work; if not, it will check the secret you have added in the imagePullSecrets field (or the other way around), and it will be able to connect to the registry and pull the image to run it as a container in a pod!</p>
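<p>For illustration, the two ways look roughly like this (the secret and image names are placeholders):</p>
<pre><code># way 1: attach the registry secret to the serviceAccount used by the pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
imagePullSecrets:
- name: my-registry-creds               # a kubernetes.io/dockerconfigjson secret
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: build-bot         # way 1: kubernetes uses the SA's imagePullSecrets
  imagePullSecrets:                     # way 2: or declare the secret right on the pod spec
  - name: my-registry-creds
  containers:
  - name: app
    image: registry.example.com/app:1.0 # placeholder private image
</code></pre>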
<p><em><strong>2) who/what looks at the service account secrets list?</strong></em></p>
<p>For example in a brand new namespace:</p>
<pre><code>❯ k get sa
NAME SECRETS AGE
default 1 30m
</code></pre>
<p>This default serviceAccount is linked to a secret named "default-token-r4vrb":</p>
<pre><code>❯ k get sa default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-05-06T08:48:38Z"
name: default
namespace: so-tests
resourceVersion: "1771"
uid: c3598708-ad14-4806-af31-5c54d60e29b7
secrets:
- name: default-token-r4vrb
</code></pre>
<p>This default-token secret contains what is needed to authenticate the Kubernetes APi endpoint (certificate + token):</p>
<pre><code>❯ k get secret default-token-r4vrb -o yaml
apiVersion: v1
data:
ca.crt: base64encodedCaCertificate
namespace: base64encodedNamespace
token: base64encodedToken
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: c3598708-ad14-4806-af31-5c54d60e29b7
creationTimestamp: "2022-05-06T08:48:38Z"
name: default-token-r4vrb
namespace: so-tests
resourceVersion: "1770"
uid: d342a372-66d1-4c92-b520-23c23babc798
type: kubernetes.io/service-account-token
</code></pre>
<p><em><strong>3) Why does tekton require the secret to be added to the service account?
who/what looks at the service account secrets list?</strong></em></p>
<p>Now I hope you know why: they chose to use a serviceAccount to do this, but they could also have just mounted the secret into the pod directly :)</p>
<p>Hope this has helped you. Here is some docs to be more familiar with K8S SA:
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
<p>bguess/</p>
|
<p>I have a k8s dev cluster (in aks). I previously used the bitnami chart for deployment of a single MongoDB pod and I never had the following problem.</p>
<p>As I'm now using a feature that only works with replicaSets, I recently moved to using the MongoDBCommunity Operator to create a single instance MongoDB replicaSet running in its own namespace (in dev). My application is connecting to and using this instance with no issues at all and my new feature works like a dream.</p>
<pre class="lang-bash prettyprint-override"><code>$ k get pods -n db
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-748c85557-bffz5 1/1 Running 0 2d16h
nga-mongodb-0 2/2 Running 0 2d19h
</code></pre>
<p>Now I want to use the MongoDB extension in vscode to help with debugging, and that's where the problem is. As I mentioned, with the bitnami chart I had no problem -- I provided the db connection string and it just worked.</p>
<p>When I try to do the same with this new deployment though I get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>getaddrinfo ENOTFOUND nga-mongodb-0.nga-mongodb-svc.db.svc.cluster.local
</code></pre>
<p>The operator creates a secret containing the connection string in 2 forms, for the service and for the pod, both give the same error</p>
<p>Svc:</p>
<pre class="lang-bash prettyprint-override"><code>mongodb+srv://username:[email protected]/admin?ssl=false
</code></pre>
<p>Pod:</p>
<pre class="lang-bash prettyprint-override"><code>mongodb://username:[email protected]:27017/admin?ssl=false
</code></pre>
<p>I have noticed that neither of those strings includes a replica name, which I find odd, but I use the svc based string for my application and it works fine so it should work ok when port-forwarding.</p>
<p>My port-forwarding statement is pretty basic:</p>
<pre class="lang-bash prettyprint-override"><code>k port-forward -n db nga-mongodb-0 27017:27017
</code></pre>
<p>And if I try <code>curl localhost:27017</code> I get the expected message</p>
<pre class="lang-bash prettyprint-override"><code>It looks like you are trying to access MongoDB over HTTP on the native driver port.
</code></pre>
<p>Which does suggest that I have a working port forwarder.</p>
<p>The connection string I use (assuming the pod connection string) then changes to:</p>
<pre class="lang-bash prettyprint-override"><code>mongodb://username:password@localhost:27017/admin?ssl=false
</code></pre>
<p>I suspect, but don't know, that the replica set is trying to forward the client to the primary (which I already connect to if I use the pod connection string) and is sending a kubernetes URL which doesn't translate well via the port-forwarding method. So, I doubt that this is something weird the operator has done.</p>
<p>Has anyone else actually got this working or know how to get it working?</p>
| <p>So, based on writing up the question and therefore organising my thoughts, I have found the reason this is happening and the answer.</p>
<p>I had suggested in my question that this might be a redirect problem, and now I have proof that this is the issue. I've added the following entry to my hosts file:</p>
<pre><code>127.0.0.1 nga-mongodb-0.nga-mongodb-svc.db.svc.cluster.local
</code></pre>
<p>And now it works. So, the MongoDB replicaSet is trying to forward me to the primary (even though I'm already connected to the primary) and is doing so by, quite rightly, returning the only host it knows. By adding that hostname to the <code>/etc/hosts</code> file my machine resolves the name to the localhost and back through the forwarded connection. I haven't actually tried this with the svc approach, but I doubt it would work as I believe it'll constantly redirect.</p>
|
<p>I am trying to create a replicaset with kubernetes. This time I don't have a yml file, which is why I am trying to create the replicaset using the command line.
Why does <code>kubectl create replicaset somename --image=nginx</code> raise an error, and how do I fix this?</p>
| <p>You cannot create <code>replicaset</code> using the command line. Only the following resource creation is possible using <code>kubectl create</code>:</p>
<pre><code>kubectl create --help |awk '/Available Commands:/,/^$/'
Available Commands:
clusterrole Create a cluster role
clusterrolebinding Create a cluster role binding for a particular cluster role
configmap Create a config map from a local file, directory or literal value
cronjob Create a cron job with the specified name
deployment Create a deployment with the specified name
ingress Create an ingress with the specified name
job Create a job with the specified name
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name
priorityclass Create a priority class with the specified name
quota Create a quota with the specified name
role Create a role with single rule
rolebinding Create a role binding for a particular role or cluster role
secret Create a secret using specified subcommand
service Create a service using a specified subcommand
serviceaccount Create a service account with the specified name
</code></pre>
<p>Although, You may use the following way to create the replica set, in the below example, <code>kubectl create -f</code> is fed with stdout(<code>-</code>):</p>
<pre><code>echo "apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
" |kubectl create -f -
</code></pre>
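<p>Alternatively, if you just need the pods managed imperatively, a Deployment (which creates and owns a ReplicaSet for you) can be created directly from the command line:</p>
<pre><code>kubectl create deployment somename --image=nginx --replicas=3
kubectl get replicaset   # shows the ReplicaSet the Deployment created (somename-&lt;hash&gt;)
</code></pre>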
|
<p><strong>What I am trying to do?</strong></p>
<p>I want to create a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service" rel="nofollow noreferrer">kubernetes service</a> object using terraform, but make it re-usable. So everytime there's a new service, I could just append the variables.</p>
<p><strong>My problem:</strong></p>
<p>I have been reading and trying different things, but I am not sure how I would loop over "annotations" and "selectors", which will have more than one key-value pair.</p>
<p><strong>Code Example</strong></p>
<p>Variables.tf using which I want to build the actual terraform resource.</p>
<pre><code>variable "apps" {
default = {
"app1" = {
svc_name = "app1"
namespace = "testns"
annotations = {
"testannotation" = "ann1"
}
selector = {
app = "podinfo"
env = "dev"
}
ports = {
name = "http"
port = 80
protocol = "TCP"
targetPort = 8008
}
},
"app2" = {
svc_name = "app2"
namespace = "testns"
annotations = {
"testannotation" = "ann1"
}
selector = {
app = "someapp"
env = "qa"
}
ports = {
name = "http"
port = 8080
protocol = "TCP"
targetPort = 8080
}
},
}
}
</code></pre>
<p>Here is the main.tf where I want to loop over "annotations" and "selectors" of the variables because there could be more than one of them. BUT there is only one "annotations" and "selector" block. So I couldn't use "dynamic" as it will generate many of those blocks.</p>
<pre><code>resource "kubernetes_service" "service" {
for_each = var.apps
metadata {
name = each.value.svc_name
namespace = each.value.namespace
# annotations = {
# HOW DO I GET THE ANNOTATIONS HERE
# }
}
spec {
selector = {
## HOW DO I GET THE SELECTORS FROM VARIABLEES HERE
}
session_affinity = "ClientIP"
port {
port = each.value.ports.port
target_port = each.value.ports.targetPort
}
type = "ClusterIP"
}
}
</code></pre>
<p>I'd appreciate any guide, links or suggestions here!</p>
| <p>You do the same as with other properties:</p>
<pre><code>resource "kubernetes_service" "service" {
for_each = var.apps
metadata {
name = each.value.svc_name
namespace = each.value.namespace
annotations = each.value.annotations
}
spec {
selector = each.value.selector
session_affinity = "ClientIP"
port {
port = each.value.ports.port
target_port = each.value.ports.targetPort
}
type = "ClusterIP"
}
}
</code></pre>
|
<p>I am trying to update a custom resource:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl edit ticketservice r134-4979a2f6-f46d-405d-97bc-ab1ba -n 2b9aea53dbfc47a
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: ticketservice.ias.alpha.com/v1alpha1
kind: TicketService
metadata:
annotations:
foo: bar
baz: bux
.. etc ..
creationTimestamp: "2022-05-04T20:08:06Z" <--- here
finalizers:
- ticket-generator/simulator-finalizer
generation: 3
labels:
label1: value1
.. etc ..
name: r134-4979a2f6-f46d-405d-97bc-ab1baf97543d
</code></pre>
<p>No matter what I do on the edit, the <code>creationTimestamp</code> just comes back at the same time. I can't edit it, can't remove the line either - it just comes back.</p>
<p>I thought maybe the <code>finalizer</code> line was involved, but I was able to delete it, but still no dice editing the <code>creationTimestamp</code></p>
<p>I think there must be another resource that locks this value; any idea what the reason is, and if there's another check going on on submission, then how would I trace that?</p>
| <p><code>creationTimestamp</code> can only be set by Kubernetes, as it turns out; clients cannot change it. This is from a trusted source, but sadly I don't have the specific k8s docs reference.</p>
|
<p>So after updating to Docker Desktop 4.7.1, local kubernetes (v1.22.5) now is stuck at starting.</p>
<p><a href="https://i.stack.imgur.com/LAkxm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LAkxm.png" alt="enter image description here" /></a></p>
<p>Uninstalled and re-installed latest Docker Desktop (4.7.1) but after restart, issue persists.</p>
<p>Running <code>kubectl get all -n kube-system</code></p>
<p>Returns <code>Unable to connect to the server: EOF</code></p>
<p>So clearly kubernetes isn't running.</p>
<p>Any help is appreciated, thanks.</p>
<p>P.S. I am contemplating now on going back to Docker Desktop 4.6.x if I can't resolve this, but I do wish to be on latest Docker Desktop if possible.</p>
<p>I am on Windows 10</p>
| <p>Ok after couple of hours of trouble shooting I was able to resolved the issue.</p>
<h3><strong>Step 1: Reset Docker Desktop To Factory Defaults</strong></h3>
<p>Right click docker icon on system tray, then choose <strong>troubleshoot</strong>, then click the <strong>Reset to factory defaults</strong> button.</p>
<p><a href="https://i.stack.imgur.com/iLAwY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iLAwY.png" alt="enter image description here" /></a></p>
<h3><strong>Step 2: Close Docker Desktop</strong></h3>
<h3><strong>Step 3: Delete the .kube folder</strong></h3>
<p>It is located on <code>C:\Users\<user name>\.kube</code></p>
<p>Make a backup for it if you want</p>
<h3><strong>Step 4: Delete the pki folder</strong></h3>
<p>It is located on <code>C:\Users\<user name>\AppData\Local\Docker\pki</code></p>
<p>Make a backup for it if you want</p>
<h3><strong>Step 5: Add some entries in your hosts file</strong></h3>
<pre><code>192.168.1.5 host.docker.internal
192.168.1.5 gateway.docker.internal
127.0.0.1 docker-for-desktop
127.0.0.1 kubernetes.docker.internal
</code></pre>
<h3><strong>Step 6: Run Docker Desktop As Administrator</strong></h3>
<p>That's it, I was able to run local kubernetes cluster again</p>
<p><a href="https://i.stack.imgur.com/zxyxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zxyxN.png" alt="enter image description here" /></a></p>
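<p>Once Docker Desktop is back up, a quick sanity check (nothing special, just standard kubectl against the default <code>docker-desktop</code> context) confirms the cluster answers again:</p>
<pre><code>kubectl config use-context docker-desktop
kubectl get nodes
kubectl get all -n kube-system
</code></pre>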
|
<p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">Kubernetes API Concepts > Efficient detection of changes</a>:</p>
<blockquote>
<p>When retrieving a collection of resources (either namespace or cluster scoped), the response from the API server contains a resourceVersion value. The client can use that resourceVersion to initiate a watch against the API server.
When you send a watch request, the API server responds with a stream of changes. These changes itemize the outcome of operations (such as create, delete, and update) that occurred after the resourceVersion you specified as a parameter to the watch request. The overall watch mechanism allows a client to fetch the current state and then subscribe to subsequent changes, without missing any events.</p>
</blockquote>
<p>When I tried a watch operation (using the kubernetes python client) I got a stream of kubernetes events; the events themselves <strong>do not have</strong> a <code>resourceVersion</code>, but the object inside each event (<code>kind: Pod</code>) does have a <code>resourceVersion</code>.</p>
<pre><code>from kubernetes import client,config,watch
config.load_kube_config(context='eks-prod')
v1 = client.CoreV1Api()
watcher = watch.Watch()
namespace = 'my-namespace'
last_resource_version=0
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5):
print(i['object'].metadata.resource_version)
last_resource_version = i['object'].metadata.resource_version
</code></pre>
<p>The resource versions are output in the order they are received, and <strong>they are not monotonically increasing</strong>, at least in the initial batch of events:</p>
<pre><code>380744163
380641499
380710458
380775853
380525082
381044514
380676150
380641735
380566799
380806984
380566585
380710721
378885571
380524650
380710218
380806798
373502190
380566206
381044372
380524615
380676624
380806573
380775663
380605904
380743917
380606153
380676388
380744368
380641258
380775416
380606397
</code></pre>
<p>But can I assume that if this watch is disconnected I can <strong>safely</strong> resume from the highest resource version I've seen? In the above case, can I safely resume from <code>381044514</code> (the highest) without missing events?</p>
| <p>From <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">Resource Version Semantics</a></p>
<blockquote>
<p>You must <strong>not assume resource versions are numeric</strong> or collatable. API clients may only compare two resource versions for equality (this means that you must not compare resource versions for greater-than or less-than relationships).</p>
</blockquote>
<p>So in principle, no: you can't use the "highest" resource version, because resource versions are not really numeric or sortable. The best you can do is use the latest <code>resourceVersion</code> that you received, as-is, verbatim, and be prepared to get a <code>resource too old</code> error, which you are supposed to handle by <strong>retrying without specifying a resource version</strong>; in that case you must also handle the fact that you will likely receive some events more than once.</p>
<p>This scenario, where the <code>resourceVersion</code> in the last event received is not the actual latest/most recent one, is easily reproducible in EKS 1.21, where the initial response to the watch returns the events in a more or less random order. If I send two watch requests simultaneously, I get the same 30 events but in a different order.</p>
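<p>Putting that together, a minimal sketch of a resumable watch loop with the Python client could look like the following. It assumes the client surfaces the 410 "Gone" / "too old resource version" response as an <code>ApiException</code> (depending on the client version it may instead arrive as an ERROR event inside the stream), and <code>handle_event</code> is a hypothetical handler that must tolerate duplicate events:</p>
<pre><code>from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config(context='eks-prod')
v1 = client.CoreV1Api()
namespace = 'my-namespace'

last_resource_version = None  # None => let the server pick a recent starting point

while True:
    watcher = watch.Watch()
    try:
        for event in watcher.stream(v1.list_namespaced_pod, namespace,
                                    resource_version=last_resource_version,
                                    timeout_seconds=60):
            # store the version verbatim; only ever compare it for equality
            last_resource_version = event['object'].metadata.resource_version
            handle_event(event)  # hypothetical handler; must tolerate duplicates
    except ApiException as e:
        if e.status == 410:  # our resourceVersion is too old
            last_resource_version = None  # force a fresh list; expect repeated events
        else:
            raise
</code></pre>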
|
<p>When a pod is in crashLoopBackOff state (for example), are the <strong>requested</strong> resources (CPU and MEMORY) still allocated to this pod ?</p>
<p>If you have the answer, please explain how did you do to make sure that it is or not still allocated to the pod <3</p>
| <p>I got the answer, here is the test:</p>
<p>Whatever the number of nodes I have and the amount of resources they provide, I will create a ResourceQuota for the namespace where I will perform the test, so the result does not depend on the cluster size:</p>
<p><code>resourceQuota.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: test-so
spec:
hard:
cpu: "10m"
memory: 10Mi
pods: "10"
</code></pre>
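<p>(A hedged aside on how this gets applied, assuming the <code>so-tests</code> namespace that appears in the ReplicaSet output further down; the two deployments below are applied the same way:)</p>
<pre><code>kubectl apply -f resourceQuota.yaml -n so-tests
</code></pre>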
<p>Let's create a busybox deployment that will go into CrashLoopBackOff (busybox's default command, <code>sh</code>, exits immediately when no TTY is attached, so the pod keeps restarting), requesting exactly the maximum resources allowed by the ResourceQuota:</p>
<p><code>deployment1.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-so
name: test-so
spec:
replicas: 1
selector:
matchLabels:
app: test-so
template:
metadata:
labels:
app: test-so
spec:
containers:
- image: busybox
name: busybox
resources:
requests:
cpu: 10m
memory: 10Mi
</code></pre>
<p>As expected it's in CrashLoopBackOff state; however, it is deployed:</p>
<pre><code>> kubectl get pods -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/test-so-57f76ccb9b-2w5vk 0/1 CrashLoopBackOff 3 (63s ago) 2m23s 10.244.5.2 so-cluster-1-worker2 <none> <none>
</code></pre>
<p>Let's now create a second deployment with the same amount of resources:</p>
<p><code>deployment2.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-so2
name: test-so2
spec:
replicas: 1
selector:
matchLabels:
app: test-so2
template:
metadata:
labels:
app: test-so2
spec:
containers:
- image: busybox
name: busybox
resources:
requests:
cpu: 10m
memory: 10Mi
</code></pre>
<p>No pods are created, and here is the status of the ReplicaSet:</p>
<pre><code>❯ k describe rs test-so2-7dd9c65cbd
Name: test-so2-7dd9c65cbd
Namespace: so-tests
Selector: app=test-so2,pod-template-hash=7dd9c65cbd
Labels: app=test-so2
pod-template-hash=7dd9c65cbd
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/test-so2
Replicas: 0 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=test-so2
pod-template-hash=7dd9c65cbd
Containers:
busybox:
Image: busybox
Port: <none>
Host Port: <none>
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
ReplicaFailure True FailedCreate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-7x8qm" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-kv9m4" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-7w7wz" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-8gcnp" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-vllqf" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-2jhnb" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-gjtvw" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 31s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-qdq44" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 30s replicaset-controller Error creating: pods "test-so2-7dd9c65cbd-69rn7" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
Warning FailedCreate 11s (x4 over 29s) replicaset-controller (combined from similar events): Error creating: pods "test-so2-7dd9c65cbd-jjjl4" is forbidden: exceeded quota: test-so, requested: cpu=10m,memory=10Mi, used: cpu=10m,memory=10Mi, limited: cpu=10m,memory=10Mi
</code></pre>
<p>So that means that, even if a pod is in CrashLoopBackOff state, it still reserves the requested amount of CPU and memory.</p>
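<p>You can confirm the same thing from the quota side: while the first pod is still crash-looping, describing the ResourceQuota shows its requests counted as used (matching the <code>used: cpu=10m,memory=10Mi</code> in the FailedCreate events above).</p>
<pre><code>kubectl describe resourcequota test-so -n so-tests
</code></pre>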
<p>We know it now! hahaha</p>
<p>Have a nice day, bguess</p>
|
<p>I'm currently getting errors when trying to exec or get logs for my pods on my GKE cluster.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl logs <POD-NAME>
Error from server: Get "https://<NODE-PRIVATE-IP>:10250/containerLogs/default/<POD-NAME>/<DEPLOYMENT-NAME>": remote error: tls: internal error
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it <POD-NAME> -- sh
Error from server: error dialing backend: remote error: tls: internal error
</code></pre>
<p>One suspicious thing I found while troubleshooting is that all CSRs are getting denied...</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-79zkn 4m16s kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7b5sx 91m kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7fzjh 103m kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7gstl 19m kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7hrvm 11m kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7mn6h 87m kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
csr-7nd7h 4m57s kubernetes.io/kubelet-serving system:node:<NODE-NAME> Denied
...
</code></pre>
<p>Any idea why this is happening ? Maybe a firewall issue ?</p>
<p>Thanks in advance !</p>
<h3>Update 1</h3>
<p>Here are the same commands with verbose output <code>--v=8</code>, <em>without the <code>goroutines</code> stack traces</em>:</p>
<p><code>$ kubectl logs --v=8 <POD-NAME></code></p>
<pre><code>I0527 09:27:59.624843 10407 loader.go:375] Config loaded from file: /home/kevin/.kube/config
I0527 09:27:59.628621 10407 round_trippers.go:420] GET https://<PUBLIC-IP>/api/v1/namespaces/default/pods/<POD-NAME>
I0527 09:27:59.628635 10407 round_trippers.go:427] Request Headers:
I0527 09:27:59.628644 10407 round_trippers.go:431] Accept: application/json, */*
I0527 09:27:59.628649 10407 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:27:59.727411 10407 round_trippers.go:446] Response Status: 200 OK in 98 milliseconds
I0527 09:27:59.727461 10407 round_trippers.go:449] Response Headers:
I0527 09:27:59.727480 10407 round_trippers.go:452] Audit-Id: ...
I0527 09:27:59.727496 10407 round_trippers.go:452] Cache-Control: no-cache, private
I0527 09:27:59.727512 10407 round_trippers.go:452] Content-Type: application/json
I0527 09:27:59.727528 10407 round_trippers.go:452] Date: Thu, 27 May 2021 07:27:59 GMT
I0527 09:27:59.727756 10407 request.go:1097] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"<POD-NAME>","generateName":"<POD-BASE-NAME>","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/<POD-NAME>","uid":"...","resourceVersion":"6764210","creationTimestamp":"2021-05-19T10:33:28Z","labels":{"app":"<NAME>","pod-template-hash":"..."},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"<POD-BASE-NAME>","uid":"...","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-19T10:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"...\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:c [truncated 3250 chars]
I0527 09:27:59.745985 10407 round_trippers.go:420] GET https://<PUBLIC-IP>/api/v1/namespaces/default/pods/<POD-NAME>/log
I0527 09:27:59.746035 10407 round_trippers.go:427] Request Headers:
I0527 09:27:59.746055 10407 round_trippers.go:431] Accept: application/json, */*
I0527 09:27:59.746071 10407 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:27:59.800586 10407 round_trippers.go:446] Response Status: 500 Internal Server Error in 54 milliseconds
I0527 09:27:59.800638 10407 round_trippers.go:449] Response Headers:
I0527 09:27:59.800654 10407 round_trippers.go:452] Audit-Id: ...
I0527 09:27:59.800668 10407 round_trippers.go:452] Cache-Control: no-cache, private
I0527 09:27:59.800680 10407 round_trippers.go:452] Content-Type: application/json
I0527 09:27:59.800693 10407 round_trippers.go:452] Content-Length: 217
I0527 09:27:59.800712 10407 round_trippers.go:452] Date: Thu, 27 May 2021 07:27:59 GMT
I0527 09:27:59.800772 10407 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Get \"https://10.156.0.8:10250/containerLogs/default/<POD-NAME>/<SERVICE-NAME>\": remote error: tls: internal error","code":500}
I0527 09:27:59.801848 10407 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "Get \"https://10.156.0.8:10250/containerLogs/default/<POD-NAME>/<SERVICE-NAME>\": remote error: tls: internal error",
"code": 500
}]
F0527 09:27:59.801944 10407 helpers.go:115] Error from server: Get "https://10.156.0.8:10250/containerLogs/default/<POD-NAME>/<SERVICE-NAME>": remote error: tls: internal error
</code></pre>
<p><code>kubectl exec --v=8 -it <POD-NAME> -- sh</code></p>
<pre><code>I0527 09:44:48.673774 11157 loader.go:375] Config loaded from file: /home/kevin/.kube/config
I0527 09:44:48.678514 11157 round_trippers.go:420] GET https://<PUBLIC-IP>/api/v1/namespaces/default/pods/<POD-NAME>
I0527 09:44:48.678528 11157 round_trippers.go:427] Request Headers:
I0527 09:44:48.678535 11157 round_trippers.go:431] Accept: application/json, */*
I0527 09:44:48.678543 11157 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:44:48.795864 11157 round_trippers.go:446] Response Status: 200 OK in 117 milliseconds
I0527 09:44:48.795920 11157 round_trippers.go:449] Response Headers:
I0527 09:44:48.795963 11157 round_trippers.go:452] Audit-Id: ...
I0527 09:44:48.795995 11157 round_trippers.go:452] Cache-Control: no-cache, private
I0527 09:44:48.796019 11157 round_trippers.go:452] Content-Type: application/json
I0527 09:44:48.796037 11157 round_trippers.go:452] Date: Thu, 27 May 2021 07:44:48 GMT
I0527 09:44:48.796644 11157 request.go:1097] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"<POD-NAME>","generateName":"","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/<POD-NAME>","uid":"","resourceVersion":"6764210","creationTimestamp":"2021-05-19T10:33:28Z","labels":{"app":"...","pod-template-hash":"..."},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"<POD-BASE-NAME>","uid":"...","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-05-19T10:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"...\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:c [truncated 3250 chars]
I0527 09:44:48.814315 11157 round_trippers.go:420] POST https://<PUBLIC-IP>/api/v1/namespaces/default/pods/<POD-NAME>/exec?command=sh&container=<SERVICE-NAME>&stdin=true&stdout=true&tty=true
I0527 09:44:48.814372 11157 round_trippers.go:427] Request Headers:
I0527 09:44:48.814391 11157 round_trippers.go:431] User-Agent: kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a
I0527 09:44:48.814406 11157 round_trippers.go:431] X-Stream-Protocol-Version: v4.channel.k8s.io
I0527 09:44:48.814420 11157 round_trippers.go:431] X-Stream-Protocol-Version: v3.channel.k8s.io
I0527 09:44:48.814445 11157 round_trippers.go:431] X-Stream-Protocol-Version: v2.channel.k8s.io
I0527 09:44:48.814471 11157 round_trippers.go:431] X-Stream-Protocol-Version: channel.k8s.io
I0527 09:44:48.913928 11157 round_trippers.go:446] Response Status: 500 Internal Server Error in 99 milliseconds
I0527 09:44:48.913977 11157 round_trippers.go:449] Response Headers:
I0527 09:44:48.914005 11157 round_trippers.go:452] Audit-Id: ...
I0527 09:44:48.914029 11157 round_trippers.go:452] Cache-Control: no-cache, private
I0527 09:44:48.914054 11157 round_trippers.go:452] Content-Type: application/json
I0527 09:44:48.914077 11157 round_trippers.go:452] Date: Thu, 27 May 2021 07:44:48 GMT
I0527 09:44:48.914099 11157 round_trippers.go:452] Content-Length: 149
I0527 09:44:48.915741 11157 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "error dialing backend: remote error: tls: internal error",
"code": 500
}]
F0527 09:44:48.915837 11157 helpers.go:115] Error from server: error dialing backend: remote error: tls: internal error
</code></pre>
<h3>Update 2</h3>
<p>After connecting to one of the GKE worker nodes and checking the <code>kubelet</code> logs, I found these weird lines:</p>
<pre><code>May 27 09:30:11 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:11.271022 1272 log.go:181] http: TLS handshake error from 10.156.0.9:54672: no serving certificate available for the kubelet
May 27 09:30:11 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:11.305628 1272 log.go:181] http: TLS handshake error from 10.156.0.9:54674: no serving certificate available for the kubelet
May 27 09:30:12 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:12.067998 1272 log.go:181] http: TLS handshake error from 10.156.0.11:57610: no serving certificate available for the kubelet
May 27 09:30:14 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:14.144826 1272 certificate_manager.go:412] Rotating certificates
May 27 09:30:14 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:14.154322 1272 reflector.go:207] Starting reflector *v1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
May 27 09:30:14 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: I0527 09:30:14.448976 1272 reflector.go:213] Stopping reflector *v1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
May 27 09:30:14 gke-<CLUSTER-NAME>-default-pool-<NODE-UID> kubelet[1272]: E0527 09:30:14.449045 1272 certificate_manager.go:454] certificate request was not signed: cannot watch on the certificate signing request: certificate signing request is denied, reason: AutoDenied, message:
</code></pre>
<h3>Update 3</h3>
<p>I've updated the cluster version from <code>1.19.9-gke.1400</code> to <code>1.19.9-gke.1900</code>. That didn't solve the problem...</p>
<p>Made a <code>Credentials Rotation</code> on the cluster. That didn't solve it either...</p>
<h1>Final</h1>
<p>After trying lots of changes in the cluster:</p>
<ul>
<li>Restarting kubelet on nodes</li>
<li>Restarting nodes</li>
<li>Upscaling/Downscaling node pool size</li>
<li>Upgrading cluster version</li>
<li>Rotating cluster certificates</li>
</ul>
<p>Even creating a new cluster (on the same project, with same VPC, etc) didn't solve the issue...</p>
<p><strong>This problem might be related to changes made to firewall rules.</strong></p>
<p>The only solution found was creating a new GKE cluster in a new GCP project and migrating the workloads using Velero.</p>
| <p>I assume this answer may help.</p>
<p><em>This issue is due to <strong>pending</strong> Certificate Signing Requests (CSRs) for the nodes, made by the kubelet running on each node.</em></p>
<p>To check the pending CSRs for the nodes:</p>
<pre><code>kubectl get csr --sort-by=.metadata.creationTimestamp
</code></pre>
<p>Then approve the CSR for each node:</p>
<pre><code>kubectl certificate approve <csr-id>
</code></pre>
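<p>A rough one-liner to approve everything still shown as Pending (hedged: this just text-matches the CONDITION column). Note that CSRs that are already <code>Denied</code>, as in the output above, generally cannot be approved any more; in that case deleting them so the kubelet submits fresh requests is usually needed.</p>
<pre><code>kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
</code></pre>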
<p>For further details, refer to this section in the k8s documentation:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs</a></p>
<p>One known limitation is that the CSRs (Certificate Signing Requests)
for these certificates cannot be automatically approved by the default
signer in the kube-controller-manager - kubernetes.io/kubelet-serving.
This will require action from the user or a third party controller</p>
</blockquote>
|