Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I installed one Kubernetes master and two Kubernetes workers on-premises.</p>
<p>Afterwards I installed MetalLB as a LoadBalancer using the commands below:</p>
<pre><code>$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
</code></pre>
<p>I created my yaml file as below:</p>
<p><strong>myapp-tst-deploy.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
</code></pre>
<p><strong>myapp-tst-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
</code></pre>
<p><strong>myapp-tst-ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
</code></pre>
<p>I ran <code>kubectl apply -f</code> for all three files, and this is the result:</p>
<pre><code>kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
</code></pre>
<p>But when I try to connect using the LB external IP (10.100.170.15), the request is sent to one pod; if I refresh or open a new tab (on the same URL), the request is sent to another pod.</p>
<p>I need a user who opens the URL in the browser to stay connected to one specific pod for the whole session, and not be switched to other pods.</p>
<p>How can I solve this problem, if it is possible at all?
In my VM setup I solved this with sticky sessions; how can I enable them on the LB or in the Kubernetes components?</p>
| Marco Ferrara | <p>In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".</p>
<p>You should try to set it to "ClientIP".</p>
<p>From page <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a> :</p>
<p>"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."</p>
| Sándor |
<p>I have a Kubernetes cluster with multiple nodes in two different subnets (<code>x</code> and <code>y</code>). I have an IPsec VPN tunnel setup between my <code>x</code> subnet and an external network. Now my problem is that the pods that get scheduled in the nodes on the <code>y</code> subnet can't send requests to the external network because they're in nodes not covered by the VPN tunnel. Creating another VPN to cover the <code>y</code> subnet isn't possible right now. Is there a way in k8s to force all pods' traffic to go through a single source? Or any clean solution even if outside of k8s?</p>
| alkhatim | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>There is no built-in functionality in kubernetes that can do it. However there are two available options which can help to achieve the required setup:</p>
<ol>
<li><strong>Istio</strong></li>
</ol>
<p>If services are well known then it's possible to use <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/" rel="nofollow noreferrer">istio egress gateway</a>. We are interested in this use case:</p>
<blockquote>
<p>Another use case is a cluster where the application nodes don’t have
public IPs, so the in-mesh services that run on them cannot access the
Internet. Defining an egress gateway, directing all the egress traffic
through it, and allocating public IPs to the egress gateway nodes
allows the application nodes to access external services in a
controlled way.</p>
</blockquote>
<ol start="2">
<li><strong>Antrea egress</strong></li>
</ol>
<p>There's another solution which can be used - <a href="https://antrea.io/docs/v1.4.0/docs/egress/" rel="nofollow noreferrer">antrea egress</a>. Use cases are:</p>
<p>You may be interested in using this capability if any of the following apply:</p>
<blockquote>
<ul>
<li><p>A consistent IP address is desired when specific Pods connect to
services outside of the cluster, for source tracing in audit logs, or
for filtering by source IP in external firewall, etc.</p>
</li>
<li><p>You want to force outgoing external connections to leave the cluster
via certain Nodes, for security controls, or due to network topology
restrictions.</p>
</li>
</ul>
</blockquote>
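<p>For a rough idea of the second option, an Antrea Egress resource looks roughly like this (a sketch based on the Antrea docs linked above; the selector and egress IP are placeholders, and field names may differ between Antrea versions):</p>
<pre><code>apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-via-x-subnet
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: my-app        # placeholder: pods that must reach the external network
  egressIP: 10.0.0.10      # placeholder: an IP on a node in the x subnet
</code></pre>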
| moonkotte |
<p>I'm exposing an HTTPS service API gateway with Swagger UI hosted on Azure AKS Cluster with ingress-nginx controller <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a></p>
<p>Exposing the path my-domain.com/swagger works fine, but when I try to make API calls (POST, GET, ...) I get a 404 error.</p>
<p>My ingress configuration is the following:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod #letsencrypt-staging
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - my-domain.com
    secretName: tp-api-gateway-wildcard # get it from certificate.yaml
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /swagger
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 80
  ingressClassName: nginx
</code></pre>
<p>Does anyone have an idea how I can successfully make the API calls? Thank you.</p>
| Reda E. | <p>I couldn't make API calls because I had exposed only the subpath <code>/swagger</code>, so I could access only <code>my-domain.com/swagger</code> and not the other paths.</p>
<p>I changed the configuration as follows:</p>
<pre><code>...
rules:
- host: my-domain.com
  http:
    paths:
    - path: /
      pathType: Prefix
...
</code></pre>
| Reda E. |
<p>I am confused about finalizing the cluster size for my QA Kubernetes deployment, which will be used by 150 people. The following are the services I need to deploy:
6 Spring Boot microservices with 4 pods each,
1 Angular application with 4 pods.</p>
<p>Can anyone help me finalize the size?</p>
| I.vignesh David | <p>Managing a Kubernetes cluster is not a one-size-fits-all problem. There are many ways to rightsize your cluster, and it is vital to design your application for reliability and robustness.</p>
<p>Factors which we need to consider when making a decision:</p>
<ol>
<li><p>High Availability</p>
</li>
<li><p>Management Overhead</p>
</li>
<li><p>Ease of Scheduling Container</p>
</li>
<li><p>Node Auto-Scaling</p>
</li>
<li><p>Ease of Maintenance</p>
</li>
<li><p>Kubelet Overhead</p>
</li>
<li><p>System Overhead</p>
</li>
<li><p>Rightsizing Your Nodes</p>
</li>
</ol>
<p>You can use the following to arrive at an optimal figure:</p>
<p>The number of containers per node = Square root of the closest lower perfect square to the total number of containers, provided the number of containers per node doesn’t exceed the recommended value</p>
<p>Number of nodes = Total number of containers / The number of containers per node</p>
<p>Overprovision factor = Number of containers per node * max resource per container / (Number of nodes - Max planned unavailable nodes)</p>
<p>Node capacity = max resource required per container * the number of containers per node + overprovision factor + Kubernetes system resource requirements</p>
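<p>As a rough worked example (my own arithmetic applied to the numbers in the question, not a sizing recommendation): 6 microservices × 4 pods plus 1 Angular app × 4 pods gives 28 containers. The closest lower perfect square is 25, so about 5 containers per node, and 28 / 5 ≈ 6 nodes; the overprovision factor and node capacity then follow from the per-container resource requests you choose.</p>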
<p>Refer to the <a href="https://betterprogramming.pub/tips-for-rightsizing-your-kubernetes-cluster-e0a8f1093d8d" rel="nofollow noreferrer">document</a> for more information.</p>
| Fariya Rahmat |
<p>I am trying to get mongo to be scheduled to a given node in my cluster (qatar).</p>
<p>I see the following error message in the pod description:</p>
<pre><code> Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 6 node(s) didn't find available persistent volumes to bind.
</code></pre>
<p>Mongo relies on the following 2 claims:</p>
<pre><code>[dsargrad@malta cfg]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-learning-center-mongodb-0 Pending local-storage 3m57s
logs-volume-learning-center-mongodb-0 Pending local-storage 3m57s
[dsargrad@malta cfg]$ kubectl describe pvc data-volume-learning-center-mongodb-0
Name: data-volume-learning-center-mongodb-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: learning-center-mongodb-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 4m45s persistentvolume-controller waiting for first consumer to be created before binding
Normal WaitForPodScheduled 12s (x19 over 4m42s) persistentvolume-controller waiting for pod learning-center-mongodb-0 to be scheduled
</code></pre>
<p>My two PV's that I want to be bound are as follows:</p>
<pre><code>[dsargrad@malta cfg]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mongo-data-pv 1Gi RWO Retain Available default/data-volume-learning-center-mongodb-0 local-storage 8m47s
mongo-logs-pv 1Gi RWO Retain Available default/logs-volume-learning-center-mongodb-0 local-storage 15m
</code></pre>
<p>These use "local" storage.. on the qatar.corp.sensis.com node.</p>
<pre><code>[dsargrad@malta cfg]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
benin.corp.sensis.com Ready <none> 45h v1.20.5
chad.corp.sensis.com Ready <none> 45h v1.20.5
malta.corp.sensis.com Ready control-plane,master 45h v1.20.5
qatar.corp.sensis.com Ready <none> 45h v1.20.5
spain.corp.sensis.com Ready <none> 45h v1.20.5
togo.corp.sensis.com Ready <none> 45h v1.20.5
tonga.corp.sensis.com Ready <none> 45h v1.20.5
</code></pre>
<p>My mongo pod won't schedule:</p>
<pre><code>[dsargrad@malta cfg]$ kubectl describe pod learning-center-mongodb-0
Name: learning-center-mongodb-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=learning-center-mongodb-svc
controller-revision-hash=learning-center-mongodb-784678577f
statefulset.kubernetes.io/pod-name=learning-center-mongodb-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/learning-center-mongodb
Init Containers:
mongod-posthook:
Image: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
Port: <none>
Host Port: <none>
Command:
cp
version-upgrade-hook
/hooks/version-upgrade
Environment: <none>
Mounts:
/hooks from hooks (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
mongodb-agent-readinessprobe:
Image: quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.1
Port: <none>
Host Port: <none>
Command:
cp
/probes/readinessprobe
/opt/scripts/readinessprobe
Environment: <none>
Mounts:
/opt/scripts from agent-scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Containers:
mongod:
Image: registry.hub.docker.com/library/mongo:4.2.6
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
#run post-start hook to handle version changes
/hooks/version-upgrade
# wait for config and keyfile to be created by the agent
while ! [ -f /data/automation-mongod.conf -a -f /var/lib/mongodb-mms-automation/authentication/keyfile ]; do sleep 3 ; done ; sleep 2 ;
# start mongod with this configuration
exec mongod -f /data/automation-mongod.conf;
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Environment:
AGENT_STATUS_FILEPATH: /healthstatus/agent-health-status.json
Mounts:
/data from data-volume (rw)
/healthstatus from healthstatus (rw)
/hooks from hooks (rw)
/var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
mongodb-agent:
Image: quay.io/mongodb/mongodb-agent:10.27.0.6772-1
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
current_uid=$(id -u)
echo $current_uid
declare -r current_uid
if ! grep -q "${current_uid}" /etc/passwd ; then
sed -e "s/^mongodb:/builder:/" /etc/passwd > /tmp/passwd
echo "mongodb:x:$(id -u):$(id -g):,,,:/:/bin/bash" >> /tmp/passwd
cat /tmp/passwd
export NSS_WRAPPER_PASSWD=/tmp/passwd
export LD_PRELOAD=libnss_wrapper.so
export NSS_WRAPPER_GROUP=/etc/group
fi
agent/mongodb-agent -cluster=/var/lib/automation/config/cluster-config.json -skipMongoStart -noDaemonize -healthCheckFilePath=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json -serveStatusPort=5000 -useLocalMongoDbTools
Limits:
cpu: 1
memory: 500M
Requests:
cpu: 500m
memory: 400M
Readiness: exec [/opt/scripts/readinessprobe] delay=5s timeout=1s period=10s #success=1 #failure=60
Environment:
AGENT_STATUS_FILEPATH: /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
AUTOMATION_CONFIG_MAP: learning-center-mongodb-config
HEADLESS_AGENT: true
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/data from data-volume (rw)
/opt/scripts from agent-scripts (rw)
/var/lib/automation/config from automation-config (ro)
/var/lib/mongodb-mms-automation/authentication from learning-center-mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/log/mongodb-mms-automation/healthstatus from healthstatus (rw)
/var/run/secrets/kubernetes.io/serviceaccount from mongodb-kubernetes-operator-token-ldwsr (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
logs-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: logs-volume-learning-center-mongodb-0
ReadOnly: false
data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-volume-learning-center-mongodb-0
ReadOnly: false
agent-scripts:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
automation-config:
Type: Secret (a volume populated by a Secret)
SecretName: learning-center-mongodb-config
Optional: false
healthstatus:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
hooks:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
learning-center-mongodb-keyfile:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mongodb-kubernetes-operator-token-ldwsr:
Type: Secret (a volume populated by a Secret)
SecretName: mongodb-kubernetes-operator-token-ldwsr
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m19s default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 6 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 7m19s default-scheduler 0/7 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 6 node(s) didn't find available persistent volumes to bind.
</code></pre>
<p>I use the claimRef in the creation of the PV.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-logs-pv
  labels:
    app: learning-center-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  claimRef:
    namespace: default
    name: logs-volume-learning-center-mongodb-0
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/storage/mongo/logs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - qatar.corp.sensis.com
</code></pre>
<p>My local-storage class:</p>
<pre><code>[dsargrad@malta cfg]$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 4h22m
</code></pre>
<p>Here is a description of the data PV</p>
<pre><code>[dsargrad@malta cfg]$ kubectl describe pv mongo-data-pv
Name: mongo-data-pv
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim: default/data-volume-learning-center-mongodb-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/storage/mongo/data
Events: <none>
</code></pre>
<p>and the logs PV</p>
<pre><code>[dsargrad@malta cfg]$ kubectl describe pv mongo-logs-pv
Name: mongo-logs-pv
Labels: app=learning-center-mongodb-svc
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim: default/logs-volume-learning-center-mongodb-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [qatar.corp.sensis.com]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/storage/mongo/logs
Events: <none>
</code></pre>
<p>On the node qatar.corp.sensis.com I have the folders referenced in the PV
<a href="https://i.stack.imgur.com/DsVie.png" rel="nofollow noreferrer">Screenshot of directory with permissions</a></p>
<p>Why won't the pod schedule to qatar.corp.sensis.com, and why won't the PVCs bind to the PVs?</p>
| David Sargrad | <p>I made the boneheaded assumption that if the PVC had claimed a size I'd see that in the output of the <em><strong>describe</strong></em> command. I had to get the YAML for the PVC spec to see that it requested more storage than the PV had allocated.</p>
<p>I've now successfully bound the volumes:</p>
<pre><code>[dsargrad@malta cfg]$ kubectl apply -f *logs* --namespace default
persistentvolume/mongo-logs-pv configured
[dsargrad@malta cfg]$ kubectl apply -f *data* --namespace default
persistentvolume/mongo-data-pv configured
[dsargrad@malta cfg]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-learning-center-mongodb-0 Bound mongo-data-pv 10Gi RWO local-storage 98m
logs-volume-learning-center-mongodb-0 Bound mongo-logs-pv 10Gi RWO local-storage 98m
</code></pre>
<p>I found the detail I needed by looking carefully at the PVC spec.
Interestingly it was <a href="https://www.youtube.com/watch?v=0CFb26BNeTQ&t=301s" rel="nofollow noreferrer">this</a> video on youtube that clued me to the answer. See time starting at about 6:50.</p>
<p>In the following output, note that the requested storage size is "2G".</p>
<pre><code>[dsargrad@malta cfg]$ kubectl get pvc logs-volume-learning-center-mongodb-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-03-31T15:55:40Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: learning-center-mongodb-svc
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-31T15:55:40Z"
  name: logs-volume-learning-center-mongodb-0
  namespace: default
  resourceVersion: "302313"
  uid: 09ef80fe-a45e-45e4-b515-9746b9265476
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2G
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Pending
</code></pre>
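<p>In other words (a minimal sketch of the relevant change, assuming the claims keep requesting 2G): each PV's <code>spec.capacity.storage</code> has to be at least as large as what its PVC requests, for example:</p>
<pre><code>spec:
  capacity:
    storage: 2Gi   # was 1Gi; must be >= the 2G requested by the claim
</code></pre>
<p>(The <code>kubectl get pvc</code> output above shows the volumes ended up with a 10Gi capacity.)</p>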
| David Sargrad |
<p>I'm a noob with Azure deployment, kubernetes and HA implementation. When I implement health probes as part of my app deployment, the health probes fail and I end up with either 503 (internal server error) or 502 (bad gateway) error when I try accessing the app via the URL. When I remove the health probes, I can successfully access the app using its URL.</p>
<p>I use the following yaml deployment configuration when implementing the health probes, which is utilised by an Azure devops pipeline. The app takes under 5 mins to become available, so I set the <code>initialDelaySeconds</code> for the health probes to <code>300s</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
spec:
  ...
  template:
    metadata:
      labels:
        app: myApp
    spec:
      ...
      containers:
      - name: myApp
        ...
        ports:
        - containerPort: 5000
        ...
        readinessProbe:
          tcpSocket:
            port: 5000
          initialDelaySeconds: 300
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          tcpSocket:
            port: 5000
          periodSeconds: 30
          initialDelaySeconds: 300
          successThreshold: 1
          failureThreshold: 3
        ...
</code></pre>
<p>When I perform the deployment and describe the pod, I see the following listed under 'Events' at the bottom of the output:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 2m1s (x288 over 86m) kubelet, aks-vm-id-appears-here Readiness probe failed: dial tcp 10.123.1.23:5000: connect: connection refused
</code></pre>
<p>(this is confusing as it states the age as 2m1s - but the <code>initialDelaySeconds</code> is greater than this - so I'm not sure why it reports this as the age)</p>
<p>The readiness probe subsequently fails with the same error. The IP number matches the IP of my pod and I see this under <code>Containers</code> in the pod description:</p>
<pre><code>Containers:
....
Port: 5000/TCP
</code></pre>
<p>The failure of the liveness and readiness probes results in the pod being continually terminated and restarted.</p>
<p>The app has a default <code>index.html</code> page, so I <em>believe</em> the health probe should receive a 200 response if it's able to connect.</p>
<p>Because the health probe is failing, the pod IP doesn't get assigned to the endpoints object and therefore isn't assigned against the service.</p>
<p>If I comment out the <code>readinessProbe</code> and <code>livenessProbe</code> from the deployment, the app runs successfully when I use the URL via the browser, and the pod IP gets successfully assigned as an endpoint that the service can communicate with. The endpoint address is in the form 10.123.1.23:5000 - i.e. port 5000 seems to be the correct port for the pod.</p>
<p>I don't understand why the health probe would be failing to connect? It looks correct to me that it should be trying to connect on an IP that looks like 10.123.1.23:5000.</p>
<p>It's possible that the port is taking longer than 300s to become open, but I don't know of a way to check that. If I enter a bash session on the pod, <code>watch</code> isn't available (I read that <code>watch ss -lnt</code> can be used to examine port availability).</p>
<p>The following answer suggests increasing <code>initialDelaySeconds</code> but I already tried that - <a href="https://stackoverflow.com/a/51932875/1549918">https://stackoverflow.com/a/51932875/1549918</a></p>
<p>I saw this question - but resource utilisation (e.g. CPU/RAM) is not the issue
<a href="https://stackoverflow.com/questions/63904845/liveness-and-readiness-probe-connection-refused">Liveness and readiness probe connection refused</a></p>
<p><strong>UPDATE</strong></p>
<p>If I curl from a replica of the pod to <a href="https://10.123.1.23:5000" rel="nofollow noreferrer">https://10.123.1.23:5000</a>, I get a similar error (<code>Failed to connect to ...the IP.. port 5000: Connection refused</code>). Why could this be failing? I read something that suggests that attempting this connection from another pod may indicate reachability for the health probes also.</p>
| Chris Halcrow | <p>If you are unsure if your application is starting correctly then replace it with a known good image. e.g. <a href="https://hub.docker.com/_/httpd/" rel="nofollow noreferrer">httpd</a></p>
<p>Change the ports to 80 and the image to httpd.</p>
<p>You might also want to increase the timeout for the health check, as it defaults to 1 second, e.g. <code>timeoutSeconds: 5</code>.</p>
<p>In addition, if your image is a web application then it would be better to use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">an HTTP probe</a>.</p>
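<p>For example, a sketch of an HTTP readiness probe for the asker's container (the path and port are taken from the question, which mentions a default <code>index.html</code> served on port 5000):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 300
  periodSeconds: 5
  timeoutSeconds: 5
  failureThreshold: 3
</code></pre>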
| James |
<p>I created an AWS EKS cluster using the assume-role API. Role A assumed role B to perform the create-EKS API call. When creating the cluster I specified role C as the EKS cluster role. As I understand it, role C's ARN will be stored in the EKS aws-auth ConfigMap.</p>
<p>When A assumes role C to access the created EKS cluster, "Failed to get namespaces: Unauthorized" is returned.</p>
<p>I always use assume-role to invoke APIs. Does anyone know whether aws-auth stores role C's ARN like 'arn:aws:iam::C:role/k8s-cluster-role', or does EKS store the role ARN in aws-auth in another way?</p>
| Kami Wan | <p>You have some misconception; The role that is stored in <strong>aws-auth configmap for system:masters group</strong> in your cluster is not the cluster role, but the iam principal that creates the cluster itself, <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#:%7E:text=When%20you%20create%20an%20Amazon%20EKS%20cluster%2C%20the%20IAM%20principal%20that%20creates%20the%20cluster%20is%20automatically%20granted%20system%3Amasters%20permissions%20in%20the%20cluster%27s%20role%2Dbased%20access%20control%20(RBAC)%20configuration%20in%20the%20Amazon%20EKS%20control%20plane." rel="nofollow noreferrer">as per official doc</a>.</p>
<blockquote>
<p>When you create an Amazon EKS cluster, the IAM principal that creates the cluster is automatically granted system:masters permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane.</p>
</blockquote>
<p>From what you have written, if the sequence is right and the assume-role approach you are following works properly, <strong>you should be able to query your cluster API resources with role B, not role C</strong>, since B is the one that was used to create the cluster. In your current setup you are expecting role C to be able to access cluster resources, even though you created the cluster with role B.</p>
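<p>For reference, if you do want another role (role C in your notation) to be able to access the cluster, it has to be added explicitly to the aws-auth ConfigMap, roughly like this (a sketch; the ARN, username and groups are placeholders to adjust):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/k8s-cluster-role   # placeholder ARN
      username: k8s-cluster-role
      groups:
        - system:masters
</code></pre>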
| Abraam Magued |
<p>I'm trying to use <code>Kubernetes</code> with <code>Docker</code>. My image runs with Docker. I have one master-node and two worker-nodes. I also created a local registry like this <code>$ docker run -d -p 5000:5000 --restart=always --name registry registry:2</code> and pushed my image into it. Everything worked fine so far.</p>
<p>I added <code>{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }</code> to the <code>daemon.json</code> file at <code>/etc/docker</code>. And I also changed the content of the <code>docker-file</code> at <code>/etc/default/</code>to <code>DOCKER_OPTS="--config-file=/etc/docker/daemon.json"</code>. I made the changes on all nodes and I restarted the docker daemon afterwards.</p>
<p>I am able to pull my image from every node with the following command: </p>
<p><code>sudo docker pull xxx.xxx.xxx.xxx:5000/helloworldimage</code></p>
<p>I try to create my container from the master node with the command below:</p>
<p><code>sudo kubectl run test --image xxx.xxx.xxx.xxx:5000/helloworldimage</code></p>
<p>Then I get the following error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-775f99f57-m9r4b to rpi-2
Normal BackOff 18s (x2 over 44s) kubelet, rpi-2 Back-off pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 18s (x2 over 44s) kubelet, rpi-2 Error: ImagePullBackOff
Normal Pulling 3s (x3 over 45s) kubelet, rpi-2 Pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Failed to pull image "xxx.xxx.xxx.xxx:5000/helloworldimage": rpc error: code = Unknown desc = failed to pull and unpack image "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to resolve reference "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to do request: Head https://xxx.xxx.xxx.xxx:5000/v2/helloworldimage/manifests/latest: http: server gave HTTP response to HTTPS client
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Error: ErrImagePull
</code></pre>
<p>This is the <code>docker</code> version I use:</p>
<pre><code>Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:37:22 2019
OS/Arch: linux/arm
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:31:17 2019
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
</code></pre>
<p>This is the <code>Kubernetes</code> version I use:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}```
</code></pre>
| angel285 | <blockquote>
<p>Kubernetes: Failed to pull image. Server gave HTTP response to HTTPS client.</p>
</blockquote>
<pre><code>{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }
</code></pre>
<p>to the <code>daemon.json</code> file at <code>/etc/docker</code>.</p>
<p>I solved this problem by configuring it on <strong>all kubernetes nodes</strong>.</p>
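<p>For completeness, a sketch of the per-node step this refers to (assuming Docker is the container runtime on every node, as in the question):</p>
<pre><code># repeat on the master and every worker node
# /etc/docker/daemon.json
{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }

# then restart the Docker daemon
sudo systemctl restart docker
</code></pre>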
| Alexey Sabo |
<p>The official tutorial describes changing the namespace by clicking on the namespace item and setting the context:
<a href="https://i.stack.imgur.com/k5pR4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k5pR4.png" alt="enter image description here" /></a></p>
<p>But I can't see this option in my version 2020.3:</p>
<p><a href="https://i.stack.imgur.com/pLcXK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pLcXK.png" alt="enter image description here" /></a></p>
| max_b | <p>A namespace is a part of a Kubernetes context; the other parts are the cluster and the user credentials. So essentially a Kubernetes context is a shortcut which gives you quick access to a namespace in your cluster. In the screenshot you posted, the <code>default</code> namespace has a context created for it (the usual scenario), but the other namespaces do not have a context.</p>
<p>If you want to create a context for another namespace, please use <code>kubectl config set-context</code> command in a terminal. Cloud Code doesn't support this operation via UI or Kubernetes Explorer since it's normally rarely used. Contexts are normally created automatically when you start a cluster like minikube or GKE.</p>
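<p>For example (a sketch; the cluster, user and namespace names are placeholders you would take from your kubeconfig):</p>
<pre><code>kubectl config set-context my-namespace-context \
  --cluster=my-cluster --user=my-user --namespace=my-namespace
kubectl config use-context my-namespace-context
</code></pre>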
| Ivan Portyankin |
<p>I'm using spring boot 2.5.6 and I'm generating the docker image with the spring boot maven plugin.
I'm deploying the application using AWS EKS with nodes managed by fargate.</p>
<p>The plugin configuration is the following</p>
<pre><code><plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <excludes>
            <exclude>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
            </exclude>
        </excludes>
    </configuration>
</plugin>
</code></pre>
<p>The command I use to execute it is the following</p>
<pre><code>./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-image-name
</code></pre>
<p>When the application is deployed on AWS EKS, it prints the following data:</p>
<pre><code>Setting Active Processor Count to 2
Adding $JAVA_OPTS to $JAVA_TOOL_OPTIONS
Calculated JVM Memory Configuration:
-XX:MaxDirectMemorySize=10M
-Xmx408405K
-XX:MaxMetaspaceSize=128170K
-XX:ReservedCodeCacheSize=240M
-Xss1M
(Total Memory: 1G, Thread Count: 250, Loaded Class Count: 20215, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS:
-Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties
-XX:+ExitOnOutOfMemoryError
-XX:ActiveProcessorCount=2
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/var/log/containers/heapDump.hprof"
-XX:MaxDirectMemorySize=10M
-Xmx408405K
-XX:MaxMetaspaceSize=128170K
-XX:ReservedCodeCacheSize=240M
-Xss1M
-XX:+UnlockDiagnosticVMOptions
-XX:NativeMemoryTracking=summary
-XX:+PrintNMTStatistics
-Dorg.springframework.cloud.bindings.boot.enable=true
</code></pre>
<p>If I go inside the container and I run the command "free -h" I get the following output</p>
<pre><code>total mem : 7.7G
used mem : 730M
free mem : 4.6G
shared : 820K
buff/cache : 2.4G
available
</code></pre>
<p>Why is -Xmx set to only about 400 MB? And why is the total memory only 1 GB?</p>
| Gavi | <p>Posting this out of comments for better visibility.</p>
<hr />
<p>An important thing to mention is when <code>free</code> command is run inside a pod's container, it shows all available memory on the node where this pod is scheduled and running.</p>
<p>At this point it's very important to set memory <code>requests</code> and <code>limits</code> for Java applications, since the JVM memory allocation can be sized incorrectly if it is left entirely to the application.</p>
<hr />
<p>There are two main options for resource allocation (in this particular case is <code>memory</code>):</p>
<ul>
<li><p>requests (<code>spec.containers[].resources.requests.memory</code>) - kubernetes scheduler has to find a node which has requested amount of memory, not less than specified.</p>
<p>It's very important to set the <code>requests</code> reasonably since it's used for scheduling and there are chances that kubernetes scheduler won't be able to find a sufficient node with enough free memory to schedule the pod - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-that-is-too-big-for-your-nodes" rel="nofollow noreferrer">good example of incorrect requests</a></p>
</li>
<li><p>limits (<code>spec.containers[].resources.limits.memory</code>) - kubelet ensures that the pod will not consume more than specified in limits, since containers in a pod are allowed to consume more than requested.</p>
<p>It's also important to have <code>limits</code> set up for predictable resource consumption since containers can exceed requested memory and consume all node's memory until <code>OOM killer</code> is involved. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#if-you-do-not-specify-a-memory-limit" rel="nofollow noreferrer">Possible cases when limits are not set</a></p>
</li>
</ul>
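<p>A minimal sketch of how that looks on the container spec (the values here are illustrative, not a recommendation for this particular application):</p>
<pre><code>spec:
  containers:
  - name: app
    image: my-image-name   # placeholder
    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1"
</code></pre>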
| moonkotte |
<p><strong>EDIT:</strong></p>
<p>I deleted minikube, enabled kubernetes in Docker desktop for Windows and installed <code>ingress-nginx</code> manually.</p>
<pre><code>$helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
</code></pre>
<p>It gave me an error, but I think that's because I had already installed it before, since:</p>
<pre><code>$kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.222.233 localhost 80:30199/TCP,443:31093/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.106.52.106 <none> 443/TCP 11m
</code></pre>
<p>Then I applied all my YAML files again, but this time the ingress is not getting any address:</p>
<pre><code>$kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress <none> myapp.com 80 10m
</code></pre>
<hr />
<p>I am using docker desktop (windows) and installed nginx-ingress controller via minikube addons enable command:</p>
<pre><code>$kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-lp4md 0/1 Completed 0 67m
ingress-nginx-admission-patch--1-jdkn7 0/1 Completed 1 67m
ingress-nginx-controller-5f66978484-6mpfh 1/1 Running 0 67m
</code></pre>
<p>And applied all my yaml files:</p>
<pre><code>$kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default event-service-svc ClusterIP 10.108.251.79 <none> 80/TCP 16m app=event-service-app
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m <none>
default mssql-clusterip-srv ClusterIP 10.98.10.22 <none> 1433/TCP 16m app=mssql
default mssql-loadbalancer LoadBalancer 10.109.106.174 <pending> 1433:31430/TCP 16m app=mssql
default user-service-svc ClusterIP 10.111.128.73 <none> 80/TCP 16m app=user-service-app
ingress-nginx ingress-nginx-controller NodePort 10.101.112.245 <none> 80:31583/TCP,443:30735/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.169.167 <none> 443/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m k8s-app=kube-dns
</code></pre>
<p>All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have worked and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked the logs for all pods and couldn't find anything useful.</p>
<pre><code>$kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress nginx myapp.com localhost 80 74s
</code></pre>
<p>This one is also a bit weird: I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts also didn't seem right.</p>
<p>My question is: what might I be doing wrong? And how can I even trace where my requests are being forwarded to?</p>
<p>ingress-svc.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api/Users
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
      - path: /api/Events
        pathType: Prefix
        backend:
          service:
            name: event-service-svc
            port:
              number: 80
</code></pre>
<p>events-depl.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service-app
  labels:
    app: event-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service-app
  template:
    metadata:
      labels:
        app: event-service-app
    spec:
      containers:
      - name: event-service-app
        image: ghcr.io/myapp/event-service:master
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: event-service-svc
spec:
  selector:
    app: event-service-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code></pre>
| osumatu | <h2>Reproduction</h2>
<p>I reproduced the case using minikube <code>v1.24.0</code>, Docker desktop <code>4.2.0</code>, engine <code>20.10.10</code></p>
<p>First, <code>localhost</code> in the ingress ADDRESS appears by design; it doesn't really matter what IP address is behind the domain in <code>/etc/hosts</code> (I added a different one for testing and it still showed localhost). Only <code>metallb</code> would provide an IP address from a configured network.</p>
<h2>What happens</h2>
<p>When minikube driver is <code>docker</code>, minikube creates a big container (VM) where kubernetes components are run. This can be checked by running <code>docker ps</code> command in host system:</p>
<pre><code>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f087dc669944 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 16 minutes ago Up 16 minutes 127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp minikube
</code></pre>
<p>And then <code>minikube ssh</code> to get inside this container and run <code>docker ps</code> to see all kubernetes containers.</p>
<p>Moving forward. Before introducing <code>ingress</code>, it's already clear that even <code>NodePort</code> doesn't work as intended. Let's check it.</p>
<p>There are two ways to get <code>minikube VM IP</code>:</p>
<ol>
<li>run <code>minikube IP</code></li>
<li><code>kubectl get nodes -o wide</code> and find the node's IP</li>
</ol>
<p>What should happen next with <code>NodePort</code> is requests should go to <code>minikube_IP:Nodeport</code> while it doesn't work. It happens because docker containers inside the minikube VM are not exposed outside of the cluster which is another docker container.</p>
<p>On <code>minikube</code> to access services within cluster there is a special command - <a href="https://minikube.sigs.k8s.io/docs/commands/service/" rel="noreferrer"><code>minikube service %service_name%</code></a> which will create a direct tunnel to the service inside the <code>minikube VM</code> (you can see that it contains <code>service URL</code> with <code>NodePort</code> which is supposed to be working):</p>
<pre><code>$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
</code></pre>
<p>And now it's available on host machine:</p>
<pre><code>$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
</code></pre>
<h2>Adding ingress</h2>
<p>Moving forward and adding ingress.</p>
<pre><code>$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echo NodePort 10.111.57.237 <none> 8080:32034/TCP 25m
ingress-nginx ingress-nginx-controller NodePort 10.104.52.175 <none> 80:31041/TCP,443:31275/TCP 2m12s
</code></pre>
<p>Trying to get any response from <code>ingress</code> by hitting <code>minikube_IP:NodePort</code> with no luck:</p>
<pre><code>$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
</code></pre>
<p>Trying to create a tunnel with <code>minikube service</code> command:</p>
<pre><code>$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
<p>And getting <code>404</code> from <code>ingress-nginx</code> which means we can send requests to ingress:</p>
<pre><code>$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
</code></pre>
<h2>Solutions</h2>
<p>Above I explained what happens. Here are three solutions how to get it work:</p>
<ol>
<li>Use another <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="noreferrer">minikube driver</a> (e.g. virtualbox. I used <code>hyperv</code> since my laptop has windows 10 pro)</li>
</ol>
<p><code>minikube ip</code> will return "normal" IP address of virtual machine and all network functionality will work just fine. You will need to add this IP address into <code>/etc/hosts</code> for domain used in ingress rule</p>
<p><strong>Note!</strong> Even though <code>localhost</code> was shown in <code>kubectl get ing ingress</code> output in <code>ADDRESS</code>.</p>
<ol start="2">
<li>Use built-in kubernetes feature in Docker desktop for Windows.</li>
</ol>
<p>You will need to <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters" rel="noreferrer">manually install <code>ingress-nginx</code></a> and change <code>ingress-nginx-controller</code> service type from <code>NodePort</code> to <code>LoadBalancer</code> so it will be available on <code>localhost</code> and will be working. Please find <a href="https://stackoverflow.com/a/69113528/15537201">my another answer about Docker desktop for Windows</a></p>
<ol start="3">
<li>(testing only) - use port-forward</li>
</ol>
<p>It's almost exactly the same idea as <code>minikube service</code> command. But with more control. You will open a tunnel from host VM port <code>80</code> to <code>ingress-nginx-controller</code> service (eventually pod) on port <code>80</code> as well. <code>/etc/hosts</code> should contain <code>127.0.0.1 test.domain</code> entity.</p>
<pre><code>$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
</code></pre>
<p>And testing it works:</p>
<pre><code>$ curl test.domain
StatusCode : 200
StatusDescription : OK
</code></pre>
<h2>Update for kubernetes in docker desktop on windows and ingress:</h2>
<p>On modern <code>ingress-nginx</code> versions <code>.spec.ingressClassName</code> should be added to ingress rules. See <a href="https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do" rel="noreferrer">last updates</a>, so ingress rule should look like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
ingressClassName: nginx # can be checked by kubectl get ingressclass
rules:
- host: myapp.com
http:
...
</code></pre>
| moonkotte |
<p>Hey I have a kubernetes cluster for a gitlab ci/cd pipeline.
There is a gitlab runner (kubernetes executor) running on it.</p>
<p>Sometimes the pipeline passes, but sometime I get</p>
<pre><code>Waiting for pod gitlab-runner/runner-wyplq6-h-project-7180-concurrent-0lr66z to be running, status is Pending
ContainersNotInitialized: "containers with incomplete status: [init-permissions]"
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
ERROR: Job failed (system failure): prepare environment: waiting for pod running: timed out waiting for pod to start. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
</code></pre>
<p>I checked the link but it says the kubernetes executor should not cause any problem with shell profiles.
So I ran <code>kubectl describe pod gitlab-runner/runner-wyplq6-h-project-7180-concurrent-0lr66z</code></p>
<pre><code>...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40s default-scheduler Successfully assigned gitlab-runner/runner-wyplq6-h-project-7180-concurrent-0lr66z to bloxberg
Warning FailedCreatePodContainer 5s kubelet unable to ensure pod container exists: failed to create container for [kubepods besteffort pod6fe2669a-ae7f-47e3-8794-814767c14895] : Failed to activate service 'org.freedesktop.systemd1': timed out (service_start_timeout=25000ms)
</code></pre>
<p>Why does the runner fail to start <code>systemd</code>? Is there a way to fix that?</p>
<h3>EDIT:</h3>
<p>I looked at the pods running on the cluster and it seems that there is an issue with the runner. With <code>kubectl describe pod gitlab-runner-gitlab-runner-6b7bf4d766-9t4k6 -n gitlab-runner</code> I get:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 7m42s (x52196 over 21d) kubelet Readiness probe failed:
Warning BackOff 2m42s (x91155 over 21d) kubelet Back-off restarting failed container
</code></pre>
<p>So there is an issue with the runner, but the error message says nothing about the cause.</p>
| iaquobe | <p><strong>Job failed (system failure): timed out waiting for pod to start</strong></p>
<p>The following error occurs if the cluster cannot schedule the build pod before the timeout defined by <code>poll_timeout</code>; in that case the build pod returns an error, and the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime" rel="nofollow noreferrer">Kubernetes Scheduler</a> should be able to delete it.
To fix this issue, increase the <code>poll_timeout</code> value in your <code>config.toml</code> file.</p>
<p><strong>Error cleaning up pod and Job failed (system failure): prepare environment: waiting for pod running</strong></p>
<p>The following error occurs when Kubernetes fails to schedule the job pod in a timely manner. GitLab Runner waits for the pod to be ready, but it fails and then tries to clean up the pod, which can also fail.</p>
<p>To troubleshoot, check the Kubernetes primary node and all nodes that run a <code>kube-apiserver</code> instance. Ensure they have all of the resources needed to manage the target number of pods that you hope to scale up to on the cluster.</p>
<p>To change the time GitLab Runner waits for a pod to reach its <strong>Ready</strong> status, use the <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#other-configtoml-settings" rel="nofollow noreferrer">poll_timeout</a> setting.</p>
<p>To better understand how pods are scheduled or why they might not get scheduled on time, read about the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">Kubernetes Scheduler</a>.</p>
<p>Refer to the troubleshooting <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#job-failed-system-failure-timed-out-waiting-for-pod-to-start" rel="nofollow noreferrer">documentation</a> for more details.</p>
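<p>For reference, a sketch of where <code>poll_timeout</code> lives in the runner's <code>config.toml</code> (the value here is illustrative):</p>
<pre><code>[[runners]]
  name = "kubernetes-runner"   # placeholder
  executor = "kubernetes"
  [runners.kubernetes]
    # seconds to wait for the build pod to become ready before failing the job
    poll_timeout = 600
</code></pre>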
| Fariya Rahmat |
<p>I have a CronJob which runs every 10 minutes and executes some command.</p>
<p>It's failing for some reason and I want to <code>exec</code> into it to investigate and fix it (There's a 1 time command I need to run to fix a shared volume*).</p>
<p>The issue is when I try to run <code>exec</code> I get this error, which is expected:</p>
<pre><code>error: cannot exec into a container in a completed pod; current phase is Failed
</code></pre>
<p>I would like to create a new pod from the job definition and run a custom command on that (e.g. <code>tail -f</code>) so that it runs without crashing and I can <code>exec</code> into it to investigate and fix the issue.</p>
<p>I've been struggling to do this and have only found 2 solutions which both seem a bit hacky (I've used both and they do work, but since I'm still developing the feature I've had to reset a few times)</p>
<ol>
<li>I change the command on the k8s YAML file to <code>tail -f</code> then update the Helm repo and <code>exec</code> on the new container. Fix the issue and revert back.</li>
<li>Copy the job to a new <code>Pod</code> YAML file in a directory outside of the Helm repo, with <code>tail -f</code>. Create it with the <code>kubectl apply -f</code> command. Then I can <code>exec</code> on it, do what I need and delete the pod.</li>
</ol>
<p>The issue with the first is that I change the Helm repo. The second requires some duplication and adaptation of code, but it's not too bad.</p>
<p>What I would like is a <code>kubectl</code> command I can run to do this. Kind of like how you can create a job from a CronJob:</p>
<pre><code>kubectl create job --from=cronjob/myjob myjob-manual
</code></pre>
<p>If I could do this to create a pod, or to create a job with a command which never finishes (like <code>tail -f</code>) it would solve my problem.</p>
<p>*The command I need to run is to pass in some TOTP credentials as a 1 time task to login to a service. The cookies to stay logged in will then exist on the shared volume so I won't have to do this again. I don't want to pass in the TOTP master key as a secret and add logic to interpret it either. So the most simple solution is to set up this service and once in a while I <code>exec</code> into the pod and login using the TOTP value again.</p>
<p>One more note. This is for a personal project and a tool I use for my own use. It's not a critical service I am offering to someone else so I don't mind if something goes wrong once in a while and I need to intervene.</p>
| KNejad | <p>Looked into this question more, your <strong>option 2 is the most viable solution</strong>.</p>
<p>Adding a sidecar container - it's the same as option 1, but even more difficult/time consuming.</p>
<p>As mentioned in comments, there are no options for direct imperative pod creation from <code>job</code>/<code>cronjob</code>. Available options can be checked for <code>kubectl</code>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer"><code>kubectl create</code></a></li>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer"><code>kubectl run</code></a></li>
</ul>
<p>I also tried the following out of interest (the idea was to take the command from the <code>cronjob</code> and then continue with a specified command), but it did not work out:</p>
<pre><code>$ kubectl create job --from=cronjob/test-cronjob manual-job -- tail -f
error: cannot specify --from and command
</code></pre>
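<p>If you'd like to automate something close to option 2 without keeping a separate YAML copy, a rough sketch (hedged: <code>myjob</code> is a placeholder name, it assumes a recent <code>kubectl</code> and <code>yq</code> v4, and the overridden command is just an example) is to render the job with <code>--dry-run</code>, patch the command, and apply it:</p>
<pre><code># Render a job from the cronjob without creating it, override the command, then apply it
kubectl create job --from=cronjob/myjob myjob-debug --dry-run=client -o yaml \
  | yq e '.spec.template.spec.containers[0].command = ["tail", "-f", "/dev/null"]' - \
  | kubectl apply -f -

# Exec into the pod created by the debug job, then clean up
kubectl exec -it $(kubectl get pods -l job-name=myjob-debug -o name | head -n 1) -- sh
kubectl delete job myjob-debug
</code></pre>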
| moonkotte |
<p>I have a service named <code>image-api</code>. The image-api is accessible from within the pod, but nginx returns a 426 status code.</p>
<p>Running this command returns the expected data:</p>
<pre><code>curl image-api.gateways.svc.cluster.local:8000
</code></pre>
<p>But nginx returns a 426 status code.</p>
<p>If I replace the native URL of image-api with the Istio URL, then nginx returns a 200 status code.</p>
<p>The <code>/etc/nginx/nginx.conf</code>:</p>
<pre><code>worker_processes 8;
events {
worker_connections 1024;
}
http {
resolver kube-dns.kube-system valid=10s;
server_tokens off;
server {
listen 8080;
location ~ ^/(\w+) {
# ISTIO URL
proxy_pass http://image-api.gateways.svc.cluster.local:8000$request_uri;
# MAIN URL
# proxy_pass http://image-api.main.svc.cluster.local:8000$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
</code></pre>
| Arman | <p>As mentioned in the <a href="https://httpstatuses.com/426" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>A 426 status code is caused when a client is attempting to upgrade a
connection to a newer version of a protocol, but the server is
refusing to do so.</p>
<p>This can happen for several reasons, including:</p>
<ol>
<li><p>Incompatibility between the client and server versions of the protocol.</p>
</li>
<li><p>The server may not support the requested version of the protocol.</p>
</li>
<li><p>The server may be configured to only allow certain versions of the protocol to be used.</p>
</li>
<li><p>The server may be experiencing technical issues or undergoing maintenance that prevents it from upgrading the connection.</p>
</li>
</ol>
</blockquote>
<p>You need to upgrade the HTTP protocol version that NGINX uses towards the upstream.</p>
<p>The proxy configuration above is missing <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version" rel="nofollow noreferrer">proxy_http_version 1.1</a>, so NGINX defaults to HTTP/1.0 for all upstream requests.</p>
<p>Envoy will return HTTP 426 if the request arrives as HTTP/1.0.</p>
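<p>A minimal sketch of the fix applied to the config shown in the question (only the added directive matters, everything else stays the same):</p>
<pre><code>location ~ ^/(\w+) {
    proxy_http_version 1.1;  # talk HTTP/1.1 to the upstream instead of the 1.0 default
    proxy_pass http://image-api.gateways.svc.cluster.local:8000$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
</code></pre>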
| Fariya Rahmat |
<p>I am sharing the part of the manifest where I added the security context. If I remove the security context, it works fine. I am trying to run the container as a non-root user. I'm not sure what I did wrong below.</p>
<pre><code>containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
ports:
- name: http
containerPort: 8010
protocol: TCP
volumeMounts:
- name: mount-jmx-secret
mountPath: "etc/hello-world"
volumes:
- name: mount-jmx-secret
secret:
secretName: jmxsecret
defaultMode: 0600
</code></pre>
| ratna | <p>I do not know what mistake I made; it worked fine after a couple of reinstalls of the Helm chart.
The change I made: I added a new user to the Dockerfile.</p>
<pre><code>RUN useradd -u 8877 <user_name>(ram)
USER ram
</code></pre>
| ratna |
<p>I have created a k8s cluster with kops (1.21.4) on AWS and as per the <a href="https://kops.sigs.k8s.io/addons/#cluster-autoscaler" rel="nofollow noreferrer">docs on autoscaler</a>. I have done the required changes to my cluster but when the cluster starts, the cluster-autoscaler pod is unable to schedule on any node. When I describe the pod, I see the following:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m31s (x92 over 98m) default-scheduler 0/4 nodes are available: 1 Too many pods, 3 node(s) didn't match Pod's node affinity/selector.
</code></pre>
<p>Looking at the deployment for the cluster autoscaler I see the following <code>podAntiAffinity</code>:</p>
<pre><code> affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cluster-autoscaler
topologyKey: topology.kubernetes.io/zone
weight: 100
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cluster-autoscaler
topologyKey: kubernetes.com/hostname
</code></pre>
<p>From this I understand that it wants to prevent running the pod on a node which already has cluster-autoscaler running. But that doesn't seem to justify the error seen in the pod status.</p>
<p>Edit: The pod for autoscaler has the following <code>nodeSelectors</code> and <code>tolerations</code>:</p>
<pre><code>Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
</code></pre>
<p>So clearly, it should be able to schedule on master node too.</p>
<p>I am not sure what else do I need to do to make the pod up and running.</p>
| Divick | <p>Posting the answer out of comments.</p>
<hr />
<p>There are <code>podAffinity</code> rules in place so first thing to check is if any errors in scheduling are presented. Which is the case:</p>
<pre><code>0/4 nodes are available: 1 Too many pods, 3 node(s) didn't match Pod's node affinity/selector.
</code></pre>
<p>Since there is 1 control plane node (on which the pod is supposed to be scheduled) and 3 worker nodes, this leads to the error <code>1 Too many pods</code> for the control plane.</p>
<hr />
<p>Since the cluster is running in AWS, there's a known limitation on the number of <code>network interfaces</code> and <code>private IP addresses</code> per machine type - <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="nofollow noreferrer">IP addresses per network interface per instance type</a>.</p>
<p><code>t3.small</code> was used, which has 3 interfaces and 4 IPs per interface = 12 in total, which was not enough.</p>
<p>Scaling up to <code>t3.medium</code> resolved the issue.</p>
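<p>To verify this kind of limit yourself, you can compare each node's pod capacity with what is already scheduled on it; a quick sketch (the node name is a placeholder):</p>
<pre><code># Maximum number of pods the kubelet will accept on each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

# Pods already scheduled on a given node
kubectl get pods --all-namespaces --field-selector spec.nodeName=my-control-plane-node | wc -l
</code></pre>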
<hr />
<p>Credits to <a href="https://stackoverflow.com/a/64972286/15537201">Jonas's answer</a> about the root cause.</p>
| moonkotte |
<p>I want to install the new cluster on 3 machines.
I ran this command:</p>
<pre><code>ansible-playbook -i inventory/local/hosts.ini --become --become-user=root cluster.yml
but the installation failed:
TASK [remove-node/pre-remove : remove-node | List nodes] *********************************************************************************************************************************************************
fatal: [node1 -> node1]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "get", "nodes", "-o", "go-template={{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}"], "delta": "0:00:00.057781", "end": "2022-03-16 21:27:20.296592", "msg": "non-zero return code", "rc": 1, "start": "2022-03-16 21:27:20.238811", "stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory", "stderr_lines": ["error: stat /etc/kubernetes/admin.conf: no such file or directory"], "stdout": "", "stdout_lines": []}
</code></pre>
<p>Why did the installation step try to remove the node, and why was <code>/etc/kubernetes/admin.conf</code> not created?</p>
<p>Please assist.</p>
| shai kam | <p>There are a couple of ways to solve your problem. First, look at <a href="https://github.com/kubernetes-sigs/kubespray/issues/8396" rel="nofollow noreferrer">this GitHub issue</a>. You can probably copy the missing file manually and it should work:</p>
<blockquote>
<p>I solved it myself.</p>
<p>I copied the /etc/kubernetes/admin.conf and /etc/kubernetes/ssl/ca.* to the new node and now the scale playbook works. Maybe this is not the right way, but it worked...</p>
</blockquote>
<p>Another way is to use the <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/wait_for_module.html" rel="nofollow noreferrer">wait_for module</a> in Ansible. You can find an example of its usage in <a href="https://stackoverflow.com/questions/57808932/setup-kubernetes-using-ansible">this thread</a>, and a minimal sketch is shown below.</p>
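<pre><code>- name: Wait for kubeadm to generate admin.conf
  ansible.builtin.wait_for:
    path: /etc/kubernetes/admin.conf
    state: present
    timeout: 300  # an assumption; adjust for your environment
</code></pre>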
<p>As another solution, I recommend reading <a href="https://superuser.com/questions/1665122/kubernetes-installation-using-ansible-fails-for-admin-conf-not-found">this similar problem</a>:</p>
<blockquote>
<p>cluster_initialized.txt created on first fail and ansible never runs kubeadm init again. just delete that file on fail, fix the problem and run again.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I need to check if <code>dkms</code> is installed on my host, and if it is, I need to check that it's associated with a specific driver. This check is intended to happen from inside a privileged container in Kubernetes. The purpose is to facilitate system requirements check for some drivers or packages our product needs to work.</p>
<p>I tried to follow <a href="https://book.hacktricks.xyz/linux-hardening/privilege-escalation/docker-breakout/docker-breakout-privilege-escalation" rel="nofollow noreferrer">this guide</a>, but I'm not getting anywhere. It assumes I'm using docker (our cluster uses podman) and also requires me to install packages on my host (nsenter), which I want to avoid. What am I missing?</p>
<p>How do I access dkms from a privileged container?</p>
| Oren_C | <p>DKMS is supported by running the DKMS scripts inside a privileged container.</p>
<p>As given in the <a href="https://rancher.com/docs/os/v1.0/en/configuration/loading-kernel-modules/" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>To deploy containers that compiles DKMS modules, you will need to
ensure that you bind-mount /usr/src and /lib/modules.</p>
<p>To deploy containers that run any DKMS operations (i.e., modprobe),
you will need to ensure that you bind-mount /lib/modules</p>
</blockquote>
<p>By default, the /lib/modules folder is already available in the console deployed via RancherOS System Services, but not /usr/src. You will likely need to deploy your own container for compilation purposes.</p>
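<p>Outside of RancherOS the same idea translates to a plain pod spec: run it privileged and bind-mount the host's <code>/lib/modules</code> (and <code>/usr/src</code> if you need to compile). A rough sketch, where the image name is a placeholder for an image that contains the <code>dkms</code> tooling:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dkms-check
spec:
  containers:
  - name: dkms-check
    image: my-registry/dkms-tools:latest  # placeholder image containing dkms
    command: ["sh", "-c", "dkms status; sleep 3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: lib-modules
      mountPath: /lib/modules
      readOnly: true
    - name: usr-src
      mountPath: /usr/src
      readOnly: true
  volumes:
  - name: lib-modules
    hostPath:
      path: /lib/modules
  - name: usr-src
    hostPath:
      path: /usr/src
</code></pre>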
| Fariya Rahmat |
<p>I've deployed a MinIO server on Kubernetes with cdk8s, used <code>minio1</code> as serviceId and exposed port 9000.</p>
<p>My expectations were that I could access it using <code>http://minio1:9000</code>, but my MinIO server is unreachable from both Prometheus and other instances in my namespace (Loki, Mimir etc...). Is there a specific configuration I missed to enable access within the network? The server starts without error, so it sounds like a networking issue.</p>
<p>I'm starting the server this way:</p>
<pre><code> command: ["minio"],
args: [
"server",
"/data",
"--address",
`:9000`,
"--console-address",
`:9001`,
],
</code></pre>
<p>Patched the K8S configuration to expose both 9000 and 9001</p>
<pre class="lang-js prettyprint-override"><code> const d = ApiObject.of(minioDeployment);
//Create the empty port list
d.addJsonPatch(JsonPatch.add("/spec/template/spec/containers/0/ports", []));
//add port for console
d.addJsonPatch(
JsonPatch.add("/spec/template/spec/containers/0/ports/0", {
name: "console",
containerPort: 9001,
})
);
// add port for bucket
d.addJsonPatch(
JsonPatch.replace("/spec/template/spec/containers/0/ports/1", {
name: "bucket",
containerPort: 9000,
})
);
</code></pre>
<p>Can it be related to the multi-port configuration? Or is there a way to explicitly define the hostname as service id to make it accessible in the Kubernetes namespace?</p>
<p>Here's my service definition generated by cdk8s:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
use-default-egress-policy: "true"
name: minio1
namespace: ns-monitoring
spec:
minReadySeconds: 0
progressDeadlineSeconds: 600
replicas: 1
selector:
matchExpressions: []
matchLabels:
cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
strategy:
rollingUpdate:
maxSurge: 50%
maxUnavailable: 50%
type: RollingUpdate
template:
metadata:
labels:
cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
spec:
automountServiceAccountToken: true
containers:
- args:
- server
- /data/minio/
- --address
- :9000
- --console-address
- :9001
command:
- minio
env:
- name: MINIO_ROOT_USER
value: userminio
- name: MINIO_ROOT_PASSWORD
value: XXXXXXXXXXXXXX
- name: MINIO_BROWSER
value: "on"
- name: MINIO_PROMETHEUS_AUTH_TYPE
value: public
image: minio/minio
imagePullPolicy: Always
name: minio1-docker
ports:
- containerPort: 9001
name: console
- containerPort: 9000
name: bucket
securityContext:
privileged: false
readOnlyRootFilesystem: false
runAsNonRoot: false
volumeMounts:
- mountPath: /data
name: data
dnsConfig:
nameservers: []
options: []
searches: []
dnsPolicy: ClusterFirst
hostAliases: []
initContainers: []
restartPolicy: Always
securityContext:
fsGroupChangePolicy: Always
runAsNonRoot: false
sysctls: []
setHostnameAsFQDN: false
volumes:
- emptyDir: {}
name: data
---
apiVersion: v1
kind: Service
metadata:
labels:
use-default-egress-policy: "true"
name: minio1
namespace: ns-monitoring
spec:
externalIPs: []
ports:
- port: 443
targetPort: 9001
selector:
cdk8s.deployment: stack-minio1-minio1-deployment-c8e6c44f
type: ClusterIP
</code></pre>
| dbrrt | <p>As mentioned in the <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>List of ports to expose from the container. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network</p>
</blockquote>
<p>As @user2311578 suggested, exposing the port at the container level does not automatically make it available on the service. You must also specify it on the Service when you want to access it through the service (and its virtual IP).</p>
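<p>Putting this together, a minimal sketch of a Service that also publishes port 9000 (the selector is copied from the Deployment's pod template labels above and must match them exactly):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: minio1
  namespace: ns-monitoring
spec:
  type: ClusterIP
  selector:
    cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
  ports:
  - name: bucket
    port: 9000
    targetPort: 9000
  - name: console
    port: 9001
    targetPort: 9001
</code></pre>
<p>With that in place, other pods in the namespace should be able to reach the API at <code>http://minio1:9000</code> (or <code>minio1.ns-monitoring.svc.cluster.local:9000</code> from other namespaces).</p>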
<p>Also check the <a href="https://github.com/kubernetes/kubernetes/issues/108255" rel="nofollow noreferrer">GitHub</a> link for more information.</p>
| Fariya Rahmat |
<p>I am trying to add a sidecar container to an existing pod (webapp-1) to save the logs. However, I am getting an error after creating the pod: the pod is crashing and the status changes to Error.</p>
<p>For the below question i have added the yaml file. Please let me know if this is fine.</p>
<ul>
<li>Add a sidecar container to the running pod logging-pod with the below specification.</li>
<li>The image of the sidecar container is busybox and the container writes the logs as below: <code>tail -n+1 /var/log/k8slog/application.log</code></li>
<li>The container shares the volume logs with the application container, which mounts to the directory /var/log/k8slog.</li>
<li>Do not alter the application container and verify the logs are written properly to the file.</li>
</ul>
<p>Here is the YAML file. I don't understand where I am making a mistake here.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-10-25T07:54:07Z"
labels:
name: webapp-1
name: webapp-1
namespace: default
resourceVersion: "3241"
uid: 8cc29748-7879-4726-ac60-497ee41f7bd6
spec:
containers:
- image: kodekloud/event-simulator
imagePullPolicy: Always
name: simple-webapp
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/k8slog/application.log
echo "$(date) INFO $i" >>;
i=$((i+1));
sleep 1;
done
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-1
image: busybox
args: [/bin/sh, -c, 'tail -n+1 /var/log/k8slog/application.log']
volumeMounts:
- name: varlog
mountPath: /var/log
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-fgstk
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: controlplane
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: varlog
mountPath: /var/log
- name: default-token-fgstk
secret:
defaultMode: 420
secretName: default-token-fgstk
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-10-25T07:54:07Z"
status: "True"
type: Initialized
- lastProbeTime: null
</code></pre>
| NelsonVasu | <p>First of all, you should create the directory and the logfile itself. If the <code>count-log-1</code> container spins up first, it will have nothing to read and will exit with an error. To do it, a good practice is to use an <strong>Init Container</strong>: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p>Second, the containers need to share a volume on which the logfile will be present. If there is no need to persist data, an <strong>emptyDir</strong> volume will be enough: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
<p>Finally, you had some errors in the shell commands. Full <code>.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
name: webapp-1
name: webapp-1
namespace: default
spec:
  # Init container for creating the log directory and file
# on the emptyDir volume, which will be passed to the containers
initContainers:
- name: create-log-file
image: busybox
command:
- sh
- -c
- |
#!/bin/sh
mkdir -p /var/log/k8slog
touch /var/log/k8slog/application.log
# Mount varlog volume to the Init container
volumeMounts:
- name: varlog
mountPath: /var/log
containers:
- image: kodekloud/event-simulator
imagePullPolicy: Always
name: simple-webapp
command:
- sh
- -c
- |
i=0
while true; do
echo "$i: $(date)" >> /var/log/k8slog/application.log
echo "$(date) INFO $i"
i=$((i+1))
sleep 1
done
# Mount varlog volume to simple-webapp container
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-1
image: busybox
command:
- sh
- -c
- |
tail -f -n 1 /var/log/k8slog/application.log
# Mount varlog volume to count-log-1 container
volumeMounts:
- name: varlog
mountPath: /var/log
  # Define an emptyDir shared volume
volumes:
- name: varlog
emptyDir: {}
</code></pre>
| Cloudziu |
<p>I have a Standard GKE cluster and want to migrate all my running services to a new Autopilot cluster. I have researched the official documentation and haven't found anything about how to perform this migration.</p>
| pagislav | <p>At the moment it is not possible to convert a Standard GKE cluster to an Autopilot cluster.</p>
<p>In the GKE documentation, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview" rel="noreferrer">Autopilot overview</a>, under Other limitations, you can find the section <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#no_conversion" rel="noreferrer">No conversion</a>:</p>
<blockquote>
<p>Converting Standard clusters to Autopilot mode and converting
Autopilot clusters to Standard mode is not supported.</p>
</blockquote>
<p>As stated by @guillaume blaquiere, you have to redeploy all your services and back up and restore your data manually.</p>
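<p>If it helps, a rough sketch of a manual migration (the context and namespace names are placeholders; the exported manifests usually need cluster-specific fields such as <code>status</code>, <code>clusterIP</code>, <code>resourceVersion</code> and <code>uid</code> removed before applying, and stateful data still has to be backed up and restored separately):</p>
<pre><code># Export workload definitions from the Standard cluster
kubectl config use-context CONTEXT_OF_STANDARD_CLUSTER
kubectl get deploy,statefulset,svc,ingress,configmap,secret -n my-namespace -o yaml > workloads.yaml

# Re-create them in the new Autopilot cluster
kubectl config use-context CONTEXT_OF_AUTOPILOT_CLUSTER
kubectl create namespace my-namespace
kubectl apply -n my-namespace -f workloads.yaml
</code></pre>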
| Bakul Mitra |
<p>I need to add a Java command to a container. For that I use a Helm chart.</p>
<pre><code>helm install mychart chart/mychart --set "command.cmd={java,-Disurz_dir=/mnt/isurz,-Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236,-Des_host_and_port=xxx:9200,-jar,*.jar}"
</code></pre>
<p>The result isn't what I expected:</p>
<pre><code> spec:
containers:
- command:
- java
- -Disurz_dir=/mnt/isurz
- -Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236
- -Des_host_and_port=xxx:9200
- -jar
- '*.jar'
</code></pre>
<p>I don't understand why Kubernetes puts quotes around <code>*.jar</code>.</p>
<p>If I specify <code>myapp.jar</code> instead of <code>*.jar</code>, then Kubernetes doesn't add quotes:</p>
<pre><code>helm install mychart chart/mychart --set "command.cmd={java,-Disurz_dir=/mnt/isurz,-Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236,-Des_host_and_port=xxx:9200,-jar,myapp.jar}"
</code></pre>
<p>Result:</p>
<pre><code> spec:
containers:
- command:
- java
- -Disurz_dir=/mnt/isurz
- -Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236
- -Des_host_and_port=xxx:9200
- -jar
- myapp.jar
</code></pre>
| Maksim | <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>As @larsks correctly mentioned in the comment:</p>
<blockquote>
<p>The quotes are necessary because an unquoted * can't start a YAML
value. They are not part of the value itself.</p>
</blockquote>
<p>For example, this is from <a href="https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html#gotchas" rel="nofollow noreferrer">YAML syntax</a> document:</p>
<blockquote>
<p>In addition to ' and " there are a number of characters that are
special (or reserved) and cannot be used as the first character of an
unquoted scalar: [] {} > | * & ! % # ` @ ,.</p>
</blockquote>
<p>And in much more details with examples it's available in <a href="https://yaml.org/spec/1.2.2/#53-indicator-characters" rel="nofollow noreferrer">YAML spec</a>.</p>
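<p>As a quick illustration (a hypothetical snippet, not taken from your chart), the quotes you see in the rendered manifest are YAML syntax only; the value handed to the container is the bare string:</p>
<pre><code>command:
  - java
  - -jar
  - '*.jar'   # quoted because an unquoted * cannot start a YAML scalar
# the argument the container receives is exactly *.jar, without the quotes
</code></pre>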
| moonkotte |
<p>I need to create kubernetes job that will run below script on mongo shell:</p>
<pre><code>var operations = [];
db.product.find().forEach(function(doc) {
var documentLink = doc.documentLink;
var operation = { updateMany :{
"filter" : {"_id" : doc._id},
"update" : {$set:{"documentLinkMap.en":documentLink,"documentLinkMap.de":""},
$unset: {documentLink:"","descriptionMap.tr":"","news.tr":"","descriptionInternal.tr":"","salesDescription.tr":"","salesInternal.tr":"","deliveryDescription.tr":"","deliveryInternal.tr":"","productRoadMapDescription.tr":"","productRoadMapInternal.tr":"","technicalsAndIntegration.tr":"","technicalsAndIntegrationInternal.tr":"","versions.$[].descriptionMap.tr":"","versions.$[].releaseNoteMap.tr":"","versions.$[].artifacts.$[].descriptionMap.tr":"","versions.$[].artifacts.$[].artifactNotes.tr":""}}}};
operations.push(operation);
});
operations.push( {
ordered: true,
writeConcern: { w: "majority", wtimeout: 5000 }
});
db.product.bulkWrite(operations);
</code></pre>
<p>I will need a sample of what that job will look like. Should I create a persistent volume and a claim for it, or is there a possibility to run this job without a persistent volume? I need to run this once and then remove it.</p>
| kemoT | <p>You can solve it much easier with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer"><code>configMap</code></a> and then mount the <code>configMap</code> as a volume which will be resolved in a file.</p>
<p>Below is an example of how to proceed (note: you will need to use a proper image for it, as well as adjust the command to how the mongo shell is actually invoked - a sketch of that is shown after the steps):</p>
<ol>
<li><p>Create a <code>configMap</code> from file. Can be done by running this command:</p>
<pre><code>$ kubectl create cm mongoscript-cm --from-file=mongoscript.js
configmap/mongoscript-cm created
</code></pre>
<p>You can check that you file is stored inside by running:</p>
<pre><code>$ kubectl describe cm mongoscript-cm
</code></pre>
</li>
<li><p>Create a job with volume mount from configmap (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-template" rel="nofollow noreferrer">spec template is the same as it used in pods</a>):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: mongojob
spec:
template:
spec:
containers:
- name: mongojob
image: ubuntu # for testing purposes, you need to use appropriate one
command: ['bin/bash', '-c', 'echo STARTED ; cat /opt/mongoscript.js ; sleep 120 ; echo FINISHED'] # same for command, that's for demo purposes
volumeMounts:
- name: mongoscript
mountPath: /opt # where to mount the file
volumes:
- name: mongoscript
configMap:
name: mongoscript-cm # reference to previously created configmap
restartPolicy: OnFailure # required for jobs
</code></pre>
</li>
<li><p>Checking how it looks inside the pod</p>
<p>Connect to the pod:</p>
<pre><code>$ kubectl exec -it mongojob--1-8w4ml -- /bin/bash
</code></pre>
<p>Check file is presented:</p>
<pre><code># ls /opt
mongoscript.js
</code></pre>
<p>Check its content:</p>
<pre><code># cat /opt/mongoscript.js
var operations = [];
db.product.find().forEach(function(doc) {
var documentLink = doc.documentLink;
var operation = { updateMany :{
"filter" : {"_id" : doc._id},
"update" : {$set {"documentLinkMap.en":documentLink,"documentLinkMap.de":""},
$unset: {documentLink:"","descriptionMap.tr":"","news.tr":"","descriptionInternal.tr":"","salesDescription.tr":"","salesInternal.tr":"","deliveryDescription.tr":"","deliveryInternal.tr":"","productRoadMapDescription.tr":"","productRoadMapInternal.tr":"","technicalsAndIntegration.tr":"","technicalsAndIntegrationInternal.tr":"","versions.$[].descriptionMap.tr":"","versions.$[].releaseNoteMap.tr":"","versions.$[].artifacts.$[].descriptionMap.tr":"","versions.$[].artifacts.$[].artifactNotes.tr":""}}}};
operations.push(operation);
});
operations.push( {
ordered: true,
writeConcern: { w: "majority", wtimeout: 5000 }
});
db.product.bulkWrite(operations);
</code></pre>
</li>
</ol>
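<p>For the real job, the demo command above would be replaced with an actual mongo shell invocation pointing at the mounted script; a sketch, where the image tag, host, credentials and database name are all placeholders (newer images ship <code>mongosh</code> instead of <code>mongo</code>):</p>
<pre><code>containers:
- name: mongojob
  image: mongo:4.4  # placeholder tag; use the image matching your server
  command:
    - mongo
    - "mongodb://my-user:my-password@my-mongodb.default.svc.cluster.local:27017/my-db"
    - /opt/mongoscript.js
  volumeMounts:
  - name: mongoscript
    mountPath: /opt
</code></pre>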
| moonkotte |
<p>I have a golang webapp pod running in kubernetes cluster, and I tried to deploy a prometheus pod to monitor the golang webapp pod.</p>
<p>I set <code>prometheus.io/port</code> to <code>2112</code> in the service.yaml file, which is the port that the golang webapp is listening on, but when I go to the Prometheus dashboard, it says that the <code>2112</code> endpoint is down.</p>
<p>I'm following <a href="https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/" rel="nofollow noreferrer">this guide</a> and tried the solution from <a href="https://stackoverflow.com/questions/53365191/monitor-custom-kubernetes-pod-metrics-using-prometheus">this thread</a>, but I'm still getting a result saying the <code>2112</code> endpoint is down.</p>
<p>Below are my service.yaml and deployment.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: prometheus-service
namespace: monitoring
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: '/metrics'
prometheus.io/port: '2112'
spec:
selector:
app: prometheus-server
type: NodePort
ports:
- port: 8080
targetPort: 9090
nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
namespace: monitoring
name: goapp
spec:
type: NodePort
selector:
app: golang
ports:
- name: main
protocol: TCP
port: 80
targetPort: 2112
      nodePort: 30001
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-deployment
namespace: monitoring
labels:
app: prometheus-server
spec:
replicas: 1
selector:
matchLabels:
app: prometheus-server
template:
metadata:
labels:
app: prometheus-server
spec:
containers:
- name: prometheus
image: prom/prometheus
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: monitoring
name: golang
spec:
replicas: 1
template:
metadata:
labels:
app: golang
spec:
containers:
- name: gogo
image: effy77/gogo2
ports:
- containerPort: 2112
selector:
matchLabels:
app: golang
</code></pre>
<p>I will try adding <code>prometheus.io/port: 2112</code> to the prometheus deployment part, as I suspect that might be the cause.</p>
| allen | <p>I was confused about where to put the annotations and got my clarification from this <a href="https://stackoverflow.com/questions/57957614/prometheus-only-scrapes-one-pod">thread</a>: the annotations need to go under the metadata of the service that needs to be scraped by Prometheus. So in my case they need to be in goapp's metadata.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: monitoring
name: goapp
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: '/metrics'
prometheus.io/port: '2112'
</code></pre>
| allen |
<p>I have tried to change the NIC ip of the worker node directly. It seems that the master node automatically updates the ip information of the worker node. And it does not have any negative impact on the kubernetes cluster. Is it the simple and correct way to change the worker node ip? Or are there some other important steps that I have missed?</p>
| wind | <p>I created a mini cluster using <code>kubeadm</code> with two ubuntu18.04 VMs in one public network.</p>
<p>Indeed, changing the IP address of the worker node doesn't affect the cluster at all, as long as the new IP address doesn't interfere with <code>--pod-network-cidr</code>.</p>
<p><code>Kubelet</code> is responsible for it and it uses several options:</p>
<blockquote>
<p>The kubelet is the primary "node agent" that runs on each node. It can
register the node with the apiserver using one of: the hostname; a
flag to override the hostname; or specific logic for a cloud provider.</p>
</blockquote>
<p>For instance, if you decide to change the <code>hostname</code> of the worker node, it will become unreachable.</p>
<hr />
<p>There are two ways to change IP address properly:</p>
<ol>
<li>Re-join the worker node with new IP (already changed) to the cluster</li>
<li>Configure <code>kubelet</code> to advertise specific IP address.</li>
</ol>
<p>The last option can be done as follows (a condensed sketch is shown after the list):</p>
<ul>
<li>modifying <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> with adding <code>KUBELET_EXTRA_ARGS=--node-ip %NEW_IP_ADDRESS%</code>.</li>
<li><code>sudo systemctl daemon-reload</code> since config file was changed</li>
<li><code>sudo systemctl restart kubelet.service</code></li>
</ul>
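<p>Condensed into commands (the address 10.0.0.42 is a placeholder, and the exact file to edit differs between distributions):</p>
<pre><code># In /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (inside the [Service] section),
# or in /etc/default/kubelet on some distributions, add:
#   Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.42"

sudo systemctl daemon-reload
sudo systemctl restart kubelet.service

# Verify the node now advertises the new internal IP
kubectl get nodes -o wide
</code></pre>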
<hr />
<p>Useful links:</p>
<ul>
<li><a href="https://medium.com/@kanrangsan/how-to-specify-internal-ip-for-kubernetes-worker-node-24790b2884fd" rel="nofollow noreferrer">Specify internal ip for worker nodes</a> - (it's a bit old in terms of how it's done (it should be done as I described above), however the idea is the same).</li>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">CLI tools - Kubelet</a></li>
</ul>
| moonkotte |
<p>I use a Kubernetes cluster on Google Cloud Platform and I want to change my load balancer from "TCP load balancing" to "HTTP(S) load balancing" (layer 7).</p>
<p>Currently the configuration about "TCP load balancing" is:</p>
<p><a href="https://i.stack.imgur.com/L6rp9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L6rp9.png" alt="enter image description here" /></a></p>
<p>To deploy NGINX and automatically create the load balancer, I use the ingress-nginx chart (<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>). I've been checking the documentation and haven't found a config option to change the load balancer layer.</p>
<p>I'm a beginner in GCP load balancer. Can anyone help with getting started? Please, if more information is needed, I will provide it.</p>
| roliveira | <p>You have to switch the way you're load balancing. You can't change the load balancer type from the GCP UI, so you need to create new GKE resources. In your case you would have to use an Ingress resource to get external HTTP(S) load balancing.</p>
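<p>For reference, a rough sketch of what that looks like (the service name and port are placeholders; on GKE the <code>gce</code> class provisions an external HTTP(S) load balancer):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"  # GKE external HTTP(S) load balancer
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service  # placeholder; typically a NodePort or NEG-backed service
            port:
              number: 80
</code></pre>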
<p>Check the following <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#:%7E:text=In%20a%20GKE%20cluster%2C%20you%20create%20and%20configure%20an%20HTTP%28S%29%20load%20balancer%20by%20creating%20a%20Kubernetes%20Ingress%20object.%20An%20Ingress%20object%20must%20be%20associated%20with%20one%20or%20more%20Service%20objects%2C%20each%20of%20which%20is%20associated%20with%20a%20set%20of%20Pods" rel="nofollow noreferrer">Document</a> for more information.</p>
| Bakul Mitra |
<p>I want to use Kubernetes on some clouds (maybe Amazon, Google, etc). Should I disallow my EC2 machines from accessing the external network? My guess is as follows, and I wonder whether it is correct or wrong?</p>
<ol>
<li>I <em>should</em> disallow EC2 from accessing the external network. Otherwise, hackers can attack my machines more easily. (true?)</li>
<li>How to do it: I should use a dedicated load balancer (maybe Ingress) with the external IP that my domain name is bound to. The load balancer will then talk with my actual application (which has <em>no</em> external IP and can only access internal network). (true?)</li>
</ol>
<p>Sorry I am new to Ops, and thanks for any help!</p>
| ch271828n | <p>Allowing or disallowing your EC2 instances from accessing external networks, i.e. keeping the rule that allows all outgoing traffic in your security group, won't be of much use in keeping hackers out; that's what the incoming traffic rules are for. It will, however, prevent unwanted traffic from going out <em>after</em> a hacker has reached your instance and managed to install malicious software on it that then tries to initiate outgoing communication.</p>
<p>That outgoing traffic rule is usually kept to allow things like getting software installs and updates, but it won't affect how your instances respond to incoming requests (legitimate or not).</p>
<p>It is a good idea to have a load balancer in front of your instances and have it be the only allowed point of entry to your services. It's a good pattern to follow, and your instances will not need to have an external IP address.</p>
<p>Having a bastion host is a good idea as well, and use it to manage the instances themselves. And I would also recommend Systems Manager's Session Manager for this task.</p>
| Oscar De León |
<p>I have a Helm deployment which deploys a pod with 2 containers.</p>
<p>Now I need to include an init container for one of the containers in the pod.</p>
<p>I'm new to Helm. Kindly share a snippet to achieve this. Under spec I have defined 2 containers, where container 1 depends on container 2. So container 2 should be up first, and then I need to run the init container for container 1.</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "test.fullname" . }}
namespace: {{ .Values.global.namespace }}
labels:
{{- include "test.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "testLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "test.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ .Values.cloudsqlproxySa }}
automountServiceAccountToken: true
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }} # For this I need to include the init container.
securityContext:
{{- toYaml .Values.test.securityContext | nindent 12 }}
image: "{{ .Values.test.image.repository }}:{{ .Values.test.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.test.image.pullPolicy }}
ports:
- name: {{ .Values.test.port.name }}
containerPort: {{ .Values.test.port.containerPort }}
protocol: {{ .Values.test.port.protocol }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.test.port.containerPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.test.port.containerPort }}
envFrom:
- configMapRef:
name: {{ .Values.configmap.name }}
resources:
{{- toYaml .Values.test.resources | nindent 12 }}
volumeMounts:
- name: gcp-bigquery-credential-file
mountPath: /secret
readOnly: true
- name: {{ .Chart.Name }}-gce-proxy
securityContext:
{{- toYaml .Values.cloudsqlproxy.securityContext | nindent 12 }}
image: "{{ .Values.cloudsqlproxy.image.repository }}:{{ .Values.cloudsqlproxy.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.cloudsqlproxy.image.pullPolicy }}
command:
- "/cloud_sql_proxy"
- "-instances={{ .Values.cloudsqlConnection }}=tcp:{{ .Values.cloudsqlproxy.port.containerPort }}"
ports:
- name: {{ .Values.cloudsqlproxy.port.name }}
containerPort: {{ .Values.cloudsqlproxy.port.containerPort }}
resources:
{{- toYaml .Values.cloudsqlproxy.resources | nindent 12 }}
volumeMounts:
- name: gcp-bigquery-credential-file
mountPath: /secret
readOnly: true
volumes:
- name: gcp-bigquery-credential-file
secret:
secretName: {{ .Values.bigquerysecret.name }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
</code></pre>
| user2439278 | <p>Posting this as a community wiki out of comments, feel free to edit and expand.</p>
<hr />
<p>As @anemyte responded in the comments, it's not possible to start an init container after the main container has started; that is the whole logic behind init containers: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#understanding-init-containers" rel="nofollow noreferrer">Understanding init-containers</a></p>
<p>A possible solution from @DavidMaze is to separate the containers into different deployments and set up the application container to restart itself until the proxy container is up and running (a sketch of this is shown after the quote). Full quote:</p>
<blockquote>
<p>If the init container exits with an error if it can't reach the proxy
container, and you run the proxy container in a separate deployment,
then you can have a setup where the application container restarts
until the proxy is up and running. That would mean splitting this into
two separate files in the <code>templates</code> directory</p>
</blockquote>
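<p>A rough sketch of the waiting init container in the application Deployment (the proxy Service name <code>cloudsql-proxy</code> and port <code>5432</code> are assumptions for your setup; the quote above describes a variant that exits with an error instead of looping):</p>
<pre><code>spec:
  template:
    spec:
      initContainers:
      - name: wait-for-cloudsql-proxy
        image: busybox:1.35
        command:
          - sh
          - -c
          - |
            until nc -z cloudsql-proxy 5432; do
              echo "waiting for cloudsql-proxy..."
              sleep 2
            done
      containers:
      - name: app
        image: my-app:latest  # placeholder for the application container
</code></pre>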
| moonkotte |
<p>I would like to know if it is possible to apply liveness and readiness probe checks to multiple containers in a pod, or just to one container in a pod.
I did try it with multiple containers, but the probe check fails for container A and passes for container B in the pod.</p>
| sandy | <p>Welcome to the community.</p>
<p><strong>Answer</strong></p>
<p>It's absolutely possible to apply multiple probes for containers within the pod. What happens next depends on a probe.</p>
<p>There are three probes listed in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">Containers probes</a> which can be used: <code>liveness</code>, <code>readiness</code> and <code>startup</code>. I'll describe more about <code>liveness</code> and <code>readiness</code>:</p>
<p><strong>Liveness</strong></p>
<blockquote>
<p><code>livenessProbe</code>: Indicates whether the container is running. If the
<code>liveness</code> probe fails, the kubelet kills the container, and the
container is subjected to its restart policy. If a Container does not
provide a <code>liveness</code> probe, the default state is Success</p>
</blockquote>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a container.
For example, liveness probes could catch a deadlock, where an
application is running, but unable to make progress. Restarting a
container in such a state can help to make the application more
available despite bugs.</p>
</blockquote>
<p>If the <code>livenessProbe</code> fails, <code>kubelet</code> will restart the container in the POD; the POD itself will remain the same (its age as well).</p>
<p>Also it can be checked in <code>container events</code>, this quote is from <code>Kubernetes in Action - Marko Lukša</code></p>
<blockquote>
<p>I’ve seen this on many occasions and users were confused why their
container was being restarted. But if they’d used <code>kubectl describe</code>,
they’d have seen that the container terminated with exit code 137 or
143, telling them that the pod was terminated externally</p>
</blockquote>
<p><strong>Readiness</strong></p>
<blockquote>
<p><code>readinessProbe</code>: Indicates whether the container is ready to respond to
requests. If the <code>readiness</code> probe fails, the endpoints controller
removes the Pod's IP address from the endpoints of all Services that
match the Pod. The default state of <code>readiness</code> before the initial delay
is Failure. If a Container does not provide a <code>readiness</code> probe, the
default state is Success</p>
</blockquote>
<blockquote>
<p>The kubelet uses readiness probes to know when a container is ready to
start accepting traffic. A Pod is considered ready when all of its
containers are ready. One use of this signal is to control which Pods
are used as backends for Services. When a Pod is not ready, it is
removed from Service load balancers.</p>
</blockquote>
<p>What happens here is that Kubernetes checks whether the webserver in the container is serving requests; if not, the <code>readinessProbe</code> fails, the POD's IP (generally speaking, the entire POD) is removed from the endpoints, and no traffic is directed to the POD.</p>
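<p>To answer the original question directly: each container in the pod gets its own probe blocks, and they are evaluated independently. A minimal sketch (image names, paths and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-container-probes
spec:
  containers:
  - name: container-a
    image: my-app-a:latest  # placeholder
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
  - name: container-b
    image: my-app-b:latest  # placeholder
    livenessProbe:
      tcpSocket:
        port: 9090
    readinessProbe:
      tcpSocket:
        port: 9090
</code></pre>
<p>If container A's liveness probe fails while container B's passes, only container A is restarted; the POD as a whole is reported as not Ready until every container's readiness probe passes.</p>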
<p><strong>Useful links</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">Container probes</a> - general information and <code>types</code></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">Configure Liveness, Readiness and Startup Probes</a> (practice and examples)</li>
</ul>
| moonkotte |
<p>I'm running a <code>flask</code> application with <code>gunicorn</code> and <code>gevent</code> worker class. In my own test environment, I follow the official guide <code>multiprocessing.cpu_count() * 2 + 1</code> to set worker number.</p>
<p>If I want to put the application on Kubernetes' pod and assume that resources will be like</p>
<pre><code>resources:
limits:
cpu: "10"
memory: "5Gi"
requests:
CPU: "3"
memory: "3Gi"
</code></pre>
<p>how to calculate the worker number? should I use limits CPU or requests CPU?</p>
<hr />
<p>PS. I'm launching application via binary file packaged by <code>pyinstaller</code>, in essence <code>flask run(python script.py)</code>, and launch gunicorn in the main thread:</p>
<pre><code>def run():
...
if config.RUN_MODEL == 'GUNICORN':
sys.argv += [
"--worker-class", "event",
"-w", config.GUNICORN_WORKER_NUMBER,
"--worker-connections", config.GUNICORN_WORKER_CONNECTIONS,
"--access-logfile", "-",
"--error-logfile", "-",
"-b", "0.0.0.0:8001",
"--max-requests", config.GUNICORN_MAX_REQUESTS,
"--max-requests-jitter", config.GUNICORN_MAX_REQUESTS_JITTER,
"--timeout", config.GUNICORN_TIMEOUT,
"--access-logformat", '%(t)s %(l)s %(u)s "%(r)s" %(s)s %(M)sms',
"app.app_runner:app"
]
sys.exit(gunicorn.run())
if __name__ == "__main__":
run()
</code></pre>
<hr />
<p>PS. Whether I set the worker number by <code>limits CPU (10*2+1=21)</code> or <code>requests CPU (3*2+1=7)</code>, the performance still doesn't meet my expectations. Any suggestions to improve performance are welcome under this question.</p>
| romlym | <blockquote>
<p>how to calculate the worker number? should I use limits CPU or requests CPU?</p>
</blockquote>
<p>It depends on your situation. First, look at the documentation about <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">requests and limits</a> (this example is for memory, but the same applies to CPU).</p>
<blockquote>
<p>If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its <code>request</code> for that resource specifies. However, a container is not allowed to use more than its resource <code>limit</code>.</p>
<p>For example, if you set a <code>memory</code> request of 256 MiB for a container, and that container is in a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use more RAM.</p>
<p>If you set a <code>memory</code> limit of 4GiB for that container, the kubelet (and <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes" rel="nofollow noreferrer">container runtime</a>) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.</p>
</blockquote>
<p>Answering your question: first of all, you need to know how many resources (e.g. CPU) your application needs. The request will be the minimum amount of CPU that the application must receive (you have to calculate this value yourself; in other words, you must know the minimum CPU the application needs to run properly and set that value). If your application performs better when it receives more CPU, consider adding a limit (the maximum amount of CPU the application can receive). If you want to calculate the worker number based on the highest possible performance, use the <code>limit</code> to calculate the value. If, on the other hand, you want your application to run smoothly (perhaps not as fast as possible, but consuming fewer resources), use the <code>request</code>.</p>
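<p>One way to avoid hard-coding either number is to expose the container's own CPU request or limit to the application via the Downward API and compute the worker count from it; a sketch (the env variable name is an assumption that your <code>run()</code> would read):</p>
<pre><code>containers:
- name: app
  image: my-flask-app:latest  # placeholder
  resources:
    requests:
      cpu: "3"
    limits:
      cpu: "10"
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu  # or requests.cpu, depending on which you size for
        divisor: "1"          # expose whole cores
</code></pre>
<p>The entrypoint could then set <code>GUNICORN_WORKER_NUMBER</code> to <code>2 * CPU_LIMIT + 1</code> before starting gunicorn.</p>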
| Mikołaj Głodziak |
<p>When I do <code>kubectl top pods</code> I only see NAME, CPU and MEMORY.</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
bbox-inference-falcon-79dc678d8c-2fq9b 4m 1272Mi
bbox-inference-falcon-79dc678d8c-2nfnk 3m 1503Mi
bbox-inference-falcon-79dc678d8c-4579l 27m 1303Mi
bbox-inference-falcon-79dc678d8c-4kjsz 3m 1032Mi
bbox-inference-falcon-79dc678d8c-4mvxd 3m 1258Mi
bbox-inference-falcon-79dc678d8c-4pw2t 3m 1115Mi
</code></pre>
<p>I'd like to know who ran these jobs but <code>kubectl describe pod pod_name</code> doesn't give me the information.</p>
<p>Is there any way to do it?</p>
| aerin | <p>You can use Kubernetes Auditing: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/audit/</a></p>
<p>Set the Level to Metadata</p>
<pre><code>apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
</code></pre>
<p>Metadata will log: user, resource_name and timestamp</p>
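<p>With metadata-level auditing enabled, finding out who created a pod becomes a matter of filtering the audit log; a sketch, assuming the log backend writes to a file and <code>jq</code> is available (the log path is a placeholder):</p>
<pre><code># Show timestamp, user and pod for every pod creation event
jq -r 'select(.verb=="create" and .objectRef.resource=="pods")
       | "\(.requestReceivedTimestamp) \(.user.username) \(.objectRef.namespace)/\(.objectRef.name)"' \
   /var/log/kubernetes/audit/audit.log
</code></pre>
<p>Note that pods created by a Deployment or Job will show a controller service account as the user; to find the human, look for who created the owning Deployment or Job instead.</p>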
| CyG |
<p>I am curious to know if there is a way to do a Kustomize replacement or other operation to inject the contents of a non-yaml file into a yaml file using Kustomize. I know Kustomize is not a template engine and that this could be accomplished with Helm, but using the tool I am already using, is this possible?</p>
<p>My use case is to store OPA policies as native rego, which allows use of OPA unit tests, and to inject the content of these rego files into Gatekeeper constraints during Kustomize deployment. This will remove the requirement for custom pipeline processing or manual copy/paste to accomplish this.</p>
<p>Example opaRule.rego file</p>
<pre><code>package k8sdisallowedtags
violation[{"msg": msg}] {
container := input_containers[_]
tags := [forbid | tag = input.parameters.tags[_] ; forbid = endswith(container.image, concat(":", ["", tag]))]
any(tags)
msg := sprintf("container <%v> uses a disallowed tag <%v>; disallowed tags are %v", [container.name, container.image, input.parameters.tags])
}
...
</code></pre>
<p>Example constraintTemplate.yaml file</p>
<pre><code>apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8sdisallowedtags
namespace: kube-system
annotations:
description: Requires container images to have an image tag different
from the ones in a specified list.
spec:
crd:
spec:
names:
kind: K8sDisallowedTags
validation:
openAPIV3Schema:
properties:
tags:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |-
{CONTENT OF OPA RULE POLICY HERE}
</code></pre>
| Michael L Rhyndress | <h2 id="answer-will-contain-two-parts">Answer will contain two parts:</h2>
<ul>
<li>idea on how to address the asked question (because no built-in functionality is available for it + integrate with the second part)</li>
<li>using patches (can be useful for others in community)</li>
</ul>
<h2 id="create-own-plugin">Create own plugin</h2>
<p>Kustomize allows to create plugins to extend its functionality. And there are almost zero limitations including <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/#no-security" rel="nofollow noreferrer">security</a> - this should be handled by author of the plugin.</p>
<p>There are two kinds of plugins:</p>
<ul>
<li><a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/#authoring" rel="nofollow noreferrer">exec</a></li>
<li><a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/#go-plugins" rel="nofollow noreferrer">go-plugins</a></li>
</ul>
<p>All available information can be found in <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/" rel="nofollow noreferrer">Extending Kustomize - documentation</a>.</p>
<p><a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/execpluginguidedexample/#build-your-app-using-the-plugin" rel="nofollow noreferrer">Example of exec plugin</a>. <strong>Note!</strong> correct flag is <code>--enable-alpha-plugins</code> (with <code>-</code>, not with <code>_</code> as in example).</p>
<h2 id="using-patches">Using patches</h2>
<blockquote>
<p>Patches (also call overlays) add or override fields on resources. They
are provided using the patches Kustomization field.</p>
<p>The patches field contains a list of patches to be applied in the
order they are specified.</p>
<p>Each patch may:</p>
<ul>
<li>be either a strategic merge patch, or a JSON6902 patch</li>
<li>be either a file, or an inline string target</li>
<li>a single resource or multiple resources</li>
</ul>
</blockquote>
<p><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/" rel="nofollow noreferrer">Reference to kustomize - patches</a></p>
<p>Below is example how to patch <code>gatekeeper.yaml</code> object.</p>
<p>Structure:</p>
<p>$ tree</p>
<pre><code>.
├── gatekeeper.yaml
├── kustomization.yaml
└── opa-gk.yaml
</code></pre>
<p>$ cat gatekeeper.yaml</p>
<pre><code>apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8sdisallowedtags
namespace: kube-system
annotations:
description: Requires container images to have an image tag different
from the ones in a specified list.
spec:
crd:
spec:
names:
kind: K8sDisallowedTags
validation:
openAPIV3Schema:
properties:
tags:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |-
{CONTENT OF OPA RULE POLICY HERE}
</code></pre>
<p>$ cat kustomization.yaml</p>
<pre><code>resources:
- gatekeeper.yaml
patches:
- path: opa-gk.yaml
target:
group: templates.gatekeeper.sh
version: v1beta1
kind: ConstraintTemplate
name: k8sdisallowedtags
</code></pre>
<p>$ cat opa-gk.yaml</p>
<pre><code>- op: add
path: /spec/targets/0/rego
value: |
package k8sdisallowedtags
violation[{"msg": msg}] {
container := input_containers[_]
tags := [forbid | tag = input.parameters.tags[_] ; forbid = endswith(container.image, concat(":", ["", tag]))]
any(tags)
msg := sprintf("container <%v> uses a disallowed tag <%v>; disallowed tags are %v", [container.name, container.image, input.parameters.tags])
}
...
</code></pre>
<p><strong>End result:</strong></p>
<p>$ kubectl kustomize .</p>
<pre><code>apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
annotations:
description: Requires container images to have an image tag different from the
ones in a specified list.
name: k8sdisallowedtags
namespace: kube-system
spec:
crd:
spec:
names:
kind: K8sDisallowedTags
validation:
openAPIV3Schema:
properties:
tags:
items:
type: string
type: array
targets:
- rego: |
package k8sdisallowedtags
violation[{"msg": msg}] {
container := input_containers[_]
tags := [forbid | tag = input.parameters.tags[_] ; forbid = endswith(container.image, concat(":", ["", tag]))]
any(tags)
msg := sprintf("container <%v> uses a disallowed tag <%v>; disallowed tags are %v", [container.name, container.image, input.parameters.tags])
}
...
target: admission.k8s.gatekeeper.sh
</code></pre>
<h2 id="useful-links">Useful links:</h2>
<ul>
<li><a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/" rel="nofollow noreferrer">Extending kustomize</a></li>
<li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/" rel="nofollow noreferrer">kustomize patches</a></li>
</ul>
| moonkotte |
<p>I'm trying to deploy an EKS cluster with self-managed nodes using Terraform. While I can deploy the cluster with addons, VPC, subnets and all other resources, it always fails at helm:</p>
<pre><code>Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {
</code></pre>
<p>This error repeats for <code>metrics_server</code>, <code>lb_ingress</code>, <code>argocd</code>, but <code>cluster-autoscaler</code> throws:</p>
<pre><code>Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {
</code></pre>
<p>My <code>main.tf</code> looks like this:</p>
<pre><code>terraform {
backend "remote" {}
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.66.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.7.1"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
}
}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "xxx"
assume_role {
role_arn = "xxx"
}
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
</code></pre>
<p>My <code>eks.tf</code> looks like this:</p>
<pre><code>module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
enable_amazon_eks_vpc_cni = true
amazon_eks_vpc_cni_config = {
addon_name = "vpc-cni"
addon_version = "v1.7.5-eksbuild.2"
service_account = "aws-node"
resolve_conflicts = "OVERWRITE"
namespace = "kube-system"
additional_iam_policies = []
service_account_role_arn = ""
tags = {}
}
enable_amazon_eks_kube_proxy = true
amazon_eks_kube_proxy_config = {
addon_name = "kube-proxy"
addon_version = "v1.19.8-eksbuild.1"
service_account = "kube-proxy"
resolve_conflicts = "OVERWRITE"
namespace = "kube-system"
additional_iam_policies = []
service_account_role_arn = ""
tags = {}
}
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
</code></pre>
| TFaws | <p>OP has confirmed in the comment that the problem was resolved:</p>
<blockquote>
<p>Of course. I think I found the issue. Doing "kubectl get svc" throws: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy"</p>
<p>Solved it by using my actual role, that's crazy. No idea why it was calling itself.</p>
</blockquote>
<p>For a similar problem, see also <a href="https://github.com/hashicorp/terraform-provider-helm/issues/400" rel="nofollow noreferrer">this issue</a>.</p>
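<p>As a side note, a quick way to reproduce the check the OP used is the following sketch (cluster name and region are placeholders, and it assumes the AWS CLI is configured with the same credentials Terraform uses):</p>
<pre><code>aws sts get-caller-identity
aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl get svc
</code></pre>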
| Mikołaj Głodziak |
<p>I have a couple of nodes I need to deploy Kubernetes on, and they all have several NICs. Let's say ens0, ens1 and ens2 are the network interfaces and ens0 is the default, but I was asked to use ens1. When I deploy Kubernetes it uses the default interface. How do I change this when I initialize the cluster?</p>
| jarge | <p>You can use <code>--apiserver-advertise-address string</code> option when initializing kubeadm.</p>
<p>From <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">Kubernetes documentation</a>: The string is the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
So something like this should work:</p>
<pre><code>kubeadm init --apiserver-advertise-address=10.x.x.x
</code></pre>
<p>Where 10.x.x.x is the IP associated with your desired interface.</p>
| CyG |
<p>I want to delete completed Jobs in my AKS cluster after a certain time interval using the TTL controller, but I'm unable to enable the TTL controller in the AKS cluster. Is there any solution for this problem? Thanks in advance.</p>
| harish hari | <p>TTL in AKS has been available since Kubernetes version 1.21.2. For more, look at <a href="https://github.com/Azure/AKS/issues/1634" rel="nofollow noreferrer">this github topic</a>:</p>
<blockquote>
<p>Short update on that. It is available in 1.21.2. Got the confirmation from Azure Support. So, we are currently using it.</p>
</blockquote>
<p>Make sure that you are using this version or newer. For older versions, you won't be able to use this mechanism; you can use the <a href="https://github.com/lwolf/kube-cleanup-operator" rel="nofollow noreferrer">kube-cleanup operator</a> instead on older clusters.</p>
<p><a href="https://learn.microsoft.com/en-us/answers/questions/570446/can-we-leverage-34-terminated-pod-gc-threshold34-i.html" rel="nofollow noreferrer">Here</a> you can find information how to enable TTL on AKS cluster:</p>
<blockquote>
<p>Another way to clean up finished Jobs (either <code>Complete</code> or <code>Failed</code>) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the <code>.spec.ttlSecondsAfterFinished</code> field of the Job.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer">Reference</a></p>
<p>When the TTL controller cleans up the Job, it will delete the Job cascadingly, i.e. delete its dependent objects, such as Pods, together with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored.</p>
</blockquote>
<p>So this could help you to enable TTL mechanism on your cluster.</p>
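<p>For reference, a minimal sketch of a Job using this field (name and image are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo
spec:
  ttlSecondsAfterFinished: 100   # the Job and its pods are deleted 100s after it finishes
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
</code></pre>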
| Mikołaj Głodziak |
<p>Is there a way to load a kernel module ("modprobe nfsd" in my case) automatically after starting/upgrading nodes in GKE? We are running an NFS server pod on our Kubernetes cluster and it dies after every GKE upgrade.</p>
<p>I tried both COS and Ubuntu images; neither of them seems to have nfsd loaded by default.</p>
<p>Also tried something like this, but it seems it does not do what it is supposed to do:</p>
<pre><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: nfsd-modprobe
labels:
app: nfsd-modprobe
spec:
template:
metadata:
labels:
app: nfsd-modprobe
spec:
hostPID: true
containers:
- name: nfsd-modprobe
image: gcr.io/google-containers/startup-script:v1
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: STARTUP_SCRIPT
value: |
#! /bin/bash
modprobe nfs
modprobe nfsd
while true; do sleep 1; done
</code></pre>
| Palko | <p>I faced the same issue. The existing answer is correct; I want to expand it with a working example of an <code>nfs</code> pod within a Kubernetes cluster which has the capabilities and libraries to load the required modules.</p>
<p>It has two important parts:</p>
<ul>
<li>privileged mode</li>
<li>mounted <code>/lib/modules</code> directory within the container to use it</li>
</ul>
<p><code>nfs-server.yaml</code></p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: nfs-server-pod
spec:
containers:
- name: nfs-server-container
image: erichough/nfs-server
securityContext:
privileged: true
env:
- name: NFS_EXPORT_0
value: "/test *(rw,no_subtree_check,insecure,fsid=0)"
volumeMounts:
- mountPath: /lib/modules # mounting modules into container
name: lib-modules
readOnly: true # make sure it's readonly
- mountPath: /test
name: export-dir
volumes:
- hostPath: # using hostpath to get modules from the host
path: /lib/modules
type: Directory
name: lib-modules
- name: export-dir
emptyDir: {}
</code></pre>
<p>Reference which helped as well - <a href="https://github.com/ehough/docker-nfs-server/blob/develop/doc/feature/auto-load-kernel-modules.md" rel="nofollow noreferrer">Automatically load required kernel modules</a>.</p>
| moonkotte |
<p>In my /root/.kube/config I have a server: value of "https://my.reverse.proxy:6443".</p>
<p>If I don't set any certificates on the reverse proxy (traffic goes directly to the backend and the backend certificate is presented, SSL passthrough), I can run a kubectl command successfully (e.g. <code>sudo kubectl get pods -o wide -A</code>). But if I set a certificate on the reverse proxy, my kubectl command returns:</p>
<pre><code>$ sudo kubectl --insecure-skip-tls-verify get pods -o wide -A
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>I am not sure why this is happening. Is it because the kubectl is trying to "authenticate" with the reverse proxy certificate, and is only allowed to do so with the back-end certificate?</p>
<p>How would I get rid of that error if I want to use a different certificate on the reverse proxy (no SSL passthrough)? What should I do on the client side?</p>
| Joey Cote | <p>If the issue started after renewing the Kubernetes certificates, the existing <strong>~/.kube/config</strong> now contains outdated key and certificate values.</p>
<p>The solution is to replace the <strong>client-certificate-data</strong> and <strong>client-key-data</strong> values in <strong>~/.kube/config</strong> with the values of the same name from the updated file <strong>/etc/kubernetes/kubelet.conf</strong>.</p>
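<p>A small sketch to inspect the renewed values before copying them over (this assumes the data is embedded inline in <strong>kubelet.conf</strong> rather than referenced as file paths):</p>
<pre><code>sudo grep -E 'client-certificate-data|client-key-data' /etc/kubernetes/kubelet.conf
# then paste the two values over the matching keys in ~/.kube/config
</code></pre>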
| Ramesh kollisetty |
<p>Why do I have to use <code>--verb=list</code> option when I list all resources in the k8s namespace?</p>
<p>I read <a href="https://stackoverflow.com/questions/47691479/listing-all-resources-in-a-namespace">this question</a> and linked GitHub issue, and they worked for me. However, I cannot understand why <code>--verb=list</code> option is used.</p>
<p>Thanks to the help, I now know what this option does. When I add this option, the command shows only resources which support list verb. However, I could not figure out why it was necessary to show only the resources that support the list verb.</p>
<p>Please teach me this.</p>
| yu saito | <p>The question you quoted was about listing resources. To be listed, a resource must support the <code>list</code> verb. Based on the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types" rel="nofollow noreferrer">official documentation</a>:</p>
<pre><code>kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
</code></pre>
<p>In the documentation's example there are two verbs (list, get); you had one (list). The idea is for the command to return only those api-resources which handle the <a href="https://www.oreilly.com/library/view/kubernetes-security/9781492039075/ch04.html" rel="nofollow noreferrer">list verb</a>.</p>
<p>In conclusion, the <code>--verb=list</code> flag was used to limit the results to only those that support the listing.</p>
<blockquote>
<p>I could not figure out why it was necessary to show only the resources that support the list verb.</p>
</blockquote>
<p>This solution is good if, for example, later you want to work on api-resources using only the list. If you would like to operate on a resource that does not support it, you will get an error similar to this:</p>
<pre><code>kubectl list tokenreviews
error: unknown command "list" for "kubectl"
Did you mean this?
get
wait
</code></pre>
<p>To avoid this situation you can filter results before with the flag <code>--verb=list</code>.</p>
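<p>For context, the flag is typically used to feed a pipeline like the following sketch (the namespace is a placeholder); it only works because every resource passed to <code>kubectl get</code> is guaranteed to support <code>list</code>:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n my-namespace
</code></pre>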
| Mikołaj Głodziak |
<p>This is from <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p><strong>Consistent</strong> and <strong>highly-available</strong> key value store used as Kubernetes'
backing store for all cluster data.</p>
</blockquote>
<p>Does Kubernetes have a separate mechanism internally to make ETCD more available? or does ETCD use, let's say, a modified version of Raft that allows this superpower?</p>
| Shivansh Kuchhal | <p>When it comes to going into etcd details, it is best to use the <a href="https://etcd.io/" rel="noreferrer">official etcd documentation</a>:</p>
<blockquote>
<p><strong>etcd</strong> is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.</p>
</blockquote>
<p>There is no mention here that this is high-availability. As for the fault tolerance, you will find a very good paragraph on this topic <a href="https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance" rel="noreferrer">here</a>:</p>
<blockquote>
<p>An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through <a href="https://etcd.io/docs/v3.5/op-guide/runtime-configuration/" rel="noreferrer">runtime reconfiguration</a>.</p>
<p>It is recommended to have an odd number of members in a cluster. An odd-size cluster tolerates the same number of failures as an even-size cluster but with fewer nodes.</p>
</blockquote>
<p>You can also find very good article about <a href="https://medium.com/@ahadrana/understanding-etcd3-8784c4f61755" rel="noreferrer">understanding etcd</a>:</p>
<blockquote>
<p>Etcd is a strongly consistent system. It provides <a href="http://jepsen.io/consistency/models/linearizable" rel="noreferrer">Linearizable</a> reads and writes, and <a href="http://jepsen.io/consistency/models/serializable" rel="noreferrer">Serializable</a> isolation for transactions. Expressed more specifically, in terms of the <a href="https://en.wikipedia.org/wiki/PACELC_theorem" rel="noreferrer">PACELC</a> theorem, an extension of the ideas expressed in the CAP theorem, it is a CP/EC system. <strong>It optimizes for consistency over latency in normal situations and consistency over availability in the case of a partition.</strong></p>
</blockquote>
<p>Look also at this picture:<img src="https://i.stack.imgur.com/MjdFG.jpg" alt="enter image description here" /></p>
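<p>Additionally, to observe quorum and leader state on a running cluster, <code>etcdctl</code> can be used; a sketch assuming a kubeadm-style setup (the endpoint and certificate paths are assumptions, adjust to your environment):</p>
<pre><code>export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --cluster -w table
</code></pre>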
| Mikołaj Głodziak |
<p>Prometheus-operator seems to generate <code>promethues-operated</code> service which just points to Prometheus instance at port 9090.</p>
<p>What does this service do? We define other services to point at our Prometheus cluster.</p>
<p>What would be repercussions on removing <code>prometheus-operated</code> service?</p>
| Steve | <p>Based on the documentation, <code>prometheus-operated</code> is the governing service for the Prometheus StatefulSets; in other words, it is Prometheus's own service endpoint, which is required for it to function.</p>
<p>Below are some references:</p>
<blockquote>
<p>What you are referring to is the governing service that point to the
synthesized Prometheus statefulsets. In the case of a second
Prometheus in the same namespace the same governing service will be
referenced, which in turn will add the IPs of all pods of the separate
Prometheus instances to the same governing service.</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/issues/3805" rel="nofollow noreferrer">Rename Prometheus Operator Service #3805</a></p>
<p>Also another reference to the same idea:</p>
<blockquote>
<p>The Prometheus Operator reconciles services called prometheus-operated
and alertmanager-operated, which are used as governing Services for
the StatefulSets. To perform this reconciliation</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/blob/1b13e573c7ad533010407544f88dc4a78320b134/Documentation/rbac.md" rel="nofollow noreferrer">Prometheus operator/Documentation/readme</a></p>
<p>One more commit that confirms that <code>prometheus-operated</code> is a governing service:</p>
<blockquote>
<p>pkg/prometheus: add Thanos service port to governing service
Currently, for service discovery of Prometheus instances a separate
headless service must be deployed.</p>
<p>This adds the Thanos grpc port to the existing Prometheus statefulset
governing service if a Thanos sidecar is given in the Prometheus
custom resource specification.</p>
<p>This way no additional service has to be deployed.</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/pull/2754/commits/86d42de53ca2c2ea0d32264818d9750f409979da" rel="nofollow noreferrer">pkg/prometheus: add Thanos service port to governing service #2754</a></p>
<hr />
<blockquote>
<p>What would be repercussions on removing prometheus-operated service.</p>
</blockquote>
<p>It's a quite old answer, but it still applies: this service is a part of Prometheus, and Prometheus components will fail if the service is removed:</p>
<blockquote>
<p>The prometheus-operated service is an implementation detail of the
Prometheus Operator, it should not be touched, especially as all
Prometheus instances will be registered in this service</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/issues/522#issuecomment-319016693" rel="nofollow noreferrer">kube-prometheus chart creates 3 different services pointing to the same pods #522</a></p>
<hr />
<p><a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/pkg/prometheus/statefulset.go#L267-L313" rel="nofollow noreferrer">Code where this service is created</a></p>
<p>Taking into consideration that:</p>
<pre><code>const (
governingServiceName = "prometheus-operated"
...
)
</code></pre>
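<p>To see what the governing service actually selects in your own cluster, something like this can help (the namespace is an assumption, adjust to wherever the operator runs):</p>
<pre><code>kubectl get svc prometheus-operated -n monitoring -o wide
kubectl get endpoints prometheus-operated -n monitoring
</code></pre>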
| moonkotte |
<p>I have an operator whose code is not accessible. By default, the operator requires the rights to list all secrets, with no "resourceNames" restriction, and I can't grant such rights. Is there any way to keep the rights to list secrets while restricting the listing to specific secrets by name?</p>
<p>I tried to grant list rights with resourceNames, but the operator does not accept this and writes an error on startup:</p>
<pre><code>failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:operator-*****" cannot list resource "secrets" in API group "" in the namespace "namespace-name"
</code></pre>
| Sergey Belov | <pre><code>failed to list v1.Secret: secrets is forbidden: User "system:serviceaccount:operator-*****" cannot list resource "secrets" in API group "" in the namespace "namespace-name"
</code></pre>
<p>The above error states that the operator's service account does not have permission to list secrets. To resolve the issue, create a role binding for the service account, since it is not given this access by default after creation. To add a viewer (read-only) role to the service account, run the following command:</p>
<p>For example, grant read-only permission within "my-namespace" to the "my-sa" service account:</p>
<pre><code>kubectl create rolebinding my-sa-view \
--clusterrole=view \
--serviceaccount=my-namespace:my-sa \
--namespace=my-namespace
</code></pre>
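<p>The same binding expressed declaratively, as a sketch mirroring the command above:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-sa-view
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: my-namespace
</code></pre>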
<p>Found a similar <a href="https://stackoverflow.com/questions/71513836">stack question</a> for more information.</p>
| Mayur Kamble |
<p>I have my application deployed on a Kubernetes cluster, managed by operator (A). I am mounting secrets with SSL key material into the deployment so the application can access the content.</p>
<p>I have a separate operator (B) deployment, which is responsible for creating those secrets with the SSL key material. Now I have a use case where my secrets are recreated by operator (B), and it deletes/restarts the pods, which are managed by operator (A).</p>
<p>I am trying to understand: <strong>is it a common practice to allow a separately deployed operator to delete pods?</strong></p>
<p>My perception was that an operator should work only with the resources it manages, nothing more.</p>
| liotur | <p>Community wiki to summarise the topic.</p>
<p>If it is as you say:</p>
<blockquote>
<p>both operators are proprietary,</p>
</blockquote>
<p>it is impossible to give a definite yes or no answer. Everything will depend on what is really going on there, and we are not able to check and evaluate it.</p>
<p>Look at the well provided comments by <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a>:</p>
<blockquote>
<p>That sort of seems like a bug...but also, Pods are pretty disposable and the usual expectation is that a ReplicaSet or another controller will recreate them...?</p>
</blockquote>
<blockquote>
<p>Note that the essence of the Kubernetes controller model is that the controller looks at the current state of the Kubernetes configuration store (not changes or events, just which objects exist and which don't) and tries to make the things it manages match that, so if the controller believes it should manage some external resource and there's not a matching Kubernetes object, it could delete it.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I'm trying to add a self-signed certificate in my AKS cluster using Cert-Manager.</p>
<p>I created a <code>ClusterIssuer</code> for the CA certificate (to sign the certificate) and a second <code>ClusterIssuer</code> for the Certificate (self-signed) I want to use.</p>
<p>I am not sure if the <code>certificate2</code> is being used correctly by Ingress as it looks like it is waiting for some event.</p>
<p>Am I following the correct way to do this?</p>
<p>This is the first <code>ClusterIssuer</code> "clusterissuer.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
</code></pre>
<p>This is the CA certificate "certificate.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate
spec:
secretName: hello-deployment-tls-ca-key-pair
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: true
issuerRef:
name: selfsigned
kind: ClusterIssuer
</code></pre>
<p>This is the second <code>ClusterIssuer</code> "clusterissuer2.yml" for the certificate I want to use:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: hello-deployment-tls
spec:
ca:
secretName: hello-deployment-tls-ca-key-pair
</code></pre>
<p>and finally this is the self-signed certificate "certificate2.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate2
spec:
secretName: hello-deployment-tls-ca-key-pair2
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: false
issuerRef:
name: hello-deployment-tls
kind: ClusterIssuer
</code></pre>
<p>I am using this certificate in an Ingress:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "hello-deployment-tls"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
name: sonar-ingress
spec:
tls:
- secretName: "hello-deployment-tls-ca-key-pair2"
rules:
- http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: sonarqube
servicePort: 80
</code></pre>
<p>As I do not have any registered domain name I just want to use the public IP to access the service over <code>https://<Public_IP></code>.</p>
<p>When I access the service <code>https://<Public_IP></code> I see the "Kubernetes Ingress Controller Fake Certificate", so I guess this is because the certificate is not globally recognized by the browser.</p>
<p>Here is the strange thing: theoretically the Ingress deployment is using <code>selfsigned-certificate2</code>, but it looks like it is not ready:</p>
<pre><code>kubectl get certificate
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 4h29m
selfsigned-certificate2 False hello-deployment-tls-ca-key-pair2 3h3m
selfsigned-secret True selfsigned-secret 5h25m
</code></pre>
<pre><code>kubectl describe certificate selfsigned-certificate2
.
.
.
Spec:
Dns Names:
*.default.svc.cluster.local
*.default.com
Issuer Ref:
Kind: ClusterIssuer
Name: hello-deployment-tls
Secret Name: hello-deployment-tls-ca-key-pair2
Status:
Conditions:
Last Transition Time: 2021-10-15T11:16:15Z
Message: Waiting for CertificateRequest "selfsigned-certificate2-3983093525" to complete
Reason: InProgress
Status: False
Type: Ready
Events: <none>
</code></pre>
<p>Any idea?</p>
<p>Thank you in advance.</p>
| X T | <h2>ApiVersions</h2>
<p>First, I noticed you're using the <code>v1alpha2</code> apiVersion, which is deprecated and will be removed in cert-manager <code>1.6</code>:</p>
<pre><code>$ kubectl apply -f cluster-alpha.yaml
Warning: cert-manager.io/v1alpha2 ClusterIssuer is deprecated in v1.4+, unavailable in v1.6+; use cert-manager.io/v1 ClusterIssuer
</code></pre>
<p>I used <code>apiVersion: cert-manager.io/v1</code> in reproduction.</p>
<p>Same for <code>v1beta1</code> ingress, consider updating it to <code>networking.k8s.io/v1</code>.</p>
<h2>What happens</h2>
<p>I started reproducing your setup step by step.</p>
<p>I applied <code>clusterissuer.yaml</code>:</p>
<pre><code>$ kubectl apply -f clusterissuer.yaml
clusterissuer.cert-manager.io/selfsigned created
$ kubectl get clusterissuer
NAME READY AGE
selfsigned True 11s
</code></pre>
<p>Pay attention that <code>READY</code> is set to <code>True</code>.</p>
<p><strong>Next</strong> I applied <code>certificate.yaml</code>:</p>
<pre><code>$ kubectl apply -f cert.yaml
certificate.cert-manager.io/selfsigned-certificate created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 7s
</code></pre>
<p><strong>Next step</strong> is to add the second <code>ClusterIssuer</code> which is referenced to <code>hello-deployment-tls-ca-key-pair</code> secret:</p>
<pre><code>$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls False 6s
selfsigned True 3m50
</code></pre>
<p>ClusterIssuer <code>hello-deployment-tls</code> is <strong>not</strong> ready. Here's why:</p>
<pre><code>$ kubectl describe clusterissuer hello-deployment-tls
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrGetKeyPair 10s (x5 over 75s) cert-manager Error getting keypair for CA issuer: secret "hello-deployment-tls-ca-key-pair" not found
Warning ErrInitIssuer 10s (x5 over 75s) cert-manager Error initializing issuer: secret "hello-deployment-tls-ca-key-pair" not found
</code></pre>
<p>This is expected behaviour since:</p>
<blockquote>
<p>When referencing a Secret resource in ClusterIssuer resources (eg
apiKeySecretRef) the Secret needs to be in the same namespace as the
cert-manager controller pod. You can optionally override this by using
the --cluster-resource-namespace argument to the controller.</p>
</blockquote>
<p><a href="https://docs.cert-manager.io/en/release-0.11/reference/clusterissuers.html" rel="noreferrer">Reference</a></p>
<h2>Answer - how to move forward</h2>
<p>I edited the <code>cert-manager</code> deployment so it will look for <code>secrets</code> in <code>default</code> namespace (this is not ideal, I'd use <code>issuer</code> instead in <code>default</code> namespace):</p>
<pre><code>$ kubectl edit deploy cert-manager -n cert-manager
spec:
containers:
- args:
- --v=2
- --cluster-resource-namespace=default
</code></pre>
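<p>As an alternative to overriding <code>--cluster-resource-namespace</code>, a namespaced <code>Issuer</code> in <code>default</code> could be used instead (a minimal sketch reusing the CA secret from above); the rest of this answer continues with the override approach:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: hello-deployment-tls
  namespace: default
spec:
  ca:
    secretName: hello-deployment-tls-ca-key-pair
</code></pre>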
<p>It takes about a minute for <code>cert-manager</code> to start. Redeployed <code>clusterissuer2.yaml</code>:</p>
<pre><code>$ kubectl delete -f clusterissuer2.yaml
clusterissuer.cert-manager.io "hello-deployment-tls" deleted
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls True 3s
selfsigned True 5m42s
</code></pre>
<p>Both are <code>READY</code>. Moving forward with <code>certificate2.yaml</code>:</p>
<pre><code>$ kubectl apply -f cert2.yaml
certificate.cert-manager.io/selfsigned-certificate2 created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 33s
selfsigned-certificate2 True hello-deployment-tls-ca-key-pair2 6s
$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
selfsigned-certificate-jj98f True True selfsigned system:serviceaccount:cert-manager:cert-manager 52s
selfsigned-certificate2-jwq5c True True hello-deployment-tls system:serviceaccount:cert-manager:cert-manager 25s
</code></pre>
<h2>Ingress</h2>
<p>When <code>host</code> is not added to <code>ingress</code>, it doesn't create any certificates and seems to use a fake one from <code>ingress</code>, which is issued by <code>CN = Kubernetes Ingress Controller Fake Certificate</code>.</p>
<p>Events from <code>ingress</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 5s cert-manager TLS entry 0 is invalid: secret "example-cert" for ingress TLS has no hosts specified
</code></pre>
<p>When I added DNS to <code>ingress</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 4s cert-manager Successfully created Certificate "example-cert"
</code></pre>
<h2>Answer, part 2 (about ingress, certificates and issuer)</h2>
<p>You don't need to create a certificate if you're referencing an <code>issuer</code> in the <code>ingress</code> rule. Ingress will issue the certificate for you when all the details are present, such as:</p>
<ul>
<li>annotation <code>cert-manager.io/cluster-issuer: "hello-deployment-tls"</code></li>
<li><code>spec.tls</code> part with host within</li>
<li><code>spec.rules.host</code></li>
</ul>
<p><strong>OR</strong></p>
<p>if you want to create certificate manually and ask ingress to use it, then:</p>
<ul>
<li>remove annotation <code>cert-manager.io/cluster-issuer: "hello-deployment-tls"</code></li>
<li>create certificate manually</li>
<li>refer to it in <code>ingress rule</code>.</li>
</ul>
<p>You can check certificate details in browser and find that it no longer has issuer as <code>CN = Kubernetes Ingress Controller Fake Certificate</code>, in my case it's empty.</p>
<h2>Note - cert-manager v1.4</h2>
<p>Initially I used a somewhat outdated <code>cert-manager v1.4</code> and got <a href="https://github.com/jetstack/cert-manager/issues/4142" rel="noreferrer">this issue</a>, which went away after updating to <code>1.4.1</code>.</p>
<p>It looks like:</p>
<pre><code>$ kubectl describe certificaterequest selfsigned-certificate2-45k2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 41s cert-manager Certificate request has been approved by cert-manager.io
Warning DecodeError 41s cert-manager Failed to decode returned certificate: error decoding certificate PEM block
</code></pre>
<h2>Useful links:</h2>
<ul>
<li><a href="https://docs.cert-manager.io/en/release-0.11/tasks/issuers/setup-selfsigned.html" rel="noreferrer">Setting up self-singed Issuer</a></li>
<li><a href="https://docs.cert-manager.io/en/release-0.11/tasks/issuers/setup-ca.html" rel="noreferrer">Setting up CA issuers</a></li>
<li><a href="https://docs.cert-manager.io/en/release-0.11/reference/clusterissuers.html" rel="noreferrer">Cluster Issuers</a></li>
</ul>
| moonkotte |
<p>I'm trying to deploy MongoDB with Helm and it gives this error:</p>
<pre><code>mkdir: cannot create directory /bitnami/mongodb/data : permision denied.
</code></pre>
<p>I also tried this solution:</p>
<pre><code>sudo chown -R 1001 /tmp/mongo
</code></pre>
<p>but it says there is no such directory.</p>
| Onur AKKÖSE | <p>You have permission denied on <code>/bitnami/mongodb/data</code> and you are trying to modify another path: <code>/tmp/mongo</code>. It is possible that you do not have such a directory at all.
You need to change the owner of the resource for which you don't have permissions, not random (non-related) paths :)</p>
<p>You've probably seen <a href="https://github.com/bitnami/bitnami-docker-mongodb/issues/177" rel="nofollow noreferrer">this github issue</a> and this answer:</p>
<blockquote>
<p>You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.</p>
<p>As you can see in <a href="https://github.com/bitnami/bitnami-docker-mongodb#366-r16-and-411-r9" rel="nofollow noreferrer">our changelog</a>, the container was migrated to the non-root user approach, that means that the user <code>1001</code> needs read/write permissions in the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions in your local folder and try to launch the container again?</p>
</blockquote>
<pre><code>sudo chown -R 1001 /tmp/mongo
</code></pre>
<p>This method will work if you are going to mount the <code>/tmp/mongo</code> folder, which is actually not quite a common behavior. Look for another answer:</p>
<blockquote>
<p>Please note that mounting host path volumes is not the usual way to work with these containers. If using docker-compose, it would be using docker volumes (which already handle the permission issue), the same would apply with Kubernetes and the MongoDB helm chart, which would use the <code>securityContext</code> section to ensure the proper permissions.</p>
</blockquote>
<p>In your situation, you just have to change the owner of the <code>/bitnami/mongodb/data</code> path or use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">Security Context</a> in your Helm chart, and everything should work out for you.</p>
<p>The most interesting part, with an example context, can probably be found <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
</code></pre>
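<p>If you are installing through the Bitnami Helm chart, the equivalent is usually set via chart values; the exact value names below are an assumption and may differ between chart versions, so check the chart's <code>values.yaml</code> first:</p>
<pre><code># assumed value names; verify against your chart version
helm install mongodb bitnami/mongodb \
  --set podSecurityContext.enabled=true \
  --set podSecurityContext.fsGroup=1001 \
  --set containerSecurityContext.runAsUser=1001
</code></pre>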
| Mikołaj Głodziak |
<p>Can Ingress rewrite a 405 back to the original URL and change the HTTP error <code>405</code> to <code>200</code>?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /page/user/(.*)
pathType: Prefix
backend:
serviceName: front-user
servicePort: 80
- path: /page/manager/(.*)
pathType: Prefix
backend:
serviceName: front-admin
servicePort: 80
</code></pre>
<p>Nginx can allow visiting an HTML page with a <code>POST</code> method, but I want to know how to achieve this with Ingress.</p>
<pre><code>server {
listen 80;
# ...
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://static_backend;
}
}
</code></pre>
<p>This is an example of how Nginx can serve an HTML page requested with a <code>POST</code> method, changing <code>405</code> to <code>200</code> and the method to <code>GET</code>.</p>
| windyxia | <p>You can use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">server snippet</a> annotation to achieve it.</p>
<p>I also rewrote your ingress from the <code>extensions/v1beta1</code> apiVersion to <code>networking.k8s.io/v1</code>, because starting with Kubernetes <code>v1.22</code> the previous <code>apiVersion</code> is removed:</p>
<pre><code>$ kubectl apply -f ingress-snippit.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
<p><code>Ingress-snippet-v1.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-snippet: | # adds this block to server
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://static_backend; # tested with IP since I don't have this upstream
}
spec:
rules:
- http:
paths:
- path: /page/user/(.*)
pathType: Prefix
backend:
service:
name: front-user
port:
number: 80
- path: /page/manager/(.*)
pathType: Prefix
backend:
service:
name: front-admin
port:
number: 80
</code></pre>
<hr />
<p>Applying manifest above and verifying <code>/etc/nginx/nginx.conf</code> in <code>ingress-nginx-controller</code> pod:</p>
<pre><code>$ kubectl exec -it ingress-nginx-controller-xxxxxxxxx-yyyy -n ingress-nginx -- cat /etc/nginx/nginx.conf | less
...
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=4096 ;
listen 443 default_server reuseport backlog=4096 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
# Custom code snippet configured for host _
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://127.0.0.1; # IP for testing purposes
}
location ~* "^/page/manager/(.*)" {
set $namespace "default";
set $ingress_name "frontend-ingress";
set $service_name "front-admin";
set $service_port "80";
set $location_path "/page/manager/(.*)";
set $global_rate_limit_exceeding n;
...
</code></pre>
| moonkotte |
<p>I am testing a <code>logs --previous</code> command and for that I need a pod to restart.</p>
<p>I can get my pods using a command like</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -n $ns -l $label
</code></pre>
<p>Which shows that my pods did not restart so far. I want to test the command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs $podname -n $ns --previous=true
</code></pre>
<p>That command fails because my pod did not restart making the <code>--previous=true</code> switch meaningless.</p>
<p>I am aware of this command to restart pods when configuration changed:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl rollout restart deployment myapp -n $ns
</code></pre>
<p>This does not restart the containers in a way that is meaningful for my log command test but rather terminates the old pods and creates new pods (which have a restart count of 0).</p>
<p>I tried various versions of exec to see if I can shut them down from within but most commands I would use are not found in that container:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec $podname -n $ns -- shutdown
kubectl exec $podname -n $ns -- shutdown now
kubectl exec $podname -n $ns -- halt
kubectl exec $podname -n $ns -- poweroff
</code></pre>
<p>How can I use a <code>kubectl</code> command to forcefully restart the pod while it retains its identity and the restart counter increases by one, so that my test log command has a previous instance to return the logs from?</p>
<p>EDIT:
Connecting to the pod is <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">well described</a>.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n $ns exec --stdin --tty $podname -- /bin/bash
</code></pre>
<p>The process list shows only a handful running processes:</p>
<pre class="lang-sh prettyprint-override"><code>ls -1 /proc | grep -Eo "^[0-9]{1,5}$"
</code></pre>
<p>Process 1 seems to be the one running the pod.
<code>kill 1</code> does nothing; it does not even kill the process with PID 1.</p>
<p>I am still looking into this at the moment.</p>
| Johannes | <p>There are different ways to achieve your goal. I'll describe the most useful options below.</p>
<h2>Crictl</h2>
<p>The most correct and efficient way is to restart the pod at the container runtime level.</p>
<p>I tested this on Google Cloud Platform - GKE and minikube with <code>docker</code> driver.</p>
<p>You need to <code>ssh</code> into the worker node where the pod is running. Then find its <code>POD ID</code>:</p>
<pre><code>$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9863a993e0396 87a94228f133e 3 minutes ago Running nginx-3 2 6d17dad8111bc
</code></pre>
<p>OR</p>
<pre><code>$ crictl pods -s ready
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
6d17dad8111bc About an hour ago Ready nginx-3 default 2 (default)
</code></pre>
<p>Then stop it:</p>
<pre><code>$ crictl stopp 6d17dad8111bc
Stopped sandbox 6d17dad8111bc
</code></pre>
<p>After some time, <code>kubelet</code> will start this pod again (with a different POD ID in CRI; however, the Kubernetes cluster treats it as the same pod):</p>
<pre><code>$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f5f0442841899 87a94228f133e 41 minutes ago Running nginx-3 3 b628e1499da41
</code></pre>
<p>This is how it looks in cluster:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3 1/1 Running 3 48m
</code></pre>
<p>Getting logs with <code>--previous=true</code> flag also confirmed it's the same POD for kubernetes.</p>
<h2>Kill process 1</h2>
<p>It works with most images, however not always.</p>
<p>E.g. I tested on simple pod with <code>nginx</code> image:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 27h
$ kubectl exec -it nginx -- /bin/bash
root@nginx:/# kill 1
root@nginx:/# command terminated with exit code 137
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 27h
</code></pre>
<h2>Useful link:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/" rel="nofollow noreferrer">Debugging Kubernetes nodes with crictl</a></li>
</ul>
| moonkotte |
<p>I am trying to write a python script using one of the sdk by my org, and extract some useful information from kubernetes pod.</p>
<p>The sdk I have has a podexec() function which can be used to execute a command inside the pod.</p>
<p>I have a specific use case where I have to execute a command inside the pod, which in turn will spin up an interactive shell, and then in that interactive shell I want to execute a command and print the output.</p>
<p>For example, Let's say there's a mysql pod, and I want to first exec into the mysql pod, and then run <strong>mysql</strong> command which will bring up an <em>interactive mysql shell</em>, where I want to enter some commands like "<strong>Show tables;</strong>", and then get the output of that command in my script. Is it possible?</p>
<p>After getting into the pod, I am able to run a single command like below</p>
<pre><code>kubectl exec -it mysql-pod -- bash
echo "show tables;" |mysql
</code></pre>
<p>Now, how do I run this with just kubectl, without entering the pod?</p>
<p>NOTE: My usecase is not w.r.t mysql actually. My org has a custom tool which lets us execute commands in it's interactive shell. Mysql here is just an example.</p>
| Saha | <p>Ok. Figured it out.</p>
<pre><code>kubectl exec -it mysql-pod -- bash -c "echo \"show tables\" |mysql"
</code></pre>
| Saha |
<p>In the Ingress rules I only specify the host to do the forwarding. When editing <code>/etc/hosts</code> I map the Minikube IP to the host as follows: <code>ip lamp-dev.com</code>. The problem is that I want to serve these two paths on the host, but <code>lamp-dev.com/server</code> returns not found and <code>lamp-dev.com/phpmyadmin</code> returns 503. Reviewing the Kubernetes documentation on the Ingress controller, my configuration is identical to the example they use.</p>
<ul>
<li>server</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: apachephp-deployment
namespace: lamp-dev
labels:
app: apache
spec:
replicas: 3
selector:
matchLabels:
app: apachephp
template:
metadata:
labels:
app: apachephp
spec:
containers:
- name: apachephp
image: localhost:5000/apachephp:latest
imagePullPolicy: Always
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
ports:
- containerPort: 80
name: apachephp
</code></pre>
<ul>
<li>apache-service</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> ---
apiVersion: v1
kind: Service
metadata:
name: apachephp-service
namespace: lamp-dev
spec:
selector:
app: apachephp
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
type: LoadBalancer
</code></pre>
<ul>
<li>phpmyadmin-deployment:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: phpmyadmin
name: phpmyadmin
namespace: lamp-dev
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: phpmyadmin
template:
metadata:
labels:
app: phpmyadmin
spec:
containers:
- image: phpmyadmin
ports:
- containerPort: 8080
name: phpmyadmin
env:
- name: PMA_HOST
value: mysql-service
- name: PMA_PORT
value: "3306"
- name: PMA_USER
valueFrom:
secretKeyRef:
name: phpmyadmin-secret
key: PMA_USER
- name: PMA_PASSWORD
valueFrom:
secretKeyRef:
name: phpmyadmin-secret
key: PMA_PASSWORD ######
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
imagePullPolicy: Always
name: phpmyadmin
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
</code></pre>
<ul>
<li>phpmyadmin-service</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> ---
apiVersion: v1
kind: Service
metadata:
name: phpmyadmin-service
namespace: lamp-dev
labels:
app: phpmyadmin
spec:
selector:
app: phpmyadmin
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30307
type: NodePort
</code></pre>
<ul>
<li>INGRESS.YAML</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: lamp-dev
spec:
rules:
- host: lamp-dev.com
http:
paths:
- path: /server
pathType: Prefix
backend:
service:
name: apachephp-service
port:
number: 80
- path: /phpmyadmin
pathType: Prefix
backend:
service:
name: phpmyadmin-service
port:
number: 8080
</code></pre>
<p>When I describe the ingress the output is</p>
<pre class="lang-text prettyprint-override"><code>error Default backend: default-http-backend:80 (<error: endpoints > > > "default-http-backend" not found>)
</code></pre>
| christian | <blockquote>
<p>The problem was that I did not define anything for the default route "/"; only /server and /phpmyadmin were defined, which is why I got "default backend not found". If someone can explain it more technically, I'll be grateful, but for the moment I fixed it by giving "/" to phpmyadmin (as the default) and "/server" to apachephp.</p>
</blockquote>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: lamp-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
namespace: lamp-dev
spec:
rules:
- host: "lamp-dev.com"
http:
paths:
- path: /server
pathType: Prefix
backend:
service:
name: apachephp-service
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: phpmyadmin-service
port:
number: 80
</code></pre>
| christian |
<p>The API credentials for service accounts are normally mounted in pods as:</p>
<pre><code>/var/run/secrets/kubernetes.io/serviceaccount/token
</code></pre>
<p>This token allows containerized processes in the pod to communicate with the API server.</p>
<p>What's the purpose of a pod's service account (<code>serviceAccountName</code>), if <code>automountServiceAccountToken</code> is set to <code>false</code>?</p>
| Shuzheng | <p><strong>A little theory:</strong></p>
<p>Let's start with what happens when a pod is created.</p>
<blockquote>
<p>When you create a pod, if you do not specify a service account, it is
automatically assigned the default service account in the same
namespace</p>
</blockquote>
<p><a href="https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="noreferrer">Reference</a>.</p>
<p>So all pods are linked to service account anyway (default or specified in <code>spec</code>).</p>
<p>Then API access token is always generated for each service account.</p>
<p>The <code>automountServiceAccountToken</code> flag defines whether this token will be automatically mounted into the pod after it has been created.</p>
<p>There are two options where to set this flag:</p>
<ul>
<li><p>In a specific service account</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
automountServiceAccountToken: false
...
</code></pre>
</li>
<li><p>In a specific pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
...
</code></pre>
</li>
</ul>
<hr />
<p><strong>Answer:</strong></p>
<blockquote>
<p>What's the purpose of a pod's service account (serviceAccountName), if
automountServiceAccountToken is set to false?</p>
</blockquote>
<p>It may make a difference depending on what processes are involved in pod creation. Good example is in <a href="https://github.com/kubernetes/kubernetes/issues/16779#issuecomment-159656641" rel="noreferrer">comments in GitHub issue</a> (where this flag eventually came from):</p>
<blockquote>
<p>There are use cases for still creating a token (for use with external
systems) or still associating a service account with a pod (for use
with image pull secrets), but being able to opt out of API token
automount (either for a particular pod, or for a particular service
account) is useful.</p>
</blockquote>
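<p>For example, a service account can still be useful for image pull secrets even with automount disabled; a small sketch (the secret name is a placeholder):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
imagePullSecrets:
- name: my-registry-secret
</code></pre>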
| moonkotte |
<p>I'm having trouble exposing a k8s cluster deployed on AKS with a public IP address. I'm using GitHub Actions to do the deployment. The following are my .tf and deployment.yml files:</p>
<p>Please see below the errors I'm facing.</p>
<p>main.tf</p>
<pre><code>provider "azurerm" {
features {}
}
provider "azuread" {
version = "=0.7.0"
}
terraform {
backend "azurerm" {
resource_group_name = "tstate-rg"
storage_account_name = "tstateidentity11223"
container_name = "tstate"
access_key = "/qSJCUo..."
key = "terraform.tfstate"
}
}
# create resource group
resource "azurerm_resource_group" "aks" {
name = "${var.name_prefix}-rg"
location = var.location
}
</code></pre>
<p>aks-cluster.tf</p>
<pre><code>resource "azurerm_kubernetes_cluster" "aks" {
name = "${var.name_prefix}-aks"
location = var.location
resource_group_name = var.resourcename
dns_prefix = "${var.name_prefix}-dns"
default_node_pool {
name = "identitynode"
node_count = 3
vm_size = "Standard_D2_v2"
os_disk_size_gb = 30
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
}
</code></pre>
<p>nginxlb.tf</p>
<pre><code># Initialize Helm (and install Tiller)
provider "helm" {
# install_tiller = true
kubernetes {
host = azurerm_kubernetes_cluster.aks.kube_config.0.host
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
load_config_file = false
}
}
# Add Kubernetes Stable Helm charts repo
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
# Create Static Public IP Address to be used by Nginx Ingress
resource "azurerm_public_ip" "nginx_ingress" {
name = "nginx-ingress-pip"
location = azurerm_kubernetes_cluster.aks.location
resource_group_name = azurerm_kubernetes_cluster.aks.node_resource_group
allocation_method = "Static"
domain_name_label = var.name_prefix
}
# Install Nginx Ingress using Helm Chart
resource "helm_release" "nginx" {
name = "nginx-ingress"
repository = data.helm_repository.stable.url
#repository = data.helm_repository.stable.metadata.0.name
chart = "nginx-ingress"
# namespace = "kube-system"
namespace = "default"
set {
name = "rbac.create"
value = "false"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.loadBalancerIP"
value = azurerm_public_ip.nginx_ingress.ip_address
}
}
</code></pre>
<p>And my deployment.yml</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name:
namespace: default
---
apiVersion: v1
kind: Service
metadata:
name: identity-svc
namespace: default
labels:
name: identity-svc
env: dev
app: identity-svc
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: MC_identity-k8s-rg_identity-k8s-aks_westeurope
# nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
loadBalancerIP: 13.95.67.206
type: LoadBalancer ## NodePort,ClusterIP,LoadBalancer --> Ingress Controller:nginx,HAProxy
ports:
- name: http
port: 8000
targetPort: 8000
nodePort: 30036
protocol: TCP
selector:
app: identity-svc
---
apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJpZGVudGl0eXNlcnZpY2UuYXp1cmVjVZWcVpYS2o4QTM3RmsvZEZZbTlrbHQiLCJlbWFpbCI6InN1YmplQHN1YmplLmNvbSIsImF1dGgiOiJ
kind: Secret
metadata:
creationTimestamp: null
name: acr-secret
namespace: default
type: kubernetes.io/dockerconfigjson
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: identity-deploy
namespace: default
labels:
name: identity-app
env: dev
spec:
replicas: 1
selector:
matchLabels:
app: identity-svc
template:
metadata:
namespace: default
labels:
app: identity-svc
spec:
#backoffLimit: 1
imagePullSecrets:
- name: acr-secret
containers:
- name: identitysvc
image: identitysvc.azurecr.io/identitysvc:${{ github.run_id }}
env:
- name: SECRET_KEY
value: ${SECRET_KEY}
- name: DOPPLER_TOKEN
value: ${DOPPLER_TOKEN}
resources:
requests:
cpu: 0.5
memory: "500Mi"
limits:
cpu: 2
memory: "1000Mi"
ports:
- containerPort: 8000
name: http
imagePullPolicy: Always
restartPolicy: Always
</code></pre>
<p>The following are the error messages from GitHub Actions log and Kubectl on Azure.</p>
<p>GitHub Actions log (this message gets repeated until timeout):
<img src="https://i.stack.imgur.com/PgMNB.png" alt="" /></p>
<p>kubectl logs on AKS:
<img src="https://i.stack.imgur.com/jhmdb.png" alt="" /></p>
<p>kubectl describe svc</p>
<pre><code>Name: nginx-ingress-controller
Namespace: default
Labels: app=nginx-ingress
app.kubernetes.io/managed-by=Helm
chart=nginx-ingress-1.41.3
component=controller
heritage=Helm
release=nginx-ingress
Annotations: meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
Type: LoadBalancer
IP: 10.0.153.66
IP: 13.95.67.206
Port: http 8000/TCP
TargetPort: http/TCP
NodePort: http 30933/TCP
Endpoints: 10.244.1.6:8000
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32230/TCP
Endpoints: 10.244.1.6:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32755
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 4m17s (x43 over 3h10m) service-controller Ensuring load balancer
Warning CreateOrUpdateLoadBalancer 4m16s (x43 over 3h10m) azure-cloud-provider Code="PublicIPAndLBSkuDoNotMatch" Message="Standard sku load balancer /subscriptions/e90bd4d0-3b50-4a27-a7e8-bc88cf5f5398/resourceGroups/mc_identity-k8s-rg_identity-k8s-aks_westeurope/providers/Microsoft.Network/loadBalancers/kubernetes cannot reference Basic sku publicIP /subscriptions/e90bd4d0-3b50-4a27-a7e8-bc88cf5f5398/resourceGroups/MC_identity-k8s-rg_identity-k8s-aks_westeurope/providers/Microsoft.Network/publicIPAddresses/nginx-ingress-pip." Details=[]
</code></pre>
<p>kubectl logs</p>
<pre><code>I1108 12:52:52.862797 7 flags.go:205] Watching for Ingress class: nginx
W1108 12:52:52.863034 7 flags.go:250] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W1108 12:52:52.863078 7 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1108 12:52:52.863272 7 main.go:231] Creating API client for https://10.0.0.1:443
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v0.34.1
Build: v20200715-ingress-nginx-2.11.0-8-gda5fa45e2
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.1
-------------------------------------------------------------------------------
I1108 12:52:52.892455 7 main.go:275] Running in Kubernetes cluster version v1.17 (v1.17.13) - git (clean) commit 30d651da517185653e34e7ab99a792be6a3d9495 - platform linux/amd64
I1108 12:52:52.897887 7 main.go:87] Validated default/nginx-ingress-default-backend as the default backend.
I1108 12:52:53.229870 7 main.go:105] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
W1108 12:52:53.252657 7 store.go:659] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I1108 12:52:53.268067 7 nginx.go:263] Starting NGINX Ingress controller
I1108 12:52:54.468656 7 leaderelection.go:242] attempting to acquire leader lease default/ingress-controller-leader-nginx...
I1108 12:52:54.468691 7 nginx.go:307] Starting NGINX process
W1108 12:52:54.469222 7 controller.go:395] Service "default/nginx-ingress-default-backend" does not have any active Endpoint
I1108 12:52:54.469249 7 controller.go:141] Configuration changes detected, backend reload required.
I1108 12:52:54.473464 7 status.go:86] new leader elected: nginx-ingress-controller-6b45fcd8ff-7mbx4
I1108 12:52:54.543113 7 controller.go:157] Backend successfully reloaded.
I1108 12:52:54.543152 7 controller.go:166] Initial sync, sleeping for 1 second.
W1108 12:52:58.251867 7 controller.go:395] Service "default/nginx-ingress-default-backend" does not have any active Endpoint
I1108 12:53:38.008002 7 leaderelection.go:252] successfully acquired lease default/ingress-controller-leader-nginx
I1108 12:53:38.008203 7 status.go:86] new leader elected: nginx-ingress-controller-6b45fcd8ff-njgjs
</code></pre>
<p>Help me understand what I am missing here. The whole goal is to deploy a simple Python service on a public IP address. I'm just trying to expose the service on a public IP; it doesn't currently matter which method is used, whether nginx or any other load-balancing service.</p>
<p>Also, before I implemented the nginx ingress in the Terraform files I could see identity-svc running when I ran <code>kubectl get services</code>, but now I can't even see that service; only the nginx ingress controller is there. I really appreciate any help.</p>
<p><strong>Edit:</strong> After adding the Standard SKU to the public IP creation, as @mynko mentioned, the workflow ran successfully. Now I check the following:</p>
<pre><code>admin@Azure:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
identity-svc LoadBalancer 10.0.188.32 20.56.242.212 8000:30036/TCP 22m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 7h28m
nginx-ingress-controller LoadBalancer 10.0.230.164 20.50.221.84 8000:31742/TCP,443:31675/TCP 23m
nginx-ingress-default-backend ClusterIP 10.0.229.217 <none> 8000/TCP 23m
</code></pre>
<p>I get this, and I'm not sure why <code>nginx-ingress-controller</code> is looking at port 80 instead of 8000. Also, when I try to access <code>20.56.242.212:8000</code> nothing loads up. In this case, which one should be my exposed public IP?</p>
<p>When I access <code>20.50.221.84</code> it shows <code>default backend - 404</code></p>
| Mert Alnuaimi | <p>Have a look at the kubernetes service warning message.</p>
<pre><code>Code="PublicIPAndLBSkuDoNotMatch"
</code></pre>
<p>You're using a Basic SKU public IP; change it to Standard so it matches the Standard SKU load balancer that AKS created.</p>
<p><a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip#sku" rel="noreferrer">https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip#sku</a></p>
| m8usz |
<p>I have a hard time understand how exactly is the Istio Gateway port used. I am referring to line 14 in the below example</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8169
name: http-test1
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>From the Istio documentation:</p>
<blockquote>
<p>The Port on which the proxy should listen for incoming connections.</p>
</blockquote>
<p>So indeed, if you apply the above yaml file and check the istio-ingressgateway pod for listening TCP ports, you will find that port 8169 is actually used (see the output below):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
tcp LISTEN 0 4096 0.0.0.0:8169 0.0.0.0:*
</code></pre>
<p>But here comes the tricky part. If, before you apply the Gateway, you change the istio-ingressgateway service as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
...
- name: http5
nodePort: 31169
port: 8169
protocol: TCP
targetPort: 8069
...
</code></pre>
<p>and then you apply the Gateway, the actual port used is not 8169 but 8069. It seems that the Gateway resource first checks for a matching port in the istio-ingressgateway service and uses the service's targetPort instead:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
<empty result>
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8069
tcp LISTEN 0 4096 0.0.0.0:8069 0.0.0.0:*
</code></pre>
<p>Can anybody explain why? Thank you in advance for any help.</p>
| Gerassimos Mitropoulos | <p>You encountered an interesting aspect of Istio - how to configure Istio to expose a service outside of the service mesh using an Istio Gateway.</p>
<p>First of all, please note that the gateway configuration will be applied to the proxy running on a Pod (in your example, on a Pod with the label <code>istio: ingressgateway</code>). Istio is responsible for configuring the proxy to listen on these ports; however, it is the user's responsibility to ensure that external traffic to these ports is allowed into the mesh.</p>
<p>Let me show you with an example. What you encountered is expected behaviour, because that is exactly how Istio works.</p>
<hr />
<p>First, I created a simple Gateway configuration (for the sake of simplicity I omit Virtual Service and Destination Rule configurations) like below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 9091
name: http-test-1
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>Then:</p>
<pre><code> $ kubectl apply -f gw.yaml
gateway.networking.istio.io/gateway created
</code></pre>
<p>Let's check if our proxy is listening on port <code>9091</code>. We can check it directly from the <code>istio-ingressgateway-*</code> pod or we can use the <code>istioctl proxy-config listener</code> command to retrieve information about listener configuration for the Envoy instance in the specified Pod:</p>
<pre><code> $ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>Exposing this port on the pod doesn't mean that we are able to reach it from the outside world, but it is possible to reach this port internally from another pod:</p>
<pre><code> $ kubectl get pod -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP
istio-ingressgateway-8c48d875-lzsng 1/1 Running 0 43m 10.4.0.4
$ kubectl exec -it test -- curl 10.4.0.4:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>To make it accessible externally we need to expose this port on <code>istio-ingressgateway</code> Service:</p>
<pre><code> ...
ports:
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9091
...
</code></pre>
<p>After this modification, we can reach port <code>9091</code> from the outside world:</p>
<pre><code> $ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Please note that nothing has changed from Pod's perspective:</p>
<pre><code> $ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>Now let's change the <code>targetPort: 9091</code> to <code>targetPort: 9092</code> in the <code>istio-ingressgateway</code> Service configuration and see what happens:</p>
<pre><code> ...
ports:
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9092 <--- "9091" to "9092"
...
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>As you can see, it seems that nothing has changed from the Pod's perspective so far, but we also need to re-apply the Gateway configuration:</p>
<pre><code> $ kubectl delete -f gw.yaml && kubectl apply -f gw.yaml
gateway.networking.istio.io "gateway" deleted
gateway.networking.istio.io/gateway created
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9092
tcp LISTEN 0 1024 0.0.0.0:9092 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9092 ALL Route: http.9092
</code></pre>
<p>Our proxy is now listening on port <code>9092</code> (<code>targetPort</code>), but we can still reach port <code>9091</code> from the outside as long as our Gateway specifies this port and it is open on the <code>istio-ingressgateway</code> Service.</p>
<pre><code> $ kubectl describe gw gateway -n istio-system | grep -A 4 "Port"
Port:
Name: http-test-1
Number: 9091
Protocol: HTTP
$ kubectl get svc -n istio-system -oyaml | grep -C 2 9091
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9092
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
| Mikołaj Głodziak |
<p>I have a PV <code>alpha-pv</code> in the kubernetes cluster and have created a PVC matching the PV specs. The PV uses the <code>Storage Class: slow</code>. However, when I check the existence of Storage Class in Cluster there is no Storage Class existing and still my PVC was <code>Bound</code> to the PV.</p>
<p>How is this Possible when the Storage Class referred in the PV/PVC does not exists in the cluster?</p>
<p>If I don't mention the Storage Class in the PVC, I get an error message stating Storage Class Set. There is already an existing PV in the cluster which has the <code>RWO</code> access mode, <code>1Gi</code> storage size and the Storage Class named <code>slow</code>. But on checking the Storage Class details, there is no Storage Class resource in the cluster.</p>
<p>If I add the Storage Class name <code>slow</code> in my PVC <code>mysql-alpha-pvc</code>, then the PVC binds to the PV. But I'm not clear how this happens when the Storage Class referred in PV/PVC named <code>slow</code> doesn't exist in the cluster.</p>
<p><a href="https://i.stack.imgur.com/a0oJR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a0oJR.png" alt="Storage Class Error in PVC" /></a></p>
<p><a href="https://i.stack.imgur.com/1qCTl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1qCTl.png" alt="PVC Bound to PV even when the Storage class named "slow" does not exists in Cluster " /></a></p>
| Mayur Kadam | <h2>Short answer</h2>
<p>It depends.</p>
<h2>Theory</h2>
<p>One of the main purposes of using a <code>storageClass</code> is <a href="https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/" rel="nofollow noreferrer">dynamic provisioning</a>. That means that persistent volumes will be automatically provisioned once a persistent volume claim requests the storage: either immediately or after a pod using this <code>PVC</code> is created. See <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode" rel="nofollow noreferrer">Volume binding mode</a>.</p>
<p>Also:</p>
<blockquote>
<p>A StorageClass provides a way for administrators to describe the
"classes" of storage they offer. Different classes might map to
quality-of-service levels, or to backup policies, or to arbitrary
policies determined by the cluster administrators. Kubernetes itself
is unopinionated about what classes represent. This concept is
sometimes called "profiles" in other storage systems.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#introduction" rel="nofollow noreferrer">Reference</a>.</p>
<h2>How it works</h2>
<p>If, for instance, kubernetes is used in the cloud (Google GKE, Azure AKS or AWS EKS), the providers already ship predefined <code>storageClasses</code>; for example, this is from Google GKE:</p>
<pre><code>$ kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
premium-rwo pd.csi.storage.gke.io Delete WaitForFirstConsumer true 27d
standard (default) kubernetes.io/gce-pd Delete Immediate true 27d
standard-rwo pd.csi.storage.gke.io Delete WaitForFirstConsumer true 27d
</code></pre>
<p>So you can create <code>PVC</code>s that refer to a <code>storageClass</code>, and the <code>PV</code> will be created for you, as in the sketch below.</p>
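<p>A minimal sketch of such a <code>PVC</code> (name and size are just examples; <code>standard</code> is one of the classes listed above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  storageClassName: standard   # existing class -> PV is provisioned dynamically
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>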
<hr />
<p>The other scenario, which is what you are facing, is that you can create a <code>PVC</code> and a <code>PV</code> with any custom <code>storageClassName</code> purely for binding purposes. This is usually used for testing something locally, and is also called <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static" rel="nofollow noreferrer">static provisioning</a>.</p>
<p>In this case you can reference a "fake" storage class name which doesn't exist as an object in the kubernetes cluster.</p>
<p>Please see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">an example with such type of binding</a>:</p>
<blockquote>
<p>It defines the StorageClass name manual for the PersistentVolume,
which will be used to bind PersistentVolumeClaim requests to this
PersistentVolume.</p>
</blockquote>
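<p>Applied to your case, a rough sketch could look like the one below. The <code>storageClassName: slow</code> on both objects is just a label used for matching; no <code>StorageClass</code> object named <code>slow</code> has to exist (the <code>hostPath</code> backend here is only an assumption for illustration):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: alpha-pv
spec:
  storageClassName: slow        # arbitrary name, used only for binding
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
spec:
  storageClassName: slow        # must match the PV above to bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>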
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Kubernetes storage classes</a></li>
<li><a href="https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/" rel="nofollow noreferrer">Kubernetes dynamic provisioning</a></li>
<li><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes persistent volumes</a></li>
</ul>
| moonkotte |
<p>I have a Kubernetes cluster <code>v1.22.1</code> set up in bare metal CentOS. I am facing a problem when setting up Nginx Ingress controller following <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="noreferrer">this link</a>.</p>
<p>I followed steps 1-3 exactly as described but got a <code>CrashLoopBackOff</code> error in the nginx ingress controller pod. I checked the logs of the pod and found the following:</p>
<pre><code>[root@dev1 deployments]# kubectl logs -n nginx-ingress nginx-ingress-5cd5c7549d-hw6l7
I0910 23:15:20.729196 1 main.go:271] Starting NGINX Ingress controller Version=1.12.1 GitCommit=6f72db6030daa9afd567fd7faf9d5fffac9c7c8f Date=2021-09-08T13:39:53Z PlusFlag=false
W0910 23:15:20.770569 1 main.go:310] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes >= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class.
F0910 23:15:20.774788 1 main.go:314] Error when getting IngressClass nginx: the server could not find the requested resource
</code></pre>
<p>I believe I have the IngressClass set up properly, as shown below:</p>
<pre><code>[root@dev1 deployments]# kubectl get IngressClass
NAME CONTROLLER PARAMETERS AGE
nginx nginx.org/ingress-controller <none> 2m12s
</code></pre>
<p>So I have no idea why it said "Error when getting IngressClass nginx". Can anyone shed some light on this, please?</p>
| Steve | <h2>Reproduction and what happens</h2>
<p>I created a one-node cluster using <code>kubeadm</code> on CentOS 7 and got the same error.</p>
<p>You and I were able to proceed further only because we missed this command at the beginning:</p>
<pre><code>git checkout v1.12.1
</code></pre>
<p>The main difference is <code>ingress-class.yaml</code> has <code>networking.k8s.io/v1beta1</code> in <code>v1.12.1</code> and <code>networking.k8s.io/v1</code> in <code>master</code> branch.</p>
<p>After I went through the installation for the second time with the branch switched, I immediately saw this error:</p>
<pre><code>$ kubectl apply -f common/ingress-class.yaml
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
</code></pre>
<p>It looks like the other resources have not yet been updated for kubernetes <code>v1.22+</code>.</p>
<p>Please <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="nofollow noreferrer">see deprecated migration guide - v1.22 - ingress</a></p>
<h2>How to proceed further</h2>
<ul>
<li><p>I tested exactly the same approach on a cluster with <code>v1.21.4</code> and it worked like a charm. So you may consider downgrading the cluster.</p>
</li>
<li><p>If you're not tied to using the NGINX ingress controller (supported by <code>Nginx inc</code>), you can try <code>ingress nginx</code>, which is developed by the <code>kubernetes community</code>. I tested it on <code>v1.22</code> and it works fine. Please find
<a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">Installation on bare metal cluster</a>.</p>
</li>
</ul>
<p>P.S. It may be confusing, but there are two free nginx ingress controllers which are developed by different teams. There's also a third option, NGINX Plus, which is paid and has more options. Please <a href="https://docs.nginx.com/nginx-ingress-controller/intro/nginx-ingress-controllers/" rel="nofollow noreferrer">see the differences here</a>.</p>
| moonkotte |
<p>Playing around with K8s and ingress in a local minikube setup. Creating an ingress from a yaml file with the networking.k8s.io/v1 api version fails; see the output below. Executing</p>
<pre><code>> kubectl apply -f ingress.yaml
</code></pre>
<p>returns</p>
<pre><code>Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
</code></pre>
<p>in local minikube environment with hyperkit as vm driver.</p>
<p>Here is the <code>ingress.yaml</code> file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mongodb-express-ingress
namespace: hello-world
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: mongodb-express.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mongodb-express-service-internal
port:
number: 8081
</code></pre>
<p>Here is the mongodb-express deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-express
namespace: hello-world
labels:
app: mongodb-express
spec:
replicas: 1
selector:
matchLabels:
app: mongodb-express
template:
metadata:
labels:
app: mongodb-express
spec:
containers:
- name: mongodb-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongodb-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongodb-root-password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-configmap
key: mongodb_url
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-express-service-external
namespace: hello-world
spec:
selector:
app: mongodb-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-express-service-internal
namespace: hello-world
spec:
selector:
app: mongodb-express
ports:
- protocol: TCP
port: 8081
targetPort: 8081
</code></pre>
<p>Some more information:</p>
<pre><code>> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
> minikube version
minikube version: v1.19.0
commit: 15cede53bdc5fe242228853e737333b09d4336b5
> kubectl get all -n hello-world
NAME READY STATUS RESTARTS AGE
pod/mongodb-68d675ddd7-p4fh7 1/1 Running 0 3h29m
pod/mongodb-express-6586846c4c-5nfg7 1/1 Running 6 3h29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongodb-express-service-external LoadBalancer 10.106.185.132 <pending> 8081:30000/TCP 3h29m
service/mongodb-express-service-internal ClusterIP 10.103.122.120 <none> 8081/TCP 3h3m
service/mongodb-service ClusterIP 10.96.197.136 <none> 27017/TCP 3h29m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongodb 1/1 1 1 3h29m
deployment.apps/mongodb-express 1/1 1 1 3h29m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongodb-68d675ddd7 1 1 1 3h29m
replicaset.apps/mongodb-express-6586846c4c 1 1 1 3h29m
> minikube addons enable ingress
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
> kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-2bn8h 0/1 Completed 0 4h4m
pod/ingress-nginx-admission-patch-vsdqn 0/1 Completed 0 4h4m
pod/ingress-nginx-controller-5d88495688-n6f67 1/1 Running 0 4h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.111.176.223 <none> 80:32740/TCP,443:30636/TCP 4h4m
service/ingress-nginx-controller-admission ClusterIP 10.97.107.77 <none> 443/TCP 4h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-5d88495688 1 1 1 4h4m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 4h4m
job.batch/ingress-nginx-admission-patch 1/1 9s 4h4m
</code></pre>
<hr />
<p>However, it works for the beta api version, i.e.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: mongodb-express-ingress-deprecated
namespace: hello-world
spec:
rules:
- host: mongodb-express.local
http:
paths:
- path: /
backend:
serviceName: mongodb-express-service-internal
servicePort: 8081
</code></pre>
<p>Any help is very much appreciated.</p>
| norym | <p>I had the same issue. I successfully fixed it using:</p>
<p><code>kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission</code></p>
<p>then apply the yaml files:</p>
<p><code>kubectl apply -f ingress_file.yaml</code></p>
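<p>By the way, to see which validating webhook configurations exist in the cluster (for example, to confirm the leftover one is gone), this read-only command lists them:</p>
<pre><code>kubectl get validatingwebhookconfigurations
</code></pre>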
| vagdevi k |
<p>I have a log line stating</p>
<p>Received App information from Source and processed in ms: 467</p>
<p>Now I would like to find the average response time for the app, i.e. the average of the values that appear after "ms:". Can you please guide me on how to extract the time value (in ms) and then find the average response time?</p>
| knowledge20 | <p>You can use Splunk's <a href="https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Rex" rel="nofollow noreferrer">rex command</a> to extract new fields at search time.</p>
<p>Next, you will need to use the <a href="https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Stats" rel="nofollow noreferrer">stats command</a> along with the <code>avg</code> function to get the average response time over all events.</p>
<p>Here is the full Splunk query:</p>
<pre><code>| makeresults | eval _raw="Received App information from Source and processed in ms: 467"
| rex field=_raw "processed in ms:\s+(?<response_time>\d+)"
| stats avg(response_time)
</code></pre>
<p><a href="https://i.stack.imgur.com/FIFAv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FIFAv.png" alt="screenshot" /></a></p>
| whng |
<p>I have an application running in Kubernetes as a <code>StatefulSet</code> that starts 2 pods. It has configured a liveness probe and a readiness probe.</p>
<p>The <code>liveness probe</code> call a simple <code>/health</code> endpoint that responds when the server is done loading</p>
<p>The <code>readiness probe</code> waits for a start-up job to complete. The job can take several minutes in some cases, and only when it finishes is the api of the application ready to start accepting requests.</p>
<p>Even when the api is not available, my app also runs side jobs that don't depend on it, and I expect them to run while the startup is happening too.</p>
<p><strong>Is it possible to force Kubernetes deployment to complete and deploy 2 pods, even when the readiness probe is still not passing?</strong></p>
<p>From the docs I get that the only effect of a readiness probe not passing is that the current pod won't be included as available in the loadbalancer service (which is actually the only effect that I want).</p>
<blockquote>
<p>If the readiness probe fails, the endpoints controller removes the
Pod's IP address from the endpoints of all Services that match the
Pod.</p>
</blockquote>
<p>However, I am also seeing that the deployment never finishes, since pod 1's readiness probe is not passing and pod 2 is never created.</p>
<pre><code>kubectl rollout restart statefulset/pod
kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-0 1/2 Running 0 28m
</code></pre>
<p>If a failing readiness probe always prevents the deployment, <strong>is there another way to selectively expose only ready pods in the load balancer, while not marking them as unready during the deployment?</strong></p>
<p>Thanks in advance!</p>
| jesantana | <h2>StatefulSet deployment</h2>
<blockquote>
<p>Is it possible to force kubernetes deployment to complete and deploy 2
pods, even when the readiness probe is still not passing?</p>
</blockquote>
<p>Assuming a <code>statefulSet</code> is meant instead of a <code>deployment</code>, the answer is no, it's not possible by design; the second point below is the most important:</p>
<ul>
<li>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</li>
<li>Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.</li>
<li>When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.</li>
</ul>
<blockquote>
<p>When the nginx example above is created, three Pods will be deployed
in the order web-0, web-1, web-2. web-1 will not be deployed before
web-0 is Running and Ready, and web-2 will not be deployed until web-1
is Running and Ready</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">StatefulSets - Deployment and scaling guaranties</a></p>
<h2>Readyness probe, endpoints and potential workaround</h2>
<blockquote>
<p>If the readiness probe failure, always prevent the deployment, Is
there other way to selectively expose only ready pods in the load
balancer, while not marking them as Unready during the deployment?</p>
</blockquote>
<p>This is by design, pods are added to service endpoints once they are in <code>ready</code> state.</p>
<p>A potential workaround can be used; at least in a simple example it does work. However, you should evaluate whether this approach suits your case; it is fine to use for an initial deployment.</p>
<p>The <code>statefulSet</code> can be started without the <code>readiness</code> probe included; this way the <code>statefulSet</code> will start pods one by one as soon as the previous one is <code>running and ready</code>. The <code>liveness</code> probe may need <code>initialDelaySeconds</code> set so kubernetes won't restart the pod thinking it's unhealthy. Once the <code>statefulSet</code> is fully running and ready, you can add the <code>readiness</code> probe to the <code>statefulSet</code>.</p>
<p>When the <code>readiness</code> probe is added, kubernetes will restart all pods again starting from the last one, and your application will need to start again.</p>
<p>The idea is that all pods start and are able to serve requests at roughly the same time, whereas with the <code>readiness</code> probe applied from the start, only one pod would start in e.g. 5 minutes, the next pod would take 5 more minutes, and so on.</p>
<h2>Example</h2>
<p>A simple example to see what's going on, based on the <code>nginx</code> webserver and a <code>sleep 30</code> command which, when the <code>readiness</code> probe is set up, makes kubernetes consider the pod <code>not ready</code> for the first 30 seconds.</p>
<ol>
<li>Apply <code>headless service</code></li>
<li>Comment out the <code>readiness</code> probe in the <code>statefulSet</code> and apply the manifest</li>
<li>Observe that each pod is created right after the previous pod is <code>running and ready</code></li>
<li>Uncomment the <code>readiness</code> probe and apply the manifest</li>
<li>Kubernetes will recreate all pods starting from the last one, this time waiting for the <code>readiness</code> probe to complete before flagging a pod as <code>running and ready</code>.</li>
</ol>
<p>It is very convenient to use this command to watch the progress:</p>
<pre><code>watch -n1 kubectl get pods -o wide
</code></pre>
<p><code>nginx-headless-svc.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
</code></pre>
<p><code>nginx-statefulset.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
command: ["/bin/bash", "-c"]
args: ["sleep 30 ; echo sleep completed ; nginx -g \"daemon off;\""]
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 1
periodSeconds: 5
</code></pre>
<h2>Update</h2>
<p>Thanks to @jesantana for this much easier solution.</p>
<p>If all pods have to be scheduled at once and it's not necessary to wait for each pod's readiness, <code>.spec.podManagementPolicy</code> can be set to <code>Parallel</code>. See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies" rel="nofollow noreferrer">Pod Management Policies</a>.</p>
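<p>For illustration, a minimal sketch of a StatefulSet using it (based on the nginx example above, with the probe and sleep omitted for brevity; readiness still controls when each pod is added to the Service endpoints):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  podManagementPolicy: Parallel   # create/delete pods in parallel, no ordered startup
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
</code></pre>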
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes statefulsets</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">kubernetes liveness, readyness and startup probes</a></li>
</ul>
| moonkotte |
<p>I have a pod which exceeds the disk resource limits sometimes for a short time. The pod then gets evicted. Is there a way to allow the pod to exceed the resource limits for e.g. 30 seconds?</p>
| User12547645 | <p>Yes, if the sizeLimit is breached your Pod will be evicted (Kubernetes will terminate your containers and schedule a replacement Pod). If the node where a Pod is running has enough of a resource available, it's possible for a container to use more resources than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.</p>
<p>As suggested by David Maze, Nodes have <strong>local ephemeral storage</strong>. You can provide a default ephemeral-storage request and limit on your LimitRange. A <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">LimitRange</a> is a policy to constrain resource allocations (to Pods or Containers) in a namespace.</p>
<p>To make the resource quota work on ephemeral-storage, two things need to be done:</p>
<ul>
<li>An admin sets the resource quota for ephemeral-storage in a namespace.</li>
<li>A user needs to specify limits for the ephemeral-storage resource in the Pod spec.</li>
</ul>
<p>The <code>kubelet</code> can measure how much local storage it is using. It does this provided that you have set up the node using one of the supported configurations (single or two file systems) for local ephemeral storage. If you have a different configuration, then the kubelet does not apply resource limits for ephemeral local storage.</p>
<p>If the user doesn't specify the ephemeral-storage resource limit in the Pod spec, the resource quota is not enforced on ephemeral-storage.</p>
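<p>To avoid relying on every Pod spec, the LimitRange mentioned above can inject defaults for containers that don't specify them. A minimal sketch (the values are just examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-defaults
spec:
  limits:
  - type: Container
    defaultRequest:              # applied when a container sets no request
      ephemeral-storage: "2Gi"
    default:                     # applied when a container sets no limit
      ephemeral-storage: "4Gi"
</code></pre>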
<p>Kubernetes lets you track, reserve and limit the amount of ephemeral local storage a Pod can consume. If a Pod is using more ephemeral storage than you allow it to, the kubelet sets an eviction signal that triggers Pod eviction.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
</code></pre>
<p>You can refer to <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-ephemeral-storage-requests-are-scheduled" rel="nofollow noreferrer">how pods with ephemeral storage requests are scheduled.</a></p>
| Srividya |
<p><em>"When using the KubernetesExecutor, Airflow offers the ability to override system defaults on a per-task basis. To utilize this functionality, we can create a Kubernetes <code>V1pod</code> object and fill in the desired overrides."</em></p>
<p>I am trying to trigger a DAG with the following operator (example from the official core doc):</p>
<pre class="lang-py prettyprint-override"><code>...
volume_task = PythonOperator(
task_id="task_with_volume",
python_callable=test_volume_mount,
executor_config={
"pod_override": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
volume_mounts=[
k8s.V1VolumeMount(
mount_path="/foo/", name="example-kubernetes-test-volume"
)
],
)
],
volumes=[
k8s.V1Volume(
name="example-kubernetes-test-volume",
host_path=k8s.V1HostPathVolumeSource(path="/tmp/"),
)
],
)
),
},
)
...
</code></pre>
<p>When I click on my DAG in the UI, I got the following error:</p>
<pre><code>
____/ ( ( ) ) \___
/( ( ( ) _ )) ) )\
(( ( )( ) ) ( ) )
((/ ( _( ) ( _) ) ( () ) )
( ( ( (_) (( ( ) .((_ ) . )_
( ( ) ( ( ) ) ) . ) ( )
( ( ( ( ) ( _ ( _) ). ) . ) ) ( )
( ( ( ) ( ) ( )) ) _)( ) ) )
( ( ( \ ) ( (_ ( ) ( ) ) ) ) )) ( )
( ( ( ( (_ ( ) ( _ ) ) ( ) ) )
( ( ( ( ( ) (_ ) ) ) _) ) _( ( )
(( ( )( ( _ ) _) _(_ ( (_ )
(_((__(_(__(( ( ( | ) ) ) )_))__))_)___)
((__) \\||lll|l||/// \_))
( /(/ ( ) ) )\ )
( ( ( ( | | ) ) )\ )
( /(| / ( )) ) ) )) )
( ( ((((_(|)_))))) )
( ||\(|(|)|/|| )
( |(||(||)|||| )
( //|/l|||)|\\ \ )
(/ / // /|//||||\\ \ \ \ _)
-------------------------------------------------------------------------------
Node: 9c9de21b5ea0
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/python3.6/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/python3.6/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/python3.6/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/python3.6/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/python3.6/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/python3.6/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/python3.6/lib/python3.6/site-packages/flask_admin/base.py", line 69, in inner
return self._run_view(f, *args, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/flask_admin/base.py", line 368, in _run_view
return fn(self, *args, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/flask_login/utils.py", line 258, in decorated_view
return func(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/www/utils.py", line 386, in view_func
return f(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/www/utils.py", line 292, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/utils/db.py", line 74, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/www/views.py", line 1706, in tree
show_external_logs=bool(external_logs))
File "/usr/local/lib/airflow/airflow/www/views.py", line 425, in render
return super(AirflowViewMixin, self).render(template, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/flask_admin/base.py", line 308, in render
return render_template(template, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/opt/python3.6/lib/python3.6/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/opt/python3.6/lib/python3.6/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/opt/python3.6/lib/python3.6/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/opt/python3.6/lib/python3.6/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/airflow/airflow/www/templates/airflow/tree.html", line 20, in top-level template code
{% extends "airflow/dag.html" %}
File "/usr/local/lib/airflow/airflow/www/templates/airflow/dag.html", line 21, in top-level template code
{% import 'admin/lib.html' as lib with context %}
File "/usr/local/lib/airflow/airflow/www/templates/airflow/master.html", line 20, in top-level template code
{% extends "admin/master.html" %}
File "/usr/local/lib/airflow/airflow/www/templates/admin/master.html", line 20, in top-level template code
{% extends 'admin/base.html' %}
File "/opt/python3.6/lib/python3.6/site-packages/flask_admin/templates/bootstrap3/admin/base.html", line 95, in top-level template code
{% block tail %}
File "/usr/local/lib/airflow/airflow/www/templates/airflow/tree.html", line 85, in block "tail"
var data = {{ data|tojson }};
File "/opt/python3.6/lib/python3.6/site-packages/flask/json/__init__.py", line 376, in tojson_filter
return Markup(htmlsafe_dumps(obj, **kwargs))
File "/opt/python3.6/lib/python3.6/site-packages/flask/json/__init__.py", line 290, in htmlsafe_dumps
dumps(obj, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/flask/json/__init__.py", line 211, in dumps
rv = _json.dumps(obj, **kwargs)
File "/opt/python3.6/lib/python3.6/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/opt/python3.6/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/opt/python3.6/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/opt/python3.6/lib/python3.6/site-packages/flask/json/__init__.py", line 100, in default
return _json.JSONEncoder.default(self, o)
File "/opt/python3.6/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'V1Pod' is not JSON serializable
</code></pre>
<p>Can you please tell me what is going on and how I can resolve the issue?
Thanks</p>
| Pascal GILLET | <p>It's a bug which was fixed in Airflow 2.0.0 (See <a href="https://github.com/apache/airflow/pull/11952" rel="nofollow noreferrer">PR</a>)</p>
| Elad Kalif |
<p>I own the domain foo.com with lots of existing services.
We have chosen HubSpot as our blog service provider, and we want the blog to appear at foo.com/blog while all the other pages are self-hosted.</p>
<p>We are using kubernetes and istio already. Is there a way we can connect /blog similar to a reverse proxy as an external service?</p>
| benone | <blockquote>
<p>Is there a way we can connect /blog similar to a reverse proxy as an external service?</p>
</blockquote>
<p>If I understand your problem/question correctly, this could be done by <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/http-proxy/" rel="nofollow noreferrer">using an External HTTPS Proxy</a>. Everything is described in this documentation with examples.</p>
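<p>At its core, that approach makes the external endpoint known to the mesh through a <code>ServiceEntry</code>. A minimal sketch (the host name is hypothetical, use whatever host serves your blog):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-blog
spec:
  hosts:
  - blog-provider.example.com   # hypothetical external host of the blog
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
</code></pre>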
<p>But based on <a href="https://github.com/istio/istio/issues/23298" rel="nofollow noreferrer">this topic</a>, by default:</p>
<blockquote>
<p>You need to have an ingress gateway and expose the internal service.
The services within the mesh is discovered by Pilot and you can't visit internal mesh services without configure the Ingress Gateway. They're simply isolated by the sidecar rules.</p>
</blockquote>
<p>There are also many similar questions on the internet. Look at the similar topics:</p>
<ul>
<li><a href="https://discuss.istio.io/t/is-it-possible-to-use-istio-as-a-reverse-proxy-similar-to-nginx-proxy-pass/6319" rel="nofollow noreferrer">Is it possible to use Istio as a reverse proxy? (similar to nginx proxy_pass)</a></li>
<li><a href="https://istio.io/latest/blog/2019/proxy/" rel="nofollow noreferrer">Istio as a Proxy for External Services</a></li>
<li><a href="https://discuss.istio.io/t/nginx-reverse-proxy-with-istio-ingress/11395" rel="nofollow noreferrer">Nginx reverse proxy with istio ingress</a></li>
<li><a href="https://stackoverflow.com/questions/62173813/using-istio-as-an-reverse-proxy-for-external-tls-services">Using istio as an reverse proxy for external TLS services</a></li>
<li><a href="https://stackoverflow.com/questions/68176318/how-to-proxy-pass-for-another-website-reverse-proxy-in-istio-like-we-do-in-ngi">How to proxy pass for another website (reverse proxy) in istio like we do in nginx https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/</a></li>
</ul>
| Mikołaj Głodziak |
<p>I have a Kubernetes cluster and I need to collect the various pod and node event timestamps.</p>
<p>I do that by building a Go service that communicates with my Kubernetes cluster via the client-go library. The timestamps I get for the subscribed pod and node objects only have second-level precision.</p>
<p>Is there a way to get the time at millisecond precision? I found a <a href="https://github.com/kubernetes/kubernetes/issues/81026" rel="nofollow noreferrer">similar issue raised</a>, but there is no resolution for it.</p>
<p>Could someone help me with this?</p>
| Shresthi Garg | <p>Welcome to the community @shresthi-garg</p>
<p>First of all, as you correctly found, it's not possible to get timestamps from kubernetes components themselves with millisecond precision, and <a href="https://github.com/kubernetes/kubernetes/issues/81026#issuecomment-832301082" rel="nofollow noreferrer">this github issue</a> is closed for now.</p>
<p>However it's still possible to find some exact timings about containers and other events. Below is an example related to a container.</p>
<p><strong>Option 1</strong> - the kubelet by default writes a significant amount of logs to syslog. It's possible to view them using <code>journalctl</code> (note: this approach works on <code>systemd</code> systems; for other systems please refer to the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs" rel="nofollow noreferrer">official kubernetes documentation</a>). Example of the command:</p>
<p><code>journalctl -u kubelet -o short-precise</code></p>
<p>-u - filter by unit</p>
<p>-o - output options</p>
<p>The line from the output which we're looking for will be:</p>
<pre><code>May 18 21:00:30.221950 control-plane kubelet[8576]: I0518 21:00:30.221566 8576 scope.go:111] "RemoveContainer" containerID="d7d0403807684ddd4d2597d32b90b1e27d31f082d22cededde26f6da8281cd92"
</code></pre>
<p><strong>Option 2</strong> - get this information from the container engine. In the example below I used Docker for this and ran this command:</p>
<p><code>docker inspect container_id/container_name</code></p>
<p>Output will be like:</p>
<pre><code>{
"Id": "d7d0403807684ddd4d2597d32b90b1e27d31f082d22cededde26f6da8281cd92",
"Created": "2021-05-18T21:00:07.388569335Z",
"Path": "/docker-entrypoint.sh",
"Args": [
"nginx",
"-g",
"daemon off;"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8478,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-05-18T21:00:07.593216613Z",
"FinishedAt": "0001-01-01T00:00:00Z"
}
</code></pre>
| moonkotte |
<p>I have added one more nameserver and also the rotate option, so that for DNS resolution different name servers are picked in round-robin fashion, but it always picks the first name server. This is my resolv.conf:</p>
<pre><code>nameserver 10.96.0.10
nameserver 8.8.8.8
search measurement.svc.cluster.local svc.cluster.local cluster.local
options rotate timeout:1 ndots:5
</code></pre>
<p>The second nameserver is never getting picked. I tried running nslookup from a kubernetes pod multiple times; it always picks the first one, which is some cluster-level default. Is there some additional configuration change required at the k8s level, or does it have something to do with the CoreDNS policy?
I would really appreciate any help.</p>
| Bhavya Sharma | <p>When you specify multiple name servers in <code>resolv.conf</code> file with options rotate, the resolver library should rotate through each name server in order for each DNS query. However in your case, the second nameserver is being ignored.</p>
<p>This issue could be caused by several factors:</p>
<ol>
<li><p>Check if the second nameserver is configured correctly and is reachable.</p>
</li>
<li><p>Ensure the network connectivity is not an issue and check network configuration and firewall settings.</p>
</li>
<li><p>The issue might be caused by DNS caching also. The first name server could be cached and the resolver library might be using a cached response instead of rotating to the next server. Clear the DNS cache to see if it resolves the issue.</p>
</li>
<li><p>Also, try increasing the timeout value, e.g. <code>options timeout:3</code> or 5 seconds. The first server might be responding faster than the second server, so the resolver library keeps using the first one.</p>
</li>
</ol>
| Srividya |
<p>I was trying to use the Secret class in Airflow to pass secrets to the KubernetesOperator.
It should be imported as <code>from airflow.contrib.kubernetes.secret import Secret</code>.</p>
<p>But I'm getting an error
<code>ModuleNotFoundError: No module named 'airflow.contrib.kubernetes'</code></p>
<p>I have tried to install the apache-airflow kubernetes extra with <code>pip install apache-airflow[kubernetes]</code>, but this did not help.</p>
| Kavya | <p>The import is:</p>
<pre><code>from airflow.kubernetes.secret import Secret
</code></pre>
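<p>Once imported, a typical (sketched) usage looks like the snippet below; the secret name and key are hypothetical and must already exist in the cluster:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.kubernetes.secret import Secret

# Expose the "sql_alchemy_conn" key of an existing Kubernetes Secret named
# "airflow-secrets" as the SQL_CONN environment variable inside the pod.
secret_env = Secret(
    deploy_type="env",
    deploy_target="SQL_CONN",
    secret="airflow-secrets",
    key="sql_alchemy_conn",
)

# The object is then passed to e.g. KubernetesPodOperator via secrets=[secret_env].
</code></pre>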
<p>Note that the Secret class in Airflow can only <strong>reference</strong> secrets that already exist in Kubernetes.</p>
<p>If by "pass" you mean generate secrets, then it won't work that way. You must first create them in Kubernetes. You can do this by using <code>create_namespaced_secret</code> of the Kubernetes Python SDK - see <a href="https://stackoverflow.com/questions/61405562/using-create-namespaced-secret-api-in-kubernetes-python-client">Using create_namespaced_secret API in Kubernetes Python client</a>.</p>
<p>Note that there is an <a href="https://github.com/apache/airflow/issues/28086" rel="nofollow noreferrer">open feature request</a> to be able to use credentials passed from Airflow Connections in the pod running the workload.</p>
| Elad Kalif |
<p>For an application deployed in Kubernetes would there be any suggested guidance documentation for SAML integration? My search foo is deserting me.</p>
<p>Most documentation is for Kubernetes itself and not the application. The application would not be aware of Kubernetes RBAC etc.</p>
| user353829 | <p>In the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies" rel="nofollow noreferrer">official documentation</a> you can find the following section:</p>
<blockquote>
<p>Kubernetes uses client certificates, bearer tokens, or an authenticating proxy to authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request:</p>
<ul>
<li>Username: a string which identifies the end user. Common values might be <code>kube-admin</code> or <code>[email protected]</code>.</li>
<li>UID: a string which identifies the end user and attempts to be more consistent and unique than username.</li>
<li>Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users. Common values might be <code>system:masters</code> or <code>devops-team</code>.</li>
<li>Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful.</li>
</ul>
<p>All values are opaque to the authentication system and only hold significance when interpreted by an <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">authorizer</a>.</p>
<p>You can enable multiple authentication methods at once. You should usually use at least two methods:</p>
<ul>
<li>service account tokens for service accounts</li>
<li>at least one other method for user authentication.</li>
</ul>
<p>When multiple authenticator modules are enabled, the first module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in.</p>
<p>The <code>system:authenticated</code> group is included in the list of groups for all authenticated users.</p>
<p><strong>Integrations with other authentication protocols (LDAP, SAML, Kerberos, alternate x509 schemes, etc) can be accomplished using an <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authenticating-proxy" rel="nofollow noreferrer">authenticating proxy</a> or the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication" rel="nofollow noreferrer">authentication webhook</a>.</strong></p>
</blockquote>
<p>As you can see, to add SAML to your configuration you can use an <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authenticating-proxy" rel="nofollow noreferrer">authenticating proxy</a> or the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication" rel="nofollow noreferrer">authentication webhook</a>.</p>
<p>If you are looking for an example of how to set up SAML in Kubernetes, you can read <a href="https://goteleport.com/blog/kubernetes-sso-saml/" rel="nofollow noreferrer">this article</a>.</p>
<p>However, in the vast majority of cases, SAML will extend (rather than replace) the RBAC functionality. See also the article <a href="https://goteleport.com/blog/how-saml-authentication-works/" rel="nofollow noreferrer">How SAML 2.0 Authentication Works?</a></p>
| Mikołaj Głodziak |
<p>I walked through the steps on a 3-node K8s cluster, and it doesn't seem like I am able to block the flow of traffic to a deployment's pod using a NetworkPolicy.</p>
<p>Here is the output from the exercise.</p>
<pre><code>user@myk8master:~$ kubectl get deployment,svc,networkpolicy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP X.X.X.X <none> 443/TCP 20d
user@myk8master:~$
user@myk8master:~$
user@myk8master:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
user@myk8master:~$ kubectl expose deployment nginx --port=80
service/nginx exposed
user@myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (X.X.X.X:80)
remote file exists
/ # exit
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running
pod "busybox" deleted
user@myk8master:~$
user@myk8master:~$
user@myk8master:~$ vi network-policy.yaml
user@myk8master:~$ cat network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: access-nginx
spec:
podSelector:
matchLabels:
app: nginx
ingress:
- from:
- podSelector:
matchLabels:
access: "true"
user@myk8master:~$
user@myk8master:~$
user@myk8master:~$ kubectl apply -f network-policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
user@myk8master:~$
user@myk8master:~$
user@myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.100.97.229:80)
remote file exists. <<<< THIS SHOULD NOT WORK
</code></pre>
<blockquote>
<p>I followed all the steps as is, but it seems like I am unable to block the traffic even with networkpolicy defined.</p>
</blockquote>
<p>Can someone please help and let me know if I am doing something dumb here?</p>
| Shyam | <p>As described in the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">documentation</a> , restricting client access should work by using a network plugin. Because of some conflict or glitch it may not restrict the access. So try to reinstall/reconfigure.</p>
<p>You can also try another method, like blocking clients in <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/" rel="nofollow noreferrer">NGINX</a>.</p>
<p>You can restrict access by IP address. NGINX can allow or deny access based on a particular IP address or a range of client IP addresses. To allow or deny access, use the allow and deny directives inside the stream context or a server block:</p>
<pre><code> stream {
#...
server {
listen 12345;
deny 192.168.1.2;
allow 192.168.1.1/24;
allow 2001:0db8::/32;
deny all;
}
}
</code></pre>
<p>Limiting the Number of TCP Connections. You can limit the number of simultaneous TCP connections from one IP address:</p>
<pre><code> stream {
        #...
        limit_conn_zone $binary_remote_addr zone=ip_addr:10m;
        server {
            #...
            # apply the zone: at most 1 simultaneous connection per client IP
            limit_conn ip_addr 1;
        }
    }
</code></pre>
<p>You can also limit bandwidth, IP ranges, etc. Using NGINX is more flexible.</p>
<p>Refer to the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">link</a> for more information about network plugins.</p>
| Srividya |
<p>So, I'm using minikube v1.19.0 on Ubuntu and using nginx-ingress with kubernetes. I have two Node services, auth and client, each with its own Docker image.</p>
<p>I've got 4 kubernetes config files (plus a skaffold config), which are as follows:</p>
<p>auth-deply.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: xyz/auth
env:
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>auth-moongo-depl.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-mongo-depl
spec:
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
containers:
- name: auth-mongo
image: mongo
---
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-srv
spec:
selector:
app: auth-mongo
ports:
- name: db
protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>client-depl.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: xyz/client
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>ingress-srv.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
- path: /?(.*)
backend:
serviceName: client-srv
servicePort: 3000
</code></pre>
<p>skaffold.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: xyz/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
- image: xyz/client
context: client
docker:
dockerfile: Dockerfile
sync:
manual:
- src: '**/*.js'
dest: .
</code></pre>
<p>Now, when I run <code>skaffold dev</code>, the following error occurs:</p>
<pre class="lang-sh prettyprint-override"><code>Listing files to watch...
- xyz/auth
- xyz/client
Generating tags...
- xyz/auth -> xyz/auth:abcb6e4
- xyz/client -> xyz/client:abcb6e4
Checking cache...
- xyz/auth: Found Locally
- xyz/client: Found Locally
Starting test...
Tags used in deployment:
- xyz/auth -> xyz/auth:370487d5c0136906178e602b3548ddba9db75106b22a1af238e02ed950ec3f21
- xyz/client -> xyz/client:a56ea90769d6f31e983a42e1c52275b8ea2480cb8905bf19b08738e0c34eafd3
Starting deploy...
- deployment.apps/auth-depl configured
- service/auth-srv configured
- deployment.apps/auth-mongo-depl configured
- service/auth-mongo-srv configured
- deployment.apps/client-depl configured
- service/client-srv configured
- Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
exiting dev mode because first deploy failed: kubectl apply: exit status 1
</code></pre>
<p>Everything was working fine until I reinstalled minikube, and now I'm getting this problem.
I need some help here.</p>
| Ishan Joshi | <p>I just found out the cause: when minikube was reinstalled, the validating webhook was not deleted, which created the issue. It should be removed using the following command.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
</code></pre>
<p>While reinstalling, I had forgotten to remove this webhook, which is installed by the manifests, and that is what created this problem.</p>
<p>Additional links related to this problem:</p>
<p><a href="https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found/62044090#62044090">Nginx Ingress: service "ingress-nginx-controller-admission" not found</a></p>
<p><a href="https://stackoverflow.com/questions/61616203/nginx-ingress-controller-failed-calling-webhook">Nginx Ingress Controller - Failed Calling Webhook</a></p>
| Ishan Joshi |
<p>On my bare metal Kubernetes cluster, I installed MongoDB using <a href="https://bitnami.com/stack/mongodb/helm" rel="nofollow noreferrer">helm from bitnami on Kubernetes</a> as follows.</p>
<pre><code>helm install mongodb bitnami/mongodb
</code></pre>
<p>Immediately I get the following output as a result.</p>
<pre><code>vagrant@kmasterNew:~$ helm install mongodb bitnami/mongodb
NAME: mongodb
LAST DEPLOYED: Tue May 4 12:26:58 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
MongoDB(R) can be accessed on the following DNS name(s) and ports from within your cluster:
mongodb.default.svc.cluster.local
To get the root password run:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
To connect to your database, create a MongoDB(R) client container:
kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.5-debian-10-r21 --command -- bash
Then, run the following command:
mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/mongodb 27017:27017 &
mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD
</code></pre>
<p>Then upon inspecting the pods, I see that the pod is always pending however long I wait.</p>
<pre><code>vagrant@kmasterNew:~$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongodb-d9b6d589c-zpmb6 0/1 Pending 0 9m21s <none> <none> <none> <none>
</code></pre>
<p>What am I missing?</p>
<p>As indicated in the helm install output I run the following command to get the secret.</p>
<pre><code>export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
</code></pre>
<p>It is successfully executed. And when I do</p>
<pre><code>echo $MONGODB_ROOT_PASSWORD
</code></pre>
<p>I get the root password as</p>
<pre><code>rMjjciN8An
</code></pre>
<p>As the instructions in the helm install output suggests, I tried to connect to the database by running</p>
<pre><code>kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=rMjjciN8An" --image docker.io/bitnami/mongodb:4.4.5-debian-10-r21 --command -- bash
mongo admin --host "mongodb" --authenticationDatabase admin -u root -p rMjjciN8An
</code></pre>
<p>And I get the following output.</p>
<pre><code>MongoDB shell version v4.4.5
connecting to: mongodb://mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongodb:27017, connection attempt failed: SocketException: Error connecting to mongodb:27017 (10.111.99.8:27017) :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
</code></pre>
<p>As you can see, the connection attempt failed. I guess this is because the pod itself is in the Pending state in the first place.</p>
<p>So, to get more info about the pod, I exit the mongodb-client pod (created in the step above) and run the following command.</p>
<pre><code>kubectl get pod -o yaml
</code></pre>
<p>And I get the lengthy output.</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-05-04T12:26:59Z"
generateName: mongodb-d9b6d589c-
labels:
app.kubernetes.io/component: mongodb
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
pod-template-hash: d9b6d589c
name: mongodb-d9b6d589c-zpmb6
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: mongodb-d9b6d589c
uid: c99bfa3e-9a8d-425f-acdc-74d8acaba71b
resourceVersion: "52012"
uid: 97f77766-f400-424c-9651-9839a7506721
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: mongodb
app.kubernetes.io/instance: mongodb
app.kubernetes.io/name: mongodb
namespaces:
- default
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: BITNAMI_DEBUG
value: "false"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mongodb-root-password
name: mongodb
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: MONGODB_SYSTEM_LOG_VERBOSITY
value: "0"
- name: MONGODB_DISABLE_SYSTEM_LOG
value: "no"
- name: MONGODB_DISABLE_JAVASCRIPT
value: "no"
- name: MONGODB_ENABLE_JOURNAL
value: "yes"
- name: MONGODB_ENABLE_IPV6
value: "no"
- name: MONGODB_ENABLE_DIRECTORY_PER_DB
value: "no"
image: docker.io/bitnami/mongodb:4.4.5-debian-10-r21
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- db.adminCommand('ping')
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: mongodb
ports:
- containerPort: 27017
name: mongodb
protocol: TCP
readinessProbe:
exec:
command:
- bash
- -ec
- |
mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: {}
securityContext:
runAsNonRoot: true
runAsUser: 1001
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mongodb
name: datadir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-g5kx8
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
serviceAccount: mongodb
serviceAccountName: mongodb
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: datadir
persistentVolumeClaim:
claimName: mongodb
- name: kube-api-access-g5kx8
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-05-04T12:26:59Z"
message: '0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: BestEffort
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>But I feel that an important clue is at the end of the output.</p>
<pre><code>message: '0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.'
</code></pre>
<p>It looks like something is wrong with the PVC. Now, looking at the manifest generated by running</p>
<pre><code>helm get manifest mongodb
</code></pre>
<p>I get the manifest as follows.</p>
<pre><code>---
# Source: mongodb/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongodb
namespace: default
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
secrets:
- name: mongodb
---
# Source: mongodb/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: mongodb
namespace: default
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
type: Opaque
data:
mongodb-root-password: "ck1qamNpTjhBbg=="
---
# Source: mongodb/templates/standalone/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongodb
namespace: default
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
---
# Source: mongodb/templates/standalone/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mongodb
namespace: default
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
type: ClusterIP
ports:
- name: mongodb
port: 27017
targetPort: mongodb
nodePort: null
selector:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb
app.kubernetes.io/component: mongodb
---
# Source: mongodb/templates/standalone/dep-sts.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
namespace: default
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
strategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb
app.kubernetes.io/component: mongodb
template:
metadata:
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-10.15.0
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
serviceAccountName: mongodb
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb
app.kubernetes.io/component: mongodb
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
sysctls: []
containers:
- name: mongodb
image: docker.io/bitnami/mongodb:4.4.5-debian-10-r21
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb
key: mongodb-root-password
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: MONGODB_SYSTEM_LOG_VERBOSITY
value: "0"
- name: MONGODB_DISABLE_SYSTEM_LOG
value: "no"
- name: MONGODB_DISABLE_JAVASCRIPT
value: "no"
- name: MONGODB_ENABLE_JOURNAL
value: "yes"
- name: MONGODB_ENABLE_IPV6
value: "no"
- name: MONGODB_ENABLE_DIRECTORY_PER_DB
value: "no"
ports:
- name: mongodb
containerPort: 27017
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- bash
- -ec
- |
mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits: {}
requests: {}
volumeMounts:
- name: datadir
mountPath: /bitnami/mongodb
subPath:
volumes:
- name: datadir
persistentVolumeClaim:
claimName: mongodb
</code></pre>
<p>To summarize, the following are the 5 different kinds of objects the above manifest represent.</p>
<pre><code>kind: ServiceAccount
kind: Secret
kind: PersistentVolumeClaim
kind: Service
kind: Deployment
</code></pre>
<p>As we can see, there is a PersistentVolumeClaim, but no PersistentVolume.</p>
<p>I think I followed the instructions given <a href="https://bitnami.com/stack/mongodb/helm" rel="nofollow noreferrer">here</a> for installing the MongoDB chart on Kubernetes.</p>
<p><a href="https://i.stack.imgur.com/BhnAQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BhnAQ.png" alt="mongo db installation on kubernetese using helm" /></a></p>
<p>There is nothing there mentioning a PersistentVolume. Am I missing something here? Do I have to somehow create a PersistentVolume myself?</p>
<p>So the questions are</p>
<ol>
<li><p>Why is the pod in pending state indefinitely?</p>
</li>
<li><p>Why is there no PersistentVolume object created? (I checked with the command <code>kubectl get pv --all-namespaces</code>.)</p>
</li>
<li><p>Finally, what baffles me is that when I try to get logs, I see nothing!</p>
<pre><code>vagrant@kmasterNew:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongodb-d9b6d589c-zpmb6 0/1 Pending 0 60m
vagrant@kmasterNew:~$ kubectl logs mongodb-d9b6d589c-zpmb6
vagrant@kmasterNew:~$
</code></pre>
</li>
</ol>
| VivekDev | <p>Moving this out of the comments, as I was able to reproduce it on a Kubernetes cluster set up using <code>kubeadm</code>.</p>
<p>1 - It's pending because there is no persistent volume for its claim to bind to.
This can be checked with:</p>
<p><code>kubectl get pvc</code></p>
<p>The output is:</p>
<pre><code>kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb Pending 8s
</code></pre>
<p>Then <code>kubectl describe pvc mongodb</code></p>
<pre><code>Name: mongodb
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app.kubernetes.io/component=mongodb
app.kubernetes.io/instance=mongodb
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=mongodb
helm.sh/chart=mongodb-10.15.0
Annotations: meta.helm.sh/release-name: mongodb
meta.helm.sh/release-namespace: default
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: mongodb-d9b6d589c-7mbf8
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2s (x8 over 97s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>2 - There are <a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb#prerequisites" rel="nofollow noreferrer">three main prerequisites</a> to start with bitnami/mongodb chart:</p>
<blockquote>
<ul>
<li>Kubernetes 1.12+</li>
<li>Helm 3.1.0</li>
<li><strong>PV provisioner support in the underlying infrastructure</strong></li>
</ul>
</blockquote>
<p>In your case the pod can't start because no PersistentVolume has been created. This happens because no provisioner is used. In clouds or on minikube this is handled for you automatically, while for a bare metal cluster you have to take care of it on your own. Here are two examples of how you can do it:</p>
<ul>
<li><a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner" rel="nofollow noreferrer">sig-storage-local-static-provisioner</a></li>
<li><a href="https://github.com/rancher/local-path-provisioner" rel="nofollow noreferrer">local-path-provisioner</a></li>
</ul>
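<p>If you only need to unblock this chart on a single test node and do not want to install a provisioner yet, a manually created PersistentVolume that satisfies the chart's 8Gi ReadWriteOnce claim would also work. A minimal hostPath sketch, where the name and path are illustrative assumptions and not taken from the chart:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv            # illustrative name
spec:
  capacity:
    storage: 8Gi              # must cover the 8Gi requested by the chart's PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/mongodb   # assumption: an existing directory on the node
</code></pre>
<p>Keep in mind that hostPath volumes are only suitable for single-node tests; for anything else the provisioners above are the proper way.</p>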
<p>You can check if any storage classes and provisioners are used with:</p>
<p><code>kubectl get storageclasses</code></p>
<p>3 - You don't see logs because the container didn't even start. You can always refer to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#my-pod-stays-pending" rel="nofollow noreferrer">troubleshooting pending or crashing pods</a>.</p>
| moonkotte |
<p>I am trying to write the nginx ingress config for my k8s cluster.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: blabla-data-api-ingress
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "360"
nginx.ingress.kubernetes.io/proxy-send-timeout: "360"
nginx.ingress.kubernetes.io/proxy-read-timeout: "360"
nginx.ingress.kubernetes.io/proxy-body-size: 256m
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- blabla-data.api.staging.20-74-47-80.nip.io
secretName: blabla-data-api-certification-staging
rules:
- host: blabla-data.api.staging.20-74-47-80.nip.io
http:
paths:
- backend:
serviceName: blabla-data-api
servicePort: 80
path: /
- backend:
serviceName: blabla-data-api
servicePort: 443
path: /
</code></pre>
<p>When I apply this config, I get this error:</p>
<pre><code>for: "kubernetes/staging/blabla-data-api-ingress.staging.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: nginx.ingress.kubernetes.io/configuration-snippet annotation contains invalid word proxy_pass
</code></pre>
<p>In fact, this piece of code used to work in the past.</p>
<p>I tried to add <code>--set controller.admissionWebhooks.enabled=false</code> in my <code>helm install nginx-ingress ingress-nginx/ingress-nginx</code> like that:</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.enabled=false
</code></pre>
<p>In this case, I don't get any error while applying this ingress config but then I get a <code>404</code> from nginx when I try to access my server through the external API.</p>
| soling | <p>OP has confirmed that the issue was solved in <a href="https://github.com/kubernetes/ingress-nginx/issues/7837" rel="nofollow noreferrer">this GitHub topic</a>:</p>
<blockquote>
<p>it was exactly the issue you mentioned, thanks for your help</p>
</blockquote>
<p>This problem is related to <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-25742" rel="nofollow noreferrer">CVE-2021-25742</a>. Problem is solved based on this message:</p>
<blockquote>
<p>Hi folks we just released <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.5" rel="nofollow noreferrer">Ingress NGINX v1.0.5.</a> Thanks to <a href="https://github.com/rikatz" rel="nofollow noreferrer">@rikatz</a> who helped implement<br />
<a href="https://github.com/kubernetes/ingress-nginx/pull/7874" rel="nofollow noreferrer">#7874</a> which added the option to sanitize annotation inputs</p>
<p><code>annotation-value-word-blocklist</code> defaults are <code>"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\"</code></p>
<p>Users from mod_security and other features should be aware that some blocked values may be used by those features and must be manually unblocked by the Ingress Administrator.</p>
<p>For more details please check <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist</a></p>
<p>If you have any issues with this new feature or the release please <a href="https://github.com/kubernetes/ingress-nginx/issues/new/choose" rel="nofollow noreferrer">open a new issue</a> so we can track it there.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I installed <code>minikube</code> on my Mac and I'd like to deploy elasticsearch on this k8s cluster. I followed this instruction: <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a></p>
<p>The file I created is:</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.10.0
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>when I run <code>kubectl apply -f es.yaml</code>, I got this error: <code>error: unable to recognize "es.yaml": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"</code></p>
<p>It says the <code>kind</code> is not recognized. I wonder how I can make it work. I searched the k8s docs and it seems <code>kind</code> can be <code>Service</code>, <code>Pod</code>, or <code>Deployment</code>. But why does the above instruction use <code>Elasticsearch</code> as the <code>kind</code>? What value of <code>kind</code> should I specify?</p>
| Joey Yi Zhao | <p>I think you might have missed the step of installing the CRDs and the operator for Elasticsearch. Have you followed this step: <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html</a>?</p>
<p><code>Service</code>, <code>Pod</code>, <code>Deployment</code> etc. are native Kubernetes resources. Kubernetes also provides a way to define custom resources, using <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">CRDs</a>. <code>Elasticsearch</code> is one such example, so you have to define the custom resource first so that Kubernetes understands it.</p>
| Syam Sankar |
<p>I have mounted two tar files as secrets. I would like to mount them to my container and then unpack the contents. The commands that created the secrets are as follows:</p>
<pre><code>kubectl create secret generic orderer-genesis-block --from-file=./channel-artifacts/genesis.block
kubectl create secret generic crypto-config --from-file=crypto-config.tar
kubectl create secret generic channel-artifacts --from-file=channel-artifacts.tar
</code></pre>
<p>The following is what I <code>kubectl apply</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: fabric-orderer-01
spec:
selector:
matchLabels:
app: fabric-orderer-01
replicas: 1
template:
metadata:
labels:
app: fabric-orderer-01
spec:
initContainers:
- name: init-channel-artifacts
image: busybox
volumeMounts:
- name: channel-artifacts
mountPath: /hlf/channel-artifacts
command: ['sh', '-c', 'tar -xf /hlf/channel-artifacts/channel-artifacts.tar']
containers:
- name: fabric-orderer-01
image: hyperledger/fabric-orderer:1.4.9
env:
- name: ORDERER_CFG_PATH
value: /hlf/
- name: CONFIGTX_ORDERER_ADDRESSES
value: "orderer.example.com:7050"
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_LISTENPORT
value: "7050"
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_GENERAL_LOCALMSPID
value: OrdererMSP
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /hlf/genesis.block
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: fabricfiles-01
mountPath: /fabric
- name: orderer-genesis-block
mountPath: /hlf/
readOnly: true
- name: crypto-config
mountPath: /hlf/crypto-config
readOnly: true
- name: channel-artifacts
mountPath: /hlf/channel-artifacts
readOnly: true
volumes:
- name: orderer-genesis-block
secret:
secretName: orderer-genesis-block
- name: crypto-config
secret:
secretName: crypto-config
- name: channel-artifacts
secret:
secretName: channel-artifacts
- name: fabricfiles-01
persistentVolumeClaim:
claimName: fabric-pvc-01
</code></pre>
<p>My deployment succeeds, but when I <code>bash</code> into my pod, I don't see my tar files being extracted. I only see my tar files <code>/hlf/channel-artifacts/channel-artifacts.tar</code> and <code>/hlf/crypto-config/crypto-config.tar</code>. How should I go about extracting their contents?</p>
| user10931326 | <p>When you create an initContainer and execute this command:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar']</code></p>
<p>it runs in default for this container path.
I checked this by adding <code>pwd</code> and <code>ls -l</code> commands.</p>
<p>Whole line is:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar ; pwd ; ls -l']</code></p>
<p>From an initContainer you can get logs by:</p>
<p><code>kubectl logs fabric-orderer-01-xxxxxx -c init-channel-artifacts</code></p>
<p>Output was:</p>
<pre><code>channel-artifacts.txt # first line for -v option so tar was untared indeed
/ # working directory
total 44
drwxr-xr-x 2 root root 12288 May 3 21:57 bin
-rw-rw-r-- 1 1001 1002 32 May 10 14:15 channel-artifacts.txt # file which was in tar
drwxr-xr-x 5 root root 360 May 11 08:41 dev
drwxr-xr-x 1 root root 4096 May 11 08:41 etc
drwxr-xr-x 4 root root 4096 May 11 08:41 hlf
drwxr-xr-x 2 nobody nobody 4096 May 3 21:57 home
dr-xr-xr-x 225 root root 0 May 11 08:41 proc
drwx------ 2 root root 4096 May 3 21:57 root
dr-xr-xr-x 13 root root 0 May 11 08:41 sys
drwxrwxrwt 2 root root 4096 May 3 21:57 tmp
drwxr-xr-x 3 root root 4096 May 3 21:57 usr
drwxr-xr-x 1 root root 4096 May 11 08:41 var
</code></pre>
<p>As you can see, your file is stored in the <code>/</code> path of the container, which means that when this container is terminated, its filesystem goes away as well and your file is gone.</p>
<p>Once we know what happened, it's time to work around it.
The first and important thing is that secrets are read-only and should be used in prepared form; you can't write a file into a secret the way you wanted to in your example.</p>
<p>Instead, one option is to untar your secrets onto a persistent volume:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar -C /hlf/fabric']</code></p>
<p>Then use a <code>postStart</code> hook for the main container, where you can e.g. copy your files to the desired locations or create symlinks, and you won't need to mount your secrets into the main container.</p>
<p>Simple example of a <code>postStart</code> hook (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">reference</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
</code></pre>
<p>A small caveat:</p>
<blockquote>
<p>Kubernetes sends the postStart event immediately after the Container
is created. There is no guarantee, however, that the postStart handler
is called before the Container's entrypoint is called.</p>
</blockquote>
<p>To work around it you can add <code>sleep 5</code> in your main container before the entrypoint. Here's an example of the beginning of the container section with an nginx image (for your image it will be different):</p>
<pre><code>containers:
- name: main-container
image: nginx
command: ["bash", "-c", 'sleep 5 ; echo "daemon off;" >> /etc/nginx/nginx.conf ; nginx']
</code></pre>
<p>This will fix your issue. You can also use this approach to untar your files, and then you won't even need an <code>initContainer</code>.</p>
<p>It's not clear why you want to use <code>tar</code> for this purpose, since you can store small files in <code>secrets</code> or <code>configmaps</code> and mount them directly using <code>subPath</code> where they are needed, without additional steps (you can read about it and find an example <a href="https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i" rel="nofollow noreferrer">here</a>).</p>
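<p>Purely for illustration, a hedged sketch of such a <code>subPath</code> mount (it reuses the secret name from the question; the pod name and command are made up):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                  # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls -l /hlf/genesis.block && sleep 3600"]
    volumeMounts:
    - name: genesis
      mountPath: /hlf/genesis.block   # the file appears directly at this path
      subPath: genesis.block          # the key inside the secret
      readOnly: true
  volumes:
  - name: genesis
    secret:
      secretName: orderer-genesis-block
</code></pre>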
<p>To use secrets securely, you should consider e.g. <code>HashiCorp Vault</code> (<a href="https://www.hashicorp.com/products/vault/kubernetes" rel="nofollow noreferrer">Vault with kubernetes</a>)</p>
| moonkotte |
<p>We are currently running a k8s HA multi-master cluster, version 1.24, within an <strong>SSL/TLS encrypted, secured and access-restricted private network behind a hardware firewall</strong>. Our cluster has <strong>no public endpoints</strong> and no communication with the public internet. We want to disable TLS for the k8s cluster, or at least use an insecure channel for internal k8s communication. In versions < 1.24 there were <strong>--insecure flags</strong> for the kube-apiserver, which have been <strong>removed since 1.24</strong>.</p>
<p>Does version 1.24 provide any alternative to the removed --insecure flags?
How can insecure communication be achieved with 1.24?</p>
| DavidL | <p><a href="https://github.com/rancher/cis-operator/issues/158" rel="nofollow noreferrer">Insecure address flags</a> are used to serve unsecured, unauthenticated access to the API server. Setting up the apiserver to serve on an insecure port would allow <strong>unauthenticated</strong> and <strong>unencrypted access</strong> to your master node. This would allow attackers who could access this port to easily take control of the cluster.</p>
<p>The <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md" rel="nofollow noreferrer">Kubernetes</a> release of 2022 (1.24+) aims to make Kubernetes more secure and reliable. In the 1.24 release there is a single essential change for the kube-scheduler: insecure flags, such as <code>--address</code> and <code>--port</code>, have been <a href="https://github.com/kubernetes/kubernetes/pull/106865" rel="nofollow noreferrer">removed</a>. It is suggested to use <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer"><strong>--bind-address and --secure-port</strong></a> instead. So there is no alternative option for using an insecure channel for internal Kubernetes communication.</p>
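<p>For orientation only, this is roughly where those flags live in a kubeadm-managed cluster; the snippet is an illustrative excerpt of the static pod manifest (assumed to be /etc/kubernetes/manifests/kube-apiserver.yaml), not a complete file:</p>
<pre><code># illustrative excerpt only -- the rest of the manifest is omitted
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --bind-address=0.0.0.0   # address of the secure endpoint
    - --secure-port=6443       # TLS port; there is no insecure counterpart any more
</code></pre>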
| Srividya |
<p>I deployed the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> helm chart. While this chart offers a really nice starting point it has lots of default dashboards which I do not want to use. In the <em>values.yaml</em> of the chart, there is an option <em>defaultDashboardsEnabled: true</em>, which seems to be what I am looking for but if I set it to false using the code below in my values file, which I mount into the helm chart, the dashboards are still there. Does anyone know why this does not work?</p>
<p>A possibility which I thought of is that the chart has both a subchart called <em>grafana</em> and an option <em>grafana</em>, but I do not know how I could fix it or test if this is the issue.</p>
<pre><code>grafana:
defaultDashboardsEnabled: false
</code></pre>
| Manuel | <p>I solved the issue by removing the namespace where Grafana was located. Apparently some resource was left behind that was not removed when uninstalling the helm chart.</p>
<p>Edit:
The problem seems to be with the configmaps. It appears that some of them still hold the old configuration even though it has already been changed in the helm chart. Removing the Grafana deployment and all the configmaps in the relevant namespace worked for me.
It is surely not necessary to remove all configmaps, but I did not have the time to find out which one is the problem.</p>
| Manuel |
<p>I'm using helm3.
My Kubernetes deployment fails on line 43 of the code below.</p>
<p>Error in console: error converting YAML to JSON: mapping values are not allowed in this context</p>
<p>yaml lint says this:
<strong>(): found unexpected ':' while scanning a plain scalar at line 43 column 19</strong></p>
<p>What's wrong with that line?</p>
<p>The problematic line:</p>
<pre><code>- image: {{ printf "%s/%s:%s" .Values.dockerRegistry .Values.dockerImage .Values.version }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "deploy_name" . }}
namespace: {{ .Release.Namespace }}
spec:
replicas: {{ .Values.replicas.min }}
revisionHistoryLimit: 3
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
team: {{ .Values.meta.team }}
env: {{ .Values.env }}
version: {{ .Values.version }}
revision: {{ .Release.Revision | quote }}
json_logs: "true"
commit_hash: {{ .Values.commitHash }}
annotations:
prometheus.io/port: {{ .Values.ports.application | quote }}
prometheus.io/path: {{ .Values.prometheus.path | quote }}
prometheus.io/scrape: {{ .Values.prometheus.scrape | quote }}
prometheus.io/scheme: {{ .Values.prometheus.scheme | quote }}
host/url: {{ .Values.url | quote }}
host.net/owner: {{ .Values.meta.owner | quote }}
host.net/system: {{ .Values.meta.team | quote }}
sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
imagePullSecrets:
- name: gitlab
hostAliases:
- ip: 0.0.0.0
hostnames:
- host
- host
- ip: 0.0.0.0
hostnames:
- host
containers:
- image: {{ printf "%s/%s:%s" .Values.dockerRegistry .Values.dockerImage .Values.version }}
imagePullPolicy: Always
name: website
ports:
- containerPort: {{ .Values.ports.application }}
envFrom:
- configMapRef:
name: {{ template "config_map_name" . }}
- secretRef:
name: {{ template "secret_name" . }}
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 350m
memory: 2Gi
livenessProbe:
httpGet:
path: /healthz
port: {{ .Values.ports.application }}
initialDelaySeconds: 20
periodSeconds: 3
failureThreshold: 10
readinessProbe:
httpGet:
path: /readyz
port: {{ .Values.ports.application }}
initialDelaySeconds: 20
periodSeconds: 3
</code></pre>
| jediinspace | <p>So my solution is:</p>
<pre><code>- image: '{{.Values.dockerRegistry}}/{{.Values.dockerImage}}:{{.Values.version}}'
</code></pre>
<p>The problem was with the <code>printf</code> function and the colon inside</p>
<pre><code>"%s/%s:%s"
</code></pre>
| jediinspace |
<p>Let's suppose I have pods deployed in a Kubernetes cluster and I have exposed them via a NodePort service. Is there a way to get the pod external endpoint in one command ?</p>
<p>For example:</p>
<pre><code>kubectl <cmd>
Response : <IP_OF_NODE_HOSTING_THE_POD>:30120 ( 30120 being the nodeport external port )
</code></pre>
| joe1531 | <p>The requirement is a complex one and requires querying several objects. I am going to explain with assumptions. Additionally, if you need the internal address, you can use the endpoint object (ep), because target resolution is done at the endpoint level.</p>
<p>Assumption: 1 Pod, 1 NodePort service (32320->80); both named nginx.</p>
<p>The following command will work with the stated assumption, and I hope this will give you an idea of the best approach to follow for your requirement.</p>
<p>Note: This answer is valid based on the assumption stated above. However, for a more generalized solution I recommend using <code>-o jsonpath='{range..</code> for this type of complex query. For now the following command will work.</p>
<p>command:</p>
<pre><code>kubectl get pods,ep,svc nginx -o jsonpath=' External:http://{..status.hostIP}{":"}{..nodePort}{"\n Internal:http://"}{..subsets..ip}{":"}{..spec.ports..port}{"\n"}'
</code></pre>
<p>Output:</p>
<pre><code> External:http://192.168.5.21:32320
Internal:http://10.44.0.21:80
</code></pre>
| Rajesh Dutta |
<p>I have <a href="https://github.com/kubernetes-client/csharp/" rel="nofollow noreferrer">KubernetesClient</a> code running my app on K3s Orchestrator.</p>
<p>I want to understand the difference (use case) between two K3s APIs, <code>PatchNamespacedServiceWithHttpMessagesAsync</code> and <code>ReplaceNamespacedServiceWithHttpMessagesAsync</code> <a href="https://raw.githubusercontent.com/kubernetes-client/csharp/463e2d94dfe5a4bb54372aaab957ca8d794d767e/src/KubernetesClient/generated/Kubernetes.cs" rel="nofollow noreferrer">[link to these APIs]</a>. Apart from this link I can't find any place to read about the use cases of these APIs. Please help me here.</p>
<p><strong>PS:</strong></p>
<ol>
<li>Basically I am trying to update an existing Service, so I want to understand the difference between the above two APIs, either of which I will be calling with an updated patch body (updated service deployment).<br/></li>
<li>This question is an extension of my <a href="https://stackoverflow.com/questions/70076040/how-to-find-existing-service-by-kubernetes-c-sharp-client-api">previous question</a></li>
</ol>
| Thor | <p><strong>TL;DR</strong></p>
<p><strong><code>ReplaceNamespacedServiceWithHttpMessagesAsync</code> uses the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PUT" rel="nofollow noreferrer">PUT</a> HTTP method. <code>PatchNamespacedServiceWithHttpMessagesAsync</code> uses the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PATCH" rel="nofollow noreferrer">PATCH</a> HTTP method.</strong></p>
<p>The PUT method updates or creates an object. If such an object already exists, all of its data is replaced; if not, a new object is created from the information sent. The PATCH method, like PUT, is used to update data about an object, but it requires the object to exist. This is because it does not send the complete data in the request, only the data that is to be updated.</p>
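<p>To make that concrete, here is a hedged sketch of how the request bodies typically differ; the Service name and ports are illustrative and not taken from the question. A replace carries the whole object, while a patch carries only the fields to change.</p>
<pre><code># Body for ReplaceNamespacedService... (PUT): the complete object
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
# Body for PatchNamespacedService... (PATCH, e.g. a merge patch): only the change itself
spec:
  ports:
  - port: 80
    targetPort: 9090
</code></pre>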
<hr />
<p><strong>Overall, both APIs are very similar to each other. They only differ in a few places:</strong></p>
<p>We have a different name in the first line:
In the <code>ReplaceNamespacedServiceWithHttpMessagesAsync</code></p>
<pre><code>public async Task<HttpOperationResponse<V1Service>> ReplaceNamespacedServiceWithHttpMessagesAsync(
V1Service body,
</code></pre>
<p>and in the <code>PatchNamespacedServiceWithHttpMessagesAsync</code>:</p>
<pre><code>public async Task<HttpOperationResponse<V1Service>> PatchNamespacedServiceWithHttpMessagesAsync(
V1Patch body,
</code></pre>
<hr />
<p>A <code>bool</code> is added to the 7th line in the <code>PatchNamespacedServiceWithHttpMessagesAsync</code></p>
<pre><code>bool? force = null,
</code></pre>
<p>and in the 36th line:</p>
<pre><code>tracingParameters.Add("force", force);
</code></pre>
<hr />
<p>Lines 37th for <code>ReplaceNamespacedServiceWithHttpMessagesAsync</code> and 39th for <code>PatchNamespacedServiceWithHttpMessagesAsync</code>are also different:</p>
<pre><code>ServiceClientTracing.Enter(_invocationId, this, "ReplaceNamespacedService", tracingParameters);
</code></pre>
<p>vs</p>
<pre><code>ServiceClientTracing.Enter(_invocationId, this, "PatchNamespacedService", tracingParameters);
</code></pre>
<hr />
<p>Then a fragment (lines 56 to 59) is added for <code>PatchNamespacedServiceWithHttpMessagesAsync</code>:</p>
<pre><code>if (force != null)
{
_queryParameters.Add(string.Format("force={0}", System.Uri.EscapeDataString(SafeJsonConvert.SerializeObject(force, SerializationSettings).Trim('"'))));
}
</code></pre>
<p><strong>The last and most important difference is the 65th line in <code>ReplaceNamespacedServiceWithHttpMessagesAsync</code> and the 71st line in <code>PatchNamespacedServiceWithHttpMessagesAsync</code>.</strong></p>
<pre><code>_httpRequest.Method = HttpMethod.Put;
</code></pre>
<p>vs</p>
<pre><code>_httpRequest.Method = HttpMethod.Patch;
</code></pre>
| Mikołaj Głodziak |
<p>I am following the tutorial here about kserve <a href="https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md" rel="nofollow noreferrer">https://github.com/kserve/modelmesh-serving/blob/main/docs/quickstart.md</a></p>
<p>Is this a Docker or k8s issue on my side? I have spent hours trying to debug it, but to no avail.</p>
<p>I am getting the following error:</p>
<pre><code>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m20s default-scheduler Successfully assigned modelmesh-serving/modelmesh-serving-mlserver-0.x-77cc8fd548-xdgvr to minikube
Normal Pulling 4m18s kubelet Pulling image "kserve/modelmesh:v0.9.0-rc0"
Normal Pulled 3m18s kubelet Successfully pulled image "kserve/modelmesh:v0.9.0-rc0" in 59.419620166s
Normal Created 3m18s kubelet Created container mm
Normal Started 3m17s kubelet Started container mm
Normal Pulling 3m17s kubelet Pulling image "seldonio/mlserver:0.5.2"
Warning Failed 68s kubelet Failed to pull image "seldonio/mlserver:0.5.2": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 68s kubelet Error: ErrImagePull
Normal Pulling 68s kubelet Pulling image "kserve/modelmesh-runtime-adapter:v0.9.0-rc0"
</code></pre>
| bonijad383 | <p>As mentioned by @Rafał Leszko, the image "<strong>seldonio/mlserver:0.5.2</strong>" you are trying to pull is very large, which likely causes the <strong>ErrImagePull: context deadline exceeded</strong> error. You can still pull the image with <code>docker pull</code>, after which the scheduling succeeds.</p>
<p>When the timeout is exceeded, the kubelet cancels the request and throws an error. A possible workaround is to set or increase the <code>--runtime-request-timeout</code> duration via the kubelet config file (in /var/lib/kubelet) and then pull the image by running <strong><code>docker pull imagename</code></strong>.</p>
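<p>As an illustration (a minimal sketch, assuming the kubelet reads a KubeletConfiguration file such as /var/lib/kubelet/config.yaml), raising the runtime request timeout could look like this:</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# default is 2m; a larger value gives big image pulls more time before the request is cancelled
runtimeRequestTimeout: "15m"
</code></pre>
<p>The kubelet has to be restarted for this change to take effect.</p>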
<p>See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">kubelet-config-file</a> for more information.</p>
| Srividya |
<p>I am currently working on installing Pulsar inside of Minikube. The installation seems to be going okay, however at the end when I try to get the HTTP proxy URL and binary proxy URL... I get an error message that I am not specifying a valid service:</p>
<pre><code>xyz-MBP:pulsar xyz$ kubectl get services -n pulsar | grep pulsar-mini-proxy
pulsar-mini-proxy LoadBalancer 10.107.193.52 <pending> 80:31241/TCP,6650:32025/TCP 8h
xyz-MBP:pulsar xyz$ minikube service pulsar-mini-proxy -n pulsar –-url
❌ Exiting due to MK_USAGE: You must specify a service name
</code></pre>
<p>Is there something I am doing wrong in the command I am using to display the services? Why doesn't the proxy show up as a service?</p>
<p>Here is what I did to get Pulsar installed into Minikube:</p>
<pre><code>#!/bin/bash
# this script assumes that the pre-requisites have been
# installed, and that you just need to create a minikube
# cluster and then deploy pulsar to it
# startup a minikube kubernetes cluster
minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.19.0
# point kubectl towards minikube
kubectl config use-context minikube
# install the pulsar helm chart
./pulsar-helm-chart/scripts/pulsar/prepare_helm_release.sh --create-namespace --namespace pulsar --release pulsar-mini
# install pulsar using the helm chart
helm install --set initialize=true --values pulsar-helm-chart/examples/values-minikube.yaml -n pulsar pulsar-mini apache/pulsar
# wait and then show what is going on
sleep 1m
kubectl get all
# need to wait or else the pods wont display
sleep 5m
# display the pods
kubectl get pods -n pulsar -o name
</code></pre>
<p>Just another update, it doesn't look like anything gets a URL assigned to it from the helm install:</p>
<pre><code>xyz-MBP:pulsar xyz$ minikube service list
|-------------|----------------------------|--------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|----------------------------|--------------|-----|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| pulsar | pulsar-mini-bookie | No node port |
| pulsar | pulsar-mini-broker | No node port |
| pulsar | pulsar-mini-grafana | server/3000 | |
| pulsar | pulsar-mini-prometheus | No node port |
| pulsar | pulsar-mini-proxy | http/80 | |
| | | pulsar/6650 | |
| pulsar | pulsar-mini-pulsar-manager | server/9527 | |
| pulsar | pulsar-mini-toolset | No node port |
| pulsar | pulsar-mini-zookeeper | No node port |
|-------------|----------------------------|--------------|-----|
</code></pre>
| Snoop | <p>I have a similar setup, and I installed <a href="https://helm.sh/docs/intro/install/#from-apt-debianubuntu" rel="nofollow noreferrer">helm</a> before installing <a href="https://pulsar.apache.org/docs/en/kubernetes-helm/" rel="nofollow noreferrer">pulsar</a> in minikube. Then I executed these two commands and got the URLs:</p>
<p>$ kubectl -n pulsar get services</p>
<p>$ minikube service -n pulsar pulsar-mini-proxy</p>
<p><a href="https://i.stack.imgur.com/nq8zR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nq8zR.png" alt="enter image description here" /></a></p>
| Tatikonda vamsikrishna |
<p>I have a cluster with two nodes. I want to publish some services to internet. So I need to pin my domains to some address. Basically I understand, that I need to install ingress controller. But, am I right I need to glue ingress controller to the particular node?</p>
| Maksim Rukomoynikov | <p>An ingress controller helps to manage the ingress resources in the cluster. So along with the controller you need to create the ingress resources, which are the "glue" between the domain and the services (target applications).</p>
<p>Please read <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">here</a> for more.</p>
<p>One sample from the documentation:</p>
<p>foo.bar.com/bar -> service1</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-wildcard-host
spec:
rules:
- host: "foo.bar.com"
http:
paths:
- pathType: Prefix
path: "/bar"
backend:
service:
name: service1
port:
number: 80
</code></pre>
| Rajesh Dutta |
<p>Previously I was using the <code>extensions/v1beta1</code> api to create ALB on Amazon EKS. After upgrading the EKS to <code>v1.19</code> I started getting warnings:</p>
<pre><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
<p>So I started to update my ingress configuration accordingly and deployed the ALB, but the ALB is not launching in AWS and I am also not getting the ALB address.</p>
<p>Ingress configuration --></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "pub-dev-alb"
namespace: "dev-env"
annotations:
kubernetes.io/ingress.class: "alb"
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: "dev.test.net"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: "dev-test-tg"
port:
number: 80
</code></pre>
<p>Node port configuration --></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "dev-test-tg"
namespace: "dev-env"
spec:
ports:
- port: 80
targetPort: 3001
protocol: TCP
type: NodePort
selector:
app: "dev-test-server"
</code></pre>
<p>Results ---></p>
<p><a href="https://i.stack.imgur.com/tc2rp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tc2rp.png" alt="enter image description here" /></a></p>
<p>Used <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-setup/" rel="nofollow noreferrer">this documentation</a> to create ALB ingress controller.</p>
<p>Could anyone help me on here?</p>
| jawad846 | <p>Your ingress should work fine even if you use newest Ingress. The warnings you see indicate that a new version of the API is available. You don't have to worry about it.</p>
<p>Here is the <a href="https://github.com/kubernetes/kubernetes/issues/94761" rel="nofollow noreferrer">explanation</a> why this warning occurs, even if you you use <code>apiVersion: networking.k8s.io/v1</code>:</p>
<blockquote>
<p>This is working as expected. When you create an ingress object, it can be read via any version (the server handles converting into the requested version). <code>kubectl get ingress</code> is an ambiguous request, since it does not indicate what version is desired to be read.</p>
<p>When an ambiguous request is made, kubectl searches the discovery docs returned by the server to find the first group/version that contains the specified resource.</p>
<p>For compatibility reasons, <code>extensions/v1beta1</code> has historically been preferred over all other api versions. Now that ingress is the only resource remaining in that group, and is deprecated and has a GA replacement, 1.20 will drop it in priority so that <code>kubectl get ingress</code> would read from <code>networking.k8s.io/v1</code>, but a 1.19 server will still follow the historical priority.</p>
<p>If you want to read a specific version, you can qualify the get request (like <code>kubectl get ingresses.v1.networking.k8s.io</code> ...) or can pass in a manifest file to request the same version specified in the file (<code>kubectl get -f ing.yaml -o yaml</code>)</p>
</blockquote>
<p>You can also see a <a href="https://stackoverflow.com/questions/66080909">similar question</a>.</p>
| Mikołaj Głodziak |
<p>I have an application deployed in Kubernetes. I am using the Istio service mesh. One of my services needs to be restarted when a particular error occurs. Is this something that can be achieved using Istio?</p>
<p>I don't want to use a cronjob. Also, making the application restart itself seems like an anti-pattern.</p>
<p>The application is a node js app with fastify.</p>
| Sid | <p>Istio is a tool for managing network connections between services. I was writing this answer when <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> made a very apt point in a comment:</p>
<blockquote>
<p>Istio is totally unrelated to this. Another approach could be to use a Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness probe</a> if the cluster can detect the pod is unreachable; but if you're going to add a liveness hook to your code, <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-liveness-probe" rel="nofollow noreferrer">the Kubernetes documentation also endorses just crashing</a> on unrecoverable failure.</p>
</blockquote>
<p>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.</p>
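<p>As a hedged sketch (the image name, port and <code>/healthz</code> path are assumptions about your fastify app, not something Istio or Kubernetes provides), a liveness probe in the pod spec could look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: fastify-app              # illustrative name
spec:
  containers:
  - name: app
    image: my-node-app:latest    # assumption: your application image
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /healthz           # assumption: a health endpoint exposed by the app
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3        # after 3 consecutive failures the kubelet restarts the container
</code></pre>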
<p>See also:</p>
<ul>
<li><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">health checks on the cloud - GCP example</a></li>
<li><a href="https://github.com/oracle/weblogic-kubernetes-operator/issues/1395" rel="nofollow noreferrer">creating custom rediness/liveness probe</a></li>
<li><a href="https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/domain-lifecycle/liveness-readiness-probe-customization/" rel="nofollow noreferrer">customization liveness probe</a></li>
</ul>
| Mikołaj Głodziak |
<p>Error : Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: <a href="https://www.docker.com/increase-rate-limit" rel="nofollow noreferrer">https://www.docker.com/increase-rate-limit</a></p>
<p>To fix this I have added the login credentials</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: xxxx
kind: Secret
metadata:
creationTimestamp: null
name: pullsecretgeneric
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>and also added to the deployment yaml</p>
<pre><code>template:
metadata:
labels:
component: {{ .component }}
spec:
imagePullSecrets:
- name: pullsecretgeneric
- name: pullsecret
</code></pre>
<p>Then I use helm install to do the installation.
But I still get this error.
Do I need to add it somewhere else?</p>
<p>Config.json</p>
<pre><code>{
"auths": {
"https://registry.hub.docker.com": {
"auth": "xxxx"
}
}
}
</code></pre>
<p>Any pointers to fix this</p>
| Ajoy Das | <p><strong>1 .</strong> On November 20, 2020, rate limits for anonymous and free authenticated use of Docker Hub went into effect. Anonymous and free Docker Hub users are limited to 100 and 200 container image pull requests per six hours, respectively. You can read <a href="https://docs.docker.com/docker-hub/download-rate-limit/" rel="nofollow noreferrer">here</a> for more detailed information.</p>
<p>As stated in the <a href="https://www.docker.com/increase-rate-limits/" rel="nofollow noreferrer">documentation</a> :</p>
<blockquote>
<p>The rate limits of 100 container image requests per six hours for anonymous usage, and 200 container image requests per six hours for free Docker accounts are now in effect. Image requests exceeding these limits will be denied until the six hour window elapses.</p>
</blockquote>
<p>So as a workaround you can either:</p>
<ul>
<li>Reduce your pull rate.</li>
<li>Upgrade your membership.</li>
<li>Setup your own docker proxy to cache containers locally</li>
</ul>
<p>To overcome docker hub pull rate limit refer to the <a href="https://container-registry.com/posts/overcome-docker-hub-rate-limit/" rel="nofollow noreferrer">documentation</a> and also refer to the <a href="https://stackoverflow.com/a/65020370/15745153">stackpost</a>.</p>
<p><strong>2 .</strong> Another workaround is to pull the image locally once, push it to your local docker repository and then update your image properties to point to your local repository.</p>
<p>You have to pull the images locally using your credentials and push it to your local (internally hosted) docker repository. Once pushed, update the deployment.yaml file with updated image link.</p>
<p><code>image: <LOCAL DOCKER REPO URL>/busybox</code>.</p>
<p><strong>3 .</strong> If there was no issue with Docker login and if you are able to download docker images with docker pull but getting error when creating a pod with the image then create a private docker registry.</p>
<ul>
<li>Create and run a private docker registry</li>
<li>Download busybox image from public docker hub</li>
<li>Create a tag for busybox before pushing it to private registry</li>
<li>Push to registry</li>
<li>Now create a pod, it will be created successfully.</li>
</ul>
<p>Refer to the <a href="https://stackoverflow.com/a/70093649/15745153">stackpost</a> for more information.</p>
| Jyothi Kiranmayi |
<p>I have a use case where I want to check which pods are covered by a network policy, right now my focus is only k8s generated network policies.</p>
<p>What's the easiest way to do this? I know we can go through each network policy and filter out pods from there, but a network policy can select pods in several different ways. I am not sure if there is a way to handle every possible pod filter on the network policy and then get the list of pods from it.</p>
| ashu8912 | <p>Using the <strong>podSelector</strong> field you can check which pods are covered by a NetworkPolicy: the label mentioned in the podSelector lets you retrieve the list of pods the policy applies to.</p>
<p>Each NetworkPolicy includes a <strong>podSelector</strong> which selects the grouping of pods to which the policy applies. For example, a policy whose <strong>podSelector</strong> uses the label "role=db" selects all pods with that label. An empty podSelector selects all pods in the namespace.</p>
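<p>For reference, a minimal sketch of such a policy (the name is illustrative; the "role=db" label follows the example above):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy      # illustrative name
spec:
  podSelector:
    matchLabels:
      role: db              # the policy covers every pod in this namespace with this label
  policyTypes:
  - Ingress
</code></pre>
<p>For this policy, <code>kubectl get pods -l role=db</code> lists exactly the pods it covers.</p>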
<p>Once you have a NetworkPolicy in place, you can check the label used in its podSelector by describing the networkpolicy.</p>
<pre><code>$ kubectl describe networkpolicy <networkpolicy-name>
</code></pre>
<p>The pod selector will show you which labels this network policy applies to. Then you can list all the pods with this label:</p>
<pre><code>$ kubectl get pods -l <podSelector>
</code></pre>
<p>Refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource" rel="nofollow noreferrer">NetworkPolicy resource</a> documentation for more information.</p>
| Jyothi Kiranmayi |
<p>I'm using <strong>Docker Desktop</strong> on windows 10 pro with this docker information <a href="https://i.stack.imgur.com/2FQxL.png" rel="nofollow noreferrer">docker info</a></p>
<p>My docker is just fine and I can use docker commands completely. But when it comes to enabling kubernetes.
I can't enable it. I just go to docker-desktop settings and check the enable Kubernetes button. but it stuck at "Starting ..." situation. This is the picture: <a href="https://i.stack.imgur.com/sVely.png" rel="nofollow noreferrer">Picture</a> I have used so many ways to solve the problem(e.g: turn the firewall off, delete some docker files) but nothing happens. In the log.txt file, in <code>C:\Users\<usr>\AppData\Local\Docker</code> I see the error:</p>
<blockquote>
<p>cannot get lease for master node: Get
"https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop":
dial TCP: lookup Kubernetes.docker.internal: no such host</p>
</blockquote>
| mosaddegh warner | <p>I had the same issue.
I resolved it by changing the HOSTS file in Windows (located at C:\Windows\System32\drivers\etc) and adding</p>
<p><strong>127.0.0.1 kubernetes.docker.internal</strong></p>
<p>(For this operation I also disabled the firewall and antivirus, and set 8.8.8.8 as the DNS server in the Docker Desktop settings.)</p>
| f0rs4k3n |
<p>I have a basic understanding that pods need to be exposed as service. Now I would like to know that
the frontend Pod(like web pods) must be exposed as Load Balancer service and backend pods (like app or DB pods) must be exposed as ClientIP. And there is no configuration from within the application(Java/Python). My question may be silly but I would like to understand.
In a Two-tier or three tier architecture we will be configuring in the application side. Likewise I am trying to understand the concept here. Thanks in advance!</p>
| Dilly B | <p>To establish communication between the components (frontend, backend and database), I think you need:</p>
<ul>
<li>A Deployment for each component.</li>
<li>A Service of type ClusterIP to establish communication between the backend and the database.</li>
<li>A Service of type ClusterIP to establish communication between the backend and the frontend.</li>
<li>To make your application accessible from the outside, you can use a Service of type NodePort or LoadBalancer.</li>
</ul>
<p>To summarize (see the sketch after this list):</p>
<ul>
<li>A Service of type ClusterIP for communication inside the cluster.</li>
<li>A Service of type NodePort to make your service accessible at the node level.
I hope that makes this part clear for you.</li>
</ul>
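<p>A minimal sketch of the two Service types, assuming Deployments whose pods carry the labels <code>app: backend</code> and <code>app: frontend</code> (names and ports are placeholders):</p>
<pre><code># ClusterIP: only reachable inside the cluster (e.g. frontend to backend, backend to database)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
---
# LoadBalancer: exposes the frontend outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 3000
</code></pre>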
| rassakra |
<p>sudo kubeadm init
I0609 02:20:26.963781 3600 version.go:252] remote version is much newer: v1.21.1; falling back to: stable-1.18
W0609 02:20:27.069495 3600 configset.go:202]</p>
<pre><code>WARNING: kubeadm cannot validate component configs `for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]`
`[init] Using Kubernetes version: v1.18.19`
`[preflight] Running pre-flight checks`
`error execution phase preflight: [preflight] Some fatal errors occurred:`
`[ERROR Port-10259]: Port 10259 is in use`
`[ERROR Port-10257]: Port 10257 is in use`
`[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: `/etc/kubernetes/manifests/kube-apiserver.yaml already exists`
`[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]:` `/etc/kubernetes/manifests/kube-controller-manager.yaml already exists`
`[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]:` /etc/kubernetes/manifests/kube-scheduler.yaml already exists
`[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists`
`[ERROR Port-10250]: Port 10250 is in use`
`[ERROR Port-2379]: Port 2379 is in use`
`[ERROR Port-2380]: Port 2380 is in use`
`[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty`
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
| pandeg87 | <p>Hi and welcome to Stack Overflow.</p>
<p><strong>"Port in use"</strong> means that there's a process running that uses that port. So you need to stop that process. Since you already ran kubeadm init once, it must have already changed a number of things.</p>
<p>First run <strong>kubeadm reset</strong> to undo all of the changes from the first time you ran it.</p>
<p>Then run <strong>systemctl restart kubelet</strong>.</p>
<p>Finally, when you run <strong>kubeadm init</strong> you should no longer get the error.</p>
<p><strong>Even after following the above steps , if you get this error:</strong></p>
<pre><code>[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
</code></pre>
<p>Then, remove the etcd folder (/var/lib/etcd) before you run <strong>kubeadm init</strong>.</p>
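<p>A rough sketch of the full sequence described above (run as root or with sudo):</p>
<pre><code>kubeadm reset                 # undo the changes made by the previous kubeadm init
systemctl restart kubelet
rm -rf /var/lib/etcd          # only if the DirAvailable--var-lib-etcd error persists
kubeadm init
</code></pre>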
<p><strong>Note:</strong></p>
<ol>
<li><p>This <a href="https://www.edureka.co/community/19089/error-while-doing-kubernetes-init-command" rel="nofollow noreferrer">solution</a> worked for other users.</p>
</li>
<li><p>The warning itself is not an issue, it's just warning that kubeadm no longer validates the KubeletConfiguration, KubeProxyConfiguration that it feeds to the kubelet, kube-proxy components.</p>
</li>
</ol>
| Jyothi Kiranmayi |
<p>I am trying to use Google's preferred "Workload Identity" method to enable my GKE app to securely access secrets from Google Secrets.</p>
<p>I've completed the setup and even checked all steps in the Troubleshooting section (<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=sr-ba#troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=sr-ba#troubleshooting</a>) <strong>but I'm still getting the following error in my logs</strong>:</p>
<blockquote>
<p>Unhandled exception. Grpc.Core.RpcException:
Status(StatusCode=PermissionDenied, Detail="Permission
'secretmanager.secrets.list' denied for resource
'projects/my-project' (or it may not exist).")</p>
</blockquote>
<p>I figured the problem was due to the node pool not using the correct service account, so I recreated it, this time specifying the correct service account.</p>
<p>The service account has the following roles added:</p>
<ul>
<li>Cloud Build Service</li>
<li>Account Kubernetes Engine Developer</li>
<li>Container Registry Service Agent</li>
<li>Secret Manager Secret Accessor</li>
<li>Secret Manager Viewer</li>
</ul>
<p>The relevant source code for the package I am using to authenticate is as follows:</p>
<pre><code>var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
var request = new ListSecretsRequest
{
ParentAsProjectName = ProjectName.FromProject(projectName),
};
var secrets = secretManagerServiceClient.ListSecrets(request);
foreach(var secret in secrets)
{
var value = secretManagerServiceClient.AccessSecretVersion($"{secret.Name}/versions/latest");
string secretVal = this.manager.Load(value.Payload);
string configKey = this.manager.GetKey(secret.SecretName);
data.Add(configKey, secretVal);
}
Data = data;
</code></pre>
<p>Ref. <a href="https://github.com/jsukhabut/googledotnet" rel="nofollow noreferrer">https://github.com/jsukhabut/googledotnet</a></p>
<p>Am I missing a step in the process?</p>
<p>Any idea why Google is still saying "Permission 'secretmanager.secrets.list' denied for resource 'projects/my-project' (or it may not exist)?"</p>
| user1477388 | <p>Like @sethvargo mentioned in the comments, you need to map the service account to your pod because <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=sr-ba#overview" rel="nofollow noreferrer">Workload Identity</a> doesn’t use the underlying node identity and instead maps a Kubernetes service account to a GCP service account. Everything happens at the per-pod level in Workload identity.</p>
<p>Assign a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to" rel="nofollow noreferrer">Kubernetes service account</a> to the application and configure it to act as a Google service account.</p>
<p>1. Create a GCP service account with the required permissions.</p>
<p>2. Create a Kubernetes service account.</p>
<p>3. Assign the Kubernetes service account permission to impersonate the GCP
service account.</p>
<p>4. Run your workload as the Kubernetes service account (see the command sketch below).</p>
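<p>As a hedged sketch of steps 1-4 (<code>my-project</code>, <code>my-gsa</code>, <code>my-ksa</code> and <code>my-namespace</code> are placeholder names, not values from the question):</p>
<pre><code># allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# annotate the Kubernetes service account so GKE maps it to the GCP service account
kubectl annotate serviceaccount my-ksa \
  --namespace my-namespace \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
</code></pre>
<p>Then set <code>serviceAccountName: my-ksa</code> in the pod spec of your workload.</p>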
<p>Hope you are using project ID instead of project name in the project or secret.</p>
<p>You cannot update the service account of an already created pod.</p>
<p>Refer the <a href="https://dzone.com/articles/enabling-gke-workload-identity" rel="nofollow noreferrer">link</a> to add service account to the pods.</p>
| Jyothi Kiranmayi |
<p>I have multiple openshift routes of type:</p>
<pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: <name>
labels:
app.kubernetes.io/name: <app-name>
spec:
host: <host>
port:
targetPort: <targetPort>
tls:
termination: reencrypt
destinationCACertificate: |-
-----BEGIN CERTIFICATE-----
MIIDejCCAmICCQCNHBN8tj/FwzANBgkqhkiG9w0BAQsFADB/MQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoM
BlNwbHVuazEXMBUGA1UEAwwOU3BsdW5rQ29tbW9uQ0ExITAfBgkqhkiG9w0BCQEW
EnN1cHBvcnRAc3BsdW5rLmNvbTAeFw0xNzAxMzAyMDI2NTRaFw0yNzAxMjgyMDI2
NTRaMH8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNU2FuIEZy
YW5jaXNjbzEPMA0GA1UECgwGU3BsdW5rMRcwFQYDVQQDDA5TcGx1bmtDb21tb25D
QTEhMB8GCSqGSIb3DQEJARYSc3VwcG9ydEBzcGx1bmsuY29tMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzB9ltVEGk73QvPlxXtA0qMW/SLDQlQMFJ/C/
tXRVJdQsmcW4WsaETteeWZh8AgozO1LqOa3I6UmrWLcv4LmUAh/T3iZWXzHLIqFN
WLSVU+2g0Xkn43xSgQEPSvEK1NqZRZv1SWvx3+oGHgu03AZrqTj0HyLujqUDARFX
sRvBPW/VfDkomHj9b8IuK3qOUwQtIOUr+oKx1tM1J7VNN5NflLw9NdHtlfblw0Ys
5xI5Qxu3rcCxkKQuwzdask4iijOIRMAKX28pbakxU9Nk38Ac3PNadgIk0s7R829k
980sqGWkd06+C17OxgjpQbvLOR20FtmQybttUsXGR7Bp07YStwIDAQABMA0GCSqG
SIb3DQEBCwUAA4IBAQCxhQd6KXP2VzK2cwAqdK74bGwl5WnvsyqdPWkdANiKksr4
ZybJZNfdfRso3fA2oK1R8i5Ca8LK3V/UuAsXvG6/ikJtWsJ9jf+eYLou8lS6NVJO
xDN/gxPcHrhToGqi1wfPwDQrNVofZcuQNklcdgZ1+XVuotfTCOXHrRoNmZX+HgkY
gEtPG+r1VwSFowfYqyFXQ5CUeRa3JB7/ObF15WfGUYplbd3wQz/M3PLNKLvz5a1z
LMNXDwN5Pvyb2epyO8LPJu4dGTB4jOGpYLUjG1UUqJo9Oa6D99rv6sId+8qjERtl
ZZc1oaC0PKSzBmq+TpbR27B8Zra3gpoA+gavdRZj
-----END CERTIFICATE-----
to:
kind: Service
name: <ServiceName>
</code></pre>
<p>I want to convert it into a Ingress Object as there are no routes in bare k8s. I see we don't have definition of termination type in Ingress Object, so can anyone recommend what is the optimal way to achieve this same functionality of openshift route using k8s ingress?</p>
<p>Thanks in advance</p>
| Kumud Jain | <p>The option <code>reencrypt</code> is not available in the NGINX ingress controller. The TLS cert for a bare-metal ingress is just stored in a secret, and TLS termination takes place at the controller, which is similar to openshift's edge termination. So it is impossible to achieve TLS termination equivalent to openshift's reencrypt route using bare k8s. You can achieve this using <a href="https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication" rel="nofollow noreferrer">istio</a>. Here is a <a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">tutorial</a> on how to set up Mutual TLS Migration.</p>
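<p>For reference, a minimal sketch of the edge-style equivalent in plain Kubernetes: an Ingress that terminates TLS at the controller using a certificate stored in a Secret (host, names and port are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls        # Secret of type kubernetes.io/tls holding the cert and key
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8080
</code></pre>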
| Mikołaj Głodziak |
<p>I'm running an eks cluster, installed k8s dashboard etc. All works fine, I can login in the UI in</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
</code></pre>
<p>Is there a way for me to pass the token via the url so I won't need a human to do this?
Thanks!</p>
| J. Doe | <p>Based on <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#access-control" rel="nofollow noreferrer">official documentation</a> it is impossible to put your authentication token in URL.</p>
<blockquote>
<p>As of release 1.7 Dashboard supports user authentication based on:</p>
<ul>
<li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#authorization-header" rel="nofollow noreferrer"><code>Authorization: Bearer <token></code></a> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.</li>
<li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#bearer-token" rel="nofollow noreferrer">Bearer Token</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li>
<li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#basic" rel="nofollow noreferrer">Username/password</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li>
<li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#kubeconfig" rel="nofollow noreferrer">Kubeconfig</a> file that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li>
</ul>
</blockquote>
<p>As you can see, only the first option bypasses the Dashboard login view. So, what is Bearer Authentication?</p>
<blockquote>
<p><strong>Bearer authentication</strong> (also called <strong>token authentication</strong>) is an <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication" rel="nofollow noreferrer">HTTP authentication scheme</a> that involves security tokens called bearer tokens. The name “Bearer authentication” can be understood as “give access to the bearer of this token.” The bearer token is a cryptic string, usually generated by the server in response to a login request. The client must send this token in the <code>Authorization</code> header when making requests to protected resources:</p>
</blockquote>
<p>You can find more information about Baerer Authentication <a href="https://swagger.io/docs/specification/authentication/bearer-authentication/" rel="nofollow noreferrer">here</a>.</p>
<p>The question now is how you can include the authentication header in your request. There are many ways to achieve this:</p>
<ul>
<li><code>curl</code> command - example:</li>
</ul>
<pre><code>curl -H "Authorization: Bearer <TOKEN_VALUE>" <https://address-your-dashboard>
</code></pre>
<ul>
<li>Postman application - <a href="https://stackoverflow.com/questions/40539609/how-to-add-authorization-header-in-postman-environment">here</a> is good answer to set up authorization header with screenshots.</li>
<li>reverse proxy - you can be achieve this i.e. by configuring reverse proxy in front of Dashboard. Proxy will be responsible for authentication with identity provider and will pass generated token in request header to Dashboard. Note that Kubernetes API server needs to be configured properly to accept these tokens. You can read more about it <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md#authorization-header" rel="nofollow noreferrer">here</a>. You should know, that this method is potentially insecure due to <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="nofollow noreferrer">Man In The Middle Attack</a> when you are using http.</li>
</ul>
<p>You can also read very good answers to the question <a href="https://stackoverflow.com/questions/46664104/how-to-sign-in-kubernetes-dashboard">how to sign in kubernetes dashboard</a>.</p>
| Mikołaj Głodziak |
<p>There is no clear information about how to make a backup and restore from a regular node like node01 for instance, I mean:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="nofollow noreferrer">Operating etcd clusters for Kubernetes</a> shows information like how to use it and</p>
</li>
<li><p><a href="https://discuss.kubernetes.io/t/etcd-backup-and-restore-management/11019" rel="nofollow noreferrer">ETCD - backup and restore management</a> shows some of the necessary steps.</p>
</li>
</ul>
<p>But how about in the cert exam, you are operating most of the time from a regular node01, the config files are not the same? Can some one elaborate?</p>
<p>Thanks</p>
| user9356263 | <p>It is impossible to back up the cluster from a regular node using etcd. <a href="https://www.veeam.com/blog/backup-kubernetes-master-node.html" rel="nofollow noreferrer">The etcd can only be run on a master node.</a></p>
<p>But you can backup your Kubernetes cluster by command: <code>etcdctl backup</code>. Here you can find completely guide, how to use <a href="https://etcd.io/docs/v2.3/admin_guide/#disaster-recovery" rel="nofollow noreferrer">etcdctl backup command</a>.</p>
<p>Another way is making a <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot" rel="nofollow noreferrer">snapshot</a> of your cluster by command: <code>etcdctl snapshot save</code>.</p>
<p>This command will let you create <strong>incremental backup</strong>.</p>
<blockquote>
<p>Incremental backup of etcd, where full snapshot is taken first and then we apply watch and persist the logs accumulated over certain period to snapshot store. Restore process, restores from the full snapshot, start the embedded etcd and apply the logged events one by one.</p>
</blockquote>
<p>You can find more about incremental backup function <a href="https://github.com/gardener/etcd-backup-restore/issues/2" rel="nofollow noreferrer">here</a>.</p>
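<p>A minimal sketch of the snapshot command, run on a master node; the certificate paths are the typical kubeadm defaults and may differ in your cluster:</p>
<pre><code>ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
</code></pre>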
| Mikołaj Głodziak |
<p>I'm using an if-else statement inside my <strong>deployment.yaml</strong> to determine which key to use from my <strong>values.yaml</strong> file, in the following way:</p>
<pre><code>{{ - if .Values.some_key}}
some_key:
{{ toYaml .Values.some_key| indent 12 }}
{{ else if .Values.global.some_key}}
some_key:
{{ toYaml .Values.global.some_key| indent 12 }}
{{ - end }}
</code></pre>
<p>I got a "Key 'some_key' is duplicated" error from IntelliJ, and was wondering what is the correct way of using the condition in this situation.</p>
| peleg | <p>I was able to remove IntelliJ's errors by installing the Go Template plugin and the Kubernetes plugin for Intellij.</p>
| peleg |
<p>I've set up an nfs server that serves a RMW pv according to the example at <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a></p>
<p>This setup works fine for me in lots of production environments, but in some specific GKE cluster instance, mount stopped working after pods restarted.</p>
<p>From kubelet logs I see the following repeating many times</p>
<blockquote>
<p>Unable to attach or mount volumes for pod "api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)": unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition; skipping pod</p>
</blockquote>
<blockquote>
<p>Error syncing pod 521b43c8-319f-425f-aaa7-e05c08282e8e ("api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)"), skipping: unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition</p>
</blockquote>
<p>Manually mounting the nfs on any of the nodes work just fine: <code>mount -t nfs <service ip>:/ /tmp/mnt</code></p>
<p>How can I further debug the issue? Are there any other logs I could look at besides kubelet?</p>
| Mugen | <p>In case the pod gets kicked out of the node because the mount is too slow, you may see messages like that in logs.</p>
<p>Kubelets even inform about this issue in logs.<br />
<strong>Sample log from Kubelets:</strong><br />
Setting volume ownership for /var/lib/kubelet/pods/c9987636-acbe-4653-8b8d-
aa80fe423597/volumes/kubernetes.io~gce-pd/pvc-fbae0402-b8c7-4bc8-b375-
1060487d730d and fsGroup set. If the volume has a lot of files then setting
volume ownership could be slow, see <a href="https://github.com/kubernetes/kubernetes/issues/69699" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/69699</a></p>
<p><strong>Cause:</strong><br />
The pod.spec.securityContext.fsGroup setting causes kubelet to run chown and chmod on all the files in the volumes mounted for given pod. This can be a very time consuming thing to do in case of big volumes with many files.</p>
<p>By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. From the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">document</a>.</p>
<p><strong>Solution:</strong><br />
You can deal with it in the following ways.</p>
<ol>
<li>Reduce the number of files in the volume.</li>
<li>Stop using the fsGroup setting (see also the ownership-change policy sketch after this list).</li>
</ol>
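<p>For context, a minimal sketch of the setting in question; the <code>fsGroupChangePolicy: "OnRootMismatch"</code> field is described in the linked document and assumes a cluster version that supports it; with it, kubelet skips the recursive chown/chmod when the volume root already has the right ownership:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nfs-client                        # hypothetical name
spec:
  securityContext:
    fsGroup: 2000                         # this is what triggers the recursive ownership change
    fsGroupChangePolicy: "OnRootMismatch" # skip it when the root of the volume already matches
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-mount
      mountPath: /data
  volumes:
  - name: shared-mount
    persistentVolumeClaim:
      claimName: nfs-pvc                  # hypothetical claim name
</code></pre>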
| Chandra Kiran Pasumarti |
<p>I'm trying to follow instructions on this <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">guide</a> but under docker.</p>
<p>I set up a folder with:</p>
<pre><code>.
├── Dockerfile
└── main.py
0 directories, 2 files
</code></pre>
<p><code>main.py</code> is:</p>
<pre><code>#!/usr/bin/env python3
print("Docker is magic!")
</code></pre>
<p>Dockerfile is:</p>
<pre><code>FROM python:latest
COPY main.py /
CMD [ "python", "./main.py" ]
FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-node --type=LoadBalancer --port=38080
minikube start --driver=docker
kubectl get pods
</code></pre>
<p>When I run docker run python-test I see in terminal:</p>
<pre><code>Docker is magic!
</code></pre>
<p>but I don't see the get pods output.</p>
<p>My goal here is to run a simple <code>minikube</code> in the docker that just print the list of the pods. What is wrong here?</p>
| Slava | <p>If you want to use kubernetes inside a docker container, my suggestion is to use k3d.</p>
<blockquote>
<p>k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.</p>
</blockquote>
<p>You can Download , install and use it directly with Docker.
For more information you can follow the official documentation from <a href="https://k3d.io/" rel="nofollow noreferrer">https://k3d.io/</a> .</p>
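<p>A rough sketch of how that looks in practice (the cluster name is a placeholder):</p>
<pre><code># create a k3s cluster that runs inside Docker containers
k3d cluster create mycluster

# k3d merges the new cluster into your kubeconfig, so kubectl works directly
kubectl get pods -A
</code></pre>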
<p>To get the list of pods you don't need to create a k8s cluster inside a docker container.
What you need is a kubeconfig file for an existing k8s cluster:</p>
<pre><code>.
├── Dockerfile
├── config
└── main.py

0 directories, 3 files
</code></pre>
<p>after that :</p>
<pre><code>FROM python:latest
COPY main.py /
CMD [ "python", "./main.py" ]
FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
# COPY does not expand "~", so copy the kubeconfig into root's home explicitly
COPY config /root/.kube/config
# now if you execute kubectl get pods you can get the list of pods
# Example:
RUN kubectl get pods
</code></pre>
<p>to get this file config you can follow this link <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Organizing Cluster Access Using kubeconfig Files</a></p>
<p>I hope that can help you to resolve your issue .</p>
| rassakra |
<p>It may be a vague question but I couldn't find any documentation regarding the same. Does Google cloud platform have provision to integrate with OpsGenie?</p>
<p>Basically we have set up few alerts in GCP for our <code>Kubernetes Cluster monitoring</code> and we want them to be feeded to <code>OpsGenie</code> for Automatic call outs in case of high priority incidents.</p>
<p>Is it possible?</p>
| mikita agrawal | <p>Recapping for better visibility:</p>
<p>OpsGenie supports multiple <a href="https://www.atlassian.com/software/opsgenie/integrations" rel="nofollow noreferrer">tools</a>, including Google Stackdriver.<br />
Instruction on how to integrate it with Stackdriver webhooks can be found <a href="https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-google-stackdriver/" rel="nofollow noreferrer">here</a>.</p>
| Sergiusz |
<p>I have a Kubernetes Cluster with pods autoscalables using Autopilot. Suddenly they stop to autoscale, I'm new at Kubernetes and I don't know exactly what to do or what is supposed to put in the console to show for help.</p>
<p>The pods automatically are Unschedulable and inside the cluster put his state at Pending instead of running and doesn't allow me to enter or interact.</p>
<p>Also I can't delete or stop them at GCP Console. There's no issue regarding memory or insufficient CPU because there's not much server running on it.</p>
<p>The cluster was working as expected before this issue I have.</p>
<pre><code>Namespace: default
Priority: 0
Node: <none>
Labels: app=odoo-service
pod-template-hash=5bd88899d7
Annotations: seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/odoo-cluster-dev-5bd88899d7
Containers:
odoo-service:
Image: us-central1-docker.pkg.dev/adams-dev/adams-odoo/odoo-service:v58
Port: <none>
Host Port: <none>
Limits:
cpu: 2
ephemeral-storage: 1Gi
memory: 8Gi
Requests:
cpu: 2
ephemeral-storage: 1Gi
memory: 8Gi
Environment:
ODOO_HTTP_SOCKET_TIMEOUT: 30
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqh5r (ro)
cloud-sql-proxy:
Image: gcr.io/cloudsql-docker/gce-proxy:1.17
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=adams-dev:us-central1:odoo-test=tcp:5432
Limits:
cpu: 1
ephemeral-storage: 1Gi
memory: 2Gi
Requests:
cpu: 1
ephemeral-storage: 1Gi
memory: 2Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqh5r (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
kube-api-access-zqh5r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 28m (x248 over 3h53m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 4 in backoff after failed scale-up, 2 Insufficient cpu, 2 Insufficient memory
Normal NotTriggerScaleUp 8m1s (x261 over 3h55m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 Insufficient memory, 4 in backoff after failed scale-up, 2 Insufficient cpu
Normal NotTriggerScaleUp 3m (x1646 over 3h56m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 Insufficient cpu, 2 Insufficient memory, 4 in backoff after failed scale-up
Warning FailedScheduling 20s (x168 over 3h56m) gke.io/optimize-utilization-scheduler 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 28m (x250 over 3h56m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 Insufficient memory, 4 in backoff after failed scale-up, 2 Insufficient cpu
Normal NotTriggerScaleUp 8m2s (x300 over 3h55m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 4 in backoff after failed scale-up, 2 Insufficient cpu, 2 Insufficient memory
Warning FailedScheduling 5m21s (x164 over 3h56m) gke.io/optimize-utilization-scheduler 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory.
Normal NotTriggerScaleUp 3m1s (x1616 over 3h55m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 Insufficient cpu, 2 Insufficient memory, 4 in backoff after failed scale-up
</code></pre>
<p>I don't know how much I can debug or fix it.</p>
| Chux Rincon | <p>Pods failed to schedule on any node because none of the nodes have enough CPU or memory available.</p>
<p>Cluster autoscaler tried to scale up, but it backed off after a failed scale-up attempt, which indicates possible issues with scaling up the managed instance groups that are part of the node pool.</p>
<p>Cluster autoscaler tried to scale up, but since the quota limit is reached, no new nodes can be added.</p>
<p>You can't see the Autopilot GKE VMs that are being counted against your quota.</p>
<p>Try creating the Autopilot cluster in another region (some commands to check capacity and quotas are sketched below). If your needs are no longer fulfilled by an Autopilot cluster, then go for a Standard cluster.</p>
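<p>As a hedged sketch of how to confirm this (the region and project are placeholders):</p>
<pre><code># see why the pod is pending
kubectl describe pod <pending-pod-name>

# see how much CPU/memory the existing nodes can actually allocate
kubectl describe nodes | grep -A 5 "Allocatable"

# check the regional Compute Engine quotas (CPUs, addresses, ...)
gcloud compute regions describe us-central1 --project my-project
</code></pre>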
| Chandra Kiran Pasumarti |
<p>I get this error when trying to get ALB logs:</p>
<pre><code>root@b75651fde30e:/apps/tekton/deployment# kubectl logs -f ingress/tekton-dashboard-alb-dev
error: cannot get the logs from *v1.Ingress: selector for *v1.Ingress not implemented
</code></pre>
<p>The load balancer YAML:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tekton-dashboard-alb-dev
namespace: tekton-pipelines
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: tekton-dashboard-alb-dev
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/tags: "Cost=SwiftALK,[email protected],VantaNonProd=true,VantaDescription=ALB Ingress for Tekton Dashboard,VantaContainsUserData=false,VantaUserDataStored=None"
alb.ingress.kubernetes.io/security-groups: sg-034ca9846b81fd721
kubectl.kubernetes.io/last-applied-configuration: ""
spec:
defaultBackend:
service:
name: tekton-dashboard
port:
number: 9097
</code></pre>
<p><strong>Note:</strong> <code>sg-034ca9846b81fd721</code> restricts access to our VPN CIDRs</p>
<p>Ingress is up as revealed from:</p>
<pre><code>root@b75651fde30e:/apps/tekton/deployment# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
tekton-dashboard-alb-dev <none> * tekton-dashboard-alb-dev-81361211.us-east-1.elb.amazonaws.com 80 103m
root@b75651fde30e:/apps/tekton/deployment# kubectl describe ingress/tekton-dashboard-alb-dev
Name: tekton-dashboard-alb-dev
Namespace: tekton-pipelines
Address: tekton-dashboard-alb-dev-81361211.us-east-1.elb.amazonaws.com
Default backend: tekton-dashboard:9097 (172.18.5.248:9097)
Rules:
Host Path Backends
---- ---- --------
* * tekton-dashboard:9097 (172.18.5.248:9097)
Annotations: alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/load-balancer-name: tekton-dashboard-alb-dev
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/security-groups: sg-034ca9846b81fd721
alb.ingress.kubernetes.io/tags:
Cost=SwiftALK,[email protected],VantaNonProd=true,VantaDescription=ALB Ingress for SwifTalk Web Microservices,VantaCon...
alb.ingress.kubernetes.io/target-type: ip
kubernetes.io/ingress.class: alb
Events: <none>
</code></pre>
| Anadi Misra | <p><strong>The error you received means that the logs for your object are not implemented. It looks like you're trying to get logs from the wrong place.</strong></p>
<p>I am not able to reproduce your problem on AWS, but I tried to do it on GCP and the situation was very similar. You cannot get logs from <code>ingress/tekton-dashboard-alb-dev</code>, and this is normal behaviour. If you want to get logs of your ALB, you have to find the appropriate pod and then extract the logs from it. Let me show you how I did it on GCP. The commands are the same, but the pod names will be different.</p>
<p>First I have executed:</p>
<pre><code>kubectl get pods --all-namespaces
</code></pre>
<p>Output:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-controller-57cb5bf694-722ml 1/1 Running 0 18d
-----
and many other not related pods in other namespaces
</code></pre>
<p>You can find directly your pod with command:</p>
<pre><code>kubectl get pods -n ingress-nginx
</code></pre>
<p>Output:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-57cb5bf694-722ml 1/1 Running 0 18d
</code></pre>
<p>Now you can get logs from <code>ingress controller</code> by command:</p>
<pre><code>kubectl logs -n ingress-nginx ingress-nginx-controller-57cb5bf694-722ml
</code></pre>
<p>in your situation:</p>
<pre><code>kubectl logs -n <your namespace> <your ingress controller pod>
</code></pre>
<p>The output should be similar to this:</p>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v0.46.0
Build: 6348dde672588d5495f70ec77257c230dc8da134
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.6
-------------------------------------------------------------------------------
I0923 05:26:20.053561 8 flags.go:208] "Watching for Ingress" class="nginx"
W0923 05:26:20.053753 8 flags.go:213] Ingresses with an empty class will also be processed by this Ingress controller
W0923 05:26:20.054185 8 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0923 05:26:20.054502 8 main.go:241] "Creating API client" host="https://10.16.0.1:443"
I0923 05:26:20.069482 8 main.go:285] "Running in Kubernetes cluster" major="1" minor="20+" git="v1.20.9-gke.1001" state="clean" commit="1fe18c314ed577f6047d2712a9d1c8e498e22381" platform="linux/amd64"
I0923 05:26:20.842645 8 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0923 05:26:20.846132 8 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W0923 05:26:20.849470 8 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I0923 05:26:20.866252 8 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0923 05:26:20.917594 8 nginx.go:254] "Starting NGINX Ingress controller"
I0923 05:26:20.942084 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"42dc476e-3c5c-4cc9-a6a4-266edecb2a4b", APIVersion:"v1", ResourceVersion:"5600", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0923 05:26:22.118459 8 nginx.go:296] "Starting NGINX process"
I0923 05:26:22.118657 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0923 05:26:22.119481 8 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0923 05:26:22.120266 8 controller.go:146] "Configuration changes detected, backend reload required"
I0923 05:26:22.126350 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-57cb5bf694-8c9tn"
I0923 05:26:22.214194 8 controller.go:163] "Backend successfully reloaded"
I0923 05:26:22.214838 8 controller.go:174] "Initial sync, sleeping for 1 second"
I0923 05:26:22.215234 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-57cb5bf694-722ml", UID:"b9672f3c-ecdf-473e-80f5-529bbc5bc4e5", APIVersion:"v1", ResourceVersion:"59016530", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0923 05:27:00.596169 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0923 05:27:00.596305 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-57cb5bf694-722ml"
157.230.143.29 - - [23/Sep/2021:08:28:25 +0000] "GET / HTTP/1.1" 400 248 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/70.0" 165 0.000 [] [] - - - - d47be1e37ea504aca93d59acc7d36a2b
157.230.143.29 - - [23/Sep/2021:08:28:26 +0000] "\x00\xFFK\x00\x00\x00\xE2\x00 \x00\x00\x00\x0E2O\xAAC\xE92g\xC2W'\x17+\x1D\xD9\xC1\xF3,kN\x17\x14" 400 150 "-" "-" 0 0.076 [] [] - - - - c497187f4945f8e9e7fa84d503198e85
157.230.143.29 - - [23/Sep/2021:08:28:26 +0000] "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.138 [] [] - - - - 4067a2d34d0c1f2db7ffbfc143540c1a
167.71.216.70 - - [23/Sep/2021:12:02:23 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xDB\xBBo*K\xAE\x9A&\x8A\x9B)\x1B\xB8\xED3\xB7\xE16N\xEA\xFCS\x22\x14V\xF7}\xC8&ga\xDA\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.300 [] [] - - - - ff6908bb17b0da020331416773b928b5
167.71.216.70 - - [23/Sep/2021:12:02:23 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03a\xBF\xFB\xC1'\x03S\x83D\x5Cn$\xAB\xE1\xA6%\x93G-}\xD1C\xB2\xB0E\x8C\x8F\xA8q-\xF7$\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.307 [] [] - - - - fee3a478240e630e6983c60d1d510f52
66.240.205.34 - - [23/Sep/2021:12:04:11 +0000] "145.ll|'|'|SGFjS2VkX0Q0OTkwNjI3|'|'|WIN-JNAPIER0859|'|'|JNapier|'|'|19-02-01|'|'||'|'|Win 7 Professional SP1 x64|'|'|No|'|'|0.7d|'|'|..|'|'|AA==|'|'|112.inf|'|'|SGFjS2VkDQoxOTIuMTY4LjkyLjIyMjo1NTUyDQpEZXNrdG9wDQpjbGllbnRhLmV4ZQ0KRmFsc2UNCkZhbHNlDQpUcnVlDQpGYWxzZQ==12.act|'|'|AA==" 400 150 "-" "-" 0 0.086 [] [] - - - - 365d42d67e7378359b95c71a8d8ce983
147.182.148.98 - - [23/Sep/2021:12:04:17 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xABA\xF4\xD5\xB7\x95\x85[.v\xDB\xD1\x1B\x04\xE7\xB4\xB8\x92\x82\xEC\xCC\xDDr\xB7/\xBD\x93/\xD0f4\xB3\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.152 [] [] - - - - 858c2ad7535de95c84dd0899708a3801
164.90.203.66 - - [23/Sep/2021:12:08:19 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\x93\x81+_\x95\xFA\xEAj\xA7\x80\x15 \x179\xD7\x92\xAE\xA9i+\x9D`\xA07:\xD2\x22\xB3\xC6\xF3\x22G\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.237 [] [] - - - - 799487dd8ec874532dcfa7dad1c02a27
164.90.203.66 - - [23/Sep/2021:12:08:20 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xB8\x22\xCB>1\xBEM\xD4\x92\x95\xEF\x1C0\xB5&\x1E[\xC5\xC8\x1E2\x07\x1C\x02\xA1<\xD2\xAA\x91F\x00\xC6\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.193 [] [] - - - - 4604513713d4b9fb5a7199b7980fa7f2
164.90.203.66 - - [23/Sep/2021:12:16:10 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03[\x16\x02\x94\x98\x17\xCA\xB5!\xC11@\x08\xD9\x89RE\x970\xC2\xDF\xFF\xEBh\xA0i\x9Ee%.\x07{\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.116 [] [] - - - - 23019f0886a1c30a78092753f6828e74
77.247.108.81 - - [23/Sep/2021:14:52:51 +0000] "GET /admin/config.php HTTP/1.1" 400 248 "-" "python-requests/2.26.0" 164 0.000 [] [] - - - - 04630dbf3d0ff4a4b7138dbc899080e5
209.141.48.211 - - [23/Sep/2021:16:17:46 +0000] "" 400 0 "-" "-" 0 0.057 [] [] - - - - 3c623b242909a99e18178ec10a814d7b
209.141.62.185 - - [23/Sep/2021:18:13:11 +0000] "GET /config/getuser?index=0 HTTP/1.1" 400 248 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 353 0.000 [] [] - - - - 2640cf06912615a7600e814dc893884b
125.64.94.138 - - [23/Sep/2021:19:49:08 +0000] "GET / HTTP/1.0" 400 248 "-" "-" 18 0.000 [] [] - - - - b633636176888bc3b7f6230f691e0724
2021/09/23 19:49:20 [crit] 39#39: *424525 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 125.64.94.138, server: 0.0.0.0:443
125.64.94.138 - - [23/Sep/2021:19:49:21 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 197 0.000 [] [] - - - - ede08c8fb12e8ebaf3adcbd2b7ea5fd5
125.64.94.138 - - [23/Sep/2021:19:49:22 +0000] "GET /robots.txt HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 196 0.000 [] [] - - - - fae50b56a11600abc84078106ba4b008
125.64.94.138 - - [23/Sep/2021:19:49:22 +0000] "GET /.well-known/security.txt HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 210 0.000 [] [] - - - - ad82bcac7d7d6cd9aa2d044d80bb719d
87.251.75.145 - - [23/Sep/2021:21:29:10 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.180 [] [] - - - - 8c2b62bcdf26ac1592202d0940fc30b8
167.71.102.181 - - [23/Sep/2021:21:54:58 +0000] "\x00\x0E8K\xA3\xAAe\xBCn\x14\x1B\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.027 [] [] - - - - 65b8ee37a2c6bf8368843e4db3b90b2a
185.156.72.27 - - [23/Sep/2021:22:03:55 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.139 [] [] - - - - 92c6ad2d71b961bf7de4e345ff69da10
185.156.72.27 - - [23/Sep/2021:22:03:55 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.140 [] [] - - - - fe0424f8ecf9afc1d0154bbca2382d13
34.86.35.21 - - [23/Sep/2021:22:54:41 +0000] "\x16\x03\x01\x00\xE3\x01\x00\x00\xDF\x03\x03\x0F[\xA9\x18\x15\xD3@4\x7F\x7F\x98'\xA9(\x8F\xE7\xCCDd\xF9\xFF`\xE3\xCE\x9At\x05\x97\x05\xB1\xC3}\x00\x00h\xCC\x14\xCC\x13\xC0/\xC0+\xC00\xC0,\xC0\x11\xC0\x07\xC0'\xC0#\xC0\x13\xC0\x09\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 2.039 [] [] - - - - c09d38bf2cd925dac4d9e5d5cb843ece
2021/09/24 02:41:15 [crit] 40#40: *627091 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 184.105.247.252, server: 0.0.0.0:443
61.219.11.151 - - [24/Sep/2021:03:40:51 +0000] "dN\x93\xB9\xE6\xBCl\xB6\x92\x84:\xD7\x03\xF1N\xB9\xC5;\x90\xC2\xC6\xBA\xE1I-\x22\xDDs\xBA\x1FgC:\xB1\xA7\x80+\x00\x00\x00\x00%\xFDK:\xAAW.|J\xB2\xB5\xF5'\xA5l\xD3V(\xB7\x01%(CsK8B\xCE\x9A\xD0z\xC7\x13\xAD" 400 150 "-" "-" 0 0.203 [] [] - - - - 190d00221eefc869b5938ab6380f835a
46.101.155.106 - - [24/Sep/2021:04:56:37 +0000] "HEAD / HTTP/1.0" 400 0 "-" "-" 17 0.000 [] [] - - - - e8c108201c37d7457e4578cf68feacf8
46.101.155.106 - - [24/Sep/2021:04:56:38 +0000] "GET /system_api.php HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 255 0.000 [] [] - - - - b3032f9a9b3f4f367bdee6692daeb05c
46.101.155.106 - - [24/Sep/2021:04:56:39 +0000] "GET /c/version.js HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 253 0.000 [] [] - - - - 9104ab72a0232caf6ff98da57d325144
46.101.155.106 - - [24/Sep/2021:04:56:40 +0000] "GET /streaming/clients_live.php HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 267 0.000 [] [] - - - - 341cbb6cf424b348bf8b788f79373b8d
46.101.155.106 - - [24/Sep/2021:04:56:41 +0000] "GET /stalker_portal/c/version.js HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 268 0.000 [] [] - - - - 9954fd805fa092595057dbf83511bd92
46.101.155.106 - - [24/Sep/2021:04:56:42 +0000] "GET /stream/live.php HTTP/1.1" 400 248 "-" "AlexaMediaPlayer/2.1.4676.0 (Linux;Android 5.1.1) ExoPlayerLib/1.5.9" 209 0.000 [] [] - - - - 3c9409419c1ec59dfc08c10cc3eb6eef
</code></pre>
| Mikołaj Głodziak |
<p>How to fix pop up error <a href="http:///Users/162408.suryadi/Library/Application%20Support/Lens/node_module/lenscloud-lens-extension" rel="nofollow noreferrer">permission denied</a>?</p>
<p>The log:</p>
<pre><code> 50 silly saveTree +-- [email protected]
50 silly saveTree +-- [email protected]
50 silly saveTree `-- [email protected]
51 warn Lens No description
52 warn Lens No repository field.
53 warn Lens No license field.
54 verbose stack Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
55 verbose cwd /Users/162408.suryadi/Library/Application Support/Lens
56 verbose Darwin 20.4.0
57 verbose argv "/Applications/Lens.app/Contents/Frameworks/Lens Helper.app/Contents/MacOS/Lens Helper" "/Applications/Lens.app/Contents/Resources/app.asar/node_modules/npm/bin/npm-cli.js" "install" "--no-audit" "--only=prod" "--prefer-offline" "--no-package-lock"
58 verbose node v14.16.0
59 verbose npm v6.14.13
60 error code EACCES
61 error syscall access
62 error path /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
63 error errno -13
64 error Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
64 error [Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'] {
64 error errno: -13,
64 error code: 'EACCES',
Could not load extensions: npm WARN checkPermissions Missing write access to /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
npm WARN enoent ENOENT: no such file or directory, open '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/untitled folder/package.json'
npm WARN Lens No description
npm WARN Lens No repository field.
npm WARN Lens No license field.
npm ERR! code EACCES
npm ERR! syscall access
npm ERR! path /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
npm ERR! [Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'] {
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'access',
npm ERR! path: '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/162408.suryadi/.npm/_logs/2021-10-21T02_16_36_016Z-debug.log
</code></pre>
| lauwis premium | <p>In your logs you can find the description of your problem:</p>
<blockquote>
<p>It is likely you do not have the permissions to access this file as the current user.</p>
</blockquote>
<p>And one more thing to check:</p>
<blockquote>
<p>If you believe this might be a permissions issue, please double-check the permissions of the file and its containing directories, or try running the command again as root/Administrator.</p>
</blockquote>
<p>The error you got is because you cannot access the resource <code>/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension</code> from the current user. If you want to fix it, you have to change the permissions for this resource or change its owner. It is also possible (since you are using Kubernetes) that you will have to make such a change in the image of the system you are using.</p>
<p>To change the owner of the resource, run (the path is quoted because it contains a space):</p>
<pre><code>sudo chown -R $USER "/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension"
</code></pre>
<p>You can find also many similar problems. In most cases, the only difference will be the different path to the resource. The mode of operation and the solution to the problem remains the same:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/48910876/error-eacces-permission-denied-access-usr-local-lib-node-modules">Error: EACCES: permission denied, access '/usr/local/lib/node_modules'</a></li>
<li><a href="https://stackoverflow.com/questions/52979927/npm-warn-checkpermissions-missing-write-access-to-usr-local-lib-node-modules">npm WARN checkPermissions Missing write access to /usr/local/lib/node_modules</a></li>
<li><a href="https://progressivecoder.com/how-to-easily-fix-missing-write-access-error-npm-install/" rel="nofollow noreferrer">https://progressivecoder.com/how-to-easily-fix-missing-write-access-error-npm-install/</a></li>
<li><a href="https://flaviocopes.com/npm-fix-missing-write-access-error/" rel="nofollow noreferrer">https://flaviocopes.com/npm-fix-missing-write-access-error/</a></li>
</ul>
| Mikołaj Głodziak |
<p>I have a spring boot application which has some end points, ex: product/version</p>
<p>I deployed then the application to the K8s network and expose it out using Ingress as below: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /productApp
backend:
serviceName: productappservice
servicePort: 80
</code></pre>
<p>I added a path to the ingress to have my application running under <code>my-host:8080/productApp/</code></p>
<p>In the <code>Deployment</code> configuration, I need to add the extra <code>context-path</code> to my application using <code>ConfigMap</code> to map with the URL that the ingress is exposing: </p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: product-app
spec:
replicas: 1
template:
metadata:
labels:
app: product-app
spec:
containers:
- name: product-app
image: my-docer-registry/product-app:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
env:
- name: server.servlet.context-path
valueFrom:
configMapKeyRef:
name: product-app-values
key: server.servlet.context-path
---
apiVersion: v1
kind: ConfigMap
metadata:
name: product-app-values
data:
server.servlet.context-path: '/productApp'
</code></pre>
<p>So now my application is running under <code>my-host:8080/productApp/</code> and when I tried to call the API ex: <code>my-host:8080/productApp/product/version</code>, it worked fine.</p>
<p>The Problem came when I work with Swagger. I have swagger under: <code>my-host:8080/productApp/swagger-ui.html</code> and when I tried to call the <code>product/version</code> API, it returned <code>TypeError: Failed to fetch</code> . Then I realized that the host URL that swagger is sending request to is no longer <code>my-host:8080</code>, it was only <code>my-host</code>: </p>
<blockquote>
<p>curl -X GET "<a href="http://my-host/productApp/product/version" rel="noreferrer">http://my-host/productApp/product/version</a>" -H
"accept: application/json;charset=UTF-8"</p>
</blockquote>
<p>and here is the example json: </p>
<pre><code>{
"swagger": "2.0",
"info": {
"version": "0.0.1-SNAPSHOT",
"title": "Product App REST API"
},
"host": "my-host",
"basePath": "/productApp",
"tags": [{
"name": "Product App API",
"description": "REST API of Product App"
},
{
"name": "version API",
"description": "Version information of application and REST API"
}],
"paths": {
.....
}
}
</code></pre>
<p>What I expect here is the host URL should be <code>my-host:8080</code>, instead of only <code>my-host</code>, that's why the request in swagger is failed to sent. </p>
<p>Does anyone have an idea how to fix this problem. Thank you in advanced!</p>
| Ock | <p>I encountered a similar problem when using <code>Istio Ingress</code>. I solved it by using an Nginx proxy and setting the <code>X-Forwarded-Host</code> and <code>X-Forwarded-Port</code> request headers in the Nginx config file to let Swagger know the correct address, as sketched below.</p>
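<p>A minimal sketch of the relevant proxy directives, assuming the Spring Boot service listens on <code>localhost:8080</code> (the upstream address and location are placeholders):</p>
<pre><code>location /productApp/ {
    proxy_pass http://localhost:8080;
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-Port  $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}
</code></pre>
<p>With these headers the backend can see the externally visible host and port; depending on the framework you may also need to enable forwarded-header support (e.g. Spring's <code>server.forward-headers-strategy</code> property).</p>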
| fanshibear |
<p>I'm trying to use the free tier (autopilot mode) to learn k8s on gcp. However I cam across the following <a href="https://stackoverflow.com/questions/63987678/is-it-possible-to-have-google-cloud-kubernetes-cluster-in-the-free-tier">Is it possible to have Google Cloud Kubernetes cluster in the free tier?</a>. However when I checked the link given in the question I could not find the specified limitation <code>f1-micro machines are not supported due to insufficient memory</code>. Is this still valid ? can I use k8s on gcp in the free tier without incurring any cost?</p>
| tmp dev | <p>There is no way to get a free GKE cluster on GCP, but you can get a very cheap one by following the instructions at
<a href="https://github.com/Neutrollized/free-tier-gke" rel="nofollow noreferrer">https://github.com/Neutrollized/free-tier-gke</a>.</p>
<p>Using a combination of GKE's free management tier and a low cost machine type, the cost estimate is less than $5 per month.</p>
<p>More details on what is available as part of the free tier can be found here: <a href="https://cloud.google.com/free" rel="nofollow noreferrer">https://cloud.google.com/free</a>.</p>
<p>Also, regarding the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/min-cpu-platform#limitations" rel="nofollow noreferrer">limitation</a> of using f1-micro machines in GKE: the documentation states that a minimum CPU platform cannot be used with shared-core machine types. Since f1-micro machines are shared-core machine types, the limitation is still valid and they cannot be used.</p>
| Manish Kumar |
<p>I deployed a Rook EdgeFS cluster (stateful set) and created a NFS custom resource, but now I can't find it anywhere in Rancher. If I query the cluster using kubectl, is possible to see that the resource is there:</p>
<pre><code>kubectl get nfs
</code></pre>
<p>Is there a way of make Rancher display these custom resources somewhere in the UI?</p>
| Jackson Mourão | <p>Yes there is. In your cluster overview in Rancher, on the left side menu, go to "More Resource", then in the submenu go to "API", and then in final submenu open "CustomResourceDefinitions".</p>
<p><a href="https://i.stack.imgur.com/VP63l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VP63l.png" alt="CRD in Rancher" /></a></p>
| Luka Devic |
<p>I want to monitor my Kafka deployed using Strimzi operator. I deployed Prometheus operator alongside a few basic monitoring tools using the community chart https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack, then followed the Strimzi documentation https://strimzi.io/docs/operators/latest/deploying.html#assembly-metrics-str to configure the Kafka exporter and other metrics to be exposed. Strimzi examples have pod monitors <code>cluster-operator-metrics</code>, <code>entity-operator-metrics</code> and <code>kafka-resources-metrics</code>. Everything is discovered and scraped successfully except one pod monitor, <code>cluster-operator-metrics</code>, which even now is not showing up in the Prometheus instance.</p>
<p><a href="https://i.stack.imgur.com/JjRYh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JjRYh.png" alt="enter image description here" /></a></p>
<p>I have following architecture:</p>
<ul>
<li>My Strimzi operator is deployed into the <code>strimzi-system</code> namespace and watches namespace <code>A</code></li>
<li>The Kafka brokers, exporters and entity operator are deployed into namespace <code>A</code></li>
<li>The kube-prometheus-stack chart (with the Prometheus operator) is deployed into the <code>monitoring</code> namespace</li>
<li>I have deployed a second Prometheus instance into namespace <code>A</code> to configure a different retention and collect metrics in a separate place. This Prometheus instance successfully discovered the targets for <code>entity-operator-metrics</code> and <code>kafka-resources-metrics</code></li>
<li>The <code>entity-operator-metrics</code> and <code>kafka-resources-metrics</code> pod monitors are deployed to namespace <code>A</code></li>
<li><code>cluster-operator-metrics</code> is deployed to the <code>strimzi-system</code> namespace</li>
</ul>
<p>I want the metrics for <code>cluster-operator-metrics</code> deployed in the <code>strimzi-system</code> namespace to be scraped by the Prometheus deployed in the <code>monitoring</code> namespace.</p>
<p>Here are the manifests I'm using:</p>
<p><strong>Prometheus</strong> instance deployed to <code>monitoring</code> namespace</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
annotations:
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2023-02-06T10:29:49Z"
generation: 7
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/instance: prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 45.0.0
chart: kube-prometheus-stack-45.0.0
heritage: Helm
release: prometheus
name: prometheus-kube-prometheus-prometheus
namespace: monitoring
resourceVersion: "8888421"
uid: 4fbc11f6-5547-45de-ad77-40fbd37d5e1d
spec:
alerting:
alertmanagers:
- apiVersion: v2
name: prometheus-kube-prometheus-alertmanager
namespace: monitoring
pathPrefix: /
port: http-web
enableAdminAPI: false
evaluationInterval: 30s
externalUrl: http://prometheus-kube-prometheus-prometheus.monitoring:9090
hostNetwork: false
image: quay.io/prometheus/prometheus:v2.42.0
listenLocal: false
logFormat: logfmt
logLevel: info
paused: false
podMonitorNamespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: In
values:
- monitoring
- strimzi-system
podMonitorSelector: {}
portName: http-web
probeNamespaceSelector: {}
probeSelector:
matchLabels:
release: prometheus
replicas: 1
retention: 10d
routePrefix: /
ruleNamespaceSelector: {}
ruleSelector:
matchLabels:
release: prometheus
scrapeInterval: 30s
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: prometheus-kube-prometheus-prometheus
serviceMonitorNamespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
serviceMonitorSelector: {}
shards: 1
version: v2.42.0
walCompression: true
</code></pre>
<p><strong>Prometheus</strong> instance deployed to namespace <code>A</code></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
creationTimestamp: "2023-02-06T11:23:09Z"
generation: 7
name: kafka-prometheus
namespace: A
resourceVersion: "8888395"
uid: 43ab949d-4b1f-4ad2-843c-5a22e7ecc1a1
spec:
additionalScrapeConfigs:
key: prometheus-additional.yaml
name: additional-scrape-configs
enableAdminAPI: false
evaluationInterval: 30s
podMonitorSelector:
matchLabels:
app: strimzi
replicas: 1
resources:
requests:
memory: 400Mi
retention: 30d
scrapeInterval: 30s
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: kafka-prometheus-server
serviceMonitorSelector: {}
storage:
volumeClaimTemplate:
spec:
resources:
requests:
storage: 10Gi
storageClassName: standard
</code></pre>
<p><strong>PodMonitor</strong> for <code>entity-operator-metrics</code> in namespace A</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
creationTimestamp: "2023-02-06T11:56:15Z"
generation: 1
labels:
app: strimzi
name: kafka-entity-operator-metrics
namespace: A
resourceVersion: "4290222"
uid: f1d14d38-e1ad-404f-94ba-7ff294923608
spec:
namespaceSelector:
matchNames:
- A
podMetricsEndpoints:
- path: /metrics
port: healthcheck
selector:
matchLabels:
app.kubernetes.io/name: entity-operator
</code></pre>
<p><strong>PodMonitor</strong> for <code>kafka-resource-metrics</code> in namespace A</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
creationTimestamp: "2023-02-06T11:56:15Z"
generation: 1
labels:
app: strimzi
name: kafka-resource-metrics
namespace: A
resourceVersion: "4290228"
uid: 48a4e919-51ab-433d-9dbe-8f0352d7341d
spec:
namespaceSelector:
matchNames:
- A
podMetricsEndpoints:
- path: /metrics
port: tcp-prometheus
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(strimzi_io_.+)
replacement: $1
separator: ;
- action: replace
regex: (.*)
replacement: $1
separator: ;
sourceLabels:
- __meta_kubernetes_namespace
targetLabel: namespace
- action: replace
regex: (.*)
replacement: $1
separator: ;
sourceLabels:
- __meta_kubernetes_pod_name
targetLabel: kubernetes_pod_name
- action: replace
regex: (.*)
replacement: $1
separator: ;
sourceLabels:
- __meta_kubernetes_pod_node_name
targetLabel: node_name
- action: replace
regex: (.*)
replacement: $1
separator: ;
sourceLabels:
- __meta_kubernetes_pod_host_ip
targetLabel: node_ip
selector:
matchExpressions:
- key: strimzi.io/kind
operator: In
values:
- Kafka
- KafkaConnect
</code></pre>
<p><strong>PodMonitor</strong> for <code>cluster-operator-metrics</code> in namespace <code>strimzi-system</code></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
creationTimestamp: "2023-02-12T14:30:33Z"
generation: 1
labels:
app: strimzi
name: kafka-cluster-operator-metrics
namespace: strimzi-system
resourceVersion: "8255294"
uid: 0986ba6a-dc83-4000-91b4-efc6f16462e4
spec:
namespaceSelector:
matchNames:
- strimzi-system
podMetricsEndpoints:
- path: /metrics
port: healthcheck
selector:
matchLabels:
strimzi.io/kind: cluster-operator
</code></pre>
<p>What I have tried:</p>
<ul>
<li>I have checked that the Strimzi cluster operator pod has the target label <code>strimzi.io/kind: cluster-operator</code>; here is part of the pod manifest of the Strimzi operator</li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2023-01-31T16:22:36Z"
generateName: strimzi-cluster-operator-77f9c84bc-
labels:
name: strimzi-cluster-operator
pod-template-hash: 77f9c84bc
strimzi.io/kind: cluster-operator
name: strimzi-cluster-operator-77f9c84bc-h9895
namespace: strimzi-system
</code></pre>
<ul>
<li>Also, I have checked that the Prometheus service account deployed in <code>monitoring</code> has a cluster role with permission to access the metrics endpoint:</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2023-02-06T10:29:46Z"
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/instance: prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 45.0.0
chart: kube-prometheus-stack-45.0.0
heritage: Helm
release: prometheus
name: prometheus-kube-prometheus-prometheus
resourceVersion: "8223589"
uid: f13d2445-b185-4b75-aa92-ba97b7393da0
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- nonResourceURLs:
- /metrics
- /metrics/cadvisor
verbs:
- get
</code></pre>
<ul>
<li>I verified that the Strimzi operator pod returns metrics on the <code>GET /metrics</code> endpoint</li>
<li>Tried to deploy the PodMonitor into both the <code>monitoring</code> and <code>A</code> namespaces</li>
<li>Tried to disable any restrictions on discovering pod monitors in the Prometheus instance in namespace <code>monitoring</code>, following this answer: <a href="https://stackoverflow.com/questions/60706343/prometheus-operator-enable-monitoring-for-everything-in-all-namespaces">prometheus operator - enable monitoring for everything in all namespaces</a>. It discovered the 2 other pod monitors, which worked before, but still didn't find the Strimzi cluster operator pod monitor</li>
</ul>
| Kostya Zgara | <p>If you, like me, came here because your PodMonitor is not working in microk8s: it is because of the chart default that the previous answer mentions,</p>
<pre><code>## If true, a nil or {} value for prometheus.prometheusSpec.podMonitorSelector will cause the
## prometheus resource to be created with selectors based on values in the helm deployment,
## which will also match the podmonitors created
##
podMonitorSelectorNilUsesHelmValues: true
</code></pre>
<p>This will generate a Prometheus manifest with the following selector:</p>
<pre><code>podMonitorSelector:
matchLabels:
release: "kube-prom-stack"
</code></pre>
<p>Because of this, you can easily work around it by creating your PodMonitor in any namespace; just give it the Helm release label <strong>release: kube-prom-stack</strong>:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: my-pod-monitor
namespace: default
labels:
name: my-pod-monitor
release: kube-prom-stack
spec:
selector:
matchLabels:
app.kubernetes.io/enable-actuator-prometheus: "true"
namespaceSelector:
matchNames:
- default
podMetricsEndpoints:
- port: http-management
path: '/actuator/prometheus'
</code></pre>
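<p>Alternatively, if you install the chart yourself and can change its values, a sketch of the other way around this default is to disable the Helm-values-based selector so that any PodMonitor is picked up regardless of its labels:</p>
<pre><code># values.yaml override for the kube-prometheus-stack chart
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
</code></pre>
<p>applied with something like <code>helm upgrade kube-prom-stack prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml</code> (the release and namespace names here follow this example and are assumptions).</p>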
| Leonard Broman |
<p>We are going to connect Azure DevOps to Kubernetes on a bare-metal server with Rocky Linux 9 installed. The connection between the server and Azure DevOps is already done; now we face the challenge of getting Azure DevOps and Kubernetes connected. Does somebody have an idea how we can establish the connection between k8s and Azure DevOps?</p>
| Murat | <p>On Azure DevOps, you can set up a <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#kubernetes-service-connection" rel="nofollow noreferrer"><strong>Kubernetes service connection</strong></a> to your Kubernetes cluster.</p>
<p>Navigate to "<strong>Project Settings</strong>" > "<strong>Service connections</strong>" > "<strong>New service connection</strong>" button > select "<strong>Kubernetes</strong>".</p>
<p>Since your Kubernetes cluster is hosted on your on-premises server, you can select "<strong><code>KubeConfig</code></strong>" or "<strong><code>Service Account</code></strong>" as the <strong>Authentication method</strong>. Then provide the required values following the notes in the new service connection window.</p>
<p><a href="https://i.stack.imgur.com/om0LV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/om0LV.png" alt="enter image description here" /></a></p>
<p>After the <strong>Kubernetes service connection</strong> is created successfully, you can use it in your pipelines by referencing its name to access the Kubernetes resources from Azure DevOps.</p>
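<p>Once the connection exists, a pipeline references it by name. A minimal sketch using the <code>KubernetesManifest</code> task (the connection name, namespace and manifest path are placeholders):</p>
<pre><code># azure-pipelines.yml
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: KubernetesManifest@1
  inputs:
    action: deploy
    connectionType: kubernetesServiceConnection
    kubernetesServiceConnection: 'my-onprem-k8s'   # the service connection name
    namespace: 'default'
    manifests: 'manifests/deployment.yaml'
</code></pre>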
| Bright Ran-MSFT |