Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I'm running into an issue where our nginx-ingress is no longer working properly. We had an issue where one of the worker nodes went down and it ended up evicting a number of ingress pods. We created a new worker node and the pods were recreated with no errors.
The error I was getting was the following when I inspected the logs for the nginx-ingress pods:</p>
<pre><code>W0731 18:11:20.640729 1 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
E0731 18:11:26.533749 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0731 18:11:31.778992 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:571: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
E0731 18:11:51.190680 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:572: Failed to watch *v1.VirtualServerRoute: failed to list *v1.VirtualServerRoute: the server could not find the requested resource (get virtualserverroutes.k8s.nginx.org)
E0731 18:11:59.259723 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:574: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E0731 18:12:11.070691 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:573: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E0731 18:12:26.943999 1 reflector.go:138] /home/runner/work/kubernetes-ingress/kubernetes-ingress/internal/k8s/controller.go:571: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
</code></pre>
<p>Our current ingress looks something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-ingress
namespace: dev-namespace
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
nginx.ingress.kubernetes.io/proxy-send-timeout: "900"
nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
nginx.ingress.kubernetes.io/proxy-buffers: "4 256k"
nginx.ingress.kubernetes.io/server-snippet: client_header_buffer_size 256k
nginx.ingress.kubernetes.io/server-snippet: large_client_header_buffers 4 256k;
nginx.org/websocket-services: "contoso-service,sbase-service"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_http_version 1.1;
proxy_set_header Upgrade "websocket";
proxy_set_header Connection "Upgrade";
spec:
tls:
- secretName: tls2023
hosts:
- sbasedev.net
- contosodev.net
rules:
- host: sbasedev.net
http:
paths:
- backend:
serviceName: sbase-service
servicePort: 80
</code></pre>
<p>For context, the Kubernetes version we are running is v1.12.0. When I checked the nginx version inside the pods, the version running is 1.19.6.</p>
<p>I tried deleting and recreating the pods, however the same error occurred. This configuration was confirmed working until the nginx pods were evicted and recreated.</p>
| Derek Chen | <p>Upon checking the error messages, it seems like your Ingress resources are using the deprecated API version <a href="http://networking.k8s.io/v1beta1" rel="nofollow noreferrer">networking.k8s.io/v1beta1</a>, which was deprecated in v1.19, and the Ingress Controller is unable to find the CRDs that it needs. You should update your Ingress resources to use the <a href="http://networking.k8s.io/v1" rel="nofollow noreferrer">networking.k8s.io/v1</a> API version and install the missing CRDs. Here are some documents that can be helpful in your use case. [1][2][3]</p>
<p>[1] <a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/</a></p>
<p>[2] <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></p>
<p>[3] <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/</a></p>
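<p>For illustration, here is a hedged sketch of how the Ingress from the question could look after migrating to the networking.k8s.io/v1 schema. The host, service, and secret names are taken from the question; the path and pathType values are assumptions, and the remaining nginx annotations would stay in metadata as before:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - sbasedev.net
    - contosodev.net
    secretName: tls2023
  rules:
  - host: sbasedev.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sbase-service
            port:
              number: 80
</code></pre>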
| Ray John Navarro |
<p>Which service assigns the nameservers under /etc/resolv.conf of pods? Generally it should pick them up from the host's /etc/resolv.conf, but I'm seeing different nameservers under /etc/resolv.conf of my pods. Is there any configuration on Kubernetes (kube-dns) which I can set so that the pods' /etc/resolv.conf has 8.8.8.8?</p>
| venkatesh pakanati | <p>I had the same issue with Jenkins deployed on Kubernetes. If you don't specify a nameserver then <em>/etc/resolv.conf</em> shows the default nameserver (the cluster DNS IP).
I solved this by modifying the deployment file with</p>
<pre><code> dnsPolicy: "None"
dnsConfig:
nameservers:
- 8.8.8.8
</code></pre>
<p>and applying it.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-vol
mountPath: /var/jenkins_vol
dnsPolicy: "None"
dnsConfig:
nameservers:
- 8.8.8.8
volumes:
- name: jenkins-vol
emptyDir: {}
</code></pre>
| Mithlaj |
<p>I have installed microk8s, traefik and cert-manager. When I try to receive a letsencrypt certificate, a new pod for answering the challenge is created, but the request from the letsencryt server does not reach this pod. Instead, the request is forwarded to the pod that serves the website.</p>
<p>It looks like the ingressroute routing the traffic to the web pod has higher priority than the ingress that routes the <code>/.well-known/acme-challenge/...</code> requests to the correct pod. What am I missing?</p>
<p><code>kubectl edit clusterissuer letsencrypt-prod</code>:</p>
<pre><code>kind: ClusterIssuer
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cert-manager.io/v1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"[email protected]","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"traefik"}}}]}}}
creationTimestamp: "2022-07-11T14:32:15Z"
generation: 11
name: letsencrypt-prod
resourceVersion: "49979842"
uid: 40c4e26d-9c94-4cda-aa3a-357491bdb25a
spec:
acme:
email: [email protected]
preferredChain: ""
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress: {}
status:
acme:
lastRegisteredEmail: [email protected]
uri: https://acme-v02.api.letsencrypt.org/acme/acct/627190636
conditions:
- lastTransitionTime: "2022-07-11T14:32:17Z"
message: The ACME account was registered with the ACME server
observedGeneration: 11
reason: ACMEAccountRegistered
status: "True"
type: Ready
</code></pre>
<p><code>kubectl edit ingressroute webspace1-tls</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"traefik.containo.us/v1alpha1","kind":"IngressRoute","metadata":{"annotations":{},"name":"w271a19-tls","namespace":"default"},"spec":{"entryPoints":["websecure"],"routes":[{"kind":"Rule","match":"Host(`test1.mydomain.com`)","middlewares":[{"name":"test-compress"}],"priority":10,"services":[{"name":"w271a19","port":80}]}],"tls":{"secretName":"test1.mydomain.com-tls"}}}
creationTimestamp: "2022-10-05T20:01:38Z"
generation: 7
name: w271a19-tls
namespace: default
resourceVersion: "45151920"
uid: 77e9b7ac-33e7-4810-9baf-579f00e2db6b
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`test1.mydomain.com`)
middlewares:
- name: test-compress
priority: 10
services:
- name: w271a19
port: 80
tls:
secretName: test1.mydomain.com-tls
</code></pre>
<p><code>kubectl edit ingress cm-acme-http-solver-rz9mm</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
creationTimestamp: "2023-03-22T13:00:18Z"
generateName: cm-acme-http-solver-
generation: 1
labels:
acme.cert-manager.io/http-domain: "2306410973"
acme.cert-manager.io/http-token: "1038683769"
acme.cert-manager.io/http01-solver: "true"
name: cm-acme-http-solver-rz9mm
namespace: default
ownerReferences:
- apiVersion: acme.cert-manager.io/v1
blockOwnerDeletion: true
controller: true
kind: Challenge
name: test1.mydomain.com-glnrn-2096762198-4162956557
uid: db8b5c78-8549-4f13-b43d-c6c7bba7468d
resourceVersion: "52806119"
uid: 6b27e02a-ee65-4809-b391-95c03f9ebb36
spec:
ingressClassName: traefik
rules:
- host: test1.mydomain.com
http:
paths:
- backend:
service:
name: cm-acme-http-solver-ll2zr
port:
number: 8089
path: /.well-known/acme-challenge/9qtVY8FjfMIWd_wBNhP3PEPJZo4lFTw8WfWLMucRqAQ
pathType: ImplementationSpecific
status:
loadBalancer: {}
</code></pre>
<p><code>get_cert.yaml</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: test1.mydomain.com
namespace: default
spec:
secretName: test1.mydomain.com-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: test1.mydomain.com
dnsNames:
- test1.mydomain.com
</code></pre>
<p>In the webserver log of the web pod I see the requests to /.well-known... coming in.</p>
| Peter | <p>Shouldn't this annotation be added to the Ingress? The value should match the name of your ClusterIssuer (<code>letsencrypt-prod</code> in this case):</p>
<pre><code>cert-manager.io/cluster-issuer: letsencrypt-prod
</code></pre>
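<p>As a hedged sketch, the annotation would sit on an Ingress (rather than the IngressRoute) covering the host. The metadata name here is illustrative; the host, service, and secret names come from the question:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test1-web            # illustrative name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - test1.mydomain.com
    secretName: test1.mydomain.com-tls
  rules:
  - host: test1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: w271a19
            port:
              number: 80
</code></pre>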
| Marco Brunet |
<p>I am new to k8s and trying to update storageClassName in a StatefulSet (from default to default-t1 is the only change in the YAML).</p>
<p>I tried running <code>kubectl apply -f test.yaml</code></p>
<p>The only difference between the 1st and 2nd YAML (the one used to apply the update) is storageClassName: default-t1 instead of default.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
podManagementPolicy: "Parallel"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: default
resources:
requests:
storage: 1Gi
</code></pre>
<p>Every-time I try to update it I get <code>The StatefulSet "web" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden</code></p>
<p>What am I missing or what steps should I take to do this?</p>
| e.f.a. | <p>There's no easy way as far as I know but it's possible.</p>
<ol>
<li>Save your current StatefulSet configuration to yaml file:</li>
</ol>
<pre><code>kubectl get statefulset some-statefulset -o yaml > statefulset.yaml
</code></pre>
<ol start="2">
<li>Change <code>storageClassName</code> in <code>volumeClaimTemplate</code> section of StatefulSet configuration saved in yaml file</li>
<li>Delete StatefulSet without removing pods using:</li>
</ol>
<pre><code>kubectl delete statefulset some-statefulset --cascade=orphan
</code></pre>
<ol start="4">
<li>Recreate StatefulSet with changed StorageClass:</li>
</ol>
<pre><code>kubectl apply -f statefulset.yaml
</code></pre>
<ol start="5">
<li>For each Pod in StatefulSet, first delete its PVC (it will get stuck in terminating state until you delete the pod) and then delete the Pod itself</li>
</ol>
<p>After deleting each Pod, StatefulSet will recreate a Pod (and since there is no PVC) also a new PVC for that Pod using changed StorageClass defined in StatefulSet.</p>
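<p>For the manifest in the question, step 5 would look roughly like this; PVC names follow the <code>&lt;claim-template&gt;-&lt;statefulset&gt;-&lt;ordinal&gt;</code> pattern, so repeat per replica:</p>
<pre><code>kubectl delete pvc www-web-0
kubectl delete pod web-0
# then the next ordinal
kubectl delete pvc www-web-1
kubectl delete pod web-1
</code></pre>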
| filst |
<p>We are trying to migrate a Java service from Java 11 and Spring Boot 2.7.1 to Java 17 and Spring Boot 3.0.6.</p>
<p>We are creating the Docker image and deploying it to AWS Kubernetes (1.24). With Java 11 and Spring Boot 2.7.1 we were able to trigger email successfully, but after the upgrade we are getting the below error with the SMTP hostname.</p>
<p><strong>Note: if we give the IP address of that SMTP server, the code works fine with Java 17 and Spring Boot 3.0.6.</strong></p>
<p>On the local machine, from the IDE, we were able to trigger an email with the hostname as well as the IP address, on both Java and Spring Boot versions.</p>
<pre><code>12:46:48 DEBUG SMTP: trying to connect to host "company smpt host", port 25, isSSL false
12:46:48 ("timestamp":"07-03-2823 GMT 67:14:19.672","level": "ERROR", "thread":"http-nio-8081-exec- 6", "logger":"com.emailer.service.FreeMarkerEMailSevice", "message":"Email sending failed with execption :
{}","context":"default", "exception":"com.sun.mail.util.MailConnectException: Couldn't connect to host, port: <company.smtpserver.com>,
5000\n\tat com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:2259)\n\tat
com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:754)\n\tat jakarta.mail.Service.connect(Service.java:342)\n\ta org.springframework.mail.javamail.JavaMailSenderImpl.connectTransport (JavaMailSenderImpl.java:518)\n\tat
org.springframework ail.jovanail.JavaMailSenderImpl.doSend(JavaMailSenderImpl.java:437)\n\tat
org.springfra ork.mail.javamail.JavaMailSenderImpl.send(JavaMailSenderImpl.java:361)\n\tat org.springframework.mail.javanail. JavaMailSenderImpl.send(JavaMailSenderImpl.java:356)\n\tat
.
..
...
java.net.UnknownHostException: <company smtp host>\n\tat java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567)\n\tat java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:331)\n\tat java.base/java.net.Socket.connect(Socket.java:630)\n\tat
com.sun.mail.util.WriteTimeoutSocket.connect(WriteTimeoutSocket.java:104)\n\tat
com.sun.mail.util.SocketFetcher.createSocket(SocketFetcher.java:355)
</code></pre>
<p>Application.yaml</p>
<pre><code>spring.mail.host = <company smtpserver>
spring.mail.port= 25
spring.mail.username = xxxxx
spring.mail.password = xxxx
spring.mail.properties.mail.smtp.auth = true;
spring.mail.properties.mail.smtp.starttls.enable = true
spring.mail.properties.mail.transport.protocol = smtp
spring.mail.properties.mail.smtp.timeout = 5000
</code></pre>
<p>Java code:</p>
<pre><code>@Service
public class EmailService{
private JavaMailSender emailSender;
public EmailService(JavaMailSender emailSender){
this.emailSender = emailSender;
}
@Override
public void sendEmail(MailModel mailModel) {
MimeMessage message = emailSender.createMimeMessage();
try{
MimeMessageHelper mimeMessageHelper = new MimeMessageHelper(message,true);
Template template = emailConfig.getTemplate("email4.ftl");
String html = FreeMarkerTemplateUtils.processTemplateIntoString(template, mailModel.getModel());
mimeMessageHelper.setTo(mailModel.getTo());
mimeMessageHelper.setText(html, true);
mimeMessageHelper.setSubject(mailModel.getSubject());
mimeMessageHelper.setFrom(mailModel.getFrom());
emailSender.send(mimeMessageHelper.getMimeMessage());
}catch(Exception e){
log.error("exception occurred {}, e.getCause());
}
}
}
</code></pre>
<p>We tried to debug the code with inetaddress, getting error for that aswell</p>
<pre><code>InetAddress address = null;
try{
address = InetAddress.getByName(hostname);
} catch(UnknownHostException e){
log.error("error occurred while trying to get ipaddress : {}, hostname);
throw new RuntimeException(e);
}
</code></pre>
<p>error stacktrace:</p>
<pre><code>12:45:48 ("timestamp":"87-03-2023 GHT 07:16:35.225","level":"ERROR", "thread":"http-nio-8881-exec-
7","logger":"com.emailer.service.FreeMarkerEMailSevice", "message":"error occurred while trying to get ipaddress <company smtp server> text":"default")
12:46:48 {"timestamp":"07-03-2023 GT 07:16:35.226","level":"ERROR", "thread":"http-nio-8881-exec-
7","logger":"org.apache.catalina.core.ContainerBase. [Tomcat].[localhost].[/].[dispatcherServlet]", "message":"Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: java.lang.RuntimeException: java.net.UnknownHostException <company smtp server>: Try again) with root cause", "context":"default", "exception":"java.net.UnknownHostException: <company smtp server>: Try again\n\ java.base/java.net.InetAddressImpl.lookupAllHostAddr(Native Method)\n\tat
java.base/java.net.InetAddress.PlatformameService.lookupAllHostAddr(InetAddress.java:932)\n\tat
java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1517)\n\tat
java.base/java.net.InetAdd eServiceAddresses.get(InetAddress.java:851)\n\tat
java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1587)\n\tat java.base/java.net.InetAddress.getAllByName(InetAddress.java:1366)\n\t
java.base/java.net.InetAddress.getAllByName(InetAddress.java:1300)\n\tat java.base/java.net.InetAddress.getByName(InetAddress.java:1250)\n\tat
</code></pre>
<p>We tried to compile it into a jar file and run it on one of the AWS VMs with the hostname and Java 17, and we were able to trigger an email successfully.</p>
| Vikneshwar Vikki | <p>You can use the <code>Session</code> class and <code>Transport.send()</code>. Below is my code for reference:</p>
<p>EmailConfig :</p>
<pre><code>@Configuration
public class EmailConfig {
@Value("${spring.mail.username}")
private String noReplyEmailId;
@Value("${spring.mail.password}")
private String noReplyEmailPassword;
@Bean
public Session getSession() {
Properties props = new Properties();
props.put("mail.smtp.starttls.enable", "true");
props.put("mail.smtp.host", "smtp.gmail.com");
props.put("mail.smtp.port", "587");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.required", "true");
props.put("mail.smtp.ssl.protocols", "TLSv1.2");
return Session.getInstance(props, new javax.mail.Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(noReplyEmailId, noReplyEmailPassword);
}
});
}
}
</code></pre>
<p>EmailService :</p>
<pre><code>@Autowired
Session session;
public void sendVerifyEmail() throws MessagingException {
try {
MimeMessage message = new MimeMessage(session);
MimeMessageHelper helper = new MimeMessageHelper(message,
MimeMessageHelper.MULTIPART_MODE_MIXED_RELATED,
StandardCharsets.UTF_8.name());
helper.setFrom("[email protected]");   // placeholder sender
helper.setTo("[email protected]");        // placeholder recipient
helper.setSubject("subject");                 // placeholder subject
helper.setText("body", true);                 // placeholder body (true = HTML)
Transport.send(message);
}catch (Exception e) {
System.out.println("Problem occurred while sending email" + e);
}
}
</code></pre>
| Dhruman Desai |
<p>I have deployed a pod with CPU/memory requests in the spec:</p>
<pre><code>resources:
requests:
cpu: 1
memory: 4Gi
</code></pre>
<p>but the limits look different in the <code>kubectl describe pod</code> output:</p>
<pre><code>Init Containers:
elastic-internal-init-filesystem:
Container ID: docker://174be47d381c76b64ba406d1e1a80c685fca407a926074df32d8dd5689f359f2
Image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:a1dce08d504b22e87adc849c94dcae53f6a0bd12648a4d99d7f9fc07bb2e8a3e
Port: <none>
Host Port: <none>
Command:
bash
-c
/mnt/elastic-internal/scripts/prepare-fs.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 31 May 2023 10:42:34 +1000
Finished: Wed, 31 May 2023 10:42:36 +1000
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
</code></pre>
<p>The <code>describe</code> command returns 100m CPU and 50Mi memory. However, I requested 4Gi memory and 1 CPU. Why doesn't it use the values from the spec file?</p>
<p>The full spec file is:</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elk
spec:
version: 7.15.2
serviceAccountName: docker-sa
http:
tls:
selfSignedCertificate:
disabled: true
nodeSets:
- name: node
count: 2
config:
network.host: 0.0.0.0
xpack.security.enabled: false
# xpack.security.http.ssl.enabled: false
# xpack.security.transport.ssl.enabled: false
podTemplate:
spec:
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
readinessProbe:
exec:
command:
- bash
- -c
- /mnt/elastic-internal/scripts/readiness-probe-script.sh
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 12
successThreshold: 1
timeoutSeconds: 12
env:
- name: READINESS_PROBE_TIMEOUT
value: "120"
resources:
requests:
cpu: 1
memory: 4Gi
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
storageClassName: ebs-sc
resources:
requests:
storage: 512Gi
</code></pre>
| Joey Yi Zhao | <p>Resources are per container, not per pod. Your spec has 2 containers, with resource request only for the second (non-init) container.</p>
<p>Your 'describe' output only shows the init container. Though actually, it doesn't seem to be the init container named in your spec.</p>
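<p>To illustrate the point, a hedged sketch of the <code>podTemplate</code>: each container, init or regular, carries its own <code>resources</code> block. The values on the <code>sysctl</code> init container below are placeholders, and how the operator sizes the init containers it injects itself is a separate matter:</p>
<pre><code>podTemplate:
  spec:
    initContainers:
    - name: sysctl
      resources:            # applies only to this init container
        requests:
          cpu: 100m
          memory: 128Mi
    containers:
    - name: elasticsearch
      resources:            # applies only to the main container
        requests:
          cpu: 1
          memory: 4Gi
</code></pre>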
| Arfur Narf |
<p>I know there are many questions concerning this aspect... but until now I could not find any answers. I tried two images (Apache Solr and Neo4j). I tried different namespaces, ClusterIP, editing /etc/hosts, ingress, tunnel, minikube ip, and all my requests got no response.</p>
<p>I tried these images standalone in Docker and they answer properly... with localhost, 127.0.0.1 and my ethernet IP - in my case 192.168.0.15. I guessed that could be an internal configuration (from Solr, Neo4j) to allow requests only from localhost... but as they replied to calls from the IP address and through a custom domain I set in /etc/hosts, I turned to the Kubernetes configuration.</p>
<p>Below are the following steps and environment:</p>
<pre><code>1) MacOS 10.15 Catalina
2) minikube version: v1.24.0 - commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
3) Kubectl:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
4) Docker:
Client:
Cloud integration: v1.0.22
Version: 20.10.11
API version: 1.41
Go version: go1.16.10
Git commit: dea9396
Built: Thu Nov 18 00:36:09 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.11
API version: 1.41 (minimum version 1.12)
Go version: go1.16.9
Git commit: 847da18
Built: Thu Nov 18 00:35:39 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</code></pre>
<pre class="lang-sh prettyprint-override"><code>minikube start --mount --mount-string="/my/local/path:/analytics" --driver='docker'
kubectl apply -f neo4j-configmap.yaml
kubectl apply -f neo4j-secret.yaml
kubectl apply -f neo4j-volume.yaml
kubectl apply -f neo4j-volume-claim.yaml
kubectl apply -f neo4j.yaml
kubectl apply -f neo4j-service.yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: neo4j-configmap
data:
neo4j-url: neo4j-service
---
apiVersion: v1
kind: Secret
metadata:
name: neo4j-secret
type: Opaque
data:
neo4j-user: bmVvNGoK
neo4j-password: bmVvNGoK
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: neo4j-volume
spec:
storageClassName: hostpath
capacity:
storage: 101Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/analytics/neo4j"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: neo4j-volume-claim
labels:
app: neo4j
spec:
storageClassName: hostpath
volumeMode: Filesystem
volumeName: neo4j-volume
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 101Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neo4j-application
labels:
app: neo4j
spec:
replicas: 1
selector:
matchLabels:
app: neo4j
template:
metadata:
labels:
app: neo4j
spec:
volumes:
- name: neo4j-storage
persistentVolumeClaim:
claimName: neo4j-volume-claim
containers:
- name: neo4j
image: neo4j:4.1.4
ports:
- containerPort: 7474
name: neo4j-7474
- containerPort: 7687
name: neo4j-7687
volumeMounts:
- name: neo4j-storage
mountPath: "/data"
---
apiVersion: v1
kind: Service
metadata:
name: neo4j-service
spec:
type: NodePort
selector:
app: neo4j
ports:
- protocol: TCP
port: 7474
targetPort: neo4j-7474
nodePort: 30001
name: neo4j-port-7474
- protocol: TCP
port: 7687
targetPort: neo4j-7687
nodePort: 30002
name: neo4j-port-7687
</code></pre>
<p>The bash steps were executed in that order. I have each YAML configuration in a separate file; I joined them here as one YAML just to show them.</p>
<p>What part or parts of the setup process or configuration process am I missing?</p>
<p>Below follows the <code>kubectl describe all</code> with only Neo4j. I tried HTTP and HTTPS requests from all possible IPs... I also connected to each pod and performed a curl inside the pod... and got successful responses.</p>
<pre><code>Name: neo4j-application-7757948b98-2pxr2
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Sun, 09 Jan 2022 14:19:32 -0300
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/neo4j-application-7757948b98
Containers:
neo4j:
Container ID: docker://2deda46b3bb15712ff6dde5d2f3493c07b616c2eef3433dec6fe6f0cd6439c5f
Image: neo4j:4.1.4
Image ID: docker-pullable://neo4j@sha256:b1bc8a5c5136f4797dc553c114c0269537c85d3580e610a8e711faacb48eb774
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sun, 09 Jan 2022 14:19:43 -0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5hq9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
kube-api-access-z5hq9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 35m default-scheduler Successfully assigned default/neo4j-application-7757948b98-2pxr2 to minikube
Normal Pulling 35m kubelet Pulling image "neo4j:4.1.4"
Normal Pulled 35m kubelet Successfully pulled image "neo4j:4.1.4" in 3.087215911s
Normal Created 34m kubelet Created container neo4j
Normal Started 34m kubelet Started container neo4j
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.1
IPs: 10.96.0.1
Port: https 443/TCP
TargetPort: 8443/TCP
Endpoints: 192.168.49.2:8443
Session Affinity: None
Events: <none>
Name: neo4j-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=neo4j
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.131.77
IPs: 10.98.131.77
Port: neo4j-port-7474 7474/TCP
TargetPort: neo4j-7474/TCP
NodePort: neo4j-port-7474 30001/TCP
Endpoints: 172.17.0.4:7474
Port: neo4j-port-7687 7687/TCP
TargetPort: neo4j-7687/TCP
NodePort: neo4j-port-7687 30002/TCP
Endpoints: 172.17.0.4:7687
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: neo4j-application
Namespace: default
CreationTimestamp: Sun, 09 Jan 2022 14:19:27 -0300
Labels: app=neo4j
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=neo4j
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=neo4j
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: neo4j-application-7757948b98 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 35m deployment-controller Scaled up replica set neo4j-application-7757948b98 to 1
Name: neo4j-application-7757948b98
Namespace: default
Selector: app=neo4j,pod-template-hash=7757948b98
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/neo4j-application
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=neo4j
pod-template-hash=7757948b98
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 35m replicaset-controller Created pod: neo4j-application-7757948b98-2pxr2
</code></pre>
| lordshark | <p>As mentioned in comments and in this <a href="https://stackoverflow.com/questions/40767164/expose-port-in-minikube">post</a>, the way you would expose an app running in minikube via NodePort is by running the command:</p>
<p><code>minikube service <SERVICE_NAME> --url</code></p>
<p>Which prints out a URL you can paste into your browser.
You also mentioned:</p>
<blockquote>
<p>With the url fro minikube service I could reach the endpoint! <code>π Starting tunnel for service neo4j-service. http://127.0.0.1:49523 and http://127.0.0.1:49524.</code> But considering the domain of the application... What should I do with NodePort 30001? What is the correct way to configure a kubernetes node?</p>
</blockquote>
<p>The output you pasted is correct, you are getting a successful response. As for the NodePort: minikube maps this port to the URL that you get when running the command mentioned before. Read more on accessing apps running in minikube <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">here</a>.</p>
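<p>For the Service from the manifest in the question, that would be:</p>
<pre><code>minikube service neo4j-service --url
</code></pre>
<p>The output typically contains one tunnelled URL per exposed port, which matches the two URLs you already saw for 30001/7474 and 30002/7687.</p>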
| mdobrucki |
<p>I have a Spark executor pod, and when it goes to OOMKilled status, I want to alert on it. I am exporting Spark metrics using Prometheus to Grafana.</p>
<p>I have tried some queries to</p>
<pre><code>kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}
kube_pod_container_status_terminated_reason{reason="OOMKilled"}
</code></pre>
<p>They don't seem to give proper results. I am cross-checking the results using Humio logs, which log the OOMKilled events properly.</p>
<pre><code>container_memory_failures_total{pod="<<pod_name>>"}
</code></pre>
<p>Even this is not able to capture the OOMKilled problems in a way that is in sync with the Humio logs. Is there any other proper metric to catch OOMKilled?</p>
| user9920500 | <p>As far as I know, there are two metrics which allow you to monitor OOM.
The first one is used for tracking the OOMKilled status of your main process/PID. If it breaches the limit, the pod will be restarted with this status.</p>
<pre><code>kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}
</code></pre>
<p>The second one gathers the total count of OOM events inside the container. Every time a child process or another process breaches the RAM limit, it is simply killed and the metric counter is increased, but the container keeps working as usual.</p>
<pre><code>container_oom_events_total
</code></pre>
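<p>A hedged alerting sketch based on that second metric; the label selectors are placeholders for your Spark executor pods:</p>
<pre><code>increase(container_oom_events_total{namespace="spark", pod=~".*exec.*"}[5m]) > 0
</code></pre>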
| Organ2 |
<p>Is it correct to say that scaffolding in a Helm Chart is a Chart based on another Chart with your own customization?</p>
<p>If that's correct, why don't we just customize values.yaml instead? And if we need to change the configuration, why don't we write a new Chart instead?</p>
<p>What are the use cases for creating a scaffolding?</p>
| lzy917 | <p><code>helm create xyz</code> just builds you a starting structure in which you can develop the chart needed for your <code>xyz</code> project. It has no relationship to any existing chart.</p>
<p>The use-case is 'programmer who prefers not to start with a blank sheet of paper'.</p>
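<p>For reference, <code>helm create xyz</code> generates roughly this layout (the exact set of template files may vary between Helm versions):</p>
<pre><code>xyz/
  Chart.yaml
  values.yaml
  .helmignore
  charts/
  templates/
    NOTES.txt
    _helpers.tpl
    deployment.yaml
    hpa.yaml
    ingress.yaml
    service.yaml
    serviceaccount.yaml
    tests/
</code></pre>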
| Arfur Narf |
<p>The following example would expose the services externally. So why is <code>NodePort</code>/<code>LB</code> allowed in this context, wouldn't that be redundant?</p>
<pre><code> rules:
- host: lab.example.com
http:
paths:
- path: /service-root
backend:
serviceName: clusterip-svc
servicePort: 8080
- path: /service-one
backend:
serviceName: nodeport-svc
servicePort: 8080
- path: /service-two
backend:
serviceName: headless-svc
servicePort: 8080
</code></pre>
<p>Is there any particular advantage of using <code>NodePort</code>, <code>ClusterIP</code>, <code>LoadBalancer</code>, or <code>Headless</code> as back-end to Ingress?</p>
| x300n | <p>Services are a way to define a logical set of Pods and a policy to access them. Pods are ephemeral resources, so Services make it possible to connect to them regardless of their IP addresses. They usually use selectors to do so. There are different types of Services in Kubernetes and these are the main differences.</p>
<p>Cluster IP is a default type of Service. It exposes the Service on a cluster-internal IP and makes it available only from within the cluster.</p>
<p>NodePort exposes the Service on each Node's IP at a static port. This option also creates ClusterIP Service, to which NodePort routes.</p>
<p>LoadBalancer goes a step further and exposes the Service externally using cloud provider's load balancer. NodePort and ClusterIP resources are created automatically.</p>
<p>Follow <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">this link</a> to get more information about different ServiceTypes.</p>
<p>And there are Headless Services. You would use these when you don't need load-balancing and a single Service IP. You can follow <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">this</a> section in documentation for further clarification.</p>
<p>Answering your question - it depends on your use-case, you might find different advantages using these Services.</p>
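<p>For instance, a minimal ClusterIP Service like the <code>clusterip-svc</code> backend in your example could look like this; the selector label is an assumption:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: clusterip-svc
spec:
  type: ClusterIP        # the default; reachable only from inside the cluster
  selector:
    app: service-root    # assumed pod label
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>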
| mdobrucki |
<p>I'm running my POC Java application inside the EKS cluster. Here are the steps I followed to containerize my application.</p>
<ol>
<li>The Java application source code is checked out onto my local workstation.</li>
<li>I executed the Maven command to generate the package: mvn package -DskipTests</li>
<li>As usual, the mvn package command generates the packages inside the target folder; I can see all the dependencies, including the jar file which has the Selenium jar to run the test cases.</li>
<li>Using the docker build command I created a Docker image out of the source code: docker build -t myapp .</li>
<li>Then I ran the Docker container using the below command and I can see my test cases are executed successfully without any issue.</li>
</ol>
<p>dockerfile</p>
<pre><code>FROM openjdk:8u191-jre-alpine
# Workspace
WORKDIR /usr/share/selenium_docker
# ADD jar files and any other dependencies from HOST
ADD target/selenium-docker.jar selenium-docker.jar
ADD target/selenium-docker-tests.jar selenium-docker-tests.jar
ADD target/libs libs
# add TestNG suite files
ADD duck_search_tests.xml duck_search_tests.xml
ADD saucedemo_tests.xml saucedemo_tests.xml
# run tests using provided browser/hub address/test suite module
ENTRYPOINT java -cp selenium-docker.jar:selenium-docker-tests.jar:libs/* -DBROWSER=$BROWSER -DHUB_HOST=$HUB_HOST org.testng.TestNG $MODULE
</code></pre>
<pre><code>docker run -e HUB_HOST=192.168.1.1 -e MODULE=saucedemo_tests.xml -v /c/Users/ /workspace/delete/JavaSeleniumDocker/output:/usr/share/selenium_docker/test-output localhost/myapp
Jul 17, 2023 4:41:44 AM org.openqa.selenium.remote.DesiredCapabilities chrome
INFO: Using `new ChromeOptions()` is preferred to `DesiredCapabilities.chrome()`
Jul 17, 2023 4:41:51 AM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: W3C
Logging in to Sauce Demo site...
Logged In!
Home page loaded with page header: Products
Added item to cart
Price of item is: $29.99
Logging in to Sauce Demo site...
Logged In!
===============================================
check_price
Total tests run: 2, Failures: 0, Skips: 0
===============================================
</code></pre>
<p>Next I pushed my Docker image to the Docker Hub registry and I'm trying to deploy the image inside the EKS cluster. After the deployment I can see the pod is up and running, and when I run the kubectl logs -f pod command to verify the test case execution I can see the same message as above (Total tests run: 2, Failures: 0, Skips: 0). The problem is that the pod restarts every time the test case execution completes; the pod status keeps going to CrashLoopBackOff and Running again and again.</p>
<p>Here is the K8s YAML file:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sample-app-deployment-deployment
namespace: devops-java-selenium
labels:
app: sample-app-deployment
name: sample-app-deployment
spec:
replicas: 1
selector:
matchLabels:
app: sample-app-deployment
template:
metadata:
labels:
app: sample-app-deployment
name: sample-app-deployment
spec:
containers:
- name: sample-app-deployment
image: myapp
env:
- name: HUB_HOST
value: "10.1.2.3"
- name: BROWSER
value: chrome
- name: MODULE
value:
- name: selenium_grid_host
value: "4444"
volumeMounts:
- mountPath: /usr/share/selenium/test-output
name: devops-caas-java-selenium-pv
volumes:
- name: devops-caas-java-selenium-pv
hostPath:
path: /tmp/palani
</code></pre>
<p>Can someone please help me with this?</p>
| Gowmi | <p>That is the nature of a Deployment. When the pod finishes its task, the pod completes. But the Deployment's purpose is to keep up a certain number of pods (replicas), so it will recreate the pod again and again to fulfill that purpose. This is what leads to your CrashLoopBackOff. I think you should use a Kubernetes Job with restartPolicy: OnFailure for your specific use case.</p>
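<p>A hedged sketch of such a Job, reusing the image, namespace, and environment from your Deployment; the <code>backoffLimit</code> and the <code>MODULE</code> value are assumptions:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sample-app-test-run
  namespace: devops-java-selenium
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: sample-app
        image: myapp
        env:
        - name: HUB_HOST
          value: "10.1.2.3"
        - name: BROWSER
          value: chrome
        - name: MODULE
          value: saucedemo_tests.xml
</code></pre>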
| pwoltschk |
<p>I am trying to update the EKS add-on named "vpc-cni". This plugin does the following:</p>
<p>"The CNI plugin allows Kubernetes Pods to have the same IP address as they do on the VPC network. More specifically, all containers inside the Pod share a network namespace, and they can communicate with each-other using local ports."</p>
<p>I am however getting the following "Conflict" when updating:</p>
<pre><code>Conflicts: ClusterRole.rbac.authorization.k8s.io aws-node - .rules DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image
</code></pre>
<p>I don't really know where to begin remediating this, or even what this conflict error says is conflicting.</p>
<p>Any help appreciated.</p>
| ambe5960 | <p>Workaround: when deploying this addon in the AWS console, click on the "advanced options" in the first panel after specifying the version and the IAM role. At the very bottom is a button that can be selected to override conflicts, and this allows the installation to succeed.</p>
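<p>If you prefer the CLI, the same override is exposed as a flag; the cluster name and version below are placeholders:</p>
<pre><code>aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version <target-version> \
  --resolve-conflicts OVERWRITE
</code></pre>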
| Dave Wagoner |
<p>My k8s pod sometimes gets terminated/crashes and restarts automatically. When I try to see the old log to debug the crash, I get the below error. If the terminated container is not found, how can I get the log and debug the crash in this case?</p>
<p>My pod has only one container inside it.</p>
<pre><code>/home/ravi> sudo kubectl logs -p mypod-766c995f8b-zshgc -n ricplt
Error from server (BadRequest): previous terminated container "container-ricplt-e2term" in pod "mypod-766c995f8b-zshgc" not found
</code></pre>
| myquest9 sh | <p>I recommend you use a tool like Lens to have a GUI that gives you clarity on what is going on in the K8s cluster. Also, you can use <code>kubectl get events</code> to see the history of events along with the age of each event, pod info, etc. It can help you find out why the old container was terminated.</p>
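<p>For example, using the namespace and pod name from your question:</p>
<pre><code>kubectl get events -n ricplt --sort-by=.metadata.creationTimestamp
kubectl describe pod mypod-766c995f8b-zshgc -n ricplt
</code></pre>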
| Suresh Ganesan |
<p>I need some recommendations. I have Redis servers deployed in K8s without password/authentication. I need to add passwords to the Redis servers and, at the same time, make sure that the clients/services using those Redis servers also get the password. Has anyone come across this use case? #redis</p>
| Batman 21 | <p>Hey, for this purpose you should use a <strong>Kubernetes Secret</strong> that you mount into your Redis container. It can also be mounted into other containers.</p>
<p>This would be my approach:</p>
<p>First, you'll need to enable authentication on your Redis servers. This can be done by modifying the <strong>redis.conf</strong>. You can provide a password in the configuration file using the <strong>requirepass</strong> directive.</p>
<p>Once you've configured Redis to use authentication, you'll need to update the Kubernetes deployment to use the new configuration. You can use a <strong>Configmap</strong> containing the <strong>redis.conf</strong> file, including the <strong>requirepass</strong> directive with the password you choose. Then Modify the Redis deployment YAML to mount the ConfigMap containing the updated redis.conf into the Redis Pod.</p>
<p>Next step is to store the Redis authentication password as a Kubernetes Secret and <strong>inject it into the client containers as environment variables or volume mounts</strong>. This way, the Redis clients can access the password securely and use it to authenticate. <em>You can define the secret with kubectl or helm.</em></p>
<p>As last step perform an update of your clients and server Deployments and test out the solution.</p>
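<p>A minimal hedged sketch of the Secret step; the names and the password are placeholders:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: redis-auth
type: Opaque
stringData:
  redis-password: change-me
---
# in the Redis and client container specs:
env:
- name: REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      name: redis-auth
      key: redis-password
</code></pre>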
<p><em>Please comment and I can provide you more detailed code examples. This is just a bit of how I would approach it.</em></p>
| pwoltschk |
<p>On a minikube I have installed KEDA and managed to scale up/down a small service I have created using the Postgres scaler.</p>
<p>After a while, the scaler stopped working, and I don't understand why.</p>
<p>Here's the spec from the ScaledObject yaml:</p>
<pre><code>spec:
minReplicaCount: 0
maxReplicaCount: 5
pollingInterval: 30
cooldownPeriod: 30
scaleTargetRef:
name: demo-service
triggers:
- type: postgresql
metadata:
connection: "postgresql://host.minikube.internal:5432"
userName: "postgres"
passwordFromEnv: demo-service-secret-keda-password
host: "host.minikube.internal"
dbName: "postgres"
sslmode: disable
port: "5432"
query: "select value from keda where id = 1"
targetQueryValue: "3"
</code></pre>
<p>Postgres is running on Docker on the same machine, and here's the result of the query:</p>
<pre><code>postgres=# select value from keda where id = 1; value
-------
2
(1 row)
</code></pre>
<p>Looking at the logs of the Keda pod, I see:</p>
<pre><code>2022-11-07T14:48:59Z ERROR Reconciler error {"controller": "scaledobject", "controllerGroup": "keda.sh", "controllerKind": "ScaledObject", "scaledObject": {"name":"postgres-scaledobject","namespace":"default"}, "namespace": "default", "name": "postgres-scaledobject", "reconcileID": "06cbd2e8-93ac-43a1-8cf0-ac4852eac4be", "error": "HorizontalPodAutoscaler.autoscaling \"keda-hpa-postgres-scaledobject\" is invalid: spec.metrics[0].external.target.averageValue: Invalid value: resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"0\", Format:\"DecimalSI\"}: must be positive"}
</code></pre>
<p>But I don't understand the error, because the value IS positive.
I did set it at one point to a negative value, but I have changed it multiple times since, and I have undeployed the ScaledObject and redeployed it.</p>
<p>I am not sure how to fix this, so any help is welcome.</p>
<p>Thanks.</p>
| D. Joe | <p>Uninstalling and installing KEDA in my cluster helped.</p>
<blockquote>
<p>kubectl delete -f <a href="https://github.com/kedacore/keda/releases/download/v2.9.0/keda-2.9.0.yaml" rel="nofollow noreferrer">https://github.com/kedacore/keda/releases/download/v2.9.0/keda-2.9.0.yaml</a></p>
</blockquote>
<blockquote>
<p>kubectl apply --server-side -f <a href="https://github.com/kedacore/keda/releases/download/v2.9.0/keda-2.9.0.yaml" rel="nofollow noreferrer">https://github.com/kedacore/keda/releases/download/v2.9.0/keda-2.9.0.yaml</a></p>
</blockquote>
<p>I suppose that if you change the target value of your trigger to something invalid, it somehow does not get cleaned up when you delete the ScaledObject.</p>
| Jens Voorpyl |
<p>I am trying to run a MongoDB within a Kubernetes cluster, secured with a keyFile. For this, I created a simple StatefulSet and a ConfigMap, where I stored the keyfile:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongodb
spec:
serviceName: mongodb
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.4
args:
- --bind_ip
- '0.0.0.0,::'
- --replSet
- MySetname01
- --auth
- --keyFile
- /etc/mongodb/keyfile/keyfile
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: MyUsername
- name: MONGO_INITDB_ROOT_PASSWORD
value: MyPassword
ports:
- containerPort: 27017
name: mongodb
volumeMounts:
- name: mongodb-persistent-storage
mountPath: /data/db
- name: mongodb-keyfile
mountPath: /etc/mongodb/keyfile
readOnly: True
volumes:
- name: mongodb-keyfile
configMap:
name: mongodb-keyfile
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb
labels:
app: mongodb
spec:
ports:
- port: 27017
selector:
app: mongodb
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-keyfile
data:
keyfile: |
+PN6gXEU8NeRsyjlWDnTesHCoPOn6uQIEI5pNorDkphREi6RyoSHCIaXOzLrUpPq
jpSGhSc5/MZj17R7K5anjerhvR6f5JtWjBuQcrjdJdNBceck71F2bly/u9ICfCOy
STFzv6foMQJBJTBYqLwtfyEO7CQ9ywodM0K5r9jtT7x5BiJaqso+F8VN/VFtIYpe
vnzKj7uU3GwDbmw6Yduybgv6P88BGXyW3w6HG8VLMgud5aV7wxIIPE6nAcr2nYmM
1BqC7wp8G6uCcMiHx5pONPA5ONYAIF+u3zj2wAthgMe2UeQxx2L2ERx8Zdsa9HLR
qYOmy9XhfolwdCTwwYvqYRO+RqXGoPGczenC/CKJPj14yfkua+0My5NBWvpL/fIB
osu0lQNw1vFu0rcT1/9OcaJHuwFWocec2qBih9tk2C3c7jNMuxkPo3dxjv8J/lex
vN3Et6tK/wDsQo2+8j6uLYkPFQbHZJQzf/oQiekV4RaC6/pejAf9fSAo4zbQXh29
8BIMpRL3fik+hvamjrtS/45yfqGf/Q5DQ7o8foI4HYmhy+SU2+Bxyc0ZLTn659zl
myesNjB6uC9lMWtpjas0XphNy8GvJxfjvz+bckccPUVczxyC3QSEIcVMMH9vhzes
AcQscswhFMgzp1Z0fbNKy0FqQiDy1hUSir06ZZ3xBGLKeIySRsw9D1Pyh1Y11HlH
NdGwF14cLqm53TGVd9gYeIAm2siQYMKm8rEjxmecc3yGgn0B69gtMcBmxr+z3xMU
X256om6l8L2BJjm3W1zUTiZABuKzeNKjhmXQdEFPQvxhubvCinTYs68XL76ZdVdJ
Q909MmllkOXKbAhi/TMdWmpV9nhINUCBrnu3F08jAQ3UkmVb923XZBzcbdPlpuHe
Orp11/f3Dke4x0niqATccidRHf6Hz+ufVkwIrucBZwcHhK4SBY/RU90n233nV06t
JXlBl/4XjWifB7iJi9mxy/66k
</code></pre>
<p>The problem is: MongoDB stays in a CrashLoopBackOff, because the permissions on the keyfile are too open:</p>
<pre><code>{"t":{"$date":"2022-12-19T12:41:41.399+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-12-19T12:41:41.402+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-12-19T12:41:41.402+00:00"},"s":"I", "c":"ACCESS", "id":20254, "ctx":"main","msg":"Read security file failed","attr":{"error":{"code":30,"codeName":"InvalidPath","errmsg":"permissions on /etc/mongodb/keyfile/keyfile are too open"}}}
</code></pre>
<p>For which I don't have an explanation.</p>
<p>I already set the volume mount of the ConfigMap to read-only (you can see it in the Mongo StatefulSet). I also tried using commands and lifecycle hooks to chmod 600/400 the file. I tried different versions of MongoDB, but always got the same error.
To be sure, I also checked whether the ConfigMap is included correctly, and it is (I uncommented the args and username/password for that test).</p>
<p>Permissions are shown:</p>
<pre><code>lrwxrwxrwx 1 root root 14 Dec 19 12:50 keyfile -> ..data/keyfile
</code></pre>
<p>Maybe it's related to the fact that the file is shown as a link?</p>
<p>I expect a Kubernetes YAML which is able to start with a keyfile. Thank you very much.</p>
<hr />
<p>EDIT: I tried to mount the file directly (not as a link) using subPath. Now I get the following permissions:</p>
<p><code>-rw-r--r-- 1 root root 1001 Dec 19 13:34 mongod.key</code></p>
<p>But sadly the db will not start with that one too, it's still crashing with the same error.</p>
<hr />
<p>EDIT2:
Adding <code>defaultMode: 0600</code> to the volume in the statefulset led at least to the correct permissions, but also another error (already mentioned in one of my comments):</p>
<p><code>file: /var/lib/mongo/mongod.key: bad file"</code></p>
<p>So I tried to mount it in different places in the Pod (you see /var/lib/ here, for example) and I tried to include the keyfile as a Secret. But none of it is working.</p>
| MarM25 | <p>If anyone is still looking for a solution to this issue, I found one that does not use the init container.</p>
<p>Working with <strong>k8s 1.26</strong>,
I mounted the <strong>keyfile</strong> as a secret inside the pod with a volume and I set the <code>defaultMode</code> to <code>0o400</code>.</p>
<p>We have to put the <strong>"o"</strong> after the <strong>"0"</strong> so the system recognizes it as read-only for the owner, and we end up with the file having permissions like <code>-r--------</code>.</p>
<p>I mounted the secret at <code>/var/run/secrets/keyfile</code>
using a subPath: <code>./keyfile</code></p>
<p>Here is the <code>statefulset</code> yaml:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongod
namespace: test
spec:
serviceName: mongodb-service
replicas: 1
selector:
matchLabels:
role: mongo
app: mongo
replicaset: rsTest
template:
metadata:
labels:
role: mongo
app: mongo
replicaset: rsTest
spec:
volumes:
- name: mongo-conf
configMap:
name: mongo-conf-cm
- name: mongodb-keyfile
secret:
secretName: mongodb-keyfile
defaultMode: 0o400
containers:
- name: mongod-container
image: mongo:6.0.1
command:
- "numactl"
- "--interleave=all"
- "mongod"
- "--config"
- "/etc/mongo.conf"
resources:
requests:
cpu: 0.2
memory: 200Mi
ports:
- containerPort: 27017
volumeMounts:
- name: mongodb-persistent-storage-claim
mountPath: /data/db
- name: mongo-conf
mountPath: /etc/mongo.conf
subPath: mongo.conf
- name: mongodb-keyfile
mountPath: /var/run/secrets/keyfile
subPath: ./keyfile
readOnly: true
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage-claim
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
</code></pre>
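<p>For completeness, a hedged example of generating the keyfile and creating the referenced Secret from it; the local file name is an assumption and the namespace matches the StatefulSet above:</p>
<pre><code>openssl rand -base64 756 > keyfile
kubectl create secret generic mongodb-keyfile -n test --from-file=keyfile=./keyfile
</code></pre>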
| Flav |
<p>Hi, I have a YAML that seems valid to me, but it does not work when applying it and I can't figure out what is wrong with it.</p>
<p>error:</p>
<pre><code>unknown field "spec.template.spec.volumes[0].PersistentVolumeClaim"
</code></pre>
<p>kubeval deployment.yaml passes with no errors</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-service
namespace: app-1-0
labels:
app: app-service
spec:
replicas: 1
selector:
matchLabels:
app: my-service
template:
metadata:
labels:
app: my-service
spec:
containers:
- name: my-service
image: registry.azurecr.io/app/app-service:1.0
env:
- name: AzureAd__Instance
value: ...
- name: AzureAd__ClientId
value: ...
- name: AzureAd__TenantId
value: ...
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
volumeMounts:
- mountPath: "/mnt/data"
name: cache
dnsPolicy: ClusterFirst
volumes:
- name: cache
PersistentVolumeClaim:
claimName: app-pvc
</code></pre>
| Jester | <p>Note that <code>persistentVolumeClaim</code> should be lowercased and specified as a nested object under volumes.</p>
<pre><code>volumes:
- name: cache
persistentVolumeClaim:
claimName: app-pvc
</code></pre>
| pwoltschk |
<p>The Ingress doesn't get the static IP. The static IP is global.</p>
<p>Ingress manifest:</p>
<pre><code>$ cat stage-api-deleted-com-cert.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.allow-http: "true"
kubernetes.io/ingress.global-static-ip-name: devstaticip
name: stage-api-deleted-com-cert
namespace: default
spec:
defaultBackend:
service:
name: deleted-gateway-service-https
port:
number: 8989
tls:
- hosts:
- dev-api.deleted.com
secretName: dev-api-deleted-com-cert
</code></pre>
<p>kubectl describe ingress/</p>
<pre><code>$ kubectl describe ingress/stage-api-deleted-com-cert
Name: stage-api-deleted-com-cert
Labels: <none>
Namespace: default
Address:
Ingress Class: <none>
Default backend: deleted-gateway-service-https:8989 (<none>)
TLS:
dev-api-deleted-com-cert terminates dev-api.deleted.com
Rules:
Host Path Backends
---- ---- --------
* * deleted-gateway-service-https:8989 (<none>)
Annotations: kubernetes.io/ingress.allow-http: true
kubernetes.io/ingress.global-static-ip-name: devstaticip
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 72s loadbalancer-controller UrlMap "k8s2-um-g4e2azkm-default-stage-api-deleted-com-cert-xkz9puyb" created
Normal Sync 69s loadbalancer-controller TargetProxy "k8s2-tp-g4e2azkm-default-stage-api-deleted-com-cert-xkz9puyb" created
Normal Sync 57s (x3 over 2m22s) loadbalancer-controller Scheduled for sync
Normal Sync 52s loadbalancer-controller ForwardingRule "k8s2-fr-g4e2azkm-default-stage-api-deleted-com-cert-xkz9puyb" created
Warning Sync 2s (x10 over 50s) loadbalancer-controller Error syncing to GCP: error running load balancer syncing routine: loadbalancer g4e2azkm-default-stage-api-deleted-com-cert-xkz9puyb does not exist: googleapi: Error 404: The resource 'projects/smarter-ai-service/global/sslCertificates/k8s2-cr-g4e2azkm-wr44xnxqvebjy33n-e3b0c44298fc1c14' was not found, notFound
</code></pre>
<p>I tried to recreate an IP address several times. Also, I tried to use ingress.regional-static-ip-name instead of ingress.global-static-ip-name. The same result occurred without the TLS option.</p>
| CY83R14N | <p>I'm not absolutely sure, but it looks like the <code>networking.gke.io/static-ip: "devstaticip"</code> annotation helps to get a static ip in my case.</p>
| CY83R14N |
<p>I'm using the following code inside a kubebuilder controller to do a read-before-update for a k8s custom resource. I'm checking if the object exists: if yes, check whether it needs an update; if not, create it. As I need to use this in several places,
I want to ask:</p>
<ul>
<li><p>is there some helper that can reduce this boilerplate
code? Something like a <code>createOrUpdate</code> func</p>
</li>
<li><p>am I doing it right?</p>
</li>
</ul>
<pre><code>if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp); err != nil {
if apierrors.IsNotFound(err) {
// If the object does not exist, create a new one
if err := r.Create(ctx, &eCmp); err != nil {
return ctrl.Result{}, err
}
} else {
// If there was an error other than 'not found', return the error
return ctrl.Result{}, err
}
} else {
// If the object exists, patch it
patch := client.MergeFrom(eCmp.DeepCopy())
if err := r.Patch(ctx, &eCmp, patch); err != nil {
return ctrl.Result{}, err
}
}
</code></pre>
<p><strong>If everything is as recommended, please let me know.</strong>
I also need to do a strategic merge, but the code doesn't support it.</p>
<p>I found the following
<a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate</a>
but I don't want it to be related to a timestamp, just <code>if something was changed -> update it</code> or <code>something doesn't exist -> create it</code></p>
<p><strong>when trying to update the CR with the following code it doesn't work, any idea?</strong></p>
<pre><code>// If the object exists, patch it
patch := client.MergeFrom(eCmp.DeepCopy())
if err := r.Patch(ctx, &eCmp, patch); err != nil {
return ctrl.Result{}, err
}
</code></pre>
| PeterSO | <p>You can use the <code>controllerutil.CreateOrUpdate()</code> function from the sigs.k8s.io/controller-runtime/pkg/controller/controllerutil package to reduce boilerplate code.</p>
<p>Use the <code>controllerutil.CreateOrUpdate()</code> function:</p>
<pre><code>if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error {
    // CreateOrUpdate returns (OperationResult, error), so receive both values
    return r.Patch(ctx, &eCmp, client.MergeFrom(eCmp.DeepCopy()))
}); err != nil {
    return ctrl.Result{}, err
}
</code></pre>
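<p>Note that <code>CreateOrUpdate</code> returns an <code>OperationResult</code> as well as an error (hence the blank identifier above). A more typical usage pattern (just a sketch, with placeholder field names and assuming the same imports as the snippets above) is to set the desired state inside the mutate callback and let the helper decide whether to Create or Update:</p>
<pre><code>op, err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error {
    // Only mutate the in-memory object here; controllerutil compares it
    // before and after the callback and issues Create or Update as needed.
    eCmp.Spec.Foo = desiredFoo // placeholder field and value
    return nil
})
if err != nil {
    return ctrl.Result{}, err
}
// op reports what happened: OperationResultCreated, OperationResultUpdated or OperationResultNone
_ = op
</code></pre>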
<p>For a strategic merge, you can add the strategic merge patch to the callback function to patch the object strategically:</p>
<pre><code> if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error {
     // CreateTwoWayMergePatch works on JSON bytes, so marshal both objects first
     originalJSON, err := json.Marshal(&eCmp)
     if err != nil {
         return err
     }
     modifiedJSON, err := json.Marshal(&newECmp)
     if err != nil {
         return err
     }
     // Create a strategic merge patch
     strategicMergePatch, err := strategicpatch.CreateTwoWayMergePatch(originalJSON, modifiedJSON, &eCmp)
     if err != nil {
         return err
     }
     // Patch the object strategically (client.RawPatch replaces client.ConstantPatch in newer controller-runtime versions)
     return r.Patch(ctx, &eCmp, client.RawPatch(types.StrategicMergePatchType, strategicMergePatch))
 }); err != nil {
     return ctrl.Result{}, err
 }
</code></pre>
| Hari pootar |
<p>I have an application designed to run as a K8s application, and it imports some dependencies (that I don't own) that run <code>exec.Cmd</code>s. This is fine, except I want to capture those logs. For some reason, when I do:</p>
<pre><code>r := bufio.NewReader(os.Stdout)
...
line, err := r.ReadString('\n')
</code></pre>
<p>An error is thrown saying that <code>/dev/stdout</code> is a <code>bad file descriptor</code>. How can this be? Isn't that the standard local destination for console output?</p>
<p><code>kubectl logs</code> seems to be able to capture the output, and more specifically, our central log forwarder is able to capture it as well. But trying to capture logs from the kube API server inside the container that's actually generating those logs seems kinda silly... Is there a better way to do this?</p>
| jayjyli | <p>Generally, <code>stdin</code> is a read-only stream for retrieving input written to your program, while <code>stdout</code> is a write-only stream for sending output written by your program. <em>In other words, nobody can read from /dev/stdout, except Chuck Norris.</em></p>
<p>By default, <code>stdout</code> is "pointing" to your terminal. But it is possible to redirect <code>stdout</code> from your terminal to a file. This redirection is set up before your program is started.</p>
<p>What usually happens, is the following: The container runtime redirects <code>stdout</code> of the process of your container to a file on the node where your container is running (e.g., <code>/var/log/containers/<container-name>-<container-id>.log</code>). When you request logs with <code>kubectl logs</code>, kubectl connects to kube-apiserver, which connects to the kubelet on the node running your container and asks it to send back the content from the log file.</p>
<p>Also take a look at <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a> which explains the various logging design approaches.</p>
<hr />
<p>A solution, which from a security and portability perspective you would definitely NOT implement, is to add a <code>hostPath</code> mount in your container mounting the <code>/var/log/containers</code> directory of your node and to access the container log directly.</p>
<hr />
<p>A proper solution might be to change the command of your image and to write output to <code>stdout</code> of your container and also to a local file within your container. This can be achieved using the <code>tee</code> command. Your application can then read back the log from this file. But keep in mind, that without proper rotation, the log file will grow until your container is terminated.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
containers:
- image: bash:latest
name: log-to-stdout-and-file
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /tmp/test.log'
</code></pre>
<hr />
<p>A little more complex solution would be to replace the log file in the container with a named pipe file created with <code>mkfifo</code>. This avoids the growing file size problem (as long as your application is continuously reading the log from the named pipe file).</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
# the init container creates the fifo in an empty dir mount
initContainers:
- image: bash:latest
name: create-fifo
command:
- bash
- -c
- mkfifo /var/log/myapp/log
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# the actual app uses tee to write the log to stdout and to the fifo
containers:
- image: bash:latest
name: log-to-stdout-and-fifo
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /var/log/myapp/log'
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# this sidecar container is only for testing purposes, it reads the
# content written to the fifo (this is usually done by the app itself)
#- image: bash:latest
# name: log-reader
# command:
# - bash
# - -c
# - cat /var/log/myapp/log
# volumeMounts:
# - name: ed
# mountPath: /var/log/myapp
volumes:
- name: ed
emptyDir: {}
</code></pre>
| Gerald Mayr |
<p>I define a CouchbaseBackupRestore resource for my Kubernetes cluster, managed with Flux and Kustomize (GitOps).
The restore goes well. But when it is finished it starts another restore.
I want it to run only once.
Is it possible to tell kubernetes to not recreate a pod when the first one succeeds?</p>
<p>I have to comment out my CouchbaseBackupRestore resource in my kustomize file when the job succeeds.
But this is not a viable option for production.
BackoffLimit is not a solution either, as I want it to retry if it fails.</p>
<p>Thanks for your help</p>
| Fundhor | <p>Currently, the Operator will garbage collect successful restore jobs. The expected behaviour is that the restore yaml would only be applied when a restore job is required.</p>
<p>This causes repeat creation in workflows like this; however, we've opened a ticket to track this request! <a href="https://issues.couchbase.com/browse/K8S-3121" rel="nofollow noreferrer">https://issues.couchbase.com/browse/K8S-3121</a></p>
| Alex |
<p>I have Kubernetes client version 25.3.0. With this I am able to execute below code</p>
<pre class="lang-py prettyprint-override"><code>import kubernetes
from kubernetes import client
from kubernetes import kubernetes
print('Hello, world!')
</code></pre>
<p>I have a requirement to upgrade the Kubernetes client because of some other vulnerability fixes. Now if I upgrade my Kubernetes client version to the latest one, i.e. 26.1.0 or 27.2.0, and execute the same code above, I see this error:</p>
<pre><code>[root@comfort1 ]# python3 test.py
/usr/local/lib/python3.6/site-packages/google/auth/crypt/_cryptography_rsa.py:22: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography will remove support for Python 3.6.
import cryptography.exceptions
Traceback (most recent call last):
File "test.py", line 4, in <module>
from kubernetes import kubernetes
ImportError: cannot import name 'kubernetes'
[root@comfort1 ]#
</code></pre>
<p>Any idea how I can fix this? I am using Python version 3.6.8</p>
| HotDev | <p>The issue is with the line, <code>from kubernetes import kubernetes</code>. The format <code>from x import y</code> is used to explicitly import a module <code>x.y</code> in package <code>x</code>. It does not seem that the <code>kubernetes</code> package contains a <code>kubernetes.kubernetes</code> module. You can either use <code>import kubernetes</code> and access the module like <code>kubernetes.config</code> or <code>kubernetes.utils</code> OR explicitly import the modules you will use with the following:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config, [...other modules you need]
</code></pre>
<p>The warning about cryptography is a deprecation warning from the <a href="https://pypi.org/project/cryptography/" rel="nofollow noreferrer"><code>cryptography</code></a> package. I had trouble tracking down the exact dependency tree, but it seems that the <a href="https://pypi.org/project/kubernetes/" rel="nofollow noreferrer"><code>kubernetes</code></a> package has an indirect dependency on the <code>cryptography</code> package. The latest release of <code>cryptography</code> no longer supports Python 3.6. It is recommended that you update to a more modern version of Python.</p>
| Joshua Shew |
<p>I currently have something like this in my kubeconfig</p>
<pre><code> exec:
apiVersion: client.authentication.k8s.io/v1
command: PATH_DETERMINED_VIA_BINARY/token_generator
args:
- --ACCESS_TOKEN
interactiveMode: Never
provideClusterInfo: false
</code></pre>
<p>My question is: in the above, the PATH_DETERMINED_VIA_BINARY is obtained by running a binary called <code>tginfo</code> like this</p>
<pre><code>sudo tginfo path token --> This will return a path (like usr/lib/000012/)
</code></pre>
<p>Now this is the path that contains the binary <code>token_generator</code> used in the command above. My question is: how do I call the <code>tginfo</code> binary to obtain the part of the path that will be used in the command?</p>
| James Franco | <p>As I understand the problem,
you want the output of one command to be used as part of another command. In Linux you can use backticks (`) for this, e.g. if you want to loop through the items in a particular location you would do:</p>
<pre><code>for i in `ls`;β¦
</code></pre>
<p>So, in theory, something like this should work: the sudo tginfo command is inside backticks, so it should be evaluated first and its output combined with the rest of the string.</p>
<p>Also look into <code>eval</code> if this doesn't work.</p>
<pre><code> exec:
apiVersion: client.authentication.k8s.io/v1
    command: /bin/sh
    args: ["-c", "`sudo tginfo path token`/token_generator --ACCESS_TOKEN"]
interactiveMode: Never
provideClusterInfo: false
</code></pre>
| user21895277 |
<p>Can someone help?
I am trying to inject a Helm value into a ConfigMap, but it breaks the format. If I use the value directly instead of .Values, it works fine.</p>
<p>What I have:</p>
<pre><code>data:
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Okta
issuer: https://mycompany.okta.com
clientID: {{ .Values.okta.clientID }}
clientSecret: {{ .Values.okta.clientSecret }}
requestedScopes: ["openid", "profile", "email", "groups"]
requestedIDTokenClaims: {"groups": {"essential": true}}
</code></pre>
<p>The result</p>
<pre><code>data:
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: "name: Okta\nissuer: https://mycompany.okta.com\nclientID: myClientId \nclientSecret:
mySecret\nrequestedScopes: [\"openid\", \"profile\",
\"email\", \"groups\"]\nrequestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n"
</code></pre>
| Stargazer | <p>It should work with the values.yaml. It worked for me both ways:</p>
<ol>
<li>using the values in values.yaml</li>
</ol>
<hr />
<p>Values.yaml:</p>
<pre><code>okta:
clientSecret: test1233
clientID: testnew
</code></pre>
<p>configmap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test-config
namespace: default
labels:
app: test
data:
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Okta
issuer: https://mycompany.okta.com
clientID: {{ .Values.okta.clientID }}
clientSecret: {{ .Values.okta.clientSecret }}
requestedScopes: ["openid", "profile", "email", "groups"]
requestedIDTokenClaims: {"groups": {"essential": true}}
</code></pre>
<hr />
<p>Command used:</p>
<pre><code> helm install testchart .\mycharttest --dry-run
</code></pre>
<p>Output:</p>
<pre><code># Source: mycharttest/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: test-config
namespace: default
labels:
app: test
product: test
db: test
data:
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Okta
issuer: https://mycompany.okta.com
clientID: testnew
clientSecret: test1233
requestedScopes: ["openid", "profile", "email", "groups"]
requestedIDTokenClaims: {"groups": {"essential": true}}
</code></pre>
<ol start="2">
<li>using the values in runtime</li>
</ol>
<hr />
<p>Command:</p>
<pre><code> helm install test .\mycharttest --dry-run --set okta.clientID=newclientid --set okta.clientSecret=newsecret
</code></pre>
<p>Output:</p>
<pre><code># Source: mycharttest/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: test-config
namespace: default
labels:
app: test
product: test
db: test
data:
application.instanceLabelKey: argocd.argoproj.io/instance
oidc.config: |
name: Okta
issuer: https://mycompany.okta.com
clientID: newclientid
clientSecret: newsecret
requestedScopes: ["openid", "profile", "email", "groups"]
requestedIDTokenClaims: {"groups": {"essential": true}
</code></pre>
<p>kubernetes version : 1.22
Helm version :
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}</p>
| jins |
<p>I would like to have an environment variable whose value is a JSON string built from the <code>ExternalSecrets</code> variables.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: my-app-external-secret
spec:
refreshInterval: "1h"
secretStoreRef:
name: hashicorp-vault
kind: ClusterSecretStore
target:
creationPolicy: Owner
data:
- secretKey: SECRET_KEY_A
remoteRef:
key: microservice/my-app
property: secret_key_a
- secretKey: SECRET_KEY_B
remoteRef:
key: microservice/my-app
property: secret_key_b
</code></pre>
<p>The output should look like:</p>
<pre class="lang-json prettyprint-override"><code>{
"A": {
"secretKey": "<SECRET_KEY_A>",
"someStaticValue": "ABCDEF"
},
"B": {
"secretKey": "<SECRET_KEY_B>",
"someStaticValue": "FEDCBA"
}
}
</code></pre>
<p>So then I could export it in the <code>Deployment</code> and be able to use it as a JSON string, something that might look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: JSON_SECRETS
valueFrom:
secretKeyRef:
name: my-app-external-secret
key: jsonSecret
</code></pre>
<p>I've tried some stuff but didn't get it to work.
I can use <code>ConfigMap</code> if needed. I'm trying to achieve the following behavior in my app:
Whenever I need to implement a new secret, there is no need to modify the source code, only to wait for the refresh time or restart the pod. The application already knows how to handle the JSON, but I'm unable to create this JSON in Kubernetes. It is a Java application; it doesn't need to be JSON, it can also be a list.</p>
| Johnnes Souza | <p>To build an environment variable in Kubernetes using a JSON string created from ExternalSecrets variables, use Kubernetes Jobs or CronJobs to update the environment variable with the needed JSON data on a regular basis. First, write a shell script that searches ExternalSecrets and creates the required JSON format. Vault CLI or any other tool that can access your ExternalSecrets can be used. <code>generate-json.sh</code> as an example:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
SECRET_KEY_A=$(vault read -field=secret_key_a secret/microservice/my-app)
SECRET_KEY_B=$(vault read -field=secret_key_b secret/microservice/my-app)
JSON="{"
JSON="$JSON \"A\": {\"secretKey\": \"$SECRET_KEY_A\", \"someStaticValue\": \"ABCDEF\"},"
JSON="$JSON \"B\": {\"secretKey\": \"$SECRET_KEY_B\", \"someStaticValue\": \"FEDCBA\"}"
JSON="$JSON }"
echo "$JSON" > /path/to/output/json.json
# publish the JSON as a ConfigMap so other workloads can consume it
kubectl create configmap json-output --from-file=json.json=/path/to/output/json.json \
  --dry-run=client -o yaml | kubectl apply -f -
</code></pre>
<p>You may include the shell script in a ConfigMap and declare it as executable.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: json-generator
data:
generate-json.sh: |
#!/bin/sh
# (Contents of the shell script)
</code></pre>
<p>Make a CronJob that executes the shell script on a regular basis to produce the JSON and save it in a ConfigMap.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1   # batch/v1beta1 only on clusters older than 1.21
kind: CronJob
metadata:
name: json-generator-cron
spec:
schedule: "0 */1 * * *" # Run every 1 hour
jobTemplate:
spec:
template:
spec:
containers:
- name: json-generator
image: your-custom-image:latest # Image with Vault CLI or required tools
command: ["/bin/sh", "/path/to/script/generate-json.sh"]
volumeMounts:
- name: output-volume
mountPath: /path/to/output
volumes:
- name: output-volume
emptyDir: {}
</code></pre>
<p>Create a Pod in your deployment that references the produced JSON from the ConfigMap as an environment variable.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
spec:
template:
spec:
containers:
- name: my-app-container
image: your-app-image:latest
env:
- name: JSON_SECRETS
valueFrom:
configMapKeyRef:
              name: json-output          # the ConfigMap created by the CronJob script
              key: json.json
</code></pre>
<p>With this configuration, the CronJob creates JSON using your ExternalSecrets data on a regular basis and saves it in a ConfigMap. The Pod in your application then refers to this ConfigMap to set the <code>JSON_SECRETS</code> environment variable. When you add or alter secrets in ExternalSecrets, the CronJob will automatically refresh the JSON, and your application will pick up the changes on the next Pod restart or as scheduled. Ascertain that the CronJob's <code>json-generator</code> container has the appropriate rights to access your private store (e.g., Vault) and write to the ConfigMap.</p>
| ooxvyd |
<p>In a Azure AKS kubernetes cluster, after a cluster version upgrade the nodepool nodes, I have a PV that has this node affinity:</p>
<pre><code>Node Affinity:
Required Terms:
Term 0: failure-domain.beta.kubernetes.io/region in [westeurope]
</code></pre>
<p>The nodes don't have the label so the Deployment creates a Pod that cannot be scheduled for the corresponding PVC for this PV. The Pod is never started:</p>
<pre><code> Warning FailedScheduling 15m default-scheduler 0/3 nodes are available: 3 node(s) had volume node affinity conflict. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
</code></pre>
<p>How can I add the label to the node or remove the label from the PV? I have tried to add the label to the node but I get:</p>
<pre><code>Error from server: admission webhook "aks-node-validating-webhook.azmk8s.io" denied the request: (UID: 931bf139-1579-4e96-b164-e4e6e2fdae65) User is trying to add or update a non-changeable system label (failure-domain.beta.kubernetes.io/region:westeurope). This action is denied..
</code></pre>
<p>Is the only solution to backup and restore the PV into a new one that does not have that deprecated label? What would the best process to do it (or any alternative solution)</p>
| icordoba | <p>We had the same problem. How we resolved it:</p>
<ol>
<li>Copy output from "kubectl get pvc" to get the link between the pvc and the pv.</li>
<li>Locate the disk in the Azure portal and create a snapshot of the disk (in the MC_ resource group of the AKS cluster).</li>
<li>Edit the deployment in Kubernetes and set the replica count to 0. Save and see that the pods are stopping and being removed.</li>
<li>Delete the PVC for this pod.</li>
<li>Edit the deployment in Kubernetes and set the replica count to 1. Save and see that a new PVC and a new PV are created.</li>
<li>Edit the deployment again and set the replica count to 0.</li>
<li>Locate the new disk in the Azure portal. Use "kubectl get pvc" to find it.</li>
<li>Delete the new disk in the Azure portal.</li>
<li>Locate the snapshot created in step 2.</li>
<li>Create a new disk based on the snapshot. The new disk should have the same name as the disk deleted in step 8.</li>
<li>Edit the deployment in Kubernetes and set the replica count to 1.
It should now start using the old disk's data with the new PVC and PV.</li>
</ol>
<p>Take a backup of whatever references and disks you can before starting. A rough CLI sketch of the snapshot and disk steps is shown below.</p>
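<p>A rough sketch of those CLI steps (resource group, disk and snapshot names are placeholders):</p>
<pre><code>kubectl get pvc -n my-namespace                   # step 1: see which PV/disk each PVC is bound to
az snapshot create -g MC_myrg_myaks_westeurope \
  -n old-disk-snapshot --source old-disk-name     # step 2: snapshot the original managed disk
az disk create -g MC_myrg_myaks_westeurope \
  -n new-disk-name --source old-disk-snapshot     # step 10: recreate the deleted new disk from the snapshot
</code></pre>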
| user21907706 |
<p>Related to <a href="https://stackoverflow.com/q/66288565/3288890">Duplicated env variable names in pod definition, what is the precedence rule to determine the final value?</a></p>
<p>I have a deployment spec with a repeated env name, see below:</p>
<pre><code>containers:
- name: c1
...
env:
- name: DUP1
value: hello1
- name: DUP1
value: hello2
</code></pre>
<p>Can I expect the second DUP1 key to always be the one set on the pod? e.g. echo $DUP1 == "hello2". It seems to be the case but I can't find validation about it</p>
| Fran | <p>I have recently tested this -- in my case the precedence is reversed. I am not sure why that is, and apparently it is not so for the other responders, but in YAML:</p>
<pre><code>env:
- name: key1
value: value1
- name: key1
value: value2
</code></pre>
<p>is rendering</p>
<pre><code>containers:
- env:
- name: key1
value: value1
</code></pre>
| KBlanko |
<p>I can create a PostgresSQL deployment in Kubernetes with volumes with no problems. The question I have is how to create the database tables.</p>
<p>I can easily exec in the pod and create the tables but I want it to be automatically createded.</p>
<p>I don't want to build it into the Docker image, as I want a generic image.</p>
<p>I have thought about a few options, such as a Kubernetes batch Job that only runs once, but I'm not sure what the best approach is.</p>
<p>Thanks.</p>
| Billy Slater | <p>This may help. Here I have added ConfigMap, Secret, PersistentVolume, PersistentVolumeClaim, and Postgres Deployment YAML. This YAML will automatically create a table named <code>users</code> in the Postgres database inside the Postgres container. Thanks!</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
Postgres_DB: postgresdb
---
apiVersion: v1
kind: Secret
metadata:
name: postgres-secret
stringData:   # plain-text values; Kubernetes stores them base64 encoded under data
Postgres_User: postgresadmin
Postgres_Password: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres-container
image: postgres:latest
imagePullPolicy: "IfNotPresent"
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c","sleep 20 && PGPASSWORD=$POSTGRES_PASSWORD psql -w -d $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE TABLE IF NOT EXISTS users (userid SERIAL PRIMARY KEY,username TEXT,password TEXT,token TEXT,type TEXT);'"]
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: Postgres_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: Postgres_User
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: Postgres_Password
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>If you find this helpful, please mark it as answer.</p>
| Ashwin Singh |
<p>I am trying to send a curl request to a Kubernetes ClusterIP-type service. Is there any way to perform curl requests to the service?</p>
<p>I am deploying an application with a blue/green deployment, so I need to verify whether the blue version is working properly or not, so I decided to send a curl request to the blue version. When I get a 200 status, I will route all traffic to this version.</p>
<p>But the issue now is how to send a curl request to the new version (blue version) of the application.</p>
| Amjed saleel | <p>ClusterIP makes the Service only reachable from within the cluster. This is the default ServiceType. You can read more information about services <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>.</p>
<p>As the command in the first answer doesn't work, I'm posting the working solution:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run tmp-name --rm --image nginx -i --restart=Never -- /bin/bash -c 'curl -s clusterip:port'
</code></pre>
<p>with the above command curl is working fine. You can use a service name instead of cluster IP address.</p>
<p><code>--restart=Never</code> is needed for using curl like this.</p>
<p><code>--rm</code> ensures the Pod is deleted when the shell exits.</p>
<h5>Edited:</h5>
<h3>But if you want to access your ClusterIp service from the host on which you run kubectl, you can use Port Forwarding.</h3>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward service/yourClusterIpServiceName 28015:yourClusterIpPort
</code></pre>
<p>You will get output like this:</p>
<pre class="lang-yaml prettyprint-override"><code>Forwarding from 127.0.0.1:28015 -> yourClusterIpPort
Forwarding from [::1]:28015 -> yourClusterIpPort
</code></pre>
<p>after that you will be able to reach your ClusterIP service using this command:</p>
<pre class="lang-sh prettyprint-override"><code>curl localhost:28015
</code></pre>
<p>More information about port forwarding is on <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">the official documentation page</a>.</p>
| mozello |
<p>I'm having trouble understanding what pod Eviction means mechanically in terms of K8s's actions -- what exactly does K8s do with the pod during eviction?</p>
<p>Specifically, my main question is this:
Under what conditions is an Evicted pod actually deleted from ETCD?
Under what conditions is an Evicted pod just killed without being deleted from the API server?</p>
<p>If I Evict a pod directly using the Eviction API, the pod object is actually deleted.
On the other hand, I've definitely seen pods hang in "Evicted" in the status column after I run "kubectl get pod".</p>
<p>Edit:
Removed follow-up questions about Preemption and OOM-Killing to conform to the guideline of one question per post.
Might post a separate question about OOM management later.</p>
| Dmitri Gekhtman | <p>I'm also confused by this lately. Here are some findings after a while of digging in the source code and the docs.</p>
<p>'Eviction' here actually means two slightly different concepts, which are both documented in the official docs: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction" rel="nofollow noreferrer">Node-Pressure Eviction</a> and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction" rel="nofollow noreferrer">API-Initiated Eviction</a>. They can really be mixed up when we just talk about 'Eviction' because they both do the same thing: evict pods from nodes.</p>
<p>Actually the doc of 'Node-Pressure Eviction' states:</p>
<blockquote>
<p>Node-pressure eviction is not the same as API-initiated eviction.</p>
</blockquote>
<p>The difference between these two is that 'API-Initiated Eviction' is, as the doc said:</p>
<blockquote>
<p>performing a policy-controlled DELETE operation on the Pod.</p>
</blockquote>
<p>So it will eventually delete the object stored in API server if the pod is evicted.</p>
<p>But 'Node-Pressure Eviction' is issued directly by the kubelet and what it does is set the PodPhase in pod's status to 'Failed' and the Reason to 'Evicted'</p>
<blockquote>
<p>During a node-pressure eviction, the kubelet sets the PodPhase for the selected pods to Failed. This terminates the pods.</p>
</blockquote>
<p>This will result in the <code>Evicted</code> pods shown when running <code>kubectl get pod</code>.</p>
<p>So the direct answer to your question is:
If the pod is evicted using the Eviction API, the pod object will be deleted.
If the pod is evicted by kubelet due to node pressure, the pod object will remain and will be in Failed status.</p>
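<p>You can see this difference from the command line; node-pressure-evicted pods stay visible until something deletes them, for example (namespace name is a placeholder):</p>
<pre><code># list pods left behind in the Failed phase (this includes node-pressure evictions)
kubectl get pods --all-namespaces --field-selector=status.phase=Failed

# clean them up in a given namespace
kubectl delete pods -n my-namespace --field-selector=status.phase=Failed
</code></pre>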
| devzbw |
<p>I know that the moment the pod receives a deletion request, it is deleted from the service endpoint and no longer receives the request. However, I'm not sure if the pod can return a response to a request it received just before it was deleted from the service endpoint.
If the pod IP is missing from the service's endpoint, can it still respond to requests?</p>
| HHJ | <p>There are many reasons why Kubernetes might terminate a healthy container (for example, node drain, termination due to lack of resources on the node, rolling update).</p>
<h4>Once Kubernetes has decided to terminate a Pod, a series of events takes place:</h4>
<h4>1 - Pod is set to the "Terminating" state and removed from the endpoints list of all Services</h4>
<p>At this point, the pod stops getting new traffic. Containers running in the pod will not be affected.</p>
<h4>2 - preStop Hook is executed</h4>
<p>The <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details" rel="nofollow noreferrer">preStop Hook</a> is a special command or http request that is sent to the containers in the pod.
If your application doesn't gracefully shut down when receiving a SIGTERM you can use this hook to trigger a graceful shutdown. Most programs gracefully shut down when receiving a SIGTERM, but if you are using third-party code or are managing a system you don't have control over, the preStop hook is a great way to trigger a graceful shutdown without modifying the application.</p>
<h4>3 - SIGTERM signal is sent to the pod</h4>
<p>At this point, Kubernetes will send a SIGTERM signal to the containers in the pod. This signal lets the containers know that they are going to be shut down soon.
Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (like a database connection or WebSocket stream), saving the current state, or anything like that.
Even if you are using the preStop hook, it is important that you test what happens to your application if you send it a SIGTERM signal, so you are not surprised in production!</p>
<h4>4 - Kubernetes waits for a grace period</h4>
<p>At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds. It's important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish.
If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately.
If your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML.</p>
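<p>As an illustration, a preStop hook and a longer grace period can be combined like this (a minimal sketch; image, command and timings are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
  - name: app
    image: my-app:latest
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/app/drain-connections.sh"]
</code></pre>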
<h4>5 - SIGKILL signal is sent to pod, and the pod is removed</h4>
<p>If the containers are still running after the grace period, they are sent the SIGKILL signal and forcibly removed. At this point, all Kubernetes objects are cleaned up as well.</p>
<p>I hope this gives a good idea of the Kubernetes <strong>termination lifecycle</strong> and how to handle a Pod termination <strong>gracefully</strong>.</p>
<p>Based on <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">this article</a>.</p>
| mozello |
<p>The <a href="https://cloud.google.com/kubernetes-engine/pricing#cluster_management_fee_and_free_tier" rel="nofollow noreferrer">documentation</a> says</p>
<blockquote>
<p>The cluster management fee of $0.10 per cluster per hour (charged in 1 second increments) applies to all GKE clusters irrespective of the mode of operation, cluster size or topology.</p>
<p>The GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters. If you only use a single Zonal or Autopilot cluster, this credit will at least cover the complete cost of that cluster each month. Unused free tier credits are not rolled over, and cannot be applied to any other SKUs (for example, they cannot be applied to compute charges, or the cluster fee for Regional clusters).</p>
</blockquote>
<p>On average, I consume $74 a month for a single zonal cluster in Google Kubernetes Engine (GKE). This amount is just within the limit of the free promotional credits offered per billing account. However, if I were to operate two clusters, I would exceed the promotional credit limit halfway through the month and would start incurring charges for the service.</p>
<p>Would creating two separate billing accounts (<strong>but both linked to the same Google payment profile</strong>), allow each account to take advantage of its own set of promotional credits, effectively doubling the amount of free usage available? Is that actually the case?</p>
| Mikolaj | <p>It seems to me your issue requires GCP billing specialists. You may check this link for more details and assistance. [1]</p>
<p>[1]</p>
<p><a href="https://cloud.google.com/support/billing#contact-billing-support" rel="nofollow noreferrer">https://cloud.google.com/support/billing#contact-billing-support</a></p>
| Ray John Navarro |
<p>I'm using minikube version: v1.25.1, win10, k8s version 1.22</p>
<p>There is 1 node, 2 pods on it: <code>main</code> and <code>front</code>, 1 service - <code>svc-main</code>.</p>
<p>I'm trying to exec into front and call main thru service and see some msg confirming connection is ok.</p>
<p>main.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: main
name: main
namespace: default
spec:
containers:
- name: main
image: nginx
command: ["/bin/sh","-c"]
args: ["while true; do echo date; sleep 2; done"]
</code></pre>
<p>front.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: front
name: front
spec:
containers:
- image: nginx
name: front
command:
- /bin/sh
- -c
- while true; echo date; sleep 2; done
</code></pre>
<p>service is created like this:</p>
<pre><code>k expose pod ngin --name=svc-main --type=ClusterIP --port=80 --target-port=80
k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-main ClusterIP 10.104.26.249 <none> 80:31775/TCP 11m
</code></pre>
<p>When I try to curl from inside front it says "Could not resolve host: svc-main"</p>
<pre><code> k exec front -it -- sh
curl svc-main:80
</code></pre>
<p>or this</p>
<pre><code>curl http://svc-main:80
curl 10.104.26.249:80
</code></pre>
<p>I tried the port 31775, same result. What am I doing wrong?!</p>
| ERJAN | <p>The problem is that when you create a Kubernetes pod using your yaml file, you overwrite the default Entrypoint and Cmd defined in the nginx Docker image with your custom command and args:</p>
<pre class="lang-yaml prettyprint-override"><code>command: ["/bin/sh","-c"]
args: ["while true; do echo date; sleep 2; done"]
</code></pre>
<p>That's why the nginx web server doesn't work in the created pods.
You should remove these lines, delete the running pods, and create new pods. After that, you will be able to reach the nginx web page by running
<pre class="lang-sh prettyprint-override"><code># curl svc-main
</code></pre>
<p>within your front pod.</p>
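<p>For reference, the corrected main.yaml would be reduced to something like this (based on the manifest in the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
  name: main
  namespace: default
spec:
  containers:
  - name: main
    image: nginx   # the image's own entrypoint now starts nginx
</code></pre>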
<p>You can read more info about defining a command and arguments for a container in a Pod <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">here</a></p>
<p>And there is a good article about Docker CMD and Entrypoint <a href="https://www.cloudbees.com/blog/understanding-dockers-cmd-and-entrypoint-instructions" rel="nofollow noreferrer">here</a></p>
| mozello |
<p>I want to calculate the availability/uptime percentage of the ephemeral runner pods that belong to GitHub Action runners.</p>
<p>What should the PromQL query look like to calculate the uptime percentage of ephemeral runner pods in Kubernetes?</p>
| semural | <p>There is a lot to consider in order to achieve your goal of monitoring the availability or uptime of ephemeral runner pods for GitHub Action runners. To start off, you need a tool such as Grafana, which will help you monitor and visualize Kubernetes metrics. You also need to define the criteria that you want to monitor. This is just to help you get started; there is a lot more to be done. Here are some helpful links for you. [1][2][3]</p>
<p>[1] <a href="https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/" rel="nofollow noreferrer">https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/</a></p>
<p>[2] <a href="https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners" rel="nofollow noreferrer">https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners</a></p>
<p>[3] <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</a></p>
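<p>As a starting point for the PromQL itself, here is a rough sketch. It assumes kube-state-metrics is installed and that your runner pods match a name pattern such as <code>github-runner-.*</code> (adjust the selector and time range to your setup):</p>
<pre><code># fraction of time each runner pod was Ready over the last 30 days, as a percentage
avg_over_time(
  kube_pod_status_ready{condition="true", pod=~"github-runner-.*"}[30d]
) * 100
</code></pre>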
| Ray John Navarro |
<p>I have deployed Ambassador Edge Stack and I am using Host and Mapping resources to route my traffic. I want to implement the mapping in such a way that if there is a double slash in the path, one slash is removed from it, using regex (or any other available way).
For example, if a client requests <code>https://a.test.com//testapi</code> I want it to become <code>https://a.test.com/testapi</code>.</p>
<p>I searched through the Ambassador documentation but I am unable to find anything that can be of help.</p>
<p>Thank You</p>
| Susanta Gautam | <p>There is the <a href="https://www.getambassador.io/docs/emissary/1.14/topics/running/ambassador/#the-module-resource" rel="nofollow noreferrer">Module Resource</a> for emissary ingress.</p>
<blockquote>
<p>If present, the Module defines system-wide configuration. This module can be applied to any Kubernetes service (the ambassador service itself is a common choice). You may very well not need this Module. To apply the Module to an Ambassador Service, it MUST be named ambassador, otherwise it will be ignored. To create multiple ambassador Modules in the same namespace, they should be put in the annotations of each separate Ambassador Service.</p>
</blockquote>
<p>You should add this to the module's yaml file:</p>
<pre><code>spec:
...
config:
...
merge_slashes: true
</code></pre>
<blockquote>
<p>If true, Emissary-ingress will merge adjacent slashes for the purpose of route matching and request filtering. For example, a request for //foo///bar will be matched to a Mapping with prefix /foo/bar.</p>
</blockquote>
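<p>A complete Module manifest could look roughly like this (the apiVersion and namespace depend on your Ambassador/Emissary version and install):</p>
<pre><code>apiVersion: getambassador.io/v2   # getambassador.io/v3alpha1 on Emissary/Edge Stack 2.x+
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    merge_slashes: true
</code></pre>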
| mozello |
<p>Do I understand correctly that Kafka Streams state store is co-located with the KS application instance? For example if my KS application is running in a Kubernetes pod, the state store is located in the same pod? What state store storage is better to use in Kubernetes - RocksDB or in-memory? How can the type of the state store be configured in the application?</p>
| AndCode | <p>This depends on your use case - sometimes you can accept an in-memory store when you have a small topic. However, in most cases you'll default to a persistent store. To declare one you'd do:</p>
<pre><code>streamsBuilder.addStateStore(
Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore(
storeName
),
keySerde,
valueSerde
)
);
</code></pre>
<p>If you wish for an in-memory store, replace the third line with <code>inMemoryKeyValueStore</code>.</p>
<p>Running a Kafka Streams application in k8s has a few caveats. First of all, just a pod is not enough: you'll need to run it as a StatefulSet. In that case your pod will have a PersistentVolumeClaim mounted under a certain path. It's best to set your <code>state.dir</code> property to point at a subfolder of that path. That way, when your pod shuts down the volume is retained, and when the pod comes back up it will have all of its store present.</p>
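<p>A rough sketch of that StatefulSet wiring (image, names and sizes are placeholders; the app is assumed to read <code>STATE_DIR</code> and pass it to <code>state.dir</code>):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-streams-app
spec:
  serviceName: my-streams-app
  replicas: 3
  selector:
    matchLabels:
      app: my-streams-app
  template:
    metadata:
      labels:
        app: my-streams-app
    spec:
      containers:
      - name: streams
        image: my-streams-app:latest
        env:
        - name: STATE_DIR            # consumed by the app as state.dir
          value: /data/state-store
        volumeMounts:
        - name: state
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: state
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>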
| Kacper Roszczyna |
<p>For example, guestbook-ui service and bbs-ui service are installed in k8s.</p>
<p>And I want to map guestbook-ui only to the 8080 listener port and the bbs-ui service only to the 8081 listener port of the pre-generated k8s ALB ingress.</p>
<p>However, if I write and store the following in the spec, both the guestbook-ui and bbs-ui services are attached to both ports 8080 and 8081, and the routing gets mixed up.</p>
<pre class="lang-yaml prettyprint-override"><code># skip
metadata:
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8080}, {"HTTP":8081}]'
# skip
spec:
rules:
- http:
paths:
- backend:
service:
name: bbs-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
- http:
paths:
- backend:
service:
name: guestbook-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
</code></pre>
<p>How can I deploy the service to the listener port I want?</p>
| Junseok Lee | <p>There is a feature to automatically merge multiple ingress rules for all ingresses in the same <strong>ingress group</strong>. The AWS ALB ingress controller supports them with a single ALB.</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
annotations:
alb.ingress.kubernetes.io/group.name: my-group
</code></pre>
<blockquote>
<p>In the AWS ALB ingress controller, prior to version 2.0, each ingress object you created in Kubernetes would get its own ALB. Customers wanted a way to lower their cost and duplicate configuration by sharing the same ALB for multiple services and namespaces.
By sharing an ALB, you can still use annotations for advanced routing but share a single load balancer for a team, or any combination of apps by specifying the alb.ingress.kubernetes.io/group.name annotation. <strong>All services with the same group.name will use the same load balancer</strong>.</p>
</blockquote>
<p>So, you can create ingress like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress1
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: mytag=tag
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8080}]'
spec:
rules:
- http:
paths:
- backend:
service:
name: bbs-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress2
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: mytag=tag
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8081}]'
spec:
rules:
- http:
paths:
- backend:
service:
name: guestbook-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
</code></pre>
<p>You can read more info about <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.0/guide/ingress/annotations/#ingressgroup" rel="nofollow noreferrer">IngressGroup here</a>.</p>
| mozello |
<p>I want to log the activity of users in k8s. I want to log each user that executes commands in pods. For example, if a user has the username 'user01' in k8s (or OpenShift) and executes 'whoami' in a pod, I want to log this activity with the username 'user01'.</p>
<p>I explored Tetragon and Falco. These tools cannot give me the k8s username; they just give the real and effective user in the pod.
The k8s audit log stores the username but it does not store the whole activity.</p>
| Michael Cab | <p>Hey, the pod is the smallest unit in Kubernetes. The container that the pod wraps does not know about K8s; it only has its own users and whatever you configure on the application level.</p>
<p>I assume you want to know which k8s user does what on a pod, specifically when they use <em>kubectl exec bla -- /bin/sh</em>?</p>
<p>Since this is a very specific use case, you have to find a workaround for it.</p>
<p><strong>Solution:</strong>
One possible way to achieve this is to publish the container logs and the k8s audit log together to your distributed logging system, like Elastic & Kibana. From that point you can search by correlation, e.g. you have the audit logs where you can see the container and the k8s user, and you can join them with the logs produced on that specific container.</p>
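<p>If you go that route, the audit side can be narrowed down to exec/attach requests with a policy rule like this (a sketch; audit logging must be enabled on the API server and the log destination depends on your setup):</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse        # or Metadata if the request body is not needed
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach"]
</code></pre>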
| pwoltschk |
<p>Concerning the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale" rel="nofollow noreferrer">Kubernetes Horizontal Autoscaler</a>, are there any metrics related to the number of changes between certain time periods?</p>
| Hegemon | <p>Kubernetes does not provide such metrics, but you can get <code>events</code> for a k8s resource.</p>
<blockquote>
<p>An event in Kubernetes is an object in the framework that is automatically generated in response to changes with other resourcesβlike nodes, pods, or containers.</p>
</blockquote>
<p>The simplest way to get events for HPA:</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl get events | grep HorizontalPodAutoscaler
7m5s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 8; reason: cpu resource utilization (percentage of request) above target
3m20s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 10; reason:
</code></pre>
<p>or</p>
<pre><code>$ kubectl describe hpa <yourHpaName>
</code></pre>
<p><strong>But</strong> Kubernetes events are deleted by default after 1 hour (it is the default time-to-live, higher values might require more resources for <code>etcd</code>). Therefore <strong>you must watch for and collect important events as they happen</strong>.</p>
<p>To do this you can use for example:</p>
<ul>
<li><a href="https://github.com/bitnami-labs/kubewatch" rel="nofollow noreferrer">KubeWatch</a> is great open-source tool for watching and streaming K8s events to third-party tools and webhooks.</li>
<li><a href="https://github.com/heptiolabs/eventrouter?ref=thechiefio" rel="nofollow noreferrer">EventRouter</a> is another great open-source tool for collecting Kubernetes events. It is effortless to set up and aims to stream Kubernetes events to multiple sources or <em>sinks</em> as they are referred to in its documentation. However, just like KubeWatch, it also does not offer querying or persistence features. You need to connect it with a third-party storage and analysis tool for a full-fledged experience.</li>
<li>almost any logging tool for k8s.</li>
</ul>
<p><strong>Also</strong>, you can use the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">official Kubernetes API library</a> to develop some simple app for catching events from HPA.
There is a good example of how to develop a simple monitoring app for k8s by yourself <a href="https://stackoverflow.com/a/63774000/17739440">in this answer</a>. They monitor pods' statuses there, but you can get a good idea of developing the app you need.</p>
| mozello |
<p>I'm creating a secret with below yaml. My application needs the username and password in plaintext</p>
<pre><code>apiVersion: v1
stringData:
password: "secret"
username: "kafka"
kind: Secret
metadata:
name: kafka-secret
namespace: kafka
type: Opaque
</code></pre>
<p>However, when I apply the yaml it changes to data instead of stringData and the values get encoded:</p>
<pre><code>apiVersion: v1
data:
password: a2Fma2Etc2VjcmV0
username: c2VjcmV0
kind: Secret
metadata:
creationTimestamp: "2023-07-18T18:41:17Z"
name: kafka-secret
namespace: kafka
resourceVersion: "14336732"
uid: bc051184-335b-4eca-8120-78d8c431f123
type: Opaque
</code></pre>
| DeirdreRodgers | <p>Yes, this is the nature of a Secret: you can think of a Secret as an encoded ConfigMap, and all values in the secret are stored base64 encoded. If you create a secret, e.g. via</p>
<p><code>kubectl create secret generic mysecret --from-literal=password=mypassword123</code></p>
<p>the password will always be base64 encoded!</p>
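<p>Keep in mind this is encoding, not encryption; you can read a value back in plain text at any time, e.g.:</p>
<pre><code>kubectl get secret kafka-secret -n kafka -o jsonpath='{.data.password}' | base64 -d
</code></pre>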
<p>In the pod you can now mount your secret and use it as clear values:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
optional: true
</code></pre>
<p>You can also use your value as an env variable, which fits your password/username example:</p>
<pre><code> apiVersion: v1
 kind: Pod
metadata:
name: envvars-multiple-secrets
spec:
containers:
- name: envars-test-container
image: nginx
env:
- name: BACKEND_USERNAME
valueFrom:
secretKeyRef:
name: backend-user
key: backend-username
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: db-username
</code></pre>
| pwoltschk |
<p>I have a K8s service defined as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: myapp
name: myapp
namespace: myapps
spec:
ports:
- name: ui
port: 8081
protocol: TCP
targetPort: 8081
selector:
my-app: myapp
my-deployment-type: jobmanager
type: ClusterIP
</code></pre>
<p>This service serves as backend for an ingress.</p>
<p>Now, during a blue-green deployment there are two apps running, i.e. two sets of pods that match the selectors specified above:
my-app: myapp
my-deployment-type: jobmanager</p>
<p>And I observe that it selects both pods i.e both app versions at random till the older app pods gets killed.</p>
<p>Is there a way to ensure that the service will just choose the new app pods? i.e Have a selector to also depend on the "Start Time" of the pods apart from the selector?</p>
| Ace McCloud | <blockquote>
<p>Is there a way to ensure that the service will just choose the new app pods? i.e Have a selector to also depend on the "Start Time" of the pods apart from the selector?</p>
</blockquote>
<p>There is no such feature in Kubernetes to filter a pod start time in service selector block.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#service-and-replicationcontroller" rel="nofollow noreferrer">official k8s documentation</a> says "Labels selectors for service objects are defined in json or yaml files using maps, and <strong>only equality-based requirement selectors are supported</strong>".</p>
<p>So, in your case the best option is to:</p>
<ul>
<li>create a new k8s deployment with new labels for green deployment (the new application version)</li>
<li>create a new ClusterIP service for the green deployment</li>
<li>then switch your Ingress to this green deployment by changing the backend service name.</li>
</ul>
<p>If something goes wrong, you can quickly switch back to your blue deployment (previous application version).</p>
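<p>For illustration, the switch is just a change of the backend service name in the Ingress rule (host, path and port below are placeholders):</p>
<pre><code>spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-green   # previously: myapp-blue
            port:
              number: 8081
</code></pre>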
| mozello |
<p>What is the preferred Kubernetes storageClass for a PersistentVolume used by a Postgresql database? Which factors should go into consideration choosing the storageClass when I have the choice between S3 (Minio), NFS and HostPath?</p>
| mxcd | <p>When you choose a storage option for Postgresql in Kubernetes, you should take into account the following:</p>
<ol>
<li><p><strong>NFS / Minio</strong> is not the preferred storage for databases, if your application is latency-sensitive. A common use case is a download folder or a logging/backup folder.<br />
But it gives you flexibility to design a k8s cluster and ability to easily move to cloud-based solution in future (AWS EFS or S3 for example).</p>
</li>
<li><p><strong>HostPath</strong> is a better option for databases. But</p>
</li>
</ol>
<blockquote>
<p>Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.</p>
</blockquote>
<blockquote>
<p>In a production cluster, you would not use hostPath. Instead a cluster administrator would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use StorageClasses to set up dynamic provisioning.</p>
</blockquote>
<ol start="3">
<li>As you mentioned, there is quite a good option for non-cloud k8s clusters <strong>Longhorn</strong></li>
</ol>
<blockquote>
<p>Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes.<br />
Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.</p>
</blockquote>
<ol start="4">
<li>Also, check this <a href="https://github.com/bitnami/charts/tree/master/bitnami/postgresql" rel="nofollow noreferrer">Bitnami PostgreSQL Helm chart</a></li>
</ol>
<blockquote>
<p>It offers a PostgreSQL Helm chart that comes pre-configured for security, scalability and data replication. It's a great combination: all the open source goodness of PostgreSQL (foreign keys, joins, views, triggers, stored procedures...) together with the consistency, portability and self-healing features of Kubernetes.</p>
</blockquote>
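<p>For example, installing that chart with an explicit storage class might look like this (a sketch; value names can differ between chart versions):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql \
  --set global.storageClass=longhorn \
  --set primary.persistence.size=10Gi
</code></pre>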
| mozello |
<p>I am somewhat brand-new to Kubernetes.</p>
<p>I have a pod that keeps restarting, and my error is:</p>
<pre><code> at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:524:31)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/target_node/create_transport.js:58:16)
at Cluster.getSettings (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/cluster.js:157:16)
at isInlineScriptingEnabled (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-server-internal/target_node/is_scripting_enabled.js:22:20)
at ElasticsearchService.start (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-server-internal/target_node/elasticsearch_service.js:128:32)
at Server.start (/usr/share/kibana/src/core/server/server.js:366:32)
at Root.start (/usr/share/kibana/src/core/server/root/index.js:69:14)
at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:120:5)
at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:216:5)
[2023-08-19T12:01:21.293+00:00][INFO ][plugins-system.preboot] Stopping all plugins.
[2023-08-19T12:01:21.294+00:00][INFO ][plugins-system.standard] Stopping all plugins.
[2023-08-19T12:01:21.294+00:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Monitoring stats collection is stopped
[2023-08-19T12:01:21.297+00:00][ERROR][plugins.ruleRegistry] Error: Server is stopping; must stop all async operations
at /usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:66:20
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-08-19T12:01:21.298+00:00][ERROR][plugins.ruleRegistry] Error: Failure installing common resources shared between all indices. Server is stopping; must stop all async operations
at ResourceInstaller.installWithTimeout (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:75:13)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at ResourceInstaller.installCommonResources (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:89:5)
FATAL TimeoutError: Request timed out
</code></pre>
<p>The rest of my pods are running:</p>
<p><a href="https://i.stack.imgur.com/Ph0ZY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ph0ZY.png" alt="enter image description here" /></a></p>
<p><code>kubectl describe pod</code></p>
<p><a href="https://i.stack.imgur.com/cEfvt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cEfvt.png" alt="enter image description here" /></a>
What I've Tried:</p>
<ul>
<li>Deleted the pod on its own.</li>
<li>Removed the deployment.</li>
<li>Updated the YAML file to allocate more resources and applied the changes. However, I'm uncertain if the changes were actually applied.</li>
</ul>
<p>If anyone is a Kubernetes expert, I would be very appreciative if someone could provide proper guidance here.</p>
<p>Thank you.</p>
| OreuhNation | <p>It's quite hard to do remote diagnostics with this amount of information, but two things stand out. Your Kubernetes pod running Kibana keeps restarting due to errors related to its communication with Elasticsearch.</p>
<p><strong>1:</strong> The "Readiness probe failed: Get <a href="https://xxxxx.login" rel="nofollow noreferrer">https://xxxxx.login</a> dial tcp connect connection refused" message suggests that Kibana is failing its readiness probe. A readiness probe is used to determine if a container is ready to start accepting traffic.</p>
<ul>
<li><p>Verify that the readiness probe configuration for the Kibana pod is
correct. It might be misconfigured or the service it's trying to
access ("login" in this case) might not be ready or available.</p>
</li>
<li><p>Check the service name, port, and path used in the readiness probe. Ensure they match the actual configuration of the application.</p>
</li>
<li><p>Increase the timeout of the readiness probe (see the sketch after this list).</p>
</li>
</ul>
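<p>A hedged sketch of a more forgiving readiness probe for the Kibana container (the path, port and timings are assumptions to adapt to your setup):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /api/status      # Kibana status endpoint; adjust if Kibana is served under a base path
    port: 5601
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 10
  failureThreshold: 10
</code></pre>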
<p><strong>2:</strong> The error message "TimeoutError: Request timed out" indicates that there's an issue with Kibana's communication with Elasticsearch.</p>
<ul>
<li>Check if the Elasticsearch service is up and running and accessible
from the Kibana pod.</li>
<li>Ensure that the Elasticsearch URL and credentials are correctly configured in the Kibana configuration.</li>
<li>Verify that there are no network policies or firewalls blocking communication between Kibana and Elasticsearch.</li>
<li>Check resource allocation for both Kibana and Elasticsearch pods. Ensure they have enough resources (CPU, memory) allocated to perform
their tasks.</li>
</ul>
<p>I hope this could help you.
Happy Coding</p>
| pwoltschk |
<p>I have an NGINX Ingress sitting in front of a few nodejs services. I want to restrict the path /graphql to only POST and only content-type=application/json</p>
<p>I've added the following annotation, which seems to work in terms of the restriction, but valid requests now return a 404</p>
<pre><code> nginx.ingress.kubernetes.io/server-snippet: |
location /graphql {
limit_except OPTIONS POST {
deny all;
}
if ($http_content_type != "application/json") {
return 403;
}
}
</code></pre>
| Ben Gannaway | <p>I think the problem is that your <code>location</code> {} block doesn't have an upstream like the regular paths defined in the nginx ingress.
Get the nginx ingress configuration from <code>ingress-nginx-controller</code> pod:</p>
<pre><code>$ kubectl exec -n your-ingress-nginx-namespace ingress-nginx-controller-xxx-xxx -- cat /etc/nginx/nginx.conf
</code></pre>
<p>And check some other location {} block to find what you might need for your <code>/graphql</code> location {} configuration.</p>
<p>The following basic <code>nginx.ingress.kubernetes.io/server-snippet</code> works for me (requests other than POST and content-type=application/json return 403 status code, valid requests are OK):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cms-ingress-service
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/server-snippet: |
location /graphql {
limit_except OPTIONS POST {
deny all;
}
if ($http_content_type != "application/json") {
return 403;
}
set $proxy_upstream_name "default-nginx-80";
proxy_pass http://upstream_balancer;
}
spec:
...
</code></pre>
<p>Also, be aware of <a href="https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/" rel="nofollow noreferrer">If is Evil... when used in location context</a>.<br />
So, I advise you to test your final configuration thoroughly before putting it in production.</p>
| mozello |
<p>One of our containers is using ephemeral storage but we don't know why. The app running in the container shouldn't be writing anything to the disk.</p>
<p>We set the storage limit to 20MB but it's still being evicted. We could increase the limit but this seems like a bandaid fix.</p>
<p>We're not sure what or where this container is writing to, and I'm not sure how to check that. When a container is evicted, the only information I can see is that the container exceeded its storage limit.</p>
<p>Is there an efficient way to know what's being written, or is our only option to comb through the code?</p>
| Jessie | <p>Adding details to the topic.</p>
<blockquote>
<p>Pods use ephemeral local storage for scratch space, caching, and logs.
<strong>Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage has been reclaimed.</strong></p>
</blockquote>
<p>The kubelet can provide scratch space to Pods using local ephemeral storage to mount <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> volumes into containers.</p>
<ul>
<li><p>For container-level isolation, if a container's writable layer and log usage exceeds its storage limit, the kubelet marks the Pod for eviction.</p>
</li>
<li><p>For pod-level isolation the kubelet works out an overall Pod storage limit by summing the limits for the containers in that Pod. In this case, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod for eviction.</p>
</li>
</ul>
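<p>For reference, a minimal sketch of how these per-container ephemeral-storage requests and limits are declared (values are illustrative; raising the limit only hides the problem, as you noted):</p>
<pre><code>containers:
  - name: app
    image: your-image:tag        # placeholder
    resources:
      requests:
        ephemeral-storage: "20Mi"
      limits:
        ephemeral-storage: "100Mi"
</code></pre>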
<p>To see what files have been written since the pod started, you can run:</p>
<pre><code>find / -mount -newer /proc -print
</code></pre>
<p>This will output a list of files modified more recently than '/proc'.</p>
<pre><code>/etc/nginx/conf.d
/etc/nginx/conf.d/default.conf
/run/secrets
/run/secrets/kubernetes.io
/run/secrets/kubernetes.io/serviceaccount
/run/nginx.pid
/var/cache/nginx
/var/cache/nginx/fastcgi_temp
/var/cache/nginx/client_temp
/var/cache/nginx/uwsgi_temp
/var/cache/nginx/proxy_temp
/var/cache/nginx/scgi_temp
/dev
</code></pre>
<p>Also, try without the '-mount' option.</p>
<p>To see if any new files are being modified, you can run some variations of the following command in a Pod:</p>
<pre><code>while true; do rm -f a; touch a; sleep 30; echo "monitoring..."; find / -mount -newer a -print; done
</code></pre>
<p>and check the file size using the <code>du -h someDir</code> command.</p>
<p>Also, as @gohm'c pointed out in his answer, you can use sidecar/ephemeral debug containers.</p>
<p>Read more about <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">Local ephemeral storage here</a>.</p>
| mozello |
<p>When applying the YAML file for the ingress service, a load balancer is created. I pointed my DNS domain to that load balancer IP and it reaches the nginx server. However, when routing to <a href="https://www.example.com/api/users/" rel="nofollow noreferrer">https://www.example.com/api/users/</a> or <a href="https://www.example.com/api/users/currentuser/" rel="nofollow noreferrer">https://www.example.com/api/users/currentuser/</a> (or their http:// equivalents) it returns a 404 page. Here is the Ingress-Nginx YAML file I'm using:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: www.example.com
http:
paths:
- path: /api/users/
pathType: Prefix
backend:
service:
name: be-auth-srv
port:
number: 9001
- path: /auth/
pathType: Prefix
backend:
service:
name: be-contact-form-srv
port:
number: 9000
</code></pre>
<p>Anybody can shed light on this topic? Thanks</p>
| Manel | <p>Based on the example you provided, I would suggest, as a test, to update the first path definition to the following:</p>
<ul>
<li>path: /api/users(/|$)(.*)</li>
</ul>
<p>...and assuming the Nginx Ingress Controller is already configured to terminate TLS, try reaching <a href="https://www.example.com/api/users/" rel="nofollow noreferrer">https://www.example.com/api/users/</a></p>
<p>I would also double check that the be-auth-srv service is configured to listen on port 9001.</p>
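<p>For illustration, a hedged sketch of how that regex path is usually paired with a capture-group rewrite (this assumes you want the <code>/api/users</code> prefix stripped before the request reaches <code>be-auth-srv</code>; if the service expects the full path, keep your original Prefix path instead):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2 is whatever follows /api/users
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /api/users(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: be-auth-srv
                port:
                  number: 9001
</code></pre>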
| Michael See |
<p>I have an openshift namespace (<code>SomeNamespace</code>), in that namespace I have several pods.</p>
<p>I have a route associated with that namespace (<code>SomeRoute</code>).</p>
<p>In one of pods I have my spring application. It has REST controllers.</p>
<p>I want to send message to that REST controller, how can I do it?</p>
<p>I have a route URL: <code>https://some.namespace.company.name</code>. What should I find next?</p>
<p>I tried to send requests to <code>https://some.namespace.company.name/rest/api/route</code> but it didn't work. I guess I must somehow specify the pod in my URL, so the route will redirect requests to a concrete pod, but I don't know how to do that.</p>
| Anton | <p><strong>Routes</strong> are an <strong>OpenShift-specific</strong> way of exposing a Service outside the cluster.
But, if you are developing an app that will be deployed onto <strong>OpenShift and Kubernetes</strong>, then you should use <strong>Kubernetes Ingress</strong> objects.</p>
<p>Using Ingress means that your appβs manifests are more portable between different Kubernetes clusters.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">official Kubernetes docs</a>:</p>
<blockquote>
<ul>
<li>An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.</li>
<li>Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.</li>
<li>Traffic routing is controlled by rules defined on the Ingress resource.</li>
</ul>
</blockquote>
<h4>So, if you want to reach your REST controllers:</h4>
<ul>
<li><strong>from within the k8s cluster</strong>. Create a k8s <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> to expose an application running on a set of Pods as a network service:</li>
</ul>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: your-namespace
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>This specification creates a new Service object named "my-service", which targets TCP port 8080 on any Pod with the <code>app=MyApp</code> label.
You can reach the REST controller using this URL:</p>
<pre><code>http://my-service
</code></pre>
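<p>For instance, from another pod you could call it roughly like this (the <code>/rest/api/route</code> path is taken from your question; add the namespace suffix when calling across namespaces):</p>
<pre><code># from a pod in the same namespace
curl http://my-service/rest/api/route

# from a pod in a different namespace
curl http://my-service.your-namespace.svc.cluster.local/rest/api/route
</code></pre>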
<ul>
<li><strong>externally</strong>. Create an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress resource</a> to configure externally-reachable URLs (a k8s Service 'my-service' should exist):</li>
</ul>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-name
namespace: your-namespace
spec:
rules:
- host: "foo.bar.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>You can reach the REST controller using this URL:</p>
<pre><code>http://foo.bar.com
</code></pre>
| mozello |
<p>I'm quite confused between ECK and ELK. AFAIK, ELK requires Logstash for reading the logs. Is Logstash required for ECK as well, or is Filebeat or Metricbeat enough to get the logs? Does Filebeat replace Logstash in the case of ECK?</p>
| ilavarasan M | <p>I can understand your confusion between ECK and ELK. Let me clarify the differences between them and how they handle log ingestion.</p>
<ol>
<li>ELK Stack:</li>
</ol>
<ul>
<li>ELK stands for Elasticsearch, Logstash, and Kibana. It's a popular open-source stack used for log analysis and visualization.</li>
<li>Elasticsearch is a distributed search and analytics engine that stores and indexes the log data.</li>
<li>Logstash is a data processing pipeline that ingests logs from various sources, processes them, and sends them to Elasticsearch for storage.</li>
<li>Kibana is a web-based visualization tool that allows users to explore and analyze the data stored in Elasticsearch.</li>
</ul>
<p>In the traditional ELK setup, Logstash plays a crucial role in parsing and transforming logs before they are indexed into Elasticsearch. Logstash is responsible for handling log data collection, filtering, and processing.</p>
<ol start="2">
<li>ECK (Elastic Cloud on Kubernetes):</li>
</ol>
<ul>
<li>ECK is an abbreviation for Elastic Cloud on Kubernetes. It is a tool designed to deploy, manage, and operate the Elastic Stack (Elasticsearch, Kibana, Beats) on Kubernetes.</li>
<li>ECK brings the capabilities of the ELK Stack to Kubernetes, allowing you to deploy Elasticsearch and Kibana as native Kubernetes applications.</li>
</ul>
<ol start="3">
<li>Beats:</li>
</ol>
<ul>
<li>Beats are lightweight data shippers that can send various types of data to Elasticsearch directly, bypassing the need for Logstash in some cases.</li>
<li>Filebeat is a Beat designed to collect, ship, and centralize log data. It can replace Logstash in certain scenarios, as it can read log files, parse them, and send the data directly to Elasticsearch.</li>
<li>Metricbeat is another Beat that can collect and ship system and service-level metrics to Elasticsearch.</li>
</ul>
<p>So, to answer your questions:</p>
<ul>
<li>In the context of ECK, Logstash is not required to get logs into Elasticsearch.</li>
<li>Instead, you can use Filebeat or Metricbeat to read logs and send them directly to Elasticsearch.</li>
</ul>
<p>Filebeat is typically used for log collection, while Metricbeat is used for collecting system metrics. Both can be used independently or together, depending on your specific use case.</p>
<p>To summarize, ECK allows you to deploy Elasticsearch and Kibana on Kubernetes and use Beats like Filebeat or Metricbeat to send data to Elasticsearch, eliminating the need for Logstash in certain scenarios.</p>
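<p>As a rough illustration, a minimal Filebeat configuration that ships container logs straight to Elasticsearch could look like this (the Elasticsearch service name and credentials are assumptions about an ECK-managed deployment; with ECK you would typically wrap this in a Beat resource or a Helm chart):</p>
<pre><code>filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

output.elasticsearch:
  hosts: ["https://quickstart-es-http:9200"]   # assumed ECK Elasticsearch service name
  username: "elastic"
  password: "${ELASTICSEARCH_PASSWORD}"        # assumed to be injected from the ECK secret
  ssl.verification_mode: "none"                # quick test only; use the proper CA in production
</code></pre>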
| RizwanAli72 |
<p><strong>Context:</strong>
Regarding compute cost reduction: in our AKS cluster, we recently observed that compute resources are under-utilised, so my plan is:</p>
<ol>
<li><p>To create a new node pool (with lower CPU and memory, and so lower cost) and attach it to the same AKS cluster.
And then</p>
</li>
<li><p>Cordon the old node pool and then drain it, so the workload will move to the new node pool (thanks to nodeSelector).</p>
</li>
</ol>
<p><strong>Question:</strong></p>
<p>What about the k8s resources, like the StatefulSet for e.g. Redis, which are in the old node pool and have PVs and PVCs? Do we have to take a backup of those PVCs and restore them for the new node pool? (My thinking is that Kubernetes will take care of detaching and attaching the PVCs, since all this activity happens within a single Kubernetes cluster.)</p>
| Nin | <p>You are right! Kubernetes will take care of detaching and attaching the PVs and PVCs, since all this activity happens within a single Kubernetes cluster. You don't need a backup.</p>
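<p>As a side note, the cordon/drain step you described can be scripted roughly like this (the <code>agentpool</code> label value is an assumption about your old pool's name):</p>
<pre><code># cordon every node in the old pool, then drain them
kubectl get nodes -l agentpool=oldpool -o name | xargs -I {} kubectl cordon {}
kubectl get nodes -l agentpool=oldpool -o name | xargs -I {} kubectl drain {} --ignore-daemonsets --delete-emptydir-data
</code></pre>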
| Prrudram-MSFT |
<p>I try to deploy mysql container in Kubernetes cluster (minikube) using the <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">example</a>.</p>
<p>I changed only a following part:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Users/tomasz/k8s/volume"
</code></pre>
<p>I double-checked that the directory exists. Yes, it does.</p>
<p>However, the container doesn't work because the command:</p>
<pre><code>kubectl get pods
</code></pre>
<p>prints</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
mysql 0/1 1 0 26d
phpmyadmin 1/1 1 1 26d
</code></pre>
<p>when I use:</p>
<pre><code>kubectl describe pod mysql-6f9cd7b9df-vj2zf
</code></pre>
<p>I get:</p>
<pre><code>Name: mysql-6f9cd7b9df-vj2zf
Namespace: default
Priority: 0
Service Account: default
Node: minikube/192.168.64.2
Start Time: Sun, 23 Jul 2023 21:09:02 +0200
Labels: app=mysql
pod-template-hash=6f9cd7b9df
Annotations: <none>
Status: Running
IP: 10.244.0.120
IPs:
IP: 10.244.0.120
Controlled By: ReplicaSet/mysql-6f9cd7b9df
Containers:
mysql:
Container ID: docker://d3f8152b3805a7e45153335935edfe46f6d9b800441a9d1e82953d1c87f3fb14
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Sat, 19 Aug 2023 11:36:22 +0200
Finished: Sat, 19 Aug 2023 11:38:13 +0200
Ready: False
Restart Count: 21
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'mysql-root-password' in secret 'mysql-secret'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-52qf5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
kube-api-access-52qf5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26d default-scheduler Successfully assigned default/mysql-6f9cd7b9df-vj2zf to minikube
Normal Pulled 26d (x5 over 26d) kubelet Container image "mysql:5.6" already present on machine
Normal Created 26d (x5 over 26d) kubelet Created container mysql
Normal Started 26d (x5 over 26d) kubelet Started container mysql
Warning BackOff 26d (x89 over 26d) kubelet Back-off restarting failed container mysql in pod mysql-6f9cd7b9df-vj2zf_default(7d26ac38-3f3c-49e9-a5b5-e1399a2df0e6)
Warning NetworkNotReady 14d (x2 over 14d) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Warning FailedMount 14d (x4 over 14d) kubelet MountVolume.SetUp failed for volume "kube-api-access-52qf5" : object "default"/"kube-root-ca.crt" not registered
Normal SandboxChanged 14d kubelet Pod sandbox changed, it will be killed and re-created.
Normal Started 14d (x3 over 14d) kubelet Started container mysql
Normal Pulled 14d (x4 over 14d) kubelet Container image "mysql:5.6" already present on machine
Normal Created 14d (x4 over 14d) kubelet Created container mysql
Warning BackOff 14d (x12 over 14d) kubelet Back-off restarting failed container mysql in pod mysql-6f9cd7b9df-vj2zf_default(7d26ac38-3f3c-49e9-a5b5-e1399a2df0e6)
Normal SandboxChanged 16m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 13m (x4 over 16m) kubelet Container image "mysql:5.6" already present on machine
Normal Created 13m (x4 over 16m) kubelet Created container mysql
Normal Started 13m (x4 over 16m) kubelet Started container mysql
Warning BackOff 95s (x48 over 15m) kubelet Back-off restarting failed container mysql in pod mysql-6f9cd7b9df-vj2zf_default(7d26ac38-3f3c-49e9-a5b5-e1399a2df0e6)
</code></pre>
<p>When I look into logs:</p>
<pre><code>kubectl logs -p mysql-6f9cd7b9df-vj2zf
</code></pre>
<p>I get:</p>
<pre><code>2023-08-19 09:36:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.51-1debian9 started.
2023-08-19 09:36:22+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2023-08-19 09:36:22+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.6.51-1debian9 started.
</code></pre>
<p>Could someone tell me how to debug it? Thank you.</p>
| CodeGrinder | <p>Exit code 137 typically means that the process was terminated by a signal, and in Kubernetes, signal 137 corresponds to SIGKILL, which is often used to forcefully terminate a process.</p>
<p>There are a few common reasons why a MySQL container might be crashing like this:</p>
<p><strong>Resource Constraints:</strong> The MySQL container might not have enough resources (CPU, memory) assigned to it. This could lead to the process getting killed if it exceeds its resource limits.</p>
<p><strong>OOM (Out of Memory) Kill:</strong> If the container is running out of memory, the Linux kernel might invoke the OOM killer to terminate processes to free up memory.</p>
<p><strong>Configuration or Data Issues:</strong> If there are issues with the MySQL configuration or the data directory, it could cause the container to crash on startup.</p>
<p><strong>Volume Mounting Issues:</strong> The volume mounting might not be working as expected, which could cause the container to fail.</p>
<p><strong>Environment Variables:</strong> Make sure that the environment variables you're passing to the MySQL container, like the root password, are correctly set.</p>
<p>To debug this, you can try the following steps:</p>
<p><strong>Resource Allocation:</strong> Check if the resources allocated to the MySQL container are sufficient. You might need to increase the memory and CPU limits.</p>
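<p>For example, a minimal hedged sketch of adding requests and limits to the MySQL container in your Deployment (the values are assumptions; your pod currently runs with QoS class BestEffort, i.e. no requests or limits at all):</p>
<pre><code>containers:
  - name: mysql
    image: mysql:5.6
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "1Gi"
</code></pre>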
<p><strong>Memory Usage:</strong> Use tools like <code>kubectl top</code> to monitor the memory usage of the MySQL container.</p>
<p><strong>Logs:</strong> Examine the MySQL container logs more thoroughly. Apart from the log snippet you provided, look for any error messages or warnings in the MySQL error log.</p>
<p><strong>Check Persistent Volume:</strong> Ensure that the PersistentVolume (PV) and PersistentVolumeClaim (PVC) are correctly set up. You might need to delete and recreate the PVC.</p>
<p><strong>Database Initialization:</strong> If the MySQL database initialization is failing, it could cause the container to crash. Check if the initialization script is executing correctly.</p>
<p><strong>Check for External Factors:</strong> If the host machine where Minikube is running has resource constraints or issues, it could affect the containers. Make sure the host machine has enough resources.</p>
| arnoldschweizer |
<p>I am trying to manipulate a kubernetes yaml file with python's pyyaml. I'm trying to add some annotations, but when I dump them the output is empty.</p>
<p>Add <code>key</code>,<code>val</code> in a <code>Deployment</code>:</p>
<pre class="lang-py prettyprint-override"><code>file_content = ""
try:
with open(filename, "r") as f:
file_content = yaml.safe_load_all(f.read())
except:
sys.exit(1)
for doc in file_content:
if doc['kind'] == "Deployment":
metadata = doc['spec']['template']['metadata']
if 'annotations' not in metadata.keys():
metadata['annotations'] = {}
metadata['annotations']['key'] = 'val'
yaml.safe_dump_all(documents=file_content, stream=sys.stdout)
</code></pre>
<p>I would like to obtain:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
metadata:
...
annotations:
'key': 'val'
</code></pre>
<p>How can I do that?</p>
<blockquote>
<p>Note that in the output there's should be the single quote mark <code>'</code>.</p>
</blockquote>
| sctx | <p>I found a solution. The object returned by <code>yaml.safe_load_all</code> is a generator, so after the for loop has iterated over it, it is exhausted; passing it to <code>safe_dump_all</code> afterwards therefore produces empty output.</p>
<p>Instead, I copy each document from the iterator into a new list and then apply the edits to that list.</p>
<p>The following works properly:</p>
<pre class="lang-py prettyprint-override"><code>resources = []
for res in file_content:
if res['kind'] == "Deployment":
# manipulate res
resources.append(res)
yaml.safe_dump_all(documents=resources, stream=sys.stdout)
</code></pre>
| sctx |
<p>After upgrading the jenkins plugin Kubernetes Client to version 1.30.3 (also for 1.31.1) I get the following exceptions in the logs of jenkins when I start a build:</p>
<pre><code>Timer task org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount@2c16d367 failed
java.lang.NoSuchMethodError: 'okhttp3.OkHttpClient io.fabric8.kubernetes.client.HttpClientAware.getHttpClient()'
at org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount.doRun(KubernetesClientProvider.java:150)
at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:90)
at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:67)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
</code></pre>
<p>After some of these eceptions the build itself is cancelled with this error:</p>
<pre><code>java.io.IOException: Timed out waiting for websocket connection. You should increase the value of system property org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout currently set at 30 seconds
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:451)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:338)
at hudson.Launcher$ProcStarter.start(Launcher.java:507)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
</code></pre>
<p>Do you have an idea what can be done?</p>
| fuechsle | <p>Downgrade the plugin to kubernetes-client-api:5.10.1-171.vaa0774fb8c20. The latest one has the compatibility issue as of now.</p>
<p><strong>New info</strong>: The issue is now solved by upgrading the <strong>Kubernetes plugin</strong> to version 1.31.2: <a href="https://issues.jenkins.io/browse/JENKINS-67483" rel="noreferrer">https://issues.jenkins.io/browse/JENKINS-67483</a></p>
| Nitin Kalra |
<p>I am using Prometheus version 2.33.
The following queries do not work:</p>
<blockquote>
<p>kubelet_volume_stats_available_bytes</p>
</blockquote>
<blockquote>
<p>kubelet_volume_stats_capacity_bytes</p>
</blockquote>
<p>The following query is used to monitor the DISK usage of the POD.</p>
<blockquote>
<p>container_fs_usage_bytes</p>
</blockquote>
<blockquote>
<p>container_fs_limit_bytes</p>
</blockquote>
<p>Is there a way to get the PVC usage and limit values?</p>
| κΉνμ° | <p>For PVC, Kubernetes exposes these metrics to Prometheus, you can use them to monitor a persistent volume's usage:</p>
<pre class="lang-yaml prettyprint-override"><code>kube_persistentvolume_capacity_bytes
kube_persistentvolumeclaim_resource_requests_storage_bytes
</code></pre>
<p><strong>EDIT</strong>:
These metrics are from <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> - a service that produces Prometheus format metrics based on the current state of the Kubernetes native resources. It is basically listening to Kubernetes API and gathering information about its resources and objects, in particular for PV - <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/persistentvolume-metrics.md" rel="nofollow noreferrer">PV metrics</a> and PVC - <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/persistentvolumeclaim-metrics.md" rel="nofollow noreferrer">PVC metrics</a>. More information about the service is <a href="https://kubernetes.io/blog/2021/04/13/kube-state-metrics-v-2-0/" rel="nofollow noreferrer">here</a>.</p>
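<p>For example, hedged PromQL sketches using those metrics (the namespace label value is an assumption):</p>
<pre><code># requested storage per PVC, in bytes
kube_persistentvolumeclaim_resource_requests_storage_bytes{namespace="my-namespace"}

# total capacity per PersistentVolume, in bytes
kube_persistentvolume_capacity_bytes
</code></pre>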
| anarxz |
<p>I want to set wildcard subdomain for my project, using k8s, nginx ingress controller, helm chart:</p>
<p>In <code>ingress.yaml</code> file:</p>
<pre><code>...
rules:
- host: {{ .Values.ingress.host }}
...
</code></pre>
<p>In <code>values.yaml</code> file, I change host <code>example.local</code> to <code>*.example.local</code>:</p>
<pre><code>...
ingress:
enabled: true
host: "*.example.local"
...
</code></pre>
<p>Then, when I install the chart using Helm, I get:</p>
<pre><code>Error: YAML parse error on example/templates/ingress.yaml: error converting YAML to JSON: yaml: line 15: did not find expected alphabetic or numeric character
</code></pre>
<p>How can I fix it?</p>
<p>Thank for your support.</p>
| Phan Ly Huynh | <p>YAML treats strings starting with asterisk in a special way - that's why the hostname with wildcards like <code>*.example.local</code> breaks the ingress on <code>helm install</code>.
In order to be recognized as strings, the values in <code>ingress.yaml</code> file should be quoted with <code>" "</code> characters:</p>
<pre class="lang-yaml prettyprint-override"><code>...
rules:
- host: "{{ .Values.ingress.host }}"
...
</code></pre>
<p>One more option here - adding <code>| quote</code> :</p>
<pre class="lang-yaml prettyprint-override"><code>...
rules:
- host: {{ .Values.ingress.host | quote }}
...
</code></pre>
<p>I've reproduced your issue, both these options worked correctly. More information on quoting special characters for YAML is <a href="https://yaml.org/spec/1.2.2/#53-indicator-characters" rel="noreferrer">here</a>.</p>
| anarxz |
<p>I have a multi-node cluster setup. There are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their clusterIP/podIP only from the node where the pod resides. For services with multiple pods, I cannot access the service from the node at all (I guess when the service directs the traffic to the pod with the resident node same as from where I am calling then the service will work).</p>
<p>Is this the expected behavior?
Is it a Kubernetes limitation or a security feature?
For debugging etc., we might need to access the services from the node. How can I achieve it?</p>
| Parvathy Mohan | <p>No, it is not the expected behavior for Kubernetes. Pods should be accessible for all the nodes inside the same cluster through their internal IPs. <code>ClusterIP</code> service exposes the service on a cluster-internal IP and making it reachable from within the cluster - it is basically set by default for all the service types, as stated in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
<p>Services are <em>not</em> node-specific and they can point to a pod regardless of where it runs in the cluster at any given moment in time. Also make sure that you are using the cluster-internal <code>port:</code> while trying to reach the services. If you still can connect to the pod only from node where it is running, you might need to check if something is wrong with your networking - e.g, check if UDP ports are blocked.</p>
<p><strong>EDIT</strong>: Concerning network policies - by default, a pod is non-isolated either for egress or ingress, i.e. if no <code>NetworkPolicy</code> resource is defined for the pod in Kubernetes, all traffic is allowed to/from this pod - so-called <code>default-allow</code> behavior. Basically, without network policies all pods are allowed to communicate with all other pods/services in the same cluster, as described above.
If one or more <code>NetworkPolicy</code> is applied to a particular pod, it will reject all traffic that is not explicitly allowed by that policies (meaning, <code>NetworkPolicy</code>that both selects the pod and has "Ingress"/"Egress" in its policyTypes) - <code>default-deny</code> behavior.</p>
<p>What is <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">more</a>:</p>
<blockquote>
<p>Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.</p>
</blockquote>
<p>So yes, it is expected behavior for Kubernetes <code>NetworkPolicy</code> - when a pod is isolated for ingress/egress, the only allowed connections into/from the pod are those from the pod's node and those allowed by the connection list of <code>NetworkPolicy</code> defined.
To be compatible with it, <a href="https://projectcalico.docs.tigera.io/security/calico-network-policy" rel="nofollow noreferrer">Calico network policy</a> follows the same behavior for Kubernetes pods.
<code>NetworkPolicy</code> is applied to pods within a particular namespace - either the same or different with the help of the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">selectors</a>.</p>
<p>As for node specific policies - nodes can't be targeted by their Kubernetes identities, instead CIDR notation should be used in form of <code>ipBlock</code> in pod/service <code>NetworkPolicy</code> - particular <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">IP ranges</a> are selected to allow as ingress sources or egress destinations for pod/service.</p>
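<p>For illustration, a minimal hedged sketch of such a policy (the CIDR is an assumption standing in for your nodes' address range):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-range
  namespace: default
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24   # assumed node IP range
</code></pre>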
<p>Whitelisting Calico IP addresses for each node might seem to be a valid option in this case, please have a look at the similar issue described <a href="https://discuss.kubernetes.io/t/how-do-we-write-an-ingress-networkpolicy-object-in-kubernetes-so-that-calls-via-api-server-proxy-can-come-through/10142/4" rel="nofollow noreferrer">here</a>.</p>
| anarxz |
<p>I am using kubernetes (by windows 10 - docker desktop).</p>
<p>I am using mysql, that is running by helm 3 (loaded from bitnami repository).</p>
<p>I am creating another application.
For now, I am testing on docker (not in kubernetes yet).</p>
<p>Everything is fine, except when trying to connect to the database from my project.
(BTW - the project works fine, but not when running on Docker.)</p>
<p>Something like:</p>
<pre><code>docker run --name test-docker --rm my-image:tag --db "root:12345@tcp(127.0.0.1:3306)/test"
</code></pre>
<p>(db is a parameter to to connect to db).</p>
<p>I get the message:</p>
<pre><code>2022-02-21T12:18:17.205Z FATAL failed to open db: could not setup schema: cannot create jobs table: dial tcp 127.0.0.1:3306: connect: connection refused
</code></pre>
<p>I have investigated a little and found that the problem may be that the containers need to run on the same network.
(They are, after all, both Docker containers, even though one is run by the Helm tool for K8S.)</p>
<p>this is on:
<a href="https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking" rel="nofollow noreferrer">kubernetes networking</a></p>
<p>When I run:</p>
<pre><code>nsenter -t your-container-pid -n ip addr
</code></pre>
<p>the pid is not directory, so I get the message:</p>
<pre><code>/proc/<pid>/ns/net - No such file or directory
</code></pre>
<p>How can I run my project so that it can use the MySQL instance (running in containers on K8S)?</p>
<p>Thanks.</p>
| Eitan | <p>Docker containers are isolated from other containers and the external network by default. There are several options to establish connection between Docker containers:</p>
<ul>
<li><p>Docker sets up a default <code>bridge</code> network automatically, through which the communication is possible between containers and between containers and the host machine. Both your containers should be on the <code>bridge</code> network - for container with your project to connect to your DB container by referring to it's name. More details on this approach and how it can be set up is <a href="https://docs.docker.com/network/network-tutorial-standalone/#use-the-default-bridge-network" rel="nofollow noreferrer">here</a>.</p>
</li>
<li><p>You can also create user-defined bridge network - basically, your own custom bridge network - and attach your Docker containers to it. In this way, both containers won't be connected to the default <code>bridge</code> network at all. Example of this approach is described in details <a href="https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks" rel="nofollow noreferrer">here</a>.</p>
<ol>
<li>First, user-defined network should be created:</li>
</ol>
<pre><code>docker network create <network-name>
</code></pre>
<ol start="2">
<li>List your newly created network and check with <code>inspect</code> command its IP address and that no containers are connected to it:</li>
</ol>
<pre><code>docker network ls
docker network inspect <network-name>
</code></pre>
<ol start="3">
<li>You can either connect your containers on their start with <code>--network</code> flag:</li>
</ol>
<pre><code>docker run -dit --name <container-name1> --network <network-name>
docker run -dit --name <container-name2> --network <network-name>
</code></pre>
<p>Or attach running containers by their name or by their ID to your newly created network by <code>docker network connect</code> - more options are listed <a href="https://docs.docker.com/engine/reference/commandline/network_connect/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>docker network connect <network-name> <container-name1>
docker network connect <network-name> <container-name2>
</code></pre>
<ol start="4">
<li>To verify that your containers are connected to the network, check again the <code>docker network inspect</code> command.</li>
</ol>
</li>
</ul>
<p>Once connected in network, containers can communicate with each other, and you can connect to them using another containerβs IP address or name.</p>
<p><strong><strong>EDIT</strong>:</strong> As suggested by @Eitan, when referring to the network instead of a changing IP address in <code>root:12345@tcp(127.0.0.1:3306)/test</code>, special DNS name <code>host.docker.internal</code> can be used - it resolves to the internal IP address used by the host.</p>
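<p>For example, your original command could then look roughly like this (assuming the MySQL port is actually reachable on the host, e.g. via a NodePort or <code>kubectl port-forward</code>):</p>
<pre><code>docker run --name test-docker --rm my-image:tag --db "root:12345@tcp(host.docker.internal:3306)/test"
</code></pre>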
| anarxz |
<p>We had a major outage when both our container registry and the entire K8S cluster lost power. When the cluster recovered faster than the container registry, my pod (part of a statefulset) is stuck in <code>Error: ImagePullBackOff</code>.</p>
<p>Is there a config setting to retry downloading the image from the CR periodically or recover without manual intervention?</p>
<p>I looked at <code>imagePullPolicy</code> but that does not apply for a situation when the CR is unavailable.</p>
| ucipass | <p>The <code>BackOff</code> part in <code>ImagePullBackOff</code> status means that Kubernetes is keep trying to pull the image from the registry, with an exponential back-off delay (10s, 20s, 40s, β¦). The delay between each attempt is increased until it reaches a compiled-in limit of 300 seconds (5 minutes) - more on it in <a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff" rel="noreferrer">Kubernetes docs</a>.</p>
<p>The <code>backOffPeriod</code> parameter for image pulls is a hard-coded constant in Kubernetes and is unfortunately not tunable right now, as changing it can affect node performance - otherwise, it can only be adjusted in the <a href="https://github.com/kubernetes/kubernetes/blob/5f920426103085a28069a1ba3ec9b5301c19d075/pkg/kubelet/kubelet.go#L155" rel="noreferrer">code</a> for a custom kubelet binary.
There is still an ongoing <a href="https://github.com/kubernetes/kubernetes/issues/57291" rel="noreferrer">issue</a> about making it adjustable.</p>
| anarxz |
<p>I have roughly 20 cronjobs in Kubernetes that handle various tasks at specific time intervals. Currently there's a fair bit of overlap causing usage of resources to spike, opposed to the usage graph being more flat.</p>
<p>Below is a rough example of one of my cronjobs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: my-task
spec:
schedule: "*/20 * * * *"
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
suspend: false
concurrencyPolicy: Forbid
jobTemplate:
spec:
backoffLimit: 1
ttlSecondsAfterFinished: 900
template:
spec:
serviceAccountName: my-task-account
containers:
- name: my-task
image: 12345678910.dkr.ecr.us-east-1.amazonaws.com/my-task:latest
command: ["/bin/sh"]
args:
- -c
- >-
python3 my-task.py
resources:
requests:
memory: "3Gi"
cpu: "800m"
limits:
memory: "5Gi"
cpu: "1500m"
restartPolicy: Never
</code></pre>
<p>Is there a way to stagger my jobs so that they aren't all running concurrently?</p>
<p>ie.</p>
<ul>
<li>job 1 starts at 12:00 with next run at 12:20</li>
<li>job 2 starts at 12:01 with next run at 12:21</li>
<li>job 3 starts at 12:02 with next run at 12:22</li>
<li>job 4 starts at 12:03 with next run at 12:23</li>
</ul>
<p>A solution where this is handled automatically would be 1st prize however a manually configured solution would also suffice.</p>
| Damian Jacobs | <p><strong>Posting my comment as the answer for better visibility.</strong></p>
<p>As far as I understood, all your jobs are configured separately, so you can set a specific schedule for each of them. E.g., job 1, which starts at 12:00 with the next run at 12:20, can be set up like this:</p>
<pre><code>spec:
schedule: "0,20 12 * * *"
</code></pre>
<p>and correspondingly for job 2:</p>
<pre><code>spec:
schedule: "01,21 12 * * *"
</code></pre>
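<p>If the jobs should keep their 20-minute cadence throughout the day rather than run only around 12:00, the same idea works with offset step expressions, for example:</p>
<pre><code># job 1: minutes 0, 20, 40 of every hour
schedule: "0-59/20 * * * *"
# job 2: minutes 1, 21, 41 of every hour
schedule: "1-59/20 * * * *"
# job 3: minutes 2, 22, 42 of every hour
schedule: "2-59/20 * * * *"
</code></pre>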
| anarxz |
<p>I have AWS EKS cluster with only Fargate profile, no Node Groups.
Is it possible to enable HPA in this case? I tried to enable metric server as described <a href="https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html" rel="noreferrer">here</a> but pod creation fails with error</p>
<pre><code>0/4 nodes are available: 4 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
</code></pre>
<p>Any insights?</p>
| yurybubnov | <p>You need to create a Fargate profile for this. Fargate profiles select pods by namespace, so the profile must match the namespace the metrics server is deployed into (kube-system by default); if you deploy it into another namespace, create a Fargate profile for that namespace instead.</p>
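<p>For example, a Fargate profile covering the kube-system namespace can be created with eksctl roughly like this (cluster and profile names are placeholders):</p>
<pre><code>eksctl create fargateprofile \
  --cluster my-cluster \
  --name fp-kube-system \
  --namespace kube-system
</code></pre>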
| Vikas |
<p>I'm writing as I've encountered an issue that doesn't seem to get resolved, would value the community's help.</p>
<p>I'm trying to push an image to a local registry I deployed on port 5000.</p>
<p>When I use this command <code>docker push localhost:5000/explorecalifornia.com</code> to push the image to my local registry, I get the following message</p>
<pre><code>Get "http://localhost:5000/v2/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>I've confirmed the registry is on port 5000 by using GET on postman, and I get a valid, expected <code>{}</code> response (since there's no images currently on my local registry).</p>
<p>I've since tried to fix this by updating my <code>etc/hosts</code> file to comment out "::1 localhost" per the advice of this <a href="https://stackoverflow.com/questions/57570423/net-http-request-canceled-while-waiting-for-connection-client-timeout-exceeded">post</a>. These are the contents of my <code>etc/hosts</code> file:</p>
<pre><code>##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
# ::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
</code></pre>
<p>I also updated my <code>etc/resolve.conf</code> file with the following nameservers per the advice from this <a href="https://github.com/docker/for-win/issues/611#issuecomment-531764867" rel="nofollow noreferrer">post</a>.</p>
<pre><code>nameserver 10.0.2.3
nameserver 8.8.8.8
nameserver 8.8.4.4
</code></pre>
<p>None of this worked. Did anyone else encounter this issue? Are there any recommendations to help fix it?</p>
<p>Here's the <a href="https://github.com/keshinpoint/Explore-California" rel="nofollow noreferrer">source code</a> if it helps! Thank you in advance :)</p>
| keshinpoint | <p>I use a work-around for this error and it is as below:</p>
<p>Firstly, tag the images using the localhost ip instead, ie</p>
<pre><code>docker tag imagename 127.0.0.1:5000/imagename
</code></pre>
<p>and,</p>
<pre><code>docker push 127.0.0.1:5000/imagename
</code></pre>
<p>I hope this works for you as well.</p>
| sonierk88 |
<p>First, I created a GKE Autopilot cluster using the GCP Console in my browser with default settings, then I tried applying my deployment with <code>kubectl apply -f thisfile.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 3
selector:
matchLabels:
run: my-app
template:
metadata:
labels:
run: my-app
spec:
containers:
- name: hello-app
image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
</code></pre>
<p>After, I've been rewriting this into a Terraform file, resulting in this:</p>
<pre><code>resource "google_container_cluster" "my_gke" {
name = "my-gke"
enable_autopilot = "true"
location = "southamerica-east1"
}
data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${google_container_cluster.my_gke.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(google_container_cluster.my_gke.master_auth[0].cluster_ca_certificate)
}
resource "kubernetes_deployment" "my_deployment" {
metadata {
name = "my-app"
}
spec {
replicas = 2
selector {
match_labels = {
run = "my-app"
}
}
template {
metadata {
labels = {
run = "my-app"
}
}
spec {
container {
image = "us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0"
name = "hello-app"
}
}
}
}
lifecycle {
ignore_changes = [
metadata[0].annotations,
metadata[0].resource_version,
spec[0].template[0].spec[0].container[0].security_context
]
}
}
</code></pre>
<p>The problem is as follows:</p>
<ul>
<li>When I apply it with <code>kubectl apply -f thisfile.yaml</code>, using the YAML notation, everything deploys fine.</li>
<li>When I remove the cluster and apply everything with Terraform, the first revision applies fine after some time, but the next revisions of the deployment keeps resulting on the GCP Console printing "Unschedulable" errors claiming "insufficient cpu" and/or "insufficient memory"</li>
</ul>
<p>PS. I've already tried to set resources' limits and requests inside of the PodSpec before, but nothing changed.</p>
<p>I am new to GKE and everything looks quite unreliable to me now. What am I doing wrong?</p>
| deniable_encryption | <p>I can confirm this is an ongoing issue. We have a support ticket opened with GCP to investigate the same behaviour.</p>
<p>The first deployment works fine, but the deployment of a new revision fails. Pods get created and deleted quickly (a few per second) and the ReplicaSets get into the hundreds pretty fast.</p>
<p>From the logs we can see the same Unschedulable errors, sometimes we see pods failing to start due to missing Volumes in the nodes (our own volumes most of the time, but also "kube-api-access" ones).</p>
<p>There is also errors like this in the logs:</p>
<p><code>message: "Operation cannot be fulfilled on replicasets.apps "deploytest-747c54b87d": the object has been modified; please apply your changes to the latest version and try again"</code></p>
<p>It only happens if we try to redeploy via terraform. If we patch the yaml directly with kubectl the new version goes fine through the replacement strategy, no errors.</p>
<p>The exact same terraform code works on GKE regular cluster. Problem seems to be limited to the Autopilot ones. Already tested in a brand new project, everything out of the box default.</p>
<p>Google is currently investigating the issue on an internal ticket. I'll try to report back here what else we can figure out.</p>
<p>-- edit:</p>
<p>It seems the pods need the pod.securityContext.seccompProfile parameter, which is not yet supported by Terraform. For some reason the same terraform code can instantiate a new deployment if it doesn't exist yet. But it cannot update the deployment.</p>
<p>The patch below adds the missing config, but it's not a solution. I guess we need to wait for that securityContext to be added to TF. (P.S. there is a deprecated annotation method to add this config, but I didn't try it.)</p>
<pre><code>kubectl patch deployment deploy-test -p='[{"op": "replace", "path": "/spec/template/spec/securityContext", "value":{"seccompProfile":{"type":"RuntimeDefault"}}}]' --type='json'
</code></pre>
<p>We just decided to move away from TF for the K8S deployments. Our TF only goes up to the cluster creation. Workloads will be managed somewhere else.</p>
| Positronico |
<p>We are seeing logs showing that calls to the k8s API are being made, despite our cluster being private and behind the GCP firewall with a rule that blocks all ingress except IAP IPs (and ICMP). What am I missing?</p>
<pre><code>"protoPayload":{
"@type":"type.googleapis.com/google.cloud.audit.AuditLog"
"authenticationInfo":{
"principalEmail":"system:anonymous"
}
"authorizationInfo":["0":{2}]
"methodName":"io.k8s.post"
"requestMetadata":{
"callerIp":"45.*.*.*"
"callerSuppliedUserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
}
"resourceName":"Autodiscover/Autodiscover.xml"
"serviceName":"k8s.io"
"status":{
"code":"7"
"message":"Forbidden"
}
}
</code></pre>
| Pat | <p>Private clusters have both a private and a public control plane endpoint, and you can choose to disable the public one - this is the highest level of restricted access. You can then manage the cluster through the private endpoint's internal IP address with tools like kubectl, and any VM that uses the same subnet as your cluster can also reach the private endpoint. However, it is important to note that even if you disable public endpoint access, Google can still use the control plane public endpoint for cluster management purposes, such as scheduled maintenance and automatic control plane upgrades.
If you need more information about how to create a private cluster with the public endpoint disabled, you can consult the following <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept#overview" rel="nofollow noreferrer">public document.</a></p>
<p>You can review your public endpoints with the following command:</p>
<pre><code>gcloud container clusters describe YOUR_CLUSTER_NAME
</code></pre>
<p>Also, you can verify that your cluster's nodes do not have external IP addresses with the following command:</p>
<pre><code>kubectl get nodes --output wide
</code></pre>
| Leo |
<p>I want my backend service, which is deployed on Kubernetes, to be accessible through an ingress with the path /sso-dev/. For that I have deployed my service on a Kubernetes container; the deployment, service and ingress manifests are mentioned below. But while accessing the ingress load balancer API with the path /sso-dev/ it throws a "response 404 (backend NotFound), service rules for the path non-existent" error.</p>
<p>I just need help accessing the backend service, which works fine via the Kubernetes load balancer IP.</p>
<p>Here is my ingress configuration:</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2022-06-22T12:30:49Z"
finalizers:
- networking.gke.io/ingress-finalizer-V2
generation: 1
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:nginx.ingress.kubernetes.io/rewrite-target: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-06-22T12:30:49Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:ingress.kubernetes.io/backends: {}
f:ingress.kubernetes.io/forwarding-rule: {}
f:ingress.kubernetes.io/target-proxy: {}
f:ingress.kubernetes.io/url-map: {}
f:finalizers:
.: {}
v:"networking.gke.io/ingress-finalizer-V2": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:32:13Z"
name: my-ingress
namespace: default
resourceVersion: "13073497"
uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
rules:
- http:
paths:
- backend:
service:
name: sso-dev-service
port:
number: 80
path: /sso-dev/*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 34.111.49.35
</code></pre>
<p>Deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-06-22T08:52:11Z"
generation: 1
labels:
app: sso-dev
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"cent-sha256-1"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:52:11Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T11:51:22Z"
name: sso-dev
namespace: default
resourceVersion: "13051665"
uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: sso-dev
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sso-dev
spec:
containers:
- image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent@sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
imagePullPolicy: IfNotPresent
name: cent-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-22T08:52:11Z"
lastUpdateTime: "2022-06-22T08:52:25Z"
message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-06-22T11:51:22Z"
lastUpdateTime: "2022-06-22T11:51:22Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
</code></pre>
<p>Service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
creationTimestamp: "2022-06-22T08:53:32Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: sso-dev
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:53:32Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T08:53:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.google.com/neg-status: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:30:49Z"
name: sso-dev-service
namespace: default
resourceVersion: "13071362"
uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.32.6.103
clusterIPs:
- 10.32.6.103
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30584
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sso-dev
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 104.197.93.226
</code></pre>
<p><a href="https://i.stack.imgur.com/zEh4m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zEh4m.png" alt="Load Balancer" /></a></p>
 | Aman | <p>You need to change the pathType in your ingress to Prefix, as follows:</p>
<pre><code>pathType: Prefix
</code></pre>
<p>I noted that you are using <code>pathType: ImplementationSpecific</code>. With this value, the matching behavior depends on the <code>IngressClass</code>, so for your case <code>pathType: Prefix</code> should be more helpful. Additionally, you can find more information about the ingress path types supported in Kubernetes in this <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">link</a>.</p>
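<p>Applied to the ingress from your question, the relevant part would look roughly like the sketch below (service name and port are kept from your manifest; note that with <code>Prefix</code> the trailing <code>/*</code> wildcard is dropped, since prefix matching works on path elements):</p>
<pre><code>spec:
  rules:
  - http:
      paths:
      - path: /sso-dev
        pathType: Prefix
        backend:
          service:
            name: sso-dev-service
            port:
              number: 80
</code></pre>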
| Leo |
<p>We have set up a GKE cluster using Terraform with private and shared networking:</p>
<p>Network configuration:</p>
<pre><code>resource "google_compute_subnetwork" "int_kube02" {
name = "int-kube02"
region = var.region
project = "infrastructure"
network = "projects/infrastructure/global/networks/net-10-23-0-0-16"
ip_cidr_range = "10.23.5.0/24"
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "10.60.0.0/14" # 10.60 - 10.63
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "10.56.0.0/16"
}
}
</code></pre>
<p>Cluster configuration:</p>
<pre><code>resource "google_container_cluster" "gke_kube02" {
name = "kube02"
location = var.region
initial_node_count = var.gke_kube02_num_nodes
network = "projects/ninfrastructure/global/networks/net-10-23-0-0-16"
subnetwork = "projects/infrastructure/regions/europe-west3/subnetworks/int-kube02"
master_authorized_networks_config {
cidr_blocks {
display_name = "admin vpn"
cidr_block = "10.42.255.0/24"
}
cidr_blocks {
display_name = "monitoring server"
cidr_block = "10.42.4.33/32"
}
cidr_blocks {
display_name = "cluster nodes"
cidr_block = "10.23.5.0/24"
}
}
ip_allocation_policy {
cluster_secondary_range_name = "pods"
services_secondary_range_name = "services"
}
private_cluster_config {
enable_private_nodes = true
enable_private_endpoint = true
master_ipv4_cidr_block = "192.168.23.0/28"
}
node_config {
machine_type = "e2-highcpu-2"
tags = ["kube-no-external-ip"]
metadata = {
disable-legacy-endpoints = true
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
</code></pre>
<p>The cluster is online and running fine. If I connect to one of the worker nodes, I can reach the API using <code>curl</code>:</p>
<pre><code>curl -k https://192.168.23.2
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
</code></pre>
<p>I also see a healthy cluster when using a SSH port forward:</p>
<pre><code>β― k get pods --all-namespaces --insecure-skip-tls-verify=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system event-exporter-gke-5479fd58c8-mv24r 2/2 Running 0 4h44m
kube-system fluentbit-gke-ckkwh 2/2 Running 0 4h44m
kube-system fluentbit-gke-lblkz 2/2 Running 0 4h44m
kube-system fluentbit-gke-zglv2 2/2 Running 4 4h44m
kube-system gke-metrics-agent-j72d9 1/1 Running 0 4h44m
kube-system gke-metrics-agent-ttrzk 1/1 Running 0 4h44m
kube-system gke-metrics-agent-wbqgc 1/1 Running 0 4h44m
kube-system kube-dns-697dc8fc8b-rbf5b 4/4 Running 5 4h44m
kube-system kube-dns-697dc8fc8b-vnqb4 4/4 Running 1 4h44m
kube-system kube-dns-autoscaler-844c9d9448-f6sqw 1/1 Running 0 4h44m
kube-system kube-proxy-gke-kube02-default-pool-2bf58182-xgp7 1/1 Running 0 4h43m
kube-system kube-proxy-gke-kube02-default-pool-707f5d51-s4xw 1/1 Running 0 4h43m
kube-system kube-proxy-gke-kube02-default-pool-bd2c130d-c67h 1/1 Running 0 4h43m
kube-system l7-default-backend-6654b9bccb-mw6bp 1/1 Running 0 4h44m
kube-system metrics-server-v0.4.4-857776bc9c-sq9kd 2/2 Running 0 4h43m
kube-system pdcsi-node-5zlb7 2/2 Running 0 4h44m
kube-system pdcsi-node-kn2zb 2/2 Running 0 4h44m
kube-system pdcsi-node-swhp9 2/2 Running 0 4h44m
</code></pre>
<p>So far so good. Then I set up the Cloud Router to announce the <code>192.168.23.0/28</code> network. This was successful and replicated to our local site using BGP. Running <code>show route 192.168.23.2</code> shows that the correct route is advertised and installed.</p>
<p>When trying to reach the API from the monitoring server <code>10.42.4.33</code> I just run into timeouts. All three, the Cloud VPN, the Cloud Router and the Kubernetes Cluster run in <code>europe-west3</code>.</p>
<p>When I try to ping one of the workers, it works completely fine, so networking in general works:</p>
<pre><code>[me@monitoring ~]$ ping 10.23.5.216
PING 10.23.5.216 (10.23.5.216) 56(84) bytes of data.
64 bytes from 10.23.5.216: icmp_seq=1 ttl=63 time=8.21 ms
64 bytes from 10.23.5.216: icmp_seq=2 ttl=63 time=7.70 ms
64 bytes from 10.23.5.216: icmp_seq=3 ttl=63 time=5.41 ms
64 bytes from 10.23.5.216: icmp_seq=4 ttl=63 time=7.98 ms
</code></pre>
<p>Google's <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">documentation</a> gives no hint about what could be missing. From what I understand, the cluster API should be reachable by now.</p>
<p>What could be missing and why is the API not reachable via VPN?</p>
 | Eetae0x | <p>It turns out I was missing the peering configuration documented here:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cp-on-prem-routing" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cp-on-prem-routing</a></p>
<pre><code>resource "google_compute_network_peering_routes_config" "peer_kube02" {
peering = google_container_cluster.gke_kube02.private_cluster_config[0].peering_name
project = "infrastructure"
network = "net-10-13-0-0-16"
export_custom_routes = true
import_custom_routes = false
}
</code></pre>
| Eetae0x |
<p>I'm experimenting with Kubernetes and a MinIO deployment. I have a 4-node k3s cluster, each node with four 50 GB disks. Following the instructions <a href="https://docs.min.io/minio/k8s/tenant-management/deploy-minio-tenant-using-commandline.html" rel="nofollow noreferrer">here</a> I have done this:</p>
<ol>
<li><p>First I installed <a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/" rel="nofollow noreferrer">krew</a> in order to install the <a href="https://docs.min.io/minio/k8s/deployment/deploy-minio-operator.html#deploy-operator-kubernetes" rel="nofollow noreferrer">minio</a> and the <a href="https://github.com/minio/directpv/blob/master/README.md" rel="nofollow noreferrer">directpv</a> operators.</p>
</li>
<li><p>I installed those two without a problem.</p>
</li>
<li><p>I formatted every <strong>Available</strong> HDD in the nodes using <code>kubectl directpv drives format --drives /dev/vd{b...e} --nodes k3s{1...4}</code></p>
</li>
<li><p>I then proceed to make the deployment, first I create the namespace with <code>kubectl create namespace minio-tenant-1</code>, and then I actually create the tenant with:</p>
<p><code>kubectl minio tenant create minio-tenant-1 --servers 4 --volumes 8 --capacity 10Gi --storage-class direct-csi-min-io --namespace minio-tenant-1</code></p>
</li>
<li><p>The only thing I need to do then is expose the port for access, which I do with <code>kubectl port-forward service/minio 443:443</code> (I'm guessing there should be a better way to achieve this, as that command apparently isn't permanent; maybe using a LoadBalancer or NodePort type Service in the Kubernetes cluster).</p>
</li>
</ol>
<p>So far so good, but I'm facing some problems:</p>
<ul>
<li>When I try to create an alias for the server using <a href="https://docs.min.io/docs/minio-client-complete-guide.html" rel="nofollow noreferrer">mc</a>, the prompt answers back with:</li>
</ul>
<blockquote>
<p>mc: Unable to initialize new alias from the provided
credentials. Get
"https://127.0.0.1/probe-bucket-sign-9aplsepjlq65/?location=": x509:
cannot validate certificate for 127.0.0.1 because it doesn't contain
any IP SANs</p>
</blockquote>
<p>I can bypass this by simply adding the <code>--insecure</code> option, but I don't know why it throws this error; I guess it is something about how k3s manages the TLS self-signed certificates.</p>
<ul>
<li><p>Once I have created the alias (I named it test) for the server with the <code>--insecure</code> option, I try to create a bucket, but the server always answers back with:</p>
<p><code>mc mb test/hello</code></p>
<p><code>mc: <ERROR> Unable to make bucket \test/hello. The specified bucket does not exist.</code></p>
</li>
</ul>
<p>So... I can't really use it... Any help will be appreciated, I need to know what I'm doing wrong.</p>
 | k.Cyborg | <p>Guided by the information in the <a href="https://docs.min.io/docs/how-to-secure-access-to-minio-server-with-tls.html" rel="nofollow noreferrer">MinIO documentation</a>, you have to generate a public certificate. First, generate a private key using this command:</p>
<pre><code>certtool.exe --generate-privkey --outfile NameOfKey.key
</code></pre>
<p>After that create a file called <code>cert.cnf</code> with content below:</p>
<pre><code># X.509 Certificate options
#
# DN options
# The organization of the subject.
organization = "Example Inc."
# The organizational unit of the subject.
#unit = "sleeping dept."
# The state of the certificate owner.
state = "Example"
# The country of the subject. Two letter code.
country = "EX"
# The common name of the certificate owner.
cn = "Sally Certowner"
# In how many days, counting from today, this certificate will expire.
expiration_days = 365
# X.509 v3 extensions
# DNS name(s) of the server
dns_name = "localhost"
# (Optional) Server IP address
ip_address = "127.0.0.1"
# Whether this certificate will be used for a TLS server
tls_www_server
</code></pre>
<p>Run <code>certtool.exe</code> and specify the configuration file to generate a certificate:</p>
<pre><code>certtool.exe --generate-self-signed --load-privkey NameOfKey.key --template cert.cnf --outfile public.crt
</code></pre>
<p>At the end, put the public certificate into:</p>
<pre><code>~/.minio/certs/CAs/
</code></pre>
| Mykola |
<p>I'm running PowerDNS recursor inside my k8s cluster. My Python script is on a different <code>pod</code> and does rDNS lookups against my <code>powerdns</code> <code>recursor</code> app. I have my HPA <code>Max replica</code> set to <code>8</code>. However, I do not think the load is the problem here. I'm unsure what to do to resolve the timeout error that I'm getting below. I can increase the replicas to solve the problem temporarily, but then it happens again.</p>
<p><code>[ipmetadata][MainThread][source.py][144][WARNING]: dns_error code=12, message=Timeout while contacting DNS servers</code></p>
<p>It seems like my pods are rejecting incoming traffic, and therefore it's outputting dns_error code=12.</p>
<p>Here is the part of my script that's running the rDNS lookup:</p>
<pre><code> return_value = {
'rdns': None
}
try:
async for attempt in AsyncRetrying(stop=stop_after_attempt(3)):
with attempt:
try:
if ip:
result = await self._resolver.query(ip_address(ip).reverse_pointer, 'PTR')
return_value['rdns'] = result.name
return return_value
except DNSError as dns_error:
# 1 = DNS server returned answer with no data
# 4 = Domain name not found
# (seems to just be a failure of rdns lookup no sense in retrying)
# 11 = Could not contact DNS servers
if int(dns_error.args[0]) in [1, 4, 11]:
return return_value
LOG.warning('dns_error code=%d, message=%s, ip=%s', dns_error.args[0], dns_error.args[1], ip)
raise
except RetryError as retry_ex:
inner_exception = retry_ex.last_attempt.exception()
if isinstance(inner_exception, DNSError):
# 12 = Timeout while contacting DNS servers
LOG.error('dns_error code=%d, message=%s, ip=%s', inner_exception.args[0], inner_exception.args[1], ip)
else:
LOG.exception('rnds lookup failed')
return return_value
</code></pre>
| thevoipman | <p>The error code 12 indicates that the PowerDNS recursor did not receive a response from any of the authoritative servers for the queried domain within the configured timeout. This could be due to network issues, firewall rules, rate limiting, or misconfiguration of the recursor or the authoritative servers.</p>
<h2>Possible solutions</h2>
<p>There are a few things you can try to resolve this timeout error:</p>
<ul>
<li>Check the network connectivity and latency between your python pod and your recursor pod, and between your recursor pod and the authoritative servers. You can use tools like <code>ping</code>, <code>traceroute</code>, or <code>dig</code> to diagnose network problems.</li>
<li>Check the firewall rules on your k8s cluster and on the authoritative servers. Make sure they allow UDP and TCP traffic on port 53 for DNS queries and responses. You can use tools like <code>iptables</code>, <code>nftables</code>, or <code>ufw</code> to manage firewall rules.</li>
<li>Check the rate limiting settings on your recursor and on the authoritative servers. Rate limiting is a mechanism to prevent denial-of-service attacks or abuse of DNS resources by limiting the number of queries per second from a given source. You can use tools like <code>pdnsutil</code> or <code>pdns_control</code> to configure rate limiting on PowerDNS recursor and authoritative servers.</li>
<li>Check the configuration of your recursor and the authoritative servers. Make sure they have the correct IP addresses, domain names, and DNSSEC settings. You can use tools like <code>pdnsutil</code> or <code>pdns_control</code> to manage PowerDNS configuration files and settings.</li>
</ul>
<h2>Examples</h2>
<p>Here are some examples of how to use the tools mentioned above to troubleshoot the timeout error:</p>
<ul>
<li>To ping the recursor pod from the python pod, you can use the following command:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import subprocess
recursor_pod_ip = "10.0.0.1" # replace with the actual IP address of the recursor pod
ping_result = subprocess.run(["ping", "-c", "4", recursor_pod_ip], capture_output=True)
print(ping_result.stdout.decode())
</code></pre>
<p>This will send four ICMP packets to the recursor pod and print the output. You should see something like this:</p>
<pre><code>PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.102 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.101 ms
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3060ms
rtt min/avg/max/mdev = 0.098/0.106/0.123/0.010 ms
</code></pre>
<p>This indicates that the network connectivity and latency between the python pod and the recursor pod are good.</p>
<ul>
<li>To traceroute the authoritative server from the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- traceroute 8.8.8.8
</code></pre>
<p>This will trace the route taken by packets from the recursor pod to the authoritative server at 8.8.8.8 (Google DNS). You should see something like this:</p>
<pre><code>traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.123 ms 0.098 ms 0.102 ms
2 10.0.1.1 (10.0.1.1) 0.456 ms 0.432 ms 0.419 ms
3 10.0.2.1 (10.0.2.1) 0.789 ms 0.765 ms 0.752 ms
4 192.168.0.1 (192.168.0.1) 1.123 ms 1.098 ms 1.085 ms
5 192.168.1.1 (192.168.1.1) 1.456 ms 1.432 ms 1.419 ms
6 192.168.2.1 (192.168.2.1) 1.789 ms 1.765 ms 1.752 ms
7 192.168.3.1 (192.168.3.1) 2.123 ms 2.098 ms 2.085 ms
8 192.168.4.1 (192.168.4.1) 2.456 ms 2.432 ms 2.419 ms
9 192.168.5.1 (192.168.5.1) 2.789 ms 2.765 ms 2.752 ms
10 8.8.8.8 (8.8.8.8) 3.123 ms 3.098 ms 3.085 ms
</code></pre>
<p>This indicates that the route to the authoritative server is clear and there are no firewall blocks or network issues.</p>
<ul>
<li>To dig the domain name from the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- dig example.com
</code></pre>
<p>This will send a DNS query for the domain name example.com to the recursor pod and print the response. You should see something like this:</p>
<pre><code>; <<>> DiG 9.11.5-P4-5.1ubuntu2.1-Ubuntu <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 3600 IN A 93.184.216.34
;; Query time: 12 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Tue Jun 15 12:34:56 UTC 2021
;; MSG SIZE rcvd: 56
</code></pre>
<p>This indicates that the recursor pod received a valid response from the authoritative server for the domain name example.com.</p>
<ul>
<li>To check the rate limiting settings on the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- pdns_control get-all
</code></pre>
<p>This will print all the configuration settings of the recursor pod. You should look for the following settings:</p>
<pre><code>max-cache-entries=1000000
max-packetcache-entries=500000
max-recursion-depth=40
max-tcp-clients=128
max-udp-queries-per-round=1000
max-udp-queries-per-second=10000
</code></pre>
<p>These settings control the maximum number of cache entries, TCP clients, UDP queries, and recursion depth that the recursor pod can handle. You can adjust them according to your needs and resources. You can use the following command to set a new value for a setting:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- pdns_control set max-udp-queries-per-second 20000
</code></pre>
<p>This will set the maximum number of UDP queries per second to 20000.</p>
<ul>
<li>To check the configuration of the authoritative server at 8.8.8.8, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>dig +short CHAOS TXT version.bind @8.8.8.8
</code></pre>
<p>This will send a DNS query for the version of the authoritative server at 8.8.8.8. You should see something like this:</p>
<pre><code>"google-public-dns-a.google.com"
</code></pre>
<p>This indicates that the authoritative server is running Google Public DNS, which is a well-known and reliable DNS service. You can check the documentation of Google Public DNS for more information on its configuration and features. You can also use the following command to check the DNSSEC status of the authoritative server:</p>
<pre class="lang-bash prettyprint-override"><code>dig +short CHAOS TXT id.server @8.8.8.8
</code></pre>
<p>This will send a DNS query for the identity of the authoritative server at 8.8.8.8. You should see something like this:</p>
<pre><code>"edns0"
</code></pre>
<p>This indicates that the authoritative server supports EDNS0, which is an extension of the DNS protocol that enables DNSSEC and other features. You can check the documentation of EDNS0 for more information on its functionality and benefits.</p>
| Ahmed Mohamed |
<p>I have been facing this problem since yesterday; there were no problems before.<br />
My environment is:</p>
<ul>
<li>Windows 11</li>
<li>Docker Desktop 4.4.4</li>
<li>minikube 1.25.1</li>
<li>kubernetes-cli 1.23.3</li>
</ul>
<h1>Reproduce</h1>
<h2>1. Start minikube and create cluster</h2>
<pre><code>minikube start
</code></pre>
<h2>2. Check pods</h2>
<pre><code>kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-z7rpf 1/1 Running 0 22s
kube-system etcd-minikube 1/1 Running 1 34s
kube-system kube-apiserver-minikube 1/1 Running 1 34s
kube-system kube-controller-manager-minikube 1/1 Running 1 33s
kube-system kube-proxy-zdr9n 1/1 Running 0 22s
kube-system kube-scheduler-minikube 1/1 Running 1 34s
kube-system storage-provisioner 1/1 Running 0 29s
</code></pre>
<h2>3. Add new pod (in this case, use istio)</h2>
<pre><code>istioctl manifest apply -y
</code></pre>
<h2>4. Check pods</h2>
<pre><code>kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system istio-ingressgateway-c6d9f449-nhbvg 1/1 Running 0 13s
istio-system istiod-5ffcccb477-5hzgs 1/1 Running 0 19s
kube-system coredns-64897985d-nxhxm 1/1 Running 0 67s
kube-system etcd-minikube 1/1 Running 2 79s
kube-system kube-apiserver-minikube 1/1 Running 2 82s
kube-system kube-controller-manager-minikube 1/1 Running 2 83s
kube-system kube-proxy-8jfz7 1/1 Running 0 67s
kube-system kube-scheduler-minikube 1/1 Running 2 83s
kube-system storage-provisioner 1/1 Running 1 (45s ago) 77s
</code></pre>
<h2>5. Restart minikube</h2>
<pre><code>minikube stop
</code></pre>
<p>Then go back to step 1 and check the pods; <code>kubectl get po -A</code> returns the same pods as <strong>#2</strong>.<br />
(In this case, the istio-system pods are lost.)</p>
<p>Created pods etc. were retained until yesterday, even after restarting minikube or the PC.</p>
<p>Does anyone face the same problem or have a solution?</p>
| akrsum | <p>This seems to be a bug introduced with 1.25.0 version of minikube: <a href="https://github.com/kubernetes/minikube/issues/13503" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/13503</a> .
A PR to revert the changes introducing the bug is already open: <a href="https://github.com/kubernetes/minikube/pull/13506" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/pull/13506</a></p>
<p>The fix is scheduled for minikube v1.26.</p>
| CdrBlair |
<p>I would like to install a Helm release using Argo CD. I defined a Helm app declaratively like the following:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: moon
namespace: argocd
spec:
project: aerokube
source:
chart: moon2
repoURL: https://charts.aerokube.com/
targetRevision: 2.4.0
helm:
valueFiles:
- values.yml
destination:
server: "https://kubernetes.default.svc"
namespace: moon1
syncPolicy:
syncOptions:
- CreateNamespace=true
</code></pre>
<p>My values.yml contains:</p>
<pre><code>customIngress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: "letsencrypt"
ingressClassName: nginx
host: moon3.benighil-mohamed.com
tls:
- secretName: moon-tls
hosts:
- moon3.benighil-mohamed.com
configs:
default:
containers:
vnc-server:
repository: quay.io/aerokube/vnc-server
resources:
limits:
cpu: 400m
memory: 512Mi
requests:
cpu: 200m
memory: 512Mi
</code></pre>
<p>Notice that the app does not take values.yml into consideration, and I get the following error:</p>
<pre><code>rpc error: code = Unknown desc = Manifest generation error (cached): `helm template . --name-template moon --namespace moon1 --kube-version 1.23 --values /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml --api-versions acme.cert-manager.io/v1 --api-versions acme.cert-manager.io/v1/Challenge --api-versions acme.cert-manager.io/v1/Order --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2 --api-versions autoscaling/v2/HorizontalPodAutoscaler --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta2 --api-versions autoscaling/v2beta2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions batch/v1beta1 --api-versions batch/v1beta1/CronJob --api-versions ceph.rook.io/v1 --api-versions ceph.rook.io/v1/CephBlockPool --api-versions ceph.rook.io/v1/CephBlockPoolRadosNamespace --api-versions ceph.rook.io/v1/CephBucketNotification --api-versions ceph.rook.io/v1/CephBucketTopic --api-versions ceph.rook.io/v1/CephClient --api-versions ceph.rook.io/v1/CephCluster --api-versions ceph.rook.io/v1/CephFilesystem --api-versions ceph.rook.io/v1/CephFilesystemMirror --api-versions ceph.rook.io/v1/CephFilesystemSubVolumeGroup --api-versions ceph.rook.io/v1/CephNFS --api-versions ceph.rook.io/v1/CephObjectRealm --api-versions ceph.rook.io/v1/CephObjectStore --api-versions ceph.rook.io/v1/CephObjectStoreUser --api-versions ceph.rook.io/v1/CephObjectZone --api-versions ceph.rook.io/v1/CephObjectZoneGroup --api-versions ceph.rook.io/v1/CephRBDMirror --api-versions cert-manager.io/v1 --api-versions cert-manager.io/v1/Certificate --api-versions cert-manager.io/v1/CertificateRequest --api-versions cert-manager.io/v1/ClusterIssuer --api-versions cert-manager.io/v1/Issuer --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions crd.projectcalico.org/v1 --api-versions crd.projectcalico.org/v1/BGPConfiguration --api-versions crd.projectcalico.org/v1/BGPPeer --api-versions crd.projectcalico.org/v1/BlockAffinity --api-versions crd.projectcalico.org/v1/CalicoNodeStatus --api-versions crd.projectcalico.org/v1/ClusterInformation --api-versions crd.projectcalico.org/v1/FelixConfiguration --api-versions crd.projectcalico.org/v1/GlobalNetworkPolicy --api-versions crd.projectcalico.org/v1/GlobalNetworkSet --api-versions crd.projectcalico.org/v1/HostEndpoint --api-versions crd.projectcalico.org/v1/IPAMBlock --api-versions crd.projectcalico.org/v1/IPAMConfig --api-versions crd.projectcalico.org/v1/IPAMHandle 
--api-versions crd.projectcalico.org/v1/IPPool --api-versions crd.projectcalico.org/v1/IPReservation --api-versions crd.projectcalico.org/v1/KubeControllersConfiguration --api-versions crd.projectcalico.org/v1/NetworkPolicy --api-versions crd.projectcalico.org/v1/NetworkSet --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions discovery.k8s.io/v1beta1 --api-versions discovery.k8s.io/v1beta1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions events.k8s.io/v1beta1 --api-versions events.k8s.io/v1beta1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta1 --api-versions flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration --api-versions flowcontrol.apiserver.k8s.io/v1beta2 --api-versions flowcontrol.apiserver.k8s.io/v1beta2/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta2/PriorityLevelConfiguration --api-versions moon.aerokube.com/v1 --api-versions moon.aerokube.com/v1/BrowserSet --api-versions moon.aerokube.com/v1/Config --api-versions moon.aerokube.com/v1/DeviceSet --api-versions moon.aerokube.com/v1/License --api-versions moon.aerokube.com/v1/Quota --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions node.k8s.io/v1beta1 --api-versions node.k8s.io/v1beta1/RuntimeClass --api-versions objectbucket.io/v1alpha1 --api-versions objectbucket.io/v1alpha1/ObjectBucket --api-versions objectbucket.io/v1alpha1/ObjectBucketClaim --api-versions operator.tigera.io/v1 --api-versions operator.tigera.io/v1/APIServer --api-versions operator.tigera.io/v1/ImageSet --api-versions operator.tigera.io/v1/Installation --api-versions operator.tigera.io/v1/TigeraStatus --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions policy/v1beta1 --api-versions policy/v1beta1/PodDisruptionBudget --api-versions policy/v1beta1/PodSecurityPolicy --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions snapshot.storage.k8s.io/v1 --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --api-versions snapshot.storage.k8s.io/v1beta1 --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotContent --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions storage.k8s.io/v1beta1 --api-versions storage.k8s.io/v1beta1/CSIStorageCapacity --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions v1/PersistentVolume --api-versions v1/PersistentVolumeClaim 
--api-versions v1/Pod --api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --include-crds` failed exit status 1: Error: open /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml: no such file or directory
</code></pre>
<p>Note that both <code>application.yml</code> and <code>values.yml</code> are located in the same directory on my local machine, i.e. the structure of the two files in question looks like:</p>
<pre><code>.
βββ application.yml
βββ values.yml
</code></pre>
<p>Any help please ?</p>
 | Mohamed | <p>The cleanest way to achieve what you want is to use the remote chart as a dependency:</p>
<p>Chart.yaml</p>
<pre><code>name: mychartname
version: 1.0.0
apiVersion: v2
dependencies:
- name: moon2
version: "2.4.0"
repository: "https://charts.aerokube.com/"
</code></pre>
<p>And overriding its values like this:</p>
<p>values.yaml</p>
<pre><code>moon2:
customIngress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: "letsencrypt"
ingressClassName: nginx
host: moon3.benighil-mohamed.com
tls:
- secretName: moon-tls
hosts:
- moon3.benighil-mohamed.com
configs:
default:
containers:
vnc-server:
repository: quay.io/aerokube/vnc-server
resources:
limits:
cpu: 400m
memory: 512Mi
requests:
cpu: 200m
memory: 512Mi
</code></pre>
<p>Pay attention to this file. You need to create a key in your values file with the same name as the dependency (<code>moon2</code> in your case) and indent the values you want to override by one level.</p>
<p>You need to upload both of these files to a repository and point your ArgoCD application URL to this repository.</p>
<p>This has the advantage that whenever the upstream helm chart gets updated, all you need to do is increase the version in Chart.yaml</p>
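<p>For illustration, the Application from your question would then point at that repository instead of the chart repository directly. A minimal sketch, assuming the wrapper chart lives at the root of a Git repository (the repo URL and revision are placeholders):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: moon
  namespace: argocd
spec:
  project: aerokube
  source:
    repoURL: https://git.example.com/your-org/moon-wrapper.git  # placeholder
    targetRevision: main                                        # placeholder
    path: .   # directory containing Chart.yaml and values.yaml
  destination:
    server: "https://kubernetes.default.svc"
    namespace: moon1
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
</code></pre>
<p>Since values.yaml sits at the chart root, Helm picks it up by default and no explicit <code>valueFiles</code> entry is needed.</p>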
| HiroCereal |
<p>Just finished reading Nigel Poulton's <strong>The Kubernetes Book</strong>. I'm left with the question of whether or not a Deployment can specify multiple ReplicaSets.</p>
<p>When I think Deployment, I think of it in the traditional sense of an entire application being deployed. Or is there meant to be a Deployment for each microservice?</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: hello-deploy
spec:
replicas: 10
selector:
matchLabels:
app: hello-world
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-pod
        image: nigelpoulton/k8sbook:latest
ports:
- containerPort: 8080
</code></pre>
| Kermit | <p>The replica sets can be multiple up to a limit of 10 based on the number of updates that have been done using deployment. But only one <code>replicaSet</code> (the latest one) should be showing the number of pods; all other older sets should be showing <code>0</code>.</p>
<p>We can set <code>revisionHistoryLimit</code> to specify how many old <code>replicaSet</code>s we want to retain:
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy</a></p>
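<p>For example, here is a trimmed-down sketch of the Deployment from the question (using the current <code>apps/v1</code> apiVersion) that keeps only the three most recent old ReplicaSets:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  revisionHistoryLimit: 3   # retain only the 3 most recent old ReplicaSets
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
</code></pre>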
| mahesh kumar |
<p>I use ingress-nginx installed with its Helm chart. I used to have the problem that when I uploaded a file (50 MB) I would get the error "413 Request Entity Too Large" from nginx.</p>
<p>So I changed the proxy-body-size value in my values.yaml file to 150m, so I should now be able to upload my file.
But now I get the error "413 Request Entity Too Large openresty/1.13.6.2".
I checked the nginx.conf file on the ingress controller and the value for client_max_body_size is correctly set to 150m.</p>
<p>After some research I found out that OpenResty is used by the Lua module in nginx.
Does anybody know how I can apply this setting to OpenResty as well, or what parameter I am missing?</p>
<p>My current config is the following:</p>
<p>values.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>ingress-nginx:
defaultBackend:
nodeSelector:
beta.kubernetes.io/os: linux
controller:
replicaCount: 2
resources:
requests:
cpu: 1
memory: 4Gi
limits:
cpu: 2
memory: 7Gi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 90
targetMemoryUtilizationPercentage: 90
ingressClassResource:
name: nginx
controllerValue: "k8s.io/nginx"
nodeSelector:
beta.kubernetes.io/os: linux
admissionWebhooks:
enabled: false
patch:
nodeSelector:
beta.kubernetes.io/os: linux
extraArgs:
ingress-class: "nginx"
config:
proxy-buffer-size: "16k"
proxy-body-size: "150m"
client-body-buffer-size: "128k"
large-client-header-buffers: "4 32k"
ssl-redirect: "false"
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
use-proxy-protocol: "false"
</code></pre>
<p>ingress.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: namespacename
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
nginx.ingress.kubernetes.io/client-body-buffer-size: "128k"
nginx.ingress.kubernetes.io/proxy-body-size: "150m"
spec:
tls:
- hosts:
- hostname
rules:
- host: hostname
http:
paths:
- path: /assets/static/
pathType: ImplementationSpecific
backend:
service:
name: servicename
port:
number: 8080
</code></pre>
 | Alex | <p>So it turns out the application which had the error had another reverse proxy in front of it (which uses Lua and OpenResty for OAuth registration).
The proxy-body-size setting needed to be raised there too. After that the file upload worked.</p>
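<p>For anyone in a similar situation: if that OpenResty-based proxy runs in the cluster and loads its nginx configuration from a ConfigMap (an assumption; your setup may differ), the directive to raise is <code>client_max_body_size</code>, for example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth-proxy-nginx-conf   # hypothetical name
  namespace: auth                # hypothetical namespace
data:
  # snippet assumed to be included from the proxy's http/server block
  body-size.conf: |
    client_max_body_size 150m;
</code></pre>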
| Alex |
<p>I want to send pod logs to ELK, but after deploying fluentd I get an error. I followed the tutorial from the official Fluentd documentation.</p>
<p>EKS Version 1.22</p>
<p>I set Suppress_Type_Name On, but it did not solve this issue.</p>
<pre><code>[2022/06/20 16:23:07] [error] [output:es:es.0] HTTP status=400 URI=/_bulk, response:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400}
</code></pre>
<p>my configmap</p>
<pre><code> fluent-bit.conf: |
[SERVICE]
Flush 1
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
@INCLUDE input-kubernetes.conf
@INCLUDE filter-kubernetes.conf
@INCLUDE output-elasticsearch.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
Parser docker
DB /var/log/flb_kube.db
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Refresh_Interval 10
filter-kubernetes.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
output-elasticsearch.conf: |
[OUTPUT]
Name es
Match *
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
Logstash_Format On
Replace_Dots On
Retry_Limit False
</code></pre>
 | Juan Daniel | <p>I was able to resolve the issue with these 3 steps:</p>
<p>Step 1: Update your Fluent Bit image to the latest version.</p>
<blockquote>
<pre><code> image: fluent/fluent-bit:2.1.1
</code></pre>
</blockquote>
<p>You can get the deployment file from <a href="https://docs.fluentbit.io/manual/v/1.5/installation/kubernetes" rel="nofollow noreferrer">here</a></p>
<p>Step 2: Add "Suppress_Type_Name On" to output-elasticsearch.conf.</p>
<pre><code> output-elasticsearch.conf: |
[OUTPUT]
Name es
Match *
Host ${FLUENT_ELASTICSEARCH_HOST}
Port ${FLUENT_ELASTICSEARCH_PORT}
HTTP_User ${FLUENT_ELASTICSEARCH_USER}
HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD}
Logstash_Format On
Replace_Dots On
Retry_Limit False
Suppress_Type_Name On
</code></pre>
<p>Step 3: Delete the Fluent Bit pods and reapply the DaemonSet.</p>
<pre><code>kubectl delete -f fluentbit-ds.yaml
kubectl apply -f fluentbit-ds.yaml
</code></pre>
| Sebinn Sebastian |
<p>I installed aws-load-balancer-controller on a new EKS cluster (version v1.21.5-eks-bc4871b).</p>
<p>I followed this guide <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/" rel="noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/</a> step by step, but when I try to deploy an ingress object I get the error mentioned in the title.
I looked through GitHub issues such as <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2039" rel="noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2039</a> but didn't find any answer.</p>
<p>What else can I do for checking this?</p>
 | yershalom | <p>In case it might help others: I also had the original issue using a Fargate profile and a worker node for CoreDNS. The solution, which I found elsewhere, was just adding:</p>
<pre><code>node_security_group_additional_rules = {
ingress_allow_access_from_control_plane = {
type = "ingress"
protocol = "tcp"
from_port = 9443
to_port = 9443
source_cluster_security_group = true
description = "Allow access from control plane to webhook port of AWS load balancer controller"
}
}
</code></pre>
| Emo |
<p>We have a requirement to connect a K8s POD to an Azure VPN Gateway in a secure manner. This is what our network topology is:</p>
<p><a href="https://i.stack.imgur.com/sH8cx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sH8cx.png" alt="enter image description here" /></a></p>
<p>Firstly is this possible to achieve and secondly how would we go about creating this peering? If peering isn't the best option then what would you recommend to solve this problem? TIA</p>
<p>We have created the VPN gateway, VNET, and a local network and confirmed that they can communicate in both directions. The problem is how we bring this into K8s.</p>
 | Andy B | <p>I tried to reproduce the same in my environment. I have created a virtual network gateway, a VNet, and a local network gateway like below:</p>
<p><img src="https://i.stack.imgur.com/z2Ddu.png" alt="enter image description here" /></p>
<p>In virtual network added gateway subnet like below:</p>
<p><img src="https://i.stack.imgur.com/kBnRd.png" alt="enter image description here" /></p>
<p>created local network gateway :</p>
<p><img src="https://i.stack.imgur.com/p1wVb.png" alt="enter image description here" /></p>
<p>On-premises, configure the <a href="https://www.mcse.gen.tr/demand-dial-ile-site-to-site-vpn/" rel="nofollow noreferrer">Routing and Remote Access role</a>: in Tools -> select Custom configuration -> VPN access, LAN routing -> Finish.</p>
<p>In Network Interfaces select -> New Demand-dial Interface -> for the VPN type select IKEv2, and on the destination address screen provide the public IP of the virtual network gateway.</p>
<p><img src="https://i.stack.imgur.com/b1uuG.png" alt="enter image description here" /></p>
<p>Now, try to create a connection like below:</p>
<p><img src="https://i.stack.imgur.com/hUCLr.png" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/WPFrV.png" alt="enter image description here" /></p>
<p>Now, I have created an aks cluster with pod like below:</p>
<p><img src="https://i.stack.imgur.com/afFJ2.png" alt="enter image description here" /></p>
<p>To communicate with a pod, make sure to use the <em><strong>Azure Container Networking Interface (CNI)</strong></em>. With it, every pod gets an IP address from the subnet and can be accessed directly, and each pod can directly communicate with other pods and services.
You can size AKS nodes based on the maximum number of pods they can support. Advanced network features and scenarios such as Virtual Nodes or Network Policies (either Azure or Calico) are supported with Azure CNI.</p>
<p>When using Azure CNI, every pod is assigned a VNet-routable private IP from the subnet. So, <em><strong>the gateway should be able to reach the pods directly.</strong></em> <a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/configure-kubenet.md#virtual-network-peering-and-expressroute-connections" rel="nofollow noreferrer">Refer</a></p>
<p><img src="https://i.stack.imgur.com/Iyu5Z.png" alt="enter image description here" /></p>
<ul>
<li>You can use AKS's advanced features such as virtual nodes or Azure Network Policy. Use <a href="https://docs.projectcalico.org/v3.9/security/calico-network-policy" rel="nofollow noreferrer">Calico network policies</a>. A network policy lets you control how traffic flows between pods within a cluster, for example:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: backend-policy
spec:
podSelector:
matchLabels:
app: backend
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
</code></pre>
<p><img src="https://i.stack.imgur.com/5S5Ge.png" alt="enter image description here" /></p>
<p>To more in detail <em><strong>refer</strong></em> this link:</p>
<p><a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/configure-kubenet.md" rel="nofollow noreferrer">Azure configure-kubenet - GitHub</a></p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-network" rel="nofollow noreferrer">Network connectivity and secure in Azure Kubernetes Service | Microsoft</a></p>
| Imran |
<p>I have a service principal which is an Owner on the subscription that I am using to create an Azure Kubernetes Service cluster as part of a script. I want my cluster to use:</p>
<pre><code>Kubernetes RBAC --> enable
AKS-managed AAD --> enable
Local accounts --> disabled
</code></pre>
<p>I would like the same Service Principal that creates the cluster to be able to create k8s roles and role bindings; however, in order to do this the Service Principal seems to need a cluster-admin role binding.</p>
<p>When creating the cluster there is the option of adding an array of "admin group object ids" which seems to create cluster-admin role bindings for AD Groups. However the SPN cannot be a part of a Group.</p>
<p>Is there any way around this?</p>
| floaty39 | <p><em><strong>I tried to reproduce the same in my environment and got the results as below:</strong></em></p>
<p>To assign Azure Kubernetes Service RBAC Cluster Admin to service principal you can make use of below cli command:</p>
<pre><code>az role assignment create --assignee <appId> --scope <resourceScope> --role Azure Kubernetes Service RBAC Cluster Admin
</code></pre>
<p><img src="https://i.stack.imgur.com/a0pBV.png" alt="enter image description here" /></p>
<p><strong>When I run this command kubernetes roles are added successfully like below</strong></p>
<p><img src="https://i.stack.imgur.com/D5bXN.png" alt="enter image description here" /></p>
<p>Alternatively, in Azure AD create a group and add the service principal as a member, like below:</p>
<p><img src="https://i.stack.imgur.com/LwcPC.png" alt="enter image description here" /></p>
<p>Now, add the group to the cluster configuration like below:</p>
<p><img src="https://i.stack.imgur.com/nOKpq.png" alt="enter image description here" /></p>
<p>You can use the CLI command below to create the AKS cluster using the service principal:</p>
<pre><code>az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--service-principal <appId> \
--client-secret <password>
</code></pre>
<p><em><strong>Reference</strong></em>:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal?tabs=azure-cli" rel="nofollow noreferrer">Use a service principal with Azure Kubernetes Services (AKS) - Azure Kubernetes Service | Microsoft Learn</a></p>
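<p>For reference, once the service principal is a member of that AAD group, the group can also be granted rights with plain Kubernetes RBAC instead of (or in addition to) the admin group setting in the cluster configuration. A minimal sketch, where the binding name is hypothetical and the group object ID is a placeholder:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aad-group-cluster-admin        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "<aad-group-object-id>"        # object ID of the AAD group that contains the SP
</code></pre>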
| Imran |
<p>I need to add RBAC to my AKS, but when I go to Azure portal it says that it's a creation operation and that it's not possible to update it afterwards.</p>
<p>Do I need to tear down the whole cluster and create a new one with RBAC enabled to make use of this feature?</p>
<p>It's an ongoing project in production, so for me it's not as simple as running terraform destroy and terraform apply unfortunately.</p>
<p>How would you suggest to do it, to make sure of minimum impact on availability and to have everything set up the same way as the previous cluster?</p>
 | Domenico | <p><em><strong>I tried to reproduce the same in my environment and got the results successfully, as shown below:</strong></em></p>
<p>It is possible to enable RBAC after creating a Kubernetes cluster:</p>
<p><em><strong>In your Kubernetes cluster -> under Settings, Cluster configuration -> choose Azure authentication with Azure RBAC and save, like below:</strong></em></p>
<p><img src="https://i.stack.imgur.com/W0WOk.png" alt="enter image description here" /></p>
<p>Then, use the command below to add Azure RBAC for Kubernetes authorization to an existing AKS cluster:</p>
<pre><code>az aks update -g myResourceGroup -n myAKSCluster --enable-azure-rbac
</code></pre>
<p><img src="https://i.stack.imgur.com/LCbPx.png" alt="enter image description here" /></p>
<p><em><strong>Reference:</strong></em></p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac" rel="nofollow noreferrer">Manage Azure RBAC in Kubernetes From Azure - Azure Kubernetes Service | Microsoft Learn</a></p>
| Imran |
<p>I am trying to update my Azure Kubernetes Service (AKS) cluster with the following command:</p>
<pre><code>az aks upgrade \
--resource-group myResourceGroup \
--name myAKSCluster \
--kubernetes-version KUBERNETES_VERSION
</code></pre>
<p>This results in the following response:</p>
<blockquote>
<p>(AuthorizationFailed) The client '>email<' with object id '>object id<' does not have authorization to perform action 'Microsoft.ContainerService/managedClusters/write' over scope '/subscriptions/>id</resourceGroups/>resourcegroup-name</providers/Microsoft.ContainerService/managedClusters/>cluster-name<' or the scope is invalid. If access was recently granted, please refresh your credentials.
Code: AuthorizationFailed</p>
</blockquote>
<p>When I go to resourcegroup/Access Control(IAM), I find these roles assigned to me when I click on "view my access"</p>
<p><a href="https://i.stack.imgur.com/080s6.png" rel="nofollow noreferrer">IAM access control roles</a></p>
<p>These are:</p>
<pre><code>Azure Kubernetes Service Cluster Admin Role
List cluster admin credential action.
--
Azure Kubernetes Service RBAC Cluster Admin
Lets you manage all resources in the cluster.
--
Reader
View all resources, but does not allow you to make any changes.
--
Storage Account Contributor
Lets you manage storage accounts, including accessing storage account keys which prov...
</code></pre>
<p>I would expect that having the role "Azure Kubernetes Service RBAC Cluster Admin", which says
"Lets you manage all resources in the cluster.", would authorize me to upgrade the cluster to a new version.</p>
<p>I run into the same problem when trying to create a static IP address via the Microsoft documentation.</p>
 | Jens Voorpyl | <p>I created a Kubernetes cluster with version 1.24; when I ran the same command I got the same error:</p>
<pre class="lang-yaml prettyprint-override"><code>az aks upgrade \
--resource-group myResourceGroup \
--name myAKSCluster \
--kubernetes-version KUBERNETES_VERSION
</code></pre>
<p><a href="https://i.stack.imgur.com/iMMMg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMMMg.png" alt="" /></a></p>
<p>To resolve this issue, make sure to add the <strong><code>Contributor</code></strong> role to the user at the subscription level.</p>
<p><img src="https://i.imgur.com/eizcLIV.png" alt="enter image description here" /></p>
<p>Now when I run the command below, I get a successful result:</p>
<pre><code>az aks upgrade --resource-group <RGName> --name <myAKSCluster> --kubernetes-version 1.25
</code></pre>
<p><img src="https://i.imgur.com/8iicEtJ.png" alt="enter image description here" /></p>
<pre><code>agentPoolProfiles": [
{
"availabilityZones": [
"1",
"2",
"3"
],
"count": 1,
"creationData": null,
"currentOrchestratorVersion": "1.25.6",
"enableAutoScaling": true,
"enableEncryptionAtHost": null,
"enableFips": false,
"enableNodePublicIp": false,
"enableUltraSsd": null,
"gpuInstanceProfile": null,
"hostGroupId": null,
"kubeletConfig": null,
"kubeletDiskType": "OS",
"linuxOsConfig": null,
</code></pre>
<p>In portal:</p>
<p><img src="https://i.imgur.com/mirhxMw.png" alt="enter image description here" /></p>
| Imran |
<p>Is there a method to define rule priority within the Azure Application Gateway?</p>
<p>I've defined an ingress object in my Kubernetes cluster as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
appgw.ingress.kubernetes.io/rule-priority: "100"
spec:
ingressClassName: azure-application-gateway
rules:
- host: foo.bar.io
http:
paths:
- path: /
backend:
service:
name: type
port:
number: 80
pathType: Exact
</code></pre>
<p>This setup generates a listener and an associated rule, which works just fine. However, it appears that the annotation <code>appgw.ingress.kubernetes.io/rule-priority: "100"</code> doesn't have any effect.</p>
<p>When I inspect the rule in the console, it displays 19000 instead of 100.</p>
<p>I'm now wondering if the rule priority might be configured elsewhere or if I'm using the incorrect annotation altogether, as I don't find it listed among the supported <a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/" rel="nofollow noreferrer">annotations.</a></p>
<p>Is it possible to set the rule-priority from the ingress definition or is this done somewhere else entirely?</p>
 | Metro | <p>I compared against your list of supported <a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/" rel="nofollow noreferrer">annotations</a>; even as per the <a href="https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-annotations#rewrite-rule-set" rel="nofollow noreferrer">MS docs</a> it is not listed.</p>
<p>You can edit or modify your priority value from the console like below:</p>
<p><img src="https://i.imgur.com/FTF2n2d.png" alt="enter image description here" /></p>
<p>Alternatively, you can also try setting the rule priority to a value between 1 and 20000 (1 = highest priority, 20000 = lowest priority) with the script below by <em>megabit</em>:</p>
<pre><code>$AppGW = Get-AzApplicationGateway -Name "<APPGATEWAYNAME>" -ResourceGroupName "<RGName>"
$Rules = Get-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGW
$i = 1000
foreach ($Rule in $Rules) {
$Rule.Priority = $i
$i++
}
Set-AzApplicationGateway -ApplicationGateway $AppGw
</code></pre>
| Imran |
<p>I am wondering if it is possible to configure the βpublic access source allowlistβ from CDK. I can see and manage this in the console under the networking tab, but canβt find anything in <a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.Cluster.html" rel="nofollow noreferrer">the CDK docs</a> about setting the allowlist during deploy. I tried creating and assigning a security group (code sample below), but this didn't work. Also the security group was created as an "additional" security group, rather than the "cluster" security group.</p>
<pre class="lang-ts prettyprint-override"><code>declare const vpc: ec2.Vpc;
declare const adminRole: iam.Role;
const securityGroup = new ec2.SecurityGroup(this, 'my-security-group', {
vpc,
allowAllOutbound: true,
description: 'Created in CDK',
securityGroupName: 'cluster-security-group'
});
securityGroup.addIngressRule(
ec2.Peer.ipv4('<vpn CIDR block>'),
ec2.Port.tcp(8888),
'allow frontend access from the VPN'
);
const cluster = new eks.Cluster(this, 'my-cluster', {
vpc,
clusterName: 'cluster-cdk',
version: eks.KubernetesVersion.V1_21,
mastersRole: adminRole,
defaultCapacity: 0,
securityGroup
});
</code></pre>
<p><strong>Update:</strong> I attempted the following, and it updated the <em>cluster</em> security group, but I'm still able to access the frontend when I'm not on the VPN:</p>
<pre class="lang-ts prettyprint-override"><code>cluster.connections.allowFrom(
  ec2.Peer.ipv4('<vpn CIDR block>'),
ec2.Port.tcp(8888)
);
</code></pre>
<p><strong>Update 2:</strong> I tried this as well, and I can still access my application's frontend even when I'm not on the VPN. However I can now only use <code>kubectl</code> when I'm on the VPN, which is good! It's a step forward that I've at least improved the cluster's security in a useful manner.</p>
<pre class="lang-ts prettyprint-override"><code>const cluster = new eks.Cluster(this, 'my-cluster', {
vpc,
clusterName: 'cluster-cdk',
version: eks.KubernetesVersion.V1_21,
mastersRole: adminRole,
defaultCapacity: 0,
  endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('<vpn CIDR block>')
});
</code></pre>
| James Kelleher | <p>In general EKS has two relevant security groups:</p>
<ol>
<li><p>The one used by nodes, which AWS calls the "cluster security group". It's set up automatically by EKS. You shouldn't need to mess with it unless you want (a) more restrictive rules than the defaults or (b) to open your nodes for maintenance tasks (e.g. ssh access). This is what you are accessing via <code>cluster.connections</code>.</p>
</li>
<li><p>The Ingress Load Balancer security group. This is an Application Load Balancer created and managed by EKS. In CDK, it can be created like so:</p>
</li>
</ol>
<pre class="lang-js prettyprint-override"><code>const cluster = new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_22,
albController: {
version: eks.AlbControllerVersion.V2_4_1,
},
});
</code></pre>
<p>This will serve as a gateway for all internal services that need an Ingress. You can access it via the <code>cluster.albController</code> property and add rules to it like a regular Application Load Balancer. I have no idea how EKS deals with task communication when an Ingress ALB is not present.</p>
<p>Relevant docs:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html" rel="nofollow noreferrer">Amazon EKS security group considerations</a></li>
<li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks-readme.html#alb-controller" rel="nofollow noreferrer">Alb Controller on CDK docs</a></li>
<li><a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.Cluster.html#albcontroller-1" rel="nofollow noreferrer">The ALB property for EKS Cluster objects</a></li>
</ul>
| pid-1 |
<p>After a <strong>mongodump</strong> I am trying to restore using <strong>mongorestore</strong>.</p>
<p>It works locally in seconds. However, when I <strong>kubectl exec -it</strong> into the pod of the primary mongodb node and run the same command, it gets stuck and endlessly repeats the line with the same progress and an updated timestamp (the first and the last line are the same except the timestamp, so 0 progress). This goes on for about 5 hours, then I get thrown out with an OOM error.</p>
<p>I am using mongo:3.6.9</p>
<pre><code>2022-03-02T22:56:36.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:39.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:42.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:45.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:48.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:51.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:54.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
</code></pre>
<p>The same behavior occurs when I do a mongorestore from a restore container specifying all mongo pods like so: <strong>mongorestore --db=mydb --collection=users data/mydb/users.bson --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017</strong></p>
<p>Is there anything else I could try?</p>
| veste | <p>I found my answer here:
<a href="https://stackoverflow.com/a/41352269/18358598">https://stackoverflow.com/a/41352269/18358598</a></p>
<p>Adding <code>--writeConcern '{w:0}'</code> to the mongorestore command works.</p>
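<p>For illustration, this is roughly how the flag fits into the restore command from the question (hosts and paths are the ones used above; adjust to your setup):</p>
<pre><code>mongorestore --db=mydb --collection=users data/mydb/users.bson \
  --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017 \
  --writeConcern '{w:0}'
</code></pre>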
| veste |
<p>I have deployed a Prometheus-operator on the k8s cluster.
Everything works well but I want to monitor MySQL pods that are in another namespace.
I created a mysqld-exporter pod and svc for it in the MariaDB namespace and a servicemonitor for it in the monitoring namespace.
I checked all the items in this <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md#troubleshooting-servicemonitor-changes" rel="nofollow noreferrer">link</a>, but this servicemonitor (for mysqld) doesn't get added to the Prometheus targets.
When I change the svc type to NodePort, everything works and the metrics are exposed.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
creationTimestamp: "2022-09-11T11:51:46Z"
generation: 1
labels:
app.kubernetes.io/part-of: kube-prometheus
app.kubernetes.io/version: 9.1.2
monitor-app: mysqld-exporter
name: mysqld-exporter
namespace: monitoring
resourceVersion: "2932040"
uid: 247683c8-7868-4f2c-9a60-255c703273a5
spec:
endpoints:
- interval: 30s
port: http-metrics
jobLabel: k8s-app
namespaceSelector:
matchNames:
- mariadb
selector:
matchLabels:
app: mysqld-exporter
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2022-09-11T11:50:28Z"
labels:
app: mysqld-exporter
name: mysqld-exporter
namespace: mariadb
resourceVersion: "2931235"
uid: 1b548f89-33a1-4235-b042-8cda5dfc766b
spec:
clusterIP: 10.109.39.231
clusterIPs:
- 10.109.39.231
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http-metrics
port: 9104
protocol: TCP
targetPort: 9104
selector:
app: mysqld-exporter
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
| mona moghadampanah | <p>I checked the Prometheus pod logs, and the error was:</p>
<blockquote>
<p>pods is forbidden: User "system:serviceaccount:monitoring:prometheus-k8s" cannot list resource</p>
</blockquote>
<p>So I searched for this error, found the answer in <a href="https://github.com/prometheus-operator/kube-prometheus/issues/483#issuecomment-610427646" rel="nofollow noreferrer">this</a> link, and added pods and services to the resources of the prometheus-k8s ClusterRole.</p>
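<p>For reference, a minimal sketch of what the adjusted rule in the <code>prometheus-k8s</code> ClusterRole could look like (the exact role contents depend on your kube-prometheus version, so treat this as an illustration rather than a drop-in file):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list", "watch"]
</code></pre>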
| mona moghadampanah |
<p>I have an AWS EKS cluster running in a custom VPC with 2 public and 2 private subnets. The node groups (for my backend) run in the 2 private subnets so they can't be accessed directly.</p>
<p>I would like to create an API Gateway which exposes the microservices in the node group so my front-end and third-party software can communicate with them. Eventually I would also like to add authorization to the API Gateway for security. The problem is that I cannot find good documentation on how to do this (expose the microservices through an API Gateway). Does anyone know how to do this or where I can find information on how to do it?</p>
<p>The situation would look something like this:
<a href="https://i.stack.imgur.com/gcnDm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcnDm.png" alt="enter image description here" /></a></p>
| Casperca | <p>You need to use API Gateway private integrations to expose services running in EKS through an NLB. Please check the below article for the overall solution.</p>
<p><a href="https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/</a></p>
| Manmohan Mittal |
<p>Ingress is not forwarding traffic to pods.
Application is deployed on Azure Internal network.
I can access the app successfully using the pod IP and port, but when trying the Ingress IP/host I get 404 Not Found. I do not see any error in the Ingress logs.
Below are my config files.
Please help me if I am missing anything, or tell me how I can troubleshoot to find the issue.</p>
<p>Deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: aks-helloworld-one
spec:
replicas: 1
selector:
matchLabels:
app: aks-helloworld-one
template:
metadata:
labels:
app: aks-helloworld-one
spec:
containers:
- name: aks-helloworld-one
image: <image>
ports:
- containerPort: 8290
protocol: "TCP"
env:
- name: env1
valueFrom:
secretKeyRef:
name: configs
key: env1
volumeMounts:
- mountPath: "mnt/secrets-store"
name: secrets-mount
volumes:
- name: secrets-mount
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-keyvault"
imagePullSecrets:
- name: acr-secret
---
apiVersion: v1
kind: Service
metadata:
name: aks-helloworld-one
spec:
type: ClusterIP
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8290
selector:
app: aks-helloworld-one
</code></pre>
<p>Ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-world-ingress
namespace: ingress-basic
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aks-helloworld
port:
number: 80
</code></pre>
| megha | <p>You have mentioned the wrong service name under the ingress definition. Service name should be aks-helloworld-one as per the service definition.</p>
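<p>For illustration, the backend section of the Ingress would then look something like this (note that the Service in the question exposes port 8080, not 80, so the backend port likely needs to match as well):</p>
<pre><code>      backend:
        service:
          name: aks-helloworld-one
          port:
            number: 8080
</code></pre>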
| Manmohan Mittal |
<p>I am new to kubernetes.</p>
<p>Can someone please explain that what is the real purpose of <strong>"--wait=false"</strong> in <code>minikube start --wait=false</code> command?</p>
<p>I am not able to find an appropriate answer on the internet.</p>
| Raashith | <p><a href="https://minikube.sigs.k8s.io/docs/commands/start/#options" rel="nofollow noreferrer">minikube start -options</a></p>
<p>According to the documentation:</p>
<pre><code>--wait strings comma separated list of Kubernetes components to verify and wait for after starting a cluster. defaults to "apiserver,system_pods", available options: "apiserver,system_pods,default_sa,apps_running,node_ready,kubelet" . other acceptable values are 'all' or 'none', 'true' and 'false' (default [apiserver,system_pods])
</code></pre>
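<p>So, as a quick illustration based on the documented options, the following would start the cluster without waiting for any components to become healthy, while the default waits for the apiserver and system pods:</p>
<pre><code># do not wait for any Kubernetes components to report healthy
minikube start --wait=false

# wait for all components (apiserver, system_pods, default_sa, apps_running, node_ready, kubelet)
minikube start --wait=all
</code></pre>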
| user18464468 |
<p>There is a folder named "data-persistent" in the running container that the code reads from and writes to, and I want to save the changes made in that folder. When I use a persistent volume, it removes/hides the data from that folder and the code gives an error. What should my approach be?</p>
<pre><code>FROM python:latest
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
#RUN mkdir data-persistent
ADD linkedin_scrape.py .
COPY requirements.txt ./requirements.txt
COPY final_links.csv ./final_links.csv
COPY credentials.txt ./credentials.txt
COPY vectorizer.pk ./vectorizer.pk
COPY model_IvE ./model_IvE
COPY model_JvP ./model_JvP
COPY model_NvS ./model_NvS
COPY model_TvF ./model_TvF
COPY nocopy.xlsx ./nocopy.xlsx
COPY data.db /data-persistent/
COPY textdata.txt /data-persistent/
RUN ls -la /data-persistent/*
RUN pip install -r requirements.txt
CMD python linkedin_scrape.py --bind 0.0.0.0:8080 --timeout 90
</code></pre>
<p>And my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-first-cluster1
spec:
replicas: 2
selector:
matchLabels:
app: scrape
template:
metadata:
labels:
app: scrape
spec:
containers:
- name: scraper
image: image-name
#
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
volumeMounts:
- mountPath: "/dev/shm"
name: dshm
- mountPath: "/data-persistent/"
name: tester
volumes:
- name: dshm
emptyDir:
medium: Memory
- name: tester
persistentVolumeClaim:
claimName: my-pvc-claim-1
</code></pre>
<p>Let me explain the workflow of the code. The code reads from the textdata.txt file, which contains the indices of links to be scraped, e.g. from 100 to 150. It then scrapes the profiles, inserts them into the data.db file, and finally writes to the textdata.txt file the sequence to be scraped in the next run, e.g. 150 to 200.</p>
| Sardar Arslan | <p>First, the k8s volume mount point overwrites the original image content at /data-persistent/.</p>
<p>To solve such a case you have several options.</p>
<p><strong>Solution 1</strong></p>
<ul>
<li>edit your Dockerfile to copy the local data to /tmp-data-persistent instead</li>
<li>then add an "init container" that copies the content of /tmp-data-persistent to /data-persistent; this seeds the volume with your data and keeps it persistent (see the sketch at the end of this answer)</li>
</ul>
<p><strong>Solution 2</strong></p>
<ul>
<li><p>it is not good practice to copy data into docker images; it increases image sizes and also ties code and data changes into the same pipeline</p>
</li>
<li><p>it is better to keep the data in shared storage like "s3" and let the "init container" compare and sync the data</p>
</li>
</ul>
<p>If cloud services like s3 are not available:</p>
<ul>
<li><p>you can use a persistent volume type that supports multiple read/write mounts</p>
</li>
<li><p>attach the same volume to another deployment (using the busybox image, for example) and do the copy with "kubectl cp"</p>
</li>
<li><p>scale the temporary deployment to zero after finalizing the copy; you can also make this part of a CI pipeline</p>
</li>
</ul>
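<p>A minimal sketch of the init-container approach from Solution 1, assuming the Dockerfile copies the seed files to /tmp-data-persistent instead of /data-persistent (image and claim names are the ones from the question; the copy command is illustrative):</p>
<pre><code>    spec:
      initContainers:
      - name: seed-data
        image: image-name            # same image, contains /tmp-data-persistent
        # copy seed files onto the (initially empty) volume; -n avoids overwriting data on later restarts
        command: ["sh", "-c", "cp -rn /tmp-data-persistent/. /data-persistent/"]
        volumeMounts:
        - mountPath: "/data-persistent/"
          name: tester
      containers:
      - name: scraper
        image: image-name
        volumeMounts:
        - mountPath: "/data-persistent/"
          name: tester
      volumes:
      - name: tester
        persistentVolumeClaim:
          claimName: my-pvc-claim-1
</code></pre>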
| Tamer Elfeky |
<p>Yesterday I installed microk8s on my private laptop to learn about kubernetes,
but even on the first simple file with a PersistentVolume I'm getting a lot of validation errors.</p>
<p>I have installed microk8s on Ubuntu from below source:
<a href="https://microk8s.io/?_ga=2.70856272.1723042697.1642604373-620897147.1642604373v" rel="nofollow noreferrer">https://microk8s.io/?_ga=2.70856272.1723042697.1642604373-620897147.1642604373v</a></p>
<p>The issue appeared when I first wanted to create a PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-pv
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<p>and I'm gettting below errors:</p>
<pre><code>kubectl apply -f pv-data.yml -n testing
error: error validating "pv-data.yml": error validating data: [ValidationError(PersistentVolume): unknown field "accessModes" in io.k8s.api.core.v1.PersistentVolume, ValidationError(PersistentVolume): unknown field "name" in io.k8s.api.core.v1.PersistentVolume, ValidationError(PersistentVolume): unknown field "path" in io.k8s.api.core.v1.PersistentVolume, ValidationError(PersistentVolume): unknown field "storage" in io.k8s.api.core.v1.PersistentVolume, ValidationError(PersistentVolume): unknown field "storageClassName" in io.k8s.api.core.v1.PersistentVolume]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Can someone help me with that? Maybe microk8s is not properly installed? Because this is a very simple yaml file.</p>
<p>I would also like to attach the status of microk8s:</p>
<pre><code>microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard # The Kubernetes dashboard
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
dashboard-ingress # Ingress definition for Kubernetes dashboard
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
inaccel # Simplifying FPGA management in Kubernetes
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
kata # Kata Containers is a secure runtime with lightweight VMS
keda # Kubernetes-based Event Driven Autoscaling
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
multus # Multus CNI enables attaching multiple network interfaces to pods
openebs # OpenEBS is the open-source storage solution for Kubernetes
openfaas # OpenFaaS serverless framework
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
traefik # traefik Ingress controller for external access
</code></pre>
| dominbdg | <p>The issue is related to yaml indentation; you can use valid online examples like</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a></p>
<p>or just try to fix it by intuition; most of the time you can get it done:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-pv
spec:
storageClassName: ""
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
| Tamer Elfeky |
<p>Not able to create cluster using existing nodes (RKE) on rancher 2.5.10. Firewall is disabled by default on all the servers.</p>
<p>Internal error occurred: failed calling webhook "rancherauth.cattle.io": Post <a href="https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s" rel="nofollow noreferrer">https://rancher-webhook.cattle-system.svc:443/v1/webhook/validation?timeout=10s</a>: context deadline exceeded.</p>
| Bharath Reddy | <p>reference: <a href="https://rancher.com/docs/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/</a></p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete secret -n cattle-system cattle-webhook-tls
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io
kubectl delete pod -n cattle-system -l app=rancher-webhook
</code></pre>
| dillon y |
<p>I have a couple of overlays (dev, stg, prod) pulling data from multiple bases where each base contains a single service so that each overlay can pick and choose what services it needs. I generate the manifests from the dev/stg/prod directories.</p>
<p>A simplified version of my Kubernetes/Kustomize directory structure looks like this:</p>
<pre><code>βββ base
β βββ ServiceOne
β β βββ kustomization.yaml
β β βββ service_one_config.yaml
β βββ ServiceTwo
β β βββ kustomization.yaml
β β βββ service_two_config.yaml
β βββ ConfigMap
β βββ kustomization.yaml
β βββ config_map_constants.yaml
βββ overlays
βββ dev
β βββ kustomization.yaml
β βββ dev_patch.yaml
βββ stg
β βββ kustomization.yaml
β βββ stg_patch.yaml
βββ prod
βββ kustomization.yaml
βββ prod_patch.yaml
</code></pre>
<p>Under base/ConfigMap, config_map_constants.yaml file contains key/value pairs that are non-secrets:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: myApp
name: global-config-map
namespace: myNamespace
data:
aws_region: "us-west"
env_id: "1234"
</code></pre>
<p>If an overlay just needs a default value, it should reference the key/value pair as is, and if it needs a custom value, I would use a patch to override the value.</p>
<p>kustomization.yaml from base/ConfigMap looks like this and refers to ConfigMap as a resource:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- config_map_constants.yaml
</code></pre>
<p>QUESTION: how do I reference "aws_region" in my overlays' yaml files so that I can retrieve the value?</p>
<p>For example, I want to be able to do something like this in base/ServiceOne/service_one_config.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: myApp
aws_region: ../ConfigMap/${aws_region} #pseudo syntax
name: service_one
spec:
env_id: ../ConfigMap/${env_id} #pseudo syntax
</code></pre>
<p>I am able to build the ConfigMap and append it to my services but I am struggling to find how to reference its contents within other resources.</p>
<p>EDIT:
Kustomize version: v4.5.2</p>
| unboundedcauchy | <p>You can try using <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/" rel="nofollow noreferrer">https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/</a></p>
<p>For your scenario, if you want to reference the <code>aws_region</code> value in your Service labels, you need to create a <code>replacement</code> file.</p>
<p><code>replacements/region.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>source:
kind: ConfigMap
  fieldPath: data.aws_region
targets:
- select:
kind: Service
name: service_one
fieldPaths:
- metadata.labels.aws_region
</code></pre>
<p>And add it to your <code>kustomization.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>replacements:
- path: replacements/region.yaml
</code></pre>
<p>Kustomize output should be similar to this</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
labels:
app: myApp
    aws_region: us-west
name: service_one
</code></pre>
| Justin Miguel Zamora |
<p>I try to deploy nginx deployment to see if my cluster working properly on basic k8s installed on VPS (kubeadm, ubuntu 22.04, kubernetes 1.24, containerd runtime)</p>
<p>I successfully deployed MetalLB via helm on this VPS and assigned the public IP of the VPS to the address pool
using the CRD: apiVersion: metallb.io/v1beta1, kind: IPAddressPool</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx LoadBalancer 10.106.57.195 145.181.xx.xx 80:31463/TCP
</code></pre>
<p>My goal is to send a request to the public IP of the VPS, 145.181.xx.xx, and get the nginx test page.</p>
<p>The problem is that I get timeouts and connection refused when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from inside the cluster returns the nginx test page.</p>
<p>There is no firewall issue: I tried setting up a simple nginx without kubernetes via systemctl and I was able to reach port 80 on 145.181.xx.xx.</p>
<p>Any suggestions or ideas on what the problem could be, or how I can try to debug it?</p>
| corey | <p>I'm facing the same issue.</p>
<p>Kubernetes cluster is deployed with Kubespray over 3 master and 5 worker nodes. MetalLB is deployed with Helm, IPAddressPool and L2Advertisement are configured. And I'm also deploying simple nginx pod and a service to check of MetalLB is working.</p>
<p>MetalLB assigns first IP from the pool to nginx service and I'm able to curl nginx default page from any node in the cluster. However, if I try to access this IP address from outside of the cluster, I'm getting timeouts.</p>
<p>But here is the fun part. When I modify nginx manifest (rename deployment and service) and deploy it in the cluster (so 2 nginx pods and services are present), MetalLB assigns another IP from the pool to the second nginx service and I'm able to access this second IP address from outside the cluster.</p>
<p>Unfortunately, I don't have an explanation or a solution to this issue, but I'm investigating it.</p>
| GGorge |
<p>I am deploying an OpenShift cluster (OCP) on an OpenStack environment with 3 master and 3 worker nodes. For that I have generated the install-config.yaml file using the "openshift-install" command. I want to use different flavours for the masters (m1.xlarge) and the workers (m1.2xlarge). How can I define this in the install-config.yaml file?</p>
<p>Below is my install-config.yaml file -</p>
<pre><code>apiVersion: v1
baseDomain: abc.com
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
openstack:
additionalSecurityGroupIDs:
- 61dfe2fb-889a-4d21-a252-608f357ae570
replicas: 3
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
openstack:
additionalSecurityGroupIDs:
- 61dfe2fb-889a-4d21-a252-608f357ae570
replicas: 3
metadata:
creationTimestamp: null
name: tb
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/24
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
openstack:
apiFloatingIP: 10.9.7.10
ingressFloatingIP: 10.9.7.11
apiVIP: 10.0.0.5
cloud: openstack
defaultMachinePlatform:
      type: m1.2xlarge    # <==== m1.2xlarge is being used for both worker and master
rootVolume: {
size: 200,
type: "tripleo"
}
externalDNS: null
externalNetwork: testbed-vlan1507
ingressVIP: 10.0.0.7
publish: External
</code></pre>
| user3115222 | <p>According to the <a href="https://docs.openshift.com/container-platform/4.10/installing/installing_openstack/installing-openstack-installer-custom.html#installation-osp-config-yaml_installing-openstack-installer-custom" rel="nofollow noreferrer">documentation</a>, you should add <code>type: <flavor></code> in platform.openstack, for example:</p>
<pre class="lang-yaml prettyprint-override"><code>compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
openstack:
type: ci.m1.xlarge
additionalSecurityGroupIDs:
- 61dfe2fb-889a-4d21-a252-608f357ae570
replicas: 3
</code></pre>
| VANAN |
<p>Now I'm using WSL 2 and Docker Desktop on Windows 10.</p>
<p>I created an YAML script to create an ingress for my microservices like below.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
</code></pre>
<p>And I installed ingress-nginx by following this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">installation guide</a></p>
<p>I ran this command in the guide.</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml</code></p>
<p>But when I ran <code>kubectl get pods --namespace=ingress-nginx</code>, <code>ingress-nginx-controller</code> shows <code>ImageInspectError</code>
<a href="https://i.stack.imgur.com/uthyL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uthyL.png" alt="ImageInspectError" /></a></p>
<p>And when I ran the command <code>kubectl apply -f ingress-srv.yaml</code>, it showed an error message.
<a href="https://i.stack.imgur.com/5li4R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5li4R.png" alt="Ingress error message" /></a></p>
<p>Can anyone please let me know what the issue is?</p>
<p>I removed the namespace <code>ingress-nginx</code> using this command <code>kubectl delete all --all -n ingress-nginx</code> and ran the deploy script again.</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml</code></p>
<p>But the issue still happened.</p>
| Daniel Morales | <p>It was because of a corrupted filesystem.</p>
<p>When I ran the ingress-nginx deployment command, Docker Desktop crashed because of a lack of drive storage.</p>
<p>So I removed all corrupted, unused or dangling docker images.</p>
<p><code>docker system prune</code></p>
<p>Also I deleted ingress-nginx and reinstalled.</p>
<p><code>kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml</code></p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml</code></p>
<p>After that, it worked well.</p>
<p><code>kubectl get pods --namespace=ingress-nginx</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-tgkfx 0/1 Completed 0 74m
ingress-nginx-admission-patch-28l7q 0/1 Completed 3 74m
ingress-nginx-controller-7844b9db77-4dfvb 1/1 Running 0 74m
</code></pre>
| Daniel Morales |
<p>I'm running Kubernetes v1.25.9+rke2r1.
I have a Metallb correctly setup with an external ip address and haproxy ingress controller.
I need to expose a cockroachdb instance via ingress; I set up that config in the crdb instance correctly and the ingress spins up correctly, but without any IP address. Therefore the page times out when I open it, and if I telnet the port it times out as well.</p>
<pre><code>k get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
cockroachdb-instance sql-cockroachdb haproxy cockroachdb.dev-service-stage.k8s.mi1.prod.cloudfire.it 80 15m
</code></pre>
<p>From the haproxy pod logs I don't see any errors; to my knowledge the ingress and the corresponding service are set up correctly.
What can I do to further debug or fix this?</p>
<pre><code>k describe ingress sql-cockroachdb -n cockroachdb-instance
Name: sql-cockroachdb
Labels: app.kubernetes.io/component=database
app.kubernetes.io/instance=cockroachdb
app.kubernetes.io/managed-by=cockroach-operator
app.kubernetes.io/name=cockroachdb
app.kubernetes.io/part-of=cockroachdb
app.kubernetes.io/version=v23.1.4
crdb=test
Namespace: cockroachdb-instance
Address:
Ingress Class: haproxy
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
cockroachdb.dev-service-stage.k8s.mi1.prod.cloudfire.it
cockroachdb-public:sql (10.222.19.88:26257,10.222.196.89:26257,10.222.30.93:26257)
Annotations: crdb.io/last-applied:
UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAb3JpZ2luYWyMk0GT0zAMhf+LzrG3pbvQ5ki5cIEZYLgwHGRbaTxxbK+sFDqd/HfGbelsoYe9aZSn7+kpyRFGEnQoCO0RAhoKpV...
field.cattle.io/publicEndpoints:
[{"addresses":[""],"port":80,"protocol":"HTTP","serviceName":"cockroachdb-instance:cockroachdb-public","ingressName":"cockroachdb-instance...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 28m ingress-controller Ingress cockroachdb-instance/sql-cockroachdb
Normal CREATE 28m ingress-controller Ingress cockroachdb-instance/sql-cockroachdb
Normal UPDATE 27m (x2 over 27m) ingress-controller Ingress cockroachdb-instance/sql-cockroachdb
Normal UPDATE 27m (x2 over 27m) ingress-controller Ingress cockroachdb-instance/sql-cockroachdb
</code></pre>
<p>If I try to port-forward the service that handles cockroachdb I get this error:</p>
<pre><code>E0711 09:59:38.779413 43555 portforward.go:409] an error occurred forwarding 8081 -> 26258: error forwarding port 26258 to pod 8a774d302b846fdbdd7bbc6b3f35144d9712f15de9ab72d4ef9e0c8cdfa8ee85, uid : failed to execute portforward in network namespace "/var/run/netns/cni-f1c36dc8-3647-e00e-b67e-7abb019b236b": read tcp4 127.0.0.1:33424->127.0.0.1:26258: read: connection reset by peer
</code></pre>
<p>This is the Kind: CrdbCluster - The installation of cockroachdb was done via kubectl apply with all default values.</p>
<pre><code>apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
# this translates to the name of the statefulset that is created
name: cockroachdb
spec:
dataStore:
pvc:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "20Gi"
volumeMode: Filesystem
resources:
requests:
# This is intentionally low to make it work on local k3d clusters.
cpu: 500m
memory: 2Gi
limits:
cpu: 2
memory: 4Gi
tlsEnabled: true
ingress:
#ui:
# ingressClassName: haproxy
# annotations:
# key: value
# host: cockroachdb.dev-service-stage.k8s.mi1.prod.cloudfire.it
sql:
ingressClassName: haproxy
annotations:
#key: value
host: cockroachdb.dev-service-stage.k8s.mi1.prod.cloudfire.it
# You can set either a version of the db or a specific image name
# cockroachDBVersion: v23.1.4
image:
name: cockroachdb/cockroach:v23.1.4
# nodes refers to the number of crdb pods that are created
# via the statefulset
nodes: 3
additionalLabels:
crdb: test
</code></pre>
| simone.benati | <p>The issue was with MetalLB and a missing custom resource.</p>
<p>After adding this to the manifests, everything was provisioned successfully:</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: advertisement
namespace: metallb
spec:
  ipAddressPools:
- first-pool
</code></pre>
| simone.benati |
<p>I have an Azure question. I use Terraform with Azure. I am trying to start up 2 AKS clusters there, but I always get an error that my CIDR settings are wrong.</p>
<p>In cluster one I use:</p>
<pre><code>resource "azurerm_subnet" "cluster1-node-pool-subnet" {
name = "cluster1-node-pool-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.1.0/19"]
}
resource "azurerm_subnet" "cluster1-execution-nodes-subnet" {
name = "cluster1-execution-nodes-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.33.0/19"]
}
resource "azurerm_subnet" "cluster1-gpu-nodes-subnet" {
count = var.gpuNodePool ? 1 : 0
name = "execution-nodes-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.48.0/20"]
}
network_profile {
network_plugin = "azure"
service_cidr = "10.0.65.0/19"
dns_service_ip = "10.0.65.10"
docker_bridge_cidr = "172.17.0.1/16"
}
</code></pre>
<p>and in Cluster two:</p>
<pre><code>resource "azurerm_subnet" "default-node-pool-subnet" {
name = "default-node-pool-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.0.0/19"]
}
resource "azurerm_subnet" "execution-nodes-subnet" {
name = "execution-nodes-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.32.0/19"]
}
resource "azurerm_subnet" "gpu-nodes-subnet" {
count = var.gpuNodePool ? 1 : 0
name = "execution-nodes-subnet"
resource_group_name = azurerm_virtual_network.cluster-vnet.resource_group_name
virtual_network_name = azurerm_virtual_network.cluster-vnet.name
address_prefixes = ["10.0.48.0/20"]
}
network_profile {
network_plugin = "azure"
service_cidr = "10.0.64.0/19"
dns_service_ip = "10.0.64.10"
docker_bridge_cidr = "172.17.0.1/16"
}
</code></pre>
<p>Azure now tells me that the prefix is wrong.</p>
<pre><code>β Error: creating Subnet: (Name "cluster1-node-pool-subnet" / Virtual Network Name "cluster-vnet" / Resource Group "cluster-infra-network"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidCIDRNotation" Message="The address prefix 10.0.1.0/19 in resource /subscriptions/xxx/resourceGroupscluster-infra-network/providers/Microsoft.Network/virtualNetworks/cluster-vnet/subnets/cluster1-node-pool-subnet has an invalid CIDR notation. For the given prefix length, the address prefix should be 10.0.0.0/19." Details=[]
β
β with azurerm_subnet.cluster1-node-pool-subnet,
β on k8s-rtc.tf line 7, in resource "azurerm_subnet" "cluster1-node-pool-subnet":
β 7: resource "azurerm_subnet" "cluster1-node-pool-subnet" {
β
β΅
β·
β Error: creating Subnet: (Name "cluster1-execution-nodes-subnet" / Virtual Network Name "cluster-vnet" / Resource Group "cluster-infra-network"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidCIDRNotation" Message="The address prefix 10.0.33.0/19 in resource /subscriptions/xxx/resourceGroups/cluster-infra-network/providers/Microsoft.Network/virtualNetworks/cluster-vnet/subnets/cluster1-execution-nodes-subnet has an invalid CIDR notation. For the given prefix length, the address prefix should be 10.0.32.0/19." Details=[]
β
β with azurerm_subnet.cluster1-execution-nodes-subnet,
β on k8s-rtc.tf line 14, in resource "azurerm_subnet" "cluster1-execution-nodes-subnet":
β 14: resource "azurerm_subnet" "cluster1-execution-nodes-subnet" {
</code></pre>
<p>In my mind the CIDR and prefix are valid. Any idea what is wrong?</p>
| IFThenElse | <p>There are basically two issues in your subnet definitions:</p>
<ol>
<li><p>Azure tells you that you are using invalid CIDR notations. E.g. in the case of <code>cluster1-node-pool-subnet</code> you specify the address prefix <code>["10.0.1.0/19"]</code>. While <code>10.0.1.0/19</code> is a valid IP address, it belongs to the subnet <code>10.0.0.0/19</code>, and Azure insists that you specify the address prefix as the network address of the subnet.</p>
<p>The same applies to e.g. <code>10.0.33.0/19</code> which belongs to the subnet <code>10.0.32.0/19</code>.</p>
</li>
<li><p>This unveils the second issue, which Azure will report to you once the first is fixed: in both clusters you are using the same address prefixes for the subnets you are defining. To overcome this you could use e.g. <code>["10.0.0.0/19"]</code> for <code>cluster1-node-pool-subnet</code> and <code>["10.1.0.0/19"]</code> for <code>default-node-pool-subnet</code>, and so on.</p>
</li>
</ol>
| stdtom |
<p>I read the following article about resource management in kubrnaties: <a href="https://home.robusta.dev/blog/kubernetes-memory-limit" rel="nofollow noreferrer">https://home.robusta.dev/blog/kubernetes-memory-limit</a></p>
<p>One of the important points there is that: "memory is a non-compressible resource. Once you give a pod memory, you can only take it away by killing the pod."</p>
<p>What I am still having trouble with is what happens when the pod releases its memory: does it return to the node, or does it stay allocated to the pod?</p>
<p>For example, let's say I have a pod that is waiting for a job which arrives at some interval. During the job, the memory consumption is high, but between jobs the memory consumption is low. So, when the memory consumption is low, does the memory stay allocated to the pod, or does it return to the resource pool of the node?</p>
| Idan Aviv | <p>In Kubernetes, when a pod releases its memory, it goes back into the pool of available resources on the node.</p>
<p>Here's how it works: When a pod is up and running, it gets assigned a specific amount of memory from the node it's running on. This memory is exclusively reserved for that pod and can't be used by any other pods or processes on the same node. However, once the pod is done with its memory or if it reduces its memory usage, that memory is freed up and becomes accessible for other pods or processes on the node.</p>
<p>So, in your scenario where the memory consumption is low between jobs, the memory that was previously allocated to the pod is returned to the resource pool of the node. This means that other pods or processes running on the same node can take advantage of the available memory. Kubernetes handles all this resource management behind the scenes, making sure that resources are efficiently allocated and deallocated based on the needs of the pods and the available resources on the node.</p>
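<p>For context, the amount that gets assigned to a pod and the ceiling it can grow to are driven by the pod's resource requests and limits, for example (an illustrative snippet, values are arbitrary):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: job-worker
spec:
  containers:
  - name: worker
    image: busybox
    resources:
      requests:
        memory: "256Mi"   # amount the scheduler reserves for the pod on the node
      limits:
        memory: "1Gi"     # hard ceiling; exceeding it gets the container OOM-killed
</code></pre>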
| Udi Hofesh |
<p>Everytime I try accessing my minIO console via the browser with a port-forward, the connection will work briefly with multiple connection messages of:</p>
<pre><code>Handling connection for 9000
Handling connection for 42935
Handling connection for 42935
Handling connection for 42935
Handling connection for 42935
Handling connection for 42935
Handling connection for 42935
...
</code></pre>
<p>Then a moment later, this error message</p>
<p><code>E0128 18:22:01.801739 40952 portforward.go:378] error copying from remote stream to local connection: readfrom tcp6 [::1]:42935->[::1]:50796: write tcp6 [::1]:42935->[::1]:50796: write: broken pipe</code></p>
<p>Before it finally spamming with multiple messages of:</p>
<pre><code>E0128 18:22:31.738313 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: Timeout occurred
Handling connection for 42935
E0128 18:22:32.120930 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: write tcp 192.168.0.16:50776->34.133.9.102:443: write: broken pipe
Handling connection for 42935
E0128 18:22:32.574837 40952 portforward.go:346] error creating error stream for port 42935 -> 42935: write tcp 192.168.0.16:50776->34.133.9.102:443: write: broken pipe
...
</code></pre>
<p>Here's my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: minio-deployment
namespace: minio-ns
spec:
replicas: 1
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio
args:
- server
- /data
- --console-address
- ":42935"
volumeMounts:
- name: minio-pv-storage
mountPath: /data
volumes:
- name: minio-pv-storage
persistentVolumeClaim:
claimName: minio-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
namespace: minio-ns
spec:
selector:
app: minio
ports:
- name: minio
port: 9000
targetPort: 9000
- name: minio-console
port: 42935
targetPort: 42935
type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-pv-claim
namespace: minio-ns
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>I changed the minio-service type to LoadBalancer (from ClusterIP) to access the console via the browser, along with adding the --console-address flag and exposing the necessary port. This allowed the MinIO console to show, though it stays in a constant loading state. If I try to log in, it just refreshes until crashing/timing out.</p>
| Leo Lad | <p>Resolved. The issue didn't have anything to do with my actual code.</p>
<p><a href="https://github.com/minio/console/issues/2539" rel="nofollow noreferrer">https://github.com/minio/console/issues/2539</a></p>
| Leo Lad |
<p>I have created a multi-node cluster on GKE and I was using this shell script to fix the virtual memory setting on the nodes: <a href="https://gist.github.com/thbkrkr/bc12a3457fd0f71c5a2106b11ce8801e" rel="nofollow noreferrer">Set vm.max_map_count=262144 on the nodes of a GKE cluster #k8s</a>. I want it to be executed in my cloud builder, but it just returns Permission denied (publickey) on the gcloud compute ssh execution. How can I fix it?</p>
| Danyil Poprotskyi | <p>Basically in the line</p>
<pre><code>gcloud compute ssh --zone $(zone_by_node $node) $node -- sudo bash -c "'"$@"'"
</code></pre>
<p>set your username before <code>$node</code>:</p>
<pre><code>gcloud compute ssh --zone $(zone_by_node $node) username@$node -- sudo bash -c "'"$@"'"
</code></pre>
<p>Your username is first part of your email before @.</p>
| Danyil Poprotskyi |
<p>We have our web api service running in OpenShift for the past few months</p>
<p>When we deployed this to OpenShift, initially we have given basic request and limits for memory and CPU.</p>
<p>Sometime when the resource limit crossed its threshold we had to increase the limit</p>
<p>We have several services deployed, and we have given somewhat arbitrary requests and limits for the pods.
We are trying to figure out a way to set resource requests and limits based on the past few months of running on OpenShift.</p>
<p>My idea is to look at the resource usage each pod has seen over the last few months and come up with values for requests and limits.</p>
<p>I am thinking PromQL can help me derive these values. Can someone help me with a query to determine average resource usage based on the past 4 to 5 weeks of a pod's consumption?</p>
| user804401 | <p>Try the below queries, which can be helpful in your case:</p>
<pre><code>avg (
avg_over_time(container_cpu_usage_seconds_total:rate5m[30d])
) by (pod_name)
</code></pre>
<p>The above query is used to determine the average CPU usage of a certain pod for the past 30 days.</p>
<pre><code>avg (
avg_over_time(container_memory_usage_seconds_total:rate5m[30d])
) by (pod_name)
</code></pre>
<p>The above query is used to determine the average memory usage of a certain pod for the past 30 days.</p>
<p>In the queries, <code>avg</code> calculates the average of the sample values in the input series, grouped by <code>pod_name</code>. <code>avg_over_time</code> returns the average value of all points in the specified interval, so metrics like CPU and memory usage over that interval can be obtained with the respective queries.</p>
<p>For more info follow this <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="nofollow noreferrer">doc</a>.</p>
| Sai Chandra Gadde |
<p>I'm on GKE and I'm trying to expose one application using IPV6 address.</p>
<p>This is my service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
labels:
run: ubuntu
name: ubuntu
namespace: default
spec:
loadBalancerIP: "<ipv6-address>"
ipFamilies:
- IPv6
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30783
port: 5000
protocol: TCP
targetPort: 5001
selector:
run: ubuntu
sessionAffinity: None
type: LoadBalancer
</code></pre>
<p>This is my gcloud address list</p>
<pre><code>deletable-pzk-reg-ip2-6 <ipv6-address>/96 EXTERNAL us-central1 pzksubnet1 RESERVED
</code></pre>
<p>I'm getting this error</p>
<pre><code> Warning SyncLoadBalancerFailed 13s (x6 over 2m49s) service-controller Error syncing load balancer: failed to ensure load balancer: requested ip "<ipv6-address>" is neither static nor assigned to the LB
</code></pre>
<p>Please help me in debugging this.</p>
| pzk | <p>Below troubleshooting steps can help you to resolve your issue:</p>
<ol>
<li><p>IPv6 is only supported for HTTP(S), SSL Proxy and TCP Proxy load balancing, so make sure you are using one of them.</p>
</li>
<li><p>The following <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#use_an_ingress" rel="nofollow noreferrer">documentation</a> describes creation of an Ingress resource.</p>
</li>
</ol>
<ul>
<li><p>Reserve a global external IPv6 address using the following:</p>
<p><code>- gcloud compute addresses create <your-ipv6-address-name> --global --ip-version=IPv6</code></p>
</li>
<li><p>Specify the global ip address in the YAML file using the annotation:</p>
<p><code>kubernetes.io/ingress.global-static-ip-name: <your-ipv6-address-name></code></p>
</li>
</ul>
<ol start="3">
<li>If you want to use a LoadBalancer Service, check the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters#service_parameters" rel="nofollow noreferrer">Load balancer parameters</a>. For example: after <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#external-ip" rel="nofollow noreferrer">reserving the static IP</a>, use it as loadBalancerIP in the yaml and the load balancer will be created.</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  loadBalancerIP: <ip>
</code></pre>
<p>Attaching a blog <a href="https://www.jhanley.com/blog/google-cloud-http-load-balancer-and-ipv6/" rel="nofollow noreferrer">HTTP on Load Balancer and IPv6</a> authored by John Hanley for your reference.</p>
| Sai Chandra Gadde |
<p>The problem is that I need to modify the StorageClassName but it's not possible because there was a blank StorageClassName assigned.</p>
<p>This PVC is bound to a PV, so if I delete the PVC to create a new one with the StorageClassName, will the data that is in my AWS volume be deleted?</p>
| Teachh | <p>You can recreate an existing PVC reusing the same PV with no data loss by using the <a href="https://kubernetes.io/blog/2021/12/15/kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order/" rel="nofollow noreferrer">reclaim policy</a>.</p>
<p>In case of Delete, the PV is deleted automatically when the PVC is removed, and the data on the PVC will also be lost.
In that case, it is more appropriate to use the "Retain" policy. With the "Retain" policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.</p>
<p>Reclaim Policy: Used to tell the cluster what to do with the volume after releasing its claim. Current reclaim policies are:</p>
<ul>
<li>Retain - manual reclamation</li>
<li>Recycle - basic scrub (rm -rf /thevolume/*)</li>
<li>Delete - associated storage assets such as AWS EBS, GCE
PD, Azure Disk, or OpenStack Cinder volume are deleted</li>
</ul>
<p>NOTE: Extremely recommended to use Retain policy for PVCs that store critical data.</p>
<p>Here in this <a href="https://webera.blog/recreate-an-existing-pvc-in-a-new-namespace-but-reusing-the-same-pv-without-data-loss-2c7326c0035a" rel="nofollow noreferrer">blog</a> you have detailed steps to recreate a PVC in another name space similarly you can change the storage class.</p>
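<p>For example, before touching the PVC you can make sure the bound PV will survive by switching its reclaim policy to Retain with a patch like this (replace the PV name with yours):</p>
<pre><code>kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>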
| Sai Chandra Gadde |
<p>I'm trying to troubleshoot an issue I'm having in kubernetes where after a job fails, the associated pod seemingly disappears and I can't view the logs. The job still exists though.</p>
<p>But that's not what my question is about. In reading through the documentation, it seemingly uses the terms "terminated" and "deleted" interchangably. This is leading me to be very confused. I would <em>assume</em> that terminated pods are not necessarily deleted, but the way the documentation is written, it implies that a terminated pod and a deleted pod are the same thing.</p>
<p><strong>Example 1:</strong> <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup</a></p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are usually not deleted either</p>
</blockquote>
<p>"usually" then links to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy</a> which then describes the logic by which pods will be <em>terminated</em>. So here, a link to a section which purports to describe the logic by which pods will be <em>deleted</em>, instead describes the logic by which pods will be <em>terminated</em>, implying they are one and the same.</p>
<p><strong>Example 2:</strong> <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced</a></p>
<p>This section is titled "Forced Pod <em>termination</em>" and proceeds to explain what happens when you attempt to force <em>delete</em> a pod. Again implying that terminating and deleting a pod are one-and-the-same.</p>
<p><strong>Example 3:</strong>
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination</a></p>
<p>This section, titled "Termination of Pods", describes what happens when the user requests deletion of a pod.</p>
<p>The job in question is failing due to <code>DeadlineExceeded.</code> The documentation states "Once a Job reaches <code>activeDeadlineSeconds</code>, all of its running Pods are terminated and the Job status will become type: Failed with reason: <code>DeadlineExceeded</code>." If <em>terminated</em> and <em>deleted</em> mean the same thing, then that would explain why my pods are gone. I find that a strange design choice but it at least would explain my problem.</p>
<p>The kubernetes documentation asked me if the documentation was helpful, I said "no" and it told me to create a question on stack overflow, so that's what I'm doing :)</p>
| Alex A | <p>As @karthikeayan said, Delete and Terminate are the same. And yes, your pods got deleted because activeDeadlineSeconds was exceeded.</p>
<p>If your job hits an error and its restartPolicy is not Never, the pods created by the job can be deleted on failure.</p>
<p>restartPolicy can be set to OnFailure: this indicates that kubernetes will restart the container until the Job completes successfully. However, the failure count does not rise with each retry in that case. To prevent an endless retry loop, you can set activeDeadlineSeconds to a value.</p>
<p>As you have researched and gathered pretty good information, that is good enough. To find the logs of a deleted pod, follow this <a href="https://stackoverflow.com/questions/68572218/how-to-review-logs-of-a-deleted-pod">stack link</a>; otherwise, the best approach is to have your logs centralized via logging agents or pushed directly into an external service, as suggested by @Jonas.</p>
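<p>For reference, a minimal sketch of where those fields sit in a Job spec (values are illustrative):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  activeDeadlineSeconds: 600   # once exceeded, running pods are terminated and the Job fails with DeadlineExceeded
  backoffLimit: 4              # number of retries before the Job is marked Failed
  template:
    spec:
      restartPolicy: Never     # or OnFailure
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working && sleep 30"]
</code></pre>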
| Sai Chandra Gadde |
<p>I installed the ingress controller in the standard way, using the Helm chart:
<a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a></p>
<p>But I do not want my LoadBalancer to be automatically deleted when the controller is deleted.</p>
<pre><code> finalizers:
- service.kubernetes.io/load-balancer-cleanup
</code></pre>
<p>This value is set on the service by default during installation, and I don't see how it can be removed or configured.
I don't understand why it's set like this by default. After all, this is not practical: if the LoadBalancer is deleted, a newly created one will be assigned a different IP, which means DNS will need to be pointed at another IP.</p>
<p>And a second question, if I may: how do I configure the Service to automatically update the LoadBalancer status? When the service connects to it, everything is fine, but when I remove the LoadBalancer, the service continues to be in an active state.
I use Hetzner. The annotation <code>load-balancer.hetzner.cloud/health-check-interval: 15</code> does not work on the service.
Available annotations can be viewed here: <a href="https://pkg.go.dev/github.com/hetznercloud/hcloud-cloud-controller-manager/internal/annotation" rel="nofollow noreferrer">https://pkg.go.dev/github.com/hetznercloud/hcloud-cloud-controller-manager/internal/annotation</a></p>
| JDev | <p><strong>You can remove finalizers in two ways:</strong></p>
<ol>
<li>Using <code>kubectl edit svc service_name</code>, remove the part below and save it again.</li>
</ol>
<pre><code>  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
</code></pre>
<ol start="2">
<li><p>You can use <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="nofollow noreferrer">kubectl patch</a> to update objects in place.</p>
<p><code>kubectl patch ingress $Ingressname -n $namespace -p '{"metadata":{"finalizers":[]}}' --type=merge</code></p>
</li>
</ol>
<p>As per this <a href="https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/#owner-references" rel="nofollow noreferrer">Doc</a>, you can try to delete the ingress controller without deleting the load balancer using <code>kubectl delete --cascade=orphan</code>; the cascade option is used when you want to delete the owner object while leaving its dependents as orphans instead of deleting them. Can you try this and let me know if it works.</p>
| Sai Chandra Gadde |