<p>Here is an example of a YAML file for deploying:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress spec: backend: serviceName: gateway-service servicePort: 4001 --- apiVersion: apps/v1 kind: Deployment metadata: name: gateway-deployment spec: selector: matchLabels: app: gateway replicas: 2 template: metadata: labels: app: gateway spec: containers: - name: gateway image: hello-go-microservices_gateway imagePullPolicy: Never ports: - containerPort: 4001 protocol: TCP --- apiVersion: v1 kind: Service metadata: name: gateway-service spec: selector: app: gateway ports: - protocol: TCP port: 4001 targetPort: 4001 nodePort: 30001 type: NodePort </code></pre> <p>As you can see, the service is called <strong>gateway-service</strong>. The problem is that when I rename it to just <strong>gateway</strong>, its pods won't start. When I rename it back to <strong>gateway-service</strong>, or rename it to <strong>gateway1</strong>, or to <strong>gateway-blablabla</strong>, everything works well. </p> <p>Kubectl logs show: </p> <pre><code>Failed to decode: strconv.ParseInt: parsing "tcp://10.101.177.91:4001": invalid syntax </code></pre>
<p>Thanks to <a href="https://stackoverflow.com/questions/58445138/naming-a-service-in-kubernetes-prevents-it-from-starting/58446435#">P Ekambaram</a>'s suggestion, and just to provide more information for other community members in case they have the same issue:</p> <p>According to the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">Discovering services</a> mechanism, once the Service is created, all of this information is applied to the Pods' environment:</p> <p>Note:</p> <blockquote> <p>When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.</p> </blockquote> <p>The proper env variables should look like:</p> <pre><code>GATEWAY_PORT_8080_TCP=tcp://10.4.13.154:8080 GATEWAY_PORT_8080_TCP_PROTO=tcp GATEWAY_PORT_8080_TCP_PORT=8080 GATEWAY_PORT_8080_TCP_ADDR=10.4.13.154 GATEWAY_SERVICE_HOST=10.4.13.154 GATEWAY_SERVICE_PORT=8080 </code></pre> <p>and the value GATEWAY_PORT=tcp://10.4.13.154:8080 clashed with your custom setting:</p> <pre><code> GATEWAY_PORT=8080 </code></pre>
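<p>If the generated service-link variables are not needed by the application at all, one way to avoid this kind of name clash (just a sketch based on the Deployment from the question, not a confirmed fix for every setup) is to disable service-link injection in the pod spec:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-deployment
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 2
  template:
    metadata:
      labels:
        app: gateway
    spec:
      # stops the kubelet from injecting {SVCNAME}_PORT-style variables,
      # so a Service named "gateway" no longer produces GATEWAY_PORT=tcp://...
      enableServiceLinks: false
      containers:
      - name: gateway
        image: hello-go-microservices_gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
</code></pre> <p>Note that with <code>enableServiceLinks: false</code> the Service is still reachable through cluster DNS (e.g. <code>gateway.default.svc</code>); only the environment-variable based discovery is turned off.</p>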
<p>I have Django application with Docker, nginx and gunicorn.</p> <p>I am trying to use nginx to serve the static files but I am getting 404.</p> <p>here is my nginx.conf: </p> <pre><code>events { worker_connections 1024; } http { upstream backend { ip_hash; server backend:8000; } server { location /static { autoindex on; alias /api/static; } location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://backend; } listen 80; } } </code></pre> <p>kubernetes manifest file: Nginx and app are two separate containers within the same deployment.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: backend namespace: staging labels: group: backend annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: &lt;host name&gt; http: paths: - path: / backend: serviceName: backend servicePort: 8000 --- apiVersion: v1 kind: Service metadata: name: backend namespace: staging labels: group: backend spec: selector: app: backend ports: - port: 8000 targetPort: 8000 name: backend - port: 80 targetPort: 80 name: nginx --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: backend namespace: staging labels: group: backend spec: replicas: 1 template: metadata: labels: app: backend group: backend spec: containers: - name: nginx image: &lt;image&gt; command: [nginx, -g,'daemon off;'] imagePullPolicy: Always ports: - containerPort: 80 - name: backend image: &lt;image&gt; command: ["gunicorn", "-b", ":8000", "api.wsgi"] imagePullPolicy: Always ports: - containerPort: 8000 </code></pre> <p>settings.py</p> <pre><code>STATIC_URL = '/static/' STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),) </code></pre> <p>Dockerfile for nginx:</p> <pre><code>FROM nginx:latest ADD app/api/static /api/static ADD config/nginx /etc/nginx WORKDIR /api </code></pre> <p>I checked in the nginx container, all static files are present in the /api directory. </p>
<p>You need to create a volume, share it between nginx and your Django backend, and then run <code>python manage.py collectstatic</code> into that volume. But there is a catch: if the two containers run in separate pods, your cluster needs to support <code>ReadWriteMany</code> in <code>accessModes</code> for the PVC storage. If you don't have access to this, you can keep both containers (your Django backend and nginx) in the same pod, as you already do, and share a volume between them, as sketched below.</p>
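<p>A rough sketch of that shared-volume approach (the image names and the <code>/api/static</code> path are assumptions taken from the question and should be adjusted to your own setup):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      volumes:
      - name: static-files
        emptyDir: {}
      initContainers:
      # fills the shared volume with the collected static files before the main containers start
      - name: collectstatic
        image: django-backend-image          # assumption: same image as the backend container
        command: ["python", "manage.py", "collectstatic", "--noinput"]
        volumeMounts:
        - name: static-files
          mountPath: /api/static             # assumption: STATIC_ROOT points here
      containers:
      - name: backend
        image: django-backend-image          # assumption
        command: ["gunicorn", "-b", ":8000", "api.wsgi"]
        ports:
        - containerPort: 8000
      - name: nginx
        image: nginx-image                   # assumption
        ports:
        - containerPort: 80
        volumeMounts:
        - name: static-files
          mountPath: /api/static             # matches "alias /api/static" in nginx.conf
</code></pre>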
<p>I am trying to run Prometheus to ONLY monitor pods in specific namespaces (in an OpenShift cluster).</p> <p>I am getting &quot;cannot list pods at the cluster scope&quot;, even though I have tried to configure it to not use the cluster scope (and only look in specific namespaces instead).</p> <p>I've set:</p> <pre><code> prometheus.yml: | scrape_configs: - job_name: prometheus static_configs: - targets: - localhost:9090 - job_name: kubernetes-pods kubernetes_sd_configs: - namespaces: names: - api-mytestns1 - api-mytestns2 role: pod relabel_configs: [cut] </code></pre> <p>I get this error even if I remove the -job_name: kubernetes-pods entirely, so maybe it's something else in Prometheus that needs disabling?</p>
<p>I found that one had to override <code>server.alertmanagers</code> with a complete copy of the settings from <code>charts/prometheus/templates/server-configmap.yaml</code>, in order to override the hardcoded defaults in that file that try to scrape cluster-wide.</p>
<p>I'm configuring startup/liveness/readiness probes for kubernetes deployments serving spring boot services. As per the Spring Boot documentation, it's best practice to use the corresponding liveness &amp; readiness actuator endpoints, as described here: <a href="https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot" rel="nofollow noreferrer">https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot</a></p> <p>What do you use for your startup probe? What are your recommendations for failureThreshold, delay, period and timeout values? Did you encounter issues when deploying Istio sidecars to an existing setup?</p>
<p>I use the paths <code>/actuator/health/readiness</code> and <code>/actuator/health/liveness</code>:</p> <pre><code>readinessProbe: initialDelaySeconds: 120 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 failureThreshold: 3 httpGet: scheme: HTTP path: /actuator/health/readiness port: 8080 livenessProbe: initialDelaySeconds: 120 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 failureThreshold: 3 httpGet: scheme: HTTP path: /actuator/health/liveness port: 8080 </code></pre> <p>As for the recommendations, it really depends on your needs and policies ( <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a> )</p> <p>No Istio sidecar issues with this :)</p> <p>Do not forget to activate the endpoints in the properties (cf. <a href="https://www.baeldung.com/spring-liveness-readiness-probes" rel="nofollow noreferrer">https://www.baeldung.com/spring-liveness-readiness-probes</a>):</p> <pre><code>management.endpoint.health.probes.enabled=true management.health.livenessState.enabled=true management.health.readinessState.enabled=true </code></pre>
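<p>For the startup probe part of the question, one possible approach (just a sketch; the numbers are assumptions to tune against your slowest expected startup) is to reuse the liveness endpoint and give the application a generous window:</p> <pre><code>startupProbe:
  httpGet:
    scheme: HTTP
    path: /actuator/health/liveness
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # allows up to 30 * 10s = 300s for startup
  timeoutSeconds: 5
</code></pre> <p>Once the startup probe succeeds, the liveness and readiness probes take over, so their <code>initialDelaySeconds</code> can usually be reduced or dropped.</p>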
<p>I need to share a directory between two containers, myapp and monitoring. To achieve this I created an <code>emptyDir: {}</code> and then a volumeMount on both containers.</p> <pre><code>spec: volumes: - name: shared-data emptyDir: {} containers: - name: myapp volumeMounts: - name: shared-data mountPath: /etc/myapp/ - name: monitoring volumeMounts: - name: shared-data mountPath: /var/read </code></pre> <p>This works fine, as the data I write to the shared-data directory is visible in both containers. However, the config file that is created when the container starts, under /etc/myapp/myapp.config, is hidden because the shared-data volume is mounted over the /etc/myapp path (overlap).</p> <p>How can I force the container to first mount the volume on the /etc/myapp path and then have the docker image place the myapp.config file under its default path /etc/myapp (which is now the mounted volume), thus allowing the config file to be accessible by the monitoring container under /var/read?</p> <p>Summary: let the monitoring container read the /etc/myapp/myapp.config file sitting in the myapp container.</p> <p>Can anyone advise please?</p>
<p>Consider using <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMaps</a> with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">SubPaths</a>.</p> <blockquote> <p>A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.</p> </blockquote> <blockquote> <p>Sometimes, it is useful to share one volume for multiple uses in a single pod. The <code>volumeMounts.subPath</code> property specifies a sub-path inside the referenced volume instead of its root.</p> </blockquote> <p>ConfigMaps can be used as volumes. The <code>volumeMounts</code> inside the <code>template.spec</code> are the same as any other volume. However, the volumes section is different. Instead of specifying a <code>persistentVolumeClaim</code> or other volume type, you reference the ConfigMap by name. Then you can add the <code>subPath</code> property, which would look something like this (a fuller sketch follows after the resource list below):</p> <pre><code>volumeMounts: - name: shared-data mountPath: /etc/myapp/ subPath: myapp.config </code></pre> <p>Here are the resources that would show you how to set it up:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Configure a Pod to Use a ConfigMap</a>: official docs</p> </li> <li><p><a href="https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i" rel="nofollow noreferrer">Using ConfigMap SubPaths to Mount Files</a>: step by step guide</p> </li> <li><p><a href="https://carlos.mendible.com/2019/02/10/kubernetes-mount-file-pod-with-configmap/" rel="nofollow noreferrer">Mount a file in your Pod using a ConfigMap</a>: supplement</p> </li> </ul>
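<p>Putting this together for the myapp/monitoring pod, a minimal sketch could look like the following; the ConfigMap name, key and images are assumptions and would have to match whatever actually holds your <code>myapp.config</code>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config            # assumption: ConfigMap carrying the config file
data:
  myapp.config: |
    # contents of myapp.config go here
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  volumes:
  - name: config-volume
    configMap:
      name: myapp-config
  containers:
  - name: myapp
    image: myapp-image          # assumption
    volumeMounts:
    - name: config-volume
      mountPath: /etc/myapp/myapp.config   # only this file is mounted; the rest of /etc/myapp stays intact
      subPath: myapp.config
  - name: monitoring
    image: monitoring-image     # assumption
    volumeMounts:
    - name: config-volume
      mountPath: /var/read/myapp.config
      subPath: myapp.config
</code></pre>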
<p>I installed a brand new 1.16.0 worker node using kubeadm and I am getting the following:</p> <pre><code>Kubernetes version: Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} OS: 18.04.3 LTS (Bionic Beaver) Kernel: Linux kube-node-5 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <hr> <pre><code>Name: kube-proxy Selector: k8s-app=kube-proxy Node-Selector: beta.kubernetes.io/os=linux Labels: k8s-app=kube-proxy Annotations: deprecated.daemonset.template.generation: 2 Desired Number of Nodes Scheduled: 8 Current Number of Nodes Scheduled: 8 Number of Nodes Scheduled with Up-to-date Pods: 8 Number of Nodes Scheduled with Available Pods: 8 Number of Nodes Misscheduled: 0 Pods Status: 8 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: k8s-app=kube-proxy Service Account: kube-proxy Containers: kube-proxy: Image: k8s.gcr.io/kube-proxy:v1.15.0 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME) Environment: NODE_NAME: (v1:spec.nodeName) Mounts: /lib/modules from lib-modules (ro) /run/xtables.lock from xtables-lock (rw) /var/lib/kube-proxy from kube-proxy (rw) Volumes: kube-proxy: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-proxy Optional: false xtables-lock: Type: HostPath (bare host directory volume) Path: /run/xtables.lock HostPathType: FileOrCreate lib-modules: Type: HostPath (bare host directory volume) Path: /lib/modules HostPathType: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 3h55m daemonset-controller Error creating: Pod "kube-proxy-nz5bk" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h38m daemonset-controller Error creating: Pod "kube-proxy-l26kw" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h21m daemonset-controller Error creating: Pod "kube-proxy-fjcpd" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-msqnx" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-pssv5" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-59cx8" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-t9nh2" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-5hp6c" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning 
FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-hbbl4" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-zph4z" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-prj9w" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 3h7m daemonset-controller Error creating: Pod "kube-proxy-rhnjq" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 177m (x9 over 3h7m) daemonset-controller (combined from similar events): Error creating: Pod "kube-proxy-whdnm" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 166m daemonset-controller Error creating: Pod "kube-proxy-2xhgt" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 149m daemonset-controller Error creating: Pod "kube-proxy-zd429" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 132m daemonset-controller Error creating: Pod "kube-proxy-wzn8x" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-l8csx" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-6jxpl" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-jk29x" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-p7db2" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-kf8qz" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-l5wjh" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-d8brg" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-6w2ql" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 124m daemonset-controller Error creating: Pod "kube-proxy-d4n47" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy Warning FailedCreate 122m (x7 over 124m) daemonset-controller (combined from similar events): Error creating: Pod "kube-proxy-2lnpb" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy </code></pre> <p>The not so funny thing is that all the other nodes have absolutely NO problem creating the kube-proxy pods. 
It is only this one node that is failing with the above error. </p> <p>I have tried a variety of things to fix this issue but have yet to find a solution. Previous installations using kubeadm were flawless. </p> <p>I have a feeling I am missing a PodSecurityPolicy and a binding to the kube-proxy role. I am definitely missing something but I have no idea.</p>
<p>Trying to add a new node from a different release to an existing cluster tends to cause exactly this kind of problem. As an example, in 1.15 the deprecated kubelet security control AllowPrivileged was removed; please refer to the release <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#node" rel="nofollow noreferrer">CHANGELOG-1.15.md</a>: </p> <blockquote> <p>The deprecated kubelet security controls AllowPrivileged, HostNetworkSources, HostPIDSources, and HostIPCSources have been removed. Enforcement of these restrictions should be done through admission control (such as PodSecurityPolicy) instead</p> </blockquote> <p>In my opinion you should remove this node (please refer to these docs first):</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">Safely Drain a Node while Respecting the PodDisruptionBudget</a> </li> <li><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Nodes</a> </li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/" rel="nofollow noreferrer">Cluster Management</a> </li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">Upgrading kubeadm clusters</a> </li> </ul> <p>After that you should upgrade your cluster according to best practices.</p> <p><strong>Please note, before you start upgrading your cluster to the v1.16.0 release</strong>, check the other notable changes in the latest release:</p> <ul> <li><a href="https://kubernetes.io/docs/setup/release/notes/#urgent-upgrade-notes" rel="nofollow noreferrer">Urgent Upgrade Notes</a> </li> </ul>
<p>Can someone help me understand the IP address I see as the cluster IP when I list services?</p> <ol> <li>What is the cluster IP (not the service type, but the actual IP)?</li> <li>How is it used?</li> <li>Where does it come from?</li> <li>Can I define the range for cluster IPs (like we do for the pod network)?</li> </ol>
<p>Good question to start learning something new (also for me):</p> <p>Your concerns are related to <code>kube-proxy</code>, which by default works in <code>iptables mode</code> in a K8s cluster.</p> <p>Every node in a Kubernetes cluster runs a kube-proxy. Kube-proxy is responsible for implementing a form of virtual IP for Services.</p> <blockquote> <p>In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service, it installs iptables rules, which capture traffic to the Service’s clusterIP and port, and redirect that traffic to one of the Service’s backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod.</p> </blockquote> <ol> <li><p><a href="https://kubernetes.io/docs/concepts/overview/components/#kube-proxy" rel="nofollow noreferrer">Node components kube-proxy</a>:</p> <ul> <li>kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.</li> <li>kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.</li> <li>kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.</li> </ul></li> </ol> <p>As described <a href="https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-3-f35957784c8e" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Due to these iptables rules, whenever a packet is destined for a service IP, it’s DNATed (DNAT=Destination Network Address Translation), meaning the destination IP is changed from service IP to one of the endpoints pod IP chosen at random by iptables. This makes sure the load is evenly distributed among the backend pods.</p> <p>When this DNAT happens, this info is stored in conntrack — the Linux connection tracking table (stores 5-tuple translations iptables has done: protocol, srcIP, srcPort, dstIP, dstPort). This is so that when a reply comes back, it can un-DNAT, meaning change the source IP from the Pod IP to the Service IP. This way, the client is unaware of how the packet flow is handled behind the scenes.</p> </blockquote> <p>There are also other modes; you can find more information <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">here</a>. </p> <ol start="2"> <li><p>During cluster initialization you can use the <code>--service-cidr</code> string parameter (default: <code>"10.96.0.0/12"</code>); a declarative kubeadm example is shown at the end of this answer.</p> <ul> <li>ClusterIP: The IP address assigned to a Service</li> </ul></li> </ol> <blockquote> <p>Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.</p> </blockquote> <ul> <li>Pod IP: The IP address assigned to a given Pod. 
<blockquote> <p>Kubernetes assigns an IP address (the Pod IP) to the virtual network interface in the Pod's network namespace from a range of addresses reserved for Pods on the node. This address range is a subset of the IP address range assigned to the cluster for Pods, which you can configure when you create a cluster.</p> </blockquote></li> </ul> <p>Resources:</p> <ul> <li><a href="https://supergiant.io/blog/understanding-kubernetes-kube-proxy/" rel="nofollow noreferrer">Iptables Mode</a> </li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#services" rel="nofollow noreferrer">Network overview</a> </li> <li><a href="https://supergiant.io/blog/understanding-kubernetes-kube-proxy/" rel="nofollow noreferrer">Understanding Kubernetes Kube-Proxy</a> </li> </ul> <p>Hope this helped</p>
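<p>As a supplement to point 2 above: the same Service CIDR can also be set declaratively in a kubeadm configuration file instead of the <code>--service-cidr</code> flag. A sketch, where the subnet values are only examples:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: 10.96.0.0/12   # the pool that ClusterIPs are allocated from
  podSubnet: 10.244.0.0/16      # example Pod network range
</code></pre>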
<p>I have a question and a problem about capabilities.</p> <p>Why does my program work when I run <code>docker run --cap-add=NET_ADMIN ...</code>?</p> <p>And why doesn't it work if I run my program with this .yml file:</p> <pre><code> containers: - name: snake image: docker.io/kelysa/snake:lastest imagePullPolicy: Always securityContext: privileged: true capabilities: add: ["NET_ADMIN","NET_RAW"] </code></pre> <p>What is the difference between running docker with --cap-add and running a pod with the same capabilities? </p>
<p>As described by <a href="https://stackoverflow.com/questions/58377469/difference-between-cap-add-net-admin-and-add-capabilities-in-yml#">David Maze</a> and according to the Docker <a href="https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities" rel="noreferrer">docs: Runtime privilege and Linux capabilities</a>: </p> <blockquote> <p>By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a “privileged” container is given access to all devices (see the documentation on cgroups devices).</p> <pre><code>--cap-add: Add Linux capabilities, --cap-drop: Drop Linux capabilities, --privileged=false: Give extended privileges to this container --device=[]: Allows you to run devices inside the container without the --privileged flag. </code></pre> <p>When the operator executes <code>docker run --privileged</code>, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. </p> <p>In addition to --privileged, the operator can have fine grain control over the capabilities using --cap-add and --cap-drop. </p> </blockquote> <p>There you can find two kinds of capabilities:</p> <ul> <li>capabilities that Docker keeps by default,</li> <li>capabilities which are not granted by default and may be added.</li> </ul> <p>The command <code>docker run --cap-add=NET_ADMIN</code> will apply additional Linux capabilities.</p> <p>As per the docs:</p> <blockquote> <p>For interacting with the network stack, instead of using --privileged they should use --cap-add=NET_ADMIN to modify the network interfaces.</p> </blockquote> <p><strong>Note</strong>:</p> <p>To reduce syscall attacks it's good practice to give the container only the required privileges. Please refer also to <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">Enabling Pod Security Policies</a>.</p> <p>In a container spec this can be achieved by using:</p> <pre><code>securityContext: capabilities: drop: ["all"] add: ["NET_BIND_SERVICE"] </code></pre> <p>To see the capabilities applied inside your container you can use <code>getpcaps process_id</code> (or <code>getpcaps $(pgrep your-process-name)</code>); to list and explore Linux capabilities you can use <code>capsh --print</code>.</p> <p><strong>Resources</strong>:</p> <ul> <li><a href="https://linux-audit.com/linux-capabilities-101/" rel="noreferrer">Linux capabilities</a>, </li> <li><a href="https://github.com/docker/labs/tree/master/security/capabilities" rel="noreferrer">docker labs</a>, </li> <li><a href="http://man7.org/linux/man-pages/man1/capsh.1.html" rel="noreferrer">capsh</a> </li> <li><a href="https://www.weave.works/blog/container-capabilities-kubernetes/" rel="noreferrer">Configuring Container Capabilities with Kubernetes</a> </li> <li><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#what-is-a-pod-security-policy" rel="noreferrer">What is a Pod Security Policy</a></li> </ul> <p>Hope this helps.</p>
<p>I'd like to use some code like the below to check whether a comma is a substring of the string.</p> <pre><code>{{- if contains '\,' .alias }} </code></pre> <p>But it reports an error on that line when using <code>helm lint</code>:</p> <pre><code> invalid syntax </code></pre> <p>How do I write it correctly?</p>
<p>Thanks to community user <a href="https://stackoverflow.com/questions/57926732/how-to-check-if-the-comma-exist-in-helm-charts#">Bimal</a>.</p> <p>It can be applied like:</p> <pre><code>{{- if contains "," .Values.xxx.name }} some_data: {{ .Values.xxx.name }} {{- end }} </code></pre> <p>For further reference please follow <a href="http://masterminds.github.io/sprig/strings.html" rel="nofollow noreferrer">String Functions</a>: </p>
<p>I'm trying to mount a secret as a file</p> <pre><code>apiVersion: v1 data: credentials.conf: &gt;- dGl0bGU6IHRoaYWwpCg== kind: Secret metadata: name: address-finder-secret type: Opaque </code></pre> <pre><code>kind: DeploymentConfig apiVersion: v1 metadata: name: app-sample spec: replicas: 1 selector: app: app-sample template: metadata: labels: app: app-sample spec: volumes: - name: app-sample-vol configMap: name: app-sample-config - name: secret secret: secretName: address-finder-secret containers: - name: app-sample volumeMounts: - mountPath: /config name: app-sample-vol - mountPath: ./secret/credentials.conf name: secret readOnly: true subPath: credentials.conf </code></pre> <p>I need to add the <code>credentials.conf</code> file to a directory where there are already other files. I'm trying to use <code>subPath</code>, but I get <code>'Error: failed to create subPath directory for volumeMount &quot;secret&quot; of container &quot;app-sample&quot;'</code> If I remove the subPath, I will lose all other files in the directory.</p> <p>Where did I go wrong?</p>
<p>Hello, hope you are enjoying your Kubernetes journey!</p> <p>It would have been better if you had given your image name so I could try it out; instead, I decided to create a custom image.</p> <p>I created a simple file named file1.txt and copied it into the image; here is my Dockerfile:</p> <pre><code>FROM nginx COPY file1.txt /secret/ </code></pre> <p>I built it simply with:</p> <pre><code>❯ docker build -t test-so-mount-file . </code></pre> <p>I just checked that my file was there before going further:</p> <pre><code>❯ docker run -it test-so-mount-file bash root@1c9cebc4884c:/# ls bin etc mnt sbin usr boot home opt secret var dev lib proc srv docker-entrypoint.d lib64 root sys docker-entrypoint.sh media run tmp root@1c9cebc4884c:/# cd secret/ root@1c9cebc4884c:/secret# ls file1.txt root@1c9cebc4884c:/secret# </code></pre> <p>Perfect. Now let's deploy it on Kubernetes.</p> <p>For this test, since I'm using kind (Kubernetes in Docker), I just used this command to upload my image to the cluster:</p> <pre><code>❯ kind load docker-image test-so-mount-file --name so-cluster-1 </code></pre> <p>It seems that you are deploying on OpenShift, judging by your &quot;DeploymentConfig&quot; kind. However, once my image had been added to my cluster I modified your deployment to use it:</p> <p>First without volumes, to check that file1.txt is in the container:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app-sample spec: replicas: 1 selector: matchLabels: app: app-sample template: metadata: labels: app: app-sample spec: containers: - name: app-sample image: test-so-mount-file imagePullPolicy: Never # volumeMounts: </code></pre> <p>Yes, it is:</p> <pre><code>❯ k exec -it app-sample-7b96558fdf-hn4qt -- ls /secret file1.txt </code></pre> <p>Before going further, when I tried to deploy your secret I got this:</p> <pre><code>Error from server (BadRequest): error when creating &quot;manifest.yaml&quot;: Secret in version &quot;v1&quot; cannot be handled as a Secret: illegal base64 data at input byte 20 </code></pre> <p>This is caused by your base64 string, which actually contains illegal base64 data; here it is:</p> <pre><code>❯ base64 -d &lt;&lt;&lt; &quot;dGl0bGU6IHRoaYWwpCg==&quot; title: thi���(base64: invalid input </code></pre> <p>No problem, I used another base64 string:</p> <pre><code>❯ base64 &lt;&lt;&lt; test dGVzdAo= </code></pre> <p>and added it to the secret. Since I want this data to be in a file, I replaced the '&gt;-' with a '|-' (<a href="https://stackoverflow.com/questions/72082343/what-is-the-difference-between-and-in-yaml">What is the difference between &#39;&gt;-&#39; and &#39;|-&#39; in yaml?</a>), although it works with or without it. Now, let's add the secret to our deployment. I replaced &quot;./secret/credentials.conf&quot; with &quot;/secret/credentials.conf&quot; (it works with or without, but I prefer to remove the &quot;.&quot;). Since I don't have your ConfigMap data, I commented out that part. Here is the full deployment manifest of my file manifest.yaml:</p> <pre><code>apiVersion: v1 kind: Secret type: Opaque metadata: name: address-finder-secret data: credentials.conf: |- dGVzdAo= --- apiVersion: apps/v1 kind: Deployment metadata: name: app-sample spec: replicas: 1 selector: matchLabels: app: app-sample template: metadata: labels: app: app-sample spec: containers: - name: app-sample image: test-so-mount-file imagePullPolicy: Never volumeMounts: # - mountPath: /config # name: app-sample-vol - mountPath: /secret/credentials.conf name: secret readOnly: true subPath: credentials.conf volumes: # - name: app-sample-vol # configMap: # name: app-sample-config - name: secret secret: secretName: address-finder-secret </code></pre> <p>Let's deploy this:</p> <pre><code>❯ kaf manifest.yaml secret/address-finder-secret created deployment.apps/app-sample created ❯ k get pod NAME READY STATUS RESTARTS AGE app-sample-c45ff9d58-j92ct 1/1 Running 0 31s ❯ k exec -it app-sample-c45ff9d58-j92ct -- ls /secret credentials.conf file1.txt ❯ k exec -it app-sample-c45ff9d58-j92ct -- cat /secret/credentials.conf test </code></pre> <p>It worked perfectly. Since I haven't changed much from your manifest, I think the problem comes from the DeploymentConfig. I suggest you use a Deployment instead of a DeploymentConfig; that way it should work (I hope), and if someday you decide to migrate from OpenShift to another Kubernetes cluster your manifest will be compatible.</p> <p>bguess</p>
<p>I'm trying to follow this step-by-step guide to run Airflow on Kubernetes (<a href="https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm" rel="noreferrer">https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm</a>), but at this part of the execution I run into problems, as follows.</p> <p>Researching the topic I did not find anything that solved my problem so far; does anyone have any suggestions on what to do?</p> <pre><code>SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME echo $SQL_ALCHEMY_CONN &gt; /secrets/airflow/sql_alchemy_conn # Create the fernet key which is needed to decrypt the database FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2&gt;/dev/null | openssl base64) echo $FERNET_KEY &gt; /secrets/airflow/fernet-key kubectl create secret generic airflow \ --from-file=fernet-key=/secrets/airflow/fernet-key \ --from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn </code></pre> <blockquote> <p>Unable to connect to the server: error executing access token command "/google/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): '' If you would like to report this issue, please run the following command: gcloud feedback To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics</p> </blockquote>
<p>I solved this by opening a new Cloud Shell tab to reconnect to the cluster:</p> <p><code>gcloud container clusters get-credentials testcluster1 --zone=your_zone</code></p>
<p>The container sets this resource limit:</p> <pre><code>resources: limits: cpu: "1" memory: 1G requests: cpu: "1" memory: 1G </code></pre> <p>The cgroup memory limit:</p> <pre><code>cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-podaace5b66_c7d0_11e9_ba2a_dcf401d01e81.slice/memory.limit_in_bytes 999997440 </code></pre> <p>1GB = 1*1024*1024*1024 = 1,073,741,824 B</p> <p>k8s version: 1.14.4</p> <p>docker version: docker-ce-18.09.6 OS: ubuntu 18.04</p>
<p>I have performed some tests.</p> <p>For values between 999997440 B (976560 KB) and 1000000000 B (as in your example) you will get the same result, memory.limit_in_bytes = 999997440 B, until you reach the next (integer) number of bytes divisible by your page size (default 4096). In my example that was 1000001536 B (976564K).</p> <p>I am not a Linux expert, but according to the <a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="nofollow noreferrer">documentation</a>: </p> <blockquote> <p>A successful write to this file does not guarantee a successful setting of this limit to the value written into the file. This can be due to a number of factors, such as rounding up to page boundaries or the total availability of memory on the system. The user is required to re-read this file after a write to guarantee the value committed by the kernel.</p> </blockquote> <p>I would suggest using the Gi notation instead, as mentioned by <a href="https://stackoverflow.com/a/57761069/11207414">prometherion</a>, to have more control over resource limits, for example as shown below.</p> <p>Hope this helps.</p>
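<p>A small sketch of the limits block using the binary notation:</p> <pre><code>resources:
  limits:
    cpu: "1"
    memory: 1Gi    # 1Gi = 1073741824 bytes, whereas 1G = 1000000000 bytes
  requests:
    cpu: "1"
    memory: 1Gi
</code></pre>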
<p>I am going to install a Redis cluster for use by my applications. I was planning to install it using a prepared Helm chart.</p> <p>But there's a saying that goes:</p> <blockquote> <p>Redis installed in k8s will have less performance compared to standalone installations, because of shared hardware resources (CPU, memories...)</p> </blockquote> <p>Is that true?</p>
<p>As already mentioned by Burak in the comments, you can choose to have a dedicated node (or nodes) only for the Redis pods in order to avoid resource sharing with other services, as sketched below.</p> <p>Also it is worth mentioning that Redis performance is tied to the underlying VM specification. Redis is single-threaded, so a fast CPU with large caches will perform better; multiple cores do not directly affect performance. If your workload is relatively small (objects are less than 10 KB), memory is not as critical for optimizing performance.</p> <p>Finally, you can use the <a href="https://redis.io/topics/benchmarks" rel="nofollow noreferrer">redis-benchmark</a> in order to test the performance yourself. There are plenty of examples to check out. Or use other tools like <a href="https://github.com/RedisLabs/memtier_benchmark#memtier_benchmark" rel="nofollow noreferrer">memtier_benchmark</a> or <a href="https://github.com/gamenet/redis-memory-analyzer#redis-memory-analyzer" rel="nofollow noreferrer">Redis Memory Analyzer</a>.</p>
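<p>A sketch of how a dedicated node could be targeted from the Redis pod spec; the label and taint names are assumptions, and most Redis Helm charts expose equivalent <code>nodeSelector</code>/<code>tolerations</code> values:</p> <pre><code>spec:
  nodeSelector:
    dedicated: redis          # assumption: node labeled with "kubectl label node &lt;node-name&gt; dedicated=redis"
  tolerations:
  - key: dedicated
    operator: Equal
    value: redis
    effect: NoSchedule        # assumption: node tainted with "kubectl taint node &lt;node-name&gt; dedicated=redis:NoSchedule"
</code></pre>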
<p>I am new to Kubernetes. Suppose a service is deployed using EKS with 4 replicas A, B, C, D. Usually the load balancer directs requests across these replicas, but what if I want my request to go to replica A only, or to B only? How can we achieve that? Please share some links or steps for guidance.</p>
<p>What you could use are the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Services</a>:</p> <blockquote> <p>Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed &quot;headless&quot; Services, by explicitly specifying <code>&quot;None&quot;</code> for the cluster IP (<code>.spec.clusterIP</code>).</p> <p>You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation.</p> <p>For headless <code>Services</code>, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined:</p> <ul> <li>With selectors</li> </ul> <p>For headless Services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return records (addresses) that point directly to the <code>Pods</code> backing the <code>Service</code>.</p> <ul> <li>Without selectors</li> </ul> <p>For headless Services that do not define selectors, the endpoints controller does not create <code>Endpoints</code> records. However, the DNS system looks for and configures either:</p> <ul> <li><p>CNAME records for <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a>-type Services.</p> </li> <li><p>A records for any <code>Endpoints</code> that share a name with the Service, for all other types.</p> </li> </ul> </blockquote> <p>So, a <code>Headless service</code> is the same as default <code>ClusterIP service</code>, but without load balancing or proxying and therefore allowing you to connect to a Pod directly.</p> <p>You can also reference below guides for further assistance:</p> <ul> <li><p><a href="https://dev.to/kaoskater08/building-a-headless-service-in-kubernetes-3bk8" rel="nofollow noreferrer">Building a headless service in Kubernetes</a></p> </li> <li><p><a href="https://medium.com/faun/kubernetes-headless-service-vs-clusterip-and-traffic-distribution-904b058f0dfd" rel="nofollow noreferrer">Kubernetes Headless service vs ClusterIP and traffic distribution</a></p> </li> </ul>
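<p>A minimal sketch of a headless Service for the scenario above (the names are assumptions):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None      # this is what makes the Service headless
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>A DNS lookup for <code>my-app-headless</code> then returns the individual Pod IPs instead of a single virtual IP, and if the replicas are managed by a StatefulSet each one additionally gets a stable per-Pod DNS name (for example <code>my-app-0.my-app-headless</code>), which lets a client address replica A or B directly.</p>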
<p>I have already uploaded an image with everything I need to run in GCP using KubernetesPodOperator and I get the message below, could anyone help me understand what is going on?</p> <p>Below is a summary of my script and error message:</p> <pre><code>import os import pandas as pd import numpy as np from datetime import datetime, timedelta from airflow.contrib.operators.mssql_to_gcs import MsSqlToGoogleCloudStorageOperator from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator import pyarrow import airflow from airflow import DAG from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator default_args = { 'owner': 'me', 'start_date': airflow.utils.dates.days_ago(0), 'depends_on_past': False, 'email_on_failure': False, 'email_on_retry': False, 'depends_on_past': False, 'catchup': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } with DAG('test_kube', default_args=default_args, description='Kubernetes Operator', schedule_interval='00 12 01 * *') as dag: k = KubernetesPodOperator(namespace='kubenm', image="teste-kube:latest", name="test", task_id="test", is_delete_operator_pod=False, hostnetwork=False, dag=dag ) k </code></pre> <p>This is the first time I am using this operator and I am wondering if it will meet my needs.</p> <p>Log:</p> <pre><code>INFO - Job 11344: Subtask test Traceback (most recent call last): INFO - Job 11344: Subtask test File "/usr/local/bin/airflow", line 32, in &lt;module&gt; INFO - Job 11344: Subtask test args.func(args) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/utils/cli.py", line 74, in wrapper INFO - Job 11344: Subtask test return f(*args, **kwargs) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/bin/cli.py", line 522, in run INFO - Job 11344: Subtask test _run(args, dag, ti) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/bin/cli.py", line 440, in _run INFO - Job 11344: Subtask test pool=args.pool, INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/utils/db.py", line 74, in wrapper INFO - Job 11344: Subtask test return func(*args, **kwargs) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 926, in _run_raw_task INFO - Job 11344: Subtask test result = task_copy.execute(context=context) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 111, in execute INFO - Job 11344: Subtask test config_file=self.config_file) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/kubernetes/kube_client.py", line 56, in get_kube_client INFO - Job 11344: Subtask test return _load_kube_config(in_cluster, cluster_context, config_file) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/kubernetes/kube_client.py", line 38, in _load_kube_config INFO - Job 11344: Subtask test config.load_kube_config(config_file=config_file, context=cluster_context) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 645, in load_kube_config INFO - Job 11344: Subtask test persist_config=persist_config) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 613, in _get_kube_config_loader_for_yaml_file INFO - Job 11344: Subtask test **kwargs) INFO - Job 
11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 153, in __init__ INFO - Job 11344: Subtask test self.set_active_context(active_context) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 173, in set_active_context INFO - Job 11344: Subtask test context_name = self._config['current-context'] INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 495, in __getitem__ INFO - Job 11344: Subtask test v = self.safe_get(key) INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 491, in safe_get INFO - Job 11344: Subtask test key in self.value): INFO - Job 11344: Subtask test TypeError: argument of type 'NoneType' is not iterable INFO - [[34m2019-09-30 17:18:16,274[0m] {[34mlocal_task_job.py:[0m172} WARNING[0m - State of this instance has been externally set to [1mup_for_retry[0m. Taking the poison pill.[0m INFO - Sending Signals.SIGTERM to GPID 9 INFO - Process psutil.Process(pid=9, status='terminated') (9) terminated with exit code -15 INFO - [[34m2019-09-30 17:18:16,303[0m] {[34mlocal_task_job.py:[0m105} INFO[0m </code></pre>
<p>I made some adjustments to the script that made the operation work:</p> <pre><code>with DAG('test_kube', default_args=default_args, description='Kubernetes Operator', schedule_interval='00 12 01 * *') as dag: k = KubernetesPodOperator(namespace='kubenm', image="gcr.io/project/teste-kube:latest", #Image path was incorrect name="test", in_cluster=True, #To trigger cluster kubeconfig. image_pull_policy="Always", #In my case, I need the image update to occur whenever there is an update task_id="test", is_delete_operator_pod=False, hostnetwork=False, dag=dag ) k </code></pre>
<p>Kubernetes has CronJobs, which can be used to schedule jobs periodically: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a></p> <p>Is there a way to reset a cronjob schedule? Let's say it's running at the <code>0</code>, <code>5</code>, <code>10</code>, <code>15th</code> minute of the hour on an every-5-minutes schedule. Now if I want to schedule a job manually at the <code>17th</code> minute, I can trigger one, but I also want the cronjob schedule to reset so that the next run is at the <code>22nd</code> minute instead of the <code>20th</code>.</p> <p>I have tried deleting and recreating the job with the same spec, but that doesn't help either; somehow it ends up restoring the schedule metadata for the job spec.</p> <p>Is there a trick to reset a cronjob schedule?</p> <p>Cron job running every 5 min:</p> <p>0, 5, 10, 15th, 20th</p> <p>Reset at the 17th min, the next cadence should be:</p> <p>0, 5, 10, 15th, 17th, 22nd</p>
<p>Automatically, no (well, maybe with a script that modifies your cronjob manifest, but I think that's ugly; do whatever you want).</p> <p>However, if you want to modify it after a manual job execution (for example at minute 17), modify your cronjob schedule like this, for example:</p> <pre><code>2-57/5 * * * * </code></pre> <p>This way, if you ran it manually at minute 17, the next runs will be at 22, 27, etc.</p> <p>If another day you run it manually at minute 24, for example, do this:</p> <pre><code>4-59/5 * * * * </code></pre> <p>Etc...</p>
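<p>For reference, a sketch of where that expression lives in the CronJob manifest (the job name and image are placeholders):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob               # placeholder
spec:
  schedule: &quot;2-57/5 * * * *&quot;     # minutes 2, 7, 12, ..., 57
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: my-job-image  # placeholder
          restartPolicy: OnFailure
</code></pre>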
<p>I'm a very new user of the k8s Python client.</p> <p>I'm trying to find a way to get jobs with a regex in the Python client.</p> <p>For example in the CLI,</p> <p><code>kubectl describe jobs -n mynamespace partial-name-of-job</code></p> <p>gives me the number of jobs whose name has <code>partial-name-of-job</code> in "mynamespace".</p> <p>I'm trying to find the exact same thing in the Python client.</p> <p>I did several searches and some suggested using a label selector, but the Python client API function <code>BatchV1Api().read_namespaced_job()</code> requires the exact job name.</p> <p>Please let me know if there's a way!</p>
<p><code>kubectl describe jobs</code> describes all jobs in the default namespace rather than returning the number of jobs.</p> <p>So, as mentioned by <a href="https://stackoverflow.com/a/57722892/11207414">Yasen</a>, please use <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#list_namespaced_job" rel="nofollow noreferrer">list_namespaced_job</a> with the <code>namespace</code> parameter; it issues an API request like <code>kubectl get --raw=/apis/batch/v1/namespaces/{namespace}/jobs</code></p> <p>You can also filter the returned list in your script to get the specific values you need. Please run <code>kubectl get or describe --v=8</code> to see the exact API request. Please refer to <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">Kubectl output verbosity and debugging</a>.</p> <p>Hope this helps</p>
<p>Info:</p> <ul> <li>Kubernetes Server version: 1.14</li> <li>AWS Cloud Provider</li> <li>EBS volume, storageclass</li> </ul> <p>Details: I have installed statefulset in our kubernetes cluster, however, it stuck it &quot;ContainerCreating&quot; status. Upon checking the logs, the error is &quot;AttachVolume.Attach failed for volume pvc-xxxxxx: error finding instance ip-xxxxx : &quot;instance not found&quot;</p> <p>It was succesfully installed around 17 days ago, but re-installing for an update caused the pod to stuck in ContainerCreating.</p> <p>Manual attaching volume to the instance works. But doing it via storage class is not working and stuck in ContainerCreating status.</p> <p>storageclass:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: &quot;true&quot; name: ssd-default allowVolumeExpansion: true parameters: encrypted: &quot;true&quot; type: gp2 provisioner: kubernetes.io/aws-ebs reclaimPolicy: Delete volumeBindingMode: Immediate </code></pre> <p>pvc yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: pv.kubernetes.io/bind-completed: &quot;yes&quot; pv.kubernetes.io/bound-by-controller: &quot;yes&quot; volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs finalizers: - kubernetes.io/pvc-protection labels: app.kubernetes.io/instance: thanos-store app.kubernetes.io/name: thanos-store name: data-thanos-store-0 namespace: thanos spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi storageClassName: ssd-default volumeMode: Filesystem volumeName: pvc-xxxxxx status: accessModes: - ReadWriteOnce capacity: storage: 3Gi phase: Bound </code></pre> <p>pv yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: &quot;yes&quot; pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: ap-xxx failure-domain.beta.kubernetes.io/zone: ap-xxx name: pvc-xxxx spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: ext4 volumeID: aws://xxxxx capacity: storage: 3Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-athena-thanos-store-0 namespace: thanos nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/region operator: In values: - ap-xxx - key: failure-domain.beta.kubernetes.io/zone operator: In values: - ap-xxx persistentVolumeReclaimPolicy: Delete storageClassName: ssd-default volumeMode: Filesystem status: phase: Bound </code></pre> <p>Describe pvc:</p> <pre><code>Name: data-athena-thanos-store-0 Namespace: athena-thanos StorageClass: ssd-encrypted Status: Bound Volume: pvc-xxxx Labels: app.kubernetes.io/instance=athena-thanos-store app.kubernetes.io/name=athena-thanos-store Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs Finalizers: [kubernetes.io/pvc-protection] Capacity: 3Gi Access Modes: RWO VolumeMode: Filesystem Mounted By: athena-thanos-store-0 </code></pre>
<p>The <code>FailedAttachVolume</code> error occurs when an EBS volume can’t be detached from an instance and thus cannot be attached to another. The EBS volume has to be in the available state to be attached. <code>FailedAttachVolume</code> is usually a symptom of an underlying failure to unmount and detach the volume.</p> <p>Notice that while describing the PVC the <code>StorageClass</code> name is <code>ssd-encrypted</code> which is a mismatch with the config you showed earlier where the <code>kind: StorageClass</code> name is <code>ssd-default</code>. That's why you can mount the volume manually but not via the <code>StorageClass</code>. You can drop and recreate the <code>StorageClass</code> with a proper data.</p> <p>Also, I recommend going through <a href="https://kubernetes.io/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/" rel="nofollow noreferrer">this article</a> and using <code>volumeBindingMode: WaitForFirstConsumer</code> instead of <code>volumeBindingMode: Immediate</code>. This setting instructs the volume provisioner to not create a volume immediately, and instead, wait for a pod using an associated PVC to run through scheduling.</p>
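<p>A sketch of the adjusted StorageClass, reusing the parameters from the question and only changing the binding mode:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-default
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: &quot;true&quot;
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # volume is provisioned only once a consuming Pod is scheduled
</code></pre> <p>Note that <code>volumeBindingMode</code> cannot be changed on an existing StorageClass, so the class has to be deleted and recreated, and its name must match the <code>storageClassName</code> referenced by the PVC.</p>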
<p>I was successfully able to share data between a docker container and host using</p> <pre><code>docker run -it -v /path/to/host/folder:/container/path image-name </code></pre> <p>Now I am trying to run this docker image through a Kubernetes cronjob every minute for which my yaml file is as follows :</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: automation spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: automation image: localhost:32000/image-name:registry restartPolicy: OnFailure </code></pre> <p>But here how do I share data between my local and k8s so as to basically replicate the <code>-v /path/to/host/folder:/container/path</code> functionality from the docker run command ? What should I add to my yaml file ?</p> <p>Please help.</p>
<p>If you're just playing with <strong>one node</strong> and need to map a volume from this node to a pod running on the same node, then you need to use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath volume</a>.<br /> In summary, your code will look like this:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: automation spec: schedule: &quot;*/1 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: automation image: localhost:32000/image-name:registry volumeMounts: - mountPath: /container/path name: test-volume restartPolicy: OnFailure volumes: - name: test-volume hostPath: # directory location on host path: /path/to/host/folder # this field is optional type: Directory </code></pre> <p>Warning: this will only work if you have a <strong>one-node cluster</strong>.</p> <p>If you have a multi-node cluster, then you need to have a look at distributed storage solutions and how to use them with Kubernetes.</p> <p>Here is the <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">doc about volumes in K8S</a>.</p>
<p>I want to access nginx in the pod using localhost:8080 as the URI, not the minikube IP. </p> <pre><code> apiVersion: v1 kind: Service metadata: name: nginx-service spec: selector: app: webserver ports: - port: 80 targetPort: 80 </code></pre> <p><a href="https://i.stack.imgur.com/7nl7O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7nl7O.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/uXHmw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uXHmw.png" alt="enter image description here"></a></p>
<p>I guess this is for development/debug purposes.<br> So if you can't use the port-forward option as @arghya-sadhu suggested, then you'll just have to map the hostname to your minikube IP.</p> <p>I can see that's what you're trying to do in your nginx deployment with these lines:</p> <pre><code>spec:
  hostAliases:
  - ip: 192.168.99.101
    hostnames:
    - localhost
</code></pre> <p>However, this will not affect your host. In order to map localhost to the minikube IP, you'll have to edit your <code>/etc/hosts</code> file. Below is the line you need to add: </p> <pre><code>192.168.99.101 localhost
#127.0.0.1 localhost &lt;-- this line needs to be commented out
</code></pre> <p>Be sure to comment out the existing localhost line.</p>
<p>When deploying an application using Kubernetes I get the following exception:</p> <p>Error: Upgrade failed: the server has asked for the client to provide credentials (get configmaps)</p> <p>Can someone tell me which credentials Rancher is complaining about: the application's credentials, or the Helm and Tiller credentials?</p>
<p>It looks like a problem with <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC Authorization</a>, i.e. a missing or incorrect <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">RoleBinding or ClusterRoleBinding</a>. </p> <p>Here you can find one of the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-examples" rel="nofollow noreferrer">examples</a>: </p> <blockquote> <p>Allow reading a ConfigMap named “my-config” (must be bound with a RoleBinding to limit to a single ConfigMap in a single namespace):</p> </blockquote> <pre><code>rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-config"]
  verbs: ["get"]
</code></pre> <p>Please have a look at similar <a href="https://forums.rancher.com/t/helm-error-error-the-server-has-asked-for-the-client-to-provide-credentials/13325" rel="nofollow noreferrer">issues</a> related to Kubernetes authorization in Rancher and in <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/?#" rel="nofollow noreferrer">Helm</a>. </p> <p>Hope this helps.</p>
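<p>To complete the picture, a Role like the one above still has to be bound to the identity Helm/Tiller is using. The sketch below assumes the common Helm 2 setup where Tiller runs with the <code>tiller</code> service account in <code>kube-system</code> (where release ConfigMaps are stored); adjust the names and namespace to your installation:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-configmap-access
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-configmap-access   # the Role that contains the configmaps rules
</code></pre>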
<p>I am using Minikube on my laptop with &quot;driver=none&quot; option. When I try to enable ingress, I got the following error:</p> <pre><code>$ minikube addons enable ingress ❌ Exiting due to MK_USAGE: Due to networking limitations of driver none, ingress addon is not supported. Try using a different driver. </code></pre> <p>After some googling, I found that Ingress addon stopped to work with 'none' VM driver starting from Minikube v1.12.x, and I am using v1.13.1. (please refer to: <a href="https://github.com/kubernetes/minikube/issues/9322" rel="noreferrer">https://github.com/kubernetes/minikube/issues/9322</a>)</p> <p>I wonder whether there are other ways to install &quot;native&quot; ingress on Minikube with the &quot;driver=none&quot; option?</p>
<p>This is a community wiki answer. Feel free to expand it.</p> <p>Unfortunately, as you already found out, this addon is not supported with <code>vm-driver=none</code>.</p> <p>If you use the <code>none</code> driver, some Kubernetes components run as privileged containers that have side effects outside of the Minikube environment. Those side effects mean that the <code>none</code> driver is not recommended for personal workstations.</p> <p>Also, according to <a href="https://minikube.sigs.k8s.io/docs/drivers/none/" rel="nofollow noreferrer">the official docs</a>:</p> <blockquote> <p>Most users of this driver should consider the newer <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/" rel="nofollow noreferrer">Docker driver</a>, as it is significantly easier to configure and does not require root access. The ‘none’ driver is recommended for advanced users only.</p> </blockquote> <p>So basically you have two options here:</p> <ul> <li><p>downgrade to Minikube v1.11 (not recommended)</p> </li> <li><p>use a supported driver (strongly recommended)</p> </li> </ul> <p>Remember that these changes are made for a reason and going against them is usually a bad idea. It would be better to follow the official recommendation.</p>
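<p>If switching drivers is acceptable, the change is usually just a matter of recreating the profile. A hedged example with the Docker driver (assuming Docker is installed on the host):</p> <pre><code>minikube delete
minikube start --driver=docker
minikube addons enable ingress
</code></pre>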
<p>I am using below manifest. I am having a simple server which prints pod name on <code>/hello</code>. Here, I was going through kubernetes documentation and it mentioned that we can access service via service name as well. But that is not working for me. As this is a service of type <code>NodePort</code>, I am able to access it using IP of one of the nodes. Is there something wrong with my manifest?</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myhttpserver labels: day: zero name: httppod spec: replicas: 1 selector: matchLabels: name: httppod day: zero template: metadata: labels: day: zero name: httppod spec: containers: - name: myappcont image: agoyalib/trial:tryit imagePullPolicy: IfNotPresent --- apiVersion: v1 kind: Service metadata: name: servit labels: day: zeroserv spec: type: NodePort selector: day: zero name: httppod ports: - name: mine port: 8080 targetPort: 8090 </code></pre> <p>Edit: I created my own mini k8s cluster and I am doing these operations on the master node.</p>
<p>From what I understand when you say </p> <blockquote> <p>As this is a service of type NodePort, I am able to access it using IP of one of the nodes</p> </blockquote> <p>You're accessing your service from outside your cluster. That's why you can't access it using its name.</p> <p>To access a service using its name, you need to be inside the cluster.</p> <p>Below is an example where you use a pod based on centos in order to connect to your service using its name :</p> <pre class="lang-sh prettyprint-override"><code># Here we're just creating a pod based on centos $ kubectl run centos --image=centos:7 --generator=run-pod/v1 --command sleep infinity # Now let's connect to that pod $ kubectl exec centos -ti bash [root@centos /]# curl servit:8080/hello </code></pre>
<p>I have a number of deployment objects in my namespace. I'd like to run <code>kubectl rollout status</code> for all of them at the same time. So, I'd like the command to exit, only when all rollouts are complete, or an error has occurred. How can I achieve that?</p> <p>The only thing I got with so far is:</p> <pre><code>kubectl get deploy -o name | xargs -n1 -t kubectl rollout status </code></pre> <p>But I hope there's something smarter.</p>
<p>You can use this simple check:</p> <pre><code>#!/bin/bash
deploy=$(kubectl get deploy -o name)
for i in $deploy; do kubectl rollout status $i -w --timeout=30s; done
</code></pre> <p>You can also build a more advanced script using the clues in this <a href="https://stackoverflow.com/questions/37448357/ensure-kubernetes-deployment-has-completed-and-all-pods-are-updated-and-availabl/37472801#37472801">post</a>. </p>
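<p>If you also want the command to stop as soon as one rollout fails (as described in the question), a small variation of the same loop can do it. This is just a sketch and the timeout value is arbitrary:</p> <pre><code>#!/bin/bash
set -e   # exit immediately if any rollout status returns a non-zero code
for d in $(kubectl get deploy -o name); do
  kubectl rollout status &quot;$d&quot; --timeout=120s
done
echo &quot;all rollouts completed&quot;
</code></pre>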
<p>I am trying to configure ceph on a kubernetes cluster using rook. I have run the following commands:</p> <pre><code>kubectl apply -f common.yaml
</code></pre> <pre><code>kubectl apply -f operator.yaml
</code></pre> <pre><code>kubectl apply -f cluster.yaml
</code></pre> <p>I have three worker nodes with attached volumes and one master. All the created pods are running except the rook-ceph-crashcollector pods for the three nodes; when I describe these pods I get this message</p> <pre><code>MountVolume.SetUp failed for volume &quot;rook-ceph-crash-collector-keyring&quot; : secret &quot;rook-ceph-crash-collector-keyring&quot; not found
</code></pre> <p>However all the nodes are running and working</p>
<p>It is hard to exactly tell what might be the cause of this but there are few possibilities:</p> <ul> <li><p>Cluster networking problem between nodes</p> </li> <li><p>Some possible leftover sockets in the <code>/var/lib/kubelet</code> directory related to rook ceph.</p> </li> <li><p>A bug when connecting to an external Ceph cluster.</p> </li> </ul> <p>In order to fix your issue you can:</p> <ul> <li><p>Use Flannel and make sure it is using the right interface. Check the <code>kube-flannel.yml</code> file and see if it uses the <code>--iface=</code> option. Or alternatively try to use Calico.</p> </li> <li><p>Clear the <code>./var/lib/rook/</code>, <code>./var/lib/kubelet/plugins/</code> and <code>./var/lib/kubelet/plugins_registry/</code> directories and reinstall the rook service.</p> </li> <li><p>Create the <code>rook-ceph-crash-collector-keyring</code> secret manually by executing: <code>kubectl -n rook-ceph create secret generic rook-ceph-crash-collector-keyring</code>.</p> </li> </ul>
<p>I'm working through chapter 5.3 of <strong>Kubernetes In Action</strong> by Marko Luska. I'm creating a nodeport service from the <a href="https://github.com/luksa/kubernetes-in-action/blob/master/Chapter05/kubia-svc-nodeport.yaml" rel="nofollow noreferrer">following file</a>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kubia-nodeport spec: type: NodePort ports: - port: 80 targetPort: 8080 nodePort: 30123 selector: app: kubia </code></pre> <p>It works, and I can hit all the IPs I'm expecting to hit (localhost, cluterIP...) but the external IP is shown as <code>&lt;none&gt;</code>:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 17h kubia-nodeport NodePort 10.96.191.43 &lt;none&gt; 80:30123/TCP 12s $ kubectl get rc --show-labels NAME DESIRED CURRENT READY AGE LABELS kubia 3 3 3 21h app=kubia $ kubectl get po --show-labels NAME READY STATUS RESTARTS AGE LABELS kubia-fb7h8 1/1 Running 0 17h app=kubia kubia-nnkc4 1/1 Running 0 17h app=kubia kubia-s88mt 1/1 Running 0 17h app=kubia </code></pre> <p>Minikube should be showing <code>&lt;nodes&gt;</code> as it does in <a href="https://stackoverflow.com/questions/44112150/external-ip-for-kubernetes-shows-nodes-in-minikube">this question</a> and this <a href="https://stackoverflow.com/questions/40767164/expose-port-in-minikube">other question</a>. Why is it not?</p>
<p>Probably because this was the case in 2017 and it's not anymore.<br> The questions you're referencing are from 2016 and 2017.</p> <p>Since then you'll always see <code>&lt;none&gt;</code> unless it's a LoadBalancer. See this <a href="https://github.com/kubernetes/minikube/issues/3966#issuecomment-480097796" rel="nofollow noreferrer">particular comment</a> on GitHub, which is from 2019.</p> <p>Sorry, I can't find the PR or the issue corresponding to that change.</p>
<p>I have a server that is orchestrated using k8s. Its service looks like below:</p> <pre><code>➜  installations ✗ kubectl get svc
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
oxd-server              ClusterIP   10.96.124.25    &lt;none&gt;        8444/TCP,8443/TCP   3h32m
</code></pre> <p>and its pod:</p> <pre><code>➜  helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
</code></pre> <p>Now, I have a docker image with an <code>env</code> variable that requires the URL of this server. </p> <p>I have 2 questions from here.</p> <ol> <li><p>How can the docker image get the URL or access the URL? </p></li> <li><p>How can I access the same URL in my terminal so I can make some curl commands through it?</p></li> </ol> <p>I hope I am clear on the explanation.</p>
<p>If your docker container is outside the kubernetes cluster, then it's not possible to access your <code>ClusterIP</code> service.</p> <p>As you could guess by its name, <code>ClusterIP</code> type services are only accessible <strong>from within the cluster</strong>.<br> By <em>within the cluster</em> I mean any resource managed by Kubernetes.<br> <em>A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.</em></p> <p>So, in order to achieve what you want, you have these options:</p> <ol> <li>Set a <code>hostPort</code> inside your pod. This is not recommended and is listed as a bad practice in the <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">doc</a>. Keep this usage for very specific cases.</li> <li>Switch your service to <code>NodePort</code> instead of <code>ClusterIP</code>. This way, you'll be able to access it using a node IP + the node port.</li> <li>Use a <code>LoadBalancer</code> type of service, but this solution needs some configuration and is not straightforward.</li> <li>Use an <code>Ingress</code> along with an <code>IngressController</code>, but just like the load balancer, this solution needs some configuration and is not that straightforward.</li> </ol> <p>Depending on what you do and whether this is critical or not, you'll have to choose one of these solutions. </p> <ul> <li>1 &amp; 2 for debug/dev</li> <li>3 &amp; 4 for prod, but you'll have to work with your k8s admin</li> </ul>
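<p>As a quick illustration of option 2 (a sketch — the HTTPS scheme and the root path in the curl call are assumptions about how your oxd-server answers):</p> <pre><code># switch the existing service to NodePort
kubectl patch svc oxd-server -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}'

# look up the allocated node port and a node IP
kubectl get svc oxd-server
kubectl get nodes -o wide

# then, from your terminal or from the standalone container
curl -k https://&lt;node-ip&gt;:&lt;node-port&gt;/
</code></pre>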
<p>I would like to be able to reference the current namespace in <code>values.yaml</code> to use it to suffix some values like this</p> <pre><code># in values.yaml someParam: someval-{{ .Release.Namespace }} </code></pre> <p>It much nicer to define it this way instead of going into all my templates and adding <code>{{ .Release.Namespace }}</code>. If I can do it in <code>values.yaml</code> it's much clearer and only needs to be defined in one place.</p>
<p>Just to clarify:</p> <p>As described by community members <a href="https://stackoverflow.com/a/57641009/11207414">Amit Kumar Gupta</a> and <a href="https://stackoverflow.com/a/57641142/11207414">David Maze</a>, there is no good solution natively supported by <a href="https://helm.sh/docs/chart_best_practices/" rel="nofollow noreferrer">helm</a> to achieve this without modifying templates. It looks like in your case (without modifying the Helm templates) the best solution is simply to pass the value with <strong>--set</strong> during helm install,</p> <p>like:</p> <pre><code>helm install --set someParam=someval-mynamespace ./redis
</code></pre>
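<p>If a one-line change to the consuming template is acceptable, another option is Helm's <code>tpl</code> function, which renders template expressions stored in <code>values.yaml</code>. A sketch (using the <code>someParam</code> value from the question):</p> <pre><code># values.yaml
someParam: someval-{{ .Release.Namespace }}
</code></pre> <pre><code># wherever the template consumes the value, render it through tpl
someField: {{ tpl .Values.someParam . }}
</code></pre>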
<p>I'm getting following error message:</p> <pre><code>root@master-1:~# microk8s.kubectl get no The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port? </code></pre> <p>Even <code>microk8s.kubectl cluster-info dump</code> fails with message above. When I run <code>microk8s start</code> it still the same.</p>
<p>It is hard to tell exactly what might have gone wrong here, but there are a few things you could do in order to fix your issue:</p> <ul> <li><p><code>.kube/config</code> is missing or not configured correctly. Create or copy a valid <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#:%7E:text=The%20kubectl%20command%2Dline%20tool,of%20referring%20to%20configuration%20files." rel="nofollow noreferrer">kubeconfig</a> file to solve this.</p> </li> <li><p>Swap is not turned off. With <code>swap</code> enabled, the kubelet service will not start. Execute <code>sudo swapoff -a</code> in order to make sure it is disabled.</p> </li> <li><p><code>kubelet</code> might be down. Check the <code>kubelet</code> logs and make sure that <code>kube-apiserver</code> is up and running.</p> </li> <li><p>Check that the appropriate ports are reachable, for example with the <code>telnet</code> command.</p> </li> </ul>
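<p>A few commands that usually help to narrow this down on MicroK8s (a hedged list — the systemd unit names differ between MicroK8s releases):</p> <pre><code>microk8s status --wait-ready        # does the cluster report itself as running?
microk8s inspect                    # collects logs and warns about common issues (swap, firewall, ...)
sudo swapoff -a                     # kubelet will not start with swap enabled
# apiserver/kubelet logs; on newer releases the unit is daemon-kubelite,
# on older ones e.g. snap.microk8s.daemon-apiserver / snap.microk8s.daemon-kubelet
sudo journalctl -u snap.microk8s.daemon-kubelite -f
</code></pre>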
<p>I am trying to create and delete file from the same pod and get error that the file not found, any idea?</p> <p>one container creates the file and the second should delete it...</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: volumes: - name: shared emptyDir: { } containers: - name: createfile image: debian command: [ &quot;/bin/sh&quot;, &quot;-c&quot; ] args: - while true; do touch /usr/test.txt; ls echo &quot;new file created on container&quot;; sleep 30; done volumeMounts: - name: shared mountPath: /usr/ - name: deletefile image: debian volumeMounts: - name: shared mountPath: /usr/ command: [ &quot;/bin/sh&quot;, &quot;-c&quot; ] args: - while true; do rm /usr/test.txt; ls echo &quot;container 2 - file removed&quot;; sleep 30; done </code></pre> <p>The error which I got is: <code>ls: error while loading shared libraries: libpcre2-8.so.0: cannot open shared object file: No such file or directory</code></p> <p>Is it because I am running <strong>ls</strong> in the container, any idea why? As I use Debian, not sure what is the issue</p>
<p>Yes, the error was linked to the debian image (there are a few solutions here: <a href="https://stackoverflow.com/questions/8501163/error-while-loading-shared-libraries-libpcre-so-0-cannot-open-shared-object-f">Error while loading shared libraries: &#39;libpcre.so.0: cannot open shared object file: No such file or directory&#39;</a>) AND to your while script.</p> <p>However, I was able to fix this by using the bash image and modifying your script a little bit; see the manifest:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: shared
      emptyDir: { }
  containers:
    - name: createfile
      image: bash
      command: [ &quot;/bin/sh&quot;, &quot;-c&quot; ]
      args:
        - while true; do touch /usr/test.txt &amp;&amp; ls /usr/ &amp;&amp; echo &quot;new file created on container&quot; &amp;&amp; sleep 10; done
      volumeMounts:
        - name: shared
          mountPath: /usr/
    - name: deletefile
      image: bash
      volumeMounts:
        - name: shared
          mountPath: /usr/
      command: [ &quot;/bin/sh&quot;, &quot;-c&quot; ]
      args:
        - while true; do rm /usr/test.txt &amp;&amp; ls /usr/ &amp;&amp; echo &quot;container 2 - file removed&quot; &amp;&amp; sleep 10; done
</code></pre> <p>createfile output:</p> <pre><code>❯ k logs -f mypod -c createfile
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
test.txt
new file created on container
...
</code></pre> <p>deletefile output:</p> <pre><code>❯ k logs -f mypod -c deletefile
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
container 2 - file removed
...
</code></pre>
<p>After a successful</p> <p><code>kubectl rollout restart deployment/foo</code></p> <p>the</p> <p><code>kubectl rollout undo deployment/foo</code></p> <p>or</p> <p><code>kubectl rollout undo deployment/foo --to-revision=x</code></p> <p>are not having effect. I mean, the pods are replaced by new ones and a new revision is created which can be checked with</p> <p><code>kubectl rollout history deployment foo</code></p> <p>but when I call the service, the rollback had no effect.</p> <p>I also tried to remove the <code>imagePullPolicy: Always</code>, guessing that it was always pulling even in the rollback, with no success because probably one thing is not related to the other.</p> <hr /> <p>Edited: The test is simple, I change the health check route of the http api to return something different in the json, and it doesn't.</p> <hr /> <p>Edited:</p> <p>Maybe a typo, but not: I was executing with <code>... undo deployment/foo ...</code>, and now tried with <code>... undo deployment foo ...</code>. It also gives me <code>deployment.apps/foo rolled back</code>, but no changes in the live system.</p> <p>More tests: I changed again my api route to test what would happen if I executed a rollout undo to every previous revision one by one. I applied the last 10 revisions, and nothing.</p>
<p>To make rolling back to a previous version easier, append the <strong>--record</strong> parameter to your kubectl command (the flag is deprecated in recent versions, but it stores the command you ran in the revision's CHANGE-CAUSE, which makes it much easier to identify the revision you want to go back to), for example:</p> <pre><code>kubectl apply -f DEPLOYMENT.yaml --record
</code></pre> <p>Then you should be able to see the history, as you know, with:</p> <pre><code>kubectl rollout history deployment DEPLOYMENT_NAME
</code></pre> <p>And your rollback will work properly:</p> <pre><code>kubectl rollout undo deployment DEPLOYMENT_NAME --to-revision=CHOSEN_REVISION_NUMBER
</code></pre> <p>Little example:</p> <p>Consider my nginx deployment manifest &quot;nginx-test.yaml&quot; here:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
</code></pre> <p>Let's create it:</p> <pre><code>❯ kubectl apply -f nginx-test.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment created
</code></pre> <p>Let's check the image of this deployment, as expected from the manifest:</p> <pre><code>❯ k get pod nginx-deployment-74d589986c-k9whj -o yaml | grep image:
  - image: nginx
    image: docker.io/library/nginx:latest
</code></pre> <p>Now let's modify the image of this deployment to &quot;nginx:1.21.6&quot;:</p> <pre><code># &quot;nginx=&quot; corresponds to the name of the container inside the pod created by the deployment.
❯ kubectl set image deploy nginx-deployment nginx=nginx:1.21.6
deployment.apps/nginx-deployment image updated
</code></pre> <p>We can optionally check the rollout status:</p> <pre><code>❯ kubectl rollout status deployment nginx-deployment
deployment &quot;nginx-deployment&quot; successfully rolled out
</code></pre> <p>We can check the rollout history with:</p> <pre><code>❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-test.yaml --record=true
2         kubectl apply --filename=nginx-test.yaml --record=true
</code></pre> <p>Let's check the image of this deployment, as expected:</p> <pre><code>❯ k get pod nginx-deployment-66dcfc79b5-4pk7w -o yaml | grep image:
  - image: nginx:1.21.6
    image: docker.io/library/nginx:1.21.6
</code></pre> <p>Oh no, I don't like this image! 
Let's roll back:</p> <pre><code>❯ kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back
</code></pre> <p>The new pod is being created:</p> <pre><code>&gt; kubectl get pod -o wide
NAME                                    READY   STATUS              RESTARTS   AGE     IP           NODE                   NOMINATED NODE   READINESS GATES
pod/nginx-deployment-66dcfc79b5-4pk7w   1/1     Running             0          3m41s   10.244.3.4   so-cluster-1-worker3   &lt;none&gt;           &lt;none&gt;
pod/nginx-deployment-74d589986c-m2htr   0/1     ContainerCreating   0          13s     &lt;none&gt;       so-cluster-1-worker2   &lt;none&gt;           &lt;none&gt;
</code></pre> <p>After a few seconds:</p> <pre><code>NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                   NOMINATED NODE   READINESS GATES
pod/nginx-deployment-74d589986c-m2htr   1/1     Running   0          23s   10.244.4.10   so-cluster-1-worker2   &lt;none&gt;           &lt;none&gt;
</code></pre> <p>As you can see, it worked:</p> <pre><code>❯ k get pod nginx-deployment-74d589986c-m2htr -o yaml | grep image:
  - image: nginx
    image: docker.io/library/nginx:latest
</code></pre> <p>Let's recheck the history:</p> <pre><code>❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=nginx-test.yaml --record=true
2         kubectl apply --filename=nginx-test.yaml --record=true
</code></pre> <p>You can change the rollout history's CHANGE-CAUSE with the &quot;kubernetes.io/change-cause&quot; annotation:</p> <pre><code>❯ kubectl annotate deploy nginx-deployment kubernetes.io/change-cause=&quot;update image from 1.21.6 to latest&quot; --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment annotated
</code></pre> <p>Let's recheck the history:</p> <pre><code>❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=nginx-test.yaml --record=true
3         update image from 1.21.6 to latest
</code></pre> <p>Let's describe the deployment:</p> <pre><code>❯ kubectl describe deploy nginx-deploy
Name:                   nginx-deployment
Namespace:              so-tests
CreationTimestamp:      Fri, 06 May 2022 00:56:09 -0300
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: update image from latest to latest
...
</code></pre> <p>Hope this has helped you, bguess.</p>
<p>Trying to start up rabbitmq in K8s while attaching a configmap gives me the following error:</p> <pre><code>/usr/local/bin/docker-entrypoint.sh: line 367: rabbitmq-plugins: command not found /usr/local/bin/docker-entrypoint.sh: line 405: exec: rabbitmq-server: not found </code></pre> <p>Exactly the same setup is working fine with docker-compose, so I am a bit lost. Using <code>rabbitmq:3.8.3</code></p> <p>Here is a snippet from my deployment:</p> <pre><code> "template": { "metadata": { "creationTimestamp": null, "labels": { "app": "rabbitmq" } }, "spec": { "volumes": [ { "name": "rabbitmq-configuration", "configMap": { "name": "rabbitmq-configuration", "defaultMode": 420 } } ], "containers": [ { "name": "rabbitmq", "image": "rabbitmq:3.8.3", "ports": [ { "containerPort": 5672, "protocol": "TCP" } ], "env": [ { "name": "RABBITMQ_DEFAULT_USER", "value": "guest" }, { "name": "RABBITMQ_DEFAULT_PASS", "value": "guest" }, { "name": "RABBITMQ_ENABLED_PLUGINS_FILE", "value": "/opt/enabled_plugins" } ], "resources": {}, "volumeMounts": [ { "name": "rabbitmq-configuration", "mountPath": "/opt/" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "securityContext": {}, "schedulerName": "default-scheduler" } }, </code></pre> <p>And here is the configuration:</p> <pre><code>{ "kind": "ConfigMap", "apiVersion": "v1", "metadata": { "name": "rabbitmq-configuration", "namespace": "e360", "selfLink": "/api/v1/namespaces/default/configmaps/rabbitmq-configuration", "uid": "28071976-98f6-11ea-86b2-0244a03303e1", "resourceVersion": "1034540", "creationTimestamp": "2020-05-18T10:55:58Z" }, "data": { "enabled_plugins": "[rabbitmq_management].\n" } } </code></pre>
<p>That's because you're mounting a volume at <code>/opt</code>, which hides the RabbitMQ installation (the image keeps its home under <code>/opt/rabbitmq</code>).</p> <p>So, the entrypoint script cannot find any of the rabbitmq binaries.<br> You can see the rabbitmq Dockerfile <a href="https://github.com/docker-library/rabbitmq/blob/91be7cf597c069c9a038ae2619adac1a85e68d4d/3.8/ubuntu/Dockerfile" rel="nofollow noreferrer">here</a></p>
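<p>A possible fix, sketched from the manifest in the question — only the mount path and the env value change, and any directory outside <code>/opt</code> would work just as well:</p> <pre><code>"env": [
  { "name": "RABBITMQ_ENABLED_PLUGINS_FILE", "value": "/config/enabled_plugins" }
],
"volumeMounts": [
  { "name": "rabbitmq-configuration", "mountPath": "/config" }
]
</code></pre>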
<p>I am looking for a syntax/condition of percentage decrease threshold to be inserted in HPA.yaml file which would allow the Horizontal Pod Autoscaler to start decreasing the pod replicas when the CPU utilization falls that particular percentage threshold.</p> <p>Consider this scenario:- I mentioned an option targetCPUUtilizationPercentage and assigned it with value 50. minReplicas to be 1 and MaxReplicas to be 5. Now lets assume the CPU utilization went above 50, and went till 100, making the HPA to create 2 replicas. If the utilization decreases to 51% also, HPA will not terminate 1 pod replica.</p> <p>Is there any way to conditionize the scale down on the basis of % decrease in CPU utilization?</p> <p>Just like targetCPUUtilizationPercentage, I could be able to mention targetCPUUtilizationPercentageDecrease and assign it value 30, so that when the CPU utilization falls from 100% to 70%, HPA terminates a pod replica and further 30% decrease in CPU utilization, so that when it reaches 40%, the other remaining pod replica gets terminated.</p>
<p>As per online resources, this topic is still a work in progress in the community: &quot;<a href="https://github.com/kubernetes/kubernetes/pull/74525" rel="nofollow noreferrer">Configurable HorizontalPodAutoscaler options</a>&quot;. </p> <p>I didn't try it, but as a workaround you can create custom metrics, e.g. using the <a href="https://github.com/helm/charts/tree/master/stable/prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter</a> (see <a href="https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/manage_cluster/hpa.html" rel="nofollow noreferrer">Horizontal pod auto scaling by using custom metrics</a>), in order to have more control over the thresholds.</p> <p>At the moment you can use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">horizontal-pod-autoscaler-downscale-stabilization</a>:</p> <blockquote> <p>The <code>--horizontal-pod-autoscaler-downscale-stabilization</code> option controls the downscale cooldown.</p> <p>The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).</p> </blockquote> <p>From another point of view, this behaviour is expected given the purpose of the HPA:</p> <blockquote> <p>Applications that process very important data events. These should scale up as fast as possible (to reduce the data processing time), and scale down as soon as possible (to reduce cost).</p> </blockquote> <p>Hope this helps.</p>
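<p>If your cluster is recent enough to support the <code>behavior</code> field (<code>autoscaling/v2beta2</code>, Kubernetes 1.18+), you can also shape the scale-down explicitly per HPA instead of relying on the global flag. This is a sketch — the resource names and the 30% step are illustrative assumptions, not values taken from the question:</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 30          # remove at most 30% of the replicas per period
        periodSeconds: 60
</code></pre>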
<p>I'm trying to verify that my postgres pod is accessible via the service that I've just set up. As of now, I cannot verify this. What I'm able to do is to log into the container running postgres itself, and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure if this is a valid test of whether other pods in the cluster could talk to postgres via the service or if there is a problem with how I'm doing the test, or if there is a fundamental problem in my service or pod configurations.</p> <p>I'm doing this all on a minikube cluster.</p> <p>Setup the pod and service:</p> <pre><code>$&gt; kubectl create -f postgres-pod.yml $&gt; kubectl create -f postgres-service.yml </code></pre> <p><strong>postgres-pod.yml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: postgres labels: env: prod creation_method: manual domain: infrastructure spec: containers: - image: postgres:13-alpine name: kubia-postgres ports: - containerPort: 5432 protocol: TCP env: - name: POSTGRES_PASSWORD value: dave - name: POSTGRES_USER value: dave - name: POSTGRES_DB value: tmp # TODO: # volumes: # - name: postgres-db-volume </code></pre> <p><strong>postgres-service.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres-service spec: ports: - port: 5432 targetPort: 5432 selector: name: postgres </code></pre> <p>Check that the service is up <code>kubectl get services</code>:</p> <pre><code>kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 35d postgres-service ClusterIP 10.110.159.21 &lt;none&gt; 5432/TCP 71m </code></pre> <p>Then, log in to the postgres container:</p> <p><code>$&gt; kubectl exec --stdin --tty postgres -- /bin/bash</code></p> <p>from there, attempt to hit the <em>service</em>'s IP:</p> <pre><code>bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp psql: error: could not connect to server: Connection refused Is the server running on host &quot;10.110.159.21&quot; and accepting TCP/IP connections on port 5432? </code></pre> <p>So using this approach I am not able to connect to the postgres server using the IP of the service.</p> <p>I'm unsure of several steps in this process:</p> <ol> <li>Is the selecting by name block in the service configuration yaml correct?</li> <li>Can you access the IP of a service from pods that are &quot;behind&quot; the service?</li> <li>Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?</li> </ol>
<p>Hello, hope you are enjoying your Kubernetes journey!</p> <p>I wanted to try this on my kind (Kubernetes in docker) cluster locally. So this is what I've done:</p> <p>First I set up a kind cluster locally with this configuration (info here: <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/quick-start/</a>):</p> <pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: control-plane
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
- role: worker
  image: kindest/node:v1.23.5
</code></pre> <p>After this I created my cluster with this command:</p> <pre><code>kind create cluster --config=config.yaml
</code></pre> <p>Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):</p> <pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: so-tests
</code></pre> <p>From there, my environment was set up, so I had to deploy a postgres on it, but here is what I've changed:</p> <p>1- Instead of creating a singleton pod, I created a statefulset (whose purpose is precisely to run databases and other stateful workloads)</p> <p>2- I decided to keep using your docker image &quot;postgres:13-alpine&quot; and added a security context to run as the native postgres user (not dave nor root) -- to find out the id of the postgres user, I first deployed the statefulset without the security context and executed these commands:</p> <pre><code>❯ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit
</code></pre> <p>So, once I knew that the id of the postgres user was 70, I just added this to the statefulset manifest:</p> <pre><code>securityContext:
   runAsUser: 70
   fsGroup: 70
</code></pre> <p>3- Instead of adding configuration and secrets as environment variables directly into the pod spec of the statefulset, I decided to create a secret and a configmap:</p> <p>First let's create a kubernetes secret with your password in it, here is the manifest (obtained from this command: &quot;k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run&quot;):</p> <pre><code>apiVersion: v1
data:
  password: ZGF2ZQ==
kind: Secret
metadata:
  name: postgres-secret
</code></pre> <p>After this I created a configmap to store our postgres config, here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml )</p> <pre><code>apiVersion: v1
data:
  db: tmp
  user: dave
kind: ConfigMap
metadata:
  name: postgres-config
</code></pre> <p>Since it is just for testing purposes, I didn't set up dynamic volume provisioning for the statefulset, nor a pre-provisioned volume. Instead I configured a simple emptyDir to store the postgres data (/var/lib/postgresql/data).</p> <p>N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to &quot;Memory&quot; to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. 
(this came from here <a href="https://stackoverflow.com/questions/63337419/create-a-new-volume-when-pod-restart-in-a-statefulset">Create a new volume when pod restart in a statefulset</a>)</p> <p>Since it is a statefulset, it has to be exposed by a headless kubernetes service (<a href="https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services</a>)</p> <p>Here are the manifests:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres spec: serviceName: &quot;postgres&quot; replicas: 2 selector: matchLabels: env: prod domain: infrastructure template: metadata: labels: env: prod domain: infrastructure spec: terminationGracePeriodSeconds: 20 securityContext: runAsUser: 70 fsGroup: 70 containers: - name: kubia-postgres image: postgres:13-alpine env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-secret key: password - name: POSTGRES_USER valueFrom: configMapKeyRef: name: postgres-config key: user - name: POSTGRES_DB valueFrom: configMapKeyRef: name: postgres-config key: db ports: - containerPort: 5432 protocol: TCP volumeMounts: - name: postgres-test-volume mountPath: /var/lib/postgresql resources: requests: memory: &quot;64Mi&quot; cpu: &quot;250m&quot; limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; volumes: - name: postgres-test-volume emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: postgres-service labels: env: prod domain: infrastructure spec: ports: - port: 5432 protocol: TCP targetPort: 5432 name: pgsql clusterIP: None selector: env: prod domain: infrastructure --- apiVersion: v1 data: password: ZGF2ZQ== kind: Secret metadata: name: postgres-secret --- apiVersion: v1 data: db: tmp user: dave kind: ConfigMap metadata: name: postgres-config --- </code></pre> <p>I deployed this using:</p> <pre><code>kubectl apply -f postgres.yaml </code></pre> <p>I tested to connect into the postgres-0 pod to connect my db with $POSTGRES_USER and $POSTGRES_PASSWORD credentials:</p> <pre><code>❯ k exec -it pod/postgres-0 -- bash bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp Password: psql (13.6) Type &quot;help&quot; for help. tmp=# </code></pre> <p>I listed the databases:</p> <pre><code>tmp=# \l List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+-------+----------+------------+------------+------------------- postgres | dave | UTF8 | en_US.utf8 | en_US.utf8 | template0 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave + | | | | | dave=CTc/dave template1 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave + | | | | | dave=CTc/dave tmp | dave | UTF8 | en_US.utf8 | en_US.utf8 | (4 rows) </code></pre> <p>and I connected to the &quot;tmp&quot; db:</p> <pre><code>tmp=# \c tmp Password: You are now connected to database &quot;tmp&quot; as user &quot;dave&quot;. </code></pre> <p>succesful.</p> <p>I also tried to connect the database using the IP, as you tried:</p> <pre><code>bash-5.1$ ip a | grep /24 inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0 bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp Password: psql (13.6) Type &quot;help&quot; for help. 
tmp=#
</code></pre> <p>Successful.</p> <p>I then downloaded dbeaver (from here <a href="https://dbeaver.io/download/" rel="nofollow noreferrer">https://dbeaver.io/download/</a> ) to test the access from outside of my cluster:</p> <p>with a kubectl port-forward:</p> <pre><code>kubectl port-forward statefulset/postgres 5432:5432

Forwarding from 127.0.0.1:5432 -&gt; 5432
Forwarding from [::1]:5432 -&gt; 5432
</code></pre> <p>I created the connection in dbeaver, and could easily access the db &quot;tmp&quot; from localhost:5432 with dave:dave credentials</p> <pre><code>kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -&gt; 5432
Forwarding from [::1]:5432 -&gt; 5432
Handling connection for 5432
Handling connection for 5432
</code></pre> <p>Perfect.</p> <p><a href="https://i.stack.imgur.com/qiopZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qiopZ.png" alt="postgresql-statefulset-port-forward-with-dbeaver" /></a></p> <p>Same as before (with dbeaver), I tried to connect to the db using a port-forward, not of the pod, but of the service:</p> <pre><code>❯ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -&gt; 5432
Forwarding from [::1]:5432 -&gt; 5432
Handling connection for 5432
Handling connection for 5432
</code></pre> <p>It worked as well!</p> <p>I also created a standalone pod, based on our config, to access the db that is in another pod (via the service name as hostname); here is the manifest of the pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: test
spec:
  terminationGracePeriodSeconds: 20
  securityContext:
    runAsUser: 70
    fsGroup: 70
  containers:
  - name: kubia-postgres
    image: postgres:13-alpine
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres-secret
          key: password
    - name: POSTGRES_USER
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: user
    - name: POSTGRES_DB
      valueFrom:
        configMapKeyRef:
          name: postgres-config
          key: db
    ports:
    - containerPort: 5432
      protocol: TCP
    volumeMounts:
    - name: postgres-test-volume
      mountPath: /var/lib/postgresql
    resources:
      requests:
        memory: &quot;64Mi&quot;
        cpu: &quot;250m&quot;
      limits:
        memory: &quot;128Mi&quot;
        cpu: &quot;500m&quot;
  volumes:
  - name: postgres-test-volume
    emptyDir: {}
</code></pre> <p>Here is the result of the connection from inside the test pod:</p> <pre><code>bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type &quot;help&quot; for help.

tmp=#
</code></pre> <ul> <li>Here is how you can access it from outside the pod/namespace (make sure that there are no network rules blocking the connection):</li> </ul> <blockquote> <p>StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local</p> </blockquote> <pre><code>i.e: postgres-0.postgres-service.so-tests.svc.cluster.local
</code></pre> <ul> <li>To access StatefulSet workloads from outside the cluster, here is a good start: <a href="https://stackoverflow.com/questions/46456239/how-to-expose-a-headless-service-for-a-statefulset-externally-in-kubernetes">How to expose a headless service for a StatefulSet externally in Kubernetes</a></li> </ul> <p>Hope this has helped you. Thank you for your question. Bguess</p>
<p>I'm running <code>flink run-application</code> targetting Kubernetes, using these options:</p> <pre><code>-Dmetrics.reporter.prom.class=org.apache.flink.metrics.prometheus.PrometheusReporter -Dmetrics.reporter.prom.port=9249 </code></pre> <p>I specify a container image which has the Prometheus plugin copied into <code>/opt/flink/plugins</code>. From within the job manager container I can download Prometheus metrics on port 9249. However, <code>kubectl describe</code> on the flink pod does not show that the Prometheus port is exposed. The ports line in the kubectl output is:</p> <p><code> Ports: 8081/TCP, 6123/TCP, 6124/TCP</code></p> <p>Therefore, I expect that nothing outside the container will be able to read the Prometheus metrics.</p>
<p>You are misunderstanding the concept of <strong>exposed ports</strong>.<br /> When you expose a port in kubernetes with the <code>ports</code> option (the same applies to Docker and the <code>EXPOSE</code> tag), nothing is opened on this port to the outside world.</p> <p>It's basically just a hint for users of that image to tell them <em>&quot;Hey, you want to use this image? OK, you may want to have a look at this port on this container.&quot;</em></p> <p>So if your port does not appear when you do <code>kubectl describe</code>, it does not mean that you can't reach that port. You can still map it with a service targeting this port.</p> <p>Furthermore, if you really want to make it appear with <code>kubectl describe</code>, then you just have to add it to your kubernetes descriptor file:</p> <pre><code>...
containers:
  - ports:
    - name: prom-http
      containerPort: 9249
</code></pre>
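<p>For example, to actually let Prometheus (or anything outside the pod) scrape that port, you could put a Service in front of it. This is only a sketch — the selector labels are assumptions about how your Flink pods are labelled:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: flink-metrics
spec:
  selector:
    app: flink          # adjust to the labels on your Flink pods
  ports:
  - name: prom-http
    port: 9249
    targetPort: 9249
</code></pre>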
<p>I'm currently building a backend, that among other things, involves sending RabbitMQ messages from localhost into a K8s cluster where containers can run and pickup specific messages.</p> <p>So far I've been using Minikube to carry out all of my Docker and K8s development but have ran into a problem when trying to install RabbitMQ.</p> <p>I've been following the RabbitMQ Cluster Operator official documentation (<a href="https://www.rabbitmq.com/kubernetes/operator/install-operator.html" rel="nofollow noreferrer">installing</a>) (<a href="https://www.rabbitmq.com/kubernetes/operator/using-operator.html" rel="nofollow noreferrer">using</a>). I got to the &quot;Create a RabbitMQ Instance&quot; section and ran into this error:</p> <pre><code>1 pod has unbound immediate persistentVolumeClaims </code></pre> <p>I fixed it by continuing with the tutorial and adding a PV and PVC into my RabbitMQCluster YAML file. Tried to apply it again and came across my next issue:</p> <pre><code>1 insufficient cpu </code></pre> <p>I've tried messing around with resource limits and requests in the YAML file but no success yet. After Googling and doing some general research I noticed that my specific problems and setup (Minikube and RabbitMQ) doesn't seem to be very popular. My question is, have I passed the scope or use case of Minikube by trying to install external services like RabbitMQ? If so what should be my next step?</p> <p>If not, are there any useful tutorials out there for installing RabbitMQ in Minikube?</p> <p>If it helps, here's my current YAML file for the RabbitMQCluster:</p> <pre><code>apiVersion: rabbitmq.com/v1beta1 kind: RabbitmqCluster metadata: name: rabbitmq-cluster spec: persistence: storageClassName: standard storage: 5Gi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rabbimq-pvc spec: resources: requests: storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteOnce --- apiVersion: v1 kind: PersistentVolume metadata: name: rabbitmq-pv spec: capacity: storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle storageClassName: standard hostPath: path: /mnt/app/rabbitmq type: DirectoryOrCreate </code></pre> <p>Edit:</p> <p>Command used to start Minikube:</p> <pre><code>minikube start </code></pre> <p>Output:</p> <pre><code>😄 minikube v1.17.1 on Ubuntu 20.04 ✨ Using the docker driver based on existing profile 👍 Starting control plane node minikube in cluster minikube 🔄 Restarting existing docker container for &quot;minikube&quot; ... 🎉 minikube 1.18.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.18.1 💡 To disable this notice, run: 'minikube config set WantUpdateNotification false' 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ... 🔎 Verifying Kubernetes components... 🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard 🏄 Done! kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default </code></pre>
<p>According to the command you used to start minikube, the error is because you don't have enough resources assigned to your cluster.<br /> According to the source code of the rabbitmq cluster operator, it seems that it needs 2 CPUs.</p> <p>You need to adjust the number of CPUs (and probably the memory also) when you initialize your cluster. Below is an example that starts a kubernetes cluster with 4 CPUs and 8G of RAM:</p> <pre class="lang-sh prettyprint-override"><code>minikube start --cpus=4 --memory 8192
</code></pre> <p>If you want to check your currently allocated resources, you can run <code>kubectl describe node</code>.</p>
<p>My understanding is that the <code>AGE</code> shown for a pod when using <code>kubectl get pod</code>, shows the time that the pod has been running since the last restart. So, for the pod shown below, my understanding is that it intially restarted 14 times, but hasn't restarted in the last 17 hours. Is this correct, and where is a kubernetes reference that explains this?</p> <p><a href="https://i.stack.imgur.com/uW3sY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uW3sY.png" alt="enter image description here" /></a></p>
<p>Hope you're enjoying your Kubernetes journey!</p> <p>In fact, the AGE header when using kubectl get pod shows you how long ago your <strong>pod</strong> was created and has been running. But do not confuse POD and container:</p> <p>The &quot;RESTARTS&quot; header is actually linked to the '.status.containerStatuses[0].restartCount' field of the pod manifest. That means that this header reflects the number of restarts, not of the pod, but of the container inside the pod.</p> <p>Here is an example: I just deployed a new pod:</p> <pre><code>NAME                       READY   STATUS    RESTARTS   AGE
test-bg-7d57d546f4-f4cql   2/2     Running   0          9m38s
</code></pre> <p>If I check the yaml configuration of this pod, we can see that in the &quot;status&quot; section we have the said &quot;restartCount&quot; field:</p> <pre><code>❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
  ...
status:
  ...
  containerStatuses:
  ...
  - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
    ...
    name: test-bg
    ready: true
    restartCount: 0
    ...
</code></pre> <p>So, to demonstrate what I'm saying, I'm going to connect into my pod and kill the main process my pod is running:</p> <pre><code>❯ k exec -it test-bg-7d57d546f4-f4cql -- bash
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ ps aux
USER   PID %CPU %MEM     VSZ    RSS TTY    STAT START   TIME COMMAND
1000     1  0.0  0.0    5724   3256 ?      Ss   03:20   0:00 bash -c source /tmp/entrypoint.bash
1000    22  1.5  0.1 2966140 114672 ?      Sl   03:20   0:05 java -jar test-java-bg.jar
1000    41  3.3  0.0    5988   3592 pts/0  Ss   03:26   0:00 bash
1000    48  0.0  0.0    8588   3260 pts/0  R+   03:26   0:00 ps aux
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ kill 22
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ command terminated with exit code 137
</code></pre> <p>And after this, if I re-execute the &quot;kubectl get pod&quot; command, I get this:</p> <pre><code>NAME                       READY   STATUS    RESTARTS   AGE
test-bg-7d57d546f4-f4cql   2/2     Running   1          11m
</code></pre> <p>Then, if I go back to my yaml config, we can see that the restartCount field is actually linked to my container and not to my pod.</p> <pre><code>❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
  ...
status:
  ...
  containerStatuses:
  ...
  - containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
    ...
    name: test-bg
    ready: true
    restartCount: 1
    ...
</code></pre> <p>So, to conclude, the <strong>RESTARTS</strong> header is giving you the restartCount of the container, not of the pod, but the <strong>AGE</strong> header is giving you the age of the pod.</p> <p>This time, if I delete the pod:</p> <pre><code>❯ k delete pod test-bg-7d57d546f4-f4cql
pod &quot;test-bg-7d57d546f4-f4cql&quot; deleted
</code></pre> <p>we can see that the restartCount is back to 0 since it's a brand new pod with a brand new age:</p> <pre><code>NAME                       READY   STATUS        RESTARTS   AGE
test-bg-7d57d546f4-bnvxx   2/2     Running       0          23s
test-bg-7d57d546f4-f4cql   2/2     Terminating   2          25m
</code></pre> <p>For your example, it means that the <strong>container</strong> restarted 14 times, but the pod was deployed 17 hours ago.</p> <p>I can't find the exact documentation of this, but (as it is explained here: <a href="https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods</a>): &quot;Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running container(s). 
A Pod persists until it is deleted.&quot;</p> <p>Hope this has helped you better understand. Here is a little tip from <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a>: kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' (to sort your pods by their restartCount number :p)</p> <p>Bye</p>
<p>I am new to Kubernetes and Helm. I deployed a jupyterhub pod on Kubernetes (on GCP) following this link</p> <p><a href="https://zero-to-jupyterhub.readthedocs.io/en/stable/jupyterhub/installation.html" rel="nofollow noreferrer">https://zero-to-jupyterhub.readthedocs.io/en/stable/jupyterhub/installation.html</a></p> <ul> <li><p>First, I managed to get it working with a simple configuration, and in order to interact with GCS from the notebooks I am currently uploading a keyfile.json for each user and using the environment variable GOOGLE_APPLICATION_CREDENTIALS.</p> </li> <li><p>I would like to mount the keyfile as a secret in the values.yaml. How could I do that? Basically, I would like to get rid of uploading the file manually for each user and have it mounted automatically through the yaml file.</p> </li> </ul> <p>Thanks</p>
<p>I managed to get it working by adding this to the yaml file. keyfile.json was created as a secret named gcsfs-creds with kubectl:</p> <pre><code>  extraEnv:
    GOOGLE_APPLICATION_CREDENTIALS: &quot;/etc/secrets/keyfile.json&quot;

  storage:
    extraVolumes:
      - name: gcsfs-creds
        secret:
          secretName: gcsfs-creds
          items:
          - key: keyfile.json
            path: keyfile.json
    extraVolumeMounts:
      - name: gcsfs-creds
        mountPath: &quot;/etc/secrets&quot;
        readOnly: true
</code></pre>
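<p>For reference, the secret itself can be created from the key file with something like the command below (the namespace is an assumption — use the one your JupyterHub release is installed in):</p> <pre><code>kubectl create secret generic gcsfs-creds \
  --from-file=keyfile.json=/path/to/keyfile.json \
  -n jupyterhub
</code></pre>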
<p>I am new to kubernetes I am writing a yml file to create a deployment. I am crating deployment by running this command &quot;kubectl create -f backend-deployment.yml&quot; but I keep getting this error: &quot;error: error parsing backend-deployment.yml: error converting YAML to JSON: yaml: line 16: did not find expected '-' indicator&quot;</p> <p>line 16 is - name: django-react-ecommerce-master_backend_1</p> <p>following is my backend-deployment.yml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name : backend-deployment spec: replicas: 1 selector: matchLabels: component: backend template: metadata: labels: component: backend spec: containers: - name: django-react-ecommerce-master_backend_1 ports: - containerPort: 8000 </code></pre>
<p>The problem comes from the line below; you have an indentation problem there. <code>ports</code> should be at the same level as <code>name</code>.</p> <pre class="lang-yaml prettyprint-override"><code># indent the ports block
spec:
  containers:
    - name: django-react-ecommerce-master_backend_1
      ports:
        - containerPort: 8000
</code></pre>
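<p>Note that even with the indentation fixed, this Deployment still won't run as-is: the container spec has no <code>image</code>, and the container name contains an underscore, while container names must be a valid DNS label (lowercase alphanumerics and '-'). A corrected sketch, with a placeholder image name:</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
    - name: backend
      image: your-registry/django-react-backend:latest   # placeholder, use your own image
      ports:
        - containerPort: 8000
</code></pre>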
<p>I need to get error pods older than five days. The command below works well for pods that are younger than five days. Could anyone please let me know how to get only the pods that are older than 5 days? It should not show the error pods that are younger than 5 days.</p> <pre><code>kubectl get pods --all-namespaces --sort-by=.metadata.creationTimestamp | awk 'match($6,/^[1-4]d|^[1-900]+h|^[1-900]+m|^[1-900]+s/) {print $0}' | grep &quot;Error&quot;
</code></pre>
<p>I've got two options for you:</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | awk 'match($5,/[6-9]d|[0-9][0-9]d|[0-9][0-9][0-9]d/) {print $0}' | grep -i error
</code></pre> <p>or</p> <pre><code>kubectl get pods --field-selector=status.phase=Pending --sort-by=.metadata.creationTimestamp | awk 'match($5,/[6-9]d|[0-9][0-9]d|[0-9][0-9][0-9]d/) {print $0}'
</code></pre> <p>Both will only show pods that have existed for 6 days or longer. The first option also looks for those with errors, and the second one shows only those with <code>Status=Pending</code>.</p>
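<p>If the pods you care about are the ones that ended in an error, a hedged variant is to filter on the <code>Failed</code> phase server-side (a pod whose status shows <code>Error</code> is in that phase) and keep the same age test; note the AGE column is <code>$6</code> when <code>--all-namespaces</code> is used:</p> <pre><code>kubectl get pods --all-namespaces --field-selector=status.phase=Failed --sort-by=.metadata.creationTimestamp | awk 'match($6,/^([6-9]|[1-9][0-9]+)d/) {print $0}'
</code></pre>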
<p>I would like to set the value <code>KubeletConfiguration.cpuCFSQuota = false</code> in the <code>config.yaml</code> passed to <code>kubeadm</code> when launching <code>minikube</code> to turn off CPU resource checking, but I have not managed to find the options to do this through the documentation here <a href="https://minikube.sigs.k8s.io/docs/handbook/config/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/config/</a> . The closest solution I have found is to use the option <code>--extra-config=kubelet.cpu-cfs-quota=false</code> but the <code>--cpu-cfs-quota</code> option for the <code>kubelet</code> has been deprecated and no longer has an effect.</p> <p>Any ideas appreciated.</p> <p>Environment:</p> <ul> <li>Ubuntu 20.04</li> <li>Minikube 1.17.1</li> <li>Kubernetes 1.20.2</li> <li>Driver docker (20.10.2)</li> </ul> <p>Thanks, Piers.</p>
<p>Using the <code>--extra-config=kubelet.</code> flag alongside <code>minikube start</code> is a good approach, but you would also need to <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">set Kubelet parameters via a config file</a>.</p> <p>As you already noticed, the <code>--cpu-cfs-quota</code> flag:</p> <blockquote> <p>Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag.</p> </blockquote> <p>So you need to set that parameter by creating a <code>kubelet</code> config file:</p> <blockquote> <p>The configuration file must be a JSON or YAML representation of the parameters in this struct. Make sure the Kubelet has read permissions on the file.</p> <p>Here is an example of what this file might look like:</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available:  &quot;200Mi&quot;
</code></pre> </blockquote> <p>Now you can use that config file to set <code>cpuCFSQuota</code> = <code>false</code>:</p> <pre><code>// cpuCFSQuota enables CPU CFS quota enforcement for containers that
// specify CPU limits.
// Dynamic Kubelet Config (beta): If dynamically updating this field, consider that
// disabling it may reduce node stability.
// Default: true
// +optional`
CPUCFSQuota *bool `json:&quot;cpuCFSQuota,omitempty&quot;
</code></pre> <p>and then call minikube with <code>--extra-config=kubelet.config=/path/to/config.yaml</code></p> <p>Alternatively, you can start your minikube without the <code>--extra-config</code> flag and then <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#start-a-kubelet-process-configured-via-the-config-file" rel="nofollow noreferrer">start the Kubelet with the <code>--config</code> flag</a> set to the path of the Kubelet's config file. The Kubelet will then load its config from this file.</p> <p>I know these are a few more steps than you expected, but setting the kubelet parameters via a config file is the recommended approach because it simplifies node deployment and configuration management.</p>
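<p>Concretely, a minimal config file for this case could look like the sketch below (the path is arbitrary — just make sure it is readable from inside the minikube node, which with the <code>none</code> driver is simply the host filesystem):</p> <pre><code># /path/to/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuCFSQuota: false
</code></pre> <pre><code>minikube start --extra-config=kubelet.config=/path/to/config.yaml
</code></pre>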
<p>envoy container failing while startup with the below error</p> <pre><code>Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: Unknown field in: {&quot;static_resources&quot;:{&quot;listeners&quot;:[{&quot;address&quot;:{&quot;socket_address&quot;:{&quot;address&quot;:&quot;0.0.0.0&quot;,&quot;port_value&quot;:443}},&quot;filter_chains&quot;:[{&quot;tls_context&quot;:{&quot;common_tls_context&quot;:{&quot;tls_certificates&quot;:[{&quot;private_key&quot;:{&quot;filename&quot;:&quot;/etc/ssl/private.key&quot;},&quot;certificate_chain&quot;:{&quot;filename&quot;:&quot;/etc/ssl/keychain.crt&quot;}}]}},&quot;filters&quot;:[{&quot;typed_config&quot;:{&quot;route_config&quot;:{&quot;name&quot;:&quot;local_route&quot;,&quot;virtual_hosts&quot;:[{&quot;domains&quot;:[&quot;*&quot;],&quot;routes&quot;:[{&quot;match&quot;:{&quot;prefix&quot;:&quot;/&quot;},&quot;route&quot;:{&quot;host_rewrite_literal&quot;:&quot;127.0.0.1&quot;,&quot;cluster&quot;:&quot;service_envoyproxy_io&quot;}}],&quot;name&quot;:&quot;local_service&quot;}]},&quot;@type&quot;:&quot;type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager&quot;,&quot;http_filters&quot;:[{&quot;name&quot;:&quot;envoy.filters.http.router&quot;}],&quot;access_log&quot;:[{&quot;typed_config&quot;:{&quot;@type&quot;:&quot;type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog&quot;,&quot;path&quot;:&quot;/dev/stdout&quot;},&quot;name&quot;:&quot;envoy.access_loggers.file&quot;}],&quot;stat_prefix&quot;:&quot;ingress_http&quot;},&quot;name&quot;:&quot;envoy.filters.network.http_connection_manager&quot;}]}],&quot;name&quot;:&quot;listener_0&quot;}],&quot;clusters&quot;:[{&quot;load_assignment&quot;:{&quot;cluster_name&quot;:&quot;service_envoyproxy_io&quot;,&quot;endpoints&quot;:[{&quot;lb_endpoints&quot;:[{&quot;endpoint&quot;:{&quot;address&quot;:{&quot;socket_address&quot;:{&quot;port_value&quot;:8080,&quot;address&quot;:&quot;127.0.0.1&quot;}}}}]}]},&quot;connect_timeout&quot;:&quot;30s&quot;,&quot;name&quot;:&quot;service_envoyproxy_io&quot;,&quot;dns_lookup_family&quot;:&quot;V4_ONLY&quot;,&quot;transport_socket&quot;:{&quot;name&quot;:&quot;envoy.transport_sockets.tls&quot;,&quot;typed_config&quot;:{&quot;@type&quot;:&quot;type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext&quot;,&quot;sni&quot;:&quot;www.envoyproxy.io&quot;}},&quot;type&quot;:&quot;LOGICAL_DNS&quot;}]}} </code></pre> <p>Here's my envoy.yaml file</p> <pre><code>static_resources: listeners: - name: listener_0 address: socket_address: address: 0.0.0.0 port_value: 443 filter_chains: - filters: - name: envoy.filters.network.http_connection_manager typed_config: &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager stat_prefix: ingress_http access_log: - name: envoy.access_loggers.file typed_config: &quot;@type&quot;: type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog path: /dev/stdout http_filters: - name: envoy.filters.http.router route_config: name: local_route virtual_hosts: - name: local_service domains: [&quot;*&quot;] routes: - match: prefix: &quot;/&quot; route: host_rewrite_literal: 127.0.0.1 cluster: service_envoyproxy_io tls_context: common_tls_context: tls_certificates: - certificate_chain: filename: &quot;/etc/ssl/keychain.crt&quot; private_key: filename: &quot;/etc/ssl/private.key&quot; clusters: - name: 
service_envoyproxy_io connect_timeout: 30s type: LOGICAL_DNS # Comment out the following line to test on v6 networks dns_lookup_family: V4_ONLY load_assignment: cluster_name: service_envoyproxy_io endpoints: - lb_endpoints: - endpoint: address: socket_address: address: 127.0.0.1 port_value: 8080 transport_socket: name: envoy.transport_sockets.tls typed_config: &quot;@type&quot;: type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext sni: www.envoyproxy.io </code></pre> <p>I'm I doing something wrong here?</p>
<p>The error message states that: <code>Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021</code>. The v2 xDS APIs are deprecated and will be removed from Envoy in Q1 2021, as per the <a href="https://github.com/envoyproxy/envoy/blob/a52ec8b4146dc7db630cd5372fb714f104e1f740//api/API_VERSIONING.md" rel="nofollow noreferrer">API versioning policy</a>.</p> <p>According to the official docs you have the following options:</p> <ol> <li>In the interim, you can continue to use the v2 API for the transitional period by:</li> </ol> <ul> <li><p>Setting <code>--bootstrap-version 2</code> on the CLI for a v2 bootstrap file.</p> </li> <li><p>Enabling the runtime <em>envoy.reloadable_features.enable_deprecated_v2_api</em> feature. This is implicitly enabled if a v2 <code>--bootstrap-version</code> is set.</p> </li> </ul> <ol start="2"> <li>Or <a href="https://www.envoyproxy.io/docs/envoy/latest/faq/api/envoy_v3#how-do-i-configure-envoy-to-use-the-v3-api" rel="nofollow noreferrer">Configure Envoy to use the v3 API</a></li> </ol> <p>More details can be found in the linked docs.</p>
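<p>In your particular config, the part that no longer parses as v3 looks to be the listener-level <code>tls_context</code>, which was replaced by a <code>transport_socket</code> in the v3 API. A rough sketch of how that part of the filter chain could be rewritten for v3 (based on the v3 <code>DownstreamTlsContext</code>; treat it as a starting point, not a drop-in replacement):</p> <pre><code>    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          # ... your existing HttpConnectionManager config, unchanged ...
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          &quot;@type&quot;: type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: /etc/ssl/keychain.crt
              private_key:
                filename: /etc/ssl/private.key
</code></pre>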
<p>If the max-file value is set to 2, two files are created as shown below.</p> <pre><code>11111-json.log
11111-json.log.1
</code></pre> <p>But here, when the <code>11111-json.log</code> file reaches <code>max-size</code>, the contents of <code>11111-json.log</code> are moved to <code>11111-json.log.1</code> and the size of <code>11111-json.log</code> becomes zero. At this point I lose the last log entries.</p> <p>The log in the <code>/var/log/container</code> path eventually links to <code>/var/lib/docker/containers/~</code>, so if the file mentioned above works that way, the log will be lost.</p> <p>How can I avoid losing them?</p>
<p>According to your settings, all rotated logs (.log.1, .log.2) are stored in /var/lib/docker/containers/... and, as per the docker documentation, you can change those settings for docker in <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">daemon.json</a>: </p> <pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre> <p>In /var/log/containers you can find a symlink to the most recently created log file.</p> <p>As per the documentation for <a href="https://docs.fluentd.org/input/tail#example-configuration" rel="nofollow noreferrer">fluentd</a>, you should consider using the <strong>in_tail</strong> plugin: </p> <blockquote> <p>in_tail is included in Fluentd's core. No additional installation process is required. When Fluentd is first configured with in_tail, it will start reading from the tail of that log, not the beginning. Once the log is rotated, Fluentd starts reading the new file from the beginning. It keeps track of the current inode number.</p> </blockquote> <p>Please refer to the similar <a href="https://stackoverflow.com/a/49633015/11207414">community post</a>.</p>
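<p>For completeness, a minimal <strong>in_tail</strong> source sketch for container logs (the paths, tag and <code>read_from_head</code> choice are assumptions — adjust them to your fluentd deployment):</p> <pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  &lt;parse&gt;
    @type json
  &lt;/parse&gt;
&lt;/source&gt;
</code></pre>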
<p>In kubernetes network policy we can set Ingress value as blank array i.e. [] or we can also set value as - {}</p> <p>What is the difference between using these 2 values?</p> <p><strong>First YAML that I tried - It didn't work</strong></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: internal-policy spec: podSelector: matchLabels: name: internal policyTypes: ["Ingress","Egress"] ingress: [] egress: - to: - podSelector: matchLabels: name: mysql ports: - protocol: TCP port: 3306 </code></pre> <p>Second YAML that was answer in katacoda scenario</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: internal-policy namespace: default spec: podSelector: matchLabels: name: internal policyTypes: - Egress - Ingress ingress: - {} egress: - to: - podSelector: matchLabels: name: mysql ports: - protocol: TCP port: 3306 </code></pre>
<p>In both cases you have specified <strong>Policy Types: Ingress and Egress</strong>.</p> <ol> <li>In the first example:</li> </ol> <pre><code> ingress: []
</code></pre> <p>this rule is an empty list and <strong>denies all ingress traffic</strong> (the same result as when no ingress rules are present in the spec).</p> <p>You can verify this by running:</p> <pre><code>kubectl describe networkpolicy internal-policy

Allowing ingress traffic:
    &lt;none&gt; (Selected pods are isolated for ingress connectivity)
</code></pre> <ol start="2"> <li>In the second example:</li> </ol> <pre><code> ingress:
 - {}
</code></pre> <p>this rule <strong>allows all ingress traffic</strong>:</p> <pre><code>kubectl describe networkpolicy internal-policy

Allowing ingress traffic:
    To Port: &lt;any&gt; (traffic allowed to all ports)
    From: &lt;any&gt; (traffic not restricted by source)
</code></pre> <p>As per the documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a></p> <blockquote> <p>Ingress rules: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections.</p> </blockquote> <p>Hope this helps.</p>
<p>I have a problem with service (DNS) discovery in <strong>kubernetes 1.14</strong> version in <strong>ubuntu bionic</strong>.</p> <p>Right now my 2 pods communicating using IP addresses. How can I enable <strong>coredns</strong> for service (DNS) discovery?</p> <p>Here is the output of kubectl for service and pods from kube-system namespace:</p> <pre><code> kubectl get pods,svc --namespace=kube-system | grep dns pod/coredns-fb8b8dccf-6plz2 1/1 Running 0 6d23h pod/coredns-fb8b8dccf-thxh6 1/1 Running 0 6d23h service/kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 6d23h </code></pre> <h2>I have installed kubernetes on master node(ubuntu bionic machine) using below steps</h2> <pre><code> apt-get update apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - add-apt-repository &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&quot; apt-get update apt-get install docker-ce docker-ce-cli containerd.io apt-get update &amp;&amp; apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl kubectl version apt-mark hold kubelet kubeadm kubectl kubeadm config images pull swapoff -a kubeadm init mkdir -p $HOME/.kube cp -i /etc/kubernetes/admin.conf $HOME/.kube/config chown $(id -u):$(id -g) $HOME/.kube/config sysctl net.bridge.bridge-nf-call-iptables=1 kubectl apply -f &quot;https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&quot; kubectl get pods --all-namespaces </code></pre> <h3>This is on worker node</h3> <pre><code> Docker is already installed, so directly installing kubernetes on worker node apt-get update &amp;&amp; apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl kubectl version apt-mark hold kubelet kubeadm kubectl swapoff -a Now joined worker node to master </code></pre> <h2>Answer:-</h2> <p>I think everything was setup correctly by default, There was a misunderstanding by me that I can call a server running in one pod from another pod using the container name and port which I have specified in spec, but instead I should use service name and port.</p> <h2>Below is my deployment spec and service spec:-</h2> <h3>Deployment spec:-</h3> <pre><code> apiVersion: extensions/v1beta1 kind: Deployment metadata: name: node-server1-deployment spec: replicas: 1 template: metadata: labels: app: node-server1 spec: hostname: node-server1 containers: - name: node-server1 image: bvenkatr/node-server1:1 ports: - containerPort: 5551 </code></pre> <h3>Service spec:</h3> <pre><code> kind: Service apiVersion: v1 metadata: name: node-server1-service spec: selector: app: node-server1 ports: - protocol: TCP port: 5551 </code></pre>
<blockquote> <p>As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. In Kubernetes, CoreDNS is installed with the following default Corefile configuration:</p> </blockquote> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
</code></pre> <p>More info can be found <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">here</a>. </p> <p>You can verify your env by running:</p> <pre><code>kubectl get cm coredns -n kube-system -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
</code></pre> <p>and:</p> <pre><code>kubeadm config view

dns:
  type: CoreDNS
</code></pre> <p>During <code>kubeadm init</code> you should have noticed:</p> <pre><code>[addons] Applied essential addon: CoreDNS
</code></pre> <p>If you are moving from kube-dns to CoreDNS, make sure to set the CoreDNS <code>feature gate</code> to <strong>true</strong> during an upgrade. For example, here is what a v1.11.0 upgrade would look like: <code>kubeadm upgrade apply v1.11.0 --feature-gates=CoreDNS=true</code></p> <blockquote> <p>In Kubernetes version 1.13 and later the CoreDNS feature gate is removed and CoreDNS is used by default. More information <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/" rel="nofollow noreferrer">here</a>.</p> </blockquote> <p>You can see if your coredns pod is working properly by running:</p> <pre><code>kubectl logs &lt;your coredns pod&gt; -n kube-system

.:53
2019-05-02T13:32:41.438Z [INFO] CoreDNS-1.3.1
CoreDNS-1.3.1
.
.
</code></pre>
<p>We want to run a Wowza streaming engine among other containers in our Kubernetes cluster on Azure Kubernetes Service (AKS). Wowza uses various ports, some with TCP, some with UDP protocol.</p> <p>We need to expose these ports to the outside world. We can't seem to find a way to set up a load balancer that can forward both TCP and UDP ports.</p> <p>A LoadBalancer service does not support mixed protocols until an upcoming version of K8s, and it will be even longer until this version is available in AKS: <a href="https://github.com/kubernetes/enhancements/issues/1435" rel="nofollow noreferrer">link</a></p> <p>We have tried using nginx-ingress, but it has the same limitation due to the underlying K8s limitation: see comment from author <a href="https://github.com/kubernetes/ingress-nginx/issues/6573" rel="nofollow noreferrer">here</a></p> <p>It would seem like citrix-ingress allows this according to its documentation, but we have a lot of problems making it work at all...</p> <p>Is there any way to do this that we may have missed? Want to make sure we are not missing something obvious.</p>
<p>This is a community wiki answer posted for better visibility. Feel free to edit it once the final solution becomes available (k8s v1.20 on AKS).</p> <p>There is an open enhancement: <a href="https://github.com/kubernetes/enhancements/issues/1435" rel="nofollow noreferrer">Support of mixed protocols in Services with type=LoadBalancer #1435</a> which, when implemented, will enable the creation of a <code>LoadBalancer</code> Service that has different port definitions with different protocols. The stable release is planned for k8s v1.20.</p> <p>Unfortunately, it is currently not possible to mix TCP and UDP ports in a single <code>LoadBalancer</code> Service. It will be, however, once the above enhancement is in place.</p>
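<p>For illustration, this is the kind of Service definition the enhancement is meant to allow (the selector and port numbers are assumptions based on typical Wowza defaults, not taken from your setup):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: wowza
spec:
  type: LoadBalancer
  selector:
    app: wowza
  ports:
  - name: rtmp
    protocol: TCP
    port: 1935
  - name: rtp
    protocol: UDP
    port: 6970
</code></pre>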
<p>I have a dozen cron jobs on GKE. My docker registry is down. The status of these cron jobs becomes: <code>ImagePullBackOff</code></p> <p>My thinking is, the cron jobs should pull the docker image once after deploying and then use the cached/local docker image.</p> <p>They shouldn't pull the docker image from the remote registry every time the cron job creates a new pod. It's a waste, because the docker image doesn't change (I mean the application code of the cron job).</p> <p>So, is there a way to do this?</p> <p>Purpose: if I can do this, my cron jobs will always run using the local docker image until the next deployment, even if the docker registry is down.</p>
<p>You can use one of the "<strong>Container Images</strong>" properties mentioned <a href="https://kubernetes.io/docs/concepts/configuration/overview/#container-images" rel="nofollow noreferrer">here</a>.</p> <p>Please set <code>imagePullPolicy: IfNotPresent</code> in your deployment.</p> <p>Note:</p> <blockquote> <p>if imagePullPolicy is omitted and either the image tag is :latest or it is omitted: Always is applied.</p> </blockquote> <p>Please verify your deployment settings and also verify that the docker images are present on the node.</p>
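<p>A minimal sketch of where this goes in a CronJob spec (the name, schedule and image are placeholders; note the pinned tag instead of <code>:latest</code>, and use <code>batch/v1</code> on newer clusters):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: &quot;*/5 * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: my-registry/my-image:1.0  # pinned tag, not :latest
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
</code></pre>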
<blockquote> <p>How to append a list to another list inside a dictionary using Helm?</p> </blockquote> <p>I have a Helm chart specifying the key <code>helm</code> inside of an Argo CD <code>Application</code> (see snippet below).</p> <p>Now given a <code>values.yaml</code> file, e.g.:</p> <pre><code>helm: valueFiles: - myvalues1.yaml - myvalues2.yaml </code></pre> <p>I want to append <code>helm.valuesFiles</code> to the one below. How can I achieve this? The <a href="https://helm.sh/docs/chart_template_guide/function_list/#merge-mustmerge" rel="nofollow noreferrer">merge</a> function doesn't seem to satisfy my needs in this case, since precedence will be given to the first dictionary.</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: guestbook # You'll usually want to add your resources to the argocd namespace. namespace: argocd # Add this finalizer ONLY if you want these to cascade delete. finalizers: - resources-finalizer.argocd.argoproj.io # Add labels to your application object. labels: name: guestbook spec: # The project the application belongs to. project: default # Source of the application manifests source: repoURL: https://github.com/argoproj/argocd-example-apps.git # Can point to either a Helm chart repo or a git repo. targetRevision: HEAD # For Helm, this refers to the chart version. path: guestbook # This has no meaning for Helm charts pulled directly from a Helm repo instead of git. # helm specific config chart: chart-name # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts. helm: passCredentials: false # If true then adds --pass-credentials to Helm commands to pass credentials to all domains # Extra parameters to set (same as setting through values.yaml, but these take precedence) parameters: - name: &quot;nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname&quot; value: mydomain.example.com - name: &quot;ingress.annotations.kubernetes\\.io/tls-acme&quot; value: &quot;true&quot; forceString: true # ensures that value is treated as a string # Use the contents of files as parameters (uses Helm's --set-file) fileParameters: - name: config path: files/config.json # Release name override (defaults to application name) releaseName: guestbook # Helm values files for overriding values in the helm chart # The path is relative to the spec.source.path directory defined above valueFiles: - values-prod.yaml </code></pre> <p><a href="https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml</a></p>
<p>If you only need to append <code>helm.valueFiles</code> to the existing <code>.spec.source.helm.valueFiles</code>, you can range through the list in the values file and add the list items like this:</p> <pre><code>valueFiles: - values-prod.yaml {{- range $item := .Values.helm.valueFiles }} - {{ $item }} {{- end }} </code></pre>
<p>I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod. The NFS folder <code>/mount/protected</code> has user access restrictions, i.e. only certain users can access this folder.</p> <p>This is my Pod configuration:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: volumes: - name: my-volume hostPath: path: /mount/protected type: Directory secret: secretName: my-secret containers: - name: my-container image: &lt;...&gt; command: ["/bin/sh"] args: ["-c", "python /my-volume/test.py"] volumeMounts: - name: my-volume mountPath: /my-volume </code></pre> <p>When applying it, I get the following error:</p> <pre><code>The Pod "my-pod" is invalid: * spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type * spec.containers[0].volumeMounts[0].name: Not found: "my-volume" </code></pre> <p>I created my-secret according to the following guide:<br> <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret</a><br> So basically:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: my-secret data: username: bXktYXBw password: PHJlZGFjdGVkPg== </code></pre> <p>But when I mount the folder <code>/mount/protected</code> with:</p> <pre><code>spec: volumes: - name: my-volume hostPath: path: /mount/protected type: Directory </code></pre> <p>I get a permission denied error <code>python: can't open file '/my-volume/test.py': [Errno 13] Permission denied</code> when running a Pod that mounts this volume path.</p> <p>My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?</p>
<p>I eventually figured out how to pass user credentials to a mounted directory within a Pod by using CIFS Flexvolume Plugin for Kubernetes (<a href="https://github.com/fstab/cifs" rel="nofollow noreferrer">https://github.com/fstab/cifs</a>). With this Plugin, every user can pass her/his credentials to the Pod. The user only needs to create a Kubernetes secret (<code>cifs-secret</code>), storing the username/password and use this secret for the mount within the Pod. The volume is then mounted as follows:</p> <pre><code> (...) volumes: - name: test flexVolume: driver: "fstab/cifs" fsType: "cifs" secretRef: name: "cifs-secret" options: networkPath: "//server/share" mountOptions: "dir_mode=0755,file_mode=0644,noperm" </code></pre>
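<p>For reference, the <code>cifs-secret</code> referenced above can be created roughly like this (the values are base64-encoded placeholders and the <code>fstab/cifs</code> type follows the plugin's README — adjust if your version differs):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: bXl1c2Vy            # echo -n 'myuser' | base64
  password: bXlwYXNzd29yZA==    # echo -n 'mypassword' | base64
</code></pre>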
<p>I'm installing a kubernetes cluster on raspberry pis with hypriotOS. During the installation process, I have to only install kubeadm by using</p> <blockquote> <p>apt-get install kubeadm</p> </blockquote> <p>Can someone explain to me what kudeam actually does? I already read about bootstrapping in the documentation, but I don't understand exactly. I'm also wondering why I only have to install kubeadm, since it is written in the documentation that:</p> <blockquote> <p>kubeadm will not install or manage kubelet or kubectl</p> </blockquote> <p>After the installation I can use kubectl etc. without having installed it explicitly like </p> <blockquote> <p>apt-get install kubeadm kubectl kubelet kubernetes-cni</p> </blockquote>
<p>As mentioned by <a href="https://stackoverflow.com/users/11294273/manuel-dom%C3%ADnguez">@Manuel Domínguez</a>: Kubeadm is a tool to build Kubernetes clusters. It's responsible for cluster bootstrapping. It also supports upgrades, downgrades, and managing bootstrap tokens.</p> <p>First of all, kubeadm runs a series of prechecks to ensure that the machine is ready to run Kubernetes. While bootstrapping the cluster, kubeadm downloads and installs the cluster control plane components and configures all necessary cluster resources, e.g.:</p> <p>Control plane components like:</p> <ul> <li>kube-apiserver,</li> <li>kube-controller-manager,</li> <li>kube-scheduler,</li> <li>etcd</li> </ul> <p>Runtime components like:</p> <ul> <li>kubelet,</li> <li>kube-proxy,</li> <li>container runtime</li> </ul> <p>You can find more information about Kubeadm:</p> <ul> <li><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">&quot;Creating a single master cluster with kubeadm&quot;</a></li> <li><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/" rel="nofollow noreferrer">&quot;Overview of kubeadm&quot;</a></li> <li><a href="https://github.com/kubernetes/kubeadm" rel="nofollow noreferrer">&quot;Github repository&quot;</a></li> </ul> <p>Hope this helps.</p>
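<p>As a rough illustration of the typical bootstrap flow kubeadm drives (the address, token and hash below are placeholders that <code>kubeadm init</code> prints for you):</p> <pre><code># on the control-plane (master) node
kubeadm init

# on each worker node, using the join command printed by 'kubeadm init'
kubeadm join &lt;control-plane-ip&gt;:6443 --token &lt;token&gt; \
    --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre>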
<p>I create one EKS cluster with 6 nodes. In every node attach EFS file system using launch template and create a file in file system and file also accessible from all worker node. By SSH i check this operation.</p> <p>But when try to create a test-application pod with dynamic pvc claim after installing csi driver and storage class creation in eks, cluster tell me,</p> <p><strong>PVC Pod:</strong></p> <pre><code>Normal ExternalProvisioning 2 hours persistentvolume-controller waiting for a volume to be created, either by external provisioner &quot;efs.csi.aws.com&quot; or manually created by system administrator </code></pre> <p><strong>Application Pod:</strong></p> <pre><code>FailedScheduling 2 hours default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. </code></pre> <p>How can i fix this efs issues from eks....? please give me some guidance. I am stucking in here almost 1 months.</p> <p><strong>PVC describe:</strong></p> <pre><code>{ &quot;kind&quot;: &quot;PersistentVolumeClaim&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;efs-claim&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;uid&quot;: &quot;bb7d1d08-f14b-4435-90e2-42255a915e23&quot;, &quot;resourceVersion&quot;: &quot;9805&quot;, &quot;creationTimestamp&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;annotations&quot;: { &quot;kubectl.kubernetes.io/last-applied-configuration&quot;: &quot;{\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;kind\&quot;:\&quot;PersistentVolumeClaim\&quot;,\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{},\&quot;name\&quot;:\&quot;efs-claim\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;},\&quot;spec\&quot;:{\&quot;accessModes\&quot;:[\&quot;ReadWriteMany\&quot;],\&quot;resources\&quot;:{\&quot;requests\&quot;:{\&quot;storage\&quot;:\&quot;5Gi\&quot;}},\&quot;storageClassName\&quot;:\&quot;eks-sc-efs\&quot;}}\n&quot;, &quot;volume.beta.kubernetes.io/storage-provisioner&quot;: &quot;efs.csi.aws.com&quot; }, &quot;finalizers&quot;: [ &quot;kubernetes.io/pvc-protection&quot; ], &quot;managedFields&quot;: [ { &quot;manager&quot;: &quot;kube-controller-manager&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;f:volume.beta.kubernetes.io/storage-provisioner&quot;: {} } } } }, { &quot;manager&quot;: &quot;kubectl-client-side-apply&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;.&quot;: {}, &quot;f:kubectl.kubernetes.io/last-applied-configuration&quot;: {} } }, &quot;f:spec&quot;: { &quot;f:accessModes&quot;: {}, &quot;f:resources&quot;: { &quot;f:requests&quot;: { &quot;.&quot;: {}, &quot;f:storage&quot;: {} } }, &quot;f:storageClassName&quot;: {}, &quot;f:volumeMode&quot;: {} } } } ] }, &quot;spec&quot;: { &quot;accessModes&quot;: [ &quot;ReadWriteMany&quot; ], &quot;resources&quot;: { &quot;requests&quot;: { &quot;storage&quot;: &quot;5Gi&quot; } }, &quot;storageClassName&quot;: &quot;eks-sc-efs&quot;, &quot;volumeMode&quot;: &quot;Filesystem&quot; }, &quot;status&quot;: { &quot;phase&quot;: &quot;Pending&quot; } } </code></pre> <p><strong>Pod 
Describe:</strong></p> <pre><code>{ &quot;kind&quot;: &quot;Pod&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;efs-app&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;uid&quot;: &quot;de410588-0fe4-4ea8-bb48-fe1fdbda30a0&quot;, &quot;resourceVersion&quot;: &quot;9809&quot;, &quot;creationTimestamp&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;annotations&quot;: { &quot;kubectl.kubernetes.io/last-applied-configuration&quot;: &quot;{\&quot;apiVersion\&quot;:\&quot;v1\&quot;,\&quot;kind\&quot;:\&quot;Pod\&quot;,\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{},\&quot;name\&quot;:\&quot;efs-app\&quot;,\&quot;namespace\&quot;:\&quot;default\&quot;},\&quot;spec\&quot;:{\&quot;containers\&quot;:[{\&quot;args\&quot;:[\&quot;-c\&quot;,\&quot;while true; do echo $(date -u) \\u003e\\u003e /data/out; sleep 5; done\&quot;],\&quot;command\&quot;:[\&quot;/bin/sh\&quot;],\&quot;image\&quot;:\&quot;centos\&quot;,\&quot;name\&quot;:\&quot;app\&quot;,\&quot;volumeMounts\&quot;:[{\&quot;mountPath\&quot;:\&quot;/efs\&quot;,\&quot;name\&quot;:\&quot;persistent-storage\&quot;}]}],\&quot;volumes\&quot;:[{\&quot;name\&quot;:\&quot;persistent-storage\&quot;,\&quot;persistentVolumeClaim\&quot;:{\&quot;claimName\&quot;:\&quot;efs-claim\&quot;}}]}}\n&quot;, &quot;kubernetes.io/psp&quot;: &quot;eks.privileged&quot; }, &quot;managedFields&quot;: [ { &quot;manager&quot;: &quot;kube-scheduler&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:status&quot;: { &quot;f:conditions&quot;: { &quot;.&quot;: {}, &quot;k:{\&quot;type\&quot;:\&quot;PodScheduled\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:lastProbeTime&quot;: {}, &quot;f:lastTransitionTime&quot;: {}, &quot;f:message&quot;: {}, &quot;f:reason&quot;: {}, &quot;f:status&quot;: {}, &quot;f:type&quot;: {} } } } } }, { &quot;manager&quot;: &quot;kubectl-client-side-apply&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;time&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;.&quot;: {}, &quot;f:kubectl.kubernetes.io/last-applied-configuration&quot;: {} } }, &quot;f:spec&quot;: { &quot;f:containers&quot;: { &quot;k:{\&quot;name\&quot;:\&quot;app\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:args&quot;: {}, &quot;f:command&quot;: {}, &quot;f:image&quot;: {}, &quot;f:imagePullPolicy&quot;: {}, &quot;f:name&quot;: {}, &quot;f:resources&quot;: {}, &quot;f:terminationMessagePath&quot;: {}, &quot;f:terminationMessagePolicy&quot;: {}, &quot;f:volumeMounts&quot;: { &quot;.&quot;: {}, &quot;k:{\&quot;mountPath\&quot;:\&quot;/efs\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:mountPath&quot;: {}, &quot;f:name&quot;: {} } } } }, &quot;f:dnsPolicy&quot;: {}, &quot;f:enableServiceLinks&quot;: {}, &quot;f:restartPolicy&quot;: {}, &quot;f:schedulerName&quot;: {}, &quot;f:securityContext&quot;: {}, &quot;f:terminationGracePeriodSeconds&quot;: {}, &quot;f:volumes&quot;: { &quot;.&quot;: {}, &quot;k:{\&quot;name\&quot;:\&quot;persistent-storage\&quot;}&quot;: { &quot;.&quot;: {}, &quot;f:name&quot;: {}, &quot;f:persistentVolumeClaim&quot;: { &quot;.&quot;: {}, &quot;f:claimName&quot;: {} } } } } } } ] }, &quot;spec&quot;: { &quot;volumes&quot;: [ { &quot;name&quot;: &quot;persistent-storage&quot;, 
&quot;persistentVolumeClaim&quot;: { &quot;claimName&quot;: &quot;efs-claim&quot; } }, { &quot;name&quot;: &quot;kube-api-access-ppzjw&quot;, &quot;projected&quot;: { &quot;sources&quot;: [ { &quot;serviceAccountToken&quot;: { &quot;expirationSeconds&quot;: 3607, &quot;path&quot;: &quot;token&quot; } }, { &quot;configMap&quot;: { &quot;name&quot;: &quot;kube-root-ca.crt&quot;, &quot;items&quot;: [ { &quot;key&quot;: &quot;ca.crt&quot;, &quot;path&quot;: &quot;ca.crt&quot; } ] } }, { &quot;downwardAPI&quot;: { &quot;items&quot;: [ { &quot;path&quot;: &quot;namespace&quot;, &quot;fieldRef&quot;: { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;fieldPath&quot;: &quot;metadata.namespace&quot; } } ] } } ], &quot;defaultMode&quot;: 420 } } ], &quot;containers&quot;: [ { &quot;name&quot;: &quot;app&quot;, &quot;image&quot;: &quot;centos&quot;, &quot;command&quot;: [ &quot;/bin/sh&quot; ], &quot;args&quot;: [ &quot;-c&quot;, &quot;while true; do echo $(date -u) &gt;&gt; /data/out; sleep 5; done&quot; ], &quot;resources&quot;: {}, &quot;volumeMounts&quot;: [ { &quot;name&quot;: &quot;persistent-storage&quot;, &quot;mountPath&quot;: &quot;/efs&quot; }, { &quot;name&quot;: &quot;kube-api-access-ppzjw&quot;, &quot;readOnly&quot;: true, &quot;mountPath&quot;: &quot;/var/run/secrets/kubernetes.io/serviceaccount&quot; } ], &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot;, &quot;terminationMessagePolicy&quot;: &quot;File&quot;, &quot;imagePullPolicy&quot;: &quot;Always&quot; } ], &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;terminationGracePeriodSeconds&quot;: 30, &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;serviceAccountName&quot;: &quot;default&quot;, &quot;serviceAccount&quot;: &quot;default&quot;, &quot;securityContext&quot;: {}, &quot;schedulerName&quot;: &quot;default-scheduler&quot;, &quot;tolerations&quot;: [ { &quot;key&quot;: &quot;node.kubernetes.io/not-ready&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;tolerationSeconds&quot;: 300 }, { &quot;key&quot;: &quot;node.kubernetes.io/unreachable&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;tolerationSeconds&quot;: 300 } ], &quot;priority&quot;: 0, &quot;enableServiceLinks&quot;: true, &quot;preemptionPolicy&quot;: &quot;PreemptLowerPriority&quot; }, &quot;status&quot;: { &quot;phase&quot;: &quot;Pending&quot;, &quot;conditions&quot;: [ { &quot;type&quot;: &quot;PodScheduled&quot;, &quot;status&quot;: &quot;False&quot;, &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2022-07-18T03:50:14Z&quot;, &quot;reason&quot;: &quot;Unschedulable&quot;, &quot;message&quot;: &quot;0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.&quot; } ], &quot;qosClass&quot;: &quot;BestEffort&quot; } } </code></pre> <p><strong>Storage Class Descibe:</strong></p> <pre><code>{ &quot;kind&quot;: &quot;StorageClass&quot;, &quot;apiVersion&quot;: &quot;storage.k8s.io/v1&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;eks-sc-efs&quot;, &quot;uid&quot;: &quot;9ea3d917-ef15-4863-ae62-e0de929c3134&quot;, &quot;resourceVersion&quot;: &quot;2756&quot;, &quot;creationTimestamp&quot;: &quot;2022-07-18T03:35:09Z&quot;, &quot;annotations&quot;: { &quot;kubectl.kubernetes.io/last-applied-configuration&quot;: 
&quot;{\&quot;allowVolumeExpansion\&quot;:true,\&quot;apiVersion\&quot;:\&quot;storage.k8s.io/v1\&quot;,\&quot;kind\&quot;:\&quot;StorageClass\&quot;,\&quot;metadata\&quot;:{\&quot;annotations\&quot;:{},\&quot;name\&quot;:\&quot;eks-sc-efs\&quot;},\&quot;mountOptions\&quot;:[\&quot;tls\&quot;],\&quot;parameters\&quot;:{\&quot;basePath\&quot;:\&quot;/\&quot;,\&quot;directoryPerms\&quot;:\&quot;700\&quot;,\&quot;fileSystemId\&quot;:\&quot;fs-0c8427977faa4865c\&quot;,\&quot;gidRangeEnd\&quot;:\&quot;2000\&quot;,\&quot;gidRangeStart\&quot;:\&quot;1000\&quot;,\&quot;provisioningMode\&quot;:\&quot;efs-ap\&quot;},\&quot;provisioner\&quot;:\&quot;efs.csi.aws.com\&quot;}\n&quot; }, &quot;managedFields&quot;: [ { &quot;manager&quot;: &quot;kubectl-client-side-apply&quot;, &quot;operation&quot;: &quot;Update&quot;, &quot;apiVersion&quot;: &quot;storage.k8s.io/v1&quot;, &quot;time&quot;: &quot;2022-07-18T03:35:09Z&quot;, &quot;fieldsType&quot;: &quot;FieldsV1&quot;, &quot;fieldsV1&quot;: { &quot;f:allowVolumeExpansion&quot;: {}, &quot;f:metadata&quot;: { &quot;f:annotations&quot;: { &quot;.&quot;: {}, &quot;f:kubectl.kubernetes.io/last-applied-configuration&quot;: {} } }, &quot;f:mountOptions&quot;: {}, &quot;f:parameters&quot;: { &quot;.&quot;: {}, &quot;f:basePath&quot;: {}, &quot;f:directoryPerms&quot;: {}, &quot;f:fileSystemId&quot;: {}, &quot;f:gidRangeEnd&quot;: {}, &quot;f:gidRangeStart&quot;: {}, &quot;f:provisioningMode&quot;: {} }, &quot;f:provisioner&quot;: {}, &quot;f:reclaimPolicy&quot;: {}, &quot;f:volumeBindingMode&quot;: {} } } ] }, &quot;provisioner&quot;: &quot;efs.csi.aws.com&quot;, &quot;parameters&quot;: { &quot;basePath&quot;: &quot;/&quot;, &quot;directoryPerms&quot;: &quot;700&quot;, &quot;fileSystemId&quot;: &quot;fs-*******4865c&quot;, &quot;gidRangeEnd&quot;: &quot;2000&quot;, &quot;gidRangeStart&quot;: &quot;1000&quot;, &quot;provisioningMode&quot;: &quot;efs-ap&quot; }, &quot;reclaimPolicy&quot;: &quot;Delete&quot;, &quot;mountOptions&quot;: [ &quot;tls&quot; ], &quot;allowVolumeExpansion&quot;: true, &quot;volumeBindingMode&quot;: &quot;Immediate&quot; } </code></pre>
<p>According to the PVC's Pending status, there is no provisioner able to create the volume. Have you tried creating a PersistentVolume manually?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  storageClassName: eks-sc-efs
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # must match the access mode requested by the PVC
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxxxxxxxx   # your EFS file system ID
</code></pre> <p>Describe the object and see if there are any other errors. Additionally, make sure the CSI drivers are installed. I found this documentation from AWS EKS <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html</a></p>
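<p>A quick way to check whether the EFS CSI driver is actually running (the grep pattern is an assumption — the pod names depend on how the driver was installed):</p> <pre><code>kubectl get pods -n kube-system | grep efs-csi
kubectl get csidrivers
</code></pre>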
<p>I have 3 services that are based on the same image, they're basically running the same app in 3 different configurations. 1 service is responsible for running migrations and data updates which the other 2 services will need. So I need this 1 service to be deployed first before the other 2 will be deployed. Is there any way to do this?</p>
<p>I would look for a solution outside of Kubernetes.</p> <p>Assuming that you have a release pipeline to deploy the changes, you could have a step to migrate and update the database. If the migration step succeeds, then deploy all the new services.</p> <p>If you really have to use K8S, see the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">InitContainers doc</a>.</p>
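<p>If you go the Kubernetes route, a rough sketch of the init container approach could look like this (the service name <code>migrator</code>, its port and the <code>/healthz</code> endpoint are placeholders for whatever your migration service exposes):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
      - name: wait-for-migrations
        image: busybox:1.34
        # block pod startup until the migration service answers on its health endpoint
        command: ['sh', '-c', 'until wget -qO- http://migrator:8080/healthz; do echo waiting; sleep 5; done']
      containers:
      - name: app
        image: my-registry/my-app:1.0
</code></pre>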
<p>I'm trying to deploy an app to AKS cluster. Everytime I push changes to my branch, I want AKS to redeploy pods and make use of the most recent tag (which I have versioned with $(Build.BuildId))</p> <p>The problem is right now I have to manually retrieve this build version and enter it into deployment.yaml and then run <code>kubectl apply -f deployment.yaml</code> for the change to go ahead. For example, the most recent tag is 58642, so I would have to log into my Azure Container Registry, retrieve the version number, update the deployment.yaml, and then apply for changes to take effect.</p> <p>How can I change my setup here so that the most recently built and tagged container is deployed to the AKS cluster as part of my CICD?</p> <p>Here is my <code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mission-model-api spec: replicas: 3 selector: matchLabels: app: mission-model-api template: metadata: labels: app: mission-model-api spec: containers: - name: mission-model-api image: my_registry.azurecr.io/serving/mission_model_api:58642 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 256Mi ports: - containerPort: 80 imagePullPolicy: Always --- apiVersion: v1 kind: Service metadata: name: mission-model-api spec: type: ClusterIP ports: - port: 80 selector: app: mission-model-api </code></pre> <p>And here is my CI/CD <code>azure-pipelines.yaml</code></p> <pre><code>variables: tag: '$(Build.BuildId)' vmImageName: 'ubuntu-latest' envName: 'poc-releases' docker_image_name: API imagePullSecret: 'AUTH' dockerRegistryServiceConnection: 'XX' trigger: batch: true branches: include: - feature/* stages: - stage: Build displayName: Build stage pool: vmImage: $(vmImageName) jobs: - job: Build displayName: Build job variables: PROJECT_DIR: $(Build.SourcesDirectory)/apps/$(docker_image_name) IMAGE_AND_TAG: &quot;$(docker_image_name):$(tag)&quot; steps: - script: | az acr login --name my_registry.azurecr.io --username user --password $(acr_password) displayName: ACR Login - bash: &gt; docker build -f ./Dockerfile -t &quot;$(IMAGE_AND_TAG)&quot; . displayName: Build docker image workingDirectory: $(PROJECT_DIR) - script: | REGISTRY_PATH=my_registry.azurecr.io/serving docker tag &quot;$(IMAGE_AND_TAG)&quot; &quot;$REGISTRY_PATH/$(IMAGE_AND_TAG)&quot; docker push &quot;$REGISTRY_PATH/$(IMAGE_AND_TAG)&quot; displayName: Tag and Push to ACR - task: PublishPipelineArtifact@0 inputs: artifact: 'manifests' artifactName: 'manifests' targetPath: '$(PROJECT_DIR)/manifests' - stage: Deploy_BVT displayName: Deploy BVT dependsOn: Build jobs: - deployment: Deploy_BVT pool: vmImage: $(vmImageName) environment: '$(envName).ingress-basic' strategy: runOnce: deploy: steps: - task: DownloadPipelineArtifact@1 inputs: artifactName: 'manifests' downloadPath: '$(System.ArtifactsDirectory)/manifests' - task: KubernetesManifest@0 displayName: Create imagePullSecret inputs: action: createSecret secretName: $(imagePullSecret) namespace: ingress-basic dockerRegistryEndpoint: $(dockerRegistryServiceConnection) - task: KubernetesManifest@0 displayName: Deploy to Kubernetes cluster inputs: action: deploy namespace: &quot;ingress-basic&quot; manifests: | $(System.ArtifactsDirectory)/manifests/cluster-isseur.yaml $(System.ArtifactsDirectory)/manifests/deployment.yaml $(System.ArtifactsDirectory)/manifests/ingress.yaml imagePullSecrets: | $(imagePullSecret) containers: | &quot;$REGISTRY_PATH/$(docker_image_name):$(tag)&quot; </code></pre>
<p>Replace tokens task can solve your problem. I use it most of the time.</p> <ol> <li>For the deployment yaml, change the image like this.</li> </ol> <blockquote> <pre><code>image: my_registry.azurecr.io/serving/mission_model_api:#{Build.BuildId}# </code></pre> </blockquote> <ol start="2"> <li>Before the <strong>task: PublishPipelineArtifact@0</strong> task in <strong>Build</strong> stage, put a <strong><a href="https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens" rel="nofollow noreferrer">Replace Tokens</a></strong> task. You should add it as an extension to Azure DevOps</li> </ol> <blockquote> <pre><code>- task: replacetokens@4 inputs: targetFiles: '**/deployment.yml' encoding: 'auto' tokenPattern: 'default' writeBOM: true actionOnMissing: 'warn' keepToken: false actionOnNoFiles: 'continue' enableTransforms: false useLegacyPattern: false enableTelemetry: true </code></pre> </blockquote> <p>Then it should work as you expected.</p>
<p>Hi I'm trying to setup basic logging to get all my pod logs at a single place. Following is the pod-spec I have created but couldn't find the trace of the logs in the location mentioned. What could be missing in the template below?</p> <pre><code> apiVersion: v1 kind: Pod metadata: name: counter spec: containers: - name: count image: busybox args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" &gt;&gt; /u01/kubernetes_prac/logs/log_output.txt; i=$((i+1)); sleep 1; done'] volumeMounts: - name: varlog mountPath: /u01/kubernetes_prac/logs volumes: - name: varlog emptyDir: {} </code></pre>
<p>Try this:</p> <pre><code>volumes:
  - name: varlog
    hostPath:
      path: /tmp/logs
</code></pre> <p>and then check that location on the node for the logs.</p>
<p>I am having problems with some pod staying in init phase all the time.</p> <p>I do not see any errors when I run the <code>pod describe</code> command. This is the list of the events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m default-scheduler Successfully assigned infrastructure/jenkins-74cc957b47-mxvqd to ip-XX-XX-XXX-XXX.eu-west-1.compute.internal Warning BackOff 3m (x3 over 4m) kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal Back-off restarting failed container Normal Pulling 3m (x4 over 5m) kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal pulling image &quot;jenkins/jenkins:lts&quot; Normal Pulled 3m (x4 over 5m) kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal Successfully pulled image &quot;jenkins/jenkins:lts&quot; Normal Created 3m (x4 over 5m) kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal Created container Normal Started 3m (x4 over 5m) kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal Started container </code></pre> <p>I can see this also:</p> <pre><code> State: Running Started: Wed, 23 Sep 2020 09:49:56 +0200 Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 23 Sep 2020 09:49:06 +0200 Finished: Wed, 23 Sep 2020 09:49:27 +0200 Ready: False Restart Count: 3 </code></pre> <p>If I list the pods, it looks like this: <code>Error from server (BadRequest): container &quot;jenkins&quot; in pod &quot;jenkins-74cc957b47-mxvqd&quot; is waiting to start: PodInitializing</code></p> <p>But I am not able to see the specific error. Can someone help?</p>
<p>The official documentation has several recommendations regarding <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">Debug Running Pods</a>:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examining pod logs</a>: by executing <code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}</code> or <code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}</code> if your container has previously crashed</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#container-exec" rel="nofollow noreferrer">Debugging with container exec</a>: run commands inside a specific container with <code>kubectl exec</code>: <code>kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}</code></p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">Debugging with an ephemeral debug container</a>: Ephemeral containers are useful for interactive troubleshooting when <code>kubectl exec</code> is insufficient because a container has crashed or a container image doesn't include debugging utilities. You can find an example <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container-example" rel="nofollow noreferrer">here</a>.</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#node-shell-session" rel="nofollow noreferrer">Debugging via a shell on the node</a>: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.</p> </li> </ul> <p>You can find more details in the linked documentation.</p>
<p>I'm quite new in GitLab cicd. I have created simple nginx deployment including namespace,configmap,svc,deployment configmap contains simple custom index.html with cicd variable:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: index-html-configmap namespace: lazzio data: index.html: | &lt;html&gt; &lt;h1&gt;Welcomee&lt;/h1&gt; &lt;/br&gt; &lt;h1&gt;Hi! This is a configmap Index file for test-tepl+ingress &lt;/h1&gt; &lt;h2&gt; and this ---&gt; $PW_TEST &lt;--- is a password from gitlab cicd variable&lt;/h2&gt; &lt;/html&gt; </code></pre> <p>custom variable PW_TEST is set under cicd/variables section in UI without protected branch</p> <pre><code>#pipeline : stages: - build variables: ENV_NAME: value: &quot;int&quot; 1st-build: environment: name: ${ENV_NAME} variables: PW_TEST: $PW_TEST image: alpine stage: build before_script: - apk add bash - apk add curl script: - echo $PW_TEST - curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot; - install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f nm.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f index.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f depl.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f svc.yml - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f test_ingress_int.yml </code></pre> <p>but when i log into the cluster and make a curl i got same index file as defined within the index.yml.</p> <p>I know its a stupid useless variable in index, but I'm just testing if variable is passing stored as a custom variable in cicd into the deployments on k3s. within another pipeline where is installing eg. any database or k3s cluster via ansible where password or other secrets are needed, so i want to use cicd variables instead of clear text secrets in a files within GitLab repository.</p> <p>Thanks for any hint.</p>
<p>You actually have a few ways to do it.</p> <ol> <li><p>Personally I like <a href="https://www.baeldung.com/linux/envsubst-command" rel="nofollow noreferrer">envsubst</a>; it's easy to implement and lightweight. But you have to install it (e.g. in the gitlab runner image) to avoid downloading it each time the pipeline runs.</p> </li> <li><p>There is also a nice/simple solution using a shell script to basically just replace the string with the variable's value. The disadvantage here is that you have to write sanity checks on your own.</p> <pre><code> sed "s/\${PY_VERSION}/${PY_VERSION}/g; s/\${JQ_VERSION}/${JQ_VERSION}/g" "${FILE}.yaml.in" > "${FILE}.yaml" </code></pre> </li> <li><p>In complicated dynamic deployments (if you have a huge amount of variables) you can use helm to render the variables, with the debug option. The disadvantage here is that you end up with basically all manifest declarations in one file.</p> <pre><code>helm --values ci-variables/api-variables.yaml --debug template ./deployment/api-name > apply_file.yaml</code></pre> </li> </ol>
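<p>For the pipeline in the question, an envsubst-based step could look roughly like this (assuming you keep <code>$PW_TEST</code> as a placeholder in a template file named <code>index.yml.tpl</code> — the name is just an example; <code>gettext</code> provides envsubst on alpine):</p> <pre><code>script:
  - apk add gettext
  - envsubst &lt; index.yml.tpl &gt; index.yml
  - kubectl --kubeconfig $CONF_INT_JK --insecure-skip-tls-verify apply -f index.yml
</code></pre>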
<p>I'm trying to run a simple Kubernetes Pod, and I want to mount the home of the host where the pod is scheduled into the <code>/hosthome</code> directory.</p> <p>I'm using Kubernetes Python API to deploy those pods on a remote cluster (so I can't use something like <code>os.path.expanduser('~')</code> because it'll parse the "client" host home, not the remote one).</p> <p>When I try to deploy the pod with this volume definition:</p> <pre><code>... volumes: - name: hosthome hostPath: path: ~ ... </code></pre> <p>The pod creation fails with this error: <code>create ~: volume name is too short, names should be at least two alphanumeric characters</code>. So I can't use the <code>~</code> shortcut to mount it.</p> <p>So, my question is: is there any way to mount the home directory of the host where the pod is scheduled using only the YAML definition (without replaces or Python functions)?</p> <p>Thanks.</p>
<p>No, I think this is not possible. Only absolute paths are allowed for a host volume mount.</p>
<p>I have a folder in my project, which contains 1 properties file and 1 jar file(db-driver) file.</p> <p>I need to copy both of these files to /usr/local/tomcat/lib directory on my pod. I am not sure how to achieve this in kubernetes yaml file. Below is my yaml file where I am trying to achieve this using configMap, but pod creation fails with error &quot;configmap references non-existent config key: app.properties&quot;</p> <p>Target <code>/usr/local/tomcat/lib</code> already has other jar files so I am trying to use configMap to not override entire directory and just add 2 files which are specific to my application.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: appvolume mountPath: /usr/local/data - name: config mountPath: /usr/local/tomcat/lib subPath: ./configuration ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: appvolume - name: config configMap: name: config-map items: - key: app.properties path: app.properties --- apiVersion: v1 kind: ConfigMap metadata: name: config-map data: key: app.properties </code></pre> <p>Current Directory structure...</p> <pre><code>. ├── configuration │   ├── app.properties │   └── mysql-connector-java-5.1.21.jar ├── deployment.yaml └── service.yaml </code></pre> <p>Please share your valuable feedback on how to achieve this.</p> <p>Regards.</p>
<p>Please try this:</p> <p><strong>kubectl create configmap config-map --from-file=app.properties --from-file=mysql-connector-java-5.1.21.jar</strong></p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: config mountPath: /usr/local/tomcat/lib/conf ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: config configMap: name: config-map </code></pre> <p>or</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat3 image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: config mountPath: /usr/local/tomcat/lib/app.properties subPath: app.properties - name: config mountPath: /usr/local/tomcat/lib/mysql-connector-java-5.1.21.jar subPath: mysql-connector-java-5.1.21.jar ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: config configMap: name: config-map items: - key: app.properties path: app.properties - key: mysql-connector-java-5.1.21.jar path: mysql-connector-java-5.1.21.jar </code></pre>
<p>I am using <strong>KubeSpray</strong> to provision a two node cluster on AWS. By default, the <code>--kubelet-certificate-authority</code> parameter is not used. However, I would like to set it.</p> <p>I do not know the correct setting for <code>--kubelet-certificate-authority</code>. When I set it to <code>/etc/kubernetes/pki/ca.crt</code> I see messages like the following in my logs:</p> <pre><code>TLS handshake error from 10.245.207.223:59278: remote error: tls: unknown certificate authority </code></pre> <p>In order to create an isolated environment for testing, I SSH'ed to my controller node to run this command which runs apiserver on the non-standard port of <code>4667</code>. I copied these values directly from <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. You'll need to adjust the value to match your own cluster. I purposely am running the container in interactive mode so that I can see the log messages.</p> <pre class="lang-sh prettyprint-override"><code>sudo docker run \ --name api-server-playground \ -it \ --rm \ --network host \ --volume /etc/kubernetes:/etc/kubernetes:ro \ --volume /etc/pki:/etc/pki:ro \ --volume /etc/ssl:/etc/ssl/:ro \ k8s.gcr.io/kube-apiserver:v1.18.9 \ kube-apiserver \ --advertise-address=10.245.207.223 \ --allow-privileged=true \ --anonymous-auth=True \ --apiserver-count=1 \ --authorization-mode=Node,RBAC \ --bind-address=0.0.0.0 \ --client-ca-file=/etc/kubernetes/ssl/ca.crt \ --cloud-config=/etc/kubernetes/cloud_config \ --cloud-provider=aws \ --enable-admission-plugins=NodeRestriction \ --enable-aggregator-routing=False \ --enable-bootstrap-token-auth=true \ --endpoint-reconciler-type=lease \ --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem \ --etcd-certfile=/etc/ssl/etcd/ssl/node-ip-10-245-207-223.ec2.internal.pem \ --etcd-keyfile=/etc/ssl/etcd/ssl/node-ip-10-245-207-223.ec2.internal-key.pem \ --etcd-servers=https://10.245.207.119:2379 \ --event-ttl=1h0m0s \ --insecure-port=0 \ --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt \ --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key \ --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP \ --profiling=False \ --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt \ --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key \ --request-timeout=1m0s \ --requestheader-allowed-names=front-proxy-client \ --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt \ --requestheader-extra-headers-prefix=X-Remote-Extra- \ --requestheader-group-headers=X-Remote-Group \ --requestheader-username-headers=X-Remote-User \ --secure-port=6447 \ --service-account-key-file=/etc/kubernetes/ssl/sa.pub \ --service-cluster-ip-range=10.233.0.0/18 \ --service-node-port-range=30000-32767 \ --storage-backend=etcd3 \ --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt \ --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key </code></pre> <p>Now it's possible to SSH into the controller again and use <code>curl</code> to interact with the custom apiserver container.</p> <ul> <li>Set the APISERVER variable to point to the controller node. 
I'm using the value of the <code>advertise-address</code> parameter above.</li> </ul> <pre><code>APISERVER=https://10.245.207.223:6447 </code></pre> <ul> <li>Get a token.</li> </ul> <pre class="lang-sh prettyprint-override"><code>TOKEN=$(kubectl get secrets -o jsonpath=&quot;{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}&quot;|base64 --decode); echo &quot;TOKEN=$TOKEN&quot; </code></pre> <ul> <li>Make an api request. This request will fail because &quot;Peer's Certificate issuer is not recognized.&quot;.</li> </ul> <pre class="lang-sh prettyprint-override"><code>curl --header &quot;Authorization: Bearer $TOKEN&quot; -X GET $APISERVER/api </code></pre> <p>An error message will appear in the docker log. It will look like this:</p> <pre><code>I0921 14:39:07.662368 1 log.go:172] http: TLS handshake error from 10.245.207.223:59278: remote error: tls: unknown certificate authority </code></pre> <ul> <li>Now use the <code>-k</code> curl parameter to bypass the recognition issue.</li> </ul> <pre><code>curl -k --header &quot;Authorization: Bearer $TOKEN&quot; -X GET $APISERVER/api { &quot;kind&quot;: &quot;APIVersions&quot;, &quot;versions&quot;: [ &quot;v1&quot; ], &quot;serverAddressByClientCIDRs&quot;: [ { &quot;clientCIDR&quot;: &quot;0.0.0.0/0&quot;, &quot;serverAddress&quot;: &quot;10.245.207.223:6447&quot; } ] } </code></pre> <p>You'll see the request works correctly. However, I don't want to use the <code>-k</code> parameter. So I tried to use the certificate authority from the apiserver.</p> <ul> <li>Get the certificate authority from the apiserver.</li> </ul> <pre class="lang-sh prettyprint-override"><code>echo | \ openssl s_client -connect $APISERVER 2&gt;/dev/null | \ openssl x509 -text | \ sed -n &quot;/BEGIN CERTIFICATE/,/END CERTIFICATE/p&quot; \ &gt; apiserver.ca.crt </code></pre> <ul> <li>Use the certificate authority file for the api request.</li> </ul> <pre class="lang-sh prettyprint-override"><code>curl --cacert apiserver.ca.crt --header &quot;Authorization: Bearer $TOKEN&quot; -X GET $APISERVER/api </code></pre> <p><strong>UPDATE</strong></p> <p>Following a thought prompted by Wiktor's response, I added /etc/kubernetes/ssl/ca.crt as the certificate authority. And used that file in my <code>curl</code> command.</p> <pre class="lang-sh prettyprint-override"><code>curl --cacert /etc/kubernetes/ssl/ca.crt --header &quot;Authorization: Bearer $TOKEN&quot; -X GET $APISERVER/api </code></pre> <p>This worked.</p>
<p>In order to make the <code>--kubelet-certificate-authority</code> flag work you first need to make sure you got <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authentication" rel="noreferrer">Kubelet authentication</a> and <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization" rel="noreferrer">Kubelet authorization</a> enabled. After that you can follow the Kubernetes documentation and setup the <a href="https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#apiserver-to-kubelet" rel="noreferrer">TLS connection between the apiserver and kubelet</a>. And finally, you can edit the API server pod specification file <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> on the master node and set the <code>--kubelet-certificate-authority</code> parameter to the path to the cert file for the certificate authority.</p> <p>So, to sum up the steps to do are:</p> <ol> <li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authentication" rel="noreferrer">Kubelet authentication</a>:</li> </ol> <ul> <li><p>start the kubelet with the <code>--anonymous-auth=false</code> flag</p> </li> <li><p>start the kubelet with the <code>--client-ca-file</code> flag, providing a CA bundle to verify client certificates with</p> </li> <li><p>start the apiserver with <code>--kubelet-client-certificate</code> and <code>--kubelet-client-key</code> flags</p> </li> <li><p>ensure the <code>authentication.k8s.io/v1beta1</code> API group is enabled in the API server</p> </li> <li><p>start the kubelet with the <code>--authentication-token-webhook</code> and <code>--kubeconfig flags</code></p> </li> <li><p>the kubelet calls the <code>TokenReview</code> API on the configured API server to determine user information from bearer tokens</p> </li> </ul> <ol start="2"> <li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization" rel="noreferrer">Kubelet authorization</a>:</li> </ol> <ul> <li><p>ensure the <code>authorization.k8s.io/v1beta1</code> API group is enabled in the API server</p> </li> <li><p>start the kubelet with the <code>--authorization-mode=Webhook</code> and the <code>--kubeconfig</code> flags</p> </li> <li><p>the kubelet calls the <code>SubjectAccessReview</code> API on the configured API server to determine whether each request is authorized</p> </li> </ul> <ol start="3"> <li>Use the <code>--kubelet-certificate-authority</code> flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.</li> </ol> <p>More details can be found in the linked documentation.</p>
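<p>As a rough sketch only (the paths follow the KubeSpray layout visible in the question, and this assumes the kubelets' serving certificates were actually issued by that cluster CA; if they are self-signed, the handshake error will remain), the relevant pieces end up looking like this:</p> <pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml  (only the flags relevant here)
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.crt

# kubelet flags on every node (set via the kubelet config file or systemd unit)
--anonymous-auth=false
--client-ca-file=/etc/kubernetes/ssl/ca.crt
--authentication-token-webhook=true
--authorization-mode=Webhook
</code></pre>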
<p>I'm getting lots of errors on one of my K8s worker nodes saying &quot;http: TLS handshake error from some_ip:port: remote error: tls: bad certificate&quot;, but I'm not having any problems using any of my K8s containers. The problem is being logged in /var/log/syslog and seems to be specific to one particular K8s node.</p> <p>I assume I need to update a certificate, but I'm not sure if it's something in /etc/kubernetes/pki or /var/lib/kubelet/pki.</p> <p>I assume it's related to the cni0 interface, since that's the subnet that matches the source IPs in the errors.</p> <p>Does anybody know what it means, or better yet, how to fix it?</p> <p>Thanks in advance!</p>
<p>This is most likely coming from cert-manager. You can confirm it by looking at the logs of the cert-manager-webhook-* pod, which usually runs in the cert-manager namespace.</p>
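<p>For example, something along these lines should show the pod and the matching log entries (namespace and pod naming may differ between chart versions, so the <code>grep</code> is deliberately loose):</p> <pre><code># find the webhook pod
kubectl get pods -n cert-manager | grep webhook

# check its logs for the TLS handshake errors
POD=$(kubectl get pods -n cert-manager -o name | grep webhook)
kubectl logs -n cert-manager "$POD" | grep -i tls
</code></pre>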
<p><strong><a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#target-type" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#target-type</a></strong></p> <p>In the link above it is mentioned that "instance mode" will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. So how does kube-proxy make sure that a request is served only once when multiple replicas of the pod are running on different instances, and how does it make sure that requests are evenly served by all pods?</p>
<p>As per documentation:</p> <p>Amazon Elastic Load Balancing Application Load Balancer (ALB) is a popular AWS service that load balances incoming traffic at the application layer (layer 7) across multiple targets, such as Amazon EC2 instances. </p> <p>The AWS ALB Ingress controller is a controller that triggers the creation of an ALB and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource on the cluster. The Ingress resource uses the ALB to route HTTP[S] traffic to different endpoints within the cluster.</p> <blockquote> <ol> <li><p>With <strong>instance mode</strong>, ingress traffic starts at the ALB and <strong>reaches the NodePort opened for the service</strong>. Traffic is then routed to the container Pod within the cluster. Moreover, <strong>target-type: instance</strong> ("instance mode") is the <strong>default setting</strong> in the AWS ALB ingress controller, and the <strong>service must be of type "NodePort" or "LoadBalancer"</strong> to use this mode.</p></li> <li><p>Managing ALBs is automatic, and you only need to define your ingress resources as you would typically do. The ALB ingress controller Pod, which runs inside the Kubernetes cluster, communicates with the Kubernetes API and does all the work. However, this Pod is only a control plane; it doesn't do any proxying itself.</p></li> </ol> </blockquote> <p>Your <strong>Application Load Balancer</strong> periodically sends requests to its registered targets <strong>to test their status</strong>. These tests are called health checks. The alb-ingress-controller performs these health checks for target groups, and the individual health checks on target groups can be controlled using annotations.</p> <p>You can find more information about ALB ingress and NodePort <a href="https://akomljen.com/aws-alb-ingress-controller-for-kubernetes/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">here</a>.</p> <p>Hope this helps.</p>
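<p>For reference, here is a minimal sketch of what instance mode looks like in practice: a NodePort Service plus an Ingress that sets the target type and a health check path explicitly (names, ports and the health check path are illustrative, not taken from the question):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-app
              servicePort: 80
</code></pre>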
<p>I have kubectl job that is invalid. I am debugging it and I extracted it to yaml file and I can see this:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: creationTimestamp: 2020-03-19T21:40:11Z labels: app: vault-unseal-app job-name: vault-unseal-vault-unseal-1584654000 name: vault-unseal-vault-unseal-1584654000 namespace: infrastructure ownerReferences: - apiVersion: batch/v1beta1 blockOwnerDeletion: true controller: true kind: CronJob name: vault-unseal-vault-unseal uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a resourceVersion: &quot;163413544&quot; selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000 uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4 spec: backoffLimit: 0 completions: 1 parallelism: 1 selector: matchLabels: app: vault-unseal-app template: metadata: creationTimestamp: null labels: app: vault-unseal-app job-name: vault-unseal-vault-unseal-1584654000 spec: containers: - env: - name: VAULT_ADDR value: http://vault-vault:8200 - name: VAULT_SKIP_VERIFY value: &quot;1&quot; - name: VAULT_TOKEN valueFrom: secretKeyRef: key: vault_token name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_0 valueFrom: secretKeyRef: key: unseal_key_0 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_1 valueFrom: secretKeyRef: key: unseal_key_1 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_2 valueFrom: secretKeyRef: key: unseal_key_2 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_3 valueFrom: secretKeyRef: key: unseal_key_3 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_4 valueFrom: secretKeyRef: key: unseal_key_4 name: vault-unseal-vault-unseal image: blockloop/vault-unseal imagePullPolicy: Always name: vault-unseal resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst nodeSelector: nodePool: ci restartPolicy: OnFailure schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 5 status: conditions: - lastProbeTime: 2020-03-19T21:49:11Z lastTransitionTime: 2020-03-19T21:49:11Z message: Job has reached the specified backoff limit reason: BackoffLimitExceeded status: &quot;True&quot; type: Failed failed: 1 startTime: 2020-03-19T21:40:11Z </code></pre> <p>When I run <code>kubectl create -f my_file.yaml</code>, I am getting this error:</p> <pre><code>The Job &quot;vault-unseal-vault-unseal-1584654000&quot; is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{&quot;controller-uid&quot;:&quot;35262878-07bb-11eb-9b2c-0abca2a23428&quot;, &quot;app&quot;:&quot;vault-unseal-app&quot;}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` not auto-generated </code></pre> <p>Can someone suggest how to fix this?</p> <p>Update:</p> <p>After testing removal of <code>.spec.selector</code> I am getting error: <code>error: jobs.batch &quot;vault-unseal-vault-unseal-1584654000&quot; is invalid</code></p> <p>This is how my config looks without <code>.spec.selector</code>:</p> <pre><code># Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. 
# apiVersion: batch/v1 kind: Job metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;batch/v1&quot;,&quot;kind&quot;:&quot;Job&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;creationTimestamp&quot;:&quot;2020-03-19T21:40:11Z&quot;,&quot;labels&quot;:{&quot;controller-uid&quot;:&quot;35e63c20-6a2a-11ea-b577-069afd6d30d4&quot;,&quot;job-name&quot;:&quot;vault-unseal-vault-unseal-1584654000&quot;},&quot;name&quot;:&quot;vault-unseal-vault-unseal-1584654000&quot;,&quot;namespace&quot;:&quot;infrastructure&quot;,&quot;ownerReferences&quot;:[{&quot;apiVersion&quot;:&quot;batch/v1beta1&quot;,&quot;blockOwnerDeletion&quot;:true,&quot;controller&quot;:true,&quot;kind&quot;:&quot;CronJob&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;,&quot;uid&quot;:&quot;c9965fdb-4fbb-11e9-80d7-061cf1426d5a&quot;}],&quot;resourceVersion&quot;:&quot;163427805&quot;,&quot;selfLink&quot;:&quot;/apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000&quot;,&quot;uid&quot;:&quot;35e63c20-6a2a-11ea-b577-069afd6d30d4&quot;},&quot;spec&quot;:{&quot;backoffLimit&quot;:20,&quot;completions&quot;:1,&quot;parallelism&quot;:1,&quot;selector&quot;:{&quot;matchLabels&quot;:{&quot;controller-uid&quot;:&quot;35e63c20-6a2a-11ea-b577-069afd6d30d4&quot;}},&quot;template&quot;:{&quot;metadata&quot;:{&quot;creationTimestamp&quot;:null,&quot;labels&quot;:{&quot;controller-uid&quot;:&quot;35e63c20-6a2a-11ea-b577-069afd6d30d4&quot;,&quot;job-name&quot;:&quot;vault-unseal-vault-unseal-1584654000&quot;}},&quot;spec&quot;:{&quot;containers&quot;:[{&quot;env&quot;:[{&quot;name&quot;:&quot;VAULT_ADDR&quot;,&quot;value&quot;:&quot;http://vault-vault:8200&quot;},{&quot;name&quot;:&quot;VAULT_SKIP_VERIFY&quot;,&quot;value&quot;:&quot;1&quot;},{&quot;name&quot;:&quot;VAULT_TOKEN&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;vault_token&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}},{&quot;name&quot;:&quot;VAULT_UNSEAL_KEY_0&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;unseal_key_0&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}},{&quot;name&quot;:&quot;VAULT_UNSEAL_KEY_1&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;unseal_key_1&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}},{&quot;name&quot;:&quot;VAULT_UNSEAL_KEY_2&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;unseal_key_2&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}},{&quot;name&quot;:&quot;VAULT_UNSEAL_KEY_3&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;unseal_key_3&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}},{&quot;name&quot;:&quot;VAULT_UNSEAL_KEY_4&quot;,&quot;valueFrom&quot;:{&quot;secretKeyRef&quot;:{&quot;key&quot;:&quot;unseal_key_4&quot;,&quot;name&quot;:&quot;vault-unseal-vault-unseal&quot;}}}],&quot;image&quot;:&quot;blockloop/vault-unseal&quot;,&quot;imagePullPolicy&quot;:&quot;Always&quot;,&quot;name&quot;:&quot;vault-unseal&quot;,&quot;resources&quot;:{},&quot;terminationMessagePath&quot;:&quot;/dev/termination-log&quot;,&quot;terminationMessagePolicy&quot;:&quot;File&quot;}],&quot;dnsPolicy&quot;:&quot;ClusterFirst&quot;,&quot;nodeSelector&quot;:{&quot;nodePool&quot;:&quot;devs&quot;},&quot;restartPolicy&quot;:&quot;OnFailure&quot;,&quot;schedulerName&quot;:&quot;default-scheduler&quot;,&quot;securityContext&quot;:{},&quot;terminationGrac
ePeriodSeconds&quot;:5}}},&quot;status&quot;:{&quot;conditions&quot;:[{&quot;lastProbeTime&quot;:&quot;2020-03-19T21:49:11Z&quot;,&quot;lastTransitionTime&quot;:&quot;2020-03-19T21:49:11Z&quot;,&quot;message&quot;:&quot;Job has reached the specified backoff limit&quot;,&quot;reason&quot;:&quot;BackoffLimitExceeded&quot;,&quot;status&quot;:&quot;True&quot;,&quot;type&quot;:&quot;Failed&quot;}],&quot;failed&quot;:1,&quot;startTime&quot;:&quot;2020-03-19T21:40:11Z&quot;}} creationTimestamp: 2020-03-19T21:40:11Z labels: controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4 job-name: vault-unseal-vault-unseal-1584654000 name: vault-unseal-vault-unseal-1584654000 namespace: infrastructure ownerReferences: - apiVersion: batch/v1beta1 blockOwnerDeletion: true controller: true kind: CronJob name: vault-unseal-vault-unseal uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a resourceVersion: &quot;163442526&quot; selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000 uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4 spec: backoffLimit: 100 completions: 1 parallelism: 1 template: metadata: creationTimestamp: null labels: controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4 job-name: vault-unseal-vault-unseal-1584654000 spec: containers: - env: - name: VAULT_ADDR value: http://vault-vault:8200 - name: VAULT_SKIP_VERIFY value: &quot;1&quot; - name: VAULT_TOKEN valueFrom: secretKeyRef: key: vault_token name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_0 valueFrom: secretKeyRef: key: unseal_key_0 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_1 valueFrom: secretKeyRef: key: unseal_key_1 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_2 valueFrom: secretKeyRef: key: unseal_key_2 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_3 valueFrom: secretKeyRef: key: unseal_key_3 name: vault-unseal-vault-unseal - name: VAULT_UNSEAL_KEY_4 valueFrom: secretKeyRef: key: unseal_key_4 name: vault-unseal-vault-unseal image: blockloop/vault-unseal imagePullPolicy: Always name: vault-unseal resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst nodeSelector: nodePool: devs restartPolicy: OnFailure schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 5 status: conditions: - lastProbeTime: 2020-03-19T21:49:11Z lastTransitionTime: 2020-03-19T21:49:11Z message: Job has reached the specified backoff limit reason: BackoffLimitExceeded status: &quot;True&quot; type: Failed failed: 1 startTime: 2020-03-19T21:40:11Z </code></pre>
<p>It looks like you are not using the <code>selector</code> that the system generates for you automatically by default. Bear in mind that the recommended option when creating a job is NOT to fill in <code>selector</code>. It makes it more probable to create a duplicate labels+selectors. Therefore you should use the auto-generated ones, which ensure uniqueness and release you from the necessity of manual management.</p> <p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#specifying-your-own-pod-selector" rel="nofollow noreferrer">official docs</a> have this explained in more detail with an example. Please notice the below parts:</p> <blockquote> <p>Normally, when you create a Job object, you do not specify <code>.spec.selector</code>. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs.</p> </blockquote> <p>and:</p> <blockquote> <p>You need to specify <code>manualSelector: true</code> in the new Job since you are not using the selector that the system normally generates for you automatically.</p> </blockquote> <p>If you want to use manual selectors you need to set: <code>.spec.manualSelector: true</code> in the job's spec. This way the API server will not generate labels automatically and you will be able to set them yourself.</p> <p>EDIT:</p> <p>Remember that <code>spec.Completions</code> <code>spec.Selector</code> and <code>spec.Template</code> are immutable fields and are not allowed to be updated. In order to make changes there you need to create a new Job.</p> <p>The official docs regarding <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#writing-a-job-spec" rel="nofollow noreferrer">Writing a Job spec</a> will help you understand what should and what shouldn't be put into the Job spec. Notice that despite:</p> <blockquote> <p>In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-selector" rel="nofollow noreferrer">pod selector</a>) and an appropriate restart policy.</p> </blockquote> <p>it is advised that the pod selector / labels are not specified as I explained earlier in order to not create a duplicate labels+selectors.</p>
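<p>In practice that means either dropping <code>.spec.selector</code> and the <code>controller-uid</code> labels from the extracted manifest (together with the other server-generated metadata), or keeping them and adding <code>manualSelector: true</code>. A stripped-down sketch of the first option, reusing values from the manifest in the question (the new job name is illustrative, and the env/secret references are omitted for brevity):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: vault-unseal-manual-run     # a fresh name so it does not clash with the old Job
  namespace: infrastructure
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  # no .spec.selector and no controller-uid labels: the API server generates them
  template:
    metadata:
      labels:
        app: vault-unseal-app
    spec:
      restartPolicy: OnFailure
      containers:
        - name: vault-unseal
          image: blockloop/vault-unseal
          # env vars / secretKeyRefs from the original manifest go here unchanged
</code></pre>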
<p>Applying the virtual-service.yaml yields to a timeout (30s) with kubectl apply -f.</p> <p>On Monday did I reconfigure my virtual services at ISTIO-1.2.4 as usual. Today I run into timeouts applying the virtual-service. What I did</p> <pre><code>% kubectl -n istio-system apply -f virtual-service.yaml Error from server (Timeout): error when creating "virtual-service.yaml": Timeout: request did not complete within requested timeout 30s % kubectl -n istio-system delete pod istio-ingressgateay* % kubectl -n istio-system exec istio-ingressgateway* -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=debug % kubectl -n istio-system logs --follow istio-ingressgateway* -c istio-proxy % kubectl -n istio-system get gateways.networking.istio.io NAME AGE istio-gateway 3d19h % kubectl -n istio-system get virtualservices.networking.istio.io NAME GATEWAYS HOSTS AGE infrastructure-istio-k8s-ingress [istio-gateway] [${DOMAIN}] 3d19h % </code></pre> <p>And I still get 30s timeout message and no new virtual services and I do not anything in the log related to the virtual service.</p> <p>The configuration virtual-service.yaml looks like</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: default-istio-k8s-ingress namespace: istio-system spec: gateways: - istio-gateway-149-81-86-74 hosts: - "test.procom.energy" http: - match: - uri: regex: "/head.*s" rewrite: uri: "/headers" route: - destination: host: httpbin.default.svc.cluster.local port: number: 8000 weight: 100 - match: - uri: exact: /ip route: - destination: host: httpbin.default.svc.cluster.local port: number: 8000 weight: 100 - match: - uri: exact: /user-agent route: - destination: host: httpbin.default.svc.cluster.local port: number: 8000 weight: 100 </code></pre> <p>I would expect a new applied virtual service and I only see timeouts. In this case do I not have any possibilities to reconfigure virtual services or apply new ones.</p>
<p>Hej you. </p> <p>I fixed it. After a disappointingly deep look that turned up nothing, I reloaded all pods (in a for loop). After that I rebooted all our nodes and deleted all ReplicaSets in the istio-system namespace, and now I can reconfigure and apply my virtual services again. </p> <p>Best regards, Jan</p>
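<p>For anyone hitting the same wall, the restart described above amounts to something like this (an illustrative reconstruction, not an exact transcript of what was run):</p> <pre><code># delete every pod in istio-system and let the controllers recreate them
for pod in $(kubectl -n istio-system get pods -o name); do
  kubectl -n istio-system delete "$pod"
done

# after rebooting the nodes, clean up the old replicasets as well
kubectl -n istio-system delete replicasets --all
</code></pre>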
<p>Following Error while connecting springboot app service to Postgres using yml.</p> <p>NOTE: url: jdbc:postgresql://localhost/postgres</p> <pre><code>Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'userServiceImpl': Unsatisfied dependency expressed through field 'userRepository'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)#3f0846c6' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#3f0846c6': Cannot resolve reference to bean 'entityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.exception.JDBCConnectionException: Unable to open JDBC Connection for DDL execution at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:639) at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:116) at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:397) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1429) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1287) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1207) at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:636) ... 
62 common frames omitted Caused by: java.net.UnknownHostException: db at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403) at java.base/java.net.Socket.connect(Socket.java:591) at org.postgresql.core.PGStream.&lt;init&gt;(PGStream.java:75) at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ... 138 common frames omitted </code></pre> <p>Please suggest alternatives and ideas.</p>
<p>Well, you did not show your yaml file, which is the most important part. The error says there is a problem with the connection, so make sure your Postgres is up and running; you can test it from the console like this:</p> <pre><code>psql -h localhost -p port(default 5432) -U username -d database </code></pre> <pre><code>spring: datasource: url: jdbc:postgresql://localhost:5432/databaseName?createDatabaseIfNotExist=true&amp;autoReconnect=true&amp;useSSL=false username: yourUsername password: yourPassword jpa: hibernate: ddl-auto: update show-sql: true </code></pre> <p>This is an example of an application.yaml for connecting to a PostgreSQL running on your machine. I am using Docker for databases, so it works for that as well. Then make sure you have the PostgreSQL driver in your build file. If you use Maven, it looks like this:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.postgresql&lt;/groupId&gt; &lt;artifactId&gt;postgresql&lt;/artifactId&gt; &lt;scope&gt;runtime&lt;/scope&gt; &lt;/dependency&gt; </code></pre> <p>And don't forget JPA, which you probably have, as I can see from the error, but anyway:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-data-jpa&lt;/artifactId&gt; &lt;/dependency&gt; </code></pre> <p>The version is defined by the parent. Spring Boot does the configuration and connection for you, just by adding the dependencies and reading your application.yml file.</p>
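<p>If you do not have a local PostgreSQL installed, a throwaway Docker container is enough for this kind of connectivity test (the credentials and database name below are placeholders matching the example application.yaml):</p> <pre><code>docker run --name test-postgres \
  -e POSTGRES_USER=yourUsername \
  -e POSTGRES_PASSWORD=yourPassword \
  -e POSTGRES_DB=databaseName \
  -p 5432:5432 -d postgres:12

# then verify the host/port from the JDBC URL actually answers
psql -h localhost -p 5432 -U yourUsername -d databaseName
</code></pre>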
<p>I'm trying to deploy EJBCA PKI in proxy mode with an ingress nginx to terminate all the SSL sessions. I was able to successfully make it works for Public/Admin web access, EJBCA Web Service and SCEP.</p> <p>The last protocol I need to validate is EST and for which I need some help. First of all EST works if I remove nginx from the mix and terminate the SSL session directly on EJBCA so my EST RA and EJBCA configuration works. </p> <p>When nginx terminates the SSL session with the EST RA, it complains there is something wrong with the HTTP request and send back an HTTP 400 code status. My EJBCA server doesn't receive anything.</p> <p>Below is my ingress configuration for EST:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: pki-est annotations: kubernetes.io/ingress.class: "fanhe-ingress" spec: tls: - hosts: - nginx-ingress-controller.ingress-nginx secretName: nginx-ingress-tls-ec-secret rules: - host: nginx-ingress-controller.ingress-nginx http: paths: - path: /.well-known/est backend: serviceName: pki-app servicePort: 8082 </code></pre> <p>I enabled all the debugs on the ingress and below is what I see in error.log:</p> <pre><code>2020/05/18 10:06:52 [debug] 198#198: *15975 http process request line 2020/05/18 10:06:52 [debug] 198#198: *15975 http request line: "POST /.well-known/est/simpleenroll HTTP/1.1" 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'2F:/' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:1 in:'2E:.' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:2 in:'77:w' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'65:e' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6C:l' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6C:l' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'2D:-' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6B:k' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6E:n' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6F:o' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'77:w' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6E:n' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'2F:/' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:1 in:'65:e' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'73:s' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'74:t' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'2F:/' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:1 in:'73:s' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'69:i' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6D:m' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'70:p' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6C:l' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'65:e' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'65:e' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6E:n' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'72:r' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6F:o' 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6C:l' --- 2020/05/18 10:06:52 [debug] 198#198: *15975 s:0 in:'6C:l' 2020/05/18 10:06:52 [debug] 198#198: *15975 http uri: "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 http args: "" 2020/05/18 10:06:52 [debug] 198#198: *15975 http exten: "" 2020/05/18 10:06:52 [debug] 198#198: *15975 http process request header line 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: "User-Agent: libest 3.1.1" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: "Connection: close" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: 
"Host: nginx-ingress-controller.ingress-nginx:443" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: "Accept: */*" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: "Content-Type: application/pkcs10" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header: "Content-Length: 366" 2020/05/18 10:06:52 [debug] 198#198: *15975 http header done 2020/05/18 10:06:52 [info] 198#198: *15975 client SSL certificate verify error: (19:self signed certificate in certificate chain) while reading client request headers, client: fd10::1:165, server: nginx-ingress-controller.ingress-nginx, request: "POST /.well-known/est/simpleenroll HTTP/1.1", host: "nginx-ingress-controller.ingress-nginx:443" 2020/05/18 10:06:52 [debug] 198#198: *15975 http finalize request: 495, "/.well-known/est/simpleenroll?" a:1, c:1 2020/05/18 10:06:52 [debug] 198#198: *15975 event timer del: 3: 947664613 2020/05/18 10:06:52 [debug] 198#198: *15975 http special response: 495, "/.well-known/est/simpleenroll?" 2020/05/18 10:06:52 [debug] 198#198: *15975 http set discard body 2020/05/18 10:06:52 [debug] 198#198: *15975 headers more header filter, uri "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 lua header filter for user lua code, uri "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 lua capture header filter, uri "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 HTTP/1.1 400 Bad Request Server: openresty/1.15.8.1 Date: Mon, 18 May 2020 10:06:52 GMT Content-Type: text/html Content-Length: 221 Connection: close 2020/05/18 10:06:52 [debug] 198#198: *15975 write new buf t:1 f:0 0000558A2B30B250, pos 0000558A2B30B250, size: 158 file: 0, size: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 http write filter: l:0 f:0 s:158 2020/05/18 10:06:52 [debug] 198#198: *15975 http output filter "/.well-known/est/simpleenroll?" 2020/05/18 10:06:52 [debug] 198#198: *15975 http copy filter: "/.well-known/est/simpleenroll?" 2020/05/18 10:06:52 [debug] 198#198: *15975 lua body filter for user lua code, uri "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 lua capture body filter, uri "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 http postpone filter "/.well-known/est/simpleenroll?" 0000558A2B30B438 2020/05/18 10:06:52 [debug] 198#198: *15975 write old buf t:1 f:0 0000558A2B30B250, pos 0000558A2B30B250, size: 158 file: 0, size: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 write new buf t:0 f:0 0000000000000000, pos 0000558A2A15AEA0, size: 162 file: 0, size: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 write new buf t:0 f:0 0000000000000000, pos 0000558A2A15BE20, size: 59 file: 0, size: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 http write filter: l:1 f:0 s:379 2020/05/18 10:06:52 [debug] 198#198: *15975 http write filter limit 0 2020/05/18 10:06:52 [debug] 198#198: *15975 malloc: 0000558A2B318CC0:4096 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL buf copy: 158 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL buf copy: 162 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL buf copy: 59 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL to write: 379 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL_write: 379 2020/05/18 10:06:52 [debug] 198#198: *15975 http write filter 0000000000000000 2020/05/18 10:06:52 [debug] 198#198: *15975 http copy filter: 0 "/.well-known/est/simpleenroll?" 2020/05/18 10:06:52 [debug] 198#198: *15975 http finalize request: 0, "/.well-known/est/simpleenroll?" 
a:1, c:1 2020/05/18 10:06:52 [debug] 198#198: *15975 event timer add: 3: 5000:947609618 2020/05/18 10:06:52 [debug] 198#198: *15975 http lingering close handler 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL_read: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 SSL_get_error: 6 2020/05/18 10:06:52 [debug] 198#198: *15975 peer shutdown SSL cleanly 2020/05/18 10:06:52 [debug] 198#198: *15975 lingering read: 0 2020/05/18 10:06:52 [debug] 198#198: *15975 http request count:1 blk:0 2020/05/18 10:06:52 [debug] 198#198: *15975 http close request 2020/05/18 10:06:52 [debug] 198#198: *15975 lua log handler, uri:"/.well-known/est/simpleenroll" c:0 2020/05/18 10:06:52 [debug] 198#198: *15975 http log handler 2020/05/18 10:06:52 [debug] 198#198: *15975 http map started 2020/05/18 10:06:52 [debug] 198#198: *15975 http script var: "/.well-known/est/simpleenroll" 2020/05/18 10:06:52 [debug] 198#198: *15975 http map: "/.well-known/est/simpleenroll" "1" 2020/05/18 10:06:52 [debug] 198#198: *15975 http script var: "1" 2020/05/18 10:06:52 [debug] 198#198: *15975 http map started 2020/05/18 10:06:52 [debug] 198#198: *15975 http script var: "fd10::1:165" 2020/05/18 10:06:52 [debug] 198#198: *15975 http map: "" "fd10::1:165" 2020/05/18 10:06:52 [debug] 198#198: *15975 http map started 2020/05/18 10:06:52 [debug] 198#198: *15975 http script var: "39bef9f98c79778373515fb72f84e249" 2020/05/18 10:06:52 [debug] 198#198: *15975 http map: "" "39bef9f98c79778373515fb72f84e249" 2020/05/18 10:06:52 [debug] 198#198: *15975 free: 0000558A2B30A480, unused: 7 2020/05/18 10:06:52 [debug] 198#198: *15975 free: 0000558A2B279FC0, unused: 1974 2020/05/18 10:06:52 [debug] 198#198: *15975 close http connection: 3 </code></pre> <p>From those logs I don't see any obvious reason why nginx rejects the request.</p> <p>I did try the same HTTP request with Postman using the exact same certificate as the est-ra and it works !!. See logs below:</p> <pre><code>2020/05/18 22:52:02 [debug] 671#671: *172624 http process request line 2020/05/18 22:52:02 [debug] 671#671: *172624 http request line: "POST /.well-known/est/simpleenroll HTTP/1.1" 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'2F:/' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:1 in:'2E:.' 
2020/05/18 22:52:02 [debug] 671#671: *172624 s:2 in:'77:w' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'65:e' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6C:l' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6C:l' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'2D:-' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6B:k' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6E:n' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6F:o' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'77:w' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6E:n' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'2F:/' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:1 in:'65:e' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'73:s' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'74:t' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'2F:/' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:1 in:'73:s' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'69:i' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6D:m' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'70:p' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6C:l' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'65:e' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'65:e' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6E:n' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'72:r' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6F:o' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6C:l' 2020/05/18 22:52:02 [debug] 671#671: *172624 s:0 in:'6C:l' 2020/05/18 22:52:02 [debug] 671#671: *172624 http uri: "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 http args: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http exten: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http process request header line 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "User-Agent: libest 3.1.1" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Connection: close" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Host: nginx-ingress-controller.ingress-nginx:443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Accept: */*" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Content-Type: application/pkcs10" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Authorization: Basic cmEtYXBwOkZiV241M2p3" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header: "Content-Length: 280" 2020/05/18 22:52:02 [debug] 671#671: *172624 http header done 2020/05/18 22:52:02 [debug] 671#671: *172624 event timer del: 6: 993574054 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 0 2020/05/18 22:52:02 [debug] 671#671: *172624 rewrite phase: 1 2020/05/18 22:52:02 [debug] 671#671: *172624 rewrite phase: 2 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "-" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $proxy_upstream_name 2020/05/18 22:52:02 [debug] 671#671: *172624 test location: "/" 2020/05/18 22:52:02 [debug] 671#671: *172624 test location: "ejbca/ejbcaws" 2020/05/18 22:52:02 [debug] 671#671: *172624 test location: ".well-known/est" 2020/05/18 22:52:02 [debug] 671#671: *172624 using configuration "/.well-known/est" 2020/05/18 22:52:02 [debug] 671#671: *172624 http cl:280 max:0 2020/05/18 22:52:02 [debug] 671#671: *172624 rewrite phase: 4 2020/05/18 22:52:02 [debug] 671#671: *172624 rewrite phase: 5 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "default" 2020/05/18 22:52:02 
[debug] 671#671: *172624 http script set $namespace 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "pki-est" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $ingress_name 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "pki-app" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $service_name 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "{0 8082 }" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $service_port 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "/.well-known/est" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $location_path 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "https" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "https" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script equal 2020/05/18 22:52:02 [debug] 671#671: *172624 http script if 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "-1" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $balancer_ewma_score 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "default-pki-app-8082" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $proxy_upstream_name 2020/05/18 22:52:02 [debug] 671#671: *172624 http script complex value 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "default-pki-app-8082" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $proxy_host 2020/05/18 22:52:02 [debug] 671#671: *172624 http script complex value 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "https" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $pass_access_scheme 2020/05/18 22:52:02 [debug] 671#671: *172624 http script complex value 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $pass_server_port 2020/05/18 22:52:02 [debug] 671#671: *172624 http script complex value 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "nginx-ingress-controller.ingress-nginx:443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $best_http_host 2020/05/18 22:52:02 [debug] 671#671: *172624 http script complex value 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $pass_port 2020/05/18 22:52:02 [debug] 671#671: *172624 http script value: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script set $proxy_alternative_upstream_name 2020/05/18 22:52:02 [debug] 671#671: *172624 rewrite phase: 6 2020/05/18 22:52:02 [debug] 671#671: *172624 lua rewrite handler, uri:"/.well-known/est/simpleenroll" c:1 2020/05/18 22:52:02 [debug] 671#671: *172624 looking up Lua code cache with key '=rewrite_by_lua(nginx.conf:1274)nhli_dbdd52ba6d647a948759533fd68b064c' 2020/05/18 22:52:02 [debug] 671#671: *172624 lua creating new thread 2020/05/18 22:52:02 [debug] 671#671: *172624 lua reset ctx 2020/05/18 22:52:02 [debug] 671#671: *172624 http cleanup add: 0000558A2B3E0AB0 2020/05/18 22:52:02 [debug] 671#671: *172624 lua run thread, top:0 c:1 2020/05/18 22:52:02 [debug] 671#671: *172624 add cleanup: 0000558A2B3348A0 2020/05/18 22:52:02 [debug] 671#671: *172624 lua resume returned 0 2020/05/18 22:52:02 [debug] 671#671: *172624 lua light thread ended normally 2020/05/18 22:52:02 [debug] 671#671: *172624 lua deleting light thread 2020/05/18 22:52:02 [debug] 671#671: *172624 post rewrite 
phase: 7 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 8 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 9 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 10 2020/05/18 22:52:02 [debug] 671#671: *172624 access phase: 11 2020/05/18 22:52:02 [debug] 671#671: *172624 access phase: 12 2020/05/18 22:52:02 [debug] 671#671: *172624 access phase: 13 2020/05/18 22:52:02 [debug] 671#671: *172624 access phase: 14 2020/05/18 22:52:02 [debug] 671#671: *172624 post access phase: 15 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 16 2020/05/18 22:52:02 [debug] 671#671: *172624 generic phase: 17 2020/05/18 22:52:02 [debug] 671#671: *172624 http client request body preread 280 2020/05/18 22:52:02 [debug] 671#671: *172624 http request body content length filter 2020/05/18 22:52:02 [debug] 671#671: *172624 http body new buf t:1 f:0 0000558A2B42FB5D, pos 0000558A2B42FB5D, size: 280 file: 0, size: 0 2020/05/18 22:52:02 [debug] 671#671: *172624 http init upstream, client timer: 0 2020/05/18 22:52:02 [debug] 671#671: *172624 epoll add event: fd:6 op:3 ev:80002005 2020/05/18 22:52:02 [debug] 671#671: *172624 http map started 2020/05/18 22:52:02 [debug] 671#671: *172624 http map: "" "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http map started 2020/05/18 22:52:02 [debug] 671#671: *172624 posix_memalign: 0000558A2B2F54F0:4096 @16 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "80e3ae7c2495fcdc7ebe9b658dd579bc" 2020/05/18 22:52:02 [debug] 671#671: *172624 http map: "" "80e3ae7c2495fcdc7ebe9b658dd579bc" 2020/05/18 22:52:02 [debug] 671#671: *172624 http map started 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "fdff::a3d:fafb" 2020/05/18 22:52:02 [debug] 671#671: *172624 http map: "" "fdff::a3d:fafb" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "Host" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "nginx-ingress-controller.ingress-nginx:443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "ssl-client-verify" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "NONE" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Request-ID" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "80e3ae7c2495fcdc7ebe9b658dd579bc" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Real-IP" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "fdff::a3d:fafb" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Forwarded-For" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "fdff::a3d:fafb" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Forwarded-Host" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "nginx-ingress-controller.ingress-nginx:443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Forwarded-Port" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "443" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Forwarded-Proto" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "https" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Original-URI" 2020/05/18 22:52:02 [debug] 671#671: *172624 http 
script var: "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "X-Scheme" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "https" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "Content-Length" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script var: "280" 2020/05/18 22:52:02 [debug] 671#671: *172624 http script copy: "" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "User-Agent: libest 3.1.1" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Accept: */*" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Content-Type: application/pkcs10" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Authorization: Basic cmEtYXBwOkZiV241M2p3" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "POST /.well-known/est/simpleenroll HTTP/1.1 Host: nginx-ingress-controller.ingress-nginx:443 ssl-client-verify: NONE X-Request-ID: 80e3ae7c2495fcdc7ebe9b658dd579bc X-Real-IP: fdff::a3d:fafb X-Forwarded-For: fdff::a3d:fafb X-Forwarded-Host: nginx-ingress-controller.ingress-nginx:443 X-Forwarded-Port: 443 X-Forwarded-Proto: https X-Original-URI: /.well-known/est/simpleenroll X-Scheme: https Content-Length: 280 User-Agent: libest 3.1.1 Accept: */* Content-Type: application/pkcs10 Authorization: Basic cmEtYXBwOkZiV241M2p3 " 2020/05/18 22:52:02 [debug] 671#671: *172624 http cleanup add: 0000558A2B2F58A8 2020/05/18 22:52:02 [debug] 671#671: *172624 init keepalive peer 2020/05/18 22:52:02 [debug] 671#671: *172624 get keepalive peer 2020/05/18 22:52:02 [debug] 671#671: *172624 lua balancer peer, tries: 1 2020/05/18 22:52:02 [debug] 671#671: *172624 lua reset ctx 2020/05/18 22:52:02 [debug] 671#671: *172624 looking up Lua code cache with key 'balancer_by_luanhli_0f29762dfd828b8baa4d895affbc4b90' 2020/05/18 22:52:02 [debug] 671#671: *172624 stream socket 10 2020/05/18 22:52:02 [debug] 671#671: *172624 epoll add connection: fd:10 ev:80002005 2020/05/18 22:52:02 [debug] 671#671: *172624 connect to [fd10::1:169]:8082, fd:10 #172625 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream connect: -2 2020/05/18 22:52:02 [debug] 671#671: *172624 posix_memalign: 0000558A2B34D0A0:128 @16 2020/05/18 22:52:02 [debug] 671#671: *172624 event timer add: 10: 5000:993519304 2020/05/18 22:52:02 [debug] 671#671: *172624 http finalize request: -4, "/.well-known/est/simpleenroll?" a:1, c:2 2020/05/18 22:52:02 [debug] 671#671: *172624 http request count:2 blk:0 2020/05/18 22:52:02 [debug] 671#671: *172624 http run request: "/.well-known/est/simpleenroll?" 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream check client, write event:1, "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream request: "/.well-known/est/simpleenroll?" 
2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream send request handler 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream send request 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream send request body 2020/05/18 22:52:02 [debug] 671#671: *172624 chain writer buf fl:0 s:542 2020/05/18 22:52:02 [debug] 671#671: *172624 chain writer buf fl:1 s:280 2020/05/18 22:52:02 [debug] 671#671: *172624 chain writer in: 0000558A2B2F59F8 2020/05/18 22:52:02 [debug] 671#671: *172624 writev: 822 of 822 2020/05/18 22:52:02 [debug] 671#671: *172624 chain writer out: 0000000000000000 2020/05/18 22:52:02 [debug] 671#671: *172624 event timer del: 10: 993519304 2020/05/18 22:52:02 [debug] 671#671: *172624 event timer add: 10: 60000:993574308 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream request: "/.well-known/est/simpleenroll?" 2020/05/18 22:52:02 [debug] 671#671: *172624 http upstream process header 2020/05/18 22:52:02 [debug] 671#671: *172624 malloc: 0000558A2B33E000:4096 2020/05/18 22:52:02 [debug] 671#671: *172624 recv: eof:0, avail:1 2020/05/18 22:52:02 [debug] 671#671: *172624 recv: fd:10 911 of 4096 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy status 200 "200 OK" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Connection: keep-alive" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Content-Transfer-Encoding: base64" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Content-Type: application/pkcs7-mime; smime-type=certs-only" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Content-Length: 714" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header: "Date: Mon, 18 May 2020 22:52:02 GMT" 2020/05/18 22:52:02 [debug] 671#671: *172624 http proxy header done 2020/05/18 22:52:02 [debug] 671#671: *172624 headers more header filter, uri "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 lua header filter for user lua code, uri "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 looking up Lua code cache with key 'header_filter_by_luanhli_537482850bfc85b842f10d9c3d0521aa' 2020/05/18 22:52:02 [debug] 671#671: *172624 lua capture header filter, uri "/.well-known/est/simpleenroll" 2020/05/18 22:52:02 [debug] 671#671: *172624 HTTP/1.1 200 OK Server: openresty/1.15.8.1 Date: Mon, 18 May 2020 22:52:02 GMT Content-Type: application/pkcs7-mime; smime-type=certs-only Content-Length: 714 Connection: close Content-Transfer-Encoding: base64 Strict-Transport-Security: max-age=15724800; includeSubDomains </code></pre> <p>Any suggestion would be greatly appreciated !!.</p> <p>Thanks,</p>
<p>OK, I was finally able to figure it out. @Dirbaio, you were right, the issue was indeed related to the certificate. I realized my Postman config was not correct, and once I fixed it I was able to reproduce the problem and started seeing HTTP 400 Bad Request - SSL Certificate errors.</p> <p>I narrowed it down to the following ingress configuration:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: pki-ws annotations: nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional" nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true" nginx.ingress.kubernetes.io/auth-tls-secret: "default/managementca" nginx.ingress.kubernetes.io/backend-protocol: "HTTP" nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header SSL_CLIENT_CERT $ssl_client_cert; kubernetes.io/ingress.class: "fanhe-ingress" spec: tls: - hosts: - nginx-ingress-controller.ingress-nginx secretName: nginx-ingress-tls rules: - host: nginx-ingress-controller.ingress-nginx http: paths: - path: /ejbca/ejbcaws backend: serviceName: pki-app servicePort: 8082 --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: pki-est annotations: kubernetes.io/ingress.class: "fanhe-ingress" spec: tls: - hosts: - nginx-ingress-controller.ingress-nginx secretName: nginx-ingress-tls rules: - host: nginx-ingress-controller.ingress-nginx http: paths: - path: /.well-known/est backend: serviceName: pki-app servicePort: 8082 </code></pre> <p>Because both share the same host, I guess both configs were merged in nginx, so all the annotations applied to both paths. My EST client's issuing CA is different from "default/managementca" and nginx was not able to authenticate it correctly.</p> <p>I fixed it by using two different hosts and one cert with multiple SANs for NGINX.</p> <p>Thanks for trying to help. It was appreciated.</p>
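<p>For completeness, the split roughly boils down to giving the EST ingress its own hostname, so nginx no longer merges its server block with the one carrying the <code>auth-tls-*</code> annotations (the hostname below is illustrative, and the TLS certificate must contain SANs for both hostnames):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pki-est
  annotations:
    kubernetes.io/ingress.class: "fanhe-ingress"
spec:
  tls:
    - hosts:
        - pki-est.example.com          # different host than the WS ingress
      secretName: nginx-ingress-tls    # cert with SANs covering both hostnames
  rules:
    - host: pki-est.example.com
      http:
        paths:
          - path: /.well-known/est
            backend:
              serviceName: pki-app
              servicePort: 8082
</code></pre>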
<p>I am learning k8s and I have a little problem understanding PV.</p> <p>For example, I want to deploy PostgreSQL, which stores its data in <strong>/var/lib/postgresql/data</strong>. I want to use my local disk for the PV, so I created one and set its path to <strong>/mnt/ssd/Kubernetes/Postgres</strong>.</p> <p>I do not understand how the PV and PVC will store my data: I created pod1 with PostgreSQL, created a new DB and killed pod1. A new pod2 still has the database I created a few seconds earlier on pod1, but on my local disk in <strong>/mnt/ssd/Kubernetes/Postgres</strong> I do not have any files, so:</p> <ol> <li>How does the new pod2 know about the created database? How does the PV store the data of my database?</li> <li>Why does the PV need my disk if it doesn't hold any data on it?</li> </ol>
<p>To get a strong understanding of how volumes, persistent volumes and claims work in Kubernetes I strongly suggest going through the official documentation regarding:</p> <ol> <li><a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Volumes</a>:</li> </ol> <blockquote> <p>On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes <code>Volume</code> abstraction solves both of these problems.</p> </blockquote> <ol start="2"> <li><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a>:</li> </ol> <blockquote> <p>A <code>PersistentVolume</code> (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like <code>Volumes</code>, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.</p> </blockquote> <ol start="3"> <li>A PersistentVolumeClaim (PVC):</li> </ol> <blockquote> <p>is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted <code>ReadWriteOnce</code>, <code>ReadOnlyMany</code> or <code>ReadWriteMany</code>, see <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">AccessModes</a>).</p> </blockquote> <p>After getting a solid theoretical grip of how they work, you can see the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">detailed walkthrough with working examples</a>.</p> <p>Also, your PostgreSQL is an example of a <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">stateful</a> application. <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">Here</a> you can find a tutorial showing you how to deploy a database and application with Persistent Volumes.</p>
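<p>To make that concrete for the setup described in the question, a minimal hostPath-based sketch could look like this (hostPath only makes sense on a single-node or test cluster, and the storage class name and size are illustrative):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/ssd/Kubernetes/Postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>The PostgreSQL pod then references <code>postgres-pvc</code> under <code>.spec.volumes</code> and mounts it at <code>/var/lib/postgresql/data</code>. Once the claim is bound to this PV, the database files should show up under <code>/mnt/ssd/Kubernetes/Postgres</code> on the node running the pod; if they do not, the claim has most likely been bound to a different, dynamically provisioned volume, or the pod is running on another node.</p>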
<p>I've tried to change the default proxy_timeout (600s) to 3600s for TCP services in the k8s-maintained nginx-ingress, but it's not working.</p> <p>I have exec'd into the nginx-controller pod and got the following in nginx.conf:</p> <pre><code># TCP services server { preread_by_lua_block { ngx.var.proxy_upstream_name=&quot;tcp-test-test-db-test-lb-dev-7687&quot;; } listen 7687; proxy_timeout 600s; proxy_pass upstream_balancer; } </code></pre> <p>I have used the following ConfigMap and found it's not working. I'm still getting the 600s timeout.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx annotations: ingress.kubernetes.io/proxyTimeout: 3600s ingress.kubernetes.io/proxy-stream-timeout: 3600s ingress.kubernetes.io/proxy-connect-timeout: 3600s ingress.kubernetes.io/proxy-read-timeout: 3600s ingress.kubernetes.io/proxy-send-timeout: 3600s ingress.kubernetes.io/proxy_timeout: 3600s data: 7687: &quot;test-test-db/test-lb-dev:7687&quot; 8687: &quot;test-test-db/test-lb-test:8687&quot; </code></pre> <p>Can anyone please help me sort this issue out?</p>
<p>When using these annotations you have to set them to number (integer) values. For example:</p> <pre><code> ingress.kubernetes.io/proxy-stream-timeout: &quot;3600&quot; </code></pre> <p>instead of:</p> <pre><code>ingress.kubernetes.io/proxy-stream-timeout: 3600s </code></pre> <p>If you need more details regarding the timeout options then please check the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="nofollow noreferrer">official docs</a>.</p>
<p>Is it possible to aggregate 2 gauge metrics (i.e. <code>kube_pod_labels</code> and <code>kube_pod_container_resource_requests_cpu_cores</code>) so that by executing the query both resulting elements would be combined (i.e. all pod labels in the resulting element and request_cpu_cores as the value)?</p> <p>The query for the CPU request value looks like this: <code>sum by (namespace, pod) (kube_pod_container_resource_requests_cpu_cores{cluster=&quot;my-cluster&quot;})</code></p> <p>Actual result:</p> <pre><code>{namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;} 0.05 </code></pre> <p>The query for pod labels is <code>kube_pod_labels{label_foo=&quot;bar&quot;}</code></p> <p>Actual result:</p> <pre><code>kube_pod_labels{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;,label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 1 </code></pre> <hr /> <p>I have tried using a left join but it seems that grouping by a given label (pod, namespace, etc.) is required, as explained in <a href="https://www.robustperception.io/left-joins-in-promql" rel="nofollow noreferrer">https://www.robustperception.io/left-joins-in-promql</a>.</p> <p>With the <code>multiplication</code> operator <code>*</code> it is possible to obtain the desired result set, but the set would only contain labels specified in the <code>by</code> clause. Example query:</p> <pre><code>group by (namespace,pod) (kube_pod_labels{label_foo=&quot;bar&quot;,cluster=&quot;my-cluster&quot;}) * sum by (namespace, pod) (kube_pod_container_resource_requests_cpu_cores{cluster=&quot;my-cluster&quot;}) </code></pre> <p>Example result:</p> <pre><code>{namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;} 0.05 </code></pre> <hr /> <p>What I am trying to obtain is a resulting set containing all labels without having to filter by an arbitrary label/value.</p> <p>The <strong>desired</strong> result of joining the 2 queries should be:</p> <pre><code>{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;, label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 0.05 </code></pre>
<p>This can be achieved with a combination of the following:</p> <ul> <li><p><code>label_replace</code> query function: For each timeseries in v, label_replace(v instant-vector, dst_label string, replacement string, src_label string, regex string) matches the regular expression regex against the value of the label src_label. If it matches, the value of the label dst_label in the returned timeseries will be the expansion of replacement, together with the original labels in the input. Capturing groups in the regular expression can be referenced with $1, $2, etc. If the regular expression doesn't match then the timeseries is returned unchanged. <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#label_replace" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/querying/functions/#label_replace</a></p> </li> <li><p><code>multiplication *</code> operator and <code>group_left()</code> modifier: Many-to-one and one-to-many matchings refer to the case where each vector element on the &quot;one&quot;-side can match with multiple elements on the &quot;many&quot;-side. This has to be explicitly requested using the group_left or group_right modifier, where left/right determines which vector has the higher cardinality. <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/querying/operators/</a></p> </li> </ul> <p>Example query:</p> <pre><code>label_replace(kube_pod_labels{},&quot;label&quot;,&quot;$1&quot;,&quot;label_&quot;, &quot;(.+)&quot;) * on (cluster,namespace, pod) group_left() (sum by (cluster,namespace, pod) (kube_pod_container_resource_requests_cpu_cores{})) </code></pre> <p>Note that: <code>If the regular expression doesn't match then the timeseries is returned unchanged</code>. In this case there is no source label called <code>label_</code>, so the regular expression does not match - hence the full set of labels is returned unchanged.</p> <p>Example result:</p> <pre><code>{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;, label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 0.05 </code></pre> <hr /> <p>Felipe provided a valuable hint on how to achieve this result in a comment on the original question.</p>
<p>I am following <a href="https://www.youtube.com/watch?v=bIdMveCe75c" rel="nofollow noreferrer">this</a> tutorial. I'm trying to create a Jenkins X app locally in <code>minikube</code> and set it up with GitHub.</p> <p>But when I do <code>jx create quickstart</code> and follow the steps I get the error <strong><code>error: secrets "jenkins" not found</code></strong>.</p> <p>Also, I found out that there is no secret named <code>jenkins</code>:</p> <pre><code>root@Unix:/home/dadart/Downloads# kubectl get secret -n jx jenkins Error from server (NotFound): secrets "jenkins" not found </code></pre> <p>Could someone please point out what I'm doing wrong?</p>
<p>Please follow this GitHub post about setting up the &quot;env settings&quot; <a href="https://github.com/jenkins-x/jx/issues/1554" rel="nofollow noreferrer">before installation</a>.</p> <p>You can also find <a href="https://jenkins-x.io/faq/issues/#how-do-i-get-the-password-and-username-for-jenkins" rel="nofollow noreferrer">&quot;How do I get the Password and Username for Jenkins?&quot;</a> in the &quot;Common problems&quot; section.</p> <p>As per the documentation - it seems you missed some part during the installation:</p> <blockquote> <p><strong>What happens during installation</strong></p> <p>Jenkins X generates an administration password for Monocular/Nexus/Jenkins and save it in secrets. It then retrieves git secrets for the helm install (so they can be used in the pipelines).</p> </blockquote> <p>This &quot;jenkins image&quot; <a href="https://github.com/jenkins-x/jx/issues/3047" rel="nofollow noreferrer">issue</a> can also be helpful.</p> <p>In case you still notice more problems with the Jenkins installation, please open an issue <a href="https://github.com/jenkins-x/jx" rel="nofollow noreferrer">here</a>.</p> <p>Please share your findings.</p>
<p>I've set up the Kubernetes Horizontal Pod Autoscaler with custom metrics using the prometheus adapter <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">https://github.com/DirectXMan12/k8s-prometheus-adapter</a>. Prometheus is monitoring rabbitmq, and I'm watching the <strong>rabbitmq_queue_messages</strong> metric. The messages from the queue are picked up by the pods, which then do some processing that can last for several hours.</p> <p>Scale-up and scale-down are working based on the number of messages in the queue.</p> <p><strong>The problem:</strong> When a pod finishes the processing and acks the message, that will lower the number of messages in the queue, and that would trigger the Autoscaler to terminate a pod. If I have multiple pods doing the processing and one of them finishes, if I'm not mistaken, Kubernetes could terminate a pod that is still doing the processing of its own message. This wouldn't be desirable as all the processing that the pod is doing would be lost.</p> <p>Is there a way to overcome this, or another way this could be achieved?</p> <p>Here is the Autoscaler configuration:</p> <pre><code>kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2beta1 metadata: name: sample-app-rabbitmq namespace: monitoring spec: scaleTargetRef: # you created above apiVersion: apps/v1 kind: Deployment name: sample-app minReplicas: 1 maxReplicas: 10 metrics: - type: Object object: target: kind: Service name: rabbitmq-cluster metricName: rabbitmq_queue_messages_ready targetValue: 5 </code></pre>
<p>You could consider an approach using a <strong>preStop hook</strong>.</p> <p>As per the documentation <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="nofollow noreferrer">Container States</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">Define postStart and preStop handlers</a>:</p> <blockquote> <p>Before a container enters into Terminated, preStop hook (if any) is executed.</p> </blockquote> <p>So you can use this in your deployment:</p> <pre><code>lifecycle: preStop: exec: command: ["your script"] </code></pre> <p><code>###</code> <strong>update</strong>:</p> <ol> <li><p>After some research I would like to provide more information. There is an interesting <a href="https://github.com/kedacore/keda" rel="nofollow noreferrer">project</a>: </p> <blockquote> <p>KEDA allows for fine grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition. KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.</p> </blockquote></li> <li><p>For the main question "Kubernetes could terminate a pod that is still doing the processing of its own message". </p> <p>As per documentation:</p> <blockquote> <p>"Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features"</p> </blockquote></li> </ol> <p>A Deployment is backed by a ReplicaSet. As per the controller code there exists a function "<a href="https://github.com/kubernetes/kubernetes/blob/f794c824b1e6e68b302d94f42f60af4759e18c6d/pkg/controller/replicaset/replica_set.go#L684:6" rel="nofollow noreferrer">getPodsToDelete</a>". In combination with "<em>filteredPods</em>" it gives the result: "<strong>This ensures that we delete pods in the earlier stages whenever possible.</strong>"</p> <p>So as a proof of concept:</p> <p>You can create a deployment with an <strong>init container</strong>. The init container should check if there is a message in the queue and exit when at least one message appears. This will allow the main container to start, take and process that message. In this case we will have <strong>two kinds of pods</strong> - those which <strong>process the message</strong> and consume CPU and those that are in the <strong>starting state</strong>, idle and waiting for the next message. In this case the <strong>starting pods will be deleted first</strong> when the HPA decides to decrease the number of replicas in the deployment. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: complete name: complete spec: replicas: 5 revisionHistoryLimit: 10 selector: matchLabels: app: complete template: metadata: creationTimestamp: null labels: app: complete spec: hostname: c1 containers: - name: complete command: - "bash" args: - "-c" - "wa=$(shuf -i 15-30 -n 1)&amp;&amp; echo $wa &amp;&amp; sleep $wa" image: ubuntu imagePullPolicy: IfNotPresent resources: {} initContainers: - name: wait-for image: ubuntu command: ['bash', '-c', 'sleep 30'] dnsPolicy: ClusterFirst restartPolicy: Always terminationGracePeriodSeconds: 30 </code></pre> <p>Hope this helps.</p>
<p>Based on the <a href="https://medium.com/@gregoire.waymel/istio-cert-manager-lets-encrypt-demystified-c1cbed011d67" rel="nofollow noreferrer">guide</a> </p> <p>I'm using GKE 1.13.6-gke.6 + Istio 1.1.3-gke.0 installed from cluster addon.</p> <p>Follow the same steps to install cert_manager and created Issuer and Certificate I need:</p> <p><strong><em>ISSUER</em></strong></p> <pre><code>$ kubectl describe issuer letsencrypt-prod -n istio-system Name: letsencrypt-prod Namespace: istio-system Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-prod","namespace":"istio-system"},"spec":{... API Version: certmanager.k8s.io/v1alpha1 Kind: Issuer Metadata: Creation Timestamp: 2019-06-14T03:11:17Z Generation: 2 Resource Version: 10044939 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/issuers/letsencrypt-prod UID: 131f1cdd-8e52-11e9-9ba7-42010a9801a6 Spec: Acme: Email: [email protected] Http 01: Private Key Secret Ref: Name: prod-issuer-account-key Server: https://acme-v02.api.letsencrypt.org/directory Status: Acme: Uri: https://acme-v02.api.letsencrypt.org/acme/acct/59211199 Conditions: Last Transition Time: 2019-06-14T03:11:18Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: &lt;none&gt; </code></pre> <p><strong><em>CERTIFICATE</em></strong></p> <pre><code>$ kubectl describe certificate dreamy-plum-bee-certificate -n istio-system Name: dreamy-plum-bee-certificate Namespace: istio-system Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"dreamy-plum-bee-certificate","namespace":"istio-s... 
API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Creation Timestamp: 2019-06-14T03:24:43Z Generation: 3 Resource Version: 10048432 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/certificates/dreamy-plum-bee-certificate UID: f3ed9f15-8e53-11e9-9ba7-42010a9801a6 Spec: Acme: Config: Domains: dreamy-plum-bee.somewhere.net Http 01: Ingress Class: istio Common Name: dreamy-plum-bee.somewhere.net Dns Names: dreamy-plum-bee.somewhere.net Issuer Ref: Name: letsencrypt-prod Secret Name: dreamy-plum-bee-certificate Status: Conditions: Last Transition Time: 2019-06-14T03:25:12Z Message: Certificate is up to date and has not expired Reason: Ready Status: True Type: Ready Not After: 2019-09-12T02:25:10Z Events: &lt;none&gt; </code></pre> <p><strong><em>GATEWAY</em></strong></p> <pre><code>$ kubectl describe gateway dreamy-plum-bee-gtw -n istio-system Name: dreamy-plum-bee-gtw Namespace: istio-system Labels: k8s-app=istio Annotations: &lt;none&gt; API Version: networking.istio.io/v1alpha3 Kind: Gateway Metadata: Creation Timestamp: 2019-06-14T06:08:13Z Generation: 1 Resource Version: 10084555 Self Link: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/dreamy-plum-bee-gtw UID: cabffdf1-8e6a-11e9-9ba7-42010a9801a6 Spec: Selector: Istio: ingressgateway Servers: Hosts: dreamy-plum-bee.somewhere.net Port: Name: https Number: 443 Protocol: HTTPS Tls: Credential Name: dreamy-plum-bee-certificate Mode: SIMPLE Private Key: sds Server Certificate: sds Events: &lt;none&gt; $ kubectl get gateway dreamy-plum-bee-gtw -n istio-system -o yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: creationTimestamp: 2019-06-14T06:08:13Z generation: 1 labels: k8s-app: istio name: dreamy-plum-bee-gtw namespace: istio-system resourceVersion: "10084555" selfLink: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/dreamy-plum-bee-gtw uid: cabffdf1-8e6a-11e9-9ba7-42010a9801a6 spec: selector: istio: ingressgateway servers: - hosts: - dreamy-plum-bee.somewhere.net port: name: https number: 443 protocol: HTTPS tls: credentialName: dreamy-plum-bee-certificate mode: SIMPLE privateKey: sds serverCertificate: sds </code></pre> <p>Now with the current setup, if I test with openssl command:</p> <pre><code>$ $ openssl s_client -connect dreamy-plum-bee.somewhere.net:443 CONNECTED(00000005) write:errno=54 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: Start Time: 1560492782 Timeout : 7200 (sec) Verify return code: 0 (ok) --- </code></pre> <p>In Chrome browser, it fails to visit the page with ERR_CONNECTION_RESET error message.</p> <p>However, if I change Gateway's tls setting with self-signed filesystem based certificate like:</p> <pre><code> tls: mode: PASSTHROUGH serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key </code></pre> <p>The site is reachable. Hence, I'm suspecting something is not right with credentialName setting. The Gateway doesn't seem to be able to pick up Certificate resource to initiate the connection.</p> <p>Any advice would be appreciated like things to check/debug etc... </p>
<p>Eventually I figured it out, and <a href="https://www.youtube.com/watch?v=QlQyqCaTOh0" rel="nofollow noreferrer">Envoy SDS: Fortifying Istio Security - Yonggang Liu &amp; Quanjie Lin, Google</a> was very helpful.</p> <ul> <li>Installed Istio from scratch (v1.1.8) instead of using the addon (v1.1.3).</li> <li>Make sure <code>--set gateways.istio-ingressgateway.sds.enabled=true</code> is used during the installation (see the sketch below).</li> <li>Enable <code>istio-injection=enabled</code> on the namespace so the Envoy proxy sidecar is created.</li> <li>Increase the node capacity to host Istio properly. <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing" rel="nofollow noreferrer">Google suggests</a> that at least a 4-node cluster with the 2 vCPU machine type is required.</li> <li>Finally, I removed the manual TLS certificate from the NodeApp I was deploying, as Istio handles TLS (mTLS was not enabled yet).</li> </ul>
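<p>For reference, the installation step mentioned above looked roughly like the snippet below. This is only an outline — the chart path and the remaining values come from the Istio 1.1 release archive and may differ for your setup:</p> <pre><code># from the extracted istio-1.1.x release directory (after installing the istio-init CRDs)
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  --set gateways.istio-ingressgateway.sds.enabled=true \
  &gt; istio.yaml
kubectl apply -f istio.yaml

# enable sidecar injection for the application namespace
kubectl label namespace default istio-injection=enabled
</code></pre>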
<p>I have a pod spec which runs a command like <code>rm -rf /some/path</code>.</p> <p>I create the pod using <code>kubectl apply -f ...</code></p> <p>Now I want to wait till the pod completes. I can see that the pod is done; <code>kubectl get pod/&lt;mypod&gt;</code> shows <code>STATUS Completed</code>.</p> <p>How do I wait for this condition?</p> <p>I have looked at <code>kubectl wait ...</code> but that doesn't seem to help me.</p> <p><code>kubectl wait --for=condition=complete pod/&lt;my-pod&gt;</code> seems to just block. I haven't deleted the pod, it is still there in the Completed status.</p>
<p>The command that you use: <code>kubectl wait --for=condition=complete pod/&lt;my-pod&gt;</code> will not work because a pod doesn't have such a condition. <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="noreferrer">Pod Conditions</a> are as follows:</p> <ul> <li><p><code>PodScheduled</code>: the Pod has been scheduled to a node.</p> </li> <li><p><code>ContainersReady</code>: all containers in the Pod are ready.</p> </li> <li><p><code>Initialized</code>: all init containers have started successfully.</p> </li> <li><p><code>Ready</code>: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.</p> </li> </ul> <p>The <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="noreferrer">phase</a> for a successfully completed pod is called <code>succeeded</code>:</p> <blockquote> <p>All containers in the Pod have terminated in success, and will not be restarted.</p> </blockquote> <p>It would be better, however, if you used <code>kubectl wait</code> for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noreferrer">Jobs</a> instead of bare Pods and then executed <code>kubectl wait --for=condition=complete job/myjob</code> (a minimal sketch follows below).</p>
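<p>As a minimal sketch of that approach — the Job name is made up here and the command is just the one from the question:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: cleanup
        image: busybox
        command: ["rm", "-rf", "/some/path"]
</code></pre> <p>and then:</p> <pre><code>kubectl apply -f cleanup-job.yaml
kubectl wait --for=condition=complete --timeout=120s job/cleanup
</code></pre> <p><code>kubectl wait</code> returns as soon as the Job reports the <code>Complete</code> condition (or exits with an error if the timeout is reached first).</p>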
<p>I want to run a docker container which uses GPU (it runs a cnn to detect objects on a video), and then run that container on Kubernetes.</p> <p>I can run the container from docker alone without problems, but when I try to run the container from Kubernetes it fails to find the GPU.</p> <p>I run it using this command:</p> <pre><code>kubectl exec -it namepod /bin/bash </code></pre> <p>This is the problem that I get:</p> <pre><code>kubectl exec -it tym-python-5bb7fcf76b-4c9z6 /bin/bash kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. root@tym-python-5bb7fcf76b-4c9z6:/opt# cd servicio/ root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio# python3 TM_Servicev2.py Try to load cfg: /opt/darknet/cfg/yolov4.cfg, weights: /opt/yolov4.weights, clear = 0 CUDA status Error: file: ./src/dark_cuda.c : () : line: 620 : build time: Jul 30 2021 - 14:05:34 CUDA Error: no CUDA-capable device is detected python3: check_error: Unknown error -1979678822 root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio# </code></pre> <p><strong>EDIT.</strong> I followed all the steps on the Nvidia docker 2 guide and downloaded the Nvidia plugin for Kubernetes.</p> <p>however when I deploy Kubernetes it stays as &quot;pending&quot; and never actually starts. I don't get an error anymore, but it never starts. The pod appears like this:</p> <pre><code>gpu-pod 0/1 Pending 0 3m19s </code></pre> <p><strong>EDIT 2.</strong></p> <p>I ended up reinstalling everything and now my pod appears completed but not running. like this.</p> <pre><code>default gpu-operator-test 0/1 Completed 0 62m </code></pre> <p>Answering Wiktor. when I run this command:</p> <pre><code>kubectl describe pod gpu-operator-test </code></pre> <p>I get:</p> <pre><code>Name: gpu-operator-test Namespace: default Priority: 0 Node: pdi-mc/192.168.0.15 Start Time: Mon, 09 Aug 2021 12:09:51 -0500 Labels: &lt;none&gt; Annotations: cni.projectcalico.org/containerID: 968e49d27fb3d86ed7e70769953279271b675177e188d52d45d7c4926bcdfbb2 cni.projectcalico.org/podIP: cni.projectcalico.org/podIPs: Status: Succeeded IP: 192.168.10.81 IPs: IP: 192.168.10.81 Containers: cuda-vector-add: Container ID: docker://d49545fad730b2ec3ea81a45a85a2fef323edc82e29339cd3603f122abde9cef Image: nvidia/samples:vectoradd-cuda10.2 Image ID: docker-pullable://nvidia/samples@sha256:4593078cdb8e786d35566faa2b84da1123acea42f0d4099e84e2af0448724af1 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 09 Aug 2021 12:10:29 -0500 Finished: Mon, 09 Aug 2021 12:10:30 -0500 Ready: False Restart Count: 0 Limits: nvidia.com/gpu: 1 Requests: nvidia.com/gpu: 1 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ktgq (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-9ktgq: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: &lt;none&gt; </code></pre> <p>I'm using this configuration file to create the pod</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: gpu-operator-test spec: restartPolicy: OnFailure containers: - name: cuda-vector-add image: 
&quot;nvidia/samples:vectoradd-cuda10.2&quot; resources: limits: nvidia.com/gpu: 1 </code></pre>
<p>Addressing two topics here:</p> <ol> <li>The error you saw at the beginning:</li> </ol> <hr /> <pre><code>kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. </code></pre> <p>This means that you tried to use a deprecated version of the <code>kubectl exec</code> command. The proper syntax is:</p> <pre><code>$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] </code></pre> <p>See <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">here</a> for more details.</p> <ol start="2"> <li>According to the <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html" rel="nofollow noreferrer">official docs</a> the <code>gpu-operator-test</code> pod should run to completion: <a href="https://i.stack.imgur.com/FrQXC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FrQXC.png" alt="enter image description here" /></a></li> </ol> <p>You can see that the pod's status is <code>Succeeded</code> and also:</p> <hr /> <pre><code> State: Terminated Reason: Completed Exit Code: 0 </code></pre> <p><code>Exit Code: 0</code> means that the specified container command completed successfully.</p> <p>More details can be found in the <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html" rel="nofollow noreferrer">official docs</a>.</p>
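<p>For completeness, the non-deprecated form of the command from the question and a way to inspect the sample's output would be the following (if the CUDA sample really ran on the GPU, its log should end with a line similar to <code>Test PASSED</code>):</p> <pre><code># exec with the explicit "--" separator
kubectl exec -it tym-python-5bb7fcf76b-4c9z6 -- /bin/bash

# inspect the output of the completed test pod
kubectl logs gpu-operator-test
</code></pre>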
<p>I have created a Java based web service which utilizes SparkJava. By default this web service binds and listens to port 4567. My company requested this be placed in a Docker container. I created a Dockerfile and created the image, and when I run I expose port 4567...</p> <pre><code>docker run -d -p 4567:4567 -t myservice </code></pre> <p>I can invoke my web service for testing my calling a CURL command...</p> <pre><code>curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://localhost:4567/myservice" </code></pre> <p>... and this is working. My company then says it wants to put this in Amazon EKS Kubernetes so I publish my Docker image to the company's private Dockerhub. I create three yaml files...</p> <ul> <li>deployment.yaml</li> <li>service.yaml</li> <li>ingress.yaml</li> </ul> <p>I see my objects are created and I can get a /bin/bash command line to my container running in Kubernetes and from there test localhost access to my service is working correctly including references to external web service resources, so I know my service is good.</p> <p>I am confused by the ingress. I need to expose a URI to get to my service and I am not sure how this is supposed to work. Many examples show using NGINX, but I am not using NGINX.</p> <p>Here are my files and what I have tested so far. Any guidance is appreciated.</p> <h2>service.yaml</h2> <pre><code>kind: Service apiVersion: v1 metadata: name: my-api-service spec: selector: app: my-api ports: - name: main protocol: TCP port: 4567 targetPort: 4567 </code></pre> <h2>deployment.yaml</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-api-deployment spec: replicas: 1 template: metadata: labels: app: my-api spec: containers: - name: my-api-container image: hub.mycompany.net/myproject/my-api-service ports: - containerPort: 4567 </code></pre> <h2>ingress.yaml</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-api-ingress spec: backend: serviceName: my-api-service servicePort: 4567 </code></pre> <p>when I run the command ...</p> <pre><code>kubectl get ingress my-api-ingress </code></pre> <p>... shows ...</p> <pre><code>NAME HOSTS ADDRESS PORTS AGE my-api-ingress * 80 9s </code></pre> <p>when I run the command ...</p> <pre><code>kubectl get service my-api-service </code></pre> <p>... shows ...</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-api-service ClusterIP 172.20.247.225 &lt;none&gt; 4567/TCP 16h </code></pre> <p>When I run the following command...</p> <pre><code>kubectl cluster-info </code></pre> <p>... I see ...</p> <pre><code>Kubernetes master is running at https://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com </code></pre> <p>As such I try to hit the end point using CURL by issuing...</p> <pre><code>curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com:4567/myservice" </code></pre> <p>After some time I receive a time-out error...</p> <pre><code>curl: (7) Failed to connect to 12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com port 4567: Operation timed out </code></pre> <p>I believe my ingress is at fault but I am having difficulties finding non-NGINX examples to compare. </p> <p>Thoughts?</p>
<p>barrypicker.</p> <p>Your service should be of "type: NodePort". This example is very similar (however, it was tested in GKE).</p> <pre><code>kind: Service apiVersion: v1 metadata: name: my-api-service spec: selector: app: my-api ports: - name: main protocol: TCP port: 4567 targetPort: 4567 type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-api-deployment spec: replicas: 1 selector: matchLabels: app: my-api template: metadata: labels: app: my-api spec: containers: - name: my-api-container image: hashicorp/http-echo:0.2.1 args: ["-listen=:4567", "-text=Hello api"] ports: - containerPort: 4567 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-api-ingress spec: backend: serviceName: my-api-service servicePort: 4567 </code></pre> <p>In your ingress (<code>kubectl get ingress &lt;your ingress&gt;</code>) you should see an external IP address. </p> <p>You can find the AWS-specific implementation <a href="https://aws.amazon.com/blogs/opensource/kubernetes-ingress-aws-alb-ingress-controller/" rel="nofollow noreferrer">here</a>. In addition, more information about exposing services can be found <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#alternatives" rel="nofollow noreferrer">here</a>.</p>
<h1>Desired Behavior</h1> <ol> <li>Write an OPA policy which checks if the image name contains the default latest tag. The following is my .rego file:</li> </ol> <pre><code>package kubernetes.admission import data.kubernetes.namespaces deny[msg] { input.request.kind.kind == &quot;Pod&quot; input.request.operation == &quot;CREATE&quot; container := input.request.object.spec.containers[_] [image_name, image_tag] := split(container.image, &quot;:&quot;) image_tag == &quot;latest&quot; msg := sprintf(&quot;Invalid image tag&quot;) } </code></pre> <ol start="2"> <li>Load the policy by creating a configmap. I used the following command:</li> </ol> <pre><code>kubectl create configmap registry-whitelist --from-file image-checker.rego </code></pre> <blockquote> <p>The default namespace in my current context is <code>opa</code>.</p> </blockquote> <ol start="3"> <li>After that, I should be able to exercise the policy by creating a pod with the latest tag, and it should be rejected.</li> </ol> <h2>Actual Behavior</h2> <p>Pods with the latest tag are created successfully and they are not rejected.</p> <h2>Steps to Reproduce the Problem</h2> <p>I followed these tips: <a href="https://www.openpolicyagent.org/docs/latest/kubernetes-debugging/" rel="nofollow noreferrer">https://www.openpolicyagent.org/docs/latest/kubernetes-debugging/</a>.</p> <p>So, it's expected that the created configmap <code>registry-whitelist</code> has <code>openpolicyagent.org/policy-status</code> as an annotation; however, it has <code>&lt;none&gt;</code> as the value. I have also checked the logs of the <code>kube-mgmt</code> container, however they didn't help me. The only interesting log I get is when I try to delete the configmap <code>registry-whitelist</code>; then I can see the following log:</p> <p><strong>level=error msg=&quot;Failed to delete policy opa/registry-whitelist/image-checker.rego: code resource_not_found: storage_not_found_error: policy id &quot;opa/registry-whitelist/image-checker.rego&quot;&quot;</strong></p>
<p>Problem solved: actually my policy, which is written in Rego, had some errors; this is why kube-mgmt took some time before adding the annotations to the newly created configmap. After a while I found an annotation saying that my policy had errors.</p>
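<p>For anyone hitting the same thing: one detail that stands out in the policy from the question (and that kube-mgmt would report as a compile error) is the <code>sprintf</code> call — in Rego, <code>sprintf</code> takes a format string plus an array of values, so a single-argument call is invalid. A corrected message line would look something like:</p> <pre><code>msg := sprintf("invalid image tag for %v", [container.image])
</code></pre>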
<p>I am using this query to calculate the memory utilization of a node:</p> <pre class="lang-sh prettyprint-override"><code> (1 - node_memory_MemAvailable_bytes{cluster=&quot;$cluster&quot;}/node_memory_MemTotal_bytes{cluster=&quot;$cluster&quot;}) * 100 </code></pre> <p>I see values around 25%,</p> <p>but when I run <code>kubectl top node</code> for the same node I see a value around 16%.</p>
<p>The difference between the <code>kubectl top node</code> value and Prometheus's node exporter value comes from the way this data is collected and calculated.</p> <p>When you execute <code>kubectl top node</code> Kubernetes reads values from the root cgroup. Specifically from <code>/sys/fs/cgroup/memory/memory.usage_in_bytes</code> and <code>/sys/fs/cgroup/memory/memory.stat</code>. The total memory usage is being calculated as: <code>memory.usage_in_bytes</code> - <code>total_inactive_file</code>.</p> <p>However, Prometheus's node exporter reads values from <code>/proc/meminfo</code> and then calculates it according to your query, which seems correct.</p> <p>This difference is also being discussed in this (still open) <a href="https://github.com/google/cadvisor/issues/2042" rel="nofollow noreferrer">issue</a>.</p> <p>One way of dealing with it would be to simply stick to one method of measurement and reporting.</p>
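<p>If you want to verify this yourself, on a node that still uses cgroup v1 the two numbers can be reproduced roughly like this from a shell on the node (paths differ under cgroup v2, so treat it as a sketch):</p> <pre><code># what "kubectl top node" is roughly based on (working set)
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
inactive=$(grep ^total_inactive_file /sys/fs/cgroup/memory/memory.stat | awk '{print $2}')
echo "working set bytes: $((usage - inactive))"

# what the node exporter query is based on
grep -E 'MemTotal|MemAvailable' /proc/meminfo
</code></pre>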
<p>I want to run pods with <code>runsc</code> as the default on my k8s nodes, but <code>kube-proxy</code> and other system pods can't be run via <code>runsc</code>. So, I want to automate the process in a way that after Kubernetes starts, every new pod will be run via <code>runsc</code>.</p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/" rel="nofollow noreferrer">official documentation</a>, this can be done with the steps below:</p> <ul> <li><p><a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#1-configure-the-cri-implementation-on-nodes" rel="nofollow noreferrer">Configure the CRI implementation on nodes</a></p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#2-create-the-corresponding-runtimeclass-resources" rel="nofollow noreferrer">Create the corresponding RuntimeClass resources</a></p> </li> <li><p>Specify a <code>runtimeClassName</code> in the Pod spec.</p> </li> </ul> <p>A step-by-step guide alongside all necessary details can be found in the linked docs, and a minimal sketch of the last two steps is shown below.</p> <p>Notice that:</p> <ul> <li><p>like other low-level resources such as <code>nodes</code> and <code>persistentVolumes</code>, which are not in any namespace, <code>RuntimeClass</code> is a non-namespaced resource</p> </li> <li><p><code>RuntimeClass</code> assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#scheduling" rel="nofollow noreferrer">Scheduling</a> in the docs.</p> </li> </ul>
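<p>As a minimal sketch of the last two steps, assuming the CRI on the nodes is already configured with a gVisor handler named <code>runsc</code> (the handler name must match your containerd/CRI-O configuration; on clusters older than 1.20 use <code>node.k8s.io/v1beta1</code>):</p> <pre><code>apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc        # must match the handler name configured in the CRI
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
</code></pre> <p>The RuntimeClass API itself has no cluster-wide "default" flag, so pods opt in via <code>runtimeClassName</code>, while system pods such as <code>kube-proxy</code> that don't set it keep using the node's default runtime. If you want new pods to pick this up automatically rather than one by one, that typically has to happen outside the RuntimeClass API, e.g. by setting <code>runtimeClassName</code> in your Pod templates or injecting it with a mutating admission webhook.</p>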
<p>I'm using Rancher's nginx ingress controller, <code>rancher/nginx-ingress-controller:0.21.0-rancher3</code>, which should be based on <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a> AFAIK.</p> <p>My udp-services is configured as:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: udp-services namespace: ingress-nginx data: 69: "default/tftp:69" 8881: "default/test:8881" </code></pre> <p>Running <code>nc -l -u -p 8881</code> on <code>default/test</code> can communicate with an out-of-cluster client just fine. That should mean that the udp proxying works, at least for some cases. However tftp requests to <code>default/tftp</code> timeout consistently.</p> <p>Roughly, a TFTP read should work as <a href="https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol" rel="nofollow noreferrer">below</a>:</p> <ol> <li>Client port A => Server port 69 (request)</li> <li>Server port B => Client port A (send data, and note it's a <em>new</em> port B)</li> <li>Client port A => Server port B (acknowledgement)</li> </ol> <p><code>tcpdump</code> running on the tftp server shows the communication is like:</p> <ol> <li>Host port A => Server port 69 (request a file)</li> <li>Server port B => Host port A (sending data back to port A)</li> <li>Host => Server, ICMP port unreachable (but port A is unreachable)</li> </ol> <p>At the same time the ingress logs something like:</p> <pre><code>TIMESTAMP [error] ... upstream timed out (110: Connection timed out) while proxying connection, udp client: ::1, server: [::]:69, upstream: "...:69", bytes from/to client:..., bytes from/to upstream:... </code></pre> <p>TFTP requests from another in-cluster container work just fine. This should mean that the TFTP server itself is not the direct source of problem. And the issue is how the ingress controller handles the requests.</p> <p>I found <a href="https://manpages.ubuntu.com/manpages/precise/man8/tftpd.8.html" rel="nofollow noreferrer">tftpd</a> has a <code>--port-range</code> argument which can pin which ports tftpd can use to respond. I tried to pin it to port 8881 (<code>--port-range 8881:8881</code>), but the requests are still being dropped.</p> <p>My guess is that the ingress does not redirect the packet back to the client since the reply is not from port 69, but port B.</p> <p>Did anyone succeed to expose a TFTP service within a Kubernetes cluster?</p>
<p>It is not a 100% solution but I found a workaround for the exact same thing. The problem is that tftp creates a new outbound UDP connection that isn't known in the host's state table. Thus the host treats it like an outgoing request rather than a reply. I will also note that TFTP client apps handle this fine, but PXE drivers (at least Intel ones) do not.</p> <p>If you are using Calico as your CNI, you can disable "natOutgoing" on the IPPool. If you need NAT, you can create a second IPPool without NAT.</p> <p><a href="https://docs.projectcalico.org/networking/assign-ip-addresses-topology#features" rel="nofollow noreferrer">https://docs.projectcalico.org/networking/assign-ip-addresses-topology#features</a></p> <p>I disabled it for the default pool with <code>calicoctl get ippool -oyaml | sed 's/natOutgoing: true/natOutgoing: false/g' | calicoctl apply -f -</code></p> <p>I am sure other CNI plugins have similar workarounds.</p>
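<p>If you go the second-pool route instead of patching the default one, the manifest is roughly like the one below — the name and CIDR are only examples and the block has to fit your cluster's pod network, with workloads steered to it as described in the Calico docs linked above:</p> <pre><code>apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: no-nat-pool
spec:
  cidr: 10.245.0.0/16   # example block, adjust to your cluster
  natOutgoing: false
  ipipMode: Always
</code></pre> <p>Applied with <code>calicoctl apply -f pool.yaml</code>.</p>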
<p>I have 10 microservices on Kubernetes with Helm 3 charts, and I saw that all of them have a similar standard structure: deployment, service, hpa, network policies, etc. Basically the <code>&lt;helm_chart_name&gt;/templates</code> directory is 99% the same in all of them, with some <code>if</code> statements at the top of each file deciding whether we want to deploy that resource,</p> <pre class="lang-sh prettyprint-override"><code>{{ if .Values.hpa.create }} apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: {{ .Values.deployment.name }} ... spec: scaleTargetRef: ... {{ end }} </code></pre> <p>and in values we pass yes/no depending on whether we want it. Is there some tool to easily create a template for these Helm charts? That is, to create a Helm chart with these 5 manifests pre-populated with the references to values as above?</p>
<p>What you need is the <a href="https://helm.sh/docs/topics/library_charts/" rel="noreferrer">Library Charts</a>:</p> <blockquote> <p>A library chart is a type of Helm chart that defines chart primitives or definitions which can be shared by Helm templates in other charts. This allows users to share snippets of code that can be re-used across charts, avoiding repetition and keeping charts DRY.</p> </blockquote> <p>You can find more details and examples in the linked documentation.</p>
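<p>As a rough sketch of how the HPA template from the question could be shared this way — the chart name <code>common</code>, the helper name <code>common.hpa</code> and the values keys are placeholders for whatever you choose in your own library chart:</p> <pre><code># common/Chart.yaml (the library chart)
apiVersion: v2
name: common
version: 0.1.0
type: library
---
# common/templates/_hpa.tpl
{{- define "common.hpa" -}}
{{- if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.deployment.name }}
  minReplicas: {{ .Values.hpa.minReplicas }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
{{- end }}
{{- end -}}
---
# each microservice's Chart.yaml pulls the library in as a dependency
dependencies:
- name: common
  version: 0.1.0
  repository: file://../common
---
# each microservice's templates/hpa.yaml is then a one-liner
{{ include "common.hpa" . }}
</code></pre> <p>After a <code>helm dependency update</code> in the application chart, every microservice keeps only the thin <code>include</code> files plus its own values instead of a full copy of the templates.</p>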
<p>As part of rolling updates version 1 pod is rolled up with version 2 pod.</p> <p>We need to review the logs of shutdown process of service in the pod (version one).</p> <hr /> <ol> <li><p>Does rolling update delete the version one pod?</p> </li> <li><p>If yes, can we review the logs of deleted pod (version one)? To verify the shutdown process of service in version one pod...</p> </li> </ol>
<blockquote> <ol> <li>Does rolling update delete the version one pod?</li> </ol> </blockquote> <p>The short answer is: Yes.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update Deployment</a>:</p> <blockquote> <p>The Deployment updates Pods in a rolling update fashion when <code>.spec.strategy.type==RollingUpdate</code>. You can specify <code>maxUnavailable</code> and <code>maxSurge</code> to control the rolling update process.</p> </blockquote> <p>See the examples below:</p> <pre><code>spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 </code></pre> <p>In this example there would be one additional Pod (<code>maxSurge: 1</code>) above the desired number of 2, and the number of available Pods cannot go lower than that number (<code>maxUnavailable: 0</code>).</p> <p>Choosing this config, the Kubernetes will spin up an additional Pod, then stop an “old” one. If there’s another Node available to deploy this Pod, the system will be able to handle the same workload during deployment. If not, the Pod will be deployed on an already used Node at the cost of resources from other Pods hosted on the same Node.</p> <p>You can also try something like this:</p> <pre><code>spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 0 maxUnavailable: 1 </code></pre> <p>With the example above there would be no additional Pods (<code>maxSurge: 0</code>) and only a single Pod at a time will be unavailable (<code>maxUnavailable: 1</code>).</p> <p>In this case, Kubernetes will first stop a Pod before starting up a new one. The advantage of that is that the infrastructure doesn’t need to scale up but the maximum workload will be lower.</p> <hr /> <blockquote> <ol start="2"> <li>if yes, can we review the logs of deleted pod(version one)? To verify the shutdown process of service in version one pod...</li> </ol> </blockquote> <p>See the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">Debug Running Pods</a> docs. 
You can find several useful ways of checking logs/events such as:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods" rel="nofollow noreferrer">Debugging Pods</a> by executing <code>kubectl describe pods ${POD_NAME}</code> and checking the reason behind it's failure.</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examining pod logs</a>: with <code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}</code> or <code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}</code></p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#container-exec" rel="nofollow noreferrer">Debugging with container exec</a>: by running commands inside a specific container with <code>kubectl exec</code></p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">Debugging with an ephemeral debug container</a>: Ephemeral containers are useful for interactive troubleshooting when <code>kubectl exec</code> is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with <a href="https://github.com/GoogleContainerTools/distroless" rel="nofollow noreferrer">distroless images</a>.</p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#node-shell-session" rel="nofollow noreferrer">Debugging via a shell on the node</a>: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.</p> </li> </ul> <hr /> <p>However, <code>--previous</code> flag works only if the previous container instance still exists in a Pod. Check out <a href="https://stackoverflow.com/a/57009702/11560878">this answer</a> for further options.</p> <p>Also, see this topic: <a href="https://stackoverflow.com/questions/40636021/how-to-list-kubernetes-recently-deleted-pods">How to list Kubernetes recently deleted pods?</a></p>
<p>I am using: <em>minikube version: v1.0.0</em> I now need to create Ingress resource:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress spec: backend: serviceName: testsvc servicePort: 80 </code></pre> <p>and then I run <code>kubectl apply -f ./ingress.yaml</code></p> <p>Error happened:</p> <blockquote> <p>error: SchemaError(io.k8s.api.core.v1.CinderVolumeSource): invalid object doesn't have additional properties</p> </blockquote> <p>My kubectl version is:</p> <p>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}</p> <p>After upgrading kubectl version to v1.14.0, I can create ingress with no problem. But now, the issue is ingress is NOT redirecting to pod:</p> <p>this is my ingress.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dv spec: rules: - host: ui.dv.com http: paths: - path: / backend: serviceName: ngsc servicePort: 3000 </code></pre> <p>this is my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ngsc spec: type: NodePort selector: app: ngsc ports: - port: 3000 nodePort: 30080 name: http targetPort: 3000 </code></pre> <p>And this is my deployment:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: ngsc spec: replicas: 2 template: metadata: name: ngsc labels: app: ngsc spec: containers: - image: myimage name: ngsc imagePullPolicy: IfNotPresent </code></pre> <p>I ave already added ui.dv.com into /etc/hosts, after I start all, and using curl <a href="http://ui.dv.com" rel="nofollow noreferrer">http://ui.dv.com</a>, there is no response</p> <p>I checked the nginx log:</p> <p>Error obtaining Endpoints for Service "default/ngsc": no object matching key "default/ngsc" in local store</p> <p>for all pods, </p> <pre><code>default api-server-84dd8bcfc8-2hvlh 1/1 Running 26 3h23m default api-server-84dd8bcfc8-s697x 1/1 Running 28 3h23m default api-server-84dd8bcfc8-vq4vn 1/1 Running 26 3h23m default ngsc-559cbf57df-bcjb7 1/1 Running 3 3h27m default ngsc-559cbf57df-j5v68 1/1 Running 2 3h27m kube-system coredns-fb8b8dccf-ghj4l 1/1 Running 42 36h kube-system coredns-fb8b8dccf-rwhw5 1/1 Running 41 36h kube-system default-http-backend-6864bbb7db-p8fld 1/1 Running 47 36h kube-system etcd-minikube 1/1 Running 3 36h kube-system kube-addon-manager-minikube 1/1 Running 4 36h kube-system kube-apiserver-minikube 1/1 Running 27 36h kube-system kube-controller-manager-minikube 0/1 Error 4 11m kube-system kube-proxy-skn58 1/1 Running 2 12h kube-system kube-scheduler-minikube 0/1 CrashLoopBackOff 40 36h kube-system nginx-ingress-controller-f5744c676-j5r25 1/1 Running 47 3h16m kube-system storage-provisioner 1/1 Running 7 36h </code></pre> <p>here, the ingress controller is running</p> <p>now running:</p> <p>kubectl describe pods -n kube-system nginx-ingress-controller-f5744c676-j5r25</p> <p>I have this:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 41m (x98 over 3h9m) kubelet, minikube Liveness probe failed: Get http://172.17.0.7:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting 
headers) Warning Unhealthy 25m (x208 over 3h10m) kubelet, minikube Readiness probe failed: Get http://172.17.0.7:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 5m46s (x4 over 12m) kubelet, minikube Readiness probe failed: Get http://172.17.0.6:10254/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) Warning BackOff 35s (x448 over 3h3m) kubelet, minikube Back-off restarting failed container </code></pre> <p>describe ingress:</p> <pre><code>Namespace: default Address: Default backend: default-http-backend:80 () Rules: Host Path Backends ---- ---- -------- * /ui ngsc:3000 (172.17.0.10:3000,172.17.0.7:3000) * /api api-server:8083 (172.17.0.5:8083,172.17.0.9:8083) Annotations: </code></pre> <p>kubectl get ing NAME HOSTS ADDRESS PORTS AGE datavisor * 10.0.2.15 80 3h12m</p> <p>Finally:</p> <p>curl <a href="http://10.0.2.15/ui" rel="nofollow noreferrer">http://10.0.2.15/ui</a></p> <p>hanging, and stopped</p> <p>Anything wrong here?</p>
<ol> <li><p>Please check the community comments:<br/> <a href="https://stackoverflow.com/questions/55417410/kubernetes-create-deployment-unexpected-schemaerror">Kubernetes create deployment unexpected SchemaError</a><br/> Could you please verify your "kubectl" and minikube versions?<br/> Did you have any errors during installation?<br/> Could you please check the logs and events?<br/> Please also try to create another deployment to see if there are other errors.</p></li> <li><p>For troubleshooting purposes please use:<br/> <pre><code> kubectl get pods kubectl get events kubectl logs "your_pod" </code></pre><br/> Please share your findings.</p></li> </ol>
<p>I'm trying to serve an application using NestJS but I'm not being able to do so.</p> <p>I've already configured Traefik IngressRoutes to serve both Traefik Dashboard and also ArgoCD (and a couple more test apps), but I've been trying to deploy this new application for almost 2 days, without success.</p> <p>The error is the following:</p> <pre><code>Bad Gateway </code></pre> <p>And this is the log Traefik outputs upon a request:</p> <pre><code>[traefik-c88c9f869-b8cm8] 10.0.1.122 - - [11/Dec/2020:03:13:20 +0000] &quot;GET /graphql HTTP/2.0&quot; 502 11 &quot;-&quot; &quot;-&quot; 764 &quot;develop-business-app-64fa6977f85a45bb4625@kubernetescrd&quot; &quot;http://10.0.3.86:8080&quot; 1ms </code></pre> <p>I don't know if there is any custom configuration I need to do in my app to use HTTP/2.0 or handle Traefik SSL (since the entry point is websecure). I've followed the docs over and over but I always get the same error (I've already tried to remove and installed Traefik again entirely)</p> <p>Also, if I run <code>kubectl port-forward</code> I can use the application as expected.</p> <p>Here are my configuration files:</p> <p>This is my Traefik deployment:</p> <pre><code>--- kind: Deployment apiVersion: apps/v1 metadata: name: traefik labels: app.kubernetes.io/name: traefik-proxy app.kubernetes.io/version: 1.0.0 app.kubernetes.io/component: infrastructure app.kubernetes.io/part-of: traefik spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: traefik-proxy template: metadata: labels: app.kubernetes.io/name: traefik-proxy app.kubernetes.io/version: 1.0.0 app.kubernetes.io/component: infrastructure app.kubernetes.io/part-of: traefik spec: serviceAccountName: traefik-ingress-controller volumes: - name: acme-certificates emptyDir: {} containers: - name: traefik image: traefik:v2.3 args: - --accesslog - --providers.kubernetescrd - --ping - --api.dashboard - --entrypoints.traefik.address=:8080 - --entrypoints.web.address=:80 - --entrypoints.websecure.address=:443 - --entrypoints.web.http.redirections.entrypoint.to=websecure - --entrypoints.websecure.http.tls.certResolver=letsencrypt - --certificatesresolvers.letsencrypt.acme.email=accounts+letsencrypt@getbud.co - --certificatesresolvers.letsencrypt.acme.storage=/etc/acme/letsencrypt.json - --certificatesResolvers.letsencrypt.acme.dnsChallenge.provider=route53 - --certificatesResolvers.letsencrypt.acme.dnsChallenge.delayBeforeCheck=0 volumeMounts: - name: acme-certificates mountPath: /etc/acme ports: - containerPort: 8080 name: admin protocol: TCP - containerPort: 80 name: web protocol: TCP - containerPort: 443 name: websecure protocol: TCP livenessProbe: failureThreshold: 3 httpGet: path: /ping port: 8080 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 readinessProbe: failureThreshold: 1 httpGet: path: /ping port: 8080 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 </code></pre> <p>This is my application deployment:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: business-app labels: app.kubernetes.io/name: business-app app.kubernetes.io/version: 1.0.0 app.kubernetes.io/component: business app.kubernetes.io/part-of: application-layer spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: business-app template: metadata: labels: app.kubernetes.io/name: business-app app.kubernetes.io/version: 1.0.0 app.kubernetes.io/component: business app.kubernetes.io/part-of: application-layer spec: containers: - name: 
business-app image: 904333181156.dkr.ecr.sa-east-1.amazonaws.com/business:$ECR_TAG &lt;- this is updated with the latest tag using envsubst ports: - containerPort: 8080 name: web protocol: TCP </code></pre> <p>This is my application service:</p> <pre><code>--- kind: Service apiVersion: v1 metadata: name: business-app spec: selector: app.kubernetes.io/name: business-app ports: - name: web port: 80 targetPort: 8080 </code></pre> <p>And this is my IngressRoute:</p> <pre><code>--- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: business-app labels: app.kubernetes.io/name: business-app app.kubernetes.io/version: 1.0.0 app.kubernetes.io/component: business app.kubernetes.io/part-of: application-layer spec: entryPoints: - websecure routes: - match: Host(`api.develop.getbud.co`) kind: Rule services: - name: business-app port: 80 tls: certResolver: letsencrypt options: {} </code></pre> <p>Can someone give me a hint on what am I doing wrong?</p> <p>Just an update, I've changed the loglevel of traefik to debug, and here is what it logs upon request:</p> <pre><code>[traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:31Z&quot; level=debug msg=&quot;Error while Peeking first byte: read tcp 10.0.3.86:80-&gt;10.0.1.122:44996: read: connection reset by peer&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:31Z&quot; level=debug msg=&quot;Error while Peeking first byte: read tcp 10.0.3.86:8080-&gt;10.0.3.100:6380: read: connection reset by peer&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: begin ServeHttp on request&quot; Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/graphql\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;Upgrade-Insecure-Requests\&quot;:[\&quot;1\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/graphql\&quot;,\&quot;TLS\&quot;:null}&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: Forwarding this request to 
URL&quot; Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/graphql\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;Upgrade-Insecure-Requests\&quot;:[\&quot;1\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/graphql\&quot;,\&quot;TLS\&quot;:null}&quot; ForwardURL=&quot;http://10.0.1.158:8080&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;'502 Bad Gateway' caused by: dial tcp 10.0.1.158:8080: connect: connection refused&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: completed ServeHttp on request&quot; Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/graphql\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;Upgrade-Insecure-Requests\&quot;:[\&quot;1\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 
Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/graphql\&quot;,\&quot;TLS\&quot;:null}&quot; [traefik-55888dfd67-r8b2c] 10.0.1.122 - - [11/Dec/2020:04:54:32 +0000] &quot;GET /graphql HTTP/2.0&quot; 502 11 &quot;-&quot; &quot;-&quot; 754 &quot;develop-business-app-64fa6977f85a45bb4625@kubernetescrd&quot; &quot;http://10.0.1.158:8080&quot; 2ms [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;Error while Peeking first byte: read tcp 10.0.3.86:443-&gt;10.0.3.75:35314: read: connection reset by peer&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: begin ServeHttp on request&quot; Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/favicon.ico\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;image/webp,*/*\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Referer\&quot;:[\&quot;https://api.develop.getbud.co/graphql\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/favicon.ico\&quot;,\&quot;TLS\&quot;:null}&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: Forwarding this request to URL&quot; ForwardURL=&quot;http://10.0.1.158:8080&quot; 
Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/favicon.ico\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;image/webp,*/*\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Referer\&quot;:[\&quot;https://api.develop.getbud.co/graphql\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/favicon.ico\&quot;,\&quot;TLS\&quot;:null}&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;'502 Bad Gateway' caused by: dial tcp 10.0.1.158:8080: connect: connection refused&quot; [traefik-55888dfd67-r8b2c] time=&quot;2020-12-11T04:54:32Z&quot; level=debug msg=&quot;vulcand/oxy/roundrobin/rr: completed ServeHttp on request&quot; Request=&quot;{\&quot;Method\&quot;:\&quot;GET\&quot;,\&quot;URL\&quot;:{\&quot;Scheme\&quot;:\&quot;\&quot;,\&quot;Opaque\&quot;:\&quot;\&quot;,\&quot;User\&quot;:null,\&quot;Host\&quot;:\&quot;\&quot;,\&quot;Path\&quot;:\&quot;/favicon.ico\&quot;,\&quot;RawPath\&quot;:\&quot;\&quot;,\&quot;ForceQuery\&quot;:false,\&quot;RawQuery\&quot;:\&quot;\&quot;,\&quot;Fragment\&quot;:\&quot;\&quot;,\&quot;RawFragment\&quot;:\&quot;\&quot;},\&quot;Proto\&quot;:\&quot;HTTP/2.0\&quot;,\&quot;ProtoMajor\&quot;:2,\&quot;ProtoMinor\&quot;:0,\&quot;Header\&quot;:{\&quot;Accept\&quot;:[\&quot;image/webp,*/*\&quot;],\&quot;Accept-Encoding\&quot;:[\&quot;gzip, deflate, br\&quot;],\&quot;Accept-Language\&quot;:[\&quot;en-US,pt-BR;q=0.5\&quot;],\&quot;Cache-Control\&quot;:[\&quot;no-cache\&quot;],\&quot;Pragma\&quot;:[\&quot;no-cache\&quot;],\&quot;Referer\&quot;:[\&quot;https://api.develop.getbud.co/graphql\&quot;],\&quot;Te\&quot;:[\&quot;trailers\&quot;],\&quot;User-Agent\&quot;:[\&quot;Mozilla/5.0 (X11; Linux x86_64; rv:83.0) Gecko/20100101 
Firefox/83.0\&quot;],\&quot;X-Forwarded-Host\&quot;:[\&quot;api.develop.getbud.co\&quot;],\&quot;X-Forwarded-Port\&quot;:[\&quot;443\&quot;],\&quot;X-Forwarded-Proto\&quot;:[\&quot;https\&quot;],\&quot;X-Forwarded-Server\&quot;:[\&quot;traefik-55888dfd67-r8b2c\&quot;],\&quot;X-Real-Ip\&quot;:[\&quot;10.0.1.122\&quot;]},\&quot;ContentLength\&quot;:0,\&quot;TransferEncoding\&quot;:null,\&quot;Host\&quot;:\&quot;api.develop.getbud.co\&quot;,\&quot;Form\&quot;:null,\&quot;PostForm\&quot;:null,\&quot;MultipartForm\&quot;:null,\&quot;Trailer\&quot;:null,\&quot;RemoteAddr\&quot;:\&quot;10.0.1.122:27473\&quot;,\&quot;RequestURI\&quot;:\&quot;/favicon.ico\&quot;,\&quot;TLS\&quot;:null}&quot; [traefik-55888dfd67-r8b2c] 10.0.1.122 - - [11/Dec/2020:04:54:32 +0000] &quot;GET /favicon.ico HTTP/2.0&quot; 502 11 &quot;-&quot; &quot;-&quot; 755 &quot;develop-business-app-64fa6977f85a45bb4625@kubernetescrd&quot; &quot;http://10.0.1.158:8080&quot; 1ms </code></pre> <p>So, it seems Traefik is receiving a connection refused from the pod. I've opened a shell inside Traefik's container and tried to run wget directly in the Pod IP and indeed I received the same error (connection refused).</p> <p>Any other working pod whenever I ran wget it works.</p> <p>Any ideas?</p>
<p>For those who (like me) are struggling with this issue, here is the problem:</p> <p>Fastify listens only on <code>127.0.0.1</code> by default, so it automatically refuses connections from any other address, which is exactly why Traefik gets a connection refused when dialing the pod IP. To solve that you can simply add <code>0.0.0.0</code> as the second argument of your <code>app.listen</code> call, like the following:</p> <p>previous:</p> <pre><code> await app.listen(appConfig.port) </code></pre> <p>fixed:</p> <pre><code> await app.listen(appConfig.port, '0.0.0.0') </code></pre> <p>Thanks in any case :)</p>
<p>I am having some issues with a fairly new cluster where a couple of nodes (always seems to happen in pairs but potentially just a coincidence) will become NotReady and a <code>kubectl describe</code> will say that the Kubelet stopped posting node status for memory, disk, PID and ready.</p> <p>All of the running pods are stuck in Terminating (can use k9s to connect to the cluster and see this) and the only solution I have found is to cordon and drain the nodes. After a few hours they seem to be being deleted and new ones created. Alternatively I can delete them using kubectl.</p> <p>They are completely inaccessible via ssh (timeout) but AWS reports the EC2 instances as having no issues.</p> <p>This has now happened three times in the past week. Everything does recover fine but there is clearly some issue and I would like to get to the bottom of it.</p> <p>How would I go about finding out what has gone on if I cannot get onto the boxes at all? (Actually just occurred to me to maybe take a snapshot of the volume and mount it so will try that if it happens again, but any other suggestions welcome)</p> <p>Running kubernetes v1.18.8</p>
<p>The two most common possibilities here, both typically caused by heavy load, are:</p> <ul> <li><p>An <code>Out of Memory</code> condition on the kubelet host. It can be solved by adding proper <code>--kubelet-extra-args</code> to <code>BootstrapArguments</code>. For example: <code>--kubelet-extra-args &quot;--kube-reserved memory=0.3Gi,ephemeral-storage=1Gi --system-reserved memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard memory.available&lt;200Mi,nodefs.available&lt;10%&quot; </code></p> </li> <li><p>An issue explained <a href="https://github.com/kubernetes/kubernetes/issues/74302#issuecomment-467359176" rel="nofollow noreferrer">here</a>:</p> </li> </ul> <blockquote> <p>kubelet cannot patch its node status sometimes, ’cos more than 250 resources stay on the node, kubelet cannot watch more than 250 streams with kube-apiserver at the same time. So, I just adjust kube-apiserver --http2-max-streams-per-connection to 1000 to relieve the pain.</p> </blockquote> <p>You can either adjust the values provided above or find the cause of the high load/IOPS and tune it down. A sketch of where the kubelet flags go at node bootstrap is shown below.</p>
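<p>For reference, on EKS nodes these kubelet flags are typically passed when the node bootstraps. A minimal sketch of the user data call, assuming the standard EKS-optimized AMI bootstrap script and a placeholder cluster name (<code>my-cluster</code> is not from the original question):</p> <pre><code>#!/bin/bash
# EKS AMI bootstrap: reserve memory/storage for the kubelet and system daemons
# and set hard eviction thresholds so system components are not starved under load.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args &quot;--kube-reserved memory=0.3Gi,ephemeral-storage=1Gi --system-reserved memory=0.2Gi,ephemeral-storage=1Gi --eviction-hard memory.available&lt;200Mi,nodefs.available&lt;10%&quot;
</code></pre>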
<p>Today I tried to run a CentOS-based container as a second container in my pod. While deploying my deployment.yaml I got this message:</p> <pre><code>ImageInspectError: Failed to inspect image "XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/ym_image:v1.0": Id or size of image "XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/my_image:v1.0" is not set </code></pre> <p>Does somebody know how to set this ID or size?</p> <p>Kind regards Markus</p>
<p>I am not familiar with AWS repositories, but at first glance it seems you are trying to pull an image with an improper name:tag.<br/></p> <p>Example of a correctly tagged repository:<br/> <strong>docker tag hello-world aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-repository</strong><br/> Optionally you can add a version, e.g. "<em>hello-repository:latest</em>". You can log in to your AWS account or list your repositories and compare them with the settings in your deployment.<br/></p> <p>Could you please also verify that your repository name really starts with "msg", as in:<br/> XXX.dkr.ecr.eu-west-1.amazonaws.com/<strong>msg</strong>/ym_image:v1.0<br/></p> <p>All information about repositories in AWS can be found here: <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html</a>. Try to pull the mentioned image directly with Docker (see the sketch below) and share your findings.</p>
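<p>A minimal sketch of that manual check, assuming the AWS CLI v2 and that the account/region placeholders (<code>XXX</code>, <code>eu-west-1</code>) match your deployment:</p> <pre><code># authenticate Docker against the private ECR registry
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin XXX.dkr.ecr.eu-west-1.amazonaws.com

# list the tags that actually exist in the repository
aws ecr list-images --region eu-west-1 --repository-name msg/ym_image

# try to pull exactly the reference used in the pod spec
docker pull XXX.dkr.ecr.eu-west-1.amazonaws.com/msg/ym_image:v1.0
</code></pre>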
<p>I can configure <code>apiserver.service-node-port-range</code> extra-config with a port range like <code>10000-19000</code> but when I specify a comma separated list of ports like <code>17080,13306</code> minkube wouldn't start it will bootloop with below error</p> <pre><code>💢 initialization failed, will try again: wait: /bin/bash -c &quot;sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflig ht-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailabl e--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,S ystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables&quot;: Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 5.10.26-1rodete1-amd64 DOCKER_VERSION: 20.10.5 OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled [preflight] Pulling images required for setting up a Kubernetes cluster . . . [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: </code></pre> <p>I checked the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> and takes only port range. Is comma comma separated list of ports supported in minikube?</p> <p><code>--service-node-port-range &lt;a string in the form 'N1-N2'&gt; Default: 30000-32767</code></p>
<p>Posting this as community wiki, please feel free to provide more details and findings about this topic.</p> <p>The only place where we can find information about a comma separated list of ports and port ranges is the <em><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#increasing-the-nodeport-range" rel="nofollow noreferrer">minikube documentation</a></em>:</p> <blockquote> <h3>Increasing the NodePort range</h3> <p><strong>By default</strong>, minikube only <strong>exposes ports 30000-32767</strong>. If this does not work for you, you can adjust the range by using:</p> <p><code>minikube start --extra-config=apiserver.service-node-port-range=1-65535</code></p> <p><strong>This flag also accepts a comma separated list of ports and port ranges</strong>.</p> </blockquote> <p>On the other hand, the <em><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">k8s documentation</a></em> says:</p> <blockquote> <p><strong>--service-node-port-range &lt;a string in the form 'N1-N2'&gt; Default: 30000-32767</strong></p> </blockquote> <p>I have tested this with k8s v1.20 and a comma separated list of ports doesn't work for me either. Kube-apiserver accepts two formats:</p> <blockquote> <p><strong>set parses a string of the form &quot;value&quot;, &quot;min-max&quot;, or &quot;min+offset&quot;, inclusive at both ends</strong></p> </blockquote> <pre><code>--service-node-port-range=30100-31000  # using the &quot;min-max&quot; approach
--service-node-port-range=25000+100    # using the &quot;min+offset&quot; approach (valid range will be 25000-25100)
</code></pre> <p>A concrete example covering the two ports from your question is shown at the end of this answer.</p> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://github.com/kubernetes/kubernetes/blob/b6b4b974eb9f6ced70317e3da5426cdf9580ffb6/cmd/kube-apiserver/app/options/options.go#L235" rel="nofollow noreferrer">ServiceNodePortRange</a></em></li> <li><em><a href="https://github.com/kubernetes/kubernetes/blob/b6b4b974eb9f6ced70317e3da5426cdf9580ffb6/staging/src/k8s.io/apimachinery/pkg/util/net/port_range.go#L25" rel="nofollow noreferrer">utilnet.PortRange</a></em></li> </ul>
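<p>So if you need both <code>17080</code> and <code>13306</code> exposed as NodePorts, a working sketch is to pass a single range (or a min+offset) that covers both values instead of a comma separated list:</p> <pre><code># one contiguous range covering both 13306 and 17080
minikube start --extra-config=apiserver.service-node-port-range=13306-17080

# or, equivalently, using the &quot;min+offset&quot; form (13306 + 3774 = 17080)
minikube start --extra-config=apiserver.service-node-port-range=13306+3774
</code></pre>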
<p>I am using minikube v1.11.0 on Microsoft Windows 10 Pro . my minikube stopped frequently ,Find minikube status</p> <p>Minikube Status:</p> <p>type: Control Plane<br /> host: Running<br /> kubelet: Running<br /> apiserver: Stopped<br /> kubeconfig: Configured</p> <p>minikube logs:</p> <pre><code>Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout </code></pre> <p>Minikube Details:</p> <p>Minikube Version: 1.11.0<br /> Assigned Memory : 4096<br /> Processor :4 virtual processors</p> <p>'minikube ssh dmesg' command Result:</p> <pre><code>D:\IMP\DevOps Implementation\Python&gt;minikube ssh dmesg [ 0.000000] Linux version 4.19.107 (jenkins@jenkins) (gcc version 7.4.0 (Buildroot 2019.02.10)) #1 SMP Thu May 28 15:07:17 PDT 2020 [ 0.000000] Command line: BOOT_IMAGE=/boot/bzImage root=/dev/sr0 loglevel=3 console=ttyS0 noembed nomodeset norestore waitusb=10 random.trust_cpu=on hw_rng_model=virtio systemd.legacy_systemd_cgroup_controller=yes initrd=/boot/initrd [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 [ 0.000000] x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 [ 0.000000] x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000f7feffff] usable [ 0.000000] BIOS-e820: [mem 0x00000000f7ff0000-0x00000000f7ffefff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000f7fff000-0x00000000f7ffffff] ACPI NVS [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000101ffffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.3 present. 
[ 0.000000] DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018 [ 0.000000] Hypervisor detected: Microsoft Hyper-V [ 0.000000] Hyper-V: features 0x2e7f, hints 0x20c2c [ 0.000000] Hyper-V Host Build:18362-10.0-0-0.836 [ 0.000000] Hyper-V: LAPIC Timer Frequency: 0x30d40 [ 0.000000] tsc: Marking TSC unstable due to running on Hyper-V [ 0.000000] Hyper-V: Using hypercall for remote TLB flush [ 0.000000] tsc: Detected 1800.006 MHz processor [ 0.000686] e820: update [mem 0x00000000-0x00000fff] usable ==&gt; reserved [ 0.000687] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000690] last_pfn = 0x102000 max_arch_pfn = 0x400000000 [ 0.000705] MTRR default type: uncachable [ 0.000706] MTRR fixed ranges enabled: [ 0.000707] 00000-9FFFF write-back [ 0.000707] A0000-DFFFF uncachable [ 0.000707] E0000-FFFFF write-back [ 0.000708] MTRR variable ranges enabled: [ 0.000709] 0 base 0000000000 mask 7F00000000 write-back [ 0.000709] 1 base 0100000000 mask 7000000000 write-back [ 0.000709] 2 disabled [ 0.000710] 3 disabled [ 0.000710] 4 disabled [ 0.000710] 5 disabled [ 0.000710] 6 disabled [ 0.000711] 7 disabled [ 0.000718] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000727] last_pfn = 0xf7ff0 max_arch_pfn = 0x400000000 [ 0.006501] found SMP MP-table at [mem 0x000ff780-0x000ff78f] [ 0.006515] Scanning 1 areas for low memory corruption [ 0.006583] Using GB pages for direct mapping [ 0.006586] BRK [0x8da02000, 0x8da02fff] PGTABLE [ 0.006587] BRK [0x8da03000, 0x8da03fff] PGTABLE [ 0.006588] BRK [0x8da04000, 0x8da04fff] PGTABLE [ 0.006599] BRK [0x8da05000, 0x8da05fff] PGTABLE [ 0.006600] BRK [0x8da06000, 0x8da06fff] PGTABLE [ 0.006625] BRK [0x8da07000, 0x8da07fff] PGTABLE [ 0.006632] BRK [0x8da08000, 0x8da08fff] PGTABLE [ 0.006669] RAMDISK: [mem 0x75db4000-0x7fffffff] [ 0.006709] ACPI: Early table checksum verification disabled [ 3.211586] Freeing unused kernel image memory: 1428K [ 3.217804] Write protecting the kernel read-only data: 20480k [ 3.218711] Freeing unused kernel image memory: 2004K [ 3.218811] Freeing unused kernel image memory: 648K [ 3.218813] Run /init as init process [ 3.420796] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons [ 3.606951] tar (1220) used greatest stack depth: 14064 bytes left [ 3.658466] systemd[1]: systemd 240 running in system mode. (-PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid) [ 3.658536] systemd[1]: Detected virtualization microsoft. [ 3.658540] systemd[1]: Detected architecture x86-64. [ 3.667515] systemd[1]: Set hostname to &lt;minikube&gt;. [ 3.667543] systemd[1]: Initializing machine ID from random generator. [ 3.667835] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ 3.678215] systemd-fstab-generator[1225]: Ignoring &quot;noauto&quot; for root device [ 3.679587] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ 3.679589] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ 3.684942] systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:7: PIDFile= references path below legacy directory /var/run/, updating /var/run/vmtoolsd.pid \xe2\x86\x92 /run/vmtoolsd.pid; please update the unit file accordingly. 
[ 3.688167] systemd[1]: /usr/lib/systemd/system/rpc-statd.service:13: PIDFile= references path below legacy directory /var/run/, updating /var/run/rpc.statd.pid \xe2\x86\x92 /run/rpc.statd.pid; please update the unit file accordingly. [ 4.077349] systemd-journald[1482]: Received request to flush runtime journal from PID 1 [ 4.082397] journalctl (1875) used greatest stack depth: 14032 bytes left [ 4.328094] hv_vmbus: Vmbus version:5.0 [ 4.346743] hv_vmbus: registering driver hid_hyperv [ 4.347880] input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input4 [ 4.347953] hid-generic 0006:045E:0621.0001: input: &lt;UNKNOWN&gt; HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on [ 4.347953] hid-generic 0006:045E:0621.0001: input: &lt;UNKNOWN&gt; HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on [ 4.348317] hv_vmbus: registering driver hv_storvsc [ 4.349279] hv_vmbus: registering driver hyperv_keyboard [ 4.350678] hv_utils: Registering HyperV Utility Driver [ 52.468905] hv_balloon: Max. dynamic memory size: 4000 MB [ 96.728628] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. [ 96.729009] Bridge firewalling registered [ 96.734466] audit: type=1325 audit(1594295640.425:2): table=nat family=2 entries=0 [ 96.734568] audit: type=1300 audit(1594295640.425:2): arch=c000003e syscall=313 success=yes exit=0 a0=5 a1=41a8e6 a2=0 a3=5 items=0 ppid=1078 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;modprobe&quot; exe=&quot;/usr/bin/kmod&quot; subj=kernel key=(null) [ 96.734671] audit: type=1327 audit(1594295640.425:2): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0069707461626C655F6E6174 [ 96.753305] audit: type=1325 audit(1594295640.444:3): table=nat family=2 entries=5 [ 96.753308] audit: type=1300 audit(1594295640.444:3): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=1a1ea60 items=0 ppid=2350 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;iptables&quot; exe=&quot;/usr/sbin/xtables-legacy-multi&quot; subj=kernel key=(null) [ 96.753309] audit: type=1327 audit(1594295640.444:3): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 [ 355.252138] audit: type=1325 audit(1594295898.980:72): table=filter family=2 entries=30 [ 355.252142] audit: type=1300 audit(1594295898.980:72): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=207d050 items=0 ppid=2350 pid=6087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;iptables&quot; exe=&quot;/usr/sbin/xtables-legacy-multi&quot; subj=kernel key=(null) [ 355.252144] audit: type=1327 audit(1594295898.980:72): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D7000746370002D64003137322E31372E302E34002D2D64706F727400343433002D6A00414343455054 [ 569.369364] docker0: port 13(veth97c2039) entered disabled state [ 569.369455] audit: type=1700 audit(1594296113.095:87): dev=veth97c2039 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295 [ 569.377656] audit: type=1300 audit(1594296113.095:87): arch=c000003e syscall=44 success=yes exit=32 a0=e a1=c002858760 a2=20 a3=0 items=0 ppid=1 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm=&quot;dockerd&quot; exe=&quot;/usr/bin/dockerd&quot; subj=kernel key=(null) [ 569.377659] audit: type=1327 audit(1594296113.095:87): proctitle=2F7573722F62696E2F646F636B657264002D48007463703A2F2F302E302E302E303A32333736002D4800756E69783A2F2F2F7661722F72756E2F646F636B65722E736F636B002D2D64656661756C742D756C696D69743D6E6F66696C653D313034383537363A31303438353736002D2D746C73766572696679002D2D746C7363 [ 575.768707] audit: type=1325 audit(1594296119.496:88): table=nat family=2 entries=27 [ 575.768710] audit: type=1300 audit(1594296119.496:88): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=10a24b0 items=0 ppid=2350 pid=8742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;iptables&quot; exe=&quot;/usr/sbin/xtables-legacy-multi&quot; subj=kernel key=(null) [ 575.768712] audit: type=1327 audit(1594296119.496:88): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400444F434B4552002D7000746370002D6400302F30002D2D64706F727400343433002D6A00444E4154002D2D746F2D64657374696E6174696F6E003137322E31372E302E343A3434330000002D6900646F636B657230 [ 575.770459] audit: type=1325 audit(1594296119.498:89): table=filter family=2 entries=32 [ 575.770461] audit: type=1300 audit(1594296119.498:89): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=17ad4d0 items=0 ppid=2350 pid=8744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;iptables&quot; exe=&quot;/usr/sbin/xtables-legacy-multi&quot; subj=kernel key=(null) [ 575.770463] audit: type=1327 audit(1594296119.498:89): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4400444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D7000746370002D64003137322E31372E302E34002D2D64706F727400343433002D6A00414343455054 [ 575.772344] audit: type=1325 audit(1594296119.500:90): table=nat family=2 entries=26 [ 575.772356] audit: type=1300 audit(1594296119.500:90): arch=c000003e syscall=54 success=yes exit=0 a0=5 a1=0 a2=40 a3=814150 items=0 ppid=2350 pid=8746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;iptables&quot; exe=&quot;/usr/sbin/xtables-legacy-multi&quot; subj=kernel key=(null) [ 575.772358] audit: type=1327 audit(1594296119.500:90): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4400504F5354524F5554494E47002D7000746370002D73003137322E31372E302E34002D64003137322E31372E302E34002D2D64706F727400343433002D6A004D415351554552414445 [ 575.775023] audit: type=1325 audit(1594296119.503:91): table=nat family=2 entries=25 [ 576.333512] veth3126200: renamed from eth0 [ 576.342533] docker0: port 1(vethb62d4b0) entered disabled state [ 576.518935] docker0: port 2(vetha808974) entered disabled state [ 576.521396] vethe98d518: renamed from eth0 [ 576.708284] docker0: port 3(vethc4912f9) entered disabled state [ 576.709295] vethbd779ad: renamed from eth0 [ 579.921427] docker0: port 1(vethb62d4b0) entered disabled state [ 579.926503] device vethb62d4b0 left promiscuous mode [ 579.926508] docker0: port 1(vethb62d4b0) entered disabled state [ 580.745101] docker0: port 2(vetha808974) entered disabled state [ 580.748501] device vetha808974 left promiscuous mode [ 580.748514] docker0: port 2(vetha808974) entered disabled state [ 582.042311] docker0: port 3(vethc4912f9) entered disabled state [ 582.047229] device vethc4912f9 left promiscuous mode [ 582.047236] docker0: 
port 3(vethc4912f9) entered disabled state [ 582.047260] kauditd_printk_skb: 14 callbacks suppressed [ 582.047261] audit: type=1700 audit(1594296125.769:96): dev=vethc4912f9 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295 [ 582.050812] audit: type=1300 audit(1594296125.769:96): arch=c000003e syscall=44 success=yes exit=32 a0=e a1=c001dcf7e0 a2=20 a3=0 items=0 ppid=1 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=&quot;dockerd&quot; exe=&quot;/usr/bin/dockerd&quot; subj=kernel key=(null) [ 582.050918] audit: type=1327 audit(1594296125.769:96): proctitle=2F7573722F62696E2F646F636B657264002D48007463703A2F2F302E302E302E303A32333736002D4800756E69783A2F2F2F7661722F72756E2F646F636B65722E736F636B002D2D64656661756C742D756C696D69743D6E6F66696C653D313034383537363A31303438353736002D2D746C73766572696679002D2D746C7363 </code></pre>
<p>Based on the info you provided, the most common cause of this issue is a lack of resources, which can lead to the apiserver being evicted. There are a few things to look at:</p> <ol> <li><p>Increase the amount of allocated memory and CPU to make sure it is enough for your use case.</p> </li> <li><p>Check if you have enough storage at <code>/dev/sda1</code>. Hitting a threshold might cause some of the processes to be evicted, apiserver included.</p> </li> <li><p>If this is still not enough, try to get more logs by executing: <code>minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'</code></p> </li> <li><p>As a last resort you can restart your minikube with <code>minikube stop</code> and <code>minikube start</code> (you can add <code>minikube delete</code> in between if you want to start from scratch). If the problem returns, try to catch its cause with <code>journalctl</code>.</p> </li> </ol> <p>Example commands for the first two points are sketched below. Please let me know if that helped.</p>
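<p>A minimal sketch for points 1 and 2; the memory/CPU values are only examples, adjust them to what your machine can spare:</p> <pre><code># recreate the VM with more resources
# (on an existing VM you may need 'minikube delete' first for the new limits to apply)
minikube stop
minikube start --memory=6144 --cpus=4

# check free disk space inside the minikube VM
minikube ssh 'df -h'
</code></pre>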
<p>Is it required to clean up docker logs in AKS/Kubernetes? Or, even simpler, is it possible to set a maximum log size in AKS?</p>
<p>Short answers:</p> <ul> <li><p>No, you don't have to clean up the docker logs yourself.</p> </li> <li><p>Yes, there is a way to set a maximum log size, but it is not an officially supported method.</p> </li> </ul> <p>There is a config in <code>/etc/docker/daemon.json</code> that is responsible for log rotation. See the example below:</p> <pre><code>{
  &quot;live-restore&quot;: true,
  &quot;log-driver&quot;: &quot;json-file&quot;,
  &quot;log-opts&quot;: {
    &quot;max-size&quot;: &quot;50m&quot;,
    &quot;max-file&quot;: &quot;5&quot;
  }
}
</code></pre> <p>You can change those values, but the change would not be persistent because the node can be replaced during scale or upgrade operations. There is a workaround, however. You can use <a href="https://github.com/juan-lee/knode" rel="nofollow noreferrer">knode</a> in order to change the node configuration:</p> <blockquote> <p>knode uses a kubernetes daemonset for node configuration.</p> </blockquote> <p>More details regarding it can be found on the <a href="https://github.com/juan-lee/knode" rel="nofollow noreferrer">linked page</a>. A quick way to check the effective settings on a node is sketched below.</p> <p>Please let me know if that helps.</p>
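<p>As a sketch, once you have a shell on a node (over SSH or a privileged debug pod) you can verify which log driver and options Docker is actually running with; this assumes the node still uses the Docker runtime:</p> <pre><code># show the daemon configuration, if the file exists
cat /etc/docker/daemon.json

# show the logging driver Docker is effectively using
docker info --format '{{.LoggingDriver}}'
</code></pre>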
<p>I have 2 clusters in GCP, one in Europe and the other in the USA. I have created a VPC network to peer the subnetworks with each other and configured the relevant firewall rules. Now I'm able to make calls between pods, but I get a timeout when trying to call a service in the other cluster from a pod in Europe. I checked all the firewall rules carefully but can't find a solution. Can someone give me a hint to solve my problem?</p>
<p>The problem is that GCP requires you to use a VM IP address in order to communicate outside the VPC. To allow cross-cluster communication on top of the peering/VPN, you need to make sure the clusters can communicate as if they were on the same virtual network. GKE blocks egress traffic when an internal IP address (the pod address space) is used to reach internal IP addresses that are outside the virtual network (in this case across the peering/VPN). Hence you need to configure an iptables rule that masquerades all outgoing traffic to the other subnet, so that it originates from the VM instance IP address instead of the pod IP address.</p> <p>There is an implementation that uses a DaemonSet to define the iptables MASQUERADE rules: the ip-masq-agent (a minimal configuration sketch is shown below).</p> <p>You can find more details on the GitHub page - <a href="https://github.com/kubernetes-incubator/ip-masq-agent" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/ip-masq-agent</a> - and in the k8s documentation - <a href="https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/</a></p>
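<p>A minimal sketch of the agent's configuration, following the layout from the k8s documentation linked above (the CIDR is a placeholder; list the ranges whose traffic should keep the pod IP as source, and leave out the remote cluster's subnet so traffic to it gets SNATed to the node IP):</p> <pre><code># local file named &quot;config&quot;, consumed by ip-masq-agent
nonMasqueradeCIDRs:
  - 10.0.0.0/8        # placeholder: ranges that should NOT be masqueraded
resyncInterval: 60s
</code></pre> <p>It is loaded into the cluster with something like:</p> <pre><code>kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
</code></pre>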
<p>According to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations" rel="nofollow noreferrer">this page</a> in K8S, "...two jobs might be created...".</p> <p>If I set my <code>concurrencyPolicy</code> to "Forbid" - Will I still get optionally concurrent runs due to the scheduler, or will I get concurrent calls to run but be prevented?</p> <p>I also opened an issue in the Docs site: <a href="https://github.com/kubernetes/website/issues/18655" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/18655</a></p>
<p>Setting concurrencyPolicy to "Forbid" means that if the previous job is still running when the next scheduled run comes around, the CronJob controller will not spin up another job, and that run counts as a missed job. Setting it to "Allow" permits those runs to overlap, so jobs may run concurrently when scheduled (a minimal manifest is sketched below).</p> <p>Independently of the policy, there is still a small chance of two jobs, or no job, being created for a single scheduled time, as K8S does not fully prevent this.</p>
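<p>A minimal manifest sketch, using the <code>batch/v1beta1</code> API that was current for CronJobs at the time (the same fields exist under <code>batch/v1</code> on newer clusters); the name, schedule and container are just examples:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: &quot;*/5 * * * *&quot;
  concurrencyPolicy: Forbid      # skip the run if the previous job is still running
  startingDeadlineSeconds: 120   # how late a missed run may still be started
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: busybox
            command: [&quot;sh&quot;, &quot;-c&quot;, &quot;date; sleep 30&quot;]
</code></pre>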
<p>In my docker setup, I maintain <code>targets.json</code> file which is dynamically updated with targets to probe. The file starts empty but is appended with targets during some use case.</p> <p><strong>sample targets.json</strong></p> <pre><code>[ { &quot;targets&quot;: [ &quot;x.x.x.x&quot; ], &quot;labels&quot;: { &quot;app&quot;: &quot;testApp1&quot; } }, { &quot;targets&quot;: [ &quot;x.x.x.x&quot; ], &quot;labels&quot;: { &quot;app&quot;: &quot;testApp2&quot; } } ] </code></pre> <p>This file is then provided to prometheus configuration as <code>file_sd_configs</code>. Everything works fine, targets get added to targets.json file due to some event in application and prometheus starts monitoring along with blackbox for health checks.</p> <pre><code>scrape_configs: - job_name: 'test-run' metrics_path: /probe params: module: [icmp] file_sd_configs: - files: - targets.json relabel_configs: - source_labels: [__address__] target_label: __param_target - source_labels: [__param_target] target_label: instance - target_label: __address__ replacement: blackbox:9115 </code></pre> <p>Inside my node.js application I am able to append data to targets.json file, <strong>but</strong> now I trying to replicate this in Kubernetes on minikube. I tried adding in ConfigMap as following and it works, but I dont want to populate targets in configuration, but rather maintain a json file.</p> <p>Can this be done using Persistent Volumes? The pod running Prometheus will always read the targets file and pod running application will write to targets file.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: prometheus-cm data: targets.json: |- [ { &quot;targets&quot;: [ &quot;x.x.x.x&quot; ], &quot;labels&quot;: { &quot;app&quot;: &quot;testApp1&quot; } } ] </code></pre> <p>Simply, what strategy in Kubernetes is recommended to so that one pod can read a json file and another pod can write to that file.</p>
<p>In order to achieve your goal you need to use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PVC</a>:</p> <blockquote> <p>A <strong>PersistentVolume (PV)</strong> is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.</p> <p>A <strong>PersistentVolumeClaim (PVC)</strong> is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).</p> </blockquote> <p>The json file needs to be persisted if one pod has to write to it and another one to read it. There is an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">official guide</a> describing that concept in steps:</p> <ul> <li><p>Create a <code>PersistentVolume</code></p> </li> <li><p>Create a <code>PersistentVolumeClaim</code></p> </li> <li><p>Create a <code>Pod</code> that uses your <code>PersistentVolumeClaim</code> as a volume</p> </li> </ul> <p>I also recommend reading this: <a href="https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3" rel="nofollow noreferrer">Create ReadWriteMany PersistentVolumeClaims on your Kubernetes Cluster</a> as a supplement.</p>
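<p>A minimal sketch of those steps, following the official guide's <code>storageClassName: manual</code> pattern (names are placeholders; on minikube a <code>hostPath</code> volume is enough because both pods run on the same node, while on a multi-node cluster you would need storage that supports <code>ReadWriteMany</code>, e.g. NFS):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: targets-pv
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/targets            # node directory that will hold targets.json
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: targets-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
</code></pre> <p>Both the Prometheus pod and the writer application then mount the same claim, e.g. in each pod template:</p> <pre><code>    volumeMounts:
    - name: targets
      mountPath: /etc/prometheus/targets    # point file_sd_configs at this path
  volumes:
  - name: targets
    persistentVolumeClaim:
      claimName: targets-pvc
</code></pre>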
<p>I have an EKS cluster with an application load balancer with a target group setup for each application environment. In my cluster I am building my application from a base docker image that is stored in a private ECR repository. I have confirmed that my pods are able to pull from the private ECR repo due to a secret I have setup to allow the private ECR image to be pulled. I am having a problem with the base docker image being able to get into a healthy state in the target group. I updated to containerPort in my deployment to match the port of the target group. I am not sure if that is how it needs to be configured. Below is how I defined everything for this namespace. I also have my dockerfile for the base image. Any advice how I can get a base docker image into a healthy state for me to build my application would be helpful.</p> <p><strong>dev.yaml</strong></p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: dev --- apiVersion: apps/v1 kind: Deployment metadata: namespace: dev name: dev-deployment spec: selector: matchLabels: app.kubernetes.io/name: dev-app replicas: 2 template: metadata: labels: app.kubernetes.io/name: dev-app spec: containers: - name: dev-app image: xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/private/base-docker-image:latest imagePullPolicy: Always ports: - containerPort: 30411 imagePullSecrets: - name: dev --- apiVersion: v1 kind: Service metadata: namespace: dev name: dev-service spec: ports: - port: 80 targetPort: 80 protocol: TCP type: NodePort selector: app.kubernetes.io/name: dev-app --- apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: dev name: dev-ingress annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance spec: rules: - http: paths: - path: /* backend: serviceName: dev-service servicePort: 80 --- </code></pre> <p><strong>dockerfile</strong></p> <pre><code>FROM private/base-docker-image:latest COPY . /apps WORKDIR /apps RUN npm run build ENV ML_HOST=$HOST ML_PORT=$PORT ML_USER=$USER ML_PASSWORD=$PASSWORD CMD [&quot;npm&quot;, &quot;run&quot;, &quot;dockerstart&quot;] </code></pre> <p><strong>Registered Targets</strong> <a href="https://i.stack.imgur.com/NjVem.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NjVem.png" alt="enter image description here" /></a></p> <p><strong>Health Check Settings</strong> <a href="https://i.stack.imgur.com/ZmM2q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZmM2q.png" alt="enter image description here" /></a></p>
<p>This is a community wiki answer posted for better visibility.</p> <p>As confirmed in the comments, the solution is to set the <code>targetPort</code> to the port the application actually listens on, which is <code>30411</code> as mentioned in the deployment's yaml configuration.</p>
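<p>Applied to the Service from the question, that looks like the following sketch (only <code>targetPort</code> changes):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: dev-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: dev-app
  ports:
    - port: 80            # port the ingress / target group talks to
      targetPort: 30411   # port the container actually listens on
      protocol: TCP
</code></pre>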
<p>Jenkins (on a Kubernetes node) is complaining it requires a newer version of Jenkins to run some of my plug-ins.</p> <blockquote> <p>SEVERE: Failed Loading plugin Matrix Authorization Strategy Plugin v2.4.2 (matrix-auth) java.io.IOException: Matrix Authorization Strategy Plugin v2.4.2 failed to load. - You must update Jenkins from v2.121.2 to v2.138.3 or later to run this plugin.</p> </blockquote> <p>The same log file also complains farther down that it can't read my config file... I'm hoping this is just because of the version issue above, but I'm including it here in case it is a sign of deeper issues:</p> <blockquote> <p>SEVERE: Failed Loading global config java.io.IOException: Unable to read /var/jenkins_home/config.xml</p> </blockquote> <p>I'd either like to disable the plug-ins that are causing the issue so I can see the Jenkins UI and manage the plug-ins from there, or I'd like to update Jenkins in a way that DOES NOT DELETE MY USER DATA AND JOB CONFIG DATA.</p> <p>So far, I tried disabling ALL the plug-ins by adding .disabled files to the Jenkins plug-ins folder. That got rid of most of the errors, but it still complained about the plug-in above. So I removed the .disabled file for that, and now it's complaining about Jenkins not being a new enough version again (the error above).</p> <p>Note: this installation of Jenkins is using a persistent storage volume, mounted with EFS. So that will probably help alleviate some of the restrictions around upgrading Jenkins, if that's what we need to do.</p> <p>Finally, whatever we do with the plug-ins and Jenkins version, I need to make sure the change is going to persist if Kubernetes re-starts the node in the future. Unfortunately, I am pretty unfamiliar with Kubernetes, and I haven't discovered yet where these changes need to be made. I'm guessing the file that controls the Kubernetes deployment configuration?</p> <p>This project is using Helm, in case that matters. But again, I hardly know anything about Helm, so I don't know what files you might need to see to make this question solvable. Please comment so I know what to include here to help provide the needed information.</p>
<p>We faced the same problem with our cluster, and we have a basic explanation for it, although we are not completely sure about it (the following fix works).</p> <p>That error comes from the fact that you installed Jenkins via Helm, but its plugins through the Jenkins UI. It works as long as the pod is never restarted, but if one day Jenkins has to run its initialization again, you will face that error: Jenkins tries to load plugins from the JENKINS_PLUGINS_DIR, which is empty, so the pod dies.</p> <p>To fix the current error, you should specify your plugins in the master.installPlugins parameter. If you followed a normal install, just run the following against your cluster:</p> <pre><code>helm get values jenkins_release_name </code></pre> <p>So you may have something like this:</p> <pre><code>master:
  enableRawHtmlMarkupFormatter: true
  installPlugins:
    - kubernetes:1.16.0
    - workflow-job:2.32
</code></pre> <p>By default, some values are "embedded" by helm to be sure that jenkins works, see here for more details: <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">Github Helm Charts Jenkins</a></p> <p>So, just copy it into a file with the same syntax and add your plugins with their versions (a values sketch tailored to the plugin from your error is shown at the end of this answer). Then you just have to run the helm upgrade command with your file on your release:</p> <pre><code>helm upgrade [RELEASE] [CHART] -f your_file.yaml </code></pre> <p>Good luck !</p>
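<p>For the specific error in the question, the same approach applies: add the plugin (with its version) to the list, and also bump the Jenkins image version through the chart's image tag value so the core is new enough for the plugin (the value name for the tag varies between chart versions, so check your chart's values.yaml). A sketch, where the release and file names are placeholders:</p> <pre><code># jenkins-values.yaml
master:
  installPlugins:
    - kubernetes:1.16.0
    - workflow-job:2.32
    - matrix-auth:2.4.2       # the plugin from the error message
</code></pre> <pre><code>helm upgrade my-jenkins stable/jenkins -f jenkins-values.yaml
</code></pre>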
<p>I'm currently migrating an IT environment from Nginx Ingress Gateway to IstIO Ingress Gateway on Kubernetes.</p> <p>I need to migrate the following Nginx annotations:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-buffer-size nginx.ingress.kubernetes.io/proxy-read-timeout nginx.ingress.kubernetes.io/proxy-send-timeout nginx.ingress.kubernetes.io/proxy-body-size nginx.ingress.kubernetes.io/upstream-vhost </code></pre> <p>For Nginx, the annotations are documented here: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a></p> <p>I didn't find the way of use for the IstIO Ingress Gateway on the documentation of IstIO for the Nginx annotations.</p> <p>Does anyone know how to implement the above mentioned annotations in the IstIO Ingress Gateway?</p>
<p>I think I found how to set <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-body-size" rel="nofollow noreferrer"><code>nginx.ingress.kubernetes.io/proxy-body-size</code></a> in Istio.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: reviews-lua namespace: bookinfo spec: workloadSelector: labels: app: reviews configPatches: # The first patch adds the lua filter to the listener/http connection manager - applyTo: HTTP_FILTER match: context: SIDECAR_INBOUND listener: portNumber: 8080 filterChain: filter: name: &quot;envoy.http_connection_manager&quot; subFilter: name: &quot;envoy.router&quot; patch: operation: INSERT_BEFORE value: # lua filter specification name: envoy.lua config: inlineCode: | function envoy_on_request(request_handle) request_handle:headers():add(&quot;request_body_size&quot;, request_handle:body():length()) end </code></pre> <p>And also the TLS ciphers:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-tls-ingress spec: selector: app: my-tls-ingress-gateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - &quot;*&quot; tls: mode: SIMPLE serverCertificate: /etc/certs/server.pem privateKey: /etc/certs/privatekey.pem cipherSuites: &quot;&lt;tls-ciphers&gt;&quot; </code></pre>
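<p>For the timeout related annotations (<code>proxy-read-timeout</code> / <code>proxy-send-timeout</code>), the closest Istio equivalent is usually a per-route timeout (and optionally retries) on a <code>VirtualService</code> rather than on the Gateway. A sketch, with placeholder host and service names:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - &quot;app.example.com&quot;        # placeholder host
  gateways:
    - my-tls-ingress
  http:
    - route:
        - destination:
            host: my-app-svc     # placeholder backend service
            port:
              number: 80
      timeout: 60s               # roughly maps to proxy-read-timeout
</code></pre>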
<p>From what I understand, when a POD talks to a Service the IP tables have been updated by a CNI provider (this may be specific to some but not all CNI providers). The iptables have basically provided a virtual IP that then round robins or distributes (somehow) to backend ephemeral pods. Those pods may live on the same host or another host in the cluster. At this point (again based on the CNI) conntrack is used to keep src and dst straight as it remaps the svc-ip to the dest-ip of the POD. What I'm wondering though, is if the dest pod is on the same host, I'm not certain how it is routed on the return path. I would suspect via the service still and then possibly using conntrack for the return path.</p> <p>Does kubernetes use conntrack twice when a pod talks to a pod through a service where the destination pod is on the same host?</p>
<p>I will use Calico as an example while trying to explain this topic.</p> <p>Conntrack is only required from the source pod to the service endpoints. You have to track it to send rest of flow packets to the same destination endpoint. The return path always has only one option: from the destination pod to the source pod which means that even if conntrack is used here it doesn't change anything as the return path is managed by the NAT table.</p> <p>Also worth mentioning:</p> <blockquote> <p>For both iptables and IPVS mode, the response time overhead for kube-proxy is associated with establishing connections, not the number of packets or requests you send on those connections. This is because Linux uses connection tracking (conntrack) that is able to match packets against existing connections very efficiently. If a packet is matched in conntrack then it doesn’t need to go through kube-proxy’s iptables or IPVS rules to work out what to do with it.</p> </blockquote> <p>You can use <code>conntrack</code> <a href="https://www.systutorials.com/docs/linux/man/8-conntrack/" rel="nofollow noreferrer">command line interface</a> in order search, list, inspect and maintain the connection tracking subsystem of the Linux kernel.</p> <p>For example:</p> <p><code>conntrack -L</code> will show the connection tracking table in /proc/net/ip_conntrack format:</p> <pre><code> # conntrack -L tcp 6 431982 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34846 dport=993 packets=169 bytes=14322 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34846 packets=113 bytes=34787 [ASSURED] mark=0 secmark=0 use=1 tcp 6 431698 ESTABLISHED src=192.168.2.100 dst=123.59.27.117 sport=34849 dport=993 packets=244 bytes=18723 src=123.59.27.117 dst=192.168.2.100 sport=993 dport=34849 packets=203 bytes=144731 [ASSURED] mark=0 secmark=0 use=1 conntrack v0.9.7 (conntrack-tools): 2 flow entries have been shown. </code></pre> <p>A practical example would be when you change Calico's policy to disallow a flow that was previously allowed. Calico only needs to check its policies for the first packet in an allowed flow—between a pair of IP addresses and ports—and then conntrack automatically allows further packets in the same flow, without Calico rechecking every packet. If packets were recently exchanged on the previously allowed flow, and so there is conntrack state for that flow that has not yet expired, that conntrack state will allow further packets between the same IP addresses and ports, even after the Calico policy has been changed. To avoid that you could delete the relevant conntrack states manually with <code>conntrack -D</code> and than use <code>conntrack -E</code> in order to observe the connection events with a new Calico policy.</p> <p>Sources:</p> <ul> <li><p><a href="https://www.projectcalico.org/when-linux-conntrack-is-no-longer-your-friend/" rel="nofollow noreferrer">Linux Conntrack</a></p> </li> <li><p><a href="https://docs.projectcalico.org/reference/host-endpoints/conntrack" rel="nofollow noreferrer">Connection tracking</a></p> </li> <li><p><a href="https://www.projectcalico.org/comparing-kube-proxy-modes-iptables-or-ipvs/" rel="nofollow noreferrer">Kube-proxy modes</a></p> </li> <li><p><a href="https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg" rel="nofollow noreferrer">Packet flow</a></p> </li> </ul> <p>I hope this helps.</p>
<p>I want to extend my Kubernetes cluster by one node.</p> <p>So I run the scale.yaml Ansible playbook:</p> <pre><code>ansible-playbook -i inventory/local/hosts.ini --become --become-user=root scale.yml </code></pre> <p>But I am getting the error message when uploading the control plane certificates happens:</p> <pre><code>TASK [Upload control plane certificates] *************************************************************************************************************************************************** ok: [jay] fatal: [sam]: FAILED! =&gt; {&quot;changed&quot;: false, &quot;cmd&quot;: [&quot;/usr/local/bin/kubeadm&quot;, &quot;init&quot;, &quot;phase&quot;, &quot;--config&quot;, &quot;/etc/kubernetes/kubeadm-config.yaml&quot;, &quot;upload-certs&quot;, &quot;--upload-certs&quot;], &quot;delta&quot;: &quot;0:00:00.039489&quot;, &quot;end&quot;: &quot;2022-01-08 11:31:37.708540&quot;, &quot;msg&quot;: &quot;non-zero return code&quot;, &quot;rc&quot;: 1, &quot;start&quot;: &quot;2022-01-08 11:31:37.669051&quot;, &quot;stderr&quot;: &quot;error execution phase upload-certs: failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: no such file or directory\nTo see the stack trace of this error execute with --v=5 or higher&quot;, &quot;stderr_lines&quot;: [&quot;error execution phase upload-certs: failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: no such file or directory&quot;, &quot;To see the stack trace of this error execute with --v=5 or higher&quot;], &quot;stdout&quot;: &quot;&quot;, &quot;stdout_lines&quot;: []} </code></pre> <p>Anyone has an idea what the problem could be?</p> <p>Thanks in advance.</p>
<p>I solved it myself.</p> <p>I copied the /etc/kubernetes/admin.conf and /etc/kubernetes/ssl/ca.* to the new node and now the scale playbook works. Maybe this is not the right way, but it worked...</p>
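<p>For reference, that copy step could be done along these lines. This is just a sketch: it assumes SSH root access from an existing control plane node, and <code>new-node</code> is a placeholder hostname:</p>
<pre><code># run on an existing control plane node
scp /etc/kubernetes/admin.conf root@new-node:/etc/kubernetes/admin.conf
scp /etc/kubernetes/ssl/ca.* root@new-node:/etc/kubernetes/ssl/
</code></pre>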
<p>My docker build is failing due to the following error:</p> <blockquote> <p>COPY failed: CreateFile \?\C:\ProgramData\Docker\tmp\docker-builder117584470\Aeros.Services.Kubernetes\Aeros.Services.Kubernetes.csproj: The system cannot find the path specified.</p> </blockquote> <p>I am fairly new to docker and have gone with the basic project template that is set up when you create a Kubernetes container project, so I figured it would work out of the box, but I was mistaken.</p> <p>I'm having problems trying to figure out what it's attempting to do in the temp directory structure and the reason it is failing. Can anyone offer some assistance? I've done some searching and others have said the default docker template was incorrect in Visual Studio, but I'm not seeing any of the files being copied over to the temp directory to begin with, so figuring out what is going on has been rather problematic.</p> <p>Here is the Dockerfile; the only thing I've added is a publishingProfile arg so I can tell it which profile to use in the Build and Publish steps:</p> <pre><code>ARG publishingProfile FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base WORKDIR /app EXPOSE 80 FROM microsoft/dotnet:2.1-sdk AS build WORKDIR /src COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"] RUN dotnet restore "Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj" COPY . ./ WORKDIR "/src/Aeros.Services.Kubernetes" RUN dotnet build "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app FROM build AS publish RUN dotnet publish "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "Aeros.Services.Kubernetes.dll"] </code></pre> <p>I haven't touched the yaml file, but if you need that I can provide it as well. Again, all I've done with this is add a few NuGet packages to the project references. The build in Visual Studio runs fine, but the docker command:</p> <pre><code>docker build . --build-arg publishingProfile=Release </code></pre> <p>is failing with the error mentioned above.</p> <p>Can someone be so kind as to offer some enlightenment? Thanks!</p> <p>Edit 1: I am executing this from the project's folder via a PowerShell command line.</p>
<p>Leandro's comments helped me come across the solution.</p> <p>So first, a rundown of that COPY command: it takes two parameters, source and destination. The Dockerfile template that Visual Studio generates includes the folder location of the .csproj file it is attempting to copy. In my case, the command read as follows:</p> <pre><code>COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"] </code></pre> <p>So it is looking for my Aeros.Services.Kubernetes.csproj file in the Aeros.Services.Kubernetes project folder and copying it to the Aeros.Services.Kubernetes folder in the src folder of Docker.</p> <p>The problem with this is that with the default setup, your Dockerfile is located inside the project folder. If you are executing the docker build from within the project folder, the COPY command is actually looking in the wrong location. For instance, if your project is TestApp.csproj located in the TestApp project folder, and you are executing the Docker build command for the Dockerfile within the same folder, the syntax for that COPY command:</p> <pre><code>COPY ["TestApp/TestApp.csproj", "TestApp/"] </code></pre> <p>is actually looking for: TestApp/TestApp/TestApp.csproj.</p> <p>The correct syntax for the COPY command in this situation should be:</p> <pre><code>COPY ["TestApp.csproj", "TestApp/"] </code></pre> <p>since you are already within the TestApp project folder.</p> <p>Another problem with the default template that may trouble some is that it doesn't copy the web files for the project either, so once you get past the COPY and dotnet restore steps, the build will fail with:</p> <blockquote> <p>CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point</p> </blockquote> <p>This is resolved by adding:</p> <pre><code>COPY . ./ </code></pre> <p>following your RUN dotnet restore command to copy your files.</p> <p>Once these pieces have been addressed in the default template provided, everything should be functioning as expected.</p> <p>Thanks for the help!</p>
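<p>Applied to the Dockerfile in the question, and assuming <code>docker build</code> is run from inside the Aeros.Services.Kubernetes project folder (where the Dockerfile lives), the first COPY line would therefore become:</p>
<pre><code>COPY ["Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"]
</code></pre>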
<p>I have 3 deployments in my namespace. None of the deployments specify any resource limits for either CPU or memory. I am trying to set the default min and max using a LimitRange for all pods and their containers (existing and future) in this namespace. I've deployed a LimitRange resource to the namespace as defined below. However, when I redeploy, the deployments fail with errors (as listed below).</p> <p>LimitRange:</p> <pre><code>apiVersion: "v1" kind: "LimitRange" metadata: name: "core-resource-limits" namespace: x spec: limits: - type: "Pod" max: cpu: 1 memory: 1Gi min: cpu: 2m memory: 50Mi - type: "Container" max: cpu: 1 memory: 800Mi min: cpu: 1m memory: 50Mi default: cpu: 20m memory: 500Mi defaultRequest: cpu: 10m memory: 400Mi maxLimitRequestRatio: cpu: 4 </code></pre> <p><a href="https://i.stack.imgur.com/Y215F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y215F.png" alt="Error" /></a> This is the error that I see.</p> <p>I don't understand where 2040m is coming from, as it's not defined in the LimitRange and it's not defined in any of the deployments. The same goes for all the other limit values. I have tried changing all these values a bit higher/lower, but I can't figure it out. Can someone explain what is wrong with this set of values?</p> <p>Thanks</p> <p>Edit: All the pods are on the same node. The node limits are as follows:</p> <ol> <li>CPU requests: 2.86</li> <li>CPU limits: 48.55</li> <li>Memory requests: 7.459</li> <li>Memory limits: 28.342</li> </ol>
<p>I would like to expand and explain in more detail what might be wrong here. Let's take a closer look at the overview of <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">Limit Range</a>:</p> <ul> <li><p>The administrator creates one <code>LimitRange</code> in one namespace.</p> </li> <li><p>Users create resources like <code>Pods</code>, <code>Containers</code>, and <code>PersistentVolumeClaims</code> in the namespace.</p> </li> <li><p>The <code>LimitRanger</code> admission controller enforces defaults and limits for all <code>Pods</code> and <code>Containers</code> that do not set compute resource requirements and tracks usage to ensure it does not exceed the resource minimum, maximum and ratio defined in any <code>LimitRange</code> present in the namespace.</p> </li> <li><p>If creating or updating a resource (<code>Pod</code>, <code>Container</code>, <code>PersistentVolumeClaim</code>) that violates a <code>LimitRange</code> constraint, the request to the API server will fail with an HTTP status code 403 FORBIDDEN and a message explaining the constraint that has been violated.</p> </li> <li><p>If a <code>LimitRange</code> is activated in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values. Otherwise, the system may reject Pod creation.</p> </li> <li><p><code>LimitRange</code> validation occurs only at the Pod admission stage, not on running <code>Pods</code>.</p> </li> </ul> <p>As already mentioned by @FritzDuchardt, the error message clearly states that the limits are misconfigured or wrongly enforced. This leads us to two things to check:</p> <ul> <li><p>Check if there are any other limits set at the <code>Pod</code> level (<code>kubectl edit pod &lt;pod_name&gt;</code>).</p> </li> <li><p>Within a namespace, a <code>Pod</code> or <code>Container</code> can consume as much CPU and memory as defined by the namespace's <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">resource quota</a>. Check if resource quotas are set with <code>kubectl describe quota</code>. In the case where the total limits of the namespace are less than the sum of the limits of the Pods/Containers, there may be contention for resources. In this case, the Containers or Pods will not be created.</p> </li> </ul> <p>Here is an example of <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/#attempt-to-create-a-pod-that-exceeds-the-maximum-memory-constraint" rel="nofollow noreferrer">attempting to create a Pod that exceeds the maximum memory constraint</a>, which produces the same kind of error you are experiencing.</p> <p>I hope this explains the topic and potential issues in more detail. Please let me know if that helps.</p>
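<p>As a quick sanity check, you can compare what is actually enforced in the namespace with what each Pod ends up requesting. This is only a sketch using the names from your question; <code>&lt;pod-name&gt;</code> is a placeholder:</p>
<pre><code>kubectl describe limitrange core-resource-limits -n x
kubectl describe quota -n x
kubectl get pod &lt;pod-name&gt; -n x -o jsonpath='{.spec.containers[*].resources}'
</code></pre>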
<p>How can I use a ConfigMap to write cluster node information to a JSON file?</p> <p>The command below gives me node information:</p> <pre><code>kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' </code></pre> <p>How can I use a ConfigMap to write the above output to a text file?</p>
<p>You can save the output of the command in a file. Then use that file (or the data inside it) to create a ConfigMap. After creating the ConfigMap, you can mount it as a file in your deployment/pod.</p> <p>For example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: appname name: appname namespace: development spec: selector: matchLabels: app: appname tier: sometier template: metadata: creationTimestamp: null labels: app: appname tier: sometier spec: containers: - env: - name: NODE_ENV value: development - name: PORT value: "3000" - name: SOME_VAR value: xxx image: someimage imagePullPolicy: Always name: appname volumeMounts: - name: your-volume-name mountPath: "your/path/to/store/the/file" readOnly: true volumes: - name: your-volume-name configMap: name: your-configmap-name items: - key: your-filename-inside-pod path: your-filename-inside-pod </code></pre> <p>I added the following configuration to the deployment:</p> <pre><code> volumeMounts: - name: your-volume-name mountPath: "your/path/to/store/the/file" readOnly: true volumes: - name: your-volume-name configMap: name: your-configmap-name items: - key: your-filename-inside-pod path: your-filename-inside-pod </code></pre> <p>To create a ConfigMap from a file:</p> <pre><code>kubectl create configmap your-configmap-name --from-file=your-file-path </code></pre> <p>Or just create a ConfigMap with the output of your command:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: your-configmap-name namespace: your-namespace data: your-filename-inside-pod: | output of command </code></pre>
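<p>Tying this back to the command in the question, the ConfigMap could be created directly from that output. This is only a sketch; <code>node-info</code>, <code>nodes.txt</code> and the namespace are placeholder names:</p>
<pre><code>kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' &gt; nodes.txt
kubectl create configmap node-info --from-file=nodes.txt -n &lt;your-namespace&gt;
</code></pre>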
<p>I want to add a new control plane node to the cluster.</p> <p>So, I run this on an existing control plane server: <code>kubeadm token create --print-join-command</code></p> <p>I run this command on the new control plane node:</p> <pre><code>kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 33cf0a1d30da4c714755b4de4f659d6d5a02e7a0bd522af2ebc2741487e53166 </code></pre> <ol start="3"> <li>I got this message:</li> </ol> <pre><code>[download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-etcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt </code></pre> <ol start="4"> <li>I run this on an existing production control plane node:</li> </ol> <pre><code>kubeadm init phase upload-certs --upload-certs [upload-certs] Storing the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace [upload-certs] Using certificate key: 0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b </code></pre> <ol start="5"> <li>I re-run this command on the new control plane node:</li> </ol> <pre><code>kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b </code></pre> <p>I got the same message:</p> <pre><code>[download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-etcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>What am I doing wrong?</p> <p>I had all the certs installed on the new node before doing this:</p> <pre><code># ls /etc/kubernetes/pki/ apiserver.crt apiserver.key ca.crt front-proxy-ca.crt front-proxy-client.key apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.key sa.key apiserver-etcd-client.key apiserver-kubelet-client.key etcd front-proxy-client.crt sa.pub </code></pre> <p>I didn't see how to specify the etcd cert files:</p> <pre><code>Usage: kubeadm init phase upload-certs [flags] Flags: --certificate-key string Key used to encrypt the control-plane certificates in the kubeadm-certs Secret. --config string Path to a kubeadm configuration file. -h, --help help for upload-certs --kubeconfig string The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default &quot;/etc/kubernetes/admin.conf&quot;) --skip-certificate-key-print Don't print the key used to encrypt the control-plane certificates. --upload-certs Upload control-plane certificates to the kubeadm-certs Secret.
Global Flags: --add-dir-header If true, adds the file directory to the header of the log messages --log-file string If non-empty, use this log file --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level) --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem. --skip-headers If true, avoid header prefixes in the log messages --skip-log-headers If true, avoid headers when opening log files -v, --v Level number for the log level verbosity </code></pre>
<p>You also need to pass the <code>--config</code> flag to your <code>kubeadm init phase</code> command (use <code>sudo</code> if needed). So instead of:</p> <pre><code>kubeadm init phase upload-certs --upload-certs </code></pre> <p>you should for example run:</p> <pre><code>kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml </code></pre> <p>This topic is also explained by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#uploading-control-plane-certificates-to-the-cluster" rel="nofollow noreferrer">Uploading control-plane certificates to the cluster</a> docs.</p>
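<p>For context, the <code>kubeadm-config.yaml</code> passed here would normally be the same config that was used on the existing control plane nodes. Since the error in the question mentions <code>apiserver-etcd-client.crt</code>, the cluster presumably uses an external etcd, so a minimal sketch of such a config (the etcd endpoint address is a placeholder) might look like:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "10.0.0.151:8443"
etcd:
  external:
    endpoints:
      - https://10.0.0.20:2379        # placeholder etcd endpoint
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
</code></pre>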
<p>I have installed CP-Kafka locally in K8s on Docker Desktop. After a day or so the Control Center remains in a weird state. The service is visible:</p> <pre><code>$ kgsvc -l app=cp-control-center -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR officefriday-cp-control-center NodePort 10.106.121.181 &lt;none&gt; 9021:30569/TCP 15h app=cp-control-center,release=officefriday </code></pre> <p>The pod for the service is not there:</p> <pre><code>$ kgpo NAME READY STATUS RESTARTS AGE curl 1/1 Running 0 12d dnsutils 1/1 Running 0 14d kafka-client 1/1 Running 0 40h officefriday-cp-kafka-0 2/2 Running 0 14h officefriday-cp-kafka-1 2/2 Running 0 14h officefriday-cp-kafka-2 2/2 Running 0 14h officefriday-cp-kafka-connect-c77bd598c-x42tj 2/2 Running 1 15h officefriday-cp-kafka-rest-6559b7588b-ld8wk 2/2 Running 0 15h officefriday-cp-ksql-server-7fcdcdccd5-lwt2w 2/2 Running 0 15h officefriday-cp-schema-registry-84855ff8f6-v6p28 2/2 Running 1 15h officefriday-cp-zookeeper-0 2/2 Running 0 15h officefriday-cp-zookeeper-1 2/2 Running 0 15h officefriday-cp-zookeeper-2 2/2 Running 0 15h </code></pre> <p>But the problem is that, as far as K8s is concerned, there is nothing wrong with it.</p> <p>Now the CC service is still reachable:</p> <p><a href="https://i.stack.imgur.com/D7uqY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D7uqY.png" alt="enter image description here"></a></p> <p>I have restarted Docker and everything else works apart from the missing pod.</p> <p>How can one troubleshoot something like that? There aren't many good hints out there.</p> <p>Thanks!</p>
<p>It is very hard to say what might be the cause of this particular problem without any relevant logs or error messages. However, there are some recommended steps that should be taken in order to debug this and any similar problem in the future.</p> <p>According to the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/" rel="nofollow noreferrer">official documentation</a> you should start with:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-pods" rel="nofollow noreferrer">Debugging Pods</a></p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-replication-controllers" rel="nofollow noreferrer">Debugging Replication Controllers</a></p> </li> <li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-services" rel="nofollow noreferrer">Debugging Services</a></p> </li> </ul> <p>Also, it is strongly suggested to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">examine Pod logs</a> if possible.</p> <p>I hope this helps to solve your issue. Please let me know what your findings are so I can help further if needed.</p>
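<p>As a concrete starting point, and assuming the Control Center Deployment follows the same naming as the Service (which is a guess based on your output, not something I can confirm), the first commands to run would be along these lines:</p>
<pre><code>kubectl describe deployment officefriday-cp-control-center
kubectl get replicaset -l app=cp-control-center
kubectl describe replicaset -l app=cp-control-center
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>
<p>If the ReplicaSet reports failed pod creations, or the events show scheduling or OOM problems, that usually points to why the pod never comes back.</p>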