Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I have a problem with a local kind Kubernetes cluster.</p>
<p>I applied a regcred secret with the relevant details of my private registry, and then a deployment that points to that registry and uses that secret, but the pods aren't able to pull the image. I tested the same deployment file and secret on a non-local Kubernetes cluster and the pods run fine there.</p>
<p>deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: db-deployment
labels:
app: db-deployment
spec:
replicas: 3
template:
metadata:
name: db-deployment
labels:
app: db-deployment
spec:
containers:
- name: db-deployment
image: *** private docker registry ***
ports:
- containerPort: 5001
command: ["python", "flask_main.py"]
restartPolicy: Always
imagePullSecrets:
- name: regcred
selector:
matchLabels:
app: db-deployment
</code></pre>
<p>I exec'd into my kind node container and then ran "crictl pull <em><strong>private docker registry</strong></em>/db:v1" and got the following error:</p>
<pre><code>pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "***private docker registry***/db:v1": failed to resolve reference "***private docker registry***/db:v1": failed to do request: Head https://***private docker registry***/db/manifests/v1: x509: certificate signed by unknown authority
</code></pre>
<p>I tried to add the relevant certificate to C:\Program Data\Docker\certs.d and restarted Docker, but that didn't help.
What can I do from here?
Thanks in advance.</p>
| Yaakov Shami | <p>To add an insecure docker registry, add the file <code>C:\ProgramData\docker\config\daemon.json</code> with the following content:</p>
<pre><code>{
  "insecure-registries" : [ "your.private.registry.host" ]
}
</code></pre>
<p>and then you need to restart docker.</p>
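<p>Note that kind nodes run containerd rather than the host Docker daemon (the error above came from <code>crictl</code> inside the node), so for kind specifically an alternative, not from the original answer, is to configure the registry through <code>containerdConfigPatches</code> in the kind cluster config. A sketch; the registry host and config file name are placeholders, and the cluster has to be recreated with <code>kind create cluster --config kind-config.yaml</code>:</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    # trust (or skip verification of) the private registry's TLS inside the kind node
    [plugins."io.containerd.grpc.v1.cri".registry.configs."your.private.registry.host".tls]
      insecure_skip_verify = true
</code></pre>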
| Abhijit Gaikwad |
<p>I'm trying to find out how to get <code>preStop</code> execution results for debugging purposes.</p>
<p>I'm creating a pod (not part of a deployment) with the following lifecycle definition:</p>
<pre><code> terminationGracePeriodSeconds: 60
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- "echo trying post_stop;sleep 60"
</code></pre>
<p>When I run it I can see it waiting 60 seconds, but I cannot see any events for the <code>preStop</code> hook being triggered, neither when I run <code>kubectl get events</code> nor when I run <code>kubectl describe pod <my-pod></code>.
More than that, I would love to know where the hook's logs are written; I tried running <code>kubectl logs <my-pod> -f</code> but did not see any logs there.</p>
| Lior Baber | <p>You were on the right path with <code>kubectl describe</code>, check out the following location: <code>/dev/termination-log</code> see also the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#writing-and-reading-a-termination-message" rel="nofollow noreferrer">docs</a>.</p>
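<p>For example, a minimal sketch based on the pod from the question: writing the hook's output to the default <code>terminationMessagePath</code> makes it show up in <code>kubectl describe pod</code> under the container's last state once the container terminates:</p>
<pre><code>  terminationGracePeriodSeconds: 60
  lifecycle:
    preStop:
      exec:
        command:
          - /bin/sh
          - -c
          - "echo trying pre_stop > /dev/termination-log; sleep 60"
</code></pre>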
| Michael Hausenblas |
<p>Is there a way to configure kube-proxy in GKE?</p>
<p>I can see the pods created by the DaemonSet, but I cannot see the DaemonSet itself.</p>
<p>Thanks for your help.</p>
| matth3o | <p>kube-proxy pod in k8s (not only in GKE) is created as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/" rel="nofollow noreferrer">Static Pod</a>.</p>
<p>The kubelet automatically creates a so-called mirror pod on the Kubernetes API server for each static pod, so the pods are visible there, but they cannot be controlled from the API server.<a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/static-pod/" rel="nofollow noreferrer">2</a></p>
<p>That's why you cannot edit and configure it as a usual API object.</p>
<p>However, you can edit the kube-proxy manifest on the nodes and the kubelet will apply the new configuration.
The static pod manifest is located on each node at</p>
<blockquote>
<p>/etc/kubernetes/manifests/kube-proxy.manifest</p>
</blockquote>
<p>You can SSH into each node and change it manually, but this can be automated with a DaemonSet, borrowing Cilium's approach for removing kube-proxy<a href="https://docs.cilium.io/en/v1.9/gettingstarted/kubeproxy-free/" rel="nofollow noreferrer">3</a> and modifying it a little bit<a href="https://tech.griphone.co.jp/2020/08/06/cilium-kube-proxy-replacement/" rel="nofollow noreferrer">4</a>.</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy-configurator
namespace: kube-system
spec:
selector:
matchLabels:
name: kube-proxy-configurator
template:
metadata:
labels:
name: kube-proxy-configurator
spec:
initContainers:
- command:
- /bin/sh
- -c
- |
echo 'Changing kube-proxy iptables-min-sync-period'
sed -i 's/iptables-min-sync-period=10s/iptables-min-sync-period=2s/g' /etc/kubernetes/manifests/kube-proxy.manifest
echo 'All Done!'
image: alpine:latest
name: kube-proxy-configurator
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/kubernetes
name: kubernetes-configs
containers:
- image: k8s.gcr.io/pause
name: pause
terminationGracePeriodSeconds: 0
volumes:
- hostPath:
path: /etc/kubernetes
type: Directory
name: kubernetes-configs
</code></pre>
<p>Just apply it and the kube-proxy configuration will be changed on each node (even on nodes newly created by the autoscaler).</p>
<pre><code>kubectl apply -f kube-proxy-configurator.yml
</code></pre>
| d.garanzha |
<p>At the moment I have the following config in one of my services:</p>
<pre><code>spec:
clusterIP: None
ports:
- name: grpc
port: 9000
protocol: TCP
targetPort: 9000
selector:
app: my-first-app
environment: production
group: my-group
type: ClusterIP
</code></pre>
<p>in the selector section, I need to add another app:</p>
<pre><code>.
.
app: my-first-app my-second-app
.
.
</code></pre>
<p>How can I achieve this?</p>
<p>Thank you!</p>
| Moein | <pre><code>selector:
matchExpressions:
- {key: app, operator: In, values: [my-first-app, my-second-app]}
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a></p>
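<p>Note that the core <code>Service.spec.selector</code> field itself only accepts an equality-based label map, so if the API server rejects the set-based form for your Service, a common alternative (a sketch with assumed label names, not part of the original answer) is to put a shared label on both apps' pod templates and select on that:</p>
<pre><code># added to the pod template labels of both my-first-app and my-second-app
labels:
  app-group: my-apps

# the Service then selects on the shared label
selector:
  app-group: my-apps
</code></pre>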
| Abhijit Gaikwad |
<p>How can I check and/or wait until an apiVersion and kind exist before trying to apply a resource that uses them?</p>
<p><strong>Example:</strong></p>
<p><em>Install cilium and network policy using cilium</em></p>
<pre><code>kubectl apply -f cilium.yaml
kubectl apply -f policy.yaml # fails if run just after installing cilium, since cilium.io/v2 and CiliumNetworkPolicy doesn't exist yet
</code></pre>
<p><a href="https://github.com/cilium/cilium/blob/master/examples/kubernetes/1.13/cilium.yaml" rel="nofollow noreferrer"><em>cilium.yaml</em></a></p>
<p><em>policy.yaml</em></p>
<pre><code>apiVersion: cilium.io/v2
description: example policy
kind: CiliumNetworkPolicy
...
</code></pre>
<p><strong>EDIT:</strong> <em>(solved with following script)</em></p>
<pre><code>#! /bin/bash
function check_api {
local try=0
local retries=30
until (kubectl "api-$1s" | grep -P "\b$2\b") &>/dev/null; do
(( ++try > retries )) && exit 1
echo "$2 not found. Retry $try/$retries"
sleep 3
done
}
kubectl apply -f cilium.yaml
check_api version cilium.io/v2
check_api resource CiliumNetworkPolicy
kubectl apply -f policy.yaml
</code></pre>
| warbaque | <p>You can use the following to check for supported versions and kinds, that is, check what the API server you're talking to supports:</p>
<pre><code>$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
...
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
</code></pre>
<p>There's also <code>kubectl api-resources</code> that provides you with a tabular overview of the kinds, shortnames, and if a resource is namespaced or not.</p>
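<p>If the CRD object itself ships in the manifest you apply (rather than being registered later at runtime by an operator), you can also block on it becoming established instead of polling. Assuming the Cilium CRD name:</p>
<pre><code>kubectl apply -f cilium.yaml
kubectl wait --for condition=established --timeout=60s crd/ciliumnetworkpolicies.cilium.io
kubectl apply -f policy.yaml
</code></pre>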
| Michael Hausenblas |
<p>I currently use the following script to wait for the job completion</p>
<pre><code>ACTIVE=$(kubectl get jobs my-job -o jsonpath='{.status.active}')
until [ -z "$ACTIVE" ]; do ACTIVE=$(kubectl get jobs my-job -o jsonpath='{.status.active}') ; sleep 30 ; done
</code></pre>
<p>The problem is the job can either fail or be successful as it is a test job.</p>
<p>Is there a better way to achieve the same?</p>
| Devendra Bhatte | <p>Yes. As I pointed out in <a href="https://hackernoon.com/kubectl-tip-of-the-day-wait-like-a-boss-40a818c423ac" rel="nofollow noreferrer">kubectl tip of the day: wait like a boss</a>, you can use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">kubectl wait</a> command.</p>
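<p>A minimal sketch for the job from the question; the second command is only needed if you also want to detect the failure case:</p>
<pre><code># blocks until the job reports the Complete condition (non-zero exit on timeout)
kubectl wait --for=condition=complete --timeout=600s job/my-job

# optionally, in parallel or afterwards, catch the failure case
kubectl wait --for=condition=failed --timeout=600s job/my-job
</code></pre>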
| Michael Hausenblas |
<p>(WSL2, minikube, Windows 11, Calico plugin, Docker-Desktop)</p>
<p>I simply don't understand why curl timed out when doing <code>curl http://$(minikube ip):32000</code>.
I saw <a href="https://stackoverflow.com/questions/54699742/minikube-has-ip-cant-curl">SO questions</a> but they are not applicable.</p>
<pre><code>$ k create deploy nginxsvc --image=nginx --replicas=3
$ k expose deploy nginxsvc --port=80
</code></pre>
<p>Then I edited the service to change Service Type to NodePort.</p>
<pre><code>$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18d
nginxsvc NodePort 10.103.11.225 <none> 80:32000/TCP 19m
regular-encourager NodePort 10.102.236.129 <none> 8080:32058/TCP 15d
$ curl http://$(minikube ip):32000
curl: (28) Failed to connect to 192.168.49.2 port 32000: Connection timed out
$ k describe svc nginxsvc
Name: nginxsvc
Namespace: default
Labels: app=nginxsvc
Annotations: <none>
Selector: app=nginxsvc
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.11.225
IPs: 10.103.11.225
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32000/TCP
Endpoints: 10.244.1.233:80,10.244.1.234:80,10.244.1.235:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ k get svc nginxsvc -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2023-03-28T23:18:04Z"
labels:
app: nginxsvc
name: nginxsvc
namespace: default
resourceVersion: "531373"
uid: fcd7960e-8610-4099-a014-9b4ce94cdec2
spec:
clusterIP: 10.103.11.225
clusterIPs:
- 10.103.11.225
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 32000
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginxsvc
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
| likejudo | <p>As @glv pointed <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-service-with-tunnel" rel="nofollow noreferrer">out</a>,</p>
<blockquote>
<p>The network is limited if using the Docker driver on Darwin, Windows,
or WSL, and the Node IP is not reachable directly.</p>
</blockquote>
<p>The solution was to</p>
<ol>
<li><p>remove minikube from WSL</p>
<p><code>minikube delete --purge</code></p>
</li>
<li><p>Then install VirtualBox on Windows</p>
</li>
<li><p>Install Ubuntu</p>
</li>
<li><p>Install docker then minikube, kubectl on Ubuntu</p>
</li>
<li><p>Start minikube</p>
<p><code>minikube start --memory=6g --cpus=4 --driver=docker --cni=calico</code></p>
</li>
</ol>
<p>Now all the networking that failed earlier is working!</p>
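<p>For reference, the workaround the linked minikube docs describe for staying on the Docker driver is to let minikube open a tunnel for the service and curl the URL it prints instead of the node IP:</p>
<pre><code>minikube service nginxsvc --url
# keep that terminal open and curl the printed 127.0.0.1:<port> URL from another shell
</code></pre>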
| likejudo |
<p>We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:</p>
<p>Secrets</p>
<pre><code>- User-Service-Secret
- Transaction-Service-Secret
</code></pre>
<p>Service Account</p>
<pre><code>- User-Service
- Transaction-Service
</code></pre>
<p>Pods</p>
<pre><code>- User-Service-Pod
- Transaction-Service-Pod
</code></pre>
<p>The idea is to restrict access to the <code>User-Service-Secret</code> secret to the <code>User-Service</code> service account that is assigned to <code>User-Service-Pod</code>. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because <code>Transaction-Service-Pod</code> can just as easily read the <code>User-Service-Secret</code> secret when the pod starts up, even though the service account it is assigned to doesn't have <code>get</code> permission on <code>User-Service-Secret</code>.</p>
<p>How do we actually enforce the RBAC system?</p>
<p>FYI we are using EKS</p>
| blee908 | <p>First it is important to distinguish between API access to the secret and consuming the secret as an environment variable or a mounted volume.</p>
<p>TLDR:</p>
<ul>
<li>RBAC controls who can access a secret (or any other resource) using K8s API requests.</li>
<li>Namespaces or the service account's <code>secrets</code> attribute control if a pod can consume a secret as an environment variable or through a volume mount.</li>
</ul>
<h1>API access</h1>
<p>RBAC is used to control whether an identity (in your example the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds your identity (the service account) to a Role (namespaced) or a ClusterRole (cluster-wide). Then, when you assign the service account to a pod by setting the <code>serviceAccountName</code> attribute, running <code>kubectl get secret</code> in that pod, or the equivalent call from one of the client libraries, uses that service account's credentials for the API request.</p>
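<p>A minimal sketch of such a binding, with assumed names, that only allows reading one specific secret via the API:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["user-service-secret"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-secret-reader
subjects:
- kind: ServiceAccount
  name: user-service
roleRef:
  kind: Role
  name: user-service-secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>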
<h1>Consuming Secrets</h1>
<p>This, however, is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container. Note: per container, not per pod. You can limit what secrets a pod can mount by having the pods in different namespaces, because a pod can only refer to a secret in the same namespace. Additionally, you can use the service account's <code>secrets</code> attribute to limit what secrets a pod with that service account can refer to.</p>
<pre><code>$ kubectl explain sa.secrets
KIND: ServiceAccount
VERSION: v1
RESOURCE: secrets <[]Object>
DESCRIPTION:
Secrets is the list of secrets allowed to be used by pods running using
this ServiceAccount. More info:
https://kubernetes.io/docs/concepts/configuration/secret
ObjectReference contains enough information to let you inspect or modify
the referred object.
</code></pre>
<p>You can learn more about the security implications of Kubernetes secrets in the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#restrictions" rel="nofollow noreferrer">secret documentation</a>.</p>
| pst |
<p>I have the following deployment.yaml file in Kubernetes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: basic-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: basic
spec:
containers:
- name: basic
image: nginx
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/conf.d
volumes:
- name: config-volume
configMap:
name: basic-config
</code></pre>
<p>I am not sure how I can fix the following error when I run <code>kubectl create -f basic-deployment.yaml</code>:</p>
<p><strong>The Deployment "basic-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"basic"}: <code>selector</code> does not match template <code>labels</code></strong></p>
| coderWorld | <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: basic-deployment
spec:
replicas: 2
selector:
matchLabels:
app: basic
template:
metadata:
labels:
app: basic
spec:
containers:
- name: basic
image: nginx
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/conf.d
volumes:
- name: config-volume
configMap:
name: basic-config
</code></pre>
<p>Basically, the selector's <code>matchLabels</code> in your Deployment spec needs to match a label in your pod template. In your case, the selector matches on <code>app: nginx</code> while the template has <code>app: basic</code>, so there is no match.</p>
<p>You have to use the same label, either <code>app: nginx</code> or <code>app: basic</code>, in both places so that they match.</p>
| Abhijit Gaikwad |
<p>I've tried to reproduce the <a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/Example.java" rel="nofollow noreferrer">example in k8s-java</a> in my mini-maven project.
However, I keep getting this error: Exception in thread "main" java.lang.NoClassDefFoundError: io/kubernetes/client/openapi/ApiException.
My Maven project is borrowed from <a href="https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html" rel="nofollow noreferrer">maven in 5 minutes</a> with only a few lines changed.</p>
<ul>
<li>App.java</li>
</ul>
<pre class="lang-java prettyprint-override"><code>package com.mycompany.app;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.openapi.models.V1PodList;
import io.kubernetes.client.util.Config;
import java.io.IOException;
/**
* A simple example of how to use the Java API
*
* <p>Easiest way to run this: mvn exec:java
* -Dexec.mainClass="io.kubernetes.client.examples.Example"
*
* <p>From inside $REPO_DIR/examples
*/
public class App {
public static void main(String[] args) throws IOException, ApiException {
ApiClient client = Config.defaultClient();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
V1PodList list =
api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
}
}
</code></pre>
<ul>
<li>pom.xml</li>
</ul>
<pre class="lang-xml prettyprint-override"><code> <dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.kubernetes</groupId>
<artifactId>client-java</artifactId>
<version>10.0.0</version>
</dependency>
</dependencies>
</code></pre>
<ul>
<li>Commands</li>
</ul>
<ol>
<li><code>mvn clean package</code></li>
<li><code>java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App</code></li>
</ol>
<ul>
<li>Errors</li>
</ul>
<pre><code>Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: io/kubernetes/client/openapi/ApiException
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:650)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:632)
Caused by: java.lang.ClassNotFoundException: io.kubernetes.client.openapi.ApiException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 7 more
</code></pre>
<p>Thanks anyone for the help!</p>
| Byron Hsu | <p>Run it using <code>mvn exec:java -D exec.mainClass=com.mycompany.app.App</code>. The jar produced by <code>mvn package</code> does not bundle the <code>client-java</code> dependencies, so <code>java -cp target/my-app-1.0-SNAPSHOT.jar ...</code> cannot find them at runtime; <code>mvn exec:java</code> runs the class with the full Maven dependency classpath instead.</p>
| Abhijit Gaikwad |
<p>We are building an application with Argo Workflows. We are wondering if we could just set up an OpenTelemetry Collector inside our Kubernetes cluster and use it to export stdout logs into the Elastic stack. I couldn't find information on whether OTel can export logs without instrumentation. Any thoughts?</p>
| Vsevolod Mitskevich | <p>The short answer is: yes. For logs you don't need instrumentation, in principle.</p>
<p>However, logs support is still in development, so you'd need to track upstream and deal with the fact that you're operating against a moving target. There are a number of upstream components you can use; for example, you can combine the <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver" rel="nofollow noreferrer">Filelog Receiver</a> and the <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter" rel="nofollow noreferrer">Elasticsearch Exporter</a> in a pipeline. I recently did a <a href="https://gist.github.com/mhausenblas/98dd345cf5476c0b025c724ea73df1a8" rel="nofollow noreferrer">POC and demo</a> (just ignore the custom collector part and use upstream) that you could use as a starting point.</p>
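<p>A rough sketch of such a pipeline; treat the exact field names and paths as assumptions to verify against the respective receiver/exporter READMEs for your collector-contrib version:</p>
<pre><code>receivers:
  filelog:
    include: [ /var/log/pods/*/*/*.log ]
exporters:
  elasticsearch:
    endpoints: [ "https://elasticsearch.example.com:9200" ]
service:
  pipelines:
    logs:
      receivers: [ filelog ]
      exporters: [ elasticsearch ]
</code></pre>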
| Michael Hausenblas |
<p>I have a Python process that does some heavy computations with Pandas and such - not my code so I basically don't have much knowledge on this.</p>
<p>The situation is this Python code used to run perfectly fine on a server with 8GB of RAM maxing all the resources available.</p>
<p>We moved this code to Kubernetes and we can't make it run: even increasing the allocated resources up to 40GB, this process is greedy and will inevitably try to get as much as it can until it gets over the container limit and gets killed by Kubernetes.</p>
<p>I know this code is probably suboptimal and needs rework on its own.</p>
<p>However, my question is how to get Docker on Kubernetes to mimic what Linux did on the server: give as many resources as the process needs without killing it?</p>
| Nicolas Landier | <p>I found out that running something like this seems to work:</p>
<pre class="lang-py prettyprint-override"><code>import resource
import os
if os.path.isfile('/sys/fs/cgroup/memory/memory.limit_in_bytes'):
with open('/sys/fs/cgroup/memory/memory.limit_in_bytes') as limit:
mem = int(limit.read())
resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
</code></pre>
<p>This reads the memory limit file from cgroups and sets it as both the hard and soft limit for the process's maximum address space.</p>
<p>You can test it by running something like:</p>
<pre><code>docker run -it --rm -m 1G --cpus 1 python:rc-alpine
</code></pre>
<p>And then trying to allocate 1G of ram before and after running the script above.</p>
<p>With the script, you'll get a <code>MemoryError</code>, without it the container will be killed.</p>
| caarlos0 |
<p>How can I check whether a pod has access to a URL like this:</p>
<p>http://hostname:8080</p>
<p>It says connection failed. I went through a lot of documentation but was unable to figure out how to check the connection.</p>
<p>Thanks,
Vijay.</p>
| Vijay | <p>You should set up egress.</p>
<p>For example, the following YAML will allow all egress traffic:</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-egress
spec:
podSelector: {}
egress:
- {}
policyTypes:
- Egress
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a></p>
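<p>To actually test the connection from inside the pod (assuming the container image ships a shell with curl or wget), something like this works:</p>
<pre><code>kubectl exec -it <pod-name> -- curl -sv --max-time 5 http://hostname:8080
# or, in minimal images that only have wget
kubectl exec -it <pod-name> -- wget -qO- --timeout=5 http://hostname:8080
</code></pre>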
| Abhijit Gaikwad |
<p>There are two ways to deploy the OpenTelemetry Collector on Kubernetes
(<a href="https://opentelemetry.io/docs/collector/deployment/" rel="nofollow noreferrer">https://opentelemetry.io/docs/collector/deployment/</a>):
Agent and Gateway.</p>
<p>My question is: when deploying the OpenTelemetry Collector as a DaemonSet, why do we still need the Agent?
<a href="https://www.aspecto.io/blog/opentelemetry-collector-guide/" rel="nofollow noreferrer">https://www.aspecto.io/blog/opentelemetry-collector-guide/</a>
<a href="https://i.stack.imgur.com/yTtgL.png" rel="nofollow noreferrer">agent mode</a></p>
<p>Also, is it a good approach to deploy OpenTelemetry as a DaemonSet without the Agent?</p>
| Idan Reuven | <p>Before I get to the core of your question, note that the agent/gateway modes are not Kubernetes specific. This pattern is equally valid and applicable in case you're running your microservices on, say, virtual machines or ECS.</p>
<p>Now, let's dive deeper into why there are cases where it makes sense to use the OpenTelemetry collector in agent/gateway mode. At its core, it's about separation of concerns, enabling different teams to focus on things they care about.</p>
<p>The <em>agent mode</em> means running an OpenTelemetry collector "close" to the workload. In the context of Kubernetes, this could be a sidecar (another container running the collector in the app pod), you could run a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> per namespace, and indeed you could run the collector as a DaemonSet. No matter how you run these collectors, the expectation would be that the dev or product team owns the collector (config) and with it decides what receivers are needed for workloads. For example, one team needs a <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver" rel="nofollow noreferrer">Prometheus receiver</a>, another team needs a <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver" rel="nofollow noreferrer">Statsd receiver</a> for their app. We get back to the outbound (exporter) side of the collector in a moment.</p>
<p>The <em>gateway mode</em> means a central (standalone) operation of the collector, typically owned by the platform team, enabling them to:</p>
<ol>
<li>Centrally enforce policies such as filtering sensitive log items, making sampling decisions for traces, dropping certain metrics, and so forth.</li>
<li>Manage permissions and credentials in a central place. For example, in order to ingest metrics using the <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter" rel="nofollow noreferrer">Prometheus Remote Write exporter</a> into Amazon Managed Service for Prometheus, the collector needs to use an <a href="https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-ingest-metrics-OpenTelemetry.html" rel="nofollow noreferrer">IAM role with a specific IAM policy attached</a>. The same applies for OTLP exporters, requiring you to provide an API key.</li>
<li>Scale the collector: either by running a bunch of OpenTelemetry collectors behind a load balancer and scaling them horizontally, or by vertically scaling a single collector instance. By managing the collector scaling, the platform team can ensure that all signals (traces, metrics, logs) are reliably delivered to the backend destinations.</li>
</ol>
<p>Now we get back to the communication between agents and gateway: this is done via the <a href="https://opentelemetry.io/docs/reference/specification/protocol/otlp/" rel="nofollow noreferrer">OpenTelemetry Protocol (OTLP)</a>. That is, using the OTLP exporter on the agent side and the OTLP receiver on the gateway side, which further simplifies both the task on the side of the dev/product teams and the platform team, providing for a secure and performant telemetry data transfer.</p>
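<p>As a rough sketch of that hand-off (the gateway service name, namespace and TLS settings are placeholders), the agent's pipelines end in an OTLP exporter pointed at the gateway's OTLP receiver:</p>
<pre><code># agent side (e.g. the DaemonSet collectors)
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: otel-gateway.observability.svc.cluster.local:4317
    tls:
      insecure: true   # placeholder; configure real TLS in production
service:
  pipelines:
    traces:
      receivers: [ otlp ]
      exporters: [ otlp ]

# gateway side: receives OTLP from the agents, then exports to the backend(s)
receivers:
  otlp:
    protocols:
      grpc:
</code></pre>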
| Michael Hausenblas |
<p><strong>Is there a way to extend the kustomize image transformer to recognise more keys as image specifiers?</strong> Like the <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#name-reference-transformer" rel="nofollow noreferrer"><code>nameReference</code> transformer</a> does for the <code>namePrefix</code> and <code>nameSuffix</code> transformers.</p>
<p><a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md" rel="nofollow noreferrer">The Kustomize <code>images:</code> transformer</a> is very useful for image replacement and registry renaming in k8s manifests.</p>
<p>But it <a href="https://github.com/kubernetes-sigs/kustomize/issues/686" rel="nofollow noreferrer">only supports types that embed <code>PodTemplate</code></a> and maybe some hardcoded types. CRDs that don't use <code>PodTemplate</code> are not handled despite them being <em>very</em> common. Examples include the <code>kube-prometheus</code> <code>Prometheus</code> and <code>AlertManager</code> resources and the <code>opentelemetry-operator</code> <code>OpenTelemetryCollector</code> resource.</p>
<p>As a result you land up having to maintain a bunch of messy strategic merge or json patches to prefix such images with a trusted registry or the like.</p>
<hr />
<p>Here's an example of the problem as things stand. Say I have to deploy everything prefixed with <code>mytrusted.registry</code> with an <code>images:</code> transformer list. For the sake of brevity here I'll use a dummy one that replaces all matched images with <code>MATCHED</code>, so I don't have to list them all:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "https://github.com/prometheus-operator/kube-prometheus"
images:
- name: "(.*)"
newName: "MATCHED"
newTag: "fake"
</code></pre>
<p>You'd expect the only images in the result to be "MATCHED:fake", but in reality:</p>
<pre><code>$ kustomize build | grep 'image: .*' | sort | uniq -c
12 image: MATCHED:fake
1 image: quay.io/prometheus/alertmanager:v0.24.0
1 image: quay.io/prometheus/prometheus:v2.34.0
</code></pre>
<p>the images in the <code>kind: Prometheus</code> and <code>kind: AlertManager</code> resources don't get matched because they are not a <code>PodTemplate</code>.</p>
<p>You have to write a custom patch for these, which creates mess like this <code>kustomization.yaml</code> content:</p>
<pre><code>patches:
- path: prometheus_image.yaml
target:
kind: Prometheus
- path: alertmanager_image.yaml
target:
kind: Alertmanager
</code></pre>
<p>with <code>prometheus_image.yaml</code>:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: ignored
spec:
image: "MATCHED:fake"
</code></pre>
<p>and <code>alertmanager_image.yaml</code>:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
name: ignored
spec:
image: "MATCHED:fake"
</code></pre>
<p>which is IMO ghastly.</p>
<p>What I <em>want</em> to be able to do is tell <code>Kustomize</code>'s image transformer about it, like it can be extended with custom configmap generators, etc, like the following <em>unsupported and imaginary pseudocode</em> modeled on the existing <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#name-reference-transformer" rel="nofollow noreferrer"><code>nameReference</code> transformer</a></p>
<pre><code>imageReference:
- kind: Prometheus
fieldSpecs:
- spec/image
</code></pre>
| Craig Ringer | <p>Just after writing this up I finally stumbled on the answer: Kustomize does support <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/images/README.md" rel="noreferrer">image transformer configs</a>.</p>
<p>The correct way to express the above would be a <code>image_transformer_config.yaml</code> file containing:</p>
<pre><code>images:
- path: spec/image
kind: Prometheus
- path: spec/image
kind: Alertmanager
</code></pre>
<p>and a <code>kustomization.yaml</code> entry referencing it, like</p>
<pre><code>configurations:
- image_transformer_config.yaml
</code></pre>
<p>This appears to work fine when imported as a <code>Component</code> too.</p>
<p>It's even <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#images-transformer" rel="noreferrer">pointed out by the transformer docs</a> so I'm going to blame this one on being blind.</p>
| Craig Ringer |
<p>This question is about logging/monitoring.</p>
<p>I'm running a 3 node cluster on AKS, with 3 orgs: Dev, Test and Prod. The chart worked fine in Dev, but in Test the pod from the same chart keeps getting killed by Kubernetes, recreated, and killed again. Is there a way to extract details on why this is happening? All I see when I describe the pod is Reason: Killed.</p>
<p>Please tell me more details on this or can give some suggestions. Thanks!</p>
| user8172324 | <h1>List Events sorted by timestamp</h1>
<pre><code>kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>
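<p>The container's last state and its previous logs usually show whether it was OOM-killed or exited on its own:</p>
<pre><code>kubectl describe pod <pod-name>      # check Last State, Reason (e.g. OOMKilled) and Exit Code
kubectl logs <pod-name> --previous   # logs from the killed container instance
</code></pre>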
| Abhijit Gaikwad |
<p>I need to watch (and wait) until a pod is deleted. I need to do this because I need to start a second pod (with the same name) immediately after the first one has been deleted.</p>
<p>This is what I'm trying:</p>
<pre><code>func (k *k8sClient) waitPodDeleted(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
k.logger.Debugf("The POD \"%s\" is deleted", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodDeleted for POD \"%s\" because the context is done", resName)
return nil
}
}
}
</code></pre>
<p>The problem with this approach is that I get the <code>Deleted</code> event when the pod receives the deletion event, not when it is actually deleted. Doing some extra tests, I ended up debugging the process with this code:</p>
<pre><code>case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
pod := event.Object.(*v1.Pod)
k.logger.Debugf("EVENT %s PHASE %s MESSAGE %s", event.Type, pod.Status.Phase, pod.Status.Message)
}
</code></pre>
<p>The log result for this is:</p>
<pre><code>2022-02-15T08:21:51 DEBUG EVENT DELETED PHASE Running MESSAGE
2022-02-15T08:22:21 DEBUG EVENT DELETED PHASE Running MESSAGE
</code></pre>
<p>I'm getting two Deleted events: the first one right after I send the delete command, and the last one when the pod is effectively deleted from the cluster.</p>
<p>My questions are:</p>
<ul>
<li>Why am I getting two Deleted events? How can I differentiate one from the other? I've tried to compare the two events and they seem exactly the same (except for the timestamps).</li>
<li>What is the best approach to watch and wait for a pod deletion, so I can immediately relaunch it? should I poll the API until the pod is not returned?</li>
</ul>
<p>The use case I'm trying to solve:
in my application there is a feature to replace a tool with another one with different options. The feature needs to delete the pod that contains the tool and relaunch it with another set of options. In this scenario I need to wait for the pod deletion so I can start it again.</p>
<p>Thanks in advance!</p>
| Alejandro González | <p>As I said in the comments, the real problem was the watcher I was creating to watch the pod I wanted to be deleted: its LabelSelector was selecting two pods instead of one. This is the complete solution, including the watcher.</p>
<pre><code>func (k *k8sClient) createPodWatcher(ctx context.Context, resName string) (watch.Interface, error) {
labelSelector := fmt.Sprintf("app.kubernetes.io/instance=%s", resName)
k.logger.Debugf("Creating watcher for POD with label: %s", labelSelector)
opts := metav1.ListOptions{
TypeMeta: metav1.TypeMeta{},
LabelSelector: labelSelector,
FieldSelector: "",
}
return k.clientset.CoreV1().Pods(k.cfg.Kubernetes.Namespace).Watch(ctx, opts)
}
func (k *k8sClient) waitPodDeleted(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
k.logger.Debugf("The POD \"%s\" is deleted", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodDeleted for POD \"%s\" because the context is done", resName)
return nil
}
}
}
func (k *k8sClient) waitPodRunning(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
pod := event.Object.(*v1.Pod)
if pod.Status.Phase == v1.PodRunning {
k.logger.Infof("The POD \"%s\" is running", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodRunning for POD \"%s\" because the context is done", resName)
return nil
}
}
}
</code></pre>
| Alejandro González |
<p>Today my Kubernetes 1.15 cluster shows this error:</p>
<pre><code>Failed to inspect image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-illidan-superhub:v1.0.3": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
Error: ImageInspectError
</code></pre>
<p>I tried to pull the image on my local machine:</p>
<pre><code>~ ⌚ 10:57:02
$ docker pull registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-illidan-superhub:v1.0.3
v1.0.3: Pulling from dabai_app_k8s/dabai-fat/soa-illidan-superhub
e9afc4f90ab0: Already exists
989e6b19a265: Already exists
af14b6c2f878: Already exists
5573c4b30949: Already exists
fb1a405f128d: Already exists
197b0f525c26: Already exists
f133ed18caca: Already exists
ec53837eaf93: Pull complete
24caf1aa821b: Pull complete
f2d6b0ee2469: Pull complete
Digest: sha256:43caa136d717fcca3a6aad96568c9d29745f3c3d391f29facc2bebeb9c26b5a0
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-illidan-superhub:v1.0.3
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-illidan-superhub:v1.0.3
(base)
</code></pre>
<p>This works fine, and pulling the image directly on the remote Kubernetes cluster machine also works fine. It seems to be a Kubernetes problem; what should I do to fix it?</p>
<p><a href="https://i.stack.imgur.com/mpUQN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mpUQN.png" alt="enter image description here" /></a></p>
<p>This is my kubernetes version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:35:15Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
(base)
</code></pre>
| Dolphin | <p>I suggest you stop the Docker engine first (for example with <code>systemctl stop docker</code>), manually remove the folder using <code>rm -rf /var/lib/docker</code>, and then start Docker again with <code>systemctl start docker</code>.</p>
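<p>In other words, something along these lines on the affected node; note that this wipes all local images, containers and volumes on that node:</p>
<pre><code>systemctl stop docker
rm -rf /var/lib/docker
systemctl start docker
</code></pre>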
<p>reference:<a href="https://github.com/docker/for-mac/issues/1396" rel="noreferrer">https://github.com/docker/for-mac/issues/1396</a></p>
| Abhijit Gaikwad |
<p>I am trying to add more Kafka connectors to our Kafka cluster based on the instructions in the link below, but the deployment fails with an <code>ErrImagePull</code> error. Please find the details below and help me resolve this.</p>
<p>Reference Link :</p>
<p><a href="https://docs.confluent.io/home/connect/extending.html#create-a-docker-image-containing-c-hub-connectors" rel="nofollow noreferrer">https://docs.confluent.io/home/connect/extending.html#create-a-docker-image-containing-c-hub-connectors</a></p>
<p>Created Custom Docker Image :</p>
<pre><code>FROM confluentinc/cp-server-connect-operator:6.0.0.0
USER root
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-s3:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-tibco-source:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-azure-event-hubs:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-azure-event-hubs:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-datadog-metrics:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-ftps:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-pubsub:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-gcs-source:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-pagerduty:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-sftp:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-teradata:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-tibco-source:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:latest \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-gcs:latest
USER 1001
</code></pre>
<p>Pushed the image to a publicly accessible registry:</p>
<p><a href="https://i.stack.imgur.com/gvPDY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gvPDY.png" alt="enter image description here" /></a></p>
<p>Updated in my-values.yaml</p>
<p><a href="https://i.stack.imgur.com/QiV6M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QiV6M.png" alt="enter image description here" /></a></p>
<p>It is failing with an <code>ErrImagePull</code> error:</p>
<p><a href="https://i.stack.imgur.com/eUIjS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eUIjS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/djlnX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/djlnX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/1bnRH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1bnRH.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/63iBI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63iBI.png" alt="enter image description here" /></a></p>
<p>my-values.yaml</p>
<pre><code>## Overriding values for Chart's values.yaml for AWS
##
global:
provider:
name: aws
region: us-east-1
## Docker registry endpoint where Confluent Images are available.
##
kubernetes:
deployment:
zones:
- us-east-1a
- us-east-1b
- us-east-1c
registry:
fqdn: docker.io
credential:
required: false
sasl:
plain:
username: test
password: test123
authorization:
rbac:
enabled: false
simple:
enabled: false
superUsers: []
dependencies:
mds:
endpoint: ""
publicKey: ""
## Zookeeper cluster
##
zookeeper:
name: zookeeper
replicas: 3
oneReplicaPerNode: true
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-zookeeper
rule: requiredDuringSchedulingIgnoredDuringExecution
resources:
requests:
cpu: 200m
memory: 512Mi
## Kafka Cluster
##
kafka:
name: kafka
replicas: 3
oneReplicaPerNode: true
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-broker
rule: requiredDuringSchedulingIgnoredDuringExecution
resources:
requests:
cpu: 200m
memory: 1Gi
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
tls:
enabled: false
fullchain: |-
privkey: |-
cacerts: |-
metricReporter:
enabled: true
publishMs: 30000
replicationFactor: ""
tls:
enabled: false
internal: false
authentication:
type: ""
bootstrapEndpoint: ""
## Connect Cluster
##
connect:
name: connectors
image:
repository: rdkarthikeyan27/hebdevkafkaconnectors
tag: 1.0
oneReplicaPerNode: false
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-connector
rule: requiredDuringSchedulingIgnoredDuringExecution
replicas: 2
tls:
enabled: false
## "" for none, "tls" for mutual auth
authentication:
type: ""
fullchain: |-
privkey: |-
cacerts: |-
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
dependencies:
kafka:
bootstrapEndpoint: kafka:9071
brokerCount: 3
schemaRegistry:
enabled: true
url: http://schemaregistry:8081
## Replicator Connect Cluster
##
replicator:
name: replicator
oneReplicaPerNode: false
replicas: 0
tls:
enabled: false
authentication:
type: ""
fullchain: |-
privkey: |-
cacerts: |-
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
dependencies:
kafka:
brokerCount: 3
bootstrapEndpoint: kafka:9071
##
## Schema Registry
##
schemaregistry:
name: schemaregistry
oneReplicaPerNode: false
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-schema-reg
rule: requiredDuringSchedulingIgnoredDuringExecution
tls:
enabled: false
authentication:
type: ""
fullchain: |-
privkey: |-
cacerts: |-
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
dependencies:
kafka:
brokerCount: 3
bootstrapEndpoint: kafka:9071
##
## KSQL
##
ksql:
name: ksql
replicas: 2
oneReplicaPerNode: true
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-ksql
rule: requiredDuringSchedulingIgnoredDuringExecution
tls:
enabled: false
authentication:
type: ""
fullchain: |-
privkey: |-
cacerts: |-
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
dependencies:
kafka:
brokerCount: 3
bootstrapEndpoint: kafka:9071
brokerEndpoints: kafka-0.kafka:9071,kafka-1.kafka:9071,kafka-2.kafka:9071
schemaRegistry:
enabled: false
tls:
enabled: false
authentication:
type: ""
url: http://schemaregistry:8081
## Control Center (C3) Resource configuration
##
controlcenter:
name: controlcenter
license: ""
##
## C3 dependencies
##
dependencies:
c3KafkaCluster:
brokerCount: 3
bootstrapEndpoint: kafka:9071
zookeeper:
endpoint: zookeeper:2181
connectCluster:
enabled: true
url: http://connectors:8083
ksql:
enabled: true
url: http://ksql:9088
schemaRegistry:
enabled: true
url: http://schemaregistry:8081
oneReplicaPerNode: false
affinity:
nodeAffinity:
key: worker-type
values:
- node-group-control
rule: requiredDuringSchedulingIgnoredDuringExecution
##
## C3 External Access
##
loadBalancer:
enabled: true
type: internal
domain: conf-ka01.dsol.core
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
##
## TLS configuration
##
tls:
enabled: false
authentication:
type: ""
fullchain: |-
privkey: |-
cacerts: |-
##
## C3 authentication
##
auth:
basic:
enabled: true
##
## map with key as user and value as password and role
property:
admin: Developer1,Administrators
disallowed: no_access
</code></pre>
| Karthikeyan Rasipalay Durairaj | <p>The image from Docker Hub (<a href="https://hub.docker.com/r/confluentinc/cp-server-connect-operator" rel="nofollow noreferrer">https://hub.docker.com/r/confluentinc/cp-server-connect-operator</a>) doesn't have tag 1.0.0 available. Try tag 6.0.0.0, that is <code>confluentinc/cp-server-connect-operator:6.0.0.0</code>.</p>
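<p>Before re-deploying, you can also confirm that the image reference used in <code>my-values.yaml</code> is actually pullable; the repository and tag below are the ones from the question:</p>
<pre><code>docker pull docker.io/rdkarthikeyan27/hebdevkafkaconnectors:1.0
</code></pre>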
| Abhijit Gaikwad |
<p>I'm working on an application that deploys Kubernetes resources dynamically, and I'd like to provision a shared SSL certificate for all of them. At any given time, all of the services have hostnames matching <code>*.*.*.example.com</code>.</p>
<p>I've heard that cert-manager will provision/re-provision certs automatically, but I don't necessarily need auto-provisioning if it's too much overhead.</p>
<p>Any thoughts on the easiest way to do this? </p>
| Jay K. | <p>Have a look at <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx-ingress</a>, which is a Kubernetes Ingress Controller that essentially makes it possible to run <a href="https://www.nginx.com/" rel="noreferrer">Nginx</a> reverse proxy/web server/load balancer on Kubernetes.</p>
<p>nginx-ingress is built around the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a> resource. It will watch Ingress objects and manage nginx configuration in config maps. You can define powerful traffic routing rules, caching, url rewriting, and a lot more via the Kubernetes Ingress resource rules and <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="noreferrer">nginx specific annotations</a>.</p>
<p>Here's an example of an Ingress with some routing. There's a lot more you can do with this, <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#ingressrule-v1beta1-extensions" rel="noreferrer">and it does support wildcard domain routing</a>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
cert-manager.io/cluster-issuer: letsencrypt-prod
name: my-ingress
spec:
rules:
- host: app1.domain.com
http:
paths:
- backend:
serviceName: app1-service
servicePort: http
path: /(.*)
- host: app2.sub.domain.com
http:
paths:
- backend:
serviceName: app2-service
servicePort: http
path: /(.*)
tls:
- hosts:
- app1.domain.com
secretName: app1.domain.com-tls-secret
- hosts:
- app2.sub.domain.com
secretName: app2.sub.domain.com-tls-secret
</code></pre>
<p>The annotations section is really important. The <code>kubernetes.io/ingress.class</code> annotation above indicates that nginx-ingress should manage this Ingress definition. The annotations section also allows you to specify additional nginx configuration; in the above example it specifies a URL rewrite target that is used by the rules section.</p>
<p>See <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="noreferrer">this community post</a> for installing nginx-ingress on GKE.</p>
<p>You'll notice the annotations also have a cert manager specific annotation which, if installed will instruct cert manager to issue certificates based on the hosts and secrets defined under the <code>tls</code> section.</p>
<p>Using <a href="https://cert-manager.io/docs/concepts/" rel="noreferrer">cert-manager</a> in combination with nginx-ingress, which isn't that complicated, you can set up automatic certificate creation/renewals.</p>
<p>It's hard to know the exact nature of your setup with deploying dynamic applications. But some possible ways to achieve the configuration are:</p>
<ul>
<li>Have each app define it's own Ingress with it's own routing rules and TLS configuration, which gets installed/updated each time your the application is deployed</li>
<li>Have an Ingress per domain/subdomain. You could then specify a wild card subdomain and tls section with routing rules for that subdomain</li>
<li>Or possibly you could have one uber Ingress which handles all domains and routing.</li>
</ul>
<p>The more fine grained the more control, but a lot more moving parts. I don't see this as a problem. For the last two options, it really depends on the nature of your dynamic application deployments.</p>
| julz256 |
<p>We're using Spring Boot 3.0.5 with leader election from <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#leader-election" rel="nofollow noreferrer">Spring Cloud Kubernetes</a> version 3.0.2 (<code>org.springframework.cloud:spring-cloud-kubernetes-fabric8-leader:3.0.2</code>) that is based on the fabric8's Java client.</p>
<p>However, after moving to a new Kubernetes cluster which is more restrictive by default (i.e. pods are not using the "default service account" that has access to everything), we can't get the leader election to work (it used to work when the pod had access to "everything" using the default service account). We've configured the following rules:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: configmap-editor
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create", "delete", "write"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-viewer
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
</code></pre>
<p>And bindings for the pod that uses the leader election:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: configmap-editor-binding
namespace: default
subjects:
- kind: ServiceAccount
name: my-app
namespace: default
roleRef:
kind: Role
name: configmap-editor
apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer-binding
namespace: default
subjects:
- kind: ServiceAccount
name: my-app
namespace: default
roleRef:
kind: Role
name: pod-viewer
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>The Kubernetes service account referenced in the <code>RoleBinding</code>, is then connected to a Google Cloud Service Account using <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">workload identity</a>. The only permission that it currently has, on the Google Cloud side, is "Cloud Trace Agent".</p>
<p>We don't get any error messages when booting our application, but our listener function that takes an <code>OnGrantedEvent</code> from Spring Cloud Kubernetes is never invoked:</p>
<pre class="lang-kotlin prettyprint-override"><code>@EventListener
fun onLeaderGranted(event: OnGrantedEvent): Unit = ...
</code></pre>
<p>What permissions and/or RBAC rules are we missing?</p>
| Johan | <p>The problem was that the "configmap-editor" role didn't have enough rules. The correct rules should be:</p>
<p><code>["get", "watch", "list", "create", "update", "patch", "delete"]</code></p>
| Johan |
<p>I have a Go application running in a Kubernetes cluster which needs to read files from a large MapR cluster. The two clusters are separate and the Kubernetes cluster does not permit us to use the CSI driver. All I can do is run userspace apps in Docker containers inside Kubernetes pods and I am given <code>maprticket</code>s to connect to the MapR cluster.</p>
<p>I'm able to use the <code>com.mapr.hadoop</code> <code>maprfs</code> <a href="https://repository.mapr.com/nexus/content/groups/mapr-public/com/mapr/hadoop/maprfs/6.1.0-mapr/maprfs-6.1.0-mapr.jar" rel="nofollow noreferrer">jar</a> to write a Java app which is able to connect and read files using a <code>maprticket</code>, but we need to integrate this into a Go app, which, ideally, shouldn't require a Java sidecar process.</p>
| Mihai Todor | <p>This is a good question because it highlights the way that some environments impose limits that violate the assumptions external software may hold.</p>
<p>And just for reference, MapR was acquired by HPE so a MapR cluster is now an HPE Ezmeral Data Fabric cluster. I am still training myself to say that.</p>
<p>Anyway, the accepted method for a generic program in language X to communicate with the Ezmeral Data Fabric (the filesystem formerly known as MapR FS) is to mount the file system and just talk to it using file APIs like open/read/write and such. This applies to Go, Python, C, Julia or whatever. Inside Kubernetes, the normal way to do this mount is to use a CSI driver that has some kind of operator working in the background. That operator isn't particularly magical ... it just does what is needful. In the case of data fabric, the operator mounts the data fabric using NFS or FUSE and then bind mounts[1] part of that into the pod's awareness.</p>
<p>But this question is cool because it precludes all of that. If you can't install an operator, then this other stuff is just a dead letter.</p>
<p>There are three alternative approaches that may work.</p>
<ol>
<li><p>NFS mounts were included in Kubernetes as a native capability before the CSI plugin approach was standardized. It might still be possible to use that on a very vanilla Kubernetes cluster and that could give access to the data cluster.</p>
</li>
<li><p>It is possible to integrate a container into your pod that does the necessary FUSE mount in an unprivileged way. This will be kind of painful because you would have to tease apart the FUSE driver from the data fabric install and get it to work. That would let you see the data fabric inside the pod. Even then, there is no guarantee Kubernetes or the OS will allow this to work.</p>
</li>
<li><p>There is an unpublished Go file system client that users the low level data fabric API directly. We don't yet release that separately. For more information on that, folks should ping me directly (my contact info is everywhere ... email to ted.dunning hpe.com or gmail.com works)</p>
</li>
<li><p>The data fabric allows you to access data via S3. With the 7.0 release of Ezmeral Data Fabric, this capability is heavily revamped to give massive performance especially since you can scale up the number of gateways essentially without limit (I have heard numbers like 3-5GB/s per stateless connection to a gateway, but YMMV). This will require the least futzing and should give plenty of performance. You can even access files as if they were S3 objects.</p>
</li>
</ol>
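<p>For option 1, the built-in <code>nfs</code> volume type needs no operator at all. A minimal sketch, assuming a hypothetical NFS gateway address and export path for the data fabric (replace both with your own values):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: go-reader
spec:
  containers:
  - name: app
    image: my-go-app:latest            # placeholder image name
    volumeMounts:
    - name: datafabric
      mountPath: /mapr                 # files are then read with plain open/read calls
  volumes:
  - name: datafabric
    nfs:
      server: maprnfs.example.com      # assumption: an NFS gateway of the data cluster
      path: /mapr/my.cluster.com       # assumption: the exported path
</code></pre>
<p>Whether this works depends on the Kubernetes cluster still allowing the in-tree NFS volume type and on network reachability of the gateway, so treat it as a sketch rather than a guaranteed recipe.</p>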
<p>[1] <a href="https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount#:%7E:text=A%20bind%20mount%20is%20an,the%20same%20as%20the%20original">https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount#:~:text=A%20bind%20mount%20is%20an,the%20same%20as%20the%20original</a>.</p>
| Ted Dunning |
<p>I have a docker image in AWS ECR which is in my secondary account. I want to pull that image to the Minikube Kubernetes cluster using AWS IAM Role ARN where MFA is enabled on it. Due to this, my deployment failed while pulling the Image.</p>
<p>I enabled the registry-creds addon to access ECR Image but didn't work out.</p>
<p>May I know any other way to access AWS ECR of AWS Account B via AWS IAM Role ARN with MFA enabled using the credential of the AWS Account A?</p>
<p>For example, I provided details like this</p>
<ul>
<li>Enter AWS Access Key ID: <strong>Access key of Account A</strong></li>
<li>Enter AWS Secret Access Key: <strong>Secret key of Account A</strong></li>
<li>(Optional) Enter AWS Session Token:</li>
<li>Enter AWS Region: <strong>us-west-2</strong></li>
<li>Enter 12 digit AWS Account ID (Comma separated list): [<strong>AccountA, AccountB</strong>]</li>
<li>(Optional) Enter ARN of AWS role to assume: <<strong>role_arn of AccountB</strong>></li>
</ul>
<p><strong>ERROR MESSAGE:</strong>
<code>Warning Failed 2s (x3 over 42s) kubelet Failed to pull image "XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/sample-dev:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/v2/sample-dev/manifests/latest": no basic auth credentials</code></p>
<p><code>Warning Failed 2s (x3 over 42s) kubelet Error: ErrImagePull</code></p>
| Arun546 | <p>While the <code>minikube addons</code> based solution shown by @DavidMaze is probably cleaner and generally preferable, I wasn't able to get it to work.</p>
<p>Instead, I <a href="https://stackoverflow.com/questions/55223075/automatically-use-secret-when-pulling-from-private-registry">found out</a> it is possible to give the service account of the pod a copy of the docker login tokens in the local home. If you haven't set a serviceaccount, it's <code>default</code>:</p>
<pre class="lang-sh prettyprint-override"><code># Log in with aws ecr get-login or however
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
</code></pre>
<p>This will work fine in a bind.</p>
| Caesar |
<p>I am getting "extension (5) should not be presented in certificate_request" when trying to run locally a Java Kubernetes client application which queries the Kubernetes cluster over a kube proxy connection. Any thoughts? Thanks in advance</p>
<pre><code> ApiClient client = null;
try {
client = Config.defaultClient();
//client.setVerifyingSsl(false);
} catch (IOException e) {
e.printStackTrace();
}
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
V1PodList list = null;
try {
list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
} catch (ApiException e) {
e.printStackTrace();
}
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
</code></pre>
| Alexander F | <p>Which version of Java are you using?</p>
<p>JDK 11 onwards have support for TLS 1.3 which can cause the error <code>extension (5) should not be presented in certificate_request</code>.</p>
<p>Add <code>-Djdk.tls.client.protocols=TLSv1.2</code> to the JVM args to make it use <code>1.2</code> instead.</p>
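<p>For example (the jar name below is just a placeholder for your client application), the property is passed on the command line when starting the JVM:</p>
<pre><code>java -Djdk.tls.client.protocols=TLSv1.2 -jar my-k8s-client.jar
</code></pre>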
<p>There is an issue on Go lang relating to this <a href="https://github.com/golang/go/issues/35722" rel="noreferrer">https://github.com/golang/go/issues/35722</a> and someone there also posted to <a href="https://github.com/golang/go/issues/35722#issuecomment-571173416" rel="noreferrer">disable TLS 1.3 on the Java side</a></p>
| zcourts |
<p>I'm launching a Glassfish pod on my Kubernetes cluster, and I'm trying to copy some .war files from a folder that's on my host, but the <code>cp</code> command always seems to fail.</p>
<p>my yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: glassfish
spec:
# replicas: 2
selector:
matchLabels:
app: glassfish
strategy:
type: Recreate
template:
metadata:
labels:
app: glassfish
spec:
containers:
- image: glassfish:latest
name: glassfish
ports:
- containerPort: 8080
name: glassfishhttp
- containerPort: 4848
name: glassfishadmin
command: ["/bin/cp"]
args: #["/mnt/apps/*","/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/"]
- /mnt/apps/
- /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
volumeMounts:
- name: glassfish-persistent-storage
mountPath: /mount
- name: app
mountPath: /mnt/apps
volumes:
- name: glassfish-persistent-storage
persistentVolumeClaim:
claimName: fish-mysql-pvc
- name: app
hostPath:
path: /mnt/nfs
type: Directory</code></pre>
<p>I'm trying to use the following command in my container:</p>
<pre><code>cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy
</code></pre>
<p>What am I doing wrong?</p>
<p>So far I've tried it with the trailing <code>/*</code> and without it. When I use <code>apps/*</code> I see "item or directory not found"; when I use <code>apps/</code> I get "directory omitted". I need only what's in the directory, not the directory itself, so <code>-r</code> doesn't really help either.</p>
| kubernoobus | <p>Two things to note here:</p>
<ol>
<li>If you want to copy a directory using <code>cp</code>, you have to provide the <code>-a</code> or <code>-R</code> flag to <code>cp</code>:</li>
</ol>
<blockquote>
<pre><code> -R If source_file designates a directory, cp copies the directory and the entire subtree connected at
that point. If the source_file ends in a /, the contents of the directory are copied rather than
the directory itself. This option also causes symbolic links to be copied, rather than indirected
through, and for cp to create special files rather than copying them as normal files. Created
directories have the same mode as the corresponding source directory, unmodified by the process'
umask.
In -R mode, cp will continue copying even if errors are detected.
</code></pre>
</blockquote>
<ol start="2">
<li><p>If you use <code>/bin/cp</code> as your entrypoint in the pod, then this command is not executed in a shell. The <code>*</code> in <code>/path/to/*</code> however is a shell feature.</p></li>
<li><p>initContainers do not have <code>args</code>, only <code>command</code>.</p></li>
</ol>
<p>To make this work, use <code>/bin/sh</code> as the command instead:</p>
<pre><code>command:
- /bin/sh
- -c
- cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
</code></pre>
| mhutter |
<p>I am having problems when populating the Magnum database; please help me.</p>
<p>I have followed the docs.</p>
<p><a href="https://docs.openstack.org/magnum/train/install/install-rdo.html" rel="nofollow noreferrer">https://docs.openstack.org/magnum/train/install/install-rdo.html</a></p>
<pre><code>sudo su -s /bin/sh -c "magnum-db-manage upgrade" magnum
</code></pre>
<blockquote>
<p>/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning:
(3719, u"'utf8' is currently an alias for the character set UTF8MB3,
but will be an alias for UTF8MB4 in a future release. Please consider
using UTF8MB4 in order to be unambiguous.") result =
self._query(query) INFO [alembic.runtime.migration] Context impl
MySQLImpl. INFO [alembic.runtime.migration] Will assume
non-transactional DDL. INFO [alembic.runtime.migration] Running
upgrade -> 2581ebaf0cb2, initial migration INFO
[alembic.runtime.migration] Running upgrade 2581ebaf0cb2 ->
3bea56f25597, Multi Tenant Support INFO [alembic.runtime.migration]
Running upgrade 3bea56f25597 -> 5793cd26898d, Add bay status INFO
[alembic.runtime.migration] Running upgrade 5793cd26898d ->
3a938526b35d, Add docker volume size column INFO
[alembic.runtime.migration] Running upgrade 3a938526b35d ->
35cff7c86221, add private network to baymodel INFO
[alembic.runtime.migration] Running upgrade 35cff7c86221 ->
1afee1db6cd0, Add master flavor INFO [alembic.runtime.migration]
Running upgrade 1afee1db6cd0 -> 2d1354bbf76e, ssh authorized key INFO
[alembic.runtime.migration] Running upgrade 2d1354bbf76e ->
29affeaa2bc2, rename-bay-master-address INFO
[alembic.runtime.migration] Running upgrade 29affeaa2bc2 ->
2ace4006498, rename-bay-minions-address INFO
[alembic.runtime.migration] Running upgrade 2ace4006498 ->
456126c6c9e9, create baylock table INFO [alembic.runtime.migration]
Running upgrade 456126c6c9e9 -> 4ea34a59a64c, add-discovery-url-to-bay
INFO [alembic.runtime.migration] Running upgrade 4ea34a59a64c ->
e772b2598d9, add-container-command INFO [alembic.runtime.migration]
Running upgrade e772b2598d9 -> 2d8657c0cdc, add bay uuid INFO
[alembic.runtime.migration] Running upgrade 2d8657c0cdc ->
4956f03cabad, add cluster distro INFO [alembic.runtime.migration]
Running upgrade 4956f03cabad -> 592131657ca1, Add coe column to
BayModel INFO [alembic.runtime.migration] Running upgrade
592131657ca1 -> 3b6c4c42adb4, Add unique constraints INFO
[alembic.runtime.migration] Running upgrade 3b6c4c42adb4 ->
2b5f24dd95de, rename service port INFO [alembic.runtime.migration]
Running upgrade 2b5f24dd95de -> 59e7664a8ba1, add_container_status
INFO [alembic.runtime.migration] Running upgrade 59e7664a8ba1 ->
156ceb17fb0a, add_bay_status_reason INFO [alembic.runtime.migration]
Running upgrade 156ceb17fb0a -> 1c1ff5e56048,
rename_container_image_id INFO [alembic.runtime.migration] Running
upgrade 1c1ff5e56048 -> 53882537ac57, add host column to pod INFO
[alembic.runtime.migration] Running upgrade 53882537ac57 ->
14328d6a57e3, add master count to bay INFO
[alembic.runtime.migration] Running upgrade 14328d6a57e3 ->
421102d1f2d2, create x509keypair table INFO
[alembic.runtime.migration] Running upgrade 421102d1f2d2 ->
6f21dc998bb, Add master_addresses to bay INFO
[alembic.runtime.migration] Running upgrade 6f21dc998bb ->
966a99e70ff, add-proxy INFO [alembic.runtime.migration] Running
upgrade 966a99e70ff -> 6f21dc920bb, Add cert_uuuid to bay INFO
[alembic.runtime.migration] Running upgrade 6f21dc920bb ->
5518af8dbc21, Rename cert_uuid INFO [alembic.runtime.migration]
Running upgrade 5518af8dbc21 -> 4e263f236334, Add registry_enabled
INFO [alembic.runtime.migration] Running upgrade 4e263f236334 ->
3be65537a94a, add_network_driver_baymodel_column INFO
[alembic.runtime.migration] Running upgrade 3be65537a94a ->
1481f5b560dd, add labels column to baymodel table INFO
[alembic.runtime.migration] Running upgrade 1481f5b560dd ->
1d045384b966, add-insecure-baymodel-attr INFO
[alembic.runtime.migration] Running upgrade 1d045384b966 ->
27ad304554e2, adding magnum_service functionality INFO
[alembic.runtime.migration] Running upgrade 27ad304554e2 ->
5ad410481b88, rename-insecure
/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py:78:
SAWarning: An exception has occurred during handling of a previous
exception. The previous exception is: <class
'pymysql.err.InternalError'> (3959, u"Check constraint
'baymodel_chk_2' uses column 'insecure', hence column cannot be
dropped or renamed.")</p>
</blockquote>
| giangnvh | <p>This is a <a href="https://github.com/sqlalchemy/alembic/issues/699" rel="nofollow noreferrer">bug</a> when running Magnum on MySQL 8.0. This bug was just recently fixed.
<a href="https://github.com/openstack/magnum/commit/8dcf91b2d3f04b7b5cb0e7711d82438b69f975a1" rel="nofollow noreferrer">https://github.com/openstack/magnum/commit/8dcf91b2d3f04b7b5cb0e7711d82438b69f975a1</a></p>
<p>You will need to either run an older version of MySQL, or apply the above patch. It has been backported to Victoria, so going with Victoria would be your easiest path forward.</p>
| eandersson |
<p>Is there any shorter alias on the kubectl/oc for deployments? In OpenShift you have deployment configurations and you can access them using their alias <code>dc</code>.</p>
<p>Writing <code>deployment</code> all the time takes too much time. Any idea how to shorten that without setting a local alias on each machine?</p>
<p>Reality:</p>
<pre><code>kubectl get deployment/xyz
</code></pre>
<p>Dream:</p>
<pre><code>kubectl get d/xyz
</code></pre>
| Rtholl | <p>All of the above answers are correct and I endorse the idea of using aliases: I have several myself. But the question was fundamentally about shortnames of API Resources, like <code>dc</code> for <code>deploymentcontroller</code>.</p>
<p>And the answer to that question is to use <code>oc api-resources</code> (or <code>kubectl api-resources</code>). Each API Resource also includes any SHORTNAMES that are available. For example, the results for me of <code>oc api-resources |grep deploy</code> on OpenShift 4.10 is:</p>
<pre><code>➜oc api-resources |grep deploy
deployments deploy apps/v1 true Deployment
deploymentconfigs dc apps.openshift.io/v1 true DeploymentConfig
</code></pre>
<p>Thus we can see that the previously given answer of "deploy" is a valid SHORTNAME of deployments. But it's also useful for just browsing the list of other available abbreviations.</p>
<p>I'll also make sure that you are aware of <code>oc completion</code>. For example <code>source <(oc completion zsh)</code> for zsh. You say you have multiple devices, so you may not set up aliases, but completions are always easy to add. That way you should never have to type more than a few characters and then autocomplete yourself the rest of the way.</p>
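<p>Putting the two together, a small sketch of what this looks like in practice (the resource names are just examples):</p>
<pre><code># shortnames from `oc api-resources` work anywhere a resource type is expected
oc get deploy/xyz   # Deployment
oc get dc/xyz       # DeploymentConfig

# and shell completion (zsh shown; bash works the same way)
source <(oc completion zsh)
</code></pre>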
| David Ogren |
<p>I have a UI application written in Angular, which has a backend running in NodeJS. I also have two other services which will be invoked from the NodeJS backend. These applications are running in docker containers and are deployed to a Kubernetes cluster in AWS. </p>
<p>The flow is like this:</p>
<p>AngularUI -> NodeJS -> Service1/Service2</p>
<p>AngularUI & NodeJS are in the same docker container, while the other two services are in 2 separate containers.</p>
<p>I have been able to get the services running in Kubernetes on AWS. Service to Service calls (Service 1-> Service2) work fine, as I'm invoking them using k8s labels.</p>
<p>Now Im not able to figure out how to make calls from the Angular front end to the NodeJS backend, since the requests execute on the client side. I cannot give the IP of the ELB of the service, as the IP changes with every deployment.</p>
<p>I tried creating an AWS API Gateway which points to the ELB IP of the Angular UI, but that does not serve up the page. </p>
<p>What is the right way to do this? Any help is much appreciated.</p>
| Athomas | <p>The ELB has a static DNS hostname, like <code>foobar.eu-west-4.elb.amazonaws.com</code>. When you have a domain at hand, create an A record (alias) that points to this DNS hostname. E.g.</p>
<pre><code>webservice.mydomain.com -> mywebservicelb.eu-west-4.elb.amazonaws.com
</code></pre>
<hr>
<p>You can also use static ip address, which seems to be a fairly new feature:</p>
<blockquote>
<p>Each Network Load Balancer provides a single IP address for each
Availability Zone in its purview. If you have targets in us-west-2a
and other targets in us-west-2c, NLB will create and manage two IP
addresses (one per AZ); connections to that IP address will spread
traffic across the instances in all the VPC subnets in the AZ. You can
also specify an existing Elastic IP for each AZ for even greater
control. With full control over your IP addresses, Network Load
Balancer can be used in situations where IP addresses need to be
hard-coded into DNS records, customer firewall rules, and so forth.</p>
</blockquote>
<p><a href="https://aws.amazon.com/de/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/" rel="nofollow noreferrer">https://aws.amazon.com/de/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/</a></p>
| DarkLeafyGreen |
<p>I'm new to helm and Kubernetes world.
I'm working on a project using Docker, Kubernetes and helm in which I'm trying to deploy a simple Nodejs application using helm chart on Kubernetes.</p>
<p>Here's what I have tried:</p>
<p><strong>From <code>Dockerfile</code>:</strong></p>
<pre><code>FROM node:6.9.2
EXPOSE 30000
COPY server.js .
CMD node server.js
</code></pre>
<p>I have build the image, tag it and push it to the docker hub repository at: <code>MY_USERNAME/myhello:0.2</code></p>
<p>Then I ran the simple command to create a helm chart:
<code>helm create mychart</code>.
It created a <code>mychart</code> directory with all the helm components.</p>
<p>Then i have edited the <code>values.yaml</code> file as:</p>
<pre><code>replicaCount: 1
image:
repository: MY_USERNAME/myhello
tag: 0.2
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
type: NodePort
port: 80
externalPort: 30000
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
paths: []
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>After that I have installed the chart as:
<code>helm install --name myhelmdep01 mychart</code></p>
<p>and when I run <code>kubectl get pods</code>
it shows <code>ErrImagePull</code>.</p>
<p>I have also tried specifying the image name as <code>docker.io/arycloud/myhello</code>;
in this case the image is pulled successfully, but another error comes up:</p>
<blockquote>
<p>Liveness probe failed: Get <a href="http://172.17.0.5:80/" rel="nofollow noreferrer">http://172.17.0.5:80/</a>: dial tcp 172.17.0.5:80: connect: connection refused</p>
</blockquote>
| Abdul Rehman | <p>Run <code>kubectl describe pod <yourpod></code> soon after the error occurs and there should be an event near the bottom of the output that tells you exactly what the image pull problem is.</p>
<p>Off the top of my head it could be one of these options:</p>
<ul>
<li>It's a private repo and you haven't provided the service account for the pod/deployment with the proper imagePullSecret</li>
<li>Your backend isn't docker or does not assume that non prefixed images are on hub.docker.com. Try this instead: <code>registry-1.docker.io/arycloud/myhello</code></li>
</ul>
<p>If you can find that error it should be pretty straight forward.</p>
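<p>If it does turn out to be the private-repo case from the first bullet, a minimal sketch of wiring up an image pull secret looks like this (registry address and credentials are placeholders):</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=MY_USERNAME \
  --docker-password=MY_PASSWORD

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
</code></pre>
<p>Alternatively, reference the secret from the chart's deployment template via <code>imagePullSecrets</code> in the pod spec.</p>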
| Brett Wagner |
<p>I have found some similar questions about the Kubernetes API server not starting, but the error message I am getting is different. I have had a working cluster for several months, went to log in yesterday, and it was offline. Looking around in some log files, this is what I get below; it looks like it's trying to make a DNS query to my local DNS server, which has been working fine for the last few years and still works fine. The log is below. I'm pretty frustrated because I don't know how to fix this; I have made no config changes and am hoping the community can help.</p>
<blockquote>
<p>E0609 00:03:14.518792 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.5.2, ResourceVersion: 0, AdditionalErrorMsg:
F0609 00:03:14.534558 1 controller.go:161] Unable to perform initial IP allocation check: unable to refresh the service IP block: Get <a href="https://localhost:6443/api/v1/services" rel="nofollow noreferrer">https://localhost:6443/api/v1/services</a>: dial tcp: lookup localhost on 172.16.0.1:53: no such host</p>
</blockquote>
| Duncan Krebs | <p>In case anybody else comes across this issue, it had to do with a missing entry in my /etc/hosts file, there needs to be a line "127.0.0.1 localhost" for the api server to start correctly. If that is missing it tries to use a DNS server lookup which does not make sense, happy I have it working! </p>
| Duncan Krebs |
<p>I have deployed my application via docker on Google Kubernetes cluster. I am facing an application error due to which I want to enter inside the container and check a few things. At local, all I had to do was run <code>sudo docker ps</code> and then exec -it into the container.</p>
<p>How can I do that when containers are deployed on a Kubernetes cluster?</p>
| aviral sanjay | <p>You need to use kubectl</p>
<pre><code>kubectl get pods
kubectl exec -it pod-container-name -- /bin/bash
</code></pre>
| Edmhs |
<p>If the host is <code>*</code> in an Ingress resource, how do I curl a path? I am using the NGINX Ingress Controller.</p>
<pre><code>curl -kL <ingress-ip>/path
curl -kL <controller-service-ip>/path
curl -kL <node-ip>/path
</code></pre>
<p>Is only <code>curl -kL /path</code> possible?</p>
| john chen | <p>You have not told what Ingress implementation you are using. Some ingress services even run outside of Kubernetes so there is no generic answer except running <code>kubectl describe ingress <my-ingress></code> then curl whatever address it shows.</p>
| Petter Nordlander |
<p><strong>My Application:</strong></p>
<p>I have one production application named <strong>"Realtime Student Streaming Application"</strong>.
It is connected to MongoDB1, and I have a Student table (collection) inside it.</p>
<p><strong>My Task:</strong></p>
<ol>
<li>My Application will Listen for any insertion happened in student table.</li>
<li>Once insertion happened, mongo will give the inserted record to my listener class.</li>
<li>Once the record came to java class I will insert it into new database called MongoDB2.</li>
</ol>
<p><strong>What I did:</strong></p>
<ol>
<li>I deployed my application in OpenShift cluster and it has 5 pods running on it.</li>
<li>On this case, If any insertion is happened in MongoDB1, I'm receiving inserted data in all my 5 pods.</li>
</ol>
<p><strong>Current Output:</strong></p>
<p>Pod 1: I will process inserted-document 1,2,3,4,5</p>
<p>Pod 2: I will process inserted-document 1,2,3,4,5</p>
<p>Pod 3: I will process inserted-document 1,2,3,4,5</p>
<p>Pod 4: I will process inserted-document 1,2,3,4,5</p>
<p>Pod 5: I will process inserted-document 1,2,3,4,5</p>
<p><strong>Expected Output:</strong></p>
<p>Pod 1: I need to process only inserted-document 1</p>
<p>Pod 2: I need to process only inserted-document 2</p>
<p>Pod 3: I need to process only inserted-document 3</p>
<p>Pod 4: I need to process only inserted-document 4</p>
<p>Pod 5: I need to process only inserted-document 5</p>
<p><strong>Q1: Let say if i'm inserting 5 documents, How it will be shared between 5 pods.</strong></p>
<p><strong>Q2: In General, while using Kubernetes/Openshift , How to equally split data between multiple pods?</strong></p>
| Vaishagkumar techy | <p>This really isn't an OpenShift or Kubernetes question, it's an application design question. Fundamentally, it's up to <em>you</em> to shard the data in some fashion. Fundamentally it's a difficult problem to solve, and even tougher to solve once you start tackling problems around transactionality, scaling up/down and preserving message order.</p>
<p>Therefore, one common way to do this is to use Kafka. Either manually, or with something like Debezium, stream the changes from the DB into a Kafka topic. You can then use Kafka partitions and consumer groups to automatically divide the work between a dynamic number of workers. This may seem like overkill, but it solves a lot of the problems I mentioned above.</p>
<p>Of course you can also try to do this manually, just manually sharding the work and having each worker use Mongo features like findAndModify to "lock" specific updates and then update them again once completed. But then you have to build a lot of those features yourself, including how to recover if a worker locks an update but then fails before processing the change.</p>
| David Ogren |
<p>I have a working Kubernetes deployment of my application.</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
...
template:
...
spec:
containers:
- name: my-app
image: my-image
...
readinessProbe:
httpGet:
port: 3000
path: /
livenessProbe:
httpGet:
port: 3000
path: /
</code></pre>
<p>When I apply my deployment I can see it runs correctly and the application responds to my requests.</p>
<pre><code>$ kubectl describe pod -l app=my-app
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m7s default-scheduler Successfully assigned XXX
Normal Pulled 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Container image "my-app" already present on machine
Normal Created 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Created container my-app
Normal Started 4m5s kubelet, pool-standard-4gb-2cpu-b9vc Started container my-app
</code></pre>
<p>The application has a defect and crashes under certain circumstances. I "invoke" such a condition and then I see the following in pod events:</p>
<pre><code>$ kubectl describe pod -l app=my-app
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m45s default-scheduler Successfully assigned XXX
Normal Pulled 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Container image "my-app" already present on machine
Normal Created 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Created container my-app
Normal Started 6m43s kubelet, pool-standard-4gb-2cpu-b9vc Started container my-app
Warning Unhealthy 9s kubelet, pool-standard-4gb-2cpu-b9vc Readiness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 4s (x3 over 14s) kubelet, pool-standard-4gb-2cpu-b9vc Liveness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Normal Killing 4s kubelet, pool-standard-4gb-2cpu-b9vc Container crawler failed liveness probe, will be restarted
</code></pre>
<p>It is expected the liveness probe fails and the container is restarted. But why do I see <code>Readiness probe failed</code> event?</p>
| Maksim Sorokin | <p>As @suren wrote in the comment, readiness probe is still executed after container is started. Thus if both liveness and readiness probes are defined (and also fx they are the same), both readiness and liveness probe can fail.</p>
<p>Here is <a href="https://stackoverflow.com/questions/44309291/kubernetes-readiness-probe-execution-after-container-started">a similar question with a clear in-depth answer</a>.</p>
| Maksim Sorokin |
<p>I'm trying to tweak my Kubernetes-based app to make it more development-friendly. I'm using <code>kustomize</code> to add some extra containers into my <code>app-deployment</code>. It looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
spec:
template:
spec:
volumes:
- name: php-src
emptyDir: {}
- name: nginx-src
emptyDir: {}
- name: webpack-src
emptyDir: {}
containers:
- name: lsyncd
image: axiom/rsync-server
ports:
- name: sshd
containerPort: 22
- name: rsyncd
containerPort: 873
env:
- name: USERNAME
value: user
- name: PASSWORD
value: pass
volumeMounts:
- name: php-src
mountPath: /data/php
- name: nginx-src
mountPath: /data/nginx
- name: webpack-src
mountPath: /data/webpack
- name: nginx
volumeMounts:
- name: nginx-src
mountPath: /app
- name: php
volumeMounts:
- name: php-src
mountPath: /app
- name: webpack
image: dreg.mpen.ca/kymark/webpack
imagePullPolicy: Never # https://stackoverflow.com/a/54043058/65387
ports:
- name: http
containerPort: 8082
livenessProbe:
httpGet:
port: http
path: /assets/manifest.json
initialDelaySeconds: 20
periodSeconds: 60
command: ['node','--max_old_space_size=20000','node_modules/.bin/webpack-dev-server']
volumeMounts:
- name: webpack-src
mountPath: /app
</code></pre>
<p>Basically, I've added a new rsync server with 3 volumes corresponding with the 3 different containers that I want to upload code to. This part is working great; I can rsync my code into the containers and it shows up instantly.</p>
<p>However, the nginx, PHP, and webpack images already have a bit of code in that <code>/app</code> directory (like <code>node_modules</code> and <code>vendor</code>). When I mount over the top of it, it disappears (which makes sense, since I'm mounting an <code>emptyDir</code>). But what I would like to do instead is use an overlay mount so that any code I upload will be 'overlayed' on top of what's already there.</p>
<p>Can I do that? How?</p>
| mpen | <p>No, <strong>Kubernetes</strong> mounts, unlike <strong>Docker</strong> mounts do not allow mounting one directory over the other. Potential <code>VOLUME</code> lines from the <code>Dockerfile</code> do not matter.</p>
<p>A common solution for your use-case is, to use an <strong>init container</strong> to set up the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a>, before the <strong>main container</strong> is started. So basically you specify the <code>emptyDir</code> as a volume mount in both the <strong>init container</strong> and the <strong>main container</strong> and have the <strong>init container</strong> copy the files into the <code>emptyDir</code>.</p>
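<p>A minimal sketch of that pattern for the <code>webpack</code> container from your overlay (the paths and the exact shape of the patch are assumptions; adapt them to your manifest):</p>
<pre><code>      initContainers:
      - name: seed-webpack-src
        image: dreg.mpen.ca/kymark/webpack
        # copy the image's baked-in /app (node_modules etc.) into the shared volume
        command: ['sh', '-c', 'cp -a /app/. /seed/']
        volumeMounts:
        - name: webpack-src
          mountPath: /seed
      containers:
      - name: webpack
        volumeMounts:
        - name: webpack-src
          mountPath: /app   # now holds the original files plus whatever you rsync in
</code></pre>
<p>With that in place, the files you rsync land on top of a copy of the image's original <code>/app</code> contents instead of an empty directory.</p>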
<p>Additionally, there are two projects that offer different solutions to what you are trying to achieve:</p>
<ul>
<li><a href="https://github.com/ksync/ksync" rel="nofollow noreferrer">https://github.com/ksync/ksync</a></li>
<li><a href="https://github.com/telepresenceio/telepresence" rel="nofollow noreferrer">https://github.com/telepresenceio/telepresence</a></li>
</ul>
<p>Using one of those you would not need the custom <code>rsync</code> and <strong>init container</strong> setup in your overlay.</p>
| pst |
<p>I'm creating Kubernetes clusters programmatically for end-to-end tests in GitLab CI/CD. I'm using <code>gcloud container clusters create</code>. I'm doing this for half a year and created and deleted a few hundred clusters. The cost went up and down. Now, I got an unusually high bill from Google and I checked the cost breakdown. I noticed that the cost is >95% for "Storage PD Capacity". I found out that <code>gcloud container clusters delete</code> never deleted the Google Compute Disks created for Persistent Volume Claims in the Kubernetes cluster.</p>
<p>How can I delete those programmatically? What else could be left running after deleting the Kubernetes cluster and the disks?</p>
| Kalle Richter | <p>Suggestions:</p>
<ol>
<li><p>To answer your immediate question: you can programmatically delete your disk resource(s) with the <a href="https://cloud.google.com/compute/docs/reference/rest/v1/disks/delete" rel="nofollow noreferrer">Method: disks.delete</a> API (a small CLI sketch follows this list).</p></li>
<li><p>To determine what other resources might have been allocated, look here: <a href="https://cloud.google.com/resource-manager/docs/listing-all-resources" rel="nofollow noreferrer">Listing all Resources in your Hierarchy</a>.</p></li>
<li><p>Finally, this link might also help: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering" rel="nofollow noreferrer">GKE: Understanding cluster resource usage </a></p></li>
</ol>
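<p>A sketch of what that can look like with the <code>gcloud</code> CLI (the filter syntax and zone are assumptions; review the list carefully before deleting anything):</p>
<pre><code># list disks that are not attached to any instance
gcloud compute disks list --filter="-users:*"

# delete a specific leftover disk (name and zone are placeholders)
gcloud compute disks delete my-leftover-pvc-disk --zone=europe-west1-b
</code></pre>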
| paulsm4 |
<p>Pod containers are not ready and get stuck in the Waiting state every single time after they run sh commands (<code>/bin/sh</code> as well).
For example, all of the pod's containers from <a href="https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps" rel="nofollow noreferrer">https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps</a> just go to "Completed" status after executing the sh command, or, if I set <code>restartPolicy: Always</code>, they stay in the "Waiting" state with the reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.
If I do use the sh command in a container, after creating it I can see via <code>kubectl logs</code> that the env variable was set correctly.)</p>
<p>The expected behaviour is to get pod's containers running after they execute the sh command.</p>
<p>I cannot find references regarding this particular problem and I need little help if possible, thank you very much in advance!</p>
<p>Please disregard I tried different images, the problem happens either way.</p>
<p>environment: Kubernetes v 1.17.1 on qemu VM</p>
<p>yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
data:
how: very
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: nginx
ports:
- containerPort: 88
command: [ "/bin/sh", "-c", "env" ]
env:
# Define the environment variable
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
name: special-config
# Specify the key associated with the value
key: how
restartPolicy: Always
</code></pre>
<p>describe pod:</p>
<pre><code>kubectl describe pod dapi-test-pod
Name: dapi-test-pod
Namespace: default
Priority: 0
Node: kw1/10.1.10.31
Start Time: Thu, 21 May 2020 01:02:17 +0000
Labels: <none>
Annotations: cni.projectcalico.org/podIP: 192.168.159.83/32
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status: Running
IP: 192.168.159.83
IPs:
IP: 192.168.159.83
Containers:
test-container:
Container ID: docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
Image: nginx
Image ID: docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Port: 88/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
env
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 21 May 2020 01:13:21 +0000
Finished: Thu, 21 May 2020 01:13:21 +0000
Ready: False
Restart Count: 7
Environment:
SPECIAL_LEVEL_KEY: <set to the key 'how' of config map 'special-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zqbsw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbsw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
</code></pre>
| Stefan | <p>This happens because the process in the container you are running has completed and the container shuts down, and so kubernetes marks the pod as completed. </p>
<p>If the command that is defined in the docker image as part of <code>CMD</code>, or if you've added your own command as you have done, then the container shuts down after the command completed. It's the same reason why when you run Ubuntu using plain docker, it starts up then shuts down directly afterwards. </p>
<p>For pods, and their underlying docker container to continue running, you need to start a process that will continue running. In your case, running the <code>env</code> command completes right away. </p>
<p>If you set the pod to restart Always, then kubernetes will keep trying to restart it until it's reached it's back off threshold.</p>
<p>One off commands like you're running are useful for utility type things. I.e. do one thing then get rid of the pod.</p>
<p>For example:</p>
<p><code>kubectl run tester --generator run-pod/v1 --image alpine --restart Never --rm -it -- /bin/sh -c env</code></p>
<p>To run something longer, start a process that continues running. </p>
<p>For example:</p>
<p><code>kubectl run tester --generator run-pod/v1 --image alpine -- /bin/sh -c "sleep 30"</code></p>
<p>That command will run for 30 seconds, and so the pod will also run for 30 seconds. It will also use the default restart policy of Always. So after 30 seconds the process completes, Kubernetes marks the pod as complete, and then restarts it to do the same things again.</p>
<p>Generally pods will start a long running process, like a web server. For Kubernetes to know if that pod is healthy, so it can do its high availability magic and restart it if it crashes, it can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">readiness and liveness probes</a>.</p>
| julz256 |
<p>I am trying to collect some information regarding the kubernetes namespaces.
I found a command where you can see some information.</p>
<pre><code>kubectl describe resourcequota -n my-namespaces
</code></pre>
<p>I have as return:</p>
<pre><code>Name: gke-resource-quotas
Namespace: tms-prod
Resource Used Hard
-------- ---- ----
count/ingresses.extensions 0 5k
count/jobs.batch 0 10k
pods 68 5k
services 44 1500
</code></pre>
<p>However I need information like:</p>
<pre><code>CPU request
CPU Limit
Memory request
Memory Limit
Service (count)
Pods (Count)
Phase
</code></pre>
<p>I studied a little and saw that it is possible to create a ResourceQuota to get this information. However, I did not understand very well how it works.</p>
<p>Could anyone get this data?</p>
| Danilo Marquiori | <ol>
<li>run first <code>kubectl get quota</code></li>
<li>then it will display quotas available</li>
<li>then run <code>kubectl describe quota <quota name></code></li>
</ol>
<p>If you don't have any custom quota, you can create one as described at <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/resource-quotas/</a></p>
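<p>For reference, a minimal sketch of a <code>ResourceQuota</code> that tracks the kind of information you listed (the hard limits here are arbitrary examples):</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: tms-prod
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "100"
    services: "50"
</code></pre>
<p>Once it exists, <code>kubectl describe quota compute-resources -n tms-prod</code> shows the Used and Hard values for each of those resources.</p>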
| Abhijit Gaikwad |
<p>I have two pending pods which I cannot delete by any means. Could you help?
OS: CentOS 7.8
Docker: 1.13.1
Kubernetes: v1.20.1</p>
<pre><code>[root@master-node ~]# k get pods --all-namespaces (note: k = kubectl alias)
NAMESPACE NAME READY STATUS RESTARTS AGE
default happy-panda-mariadb-master-0 0/1 Pending 0 11m
default happy-panda-mariadb-slave-0 0/1 Pending 0 49m
default whoami 1/1 Running 0 5h13m
[root@master-node ~]# k describe pod/happy-panda-mariadb-master-0
Name: happy-panda-mariadb-master-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=mariadb
chart=mariadb-7.3.14
component=master
controller-revision-hash=happy-panda-mariadb-master-7b55b457c9
release=happy-panda
statefulset.kubernetes.io/pod-name=happy-panda-mariadb-master-0
IPs: <none>
Controlled By: StatefulSet/happy-panda-mariadb-master
Containers:
mariadb:
Image: docker.io/bitnami/mariadb:10.3.22-debian-10-r27
Port: 3306/TCP
Host Port: 0/TCP
Liveness: exec [sh -c password_aux="${MARIADB_ROOT_PASSWORD:-}"
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-happy-panda-mariadb-master-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: happy-panda-mariadb-master
Optional: false
default-token-wpvgf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wpvgf
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 15m default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
[root@master-node ~]# k get events
LAST SEEN TYPE REASON OBJECT MESSAGE
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-master-0 no persistent volumes available for this claim and no storage class is set
105s Normal FailedBinding persistentvolumeclaim/data-happy-panda-mariadb-slave-0 no persistent volumes available for this claim and no storage class is set
65m Warning FailedScheduling pod/happy-panda-mariadb-master-0 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
</code></pre>
<p>I already tried to delete them in various ways but nothing worked (I also tried to delete them from the dashboard):</p>
<pre><code>kubectl delete pod happy-panda-mariadb-master-0 --namespace="default"
k delete deployment mysql-1608901361
k delete pod/happy-panda-mariadb-master-0 -n default --grace-period 0 --force
</code></pre>
<p>Could you advise me on this?</p>
| Sean Lee | <p>kubectl delete rc replica set names</p>
<p>Or You forgot to specify storageClassName: manual in PersistentVolumeClaim.</p>
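<p>A sketch of what that looks like with the resource names from your output (adjust the names if your release differs):</p>
<pre><code># delete the owning StatefulSets instead of the pods
kubectl delete statefulset happy-panda-mariadb-master happy-panda-mariadb-slave

# or, if the chart was installed with Helm, remove the whole release
helm delete happy-panda
</code></pre>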
| Abhijit Gaikwad |
<p>I am trying to create a mongodb user along with a stateful set. Here is my .yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
type: NodePort
ports:
- port: 27017
targetPort: 27017
selector:
name: mongo
---
apiVersion: v1
kind: Secret
metadata:
name: admin-secret
# corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
stringData:
password: pass1
# corresponds to user.spec.passwordSecretKeyRef.key
---
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
name: admin
spec:
passwordSecretKeyRef:
name: admin-secret
# Match to metadata.name of the User Secret
key: password
username: admin
db: "admin" #
mongodbResourceRef:
name: mongo
# Match to MongoDB resource using authenticaiton
roles:
- db: "admin"
name: "clusterAdmin"
- db: "admin"
name: "userAdminAnyDatabase"
- db: "admin"
name: "readWrite"
- db: "admin"
name: "userAdminAnyDatabase"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 2
selector:
matchLabels:
name: mongo
template:
metadata:
labels:
name: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
# - envFrom:
# - secretRef:
# name: mongo-secret
- image: mongo
name: mongodb
command:
- mongod
- "--replSet"
- rs0
- "--bind_ip"
- 0.0.0.0
ports:
- containerPort: 27017
</code></pre>
<p>Earlier I used the secret to create a mongo user:</p>
<pre><code>...
spec:
containers:
- envFrom:
- secretRef:
name: mongo-secret
...
</code></pre>
<p>but once I added spec.template.spec.containers.command to the StatefulSet this approach is no longer working. Then I added Secret and MongoDBUser but I started getting this error:</p>
<pre><code>unable to recognize "mongo.yaml": no matches for kind "MongoDBUser" in version "mongodb.com/v1"
</code></pre>
<p>How to automatically create a mongodb user when creating StatefulSet with few replicas in kubernetes?</p>
| Kiramm | <p>One of the resources in your yaml file refers to a <code>kind</code> that doesn't exist in your cluster.</p>
<p>You can check this by running the command <code>kubectl api-resources | grep mongo -i</code></p>
<p>Specifically it's the resource of kind <code>MongoDBUser</code>. This API resource type is part of <a href="https://www.mongodb.com/kubernetes" rel="nofollow noreferrer">MongoDB Enterprise Kubernetes Operator</a>. </p>
<p>You haven't indicated whether you are using this in your cluster, but the error you're getting implies the CRD's for the operator are not installed and so cannot be used.</p>
<p>MongoDB Kubernetes Operator is a paid enterprise package for Kubernetes. If you don't have access to this enterprise package from MongoDB you can also install the community edition yourself by either setting up all the resources yourself or using <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> to install it as a package. Using Helm makes managing the resources significantly easier, especially with regards to configuration, upgrades, re-installation or unistalling. The existing Helm charts are open source and also allow for running MongDB as a standalone instance, replica set or a sharded cluster.</p>
<p>For reference, Bitnami <a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb" rel="nofollow noreferrer">provides a MongoDB Standalone or replica set helm chart</a> which seems to be on the latest MongoDB version and is maintained regularly. There is also <a href="https://github.com/helm/charts/tree/master/stable/mongodb-replicaset" rel="nofollow noreferrer">this one</a>, but it's on an older version of MongoDB and doesn't seem to be getting much attention.</p>
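<p>As a rough sketch of the Helm route (chart value names change between chart versions, so treat these flags as illustrative and check the chart's own documentation):</p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set auth.rootPassword=pass1 \
  --set auth.username=admin \
  --set auth.password=pass1 \
  --set auth.database=admin
</code></pre>
<p>The chart then creates the StatefulSet, the application user and the credentials Secret for you, which replaces the hand-rolled <code>MongoDBUser</code> resource entirely.</p>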
| julz256 |
<p>I downloaded minikube after that ...</p>
<ol>
<li>I did <code>minikube start</code>... so, node started</li>
<li>I played around with some containers(deployment object)</li>
<li>Now when I am doing <code>docker ps</code> => it's showing all the k8's container running -_-"</li>
</ol>
<p><strong>What I wanted to see is the local Docker Daemon containers rather than Vm's containers</strong></p>
<hr />
<p>When I run <code>minikube docker-env</code> it shows:</p>
<blockquote>
<p>Exiting due to ENV_DRIVER_CONFLICT: 'none' driver does not support 'minikube docker-env' command</p>
</blockquote>
<p><strong>What should I do now to connect to local Docker Daemon ?</strong></p>
<p><em>I am using Ubuntu 18</em> :)</p>
| Ashutosh Tiwari | <p>Since you started minikube without specifying driver, the host docker daemon will be used, so you can access it without any special environment variables. That’s why you see “ <em>Exiting due to ENV_DRIVER_CONFLICT: 'none' driver does not support 'minikube docker-env' command</em>”</p>
<p>Try starting minikube with a dedicated VM or container driver, e.g. <code>minikube start --driver=docker</code> (or <code>--driver=kvm2</code> on Linux; <code>hyperkit</code> is macOS-only), or simply stop minikube so that only your own containers remain on the host Docker daemon.</p>
| Abhijit Gaikwad |
<p>I am using Openshift 4, CPU Request: 0.2, Limit 0.4.</p>
<p>From the monitoring, I can see that the CPU usage started from 0.1 and increased gradually. Is it because there is a mechanism that prevents over-reserving CPU?</p>
<p>Can I set up the pod to use the full CPU request from the beginning, and ramp up to the limit as fast as possible?</p>
<p><a href="https://i.stack.imgur.com/ysdYY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ysdYY.png" alt="enter image description here" /></a></p>
| Jim | <p>The max limit is already available from the beginning (presuming that the node has the CPU available to give). OCP is using CFS to enforce that limit, and CFS doesn't have anything that gradually kicks in, CFS only has one thing it considers: the configured limit.</p>
<p>As for why you are seeing this in your monitoring, I'm not sure. But my first guess would be that that graph is using a moving average. (And thus, since it's a moving average it will converge towards the actual usage.)</p>
| David Ogren |
<p>I created a Docker image based on microsoft/dotnet-framework of a C#.NET console application built for Windows containers, then ensured I can run the image in a container locally. I successfully pushed the image to our Azure Container registry. Now I'm trying to create a deployment in our Azure Kubernetes service, but I'm getting an error: </p>
<blockquote>
<p>Failed to pull image "container-registry/image:tag": rpc error: code = Unknown desc = unknown blob</p>
</blockquote>
<p>I see this error on my deployment, pods, and replica sets in the Kubernetes dashboard.</p>
<p>We already have a secret that works with the azure-vote app, so I wouldn't think this is related to secrets, but I could be wrong.</p>
<p>So far, I've tried to create this deployment by pasting the following YAML into the Kubernetes dashboard Create dialog:</p>
<pre><code>apiVersion:
kind: Deployment
metadata:
name: somename
spec:
selector:
matchLabels:
app: somename
tier: backend
replicas: 2
template:
metadata:
labels:
app: somename
tier: backend
spec:
containers:
- name: somename
image: container-registry/image:tag
ports:
- containerPort: 9376
</code></pre>
<p>And I also tried running variations of this kubectl command:</p>
<pre><code>kubectl run deploymentname --image=container-registry/image:tag
</code></pre>
<p>In my investigation so far, I've tried reading about different parts of k8s to understand what may be going wrong, but it's all fairly new to me. I think it may have to do with this being a Windows Server 2016 based image. A team member successfully added the azure-vote tutorial code to our AKS, so I'm wondering if there is a restriction on a single AKS service running deployments for both Windows and Linux based containers. I see by running <code>az aks list</code> that the AKS has an agentPoolProfile with "osType": "Linux", but I don't know if that means simply that the orchestrator is in Linux or if the containers in the pods have to be Linux based. I have found stackoverflow questions about the "unknown blob" error, and it seems <a href="https://stackoverflow.com/questions/45138558/pull-image-from-and-connect-to-the-acs-engine-kubernetes-cluster/45141031#45141031">the answer to this question</a> might support my hypothesis, but I can't tell if that question is related to my questions.</p>
<p>Since the error has to do with failing to pull an image, I don't think this has to do with configuring a service for this deployment. Adding a service didn't change anything. I've tried rebuilding my app under the suspicion that the image was corrupted, but rebuilding and re-registering had no effect. Another thing that doesn't seem relevant that I read about is <a href="https://stackoverflow.com/questions/48765821/unable-to-pull-public-images-with-kubernetes-using-kubectl">this question and answer</a> regarding a manifest mismatch (which I don't completely understand yet).</p>
<p>I have not tried creating a local Kubernetes. I don't know if that's something folks typically do. </p>
<p>Summary of questions:</p>
<ol>
<li>What causes this unknown blob error? Does it have to do with a Windows container/Linux container mismatch?</li>
<li>Does the agent pool profile affect all the nodes in the cluster, or just the "master" nodes? </li>
</ol>
<p>Let me know if you need more information. Thanks.</p>
| Will | <p><strong>1. What causes this unknown blob error? Does it have to do with a Windows container/Linux container mismatch?</strong>
It's because you're trying to run a Windows-based Docker container on a Linux host. It has nothing directly to do with Kubernetes or AKS. Currently AKS is in preview and supports only Linux environments. To be more precise, when you provision your AKS cluster (<code>az aks create</code>), all your k8s minions (worker nodes) will be Linux boxes and thus will not be able to run Windows-based containers.</p>
<p><strong>2. Does the agent pool profile affect all the nodes in the cluster, or just the "master" nodes?</strong>
It affects the worker nodes and is used to group them together logically so you can better manage workload distribution. In the future, when AKS supports both Linux and Windows, you will be able to i.e. create agent pools based on OS type and instruct k8s to deploy your Windows-based services only to the Windows-based hosts (agents).</p>
| dmusial |
<p>I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.</p>
<p>All the examples I've seen for kubernetes controllers use either client-go or just call the api server directly over http. However, kubebuilder has also given me this client.Client object to get and list resources. So I'm trying to use that.</p>
<p>After creating a client instance by using the passed in Manager instance (i.e. do <code>mgr.GetClient()</code>), I then tried to write some code to get the list of all the Environment resources I created.</p>
<pre><code>func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
// Sync environments
// Step 1 - read all the environments the cluster knows about
clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
</code></pre>
<p>The example in the documentation for the List method shows:</p>
<pre><code>c.List(context.Background, &result);
</code></pre>
<p>which doesn't even compile.</p>
<p>I saw a few method in the client package to limit the search to particular labels, or for a specific field with a specific value, but nothing to limit the result to a specific resource kind.</p>
<p>Is there a way to do this via the <code>Client</code> object? Should I do something else entirely?</p>
| Chris Tavares | <p>So figured it out - the answer is to pass <code>nil</code> for the second parameter. The type of the output pointer determines which sort of resource it actually retrieves.</p>
| Chris Tavares |
<p>I am trying to convert a Docker Compose file to a Kubernetes manifest for deployment. After installing Kompose on my system, I used the command <code>kompose convert -f docker-compose.yml</code> for the conversion, but it is not working. The error response is <code>FATA services.web.stdin_open must be a boolean</code>.</p>
<p>The docker-compose file is shown thus:</p>
<pre><code>version: '3'
services:
# Backend / Database
database:
image: database
build: ../backend/database
volumes:
- ../backend/database:/data/db
restart: always
networks:
- back
# Backend / API
api:
image: api
build: ../backend/api
volumes:
- ../backend/api/public:/user/src/app/public
restart: always
ports:
- "8084:8080"
depends_on:
- database
networks:
- front
- back
# Backend / Proxy
proxy:
image: nginx
volumes:
- ../backend/proxy/nginx.conf:/etc/nginx/conf.d/proxy.conf
restart: always
ports:
- "80:80"
- "443:443"
depends_on:
- database
- api
networks:
- front
- docker-network
# Frontend / App / Web
web:
image: web
stdin_open: "true"
build: ../frontend/app/web
restart: always
ports:
- "3000:3000"
depends_on:
- api
networks:
- front
networks:
front:
driver: bridge
back:
driver: bridge
docker-network:
driver: bridge
</code></pre>
<p>Please I'd like help on how I can avoid this error and build the manifest file. I am running this on a windows machine. Also a screenshot of the error is shown in the image attached.</p>
<p><a href="https://i.stack.imgur.com/ebAcl.png" rel="nofollow noreferrer">screenshot of error</a></p>
<p>Many Thanks.</p>
| Sam Bayo | <p>Change stdin_open to true instead of ”true”</p>
<pre><code>services:
web:
stdin_open: true
</code></pre>
| Abhijit Gaikwad |
<p>How can I modify the values in a Kubernetes <code>secret</code> using <code>kubectl</code>?</p>
<p>I created the secret with <code>kubernetes create secret generic</code>, but there does not seem to be a way to modify a secret. For example, to add a new secret-value to it, or to change a secret-value in it.</p>
<p>I assume i can go 'low-level', and write the yaml-file and do a <code>kubectl edit</code> but I hope there is a simpler way.</p>
<p>(I'm using <code>kubernetes 1.2.x</code>)</p>
| gabor | <p>The most direct (and interactive) way should be to execute <code>kubectl edit secret <my secret></code>. Run <code>kubectl get secrets</code> if you'd like to see the list of secrets managed by Kubernetes.</p>
| Timo Reimann |
<p>I have a container with a dotnet core application running in Azure Kubernetes Service. No memory limits were specified in Pod specs.</p>
<p>The question is why GC.GetTotalMemory(false) shows approx. 3 Gb of memory used while AKS Insights shows 9.5 GB for this Pod container?</p>
<p><a href="https://i.stack.imgur.com/rRBa9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rRBa9.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/nmVD4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nmVD4.png" alt="enter image description here" /></a></p>
<p>Running <code>top</code> reveals these 9.5 GB:
<a href="https://i.stack.imgur.com/rATs5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rATs5.png" alt="enter image description here" /></a></p>
| Anton Petrov | <p>As I understand <code>GC.GetTotalMemory(false)</code> returns the size of managed objects in bytes but the entire working memory set is much larger because memory is allocated in pages and because of managed heap fragmentation and because GC is not performed.</p>
| Anton Petrov |
<p>I'm trying to understand why this particular <code>socat</code> command isn't working in my case where I run it in a IPv6 only Kubernetes cluster.</p>
<p>Cluster is build on top of AWS with Calico CNI & containerd. Provisioned using <code>kubeadm</code> and Kubernetes 1.21.</p>
<p>I have run the following <code>socat</code> command which binds to loopback interface <code>::1</code>,</p>
<pre><code>kubectl --context=$CLUSTER1 run --image=alpine/socat socat -- tcp6-listen:15000,bind=\[::1\],fork,reuseaddr /dev/null
</code></pre>
<p>And then I try to <code>port-forward</code> and <code>curl</code> to <code>15000</code> port,</p>
<pre><code>kubectl --context=$CLUSTER1 port-forward pod/socat 35000:15000 --address=::1
curl -ivg http://localhost:35000
</code></pre>
<p>I get the error,</p>
<pre><code>Forwarding from [::1]:35000 -> 15000
Handling connection for 35000
E0830 17:09:59.604799 79802 portforward.go:400] an error occurred forwarding 35000 -> 15000: error forwarding port 15000 to pod a8ba619774234e73f4c1b4fe4ff47193af835cffc56cb6ad1a8f91e745ac74e9, uid : failed to execute portforward in network namespace "/var/run/netns/cni-8bade2c1-28c9-6776-5326-f10d55fd0ff9": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused
</code></pre>
<p>Its listening to <code>15000</code> as,</p>
<pre><code>Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 ::1:15000 :::* LISTEN 1/socat
</code></pre>
<p>However if I run the following it works fine,</p>
<pre><code>kubectl --context=$CLUSTER1 run --image=alpine/socat socat -- tcp6-listen:15000,bind=\[::\],fork,reuseaddr /dev/null
</code></pre>
<p>Not sure I understand why <code>port-forward</code> would fail for the loopback interface binding <code>::1</code> but not for catch all <code>::</code>. Can someone please shed some light on this ?</p>
| nixgadget | <p>For those of you running into a similar issue with your IPv6 only Kubernetes clusters heres what I have investigated found so far.</p>
<p><strong>Background:</strong> It seems that this is a generic issue relating to IPv6 and CRI.
I was running <code>containerd</code> in my setup and <code>containerd</code> versions <code>1.5.0</code>-<code>1.5.2</code> added two PRs (<a href="https://github.com/containerd/containerd/commit/11a78d9d0f1664466fa3fffebd8ff234f3ef2677" rel="nofollow noreferrer">don't use socat for port forwarding</a> and <a href="https://github.com/containerd/containerd/commit/305b42583073aa1435d15ff5773c049827fa8d51" rel="nofollow noreferrer">use happy-eyeballs for port-forwarding</a>) which fixed a number of issues in IPv6 port-forwarding.</p>
<p><strong>Potential fix:</strong> Further to pulling in <code>containerd</code> version <code>1.5.2</code> (as part of Ubuntu 20.04 LTS) I was also getting the error <code>IPv4: dial tcp4 127.0.0.1:15021: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found</code> when port-forwarding. This is caused by a DNS issue when resolving <code>localhost</code>. Hence I added <code>localhost</code> to resolve as <code>::1</code> in the host machine with the following command.</p>
<pre><code>sed -i 's/::1 ip6-localhost ip6-loopback/::1 localhost ip6-localhost ip6-loopback/' /etc/hosts
</code></pre>
<p>I think the important point here is that check your container runtimes to make sure IPv6 (tcp6 binding) is supported.</p>
| nixgadget |
<p><strong>I wanna have a trigger when I make a deployment of my React application, and my pod is finally <code>running</code></strong></p>
<p>I need this Kubernetes trigger, in order to launch another pod which is gonna copy/past the static files for another pod specific just made for static files. (I wanna do this to keep the old <code>bundle.js</code> if ever users on the application are still surfing with old bundles, this way I'll be able to make a fat PWA)</p>
<p>I don't wanna have to wait myself the end of the deployment (6 minutes of docker building enough to take a cup of tea)</p>
<p>When my React app is <code>Running</code> => My goal is to start a pod which is gonna make a <code>kubectl</code> command from inside the cluster and kill himself afterwards</p>
<p>The command from inside the pod is gonna be a simple copy/past <code>kubectl cp fresh-new-deployment/static pod-just-for-static-files/var/www</code></p>
<p>Everything is working fine in local, I just need the Kubernetes trigger ;)</p>
<p>I don't want to make a kubernetes CRON for this (or maybe this is the only way), or an every minute CRON ? what you recommend ?</p>
<p>Thanks I already typed in google <code>kubernetes trigger when pod running</code> there was nothing interesting.</p>
| Thomas Aumaitre | <p>Ideally you should use the <code>init-containers</code> pattern described here: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/</a></p>
<p>Here are some ideas for how to use init containers:</p>
<p><em>Wait for a Service to be created, using a shell one-line command like:</em></p>
<pre><code>for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
</code></pre>
<p><em>Register this Pod with a remote server from the downward API with a command like:</em></p>
<pre><code>curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
</code></pre>
<p><em>Wait for some time before starting the app container with a command like</em></p>
<pre><code>sleep 60
</code></pre>
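<p>As a rough sketch of how this could look for the use case above (the images, the service name and the RBAC permissions the pod's service account needs for <code>kubectl cp</code> are assumptions; the copy command itself is the one from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: copy-static-files
spec:
  restartPolicy: Never
  initContainers:
    - name: wait-for-frontend
      image: busybox:1.34
      # block until the frontend Service is resolvable, i.e. the deployment is up
      command: ["sh", "-c", "until nslookup frontend-svc; do echo waiting for frontend; sleep 2; done"]
  containers:
    - name: copy-static
      image: bitnami/kubectl:latest
      command: ["sh", "-c", "kubectl cp fresh-new-deployment/static pod-just-for-static-files/var/www"]
</code></pre>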
| Abhijit Gaikwad |
<p>Im trying to launch two Cassandra statefulset instances and respective PVC in a cluster created in AWS AZ (Across 3 zones, <em>eu-west-1a</em>, <em>eu-west-1b</em> & <em>eu-west-1c</em>). </p>
<p>I created a node group with the following 2 nodes so as shown these nodes attach in the zones <em>eu-west-1a</em> and <em>eu-west-1b</em></p>
<pre><code>ip-192-168-47-86.eu-west-1.compute.internal - failure-domain.beta.kubernetes.io/zone=eu-west-1a,node-type=database-only
ip-192-168-3-191.eu-west-1.compute.internal - failure-domain.beta.kubernetes.io/zone=eu-west-1b,node-type=database-only
</code></pre>
<p>When I launch the Cassandra instances (using Helm) only one instance starts. The other instance shows the error,</p>
<pre><code>0/4 nodes are available: 2 node(s) didn't match node selector, 2 node(s) had no available volume zone.
</code></pre>
<p>The PVCs for these instances are bounded,</p>
<pre><code>kubectl get pvc -n storage -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cc-cassandra-0 Bound pvc-81e30224-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO gp2 4m
cassandra-data-cc-cassandra-1 Bound pvc-abd30868-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO gp2 3m
</code></pre>
<p>However, the PVs show that they are in zones <em>eu-west-1b</em> & <em>eu-west-1c</em></p>
<pre><code>kubectl get pv -n storage --show-labels
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pvc-81e30224-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO Delete Bound storage/cassandra-data-cc-cassandra-0 gp2 7m failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1b
pvc-abd30868-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO Delete Bound storage/cassandra-data-cc-cassandra-1 gp2 6m failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c
</code></pre>
<p>I have tried adding the following topology to the <code>StorageClass</code> to no avail,</p>
<pre><code>allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- eu-west-1a
- eu-west-1b
</code></pre>
<p>But despite of this I can still see the PVs in the zones, <code>eu-west-1b</code> & <code>eu-west-1c</code>. </p>
<p>Using K8 1.11.</p>
<p>Any other possible fixes ?</p>
| nixgadget | <p>Looking at <a href="https://v1-11.docs.kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://v1-11.docs.kubernetes.io/docs/concepts/storage/storage-classes/</a>, <code>allowedTopologies</code> doesn't exist.</p>
<p>So I used <code>zones: eu-west-1a, eu-west-1b</code> in the <code>StorageClass</code> which seems to have worked.</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2   # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: eu-west-1a, eu-west-1b
</code></pre>
| nixgadget |
<p>I have installed Rancher Desktop. It is working perfectly except for the inability for nerdctl and k3s to download docker images from hub.docker.com from behind my corporate firewall.</p>
<p>Question 1: After downloading Rancher Desktop, how do I set my corporate proxy credentials such that Kubernetes (with Rancher Desktop) pulls images from hub.docker.com.</p>
<p>Question 2: After downloading Rancher Desktop, how do I set my corporate proxy credentials such that the following command works from behind my corporate firewall.</p>
<pre><code>% nerdctl run --name jerrod-mysql-test -e MYSQL_ROOT_PASSWORD=password -p 7700:3306 mysql:8.0
INFO[0000] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/mysql/manifests/8.0\": dial tcp: lookup registry-1.docker.io on xxx.xxx.x.x:53: no such host" host=registry-1.docker.io
FATA[0000] failed to resolve reference "docker.io/library/mysql:8.0": failed to do request: Head "https://registry-1.docker.io/v2/library/mysql/manifests/8.0": dial tcp: lookup registry-1.docker.io on xxx.xxx.x.x:53: no such host
</code></pre>
| Jerrod Horton | <p>I am using mac. In mac. machine, add proxy details to /etc/conf.d/docker file.</p>
<pre><code>#rdctl shell
#sudo su -
#vi /etc/conf.d/docker
NO_PROXY="localhost,127.0.0.1"
HTTPS_PROXY="http://HOST:PORT"
HTTP_PROXY="http://HOST:PORT"
export HTTP_PROXY
export HTTPS_PROXY
export NO_PROXY
</code></pre>
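<p>After editing the file, the Docker daemon inside the VM most likely needs to be restarted for the change to take effect; with OpenRC that would be something along these lines (or simply restart Rancher Desktop itself):</p>
<pre><code>rc-service docker restart
</code></pre>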
| Vallabha Vamaravelli |
<p>I have written a simple spring boot application(version springboot 2.0) which uses mysql(version 5.7).</p>
<p><strong>application.properties</strong> snippet</p>
<pre><code>spring.datasource.url = jdbc:mysql://localhost:3306/test?useSSL=false
spring.datasource.username = testuser
spring.datasource.password = testpassword
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
</code></pre>
<p>When I run it locally, it works fine.
If I want to run this spring boot application in docker then I can change</p>
<pre><code>spring.datasource.url = jdbc:mysql://mysql-container:3306/test?useSSL=false
</code></pre>
<p><em>mysql-container is run using mysql:5.7 image from dockerhub.</em></p>
<p>However I want to change value of host from some placeholder properties file. so that this looks something like:</p>
<pre><code>spring.datasource.url = jdbc:mysql://${MYSQL_HOST}:3306/test?useSSL=false
</code></pre>
<p>note: I am not sure about placeholder format. Is it ${MYSQL_HOST} or @MYSQL_HOST@ ?</p>
<p>you can name this placeholder file as <em>placeholder.properties</em> or <em>placeholder.conf</em> or <em>.env</em> or anything. The content of that file should be something like:</p>
<pre><code>MYSQL_HOST=localhost
</code></pre>
<p>or</p>
<pre><code>MYSQL_HOST=some ip address
</code></pre>
<p>I can create .env or .env.test or .env.prod and I can refer that env file based on where I want to run application.</p>
<hr>
<p>UPDATE -</p>
<p>I have two questions:</p>
<ol>
<li><p>Where should I keep placeholder.properties? Is it under /config/ or under some specific directory?</p></li>
<li><p>how to invoke placeholder inside application.properties ?</p></li>
</ol>
<p>can someone suggest?</p>
| Shivraj | <p>SUGGESTION: If you have a relatively small #/properties, why not just have a different application.properties file for each different environment?</p>
<p>You'd specify the environment at runtime with <code>-Dspring.profiles.active=myenv</code>.</p>
<p>Look <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html" rel="nofollow noreferrer">here</a> and <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-profile-specific-properties" rel="nofollow noreferrer">here</a>.</p>
<p>PS:</p>
<p>To answer your specific question: the syntax is <code>${MYSQL_HOST}</code></p>
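<p>As a minimal sketch of that approach (file names follow Spring's <code>application-{profile}.properties</code> convention; the value after the colon is an optional default used when <code>MYSQL_HOST</code> is not set):</p>
<pre><code># application-prod.properties
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/test?useSSL=false
</code></pre>
<p>You would then start the application with <code>-Dspring.profiles.active=prod</code> and export <code>MYSQL_HOST</code> as an environment variable (or system property) in that environment.</p>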
| paulsm4 |
<p>I'm creating three EKS clusters using <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="nofollow noreferrer">this</a> module. Everything works fine, just that when I try to add the configmap to the clusters using <code>map_roles</code>, I face an issue.</p>
<p>My configuration looks like this which I have it within all three clusters</p>
<pre><code>map_roles = [{
rolearn = "arn:aws:iam::${var.account_no}:role/argo-${var.environment}-${var.aws_region}"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]
},
{
rolearn = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
username = "admin"
groups = ["system:masters","system:nodes","system:bootstrappers"]
},
{
rolearn = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
username = "admin"
groups = ["system:masters","system:nodes","system:bootstrappers"]
}
]
</code></pre>
<p>The problem occurs while applying the template. It says</p>
<pre><code>configmaps "aws-auth" already exists
</code></pre>
<p>When I studied the error further I realised that when applying the template, the module creates three configmap resources of the same name like these</p>
<pre><code> resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
</code></pre>
<p>This obviously is a problem. How do I fix this issue?</p>
| Red Bottle | <p>The aws-auth configmap is created by EKS, when you create a managed node pool. It has the configuration required for nodes to register with the control plane. If you want to control the contents of the configmap with Terraform you have two options.</p>
<p>Either make sure you create the config map before the managed node pools resource. Or import the existing config map into the Terraform state manually.</p>
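<p>If you go the import route, the import ID for a <code>kubernetes_config_map</code> is <code>namespace/name</code>; the exact resource address depends on your module layout, so the one below is only an assumption:</p>
<pre><code>terraform import 'module.eks_cluster_1.kubernetes_config_map.aws_auth[0]' kube-system/aws-auth
</code></pre>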
| pst |
<p>Friends,</p>
<p>I'm trying to deploy a mysql cluster on minikube using the helm chart of Bitnami. It's not working apparently because of lack of space since I'm getting the following error: <strong>mkdir: cannot create directory '/bitnami/mysql/data': No space left on device</strong>.</p>
<p>I am running minikube (version: v1.15.0) on a macOS with 500GB RAM, more than the half of it is still free. Any ideas about how could I solve this problem?</p>
<p>I ssh into the minikube environment and run df -h. This the result:</p>
<p>$ df -h</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
tmpfs 3.4G 487M 3.0G 15% /
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 18M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 176K 1.9G 1% /tmp
**/dev/vda1 17G 16G 0 100% /mnt/vda1**
</code></pre>
<p>It seems it minikube is really out of space. What can be done in this case?</p>
<p>Here the complete logs of my pod:</p>
<pre><code>mysql 17:08:49.22 Welcome to the Bitnami mysql container
mysql 17:08:49.22 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mysql
mysql 17:08:49.22 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mysql/issues
mysql 17:08:49.23
mysql 17:08:49.23 INFO ==> ** Starting MySQL setup **
mysql 17:08:49.24 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mysql 17:08:49.24 INFO ==> Initializing mysql database
mkdir: cannot create directory '/bitnami/mysql/data': No space left on device
</code></pre>
| marcelo | <p>Delete the existing minikube VM with <code>minikube stop &amp;&amp; minikube delete</code>, then recreate it with a larger disk:</p>
<pre><code>minikube start --disk-size 50000mb
</code></pre>
| Abhijit Gaikwad |
<p>I really don't understand this issue. In my <code>pod.yaml</code> I set the <code>persistentVolumeClaim</code>; I copied it from my last application declaration with PVC &amp; PV.
I've checked that the files are in the right place!
In my Deployment file I've just set the port and the spec for the containers.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: ds-mg-cas-pod
namespace: ds-svc
spec:
containers:
- name: karaf
image: docker-all.xxxx.net/library/ds-mg-cas:latest
env:
- name: JAVA_APP_CONFIGS
value: "/apps/ds-cas-webapp/context"
- name: JAVA_EXTRA_PARAMS
value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
volumeMounts:
- name: ds-cas-config
mountPath: "/apps/ds-cas-webapp/context"
volumes:
- name: ds-cas-config
persistentVolumeClaim:
claimName: ds-cas-pvc
</code></pre>
<p>the <code>PersistentVolume</code> & <code>PersistenteVolumeClaim</code></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: ds-cas-pv
namespace: ds-svc
labels:
type: local
spec:
storageClassName: generic
capacity:
storage: 5Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/apps/ds-cas-webapp/context"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ds-cas-pvc
namespace: ds-svc
spec:
storageClassName: generic
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Mi
</code></pre>
<p>The error I get when I run the pod:</p>
<pre><code> java.io.FileNotFoundException: ./config/truststore.jks (No such file or directory)
</code></pre>
<p>I ran the same image manually with Docker and didn't get an error. My question is where I could have made a mistake, because I really don't see it :(
I set everything:</p>
<ul>
<li>the mountpoints</li>
<li>the ports</li>
<li>the variable</li>
</ul>
<p><em>the docker command that i used to run the container</em> :</p>
<pre><code>docker run --name ds-mg-cas-manually
-e JAVA_APP=/apps/ds-cas-webapp/cas.war
-e JAVA_APP_CONFIGS=/apps/ds-cas-webapp/context
-e JAVA_EXTRA_PARAMS="-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
-p 8443:8443
-p 6402:640
-d
-v /apps/ds-cas-webapp/context:/apps/ds-cas-webapp/context
docker-all.attanea.net/library/ds-mg-cas
/bin/sh -c
</code></pre>
| morla | <p>Your PersistentVolumeClaim is probably bound to the wrong PersistentVolume.</p>
<p>PersistentVolumes exist cluster-wide, only PersistentVolumeClaims are attached to a namespace:</p>
<pre><code>$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
</code></pre>
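<p>To check which PersistentVolume the claim actually bound to, compare the <code>VOLUME</code> column of the claim with the <code>CLAIM</code> column of the volumes:</p>
<pre><code>kubectl get pvc ds-cas-pvc -n ds-svc
kubectl get pv
</code></pre>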
| Florian Neumann |
<p>I have been running a spring boot application on Kubernetes with using JDK 11 image. My expectation is that when the JVM hits out of memory exception then the pod should be killed so that Kubernetes can bring up a larger pod. I can confirm that this is not what's happening. I am not sure if there are some JVM arguments I have to set that I am missing or perhaps some Kubernetes configurations to be aware of this situation.</p>
<p>I am using the following JVM arguments:</p>
<pre><code>-XX:InitialRAMPercentage=20.0 -XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0 -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutofMemoryError
</code></pre>
<p>The thrown exceptions:</p>
<pre><code>{
"message": "Stopping container due to an Error",
"logger_name": "org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer",
"thread_name": "KafkaConsumerDestination{consumerDestinationName='message-submitted', partitions=21, dlqName='dlq'}.container-0-C-1",
"level": "ERROR",
"stack_trace": "java.lang.OutOfMemoryError: Java heap space\n\tat java.base/java.util.Arrays.copyOf(Arrays.java:3745)\n\tat java.base/java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:120)\n\tat java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:95)\n\tat java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:156)\n\tat software.amazon.awssdk.utils.IoUtils.toByteArray(IoUtils.java:49)\n\tat software.amazon.awssdk.core.sync.ResponseTransformer.lambda$toBytes$3(ResponseTransformer.java:175)\n\tat software.amazon.awssdk.core.sync.ResponseTransformer$$Lambda$1517/0x0000000101087040.transform(Unknown Source)\n\tat software.amazon.awssdk.core.client.handler.BaseSyncClientHandler$HttpResponseHandlerAdapter.transformResponse(BaseSyncClientHandler.java:154)\n\tat software.amazon.awssdk.core.client.handler.BaseSyncClientHandler$HttpResponseHandlerAdapter.handle(BaseSyncClientHandler.java:142)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.handleSuccessResponse(HandleResponseStage.java:89)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.handleResponse(HandleResponseStage.java:70)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:58)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:41)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:64)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:36)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.doExecute(RetryableStage.java:113)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.execute(RetryableStage.java:86)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:62)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:42)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:57)\n\tat software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:37)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)\n\tat 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)\n"
}
{
"message": "Error while stopping the container: ",
"logger_name": "org.springframework.kafka.listener.KafkaMessageListenerContainer",
"thread_name": "KafkaConsumerDestination{consumerDestinationName='message-submitted', partitions=21, dlqName='dlq'}.container-0-C-1",
"level": "ERROR",
"stack_trace": "java.lang.OutOfMemoryError: Java heap space\n\tat java.base/java.util.Arrays.copyOf(Arrays.java:3745)\n\tat java.base/java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:120)\n\tat java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:95)\n\tat java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:156)\n\tat software.amazon.awssdk.utils.IoUtils.toByteArray(IoUtils.java:49)\n\tat software.amazon.awssdk.core.sync.ResponseTransformer.lambda$toBytes$3(ResponseTransformer.java:175)\n\tat software.amazon.awssdk.core.sync.ResponseTransformer$$Lambda$1517/0x0000000101087040.transform(Unknown Source)\n\tat software.amazon.awssdk.core.client.handler.BaseSyncClientHandler$HttpResponseHandlerAdapter.transformResponse(BaseSyncClientHandler.java:154)\n\tat software.amazon.awssdk.core.client.handler.BaseSyncClientHandler$HttpResponseHandlerAdapter.handle(BaseSyncClientHandler.java:142)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.handleSuccessResponse(HandleResponseStage.java:89)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.handleResponse(HandleResponseStage.java:70)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:58)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:41)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:64)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:36)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.doExecute(RetryableStage.java:113)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.execute(RetryableStage.java:86)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:62)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:42)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:57)\n\tat software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:37)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)\n\tat 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)\n\tat software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)\n"
}
</code></pre>
<p>I think what's happening is OOM exception causes the pod to shutdown and then on trying to shutdown the pod the same exception is being thrown. So I tried to kill the pod regardless by adding <code>-XX:OnOutOfMemoryError="kill -9 %p</code> but it didn't help.</p>
<p>On a slightly different note, the pod memory limit is 2Gi. However, the pod reaches to OOM exception on about 700Mi, so I don't think there is not enough memory, just the pod throws an exception before even trying to expand the memory:</p>
<pre><code> resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: 10m
memory: 128Mi
</code></pre>
<p>I have also tested <code>-XX:+CrashOnOutOfMemoryError</code> and it didn't help to resolve my situation and pod keeps throwing OOM on the attempt to shutdown the container.</p>
| Ali | <p>Minikube may not have enough memory.</p>
<p>You can check the available memory settings with <code>minikube start -h</code>.</p>
<p>Then:</p>
<p><code>minikube stop && minikube start --cpus 4 --memory 2048</code></p>
<p>Update: try setting the heap size (for example <code>-Xms1g -Xmx2g</code>) when starting the Java application.</p>
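<p>One way to pass such JVM flags without rebuilding the image is the standard <code>JAVA_TOOL_OPTIONS</code> environment variable in the pod spec (the values here are only an example):</p>
<pre><code>        env:
          - name: JAVA_TOOL_OPTIONS
            value: "-Xms1g -Xmx2g -XX:+ExitOnOutOfMemoryError"
</code></pre>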
| Abhijit Gaikwad |
<p>I had deploy a tomcat on kubernetes and when I run this command : <code>kubectl describe svc dev-tomcat</code> I have this :</p>
<pre><code>Name: dev-tomcat
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=dev-tomcat
Type: ClusterIP
IP: 10.50.54.10
Port: tomcat 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.115.122.114:8080
Session Affinity: None
Events: <none>
</code></pre>
<p>How can I access to tomcat now ? when I try to run this on my browser nothing appears :</p>
<pre><code>http://10.115.122.114:8080
</code></pre>
| rodolf12343 | <p>Use <code>IP</code> from the output, try <code>10.50.54.10:8080</code></p>
<p>Update:</p>
<p>For minikube, you need to use a <code>NodePort</code> service.</p>
<p>I did the following:</p>
<pre><code> kubectl create deployment tomcatinfra --image=saravak/tomcat8
kubectl expose deployment tomcatinfra --port=8080 --target-port=8080 --type NodePort
kubectl get svc
minikube service tomcatinfra
</code></pre>
| Abhijit Gaikwad |
<p>Both pods are scheduled on same node with podaffinity, each pod on a different namespace. Once I try to deploy both of them on same namespace, podaffinity fails, and one one pod is running while the other one remains pending with podaffinity error.
Thanks!</p>
| Tzvika Avni | <p>From your comment, I suspect that you have a label collision that is only apparent when you try to run the pods in the same namespace. </p>
<p>Take a look at your <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer"><code>nodeSelectorTerms</code></a> and <code>matchExpressions</code> </p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">From the docs</a>:</p>
<blockquote>
<p>If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions can be satisfied.</p>
</blockquote>
| Dan O'Boyle |
<p>I am attempting to enforce a limit on the number of pods per deployment within a namespace / cluster from a single configuration point. I have two non-production environments, and I want to deploy fewer replicas on these as they do not need the redundancy.</p>
<p>I've read <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">this</a> but I'm unable to find out how to do it on the deployment level. Is it possible to enforce this globally without going into each one of them?</p>
| Alexander | <p>Since the replica count is a fixed value in a Deployment YAML file, you may be better off using <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> to template it, adding some flexibility per environment, as sketched below.</p>
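<p>A minimal sketch of that approach (names are illustrative): in the chart's Deployment template, take the replica count from the values, and keep one values file per environment.</p>
<pre><code># templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount }}
</code></pre>
<pre><code># values-nonprod.yaml
replicaCount: 1

# values-prod.yaml
replicaCount: 3
</code></pre>
<p>Each environment is then installed with its own values file, e.g. <code>helm install my-app ./chart -f values-nonprod.yaml</code>.</p>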
| Eric Ho |
<p>I'm trying to deploy a simple .NET App in local kubernetes cluster (Kind) for testing purposes. When a deployment is applied, a pod doesn't start with an error. But the image is built well as a container works well if started locally in Docker.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
orderproducer-68d5ff7944-d2d89 0/1 Error 4 103s
</code></pre>
<pre><code>kubectl logs -l app=orderproducer
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'OrderProducer.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
</code></pre>
<p>It's weird, because if I start a docker container from the same image locally (not in a cluster) it works well. Besides, I had run bash on that container and ensured that <code>OrderProducer.dll</code> was really presented in the <code>/app</code> folder (which is a workdir).</p>
<pre><code>xxx@xxx:/mnt/c/Users/xxx$ docker run --name test6 orderproducer:latest
Order Producer has started!
Kafka broker: 127.0.0.1:9092
</code></pre>
<p>Do you have any ideas what's my mistake? Why it run in Docker, but not in a K8s's pod? I've already spent about 3 hours trying to figure out, but still not. Many thanks in advance.</p>
<p>Here are some artifacts I used.</p>
<p>Dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["OrderProducer/OrderProducer.csproj", "OrderProducer/"]
COPY ["Common/Common.csproj", "Common/"]
RUN dotnet restore "OrderProducer/OrderProducer.csproj"
COPY . .
WORKDIR "/src/OrderProducer"
RUN dotnet build "OrderProducer.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "OrderProducer.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "OrderProducer.dll"]
</code></pre>
<p>Then I build an image and make it accessible for kind:</p>
<pre><code>docker build -f OrderProducer/Dockerfile -t orderproducer:latest .
kind load docker-image orderproducer:latest
</code></pre>
<p>Then I apply a deployment:</p>
<pre><code>kubectl apply -f orderproducer-deployment.yml
</code></pre>
<p>orderproducer-deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: orderproducer
spec:
replicas: 1
selector:
matchLabels:
app: orderproducer
template:
metadata:
labels:
app: orderproducer
spec:
containers:
- name: orderproducer
image: orderproducer:latest
imagePullPolicy: Never
resources:
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
| Dmitriy | <p>The reason was that when a volume was mounted at <code>/app</code>, it hid all of the container's original <code>/app</code> content (including <code>OrderProducer.dll</code>).</p>
<p>I fixed the issue by using <code>subPath</code>, so that only the single file is mounted instead of overlaying the whole directory, by editing the following part of the deployment:</p>
<pre><code>volumeMounts:
- name: appsettings-volume
mountPath: /app/appSettings.json
subPath: appSettings.json
</code></pre>
| Dmitriy |
<p>I am trying to run the <a href="https://github.com/akka/akka-sample-cluster-kubernetes-scala" rel="nofollow noreferrer">akka-sample-cluster-kubernetes-scala</a> as it is recommended to deploy an Akka cluster on to minikube using <code>akka-management-cluster-bootstrap</code>. After run every step recommended on the README file I can see the pods running on my <code>kubectl</code> output:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
appka-8b88f7bdd-485nx 1/1 Running 0 48m
appka-8b88f7bdd-4blrv 1/1 Running 0 48m
appka-8b88f7bdd-7qlc9 1/1 Running 0 48m
</code></pre>
<p>When I execute the <code>./scripts/test.sh</code> it seems to fail on the last step:</p>
<pre><code>"No 3 MemberUp log events found"
</code></pre>
<p>And I cannot connect to the given address said on the README file. <strong>The error</strong>:</p>
<pre><code>$ curl http://127.0.0.1:8558/cluster/members/
curl: (7) Failed to connect to 127.0.0.1 port 8558: Connection refused
</code></pre>
<p>From now on I describe how I try to find the reason that I cannot use the sample akka + kubernetes project. I am trying to find the cause of the error above mentioned. I suppose I have to execute <code>sbt run</code>, even it is not mentioned on the sample project. And them I get the following error with respect to the <code>${REQUIRED_CONTACT_POINT_NR}</code> variable at <code>application.conf</code>:</p>
<blockquote>
<p>[error] Exception in thread "main"
com.typesafe.config.ConfigException$UnresolvedSubstitution:
application.conf @
jar:file:/home/felipe/workspace-idea/akka-sample-cluster-kubernetes-scala/target/bg-jobs/sbt_12a05599/job-1/target/d9ddd12d/64fe375d/akka-sample-cluster-kubernetes_2.13-0.0.0-70-49d6a855-20210104-1057.jar!/application.conf:
19: Could not resolve substitution to a value:
${REQUIRED_CONTACT_POINT_NR}</p>
</blockquote>
<pre><code>#management-config
akka.management {
cluster.bootstrap {
contact-point-discovery {
# pick the discovery method you'd like to use:
discovery-method = kubernetes-api
required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
}
}
}
#management-config
</code></pre>
<p>So, I suppose that it is not getting the configuration from the <code>kubernetes/akka-cluster.yml</code> file: <code>name: REQUIRED_CONTACT_POINT_NR</code>. Changing it to <code>required-contact-point-nr = 3</code> or <code>4</code> I get the error:</p>
<pre><code>[error] SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
[error] SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
[error] SLF4J: See also http://www.slf4j.org/codes.html#replay
...
[info] [2021-01-04 11:00:57,373] [INFO] [akka.remote.RemoteActorRefProvider$RemotingTerminator] [] [appka-akka.actor.default-dispatcher-3] - Shutting down remote daemon. MDC: {akkaAddress=akka://[email protected]:25520, sourceThread=appka-akka.remote.default-remote-dispatcher-9, akkaSource=akka://[email protected]:25520/system/remoting-terminator, sourceActorSystem=appka, akkaTimestamp=10:00:57.373UTC}
[info] [2021-01-04 11:00:57,376] [INFO] [akka.remote.RemoteActorRefProvider$RemotingTerminator] [] [appka-akka.actor.default-dispatcher-3] - Remote daemon shut down; proceeding with flushing remote transports. MDC: {akkaAddress=akka://[email protected]:25520, sourceThread=appka-akka.remote.default-remote-dispatcher-9, akkaSource=akka://[email protected]:25520/system/remoting-terminator, sourceActorSystem=appka, akkaTimestamp=10:00:57.375UTC}
[info] [2021-01-04 11:00:57,414] [INFO] [akka.remote.RemoteActorRefProvider$RemotingTerminator] [] [appka-akka.actor.default-dispatcher-3] - Remoting shut down. MDC: {akkaAddress=akka://[email protected]:25520, sourceThread=appka-akka.remote.default-remote-dispatcher-9, akkaSource=akka://[email protected]:25520/system/remoting-terminator, sourceActorSystem=appka, akkaTimestamp=10:00:57.414UTC}
[error] Nonzero exit code returned from runner: 255
[error] (Compile / run) Nonzero exit code returned from runner: 255
[error] Total time: 6 s, completed Jan 4, 2021 11:00:57 AM
</code></pre>
| Felipe | <p>You are getting your contact point error because you are trying to use <code>sbt run</code>. <code>sbt run</code> will run a single instance outside of minikube, which isn't what you want. And since it's running outside of Minikube it won't pick up the environment variables being set in the container spec. The scripts do the build/deploy and you should not need to run sbt manually.</p>
<p>Also, the main error is not the failed connection to port 8558; I don't believe that the configuration exposes that admin port outside of minikube.</p>
<p>The fact that all three containers report a status Running indicates that you may actually have a running cluster and the test script may just be missing the messages in the logs. As others have said in comments, the logs from one of the containers would be helpful in determining whether you have a working cluster, and diagnosing any problems in cluster formation.</p>
| David Ogren |
<p>We have realised the mistake of using a Deployment with a PVC for our stateful app instead of going with Statefulset. I was wondering how the upgrade would work. How can I point to the old data with the new statefulset ? I am guessing that the old PVC cannot be used by the volumeClaimTemplate ? I have not found anything via Google with my search abilities.</p>
<p>Did anyone else go through this phase ? If you have, what was the process you followed ?</p>
<p>Thanks.</p>
<p>Adding some more details regarding the setup.</p>
<ol>
<li>Currently it is a simple deployment with no replicas. Just 1 deployment and 1 pod.</li>
<li>PV+PVC is used to have a persistent volume mounted where we write all of the data.</li>
<li>On Helm upgrade, we have a pre-upgrade hook added which mounts the same PV+PVC into the upgrade container and upgrades the data (Modifying XML files etc)</li>
</ol>
<p>It is simple, but the helm chart is bit too complicated with lots of other noise, but basically the application can be considered as simple as above.</p>
<p>Now, what I am looking for in my next upgrade is a process where I can make the above deployment as a statefulset and also have all the data still usable by the Pod.</p>
| Arunmu | <p>So, here is what I did. First, let me give a brief overview of my test setup:</p>
<ol>
<li>Have a set of deployments running which are having their own PVC+PV for persistence.</li>
<li>New release has all of those deployments converted into statefulsets, thus have different PVC.</li>
</ol>
<p>Problems:</p>
<ol>
<li><p>You cannot use the PV from earlier deployment because a PV is bound to a single PVC. You can refer to the PV using only that PVC.</p>
</li>
<li><p>So another question is: why can't you mount the old PV, using its PVC, into the new pod and copy the data from there?
Well, we cannot do that because, by the time the new pods are started, the old pods have been released, thereby releasing their associated PVC and PV. In my case I could see the PVC going into a 'Terminating' state. It could probably be solved with some kind of reclaim policy, but I am not sure of that.</p>
</li>
</ol>
<p>Solution:</p>
<ol>
<li><p>Create the PV and its PVC's manually and apply them. The PVC name should match the name that the statefulset would create which is quite straight forward. The PVC name matching for statefulset is very important, it expects it to be in certain format which you can find online.</p>
</li>
<li><p>In your new Helm chart, create a pre-upgrade Kubernetes Job. This is the pre-upgrade hook of Helm, which is run just before the actual upgrade of the Helm release.
So, here you create a container and mount both old and new PV (use the PVC you created manually).</p>
</li>
<li><p>Now, this container must run something to actually copy the data, right? For that I created a new ConfigMap and put a script in it which simply copies the data from the old PV to the new PV. This ConfigMap is mounted inside the container and the container is made to execute that script (see the sketch after the Job example below).</p>
</li>
<li><p>Run the helm upgrade command and see it in action.</p>
</li>
</ol>
<p>A rough example of the K8s pre-upgrade job:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-pre-upgrade-job
namespace: {{ .Values.namespace.name }}
annotations:
"helm.sh/hook": pre-upgrade
spec:
template:
spec:
containers:
- name: upgrade82-copy
image: <your-image>
command: ["/bin/bash"]
args: ["-c", "/scripts/upgrade82.sh"]
volumeMounts:
- name: old-data
mountPath: /usr/old
readOnly: false
- name: new-data
mountPath: /usr/new
readOnly: false
- name: scripts
mountPath: /scripts
readOnly: false
restartPolicy: Never
volumes:
- name: old-data
persistentVolumeClaim:
            claimName: old-claim # needs to be hardcoded to the PVC of the currently running Deployment
- name: new-data
persistentVolumeClaim:
            claimName: new-data-namespace-app-0 # there is a specific naming format for StatefulSet PVCs
- name: scripts
configMap:
name: upgrade
defaultMode: 0755
backoffLimit: 4
#activeDeadlineSeconds: 200
  ttlSecondsAfterFinished: 100
</code></pre>
<p>And "upgrade" above is another Configmap with your copy script.</p>
| Arunmu |
<p>I have 2 pods of Redmine deployed with Kubernetes the problem due to this the issue of session management is happening so some users are unable to login due to this so I came up with the idea to store the cache of both the pods in Redis server with Kubernetes(Centralized).<br>
I am giving the below configuration inside the Redmine pod in location.
/opt/bitnami/redmine/config/application.rb</p>
<p>configuration</p>
<pre><code>config.cache_store = :redis_store, {
host: "redis-headless.redis-namespace", #service name of redis
port: 6379,
db: 0,
password: "xyz",
namespace: "redis-namespace"
}, {
expires_in: 90.minutes
}
</code></pre>
<p>But this is not working as supposed .Need help where I am doing wrong.</p>
| user12417145 | <p>Redmine doesn't store any session data in its cache. Thus, configuring your two Redmines to use the same cache won't help.</p>
<p>By default Redmine stores the user sessions in a signed cookie sent to the user's browser without any server-local session storage. Since the session cookie is signed with a private key, you need to make sure that all installations using the same sessions also use the same application secret (and code and database).</p>
<p>Depending on how you have setup your Redmine, this secret is typically either stored in <code>config/initializers/secret_token.rb</code> or <code>config/secrets.yml</code> (relative to your Redmine installation directory). Make sure that you use the same secret here on both your Redmines.</p>
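<p>For example, with the <code>config/secrets.yml</code> variant, both installations need the identical value here (the value itself is just a placeholder):</p>
<pre><code>production:
  secret_key_base: "use-the-same-long-random-value-on-both-redmines"
</code></pre>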
| Holger Just |
<p>I see this command to temporarily run a pod</p>
<pre><code>k run -it pod1 --image=cosmintitei/bash-curl --restart=Never --rm
</code></pre>
<p>What does <code>-it</code> mean here ?</p>
<p>I don't know about the <code>-it</code> being used here. Why is it being used? What else can it be used for?</p>
| ANKIT RAWAT | <p>The <code>-it</code> is a short form of <code>-i -t</code>, which in turn is a short form of <code>--stdin --tty</code>.</p>
<p>As such, this instructs Kubernetes to</p>
<ul>
<li>pass its STDIN to the started process</li>
<li>and to present STDIN as a TTY (i.e. an interactive terminal), as shown in the example below</li>
</ul>
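<p>For example, the following starts a throwaway pod and drops you into an interactive shell in it; without <code>-it</code> the shell would exit immediately because it has no attached STDIN:</p>
<pre><code>kubectl run -it debug --image=busybox --restart=Never --rm -- sh
</code></pre>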
| Holger Just |
<p>I have installed dask-gateway via the helm chart. I assume that I can provide an options handler in the <code>gateway.backend.extraConfig</code> section of the chart values. I would also assume I can then configure any option for <a href="https://gateway.dask.org/api-server.html#kube-cluster-config" rel="nofollow noreferrer">KubeClusterConfig</a>.</p>
<p>This will allow me to customize the image. How do I specify an image pull secret?</p>
| shaunc | <p>In fact, <code>KubeClusterConfig</code> contains the option <code>worker_extra_pod_config</code>, which is a dictionary merged into the pod spec, so <code>imagePullSecrets</code> can be specified here.</p>
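<p>A minimal sketch of what that could look like inside <code>gateway.backend.extraConfig</code> (traitlets-style config; the secret name is an assumption):</p>
<pre><code>c.KubeClusterConfig.worker_extra_pod_config = {
    "imagePullSecrets": [{"name": "my-registry-secret"}]
}
</code></pre>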
| shaunc |
<p>I have the following <code>PersistentVolumeClaim</code> and <code>Deployment</code> in Kubernetes:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: my-app-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 25Gi
storageClassName: "gp2"
</code></pre>
<pre><code>...
containers:
...
volumeMounts:
- name: my-app-storage
mountPath: /var/my-app/storage
readOnly: false
...
volumes:
- name: my-app-storage
persistentVolumeClaim:
claimName: my-app-storage
</code></pre>
<p>The problem is this is only creating a single volume that is mounted into each pod. I have <code>replicas: 3</code> though. How can I have a volume created per pod? I need to stay using a <code>Deployment</code> and can't migrate to using a <code>StatefulSet</code>. Is this even possible? Currently, this is scheduling all the pods on one single Kubernetes worker node because the volume is shared and only mounted to a single Kubernetes worker node.</p>
| Justin | <p>I ended up converting the <code>Deployment</code> to a <code>StatefulSet</code> and this works. The key is to remove the standalone <code>PersistentVolumeClaim</code>, change the Deployment into a StatefulSet, and use the following properties:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Gi
</code></pre>
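<p>For reference, the pod template in the StatefulSet then mounts the claim template by its name, and each replica gets its own PVC named <code>storage-<statefulset-name>-<ordinal></code> (the mount path below is taken from the question):</p>
<pre><code>        volumeMounts:
          - name: storage            # matches the volumeClaimTemplate name
            mountPath: /var/my-app/storage
</code></pre>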
| Justin |
<p>I just have a doubt on whether is possible to run multiples liveness probes in the same <code>deployment.yaml</code>.
For example: I already have a liveness probe that runs a python script that check my application like:</p>
<pre><code>livenessProbe:
failureThreshold: 5
initialDelaySeconds: 15
timeoutSeconds: 10
periodSeconds: 60
exec:
command: ["/usr/local/bin/python", "/app/check_application_health.py"]
</code></pre>
<p>Is that possible to include another liveness probe that check a <code>httpGet</code> healthcheck? Or should I include a <code>httpGet</code> healthcheck in this python script and run all in one?</p>
<p>Thanks!</p>
| Arthur Ávila | <p>Hi, currently it's not possible to define more than one liveness probe per container.</p>
<p>As a workaround you can combine the checks into a single <code>exec</code> probe, something like this:</p>
<pre><code>"livenessProbe": {
"exec": {
"command": ["sh", "-c",
"reply=$(curl -s -o /dev/null -w %{http_code} http://< healthcheck url>); if [ \"$reply\" -lt 200 -o \"$reply\" -ge 400 ]; then exit 1; fi; /app/check_application_health.py;"
]
}
}
</code></pre>
<p>Source: <a href="https://github.com/kubernetes/kubernetes/issues/37218#issuecomment-372887460" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/37218#issuecomment-372887460</a></p>
| Abhijit Gaikwad |
<p>I create a ingress by this example:</p>
<pre><code>$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
spec:
ingressClassName: nginx
rules:
- host: my.hostname.com
http:
paths:
- path: /something(/|$)(.*)
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
' | kubectl create -f -
</code></pre>
<p>But if I go to <code>my.hostname.com/something</code> the route is not matched, even if I changed it to</p>
<pre><code>$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: rewrite
spec:
ingressClassName: nginx
rules:
- host: my.hostname.com
http:
paths:
- path: /something
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
' | kubectl create -f -
</code></pre>
<p>The route pass me to <code>http-svc</code> but the <code>rewrite</code> is not working.</p>
<p>So how can I do a complex rewrite which <code>haproxy.router.openshift.io/rewrite-target: / </code> can not provide?</p>
| PaleNeutron | <p>OpenShift routers aren't based on nginx, so nginx annotations/rules aren't going to do anything. If the <a href="https://docs.openshift.com/container-platform/4.11/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration" rel="nofollow noreferrer">builtin HAProxy based functionality</a> doesn't meet your needs, you'd have to either <a href="https://github.com/nginxinc/nginx-ingress-operator/blob/main/docs/nginx-ingress-controller.md" rel="nofollow noreferrer">install an nginx based ingress controller</a> or handle the rewrite at the application level.</p>
| David Ogren |
<p>I would like to install Kubernetes on my debian machine:</p>
<pre><code>Distributor ID: Debian
Description: Debian GNU/Linux 9.5 (stretch)
Release: 9.5
Codename: stretch
</code></pre>
<p>Looking into google deb package archive I only find the package for "kubectl", nothing else:</p>
<p><a href="https://packages.cloud.google.com/apt/dists/kubernetes-stretch/main/binary-amd64/Packages" rel="noreferrer">https://packages.cloud.google.com/apt/dists/kubernetes-stretch/main/binary-amd64/Packages</a></p>
<p>Comparing to ubuntu xenial many packages are missing. Could someone be so kind and give me more information how to deal with this ? Is it possible to install kubeadm and kubelet on debian stretch too ?</p>
<p><a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl" rel="noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl</a></p>
<p>Thank you very much in advance !</p>
| Literadix | <p>As of K8S 1.18.5 I am not aware of any official DEB package from Google unfortunately. I would highly recommend you build your own DEB package on Debian Stretch. I have created 2 examples on how to do so with Debian 10 and Ubuntu 18.04 at <a href="https://github.com/runlevel5/kubernetes-packages" rel="nofollow noreferrer">https://github.com/runlevel5/kubernetes-packages</a>.</p>
| Trung Lê |
<p>While trying to add/update a dependency to a helm chart I'm getting this error. No helm plugins are installed with the name <code>helm-manager</code>.</p>
<pre class="lang-sh prettyprint-override"><code>$ helm dep update
Getting updates for unmanaged Helm repositories...
...Unable to get an update from the "https://kubernetes-charts.storage.googleapis.com/" chart repository:
failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
...Unable to get an update from the "https://kubernetes-charts.storage.googleapis.com/" chart repository:
failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. Happy Helming!
Error: no cached repository for helm-manager-1067d9c6027b8c3f27b49e40521d64be96ea412858d8e45064fa44afd3966ddc found. (try 'helm repo update'): open /Users/<redacted>/Library/Caches/helm/repository/helm-manager-1067d9c6027b8c3f27b49e40521d64be96ea412858d8e45064fa44
afd3966ddc-index.yaml: no such file or directory
</code></pre>
| merqurio | <p>I get this sometimes when there's a mismatch between my <code>Chart.yaml</code> and the configuration of my subchart dependencies.</p>
<p>For instance:</p>
<p><code>Chart.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>dependencies:
- name: foo
...
- name: bar
...
</code></pre>
<p><code>values.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>foo:
flag: true
# but no entry for bar
</code></pre>
<p>It may be an artifact of some other element of my configuration (Artifactory hosting a proxy of the world for Helm) but I find myself running into this frequently enough that I hope this answer may help someone else.</p>
| Jon O |
<p>Based on the docs that I've read, there are 3 methods of patching:</p>
<ul>
<li>patches</li>
<li>patchesStrategicMerge</li>
<li>patchesJson6902.</li>
</ul>
<p>The difference between <code>patchesStrategicMerge</code> and <code>patchesJson6902</code> is obvious. <code>patchesStrategicMerge</code> requires a duplicate structure of the kubernetes resource to identify the base resource that is being patched followed by the modified portion of the spec to denote what gets changed (or deleted).</p>
<p><code>patchesJson6902</code> defines a 'target' attribute used to specify the kubernetes resource with a 'path' attribute that specifies which attribute in the resource gets modified, added, or removed.</p>
<p>However, what is not clear to me is the difference between <code>patches</code> and <code>patchesJson6902</code>. They seem to be very similar in nature. Both specify a 'target' attribute and operation objects which describes what gets modified.</p>
<p>The only difference I've noticed is that <code>patches</code> does not require a 'group' attribute while <code>patchesJson6902</code> does; The reason for this is unknown.</p>
<p>So why the difference between the two? How do I determine which one to use?</p>
| Alex | <p>The explanation for this is <a href="https://github.com/kubernetes-sigs/kustomize/issues/2705#issuecomment-659012281" rel="noreferrer">here</a>.</p>
<p>To summarize, <code>patchJson6902</code> is an older keyword which can only match one resource via <code>target</code> (no wildcards), and accepts only Group-version-kind (GVK), namespace, and name.</p>
<p>The <code>patches</code> directive is newer and accepts more elements (annotation selector and label selector as well). In addition, namespace and name can be regexes. The target for <code>patches</code> can match more than one resource, all of which will be patched.</p>
<p>In addition, with <code>patches</code>, it will attempt to parse patch files as a Json6902 patch, and if that does not work, it will fall back to attempting the patch as a strategic merge. Therefore, in many cases <code>patches</code> can obviate the need of using <code>patchesStrategicMerge</code> as well.</p>
<p>Overall, it seems as if <code>patches</code> should work pretty universally for new projects.</p>
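<p>For illustration, a minimal <code>patches</code> entry in <code>kustomization.yaml</code> could look like the sketch below (the resource name, regex and label are made up); the inline patch body is parsed as Json6902 here, but a strategic-merge body would work in the same slot:</p>
<pre><code>patches:
  - target:
      kind: Deployment
      name: "frontend-.*"        # name may be a regex
      labelSelector: "app=frontend"
    patch: |-
      - op: add
        path: /metadata/labels/env
        value: staging
</code></pre>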
<p>Upstream documentation for these key words:</p>
<ul>
<li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/" rel="noreferrer">patches</a></li>
<li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesjson6902/" rel="noreferrer">patchesJson6902</a></li>
<li><a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patchesstrategicmerge/" rel="noreferrer">patchesStrategicMerge</a></li>
</ul>
| Raman |
<p>I'm running Microk8s on an EC2 instance. I fail to pull containers from our private registry. When trying to run such a container <code>kubectl describe pod</code> shows:</p>
<blockquote>
<p>Failed to pull image "docker.xxx.com/import:v1": rpc
error: code = Unknown desc = failed to resolve image
"docker.xxx.com/import:v1": no available registry
endpoint: failed to fetch anonymous token: unexpected status: 401
Unauthorized</p>
</blockquote>
<p>I can <code>docker login</code> and <code>docker pull</code> from that machine. The yaml I used to deploy the container is working fine on another (non containerd) cluster. It refers to a pull secret, which is identical to the one used in the other cluster and working fine there.</p>
<p>I added the following entry to the containerd-template.toml of Microk8s:</p>
<pre><code> [plugins.cri.registry]
[plugins.cri.registry.mirrors]
...
[plugins.cri.registry.mirrors."docker.xxx.com"]
endpoint = ["https://docker.xxx.com"]
</code></pre>
<p>I have no idea what else I might be missing.</p>
| Achim | <p>If you are getting a <code>401</code> error, something is probably wrong with the authentication, e.g. you are missing credentials for your private registry.</p>
<p>To make sure that microk8s uses the proper credentials, in addition to the <code>mirrors</code> section you have to specify an <code>auths</code> section in the configuration, where you put your docker registry credentials.</p>
<pre><code>[plugins.cri.registry.auths]
[plugins.cri.registry.auths."https://gcr.io"]
username = ""
password = ""
auth = ""
identitytoken = ""
</code></pre>
<p>Attributes within that section are compatible with the configuration you can find in your <code>.docker/config.json</code>.</p>
<p>Notice that this section sits on the same level as <code>mirrors</code>: it should not be part of a <code>mirrors</code> entry but added as a new section.
Another important part is to make sure that the host key under <code>auths</code> matches your registry host (e.g. https vs http). </p>
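<p>Putting both pieces together for the registry from the question (credentials are placeholders), the relevant part of <code>containerd-template.toml</code> would look roughly like this:</p>
<pre><code>[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.xxx.com"]
      endpoint = ["https://docker.xxx.com"]
  [plugins.cri.registry.auths]
    [plugins.cri.registry.auths."https://docker.xxx.com"]
      username = "<username>"
      password = "<password>"
</code></pre>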
<p>For more details check reference: <a href="https://github.com/containerd/cri/blob/master/docs/registry.md" rel="nofollow noreferrer">https://github.com/containerd/cri/blob/master/docs/registry.md</a></p>
<p>P.S. Keep in mind that <code>containerd</code> is supported from microk8s[1] <code>v1.14</code>; if you use an older version you should check other options, e.g. the official Kubernetes documentation[2].</p>
<p>[1] <a href="https://microk8s.io/docs/working" rel="nofollow noreferrer">https://microk8s.io/docs/working</a></p>
<p>[2] <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a> </p>
| mtfk |
<p>I have a react application that is hosted in an nginx container using static files that are prepared in a build step. The problem I run into is that the API URL is then hard coded in the js files and I get a problem when I want to deploy the application to different environments.</p>
<p>So basically I have put a config.js file with the localhost API URL variable in the public directory, which is then loaded in the application in the head section of the index.html file. This works for the local environment. The problem comes when I want to deploy it to the test or production environment.</p>
<p>I have found out that it is possible to use a configMap with volume mounts, but that requires me to prepare one file for each environment in advance as I understand it. I want to be able to use the variables I have set in my Azure DevOps library to populate the API URL value.</p>
<p>So my question is if there is a way to replace the values in the config.js file in the nginx container using Kuberentes/Helm or if I can make use of a Azure DevOps pipeline task to replace the content of a pre-prepared config.js file and mount that using Kubernetes?</p>
<p>Not sure if it is clear what I want to do, but hopefully you can understand it...</p>
<p>config.js</p>
<pre><code>window.env = {
API_URL: 'http://localhost:8080'
};
</code></pre>
<p>index.html</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>My application</title>
<!--
config.js provides all environment specific configuration used in the client
-->
<script src="%PUBLIC_URL%/config.js"></script>
</head>
...
</code></pre>
| KungWaz | <p>What I ended up doing was setting it up like this:</p>
<p>First I added a <strong>configmap.yaml</strong> to generate the config.js file</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config-frontend
data:
config.js: |-
window.env = {
API_URL: "{{ .Values.service.apiUrl }}"
}
</code></pre>
<p><code>Values.service.apiUrl</code> comes from the arguments provided in the "Package and deploy Helm charts" task <code>--set service.apiUrl=$(backend.apiUrl)</code></p>
<p>Then I added a volume mount in the <strong>deployment.yaml</strong> to replace the config.js file in the nginx container</p>
<pre><code>...
containers:
...
volumeMounts:
- name: config-frontend-volume
readOnly: true
mountPath: "/usr/share/nginx/html/config.js"
subPath: "config.js"
volumes:
- name: config-frontend-volume
configMap:
name: config-frontend
</code></pre>
<p>This did the trick and now I can control the variable from the Azure DevOps pipeline based on the environment I'm deploying to.</p>
| KungWaz |
<p>I am trying to create two containers within a pod with one container being an init container. The job of the init container is to download a jar and make it available for the app container. I am able to create everything and the logs look good, but when I check, I do not see the jar in my app container. Below is my deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-service-test
labels:
app: web-service-test
spec:
replicas: 1
selector:
matchLabels:
app: web-service-test
template:
metadata:
labels:
app: web-service-test
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: web-service-test
image: some image
ports:
- containerPort: 8081
volumeMounts:
- name: shared-data
mountPath: /tmp/jar
initContainers:
- name: init-container
image: busybox
volumeMounts:
- name: shared-data
mountPath: /jdbc-jar
command:
- wget
- "https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"
</code></pre>
| daze-hash | <p>You need to save the jar under the init container's mount path, <code>/jdbc-jar</code>, so that it ends up in the shared <code>emptyDir</code> volume.</p>
<p>Try updating your yaml to the following:</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]
</code></pre>
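<p>In context, the init container section would then look roughly like this (mount path kept exactly as in your Deployment, so the jar shows up under <code>/tmp/jar</code> in the app container):</p>
<pre><code>initContainers:
- name: init-container
  image: busybox
  volumeMounts:
  - name: shared-data
    mountPath: /jdbc-jar
  command: ["/bin/sh"]
  args: ["-c", "wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]
</code></pre>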
| Abhijit Gaikwad |
<p>It's simple to apply a complicated yaml config using <code>kubectl</code>; for example, installing the <a href="https://github.com/Kong/kubernetes-ingress-controller#get-started" rel="nofollow noreferrer">kong-ingress-controller</a> is simply one line using <code>kubectl</code>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-dbless.yaml
</code></pre>
<p>what is the equivalent way of doing this in Golang?</p>
| DiveInto | <p>figured out by checking out this issue: <a href="https://github.com/kubernetes/client-go/issues/193#issuecomment-363318588" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/193#issuecomment-363318588</a></p>
<p>I'm using <code>kubebuilder</code>, simple turn yamls into <code>runtime.Objects</code> using <code>UniversalDeserializer</code>, then create the object using Reconciler's <code>Create</code> method:</p>
<pre><code>// ref: https://github.com/kubernetes/client-go/issues/193#issuecomment-363318588
func parseK8sYaml(fileR []byte) []runtime.Object {
acceptedK8sTypes := regexp.MustCompile(`(Namespace|Role|ClusterRole|RoleBinding|ClusterRoleBinding|ServiceAccount)`)
fileAsString := string(fileR[:])
sepYamlfiles := strings.Split(fileAsString, "---")
retVal := make([]runtime.Object, 0, len(sepYamlfiles))
for _, f := range sepYamlfiles {
if f == "\n" || f == "" {
// ignore empty cases
continue
}
decode := scheme.Codecs.UniversalDeserializer().Decode
obj, groupVersionKind, err := decode([]byte(f), nil, nil)
if err != nil {
log.Println(fmt.Sprintf("Error while decoding YAML object. Err was: %s", err))
continue
}
if !acceptedK8sTypes.MatchString(groupVersionKind.Kind) {
log.Printf("The custom-roles configMap contained K8s object types which are not supported! Skipping object with type: %s", groupVersionKind.Kind)
} else {
retVal = append(retVal, obj)
}
}
return retVal
}
func (r *MyReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("MyReconciler", req.NamespacedName)
// your logic here
log.Info("reconciling")
yaml := `
apiVersion: v1
kind: Namespace
metadata:
name: test-ns`
obj := parseK8sYaml([]byte(yaml))
if err := r.Create(ctx, obj[0]); err != nil {
log.Error(err, "failed when creating obj")
}
...
}
</code></pre>
| DiveInto |
<p>I ran into the above stated error and the most popular answer for this error is adding 'selector:' to the yaml file. I get this error even after adding it. Can you please help me rectify this issue?</p>
<p>deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sampleapp
labels:
app: sampleapp
spec:
replicas: 4
selector:
matchLabels:
app: sampleapp
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: sampleapp
spec:
containers:
- name: sampleapp
#replace <foobar> with your container registry. Example: contosodemo.azurecr.io
image: containerregistrycanary.azurecr.io/azure-pipelines-canary-k8s
imagePullPolicy: Always
ports:
- containerPort: 8000
- containerPort: 8080
</code></pre>
<p>fortio.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: fortio
spec:
replicas: 1
selector:
template:
metadata:
labels:
app: fortio
spec:
containers:
- name: fortio
image: fortio/fortio:latest_release
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http-fortio
- containerPort: 8079
name: grpc-ping
</code></pre>
<p>servicemonitor.yml</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: sampleapp
labels:
release: sampleapp
spec:
selector:
matchLabels:
app: sampleapp
endpoints:
- port: metrics
</code></pre>
| vishal_P | <p>You need to add selection rules to your <code>selector</code> in the <em>fortio.yml</em>, e.q.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: fortio
spec:
replicas: 1
selector:
matchLabels:
app: fortio
template:
metadata:
labels:
app: fortio
spec:
containers:
- name: fortio
image: fortio/fortio:latest_release
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http-fortio
- containerPort: 8079
name: grpc-ping
</code></pre>
<blockquote>
<p>The .spec.selector field defines how the Deployment finds which Pods
to manage. In this case, you select a label that is defined in the Pod
template (app: nginx). However, more sophisticated selection rules are
possible, as long as the Pod template itself satisfies the rule.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment</a></p>
| Sebastian |
<p>When I push my deployments, for some reason, I'm getting the error on my pods:</p>
<blockquote>
<p>pod has unbound PersistentVolumeClaims</p>
</blockquote>
<p>Here are my YAML below:</p>
<p>This is running locally, not on any cloud solution.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.16.0 ()
creationTimestamp: null
labels:
io.kompose.service: ckan
name: ckan
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ckan
spec:
containers:
image: slckan/docker_ckan
name: ckan
ports:
- containerPort: 5000
resources: {}
volumeMounts:
- name: ckan-home
mountPath: /usr/lib/ckan/
subPath: ckan
volumes:
- name: ckan-home
persistentVolumeClaim:
claimName: ckan-pv-home-claim
restartPolicy: Always
status: {}
</code></pre>
<hr>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ckan-pv-home-claim
labels:
io.kompose.service: ckan
spec:
storageClassName: ckan-home-sc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
- dir_mode=0755
- file_mode=0755
- uid=1000
- gid=1000
</code></pre>
| soniccool | <p>You have to define a <strong>PersistentVolume</strong> providing disc space to be consumed by the <strong>PersistentVolumeClaim</strong>.</p>
<p>When using <code>storageClass</code> Kubernetes is going to enable <strong>"Dynamic Volume Provisioning"</strong> which is not working with the local file system.</p>
<hr />
<h3>To solve your issue:</h3>
<ul>
<li>Provide a <strong>PersistentVolume</strong> fulfilling the constraints of the claim (a size >= 100Mi)</li>
<li>Remove the <code>storageClass</code> from the <strong>PersistentVolumeClaim</strong> or provide it with an empty value (<code>""</code>)</li>
<li>Remove the <strong>StorageClass</strong> from your cluster</li>
</ul>
<hr />
<h3>How do these pieces play together?</h3>
<p>At creation of the deployment state-description it is usually known which kind (amount, speed, ...) of storage that application will need.<br />
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way.</p>
<p>The <strong>PersistentVolumeClaim</strong> is used to provide a storage-constraint alongside the deployment of an application.</p>
<p>The <strong>PersistentVolume</strong> offers cluster-wide volume-instances ready to be consumed ("<code>bound</code>"). One PersistentVolume will be bound to <em>one</em> claim. But since multiple instances of that claim may be run on multiple nodes, that volume may be <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">accessed</a> by multiple nodes.</p>
<p>A <strong>PersistentVolume without StorageClass</strong> is considered to be <strong>static</strong>.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning" rel="noreferrer"><strong>"Dynamic Volume Provisioning"</strong></a> alongside <strong>with</strong> a <strong>StorageClass</strong> allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="noreferrer">provisioning</a> - this allows the cluster to request the provisioning of a "new" <strong>PersistentVolume</strong> when an unsatisfied <strong>PersistentVolumeClaim</strong> pops up.</p>
<hr />
<h3>Example PersistentVolume</h3>
<p>In order to find how to specify things you're best advised to take a look at the <a href="https://kubernetes.io/de/docs/reference/#api-referenz" rel="noreferrer">API for your Kubernetes version</a>, so the following example is build from the <a href="https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#persistentvolume-v1-core" rel="noreferrer">API-Reference of K8S 1.17</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: ckan-pv-home
labels:
type: local
spec:
  accessModes:
    - ReadWriteOnce
capacity:
storage: 100Mi
hostPath:
path: "/mnt/data/ckan"
</code></pre>
<p>The <strong>PersistentVolumeSpec</strong> allows us to define multiple attributes.
I chose a <code>hostPath</code> volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.</p>
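<p>For completeness, the claim from the question then binds to this volume once its <code>storageClassName</code> is dropped or set to the empty string; a sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
</code></pre>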
<hr />
<h3>Additional Resources:</h3>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">Configure PersistentVolume Guide</a></li>
</ul>
| Florian Neumann |
<p>I have installed the kube-prometheus stack on k8s via helm:</p>
<pre><code>helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring -f alertmanager-config.yaml
</code></pre>
<p>where the <code>alertmanager-config.yaml</code> looks as follows:</p>
<pre><code>alertmanager:
config:
global:
resolve_timeout: 5m
route:
group_wait: 20s
group_interval: 4m
repeat_interval: 4h
receiver: 'null'
routes:
- receiver: 'slack-k8s-admin'
receivers:
- name: 'null'
- name: 'slack-k8s-admin'
slack_configs:
- api_url: 'https://hooks.slack.com/services/...'
channel: '#k8s-monitoring'
send_resolved: true
icon_url: https://avatars3.githubusercontent.com/u/3380462
title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
text: >-
{{ range .Alerts }}
*Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
*Description:* {{ .Annotations.description }}
*Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
*Details:*
{{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
{{ end }}
{{ end }}
</code></pre>
<p>I can receive the alerts after I configured them, however they never show me from which cluster they stem:</p>
<p><a href="https://i.stack.imgur.com/RTbAW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTbAW.png" alt="enter image description here" /></a></p>
<p>Since, I do have several clusters that should send to the same channel, it would be good to get notified which cluster has issues.</p>
<p>Do you know how I have to adapt the config?</p>
| tobias | <p>You can read the cluster name from Prometheus' labels as guided in the other answer.</p>
<p>Another option is to define a template for cluster name.</p>
<pre class="lang-yaml prettyprint-override"><code> receivers:
- name: 'slack-notifications'
slack_configs:
- channel: '#lolcats'
send_resolved: true
text: '{{ template "slack.default-alert.text" . }}'
- name: 'null'
templates:
- '/etc/alertmanager/config/*.tmpl'
</code></pre>
<pre class="lang-yaml prettyprint-override"><code> templateFiles:
template_1.tmpl: |-
{{ define "cluster" }}static-cluster-name{{ end }}
{{ define "slack.default-alert.text" }}
{{- $root := . -}}
... your template ...
{{ template "cluster" $root }}
{{ end }}
{{ end }}
</code></pre>
<p>In place of a <code>static-cluster-name</code> text, you can use a placeholder from Helm if you're using the prometheus-stack helm chart:</p>
<pre class="lang-yaml prettyprint-override"><code>{{ define "cluster" }}{{ .ExternalURL | reReplaceAll ".*alertmanager\\.(.*)" "$1" }}{{ end }}
</code></pre>
<p>The latter will read the Helm value <code>.ExternalURL</code> defined earlier in the values.yaml. It's of course possible to also use a placeholder of whatever tool you're using to generate these manifests.</p>
| Petrus Repo |
<p>I'm new to go and playing with k8s go-client. I'd like to pass items from <code>deploymentsClient.List(metav1.ListOptions{})</code> to a funcion. <code>fmt.Printf("%T\n", deploy)</code> says it's type <code>v1.Deployment</code>. So I write a function that takes <code>(deploy *v1.Deployment)</code> and pass it <code>&deploy</code> where deploy is an item in the <code>deploymentsClient.List</code>. This errors with <code>cmd/list.go:136:38: undefined: v1</code> however. What am I doing wrong?</p>
<p>Here are my imports</p>
<pre><code>import (
// "encoding/json"
"flag"
"fmt"
//yaml "github.com/ghodss/yaml"
"github.com/spf13/cobra"
// "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"os"
"path/filepath"
)
</code></pre>
<p>Then I get the list of deployments:</p>
<pre><code> deploymentsClient := clientset.AppsV1().Deployments(ns)
deployments, err := deploymentsClient.List(metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, deploy := range deployments.Items {
fmt.Println(deploy.ObjectMeta.SelfLink)
// printDeploymentSpecJson(deploy)
// printDeploymentSpecYaml(deploy)
}
</code></pre>
| user1855481 | <p>You need to import "k8s.io/api/apps/v1"; <code>Deployment</code> is defined in that package. See <a href="https://godoc.org/k8s.io/api/apps/v1" rel="nofollow noreferrer">https://godoc.org/k8s.io/api/apps/v1</a>.</p>
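<p>A minimal sketch; the alias (<code>appsv1</code> here) just keeps it visually distinct from the <code>metav1</code> alias you already use, and the function name mirrors the commented-out call in your code:</p>
<pre><code>import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
)

// takes a pointer, matching the &deploy call site in the loop
func printDeploymentSpecYaml(deploy *appsv1.Deployment) {
    fmt.Println(deploy.ObjectMeta.Name)
}
</code></pre>
<p>Then call it as <code>printDeploymentSpecYaml(&deploy)</code> inside the range loop, matching the pointer signature.</p>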
| Dagang |
<p>First of all to put some context on that question.</p>
<ul>
<li>I have an <code>EKS</code> cluster with version >= <code>1.15</code></li>
<li>The <code>EFS</code> - <code>EKS</code> <code>security group</code> / <code>mount target</code> etc. are working properly</li>
<li>The <code>CSI</code> driver for <code>EFS</code> in <code>EKS</code> is installed and work as expected</li>
<li>I have deployed a storage class called <code>efs-sc</code> using the <code>EFS CSI</code> driver as a provisioner</li>
<li>I can access the <code>EFS</code> volume on the pod</li>
</ul>
<p>But ... it only works if it is the root path <code>/</code> that is defined as the path in the <code>kubernetes</code> persistent volume resource definition.</p>
<p><strong>Example with Terraform 0.12 syntax</strong></p>
<pre><code>resource "kubernetes_persistent_volume" "vol" {
metadata {
name = "my-vol"
}
spec {
capacity = {
storage = "15Gi"
}
access_modes = ["ReadWriteMany"]
storage_class_name = "efs-sc"
persistent_volume_reclaim_policy = "Recycle"
persistent_volume_source {
nfs {
path = "/" # -> OK it works properly
# path = "/access-point-path" -> NOT WORKING
server = var.efs-storage-apt-server
}
}
}
}
</code></pre>
<p>When I try to specify the path of my access point the mounting of the volume fails.</p>
<p>The <code>efs</code> access point is configured like this</p>
<p><a href="https://i.stack.imgur.com/Vbenn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vbenn.png" alt="enter image description here"></a></p>
<p>So is it a limitation? Did I miss something?</p>
<p>I was looking at this solution, <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">efs-provisioner</a>, but I don't see what it would solve compared to the current configuration. </p>
| Asa | <p>There's now documentation available: <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md#create-access-points-in-efs" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md#create-access-points-in-efs</a></p>
<p>You'll need to be using the updated EFS CSI driver. The access point is defined under PersistentVolume's <code>volumeHandle</code>. The recent EFS CSI driver no longer supports dynamic binding, hence, the PersistentVolume needs to be created manually for each PersistentVolumeClaim.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
csi:
driver: efs.csi.aws.com
volumeHandle: [FileSystemId]::[AccessPointId]
</code></pre>
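<p>The PersistentVolumeClaim side then just references the same storage class with a compatible access mode, for example (a sketch, names are illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
</code></pre>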
| Petrus Repo |
<p>To preface this I’m working on the GCE, and Kuberenetes. My goal is simply to expose all microservices on my cluster over SSL. Ideally it would work the same as when you expose a deployment via type=‘LoadBalancer’ and get a single external IP. That is my goal but SSL is not available with those basic load balancers. </p>
<p>From my research the best current solution would be to set up an nginx ingress controller, use ingress resources and services to expose my micro services. Below is a diagram I drew up with my understanding of this process. </p>
<p><a href="https://i.stack.imgur.com/bGt6B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bGt6B.png" alt="enter image description here"></a></p>
<p>I’ve got this all to successfully work over HTTP. I deployed the default nginx controller from here: <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a> . As well as the default backend and service for the default backend. The ingress for my own micro service has rules set as my domain name and path: /. </p>
<p>This was successful but there were two things that were confusing me a bit. </p>
<ol>
<li><p>When exposing the service resource for my backend (microservice) one guide I followed used type=‘NodePort’ and the other just put a port to reach the service. Both set the target port to the backend app port. I tried this both ways and they both seemed to work. Guide one is from the link above. Guide 2: <a href="http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html</a>. What is the difference here?</p></li>
<li><p>Another point of confusion is that my ingress always gets two IPs. My initial thought process was that there should only be one external ip and that would hit my ingress which is then directed by nginx for the routing. Or is the ip directly to the nginx? Anyway the first IP address created seemed to give me the expected results where as visiting the second IP fails.</p></li>
</ol>
<p>Despite my confusion things seemed to work fine over HTTP. Over HTTPS not so much. At first when I made a web request over https things would just hang. I opened 443 on my firewall rules which seemed to work however I would hit my default backend rather than my microservice.</p>
<p>Reading led me to this from Kubernetes docs: Currently the Ingress resource only supports http rules.
This may explain why I am hitting the default backend because my rules are only for HTTP. But if so how am I supposed to use this approach for SSL?</p>
<p>Another thing I noticed is that if I write an ingress resource with no rules and give it my desired backend I still get directed to my original default backend. This is even more odd because kubectl describe ing updated and states that my default backend is my desired backend...</p>
<p>Any help or guidance would be much appreciated. Thanks!</p>
| Steve | <p>So, for #2, you've probably ended up provisioning a Google HTTP(S) LoadBalancer, probably because you're missing the <code>kubernetes.io/ingress.class: "nginx"</code> annotation as described here: <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers" rel="noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers</a>. </p>
<p>GKE has its own ingress controller; adding that annotation to the Ingress resources you create tells GKE's controller to leave them to nginx. <a href="https://beroux.com/english/articles/kubernetes/?part=3" rel="noreferrer">This article</a> has a good explanation about that stuff.</p>
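<p>For reference, a sketch of where that annotation lives (names are illustrative):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # keeps GKE's controller from claiming this Ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>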
<p>The <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="noreferrer">kubernetes docs</a> have a pretty good description of what <code>NodePort</code> means - basically, the service will allocate a port from a high range on each Node in your cluster, and Nodes will forward traffic from that port to your service. It's one way of setting up load balancers in different environments, but for your approach it's not necessary. You can just omit the <code>type</code> field of your microservice's Service and it will be assigned the default type, which is <code>ClusterIP</code>.</p>
<p>As for SSL, it could be a few different things. I would make sure you've got the Secret set up just as they describe in the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#https" rel="noreferrer">nginx controller docs</a>, eg with a <code>tls.cert</code> and <code>tls.key</code> field.</p>
<p>I'd also check the logs of the nginx controller: find out which pod it's running as with <code>kubectl get pods</code>, and then tail its logs: <code>kubectl logs nginx-pod-<some-random-hash> -f</code>. This will be helpful to find out if you've misconfigured anything, like if a service does not have any endpoints configured. Most of the time I've messed up the ingress stuff, it's been due to some pretty basic misconfiguration of Services/Deployments. </p>
<p>You'll also need to set up a DNS record for your hostname pointed at the LoadBalancer's static IP, or else hit your service with cURL's <code>-H</code> <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#http" rel="noreferrer">flag as they do in the docs</a>; otherwise you might end up getting routed to the default backend's 404.</p>
| IanI |
<p>The Python API is available to read objects from a cluster. By cloning we can say:</p>
<ol>
<li>Get a copy of an existing Kubernetes object using <code>kubectl get</code></li>
<li>Change the properties of the object</li>
<li>Apply the new object</li>
</ol>
<p>Until recently, the option to <a href="https://medium.com/@jonathan.johnson/export-has-been-deprecated-in-1-14-51cfef5a0cb7" rel="nofollow noreferrer"><code>--export</code> api was deprecated in 1.14</a>. How can we use the Python Kubernetes API to do the steps from 1-3 described above?</p>
<p>There are multiple questions about how to extract the <a href="https://github.com/kubernetes-client/python/issues/977" rel="nofollow noreferrer">code from Python API to YAML</a>, but it's unclear how to transform the Kubernetes API object. </p>
| Marcello DeSales | <p>After looking at the requirement, I spent a couple of hours researching the Kubernetes Python API. <a href="https://github.com/kubernetes-client/python/issues/340" rel="nofollow noreferrer">Issue 340</a> and others ask about how to transform the Kubernetes API object into a <code>dict</code>, but the only workaround I found was to <a href="https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414" rel="nofollow noreferrer">retrieve the raw data</a> and then convert to JSON.</p>
<ul>
<li>The following code uses the Kubernetes API to get a <code>deployment</code> and its related <code>hpa</code> from the namespaced objects, but retrieving their raw values as JSON. </li>
<li>Then, after transforming the data into a dict, you can alternatively clean up the data by <a href="https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570">removing null references</a>.</li>
<li>Once you are done, you can transform the <code>dict</code> as YAML payload to then <a href="https://stackoverflow.com/questions/12470665/how-can-i-write-data-in-yaml-format-in-a-file/18210750#18210750">save the YAML to the file system</a> </li>
<li>Finally, you can apply either using <code>kubectl</code> or the Kubernetes Python API.</li>
</ul>
<p>Note:</p>
<ul>
<li>Make sure to set <code>KUBECONFIG=config</code> so that you can point to a cluster</li>
<li>Make sure to adjust the values of <code>origin_obj_name = "istio-ingressgateway"</code> and <code>origin_obj_namespace = "istio-system"</code> with the name of the corresponding objects to be cloned in the given namespace.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import os
import logging
import yaml
import json
logging.basicConfig(level = logging.INFO)
import crayons
from kubernetes import client, config
from kubernetes.client.rest import ApiException
LOGGER = logging.getLogger(" IngressGatewayCreator ")
class IngressGatewayCreator:
@staticmethod
def clone_default_ingress(clone_context):
# Clone the deployment
IngressGatewayCreator.clone_deployment_object(clone_context)
# Clone the deployment's HPA
IngressGatewayCreator.clone_hpa_object(clone_context)
@staticmethod
def clone_deployment_object(clone_context):
kubeconfig = os.getenv('KUBECONFIG')
config.load_kube_config(kubeconfig)
v1apps = client.AppsV1beta1Api()
deployment_name = clone_context.origin_obj_name
namespace = clone_context.origin_obj_namespace
try:
# gets an instance of the api without deserialization to model
# https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
deployment = v1apps.read_namespaced_deployment(deployment_name, namespace, _preload_content=False)
except ApiException as error:
if error.status == 404:
LOGGER.info("Deployment %s not found in namespace %s", deployment_name, namespace)
return
raise
# Clone the object deployment as a dic
cloned_dict = IngressGatewayCreator.clone_k8s_object(deployment, clone_context)
# Change additional objects
cloned_dict["spec"]["selector"]["matchLabels"]["istio"] = clone_context.name
cloned_dict["spec"]["template"]["metadata"]["labels"]["istio"] = clone_context.name
# Save the deployment template in the output dir
context.save_clone_as_yaml(cloned_dict, "deployment")
@staticmethod
def clone_hpa_object(clone_context):
kubeconfig = os.getenv('KUBECONFIG')
config.load_kube_config(kubeconfig)
hpas = client.AutoscalingV1Api()
hpa_name = clone_context.origin_obj_name
namespace = clone_context.origin_obj_namespace
try:
# gets an instance of the api without deserialization to model
# https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
hpa = hpas.read_namespaced_horizontal_pod_autoscaler(hpa_name, namespace, _preload_content=False)
except ApiException as error:
if error.status == 404:
LOGGER.info("HPA %s not found in namespace %s", hpa_name, namespace)
return
raise
# Clone the object deployment as a dic
cloned_dict = IngressGatewayCreator.clone_k8s_object(hpa, clone_context)
# Change additional objects
cloned_dict["spec"]["scaleTargetRef"]["name"] = clone_context.name
# Save the deployment template in the output dir
context.save_clone_as_yaml(cloned_dict, "hpa")
@staticmethod
def clone_k8s_object(k8s_object, clone_context):
# Manipilate in the dict level, not k8s api, but from the fetched raw object
# https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
cloned_obj = json.loads(k8s_object.data)
labels = cloned_obj['metadata']['labels']
labels['istio'] = clone_context.name
cloned_obj['status'] = None
# Scrub by removing the "null" and "None" values
cloned_obj = IngressGatewayCreator.scrub_dict(cloned_obj)
# Patch the metadata with the name and labels adjusted
cloned_obj['metadata'] = {
"name": clone_context.name,
"namespace": clone_context.origin_obj_namespace,
"labels": labels
}
return cloned_obj
# https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
@staticmethod
def scrub_dict(d):
new_dict = {}
for k, v in d.items():
if isinstance(v, dict):
v = IngressGatewayCreator.scrub_dict(v)
if isinstance(v, list):
v = IngressGatewayCreator.scrub_list(v)
if not v in (u'', None, {}):
new_dict[k] = v
return new_dict
# https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
@staticmethod
def scrub_list(d):
scrubbed_list = []
for i in d:
if isinstance(i, dict):
i = IngressGatewayCreator.scrub_dict(i)
scrubbed_list.append(i)
return scrubbed_list
class IngressGatewayContext:
def __init__(self, manifest_dir, name, hostname, nats, type):
self.manifest_dir = manifest_dir
self.name = name
self.hostname = hostname
self.nats = nats
self.ingress_type = type
self.origin_obj_name = "istio-ingressgateway"
self.origin_obj_namespace = "istio-system"
def save_clone_as_yaml(self, k8s_object, kind):
try:
# Just try to create if it doesn't exist
os.makedirs(self.manifest_dir)
except FileExistsError:
LOGGER.debug("Dir already exists %s", self.manifest_dir)
full_file_path = os.path.join(self.manifest_dir, self.name + '-' + kind + '.yaml')
# Store in the file-system with the name provided
# https://stackoverflow.com/questions/12470665/how-can-i-write-data-in-yaml-format-in-a-file/18210750#18210750
with open(full_file_path, 'w') as yaml_file:
yaml.dump(k8s_object, yaml_file, default_flow_style=False)
LOGGER.info(crayons.yellow("Saved %s '%s' at %s: \n%s"), kind, self.name, full_file_path, k8s_object)
try:
k8s_clone_name = "http2-ingressgateway"
hostname = "my-nlb-awesome.a.company.com"
nats = ["123.345.678.11", "333.444.222.111", "33.221.444.23"]
manifest_dir = "out/clones"
context = IngressGatewayContext(manifest_dir, k8s_clone_name, hostname, nats, "nlb")
IngressGatewayCreator.clone_default_ingress(context)
except Exception as err:
print("ERROR: {}".format(err))
</code></pre>
| Marcello DeSales |
<p>As far as I can see, GKE seems to be slightly more complex to configure and to deploy an application to (using plain Kubernetes files, Helm charts, or something else?). Furthermore, it doesn't seem to offer better pod failure detection or better performance.</p>
<p>Why should we use GKE whereas there is GAE which only needs dispatch.yaml, app.yaml files and gcloud cli to deploy ?</p>
<p>Is there any technical or financial feedback against GAE ?</p>
<p>Finally, how can we make a choice between GKE and GAE ? What whould be the reason to not choose GAE ?</p>
| Kelindil | <p>Google Kubernetes Engine(GKE) is a cluster manager and orchestration system for running your Docker containers. Google App Engine(GAE) is basically google managed containers. </p>
<p>They both try to provide you similar main benefits(scalability, redundancy, rollouts, rollbacks, etc.). <strong>The main difference is in their philosophy: GKE tries to provide you very fine grained control over everything about your cluster. GAE tries to get you run your apps with as little configuration/management as possible.</strong></p>
<p>With GKE you have more control, but also more work for you. You need to configure the network, security, software updates etc. With GAE you don't need to worry about many of these things, and you can focus on your app.</p>
| Caner |
<p>I am running my elixir app on GKE</p>
<p>here is my deployment configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp
namespace: production
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: myapp
tier: backend
spec:
securityContext:
runAsUser: 0
runAsNonRoot: false
containers:
- name: myapp
image: myimage
resources:
limits:
cpu: 3000m
memory: 2000Mi
requests:
cpu: 2500m
memory: 1000Mi
ports:
- containerPort: 80
args:
- foreground
</code></pre>
<p>as you can see in the image, the pod reached its memory limit and crashed
<a href="https://i.stack.imgur.com/nbiwR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nbiwR.png" alt="pod"></a></p>
<p>these are my last logs:</p>
<pre><code>erl_child_setup closed
Crash dump is being written to: erl_crash.dump...done
Shutting down..
Node is not running!
</code></pre>
<p>and then my app is frozen; I get 502 when trying to request the app.</p>
<p>In order to restart, I delete the pod (kubectl delete pod), and then it runs again.</p>
<p>My question is: why doesn't the pod restart automatically when it reaches the memory limit?</p>
| dina | <p>You'll need to add probes that will check if your application is healthy.</p>
<p>Since you mentioned a <code>502</code>, I'm assuming this is a Phoenix application and you can add a health-check endpoint:</p>
<pre><code>livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 3
</code></pre>
<p>When this request stops receiving a <code>200</code>, then the Kubernetes Controller will restart your pod.</p>
| Rawkode |
<p>I am trying to achieve zero downtime deployment process, but it is not working.</p>
<p>My deployment has one replica. The pod probes look like this:</p>
<pre><code>livenessProbe:
httpGet:
path: /health/live
port: 80
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /health/ready
port: 80
initialDelaySeconds: 15
periodSeconds: 20
</code></pre>
<p>During deployment, accessing pod returns 503 for at least 10 seconds. Questions I have:</p>
<ul>
<li>what might be wrong?</li>
<li>how can I debug this?</li>
<li>where can I see logs from a service that is probing my service? </li>
</ul>
<p>Running <code>describe</code> on the pod I get:</p>
<pre><code>Liveness: http-get http://:80/health/live delay=5s timeout=1s period=2s #success=1 #failure=3
Readiness: http-get http://:80/health/ready delay=5s timeout=1s period=2s #success=1 #failure=3
</code></pre>
| kosnkov | <p>The problem was in </p>
<pre><code>kind: Service
spec:
type: ClusterIP
selector:
app: maintenance-api
version: "1.0.0"
stage: #{Release.EnvironmentName}#
release: #{Release.ReleaseName}#
</code></pre>
<p>If the selector contains something like <code>#{Release.ReleaseName}#</code>, which changes with every release, the old pods no longer match it. So when the release starts, the Service disconnects from the old pods, and only after the new pod finishes deploying does the Service start routing traffic to it again.</p>
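<p>A sketch of the selector reduced to labels that stay stable across releases (assuming the pod template carries the same <code>app</code> label):</p>
<pre><code>kind: Service
spec:
  type: ClusterIP
  selector:
    app: maintenance-api
</code></pre>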
| kosnkov |
<p>My scrapy spider always stops at the 1000th request in Kubernetes pods. I can't find any problem; it just closes my spider.</p>
<p>I have tested it in a terminal and in Docker locally with no problems.</p>
<p>Please help me deal with it.</p>
<pre><code>2021-09-23 09:36:41 [scrapy.core.engine] INFO: Closing spider (finished)
2021-09-23 09:36:41 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 360643,
'downloader/request_count': 1003,
'downloader/request_method_count/GET': 1000,
'downloader/request_method_count/POST': 3,
'downloader/response_bytes': 2597069,
'downloader/response_count': 1003,
'downloader/response_status_count/200': 1000,
'downloader/response_status_count/404': 3,
'elapsed_time_seconds': 85.16985,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 9, 23, 9, 36, 41, 366720),
'httpcompression/response_bytes': 4896324,
'httpcompression/response_count': 997,
'item_scraped_count': 1000,
'log_count/DEBUG': 5137,
'log_count/INFO': 3016,
'log_count/WARNING': 1,
'memusage/max': 111157248,
'memusage/startup': 92839936,
'request_depth_max': 1,
'response_received_count': 1003,
'scheduler/dequeued': 1006,
'scheduler/dequeued/memory': 1006,
'scheduler/enqueued': 1006,
'scheduler/enqueued/memory': 1006,
'splash/render.html/request_count': 3,
'splash/render.html/response_count/200': 3,
'start_time': datetime.datetime(2021, 9, 23, 9, 35, 16, 196870)}
2021-09-23 09:36:41 [scrapy.core.engine] INFO: Spider closed (finished)
</code></pre>
| Tín Trung | <p>The "finished" status usually means that the job ran just fine. However, some sites have hard limits for pagination and/or items displayed in searches.
Are you able to reach the 1001st item in a browser?</p>
| Thiago Curvelo |
<p>I got the following service defined: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: customerservice
spec:
type: NodePort
selector:
app: customerapp
ports:
- protocol: TCP
port: 31004
nodePort: 31004
targetPort: 8080
</code></pre>
<p>Current situation: I am able to hit the pod via the service IP.
Now my goal is to reach the <code>customerservice</code> via the name of the service, which does not work right now. So I would simply type <code>http://customerservice:31004</code> instead of <code>http://<IP>:31004</code>.</p>
| elp | <p>DNS resolution of services is ONLY available within the cluster, provided by CoreDNS/KubeDNS.</p>
<p>Should you wish to have access to this locally on your machine, you'd need to use another tool. One such tool is <code>kubefwd</code>:</p>
<p><a href="https://github.com/txn2/kubefwd" rel="nofollow noreferrer">https://github.com/txn2/kubefwd</a></p>
<p>A slightly simpler solution is to use port-forward, which is a very simple way to access a single service locally.</p>
<p><code>kubectl port-forward --namespace=whatever svc/service-name port</code></p>
<p>EDIT:// I've made the assumption that you want to use the service DNS locally, i.e. that</p>
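<p>For the service from the question that would be something along the lines of (the namespace is a guess):</p>
<p><code>kubectl port-forward --namespace=default svc/customerservice 31004:31004</code></p>
<p>after which <code>http://localhost:31004</code> reaches the pods behind the service.</p>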
<blockquote>
<p>I would simply type <a href="http://customerservice:31004" rel="nofollow noreferrer">http://customerservice:31004</a></p>
</blockquote>
<p>is in the context of your web browser.</p>
| Rawkode |
<p>I have an API and MongoDB hosted on Kubernetes (EKS on AWS specifically). While my API runs completely fine locally and also in Docker locally, it does not on the Kubernetes cluster.</p>
<p>When I get the logs from <code>kubectl logs -f <pod-name></code>, I get the below exceptions:</p>
<pre><code>Unexpected generic exception occurred Autofac.Core.DependencyResolutionException: An exception was thrown while activating QueryService -> Database -> DbClient.
---> Autofac.Core.DependencyResolutionException: An exception was thrown while invoking the constructor 'Void .ctor(Microsoft.Extensions.Options.IOptions`1[DbOptions])' on type 'DbClient'.
---> System.ArgumentNullException: Value cannot be null. (Parameter 'connectionString')
at MongoDB.Driver.Core.Misc.Ensure.IsNotNull[T](T value, String paramName)
at MongoDB.Driver.Core.Configuration.ConnectionString..ctor(String connectionString, IDnsResolver dnsResolver)
at MongoDB.Driver.Core.Configuration.ConnectionString..ctor(String connectionString)
at MongoDB.Driver.MongoUrlBuilder.Parse(String url)
at MongoDB.Driver.MongoUrlBuilder..ctor(String url)
at MongoDB.Driver.MongoUrl..ctor(String url)
at MongoDB.Driver.MongoClientSettings.FromConnectionString(String connectionString)
at MongoDB.Driver.MongoClient..ctor(String connectionString)
at Mongo.DbOperations..ctor(String connectionString, String dataBase, String collection) in /src/Mongo/DbOperations.cs:line 36
at DbClient..ctor(IOptions`1 DbOptions) in /src/DbClient.cs:line 22
at lambda_method(Closure , Object[] )
at Autofac.Core.Activators.Reflection.ConstructorParameterBinding.Instantiate()
--- End of inner exception stack trace ---
at Autofac.Core.Activators.Reflection.ConstructorParameterBinding.Instantiate()
at Autofac.Core.Activators.Reflection.ReflectionActivator.ActivateInstance(IComponentContext context, IEnumerable`1 parameters)
at Autofac.Core.Resolving.InstanceLookup.Activate(IEnumerable`1 parameters, Object& decoratorTarget)
--- End of inner exception stack trace ---
at Autofac.Core.Resolving.InstanceLookup.Activate(IEnumerable`1 parameters, Object& decoratorTarget)
at Autofac.Core.Resolving.InstanceLookup.Execute()
at Autofac.Core.Resolving.ResolveOperation.GetOrCreateInstance(ISharingLifetimeScope currentOperationScope, ResolveRequest request)
at Autofac.Core.Resolving.ResolveOperation.Execute(ResolveRequest request)
at Autofac.Features.LazyDependencies.LazyRegistrationSource.<>c__DisplayClass5_1`1.<CreateLazyRegistration>b__1()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at Api.Controllers.MainController.<>c__DisplayClass4_0.<<ListLogicalDevicesAsync>b__0>d.MoveNext() in /src/Api/Controllers/Main/List.cs:line 40
--- End of stack trace from previous location where exception was thrown ---
at Api.Controllers.StandardController.ProcessExceptionsAsync(Func`1 action) in /src/Api/Controllers/StandardController.cs:line 64
</code></pre>
<p>The dependency injection should have no problem since I can run it locally without any trouble. So is it the connection string that I provided for MongoDB not working? Or is it really the dependency injection issue?</p>
<p>Below are my code for constructor in <code>DbOperation.cs</code></p>
<pre><code>public DbOperations(string connectionString, string dataBase, string collection)
{
var mongoClient = new MongoClient(connectionString);
var mongoDatabase = mongoClient.GetDatabase(dataBase);
_collection = mongoDatabase.GetCollection<BsonDocument>(collection);
}
</code></pre>
<p>My <code>DbClient.cs</code></p>
<pre><code>internal class DbClient : DbOperations, IDbClient
{
public DbClient(IOptions<DbOptions> dbOptions) :
base(dbOptions.Value.ConnectionString,dbOptions.Value.Database, dbOptions.Value.DeviceCollection)
{
}
}
</code></pre>
<p>My <code>Startup.cs</code></p>
<pre><code>//In ConfigureContainer() method
builder.RegisterType<DbClient>().As<IDbClient>();
//In ConfigureServices() method
services.Configure<DbOptions>(configuration.GetSection("MainDb"));
</code></pre>
<p>My <code>DbOptions.cs</code></p>
<pre><code>public class DbOptions
{
public string ConnectionString { get; set; }
public string Database { get; set; }
public string DeviceCollection { get; set; }
}
</code></pre>
<p>My <code>appsetting.json</code></p>
<pre><code> "MainDb": {
"Database": "main",
"DeviceCollection": "data"
}
</code></pre>
<p>My <code>appsetting.Dev.json</code></p>
<pre><code> "MainDb": {
"ConnectionString": "mongodb://<service-name>.default.svc.cluster.local:27017"
}
</code></pre>
<p>May I know if there is anything that might have gone wrong? Is it the MongoDB connection string? Or the dependency injection that could not get the connection string?</p>
<p>Thanks!</p>
| Sivvie Lim | <p>Based on the exception message and the stack trace, <code>connectionString</code> is <code>null</code>. Check your config -- it's missing in your non-dev config.</p>
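<p>One way to supply it in the cluster (a sketch; the container name and image are placeholders) is to inject the value as an environment variable on the API container, since the double underscore maps to the nested <code>MainDb:ConnectionString</code> key in ASP.NET Core configuration:</p>
<pre><code>containers:
- name: api
  image: my-api-image
  env:
  - name: MainDb__ConnectionString
    value: "mongodb://<service-name>.default.svc.cluster.local:27017"
</code></pre>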
| Igor Pashchuk |
<p>After upgrading from Ubuntu 20.04 LTS to Ubuntu 22.04 LTS, I am currently facing issues with my k3s cluster.</p>
<p>For instance, the logs of the <code>local-path-provisioner</code> pod:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl logs -n kube-system local-path-provisioner-6c79684f77-l4cqp
time="2022-04-28T03:27:00Z" level=fatal msg="Error starting daemon: Cannot start Provisioner: failed to get Kubernetes server version: Get \"https://10.43.0.1:443/version?timeout=32s\": dial tcp 10.43.0.1:443: i/o timeout"
</code></pre>
<p>I've tried the following actions:</p>
<ul>
<li>Disabling ipv6, as described <a href="https://cwesystems.com/?p=231" rel="nofollow noreferrer">here</a></li>
<li>Disabling <code>ufw</code> firewall</li>
<li>Use legacy iptables</li>
<li>Adding rules for internal traffic to iptables, like so:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ sudo iptables -A INPUT -s 10.42.0.0/16 -d <host_ip> -j ACCEPT
</code></pre>
<p>Still, <code>coredns</code>, <code>local-path-provisioner</code> and <code>metrics-server</code> deployments won't start. When listing pods, here's the output:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
cilium-64r4c 1/1 Running 2 (18m ago) 174m
cilium-d8grw 1/1 Running 2 (18m ago) 174m
cilium-g4gmf 1/1 Running 2 (18m ago) 174m
cilium-h5j4h 1/1 Running 2 (18m ago) 174m
cilium-n62nv 1/1 Running 2 (18m ago) 174m
cilium-operator-76cff99967-6fgkv 1/1 Running 2 (18m ago) 174m
cilium-operator-76cff99967-pbr4l 1/1 Running 2 (18m ago) 174m
cilium-w4n6d 1/1 Running 2 (18m ago) 174m
cilium-wgm7l 1/1 Running 2 (18m ago) 174m
cilium-zqb6w 1/1 Running 2 (18m ago) 174m
coredns-d76bd69b-bhgnl 0/1 CrashLoopBackOff 44 (3m27s ago) 177m
hubble-relay-67f64789c7-cjzz9 0/1 CrashLoopBackOff 63 (4m15s ago) 174m
hubble-ui-794cd44b77-9vgbl 3/3 Running 6 (18m ago) 174m
local-path-provisioner-6c79684f77-l4cqp 0/1 CrashLoopBackOff 35 (3m53s ago) 177m
metrics-server-7cd5fcb6b7-v74rc 0/1 CrashLoopBackOff 42 (3m35s ago) 177m
</code></pre>
<p>Any help is appreciated! thanks</p>
| Charles Guertin | <p>Since you're using Cilium, I think you might be running into this issue: <a href="https://github.com/cilium/cilium/issues/10645" rel="nofollow noreferrer">https://github.com/cilium/cilium/issues/10645</a></p>
<p>The workaround is to ensure <code>net.ipv4.conf.lxc*.rp_filter</code> is set to 0:</p>
<pre><code>echo 'net.ipv4.conf.lxc*.rp_filter = 0' | sudo tee -a /etc/sysctl.d/90-override.conf
sudo systemctl start systemd-sysctl
</code></pre>
| Sam Day |
<p>Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.</p>
<p>This is problematic because nodes are started and destroyed by Kubernetes and each time the external IP changes. Is there a way to make sure the external IP is chosen among a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change at all the single nodes' IPs.</p>
<p>To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.</p>
| rubik | <p>Yes, it's possible by using <a href="https://github.com/doitintl/kubeip" rel="noreferrer">KubeIP</a>.</p>
<p>You can create a pool of shareable IP addresses, and use KubeIP to automatically attach IP address from the pool to the Kubernetes node.</p>
<p>IP addresses can be created by:</p>
<ol>
<li>opening Google Cloud Dashboard</li>
<li>going VPC Network -> External IP addresses</li>
<li>clicking on "Reserve Static Address" and following the wizard (on the Network Service Tier, I think it needs to be a "Premium", for this to work).</li>
</ol>
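<p>The same reservation can also be made from the command line, e.g. (name and region are placeholders):</p>
<p><code>gcloud compute addresses create kubeip-ip1 --region us-central1</code></p>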
| Pedro Rodrigues |
<p>According to this <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md" rel="nofollow noreferrer">documentation</a>, Extra flags for the API server, controller, and scheduler components can be specified using the variables below, in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubeadm YAML config file:</p>
<ul>
<li><em><strong>kube_kubeadm_apiserver_extra_args</strong></em></li>
<li><em><strong>kube_kubeadm_controller_extra_args</strong></em></li>
<li><em><strong>kube_kubeadm_scheduler_extra_args</strong></em></li>
</ul>
<p>But I can't really figure out where to add them in ansible playbooks so that they can be rendered on the master node during the cluster deloyment.</p>
<p>I tried using this file <code>kubespray/roles/kubernetes/master/defaults/main/main.yml</code> and this file <code>kubespray/roles/kubespray-defaults/defaults/main.yaml</code> but it doesn't work for none of the two files, ansible doesn't deploy them, like if ansible doesn't read them.</p>
<p>Where the <code>kubeadm</code> YAML config file is located?</p>
<p>Can someone here help with these parameters management?</p>
| nixmind | <p>As documented on <a href="https://kubespray.io/#/docs/ansible?id=group-vars-and-overriding-variables-precedence" rel="nofollow noreferrer">https://kubespray.io/#/docs/ansible?id=group-vars-and-overriding-variables-precedence</a>, you should take a look at <code>inventory/<mycluster>/group_vars/all/all.yml</code> and <code>inventory/<mycluster>/group_vars/k8s-cluster/k8s-cluster.yml</code> for the configuration of your cluster.</p>
<p>Where <code>inventory/<mycluster></code> is a copy of the kubespray provided <a href="https://github.com/kubernetes-sigs/kubespray/tree/master/inventory/sample" rel="nofollow noreferrer"><code>inventory/sample</code> folder</a> with adaptations of the <code>inventory.ini</code> file and files inside <code>group_vars</code>.</p>
<p>Kubespray use the inventory layout proposed in <a href="https://docs.ansible.com/ansible/latest/user_guide/sample_setup.html#alternative-directory-layout" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/user_guide/sample_setup.html#alternative-directory-layout</a></p>
<p>Whatever your layout, for group_vars to be loaded, they have to be in the same folder as the file referenced by the <code>--inventory-file/--inventory/-i</code> option or the <code>defaults.inventory</code> config.</p>
<p>For example, if your inventory is the file <code>config/inventory</code>, you need to copy the sample inventory group_vars into <code>config/group_vars</code>.</p>
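<p>For the variables from the question, a minimal sketch in <code>inventory/<mycluster>/group_vars/k8s-cluster/k8s-cluster.yml</code> (the flags themselves are only examples):</p>
<pre><code>kube_kubeadm_apiserver_extra_args:
  audit-log-path: /var/log/kubernetes/audit.log
kube_kubeadm_controller_extra_args:
  node-monitor-grace-period: 40s
kube_kubeadm_scheduler_extra_args:
  v: "2"
</code></pre>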
| zigarn |
<p>TL;DR Kubernetes Ingress Nginx controller doesn't keep <code>path</code> if underlying service redirects to a relative URL</p>
<p>I have the following Ingress configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: release-name-test-tool
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: "kub.internal.company.com"
http:
paths:
- path: /api/test-tool(/|$)(.*)
pathType: Prefix
backend:
service:
name: test-tool-service
port:
name: http
</code></pre>
<p>Behind <code>test-tool-service</code> is a Spring Boot Application with authorization by OAuth2 protocol.
As a first step, the application redirects to <code>http://server:port/oauth2/authorization/test-tool</code>.</p>
<p>But in the K8S deployment, the <code>path</code> part is missing from the response's <code>location</code> header, and I receive <code>404</code> after the redirection (because there is no Ingress rule for <code>kub.internal.company.com/oauth2(/|$)(.*)</code>)</p>
<p><em>Actual</em>:</p>
<pre><code>cache-control: no-cache, no-store, max-age=0, must-revalidate
content-length: 0
date: Wed, 11 Aug 2021 13:04:24 GMT
expires: 0
location: https://kub.internal.company.com/oauth2/authorization/test-tool
pragma: no-cache
strict-transport-security: max-age=15724800; includeSubDomains
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
</code></pre>
<p><em>Expected</em>:</p>
<pre><code>location: https://kub.internal.company.com/api/test-tool/oauth2/authorization/test-tool
</code></pre>
<p>So, <code>location</code> header in response doesn't contain <code>path</code> from the Ingress configuration.</p>
<p>The same service deployed on bare metal + Nginx <code>proxy_pass</code> configuration works fine.</p>
<p>PS: I found a similar issue in GitHub <a href="https://github.com/kubernetes/ingress-nginx/issues/5076" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/5076</a> but without an answer.</p>
| Sansend | <p>I found a two-step solution that works for me:</p>
<ul>
<li>Application side. Spring Boot has a configuration for <a href="https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto.webserver.use-behind-a-proxy-server" rel="nofollow noreferrer">Running Behind a Front-end Proxy Server</a>. We need to set processing of <strong>X-Forwarded-</strong>* headers to <code>FRAMEWORK</code> (the default is <code>NONE</code>). Spring Boot will then automatically update the <code>Location</code> header based on the header values:</li>
</ul>
<pre><code>server.forward-headers-strategy=FRAMEWORK
</code></pre>
<ul>
<li>Nginx side. In my K8S setup, not all required <strong>X-Forwarded-</strong>* headers were set. In my case <code>X-Forwarded-Prefix</code> was not configured. Luckily there is a Nginx-Ingress configuration for that - <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header</a> :</li>
</ul>
<pre><code>ingress:
annotations:
nginx.ingress.kubernetes.io/x-forwarded-prefix: /api/test-tool
</code></pre>
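<p>Putting both pieces together, the Ingress from the question would look roughly like this (a sketch combining the manifests above, not a verified configuration):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-test-tool
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/x-forwarded-prefix: /api/test-tool
spec:
  rules:
  - host: "kub.internal.company.com"
    http:
      paths:
      - path: /api/test-tool(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: test-tool-service
            port:
              name: http
</code></pre>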
<p>How everything works together:</p>
<ul>
<li>The request <code>/api/test-tool/hello</code> is proxied to <code>/hello</code>. Nginx also sets <code>X-Forwarded-Prefix: /api/test-tool</code></li>
<li>The Spring Boot application handles the request and issues a redirect using the <code>Location</code> header: <code>Location : /oauth2/authorization/test-tool</code></li>
<li>Since we set up processing of <code>X-Forwarded-Prefix</code>, Spring Boot prepends the proxy path to <code>Location</code> : <code>Location : /api/test-tool/oauth2/authorization/test-tool</code></li>
<li>The response is proxied back to the client's browser and we achieve what we want: the browser is redirected to the correct authorization server.</li>
</ul>
| Sansend |
<p>Akka app on Kubernetes is facing delayed heartbeats, even when there is no load. </p>
<p>There is also constantly the following warning: </p>
<pre><code>heartbeat interval is growing too large for address ...
</code></pre>
<p>I tried to add a custom dispatcher for the cluster, even for every specific actor but did not help. I am not doing any blocking operations, since it is just a simple Http server.</p>
<p>When the cluster has load, the nodes get Unreachable. </p>
<p>I created a repository which can be used to reproduce the issue : <a href="https://github.com/CostasChaitas/Akka-Demo" rel="nofollow noreferrer">https://github.com/CostasChaitas/Akka-Demo</a></p>
| kostas takis | <p>First, thanks for the well documented reproducer. I did find one minor glitch with a dependency you included, but it was easy to resolve.</p>
<p>That said, I was unable to reproduce your errors. Everything worked fine on my local machine and on my dev cluster. You don't include your load generator, so maybe I just wasn't generating as sustained a load, but I got no heartbeat delays at all.</p>
<p>I suspect this is a duplicate of <a href="https://stackoverflow.com/questions/58015699/akka-cluster-heartbeat-delays-on-kubernetes">Akka Cluster heartbeat delays on Kubernetes</a>. If so, it sounds like you've already checked for my usual suspects of GC and CFS. And if you are able to reproduce locally, it also makes it improbable that it's my other common problem of badly configured K8 networking. (I had one client that was having problems with Akka clustering on K8 and it turned out that it was just a badly configured cluster: the network was dropping and delaying packets between pods.)</p>
<p>Since you say this is load testing perhaps you are just running out of sockets/files? You don't have much in the way of HTTP server configuration. (Nor any JVM options.)</p>
<p>I think my next debugging step would be to connect to one of the running containers and try to test the network between the pods.</p>
| David Ogren |
<p>I am trying to deploy IBM MQ to my local MAC machine using an image hosted on docker hub repository. I am using docker edge version with Kubernetes support on it.</p>
<p>I am able to deploy the image successfully using kubernetes and also have the Queue Manager running fine inside the container. I am also able to ssh into the container and make sure all the MQ processes are running as expected. </p>
<p>But when I set up port forwarding with the following kubectl command, it opens the port but does not let me telnet to it using the "IP or hostname" (even from the local machine). When I use "localhost" to telnet, it works fine.</p>
<p>While troubleshooting, I deployed the same image using docker commands instead of kubernetes, and with the docker deployment the port forwarding works as expected. It lets me telnet using the IP, hostname and localhost.</p>
<p>So, it's definitely some issue with Kubernetes port forwarding. Can someone please let me know if I am missing anything here? Let me know if there is any additional information needed from my end.</p>
<p>I am new to kubernetes and docker, but pretty familiar with IBM MQ.</p>
<p><strong><em>Commands being used:</em></strong></p>
<p><strong>To create port forwarding rule using kubectl, checking netstat and connecting with telnet:</strong> </p>
<hr>
<pre><code>HOSTNAME:Test2 an0s5v4$ sudo kubectl port-forward private-reg 1414:1414 &
[1] 3001
HOSTNAME:Test2 an0s5v4$ Forwarding from 127.0.0.1:1414 -> 1414
Forwarding from [::1]:1414 -> 1414
HOSTNAME:Test2 an0s5v4$ netstat -an |grep 1414
tcp6 0 0 ::1.1414 *.* LISTEN
tcp4 0 0 127.0.0.1.1414 *.* LISTEN
HOSTNAME:Test2 an0s5v4$ ps -ef|grep 1414
0 3001 920 0 10:27AM ttys006 0:00.03 sudo kubectl port-forward private-reg 1414:1414
0 3002 3001 0 10:27AM ttys006 0:00.18 kubectl port-forward private-reg 1414:1414
502 3007 920 0 10:28AM ttys006 0:00.00 grep 1414
</code></pre>
<hr>
<pre><code>HOSTNAME:Test2 an0s5v4$ telnet IP 1414
Trying IP...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<hr>
<pre><code>HOSTNAME:Test2 an0s5v4$ telnet localhost 1414
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Handling connection for 1414
</code></pre>
<hr>
<pre><code>L-RCC9048942:Test2 an0s5v4$ telnet HOSTNAME 1414
Trying IP ...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<hr>
<pre><code>HOSTNAME:Test2 an0s5v4$ nslookup HOSTNAME
;; Truncated, retrying in TCP mode.
Name: HOSTNAME
Address: IP
</code></pre>
<hr>
<p><strong>Kubernetes pod YAML file contents</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
labels:
app: ibmmq
spec:
containers:
-
env:
-
name: LICENSE
value: accept
-
name: MQ_QMGR_NAME
value: QM4
image: "image path in docker hub"
name: private-reg-container
ports:
-
containerPort: 1414
hostPort: 1414
</code></pre>
<hr>
<p><strong>EDIT: ADDED K8S Service to the post</strong></p>
<p><strong>Kubernetes service YAML file contents</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice-nodeport
labels:
app: ibmmq
spec:
ports:
- port: 3000
targetPort: 1414
nodePort: 31414
selector:
app: ibmmq
type: NodePort
</code></pre>
| Anurag | <p>If you need port forwarding to bind to all the network interfaces, you can pass this option. This is useful when working with servers in OpenShift etc., where there are different local and floating IPs:</p>
<blockquote>
<p>--address 0.0.0.0</p>
</blockquote>
<p>Example</p>
<pre><code>kubectl port-forward minio-64b7c649f9-9xf5x --address 0.0.0.0 7000:9000 --namespace minio
</code></pre>
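<p>Applied to the pod from the question, a hedged equivalent would be:</p>
<pre><code># bind the forward to all interfaces so telnet via the host IP/hostname works
kubectl port-forward private-reg --address 0.0.0.0 1414:1414
</code></pre>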
| Alex Punnen |
<p>We are looking for a viable option to map an external Windows file share inside Docker containers hosted on Kubernetes (AWS EKS), and have found only a few options. The Windows file share, being in the same VPN, is accessible via IP address.</p>
<p>In the absence of anything natively supported by Kubernetes, especially on EKS, we are trying FlexVolumes along with a persistent volume. But that would need installation of CIFS drivers on the nodes, which, as I understand, EKS doesn't provide, these being managed nodes.</p>
<p>Is there any option which doesn't require node-level installation of custom drivers, including CIFS etc.?</p>
| AnilR | <p>You could modify the cloudformation stack to install the drivers after startup, see
<a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-cfn-quickstart-windows.yaml" rel="nofollow noreferrer">https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-cfn-quickstart-windows.yaml</a> </p>
<p>It references <a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-windows-nodegroup.yaml" rel="nofollow noreferrer">https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-windows-nodegroup.yaml</a> which contains the following powershell startup lines</p>
<pre><code><powershell>
[string]$EKSBinDir = "$env:ProgramFiles\Amazon\EKS"
[string]$EKSBootstrapScriptName = 'Start-EKSBootstrap.ps1'
[string]$EKSBootstrapScriptFile = "$EKSBinDir\$EKSBootstrapScriptName"
[string]$cfn_signal = "$env:ProgramFiles\Amazon\cfn-bootstrap\cfn-signal.exe"
& $EKSBootstrapScriptFile -EKSClusterName ${ClusterName} ${BootstrapArguments} 3>&1 4>&1 5>&1 6>&1
$LastError = if ($?) { 0 } else { $Error[0].Exception.HResult }
& $cfn_signal --exit-code=$LastError `
--stack="${AWS::StackName}" `
--resource="NodeGroup" `
--region=${AWS::Region}
</powershell>
</code></pre>
<p>Add your custom installation requirements and use this new stack when launching your nodes</p>
| jontro |
<p>I want to set up a pre-defined PostgreSQL cluster on a bare metal Kubernetes 1.7 with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local PV</a> enabled. I have three worker nodes. I create a local PV on each node and deploy the stateful set successfully (with some complex scripting to set up Postgres replication).</p>
<p>However, I've noticed that there's a kind of naming convention between the volumeClaimTemplates and the PersistentVolumeClaims.
For example</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: postgres
volumeClaimTemplates:
- metadata:
name: pgvolume
</code></pre>
<p>The created pvc are <code>pgvolume-postgres-0</code>, <code>pgvolume-postgres-1</code>, <code>pgvolume-postgres-2</code> .</p>
<p>With some <a href="https://docs.openshift.org/3.11/install_config/persistent_storage/selector_label_binding.html" rel="nofollow noreferrer">trickery</a>, I manually created the PVCs and bound them to the target PVs by selector. I tested the stateful set again. It seems the stateful set is very happy to use these PVCs.</p>
<p>I finished my test successfully, but I still have this question. Can I rely on the volumeClaimTemplates naming convention? Is this an undocumented feature?</p>
| Gong Yi | <p>Based on the statefulset <a href="https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#statefulset-v1beta1-apps" rel="noreferrer">API reference</a> </p>
<blockquote>
<p>volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name.</p>
</blockquote>
<p>So I guess you can rely on it. </p>
<p>Moreover, you can define a storage class to leverage dynamic provisioning of persistent volumes, so you won't have to create them manually.</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: my-storage-class
resources:
requests:
storage: 1Gi
</code></pre>
<p>Please refer to <a href="http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html" rel="noreferrer">Dynamic Provisioning and Storage Classes in Kubernetes</a> for more details.</p>
| Jimmy Lu |
<p>I have process A and process B. Process A opens a file, calls mmap and writes to it; process B does the same but reads the same mapped region when process A has finished writing.</p>
<p>Using mmap, process B is supposed to read the file from memory instead of disk, assuming process A has not called munmap.</p>
<p>If I would like to deploy process A and process B to different containers in the same pod in Kubernetes, is memory mapped IO supposed to work the same way as in the initial example? Should container B (process B) read the file from memory as on my regular Linux desktop?</p>
<p>Let's assume both containers are in the same pod and are reading/writing the file from the same persistent volume. Do I need to consider a specific type of volume to achieve mmap IO?</p>
<p>In case you are curious, I am using Apache Arrow and pyarrow to read and write those files and achieve zero-copy reads.</p>
| rboc | <p>A Kubernetes pod is a group of containers that are deployed together on the same host. (<a href="https://coreos.com/kubernetes/docs/latest/pods.html#:%7E:text=A%20Kubernetes%20pod%20is%20a,and%20accurately%20understand%20the%20concept." rel="noreferrer">reference</a>). So this question is really about what happens for multiple containers running on the same host.</p>
<p>Containers are isolated on a host using a number of different technologies. There are two that <em>might</em> be relevant here. <strong>Neither</strong> prevents two processes from different containers sharing the same memory when they mmap a file.</p>
<p>The two things to consider are how the file systems are isolated and how memory is ring fenced (limited).</p>
<h1>How the file systems are isolated</h1>
<p>The trick used is to create a <a href="https://man7.org/linux/man-pages/man7/mount_namespaces.7.html" rel="noreferrer">mount namespace</a> so that any new mount points are not seen by other processes. Then file systems are mounted into a directory tree and finally the process calls <a href="https://en.wikipedia.org/wiki/Chroot" rel="noreferrer">chroot</a> to set <code>/</code> as the root of that directory tree.</p>
<p>No part of this affects the way processes mmap files. This is just a clever trick on how file names / file paths work for the two different processes.</p>
<p>Even if, as part of that setup, the same file system was mounted from scratch by the two different processes, the result would be the same as a <a href="https://unix.stackexchange.com/q/198590/20140">bind mount</a>. That means the same file system exists under two paths, but it is <em>the same</em> file system, not a copy.</p>
<p>Any attempt to mmap files in this situation would be identical to two processes in the same namespace.</p>
<h1>How are memory limits applied?</h1>
<p>This is done through <a href="https://en.wikipedia.org/wiki/Cgroups" rel="noreferrer">cgroups</a>. cgroups don't really isolate anything, they just put limits on what a single process can do.</p>
<p>But there is a natural question to ask: if two processes have different memory limits through cgroups, can they share the same shared memory? <a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="noreferrer">Yes they can!</a></p>
<blockquote>
<p>Note: file and shmem may be shared among other cgroups. In that case,
mapped_file is accounted only when the memory cgroup is owner of page
cache.</p>
</blockquote>
<p>The reference is a little obscure, but describes how memory limits are applied to such situations.</p>
<h1>Conclusion</h1>
<p>Two processes both memory mapping the same file from the same file system as different containers on the same host will behave almost exactly the same as if the two processes were in the same container.</p>
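<p>As a concrete illustration, here is a minimal pod sketch with two containers sharing the same volume (the names and images are placeholders, not your actual workloads); both processes can mmap a file under <code>/data</code> and will see the same pages:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mmap-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # could equally be a persistentVolumeClaim, as in the question
  containers:
  - name: writer            # runs "process A"
    image: my-arrow-writer:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader            # runs "process B"
    image: my-arrow-reader:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
</code></pre>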
| Philip Couling |
<p>I want to know if it's possible to get metrics for the services inside the pods using Prometheus.</p>
<p>I don't mean monitoring the pods but <strong>the processes inside those pods</strong>. For example, containers which have apache or nginx running inside them alongside other main services, so I can retrieve metrics for the web server and the other main service (for example a wordpress image which also comes with an apache configured).</p>
<p>The cluster already has kube-state-metrics, node-exporter and the blackbox exporter running.</p>
<p>Is it possible? If so, how can I manage to do it?</p>
<p>Thanks in advance</p>
| Jose David Palacio | <p>Prometheus works by scraping an HTTP endpoint that provides the actual metrics. That's where you get the term "exporter". So if you want to get metrics from the processes running inside of pods you have three primary steps:</p>
<ol>
<li>You must modify those processes to export the metrics you care about. This is inherently something that must be custom for each kind of application. The good news is that there are lots of <a href="https://prometheus.io/docs/instrumenting/exporters/" rel="nofollow noreferrer">pre-built ones</a>, including things like the nginx and apache ones you mention. Most application frameworks also have the capability to export prometheus metrics. ex: <a href="https://microprofile.io/project/eclipse/microprofile-metrics" rel="nofollow noreferrer">Microprofile</a>, <a href="https://quarkus.io/guides/smallrye-metrics" rel="nofollow noreferrer">Quarkus</a>, and many more.</li>
<li>You must then modify your pod definition to expose the HTTP endpoint that those processes are now providing. Very straightforward, but it will depend on the configuration you specify for your exporters (see the sketch after this list).</li>
<li>You must then modify your Prometheus to scrape those targets. This will depend on your monitoring stack. For Openshift you will find the docs here for <a href="https://docs.openshift.com/container-platform/4.9/monitoring/enabling-monitoring-for-user-defined-projects.html" rel="nofollow noreferrer">enabling user workload monitoring</a>, and <a href="https://docs.openshift.com/container-platform/4.9/monitoring/managing-metrics.html" rel="nofollow noreferrer">here</a> for providing exporter details.</li>
</ol>
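<p>As a minimal sketch of steps 2 and 3, assuming your Prometheus is configured with the common pod-annotation based scrape job (with the Prometheus Operator you would use a ServiceMonitor instead), the pod could run an exporter sidecar and advertise it via annotations; the pod name and exporter wiring below are illustrative:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app                          # placeholder name
  annotations:
    prometheus.io/scrape: "true"        # only honoured if your scrape config uses these annotations
    prometheus.io/port: "9113"
    prometheus.io/path: "/metrics"
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: nginx-exporter                # sidecar exposing nginx metrics over HTTP
    image: nginx/nginx-prometheus-exporter
    # args pointing the exporter at nginx's stub_status endpoint omitted for brevity
    ports:
    - containerPort: 9113
</code></pre>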
| David Ogren |
<p>We are working with Akka and have an application which uses Akka Cluster that we deploy to AWS EKS environment. When we run a load test against the application we observe heartbeat intervals between the components growing large and pods getting restarted. Akka documentation mentions not to use CPU resource limit, and removing the resource limit solves the issue.</p>
<p>But is there any other way of getting around this, we are not sure if removing the resource limits is a good practice when deploying application.</p>
<p><a href="https://doc.akka.io/docs/akka/current/additional/deploying.html?_ga=2.206580627.186827534.1646216705-1504733962.1642433119#resource-limits" rel="nofollow noreferrer">Resource limits</a></p>
<blockquote>
<p>To avoid CFS scheduler limits, it is best not to use resources.limits.cpu limits, but use resources.requests.cpu configuration instead.</p>
</blockquote>
| Karan Khanna | <p>So, I know the docs do make this recommendation to not use limits so the following is just "unofficial" advice you are getting from StackOverflow. (And thus, if you want support from Lightbend you will have to debate this with them.)</p>
<p>Firstly, I agree with you. For many reasons, you absolutely should have resource limits. For example, if you don't have CPU limits your process ends up being designated as "best effort" as far as the CFS scheduler is concerned and that can actually have bad consequences.</p>
<p>As I understand the history of this recommendation from Lightbend, it comes from a situation similar to yours where the CFS scheduler was preventing Akka from getting the resources it needed. Plus the broader problem that, especially when the garbage collector kicks in, it's definitely possible to consume all of your CFS quota very quickly and end up with long GC pause times. The Lightbend position has been if you use CPU resource limits, then the CFS scheduler will limit your CPU consumption and that could cause problems.</p>
<p>But my position is that limiting your CPU consumption is the entire point, and is actually a good thing. Kubernetes is a shared environment and limiting CPU consumption is how the system is designed to prevent "noisy neighbors", fair sharing, and often cost chargebacks. My position is that the CPU limits themselves aren't bad, the problem is only when your limits are too low.</p>
<p>While I hate to make generic advice, as there may be some situations where I might make different recommendations, I would generally recommend setting CPU limits, but having them be significantly higher than your CPU requests. (2-3 times as a rule of thumb.) This type of setting will classify your Pods as "Burstable". Which is important so that your Akka node can burst up to handle high load situations such as GC, handling failover, etc.</p>
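<p>As a hedged sketch of that rule of thumb (the numbers are illustrative, not a recommendation for your specific workload), the container resources might look like:</p>
<pre><code>resources:
  requests:
    cpu: "1"        # what the pod is sized for under normal load
    memory: 2Gi
  limits:
    cpu: "3"        # roughly 2-3x the request, leaving headroom for GC/failover bursts
    memory: 2Gi
</code></pre>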
<p>Ideally you then use something like HBA such that your Akka cluster will autoscale so that it can handle its normal load with its "request" allocation of CPU and only uses that burst allocation during these occasional circumstances. (Because if you are always operating past the request allocation and close to burst allocation, then you really aren't getting the benefit of bursting.)</p>
<p>For example, in your specific case, you say you have problems with heartbeats when you set CPU limits but that goes away when you turn off the CPU limits. This tells me that your CPU limits were <em>way</em> too low. If Akka isn't able get enough CPU to do heartbeats, even under normal circumstances, then there is no way that it would be able to get enough CPU to handle peaks.</p>
<p>(Also, since we are we assuming that the system will be normally running at the request CPU allocation, but potentially running at limit CPU allocation during GC, I would typically tune my Akka threads as if it had "request" CPU and tune my GC threads as if it had "limit" CPU allocation.)</p>
| David Ogren |
<p>I want to deploy helm charts, which are stored in a repository in AWS ECR, in the kubernetes cluster using ArgoCD. But I am getting a 401 unauthorized issue. I have pasted the entire issue below</p>
<pre><code>Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
</code></pre>
| Abhishek R | <p>Yes, you can use ECR for storing helm charts (<a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html" rel="noreferrer">https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html</a>)</p>
<p>I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.</p>
<pre><code>argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
</code></pre>
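<p>The same repository can also be declared as a Kubernetes Secret (a sketch based on Argo CD's declarative repository format; the password is still an ECR token that expires, so it would need to be refreshed periodically, e.g. by a CronJob):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: ecr-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # marks this Secret as an Argo CD repository
stringData:
  name: some-helmreponame
  url: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
  type: helm
  enableOCI: "true"
  username: AWS
  password: <output of "aws ecr get-login-password --region us-east-1">
</code></pre>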
| tifoz |