Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am looking to use Cypress for end-to-end testing of some Kubernetes applications. Typically, I access these applications via OIDC through Kong; however, Cypress doesn't support this, though it does support key-auth via an API key. Is there a way of setting up the service so that I can use both of these simultaneously?</p>
| Sheen | <p>I think you cannot use more than one authentication plugin in an XOR scenario. This would only work for AND as long as the plugins do not use the same headers.</p>
<p>I also faced this problem and I solved it by setting up one service (pointing to the backend) and multiple routes (one for normal traffic, one for test traffic). You then can activate different plugins on each route instead of sticking it to the service.</p>
<p>The only downside is the slightly different base path you use for testing, but I think this is less problematic than the downside of testing with a different way of authentication.</p>
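<p>For illustration, here is a minimal sketch of that idea in Kong's declarative configuration. The service URL, route names/paths and the OIDC plugin entry are placeholders (assumptions on my part), so adapt them to however you configure OIDC today:</p>
<pre><code>_format_version: "2.1"
services:
- name: my-app                  # placeholder service pointing at your backend
  url: http://my-app.default.svc.cluster.local:8080
  routes:
  - name: app-normal            # normal traffic, protected by your OIDC setup
    paths:
    - /app
    plugins:
    - name: openid-connect      # placeholder for the OIDC plugin you already use
  - name: app-e2e               # Cypress test traffic, protected by an API key
    paths:
    - /app-e2e
    plugins:
    - name: key-auth
</code></pre>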
| Philipp |
<p>I have all 4 files needed to read from/write to HDFS in my resources folder, and the method to create the HDFS object is as below.</p>
<pre><code>public static FileSystem getHdfsOnPrem(String coreSiteXml, String hdfsSiteXml, String krb5confLoc, String keyTabLoc){
    // Setup the configuration object.
    try {
        Configuration config = new Configuration();
        config.addResource(new org.apache.hadoop.fs.Path(coreSiteXml));
        config.addResource(new org.apache.hadoop.fs.Path(hdfsSiteXml));
        config.set("hadoop.security.authentication", "Kerberos");
        config.addResource(krb5confLoc);
        config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
        System.setProperty("java.security.krb5.conf", krb5confLoc);
        org.apache.hadoop.security.HadoopKerberosName.setConfiguration(config);
        UserGroupInformation.setConfiguration(config);
        UserGroupInformation.loginUserFromKeytab("my_username", keyTabLoc);
        return org.apache.hadoop.fs.FileSystem.get(config);
    }
    catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
</code></pre>
<p>It works when I run it locally and pass the below paths</p>
<pre><code>C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\core-site.xml
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\hdfs-site.xml
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\krb5.conf
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\my_username.user.keytab
</code></pre>
<p>It runs fine when I run it locally, but when I bundle it as a JAR and run it in an env like Kubernetes it throws the error below (since bundling as a JAR means I can read the contents of the resource files as a stream, but I need to pass in a path for the loginUserFromKeytab method)</p>
<pre><code>org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: my_username from keytab file:/opt/spark-3.0.0/jars/foo-project-name!/my_username.user.keytab javax.security.auth.login.LoginException: Unable to obtain password from user
</code></pre>
<p>Any suggestions/pointers are appreciated.</p>
| Venkatesh Gotimukul | <p>I suggest you use <a href="https://docs.oracle.com/javase/7/docs/technotes/guides/security/jaas/tutorials/GeneralAcnOnly.html#:%7E:text=JAAS%20can%20be%20used%20for,bean%2C%20or%20a%20servlet%3B%20and" rel="nofollow noreferrer">Jaas</a> config file instead of writing this code. This helps to remove the security plumbing from your code and externalizes it. "Unable to obtain password " would occur if the user that is running your app doesn't have permission to access the file.</p>
| Matt Andruff |
<p>I am connecting a Node.js app with MongoDB in a Kubernetes cluster. I want to ensure that the mongo pod communicates only with the Node.js pod and deny any other pod traffic. When I apply the default deny policy and then apply the allow policy, my app is not working.</p>
<p>I have come up with the following policies - why are they not working?</p>
<p>Default Deny Policy:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector:
    matchLabels: {}
</code></pre>
<p>Network Policy:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nodejs
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nodejs-mongo
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: mongo
</code></pre>
| user15068655 | <p>I applied your policies and it worked for me. However, you don't have to specify a deny-all policy. Once you create a Network Policy that allows a pod to accept traffic <strong>from</strong> a specific set of pods, it will be restricted by it. This keeps your setup simpler and requires less troubleshooting.</p>
<p>So in your case you can create a Network Policy that allows communication from a specific pod. Make sure you are using correct labels to select targeted pod(s) and pod(s) that it can accept traffic from.</p>
<p>Keep in mind that a <code>NetworkPolicy</code> is applied to a particular Namespace and only selects Pods in that particular <code>Namespace</code>.</p>
<p>Example policy that can be applied to accept traffic from pods matching its selectors:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-access
spec:
  podSelector:
    matchLabels:
      app: restricted-access # selects the pods that are targeted by this policy
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-traffic # selects the pods that are allowed to send traffic to the targeted pods
</code></pre>
| kool |
<p>I am on Windows and used Docker Desktop to deploy a local Kubernetes cluster using WSL 2. I tried to deploy a pod and expose it through a NodePort service so I could access it outside the cluster, but it is not working.</p>
<p>Here are the commands to reproduce the scenario:</p>
<pre><code>kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment echoserver --type=NodePort --port=8080
</code></pre>
<p>Trying to open <code>NODE_IP:EXPOSED_PORT</code> in the browser or running the netcat command <code>nc NODE_IP EXPOSED_PORT</code> and trying to send a message (from either WSL or Windows) does not work.</p>
<ul>
<li><code>NODE_IP</code> is the internal IP of the Docker Desktop K8S node (obtained by seeing the <code>INTERNAL-IP</code> column on the command <code>kubectl get nodes -o wide</code>)</li>
<li><code>EXPOSED_PORT</code> is the node port exposed by the service (obtained by seeing the field <code>NodePort</code> on command <code>kubectl describe service echoserver</code>)</li>
</ul>
<p>Opening the URL on the browser should be met with <a href="https://i.stack.imgur.com/G8hPk.jpg" rel="nofollow noreferrer">this</a> page. However, you will get a generic error response saying the browser couldn't reach the URL.</p>
<p>Sending a message with the netcat command should be met with a 400 Bad Request response, as it will not be a properly formatted HTTP request. However, you will not get any response at all or the TCP connection may not even be made in the 1st place.</p>
<p>Trying to communicate with the service and/or the pod from inside the cluster, for example, through another pod, works perfectly.
Using the command <code>kubectl port-forward deployment/echoserver 2311:8080</code> to port-forward the deployment locally and then accessing <code>localhost:2311</code> either in the browser or through netcat also works perfectly (in both WSL and Windows).</p>
| Tiago Silva | <p>If you want to access it without using localhost, you should use your <code><windows_host's_IP:NodePort></code>.</p>
<p>So having your deployment and service deployed:</p>
<pre><code>$kubectl get svc,deploy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/echoserver NodePort 10.105.169.2 <none> 8080:31570/TCP 4m12s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m3s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/echoserver 1/1 1 1 4m19s
</code></pre>
<p>You can either access it by using <code>localhost:31570</code> or <code><windows_host's_IP:NodePort></code>.</p>
<p>In my case <code>192.168.0.29</code> is my Windows host's IP:</p>
<pre><code>curl.exe 192.168.0.29:31570
CLIENT VALUES:
client_address=192.168.65.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.0.29:8080/
</code></pre>
| kool |
<p>I wanted to deploy Strapi CMS to Kubernetes. On my local machine, I am trying to do this with Minikube. The structure of the project is MySQL in a different container outside of the cluster. I want to access the MySQL database from inside the cluster via this IP <code>172.17.0.2:3306</code></p>
<p>The Database is outside of the cluster and lives in a docker container. But the Strapi project lives in a cluster of Kubernetes.</p>
<p>This is my deployment YAML file for doing the Kubernetes stuff:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: cg-api
spec:
  selector:
    matchLabels:
      app: cg-api
  replicas: 2
  template:
    metadata:
      labels:
        app: cg-api
    spec:
      containers:
      - name: cg-api
        image: alirezahd/cg-api
        env:
        - name: DATABASE_HOST
          value: "172.17.0.2"
        - name: DATABASE_PORT
          value: "3306"
        ports:
        - containerPort: 1337
</code></pre>
| Alireza Hamid | <p>You could try using the special minikube host to access your root machine, assuming you have exposed the Docker MySQL container on your root machine using <code>-p 3306:3306</code>:</p>
<pre><code>env:
- name: DATABASE_HOST
value: "host.minikube.internal"
</code></pre>
<p><a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/#hostminikubeinternal" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/host-access/#hostminikubeinternal</a>
Looks like you need minikube v1.10+</p>
| TJ H. |
<p>I have a simple Webserver that exposes the pod name on which it is located by using the <code>OUT</code> env var.</p>
<p>Deployment and service look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  selector:
    app: simpleweb
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleweb-deployment
  labels:
    app: simpleweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleweb
  template:
    metadata:
      labels:
        app: simpleweb
    spec:
      containers:
      - name: simpleweb
        env:
        - name: OUT
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        imagePullPolicy: Never
        image: simpleweb
        ports:
        - containerPort: 8080
</code></pre>
<p>I deploy this on my local <a href="https://kind.sigs.k8s.io" rel="nofollow noreferrer">kind cluster</a></p>
<pre><code>default simpleweb-deployment-5465f84584-m59n5 1/1 Running 0 12m
default simpleweb-deployment-5465f84584-mw8vj 1/1 Running 0 9m36s
default simpleweb-deployment-5465f84584-x6n74 1/1 Running 0 12m
</code></pre>
<p>and access it via</p>
<pre class="lang-bash prettyprint-override"><code>kubectl port-forward service/simpleweb-service 8080:8080
</code></pre>
<p>When I am hitting <code>localhost:8080</code> I always get to the same pod</p>
<p><strong>Questions:</strong></p>
<ul>
<li>Is my service not doing round robin?</li>
<li>Is there some caching that I am not aware of?</li>
<li>Do I have to expose my service differently? Is this a kind issue?</li>
</ul>
| User12547645 | <p>port-forward will only select the first pod for a service selector. If you want round-robin you'd need to use a load balancer like traefik or nginx.</p>
<p><a href="https://github.com/kubernetes/kubectl/blob/652881798563c00c1895ded6ced819030bfaa4d7/pkg/polymorphichelpers/attachablepodforobject.go#L52" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/blob/652881798563c00c1895ded6ced819030bfaa4d7/pkg/polymorphichelpers/attachablepodforobject.go#L52</a></p>
| TJ H. |
<p>I'm getting below error whenever I'm trying to apply an ingress resource/rules yaml file:</p>
<p><em><strong>failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": EOF</strong></em></p>
<p>It seems there are multiple errors for "failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": <em>Error here</em></p>
<p>Like below:</p>
<ol>
<li>context deadline exceeded</li>
<li>x509: certificate signed by unknown authority</li>
<li>Temporary Redirect</li>
<li>EOF</li>
<li>no endpoints available for service "ingress-nginx-controller-admission"</li>
</ol>
<p>...and many more.</p>
<p><strong>My Observations:</strong></p>
<p>As soon as the ingress resource/rules YAML is applied, the above error is shown and the Ingress Controller gets restarted as shown below:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-5cf97b7d74-zvrr6 1/1 Running 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 OOMKilled 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 CrashLoopBackOff 6 30m
ingress-nginx-controller-5cf97b7d74-zvrr6 0/1 Running 7 31m
ingress-nginx-controller-5cf97b7d74-zvrr6 1/1 Running 7 32m
</code></pre>
<p>One possible solution (though I'm not sure) could be the one mentioned here:
<a href="https://stackoverflow.com/a/69289313/12241977">https://stackoverflow.com/a/69289313/12241977</a></p>
<p>But not sure if it could possibly work in case of Managed Kubernetes services like AWS EKS as we don't have access to kube-api server.</p>
<p>Also the section "kind: ValidatingWebhookConfiguration" has below field from yaml:</p>
<pre><code>clientConfig:
  service:
    namespace: ingress-nginx
    name: ingress-nginx-controller-admission
    path: /networking/v1/ingresses
</code></pre>
<p>So what does the "path: /networking/v1/ingresses" do, and where does it reside? In other words, where can we find this path?
I checked the validating webhook using the command below, but I was not able to figure out where to find the above path</p>
<pre><code>kubectl describe validatingwebhookconfigurations ingress-nginx-admission
</code></pre>
<p><strong>Setup Details</strong></p>
<p>I installed using the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters" rel="nofollow noreferrer">Bare-metal method</a> exposed with NodePort</p>
<p>Ingress Controller Version - <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">v1.1.0</a></p>
<p>Kubernetes Cluster Version (AWS EKS): 1.21</p>
| AT07 | <p>OK, I got this working now:
I was getting the status "OOMKilled" (Out Of Memory). So what I did was add the "limits:" section under the "resources:" section of the Deployment YAML, as below:</p>
<pre><code>resources:
  requests:
    cpu: 100m
    memory: 90Mi
  limits:
    cpu: 200m
    memory: 190Mi
</code></pre>
<p>Now, it works fine for me.</p>
| AT07 |
<p>I have a vanilla EKS cluster deployed with <code>Terraform</code> at version 1.14 with RBAC enabled, but nothing installed into the cluster. I just executed <code>linkerd install | kubectl apply -f -</code>.</p>
<p>After that completes I have waited about 4 minutes for things to stabilize. Running <code>kubectl get pods -n linkerd</code> shows me the following:</p>
<pre><code>linkerd-destination-8466bdc8cc-5mt5f 2/2 Running 0 4m20s
linkerd-grafana-7b9b6b9bbf-k5vc2 1/2 Running 0 4m19s
linkerd-identity-6f78cd5596-rhw72 2/2 Running 0 4m21s
linkerd-prometheus-64df8d5b5c-8fz2l 2/2 Running 0 4m19s
linkerd-proxy-injector-6775949867-m7vdn 1/2 Running 0 4m19s
linkerd-sp-validator-698479bcc8-xsxnk 1/2 Running 0 4m19s
linkerd-tap-64b854cdb5-45c2h 2/2 Running 0 4m18s
linkerd-web-bdff9b64d-kcfss 2/2 Running 0 4m20s
</code></pre>
<p>For some reason <code>linkerd-proxy-injector</code>, <code>linkerd-controller</code>, and <code>linkerd-grafana</code> are not fully started.</p>
<p>Any ideas as to what I should check? The <code>linkerd-check</code> command is hanging.</p>
<p>The logs for the <code>linkerd-controller</code> show:</p>
<pre><code>linkerd-controller-68d7f67bc4-kmwfw linkerd-proxy ERR! [ 335.058670s] admin={bg=identity} linkerd2_proxy::app::identity Failed to certify identity: grpc-status: Unknown, grpc-message: "the request could not be dispatched in a timely fashion"
</code></pre>
<p>and </p>
<pre><code>linkerd-proxy ERR! [ 350.060965s] admin={bg=identity} linkerd2_proxy::app::identity Failed to certify identity: grpc-status: Unknown, grpc-message: "the request could not be dispatched in a timely fashion"
time="2019-10-18T21:57:49Z" level=info msg="starting admin server on :9996"
</code></pre>
<p>Deleting the pods and restarting the deployments results in different components becoming ready, but the entire control plane never becomes fully ready.</p>
| cpretzer | <p>A Linkerd community member answered with:</p>
<p>Which VPC CNI version do you have installed?
I ask because of:
- <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/641" rel="nofollow noreferrer">https://github.com/aws/amazon-vpc-cni-k8s/issues/641</a>
- <a href="https://github.com/mogren/amazon-vpc-cni-k8s/commit/7b2f7024f19d041396f9c05996b70d057f96da11" rel="nofollow noreferrer">https://github.com/mogren/amazon-vpc-cni-k8s/commit/7b2f7024f19d041396f9c05996b70d057f96da11</a></p>
<p>And after testing, this was the solution:</p>
<p>Sure enough, downgrading the AWS VPC CNI to v1.5.3 fixed everything in my cluster</p>
<p>Not sure why, but it does.
It seems that admission controllers are not working with v1.5.4</p>
<p>So, the solution is to use AWS VPC CNI v1.5.3 until the root cause in AWS VPC CNI v1.5.4 is determined.</p>
| cpretzer |
<p>I'm having a hard time pulling an image from a private repository. Here's the drill-down:</p>
<p>The pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  [...]
spec:
  containers:
  - image: gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main
    imagePullPolicy: Always
    [...]
  imagePullSecrets:
  - name: gitlab-dc-cdndemo-2
</code></pre>
<p>And the pull secret:</p>
<pre><code>$ base64 -d <(kubectl -n test-cdndemo get secret gitlab-dc-cdndemo-2 -o json | jq -r '.data.".dockerconfigjson"') | jq
{
  "auths": {
    "https://gitlab.private:31443": {
      "username": "gitlab+deploy-token-22",
      "password": "EGDLqGKJwBtfYYf9cDFg",
      "email": "[email protected]",
      "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
    }
  }
}
</code></pre>
<p>It's a playbook example of how it should be done. But when I deploy this I get:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned test-cdndemo/appcdnmanagerui-68c8f8c6dd-qcxr5 to node-waw107
Normal Pulling 13m (x4 over 14m) kubelet Pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main"
Warning Failed 13m (x4 over 14m) kubelet Failed to pull image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main": rpc error: code = Unknown desc = failed to pull and unpack image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to resolve referen
ce "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Warning Failed 13m (x4 over 14m) kubelet Error: ErrImagePull
Warning Failed 12m (x6 over 14m) kubelet Error: ImagePullBackOff
Normal BackOff 4m41s (x43 over 14m) kubelet Back-off pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main"
gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main
</code></pre>
<p>Notice the error, it's 403 Forbidden, not 401 Unauthorized so the credentials do work. Despite this, the image cannot be pulled from my private repo. But when I do this manually on a worker node everything goes smoothly:</p>
<pre><code>$ crictl --debug pull --creds gitlab+deploy-token-22:EGDLqGKJwBtfYYf9cDFg gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest
DEBU[0000] get image connection
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest,Annotations:map[string]string{},},Auth:&AuthConfig{Username:gitlab+deploy-token-22,Password:EGDLqGKJwBtfYYf9cDFg,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:nil,}
DEBU[0006] PullImageResponse: &PullImageResponse{ImageRef:sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260,}
Image is up to date for sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260
$ crictl image
IMAGE TAG IMAGE ID SIZE
[...]
gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main latest 0c3b5d355c164 105MB
</code></pre>
<p>It also works when I use the auth data instead (as expected):</p>
<pre><code>crictl --debug pull --auth Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw== gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest
</code></pre>
<p>Now I'm stuck. The only thing that comes to my mind is some kind of weird bug that's in the k3s release I'm using:</p>
<pre><code># k3s -v
k3s version v1.25.4+k3s1 (0dc63334)
go version go1.19.3
# crictl -v
crictl version v1.25.0-k3s1
</code></pre>
<p>The bug hypothesis is based on previous experience with older k8s deployment, where this approach is used and it works. But on a fresh cluster all I get is 403 error from k3s despite crictl pulling the image with no problems.</p>
<p>Anyone had this kind of problem before and solved it?</p>
| Rafał W. | <p>OK, I've found the problem. Instead of:</p>
<pre><code>{
"auths": {
"https://gitlab.private:31443": {
"username": "gitlab+deploy-token-22",
"password": "EGDLqGKJwBtfYYf9cDFg",
"email": "[email protected]",
"auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
}
}
}
</code></pre>
<p>It should have been:</p>
<pre><code>{
  "auths": {
    "gitlab.private:31443": {
      "username": "gitlab+deploy-token-22",
      "password": "EGDLqGKJwBtfYYf9cDFg",
      "email": "[email protected]",
      "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
    }
  }
}
</code></pre>
<p>Apparently the documentation at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a> is a bit misleading since the example there does contain https:// in the URL.</p>
<pre><code>{
"auths": {
"https://index.docker.io/v1/": {
"auth": "c3R...zE2"
}
}
}
</code></pre>
<p>And yes, my private repo does work over HTTPS connections.</p>
| Rafał W. |
<p>Can anyone help me understand how we can
<code>Try to optimize the Dockerfile by removing all unnecessary cache/files to reduce the image size.</code> and
<code>Removing unnecessary binaries/permissions to improve container security</code></p>
<p>My Dockerfile looks like this</p>
<pre><code>FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
</code></pre>
| Waseem Mir | <p>Well, there are actually a few ways to do that, I guess:</p>
<ul>
<li>multi-stage build</li>
</ul>
<pre><code># STAGE1
FROM alpine AS stage1
WORKDIR /bin
RUN wget https://link/of/some/binaries -O app1 \
&& chmod +x app1
# Run additional commands if you want
# STAGE2
FROM alpine AS stage2
WORKDIR /usr/local/bin
RUN wget https://link/of/some/binaries -O app2 \
&& chmod +x app2
# Run additional commands if you want
# FINAL STAGE (runtime)
FROM python:3.7-alpine as runtime
COPY --from=stage1 /bin/app1 /bin/app1
COPY --from=stage2 /usr/local/bin/app2 /bin/app2
...
</code></pre>
<p>This will actually allow you to get only the binaries you need, which you downloaded in the previous stages.</p>
<blockquote>
<p>If you are using <code>apk add</code> and you don't know where things are getting installed you can try to test on an alpine image by running <code>which command</code></p>
</blockquote>
<ul>
<li>remove cache</li>
</ul>
<pre><code>... # Install some stuff...
# Remove Cache
RUN rm -rf /var/cache/apk/*
</code></pre>
| Affes Salem |
<p>It was a working setup and no manual changes were made.</p>
<p>When we try to deploy an application on AKS, it fails to pull an image from the ACR.</p>
<p>As per the <code>kubectl describe po</code> output:</p>
<p>Failed to pull image "xyz.azurecr.io/xyz:-beta-68": [rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup rxyz.azurecr.io on [::1]:53: read udp [::1]:46256->[::1]:53: read: connection refused, rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup xyz.azurecr.io on [::1]:53: read udp [::1]:46112->[::1]:53: read: connection refused, rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup xyz.azurecr.io on [::1]:53: read udp [::1]:36677->[::1]:53: read: connection refused]</p>
<p>While troubleshooting I realised that a few nodes have the DNS entry in /etc/resolv.conf, and on those the image pull works fine without issue, while a few nodes don't have the DNS entry in /etc/resolv.conf, and on those the image pull fails.</p>
<p>And if I manually add the DNS entry to /etc/resolv.conf on the nodes that don't have it, the changes are reverted to the initial state within a few minutes.</p>
<p>Is there a procedure to edit /etc/resolv.conf or fix the image pull issues?</p>
| sanjeeth | <p>There is a bug in Ubuntu that impacts AKS (global).
You can follow the link below to see the status.
<a href="https://status.azure.com/en-us/status" rel="nofollow noreferrer">https://status.azure.com/en-us/status</a>
In addition, there is a thread here where you can follow the suggestions to overcome this issue.
<a href="https://learn.microsoft.com/en-us/answers/questions/987231/error-connecting-aks-with-acr.html" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/answers/questions/987231/error-connecting-aks-with-acr.html</a></p>
| Yonatan Shimoni |
<p>I have a k3s cluster with calico pods <code>calico-node-xxx</code> & <code>calico-kube-controllers-xxx</code> running in kube-system namespace. I am using <a href="https://docs.projectcalico.org/manifests/calico.yaml" rel="nofollow noreferrer">calico.yaml</a> config in my project.</p>
<p>Now, I want these images in calico.yaml to be pulled from my ACR repo instead of <code>docker.io</code> repo. So I tagged & pushed these images in my ACR repo and changed the images path in calico manifest yaml file (for e.g. <code>myacrrepo.io/calico/node:v3.17.1</code> ), so that pods can pull image from ACR repo.</p>
<p>But where should I mention the credentials of the ACR repo in the calico manifest file? Without credentials (username/password/hostname) the pod is failing with the error <code>x509: certificate signed by unknown authority</code>.</p>
<p>Can someone let me know where in <a href="https://docs.projectcalico.org/manifests/calico.yaml" rel="nofollow noreferrer">calico.yaml</a> I should add the ACR repo credentials (and what the syntax for providing credentials in <code>calico.yaml</code> is, because for the docker.io images no credentials are mentioned in <code>calico.yaml</code> that I could replace with my ACR repo credentials)?</p>
| Thor | <p>If you want to use a private container registry then you have to create an imagePullSecret. Microsoft explained <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-kubernetes" rel="nofollow noreferrer">here</a> how to do that for ACR.</p>
<p>Once you have created the imagePullSecret, add it to the <code>pod spec</code> as outlined in the section 'Use the image pull secret' of the documentation.</p>
<p>This imagePullSecret is not necessary for docker.io, since docker.io is a public registry and imagePullSecrets are only needed for private registries.</p>
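<p>For illustration, a rough sketch of where the reference goes once the secret exists. The secret name <code>acr-secret</code> is only an example; the same <code>imagePullSecrets</code> block would go into the pod templates of both the <code>calico-node</code> DaemonSet and the <code>calico-kube-controllers</code> Deployment in <code>calico.yaml</code>:</p>
<pre><code>spec:
  template:
    spec:
      imagePullSecrets:
      - name: acr-secret          # example name of the secret created per the linked docs
      containers:
      - name: calico-node
        image: myacrrepo.io/calico/node:v3.17.1
</code></pre>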
| avinashpancham |
<p>I am trying to set up a 3-node Kubernetes 1.18 cluster on CentOS 8 with containerd. Following the Stacked control plane and etcd nodes document (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/</a>), I was able to set up the primary master with the Calico CNI successfully.</p>
<p>When I add the second control-plane node, during the step that adds the second ETCD member, it crashes the primary ETCD container, and because of this the whole cluster came down. I am not sure why it is not able to add the second ETCD member. The firewall is disabled on my hosts.</p>
<p><strong>Here is my configuration</strong></p>
<pre><code>- kube-cp-1.com, 10.10.1.1
- kube-cp-2.com, 10.10.1.2
- kube-cp-3.com, 10.10.1.3
- lb.kube-cp.com, 10.10.1.4
</code></pre>
<p><strong>kubeadm-config.yaml</strong></p>
<pre><code>---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.1.1
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: kube-cp-1.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.10.1.1:2379"
      advertise-client-urls: "https://10.10.1.1:2379"
      listen-peer-urls: "https://10.10.1.1:2380"
      initial-advertise-peer-urls: "https://10.10.1.1:2380"
      initial-cluster: "kube-cp-1.com=https://10.10.1.1:2380"
    serverCertSANs:
    - kube-cp-1.com
    - kube-cp-2.com
    - kube-cp-3.com
    - localhost
    - 127.0.0.1
    - 10.10.1.1
    - 10.10.1.2
    - 110.10.1.3
    - 10.10.1.1
    - lb.kube-cp.com
    peerCertSANs:
    - kube-cp-1.com
    - kube-cp-2.com
    - kube-cp-3.com
    - localhost
    - 127.0.0.1
    - 10.10.1.1
    - 10.10.1.2
    - 110.10.1.3
    - 10.10.1.1
    - lb.kube-cp.com
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "lb.kube-cp.com"
controlPlaneEndpoint: "10.10.1.1:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.236.0.0/12
  podSubnet: 10.236.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
</code></pre>
<p><strong>init first master</strong></p>
<pre><code>kubeadm init --upload-certs --config k8s-nprd.kubeadm-init.yaml
</code></pre>
<p><strong>adding second master node</strong></p>
<pre><code>kubeadm join 10.10.1.1:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4823caf8f50f531ba1bd7ee6681411cfac923ead603a805f33a3a667fcfb62a4 \
  --control-plane --certificate-key a3005aca06076d93233becae71c600a34fa914aefa9e360c3f8b64092e1c43e5 --cri-socket /run/containerd/containerd.sock
</code></pre>
<p><strong>message from kubeadm join</strong></p>
<pre><code>I0406 10:25:45.903249 6984 manifests.go:91] [control-plane] getting StaticPodSpecs
W0406 10:25:45.903292 6984 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0406 10:25:45.903473 6984 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0406 10:25:45.903941 6984 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0406 10:25:45.904727 6984 local.go:78] [etcd] Checking etcd cluster health
I0406 10:25:45.904745 6984 local.go:81] creating etcd client that connects to etcd pods
I0406 10:25:45.904756 6984 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0406 10:25:45.912390 6984 etcd.go:102] etcd endpoints read from pods: https://10.10.1.1:2379
I0406 10:25:45.924703 6984 etcd.go:250] etcd endpoints read from etcd: https://10.10.1.1:2379
I0406 10:25:45.924732 6984 etcd.go:120] update etcd endpoints: https://10.10.1.1:2379
I0406 10:25:45.938129 6984 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0406 10:25:45.940638 6984 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "kube-cp-2.com" and status "Ready"
I0406 10:25:45.942529 6984 kubelet.go:159] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0406 10:25:46.597353 6984 cert_rotation.go:137] Starting client certificate rotation controller
I0406 10:25:46.599553 6984 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node
I0406 10:25:46.599572 6984 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kube-cp-2.com" as an annotation
I0406 10:26:01.608756 6984 local.go:130] creating etcd client that connects to etcd pods
I0406 10:26:01.608782 6984 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0406 10:26:01.613158 6984 etcd.go:102] etcd endpoints read from pods: https://10.10.1.1:2379
I0406 10:26:01.621527 6984 etcd.go:250] etcd endpoints read from etcd: https://10.10.1.1:2379
I0406 10:26:01.621569 6984 etcd.go:120] update etcd endpoints: https://10.10.1.1:2379
I0406 10:26:01.621577 6984 local.go:139] Adding etcd member: https://10.10.1.2:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0406 10:26:01.631714 6984 local.go:145] Updated etcd member list: [{kube-cp-2.com https://10.10.1.2:2380} {kube-cp-1.com https://10.10.1.1:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0406 10:26:01.632669 6984 etcd.go:500] [etcd] attempting to see if all cluster endpoints ([https://10.10.1.1:2379 https://10.10.1.2:2379]) are available 1/8
[kubelet-check] Initial timeout of 40s passed.
I0406 10:26:41.650088 6984 etcd.go:480] Failed to get etcd status for https://10.10.1.2:2379: failed to dial endpoint https://10.10.1.2:2379 with maintenance client: context deadline exceeded
</code></pre>
<p><strong>Primary ETCD log message, while adding second node.</strong></p>
<pre><code>crictl logs -f b127c56d13d5f
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-06 14:05:10.587582 I | etcdmain: etcd Version: 3.4.3
2020-04-06 14:05:10.587641 I | etcdmain: Git SHA: 3cf2f69b5
2020-04-06 14:05:10.587646 I | etcdmain: Go Version: go1.12.12
2020-04-06 14:05:10.587648 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-06 14:05:10.587652 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-06 14:05:10.587713 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-au
th = true, crl-file =
2020-04-06 14:05:10.588321 I | embed: name = kube-cp-1.com
2020-04-06 14:05:10.588335 I | embed: data dir = /var/lib/etcd
2020-04-06 14:05:10.588339 I | embed: member dir = /var/lib/etcd/member
2020-04-06 14:05:10.588341 I | embed: heartbeat = 100ms
2020-04-06 14:05:10.588344 I | embed: election = 1000ms
2020-04-06 14:05:10.588347 I | embed: snapshot count = 10000
2020-04-06 14:05:10.588353 I | embed: advertise client URLs = https://10.10.1.1:2379
2020-04-06 14:05:10.595691 I | etcdserver: starting member 9fe7e24231cce76d in cluster bd17ed771bd8406b
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=()
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became follower at term 0
raft2020/04/06 14:05:10 INFO: newRaft 9fe7e24231cce76d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became follower at term 1
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=(11522426945581934445)
2020-04-06 14:05:10.606487 W | auth: simple token is not cryptographically signed
2020-04-06 14:05:10.613683 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-04-06 14:05:10.614928 I | etcdserver: 9fe7e24231cce76d as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=(11522426945581934445)
2020-04-06 14:05:10.615341 I | etcdserver/membership: added member 9fe7e24231cce76d [https://10.10.1.1:2380] to cluster bd17ed771bd8406b
2020-04-06 14:05:10.616288 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-c
ert-auth = true, crl-file =
2020-04-06 14:05:10.616414 I | embed: listening for metrics on http://127.0.0.1:2381
2020-04-06 14:05:10.616544 I | embed: listening for peers on 10.10.1.1:2380
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d is starting a new election at term 1
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became candidate at term 2
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 2
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became leader at term 2
raft2020/04/06 14:05:10 INFO: raft.node: 9fe7e24231cce76d elected leader 9fe7e24231cce76d at term 2
2020-04-06 14:05:10.798941 I | etcdserver: setting up the initial cluster version to 3.4
2020-04-06 14:05:10.799837 N | etcdserver/membership: set the initial cluster version to 3.4
2020-04-06 14:05:10.799882 I | etcdserver/api: enabled capabilities for version 3.4
2020-04-06 14:05:10.799904 I | etcdserver: published {Name:kube-cp-1.com ClientURLs:[https://10.10.1.1:2379]} to cluster bd17ed771bd8406b
2020-04-06 14:05:10.800014 I | embed: ready to serve client requests
raft2020/04/06 14:26:01 INFO: 9fe7e24231cce76d switched to configuration voters=(11306080513102511778 11522426945581934445)
2020-04-06 14:26:01.629134 I | etcdserver/membership: added member 9ce744531170fea2 [https://10.10.1.2:2380] to cluster bd17ed771bd8406b
2020-04-06 14:26:01.629159 I | rafthttp: starting peer 9ce744531170fea2...
2020-04-06 14:26:01.629184 I | rafthttp: started HTTP pipelining with peer 9ce744531170fea2
2020-04-06 14:26:01.630090 I | rafthttp: started streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:26:01.630325 I | rafthttp: started streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:26:01.631552 I | rafthttp: started peer 9ce744531170fea2
2020-04-06 14:26:01.631581 I | rafthttp: added peer 9ce744531170fea2
2020-04-06 14:26:01.631594 I | rafthttp: started streaming with peer 9ce744531170fea2 (stream MsgApp v2 reader)
2020-04-06 14:26:01.631826 I | rafthttp: started streaming with peer 9ce744531170fea2 (stream Message reader)
2020-04-06 14:26:02.849514 W | etcdserver: failed to reach the peerURL(https://10.10.1.2:2380) of member 9ce744531170fea2 (Get https://10.10.1.2:2380/version: dial tcp 192.168.80.1
30:2380: connect: connection refused)
2020-04-06 14:26:02.849541 W | etcdserver: cannot get the version of member 9ce744531170fea2 (Get https://10.10.1.2:2380/version: dial tcp 10.10.1.2:2380: connect: connection refus
ed)
raft2020/04/06 14:26:02 WARN: 9fe7e24231cce76d stepped down to follower since quorum is not active
raft2020/04/06 14:26:02 INFO: 9fe7e24231cce76d became follower at term 2
raft2020/04/06 14:26:02 INFO: raft.node: 9fe7e24231cce76d lost leader 9fe7e24231cce76d at term 2
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d is starting a new election at term 2
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d became candidate at term 3
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 3
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 3
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d is starting a new election at term 3
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d became candidate at term 4
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 4
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 4
2020-04-06 14:26:06.631923 W | rafthttp: health check for peer 9ce744531170fea2 could not connect: dial tcp 10.10.1.2:2380: connect: connection refused
2020-04-06 14:26:06.632008 W | rafthttp: health check for peer 9ce744531170fea2 could not connect: dial tcp 10.10.1.2:2380: connect: connection refused
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d is starting a new election at term 4
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d became candidate at term 5
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 5
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 5
raft2020/04/06 14:26:08 INFO: 9fe7e24231cce76d is starting a new election at term 5
raft2020/04/06 14:26:08 INFO: 9fe7e24231cce76d became candidate at term 6
2020-04-06 14:27:11.684519 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-scheduler-kube-cp-2.com.1603412b3ca5e3ea\" " with result "error:context canceled" took too long (7.013696732s) to execute
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:11.684604 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kube-cp-2.com\" " with result "error:context canceled" took too long (6.216330254s) to
execute
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d is starting a new election at term 48
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d became candidate at term 49
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 49
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 49
2020-04-06 14:27:12.632989 N | pkg/osutil: received terminated signal, shutting down...
2020-04-06 14:27:12.633468 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "error:context canceled" took too long (7.957912936s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.633992 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.649430193s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.637645 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ippools\" range_end:\"/registry/crd.projectcalico.org/ippoolt\" count_only:true " with result "error:context canceled" took too long (6.174043444s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.637888 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ipamconfigs\" range_end:\"/registry/crd.projectcalico.org/ipamconfigt\" count_only:true " with result "error:context canceled" took too long (7.539908265s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638007 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (1.967145665s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638271 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "error:context canceled" took too long (1.809718334s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638423 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:context canceled" took too long (1.963396181s) to execute
2020-04-06 14:27:12.638433 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result
"error:context canceled" took too long (6.779544473s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638462 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-kube-cp-1.com\" " with result "error:context canceled" took too long
(970.539525ms) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.639866 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/clusterinformations/default\" " with result "error:context canceled" took too long (2.965996315s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.640009 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kube-cp-1.com\" " with result "error:context canceled" took too long (566.004502ms) to
execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: addrConn.createTransport failed to connect to {10.10.1.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.10.1.1:2379: connect: connection refused". Reconnecting...
2020-04-06 14:27:12.647096 I | etcdserver: skipped leadership transfer for stopping non-leader member
2020-04-06 14:27:12.647188 I | rafthttp: stopping peer 9ce744531170fea2...
2020-04-06 14:27:12.647201 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:27:12.647209 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:27:12.647228 I | rafthttp: stopped HTTP pipelining with peer 9ce744531170fea2
2020-04-06 14:27:12.647238 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (stream MsgApp v2 reader)
2020-04-06 14:27:12.647248 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (stream Message reader)
2020-04-06 14:27:12.647260 I | rafthttp: stopped peer 9ce744531170fea2
</code></pre>
<p>Any help with adding the secondary etcd node?</p>
<p>Thanks
SR</p>
| sfgroups | <p>I ran into a similar issue with v1.18.1, and root-caused it to the fact that my control-plane hosts had an incorrectly configured MTU value for the network interface. This was causing the etcdserver pod interactions between the first and subsequent control-plane nodes to timeout, because some network packets were getting silently dropped.</p>
<p>Fixing the MTU allowed me to complete the control-plane setup as advertised.</p>
<p>Details for my setup:</p>
<p>In my case, I was using KVM VMs (launched using LXD) as my control-plane hosts. Because of a DHCP misconfiguration error, the hosts were not receiving the proper MTU, so they stayed with the default value of 1500 ... which ran into problems with the inter-host overlay network. Reducing the MTU down to 1450 resolved the issue.</p>
<p>If you are interested in additional details about why the MTU misconfiguration manifested in the way it did, I found the following Project Calico issue discussion useful:
<a href="https://github.com/projectcalico/calico/issues/1709" rel="nofollow noreferrer">https://github.com/projectcalico/calico/issues/1709</a></p>
| Vijay Karamcheti |
<p>On a Kubernetes cluster with CoreOS Prometheus Operator scraping all standard cluster metrics, what Prometheus metric would show me the <code>currentCPUUtilizationPercentage</code> value for a simple HPA (horizontal pod autoscaler)?</p>
<p>If I setup a simple hpa like:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl autoscale deployment php-apache --cpu-percent=30 --min=3 --max=10
</code></pre>
<p>And then if I do <code>kubectl get hpa php-apache -o yaml</code> I see something like:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  maxReplicas: 10
  minReplicas: 3
  targetCPUUtilizationPercentage: 30
  ...
status:
  currentCPUUtilizationPercentage: 28
  currentReplicas: 9
  desiredReplicas: 9
  ...
</code></pre>
<p>I want to see that <code>currentCPUUtilizationPercentage</code> in Prometheus. I've done a bunch of Prometheus queries to look for this.</p>
<p>I've searched all metrics tagged <code>{hpa="php-apache"}</code> and I see many hpa metrics, but not the metric I'm looking for. I can see <code>kube_hpa_spec_target_metric</code> set to 30 and I can see current status metrics like <code>kube_hpa_status_condition</code>, but not the current CPU metric value that I want to see.</p>
<p>I've searched all metrics tagged <code>{metric_name="cpu"}</code> and only see <code>kube_hpa_spec_target_metric</code></p>
<p>I've searched all container- and pod-related metrics tagged <code>{container="my-container-name"}</code> and <code>{pod=~"my-pod-prefix.*"}</code> and I see several CPU-related metrics like <code>container_cpu_usage_seconds_total</code> and <code>container_spec_cpu_quota</code>, but nothing similar to, or that seems usable for calculating, the <code>currentCPUUtilizationPercentage</code> value that I'm looking for.</p>
<p>FYI, this is on Kubernetes 1.17.x and using a recent version of CoreOS Prometheus Operator.</p>
| clay | <p>If I remember correctly, currentCPUUtilizationPercentage is a k8s-internal metric for the HPA and is not exposed directly as a metric you can <strong>scrape</strong> with Prometheus.
See <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm</a></p>
<p>You probably could <strong>scrape</strong> the components of the currentCPUUtilizationPercentage metric and create a custom metric to see it in Prometheus.</p>
| Vlad Ulshin |
<p>I'm trying to build a docker image using DIND with Atlassian Bamboo.</p>
<p>I've created the deployment/ StatefulSet as follows:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
      - image: atlassian/bamboo-server:latest
        imagePullPolicy: IfNotPresent
        name: bamboo-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        securityContext:
          privileged: true
        volumeMounts:
        - name: bamboo-home
          mountPath: /var/atlassian/application-data/bamboo
        - mountPath: /opt/atlassian/bamboo/conf/server.xml
          name: bamboo-server-xml
          subPath: bamboo-server.xml
        - mountPath: /var/run
          name: docker-sock
      volumes:
      - name: bamboo-home
        persistentVolumeClaim:
          claimName: bamboo-home
      - configMap:
          defaultMode: 511
          name: bamboo-server-xml
        name: bamboo-server-xml
      - name: docker-sock
        hostPath:
          path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
</code></pre>
<p>Note that I've set <code>privileged: true</code> in <code>securityContext</code> to enable this.</p>
<p>However, when trying to run docker images, I get a permission error:</p>
<pre><code>Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
</code></pre>
<p>Am I missing something wrt setting up DIND?</p>
| bear | <p>The /var/run/docker.sock file on the host system is owned by a different user than the user that is running the bamboo-server container process.</p>
<p>Without knowing any details about your cluster, I would assume docker runs as 'root' (UID=0). The bamboo-server runs as 'bamboo', as can be seen from its <a href="https://bitbucket.org/atlassian-docker/docker-bamboo-server/src/def31993078d9d06dedf36b6bdc6f4e33f4a5232/Dockerfile#lines-55" rel="nofollow noreferrer">Dockerfile</a>, which will normally map to a UID in the 1XXX range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.</p>
<p>So I think there are two approaches possible:</p>
<ul>
<li><p>Or the container process continues to run as the 'bamboo' user, but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID the bamboo user maps to on the host system to the docker group on the host system. However, making changes to the host system might or might not be an option depending on the context of your cluster, and is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.</p></li>
<li><p>Or the container is changed so that it runs as a sufficiently privileged user to begin with, being the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run-time by means of the 'runAsUser' and 'runAsGroup' securityContext instructions as specified <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">here</a>. Both should be '0' (a minimal sketch follows below this list).</p></li>
</ul>
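<p>A minimal sketch of that second option, applied to the container spec from your StatefulSet (the '0' values follow the securityContext documentation linked above):</p>
<pre><code>containers:
- image: atlassian/bamboo-server:latest
  imagePullPolicy: IfNotPresent
  name: bamboo-server
  securityContext:
    privileged: true
    runAsUser: 0     # run the container process as root
    runAsGroup: 0    # so it can access the host's /var/run/docker.sock
</code></pre>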
| Boris Van Hardeveld |
<p>I've self-hosted instances of <code>Gitlab</code> and <code>Kubernetes</code>. I'm trying to integrate both tools to implement a CI/CD pipeline. I'm trying to configure the private registry of the self-hosted Gitlab in Kubernetes, but when the Pod is starting, it fails when trying to pull the image from the registry.</p>
<p><strong>Deployment.yml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: ns-app
  labels:
    app: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
        app: app
    spec:
      containers:
      - name: app
        image: private-registry.gitlab.com/some-link/image:0.1.0
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
</code></pre>
<p><strong>docker/config.json</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"auths":
{
"gitlab-registry.al.rn.leg.br":
{
"username": "user",
"password": "password"
}
}
}
</code></pre>
<p><strong>secrets.yml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: ns-app
type: Opaque
data:
  .dockerconfigjson: {{docker/config.json in base64}}
</code></pre>
<p>The Pod is getting a <strong>403 Forbidden</strong> error.</p>
| v1d3rm3 | <p>The problem is the wrong <strong>type</strong> attribute in <strong>secrets.yml</strong>.</p>
<p>It should be: <strong>kubernetes.io/dockerconfigjson</strong></p>
<p>Final file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: regcred
namespace: ns-app
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{docker/config.json in base64}}
</code></pre>
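<p>For reference, the same kind of secret can also be generated directly by kubectl, which sets the correct type for you (the registry URL and credentials below are placeholders):</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=gitlab-registry.example.com \
  --docker-username=user \
  --docker-password=password \
  -n ns-app
</code></pre>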
| v1d3rm3 |
<p>I try to execute</p>
<pre><code>kubeadm init --apiserver-advertise-address 49.232.211.230 --pod-network-cidr=10.244.0.0/16 -v=9
</code></pre>
<p>print log:</p>
<pre><code>[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0426 01:21:19.624413 30483 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.0 (linux/amd64) kubernetes/cb303e6" 'https://49.232.211.230:6443/healthz?timeout=10s'
I0426 01:21:19.626723 30483 round_trippers.go:454] GET https://49.232.211.230:6443/healthz?timeout=10s in 0 milliseconds
I0426 01:21:19.626800 30483 round_trippers.go:460] Response Headers:
I0426 01:21:20.127086 30483 round_trippers.go:435] curl -k -v -XGET -H "User-Agent: kubeadm/v1.21.0 (linux/amd64) kubernetes/cb303e6" -H "Accept: application/json, */*" 'https://49.232.211.230:6443/healthz?timeout=10s'
I0426 01:21:20.127764 30483 round_trippers.go:454] GET https://49.232.211.230:6443/healthz?timeout=10s in 0 milliseconds
I0426 01:21:20.127782 30483 round_trippers.go:460] Response Headers:
I0426 01:21:20.627098 30483 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.0 (linux/amd64) kubernetes/cb303e6" 'https://49.232.211.230:6443/healthz?timeout=10s'
I0426 01:21:20.627747 30483 round_trippers.go:454] GET https://49.232.211.230:6443/healthz?timeout=10s in 0 milliseconds
</code></pre>
<p>finally:</p>
<pre><code> Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
</code></pre>
<p>Configuration file after the execution failure ("/var/lib/kubelet/config.yaml"):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
</code></pre>
<p>I even reinstalled the system, but it's still like this.</p>
<p>I tried all the kubeadm methods in the kubernetes documentation.</p>
<p>It always ends up like this, and I can't get past it.</p>
<p>Thanks, please help me.</p>
| why | <p>As mentioned in the kubeadm init command logs, this is a kubelet or CRI (container runtime) issue. Reset your cluster using the <code>kubeadm reset -f</code> command and then try to perform these steps in order:</p>
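<p>The reset step itself (the <code>-f</code> flag skips the confirmation prompt):</p>
<pre><code>sudo kubeadm reset -f
</code></pre>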
<ol>
<li>Stop Kubelet and CRI services :</li>
</ol>
<pre><code>sudo systemctl stop kubelet
sudo systemctl stop docker (if you are using docker)
</code></pre>
<ol start="2">
<li>Flush iptables and turn swap off (important: if any firewall service is running, verify that the required <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noreferrer">kubernetes_cluster_ports</a> are enabled):</li>
</ol>
<pre><code>sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo swapoff -a
</code></pre>
<ol start="3">
<li>Start the CRI and kubelet services and verify that they are working fine (in active/running status):</li>
</ol>
<pre><code> sudo systemctl start --now docker
sudo systemctl start --now kubelet
sudo systemctl status kubelet
</code></pre>
<ol start="4">
<li>initialize your cluster</li>
</ol>
<pre><code> sudo kubeadm init
</code></pre>
<p>If all these steps are done and you still get the issue, verify your networking configuration; probably docker could not initialize a specific pod for the control plane due to a network issue.</p>
| Hajed.Kh |
<p>I am following this guide to run up a zeppelin container in a local kubernetes cluster set up using minikube.</p>
<p><a href="https://zeppelin.apache.org/docs/0.9.0-SNAPSHOT/quickstart/kubernetes.html" rel="nofollow noreferrer">https://zeppelin.apache.org/docs/0.9.0-SNAPSHOT/quickstart/kubernetes.html</a></p>
<p>I am able to set up zeppelin and run some sample code there. I have downloaded spark 2.4.5 & 2.4.0 source code and built it for kubernetes support with the following command:</p>
<pre><code>./build/mvn -Pkubernetes -DskipTests clean package
</code></pre>
<p>Once spark is built I created a docker container as explained in the article:</p>
<pre><code>bin/docker-image-tool.sh -m -t 2.4.X build
</code></pre>
<p>I configured zeppelin to use the spark image which was built with kubernetes support. The article above explains that the spark interpreter will auto configure spark on kubernetes to run in client mode and run the job.</p>
<p>But whenever I try to run any paragraph with spark I receive the following error:</p>
<pre><code>Exception in thread "main" java.lang.IllegalArgumentException: basedir must be absolute: ?/.ivy2/local
</code></pre>
<p>I tried setting the spark configuration <code>spark.jars.ivy</code> in zeppelin to point to a temp directory but that does not work either. </p>
<p>I found a similar issue here:
<a href="https://stackoverflow.com/questions/50861477/basedir-must-be-absolute-ivy2-local">basedir must be absolute: ?/.ivy2/local</a></p>
<p>But I can't seem to configure spark to run with the <code>spark.jars.ivy /tmp/.ivy</code> config. I also tried including the <strong>spark-defaults.conf</strong> when building spark, but that does not seem to be working either.</p>
<p>I'm quite stumped by this problem and how to solve it; any guidance would be appreciated.</p>
<p>Thanks!</p>
| Talha Fahim | <p>I have also run into this problem, but a work-around I used for setting <code>spark.jars.ivy=/tmp/.ivy</code> is to rather set it is as an environment variable.</p>
<p>In your spark interpreter settings, add the following property: <code>SPARK_SUBMIT_OPTIONS</code> and set its value to <code>--conf spark.jars.ivy=/tmp/.ivy</code>.</p>
<p>This should pass additional options to spark submit and your job should continue.</p>
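<p>If you prefer to keep it out of the interpreter UI, the same option can, as far as I know, also be exported in <code>conf/zeppelin-env.sh</code> (path relative to the Zeppelin installation; treat this as a sketch rather than the exact setup used above):</p>
<pre><code># conf/zeppelin-env.sh
export SPARK_SUBMIT_OPTIONS="--conf spark.jars.ivy=/tmp/.ivy"
</code></pre>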
| J Levenson |
<p>I've been struggling with my first kubernetes setup, and I can't work out where I've been going wrong. I have a .Net API that I've got working locally, and I can confirm that the Docker image runs without a problem locally. But when deployed into a Kubernetes service I only ever get a 404 from Nginx, and the certificate I have set up is reported as invalid.</p>
<p>I'm hosted in azure, and am building and publishing to the ACR fine, and the deployment is being run by a CICD pipeline and pulling the correct image.</p>
<p>These are the files I have so far:
Deployment.yaml (the placeholders are pulled in by the CICD):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mywebsite-api-deployment
spec:
selector:
matchLabels:
app: mywebsite-api-pod
template:
metadata:
labels:
app: mywebsite-api-pod
spec:
containers:
- name: mywebsite-api-container
image: ${acrLoginServer}/${repository}:${imageTag}
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
imagePullPolicy: Always
</code></pre>
<p>service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mywebsite-api-service
spec:
selector:
app: mywebsite-api-pod
ports:
- port: 80
targetPort: 80
type: ClusterIP
</code></pre>
<p>ingress.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mywebsite-ingress
spec:
tls:
- hosts:
- api-uat.mywebsite.com
secretName: mywebsite-tls
defaultBackend:
service:
name: mywebsite-api-service
port:
number: 80
rules:
- host: api-uat.mywebsite.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mywebsite-api-service
port:
number: 80
</code></pre>
<p>secret:</p>
<pre><code>apiVersion: v1
data:
tls.crt: /notpasted/
tls.key: /notpasted/
kind: Secret
metadata:
creationTimestamp: "2023-06-06T17:47:27Z"
name: mywebsite-tls
namespace: default
resourceVersion: "2795660"
uid:
type: kubernetes.io/tls
</code></pre>
<p>The certificate I have is a wildcard, and I've pointed my DNS entry to the correct IP address, so I'm assuming that bit is configured correctly at least.</p>
<p>Any idea where I'm going wrong? The certificate being delivered by k8s is the standard Acme cert, and every request I make is returned with a 404. I'm not using Helm as it seemed overkill for this, but perhaps it's not?</p>
<p>Thank you for any advice.</p>
| wombat172a | <p>Found the answer to this myself. I didn't understand how nginx was set up, and that it runs as a pod itself in a different namespace.</p>
<p>From that nginx pod I was able to view the logs and identify the following entry:</p>
<pre><code>I0606 17:34:42.990421 8 store.go:425] "Ignoring ingress because of error while validating ingress class" ingress="default/mywebsite-ingress" error="ingress does not contain a valid IngressClass"
</code></pre>
<p>I could then run the following command to get the ingressClass name:
<code>kubectl get ingressclass -A -o yaml</code></p>
<pre><code>apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
creationTimestamp: "2023-06-04T11:38:58Z"
generation: 1
name: webapprouting.kubernetes.azure.com
resourceVersion: "2997218"
uid: 8547e63e-30f7-4dfe-910b-2f7dce8884a3
spec:
controller: k8s.io/ingress-nginx
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>And that class name I just had to add to my ingress.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mywebsite-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: webapprouting.kubernetes.azure.com
tls:
    - hosts:
etc. etc.
</code></pre>
| wombat172a |
<p>I'm trying to do a k8s tutorial on youtube <a href="https://www.youtube.com/watch?v=X48VuDVv0do&t=5428s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=X48VuDVv0do&t=5428s</a>.
An error occurred: the k8s pod failed to connect to mongodb when I created the mongo-express deployment. Please kindly help!</p>
<p>Error Info of pod retrieved by kubectl logs command:</p>
<pre><code>Welcome to mongo-express
------------------------
(node:7) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
Could not connect to database using connectionString: mongodb://username:password@mongodb-service:27017/"
(node:7) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [mongodb-service:27017] on first connect [Error: getaddrinfo ENOTFOUND mongodb-service
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) {
name: 'MongoNetworkError'
}]
at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:441:11)
at Pool.emit (events.js:314:20)
at /node_modules/mongodb/lib/core/connection/pool.js:564:14
at /node_modules/mongodb/lib/core/connection/pool.js:1000:11
at /node_modules/mongodb/lib/core/connection/connect.js:32:7
at callback (/node_modules/mongodb/lib/core/connection/connect.js:300:5)
at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:330:7)
at Object.onceWrapper (events.js:421:26)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
(node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:7) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
</code></pre>
<p><strong>configuration files</strong>
Configuration files of the kubernetes components, for your reference.</p>
<pre><code> 1. kubernetes secret
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: dXNlcm5hbWU=
        mongo-root-password: cGFzc3dvcmQ=
 2. mongodb deployment & service
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
          targetPort: 27017
 3. kubernetes configmap
    apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-configmap
data:
      database_url: mongodb-service
 4. mongo-express
    apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-configmap
key: database_url
</code></pre>
| Andy Xu | <p>I followed the same tutorial and faced the same issue. The problem is with the docker driver: starting minikube with docker was the issue, as docker seems to have some limitations here. Install hyperkit if you are using macOS, Hyper-V if you are on Windows, or VirtualBox if you are using some distribution of Linux.</p>
<p>Then start minikube with the virtual machine you installed, like this</p>
<pre><code>minikube start --driver=virtualbox
</code></pre>
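<p>Similarly, if you installed hyperkit (macOS) or Hyper-V (Windows) instead, the driver flag changes accordingly (a sketch, assuming the corresponding hypervisor is already installed):</p>
<pre><code>minikube start --driver=hyperkit   # macOS with hyperkit
minikube start --driver=hyperv     # Windows with Hyper-V
</code></pre>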
| Harxish |
<p>I can't find clear information anywhere: how do I make a Ceph cluster healthy again after removing an OSD?
I just removed one of the 4 OSDs, following the <a href="https://rook.github.io/docs/rook/v1.10/Storage-Configuration/Advanced/ceph-osd-mgmt/#remove-an-osd" rel="nofollow noreferrer">manual</a>.</p>
<pre><code>kubectl -n rook-ceph scale deployment rook-ceph-osd-2 --replicas=0
kubectl rook-ceph rook purge-osd 2 --force
2023-02-23 14:31:50.335428 W | cephcmd: loaded admin secret from env var ROOK_CEPH_SECRET instead of from file
2023-02-23 14:31:50.335546 I | rookcmd: starting Rook v1.10.11 with arguments 'rook ceph osd remove --osd-ids=2 --force-osd-removal=true'
2023-02-23 14:31:50.335558 I | rookcmd: flag values: --force-osd-removal=true, --help=false, --log-level=INFO, --operator-image=, --osd-ids=2, --preserve-pvc=false, --service-account=
2023-02-23 14:31:50.335563 I | op-mon: parsing mon endpoints: b=10.104.202.63:6789
2023-02-23 14:31:50.351772 I | cephclient: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2023-02-23 14:31:50.351969 I | cephclient: generated admin config in /var/lib/rook/rook-ceph
2023-02-23 14:31:51.371062 I | cephosd: validating status of osd.2
2023-02-23 14:31:51.371103 I | cephosd: osd.2 is marked 'DOWN'
2023-02-23 14:31:52.449943 I | cephosd: marking osd.2 out
2023-02-23 14:31:55.263635 I | cephosd: osd.2 is NOT ok to destroy but force removal is enabled so proceeding with removal
2023-02-23 14:31:55.280318 I | cephosd: removing the OSD deployment "rook-ceph-osd-2"
2023-02-23 14:31:55.280344 I | op-k8sutil: removing deployment rook-ceph-osd-2 if it exists
2023-02-23 14:31:55.293007 I | op-k8sutil: Removed deployment rook-ceph-osd-2
2023-02-23 14:31:55.303553 I | op-k8sutil: "rook-ceph-osd-2" still found. waiting...
2023-02-23 14:31:57.315200 I | op-k8sutil: confirmed rook-ceph-osd-2 does not exist
2023-02-23 14:31:57.315231 I | cephosd: did not find a pvc name to remove for osd "rook-ceph-osd-2"
2023-02-23 14:31:57.315237 I | cephosd: purging osd.2
2023-02-23 14:31:58.845262 I | cephosd: attempting to remove host '\x02' from crush map if not in use
2023-02-23 14:32:03.047937 I | cephosd: no ceph crash to silence
2023-02-23 14:32:03.047963 I | cephosd: completed removal of OSD 2
</code></pre>
<p>Here is the status of the cluster before and after deletion.</p>
<pre><code>[root@rook-ceph-tools-6cd9f76d46-bl4tl /]# ceph status
cluster:
id: 75b45cd3-74ee-4de1-8e46-0f51bfd8a152
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 43h)
mgr: a(active, since 42h), standbys: b
mds: 1/1 daemons up, 1 hot standby
osd: 4 osds: 4 up (since 43h), 4 in (since 43h)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 13 pools, 201 pgs
objects: 1.13k objects, 1.5 GiB
usage: 2.0 GiB used, 38 GiB / 40 GiB avail
pgs: 201 active+clean
io:
client: 1.3 KiB/s rd, 7.5 KiB/s wr, 2 op/s rd, 0 op/s wr
[root@rook-ceph-tools-6cd9f76d46-bl4tl /]# ceph status
cluster:
id: 75b45cd3-74ee-4de1-8e46-0f51bfd8a152
health: HEALTH_WARN
Degraded data redundancy: 355/2667 objects degraded (13.311%), 42 pgs degraded, 144 pgs undersized
services:
mon: 3 daemons, quorum a,b,c (age 43h)
mgr: a(active, since 42h), standbys: b
mds: 1/1 daemons up, 1 hot standby
osd: 3 osds: 3 up (since 28m), 3 in (since 17m); 25 remapped pgs
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 13 pools, 201 pgs
objects: 1.13k objects, 1.5 GiB
usage: 1.7 GiB used, 28 GiB / 30 GiB avail
pgs: 355/2667 objects degraded (13.311%)
56/2667 objects misplaced (2.100%)
102 active+undersized
42 active+undersized+degraded
33 active+clean
24 active+clean+remapped
io:
client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
</code></pre>
<p>If I did it wrong, how should I do it correctly in the future?</p>
<p>Thanks</p>
<p>Update:</p>
<pre><code>[root@rook-ceph-tools-6cd9f76d46-bl4tl /]# ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; Reduced data availability: 9 pgs inactive, 9 pgs down; Degraded data redundancy: 406/4078 objects degraded (9.956%), 50 pgs degraded, 150 pgs undersized; 1 daemons have recently crashed; 256 slow ops, oldest one blocked for 6555 sec, osd.1 has slow ops
[WRN] MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
mds.ceph-filesystem-a(mds.0): 1 slow metadata IOs are blocked > 30 secs, oldest blocked for 6490 secs
[WRN] PG_AVAILABILITY: Reduced data availability: 9 pgs inactive, 9 pgs down
pg 13.5 is down, acting [0,1,NONE]
pg 13.7 is down, acting [1,0,NONE]
pg 13.b is down, acting [1,0,NONE]
pg 13.e is down, acting [0,NONE,1]
pg 13.15 is down, acting [0,NONE,1]
pg 13.16 is down, acting [0,1,NONE]
pg 13.18 is down, acting [0,NONE,1]
pg 13.19 is down, acting [1,0,NONE]
pg 13.1e is down, acting [1,0,NONE]
[WRN] PG_DEGRADED: Degraded data redundancy: 406/4078 objects degraded (9.956%), 50 pgs degraded, 150 pgs undersized
pg 2.8 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 2.9 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 2.a is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 2.b is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 2.c is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 2.d is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 2.e is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 5.9 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 5.a is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 5.b is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 5.c is stuck undersized for 108m, current state active+undersized+degraded, last acting [1,0]
pg 5.d is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 5.e is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 5.f is stuck undersized for 108m, current state active+undersized+degraded, last acting [1,0]
pg 6.8 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 6.9 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 6.a is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 6.c is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 6.d is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 6.e is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 6.f is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 8.0 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 8.1 is stuck undersized for 108m, current state active+undersized+degraded, last acting [1,0]
pg 8.2 is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 8.3 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 8.4 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 8.6 is stuck undersized for 108m, current state active+undersized+degraded, last acting [1,0]
pg 8.7 is stuck undersized for 108m, current state active+undersized+degraded, last acting [1,0]
pg 9.0 is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 9.1 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 9.2 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 9.5 is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 9.6 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 9.7 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 11.0 is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 11.2 is stuck undersized for 108m, current state active+undersized, last acting [1,0]
pg 11.3 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 11.4 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 11.5 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 11.7 is stuck undersized for 108m, current state active+undersized, last acting [0,1]
pg 12.0 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 12.2 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 12.3 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 12.4 is stuck undersized for 108m, current state active+undersized+remapped, last acting [1,0]
pg 12.5 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 12.6 is stuck undersized for 108m, current state active+undersized+remapped, last acting [1,0]
pg 12.7 is stuck undersized for 108m, current state active+undersized+degraded, last acting [0,1]
pg 13.1 is stuck undersized for 108m, current state active+undersized, last acting [1,NONE,0]
pg 13.2 is stuck undersized for 108m, current state active+undersized, last acting [0,NONE,1]
pg 13.3 is stuck undersized for 108m, current state active+undersized, last acting [1,0,NONE]
pg 13.4 is stuck undersized for 108m, current state active+undersized+remapped, last acting [0,1,NONE]
[WRN] RECENT_CRASH: 1 daemons have recently crashed
osd.3 crashed on host rook-ceph-osd-3-6f65b8c5b6-hvql8 at 2023-02-23T16:54:29.395306Z
[WRN] SLOW_OPS: 256 slow ops, oldest one blocked for 6555 sec, osd.1 has slow ops
[root@rook-ceph-tools-6cd9f76d46-bl4tl /]# ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 18 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'ceph-blockpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 35 lfor 0/0/31 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 3 'ceph-objectstore.rgw.control' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 181 lfor 0/181/179 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 4 'ceph-objectstore.rgw.meta' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 54 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 5 'ceph-filesystem-metadata' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 137 lfor 0/0/83 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 6 'ceph-filesystem-data0' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 92 lfor 0/0/83 flags hashpspool stripe_width 0 application cephfs
pool 7 'ceph-objectstore.rgw.log' replicated size 3 min_size 2 crush_rule 6 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 273 lfor 0/273/271 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 8 'ceph-objectstore.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 7 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 98 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 9 'ceph-objectstore.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 8 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 113 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 10 'qa' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 310 lfor 0/0/137 flags hashpspool,selfmanaged_snaps max_bytes 42949672960 stripe_width 0 application rbd
pool 11 'ceph-objectstore.rgw.otp' replicated size 3 min_size 2 crush_rule 9 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 123 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 12 '.rgw.root' replicated size 3 min_size 2 crush_rule 10 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 308 lfor 0/308/306 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 13 'ceph-objectstore.rgw.buckets.data' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile size 3 min_size 2 crush_rule 11 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 200 lfor 0/0/194 flags hashpspool,ec_overwrites stripe_width 8192 application rook-ceph-rgw
[root@rook-ceph-tools-6cd9f76d46-f4vsj /]# ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 17 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'ceph-blockpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 39 lfor 0/0/35 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 3 'ceph-objectstore.rgw.control' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 194 lfor 0/194/192 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 4 'ceph-objectstore.rgw.meta' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 250 lfor 0/250/248 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 5 'ceph-filesystem-metadata' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 70 lfor 0/0/55 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 6 'ceph-filesystem-data0' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 115 lfor 0/0/103 flags hashpspool stripe_width 0 application cephfs
pool 7 'ceph-objectstore.rgw.log' replicated size 3 min_size 2 crush_rule 6 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 84 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 8 'ceph-objectstore.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 7 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 100 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 9 'ceph-objectstore.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 8 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 122 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 10 'ceph-objectstore.rgw.otp' replicated size 3 min_size 2 crush_rule 9 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 135 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 11 '.rgw.root' replicated size 3 min_size 2 crush_rule 10 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 144 flags hashpspool stripe_width 0 pg_num_min 8 application rook-ceph-rgw
pool 12 'ceph-objectstore.rgw.buckets.data' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile size 3 min_size 2 crush_rule 11 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 167 lfor 0/0/157 flags hashpspool,ec_overwrites stripe_width 8192 application rook-ceph-rgw
pool 13 'qa' replicated size 2 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 267 lfor 0/0/262 flags hashpspool,selfmanaged_snaps max_bytes 32212254720 stripe_width 0 application qa,rbd
[root@rook-ceph-tools-6cd9f76d46-f4vsj /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.02939 root default
-5 0.02939 region nbg1
-4 0.02939 zone nbg1-dc3
-11 0.01959 host k8s-qa-pool1-7b6956fb46-cvdqr
1 ssd 0.00980 osd.1 up 1.00000 1.00000
3 ssd 0.00980 osd.3 up 1.00000 1.00000
-3 0.00980 host k8s-qa-pool1-7b6956fb46-mbnld
0 ssd 0.00980 osd.0 up 1.00000 1.00000
[root@rook-ceph-tools-6cd9f76d46-f4vsj /]# ceph osd crush rule dump
[
{
"rule_id": 0,
"rule_name": "replicated_rule",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 1,
"rule_name": "ceph-blockpool",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 2,
"rule_name": "ceph-objectstore.rgw.control",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 3,
"rule_name": "ceph-objectstore.rgw.meta",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 4,
"rule_name": "ceph-filesystem-metadata",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 5,
"rule_name": "ceph-filesystem-data0",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 6,
"rule_name": "ceph-objectstore.rgw.log",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 7,
"rule_name": "ceph-objectstore.rgw.buckets.index",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 8,
"rule_name": "ceph-objectstore.rgw.buckets.non-ec",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 9,
"rule_name": "ceph-objectstore.rgw.otp",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 10,
"rule_name": ".rgw.root",
"type": 1,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 11,
"rule_name": "ceph-objectstore.rgw.buckets.data",
"type": 3,
"steps": [
{
"op": "set_chooseleaf_tries",
"num": 5
},
{
"op": "set_choose_tries",
"num": 100
},
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_indep",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}
]
</code></pre>
| JDev | <p>I'm not familiar with rook, but apparently the rulesets are created for you? Anyway, they all use "host" as failure-domain and have a size of 3, but with only two hosts your requirements cannot be fulfilled. I assume the 4th OSD you had was on a third host, and that's why your cluster is now degraded. You'll need to add at least one more host so your PGs can recover successfully. As for the erasure-coded pool, it also has "host" as failure-domain, and with size = 3 (I assume the EC profile is something like k=2, m=1?) you also require 3 hosts. To get the replicated pools recovered you could change their size to 2 (see the command sketch after the list below), but I don't recommend doing that permanently, only for recovery reasons. Since you can't change an EC profile, that pool will stay degraded until you add a third OSD node.
To answer your other questions:</p>
<ol>
<li>Failure domain: It really depends on your setup, it could be rack, chassis, data center and so on. But with such a tiny setup it makes sense to have "host" as the failure domain.</li>
<li>Ceph is a self-healing software, so in case an OSD fails Ceph is able to recover automatically, but only if you have enough spare hosts/OSDs. So with your tiny setup you don't have enough capacity to be resilient against at least one OSD failure. If you plan to use Ceph for production data you should familiarize yourself with the concepts and plan a proper setup.</li>
<li>The more OSDs per host the better; you'll have more recovery options. Warnings are fine: when Ceph notices a disk outage it warns you about it, but it is able to recover automatically if there are enough OSDs and hosts.
If you look at the output of <code>ceph osd tree</code> there are only 2 hosts with three OSDs in total, which is why it's not fine at the moment.</li>
</ol>
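<p>As a sketch of the temporary recovery step mentioned above (using one of the replicated pool names from the question; repeat for the other replicated pools, and revert once a third host/OSD is available):</p>
<pre><code># temporarily allow the replicated pool to be healthy with only two copies
ceph osd pool set ceph-blockpool size 2
# after adding a third host/OSD, restore the original size
ceph osd pool set ceph-blockpool size 3
</code></pre>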
| eblock |
<p>I try to do something fairly simple: To run a GPU machine in a k8s cluster using auto-provisioning. When deploying the Pod with a limits: nvidia.com/gpu specification the auto-provisioning is correctly creating a node-pool and scaling up an appropriate node. However, the Pod stays at Pending with the following message:</p>
<p><code>Warning FailedScheduling 59s (x5 over 2m46s) default-scheduler 0/10 nodes are available: 10 Insufficient nvidia.com/gpu.</code></p>
<p>It seems like taints and tolerations are added correctly by GKE. It just doesn't scale up.</p>
<p>I've followed the instructions here:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers</a></p>
<p>To reproduce:</p>
<ol>
<li>Create a new cluster in a zone with auto-provisioning that includes gpu (I have replaced my own project name with MYPROJECT). This command is what comes out of the console when these changes are done:</li>
</ol>
<pre><code>gcloud beta container --project "MYPROJECT" clusters create "cluster-2" --zone "europe-west4-a" --no-enable-basic-auth --cluster-version "1.18.12-gke.1210" --release-channel "regular" --machine-type "e2-medium" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "1" --enable-stackdriver-kubernetes --enable-ip-alias --network "projects/MYPROJECT/global/networks/default" --subnetwork "projects/MYPROJECT/regions/europe-west4/subnetworks/default" --default-max-pods-per-node "110" --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --enable-autoprovisioning --min-cpu 1 --max-cpu 20 --min-memory 1 --max-memory 50 --max-accelerator type="nvidia-tesla-p100",count=1 --enable-autoprovisioning-autorepair --enable-autoprovisioning-autoupgrade --autoprovisioning-max-surge-upgrade 1 --autoprovisioning-max-unavailable-upgrade 0 --enable-vertical-pod-autoscaling --enable-shielded-nodes --node-locations "europe-west4-a"
</code></pre>
<ol start="2">
<li><p>Install NVIDIA drivers by installing DaemonSet:
<code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml</code></p>
</li>
<li><p>Deploy pod that requests GPU:</p>
</li>
</ol>
<p>my-gpu-pod.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-gpu-pod
spec:
containers:
- name: my-gpu-container
image: nvidia/cuda:11.0-runtime-ubuntu18.04
command: ["/bin/bash", "-c", "--"]
args: ["while true; do sleep 600; done;"]
resources:
limits:
nvidia.com/gpu: 1
</code></pre>
<p><code>kubectl apply -f my-gpu-pod.yaml</code></p>
<p>Help would be really appreciated as I've spent quite some time on this now :)</p>
<p>Edit: Here is the running Pod and Node specifications (the node that was auto-scaled):</p>
<pre><code>Name: my-gpu-pod
Namespace: default
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
my-gpu-container:
Image: nvidia/cuda:11.0-runtime-ubuntu18.04
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
--
Args:
while true; do sleep 600; done;
Limits:
nvidia.com/gpu: 1
Requests:
nvidia.com/gpu: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9rvjz (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-9rvjz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9rvjz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
nvidia.com/gpu:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 11m cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added):
Warning FailedScheduling 5m54s (x6 over 11m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu.
Warning FailedScheduling 54s (x7 over 5m37s) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
</code></pre>
<pre><code>Name: gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=n1-standard-1
beta.kubernetes.io/os=linux
cloud.google.com/gke-accelerator=nvidia-tesla-p100
cloud.google.com/gke-boot-disk=pd-standard
cloud.google.com/gke-nodepool=nap-n1-standard-1-gpu1-18jc7z9w
cloud.google.com/gke-os-distribution=cos
cloud.google.com/machine-family=n1
failure-domain.beta.kubernetes.io/region=europe-west4
failure-domain.beta.kubernetes.io/zone=europe-west4-a
kubernetes.io/arch=amd64
kubernetes.io/hostname=gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2
kubernetes.io/os=linux
node.kubernetes.io/instance-type=n1-standard-1
topology.gke.io/zone=europe-west4-a
topology.kubernetes.io/region=europe-west4
topology.kubernetes.io/zone=europe-west4-a
Annotations: container.googleapis.com/instance_id: 7877226485154959129
csi.volume.kubernetes.io/nodeid:
{"pd.csi.storage.gke.io":"projects/exor-arctic/zones/europe-west4-a/instances/gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2"}
node.alpha.kubernetes.io/ttl: 0
node.gke.io/last-applied-node-labels:
cloud.google.com/gke-accelerator=nvidia-tesla-p100,cloud.google.com/gke-boot-disk=pd-standard,cloud.google.com/gke-nodepool=nap-n1-standar...
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 22 Mar 2021 11:32:17 +0100
Taints: nvidia.com/gpu=present:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2
AcquireTime: <unset>
RenewTime: Mon, 22 Mar 2021 11:38:58 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
KernelDeadlock False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 KernelHasNoDeadlock kernel has no deadlock
ReadonlyFilesystem False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 FilesystemIsNotReadOnly Filesystem is not read-only
CorruptDockerOverlay2 False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 NoCorruptDockerOverlay2 docker overlay2 is functioning properly
FrequentUnregisterNetDevice False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 NoFrequentUnregisterNetDevice node is functioning properly
FrequentKubeletRestart False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 NoFrequentKubeletRestart kubelet is functioning properly
FrequentDockerRestart False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 NoFrequentDockerRestart docker is functioning properly
FrequentContainerdRestart False Mon, 22 Mar 2021 11:37:25 +0100 Mon, 22 Mar 2021 11:32:23 +0100 NoFrequentContainerdRestart containerd is functioning properly
NetworkUnavailable False Mon, 22 Mar 2021 11:32:18 +0100 Mon, 22 Mar 2021 11:32:18 +0100 RouteCreated NodeController create implicit route
MemoryPressure False Mon, 22 Mar 2021 11:37:49 +0100 Mon, 22 Mar 2021 11:32:17 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 22 Mar 2021 11:37:49 +0100 Mon, 22 Mar 2021 11:32:17 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 22 Mar 2021 11:37:49 +0100 Mon, 22 Mar 2021 11:32:17 +0100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 22 Mar 2021 11:37:49 +0100 Mon, 22 Mar 2021 11:32:19 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.164.0.16
ExternalIP: 35.204.55.105
InternalDNS: gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2.c.exor-arctic.internal
Hostname: gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2.c.exor-arctic.internal
Capacity:
attachable-volumes-gce-pd: 127
cpu: 1
ephemeral-storage: 98868448Ki
hugepages-2Mi: 0
memory: 3776196Ki
pods: 110
Allocatable:
attachable-volumes-gce-pd: 127
cpu: 940m
ephemeral-storage: 47093746742
hugepages-2Mi: 0
memory: 2690756Ki
pods: 110
System Info:
Machine ID: 307671eefc01914a7bfacf17a48e087e
System UUID: 307671ee-fc01-914a-7bfa-cf17a48e087e
Boot ID: acd58f3b-1659-494c-b83d-427f834d23a6
Kernel Version: 5.4.49+
OS Image: Container-Optimized OS from Google
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.9
Kubelet Version: v1.18.12-gke.1210
Kube-Proxy Version: v1.18.12-gke.1210
PodCIDR: 10.100.1.0/24
PodCIDRs: 10.100.1.0/24
ProviderID: gce://exor-arctic/europe-west4-a/gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system fluentbit-gke-k22gv 100m (10%) 0 (0%) 200Mi (7%) 500Mi (19%) 6m46s
kube-system gke-metrics-agent-5fblx 3m (0%) 0 (0%) 50Mi (1%) 50Mi (1%) 6m47s
kube-system kube-proxy-gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2 100m (10%) 0 (0%) 0 (0%) 0 (0%) 6m44s
kube-system nvidia-driver-installer-vmw8r 150m (15%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system nvidia-gpu-device-plugin-8vqsl 50m (5%) 50m (5%) 10Mi (0%) 10Mi (0%) 6m45s
kube-system pdcsi-node-k9brg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 403m (42%) 50m (5%)
memory 260Mi (9%) 560Mi (21%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-gce-pd 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m47s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m47s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m46s (x4 over 6m47s) kubelet Node gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m46s (x4 over 6m47s) kubelet Node gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m46s (x4 over 6m47s) kubelet Node gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2 status is now: NodeHasSufficientPID
Normal NodeReady 6m45s kubelet Node gke-cluster-1-nap-n1-standard-1-gpu1--39fe3143-s8x2 status is now: NodeReady
Normal Starting 6m44s kube-proxy Starting kube-proxy.
Warning NodeSysctlChange 6m41s sysctl-monitor
Warning ContainerdStart 6m41s systemd-monitor Starting containerd container runtime...
Warning DockerStart 6m41s (x2 over 6m41s) systemd-monitor Starting Docker Application Container Engine...
Warning KubeletStart 6m41s systemd-monitor Started Kubernetes kubelet.
</code></pre>
| Nemis | <p>A common error related to GKE is project Quotas limiting resources; this could lead to nodes not auto-provisioning or scaling up because the requested resources cannot be assigned.</p>
<p>Maybe your project Quotas for GPU (or specifically for nvidia-tesla-p100) are set to 0 or to a number way below the requested one.</p>
<p>In this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#gpu_quota" rel="nofollow noreferrer">link</a> there's more information about how to check it and how to request more resources for your quota.</p>
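<p>As a quick sketch of how this could be checked from the CLI (assuming the region used by your cluster and that the relevant quota metric is named NVIDIA_P100_GPUS):</p>
<pre><code># list the regional quota entry for P100 GPUs, with its limit and current usage
gcloud compute regions describe europe-west4 | grep -B 1 -A 1 NVIDIA_P100_GPUS
</code></pre>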
<p>Also, I see that you're making use of shared-core E2 instances, which are not compatible with accelerators. It shouldn't be an issue, as GKE should automatically change the machine type to N1 if it detects that the workload contains a GPU, as seen in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#supported_machine_types" rel="nofollow noreferrer">this link</a>, but you could still try running the cluster with another machine type such as <a href="https://cloud.google.com/compute/docs/machine-types#n1_machine_types" rel="nofollow noreferrer">N1</a>.</p>
| verdier |
<p>I created a k3s cluster and disabled the service loadbalancer & traefik. I installed metallb via a manifest file. I also created a ConfigMap for Metallb (shown below, named "config") with an address pool, so I don't know why the metallb-controller is saying "no available IPs".</p>
<pre><code>ubuntu@mark:~$ k describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.29.17
IPs: 10.43.29.17
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30584/TCP
Endpoints: 10.42.4.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning AllocationFailed 34s metallb-controller Failed to allocate IP for "default/nginx": no available IPs
</code></pre>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.136.206-192.168.136.209
</code></pre>
| Mark Matlock | <p>You are using the ConfigMap, but the <a href="https://metallb.universe.tf/#backward-compatibility" rel="nofollow noreferrer">documentation</a> says that in newer versions you should not use it:</p>
<blockquote>
<p>Previous versions of MetalLB are configurable via a configmap.
However, starting from the version v0.13.0, it will be possible to
configure it only via CRs. A tool to convert old configmaps to CRs is
provided as a container image under quay.io/metallb/configmaptocrs.</p>
</blockquote>
<p>I was doing the same thing: I had an old repo where I used the ConfigMap, so I decided to reuse it, but then I read the documentation and followed the instructions from the MetalLB <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">website</a>.</p>
<p>From <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">installation</a> I executed the following commands:</p>
<pre><code>kubectl edit configmap -n kube-system kube-proxy
# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system
# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
</code></pre>
<p>, and then from the <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">configuration</a> I chose the <a href="https://metallb.universe.tf/configuration/#layer-2-configuration" rel="nofollow noreferrer">Layer 2 Configuration</a> setup. Add the code below to a file.yaml:</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 192.168.1.240-192.168.1.250
</code></pre>
<p>, and then apply it using kubectl</p>
<pre><code>kubectl apply -f file.yaml
</code></pre>
<p>After that everything works well. I hope this answer is helpful for those who used the old approach; correct me if I have misunderstood.</p>
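<p>One addition worth mentioning: the same Layer 2 configuration page also describes an L2Advertisement resource that tells MetalLB to announce the pool. A minimal sketch referencing the pool defined above would be:</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
</code></pre>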
| fflores |
<p>I am trying to configure a custom cert for a Strimzi Kafka listener, following the documentation: <a href="https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str</a>
I want to expose those listeners outside of the Azure Kubernetes Service, within the private virtual network.</p>
<p>I have provided a custom cert with private key generated by an internal CA and pointed towards that secret in the Kafka configuration:</p>
<p><code>kubectl create secret generic kafka-tls --from-literal=listener.cer=$cert --from-literal=listener.key=$skey -n kafka</code></p>
<pre><code>listeners:
- name: external
port: 9094
type: loadbalancer
tls: true
authentication:
type: tls
#Listener TLS config
configuration:
brokerCertChainAndKey:
secretName: kafka-tls
certificate: listener.cer
key: listener.key
bootstrap:
loadBalancerIP: 10.67.249.253
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
brokers:
- broker: 0
loadBalancerIP: 10.67.249.251
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- broker: 1
loadBalancerIP: 10.67.249.252
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- broker: 2
loadBalancerIP: 10.67.249.250
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
authorization:
type: simple
</code></pre>
<p>Certificate has following records:</p>
<p>SAN:</p>
<pre><code>*.kafka-datalake-prod-kafka-brokers
*.kafka-datalake-prod-kafka-brokers.kafka.svc
kafka-datalake-prod-kafka-bootstrap
kafka-datalake-prod-kafka-bootstrap.kafka.svc
kafka-datalake-prod-kafka-external-bootstrap
kafka-datalake-prod-kafka-external-bootstrap.kafka.svc
kafka-datalake-prod-azure.custom.domain
</code></pre>
<p>CN=kafka-datalake-produkty-prod-azure.custom.domain</p>
<p>I have also created an A record in the custom DNS for the given address: kafka-datalake-produkty-prod-azure.custom.domain 10.67.249.253</p>
<p>Then, I created a KafkaUser object:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
name: customuser
namespace: kafka
labels:
strimzi.io/cluster: kafka-datalake-prod
spec:
authentication:
type: tls
authorization:
type: simple
acls:
- resource:
type: topic
name: notify.somecustomapp.prod.topic_name
patternType: literal
operations:
- Create
- Describe
- Write
# host: "*"
</code></pre>
<p>When I then retrieve the secrets from the Kafka cluster on AKS:</p>
<pre><code>kubectl get secret kafka-datalake-prod-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > broker.crt
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.key}' | base64 -d > customuser.key
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > customuser.crt
</code></pre>
<p>Communication fails. When I try to connect and send some messages with a producer using those 3 files to authenticate/authorize, I get the following issue:</p>
<pre><code>INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <connecting> [IPv4 ('10.67.249.253', 9094)]>: connecting to 10.67.249.253:9094 [('10.67.249.253', 9094) IPv4]
INFO:kafka.conn:Probing node bootstrap-0 broker version
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL CA from certs/prod/broker.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Cert from certs/prod/customuser.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Key from certs/prod/customuser.key
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
</code></pre>
<p>What am I doing wrong?</p>
<p>The communication worked perfectly fine when I was using the same method of connecting, yet the cluster itself and listeners were using the default certs generated by Strimzi cluster.</p>
<p>All the best,
Krzysztof</p>
| Krzysztof | <p>@Turing85 @Jakub</p>
<p>Many thanks for your comments - especially those critical ones</p>
<p>And thanks, Jakub, for pointing me towards using the CA of custom certificate. What needed to be done in order to fix this was:</p>
<ol>
<li>replace the value obtained from the kafka-datalake-prod-cluster-ca-cert secret with the full chain of the root CA, the intermediate signing cert and the certificate itself (see the sketch below).</li>
<li>Add the LoadBalancer IPs of the brokers to the certificate - this is stated in the documentation, yet the way it is formulated misguided me into thinking that adding hostnames/service names to the SAN is enough (<a href="https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples</a>, and later <a href="https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples</a>).</li>
</ol>
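<p>For illustration, a rough sketch of point 1 (file names are placeholders and the exact order of the chain pieces is illustrative):</p>
<pre><code># build the full chain the client should trust
cat listener.cer intermediate-ca.cer root-ca.cer > broker.crt
# then point the producer's CA file (ssl_cafile) at broker.crt instead of the cluster CA
</code></pre>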
<p>After those changes, everything started to work.</p>
<p>Thank you for help.</p>
| Krzysztof |
<p>I have a set of kubernetes config files that work in one environment. I'm looking to deploy into another environment where I need to add an imagePullSecrets entry to all of the <code>Deployment</code> configs.</p>
<p>I can do:</p>
<p>regcred-1.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-1
spec:
template:
spec:
imagePullSecrets:
- name: regcred
</code></pre>
<p>kustomization.yaml:</p>
<pre><code>bases:
- ../base
patchesStrategicMerge:
- regcred-1.yaml
</code></pre>
<p>and that will patch only <code>deployment-1</code>.</p>
<p>Is there a way to apply the patch to all deployments?</p>
| Job Evers | <p>Using <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md" rel="noreferrer">Inline Patch</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- ../../base
patches:
- target:
kind: Deployment
patch: |-
- op: add
path: /spec/template/spec/imagePullSecrets
value: [{ name: image-pull-secret }]
</code></pre>
<p>Reference: <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md" rel="noreferrer">Patching multiple resources at once.</a></p>
| albertocavalcante |
<p>I am using below example to test postStart and preStop hooks.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: aaaa
labels:
wwww: qqqq
spec:
containers:
- name: adassa
image: nginx:1.10.1
lifecycle:
postStart:
exec:
command: ['bash', '-c', 'echo 11111111']
preStop:
exec:
command: ['bash', '-c', 'echo 2222222']
</code></pre>
<p>My pod was successfully created, but later when I tried to check the hooks' output using <code>kubectl logs aaaa -c adassa</code> and <code>kubectl logs aaaa</code>, there were no logs.</p>
<p>I tried but was not able to pinpoint the issue. I thought it might be a shell issue, so I changed <code>bash</code> to <code>sh</code>, but after this the Pod was not even created.</p>
<p>Does anyone have any idea what is wrong here?</p>
| hagrawal7777 | <p><strong>PostStart event:</strong></p>
<p>The echo output stays inside the container's scope, so it does not show up in the pod logs.
To execute a command I use:</p>
<blockquote>
<p>command: ["/bin/sh", "-c", "echo 11111111> /tmp/message"]</p>
</blockquote>
<p>There are several logging approaches; however, writing to a file as above is a simple example for testing purposes.</p>
<p>Once you create the pod, exec into it interactively to get a shell, and finally check whether the lifecycle hook wrote to the sample file this way:</p>
<blockquote>
<p>kubectl exec -it aaaa -- /bin/bash</p>
<p>cat /tmp/message</p>
</blockquote>
<p>You should get:</p>
<blockquote>
<p>11111111</p>
</blockquote>
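<p>For reference, a minimal sketch of the pod from the question with the postStart hook writing to a file (the path /tmp/message is just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: aaaa
spec:
  containers:
  - name: adassa
    image: nginx:1.10.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo 11111111 > /tmp/message"]
</code></pre>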
<p>Refer to <a href="https://logz.io/blog/a-practical-guide-to-kubernetes-logging/" rel="nofollow noreferrer">this</a> guide for different logging approaches, and consider a 3rd-party logging tool.</p>
| jmvcollaborator |
<p>I am using terraform 0.13.0 and trying to the kubernetes-alpha provider (<a href="https://github.com/hashicorp/terraform-provider-kubernetes-alpha" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes-alpha</a>). I download the plugin for Mac and copied the plugin to ~/.terraform.d/plugins dir</p>
<p>when I run terraform init it doesn't find the local plugin, instead it is trying to find from hashicorp site</p>
<pre><code>terraform init
2020/08/21 16:42:58 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:42:58 [INFO] Terraform version: 0.13.0
2020/08/21 16:42:58 [INFO] Go runtime version: go1.14.2
2020/08/21 16:42:58 [INFO] CLI args: []string{"<$HOME>/bin/terraform", "init"}
2020/08/21 16:42:58 [DEBUG] Attempting to open CLI config file: <$HOME>/.terraformrc
2020/08/21 16:42:58 Loading CLI configuration from <$HOME>/.terraformrc
2020/08/21 16:42:58 [DEBUG] checking for credentials in "<$HOME>/.terraform.d/plugins"
2020/08/21 16:42:58 [DEBUG] checking for credentials in "<$HOME>/.terraform.d/plugins/darwin_amd64"
2020/08/21 16:42:58 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2020/08/21 16:42:58 [DEBUG] will search for provider plugins in <$HOME>/.terraform.d/plugins
2020/08/21 16:42:58 [DEBUG] ignoring non-existing provider search directory <$HOME>/Library/Application Support/io.terraform/plugins
2020/08/21 16:42:58 [DEBUG] ignoring non-existing provider search directory /Library/Application Support/io.terraform/plugins
2020/08/21 16:42:58 [INFO] CLI command args: []string{"init"}
2020/08/21 16:42:58 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
Initializing modules...
2020/08/21 16:42:58 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:42:58 [DEBUG] Module installer: begin app
- app in app
2020/08/21 16:42:58 [DEBUG] Module installer: app installed at app
2020/08/21 16:42:58 [DEBUG] Module installer: begin gke
- gke in gke
2020/08/21 16:42:58 [DEBUG] Module installer: gke installed at gke
2020/08/21 16:42:58 [DEBUG] Module installer: begin iam
2020/08/21 16:42:58 [DEBUG] Module installer: iam installed at iam
2020/08/21 16:42:58 [DEBUG] Module installer: begin vpc
- iam in iam
2020/08/21 16:42:58 [DEBUG] Module installer: vpc installed at vpc
Initializing the backend...
2020/08/21 16:42:58 [DEBUG] New state was assigned lineage "7541d58f-fc27-1b61-d496-834e76d1fcdb"
2020/08/21 16:42:58 [DEBUG] checking for provisioner in "."
Initializing provider plugins...
- Finding latest version of hashicorp/kubernetes-alpha...
2020/08/21 16:42:58 [DEBUG] checking for provisioner in "<$HOME>/bin"
2020/08/21 16:42:58 [DEBUG] checking for provisioner in "<$HOME>/.terraform.d/plugins"
2020/08/21 16:42:58 [DEBUG] checking for provisioner in "<$HOME>/.terraform.d/plugins/darwin_amd64"
2020/08/21 16:42:58 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
2020/08/21 16:42:58 [WARN] Failed to scan provider cache directory .terraform/plugins: cannot search .terraform/plugins: lstat .terraform/plugins: no such file or directory
2020/08/21 16:42:58 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json
2020/08/21 16:42:58 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:42:58 [DEBUG] GET https://registry.terraform.io/v1/providers/hashicorp/kubernetes-alpha/versions
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v3.35.0...
- Installed hashicorp/google v3.35.0 (unauthenticated)
2020/08/21 16:42:59 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:42:59 [DEBUG] GET https://registry.terraform.io/v1/providers/-/kubernetes-alpha/versions
Error: Failed to install provider
Error while installing hashicorp/kubernetes-alpha: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/kubernetes-alpha
</code></pre>
<p>Next I tried to force the plugin by adding a requires</p>
<pre><code>terraform {
required_providers {
kubernetes-alpha = {
source = "localdomain/provider/kubernetes-alpha"
version = "0.1.0"
}
}
}
</code></pre>
<p>and copied the plugin to
$HOME/.terraform.d/plugins/localdomain/provider/kubernetes-alpha/0.1.0/darwin_amd64</p>
<pre><code>Initializing provider plugins...
2020/08/21 16:54:41 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json
- Finding localdomain/provider/kubernetes-alpha versions matching "0.1.0"...
- Finding latest version of hashicorp/google...
- Finding latest version of hashicorp/kubernetes-alpha...
2020/08/21 16:54:42 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:54:42 [DEBUG] GET https://registry.terraform.io/v1/providers/hashicorp/kubernetes-alpha/versions
- Installing hashicorp/google v3.35.0...
- Installed hashicorp/google v3.35.0 (unauthenticated)
- Installing localdomain/provider/kubernetes-alpha v0.1.0...
- Installed localdomain/provider/kubernetes-alpha v0.1.0 (unauthenticated)
2020/08/21 16:54:42 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2020/08/21 16:54:42 [DEBUG] GET https://registry.terraform.io/v1/providers/-/kubernetes-alpha/versions
Error: Failed to install provider
Error while installing hashicorp/kubernetes-alpha: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/kubernetes-alpha
</code></pre>
<p>I can't figure out why it is trying to find the plugin on registry rather than using local.</p>
<p>I am new to terraform and wondering if I am missing something basic.</p>
| RandomQuests | <p>You'll have to run <code>terraform state replace-provider 'registry.terraform.io/-/kubernetes-alpha' 'localdomain/provider/kubernetes-alpha'</code> in order to fix any legacy / non-namespaced providers. See the 0.13 upgrade guide <a href="https://www.terraform.io/upgrade-guides/0-13.html#in-house-providers" rel="noreferrer">here</a> for more details.</p>
| Nick Keenan |
<p>I am running into an issue where the interpreters are missing after enabling the Kubernetes ingress to reach my Zeppelin container.</p>
<p>I'm running Zeppelin 0.8.2 on Kubernetes, and everything works correctly when it is not behind an ingress.
I am able to choose a default interpreter in the notebook creation popup, and run paragraphs without issue.</p>
<p>When an ingress is setup in Kubernetes, previously created notebooks and interpreter selection during new notebook creation are missing.
I am still able to reach the interpreter settings page, and it appears they are all still there.</p>
<p>Required change made in zeppelin-site.xml to run without the ingress.</p>
<pre><code><property>
<name>zeppelin.server.addr</name>
<value>0.0.0.0</value>
<description>Server binding address</description>
</property>
</code></pre>
<p>By default, was 127.0.0.0</p>
<p>Changes made in shiro.ini based on this forum discussion:
<a href="https://community.cloudera.com/t5/Support-Questions/How-do-I-recover-missing-Zeppelin-Interpreter-tab-and-its/m-p/160680" rel="nofollow noreferrer">https://community.cloudera.com/t5/Support-Questions/How-do-I-recover-missing-Zeppelin-Interpreter-tab-and-its/m-p/160680</a></p>
<pre><code>sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
</code></pre>
<p>Ingress used below with variable names changed</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
name: zeppelin-name
namespace: namespace_here
spec:
rules:
- host: zeppelin.zeppelin-services.my-host.org
http:
paths:
- backend:
serviceName: zeppelin-name
servicePort: 8080
path: /
tls:
- hosts:
- zeppelin.zeppelin-services.my-host.org
secretName: secret-name
status:
loadBalancer: {}
</code></pre>
<p>What am I missing for Zeppelin to work correctly behind the Kubernetes Ingress, to allow me to see previously created notebooks, and interpreters present in the dropdown menu when creating new notebooks?</p>
| saturner | <p>A coworker found the answer.
It turns out the WebSocket traffic was not reaching the Zeppelin page.
On inspection, there was a 405 error:</p>
<p>'wss://zeppelin.zeppelin-services.my-host.org/ws' failed: Error during WebSocket handshake: Unexpected response code: 405</p>
<p>He checked the logs on the nginx-ingress pod and found similar failure messages.</p>
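<p>For reference, checking the controller logs might look like this (namespace and pod name are placeholders):</p>
<pre><code>kubectl logs -n nginx-ingress <nginx-ingress-controller-pod> | grep -i websocket
</code></pre>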
<p>The issue was in the ingress yaml used; it required an additional annotation:</p>
<pre><code>metadata:
annotations:
nginx.org/websocket-services: zeppelin-name
</code></pre>
| saturner |
<p>I am trying to install Openshift4 locally on my computer and I am missing a file with the .crcbundle-extension. Could someone help me out on where to find this file or how to create it?</p>
<p>I am talking about the following project on github:
<a href="https://github.com/code-ready/crc#documentation" rel="nofollow noreferrer">https://github.com/code-ready/crc#documentation</a></p>
<p>Cheers</p>
| Steven | <p>You can download the latest crc binaries <a href="https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest" rel="nofollow noreferrer">here</a></p>
<p>You also need a Red Hat developer account to run <code>crc</code> as it requires you to log in to <a href="https://cloud.redhat.com/openshift/install/crc/installer-provisioned" rel="nofollow noreferrer">https://cloud.redhat.com/openshift/install/crc/installer-provisioned</a> to get a "pull secret" to deploy the cluster.</p>
| tost |
<p>I have created 2 pods within the same cluster. One service is initialized as</p>
<pre><code>kubectl create deployment my-web --image=nginx --port=80
kubectl expose deployment my-web --target-port=80 --type=NodePort
</code></pre>
<p>to my understanding, this creates a deployment with one pod <code>my-web-<string></code> and exposes a port. With <code>kubectl describe services my-web</code>, I find that the following information:</p>
<pre><code>Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32004/TCP
Endpoints: 10.244.0.10:80
</code></pre>
<p>testing pod:</p>
<pre><code>kubectl run test-pod --image=nginx --restart=Never
</code></pre>
<p>this creates another pod and I try to curl the nginx of my-web pod with the command <code>curl 10.244.0.10:32004</code>. That request times out. But somehow it works when I use <code>curl 10.244.0:80</code>. Why is that? I thought the service was exposed on port 32004 outside the my-web pod?</p>
<p>Please also let me know what IP and port to curl from my host machine to access the my-web pod. I am running the cluster from minikube on MacOS.</p>
<p>Thanks for the help!</p>
| loud_mouth | <p>NodePort exposes the service on each node's IP at a static port, so it can be reached from outside the cluster.
You may need a firewall rule that allows TCP traffic on your node port, in this case port 32004.
On Ubuntu you can do something like:</p>
<blockquote>
<p>sudo ufw allow 32004/tcp</p>
</blockquote>
<p>And check port status with:</p>
<blockquote>
<p>sudo ufw status</p>
</blockquote>
<p>Once you are sure the port is opened you can curl the ip:port</p>
<pre><code>curl http://10.244.0.10:32004
</code></pre>
<p>For further info check the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">Kubernetes official documentation.</a></p>
| jmvcollaborator |
<p>I have deployed pyspark 3.0.1 in Kubernetes.</p>
<p>I am using koalas in a jupyter notebook in order to perform some transformations and I need to write and read from Azure Database for PostgreSQL.</p>
<p>I can read it from pandas using the following code:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
import psycopg2
import pandas
uri = 'postgres+psycopg2://<postgreuser>:<postgrepassword>@<server>:5432/<database>'
engine_azure = create_engine(uri, echo=False)
df = pdf.read_sql_query(f"select * from public.<table>", con=engine_azure)
</code></pre>
<p>I want to read this table from Pyspark using this code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import databricks.koalas as ks
from s3fs import S3FileSystem
import datetime
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=org.apache.hadoop:hadoop-aws:2.7.3,org.postgresql:postgresql:42.1.1 pyspark-shell pyspark-shell"
os.environ['PYSPARK_SUBMIT_ARGS2'] = "--packages org.postgresql:postgresql:42.1.1 pyspark-shell"
sparkClassPath = os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.postgresql:postgresql:42.1.1 pyspark-shell'
# Create Spark config for our Kubernetes based cluster manager
sparkConf = SparkConf()
sparkConf.setMaster("k8s://https://kubernetes.default.svc.cluster.local:443")
sparkConf.setAppName("spark")
sparkConf.set("spark.kubernetes.container.image", "<image>")
sparkConf.set("spark.kubernetes.namespace", "spark")
sparkConf.set("spark.executor.instances", "3")
sparkConf.set("spark.executor.cores", "2")
sparkConf.set("spark.driver.memory", "2000m")
sparkConf.set("spark.executor.memory", "2000m")
sparkConf.set("spark.kubernetes.pyspark.pythonVersion", "3")
sparkConf.set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
sparkConf.set("spark.kubernetes.authenticate.serviceAccountName", "spark")
sparkConf.set("spark.driver.port", "29414")
sparkConf.set("spark.driver.host", "<deployment>.svc.cluster.local")
sparkConf.set("spark.driver.extraClassPath", sparkClassPath)
# Initialize our Spark cluster, this will actually
# generate the worker nodes.
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
df3 = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql://<host>:5432/<database>") \
.option("driver", "org.postgresql.Driver") \
.option("dbtable", "select * from public.<table>") \
.option("user", "<user>") \
.option("password", "<password>") \
.load()
</code></pre>
<p>But I receive this error:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-5-a529178ed9a0> in <module>
1 url = 'jdbc:postgresql://psql-mcf-prod1.postgres.database.azure.com:5342/cpke-prod'
2 properties = {'user': '[email protected]', 'password': '4vb44B^V8w2D*q!eQZgl',"driver": "org.postgresql.Driver"}
----> 3 df3 = spark.read.jdbc(url=url, table='select * from public.userinput_write_offs where reversed_date is NULL', properties=properties)
/usr/local/spark/python/pyspark/sql/readwriter.py in jdbc(self, url, table, column, lowerBound, upperBound, numPartitions, predicates, properties)
629 jpredicates = utils.toJArray(gateway, gateway.jvm.java.lang.String, predicates)
630 return self._df(self._jreader.jdbc(url, table, jpredicates, jprop))
--> 631 return self._df(self._jreader.jdbc(url, table, jprop))
632
633
/usr/local/lib/python3.7/dist-packages/py4j/java_gateway.py in __call__(self, *args)
1303 answer = self.gateway_client.send_command(command)
1304 return_value = get_return_value(
-> 1305 answer, self.gateway_client, self.target_id, self.name)
1306
1307 for temp_arg in temp_args:
/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
126 def deco(*a, **kw):
127 try:
--> 128 return f(*a, **kw)
129 except py4j.protocol.Py4JJavaError as e:
130 converted = convert_exception(e.java_exception)
/usr/local/lib/python3.7/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o89.jdbc.
: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:275)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
at org.postgresql.Driver.makeConnection(Driver.java:450)
at org.postgresql.Driver.connect(Driver.java:252)
at org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper.connect(DriverWrapper.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$createConnectionFactory$1(JdbcUtils.scala:64)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:312)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at org.postgresql.core.PGStream.<init>(PGStream.java:68)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144)
... 27 more
</code></pre>
| J.C Guzman | <p>Your port number is incorrect - it should be 5432, not 5342. Therefore your connection timed out. If you change the line</p>
<pre><code>.option("url", "jdbc:postgresql://<host>:5342/<database>")
</code></pre>
<p>to</p>
<pre><code>.option("url", "jdbc:postgresql://<host>:5432/<database>")
</code></pre>
<p>maybe it will solve your problem.</p>
| mck |
<p>I'm learning docker/k8s; I want to pass/store a .pem file to my bootstrap container which runs on a k8s cluster. This container uses the .pem to create a k8s secret (kubectl create secrets ...) which will be used by the other apps running on k8s by mounting the kubernetes secrets.</p>
<p>I can think of the following options:</p>
<ul>
<li>I can pass the .pem details as ENV to the container.</li>
<li>I can build the image with the .pem file.</li>
<li>I can store the .pem file in S3 and download it from within the container.</li>
</ul>
<p>Wanted to understand which of these is the best practice/secure method to accomplish this task.</p>
| nevosial | <p>I have seen it done in multiple ways, but I would suggest using a ConfigMap so that the .pem file lives inside your k8s cluster and you don't have to deal with encryption within S3 and such. This also allows your devops team to handle the maintenance rather than the app developers, as would be the case if you baked it into the Docker image.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Config Map Kubernetes Docs</a></p>
<ol>
<li><p>Create the config map</p>
<pre><code>kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
</code></pre>
</li>
<li><p>Add new config to your pod yaml file</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: <some metadata name>
spec:
containers:
- name: <container name>
image: <container image>
volumeMounts:
- name: ca-pemstore
mountPath: /etc/ssl/certs/my-cert.pem
subPath: my-cert.pem
readOnly: false
ports:
- containerPort: 80
command: ...
args: ...
volumes:
- name: ca-pemstore
configMap:
name: ca-pemstore
</code></pre>
</li>
</ol>
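<p>As a quick sanity check (pod name is a placeholder), you can verify the mounted file inside the running container:</p>
<pre><code>kubectl exec <pod-name> -- cat /etc/ssl/certs/my-cert.pem
</code></pre>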
| Edward Romero |
<p>I have found a strange behavior in Keycloak when deployed in Kubernetes, that I can't wrap my head around.</p>
<p>Use-case:</p>
<ul>
<li>login as admin:admin (created by default)</li>
<li>click on Manage account</li>
</ul>
<p>(<a href="https://i.stack.imgur.com/n3wuZ.png" rel="nofollow noreferrer">manage account dialog screenshot</a>)</p>
<p>I have compared how the (same) image (<strong>quay.io/keycloak/keycloak:17.0.0</strong>) behaves if it runs on Docker or in Kubernetes (K3S).</p>
<p>If I run it from Docker, the account console loads. In other terms, I get a success (<strong>204</strong>) for the request</p>
<p><code>GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console</code></p>
<p>From the same image deployed in Kubernetes, the same request fails with error <strong>403</strong>. However, on this same application, I get a success (<strong>204</strong>) for the request</p>
<p><code>GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console</code></p>
<p>Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway nor with anything related to routing.</p>
<p>I've then thought about a Keycloak access-control configuration issue, but in both cases I use the default image without any change. I cross-checked to be sure, it appears that the admin user and the account-console client are configured exactly in the same way in both the docker and k8s applications.</p>
<p>I have no more idea about what could be the problem, do you have any suggestion?</p>
| vlt | <p>Try to set <code>ssl_required = NONE</code> in the <strong>realm</strong> table of the Keycloak database for your realm (master).</p>
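<p>A minimal sketch of that update, assuming the default schema where the <code>realm</code> table has <code>name</code> and <code>ssl_required</code> columns (restart Keycloak afterwards so the cached realm settings are refreshed):</p>
<pre><code>UPDATE realm SET ssl_required = 'NONE' WHERE name = 'master';
</code></pre>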
| fire_Rising |
<p>I am using Helm v3.3.0, with a Kubernetes 1.16.</p>
<p>The cluster has the <a href="https://svc-cat.io/docs/" rel="nofollow noreferrer">Kubernetes Service Catalog</a> installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as <code>ServiceInstance</code>s and <code>ServiceBinding</code>s.</p>
<p><code>ServiceBinding</code>s reflect as K8S <code>Secret</code>s and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S <code>Deployment</code>.</p>
<p>Now I am using Helm to deploy my Kubernetes resources, and I read <a href="https://helm.sh/docs/topics/charts/#operational-aspects-of-using-dependencies" rel="nofollow noreferrer">here</a> that...</p>
<blockquote>
<p>The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in <a href="https://github.com/helm/helm/blob/release-3.3/pkg/releaseutil/kind_sorter.go" rel="nofollow noreferrer">kind_sorter.go</a></p>
</blockquote>
<p>In that file, the order does neither mention <code>ServiceInstance</code> nor <code>ServiceBinding</code> as resources, and that would mean that Helm installs these resource types <strong>after</strong> it has installed any of its InstallOrder list - in particular <code>Deployment</code>s. That seems to match the output of <code>helm install --dry-run --debug</code> run on my chart, where the order indicates that the K8S Service Catalog resources are applied last.</p>
<p><strong>Question:</strong> What I cannot understand is, why my <code>Deployment</code> does <strong>not</strong> fail to install with Helm.
After all my <code>Deployment</code> resource seems to be deployed before the <code>ServiceBinding</code> is. And it is the <code>Secret</code> generated out of the <code>ServiceBinding</code> that my <code>Deployment</code> references. I would expect it to fail, since the <code>Secret</code> is not there yet, when the <code>Deployment</code> is getting installed. But that is not the case.</p>
<p>Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?</p>
<p>Thanks!</p>
| FloW | <p>As said in the comment I posted:</p>
<blockquote>
<p>In fact your <code>Deployment</code> is failing at the start with <code>Status: CreateContainerConfigError</code>. Your <code>Deployment</code> is created before <code>Secret</code> from the <code>ServiceBinding</code>. It's only working as it was recreated when the <code>Secret</code> from <code>ServiceBinding</code> was available.</p>
</blockquote>
<p>I wanted to give more insight with example of why the <code>Deployment</code> didn't fail.</p>
<p>What is happening (simplified in order):</p>
<ul>
<li><code>Deployment</code> -> created and spawned a <code>Pod</code></li>
<li><code>Pod</code> -> failing pod with status: <code>CreateContainerConfigError</code> by lack of <code>Secret</code></li>
<li><code>ServiceBinding</code> -> created <code>Secret</code> in a background</li>
<li><code>Pod</code> gets the required <code>Secret</code> and starts</li>
</ul>
<p>The previously mentioned <code>InstallOrder</code> will leave <code>ServiceInstance</code> and <code>ServiceBinding</code> to be installed last, per the comment on <a href="https://github.com/helm/helm/blob/release-3.3/pkg/releaseutil/kind_sorter.go#L147" rel="nofollow noreferrer">line 147</a>.</p>
<hr />
<h2>Example</h2>
<p>Assuming that:</p>
<ul>
<li>There is a working Kubernetes cluster</li>
<li>Helm3 installed and ready to use</li>
</ul>
<p>Following guides:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/service-catalog/install-service-catalog-using-helm/" rel="nofollow noreferrer">Kubernetes.io: Instal Service Catalog using Helm</a></em></li>
<li><em><a href="https://www.magalix.com/blog/kubernetes-service-catalog-101" rel="nofollow noreferrer">Magalix.com: Blog: Kubernetes Service Catalog</a></em></li>
</ul>
<p>There is a Helm chart with following files in <code>templates/</code> directory:</p>
<ul>
<li><code>ServiceInstance</code></li>
<li><code>ServiceBinding</code></li>
<li><code>Deployment</code></li>
</ul>
<p>Files:</p>
<p><code>ServiceInstance.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
name: example-instance
spec:
clusterServiceClassExternalName: redis
clusterServicePlanExternalName: 5-0-4
</code></pre>
<p><code>ServiceBinding.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
name: example-binding
spec:
instanceRef:
name: example-instance
</code></pre>
<p><code>Deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ubuntu
spec:
selector:
matchLabels:
app: ubuntu
replicas: 1
template:
metadata:
labels:
app: ubuntu
spec:
containers:
- name: ubuntu
image: ubuntu
command:
- sleep
- "infinity"
# part below responsible for getting secret as env variable
env:
- name: DATA
valueFrom:
secretKeyRef:
name: example-binding
key: host
</code></pre>
<p>Applying above resources to check what is happening can be done in 2 ways:</p>
<ul>
<li>First method is to use <code>timestamp</code> from <code>$ kubectl get RESOURCE -o yaml</code></li>
<li>Second method is to use <code>$ kubectl get RESOURCE --watch-only=true</code></li>
</ul>
<hr />
<h3>First method</h3>
<p>As said previously the <code>Pod</code> from the <code>Deployment</code> couldn't start as <code>Secret</code> was not available when the <code>Pod</code> tried to spawn. After the <code>Secret</code> was available to use, the <code>Pod</code> started.</p>
<p>The statuses this <code>Pod</code> had were the following:</p>
<ul>
<li><code>Pending</code></li>
<li><code>ContainerCreating</code></li>
<li><code>CreateContainerConfigError</code></li>
<li><code>Running</code></li>
</ul>
<p>This is a table with timestamps of <code>Pod</code> and <code>Secret</code>:</p>
<pre><code>| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
</code></pre>
<p>You can get these timestamps by invoking the commands below:</p>
<ul>
<li><code>$ kubectl get pod pod_name -n namespace -o yaml</code></li>
<li><code>$ kubectl get secret secret_name -n namespace -o yaml</code></li>
</ul>
<p>You can also get additional information with:</p>
<ul>
<li><code>$ kubectl get event -n namespace</code></li>
<li><code>$ kubectl describe pod pod_name -n namespace</code></li>
</ul>
<h3>Second method</h3>
<p>This method requires preparation before running the Helm chart. Open another terminal window (two in this particular case) and run:</p>
<ul>
<li><code>$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done</code></li>
<li><code>$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done</code></li>
</ul>
<p>After that apply your Helm chart.</p>
<blockquote>
<p>Disclaimer!</p>
<p>Above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that this command is only for example purposes.</p>
</blockquote>
<p>The output for <code>Pod</code>:</p>
<pre><code>21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
</code></pre>
<p>The output for <code>Secret</code>:</p>
<pre><code>21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/51962615/12257134">Stackoverflow.com: Answer: Helm install in certain order</a></em></li>
<li><em><a href="https://www.alibabacloud.com/blog/helm-charts-and-templates-hooks-and-tests-part-3_595650" rel="nofollow noreferrer">Alibabacloud.com: Helm charts and templates hooks and tests part 3</a></em></li>
</ul>
| Dawid Kruk |
<p>I have following yaml-configuration for my horizontal pod autoscaler in GCP.</p>
<pre><code>//...
metrics:
- type: Pods
pods:
metric:
name: "my-custom-metric"
target:
type: AverageValue
averageValue: 60
//..
</code></pre>
<p>I have added the metric as a custom metric in metric explorer in Cloud Monitoring. It is a log based metric. I have even added it to a dashboard and it looks fine.</p>
<p>For HPA the metric is not found. I get "Unable to read all metrics".</p>
<p>I'm not sure why? Do I need to add specific permissions in my serviceaccount? Or do I need to use a Custom Metrics Adapter? Does the HPA not have access without a adapter?</p>
<p>I have even tried to add the full name path for the metric name, but I get an error that "/" is not allowed in the name.</p>
<p>I would appreciate any help!</p>
| user11122042 | <p>Indeed, you need to deploy a metrics server; please refer to this <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">link</a>. Bear in mind that it covers CPU and memory.</p>
<ol>
<li><strong>CPU and Memory HPA</strong></li>
</ol>
<p><strong>Notice:</strong></p>
<p>The kubelet certificate needs to be signed by the cluster Certificate Authority (<strong>or disable certificate validation by passing --kubelet-insecure-tls to Metrics Server</strong>).
So add the following under args (you will get some headaches otherwise; TLS can be added later):</p>
<pre><code>- --kubelet-insecure-tls
</code></pre>
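<p>As a sketch, one way to add that flag is to patch the metrics-server deployment (assuming it runs in kube-system and its args list already exists):</p>
<pre><code>kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
</code></pre>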
<ol start="2">
<li><strong>Custom metrics</strong></li>
</ol>
<p><strong>Notice:</strong> Before reading below check this <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">out</a>.</p>
<p>This will require some custom resource definitions as well as an adapter and the custom metrics API. This video might help A LOT :)</p>
<p><a href="https://www.youtube.com/watch?v=iodq-4srXA8&t=550s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=iodq-4srXA8&t=550s</a></p>
<p>Bonus track: do not hesitate to use siege to stress the pods; I would also try scaling up and scaling down :)</p>
| jmvcollaborator |
<p>I want to know the share of resources (CPU, memory) used per Kubernetes pod,
and what the standard for calculating that share is.</p>
| gyuhyeon lee | <p>This is hard to do using kubectl only (or I don't know how). What we usually do is use the kubelet metrics-server to export all metrics to Prometheus. We then use Grafana to calculate those values. The following metrics should allow you to calculate your values:</p>
<p>CPU cores:</p>
<ul>
<li>kube_node_status_allocatable_cpu_cores - available cores</li>
<li>kube_pod_container_resource_requests_cpu_cores - requested cores per container</li>
<li>container_cpu_usage_seconds_total - used cores per container</li>
</ul>
<p>Memory:</p>
<ul>
<li>kube_node_status_allocatable_memory_bytes - available memory</li>
<li>kube_pod_container_resource_requests_memory_bytes - requested memory by container</li>
<li>container_memory_usage_bytes - used memory by container</li>
</ul>
<p>You can filter those by label (i.e. by pod name or namespace) and calculate all kinds of things based on them.</p>
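<p>As a sketch, a couple of PromQL expressions you could use in Grafana (exact metric and label names may differ between kube-state-metrics and cAdvisor versions):</p>
<pre><code># share of requested CPU per pod vs. allocatable CPU in the cluster
sum(kube_pod_container_resource_requests_cpu_cores) by (pod)
  / scalar(sum(kube_node_status_allocatable_cpu_cores))

# share of used memory per pod vs. allocatable memory in the cluster
sum(container_memory_usage_bytes{container!=""}) by (pod)
  / scalar(sum(kube_node_status_allocatable_memory_bytes))
</code></pre>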
| jankantert |
<p>The golang operator started logging an error.</p>
<p><code>failed to list v1.Secret: secrets is forbidden: User "system:serviceaccount:operator-*****" cannot list resource "secrets" in API group "" in the namespace "namespace-name"</code></p>
<p>The error appeared after we enabled restrictions on listing secrets (set resourceNames in the RBAC rule).
Without restrictions, everything works fine.
I am not familiar with golang, but after looking at the source code, I came to the conclusion that the error occurs in this place:</p>
<pre><code> if err := g.client.Get(ctx, client.ObjectKey{Name: tokens.Name, Namespace: g.namespace}, &tokens); err != nil {
return nil, errors.WithMessage(err, "failed to query tokens")
</code></pre>
<p>Here is a more complete part of the code</p>
<pre><code>package initgeneration
import (
"context"
"encoding/json"
dynatracev1beta1 "github.com/Dynatrace/dynatrace-operator/src/api/v1beta1"
"github.com/Dynatrace/dynatrace-operator/src/config"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
// InitGenerator manages the init secret generation for the user namespaces.
type InitGenerator struct {
client client.Client
apiReader client.Reader
namespace string
canWatchNodes bool
dynakubeQuery kubeobjects.DynakubeQuery
}
...
func (g *InitGenerator) createSecretConfigForDynaKube(ctx context.Context, dynakube *dynatracev1beta1.DynaKube, kubeSystemUID types.UID, hostMonitoringNodes map[string]string) (*standalone.SecretConfig, error) {
var tokens corev1.Secret
if err := g.client.Get(ctx, client.ObjectKey{Name: tokens.Name, Namespace: g.namespace}, &tokens); err != nil {
return nil, errors.WithMessage(err, "failed to query tokens")
</code></pre>
<p>The Get method is used from the package "sigs.k8s.io/controller-runtime/pkg/client"; when accessing the K8S API, it requests all the secrets and only then imposes a filter.
I need to get around this limitation.
How can I get secrets, i.e. how can I access the K8S API through the client package with a restriction on resourceName?</p>
| Sergey Belov | <p>The controller-runtime client ("sigs.k8s.io/controller-runtime/pkg/client") goes to its cache when receiving a request, and it updates the cache through a watch operation, i.e. it needs access to all secrets.</p>
<p>The solution that helped: disable the cache for secrets in "Options"
<a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager#Options" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager#Options</a></p>
<p>via <code>ClientDisableCacheFor</code>:</p>
<pre><code>func (provider operatorManagerProvider) createOptions(namespace string) ctrl.Options {
return ctrl.Options{
Namespace: namespace,
Scheme: scheme.Scheme,
MetricsBindAddress: metricsBindAddress,
Port: operatorManagerPort,
...
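        // With the cache disabled for Secrets, Get() reads them directly from the
        // API server, so the restricted RBAC (resourceNames) applies per request
        // and cluster-wide list/watch on Secrets is no longer needed.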
ClientDisableCacheFor: []client.Object{
&corev1.Secret{},
},
}
}
</code></pre>
| Sergey Belov |
<p>I have a few consumers which update their local cache on reading a message from the Kinesis queue. But this leads to inconsistency, as a message is processed by only one among all the replicas, and any information retrieved from the other replicas gives us an invalid cache.</p>
<p>How can I make sure I clear the cache among all the replicas when the invalidation message appears in the queue?</p>
| Chakri | <p>The library that you're using assigns consumers to shards, so any message that you put on the stream will only go to one consumer.</p>
<p>The best solution would be to send your invalidation message out-of-band, using another Kinesis stream (or alternative such as SNS). This would add complexity to your listener, as they'd now have to listen for two sources (and you couldn't use that library for the out-of-band messages).</p>
<p>If you want to send in-band cache invalidation messages, then you need to write the validation message multiple times, with different partition keys, so that it goes to all shards in the stream. The way that the partition key works is that it's MD5-hashed, and that hash is used to select a shard (see <a href="https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html" rel="nofollow noreferrer">PutRecord doc</a>).</p>
<p>Assuming that your application can handle multiple cache invalidation messages, I think that the best way to handle this is to write multiple messages with different partition keys (no need to get fancy, just use "A", "B", "C", and so on). Keep doing this until you've written a message to every shard (getting the shard ID for each record from the response to <code>PutRecord</code>).</p>
<p>You can call <a href="https://docs.aws.amazon.com/kinesis/latest/APIReference/API_ListShards.html" rel="nofollow noreferrer">ListShards</a> with a <code>ShardFilter</code> of <code>AT_LATEST</code> to get the shard IDs of the currently active shards.</p>
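<p>A rough sketch of the in-band approach in Python with boto3 (stream name and payload are placeholders; because the partition-key hash is not controllable, some shards may receive the message more than once):</p>
<pre><code>import itertools
import json

import boto3

STREAM = "my-stream"  # hypothetical stream name
kinesis = boto3.client("kinesis")

# Shard IDs of the currently open shards (AT_LATEST filters out closed ones).
open_shards = {s["ShardId"] for s in
               kinesis.list_shards(StreamName=STREAM,
                                   ShardFilter={"Type": "AT_LATEST"})["Shards"]}

payload = json.dumps({"type": "cache-invalidation"}).encode()
covered = set()

# Keep writing with new partition keys until every open shard has seen the message.
for key in itertools.count():
    if covered >= open_shards:
        break
    resp = kinesis.put_record(StreamName=STREAM, Data=payload,
                              PartitionKey=str(key))
    covered.add(resp["ShardId"])
</code></pre>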
| Parsifal |
<p>I currently have a working Frontend and Backend nodeports with an Ingress service setup with GKE's Google-managed certificates.</p>
<p>However, my issue is that by default <strong>when a user goes to samplesite.com, it uses http as default.</strong> This means that the user needs to specifically type in the browser <a href="https://samplesite.com" rel="nofollow noreferrer">https://samplesite.com</a> in order to get the https version of my website.</p>
<p><strong>How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?</strong> I understand that this can be forcefully done in my backend code as well but I want to separate concerns and handle this in my Kubernetes setup.</p>
<p>Here is my ingress.yaml file:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: frontend-node-service
namespace: default
spec:
type: NodePort
selector:
app: frontend
ports:
- port: 5000
targetPort: 80
protocol: TCP
name: http
---
kind: Service
apiVersion: v1
metadata:
name: backend-node-service
namespace: default
spec:
type: NodePort
selector:
app: backend
ports:
- port: 8081
targetPort: 9229
protocol: TCP
name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: samplesite-ingress-frontend
namespace: default
annotations:
kubernetes.io/ingress.global-static-ip-name: "samplesite-static-ip"
kubernetes.io/ingress.allow-http: "false"
networking.gke.io/managed-certificates: samplesite-ssl
spec:
backend:
serviceName: frontend-node-service
servicePort: 5000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: samplesite-ingress-backend
namespace: default
annotations:
kubernetes.io/ingress.global-static-ip-name: "samplesite-backend-ip"
kubernetes.io/ingress.allow-http: "false"
networking.gke.io/managed-certificates: samplesite-api-ssl
spec:
backend:
serviceName: backend-node-service
servicePort: 8081
</code></pre>
| Raven | <p><strong>Currently <code>GKE</code> Ingress does not support out of the box <code>HTTP</code>-><code>HTTPS</code> redirect.</strong></p>
<p>There is an ongoing Feature Request for it here:</p>
<ul>
<li><em><a href="https://issuetracker.google.com/issues/35904733" rel="noreferrer">Issuetracker.google.com: Issues: Redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer
</a></em></li>
</ul>
<p>There are some <strong>workarounds</strong> for it:</p>
<ul>
<li>Use different <code>Ingress</code> controller like <code>nginx-ingress</code>.</li>
<li>Create a <code>HTTP</code>-><code>HTTPS</code> redirection in <code>GCP</code> Cloud Console.</li>
</ul>
<hr />
<blockquote>
<p>How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?</p>
</blockquote>
<p>To disable <code>HTTP</code> on <code>GKE</code> you can use following annotation:</p>
<ul>
<li><code>kubernetes.io/ingress.allow-http: "false"</code></li>
</ul>
<p>This annotation will:</p>
<ul>
<li>Allow traffic only on port: <code>443 (HTTPS)</code>.</li>
<li>Deny traffic on port: <code>80 (HTTP)</code> resulting in error code: <code>404</code>.</li>
</ul>
<hr />
<p>Focusing on previously mentioned workarounds:</p>
<h3>Use different <code>Ingress</code> controller like <code>nginx-ingress</code></h3>
<p>One of the ways to have the <code>HTTP</code>-><code>HTTPS</code> redirection is to use <code>nginx-ingress</code>. You can deploy it with official documentation:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="noreferrer">Kubernetes.github.io: Ingress-nginx: Deploy: GCE-GKE</a></em></li>
</ul>
<p>This <code>Ingress</code> controller will create a service of type <code>LoadBalancer</code> which will be the entry point for your traffic. <code>Ingress</code> objects will respond on <code>LoadBalancer IP</code>. You can download the manifest from installation part and modify it to support the static IP you have requested in <code>GCP</code>. More reference can be found here:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/33830507/12257134">Stackoverflow.com: How to specify static IP address for Kubernetes load balancer?</a></em></li>
</ul>
<p>You will need to provide your own certificates or use tools like <code>cert-manager</code> to have <code>HTTPS</code> traffic as the annotation: <code>networking.gke.io/managed-certificates</code> will not work with <code>nginx-ingress</code>.</p>
<p>I used this <code>YAML</code> definition and without any other annotations I was always redirected to the <code>HTTPS</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx" # IMPORTANT
spec:
tls: # HTTPS PART
- secretName: ssl-certificate # SELF PROVIDED CERT NAME
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: hello-service
servicePort: hello-port
</code></pre>
<hr />
<h3>Create a <code>HTTP</code>-><code>HTTPS</code> redirection in <code>GCP</code> Cloud Console.</h3>
<p>There is also an option to <strong>manually</strong> create a redirection rule for your <code>Ingress</code> resource. You will need to follow official documentation:</p>
<ul>
<li><em><a href="https://cloud.google.com/load-balancing/docs/https/setting-up-http-https-redirect#setting_up_the_http_load_balancer" rel="noreferrer">Cloud.google.com: Load Balancing: Docs: HTTPS: Setting up HTTP -> HTTPS Redirect</a></em></li>
</ul>
<p>Using the part of above documentation, you will need to create a <code>HTTP</code> LoadBalancer responding on the same IP as your <code>Ingress</code> resource (reserved static IP) redirecting traffic to <code>HTTPS</code>.</p>
<blockquote>
<p>Disclaimer!</p>
<p>Your <code>Ingress</code> resource will need to have following annotation:</p>
<ul>
<li><code>kubernetes.io/ingress.allow-http: "false"</code></li>
</ul>
<p>Lack there of will result in forbidding you to create a redirection mentioned above.</p>
</blockquote>
| Dawid Kruk |
<p>Consul has a TTL health check whose status should be updated periodically over the HTTP interface.
From akka.net microservices we were performing GET requests to the registered Consul Service endpoints to reset the TTL timer and stay alive on the Consul Service dashboard.</p>
<p>Does Kubernetes have something similar to it? Not the liveness/readiness probe that performs requests to <code>pod_ip:port</code>, but something that waits for requests from the running app.
For example, we want to monitor not just that the AKKA app is running on some port, but to make sure that each actor in the actor system is healthy.</p>
<p>Thanks!</p>
| Artur Cherniak | <p>Kubernetes wants to probe the application (with liveness and readiness probes), while the application wants to send its TTL heartbeats to signal liveness to, say, a Consul agent.</p>
<p>One way to reconcile both health check strategies could be a special <a href="https://ahmet.im/blog/advanced-kubernetes-health-checks/" rel="nofollow noreferrer">sidecar health check server</a> running inside the app's pod. Such a sidecar server would sit between the app and the kubelet, and would handle the TTL heartbeats of the app to update its internal state, noting if the app is still alive. As long as that is the case, it would reply with a 200 OK to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">HTTP probes of Kubernetes</a>. Else it would reply with a code outside the 200-300 range to Kubernetes to signal that the app is unhealthy.</p>
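<p>A minimal sketch of such a sidecar in Python (the endpoints, port and TTL are arbitrary choices, not part of any existing tool): the app POSTs its TTL heartbeats to <code>/heartbeat</code>, and the kubelet's HTTP liveness probe hits <code>/healthz</code>, which only returns 200 while heartbeats keep arriving.</p>
<pre><code>import time
from http.server import BaseHTTPRequestHandler, HTTPServer

TTL_SECONDS = 30          # how long a heartbeat keeps the app considered alive
last_heartbeat = 0.0

class HealthHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        global last_heartbeat
        if self.path == "/heartbeat":      # the app sends its TTL pings here
            last_heartbeat = time.time()
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

    def do_GET(self):
        if self.path == "/healthz":        # target of the kubelet's httpGet liveness probe
            alive = time.time() - last_heartbeat < TTL_SECONDS
            self.send_response(200 if alive else 503)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
</code></pre>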
<p>The Consul agent itself could serve as part of such a sidecar health-check server. Its <a href="https://www.consul.io/api/health.html" rel="nofollow noreferrer">HTTP health check API</a> returns the status of the app's TTL-liveness as a JSON object. All that needs to be done is to translate the status into an appropriate HTTP return code. But using Consul agent is totally optional: a sidecar could of course handle the TTL heartbeats itself.</p>
| Farid Hajji |
<p>I am testing lifecycle hooks, and post-start works pretty well, but I think pre-stop never gets executed. There is another <a href="https://stackoverflow.com/questions/49670761/prestop-hook-in-kubernetes-never-gets-executed">answer</a>, but it is not working, and actually if it would work, it would contradict k8s documentation. So, from the docs:</p>
<blockquote>
<p>PreStop</p>
<p>This hook is called immediately before a container is terminated due
to an <strong>API request</strong> or management event such as liveness probe failure,
preemption, resource contention and others. A call to the preStop hook
fails if the container is already in terminated or completed state.</p>
</blockquote>
<p>So, the API request makes me think I can simply do <code>kubectl delete pod POD</code>, and I am good.</p>
<p>More from the docs (<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">pod shutdown process</a>):</p>
<blockquote>
<p>1.- User sends command to delete Pod, with default grace period (30s)</p>
<p>2.- The Pod in the API server is updated with the time beyond which the Pod is considered “dead” along with the grace period.</p>
<p>3.- Pod shows up as “Terminating” when listed in client commands</p>
<p>4.- (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.</p>
<p>4.1.- If one of the Pod’s containers has defined a preStop hook, it is invoked inside of the container. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.</p>
<p>4.2.- The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a preStop hook if the order in which they shut down matters.</p>
<p>...</p>
</blockquote>
<p>So, since when you do <code>kubectl delete pod POD</code>, the pod gets on <code>Terminating</code>, I assume I can do it.</p>
<p>From the other answer, I can't do this, but the way is to do a rolling-update. Well, I tried in all possible ways and it didn't work either.</p>
<p>My tests:</p>
<p>I have a deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deploy
spec:
replicas: 1
template:
metadata:
name: lifecycle-demo
labels:
lifecycle: demo
spec:
containers:
- name: nginx
image: nginx
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- echo "Hello at" `date` > /usr/share/post-start
preStop:
exec:
command:
- /bin/sh"
- -c
- echo "Goodbye at" `date` > /usr/share/pre-stop
volumeMounts:
- name: hooks
mountPath: /usr/share/
volumes:
- name: hooks
hostPath:
path: /usr/hooks/
</code></pre>
<p>I expect the <code>pre-stop</code> and <code>post-start</code> files to be created in <code>/usr/hooks/</code>, on the host (node where the pod is running). post-start is there, but pre-stop, never.</p>
<ul>
<li>I tried <code>kubectl delete pod POD</code>, and it didn't work.</li>
<li>I tried <code>kubectl replace -f deploy.yaml</code>, with a different image, and when I do <code>kubectl get rs</code>, I can see the new replicaSet created, but the file isn't there.</li>
<li>I tried <code>kubectl set image ...</code>, and again, I can see the new <code>replicaSet</code> created, but the file isn't there.</li>
<li>I even tried putting them in completely separate volumes, as I thought maybe when I kill the pod and it gets re-created it re-creates the folder where the files should be created, so it deletes the folder and the pre-stop file, but that was not the case.
Note: It always gets re-created on the same node. I made sure of that.</li>
</ul>
<p>What I have not tried is to bomb the container and break it by setting low CPU limit, but that's not what I need.</p>
<p>Any idea what are the circumstances under which <code>preStop</code> hook would get triggered?</p>
| suren | <p>Posting this as community wiki for better visibility.</p>
<blockquote>
<p>There is a typo in the second "/bin/sh"; for preStop. There is an extra double quote ("). It was letting me to create the deployment, but was the cause it was not creating the file. All works fine now.</p>
</blockquote>
<p>The exact point where the issue lied was here:</p>
<pre class="lang-yaml prettyprint-override"><code> preStop:
exec:
command:
- /bin/sh" # <- this quotation
- -c
- echo "Goodbye at" `date` > /usr/share/pre-stop
</code></pre>
<p><strong>To be correct it should look like that:</strong></p>
<pre class="lang-yaml prettyprint-override"><code> preStop:
exec:
command:
- /bin/sh
- -c
- echo "Goodbye at" `date` > /usr/share/pre-stop
</code></pre>
<hr />
<p>At the time of writing this community wiki post, this <code>Deployment</code> manifest is outdated. The following changes were needed to be able to run this manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: good-deployment
spec:
selector:
matchLabels:
lifecycle: demo
replicas: 1
template:
metadata:
labels:
lifecycle: demo
spec:
containers:
- name: nginx
image: nginx
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- echo "Hello at" `date` > /usr/share/post-start
preStop:
exec:
command:
- /bin/sh
- -c
- echo "Goodbye at" `date` > /usr/share/pre-stop
volumeMounts:
- name: hooks
mountPath: /usr/share/
volumes:
- name: hooks
hostPath:
path: /usr/hooks/
</code></pre>
<p>The changes were the following:</p>
<h3>1. <code>apiVersion</code></h3>
<pre><code>+--------------------------------+---------------------+
| Old | New |
+--------------------------------+---------------------+
| apiVersion: extensions/v1beta1 | apiVersion: apps/v1 |
+--------------------------------+---------------------+
</code></pre>
<p>StackOverflow answer for more reference:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/58482194/12257134">Stackoverflow.com: Questions: No matches for kind “Deployment” in version extensions/v1beta1
</a></em></li>
</ul>
<h3>2. <code>selector</code></h3>
<p>Added <code>selector</code> section under <code>spec</code>:</p>
<pre><code>spec:
selector:
matchLabels:
lifecycle: demo
</code></pre>
<p>Additional links with reference:</p>
<ul>
<li><em><a href="https://serverfault.com/a/917359/545585">What is spec - selector - matchLabels used for while creating a deployment?</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Controllers: Deployment: Selector</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm trying to write a Prometheus query that can tell me how much, as a percentage, CPU (and another for memory and network) each namespace has used over a time frame, say a week.</p>
<p>The metrics I'm trying to use are <code>container_spec_cpu_shares</code> and <code>container_memory_working_set_bytes</code> but I can't figure out how to sum them over time. Whatever I try either returns 0 or errors.</p>
<p>Any help on how to write a query for this would be greatly appreciated.</p>
| Steve | <p>To check the percentage of memory used by each namespace you will need a query similar to the one below:</p>
<pre><code>sum( container_memory_working_set_bytes{container="", namespace=~".+"} )
by (namespace) / ignoring (namespace) group_left
sum( machine_memory_bytes{}) * 100
</code></pre>
<p>Above query should produce a graph similar to this one:</p>
<p><a href="https://i.stack.imgur.com/eiqur.png" rel="noreferrer"><img src="https://i.stack.imgur.com/eiqur.png" alt="GRAFANA IMAGE" /></a></p>
<blockquote>
<p>Disclaimers!:</p>
<ul>
<li>The screenshot above is from Grafana for better visibility.</li>
<li><strong>This query does not acknowledge changes in available RAM (changes in nodes, autoscaling of nodes, etc.).</strong></li>
</ul>
</blockquote>
<p>To get the metric over a period of time in PromQL you will need to use additional function like:</p>
<ul>
<li><code>avg_over_time(EXP[time])</code>.</li>
</ul>
<p>To go back in time and calculate resources from specific point in time you will need to use:</p>
<ul>
<li><code>offset TIME</code></li>
</ul>
<p><strong>Using above pointers query should combine to:</strong></p>
<pre><code>avg_over_time( sum(container_memory_working_set_bytes{container="", namespace=~".+"} offset 45m) by (namespace)[120m:]) / ignoring (namespace) group_left
sum( machine_memory_bytes{})
</code></pre>
<p>The above query will calculate the average memory used by each namespace over a 120 minute window and divide it by all the memory in the cluster. The <code>offset</code> also shifts that whole window 45 minutes back from the present time.</p>
<p>Example:</p>
<ul>
<li>Time of running the query: 20:00</li>
<li><code>avg_over_time(EXPR[2h:])</code></li>
<li><code>offset 45 min</code></li>
</ul>
<p>The above example will start at 17:15 and evaluate the query up to 19:15. You can modify it to cover the whole week, as sketched below :).</p>
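<p>A sketch of the same approach stretched over the last 7 days, multiplied by 100 to read as a percentage (the <code>1h</code> subquery resolution is an arbitrary choice to keep a week-long subquery cheap to evaluate; the <code>offset</code> is dropped since the window should end at the present time):</p>
<pre><code>avg_over_time( sum( container_memory_working_set_bytes{container="", namespace=~".+"} ) by (namespace)[7d:1h]) / ignoring (namespace) group_left
sum( machine_memory_bytes{}) * 100
</code></pre>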
<p>If you want to calculate the CPU usage by namespace you can replace this metrics with the one below:</p>
<ul>
<li><code>container_cpu_usage_seconds_total{}</code> - please check <code>rate()</code> function when using this metric (counter)</li>
<li><code>machine_cpu_cores{}</code></li>
</ul>
<p>You could also look on this network metrics:</p>
<ul>
<li><code>container_network_receive_bytes_total</code> - please check <code>rate()</code> function when using this metric (counter)</li>
<li><code>container_network_transmit_bytes_total</code> - please check <code>rate()</code> function when using this metric (counter)</li>
</ul>
<p>I've included more explanation below with examples (memory), methodology of testing and dissection of used queries.</p>
<hr />
<p>Let's assume:</p>
<ul>
<li>Kubernetes cluster <code>1.18.6</code> (Kubespray) with 12GB of memory in total:
<ul>
<li>master node with <code>2GB</code> of memory</li>
<li>worker-one node with <code>8GB</code> of memory</li>
<li>worker-two node with <code>2GB</code> of memory</li>
</ul>
</li>
<li>Prometheus and Grafana installed with: <em><a href="https://github.com/coreos/kube-prometheus" rel="noreferrer">Github.com: Coreos: Kube-prometheus</a></em></li>
<li>Namespace <code>kruk</code> with single <code>ubuntu</code> pod set to generate artificial load with below command:
<ul>
<li><code>$ stress-ng --vm 1 --vm-bytes <AMOUNT_OF_RAM_USED> --vm-method all -t 60m -v</code></li>
</ul>
</li>
</ul>
<p>The artificial load was generated with <code>stress-ng</code> two times:</p>
<ul>
<li>60 minutes - <strong>1GB</strong> of memory used</li>
<li>60 minutes - <strong>2GB</strong> of memory used</li>
</ul>
<p>The percentage of memory used by namespace <code>kruk</code> in this timespan:</p>
<ul>
<li>1GB which accounts for about ~8.5% of all memory in the cluster (12GB)</li>
<li>2GB which accounts for about ~17.5% of all memory in the cluster (12GB)</li>
</ul>
<p>The load from Prometheus query for <code>kruk</code> namespace was looking like that:</p>
<p><a href="https://i.stack.imgur.com/x1FTl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/x1FTl.png" alt="kruk namespace memory usage" /></a></p>
<p>Calculation using <code>avg_over_time(EXPR[time:]) / memory in the cluster</code> showed the usage right in the middle, at about 13% (<code>(17.5+8.5)/2</code>), when querying over the time the artificial load was generated. This indicates that the query is correct:</p>
<p><a href="https://i.stack.imgur.com/g3y6r.png" rel="noreferrer"><img src="https://i.stack.imgur.com/g3y6r.png" alt="Average usage" /></a></p>
<hr />
<p>As for the used query:</p>
<pre><code>avg_over_time( sum( container_memory_working_set_bytes{container="", namespace="kruk"} offset 1380m )
by (namespace)[120m:]) / ignoring (namespace) group_left
sum( machine_memory_bytes{}) * 100
</code></pre>
<p>Above query is really similar to the one in the beginning but I've made some changes to show only the <code>kruk</code> namespace.</p>
<p>I divided the query explanation into 2 parts (dividend/divisor).</p>
<h3>Dividend</h3>
<pre><code>container_memory_working_set_bytes{container="", namespace="kruk"}
</code></pre>
<p>This metric will output records of memory usage in the <code>kruk</code> namespace. If you were to query for all namespaces, look at the additional explanation:</p>
<ul>
<li><code>namespace=~".+"</code> <- this regexp matches only when the value of the namespace label contains 1 or more characters. This is to avoid an empty namespace result with aggregated metrics.</li>
<li><code>container=""</code> <- part is used to filter the metrics. If you were to query without it you would get multiple memory usage metrics for each container/pod like below. <code>container=""</code> will match only when container value is empty (last row in below citation).</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>container_memory_working_set_bytes{container="POD",endpoint="https-metrics",id="/kubepods/podab1ed1fb-dc8c-47db-acc8-4a01e3f9ea1b/e249c12010a27f82389ebfff3c7c133f2a5da19799d2f5bb794bcdb5dc5f8bca",image="k8s.gcr.io/pause:3.2",instance="192.168.0.124:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_POD_ubuntu_kruk_ab1ed1fb-dc8c-47db-acc8-4a01e3f9ea1b_0",namespace="kruk",node="worker-one",pod="ubuntu",service="kubelet"} 692224
container_memory_working_set_bytes{container="ubuntu",endpoint="https-metrics",id="/kubepods/podab1ed1fb-dc8c-47db-acc8-4a01e3f9ea1b/fae287e7043ff00da16b6e6a8688bfba0bfe30634c52e7563fcf18ac5850f6d9",image="ubuntu@sha256:5d1d5407f353843ecf8b16524bc5565aa332e9e6a1297c73a92d3e754b8a636d",instance="192.168.0.124:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_ubuntu_ubuntu_kruk_ab1ed1fb-dc8c-47db-acc8-4a01e3f9ea1b_0",namespace="kruk",node="worker-one",pod="ubuntu",service="kubelet"} 2186403840
container_memory_working_set_bytes{endpoint="https-metrics",id="/kubepods/podab1ed1fb-dc8c-47db-acc8-4a01e3f9ea1b",instance="192.168.0.124:10250",job="kubelet",metrics_path="/metrics/cadvisor",namespace="kruk",node="worker-one",pod="ubuntu",service="kubelet"} 2187096064
</code></pre>
<blockquote>
<p>You can read more about pause container here:</p>
<ul>
<li><em><a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">Ianlewis.org: Almighty pause container</a></em></li>
</ul>
</blockquote>
<pre><code>sum( container_memory_working_set_bytes{container="", namespace="kruk"} offset 1380m )
by (namespace)
</code></pre>
<p>This query will sum the results by their respective namespaces. <code>offset 1380m</code> is used to go back in time as the tests were made in the past.</p>
<pre><code>avg_over_time( sum( container_memory_working_set_bytes{container="", namespace="kruk"} offset 1380m )
by (namespace)[120m:])
</code></pre>
<p>This query calculates the average of the memory metric per namespace over the specified window (120m), starting 1380m earlier than the present time.</p>
<p>You can read more about <code>avg_over_time()</code> here:</p>
<ul>
<li><em><a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="noreferrer">Prometheus.io: Aggregation over time</a></em></li>
<li><em><a href="https://prometheus.io/blog/2019/01/28/subquery-support/" rel="noreferrer">Prometheus.io: Blog: Subquery support</a></em></li>
</ul>
<h3>Divisor</h3>
<pre><code>sum( machine_memory_bytes{})
</code></pre>
<p>This sums the memory available across all nodes in the cluster.</p>
<pre><code>EXPR / ignoring (namespace) group_left
sum( machine_memory_bytes{}) * 100
</code></pre>
<p>Focusing on:</p>
<ul>
<li><code>/ ignoring (namespace) group_left</code> <- this expression will allow you to divide each "record" in the dividend (each namespace with their memory average across time) by a divisor (all memory in the cluster). You can read more about it here: <em><a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#vector-matching" rel="noreferrer">Prometheus.io: Vector matching</a></em></li>
<li><code>* 100</code> is rather self-explanatory and multiplies the result by 100 so it reads as a percentage.</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noreferrer">Prometheus.io: Querying: Basics</a></em></li>
<li><em><a href="https://timber.io/blog/promql-for-humans/" rel="noreferrer">Timber.io: Blog: Promql for humans</a></em></li>
<li><em><a href="https://grafana.com/grafana/dashboards/315" rel="noreferrer">Grafana.com: Dashboards: 315</a></em></li>
</ul>
| Dawid Kruk |
<p>I have Redis instances in k8s with one master and 2 slaves. I restore from an RDB backup file, which is meant for persistent Redis, but I don't have a persistent path, so when I copy the backup RDB file into Redis' data path I have to restart. Because I don't persist my data path, the data is lost after the restart.</p>
<p>What should I do to restore successfully into an in-memory Redis?</p>
| Ali Rezvani | <p>Redis doesn't support loading an RDB file at runtime, only at startup.
There are some external tools that can translate the RDB file to RESTORE commands, like redis-rdb-cli, etc., but maybe you should just find a way to place the file on the pod before Redis starts.</p>
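<p>A minimal sketch of the second approach, using an initContainer that copies the backup into the data directory before the Redis container starts. The image, volume and PVC names are assumptions, and the backup is expected to already exist on the <code>redis-backup</code> volume:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: redis-master
spec:
  initContainers:
  - name: restore-rdb                # runs to completion before the redis container starts
    image: busybox:1.35
    command: ["sh", "-c", "cp /backup/dump.rdb /data/dump.rdb"]
    volumeMounts:
    - name: redis-backup             # volume that already holds the backup rdb file
      mountPath: /backup
    - name: redis-data
      mountPath: /data
  containers:
  - name: redis
    image: redis:6.2
    volumeMounts:
    - name: redis-data               # redis reads /data/dump.rdb at startup
      mountPath: /data
  volumes:
  - name: redis-backup
    persistentVolumeClaim:
      claimName: redis-backup-pvc    # hypothetical PVC holding the backup
  - name: redis-data
    emptyDir: {}
</code></pre>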
| Ali Rezvani |
<p><strong>Issue</strong></p>
<p>I've a set of working WordPress Docker Compose containers that includes blog image and db image.</p>
<pre><code>version: '3'
services:
wordpress-app-db:
build:
context: ./mysql
image: wordpress-app-db:5.7-mysql
restart: always
php7-wordpress-app:
depends_on:
- wordpress-app-db
build:
context: ./wordpress
image: wordpress-app:php7
links:
- wordpress-app-db
ports:
- "8080:80"
restart: always
volumes:
data:
</code></pre>
<p>Now, the above <code>yaml</code> works with no issues at all, but when I want to change the port <code>8080</code> to some other port it simply won't work.</p>
<pre><code>ports:
- "<my-custom-port>:80"
</code></pre>
<p>Either way, the URL takes me to <code>http://localhost:8080/</code>.</p>
<p>I'm confused by its behaviour; I'm not able to understand why it is redirecting to <code>8080</code> if it has been mapped to the other port <code><my-custom-port></code>.</p>
<p>For info, I've exposed port <code>80</code> in Dockerfile.</p>
<p><strong>Reason</strong></p>
<p>I want to do so because I have to run this set in a kubernetes cluster with a <code>nodePort</code> service, and I can't assign port <code>8080</code> as the <code>nodePort</code>.</p>
| Sushil K Sharma | <p>Verify that the <code>siteurl</code> and <code>home</code> wp_options in your database are set to the correct hostname and port. I recently migrated a WordPress site from a LAMP compute instance to Kubernetes using the official WordPress images, and I experienced the exact same issue.</p>
<p>WordPress will perform a redirect if those values do not match your domain name.</p>
<p>In my case, the two fields mutated somehow during the migration, causing every request to redirect to <code>:8080</code>. They were changed to <code>localhost:8080</code>.</p>
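<p>If you prefer to pin those values in configuration rather than in the database, here is a sketch for the compose file in the question. This assumes your custom image is built from the official <code>wordpress</code> image, whose entrypoint honours <code>WORDPRESS_CONFIG_EXTRA</code>; <code>WP_HOME</code> and <code>WP_SITEURL</code> are standard WordPress constants:</p>
<pre class="lang-yaml prettyprint-override"><code>  php7-wordpress-app:
    # ... existing build/image/ports settings ...
    environment:
      # hard-codes the public URL so WordPress stops redirecting to the stale siteurl/home rows
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_HOME', 'http://localhost:<my-custom-port>');
        define('WP_SITEURL', 'http://localhost:<my-custom-port>');
</code></pre>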
| Shane Rainville |
<p>Getting an error in c# console application.</p>
<blockquote>
<p>Use of unassigned local variable 'KubeClient'</p>
</blockquote>
<p>I tried to use the Kubernetes client in my application, but it fails with the above error. I know the error is due to the uninitialised variable KubeClient, but I used it the same way in my Web API project and I don't understand the difference. How do I initialise the Kubernetes client? It shows</p>
<blockquote>
<p>is inaccessible due to its protection level</p>
</blockquote>
<p>Please help me.</p>
<p>my code is</p>
<pre><code>using k8s;
using k8s.Models;
public bool ReadTLSSecretIteratable(string secretname, string namespacename)
{
V1Secret sec = null;
Kubernetes KubeClient;
try
{
sec = KubeClient.ReadNamespacedSecret(secretname, namespacename);
}
catch (Microsoft.Rest.HttpOperationException httpOperationException)
{
var content = httpOperationException.Response.Content;
Console.WriteLine(content);
throw httpOperationException;
}
    return true;
}
</code></pre>
| avancho marchos | <p>As you mentioned in the question, the error comes from the uninitialised variable. Initialise the client before using it, for example:</p>
<pre><code>Kubernetes KubeClient;
// use KubernetesClientConfiguration.BuildDefaultConfig() for local testing
// and InClusterConfig() when running inside the cluster
var config = KubernetesClientConfiguration.InClusterConfig();
KubeClient = new Kubernetes(config);
</code></pre>
| june alex |
<p>I am trying to put my .net core web applications into k8s.</p>
<p>I have two front-end applications, namely:</p>
<ul>
<li>Authentication (Auth) service (using .net 3.1 identityserver4). The Auth service lets users authenticate themselves and, upon success, redirects to the Web service with a JWT token.</li>
<li>Web service (using .net 5). Once the user authenticates successfully, the Web service receives the JWT token and creates a session cookie.</li>
</ul>
<p>When deploying in Docker, both services run on different ports behind an Nginx reverse proxy, and both services are served from the root path, e.g.:</p>
<pre><code>server {
listen 44343
location /
{redirect to Auth Service}
}
server {
listen 44345
location /
{redirect to Web Service}
}
</code></pre>
<p>But in k8s, it seems I can't do it that way. Could any kind soul guide me on the correct setup for ingress or nginx?</p>
| Joseph Y | <p>You could create one service for each port and an ingress:</p>
<p><strong>authsvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: auth-service
spec:
type: NodePort
  selector:
    component: auth      # must match the pod labels of the pods backing your auth deployment
  ports:
  - port: 44343
    targetPort: x        # i guess this could be your port 80/443 because it is the "entry"
</code></pre>
<p><strong>webservicesvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
type: NodePort
  selector:
    component: web       # must match the pod labels of the pods backing your web deployment
ports:
- port: 44345
targetPort: x
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: auth-service
port:
number: 44343
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 44345
</code></pre>
<p><code>path: /</code> matches all paths.
You can also use a different ingress setup, for example one terminating TLS.
Take a look at: <a href="https://www.yogihosting.com/kubernetes-ingress-aspnet-core/" rel="nofollow noreferrer">https://www.yogihosting.com/kubernetes-ingress-aspnet-core/</a></p>
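<p>Note that two entries with the same <code>path: /</code> collide, so only one of the backends effectively receives traffic. Here is a sketch that routes by distinct prefixes instead; the <code>/auth</code> prefix is an assumption (use whatever base path your IdentityServer endpoints actually live under), and the <code>rewrite-target</code> annotation is dropped so the original path reaches the backend unchanged:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /auth              # assumed prefix for the IdentityServer/auth endpoints
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 44343
      - path: /                  # everything else goes to the web front end
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 44345
</code></pre>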
| jmvcollaborator |
<p>I'm trying to spin up a cluster with one node (VM machine) but I'm getting some pods for <code>kube-system</code> stuck as <code>ContainerCreating </code></p>
<pre><code>> kubectl get pods,svc -owide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cattle-system pod/cattle-cluster-agent-7db88c6b68-bz5dp 0/1 ContainerCreating 0 7m13s <none> hdn-dev-app66 <none> <none>
cattle-system pod/cattle-node-agent-ccntw 1/1 Running 0 7m13s 10.105.1.76 hdn-dev-app66 <none> <none>
cattle-system pod/kube-api-auth-9kdpw 1/1 Running 0 7m13s 10.105.1.76 hdn-dev-app66 <none> <none>
ingress-nginx pod/default-http-backend-598b7d7dbd-rwvhm 0/1 ContainerCreating 0 7m29s <none> hdn-dev-app66 <none> <none>
ingress-nginx pod/nginx-ingress-controller-62vhq 1/1 Running 0 7m29s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/coredns-849545576b-w87zr 0/1 ContainerCreating 0 7m39s <none> hdn-dev-app66 <none> <none>
kube-system pod/coredns-autoscaler-5dcd676cbd-pj54d 0/1 ContainerCreating 0 7m38s <none> hdn-dev-app66 <none> <none>
kube-system pod/kube-flannel-d9m6q 2/2 Running 0 7m43s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/metrics-server-697746ff48-q7cpx 0/1 ContainerCreating 0 7m33s <none> hdn-dev-app66 <none> <none>
kube-system pod/rke-coredns-addon-deploy-job-npjll 0/1 Completed 0 7m40s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-ingress-controller-deploy-job-b9rs4 0/1 Completed 0 7m30s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-metrics-addon-deploy-job-5rpbj 0/1 Completed 0 7m35s 10.105.1.76 hdn-dev-app66 <none> <none>
kube-system pod/rke-network-plugin-deploy-job-lvk2q 0/1 Completed 0 7m50s 10.105.1.76 hdn-dev-app66 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 8m19s <none>
ingress-nginx service/default-http-backend ClusterIP 10.43.144.25 <none> 80/TCP 7m29s app=default-http-backend
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 7m39s k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 10.43.251.47 <none> 443/TCP 7m34s k8s-app=metrics-server
</code></pre>
<p>when I will do describe on failing pods I'm getting that:</p>
<pre><code>Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "345460c8f6399a0cf20956d8ea24d52f5a684ae47c3e8ec247f83d66d56b2baa" network for pod "cattle-cluster-agent-7db88c6b68-bz5dp": networkPlugin cni failed to set up pod "cattle-cluster-agent-7db88c6b68-bz5dp_cattle-system" network: error getting ClusterInformation: connection is unauthorized: clusterinformations.crd.projectcalico.org "default" is forbidden: User "system:node" cannot get resource "clusterinformations" in API group "crd.projectcalico.org" at the cluster scope, failed to clean up sandbox container "345460c8f6399a0cf20956d8ea24d52f5a684ae47c3e8ec247f83d66d56b2baa" network for pod "cattle-cluster-agent-7db88c6b68-bz5dp": networkPlugin cni failed to teardown pod "cattle-cluster-agent-7db88c6b68-bz5dp_cattle-system" network: error getting ClusterInformation: connection is unauthorized: clusterinformations.crd.projectcalico.org "default" is forbidden: User "system:node" cannot get resource "clusterinformations" in API group "crd.projectcalico.org" at the cluster scope]
</code></pre>
<p>I tried to re-register that node one more time but no luck. Any thoughts?</p>
| JackTheKnife | <p>As the error says the request is unauthorized, you have to grant the missing RBAC permission. The request is made with the node's credentials (the <code>system:nodes</code> group), so binding that group to the <code>calico-node</code> ClusterRole makes it work.</p>
<p>Try adding:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
</code></pre>
| Saurabh Nigam |
<p>I am trying to deploy Jenkins on Kubernetes. I have deployed it with ClusterIP along with Nginx Ingress Controller on AKS.</p>
<p>When I access the IP of the Ingress-Controller, the Jenkins login URL (http://<em>ExternalIP</em>/login?from=%2F) comes up. However, the UI of the Jenkins page isn't coming up; there is some sort of redirection happening and the URL keeps growing (http://<em>ExternalIP</em>/login?from=%2F%3Ffrom%3D%252F%253Ffrom%253D%25252F%25253F). I am very new to Ingress controllers and annotations, and I am not able to figure out what's causing this redirection.</p>
<p>Below are my configuration files. Can anyone please help on what's going wrong ?</p>
<p><strong>ClusterIP-Service.yml</strong></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: jenkins-nodeport-svc
namespace: jenkins
labels:
env: poc
app: myapp_jenkins
spec:
ports:
- name: "http"
port: 80
targetPort: 8080
type: ClusterIP
selector:
app: myapp_jenkins
</code></pre>
<p><strong>Ingress.yml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jenkins-ingress
namespace: jenkins
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
rules:
- http:
paths:
- backend:
serviceName: jenkins-nodeport-svc
servicePort: 80
path: /(.*)
</code></pre>
| rb16 | <p>There's something in your ingress:</p>
<pre><code>path: /(.*)
</code></pre>
<p>is a regular expression with a single capturing group that matches everything. For example, with the following url: <code>http://ExternalIP/login?from=myurl</code> your capturing group <code>$1</code> (the first and only one) would capture <code>login</code>.</p>
<p>Now the problem is that the <code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code> annotation is rewriting your url with a non-existent capturing group.</p>
<p>You don't need rewriting; you just need to forward every request to the service as-is.<br />
<a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">Here you can find Rewrite Examples</a> if you interested on it.</p>
<p>But in your case you can set:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jenkins-ingress
namespace: jenkins
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
rules:
- http:
paths:
- backend:
serviceName: jenkins-nodeport-svc
servicePort: 80
path: /
</code></pre>
<p>and you're good to go.</p>
| oldgiova |
<p>I've been trying to deploy a workflow in Argo with Kubernetes and I'm getting this error</p>
<p><a href="https://i.stack.imgur.com/BGPAk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGPAk.png" alt="![Kubernetes Argo Error" /></a></p>
<p>Can someone help me to know the root of the issue?</p>
<p>I’ve tried several things but I’ve been unsuccessful.</p>
| Derek Menénedez | <p>The way Argo solves that problem is by using compression on the stored entity, but the real question is whether you have to have all 3MB worth of that data at once, or if it is merely more convenient for you and they could be decomposed into separate objects with relationships between each other. The kubernetes API is not a blob storage, and shouldn't be treated as one.</p>
<ul>
<li>The "error": "Request entity too large: limit is 3145728" is probably
the default response from kubernetes handler for objects larger than
3MB, as you can see <a href="https://github.com/kubernetes/kubernetes/blob/db1990f48b92d603f469c1c89e2ad36da1b74846/test/integration/master/synthetic_master_test.go#L315" rel="nofollow noreferrer">here at L305</a> of the source code:</li>
</ul>
<pre><code>expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
</code></pre>
<ul>
<li>The <a href="https://github.com/etcd-io/etcd/issues/9925" rel="nofollow noreferrer">ETCD</a> does have a 1.5MB default request size limit, and you will find in the etcd documentation a suggestion to try the <code>--max-request-bytes</code> flag, but it would have no effect on a GKE cluster because you don't have such permission on the master node.</li>
</ul>
<p>But even if you did, it would not be ideal because usually this error means that you are <a href="https://github.com/kubeflow/pipelines/issues/3134#issuecomment-591278230" rel="nofollow noreferrer">consuming the objects</a> instead of referencing them which would degrade your performance.</p>
<p>I highly recommend that you consider instead these options:</p>
<p><strong>- Determine whether your object includes references that aren't used</strong></p>
<p><strong>- Break up your resource</strong></p>
<p><strong>- Consider a volume mount instead</strong></p>
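<p>A minimal sketch of the last option follows. The PVC name, image and mount path are assumptions; the idea is that the pod references the large payload through a volume instead of carrying it inside the (size-limited) object stored in etcd:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: workflow-step
spec:
  containers:
  - name: main
    image: busybox:1.35
    command: ["sh", "-c", "wc -c /work/payload.bin"]   # reads the large payload from the mounted volume
    volumeMounts:
    - name: workflow-data
      mountPath: /work
  volumes:
  - name: workflow-data
    persistentVolumeClaim:
      claimName: workflow-data-pvc                     # hypothetical PVC that holds the multi-MB payload
</code></pre>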
<p>There's a request for <a href="https://github.com/kubernetes/kubernetes/issues/88709" rel="nofollow noreferrer">a new API Resource</a>: File (or BinaryData) that could apply to your case. It's very fresh, but it's worth keeping an eye on.</p>
<p>Partial source for this answer: <a href="https://github.com/etcd-io/etcd/issues/9925" rel="nofollow noreferrer">https://stackoverflow.com/a/60492986/12153576</a></p>
| jmvcollaborator |
<p>I am learning Kubernetes and I ran into trouble reaching an API in my local Minikube (Docker driver).
I have a pod running an angular client which tries to reach a backend pod. The frontend pod is exposed by a NodePort service. The backend pod is exposed to the cluster by a ClusterIP service.</p>
<p>But when I try to reach the ClusterIP service from the frontend, the DNS name <code>transpile-svc.default.svc.cluster.local</code> cannot be resolved.
<a href="https://i.stack.imgur.com/NEaQ6.png" rel="nofollow noreferrer">error message in the client</a></p>
<p>The DNS should work properly. I followed <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a> and deployed a dnsutils pod from which I can <code>nslookup</code>.</p>
<pre><code>winpty kubectl exec -i -t dnsutils -- nslookup transpile-svc.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: transpile-svc.default.svc.cluster.local
Address: 10.99.196.82
</code></pre>
<p>This is the .yaml file for the clusterIP Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: transpile-svc
labels:
app: transpile
spec:
selector:
app: transpile
ports:
- port: 80
targetPort: 80
</code></pre>
<p>Even if I hardcode the IP into the request of the frontend I am getting an empty response.
I verified that the backend pod is working correctly, and when I expose it as a NodePort I can reach the API with my browser.</p>
<p>What am I missing here? I have been stuck with this problem for quite some time now and I can't find any solution.</p>
| Verwey | <p>Since your frontend application is calling your backend from outside the cluster (the requests originate in the browser), you need to expose the backend application to the outside network too.</p>
<p>There are two ways: either expose it directly by changing the transpile-svc service to the LoadBalancer type, or introduce an ingress controller (e.g. the Nginx ingress controller with an Ingress object) which will handle all the routing.</p>
<p>Steps to expose the service as a LoadBalancer in minikube:</p>
<ol>
<li>Change your <code>transpile-svc</code> service type to <code>LoadBalancer</code> (see the sketch below).</li>
<li>Run <code>minikube service transpile-svc</code> to expose the service, i.e. an IP will be allocated.</li>
<li>Run <code>kubectl get services</code> to get the external IP assigned. Use <code>IP:PORT</code> to call it from the frontend application.</li>
</ol>
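<p>For step 1, a sketch of the modified service; only the <code>type</code> field changes relative to the manifest in the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: transpile-svc
  labels:
    app: transpile
spec:
  type: LoadBalancer   # was ClusterIP (the default); an external IP is assigned once exposed (step 2)
  selector:
    app: transpile
  ports:
  - port: 80
    targetPort: 80
</code></pre>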
| Saurabh Nigam |
<p>We are trying out Kubernetes autoscaling options. We have configured Horizontal Pod Autoscaling, but I was wondering whether it is possible to implement both horizontal and vertical autoscaling for a particular application. To explain more: I want to be able to increase the resources of a pod when I don't want to increase the number of pods, and when I don't want to increase the pod resources, I want to be able to increase the number of pods for the same application.</p>
| user3396478 | <p>If your HPA is not based on CPU or Memory you can do this with no problem, but it is not recommended to use both VPA and HPA when the HPA is based on CPU or memory.</p>
<p>Taken from the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#known-limitations" rel="nofollow noreferrer">VPA documentation</a>: <em>"Vertical Pod Autoscaler should not be used with the Horizontal Pod Autoscaler (HPA) on CPU or memory at this moment"</em></p>
<p>There are some options on how to do so. For example, you can update the HPA targets relative to the CPU/memory changes that the VPA applies.</p>
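<p>As an illustration of an HPA that is not driven by CPU or memory (and can therefore coexist with VPA), here is a sketch using a per-pod custom metric. The deployment name, metric name and target value are assumptions, and a custom-metrics adapter (e.g. the Prometheus adapter) must be installed for the metric to resolve:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical custom metric served by a metrics adapter
      target:
        type: AverageValue
        averageValue: "100"
</code></pre>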
<p>For further examples, there is a free tool named <a href="https://granulate.io/gmaestro/?utm_source=stackoverflow&utm_medium=forum&utm_campaign=gMaestro" rel="nofollow noreferrer">gMaestro</a> that supports both rightsizing and HPA, you can try to use it.</p>
| Shahar Yakov |
<p>I am facing a weird problem. I was following a tutorial by Stephen Grider on microservices.
I built a skaffold file, an ingress service and 2 services, one called auth and the other called client, and this is the output after running <code>skaffold dev</code></p>
<pre><code>Listing files to watch...
- moatazemadnaeem/auth
- moatazemadnaeem/client
Generating tags...
- moatazemadnaeem/auth -> moatazemadnaeem/auth:latest
- moatazemadnaeem/client -> moatazemadnaeem/client:latest
Some taggers failed. Rerun with -vdebug for errors.
Checking cache...
- moatazemadnaeem/auth: Found Locally
- moatazemadnaeem/client: Found Locally
Tags used in deployment:
- moatazemadnaeem/auth -> moatazemadnaeem/auth:02cc5bb4a3dd94ee5f02dcbeb23a8e7ba7308f12c55e701b6d5a50fd950df4da
- moatazemadnaeem/client -> moatazemadnaeem/client:f6f4b9ce5f6c3f1655e6a902b6a5d7fa27e203ec75b6c8772a8ebe4b9c105dbd
Starting deploy...
- deployment.apps/auth-depl created
- service/auth-srv created
- deployment.apps/mongo-depl created
- service/auth-mongo-srv created
- deployment.apps/client-depl created
- service/client-srv created
- ingress.networking.k8s.io/ingress-srv created
Waiting for deployments to stabilize...
- deployment/client-depl is ready. [2/3 deployment(s) still pending]
- deployment/auth-depl is ready. [1/3 deployment(s) still pending]
- deployment/mongo-depl is ready.
Deployments stabilized in 3.103 seconds
Press Ctrl+C to exit
Watching for changes...
[client]
[auth]
[auth] > [email protected] start
[auth] > nodemon ./src/index.ts
[auth]
[auth] [nodemon] 2.0.15
[auth] [nodemon] to restart at any time, enter `rs`
[client] > [email protected] dev
[auth] [nodemon] watching path(s): *.*
[client] > next
[auth] [nodemon] watching extensions: ts,json
[client]
[auth] [nodemon] starting `ts-node ./src/index.ts`
[client] ready - started server on 0.0.0.0:3000, url: http://localhost:3000
[auth] connected to db
[auth] listening in port 3000
[client] event - compiled client and server successfully in 2.8s (185 modules)
[client] Attention: Next.js now collects completely anonymous telemetry regarding usage.
[client] This information is used to shape Next.js' roadmap and prioritize features.
[client] You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
[client] https://nextjs.org/telemetry
[client]
</code></pre>
<p>and this is my ingress service</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
<p>and here is my skaffold file</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: moatazemadnaeem/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: "src/**/*.ts"
dest: .
- image: moatazemadnaeem/client
context: client
docker:
dockerfile: Dockerfile
sync:
manual:
- src: "**/*.js"
dest: .
</code></pre>
<p>Everything was working fine until I decided to resize my Docker disk: every image was deleted, all containers, etc...
So I started from the beginning. First I needed to apply the ingress controller:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>After applying it, everything seems to work: the API pod is running and connected to the database.
But when reaching ticketing.dev the website is not responding,
so I troubleshot it and figured out that the ingress controller does not have an external IP; it is stuck in the <code>pending</code> state (<strong>forever</strong>):</p>
<pre><code>ingress-nginx-controller LoadBalancer 10.103.240.204 <pending> 80:31413/TCP,443:30200/TCP 34s
</code></pre>
<p>And here are the logs of the ingress controller; I used this command: <code>kubectl logs -n ingress-nginx ingress-nginx-controller-54d8b558d4-bwjjd</code></p>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.1.1
Build: a17181e43ec85534a6fea968d95d019c5a4bc8cf
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
W0213 04:09:10.888703 7 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0213 04:09:10.888860 7 main.go:223] "Creating API client" host="https://10.96.0.1:443"
I0213 04:09:10.900669 7 main.go:267] "Running in Kubernetes cluster" major="1" minor="22" git="v1.22.4" state="clean" commit="b695d79d4f967c403a96986f1750a35eb75e75f1" platform="linux/arm64"
I0213 04:09:10.982532 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0213 04:09:10.994820 7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0213 04:09:11.003072 7 nginx.go:255] "Starting NGINX Ingress controller"
I0213 04:09:11.009752 7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"10ad2461-56de-4c14-9614-3d341476a118", APIVersion:"v1", ResourceVersion:"3699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0213 04:09:12.206706 7 nginx.go:297] "Starting NGINX process"
I0213 04:09:12.206782 7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I0213 04:09:12.207597 7 nginx.go:317] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0213 04:09:12.207902 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:09:12.234784 7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I0213 04:09:12.236967 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-54d8b558d4-bwjjd"
I0213 04:09:12.309234 7 controller.go:172] "Backend successfully reloaded"
I0213 04:09:12.309306 7 controller.go:183] "Initial sync, sleeping for 1 second"
I0213 04:09:12.309339 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:17:36.404962 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 04:17:36.406104 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"bf9349f3-9e16-44b4-876f-e1ee0de9ea51", APIVersion:"networking.k8s.io/v1", ResourceVersion:"4833", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 04:17:39.645534 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:17:39.739356 7 controller.go:172] "Backend successfully reloaded"
I0213 04:17:39.739891 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 04:19:14.912300 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 04:19:14.914196 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:19:15.132889 7 controller.go:172] "Backend successfully reloaded"
I0213 04:19:15.134516 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
2022/02/13 04:19:17 http: TLS handshake error from 192.168.65.3:63670: remote error: tls: bad certificate
I0213 04:19:18.246782 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:19:18.330122 7 controller.go:172] "Backend successfully reloaded"
I0213 04:19:18.330756 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:19:51.033718 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 04:19:51.035689 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"fe4c4a5c-8186-475e-b862-3574c995394e", APIVersion:"networking.k8s.io/v1", ResourceVersion:"5233", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 04:19:54.271841 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:19:54.382188 7 controller.go:172] "Backend successfully reloaded"
I0213 04:19:54.382524 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 04:36:53.789772 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 04:36:53.792021 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:36:54.084136 7 controller.go:172] "Backend successfully reloaded"
I0213 04:36:54.089358 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:36:57.122506 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:36:57.194378 7 controller.go:172] "Backend successfully reloaded"
I0213 04:36:57.194546 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
2022/02/13 04:37:12 http: TLS handshake error from 192.168.65.3:62628: remote error: tls: bad certificate
I0213 04:37:47.079701 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 04:37:47.081605 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"a538b7de-9b58-4ced-83a2-2f7f91972e3d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"6949", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 04:37:50.324726 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:37:50.466695 7 controller.go:172] "Backend successfully reloaded"
I0213 04:37:50.467301 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 04:41:45.984007 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 04:41:45.986154 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:41:46.239624 7 controller.go:172] "Backend successfully reloaded"
I0213 04:41:46.242229 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:41:49.312445 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:41:49.382286 7 controller.go:172] "Backend successfully reloaded"
I0213 04:41:49.382557 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:42:34.030470 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 04:42:34.032340 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"baeaac48-a580-4334-9c6d-e5ec9b9322fd", APIVersion:"networking.k8s.io/v1", ResourceVersion:"7459", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 04:42:37.264159 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:42:37.350956 7 controller.go:172] "Backend successfully reloaded"
I0213 04:42:37.351720 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 04:51:07.915558 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 04:51:07.917577 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:51:08.145057 7 controller.go:172] "Backend successfully reloaded"
I0213 04:51:08.147466 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 04:51:11.249421 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 04:51:11.326464 7 controller.go:172] "Backend successfully reloaded"
I0213 04:51:11.326711 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 05:02:51.329191 7 store.go:424] "Found valid IngressClass" ingress="default/demo" ingressclass="nginx"
I0213 05:02:51.329962 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"demo", UID:"b2009c54-0c51-46ac-bc00-52946563d25f", APIVersion:"networking.k8s.io/v1", ResourceVersion:"9220", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 05:02:51.335381 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 05:02:51.444871 7 controller.go:172] "Backend successfully reloaded"
I0213 05:02:51.445228 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 05:11:52.100612 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 05:11:52.101829 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"67c8b6ed-6449-4710-bb0c-05132287ce6a", APIVersion:"networking.k8s.io/v1", ResourceVersion:"10007", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 05:11:55.348373 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 05:11:55.485075 7 controller.go:172] "Backend successfully reloaded"
I0213 05:11:55.486547 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 05:16:56.655485 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 05:16:56.655708 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 05:16:56.868807 7 controller.go:172] "Backend successfully reloaded"
I0213 05:16:56.869459 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 05:16:59.992987 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 05:17:00.060763 7 controller.go:172] "Backend successfully reloaded"
I0213 05:17:00.060950 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
2022/02/13 16:10:58 http: TLS handshake error from 192.168.65.3:65258: remote error: tls: bad certificate
2022/02/13 16:11:02 http: TLS handshake error from 192.168.65.3:65322: remote error: tls: bad certificate
2022/02/13 16:11:05 http: TLS handshake error from 192.168.65.3:65360: remote error: tls: bad certificate
I0213 16:12:09.427485 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 16:12:09.428413 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"1d7284a4-9337-4394-97bc-9e50311bfd14", APIVersion:"networking.k8s.io/v1", ResourceVersion:"11403", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 16:12:12.658798 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 16:12:12.768695 7 controller.go:172] "Backend successfully reloaded"
I0213 16:12:12.769331 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 16:34:00.171544 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 16:34:00.177362 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 16:34:00.404447 7 controller.go:172] "Backend successfully reloaded"
I0213 16:34:00.405281 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 16:34:03.508701 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 16:34:03.580855 7 controller.go:172] "Backend successfully reloaded"
I0213 16:34:03.581050 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
2022/02/13 16:34:18 http: TLS handshake error from 192.168.65.3:56440: remote error: tls: bad certificate
2022/02/13 17:19:25 http: TLS handshake error from 192.168.65.3:62284: remote error: tls: bad certificate
I0213 17:19:51.751884 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 17:19:51.753174 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"fb63a4c5-437f-4737-a4c2-4cfc581fe560", APIVersion:"networking.k8s.io/v1", ResourceVersion:"14499", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 17:19:54.977214 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 17:19:55.179125 7 controller.go:172] "Backend successfully reloaded"
I0213 17:19:55.181141 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 17:44:53.550406 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"fb63a4c5-437f-4737-a4c2-4cfc581fe560", APIVersion:"networking.k8s.io/v1", ResourceVersion:"16638", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0213 18:06:01.779080 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 18:06:01.780145 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 18:06:02.062236 7 controller.go:172] "Backend successfully reloaded"
I0213 18:06:02.071438 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 18:06:04.757208 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 18:06:04.758134 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"982dc419-95cb-45cd-938d-6150a5732baa", APIVersion:"networking.k8s.io/v1", ResourceVersion:"18548", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0213 18:06:05.110417 7 controller.go:1083] Service "default/auth-srv" does not have any active Endpoint.
W0213 18:06:05.110441 7 controller.go:1083] Service "default/client-srv" does not have any active Endpoint.
I0213 18:06:05.110548 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 18:06:05.202121 7 controller.go:172] "Backend successfully reloaded"
I0213 18:06:05.202707 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0213 18:11:48.540680 7 controller.go:988] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0213 18:11:48.543520 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 18:11:48.755536 7 controller.go:172] "Backend successfully reloaded"
I0213 18:11:48.758696 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 18:11:51.877652 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 18:11:51.987030 7 controller.go:172] "Backend successfully reloaded"
I0213 18:11:51.987707 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0213 18:11:52.600029 7 store.go:424] "Found valid IngressClass" ingress="default/ingress-srv" ingressclass="nginx"
I0213 18:11:52.603754 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-srv", UID:"cfcc5c54-e596-4fd6-80d0-a8c1689261ff", APIVersion:"networking.k8s.io/v1", ResourceVersion:"19149", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0213 18:11:55.205772 7 controller.go:155] "Configuration changes detected, backend reload required"
I0213 18:11:55.273789 7 controller.go:172] "Backend successfully reloaded"
I0213 18:11:55.274700 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54d8b558d4-bwjjd", UID:"10331660-de0e-4f65-8519-38c834383472", APIVersion:"v1", ResourceVersion:"4068", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
moataznaeem@moatazs-air ticketing %
</code></pre>
<p>After digging and digging I could not find a way to solve this issue, so I would appreciate any help.</p>
| Moataz Emad | <p>After many, many hours of digging I was still stuck, so I decided to re-install Docker and start fresh. Somehow the external IP (localhost) then got assigned to the LoadBalancer, so I tested it and everything is working now.</p>
| Moataz Emad |
<p><a href="https://i.stack.imgur.com/Ecusb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ecusb.png" alt="error be like" /></a></p>
<p>While I'm trying to install IBM mq in the GCP Kubernetes engine using Helm charts, I got an error as shown in above figure. Anyone help me out from this...</p>
<pre class="lang-sh prettyprint-override"><code>Infrastructure: Google Cloud Platform
Kubectl version:
Client Version: v1.18.6
Server Version: v1.16.13-gke.1.
Helm version: v3.2.1+gfe51cd1
helm chart:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
</code></pre>
<p><strong>Helm command:</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ helm install mqa ibm-charts/ibm-mqadvanced-server-dev --version 4.0.0 --set license=accept --set service.type=LoadBalancer --set queueManager.dev.secret.name=mysecret --set queueManager.dev.secret.adminPasswordKey=adminPassword --set security.initVolumeAsRoot=true
</code></pre>
| harish hari | <p>IBM have provided a new sample MQ Helm chart <a href="https://github.com/ibm-messaging/mq-helm" rel="nofollow noreferrer">here</a>. Included are a number of samples for different Kubernetes distributions, and GKE can be found <a href="https://github.com/ibm-messaging/mq-helm/tree/main/samples/GoogleKubernetesEngine" rel="nofollow noreferrer">here</a>. Worth highlighting that this sample deploys IBM MQ in its Cloud Native high availability topology, called NativeHA.</p>
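<p>For reference, a minimal, hedged sketch of installing from that repository on GKE; the release name, namespace and the <code>charts/ibm-mq</code> chart path are assumptions based on the repository layout, so check its README and the GKE sample for the exact values file to use:</p>
<pre><code># Assumptions: chart path charts/ibm-mq, release "mq", namespace "mq"; license value per the chart's README
git clone https://github.com/ibm-messaging/mq-helm.git
cd mq-helm
helm install mq charts/ibm-mq \
  --set license=accept \
  --namespace mq --create-namespace
</code></pre>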
| Callum Jackson |
<p>I understand the basics how containers and how they differ from running virtual machines. I also get that auto scaling when resources are low and how services such as AWS can horizontally scale and provision more resources for you.</p>
<p>What I don't understand however is how containerisation management technologies such as swarm or kubernetes stop over provisioning.</p>
<p>In my mind - you still have to have the resources available in order to add more containers as the container management solution is only managing the containers themselves. Correct?</p>
<p>So if I had an ec2 (in the AWS world) which I was using for my application and kubernetes was running on it to manage containers for my application I would still need to have a auto scaling and have to spin up another ec2 if vm itself was being pushed to capacity by my application.</p>
<p>Perhaps because I've not worked with container orchestration as yet I can't grasp the mechanics of this but in principal I don't see how this works harmoniously.</p>
| tom808 | <p>So when you consider containers you cannot view it as a single application or service per host.</p>
<p>Traditionally people would have an individual or multiple instances all running a single application. With containers, you would have an application per containers. So an individual host instance may be running multiple applications, this makes better use of CPU and memory resources to ensure you are using less hosts across the board.</p>
<p>When you look at optimizing containers, this is when people start to break down larger applications into services and microservices to help distribute key functionality into smaller and more scalable pieces of code.</p>
<p>Depending on the containerisation layer you can also use dynamic port mappings, this would support you having multiple of the same containers on the same host but each with a unique port.</p>
<p>Finally when looking at AWS, if you don't want to be scaling physical hosts a service was released in 2018 and expanded in 2019 to include Kubernetes. This service is <a href="https://aws.amazon.com/fargate/" rel="nofollow noreferrer">Fargate</a>, and allows you to run your cluster in a serverless way.</p>
| Chris Williams |
<p>My rails deployment is not able to connect to the postgres database. Error in the logs:</p>
<pre><code>PG::ConnectionBad (could not connect to server: Connection refused
Is the server running on host "db" (10.0.105.11) and accepting
TCP/IP connections on port 5432?
):
</code></pre>
<p>This is my <code>kubectl get services</code> output:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
load-balancer LoadBalancer 10.0.5.147 60.86.4.33 80:31259/TCP 10m
db ClusterIP 10.0.105.11 <none> 5432/TCP 10m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10m
web ClusterIP 10.0.204.107 <none> 3000/TCP 10m
</code></pre>
<p>db-deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: db
name: db
spec:
replicas: 1
selector:
matchLabels:
app: db
strategy:
type: Recreate
template:
metadata:
labels:
app: db
spec:
containers:
- env:
- name: POSTGRES_DB
value: postgres
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
- name: POSTGRES_USER
value: postgres
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
ports:
- containerPort: 5432
image: postgres
imagePullPolicy: ""
name: db
resources: {}
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: postgres
persistentVolumeClaim:
claimName: postgres
status: {}
</code></pre>
<p>db-service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: db
labels:
app: db
spec:
ports:
- port: 5432
selector:
app: db
tier: database
</code></pre>
<p>config/database.yml:</p>
<pre><code>default: &default
adapter: postgresql
encoding: utf8
host: db
username: postgres
pool: 5
development:
<<: *default
database: myapp_development
test:
<<: *default
database: myapp_test
production:
<<: *default
database: myapp_production
</code></pre>
<p>How do I ensure the postgres instance is running and accepting connections? Did I misconfigure something in the manifest files, or does the database.yml file need to point to a different host?</p>
| usbToaster | <p>The first thing that jumps out at me is that your <code>db</code> service is targeting two selectors <code>app: db</code> and <code>tier: database</code>, however the corresponding <code>deployment.yml</code> only has the <code>db</code> label. You will need to add the <code>tier</code> label to your deployment template metadata so that the service will appropriately target the right pod.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: db
tier: database
name: db
spec:
replicas: 1
selector:
matchLabels:
app: db
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: db
tier: database
</code></pre>
<p>In general, if a service is not connecting to a backend pod, you can easily diagnose the issue with a simple kubectl command.<br>
This will tell you what current selectors are applied to your service </p>
<pre><code>kubectl describe service db
</code></pre>
<p>Then you can fetch the pods that the selectors refer to and make sure that this is returning something.</p>
<pre><code>kubectl get pods -l app=db,tier=database
</code></pre>
<p>Finally, I would recommend using the <em>DNS</em> name of the service set up through kube-proxy instead of the cluster IP address. I tend to view this as more resilient than a cluster IP, as it will be automatically routed if the service's IP ever changes. In your connection string use <code>db:5432</code> instead of <code>10.0.105.11</code>.</p>
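<p>For completeness, a small sketch of how the <code>default</code> block in <code>config/database.yml</code> could spell out both the service DNS name and the port (values here simply mirror the manifests above):</p>
<pre><code>default: &default
  adapter: postgresql
  encoding: utf8
  host: db        # the Service name, resolved by cluster DNS
  port: 5432      # the Service port
  username: postgres
  pool: 5
</code></pre>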
| Patrick Magee |
<p>How do I rewrite the URI and send it to two different services?
With this example from Azure, it routes all traffic to "aks-helloworld" on <a href="https://demo.azure.com/" rel="nofollow noreferrer">https://demo.azure.com/</a>. However, if the URL is <a href="https://demo.azure.com/hello-world-two" rel="nofollow noreferrer">https://demo.azure.com/hello-world-two</a>, the traffic is sent to the service "ingress-demo". This is fine.</p>
<p>The problem is when I request <a href="https://demo.azure.com/hello-world-two/test" rel="nofollow noreferrer">https://demo.azure.com/hello-world-two/test</a>.
How do I request an handler "/test" on the "ingress-demo" service?</p>
<p>Logically you would think to write:
<br>/hello-world-two/* <br>
and <br>
/*
<br>
And this would then send the request to the correct service.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-ingress
namespace: ingress-basic
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- demo.azure.com
secretName: aks-ingress-tls
rules:
- host: demo.azure.com
http:
paths:
- backend:
serviceName: aks-helloworld
servicePort: 80
path: /(.*)
- backend:
serviceName: ingress-demo
servicePort: 80
path: /hello-world-two(/|$)(.*)
</code></pre>
| TryingMyBest | <p>I solved it,
by changing the path to this:</p>
<pre><code> - backend:
serviceName: ingress-demo
servicePort: 80
path: /hello-world-two/?(.*)
</code></pre>
| TryingMyBest |
<p>I am writing a script, where I want to restart kubernetes pods with the scale-down scale-up method</p>
<pre><code>kubectl scale --replicas=0 myPod -n myNamespace
kubectl scale --replicas=3 myPod -n myNamespace
</code></pre>
<p>I would like the script to wait until the pods are <code>Running</code> - so I thought something like</p>
<pre><code>while kubectl get pods --field-selector=status.phase=Running -n myNameSpace | grep -c myPod = 3;
do
sleep 1
echo "."
done
</code></pre>
<p>could work - but no dice. The <code>= 3</code> part doesn't work.
I can't just use</p>
<pre><code>while kubectl get pods --field-selector=status.phase!=Running -n myNameSpace | grep -c myPod > /dev/null
</code></pre>
<p>since the pods start in sequence, and I could get unlucky by querying just as one pod deployed, and others didn't even start.</p>
<p>How can I ensure that the script continues only after all 3 of the pods are <code>Running</code>?</p>
| Milan Smolík | <p>Write your condition in <code>[ ]</code> and capture the command output with backticks or <code>$( )</code>. For example, in your case:</p>
<pre><code>while [ "$(kubectl get pods --field-selector=status.phase=Running -n myNameSpace | grep -c myPod)" != 3 ]
do
sleep 1
echo "wait"
done
echo "All three pods is running and continue your script"
</code></pre>
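<p>As an alternative sketch (not part of the loop above), <code>kubectl wait</code> can block until matching pods are ready; this assumes the pods carry a label such as <code>app=myPod</code>:</p>
<pre><code># Blocks until every pod with the label is Ready, or fails after 5 minutes
kubectl wait --for=condition=Ready pod -l app=myPod -n myNamespace --timeout=300s
</code></pre>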
| Hamid Ostadvali |
<p>I have a kubernetes cluster with 1 master node and 4 worker nodes?</p>
<p>Is it possible to convert this cluster to openshift?</p>
| Ramesh KP | <p>I would guess no. You may plan to gradually move your workloads to a new Openshift cluster.
There's also a <a href="https://docs.openshift.com/container-platform/4.4/migration/migrating_3_4/about-migration.html" rel="nofollow noreferrer">migration tool</a> which is not supporting migration from kubernetes clusters at the moment, but might in the future.</p>
| Gokhan |
<p>I'm running Azure AKS Cluster 1.15.11 with <a href="https://hub.helm.sh/charts/stable/prometheus-operator/8.15.6" rel="nofollow noreferrer">prometheus-operator 8.15.6</a> installed as a helm chart and I'm seeing some different metrics displayed by <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">Kubernetes Dashboard</a> compared to the ones provided by prometheus Grafana.</p>
<p>An application pod which is being monitored has three containers in it. Kubernetes-dashboard shows that the memory consumption for this pod is ~250MB, standard prometheus-operator <a href="https://grafana.com/grafana/dashboards/12120" rel="nofollow noreferrer">dashboard</a> is displaying almost exactly double value for the memory consumption ~500MB.</p>
<p>At first we thought that there might be some misconfiguration on our monitoring setup. Since prometheus-operator is installed as standard helm chart, Daemon Set for node exporter ensures that every node has exactly one exporter deployed so duplicate exporters shouldn't be the reason. However, after migrating our cluster to different node pools I've noticed that when our application is running on <strong>user node pool</strong> instead of <strong>system node pool</strong> metrics does match exactly on both tools. I know that system node pool is running CoreDNS and tunnelfront but I assume these are running as separate components also I'm aware that overall it's not the best choice to run infrastructure and applications in the same node pool.</p>
<p>However, I'm still wondering why running application under <strong>system node pool</strong> causes metrics by prometheus to be doubled?</p>
| efomo | <p>I ran into a similar problem (aks v1.14.6, prometheus-operator v0.38.1) where all my values were multiplied by a factor of 3. Turns out you have to remember to remove the extra endpoints called <code>prometheus-operator-kubelet</code> that are created in the <code>kube-system</code>-namespace during install <em>before</em> you remove / reinstall prometheus-operator since Prometheus aggregates the metric types collected for each endpoint.</p>
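<p>A hedged sketch of how to spot and remove those stray objects; the exact names depend on your Helm release, so list them first:</p>
<pre><code># See which kubelet-scraping services/endpoints exist in kube-system
kubectl get endpoints,services -n kube-system | grep prometheus-operator-kubelet

# Delete the leftovers before reinstalling (names are examples taken from the output above)
kubectl delete service prometheus-operator-kubelet -n kube-system
kubectl delete endpoints prometheus-operator-kubelet -n kube-system
</code></pre>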
<p>Log in to the Prometheus-pod and check the status page. There should be as many endpoints as there are nodes in the cluster, otherwise you may have a surplus of endpoints:</p>
<p><img src="https://i.stack.imgur.com/3eYZt.png" alt="Prometheus status page" /></p>
| Samir D |
<p>I am novice to k8s, so this might be very simple issue for someone with expertise in the k8s.</p>
<p>I am working with two nodes </p>
<ol>
<li>master - 2cpu, 2 GB memory</li>
<li>worker - 1 cpu, 1 GB memory</li>
<li>OS - ubuntu - hashicorp/bionic64</li>
</ol>
<p>I did setup the master node successfully and i can see it is up and running </p>
<pre><code>vagrant@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 29m v1.18.2
</code></pre>
<p>Here is token which i have generated </p>
<pre><code>vagrant@master:~$ kubeadm token create --print-join-command
W0419 13:45:52.513532 16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521
</code></pre>
<p><strong>Issue</strong> - But when i try to join it from the worker node i get</p>
<pre><code>vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521
W0419 13:46:17.651819 15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>Here are the ports which are occupied </p>
<pre><code>10.0.2.15:2379
10.0.2.15:2380
10.0.2.15:68
</code></pre>
<p>Note i am using CNI from - </p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
| Rahul Wagh | <p>It worked for me using this <strong>--apiserver-advertise-address</strong>:</p>
<p><code>sudo kubeadm init --apiserver-advertise-address=172.16.28.10 --pod-network-cidr=192.168.0.0/16</code></p>
<p><code>kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml</code></p>
<p>I have used calico and worked for me.</p>
<p>In Member node for join:</p>
<p><code>kubeadm join 172.16.28.10:6443 --token 2jm9hd.o2gulx4x1b8l1t5d \ --discovery-token-ca-cert-hash sha256:b8b679e86c4d228bfa486086f18dcac4760e5871e8dd023ec166acfd93781595</code></p>
| Aranya |
<p>I cannot seem to figure out why I couldn't connect to an endpoint I have exposed via NodePort. While the container is running as expected when running <code>kubectl logs <pod></code>, I am unable to reach:</p>
<p><code>http://<node ip>:30100/<valid endpoint form container></code></p>
<p>where the <code><node ip></code> is the ip from running <code>kubectl get nodes -o wide</code> under <code>INTERNAL IP</code>.</p>
<p>I am unsure whether the problem resides in the deployment yaml. Below is my single deployment that I've applied to the cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-app-deployment
labels:
app: auth-app
spec:
replicas: 1
selector:
matchLabels:
app: auth-app
template:
metadata:
labels:
app: auth-app
spec:
containers:
- name: auth-app
image: <working image>
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: auth-app-service
spec:
type: NodePort
selector:
app: auth-app
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30100
</code></pre>
<p>Also, here are the results from running <code>kubectl get all</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/auth-app-deployment-58cd68d8cf-mjpph 1/1 Running 0 29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/auth-app-service NodePort 10.104.172.176 <none> 80:30100/TCP 138m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 150m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/auth-app-deployment 1/1 1 1 138m
NAME DESIRED CURRENT READY AGE
replicaset.apps/auth-app-deployment-58cd68d8cf 1 1 1 29m
</code></pre>
<p>PS: I am using <code>minikube</code>.</p>
| chandler | <p>As you are using a NodePort service you can't use the cluster IP, which is different from the node IPs.
Assuming you have only one node (because you are using minikube), you need to use the IP you get from <code>kubectl describe nodes</code> under the Addresses section (the InternalIP value).</p>
<p>Your URL will be <code>http://<node_ip>:30100/<valid endpoint from container></code></p>
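<p>A quick hedged sketch for looking that IP up and testing the endpoint; with minikube you can also shortcut this with <code>minikube ip</code> or <code>minikube service</code>:</p>
<pre><code># Read the node's InternalIP
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'

# minikube-specific equivalents
minikube ip
minikube service auth-app-service --url

# Then hit the NodePort (replace <node_ip> and the path with your own)
curl http://<node_ip>:30100/<your-endpoint>
</code></pre>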
| Roberto Yoc |
<p>I'm able to install Elasticsearch and Kibana; both are up and running. In the Kibana dashboard the APM server is set up, and indices are showing up.</p>
<p>I am getting the following error for APM-Agent when I trace the log.
ERROR co.elastic.apm.agent.report.IntakeV2ReportingEventHandler - Error trying to connect to APM Server. Some details about SSL configurations corresponding the current connection are logged at INFO level.</p>
<p>ERROR co.elastic.apm.agent.report.IntakeV2ReportingEventHandler - Failed to handle event of type JSON_WRITER withthis error: connect timed out</p>
<p>APM-Agent Yaml File</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
namespace: blogdemodeployments
spec:
selector:
matchLabels:
app: azuretest
template:
metadata:
labels:
app: azuretest
spec:
containers:
- name: apm-agent-container
image: dockerid/application-service
volumeMounts:
- name: shared-data
mountPath: /usr/share/app/
ports:
- containerPort: 6000
name: http
protocol: TCP
env:
- name: SERVER_URL
value: "http://40.83.185.238:8200"
- name: filebeat-container
image: docker.elastic.co/beats/filebeat:7.10.0
volumeMounts:
- name: shared-data
mountPath: /usr/share/filebeat/filebeat.yml
volumes:
- name: shared-data
azureFile:
secretName: storage-secret
shareName: myfileshare
readOnly: false
---
kind: Service
apiVersion: apps/v1
metadata:
name: apmfb
namespace: blogdemodeployments
spec:
type: LoadBalancer
selector:
app: apmfb
ports:
- name: http
protocol: TCP
port: 6000
targetPort: 6000
</code></pre>
<p>ElasticSearch Yaml</p>
<pre><code>---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
labels:
component: elasticsearch
spec:
version: 7.10.2
http:
service:
spec:
type: LoadBalancer
nodeSets:
- name: default
count: 1
config:
node.master: true
node.data: true
node.ingest: true
node.store.allow_mmap: false
</code></pre>
<p>Kibana Yaml</p>
<pre><code>---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
spec:
version: 7.10.2 #Make sure Kibana and Elasticsearch are on the same version.
http:
service:
spec:
type: LoadBalancer #Adds a External IP
tls:
selfSignedCertificate:
disabled: true
count: 1
elasticsearchRef:
name: quickstart
</code></pre>
<p>APM Server Yaml</p>
<pre><code>---
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
name: apm-new-quickstart
namespace: default
spec:
version: 7.10.2
count: 1
http:
service:
spec:
type: LoadBalancer
tls:
selfSignedCertificate:
disabled: true
config:
output:
elasticsearch:
enabled: true
hosts: ["https://52.224.33.53:9200"]
username: "elastic"
password: "XXXXXXXXXXXXXXXXXXXX"
logging:
level: debug
to_files: false
to_stderr: true
apm-server:
hosts: "0.0.0.0:8200"
</code></pre>
| Jack Smith | <p>That error indicates the agent can't connect to apm-server. <code>SERVER_URL</code> should be <code>ELASTIC_APM_SERVER_URL</code> in the apm-agent-container env.</p>
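<p>A minimal sketch of the corrected env entry in the deployment above; only the variable name changes, the address stays the one from the question:</p>
<pre><code>env:
  - name: ELASTIC_APM_SERVER_URL   # the Elastic APM Java agent reads this, not SERVER_URL
    value: "http://40.83.185.238:8200"
</code></pre>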
| Gil Raphaelli |
<p>I have created a Kubernetes cluster on my virtual machine and I have been trying to expose this to Internet with my own domain(for eg, <a href="http://www.mydomain.xyz" rel="nofollow noreferrer">www.mydomain.xyz</a>). I have created an ingress resource as below and I've also modified kubelet configuration to have my domain name. All my pods and services are created in this domain name (Eg, default.svc.mydomain.xyz)</p>
<pre><code>root@master-1:~# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> www.mydomain.xyz 192.168.5.11 80 5d20h
root@master-1:~# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.59.116 192.168.5.11 443:30740/TCP,80:31894/TCP 6d21h
</code></pre>
<p>I tried to add A record in my domain DNS page as below and could not add it.</p>
<p><a href="https://i.stack.imgur.com/2XEFs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2XEFs.png" alt="enter image description here" /></a></p>
<p>This is where I get stuck and unable to proceed further. Do I need to change anything in the cluster to add this namespace in "Domain DNS configuration" (Hostinger) or anything to be added in master node.</p>
<p>How does the domain that I own redirect all the traffic to my kubernetes cluster?</p>
<p>Any help would be highly appreciated.</p>
| Prasa2166 | <p><strong>You cannot expose your Kubernetes cluster like you've tried.</strong></p>
<p>I strongly advise to use a different Kubernetes solution as <code>minikube</code> is more a tool to experiment and develop as said in the official site:</p>
<blockquote>
<p>Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day.</p>
<p><em><a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="noreferrer">Kubernetes.io: Learning environment: Minikube </a></em></p>
</blockquote>
<p>Please take a look on other solutions like:</p>
<ul>
<li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="noreferrer">Kubernetes.io: Production environment: Create cluster kubeadm</a></li>
<li><a href="https://github.com/kubernetes-sigs/kubespray" rel="noreferrer">Github.com: Kubespray</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine" rel="noreferrer">Cloud.google.com: Kubernetes Engine</a></li>
<li><a href="https://aws.amazon.com/eks/" rel="noreferrer">Aws.amazon.com: EKS</a></li>
</ul>
<hr />
<p>You have several things to remember when trying to expose Kubernetes to the Internet from your private network.</p>
<ul>
<li>Access to public IP</li>
<li>Ability to port forward traffic inside your network</li>
<li>Allow traffic to your <code>minikube</code> instance</li>
<li>Combining all of the above</li>
</ul>
<blockquote>
<p><em>Why do I think it's <code>minikube</code> instance?</em></p>
<p>You have 2 network interfaces:</p>
<ul>
<li><code>NAT</code></li>
<li><code>Host-only</code></li>
</ul>
<p>This interfaces are getting created when you run your <code>minikube</code> with Virtualbox</p>
</blockquote>
<h3>Access to public IP</h3>
<p>Access to public IP is crucial. <strong>Without it you will not be able to expose your services to the Internet.</strong> There are some exclusions but I will not focus on them here.</p>
<p>In the DNS panel you've entered the private IP address. You cannot do that unless the DNS server is intended to resolve only local queries (your private network). To allow other users to connect to your Kubernetes cluster you need to provide a public IP address
like <code>94.XXX.XXX.XXX</code>.</p>
<p>You can read more about differences between public and private ip addresses here:</p>
<ul>
<li><a href="https://help.keenetic.com/hc/en-us/articles/213965789-What-is-the-difference-between-a-public-and-private-IP-address-" rel="noreferrer">Help.keenetic.com: What is the difference between a public and private IP address</a></li>
</ul>
<h3>Ability to port forward traffic inside your network</h3>
<p>If you have your public IP you will also need to check if the incoming connections are not blocked by other devices like ISP's firewalls or your router. <strong>If they are blocked you will be unable to expose your services.</strong> To expose your services to the Internet you will need to use "port-forwarding".</p>
<p>You can read more about it here:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Port_forwarding" rel="noreferrer">Wikipedia.org: Port forwarding</a></li>
</ul>
<h3>Allow traffic to your <code>minikube</code> instance</h3>
<p>As I previously mentioned: When you create your <code>minikube</code> instance with Virtualbox you will create below network interfaces:</p>
<ul>
<li><code>NAT</code>- interface which will allow your VM to access the Internet. This connection cannot be used to expose your services</li>
<li><code>Host-only-network-adapter</code> - interface created by your host which allows to communicate within the interface. It means that your host and other vm's with this particular adapter could connect with each other. It's designed for internal usage.</li>
</ul>
<p>You can read more about Virtualbox networking here:</p>
<ul>
<li><a href="https://www.virtualbox.org/manual/ch06.html" rel="noreferrer">Virtualbox.org: Virtual Networking</a></li>
</ul>
<p>I've managed to find a <strong>workaround</strong> to allow connections from outside your laptop/pc to your <code>minikube</code> instance. You will need to change the network interface in the settings of your <code>minikube</code> instance from <strong><code>Host-only-network-adapter</code></strong> to <strong><code>Bridged Adapter</code></strong> (2nd adapter). This will work as if another device were connected to your physical network. Please make sure that this bridged adapter is bound to your Ethernet NIC. <code>Minikube</code> should then pick up an IP address from your physical network.</p>
<blockquote>
<p>You will also need to change your <code>.kube/config</code> as it will have the old/wrong IP address!</p>
</blockquote>
<p>After that you should be able to connect to your <code>Ingress</code> resource by IP accessible in your physical network.</p>
<hr />
<h3>Combining all of the above</h3>
<p>Remembering the information above, let's assume.</p>
<ul>
<li>You have a public IP address associated on the WAN interface of your router (for example <code>94.100.100.100</code>).</li>
<li>You create a <code>A</code> record in DNS pointing to your domain name to <code>94.100.100.100</code>.</li>
<li>You create a port-forwarding rule from port <code>80</code> to port <code>80</code> of the <code>minikube</code> bridged adapter's IP address.</li>
</ul>
<p>After that you should be able to connect from outside to your <code>Ingress</code> resource.</p>
<p>The request will first contact DNS server for IP address associated with the domain. Then it will send request to this IP address (which is presumably your router). Your router will port-forward this connection to your <code>minikube</code> instance.</p>
| Dawid Kruk |
<p>I want to use prometheus in EKS on AWS Fargate<br />
I follow this.<br />
<a href="https://aws.amazon.com/jp/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/" rel="nofollow noreferrer">https://aws.amazon.com/jp/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/</a><br />
but I can't create persistent volume claims.<br />
this is prometheus-storageclass.yaml.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: prometheus
namespace: prometheus
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
debug
</code></pre>
<p>Can I Use aws-ebs in provisioner field?</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: prometheus-server
namespace: prometheus
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Gi
storageClassName: prometheus
</code></pre>
<p>I apply sc and pvc</p>
<p>When I apply the PVC, the PVC is still pending and I get the following message</p>
<p>Failed to provision volume with StorageClass "prometheus": error finding candidate zone for pvc: no instances returned</p>
| 多田信洋 | <p>I forgot to create the node group.</p>
<pre><code>eksctl create nodegroup --cluster=myClusterName
</code></pre>
| 多田信洋 |
<p>Revisiting the data locality for Spark on Kubernetes question: if the Spark pods are colocated on the same nodes as the HDFS data node pods then does data locality work ?</p>
<p>The Q&A session here: <a href="https://www.youtube.com/watch?v=5-4X3HylQQo" rel="nofollow noreferrer">https://www.youtube.com/watch?v=5-4X3HylQQo</a> seems to suggest it doesn't.</p>
| JHI Star | <p>Locality is an issue for Spark on Kubernetes. Basic data locality does work if the Kubernetes provider supplies the network topology plugins that are required to resolve where the data is and where the Spark nodes should run, <em>and you have built Kubernetes to include the <a href="https://github.com/apache-spark-on-k8s/kubernetes-HDFS" rel="nofollow noreferrer">code here</a></em>.</p>
<p>There is a method to test <a href="https://github.com/apache-spark-on-k8s/kubernetes-HDFS/blob/master/topology/README.md" rel="nofollow noreferrer">this data locality</a>. I have copied it here for completeness:</p>
<p>Here's how one can check if data locality in the namenode works.</p>
<p>Launch a HDFS client pod and go inside the pod.</p>
<pre><code>$ kubectl run -i --tty hadoop --image=uhopper/hadoop:2.7.2
--generator="run-pod/v1" --command -- /bin/bash
</code></pre>
<p>Inside the pod, create a simple text file on HDFS.</p>
<pre><code>$ hadoop fs
-fs hdfs://hdfs-namenode-0.hdfs-namenode.default.svc.cluster.local
-cp file:/etc/hosts /hosts
</code></pre>
<p>Set the number of replicas for the file to the number of your cluster nodes. This ensures that there will be a copy of the file in the cluster node that your client pod is running on. Wait some time until this happens.</p>
<pre><code>$ hadoop fs -setrep NUM-REPLICAS /hosts
</code></pre>
<p>Run the following hdfs cat command. From the debug messages, see which datanode is being used. Make sure it is your local datanode. (You can get this from $ kubectl get pods hadoop -o json | grep hostIP. Do this outside the pod)</p>
<pre><code>$ hadoop --loglevel DEBUG fs
-fs hdfs://hdfs-namenode-0.hdfs-namenode.default.svc.cluster.local
-cat /hosts ... 17/04/24 20:51:28 DEBUG hdfs.DFSClient: Connecting to datanode 10.128.0.4:50010 ...
</code></pre>
<p>If not, you should check if your local datanode is even in the list from the debug messages above. If it is not, then this is because step (3) did not finish yet. Wait more. (You can use a smaller cluster for this test if that is possible.)</p>
<pre><code>17/04/24 20:51:28 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{ fileLength=199 underConstruction=false blocks=[LocatedBlock{BP-347555225-10.128.0.2-1493066928989:blk_1073741825_1001; getBlockSize()=199; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[10.128.0.4:50010,DS-d2de9d29-6962-4435-a4b4-aadf4ea67e46,DISK], DatanodeInfoWithStorage[10.128.0.3:50010,DS-0728ffcf-f400-4919-86bf-af0f9af36685,DISK], DatanodeInfoWithStorage[10.128.0.2:50010,DS-3a881114-af08-47de-89cf-37dec051c5c2,DISK]]}] lastLocatedBlock=LocatedBlock{BP-347555225-10.128.0.2-1493066928989:blk_1073741825_1001;
</code></pre>
<p>Repeat the hdfs cat command multiple times. Check if the same datanode is being consistently used.</p>
| Matt Andruff |
<p>Can somebody point in the direction on how to enable <code>PodNodeSelector</code> admission controller in EKS version 1.15 ?</p>
<p>I'm trying to achieve what is explained in <a href="https://stackoverflow.com/questions/52487333/how-to-assign-a-namespace-to-certain-nodes">this</a> link, but how to do this in Managed Kubernetes like EKS where you don't have access to control plane components.</p>
| Aziz Zoaib | <p><strong>In fact: You cannot enable <code>PodNodeSelector</code> in <code>EKS</code>.</strong></p>
<p>The fact that <code>EKS</code> is a Managed Kubernetes solution denies any reconfiguration of control plane components. That's why you cannot enable <code>PodNodeSelector</code>.</p>
<p>There is an official documentation about enabled admission controllers in EKS: <a href="https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html" rel="noreferrer">Aws.amazon.com: EKS: Userguide: Platform-versions</a> </p>
<p>There is ongoing feature request for <code>PodNodeSelector</code> here (as well as some workarounds): <a href="https://github.com/aws/containers-roadmap/issues/304" rel="noreferrer">Github.com: AWS: Issue: 304</a>. </p>
<p>There is an answer on StackOverflow with similar question <a href="https://stackoverflow.com/a/51445477/12257134">How to enable admission controllers in EKS</a></p>
| Dawid Kruk |
<p>I have a Gitlab runner using a K8s executor. But when running the pipeline I am getting below error</p>
<pre><code>Checking for jobs... received job=552009999
repo_url=https://gitlab.com/deadbug/rns.git runner=ZuT1t3BJ
WARNING: Namespace is empty, therefore assuming 'default'. job=552009999 project=18763260
runner=ThT1t3BJ
ERROR: Job failed (system failure): secrets is forbidden: User "deadbug" cannot create resource
"secrets" in API group "" in the namespace "default" duration=548.0062ms job=552009999
</code></pre>
<p>From the error message, I understand the namespace needs to be updated. I specified the namespace in the GitLab variables</p>
<p><a href="https://i.stack.imgur.com/VCRua.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VCRua.png" alt="enter image description here"></a></p>
<p>But after this also, pipeline is failing with the above error message. How do I change the namespace for the runner ?</p>
| deadbug | <p>This seems to be linked to the permissions of the service account rather than the namespace directly. If you use GitLab's Kubernetes integration, you should not override the namespace, as GitLab will create one for you.</p>
<p>Make sure the service account you added to GitLab has the correct role. From <a href="https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html</a>:</p>
<blockquote>
<p>When GitLab creates the cluster, a gitlab service account with cluster-admin privileges is created in the default namespace to manage the newly created cluster</p>
</blockquote>
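<p>If that role is missing, a hedged sketch for granting it; the account name and namespace here (<code>default:gitlab</code>, matching the quote above) are assumptions to adjust to whatever account you registered in GitLab:</p>
<pre><code># Bind the service account used by GitLab to cluster-admin (names are assumptions)
kubectl create clusterrolebinding gitlab-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitlab
</code></pre>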
| Lawson30 |
<p>To exec into a container in a pod, I use the following two commands (note the <code>template</code> flag in the first command trims the output to print just the name of the pods):</p>
<pre><code>$ kubectl get pods --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
app-api-6421cdf4fd-x9tbk
app-worker-432f86f54-fknxw
app-frontend-87dd65d49c-6b4mn
app-rabbit-413632c874-s2ptw
$ kubectl exec -it app-api-6421cdf4fd-x9tbk -- bash
</code></pre>
<p>It would be nice to exec into the container without having to discover the random guid at the end of the pod name every single time. How can I do this?</p>
| Johnny Metz | <p>You can exec into a pod using the deployment <br>
You can use the following command:</p>
<pre><code>kubectl exec -it deploy/<deployment-name> -- bash
</code></pre>
| Ahmed Akrout |
<p>In one of our environments, a few Kubernetes pods are restarting very frequently and we are trying to find the reason by collecting heap and thread dumps.
Any idea how to collect those if the pods are failing very often?</p>
| ik009 | <p>You can try mounting a host volume into the pod, and then configure your app to write its dumps to the path that is mapped to the host volume. Alternatively, use any other way to save the heap dump in a persistent place.</p>
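<p>A hedged sketch of what that could look like for a JVM-based pod; the image, paths and JVM flags are assumptions to adapt to your application:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest            # assumption: your application image
      env:
        - name: JAVA_TOOL_OPTIONS     # write a heap dump to the mounted path on OOM
          value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"
      volumeMounts:
        - name: dumps
          mountPath: /dumps
  volumes:
    - name: dumps
      hostPath:
        path: /var/log/app-dumps      # assumption: directory on the node
        type: DirectoryOrCreate
</code></pre>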
| sam |
<p>I would like to create a kubernetes CronJob scheduler which can invoke .netcore console application every minute.</p>
<p><strong>CronJob Spec [cronjob-poc.yml]:</strong></p>
<pre><code>apiVersion: batch/v1beta1   # assumed API version; this required field was missing above
kind: CronJob
metadata:
name: cronjob-poc
spec:
schedule: "*/1 * * * *" #Every Minute
jobTemplate:
spec:
template:
spec:
containers:
- name: cronjob-poc
image: cronjobpoc:dev
command: ["/usr/local/bin/dotnet", "/app/CronJobPoc.dll"]
restartPolicy: OnFailure
</code></pre>
<p><strong>Kind Commands:</strong></p>
<pre><code>kind create cluster --name=cronjob-poc
kind load docker-image cronjobpoc:dev --name=cronjob-poc
kubectl apply -f .\cronjob-poc.yml
</code></pre>
<p><strong>.netcore is a very simple app which just prints hello</strong></p>
<pre><code>using System;
namespace CronJobPoc{
class Program{
static void Main(string[] args)
{
Console.WriteLine($"{DateTime.Now} - Hello World!");
}
}
}
</code></pre>
<p>Docker File:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["CronJobPoc/CronJobPoc.csproj", "CronJobPoc/"]
RUN dotnet restore "CronJobPoc/CronJobPoc.csproj"
COPY . .
WORKDIR "/src/CronJobPoc"
RUN dotnet build "CronJobPoc.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CronJobPoc.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CronJobPoc.dll"]
</code></pre>
<p>When I do <code>kubectl get pods</code> I see below information.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
cronjob-poc-1589463060-9dp8p 0/1 CrashLoopBackOff 4 2m51s
</code></pre>
<p>When I try to see logs using <code>kubectl logs cronjob-poc-1589463060-9dp8p</code> I do not see anything. The command returns empty.</p>
<p>Not sure how to see the logs. There is something wrong but not sure what?</p>
<p>I am expecting to see the output " - Hello World!" somewhere in the log. Not sure how to check what went wrong and where can the logs with proper error messages can be seen.</p>
<p>Any quick help on this would be highly appreciated.</p>
<p>I also tried using command argument as shown below in cronjob-poc.yml. I get CrashLoopBackOff status for the pod</p>
<pre><code>command:
- /bin/sh
- -c
- echo Invoking CronJobPoc.dll ...; /usr/local/bin/dotnet CronJobPoc.dll
</code></pre>
<p>When I try to check the log using kubectl logs , I see /bin/sh: 1: /usr/local/bin/dotnet: not found</p>
| userpkp | <p>I was trying to dockerize a <code>dotnetcore</code> app using Visual Studio, which was not installing the SDK. When I dockerized it manually using the command <code>docker build -t cronjobpoc .</code>, it worked.</p>
<p>Also, in the shell command I simply used <code>dotnet CronJobPoc.dll</code>, as dotnet is already on the path.</p>
<p>command:</p>
<pre><code>/bin/sh -c echo Invoking CronJobPoc.dll ...; dotnet CronJobPoc.dll
</code></pre>
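<p>In the CronJob manifest from the question, that corresponds to something like the following (same list form as before, only the dotnet path changes):</p>
<pre><code>command:
  - /bin/sh
  - -c
  - echo Invoking CronJobPoc.dll ...; dotnet CronJobPoc.dll
</code></pre>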
| userpkp |
<p>I understand that services use a selector to identify which pods to route traffic to by their labels.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc
spec:
ports:
- name: tcp
protocol: TCP
port: 443
targetPort: 443
selector:
app: nginx
</code></pre>
<p>Thats all well and good.</p>
<p>Now what is the difference between this selector and the one of the <code>spec.selector</code> from the deployment. I understand that it is used so that the deployment can match and manage its pods.</p>
<p>I dont understand however why i need the extra <code>matchLabels</code> declaration and cant just do it like in the service. Whats the use of this semantically?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
</code></pre>
<p>Thanks in advance</p>
| Tobias | <p>In the <code>Service</code>'s <code>spec.selector</code>, you can identify which pods to route traffic to only by their labels.</p>
<p>On the other hand, the <code>Deployment</code>'s <code>spec.selector</code> identifies which pods the Deployment (through its ReplicaSet) manages, and it gives you two ways to match them: <code>matchLabels</code> and <code>matchExpressions</code>. The extra <code>matchLabels</code> level exists because this selector is a <code>LabelSelector</code> object that can also carry <code>matchExpressions</code>, whereas the Service's <code>spec.selector</code> is just a flat map of equality-based labels.</p>
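<p>For illustration, a small sketch of a <code>matchExpressions</code> selector, something a plain Service selector cannot express (the label key and values are just examples):</p>
<pre><code>selector:
  matchLabels:
    app: nginx
  matchExpressions:
    - key: tier            # example label key
      operator: In
      values:
        - frontend
        - canary
</code></pre>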
| tpaz1 |
<p>Currently I have Bitnami Discourse in Docker with the data stored in the pod, and we scale the pods up from 1 to many. Now I am facing an error with media uploads: the pods' data is not in sync, so I have to mount a single volume shared between the pods. But I want to do that with a persistent volume claim in Kubernetes, using an azure-storage-class, not a Docker volume. </p>
| Swatantra Gupta | <p>I presume you are using your own Kubernetes manifest, as Bitnami isn't supporting a Discourse chart currently. Mainly, what I understand is that you need a volume that can be accessed by many pods.</p>
<p>I think you would need a <code>read-write-many</code> (ReadWriteMany) volume; this is not usually supported by cloud providers' default block storage, but Azure Files does support it, so let's give it a try in Azure.</p>
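<p>A hedged sketch of a <code>ReadWriteMany</code> claim using the built-in <code>azurefile</code> storage class on AKS (the claim name and size are examples):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: discourse-uploads      # example name
spec:
  accessModes:
    - ReadWriteMany            # shared by all discourse pods
  storageClassName: azurefile  # built-in AKS class backed by Azure Files
  resources:
    requests:
      storage: 10Gi
</code></pre>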
<p>I hope it helps. </p>
| Daniel Arteaga Barba |
<p>I’m trying to set some environment variables in k8s deployment and use them within the <code>application.properties</code> in my spring boot application, but it looks like I'm doing something wrong because spring is not reading those variables, although when checking the <code>env vars</code> on the pod, all the vars are set correctly.</p>
<p>The error log from the container:</p>
<pre><code>org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port...
</code></pre>
<p>Any help will be appreciated.</p>
<p><strong>application.properties:</strong></p>
<pre><code>spring.datasource.url=jdbc:postgresql://${DB_URL}:${DB_PORT}/${DB_NAME}
spring.datasource.username=${DB_USER_NAME}
spring.datasource.password=${DB_PASSWORD}
</code></pre>
<p><strong>DockerFile</strong></p>
<pre><code>FROM openjdk:11-jre-slim-buster
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p><strong>deployment.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-deployment
labels:
app: api
spec:
replicas: 1
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: .../api
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
env:
- name: DB_URL
value: "posgres"
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: postgres-config
key: dbName
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: db-secret
key: dbUserName
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: dbPassword
</code></pre>
| Anna | <p>The DockerFile was wrong.
Everything is working fine after changing the DockerFile to this:</p>
<pre><code>FROM maven:3.6.3-openjdk-11-slim as builder
WORKDIR /app
COPY pom.xml .
COPY src/ /app/src/
RUN mvn install -DskipTests=true
FROM adoptopenjdk/openjdk11:jre-11.0.8_10-alpine
COPY --from=builder /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
</code></pre>
| Anna |
<h2>Sincere apologies for this lengthy posting.</h2>
<p>I have a 4 node Kubernetes cluster with 1 x master and 3 x worker nodes. I connect to the kubernetes cluster using kubeconfig, since yesterday I was not able to connect using kubeconfig.</p>
<p><code>kubectl get pods</code> was giving an error "The connection to the server api.xxxxx.xxxxxxxx.com was refused - did you specify the right host or port?"</p>
<p>In the kubeconfig server name is specified as <a href="https://api.xxxxx.xxxxxxxx.com" rel="nofollow noreferrer">https://api.xxxxx.xxxxxxxx.com</a></p>
<h2>Note:</h2>
<p>Please note as there were too many https links, I was not able to post the question. So I have renamed https:// to https:-- to avoid the links in the background analysis section.</p>
<p>I tried to run <code>kubectl</code> from the master node and received similar error
The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
<p>Then checked kube-apiserver docker and it was continuously exiting / Crashloopbackoff.</p>
<p><code>docker logs <container-id of kube-apiserver></code> shows below errors</p>
<blockquote>
<p>W0914 16:29:25.761524 1 clientconn.go:1251] grpc:
addrConn.createTransport failed to connect to {127.0.0.1:4001 0
}. Err :connection error: desc = "transport: authentication
handshake failed: x509: certificate has expired or is not yet valid".
Reconnecting... F0914 16:29:29.319785 1 storage_decorator.go:57]
Unable to create storage backend: config (&{etcd3 /registry
{[https://127.0.0.1:4001]
/etc/kubernetes/pki/kube-apiserver/etcd-client.key
/etc/kubernetes/pki/kube-apiserver/etcd-client.crt
/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt} false true
0xc000266d80 apiextensions.k8s.io/v1beta1 5m0s 1m0s}), err
(context deadline exceeded)</p>
</blockquote>
<p><code>systemctl status kubelet</code> --> was giving below errors</p>
<blockquote>
<p>Sep 14 16:40:49 ip-xxx-xxx-xx-xx kubelet[2411]: E0914 16:40:49.693576
2411 kubelet_node_status.go:385] Error updating node status, will
retry: error getting node
"ip-xxx-xxx-xx-xx.xx-xxxxx-1.compute.internal": Get
<a href="https://127.0.0.1/api/v1/nodes/ip-xxx-xxx-xx-xx.xx-xxxxx-1.compute.internal?timeout=10s" rel="nofollow noreferrer">https://127.0.0.1/api/v1/nodes/ip-xxx-xxx-xx-xx.xx-xxxxx-1.compute.internal?timeout=10s</a>:
dial tcp 127.0.0.1:443: connect: connection refused</p>
</blockquote>
<p>Note: ip-xxx-xx-xx-xxx --> internal IP address of aws ec2 instance.</p>
<h2>Background Analysis:</h2>
<p>Looks there was some issue with the cluster on 7th Sep 2020 and both kube-controller and kube-scheduler dockers exited and restarted. I believe since then kube-apiserver is not running or because of kube-apiserver, those dockers restarted. The kube-apiserver server certificate expired in July 2020 but access via kubectl was working until 7th Sep.</p>
<p>Below are the <code>docker logs from the exited kube-scheduler</code> docker container:</p>
<blockquote>
<p>I0907 10:35:08.970384 1 scheduler.go:572] pod
default/k8version-1599474900-hrjcn is bound successfully on node
ip-xx-xx-xx-xx.xx-xxxxxx-x.compute.internal, 4 nodes evaluated, 3
nodes were found feasible I0907 10:40:09.286831 1
scheduler.go:572] pod default/k8version-1599475200-tshlx is bound
successfully on node ip-1x-xx-xx-xx.xx-xxxxxx-x.compute.internal, 4
nodes evaluated, 3 nodes were found feasible I0907 10:44:01.935373<br />
1 leaderelection.go:263] failed to renew lease
kube-system/kube-scheduler: failed to tryAcquireOrRenew context
deadline exceeded E0907 10:44:01.935420 1 server.go:252] lost
master lost lease</p>
</blockquote>
<p>Below are the docker logs from exited kube-controller docker container:</p>
<blockquote>
<p>I0907 10:40:19.703485 1 garbagecollector.go:518] delete object
[v1/Pod, namespace: default, name: k8version-1599474300-5r6ph, uid:
67437201-f0f4-11ea-b612-0293e1aee720] with propagation policy
Background I0907 10:44:01.937398 1 leaderelection.go:263] failed
to renew lease kube-system/kube-controller-manager: failed to
tryAcquireOrRenew context deadline exceeded E0907 10:44:01.937506<br />
1 leaderelection.go:306] error retrieving resource lock
kube-system/kube-controller-manager: Get https:
--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s:
net/http: request canceled (Client.Timeout exceeded while awaiting
headers) I0907 10:44:01.937456 1 event.go:209]
Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system",
Name:"kube-controller-manager",
UID:"ba172d83-a302-11e9-b612-0293e1aee720", APIVersion:"v1",
ResourceVersion:"85406287", FieldPath:""}): type: 'Normal' reason:
'LeaderElection' ip-xxx-xx-xx-xxx_1dd3c03b-bd90-11e9-85c6-0293e1aee720
stopped leading F0907 10:44:01.937545 1
controllermanager.go:260] leaderelection lost I0907 10:44:01.949274<br />
1 range_allocator.go:169] Shutting down range CIDR allocator I0907
10:44:01.949285 1 replica_set.go:194] Shutting down replicaset
controller I0907 10:44:01.949291 1 gc_controller.go:86] Shutting
down GC controller I0907 10:44:01.949304 1
pvc_protection_controller.go:111] Shutting down PVC protection
controller I0907 10:44:01.949310 1 route_controller.go:125]
Shutting down route controller I0907 10:44:01.949316 1
service_controller.go:197] Shutting down service controller I0907
10:44:01.949327 1 deployment_controller.go:164] Shutting down
deployment controller I0907 10:44:01.949435 1
garbagecollector.go:148] Shutting down garbage collector controller
I0907 10:44:01.949443 1 resource_quota_controller.go:295]
Shutting down resource quota controller</p>
</blockquote>
<p>Below are the docker logs from kube-controller since the restart (7th Sep):</p>
<blockquote>
<p>E0915 21:51:36.028108 1 leaderelection.go:306] error retrieving
resource lock kube-system/kube-controller-manager: Get
https:--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:51:40.133446 1 leaderelection.go:306] error retrieving
resource lock kube-system/kube-controller-manager: Get
https:--127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s:
dial tcp 127.0.0.1:443: connect: connection refused</p>
</blockquote>
<p>Below are the docker logs from kube-scheduler since the restart (7th Sep):</p>
<blockquote>
<p>E0915 21:52:44.703587 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node:
Get <a href="https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0" rel="nofollow noreferrer">https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0</a>: dial
tcp 127.0.0.1:443: connect: connection refused E0915 21:52:44.704504<br />
1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed
to list *v1.ReplicationController: Get
https:--127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.705471 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service:
Get https:--127.0.0.1/api/v1/services?limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.706477 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1.ReplicaSet: Get https:--127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.707581 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1.StorageClass: Get https:--127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.708599 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1.PersistentVolume: Get https:--127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.709687 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1.StatefulSet: Get https:--127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.710744 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1.PersistentVolumeClaim: Get https:--127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.711879 1 reflector.go:126]
k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list
*v1.Pod: Get https:--127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused E0915
21:52:44.712903 1 reflector.go:126]
k8s.io/client-go/informers/factory.go:133: Failed to list
*v1beta1.PodDisruptionBudget: Get https:--127.0.0.1/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0:
dial tcp 127.0.0.1:443: connect: connection refused</p>
</blockquote>
<h2>kube-apiserver certificate Renewal:</h2>
<p>I found that the kube-apiserver certificate, <code>/etc/kubernetes/pki/kube-apiserver/etcd-client.crt</code>, had expired in July 2020. There were a few other expired certificates related to etcd-manager-main and events (the same copy of the certificates exists in both places), but I don't see them referenced in the manifest files.</p>
<p>I searched and found steps to renew the certificates, but most of them used "kubeadm init phase" commands; I couldn't find kubeadm on the master server, and the certificate names and paths were different from my setup. So I generated a new certificate for kube-apiserver with openssl, using the existing CA cert, and included the DNS names, the internal and external IP addresses (EC2 instance) and the loopback IP address via an openssl.cnf file. I replaced the certificate under the same name <code>/etc/kubernetes/pki/kube-apiserver/etcd-client.crt</code>.</p>
<p>After that I restarted the kube-apiserver docker container (which was continuously exiting) and restarted kubelet. The certificate expiry message no longer appears, but kube-apiserver keeps restarting, which I believe is the reason for the errors in the kube-controller and kube-scheduler docker containers.</p>
<h2>NOTE:</h2>
<p>I have not restarted the docker on the master server after replacing the certificate.</p>
<p>NOTE: All our production PODs are running on worker nodes so they are not affected but I can't manage them as I can't connect using kubectl.</p>
<p>Now, I am not sure what is the issue and why kube-apiserver is restarting continuously.</p>
<h2>Update to the original question:</h2>
<p>Kubernetes version: v1.14.1
Docker version: 18.6.3</p>
<p>Below are the latest <code>docker logs from kube-apiserver container</code> (which is still crashing)</p>
<blockquote>
<p>F0916 08:09:56.753538 1 storage_decorator.go:57] Unable to create storage backend: config (&{etcd3 /registry {[https:--127.0.0.1:4001] /etc/kubernetes/pki/kube-apiserver/etcd-client.key /etc/kubernetes/pki/kube-apiserver/etcd-client.crt /etc/kubernetes/pki/kube-apiserver/etcd-ca.crt} false true 0xc00095f050 apiextensions.k8s.io/v1beta1 5m0s 1m0s}), err (tls: private key does not match public key)</p>
</blockquote>
<p>Below is the output from <code>systemctl status kubelet</code></p>
<blockquote>
<p>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.095615 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x.compute.internal" not found</p>
</blockquote>
<blockquote>
<p><strong>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.130377 388 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR</strong></p>
</blockquote>
<blockquote>
<p>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.147390 388 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https:--127.0.0.1/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused</p>
</blockquote>
<blockquote>
<p>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.195768 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x..compute.internal" not found</p>
</blockquote>
<blockquote>
<p>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.295890 388 kubelet.go:2244] node "ip-xxx-xx-xx-xx.xx-xxxxx-x..compute.internal" not found</p>
</blockquote>
<blockquote>
<p>Sep 16 08:10:16 ip-xxx-xx-xx-xx kubelet[388]: E0916 08:10:16.347431 388 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get <a href="https://127.0.0.1/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0" rel="nofollow noreferrer">https://127.0.0.1/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0</a>: dial tcp 127.0.0.1:443: connect: connection refused</p>
</blockquote>
<p>This cluster (along with 3 others) was set up using kops. The other clusters are running normally and it looks like they have some expired certificates as well. The person who set up the clusters is not available for comment and I have limited experience with Kubernetes, hence the request for assistance from the gurus.</p>
<p>Any help is very much appreciated.</p>
<p>Many thanks.</p>
<h2>Update after response from Zambozo and Nepomucen:</h2>
<p>Thanks to both of you for your responses. Based on that, I found that there were expired etcd certificates under the /mnt mount point.</p>
<p>I followed workaround from <a href="https://kops.sigs.k8s.io/advisories/etcd-manager-certificate-expiration/" rel="nofollow noreferrer">https://kops.sigs.k8s.io/advisories/etcd-manager-certificate-expiration/</a></p>
<p>and recreated the etcd certificates and keys. I have verified each certificate against a copy of the old one (from my backup folder), everything matches, and the new certificates have an expiry date of Sep 2021.</p>
<p>Now I am getting a different error in the etcd docker containers (both etcd-manager-events and etcd-manager-main).</p>
<h2>Note: xxx-xx-xx-xxx is the IP address of the master server</h2>
<blockquote>
<p>root@ip-xxx-xx-xx-xxx:~# <code>docker logs <etcd-manager-main container> --tail 20</code>
I0916 14:41:40.349570 8221 peers.go:281] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
W0916 14:41:40.351857 8221 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3996: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:41:40.351878 8221 peers.go:347] was not able to connect to peer etcd-a: map[xxx.xx.xx.xxx:3996:true]
W0916 14:41:40.351887 8221 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-a
I0916 14:41:41.205763 8221 controller.go:173] starting controller iteration
W0916 14:41:41.205801 8221 controller.go:149] unexpected error running etcd cluster reconciliation loop: cannot find self "etcd-a" in list of peers []
I0916 14:41:45.352008 8221 peers.go:281] connecting to peer "etcd-a" with TLS policy, servername="etcd-manager-server-etcd-a"
I0916 14:41:46.678314 8221 volumes.go:85] AWS API Request: ec2/DescribeVolumes
I0916 14:41:46.739272 8221 volumes.go:85] AWS API Request: ec2/DescribeInstances
I0916 14:41:46.786653 8221 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-a.internal.xxxxx.xxxxxxx.com:[xxx.xx.xx.xxx xxx.xx.xx.xxx]], final=map[xxx.xx.xx.xxx:[etcd-a.internal.xxxxx.xxxxxxx.com etcd-a.internal.xxxxx.xxxxxxx.com]]
I0916 14:41:46.786724 8221 hosts.go:181] skipping update of unchanged /etc/hosts</p>
</blockquote>
<blockquote>
<p>root@ip-xxx-xx-xx-xxx:~# <code>docker logs <etcd-manager-events container> --tail 20</code>
W0916 14:42:40.294576 8316 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-events-a
I0916 14:42:41.106654 8316 controller.go:173] starting controller iteration
W0916 14:42:41.106692 8316 controller.go:149] unexpected error running etcd cluster reconciliation loop: cannot find self "etcd-events-a" in list of peers []
I0916 14:42:45.294682 8316 peers.go:281] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
W0916 14:42:45.297094 8316 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3997: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:42:45.297117 8316 peers.go:347] was not able to connect to peer etcd-events-a: map[xxx.xx.xx.xxx:3997:true]
I0916 14:42:46.791923 8316 volumes.go:85] AWS API Request: ec2/DescribeVolumes
I0916 14:42:46.856548 8316 volumes.go:85] AWS API Request: ec2/DescribeInstances
I0916 14:42:46.945119 8316 hosts.go:84] hosts update: primary=map[], fallbacks=map[etcd-events-a.internal.xxxxx.xxxxxxx.com:[xxx.xx.xx.xxx xxx.xx.xx.xxx]], final=map[xxx.xx.xx.xxx:[etcd-events-a.internal.xxxxx.xxxxxxx.com etcd-events-a.internal.xxxxx.xxxxxxx.com]]
I0916 14:42:50.297264 8316 peers.go:281] connecting to peer "etcd-events-a" with TLS policy, servername="etcd-manager-server-etcd-events-a"
W0916 14:42:50.300328 8316 peers.go:325] unable to grpc-ping discovered peer xxx.xx.xx.xxx:3997: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
I0916 14:42:50.300348 8316 peers.go:347] was not able to connect to peer etcd-events-a: map[xxx.xx.xx.xxx:3997:true]
W0916 14:42:50.300360 8316 peers.go:215] unexpected error from peer intercommunications: unable to connect to peer etcd-events-a</p>
</blockquote>
<p>Could you please suggest on how to proceed from here?</p>
<p>Many thanks.</p>
| Jerome Roserio | <p>Generating a new cert for kube-apiserver using openssl and replacing both the cert and the key brought the kube-apiserver docker container back to a stable state and restored access via kubectl.</p>
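<p>For reference, a rough sketch of the openssl steps (file names, SANs and validity below are placeholder assumptions, not the exact values from this cluster, and the existing CA cert and key must be available). The <code>tls: private key does not match public key</code> error above typically means a cert was swapped without its matching key, so both files have to be replaced as a pair:</p>
<pre><code># new key + CSR (put the SANs into openssl.cnf)
openssl genrsa -out etcd-client.key 2048
openssl req -new -key etcd-client.key -out etcd-client.csr \
  -subj "/CN=kube-apiserver-etcd-client" -config openssl.cnf

# sign the CSR with the existing CA
openssl x509 -req -in etcd-client.csr \
  -CA etcd-ca.crt -CAkey etcd-ca.key -CAcreateserial \
  -out etcd-client.crt -days 365 \
  -extensions v3_req -extfile openssl.cnf

# replace both files together, then restart the kube-apiserver container
cp etcd-client.key etcd-client.crt /etc/kubernetes/pki/kube-apiserver/
</code></pre>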
<p>To resolve the etcd-manager certs issue, I upgraded etcd-manager to <code>kopeio/etcd-manager:3.0.20200531</code> for both etcd-manager-main and etcd-manager-events, as described at <a href="https://github.com/kubernetes/kops/issues/8959#issuecomment-673515269" rel="nofollow noreferrer">https://github.com/kubernetes/kops/issues/8959#issuecomment-673515269</a></p>
<p>Thank you</p>
| Jerome Roserio |
<p>I have multiple persistent volumes which need to be shrunk to reduce hosting costs. I already figured out that Kubernetes does not provide such an option. I also tried to clone or restore the volumes from a snapshot to a new, smaller volume - with the same result (<code>requested volume size XXX is less than the size XXX for the source snapshot</code>).</p>
<p>Nevertheless I need a solution or workaround to get this done.</p>
<p>The cluster is deployed with Rancher and the volumes are mounted to a Ceph Cluster. Everything is provided by an external hoster.</p>
| Philipp Hölscher | <p>Finally I achieved what I needed with the following steps (still tricky and manual work):</p>
<ul>
<li>Stop the running pod (otherwise you cannot use the volume in the next steps)</li>
<li>Create a new PVC with the desired capacity (ensure that the spec and labels match the existing PVC)</li>
<li>Run this Job <a href="https://github.com/edseymour/pvc-transfer" rel="nofollow noreferrer">https://github.com/edseymour/pvc-transfer</a>
<ul>
<li>In the spec of the <code>job-template.yaml</code> set the source and destination volume</li>
</ul>
</li>
<li>Set the ReclaimPolicy on the newly created pv to Retain. This will ensure that the pv won't be deleted after we delete the temp pvc in the next step (see the kubectl sketch after this list)</li>
<li>Delete the source and destination pvc</li>
<li>Create a new pvc with the old name and the new storage capacity</li>
<li>On the new pv point the claimRef to the new pvc</li>
</ul>
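<p>A hedged sketch of the two patch steps above (PV, PVC and namespace names are placeholders for your own objects):</p>
<pre><code># keep the new PV around after its temporary PVC is deleted
kubectl patch pv <new-pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# after re-creating the PVC under the old name, point the PV back at it
# (clearing uid/resourceVersion lets the controller re-bind the PV)
kubectl patch pv <new-pv-name> --type merge -p \
  '{"spec":{"claimRef":{"name":"<old-pvc-name>","namespace":"<namespace>","uid":null,"resourceVersion":null}}}'
</code></pre>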
| Philipp Hölscher |
<p>I have it working on one site application I already set up and now I am just trying to replicate the exact same thing for a different site/domain in another namespace.</p>
<p>So <strong>staging.correct.com</strong> is my working https domain</p>
<p>and <strong>staging.example.com</strong> is my not working https domain (http works - just not https)</p>
<p>When I do the following it shows 3 certs, the working one for correct and then 2 for the example.com when it should only have one for example:</p>
<p><strong>kubectl get -A certificate</strong></p>
<pre><code>correct staging-correct-com True staging-correct-com-tls 10d
example staging-example-com False staging-example-com-tls 16h
example staging-example-website-com False staging-example-com-tls 17h
</code></pre>
<p>When I do:
<strong>kubectl get -A certificaterequests</strong>
It shows 2 certificate requests for the example</p>
<pre><code>example staging-example-com-nl46v False 15h
example staging-example-website-com-plhqb False 15h
</code></pre>
<p>When I do:
<strong>kubectl get ingressroute -A</strong></p>
<pre><code>NAMESPACE NAME AGE
correct correct-ingress-route 10d
correct correct-secure-ingress-route 6d22h
kube-system traefik-dashboard 26d
example example-website-ingress-route 15h
example example-website-secure-ingress-route 15h
routing dashboard 29d
routing traefik-dashboard 6d21h
</code></pre>
<p>When I do:
<strong>kubectl get secrets -A</strong> (just showing the relevant ones)</p>
<pre><code>correct default-token-bphcm kubernetes.io/service-account-token
correct staging-correct-com-tls kubernetes.io/tls
example default-token-wx9tx kubernetes.io/service-account-token
example staging-example-com-tls Opaque
example staging-example-com-wf224 Opaque
example staging-example-website-com-rzrvw Opaque
</code></pre>
<p><strong>Logs from cert manager pod:</strong></p>
<p>1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="staging.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-bqjsj" "related_resource_namespace"="example" "related_resource_version"="v1beta1" "resource_kind"="Challenge" "resource_name"="staging-example-com-ltjl6-1661100417-771202110" "resource_namespace"="example" "resource_version"="v1" "type"="HTTP-01"</p>
<p>When I do:
<strong>kubectl get challenge -A</strong></p>
<pre><code>example staging-example-com-nl46v-1661100417-2848337980 staging.example.com 15h
example staging-example-website-com-plhqb-26564845-3987262508 pending staging.example.com
</code></pre>
<p>When I do: <strong>kubectl get order -A</strong></p>
<pre><code>NAMESPACE NAME STATE AGE
example staging-example-com-nl46v-1661100417 pending 17h
example staging-example-website-com-plhqb-26564845 pending 17h
</code></pre>
<p>My yml files:</p>
<p><strong>My ingress route:</strong></p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
namespace: example
name: example-website-ingress-route
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
traefik.ingress.kubernetes.io/router.entrypoints: web
traefik.frontend.redirect.entryPoint: https
spec:
entryPoints:
- web
routes:
- match: Host(`staging.example.com`)
middlewares:
- name: https-only
kind: Rule
services:
- name: example-website
namespace: example
port: 80
</code></pre>
<p><strong>my issuer:</strong></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: example-issuer-staging
namespace: example
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: staging-example-com-tls
# Enable the HTTP-01 challenge provider
solvers:
# An empty 'selector' means that this solver matches all domains
- http01:
ingress:
class: traefik
</code></pre>
<p><strong>my middleware:</strong></p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: https-only
namespace: example
spec:
redirectScheme:
scheme: https
permanent: true
</code></pre>
<p><strong>my secure ingress route:</strong></p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
namespace: example
name: example-website-secure-ingress-route
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.frontend.redirect.entryPoint: https
spec:
entryPoints:
- websecure
routes:
- match: Host(`staging.example.com`)
kind: Rule
services:
- name: example-website
namespace: example
port: 80
tls:
domains:
- main: staging.example.com
options:
namespace: example
secretName: staging-example-com-tls
</code></pre>
<p><strong>my service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: example
name: 'example-website'
spec:
type: ClusterIP
ports:
- protocol: TCP
name: http
port: 80
targetPort: 80
- protocol: TCP
name: https
port: 443
targetPort: 80
selector:
app: 'example-website'
</code></pre>
<p><strong>my solver:</strong></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: staging-example-com
namespace: example
spec:
secretName: staging-example-com-tls
issuerRef:
name: example-issuer-staging
kind: Issuer
commonName: staging.example.com
dnsNames:
- staging.example.com
</code></pre>
<p><strong>my app:</strong></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
namespace: example
name: 'example-website'
labels:
app: 'example-website'
tier: 'frontend'
spec:
replicas: 1
selector:
matchLabels:
app: 'example-website'
template:
metadata:
labels:
app: 'example-website'
spec:
containers:
- name: example-website-container
image: richarvey/nginx-php-fpm:1.10.3
imagePullPolicy: Always
env:
- name: SSH_KEY
value: 'secret'
- name: GIT_REPO
value: 'url of source code for site'
- name: GIT_EMAIL
value: '[email protected]'
- name: GIT_NAME
value: 'example'
ports:
- containerPort: 80
</code></pre>
<p>How can I delete all these secrets, orders, certificates and stuff in the example namespace and try again? Does cert-manager let you do this without restarting them continuously?</p>
<p>EDIT:</p>
<p>I deleted the namespace and redeployed, then:</p>
<p><strong>kubectl describe certificates staging-example-com -n example</strong></p>
<pre><code>Spec:
Common Name: staging.example.com
Dns Names:
staging.example.com
Issuer Ref:
Kind: Issuer
Name: example-issuer-staging
Secret Name: staging-example-com-tls
Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:06Z
Message: Issuing certificate as Secret does not contain a certificate
Reason: MissingData
Status: False
Type: Ready
Last Transition Time: 2020-09-26T21:25:07Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: staging-example-com-gnbl4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 3m10s cert-manager Issuing certificate as Secret does not exist
Normal Reused 3m10s cert-manager Reusing private key stored in existing Secret resource "staging-example-com-tls"
Normal Requested 3m9s cert-manager Created new CertificateRequest resource "staging-example-com-qrtfx"
</code></pre>
<p>So then I did:</p>
<p><strong>kubectl describe certificaterequest staging-example-com-qrtfx -n example</strong></p>
<pre><code>Status:
Conditions:
Last Transition Time: 2020-09-26T21:25:10Z
Message: Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: "pending"
Reason: Pending
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal OrderCreated 8m17s cert-manager Created Order resource example/staging-example-com-qrtfx-1661100417
Normal OrderPending 8m17s cert-manager Waiting on certificate issuance from order example/staging-example-com-qrtfx-1661100417: ""
</code></pre>
<p>So I did:</p>
<p><strong>kubectl describe challenges staging-example-com-qrtfx-1661100417 -n example</strong></p>
<pre><code>Status:
Presented: true
Processing: true
Reason: Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 11m cert-manager Challenge scheduled for processing
Normal Presented 11m cert-manager Presented challenge using HTTP-01 challenge mechanism
</code></pre>
| Jacob | <p>I figured it out. The issue seems to be that IngressRoute (which is used in Traefik) does not work with cert-manager. I just deployed the file below, the HTTP-01 check then succeeded, and afterwards I could delete it again. Hope this helps others with the same issue.</p>
<p>Does cert-manager support Traefik's IngressRoute at all? I opened an issue here, so let's see what they say: <a href="https://github.com/jetstack/cert-manager/issues/3325" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/3325</a></p>
<pre><code>kubectl apply -f example-ingress.yml
</code></pre>
<p>File:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: example
name: example-ingress
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: example-issuer-staging
spec:
rules:
- host: staging.example.com
http:
paths:
- path: /
backend:
serviceName: example-website
servicePort: 80
tls:
- hosts:
- staging.example.com
secretName: staging-example-com-tls
</code></pre>
| Jacob |
<p>I am playing with microk8s and trying to deploy Nextcloud to get more familiar with it. While the deployment of Nextcloud itself went fine, I am facing some issues with setting up ingress for it. Maybe you could take a look at my manifests and ingress resource and help me find the problem.</p>
<p>This is the deployment file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: nextcloud
name: nextcloud-service
labels:
run: nextcloud-app
spec:
ports:
- port: 80
targetPort: 8080
selector:
run: nextcloud-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: nextcloud
name: nextcloud-deployment
labels:
app: nextcloud-app
spec:
replicas: 1
selector:
matchLabels:
app: nextcloud-app
template:
metadata:
labels:
app: nextcloud-app
spec:
containers:
- image: nextcloud:latest
name: nextcloud
env:
- name: NEXTCLOUD_ADMIN_USER
valueFrom:
configMapKeyRef:
name: nextcloud-configuration
key: nextcloud_admin_user
- name: NEXTCLOUD_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: nextcloud-secret
key: admin_password
ports:
- containerPort: 8080
name: http
volumeMounts:
- name: nextcloud-pv
mountPath: /var/www/html/data
volumes:
- name: nextcloud-pv
persistentVolumeClaim:
claimName: nextcloud-pv-claim
</code></pre>
<p>and this is the ingress resource file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nextcloud-ingress
namespace: nextcloud
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /nextcloud
pathType: Prefix
backend:
service:
name: nextcloud-service
port:
number: 80
</code></pre>
<p>Following addons are enabled on my microk8s:</p>
<ul>
<li>dns</li>
<li>ingress</li>
</ul>
<p>Now I would like to show you some k8s output.</p>
<p><strong>kubectl -n nextcloud describe svc nextcloud-service</strong></p>
<pre><code>Name: nextcloud-service
Namespace: nextcloud
Labels: run=nextcloud-app
Annotations: <none>
Selector: run=nextcloud-app
Type: ClusterIP
IP: 10.152.183.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
</code></pre>
<p><strong>kubectl -n nextcloud describe ingress nextcloud-ingress</strong></p>
<pre><code>Name: nextcloud-ingress
Namespace: nextcloud
Address: 192.168.60.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/nextcloud nextcloud-service:80 <none>)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 11m nginx-ingress-controller Ingress nextcloud/nextcloud-ingress
Normal CREATE 11m nginx-ingress-controller Ingress nextcloud/nextcloud-ingress
Normal UPDATE 63s (x22 over 11m) nginx-ingress-controller Ingress nextcloud/nextcloud-ingress
Normal UPDATE 63s (x22 over 11m) nginx-ingress-controller Ingress nextcloud/nextcloud-ingress
</code></pre>
<p><strong>kubectl -n ingress logs pod/nginx-ingress-microk8s-controller-k2q6c</strong></p>
<pre><code>I1024 19:56:37.955953 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 19:56:37.963861 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:56:37.964276 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192287", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 19:56:39.491960 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192295", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 19:56:41.297313 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:57:37.955734 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 19:57:37.969214 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:57:37.969711 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192441", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 19:57:39.492467 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192446", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 19:57:41.302640 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:58:37.956198 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 19:58:37.964655 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:58:37.965017 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192592", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 19:58:39.493436 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192600", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 19:58:41.298097 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:59:37.955569 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 19:59:37.964975 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:59:37.965045 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192746", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 19:59:39.491840 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192750", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 19:59:41.298496 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 20:00:37.956061 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 20:00:37.965139 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 20:00:37.965212 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192896", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 20:00:39.489924 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192904", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 20:00:41.298762 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 20:01:37.955481 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 20:01:37.963612 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 20:01:37.963681 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"193049", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
I1024 20:01:39.490523 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"193058", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
W1024 20:01:41.297141 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
</code></pre>
<p>Calling <a href="http://k8s.ip/nextcloud" rel="nofollow noreferrer">http://k8s.ip/nextcloud</a> results in a 503. Any ideas what I am missing?</p>
| WinterMute | <p>As I posted in the comments:</p>
<blockquote>
<p>You are receiving a 503 code because you have a mismatch between your <code>Service</code> -> <code>.spec.selector</code> (<code>run: nextcloud-app</code>) and your <code>Deployment</code> -> <code>.spec.selector.matchLabels</code> (<code>app: nextcloud-app</code>). You will need to make them both the same. You can also see it when describing the service (no endpoint).</p>
</blockquote>
<p>The issue in this particular setup is that there is a mismatch between the <code>matchLabels</code> in the <code>Deployment</code> and the <code>selector</code> in the <code>Service</code>:</p>
<p><code>Deployment</code></p>
<pre class="lang-yaml prettyprint-override"><code>spec:
replicas: 1
selector:
matchLabels:
app: nextcloud-app # <-- HERE!
</code></pre>
<p><code>Service</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
ports:
- port: 80
targetPort: 8080
selector:
run: nextcloud-app # <-- HERE!
</code></pre>
<p>To fix that you will need to have both of them matching, for example as in the corrected Service sketch after the list below:</p>
<ul>
<li><code>app: nextcloud-app</code> in a <code>Deployment</code> and a <code>Service</code></li>
</ul>
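<p>A minimal sketch of the corrected <code>Service</code>, changing only the label/selector relative to the YAML from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  namespace: nextcloud
  name: nextcloud-service
  labels:
    app: nextcloud-app      # was: run: nextcloud-app
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: nextcloud-app      # now matches the Deployment's matchLabels
</code></pre>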
<hr />
<p>Some of the ways to identify mismatched selector (by using examples from post):</p>
<ul>
<li>Manual inspection of <code>YAML</code> definitions as shown above</li>
<li><code>$ kubectl -n nextcloud describe svc nextcloud-service</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>Name: nextcloud-service
Namespace: nextcloud
Labels: run=nextcloud-app
Annotations: <none>
Selector: run=nextcloud-app
Type: ClusterIP
IP: 10.152.183.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: <none> # <-- HERE
Session Affinity: None
Events: <none>
</code></pre>
<p>The <code>describe</code> above shows that the service is created but there are no <code>endpoints</code> (<code>Pods</code>) to send the traffic to.</p>
<blockquote>
<p>Having no endpoints could also be related to the <code>Pod</code> not being <code>Ready</code> or not being in a <code>Healthy</code> state.</p>
</blockquote>
<ul>
<li><code>$ kubectl get endpoint -n nextcloud</code></li>
</ul>
<pre><code>NAME ENDPOINTS AGE
nextcloud-service <none> 1m
</code></pre>
<ul>
<li>Logs from <code>Ingress</code> controller (posted in the question):</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>I1024 19:56:37.955953 6 status.go:275] updating Ingress nextcloud/nextcloud-ingress status from [{192.168.60.2 }] to [{127.0.0.1 }]
W1024 19:56:37.963861 6 controller.go:909] Service "nextcloud/nextcloud-service" does not have any active Endpoint.
I1024 19:56:37.964276 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"nextcloud", Name:"nextcloud-ingress", UID:"913dcf73-e5df-4ad9-a23b-22d6ad8b83a7", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"192287", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress nextcloud/nextcloud-ingress
</code></pre>
<p><code><--REDACTED--> Service "nextcloud/nextcloud-service" does not have any active Endpoint.</code></p>
<hr />
<p>I encourage you to check the Helm chart of nextcloud:</p>
<ul>
<li><em><a href="https://github.com/nextcloud/helm/tree/master/charts/nextcloud" rel="nofollow noreferrer">Github.com: Nextcloud: Helm: Charts: Nextcloud</a></em></li>
<li><em><a href="https://microk8s.io/docs/commands" rel="nofollow noreferrer">Microk8s.io: Docs: Commands</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Services networking: Service</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Services networking: Ingress</a></em></li>
<li><em><a href="https://microk8s.io/docs/addon-ingress" rel="nofollow noreferrer">Microk8s.io: Docs: Addon Ingress</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm requesting some JSON data from a pod's web server via the Kubernetes API proxy verb. That is:</p>
<pre><code>corev1 = kubernetes.client.CoreV1Api()
res = corev1.connect_get_namespaced_pod_proxy_with_path(
'mypod:5000', 'default', path='somepath', path2='somepath')
print(type(res))
print(res)
</code></pre>
<p>The call succeeds and returns a <code>str</code> containing the serialized JSON data from my pod's web service. Unfortunately, <code>res</code> now looks like this ... which isn't valid JSON at all, so <code>json.loads(res)</code> refuses to parse it:</p>
<pre><code>{'x': [{'xx': 'xxx', ...
</code></pre>
<p>As you can see, the stringified response looks like a Python dictionary instead of valid JSON. Any suggestions as to how to convert this safely back into either correct JSON or a correct Python <code>dict</code>?</p>
| TheDiveO | <p>Looking at the source code for <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api/core_v1_api.py" rel="nofollow noreferrer">core_v1_api.py</a>, the method calls generally accept a kwarg named <code>_preload_content</code>.</p>
<p>Setting this argument to <code>False</code> instructs the method to return the <code>urllib3.HTTPResponse</code> object instead of a processed <code>str</code>. You can then work directly with the raw response data, which <code>json.loads()</code> can parse.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>corev1 = client.CoreV1Api()
res = corev1.connect_get_namespaced_pod_proxy_with_path(
'mypod:5000', 'default', path='somepath',
path2='somepath', _preload_content=False)
json.loads(res.data)
</code></pre>
| Kennedn |
<p>I am using keycloak to authenticate with kubernetes using kube-oidc-proxy and oidc-login.</p>
<p>I have created a client in keycloak and a mapper with the following configuration.
<a href="https://i.stack.imgur.com/QwQOW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QwQOW.png" alt="enter image description here" /></a></p>
<p>The kube-oidc-proxy is running with this configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>command: ["kube-oidc-proxy"]
args:
- "--secure-port=443"
- "--tls-cert-file=/etc/oidc/tls/crt.pem"
- "--tls-private-key-file=/etc/oidc/tls/key.pem"
- "--oidc-client-id=$(OIDC_CLIENT_ID)"
- "--oidc-issuer-url=$(OIDC_ISSUER_URL)"
- "--oidc-username-claim=$(OIDC_USERNAME_CLAIM)"
- "--oidc-signing-algs=$(OIDC_SIGNING_ALGS)"
- "--oidc-username-prefix='oidcuser:'"
- "--oidc-groups-claim=groups"
- "--oidc-groups-prefix='oidcgroup:'"
- "--v=10"
</code></pre>
<p>And the kube config has this configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
server: <KUBE_OIDC_PROXY_URL>
name: default
contexts:
- context:
cluster: default
namespace: default
user: oidc
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- -v10
- --oidc-issuer-url=<ISSUER_URL>
- --oidc-client-id=kube-oidc-proxy
- --oidc-client-secret=<CLIENT_SECRET>
- --oidc-extra-scope=email
- --grant-type=authcode
command: kubectl
env: null
provideClusterInfo: false
</code></pre>
<p>I can successfully get the user info with groups in the jwt token as shown below:</p>
<pre class="lang-json prettyprint-override"><code> "name": "Test Uset",
"groups": [
"KubernetesAdmins"
],
"preferred_username": "test-user",
"given_name": "Test",
"family_name": "Uset",
"email": "[email protected]"
</code></pre>
<p>And I have created the following cluster role binding:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: oidc-admin-group
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidcgroup:KubernetesAdmins
</code></pre>
<p>But I still get forbidden error as follows:</p>
<pre><code>Error from server (Forbidden): pods is forbidden: User "'oidcuser:'ecc4d1ac-68d7-4158-8a58-40b469776c07" cannot list resource "pods" in API group "" in the namespace "default"
</code></pre>
<p>Any ideas on how to solve this issue?</p>
<p>Thanks in advance.</p>
| N05h3ll | <p>I figured it out.
Removing the single quotes from the user and group prefixes, so that they read:</p>
<pre class="lang-yaml prettyprint-override"><code>"--oidc-username-prefix=oidcuser:"
"--oidc-groups-prefix=oidcgroup:"
</code></pre>
<p>This solved the issue.</p>
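<p>A quick way to verify the RBAC mapping afterwards is kubectl impersonation, run with admin credentials against the cluster (a hedged sketch; the exact user string depends on the <code>sub</code> claim in your token):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl auth can-i list pods \
  --as="oidcuser:ecc4d1ac-68d7-4158-8a58-40b469776c07" \
  --as-group="oidcgroup:KubernetesAdmins"
# expected output: yes
</code></pre>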
| N05h3ll |
<p>I have tried to set up Kubernetes 1.13 for OpenID Connect (OIDC) authentication as follows:</p>
<ul>
<li>installed Keycloak server</li>
<li>added command line options <code>--oidc-issuer-url=https://my_keycloak/auth/realms/my_realm</code>, etc., to <code>kube-apiserver</code></li>
<li>stored id token at <code>users.user.auth-provider.config.client-id</code>, etc., in kubeconfig's <code>my_user</code></li>
</ul>
<p>From my reading of the documentation <code>kubectl</code> should now be able to access the cluster as <code>my_user</code>. However, <code>kubectl get nodes</code> says:</p>
<pre><code>error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>And <code>curl -k https://api_server:6443/api/v2/nodes --header "Authorization: Bearer $id_token"</code> says:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p><code>kubectl logs $kube_apiserver -n kube-system</code>, <code>journalctl -u kubelet.service</code>, and Keycloak's stdout are all silent. So where can I see more logging information to discern where OIDC authentication may go wrong?</p>
<p><strong>UPDATE</strong> Option <a href="https://github.com/kubernetes/kubernetes/issues/22723" rel="nofollow noreferrer"><code>--v</code></a> on both the client (<code>kubectl</code>) and the server (e.g. the API server) helps to some degree.</p>
| rookie099 | <p>If you are using the <code>email</code> claim, Kubernetes requires your <code>email_verified</code> claim to be <code>true</code>. By default in Keycloak, this is set to <code>false</code>.</p>
<p>Source: <a href="https://github.com/kubernetes/kubernetes/search?q=email_verified&unscoped_q=email_verified" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/search?q=email_verified&unscoped_q=email_verified</a></p>
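<p>In practice that means either marking the user's email as verified in Keycloak, or pointing the API server at a claim that does not carry the <code>email_verified</code> requirement. A hedged sketch of the latter (these are standard kube-apiserver OIDC flags; the issuer value is taken from the question and the client id is a placeholder):</p>
<pre><code>--oidc-issuer-url=https://my_keycloak/auth/realms/my_realm
--oidc-client-id=<your-client-id>
--oidc-username-claim=sub   # the email_verified check only applies when the username claim is 'email'
</code></pre>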
| pbar |
<p>Create an External Ingress for a service like the one below.</p>
<p>Everything gets created but without the behavior I expect.</p>
<ul>
<li>Global IP is not used, another one is used instead</li>
<li>HTTP is still enabled</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "extensions/v1beta1"
kind: "Ingress"
metadata:
annotations:
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.global-static-ip-name: "my-ingress-ip"
labels:
app.kubernetes.io/instance: "my-service-api"
app.kubernetes.io/managed-by: "pulumi"
app.kubernetes.io/name: "my-service-api"
app.kubernetes.io/version: "1.0.0-service"
helm.sh/chart: "service-0.1.0"
name: "my-service-api-proxy"
namespace: "load-test"
spec:
backend:
serviceName: "my-service-api-proxy"
servicePort: 80
tls:
- secretName: "my-tls-secret-cert"
</code></pre>
<p>As soon as I remove the kubernetes.io/ingress.allow-http annotation, the ingress picks up my global IP.</p>
<p>Has anyone run into this issue when creating an Ingress with a global IP and HTTPS-only access?</p>
<p>GKE version: 1.18.16-gke.2100</p>
<p>Node Config:</p>
<blockquote>
<p>Kernel version 5.4.89+</p>
<p>OS image Container-Optimized OS from Google</p>
<p>Container runtime version docker://19.3.14</p>
<p>kubelet version v1.18.16-gke.2100</p>
<p>kube-proxy version v1.18.16-gke.2100</p>
</blockquote>
| anon_coward | <p>This works, remember to use quotes on all annotation values since they are strings.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "extensions/v1beta1"
kind: "Ingress"
metadata:
annotations:
kubernetes.io/ingress.allow-http: "false"
kubernetes.io/ingress.global-static-ip-name: "my-ingress-ip"
labels:
app.kubernetes.io/instance: "my-service-api"
app.kubernetes.io/managed-by: "pulumi"
app.kubernetes.io/name: "my-service-api"
app.kubernetes.io/version: "1.0.0-service"
helm.sh/chart: "service-0.1.0"
name: "my-service-api-proxy"
namespace: "load-test"
spec:
backend:
serviceName: "my-service-api-proxy"
servicePort: 80
tls:
- secretName: "my-tls-secret-cert"
</code></pre>
<p>Also if you are updating an existing Ingress, give it a few minutes (like 10) for it to pick up the changes</p>
| anon_coward |
<p>I created a k8s CronJob with the following schedule (run every minute):</p>
<p><code>schedule: "*/1 * * * *"</code></p>
<p>I see my CronJob created:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
job-staging job-1593017820-tt2sn 2/3 Running 0 10m
</code></pre>
<p>My job simply does a Printf to the log, one time, then exits.</p>
<p>When I do a <code>kubectl get cronjob</code> I see:</p>
<pre><code>NAMESPACE NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
job-staging job */1 * * * * False 1 19m 19m
</code></pre>
<p>When I look at the logs, it looks like it only ran once, which was the first run. Do I need to prevent my program from exiting?</p>
<p>I assumed k8s would restart my program, but maybe that's a wrong assumption.</p>
| L P | <p>Your assumption about the behavior of Kubernetes ("restarting the program") is correct.</p>
<p>As you may know, a Job is basically a Kubernetes Pod that executes some process and successfully finishes when it <em>exits with a zero exit code</em>. The "Cron" part of CronJob is the most obvious, scheduling the Job to execute in a particular time pattern.</p>
<p>Most YAML manifests for CronJobs include the <code>restartPolicy: OnFailure</code> key, which makes the kubelet restart the container in place whenever it exits with a <strong>non-zero exit code</strong> (the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#example" rel="nofollow noreferrer">hello-world YAML file</a> in the Kubernetes documentation uses this flag).</p>
<p>From what I see in the output of your <code>kubectl</code> instruction, it looks like your Job is failing - because of the <code>Status 1</code>. I would recommend you check the logs of the CronJob using <code>kubectl logs -f -n job-staging job-1593017820-tt2sn</code> for any possible errors in the execution of your script (if your script explicitly exits with an exit code, check for a possible non-zero code).</p>
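<p>For reference, a minimal CronJob sketch (with a placeholder busybox command standing in for your program) that prints once per run and exits with code 0, so every scheduled run completes successfully:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure
</code></pre>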
<p>[UPDATE]</p>
<p>CronJobs also have limitations:</p>
<blockquote>
<p>A cron job creates a job object about once per execution time of its schedule. We say “about” because there are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, but do not completely prevent them. Therefore, jobs should be idempotent.</p>
</blockquote>
<p>I think these are pretty rare scenarios, but maybe you've found yourself in these rare situations. The documentation is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations" rel="nofollow noreferrer">here</a>.</p>
| guimorg |
<p><strong>Context:</strong></p>
<p>I am building an application and now I am on the infrastructure step.</p>
<p>The application is built with Java and the persistence layer is MongoDB.</p>
<p><strong>Problem:</strong></p>
<p>If the application is running on the same node as the persistence layer, everything works fine, but on different nodes the application cannot communicate with MongoDB.</p>
<p>Here is a screenshot of the Kubernetes Dashboard:</p>
<p><a href="https://i.stack.imgur.com/Nziyf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nziyf.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/fKQg1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fKQg1.png" alt="enter image description here" /></a></p>
<p>As you can see, two pods of the application (gateway) are running on the same node as Mongo, but the other two aren't. These two cannot find MongoDB.</p>
<p>Here is the mongo-db.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-data
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /home/vitor/seguranca/mongo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
volumeName: mongo-data
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo-service
spec:
volumes:
- name: "deployment-storage"
persistentVolumeClaim:
claimName: "pvc"
containers:
- image: mongo
name: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: "deployment-storage"
mountPath: "/data/db"
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
name: mongo-service
spec:
ports:
- port: 27017
targetPort: 27017
selector:
app: mongo
clusterIP: None
</code></pre>
<p>and here is the application.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: gateway
name: gateway
spec:
replicas: 4
selector:
matchLabels:
app: gateway
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: gateway
spec:
containers:
- image: vitornilson1998/native-micro
name: native-micro
env:
- name: MONGO_CONNECTION_STRING
value: mongodb://mongo-service:27017 #HERE IS THE POINT THAT THE APPLICATION USES TO ACCESS MONGODB
- name: MONGO_DB
value: gateway
resources: {}
ports:
- containerPort: 8080
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: gateway-service
name: gateway-service
spec:
ports:
- name: 8080-8080
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: gateway
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>I can't see what is stopping the application from reaching MongoDB.</p>
<p>What should I do?</p>
| Vitor Nilson | <p>I was using calico as CNI.</p>
<p>I removed calico and let kube-proxy take care of everything.</p>
<p>Now everything is working fine.</p>
| Vitor Nilson |
<p>According to <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">cert-manager installation docs</a> jetstack repository should be added:</p>
<blockquote>
<p>$ helm repo add jetstack <a href="https://charts.jetstack.io" rel="nofollow noreferrer">https://charts.jetstack.io</a></p>
</blockquote>
<p>It gives error message:</p>
<blockquote>
<p>Error: looks like "https://charts.jetstack.io" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "serverInfo"</p>
</blockquote>
<p>What are the ways to fix the issue?</p>
| tiktak | <p>This looks to be caused by a patch introduced in version 3.3.2 of Helm for security-related issues.</p>
<p>Reference Issue: <a href="https://github.com/helm/helm/issues/8761" rel="nofollow noreferrer">https://github.com/helm/helm/issues/8761</a></p>
<p>Security Patch: <a href="https://github.com/helm/helm/pull/8762" rel="nofollow noreferrer">https://github.com/helm/helm/pull/8762</a></p>
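<p>A workaround sketch, assuming your client is on the affected 3.3.2 release: move to a patch release where the strict index parsing was relaxed again (3.3.3 or later), or stay on 3.3.1:</p>
<pre><code># check the client version first
helm version --short

# upgrade past 3.3.2, e.g. via the official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

# then retry
helm repo add jetstack https://charts.jetstack.io
</code></pre>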
| Cameron Munroe |
<p>I can't find any examples as to where the behavior section should be specified in <code>Kind: HorizontalPodAutoscaler</code>.</p>
<p>In the docs they have this <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#default-behavior" rel="nofollow noreferrer">section</a>, but I couldn't find any examples of where it should fit in.</p>
<pre><code>behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 100
periodSeconds: 15
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
</code></pre>
<p>Here is a sample <code>auto-scaler.yml</code> without the behavior section</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: nginx
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 100Mi
</code></pre>
| user630702 | <p>Talking specifically about your example, you will need to place your <code>.behavior</code> definition under <code>.spec</code>, like below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: nginx
spec:
# <--- START --->
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 100
periodSeconds: 15
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
# <--- END --->
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
- type: Resource
resource:
name: memory
target:
type: AverageValue
averageValue: 100Mi
</code></pre>
<p><strong>Please remember that this feature is available from Kubernetes v1.18.</strong></p>
<p>Earlier versions of Kubernetes will show the following error:</p>
<pre class="lang-sh prettyprint-override"><code>error: error validating "hpa.yaml": error validating data: ValidationError(HorizontalPodAutoscaler.spec): unknown field "behavior" in io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerSpec; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<hr />
<p>As for a side note you can also take a look on:</p>
<ul>
<li><code>$ kubectl autoscale</code></li>
<li><code>$ kubectl autoscale deployment nginx --min=1 --max=10 --cpu-percent=80</code> <- example</li>
</ul>
<blockquote>
<p>Creates an autoscaler that automatically chooses and sets the number of pods that run in a kubernetes cluster.</p>
</blockquote>
<hr />
<p>Additional reference:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Run application: Horizontal pod autoscaler</a></em></li>
<li><em><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Generated: Kubectl: kubectl commands: autoscale</a></em></li>
</ul>
| Dawid Kruk |
<p>I currently have Velero up and running and it's working great. The only issue I have is that the snapshots of the volumes are being created in the same region as the originals, which kinda defeats the purpose of disaster recovery. This flag</p>
<p><code>--snapshot-location-config</code></p>
<p>doesn't have an arg for region. I know there is a config for the default snapshot location</p>
<p><code>volumesnapshotlocations.velero.io "default"</code></p>
<p>Does anyone know how to modify the default so I can get my snapshots into new regions?</p>
| Hizzy | <p>Creating snapshots from the main region directly into a different region is not supported.<br />
Azure zone-redundant snapshots and images for managed disks have a decent 99.9999999999% (12 9's) durability. The availability zones in a region are usually physically separated and even if an outage affects one AZ, you can still access your data from a redundant AZ.</p>
<p>However, if you fear calamities that can affect several square kilometers (multiple zones in a region), you can manually move the snapshots to a different region or even automate the process. <a href="https://thomasthornton.cloud/2020/04/06/copy-azure-virtual-machine-snapshots-to-another-region-and-create-managed-disks-using-powershell/" rel="nofollow noreferrer">Here</a> is a guide to do it.</p>
| Neo Anderson |
<p>I created a kubernetes pod efgh in namespace ns1</p>
<pre><code>kubectl run efgh --image=nginx -n ns1
</code></pre>
<p>I created another pod in default namespace</p>
<pre><code>kubectl run apple --image=nginx
</code></pre>
<p>I created a service efgh in namespace ns1</p>
<pre><code>kubectl expose pod efgh --port=80 -n ns1
</code></pre>
<p>Now I created a network policy to block incoming connections to the pod</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: ns1
spec:
podSelector:
matchLabels:
run: efgh
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
project: ns1
- from:
- namespaceSelector:
matchLabels:
project: default
podSelector:
matchLabels:
run: apple
ports:
- protocol: TCP
port: 80
</code></pre>
<p>Checking the pods in ns1 gives me</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP
efgh 1/1 Running 0 3h4m 10.44.0.4
</code></pre>
<p>Checking the services in ns1 gives me</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efgh ClusterIP 10.109.170.238 <none> 80/TCP 164m
</code></pre>
<p>Once I open a terminal in the apple pod and run the commands below, it works</p>
<pre><code>curl http://10-44-0-4.ns1.pod
curl http://10.44.0.4
</code></pre>
<p>but when I try to curl the pod through the service, it fails.</p>
<pre><code>curl http://10.109.170.238
</code></pre>
<p>If I delete the network policy, the above curl works.</p>
<p>I think this is an issue with my local Kubernetes cluster. I tried elsewhere and it works.</p>
<p>When I did port forward</p>
<pre><code>root@kubemaster:/home/vagrant# kubectl port-forward service/efgh 8080:80 -n ns1
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
</code></pre>
| BJ5 | <p>See below, more details here <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">ServiceTypes</a></p>
<pre><code>Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
</code></pre>
| jmvcollaborator |
<p>I have setup a backend and frontend service running on Kubernetes. Frontend would be <code>www.<myDomain>.com</code> and backend would be <code>api.<myDomain>.com</code></p>
<p>I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <code><myDomain>.com</code> should cover both <code>www.</code> and <code>api.</code>.</p>
<p>Pretty normal use case, right? But when all these normal pieces come together, I couldn't figure out the combined YAML. I was able to get a single service, <code>www.<myDomain>.com</code>, working with HTTPS. Things don't work when I try to add <code>api.<myDomain>.com</code>.</p>
<p>I'm using GKE, but this doesn't seem to be a platform-related question. Now creating the ingress takes forever, and the following event is retried again and again:</p>
<pre><code>Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.allow-http: "true"
cert-manager.io/issuer: letsencrypt-staging
spec:
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
rules:
- host: "www.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: angular-service
port:
number: 80
- host: "api.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: spring-boot-service
port:
number: 8080
</code></pre>
| XintongTheCoder | <p>I faced the same requirement as you. Change the TLS section from:</p>
<pre><code> tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
</code></pre>
<p>to:</p>
<pre><code> tls:
- hosts:
- www.<myDomain>.com
secretName: web-ssl
- hosts:
- api.<myDomain>.com
secretName: web-ssl
</code></pre>
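<p>For completeness, the <code>letsencrypt-staging</code> issuer referenced by the annotation would look roughly like this (a sketch; the email and solver class are placeholders you need to adapt to your setup):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@<myDomain>.com
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
    - http01:
        ingress:
          class: gce
</code></pre>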
<p>This helped me solve the issue!</p>
| Jun |
<p>I am trying to create a Deployment or ReplicaSet with the Kubernetes JavaScript client. The Kubernetes JavaScript client documentation is virtually non-existent.</p>
<p>Is there any way to achieve this?</p>
| Paschal | <p>Assuming that by:</p>
<ul>
<li><code>createDeployment()</code></li>
<li>you are referring to: <code>createNamespacedDeployment()</code></li>
</ul>
<p>You can use below code snippet to create a <code>Deployment</code> using Javascript client library:</p>
<pre class="lang-js prettyprint-override"><code>const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.AppsV1Api); // <-- notice the AppsV1Api
// Definition of the deployment
var amazingDeployment = {
metadata: {
name: 'nginx-deployment'
},
spec: {
selector: {
matchLabels: {
app: 'nginx'
}
},
replicas: 3,
template: {
metadata: {
labels: {
app: 'nginx'
}
},
spec: {
containers: [
{
name: 'nginx',
image: 'nginx'
} ]
}
}
}
};
// Sending the request to the API
k8sApi.createNamespacedDeployment('default', amazingDeployment).then(
(response) => {
console.log('Yay! \nYou spawned: ' + amazingDeployment.metadata.name);
},
(err) => {
console.log('Oh no. Something went wrong :(');
// console.log(err) <-- Get the full output!
}
);
</code></pre>
<blockquote>
<p>Disclaimer!</p>
<p>This code assumes that you have your <code>~/.kube/config</code> already configured!</p>
</blockquote>
<p>Running this code for the first time with:</p>
<ul>
<li><code>$ node deploy.js</code></li>
</ul>
<p>should output:</p>
<pre class="lang-sh prettyprint-override"><code>Yay!
You spawned: nginx-deployment
</code></pre>
<p>You can check if the <code>Deployment</code> exists by:</p>
<ul>
<li><code>$ kubectl get deployment nginx-deployment</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 6m57s
</code></pre>
<p>Running this code once again will output (deployment already exists!):</p>
<pre class="lang-sh prettyprint-override"><code>Oh no. Something went wrong :(
</code></pre>
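<p>Since the question also mentions ReplicaSets: the same <code>AppsV1Api</code> client exposes an analogous call, so with a matching manifest the request would look roughly like this (untested sketch; <code>amazingReplicaSet</code> is a ReplicaSet manifest shaped like the Deployment above):</p>
<pre class="lang-js prettyprint-override"><code>k8sApi.createNamespacedReplicaSet('default', amazingReplicaSet).then(
  (response) => { console.log('ReplicaSet created'); },
  (err) => { console.log('Something went wrong'); }
);
</code></pre>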
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">Github.com: Kubernetes-client: Javascript</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm currently trying to set up ArgoCD for my 3-node Kubernetes cluster. During the setup I accidentally deleted the ArgoCD server.</p>
<p>Now how can I get it back? I can't find the service file to recreate the argocd-server service.</p>
<p>thanks</p>
<p>kubernetesNoob</p>
| Kami | <p>You just need to reinstall ArgoCD with the command:</p>
<pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
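<p>Once the manifest is applied you can confirm the server came back (resource names assume the standard install manifest):</p>
<pre><code>kubectl get deployment argocd-server -n argocd
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server
</code></pre>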
| quoc9x |
<p>I want to add a new cluster in addition to the default cluster on ArgoCD but when I add it, I get an error:<br />
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials<br />
I use the command <code>argocd cluster add cluster-name</code><br />
I downloaded the k8s config file from Rancher.<br />
Thanks!</p>
| quoc9x | <p>I solved my problem but welcome other solutions from everyone :D<br />
First, create a secret with the following content:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: argocd # same namespace of argocd-app
name: mycluster-secret
labels:
argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
name: cluster-name # Get from clusters - name field in config k8s file.
server: https://mycluster.com # Get from clusters - name - cluster - server field in config k8s file.
config: |
{
"bearerToken": "<authentication token>",
"tlsClientConfig": {
"insecure": false,
"caData": "<base64 encoded certificate>"
}
}
</code></pre>
<p><code>bearerToken</code> - Get from users - user - token field in config k8s file.<br />
<code>caData</code> - Get from clusters - name - cluster - certificate-authority-data field in config k8s file.<br />
Then, apply this yaml file and the new cluster will be automatically added to ArgoCD.<br />
I found the solution on github:<br />
<a href="https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80" rel="noreferrer">https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80</a></p>
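<p>If it helps, the values the secret needs can be pulled straight out of the downloaded Rancher kubeconfig (a sketch; it assumes the file is at <code>./rancher.yaml</code> and contains a single cluster/user entry):</p>
<pre><code>kubectl --kubeconfig ./rancher.yaml config view --raw -o jsonpath='{.clusters[0].cluster.server}'                      # server
kubectl --kubeconfig ./rancher.yaml config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'  # caData
kubectl --kubeconfig ./rancher.yaml config view --raw -o jsonpath='{.users[0].user.token}'                             # bearerToken
</code></pre>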
| quoc9x |
<p>In my folder, I have a <strong>deployment.yaml</strong> file and a <strong>kustomization.yaml</strong>
Inside the <strong>kustomization.yaml</strong>:</p>
<pre><code>bases:
- ../base
- deployment.yaml
</code></pre>
<p>When I run <code>kubectl apply -f deployment.yaml</code>, it runs successfully
but when I run <code>kubectl apply -k [folder name]</code>, it gives the error message: <code>error: couldn't make loader for deployment.yaml: got file 'deployment.yaml', but '/[absolute path of the folder]/deployment_azure.yaml' must be a directory to be a root</code></p>
| user17249888 | <p>This is most likely because the folder is a symlink to another folder or an NFS share. It should be a hard local directory for kubectl to be able to apply the YAML files in it.</p>
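<p>A quick way to check whether the folder (or something on its path) is actually a symlink:</p>
<pre><code>ls -ld [folder name]
readlink -f [folder name]   # prints the resolved real path if it differs
</code></pre>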
| emadelbieh |
<p>I am running a self-managed Kubernetes cluster on AWS EC2 instances (not EKS). I've used kubeadm to do the setup. The master and worker nodes are in a public subnet and have public IPs. I have set the kube config on the worker and master nodes and I am able to run kubectl commands from the nodes, but I want to be able to run it from my local Linux machine.</p>
<p>I have kubectl installed on my local machine and have copied the kube config over to ~/.kube/config.</p>
<p>But when I run the kubectl command I get an error:
$ kubectl get nodes
<code>E0929 12:04:43.319775 26908 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.227:6443/api?timeout=32s": dial tcp 10.0.1.227:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.</code></p>
<p>Now I know that this - 10.0.1.227 ,is the private IP of my master node and I cant connect to that from my local.</p>
<p>I tried replacing the private IP with the public IP of my master node (dumb solution, I know) but it didn't work; I got this error:</p>
<pre><code>E0929 11:26:50.902035 25572 memcache.go:265] couldn't get current server API group list: Get "https://3.110.130.136:6443/api?timeout=32s": tls: failed to verify certificate: x509: certificate is valid for 10.96.0.1, 10.0.1.227, not 3.110.130.136
</code></pre>
<p>I'm guessing the cert in the kube config was not generated for the public IP.</p>
<p>FYI, port 6443 is open to all IPs in the security group.</p>
<p>How do I make this work?</p>
| Aman Deep | <p>When initializing the Kubernetes cluster with <code>kubeadm</code>, you specified the option <code>--apiserver-advertise-address</code> as the private IP, so the certificate referenced by the <code>~/.kube/config</code> file only authenticates for this IP.
To be able to use the <code>kubectl</code> command from your local machine with the public IP, you can do the following:</p>
<ol>
<li><p>Copy the K8S config file from <code>master node</code> to local and put it in the path <code>~/.kube/config</code></p>
</li>
<li><p>Open <code>~/.kube/config</code> file and change the Private IP in the <code>server</code> to Public IP</p>
</li>
<li><p>When using the <code>kubectl</code> command you will need to add the <code>--insecure-skip-tls-verify</code> option. Example:</p>
<pre><code>kubectl --insecure-skip-tls-verify get pods
</code></pre>
</li>
</ol>
<p>This is an easy and quick way. A better solution is to add your <code>Public IP</code> to the Kubernetes configuration so that you can use multiple IPs; you can follow the instructions below:<br />
<a href="https://devops.stackexchange.com/a/9506/31538">https://devops.stackexchange.com/a/9506/31538</a></p>
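<p>For reference, step 2 above amounts to editing the <code>server</code> field of the copied config, roughly like this (illustrative; values are placeholders):</p>
<pre><code># ~/.kube/config
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://<master-public-ip>:6443
  name: kubernetes
</code></pre>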
| quoc9x |
<p>I have an ingress controller up and running in the default namespace. My other namespaces have their own ingress YAML files. Whenever I try to deploy them, I get the following error:</p>
<pre><code>Error from server (InternalError): error when creating "orchestration-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: x509: certificate is valid for ingress-nginx-controller-admission, ingress-nginx-controller-admission.ingress-nginx.svc, not ingress-nginx-controller-admission.default.svc
</code></pre>
| Gaurav Jaitly | <p>This solved my error: I removed the previous version of the ingress controller and deployed this one.</p>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml</a></p>
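<p>If the error comes back after redeploying, a leftover admission webhook registration from the old install may still point at the wrong namespace; you can inspect it and, if needed, delete it (the name below assumes the standard manifests):</p>
<pre><code>kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
</code></pre>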
| Gaurav Jaitly |
<p>I want to read the DATA column value from the kubectl get secret -o wide command shown below.</p>
<p><a href="https://i.stack.imgur.com/ciknD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ciknD.png" alt="enter image description here" /></a></p>
<p>I am using this code:</p>
<pre><code>from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("mysql-pass", "default")
print(secret)
</code></pre>
<p>However, the value of the DATA column was not found...</p>
<pre><code>{'api_version': None,
'data': {'ca.crt': 'LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwdGFXNXAKYTNWaVpVTkJNQjRYRFRJd01EZ3hOekV4TkRjMU5Gb1hEVE13TURneE5qRXhORGMxTkZvd0ZURVRNQkVHQTFVRQpBeE1LYldsdWFXdDFZbVZEUVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTVBiClBMQ2t3cTZUQzV2RWVtSEswbGx6T3k0OFZsWjNtUXYzS1NnOHI5TW9ualhheDhZaXhHMGVkaVMvL1NBMTJUUWUKVWFWZEJIWE5Lc2wxNXhPcnRnTWkxbmdpcWdoazZ6eFNXUVZpa0dxWFFJSzlqeTJwamo5WGZFQ0I1Sk5QcDFaUQowS1ZSUWtZV2ZwMTV3dEsyVnVleFhNZDdTT2plb1R3OGhmOHdKeXJUaVliSC8vZVZrNzExbmJHRVBSckh6SlVQClBMckRkN0RNaVArRWtmR2syVlBFN3I3N0pmUGJNRU5za0dEMlZRelQrbjFpZkd3NlJDS3BZSFVYVUZoekVHek4KcDZ5YkEwUGJUNVRCZks0ZEVUZWlYOFQrVnBrNlpNbzJBYUs1TkNsN2NIZEJSVjZwWWcvNElOaFdCZzV1WjFHKwpZMTZ4RW5JMXBwVDFLVGRhRnJNQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUNCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCVU02TjNNS1haMldnNlJFR3MrQUlWbzAybHQxMFFPUldES2lEbFd5Y1VpWFpXWDlRTgpib0dxcEFvNzZXbUhOOUtWM3dMOFBtRkhHRTFGczVLMTlTdkVVVVFRb0d1ZHRQcWxGMEtaYlVwTFBKUEFFM3JzClRXSkdWSHNSS3hQUFYvZ3JBZlZoN0pFMU5jRWY2MWVwZENrVWE4cVh2KzV0bWNYOUtJM0s2Tzk4azVPZUVJdVIKSGNUS1FtL3VZOStNcm9xVE1SYVUxQUh5VmRpOFFidHAxT1ZBNHVsc09aOHZoRDIvYmNiUFp5NkkwaWJNcFhkSApCRXdITGVMNHJ4VDJlS0VhL2pMbVJBREF6djNodUZCWXNmdUthcmx1S2Y0M2xQeW40czhpQ2VNN29LVnRQK1diCngyMFplc28wa1JEYlhocHp4VDVBcElkdWRwSU9kbkZzSzdLMgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==',
'namespace': 'ZGVmYXVsdA==',
'token': 'ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkluWlFWVU5aV21aM2FERkhRbGhJTW1KbFJUTjBZV1EzWlcxcFRUVkJibEJWVFdka1lXTmhXVkpKVGxraWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbVJsWm1GMWJIUXRkRzlyWlc0dGVuWnNPR3NpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1dVlXMWxJam9pWkdWbVlYVnNkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJbUptWkdVek5tSmlMV1UyTjJZdE5ESXdaQzFoWlRZekxXTXdaVGRtWkRobE56VXdaQ0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwa1pXWmhkV3gwT21SbFptRjFiSFFpZlEuU2taV05IcE1zTHk5enBpOEdpT0FvN29rSXE4U05oME41Yy15QkNKU0VESWs5M2lhNFZnNjRyZGNSTVdGTTFiWTBNZk91WnVvNnV2bEgxeXhRRmlOaDJXaEp6OEZfcXRCUDRmQnRMUE4xX3hBRzNGWllYLXA3eWJ2M19RQnU2R01aS3dMSi1vNEpqLVlveXdlQVBIRnRUVGpiYk9fNGpidmROTEhyUml0Q3RyOGJRMGFQWmxiU3JGSW1XNUxtbmVjLUFUYXluVm83NjdSRmdueWNQd2Z5RmxELTJlVzBmcndac0tCSXhyUFpiSUFfaEZXRXFFdGllaHhUUm12Q3dGSjByUGlDbG5iUkloekpZV29IWVg4NnVuR09CMGVPdmNMT29DSmM1VnVMX1pFeFpmQUN6OE5OVW9iaEVwUFZrOURPNmlINXc0cDNrS28xZHdwYkZSZDN3'},
'kind': None,
'metadata': {'annotations': {'kubernetes.io/service-account.name': 'default',
'kubernetes.io/service-account.uid': 'bfde36bb-e67f-420d-ae63-c0e7fd8e750d'},
'cluster_name': None,
'creation_timestamp': datetime.datetime(2020, 8, 18, 11, 48, 19, tzinfo=tzutc()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'initializers': None,
'labels': None,
'managed_fields': [{'api_version': 'v1',
'fields': None,
'manager': 'kube-controller-manager',
'operation': 'Update',
'time': datetime.datetime(2020, 8, 18, 11, 48, 19, tzinfo=tzutc())}],
'name': 'default-token-zvl8k',
'namespace': 'default',
'owner_references': None,
'resource_version': '319',
'self_link': '/api/v1/namespaces/default/secrets/default-token-zvl8k',
'uid': 'd11e5725-7ce5-4a9d-b6fb-4bb76c8fb9eb'},
'string_data': None,
'type': 'kubernetes.io/service-account-token'}
</code></pre>
| 유동근 | <p>As pointed by user Dandy, there are 2 possible interpretations to this question:</p>
<ul>
<li>Getting the <code>data</code> field from the <code>secret</code> - comment (I thought this first)</li>
<li>Getting the <code>DATA</code> (counter) from the <code>kubectl get secret -o yaml</code> - answer</li>
</ul>
<p>Not really sure why you posted a code sample referencing the secret <code>mysql-pass</code> but included the output of <code>default-token-zvl8k</code>.</p>
<hr />
<p>I've created an example to show you both options:</p>
<p><code>secret.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: example-secret
type: Opaque
data:
username: a3J1awo= # kruk
password: c3VwZXJoYXJkcGFzc3dvcmQ= # superhardpassword
</code></pre>
<p><code>Python3</code> code:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
import base64
config.load_kube_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("example-secret", "default") # get the secret
data = secret.data # extract .data from the secret
name = secret.metadata.name
password = secret.data['password'] # extract .data.password from the secret
decoded = base64.b64decode(password) # decode (base64) value from pasw
print ("Secret: " + name)
print("-------------")
print("Here you have the data: ")
print(data)
print("COUNTED DATA: " + str(len(data))) # <- ANSWER FOR THE DATA COUNTER
print("-------------")
print("Here you have the decoded password: ")
print(decoded)
</code></pre>
<p>With above code you can:</p>
<ul>
<li>extract needed information from secret named <code>example-secret</code>.</li>
<li>count the entries in the data field (as in <code>$ kubectl get secret -o yaml</code>) - pointed out by user Dandy</li>
</ul>
<p>The output of above <code>Python3</code> script:</p>
<pre class="lang-sh prettyprint-override"><code>Secret: example-secret
-------------
Here you have the data:
{'password': 'c3VwZXJoYXJkcGFzc3dvcmQ=', 'username': 'a3J1awo='}
COUNTED DATA: 2
-------------
Here you have the decoded password:
b'superhardpassword'
</code></pre>
<p>As for <code>b''</code> you can look here:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/41918836/how-do-i-get-rid-of-the-b-prefix-in-a-string-in-python">Stackoverflow.com: Questions: How do I get rid of the b prefix in a string in python</a></em></li>
</ul>
| Dawid Kruk |
<p>I would like to forward Kubernetes logs from fluent-bit to Elasticsearch through fluentd, but fluent-bit cannot parse Kubernetes logs properly. In order to install Fluent-bit and Fluentd, I use Helm charts. I tried both stable/fluentbit and fluent/fluentbit and faced the same problem:</p>
<pre><code>#0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'Could not dynamically add mapping for field [app.kubernetes.io/component]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].'"
</code></pre>
<p>I put the following lines into the fluent-bit values file as shown <a href="https://github.com/kruftik/fluent-helm-charts/blob/feature/dedot-metadata-keys/charts/fluent-bit/values.yaml" rel="nofollow noreferrer">here</a>:</p>
<pre><code> remapMetadataKeysFilter:
enabled: true
match: kube.*
## List of the respective patterns and replacements for metadata keys replacements
## Pattern must satisfy the Lua spec (see https://www.lua.org/pil/20.2.html)
## Replacement is a plain symbol to replace with
replaceMap:
- pattern: "[/.]"
replacement: "_"
</code></pre>
<p>...nothing changed, same errors are listed.</p>
<p>Is there a workaround to get rid of that bug?</p>
<p>my values.yaml is here:</p>
<pre><code> # Default values for fluent-bit.
# kind -- DaemonSet or Deployment
kind: DaemonSet
# replicaCount -- Only applicable if kind=Deployment
replicaCount: 1
image:
repository: fluent/fluent-bit
pullPolicy: Always
# tag:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
create: true
annotations: {}
name:
rbac:
create: true
podSecurityPolicy:
create: false
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 2020
annotations:
prometheus.io/path: "/api/v1/metrics/prometheus"
prometheus.io/port: "2020"
prometheus.io/scrape: "true"
serviceMonitor:
enabled: true
namespace: monitoring
interval: 10s
scrapeTimeout: 10s
# selector:
# prometheus: my-prometheus
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
priorityClassName: ""
env: []
envFrom: []
extraPorts: []
# - port: 5170
# containerPort: 5170
# protocol: TCP
# name: tcp
extraVolumes: []
extraVolumeMounts: []
## https://docs.fluentbit.io/manual/administration/configuring-fluent-bit
config:
## https://docs.fluentbit.io/manual/service
service: |
[SERVICE]
Flush 1
Daemon Off
Log_Level info
Parsers_File parsers.conf
Parsers_File custom_parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
## https://docs.fluentbit.io/manual/pipeline/inputs
inputs: |
[INPUT]
Name tail
Path /var/log/containers/*.log
Parser docker
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
[INPUT]
Name systemd
Tag host.*
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
Read_From_Tail On
## https://docs.fluentbit.io/manual/pipeline/filters
filters: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Kube_Tag_Prefix kube.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
[FILTER]
Name lua
Match kube.*
script /fluent-bit/etc/functions.lua
call dedot
## https://docs.fluentbit.io/manual/pipeline/outputs
outputs: |
[OUTPUT]
Name forward
Match *
Host fluentd-in-forward.elastic-system.svc.cluster.local
Port 24224
tls off
tls.verify off
## https://docs.fluentbit.io/manual/pipeline/parsers
customParsers: |
[PARSER]
Name docker_no_time
Format json
Time_Keep Off
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
</code></pre>
| Tireli Efe | <p>I had the same issue; it is caused by conflicting label keys once they are converted into JSON.
I renamed the conflicting keys to match the newer format of the recommended labels:</p>
<pre><code><filter **>
@type rename_key
rename_rule1 ^app$ app.kubernetes.io/name
rename_rule2 ^chart$ helm.sh/chart
rename_rule3 ^version$ app.kubernetes.io/version
rename_rule4 ^component$ app.kubernetes.io/component
rename_rule5 ^istio$ istio.io/name
</filter>
</code></pre>
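<p>Note: as far as I know the <code>rename_key</code> filter is not bundled with Fluentd itself but comes from a separate plugin gem, so it has to be present in your Fluentd image, e.g. (plugin name assumed; verify against the plugin you actually use):</p>
<pre><code>RUN gem install fluent-plugin-rename-key
</code></pre>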
| Jacob Stampe Mikkelsen |
<p>Hey, I'm trying to set up cross-account access for a role. <strong>I have 2 accounts: prod and non-prod</strong>,
and a <strong>bucket in the prod account</strong>, which I'm trying to write files to from a non-prod role that is used as a service account in a k8s cluster.</p>
<p><strong>In the prod account I configured:</strong>
a role with the following policy (read/write access to the bucket):</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::test2"
]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": [
"arn:aws:s3:::test2/*"
]
}
  ]
}
</code></pre>
<p>and the following trust:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::non-prod-AccountID:role/name-of-the-non-prod-role"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
</code></pre>
<p><strong>In non-prod I configured:</strong></p>
<p>a role with the following policy:</p>
<pre><code> {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::prod-Account-ID:role/prod-role-name"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
</code></pre>
<p>and trust as follows:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::non-prod-accountID:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/1111111111111111111"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.us-east-1.amazonaws.com/id/1111111111111111111:sub":
"system:serviceaccount:name-space:name-of-the-service-account"
}
}
}
]
}
</code></pre>
<p>serviceAccount annotation is:</p>
<pre><code>annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::non-prod-AccountID:role/non-prod-role-name
</code></pre>
<p>when running the command from inside the pod with the service account of the role in non-prod:</p>
<pre><code>aws s3 cp hello.txt s3://test2/hello.txt
</code></pre>
<p>I'm getting:</p>
<pre><code>upload failed: ./hello.txt to s3://test2/hello.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
</code></pre>
<p>By the way, the cluster is in another account (a devops account), in case that's related. I have added the OIDC provider identity to both the non-prod and prod accounts as an identity provider.</p>
| talms1 | <p>If you're getting the error <code>An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: No OpenIDConnect provider found in your account for $oidc_url</code> when trying to cross-account assume roles, but you can assume roles in your cluster account normally, here's some points:</p>
<p><strong>EKS ACCOUNT</strong></p>
<ol>
<li>Create a ServiceAccount</li>
</ol>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: $sa_name
namespace: $eks_ns
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::$resource_account_id:role/$role_name
</code></pre>
<ol start="2">
<li>Annotate your deployment</li>
</ol>
<pre><code>spec.template.spec:
serviceAccountName: $sa_name
</code></pre>
<ol start="3">
<li>Get info about your cluster OIDC Provider</li>
</ol>
<pre><code>aws iam get-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::$eks_cluster_account_id:oidc-provider/$oidc_provider
</code></pre>
<p>3.1. The output will look like this:</p>
<pre><code>{
"Url": "...",
"ClientIDList": ["..."],
"ThumbprintList": ["..."],
"CreateDate": "...",
"Tags": [...]
}
</code></pre>
<p>3.2. Take note of the outputs (<em>Url</em> and <em>ThumbprintList</em> specially)</p>
<p><strong>RESOURCE ACCOUNT</strong></p>
<ol>
<li>Add the provider (if you don't have it already), using the output from your cluster account</li>
</ol>
<pre><code>aws iam create-open-id-connect-provider --url $oidc_url --client-id-list sts.amazonaws.com --thumbprint-list $oidc_thumbprint
</code></pre>
<p>This should be enough to make the mentioned error stop. If you now get <code>An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity</code>, you're probably using the <em>$eks_cluster_account_id</em> in Principal.Federated instead of the <em>$resource_account_id</em> created in the previous step. So, make sure you're using the ARN of the identity provider that is registered in the resource account, not the cluster account.</p>
<ol start="2">
<li>Create a role and a policy to access your resources with the following trusted entities policy:</li>
</ol>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::$resource_account_id:oidc-provider/$oidc_provider"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
</code></pre>
<p>Also, there's no need to have two roles. One is enough.</p>
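<p>A quick sanity check of which role the pod actually assumed can be run from inside the pod:</p>
<pre><code>aws sts get-caller-identity
</code></pre>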
| Davi Miranda |
<p>I have been trying to query MySQL through FastAPI on Kubernetes, but in Swagger I get an error:</p>
<blockquote>
<p>504 Gateway Time-out</p>
</blockquote>
<p>I'll share all the configuration I did, hoping you can help me find the problem. The configuration is long, so bear with me, and thanks in advance.</p>
<p>I already have an image for the database, and I created the fastapi image and push it to docker hub:</p>
<p>main.py</p>
<pre><code>from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy.engine import create_engine
import os
# creating a FastAPI server
server = FastAPI(title='User API')
# creating a connection to the database
MYSQL_ROOT_USER = os.getenv('MYSQL_ROOT_USER')
MYSQL_ROOT_PASSWORD = os.getenv('MYSQL_ROOT_PASSWORD')
MYSQL_ROOT_HOST = os.getenv('MYSQL_ROOT_HOST')
MYSQL_ROOT_DB = os.getenv('MYSQL_ROOT_DB')
# recreating the URL connection
connection_url = 'mysql://{user}:{password}@{url}/{database}'.format(
user=MYSQL_ROOT_USER,
password=MYSQL_ROOT_PASSWORD,
url="10.100.252.148",# this is Ip address of mysql-service
database=MYSQL_ROOT_DB
)
# creating the connection
mysql_engine = create_engine(connection_url)
# creating a User class
class User(BaseModel):
user_id: int = 0
username: str = 'daniel'
email: str = '[email protected]'
@server.get('/status')
async def get_status():
"""Returns 1
"""
return 1
@server.get('/users')
async def get_users():
try:
with mysql_engine.connect() as connection:
results = connection.execute('SELECT * FROM Users;')
results = [
User(
user_id=i[0],
username=i[1],
email=i[2]
) for i in results.fetchall()]
return results
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
</code></pre>
<p>the deployment.yml for both mysql and fastapi:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: k8s-deployment
name: k8s-deployment
spec:
replicas: 3
selector:
matchLabels:
app: k8s-deployment
template:
metadata:
labels:
app: k8s-deployment
spec:
containers:
- image: raouf001/fastapi:2
name: k8s-deployment
imagePullPolicy: Always
env:
- name: FAST_API_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: FAST_API_PORT
- name: MYSQL_ROOT_USER
valueFrom:
configMapKeyRef:
name: app-config
key: MYSQL_ROOT_USER
- name: MYSQL_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
name: app-config
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_ROOT_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: MYSQL_ROOT_HOST
- name: MYSQL_ROOT_PORT
valueFrom:
configMapKeyRef:
name: app-config
key: MYSQL_ROOT_PORT
- name: MYSQL_ROOT_DB
valueFrom:
configMapKeyRef:
name: app-config
key: MYSQL_ROOT_DB
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: k8s-mysql
spec:
replicas: 3
selector:
matchLabels:
app: lbl-k8s-mysql
template:
metadata:
labels:
app: lbl-k8s-mysql
spec:
containers:
- name: mysql
image: datascientest/mysql-k8s:1.0.0
imagePullPolicy: Always
env:
- name: MYSQL_DATABASE
value: Main
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: my-secret
key: password
ports:
- containerPort: 3306
protocol: TCP
</code></pre>
<p>now the service for both:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-fastapi-service
labels:
app: k8s-deployment
spec:
type: ClusterIP
selector:
app: k8s-deployment
ports:
- port: 8001
protocol: TCP
targetPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
name: lbl-k8s-mysql
spec:
ports:
- port: 3306
protocol: TCP
targetPort: 3306
selector:
name: lbl-k8s-mysql
type: ClusterIP
</code></pre>
<p>ingress.yml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
defaultBackend:
service:
name: mysql-fastapi-service
port:
number: 8000
name: mysql-service
port:
number: 3306
</code></pre>
<p>configmap.yml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
FAST_API: app.py
MYSQL_ROOT_USER: root
MYSQL_ROOT_PASSWORD: datascientest1234
MYSQL_ROOT_HOST: "k8s-deployment.service.default"
MYSQL_ROOT_PORT: "3306"
MYSQL_ROOT_DB: Main
FAST_API_PORT: "8000"
</code></pre>
<p>secret.yml</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
password: ZGF0YXNjaWVudGVzdDEyMzQ=
</code></pre>
<p>I am ready to share with you any information you need thank you in advance.</p>
| Raouf Yahiaoui | <p>I guess the IP address keeps changing every time you apply. Hence it takes time to resolve and eventually leads to a session timeout. The best solution is to give your service name as the hostname while connecting, and make sure both your services are in the same namespace.</p>
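<p>To make that concrete with the code from the question, the connection URL would point at the Service's DNS name rather than a hard-coded ClusterIP (a sketch; the service name and namespace are taken from the manifests above):</p>
<pre><code># sketch: use the Service DNS name instead of a hard-coded IP
connection_url = 'mysql://{user}:{password}@{url}/{database}'.format(
    user=MYSQL_ROOT_USER,
    password=MYSQL_ROOT_PASSWORD,
    url="mysql-service.default.svc.cluster.local:3306",  # the mysql Service, not 10.100.252.148
    database=MYSQL_ROOT_DB
)
</code></pre>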
| Mihir Shah |
<p>Trying to copy files from the container to the local machine first.</p>
<p>So, I have a custom Dockerfile with <code>RUN mkdir /test1 && touch /test1/1.txt</code>; then I build my image and create an empty folder at the local path /root/test1</p>
<p>and run <code>docker run -d --name container1 -v /root/test1:/test1 Image:1</code></p>
<p>I tried to copy files from the container to the local folder so I could use them later on, but the empty local folder takes precedence over the container path and makes the container directory empty.</p>
<p>Could someone please help me here?</p>
<p>For example, I have built my own custom Jenkins image; the first time I launch it I need to copy all the configurations and changes locally from the container, so that later, if I delete my container and launch it again, I don't need to configure it from scratch.</p>
<p>Thanks,</p>
| Siva | <p>The relatively new <code>--mount</code> flag replaces the <code>-v/--volume</code> mount. It's easier to understand (syntactically) and is also more verbose (see <a href="https://docs.docker.com/storage/volumes/" rel="nofollow noreferrer">https://docs.docker.com/storage/volumes/</a>).</p>
<p>You can mount and copy with (<code>Image:1</code> is the image name from the question):</p>
<pre><code>docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
Image:1 \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
</code></pre>
<p>where you need to adjust the <code>cp</code> command to your needs. I'm not sure if you need the <code>"$(pwd)"</code> part.</p>
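<p>If the goal is simply to pull files out of a container built from that image, <code>docker cp</code> may also do the job; a quick sketch using the names from the question (run the container without the bind mount so /test1 still holds the baked-in file):</p>
<pre><code>docker run -d --name container1 Image:1
docker cp container1:/test1/1.txt /root/test1/
</code></pre>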
| Casper Dijkstra |
<p>I'm running a Kubernetes job whose state I want to monitor. I'm running various <code>--watch-only</code> commands simultaneously, e.g.
<code>kubectl get pods --watch-only</code>, which shows me the updated state of pods. But I want to have a <em>timestamp</em> and some <em>string</em> appended to the output.</p>
<p>The idea is to know when the state changed and also add additional info as a string.</p>
<p>How can I achieve this?</p>
| kaur | <p>Giving more visibility to the potential solution for the question posted in the comments by original poster:</p>
<blockquote>
<p>This is what i found working so far kubectl get pods --watch-only | while read line ; do echo -e "$(date +"%Y-%m-%d %H:%M:%S.%3N")\t pods\t $line" ; done</p>
</blockquote>
<p>Command used:</p>
<ul>
<li><code>$ kubectl get pods --watch-only | while read line ; do echo -e "$(date +"%Y-%m-%d %H:%M:%S.%3N")\t pods\t $line" ; done</code></li>
</ul>
<p>Solution is correct but the changes in state (<code>PENDING</code>,<code>RUNNING</code>,<code>SUCCEEDED/COMPLETED</code>) would need to be further extracted (assuming further actions) from the above command.</p>
<hr />
<p>Looking on this from a different perspective you can use the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="noreferrer">official Kubernetes API library</a> to monitor the statuses of pods and jobs and act accordingly to them (for example: do something when job succeeded).</p>
<hr />
<p>I've created an <strong>example</strong> app with Kubernetes Python API library to watch the statuses of pods and jobs.</p>
<p>Assuming that:</p>
<ul>
<li>You have a working Kubernetes cluster with <code>kubectl</code> configured</li>
<li>You have installed Python and required library:
<ul>
<li><code>$ pip install kubernetes</code></li>
</ul>
</li>
</ul>
<p>Using the example for a <code>Job</code>:</p>
<blockquote>
<p><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noreferrer">Kubernetes.io: Docs: Concepts: Job</a></em></p>
</blockquote>
<h3>Example for pods</h3>
<p>Below is the sample code in Python3 that would watch for pods and print a message when status of pod is set to <code>Succeeded</code>:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config, watch
from datetime import datetime
config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", watch=False):
print("POD_NAME: " + event['object'].metadata.name) # print the name
print("TIME: " + str(datetime.now())) # print the time
print("PHASE: " + event['object'].status.phase) # print the status of the pod
print("CUSTOM AMAZING TEXT HERE!")
if (event['object'].status.phase == "Succeeded") and (event['type'] != "DELETED"): # do below when condition is met
print ("----> This pod succeeded, do something here!")
print("---")
</code></pre>
<p>This would produce an output similar to one below:</p>
<pre class="lang-sh prettyprint-override"><code>POD_NAME: pi-pjmm5
TIME: 2020-09-06 15:28:01.541244
PHASE: Pending
CUSTOM AMAZING TEXT HERE!
---
POD_NAME: pi-pjmm5
TIME: 2020-09-06 15:28:03.894063
PHASE: Running
CUSTOM AMAZING TEXT HERE!
---
POD_NAME: pi-pjmm5
TIME: 2020-09-06 15:28:09.044219
PHASE: Succeeded
CUSTOM AMAZING TEXT HERE!
----> This pod succeeded, do something here!
---
</code></pre>
<h3>Example for jobs</h3>
<p>Below is the sample code in Python3 that would watch for jobs and print a message when status of job is set to <code>Succeeded</code>:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config, watch
from datetime import datetime
config.load_kube_config()
v1 = client.BatchV1Api()
w = watch.Watch()
for event in w.stream(v1.list_namespaced_job, namespace="default", watch=False):
print("JOB_NAME: " + event['object'].metadata.name)
print("TIME: " + str(datetime.now()))
print("STATUS: " + event['type'])
print("CUSTOM AMAZING TEXT HERE!")
if (event['object'].status.succeeded == 1) and (event['type'] != "DELETED"):
print ("----> This job succeeded, do something here!")
print("---")
</code></pre>
<p>This would produce an output similar to one below:</p>
<pre class="lang-sh prettyprint-override"><code>JOB_NAME: pi
TIME: 2020-09-06 15:32:49.909096
STATUS: ADDED
CUSTOM AMAZING TEXT HERE!
---
JOB_NAME: pi
TIME: 2020-09-06 15:32:49.936856
STATUS: MODIFIED
CUSTOM AMAZING TEXT HERE!
---
JOB_NAME: pi
TIME: 2020-09-06 15:32:56.998511
STATUS: MODIFIED
CUSTOM AMAZING TEXT HERE!
----> This job succeeded, do something here!
---
</code></pre>
| Dawid Kruk |