<p>I'm setting up a Kubernetes cluster in AWS using Kops. I've got an nginx ingress controller, and I'm trying to use letsencrypt to setup tls. Right now I can't get my ingress up and running because my certificate challenges get this error:</p> <p><code>Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://critsit.io/.well-known/acme-challenge/[challengeId]': Get http://critsit.io/.well-known/acme-challenge/[challengeId]: EOF</code></p> <p>I've got a LoadBalancer service that's taking public traffic, and the certificate issuer automatically creates 2 other services which don't have public IPs.</p> <p>What am I doing wrong here? Is there some networking issue preventing the pods from finishing the acme flow? Or maybe something else?</p> <p>Note: I have setup an A record in Route53 to direct traffic to the LoadBalancer.</p> <pre><code>&gt; kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cm-acme-http-solver-m2q2g NodePort 100.69.86.241 &lt;none&gt; 8089:31574/TCP 3m34s cm-acme-http-solver-zs2sd NodePort 100.67.15.149 &lt;none&gt; 8089:30710/TCP 3m34s default-http-backend NodePort 100.65.125.130 &lt;none&gt; 80:32485/TCP 19h kubernetes ClusterIP 100.64.0.1 &lt;none&gt; 443/TCP 19h landing ClusterIP 100.68.115.188 &lt;none&gt; 3001/TCP 93m nginx-ingress LoadBalancer 100.67.204.166 [myELB].us-east-1.elb.amazonaws.com 443:30596/TCP,80:30877/TCP 19h </code></pre> <p>Here's my ingress setup:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: critsit-ingress namespace: default annotations: kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/acme-challenge-type: &quot;http01&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - critsit.io - app.critsit.io secretName: letsencrypt-prod rules: - host: critsit.io http: paths: - path: / backend: serviceName: landing servicePort: 3001 </code></pre> <p>And my certificate issuer:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: # The ACME server URL server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: [email protected] # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-prod # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: nginx selector: {} </code></pre> <p>Update: I've noticed that my load balancer has all of the instances marked as OutOfOrder because they're failing health checks. I wonder if that's related to the issue.</p> <p>Second update: I abandoned this route altogether, and rebuilt my networking/ingress system using Istio</p>
<p>The error message you are getting can mean a wide variety of issues. However, there are a few things you can check/do in order to make it work:</p> <ol> <li>Delete the Ingress, the certificates and the cert-manager <a href="https://cert-manager.io/docs/installation/uninstall/kubernetes/" rel="nofollow noreferrer">fully</a>. After that, add them all back to make sure everything installs cleanly. Sometimes stale certs or bad/multiple Ingress pathing might be the issue. For example, you can use Helm:</li> </ol> <hr /> <pre><code>helm install my-nginx-ingress stable/nginx-ingress helm repo add jetstack https://charts.jetstack.io helm repo update helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.0 --set installCRDs=true </code></pre> <hr /> <ol start="2"> <li><p>Make sure your traffic allows HTTP or has HTTPS with a trusted cert.</p> </li> <li><p>Check the hairpin mode of your load balancer and make sure it is working.</p> </li> <li><p>Add the <code>nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot;</code> annotation to the Ingress rule, as sketched below. Wait a moment and see if a valid cert is created.</p> </li> <li><p>You can manually issue certificates in your Kubernetes cluster. To do so, please follow <a href="https://github.com/nabsul/k8s-letsencrypt" rel="nofollow noreferrer">this guide</a>.</p> </li> <li><p>The problem can resolve itself in time. Currently, if the self check fails, cert-manager updates the status information with the reason (like: self check failed) and then tries again later (to allow for propagation). This is expected behavior.</p> </li> </ol> <p>This is an ongoing issue that is being tracked <a href="https://github.com/jetstack/cert-manager/issues/656" rel="nofollow noreferrer">here</a> and <a href="https://github.com/jetstack/cert-manager/issues/3238" rel="nofollow noreferrer">here</a>.</p>
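<p>For point 4, a minimal sketch of how that annotation could look on the Ingress from the question, assuming the kubernetes/ingress-nginx controller (only the relevant metadata is shown):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: critsit-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot;
    # let the plain-HTTP ACME self check through instead of redirecting to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot;
</code></pre>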
<p>I am novice to k8s, so this might be very simple issue for someone with expertise in the k8s.</p> <p>I am working with two nodes </p> <ol> <li>master - 2cpu, 2 GB memory</li> <li>worker - 1 cpu, 1 GB memory</li> <li>OS - ubuntu - hashicorp/bionic64</li> </ol> <p>I did setup the master node successfully and i can see it is up and running </p> <pre><code>vagrant@master:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION master Ready master 29m v1.18.2 </code></pre> <p>Here is token which i have generated </p> <pre><code>vagrant@master:~$ kubeadm token create --print-join-command W0419 13:45:52.513532 16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 </code></pre> <p><strong>Issue</strong> - But when i try to join it from the worker node i get</p> <pre><code>vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 W0419 13:46:17.651819 15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>Here are the ports which are occupied </p> <pre><code>10.0.2.15:2379 10.0.2.15:2380 10.0.2.15:68 </code></pre> <p>Note i am using CNI from - </p> <pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml </code></pre>
<ol> <li><p>Run the command <code>kubectl config view</code> or <code>kubectl cluster-info</code> to check the IP address of the Kubernetes control plane. In my case it is 10.0.0.2.</p> <pre><code>$ kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://10.0.0.2:6443 </code></pre> <p>Or:</p> <pre><code>$ kubectl cluster-info Kubernetes control plane is running at https://10.0.0.2:6443 </code></pre> </li> <li><p>Try to telnet to the Kubernetes control plane.</p> <pre><code>telnet 10.0.0.2 6443 Trying 10.0.0.2... </code></pre> </li> <li><p>Press Control + C on your keyboard to terminate the telnet command.</p> </li> <li><p>Go to your Firewall Rules, add port 6443 and make sure to allow all instances in the network (see the sketch after this list for a host-level firewall).</p> </li> <li><p>Then try to telnet to the Kubernetes control plane once again and you should be able to connect now:</p> <pre><code>$ telnet 10.0.0.2 6443 Trying 10.0.0.2... Connected to 10.0.0.2. Escape character is '^]'. </code></pre> </li> <li><p>Try to join the worker nodes now. You can run the command <code>kubeadm token create --print-join-command</code> to create a new token in case you forgot to save the old one.</p> </li> <li><p>Run <code>kubectl get nodes</code> on the control plane to see the nodes join the cluster.</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION k8s Ready control-plane 57m v1.25.0 wk8s-node-0 Ready &lt;none&gt; 36m v1.25.0 wk8s-node-1 Ready &lt;none&gt; 35m v1.25.0 </code></pre> </li> </ol>
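<p>Since the nodes in the question are plain Ubuntu VMs, the blocking firewall may also be on the host itself. A minimal sketch, assuming the master runs <code>ufw</code> (which the question does not state):</p> <pre><code># on the master node, allow the API server port (assumes ufw is the active firewall)
sudo ufw allow 6443/tcp
sudo ufw reload
# then re-check connectivity from the worker
telnet &lt;control-plane-ip&gt; 6443
</code></pre>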
<p>I need to setup a windows authentication in Kubernetes. And to configure GMSA in K8s for pods and containers in windows, I came across this link:-(<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-gmsa/</a>).</p> <p>This documentation has a step which confirms to “<strong>Install Webhooks to validate GMSA users</strong>”. To follow this step a linux/unix script is asked to execute which generates certificates, private key and other values and substitue in YAML file which is further executed on a Kubernetes cluster. As mentioned in a screenshot below (part of mentioned link) </p> <p><a href="https://i.stack.imgur.com/7iuyu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7iuyu.png" alt="enter image description here"></a></p> <p>Now I have a Kubectl client installed on Windows machine and even all images created and deployed on windows container running on windows server 2019 only. </p> <p>I cannot execute this unix/linux script to create Webhook from windows machine. Is there any other way to achieve this step. </p> <p>Thanks</p>
<p>I installed Cygwin (Linux Platform on Windows) to execute the script.</p>
<p>I can't create pipelines. I can't even load the samples / tutorials on the AI Platform Pipelines Dashboard because it doesn't seem to be able to proxy to whatever it needs to.</p> <pre><code>An error occurred Error occured while trying to proxy to: ... </code></pre> <p>I looked into the cluster's details and found 3 components with errors:</p> <pre><code>Deployment metadata-grpc-deployment Does not have minimum availability Deployment ml-pipeline Does not have minimum availability Deployment ml-pipeline-persistenceagent Does not have minimum availability </code></pre> <p>Creating the clusters involve approx. 3 clicks in GCP Kubernetes Engine so I don't think I messed up this step.</p> <p>Anyone have an idea of how to achieve &quot;minimum availability&quot;?</p> <p><strong>UPDATE 1</strong></p> <p>Nodes have adequate resources and are Ready. YAML file looks good. I have 2 clusters in diff regions/zones and both have the deployment errors listed above. 2 Pods are not ok.</p> <pre><code>Name: ml-pipeline-65479485c8-mcj9x Namespace: default Priority: 0 Node: gke-cluster-3-default-pool-007784cb-qcsn/10.150.0.2 Start Time: Thu, 17 Sep 2020 22:15:19 +0000 Labels: app=ml-pipeline app.kubernetes.io/name=kubeflow-pipelines-3 pod-template-hash=65479485c8 Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container ml-pipeline-api-server Status: Running IP: 10.4.0.8 IPs: IP: 10.4.0.8 Controlled By: ReplicaSet/ml-pipeline-65479485c8 Containers: ml-pipeline-api-server: Container ID: ... Image: ... Image ID: ... Ports: 8888/TCP, 8887/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Fri, 18 Sep 2020 10:27:31 +0000 Last State: Terminated Reason: Error Exit Code: 255 Started: Fri, 18 Sep 2020 10:20:38 +0000 Finished: Fri, 18 Sep 2020 10:27:31 +0000 Ready: False Restart Count: 98 Requests: cpu: 100m Liveness: exec [wget -q -S -O - http://localhost:8888/apis/v1beta1/healthz] delay=3s timeout=2s period=5s #success=1 #failure=3 Readiness: exec [wget -q -S -O - http://localhost:8888/apis/v1beta1/healthz] delay=3s timeout=2s period=5s #success=1 #failure=3 Environment: HAS_DEFAULT_BUCKET: true BUCKET_NAME: PROJECT_ID: &lt;set to the key 'project_id' of config map 'gcp-default-config'&gt; Optional: false POD_NAMESPACE: default (v1:metadata.namespace) DEFAULTPIPELINERUNNERSERVICEACCOUNT: pipeline-runner OBJECTSTORECONFIG_SECURE: false OBJECTSTORECONFIG_BUCKETNAME: DBCONFIG_DBNAME: kubeflow_pipelines_3_pipeline DBCONFIG_USER: &lt;set to the key 'username' in secret 'mysql-credential'&gt; Optional: false DBCONFIG_PASSWORD: &lt;set to the key 'password' in secret 'mysql-credential'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from ml-pipeline-token-77xl8 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: ml-pipeline-token-77xl8: Type: Secret (a volume populated by a Secret) SecretName: ml-pipeline-token-77xl8 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 52m (x409 over 11h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Back-off restarting failed container Warning Unhealthy 31m (x94 over 12h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Readiness probe failed: Warning Unhealthy 31m (x29 over 10h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn (combined from 
similar events): Readiness probe failed: cannot exec in a stopped state: unknown Warning Unhealthy 17m (x95 over 12h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Liveness probe failed: Normal Pulled 7m26s (x97 over 12h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Container image &quot;gcr.io/cloud-marketplace/google-cloud-ai-platform/kubeflow-pipelines/apiserver:1.0.0&quot; already present on machine Warning Unhealthy 75s (x78 over 12h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Liveness probe errored: rpc error: code = DeadlineExceeded desc = context deadline exceeded </code></pre> <p>And the other pod:</p> <pre><code>Name: ml-pipeline-persistenceagent-67db8b8964-mlbmv Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 32s (x2238 over 12h) kubelet, gke-cluster-3-default-pool-007784cb-qcsn Back-off restarting failed container </code></pre> <p><strong>SOLUTION</strong></p> <p>Do not let Google handle any storage. Uncheck &quot;Use managed storage&quot; and set up your own artifact collections manually. You don't actually need to enter anything in these fields since the pipeline will be launched anyway.</p>
<p>The <code>Does not have minimum availability</code> error is generic. There could be many issues that trigger it. You need to analyse more in-depth in order to find the actual problem. Here are some possible causes:</p> <ul> <li><p><strong>Insufficient resources:</strong> check if your Node has adequate resources (CPU/Memory). If Node is ok than <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#does_not_have_minimum_availability" rel="nofollow noreferrer">check the Pod's status</a>.</p> </li> <li><p><strong>Liveliness probe and/or Readiness probe failure:</strong> execute <code>kubectl describe pod &lt;pod-name&gt;</code> to check if they failed and why.</p> </li> <li><p><strong>Deployment misconfiguration:</strong> review your deployment yaml file to see if there are any errors or leftovers from previous configurations.</p> </li> <li><p>You can also try to wait a bit as sometimes it takes some time in order to deploy everything and/or try changing your Region/Zone.</p> </li> </ul>
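<p>A few generic commands that can help narrow it down (the pod name is taken from the question; adjust the namespace to wherever Kubeflow Pipelines is installed):</p> <pre><code># node capacity and pressure (kubectl top requires metrics-server)
kubectl get nodes
kubectl top nodes
# pod status, restart counts and probe failures
kubectl get pods -n default -o wide
kubectl describe pod ml-pipeline-65479485c8-mcj9x -n default
# recent events, newest last
kubectl get events -n default --sort-by=.lastTimestamp
</code></pre>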
<p>I would like the set the same env variable to the same value on all the containers of my pod. I am not trying to pass info between containers so this variable will not be updated but I want to ensure that if somebody update its value, it will be kept in sync across all the containers in my pod.</p> <p>Is there any way to do this out of the box? Currently I see two options:</p> <ul> <li>The ugly one would be to set the value on one container and use a <code>fieldRef</code> on the others.</li> <li>The less ugly would be to create a new ConfigMap and use envFrom in all the containers.</li> </ul> <p>But Pods being a &quot;single unit&quot; it seems odd that there wouldn't be a way to define &quot;pod-wide&quot; env variables.</p>
<p>There are a few ways.</p> <ol> <li><p>The most common practice is, as you mention, using a <code>ConfigMap</code>. This approach is explained in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Kubernetes docs</a>; a minimal sketch is shown below.</p> </li> <li><p>Another option is to use a <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">Secret</a>; however, it's similar to the <code>ConfigMap</code> way.</p> </li> <li><p>If you know that variable before deploying, you can <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container" rel="nofollow noreferrer">define an environment variable for a container</a>.</p> </li> <li><p>The last method is to set them manually in each container; however, that is the most tedious way.</p> </li> </ol> <p>There is also a good comparison of ways to pass <code>environment variables</code> in this <a href="https://medium.com/@caboadu/environment-variables-4-ways-to-set-them-in-kubernetes-bc7c8ceb333d" rel="nofollow noreferrer">Medium article</a>.</p>
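<p>A minimal sketch of the <code>ConfigMap</code> + <code>envFrom</code> approach (the names <code>shared-env</code>, <code>LOG_LEVEL</code> and the images are placeholders):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-env
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    envFrom:              # every key in the ConfigMap becomes an env variable
    - configMapRef:
        name: shared-env
  - name: sidecar
    image: busybox
    command: [sh, -c, sleep 3600]
    envFrom:              # same source, so both containers see the same value
    - configMapRef:
        name: shared-env
</code></pre> <p>With a single source of truth, anyone updating the value only has to edit the ConfigMap and restart the Pod, and every container stays in sync.</p>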
<p>I am planning to deploy 15 different applications initially and would endup with 300+ applications on azure kubernetes and would be using Prometheus and Grafana for monitoring.</p> <p>I have deployed both the Prometheus and Grafana on a separate namespace on the dedicated node.</p> <p>How do I enable horizontal pod scaling for Prometheus and Grafana?</p>
<p>You can scale your applications based on <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">custom metrics</a> gathered by Prometheus and presented in the Grafana dashboard.</p> <p>In order to do that you'll need the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter#prometheus-adapter-for-kubernetes-metrics-apis" rel="nofollow noreferrer">Prometheus Adapter</a> to implement the custom metrics API, which enables the <code>HorizontalPodAutoscaler</code> controller to retrieve metrics using the <code>custom.metrics.k8s.io</code> API. You can define your own metrics through the adapter’s configuration so the HPA would scale based on those stats.</p> <p><a href="https://www.section.io/blog/horizontal-pod-autoscaling-custom-metrics/" rel="nofollow noreferrer">Here</a> you can find a short guide that would get you started.</p>
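<p>As a rough sketch, once the Prometheus Adapter exposes a custom metric, an HPA that consumes it could look like this (the Deployment name <code>myapp</code> and the metric <code>http_requests_per_second</code> are assumptions, not taken from your setup):</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # must be exposed through the adapter's rules
      target:
        type: AverageValue
        averageValue: &quot;100&quot;
</code></pre>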
<p>I'm trying to configure K8s plugin on Jenkins to automatically create slave agent whenever a job is triggered. However, as far as I researched, it is only possible to do so providing that Jenkins server is running on k8s cluster. Is there a way to configure k8s plugin on Jenkins server which is running on Openstack server?</p> <p>I've a Jenkins server and also configured k8s plugin on it. Everytime I build a new job, a new pod for slave agent is created but not possible to be started. When i tried command <code>kubectl logs &lt;pod-name&gt;</code> I received the following error:</p> <pre><code>Error from server: Get https://XX.XX.XX.XX:10250/containerLogs/jenkins/slave-tester-4c4wb/jnlp: net/http: TLS handshake timeout </code></pre>
<p>This is definitely possible, there is a good amount of documentation available here: <a href="https://github.com/jenkinsci/kubernetes-plugin#kubernetes-cloud-configuration" rel="nofollow noreferrer">https://github.com/jenkinsci/kubernetes-plugin#kubernetes-cloud-configuration</a></p> <p>The important part is: "When running the Jenkins master outside of Kubernetes you will need to set the credential to secret text. The value of the credential will be the token of the service account you created for Jenkins in the cluster the agents will run on."</p>
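<p>A rough sketch of creating that service account and reading its token on a pre-1.24 cluster (the namespace and names below, such as <code>jenkins</code>, are placeholders; adjust the permissions to your needs):</p> <pre><code>kubectl create namespace jenkins
kubectl create serviceaccount jenkins --namespace jenkins
# give the agents only what they need; the built-in &quot;edit&quot; ClusterRole is often enough
kubectl create rolebinding jenkins-edit --clusterrole=edit --serviceaccount=jenkins:jenkins --namespace jenkins
# read the auto-generated token and paste it into a Jenkins &quot;Secret text&quot; credential
kubectl get secret -n jenkins $(kubectl get serviceaccount jenkins -n jenkins -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode
</code></pre>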
<p>I am running a gke cluster (v1.16.15gke.4300) and the nginx ingress authentication is failing. The below snippet is for external oauth2 authentication but even a basic auth is also not working. Seems that nginx is completely ignore these annotations.</p> <p>The oauth2 proxy with google api <strong>is actually working fine</strong>, but nginx is not including the auth configuration on his own configuration. I can easily check that on the nginx running pods. No auth conf there.</p> <p>nginx ingress controller:</p> <pre><code> repoURL: 'https://helm.nginx.com/stable' targetRevision: 0.6.1 version: nginx/1.19.2 </code></pre> <p>The live manifest for an ingress service protected by oauth2:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/auth-signin: https://oauth2.####.net/oauth2/start?rd=$escaped_request_uri ingress.kubernetes.io/auth-url: https://oauth2.####.net/oauth2/auth kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;extensions/v1beta1&quot;,&quot;kind&quot;:&quot;Ingress&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:##########} creationTimestamp: &quot;####&quot; finalizers: - networking.gke.io/ingress-finalizer-V2 generation: 1 labels: argocd.argoproj.io/instance: k8s-default name: dashboard-ingress namespace: kubernetes-dashboard resourceVersion: &quot;22174124&quot; selfLink: /apis/extensions/v1beta1/namespaces/kubernetes-dashboard/ingresses/dashboard-ingress uid: 34263f6b-6818-403f-####-4c6acb196c49 spec: rules: - host: dashboard.###.net http: paths: - backend: serviceName: kdashboard-kubernetes-dashboard servicePort: 8080 path: / tls: - hosts: - dashboard.###.net secretName: reflect-certificate-secret-internal status: loadBalancer: ingress: - ip: ##.##.##.## </code></pre> <p>When running the service i never get a 403/401:</p> <pre><code>curl -I 'https://dashboard.###.net/' HTTP/1.1 200 OK Server: nginx/1.19.2 Date: Mon, 14 Dec 2020 19:50:05 GMT Content-Type: text/html; charset=utf-8 Content-Length: 1272 Connection: keep-alive Accept-Ranges: bytes Cache-Control: no-store Last-Modified: Mon, 22 Jun 2020 14:25:00 GMT </code></pre>
<p><strong>EDIT:</strong></p> <p>Based on the info you provided it looks like that you are using the <a href="https://github.com/nginxinc/kubernetes-ingress#nginx-ingress-controller" rel="nofollow noreferrer">Nginxinc Ingress Controller</a> and not the <a href="https://github.com/kubernetes/ingress-nginx#nginx-ingress-controller" rel="nofollow noreferrer">NGINX Ingress Controller</a> which are not the same. Nginxinc Ingress Controller is different from the NGINX Ingress controller in <a href="https://github.com/kubernetes/ingress-nginx#nginx-ingress-controller" rel="nofollow noreferrer">kubernetes/ingress-nginx repo</a> and also different from the default <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE Ingress Controller</a>. The main difference that would affect your use case is that they all use different annotations and those annotations can only be satisfied by a proper Controller. You can find the key differences between the mentioned above <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">here</a> and <a href="https://medium.com/omnius/kubernetes-ingress-gce-vs-nginx-controllers-1-3-d89d6dd3da73" rel="nofollow noreferrer">here</a>.</p> <p>Below are some useful docs/guides:</p> <ul> <li><p><a href="https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/#external-oauth-authentication" rel="nofollow noreferrer">External OAUTH Authentication</a>: The <code>auth-url</code> and <code>auth-signin</code> annotations allow you to use an external authentication provider to protect your Ingress resources.</p> </li> <li><p><a href="https://toolbox.kurio.co.id/securing-your-website-with-oauth2-in-nginx-ingress-controller-c84984eae7fa" rel="nofollow noreferrer">Securing your website with OAuth2 using NGINX Ingress Controller</a>: NGINX Ingress Controller can be combined with <code>oauth2_proxy</code> to enable many OAuth providers like Google, GitHub and others.</p> </li> <li><p><a href="https://github.com/nginxinc/kubernetes-ingress/issues/982" rel="nofollow noreferrer">How to configure external OAuth authentication?</a></p> </li> </ul> <p>To sum up:</p> <ul> <li><p>Choose the proper controller that would satisfy the annotations that you want to use.</p> </li> <li><p>Keep in mind that different Controllers might use different annotations (<a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">nginxinc</a> vs <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">nginx</a>).</p> </li> <li><p>Use <code>kubernetes.io/ingress.class:</code> annotation to choose the controller installed on your GKE Cluster.</p> </li> </ul>
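<p>For reference, if you switch to the kubernetes/ingress-nginx controller, the equivalent annotations on the Ingress from the question would use the <code>nginx.ingress.kubernetes.io/</code> prefix (a sketch only; keep your own hostnames):</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: https://oauth2.####.net/oauth2/auth
    nginx.ingress.kubernetes.io/auth-signin: https://oauth2.####.net/oauth2/start?rd=$escaped_request_uri
</code></pre>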
<p>I'm using the rabbitmq helm chart from here: <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq-ha" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/rabbitmq-ha</a></p> <p>and I want to store the messages and queues outside minikube so I can continue from there everytime I run minikube. However, I cannot see in the documentation how to add a volume or persistent volume to point to my host machine.</p>
<p>I found the answer thanks to Grigoriy Mikhalkin. My problem was that I was using <strong>hyperkit</strong>, which deleted the data when I ran <code>minikube stop</code>. If we use <strong>virtualbox</strong> as the driver instead, the data is still there the next time we run <code>minikube start</code>:</p> <pre><code>minikube start --driver=virtualbox </code></pre>
<p>I got a cluster running on a Ubuntu server. I provide the web content on the server running in the cluster via port 80/443. The server itself I am accessing via <code>ssh</code> only, so no graphical interface at all.</p> <p>Now I want to access the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-the-dashboard-ui" rel="nofollow noreferrer">kubernetes web ui</a> for that cluster. During research I found sources who say that accessing the <code>web ui</code> per remote access is not recommended for prod environments. The guides are only about using <code>kubectl proxy</code> to expose the dashboard to localhost.</p> <p>Is there a solution or a more or less common way to access the dashboard of a cluster running on a server?</p>
<pre><code>... spec: clusterIP: 10.104.126.244 externalIPs: - 192.168.64.1 externalTrafficPolicy: Cluster ports: - nodePort: 31180 port: 443 protocol: TCP targetPort: 8443 selector: k8s-app: kubernetes-dashboard sessionAffinity: None type: LoadBalancer status: </code></pre> <p>The above kubernetes-dashboard-service will work, by going to <a href="https://192.168.64.1:31180" rel="nofollow noreferrer">https://192.168.64.1:31180</a> , where 192.168.64.1 is the IP address of your Kubernetes Controller, however there are caveats. </p> <p>You'll need to use an old browser to access it and accept the security exception. </p> <p>then run </p> <p><code>kubectl -n kube-system get secret</code></p> <p>And look for your <code>replicaset-controller-token-kzpmc</code></p> <p>Then run </p> <p><code>$ kubectl -n kube-system describe secrets replicaset-controller-token-kzpmc</code></p> <p>And copy the long token at the bottom.</p> <pre><code>Name: replicaset-controller-token-kzpmc Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=replicaset-controller kubernetes.io/service-account.uid=d0d93741-96c5-11e7-8245-901b0e532516 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3 </code></pre>
<p>I cannot for the life of me find a detailed table of what all the Kubernetes RBAC verbs do. The only resource I see people recommending is <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb" rel="nofollow noreferrer">this one</a>, which is <em>woefully</em> inadequate.</p> <p>So I've been working it out by experimentation.</p> <p>Most are fairly straightforward so far, except for <code>UPDATE</code>. This does not seem to be able to do <em>anything</em> I would expect it to.</p> <p><strong>Permissions I gave my alias:</strong> [<code>GET</code>, <code>UPDATE</code>] on [<code>deployments</code>] in <code>default</code> namespace.</p> <p><strong>Things I've tried:</strong></p> <ul> <li><code>kubectl set image deployment/hello-node echoserver=digitalocean/flask-helloworld --as user</code></li> <li><code>kubectl edit deploy hello-node --as user</code></li> <li><code>kubectl apply -f hello-node.yaml --as eks-user</code></li> </ul> <p>These all failed with error: <code>deployments.apps &quot;hello-node&quot; is forbidden: User &quot;user&quot; cannot patch resource &quot;deployments&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;</code></p> <p>I then tried some rollout commands like:</p> <ul> <li><code>k rollout undo deploy hello-node --as user</code></li> </ul> <p>But they failed because I didn't have replica-set access.</p> <hr /> <p><strong>TLDR:</strong> What is the point of the Kubernetes RBAC <code>update</code> verb?</p> <p>For that matter, does anyone have a more detailed list of all RBAC verbs?</p>
<p>Following up this, I went to the Kubernetes <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/" rel="noreferrer">REST API documentation</a>, which has a long list of all the HTTP API calls you can make to the REST server.</p> <p>I thought this would help because the one (1) table available describing what the different verbs can do did so by comparing them to HTTP verbs. So the plan was:</p> <ol> <li>See what HTTP verb the <code>update</code> permission is equated to.</li> <li>Go to the reference and find an example of using that HTTP verb on a deployment.</li> <li>Test the <code>kubectl</code> equivalent.</li> </ol> <p>So.</p> <p><strong>What HTTP verb equals the <code>update</code> permission?</strong></p> <p><code>PUT</code>.</p> <p><strong>Example of using <code>PUT</code> for deployments?</strong></p> <blockquote> <p>Replace Scale: replace scale of the specified Deployment</p> </blockquote> <blockquote> <p>HTTP Request <code>PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale</code></p> </blockquote> <p><strong>What's the equivalent <code>kubectl</code> command?</strong></p> <p>Well we're scaling a deployment, so I'm going to say:</p> <pre><code>kubectl scale deployment hello-node --replicas=2 </code></pre> <p><strong>Can I run this command?</strong></p> <p>I extended my permissions to <code>deployment/scale</code> first, and then ran it.</p> <pre><code>Error from server (Forbidden): deployments.apps &quot;hello-node&quot; is forbidden: User &quot;user&quot; cannot patch resource &quot;deployments/scale&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot; </code></pre> <p>Well. That also needs <code>patch</code> permissions, it would appear.</p> <p>Despite the fact that the HTTP verb used is <strong><code>PUT</code></strong> according to the API docs, and <code>PUT</code> is equivalent to <code>update</code> according to the one (1) source of any information on these RBAC verbs.</p> <p>Anyway.</p> <p><strong>My Conclusion:</strong> It appears that <code>update</code> is indeed pretty useless, at least for Deployments.</p> <p>The RBAC setup seemed promising at first, but honestly it's starting to lose its lustre as I discover more and more edge cases and undocumented mysteries. Access permissions seem like the absolute worst thing to be vague about, or your security ends up being more through obscurity than certainty.</p>
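<p>To make this concrete, a Role that would have let the commands above succeed needs <code>patch</code> alongside <code>update</code>. A sketch (the Role name is made up):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-editor          # hypothetical name
rules:
- apiGroups: [apps]
  resources: [deployments, deployments/scale]
  verbs: [get, list, watch, update, patch]   # kubectl edit/apply/scale/set image all send PATCH requests
</code></pre> <p>In practice, granting <code>update</code> and <code>patch</code> together avoids these edge cases, since most kubectl write commands are implemented as PATCH rather than PUT.</p>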
<p>I am trying to use the K8S through Azure AKS. </p> <p>But when doing a simple command like: kubectl create namespace airflow </p> <p>I get the following error message: </p> <blockquote> <p>Error from server (Forbidden): namespaces is forbidden: User "xxx" cannot create resource "namespaces" in API group "" at the cluster scope </p> </blockquote> <p>I have already commanded az aks get-credentials to connect to the cluster and then I try to create the namespace but without success.</p>
<p>In my case, this works when I use this command:</p> <pre><code>az aks get-credentials --resource-group &lt;RESOURCE GROUP NAME&gt; --name &lt;AKS Cluster Name&gt; --admin </code></pre>
<p>I have a k8s cluster with an <code>ipvs</code> kube-proxy mode and a database cluster outside of k8s.</p> <p>In order to get access to the DB cluster I created service and endpoints resources:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: database spec: type: ClusterIP ports: - protocol: TCP port: 3306 targetPort: 3306 --- apiVersion: v1 kind: Endpoints metadata: name: database subsets: - addresses: - ip: 192.168.255.9 - ip: 192.168.189.76 ports: - port: 3306 protocol: TCP </code></pre> <p>Then I run a pod with MySQL client and try to connect to this service:</p> <pre><code>mysql -u root -p password -h database </code></pre> <p>In the network dump I see a successful TCP handshake and successful MySQL connection. On the node where the pod is running (hereinafter the worker node) I see the next established connection:</p> <pre><code>sudo netstat-nat -n | grep 3306 tcp 10.0.198.178:52642 192.168.189.76:3306 ESTABLISHED </code></pre> <p>Then I send some test queries from the pod in an opened MySQL session. They all are sent to the same node. It's expected behavior.</p> <p>Then I monitor established connections on the worker node. After about 5 minutes the established connection to the database node is missed.</p> <p>But in the network dump I see that TCP finalization packets are not sent from the worker node to the database node. As a result, I get a leaked connection on the database node.</p> <p>How <code>ipvs</code> decides to drop an established connection? If <code>ipvs</code> drops a connection, why it doesn't finalize TCP connection properly? Is it a bug or do I misunderstand something with an <code>ipvs</code> mode in kube-proxy?</p>
<p>Kube-proxy and Kubernetes don't help to balance persistent connections.</p> <p>The whole concept of the long-lived connections in Kubernetes is well described in <a href="https://learnk8s.io/kubernetes-long-lived-connections" rel="nofollow noreferrer">this article</a>:</p> <blockquote> <p>Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others. If you're using HTTP/2, gRPC, RSockets, AMQP or any other long-lived connection such as a database connection, you might want to consider client-side load balancing.</p> </blockquote> <p>I recommend going through the whole thing but overall it can be summed up with:</p> <blockquote> <ul> <li><p>Kubernetes Services are designed to cover most common uses for web applications.</p> </li> <li><p>However, as soon as you start working with application protocols that use persistent TCP connections, such as databases, gRPC, or WebSockets, they fall apart.</p> </li> <li><p>Kubernetes doesn't offer any built-in mechanism to load balance long-lived TCP connections.</p> </li> <li><p>Instead, you should code your application so that it can retrieve and load balance upstreams client-side.</p> </li> </ul> </blockquote>
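<p>One common building block for that client-side approach, offered here only as a hedged sketch: a headless Service (<code>clusterIP: None</code>) makes cluster DNS return the individual endpoint IPs instead of a single virtual IP, so a client library can open and balance its own long-lived connections and IPVS is no longer in the data path:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: database-headless          # hypothetical name; pairs with an Endpoints object of the same name
spec:
  clusterIP: None                  # headless: DNS returns every endpoint IP, no kube-proxy VIP
  ports:
  - port: 3306
    protocol: TCP
</code></pre>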
<p>when I am list process of host using this command:</p> <pre><code>[root@fat001 ~]# ps -o user,pid,pidns,%cpu,%mem,vsz,rss,tty,stat,start,time,args ax|grep "room" root 3488 4026531836 0.0 0.0 107992 644 pts/11 S+ 20:06:01 00:00:00 tail -n 200 -f /data/logs/soa-room/spring.log root 18114 4026534329 8.5 2.2 5721560 370032 ? Sl 23:17:51 00:01:53 java -jar /root/soa-room-service-1.0.0-SNAPSHOT.jar root 19107 4026531836 0.0 0.0 107992 616 pts/8 S+ 19:14:10 00:00:00 tail -f -n 200 /data/logs/soa-room/spring.log root 23264 4026531836 0.0 0.0 112684 1000 pts/13 S+ 23:39:57 00:00:00 grep --color=auto room root 30416 4026531836 3.4 3.4 4122552 567232 ? Sl 19:52:03 00:07:53 /opt/dabai/tools/jdk1.8.0_211/bin/java -Xmx256M -Xms128M -jar -Xdebug -Xrunjdwp:transport=dt_socket,suspend=n,server=y,address=5011 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/dump /data/jenkins/soa-room-service/soa-room-service-1.0.0-SNAPSHOT.jar </code></pre> <p>I am very sure this process is kubernetes pod's process:</p> <pre><code>root 18114 4026534329 8.5 2.2 5721560 370032 ? Sl 23:17:51 00:01:53 java -jar /root/soa-room-service-1.0.0-SNAPSHOT.jar </code></pre> <p>Why the kubernetes container's process show on host?It should be in the docker's container!!!!!</p>
<p>This is perfectly normal. <strong>Containers are not VMs.</strong></p> <p>Every process run by Docker runs on the host kernel. There is no isolation in terms of the <em>kernel</em>.</p> <p>Of course, there is isolation in terms of processes between containers, as each container's processes run in an isolated PID namespace.</p> <p>In summary: container A can't see container B's processes <em>(well, not by default)</em>; however, since all container processes run on your host, you'll always be able to see them from the host.</p>
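<p>If you want to confirm which container a host PID belongs to, a quick sketch using PID 18114 from the <code>ps</code> output in the question:</p> <pre><code># the cgroup path of the process usually contains the container ID / kubepods slice
cat /proc/18114/cgroup
# or map running containers to their host PIDs with Docker
docker ps -q | xargs docker inspect --format '{{.State.Pid}} {{.Name}}' | grep 18114
</code></pre>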
<p>I am new to containerize environment and not such sequestial guidance.</p> <p>I have installed <strong>mysql</strong> service in <strong>GKE environment</strong>(using <strong>GCP free account</strong>) and created basic spring boot app which will <strong>communicate to mysql database</strong>. In My local environment (without container) It can talk to MySQL DB as we can use <strong>localhost</strong> url. But when I am building the spring-boot application in GKE environment with mysql using "<strong>mysql</strong>" as service name(which is available in env) as connection url then its fails to connect.</p> <pre><code>kubectl get ep NAME ENDPOINTS AGE mysql 10.32.3.24:3306 20h service-one 10.32.3.7:8080 47h </code></pre> <hr> <pre><code>Every 2.0s: kubectl get pods -l app=mysql -o wide cs-6000-devshell-vm-9160cf3e-c260-495d-8f85-75a4a731b464: Mon Sep 30 11:57:30 2019 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-799956477c-gmlvk 1/1 Running 0 20h 10.32.3.24 gke-ms-k8s-cluster-default-pool-90dce83a-c2xf &lt;none&gt; &lt;none&gt; </code></pre> <p><strong>properties used:</strong></p> <pre><code>server.port=8080 spring.application.name=myservice server.servlet.context-path=/ spring.datasource.url= jdbc:mysql://mysql/TestDB #mysql is service name #spring.datasource.url= jdbc:mysql://ClusterIP:3306/TestDB # tried with this #spring.datasource.url= jdbc:mysql://podIP:3306/TestDB # tried with this #spring.datasource.url= jdbc:mysql://mysql:3306/TestDB # service name :port as well #spring.datasource.url= jdbc:mysql://localhost:3306/TestDB spring.datasource.username=root spring.datasource.password=password spring.datasource.testWhileIdle = true spring.datasource.timeBetweenEvictionRunsMillis = 60000 spring.datasource.validationQuery = SELECT 1 #spring.jpa.hibernate.ddl-auto=create-drop spring.jpa.show-sql=true --- </code></pre> <p>And while maven build in GKE the error is showing as :</p> <pre><code>2019-09-30 11:26:54.332 INFO 41020 --- [ restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... 2019-09-30 11:26:58.909 ERROR 41020 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization. com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_191] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_191] at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:341) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.&lt;init&gt;(ConnectionImpl.java:776) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.JDBC4Connection.&lt;init&gt;(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.45.jar:5.1.45] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_191] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_191] at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:117) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:123) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:365) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:460) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:534) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.&lt;init&gt;(HikariPool.java:115) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-2.7.8.jar:na] at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:151) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:115) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:78) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:318) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:355) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup.getDatabase(DatabaseLookup.java:72) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaProperties.determineDatabase(JpaProperties.java:168) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.jpaVendorAdapter(JpaBaseConfiguration.java:111) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.CGLIB$jpaVendorAdapter$4(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9$$FastClassBySpringCGLIB$$4d05eb24.invoke(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) [spring-core-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361) [spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.jpaVendorAdapter(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1085) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:858) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:327) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at com.test.servicethree.ServiceThreeApplication.main(ServiceThreeApplication.java:33) ~[classes/:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.0.0.RELEASE.jar:2.0.0.RELEASE] Caused by: java.net.UnknownHostException: mysql at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_191] at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_191] at 
java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_191] at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:188) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:300) ~[mysql-connector-java-5.1.45.jar:5.1.45] ... 90 common frames omitted 2019-09-30 11:26:58.919 WARN 41020 --- [ restartedMain] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:327) ~[spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:355) ~[spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup.getDatabase(DatabaseLookup.java:72) ~[spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaProperties.determineDatabase(JpaProperties.java:168) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.jpaVendorAdapter(JpaBaseConfiguration.java:111) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.CGLIB$jpaVendorAdapter$4(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9$$FastClassBySpringCGLIB$$4d05eb24.invoke(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) [spring-core-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361) [spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.jpaVendorAdapter(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579) 
[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) 
~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.0.0.RELEASE.jar:2.0.0.RELEASE] Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure </code></pre> <p>UPDATE1:</p> <pre><code>kubectl get -o yaml svc mysql&gt; apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"mysql"},"name":"mysql","namespace":"ms-k8s-ns"},"spec":{"ports":[{"port":3306}],"selector":{"app":"mysql"}}} creationTimestamp: 
"2019-09-29T09:33:31Z" labels: app: mysql name: mysql namespace: ms-k8s-ns resourceVersion: "361958" selfLink: /api/v1/namespaces/ms-k8s-ns/services/mysql uid: 331538cb-e29c-11e9-b837-42010a800039 spec: clusterIP: 10.35.243.227 ports: - port: 3306 protocol: TCP targetPort: 3306 selector: app: mysql sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <hr> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql ClusterIP 10.35.243.227 &lt;none&gt; 3306/TCP 23h service-one LoadBalancer 10.35.244.117 35.225.130.162 80:31736/TCP 2d2h </code></pre>
<p>The error message comes down to DNS: the JDBC URL uses a hostname that kube-dns does not resolve from where the application runs.</p> <p>From the application:</p> <pre><code>spring.datasource.url=jdbc:mysql://mysql:3306/TestDB </code></pre> <p>The URL points to the bare name <code>mysql</code>, which is not going to resolve to the Service.</p> <p>The A record that kube-dns creates for a Service has the form my-svc.my-namespace.svc.cluster.local[1].</p> <p>In your case, it should be mysql.ms-k8s-ns.svc.cluster.local.</p> <p>[1] <a href="https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="nofollow noreferrer">https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records</a></p>
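<p>A minimal sketch of the corrected setting, assuming the standard <code>application.properties</code> key implied by the question (database name, user and password handling stay as they are):</p> <pre><code># application.properties (sketch): only the host part of the URL changes
spring.datasource.url=jdbc:mysql://mysql.ms-k8s-ns.svc.cluster.local:3306/TestDB
# the short name "mysql" would also resolve, but only for pods running in the ms-k8s-ns namespace
</code></pre>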
<p>I am trying to do a install of RabbitMQ in Kubernetes and following the entry on the RabbitMQ site <a href="https://www.rabbitmq.com/blog/2020/08/10/deploying-rabbitmq-to-kubernetes-whats-involved/" rel="nofollow noreferrer">https://www.rabbitmq.com/blog/2020/08/10/deploying-rabbitmq-to-kubernetes-whats-involved/</a>.</p> <p>Please note my CentOS 7 and Kubernetes 1.18. Also, I am not even sure this is the best way to deploy RabbitMQ, its the best documentation I could find though. I did find something that said that volumeClaimTemplates does not support NFS so I am wondering if that is the issue.</p> <p>I have added the my Persistent Volume using NFS:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: rabbitmq-nfs-pv namespace: ninegold-rabbitmq spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce nfs: path: /var/nfsshare server: 192.168.1.241 persistentVolumeReclaimPolicy: Retain </code></pre> <p>It created the PV correctly.</p> <pre><code>[admin@centos-controller ~]$ kubectl get pv -n ninegold-rabbitmq NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE ninegold-platform-custom-config-br 1Gi RWX Retain Bound ninegold-platform/ninegold-db-pgbr-repo 22d ninegold-platform-custom-config-pgadmin 1Gi RWX Retain Bound ninegold-platform/ninegold-db-pgadmin 21d ninegold-platform-custom-config-pgdata 1Gi RWX Retain Bound ninegold-platform/ninegold-db 22d rabbitmq-nfs-pv 5Gi RWO Retain Available 14h </code></pre> <p>I then add my StatefulSet.</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: rabbitmq namespace: ninegold-rabbitmq spec: selector: matchLabels: app: &quot;rabbitmq&quot; # headless service that gives network identity to the RMQ nodes, and enables them to cluster serviceName: rabbitmq-headless # serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where &quot;pod-specific-string&quot; is managed by the StatefulSet controller. volumeClaimTemplates: - metadata: name: rabbitmq-data namespace: ninegold-rabbitmq spec: storageClassName: local-storage accessModes: - ReadWriteOnce resources: requests: storage: &quot;5Gi&quot; template: metadata: name: rabbitmq namespace: ninegold-rabbitmq labels: app: rabbitmq spec: initContainers: # Since k8s 1.9.4, config maps mount read-only volumes. Since the Docker image also writes to the config file, # the file must be mounted as read-write. 
We use init containers to copy from the config map read-only # path, to a read-write path - name: &quot;rabbitmq-config&quot; image: busybox:1.32.0 volumeMounts: - name: rabbitmq-config mountPath: /tmp/rabbitmq - name: rabbitmq-config-rw mountPath: /etc/rabbitmq command: - sh - -c # the newline is needed since the Docker image entrypoint scripts appends to the config file - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf &amp;&amp; echo '' &gt;&gt; /etc/rabbitmq/rabbitmq.conf; cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins volumes: - name: rabbitmq-config configMap: name: rabbitmq-config optional: false items: - key: enabled_plugins path: &quot;enabled_plugins&quot; - key: rabbitmq.conf path: &quot;rabbitmq.conf&quot; # read-write volume into which to copy the rabbitmq.conf and enabled_plugins files # this is needed since the docker image writes to the rabbitmq.conf file # and Kubernetes Config Maps are mounted as read-only since Kubernetes 1.9.4 - name: rabbitmq-config-rw emptyDir: {} - name: rabbitmq-data persistentVolumeClaim: claimName: rabbitmq-data serviceAccount: rabbitmq # The Docker image runs as the `rabbitmq` user with uid 999 # and writes to the `rabbitmq.conf` file # The security context is needed since the image needs # permission to write to this file. Without the security # context, `rabbitmq.conf` is owned by root and inaccessible # by the `rabbitmq` user securityContext: fsGroup: 999 runAsUser: 999 runAsGroup: 999 containers: - name: rabbitmq # Community Docker Image image: rabbitmq:latest volumeMounts: # mounting rabbitmq.conf and enabled_plugins # this should have writeable access, this might be a problem - name: rabbitmq-config-rw mountPath: &quot;/etc/rabbitmq&quot; # mountPath: &quot;/etc/rabbitmq/conf.d/&quot; mountPath: &quot;/var/lib/rabbitmq&quot; # rabbitmq data directory - name: rabbitmq-data mountPath: &quot;/var/lib/rabbitmq/mnesia&quot; env: - name: RABBITMQ_DEFAULT_PASS valueFrom: secretKeyRef: name: rabbitmq-admin key: pass - name: RABBITMQ_DEFAULT_USER valueFrom: secretKeyRef: name: rabbitmq-admin key: user - name: RABBITMQ_ERLANG_COOKIE valueFrom: secretKeyRef: name: erlang-cookie key: cookie ports: - name: amqp containerPort: 5672 protocol: TCP - name: management containerPort: 15672 protocol: TCP - name: prometheus containerPort: 15692 protocol: TCP - name: epmd containerPort: 4369 protocol: TCP livenessProbe: exec: # This is just an example. There is no &quot;one true health check&quot; but rather # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive # and intrusive health checks. # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks. # # Stage 2 check: command: [&quot;rabbitmq-diagnostics&quot;, &quot;status&quot;] initialDelaySeconds: 60 # See https://www.rabbitmq.com/monitoring.html for monitoring frequency recommendations. periodSeconds: 60 timeoutSeconds: 15 readinessProbe: # probe to know when RMQ is ready to accept traffic exec: # This is just an example. There is no &quot;one true health check&quot; but rather # several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive # and intrusive health checks. # Learn more at https://www.rabbitmq.com/monitoring.html#health-checks. 
# # Stage 1 check: command: [&quot;rabbitmq-diagnostics&quot;, &quot;ping&quot;] initialDelaySeconds: 20 periodSeconds: 60 timeoutSeconds: 10 </code></pre> <p>However my stateful set is not binding, I am getting the following error:</p> <pre><code>running &quot;VolumeBinding&quot; filter plugin for pod &quot;rabbitmq-0&quot;: pod has unbound immediate PersistentVolumeClaims </code></pre> <p>The PVC did not correctly bind to the PV but stays in pending state.</p> <pre><code>[admin@centos-controller ~]$ kubectl get pvc -n ninegold-rabbitmq NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rabbitmq-data-rabbitmq-0 Pending local-storage 14h </code></pre> <p>I have double checked the capacity, accessModes, I am not sure why this is not binding. My example came from here <a href="https://github.com/rabbitmq/diy-kubernetes-examples/tree/master/gke" rel="nofollow noreferrer">https://github.com/rabbitmq/diy-kubernetes-examples/tree/master/gke</a>, the only changes I have done is to bind my NFS volume.</p> <p>Any help would be appreciated.</p>
<p>In your YAMLs I found some misconfigurations.</p> <ol> <li><code>local-storage</code> class.</li> </ol> <p>I assume you used the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">documentation example</a> to create <code>local-storage</code>. It states that:</p> <blockquote> <p>Local volumes do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until Pod scheduling.</p> </blockquote> <p>When you use <code>volumeClaimTemplates</code>, you are relying on <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Provisioning</a>. It's well explained in this <a href="https://medium.com/@zhimin.wen/persistent-volume-claim-for-statefulset-8050e396cc51" rel="nofollow noreferrer">Medium article</a>.</p> <blockquote> <p><strong>PV in StatefulSet</strong></p> <p>Specifically to the volume part, <code>StatefulSet</code> provides a key named as <code>volumeClaimTemplates</code>. With that, you can request the <code>PVC</code> from the storage class dynamically. As part of your new <code>statefulset</code> app definition, replace the volumes ... The <code>PVC</code> is named as <code>volumeClaimTemplate name + pod-name + ordinal number</code>.</p> </blockquote> <p>As <code>local-storage</code> does not support <code>dynamic provisioning</code>, it will not work. You would need to use an <code>NFS StorageClass</code> with a proper provisioner, or create the <code>PV</code> manually.</p> <p>Also, when you use <code>volumeClaimTemplates</code>, a <code>PVC</code> (and, with dynamic provisioning, a matching <code>PV</code>) is created for each pod. <code>PVC</code> and <code>PV</code> are bound in a 1:1 relationship. For more details you can check <a href="https://stackoverflow.com/questions/57839938/kubernetes-pvcs-sharing-a-single-pv/57924275">this SO thread</a>.</p> <ol start="2"> <li>Error <code>unbound immediate PersistentVolumeClaims</code></li> </ol> <p>It means that <code>dynamic provisioning</code> didn't work as expected. If you check <code>kubectl get pv,pvc</code> you will not see any new <code>PV</code>/<code>PVC</code> named <code>volumeClaimTemplate name + pod-name + ordinal number</code>.</p> <ol start="3"> <li><code>claimName: rabbitmq-data</code></li> </ol> <p>I assume this claim was meant to reference the storage created by <code>volumeClaimTemplates</code>, but that storage was never provisioned. Also note the <code>PVC</code> created by the template is named <code>rabbitmq-data-rabbitmq-0</code> for the first pod and <code>rabbitmq-data-rabbitmq-1</code> for the second one.</p> <p>Finally, this article - <a href="https://medium.com/@myte/kubernetes-nfs-and-dynamic-nfs-provisioning-97e2afb8b4a9" rel="nofollow noreferrer">Kubernetes : NFS and Dynamic NFS provisioning</a> - might be helpful.</p>
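<p>If you want to keep the NFS PersistentVolume and skip dynamic provisioning altogether, one hedged option is static binding: give the PV and the volumeClaimTemplate the same storageClassName (plus matching size and access mode) so the claim generated for rabbitmq-0 can bind to it. A minimal sketch, reusing the NFS server and path from the question and an illustrative class name nfs-static:</p> <pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-nfs-pv
spec:
  storageClassName: nfs-static        # must match the claim template below
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /var/nfsshare
    server: 192.168.1.241
---
# fragment of the StatefulSet: point the claim template at the same class
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-data
      spec:
        storageClassName: nfs-static  # instead of local-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
</code></pre> <p>With static binding you need one such PV per replica, and the extra <code>rabbitmq-data</code> entry under <code>volumes:</code> (with <code>claimName: rabbitmq-data</code>) can be dropped, since the template already creates and mounts a claim for each pod.</p>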
<p>We have 2 nodes, each with 96 GB RAM. The plan was that our pods will take 90.5 GB RAM from one of the nodes and 91 GB from the other. What actually happened was the pods took 93.5 GB from one of the nodes and 88 GB from the other. This caused the pods to just restart forever and the application never reached running state.</p> <p>background: We are new to kubernetes and using version 1.14 on an eks cluster on AWS (v1.14.9-eks-658790). Currently we have pods of different sizes that together make 1 unit of our product. On the testing setup we want to work with 1 unit, and on production with many. It is a problem for us to pay more money for nodes, reduce the pod limits or the number of copies.</p> <p>Details on the pods:</p> <pre><code>+-------------+--------------+-----------+-------------+ | Pod name | Mem requests | pod limit | # of copies | +-------------+--------------+-----------+-------------+ | BIG-OK-POD | 35 | 46 | 2 | | OK-POD | 7.5 | 7.5 | 4 | | A-OK-POD | 6 | 6 | 8 | | WOLF-POD | 5 | 5 | 1 | | WOLF-B-POD | 1 | 1 | 1 | | SHEEP-POD | 2 | 2 | 1 | | SHEEP-B-POD | 2 | 2 | 1 | | SHEEP-C-POD | 1.5 | 1.5 | 1 | +-------------+--------------+-----------+-------------+ </code></pre> <p>We don't care where the pods run, we just want the node to be able to handle the memory requirements without failing.</p> <p>I renamed the pods to make it easier to follow what we expected.</p> <p><strong>Expected placement:</strong></p> <p>We expected the the wolf pods will be on one node, and the sheep pods on the other, while the OK pods will be splitted up between the nodes.</p> <pre><code>Node 1: +-------------+-----------+-------------+----------------+ | Pod name | pod limit | # of copies | combined limit | +-------------+-----------+-------------+----------------+ | BIG-OK-POD | 46 | 1 | 46 | | OK-POD | 7.5 | 2 | 15 | | A-OK-POD | 6 | 4 | 24 | | WOLF-POD | 5 | 1 | 5 | | WOLF-B-POD | 1 | 1 | 1 | +-------------+-----------+-------------+----------------+ | | TOTAL: 91 | +-------------+-----------+-------------+----------------+ Node 2: +-------------+-----------+-------------+----------------+ | Pod name | pod limit | # of copies | combined limit | +-------------+-----------+-------------+----------------+ | BIG-OK-POD | 46 | 1 | 46 | | OK-POD | 7.5 | 2 | 15 | | A-OK-POD | 6 | 4 | 24 | | SHEEP-POD | 2 | 1 | 2 | | SHEEP-B-POD | 2 | 1 | 2 | | SHEEP-C-POD | 1.5 | 1 | 1.5 | +-------------+-----------+-------------+----------------+ | | TOTAL: 90.5 | +-------------+-----------+-------------+----------------+ </code></pre> <p><strong>Actual placement:</strong></p> <pre><code>Node 1: +-------------+-----------+-------------+----------------+ | Pod name | pod limit | # of copies | combined limit | +-------------+-----------+-------------+----------------+ | BIG-OK-POD | 46 | 1 | 46 | | OK-POD | 7.5 | 2 | 15 | | A-OK-POD | 6 | 4 | 24 | | WOLF-POD | 5 | 1 | 5 | | SHEEP-B-POD | 2 | 1 | 2 | | SHEEP-C-POD | 1.5 | 1 | 1.5 | +-------------+-----------+-------------+----------------+ | | TOTAL: 93.5 | +-------------+-----------+-------------+----------------+ Node 2: +-------------+-----------+-------------+----------------+ | Pod name | pod limit | # of copies | combined limit | +-------------+-----------+-------------+----------------+ | BIG-OK-POD | 46 | 1 | 46 | | OK-POD | 7.5 | 2 | 15 | | A-OK-POD | 6 | 4 | 24 | | WOLF-B-POD | 1 | 1 | 1 | | SHEEP-POD | 2 | 1 | 2 | +-------------+-----------+-------------+----------------+ | | TOTAL: 88 | +-------------+-----------+-------------+----------------+ </code></pre> 
<p>Is there a way to tell Kubernetes that each node should keep 4 GB of memory for the node itself?</p> <p>After reading Marc ABOUCHACRA's answer, we tried changing the system-reserved=memory (which was set to 0.2Gi), but for any value higher than 0.3Gi (0.5Gi, 1Gi, 2Gi, 3Gi and 4Gi), pods were stuck in Pending state forever.</p> <p>Update: We found a way to reduce the limit on a few of the pods and now the system is up and stable (even though 1 of the nodes is at 99%). We couldn't get K8s to start with the previous config and we still don't know why.</p>
<p>Kubernetes lets you reserve node resources for system daemons.</p> <p>To do that, you need to configure the <strong>kubelet</strong> agent. This is a per-node configuration.<br /> So if you want to reserve 4 GiB of memory on one node, you need to configure the kubelet agent on this node with the following option:</p> <pre><code>--system-reserved=memory=4Gi </code></pre> <p>You can find out more about that in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved" rel="nofollow noreferrer">official documentation</a>.</p>
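<p>The same reservation can also be expressed in the kubelet's config file rather than as a flag. A hedged sketch (the field names come from the KubeletConfiguration API; how the file reaches an EKS worker node, e.g. via the bootstrap script's kubelet arguments or a launch template, depends on your setup and is an assumption here):</p> <pre><code># kubelet config file, passed to the kubelet with --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  memory: 4Gi        # kept back for OS/system daemons
kubeReserved:
  memory: 1Gi        # optionally also reserve memory for kubelet/container runtime
</code></pre>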
<p>I'm looking to create yaml files for specific resources (pvs, pvcs, policies and so on) via the command line with <code>kubectl</code>. </p> <p><code>kubectl create</code> only supports the creation of certain resource types (namely clusterroles, clusterrolebindings, configmaps, cronjobs, deployments, jobs, namespaces, pod disruption budgets, priorityclasses, quotas, roles, rolebindings, secrets, services and serviceaccounts). </p> <p>Why doesn't it support pods, pvs, pvcs, etc?</p> <p>I know of <code>kubectl run --generator=run=pod/v1</code> for pods, but is there a specific reason it hasn't been added to <code>kubectl create</code>?</p> <p>I searched the docs and github issues, but couldn't find an explanation.</p> <p>I know of tools like ksonnet, but I was wondering if there is a native way (or a reason why there isn't).</p>
<p>You can create <strong>any type</strong> of object with <code>kubectl create</code>. To do that you have two options: </p> <ol> <li>Using a manifest file: <code>kubectl create -f my-pod-descriptor.yml</code></li> <li>Using stdin (piping the manifest straight from your console): </li> </ol> <pre class="lang-sh prettyprint-override"><code>cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
EOF
</code></pre> <p>Now back to the question of why they didn't add a <code>kubectl create pod</code> command, for example. I don't really have the answer.<br> My guess is that it is not really a good practice to manage <strong>pods</strong> directly. It is recommended to use <strong>deployments</strong> instead. And you have a <code>kubectl create deployment</code> command.</p> <p>What about other objects which are perfectly fine, such as <strong>pv or pvc</strong>? Well, I don't know :)</p> <p>Just keep in mind that it is not really a good practice to create/manage everything from the console, as you won't be able to keep the history of what you are doing. Prefer using files managed in an SCM. </p> <p>Thus, I guess the K8S team is not putting too much effort into a procedure or command that is not recommended. Which is fine to me.</p>
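<p>For the resource types mentioned in the question the same stdin approach works just as well. A small hedged sketch creating a PVC (names and sizes are illustrative):</p> <pre class="lang-sh prettyprint-override"><code>cat &lt;&lt;EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
</code></pre>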
<p>I want to change the IP address of my LoadBalancer ingress-nginx-controller in Google Cloud. I have now assigned the IP address via LoadBalancer. See the screenshot. Unfortunately it is not adopted in GKE. Why? Is that a bug? <a href="https://i.stack.imgur.com/IdOPR.png" rel="nofollow noreferrer">GKE lb IP address change</a></p>
<p>I have verified this on my <code>GKE</code> test cluster.</p> <p>When you <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address" rel="nofollow noreferrer">Reserving a static external IP address</a> it isn't assigned to any of your VMs. Depends on how you <code>created cluster</code>/<code>reserved ip</code> (standard or premium) you can get error like below:</p> <pre><code>Error syncing load balancer: failed to ensure load balancer: failed to create forwarding rule for load balancer (a574130f333b143a2a62281ef47c8dbb(default/nginx-ingress-controller)): googleapi: Error 400: PREMIUM network tier (the project's default network tier) is not supported: The network tier of specified IP address is STANDARD, that of Forwarding Rule must be the same., badRequest </code></pre> <p>In this scenario I've used cluster based in <code>us-central-1c</code> and <code>reserved IP</code> as <code>Network Service Tier: Premium</code>, <code>Type: Regional</code> and used region where my cluster is based - <code>us-central-1.</code> My <code>ExternalIP: 34.66.79.1X8</code></p> <p><strong>NOTE</strong> <code>Reserved IP must be in the same reagion as your cluster</code></p> <p><strong>Option 1:</strong> - Use <a href="https://helm.sh/" rel="nofollow noreferrer">Helm chart</a></p> <p>Deploy Nginx</p> <pre><code>helm install nginx-ingress stable/nginx-ingress --set controller.service.loadBalancerIP=34.66.79.1X8,rbac.create=true </code></pre> <p>Service output:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.8.0.1 &lt;none&gt; 443/TCP 5h49m nginx-ingress-controller LoadBalancer 10.8.5.158 &lt;pending&gt; 80:31898/TCP,443:30554/TCP 27s nginx-ingress-default-backend ClusterIP 10.8.13.209 &lt;none&gt; 80/TCP 27s </code></pre> <p>Service describe output:</p> <pre><code>$ kubectl describe svc nginx-ingress-controller ... 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 32s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 5s service-controller Ensured load balancer </code></pre> <p>Final output:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.8.0.1 &lt;none&gt; 443/TCP 5h49m nginx-ingress-controller LoadBalancer 10.8.5.158 34.66.79.1X8 80:31898/TCP,443:30554/TCP 35s nginx-ingress-default-backend ClusterIP 10.8.13.209 &lt;none&gt; 80/TCP 35s </code></pre> <p><strong>Option 2</strong> - Editing <a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="nofollow noreferrer">Nginx</a> YAMLs before deploying Nginx</p> <p>As per docs: Initialize your user as a cluster-admin with the following command:</p> <pre><code>kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole cluster-admin \ --user $(gcloud config get-value account) </code></pre> <p>Download YAML</p> <pre><code>$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <p>Edit <code>LoadBalancer</code> service and add <code>loadBalancerIP: &lt;your-reserved-ip&gt;</code> like below:</p> <pre><code># Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-2.13.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.35.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: LoadBalancer loadBalancerIP: 34.66.79.1x8 #This line externalTrafficPolicy: Local ports: </code></pre> <p>Deploy it <code>kubectl apply -f deploy.yaml</code>. Service output below:</p> <pre><code>$ kubectl get svc -A NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.8.0.1 &lt;none&gt; 443/TCP 6h6m ingress-nginx ingress-nginx-controller LoadBalancer 10.8.5.165 &lt;pending&gt; 80:31226/TCP,443:31161/TCP 17s ingress-nginx ingress-nginx-controller-admission ClusterIP 10.8.9.216 &lt;none&gt; 443/TCP 18s 6h6m ... </code></pre> <p>Describe output:</p> <pre><code>$ kubectl describe svc ingress-nginx-controller -n ingress-nginx Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 40s service-controller Ensuring load balancer Normal EnsuredLoadBalancer 2s service-controller Ensured load balancer </code></pre> <p>Service with reserved IP:</p> <pre><code>$ kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.8.5.165 34.66.79.1X8 80:31226/TCP,443:31161/TCP 2m22s ingress-nginx-controller-admission ClusterIP 10.8.9.216 &lt;none&gt; 443/TCP 2m23s </code></pre> <p><strong>In Addition</strong></p> <p>Also please keep in mind that you should add <code>annotations: kubernetes.io/ingress.class: nginx</code> in your <code>ingress</code> resource when you want force <code>GKE</code> to use <code>Nginx Ingress</code> features, like <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite</a>.</p>
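<p>For completeness, a minimal sketch of an Ingress carrying that annotation (host, service name and port are placeholders, and the apiVersion may differ on newer clusters):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx            # force the Nginx ingress controller on GKE
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com                           # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service        # placeholder backend Service
              servicePort: 80
</code></pre>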
<p>I have a use case where I need to append metrics label value in Prometheus. for eg, if my metrics has a label {pod=pod1}, I need to change it to {pod=pod2} before or after scraping. Is this supported?</p>
<p>You need to have a look at these two Prometheus configuration sections:</p> <ol> <li><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config" rel="nofollow noreferrer">relabel_config</a>: this lets you rewrite label names and values <strong>before</strong> the scrape</li> <li><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs" rel="nofollow noreferrer">metric_relabel_configs</a>: this lets you rewrite your labels <strong>after</strong> the scrape <em>(but before ingestion)</em></li> </ol> <p>You can check out this <a href="https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs" rel="nofollow noreferrer">article</a>, which explains the difference between the two in more detail.</p>
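<p>Since the example in the question rewrites the value of a scraped label, metric_relabel_configs is the relevant one. A hedged sketch of a scrape job that turns pod="pod1" into pod="pod2" (job name and target are placeholders):</p> <pre><code>scrape_configs:
  - job_name: example            # placeholder job
    static_configs:
      - targets: ['app:9090']    # placeholder target
    metric_relabel_configs:
      - source_labels: [pod]     # read the current value of the "pod" label
        regex: pod1              # only rewrite series where pod="pod1"
        target_label: pod
        replacement: pod2        # new value written back into "pod"
        action: replace
</code></pre>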
<p>I have a kubernetes cluster on the eks of aws. I use aws-ebs as the Provisioner of StorageClass, and the ReclaimPolicy is set to Retain. I install the application with helm. When I delete the application, the pvc is deleted, but the pv still exists. The status is Released. I can see that the aws-ebs volume is still in the aws console. Now I want to create a new application and still use the original data. I think there are two ways to do it</p> <ol> <li>Manually create a pvc to bind this pv to make pv work again, but the StorageClass cannot be used</li> <li>Re-create a new PV to bind aws-ebs and then create a new PVC through this PV, it seems that StorageClass cannot be used</li> </ol> <p>But I don’t know how to change it, can I ask for help? thanks in advance</p>
<p>The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">ReclaimPolicy: Retain</a> means that:</p> <blockquote> <p>The <code>Retain</code> reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered &quot;released&quot;. But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.</p> <ol> <li><p>Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.</p> </li> <li><p>Manually clean up the data on the associated storage asset accordingly.</p> </li> <li><p>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.</p> </li> </ol> </blockquote> <p><a href="https://rtfm.co.ua/en/kubernetes-persistentvolume-and-persistentvolumeclaim-an-overview-with-examples/#Deleting_PV_and_PVC_%E2%80%93_an_example" rel="nofollow noreferrer">Here</a> you can find an example showing step by step how to manually reuse a PV after PVC was deleted the way that the data will not be lost.</p>
<p>Deploying ArangoDB to a MicroK8s cluster results in:</p> <pre><code>$ kubectl logs -f dbgraph-64c6fd9b84-chqkm automatically choosing storage engine Initializing database...Hang on... ArangoDB didn't start correctly during init cat: can't open '/tmp/init-log': No such file or directory </code></pre> <p>where the deployment declaration is:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null name: dbgraph spec: replicas: 1 selector: matchLabels: name: dbgraph strategy: type: Recreate template: metadata: creationTimestamp: null name: dbgraph labels: name: dbgraph spec: containers: - env: - name: ARANGO_NO_AUTH value: "1" image: arangodb:3.5 name: dbgraph ports: - containerPort: 8529 resources: limits: memory: "2Gi" cpu: 0.5 volumeMounts: - mountPath: /var/lib/arangodb3 name: dbdata-arangodb restartPolicy: Always volumes: - name: dbdata-arangodb persistentVolumeClaim: claimName: dbdata-arangodb-pvc status: {} </code></pre> <p>the PersistentVolumeClaim is:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: creationTimestamp: null name: dbdata-arangodb-pvc spec: storageClassName: "" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi status: {} </code></pre> <p>and the PersistentVolume declaration is:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: dbdata-arangodb-pv labels: type: local spec: storageClassName: "" capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/disk5/k8s-peristent-volumes/test/arangodb" </code></pre> <p>Having a similar <em>Deployment-with-volume-declaration -> PVC -> PV</em> relationship works fine with other deployments, such as for Minio. I've also had luck with a similar setup for ArangoDB on GKE.</p> <p>Could this be an issue ArangoDB is having with the Kubernetes version?</p> <pre><code>$ microk8s.kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I did try the <a href="https://github.com/arangodb/kube-arangodb" rel="nofollow noreferrer">ArangoDB Kubernetes Operator</a> with no luck (but fine on GKE) - in that project's <a href="https://github.com/arangodb/kube-arangodb#production-readiness-state" rel="nofollow noreferrer">readiness state table</a> it can be seen that at most Kubernetes version 1.14 is supported - so that is probably as expected.</p> <p>How can I have ArangoDB running on a MicroK8s cluster?</p>
<ol> <li><p>The ArangoDB binary requires a CPU that supports SSE4.2, so check that prerequisite first.</p></li> <li><p>You can install ArangoDB on a MicroK8s cluster with <a href="https://github.com/arangodb/kube-arangodb#installation-of-latest-release-using-helm" rel="nofollow noreferrer">Helm</a>. </p></li> </ol> <p><code>microk8s.enable helm</code> - Using Helm within MicroK8s allows you to manage, update, share and roll back Kubernetes applications.</p> <p><a href="https://www.arangodb.com/docs/stable/deployment-kubernetes-helm.html" rel="nofollow noreferrer">Here</a> you can find a manual showing how to use the ArangoDB Kubernetes Operator with Helm.</p> <p>Also, as a general guide I recommend <a href="https://github.com/arangodb/kube-arangodb/blob/master/docs/Manual/Tutorials/Kubernetes/README.md" rel="nofollow noreferrer">this tutorial</a>.</p>
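<p>A hedged sketch of what the install could look like once Helm is enabled in MicroK8s; the release URL, version and chart file names are illustrative, so take the current ones from the linked manual:</p> <pre><code># enable Helm inside MicroK8s
microk8s.enable helm

# install the operator charts (illustrative version; with Helm 3 add a release name)
export URLPREFIX=https://github.com/arangodb/kube-arangodb/releases/download/1.0.3
helm install $URLPREFIX/kube-arangodb-crd.tgz
helm install $URLPREFIX/kube-arangodb.tgz

# afterwards, create an ArangoDeployment resource as shown in the operator tutorial
</code></pre>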
<p>is it possible to setup a prometheus/grafana running on centos to monitor several K8S clusters in the lab? the architecture can be similar to the bellow one, although not strictly required. Right now the kubernetes clusters we have, do not have prometheus and grafana installed. The documentation is not very much clear if an additional component/agent remote-push is required or not and how the central prometheus and the K8S need to be configured to achieve the results? Thanks.</p> <p><img src="https://i.stack.imgur.com/bCWTX.png" alt="Required architecture" /></p>
<p>You have several options to implement your use case:</p> <ol> <li>You can use <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">prometheus federation</a>. This will allow you to have a central prometheus server that scrapes samples from the other prometheus servers.</li> <li>You can use the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write" rel="nofollow noreferrer">remote_write configuration</a>. This will allow you to send your samples to a remote endpoint (and then eventually scrape that central endpoint). You'll also be able to apply relabeling rules with this configuration.</li> <li>As @JulioHM said in the comment, you can use another tool like <a href="https://thanos.io/" rel="nofollow noreferrer">thanos</a> or <a href="https://github.com/cortexproject/cortex" rel="nofollow noreferrer">Cortex</a>. Those tools are great and allow you to do more than just write to a remote endpoint. You'll be able to implement horizontal scaling of your prometheus servers, long-term storage, etc.</li> </ol> <p>Note that all of these options assume some Prometheus instance running inside each cluster to collect the samples and expose or forward them; the central server then aggregates them.</p>
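<p>For option 1, a hedged sketch of the central server's scrape configuration; the match[] selector and the per-cluster Prometheus addresses are placeholders:</p> <pre><code>scrape_configs:
  - job_name: 'federate'
    honor_labels: true                 # keep the original labels from the source servers
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'                # placeholder selector: pulls everything, narrow it in practice
    static_configs:
      - targets:
          - 'prometheus.cluster-a.example.com:9090'   # placeholder endpoints
          - 'prometheus.cluster-b.example.com:9090'
</code></pre>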
<p>Here my configmap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: chart-1591249502-zeppelin namespace: ra-iot-dev labels: helm.sh/chart: zeppelin-0.1.0 app.kubernetes.io/name: zeppelin app.kubernetes.io/instance: chart-1591249502 app.kubernetes.io/version: "0.9.0" app.kubernetes.io/managed-by: Helm data: log4j.properties: |- log4j.rootLogger = INFO, dailyfile log4j.appender.stdout = org.apache.log4j.ConsoleAppender log4j.appender.stdout.layout = org.apache.log4j.PatternLayout log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd log4j.appender.dailyfile.DEBUG = INFO log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender log4j.appender.dailyfile.File = ${zeppelin.log.file} log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n log4j.logger.org.apache.zeppelin.python=DEBUG log4j.logger.org.apache.zeppelin.spark=DEBUG </code></pre> <p>I'm trying to mount this file into <code>/zeppelin/conf/log4j.properties</code> pod directory file.</p> <p>Here my deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: chart-1591249502-zeppelin labels: helm.sh/chart: zeppelin-0.1.0 app.kubernetes.io/name: zeppelin app.kubernetes.io/instance: chart-1591249502 app.kubernetes.io/version: "0.9.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: zeppelin app.kubernetes.io/instance: chart-1591249502 template: metadata: labels: app.kubernetes.io/name: zeppelin app.kubernetes.io/instance: chart-1591249502 spec: serviceAccountName: chart-1591249502-zeppelin securityContext: {} containers: - name: zeppelin securityContext: {} image: "apache/zeppelin:0.9.0" imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8080 protocol: TCP livenessProbe: httpGet: path: / port: http readinessProbe: httpGet: path: / port: http resources: {} env: - name: ZEPPELIN_PORT value: "8080" - name: ZEPPELIN_K8S_CONTAINER_IMAGE value: apache/zeppelin:0.9.0 - name: ZEPPELIN_RUN_MODE value: local volumeMounts: - name: log4j-properties-volume mountPath: /zeppelin/conf/log4j.properties volumes: - name: log4j-properties-volume configMap: name: chart-1591249502-zeppelin items: - key: log4j.properties path: keys </code></pre> <p>I'm getting this error event in kubernetes:</p> <blockquote> <p>Error: failed to start container "zeppelin": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\"/var/lib/origin/openshift.local.volumes/pods/63ac209e-a626-11ea-9e39-0050569f5f65/volumes/kubernetes.io~configmap/log4j-properties-volume\\" to rootfs \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged\\" at \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged/zeppelin/conf/log4j.properties\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type</p> </blockquote> <p>Take in mind, that I only want to replace an existing file. I mean, into <code>/zeppelin/conf/</code> directory there are several files. I only want to replace <code>/zeppelin/conf/log4j.properties</code>.</p> <p>Any ideas?</p>
<p>From logs I saw that you are working on <code>OpenShift</code>, however I was able to do it on <code>GKE</code>.</p> <p>I've deployed pure zeppelin deployment form your example.</p> <pre><code>zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ cat log4j.properties # # Licensed to the Apache Software Foundation (ASF) under one or more ... # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software ... # limitations under the License. # log4j.rootLogger = INFO, stdout log4j.appender.stdout = org.apache.log4j.ConsoleAppender log4j.appender.stdout.layout = org.apache.log4j.PatternLayout log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ </code></pre> <p>If you want to repleace one specific file, you need to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a>. There is also article with another example which can be found <a href="https://carlos.mendible.com/2019/02/10/kubernetes-mount-file-pod-with-configmap/" rel="nofollow noreferrer">here</a>.</p> <p><strong>Issue 1. ConfigMap belongs to <code>namespace</code></strong></p> <p>Your deployment did not contains any namespace so it was deployed in <code>default</code> namespace. <code>ConfigMap</code> included <code>namespace: ra-iot-dev</code>.</p> <pre><code>$ kubectl api-resources NAME SHORTNAMES APIGROUP NAMESPACED KIND ... configmaps cm true ConfigMap ... </code></pre> <p>If you will keep this namespace, you will probably get error like:</p> <pre><code>MountVolume.SetUp failed for volume "log4j-properties-volume" : configmap "chart-1591249502-zeppelin" not found </code></pre> <p><strong>Issue 2. subPath to replace file</strong></p> <p>Ive changed one part in <code>deployment</code> (added <code>subPath</code>)</p> <pre><code> volumeMounts: - name: log4j-properties-volume mountPath: /zeppelin/conf/log4j.properties subPath: log4j.properties volumes: - name: log4j-properties-volume configMap: name: chart-1591249502-zeppelin </code></pre> <p>and another in <code>ConfigMap</code> (removed namespace and set proper names)</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: chart-1591249502-zeppelin labels: helm.sh/chart: zeppelin-0.1.0 app.kubernetes.io/name: zeppelin app.kubernetes.io/instance: chart-1591249502 app.kubernetes.io/version: "0.9.0" app.kubernetes.io/managed-by: Helm data: log4j.properties: |- ... 
</code></pre> <p>After that output of the file looks like:</p> <pre><code>$ kubectl exec -ti chart-1591249502-zeppelin-64495dcfc8-ccddr -- /bin/bash zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~$ cd conf zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ ls configuration.xsl log4j.properties log4j_yarn_cluster.properties zeppelin-env.cmd.template zeppelin-site.xml.template interpreter-list log4j.properties2 shiro.ini.template zeppelin-env.sh.template zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ cat log4j.properties log4j.rootLogger = INFO, dailyfile log4j.appender.stdout = org.apache.log4j.ConsoleAppender log4j.appender.stdout.layout = org.apache.log4j.PatternLayout log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd log4j.appender.dailyfile.DEBUG = INFO log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender log4j.appender.dailyfile.File = ${zeppelin.log.file} log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n log4j.logger.org.apache.zeppelin.python=DEBUG log4j.logger.org.apache.zeppelin.spark=DEBUGzeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ </code></pre>
<p>I am trying to run multiple kubectl commands using Kubernetes@1 task in Azure Devops Pipeline, however I am not sure how to do this. </p> <p><code>kubectl exec $(kubectl get pods -l app=deployment_label -o custom-columns=:metadata.name --namespace=some_name_space) --namespace=some_namespace -- some command</code></p>
<p>If what you want is to input this combined command into the <code>Command</code> parameter of the task:</p> <p><a href="https://i.stack.imgur.com/MhNND.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MhNND.png" alt="enter image description here" /></a></p> <p>Unfortunately, no: the task does not currently support this kind of composed command.</p> <p>As the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops#commands" rel="nofollow noreferrer">doc</a> describes:</p> <p><a href="https://i.stack.imgur.com/WTSBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WTSBu.png" alt="enter image description here" /></a></p> <p>The command input accepts <strong>only</strong> <strong>one</strong> of these commands, which means you can only run one command in each <code>Kubernetes@1</code> task.</p> <p>Also, if you type a command instead of selecting one from the list, it still has to be one of the commands allowed by the task, and it must follow a strict format, like this:</p> <p><a href="https://i.stack.imgur.com/tzcMI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzcMI.png" alt="enter image description here" /></a></p> <p>For the command you provided, if you want to continue using the <code>Kubernetes@1</code> task, you should split it into separate commands across multiple tasks. You can check this <a href="https://colinsalmcorner.com/devops-with-kubernetes-and-vsts-part-2/" rel="nofollow noreferrer">blog</a> for detailed usage.</p> <hr /> <p>As a workaround, if you still want to execute these commands in one step, you can use the <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCLIV1/Readme.md" rel="nofollow noreferrer">Azure CLI task</a> (if you are connecting to AKS) or the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&amp;tabs=yaml" rel="nofollow noreferrer">Command line task</a> (if you are connecting to your own Kubernetes server).</p>
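<p>A hedged sketch of the Command line workaround in pipeline YAML; it assumes kubectl on the agent is already authenticated against the cluster (for example via a preceding login step or a kubeconfig secure file), and reuses the names and labels from the question:</p> <pre><code>steps:
  # assumes kubectl is installed and already pointing at the right cluster
  - script: |
      POD=$(kubectl get pods -l app=deployment_label \
            -o custom-columns=:metadata.name --namespace=some_name_space)
      kubectl exec "$POD" --namespace=some_name_space -- some command
    displayName: 'Exec command in pod'
</code></pre>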
<p>Searching through the Internet, I have seen that EKS only enables IAM authentication for IAM users.<br> Is it possible to configure client certificate authentication manually? I mean, create Kubernetes users and roles internally and not use IAM authentication.</p>
<p>Kubernetes supports several authentication modules, for example:</p> <ul> <li><p>X509 client certificates</p></li> <li><p>Service account tokens</p></li> <li><p>OpenID Connect tokens</p></li> <li><p>Webhook token authentication</p></li> <li><p>Authenticating proxy, etc.</p></li> </ul> <p>You can find more details regarding them in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">official documentation</a>.</p> <p>However, Amazon EKS uses only one specific authentication method, an implementation of webhook token authentication, to authenticate Kube API requests. This webhook service is implemented by an open source tool called AWS IAM Authenticator, which has both client and server sides.</p> <p>In short, the client sends a token (which includes the AWS IAM identity, user or role, making the API call) that is verified server-side by the webhook service.</p> <p>So the answer to your question is: if you choose to use EKS, you only have one authentication option, which is IAM.</p>
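<p>Kubernetes-side users, groups and RBAC roles are then layered on top of those IAM identities through the aws-auth ConfigMap. A hedged sketch (the account ID, user and group names are placeholders):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice   # placeholder IAM user
      username: alice                                 # name seen by Kubernetes RBAC
      groups:
        - developers                                  # bind Roles/ClusterRoles to this group
</code></pre> <p>RBAC RoleBindings can then grant that group whatever in-cluster permissions you need, so authorization stays internal even though authentication goes through IAM.</p>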
<p>I have <strong>tried reinstalling</strong> it but nothing seems to work.</p> <p>console output:</p> <pre><code>E1126 15:42:35.408904 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-scheduler:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.436232 19976 cache_images.go:80] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.439164 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.467462 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-proxy:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.483078 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.485031 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-addon-manager:v9.0 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.492838 19976 cache_images.go:80] CacheImage k8s.gcr.io/coredns:1.6.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.514311 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.516262 19976 cache_images.go:80] CacheImage k8s.gcr.io/pause:3.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\pause_3.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.536759 19976 cache_images.go:80] CacheImage k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file 
not found in %PATH% E1126 15:42:35.544566 19976 cache_images.go:80] CacheImage k8s.gcr.io/etcd:3.3.15-0 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.544566 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-apiserver:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.546525 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% * Starting existing virtualbox VM for "minikube" ... * Waiting for the host to be provisioned ... * Found network options: - NO_PROXY=192.168.99.103 - no_proxy=192.168.99.103 ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository * Preparing Kubernetes v1.16.2 on Docker '18.09.9' ... - env NO_PROXY=192.168.99.103 - env NO_PROXY=192.168.99.103 E1126 15:44:39.347174 19976 start.go:799] Error caching images: Caching images for kubeadm: caching images: caching image C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% * Unable to load cached images: loading cached images: loading image C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2: CreateFile C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2: The system cannot find the path specified. </code></pre> <hr>
<p>This problem was raised in <a href="https://stackoverflow.com/questions/55403284/installing-minikube-on-windows">this SO question</a>. I am posting a community wiki answer from it:</p> <hr> <p>You did not provide how you are trying to install minikube and what else is installed on your PC, so it is hard to provide a 100% accurate answer. I will describe the way I use to install minikube on Windows; if that does not help, please provide more information on what steps you took that led to this error. I do not want to guess, but it seems like you did not add the minikube binary to your PATH:</p> <p><code>executable file not found in %PATH% - Preparing Kubernetes environment ...</code></p> <p>First, let's delete all traces of your current installation. Run <code>minikube delete</code>, then go to C:\Users\current-user\ and delete the <code>.kube</code> and <code>.minikube</code> folders.</p> <p>Open Powershell and install chocolatey as explained <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">here</a>:</p> <p><code>Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))</code></p> <p>After installation run <code>choco install minikube kubernetes-cli</code>.</p> <p>Now, depending on what hypervisor you want to use, you can follow the steps from this <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">tutorial</a> (Hyper-V). You can use VirtualBox as well, but then you won't be able to use Docker for Windows (assuming you want to) - you can read more in one of my answers <a href="https://stackoverflow.com/questions/52600524/is-it-possible-to-run-minikube-with-virtualbox-on-windows-10-along-with-docker/52611412#52611412">here</a>. Another possibility is to use Kubernetes in Docker for Windows as explained <a href="https://docs.docker.com/docker-for-windows/install/" rel="nofollow noreferrer">here</a> - but you won't be using minikube in this scenario. </p> <hr> <p>Please let me know if that helped. </p>
<p>I have the following <code>Dockerfile</code> which I need to create an image and run as a <code>kubernetes</code> deployment</p> <pre><code>ARG PYTHON_VERSION=3.7 FROM python:${PYTHON_VERSION} ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 ARG USERID ARG USERNAME WORKDIR /code COPY requirements.txt ./ COPY manage.py ./ RUN pip install -r requirements.txt RUN useradd -u &quot;${USERID:-1001}&quot; &quot;${USERNAME:-jananath}&quot; USER &quot;${USERNAME:-jananath}&quot; EXPOSE 8080 COPY . /code/ RUN pwd RUN ls ENV PATH=&quot;/code/bin:${PATH}&quot; # CMD bash ENTRYPOINT [&quot;/usr/local/bin/python&quot;] # CMD [&quot;manage.py&quot;, &quot;runserver&quot;, &quot;0.0.0.0:8080&quot;] </code></pre> <p>And I create the image, tag it and pushed to my private repository.</p> <p>And I have the <code>kubernetes</code> manifest file as below:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: tier: my-app name: my-app namespace: my-app spec: replicas: 1 selector: matchLabels: tier: my-app template: metadata: labels: tier: my-app spec: containers: - name: my-app image: &quot;&lt;RETRACTED&gt;.dkr.ecr.eu-central-1.amazonaws.com/my-ecr:webv1.11&quot; imagePullPolicy: Always args: - &quot;manage.py&quot; - &quot;runserver&quot; - &quot;0.0.0.0:8080&quot; env: - name: HOST_POSTGRES valueFrom: configMapKeyRef: key: HOST_POSTGRES name: my-app - name: POSTGRES_DB valueFrom: configMapKeyRef: key: POSTGRES_DB name: my-app - name: POSTGRES_USER valueFrom: configMapKeyRef: key: POSTGRES_USER name: my-app - name: USERID valueFrom: configMapKeyRef: key: USERID name: my-app - name: USERNAME valueFrom: configMapKeyRef: key: USERNAME name: my-app - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: key: POSTGRES_PASSWORD name: my-app ports: - containerPort: 8080 resources: limits: cpu: 1000m memory: 1000Mi requests: cpu: 00m memory: 1000Mi </code></pre> <p>When I run the deployment above, the pod kills everytime and when I try to see the logs, this is all I see.</p> <pre><code>exec /usr/local/bin/python: exec format error </code></pre> <p>This is a simple <code>django</code> python application.</p> <p>What is interesting is, this is working fine with <code>docker-compose</code> as below:</p> <pre><code>services: db: image: postgres volumes: - ./data/db:/var/lib/postgresql/data environment: - POSTGRES_DB=${POSTGRES_DB} - POSTGRES_USER=${POSTGRES_USER} - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} web: build: context: . args: USERID: ${USERID} USERNAME: ${USERNAME} command: manage.py runserver 0.0.0.0:8080 volumes: - .:/code ports: - &quot;8080:8080&quot; environment: - POSTGRES_NAME=${POSTGRES_DB} - POSTGRES_USER=${POSTGRES_USER} - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} env_file: - .env </code></pre> <p>Can someone help me with this?</p>
<p>Try to inspect your image architecture using</p> <pre><code>docker image inspect &lt;your image name&gt; </code></pre> <p>If you see something like</p> <pre><code>&quot;Architecture&quot;: &quot;arm64&quot;, &quot;Variant&quot;: &quot;v8&quot;, &quot;Os&quot;: &quot;linux&quot;, </code></pre> <p>and it differs from your cluster's architecture, then the <code>exec format error</code> is expected: the binary in the image was built for a different CPU architecture. You must build your image on a machine with the same architecture as your cluster, or cross-build it for the cluster's platform.</p>
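<p>For example, here is a hedged sketch of cross-building an amd64 image from an arm64 machine with Docker Buildx (assuming Buildx is available; the registry, image name and tag are placeholders):</p> <pre><code># build the image for the cluster's platform and push it straight to the registry
docker buildx build --platform linux/amd64 -t &lt;registry&gt;/&lt;image&gt;:&lt;tag&gt; --push .

# confirm the pushed image's architecture
docker buildx imagetools inspect &lt;registry&gt;/&lt;image&gt;:&lt;tag&gt;
</code></pre>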
<p>I have added a NFS volume mount to my Spring Boot container running on Kubernetes. Below is my deployment file for Kubernetes.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: ldap spec: replicas: 3 spec: serviceAccountName: xxx-staging-take-poc-admin volumes: - name: nfs-volume nfs: server: 10.xxx.xxx.xxx path: /ifs/standard/take1-poc containers: - image: image-id volumeMounts: - name: nfs-volume mountPath: /var/nfs name: ldap </code></pre> <p>How do I access the mount path from my Spring Boot application to achieve file read and write.</p>
<p>If I understand you correctly, you can pass external info to a Spring Boot application through environment variables. <a href="https://dzone.com/articles/configuring-spring-boot-on-kubernetes-with-configm" rel="nofollow noreferrer">Here</a> is an article with more detailed info on how to do it.</p> <blockquote> <p>Kubernetes ConfigMaps also allows us to load a file as a ConfigMap property. That gives us an interesting option of loading the Spring Boot application.properties via Kubernetes ConfigMaps.</p> </blockquote> <p>Also, you may want to get familiar with <a href="https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fuse_integration_services_2.0_for_openshift/kube-spring-boot" rel="nofollow noreferrer">this documentation</a>. It shows how to reference secrets, which are also mounted, so you may find it helpful in your case.</p> <blockquote> <p>The Spring Cloud Kubernetes plug-in implements the integration between Kubernetes and Spring Boot. In principle, you could access the configuration data from a ConfigMap using the Kubernetes API.</p> </blockquote> <p>Please let me know if that helped.</p>
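<p>Regarding the mount itself: inside the container the NFS share is just a regular directory at the <code>mountPath</code> (<code>/var/nfs</code> in your manifest), so the application can read and write it with ordinary file I/O. A quick, hedged way to confirm the mount from outside the app (the pod name below is a placeholder):</p> <pre><code># check that the NFS share is mounted where the deployment says it should be
kubectl exec -it &lt;ldap-pod-name&gt; -- ls -la /var/nfs

# check that it is writable from inside the container
kubectl exec &lt;ldap-pod-name&gt; -- touch /var/nfs/write-test
</code></pre>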
<p>I'm new AKS, ACR, and DevOps Pipelines and I'm trying to setup a CI/CD pipeline.</p> <p>I have a resource group setup that has both AKS and ACR in it. AKS is using <code>Standard_B2s</code> and only one node at this point since I'm just playing around.</p> <p>Images are being deployed to ACR automatically on a commit to master--haven't figured out how to setup testing yet--but when it comes to deploying to AKS, I just keep getting a:</p> <pre><code>##[error]error: deployment "client-deployment" exceeded its progress deadline </code></pre> <p>I've changed my <code>client.yaml</code> to include a <code>progressDeadlineSeconds</code> of like an hour as 10, 15, and 20 minutes didn't work:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment spec: progressDeadlineSeconds: 3600 replicas: 1 selector: matchLabels: component: client template: metadata: labels: component: client spec: containers: - name: client image: testappcontainers.azurecr.io/testapp-client ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service spec: type: ClusterIP selector: component: client ports: - port: 3000 targetPort: 3000 </code></pre> <p>I've just been modifying the <code>azure-pipelines.yml</code> that Pipelines generated for me, which I currently have as the following:</p> <pre><code># Docker # Build and push an image to Azure Container Registry # https://learn.microsoft.com/azure/devops/pipelines/languages/docker trigger: - master resources: - repo: self variables: # Container registry service connection established during pipeline creation dockerRegistryServiceConnection: &lt;dockerRegistryServiceConnection_key&gt; imageRepository: 'testapp' containerRegistry: 'testappcontainers.azurecr.io' dockerfilePath: '$(Build.SourcesDirectory)' tag: '$(Build.BuildId)' imagePullSecret: &lt;imagePullSecret_key&gt; # Agent VM image name vmImageName: 'ubuntu-latest' stages: - stage: Build displayName: Build and push stage jobs: - job: Build displayName: Build pool: vmImage: $(vmImageName) steps: - task: Docker@2 displayName: Build and push client image to container registry inputs: command: buildAndPush repository: $(imageRepository)-client dockerfile: $(dockerfilePath)/client/Dockerfile containerRegistry: $(dockerRegistryServiceConnection) tags: | $(tag) - upload: manifests artifact: manifests - stage: Deploy displayName: Deploy stage dependsOn: Build jobs: - deployment: Deploy displayName: Deploy job pool: vmImage: $(vmImageName) environment: 'testapp.default' strategy: runOnce: deploy: steps: - task: KubernetesManifest@0 displayName: Create imagePullSecret inputs: action: createSecret secretName: $(imagePullSecret) dockerRegistryEndpoint: $(dockerRegistryServiceConnection) - task: KubernetesManifest@0 displayName: Deploy to Kubernetes cluster inputs: action: deploy manifests: | $(Pipeline.Workspace)/manifests/client.yaml imagePullSecrets: | $(imagePullSecret) containers: | $(containerRegistry)/$(imageRepository):$(tag) </code></pre> <p>Here is the log too for the Task that fails:</p> <pre><code>##[debug]Evaluating condition for step: 'Deploy to Kubernetes cluster' ##[debug]Evaluating: SucceededNode() ##[debug]Evaluating SucceededNode: ##[debug]=&gt; True ##[debug]Result: True ##[section]Starting: Deploy to Kubernetes cluster ============================================================================== Task : Deploy to Kubernetes Description : Use Kubernetes manifest files to deploy to clusters or even bake the manifest files to be used for 
deployments using Helm charts Version : 0.162.1 Author : Microsoft Corporation Help : https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes-manifest ============================================================================== ##[debug]agent.TempDirectory=/home/vsts/work/_temp ##[debug]loading inputs and endpoints ##[debug]loading INPUT_ACTION ##[debug]loading INPUT_KUBERNETESSERVICECONNECTION ##[debug]loading INPUT_STRATEGY ##[debug]loading INPUT_TRAFFICSPLITMETHOD ##[debug]loading INPUT_PERCENTAGE ##[debug]loading INPUT_BASELINEANDCANARYREPLICAS ##[debug]loading INPUT_MANIFESTS ##[debug]loading INPUT_CONTAINERS ##[debug]loading INPUT_IMAGEPULLSECRETS ##[debug]loading INPUT_RENDERTYPE ##[debug]loading INPUT_DOCKERCOMPOSEFILE ##[debug]loading INPUT_HELMCHART ##[debug]loading INPUT_KUSTOMIZATIONPATH ##[debug]loading INPUT_RESOURCETOPATCH ##[debug]loading INPUT_RESOURCEFILETOPATCH ##[debug]loading INPUT_MERGESTRATEGY ##[debug]loading INPUT_SECRETTYPE ##[debug]loading ENDPOINT_AUTH_&lt;token&gt; ##[debug]loading ENDPOINT_AUTH_SCHEME_&lt;token&gt; ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_AZUREENVIRONMENT ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_AZURETENANTID ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_SERVICEACCOUNTNAME ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_ROLEBINDINGNAME ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_SECRETNAME ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_APITOKEN ##[debug]loading ENDPOINT_AUTH_PARAMETER_&lt;token&gt;_SERVICEACCOUNTCERTIFICATE ##[debug]loading ENDPOINT_AUTH_SYSTEMVSSCONNECTION ##[debug]loading ENDPOINT_AUTH_SCHEME_SYSTEMVSSCONNECTION ##[debug]loading ENDPOINT_AUTH_PARAMETER_SYSTEMVSSCONNECTION_ACCESSTOKEN ##[debug]loading SECRET_CONTAINER_PASSWORD ##[debug]loading SECRET_CONTAINER_USERNAME ##[debug]loading SECRET_SYSTEM_ACCESSTOKEN ##[debug]loaded 32 ##[debug]Agent.ProxyUrl=undefined ##[debug]Agent.CAInfo=undefined ##[debug]Agent.ClientCert=undefined ##[debug]Agent.SkipCertValidation=undefined ##[debug]SYSTEM_HOSTTYPE=build ##[debug]System.TeamFoundationCollectionUri=https://dev.azure.com/thetestcompany/ ##[debug]Build.BuildNumber=20191231.5 ##[debug]Build.DefinitionName=test-app ##[debug]System.DefinitionId=4 ##[debug]Agent.JobName=Deploy job ##[debug]System.TeamProject=test-app ##[debug]Build.BuildId=41 ##[debug]System.TeamProject=test-app ##[debug]namespace=null ##[debug]containers=***/testapp:41 ##[debug]imagePullSecrets=testappcontainers&lt;key&gt;-auth ##[debug]manifests=/home/vsts/work/1/manifests/client.yaml ##[debug]percentage=0 ##[debug]strategy=none ##[debug]trafficSplitMethod=pod ##[debug]baselineAndCanaryReplicas=0 ##[debug]arguments=null ##[debug]secretArguments=null ##[debug]secretType=dockerRegistry ##[debug]secretName=null ##[debug]dockerRegistryEndpoint=null ##[debug]kubernetesServiceConnection=&lt;token&gt; ##[debug]&lt;token&gt; data namespace = default ##[debug]System.TeamFoundationCollectionUri=https://dev.azure.com/thetestcompany/ ##[debug]System.HostType=build ##[debug]System.DefaultWorkingDirectory=/home/vsts/work/1/s ##[debug]Build.SourceBranchName=master ##[debug]Build.Repository.Provider=TfsGit ##[debug]Build.Repository.Uri=https://[email protected]/thetestcompany/test-app/_git/test-app ##[debug]agent.proxyurl=undefined ##[debug]VSTS_ARM_REST_IGNORE_SSL_ERRORS=undefined ##[debug]AZURE_HTTP_USER_AGENT=VSTS_&lt;hash&gt;_build_4_0 ##[debug]Agent.ProxyUrl=undefined ##[debug]Agent.CAInfo=undefined ##[debug]Agent.ClientCert=undefined 
##[debug]check path : /home/vsts/work/_tasks/KubernetesManifest_&lt;hash&gt;/0.162.1/node_modules/azure-pipelines-tool-lib/lib.json ##[debug]adding resource file: /home/vsts/work/_tasks/KubernetesManifest_&lt;hash&gt;/0.162.1/node_modules/azure-pipelines-tool-lib/lib.json ##[debug]system.culture=en-US ##[debug]check path : /home/vsts/work/_tasks/KubernetesManifest_&lt;hash&gt;/0.162.1/task.json ##[debug]adding resource file: /home/vsts/work/_tasks/KubernetesManifest_&lt;hash&gt;/0.162.1/task.json ##[debug]system.culture=en-US ##[debug]action=deploy ##[debug]kubernetesServiceConnection=&lt;token&gt; ##[debug]agent.tempDirectory=/home/vsts/work/_temp ##[debug]&lt;token&gt; data authorizationType = AzureSubscription ##[debug]&lt;token&gt;=https://testappk8s-dns-&lt;key&gt;.hcp.westus.azmk8s.io/ ##[debug]&lt;token&gt; auth param serviceAccountCertificate = *** ##[debug]&lt;token&gt; auth param apiToken = *** ##[debug]set KUBECONFIG=/home/vsts/work/_temp/kubectlTask/1577816701759/config ##[debug]Processed: ##vso[task.setvariable variable=KUBECONFIG;issecret=false;]/home/vsts/work/_temp/kubectlTask/1577816701759/config ##[debug]&lt;token&gt; data acceptUntrustedCerts = undefined ##[debug]which 'kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]which 'kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]System.DefaultWorkingDirectory=/home/vsts/work/1/s ##[debug]defaultRoot: '/home/vsts/work/1/s' ##[debug]findOptions.allowBrokenSymbolicLinks: 'false' ##[debug]findOptions.followSpecifiedSymbolicLink: 'true' ##[debug]findOptions.followSymbolicLinks: 'true' ##[debug]matchOptions.debug: 'false' ##[debug]matchOptions.nobrace: 'true' ##[debug]matchOptions.noglobstar: 'false' ##[debug]matchOptions.dot: 'true' ##[debug]matchOptions.noext: 'false' ##[debug]matchOptions.nocase: 'false' ##[debug]matchOptions.nonull: 'false' ##[debug]matchOptions.matchBase: 'false' ##[debug]matchOptions.nocomment: 'false' ##[debug]matchOptions.nonegate: 'false' ##[debug]matchOptions.flipNegate: 'false' ##[debug]pattern: '/home/vsts/work/1/manifests/client.yaml' ##[debug]findPath: '/home/vsts/work/1/manifests/client.yaml' ##[debug]statOnly: 'true' ##[debug]found 1 paths ##[debug]applying include pattern ##[debug]1 matches ##[debug]1 final results ##[debug]agent.tempDirectory=/home/vsts/work/_temp ##[debug]New K8s objects after addin imagePullSecrets are :[{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"client-deployment"},"spec":{"progressDeadlineSeconds":3600,"replicas":1,"selector":{"matchLabels":{"component":"client"}},"template":{"metadata":{"labels":{"component":"client"}},"spec":{"containers":[{"name":"client","image":"***/testapp-client","ports":[{"containerPort":3000}]}],"imagePullSecrets":[{"name":"testappcontainers1741032e-auth"}]}}}},{"apiVersion":"v1","kind":"Service","metadata":{"name":"client-cluster-ip-service"},"spec":{"type":"ClusterIP","selector":{"component":"client"},"ports":[{"port":3000,"targetPort":3000}]}}] ##[debug]agent.tempDirectory=/home/vsts/work/_temp ##[debug]agent.tempDirectory=/home/vsts/work/_temp ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]/usr/bin/kubectl arg: apply ##[debug]/usr/bin/kubectl arg: ["-f","/home/vsts/work/_temp/Deployment_client-deployment_1577816701782,/home/vsts/work/_temp/Service_client-cluster-ip-service_1577816701782"] ##[debug]/usr/bin/kubectl arg: ["--namespace","default"] ##[debug]exec tool: /usr/bin/kubectl ##[debug]arguments: ##[debug] apply 
##[debug] -f ##[debug] /home/vsts/work/_temp/Deployment_client-deployment_1577816701782,/home/vsts/work/_temp/Service_client-cluster-ip-service_1577816701782 ##[debug] --namespace ##[debug] default [command]/usr/bin/kubectl apply -f /home/vsts/work/_temp/Deployment_client-deployment_1577816701782,/home/vsts/work/_temp/Service_client-cluster-ip-service_1577816701782 --namespace default deployment.apps/client-deployment unchanged service/client-cluster-ip-service unchanged ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]/usr/bin/kubectl arg: ["rollout","status"] ##[debug]/usr/bin/kubectl arg: Deployment/client-deployment ##[debug]/usr/bin/kubectl arg: ["--namespace","default"] ##[debug]exec tool: /usr/bin/kubectl ##[debug]arguments: ##[debug] rollout ##[debug] status ##[debug] Deployment/client-deployment ##[debug] --namespace ##[debug] default [command]/usr/bin/kubectl rollout status Deployment/client-deployment --namespace default error: deployment "client-deployment" exceeded its progress deadline ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]which '/usr/bin/kubectl' ##[debug]found: '/usr/bin/kubectl' ##[debug]/usr/bin/kubectl arg: get ##[debug]/usr/bin/kubectl arg: service/client-cluster-ip-service ##[debug]/usr/bin/kubectl arg: ["-o","json"] ##[debug]/usr/bin/kubectl arg: ["--namespace","default"] ##[debug]exec tool: /usr/bin/kubectl ##[debug]arguments: ##[debug] get ##[debug] service/client-cluster-ip-service ##[debug] -o ##[debug] json ##[debug] --namespace ##[debug] default [command]/usr/bin/kubectl get service/client-cluster-ip-service -o json --namespace default { "apiVersion": "v1", "kind": "Service", "metadata": { "annotations": { "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"client-cluster-ip-service\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"port\":3000,\"targetPort\":3000}],\"selector\":{\"component\":\"client\"},\"type\":\"ClusterIP\"}}\n" }, "creationTimestamp": "name": "client-cluster-ip-service", "namespace": "default", "resourceVersion": "1234045", "selfLink": "/api/v1/namespaces/default/services/client-cluster-ip-service", "uid": "5f077159-2bdd-11ea-af20-3eaa105eb2b3" }, "spec": { "clusterIP": "10.0.181.220", "ports": [ { "port": 3000, "protocol": "TCP", "targetPort": 3000 } ], "selector": { "component": "client" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ##[debug]KUBECONFIG=/home/vsts/work/_temp/kubectlTask/1577816701759/config ##[debug]set KUBECONFIG= ##[debug]Processed: ##vso[task.setvariable variable=KUBECONFIG;issecret=false;] ##[debug]task result: Failed ##[error]error: deployment "client-deployment" exceeded its progress deadline ##[debug]Processed: ##vso[task.issue type=error;]error: deployment "client-deployment" exceeded its progress deadline ##[debug]Processed: ##vso[task.complete result=Failed;]error: deployment "client-deployment" exceeded its progress deadline ##[section]Finishing: Deploy to Kubernetes cluster </code></pre> <p>Then in Azure CLI, it shows the deployment is there, but with no available pods:</p> <pre><code>eox-dev@Azure:~$ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE client-deployment 0/1 1 0 3h47m eox-dev@Azure:~$ kubectl describe deployment client-deployment Name: client-deployment Namespace: default CreationTimestamp: Tue, 31 Dec 2019 15:50:30 +0000 
Labels: &lt;none&gt; Annotations: deployment.kubernetes.io/revision: 1 kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"client-deployment","namespace":"default"},"spec":{"progre... Selector: component=client Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: component=client Containers: client: Image: testappcontainers.azurecr.io/testapp-client Port: 3000/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing False ProgressDeadlineExceeded OldReplicaSets: &lt;none&gt; NewReplicaSet: client-deployment-5688bdc69c (1/1 replicas created) Events: &lt;none&gt; </code></pre> <p>So what am I doing wrong here?</p>
<blockquote> <p>Error from server (BadRequest): container "client" in pod "client-deployment-5688bdc69c-hxlcf" is waiting to start: trying and failing to pull image</p> </blockquote> <p>Based on my experience, this is related to <code>imagePullSecrets</code> and the <code>Kubernetes namespace</code>.</p> <p>In your <code>Create imagePullSecret</code> and <code>Deploy to Kubernetes cluster</code> tasks, I saw that you did not provide a value for the task parameter <code>namespace</code>. This leads to a new namespace named <code>default</code> being created, since you left the namespace <strong>unspecified</strong>.</p> <p>And the <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-security#kubernetes-secrets" rel="noreferrer">kubernetes secret</a> generated by the <code>createSecret</code> action is separate for each namespace. In other words, each namespace has its own secret value:</p> <blockquote> <p>Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.</p> </blockquote> <hr> <p>Now, let's get back to your build process.</p> <p>In your yml definition, <code>Create imagePullSecret</code> creates a <code>secret</code> for the new namespace <code>default</code>, which the task creates automatically because you did not provide a namespace value.</p> <p>Then, in the next task, <code>Deploy to Kubernetes cluster</code>, for the same reason the task re-creates another new namespace <code>default</code> (<strong>Note:</strong> this is not the same as the previous one). You can also see this in the log:</p> <p><a href="https://i.stack.imgur.com/oXgHC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oXgHC.png" alt="enter image description here"></a></p> <p>At this point, the <code>secret</code> generated by the previous task is not available to the current namespace. BUT, as you know, <code>ACR</code> is a private container registry, so the system must verify that a valid <code>kubernetes secret</code> is available.</p> <p>In addition, in your Deploy to Kubernetes cluster task, you were specifying the repository as <code>$(imageRepository)</code>, which is not the same as the repository you pushed the image to: <strong><code>$(imageRepository)-client</code></strong>.</p> <p>This can also be checked in your log:</p> <p><a href="https://i.stack.imgur.com/hbOB4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hbOB4.png" alt="enter image description here"></a></p> <p>That's why there is no available pod in your deployment, and why the image pull fails.</p> <hr> <p>To avoid the issue, please ensure you provide the <code>namespace</code> value in the <code>KubernetesManifest@0</code> tasks.</p> <pre><code> - task: KubernetesManifest@0 displayName: Create imagePullSecret inputs: action: createSecret secretName: $(imagePullSecret) namespace: $(k8sNamespace) dockerRegistryEndpoint: $(DRServiceConnection) - task: KubernetesManifest@0 displayName: Deploy to Kubernetes cluster inputs: action: deploy namespace: $(k8sNamespace) manifests: | $(System.ArtifactsDirectory)/manifests/deployment.yml imagePullSecrets: | $(imagePullSecret) containers: | $(containerRegistry)/$(imageRepository)-client:$(tag) </code></pre> <p>This way the same secret is created in, and referenced from, the <code>imagePullSecrets</code> of the namespace you actually deploy to.</p>
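<p>As a hedged follow-up, you can verify the result from the Azure Cloud Shell (the namespace and secret names below are placeholders for whatever <code>$(k8sNamespace)</code> and <code>$(imagePullSecret)</code> resolve to):</p> <pre><code># the pull secret and the deployment should live in the same namespace
kubectl get secret &lt;image-pull-secret-name&gt; -n &lt;k8s-namespace&gt;
kubectl get deployment client-deployment -n &lt;k8s-namespace&gt;

# show the exact image reference being pulled and any ImagePullBackOff events
kubectl describe pod -l component=client -n &lt;k8s-namespace&gt;
</code></pre>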
<p>When I restart the docker service in work node, the logs of kubelet in master node report a no such file error.</p> <pre><code># in work node # systemctl restart docker service # in master node # journalctl -u kubelet # failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory </code></pre>
<p>Arghya is right but I would like to add some info you should be aware of:</p> <ol> <li><p>You can execute <code>kubeadm init phase kubelet-start</code> to invoke only the particular step that writes the kubelet configuration file and environment file and then starts the kubelet.</p></li> <li><p>After performing a restart there is a chance that swap will be re-enabled. Make sure to run <code>swapoff -a</code> in order to turn it off.</p></li> <li><p>If you encounter any token validation problems then simply run <code>kubeadm token create --print-join-command</code> and then do the join process with the provided info. Remember that tokens expire after 24 hours by default.</p></li> <li><p>If you wish to know more about <code>kubeadm init phase</code> you can find it <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/" rel="noreferrer">here</a> and <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#instructions" rel="noreferrer">here</a>.</p></li> </ol> <p>Please let me know if that helped.</p>
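<p>Putting points 2 and 3 together, here is a minimal, hedged recovery sketch for the affected worker node. It assumes a kubeadm-based cluster and that re-joining the node is acceptable; note that <code>kubeadm reset</code> (not mentioned above) wipes the node's local kubeadm/kubelet state, so adapt this to your environment first:</p> <pre><code># on the control-plane node: print a fresh join command (tokens expire after 24h)
kubeadm token create --print-join-command

# on the worker node whose /var/lib/kubelet/config.yaml is missing
swapoff -a
kubeadm reset -f
# now paste and run the join command printed above; joining regenerates
# /var/lib/kubelet/config.yaml and starts the kubelet again
</code></pre>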
<p>I am new to AKS and trying to set up the cluster and expose it via an app gateway ingress controller. While I was able to set up the cluster using az commands and was able to deploy and hit it using HTTP. I am having some challenges in enabling HTTPS over 443 in-app gateway ingress and looking to get some help.</p> <ol> <li>Below is our workflow and I am trying to setup app gateway listener on port 443</li> <li>Below is the k8 we used for enabling the ingress. If I apply is without ssl cert it woks but if I give ssl cert I get a 502 bad gateway.</li> <li>Cert is uploaded to KV and Cluster has KV add-on installed. But I am not sure how to attach this specific kv to cluster and whether the cert should be uploaded to gateway or Kubernetes.</li> </ol> <p><a href="https://i.stack.imgur.com/vDe9X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vDe9X.png" alt="enter image description here" /></a></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend-web-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: workspace-dev-cluster-cert appgw.ingress.kubernetes.io/cookie-based-affinity: &quot;true&quot; appgw.ingress.kubernetes.io/request-timeout: &quot;90&quot; appgw.ingress.kubernetes.io/backend-path-prefix: &quot;/&quot; spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: frontend-svc port: number: 80 </code></pre>
<p>This link can help you with the KV add-on certificate on the App Gateway: <a href="https://azure.github.io/application-gateway-kubernetes-ingress/features/appgw-ssl-certificate/" rel="nofollow noreferrer">https://azure.github.io/application-gateway-kubernetes-ingress/features/appgw-ssl-certificate/</a></p> <p>I use a different configuration to set certs on the App Gateway.</p> <ol> <li>I'm getting certificates via the <a href="https://akv2k8s.io/" rel="nofollow noreferrer">akv2k8s</a> tool, which creates secrets on the k8s cluster.</li> <li>Then I use those certs in the ingress configuration. Please check the <code>tls</code> definition under <code>spec</code>.</li> </ol> <blockquote> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend-web-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/appgw-ssl-certificate: workspace-dev-cluster-cert appgw.ingress.kubernetes.io/cookie-based-affinity: &quot;true&quot; appgw.ingress.kubernetes.io/request-timeout: &quot;90&quot; appgw.ingress.kubernetes.io/backend-path-prefix: &quot;/&quot; spec: tls: - hosts: - yourdomain.com secretName: your-tls-secret-name rules: - http: paths: - path: / pathType: Prefix backend: service: name: frontend-svc port: number: 80 </code></pre> </blockquote>
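<p>Once akv2k8s (or whatever mechanism you choose) has synced the certificate, a quick sanity check is to confirm the TLS secret actually exists in the same namespace as the Ingress before troubleshooting the 502 further. The names below are the placeholders from the example above:</p> <pre><code># the secret should be of type kubernetes.io/tls and live in the Ingress namespace
kubectl get secret your-tls-secret-name -n &lt;ingress-namespace&gt; -o jsonpath='{.type}'

# inspect the ingress events if the Application Gateway still returns 502
kubectl describe ingress frontend-web-ingress -n &lt;ingress-namespace&gt;
</code></pre>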
<p>I am using Traefik to expose my pods' services externally, but I read the Traefik deployment documentation and found out that Traefik forwards requests only within one namespace. For example, a request for namespace A could not be forwarded to namespace B. Should I deploy multiple Traefik instances in the Kubernetes cluster? I currently have 6 namespaces; should I deploy 6 Traefiks? It seems weird. What is the best practice for this situation? I searched the internet but found nothing about this.</p>
<p>If I understand you correctly this is unfortunately not possible, and it was a conscious decision to do it that way:</p> <blockquote> <p>Cross namespace references would be a prime source of privilege escalation attacks.</p> </blockquote> <p>But if you don't care about those security rules there is a workaround (bear in mind that it will not work on every platform). You need to use Kubernetes services of type <code>ExternalName</code>, which reference the services from your other namespaces. See the example below:</p> <p>a. you have <code>serviceA</code> in <code>namespaceA</code></p> <p>b. create <code>serviceB</code> in <code>namespaceB</code></p> <pre><code>spec: ... type: ExternalName externalName: serviceA.namespaceA.svc.cluster.local </code></pre> <p>c. add an ingress rule into <code>ingressB</code> in <code>namespaceB</code></p> <pre><code> - path: /**** backend: serviceName: serviceB servicePort: *** </code></pre> <p>However, it would be safer to simply deploy a separate Traefik ingress for each namespace.</p> <p>I hope it helps.</p>
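<p>As a small sketch of step b (all names are the placeholders used above), the same <code>ExternalName</code> service can also be created in one line:</p> <pre><code># create serviceB in namespaceB pointing at serviceA in namespaceA
kubectl create service externalname serviceB \
  --external-name serviceA.namespaceA.svc.cluster.local -n namespaceB
</code></pre>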
<p>I've read everywhere that to set Https to access a kubernetes cluster you need to have an Ingress <strong>and not simply a LoadBalancer service</strong> which also exposes the cluster outside.</p> <p>My question is pretty theoretical: if an Ingress (and it is) is composed of a <strong>LoadBalancer</strong> service, a <strong>Controller</strong> (a deployment/pod of an nginx image for example) and a set of <strong>Rules</strong> (in order to correctly proxy the incoming requests inside the cluster), <strong>why can't we set Https in front of a LoadBalancer instead of an Ingress</strong>?</p> <p>As title of exercise <strong>I've built the three components separately</strong> by myself (a LoadBalancer, a Controller/API Gateway with some Rules): these three together already get the incoming requests and proxy them inside the cluster according to specific rules so, I can say, I have built an Ingress by myself. Can't I add https to this structure and do I need to set a redundant part (a k8s Ingress) in front of the cluster?</p>
<p>Not sure if I fully understood your question.</p> <p>In <code>Kubernetes</code> you expose your cluster/application using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a>, which is well described <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">here</a>. A good comparison of all <code>service</code> types can be found in <a href="https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" rel="nofollow noreferrer">this article</a>.</p> <p>When you create a service of type <code>LoadBalancer</code> it creates an <code>L4 LoadBalancer</code>. <strong>L4</strong> is aware of information like <code>source IP:port</code> and <code>destination IP:port</code>, but has no information about the application layer (Layer 7). <code>HTTP/HTTPS LoadBalancers</code> operate on Layer 7, so they are aware of the application. More information about Load Balancing can be found <a href="https://cloud.google.com/load-balancing/docs/load-balancing-overview" rel="nofollow noreferrer">here</a>.</p> <blockquote> <p>Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols, such as IP address and TCP or UDP port</p> <p>Layer 7-based load balancing to add content-based routing decisions based on attributes, such as the HTTP header and the uniform resource identifier</p> </blockquote> <p><a href="https://docs.nginx.com/nginx-ingress-controller/overview/#what-is-the-ingress" rel="nofollow noreferrer">Ingress</a> is something like a <code>LoadBalancer</code> with L7 support.</p> <blockquote> <p>The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.</p> </blockquote> <p><code>Ingress</code> also provides many advantages. For example, if you have many services in your cluster, you can create a single <code>LoadBalancer</code> plus an <code>Ingress</code> that redirects traffic to the proper service, which saves you the cost of creating several <code>LoadBalancers</code>.</p> <p>In order for the <code>Ingress</code> resource to work, the cluster must have an <code>ingress controller</code> running.</p> <blockquote> <p>The Ingress controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress controller implementations.
In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.</p> </blockquote> <p>There are many <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controllers</a>, but the most popular is the <code>Nginx Ingress Controller</code>.</p> <p>So, my answer regarding:</p> <blockquote> <p>why can't we set Https in front of a LoadBalancer instead of an Ingress?</p> </blockquote> <p>It's not only about securing your cluster with HTTPS but also about the many capabilities and features that Ingress provides.</p> <p>Very good documentation regarding HTTP(S) Load Balancing can be found in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE docs</a>.</p>
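<p>To make the HTTPS part concrete, here is a minimal, hedged sketch of terminating TLS at an NGINX Ingress. Everything below (hostname, secret name, service name, certificate files) is a placeholder, and the manifest uses the <code>networking.k8s.io/v1</code> schema; older clusters use <code>extensions/v1beta1</code> instead:</p> <pre><code># store the certificate and key as a TLS secret
kubectl create secret tls demo-tls --cert=tls.crt --key=tls.key

# route HTTPS traffic for one host to an existing ClusterIP service
kubectl apply -f - &lt;&lt;'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - demo.example.com
    secretName: demo-tls
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
EOF
</code></pre>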
<p>When exposing a service via ingress inginx on kubernetes, the only http response code upon requesting something is 200.</p> <p>The web app is basically a form that uploads images, stores them somewhere, and responds with the according to code (e.g. bad request, or created with the URI location in the header). It expects a Request with multipart/form-data on an API address as following: "http://{anyAddress}/api/images?directory={whereToStore}?processing={someProcessingTags}"</p> <p>The web app works as expected locally and as a single container on docker.</p> <p>So when the service is accessed via ingress, it first responds with the form as expected. You can specify settings and an image to upload. Upon sending the Request the file is correctly uploaded to the web app and processed, the web app then sends the expected 201 created, but the response that gets back to the browser is always 200 OK. I have no idea why.</p> <p>This is on a docker for desktop + kubernetes server that runs locally.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/proxy-body-size: 25m spec: rules: - http: paths: - path: /image-upload backend: serviceName: image-upload-service servicePort: http --- apiVersion: v1 kind: Service metadata: name: image-upload-service spec: type: LoadBalancer selector: run: image-upload ports: - name: http port: 80 targetPort: api protocol: TCP - name: https port: 443 targetPort: api protocol: TCP --- apiVersion: apps/v1 kind: Deployment metadata: name: image-upload-cluster labels: run: image-upload spec: selector: matchLabels: run: image-upload replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 0 template: metadata: labels: run: image-upload spec: volumes: - name: shared-volume hostPath: path: /exports containers: - name: image-upload image: calyxa/image-service imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /exports name: shared-volume resources: requests: cpu: 10m ports: - containerPort: 3000 name: api readinessProbe: httpGet: path: / port: 3000 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 </code></pre> <p>I expect upon sending a Request with an image file to get a Response with an according to status code, that is 400 bad request or 201 created (with a location in the header).</p> <p>The Response is always 200 OK, even when the web app crashes because of the Request. </p> <p>If the web app itself is not running (because it crashed, for instance), I get a 503 service unavailable, as expected.</p>
<p>Jesus, I tried to make this work all day and found the solution after posting my question. So, in order to pass around parameters in api calls (in the http address) one does need to make the ingress rewrite-target rule like so:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/proxy-body-size: 25m #nginx.ingress.kubernetes.io/ingress.class: public spec: rules: - http: paths: - path: /image-upload/?(.*) backend: serviceName: image-upload-service servicePort: http </code></pre> <p>with the interesting parts being</p> <pre><code>rewrite-target: /$1 </code></pre> <p>and</p> <pre><code>path: /image-upload/?(.*) </code></pre> <p>hope this helps someone else. cheers!</p>
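<p>A quick, hedged way to verify the fix from outside the cluster (the ingress address, the path and the form field name <code>image</code> are assumptions about your setup, so adjust them): send a multipart request through the ingress and check the status line, which should now be the backend's own code instead of a blanket 200.</p> <pre><code># -i prints the status line and headers returned through the ingress
curl -i -F "image=@test.jpg" \
  "http://&lt;ingress-address&gt;/image-upload/api/images?directory=test"
# expect e.g. "HTTP/1.1 201 Created" with a Location header, or "400 Bad Request"
</code></pre>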
<p>I'm trying to install older version of helm and tiller on minikube locally and keep on getting the <code>Error: error installing: the server could not find the requested resource</code> erorr message - no clue on how else to approach the problem;</p> <p>Steps I did:</p> <ul> <li>Accoording to this site: <a href="https://medium.com/@nehaguptag/installing-older-version-of-helm-downgrading-helm-8f3240592202" rel="nofollow noreferrer">https://medium.com/@nehaguptag/installing-older-version-of-helm-downgrading-helm-8f3240592202</a></li> </ul> <pre><code>$ brew unlink kubernetes-helm $ brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/78d64252f30a12b6f4b3ce29686ab5e262eea812/Formula/kubernetes-helm.rb $ brew switch kubernetes-helm 2.9.1 </code></pre> <ul> <li>Other than that just: <code>minikube start</code></li> <li>Set kubectl to use minikube: <code>kubectl config set-context minikube</code></li> <li>Change docker to run/download images on minikube: <code>eval $(minikube docker-env)</code></li> </ul> <p>The error message i get is:</p> <pre><code>MacBook-Pro% helm init Creating /Users/rwalas/.helm Creating /Users/rwalas/.helm/repository Creating /Users/rwalas/.helm/repository/cache Creating /Users/rwalas/.helm/repository/local Creating /Users/rwalas/.helm/plugins Creating /Users/rwalas/.helm/starters Creating /Users/rwalas/.helm/cache/archive Creating /Users/rwalas/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /Users/rwalas/.helm. Error: error installing: the server could not find the requested resource </code></pre> <p>** Update</p> <p>This bug report helps a little but issues still exists: <a href="https://github.com/helm/helm/issues/6374" rel="nofollow noreferrer">https://github.com/helm/helm/issues/6374</a></p> <p>current workaround seems to be something like this:</p> <p><code>helm init --output yaml &gt; tiller.yaml</code> and update the tiller.yaml:</p> <p>change to apps/v1 add the selector field</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: helm name: tiller name: tiller-deploy namespace: kube-system spec: replicas: 1 strategy: {} selector: matchLabels: app: helm name: tiller </code></pre> <p>and: </p> <ul> <li><code>kubectl apply -f tiller.yaml</code></li> <li><code>helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -</code></li> </ul> <hr> <h3>Resolved:</h3> <p>And these steps helped me in the end, which I suggest to everyone who want to use older versions of helm</p> <pre><code># 1. Check which binary you would like: https://github.com/helm/helm/releases and copy address wget -c https://get.helm.sh/helm-v3.0.2-darwin-amd64.tar.gz tar -zxvf helm-v3.0.2-darwin-amd64.tar.gz rm -rf ~/.helm mv &lt;directory_of_download&gt;/Darwin-AMD64&lt;or whatever other name it was named&gt;/helm /usr/local/bin/helm </code></pre>
<p>There are two things to consider:</p> <ol> <li><p>Check which binary you would like from <a href="https://github.com/helm/helm/releases" rel="nofollow noreferrer">https://github.com/helm/helm/releases</a>, copy its address and then run:</p> <pre><code>wget -c https://get.helm.sh/helm-v3.0.2-darwin-amd64.tar.gz
tar -zxvf helm-v3.0.2-darwin-amd64.tar.gz
rm -rf ~/.helm
mv &lt;directory_of_download&gt;/Darwin-AMD64&lt;or whatever other name it was named&gt;/helm /usr/local/bin/helm
</code></pre></li> <li><p>The newest versions of K8s have some problems with installing Helm. Try to use K8s version 1.15.4 when starting your minikube, as this was an approved workaround: <code>minikube delete</code> and then <code>minikube start --kubernetes-version=1.15.4</code>. After that run <code>helm init</code>.</p></li> </ol>
<p>I have the following pods on my kubernetes (1.18.3) cluster:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod1 1/1 Running 0 14m pod2 1/1 Running 0 14m pod3 0/1 Pending 0 14m pod4 0/1 Pending 0 14m </code></pre> <p>pod3 and pod4 cannot start because the node has capacity for 2 pods only. When pod1 finishes and quits, then the scheduler picks either pod3 or pod4 and starts it. So far so good.</p> <p>However, I also have a high priority pod (hpod) that I'd like to start before pod3 or pod4 when either of the running pods finishes and quits.</p> <p>So I created a priorityclass can be found in the kubernetes docs:</p> <pre><code>kind: PriorityClass metadata: name: high-priority-no-preemption value: 1000000 preemptionPolicy: Never globalDefault: false description: "This priority class should be used for XYZ service pods only." </code></pre> <p>I've created the following pod yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: hpod labels: app: hpod spec: containers: - name: hpod image: ... resources: requests: cpu: "500m" memory: "500Mi" limits: cpu: "500m" memory: "500Mi" priorityClassName: high-priority-no-preemption </code></pre> <p>Now the problem is that when I start the high prio pod with kubectl apply -f hpod.yaml, then the scheduler terminates a running pod to allow the high priority pod to start despite I've set 'preemptionPolicy: Never'.</p> <p>The expected behaviour would be to postpone starting hpod until a currently running pod finishes. And when it does, then let hpod start before pod3 or pod4.</p> <p>What am I doing wrong?</p>
<p><strong>Prerequisites:</strong></p> <p>This solution was tested on Kubernetes <code>v1.18.3</code>, docker 19.03 and Ubuntu 18. Also text editor is required (i.e. <code>sudo apt-get install vim</code>).</p> <p>In Kubernetes documentation under <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#how-to-disable-preemption" rel="nofollow noreferrer">How to disable preemption</a> you can find <strong>Note</strong>:</p> <blockquote> <p>Note: In Kubernetes <code>1.15</code> and later, if the feature <code>NonPreemptingPriority</code> is <code>enabled</code>, <code>PriorityClasses</code> have the option to set <code>preemptionPolicy: Never</code>. This will prevent pods of that <code>PriorityClass</code> from preempting other pods.</p> </blockquote> <p>Also under <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#non-preempting-priority-class" rel="nofollow noreferrer">Non-preempting PriorityClass</a> you have information:</p> <blockquote> <p>The use of the PreemptionPolicy field requires the <code>NonPreemptingPriority</code> feature gate to be <code>enabled</code>.</p> </blockquote> <p>Later if you will check thoses <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">Feature Gates</a> info, you will find that <code>NonPreemptingPriority</code> is <code>false</code>, so as default it's disabled.</p> <p>Output with your current configuration:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-normal 1/1 Running 0 32s nginx-normal-2 1/1 Running 0 32s $ kubectl apply -f prio.yaml pod/nginx-priority created$ kubectl get pods NAME READY STATUS RESTARTS AGE nginx-normal-2 1/1 Running 0 48s nginx-priority 1/1 Running 0 8s </code></pre> <p>To enable <code>preemptionPolicy: Never</code> you need to apply <code>--feature-gates=NonPreemptingPriority=true</code> to 3 files:</p> <blockquote> <p>/etc/kubernetes/manifests/kube-apiserver.yaml</p> <p>/etc/kubernetes/manifests/kube-controller-manager.yaml</p> <p>/etc/kubernetes/manifests/kube-scheduler.yaml</p> </blockquote> <p>To check if this <code>feature-gate</code> is enabled you can check by using commands:</p> <pre><code>ps aux | grep apiserver | grep feature-gates ps aux | grep scheduler | grep feature-gates ps aux | grep controller-manager | grep feature-gates </code></pre> <p>For quite detailed information, why you have to edit thoses files please check <a href="https://github.com/kubernetes/kubernetes/issues/86680" rel="nofollow noreferrer">this Github thread</a>.</p> <pre><code>$ sudo su # cd /etc/kubernetes/manifests/ # ls etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml </code></pre> <p>Use your text editor to add feature gate to those files</p> <pre><code># vi kube-apiserver.yaml </code></pre> <p>and add <code>- --feature-gates=NonPreemptingPriority=true</code> under <code>spec.containers.command</code> like in example bellow:</p> <pre><code>spec: containers: - command: - kube-apiserver - --feature-gates=NonPreemptingPriority=true - --advertise-address=10.154.0.31 </code></pre> <p>And do the same with 2 other files. After that you can check if this flags were applied.</p> <pre><code>$ ps aux | grep apiserver | grep feature-gates root 26713 10.4 5.2 565416 402252 ? 
Ssl 14:50 0:17 kube-apiserver --feature-gates=NonPreemptingPriority=true --advertise-address=10.154.0.31 </code></pre> <p>Now you have redeploy your <code>PriorityClass</code>.</p> <pre><code>$ kubectl get priorityclass NAME VALUE GLOBAL-DEFAULT AGE high-priority-no-preemption 1000000 false 12m system-cluster-critical 2000000000 false 23m system-node-critical 2000001000 false 23m $ kubectl delete priorityclass high-priority-no-preemption priorityclass.scheduling.k8s.io &quot;high-priority-no-preemption&quot; deleted $ kubectl apply -f class.yaml priorityclass.scheduling.k8s.io/high-priority-no-preemption created </code></pre> <p>Last step is to deploy pod with this <code>PriorityClass</code>.</p> <p><strong>TEST</strong></p> <pre><code>$ kubectl get po NAME READY STATUS RESTARTS AGE nginx-normal 1/1 Running 0 4m4s nginx-normal-2 1/1 Running 0 18m $ kubectl apply -f prio.yaml pod/nginx-priority created $ kubectl get po NAME READY STATUS RESTARTS AGE nginx-normal 1/1 Running 0 5m17s nginx-normal-2 1/1 Running 0 20m nginx-priority 0/1 Pending 0 67s $ kubectl delete po nginx-normal-2 pod &quot;nginx-normal-2&quot; deleted $ kubectl get po NAME READY STATUS RESTARTS AGE nginx-normal 1/1 Running 0 5m55s nginx-priority 1/1 Running 0 105s </code></pre>
<p>I have a cluster with 7 nodes and a lot of services, nodes, etc. in Google Cloud Platform. I'm trying to get some metrics with StackDriver Legacy: in the Google Cloud Console -> StackDriver -> Metrics Explorer I have the whole set of Anthos metrics listed, but when I try to create a chart based on those metrics it doesn't show any data. The only response I get in the panel is <code>no data is available for the selected time frame</code>, even after changing the time frame and other settings.</p> <p>Is it right to think that with Anthos metrics I can retrieve information about my cronjobs, pods and services, such as failed initializations and job failures? And if so, can I do it with StackDriver Legacy or do I need to update to StackDriver Kubernetes Engine Monitoring?</p>
<p>The Anthos solution includes what's called <a href="https://cloud.google.com/gke-on-prem/docs/overview" rel="nofollow noreferrer">GKE On-Prem</a>. I'd take a look at the instructions for using logging and <a href="https://cloud.google.com/gke-on-prem/docs/how-to/administration/logging-and-monitoring" rel="nofollow noreferrer">monitoring on GKE On-Prem</a>. Stackdriver monitors GKE On-Prem clusters in a similar way to cloud-based GKE clusters.</p> <p>However, there's <a href="https://cloud.google.com/gke-on-prem/docs/concepts/logging-and-monitoring#logging_and_monitoring" rel="nofollow noreferrer">a note</a> where they say that, currently, Stackdriver only collects cluster logs and system component metrics. The full Kubernetes Monitoring experience will be available in a future release.</p> <p>You can also check that you've met all the <a href="https://cloud.google.com/gke-on-prem/docs/concepts/logging-and-monitoring#stackdriver_requirements." rel="nofollow noreferrer">configuration requirements</a>.</p>
<p>I have a bunch of users. Every user should be able to create/change/delete resources in namespaces like <code>*-stage</code>. Namespaces can be added or removed dynamically. I can create a ServiceAccount in every namespace and grant privileges.</p> <p>I created a pod in k8s, installed kubectl in it and ssh into it, so every user has access to this pod and can use kubectl. I know that I can mount ServiceAccount secrets into the pod. Since I have different ServiceAccounts for every namespace, I don't know how to grant privileges to all <code>*-stage</code> namespaces for every user. I don't want to create a <code>cluster-admin</code> ClusterRoleBinding for the ServiceAccount, because users should be able to modify only <code>*-stage</code> namespaces. Can you help me please?</p>
<p>I am posting a community wiki answer based on OP's solution for better visibility:</p> <blockquote> <p>Actually, I have already solved problem. I create <code>["*"]</code> role in every <code>*-stage</code> namespace and bind it to ServiceAccount. Then I mount ServiceAccount to kubectl pod which is available over ssh. So every user has unlimited access to <code>*-stage</code> namespaces.</p> </blockquote> <p>Also I am adding links for the official docs regarding <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">ServiceAccount</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">role-based access control</a> as a supplement. </p>
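<p>For completeness, here is a minimal sketch of what the quoted solution amounts to (the namespace, role and ServiceAccount names below are placeholders, not taken from the original setup): a wide-open <code>Role</code> in one <code>*-stage</code> namespace, bound to the ServiceAccount mounted by the shared kubectl pod. The same pair is repeated for every <code>*-stage</code> namespace.</p> <pre><code>kubectl apply -f - &lt;&lt;'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: stage-full-access
  namespace: team-a-stage
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: stage-full-access
  namespace: team-a-stage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: stage-full-access
subjects:
- kind: ServiceAccount
  name: kubectl-pod-sa
  namespace: tools
EOF
</code></pre>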
<p>I have created two seperate GKE clusters on K8s 1.14.10.</p> <blockquote> <p><a href="https://stackoverflow.com/questions/58309735/vpn-access-to-in-house-network-not-working-after-gke-cluster-upgrade-to-1-14-6">VPN access to in-house network not working after GKE cluster upgrade to 1.14.6</a></p> </blockquote> <p>I have followed this and the IP masquerading agent documentation. I have tried to test this using a client pod and server pod to exchange messages. I'm using Internal node IP to send message and created a ClusterIP to expose the pods.</p> <p>I have allowed requests for every instance in firewall rules for ingress and egress i.e <strong>0.0.0.0/0</strong>. <a href="https://i.stack.imgur.com/VP7du.png" rel="nofollow noreferrer">Pic:This is the description of the cluster which I have created</a> The config map of the IP masquerading agent stays the same as in the documentation. I'm able to ping the other node from within the pod but curl request says connection refused and tcpdump shows no data.</p> <p>Problem: I need to communicate from cluster A to cluster B in gke 1.14 with ipmasquerading set to true. I either get connection refused or i/o timeout. I have tried using internal and external node IPs as well as using a loadbalancer.</p>
<p>You have provided quite general information and without details I cannot provide specific scenario answer. It might be related to how did you create clusters or other firewalls settings. Due to that I will provide correct steps to creation and configuration 2 clusters with firewall and <code>masquerade</code>. Maybe you will be able to find which step you missed or misconfigured.</p> <p><strong>Clusters configuration (node,pods,svc) are on the bottom of the answer.</strong></p> <p><strong>1. Create VPC and 2 clusters</strong></p> <p>In docs it says about 2 different projects but you can do it in one project. Good example of VPC creation and 2 clusters can be found in GKE docs. <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#creating_a_network_and_two_subnets" rel="nofollow noreferrer">Create VPC</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#creating_a_cluster_in_your_first_service_project" rel="nofollow noreferrer">Crate 2 clusters</a>. In cluster <code>Tier1</code> you can enable <code>NetworkPolicy</code> now instead of enabling it later. After that you will need to create <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#console_8" rel="nofollow noreferrer">Firewall Rules</a>. You will also need to add <code>ICMP</code> protocol to firewall rule.</p> <p>At this point you should be able to ping between nodes from 2 clusters.</p> <p>For additional Firewall rules (allowing connection between pods, svc, etc) please check <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc#creating_additional_firewall_rules" rel="nofollow noreferrer">this docs</a>.</p> <p><strong>2. Enable <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">IP masquerade agent</a></strong></p> <p>As mentioned in docs, to run <code>IPMasquerade</code>:</p> <blockquote> <p>The ip-masq-agent DaemonSet is automatically installed as an add-on with --nomasq-all-reserved-ranges argument in a GKE cluster, if one or more of the following is true:</p> </blockquote> <blockquote> <p>The cluster has a network policy.</p> </blockquote> <p><strong>OR</strong></p> <blockquote> <p>The Pod's CIDR range is not within 10.0.0.0/8.</p> </blockquote> <p>It mean that <code>tier-2-cluster</code> already have <code>ip-masq-agent</code> in <code>kube-system</code> namespace (because <code>The Pod's CIDR range is not within 10.0.0.0/8.</code>). And if you enabled <code>NetworkPolicy</code> during creation of <code>tier-1-cluster</code> it should be have also installed. If not, you will need to enable it using command:</p> <p><code>$ gcloud container clusters update tier-1-cluster --update-addons=NetworkPolicy=ENABLED --zone=us-central1-a</code></p> <p>To verify if everything is ok you have to check if <code>Daemonset ip-masq-agent</code> pods were created. 
(Each pod for node).</p> <pre><code>$ kubectl get ds ip-masq-agent -n kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ip-masq-agent 3 3 3 3 3 beta.kubernetes.io/masq-agent-ds-ready=true 168m </code></pre> <p>If you will SSH to any of your nodes you will be able to see default <code>iptables</code> entries.</p> <pre><code>$ sudo iptables -t nat -L IP-MASQ Chain IP-MASQ (1 references) target prot opt source destination RETURN all -- anywhere 169.254.0.0/16 /* ip-masq: local traffic is not subject to MASQUERADE */ RETURN all -- anywhere 10.0.0.0/8 /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 172.16.0.0/12 /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 192.168.0.0/16 /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 240.0.0.0/4 /* ip-masq: RFC 5735 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 192.0.2.0/24 /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 198.51.100.0/24 /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 203.0.113.0/24 /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 100.64.0.0/10 /* ip-masq: RFC 6598 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 198.18.0.0/15 /* ip-masq: RFC 6815 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 192.0.0.0/24 /* ip-masq: RFC 6890 reserved range is not subject to MASQUERADE */ RETURN all -- anywhere 192.88.99.0/24 /* ip-masq: RFC 7526 reserved range is not subject to MASQUERADE */ MASQUERADE all -- anywhere anywhere /* ip-masq: outbound traffic is subject to MASQUERADE (must be last in chain) */ </code></pre> <p><strong>3. Deploy test application</strong></p> <p>I've used Hello application from <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">GKE docs</a> and deployed on both Clusters. In addition I have also deployed ubuntu image for tests.</p> <p><strong>4. Apply proper configuration for IPMasquerade</strong> This config need to be on the <code>source</code> cluster.</p> <p>In short, if destination CIDR is in <code>nonMasqueradeCIDRs:</code>, it will show it internal IP, otherwise it will show NodeIP as source.</p> <p>Save to file <code>config</code> below text:</p> <pre><code>nonMasqueradeCIDRs: - 10.0.0.0/8 resyncInterval: 2s masqLinkLocal: true </code></pre> <p>Create IPMasquarade ConfigMap</p> <pre><code>$ kubectl create configmap ip-masq-agent --from-file config --namespace kube-system </code></pre> <p>It will overwrite <code>iptables</code> configuration</p> <pre><code>$ sudo iptables -t nat -L IP-MASQ Chain IP-MASQ (2 references) target prot opt source destination RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: local traffic is not subject to MASQUERADE */ MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */ </code></pre> <p><strong>5. Tests:</strong></p> <p><strong>When IP is Masqueraded</strong></p> <p>SSH to Node form <code>Tier2</code> cluster and run:</p> <pre><code>sudo toolbox bash apt-get update apt install -y tcpdump </code></pre> <p>Now you should listen using command below. 
Port <code>32502</code> is the <code>NodePort</code> of the service from the <code>Tier 2</code> cluster.</p> <pre><code>tcpdump -i eth0 -nn -s0 -v port 32502 </code></pre> <p>In cluster <code>Tier1</code> you need to enter the ubuntu pod and curl <code>NodeIP:NodePort</code></p> <pre><code>$ kubectl exec -ti ubuntu -- bin/bash </code></pre> <p>You will need to install curl: <code>apt-get install curl</code>.</p> <p>curl NodeIP:NodePort (the node which is listening, and the NodePort of the service from cluster Tier 2).</p> <p><strong>CLI:</strong></p> <pre><code>root@ubuntu:/# curl 172.16.4.3:32502 Hello, world! Version: 2.0.0 Hostname: hello-world-deployment-7f67f479f5-h4wdm </code></pre> <p>On the node you can see an entry like:</p> <pre><code>tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 12:53:30.321641 IP (tos 0x0, ttl 63, id 25373, offset 0, flags [DF], proto TCP (6), length 60) 10.0.4.4.56018 &gt; 172.16.4.3.32502: Flags [S], cksum 0x8648 (correct), seq 3001889856 </code></pre> <p><code>10.0.4.4</code> is the <code>NodeIP</code> where the Ubuntu pod is located.</p> <p><strong>When the IP was not Masqueraded</strong></p> <p>Remove the <code>ConfigMap</code> from cluster Tier 1</p> <pre><code>$ kubectl delete cm ip-masq-agent -n kube-system </code></pre> <p>Change the CIDR in the <code>config</code> file to <code>172.16.4.0/22</code>, which is the <code>Tier 2</code> node pool, and reapply the ConfigMap</p> <pre><code>$ kubectl create configmap ip-masq-agent --from-file config --namespace kube-system </code></pre> <p>SSH to any node from Tier 1 to check that the <code>iptables rules</code> were changed.</p> <pre><code>sudo iptables -t nat -L IP-MASQ Chain IP-MASQ (2 references) target prot opt source destination RETURN all -- anywhere 172.16.4.0/22 /* ip-masq-agent: local traffic is not subject to MASQUERADE */ MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */ </code></pre> <p>For this test I again used the Ubuntu pod and curled the same IP as before.</p> <pre><code>tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 13:16:50.316234 IP (tos 0x0, ttl 63, id 53160, offset 0, flags [DF], proto TCP (6), length 60) 10.4.2.8.57876 &gt; 172.16.4.3.32502 </code></pre> <p><code>10.4.2.8</code> is the internal IP of the Ubuntu pod.</p> <p><strong>Configuration for Tests:</strong></p> <p><strong>TIER1</strong></p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/hello-world-deployment-7f67f479f5-b2qqz 1/1 Running 0 15m 10.4.1.8 gke-tier-1-cluster-default-pool-e006097b-5tnj &lt;none&gt; &lt;none&gt; pod/hello-world-deployment-7f67f479f5-shqrt 1/1 Running 0 15m 10.4.2.5 gke-tier-1-cluster-default-pool-e006097b-lfvh &lt;none&gt; &lt;none&gt; pod/hello-world-deployment-7f67f479f5-x7jvr 1/1 Running 0 15m 10.4.0.8 gke-tier-1-cluster-default-pool-e006097b-1wbf &lt;none&gt; &lt;none&gt; ubuntu 1/1 Running 0 91s 10.4.2.8 gke-tier-1-cluster-default-pool-e006097b-lfvh &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/hello-world NodePort 10.0.36.46 &lt;none&gt; 60000:31694/TCP 14m department=world,greeting=hello service/kubernetes ClusterIP 10.0.32.1 &lt;none&gt; 443/TCP 115m &lt;none&gt; NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME node/gke-tier-1-cluster-default-pool-e006097b-1wbf Ready &lt;none&gt; 115m v1.14.10-gke.36 10.0.4.2 35.184.38.21 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 
node/gke-tier-1-cluster-default-pool-e006097b-5tnj Ready &lt;none&gt; 115m v1.14.10-gke.36 10.0.4.3 35.184.207.20 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 node/gke-tier-1-cluster-default-pool-e006097b-lfvh Ready &lt;none&gt; 115m v1.14.10-gke.36 10.0.4.4 35.226.105.31 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 </code></pre> <p><strong>TIER2</strong></p> <pre><code>$ kubectl get pods,svc,nodes -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/hello-world-deployment-7f67f479f5-92zvk 1/1 Running 0 12m 172.20.1.5 gke-tier-2-cluster-default-pool-57b1cc66-xqt5 &lt;none&gt; &lt;none&gt; pod/hello-world-deployment-7f67f479f5-h4wdm 1/1 Running 0 12m 172.20.1.6 gke-tier-2-cluster-default-pool-57b1cc66-xqt5 &lt;none&gt; &lt;none&gt; pod/hello-world-deployment-7f67f479f5-m85jn 1/1 Running 0 12m 172.20.1.7 gke-tier-2-cluster-default-pool-57b1cc66-xqt5 &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/hello-world NodePort 172.16.24.206 &lt;none&gt; 60000:32502/TCP 12m department=world,greeting=hello service/kubernetes ClusterIP 172.16.16.1 &lt;none&gt; 443/TCP 113m &lt;none&gt; NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME node/gke-tier-2-cluster-default-pool-57b1cc66-84ng Ready &lt;none&gt; 112m v1.14.10-gke.36 172.16.4.2 35.184.118.151 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 node/gke-tier-2-cluster-default-pool-57b1cc66-mlmn Ready &lt;none&gt; 112m v1.14.10-gke.36 172.16.4.3 35.238.231.160 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 node/gke-tier-2-cluster-default-pool-57b1cc66-xqt5 Ready &lt;none&gt; 112m v1.14.10-gke.36 172.16.4.4 35.202.94.194 Container-Optimized OS from Google 4.14.138+ docker://18.9.7 </code></pre>
<p>I have a Kubernetes Deployment in whose Pods I need a command run periodically. The Kubernetes <code>CronJob</code> object creates a new Pod. I would prefer to specify a cronjob that runs inside the container of the Pod. Is there any way I can specify this in the deployment yaml?</p> <p>I have no access to the Dockerfile, but am using pre-built images.</p>
<p>As far as I can tell, what I was trying to do is not really meant to be done. I was not able to find a reasonable way to achieve this with purely pre-built images. All solutions ended up requiring some form of custom image as a sidecar or modifying the original Dockerfile.</p> <p>How I solved my problem:</p> <p>My CronJob was supposed to check a file mounted into the container file system as a ConfigMap, so I opted for a sidecar that has the ConfigMap mounted as well. Instead of a conventional CronJob I used a shell script that periodically does what I need it to do, and made this shell script the entry point of my alpine sidecar.</p>
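<p>For reference, a minimal sketch of such a sidecar, assuming a hypothetical ConfigMap named <code>my-config</code> and a 5-minute interval (all names, the image tag and the actual command are illustrative placeholders, not taken from the original setup):</p>
<pre><code># pod template of the Deployment -- the pre-built main container stays untouched,
# only the sidecar is added
spec:
  volumes:
    - name: config
      configMap:
        name: my-config            # hypothetical ConfigMap name
  containers:
    - name: main
      image: my-prebuilt-image     # the unmodified pre-built image
    - name: cron-sidecar
      image: alpine:3.16
      command: [&quot;sh&quot;, &quot;-c&quot;]
      args:
        - |
          # poor man's cron: run the periodic task forever
          while true; do
            cat /config/my-file    # replace with the actual periodic check
            sleep 300              # every 5 minutes
          done
      volumeMounts:
        - name: config
          mountPath: /config
</code></pre>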
<p>I have a pod that has both node affinity and pod affinity. could some help me understand how would things behave in such a scenario?</p> <p>Node 1:</p> <pre><code> label: schedule-on : gpu </code></pre> <p>Node 2:</p> <pre><code> label: schedule-on : gpu </code></pre> <p>Node 3:</p> <pre><code> label: schedule-on : non-gpu </code></pre> <p>Manifest</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: test spec: progressDeadlineSeconds: 600 replicas: 2 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: test strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app.kubernetes.io/name: test spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: schedule-on operator: In values: - gpu podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - test topologyKey: schedule-on </code></pre> <p>the output of the above is: Pods are getting scheduled on a different node [node1,node2]</p> <p>ideal output: Pod needs to be scheduled on the same node [node1]</p> <p>Here is my finding.</p> <p>Finding 1: I believe node affinity is taking precedence and pod affinity is getting ignored</p>
<p>Both the node affinity and the pod affinity have to be satisfied. Because node1 and node2 carry the same value for the topology key <code>schedule-on</code> (<code>gpu</code>), they belong to the same topology domain, i.e. the same co-location. Pod affinity only requires the pods to land in the same domain, not on the same node, so the pods can be scheduled on different nodes while still being in the same co-location.</p> <p>When matching the topology key and placing the pod, the value of the key is what defines the domain.</p>
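<p>If the goal is to get all replicas onto the same node, a common approach (shown here only as a sketch, not taken from the original manifest) is to keep the node affinity as it is and use a per-node topology key such as <code>kubernetes.io/hostname</code> for the pod affinity, so that each node forms its own domain:</p>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: schedule-on
              operator: In
              values:
                - gpu                        # still restrict to GPU nodes
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
                - test
        topologyKey: kubernetes.io/hostname  # each node is its own domain, so replicas co-locate
</code></pre>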
<p>I created Kubernetes cluster on AWS with the help of kOps and I want to install LoadBalancer on it so I run the command</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml </code></pre> <p>And got the following error:</p> <pre><code>namespace/kube-ingress created serviceaccount/nginx-ingress-controller created service/nginx-default-backend created deployment.apps/nginx-default-backend created configmap/ingress-nginx created service/ingress-nginx created deployment.apps/ingress-nginx created unable to recognize &quot;https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml&quot;: no matches for kind &quot;ClusterRole&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; unable to recognize &quot;https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml&quot;: no matches for kind &quot;Role&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; unable to recognize &quot;https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml&quot;: no matches for kind &quot;ClusterRoleBinding&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; unable to recognize &quot;https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml&quot;: no matches for kind &quot;RoleBinding&quot; in version &quot;rbac.authorization.k8s.io/v1beta1&quot; </code></pre> <p>I tried to edit my cluster so that v1beta1 API will be supported by adding the following</p> <pre><code>apiVersion: kops.k8s.io/v1alpha2 kind: Cluster //ommited spec: //ommited kubeAPIServer: runtimeConfig: rbac.authorization.k8s.io/v1beta1: &quot;true&quot; //ommited </code></pre> <p>But I keep getting the same error</p> <p>Additional info, output of the following command</p> <pre><code>kubectl version </code></pre> <p>looks like this</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.7&quot;, GitCommit:&quot;1dd5338295409edcfff11505e7bb246f0d325d15&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:23:52Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;windows/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.2&quot;, GitCommit:&quot;8b5a19147530eaac9476b0ab82980b4088bbc1b2&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-09-15T21:32:41Z&quot;, GoVersion:&quot;go1.16.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
<p>Try applying this with <code>kubectl apply -f</code>. I used <code>rbac.authorization.k8s.io/v1</code> in the apiVersion here, so you need to do the same for all the affected resources (Role, ClusterRoleBinding, RoleBinding) as needed to fit your k8s version.</p> <pre><code># clusterRole for 1.22 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-addon: ingress-nginx.addons.k8s.io name: nginx-ingress-controller namespace: kube-ingress rules: - apiGroups: - &quot;&quot; resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - &quot;&quot; resources: - nodes verbs: - get - apiGroups: - &quot;&quot; resources: - services verbs: - get - list - watch - apiGroups: - &quot;extensions&quot; resources: - ingresses verbs: - get - list - watch - apiGroups: - &quot;&quot; resources: - events verbs: - create - patch - apiGroups: - &quot;extensions&quot; resources: - ingresses/status verbs: - update </code></pre>
<p>I am running Apache Kafka on Kubernetes via Strimzi operator. I am trying to install Kafka Connect with mysql debezium connector.</p> <p>This is the Connector configuration file:</p> <pre><code>apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: &quot;true&quot; spec: version: 3.1.0 replicas: 1 bootstrapServers: &lt;bootstrap-server&gt; config: group.id: connect-cluster offset.storage.topic: connect-cluster-offsets config.storage.topic: connect-cluster-configs status.storage.topic: connect-cluster-status config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 build: output: type: docker image: &lt;my-repo-in-ecr&gt;/my-connect-cluster:latest pushSecret: ecr-secret plugins: - name: debezium-mysql-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.9.0.Final/debezium-connector-mysql-1.9.0.Final-plugin.tar.gz </code></pre> <p>I created the ecr-secret in this way:</p> <pre><code>kubectl create secret docker-registry ecr-secret \ --docker-server=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com \ --docker-username=AWS \ --docker-password=$(aws ecr get-login-password) \ --namespace=default </code></pre> <p>The error I get is the following:</p> <blockquote> <p>error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for &quot;/my-connect-cluster:latest&quot;: POST https:/ │ │ Stream closed EOF for default/my-connect-cluster-connect-build (my-connect-cluster-connect-build)</p> </blockquote> <p>I am not sure what permission I should check. I already tried to use a configuration of the aws cli with a role with admin priviledge just to debug but I got the same error. Any guess?</p>
<p>I thought some role was missing from the node in the EKS cluster, but that is not the case, since the only thing needed to authenticate is the information contained in the secret. <br> <br></p> <p>The error was actually in the secret creation; two details are very relevant:</p> <ol> <li>the <code>--region</code> flag in the <code>aws ecr get-login-password</code> command was missing, and therefore a different password was generated.</li> <li>the <code>https://</code> is needed in front of the docker-server.</li> </ol> <p>Below is the right command for the secret generation.</p> <pre><code>kubectl create secret docker-registry ecr-secret \ --docker-server=https://${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com \ --docker-username=AWS \ --docker-password=$(aws ecr get-login-password --region eu-central-1) \ --namespace=default </code></pre>
<p>I have generated a config file with Oracle cloud for Kubernetes, The generated file keeps throwing the error "Not enough data to create auth info structure. ", wat are methods for fixing this</p> <p>I have created a new oracle cloud account and set up a cluster for Kubernetes (small with only 2 nodes using quick setup) when I upload the generated config file, to Kubernetes dashboard, it throws the error "Not enough data to create auth info structure".</p> <p><img src="https://i.stack.imgur.com/ELObF.png" alt="problem screen"></p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqRENDQW5TZ0F3SUJBZ0lVZFowUzdXTTFoQUtDakRtZGhhbWM1VkxlRWhrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNWVk14RGpBTUJnTlZCQWdUQlZSbGVHRnpNUTh3RFFZRFZRUUhFd1pCZFhOMAphVzR4RHpBTkJnTlZCQW9UQms5eVlXTnNaVEVNTUFvR0ExVUVDeE1EVDBSWU1ROHdEUVlEVlFRREV3WkxPRk1nClEwRXdIaGNOTVRrd09USTJNRGt6T0RBd1doY05NalF3T1RJME1Ea3pPREF3V2pCZU1Rc3dDUVlEVlFRR0V3SlYKVXpFT01Bd0dBMVVFQ0JNRlZHVjRZWE14RHpBTkJnTlZCQWNUQmtGMWMzUnBiakVQTUEwR0ExVUVDaE1HVDNKaApZMnhsTVF3d0NnWURWUVFMRXdOUFJGZ3hEekFOQmdOVkJBTVRCa3M0VXlCRFFUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLSDFLeW5lc1JlY2V5NVlJNk1IWmxOK05oQ1o0SlFCL2FLNkJXMzQKaE5lWjdzTDFTZjFXR2k5ZnRVNEVZOFpmNzJmZkttWVlWcTcwRzFMN2l2Q0VzdnlUc0EwbE5qZW90dnM2NmhqWgpMNC96K0psQWtXWG1XOHdaYTZhMU5YbGQ4TnZ1YUtVRmdZQjNxeWZYODd3VEliRjJzL0tyK044NHpWN0loMTZECnVxUXp1OGREVE03azdwZXdGN3NaZFBSOTlEaGozcGpXcGRCd3I1MjN2ZWV0M0lMLzl3TXN6VWtkRzU3MnU3aXEKWG5zcjdXNjd2S25QM0U0Wlc1S29YMkRpdXpoOXNPZFkrQTR2N1VTeitZemllc1FWNlFuYzQ4Tk15TGw4WTdwcQppbEd2SzJVMkUzK0RpWXpHbFZuUm1GU1A3RmEzYmFBVzRtUkJjR0c0SXk5QVZ5TUNBd0VBQWFOQ01FQXdEZ1lEClZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGUFprNlI0ZndpOTUKOFR5SSt0VWRwaExaT2NYek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0g2RVFHbVNzakxsQURKZURFeUVFYwpNWm9abFU5cWs4SlZ3cE5NeGhLQXh2cWZjZ3BVcGF6ZHZuVmxkbkgrQmZKeDhHcCszK2hQVjJJZnF2bzR5Y2lSCmRnWXJJcjVuLzliNml0dWRNNzhNL01PRjNHOFdZNGx5TWZNZjF5L3ZFS1RwVUEyK2RWTXBkdEhHc21rd3ZtTGkKRmd3WUJHdXFvS0NZS0NSTXMwS2xnMXZzMTMzZE1iMVlWZEFGSWkvTWttRXk1bjBzcng3Z2FJL2JzaENpb0xpVgp0WER3NkxGRUlKOWNBNkorVEE3OGlyWnJyQjY3K3hpeTcxcnFNRTZnaE51Rkt6OXBZOGJKcDlNcDVPTDByUFM0CjBpUjFseEJKZ2VrL2hTWlZKNC9rNEtUQ2tUVkFuV1RnbFJpTVNiRHFRbjhPUVVmd1kvckY3eUJBTkkxM2QyMXAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://czgkn3bmu4t.uk-london-1.clusters.oci.oraclecloud.com:6443 name: cluster-czgkn3bmu4t contexts: - context: cluster: cluster-czgkn3bmu4t user: user-czgkn3bmu4t name: context-czgkn3bmu4t current-context: context-czgkn3bmu4t kind: '' users: - name: user-czgkn3bmu4t user: exec: apiVersion: client.authentication.k8s.io/v1beta1 args: - ce - cluster - generate-token - --cluster-id - ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t command: oci env: [] </code></pre> <p>if you could help me resolve this I would be extremely grateful</p>
<p>You should be able to solve this by downloading a v1 <code>kubeconfig</code>, i.e. by specifying the <code>--token-version=1.0.0</code> flag on the create <code>kubeconfig</code> command.</p> <pre><code>oci ce cluster create-kubeconfig &lt;options&gt; --token-version=1.0.0 </code></pre> <p>Then use that <code>kubeconfig</code> in the dashboard.</p>
<p>I have the below Pod definition.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mongodb spec: volumes: - name: mongodb-data awsElasticBlockStore: volumeID: vol-0c0d9800c22f8c563 fsType: ext4 containers: - image: mongo name: mongodb volumeMounts: - name: mongodb-data mountPath: /data/db ports: - containerPort: 27017 protocol: TCP </code></pre> <p>I have created a volume in AWS and tried to mount it to the container. The container is not starting.</p> <pre><code>kubectl get po NAME READY STATUS RESTARTS AGE mongodb 0/1 ContainerCreating 0 6m57s </code></pre> <p>When I created the volume and assigned it to the Availability Zone where the node is running, and the pod was scheduled on that node, the volume was mounted successfully. If the pod is not scheduled on that node, the mount fails. How can I make sure that the volume can be accessed by all the nodes?</p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>There are some restrictions when using an awsElasticBlockStore volume:</p> <ul> <li>the nodes on which Pods are running must be AWS EC2 instances</li> <li>those instances need to be in the same region and availability-zone as the EBS volume</li> <li>EBS only supports a single EC2 instance mounting a volume</li> </ul> </blockquote> <p>Make sure all of the above are met. If your nodes are in different zones than you might need to create additional EBS volumes, for example:</p> <pre><code>aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2 </code></pre> <p>Please let me know if that helped.</p>
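<p>One way to make sure the Pod lands on a node in the same zone as the volume is to pin it with a node selector on the zone label. This is only a sketch, assuming the EBS volume lives in <code>eu-west-1a</code>; use <code>failure-domain.beta.kubernetes.io/zone</code> on older clusters and <code>topology.kubernetes.io/zone</code> on newer ones:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  nodeSelector:
    topology.kubernetes.io/zone: eu-west-1a   # must match the AZ of the EBS volume
  volumes:
    - name: mongodb-data
      awsElasticBlockStore:
        volumeID: vol-0c0d9800c22f8c563
        fsType: ext4
  containers:
    - image: mongo
      name: mongodb
      volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
      ports:
        - containerPort: 27017
          protocol: TCP
</code></pre>
<p>If pods need to run in several zones, you would still need one EBS volume per zone, since a single EBS volume cannot span zones or be attached to multiple instances.</p>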
<p>I have a Kubernetes cluster set up in DigitalOcean. The cluster is configured to auto-scale using HPA(Horizontal Pod Autoscaler). I want to prevent termination of a pod that got scaled up in the last 1 hour to avoid thrashing and saving the bill. Following are the two reasons for the same:</p> <ol> <li>Due to unpredictable traffic, sometimes new pods scale up and down multiple times in an hour. Because of the nature of the application, 50-60 new users need a new pod to handle the traffic. </li> <li>DigitalOcean droplets are charged per hour. Even if the droplet was up for 15 minutes, They would charge it for an hour. So, sometimes we are paying for 5 droplets in an hour which could have been paid for just 1 droplet. </li> </ol> <p>From the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">documentation</a>, I could not find anything related to this. Any hack for the same would be helpful. </p>
<p>Yes, we can do this. I am currently doing an experiment that is closely related to your question.</p> <p>Try to measure the following things while autoscaling:</p> <ol> <li>Time taken for HPA to calculate the replicas needed</li> <li>Time taken for pods to spin up.</li> <li>Time taken for a Droplet to spin up.</li> <li>Time taken for pods to spin down.</li> <li>Time taken for a Droplet to spin down.</li> </ol> <p><strong>Case 1: Time taken for HPA to calculate the replicas needed (HPA)</strong></p> <p>HPA detects the changes as soon as it gets metrics, immediately or at least within 15 secs. This depends on <code>horizontal-pod-autoscaler-sync-period</code>, which is set to 15 secs by default. As soon as HPA gets the metric, it calculates the replicas needed.</p> <p><strong>Case 2: Time taken for pods to spin up. (HPA)</strong></p> <p>As soon as HPA calculates the desired replicas, pods start to spin up. But it depends on the <code>ScaleUp Policy</code>, which you can set as per your use case, and also on the Droplets available and the cluster autoscaler.</p> <p>For example: you can tell HPA, "Hey, please spin up 4 pods in 15 secs" OR "Spin up 100 % of currently available pods in 20 secs."</p> <p>HPA will then select whichever policy makes the bigger impact (the larger change in replica count). If <code>100% pods &gt; 4 pods</code>, the second policy takes over, otherwise the first policy takes over. The process repeats until the desired replica count is reached.</p> <p>If you need the scaled-up pod count immediately, set the policy to spin up 100 % of pods in 1 sec; it will then try to spin up 100 % of the current replica count every second until the desired replica count is matched.</p> <p><strong>Case 3: Time taken for a Droplet to spin up. (Cluster Autoscaler)</strong></p> <p>Time taken for:</p> <ul> <li>Cluster autoscaler to detect pending pods and start spinning up a Droplet: <code>1 min 05 secs</code> (approx)</li> <li>Droplet spun up, but not yet in Ready state: <code>1 min 20 secs</code></li> <li>Droplet to reach READY state: <code>10 - 20 secs</code></li> </ul> <p><code>Total time taken until the Droplet is available: 2 min 40 secs (approx)</code></p> <p><strong>Case 4: Time taken for pods to spin down. (HPA)</strong></p> <p>It depends on the ScaleDown Policy, just like Case 2.</p> <p><strong>Case 5: Time taken for a Droplet to spin down. (Cluster Autoscaler)</strong></p> <p>This starts after all the target pods are terminated from the Droplet (the time taken depends on Case 4).</p> <p>DigitalOcean sets a taint on the node like <code>DeletionCandidate...=&lt;timestamp&gt;:NopreferSchedule</code></p> <p>Ten minutes after the taint is set, the Droplet starts to spin down.</p> <p><strong>Conclusion:</strong></p> <p>If you need the node to stay alive for one hour (utilize it as much as possible because of the hourly charge) and not cross one hour (above 1 hr it is billed as 2 hrs),</p> <p>you can set stabilizationWindowSeconds = 1 hr - DigitalOcean time interval to delete.</p> <p>Theoretically, <code>stabilizationWindowSeconds = 1 hr - 10 mins = 50 mins (3000 secs)</code></p> <p>Practically, the time taken for all pods to terminate may vary depending on the scale-down policy, your application, etc...</p> <p>So I set approximately (according to my case) <code>stabilizationWindowSeconds = 1 hr - 20 mins = 40 mins (2400 secs)</code></p> <p>Thus, your scaled-up pods can now stay alive for 40 mins, and start terminating after 40 mins (in my case all pods terminated within a maximum of 5 mins). 
That leaves roughly 15 mins for DigitalOcean to destroy the Droplet.</p> <p>CAUTION: The times calculated depend on my use case, environment, etc.</p> <p>HPA behavior config for reference:</p> <pre><code>behavior: scaleDown: stabilizationWindowSeconds: 2400 selectPolicy: Max policies: - type: Percent value: 100 periodSeconds: 15 scaleUp: stabilizationWindowSeconds: 0 selectPolicy: Max policies: - type: Percent value: 100 periodSeconds: 1 </code></pre>
<p>I am installed harbor in my kubernetes v1.18 cluster, what makes me confusing is that when I login harbor using default username and password: <code>admin/Harbor123456</code>, it give me error: <code>405 Method Not Allowed</code>.</p> <p><a href="https://i.stack.imgur.com/zzxKF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zzxKF.png" alt="enter image description here" /></a></p> <p>405 is show that you should using <code>GET</code> but actuallly using <code>POST</code> error, I do not think the harbor login page would make this mistake.why would this happen and what should I do to fix it? By the way, This is the kubernetes traefik config:</p> <pre><code>spec: entryPoints: - web routes: - kind: Rule match: Host(`harbor-portal.dolphin.com`) services: - name: harbor-harbor-portal port: 80 - kind: Rule match: Host(`harbor-portal.dolphin.com`) &amp;&amp; PathPrefix(`/c`) services: - name: harbor-harbor-core port: 80 - kind: Rule match: Host(`harbor-portal.dolphin.com`) &amp;&amp; PathPrefix(`/v2`) services: - name: harbor-harbor-core port: 80 - kind: Rule match: Host(`harbor-portal.dolphin.com`) &amp;&amp; PathPrefix(`/api`) services: - name: harbor-harbor-core port: 80 - kind: Rule match: Host(`harbor-portal.dolphin.com`) &amp;&amp; PathPrefix(`/service`) services: - name: harbor-harbor-core port: 80 - kind: Rule match: Host(`harbor-portal.dolphin.com`) &amp;&amp; PathPrefix(`/chartrepo`) services: - name: harbor-harbor-core port: 80 </code></pre> <p>the kubernetes ingress is traefik 2.2.1. This is the log output of harbor portal in kubernetes pod:</p> <pre><code>2020-08-03T16:50:17.415502118Z 10.11.157.67 - - [03/Aug/2020:16:50:17 +0000] &quot;GET / HTTP/1.1&quot; 200 856 &quot;-&quot; &quot;Go-http-client/1.1&quot; 2020-08-03T16:50:18.242118851Z 192.168.31.30 - - [03/Aug/2020:16:50:18 +0000] &quot;GET / HTTP/1.1&quot; 200 856 &quot;-&quot; &quot;kube-probe/1.18&quot; 2020-08-03T16:50:18.307214547Z 192.168.31.30 - - [03/Aug/2020:16:50:18 +0000] &quot;POST /c/login HTTP/1.1&quot; 405 559 &quot;http://harbor-portal.dolphin.com/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects&quot; &quot;Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36&quot; 2020-08-03T16:50:19.233495082Z 192.168.31.30 - - [03/Aug/2020:16:50:19 +0000] &quot;POST /c/login HTTP/1.1&quot; 405 559 &quot;http://harbor-portal.dolphin.com/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects&quot; &quot;Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36&quot; </code></pre> <p>send the request from harbor portal kubernetes pod:</p> <pre><code>nginx [ / ]$ curl -X POST 'http://localhost:8080/c/login' \ &gt; -H 'Connection: keep-alive' \ &gt; -H 'Accept: application/json, text/plain, */*' \ &gt; -H 'DNT: 1' \ &gt; -H 'User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36' \ &gt; -H 'Content-Type: application/x-www-form-urlencoded' \ &gt; -H 'Origin: http://harbor-portal.dolphin.com' \ &gt; -H 'Referer: http://harbor-portal.dolphin.com/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects' \ &gt; -H 'Accept-Language: en,zh-CN;q=0.9,zh;q=0.8,zh-TW;q=0.7,fr;q=0.6' \ &gt; --data-raw 'principal=admin&amp;password=Harbor123456' \ &gt; --compressed \ &gt; --insecure &lt;html&gt; &lt;head&gt;&lt;title&gt;405 Not Allowed&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;405 Not Allowed&lt;/h1&gt;&lt;/center&gt; 
&lt;hr&gt;&lt;center&gt;nginx/1.16.1&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; &lt;!-- a padding to disable MSIE and Chrome friendly error page --&gt; </code></pre>
<p>I installed Harbor (v1.10.4) by helm with an ALB ingress. When logging in with the correct user &amp; password it returned a 405 error.<br /> Following this issue (<a href="https://github.com/goharbor/harbor-helm/issues/485#issuecomment-686222551" rel="nofollow noreferrer">https://github.com/goharbor/harbor-helm/issues/485#issuecomment-686222551</a>), I solved the problem. The key reason is that the ingress rules lead the login request to the portal. Just moving the portal-related rule to the bottom solves it.</p> <p>Original ingress:</p> <pre><code>spec: rules: - host: harbor.xxx.com http: paths: - backend: serviceName: harbor-harbor-portal servicePort: 80 path: /* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /api/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /service/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /v2/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /chartrepo/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /c/* pathType: ImplementationSpecific </code></pre> <p>After modification:</p> <pre><code>spec: rules: - host: harbor.xxx.com http: paths: - backend: serviceName: harbor-harbor-core servicePort: 80 path: /api/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /service/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /v2/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /chartrepo/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-core servicePort: 80 path: /c/* pathType: ImplementationSpecific - backend: serviceName: harbor-harbor-portal servicePort: 80 path: /* pathType: ImplementationSpecific </code></pre>
<p>I am running a minio deployment in a Kubernetes Cluster. I used to have the access- and secret key in clear text in the yaml files as follows:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: minio spec: ... containers: - name: minio volumeMounts: - name: data mountPath: &quot;/data&quot; image: minio/minio:RELEASE.2021-04-06T23-11-00Z args: - gateway - nas - /data env: - name: MINIO_ACCESS_KEY value: &quot;minio&quot; - name: MINIO_SECRET_KEY value: &quot;mysupersecretkey&quot; ... </code></pre> <p>This works fine. However when I move the credetials into a kubernetes secret, minio does no longer recognize these credetials, even though they are loaded into the same environment variables:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... containers: - name: minio ... env: - name: MINIO_ACCESS_KEY valueFrom: secretKeyRef: name: minio-secret key: minioAccessKey - name: MINIO_SECRET_KEY valueFrom: secretKeyRef: name: minio-secret key: minioSecretKey ... </code></pre> <p>I can confirm, that these credentials get mounted properly into the container as environment variables:</p> <pre class="lang-sh prettyprint-override"><code>$ echo $MINIO_ACCESS_KEY minio $ echo $MINIO_SECRET_KEY mysupersecretkey </code></pre> <p>But minio does not recognize these credentials:</p> <p><a href="https://i.stack.imgur.com/7axqf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7axqf.png" alt="minio error" /></a></p> <p>Is there any difference, to how these variables are used, when they originate from a kubernetes secret?</p> <p><strong>EDIT</strong></p> <p>I have also tried with the <code>MINIO_ROOT_USER</code> and <code>MINIO_ROOT_PASSWORD</code> variables, as the docs suggest. However, resulting in the same error using kubernetes secrets and no error with clear text.</p>
<p>I have solved the problem, which was caused by the way the credentials were written into the Kubernetes secrets. It turned out, that the tools I was using appended a <code>\n (0x0a)</code> newline character to the base64 encoded secret.</p> <p>This is why the credentials never matched what I entered into the login UI.</p>
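<p>For anyone hitting the same issue: the safest way to avoid a stray newline is to let <code>kubectl</code> do the encoding, or to use <code>echo -n</code> when encoding by hand. A sketch using the secret name and keys from the question:</p>
<pre><code># let kubectl encode the values itself (no newline sneaks in)
kubectl create secret generic minio-secret \
  --from-literal=minioAccessKey=minio \
  --from-literal=minioSecretKey=mysupersecretkey

# or, when writing the Secret manifest by hand, encode without a trailing newline
echo -n 'mysupersecretkey' | base64

# verify what is actually stored (a trailing \n would show up in the od output)
kubectl get secret minio-secret -o jsonpath='{.data.minioSecretKey}' | base64 --decode | od -c
</code></pre>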
<p>All I ever get are CORS errors while on localhost and in the cloud. It works if I manually type in localhost or I manually get the service external IP and input that into the k8s deployment file before I deploy it, but the ability to automate this is impossibly if I have to launch the services, get the external IP and then put that into the configs before I launch each time. </p> <p>API service</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: labels: app: api name: api-service spec: ports: - port: 8080 # expose the service on internal port 80 protocol: TCP targetPort: 8080 # our nodejs app listens on port 8080 selector: app: api # select this application to service type: LoadBalancer status: loadBalancer: {} </code></pre> <p>Client Service</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: client-service spec: ports: - port: 80 targetPort: 80 protocol: TCP selector: app: client type: LoadBalancer status: loadBalancer: {} </code></pre> <p>API deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: api name: api-deployment spec: selector: matchLabels: app: api template: metadata: labels: app: api spec: containers: - image: mjwrazor/docker-js-stack-api:latest name: api-container imagePullPolicy: IfNotPresent resources: {} stdin: true tty: true workingDir: /app ports: - containerPort: 8080 args: - npm - run - start envFrom: - configMapRef: name: server-side-configs restartPolicy: Always volumes: null serviceAccountName: "" status: {} </code></pre> <p>Client Deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: client name: client-deployment spec: selector: matchLabels: app: client template: metadata: labels: app: client spec: restartPolicy: Always serviceAccountName: "" containers: - image: mjwrazor/docker-js-stack-client:latest name: client-container imagePullPolicy: IfNotPresent resources: {} ports: - containerPort: 80 status: {} </code></pre> <p>I tried adding an ingress</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: app-ingress annotations: nginx.ingress.kubernetes.io/enable-cors: "true" nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, DELETE, OPTIONS" nginx.ingress.kubernetes.io/cors-allow-origin: http://client-service.default nginx.ingress.kubernetes.io/cors-allow-credentials: "true" spec: rules: - host: api-development.default http: paths: - backend: serviceName: api-service servicePort: 8080 </code></pre> <p>But didn't help either. here is the server.js </p> <pre class="lang-js prettyprint-override"><code>const express = require("express"); const bodyParser = require("body-parser"); const cors = require("cors"); const app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use(cors()); app.get("/", (req, res) =&gt; { res.json({ message: "Welcome" }); }); require("./app/routes/customer.routes.js")(app); // set port, listen for requests const PORT = process.env.PORT || 8080; app.listen(PORT, () =&gt; { console.log(`Server is running on port ${PORT}.`); }); </code></pre> <p>But like I said I am trying to get this to resolve via the hostnames of the services and not have to use the external IP, is this even possible or did I misunderstand something along the way.</p> <p>The client sends an axios request. 
I cannot use environment variables, since you can't inject environment variables from k8s after the project has been built through webpack and docker into an image. I did find a really hacky way of creating a file with window global variables and then having k8s overwrite that file with new window variables. But again I have to get the external IP of the API first and then do that.</p>
<p>As we discussed in the comments, you need to get a real domain name in order to make it work, as automatic DNS resolution in your case basically requires it.</p>
<p>I am a newbie to Google Cloud and GKE and I am trying to set up NFS Persistent Volumes with Kubernetes on GKE with the help of the following link: <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="nofollow noreferrer">https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266</a></p> <p>I followed the instructions and was able to achieve the desired results as mentioned in the blog, but I need to access the shared folder (/uploads) from the external world. Can someone help me achieve this, or give any pointers or suggestions?</p>
<p>I have followed the <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="nofollow noreferrer">doc</a> and implemented the steps on my test GKE cluster like you. I just have one observation about the current API version for the deployment: we need to use apiVersion: apps/v1 instead of apiVersion: extensions/v1beta1. Then I tested with a busybox pod to mount the volume and the test was successful.</p> <p><a href="https://i.stack.imgur.com/Wi6oZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wi6oZ.png" alt="BusyBox POD" /></a></p> <p>Then I exposed the service “nfs-server” as service type “Load Balancer” like below</p> <p><a href="https://i.stack.imgur.com/2YxcH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2YxcH.png" alt="Expose nfs-server" /></a></p> <p>and found the external load balancer endpoint like (LB_Public_IP):111 in the Services &amp; Ingress tab. I allowed ports 111, 2049 and 20048 in the firewall. After that I took a Red Hat based VM in the GCP project and installed nfs-utils with <code>sudo dnf install nfs-utils -y</code>. Then you may use the command below to see the NFS exports list, and after that you can mount it as expected.</p> <p><code>sudo showmount -e LB_Public_IP</code></p>
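<p>To complete the picture, mounting the share from that external VM would look roughly like this; the load balancer IP and the export path are placeholders, and the export path depends on how the NFS server container is configured (often <code>/</code> or <code>/exports</code>):</p>
<pre><code># list what the NFS server exposes
sudo showmount -e LB_Public_IP

# mount the export locally (replace / with the export path shown above)
sudo mkdir -p /mnt/uploads
sudo mount -t nfs LB_Public_IP:/ /mnt/uploads
</code></pre>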
<p>My kubernetes PKI expired (API server to be exact) and I can't find a way to renew it. The error I get is</p> <pre><code>May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922595 8751 server.go:417] Version: v1.14.2 May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922784 8751 plugins.go:103] No cloud provider specified. May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922800 8751 server.go:754] Client rotation is on, will bootstrap in background May 27 08:43:51 node1 kubelet[8751]: E0527 08:43:51.925859 8751 bootstrap.go:264] Part of the existing bootstrap client certificate is expired: 2019-05-24 13:24:42 +0000 UTC May 27 08:43:51 node1 kubelet[8751]: F0527 08:43:51.925894 8751 server.go:265] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory </code></pre> <p>The documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</a> describes how to renew but it only works if the API server is not expired. I have tried to do a </p> <pre><code>kubeadm alpha cert renew all </code></pre> <p>and do a reboot but that just made the entire cluster fail so I did a rollback to a snapshot (my cluster is running on VMware).</p> <p>The cluster is running and all containers seem to work but I can't access it via kubectl so I can't really deploy or query.</p> <p>My kubernetes version is 1.14.2.</p>
<p>So the solution was to (first a backup)</p> <pre><code>$ cd /etc/kubernetes/pki/ $ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/ $ kubeadm init phase certs all --apiserver-advertise-address &lt;IP&gt; $ cd /etc/kubernetes/ $ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/ $ kubeadm init phase kubeconfig all $ reboot </code></pre> <p>then</p> <pre><code>$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config </code></pre> <p>that did the job for me and thanks for your hints :)</p>
<p>We are trying to start <a href="https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html" rel="nofollow noreferrer">metricbeat</a> on typhoon kubernetes cluster. But after startup its not able to get some pod specific events like restart etc because of the following</p> <h3>Corresponding metricbeat.yaml snippet</h3> <pre class="lang-yaml prettyprint-override"><code># State metrics from kube-state-metrics service: - module: kubernetes enabled: true metricsets: - state_node - state_deployment - state_replicaset - state_statefulset - state_pod - state_container - state_cronjob - state_resourcequota - state_service - state_persistentvolume - state_persistentvolumeclaim - state_storageclass # Uncomment this to get k8s events: #- event period: 10s hosts: [&quot;kube-state-metrics:8080&quot;] </code></pre> <h3>Error which we are facing</h3> <pre class="lang-sh prettyprint-override"><code>2020-07-01T10:31:02.486Z ERROR [kubernetes.state_statefulset] state_statefulset/state_statefulset.go:97 error making http request: Get http://kube-state-metrics:8080/metrics: lookup kube-state-metrics on *.*.*.*:53: no such host 2020-07-01T10:31:02.611Z WARN [transport] transport/tcp.go:52 DNS lookup failure &quot;kube-state-metrics&quot;: lookup kube-state-metrics on *.*.*.*:53: no such host 2020-07-01T10:31:02.611Z INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.state_node: error doing HTTP request to fetch 'state_node' Metricset data: error making http request: Get http://kube-state-metrics:8080/metrics: lookup kube-state-metrics on *.*.*.*:53: no such host 2020-07-01T10:31:03.313Z ERROR process_summary/process_summary.go:102 Unknown or unexpected state &lt;P&gt; for process with pid 19 2020-07-01T10:31:03.313Z ERROR process_summary/process_summary.go:102 Unknown or unexpected state &lt;P&gt; for process with pid 20 </code></pre> <p>I can add some other info which is required for this.</p>
<p>Make sure you have kube-state-metrics deployed in your cluster in the kube-system namespace to make this work. Metricbeat does not come with this by default.</p> <p>Please refer to <a href="https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html" rel="nofollow noreferrer">this</a> for detailed deployment instructions.</p>
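<p>As a sketch of one common way to get it running (the Helm repo and chart names below are assumptions about your setup; any supported kube-state-metrics deployment works):</p>
<pre><code># install kube-state-metrics from the prometheus-community Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-state-metrics prometheus-community/kube-state-metrics -n kube-system

# verify the service exists and note its namespace
kubectl get svc -n kube-system kube-state-metrics
</code></pre>
<p>If Metricbeat runs in a different namespace than kube-state-metrics, the bare name <code>kube-state-metrics</code> will not resolve; use the namespaced service name in the module config instead, e.g. <code>hosts: [&quot;kube-state-metrics.kube-system:8080&quot;]</code>.</p>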
<p>My Jenkins X installation, mid-project, is now becoming very unstable. (Mainly) Jenkins pods are failing to start due to disk pressure.</p> <p>Commonly, many pods are failing with</p> <blockquote> <p>The node was low on resource: [DiskPressure].</p> </blockquote> <p>or</p> <blockquote> <p>0/4 nodes are available: 1 Insufficient cpu, 1 node(s) had disk pressure, 2 node(s) had no available volume zone. Unable to mount volumes for pod "jenkins-x-chartmuseum-blah": timeout expired waiting for volumes to attach or mount for pod "jx"/"jenkins-x-chartmuseum-blah". list of unmounted volumes=[storage-volume]. list of unattached volumes=[storage-volume default-token-blah] Multi-Attach error for volume "pvc-blah" Volume is already exclusively attached to one node and can't be attached to another</p> </blockquote> <p>This may have become more pronounced with more preview builds for projects with npm and the massive <code>node-modules</code> directories it generates. I'm also not sure if Jenkins is cleaning up after itself.</p> <p>Rebooting the nodes helps, but not for very long.</p>
<p>Let's approach this from the Kubernetes side. There are a few things you could do to fix this:</p> <ol> <li>As mentioned by @Vasily, check what is causing disk pressure on the nodes. You may also need to check logs from: <ul> <li><code>kubectl logs: kube-scheduler events logs</code></li> <li><code>journalctl -u kubelet: kubelet logs</code></li> <li><code>/var/log/kube-scheduler.log</code></li> </ul></li> </ol> <p>More about why those logs below.</p> <ol start="2"> <li><p>Check your eviction thresholds. Adjust the Kubelet and Kube-Scheduler configuration if needed (a sketch of such a kubelet configuration is shown at the end of this answer). See what is happening with both of them (the logs mentioned earlier might be useful now). More info can be found <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">here</a></p></li> <li><p>Check if you have a correctly running Horizontal Pod Autoscaler: <code>kubectl get hpa</code> You can use standard kubectl commands to set up and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-horizontal-pod-autoscaler-in-kubectl" rel="nofollow noreferrer">manage your HPA.</a></p></li> <li><p>Finally, the volume related errors that you receive indicate that we might have a problem with the PVC and/or PV. Make sure you have your volume in the same zone as the node. If you want to mount the volume to a specific container, make sure it is not exclusively attached to another one. More info can be found <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">here</a></p></li> </ol> <p>I did not test it myself because more info is needed in order to reproduce the whole scenario, but I hope that the above suggestions will be useful.</p> <p>Please let me know if that helped.</p>
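<p>As a sketch of the eviction threshold tuning mentioned in point 2 (the values below are purely illustrative, not recommendations for this particular cluster), the kubelet accepts explicit thresholds in its configuration file:</p>
<pre><code># KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml), illustrative values only
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: &quot;500Mi&quot;
  nodefs.available: &quot;10%&quot;
  imagefs.available: &quot;15%&quot;
evictionSoft:
  nodefs.available: &quot;15%&quot;
evictionSoftGracePeriod:
  nodefs.available: &quot;2m&quot;
</code></pre>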
<p>I have the following ingress controller and the host that contains <code>api</code> answers to this url <code>https://api.example.com/docs</code>.</p> <p>Now I would like to configure this nginx ingress access the <code>/docs</code> endpoint using <code>https://docs.example.com</code>. I have tried to use the annotation rewrite-target but I can't figure out how to accomplish this.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: api-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: "true" kubernetes.io/ingress.class: nginx certmanager.k8s.io/acme-challenge-type: dns01 certmanager.k8s.io/cluster-issuer: letsencrypt spec: rules: - host: example.com http: paths: - path: / backend: serviceName: web-service servicePort: 80 - host: api.example.com http: paths: - path: / backend: serviceName: api-service servicePort: 80 tls: - hosts: - example.com secretName: example.com </code></pre>
<p>What you need here is the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#app-root" rel="nofollow noreferrer">app-root annotation</a>. If you use something like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /docs </code></pre> <p>incoming requests at <code>example.com/</code> will be rewritten internally to <code>example.com/docs</code>.</p> <p>Please let me know if that helped.</p>
<p>Can we assign a <code>Persistent Volume Claim</code> to a Persistent Volume after it is in the <code>Released</code> state? I tried it but couldn't.</p>
<p>Yes. Take a look at the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p><strong>Reclaiming</strong> </p> <p>When a user is done with their volume, they can delete the PVC objects from the > API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been <code>released</code> of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.</p> <p><strong>Retain</strong> </p> <p>The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. An administrator can manually reclaim the volume with the following steps.</p> <ol> <li>Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted. </li> <li>Manually clean up the data on the associated storage asset accordingly. </li> <li>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.</li> </ol> </blockquote> <p>Please let me know if that helped.</p>
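<p>In practice, if you want to reuse a <code>Released</code> PV (with the Retain policy) for a new claim without recreating it, a commonly used shortcut is to clear its stale <code>claimRef</code> so it becomes <code>Available</code> again. Treat this as a sketch and only do it if the old data on the volume may really be reused or has been cleaned up:</p>
<pre><code># inspect the PV and its old claim reference
kubectl get pv pv-name -o yaml

# remove the stale claimRef; the PV should move from Released to Available
kubectl patch pv pv-name -p '{&quot;spec&quot;:{&quot;claimRef&quot;: null}}'

# a new PVC that matches the PV's capacity/accessModes/storageClass can now bind to it
</code></pre>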
<p>I am trying to mount a folder that is owned by a non-root user (xxxuser) in Kubernetes, and I use hostPath for mounting. But whenever the container is started, it is mounted with user (1001), not xxxuser. It is always started with user (1001). How can I mount this folder with xxxuser?</p> <p>There are many types of volumes, but I use hostPath. Before starting, I change the folder's user and group with the chown and chgrp commands, and then mount this folder as a volume. The container starts and I check the user of the folder, but it is always user (1001). Like this:</p> <p>drwxr-x---. 2 1001 1001 70 May 3 14:15 configutil/</p> <pre><code>volumeMounts: - name: configs mountPath: /opt/KOBIL/SSMS/home/configutil volumes: - name: configs hostPath: path: /home/ssmsuser/configutil type: Directory </code></pre> <p>drwxr-x---. 2 xxxuser xxxuser 70 May 3 14:15 configutil/</p>
<p>I tried what you recommended but my problem still continues. I added the lines below to my yaml file:</p> <pre><code>spec: securityContext: runAsUser: 999 runAsGroup: 999 fsGroup: 999 </code></pre> <p>I use 999 because I use 999 inside my Dockerfile, like this:</p> <pre><code>RUN groupadd -g 999 ssmsuser &amp;&amp; \ useradd -r -u 999 -g ssmsuser ssmsuser USER ssmsuser </code></pre>
<p>I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target env (dev, qa, staging or prod), so I can only provide this file at deployment time. <code>helm upgrade</code> <code>--set-file</code> does not take a binary file. This seems to be the issue I found here: <a href="https://github.com/helm/helm/issues/3276" rel="nofollow noreferrer">https://github.com/helm/helm/issues/3276</a>. The truststore files are stored in the Jenkins credential store.</p>
<p>As the command itself is described below:</p> <pre><code>--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2) </code></pre> <p>it is also important to know <a href="https://v2.helm.sh/docs/using_helm/#the-format-and-limitations-of-set" rel="nofollow noreferrer">The Format and Limitations of --set</a>.</p> <p>The error you see: <code>Error: failed parsing --set-file data...</code> means that the file you are trying to use does not meet the requirements. See the example below:</p> <blockquote> <p><code>--set-file key=filepath</code> is another variant of <code>--set</code>. It reads the file and use its content as a value. An example use case of it is to inject a multi-line text into values without dealing with indentation in YAML. Say you want to create a brigade project with certain value containing 5 lines JavaScript code, you might write a values.yaml like:</p> <pre><code>defaultScript: | const { events, Job } = require("brigadier") function run(e, project) { console.log("hello default script") } events.on("run", run) </code></pre> <p>Being embedded in a YAML, this makes it harder for you to use IDE features and testing framework and so on that supports writing code. Instead, you can use <code>--set-file defaultScript=brigade.js</code> with <code>brigade.js</code> containing:</p> <pre><code>const { events, Job } = require("brigadier") function run(e, project) { console.log("hello default script") } events.on("run", run) </code></pre> </blockquote> <p>I hope it helps. </p>
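<p>A workaround that is often used for binary files (sketched here with hypothetical file and value names, not taken from your chart) is to base64-encode the file first and pass the encoded text with <code>--set-file</code>:</p>
<pre><code># encode the binary truststore into plain text (GNU base64; on macOS drop the -w0)
base64 -w0 truststore.jks &gt; truststore.b64

# pass the encoded text as a value
helm upgrade myrelease ./mychart --set-file truststore=truststore.b64
</code></pre>
<p>Inside a <code>Secret</code> template the value can then be used directly under <code>data:</code> (which expects base64 anyway), or decoded with the <code>b64dec</code> template function where the raw bytes are needed.</p>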
<p>I am currently working on the development of an API. I want to upload data via a frontend and this data is then processed on a server. I would like to develop the individual steps independently of one another and also maintain them independently. The first step of this dag is quite complex. There are a lot of different testing machines, each providing a slightly different set of raw data. Depending on the machine, a different task has to be started. In this step the file should be loaded and standardized. Afterwards, the data is passed on to several tasks for analysis (preferably also in parallel) and the results are finally stored in a database. The file size varies from a few MB to several GB.</p> <p>The API will be extended later, performing calculations using the data previously stored in the database. The calculations are intensive in CPU times and memory usage.</p> <p>Since I have not much knowledge of such complex structures, I have currently checked the following systems.</p> <p><strong>Argo</strong></p> <p>It requires its own Kubernetes cluster to deploy the system. Then you could deploy and run the individual tasks in containers in the cluster. The distribution of the resources takes over K8s. However, setting up the cluster is quite complex for me, since the system is to be set up on a cloud hoster at the end, which does not support Kubernetes natively. This would have to be set up manually. Another disadvantage is that the execution is quite slow, because one container has to be pushed and started at a time. Advantage extremely flexible and expandable. Possibly also future-proof, since each container can be maintained individually.</p> <p><strong>Celery and Prefect</strong></p> <p>You can easily code the individual processes in Python and define the dependencies. In Celery, you also have a task queue and therefore you can manage many individual tasks effectively. With Prefect, tasks that do not run in Python can be intercepted well via containers. However, there is no resource management and the systems do not appear to be so well extendable.</p> <p><strong>What other benefits does Argo offer over Prefect/Celery?</strong></p> <p>Thx</p>
<p>Full disclaimer that I work for Prefect but I think Prefect can indeed slot in nicely here where you have an API that hits Prefect's API to create_flow_runs to process that data that came in through the frontend. It also sounds like you need a decentralized solution where the compute happens on different machines. Prefect lets you do this by having multiple agents on those different machines, and then you can invoke flow runs that get picked up by the respective agents.</p> <p>Yes, you are also right non Python tasks can be abstracted with containers and by using tasks to spin up containers. There is some degree of resource management offered by the agents because you can have them on multiple machines. If you decide to go to Kubernetes also for better resource management, Prefect also supports that and the Kubernetes agent can control the resources per container.</p> <p>This might be a good <a href="https://discourse.prefect.io/t/can-i-run-a-flow-of-flows-that-triggers-each-child-flow-on-a-different-machine/180" rel="nofollow noreferrer">resource</a> for you</p>
<p>My yaml file</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: auto labels: app: auto spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: metadata: labels: app: auto spec: containers: - name: auto image: busybox imagePullPolicy: Always ports: - containerPort: 9080 imagePullSecrets: - name: imageregistery restartPolicy: Never </code></pre> <p>The pods are killed appropriately, but the job does not remove itself after 100 seconds.</p> <p>Is there anything we could do to delete the job once the container/pod has finished its work?</p> <pre><code>kubectl version --short Client Version: v1.6.1 Server Version: v1.13.10+IKS kubectl get jobs --namespace abc NAME DESIRED SUCCESSFUL AGE auto 1 1 26m </code></pre> <p>Thank you,</p>
<p>The default way to delete jobs after they are done is to use the <code>kubectl delete</code> command.</p> <p>As mentioned by @Erez:</p> <blockquote> <p>Kubernetes is keeping pods around so you can get the logs,configuration etc from it.</p> </blockquote> <p>If you don't want to do that manually, you could write a script running in your cluster that would check for jobs with completed status and then delete them.</p> <p>Another way would be to use the TTL feature that deletes the jobs automatically after a specified number of seconds. However, <strong>if you set it to zero it will clean them up immediately</strong>. For more details of how to set it up look <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">here</a>. </p> <p>Please let me know if that helped. </p>
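<p>For completeness, the TTL approach looks roughly like this in the Job spec; on older clusters (such as the 1.13 server shown in the question) it requires the <code>TTLAfterFinished</code> feature gate to be enabled, while on recent versions it works out of the box:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: auto
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  ttlSecondsAfterFinished: 60   # delete the Job (and its pods) 60s after it finishes
  template:
    spec:
      containers:
        - name: auto
          image: busybox
      restartPolicy: Never
</code></pre>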
<p>My PVC is in a Lost state, so I tried to delete the PVC, but I am getting a "PVC not found" error. How can I clear this issue? Please find the below screenshots.<br /> <a href="https://i.stack.imgur.com/RyDtZ.png" rel="nofollow noreferrer">pvc listed</a></p> <p><a href="https://i.stack.imgur.com/vd3VR.png" rel="nofollow noreferrer">error</a></p>
<p>Based on the screenshots that you have shared, I can see your PVC is in the <code>jenkins</code> namespace, but you are running the <code>kubectl delete pvc jenkins-pv-claim</code> command in the <code>default</code> namespace.</p> <p>To resolve this error, you have to run the <code>kubectl</code> command in the <code>jenkins</code> namespace:</p> <pre><code>kubectl delete pvc jenkins-pv-claim -n jenkins
</code></pre>
<p>I want to have my Cloud Composer environment (Google Cloud's managed Apache Airflow service) start pods on a <strong>different</strong> kubernetes cluster. How should I do this?</p> <p><em>Note that Cloud composer runs airflow on a kubernetes cluster. That cluster is considered to be the composer "environment". Using the default values for the <code>KubernetesPodOperator</code>, composer will schedule pods on its own cluster. However in this case, I have a different kubernetes cluster on which I want to run the pods.</em></p> <p>I can connect to the worker pods and run a <code>gcloud container clusters get-credentials CLUSTERNAME</code> there, but every now and then the pods get recycled so this is not a durable solution.</p> <p>I noticed that the <a href="https://airflow.apache.org/_api/airflow/contrib/operators/kubernetes_pod_operator/index.html#airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator" rel="nofollow noreferrer"><code>KubernetesPodOperator</code></a> has both an <code>in_cluster</code> and a <code>cluster_context</code> argument, which seem useful. I would expect that this would work:</p> <pre class="lang-py prettyprint-override"><code>pod = kubernetes_pod_operator.KubernetesPodOperator( task_id='my-task', name='name', in_cluster=False, cluster_context='my_cluster_context', image='gcr.io/my/image:version' ) </code></pre> <p>But this results in <code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected object with name CONTEXTNAME in kube-config/contexts list</code></p> <p>Although if I run <code>kubectl config get-contexts</code> in the worker pods, I can see the cluster config listed.</p> <p>So what I fail to figure out is:</p> <ul> <li>how to make sure that the context for my other kubernetes cluster is available on the worker pods (or should that be on the nodes?) of my composer environment?</li> <li>if the context is set (as I did manually for testing purposes), how can I tell airflow to use that context?</li> </ul>
<p>Check out the <a href="https://airflow.apache.org/_api/airflow/contrib/operators/gcp_container_operator/index.html" rel="noreferrer">GKEPodOperator</a> for this.</p> <p>Example usage from the docs : </p> <pre><code>operator = GKEPodOperator(task_id='pod_op', project_id='my-project', location='us-central1-a', cluster_name='my-cluster-name', name='task-name', namespace='default', image='perl') </code></pre>
<p>New to Kubernetes, I'm trying to move a current pipeline we have that uses a queuing system without k8s.</p> <p>I have a Perl script that generates a list of batch jobs (yml files) for each of the samples that I have to process. Then I run <code>kubectl apply --recursive -f 16S_jobscripts/</code></p> <p>Each sample needs to be treated sequentially and go through different processing steps.</p> <p>Example:</p> <p>SampleA -> clean -> quality -> some_calculation </p> <p>SampleB -> clean -> quality -> some_calculation </p> <p>and so on for 300 samples.</p> <p>So the idea is to prepare all the yml files and run them sequentially. This is working.</p> <p>BUT, with this approach I need to wait until all samples are processed (let's say all the clean jobs need to be completed before I run the next quality jobs).</p> <p>What would be the best approach in such a case: run each sample independently? How?</p> <p>The YAML below describes one sample for one job. You can see that I'm using a counter (mergereads-1 for sample1 (A)).</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: merge-reads-1
  namespace: namespace-id-16s
  labels:
    jobgroup: mergereads
spec:
  template:
    metadata:
      name: mergereads-1
      labels:
        jobgroup: mergereads
    spec:
      containers:
      - name: mergereads-$idx
        image: .../bbmap:latest
        command: ['sh', '-c']
        args: ['
          cd workdir &amp;&amp;
          bbmerge.sh -Xmx1200m in1=files/trimmed/1.R1.trimmed.fq.gz in2=files/trimmed/1.R2.trimmed.fq.gz out=files/mergedpairs/1.merged.fq.gz merge=t mininsert=300 qtrim2=t minq=27 ratiomode=t &amp;&amp;
          ls files/mergedpairs/
          ']
        resources:
          limits:
            cpu: 1
            memory: 2000Mi
          requests:
            cpu: 0.8
            memory: 1500Mi
        volumeMounts:
        - mountPath: '/workdir'
          name: db
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: workdir
      restartPolicy: Never
</code></pre>
<p>If I understand you correctly, you can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#parallel-jobs" rel="nofollow noreferrer">parallel-jobs</a> with the use of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-patterns" rel="nofollow noreferrer">Job Patterns</a>.</p> <blockquote> <p>It does support parallel processing of a set of independent but related work items.</p> </blockquote> <p>Also you can consider using Argo. <a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">https://github.com/argoproj/argo</a></p> <blockquote> <p>Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).</p> </blockquote> <p>Please let me know if that helps. </p>
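<p>As an illustration of the Argo route, here is a rough sketch of one Workflow per sample: the steps inside a sample run sequentially, while separate Workflows (i.e. separate samples) run independently of each other. The image, commands and template names are placeholders, not your actual pipeline:</p> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sample-a-        # one Workflow per sample, submitted independently
spec:
  entrypoint: per-sample
  templates:
  - name: per-sample
    steps:                       # each "- -" group runs after the previous one finishes
    - - name: clean
        template: run-step
        arguments:
          parameters:
          - name: cmd
            value: "clean.sh sampleA"
    - - name: quality
        template: run-step
        arguments:
          parameters:
          - name: cmd
            value: "quality.sh sampleA"
    - - name: calculation
        template: run-step
        arguments:
          parameters:
          - name: cmd
            value: "calc.sh sampleA"
  - name: run-step
    inputs:
      parameters:
      - name: cmd
    container:
      image: .../bbmap:latest    # placeholder image
      command: [sh, -c]
      args: ["{{inputs.parameters.cmd}}"]
</code></pre>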
<p>I am using Rancher. I have deployed a cluster with 1 master &amp; 3 worker nodes. All machines are VPSes with 2 vCPU, 8GB RAM and 80GB SSD.</p> <p>After the cluster was set up, the CPU reserved figure on the Rancher dashboard was 15%. After metrics were enabled, I could see CPU used figures too, and now CPU reserved had become 44% and CPU used was 16%. I find those figures too high. Is it normal for a Kubernetes cluster to consume this much CPU by itself?</p> <p><a href="https://i.stack.imgur.com/YcFb9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YcFb9.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/kspQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kspQt.png" alt="enter image description here"></a></p> <p>Drilling down into the metrics, I find that the networking solution that Rancher uses - Canal - consumes almost 10% of CPU resources. Is this normal?</p> <p><a href="https://i.stack.imgur.com/PRFH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRFH6.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/nChGs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nChGs.png" alt="enter image description here"></a></p> <p>Rancher v2.3.0, User Interface v2.3.15, Helm v2.10.0-rancher12, Machine v0.15.0-rancher12-1</p> <p><a href="https://i.stack.imgur.com/fne9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fne9L.png" alt="enter image description here"></a></p>
<p>This "issue" has been known for some time now and it affects smaller clusters. Kubernetes is very CPU hungry relative to the size of small clusters, and this is currently by design. I have found multiple threads reporting this for different kinds of setups. <a href="https://github.com/docker/for-mac/issues/2601" rel="nofollow noreferrer">Here</a> is an example.</p> <p>So the short answer is: yes, the Kubernetes setup consumes these amounts of CPU when used with relatively small clusters.</p> <p>I hope it helps.</p>
<p>I'm new to the Terraform and Helm world! I need to set up Istio on an AWS EKS cluster. I was able to set up the EKS cluster using Terraform. I'm thinking of installing Istio on top of the EKS cluster using Terraform by writing Terraform modules. However, I found that we can also set up Istio on top of EKS using a Helm chart.</p> <p>Can someone help me answer a few queries:</p> <ol> <li>Should I install Istio using Terraform? If yes, is there any Terraform module available, or how can I write one?</li> <li>Should I install Istio using a Helm chart? If yes, what are the pros and cons of it?</li> <li>I need to write a pipeline to install Istio on the EKS cluster. Should I use a combination of both Terraform and the Helm provider?</li> </ol> <p>Thank you very much for your time. Appreciate all your help!</p>
<p>To extend @Chris's 3rd option of Terraform + the Helm provider:</p> <p>As of version 1.12.0+, Istio officially has a working Helm repo:</p> <p><a href="https://istio.io/latest/docs/setup/install/helm/" rel="noreferrer">istio helm install</a></p> <p>and that, together with the <a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs" rel="noreferrer">Terraform helm provider</a>, allows an easy setup that is configured only by Terraform:</p> <pre class="lang-hcl prettyprint-override"><code>provider &quot;helm&quot; {
  kubernetes {
    // enter the relevant authentication
  }
}

locals {
  istio_charts_url = &quot;https://istio-release.storage.googleapis.com/charts&quot;
}

resource &quot;helm_release&quot; &quot;istio-base&quot; {
  repository       = local.istio_charts_url
  chart            = &quot;base&quot;
  name             = &quot;istio-base&quot;
  namespace        = var.istio-namespace
  version          = &quot;1.12.1&quot;
  create_namespace = true
}

resource &quot;helm_release&quot; &quot;istiod&quot; {
  repository       = local.istio_charts_url
  chart            = &quot;istiod&quot;
  name             = &quot;istiod&quot;
  namespace        = var.istio-namespace
  create_namespace = true
  version          = &quot;1.12.1&quot;
  depends_on       = [helm_release.istio-base]
}

resource &quot;kubernetes_namespace&quot; &quot;istio-ingress&quot; {
  metadata {
    labels = {
      istio-injection = &quot;enabled&quot;
    }
    name = &quot;istio-ingress&quot;
  }
}

resource &quot;helm_release&quot; &quot;istio-ingress&quot; {
  repository = local.istio_charts_url
  chart      = &quot;gateway&quot;
  name       = &quot;istio-ingress&quot;
  namespace  = kubernetes_namespace.istio-ingress.id
  version    = &quot;1.12.1&quot;
  depends_on = [helm_release.istiod]
}
</code></pre> <p><strong>This is the last step that was missing to make this production ready.</strong></p> <p>It is no longer necessary to keep the Helm charts locally with a null_resource.</p> <p>If you wish to override the default Helm values, it is nicely shown here: <a href="https://artifacthub.io/packages/search?kind=0&amp;org=istio&amp;ts_query_web=istio&amp;official=true&amp;sort=relevance&amp;page=1" rel="noreferrer">Artifact hub</a>; choose the relevant chart and see its values.</p>
<p>I'm trying to configure a Kubernetes job to run a set of bash and python scripts that contains some AWS CLI commands.</p> <p>Is there a good image out there for doing that? Do I need to create a custom docker image? I just want a container with these tools installed, what's the easiest way of getting there?</p>
<p>The easiest option would be to use any image from Docker Hub containing the AWS CLI, for example <a href="https://hub.docker.com/r/woahbase/alpine-awscli" rel="nofollow noreferrer"><code>woahbase/alpine-awscli</code></a>.</p> <p>You can use it in the following way: <code>kubectl run aws-cli --image=woahbase/alpine-awscli</code></p> <p>This would create a <code>pod</code> named aws-cli which would run that image. You would need to upload your scripts to the <code>pod</code> or mount storage; see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a>.</p> <p>Keep in mind this is not recommended, as this image does not belong to you and you have no way of knowing what changes might have been made to it without checking.</p> <p>I would create my own Docker Hub repo and build my own image, something like the following:</p> <pre><code>FROM alpine:3.6

RUN apk -v --update add \
        python \
        py-pip \
        groff \
        less \
        mailcap \
        &amp;&amp; \
    pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic &amp;&amp; \
    apk -v --purge del py-pip &amp;&amp; \
    rm /var/cache/apk/*

VOLUME /root/.aws
VOLUME /project
WORKDIR /project

ENTRYPOINT ["aws"]
</code></pre>
<p>From my understanding, we can use the Ingress class annotation to run multiple Nginx ingress controllers within a cluster. But I have a use case where I need to use multiple ingress controllers within the same namespace, to expose different services in that namespace using their respective ingress rules. I followed <a href="https://kubernetes.github.io/ingress-nginx/deploy/#azure" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#azure</a> to create a sample ingress controller. Which params should I modify if I want to have multiple Nginx ingress controllers within the same namespace?</p> <p>Thanks in advance</p>
<p>It's not clear from your post if you intend to deploy multiple nginx-ingress controllers or different ingress controllers. However, both can be deployed in the same namespace.</p> <p>In the case of deploying different ingress controllers, it should be easy enough to deploy them in the same namespace and use class annotations to specify which ingress rule is processed by which ingress controller. However, in case you want to deploy multiple nginx-ingress-controllers in the same namespace, you would have to update the name/labels or other identifiers to something different.</p> <p>E.g. the link you mentioned, <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml</a>, would need to be updated as:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx-internal
    app.kubernetes.io/instance: ingress-nginx-internal
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-internal
  namespace: ingress-nginx
automountServiceAccountToken: true
</code></pre> <p>assuming we call the 2nd nginx-ingress-controller <em>ingress-nginx-internal</em>. Likewise, all resources created in your link need to be modified accordingly and deployed in the same namespace.</p> <p>In addition, you would have to update the deployment args to specify the ingress.class your controllers would target:</p> <pre><code>spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-internal-controller
          args:
            - /nginx-ingress-controller
            - '--ingress-class=nginx-internal'
            - '--configmap=ingress/nginx-ingress-internal-controller'
</code></pre> <p>The link <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/</a> explains how to control multiple ingress controllers. An example Ingress rule targeting the second controller is sketched below.</p>
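<p>A minimal sketch of such an Ingress rule, assuming the second controller was started with <code>--ingress-class=nginx-internal</code> as above (the service name and host are placeholders):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"   # processed only by the internal controller
spec:
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-service
            port:
              number: 80
</code></pre>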
<p>I was able to remove the Taint from master but my two worker nodes installed bare metal with Kubeadmin keep the unreachable taint even after issuing command to remove them. It says removed but its not permanent. And when I check taints still there. I also tried patching and setting to null but this did not work. Only thing I found on SO or anywhere else deals with master or assumes these commands work. </p> <p>UPDATE: I checked the timestamp of the Taint and its added in again the moment it is deleted. So in what sense is the node unreachable? I can ping it. Is there any kubernetes diagnostics I can run to find out how it is unreachable? I checked I can ping both ways between master and worker nodes. So where would log would show error which component cannot connect? </p> <pre><code>kubectl describe no k8s-node1 | grep -i taint Taints: node.kubernetes.io/unreachable:NoSchedule </code></pre> <p>Tried: </p> <pre><code>kubectl patch node k8s-node1 -p '{"spec":{"Taints":[]}}' </code></pre> <p>And</p> <pre><code>kubectl taint nodes --all node.kubernetes.io/unreachable:NoSchedule- kubectl taint nodes --all node.kubernetes.io/unreachable:NoSchedule- node/k8s-node1 untainted node/k8s-node2 untainted error: taint "node.kubernetes.io/unreachable:NoSchedule" not found </code></pre> <p>result is it says untainted for the two workers nodes but then I see them again when I grep</p> <pre><code> kubectl describe no k8s-node1 | grep -i taint Taints: node.kubernetes.io/unreachable:NoSchedule $ k get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 10d v1.14.2 k8s-node1 NotReady &lt;none&gt; 10d v1.14.2 k8s-node2 NotReady &lt;none&gt; 10d v1.14.2 </code></pre> <p>UPDATE: Found someone had same problem and could only fix by resetting the cluster with Kubeadmin</p> <pre><code> https://forum.linuxfoundation.org/discussion/846483/lab2-1-kubectl-untainted-not-working </code></pre> <p>Sure hope I dont have to do that every time the worker nodes get tainted.</p> <pre><code>k describe node k8s-node2 Name: k8s-node2 Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=k8s-node2 kubernetes.io/os=linux Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":”d2:xx:61:c3:xx:16"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 10.xx.1.xx kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true </code></pre> <p>CreationTimestamp: Wed, 05 Jun 2019 11:46:12 +0700</p> <pre><code> Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message </code></pre> <p>---- ------ ----------------- ------------------ ------ -------</p> <pre><code> MemoryPressure Unknown Fri, 14 Jun 2019 10:34:07 +0700 Fri, 14 Jun 2019 10:35:09 +0700 NodeStatusUnknown Kubelet stopped posting node status. DiskPressure Unknown Fri, 14 Jun 2019 10:34:07 +0700 Fri, 14 Jun 2019 10:35:09 +0700 NodeStatusUnknown Kubelet stopped posting node status. PIDPressure Unknown Fri, 14 Jun 2019 10:34:07 +0700 Fri, 14 Jun 2019 10:35:09 +0700 NodeStatusUnknown Kubelet stopped posting node status. Ready Unknown Fri, 14 Jun 2019 10:34:07 +0700 Fri, 14 Jun 2019 10:35:09 +0700 NodeStatusUnknown Kubelet stopped posting node status. 
</code></pre> <p>Addresses:</p> <pre><code> InternalIP: 10.10.10.xx Hostname: k8s-node2 Capacity: cpu: 2 ephemeral-storage: 26704124Ki memory: 4096032Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 24610520638 memory: 3993632Ki pods: 110 System Info: Machine ID: 6e4e4e32972b3b2f27f021dadc61d21 System UUID: 6e4e4ds972b3b2f27f0cdascf61d21 Boot ID: abfa0780-3b0d-sda9-a664-df900627be14 Kernel Version: 4.4.0-87-generic OS Image: Ubuntu 16.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.3 Kubelet Version: v1.14.2 Kube-Proxy Version: v1.14.2 PodCIDR: 10.xxx.10.1/24 Non-terminated Pods: (18 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- heptio-sonobuoy sonobuoy-systemd-logs-daemon-set- 6a8d92061c324451-hnnp9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d1h istio-system istio-pilot-7955cdff46-w648c 110m (5%) 2100m (105%) 228Mi (5%) 1224Mi (31%) 6h55m istio-system istio-telemetry-5c9cb76c56-twzf5 150m (7%) 2100m (105%) 228Mi (5%) 1124Mi (28%) 6h55m istio-system zipkin-8594bbfc6b-9p2qc 0 (0%) 0 (0%) 1000Mi (25%) 1000Mi (25%) 6h55m knative-eventing webhook-576479cc56-wvpt6 0 (0%) 0 (0%) 1000Mi (25%) 1000Mi (25%) 6h45m knative-monitoring elasticsearch-logging-0 100m (5%) 1 (50%) 0 (0%) 0 (0%) 3d20h knative-monitoring grafana-5cdc94dbd-mc4jn 100m (5%) 200m (10%) 100Mi (2%) 200Mi (5%) 3d21h knative-monitoring kibana-logging-7cb6b64bff-dh8nx 100m (5%) 1 (50%) 0 (0%) 0 (0%) 3d20h knative-monitoring kube-state-metrics-56f68467c9-vr5cx 223m (11%) 243m (12%) 176Mi (4%) 216Mi (5%) 3d21h knative-monitoring node-exporter-7jw59 110m (5%) 220m (11%) 50Mi (1%) 90Mi (2%) 3d22h knative-monitoring prometheus-system-0 0 (0%) 0 (0%) 400Mi (10%) 1000Mi (25%) 3d20h knative-serving activator-6cfb97bccf-bfc4w 120m (6%) 2200m (110%) 188Mi (4%) 1624Mi (41%) 6h45m knative-serving autoscaler-85749b6c48-4wf6z 130m (6%) 2300m (114%) 168Mi (4%) 1424Mi (36%) 6h45m knative-serving controller-b49d69f4d-7j27s 100m (5%) 1 (50%) 100Mi (2%) 1000Mi (25%) 6h45m knative-serving networking-certmanager-5b5d8f5dd8-qjh5q 100m (5%) 1 (50%) 100Mi (2%) 1000Mi (25%) 6h45m knative-serving networking-istio-7977b9bbdd-vrpl5 100m (5%) 1 (50%) 100Mi (2%) 1000Mi (25%) 6h45m kube-system canal-qbn67 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10d kube-system kube-proxy-phbf5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1693m (84%) 14363m (718%) memory 3838Mi (98%) 11902Mi (305%) ephemeral-storage 0 (0%) 0 (0%) Events: &lt;none&gt; </code></pre>
<p>The problem was that swap was turned on on the worker nodes, and thus the kubelet crashed and exited. This was evident from the syslog file under /var, and the taint will keep getting re-added until this is resolved. Perhaps someone can comment on the implications of allowing the kubelet to run with swap on?:</p> <pre><code>kubelet[29207]: F0616 06:25:05.597536   29207 server.go:265] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename#011#011#011#011Type#011#011Size#011Used#011Priority /dev/xvda5                              partition#0114191228#0110#011-1]
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Unit entered failed state.
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 16 06:25:15 k8s-node2 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jun 16 06:25:15 k8s-node2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 16 06:25:15 k8s-node2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
</code></pre>
<p>I use <code>.kube/config</code> to access the Kubernetes API on a server. I am wondering, does the token in the config file ever expire? How can I prevent it from expiring?</p>
<p>This is OAuth provider specific. For example GKE uses <a href="https://cloud.google.com/endpoints/docs/openapi/authentication-method#google_id_token_authentication" rel="nofollow noreferrer">this</a>.</p> <p>So in short, the auth provider issues you a JWT token as proof that you are authenticated. It contains data like the expiration time, which according to the documentation cannot be more than 60 minutes in the case of Google accounts.</p> <p>I hope it helps.</p>
<p>I'm trying to add a configuration file in <code>/usr/share/logstash/config</code> inside a container run by Kubernetes. I get a shell in the container using <code>$ kubectl exec -it "pod_name" -- /bin/bash</code>.</p> <p>I then create a .conf file in <code>/usr/share/logstash/config/</code>, but when I try to save the configuration it gives me this error:</p> <pre><code>pipeline/input main.conf E166: Cant open linked file for writing.
</code></pre> <p>I'm not sure if what I'm doing is right in the first place, or whether there is a better way to achieve this?</p>
<p>Error <code>E166: Can't open linked file for writing</code></p> <blockquote> <p>You are trying to write to a file which can't be overwritten, and the file is a link (either a hard link or a symbolic link). Writing might still be possible if the directory that contains the link or the file is writable, but Vim now doesn't know if you want to delete the link and write the file in its place, or if you want to delete the file itself and write the new file in its place. If you really want to write the file under this name, you have to manually delete the link or the file, or change the permissions so that Vim can overwrite it.</p> </blockquote> <p>In any case, editing files inside a running container is not durable; the changes are lost when the pod is recreated. You should use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> to manage the config file instead.</p>
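<p>A minimal sketch of the ConfigMap approach for a Logstash pipeline file. The pipeline content, mount path and names here are placeholders, so adjust them to your actual setup (the official Logstash image reads pipeline configs from <code>/usr/share/logstash/pipeline</code>):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
data:
  main.conf: |
    input { stdin { } }
    output { stdout { } }
---
# relevant parts of the Logstash Deployment/Pod spec
spec:
  containers:
  - name: logstash
    image: docker.elastic.co/logstash/logstash:7.17.0   # placeholder version
    volumeMounts:
    - name: pipeline
      mountPath: /usr/share/logstash/pipeline           # or /usr/share/logstash/config for settings files
  volumes:
  - name: pipeline
    configMap:
      name: logstash-pipeline
</code></pre>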
<p>I am using Kubernetes Java client API <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> for fetching all namespaces present. I am Getting Error-</p> <pre><code>io.kubernetes.client.ApiException: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:443 at io.kubernetes.client.ApiClient.execute(ApiClient.java:801) at io.kubernetes.client.apis.CoreV1Api.listNamespaceWithHttpInfo(CoreV1Api.java:15939) at io.kubernetes.client.apis.CoreV1Api.listNamespace(CoreV1Api.java:15917) at com.cloud.kubernetes.KubernetesNamespacesAPI.fetchAllNamespaces(KubernetesNamespacesAPI.java:25) at com.cloud.spark.sharedvariable.ClouzerConfigurations.setKubernetesEnvironment(ClouzerConfigurations.java:45) </code></pre> <p>I tried creating cluster role binding and giving permission to the user.</p> <p>Here is my code snippet:</p> <pre><code>public static List&lt;String&gt; fetchAllNamespaces(){ try { return COREV1_API.listNamespace(null, &quot;true&quot;, null, null, null, 0, null, Integer.MAX_VALUE, Boolean.FALSE) .getItems().stream().map(v1Namespace -&gt; v1Namespace.getMetadata().getName()) .collect(Collectors.toList()); }catch(Exception e) { e.printStackTrace(); return new ArrayList&lt;&gt;(); } } </code></pre> <p>Please let me know if I am missing anything. Thanks in advance.</p>
<p>I'm facing the same exception too. After surveying the client lib's source code a few times, I think you need to make sure of two things.</p> <ul> <li>First of all, can you access your api-server?</li> <li>Secondly, you need to check your ApiClient bootstrap order.</li> </ul> <p><strong>Which way do you use to configure your connection?</strong></p> <p>The first thing here may not be related to your case or the lib. The API client lib supports three ways of configuration to communicate with the K8S apiserver, from both inside a pod and outside the cluster.</p> <ul> <li>read env KUBECONFIG </li> <li>read ${home}/.kube/config</li> <li>read the service account that resides under /var/run/secrets/kubernetes.io/serviceaccount/ca.crt</li> </ul> <p>If you are using the lib inside a Pod, normally it will try to use the third way.</p> <p><strong>How you bootstrap your client.</strong></p> <p>You must keep in mind to invoke </p> <pre class="lang-java prettyprint-override"><code>Configuration.setDefaultApiClient(apiClient);
</code></pre> <p>before you init a CoreV1Api or your CRD api. The reason is quite simple: look under any of the Api classes, for example the class io.kubernetes.client.api.CoreV1Api</p> <pre class="lang-java prettyprint-override"><code>public class CoreV1Api {
    private ApiClient apiClient;

    public CoreV1Api() {
        this(Configuration.getDefaultApiClient());
    }
    ...
}
</code></pre> <p>If you haven't set the Configuration's defaultApiClient, it will use an all-default config, whose basePath will be <strong>localhost:443</strong>, and then you will face the error.</p> <p>Under the example package, the client lib already provides lots of examples and use cases. The full configuration logic may look as below:</p> <pre class="lang-java prettyprint-override"><code>public class Example {
    public static void main(String[] args) throws IOException, ApiException {
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        // now you are safe to construct a CoreV1Api.
        CoreV1Api api = new CoreV1Api();
        V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
        for (V1Pod item : list.getItems()) {
            System.out.println(item.getMetadata().getName());
        }
    }
}
</code></pre> <p>Just keep in mind that order is important if you are using the default constructor to init a XXXApi.</p>
<p>I have a monolithic application that is being broken down into domains that are microservices. The microservices live inside a kubernetes cluster using the istio service mesh. I'd like to start replacing the service components of the monolith little by little. Given the UI code is also running inside the cluster, microservices are inside the cluster, but the older web api is outside the cluster, is it possible to use a VirtualService to handle paths I specify to a service within the cluster, but then to forward or proxy the rest of the calls outside the cluster?</p> <p><a href="https://i.stack.imgur.com/MF03C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MF03C.png" alt="enter image description here" /></a></p>
<p>You will have to define a ServiceEntry so Istio will be aware of your external service. That ServiceEntry can be used as a destination in a VirtualService. <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination</a></p>
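<p>A rough sketch of how that could look for this scenario; the hostnames, gateway and service names are placeholders, not taken from the question. Paths that have already been migrated are routed to the in-cluster service, and everything else falls through to the legacy API outside the cluster (plain HTTP towards the external host is used here for simplicity; TLS origination would need extra configuration):</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-api
spec:
  hosts:
  - legacy-api.example.com        # external host of the old web API (placeholder)
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-routing
spec:
  hosts:
  - api.example.com
  gateways:
  - my-gateway                    # placeholder ingress gateway
  http:
  - match:
    - uri:
        prefix: /migrated-domain/ # paths already served by a microservice
    route:
    - destination:
        host: migrated-service.default.svc.cluster.local
  - route:                        # default: everything else goes to the legacy API
    - destination:
        host: legacy-api.example.com
</code></pre>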
<p>I am running Kubernetes in GKE and I created a default ingress for one of my services, however I am unable to access my service, because ingress default healthcheck (the one that expects to receive a 200 return code when it queries the root path: <code>/</code>) is not working.</p> <p>The reason for this is that my service is returning 400 on the root path (<code>/</code>), because it expects to receive a request with a specific <code>Host</code> header, like: <code>Host: my-api.com</code>. How do I configure my ingress to add this header to the root healthcheck?</p> <p>Note: I managed to configure this in the GCP console, but I would like to know how can I configure this on my yaml, so that I won't have to remember to do this if I have to recreate my ingress.</p> <p>Ingress:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress namespace: backend annotations: kubernetes.io/ingress.global-static-ip-name: "backend" networking.gke.io/managed-certificates: "api-certificate" spec: rules: - host: my-api.com http: paths: - path: /* backend: serviceName: backend-service servicePort: http </code></pre> <p>Service:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: backend-service namespace: backend annotations: beta.cloud.google.com/backend-config: '{"ports": {"80":"backend-web-backend-config"}}' spec: selector: app: backend-web ports: - name: http targetPort: 8000 port: 80 type: NodePort </code></pre> <p>Backend Config:</p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: backend-web-backend-config namespace: backend spec: timeoutSec: 120 </code></pre> <p>Deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: backend-web namespace: backend labels: app: backend-web spec: selector: matchLabels: app: backend-web template: metadata: labels: app: backend-web spec: containers: - name: web image: backend:{{VERSION}} imagePullPolicy: IfNotPresent ports: - containerPort: 8000 protocol: TCP command: ["run"] resources: requests: memory: "800Mi" cpu: 150m limits: memory: "2Gi" cpu: 1 livenessProbe: httpGet: httpHeaders: - name: Accept value: application/json path: "/healthcheck" port: 8000 initialDelaySeconds: 15 timeoutSeconds: 5 periodSeconds: 30 readinessProbe: httpGet: httpHeaders: - name: Accept value: application/json path: "/healthcheck" port: 8000 initialDelaySeconds: 15 timeoutSeconds: 5 periodSeconds: 30 </code></pre>
<p>You are using a GCE Ingress, and there is no way yet to set such a configuration using a GCE Ingress. I have seen that Google will release a new feature, "user defined request headers", for GKE, which will allow you to specify additional headers that the load balancer adds to requests. This new feature will solve your problem, but we will have to wait until Google releases it, and as far as I can see it will be in version 1.7 [1].</p> <p>With this being said, there is one alternative left: use the NGINX Ingress Controller instead of the GCE Ingress. NGINX supports header changes [2], but this means that you will have to re-deploy your Ingress. </p> <p>[1] <a href="https://github.com/kubernetes/ingress-gce/issues/566#issuecomment-524312141" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/issues/566#issuecomment-524312141</a></p> <p>[2] <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/</a></p>
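<p>For reference, the custom-headers mechanism from [2] is driven by two ConfigMaps: one holding the headers and one pointing the controller at it. A minimal sketch of that mechanism, assuming the controller lives in the <code>ingress-nginx</code> namespace and its main ConfigMap is called <code>nginx-configuration</code> (both names vary between installs, and the header here is only an example):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Custom-Header: "my-api.com"    # example header added to every proxied request
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # the controller's main ConfigMap (name depends on your install)
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
</code></pre>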
<p>I have a sidecar that only allows egress traffic on the namespace it is being deployed. This limits also external calls. Is there a way to add an external host to the sidecar, something like:</p> <pre><code> apiVersion: &quot;networking.istio.io/v1beta1&quot;, kind: &quot;Sidecar&quot;, metadata:{ name: &quot;egress-sidecar&quot;, namespace: &quot;namespace&quot;, }, spec:{ workloadSelector:{ labels:{ app: 'target_app' } }, egress:[ { hosts:[ &quot;namespace/*&quot;, &quot;google.com/*&quot; # &lt;--- something like this, this does not work ] } ], outboundTrafficPolicy:{ mode: &quot;REGISTRY_ONLY&quot; } } </code></pre>
<p>I think you'll need at least a ServiceEntry (<a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/service-entry/</a>) for the external service (e.g. <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a>), and then you can refer to it in the egress section of your Sidecar definition. Depending on which namespace you register the mentioned ServiceEntry in, you can define the following in the hosts section under the egress section of your Sidecar definition:</p> <p>*/www.google.com (ServiceEntry anywhere in the Service Mesh)</p> <p>./www.google.com (ServiceEntry in the same namespace as your Sidecar definition)</p> <p>(<a href="https://istio.io/latest/docs/reference/config/networking/sidecar/#IstioEgressListener" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/sidecar/#IstioEgressListener</a>)</p>
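<p>A minimal sketch of such a ServiceEntry, assuming it is created in the same namespace as the Sidecar so that the <code>./www.google.com</code> form can be used in the egress hosts list:</p> <pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: google
  namespace: namespace            # same namespace as the Sidecar resource
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
---
# the egress section of the Sidecar would then list:
#   hosts:
#   - "namespace/*"
#   - "./www.google.com"
</code></pre>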
<p>I’m doing some <code>chaos testing</code> in <code>K8s</code>. My platform uses the <code>Istio envoy sidecar</code>, and in some of my chaos scenarios I would like to stop/kill an envoy proxy without killing the service container, and see what the standard behavior is.</p> <p>So far I haven't managed to figure out how to do it through <code>kubectl/istioctl</code>.</p> <p>Any idea how to accomplish this?</p> <p>Regards</p>
<p>Use the <code>/quitquitquit</code> endpoint:</p> <p><a href="https://www.envoyproxy.io/docs/envoy/latest/operations/admin#post--quitquitquit" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/operations/admin#post--quitquitquit</a></p> <pre><code>curl -sf -XPOST http://127.0.0.1:15020/quitquitquit
</code></pre>
<p>I am running Istio 1.14 minimal on K8s 1.22</p> <p>I am testing different outbound connections from inside the mesh and they all fail with errors like:</p> <pre><code>Execution of class com.microsoft.aad.msal4j.AcquireTokenByAuthorizationGrantSupplier failed. com.microsoft.aad.msal4j.MsalClientException: javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake </code></pre> <p>or</p> <pre><code>Cannot send curl request: SSL connect error - trying again </code></pre> <p>As a test, I tried a curl from inside the istio-enabled pod:</p> <pre><code>curl https://www.google.com #this failed curl http://www.google.com #this worked </code></pre> <p>Then, after some reading I created a ServiceEntry:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google spec: hosts: - www.google.com ports: - number: 443 name: https protocol: HTTPS resolution: DNS location: MESH_EXTERNAL </code></pre> <p>This solved the issue and now curl works for both http and https towards google.</p> <p>My question is how can I extrapolate this in order for all outbound traffic to be allowed? I cannot create Service Entries for all external resources I am trying to connect to.</p> <p>Sorry for the bad explanation, but I am very new to both K8s and Istio.</p>
<p>Set <strong>meshConfig.outboundTrafficPolicy.mode</strong> to <strong>ALLOW_ANY</strong> in your Istio installation configuration.</p> <p><a href="https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode</a></p>
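<p>For example, this can be set with <code>istioctl install --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY</code>, or kept in an IstioOperator overlay so it survives future upgrades. A sketch of such an overlay, assuming the minimal profile from the question (apply it with <code>istioctl install -f allow-any.yaml</code>):</p> <pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  meshConfig:
    outboundTrafficPolicy:
      mode: ALLOW_ANY        # REGISTRY_ONLY would require a ServiceEntry per external host
</code></pre>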
<p>I have made a service.yaml and have created the service.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: cass-operator-service spec: type: LoadBalancer ports: - port: 9042 targetPort: 9042 selector: name: cass-operator </code></pre> <p>Is there a way to check on which pods the service has been applied?</p> <p>I want that using the above service, I connect to a cluster in Google Cloud running Kubernetes/Cassandra on external_ip/port (9042 port). But using the above service, I am not able to.</p> <p><code>kubectl get svc</code> shows</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cass-operator-service LoadBalancer 10.51.247.82 34.91.214.233 9042:31902/TCP 73s </code></pre> <p>So probably the service is listening on 9042 but is forwarding to pods in 31902. I want both ports to be 9042. Is it possible to do?</p>
<p>The best way is to follow labels and selectors.</p> <p>Your pod has a label section, and the service uses it in the selector section. You can find some examples in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">official documentation for services</a>.</p> <p>You can find the selectors of your service with:</p> <pre><code>kubectl describe svc cass-operator-service </code></pre> <p>You can list your labels with:</p> <pre><code>kubectl get pods --show-labels </code></pre>
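<p>You can also list which pod IPs the Service has actually picked up with <code>kubectl get endpoints cass-operator-service</code>. A minimal sketch of matching labels and selectors is below; the label values are placeholders, so use whatever labels your Cassandra pods actually carry (likely not <code>name: cass-operator</code>, since that would select the operator rather than the database pods). Note also that the <code>31902</code> in your output is just the automatically allocated nodePort; clients connecting through the load balancer's external IP still use port 9042.</p> <pre><code># pod template labels (e.g. in the StatefulSet created for the Cassandra datacenter)
metadata:
  labels:
    app: my-cassandra
---
# Service selecting those pods; port and targetPort can both stay 9042
apiVersion: v1
kind: Service
metadata:
  name: cass-operator-service
spec:
  type: LoadBalancer
  selector:
    app: my-cassandra
  ports:
  - port: 9042
    targetPort: 9042
</code></pre>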
<p>I am trying to use the following <a href="https://cloud.ibm.com/docs/containers?topic=containers-file_storage#add_file" rel="nofollow noreferrer">https://cloud.ibm.com/docs/containers?topic=containers-file_storage#add_file</a>:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ibmc-file labels: billingType: 'monthly' region: us-south zone: dal10 spec: accessModes: - ReadWriteMany resources: requests: storage: 12Gi storageClassName: ibmc-file-silver </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:11 imagePullPolicy: Always ports: - containerPort: 5432 envFrom: - configMapRef: name: postgres-config volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data volumes: - name: postgres-storage persistentVolumeClaim: claimName: ibmc-file </code></pre> <p>But the PVC is never "Bound" and gets stuck as "Pending".</p> <pre><code>➜ postgres-kubernetes kubectl describe pvc ibmc-file Name: ibmc-file Namespace: default StorageClass: ibmc-file-silver Status: Pending Volume: Labels: billingType=monthly region=us-south zone=dal10 Annotations: ibm.io/provisioning-status=failed: Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missin... kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"billingType":"monthly","region":"us-south","zone":"dal10"},"n... volume.beta.kubernetes.io/storage-provisioner=ibm.io/ibmc-file Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d External provisioner is provisioning volume for claim "default/ibmc-file" Warning ProvisioningFailed 10m (x3 over 10m) ibm.io/ibmc-file_ibm-file-plugin-5d7684d8c5-xlvks_db50c480-500f-11e9-ba08-cae91657b92d failed to provision volume with StorageClass "ibmc-file-silver": Storage creation failed with error: {Code:E0013, Description:User doesn't have permissions to create or manage Storage [Backend Error:Validation failed due to missing permissions[NAS_MANAGE] for User[id:xxx, name:xxxm_2018-11-20-07.35.49, email:xxx, account:xxx]], Type:MissingStoragePermissions, RC:401, Recommended Action(s):Run `ibmcloud ks api-key-info` to see the owner of the API key that is used to order storage. Then, contact the account administrator to add the missing storage permissions. If infrastructure credentials were manually set via `ibmcloud ks credentials-set`, check the permissions of that user. Delete the PVC and re-create it. If the problem persists, open an IBM Cloud support case.} Normal ExternalProvisioning 7m (x22 over 10m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator Normal ExternalProvisioning 11s (x26 over 6m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-file" or manually created by system administrator </code></pre>
<p>@atkayla Could you try running <code>kubectl get secret storage-secret-store -n kube-system -o yaml | grep slclient.toml: | awk '{print $2}' | base64 --decode</code> to see what API key is used in the storage secret store? If this also shows your name and email address, then the file storage plug-in uses the permissions that are assigned to you. </p> <p>You might have the permissions to create the cluster, but you might lack some storage permissions that do not let you create the storage. Are you the owner of the account and have the possibility to check the permissions? You should have <code>Add/Upgrade Storage (StorageLayer)</code>, and <code>Storage Manage</code>. </p> <p>If you do not have these permissions, add these and then run <code>ibmcloud ks api-key-set</code> to update the API key. The storage secret store is automatically refreshed after 5-15 minutes. Then, you can try again. </p>
<p>I've setup a K8S-cluster in GKE and installed RabbitMQ (from the marketplace) and Istio (via Helm). I can access rabbitMQ from pods until I enable the envoy proxy to be injected into these pods, but after that the traffic will not reach rabbitMQ, and I can't figure out how to enable traffic to the rabbitmq service.</p> <p>There is a service <em>rabbitmq-rabbitmq-svc</em> (in the <em>rabbitmq</em> namespace) that is of type LoadBalancer. I've tried a simple busybox when I don't have envoy running and then I have no trouble telneting to rabbitmq (port 5672), but as soon as I try with automatic envoy injection envoy prevents the traffic. I tried unsuccessfully to add a DestinationRule. (I've added a rule but it makes no difference)</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: rabbitmq-rabbitmq-svc spec: host: rabbitmq.rabbitmq.svc.cluster.local trafficPolicy: loadBalancer: simple: LEAST_CONN </code></pre> <p>It seems like it should be a simple solution, but I can't figure it out... :/</p> <p><strong>UPDATE</strong> Turns out it was a simple error in the hostname, ended up using this and it works:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: rabbitmq-rabbitmq-svc spec: host: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local </code></pre>
<p>Turns out it was a simple error in the hostname, the correct one was <code>rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local</code></p>
<p>I'm trying to set which cri-o socket to use by kubeadm ! </p> <p>To achieve this I should use the flag <code>--cri-socket /var/run/crio/crio.sock</code></p> <hr> <p>The current command is in the form <code>kubeadm init phase &lt;phase_name&gt;</code>. I must add the <code>--cri-socket</code> flag to it. </p> <p>I edited the command this way <code>kubeadm init --cri-socket /var/run/crio/crio.sock phase &lt;phase_name&gt;</code>.</p> <p>Unfortunatly I am getting the <strong>error</strong> <code>Error: unknown flag: --cri-socket</code>.<br> => It seems that the argument <code>phase &lt;phase_name&gt;</code> and the flag <code>--cri-socket /var/run/crio/crio.sock</code> is not compatible.</p> <p>How do I fix that ?<br> Thx</p> <hr> <p><strong>##################Update 1######################</strong> </p> <p><strong>File</strong> : <em>/etc/kubernetes/kubeadm-config.yaml</em> </p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 10.10.3.15 bindPort: 6443 certificateKey: 9063a1ccc9c5e926e02f245c06b8xxxxxxxxxxx nodeRegistration: name: p3kubemaster1 taints: - effect: NoSchedule key: node-role.kubernetes.io/master criSocket: /var/run/crio/crio.sock </code></pre>
<p>I see two things that may help:</p> <ol> <li>Check <code>/var/lib/kubelet/kubeadm-flags.env</code> if it is properly configured. </li> </ol> <blockquote> <p>In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver and whether to use a different CRI runtime socket (--cri-socket).</p> </blockquote> <p>More details can be found <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#workflow-when-using-kubeadm-init" rel="nofollow noreferrer">here</a>.</p> <ol start="2"> <li>Check your init config file (<code>kubeadm init --config string</code> will show you the path do the configuration file) and try to add something like this:</li> </ol> <hr> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration nodeRegistration: criSocket: "unix:///var/run/crio/crio.sock" </code></pre> <hr> <p>Please let me know if that helped.</p>
<p>I have some confusion about AKS node pool upgrades and patching. Could you please clarify the following?</p> <ol> <li><p>I have one AKS node pool which has 4 nodes. Can I upgrade the Kubernetes version on only two nodes of the node pool?</p> <p>If it is possible to upgrade only two nodes, how can we then upgrade the remaining two nodes? And how can we find out which two nodes are still on the old Kubernetes version instead of the latest one?</p> </li> <li><p>During the upgrade process, will it create new nodes with the latest Kubernetes version and then delete the old nodes in the node pool?</p> </li> <li><p>Azure automatically applies patches on nodes, but will it create new nodes with the new patches and delete the old nodes?</p> </li> </ol>
<p><strong>1</strong>. According to the docs:</p> <ul> <li>you can <a href="https://learn.microsoft.com/en-us/azure/aks/node-image-upgrade#upgrade-a-specific-node-pool" rel="nofollow noreferrer">upgrade specific node pool</a>.</li> </ul> <p>So the approach with additional node-pool mentioned by <a href="https://stackoverflow.com/a/63501711/11207414">4c74356b41</a>.</p> <hr /> <ul> <li>Additional info:</li> </ul> <blockquote> <p><a href="https://learn.microsoft.com/en-us/azure/aks/node-updates-kured#node-upgrades" rel="nofollow noreferrer"><strong>Node upgrades</strong></a></p> </blockquote> <p>There is an additional process in AKS that lets you upgrade a cluster. <strong>An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates</strong>. An AKS upgrade performs the following actions:</p> <ul> <li>A new node is deployed with the latest security updates and Kubernetes version applied.</li> <li>An old node is cordoned and drained.</li> <li>Pods are scheduled on the new node.</li> <li>The old node is deleted.</li> </ul> <p><strong>2</strong>. By default, AKS uses one additional node to configure upgrades.</p> <p>You can control this process by increase <a href="https://learn.microsoft.com/en-us/azure/aks/node-image-upgrade#upgrade-node-images-with-node-surge" rel="nofollow noreferrer">--max-surge parameter</a></p> <blockquote> <p>To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value.</p> </blockquote> <p><strong>3</strong>. <a href="https://learn.microsoft.com/en-us/azure/aks/node-updates-kured" rel="nofollow noreferrer">Security and kernel updates to Linux nodes</a>:</p> <blockquote> <p>In an AKS cluster, your <strong>Kubernetes nodes run as Azure virtual machines (VMs)</strong>. These Linux-based VMs use an Ubuntu image, <strong>with the OS configured</strong> to automatically <strong>check for updates every night</strong>. <strong>If security or kernel updates are available, they are automatically downloaded and installed</strong>.</p> </blockquote> <blockquote> <p><strong>Some security updates</strong>, such as kernel updates, <strong>require a node reboot</strong> to finalize the process. A Linux node that requires a reboot creates a file named /var/run/reboot-required. <strong>This reboot process doesn't happen automatically</strong>.</p> </blockquote> <hr /> <ul> <li><a href="https://www.stackrox.com/post/2020/03/azure-kubernetes-aks-security-best-practices-part-4-of-4/" rel="nofollow noreferrer">This tutorial summarize the process of Cluster Maintenance and Other Tasks</a></li> </ul>
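<p>As a concrete sketch of points 1 and 2 (the resource group, cluster and node pool names are placeholders), the per-node-pool upgrade and the surge setting are driven through the Azure CLI, and <code>kubectl get nodes -o wide</code> shows which nodes are still on the old kubelet version:</p> <pre><code># upgrade only one node pool to a specific Kubernetes version
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --kubernetes-version 1.21.2

# node-image-only upgrade of the same pool, using a larger surge for speed
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --node-image-only \
  --max-surge 33%

# check which nodes run which version
kubectl get nodes -o wide
</code></pre>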
<p>Can I use any of the tools openssl, ssh-keygen and cfssl to create SSH certificates or the TLS certificates for Kubernetes components? Is there any difference between using these three tools? Will the keys/certificates generated by these tools be the same?</p>
<p>From a Kubernetes point of view, please take a look at the docs about using openssl and the probably more advanced cfssl <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">here</a>.</p> <p>Note:</p> <blockquote> <p>All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded.</p> </blockquote> <p>I'm not an expert in this matter, but you can take a look at community posts like:</p> <ul> <li><p><a href="https://serverfault.com/questions/9708/what-is-a-pem-file-and-how-does-it-differ-from-other-openssl-generated-key-file">Certificate standards</a> </p></li> <li><p><a href="https://security.stackexchange.com/questions/29876/what-are-the-differences-between-ssh-generated-keysssh-keygen-and-openssl-keys">differences between ssh generated keys (ssh-keygen) and OpenSSL keys (PEM)</a> </p></li> <li><p><a href="https://serverfault.com/questions/706336/how-to-get-a-pem-file-from-ssh-key-pair">pem file from ssh key pair</a> </p></li> <li><p><a href="https://blog.cloudflare.com/introducing-cfssl/" rel="nofollow noreferrer">Introducing CFSSL</a></p></li> <li><p><a href="https://knowledge.digicert.com/solution/SO26630.html" rel="nofollow noreferrer">certificate conversion</a> </p></li> <li><p><a href="https://support.ssl.com/Knowledgebase/Article/View/19/0/der-vs-crt-vs-cer-vs-pem-certificates-and-how-to-convert-them" rel="nofollow noreferrer">Certificates Conversion</a> </p></li> </ul> <p>How is X.509 used in SSH? X.509 certificates are used as key storage: instead of keeping SSH keys in a proprietary format, the software keeps the keys in X.509 certificates. When the SSH key exchange is done, the keys are taken from the certificates.</p> <p>Note - advantages of using X.509 certificates:</p> <blockquote> <p>An <a href="https://en.wikipedia.org/wiki/X.509" rel="nofollow noreferrer">X.509 certificate</a> contains a public key and an identity (a hostname, or an organization, or an individual), and is either signed by a certificate authority or self-signed. When a certificate is signed by a trusted certificate authority, or validated by other means, someone holding that certificate can rely on the public key it contains to establish secure communications with another party, or validate documents digitally signed by the corresponding private key.</p> </blockquote> <p>Hope this helps.</p>
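<p>As a small illustration of the openssl route from the Kubernetes certificates docs linked above (the CN values and validity periods here are placeholders; a real API server certificate also needs the proper SANs for its IPs and DNS names):</p> <pre><code># CA key and self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

# component key + CSR, signed by the CA (PEM-encoded, as the components expect)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365
</code></pre>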
<p>I want to run patching of statefulsets for a specific use case from a Pod via a cronjob. To do so I created the following plan with a custom service account, role and rolebinding to permit the Pod access to the apps api group with the patch verb but I keep running into the following error:</p> <pre><code>Error from server (Forbidden): statefulsets.apps &quot;test-statefulset&quot; is forbidden: User &quot;system:serviceaccount:test-namespace:test-serviceaccount&quot; cannot get resource &quot;statefulsets&quot; in API group &quot;apps&quot; in the namespace &quot;test-namespace&quot; </code></pre> <p>my k8s plan:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: labels: env: test name: test-serviceaccount namespace: test-namespace --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: labels: env: test name: test-role namespace: test-namespace rules: - apiGroups: - apps/v1 resourceNames: - test-statefulset resources: - statefulsets verbs: - patch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: name: test-binding namespace: test-namespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: test-role subjects: - kind: ServiceAccount name: test-serviceaccount namespace: test-namespace --- apiVersion: batch/v1beta1 kind: CronJob metadata: labels: name:test-job namespace: test-namespace spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 3 jobTemplate: metadata: labels: env: test spec: activeDeadlineSeconds: 900 backoffLimit: 1 parallelism: 1 template: metadata: labels: env: test spec: containers: - args: - kubectl -n test-namespace patch statefulset test-statefulset -p '{&quot;spec&quot;:{&quot;replicas&quot;:0}}' - kubectl -n test-namespace patch statefulset test-statefulset -p '{&quot;spec&quot;:{&quot;replicas&quot;:1}}' command: - /bin/sh - -c image: bitnami/kubectl restartPolicy: Never serviceAccountName: test-serviceaccount schedule: '*/5 * * * *' startingDeadlineSeconds: 300 successfulJobsHistoryLimit: 3 suspend: false </code></pre> <p>So far to debug:</p> <ol> <li><p>I have checked if the pod and serviceaccount association worked as expected and it looks like it did. I see the name of secret mounted on the Pod the cronjob starts is correct.</p> </li> <li><p>Used a simpler role where apiGroups was &quot;&quot; i.e. all core groups and tried to &quot;get pods&quot; from that pod, same error</p> </li> </ol> <p>role description:</p> <pre><code>Name: test-role Labels: env=test Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- statefulsets.apps/v1 [] [test-statefulset] [patch] </code></pre> <p>rolebinding description:</p> <pre><code>Name: test-binding Labels: env=test Annotations: &lt;none&gt; Role: Kind: Role Name: test-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount test-serviceaccount test-namespace </code></pre>
<p>StatefulSets need two verbs to apply a patch: GET and PATCH. PATCH alone won't work.</p>
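<p>For illustration, the Role from the question could look like the sketch below with both verbs. Note also that the <code>apiGroups</code> field in an RBAC rule takes the group name (<code>apps</code>) rather than the group/version (<code>apps/v1</code>):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-role
  namespace: test-namespace
rules:
- apiGroups:
  - apps                # group name only, no version
  resources:
  - statefulsets
  resourceNames:
  - test-statefulset
  verbs:
  - get
  - patch
</code></pre>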
<p>I've been running a community version of the MongoDB replica set in Kubernetes for over a year now. For reference, I am deploying with these Helm charts: <a href="https://github.com/mongodb/helm-charts/tree/main/charts/community-operator" rel="nofollow noreferrer">https://github.com/mongodb/helm-charts/tree/main/charts/community-operator</a> <a href="https://github.com/mongodb/helm-charts/tree/main/charts/community-operator-crds" rel="nofollow noreferrer">https://github.com/mongodb/helm-charts/tree/main/charts/community-operator-crds</a></p> <p>I now have a need to spin up a couple extra replica sets. I cannot reuse the existing rs because this new deployment is software from one of our subcontractors and we want to keep it separate. When I try to start a new rs, all the rs on the namespace get into a bad state, failing readiness probes.</p> <p>Do the different replica sets each require their own operator? In an attempt to test that theory, I modified the values.yaml and deployed an operator for each rs but I'm still getting the error.</p> <p>I think I am missing a config on the DB deployment that tells it which operator to use, but I can't find that config option in the helm chart (referencing the 2nd link from earlier <a href="https://github.com/mongodb/helm-charts/blob/main/charts/community-operator-crds/templates/mongodbcommunity.mongodb.com_mongodbcommunity.yaml" rel="nofollow noreferrer">https://github.com/mongodb/helm-charts/blob/main/charts/community-operator-crds/templates/mongodbcommunity.mongodb.com_mongodbcommunity.yaml</a>)</p> <p><strong>EDIT:</strong> For a little extra info, it seems like the mongodb can be used without issue. Kubernetes is just showing an error, saying the readiness probes have failed.</p>
<p>EDIT 2: I eventually managed to resolve the final remaining issue. When you deploy multiple replica sets using this method, you must make sure they each have a unique value for scramCredentialsSecretName.</p> <p><s>EDIT: Actually, this didn't fix the entire problem. It seems to work for a few hours then all the readiness probes for all rs start to fail again.</s></p> <p>It looks like you can deploy multiple operators and rs in a single namespace. What I was missing was linking the operators and rs together.</p> <p>I also left out an important detail. I don't actually deploy the mongodb using that helm chart because the config options are too limited. At the bottom of the values.yaml there's a setting called <code> createResource: false</code> which I have set to false. I then deploy a separate yaml that defines the mongo replicaset like so:</p> <pre><code>apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: my-mongo-rs spec: members: 3 type: ReplicaSet version: &quot;5.0.5&quot; security: authentication: modes: [&quot;SCRAM&quot;] # tls: # enabled: true # certificateKeySecretRef: # name: mongo-tls # caConfigMapRef: # name: mongo-ca statefulSet: spec: template: spec: containers: - name: &quot;mongodb-agent&quot; resources: requests: cpu: 200m memory: 500M limits: {} - name: &quot;mongod&quot; resources: requests: cpu: 1000m memory: 5G limits: {} serviceAccountName: my-mongodb-database volumeClaimTemplates: - metadata: name: data-volume spec: storageClassName: my-retainer-sc accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 20G # replicaSetHorizons: # - external: myhostname.com:9000 # - external: myhostname.com:9001 # - external: myhostname.com:9002 users: - name: my-mongo-user db: admin passwordSecretRef: # a reference to the secret that will be used to generate the user's password name: my-mongo-user-creds roles: - db: admin name: clusterAdmin - db: admin name: readWriteAnyDatabase - db: admin name: dbAdminAnyDatabase - db: admin name: userAdminAnyDatabase - db: admin name: root scramCredentialsSecretName: my-user-scram additionalMongodConfig: storage.wiredTiger.engineConfig.journalCompressor: zlib </code></pre> <p>Anywhere in this config where you see &quot;my&quot;, I actually use my app name but I've genericized it for this post.</p> <p>To link the operator and rs together, it's done using a name. In the operator yaml it's this line:</p> <pre><code>database: name: my-mongodb-database </code></pre> <p>That name creates a serviceaccount in kubernetes and you must define your database pod to use that specific serviceaccount. Otherwise, it tries to default to a serviceaccount named mongodb-database which either won't exist, or you'll end up with multiple rs using the same serviceaccount (therefore, the same operator).</p> <p>And in the rs yaml it's this line:</p> <pre><code>serviceAccountName: my-mongodb-database </code></pre> <p>This will link it to the correct serviceaccount.</p>
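<p>Putting those pieces together, the relevant bits of each operator release's <code>values.yaml</code> look roughly like this (a sketch — exact field placement may differ between chart versions, and the names are the genericized ones from above):</p> <pre><code>database:
  name: my-mongodb-database   # the operator creates a ServiceAccount with this name
createResource: false         # the MongoDBCommunity resource is deployed from a separate yaml (above)
</code></pre>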
<p>So I am trying to figure out how can I configure an Horizontal Pod Autoscaler from a custom metric reading from Prometheus that returns CPU usage with percentile 0.95</p> <p>I have everything set up to use custom metrics with prometheus-adapter, but I don't understand how to create the rule in Prometheus. For example, if I go to Grafana to check some of the Graphs that comes by default I see this metric:</p> <pre><code>sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace="api", pod_name="api-xxxxx9b-bdxx", container_name!="POD", cluster=""}) by (container_name) </code></pre> <p>But how can I modify that to be percentile 95? I tried with histogram_quantile function but it says no datapoints found:</p> <pre><code>histogram_quantile(0.95, sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace="api", pod_name="api-xxxxx9b-bdxx", container_name!="POD", cluster=""}) by (container_name)) </code></pre> <p>But even if that works, will the pod name and namespace be filled by prometheus-adapter or prometheus when using custom metrics?</p> <p>And every example I find using custom metrics are not related with CPU. So... other question I have is how people is using autoscaling metrics in production? I'm used to scale based on percentiles but I don't understand how is this managed in Kubernetes.</p>
<p>If I understand you correctly you don't have to use custom metrics in order to horizontally autoscale your pods. By default, you can automatically scale the number of Kubernetes pods based on the observed CPU utilization. Here is the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">official documentation</a> with necessary details.</p> <blockquote> <p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).</p> <p>The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by user.</p> </blockquote> <p>And here you can find the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">walkthrough</a> of how to set it up.</p> <p>Also, <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale" rel="nofollow noreferrer">here</a> is the <code>kubectl autoscale</code> command documentation. </p> <p>Example: <code>kubectl autoscale rc foo --max=5 --cpu-percent=80</code></p> <p><em>Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80%</em></p> <p>I believe that it is the easiest way so no need to complicate it with some custom metrics.</p> <p>Please let me know if that helped.</p>
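<p>For reference, a minimal HPA manifest doing the same thing declaratively (the names and numbers are illustrative — point it at your own Deployment) could look like this:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api              # the Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
</code></pre>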
<p>i am struggling to create a custom subject when receiving alerts from my AlertManager, i am doing it with manifest file:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] </code></pre> <p>i have read that i need to add headers under the emailConfigs tab, but when i do like follows:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] headers: - subject: &quot;MyTestSubject&quot; </code></pre> <p>or</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: my-name labels: alertmanagerConfig: email alertconfig: email-config spec: route: groupBy: - node groupWait: 30s groupInterval: 5m repeatInterval: 12h receiver: 'myReceiver' receivers: - name: 'Name' emailConfigs: - to: [email protected] headers: subject: &quot;MyTestSubject&quot; </code></pre> <p>I receive following errors:</p> <p>either:</p> <p>com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers, ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers[0]): missing required field &quot;key&quot; in com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers, ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers[0]): missing required field &quot;value&quot; in com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers];</p> <p>or</p> <p>error: error validating &quot;alert-config.yaml&quot;: error validating data: ValidationError(AlertmanagerConfig.spec.receivers[0].emailConfigs[0].headers): invalid type for com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers.emailConfigs.headers: got &quot;map&quot;, expected &quot;array&quot;</p> <p>i have checked other solutions and everyone is doing it like headers: subject: mySubject<br /> but for some reason to me, it doesn't work</p>
<p>I managed to get it working. The CRD defines <code>headers</code> as a list of objects with <code>key</code> and <code>value</code> fields (which is exactly what the two validation errors were pointing at), so it has to be written like this:</p> <pre><code> emailConfigs: - to: myreceiver@mail headers: - key: subject value: &quot;Custom subject goes here&quot; </code></pre> <p>Cheers :)</p>
<p>I am trying to develop an application on kubernetes with hot-reloading (instant code sync). I am using <a href="https://github.com/loft-sh/devspace" rel="nofollow noreferrer">DevSpace</a>. When running my application on a minikube cluster, everything works and I am able to hit the ingress to reach my FastAPI docs. The problem is when I try to use devspace, I can exec into my pods and see my changes reflected right away, but then when I try to hit my ingress to reach my FastAPI docs, I get a 502 bad gateway.</p> <p>I have an <code>api-pod.yaml</code> file as such:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: project-api spec: replicas: 1 selector: matchLabels: app: project-api template: metadata: labels: app: project-api spec: containers: - image: project/project-api:0.0.1 name: project-api command: [&quot;uvicorn&quot;] args: [&quot;endpoint:app&quot;, &quot;--port=8000&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;] imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /api/v1/project/tasks/ port: 8000 initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 600 failureThreshold: 3 ports: - containerPort: 8000 name: http protocol: TCP --- apiVersion: v1 kind: Service metadata: name: project-api spec: selector: app: project-api ports: - port: 8000 protocol: TCP targetPort: http type: ClusterIP </code></pre> <p>I have an <code>api-ingress.yaml</code> file as such:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: project-ingress spec: rules: - http: paths: - path: /api/v1/project/tasks/ pathType: Prefix backend: service: name: project-api port: number: 8000 ingressClassName: nginx --- apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: nginx spec: controller: k8s.io/ingress-nginx </code></pre> <p>Using <code>kubectl get ep</code>, I get:</p> <pre><code>NAME ENDPOINTS AGE project-api 172.17.0.6:8000 17m </code></pre> <p>Using <code>kubectl get svc</code>, I get:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE project-api ClusterIP 10.97.182.167 &lt;none&gt; 8000/TCP 17m </code></pre> <p>Using <code>kubectl get ingress</code> I get:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE api-ingress nginx * 192.168.64.112 80 17m </code></pre> <p>to reiterate, my problem is when I try reaching the FastAPI docs using <code>192.168.64.112/api/v1/project/tasks/docs</code> I get a 502 bad gateway.</p> <p>Im running:</p> <pre><code>MacOS Monterey: 12.4 Minikube version: v1.26.0 (with hyperkit as the vm) Ingress controller: k8s.gcr.io/ingress-nginx/controller:v1.2.1 Devspace version: 5.18.5 </code></pre>
<p>I believe the problem was within DevSpace. I am now comfortably using <a href="https://github.com/tilt-dev/tilt" rel="nofollow noreferrer">Tilt</a>. Everything is working as expected.</p>
<p>I am currently looking into setting up Kubernetes pods for a project on GCP.</p> <p>The problem - I need to set a persistent shared volume which will be used by multiple nodes. I need all nodes to be able to read from the volume and only one node must be able to write on the volume. So I need some advice what's the best way to achieve that?</p> <p>I have checked the Kubernetes documentation and know that <code>GCEPersistentDisks</code> does not support <code>ReadWriteMany</code> but anyway this access mode I think will be an overkill. Regarding the <code>ReadOnlyMany</code> I get that nodes can read from the PV but I don't understand how or what can actually modify the PV in this case. Currently my best bet is setting up NFS with GCE persistent disk.</p> <p>Also the solution should be able to run on the cloud or on premise. Any advice will be appreciated :)</p>
<p>According to official <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>A PVC to PV binding is a one-to-one mapping.</p> <p>A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.</p> </blockquote> <p>So I am afraid this is impossible to do in the way you described.</p> <p>However, you may want to try a <a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">task queue</a>.</p> <p>Please let me know if that helped.</p>
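<p>To illustrate the ReadOnlyMany case from the quote: once the data is already on the disk (written earlier while it was attached ReadWriteOnce), a pre-provisioned GCE PD can be exposed read-only to many nodes through a PV/PVC pair — a rough sketch, where the disk name is an assumption:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: my-data-disk   # pre-existing disk with the data already on it
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""     # bind to the pre-provisioned PV above, no dynamic provisioning
  resources:
    requests:
      storage: 10Gi
</code></pre>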
<p>I have created a 2 org each having 2 peers and 1 orderer using solo configuration, which later on will be changed to raft configuration.</p> <p>kubernetes cluster consist of 3 vagrant VMs, with 1 master and 2 node workers. They are linked using flannel.</p> <p>I have been following this <a href="https://medium.com/@zhanghenry/how-to-deploy-hyperledger-fabric-on-kubernetes-2-751abf44c807" rel="nofollow noreferrer">post</a>. Everything has been doing well until peer channel create section. </p> <p>deployed pods</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-fb8b8dccf-5rsfd 1/1 Running 0 17h kube-system coredns-fb8b8dccf-vjs75 1/1 Running 0 17h kube-system etcd-k8s-master 1/1 Running 0 17h kube-system kube-apiserver-k8s-master 1/1 Running 0 17h kube-system kube-controller-manager-k8s-master 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-hpbfz 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-kb4j2 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-r5npk 1/1 Running 0 17h kube-system kube-proxy-9mqj9 1/1 Running 0 17h kube-system kube-proxy-vr9zt 1/1 Running 0 17h kube-system kube-proxy-xz2fg 1/1 Running 0 17h kube-system kube-scheduler-k8s-master 1/1 Running 0 17h org1 ca-7cfc7bc4b6-k8bjm 1/1 Running 0 16h org1 cli-55dd4df5bb-6vn7g 1/1 Running 0 16h org1 peer0-org1-5c65b984d5-685bp 2/2 Running 0 16h org1 peer1-org1-7b9cf7fbd4-hf9b9 2/2 Running 0 16h org2 ca-567ccf7dcd-sgbxz 1/1 Running 0 16h org2 cli-76bb768f7f-mt9nx 1/1 Running 0 16h org2 peer0-org2-6c8fbbc7f8-n6msn 2/2 Running 0 16h org2 peer1-org2-77fd5f7f67-blqpk 2/2 Running 0 16h orgorderer1 orderer0-orgorderer1-7b6947868-d9784 1/1 Running 0 16h</code></pre> </div> </div> </p> <p>error message when I tried to create channel</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>vagrant@k8s-master:~/articles-master/fabric_on_kubernetes/Fabric-on-K8S/setupCluster/crypto-config/peerOrganizations$ kubectl exec -it cli-55dd4df5bb-6vn7g bash --namespace=org1 root@cli-55dd4df5bb-6vn7g:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel create -o orderer0.orgorderer1:7050 -c mychannel -f ./channel-artifacts/channel.tx 2019-06-19 00:41:31.465 UTC [msp] GetLocalMSP -&gt; DEBU 001 Returning existing local MSP 2019-06-19 00:41:31.465 UTC [msp] GetDefaultSigningIdentity -&gt; DEBU 002 Obtaining default signing identity 2019-06-19 00:41:51.466 UTC [grpc] Printf -&gt; DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: i/o timeout"; Reconnecting to {orderer0.orgorderer1:7050 &lt;nil&gt;} Error: Error connecting due to rpc error: code = Unavailable desc = grpc: the connection is unavailable</code></pre> </div> </div> </p> <p>at the linked post, there are some people having the same problem. Some has solved it by using an ip instead of domain name. I tried to put ip, but it didn't work. What should I do to fix this problem?</p>
<p>There are few things you can do in order to fix it:</p> <ol> <li><p>Check if you meet all the <a href="https://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html#prerequisites" rel="nofollow noreferrer">Prerequisites</a> </p></li> <li><p>Check the crypto material for your network or generate a new one: <code>cryptogen generate --config=crypto-config.yaml --output=</code></p></li> <li><p>Check your firewall configuration. You may need to allow the appropriate ports through: <code>firewall-cmd --add-port=xxxx/tcp --permanent</code></p></li> <li><p>Check your iptables service. You may need to stop it.</p></li> </ol> <p>Please let me know if any of the above helped.</p>
<p>I looked at <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> library but it requires RBAC enabled in cluster. Is any other way to retrieve pods in kubernetes programatically?</p>
<p>In the Kubernetes Java Client library you can find:</p> <ol> <li><p><a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java" rel="nofollow noreferrer">InClusterClient</a> Example (configure a client while running inside the Kubernetes cluster).</p></li> <li><p><a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java" rel="nofollow noreferrer">KubeConfigFileClient</a> Example (configure a client to access a Kubernetes cluster from outside).</p></li> </ol> <p>The first example, from inside the cluster, uses the ServiceAccount applied to the Pod.</p> <p>The second example, from outside the cluster, uses a kubeconfig file.</p> <p>In the official docs you can find a Java example of <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#java-client" rel="nofollow noreferrer">Accessing Kubernetes API</a> using the Java client; by default it uses the kubeconfig file stored in <code>$HOME/.kube/config</code>. In addition, you can find there other examples of how to programmatically access the Kubernetes API, along with the list of <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Officially-supported Kubernetes client libraries</a> and <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#community-maintained-client-libraries" rel="nofollow noreferrer">Community-maintained client libraries</a>.</p> <p>Please refer also to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">Authorization Modes</a>:</p> <blockquote> <p>Kubernetes RBAC allows admins to configure and control access to Kubernetes resources as well as the operations that can be performed on those resources. RBAC can be enabled by starting the API server with --authorization-mode=RBAC</p> <p>Kubernetes includes a built-in role-based access control (RBAC) mechanism that allows you to configure fine-grained and specific sets of permissions that define how a given GCP user, or group of users, can interact with any Kubernetes object in your cluster, or in a specific Namespace of your cluster.</p> </blockquote> <p>Additional resources:</p> <ul> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a> </li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow noreferrer">Accessing Clusters</a> </li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a> </li> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">Authorization Modes</a> </li> <li><a href="https://www.replex.io/blog/kubernetes-in-production" rel="nofollow noreferrer">Kubernetes in Production</a> </li> </ul> <p>Hope this helps.</p>
<p>I try to use haproxy as load balance and haproxy-ingress as ingress controller in k8s. </p> <p>my load balance config:</p> <pre><code>frontend MyFrontend_80 bind *:80 bind *:443 mode tcp default_backend TransparentBack_https backend TransparentBack_https mode tcp balance roundrobin option ssl-hello-chk server MyWebServer1 10.5.5.53 server MyWebServer2 10.5.5.54 server MyWebServer3 10.5.5.55 </code></pre> <p>Ingress file:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: li namespace: li annotations: # add an annotation indicating the issuer to use. cert-manager.io/cluster-issuer: "letsencrypt-staging" #haproxy.org/forwarded-for: true kubernetes.io/ingress.class: haproxy ingress.kubernetes.io/ssl-redirect: "true" spec: rules: - host: a.b.c http: paths: - path: /storage backend: serviceName: li-frontend servicePort: 80 tls: - hosts: - a.b.c secretName: longhorn-ui-tls </code></pre> <p>li-frontend is a dashboard ui service.</p> <p>All is ok when I set the path field to blank in my ingress. and page is not normal when the path field seted to /storage or any non blank value.</p> <p>I find some link not get correct position, e.g.</p> <pre><code>requst correct value /main.js /storage/main.js </code></pre> <p>I found this in nginx-ingress:</p> <pre><code>#nginx.ingress.kubernetes.io/configuration-snippet: | #rewrite ^/main(.*)$ /storage/main$1 redirect; </code></pre> <p>Does haproxy-ingress has same function? I try these, but no effect:</p> <pre><code>ingress.kubernetes.io/app-root: /storage ingress.kubernetes.io/rewrite-target: /storage </code></pre> <p>In addition, I use rewrite in nginx-ingress, but it don't work on websocket.</p> <p>Sorry for my pool english.</p>
<p>For HAProxy, you have to use the <code>haproxy.org/path-rewrite</code> annotation. The block below lists several alternative rewrites from the documentation — keep only the one you need, since the same annotation key can only appear once:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: web-ingress namespace: default annotations: # replace all paths with / haproxy.org/path-rewrite: &quot;/&quot; # add the prefix /foo... &quot;/bar?q=1&quot; into &quot;/foo/bar?q=1&quot; haproxy.org/path-rewrite: (.*) /foo\1 # add the suffix /foo ... &quot;/bar?q=1&quot; into &quot;/bar/foo?q=1&quot; haproxy.org/path-rewrite: ([^?]*)(\?(.*))? \1/foo\2 # strip /foo ... &quot;/foo/bar?q=1&quot; into &quot;/bar?q=1&quot; haproxy.org/path-rewrite: /foo/(.*) /\1 spec: # ingress specification... </code></pre> <p>Ref: <a href="https://www.haproxy.com/documentation/kubernetes/1.4.5/configuration/ingress/" rel="noreferrer">https://www.haproxy.com/documentation/kubernetes/1.4.5/configuration/ingress/</a></p>
<p>Requirement: We need to access the Kubernetes REST end points from our java code. Our basic operations using the REST end points are to Create/Update/Delete/Get the deployments.</p> <p>We have downloaded the kubectl and configured the kubeconfig file of the cluster in our Linux machine. We can perform operations in that cluster using the kubectl. We got the bearer token of that cluster running the command 'kubectl get pods -v=8'. We are using this bearer token in our REST end points to perform our required operations. </p> <p>Questions:</p> <ol> <li>What is the better way to get the bearer token? </li> <li>Will the bearer token gets change during the lifecycle of the cluster?</li> </ol>
<p>Q: What is the better way to get the bearer token?</p> <p>A: Since you have configured access to the cluster, you might use </p> <pre><code>kubectl describe secrets </code></pre> <p>Q: Will the bearer token gets change during the lifecycle of the cluster?</p> <p>A: Static tokens do not expire. </p> <p>Please see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="noreferrer">Accessing Clusters</a> and <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="noreferrer">Authenticating</a> for more details. </p>
<p>I am trying to use the <code>VolumeSnapshot</code> backup <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">mechanism</a> promoted in <code>kubernetes</code> to <code>beta</code> from <code>1.17</code>.</p> <p>Here is my scenario:</p> <p>Create the nginx deployment and the PVC used by it</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 1 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - name: my-pvc mountPath: /root/test volumes: - name: my-pvc persistentVolumeClaim: claimName: nginx-pvc </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: finalizers: null labels: name: nginx-pvc name: nginx-pvc namespace: default spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi storageClassName: premium-rwo </code></pre> <p>Exec into the running <code>nginx</code> container, cd into the PVC mounted path and create some files</p> <pre class="lang-sh prettyprint-override"><code>▶ k exec -it nginx-deployment-84765795c-7hz5n bash root@nginx-deployment-84765795c-7hz5n:/# cd /root/test root@nginx-deployment-84765795c-7hz5n:~/test# touch {1..10}.txt root@nginx-deployment-84765795c-7hz5n:~/test# ls 1.txt 10.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt lost+found root@nginx-deployment-84765795c-7hz5n:~/test# </code></pre> <p>Create the following <code>VolumeSnapshot</code> using as source the <code>nginx-pvc</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: namespace: default name: nginx-volume-snapshot spec: volumeSnapshotClassName: pd-retain-vsc source: persistentVolumeClaimName: nginx-pvc </code></pre> <p>The <code>VolumeSnapshotClass</code> used is the following</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: snapshot.storage.k8s.io/v1beta1 deletionPolicy: Retain driver: pd.csi.storage.gke.io kind: VolumeSnapshotClass metadata: creationTimestamp: &quot;2020-09-25T09:10:16Z&quot; generation: 1 name: pd-retain-vsc </code></pre> <p>and wait until it becomes <code>readyToUse: true</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 items: - apiVersion: snapshot.storage.k8s.io/v1beta1 kind: VolumeSnapshot metadata: creationTimestamp: &quot;2020-11-04T09:38:00Z&quot; finalizers: - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection generation: 1 name: nginx-volume-snapshot namespace: default resourceVersion: &quot;34170857&quot; selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/nginx-volume-snapshot uid: ce1991f8-a44c-456f-8b2a-2e12f8df28fc spec: source: persistentVolumeClaimName: nginx-pvc volumeSnapshotClassName: pd-retain-vsc status: boundVolumeSnapshotContentName: snapcontent-ce1991f8-a44c-456f-8b2a-2e12f8df28fc creationTime: &quot;2020-11-04T09:38:02Z&quot; readyToUse: true restoreSize: 8Gi kind: List metadata: resourceVersion: &quot;&quot; selfLink: &quot;&quot; </code></pre> <p>Delete the <code>nginx</code> deployment and the initial PVC</p> <pre class="lang-sh prettyprint-override"><code>▶ k delete pvc,deploy --all persistentvolumeclaim &quot;nginx-pvc&quot; deleted deployment.apps &quot;nginx-deployment&quot; deleted </code></pre> <p>Create a new PVC, using the previously created 
<code>VolumeSnapshot</code> as its <code>dataSource</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: finalizers: null labels: name: nginx-pvc-restored name: nginx-pvc-restored namespace: default spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi dataSource: name: nginx-volume-snapshot kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io </code></pre> <pre><code>▶ k create -f nginx-pvc-restored.yaml persistentvolumeclaim/nginx-pvc-restored created ▶ k get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nginx-pvc-restored Bound pvc-56d0a898-9f65-464f-8abf-90fa0a58a048 8Gi RWO standard 39s </code></pre> <p>Set the name of the new (restored) PVC to the nginx deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: selector: matchLabels: app: nginx replicas: 1 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 volumeMounts: - name: my-pvc mountPath: /root/test volumes: - name: my-pvc persistentVolumeClaim: claimName: nginx-pvc-restored </code></pre> <p>and create the <code>Deployment</code> again</p> <pre class="lang-sh prettyprint-override"><code>▶ k create -f nginx-deployment-restored.yaml deployment.apps/nginx-deployment created </code></pre> <p><code>cd</code> into the PVC mounted directory. It should contain the previously created files but its empty</p> <pre class="lang-sh prettyprint-override"><code>▶ k exec -it nginx-deployment-67c7584d4b-l7qrq bash root@nginx-deployment-67c7584d4b-l7qrq:/# cd /root/test root@nginx-deployment-67c7584d4b-l7qrq:~/test# ls lost+found root@nginx-deployment-67c7584d4b-l7qrq:~/test# </code></pre> <pre><code>▶ k version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17&quot;, GitVersion:&quot;v1.17.12&quot;, GitCommit:&quot;5ec472285121eb6c451e515bc0a7201413872fa3&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-09-16T13:39:51Z&quot;, GoVersion:&quot;go1.13.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17+&quot;, GitVersion:&quot;v1.17.12-gke.1504&quot;, GitCommit:&quot;17061f5bd4ee34f72c9281d49f94b4f3ac31ac25&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-10-19T17:00:22Z&quot;, GoVersion:&quot;go1.13.15b4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
<p>This is a community wiki answer posted for more clarity of the current problem. Feel free to expand on it.</p> <p>As mentioned by @pkaramol, this is an on-going issue registered under the following thread:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/96225" rel="nofollow noreferrer">Creating an intree PVC with datasource should fail #96225</a></p> <blockquote> <p><strong>What happened:</strong> In clusters that have intree drivers as the default storageclass, if you try to create a PVC with snapshot data source and forget to put the csi storageclass in it, then an empty PVC will be provisioned using the default storageclass.</p> <p><strong>What you expected to happen:</strong> PVC creation should not proceed and instead have an event with an incompatible error, similar to how we check proper csi driver in the csi provisioner.</p> </blockquote> <p>This issue has not yet been resolved at the moment of writing this answer.</p>
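<p>In practice, based on the issue above, a likely workaround is to set the CSI StorageClass explicitly on the restored PVC so that the snapshot <code>dataSource</code> is handled by the CSI provisioner rather than the in-tree default class — a sketch reusing <code>premium-rwo</code> from the question:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-restored
  namespace: default
spec:
  storageClassName: premium-rwo   # explicitly the CSI class, not the cluster default "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
</code></pre>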
<p>I want to connect from Google Cloud Function to Kubernetes (GKE) container. Specifically, the container has postgres database and I want to read records in a table.</p> <p>In Golang:</p> <pre><code>func ConnectPostgres(w http.ResponseWriter, r *http.Request) { db, err := sql.Open("postgres", "postgresql://[email protected]:5432/myDatabase") if err != nil { http.Error(w, "Error opening conn:" + err.Error(), http.StatusInternalServerError) } defer db.Close() err = db.Ping() if err != nil { http.Error(w, "Error ping conn:" + err.Error(), http.StatusInternalServerError) } rows, err := db.Query("SELECT * FROM myTable") fmt.Println(rows) w.Write([]byte(rows)) } </code></pre> <p>10.32.0.142 is the Internal IP of the pod having the container.</p> <p>But when the cloud function tries to Ping to postgres container, the request gets timed out.</p> <p>How can I solve this?</p>
<p>You need to connect the Cloud Function to your VPC first (via a Serverless VPC Access connector), as detailed here: <a href="https://cloud.google.com/functions/docs/connecting-vpc" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/connecting-vpc</a> </p>
<p>Assume I have an app <code>A</code> container and another container called <code>resources-preparation</code> which will try to create DB tables and etc in order to bootstrap app <code>A</code>. </p> <p>App <code>A</code> container and <code>resources-preparation</code> container are living in different pods. How can I bring up App <code>A</code> container after <code>resources-preparation</code> container completes. </p> <p>PS: <code>resources-preparation</code> container is not a service at all. So I may not be able to use the <code>waitfor</code> image to detect the <code>resources-preparation</code> container completes.</p>
<p>It seems there is a kind of architectural inconsistency: the existing application architecture does not fit Kubernetes paradigm well: </p> <ul> <li>The pod <code>A</code> is tied to the pod <code>resources-preparation</code>, so has to wait for its successful completion, whereas k8s assumes independent or loosely coupled microservices. </li> <li>Being tightly dependent, containers <code>A</code> and <code>resources-preparation</code> are put into different pods whereas assisting applications should be placed into the same container with the primary one. See the Discussion <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">Communicate Between Containers in the Same Pod</a>. </li> <li>The pod <code>A</code> is dependent on an external database whereas in k8s microservices should work with their own database or replica to keep independency. </li> <li>The pods <code>A</code> and <code>resources-preparation</code> should communicate via k8s API. That means the pod <code>A</code> should fetch information about the <code>resources-preparation</code> completion from the <code>kube-apiserver</code>. </li> </ul> <p>The listed principles cause extra overhead but this is the price you pay for the redundancy Kubernetes relies on. </p> <p>Possible approaches to the problem: </p> <ol> <li><p>Redesign or modify the application and backend database accordingly with the k8s principles, decompose them into a set of loosely coupled microservices. As a supposal: </p> <ul> <li>a) let the app <code>A</code> start with its DB replica independently; </li> <li>b) in parallel let the <code>resources-preparation</code> to start and create tables in its own replica; </li> <li>c) then add the new tables to the existing Replication or create a new Replication. In this approach the pod <code>A</code> does not have to wait for the pod <code>resources-preparation</code>. The DB replication will be waiting instead. That way the dependency will be moved off the k8s level to the upper layer. </li> </ul> <p>Unfortunately, adaptation of existing applications to k8s could be challenging and often requires re-developing the application from scratch. It is the time- and resource-consuming task. </p> <p>A good whitepaper is available here: <a href="https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper" rel="nofollow noreferrer">Principles of container-based application design</a>. </p></li> <li><p>Since the <code>resources-preparation</code> is an assisting container for the <code>A</code>, put both containers into the same pod. That way the sample code from the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">Init Containers</a> concept will do exactly the container <code>A</code> needs. What's important for the container <code>A</code> awaiting for the <code>resources-preparation</code> completion is that: </p> <ul> <li>Init containers always <strong>run to completion</strong>.</li> <li>Each init container <strong>must complete successfully</strong> before the next one starts. </li> </ul></li> <li><p>If you can not join both containers into the same pod for some reason, as a workaround the application components could be put into a "wrapper" that helps them to pretend behaving as loosely coupled microservices. This wrapper should be implemented below the pod level to be transparent for Kubernetes: around the container or application. 
In a simple case you might launch the application <code>A</code> from within a shell script with an <code>until</code> loop. The script should fetch the status of the <code>resources-preparation</code> pod running in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> via the <code>kube-apiserver</code> to decide whether the application <code>A</code> may start or not. </p> <p>A REST API request could be used for that (see the answer <a href="https://stackoverflow.com/questions/52890977/kubernetes-api-server-serving-pod-logs/52894762#52894762">Kubernetes API server, serving pod logs</a>). </p> <p>A way to authenticate on the <code>kube-apiserver</code> should be provided for the API request to work. The theory and practical examples are here: </p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">Access Clusters Using the Kubernetes API</a> </p> <p><a href="https://medium.com/@nieldw/curling-the-kubernetes-api-server-d7675cfc398c" rel="nofollow noreferrer">cURLing the Kubernetes API server</a> </p></li> </ol>
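<p>As an illustration of approach 2 above, a minimal sketch of the combined pod could look like this (the image names are placeholders — the point is only that the preparation step runs to completion as an init container before <code>A</code> starts):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  initContainers:
    - name: resources-preparation
      image: resources-preparation:latest   # placeholder: creates the DB tables, must exit 0
  containers:
    - name: app-a
      image: app-a:latest                    # placeholder: the primary application
</code></pre>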
<p>I would like to know if there is any way to externalize my hostaliases in order to read from values file to vary by environment.</p> <p><code> deployment.yaml ... hostAliases: valueFrom: configMapKeyRef: name: host-aliases-configuration key: hostaliases </p> <p>configmap.yaml kind: ConfigMap metadata: name: host-aliases-configuration data: hostaliases: | {{ .Values.hosts }}</p> <p>values.yaml hosts: - ip: "13.21.219.253" hostnames: - "test-test.com" - ip: "13.71.225.255" hostnames: - "test-test.net" </code></p> <p>This doest work:</p> <p>helm install --name gateway .</p> <p>Error: release gateway failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: decode slice: expect [ or n, but found {, error found in #10 byte of ...|Aliases":{"valueFrom|..., bigger context ...|config","name":"config-volume"}]}],"hostAliases":{"valueFrom":{"configMapKeyRef":{"key":"hostaliases|...</p> <p>I would like to know if there is any way to externalize this urls by env, using another approach maybe.</p>
<p>For the main question: you got that error because <code>hostAliases</code> expects an array of entries, while <code>valueFrom</code>/<code>configMapKeyRef</code> provides an object — that is what the &quot;decode slice: expect [ or n, but found {&quot; part of the message means.</p> <p><strong>1.</strong> You can try:</p> <pre><code>deployment.yaml ... hostAliases: {{ toYaml .Values.hosts | indent 4 }} values.yaml hosts: - ip: "13.21.219.253" hostnames: - "test-test.com" - ip: "13.71.225.255" hostnames: - "test-test.net" </code></pre> <p>Note - hostAliases:</p> <blockquote> <p>Because of the managed-nature of the file, any user-written content will be overwritten whenever the hosts file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, <strong>it is not suggested to modify the contents of the file</strong>.</p> </blockquote> <p>Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">HostAliases</a>.</p> <p>In addition, those addresses will be used only at the Pod level. </p> <p><strong>2.</strong> It's not clear what you are trying to do.</p> <p>Take a look at <strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">external IPs</a></strong>; that should be done at the Service level.</p> <blockquote> <p>If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.</p> </blockquote> <p>Hope this helps.</p>
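<p>For reference, with the values above, option 1 should render to something like this in the Deployment's pod spec — i.e. a plain array, which is what the API expects:</p> <pre><code>spec:
  hostAliases:
    - ip: "13.21.219.253"
      hostnames:
        - "test-test.com"
    - ip: "13.71.225.255"
      hostnames:
        - "test-test.net"
</code></pre>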
<p>In my k8s pod, I want to give a container access to a S3 bucket, mounted with rclone.</p> <p>Now, the container running rclone needs to run with <code>--privileged</code>, which is a problem for me, since my <code>main-container</code> will run user code which I have no control of and can be potentially harmful to my Pod.</p> <p>The solution I’m trying now is to have a <code>sidecar-container</code> just for the task of running rclone, mounting S3 in a <code>/shared_storage</code> folder, and sharing this folder with the <code>main-container</code> through a Volume <code>shared-storage</code>. This is a simplified pod.yml file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: two-containers spec: restartPolicy: Never volumes: - name: shared-storage emptyDir: {} containers: - name: main-container image: busybox command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 1h&quot;] volumeMounts: - name: shared-storage mountPath: /shared_storage # mountPropagation: HostToContainer - name: sidecar-container image: mycustomsidecarimage securityContext: privileged: true command: [&quot;/bin/bash&quot;] args: [&quot;-c&quot;, &quot;python mount_source.py&quot;] env: - name: credentials value: XXXXXXXXXXX volumeMounts: - name: shared-storage mountPath: /shared_storage mountPropagation: Bidirectional </code></pre> <p>The pod runs fine and from <code>sidecar-container</code> I can read, create and delete files from my S3 bucket.<br /> But from <code>main-container</code> no files are listed inside of <code>shared_storage</code>. I can create files (if I set <code>readOnly: false</code>) but those do not appear in <code>sidecar-container</code>.</p> <p>If I don’t run the rclone mount to that folder, the containers are able to share files again. So that tells me that is something about the rclone process not letting <code>main-container</code> read from it.</p> <p>In <code>mount_source.py</code> I am running rclone with <code>--allow-other</code> and I have edit <code>etc/fuse.conf </code> as suggested <a href="https://github.com/rclone/rclone/issues/811#issuecomment-255599253" rel="nofollow noreferrer">here</a>.</p> <p>Does anyone have an idea on how to solve this problem?</p>
<p>I've managed to make it work by using:</p> <ul> <li><code>mountPropagation: HostToContainer</code> on <code>main-container</code></li> <li><code>mountPropagation: Bidirectional</code> on <code>sidecar-container</code></li> </ul> <p>I can control read/write permissions to specific mounts using <code>readOnly: true/false</code> on <code>main-container</code>. This is of course also possible to set within the <code>rclone mount</code> command.</p> <p>Now the <code>main-container</code> does not need to run in privileged mode and my users' code can have access to their S3 buckets through those mount points!</p> <p>Interestingly, it doesn't seem to work if I set <code>volumeMount:mountPath</code> to be a sub-folder of the rclone mounted path. So to grant <code>main-container</code> different read/write permissions on different subpaths, I had to create a separate <code>rclone mount</code> for each sub-folder.</p> <p>I'm not 100% sure whether there are any extra security concerns with that approach, though.</p>
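<p>A sketch of the resulting container specs, building on the pod.yml from the question (the sidecar still needs <code>privileged: true</code> for the FUSE mount; only the main container runs unprivileged):</p> <pre><code>containers:
  - name: main-container
    volumeMounts:
      - name: shared-storage
        mountPath: /shared_storage
        mountPropagation: HostToContainer
        readOnly: true               # or false, per mount
  - name: sidecar-container
    securityContext:
      privileged: true
    volumeMounts:
      - name: shared-storage
        mountPath: /shared_storage
        mountPropagation: Bidirectional
</code></pre>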
<p>I would like to validate, that deployments, which have Pod- and NodeAffinities (+AntiAffinity) are configured according to internal guidelines.</p> <p>Is there a possibility to get deployments (or Pods) using kubectl and limit the result to Objects, that have such an affinity configured?</p> <p>I have played around with the jsonpath output, but was unsuccessful so far.</p>
<p>hope you are enjoying your Kubernetes journey !</p> <p>If you need to use affinities (especially with <code>preferredDuringSchedulingIgnoredDuringExecution</code> (explications below)) and just want to just &quot;find&quot; deployments that actually have affinities, you can use this:</p> <pre><code>❯ k get deploy -o custom-columns=NAME:&quot;.metadata.name&quot;,AFFINITIES:&quot;.spec.template.spec.affinity&quot; NAME AFFINITIES nginx-deployment &lt;none&gt; nginx-deployment-vanilla &lt;none&gt; nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]] </code></pre> <p>Every <code>&lt;none&gt;</code> pattern indicates that there is no affinity in the deployment.</p> <p>However, with affinities, if you want to get only the deployments that have affinities without the deployments that don't have affinities, use this:</p> <pre><code>❯ k get deploy -o custom-columns=NAME:&quot;.metadata.name&quot;,AFFINITIES:&quot;.spec.template.spec.affinity&quot; | grep -v &quot;&lt;none&gt;&quot; NAME AFFINITIES nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]] </code></pre> <p>And if you just want the names of the deployments that have affinities, consider using this little script:</p> <pre><code>❯ k get deploy -o custom-columns=NAME:&quot;.metadata.name&quot;,AFFINITIES:&quot;.spec.template.spec.affinity&quot; --no-headers | grep -v &quot;&lt;none&gt;&quot; | awk '{print $1}' nginx-deployment-with-affinities </code></pre> <p>But, do not forget that <code>nodeSelector</code> is the simplest way to constrain Pods to nodes with specific labels. (more info here: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity</a>). Also remember that (according to the same link) the <code>requiredDuringSchedulingIgnoredDuringExecution</code> type of node Affinity functions like nodeSelector, but with a more expressive syntax ! So If you don't need <code>preferredDuringSchedulingIgnoredDuringExecution</code> when dealing with affinities consider using nodeSelector !</p> <p>After reading the above link, if you want to deal with nodeSelector you can use the same mechanic I used before:</p> <pre><code>❯ k get deploy -o custom-columns=NAME:&quot;.metadata.name&quot;,NODE_SELECTOR:&quot;.spec.template.spec.nodeSelector&quot; NAME NODE_SELECTOR nginx-deployment map[test-affinities:test] nginx-deployment-vanilla &lt;none&gt; </code></pre>
<p>created a very simple nginx pod and run into status <code>ImagePullBackoff</code></p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32m default-scheduler Successfully assigned reloader/nginx to aks-appnodepool1-22779252-vmss000000 Warning Failed 29m kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to resolve reference &quot;docker.io/library/nginx:latest&quot;: failed to do request: Head &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot;: dial tcp 52.200.78.26:443: i/o timeout Warning Failed 27m kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to resolve reference &quot;docker.io/library/nginx:latest&quot;: failed to do request: Head &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot;: dial tcp 52.21.28.242:443: i/o timeout Warning Failed 23m kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to resolve reference &quot;docker.io/library/nginx:latest&quot;: failed to do request: Head &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot;: dial tcp 3.223.210.206:443: i/o timeout Normal Pulling 22m (x4 over 32m) kubelet Pulling image &quot;nginx&quot; Warning Failed 20m (x4 over 29m) kubelet Error: ErrImagePull Warning Failed 20m kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to resolve reference &quot;docker.io/library/nginx:latest&quot;: failed to do request: Head &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot;: dial tcp 3.228.155.36:443: i/o timeout Warning Failed 20m (x7 over 29m) kubelet Error: ImagePullBackOff Warning Failed 6m41s kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to resolve reference &quot;docker.io/library/nginx:latest&quot;: failed to do request: Head &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot;: dial tcp 52.5.157.114:443: i/o timeout Normal BackOff 2m17s (x65 over 29m) kubelet Back-off pulling image &quot;nginx&quot; </code></pre> <p>Checked network status:</p> <ul> <li>A VM in the same subnet can access &quot;https://registry-1.docker.io/v2/library/nginx/manifests/latest&quot; and <code>telnet 52.5.157.114 443</code> successful.</li> <li><code>docker pull nginx</code> successfully on the VM in the same subnet.</li> <li><code>kubectl exec</code> into a running pod in the same cluster can <code>wget https://registry-1.docker.io/v2/library/nginx/manifests/latest</code> successfully. . What is the possible problem?</li> </ul>
<p>When I wget/curl the URL you are trying to access,</p> <pre><code>https://registry-1.docker.io/v2/library/nginx/manifests/latest </code></pre> <p>it says:</p> <pre><code>{&quot;errors&quot;:[{&quot;code&quot;:&quot;UNAUTHORIZED&quot;,&quot;message&quot;:&quot;authentication required&quot;,&quot;detail&quot;:[{&quot;Type&quot;:&quot;repository&quot;,&quot;Class&quot;:&quot;&quot;,&quot;Name&quot;:&quot;library/nginx&quot;,&quot;Action&quot;:&quot;pull&quot;}]}]} </code></pre> <p>This is because you need to be logged in to pull this image from this repository.</p> <p>Two solutions:</p> <p>The first is simple: in the image field, just use <code>nginx:latest</code> and it should work.</p> <p>The second: create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">regcred</a> (an image pull Secret) and reference it from the pod.</p>
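<p>For the second option, once the <code>regcred</code> Secret exists in the namespace, it is referenced from the pod spec like this (a sketch — the Secret name is whatever you created):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
  imagePullSecrets:
    - name: regcred   # docker-registry Secret created beforehand
</code></pre>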
<p>I have a kubernetes cluster with nginx-ingress and I try to redirect <a href="http://test.domain.com" rel="nofollow noreferrer">http://test.domain.com</a> to <a href="http://www.test.domaine.com" rel="nofollow noreferrer">http://www.test.domaine.com</a> with nginx.ingress.kubernetes.io/from-to-www-redirect annotation.</p> <p>But it doesn't seem to work.</p> <p>My ingress ressource :</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; spec: rules: - host: www.test.domain.com http: paths: - path: / backend: serviceName: app servicePort: 8080 - host: test.domain.com http: paths: - path: / backend: serviceName: app servicePort: 8080 </code></pre> <p>I have tried many configuration but I can't make it work. The nginx.ingress.kubernetes.io/from-to-www-redirect annotation have no effect!</p>
<p>This is a community wiki answer. Feel free to expand it.</p> <p>In order to make it work you also need to use the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">annotation</a>. For example:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'example.com' ) { rewrite ^ https://www.example.com$request_uri permanent; } </code></pre> <p>As already mentioned by @apoorvakamath in the comments, you can refer to <a href="https://www.informaticsmatters.com/blog/2020/06/03/redirecting-to-www.html" rel="nofollow noreferrer">this guide</a> for a step by step detailed example:</p> <blockquote> <p>The snippet is the clever aspect fo the solution. It allows you to add dynamic configuration to the ingress controller. We use it to detect the use of the non-www form of the URL and then simply issue a re-write that is pushed out (and back via your domain) to the www form.</p> </blockquote>