Question | QuestionAuthor | Answer | AnswerAuthor
<p>I have the following question:</p> <p>I am using the open-source Grafana service on an Azure Virtual Machine, and I would like to see Kubernetes SLA metrics in Grafana. When I googled for it, I found the following page: <a href="https://grafana.com/docs/grafana-cloud/kubernetes/" rel="nofollow noreferrer">https://grafana.com/docs/grafana-cloud/kubernetes/</a></p> <p>When I tried to follow the tutorial, I was not able to find the onboarding or lightning icon. Is this integration only for Grafana Enterprise? If not, please let me know how to proceed further.</p> <p>Thanks for the answers in advance.</p> <p>Regards, Chaitanya</p>
chaitanya kumar Dondapati
<p>There is now an Azure Managed Grafana service on Azure. When you create an instance you can grant it access to Azure Monitor, which will serve up statistics from your AKS clusters.</p>
Mitch Denny
<p>I'm using the Ignite chart in Kubernetes as an in-memory deployment without persistent volumes. How can I configure default tables to be created automatically after a restart of all Ignite pods?</p>
NoamiA
<p>You can specify them in your <code>IgniteConfiguration</code> using Java or Spring XML, via the Query Entities mechanism:</p> <p><a href="https://ignite.apache.org/docs/latest/SQL/indexes#configuring-indexes-using-query-entities" rel="nofollow noreferrer">https://ignite.apache.org/docs/latest/SQL/indexes#configuring-indexes-using-query-entities</a></p> <p>In this case all the caches and corresponding tables will be recreated when the cluster is started.</p>
alamar
<p>I'm adding a proxy in front of kubernetes API in order to authenticate users (among other actions) with a homemade authentication system.</p> <p><a href="https://i.stack.imgur.com/7OevY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7OevY.png" alt="enter image description here"></a></p> <p>I've modified my kube configuration to have kubectl hitting the proxy. The proxy has its own kubeconfig with a valid certificate-authority-data, so I don't need any credentials on my side.</p> <p>So far this is working fine, here is the minimum configuration I need locally:</p> <pre><code>clusters: - cluster: server: http://localhost:8080 name: proxy contexts: - context: cluster: proxy name: proxy current-context: proxy </code></pre> <p>Now the authentication should be based on a token, that I hoped I would be able to pass as part of the kubectl request header.</p> <p>I tried multiple configuration, adding a user with a token in the kubeconfig such as</p> <pre><code>clusters: - cluster: server: http://localhost:8080 name: proxy contexts: - context: cluster: proxy user: robin name: proxy current-context: proxy users: - name: robin user: token: my-token </code></pre> <p>Or specifying a auth-provider such as</p> <pre><code>clusters: - cluster: server: http://localhost:8080 name: proxy contexts: - context: cluster: proxy user: robin name: proxy current-context: proxy users: - name: robin user: auth-provider: config: access-token: my-token </code></pre> <p>I even tried without any user, just by adding my token as part of the preferences, as all I want is to have the token in the header</p> <pre><code>clusters: - cluster: server: http://localhost:8080 name: proxy contexts: - context: cluster: proxy name: proxy current-context: proxy preferences: token: my-token </code></pre> <p>But I was never able to see my-token as part of the request header on the proxy side. Dumping the request, all I got is:</p> <pre><code>GET /api/v1/namespaces/default/pods?limit=500 HTTP/1.1 Host: localhost:8080 Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json Accept-Encoding: gzip User-Agent: kubectl/v1.11.0 (darwin/amd64) kubernetes/91e7b4f </code></pre> <p>I am obviously missing something here, how can kubectl not pass the user information in its header? Let's say I do not have a proxy, how is the "kubectl -> kubernetes" token authentication working?</p> <p>If someone has any experience at adding this kind of authentication layer between kubernetes and a client, I could use some help :) </p>
Charrette
<p>Token credentials are only sent over TLS-secured connections. The server must be https://...</p>
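<p>For example, a minimal kubeconfig along these lines (the proxy would have to serve TLS; the port, CA file path and token value are placeholders) makes kubectl send an <code>Authorization: Bearer my-token</code> header with every request:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://localhost:8443
    certificate-authority: /path/to/proxy-ca.crt   # CA that signed the proxy's serving certificate
  name: proxy
contexts:
- context:
    cluster: proxy
    user: robin
  name: proxy
current-context: proxy
users:
- name: robin
  user:
    token: my-token   # sent as Authorization: Bearer my-token over the TLS connection
</code></pre>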
Jordan Liggitt
<p>While deploying a Kubernetes application, I want to check if a particular PodSecurityPolicy exists, and if it does then skip installing it again. I came across the <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="noreferrer">helm lookup function</a>, which allows us to check the existing K8 resources. While I understand how to use this function to get all the resources of same kind, how do I use this function to check if a PodSecurityPolicy named &quot;myPodSecurityPolicy&quot; exists.</p> <p>I tried something like this:</p> <pre><code>{{- if ne (lookup &quot;v1&quot; &quot;PodSecurityPolicy&quot; &quot;&quot; &quot;&quot;) &quot;myPodSecurityPolicy&quot;}} &lt;do my stuff&gt; {{- end }} </code></pre> <p>But it doesn't look like I can compare it this way, seeing an error -</p> <pre><code>error calling ne: invalid type for comparison </code></pre> <p>Any inputs? Thanks in advance.</p>
user4202236
<p>Please check your API version and PSP name. Lookup returns a <code>map</code> or <code>nil</code>, not a string, and that's why you are getting that error. The following works for me. For the negative expression, just add <code>not</code> after <code>if</code>.</p> <pre><code>{{- if (lookup &quot;policy/v1beta1&quot; &quot;PodSecurityPolicy&quot; &quot;&quot; &quot;example&quot;) }} &lt;found: do your stuff&gt; {{- end }} </code></pre> <p>HTH</p>
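<p>For the negative check described above, a sketch using the policy name from the question would look like this:</p>
<pre><code>{{- if not (lookup &quot;policy/v1beta1&quot; &quot;PodSecurityPolicy&quot; &quot;&quot; &quot;myPodSecurityPolicy&quot;) }}
&lt;not found: create the PodSecurityPolicy here&gt;
{{- end }}
</code></pre>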
Faheem
<p>I'm trying to lock down a namespace in kubernetes using <strong>RBAC</strong> so I followed this <a href="https://blog.viktorpetersson.com/2018/06/15/kubernetes-rbac.html" rel="nofollow noreferrer">tutorial</a>.<br> I'm working on a <strong>baremetal cluster</strong> (no minikube, <strong>no cloud provider</strong>) and installed kubernetes using Ansible.</p> <p>I created the folowing <strong>namespace :</strong></p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: lockdown </code></pre> <p><strong>Service account :</strong></p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: sa-lockdown namespace: lockdown </code></pre> <p><strong>Role :</strong></p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: lockdown rules: - apiGroups: [""] # "" indicates the core API group resources: [""] verbs: [""] </code></pre> <p><strong>RoleBinding :</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: rb-lockdown subjects: - kind: ServiceAccount name: sa-lockdown roleRef: kind: Role name: lockdown apiGroup: rbac.authorization.k8s.io </code></pre> <p>And finally I tested the authorization using the next command</p> <pre><code>kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown </code></pre> <p>This <strong>SHOULD</strong> be returning "No" but I got "Yes" :-( </p> <p>What am I doing wrong ?<br> Thx</p>
Doctor
<p>A couple possibilities:</p> <ol> <li>are you running the "can-i" check against the secured port or unsecured port (add --v=6 to see). Requests made against the unsecured (non-https) port are always authorized. </li> <li>RBAC is additive, so if there is an existing clusterrolebinding or rolebinding granting "get pods" permissions to that service account (or one of the groups system:serviceaccounts:lockdown, system:serviceaccounts, or system:authenticated), then that service account will have that permission. You cannot "ungrant" permissions by binding more restrictive roles</li> </ol>
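<p>Both checks can be done from the command line; the binding search below is just one way to spot an existing grant (it only greps for the service account name, so also check the groups mentioned above):</p>
<pre class="lang-bash prettyprint-override"><code># --v=6 shows the URL (and therefore the scheme/port) the check is sent to
kubectl auth can-i get pods --namespace lockdown \
  --as system:serviceaccount:lockdown:sa-lockdown --v=6

# look for existing bindings that already include the service account
kubectl get clusterrolebindings -o yaml | grep -n -B 10 sa-lockdown
kubectl get rolebindings --all-namespaces -o yaml | grep -n -B 10 sa-lockdown
</code></pre>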
Jordan Liggitt
<p>I would like full details on NodeGroups in an EKS cluster, for example volume size as set using <code>--node-volume-size</code> in the command <code>eksctl create nodegroup ...</code> but also many other details of a NodeGroup that can be set with that command.</p> <p><code>eksctl get nodegroup</code> gives limited data, omitting volume size. See below.</p> <p><code>kubectl get node</code> (or <code>kubectl describe node</code>) gives more information (see at bottom). However, this is information for the Node, not the Node Group. Node Groups have their own details, such as configuration for auto-scaling, and in fact can be zero-sized. Also, the <code>kubectl</code> output does not match the <code>--node-volume-size</code> value -- in this case 33 GB, as can be confirmed in the AWS EBS console.</p> <p>I need data on the level of <code>eksctl</code> (EC2 VMs) rather than <code>kubectl</code> (Kubernetes nodes), though of course these do align. </p> <p>This is just one example of the fields set in <code>eksctl create nodegroup ...</code> which are not in the (rather thin-looking) JSON. How can I get a full description of the Node Group?</p> <pre><code>$ eksctl get nodegroup --name one-node-group --cluster clus-bumping --region=us-east-2 --output json [ { "StackName": "eksctl-cluster1-nodegroup-one-node-group", "Cluster": "cluster1", "Name": "one-node-group", "MaxSize": 1, "MinSize": 1, "DesiredCapacity": 1, "InstanceType": "t2.small", "ImageID": "", "CreationTime": "2020-05-27T07:18:32.496Z", "NodeInstanceRoleARN": "" } ] </code></pre> <p>Relevant output of <code>kubectl</code> concerning volumes:</p> <pre><code>% kubectl describe node Name: ip-192-168-39-36.us-east-2.compute.internal ... Capacity: ... attachable-volumes-aws-ebs: 39 ... ephemeral-storage: 34590700Ki ... Allocatable: attachable-volumes-aws-ebs: 39 ephemeral-storage: 30805047244 ... Resource Requests Limits ... -------- -------- ------ ephemeral-storage 0 (0%) 0 (0%) ... attachable-volumes-aws-ebs 0 0 ... </code></pre>
Joshua Fox
<p>Answer: This is simply a limitation in <code>eksctl</code>. The limitation exists because the node-group configuration is not implicit in the EC2-based cluster itself, but rather it would need to be saved specially in cluster metadata.</p> <p>See eksctl GitHub issues: <a href="https://github.com/weaveworks/eksctl/issues/2255" rel="nofollow noreferrer">2255</a> and <a href="https://github.com/weaveworks/eksctl/issues/642" rel="nofollow noreferrer">642</a>.</p> <p>Yet the information <em>does</em> exist, and you can get it with the AWS SDK for EKS function <a href="https://docs.aws.amazon.com/sdk-for-go/v2/api/service/eks/#Client.DescribeNodegroupRequest" rel="nofollow noreferrer"><code>DescribeNodegroup</code></a>.</p>
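<p>For reference, the same data is reachable from the AWS CLI as well, at least for EKS managed node groups (the field names below come from the <code>DescribeNodegroup</code> response):</p>
<pre class="lang-bash prettyprint-override"><code>aws eks describe-nodegroup \
  --cluster-name clus-bumping \
  --nodegroup-name one-node-group \
  --region us-east-2 \
  --query 'nodegroup.{diskSize: diskSize, instanceTypes: instanceTypes, scaling: scalingConfig}'
</code></pre>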
Joshua Fox
<p>In Ubuntu <code>16.04</code> I'm trying to deploy <code>Kubespray2.5</code> using Ansible Playbook(<code>2.9.7)</code> command and getting error:</p> <p>I have deployed kubespray many times with version 2.5 but this time only i am getting this error. Please help to me.</p> <p><code>ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml</code></p> <pre><code> TASK [docker : ensure docker-engine repository public key is installed] ******************************************************* Friday 08 May 2020 16:28:44 +0530 (0:00:20.450) 0:04:46.454 ************ FAILED - RETRYING: ensure docker-engine repository public key is installed (4 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (4 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (4 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (3 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (3 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (3 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (2 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (2 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (2 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (1 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (1 retries left). FAILED - RETRYING: ensure docker-engine repository public key is installed (1 retries left). failed: [node2] (item=58118E89F3A912897C070ADBF76221572C52609D) =&gt; {"attempts": 4, "changed": false, "item": "58118E89F3A912897C070ADBF76221572C52609D", "msg": "Failed to download key at https://apt.dockerproject.org/gpg: HTTP Error 404: Not Found"} failed: [node3] (item=58118E89F3A912897C070ADBF76221572C52609D) =&gt; {"attempts": 4, "changed": false, "item": "58118E89F3A912897C070ADBF76221572C52609D", "msg": "Failed to download key at https://apt.dockerproject.org/gpg: HTTP Error 404: Not Found"} failed: [node1] (item=58118E89F3A912897C070ADBF76221572C52609D) =&gt; {"attempts": 4, "changed": false, "item": "58118E89F3A912897C070ADBF76221572C52609D", "msg": "Failed to download key at https://apt.dockerproject.org/gpg: HTTP Error 404: Not Found"} NO MORE HOSTS LEFT ************************************************************************************************************ to retry, use: --limit @/root/kubespray/cluster.retry PLAY RECAP ******************************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 node1 : ok=62 changed=0 unreachable=0 failed=1 node2 : ok=64 changed=0 unreachable=0 failed=1 node3 : ok=62 changed=9 unreachable=0 failed=1 </code></pre>
Ranvijay Sachan
<p>The <a href="https://apt.dockerproject.org/" rel="nofollow noreferrer">https://apt.dockerproject.org/</a> repository has been shut down on March 31, 2020.</p> <p>Your playbook is outdated; acquire a newer version or adjust it according to the linked instructions yourself.</p>
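<p>One way to do that, assuming Kubespray was installed from a git checkout (the branch name below is only an example; any release that has moved to <code>download.docker.com</code> will do):</p>
<pre class="lang-bash prettyprint-override"><code>cd kubespray
git fetch --tags origin
git checkout release-2.13          # example of a newer release branch
pip install -r requirements.txt    # requirements change between releases
</code></pre>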
AKX
<p>I am deploying angular application on kubernetes, After deployment pod is up and running, but when I am trying to access the application through ingress it is giving 502 bad gateway error. The application was working fine until I made some recent functional changes and redeploy using the yaml/config files that was used for the initial deployment. I'm clueless what is wrong here</p> <p>Note:</p> <p>1.This is not the duplicate of <a href="https://stackoverflow.com/questions/72064326/nginx-502-bad-gateway-error-after-deploying-the-angular-application-to-kubernetes">72064326</a>, as the server is listening to correct port on nginx.conf</p> <p>Here are my files</p> <p>1.Docker file</p> <pre><code># stage1 as builder FROM node:16.14.0 as builder FROM nginx:alpine #!/bin/sh ## Remove default nginx config page # RUN rm -rf /etc/nginx/conf.d # COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf COPY ./.nginx/nginx.conf /etc/nginx/conf.d/default.conf ## Remove default nginx index page RUN rm -rf /usr/share/nginx/html/* # Copy from the stahg 1 COPY dist/appname /usr/share/nginx/html EXPOSE **8080** ENTRYPOINT [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;] </code></pre> <p>nginx.conf (custom nginx)</p> <pre><code> server { listen 8080; root /usr/share/nginx/html; include /etc/nginx/mime.types; location /appname/ { root /usr/share/nginx/html; index index.html index.htm; try_files $uri $uri/ /index.html =404; } location ~ \.(js|css) { root /usr/share/nginx/html; # try finding the file first, if it's not found we fall # back to the meteor app try_files $uri /index.html =404; } } </code></pre> <p>3.Deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: com.xxx.path: /platform/custom name: appname namespace: yyyyyy spec: selector: matchLabels: io.kompose.service: appname replicas: 1 template: metadata: labels: clusterName: custom2 department: customplatform io.kompose.service: appname com.xxxx.monitor: application com.xxxx.platform: custom com.xxxx.service: appname spec: containers: - env: - name: ENVIRONMENT value: yyyyyy resources: requests: memory: &quot;2048Mi&quot; limits: memory: &quot;4096Mi&quot; image: cccc.rrr.xxxx.aws/imgrepo/imagename:latest imagePullPolicy: Always securityContext: name: image ports: - containerPort: 8080 restartPolicy: Always </code></pre> <p>Service.yaml</p> <pre><code>kind: Service metadata: annotations: com.xxxx.path: /platform/custom labels: clusterName: custom2 department: customplatform io.kompose.service: appname com.xxxx.monitor: application com.xxxx.platform: custom com.xxxx.service: appname name: appname namespace: yyyyyy spec: ports: - name: &quot;appname&quot; port: **8080** targetPort: 8080 selector: io.kompose.service: appname </code></pre> <p>5.Ingress</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: custom-ingress namespace: yyyyyy annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/proxy-redirect-from: &quot;http://custom-yyyyyy.dev.xxxx.aws:8080/&quot; nginx.ingress.kubernetes.io/proxy-redirect-to: &quot;$scheme://$http_host/&quot; spec: rules: - host: custom-yyyyyy.dev.xxxx.aws http: paths: - backend: serviceName: appname servicePort: 8080 path: /appname ```[![appliction screenshot][1]][1] [1]: https://i.stack.imgur.com/CX3k1.png </code></pre>
suku
<p>The screenshot you have attached shows an nginx error. Initially I thought it meant that it was a configuration error on your pod (an error in the actual container).</p> <p>But then I noticed you are using an NGINX ingress controller, so most likely the issue is in the ingress controller.</p> <p>I would proceed mechanically as with anything related to Kubernetes ingress.</p> <p>In particular:</p> <ol> <li>Check the logs on the ingress controller, for error messages. In particular, I don't have experience with the NGINX ingress controller, but health checking in mixed protocols (https external, http in the service) tends to be tricky. With the ALB controller, I always check that the target groups have backend services. And in your case I would first test without the redirect-from and redirect-to annotations. Again I haven't used the NGINX controller but <code>&quot;$scheme://$http_host/&quot;</code> looks strange.</li> <li>Check that the service has endpoints defined (<code>kubectl get endpoints appname -n yyyyyy</code>) which will tell you if the pods are running and if the service is connected to the pods.</li> </ol>
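<p>For example (the namespace and deployment name below assume a standard <code>ingress-nginx</code> install; adjust to your setup):</p>
<pre class="lang-bash prettyprint-override"><code># 1. controller logs
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=100

# 2. the service must list pod IPs under ENDPOINTS; if it is empty, the selector/labels don't match
kubectl get endpoints appname -n yyyyyy
kubectl describe svc appname -n yyyyyy
</code></pre>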
Gonfva
<p>I am trying to mount a directory called rstudio which is residing in /mnt/rstudio. But when I try to mount using persistent volume, the directory is showing up but not the files inside rstudio. Here's my deployment file</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: rsp-deployment spec: selector: matchLabels: app: rsp replicas: 1 strategy: {} template: metadata: labels: app: rsp spec: nodeSelector: kubernetes.io/hostname: testserver.local volumes: - name: rsp-persistent-storage persistentVolumeClaim: claimName: pv-claim-rsp containers: - env: - name: RSP_LICENSE value: MY LICENSE image: rstudio/rstudio-server-pro:latest name: rstudio-server-pro ports: - containerPort: 8787 - containerPort: 5559 volumeMounts: - name: rsp-persistent-storage mountPath: /tmp/rstudio resources: {} securityContext: privileged: true restartPolicy: Always status: {} --- kind: Service apiVersion: v1 metadata: name: rstudio-server-pro spec: selector: app: rsp ports: - protocol: TCP name: &quot;8787&quot; port: 8787 targetPort: 8787 type: NodePort </code></pre> <p>And my pv and pvc files are as follows</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-volume-rsp spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/rstudio&quot; --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-claim-rsp spec: accessModes: - ReadWriteOnce storageClassName: &quot;&quot; resources: requests: storage: 5Gi </code></pre> <p>Inside my /mnt/rstudio there are these much files.</p> <pre><code>[root@test-server rstudio]# ls launcher.conf launcher-mounts launcher.pub rserver.conf launcher.kubernetes.conf launcher-mounts_working logging.conf rsession.conf launcher.kubernetes.profiles.conf launcher.pem notifications.conf r-versions </code></pre> <p>But after the pod is up and running, the directory is showing empty. Any idea why? Thanks in advance!</p>
Siddharth
<p>LGTM. I am getting files if I swap the image with <code>nginx</code>. I would check two things:</p> <ol> <li>Permissions: Check what permissions the files have. You may have to update your permissions or UID to access the files.</li> <li>Does <code>rstudio</code> image use that path? It may be processing that folder when it starts. Try mounting to a different path and see if you can see the files.</li> </ol> <p>Also, make sure you are launching the pod on the node where the host path exists. I am assuming <code>testserver.local</code> and <code>test-server</code> are the same.</p> <p>HTH</p>
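<p>A quick way to check both points (the label selector and paths below are taken from the question):</p>
<pre class="lang-bash prettyprint-override"><code># confirm the pod was scheduled on the node that actually has /mnt/rstudio
kubectl get pods -l app=rsp -o wide

# inspect ownership/permissions of the mounted files from inside the container
kubectl exec -it $(kubectl get pod -l app=rsp -o name | head -n 1) -- ls -la /tmp/rstudio
</code></pre>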
Faheem
<p><strong>TL;DR</strong></p> <p>My pods mounted Azure file shares are (inconsistently) being deleted by either Kubernetes / Helm when deleting a deployment.</p> <p><strong>Explanation</strong></p> <p>I've recently transitioned to using Helm for deploying Kubernetes objects on my Azure Kubernetes Cluster via the DevOps release pipeline.</p> <p>I've started to see some unexpected behaviour in relation to the Azure File Shares that I mount to my Pods (as Persistent Volumes with associated Persistent Volume Claims and a Storage Class) as part of the deployment.</p> <p>Whilst I've been finalising my deployment, I've been pushing out the deployment via the Azure Devops release pipeline using the built in Helm tasks, which have been working fine. When I've wanted to fix / improve the process I've then either manually deleted the objects on the Kubernetes Dashboard (UI), or used Powershell (command line) to delete the deployment.</p> <p>For example:</p> <pre><code>helm delete myapp-prod-73 helm del --purge myapp-prod-73 </code></pre> <p>Not every time, but more frequently, I'm seeing the underlying Azure File Shares also being deleted as I'm working through this process. There's very little around the web on this, but I've also seen an article outlining similar issues over at: <a href="https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting" rel="nofollow noreferrer">https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting</a>.</p> <p>Has anyone in the community come across this issue?</p>
Matt Woodward
<p><strong>Credit goes to</strong> <a href="https://twitter.com/tomasrestrepo" rel="nofollow noreferrer">https://twitter.com/tomasrestrepo</a> for pointing me in the right direction (the author of the article I mentioned above).</p> <p>The behaviour here was a consequence of having the Reclaim Policy on the Storage Class &amp; Persistent Volume set to "Delete". When switching over to Helm, I began following their commands to Delete / Purge the releases as I was testing. What I didn't realise was that deleting the release would also mean that Helm / K8s would reach out and delete the underlying Volume (in this case an Azure Fileshare). This is documented over at: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete</a></p> <p>I'll leave this Q &amp; A here for anyone else that misses this subtlety in the way in which Storage Classes, Persistent Volumes (PVs) &amp; the underlying storage operate under K8s / Helm.</p> <p><strong>Note</strong>: I think this issue was made slightly more obscure by the fact I was manually creating the Azure Fileshare (through the Azure Portal) and trying to mount that as a static volume (as per <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/azure-files-volume</a>) within my Helm Chart, but that the underlying volume wasn't immediately being deleted when the release was deleted (sometimes an hour later?).</p>
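<p>For anyone hitting the same thing: a statically created share can be kept by setting the reclaim policy to <code>Retain</code> on the PV (the names below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-azurefile-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the Azure File share when the claim/release is deleted
  azureFile:
    secretName: azure-storage-secret      # secret holding azurestorageaccountname/azurestorageaccountkey
    shareName: myshare
    readOnly: false
</code></pre>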
Matt Woodward
<p>I am trying to automate my deployment to Azure AKS, but trying to work out how to reference the image name in my manifest file. At the moment I have commented out the image name in the manifest file so see if that works but getting an error:</p> <blockquote> <p>##[error]TypeError: Cannot read property 'trim' of undefined</p> </blockquote> <p><strong>This is my Github workflow file:</strong></p> <pre><code>on: [push] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@master - uses: Azure/docker-login@v1 with: login-server: registry.azurecr.io username: ${{ secrets.REGISTRY_USERNAME }} password: ${{ secrets.REGISTRY_PASSWORD }} - run: | docker build . --file Dockerfile_nginx -t registry.azurecr.io/k8sdemo:${{ github.sha }} docker push registry.azurecr.io/k8sdemo:${{ github.sha }} - uses: Azure/k8s-set-context@v1 with: kubeconfig: ${{ secrets.KUBE_CONFIG }} - uses: Azure/k8s-deploy@v1 with: manifests: | k8s/mg-k8s/nginx.yaml images: | registry.azurecr.io/k8sdemo:${{ github.sha }} imagepullsecrets: | secret </code></pre> <p><strong>This is my manifest file:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginxstaticservice namespace: mg-staging labels: app: nginxstatic spec: selector: k8s-app: traefik-ingress-lb ports: - name: http port: 80 targetPort: 80 protocol: TCP # selector: # app: nginxstatic --- apiVersion: apps/v1 kind: Deployment metadata: name: nginxstatic-deployment namespace: mg-staging labels: app: nginxstatic spec: replicas: 1 selector: matchLabels: app: nginxstatic template: metadata: labels: app: nginxstatic spec: containers: - name: nginxstatic # image: imagePullPolicy: &quot;Always&quot; ports: - containerPort: 80 volumeMounts: - name: nginx-config mountPath: /etc/nginx/conf.d volumes: - name: nginx-config configMap: name: nginx-configmap imagePullSecrets: - name: secret </code></pre>
Rutnet
<p>Update: @Rutnet figured out the way to pass the new tag using the <code>Azure/k8s-deploy@v1</code> action. From the <a href="https://github.com/Azure/k8s-deploy" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>(Optional) Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files. This multiline input accepts specifying multiple artifact substitutions in newline separated form. For example -</p> <pre><code>images: | contosodemo.azurecr.io/foo:test1 contosodemo.azurecr.io/bar:test2 </code></pre> <p>In this example, all references to <code>contosodemo.azurecr.io/foo</code> and <code>contosodemo.azurecr.io/bar</code> are searched for in the image field of the input manifest files. For the matches found, the tags test1 and test2 are substituted.</p> </blockquote> <p>Based on the documentation, the manifest file needs to have references to the original image with a default tag. The action will replace the tags with the ones specified. The manifest in question has the <code>image</code> commented. It should have been something like:</p> <pre class="lang-yaml prettyprint-override"><code> spec: containers: - name: nginxstatic image: registry.azurecr.io/k8sdemo:some_tag </code></pre> <hr /> <p>Original Reply:</p> <p>There are several ways of achieving this. You can use templating tools like <a href="https://helm.sh/docs/intro/quickstart/" rel="nofollow noreferrer">Helm</a> or <a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">Kustomize</a>. In this case, you can just use <a href="https://askubuntu.com/a/20416">sed</a> before you apply the manifest. Add a placeholder in the manifest file and replace that with sed inline. See the following example:</p> <pre><code>... - run: | sed -i.bak &quot;s/NGINX_IMAGE_URL/registry.azurecr.io\/k8sdemo:${{ github.sha }}/&quot; k8s/mg-k8s/nginx.yaml - uses: Azure/k8s-deploy@v1 with: manifests: | k8s/mg-k8s/nginx.yaml images: | registry.azurecr.io/k8sdemo:${{ github.sha }} imagepullsecrets: | secret ... </code></pre> <p>Add the NGINX_IMAGE_URL placeholder in the manifest file:</p> <pre><code>... spec: containers: - name: nginxstatic image: NGINX_IMAGE_URL ... </code></pre> <p>HTH</p>
Faheem
<p>My deployment is working fine. I am just trying to use a local persistent volume to store my application's data locally. After that I am getting the error below.</p> <blockquote> <p>error: error validating &quot;xxx-deployment.yaml&quot;: error validating data: ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field &quot;volumeMounts&quot; in io.k8s.api.core.v1.LocalObjectReference; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: xxx namespace: xxx spec: selector: matchLabels: app: xxx replicas: 3 template: metadata: labels: app: xxx spec: containers: - name: xxx image: xxx:1.xx imagePullPolicy: &quot;Always&quot; stdin: true tty: true ports: - containerPort: 80 imagePullPolicy: Always imagePullSecrets: - name: xxx volumeMounts: - mountPath: /data name: xxx-data restartPolicy: Always volumes: - name: xx-data persistentVolumeClaim: claimName: xx-xx-pvc </code></pre>
Dharmendra jha
<p>You need to move the <code>imagePullSecrets</code> further down. It's breaking the container spec. <code>imagePullSecrets</code> is defined at the pod spec level while <code>volumeMounts</code> belongs to the container spec.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: xxx namespace: xxx spec: selector: matchLabels: app: xxx replicas: 3 template: metadata: labels: app: xxx spec: containers: - name: xxx image: xxx:1.xx imagePullPolicy: Always stdin: true tty: true ports: - containerPort: 80 volumeMounts: - mountPath: /data name: xxx-data imagePullSecrets: - name: xxx restartPolicy: Always volumes: - name: xxx-data persistentVolumeClaim: claimName: xx-xx-pvc </code></pre>
Faheem
<p>Want to understand how pod1 claimed PVC with <code>accessMode: ReadWriteOnce</code> is able to share with pod2 when <code>storageclass glusterfs</code> is created?Shouldn't it fail as I need to specify the <code>accessMode</code> as <code>ReadWriteMany</code>?</p> <p>-> Created <code>storageclass</code> as <code>glusterfs</code> with <code>type:distributed</code></p> <p>-> PV created on top of the <code>storageclass</code> above and pvc is done with <code>AccessMode: ReadWriteOnce</code></p> <p>-> First Pod attached the above PVC created</p> <p>-> Second Pod trying to attach the same PVC created and it does work and able to access the files which first pod created</p> <p>Tried another flow without a <code>storageclass</code> and directly creating PVC from the cinder storage and the below error shows up,</p> <p><code>Warning FailedAttachVolume 28s attachdetach-controller Multi-Attach error for volume "pvc-644f3e7e-8e65-11e9-a43e-fa163e933531" Volume is already used by pod(s) pod1</code></p> <p>Trying to understand why this is not happening when the <code>storageclass</code> is created and assigned to PV? </p> <p>How I am able to access the files from the second pod when the <code>AccessMode: ReadWriteOnce</code>? According to k8s documentation if multiple pods in different nodes need to access it should be ReadWriteMany. </p> <p>If <code>RWO</code> access mode works then is it safe for both the pods to read and write? Will there be any issues? What is the role of <code>RWX</code> if <code>RWO</code> works just fine in this case?</p> <p>Would be great if some experts can give an insight into this. Thanks.</p>
Melwyn Jensen
<p>Volumes are <code>RWO</code> per node, not per Pod. Volumes are mounted to the node and then bind mounted to containers. As long as pods are scheduled to the same node, <code>RWO</code> volume can be bind mounted to both containers at the same time.</p>
Tuminoid
<p>I'm using Azure Kubernetes Services and was wondering if creating a new load balancing service would require me to re-create my deployments. I've been experimenting a bit, but I can't tell if the responses from the service were just delayed because of startup time and I'm impatient, or if the loadbalancer doesn't create endpoints for existing deployments (which seems weird to me)</p>
Carson
<p>You don't need to redeploy the application if you just want to expose the service. A service can be exposed as a load balancer or as other types. When you create a service of type LoadBalancer, the cloud controllers in AKS will create the Load Balancer Azure resource and set up the backend configuration based on the existing endpoints. Azure load balancer provisioning may take some time and you can check the status with <code>kubectl get svc</code>. If the status of the <code>External-IP</code> is pending that means it's being created. The load balancer is created in a few minutes. If it takes longer, you may have to see if there are any permissions or other configuration issues.</p> <pre><code>$ kubectl create deploy nginx --image=nginx deployment.apps/nginx created $ kubectl expose deploy/nginx --port 80 --type LoadBalancer service/nginx exposed $ kubectl get svc nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx LoadBalancer 10.96.28.31 &lt;pending&gt; 80:30643/TCP 63s $ kubectl get svc nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx LoadBalancer 10.0.13.232 52.nnn.nnn.nn 80:31540/TCP 111s </code></pre>
Faheem
<p>Currently I'm using Kubernetes version 1.11.+. Previously I was always using the following command for my <em>cloud build</em> scripts:</p> <pre><code>- name: 'gcr.io/cloud-builders/kubectl' id: 'deploy' args: - 'apply' - '-f' - 'k8s' - '--recursive' env: - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}' - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}' </code></pre> <p>And the commands were working as expected; at that time I was using k8s version 1.10.+. However recently I got the following error:</p> <blockquote> <ul> <li>spec.clusterIP: Invalid value: "": field is immutable</li> <li>metadata.resourceVersion: Invalid value: "": must be specified for an update</li> </ul> </blockquote> <p>So I'm wondering if this is an expected behavior for Service resources?</p> <p>Here's my YAML config for my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {name} namespace: {namespace} annotations: beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}' spec: ports: - port: {port-num} targetPort: {port-num} selector: app: {label} environment: {env} type: NodePort </code></pre>
irvifa
<p>This is due to <a href="https://github.com/kubernetes/kubernetes/issues/71042" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/71042</a></p> <p><a href="https://github.com/kubernetes/kubernetes/pull/66602" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/66602</a> should be picked to 1.11</p>
Jordan Liggitt
<p>I am trying to create an ingress controller that points to a service that I have exposed via NodePort.</p> <p>Here is the yaml file for the ingress controller (taken from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a>):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: example-ingress spec: rules: - host: hello-world.info http: paths: - path: / backend: serviceName: appName servicePort: 80 </code></pre> <p>I can connect directly to the node port and the frontend is displayed.</p> <p>Please note that I am doing this because the frontend app is unable to connect to other deployments that I have created and I read that an ingress controller would be able to solve the issue. Will I still have to add an Nginx reverse proxy? If so how would I do that? I have tried adding this to the nginx config file but with no success.</p> <pre><code>location /middleware/ { proxy_pass http://middleware/; } </code></pre>
HonoredTarget
<p>You must use a proper hostname to reach the route defined in the <code>Ingress</code> object. Either update your <code>/etc/hosts</code> file or use a command like <code>curl -H &quot;Host: hello-world.info&quot; http://localhost</code>. Alternatively, you can delete the <code>host</code> mapping and redirect all traffic to one default service.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: example-ingress spec: rules: - http: paths: - path: / backend: serviceName: appName servicePort: 80 </code></pre>
Faheem
<p>I am deploying liquibase scripts using CI CD pipeline. There are some instances where a liquibase changeset transaction might take very long and the pod may go down during that time. For example, a changeset adds a new non-null column into an existing table which already has a million records. A default value will be added to the existing rows of table. I would like to know what happens when the pod goes down after few rows are updated with default value.</p>
San N
<p>The answer somewhat depends on your database and where it actually is killed. What liquibase tries to do is:</p> <p>First, update the databasechangeloglock table as &quot;locked&quot; and commit it</p> <p>Then, for each <em><strong>changeset</strong></em></p> <ol> <li>Start a transaction</li> <li>Execute each change in the changeSet, running the SQL statement(s) required</li> <li>Mark the changeset as run in the databasechangelog table</li> <li>Commit the transaction</li> </ol> <p>Finally, update the databasechangeloglock table as &quot;unlocked&quot; and commit it.</p> <p>If the pod is killed randomly in that process, the impact will depend on exactly where it was killed and what is going on.</p> <p>Most of the time is spent in #2 above, so that's likely where it is killed. Because we try to run in a transaction, when the connection is cut the database should automatically roll back the transaction. <em><strong>BUT:</strong></em> some statements are auto-committing and can mess that up and leave things partly done.</p> <p>If you have a changeset that is just doing an update of all the rows and the pod is killed during that, most databases can just roll back that update and none of the rows will be updated; the next time liquibase runs it knows the changeset has not been run and it will retry the update.</p> <p>If you have a changeset that adds a column AND updates the rows and it is killed during the update, most databases will have committed the &quot;add column&quot; so the rollback will only undo the update of values. And since the changeset is not marked as run, the next run will try to execute it again and will fail with a &quot;column already exists&quot; exception.</p> <p>For that reason, it's best to have a single change per changeSet unless they can all be run in a single transaction.</p> <p>If it fails anywhere else in that process, it's still the same &quot;database will roll back the current transaction, so it depends on what happens to be in the current transaction&quot;.</p> <p>Regardless of where it fails in the changeSet, you'll also have an issue with the &quot;unlock the databasechangeloglock table&quot; step not being run. The next liquibase run will block until it's unlocked. For managed CICD systems, the infrastructure can do a better job of &quot;make sure just one version of liquibase is running&quot; than liquibase does with the databasechangeloglock table, so you can add a &quot;liquibase unlock&quot; as the first step of your pod to be safe.</p>
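<p>As an illustration of the &quot;single change per changeSet&quot; advice, splitting the add-column and the backfill into separate changeSets might look like this in the YAML changelog format (table and column names are made up):</p>
<pre class="lang-yaml prettyprint-override"><code>databaseChangeLog:
  - changeSet:
      id: add-status-column
      author: example
      changes:
        - addColumn:
            tableName: orders
            columns:
              - column:
                  name: status
                  type: varchar(20)
  - changeSet:
      id: backfill-status-default
      author: example
      changes:
        - update:
            tableName: orders        # no where clause: backfills every existing row
            columns:
              - column:
                  name: status
                  value: NEW
</code></pre>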
Nathan Voxland
<p>I need to be able to assign custom environment variables to each replica of a pod. One variable should be some random uuid, another unique number. How is it possible to achieve? I'd prefer continue using "Deployment"s with replicas. If this is not feasible out of the box, how can it be achieved by customizing replication controller/controller manager? Are there hooks available to achieve this?</p>
rubenhak
<p>You can use the downward API to inject the pod's <code>metadata.uid</code> as an environment variable, which is unique per pod.</p>
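<p>A sketch of that inside a Deployment's pod template (the image name is a placeholder):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        - name: POD_UID            # unique per pod replica
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        - name: POD_NAME           # also unique, and human readable
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
</code></pre>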
Jordan Liggitt
<p>I'm in the process of containerizing various .NET core API projects and running them in a Kubernetes cluster using Linux. I'm fairly new to this scenario (I usually use App Services with Windows) and a questions regarding best practices regarding secure connections are starting to come up:</p> <ol> <li><p>Since these will run as pods inside the cluster my assumption is that I only need to expose port 80 correct? It's all internal traffic managed by the service and ingress. But is this a good practice? Will issues arise once I configure a domain with a certificate and secure traffic starts hitting the running pod?</p> </li> <li><p>When the time comes to integrate SSL will I have have to worry about opening up port 443 on the containers or managing any certificates within the container itself or will this all be managed by Ingress, Services (or Application Gateway since I am using AKS)? Right now when I need to test locally using HTTPS I have to add a self-signed certificate to the container and open port 443 and my assumption is this should not be in place for production!</p> </li> <li><p>When I do deploy into my cluster (I'm using AKS) with just port 80 open and I assign a LoadBalancer service I get a Public IP address. I'm used to using Azure App Services where you can use the global Miscrosoft SSL certificate right out of the box like so: <strong><a href="https://your-app.azurewebsites.net" rel="nofollow noreferrer">https://your-app.azurewebsites.net</a></strong> However when I go to the Public IP and configure a DNS label for something like: <strong>your-app.southcentralus.cloudapp.azure.com</strong> It does not allow me to use HTTPS like App Services does. Neither does the IP address. Maybe I don't have something configured properly with my Kubernetes instance?</p> </li> <li><p>Since many of these services are going to be public facing API endpoints (but consumed by a client application) they don't need to have a custom domain name as they won't be seen by the majority of the public. Is there a way to leverage secure connections with the IP address or the <strong>.cloudapp.azure.com</strong> domain? It would be cost/time prohibitive if I have to manage certificates for each of my services!</p> </li> </ol>
INNVTV
<ol> <li><p>It depends on where you want to terminate your TLS. For most use cases, the ingress controller is a good place to terminate the TLS traffic and keep everything on HTTP inside the cluster. In that case, any HTTP port should work fine. If port 80 is exposed by .NET Core by default then you should keep it.</p> </li> <li><p>You are opening port 443 locally because you don't have the ingress controller configured. You can install ingress locally as well. In production, you would not need to open any other ports beyond a single HTTP port as long as the ingress controller is handling the TLS traffic.</p> </li> <li><p>Ideally, you should not expose every service as Load Balancer. The services should be of type <code>ClusterIP</code>, only exposed inside the cluster. When you deploy an ingress controller, it will create a Load Balancer service. That will be the only entry point in the cluster. The ingress controller will then accept and route traffic to individual services by either hostname or paths.</p> </li> <li><p>Let's Encrypt is a free TLS certificate signing service that you can use for your setup. If you don't own the domain name, you can use the http-01 challenge to verify your identity and get the certificate. The cert-manager project makes it easy to configure Let's Encrypt certificates in any k8s cluster.</p> </li> </ol> <ul> <li><a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">https://cert-manager.io/docs/installation/kubernetes/</a></li> <li><a href="https://cert-manager.io/docs/tutorials/acme/ingress/" rel="nofollow noreferrer">https://cert-manager.io/docs/tutorials/acme/ingress/</a> (Ignore the Tiller part if you have deployed it using <code>kubectl</code> or helm3)</li> </ul> <p>Sidebar: If you are using Application Gateway to front your applications, consider using <a href="https://azure.microsoft.com/en-us/blog/application-gateway-ingress-controller-for-azure-kubernetes-service/" rel="nofollow noreferrer">Application Gateway Ingress Controller</a></p>
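<p>A sketch of what points 3 and 4 end up looking like once an ingress controller and cert-manager are installed (hostname, service name and issuer name are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-api
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod   # ClusterIssuer created via cert-manager
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: my-api-tls        # cert-manager stores the issued certificate here
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-api   # ClusterIP service in front of the pods
              servicePort: 80
</code></pre>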
Faheem
<p><a href="https://i.stack.imgur.com/9uUKq.png" rel="nofollow noreferrer"><strong>RESPONSE HEADER</strong></a></p> <p>Why am I receiving a network error? Does anyone have a clue what layer this is occurring / how I can resolve this issue?</p> <p><strong>What I've Tried</strong><br /> (1) Checked CORS... everything seems to be ok.<br /> (2) Tried to add timeouts in YAML file as annotations in my LB.<br /> (Note) The request seems to be timing out after 60 seconds</p> <p><strong>Process:</strong><br /> (1) Axios POST request triggered from front via button click.<br /> (2) Flask server (back) receives POST request and begins to process.<br /> [ERROR OCCURS HERE] (3) Flask server is still processing request on the back; however the client receives a 504 timeout, and there is also some CORS origin mention (don't think this is the issue though, as I've set my CORS settings properly, and this doesn't pop up for any other requests...).<br /> (4) Server responds with a 200 and successfully sets data.</p> <p><strong>Current stack:</strong><br /> (1) AWS EKS / Kubernetes for deployment (relevant config shown).<br /> (2) Flask backend.<br /> (3) React frontend.</p> <p>My initial thoughts are that this has to do with the deployment... works perfectly fine in a local context, but I think that there is some timeout setting; however, I'm unsure where this is / how I can increase the timeout. For additional context, this doesn't seem to happen with short-lived requests... just this one particular that takes more time.</p>
L L
<p>If it's failing specifically for long running calls then you may have to adjust your ELB idle timeout. It's 60 seconds by default. Check out the following resource for reference:</p> <p><a href="https://aws.amazon.com/blogs/aws/elb-idle-timeout-control/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/elb-idle-timeout-control/</a></p> <p>Some troubleshooting tips <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-error-message.html#ts-elb-errorcodes-http504" rel="nofollow noreferrer">here</a>.</p>
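<p>If the load balancer was created by a Kubernetes <code>Service</code> of type <code>LoadBalancer</code>, the idle timeout can be raised with an annotation (value in seconds; the name, selector and 300 below are examples chosen to sit above the observed 60-second cutoff):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: flask-backend
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: &quot;300&quot;
spec:
  type: LoadBalancer
  selector:
    app: flask-backend
  ports:
    - port: 80
      targetPort: 5000
</code></pre>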
Faheem
<p>I have an ASP.NET Core console application that I run in a container on Kubernetes. In the deployment.yaml I've set the environment variable:</p> <pre><code>env: - name: "ASPNETCORE_ENVIRONMENT" value: "Development" </code></pre> <p>And in the console application I have the following code:</p> <pre><code>static void Main(string[] args) { Console.WriteLine("Env: " + Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")); } </code></pre> <p>But ASPNETCORE_ENVIRONMENT is empty; how can I get the configured environment variable? I use the same steps in a Core Web API project and there I get the variable as follows:</p> <pre><code>public Startup(IHostingEnvironment env) { env.EnvironmentName } </code></pre> <p>This works in the Core Web API but I don't have IHostingEnvironment in the console app.</p>
BvdVen
<p>We use this template to run a ASP.NET Core App in our Kubernetes cluster:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: namespace: &lt;namespace&gt; name: &lt;application name&gt; spec: replicas: 1 minReadySeconds: 15 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: metadata: labels: app: &lt;app-label&gt; spec: containers: - name: &lt;container-name&gt; image: &lt;image-name&gt; ports: - containerPort: 80 env: - name: ASPNETCORE_ENVIRONMENT value: "Release" </code></pre>
maveonair
<p>I have upload.yaml file which is uploads a script to mongo, I package with helm.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: generateName: upload-strategy-to-mongo-v2 spec: parallelism: 1 completions: 1 template: metadata: name: upload-strategy-to-mongo spec: volumes: - name: upload-strategy-to-mongo-scripts-volume configMap: name: upload-strategy-to-mongo-scripts-v3 containers: - name: upload-strategy-to-mongo image: mongo env: - name: MONGODB_URI value: @@@@ - name: MONGODB_USERNAME valueFrom: secretKeyRef: name: mongodb-user key: @@@@ - name: MONGODB_PASSWORD valueFrom: secretKeyRef: name: mongodb-user key: @@@@@ volumeMounts: - mountPath: /scripts name: upload-strategy-to-mongo-scripts-volume command: [&quot;mongo&quot;] args: - $(MONGODB_URI)/ravnml - --username - $(MONGODB_USERNAME) - --password - $(MONGODB_PASSWORD) - --authenticationDatabase - admin - /scripts/upload.js restartPolicy: Never --- apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: upload-strategy-to-mongo-scripts-v3 data: upload.js: | // Read the object from file and parse it var data = cat('/scripts/strategy.json'); var obj = JSON.parse(data); // Upsert strategy print(db.strategy.find()); db.strategy.replaceOne( { name : obj.name }, obj, { upsert: true } ) print(db.strategy.find()); strategy.json: {{ .Files.Get &quot;strategy.json&quot; | quote }} </code></pre> <p>now I am using generateName to generate a custom name every time I install it. I require to have multiple packages been installed and I require the name to be dynamic.</p> <p>Error When I install this script with <code>helm install &lt;name&gt; &lt;tar.gz file&gt; -n &lt;namespace&gt;</code> I get the following error</p> <pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty </code></pre> <p>but I am able to install if I don't use generateName. Any ideas?</p> <p>I looked at various resources but they don't seem to answer how to install via helm. references looked: <a href="https://stackoverflow.com/questions/48023475/add-random-string-on-kubernetes-pod-deployment-name;">Add random string on Kubernetes pod deployment name</a> <a href="https://github.com/kubernetes/kubernetes/issues/44501" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/44501</a> ; <a href="https://zknill.io/posts/kubernetes-generated-names/" rel="nofollow noreferrer">https://zknill.io/posts/kubernetes-generated-names/</a></p>
xavlock
<p>This seems to be a known issue. Helm doesn't work with <code>generateName</code>. For unique names, you can use Helm's <a href="https://helm.sh/docs/chart_template_guide/builtin_objects/" rel="nofollow noreferrer">built-in properties</a> like <code>Revision</code> or <code>Name</code>. See the following link for reference:</p> <ul> <li><a href="https://github.com/helm/helm/issues/3348#issuecomment-482369133" rel="nofollow noreferrer">https://github.com/helm/helm/issues/3348#issuecomment-482369133</a></li> </ul>
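<p>For example, the Job metadata in the chart above could become something like this (prefixing with the release name is enough if each install is a separate release):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  # unique per Helm release; append {{ .Release.Revision }} as well if the Job must also differ across upgrades
  name: {{ .Release.Name }}-upload-strategy-to-mongo
</code></pre>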
Faheem
<p>I am developing a website that runs a simulation given a user-submitted script. I tried to follow some Online Judge architectures, but in my case, I need to send user input and receive the output in realtime, like a simulation.</p> <p>I tried Kubernetes Jobs, but it doesn't seem so easy to communicate with the container, especially since I need a Kubernetes client for the language I am working in.</p> <p>So, my question is: Given this scenario, what is the best approach to orchestrate multiple containers with interactive I/O programmatically?</p> <p><img src="https://i.stack.imgur.com/QTAqr.png" alt="Diagram" /></p> <p>*Obs.: I am not worrying about security yet.</p>
Ícaro Lima
<p>Please take a look at the design of the spark operator:</p> <p><a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/spark-on-k8s-operator</a></p> <p>That has a somewhat similar design to what you’re targeting. Similarly, Argo Workflow is another example:</p> <p><a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">https://github.com/argoproj/argo</a></p>
Faheem
<p>Has anyone managed to use managed identity with Bridge to Kubernetes?</p> <p>I've been reading these articles: <a href="https://learn.microsoft.com/en-us/visualstudio/bridge/managed-identity?view=vs-2019" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/bridge/managed-identity?view=vs-2019</a></p> <p><a href="https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes</a></p> <p><a href="https://learn.microsoft.com/en-us/visualstudio/bridge/configure-bridge-to-kubernetes" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/bridge/configure-bridge-to-kubernetes</a></p> <p>but I cannot get this to work</p> <pre><code>enableFeatures: ManagedIdentity </code></pre> <p>I think there may be a documentation issue because it refers to both <code>KubernetesLocalProcessConfig.yaml</code> and <code>KubernetesLocalConfig.yaml</code></p> <p>I've tried both names. If I put the above yaml in KubernetesLocalProcessConfig.yaml I get a yaml serialization error. If I put it in KubernetesLocalConfig.yaml it doesn't seem to do anything so I suspect KubernetesLocalProcessConfig.yaml is the correct name, but I can't find any details of the correct yaml other than on the &quot;Use managed identity with Bridge to Kubernetes&quot; page linked above.</p>
Ian1971
<p>I worked it out by decompiling the extension. It is a documentation issue. The correct file name is indeed <code>KubernetesLocalProcessConfig.yaml</code></p> <p>and the below yaml will work (note the <strong>-</strong> was missing in the docs)</p> <pre><code>version: 0.1 enableFeatures: - ManagedIdentity </code></pre>
Ian1971
<p>I need to download files into a specific folder of a container on a pod, at startup. The image for this container already has an existing folder with other files in it. (example is adding plugin jars to an application) </p> <p>I've attempted the below example, however k8s volumeMounts overwrites the folder on container. </p> <p>In the example below '/existing-folder-on-my-app-image/' is a folder on the my-app image which already contains files. When using the below I only get the downloaded plugin.jar in folder '/existing-folder-on-my-app-image/' and existing files are removed.</p> <p>I want to add other files to this folder, but still keep those files which where there to start with. </p> <p>How can I stop k8s from overwriting '/existing-folder-on-my-app-image/' to only have the files from initContainer? </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-app spec: initContainers: - name: config-data image: joosthofman/wget:1.0 command: ["sh","-c","wget https://url.to.plugins/plugin.jar --no-check-certificate"] volumeMounts: - name: config-data mountPath: /config containers: - name: my-app image: my-app:latest volumeMounts: - name: config-data mountPath: /existing-folder-on-my-app-image/ volumes: - name: config-data emptyDir: {} </code></pre>
Melissa
<p>Volume mounts always shadow the directory they are mounted to. A volume mount is the only way for an init container to manage files that are also visible to another container in the pod. If you want to copy files into a directory that already contains files in the main container image, you'll need to perform that copy as part of the container startup.</p>
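<p>A sketch of that copy-at-startup approach for the pod in the question (the entrypoint path is a placeholder and would need to be the real start command of <code>my-app</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>  containers:
    - name: my-app
      image: my-app:latest
      # copy the downloaded files next to the ones baked into the image, then start the app
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;cp /config/plugin.jar /existing-folder-on-my-app-image/ &amp;&amp; exec /path/to/original-entrypoint&quot;]
      volumeMounts:
        - name: config-data
          mountPath: /config        # emptyDir filled by the init container
</code></pre>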
Jordan Liggitt
<p>Difficulty running airflow commands when running Airflow on Kubernetes that I installed from the Helm stable/airflow repo. For instance I try to exec into the scheduler pod and run <code>airflow list</code> and I get the following error:</p> <pre><code>airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the KubernetesExecutor airlow </code></pre> <p>Ok so I switch to the celery executor.</p> <p>Same thing</p> <pre><code>airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the CeleryExecutor </code></pre> <p>So what is the correct way to run airflow CLI commands when running on K8s?</p>
alex
<p>Make sure you are using <code>bash</code>. <code>/home/airflow/.bashrc</code> imports the environment variables from <code>/home/airflow/airflow_env.sh</code> to setup the connection. The following are some examples:</p> <pre class="lang-bash prettyprint-override"><code>kubectl exec -ti airflow-scheduler-nnn-nnn -- /bin/bash $ airflow list_dags </code></pre> <p>Or with shell you can import the env vars yourself:</p> <pre class="lang-bash prettyprint-override"><code>kubectl exec -ti airflow-scheduler-nnn-nnn -- sh -c &quot;. /home/airflow/airflow_env.sh &amp;&amp; airflow list_dags&quot; </code></pre>
Faheem
<p>Observed two kinds of syntaxes for PV &amp; PVC creation in AWS EKS.</p> <p>1)Using vol Id while creating both PV &amp; PVC (Create volume manually and using that id) 2)Without using vol Id (dynamic provisioning of PV)</p> <p>example-1:</p> <pre><code>- apiVersion: &quot;v1&quot; kind: &quot;PersistentVolume&quot; metadata: name: &quot;pv-aws&quot; spec: capacity: storage: 10G accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: gp2 awsElasticBlockStore: volumeID: vol-xxxxxxxx fsType: ext4 </code></pre> <p>In this case, I am creating volume manually and using that I'm creating both PV &amp; PVC</p> <p>example-2:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 spec: accessModes: - ReadWriteOnce storageClassName: gp2 resources: requests: storage: 20Gi </code></pre> <p>In this case by just creating PVC its creating volume in the backend (AWS) and PV.</p> <p>What is the difference and in which to use in which scenarios? Pros and cons?</p>
SNR
<p>It should be based on your requirements. Static provisioning is generally not scalable. You have to create the volumes outside of the k8s context. Mounting existing volumes would be useful in disaster recovery scenarios.</p> <p>Using <a href="https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html" rel="nofollow noreferrer">Storage classes</a>, or dynamic provisioning, is generally preferred because of the convenience. You can create roles and <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota" rel="nofollow noreferrer">resource quotas</a> to control and limit the storage usage and decrease operational overhead.</p>
Faheem
<p>Has anyone already seen this error: pods "k8s-debian9-charming-but-youthful-merkle" is forbidden: pod does not have "kubernetes.io/config.mirror" annotation, node "k8s-uk1-node-002" can only create mirror pods?</p> <p>Why is the node configured to create only mirror pods? How can I change this? Is this caused by RBAC policies?</p> <p>I created the Kubernetes cluster with Terraform and Ansible on OpenStack, with kubespray.</p> <p>Any help is welcome, thanks in advance, Greg</p>
Greg
<p>The NodeRestriction admission plugin is responsible for enforcing that limitation, to prevent nodes from creating pods that expand their access to resources like serviceaccounts and secrets unrelated to their existing workloads</p>
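<p>If you want to confirm that NodeRestriction is what is enabled on your cluster, you can inspect the API server flags. A sketch for a kubeadm/kubespray-style control plane (the manifest path and pod label are assumptions and may differ by installer):</p> <pre><code># On a control-plane node
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml

# Or via the API, if the apiserver runs as a static pod
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins
</code></pre> <p>Seeing this error typically means the request was made with a node's own kubelet credential; regular pods should be created with a user or service account identity instead of the node's.</p>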
Jordan Liggitt
<p>I have deployed an AWS ALB Controller and I create listeners with ingress resources in an EKS cluster.</p> <p>The steps I followed are the following:</p> <ul> <li>I had an ingress for a service named <code>first-test-api</code> and all was fine</li> <li>I deployed a new Helm release [<code>first</code>] with just renaming the chart from <code>test-api</code> to <code>main-api</code>. So now it is <code>first-main-api</code>.</li> <li>Nothing seems to break in terms of k8s resources, but...</li> <li>the <code>test-api.mydomain.com</code> listener in the AWS ALB is stuck pointing to the old service</li> </ul> <p>Has anyone encountered such a thing before?</p> <p>I could delete the listener manually, but I don't want to. I'd like to know what is happening and why it didn't happen automatically :)</p> <p>EDIT:</p> <p>The ingress had an ALB annotation that enabled the deletion protection.</p>
Kostas Demiris
<p>I will provide some generic advice on things I would look at, but it might be better to detail a small example.</p> <p>Yes, the ALB controller should automatically manage changes on the backend.</p> <p>I would suggest ignoring the helm chart and looking into the actual objects:</p> <ul> <li><code>kubectl get ing -n &lt;namespace&gt;</code> shows the ingress you are expecting?</li> <li><code>kubectl get ing -n &lt;ns&gt; &lt;name of ingress&gt; -o yaml</code> points to the correct/new service?</li> <li><code>kubectl get svc -n &lt;ns&gt; &lt;name of new svc&gt;</code> shows the new service?</li> <li><code>kubectl get endpoints -n &lt;ns&gt; &lt;name of new svc&gt;</code> shows the pod you are expecting?</li> </ul> <p>And then, based on gut feeling:</p> <ol> <li>Check that the labels in your new service are different from the labels in the old service if you expect the two services to serve different things.</li> <li>Get the logs of the ALB controller. You will see registering/deregistering activity, and sometimes errors, especially if the role of the node/service account doesn't have the proper IAM permissions.</li> </ol> <p>Happy to modify the answer if you expand the question with more details.</p>
Gonfva
<p>I'm not able to deploy Wordpress using Rancher Catalog and bitnami/wordpress Helm Chart. MariaDB pod runs fine but wordpress pod errors out as</p> <p><code>ReplicaSet &quot;wordpress-557fcb8469&quot; has timed out progressing.; Deployment does not have minimum availability.</code></p> <p>Also from the wordpress pod logs:</p> <p><code>Error executing 'postInstallation': Failed to connect to wordpress-mariadb:3306 after 36 tries</code></p> <p>and when using the wordpress pod shell:</p> <pre><code>I have no name!@wordpress-557fcb8469-gj585:/$ mysql -h wordpress-mariadb -u root -p Enter password: ERROR 2005 (HY000): Unknown MySQL server host 'wordpress-mariadb' (-3) </code></pre> <p>but</p> <pre><code>I have no name!@wordpress-557fcb8469-gj585:/$ mysql -h 10.42.0.8 -u root -p Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 16597 Server version: 10.3.22-MariaDB Source distribution </code></pre> <p>Any clue what can be set wrong?</p> <p><strong>Version of Helm and Kubernetes</strong>:</p> <pre><code>version.BuildInfo{Version:&quot;v3.2.4&quot;, GitCommit:&quot;0ad800ef43d3b826f31a5ad8dfbb4fe05d143688&quot;, GitTreeState:&quot;clean&quot;, GoVersion:&quot;go1.13.12&quot;} </code></pre> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;17&quot;, GitVersion:&quot;v1.17.6&quot;, GitCommit:&quot;d32e40e20d167e103faf894261614c5b45c44198&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-05-20T13:16:24Z&quot;, GoVersion:&quot;go1.13.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;18&quot;, GitVersion:&quot;v1.18.3&quot;, GitCommit:&quot;2e7996e3e2712684bc73f0dec0200d64eec7fe40&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-05-20T12:43:34Z&quot;, GoVersion:&quot;go1.13.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Pods and services in the <code>wordpress</code> namespace:</p> <pre><code>&gt; kubectl get pods,svc -owide --namespace=wordpress NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/wordpress-6647794f9b-4mmxd 0/1 Running 20 104m 10.42.0.19 dev-app &lt;none&gt; &lt;none&gt; pod/wordpress-mariadb-0 1/1 Running 1 26h 10.42.0.14 dev-app &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/wordpress LoadBalancer 10.43.91.13 &lt;pending&gt; 80:30158/TCP,443:30453/TCP 26h app.kubernetes.io/instance=wordpress,app.kubernetes.io/name=wordpress,io.cattle.field/appId=wordpress service/wordpress-mariadb ClusterIP 10.43.178.123 &lt;none&gt; 3306/TCP 26h app=mariadb,component=master,io.cattle.field/appId=wordpress,release=wordpress </code></pre>
JackTheKnife
<blockquote> <p>Unknown MySQL server host 'wordpress-mariadb' (-3)</p> </blockquote> <p>The error indicates a DNS failure. Please review your <a href="https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/" rel="nofollow noreferrer">CoreDNS configuration</a> and check the CoreDNS logs for any errors. CoreDNS forwards external queries to the underlying node, so check the node DNS configuration in <code>/etc/resolv.conf</code> as well.</p>
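<p>A quick way to check both, assuming a default CoreDNS install in <code>kube-system</code> (the label and service FQDN may differ on Rancher/RKE clusters):</p> <pre><code># CoreDNS logs
kubectl -n kube-system logs -l k8s-app=kube-dns

# Test in-cluster DNS resolution from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup wordpress-mariadb.wordpress.svc.cluster.local
</code></pre>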
Faheem
<p>I have a fixed persistentVolumeClaim (50 GB) and I would like to divide this volume into 3 parts:</p> <pre><code>volumes: - name: part1 persistentVolumeClaim: claimName: myDisk - name: part2 persistentVolumeClaim: claimName: myDisk - name: part3 persistentVolumeClaim: claimName: myDisk </code></pre> <p>Can I edit this config file so that, for example, part1 gets 10 GB of storage and part2 gets 30 GB?</p>
jerome12
<p>Persistent volumes or claims can’t be divided. Also, there is a 1:1 relationship between Volumes and Claims. You can, however, create multiple PVs/PVCs from a storage classes. See the following example for reference. I am creating three PVCs from one storage class. It will also create relevant PVs for me as well. In case of <code>hostpath</code> SC, it will use <code>/var/lib/docker</code> folder for storage by default. If you want to control the path, you will have to create the PVs yourself as well.</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: part1 spec: storageClassName: hostpath accessModes: - ReadWriteOnce resources: requests: storage: 10Mi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: part2 spec: storageClassName: hostpath accessModes: - ReadWriteOnce resources: requests: storage: 20Mi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: part3 spec: storageClassName: hostpath accessModes: - ReadWriteOnce resources: requests: storage: 20Mi </code></pre>
Faheem
<p>I'm new to AKS, ACR, and DevOps Pipelines and I'm trying to setup a CI/CD pipeline for my application which uses Spring Boot backend and Angular frontend. Below is the azure pipeline yaml file that I am using</p> <pre><code># Deploy to Azure Kubernetes Service # Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service # https://learn.microsoft.com/azure/devops/pipelines/languages/docker trigger: - master resources: - repo: self variables: # Container registry service connection established during pipeline creation dockerRegistryServiceConnection: 'c2ed88c0-0d3b-4ea1-b8e0-7cc40c5c81d3' imageRepository: 'pvedanabhatlarepoui' containerRegistry: 'pkvcontainerregistry.azurecr.io' dockerfilePath: '**/Dockerfile' tag: '$(Build.BuildId)' imagePullSecret: 'pkvcontainerregistry2152213e-auth' # Agent VM image name vmImageName: 'ubuntu-latest' stages: - stage: Build displayName: Build stage jobs: - job: Build displayName: Build pool: vmImage: $(vmImageName) steps: - task: Maven@3 inputs: mavenPomFile: 'party_ui_backend/pom.xml' goals: 'clean verify' publishJUnitResults: false javaHomeOption: 'JDKVersion' mavenVersionOption: 'Default' mavenAuthenticateFeed: false effectivePomSkip: false sonarQubeRunAnalysis: false - task: Docker@2 displayName: Build and push an image to container registry inputs: command: buildAndPush repository: $(imageRepository) dockerfile: $(dockerfilePath) containerRegistry: $(dockerRegistryServiceConnection) tags: | $(tag) - upload: manifests artifact: manifests - stage: Deploy displayName: Deploy stage dependsOn: Build jobs: - deployment: Deploy displayName: Deploy pool: vmImage: $(vmImageName) environment: 'pvedanabhatlarepoui-7912.default' strategy: runOnce: deploy: steps: - task: KubernetesManifest@0 displayName: Create imagePullSecret inputs: action: createSecret secretName: $(imagePullSecret) dockerRegistryEndpoint: $(dockerRegistryServiceConnection) - task: KubernetesManifest@0 displayName: Deploy to Kubernetes cluster inputs: action: deploy manifests: | $(Pipeline.Workspace)/manifests/deployment.yml $(Pipeline.Workspace)/manifests/service.yml imagePullSecrets: | $(imagePullSecret) containers: | $(containerRegistry)/$(imageRepository):$(tag) </code></pre> <p>Here is the deployment.yaml and service.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: springboot-k8s-mssql spec: selector: matchLabels: app: springboot-k8s-mssql replicas: 3 template: metadata: labels: app: springboot-k8s-mssql spec: containers: - name: springboot-k8s-mssql image: pkvcontainerregistry.azurecr.io/pvedanabhatlarepoui ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: springboot-k8s-mssql labels: name: springboot-k8s-mssql spec: type: LoadBalancer ports: - port: 80 selector: app: springboot-k8s-mssql </code></pre> <p>As you can see, I am using party_ui_backend/pom.xml in the azure pipeline yaml, and this pom is the one for spring boot back end. This got deployed successfully, as shown below.</p> <pre><code>NAME READY STATUS RESTARTS AGE springboot-k8s-mssql-66876d98c-5rsj9 1/1 Running 0 58m springboot-k8s-mssql-66876d98c-67xz5 1/1 Running 0 58m springboot-k8s-mssql-66876d98c-rqzn6 1/1 Running 0 58m </code></pre> <p>Now I want to deploy angular front end also in the same deployment.yaml. How can I do this? Please let me know in case more details are needed.</p>
Phani
<p>You need to build the Docker container for the Angular application separately. Ideally, it should be deployed alongside the Spring Boot application, so you will need a new build and deploy stage and potentially a separate manifests file (see the sketch after the list below).</p> <p>Reference articles on how to build Docker images for Angular apps:</p> <ul> <li><a href="https://dzone.com/articles/how-to-dockerize-angular-app" rel="nofollow noreferrer">https://dzone.com/articles/how-to-dockerize-angular-app</a></li> <li><a href="https://developer.okta.com/blog/2020/06/17/angular-docker-spring-boot#create-a-docker-container-for-your-angular-app" rel="nofollow noreferrer">https://developer.okta.com/blog/2020/06/17/angular-docker-spring-boot#create-a-docker-container-for-your-angular-app</a></li> </ul>
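<p>A minimal sketch of what a separate manifest for the frontend could look like once you have an Angular image pushed to the same registry (the image repository name and port are assumptions; an nginx-based Angular image normally listens on port 80):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: angular-frontend
  template:
    metadata:
      labels:
        app: angular-frontend
    spec:
      containers:
      - name: angular-frontend
        image: pkvcontainerregistry.azurecr.io/angular-frontend   # assumed repository name
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: angular-frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: angular-frontend
</code></pre> <p>You would then add a second <code>Docker@2</code> build task pointing at the Angular Dockerfile, and reference this manifest in the <code>KubernetesManifest@0</code> deploy step the same way the backend manifests are referenced.</p>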
Faheem
<p>For an HA Kubernetes cluster, I can't find confirmation of whether all etcd members answer read queries (from the apiserver or from direct client access), or whether only the master (leader) etcd member handles read/write operations.</p> <p>Write access is well described: only the etcd leader performs it. But in a K8S cluster with 3 etcd members (or more), is only the leader doing the work?</p> <p>The etcd documentation says: &lt;&lt; Increasing the cluster size can enhance failure tolerance and provide better read performance. Since clients can read from any member, increasing the number of members increases the overall read throughput.</p> <p>Decreasing the cluster size can improve the write performance of a cluster, with a trade-off of decreased resilience. Writes into the cluster are replicated to a majority of members of the cluster before considered committed. Decreasing the cluster size lowers the majority, and each write is committed more quickly.&gt;&gt;</p> <p><a href="https://coreos.com/etcd/docs/latest/v2/runtime-configuration.html" rel="nofollow noreferrer">https://coreos.com/etcd/docs/latest/v2/runtime-configuration.html</a></p> <p>Is this true in the Kubernetes implementation context, depending on the type of client (apiserver, calico, etc.)?</p>
acra
<p>Yes, reads are served by any etcd member in an HA Kubernetes cluster</p>
Jordan Liggitt
<p>I have installed istio 1.6.7 in an AKS cluster using <code>istioctl</code>. I have enabled the istio operator using <code>init</code> command. When I try to enable Grafana and Kiali using a separate yaml on top of the installed istio system with <code>kubectl</code>, the istio ingress gateway pod is recreated and my custom configurations are deleted.</p> <p>The documentation specifies that we can install add-ons with <code>kubectl</code>.</p> <p>Add-on yaml is as follows:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: grafana: enabled: true </code></pre>
vishnu rajasekharan
<p>I am assuming you are referring to the <a href="https://istio.io/latest/docs/setup/install/standalone-operator/" rel="nofollow noreferrer">Standalone Operator Installation</a> guide. When updating the configuration, you have to change the original manifest and not create a new one. Your specified manifest doesn't contain any profile or metadata information. It should look like the following:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: example-istiocontrolplane spec: profile: default addonComponents: grafana: enabled: true </code></pre>
Faheem
<p>I am having trouble enabling webhook authentication for the kubelet API. My cluster is deployed with kubeadm. <a href="https://stackoverflow.com/questions/44855609/enabling-kubelet-server-bearer-token-authentication">This post is similar, but not the same issue</a></p> <p>I can authenticate to my API server with a bearer token just fine:</p> <pre><code>curl -k https://localhost:6443/api --header "Authorization: Bearer $TOKEN" </code></pre> <p>I cannot authenticate against the kubelet api with the same header. I have enabled the following on the API server:</p> <pre><code>--authorization-mode=Node,RBAC --anonymous-auth=false --runtime-config=authentication.k8s.io/v1beta1=true,authorization.k8s.io/v1beta1=true </code></pre> <p>The following is enabled on the kubelet node(s) (via /var/lib/kubelet/config.yaml)</p> <pre><code>address: 0.0.0.0 apiVersion: kubelet.config.k8s.io/v1beta1 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.crt authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s </code></pre> <p>Despite this, I get a "403 forbidden" when curling the /metrics endpoint on the kubelet. Something to note, I can perform the same API call against a cluster deployed with KOPS just fine. I am not sure what the difference is. </p>
jsirianni
<p>The 403 indicates that you successfully authenticated (or you would have gotten a 401 error), that the kubelet checked with the apiserver whether you were authorized to access kubelet metrics (otherwise it would have just allowed it), that it got a definitive response from the apiserver (otherwise you would have gotten a 500 error), and that the apiserver indicated the authenticated user is not authorized to access kubelet metrics.</p> <p>See <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization</a> for details about what permission needs to be granted to access various endpoints on the kubelet's API. For metrics, the <code>nodes/metrics</code> resource in the <code>""</code> apiGroup must be granted.</p>
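<p>A sketch of the RBAC objects that grant that permission (the user name is an assumption; bind to whatever identity your bearer token resolves to):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
- apiGroups: [""]
  resources: ["nodes/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-metrics-reader
subjects:
- kind: User
  name: my-metrics-user   # assumption: replace with the authenticated user or service account
</code></pre>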
Jordan Liggitt
<p>I've run into a problem accessing Azure Key Vault whilst converting some services from Microsoft Service Fabric to Kubernetes. Within our ASP.NET core ConfigureServices call, we call AddAzureKeyVault from Microsoft.Extensions.Configuration.AzureKeyVaultConfigurationExtensions to inject some sensitive parts of our configuration as shown below.</p> <pre class="lang-cs prettyprint-override"><code>public void ConfigureServices(IServiceCollection services) { var config = new ConfigurationBuilder() .SetBasePath(AppContext.BaseDirectory) .AddJsonFile(&quot;appsettings.json&quot;, optional: false, reloadOnChange: true) .AddAzureKeyVault( Environment.GetEnvironmentVariable(&quot;KEYVAULT_LOCATION&quot;), Environment.GetEnvironmentVariable(&quot;KEYVAULT_CLIENT_ID&quot;), Environment.GetEnvironmentVariable(&quot;KEYVAULT_CLIENT_SECRET&quot;) ).Build(); //... } </code></pre> <p>This works fine in a docker image running locally, yet once deployed into Azure Kubernetes service the pod is failing with the following...</p> <pre><code>Microsoft.Azure.KeyVault.Models.KeyVaultErrorException: Operation returned an invalid status code 'BadRequest' at Microsoft.Azure.KeyVault.KeyVaultClient.GetSecretsWithHttpMessagesAsync(String vaultBaseUrl, Nullable`1 maxresults, Dictionary`2 customHeaders, CancellationToken cancellationToken) </code></pre> <p>Looking at the source of Microsoft.Extensions.Configuration.AzureKeyVaultConfigurationExtensions I can reproduce the issue with this minimal code.</p> <pre class="lang-cs prettyprint-override"><code>public void ConfigureServices(IServiceCollection services) { GetSecrets( Environment.GetEnvironmentVariable(&quot;KEYVAULT_LOCATION&quot;), Environment.GetEnvironmentVariable(&quot;KEYVAULT_CLIENT_ID&quot;), Environment.GetEnvironmentVariable(&quot;KEYVAULT_CLIENT_SECRET&quot;) ).GetAwaiter().GetResult(); } public async Task GetSecrets(string loc, string clientId, string clientSecret) { KeyVaultClient.AuthenticationCallback callback = (authority, resource, scope) =&gt; GetTokenFromClientSecret(authority, resource, clientId, clientSecret); IKeyVaultClient _client = new KeyVaultClient(callback); //Exception thrown here var secrets = await _client.GetSecretsAsync(loc).ConfigureAwait(false); } private static async Task&lt;string&gt; GetTokenFromClientSecret(string authority, string resource, string clientId, string clientSecret) { var authContext = new AuthenticationContext(authority); var clientCred = new ClientCredential(clientId, clientSecret); var result = await authContext.AcquireTokenAsync(resource, clientCred); return result.AccessToken; } </code></pre> <p>My question is, what is different about this authentication when called from within a pod in AKS, as opposed to a local docker image, that would lead this call to fail?</p> <p>I've confirmed the pod can access the wider internet, the key vault is not firewalled and Insights is showing some Authentication failures in the key vault logs. The Client Id and Secret are correct and have the correct rights since this works locally in Docker. What am I missing?</p>
darwinawardee
<p>I just ran your code sample on my machine, on a local cluster, and on an AKS cluster, and it works in all three places, so the problem is likely specific to your environment rather than the code. Double-check the environment variables that are actually set in the AKS pod, and try running the workload from a different network environment, for example from home (without VPN), in case outbound traffic is being intercepted.</p>
Faheem
<p>I'm trying to connect my k8s cluster to my Ceph cluster following this guide: <a href="https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/" rel="nofollow noreferrer">https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/</a></p> <p>I want to deploy the rbd-provisioner pods into the kube-system namespace like this: <a href="https://paste.ee/p/C1pB4" rel="nofollow noreferrer">https://paste.ee/p/C1pB4</a></p> <p>After deploying the PVC I get errors because my PVC is in the default namespace. Can I do anything about that? From reading the docs, it seems I can't use a ServiceAccount across two namespaces, or can I?</p>
Kirill Ponomarev
<p>Service accounts can be granted permissions in another namespace. </p> <p>For example, within the namespace "acme", grant the permissions in the <code>view</code> <code>ClusterRole</code> to the service account in the namespace "acme" named "myapp" :</p> <pre><code>kubectl create rolebinding myapp-view-binding \ --clusterrole=view --serviceaccount=acme:myapp \ --namespace=acme </code></pre>
Jordan Liggitt
<p>I am trying to create a Pod in Kubernetes using <code>curl</code>. </p> <p>This is the YAML:</p> <pre><code>cat &gt; nginx-pod.yaml &lt;&lt;EOF apiVersion: v1 kind: Pod metadata: name: nginx1 spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 EOF </code></pre> <p>I have token with permissions to do it and I wrote the following <code>curl</code> command: </p> <pre><code>curl -k -v -X POST -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json;charset=utf-8' https://127.0.0.1:6443/api/v1/namespaces/default/pods --data '{"name":"","namespace":"default","content":"apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx1\nspec:\n containers:\n - name: nginx\n image: nginx:1.7.9\n ports:\n - containerPort: 80\n","validate":true}' </code></pre> <p>Which should be equivalent to the <code>nginx-pod.yaml</code> file.<br> The YAML is ok because when I run<code>kubectl create -f nginx.pod.yaml</code> it creates it.<br> But when I tried to run it with <code>curl</code> I received: </p> <pre><code>&lt; Content-Length: 617 &lt; { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "Pod \"\" is invalid: [metadata.name: Required value: name or generateName is required, spec.containers: Required value]", "reason": "Invalid", "details": { "kind": "Pod", "causes": [ { "reason": "FieldValueRequired", "message": "Required value: name or generateName is required", "field": "metadata.name" }, { "reason": "FieldValueRequired", "message": "Required value", "field": "spec.containers" } ] }, "code": 422 * Connection #0 to host 127.0.0.1 left intact </code></pre> <p>I tried to change the <code>Content-Type</code> to <code>Content-type: text/x-yaml</code> but it didn't help. </p> <p>Any idea what can be the reason? </p> <p>One of the errors is regarding the "metadata.name" field.</p>
E235
<p>Make sure you set the content type to <code>application/yaml</code>, and send the YAML with <code>--data-binary</code> rather than <code>--data</code>, because <code>--data</code> drops newlines, which breaks YAML.</p>
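<p>A sketch using the manifest file and token from the question:</p> <pre><code>curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @nginx-pod.yaml \
  https://127.0.0.1:6443/api/v1/namespaces/default/pods
</code></pre> <p>Since the API server accepts YAML bodies when the <code>Content-Type</code> header says so, there is no need to hand-convert the manifest into JSON with embedded <code>\n</code> characters.</p>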
Jordan Liggitt
<p>I've created a K8S cluster in AWS and generated certificates for each component, and they can connect to each other. But when I try to get logs or install an application via Helm, I'm getting the error below:</p> <pre><code>$ helm install ./.helm Error: forwarding ports: error upgrading connection: error dialing backend: x509: certificate is valid for bla-bla-bla.eu-central-1.elb.amazonaws.com, worker-node, .*.compute.internal, *.*.compute.internal, *.ec2.internal, bla-bla-bla.eu-central-1.elb.amazonaws.com, not ip-172-20-74-98.eu-central-1.compute.internal </code></pre> <p>and my certificate is:</p> <pre><code>X509v3 Subject Alternative Name: DNS:bla-bla-bla.eu-central-1.elb.amazonaws.com, DNS:worker-node, DNS:.*.compute.internal, DNS:*.*.compute.internal, DNS:*.ec2.internal, DNS:bla-bla-bla.eu-central-1.elb.amazonaws.com, IP Address:172.20.32.10, IP Address:172.20.64.10, IP Address:172.20.96.10 </code></pre> <p>Thanks for your help,</p>
Muhammet Arslan
<p>Wildcard certificates can only be used for a single segment of DNS names. You will need a certificate valid for <code>ip-172-20-74-98.eu-central-1.compute.internal</code> or <code>*.eu-central-1.compute.internal</code></p>
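<p>A quick way to double-check what a regenerated worker certificate actually covers (the file path is an assumption and depends on how you provision your nodes):</p> <pre><code>openssl x509 -in /path/to/worker-node.crt -noout -text | grep -A1 "Subject Alternative Name"
</code></pre> <p>The SAN list printed there must contain either the exact node name (<code>ip-172-20-74-98.eu-central-1.compute.internal</code>) or a single-segment wildcard such as <code>*.eu-central-1.compute.internal</code>; multi-segment patterns like <code>*.*.compute.internal</code> are not matched.</p>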
Jordan Liggitt
<p>I'm trying to bootstrap a etcd cluster within my kubernetes cluster, here is the relevant section of the pod definition</p> <pre><code> - name: etcd image: quay.io/coreos/etcd:v2.2.0 ports: - containerPort: 2379 - containerPort: 2380 - containerPort: 4001 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP args: - "-name" - "etcd0" - "-advertise-client-urls" - http://${POD_IP}:2379,http://${POD_IP}:4001 - "-initial-advertise-peer-urls" - "http://${POD_IP}:2380" - "-listen-peer-urls" - "http://0.0.0.0:2380" - "-initial-cluster" - 'etcd0=http://${POD_IP}:2380' - "-initial-cluster-state" - "new" </code></pre> <p>However when I apply the POD_IP environment variable seems to get mangled, evidenced by the log:</p> <pre><code> advertise URLs of "etcd0" do not match in --initial-advertise-peer-urls [http://$%7BPOD_IP%7D:2380] and --initial-cluster [http://$%7BPOD_IP%7D:2380] </code></pre> <p>Has anyone seen anything similar to this?</p>
Tom Hadlaw
<p>The arguments are not interpreted by a shell, so curly braces don't get you the behavior you want. If you want to use an envvar value in an arg, variable references like <code>$(VAR_NAME)</code> are expanded using the container's environment.</p>
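<p>A sketch of the corrected args from the question using the supported <code>$(VAR_NAME)</code> syntax (only the relevant fields are shown):</p> <pre><code>env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
args:
- "-name"
- "etcd0"
- "-advertise-client-urls"
- "http://$(POD_IP):2379,http://$(POD_IP):4001"
- "-initial-advertise-peer-urls"
- "http://$(POD_IP):2380"
- "-listen-peer-urls"
- "http://0.0.0.0:2380"
- "-initial-cluster"
- "etcd0=http://$(POD_IP):2380"
- "-initial-cluster-state"
- "new"
</code></pre>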
Jordan Liggitt
<p>I'm running Kubernetes 1.6.2 with RBAC enabled. I've created a user <code>kube-admin</code> that has the following Cluster Role binding</p> <pre class="lang-yaml prettyprint-override"><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: k8s-admin subjects: - kind: User name: kube-admin apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io </code></pre> <p>When I attempt to <code>kubectl exec</code> into a running pod I get the following error.</p> <pre class="lang-sh prettyprint-override"><code>kubectl -n kube-system exec -it kubernetes-dashboard-2396447444-1t9jk -- /bin/bash error: unable to upgrade connection: Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy) </code></pre> <p>My guess is I'm missing a <code>ClusterRoleBinding</code> ref, which role am I missing?</p>
Curtis Allen
<p>The connection between kubectl and the api is fine, and is being authorized correctly.</p> <p>To satisfy an exec request, the apiserver contacts the kubelet running the pod, and that connection is what is being forbidden.</p> <p>Your kubelet is configured to authenticate/authorize requests, and the apiserver is not providing authentication information recognized by the kubelet.</p> <p>The way the apiserver authenticates to the kubelet is with a client certificate and key, configured with the <code>--kubelet-client-certificate=... --kubelet-client-key=...</code> flags provided to the API server.</p> <p>See <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview" rel="noreferrer">https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview</a> for more information. </p>
Jordan Liggitt
<p>I've been working on a disaster recovery plan for my Kubernetes cluster. I am able to take snapshots of my managed disks, but I'm not sure how to bind a recovered managed disk to an existing volume claim so I can rehydrate my data after a data loss.</p>
Hizzy
<p>You can mount any disk <a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume#mount-disk-as-volume" rel="nofollow noreferrer">manually</a> as a volume in a pod to recover data. A better approach would be to use <a href="https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md" rel="nofollow noreferrer">Velero</a> to back up the k8s configuration as well. It saves the disk and PVC information and should restore the volume claims smoothly.</p> <p>Additionally, have you looked at the <a href="https://github.com/kubernetes-sigs/azuredisk-csi-driver" rel="nofollow noreferrer">Azure Disk CSI</a> driver? It is being actively adopted in AKS and supports volume <a href="https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/deploy/example/snapshot" rel="nofollow noreferrer">snapshotting and recovery</a> from within the cluster. Best practice is still to use <a href="https://velero.io/blog/csi-integration/" rel="nofollow noreferrer">Velero for configuration backup</a> together with CSI whenever possible.</p>
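<p>For the manual-mount route, a minimal sketch of attaching a recovered managed disk directly to a pod so you can copy data off it (the disk name and URI are assumptions; substitute your recovered disk's values):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: disk-recovery
spec:
  containers:
  - name: recovery
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: recovered
      mountPath: /mnt/recovered
  volumes:
  - name: recovered
    azureDisk:
      kind: Managed
      diskName: myRecoveredDisk   # assumption
      diskURI: /subscriptions/&lt;sub-id&gt;/resourceGroups/&lt;rg&gt;/providers/Microsoft.Compute/disks/myRecoveredDisk   # assumption
</code></pre> <p>Once the pod is running you can <code>kubectl exec</code> into it and copy the data to wherever the new PVC is mounted.</p>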
Faheem
<p>I'm running kubernetes v1.11.5 and I'm installing helm with a tiller deployment for each namespace. Let's focus on a single namespace. This is the tiller service account configuration:</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: marketplace-int --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: tiller-manager namespace: marketplace-int rules: - apiGroups: - "" - extensions - apps - rbac.authorization.k8s.io - roles.rbac.authorization.k8s.io - authorization.k8s.io resources: ["*"] verbs: ["*"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: tiller-binding namespace: marketplace-int subjects: - kind: ServiceAccount name: tiller namespace: marketplace-int roleRef: kind: Role name: tiller-manager apiGroup: rbac.authorization.k8s.io </code></pre> <p>When I try to deploy a chart I get this error:</p> <pre><code>Error: release citest failed: roles.rbac.authorization.k8s.io "marketplace-int-role-ns-admin" is forbidden: attempt to grant extra privileges: [{[*] [*] [*] [] []}] user=&amp;{system:serviceaccount:marketplace-int:tiller 5c6af739-1023-11e9-a245-0ab514dfdff4 [system:serviceaccounts system:serviceaccounts:marketplace-int system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]} {[*] [ extensions apps rbac.authorization.k8s.io roles.rbac.authorization.k8s.io authorization.k8s.io] [*] [] []}] ruleResolutionErrors=[] </code></pre> <p>The error comes when trying to create rbac config for that namespace (with tiller sa):</p> <pre><code># Source: marketplace/templates/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: labels: app: citest chart: marketplace-0.1.0 heritage: Tiller release: citest namespace: marketplace-int name: marketplace-int-role-ns-admin rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] </code></pre> <p>The error message clearly says that the tiller service account doesn't have permission for <code>roles.rbac.authorization.k8s.io</code> but that permission is granted as showed previously.</p> <pre><code>$kubectl describe role tiller-manager Name: tiller-manager Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"tiller-manager","namespace":"marketplace-i... PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- * [] [] [*] *.apps [] [] [*] *.authorization.k8s.io [] [] [*] *.extensions [] [] [*] *.rbac.authorization.k8s.io [] [] [*] *.roles.rbac.authorization.k8s.io [] [] [*] </code></pre> <p>Honestly, I don't fully understand the error message to check if the <code>ownerrules</code> are fine and I'm trying to find out what does it means this kind of messages that seems to be related with the role description: <code>{[*] [*] [*] [] []}</code></p> <p>Any clue about what permissions I am missing?</p>
RuBiCK
<p>This is due to permission escalation prevention in RBAC. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping</a> for details. </p> <p>Permission to create a role object is necessary, but not sufficient. </p> <p>A user can only create/update a role if at least one of the following things is true:</p> <ol> <li><p>they already have all the permissions contained in the role, at the same scope as the object being modified (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role). In your case, that would mean the user attempting to create the role must already have <code>apiGroups=*, resources=*, verbs=*</code> permissions within the namespace where it is attempting to create the role. You can grant this by granting the cluster-admin clusterrole to the serviceaccount within that namespace with a rolebinding. </p></li> <li><p>they are given explicit permission to perform the "escalate" verb on the roles or clusterroles resource in the rbac.authorization.k8s.io API group (Kubernetes 1.12 and newer)</p></li> </ol>
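<p>For option 1, a sketch of granting the tiller service account full permissions within its namespace (names taken from the question):</p> <pre><code>kubectl create rolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=marketplace-int:tiller \
  --namespace=marketplace-int
</code></pre> <p>After that, the service account already holds every permission contained in the role it is trying to create, so the escalation check passes.</p>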
Jordan Liggitt
<p>I've been using K8S ConfigMap and Secret to manage our properties. My design is pretty simple, that keeps properties files in a git repo and use build server such as Thoughtworks GO to automatically deploy them to be ConfigMaps or Secrets (on choice condition) to my k8s cluster.</p> <p>Currently, I found it's not really efficient that I have to always delete the existing ConfigMap and Secret and create the new one to update as below:</p> <ol> <li><p><code>kubectl delete configmap foo</code></p> </li> <li><p><code>kubectl create configmap foo --from-file foo.properties</code></p> </li> </ol> <p>Is there a nice and simple way to make above one step and more efficient than deleting current? potentially what I'm doing now may compromise the container that uses these configmaps if it tries to mount while the old configmap is deleted and the new one hasn't been created.</p>
James Jiang
<p>You can get YAML from the <code>kubectl create configmap</code> command and pipe it to <code>kubectl apply</code>, like this:</p> <pre><code>kubectl create configmap foo --from-file foo.properties -o yaml --dry-run=client | kubectl apply -f - </code></pre>
Jordan Liggitt
<p>Kubernetes ships with a <code>ConfigMap</code> called <code>coredns</code> that lets you specify DNS settings. I want to modify or patch a small piece of this configuration by adding:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: upstreamNameservers: | ["1.1.1.1", "1.0.0.1"] </code></pre> <p>I know I can use <code>kubectrl edit</code> to edit the <code>coredns</code> <code>ConfigMap</code> is there some way I can take the above file containing only the settings I want to insert or update and have it merged on top of or patched over the existing <code>ConfigMap</code>?</p> <p>The reason for this is that I want my deployment to be repeatable using CI/CD. So, even if I ran my Helm chart on a brand new Kubernetes cluster, the settings above would be applied.</p>
Muhammad Rehan Saeed
<p>This will apply the same patch to that single field:</p> <pre><code>kubectl patch configmap/coredns \ -n kube-system \ --type merge \ -p '{"data":{"upstreamNameservers":"[\"1.1.1.1\", \"1.0.0.1\"]"}}' </code></pre>
Jordan Liggitt
<p>I am trying to pass three HTTPS endpoints to the <strong>--server</strong> argument here to make it HA. Is that possible?</p> <pre class="lang-sh prettyprint-override"><code>kubectl config set-cluster Zarsk \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${k8s_PUBLIC_ADDRESS}:6443 \ --kubeconfig=${instance}.kubeconfig </code></pre>
manzion_111
<p>No, only a single connection URL is supported</p>
Jordan Liggitt
<p>Am trying to better understand RBAC in kubernetes. Came across this unexpected situation where authorization test using <code>kubectl auth can-i</code> and actual results are different. In short, newly created user should not be able to get pods as per this test, however this user can actually get pods.</p> <p>Version:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>kubectl config for user in questions:</p> <pre><code>$ kubectl config view --minify apiVersion: v1 clusters: - cluster: certificate-authority: /home/master/ca.pem server: https://192.168.1.111:6443 name: jdoe contexts: - context: cluster: jdoe user: jdoe name: jdoe current-context: jdoe kind: Config preferences: {} users: - name: jdoe user: client-certificate: /home/master/jdoe.pem client-key: /home/master/jdoe-key.pem </code></pre> <p>The test against authorization layer says jdoe cannot get pods.</p> <pre><code>$ kubectl auth can-i get pods --as jdoe no </code></pre> <p>However, jdoe can get pods:</p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE ingress-nginx nginx-ingress-controller-87554c57b-ttgwp 1/1 Running 0 5h kube-system coredns-5f7d467445-ngnvf 1/1 Running 0 1h kube-system coredns-5f7d467445-wwf5s 1/1 Running 0 5h kube-system weave-net-25kq2 2/2 Running 0 5h kube-system weave-net-5njbh 2/2 Running 0 4h </code></pre> <p>Got similar results from auth layer after switching back to admin context:</p> <pre><code>$ kubectl config use-context kubernetes Switched to context "kubernetes". $ kubectl config view --minify apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://192.168.1.111:6443 name: kubernetes contexts: - context: cluster: kubernetes user: admin name: kubernetes current-context: kubernetes kind: Config preferences: {} users: - name: admin user: client-certificate: /home/master/admin.pem client-key: /home/master/admin-key.pem </code></pre> <p>From here too, user jdoe is not supposed to get pods.</p> <pre><code>$ kubectl auth can-i get pods --as jdoe no </code></pre> <p>Output of <code>kubectl config view</code></p> <pre><code>$ kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority: /home/master/ca.pem server: https://192.168.1.111:6443 name: jdoe - cluster: certificate-authority-data: REDACTED server: https://192.168.1.111:6443 name: kubernetes - cluster: certificate-authority: /home/master/ca.pem server: https://192.168.1.111:6443 name: master contexts: - context: cluster: jdoe user: jdoe name: jdoe - context: cluster: kubernetes user: admin name: kubernetes - context: cluster: master user: master name: master current-context: kubernetes kind: Config preferences: {} users: - name: admin user: client-certificate: /home/master/admin.pem client-key: /home/master/admin-key.pem - name: jdoe user: client-certificate: /home/master/jdoe.pem client-key: /home/master/jdoe-key.pem - name: master user: client-certificate: /home/master/master.pem client-key: /home/master/master-key.pem </code></pre>
papu
<p><code>kubectl get pods</code> with no specific pod name actually does a list. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb</a> for details about what verb corresponds to a given request. </p> <p>What does <code>can-i list pods</code> return?</p>
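<p>For completeness, that check with the user from the question would be:</p> <pre><code>kubectl auth can-i list pods --as jdoe
kubectl auth can-i watch pods --as jdoe
</code></pre> <p>If <code>list</code> comes back as <code>yes</code> while <code>get</code> comes back as <code>no</code>, that explains why a bare <code>kubectl get pods</code> succeeds even though <code>can-i get pods</code> says no.</p>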
Jordan Liggitt
<p>Trying to install Elastic Stack</p> <p>Hi I have connected to a cluster via gitlab self hosted ee.13.2.2</p> <p>I am using gitlab and I installed the ingress, prometheus, cert manager and the runner but when I try to install Elastic stack it will not install. Does anyone know where I can look to find the correct logs to figure out why it will not install?</p> <p>The error says:</p> <p>Something went wrong while installing Elastic Stack</p> <pre><code>Operation failed. Check pod logs for install-elastic-stack for more details. </code></pre> <p><a href="https://i.stack.imgur.com/h7we0.png" rel="nofollow noreferrer">Error-1</a> <a href="https://i.stack.imgur.com/7b7Tf.png" rel="nofollow noreferrer">Error-2</a></p> <pre><code>+ export 'HELM_HOST=localhost:44134' + helm init --client-only + tiller -listen localhost:44134 -alsologtostderr Creating /root/.helm Creating /root/.helm/repository Creating /root/.helm/repository/cache Creating /root/.helm/repository/local Creating /root/.helm/plugins Creating /root/.helm/starters Creating /root/.helm/cache/archive Creating /root/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com [main] 2020/08/15 19:22:35 Starting Tiller v2.16.9 (tls=false) [main] 2020/08/15 19:22:35 GRPC listening on localhost:44134 [main] 2020/08/15 19:22:35 Probes listening on :44135 [main] 2020/08/15 19:22:35 Storage driver is ConfigMap [main] 2020/08/15 19:22:35 Max history per release is 0 Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /root/.helm. Not installing Tiller due to 'client-only' flag having been set + helm repo add elastic-stack https://charts.gitlab.io &quot;elastic-stack&quot; has been added to your repositories + helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the &quot;elastic-stack&quot; chart repository ...Successfully got an update from the &quot;stable&quot; chart repository Update Complete. + helm upgrade elastic-stack elastic-stack/elastic-stack --install --atomic --cleanup-on-fail --reset-values --version 3.0.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/elastic-stack/config/values.yaml [tiller] 2020/08/15 19:22:44 getting history for release elastic-stack [storage] 2020/08/15 19:22:44 getting release history for &quot;elastic-stack&quot; Release &quot;elastic-stack&quot; does not exist. Installing it now. [tiller] 2020/08/15 19:22:44 preparing install for elastic-stack [storage] 2020/08/15 19:22:44 getting release history for &quot;elastic-stack&quot; [tiller] 2020/08/15 19:22:44 rendering elastic-stack chart using values 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/podsecuritypolicy.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch-curator/templates/role.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/role.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch-curator/templates/rolebinding.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch-curator/templates/hooks/job.install.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/rolebinding.yaml&quot; is empty. Skipping. 
2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/ingress.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch-curator/templates/psp.yml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/configmap.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch/templates/serviceaccount.yaml&quot; is empty. Skipping. 2020/08/15 19:22:44 info: manifest &quot;elastic-stack/charts/elasticsearch-curator/templates/serviceaccount.yaml&quot; is empty. Skipping. [tiller] 2020/08/15 19:22:44 performing install for elastic-stack [tiller] 2020/08/15 19:22:44 executing 1 crd-install hooks for elastic-stack [tiller] 2020/08/15 19:22:44 hooks complete for crd-install elastic-stack [tiller] 2020/08/15 19:22:45 executing 1 pre-install hooks for elastic-stack [tiller] 2020/08/15 19:22:45 hooks complete for pre-install elastic-stack [storage] 2020/08/15 19:22:45 getting release history for &quot;elastic-stack&quot; [storage] 2020/08/15 19:22:45 creating release &quot;elastic-stack.v1&quot; [kube] 2020/08/15 19:22:45 building resources from manifest [kube] 2020/08/15 19:22:45 creating 11 resource(s) [kube] 2020/08/15 19:22:46 beginning wait for 11 resources with timeout of 5m0s [kube] 2020/08/15 19:22:48 Pod is not ready: gitlab-managed-apps/elastic-stack-filebeat-h544s &lt;= this line ran lots of times (100ish) [tiller] 2020/08/15 19:27:46 warning: Release &quot;elastic-stack&quot; failed: timed out waiting for the condition [storage] 2020/08/15 19:27:46 updating release &quot;elastic-stack.v1&quot; [tiller] 2020/08/15 19:27:46 failed install perform step: release elastic-stack failed: timed out waiting for the condition INSTALL FAILED PURGING CHART then eventually: Successfully purged a chart! Error: release elastic-stack failed: timed out waiting for the condition </code></pre>
Jake
<p>Please try running it again; Helm sometimes times out rather quickly. If it fails consistently, look inside the <code>gitlab-managed-apps</code> namespace (where the chart is being installed, per your log). See which pods are not starting there and what errors they report.</p>
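<p>A few commands that usually surface the reason (the pod name is taken from the failing pod in your log and will change on each retry):</p> <pre><code>kubectl -n gitlab-managed-apps get pods
kubectl -n gitlab-managed-apps describe pod elastic-stack-filebeat-h544s
kubectl -n gitlab-managed-apps logs elastic-stack-filebeat-h544s
kubectl -n gitlab-managed-apps get events --sort-by=.metadata.creationTimestamp
</code></pre> <p>A filebeat pod stuck in a not-ready state is often down to image pull problems, missing permissions, or unschedulable nodes; <code>describe pod</code> will show which.</p>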
Faheem
<p>I can't find documentation on how to create a user group on Kubernetes with a <code>yaml</code> file. I'd like to gather some authenticated users into a group using their e-mail accounts.</p> <p>I'd like to write something like this in <code>yaml</code>:</p> <pre><code> kind: GoupBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: "frontend-developers" namespace: development subjects: - kind: User name: [email protected],[email protected] apiGroup: "" </code></pre>
Toren
<p>Groups are determined by the configured authentication method. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> for details about how each authenticator determines the group membership of the authenticated user.</p>
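<p>As a concrete illustration for one common method: with X.509 client certificate authentication, the user name comes from the certificate's Common Name and group memberships come from its Organization fields, so a "group" exists simply because several certificates share an <code>O=</code> value. A sketch of generating such a CSR (key, file and user names are placeholders):</p> <pre><code>openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr \
  -subj "/CN=jane/O=frontend-developers"
</code></pre> <p>You can then bind roles to the group rather than to individual users, e.g. a subject of <code>kind: Group, name: frontend-developers, apiGroup: rbac.authorization.k8s.io</code>.</p>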
Jordan Liggitt
<p>I am using external nginx loadbalancer and trying to configure K8s Master but its failing with below error :</p> <blockquote> <p>error uploading configuration: unable to create configmap: configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"**</p> </blockquote> <p>For me it more looks like cert issue but i am having hard time to find what i am missing , any help is appreciated in our infrastructure we use F5 loadbalancer in front of apiserver and i am seeing the same issue there this is the env i have created for troubleshooting</p> <p><strong>kubeadm-config:</strong></p> <pre><code> apiVersion: kubeadm.k8s.io/v1alpha2 kind: MasterConfiguration kubernetesVersion: v1.11.0 apiServerCertSANs: - "ec2-23-23-244-63.compute-1.amazonaws.com" api: controlPlaneEndpoint: "ec2-23-23-244-63.compute-1.amazonaws.com:6443" etcd: external: endpoints: - https://172.31.32.160:2379 caFile: /etc/kubernetes/pki/etcd/ca.crt certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key networking: # This CIDR is a calico default. Substitute or remove for your CNI provider. podSubnet: "10.244.0.0/16" </code></pre> <p><strong>Env :</strong> Kubelet : 1.11.1 kubeadm 1.11.1 kubectl 1.11.1</p> <p><strong>Output</strong></p> <pre><code> [certificates] Using the existing ca certificate and key. [certificates] Using the existing apiserver certificate and key. [certificates] Using the existing apiserver-kubelet-client certificate and key. [certificates] Using the existing sa key. [certificates] Using the existing front-proxy-ca certificate and key. [certificates] Using the existing front-proxy-client certificate and key. [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller- manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 41.036802 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace error uploading configuration: unable to create configmap: configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system" </code></pre> <p><strong>logs:</strong> </p> <pre><code> Unable to register node "ip-172-31-40-157" with API server: nodes is forbidden: User "system:anonymous" cannot create nodes at the cluster scope tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t tor.go:205] 
k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list services at the cluster on_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-40-157" not found tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t tor.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list services at the cluster tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t tor.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco :172] Unable to update cni config: No networks found in /etc/cni/net.d </code></pre> <p>Nginx :</p> <pre><code> upstream mywebapp1 { server 172.31.40.157:6443; } server { listen 6443 ssl; server_name ec2-23-23-244-63.compute-1.amazonaws.com; ssl on; ssl_certificate /opt/certificates/server.crt; ssl_certificate_key /opt/certificates/server.key; ssl_trusted_certificate /opt/certificates/ca.crt; location / { proxy_pass https://mywebapp1; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Nginx Server : 172-31-44-203 Master Server : 172-31-40-157 </code></pre> <p><strong>I am using Self Signed Certs and the CA to generate all the certs including the one in nginx are same</strong> </p> <p>I had same issue in our infrastructure when we use f5 loadbalancer </p>
mannam14387
<p>If your nodes speak to the apiserver through a load balancer, and expect to use client certificate credentials to authenticate (which is typical for nodes), the load balancer must not terminate or re-encrypt TLS, or the client certificate information will be lost and the apiserver will see the request as anonymous. </p>
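<p>With nginx specifically, that means using a TCP (stream) passthrough rather than an <code>http</code>/<code>proxy_pass</code> block that terminates TLS. A sketch based on the addresses in the question (assumes nginx is built with the stream module):</p> <pre><code># goes in nginx.conf at the same level as the http {} block
stream {
    upstream apiserver {
        server 172.31.40.157:6443;
    }

    server {
        listen 6443;
        proxy_pass apiserver;
    }
}
</code></pre> <p>Because the TLS session now terminates at the apiserver itself, the kubelet's client certificate reaches it intact and the requests stop being treated as <code>system:anonymous</code>.</p>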
Jordan Liggitt
<p>Is JSONPath supported in the Kubernetes HTTP API?</p> <p>For example, how does the following translate to an HTTP API call?</p> <pre><code>kubectl get pods -o=jsonpath='{.items[0]}' </code></pre>
user19937
<p>It's not supported by the API; the server always returns the full object or list, and you would need to evaluate that JSONPath expression against the API response yourself.</p>
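<p>A sketch of the equivalent call evaluated client-side, here with <code>jq</code> (whose filter syntax differs from JSONPath but expresses the same selection; the token and server address are assumptions):</p> <pre><code>curl -s -k -H "Authorization: Bearer $TOKEN" \
  https://127.0.0.1:6443/api/v1/namespaces/default/pods | jq '.items[0]'
</code></pre>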
Jordan Liggitt
<p>I can't get a rolebinding right in order to get node status from an app which runs in a pod on GKE.</p> <p>I am able to create a pod from there but not get node status. Here is the role I am creating:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: node-reader rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: [&quot;nodes&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;] </code></pre> <p>This is the error I get when I do a getNodeStatus:</p> <pre><code>{ &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;nodes \&quot;gke-cluster-1-default-pool-36c26e1e-2lkn\&quot; is forbidden: User \&quot;system:serviceaccount:default:sa-poc\&quot; cannot get nodes/status at the cluster scope: Unknown user \&quot;system:serviceaccount:default:sa-poc\&quot;&quot;, &quot;reason&quot;: &quot;Forbidden&quot;, &quot;details&quot;: { &quot;name&quot;: &quot;gke-cluster-1-default-pool-36c26e1e-2lkn&quot;, &quot;kind&quot;: &quot;nodes&quot; }, &quot;code&quot;: 403 } </code></pre> <p>I tried with some minor variations but did not succeed.</p> <p>Kubernetes version on GKE is 1.8.4-gke.</p>
unludo
<p>Subresource permissions are represented as <code>&lt;resource&gt;/&lt;subresource&gt;</code>, so in the role, you would specify <code>resources: ["nodes","nodes/status"]</code></p>
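<p>A sketch of the adjusted ClusterRole from the question, plus a binding for the service account shown in the error message:</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/status"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader-sa-poc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- kind: ServiceAccount
  name: sa-poc
  namespace: default
</code></pre>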
Jordan Liggitt
<p>I'd like to mount volume if it exists. For example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: "/etc/foo" volumes: - name: foo secret: secretName: mysecret </code></pre> <p>is the example from the documentation. However if the secret <code>mysecret</code> doesn't exist I'd like to skip mounting. That is optimistic/optional mount point.</p> <p>Now it stalls until the secret is created.</p>
nmiculinic
<p>secret and configmap volumes can be marked optional, and result in empty directories if the associated secret or configmap doesn't exist, rather than blocking pod startup</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: /etc/foo volumes: - name: foo secret: secretName: mysecret optional: true </code></pre>
Jordan Liggitt
<p>In Kubernetes object metadata, there are <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata" rel="noreferrer">the concepts of <code>resourceVersion</code> and <code>generation</code></a>. I understand the notion of <code>resourceVersion</code>: it is an optimistic concurrency control mechanism—it will change with every update. What, then, is <code>generation</code> for?</p>
Laird Nelson
<p>resourceVersion changes on every write, and is used for optimistic concurrency control</p> <p>in some objects, generation is incremented by the server as part of persisting writes affecting the <code>spec</code> of an object.</p> <p>some objects' <code>status</code> fields have an <code>observedGeneration</code> subfield for controllers to persist the generation that was last acted on.</p>
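<p>For example, you can compare the two fields on a Deployment like this (the deployment name is a placeholder):</p> <pre><code>kubectl get deployment my-deployment -o jsonpath='{.metadata.generation} {.status.observedGeneration}{"\n"}'
</code></pre>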
Jordan Liggitt
<p>I created a pod with context: </p> <pre><code>kubectl config set-context DevDan-context \ --cluster=kubernetes \ --namespace=development \ --user=DevDan </code></pre> <p>I see the pod if I run <code>kubectl --context=DevDan-context get pods</code>. </p> <p>But if I run just <code>kubectl get pods</code> I don't see the pod. </p> <p>Is there a way to show all the pods with a column showing the context?<br> I know there is <code>kubectl get pods --all-namespaces</code> but it just shows the pods and doesn't indicate which context each pod belongs to: </p> <pre><code>ubuntu@HOST:~$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE development nginx-8586cf59-plq29 1/1 Running 0 19h kube-system etcd-HOST 1/1 Running 1 9d kube-system kube-apiserver-HOST 1/1 Running 1 9d kube-system kube-controller-manager-HOST 1/1 Running 1 9d kube-system kube-dns-6f4fd4bdf-25ddh 3/3 Running 3 9d kube-system kube-flannel-ds-5vgd9 1/1 Running 1 9d kube-system kube-flannel-ds-vvstn 1/1 Running 0 9d kube-system kube-proxy-62xrx 1/1 Running 1 9d kube-system kube-proxy-g7w7l 1/1 Running 0 9d kube-system kube-scheduler-HOST 1/1 Running 1 9d </code></pre> <p>The workaround is to run <code>kubectl config get-contexts</code> and then take each context and run <code>kubectl --context=&lt;my_context&gt; get pods</code>, but if I have lots of contexts that can be problematic. </p> <p>I also tried to run </p>
E235
<p><code>set-context</code> sets attributes of a given context</p> <p><code>use-context</code> switches to it as the default, so that <code>get pods</code> with no --context arg uses it by default</p>
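<p>For example:</p> <pre><code># make the context the default for subsequent commands
kubectl config use-context DevDan-context
# now this uses DevDan-context (and its development namespace) implicitly
kubectl get pods
# verify which context is currently active
kubectl config current-context
</code></pre>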
Jordan Liggitt
<p>I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:</p> <pre><code>kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash </code></pre> <p>I download the correct version of kubectl inside the running container, make it executable and try to run</p> <pre><code>./kubectl get pods </code></pre> <p>but get the following error:</p> <pre><code>Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list pods in the namespace "default" </code></pre> <p>Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the serviceaccount to list the pods? My final goal will be to run <code>helm</code> inside the container. According to the docs I found, this should work fine as soon as <code>kubectl</code> is working fine.</p>
Achim
<blockquote> <p>Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one?</p> </blockquote> <p>Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST envvars to locate the API server, and the credential in the auto-injected <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> file to authenticate itself.</p> <blockquote> <p>How do I allow the serviceaccount to list the pods?</p> </blockquote> <p>That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.</p> <p>See <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions</a> for more information.</p> <p>I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the <code>serviceAccountName</code> in the helm pods). Then, to grant superuser permissions to that service account, run: </p> <pre><code>kubectl create clusterrolebinding helm-superuser \ --clusterrole=cluster-admin \ --serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME </code></pre>
Jordan Liggitt
<p>The <code>kubernetes</code> service is in the <code>default</code> namespace. I want to move it to the <code>kube-system</code> namespace. So I did it as follows:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl get svc kubernetes -o yaml &gt; temp.yaml </code></pre> <p>This generates <code>temp.yaml</code> using the current <code>kubernetes</code> service information. Then I changed the value of namespace to <code>kube-system</code> in <code>temp.yaml</code>. Lastly, I ran the following command:</p> <pre class="lang-sh prettyprint-override"><code>kubectl replace -f temp.yaml </code></pre> <p>But I got the error:</p> <pre><code>Error from server: error when replacing &quot;temp.yaml&quot;: service &quot;kubernetes&quot; not found </code></pre> <p>I think there is no service named <code>kubernetes</code> in the <code>kube-system</code> namespace.</p> <p>Can anyone tell me how to do this?</p>
sope
<p>Name and namespace are immutable on objects. When you try to change the namespace, <code>replace</code> looks for the service in the new namespace in order to overwrite it. You should be able to do <code>create -f ...</code> to create the service in the new namespace.</p>
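<p>A rough sketch of that approach; you will likely also need to strip server-populated fields such as <code>resourceVersion</code>, <code>uid</code>, <code>creationTimestamp</code> and <code>spec.clusterIP</code> before creating:</p> <pre><code>kubectl get svc kubernetes -n default -o yaml &gt; temp.yaml
# edit temp.yaml: set metadata.namespace to kube-system and remove the server-populated fields
kubectl create -f temp.yaml
</code></pre>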
Jordan Liggitt
<p>I have an AKS cluster in which I have deployed three Docker containers in three different namespaces. The first pod needs to check the availability of the other two pods. What I mean is that POD-1 monitors POD-2 and POD-3. Is there any option to monitor the pods this way? Can POD-1 generate logs or alerts if any error occurs in the other two pods? The first pod is a C# application.</p>
june alex
<p>You can use the Kubernetes API client SDK for C# (<a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp</a>) to connect with the API Server and get the status of the desired PODS.</p> <p>You may need to create a Service Account and assign it to your POD if you get permission issues.</p> <blockquote> <p>Client libraries often handle common tasks such as authentication for you. Most client libraries can discover and use the Kubernetes Service Account to authenticate if the API client is running inside the Kubernetes cluster, or can understand the kubeconfig file format to read the credentials and the API Server address.</p> </blockquote> <p><a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/client-libraries/</a></p>
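<p>For example, a minimal sketch of the RBAC side, granting a service account read access that POD-1 could run with (the names here are only placeholders):</p> <pre><code>kubectl create serviceaccount pod-watcher -n default
kubectl create clusterrolebinding pod-watcher-view --clusterrole=view --serviceaccount=default:pod-watcher
# then reference it from POD-1's manifest via spec.serviceAccountName: pod-watcher
</code></pre>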
Faheem
<p>I'm creating a pod with a volumeMount set to <code>mountPropagation: Bidirectional</code>. When created, the container is mounting the volume with <code>"Propagation": "rprivate"</code>. </p> <p>From the k8s <a href="https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation" rel="nofollow noreferrer">docs</a> I would expect <code>mountPropagation: Bidirectional</code> to result in a volume mount propagation of <code>rshared</code></p> <p>If I start the container directly with <code>docker</code> this is working. </p> <p>Some info:</p> <p><strong>Deployment Yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: test spec: selector: matchLabels: app: test strategy: type: RollingUpdate template: metadata: labels: app: test spec: containers: - image: gcr.io/google_containers/busybox:1.24 command: - sleep - "36000" name: test volumeMounts: - mountPath: /tmp/test mountPropagation: Bidirectional name: test-vol volumes: - name: test-vol hostPath: path: /tmp/test </code></pre> <p><strong>Resulting mount section from <code>docker inspect</code></strong></p> <pre><code>"Mounts": [ { "Type": "bind", "Source": "/tmp/test", "Destination": "/tmp/test", "Mode": "", "RW": true, "Propagation": "rprivate" }….. </code></pre> <p><strong>Equivalent Docker run</strong></p> <pre><code>docker run --restart=always --name test -d --net=host --privileged=true -v /tmp/test:/tmp/test:shared gcr.io/google_containers/busybox:1.24 </code></pre> <p><strong>Resulting Mounts section from <code>docker inspect</code> when created with <code>docker run</code></strong></p> <pre><code>"Mounts": [ { "Type": "bind", "Source": "/tmp/test", "Destination": "/tmp/test", "Mode": "shared", "RW": true, "Propagation": "shared" }... </code></pre> <p><strong>Output of kubectl version</strong></p> <pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Using <code>rke version v0.1.6</code></p>
Will
<p>this was a regression fixed in 1.10.3 in <a href="https://github.com/kubernetes/kubernetes/pull/62633" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/62633</a></p>
Jordan Liggitt
<p>I have a k8s job set up as a helm hook for the pre-install and pre-upgrade stages.</p> <p>&quot;helm.sh/hook&quot;: pre-install,pre-upgrade</p> <p>Is there a way to know, inside the job/pod, in which stage it is being executed - whether it is pre-install or pre-upgrade?</p>
Dhanuj Dharmarajan
<p>You could create separate job/pod manifests, assigning them different arguments or environment variables so each one knows which hook event it is running for. I haven't seen anything built into the tool itself.</p>
Faheem
<p>When kubelet tries to start on my Kubernetes worker nodes, I'm getting messages like this in the system log:</p> <pre><code>May 25 19:43:57 ip-10-240-0-223 kubelet[4882]: I0525 19:43:57.627389 4882 kubelet_node_status.go:82] Attempting to register node worker-1 May 25 19:43:57 ip-10-240-0-223 kubelet[4882]: E0525 19:43:57.628967 4882 kubelet_node_status.go:106] Unable to register node "worker-1" with API server: nodes is forbidden: User "system:node:" cannot create nodes at the cluster scope: unknown node for user "system:node:" May 25 19:43:58 ip-10-240-0-223 kubelet[4882]: E0525 19:43:58.256557 4882 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: services is forbidden: User "system:node:" cannot list services at the cluster scope: unknown node for user "system:node:" May 25 19:43:58 ip-10-240-0-223 kubelet[4882]: E0525 19:43:58.257381 4882 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:node:" cannot list pods at the cluster scope: unknown node for user "system:node:" </code></pre> <p>If I'm reading these correctly, the problem is that the node is using the username <code>system:node:</code> when connecting to the API server rather than <code>system:node:worker-1</code>. But as far as I can tell, it should be using a worker-specific one. Here's my <code>kubeconfig</code> (with private stuff elided):</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: [elided] server: https://[elided]:6443 name: kubernetes-the-hard-way contexts: - context: cluster: kubernetes-the-hard-way user: system:node:worker-1 name: default current-context: default kind: Config preferences: {} users: - name: system:node:worker-1 user: client-certificate-data: [elided] client-key-data: [elided] </code></pre> <p>I was under the impression that the <code>user</code>s specified there were the ones used when contacting the API, but clearly I'm wrong. Is there somewhere else I've missed out a reference to <code>worker-1</code>?</p> <p>I'm following the <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes the Hard Way</a> tutorial, but adjusting it for AWS as I go, so this problem is almost certainly a mistake I made when adjusting the config files. If there are any other config files that I should provide to make this easier/possible to debug, please do let me know.</p>
Giles Thomas
<p>The server determines the user from the CN of the certificate. Check the script that generated the certificate; it likely had an unset variable when it created the CN in the form <code>CN=system:node:$NODE</code>.</p>
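<p>You can confirm what CN the kubelet's client certificate actually carries with <code>openssl</code>, for example (adjust the path to wherever your worker's client cert lives):</p> <pre><code>openssl x509 -in /var/lib/kubelet/worker-1.pem -noout -subject
# the subject should include CN=system:node:worker-1 (and O=system:nodes)
</code></pre>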
Jordan Liggitt
<p>I'm trying to run mongo commands through my bash script file.</p> <p>I have sample.sh -- run with ./sample.sh</p> <p>I want to run the command below inside my sample.sh file. It is a bash script. I want the mongo commands to run from inside the file, not interactively in the shell.</p> <pre><code>kubectl exec $mongoPodName -c mongo-mongodb -- mongo db.createUser({user:&quot;admin&quot;,pwd:&quot;123456&quot;,roles:[{role:&quot;root&quot;,db:&quot;admin&quot;}]}) </code></pre> <p>I get the error below:</p> <pre><code>./generate-service-dbs.sh: line 37: syntax error near unexpected token `(' ./generate-service-dbs.sh: line 37: `kubectl exec $mongoPodName -c mongo-mongodb -- mongo db.createUser({user:&quot;admin&quot;,pwd:&quot;123456&quot;,roles:[{role:&quot;root&quot;,db:&quot;admin&quot;}]})' </code></pre>
Hakimeh Mordadi
<p>Don't you have to run command with <code>--eval</code>?</p> <pre class="lang-bash prettyprint-override"><code>kubectl exec $mongoPodName -c mongo-mongodb -- mongo --eval 'db.createUser({user:&quot;admin&quot;,pwd:&quot;123456&quot;,roles:[{role:&quot;root&quot;,db:&quot;admin&quot;}]})' </code></pre>
Faheem
<p>Hi everyone,</p> <p>Recently I upgraded my k8s cluster to v1.10.3, then I rolled it back to v1.9.8, and then to v1.8.12. After that I found something I can't understand.</p> <p>I can list deployments in my default namespace:</p> <pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE dunking-hedgehog-helmet 1 1 1 1 197d kube-system-tomcat-official 1 1 1 1 197d mongodb 1 1 1 1 152d smelly-pig-mysql 1 1 1 1 204d </code></pre> <p>But I can't in my kube-system namespace:</p> <pre><code># kubectl get deploy -nkube-system Error from server: no kind "Deployment" is registered for version "apps/v1" </code></pre> <p>Also, here are the logs from apiserver startup:</p> <pre><code>E0530 10:47:09.511854 1 cacher.go:277] unexpected ListAndWatch error: storage/cacher.go:/daemonsets: Failed to list *extensions.DaemonSet: no kind "DaemonSet" is registered for version "apps/v1" E0530 10:47:09.534114 1 cacher.go:277] unexpected ListAndWatch error: storage/cacher.go:/daemonsets: Failed to list *extensions.DaemonSet: no kind "DaemonSet" is registered for version "apps/v1" E0530 10:47:09.577678 1 cacher.go:277] unexpected ListAndWatch error: storage/cacher.go:/replicasets: Failed to list *extensions.ReplicaSet: no kind "ReplicaSet" is registered for version "apps/v1" E0530 10:47:09.580008 1 cacher.go:277] unexpected ListAndWatch error: storage/cacher.go:/deployments: Failed to list *extensions.Deployment: no kind "Deployment" is registered for version "apps/v1" E0530 10:47:09.580234 1 cacher.go:277] unexpected ListAndWatch error: storage/cacher.go:/deployments: Failed to list *extensions.Deployment: no kind "Deployment" is registered for version "apps/v1" </code></pre> <p>We all know the API version apps/v1 was added in v1.9.0, so why does v1.8.12 try to register Deployment for version "apps/v1"?</p>
Kun Li
<p>In 1.10, objects in the apps API group began persisting in etcd in apps/v1 format (introduced in 1.9). </p> <p>Rolling back to 1.9.x from 1.10.x is safe</p> <p>If you want to roll back further to 1.8.x, you must first round trip all the apps/v1 resources (daemonsets, deployments, replicasets, statefulsets) to ensure they are persisted in etcd in a format that 1.8 can read. </p> <p>The error you are getting indicates there is apps/v1 content in etcd which the kubernetes 1.8 apiserver cannot decode (since apps/v1 didn't exist in 1.8). The solution is to upgrade to 1.9.x, get/put all existing apps/v1 resources before downgrading to kube 1.8 again. </p>
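<p>A rough sketch of that round trip, run while the cluster is still on 1.9.x, so every workloads object is read and written back and re-persisted in a storage version 1.8 can decode:</p> <pre><code>kubectl get deployments,daemonsets,replicasets,statefulsets --all-namespaces -o json | kubectl replace -f -
</code></pre>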
Jordan Liggitt
<p>I created a namespace <code>xxx</code>; the role for this namespace is to get pods, services, etc. I created a service account <code>yyy</code> and a role binding of <code>yyy</code> to the role in namespace <code>xxx</code>.</p> <p>When I try to check something through the API with a secret token, for example</p> <pre><code>curl -kD - -H "Authorization: Bearer $TOKEN https://localhost:6443/api/v1/namespaces/xxx/pods </code></pre> <p>I get a "403 forbidden" error.</p> <p>So I created a cluster role binding of my service account <code>yyy</code> to the cluster role <code>view</code>, and after that the user can of course see pods in my namespace, but can also see pods from other namespaces.</p> <p>How can I restrict service account <code>yyy</code> to see pods, services, etc. only from a specific namespace?</p>
kris
<p>To allow access only in a specific namespace create a rolebinding, not a clusterrolebinding:</p> <p><code>kubectl create rolebinding my-viewer --clusterrole=view --serviceaccount=xxx:yyy -n xxx</code></p>
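<p>You can then verify the result with <code>kubectl auth can-i</code>, for example:</p> <pre><code>kubectl auth can-i list pods -n xxx --as=system:serviceaccount:xxx:yyy      # yes
kubectl auth can-i list pods -n default --as=system:serviceaccount:xxx:yyy  # no (assuming no other bindings)
</code></pre>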
Jordan Liggitt
<p>I am working on developing tools to interact with Kubernetes. I have OpenShift setup with the allow all authentication provider. I can log into the web console as I would expect.</p> <p>I have also been able to setup a service account and assign a cluster role binding to the service account user. Despite this, when I access the REST API using a token of that service account, I get forbidden. </p> <p>Here is what happens when I try to setup role bindings via OpenShift commands:</p> <pre><code>[root@host1 ~]# oadm policy add-cluster-role-to-user view em7 --namespace=default [root@host1 ~]# oadm policy add-cluster-role-to-user cluster-admin em7 --namespace=default [root@host1 ~]# oadm policy add-cluster-role-to-user cluster-reader em7 --namespace=default [root@host1 ~]# oc get secrets | grep em7 em7-dockercfg-hnl6m kubernetes.io/dockercfg 1 18h em7-token-g9ujh kubernetes.io/service-account-token 4 18h em7-token-rgsbz kubernetes.io/service-account-token 4 18h TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'` [root@host1 ~]# curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods HTTP/1.1 403 Forbidden Cache-Control: no-store Content-Type: application/json Date: Tue, 19 Jun 2018 15:36:40 GMT Content-Length: 260 { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "User \"system:serviceaccount:default:em7\" cannot list all pods in the cluster", "reason": "Forbidden", "details": { "kind": "pods" }, "code": 403 } </code></pre> <p>I can also try using the yaml file from (<a href="https://stackoverflow.com/questions/49667239/openshift-admin-token">Openshift Admin Token</a>): # creates the service account "ns-reader" apiVersion: v1 kind: ServiceAccount metadata: name: ns-reader namespace: default</p> <pre><code>--- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: # "namespace" omitted since ClusterRoles are not namespaced name: global-reader rules: - apiGroups: [""] # add other rescources you wish to read resources: ["pods", "secrets"] verbs: ["get", "watch", "list"] --- # This cluster role binding allows service account "ns-reader" to read pods in all available namespace kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-ns subjects: - kind: ServiceAccount name: ns-reader namespace: default roleRef: kind: ClusterRole name: global-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>When I run this, I get the following error:</p> <pre><code>[root@host1 ~]# kubectl create -f stack_overflow_49667238.yaml error validating "stack_overflow_49667238.yaml": error validating data: API version "rbac.authorization.k8s.io/v1" isn't supported, only supports API versions ["federation/v1beta1" "v1" "authentication.k8s.io/v1beta1" "componentconfig/v1alpha1" "policy/v1alpha1" "rbac.authorization.k8s.io/v1alpha1" "apps/v1alpha1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "extensions/v1beta1" "batch/v1" "batch/v2alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I have tried several different API versions from the list but they all failed in a similar way.</p>
Josh A
<p><code>oadm policy add-cluster-role-to-user view em7</code> grants the role to the user named <code>em7</code>.</p> <p>You need to grant permissions to the service account instead, e.g. <code>oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7</code></p>
Jordan Liggitt
<p>I'm trying to run a deployment on a Kubernetes cluster at work through a GitLab CI/CD process (i.e. I don't control most of the configs). I'm also new to Kubernetes, so please forgive me if this is basic and obvious.</p> <p>I have created my rolebindings:</p> <pre><code>kubectl create rolebinding [foo] --clusterrole=edit --serviceaccount=[bar]:default </code></pre> <p>I added my tokens and all settings to GitLab.</p> <p>However, when the deployment kicks off, it always fails with:</p> <p><code>Error from server (Forbidden): error when creating "/builds/bar/baz/deployment.yml": service is forbidden: User "system:serviceaccount:bar:bar-service-account" cannot create services in namespace "bar"</code></p> <p>I thought I should be working in <code>system:serviceaccount:bar:default</code>. Why is <code>:default</code> being replaced with <code>:bar-service-account</code>, and how do I fix this?</p> <p>Many many thanks in advance</p>
Justin
<p>You are granting permissions to the default service account with the rolebinding you are creating. However, the deployment is not using that service account. If you look at the deployment manifest, it will have a serviceAccountName of bar-service-account. </p> <p>Either change the deployment to use the default service account or change the rolebinding to grant permissions to the service account being used. </p>
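<p>For example, to check which service account the deployment actually uses and then grant it the same permissions (the deployment name is a placeholder):</p> <pre><code>kubectl -n bar get deployment my-app -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'
kubectl create rolebinding bar-sa-edit --clusterrole=edit --serviceaccount=bar:bar-service-account -n bar
</code></pre>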
Jordan Liggitt
<p>I am using Google Kubernetes Engine and have the Google HTTPS Load Balancer as my ingress.</p> <p>Right now the load balancer uses Let's Encrypt certificates. However, is there a simple way to ensure that the certificates are automatically renewed prior to their 90 day expiry?</p>
Dan
<p>You have not specified how you configured Let's Encrypt for your load balancer. Right now Google does not offer this for you, so I assume you mean you set the Let's Encrypt certificate yourself. In this case, Google can't renew your certificate.</p> <p>Until there's official support, you can install a third-party add-on like <code>cert-manager</code> to automate certificate configuration and renewal. There's a GKE tutorial for doing this at <a href="https://github.com/ahmetb/gke-letsencrypt" rel="nofollow noreferrer">https://github.com/ahmetb/gke-letsencrypt</a>.</p>
ahmet alp balkan
<p>I'm writing a shell script which needs to log in to a Kubernetes pod and execute a series of commands in it.</p> <p>Below is my sample_script.sh:</p> <pre><code>kubectl exec octavia-api-worker-pod-test -c octavia-api bash unset http_proxy https_proxy mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig /usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head </code></pre> <p>After running this script, I'm not getting any output. Any help will be greatly appreciated.</p>
ahmed meraj
<p>Are you running all these commands as a single line command? First of all, there's no <code>;</code> or <code>&amp;&amp;</code> between those commands. So if you paste it as a multi-line script to your terminal, likely it will get executed locally.</p> <p>Second, to tell bash to execute something, you need: <code>bash -c "command"</code>.</p> <p>Try running this:</p> <pre><code>$ kubectl exec POD_NAME -- bash -c "date &amp;&amp; echo 1" Wed Apr 19 19:29:25 UTC 2017 1 </code></pre> <p>You can make it multiline like this:</p> <pre><code>$ kubectl exec POD_NAME -- bash -c "date &amp;&amp; \ echo 1 &amp;&amp; \ echo 2" </code></pre>
ahmet alp balkan
<p>I have an OpenShift server I'm trying to work on. When I use:</p> <p><code>oc apply -f</code> </p> <p>My request is successfully processed. </p> <p>However, many tools I want to use for development expect <code>kubectl</code> to be used to connect to the cluster. In most circumstances, I can use <code>kubectl</code> with no problems. </p> <p>However, in the case of <code>kubectl apply -f</code>, this fails with error:</p> <p><code>Error from server (Forbidden): unknown</code></p> <p>While running <code>oc apply -f</code> works as expected.</p> <p>To try to bypass this, I created (on Windows) a symbolic link named "kubectl" to the <code>oc</code> executable. Unexpectedly, this caused the <code>oc</code> command to start behaving like the <code>kubectl</code> command, and I again received the same error. </p> <p>Calling the full path to the link, as well as simply renaming "oc.exe" to "kubectl.exe", produces the same error. I confirmed I was indeed calling the correct binary by running <code>kubectl version</code>, and I received different results from the vanilla binary vs. the openshift wrapper.</p> <p>Enabling verbose logging: <code>kubectl apply -f service.yaml --v 6</code> gives this output:</p> <pre><code>I0625 13:09:43.873985 11744 loader.go:357] Config loaded from file C:\Users\username/.kube/config I0625 13:09:44.175973 11744 round_trippers.go:436] GET https://myhost:443/swagger-2.0.0.pb-v1 403 Forbidden in 286 milliseconds I0625 13:09:44.179974 11744 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "unknown", "reason": "Forbidden", "details": { "causes": [ { "reason": "UnexpectedServerResponse", "message": "unknown" } ] }, "code": 403 }] </code></pre> <p>Running using <code>oc apply -f service.yaml --v 6</code> gives almost the same result, except that after the initial 403 error, I get the following output:</p> <pre><code>I0625 13:37:19.403259 7228 round_trippers.go:436] GET https://myhost:443/api 200 OK in 5 milliseconds </code></pre> <p>And the command continues on to work correctly.</p> <p>Is there any way I can bypass this odd behavior, and somehow use <code>kubectl</code> to successfully run <code>kubectl apply -f</code> against the OpenShift server?</p> <p><strong>Edit:</strong></p> <p>The <code>oc</code> binary wrapper uses kubectl 1.9.<br> The vanilla <code>kubectl</code> command uses 1.10.<br> The server is kubectl 1.6 and openshift 3.6.</p>
Himself12794
<p>what client and server version do you have?</p> <p>kubectl attempts to do client-side validation when it creates/applies objects. to do this, it first downloads the openapi schema. when making requests to an older server, that endpoint may not be recognized, and may result in a "forbidden" error</p> <p>setting <code>--validate=false</code> when making the kubectl apply call will bypass that client-side validation (server-side validation is still performed)</p>
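<p>For example:</p> <pre><code>kubectl apply -f service.yaml --validate=false
</code></pre>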
Jordan Liggitt
<p>I have quite a few failures when starting kube-apiserver with K8s version 1.10.11. Its health check comes back with poststarthook/rbac/bootstrap-roles failed. Very annoyingly, for security reasons, the reason is "reason withheld". How do I know what this check is? Am I missing some permissions / bindings? I'm upgrading from 1.9.6. The release notes didn't clearly mention that anything like this is required. </p>
user3421490
<p>All the details can be accessed with a super user credential or on the unsecured port (if you are running with that enabled) at <code>/healthz/&lt;name-of-health-check&gt;</code></p> <p>The RBAC check in particular reports unhealthy until the initial startup is completed and default roles are verified to exist. Typically, no user action is required to turn the check healthy, it simply reports that the apiserver should not be added to a load balancer yet, and reports healthy after a couple seconds, once startup completes. Persistent failure usually means problems communicating with etcd (I'd expect the /healthz/etcd check to be failing in that case as well). That behavior has been present since RBAC was introduced, and is not new in 1.10</p>
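<p>For example, with a super user credential you can query the individual checks directly:</p> <pre><code>kubectl get --raw /healthz/poststarthook/rbac/bootstrap-roles
kubectl get --raw /healthz/etcd
</code></pre>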
Jordan Liggitt
<p>I have deployed kubernetes v1.8 in my workplace. I created roles for admin and view access to namespaces 3 months ago. In the initial phase, RBAC was working as per the access given to the users. Now RBAC is not being enforced: everyone who has access to the cluster has cluster-admin access. </p> <p>Can you suggest what might have gone wrong or what changes need to be made?</p>
vamsi krishna
<p>Ensure the RBAC authorization mode is still being used (<code>--authorization-mode=…,RBAC</code> is part of the apiserver arguments)</p> <p>If it is, then check for a clusterrolebinding that is granting the cluster-admin role to all authenticated users:</p> <p><code>kubectl get clusterrolebindings -o yaml | grep -C 20 system:authenticated</code></p>
Jordan Liggitt
<p>I'd like to see the 'config' details as shown by the command:</p> <pre><code>kubectl config view </code></pre> <p>However, this shows the config details of all contexts. How can I filter it (or perhaps there is another command) to view the config details of the CURRENT context?</p>
Chris Stryczynski
<p><code>kubectl config view --minify</code> displays only the current context</p>
Jordan Liggitt
<p>I am following the instructions as given <a href="https://kubernetes.io/docs/getting-started-guides/gce/" rel="nofollow noreferrer">here</a>.</p> <p>I used the following command to get a running cluster: in the gcloud console I typed <code>curl -sS https://get.k8s.io | bash</code> as described in the link. After that, I ran <code>kubectl cluster-info</code> and got:</p> <pre><code>kubernetes-dashboard is running at https://35.188.109.36/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard </code></pre> <p>But when I go to that URL from Firefox, the message I get is: </p> <pre><code>User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched." </code></pre> <p>Expected behavior: it should ask for an admin name and password to connect to the dashboard.</p>
Akash
<p>Is there a reason why you did not use GKE (Google Kubernetes Engine) which provides the dashboard add-on installed out of the box?</p> <p>In your case, simply:</p> <ul> <li>the kubernetes-dashboard addon might not be installed (but logs say so, so I think this is not the problem)</li> <li>network configuration that makes <code>kubectl proxy</code> work might not be there</li> <li>the <code>curl .. | sh</code> script you used probably did not configure the authentication properly.</li> </ul> <p>I recommend using GKE as this works out of the box. You can find documentation here: <a href="https://cloud.google.com/kubernetes-engine/docs/oss-ui" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/oss-ui</a></p> <hr> <p>If you still want to use GCE, I recommend running <code>kubectl proxy</code> on your workstation (not your kubernetes nodes) and visiting <code>http://127.0.0.1:8001/ui</code> on your browser to see if it works.</p> <p>If you get an error about not having enough permissions, you might be using a Kubernetes version new enough that enforces RBAC policies on pods like dashboard which access the API. You can grant those permissions by running:</p> <pre><code>kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin \ --serviceaccount=kube-system:default </code></pre> <hr> <p>I also recommend trying out GKE UI in Google Cloud Console: <a href="https://console.cloud.google.com/kubernetes" rel="nofollow noreferrer">https://console.cloud.google.com/kubernetes</a></p>
ahmet alp balkan
<p>I'm encountering a situation where pods are occasionally getting evicted after running out of memory. Is there any way to set up some kind of alerting where I can be notified when this happens?</p> <p>As it is, Kubernetes keeps doing its job and re-creating pods after the old ones are removed, and it's often hours or days before I'm made aware that a problem exists at all.</p>
Ameo
<p>GKE exports Kubernetes Events (<code>kubectl get events</code>) to Stackdriver Logging, to the "GKE Cluster Operations" table: <a href="https://i.stack.imgur.com/SsvHR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SsvHR.png" alt=""></a></p> <p>Next, write a query specifically targeting evictions (the query I pasted below might not be accurate):</p> <p><a href="https://i.stack.imgur.com/AbhF7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AbhF7.png" alt="enter image description here"></a></p> <p>Then click "CREATE METRIC" button.</p> <p>This will create a Log-based Metric. On the left sidebar, click "Logs-based metrics" and click the "Create alert from metric" option on the context menu of this metric:</p> <p><a href="https://i.stack.imgur.com/BxsKn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BxsKn.png" alt="enter image description here"></a></p> <p>Next, you'll be taken to Stackdriver Alerting portal. You can set up alerts there based on thresholds etc.</p>
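<p>As a quick cluster-side check, recent <code>kubectl</code> versions can also filter the events themselves, for example:</p> <pre><code>kubectl get events --all-namespaces --field-selector reason=Evicted
</code></pre>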
ahmet alp balkan
<p>What is the difference between Objects and Resources in the Kubernetes world? </p> <p>I couldn't find it in <a href="https://kubernetes.io/docs/concepts/" rel="noreferrer">https://kubernetes.io/docs/concepts/</a> . I wonder if there is no real distinction between them; it seems objects are treated as a higher-level concept than resources. </p>
Ryota Hashimoto
<p>A representation of a specific group+version+kind is an object. For example, a v1 Pod, or an apps/v1 Deployment. Those definitions can exist in manifest files, or be obtained from the apiserver.</p> <p>A specific URL used to obtain the object is a resource. For example, a list of v1 Pod objects can be obtained from the <code>/api/v1/pods</code> resource. A specific v1 Pod object can be obtained from the <code>/api/v1/namespaces/&lt;namespace-name&gt;/pods/&lt;pod-name&gt;</code> resource.</p> <p>API discovery documents (like the one published at /api/v1) can be used to determine the resources that correspond to each object type.</p> <p>Often, the same object can be retrieved from and submitted to multiple resources. For example, v1 Pod objects can be submitted to the following resource URLs:</p> <ol> <li><code>/api/v1/namespaces/&lt;namespace-name&gt;/pods/&lt;pod-name&gt;</code></li> <li><code>/api/v1/namespaces/&lt;namespace-name&gt;/pods/&lt;pod-name&gt;/status</code></li> </ol> <p>Distinct resources allow for different server-side behavior and access control. The first URL only allows updating parts of the pod spec and metadata. The second URL only allows updating the pod status, and access is typically only given to kubelets.</p> <p>Authorization rules are based on the resources for particular requests.</p>
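<p>For example, you can list the resource names in the discovery document for the core v1 group (using <code>jq</code> to pick out the names):</p> <pre><code>kubectl get --raw /api/v1 | jq -r '.resources[].name'
# e.g. pods, pods/log, pods/status, services, ...
</code></pre>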
Jordan Liggitt
<p>I'm trying to automatically provision a loadbalancer on GCP by using the <code>ingress</code> object with our GKE cluster.</p> <p>I have three GKE deployments and each is available with a service on port <code>8080</code> with a unique nodePort. </p> <p>When using <code>ingress-fanout.yaml</code>, it creates 4 backend services instead of the 3 specified in the yaml. The 4th service defaults to all unmatched routes. I assume the 4th service is because we don't match unmapped routes in the yaml. </p> <p>How can one map unmatched routes to one of the services? Is that possible?</p> <p>Here's <code>ingress-fanout.yaml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: fanout-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "our-static-ip" ingress.gcp.kubernetes.io/pre-shared-cert: "our-ssl-cert" kubernetes.io/ingress.allow-http: "false" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" spec: rules: - host: our-website.com http: paths: - path: /* backend: serviceName: li-frontend servicePort: 8080 - path: /backend/* backend: serviceName: li-django servicePort: 8080 - path: /notifications/* backend: serviceName: li-notifications servicePort: 8080 </code></pre> <p><strong>Update:</strong> I removed many of the original questions and narrowed the scope of the question. When health checks started succeeding, that cleared the old issues.</p>
Mike
<p>First of all, "backends" have nothing to do with the "paths" you specified. "backends" on GCP Console are pointing to your GKE node pools.</p> <p>Ingress supports adding a default backend. You could have tried just searching for "ingress default backend". You can find documentation about this here: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#single-service-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#single-service-ingress</a></p> <p>Basically doing this will set a default backend when nothing else is matched:</p> <pre><code>spec: backend: serviceName: testsvc servicePort: 80 rules: [...your stuff here...] </code></pre>
ahmet alp balkan
<p>I started working on Kubernetes. I have already worked with single-container pods. Now I want to work on multi-container pods. I read a statement like: </p> <blockquote> <p>if an application needs several containers running on the same host, why not just make a single container with everything you need?</p> </blockquote> <p>This means two containers with a single IP address. My doubt is: in which cases do two or more containers use the same host?</p> <p>Could anybody please explain the above scenario with an example? </p>
BSG
<p>This is called "multiple processes per container". <a href="https://docs.docker.com/config/containers/multi-service_container/" rel="nofollow noreferrer">https://docs.docker.com/config/containers/multi-service_container/</a></p> <p>t's discussed on the internet many times and it has many gotchas. Basically there's not a lot of benefit of doing it.</p> <p>Ideally you want container to host 1 process and its threads/subprocesses.</p> <p>So if your database process is in a crash loop, let it crash and let docker restart it. This should not impact your web container.</p> <p>Also putting processes in separate containers lets you set separate memory/CPU limits so that you can set different limits for your <em>web</em> container and <em>database</em> container.</p> <p>That's why Kubernetes exposes POD concept which lets you run multiple containers in the same namespace. Read this page fully: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/</a></p>
ahmet alp balkan
<p>I'm kind of a newbie at using GCP/Kubernetes. I want to deploy both a GRPC service and a client to GCP. </p> <p>I have read a lot about it and have tried several things. There's something on cloud endpoints where you compile your proto file and do an api.config.yaml. (<a href="https://cloud.google.com/endpoints/docs/grpc/get-started-grpc-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/endpoints/docs/grpc/get-started-grpc-kubernetes-engine</a>)</p> <p>That's not what I'm trying to do. I want to upload a GRPC service with its .proto and expose its HTTP/2 public IP address and port. Then, deploy a GRPC client that interacts with that address and exposes REST endpoints.</p> <p>How can I get this done?</p>
Chidi Williams
<p>To deploy a grpc application to GKE/Kubernetes:</p> <ol> <li>Learn about gRPC, follow one of the quickstarts at <a href="https://grpc.io/docs/quickstart/" rel="noreferrer">https://grpc.io/docs/quickstart/</a></li> <li>Learn about how to build Docker images for your application. <ul> <li>Follow this Docker tutorial: <a href="https://docs.docker.com/get-started/part2/#conclusion-of-part-two" rel="noreferrer">https://docs.docker.com/get-started/part2/#conclusion-of-part-two</a></li> </ul></li> <li>Once you have a Docker image, follow <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a> tutorial to learn how to: <ul> <li>push a container image to Google Container Registry</li> <li>create a GKE cluster</li> <li>deploy the container image</li> <li>expose it on public internet on an IP address.</li> </ul></li> </ol> <p>These should be good to start with.</p> <p>Note that gRPC apps aren't much different than just HTTP web server apps. As far as Kubernetes is concerned, they're just a container image with a port number. :)</p>
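<p>As a very rough sketch of the build/push/deploy/expose steps above (project ID, image name and port are placeholders for your own values):</p> <pre><code>docker build -t gcr.io/PROJECT_ID/grpc-server:v1 .
docker push gcr.io/PROJECT_ID/grpc-server:v1
kubectl run grpc-server --image=gcr.io/PROJECT_ID/grpc-server:v1 --port=50051
kubectl expose deployment grpc-server --type=LoadBalancer --port=50051
kubectl get service grpc-server   # wait for EXTERNAL-IP to appear
</code></pre>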
ahmet alp balkan
<p>I am running a geth full node <a href="https://github.com/ethereum/go-ethereum/wiki/geth" rel="nofollow noreferrer">https://github.com/ethereum/go-ethereum/wiki/geth</a> on Google Cloud Platform on a VM instance. Currently, I have mounted an SSD and write the chain data to it.</p> <p>I now want to run it on multiple VM instances and use a load balancer for serving the requests made by the Dapp. I can do this using a normal load balancer, creating VMs and autoscaling. However, I have the following questions:</p> <ol> <li>The SSD seems to be a very important part of blockchain syncing speed. If I simply create VM images and add them for autoscaling, it won't help much because the blockchain will take time to sync.</li> <li>If I want to run these nodes on a Kubernetes cluster, what's the best way to use the disk?</li> </ol>
kosta
<p>Take a look at this Kubernetes Engine tutorial which shows you how to run StatefulSets with automatic persistent volume provisioning: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps</a></p> <p>Take a look at this Kubernetes Engine tutorial which shows you how to provision SSD disks <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#ssd_persistent_disks" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#ssd_persistent_disks</a> </p> <p>With these + HorizontalPodAutoscaler, you should be able to create a StatefulSet with auto-scaling and each pod will get its own SSD disk.</p>
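<p>For example, a minimal sketch of an SSD StorageClass that a StatefulSet's <code>volumeClaimTemplates</code> could reference by name:</p> <pre><code>cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
</code></pre>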
ahmet alp balkan
<p>I run the kube-apiserver with my self-signed certificate:</p> <pre><code>/opt/bin/kube-apiserver \ --etcd_servers=http://master:2379,http://slave1:2379,http://slave2:2379 \ --logtostderr=false \ --v=4 \ --client-ca-file=/home/kubernetes/ssl/ca.crt \ --service-cluster-ip-range=192.168.3.0/24 \ --tls-cert-file=/home/kubernetes/ssl/server.crt \ --tls-private-key-file=/home/kubernetes/ssl/server.key </code></pre> <p>Then I run the kubelet with the kubeconfig:</p> <pre><code>/opt/bin/kubelet \ --address=0.0.0.0 \ --port=10250 \ --api_servers=https://master:6443 \ --kubeconfig=/home/kubernetes/ssl/config.yaml \ --logtostderr=false \ --v=4 </code></pre> <p>The content of the config.yaml is below:</p> <pre><code>apiVersion: v1 kind: Config clusters: - name: ubuntu cluster: insecure-skip-tls-verify: true server: https://master:6443 contexts: - context: cluster: "ubuntu" user: "ubuntu" name: development current-context: development users: - name: ubuntu user: client-certificate: /home/kubernetes/ssl/ca.crt client-key: /home/kubernetes/ssl/ca.key </code></pre> <p>So, I thought the kubelet will not verify the self-signed certificate of apiserver, but the logs showed:</p> <pre><code>E1009 16:48:51.919749 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.919876 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.923153 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority E1009 16:48:52.821556 100724 event.go:194] Unable to write event: 'Post https://master:6443/api/v1/namespaces/default/events: x509: certificate signed by unknown authority' (may retry after sleeping) E1009 16:48:52.922414 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.922433 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.924432 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority </code></pre> <p>So I am confused with the meaning of the <code>insecure-skip-tls-verify</code>...</p>
Sun Gengze
<p>TL;DR: you can comment out the &quot;certificate-authority-data:&quot; key to get it working.</p> <hr /> <p>More info:</p> <p>There is an open issue (<a href="https://github.com/kubernetes/kubernetes/issues/13830" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/13830</a>) with the behavior of that flag when a client cert/key is provided. When a client certificate is provided, the insecure flag is ignored.</p>
Jordan Liggitt
<p>I have a server running on Kubernetes to handle hourly processing jobs. I'm thinking of using a service to expose the pods, and using an (external) cron job to hit the load balancer so that Kubernetes can autoscale to handle the higher load as required. However, in practice, if the cron job sends, say, 100 requests at the same time while there's only 1 pod, all the traffic will go to that pod, whereas subsequently spun-up pods will still not have any traffic to handle. </p> <p>How can I get around this issue? Is it possible for me to scale up the pods first using a cron job before making the requests? Or should I make requests with a time delay so as to give time for the pods to get spun up? Other suggestions are also welcome!</p>
jlyh
<p>If you're looking for serverless-style instant scale-up, something like <a href="https://github.com/knative/" rel="nofollow noreferrer">https://github.com/knative/</a> might be something you can use on top of Kubernetes/GKE.</p> <p>Other than that, the only way to scale up pods on Kubernetes today is the Horizontal Pod Autoscaler, which will take a look at CPU/memory averages, (and if you're on GKE, it can use Custom Stackdriver Metrics you can expose from your app using Prometheus etc.).</p>
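<p>For example, a minimal CPU-based autoscaler (deployment name and thresholds are placeholders):</p> <pre><code>kubectl autoscale deployment my-worker --cpu-percent=50 --min=1 --max=10
</code></pre>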
ahmet alp balkan
<p>I would like to disable the logging of the health checks produced by my Ingress on my pods.</p> <p>I have a GCE ingress distributing traffic to two pods, and I would like to clear up the logs I get from them.</p> <p>Do you have any ideas?</p> <p>Thanks,</p>
Simon Lacoste
<p>(It's not clear what you mean by disabling logs, so I'll make an assumption.)</p> <p>If your application is logging something when it gets a request, you can check the user agent of the request to filter out requests coming from Google Load Balancer health checking.</p> <p>When you provision a GCE ingress, your app will get a Google Cloud HTTP Load Balancer (L7). This LB will make health requests with the header:</p> <pre><code>User-agent: GoogleHC/1.0 </code></pre> <p>I recommend reading the header name case-insensitively ("user-agent") and then doing a case-insensitive check to see if its value starts with "googlehc".</p>
ahmet alp balkan
<p>I'm running a Google Kubernetes Engine with the "private-cluster" option. I've also defined "authorized Master Network" to be able to remotely access the environment - this works just fine. Now I want to setup some kind of CI/CD pipeline using Google Cloud Build - after successfully building a new docker image, this new image should be automatically deployed to GKE. When I first fired off the new pipeline, the deployment to GKE failed - the error message was something like: "Unable to connect to the server: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout". As I had the "authorized master networks" option under suspicion for being the root cause for the connection timeout, I've added 0.0.0.0/0 to the allowed networks and started the Cloud Build job again - this time everything went well and after the docker image was created it was deployed to GKE. Good.</p> <p>The only problem that remains is that I don't really want to allow the whole Internet being able to access my Kubernetes master - that's a bad idea, isn't it?</p> <p>Are there more elegant solutions to narrow down access by using allowed master networks and also being able to deploy via cloud build?</p>
Mizaru
<p>It's <strong>currently</strong> not possible to add Cloud Build machines to a VPC. Similarly, Cloud Build does not announce IP ranges of the build machines. So you can't do this today without creating a "ssh bastion instance" or a "proxy instance" on GCE within that VPC.</p> <p>I suspect this would change soon. GCB existed before GKE private clusters and private clusters are still a beta feature.</p>
ahmet alp balkan
<p>I'd like to ask if there is a possibility to create a shared quota for a few namespaces, e.g. for a team.</p> <p>Scenario:</p> <pre><code>cluster: 100c 100gb ram </code></pre> <pre><code>quota for TEAM A 40c/40gb quota for TEAM B 60c/60gb </code></pre> <p>Some namespaces from TEAM A, e.g.</p> <pre><code>teama-dev teama-test teama-stage teama-int </code></pre> <p>all have to be limited to the 40c/40gb quota.</p> <p>Same for TEAM B.</p> <p>The point is that I don't want to specify quotas directly for namespaces, but for a team, or a group of namespaces.</p>
lukasz
<p>That is not possible in Kubernetes today.</p> <p>OpenShift supports quota across multiple namespaces via <a href="https://docs.openshift.com/container-platform/3.3/admin_guide/multiproject_quota.html" rel="nofollow noreferrer">ClusterResourceQuota</a>, and it is possible something like that might make it into Kubernetes in the future, but does not exist yet.</p>
Jordan Liggitt
<p>I have configured a 1.8 cluster with 1 master and 2 nodes using kubeadm. When I shut down and restart the nodes, the kubelet does not start; it's complaining about the certificate. The same steps worked on an older version of Kubernetes.</p> <pre><code>Oct 2 22:02:32 k8sn-01 kubelet: I1002 22:02:32.854542 2795 client.go:75] Connecting to docker on unix:///var/run/docker.sock Oct 2 22:02:32 k8sn-01 kubelet: I1002 22:02:32.854569 2795 client.go:95] Start docker client with request timeout=2m0s Oct 2 22:02:32 k8sn-01 kubelet: I1002 22:02:32.860544 2795 feature_gate.go:156] feature gates: map[] Oct 2 22:02:32 k8sn-01 kubelet: W1002 22:02:32.860638 2795 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly Oct 2 22:02:32 k8sn-01 kubelet: W1002 22:02:32.861608 2795 server.go:381] invalid kubeconfig: invalid configuration: [unable to read client-cert /var/run/kubernetes/kubelet-client.crt for default-auth due to open /var/run/kubernetes/kubelet-client.crt: no such file or directory, unable to read client-key /var/run/kubernetes/kubelet-client.key for default-auth due to open /var/run/kubernetes/kubelet-client.key: no such file or directory] Oct 2 22:02:32 k8sn-01 kubelet: error: failed to run Kubelet: no client provided, cannot use webhook authorization Oct 2 22:02:32 k8sn-01 systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE Oct 2 22:02:32 k8sn-01 systemd: Unit kubelet.service entered failed state. Oct 2 22:02:32 k8sn-01 systemd: kubelet.service failed. </code></pre> <p>Not sure why it's missing the certificate after a reboot. I removed and re-created the cluster multiple times with the same result.</p> <pre><code>NAME STATUS ROLES AGE VERSION k8sm-01 Ready master 10m v1.8.0 k8sn-01 NotReady &lt;none&gt; 6m v1.8.0 k8sn-02 NotReady &lt;none&gt; 6m v1.8.0 </code></pre> <p>Any tips to resolve this issue?</p> <p>Thanks SR</p>
sfgroups
<p>This is due to <a href="https://github.com/kubernetes/kubernetes/issues/53288" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/53288</a></p> <p>For kubeadm, this issue was just resolved with a config change in an updated package (rpm 1.8.0-1, deb 1.8.0-01)</p> <p>If you already have a node in this state, you must remove the existing <code>/etc/kubernetes/kubelet.conf</code> file which contains references to erased certificate files as well.</p>
Jordan Liggitt
<p>I am following along <a href="https://bani.com.br/2018/09/istio-sidecar-injection-enabling-automatic-injection-adding-exceptions-and-debugging/" rel="nofollow noreferrer">this article</a> and trying this on GKE. After adding the argument <code>- --log_output_level=default:debug</code> the change seems accepted, as I get <code>deployment.extensions/istio-sidecar-injector edited</code>, but how do I know for sure? </p> <p>The output of <code>pod=$(kubectl -n istio-system get pods -l istio=sidecar-injector -o jsonpath='{.items[0].metadata.name}')</code> and then <code>kubectl -n istio-system logs -f $pod</code> is the same as before, and when I do (again) <code>kubectl -n istio-system edit deployment istio-sidecar-injector</code> the added argument is not there...</p>
musicformellons
<p>Depends on how installed Istio on GKE. There are multiple ways to install Istio from GKE.</p> <p>If you're installing from <a href="http://cloud.google.com/istio" rel="nofollow noreferrer">http://cloud.google.com/istio</a> which installs a Google-managed version of istio to your cluster, editing like <code>kubectl -n istio-system edit deployment istio-sidecar-injector</code> is a really bad idea, because Google will either revert it or the next version will wipe your modifications (so don't do it).</p> <p>If you're installing yourself from Istio open source release, Istio is distributed as a Helm chart, and has bunch of kubernetes .yaml manifests. You can go edit those YAML manifests –or update Helm values.yaml files to add that argument. Then you can perform the Istio installation with the updated values.</p> <p>If you're interested in getting help debugging istio, please get to a contributor community forum like Istio on Rocket Chat: <a href="https://istio.rocket.chat/" rel="nofollow noreferrer">https://istio.rocket.chat/</a> .</p>
ahmet alp balkan
<p>I'm running a simple Spring microservice project with Minikube. I have two projects: lucky-word-client (on port 8080) and lucky-word-server (on port 8888). lucky-word-client has to communicate with lucky-word-server. I want to inject the static NodePort of lucky-word-server (http://192.*.*.100:32002) as an environment variable in my Kubernetes deployment script of lucky-word-client. How could I do that?</p> <p>This is the deployment of lucky-word-server:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: lucky-server spec: selector: matchLabels: app: lucky-server replicas: 1 template: metadata: labels: app: lucky-server spec: containers: - name: lucky-server image: lucky-server-img imagePullPolicy: Never ports: - containerPort: 8888 </code></pre> <p>This is the service of lucky-word-server:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: lucky-server spec: selector: app: lucky-server ports: - protocol: TCP targetPort: 8888 port: 80 nodePort: 32002 type: NodePort </code></pre> <p>This is the deployment of lucky-word-client:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: lucky-client spec: selector: matchLabels: app: lucky-client replicas: 1 template: metadata: labels: app: lucky-client spec: containers: - name: lucky-client image: lucky-client-img imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>This is the service of lucky-word-client:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: lucky-client spec: selector: app: lucky-client ports: - protocol: TCP targetPort: 8080 port: 80 type: NodePort </code></pre>
Danny
<p>Kubernetes automatically injects services as environment variables. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a></p> <p><strong>But you should not use this.</strong> This won't work unless all the services are in place when you create the pod. It is inspired by &quot;docker&quot; which also moved on to DNS based service discovery now. So &quot;environment based service discovery&quot; is a thing of the past.</p> <p>Please rely on DNS service discovery. Minikube ships with <code>kube-dns</code> so you can just use the <code>lucky-server</code> hostname (or one of <code>lucky-server[.default[.svc[.cluster[.local]]]]</code> names). Read the documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
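<p>For example, from inside the lucky-word-client pod the server is reachable by its service name (port 80, as defined in the lucky-server service):</p> <pre><code>curl http://lucky-server/                              # short name, same namespace
curl http://lucky-server.default.svc.cluster.local/    # fully qualified name
</code></pre>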
ahmet alp balkan
<p>We have a Kubernetes cluster (v1.10.2) that behaves very strangely when we set the <strong>"anonymous-auth"</strong> flag to <strong>false</strong>. We use the flannel network plugin, and the logs in the flannel container related to this issue are the following:</p> <pre><code>E0515 09:59:13.396856 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
E0515 09:59:13.397503 1 reflector.go:304] github.com/coreos/flannel/subnet/kube/kube.go:295: Failed to watch *v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=167760&amp;timeoutSeconds=469&amp;watch=true: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0515 09:59:14.398383 1 reflector.go:201] github.com/coreos/flannel/subnet/kube/kube.go:295: Failed to list *v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0515 09:59:15.419773 1 reflector.go:201] github.com/coreos/flannel/subnet/kube/kube.go:295: Failed to list *v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
E0515 09:59:16.420411 1 reflector.go:201] github.com/coreos/flannel/subnet/kube/kube.go:295: Failed to list *v1.Node: Get https://10.96.0.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: connection refused
</code></pre> <p>Also, the events (<code>kubectl get events</code>) related to this issue seem to be:</p> <pre><code>kube-system ........ kube-apiserver-{hash} Pod spec.containers{kube-apiserver} Warning Unhealthy kubelet, {...} Liveness probe failed: HTTP probe failed with statuscode: 401
</code></pre> <p>This problem seems to happen randomly, every few minutes. The Kubernetes cluster becomes unresponsive for a few minutes at a time (<code>The connection to the server xx.xx.xx.xx was refused - did you specify the right host or port?</code>), the duration being random as well, and then recovers by itself.</p>
Adi Fatol
<p>Are you running the apiserver as a static pod? Does it have a liveness check defined that calls the <code>/healthz</code> endpoint? If so, that probe is likely made as an anonymous user and fails once anonymous requests are disabled, causing the apiserver pod to be restarted.</p>
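<p>As a rough sketch of what to look for: a kubeadm-style static pod manifest (typically under <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>; the exact path and port depend on your setup) usually contains a liveness probe along these lines, and that unauthenticated HTTP GET is what starts returning 401 when anonymous auth is turned off:</p> <pre><code>livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443        # the apiserver's secure port in this example
    scheme: HTTPS
  failureThreshold: 8
  initialDelaySeconds: 15
  timeoutSeconds: 15
</code></pre> <p>After 8 consecutive 401s the kubelet restarts the apiserver container, which matches the intermittent "connection refused" symptoms.</p>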
Jordan Liggitt
<p>I cannot find a way to remove the GPU (accelerator resource) from a Google Kubernetes Engine (GKE) cluster. There is no official documentation on how to change it. Can you suggest a proper way to do so? The UI is grayed out and doesn't allow me to make the change from the console.</p> <p>Here is the screenshot when I click to edit the cluster. <a href="https://i.stack.imgur.com/BlxR5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BlxR5.png" alt="screenshot of GKE edit node"></a></p> <p>Thank you</p>
Fony Lew
<p>You cannot edit settings of a Node Pool once it is created.</p> <p>You should create a new node pool with the settings you want (GPU, machine type etc) and delete the old node pool.</p> <p>There's a tutorial on how to migrate to a new node pool smoothly here: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool</a> If you don't care about pods terminating gracefully, you can create a new pool and just delete the old one.</p> <p>You can find more content about this at <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a>.</p>
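<p>A rough sketch of the flow with <code>gcloud</code>, assuming the cluster is called <code>my-cluster</code> in zone <code>us-central1-a</code>, the existing GPU pool is named <code>gpu-pool</code>, and you want a plain <code>n1-standard-4</code> replacement pool (all names, zone, and sizes are placeholders):</p> <pre><code># 1. Create a replacement pool without accelerators
gcloud container node-pools create cpu-pool \
  --cluster my-cluster --zone us-central1-a \
  --machine-type n1-standard-4 --num-nodes 3

# 2. Cordon and drain the old nodes so workloads reschedule onto the new pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=gpu-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
done

# 3. Delete the old pool
gcloud container node-pools delete gpu-pool --cluster my-cluster --zone us-central1-a
</code></pre> <p>The <code>cloud.google.com/gke-nodepool</code> label is what GKE puts on each node to identify its pool, which makes the drain step easy to target.</p>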
ahmet alp balkan
<p>I was learning Kubernetes authentication and authorization using RBAC. A question has been puzzling me: how exactly do the users in a kubeconfig file (e.g. /home/.kube/config) differ from the users in the basic-auth-file passed to the kube-apiserver startup command?</p> <p>I have gone through the official documents, and there seems to be no relation between them. Please kindly help me figure it out. Thank you!</p>
Jepsenwan
<p>A kubeconfig file contains three types of stanzas: clusters, users, and contexts.</p> <ol> <li><p>A cluster stanza describes how kubectl should reach a particular cluster. It has the URL and optionally a CA bundle to use to verify a secure connection.</p></li> <li><p>A user stanza describes credentials kubectl should send. A user stanza in a kubeconfig file can reference an x509 client certificate, or a bearer token, or a basic auth username/password.</p></li> <li><p>A context stanza pairs a cluster and a user stanza and gives it a name (e.g. "the 'development' context uses the 'development' cluster definition and the 'testuser' user credentials").</p></li> </ol> <p>The "current-context" attribute in a kubeconfig file indicates which context should be used by default when invoking kubectl.</p> <blockquote> <p>How exactly do the users in a kubeconfig file (e.g. /home/.kube/config) differ from the users in the basic-auth-file passed to the kube-apiserver startup command?</p> </blockquote> <p>Only the credentials in the user definitions of a kubeconfig are sent to the server. The user's name has no meaning apart from the reference from the context stanza.</p> <p>User definitions in a kubeconfig file can contain many types of credentials, not just basic auth credentials.</p>
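<p>A minimal sketch of how the three stanza types fit together (the names, server address, and credentials below are placeholders):</p> <pre><code>apiVersion: v1
kind: Config
clusters:
  - name: development              # cluster stanza: where the API server is
    cluster:
      server: https://1.2.3.4:6443
      certificate-authority-data: &lt;base64-encoded CA bundle&gt;
users:
  - name: testuser                 # user stanza: what credentials to send
    user:
      token: &lt;bearer token&gt;
contexts:
  - name: dev-as-testuser          # context stanza: pairs a cluster with a user
    context:
      cluster: development
      user: testuser
current-context: dev-as-testuser
</code></pre> <p>The names <code>development</code> and <code>testuser</code> are purely local to this file; the apiserver never sees them, only the credentials they point to.</p>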
Jordan Liggitt
<p>I'm setting up a k8s cluster on GKE. A wildcard DNS <code>*.server.com</code> will point to a Ingress controller. Internally to the cluster, there will be webserver pods, each exposing a unique service. The Ingress controller will use the server name to route to the various services. </p> <p>Servers will be created and destroyed on a nearly daily basis. I'd like to know if there's a way to add and remove a named server from the ingress controller without editing the whole list of named servers. </p>
Laizer
<p>It appears like you're planning to host multiple domain names on a single Load Balancer (==single <code>Ingress</code> resource). If not, this answer doesn't apply.</p> <p>You can do this by configuring <code>Ingress</code> with a long list of domain names like:</p> <pre><code>spec:
  rules:
    - host: cats.server.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: cats
              servicePort: 8080
    - host: dogs.server.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: dogs
              servicePort: 8080
    - [...]
</code></pre> <p>If that's your intention, <strong>there's no way of doing this without editing this whole list</strong> and applying it to the cluster every time.</p> <p>You can build a tool to construct this manifest file, then apply the changes. The Ingress controller is smart enough that existing domains will not see a downtime if they're still on the list.</p> <p>However the domains you removed from the list will also be removed from the URL Map of the load balancer and hence stop accepting the traffic.</p>
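<p>If you'd rather script the add/remove steps than re-render the whole manifest, one option is a JSON patch against the existing <code>Ingress</code>. A sketch, assuming the Ingress is named <code>my-ingress</code> and uses the backend fields shown above (host, service names, and the index are placeholders):</p> <pre><code># Append a new host rule to the existing list of rules
kubectl patch ingress my-ingress --type=json -p \
  '[{"op":"add","path":"/spec/rules/-","value":{"host":"birds.server.com","http":{"paths":[{"path":"/*","backend":{"serviceName":"birds","servicePort":8080}}]}}}]'

# Remove the rule at a given index (you have to know or compute the index)
kubectl patch ingress my-ingress --type=json -p '[{"op":"remove","path":"/spec/rules/2"}]'
</code></pre> <p>Either way, the end result is still the same single Ingress object being edited; the patch just saves you from templating the whole rules list yourself.</p>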
ahmet alp balkan
<p>Bellow is the how I watch kubernetes to get deployments status(include added\modified\delete ......)</p> <pre><code>[boomer@bzha kubernetes]$ curl http://10.110.200.24:8080/apis/extensions/v1beta1/watch/namespaces/kube-system/deployments {"type":"ADDED","object":{"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"influxdb","namespace":"kube-system","selfLink":"/apis/extensions/v1beta1/namespaces/kube-system/deployments/influxdb","uid":"37a07ded-6a25-11e8-b67e-0050568ddfc2","resourceVersion":"6896550","generation":2,"creationTimestamp":"2018-06-07T07:34:31Z","labels":{"k8s-app":"influxdb","task":"monitoring"},"annotations":{"deployment.kubernetes.io/revision":"2"}},"spec":{"replicas":1,"selector":{"matchLabels":{"k8s-app":"influxdb","task":"monitoring"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"influxdb","task":"monitoring"}},"spec":{"volumes":[{"name":"tz-config","hostPath":{"path":"/usr/share/zoneinfo/Asia/Shanghai","type":""}}],"containers":[{"name":"influxdb","image":"hub.skyinno.com/google_containers/heapster-influxdb-amd64:v1.3.3","resources":{"limits":{"cpu":"4","memory":"4Gi"},"requests":{"cpu":"100m","memory":"128Mi"}},"volumeMounts":[{"name":"tz-config","mountPath":"/etc/localtime"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}}},"status":{"observedGeneration":2,"replicas":1,"updatedReplicas":1,"readyReplicas":1,"availableReplicas":1,"conditions":[{"type":"Available","status":"True","lastUpdateTime":"2018-06-07T07:34:31Z","lastTransitionTime":"2018-06-07T07:34:31Z","reason":"MinimumReplicasAvailable","message":"Deployment has minimum availability."}]}}} {"type":"ADDED","object":{"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"prometheus-core","namespace":"kube-system","selfLink":"/apis/extensions/v1beta1/namespaces/kube-system/deployments/prometheus-core","uid":"fa7b06da-6a2a-11e8-9521-0050568ddfc2","resourceVersion":"8261846","generation":6,"creationTimestamp":"2018-06-07T08:15:45Z","labels":{"app":"prometheus","component":"core"},"annotations":{"deployment.kubernetes.io/revision":"6"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"prometheus","component":"core"}},"template":{"metadata":{"name":"prometheus-main","creationTimes ...... </code></pre> <p>when I first start curl to watch the deployment api, I notice that it will return all deployments list first (which type is <strong>ADDED</strong>), my question is that: </p> <ol> <li>Is it always list deployments first when I watch the api? </li> <li>When I watch something else, like service or ingress, does it show the same behavior? </li> <li>Where could I find document or code about this?</li> </ol>
zhashuyu
<p>Yes, watching without a resourceVersion specified first sends ADDED events for all existing objects.</p> <p>It is more common to perform a list operation first to obtain all existing objects, then start a watch passing the resourceVersion from the returned list result to watch for changes from that point. </p>
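<p>A sketch of that usual list-then-watch pattern against the same endpoint (the <code>resourceVersion</code> value below is a placeholder; use whatever <code>metadata.resourceVersion</code> the list response returns):</p> <pre><code># 1. List once to get the current objects plus a resourceVersion
curl http://10.110.200.24:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments

# 2. Watch starting from that point; you now receive only the changes that
#    happen after the list (ADDED/MODIFIED/DELETED), not a replay of existing objects
curl "http://10.110.200.24:8080/apis/extensions/v1beta1/watch/namespaces/kube-system/deployments?resourceVersion=167760"
</code></pre>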
Jordan Liggitt
<p>I want to execute the equivalent of</p> <p><code>kubectl get all -l app=myapp -n mynamespace</code></p> <p>or</p> <p><code>kubectl label all -l version=1.2.0,app=myapp track=stable --overwrite</code></p> <p>using client-go.</p> <p>I looked at the <a href="https://github.com/kubernetes/client-go/blob/master/dynamic" rel="noreferrer">dynamic</a> package, but it seems like it needs a <code>GroupVersionResource</code>, which is different for, say, Service objects and Deployment objects. Also, when I pass <code>schema.GroupVersionResource{Group: "apps", Version: "v1"}</code> it doesn't find anything, and when I pass <code>schema.GroupVersionResource{Version: "v1"}</code> it finds only the namespace object and also doesn't filter by labels, even though I provided label options:</p> <pre><code>resource := schema.GroupVersionResource{Version: "v1"}
listOptions := metav1.ListOptions{LabelSelector: fmt.Sprintf("app=%s", AppName), FieldSelector: ""}
res, listErr := dynamicClient.Resource(resource).Namespace("myapps").List(listOptions)
</code></pre> <p>I also looked at the runtime package, but didn't find anything useful. I took a look at how <code>kubectl</code> implements this, but haven't figured it out yet; there are too many levels of abstraction.</p>
Kseniia Churiumova
<p>You can't list &quot;all objects&quot; with one call.</p> <p>Unfortunately the way Kubernetes API is architected is via API groups, which have multiple APIs under them.</p> <p>So you need to:</p> <ol> <li>Query all API groups (<code>apiGroup</code>)</li> <li>Visit each API group to see what APIs (<code>kind</code>) it exposes.</li> <li>Actually query that <code>kind</code> to get all the objects (here you may actually filter the list query with the label).</li> </ol> <p>Fortunately, <code>kubectl api-versions</code> and <code>kubectl api-resources</code> commands do these.</p> <p>So to learn how kubectl finds all &quot;kinds&quot; of API resources, run:</p> <pre><code>kubectl api-resources -v=6 </code></pre> <p>and you'll see kubectl making calls like:</p> <ul> <li><code>GET https://IP/api</code></li> <li><code>GET https://IP/apis</code></li> <li>then it visits every api group: <ul> <li><code>GET https://IP/apis/metrics.k8s.io/v1beta1</code></li> <li><code>GET https://IP/apis/storage.k8s.io/v1</code></li> <li>...</li> </ul> </li> </ul> <p>So if you're trying to clone this behavior with client-go, you should use the same API calls, <strike>or better just write a script just shells out to <code>kubectl api-resources -o=json</code> and script around it.</strike></p> <p>If you aren't required to use client-go, there's a <a href="https://github.com/corneliusweig/ketall" rel="nofollow noreferrer">kubectl plugin called <code>get-all</code></a> which exists to do this task.</p>
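<p>If shelling out is acceptable, a rough equivalent of <code>kubectl get all -l app=myapp</code> that actually covers every listable, namespaced kind looks something like this (it can be slow and may hit kinds you lack permissions for, hence the error suppression):</p> <pre><code>for kind in $(kubectl api-resources --verbs=list --namespaced -o name); do
  kubectl get "$kind" -l app=myapp -n mynamespace \
    --ignore-not-found --show-kind 2&gt;/dev/null
done
</code></pre> <p>Doing the same in client-go means replicating this discovery step (the discovery client) and then issuing one list call per resolved <code>GroupVersionResource</code> through the dynamic client.</p>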
ahmet alp balkan
<p>While the <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">kubernetes golang api example for out-of-cluster authentication works fine</a>, and <a href="https://gist.github.com/innovia/fbba8259042f71db98ea8d4ad19bd708" rel="nofollow noreferrer">creating a service account and exporting the bearer token works great</a>, it feels silly to write the pieces to a temporary file only to tell the API to read it. Is there an API way to pass these pieces as an object rather than write to a file?</p> <pre><code> clusterData := map[string]string{ "BEARER_TOKEN": bearerToken, "CA_DATA": clusterCA, "ENDPOINT": clusterUrl, } const kubeConfigTmpl = ` apiVersion: v1 clusters: - cluster: certificate-authority-data: {{.CA_DATA}} server: {{.HOST_IP_ADDRESS}} name: kubernetes contexts: - context: cluster: kubernetes namespace: default user: lamdba-serviceaccount-default-kubernetes name: lamdba-serviceaccount-default-kubernetes current-context: lamdba-serviceaccount-default-kubernetes kind: Config preferences: {} users: - name: lamdba-serviceaccount-default-kubernetes user: token: {{.BEARER_TOKEN}} ` t := template.Must(template.New("registration").Parse(kubeConfigTmpl)) buf := &amp;bytes.Buffer{} if err := t.Execute(buf, clusterData); err != nil { panic(err) } registrationPayload := buf.String() d1 := []byte(registrationPayload) err := ioutil.WriteFile("/tmp/config", d1, 0644) </code></pre>
SteveCoffman
<p>The <code>rest.Config</code> struct passed to the <code>NewForConfig</code> client constructors lets you specify a bearer token and/or client certificate/key data directly, so there is no need to write a temporary kubeconfig file at all.</p>
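<p>A minimal sketch of building the clientset directly from the pieces you already have (the variable names are placeholders; note that <code>CAData</code> expects raw PEM bytes, so if your CA came from a kubeconfig's base64-encoded <code>certificate-authority-data</code> you need to decode it first):</p> <pre><code>import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func newClientset(clusterURL, bearerToken string, caPEM []byte) (*kubernetes.Clientset, error) {
    cfg := &amp;rest.Config{
        Host:        clusterURL,  // e.g. "https://1.2.3.4"
        BearerToken: bearerToken, // the service account token
        TLSClientConfig: rest.TLSClientConfig{
            CAData: caPEM, // PEM-encoded CA bundle for the API server
        },
    }
    return kubernetes.NewForConfig(cfg)
}
</code></pre>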
Jordan Liggitt
<p>I've updated the SSL certificate for my Kubernetes Ingress services, but I don't know how to restart the instances to use the updated cert secret without manually deleting and restarting the Ingress instances. That isn't ideal because of the number of ingresses that are making use of that specific cert (all sitting on the same TLD). How do I force it to use the updated secret?</p>
moberemk
<p>You shouldn't need to delete the Ingress object to use the updated TLS Secret.</p> <p>The GKE Ingress controller (<a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce</a>) automatically picks up the updated Secret resource and updates the load balancer. (Open an issue on the repo if it doesn't.)</p> <p>Example:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl describe ingress foobar
Name:             foobar
Labels:           &lt;none&gt;
Namespace:        default
Address:          123.234.123.234
Ingress Class:    &lt;none&gt;
Default backend:  &lt;default&gt;
TLS:
  my-secret terminates (...)
Events:
  Type    Reason  Age                   From                     Message
  ----    ------  ----                  ----                     -------
  Normal  Sync    6m29s                 loadbalancer-controller  TargetProxy &quot;&lt;redacted&gt;&quot; certs updated
  Normal  Sync    6m25s (x78 over 12h)  loadbalancer-controller  Scheduled for sync
</code></pre> <p>Here the certificate from the secret 'my-secret' was successfully reloaded 6m29s ago.</p> <p>If you're not seeing the changes in ~10-20 minutes, I recommend editing the Ingress object trivially (for example, add a label or an annotation) so that the ingress controller picks up the object again, evaluates goal state vs. current state, and then goes ahead and makes the changes (updates the TLS secret).</p>
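<p>For completeness, a sketch of rotating the certificate in place and nudging the controller if it doesn't react (the secret/ingress names and file paths are placeholders; older kubectl versions use plain <code>--dry-run</code> instead of <code>--dry-run=client</code>):</p> <pre><code># Update the existing TLS secret with the new certificate and key
kubectl create secret tls my-secret --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml | kubectl apply -f -

# If the controller hasn't re-synced after ~20 minutes, touch the Ingress
kubectl annotate ingress foobar cert-rotated-at="$(date +%s)" --overwrite
</code></pre> <p>Because many Ingresses reference the same secret, one in-place secret update covers all of them; the annotation trick is only needed per Ingress that fails to pick it up.</p>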
ahmet alp balkan
<p>I've already managed to have at most one pod with a containerized application per node. Now I want to create a Kubernetes cluster in which each user has access to one personal node. As a result, I want an architecture where each user works in an isolated environment. I was thinking about some load balancing with Ingress rules, but I am not sure it's a good way to achieve this.</p> <p><a href="https://i.stack.imgur.com/AdnYL.png" rel="nofollow noreferrer">Architecture</a></p>
N0va
<p>Kubernetes in general is not a fit for you.</p> <p>If you're trying to do "max 1 pod per node" or "a node for every user", you should not force yourself to use Kubernetes; maybe what you're looking for is just virtual machines.</p>
ahmet alp balkan
<p>I'm running jobs on EKS. After trying to start a job with invalid yaml, it doesn't seem to let go of the bad yaml and keeps giving me the same error message even after correcting the file.</p> <ol> <li>I successfully ran a job.</li> <li>I added an environment variable with a boolean value in the <code>env</code> section, which raised this error: <ul> <li><code>Error from server (BadRequest): error when creating "k8s/jobs/create_csv.yaml": Job in version "v1" cannot be handled as a Job: v1.Job: Spec: v1.JobSpec: Template: v1.PodTemplateSpec: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|oduction"},{"name":"RAILS_LOG_TO_STDOUT","value":true},{"name":"AWS_REGION","value":"us-east-1"},{"n|...</code></li> </ul></li> <li>I changed the value to be a string <code>yes</code>, but the error message continues to show the original, bad yaml.</li> <li>No jobs show up in <code>kubectl get jobs --all-namespaces</code> <ul> <li>So I don't know where this old yaml would be hiding.</li> </ul></li> </ol> <p>I thought this might be because I didn't have <code>imagePullPolicy</code> set to <code>Always</code>, but it happens even if I run the <code>kubectl</code> command locally.</p> <p>Below is my job definition file:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: generateName: create-csv- labels: transformer: AR spec: template: spec: containers: - name: create-csv image: my-image:latest imagePullPolicy: Always command: ["bin/rails", "create_csv"] env: - name: RAILS_ENV value: production - name: RAILS_LOG_TO_STDOUT value: yes - name: AWS_REGION value: us-east-1 - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws key: aws_access_key_id - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws key: aws_secret_access_key restartPolicy: OnFailure backoffLimit: 6 </code></pre>
jstim
<p>"yes" must be quoted in yaml or it gets treated as a keyword that means a boolean true</p> <p>Try this:</p> <pre><code>value: "yes" </code></pre>
Jordan Liggitt
<p>For a deployed Kubernetes CronJob named <code>foo</code>, how can I manually run it immediately? This would be for testing or manual runs outside its configured schedule.</p>
Dan Tanner
<p>You can start a job based on an existing job's configuration, and a cronjob is essentially a template for jobs that run on a schedule.</p> <p>Syntax:<br> <code>kubectl create job --from=cronjob/$CronJobName $NameToGiveTheJobThatWillBeCreated</code> </p> <p>e.g.:<br> <code>kubectl create job --from=cronjob/foo foo-manual-1</code></p>
Dan Tanner
<p>I set <code>concurrencyPolicy</code> to <code>Allow</code>; here is my <code>cronjob.yaml</code>:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: gke-cron-job
spec:
  schedule: '*/1 * * * *'
  startingDeadlineSeconds: 10
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            run: gke-cron-job
        spec:
          restartPolicy: Never
          containers:
            - name: gke-cron-job-solution-2
              image: docker.io/novaline/gke-cron-job-solution-2:1.3
              env:
                - name: NODE_ENV
                  value: 'production'
                - name: EMAIL_TO
                  value: '[email protected]'
                - name: K8S_POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              ports:
                - containerPort: 8080
                  protocol: TCP
</code></pre> <p>After reading the docs: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs</a></p> <p>I still don't understand how to use <code>concurrencyPolicy</code>.</p> <p>How can I run my cron job concurrently?</p> <p>Here are the logs of the cron job:</p> <pre class="lang-sh prettyprint-override"><code>☁ nodejs-gcp [master] ⚡ kubectl logs -l run=gke-cron-job

&gt; [email protected] start /app
&gt; node ./src/index.js

config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660540-gmwvc', VERSION: '1.0.2' }
[2019-01-28T07:29:10.593Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } }

&gt; [email protected] start /app
&gt; node ./src/index.js

config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660600-wbl5g', VERSION: '1.0.2' }
[2019-01-28T07:30:11.405Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } }

&gt; [email protected] start /app
&gt; node ./src/index.js

config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660660-8mn4r', VERSION: '1.0.2' }
[2019-01-28T07:31:11.099Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } }
</code></pre> <p>As you can see, the <strong>timestamps</strong> indicate that the cron job is not running concurrently.</p>
Lin Du
<p>It's because you're reading the wrong documentation. CronJobs aren't a GKE-specific feature. For the full documentation on CronJob API, refer to the Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy</a> (quoted below).</p> <p>Concurrency policy decides whether a new container can be started while the previous CronJob is still running. If you have a CronJob that runs every 5 minutes, and sometimes the Job takes 8 minutes, then you may run into a case where multiple jobs are running at a time. This policy decides what to do in that case.</p> <blockquote> <h2>Concurrency Policy</h2> <p>The .spec.concurrencyPolicy field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. the spec may specify only one of the following concurrency policies:</p> <ul> <li><code>Allow</code> (default): The cron job allows concurrently running jobs</li> <li><code>Forbid</code>: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run</li> <li><code>Replace</code>: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run</li> </ul> <p>Note that concurrency policy only applies to the jobs created by the same cron job. If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.</p> </blockquote>
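<p>One thing worth noting about your example: with a <code>*/1 * * * *</code> schedule and a job that appears to finish well within a minute, you will never observe overlapping runs regardless of the policy, because <code>Allow</code> only matters when a previous run is still in progress at the next tick. A quick way to see it in action, assuming an image with a shell available (busybox is used here just for illustration), is to make the job artificially outlive the schedule interval:</p> <pre><code>jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: slow-job
            image: busybox
            # Sleep longer than the 1-minute schedule so runs overlap;
            # with concurrencyPolicy: Allow you'll see multiple active pods at once.
            command: ["sh", "-c", "date; sleep 150; date"]
</code></pre>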
ahmet alp balkan
<p>Hello StackOverflow users,</p> <p>I've started working in the Kubernetes space recently and saw that <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/" rel="noreferrer">Custom Resource Definitions(CRDs) types are not namespaced and are available to to all namespaces.</a></p> <p>I was wondering why it isn't possible to make a CRD type scoped to a namespace. Any help would be appreciated!</p>
awgreene
<p>See <a href="https://github.com/kubernetes/kubernetes/issues/65551#issuecomment-400909534" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/65551#issuecomment-400909534</a> for a discussion of this issue.</p> <p>A particular CRD can define a custom resource that is namespaced or cluster-wide, but the type definition (the CRD itself) is cluster-wide and applies uniformly to all namespaces.</p>
Jordan Liggitt