Columns: Question (string, 65–39.6k chars) · QuestionAuthor (string, 3–30 chars) · Answer (string, 38–29.1k chars) · AnswerAuthor (string, 3–30 chars)
<p>For one pod, three images have been created. The problem here is that there is no communication between containers within the same pod. How should my application connect to these three containers?</p> <p>My pod has the containers below.</p> <pre><code>[dev-application dev-app-nginx dev-app-redis] </code></pre> <p>Here I am able to see only Rails running, but Redis and nginx are not running, because Redis and nginx are running as different containers in the same pod.</p> <pre><code>kubectl exec -ti test-deployment-5f59864c8b-mv4kk sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Defaulting container name to dev-application. Use 'kubectl describe pod/test-deployment-5f59864c8b-mv4kk -n dev-app' to see all of the containers in this pod. # rails -v Rails 4.2.11.3 # redis -v sh: 2: redis: not found # nginx -v sh: 3: nginx: not found # </code></pre> <p>Below is the YAML file I am using.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: namespace: dev-app name: test-deployment spec: replicas: 1 template: metadata: labels: app: Dev-app spec: nodeSelector: cloud.io/sec-zone-green: &quot;true&quot; containers: - name: dev-application image: hub.docker.net/appautomation/dev.app.1.0:latest command: [&quot;/bin/sh&quot;] args: [&quot;-c&quot;, &quot;while true; do echo test; sleep 20;done&quot;] resources: limits: memory: 8Gi cpu: 5 requests: memory: 8Gi cpu: 5 ports: - containerPort: 3000 - name: dev-app-nginx image: hub.docker.net/appautomation/dev.nginx.1.0:latest resources: limits: memory: 4Gi cpu: 4 requests: memory: 4Gi cpu: 4 ports: - containerPort: 80 - name: dev-app-redis image: hub.docker.net/appautomation/dev.redis.1.0:latest resources: limits: memory: 4Gi cpu: 4 requests: memory: 4Gi cpu: 4 ports: - containerPort: 6379 </code></pre>
User1984
<p>Use <strong>localhost</strong> to communicate with other containers within the same pod.</p> <p>E.g. the addresses to the containers are</p> <ul> <li>127.0.0.1:3000</li> <li>127.0.0.1:80</li> <li>127.0.0.1:6379</li> </ul>
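<p>A quick way to verify this from inside the pod — a sketch, assuming the application image has a shell and <code>nc</code> available (it may not):</p> <pre><code>kubectl exec -ti test-deployment-5f59864c8b-mv4kk -n dev-app -c dev-application -- sh -c 'nc -z 127.0.0.1 6379 &amp;&amp; echo redis reachable; nc -z 127.0.0.1 80 &amp;&amp; echo nginx reachable' </code></pre>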
Jonas
<p>How can I delete more than a couple of pods at a time?</p> <p>The commands I run:</p> <pre><code>kubectl delete pod pod1 kubectl delete pod pod2 kubectl delete pod pod3 </code></pre> <p>The approach I want to use:</p> <pre><code>kubectl delete pod pod1 pod2 pod3 </code></pre> <p>Is there a command or style that can help me do this? Thanks!</p>
cosmos-1905-14
<p>The approach that you say you want:</p> <pre><code>kubectl delete pod pod1 pod2 pod3 </code></pre> <p><strong>actually works</strong>. Go ahead and use it if you want.</p> <p>In Kubernetes it is more common to operate on subsets that share common labels, e.g.:</p> <pre><code>kubectl delete pod -l app=myapp </code></pre> <p>and this command will delete all pods with the label <code>app: myapp</code>.</p>
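<p>If you want to preview which pods a label selector matches before deleting anything, a harmless check is:</p> <pre><code>kubectl get pods -l app=myapp -o name </code></pre>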
Jonas
<p>I'm in the process of moving web services from one Kubernetes cluster to another. The goal is to do that without service interruption.</p> <p>This is difficult with cert-manager and HTTP challenges, because cert-manager on the new cluster can only retrieve a certificate once the DNS entry points to that cluster. However, if I switch the DNS entry to the new cluster, clients will potentially talk to the new cluster <em>before</em> a valid certificate has been generated. This is like a chicken-and-egg problem.</p> <p>How do I move the cert-manager certificates to the new cluster, so that it already has the certs once I make the DNS switch?</p>
theDmi
<p>Certificates are stored in Kubernetes secrets. Cert-manager will pick up existing secrets instead of creating new ones, if the secret matches the ingress object.</p> <p>So assuming that the ingress object looks the same on both clusters, and that the same namespace is used, copying the secret is as simple as this:</p> <pre class="lang-sh prettyprint-override"><code>kubectl --context OLD_CLUSTER -n NAMESPACE get secret SECRET_NAME --output yaml \ | kubectl --context NEW_CLUSTER -n NAMESPACE apply -f - </code></pre> <ul> <li>Replace <code>OLD_CLUSTER</code> and <code>NEW_CLUSTER</code> with the kubectl context names of the respective clusters (see <code>kubectl config get-contexts</code>).</li> <li>Replace <code>SECRET_NAME</code> with the name of the secret where the certificate is stored. This name can be found in the ingress.</li> <li>Replace <code>NAMESPACE</code> with the actual namespace that you're using.</li> </ul> <p>The command simply exports the secret in YAML format, and then uses <code>kubectl apply -f</code> to create the same resource in the new cluster.</p> <p>Once the ingress is in place on the new cluster, you can verify that the cert works by using <code>openssl s_client</code>:</p> <pre class="lang-sh prettyprint-override"><code>openssl s_client -connect CLUSTER_IP:443 -servername SERVICE_DNS_NAME </code></pre> <p>Again, replace <code>CLUSTER_IP</code> and <code>SERVICE_DNS_NAME</code> accordingly.</p>
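<p>As an extra sanity check before switching DNS — a sketch, assuming the secret is a standard TLS secret with a <code>tls.crt</code> key (<code>base64 -d</code> may be <code>-D</code> on macOS) — you can inspect the copied certificate's subject and expiry on the new cluster:</p> <pre><code>kubectl --context NEW_CLUSTER -n NAMESPACE get secret SECRET_NAME -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates </code></pre>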
theDmi
<p>Very new to Kubernetes. In the past I've used <code>kubectl</code>/Kustomize to deploy pods/services using the same repetitive pattern:</p> <ol> <li>On the file system, in my project, I'll have two YAML files such as <code>kustomization.yml</code> and <code>my-app-service.yml</code> that look something like:</li> </ol> <h3><code>kustomization.yml</code></h3> <pre><code>resources: - my-app-service.yml </code></pre> <h3><code>my-app-service.yml</code></h3> <pre><code>apiVersion: v1 kind: Service metadata: name: my-app-service ... etc. </code></pre> <ol start="2"> <li>From the command-line, I'll run something like the following:</li> </ol> <pre><code>kubectl apply -k /path/to/the/kustomization.yml </code></pre> <p>And if all goes well -- <strong>boom</strong> -- I've got the service running on my cluster. Neato.</p> <p>I am now trying to troubleshoot something and want to deploy a single container of the <a href="https://hub.docker.com/r/google/cloud-sdk/" rel="nofollow noreferrer"><code>google/cloud-sdk</code></a> image on my cluster, so that I can SSH into it and do some network/authorization-related testing.</p> <p>The idea is:</p> <ol> <li>Deploy it</li> <li>SSH in, do my testing</li> <li>Delete the container/clean it up</li> </ol> <p>I <em>believe</em> K8s doesn't deal with containers and so I'll probably need to deploy a 1-container pod or service (not clear on the difference there), but you get the idea.</p> <p>I <em>could</em> just create 2 YAML files and run the <code>kubectl apply ...</code> on them like I do for everything else, but that feels a bit heavy-handed. I'm wondering if there is an easier way to do this all from the command-line without having to create YAML files each time I want to do a test like this. <strong>Is there?</strong></p>
hotmeatballsoup
<p>You can create a pod running a container with the given image with this command:</p> <pre><code>kubectl run my-debug-pod --image=google/cloud-sdk </code></pre> <hr /> <p>And if you want to create a Deployment without writing the Yaml, you can do:</p> <pre><code>kubectl create deployment my-app --image=my-container-image </code></pre> <p>You could also just generate the Yaml and save it to file, and then use <code>kubectl apply -f deployment.yaml</code>, generate it with:</p> <pre><code>kubectl create deployment my-app --image=my-container-image \ --dry-run=client -o yaml &gt; deployment.yaml </code></pre>
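<p>To complete the workflow from the question (shell in, test, clean up) — a sketch, assuming the <code>google/cloud-sdk</code> image ships with bash:</p> <pre><code>kubectl exec -ti my-debug-pod -- bash
# ... run the network/authorization tests ...
kubectl delete pod my-debug-pod
</code></pre>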
Jonas
<p>I am trying to update my deployment with the latest image content on Azure Kubernetes Service every time some code is committed to GitHub. I have made a stage in my build pipeline to build and push the image to Docker Hub, which is working perfectly fine. However, in my release pipeline the image is being used as an artifact and is being deployed to Azure Kubernetes Service, but the problem is that the image on AKS is not updating according to the image pushed to Docker Hub with the latest code.</p> <p>Right now, each time a commit happens I have to manually update the image on AKS via the command</p> <p><em>kubectl set image deployment/demo-microservice demo-microservice=customerandcontact:contact</em></p> <p>My YAML file:</p> <p><a href="https://i.stack.imgur.com/jAMCZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAMCZ.png" alt="enter image description here"></a></p> <p>Can anyone tell me the error/changes, if any, in my YAML file needed to automatically update the image on AKS?</p>
Shubham Tiwari
<p>When you release a new image to the container registry under the same tag, it does not mean anything to Kubernetes. If you run <code>kubectl apply -f ...</code> and the image name and tag remain the same, it still won't do anything, as there is no configuration change. There are two options:</p> <ol> <li><p>Give the image a new tag on each build, change <code>:contact</code> to the new tag in the YAML, and run kubectl apply (a sketch follows below).</p></li> <li><p>For a dev environment only (do not do it in Stage or Prod), leave the same tag (usually a tag <code>:latest</code> is used) and after a new image is deployed to the registry run <code>kubectl delete pod demo-microservice</code>. Since you've set the image pull policy to Always, this will cause Kubernetes to pull a new image from the registry and redeploy the pod.</p></li> </ol> <p>The second approach is a workaround just for testing.</p>
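<p>A minimal sketch of option 1, giving each build a unique tag and pointing the deployment at it (the <code>$BUILD_ID</code> variable is a placeholder for whatever your pipeline exposes, e.g. a build number or commit SHA):</p> <pre><code>kubectl set image deployment/demo-microservice demo-microservice=customerandcontact:$BUILD_ID </code></pre>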
Piotr Gwiazda
<p>I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.</p> <p>One of the tasks I have is to set up automation to take a supplied namespace <em>name</em> parameter, create a namespace, and set up the correct permissions in that namespace for the deployment user account.</p> <p>I can do this simply with a script that uses <code>kubectl</code> like this:</p> <pre><code>kubectl create namespace $namespaceName kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName </code></pre> <p>But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of &quot;deploying&quot; things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.</p> <p>How can I create a Kubernetes namespace and rolebinding using Helm?</p>
Vaccano
<p>Almost <a href="https://github.com/nginxinc/kubernetes-ingress/blob/v1.10.0/deployments/helm-chart/templates/rbac.yaml#L2" rel="nofollow noreferrer">any chart</a> for an install that needs to interact with kubernetes itself will include RBAC resources, so it is for sure not just Deployments</p> <pre class="lang-yaml prettyprint-override"><code># templates/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: {{ .Release.Namespace }} name: {{ .Values.bindingName }} roleRef: apiGroup: &quot;&quot; kind: ClusterRole name: {{ .Values.clusterRole }} subjects: - apiGroup: &quot;&quot; kind: User name: {{ .Values.user }} </code></pre> <p>then a <code>values.yaml</code> isn't strictly required, but helps folks know what values could be provided:</p> <pre class="lang-yaml prettyprint-override"><code># values.yaml bindingName: deployment-edit clusterRole: edit user: deployer </code></pre> <p>Helm v3 <a href="https://helm.sh/docs/helm/helm_install/#options" rel="nofollow noreferrer">has <code>--create-namespace</code></a> which will create the provided <code>--namespace</code> if it doesn't already exist, which isn't very declarative but does achieve the end result just like the <code>kubectl</code> version</p> <p>It's also theoretically possible to have the chart create the <code>Namespace</code> but I would not guess that <code>helm remove the-namespaced-rolebinding</code> will do the right thing, since the order of item removal matters a lot:</p> <pre class="lang-yaml prettyprint-override"><code># templates/00namespace.yaml apiVersion: v1 kind: Namespace metadata: name: {{ .Values.theNamespace }} </code></pre> <p>and then run <code>helm --namespace kube-system ...</code> or any NS other than the real one, since it doesn't yet exist</p>
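<p>A minimal sketch of installing such a chart with Helm v3, assuming the templates above live in <code>./namespace-chart</code> (the chart path and release name are placeholders; the <code>--set</code> keys match the <code>values.yaml</code> above):</p> <pre><code>helm install namespace-setup ./namespace-chart \
  --namespace "$namespaceName" --create-namespace \
  --set user=deployer --set clusterRole=edit --set bindingName=deployer-edit
</code></pre>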
mdaniel
<p>Imagine the following deployment definition in kubernetes:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: env: staging spec: ... </code></pre> <p>I have two questions in particular:</p> <p>1). The label <code>env: staging</code> won't be available in created pods. How can I access this data programmatically in <code>client-go</code>?</p> <p>2). When a pod is created/updated, how can I find which deployment it belongs to?</p>
Elaheh
<blockquote> <p>1). The label env: staging won't be available in created pods. How can I access this data programmatically in client-go?</p> </blockquote> <p>You can get the <code>Deployment</code> using client-go. See the example <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="nofollow noreferrer">Create, Update &amp; Delete Deployment</a> for operations on a <code>Deployment</code>.</p> <blockquote> <p>2). When a pod is created/updated, how can I find which deployment it belongs to?</p> </blockquote> <p>When a <code>Deployment</code> is created, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> is created that manages the <code>Pods</code>.</p> <p>See the <code>ownerReferences</code> field of a <code>Pod</code> to see what <code>ReplicaSet</code> manages it. This is described in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#how-a-replicaset-works" rel="nofollow noreferrer">How a ReplicaSet works</a>.</p>
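<p>Before writing any Go, the same ownership chain can be inspected with kubectl (the pod and ReplicaSet names are placeholders): the pod's owner is the ReplicaSet, and the ReplicaSet's owner is the Deployment.</p> <pre><code>kubectl get pod POD_NAME -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
kubectl get replicaset RS_NAME -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
</code></pre>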
Jonas
<p>Since I upgraded the versions in my EKS Terraform script, I keep getting error after error.</p> <p>Currently I am stuck on this error:</p> <blockquote> <p>Error: Get <a href="http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller" rel="nofollow noreferrer">http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller</a>: dial tcp 127.0.0.1:80: connect: connection refused</p> <p>Error: Get <a href="http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller" rel="nofollow noreferrer">http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller</a>: dial tcp 127.0.0.1:80: connect: connection refused</p> </blockquote> <p>The script is working fine and I can still use it with the old version, but I am trying to upgrade the cluster version.</p> <p><strong>provider.tf</strong></p> <pre><code>provider "aws" { region = "${var.region}" version = "~&gt; 2.0" assume_role { role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform" } } provider "kubernetes" { config_path = ".kube_config.yaml" version = "~&gt; 1.9" } provider "helm" { service_account = "${kubernetes_service_account.tiller.metadata.0.name}" namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}" kubernetes { config_path = ".kube_config.yaml" } } terraform { backend "s3" { } } data "terraform_remote_state" "state" { backend = "s3" config = { bucket = "${var.backend_config_bucket}" region = "${var.backend_config_bucket_region}" key = "${var.name}/${var.backend_config_tfstate_file_key}" # var.name == CLIENT role_arn = "${var.backend_config_role_arn}" skip_region_validation = true dynamodb_table = "terraform_locks" encrypt = "true" } } </code></pre> <p><strong>kubernetes.tf</strong></p> <pre><code>resource "kubernetes_service_account" "tiller" { #depends_on = ["module.eks"] metadata { name = "tiller" namespace = "kube-system" } automount_service_account_token = "true" } resource "kubernetes_cluster_role_binding" "tiller" { depends_on = ["module.eks"] metadata { name = "tiller" } role_ref { api_group = "rbac.authorization.k8s.io" kind = "ClusterRole" name = "cluster-admin" } subject { kind = "ServiceAccount" name = "tiller" api_group = "" namespace = "kube-system" } } </code></pre> <p>Terraform version: 0.12.12, EKS module version: 6.0.2</p>
gamechanger17
<p>It means your <code>server:</code> entry in your <code>.kube_config.yml</code> is pointing to the wrong port (and perhaps even the wrong protocol, as normal kubernetes communication travels over <code>https</code> and is secured via mutual TLS authentication), or there is no longer a proxy that <em>was</em> listening on <code>localhost:80</code>, or perhaps the <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips" rel="nofollow noreferrer"><code>--insecure-port</code></a> used to be <code>80</code> and is now <code>0</code> (as is strongly recommended)</p> <p>Regrettably, without more specifics, no one can <strong>guess</strong> what the correct value was or should be changed to</p>
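<p>A quick way to see what the kubeconfig used by the providers actually points at (the path is taken from the question's <code>config_path</code>); if this does not print the https EKS API endpoint, the file was not regenerated correctly for the upgraded cluster:</p> <pre><code>kubectl --kubeconfig .kube_config.yaml config view --minify -o jsonpath='{.clusters[0].cluster.server}' </code></pre>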
mdaniel
<p>I'm using the <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">C# client API</a> and I'm trying to:</p> <ol> <li>Get all pods associated with a given StatefulSet</li> <li>Get the cluster IP address of each pod</li> </ol> <p>I am aware of <code>Kubernetes.CoreV1.ListNamespacedPodAsync</code> and <code>Kubernetes.AppsV1.ListNamespacedStatefulSetAsync</code>. The part that I'm missing is how to</p> <ol> <li>Know whether a given pod is associated with the StatefulSet</li> <li>Discover a pod's IP address</li> </ol>
fr0
<p>A feature of Kubernetes StatefulSet is <em>Stable Network Identity</em>. This requires that additional <em>headless Services</em> are created, in addition to the <code>StatefulSet</code> resource.</p> <p>When this is done, you would typically access these instances by their <strong>hostname</strong> instead of using IP address directly. See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">Stable Network ID</a> for documentation about this.</p> <p>E.g. after <em>headless Services</em> are created you could access 3 instances on these <strong>hostnames</strong></p> <ul> <li>my-app-0</li> <li>my-app-1</li> <li>my-app-2</li> </ul> <p>(when accessed from within the same namespace)</p>
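<p>A sketch of checking that those stable hostnames resolve from inside the cluster — the headless Service name <code>my-app-headless</code> and the pod name are assumptions:</p> <pre><code>kubectl run -ti --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-app-0.my-app-headless </code></pre>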
Jonas
<p>When I try to mount the application log volume from the container to the host, I get the error: Operation not permitted</p> <pre><code>spec: securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 initContainers: - name: volume-mount-permission image: xx.xx.xx.xx/orchestration/credit-card command: - sh - -c - chown -R 1000:1000 /opt/payara/appserver/glassfish/logs/credit-card - chgrp 1000 /opt/payara/appserver/glassfish/logs/credit-card volumeMounts: - name: card-corp-logs mountPath: /opt/payara/appserver/glassfish/logs/credit-card readOnly: false containers: - name: credit-card image: xx.xx.xx.xx/orchestration/credit-card imagePullPolicy: Always securityContext: privileged: true runAsUser: 1000 ports: - name: credit-card containerPort: 8080 readinessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 10 periodSeconds: 5 successThreshold: 1 volumeMounts: - name: override-setting-storage mountPath: /p/config - name: credit-card-teamsite mountPath: /var/credit-card/teamsite/card_corp </code></pre> <p>Container Path - /opt/payara/appserver/glassfish/logs/credit-card to hostPath</p> <p>Can anyone please help me find where I am making a mistake in the deployment YAML file?</p>
Ravikant Kumar
<pre><code> securityContext: runAsUser: 1000 runAsGroup: 3000 </code></pre> <p>means you cannot <code>chown 1000:1000</code> because that user is not a member of <strong>group</strong> <code>1000</code></p> <p>Likely you will want to run that <code>initContainer:</code> as <code>runAsUser: 0</code> in order to allow it to perform arbitrary <code>chown</code> operations</p> <p>You also truncated your YAML that would have specified the <code>volumes:</code> that are being mounted by your <code>volumeMounts:</code> -- there is a chance that you are trying to mount a volume type that -- regardless of your <code>readOnly: false</code> declaration -- cannot be modified. <code>ConfigMap</code>, <code>Secret</code>, Downward API, and a bunch of others also will not respond to mutation requests, even as <code>root</code>.</p>
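<p>If editing the manifest is inconvenient, the same override can be sketched as a patch — the deployment name and namespace below are assumptions; it sets only the init container to run as root so the <code>chown</code> can succeed:</p> <pre><code>kubectl patch deployment DEPLOYMENT_NAME -n NAMESPACE --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/initContainers/0/securityContext","value":{"runAsUser":0}}]'
</code></pre>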
mdaniel
<p>I built an application using Docker Compose which included an Nginx instance accepting connections on port 80:</p> <pre><code> nginx: image: nginx:1.15.12-alpine container_name: nginx volumes: - etc. ports: - 80:80 </code></pre> <p>I'd like to spin up this application on Kubernetes running on my local machine (macOS). So I've run <code>kompose convert</code>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/#kompose-convert" rel="nofollow noreferrer">documented here</a>.</p> <p>This generated <code>nginx-service.yaml</code> which looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.18.0 () creationTimestamp: null labels: io.kompose.service: nginx name: nginx spec: ports: - name: "80" port: 80 targetPort: 80 selector: io.kompose.service: nginx status: loadBalancer: {} </code></pre> <p>I ran <code>kubectl apply</code> with all of the YAML files produced by <code>kompose</code>, and then <code>kubectl describe svc nginx</code>:</p> <pre><code>Name: nginx Namespace: myproject Labels: io.kompose.service=nginx Annotations: kompose.cmd=kompose convert kompose.version=1.18.0 () kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kompose.cmd":"kompose convert","kompose.version":"1.18.0 ()"},"creationTimestamp":null,... Selector: io.kompose.service=nginx Type: ClusterIP IP: 172.30.110.242 Port: 80 80/TCP TargetPort: 80/TCP Endpoints: Session Affinity: None Events: &lt;none&gt; </code></pre> <p>However, I cannot access the web server by navigating to <code>http://172.30.110.242:80</code> on the same machine.</p> <p>There is documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">accessing services running on clusters</a>. I'm new to k8s and I'm not sure how to diagnose the problem and pick the right solution of the options they list.</p> <p>Is it a defect in <code>kompose</code> that it did not generate a comparable service config file? </p>
rgov
<p>See:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p> <p>Your "connect to" URL from your local machine to the k8s world will not be "172.x.x.x". It will probably be 192.168.99.100:33333 (the port number will be different). Run this:</p> <p><code>minikube service myservicename -n "default" --url</code></p> <p>and see what that gives you.</p> <p>But basically, you need to "expose" the k8s world to the outside world.</p> <p>This YAML should help:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myservicename namespace: mycustomnamespace labels: name: myservicemetadatalabel spec: type: NodePort ports: - name: myfirstportname port: 80 targetPort: 80 selector: myexamplelabelone: mylabelonevalue myexamplelabeltwo: mylabeltwovalue </code></pre> <p>The selector will refer to your pod/container setup.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myfirstpodmetadataname namespace: mycustomnamespace labels: myexamplelabelone: mylabelonevalue myexamplelabeltwo: mylabeltwovalue </code></pre> <p>"Selectors" are outside the scope of this question... but the above will give you the breadcrumb you need.</p> <p>Also see:</p> <p><a href="https://stackoverflow.com/questions/48361841/how-to-expose-k8-pods-to-the-public-internet/48366206#48366206">How to expose k8 pods to the public internet?</a></p>
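<p>Once the Service is of type NodePort, one way to find the assigned port and test it — a sketch assuming minikube, as above:</p> <pre><code>kubectl get svc myservicename -n mycustomnamespace -o jsonpath='{.spec.ports[0].nodePort}'
curl http://$(minikube ip):NODE_PORT_FROM_ABOVE/
</code></pre>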
granadaCoder
<p>Hello, does anyone have an idea and can help here? I cannot run a (for me) very complicated curl command, and I have some trouble understanding why it's not working.</p> <p>I am trying to copy data via curl into a WordPress directory in a pod on Kubernetes.</p> <pre><code>kubectl exec $WPPOD -- curl --request GET --header 'PRIVATE-TOKEN: *******' 'https://gitlab.com/api/v4/projects/*****/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' &gt; /var/www/html/wp-content/ai1wm-backups sh: 7: cannot create  /var/www/html/wp-content/ai1wm-backups -: Directory nonexistent </code></pre> <p>Also, from within the pod this is not working:</p> <pre><code># curl --request GET --header 'PRIVATE-TOKEN: Z7-RByYpUJcnWU_STpuz' 'https://gitlab.com/api/v4/projects/14628452/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' &gt; /var/www/html/wp-content/ai1wm-backups/all-in-one-wp-migration-unlimited-extension.zip -O -J -L sh: 3: cannot create  /var/www/html/wp-content/ai1wm-backups/all-in-one-wp-migration-unlimited-extension.zip -O -J -L: Directory nonexistent </code></pre> <p>But if I check the directory from within the pod, it is fine:</p> <pre><code># cd /var/www/html/wp-content/ai1wm-backups # ls index.php web.config </code></pre> <p>Thanks to the helpful input, I now have a solution:</p> <pre><code>kubectl exec $WPPOD -- curl --fail --output /var/www/html/wp-content/ai1wm-backups/all-in-one-wp-migration-unlimited-extension.zip --request GET --header 'PRIVATE-TOKEN: *******' 'https://gitlab.com/api/v4/projects/*****/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' </code></pre>
Bliv_Dev
<p>You have two pieces of bad shell going on here:</p> <p>The first one is because the redirection is happening <strong>on your machine</strong>. The second is because everything after the <code>&gt;</code> is a filename, but you have included random arguments to <code>curl</code> in them.</p> <p>To solve the first one, package the whole command into a shell literal:</p> <pre><code>kubectl exec $WPPOD -- sh -c "curl --request GET --header 'PRIVATE-TOKEN: *******' 'https://gitlab.com/api/v4/projects/*****/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' &gt; /var/www/html/wp-content/ai1wm-backups" </code></pre> <p>I would even go so far as to say "don't use redirection," since if you inform <code>curl</code> of the output file, and add <code>--fail</code> to it, then it will avoid writing to that file on server error, which isn't true when using a shell redirection: the shell <strong>will</strong> create that file, no matter what, possibly making it empty; thus:</p> <pre><code>kubectl exec $WPPOD -- curl --fail --output /var/www/html/wp-content/ai1wm-backups --request GET --header 'PRIVATE-TOKEN: *******' 'https://gitlab.com/api/v4/projects/*****/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' </code></pre> <p>For the second problem, it's a simple matter of re-arranging the arguments to be compliant with shell syntax:</p> <pre><code>curl -O -J -L --request GET --header 'PRIVATE-TOKEN: Z7-RByYpUJcnWU_STpuz' 'https://gitlab.com/api/v4/projects/14628452/repository/files/infrastructure%2Fwordpress%2Fdeploy%2Fall-in-one-wp-migration-unlimited-extension%2Ezip/raw?ref=Add_WP_MySQL' &gt; /var/www/html/wp-content/ai1wm-backups/all-in-one-wp-migration-unlimited-extension.zip </code></pre> <p>Although in that case, you have conflicting <code>curl</code> behaviors: <a href="https://curl.haxx.se/docs/manpage.html#-O" rel="nofollow noreferrer">the <code>-O</code> option</a> is going to write out the file <em>in the current directory</em>, so your shell redirect is only going to receive any messages written by <code>curl</code>, and <em>not</em> the content of that URL</p> <p>All of this has nothing to do with kubernetes, or a directory, or a copy, and those tags should be removed.</p>
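<p>To confirm the download actually landed in the pod afterwards (same path as in the commands above):</p> <pre><code>kubectl exec $WPPOD -- ls -lh /var/www/html/wp-content/ai1wm-backups/ </code></pre>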
mdaniel
<p>I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster. </p> <p>I currently have two pods in my cluster that are running, each has a different image in it created from asp.net core applications. There is a completely separate (outside of Kubernetes but running locally on my machine localhost,1433) docker image that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports but I do not know how to get this working, or how to test that it actually is working after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.</p> <ul> <li><p>Docker SQL Server creation (elevated powershell/docker desktop):</p> <pre><code>docker pull mcr.microsoft.com/mssql/server:2017-latest docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest </code></pre></li> <li><p>definitions.yaml</p> <pre><code>#Pods in the cluster apiVersion: v1 kind: Pod metadata: name: pod-1 labels: app: podnet type: module spec: containers: - name: container1 image: username/image1 --- apiVersion: v1 kind: Pod metadata: name: pod-2 labels: app: podnet type: module spec: containers: - name: container2 image: username/image2 --- #Service created in an attempt to contact external SQL Server DB apiVersion: v1 kind: Service metadata: name: ext-sql-service spec: ports: - port: 1433 targetPort: 1433 type: ClusterIP --- apiVersion: v1 kind: Endpoints metadata: name: ext-sql-service subsets: - addresses: - ip: (Docker IP for DB Instance) ports: - port: 1433 </code></pre></li> </ul> <p>Ideally I would like applications in my kubernetes cluster to be able to manipulate the SQL Server I already have set up (running outside of the cluster but locally on my machine).</p>
C1pher6710
<p>When running from local docker, your connection string is NOT your local machine.</p> <p>host.docker.internal:1433</p> <p>The above is a docker container talking to your local machine. Obviously, the port could be different based on how you exposed it.</p> <p>......</p> <p>If you're trying to get your running container to talk to sql-server which is ALSO running inside of the docker world, that connection string looks like:</p> <p>ServerName:</p> <p>my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local</p> <p>Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running a different namespace.</p> <p>my-mssql-service-deployment-name is the name of YOUR deployment (I have it stubbed here).</p> <p>Note there is no port number here.</p> <p>This is documented here:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services</a></p>
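<p>A sketch of verifying that the in-cluster DNS name resolves (the service name and namespace are stubs, as above):</p> <pre><code>kubectl run -ti --rm dnscheck --image=busybox:1.28 --restart=Never -- nslookup my-mssql-service-deployment-name.default.svc.cluster.local </code></pre>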
granadaCoder
<p>I'm trying to automate the process of horizontal scale-up and scale-down of Elasticsearch nodes in a Kubernetes cluster.</p> <p>Initially, I deployed an Elasticsearch cluster (3 master, 3 data &amp; 3 ingest nodes) on a Kubernetes cluster, where <code>cluster.initial_master_nodes</code> was:</p> <pre class="lang-yaml prettyprint-override"><code>cluster.initial_master_nodes: - master-a - master-b - master-c </code></pre> <p>Then, I performed a scale-down operation and reduced the number of master nodes from 3 to 1 (unexpected, but for testing purposes). While doing this, I deleted the <code>master-c</code> and <code>master-b</code> nodes and restarted the <code>master-a</code> node with the following setting:</p> <pre class="lang-yaml prettyprint-override"><code>cluster.initial_master_nodes: - master-a </code></pre> <p>Since the Elasticsearch nodes (i.e. pods) use a persistent volume, after restarting the node, <code>master-a</code> is showing the following logs:</p> <pre><code>"message": "master not discovered or elected yet, an election requires at least 2 nodes with ids from [TxdOAdryQ8GAeirXQHQL-g, VmtilfRIT6KDVv1R6MHGlw, KAJclUD2SM6rt9PxCGACSA], have discovered [] which is not a quorum; discovery will continue using [] from hosts providers and [{master-a}{VmtilfRIT6KDVv1R6MHGlw}{g29haPBLRha89dZJmclkrg}{10.244.0.95}{10.244.0.95:9300}{ml.machine_memory=12447109120, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 5, last-accepted version 40 in term 5" } </code></pre> <p>Seems like it's trying to find <code>master-b</code> and <code>master-c</code>.</p> <p>Questions:</p> <ul> <li>How to overwrite cluster settings so that <code>master-a</code> won't search for these deleted nodes?</li> </ul>
Kamol Hasan
<p>The <code>cluster.initial_master_nodes</code> setting only has an effect the first time the cluster starts up, but to avoid some very rare corner cases you should never change its value once you've set it and generally you should remove it from the config file as soon as possible. From <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html#initial_master_nodes" rel="noreferrer">the reference manual</a> regarding <code>cluster.initial_master_nodes</code>:</p> <blockquote> <p>You should not use this setting when restarting a cluster or adding a new node to an existing cluster.</p> </blockquote> <p>Aside from that, Elasticsearch uses a <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-quorums.html" rel="noreferrer">quorum-based election protocol</a> and says the following:</p> <blockquote> <p>To be sure that the cluster remains available you <strong>must not stop half or more of the nodes in the voting configuration at the same time</strong>. </p> </blockquote> <p>You have stopped two of your three master-eligible nodes at the same time, which is more than half of them, so it's expected that the cluster no longer works.</p> <p>The reference manual also contains <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-adding-removing-nodes.html#modules-discovery-removing-nodes" rel="noreferrer">instructions for removing master-eligible nodes</a> which you have not followed:</p> <blockquote> <p>As long as there are at least three master-eligible nodes in the cluster, as a general rule it is best to remove nodes one-at-a-time, allowing enough time for the cluster to automatically adjust the voting configuration and adapt the fault tolerance level to the new set of nodes.</p> <p>If there are only two master-eligible nodes remaining then neither node can be safely removed since both are required to reliably make progress. To remove one of these nodes you must first inform Elasticsearch that it should not be part of the voting configuration, and that the voting power should instead be given to the other node.</p> </blockquote> <p>It goes on to describe how to safely remove the unwanted nodes from the voting configuration using <code>POST /_cluster/voting_config_exclusions/node_name</code> when scaling down to a single node.</p>
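<p>As a sketch, the voting-exclusion call mentioned above can be made with curl before stopping a master-eligible node (the host, port and exact parameter form depend on your Elasticsearch version and setup); the exclusions should be cleared again with a DELETE on <code>/_cluster/voting_config_exclusions</code> once the scale-down is complete:</p> <pre><code>curl -X POST "http://localhost:9200/_cluster/voting_config_exclusions/master-b?timeout=1m" </code></pre>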
Dave Turner
<p>I am trying to run some Ansible tasks with the k8s module. Locally this works perfectly, but on my Jenkins instance it fails with the following error message:</p> <blockquote> <p>...</p> <p>MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxxxxxxxxxxxxx', port=443): Max retries exceeded with url: /version (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 }</p> </blockquote> <p>I am quite sure this is because the Jenkins instance requires a proxy to communicate with the outside world. I've seen how to set up Ansible to use a proxy, but that does not seem to work with the k8s module. Any ideas? Here's what I've tried so far:</p> <pre><code> - hosts: ansible_server connection: local gather_facts: no environment: https_proxy: "xxx" http_proxy: "xxx" tasks: - name: Gather facts to check connectivity k8s_facts: api_key: "{{api_key}}" host: "{{cluster_url}}" kind: Project register: listed_projects </code></pre> <p>PS: I added the -vvv flag and can see that it tries to use the proxy somehow:</p> <blockquote> <p> EXEC /bin/sh -c '/usr/bin/python &amp;&amp; sleep 0' Using module file /usr/lib/python2.7/site-packages/ansible/modules/clustering/k8s/k8s_facts.py PUT /root/.ansible/tmp/ansible-local-1fHx5f6/tmpDUhlNa TO /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/ /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py &amp;&amp; sleep 0' EXEC /bin/sh -c 'https_proxy=xxx http_proxy=xxx /usr/bin/python /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py &amp;&amp; sleep 0' EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/ > /dev/null 2>&amp;1 &amp;&amp; sleep 0'</p> </blockquote>
EllisTheEllice
<p>I agree with @ilias-sp but it also appears that <a href="https://github.com/ansible/ansible/blob/v2.8.5/lib/ansible/module_utils/k8s/common.py" rel="nofollow noreferrer"><code>k8s/common.py</code></a> does not support the <a href="https://github.com/kubernetes-client/python/blob/v10.0.1/kubernetes/client/configuration.py#L103" rel="nofollow noreferrer"><code>configuration.proxy</code> attribute</a>, , and as best I can tell <code>urllib3</code> does not honor those proxy environment variables the way "normal" urllib does, opting instead to use its own <code>ProxyManager</code> that is driven by an explicit constructor kwarg</p> <p>However, thanks to the "override" mechanism of ansible, I believe you can test this theory:</p> <ol> <li>Copy <code>k8s_facts.py</code> into the <code>library</code> folder of your playbook</li> <li>Modify it to expose <code>proxy</code> in the <a href="https://github.com/ansible/ansible/blob/v2.8.5/lib/ansible/module_utils/k8s/common.py#L128-L139" rel="nofollow noreferrer"><code>AUTH_ARG_MAP</code></a>, which I believe the patch below will do (the patch is against v2.8.5 so you may need to fiddle with it if your version is different)</li> <li><p>Explicitly set your <code>proxy:</code> attribute on your new <code>k8s_facts</code> module and see if it works</p> <pre><code>- k8s_facts: host: api-server-whatever kind: Project proxy: http://my-proxy:3128 </code></pre></li> <li>Assuming it does, <a href="https://github.com/ansible/ansible/issues" rel="nofollow noreferrer">open an issue in ansible</a> to let them know</li> </ol> <pre><code>--- a/library/k8s_facts.py 2019-10-08 22:23:24.000000000 -0700 +++ b/library/k8s_facts.py 2019-10-08 22:24:50.000000000 -0700 @@ -130,13 +130,14 @@ ''' -from ansible.module_utils.k8s.common import KubernetesAnsibleModule, AUTH_ARG_SPEC +from ansible.module_utils.k8s.common import KubernetesAnsibleModule, AUTH_ARG_SPEC, AUTH_ARG_MAP import copy class KubernetesFactsModule(KubernetesAnsibleModule): def __init__(self, *args, **kwargs): + AUTH_ARG_MAP['proxy'] = 'proxy' KubernetesAnsibleModule.__init__(self, *args, supports_check_mode=True, **kwargs) @@ -163,6 +164,7 @@ namespace=dict(), label_selectors=dict(type='list', default=[]), field_selectors=dict(type='list', default=[]), + proxy=dict(type='str', required=False), ) ) return args </code></pre>
mdaniel
<p>Is it possible to gain k8s cluster access with a serviceaccount token?</p> <p>My script does not have access to a kubeconfig file; however, it does have access to the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token.</p> <p>Here are the steps I tried, but it is not working:</p> <ol> <li>kubectl config set-credentials sa-user --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)</li> <li>kubectl config set-context sa-context --user=sa-user</li> </ol> <p>But when the script runs &quot;kubectl get rolebindings&quot; I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User &quot;system:serviceaccount:test:default&quot; cannot list resource &quot;rolebindings&quot; in API group &quot;rbac.authorization.k8s.io&quot; in the namespace &quot;test&quot;</p>
Chris Jones
<blockquote> <p>Is possible to gain k8s cluster access with serviceaccount token?</p> </blockquote> <p>Certainly, that's the point of a ServiceAccount token. The question you appear to be asking is &quot;why does my <code>default</code> ServiceAccount not have all the privileges I want&quot;, which is a different problem. One will benefit from reading <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="nofollow noreferrer">the fine manual on the topic</a></p> <p>If you want the <code>default</code> SA in the <code>test</code> NS to have privileges to read things in its NS, you must create a Role scoped to that NS and then declare the relationship explicitly. SAs do not automatically have those privileges</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: test name: test-default roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: whatever-role-you-want subjects: - kind: ServiceAccount name: default namespace: test </code></pre> <blockquote> <p>but when the script ran &quot;kubectl get pods&quot; I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User &quot;system:serviceaccount:test:default&quot; cannot list resource &quot;rolebindings&quot; in API group &quot;rbac.authorization.k8s.io&quot; in the namespace &quot;test&quot;</p> </blockquote> <p>Presumably you mean you can <code>kubectl get rolebindings</code>, because I would not expect running <code>kubectl get pods</code> to emit that error</p>
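<p>A quick way to check what that ServiceAccount is allowed to do, before and after applying the RoleBinding above:</p> <pre><code>kubectl auth can-i list rolebindings --as=system:serviceaccount:test:default -n test </code></pre>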
mdaniel
<p>In the <code>create_namespaced_job</code> method there is no parameter that exists to define <code>preStop</code> and <code>postStart</code> handlers.</p> <pre><code>V1Job create_namespaced_job(namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager) </code></pre> <p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job" rel="nofollow noreferrer">Source</a>.</p> <p>So, how can I add these life-cycle handlers to a job or pod with the Python Kubernetes client?</p>
shiva
<blockquote> <p>In the create_namespaced_job method there is no parameter that exists to define preStop and postStart handlers.</p> </blockquote> <p>The <code>preStop</code> and <code>postStart</code> handlers exist on the containers. You linked to the documentation for <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job" rel="nofollow noreferrer">create_namespaced_job</a>; the parameter <code>body</code> is a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Job.md" rel="nofollow noreferrer">V1Job</a>, and its <code>spec</code> has a <code>template</code>, which has a <code>spec</code> of type <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodSpec.md" rel="nofollow noreferrer">V1PodSpec</a>. There you find a field <code>containers</code> (a list), and each container has a field <code>lifecycle</code> of type <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Lifecycle.md" rel="nofollow noreferrer">V1Lifecycle</a>, which has the <code>preStop</code> and <code>postStart</code> handlers.</p> <p>The documentation can also be navigated with <code>kubectl explain</code>, e.g.:</p> <pre><code>kubectl explain podTemplate.template.spec.containers.lifecycle </code></pre>
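<p>The same navigation works for Jobs directly, e.g.:</p> <pre><code>kubectl explain job.spec.template.spec.containers.lifecycle.preStop </code></pre>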
Jonas
<p>I am new to gitlab CI. So I am trying to use <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml" rel="noreferrer">https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml</a>, to deploy simple test django app to the kubernetes cluster attached to my gitlab project using a custom chat <a href="https://gitlab.com/aidamir/citest/tree/master/chart" rel="noreferrer">https://gitlab.com/aidamir/citest/tree/master/chart</a>. All things goes well, but the last moment it show error message from kubectl and it fails. here is output of the pipeline:</p> <pre><code>Running with gitlab-runner 12.2.0 (a987417a) on docker-auto-scale 72989761 Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ... Running on runner-72989761-project-13952749-concurrent-0 via runner-72989761-srm-1568200144-ab3eb4d8... Fetching changes with git depth set to 50... Initialized empty Git repository in /builds/myporject/kubetest/.git/ Created fresh repository. From https://gitlab.com/myproject/kubetest * [new branch] master -&gt; origin/master Checking out 3efeaf21 as master... Skipping Git submodules setup Authenticating with credentials from job payload (GitLab Registry) $ auto-deploy check_kube_domain $ auto-deploy download_chart Creating /root/.helm Creating /root/.helm/repository Creating /root/.helm/repository/cache Creating /root/.helm/repository/local Creating /root/.helm/plugins Creating /root/.helm/starters Creating /root/.helm/cache/archive Creating /root/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /root/.helm. Not installing Tiller due to 'client-only' flag having been set "gitlab" has been added to your repositories No requirements found in /builds/myproject/kubetest/chart/charts. No requirements found in chart//charts. $ auto-deploy ensure_namespace NAME STATUS AGE kubetest-13952749-production Active 46h $ auto-deploy initialize_tiller Checking Tiller... Tiller is listening on localhost:44134 Client: &amp;version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"} [debug] SERVER: "localhost:44134" Kubernetes: &amp;version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"} Server: &amp;version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"} $ auto-deploy create_secret Create secret... secret "gitlab-registry" deleted secret/gitlab-registry replaced $ auto-deploy deploy secret "production-secret" deleted secret/production-secret replaced Deploying new release... Release "production" has been upgraded. 
LAST DEPLOYED: Wed Sep 11 11:12:21 2019 NAMESPACE: kubetest-13952749-production STATUS: DEPLOYED RESOURCES: ==&gt; v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE production-djtest 1/1 1 1 46h ==&gt; v1/Job NAME COMPLETIONS DURATION AGE djtest-update-static-auik5 0/1 3s 3s ==&gt; v1/PersistentVolumeClaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nginx-storage-pvc Bound nfs 10Gi RWX 3s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE djtest-update-static-auik5-zxd6m 0/1 ContainerCreating 0 3s production-djtest-5bf5665c4f-n5g78 1/1 Running 0 46h ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE production-djtest ClusterIP 10.0.0.146 &lt;none&gt; 5000/TCP 46h NOTES: 1. Get the application URL by running these commands: export POD_NAME=$(kubectl get pods --namespace kubetest-13952749-production -l "app.kubernetes.io/name=djtest,app.kubernetes.io/instance=production" -o jsonpath="{.items[0].metadata.name}") echo "Visit http://127.0.0.1:8080 to use your application" kubectl port-forward $POD_NAME 8080:80 error: arguments in resource/name form must have a single resource and name ERROR: Job failed: exit code 1 </code></pre> <p>Please help me to find the reason of the error message. </p> <p>I did look to the auto-deploy script from the image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0. There is a settings variable to disable rollout status check </p> <pre><code> if [[ -z "$ROLLOUT_STATUS_DISABLED" ]]; then kubectl rollout status -n "$KUBE_NAMESPACE" -w "$ROLLOUT_RESOURCE_TYPE/$name" fi </code></pre> <p>So setting</p> <pre><code>variables: ROLLOUT_STATUS_DISABLED: "true" </code></pre> <p>prevents job fail. But I still have no answer why the script does not work with my custom chat?. When I do execution of the status checking command from my laptop it shows nothing errors.</p> <pre><code>kubectl rollout status -n kubetest-13952749-production -w "deployment/production-djtest" deployment "production-djtest" successfully rolled out </code></pre> <p>I also found a complaint to a similar issue <a href="https://gitlab.com/gitlab-com/support-forum/issues/4737" rel="noreferrer">https://gitlab.com/gitlab-com/support-forum/issues/4737</a>, but there is no activity on the post.</p> <p>It is my gitlab-ci.yaml:</p> <pre><code>image: alpine:latest variables: POSTGRES_ENABLED: "false" DOCKER_DRIVER: overlay2 ROLLOUT_RESOURCE_TYPE: deployment DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501 stages: - build - test - deploy # dummy stage to follow the template guidelines - review - dast - staging - canary - production - incremental rollout 10% - incremental rollout 25% - incremental rollout 50% - incremental rollout 100% - performance - cleanup include: - template: Jobs/Deploy.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml variables: CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test </code></pre>
Testobile Testossimo
<blockquote> <p>error: arguments in resource/name form must have a single resource and name</p> </blockquote> <p>That issue you linked to has <code>Closed (moved)</code> in its status because it was moved from <a href="https://gitlab.com/gitlab-org/gitlab-ce/issues/66016#note_203406467" rel="noreferrer">issue 66016</a>, which has what I believe is the real answer:</p> <blockquote> <p>Please try adding the following to your .gitlab-ci.yml:</p> </blockquote> <pre><code>variables: ROLLOUT_RESOURCE_TYPE: deployment </code></pre> <p>Using <strong>just</strong> the <code>Jobs/Deploy.gitlab-ci.yml</code> omits <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/v12.2.5/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml#L55" rel="noreferrer">the <code>variables:</code> block from <code>Auto-DevOps.gitlab-ci.yml</code></a> which correctly sets that variable</p> <p>In your case, I think you just need to move that <code>variables:</code> up to the top, since (afaik) one cannot have two top-level <code>variables:</code> blocks. I'm actually genuinely surprised your <code>.gitlab-ci.yml</code> passed validation</p> <hr> <p>Separately, if you haven't yet seen, you can set the <code>TRACE</code> variable to <a href="https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/blob/v0.2.2/src/bin/auto-deploy#L3" rel="noreferrer">switch auto-deploy into <code>set -x</code> mode</a> which is super, super helpful in seeing exactly what it is trying to do. I believe your command was trying to run <code>rollout status /whatever-name</code> and with just a slash, it doesn't know what kind of name that is.</p>
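<p>The failure can also be reproduced outside the pipeline to confirm the diagnosis: with the resource type variable empty, the script effectively runs something like the following (namespace and name taken from the question), which should fail with the same "arguments in resource/name form" error:</p> <pre><code>kubectl rollout status -n kubetest-13952749-production -w "/production-djtest" </code></pre>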
mdaniel
<p>Given this <code>deployment.yaml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: deployment spec: revisionHistoryLimit: 5 template: spec: containers: {{- include &quot;app.container&quot; (merge .Values.app $) | nindent 8 }} {{- include &quot;ports&quot; (merge .Values.app ) | nindent 8 }} {{- range $k, $v := .Values.extraContainers }} {{- $nameDict := dict &quot;name&quot; $k -}} {{- include &quot;app.container&quot; (mustMergeOverwrite $.Values.app $nameDict $v) | nindent 8 }} {{- include &quot;ports&quot; (merge $nameDict $v ) | nindent 8 }} {{- end }} </code></pre> <p>This helpers file...</p> <pre><code>{{/* vim: set filetype=helm: */}} {{/* app container base */}} {{- define &quot;app.containerBase&quot; -}} - name: {{ .name | default &quot;app&quot; }} image: {{ .image.name }}:{{ .image.tag }} {{- if .command }} command: {{- range .command }} - {{ . | quote }} {{- end }} {{- end }} {{- if .args }} args: {{- range .args }} - {{ . | quote }} {{- end }} {{- end }} {{- range .envVars }} - name: {{ .name }} {{- with .value }} value: {{ . | quote }} {{- end }} {{- with .valueFrom }} valueFrom: {{- toYaml . | nindent 8 }} {{- end }} {{- end }} {{- if or .Values.configMaps .Values.secrets .Values.configMapRef }} envFrom: {{- if .Values.configMaps }} - configMapRef: name: &quot;{{- include &quot;app.fullname&quot; $ }}&quot; {{- end }} {{- range .Values.configMapRef }} - configMapRef: name: {{ . }} {{- end }} {{- range $name, $idk := $.Values.secrets }} - secretRef: name: &quot;{{- include &quot;app.fullname&quot; $ }}-{{ $name }}&quot; {{- end }} {{- end }} {{- end -}} {{/* app container */}} {{- define &quot;app.container&quot; -}} {{- include &quot;app.containerBase&quot; . }} resources: limits: cpu: {{ .resources.limits.cpu | default &quot;100m&quot; | quote }} memory: {{ .resources.limits.memory | default &quot;128Mi&quot; | quote }} {{- if .resources.limits.ephemeralStorage }} ephemeral-storage: {{ .resources.limits.ephemeralStorage }} {{- end }} requests: cpu: {{ .resources.requests.cpu | default &quot;100m&quot; | quote }} memory: {{ .resources.requests.memory | default &quot;128Mi&quot; | quote }} {{- if .resources.requests.ephemeralStorage }} ephemeral-storage: {{ .resources.requests.ephemeralStorage }} {{- end }} securityContext: runAsNonRoot: true runAsUser: {{ .securityContext.runAsUser }} runAsGroup: {{ .securityContext.runAsGroup }} allowPrivilegeEscalation: false {{- end -}} {{/* ports */}} {{- define &quot;ports&quot; -}} {{- if .port }} ports: - containerPort: {{ .port }} protocol: {{ .protocol | default &quot;tcp&quot; | upper }} {{- range .extraPorts}} - containerPort: {{ required &quot;.port is required.&quot; .port }} {{- if .name }} name: {{ .name }} {{- end }} {{- if .protocol }} protocol: {{ .protocol | upper }} {{- end }} {{- end }} {{- end }} {{- end -}} </code></pre> <p>This values.yaml</p> <pre><code>extraContainers: extra-container2: image: name: xxx.dkr.ecr.eu-west-1.amazonaws.com/foobar-two tag: test command: - sleep - 10 envVars: - name: FOO value: BAR probes: readinessProbe: path: /ContainerTwoReadinessPath livenessProbe: path: /ContainerTwoLivenessPath resources: limits: cpu: 200Mi memory: 2Gi requests: cpu: 200Mi memory: 2Gi extra-container3: image: name: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-three tag: latest command: - sleep - 10 envVars: - name: FOO value: BAZ probes: readinessProbe: enabled: false livenessProbe: enabled: false app: image: name: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar tag: latest probes: 
readinessProbe: path: /_readiness enabled: true livenessProbe: path: /_liveness enabled: true port: 100 resources: limits: cpu: 100Mi memory: 1Gi requests: cpu: 100Mi memory: 1Gi </code></pre> <p>Why does <code>helm template</code> result in the below:</p> <pre><code>--- # Source: corp-service/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: deployment spec: revisionHistoryLimit: 5 template: spec: containers: - name: app image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar:latest resources: limits: cpu: &quot;100Mi&quot; memory: &quot;1Gi&quot; requests: cpu: &quot;100Mi&quot; memory: &quot;1Gi&quot; securityContext: runAsNonRoot: true runAsUser: 2000 runAsGroup: 2000 allowPrivilegeEscalation: false ports: - containerPort: 100 protocol: TCP - name: extra-container2 image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-two:test command: - &quot;sleep&quot; - &quot;10&quot; - name: FOO value: &quot;BAR&quot; resources: limits: cpu: &quot;200Mi&quot; memory: &quot;2Gi&quot; requests: cpu: &quot;200Mi&quot; memory: &quot;2Gi&quot; securityContext: runAsNonRoot: true runAsUser: 2000 runAsGroup: 2000 allowPrivilegeEscalation: false - name: extra-container3 image: xx.dkr.ecr.eu-west-1.amazonaws.com/foobar-three:latest command: - &quot;sleep&quot; - &quot;10&quot; - name: FOO value: &quot;BAZ&quot; resources: limits: cpu: &quot;200Mi&quot; memory: &quot;2Gi&quot; requests: cpu: &quot;200Mi&quot; memory: &quot;2Gi&quot; securityContext: runAsNonRoot: true runAsUser: 2000 runAsGroup: 2000 allowPrivilegeEscalation: false </code></pre> <p>i.e why does <code>extra-container3</code> have <code>resources:</code> set with values from <code>extra-container2</code> instead of taking them from the default set of resources found under <code>.values.app</code>.</p> <p>essentially, i want <code>extracontainers</code> to use default values from <code>.values.app</code> unless i explicitly set them inside the <code>extracontainers</code> section. however it seems that if a previous iteration of the loop over <code>extracontainers</code> defines some values, they are used in the next iteration if they are not overriden.</p> <p>the resources for extra-container3 i am expecting are:</p> <pre><code>resources: limits: cpu: 100Mi memory: 1Gi requests: cpu: 100Mi memory: 1Gi </code></pre> <p>what am i doing wrong?</p>
gurpsone
<p>So there are a couple of things going on here, but mostly the answer is that <code>(mergeMustOverwrite)</code> <strong>mutates</strong> the <code>$dest</code> map, which causes your range to &quot;remember&quot; the last value it saw, which according to your question isn't the behavior you want. The simplest answer is to use <code>(deepCopy $.Values.app)</code> as the <code>$dest</code>, but there's an asterisk to that due to another bug we'll cover in a second</p> <pre><code> {{- $nameDict := dict &quot;name&quot; $k -}} - {{- include &quot;app.container&quot; (mustMergeOverwrite $.Values.app $nameDict $v) | nindent 8 }} + {{- include &quot;app.container&quot; (mustMergeOverwrite (deepCopy $.Values.app) $nameDict $v) | nindent 8 }} {{- include &quot;ports&quot; (merge $nameDict $v ) | nindent 8 }} {{- end }} </code></pre> <p>You see, <code>(deepCopy $.Values.app)</code> stack-overflows <code>helm</code> because of your inexplicable use of <code>(merge .Values.app $)</code> drags in <code>Chart</code>, <code>Files</code>, <code>Capabilities</code> and the whole world. I'm guessing your <code>_helpers.tpl</code> must have been copy-pasted from somewhere else, which explains the erroneous <em>relative</em> <code>.Values</code> reference inside that <code>define</code>. One way to fix that is to remove the <code>.Values.configMaps</code>, so it will track the <code>.app</code> context like I expect you meant, or you can change the first <code>(merge)</code> to artificially create a <code>&quot;Values&quot;: {}</code> item just to keep the template from blowing up when in tries to reference <code>.app.Values.configMaps</code>. The correct one will depend on what you were intending, but <code>(merge .Values.app $)</code> is almost certainly not it</p> <p>so, either:</p> <pre><code>--- a/templates/_helpers.tpl +++ b/templates/_helpers.tpl @@ -28,13 +28,13 @@ app container base {{- toYaml . | nindent 8 }} {{- end }} {{- end }} - {{- if or .Values.configMaps .Values.secrets .Values.configMapRef }} + {{- if or .configMaps .secrets .configMapRef }} envFrom: - {{- if .Values.configMaps }} + {{- if .configMaps }} - configMapRef: name: &quot;{{- include &quot;app.fullname&quot; $ }}&quot; {{- end }} - {{- range .Values.configMapRef }} + {{- range .configMapRef }} - configMapRef: name: {{ . }} {{- end }} </code></pre> <p><em>or</em></p> <pre><code> containers: - {{- include &quot;app.container&quot; (merge .Values.app $) | nindent 8 }} + {{- include &quot;app.container&quot; (merge .Values.app (dict &quot;Values&quot; (dict))) | nindent 8 }} {{- include &quot;ports&quot; (merge .Values.app ) | nindent 8 }} </code></pre>
mdaniel
<p>I am trying to find the simpliest method to use kubernetes in production. YAML templates look like an overhead to me. E.g. all I want is expose simple backend service. I can do it with kubectl with 2 lean commands:</p> <pre><code>kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080 </code></pre> <p>But is it fine to use this in production? I would like to use Infrastructure-as-a-Code approach.</p> <p>Is there a way to avoid working with kubernetes template files directly and still follow best practices? E.g. generating Yaml files from docker-compose files or similar?</p> <p>What is the expected kubectl usage in production? Just</p> <pre><code>kubectl apply -f &lt;folder&gt; </code></pre> <p>while it is developers job to maintain template files in <code>&lt;folder&gt;</code> ? Is there a Declarative Management with kubectl without writing kubernetes templates myself? E.g. some files that contain the minimal info needed to templates to be generated.</p> <p>Really want to use Kubernetes, please advice the simplest way to do this!</p>
Anatolii Stepaniuk
<p>I am quoting the original question here:</p> <blockquote> <p>kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080</p> </blockquote> <blockquote> <p>But is it fine to use this in production? I would like to use Infrastructure-as-a-Code approach.</p> </blockquote> <p>Your first approach is a list of commands, that must be executed in a certain order. In addition, all these commands are not <em>idempotent</em>, so you cannot run the commands multiple times.</p> <p>Also from the original question:</p> <blockquote> <p>Is there a way to avoid working with kubernetes template files directly and still follow best practices? E.g. generating Yaml files from docker-compose files or similar?</p> </blockquote> <blockquote> <p>What is the expected kubectl usage in production? Just</p> </blockquote> <blockquote> <p><code>kubectl apply -f &lt;folder&gt;</code></p> </blockquote> <p>The second approach is declarative, you only describe what you want, and the command is idempotent, so it can be run many times without problems. Your desired state is written in text files, so any change can be managed with a version control system, e.g. Git and the process can be done with validation in a CI/CD pipeline.</p> <p>For production environments, it is best practice to use version control system like git for what your cluster contain. This make it easy to recover or recreate your system.</p>
Jonas
<p>Imagine in a Master-Node-Node setup where you deploy a service with pod anti-affinity on the Nodes: An update of the Deployment will cause another pod being created but the scheduler not being able to schedule, because both Nodes have the anti-affinity.</p> <p><strong>Q:</strong> How could one more flexibly set the anti-affinity to allow the update?</p> <pre><code>affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - api topologyKey: kubernetes.io/hostname </code></pre> <p>With an error </p> <pre><code>No nodes are available that match all of the following predicates:: MatchInterPodAffinity (2), PodToleratesNodeTaints (1). </code></pre>
eljefedelrodeodeljefe
<p>Look at <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="noreferrer">Max Surge</a></p> <p>If you set Max Surge = 0, you are telling Kubernetes that you won't allow it to create more pods than the number of replicas you have setup for the deployment. This basically forces Kubernetes to remove a pod before starting a new one, and thereby making room for the new pod first, getting you around the podAntiAffinity issue. I've utilized this mechanism myself, with great success.</p> <p><strong>Config example</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment ... spec: replicas: &lt;any number larger than 1&gt; ... strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 type: RollingUpdate ... template: ... spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - api topologyKey: kubernetes.io/hostname </code></pre> <p><strong>Warning:</strong> Don't do this if you only have one replica, as it will cause downtime because the only pod will be removed before a new one is added. If you have a huge number of replicas, which will make deployments slow because Kubernetes can only upgrade 1 pod at at a time, you can crank up <em>maxUnavailable</em> to enable Kubernetes to remove a higher number of pods at a time.</p>
Silas Hansen
<p>I am running single node K8s cluster and my machine has 250GB memory. I am trying to launch multiple pods each needing atleast 50GB memory. I am setting my pod specifications as following.</p> <p><a href="https://i.stack.imgur.com/xS1Em.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xS1Em.png" alt="enter image description here" /></a></p> <p>Now I can launch first four pods and they starts &quot;Running&quot; immediately; however the fifth pod is &quot;Pending&quot;. <code>kubectl describe pods</code> shows <code>0/1 node available; 1 insufficient memory</code></p> <p>now at first this does not make sense - i have 250GB memory so <strong>i should be able to launch five pods</strong> each requesting 50GB memory. <strong>shouldn't i?</strong></p> <p>on second thought - this might be too tight for K8s to fulfill as, if it schedules the fifth pod than the worker node has no memory left for anything else (kubelet kube-proxy OS etc). if this is the case for denial than <strong>K8s must be keeping some amount of node resources (CPU memory storage network) always free or reserved. What is this amount? is it static value or N% of total node resource capacity? can we change these values when we launch cluster?</strong></p> <p>where can i find more details on related topic? i tried googling but nothing came which specifically addresses these questions. can you help?</p>
ankit patel
<blockquote> <p>now at first this does not make sense - i have 250GB memory so i should be able to launch five pods each requesting 50GB memory. shouldn't i?</p> </blockquote> <p>This depends on how much memory that is &quot;allocatable&quot; on your node. Some memory may be reserved for e.g. OS or other system tasks.</p> <p>First list your nodes:</p> <pre><code>kubectl get nodes </code></pre> <p>Then you get a list of your nodes, now describe one of your nodes:</p> <pre><code>kubectl describe node &lt;node-name&gt; </code></pre> <p>And now, you should see how much &quot;allocatable&quot; memory the node has.</p> <p>Example output:</p> <pre><code>... Allocatable: cpu: 4 memory: 2036732Ki pods: 110 ... </code></pre> <h2>Set custom reserved resources</h2> <blockquote> <p>K8s must be keeping some amount of node resources (CPU memory storage network) always free or reserved. What is this amount? is it static value or N% of total node resource capacity? can we change these values when we launch cluster?</p> </blockquote> <p>Yes, this can be configured. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">Reserve Compute Resources for System Daemons</a>.</p>
Jonas
<p>Currently practicing with Kubernetes (managed, on DO), I ran into a issue I couldn't resolve for two days. I have nginx-ingress setup along with cert-manager, and a domain where git.domain.com points to the IP of the load balancer. I can reach my Gitea deployment via the web, everything seems to work.</p> <p>What I want to achieve now is, that I can also use SSH like so</p> <pre><code>git clone [email protected]:org/repo.git </code></pre> <p>So I somehow need to expose the container port 22 via the service, then via the ingress. I tried a couple of things, but none of them seemed to work, probably because I'm a starter at K8S. Here is the working setup I use.</p> <p>Service definition:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: gitea-service spec: selector: app: gitea ports: - name: gitea-http port: 3000 targetPort: gitea-http - name: gitea-ssh port: 22 targetPort: gitea-ssh </code></pre> <p>Ingress definiton</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod spec: tls: - hosts: - git.domain.com secretName: letsencrypt-prod rules: - host: git.domain.com http: paths: - backend: serviceName: gitea-service servicePort: gitea-http </code></pre> <p>And part of my deployment, just to make sure:</p> <pre><code>... ports: - containerPort: 3000 name: gitea-http - containerPort: 22 name: gitea-ssh ... </code></pre> <p>Sorry if it's a dumb question, I think there is some basics that I confuse here. Thanks!</p>
Teecup
<blockquote> <p>So I somehow need to expose the container port 22 via the service, then via the ingress</p> </blockquote> <p>So yes and no: an Ingress is specifically for virtual-hosting using the <code>host:</code> header (or SNI) of the incoming request to know which backend to use. There is no such mechanism in SSH, or at the very least there's no Ingress controller that I'm aware of which supports protocols other than http for doing that.</p> <p>However, the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx Ingress controller</a> supports <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">TCP and UDP services</a> so long as you can assign a dedicated port for them (which in your case, you can). You would create a <code>ConfigMap</code> entry saying which port on the <strong>ingress controller's</strong> <code>Service</code> to map to the port on <strong>gitea's</strong> <code>Service</code>, and then you'll need to expose port 22 on whatever is Internet-facing in Digital Ocean that routes traffic to the ingress controller's <code>Service</code>.</p> <pre><code>[Internet] -&gt; :22[load balancer] --&gt; :12345[controller Service] --&gt; :22[gitea-service] </code></pre> <p>There are <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md#servicebetakubernetesiodo-loadbalancer-protocol" rel="noreferrer">Digital Ocean annotations</a> that you can use to switch certain ports over to TCP, but I didn't study that further than a quick search</p> <p>I just used the nginx ingress controller as a concrete example, but the haproxy based ingress controllers will almost certainly do that, and other controllers may have similar options, because your question is a reasonable one</p>
mdaniel
<p>While running kubernetes clusters, I've noticed that when a secret's value is changed <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">pods that use it as an environment variable</a> are <em>not</em> rebuilt and my applications <em>don't</em> receive a <code>SIGTERM</code> event.</p> <p>While I know it's technically possible to update the environment of a running process using something like <a href="https://www.gnu.org/s/gdb/" rel="nofollow noreferrer">gdb</a>, this is a horrible thing to do and I assume k8s doesn't do this.</p> <p>Is there a signal that is sent to an effected process when this situation occurs, or some other way to handle this?</p>
Mike Marcacci
<p>No, nor does any such thing happen on <code>ConfigMap</code> mounts, env-var injection, or any other situation; signals are sent to your process only as a side-effect of Pod termination</p> <p>There are <a href="https://duckduckgo.com/?q=kubernetes+reload+configmap+secret&amp;atb=v73-4_q&amp;ia=web" rel="nofollow noreferrer">innumerable</a> solutions to <a href="https://github.com/stakater/Reloader#readme" rel="nofollow noreferrer">do rolling update on <code>ConfigMap</code> or <code>Secret</code> change</a> but you have to configure what you would want your cluster to do and under what circumstances, because there is no way that a one-size-fits-all solution would work in all the ways that kubernetes is used in the world</p>
mdaniel
<p>I have a Tekton <code>Pipeline</code> and <code>PipelineRun</code> definitions. But, I couldn't achieve to run <code>Pipeline</code> via passing parameter.</p> <pre><code>apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: build-deploy- labels: tekton.dev/pipeline: build-deploy spec: serviceAccountName: tekton-build-bot pipelineRef: name: build-deploy params: - name: registry-address value: $(REG_ADDRESS) - name: repo-address #value: $(REPO_ADDRESS) value: $(REPO_ADDRESS) - name: repo-name value: $(REPO_NAME) - name: version value: $(VERSION) workspaces: - name: source persistentVolumeClaim: claimName: my-pvc </code></pre> <p>How can I pass while parameters while trying to run that runner with the following command <code>kubectl create -f pipelinerun.yaml</code>?</p> <p>Example:</p> <p><code>value: $(REG_ADDRESS)</code> -&gt; I wanted to pass registry address as right before the running pipeline instead of giving hard-coded constant.</p>
jdev
<p>You cannot pass those parameters when using <code>kubectl create</code>.</p> <p>There are two alternatives:</p> <h2>Use tkn cli</h2> <p>You can use <a href="https://github.com/tektoncd/cli" rel="nofollow noreferrer">tkn</a>, a purpose made CLI for Tekton. Then you can start a run of a Pipeline with, e.g.:</p> <pre><code>tkn pipeline start build-deploy \ --param registry-address=yay \ --param repo-name=nay \ --workspace name=source,claimName=my-pvc </code></pre> <h2>Initiate pipeline with Trigger</h2> <p>You can setup a <a href="https://github.com/tektoncd/triggers" rel="nofollow noreferrer">Trigger</a> that initiates runs of your Pipeline on certain events, e.g. when you push to Git.</p> <p>Then your <code>PipelineRun</code> template with parameter mapping is done using a <a href="https://github.com/tektoncd/triggers/blob/main/docs/triggertemplates.md" rel="nofollow noreferrer">TriggerTemplate</a></p>
Jonas
<p>I am check etcd(3.3.13) cluster status using this command:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z work]# /opt/k8s/bin/etcdctl endpoint health --cluster https://172.19.150.82:2379 is unhealthy: failed to connect: context deadline exceeded https://172.19.104.231:2379 is unhealthy: failed to connect: context deadline exceeded https://172.19.104.230:2379 is unhealthy: failed to connect: context deadline exceeded Error: unhealthy cluster </code></pre> <p>check etcd member:</p> <pre><code>[root@iZuf63refzweg1d9dh94t8Z work]# /opt/k8s/bin/etcdctl member list 56298c42af788da7, started, azshara-k8s02, https://172.19.104.230:2380, https://172.19.104.230:2379 5ab2d0e431f00a20, started, azshara-k8s01, https://172.19.104.231:2380, https://172.19.104.231:2379 84c70bf96ccff30f, started, azshara-k8s03, https://172.19.150.82:2380, https://172.19.150.82:2379 </code></pre> <p>my cluster is deploy success?If not,how to solve the context deadline exceeded error? I tried this:</p> <pre><code> export ETCDCTL_API=3 [root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint status --write-out=table +----------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +----------------+------------------+---------+---------+-----------+-----------+------------+ | 127.0.0.1:2379 | 5ab2d0e431f00a20 | 3.3.13 | 2.0 MB | false | 20 | 39303798 | +----------------+------------------+---------+---------+-----------+-----------+------------+ [root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint health 127.0.0.1:2379 is healthy: successfully committed proposal: took = 1.816293ms [root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint health --cluster https://172.19.150.82:2379 is unhealthy: failed to connect: context deadline exceeded https://172.19.104.231:2379 is unhealthy: failed to connect: context deadline exceeded https://172.19.104.230:2379 is unhealthy: failed to connect: context deadline exceeded Error: unhealthy cluster </code></pre>
Dolphin
<blockquote> <p>how to solve the context deadline exceeded error?</p> </blockquote> <p>That error is misleading; it is <em>usually</em> caused by <code>etcdctl</code> not providing credentials and/or not using the same <code>ETCDCTL_API=</code> value as the cluster. If it's a modern <code>etcd</code> version, you'll want <a href="https://github.com/etcd-io/etcd/blob/v3.4.0/Documentation/dev-guide/interacting_v3.md" rel="noreferrer"><code>export ETCDCTL_API=3</code></a>, followed by providing the same <a href="https://github.com/etcd-io/etcd/blob/v3.4.0/Documentation/op-guide/security.md#basic-setup" rel="noreferrer"><code>--cert-file=</code> and <code>--key-file=</code> and likely <code>--trusted-ca-file=</code></a> as was used to start <code>etcd</code> itself.</p>
mdaniel
<p>I have seen the one-pod &lt;-&gt; one-container rule, which seems to apply to business logic pods, but has exceptions when it comes to shared network/volume related resources.</p> <p>What are encountered production uses of deploying pods without a deployment configuration?</p>
Olshansky
<p>I use pods directly to start a Centos (or other operating system) container in which to verify connections or test command line options.</p> <p>As a specific example, below is a shell script that starts an <code>ubuntu</code> container. You can easily modify the manifest to test secret access or change the service account to test access control.</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/bash RANDOMIZER=$(uuid | cut -b-5) POD_NAME=&quot;bash-shell-$RANDOMIZER&quot; IMAGE=ubuntu NAMESPACE=$(uuid) kubectl create namespace $NAMESPACE kubectl apply -f - &lt;&lt;EOF apiVersion: v1 kind: Pod metadata: name: $POD_NAME namespace: $NAMESPACE spec: containers: - name: $POD_NAME image: $IMAGE command: [&quot;/bin/bash&quot;] args: [&quot;-c&quot;, &quot;while true; do date; sleep 5; done&quot;] hostNetwork: true dnsPolicy: Default restartPolicy: Never EOF echo &quot;---------------------------------&quot; echo &quot;| Press ^C when pod is running. |&quot; echo &quot;---------------------------------&quot; kubectl -n $NAMESPACE get pod $POD_NAME -w echo kubectl -n $NAMESPACE exec -it $POD_NAME -- /bin/bash kubectl -n $NAMESPACE delete pod $POD_NAME kubectl delete namespace $NAMESPACE </code></pre>
David Medinets
<p>So, we are thinking about switching our production server to Kubernetes. Right now, it's not really professional as it's only one single nginx instance. 99% of the time our server needs to handle about 30 requests a minute, which it easily manages to do. On <strong>specific times, that we exactly know of in before</strong>, there can be 2000 Users on the same second expecting a service. That is obviously way too much for it and last time it often returned a 502. Kubernetes Autoscaling seems like a good approach for that. The only problem is that we need the extra containers at that specific time, not 30 seconds after.</p> <p>So is there a way to &quot;manually&quot; scale? Like telling to Kubernetes to prepare 4 containers at exactly 8 PM MESZ e.g.?</p>
crimbler2
<p>There are a number of way to autoscale Pods in Kubernetes.</p> <h2>Horizontal Pod Autoscaler</h2> <p>You can use an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> to scale up Pods reactively, based on metrics. You can also use custom metrics.</p> <h2>Manually adjust number of replicas</h2> <p>You can also manually set the number of <code>replicas:</code> in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. This can be done <em>declaratively</em> by changing the manifest, or <em>imperatively</em> by using kubectl:</p> <pre><code>kubectl scale deployment my-app --replicas=5 </code></pre> <h2>Use a CronJob to adjust the number of replicas</h2> <p>You can also use a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> with a kubectl-image that does the command above to scale up number of replicas at a specific time.</p> <h2>Use Knative Serving Autoscaler to autoscale per request</h2> <p>When using Knative Serving, you can use the <a href="https://knative.dev/docs/serving/autoscaling/concurrency/" rel="nofollow noreferrer">Knative Service Autoscaler</a> to scale up based on the number of requests.</p>
Jonas
<p>I have an image which contains data inside /usr/data/webroot. This data should be moved on container init to /var/www/html.</p> <p>Now I stumbeled upon InitContainers. As I understand it, it can be used to execute tasks on container initialization. </p> <p>But I don't know if the task is beeing excuted after the amo-magento pods are created, or if the init task runs, and after that the magento pods are created.</p> <p>I suppose that the container with the magento image is not available when the initContainers task runs, so no content is available to move to the new directory.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: amo-magento labels: app: amo-magento spec: replicas: 1 selector: matchLabels: app: amo-magento template: metadata: labels: app: amo-magento tier: frontend spec: initContainers: - name: setup-magento image: busybox:1.28 command: ["sh", "-c", "mv -r /magento/* /www"] volumeMounts: - mountPath: /www name: pvc-www - mountPath: /magento name: magento-src containers: - name: amo-magento image: amo-magento:0.7 # add google gcr.io path after upload imagePullPolicy: Never volumeMounts: - name: install-sh mountPath: /tmp/install.sh subPath: install.sh - name: mage-autoinstall mountPath: /tmp/mage-autoinstall.sh subPath: mage-autoinstall.sh - name: pvc-www mountPath: /var/www/html - name: magento-src mountPath: /usr/data/webroot # maybe as secret - can be used as configMap because it has not to be writable - name: auth-json mountPath: /var/www/html/auth.json subPath: auth.json - name: php-ini-prod mountPath: /usr/local/etc/php/php.ini subPath: php.ini # - name: php-memory-limit # mountPath: /usr/local/etc/php/conf.d/memory-limit.ini # subPath: memory-limit.ini volumes: - name: magento-src emptyDir: {} - name: pvc-www persistentVolumeClaim: claimName: magento2-volumeclaim - name: install-sh configMap: name: install-sh # kubectl create configmap mage-autoinstall --from-file=build/docker/mage-autoinstall.sh - name: mage-autoinstall configMap: name: mage-autoinstall - name: auth-json configMap: name: auth-json - name: php-ini-prod configMap: name: php-ini-prod # - name: php-memory-limit # configMap: # name: php-memory-limit </code></pre>
John Jameson
<blockquote> <p>But I don't know if the task is beeing excuted after the amo-magento pods are created, or if the init task runs, and after that the magento pods are created.</p> </blockquote> <p>For sure the latter, that's why you are able to specify an <strong>entirely different</strong> <code>image:</code> for your <code>initContainers:</code> task -- they are related to one another only in that they run on the same Node and, as you have seen, share volumes. Well, I said "for sure" but you have a slight misnomer: after that the <code>magneto</code> <strong>containers</strong> are created -- the Pod is the collection of <em>every</em> colocated container, <code>initContainers:</code> and <code>container:</code> containers</p> <p>If I understand your question, the fix to your <code>Deployment</code> is just to update the <code>image:</code> in your <code>initContainer:</code> to be the one which contains the magic <code>/usr/data/webroot</code> and then update your shell command to reference the correct path inside that image:</p> <pre><code> initContainers: - name: setup-magento image: your-magic-image:its-magic-tag command: ["sh", "-c", "mv -r /usr/data/webroot/* /www"] volumeMounts: - mountPath: /www name: pvc-www # but **removing** the reference to the emptyDir volume </code></pre> <p>and then by the time the <code>container[0]</code> starts up, the PVC will contain the data you expect</p> <hr> <p>That said, I am actually <em>pretty sure</em> that you want to remove the PVC from this story, since -- by definition -- it is persistent across Pod reboots and thus will only accumulate files over time (since your <code>sh</code> command does not currently clean up <code>/www</code> before moving files there). If you replaced all those <code>pvc</code> references to <code>emptyDir: {}</code> references, then those directories would always be "fresh" and would always contain just the content from the tagged image declared in your <code>initContainer:</code></p>
mdaniel
<p>When I run <code>kubectl</code> inside of a pod it defaults to "in-cluster config" (defined by files in <code>/var/run/secrets/kubernetes.io/serviceaccount</code>). If I want to wrap <code>kubectl</code> inside of a call to Python subprocess with <code>shell=False</code>, how do I tell <code>kubectl</code> where to find the in-cluster config?</p> <p>Since when I run <code>shell=False</code> none of the environment makes it into the subprocess. It seems I need to explicitly pass some environment variables or other system state to the subprocess call for <code>kubectl</code> to discover the in-cluster config. </p> <p>How does <code>kubectl</code> discover this config? Are there a simple few variables to pass through?</p>
Joe J
<p>You will have to construct a <code>KUBECONFIG</code> by hand, given those values, since that's more-or-less exactly what the <a href="https://github.com/kubernetes-client/python-base/blob/474e9fb32293fa05098e920967bb0e0645182d5b/config/incluster_config.py#L81-L86" rel="nofollow noreferrer">python client does anyway</a>. In short, either in python or via the following commands:</p> <pre class="lang-sh prettyprint-override"><code>kubectl config set-cluster the-cluster --server="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}" --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt kubectl config set-credentials pod-token --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" kubectl config set-context pod-context --cluster=the-cluster --user=pod-token kubectl config use-context pod-context </code></pre> <p>and then you're off to the races</p>
mdaniel
<p>I develop a .NET API and I want to deploy it in a Kubernetes pod. I'm using InClusterConfig():</p> <pre><code>var config = KubernetesClientConfiguration.InClusterConfig(); _client = new Kubernetes(config); </code></pre> <p>I follow this issue (<a href="https://stackoverflow.com/questions/63232463/accessing-kubernetes-service-from-c-sharp-docker">Accessing kubernetes service from C# docker</a>) and I understand that InClusterConfig uses the default service account of the namespace where you are deploying the pod.</p> <p>Is it possible to specify that the configuration is taken from another service account than default? Are there other methods to retrieve configuration from a Kubernetes pod?</p>
glorfindel
<p>The Pod will use the ServiceAccount specified in the Pod-manifest, if not specified the <code>default</code> ServiceAccount in the namespace will be used.</p> <p>You can specify the ServiceAccount for a Pod with the <code>serviceAccountName:</code>-field:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: serviceAccountName: my-service-account ... </code></pre> <p>Also see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a></p>
Jonas
<p>Trying to install kubernetes on virtualbox using ansible:</p> <p>in master-playbook.yml</p> <pre><code> - name: Install comodo cert copy: src=BCPSG.pem dest=/etc/ssl/certs/ca-certificates.crt - name: Update cert index shell: /usr/sbin/update-ca-certificates - name: Adding apt repository for Kubernetes apt_repository: repo: deb https://packages.cloud.google.com/apt/dists/ kubernetes-xenial main state: present filename: kubernetes.list validate_certs: False </code></pre> <p>now, Vagrantfile calls the playbook:</p> <pre><code>config.vm.define "k8s-master" do |master| master.vm.box = IMAGE_NAME master.vm.network "private_network", ip: "192.168.50.10" master.vm.hostname = "k8s-master" master.vm.provision "ansible" do |ansible| ansible.playbook = "kubernetes-setup/master-playbook.yml" end end </code></pre> <p>but i am getting error:</p> <blockquote> <pre><code>TASK [Adding apt repository for Kubernetes] ************************************ fatal: [k8s-master]: FAILED! =&gt; {"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n", </code></pre> <p>"module_stdout": "Traceback (most recent call last):\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 113, in \r\n _ansiballz_main()\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 105, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1555907987.70663-229510485563848/AnsiballZ_apt_repository.py\", line 48, in invoke_module\r\n imp.load_module('<strong>main</strong>', mod, module, MOD_DESC)\r\n File \"/tmp/ansible_apt_repository_payload_GXYAmU/<strong>main</strong>.py\", line 550, in \r\n File \"/tmp/ansible_apt_repository_payload_GXYAmU/<strong>main</strong>.py\", line 542, in main\r\n File \"/usr/lib/python2.7/dist-packages/apt/cache.py\", line 487, in update\r\n raise FetchFailedException(e)\r\napt.cache.FetchFailedException: W:The repository '<a href="https://packages.cloud.google.com/apt/dists" rel="nofollow noreferrer">https://packages.cloud.google.com/apt/dists</a> kubernetes-xenial Release' does not have a Release file., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch <a href="https://packages.cloud.google.com/apt/dists/dists/kubernetes-xenial/main/binary-amd64/Packages" rel="nofollow noreferrer">https://packages.cloud.google.com/apt/dists/dists/kubernetes-xenial/main/binary-amd64/Packages</a> server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none, E:Some index files failed to download. They have been ignored, or old ones used instead.\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}</p> </blockquote>
kamal
<p>As is described in <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl" rel="nofollow noreferrer">the fine manual</a>, you must first add the GPG signing key with <code>apt-key</code> or the ansible module <a href="https://docs.ansible.com/ansible/2.7/modules/apt_key_module.html#apt-key-module" rel="nofollow noreferrer"><code>apt_key:</code></a></p> <p>Similarly listed on that page, the correct apt repo is <code>deb https://apt.kubernetes.io/ kubernetes-xenial main</code></p> <p>So yes, while you entirely borked your CA chain of trust with the first command, I suspect you would have subsequently encountered untrusted package signatures with the next steps since you did not teach apt apt the kubernetes package signing key.</p>
mdaniel
<p>I'm having an extremely hard time setting up EKS on AWS. I've followed this tutorial: <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers</a></p> <p>I got up to the <code>~/.kube/config</code> file and when I try to run <code>kubectl get svc</code> I'm prompted with the below.</p> <pre><code>▶ kubectl get svc Please enter Username: Alex Please enter Password: ******** Error from server (Forbidden): services is forbidden: User "system:anonymous" cannot list services in the namespace "default" </code></pre> <p>I'm unsure where to find the username and password for this entry. Please point me to the exact place where I can find this information.</p> <p>I think this also has to do with EKS RBAC. I'm not sure how to get around this without having access to the server.</p>
Alex Miles
<p>This issue occurs if your <code>user</code> configuration isn't working in your <code>kubeconfig</code>, or if you are on a version of <code>kubectl</code> less than v1.10</p>
monokrome
<p>So, a StatefulSet creates a new volume for each of its pod.</p> <p>How does it maintain consistency of the written data. Because, each pod may serve a different client at a specific moment in time and will be writing different stuff to the volume. But, if this client tries to access the data later, it will have to connect to the same pod somehow to access its own data. Do these pods talk to each other to share data?</p> <p>I may have asked a silly question</p> <p>I think I know the answer, but I am confused. I won't tell the answer just to get an unbiased reply</p>
Aniket
<blockquote> <p>How does it maintain consistency of the written data. Because, each pod may serve a different client at a specific moment in time and will be writing different stuff to the volume. But, if this client tries to access the data later, it will have to connect to the same pod somehow to access its own data. Do these pods talk to each other to share data?</p> </blockquote> <p>Kubernetes does <em>nothing</em> about this. But you are right, these things are needed. The <em>application is responsible</em> for this. There are many ways to do this, e.g. using <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft Consensus Algorithm</a></p>
Jonas
<p>I am playing with Kubernetes on <a href="https://labs.play-with-k8s.com" rel="nofollow noreferrer">https://labs.play-with-k8s.com</a>.<br> I tried to use the <code>kubectl proxy</code> following the instructions in <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/#using-kubectl-to-start-a-proxy-server" rel="nofollow noreferrer">Kubernete's website</a>.<br> On the Master node (192.168.0.13) I ran: <code>kubectl proxy --port=8080</code>: </p> <pre><code>[node1 ~]$ kubectl proxy --port=8080 Starting to serve on 127.0.0.1:8080 </code></pre> <p>On the Worker node I ran <code>curl -v http://192.168.0.13:8080</code> and it failed: </p> <pre><code>[node2 ~]$ curl -v http://192.168.0.13:8080 * About to connect() to 192.168.0.13 port 8080 (#0) * Trying 192.168.0.13... * Connection refused * Failed connect to 192.168.0.13:8080; Connection refused * Closing connection 0 curl: (7) Failed connect to 192.168.0.13:8080; Connection refused </code></pre> <p>Any idea why the connection is refused ? </p>
E235
<blockquote> <p>Starting to serve on 127.0.0.1:8080</p> </blockquote> <p>As shown in the message it emits on startup, that is because <code>kubectl proxy</code> <strong>only</strong> listens on localhost (i.e. <code>127.0.0.1</code>), unless you instruct it otherwise:</p> <pre><code>kubectl proxy --address=0.0.0.0 --accept-hosts='.*' </code></pre> <p>and that <code>--accept-hosts</code> business is a regular expression for hosts (presumably <code>Referer</code> headers? DNS lookups?) from which <code>kubectl</code> will accept connections, and <code>.*</code> is a regex that matches every string including the empty ones.</p>
mdaniel
<p>For any node, we can just do</p> <pre><code>node, err := clientset.corev1().Node(&quot;node_name&quot;).Get(...) </code></pre> <p>I am looking for something similar to view the kubelet agent. <code>clientset.corev1().Pod</code> does not have them since it is not managed by api server.</p> <p>Is there a way to programmatically retrieve this information?</p>
Fermat's Little Student
<blockquote> <p>Is there a way to programmatically retrieve this information?</p> </blockquote> <p>No. The kubelet is not a <em>resource</em> (e.g. <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects" rel="nofollow noreferrer">API resource / object</a>) in Kubernetes.</p>
Jonas
<p>I want to deploy kafka on kubernetes.</p> <p>Because I will be streaming with high bandwidth from the internet to kafka I want to use the hostport and advertise the hosts "dnsName:hostPort" to zookeeper so that all traffic goes directly to the kafka broker (as opposed to using nodeport and a loadbalancer where traffic hits some random node which redirects it creating unnecessary traffic).</p> <p>I have setup my kubernetes cluster on amazon. With <code>kubectl describe node ${nodeId}</code> I get the internalIp, externalIp, internal and external Dns name of the node. </p> <p>I want to pass the externalDns name to the kafka broker so that it can use it as advertise host.</p> <p>How can I pass that information to the container? Ideally I could do this from the deployment yaml but I'm also open to other solutions.</p>
herm
<blockquote> <p>How can I pass that information to the container? Ideally I could do this from the deployment yaml but I'm also open to other solutions.</p> </blockquote> <p>The first thing I would try is <code>envFrom: fieldRef:</code> and see if it will let you reach into the <code>PodSpec</code>'s <code>status:</code> field to grab the <code>nodeName</code>. I deeply appreciate that isn't the <code>ExternalDnsName</code> you asked about, but if <code>fieldRef</code> works, it could be a lot less typing and thus could be a good tradeoff.</p> <p>But, with "I'm also open to other solutions" in mind: don't forget that -- unless instructed otherwise -- each Pod is able to interact with the kubernetes API, and with the correct RBAC permissions it can request the very information you're seeking. You can do that either as a <code>command:</code> override, to do setup work before launching the kafka broker, or you can do that work in an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a>, write the external address into a shared bit of filesystem (with <code>volume: emptyDir: {}</code> or similar), and then any glue code for slurping that value into your kafka broker.</p> <p>I am 100% certain that the <code>envFrom: fieldRef:</code> construct that I mentioned earlier can acquire the <code>metadata.name</code> and <code>metadata.namespace</code> of the Pod, at which point the Pod can ask the kubernetes API for its own <code>PodSpec</code>, extract the <code>nodeName</code> from the aforementioned <code>status:</code> field, then ask the kubernetes API for the Node info, and voilà, you have all the information kubernetes knows about that Node.</p>
mdaniel
<p>When reading through the kubernetes document I noticed that NodePort type service always automatically creates a ClusterIP and ingress traffic targeting NodePort will be routed to ClusterIP. My question is that why is this necessary? Why can't kubeproxy directly does load balancing for this NodePort through forwarding? ClusterIP doesn't seem to be necessary to support NodePort and it seems to introduce additional overhead.</p>
Michael Ma
<p>Even <code>Service</code> of type <code>NodePort</code> does not directly contact Pods, they still go through the Service's cluster IP, and its associated Pod selector rules. With newer kubernetes I think you can also influence whether traffic is round-robin or weighted distributed, which wouldn't work if the NodePort directly contacted the Pods</p> <p>Also, NodePorts are opened on <strong>every</strong> member of the cluster, but -- in most cases -- you don't have a Pod running on <em>every</em> member of the cluster, so it still has to use the Service IP to route to the actual Node upon which an in-service Pod can service that traffic.</p> <p>Think of NodePorts an a mechanism to bridge the "outside world" with the "cluster world," rather than a shortcut mechanism to side-step iptables or ipvs</p>
mdaniel
<p>I am trying to view portal that build with angular uses netcore backend runs on docker swarm fluently. When I try to deploy angular image on openshift, I get following error;</p> <pre><code>[emerg] 1#1: bind() to 0.0.0.0:80 failed (13: Permission denied) nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) </code></pre> <p>First I created nginx deployment as root user using &quot;nginx:1.19.6-alpine&quot; and defined service account(anyuid), it works fine. Then I try to create openshift deployment with &quot;nginxinc/nginx-unprivileged&quot; image to run as non-root user. I had change nginx.conf according to &quot;nginxinc/nginx-unprivileged&quot; image. I defined service account again but it throws &quot;bind() to 0.0.0.0:80 failed (13: Permission denied)&quot; error.</p> <p>Container 80 port open. There was no ingress. Service uses 80 port to expose route. What could be the solution ?</p> <p>Here is my Dockerfile;</p> <pre><code>### STAGE 1: Build ### FROM node:12.18-alpine as build-env ENV TZ=Europe/Istanbul RUN export NG_CLI_ANALYTICS=false COPY ng/package.json ng/package-lock.json ng/.npmrc ./ COPY ng/projects/package.json ./projects/package.json RUN npm install &amp;&amp; pwd &amp;&amp; ls -ltra COPY ./ng/ ./ RUN time node --max_old_space_size=12000 node_modules/@angular/cli/bin/ng build project --configuration production WORKDIR /usr/src/app/dist/ COPY ng/.npmrc ./ RUN npm publish WORKDIR /usr/src/app/ RUN time node --max_old_space_size=12000 node_modules/@angular/cli/bin/ng build portal --configuration production ### STAGE 2: Run ### FROM nginxinc/nginx-unprivileged:1.23-alpine as runtime-env ENV TZ=Europe/Istanbul COPY ng/nginx.conf /etc/nginx/nginx.conf COPY ng/nginx.template.conf /etc/nginx/nginx.template.conf COPY --from=build-env /usr/src/app/dist/portal/ /usr/share/nginx/html/ CMD [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;envsubst &lt; /usr/share/nginx/html/assets/env.template.js &gt; /usr/share/nginx/html/assets/env.js &amp;&amp; envsubst '$API_URL' &lt; /etc/nginx/nginx.template.conf &gt; /etc/nginx/conf.d/default.conf &amp;&amp; exec nginx -g 'daemon off;'&quot;] </code></pre> <p>nginx.conf file :</p> <pre><code> worker_processes auto; # nginx.conf file taken from nginxinc_nginx-unprivileged image error_log /var/log/nginx/error.log notice; pid /tmp/nginx.pid; events { worker_connections 1024; } http { proxy_temp_path /tmp/proxy_temp; client_body_temp_path /tmp/client_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] &quot;$request&quot; ' '$status $body_bytes_sent &quot;$http_referer&quot; ' '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } </code></pre> <p>nginx.template.conf</p> <pre><code>server { listen 80; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/host.access.log main; location / { root /usr/share/nginx/html; try_files $uri $uri/ /index.html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location /api { proxy_pass ${API_URL}; proxy_pass_request_headers on; #rewrite /api/(.*) /$1 break; } } </code></pre> <p>I have used all service 
accounts on deployment such as nonroot, hostaccess, hostmount-anyuid, priviledged, restricted and anyuid.</p> <p>Also I tried to add following command to dockerfile:</p> <pre><code>&quot;RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx &amp;&amp; \ chmod -R 770 /var/cache/nginx /var/run /var/log/nginx&quot; </code></pre> <p>Gets it from <a href="https://stackoverflow.com/questions/54360223/openshift-nginx-permission-problem-nginx-emerg-mkdir-var-cache-nginx-cli">here</a>.</p>
ahmet budak
<p>OpenShift will not run your container as root, so it cannot listen on port 80. Choose a port &gt;1024, e.g. port 8080 instead, and it should work.</p>
Jonas
<p>When run <code>kubectl get cs</code> on centos 7 I got below error message. </p> <pre><code>No resources found. Error from server (Forbidden): componentstatuses is forbidden: User "system:node:&lt;server-name&gt;" cannot list componentstatuses at the cluster scope </code></pre> <p>I can confirm the api server is running <code>kubectl cluster-info</code></p> <pre><code>Kubernetes master is running at https://&lt;server-IP&gt;:6443 KubeDNS is running at https://&lt;server-IP&gt;:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>Also I have below in <code>~/.bash_profile</code></p> <pre><code>export http_proxy=http://&lt;proxy-server-IP&gt;:3128 export https_proxy=http://&lt;proxy-server-IP&gt;:3128 export no_proxy=$no_proxy,127.0.0.1,localhost,&lt;server-IP&gt;,&lt;server-name&gt; export KUBECONFIG=/etc/kubernetes/kubelet.conf </code></pre> <p>Not only <code>kubectl get cs</code> yield the error message, <code>kubectl apply -f kubernetes-dashboard.yaml</code> yield similar error message</p> <pre><code>Error from server (Forbidden): error when retrieving current configuration of: Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret" Name: "kubernetes-dashboard-certs", Namespace: "kube-system" Object: &amp;{map["kind":"Secret" "metadata":map["labels":map["k8s-app":"kubernetes-dashboard"] "name":"kubernetes-dashboard-certs" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "type":"Opaque" "apiVersion":"v1"]} from server for: "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" is forbidden: User "system:node:&lt;server-name&gt;" cannot get secrets in the namespace "kube-system": no path found to object </code></pre>
Antelope
<blockquote> <p><code>export KUBECONFIG=/etc/kubernetes/kubelet.conf</code></p> </blockquote> <p>Is completely incorrect; you are, as the error message is cheerfully trying to inform you, attempting to perform cluster operations as a <code>Node</code>, not as one of the Users or <code>ServiceAccount</code>s. RBAC is almost explicitly designed to stop you from doing exactly what you are currently doing. You would never want a <code>Node</code> to be able to read sensitive credentials nor create arbitrary <code>Pod</code>s at cluster scope. </p> <p>If you want to be all caviler about it, then ssh into a master Node and use the <code>cluster-admin</code> credentials usually found in <code>/etc/kubernetes/admin.conf</code> (or a similar file -- depending on how your cluster was provisioned). If you don't already <em>have</em> a <code>cluster-admin</code> credential, then create an X.509 certificate that is signed by a CA that the apiserver trusts with an Organization (<code>O=</code> in X.509 parlance) of <code>cluster-admin</code> and then create yourself a <code>ServiceAccount</code> (or whatever) with a <code>ClusterRoleBinding</code> of <code>cluster-admin</code> and go from there.</p>
mdaniel
<p>I've setup a cronjob in Openshift. I can see its logs through the Web Console but I want to receive a mail containing the output of the job whenever it completes. How do I go about implementing this?</p>
vatsal
<p>You can add sth. like "mailx" to your cronjob image and forward the output.</p> <p>In the following example "mailgateway.default.svc" is a service route to a mailgateway outside the cluster:</p> <pre><code>&lt;output_producing_command&gt; | mailx -E -v -s "Subject" -S smtp=smtp://mailgateway.default.svc:25 -S from="[email protected] (Foo Bar)" [email protected] 2&gt;&amp;1 </code></pre> <blockquote> <p>-E If an outgoing message does not contain any text in its first or only message part, do not send it but discard it silently, effectively setting the skipemptybody variable at program startup. This is useful for sending messages from scripts started by cron(8).</p> </blockquote>
Milde
<p>I'm using Ansible and the <code>k8s</code> module for deploying applications to an OpenShift cluster. In general this is working really well.</p> <p>However, when I try to set the port value, in a deployment config, using a value from a variable, things are not so happy.</p> <p>I have the following ansible task as an example:</p> <pre><code>- name: Create app service k8s: name: "{{ name | lower }}" state: present definition: apiVersion: v1 kind: Service metadata: annotations: labels: app: "{{ name | lower }}" name: "{{ name | lower }}" namespace: "{{ name | lower }}" spec: ports: - name: "{{ port }}-tcp" port: "{{ port }}" protocol: TCP targetPort: "{{ port | int }}" &lt;--- the problem! selector: deploymentconfig: "{{ name | lower }}" sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>The variable is set in a yaml file, which is read into the playbook, and the variable is set like <code>port: "5000"</code>.</p> <p>If I change this to <code>port: 5000</code> then it solves the problem, but I use this variable in several other places and other playbooks, so I would prefer to keep the variable as is.</p> <p>I have tried using the approaches to solve this: <code>"{{ port | int }}"</code></p> <p>An example of the error I get is:</p> <pre><code>fatal: [localhost]: FAILED! =&gt; {"changed": false, "error": 422, "msg": "Failed to patch object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Service \\\"myapp\\\" is invalid: spec.ports[0].targetPort: Invalid value: \\\"7001\\\": must contain at least one letter or number (a-z, 0-9)\",\"reason\":\"Invalid\",\"details\":{\"name\":\"usdt-wallet\",\"kind\":\"Service\",\"causes\":[{\"reason\":\"FieldValueInvalid\",\"message\":\"Invalid value: \\\"5000\\\": must contain at least one letter or number (a-z, 0-9)\",\"field\":\"spec.ports[0].targetPort\"}]},\"code\":422}\n", "reason": "Unprocessable Entity", "status": 422} </code></pre>
Magick
<p>According to the posted error message, your problem isn't <code>|int</code> or <code>|string</code> -- although I agree the error message is misleading:</p> <blockquote> <p>"message": "Service \"usdt-wallet\" is invalid: spec.ports[0].targetPort: Invalid value: \"70001\": must contain at least one letter or number (a-z, 0-9)",</p> </blockquote> <p>but it is caused by trying to use 70001 as a target port but TCP ports must be in the range 1 to 65535 inclusive, as stated by <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#serviceport-v1-core" rel="nofollow noreferrer">the fine manual</a></p>
mdaniel
<p>Can we use nfs volume plugin to maintain the High Availability and Disaster Recovery among the kubernetes cluster?</p> <p>I am running the pod with MongoDB. Getting the error </p> <blockquote> <p>chown: changing ownership of '/data/db': Operation not permitted .</p> </blockquote> <p>Cloud any body, Please suggest me how to resolve the error? (or)</p> <p>Is any alternative volume plugin is suggestible to achieve HA- DR in kubernetes cluster?</p>
BSG
<blockquote> <p>chown: changing ownership of '/data/db': Operation not permitted .</p> </blockquote> <p>You'll want to either launch the mongo container as <code>root</code>, so that you <em>can</em> <code>chown</code> the directory, or if the image prohibits it (as some images already have a <code>USER mongo</code> clause that prohibits the container from escalating privileges back up to <code>root</code>), then one of two things: supersede the user with a <code>securityContext</code> stanza in <code>containers:</code> or use an <code>initContainer:</code> to preemptively change the target folder to be the mongo UID:</p> <p>Approach #1:</p> <pre><code>containers: - name: mongo image: mongo:something securityContext: runAsUser: 0 </code></pre> <p><em>(which may require altering your cluster's config to permit such a thing to appear in a <code>PodSpec</code>)</em></p> <p>Approach #2 (which is the one I use with Elasticsearch images):</p> <pre><code>initContainers: - name: chmod-er image: busybox:latest command: - /bin/chown - -R - "1000" # or whatever the mongo UID is, use string "1000" not 1000 due to yaml - /data/db volumeMounts: - name: mongo-data # or whatever mountPath: /data/db containers: - name: mongo # then run your container as before </code></pre>
mdaniel
<p>I have successfully deployed a multi master Kubernetes cluster using the repo <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray</a> and everything works fine. But when I stop/terminate a node in the cluster, new node is not joining to the cluster.I had deployed kubernetes using KOPS, but the nodes were created automatically, when one deletes. Is this the expected behaviour in kubespray? Please help..</p>
manu thankachan
<p>It is expected behavior because kubespray doesn't create any ASGs, which are AWS-specific resources. One will observe that kubespray only deals with <em>existing</em> machines; they do offer some terraform toys in their repo for <em>provisioning</em> machines, but kubespray itself does not get into that business.</p> <p>You have a few options available to you:</p> <h2>Post-provision using <code>scale.yml</code></h2> <ol> <li>Provision the new Node using your favorite mechanism</li> <li>Create an inventory file containing it, and the <code>etcd</code> machines (presumably so kubespray can issue etcd certificates for the new Node</li> <li>Invoke the <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.10.0/scale.yml" rel="nofollow noreferrer"><code>scale.yml</code></a> playbook</li> </ol> <p>You may enjoy <a href="https://github.com/ansible/awx#readme" rel="nofollow noreferrer">AWX</a> in support of that.</p> <h2>Using plain <code>kubeadm join</code></h2> <p><em>This is the mechanism I use for my clusters, FWIW</em></p> <ol> <li><p>Create a kubeadm join token using <code>kubeadm token create --ttl 0</code> (or whatever TTL you feel comfortable using)</p> <p>You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances</p> </li> <li><p>Use the <a href="https://cloudinit.readthedocs.io/en/18.5/" rel="nofollow noreferrer">cloud-init</a> mechanism to ensure that <code>docker</code>, <code>kubeadm</code>, and <code>kubelet</code> binaries are present on the machine</p> <p>You are welcome to use an AMI for doing that, too, if you enjoy building AMIs</p> </li> <li><p>Then invoke <code>kubeadm join</code> as described here: <a href="https://kubernetes.io/docs/setup/independent/high-availability/#install-workers" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/#install-workers</a></p> </li> </ol> <h2>Use a Machine Controller</h2> <p>There are plenty of <a href="https://github.com/kubermatic/machine-controller#readme" rel="nofollow noreferrer">&quot;machine controller&quot; components</a> that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work. That link was just the first one that came to mind, but there are others, too</p> <p>Our friends over at Kubedex have <a href="https://kubedex.com/autoscaling/" rel="nofollow noreferrer">an entire page devoted to this question</a></p>
mdaniel
<p>To build a highly available <code>k8s</code> cluster, you need to build an <code>etcd</code> cluster, which I see in the official <code>k8s</code> documentation.</p> <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology" rel="nofollow noreferrer">Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node. The same applies to the local kube-controller-manager and kube-scheduler instances.</a></p> <p>That is, <code>kube-apiservice</code> only communicates with <code>etcd</code> of its own node, can we understand that reads and writes happen on <code>etcd</code> of the same node,</p> <p>But when I was studying <code>etcd</code>, I was told that the clients in the <code>etcd</code> cluster read data through <code>Follower</code> and write data through <code>Leader</code>.</p> <pre class="lang-bash prettyprint-override"><code>┌──[[email protected]]-[~/ansible/kubescape] └─$ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cert=&quot;/etc/kubernetes/pki/etcd/server.crt&quot; --key=&quot;/etc/kubernetes/pki/etcd/server.key&quot; --cacert=&quot;/etc/kubernetes/pki/etcd/ca.crt&quot; endpoint status --cluster -w table +-----------------------------+------------------+---------+---------+-----------+-----------+------------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX | +-----------------------------+------------------+---------+---------+-----------+-----------+------------+ | https://192.168.26.100:2379 | ee392e5273e89e2 | 3.5.4 | 37 MB | false | 100 | 3152364 | | https://192.168.26.102:2379 | 11486647d7f3a17b | 3.5.4 | 36 MB | false | 100 | 3152364 | | https://192.168.26.101:2379 | e00e3877df8f76f4 | 3.5.4 | 36 MB | true | 100 | 3152364 | +-----------------------------+------------------+---------+---------+-----------+-----------+------------+ ┌──[[email protected]]-[~/ansible/kubescape] └─$ </code></pre> <p>In fact, there is only one <code>Leader</code> in the cluster. Does this read/write separation of <code>etcd</code> clusters apply to <code>k8s</code>?</p> <p>I understand that this is contradictory to what is said above. I would like to know how <code>etcd</code> reads and writes in a <code>k8s</code> cluster.</p> <hr /> <p>I know very little about <code>etcd</code>, thanks for clearing up my confusion!</p>
山河以无恙
<blockquote> <p>In fact, there is only one Leader in the cluster. Does this read/write separation of etcd clusters apply to Kubernetes?</p> </blockquote> <p>Yes. In an etcd cluster, there is only one leader that does the writes. But etcd internally forwards all requests that need <em>consensus</em> (e.g. writes) to the <em>leader</em>, so the client application (Kubernetes in our case) does not need to know which etcd node is the leader.</p> <p>From the <a href="https://etcd.io/docs/v3.5/faq/" rel="nofollow noreferrer">etcd FAQ</a>:</p> <blockquote> <p><strong>Do clients have to send requests to the etcd leader?</strong></p> </blockquote> <blockquote> <p>Raft is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member.</p> </blockquote>
Jonas
<p>I have a repository with a Kubernetes deployment YAML. Pipelines run on each commit that builds and pushes an image into our repository, versioned with the commit (eg. <code>my_project:bm4a83</code>). Then I'm updating the deployment image</p> <p><code>kubectl set image deployment/my_deployment my_deployment=my_project:bm4a83</code>.</p> <p>This works, but I also want to keep the rest of the deployment YAML specification in version control.</p> <p>I thought I could just keep it in the same repository, but that means my changes that may only be infrastructure (eg, changing <code>replicas</code>) triggers new builds, without code changes.</p> <p>What felt like it made the most sense was keeping the deployment YAML in a totally separate repository. I figured I can manage all the values from there, independently from actual code changes. The only problem with that is the <code>image</code> key would need to be kept up to date. The only way around that, is working with some floating <code>:latest</code>-type version, but I don't really think that's ideal.</p> <p>What's a sensible workflow for managing this? Am I missing something entirely?</p>
Richard Taylor
<blockquote> <p>What's a sensible workflow for managing this? Am I missing something entirely?</p> </blockquote> <p>Some of the answer depends on the kind of risk you're trying to drive down with any process you have in place. If it's "the cluster was wiped out by a hurricane and I need to recover my descriptors," then <a href="https://github.com/heptio/ark#readme" rel="nofollow noreferrer">Heptio Ark</a> is a good solution for that. If the risks are more "human-centric," then IMHO you will have to walk a very careful line between locking down all the things and crippling the very agile, empowering, tools that kubernetes provides to a team. A concrete example of that model running up against your model is: what happens when a developer edits a Deployment but does not (remember|know) to update the descriptor in the repo? So do you revoke the edit rights? Use some diff-esque logic to detect a changed in-cluster config?</p> <p>To speak to something you said <em>specifically</em>: it is a highly suboptimal idea to commit a descriptor change just to resize a <code>(Deployment|ReplicationController|StatefulSet)</code>. Separately, a well-built CI pipeline would also understand if no <em>buildable</em> artifact changed and bail out (either early, or not even triggering a build, if the CI tool is that smart).</p> <p>Finally, if you do want to carry on with the current situation, then the best practice I can think of is textual replacement right before applying a descriptor:</p> <pre><code>$ grep "image: " the-deployment.yml image: example.com/something:#CI_PIPELINE_IID# $ sed -i'' -e "s/#CI_PIPELINE_IID#/${CI_PIPELINE_IID}/" the-deployment.yml $ kubectl apply -f the-deployment.yml </code></pre> <p>so that the copy in the repo remains textually pristine, and also isn't inadvertently <em>actually applied</em> since it won't actually result in a runnable Deployment.</p>
mdaniel
<p>I am getting</p> <pre><code>0/7 nodes are available: 2 node(s) had taints that the pod didn't tolerate, 5 node(s) had volume node affinity conflict. </code></pre> <p>for my prometheus server pod, but if I check each node there are no taints, and there is enough cpu and memory to be allocated. What am I missing here?</p> <p>I tried deleting the pods and even the deployment object, but the error still persists.</p> <p>All nodes have 0 taints. This is a fresh prometheus install on a new kubernetes cluster; the yaml files that I have used have worked until now, whenever I needed to deploy a new kubernetes cluster.</p>
markrosario
<blockquote> <p>0/7 nodes are available: 2 node(s) had taints that the pod didn't tolerate, 5 node(s) had volume node affinity conflict. </p> </blockquote> <p>The message is specific: it's not the <em>taints</em> that are keeping your prometheus pods off of your workers, it's the <strong>volume</strong> that is the problem. If you are in AWS, it's because your volume is in an availability zone that your workers are not (so, a <code>us-west-2a</code> volume and <code>us-west-2c</code> workers, for example)</p> <p>The shortest path to success in your situation may be to either recreate the volume in the correct A.Z. if it was empty, or manually create a new volume and copy the data into an A.Z. that matches your workers, or (of course) spin up a new worker in the A.Z. that matches the volume</p> <blockquote> <p>all nodes have 0 taints..</p> </blockquote> <p>Is for sure not true for two reasons: because the scheduler clearly says there are two Nodes with taints, and because unless you specifically stripped them off, the masters are almost always(?) provisioned with <code>node.kubernetes.io/master:NoSchedule</code> taints explicitly to keep workloads off of them</p>
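<p>If you want to confirm the zone mismatch before choosing a fix: the availability zone is recorded as a label on both the PersistentVolume and the Nodes (<code>failure-domain.beta.kubernetes.io/zone</code> on older clusters, <code>topology.kubernetes.io/zone</code> on newer ones), so a quick comparison is just:</p> <pre><code># zone labels on the volumes backing prometheus
kubectl get pv --show-labels

# zone labels on your workers
kubectl get nodes --show-labels | grep -i zone

# or inspect one specific volume in detail
kubectl describe pv &lt;your-prometheus-pv&gt;
</code></pre>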
mdaniel
<p>I know why use StatefulSet for stateful applications. (e.g. DB or something) In most cases, I can see like &quot;You want to deploy stateful app to k8s? Use StatefulSet!&quot; However, I couldn't see like &quot;You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet&quot; ever.</p> <p>Even nobody says &quot;I don't recommend to use StatefulSet for stateless app&quot;, many stateless apps is deployed through Deployment, like it is the standard.</p> <p>The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app. Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps?</p> <p>I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong.</p> <p>I'm really curious about this question.</p> <p>P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs.</p> <p>I googled &quot;When not to use StatefulSet&quot;, &quot;when Deployment is better than StatefulSet&quot;, &quot;Why Deployment is used for stateless apps&quot;, or something more questions.</p> <p>I also see the k8s docs about StatefulSet either.</p>
Eddy Kim
<h2>Different Priorities</h2> <p>What happens when a Node becomes unreachable in a cluster?</p> <h2>Deployment - Stateless apps</h2> <p>You want to maximize availability. As soon as Kubernetes detects that there are fewer than the desired number of replicas running in your cluster, the controllers spawn new replicas. Since these apps are stateless, this is very easy for the Kubernetes controllers to do.</p> <h2>StatefulSet - Stateful apps</h2> <p>You want to maximize availability - but you must also ensure <strong>data consistency</strong> (the state). To ensure <em>data consistency</em>, each replica has its own unique ID, and there is never more than one replica with the same ID. This means that you cannot spin up a new replica unless you are sure that the replica on the unreachable Node is terminated (e.g. has stopped using the Persistent Volume).</p> <h3>Conclusion</h3> <p>Both Deployment and StatefulSet try to maximize availability - but a StatefulSet cannot sacrifice <strong>data consistency</strong> (e.g. your state), so it cannot react as fast as a Deployment of (stateless) apps can.</p> <p>These priorities do not only apply when a Node becomes unreachable; they apply at all times, e.g. also during upgrades and deployments.</p>
Jonas
<p>I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.</p> <p>For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.</p> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> <p>I considered using <code>NodePort</code> services, but this requires the nodes themselves to be publicly accessible (relying on <code>kube-proxy</code> to route to the appropriate node). I'd prefer to avoid this if possible.</p> <p><code>LoadBalancer</code> services seem like another option, though I don't want to create a dedicated cloud load balancer for <em>each</em> TCP service I want to expose.</p> <p>I'm aware that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">NGINX Ingress controller supports exposing TCP and UDP services</a>, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static <code>ConfigMap</code>.</p>
cjheppell
<blockquote> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> </blockquote> <p>Well, that depends on how you expect the ultimate user to <strong>address</strong> those Services? As you pointed out, with an Ingress, it is possible to use virtual hosting to route all requests to the same Ingress controller, and then use the <code>Host:</code> header to dispatch within the cluster.</p> <p>With a TCP service, such as PostgreSQL, there is no such header. So, you would necessarily have to have either an IP based mechanism, or assign each one a dedicated port on your Internet-facing IP</p> <p>If your clients are IPv6 aware, assigning each Service a dedicated IP address is absolutely reasonable, given the absolutely massive IP space that IPv6 offers. But otherwise, you have two knobs to turn: the IP and the port.</p> <p>From there, how you get those connections routed within your cluster to the right Service is going to depend on how you solved the first problem</p>
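<p>To make the "dedicated port/IP per Service" idea concrete, here is a sketch of exposing a hypothetical PostgreSQL instance through its own cloud load balancer (the names and selector are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: customer-db          # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: customer-db         # must match your PostgreSQL pods
  ports:
  - name: postgres
    protocol: TCP
    port: 5432
    targetPort: 5432
</code></pre> <p>The trade-off is exactly the one you called out: every such Service provisions its own load balancer. The alternative on a shared load balancer is to give each TCP Service its own distinct port, which is essentially what the nginx TCP ConfigMap is doing for you under the hood.</p>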
mdaniel
<p>I've got a single-node Kubernetes "cluster" built with <code>kubeadm</code> in AWS. </p> <p>I have deployed a simple Nginx deployment with this config:</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: nginx0-deployment labels: app: nginx0-deployment spec: replicas: 1 selector: matchLabels: app: nginx0 template: metadata: labels: app: nginx0 spec: containers: - name: nginx image: k8s.gcr.io/nginx:latest ports: - containerPort: 80 name: backend-http </code></pre> <p>I have also created an AWS ELB LoadBalancer:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: nginx0-service balancing-enabled: "true" spec: selector: app: nginx0-deployment ports: - protocol: TCP port: 80 targetPort: 80 type: LoadBalancer </code></pre> <p>This created the ELB and opened the relevant ports in the K8s instance security group.</p> <pre><code>{ec2-instance} ~ $ kubectl get all NAME READY STATUS RESTARTS AGE pod/nginx0-deployment-548f99f47c-ns75w 1/1 Running 0 3m45s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 25h service/nginx0-service LoadBalancer 10.106.179.191 acfc4150....elb.amazonaws.com 80:30675/TCP 63s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx0-deployment 1/1 1 1 3m45s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx0-deployment-548f99f47c 1 1 1 3m45s </code></pre> <p>However something is still missing between the ELB and the POD because browsing to <code>http://acfc4150....elb.amazonaws.com/</code> doesn't work - Chrome says ERR_EMPTY_RESPONSE.</p> <p>I guess it's something to do with the <em>ELB port mapping</em> <strong>80:30675/TCP</strong> - I have checked the incoming traffic on the instance and I see packets for port <strong>30675</strong> but nothing goes back. As if the mapping between this port and the POD's port 80 wasn't set up?</p> <p>Any idea what should I add to my manifests to make it work?</p> <p>Thanks!</p>
KeepLearning
<p>You have the wrong labels: your <strong>Deployment</strong> has <code>app: nginx0-deployment</code>, but your <strong>Pods</strong> have <code>app: nginx0</code>, and <code>Service</code>s don't target Deployments; they target Pods.</p> <p>Update your <code>Service</code> to have:</p> <pre><code>spec: selector: app: nginx0 </code></pre> <p>instead.</p>
mdaniel
<p>I have a helm repo:</p> <p><code>helm repo list</code></p> <blockquote> <p>NAME URL</p> <p>stable <a href="https://kubernetes-charts.storage.googleapis.com" rel="noreferrer">https://kubernetes-charts.storage.googleapis.com</a></p> <p>local <a href="http://127.0.0.1:8879/charts" rel="noreferrer">http://127.0.0.1:8879/charts</a></p> </blockquote> <p>and I want to list all the charts available or search the charts under <code>stable</code> helm repo.</p> <p>How do I do this?</p> <p>No command so far to list available charts under a helm repo or just verify that a chart exists.</p>
uberrebu
<p>First, <strong>always</strong> update your local cache:</p> <pre><code>helm repo update </code></pre> <p>Then, you can list all charts by doing:</p> <pre><code>helm search repo </code></pre> <p>Or, you can do a case-insensitive match on any part of the chart name using the following:</p> <pre><code>helm search repo [your_search_string] </code></pre> <p>Lastly, if you want to list all versions, you can use the <code>-l</code>/<code>--versions</code> flag:</p> <pre><code># Lists all versions of all charts helm search repo -l # Lists all versions of all chart names that contain search string helm search repo -l [your_search_string] </code></pre>
rouble
<p>How do I set the <code>client_max_body_size</code> parameter for a single subdomain? I have a server that will accept file uploads up to 5TB. All the examples I've looked at show you how to set it globally. I have multiple rules in my ingress.yaml, I don't want every single rule to inherit the <code>client_max_body_size</code> parameter, only the file upload server should.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-nginx annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: example.com http: paths: - backend: serviceName: homepage servicePort: 80 - host: storage.example.com http: paths: - backend: serviceName: storage servicePort: 80 </code></pre> <p>In the above ingress.yaml, I want to set <code>client_max_body_size</code> for the <code>storage</code> service only, which is located at the host <code>storage.example.com</code>.</p>
Eric Guan
<p>Because I don't see <code>client-max-body-size</code> on the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">list of annotations</a>, that leads me to believe you'll have to use the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">custom-config-snippet</a> to include the <code>client_max_body_size 5tb;</code> for that Ingress:</p> <pre><code>metadata: annotations: nginx.ingress.kubernetes.io/configuration-snippet: | client_max_body_size 5tb; </code></pre> <p>However, given that you said that you only want it for <code>storage.example.com</code>, you'll need to split the Ingress config for <code>storage.example.com</code> out into its own Ingress resource, since (AFAIK) the annotations are applied to every <code>host:</code> record in the Ingress resource.</p>
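<p>For concreteness, the split might look roughly like this -- the snippet lives only on the storage Ingress, and the <code>example.com</code> rule stays in your existing resource untouched:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-storage
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      client_max_body_size 5tb;
spec:
  rules:
  - host: storage.example.com
    http:
      paths:
      - backend:
          serviceName: storage
          servicePort: 80
</code></pre>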
mdaniel
<p>I am running kube on a cluster of nodes. When i set the context on the cluster I get an error that says "error: You must be logged in to the server (Unauthorized)" when i try to run kubectl get pods </p> <p>If I unset the current-context the error goes away and i can view my pods and nodes. So my guess is it has something to do with my kubeconfig but I'm not sure where i got it wrong.</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: /home/user/ssl/ca.pem server: https://&lt;ip for my master node&gt; name: default-cluster contexts: - context: cluster: default-cluster user: user name: kubeflow current-context: kubeflow kind: Config preferences: {} users: - name: user user: client-certificate: /home/user/ssl/client-ca.pem client-key: /home/user/ssl/client-ca-key.pem </code></pre> <p>EDIT: </p> <p>Kube version 1.14</p> <pre><code>user@kube01:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <ul> <li>Running on a cluster with 3 master nodes and 4 workers. Not on GCP or any cloud platform </li> </ul>
Chirrut Imwe
<blockquote> <p>If I unset the current-context the error goes away and i can view my pods and nodes. So my guess is it has something to do with my kubeconfig but I'm not sure where i got it wrong.</p> </blockquote> <p>That very likely means you are running <code>kubectl</code> on the master nodes themselves, and the master nodes are listening on <code>:8080</code> with the unauthenticated port (since <code>kubectl</code> uses <code>http://127.0.0.1:8080</code> by default if there is no kubeconfig)</p> <p>So yes, it's very likely due to your cert being signed by a CA that the apiserver does not trust.</p> <p>You can check that via:</p> <pre><code>openssl x509 -in /home/user/ssl/client-ca.pem -noout -text </code></pre> <p>and then look at the CA and compare the <code>issuer</code> from your <code>client-ca</code> against the <code>subject</code> of your CA:</p> <pre><code>openssl x509 -in /home/user/ssl/ca.pem -noout -text </code></pre> <p>I am sure there are some fingerprints and serial numbers and that kind of stuff that need to match up, but I don't have that <code>openssl</code> command line handy</p>
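<p>If you just want a quick yes/no on whether the client cert chains up to that CA (assuming both files are PEM-encoded), <code>openssl verify</code> will tell you directly:</p> <pre><code>openssl verify -CAfile /home/user/ssl/ca.pem /home/user/ssl/client-ca.pem
# an "OK" here means the cert should be accepted, provided that same ca.pem
# is what the apiserver was started with via --client-ca-file
</code></pre>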
mdaniel
<p>I am having one AWS EKS Cluster up &amp; running. I need to add node group with taint. So that I can deploy the pod on particular node group in EKS. I can do it in azure AKS using the following command.</p> <pre class="lang-bash prettyprint-override"><code>az aks nodepool add --resource-group rg-xx --cluster-name aks-xxx --name np1 --node-count 1 --node-vm-size xxx --node-taints key=value:NoSchedule --no-wait </code></pre> <p>How to achieve same in AWS EKS?</p>
deepak
<p>You can use this example: <a href="https://eksctl.io/usage/autoscaling/#scaling-up-from-0" rel="nofollow noreferrer">https://eksctl.io/usage/autoscaling/#scaling-up-from-0</a></p> <pre><code>nodeGroups: - name: ng1-public ... labels: my-cool-label: pizza taints: feaster: &quot;true:NoSchedule&quot; </code></pre>
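<p>That snippet is a fragment of an eksctl <code>ClusterConfig</code>; a fuller sketch you could feed to <code>eksctl create nodegroup</code> (the cluster name, region, instance type, and the taint itself are hypothetical placeholders):</p> <pre><code># tainted-ng.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-existing-cluster
  region: eu-west-1
nodeGroups:
  - name: ng-tainted
    instanceType: m5.large
    desiredCapacity: 1
    labels:
      workload: special
    taints:
      dedicated: "special:NoSchedule"
</code></pre> <pre><code>eksctl create nodegroup --config-file=tainted-ng.yaml
</code></pre> <p>Pods that should land there then need a matching toleration plus, usually, a nodeSelector on the <code>workload: special</code> label.</p>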
pb100
<p>I'm upgrading my elasticsearch on kubernetes cluster from 5.6.10 to elasticsearch 6.1.4. However, I can't even get es 6.1.4 to launch. </p> <p>I keep getting the error <code>unknown setting [xpack.license.self_generated.type]</code>. </p> <p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/6.x/license-settings.html" rel="nofollow noreferrer">Per the docs</a>, I tried setting the value to basic, <code>xpack.license.self_generated.type=basic</code> and I've also omitted the value all together.</p> <p>I've seen a few others run into this error but none of their fixes have worked for me.</p> <p>Help is much appreciated!</p> <p>My stateful set yaml</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: elastic-data labels: app: elastic-data area: devs role: nosql version: "6.1.4" environment: elastic spec: serviceName: elastic-data replicas: 1 updateStrategy: type: RollingUpdate template: metadata: labels: app: elastic-data area: devs role: nosql version: "6.1.4" environment: elastic annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "sysctl", "image": "busybox", "imagePullPolicy": "IfNotPresent", "command": ["sysctl", "-w", "vm.max_map_count=262144"], "securityContext": { "privileged": true } } ]' spec: terminationGracePeriodSeconds: 10 securityContext: runAsUser: 1000 fsGroup: 1000 containers: - name: elastic-data image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4 resources: requests: memory: "512Mi" limits: memory: "1024Mi" env: - name: ES_JAVA_OPTS value: -Xms512m -Xmx512m command: ["/bin/bash", "-c", "~/bin/elasticsearch-plugin remove x-pack; elasticsearch"] args: - -Ecluster.name=elastic-devs - -Enode.name=${HOSTNAME} - -Ediscovery.zen.ping.unicast.hosts=elastic-master.default.svc.cluster.local - -Enode.master=false - -Enode.data=true - -Enode.ingest=false - -Enetwork.host=0.0.0.0 - -Expack.license.self_generated.type=basic ports: - containerPort: 9300 name: transport - containerPort: 9200 name: http volumeMounts: - name: data-volume mountPath: /usr/share/elasticsearch/data readinessProbe: tcpSocket: port: 9300 initialDelaySeconds: 30 periodSeconds: 30 timeoutSeconds: 3 livenessProbe: tcpSocket: port: 9300 initialDelaySeconds: 30 periodSeconds: 30 timeoutSeconds: 3 volumeClaimTemplates: - metadata: name: data-volume spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 2Gi </code></pre>
Mike
<p>As the error message is trying to communicate, you forgot to remove the configuration property from <code>config/elasticsearch.yml</code>. So, the full revised <code>command:</code> would be</p> <pre><code>~/bin/elasticsearch-plugin remove x-pack sed -i.bak -e /xpack.license.self_generated.type/d config/elasticsearch.yml elasticsearch </code></pre> <p>Don't get me wrong, it's <strong>very silly</strong> of them to bomb over a config property for something that doesn't exist, but that's apparently the way it is.</p> <hr> <p>p.s. you may be happier with the <code>--purge</code> option, since when I tried that command locally <code>elasticsearch-plugin</code> cheerfully advised:</p> <blockquote> <p>-> preserving plugin config files [/usr/share/elasticsearch/config/x-pack] in case of upgrade; use --purge if not needed</p> </blockquote> <p>thus: <code>./bin/elasticsearch-plugin remove x-pack --purge</code></p>
mdaniel
<p>I'm struggling to understand the difference between Deployments and Pods in Kubernetes.</p> <blockquote> <p>A Deployment provides declarative updates for Pods and ReplicaSets.</p> </blockquote> <blockquote> <p>Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.</p> </blockquote> <p>It seems like <code>kind:Pod</code> can be used interchangeably <code>kind: Deployment</code> and deployments allow for <strong>Replica</strong> (which is pretty much the point of Kubernetes). Why would you ever use a Pod?</p> <p>Can someone:</p> <ul> <li>Explain the <em>essential difference</em> between Pods/Deployments +</li> <li>Describe a use case where pods are preferable over deployments?</li> </ul>
Casper Dijkstra
<p>In short:</p> <p>With Pods:</p> <ol> <li>if it dies, it dies. Period.</li> <li>you can define only one copy of a particular pod. If you need X copies, you have to define X pods in your YAML file(s)</li> <li>you will usually never see pods created directly in production environments. Too unreliable. Why? Because of 1.</li> </ol> <p>With a Deployment:</p> <ol> <li>you define the desired state of a pod. If the pod dies (for whatever reason), the Deployment creates a new pod.</li> <li>and more generally: you can define that you want X running replicas of the same pod. If one or more of them die, the Deployment creates new ones to match X</li> </ol>
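<p>To make point 2 of the Deployment list concrete, a minimal sketch that keeps three copies of the same pod template alive (the image and names are arbitrary examples):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                # desired state: three copies of the pod below
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.17    # any image, purely illustrative
        ports:
        - containerPort: 80
</code></pre> <p>If one of the three pods dies, the Deployment (via its ReplicaSet) notices the gap and schedules a replacement; with bare pods you would have to do that yourself.</p>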
pb100
<p>I am running a kubernetes cluter with 6 nodes (cluser-master &amp; kubernetes slave0-4) on "Bionic Beaver"ubuntu and I'm using Weave To install kubernetes I followed <a href="https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/" rel="nofollow noreferrer">https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/</a> and installed weave after installing whatever was reccommended here after clean removing it(it doesn't show up in my pods anymore)</p> <p><code>kubectl get pods --all-namespaces</code> returns:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-fb8b8dccf-g8psp 0/1 ContainerCreating 0 77m kube-system coredns-fb8b8dccf-pbfr7 0/1 ContainerCreating 0 77m kube-system etcd-cluster-master 1/1 Running 5 77m kube-system kube-apiserver-cluster-master 1/1 Running 5 77m kube-system kube-controller-manager-cluster-master 1/1 Running 5 77m kube-system kube-proxy-72s98 1/1 Running 5 73m kube-system kube-proxy-cqmdm 1/1 Running 3 63m kube-system kube-proxy-hgnpj 1/1 Running 0 69m kube-system kube-proxy-nhjdc 1/1 Running 5 72m kube-system kube-proxy-sqvdd 1/1 Running 5 77m kube-system kube-proxy-vmg9k 1/1 Running 0 70m kube-system kube-scheduler-cluster-master 1/1 Running 5 76m kube-system kubernetes-dashboard-5f7b999d65-p7clv 0/1 ContainerCreating 0 61m kube-system weave-net-2brvt 2/2 Running 0 69m kube-system weave-net-5wlks 2/2 Running 16 72m kube-system weave-net-65qmd 2/2 Running 16 73m kube-system weave-net-9x8cz 2/2 Running 9 63m kube-system weave-net-r2nhz 2/2 Running 15 75m kube-system weave-net-stq8x 2/2 Running 0 70m </code></pre> <p>and if I go with <code>kubectl describe $(kube dashboard pod name) --namespace=kube-system</code> it returns:</p> <pre><code>NAME READY STATUS RESTARTS AGE kubernetes-dashboard-5f7b999d65-p7clv 0/1 ContainerCreating 0 64m rock64@cluster-master:~$ rock64@cluster-master:~$ kubectl describe pods kubernetes-dashboard-5f7b999d65-p7clv --namespace=kube-system Name: kubernetes-dashboard-5f7b999d65-p7clv Namespace: kube-system Priority: 0 PriorityClassName: &lt;none&gt; Node: kubernetes-slave1/10.0.0.215 Start Time: Sun, 14 Apr 2019 10:20:42 +0000 Labels: k8s-app=kubernetes-dashboard pod-template-hash=5f7b999d65 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/kubernetes-dashboard-5f7b999d65 Containers: kubernetes-dashboard: Container ID: Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 Image ID: Port: 8443/TCP Host Port: 0/TCP Args: --auto-generate-certificates State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /certs from kubernetes-dashboard-certs (rw) /tmp from tmp-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-znrkr (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kubernetes-dashboard-certs: Type: Secret (a volume populated by a Secret) SecretName: kubernetes-dashboard-certs Optional: false tmp-volume: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; kubernetes-dashboard-token-znrkr: Type: Secret (a volume populated by a Secret) SecretName: kubernetes-dashboard-token-znrkr Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type 
Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 64m default-scheduler Successfully assigned kube-system/kubernetes-dashboard-5f7b999d65-p7clv to kubernetes-slave1 Warning FailedCreatePodSandBox 64m kubelet, kubernetes-slave1 Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "loopback" in path [/opt/cni/bin], failed to clean up sandbox container "4e6d9873f49a02e86cef79e338ce97162291897b2aaad1ddb99c5e066ed42178" network for pod "kubernetes-dashboard-5f7b999d65-p7clv": NetworkPlugin cni failed to teardown pod "kubernetes-dashboard-5f7b999d65-p7clv_kube-system" network: failed to find plugin "portmap" in path [/opt/cni/bin]] Normal SandboxChanged 59m (x25 over 64m) kubelet, kubernetes-slave1 Pod sandbox changed, it will be killed and re-created. Normal SandboxChanged 49m (x18 over 53m) kubelet, kubernetes-slave1 Pod sandbox changed, it will be killed and re-created. Normal SandboxChanged 46m (x13 over 48m) kubelet, kubernetes-slave1 Pod sandbox changed, it will be killed and re-created. Normal SandboxChanged 24m (x94 over 44m) kubelet, kubernetes-slave1 Pod sandbox changed, it will be killed and re-created. Normal SandboxChanged 4m12s (x26 over 9m52s) kubelet, kubernetes-slave1 Pod sandbox changed, it will be killed and re-created.``` </code></pre>
MatrixKiller420
<blockquote> <p><code>failed to find plugin "loopback" in path [/opt/cni/bin]</code></p> </blockquote> <p>As the helpful message is trying to explain to you, it appears you have a botched CNI installation. Anytime you see <code>FailedCreatePodSandBox</code> or <code>SandboxChanged</code>, it's always(?) related to a CNI failure.</p> <p>The very short version is to grab <a href="https://github.com/containernetworking/plugins/releases/tag/v0.7.5" rel="nofollow noreferrer">the latest CNI plugins</a> package, unpack it into <code>/opt/cni/bin</code>, ensure they are executable, and restart ... uh, likely the machine, but certainly the offending Pod and most likely <code>kubelet</code>, too.</p> <p>p.s. You will have a much nicer time here on SO by <a href="https://stackoverflow.com/search?q=pod+sandboxchanged">conducting a little searching</a>, as this is a <strong>very common</strong> question.</p>
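<p>A sketch of that "grab and unpack" step, assuming arm64 boards like the rock64s in your output -- the release version and asset filename below are assumptions, so adjust them to whatever matches your nodes:</p> <pre><code># run on each affected node
CNI_VERSION=v0.7.5
curl -fsSL -o /tmp/cni-plugins.tgz \
  "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-arm64-${CNI_VERSION}.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin
ls /opt/cni/bin            # should now list loopback, portmap, bridge, ...
sudo systemctl restart kubelet
</code></pre>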
mdaniel
<p>I am running a MongoDB pod in a Kubernetes cluster using an NFS volume. I am writing pod.yml like this:</p> <p><a href="https://i.stack.imgur.com/qPtlS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qPtlS.png" alt="enter image description here"></a></p> <p>but I am getting the below error:<a href="https://i.stack.imgur.com/2iAVA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2iAVA.png" alt="enter image description here"></a></p> <p>Could anybody suggest how to resolve the above issue?</p>
BSG
<blockquote> <p>I am running a MongoDB pod in a Kubernetes cluster using an NFS volume. I am writing pod.yml like this</p> </blockquote> <p>Putting <code>mongo</code> in the <code>command:</code> is erroneous. The <strong>daemon</strong> is <code>mongod</code> and the image <a href="https://github.com/docker-library/mongo/blob/27496625/4.0/Dockerfile#L91" rel="nofollow noreferrer">would have started it automatically</a> had you not provided any <code>command:</code> at all.</p> <p>Further, all <code>command:</code> lines are <code>exec</code>-ed, so putting <code>bash -c</code> in front of a command is just moving that command out of pid 1 and unnecessarily adding memory consumption to your container.</p>
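<p>In other words, a container stanza along these lines should be all you need (the volume name below is a placeholder, since your yaml is only posted as an image); the image's own entrypoint will start <code>mongod</code> for you:</p> <pre><code>containers:
- name: mongo
  image: mongo:4.0           # whichever tag you are actually using
  ports:
  - containerPort: 27017
  volumeMounts:
  - name: mongo-data         # hypothetical volume backed by your NFS claim
    mountPath: /data/db
  # note: no command: at all
</code></pre>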
mdaniel
<p>I created my kubernetes cluster using KubeSpray on AWS. Now I am trying to get the Ingress Controller to work. My understanding is that I need to apply <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml</a> which will create all the resources that I need including a network load balancer.</p> <p>However, the LoadBalancer never comes out of pending status:</p> <pre><code>$ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.233.28.147 &lt;pending&gt; 80:31304/TCP,443:31989/TCP 11m ingress-nginx-controller-admission ClusterIP 10.233.58.231 &lt;none&gt; 443/TCP 11m </code></pre> <p>Describing the service does not seem to provide any interesting information.</p> <pre><code>$ kubectl -n ingress-nginx describe service ingress-nginx-controller Name: ingress-nginx-controller Namespace: ingress-nginx Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/version=0.34.1 helm.sh/chart=ingress-nginx-2.11.1 Annotations: kubectl.kubernetes.io/last-applied-configuration: {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;service.beta.kubernetes.io/aws-load-balancer-backend-protocol&quot;:&quot;tcp&quot;,&quot;serv... service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true service.beta.kubernetes.io/aws-load-balancer-type: nlb Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: LoadBalancer IP: 10.233.28.147 Port: http 80/TCP TargetPort: http/TCP NodePort: http 31304/TCP Endpoints: 10.233.97.22:80 Port: https 443/TCP TargetPort: https/TCP NodePort: https 31989/TCP Endpoints: 10.233.97.22:443 Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 30660 Events: &lt;none&gt; </code></pre> <p>How can this issue be debugged?</p> <p>UPDATE:</p> <p>The output of <code>kubectl -n kube-system logs -l component=kube-controller-manager</code> is:</p> <pre><code>E0801 21:12:29.429759 1 job_controller.go:793] pods &quot;ingress-nginx-admission-create-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount &quot;ingress-nginx-admission&quot; not found E0801 21:12:29.429788 1 job_controller.go:398] Error syncing job: pods &quot;ingress-nginx-admission-create-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount &quot;ingress-nginx-admission&quot; not found I0801 21:12:29.429851 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-create&quot;, UID:&quot;4faad8c5-9b1e-4c23-a942-94be181d590f&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506255&quot;, FieldPath:&quot;&quot;}): type: 'Warning' reason: 'FailedCreate' Error creating: pods &quot;ingress-nginx-admission-create-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: 
serviceaccount &quot;ingress-nginx-admission&quot; not found E0801 21:12:29.483485 1 job_controller.go:793] pods &quot;ingress-nginx-admission-patch-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount &quot;ingress-nginx-admission&quot; not found E0801 21:12:29.483512 1 job_controller.go:398] Error syncing job: pods &quot;ingress-nginx-admission-patch-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount &quot;ingress-nginx-admission&quot; not found I0801 21:12:29.483679 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-patch&quot;, UID:&quot;92ee0e43-2711-4b37-9fd6-958ef3c95b31&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506257&quot;, FieldPath:&quot;&quot;}): type: 'Warning' reason: 'FailedCreate' Error creating: pods &quot;ingress-nginx-admission-patch-&quot; is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount &quot;ingress-nginx-admission&quot; not found I0801 21:12:39.436590 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-create&quot;, UID:&quot;4faad8c5-9b1e-4c23-a942-94be181d590f&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506255&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-85x58 I0801 21:12:39.489303 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-patch&quot;, UID:&quot;92ee0e43-2711-4b37-9fd6-958ef3c95b31&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506257&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-sn8xv I0801 21:12:41.448425 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-create&quot;, UID:&quot;4faad8c5-9b1e-4c23-a942-94be181d590f&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506297&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Completed' Job completed I0801 21:12:42.481264 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Job&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-admission-patch&quot;, UID:&quot;92ee0e43-2711-4b37-9fd6-958ef3c95b31&quot;, APIVersion:&quot;batch/v1&quot;, ResourceVersion:&quot;1506304&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Completed' Job completed </code></pre> <p>I do have the PodSecurityPolicy admission controller enabled. 
I updated the <code>deploy.yaml</code> file with the following changes.</p> <ul> <li>Add the following to all ClusterRole and Role resources.</li> </ul> <pre><code>- apiGroups: [policy] resources: [podsecuritypolicies] resourceNames: [privileged] verbs: [use] </code></pre> <ul> <li>Add the following to the end of the file.</li> </ul> <pre><code>--- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: helm.sh/chart: ingress-nginx-2.11.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.34.1 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: default </code></pre> <p><strong>Question Responses:</strong></p> <ul> <li><p>The IAM roles were created by the ansible playbooks in the Kubespray <code>contrib/terraform/aws</code> directory.</p> </li> <li><p>A classic load balancer was created for the apiserver by those ansible scripts.</p> </li> </ul>
David Medinets
<p>I have two answers to this question.</p> <p><strong>one</strong> - add the <code>cloud-provider</code> option to your <code>ansible-playbook</code> command as shown below.</p> <pre><code>ansible-playbook \ -vvvvv \ -i ./inventory/hosts \ ./cluster.yml \ -e ansible_user=centos \ -e cloud_provider=aws \ -e bootstrap_os=centos \ --become \ --become-user=root \ --flush-cache \ -e ansible_ssh_private_key_file=$PKI_PRIVATE_PEM \ | tee kubespray-cluster-$(date &quot;+%Y-%m-%d_%H:%M&quot;).log </code></pre> <p><strong>two</strong></p> <p>Uncomment the cloud_provider option in group_vars/all.yml and set it to 'aws'</p> <p><strong>proof</strong></p> <p>I've tried the first answer.</p> <pre><code>$ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.233.57.196 aa....amazonaws.com 80:32111/TCP,443:31854/TCP 109s ingress-nginx-controller-admission ClusterIP 10.233.11.133 &lt;none&gt; 443/TCP 109s </code></pre>
David Medinets
<p>Is it possible to use Kubespray with Bastion but on custom port and with agent forwarding? If it is not supported, what changes does one need to do?</p>
kboom
<p>Always, since you can configure that at three separate levels: via the host user's <code>~/.ssh/config</code>, via the entire playbook with <code>group_vars</code>, or as inline config (that is, on the command line or in the inventory file).</p> <p>The ssh config is hopefully straightforward:</p> <pre><code>Host 1.2.* *.example.com # or whatever pattern matches the target instances ProxyJump someuser@some-bastion:1234 # and then the Agent should happen automatically, unless you mean # ForwardAgent yes </code></pre> <p>I'll speak to the inline config next, since it's a little simpler:</p> <pre><code>ansible-playbook -i whatever \ -e '{"ansible_ssh_common_args": "-o ProxyJump=\"someuser@jump-host:1234\""}' \ cluster.yaml </code></pre> <p>or via the inventory in the same way:</p> <pre><code>master-host-0 ansible_host=1.2.3.4 ansible_ssh_common_args="-o ProxyJump='someuser@jump-host:1234'" </code></pre> <p>or via <code>group_vars</code>, which you can either add to an existing <code>group_vars/all.yml</code>, or if it doesn't exist then create that <code>group_vars</code> directory containing the <code>all.yml</code> file as a child of the directory containing your inventory file</p> <p>If you have more complex ssh config than you wish to encode in the inventory/command-line/group_vars, you can also instruct the ansible-invoked ssh to use a dedicated config file via the <a href="https://docs.ansible.com/ansible/2.6/user_guide/intro_inventory.html?highlight=ansible_ssh_extra_args#list-of-behavioral-inventory-parameters" rel="nofollow noreferrer"><code>ansible_ssh_extra_args</code></a> variable:</p> <pre><code>ansible-playbook -e '{"ansible_ssh_extra_args": "-F /path/to/special/ssh_config"}' ... </code></pre>
mdaniel
<p>I am trying to start minikube behind a corporate proxy on Windows machine. I am using the following start command</p> <pre><code>minikube start --alsologtostderr --vm-driver="hyperv" --docker-env http_proxy=http://proxyabc.uk.sample.com:3128 --docker-env https_proxy=http://proxyabc.uk.sample.com:3128 --docker-env "NO_PROXY=localhost,127.0.0.1,192.168.211.157:8443" </code></pre> <p><strong>minikube version = 0.28.0</strong></p> <p><strong>kubectl version = 1.9.2</strong></p> <p>I've also tried setting the no proxy variable before the command</p> <p>set NO_PROXY="$NO_PROXY,192.168.211.158/8443"</p> <p>But everytime I run the "minikube start" command I end up with the following message</p> <p><strong><em>Error starting cluster: timed out waiting to unmark master: getting node minikube: Get <a href="https://192.168.211.155:8443/api/v1/nodes/minikube" rel="nofollow noreferrer">https://192.168.211.155:8443/api/v1/nodes/minikube</a>: Forbidden</em></strong></p> <p>I have already tried solutions at </p> <p><a href="https://github.com/kubernetes/minikube/issues/2706" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2706</a> <a href="https://github.com/kubernetes/minikube/issues/2363" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2363</a></p>
Asif
<blockquote> <p><code>set NO_PROXY="$NO_PROXY,192.168.211.158/8443"</code></p> </blockquote> <p>That slash is not the port, it's the <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDR</a> which defines how many IPs should be excluded from the proxy. Separately, it appears you somehow included the colon in the one provided to <code>--docker-env</code>, which I think is also wrong.</p> <p>And, the <code>$NO_PROXY,</code> syntax in your <code>set</code> command is also incorrect, since that's the unix-y way of referencing environment variables -- you would want <code>set NO_PROXY="%NO_PROXY%,...</code> just be careful since unless you <em>already have</em> a variable named <code>NO_PROXY</code>, that <code>set</code> will expand to read <code>set NO_PROXY=",192.168.etcetc"</code> which I'm not sure is legal syntax for that variable.</p>
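<p>Putting those corrections together, a sketch of what the cmd.exe invocation might look like -- the <code>/24</code> CIDR is a guess at your subnet, so widen or narrow it to whatever actually covers the minikube VM's IP:</p> <pre><code>rem no $, no quotes, no port numbers in NO_PROXY entries
set NO_PROXY=localhost,127.0.0.1,192.168.211.0/24

minikube start --vm-driver="hyperv" ^
  --docker-env http_proxy=http://proxyabc.uk.sample.com:3128 ^
  --docker-env https_proxy=http://proxyabc.uk.sample.com:3128 ^
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.211.0/24
</code></pre>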
mdaniel
<p><strong>Some quick background:</strong> creating an app in golang, running on minikube on MacOS 10.14.2</p> <pre><code>karlewr [0] $ kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><strong>The Issue:</strong> I cannot access my pod via it's pod IP from inside my cluster. This problem is only happening with this one pod which leads me to believe I have a misconfiguration somewhere.</p> <p>My pods spec is as follows:</p> <pre><code>containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" ports: - name: http containerPort: 8080 protocol: TCP livenessProbe: httpGet: path: /ping port: 8080 initialDelaySeconds: 60 readinessProbe: httpGet: path: /ping port: 8080 initialDelaySeconds: 60 </code></pre> <p>What's weird is that I can access it by port-forwarding that pod on port <code>8080</code> and running <code>curl localhost:8080/ping</code> before the liveness and readiness probes run and after the pod has been initialized. This returns 200 OK.</p> <p>Also during this time before <code>CrashLoopBackoff</code>, if I ssh into my minikube node and run <code>curl http://172.17.0.21:8080/ping</code> I get <code>curl: (7) Failed to connect to 172.17.0.21 port 8080: Connection refused</code>. The IP used is my pod's IP.</p> <p>But then when I describe the pod after the <code>initialDelaySeconds</code> period, I see this:</p> <pre><code> Warning Unhealthy 44s (x3 over 1m) kubelet, minikube Readiness probe failed: Get http://172.17.0.21:8080/ping: dial tcp 172.17.0.21:8080: connect: connection refused Warning Unhealthy 44s (x3 over 1m) kubelet, minikube Liveness probe failed: Get http://172.17.0.21:8080/ping: dial tcp 172.17.0.21:8080: connect: connection refused </code></pre> <p>Why would my connection be getting refused only from the pod's IP?</p> <p><strong>Edit</strong> I am not running any custom networking things, just minikube out of the box</p>
rkarlewicz
<blockquote> <p>Why would my connection be getting refused only from the pod's IP?</p> </blockquote> <p>Because your program is apparently only listening on localhost (aka <code>127.0.0.1</code> aka <code>lo0</code>)</p> <p>Without knowing more about your container we can't advise you further, but that's almost certainly the problem based on your description.</p>
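<p>Since you mention the app is written in Go, the difference is typically a single line; a sketch (the handler and port are purely illustrative):</p> <pre><code>package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "pong")
	})

	// Binds only to loopback: reachable via kubectl port-forward,
	// but NOT via the pod IP, which matches your symptoms.
	// log.Fatal(http.ListenAndServe("localhost:8080", nil))

	// Binds to all interfaces, so kubelet probes and other pods
	// can reach it on the pod IP:
	log.Fatal(http.ListenAndServe(":8080", nil))
}
</code></pre>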
mdaniel
<p>I'm trying to update an image in Kubernetes by using the following command:</p> <pre><code>kubectl set image deployment/ms-userservice ms-userservice=$DOCKER_REGISTRY_NAME/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT --insecure-skip-tls-verify </code></pre> <p>But when I receive the following error:</p> <pre><code>error: the server doesn't have a resource type "deployment" </code></pre> <p>I have checked that i am in the right namespace, and that the pod with the name exists and is running.</p> <p>I can't find any meaningful resources in regards to this error.</p> <p>Sidenote: I'm doing this through Bitbucket and a pipeline, which also builds the image i want to use. </p> <p>Any suggestions?</p>
TietjeDK
<blockquote> <p>I have a suspicion that it has something to do with user - not much help from the error message.</p> </blockquote> <p>@TietjeDK is correct that it is just a misleading error message. It means one of two things is happening (or maybe both): the <code>kubectl</code> binary is newer than the supported version range of the cluster (so: using a v1.11 binary against a v1.8 cluster, for example) or the provided JWT is incorrectly signed.</p> <p>You should be very very careful with <code>--insecure-skip-tls-verify</code> not only because it's bad security hygiene but also because if a kubeconfig is incorrect -- as is very likely the case here -- then seeing the x509 error is a much clearer indication than trying to troubleshoot an invalid JWT.</p> <p>The indicator that makes me believe it is actually the signature of the token, and not its contents, is that if it were the contents you would have seen an RBAC message <code>User "[email protected]" cannot list deployments in $namespace namespace</code>, meaning the apiserver did unpack the JWT and found its assertions were insufficient for the operation. But if you sign a JWT using a random key, the JWT will not unpack since it will fail public key validation and be rejected outright.</p> <p>So, the tl;dr is two-fold:</p> <ol> <li>fix the kubeconfig to actually contain the correct certificate authority (CA) for the cluster, so <code>--insecure-skip-tls-verify</code> isn't required</li> <li>while fixing kubeconfig, issue a new token for the (<code>User</code> | <code>ServiceAccount</code>) that comes from the cluster it is designed to interact with</li> </ol>
mdaniel
<p>I need to populate an env file located into pod filesystem filled by an init container previously.</p> <p>I look up <code>envFrom</code> <a href="https://kubernetes.io/docs/reference/federation/extensions/v1beta1/definitions/#_v1_envfromsource" rel="nofollow noreferrer">documentation</a> but I've not been able to figure out how to use it and I've been able to find some relevant examples in internet.</p> <p>Guess an init container creates a file on <code>/etc/secrets/secrets.env</code>, so the pod container spec has to look up on <code>/etc/secrets/secrets.env</code> in order to populate env variables.</p>
Jordi
<p>You will not be able to reference any filesystem component to populate an environment variable using the PodSpec, because it creates a chicken-and-egg problem: kubernetes cannot create the filesystem without a complete PodSpec, but it cannot resolve variables in the PodSpec without access to the Pod's filesystem</p> <p>If <code>/etc/secrets</code> is a <code>volume</code> that the <code>initContainer</code> and the normal <code>container</code> share, then you can supersede the <code>command:</code> of your <code>container</code> to source that into its environment before running the actual command, but that is as close as you're going to get:</p> <pre><code>containers: - name: my-app command: - bash - -ec - | . /etc/secrets/secrets.env ./bin/run-your-server-command-here </code></pre>
mdaniel
<p>Im having trouble invoking functions using kubeless. Here is the function spec</p> <pre><code>--- apiVersion: kubeless.io/v1beta1 kind: Function metadata: name: smk namespace: smktest spec: handler: hello.handler runtime: python2.7 function: | import json def handler(): return "hello world" deployment: spec: template: spec: containers: - env: - name: FOO value: bar name: "smk-deployment" resources: limits: cpu: 100m memory: 100Mi requests: cpu: 100m memory: 100Mi </code></pre> <p>When I try to call the function as below,</p> <pre><code>kubeless function call smk </code></pre> <p>I get </p> <p><code>FATA[0000] Unable to find the service for smk</code></p> <p>Two part question</p> <ol> <li>How do I expose my function as a service</li> <li>How do I specify Environment variables needed by this function ? Thank you</li> </ol> <p><strong><em>Update</em></strong> Running kubeless function ls --namespace=smktest yields below</p> <pre><code>NAME NAMESPACE HANDLER RUNTIME DEPENDENCIES STATUS smk smktest hello.handler python2.7 MISSING: Check controller logs </code></pre> <p>Next I tried <code>kubectl logs -n kubeless -l kubeless=controller</code> there's tons of error logs but I don't see anything specific to this function</p>
smk
<blockquote> <p>When I try to call the function as below,</p> <p><code>kubeless function call smk</code></p> <p>I get</p> <p><code>FATA[0000] Unable to find the service for smk</code></p> <p>Running <code>kubeless function ls --namespace=smktest</code></p> </blockquote> <p>Then surely you would need to include the <code>--namespace=smktest</code> in your invocation command, too:</p> <pre><code>kubeless function call --namespace=smktest smk </code></pre> <hr> <blockquote> <p>How do I specify Environment variables needed by this function ? Thank you</p> </blockquote> <p>As best I can tell, there seems to be two approaches in use:</p> <ul> <li><a href="https://github.com/kubeless/kubeless/blob/v0.6.0/pkg/apis/kubeless/v1beta1/function.go#L45" rel="nofollow noreferrer">Provide a <code>Deployment</code> template</a>, which the <a href="https://github.com/kubeless/kubeless/blob/v0.6.0/pkg/controller/function_controller.go#L311" rel="nofollow noreferrer"><code>function controller</code> appears to merge</a> but as far as I know <code>container: image:</code> is required in a <code>Deployment</code>, so you'd have to specify one in order to get access to its <code>env:</code> declaration</li> <li>Otherwise "cheat" and use the <code>Pod</code>s <code>ServiceAccount</code> token to <a href="https://github.com/kubeless/functions/blob/d80825f1d21e2686a1f817a45d333729a360da8a/incubator/slack/bot.py#L11" rel="nofollow noreferrer">request cluster resources manually</a> which might include a <code>ConfigMap</code>, <code>Secret</code>, or even resolving your own <code>kubeless.io/function</code> manifest and pulling something out of its annotations or similar</li> </ul>
mdaniel
<p>Is there any configuration snapshot mechanism on kubernetes?</p> <p>The goal is to take a snapshot of all deployments/services/config-maps etc and apply them to a kubernetes cluster.</p> <p>The steps that should be taken.</p> <ul> <li>Take a configuration snapshot</li> <li>Delete the cluster</li> <li>Create a new cluster</li> <li>Apply the configuration snapshot to the new cluster</li> <li>New cluster works like the old one</li> </ul>
gkatzioura
<p>These are the 3 that spring to mind, with <code>kubed</code> being, at least according to their readme, the closest to your stated goals:</p> <ul> <li><a href="https://github.com/heptio/ark#readme" rel="nofollow noreferrer">Ark</a></li> <li><a href="https://github.com/appscode/kubed#readme" rel="nofollow noreferrer">kubed</a></li> <li><a href="https://github.com/pieterlange/kube-backup#readme" rel="nofollow noreferrer">kube-backup</a></li> </ul> <p>I run Ark in my cluster, but (to my discredit) I have not yet attempted to do a D.R. drill using it; I only checked that it is, in fact, making config backups.</p>
mdaniel
<p>I'm currently working on a traditional monolith application, but I am in the process of breaking it up into spring microservices managed by kubernetes. The application allows the uploading/downloading of large files and these files are normally stored on the host filesystem. I'm wondering what would be the most viable method of persisting these files in a microservice architecture?</p>
Matt Greene
<p>You have a bunch of different options, Googling your question you'll find many answers, for any budget and taste. Basically you'd want high-availability storage like AWS S3. You could setup your own dedicated server to store these files as well if you wanted to cut costs, but then you'd have to worry about backups and availability. If you need low latency access to these files then you'd want to have them behind CDN as well.</p>
Denis Pshenov
<p>I started my kubernetes cluster on AWS EC2 with kops using a private hosted zone in route53. Now when I do something like <code>kubectl get nodes</code>, the cli says that it can't connect to <code>api.kops.test.com</code> as it is unable to resolve it. So I fixed this issue by manually adding <code>api.kops.test.com</code> and its corresponding public IP (obtained from the record sets) mapping in the <code>/etc/hosts</code> file.</p> <p>I wanted to know if there is a cleaner way to do this (without modifying the system-wide <code>/etc/hosts</code> file), maybe programmatically or through the cli itself.</p>
Punit Naik
<p>Pragmatically speaking, I would add the public IP as a <code>IP</code> SAN to the master's x509 cert, and then just use the public IP in your kubeconfig. Either that, or the DNS record <em>not</em> in the private route53 zone.</p> <p>You are in a situation where you purposefully made things private, so now they are.</p> <hr> <p>Another option, depending on whether it would be worth the effort, is to use a VPN server in your VPC and then connect your machine to EC2 where the VPN connection can add the EC2 DNS servers to your machine's config as a side-effect of connecting. Our corporate Cisco AnyConnect client does something very similar to that.</p>
mdaniel
<p>I have an issue with </p> <pre><code>kubectl run -ti </code></pre> <p>in GitLab CI. For testing in CI we run a Docker container with the "npm t" command in interactive mode, and it worked perfectly on Docker. After migrating to Kubernetes we have an issue, as kubectl run gives the following error: <code>Unable to use a TTY - input is not a terminal or the right kind of file</code><br> The job runs in the image <strong>lachlanevenson/k8s-kubectl</strong>. If I run kubectl run from my local machine, everything works. Please help.</p>
Alibek Karimov
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="nofollow noreferrer">PodSpec <code>container:</code></a> has a <code>tty</code> attribute, which defaults to <code>false</code> but which one can set to <code>true</code> (that's what the <code>-t</code> option, which is a shortcut for <code>--tty=true</code>, does in <code>kubectl exec</code>). You can experiment with setting <code>stdin: true</code> but at your peril, since it can hang the Pod waiting for "someone" to type something.</p>
mdaniel
<p>In <a href="https://github.com/kubernetes/client-go/blob/79cb21f5b3b1dd8f8b23bd3f79925b4fda4e2562/tools/cache/reflector.go#L100" rel="nofollow noreferrer">this</a> reflector package it mentions an unstable value being used as a name suffix. It's the number of nanoseconds modulo 12345. Is this meaningful or is it just a synonym for pseudo random so humans don't interpret it?</p> <pre><code>// reflectorDisambiguator is used to disambiguate started reflectors. // initialized to an unstable value to ensure meaning isn't attributed to the suffix. var reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) </code></pre> <p>The word unstable is what is specifically making me unsure. What does it mean in this context?</p> <p>Is there an advantage of doing this over another method of getting a random number under 12345?</p>
Brian
<p>The meaning seems clear:</p> <blockquote> <p><a href="https://github.com/kubernetes/client-go/commit/755bbca820db30c1a1d071d0ab4709fc49c003a5" rel="nofollow noreferrer">Kubernetes-commit: 1da4f4a745bf536c34e377321a252b4774d1a7e0</a></p> <p>tools/cache/reflector.go</p> <pre><code>// reflectorDisambiguator is used to disambiguate started reflectors. // initialized to an unstable value to ensure meaning isn't attributed to the suffix. </code></pre> </blockquote> <p>The suffix behavior should not be deterministic because you should not rely on a particular implementation behavior.</p> <hr> <p>For example, a similar situation occured for Go maps:</p> <blockquote> <p><a href="https://golang.org/ref/spec" rel="nofollow noreferrer">The Go Programming Language Specification</a></p> <p>For statements with range clause</p> <p>The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next. </p> <p><a href="https://golang.org/doc/go1" rel="nofollow noreferrer">Go 1 Release Notes</a></p> <p>Iterating in maps</p> <p>The old language specification did not define the order of iteration for maps, and in practice it differed across hardware platforms. This caused tests that iterated over maps to be fragile and non-portable, with the unpleasant property that a test might always pass on one machine but break on another.</p> <p>In Go 1, the order in which elements are visited when iterating over a map using a for range statement is defined to be unpredictable, even if the same loop is run multiple times with the same map. Code should not assume that the elements are visited in any particular order.</p> <p>This change means that code that depends on iteration order is very likely to break early and be fixed long before it becomes a problem. Just as important, it allows the map implementation to ensure better map balancing even when programs are using range loops to select an element from a map. </p> <p><a href="https://golang.org/doc/go1.3" rel="nofollow noreferrer">Go 1.3 Release Notes</a></p> <p>Map iteration</p> <p>Iterations over small maps no longer happen in a consistent order. Go 1 defines that “The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next.” To keep code from depending on map iteration order, Go 1.0 started each map iteration at a random index in the map. A new map implementation introduced in Go 1.1 neglected to randomize iteration for maps with eight or fewer entries, although the iteration order can still vary from system to system. This has allowed people to write Go 1.1 and Go 1.2 programs that depend on small map iteration order and therefore only work reliably on certain systems. Go 1.3 reintroduces random iteration for small maps in order to flush out these bugs.</p> <p>Updating: If code assumes a fixed iteration order for small maps, it will break and must be rewritten not to make that assumption. 
Because only small maps are affected, the problem arises most often in tests.</p> </blockquote> <hr> <p>Similar concerns lead to a proposal, that wasn't implemented, to ensure that the order for unstable sorts was unstable:</p> <blockquote> <p><a href="https://github.com/golang/go/issues/13884" rel="nofollow noreferrer">proposal: sort: return equal values in non-deterministic order#13884</a></p> <p>Crazy idea, but what if sort.Sort randomly permuted its input a bit before starting?</p> <p>Go 1.6 has a different sort.Sort than Go 1.5 and I've seen at least a dozen test failures at Google that were implicitly depending on the old algorithm. The usual scenario is that you sort a slice of structs by one field in the struct. If there are entries with that field equal but others unequal and you expect a specific order for the structs at the end, you are depending on sort.Sort's algorithm. A later sort.Sort might make a different choice and produce a different order. This makes programs not portable from one version of Go to another, much like map hash differences used to make programs not portable from one architecture to another. We solved maps by randomizing the iteration order a bit. In the map case it's not a full permutation, but just enough variation to make tests obviously flaky.</p> <p>I wonder if we should do the same for sort.Sort. It would only take N swaps to shuffle things quite well, and we already use Nlog(N) swaps, so N(log(N)+1) is not likely to be noticed. That would also protect a bit against malicious inputs.</p> <p>It would surprise people, especially people who think sort.Sort == sort.Stable. But the rationale is that it's better to surprise them the first time they run the code instead of however many Go releases later.</p> </blockquote> <hr> <p>Here are benchmarks comparing <code>time.Now()</code> to <code>rand.Intn()</code>:</p> <pre><code>package main import "testing" import ( rand "math/rand" "time" ) // https://github.com/kubernetes/client-go/blob/79cb21f5b3b1dd8f8b23bd3f79925b4fda4e2562/tools/cache/reflector.go#L100 var reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) func BenchmarkTimeNow(b *testing.B) { for N := 0; N &lt; b.N; N++ { reflectorDisambiguator = int64(time.Now().UnixNano() % 12345) } } // rand.Intn() func init() { rand.Seed(time.Now().UnixNano()) reflectorDisambiguator = int64(rand.Intn(12345)) } func BenchmarkRandIntn(b *testing.B) { for N := 0; N &lt; b.N; N++ { rand.Seed(time.Now().UnixNano()) reflectorDisambiguator = int64(rand.Intn(12345)) } } </code></pre> <p>Output:</p> <pre><code>$ go test disambiguator_test.go -bench=. goos: linux goarch: amd64 BenchmarkTimeNow-4 20000000 67.5 ns/op BenchmarkRandIntn-4 100000 11941 ns/op $ </code></pre>
peterSO
<p>I am able to scrape Prometheus metrics from a Kubernetes service using this Prometheus job configuration:</p> <pre><code>- job_name: 'prometheus-potapi' static_configs: - targets: ['potapi-service.potapi:1234'] </code></pre> <p>It uses Kubernetes DNS and it gives me the metrics from any of my three pods I use for my service.</p> <p>I would like to see the result for each pod. </p> <p>I am able to see the data I want using this configuration:</p> <pre><code>- job_name: 'prometheus-potapi-pod' static_configs: - targets: ['10.1.0.126:1234'] </code></pre> <p>I have searched and experimented using the service discovery mechanism available in Prometheus. Unfortunately, I don't understand how it should be setup. The <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">service discovery reference</a> isn't really helpful if you don't know how it works.</p> <p>I am looking for an example where the job using the IP number is replaced with some service discovery mechanism. Specifying the IP was enough for me to see that the data I'm looking for is exposed.</p> <p>The pods I want to scrape metrics from all live in the same namespace, <code>potapi</code>. </p> <p>The metrics are always exposed through the same port, <code>1234</code>.</p> <p>Finally, the are all named like this:</p> <pre><code>potapi-deployment-754d96f855-lkh4x potapi-deployment-754d96f855-pslgg potapi-deployment-754d96f855-z2zj2 </code></pre> <p>When I do </p> <pre><code>kubectl describe pod potapi-deployment-754d96f855-pslgg -n potapi </code></pre> <p>I get this description:</p> <pre><code>Name: potapi-deployment-754d96f855-pslgg Namespace: potapi Node: docker-for-desktop/192.168.65.3 Start Time: Tue, 07 Aug 2018 14:18:55 +0200 Labels: app=potapi pod-template-hash=3108529411 Annotations: &lt;none&gt; Status: Running IP: 10.1.0.127 Controlled By: ReplicaSet/potapi-deployment-754d96f855 Containers: potapi: Container ID: docker://72a0bafbda9b82ddfc580d79488a8e3c480d76a6d17c43d7f7d7ab18458c56ee Image: potapi-service Image ID: docker://sha256:d64e94c2dda43c40f641008c122e6664845d73cab109768efa0c3619cb0836bb Ports: 4567/TCP, 4568/TCP, 1234/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP State: Running Started: Tue, 07 Aug 2018 14:18:57 +0200 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4fttn (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-4fttn: Type: Secret (a volume populated by a Secret) SecretName: default-token-4fttn Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>How would you rewrite the job definition given these prerequisites?</p>
Thomas Sundberg
<p>Here they use <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L253-L256" rel="nofollow noreferrer"><code>example.io/scrape=true</code></a> (and similar annotations for specifying the <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L266-L269" rel="nofollow noreferrer">scrape port</a> and the <a href="https://github.com/prometheus/prometheus/blob/v2.3.2/documentation/examples/prometheus-kubernetes.yml#L258-L260" rel="nofollow noreferrer">scrape path</a> if it's not <code>/metrics</code>), which is how one achieves the "autodiscovery" part.</p> <p>If you apply that annotation -- and the relevant config snippets in the Prom config -- to a <code>Service</code>, then Prom will scrape the port and path on the <code>Service</code>, meaning you will have stats for the <code>Service</code> itself, and not the individual Endpoints behind it. Similarly, if you label the <code>Pod</code>s, you will gather metrics for the <code>Pod</code>s but they would need to be rolled up to have a cross-<code>Pod</code> view of the state of affairs. There are multiple different resource types that can be autodiscovered, including <a href="https://prometheus.io/docs/prometheus/2.3/configuration/configuration/#node" rel="nofollow noreferrer">node</a> and <a href="https://prometheus.io/docs/prometheus/2.3/configuration/configuration/#ingress" rel="nofollow noreferrer">ingress</a>, also. They all behave similarly.</p> <p>Unless you have grave CPU or storage concerns for your Prom instance, I absolutely wouldn't enumerate the scrape targets in the config like that: I would use the scrape annotations, meaning you can change who is scraped, what port, etc. without having to reconfigure Prom each time.</p> <p>Be aware that if you want to use their example as-is, and you want to apply those annotations from within the kubernetes resource YAML, ensure that you quote the <code>: 'true'</code> value, otherwise YAML will promote that to be a boolean literal, and kubernetes annotations can only be string values.</p> <p>Applying the annotations from the command line will work just fine:</p> <pre><code>kubectl annotate pod -l app=potapi example.io/scrape=true </code></pre> <p>(BTW, they use <code>example.io/</code> in their example, but there is nothing special about that string except it namespaces the <code>scrape</code> part to keep it from colliding with something else named <code>scrape</code>. So feel free to use your organization's namespace if you wish to avoid having something weird named <code>example.io/</code> in your cluster)</p>
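<p>For illustration, a <code>Service</code> carrying such annotations could look roughly like the snippet below; <code>example.io/port</code> and <code>example.io/path</code> are my guesses following the same naming pattern, so match the keys to whatever your Prometheus relabel config actually expects, and note the quoted <code>'true'</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: potapi-service
  namespace: potapi
  annotations:
    example.io/scrape: 'true'
    example.io/port: '1234'
    example.io/path: '/metrics'
spec:
  selector:
    app: potapi
  ports:
    - name: metrics
      port: 1234
      targetPort: 1234
</code></pre>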
mdaniel
<p>I am running minikube on MacOS, and want to expose the ip address and port for running this example helm chart - <a href="https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/</a></p> <p>I tried to ping the localhost:58064, but it wouldn't connect.</p> <pre><code>helm install --dry-run --debug ./mychart --set service.internalPort=8080 [debug] Created tunnel using local port: '58064' [debug] SERVER: "127.0.0.1:58064" [debug] Original chart version: "" [debug] CHART PATH: /Users/me/Desktop/HelmTest/mychart NAME: messy-penguin REVISION: 1 RELEASED: Tue Jun 12 17:56:41 2018 CHART: mychart-0.1.0 USER-SUPPLIED VALUES: service: internalPort: 8080 COMPUTED VALUES: affinity: {} image: pullPolicy: IfNotPresent repository: nginx tag: stable ingress: annotations: {} enabled: false hosts: - chart-example.local path: / tls: [] nodeSelector: {} replicaCount: 1 resources: {} service: internalPort: 8080 port: 80 type: ClusterIP tolerations: [] HOOKS: MANIFEST: --- # Source: mychart/templates/service.yaml apiVersion: v1 kind: Service metadata: name: messy-penguin-mychart labels: app: mychart chart: mychart-0.1.0 release: messy-penguin heritage: Tiller spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http selector: app: mychart release: messy-penguin --- # Source: mychart/templates/deployment.yaml apiVersion: apps/v1beta2 kind: Deployment metadata: name: messy-penguin-mychart labels: app: mychart chart: mychart-0.1.0 release: messy-penguin heritage: Tiller spec: replicas: 1 selector: matchLabels: app: mychart release: messy-penguin template: metadata: labels: app: mychart release: messy-penguin spec: containers: - name: mychart image: "nginx:stable" imagePullPolicy: IfNotPresent ports: - name: http containerPort: 80 protocol: TCP livenessProbe: httpGet: path: / port: http readinessProbe: httpGet: path: / port: http resources: {} MacBook-Pro:~/Desktop/HelmTest quantum-fusion$ curl 127.0.0.1:58064 curl: (7) Failed to connect to 127.0.0.1 port 58064: Connection refused </code></pre>
joe the coder
<p>Because <code>minikube</code> is from the docker-machine family, running <code>minikube ip</code> will output the IP address of the virtual machine, and <strong>that</strong> is the IP upon which you should attempt to contact your cluster, not localhost.</p> <p>Furthermore, <code>[debug] Created tunnel using local port: '58064'</code> is <strong>helm</strong> making a tunnel to the embedded <code>tiller</code> Pod inside your cluster, and is not anything that you should be using at all. That's actually why it is prefixed with <code>[debug]</code>: because it is only useful for extreme circumstances.</p> <p>Finally, you will need to use <code>kubectl port-forward</code> to reach your deployed Pod since the <code>Service</code> is using a <code>ClusterIP</code>, which as its name implies is only valid inside the cluster. You can also create a second <code>Service</code> of <code>type: NodePort</code> and it will allocate a TCP/IP port on the virtual machine's IP that routes to the <code>port:</code> of the <code>Service</code>. You <em>may</em> be able to inform your Helm chart to do that for you, depending on whether the author exposed that kind of decision through the chart's <code>values.yaml</code>.</p> <p>The other "asterisk" to that <code>port-forward</code> versus <code>Service</code> of <code>type: NodePort</code> part is that I see in the output a mention of an <code>Ingress</code> resource for <code>chart-example.local</code>, but that pragmatically is only meaningful if you have a running "ingress controller," but if you do, then it <em>already</em> has a TCP/IP port upon which you should contact your cluster, just ensuring that you provide the connection with a <code>curl -H "host: chart-example.local" http://$(minikube ip):${the_ingress_port}</code> so that the ingress controller can route the request to the correct <code>Ingress</code>.</p>
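<p>As a sketch of that second <code>Service</code>, reusing the selector labels shown in the chart output above (the explicit <code>nodePort</code> is optional and just an example value in the 30000-32767 range):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: messy-penguin-mychart-nodeport
spec:
  type: NodePort
  selector:
    app: mychart
    release: messy-penguin
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080   # omit to let kubernetes choose a port
</code></pre> <p>After which the nginx container should answer on <code>http://$(minikube ip):30080</code>.</p>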
mdaniel
<p>I have a problem: I need to collect metric data from the kubelet read-only port (10255), but unfortunately, using netstat, I found that no such port exists at all. Can somebody help with advice: how can I enable this port on the kubelet, or how can I avoid using this port for data collection?</p>
volodjaklasik
<p>The <em>kubelet</em> requires a parameter to be set: <strong>--read-only-port=10255</strong> (<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">read more about kubelet</a>) </p> <p>If you are using <em>kubeadm</em> to bootstrap the cluster, you can use a config file to pass in for the kubelet (look for how to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">"Set Kubelet parameters via a config file"</a>)</p> <p>If, for example, you are using <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a>, there's the <strong>kube_read_only_port</strong> variable (commented out by default).</p> <blockquote> <p>Warning! This is not a good practice and the <a href="https://github.com/kubernetes/kubeadm/issues/732" rel="nofollow noreferrer">read-only-port is deprecated</a>. There are ways to read from the secure port but this is another story.</p> </blockquote>
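<p>If you do go the config-file route anyway, the relevant fragment of the kubelet config file would be something along these lines (assuming a kubelet recent enough to support the <code>kubelet.config.k8s.io/v1beta1</code> config file format):</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
readOnlyPort: 10255   # set to 0 to keep the read-only port disabled
</code></pre>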
Adi Fatol
<p>I would like to know, which are the endpoints correctly configured for a certain ingress.</p> <p>For instance I have the following ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dictionary spec: tls: - hosts: - dictionary.juan.com secretName: microsyn-secret backend: serviceName: microsyn servicePort: 8080 rules: - host: dictionary.juan.com http: paths: - path: /synonyms/* backend: serviceName: microsyn servicePort: 8080 </code></pre> <p>that is working the following service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: microsyn spec: ports: - port: 8080 targetPort: 8080 protocol: TCP name: http selector: app: microsyn </code></pre> <p>This service is exposing this deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: microsyn spec: replicas: 1 selector: matchLabels: app: microsyn template: metadata: labels: app: microsyn spec: containers: - name: microsyn image: microsynonyms imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>The application is a nodejs app listening on port 8080, that says hello for path '/', in a docker image I just created locally, that's the reason it has a imagePullPolicy: Never, it is just for testing purposes and learn how to create an ingress.</p> <p>So I created the ingress from nginx-ingress and it is up and running with a nodeport for local environment test, later I will take it to a deployment with load balancer:</p> <p><a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example</a></p> <p>So I do the following:</p> <p><code>minikube service list</code></p> <p><code> |---------------|----------------------|--------------------------------| | NAMESPACE | NAME | URL | |---------------|----------------------|--------------------------------| | default | kubernetes | No node port | | default | microsyn | No node port | | kube-system | default-http-backend | http://192.168.99.100:30001 | | kube-system | kube-dns | No node port | | kube-system | kubernetes-dashboard | http://192.168.99.100:30000 | | nginx-ingress | nginx-ingress | http://192.168.99.100:31253 | | | | http://192.168.99.100:31229 | |---------------|----------------------|--------------------------------| </code></p> <p>Brilliant I can do:</p> <p><code>curl http://192.168.99.100:31253/synonyms</code></p> <p>But then I get a:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.15.2&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>So the only nginx I have is the one from this minikube, and it is working fine. 
But I cannot see which endpoints are configured for this ingress...</p> <p>I see logs that say:</p> <pre><code>2018/08/11 16:07:05 [notice] 117#117: signal process started I0811 16:07:05.313037 1 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"dictionary", UID:"9728e826-9d80-11e8-9caa-0800270091d8", APIVersion:"extensions", ResourceVersion:"57014", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/dictionary was added or updated W0811 16:15:05.826537 1 reflector.go:341] github.com/nginxinc/kubernetes-ingress/nginx-controller/controller/controller.go:413: watch of *v1.ConfigMap ended with: too old resource version: 56348 (56655) </code></pre> <p>So that means that the dictionary ingress has been processed without errors.</p> <p>But then Why I get a 404?</p> <p>Where can I see the endpoints that are configured for this ingress??</p> <p>I would like to run a command that says nginx --> which endpoints are you listening to?</p>
juan garcia
<blockquote> <p>But then Why I get a 404?</p> </blockquote> <p>Ingress resources -- or certainly the one you showed in your question -- use virtual-host routing, so one must set the <code>host:</code> header when interacting with that URL:</p> <pre><code>curl -H 'host: dictionary.juan.com' http://192.168.99.100:31253/synonyms/something </code></pre> <p>I would have to look up the <code>path:</code> syntax to know if <code>/synonyms/*</code> matches <code>/synonyms</code>, which is why I included the slash and extra content in <code>curl</code>.</p> <p>For interacting with it in a non-<code>curl</code> way, you can either change the <code>host:</code> in the <code>Ingress</code> to temporarily be 192.168.99.100 or use <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" rel="nofollow noreferrer">dnsmasq</a> to create a local nameserver where you can override <code>dictionary.juan.com</code> to always return <code>192.168.99.100</code>, and then Chrome will send the correct <code>host:</code> header by itself.</p> <blockquote> <p>Where can I see the endpoints that are configured for this ingress??</p> </blockquote> <p>The question is slightly inaccurate in that <code>Endpoint</code> is a formal resource, and not related to an <code>Ingress</code>, but the answer is:</p> <pre><code>kubectl get endpoints microsyn </code></pre> <blockquote> <p>I would like to run a command that says nginx --> which endpoints are you listening to?</p> </blockquote> <p>First, look up the name of the nginx-ingress Pod (any one of them should do, if you have multiple), and then look at the <code>nginx.conf</code> from the Pod to know exactly what rules it has converted the <code>Ingress</code> resource into:</p> <pre><code>kubectl exec $ingress_pod cat /etc/nginx/nginx.conf </code></pre>
mdaniel
<p>How can we get the real resource usage (not resource requests) of each pod on Kubernetes by command line? Heapster is deprecated. Meanwhile, Metrics-server still does not support <code>kubectl top pod</code>.</p> <ol> <li><p>Heapster - </p> <p>I deployed Heapster using the following command </p> <pre><code>$ heapster/deploy/kube.sh start kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-node-hlcbl 2/2 Running 0 39m kube-system calico-node-m8jl2 2/2 Running 0 35m kube-system coredns-78fcdf6894-bl94w 1/1 Running 0 39m kube-system coredns-78fcdf6894-fwx95 1/1 Running 0 39m kube-system etcd-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m kube-system heapster-84c9bc48c4-qzt8x 1/1 Running 0 15s kube-system kube-apiserver-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m kube-system kube-controller-manager-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 38m kube-system kube-proxy-nj9f8 1/1 Running 0 35m kube-system kube-proxy-zvr2b 1/1 Running 0 39m kube-system kube-scheduler-ctl.kube.yarnrm-pg0.utah.cloudlab.us 1/1 Running 0 39m kube-system monitoring-grafana-555545f477-jldmz 1/1 Running 0 15s kube-system monitoring-influxdb-848b9b66f6-k2k4f 1/1 Running 0 15s </code></pre> <p>When I used <code>kubectl top</code>, I encountered the following errors.</p> <pre><code>$ kubectl top pods Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:) $ kubectl top nodes Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:) </code></pre></li> <li><p>metrics-server:</p> <p>metrics-server has not supported <code>kubectl top</code> <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="noreferrer">Resource Metrics API</a></p></li> </ol> <p>If anyone already solved the same problem, please help me. Thanks.</p>
Tan Le
<blockquote> <p>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)</p> </blockquote> <p>It sounds like the heapster deployment just forgot to install the <code>Service</code> for <code>heapster</code>; I would expect this would get you past <em>that</em> error, but unknown whether it would actually cause <code>kubectl top pods</code> to start to work:</p> <pre><code>kubectl create -f /dev/stdin &lt;&lt;SVC apiVersion: v1 kind: Service metadata: name: heapster namespace: kube-system spec: selector: whatever-label: is-on-heapster-pods ports: - name: http port: 80 targetPort: whatever-is-heapster-is-listening-on SVC </code></pre>
mdaniel
<p>I have a GKE k8s cluster and wanted to reboot one of the nodes (a vm reboot, and not just the kubelet).</p> <p>I was looking for the correct way (if there is one) than just resetting the vm directly. But I couldnt find anything in the web.</p> <p>So, my plan is to use these steps:</p> <ol> <li>drain the node</li> <li>reboot</li> </ol> <p>Is there a correct (other) way?</p>
rrw
<p>No, that is the correct way -- and you don't have to <code>drain</code> the Node first unless there is some extenuating circumstance. One of the major features of kubernetes is that it will route around the "damage" of having a Node disappear suddenly.</p> <p>You <em>could</em> <code>cordon</code> the Node, if you wish to prevent <em>future</em> Pods from being scheduled on the soon-to-be-rebooted Node, but that's merely a time-saver, and shouldn't affect the reboot process.</p> <p>Just be sure to verify the "schedulable" status of the Node after the reboot if you do use <code>cordon</code> or <code>drain</code>; I can't this very second recall whether they automatically re-register in a schedulable state. </p>
mdaniel
<p>I'm trying to deploy my web service to Google Container Engine:</p> <p>Here's my <strong>deployment.yaml:</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: handfree labels: app: handfree spec: replicas: 3 template: metadata: labels: app: handfree spec: containers: - name: handfree image: arycloud/mysecretrepo:latest imagePullPolicy: Always #Ports to expose ports: - name: api_port containerPort: 8000 </code></pre> <p>Here's my <strong>service.yaml:</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: #Service name name: judge spec: selector: app: handfree ports: - protocol: TCP port: 8000 targetPort: 8000 type: LoadBalancer </code></pre> <p>I have created a cluster on Google Container Engine with cluster size 4 and 8 vCPUs, I have successfully get credentials by using the command from connecting link of this cluster.</p> <p>When I try to run the deployment.yml it returns an error as:</p> <blockquote> <p>Error from server (Forbidden): error when retrieving current configuration of: default handfree deployment.yaml</p> <p>from server for: &quot;deployment.yaml&quot; deployments.extensions &quot;handfree&quot; is forbidden: User &quot;client&quot; cannot get deployments.extensions in the namespace &quot;default&quot;: Unknown user &quot;client&quot;.</p> </blockquote> <p>I'm new to kubernetes world, help me, please!</p> <p>Thanks in advance!</p>
Abdul Rehman
<blockquote> <p>Unknown user "client".</p> </blockquote> <p>Means there is no <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#rolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>RoleBinding</code></a> or <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#clusterrolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>ClusterRoleBinding</code></a> with a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#subject-v1beta1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>subjects:</code></a> of <code>type: User</code> with a <code>name:</code> of <code>client</code>.</p> <p>The fix is to create a <code>ClusterRoleBinding</code> or <code>RoleBinding</code> -- depending on whether you want <code>client</code> to have access to <strong>every</strong> <code>Namespace</code> or just <code>default</code> -- and point it at an existing (or created) <code>Role</code> or <code>ClusterRole</code>. The bad news is that since your current credential is invalid, you will need to track down the <code>cluster-admin</code> credential to be able to make that kind of change. Since I haven't used GKE, I can't specify the exact steps.</p> <p>I know those paragraphs are filled with jargon, and for that I'm sorry - it's a complex topic. There are several RBAC summaries, including a <a href="https://about.gitlab.com/2018/08/07/understanding-kubernestes-rbac/" rel="nofollow noreferrer">recent one from GitLab</a>, a <a href="https://www.youtube.com/watch?v=CnHTCTP8d48" rel="nofollow noreferrer">CNCF webinar</a>, and <a href="https://sysdig.com/blog/kubernetes-security-rbac-tls/" rel="nofollow noreferrer">one from Sysdig</a>, and (of course) <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">the kubernetes docs</a></p>
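<p>Purely as an illustration, a <code>ClusterRoleBinding</code> granting the <code>client</code> user cluster-wide rights could look like the following — swap <code>cluster-admin</code> for a narrower <code>ClusterRole</code> (such as <code>edit</code> or <code>view</code>), or use a <code>RoleBinding</code> in <code>default</code>, depending on how much access you actually want to hand out:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-binding
subjects:
  - kind: User
    name: client
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>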
mdaniel
<p>I am looking for a static validator that validates Kubernetes deployment or service yaml files based on custom rules. For example, I can have a rule to disallow some fields in the yaml files (although they are valid fields in K8s), or specify a range for values of a field. The validation is triggered independent of kubectl.</p> <p>The closest solution I found is this kube-lint: <a href="https://github.com/viglesiasce/kube-lint" rel="nofollow noreferrer">https://github.com/viglesiasce/kube-lint</a>. However, it does not seem to be supported since the last commit is March 2017.</p> <p>Can anyone let me know if there is anything else that does the dynamic validation on K8s yaml files based on custom rules?</p>
DylanS
<p>I believe the thing you are looking for is an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">Admission Controller</a> and its two baked-in kinds "validating" and "mutating." However, as the docs say, if that's not powerful enough for your needs there is also <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">Dynamic Admission Controller</a>.</p> <p>Be sure to watch <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies</a> as it matures out of beta (or, I guess, try it even now)</p> <p>I haven't ever used them to know what the user experience is like (such as: does <code>kubectl</code> offer a friendly message, or just "401: Nope" kind of thing?), but as for the "disallow some fields" part, I am pretty confident they will do exactly as you wish.</p>
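<p>To give a feel for the dynamic variant: a (heavily abbreviated) <code>ValidatingWebhookConfiguration</code> that sends every <code>Deployment</code> create/update to your own validation service might look roughly like this — all names, the namespace and the path are made up, and the webhook service itself (which enforces your custom rules) is yours to implement:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: deployment-policy
webhooks:
  - name: deployment-policy.example.com
    failurePolicy: Fail
    rules:
      - apiGroups: ["apps", "extensions"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: policy
        name: deployment-policy-svc
        path: /validate
      caBundle: "&lt;base64-encoded CA for the webhook's TLS cert&gt;"
</code></pre>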
mdaniel
<p>I want to scale an application with workers.<br> There could be 1 worker or 100, and I want to scale them seamlessly.<br> The idea is using replica set. However due to domain-specific reasons, the appropriate way to scale them is for each worker to know its: ID and the total number of workers.</p> <p>For example, in case I have 3 workers, I'd have this: </p> <pre><code>id:0, num_workers:3 id:1, num_workers:3 id:2, num_workers:3 </code></pre> <p>Is there a way of using kubernetes to do so?<br> I pass this information in command line arguments to the app, and I assume it would be fine having it in environment variables too.</p> <p>It's ok on size changes for all workers to be killed and new ones spawned. </p>
nmiculinic
<p>Before giving the kubernetes-specific answer, I wanted to point out that it seems like the problem is trying to push cluster-coordination down into the app, which is almost by definition harder than using a distributed system primitive designed for that task. For example, if every new worker identifies themselves in <a href="https://github.com/coreos/etcd#readme" rel="nofollow noreferrer">etcd</a>, then they can <a href="https://github.com/coreos/etcd/blob/v3.2.11/Documentation/learning/api.md#watch-streams" rel="nofollow noreferrer">watch keys</a> to detect changes, meaning no one needs to destroy a running application just to update its list of peers, their contact information, their capacity, current workload, whatever interesting information you would enjoy having while building a distributed worker system.</p> <p>But, on with the show:</p> <hr> <p>If you want stable identifiers, then <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> is the modern answer to that. Whether that is an <em>exact</em> fit for your situation depends on whether (for your problem domain) <code>id:0</code> being "rebooted" still counts as <code>id:0</code> or the fact that it has stopped and started now disqualifies it from being <code>id:0</code>.</p> <p>The running list of cluster size is tricky. If you are willing to be flexible in the launch mechanism, then you can have a <a href="https://github.com/mattn/etcdenv#readme" rel="nofollow noreferrer">pre-launch binary</a> populate the environment right before spawning the actual worker (that example is for reading from etcd directly, but the same principle holds for interacting with the kubernetes API, then launching).</p> <p>You could do that same trick in a more static manner by having an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> write the current state of affairs to a file, which the app would then read in. Or, due to all Pod containers sharing networking, the app could contact a "sidecar" container on <code>localhost</code> to obtain that information via an API.</p> <p>So far so good, except for the</p> <blockquote> <p>on size changes for all workers to be killed and new one spawned</p> </blockquote> <p>The best answer I have for that requirement is that if the app must know its peers at launch time, then I am pretty sure you have left the realm of "scale $foo --replicas=5" and entered into the "destroy the peers and start all afresh" realm, with <code>kubectl delete pods -l some-label=of-my-pods</code>; which is, thankfully, what <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete" rel="nofollow noreferrer">updateStrategy: type: OnDelete</a> does, when combined with the <code>delete pods</code> command.</p>
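<p>To sketch the <code>StatefulSet</code> angle: each Pod gets a stable name ending in its ordinal (<code>worker-0</code>, <code>worker-1</code>, ...), so the app can derive its <code>id</code> from its own name via the downward API, while the total count is passed as a plain env var that you keep in sync with <code>replicas</code> (all names and the image below are made up):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: worker
spec:
  serviceName: worker        # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: my-worker:latest     # placeholder
          env:
            - name: POD_NAME          # e.g. worker-2, from which the app parses id 2
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NUM_WORKERS       # must be kept in sync with spec.replicas
              value: "3"
</code></pre>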
mdaniel
<p>I am looking for a way to delete PersistentVolumeClaims assigned to pods of a StatefulSet automatically when I scale down the number of instances. Is there a way to do this within k8s? I haven't found anything in the docs, yet.</p>
alpe1
<p>I suspect that a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#lifecycle-v1-core" rel="nofollow noreferrer"><code>preStop</code> Lifecycle Handler</a> could submit a <code>Job</code> to clean up the PVC, assuming the Pod's <code>ServiceAccount</code> had the <code>Role</code> to do so. Unfortunately, the Lifecycle Handler docs say that the <code>exec</code> blocks the Pod deletion, so that's why whatever happened would need to be asynchronous from the Pod's perspective.</p> <p>Another approach might be to unconditionally scan the cluster or namespace with a <code>CronJob</code> and delete unassigned PVCs, or those that match a certain selector.</p> <p>But I don't think there is any <em>inherent</em> ability to do that, given that (at least in my own usage) it's reasonable to scale a <code>StatefulSet</code> up and down, and when scaling it back up then one would actually desire that the <code>Pod</code> regain its identity in the <code>StatefulSet</code>, which typically includes any persisted data.</p>
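<p>A very rough sketch of the <code>CronJob</code> idea mentioned above, deleting PVCs that match a chosen label — the image, schedule, label selector and <code>ServiceAccount</code> are all placeholders, and that ServiceAccount needs RBAC permission to delete <code>persistentvolumeclaims</code>:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pvc-janitor
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pvc-janitor         # placeholder; needs delete rights on PVCs
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: lachlanevenson/k8s-kubectl   # any image containing kubectl will do
              command: ["kubectl", "delete", "pvc", "-l", "cleanup=true"]
</code></pre>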
mdaniel
<p>Kubernetes automatically generates several environment variables for you, like <code>SERVICE1_SERVICE_HOST</code> and <code>SERVICE1_SERVICE_PORT</code>. I would like to use the value of these variables to set my own variables in the deployment.yml, like below:</p> <pre><code>env: - name: MY_NEW_VAR value: ${SERVICE1_SERVICE_HOST} </code></pre> <p>For some reason Kubernetes isn't able to resolve this. When I go inside the container it turns out it has been assigned as a literal string, giving me <code>MY_NEW_VAR = ${SERVICE1_SERVICE_HOST}.</code></p> <p>Is there a way to assign the value of <code>${SERVICE1_SERVICE_HOST}</code> instead?</p>
DraegerMTN
<p>The syntax is <code>$(SERVICE1_SERVICE_HOST)</code>, as one can see in <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">the fine manual</a></p>
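<p>So, for the example in the question, something along these lines should work (the second variable is only there to show that references can be combined into larger strings):</p> <pre><code>env:
  - name: MY_NEW_VAR
    value: $(SERVICE1_SERVICE_HOST)
  - name: SERVICE1_URL
    value: http://$(SERVICE1_SERVICE_HOST):$(SERVICE1_SERVICE_PORT)
</code></pre>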
mdaniel
<p>We're planning to migrate our software to run in kubernetes with auto scaling; this is our current infrastructure:</p> <ol> <li>PHP and apache are running in Google Compute Engine n1-standard-4 (4 vCPUs, 15 GB memory)</li> <li>MySql is running in Google Cloud SQL</li> <li>Data files (csv, pdf) and the code are storing in a single SSD Persistent Disk</li> </ol> <p>I found many posts that recommend storing the data files in Google Cloud Storage and using the API to fetch files and upload them to the bucket. We have very limited time, so I decided to use NFS to share the data files across the pods. The problem is that NFS speed is slow: it's around 100mb/s when I copy a file with pv, while the result from iperf is 1.96 Gbits/sec. Do you know how to achieve the same result without implementing cloud storage, or how to increase the NFS speed?</p>
Jackson Tong
<blockquote> <p>Data files (csv, pdf) and the code are storing in a single SSD Persistent Disk</p> </blockquote> <p>There's nothing stopping you from volume mounting an SSD into the Pod so you can continue to use an SSD. I can only speak to AWS terminology, but some EC2 instances come with "local" SSD hardware, and thus you would only need to use a <code>nodeSelector</code> to ensure your Pods were scheduled onto machines that had said local storage available.</p> <p>Where you're going to run into problems is if you are <em>currently</em> just using one php+apache and thus just one SSD, but now you want to scale the application up and it requires that all php+apache have access to the <em>same</em> SSD. That's a classic distributed application architecture problem, and something kubernetes itself can't fix for you.</p> <p>If you're willing to expend the effort, you can also try any one of the other distributed filesystems (Ceph, GlusterFS, etc) and see if they perform better for your situation. Then again, "We have very limited time" I guess pretty much means that's off the table.</p>
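<p>Mechanically, that idea looks something like the sketch below, regardless of cloud: label the nodes that actually have the local SSD (e.g. <code>disktype=ssd</code>), mount the disk on those nodes at some path, and reference both from the Pod — all names and paths here are placeholders:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: php-apache
spec:
  nodeSelector:
    disktype: ssd                     # a label you put on the nodes that have the SSD
  containers:
    - name: app
      image: my-php-apache:latest     # placeholder
      volumeMounts:
        - name: fast-data
          mountPath: /var/www/data
  volumes:
    - name: fast-data
      hostPath:
        path: /mnt/ssd                # wherever the SSD is mounted on the node
</code></pre>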
mdaniel
<p>I have Kubernetes cluster hosted in Google Cloud. I created a deployment and defined a hpa rule for it:</p> <pre><code>kubectl autoscale deployment my_deployment --min 6 --max 30 --cpu-percent 80 </code></pre> <p>I want to run a command that editing the <code>--min</code> value, <strong>without remove and re-create a new hpa rule</strong>. Something like: </p> <pre><code>$ kubectl autoscale deployment my_deployment --min 1 --max 30 Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "my_deployment" already exists </code></pre> <p>Is this possible to edit hpa (min, max, cpu-percent, ...) on command line?</p>
No1Lives4Ever
<blockquote> <p>Is this possible to edit hpa (min, max, cpu-percent, ...) on command line?</p> </blockquote> <p>They are editable just as any other resource is, though either <code>kubectl edit hpa $the_hpa_name</code> for an interactive edit, or <code>kubectl patch hpa $the_hpa_name -p '{"spec":{"minReplicas": 1}}'</code> for doing so in a "batch" setting.</p> <p>If you don't know the <code>$the_hpa_name</code>, you can get a list of them like any other resource: <code>kubectl get hpa</code>, and similarly you can view the current settings and status with <code>kubectl get -o yaml hpa $the_hpa_name</code> (or even omit <code>$the_hpa_name</code> to see them all, but that might be a lot of text, depending on your cluster setup).</p>
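<p>If you prefer to manage it declaratively instead, the same autoscaler can be written as a manifest and re-<code>kubectl apply</code>-ed whenever min/max change (note that real resource names cannot contain underscores, so adjust the names to your actual Deployment/HPA):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1        # match your Deployment's apiVersion
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 30
  targetCPUUtilizationPercentage: 80
</code></pre>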
mdaniel
<p>I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. I am planning to use Kubernetes and Jenkins for that, and I have one on-premise server which I am using for my SVN code repository (SVN is my version control system). For the image registry, I am planning to use Dockerhub. The goal is that when there is a commit to my SVN code repository, a Jenkins pipeline job triggers, compiles, tests, builds, packages, builds the Docker image, pushes it to Dockerhub and deploys it into the Kubernetes cluster.</p> <p><strong>Here my confusion is:</strong> </p> <p><strong>1.</strong> When I am adding the URL for SVN, Dockerhub and credentials, is it possible to add these all in my Jenkinsfile?</p> <p>The reason for the doubt is that, when I was exploring implementation examples, YouTube videos showed checking the SCM option in pipeline creation and pasting the code repository URL (a GitHub URL) within that. So I am totally confused about whether I need to do it that way, or whether it is possible to add the Docker Hub and SVN URLs in the Jenkinsfile.</p> <p><strong>2.</strong> If adding them in the Jenkinsfile is not possible, then where can I add my Dockerhub URL and credentials for the image pushing and pulling process?</p>
Mr.DevEng
<blockquote> <p>When I am adding the URL for SVN Dockerhub and credentials , Is possible to add these all in my Jenkinsfile ?</p> </blockquote> <p>Is it <strong>possible</strong>? Yes. Is it <strong>advisable</strong>? No. If for no other reason than it makes rotating credentials that much harder, since one would need to update <em>every</em> <code>Jenkinsfile</code> in every project.</p> <p>You'll want to take advantage of the <a href="https://jenkins.io/doc/book/using/using-credentials/" rel="nofollow noreferrer">credential store</a> of Jenkins, and then only <em>consume</em> those credentials in the <code>Jenkinsfile</code>. You may also have good success making use of the <a href="https://jenkins.io/doc/book/pipeline/shared-libraries/" rel="nofollow noreferrer">pipeline library</a> in Jenkins, to create a new "step" to do the credential binding and interaction with <a href="https://jenkins.io/doc/pipeline/steps/docker-workflow/#withdockerregistry-sets-up-docker-registry-endpoint" rel="nofollow noreferrer">dockerhub</a> and <a href="https://jenkins.io/doc/pipeline/steps/kubernetes-cd/#kubernetesdeploy-deploy-to-kubernetes" rel="nofollow noreferrer">kubernetes</a>.</p> <p>The advantage to the custom step is that it abstracts away the complexity of doing Jenkins credential lookup, error handling, all that stuff, from the actual projects. The <em>disadvantage</em> of doing that is that it adds opacity, meaning one must ensure the projects know where to find the documentation for that step, and the step needs to be configurable enough for use in all circumstances.</p>
mdaniel
<p>I'm pretty new to Kubernetes, trying to figure it out. I haven't been able to google this answer tho, so I'm stumped. Can Kubernetes mount two secrets to the same path? say given the following deployment:</p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: nginx-deployment labels: app: nginx-deployment version: v1 spec: selector: matchLabels: app: nginx replicas: 1 template: metadata: labels: app: nginx version: v1 spec: volumes: - name: nginxlocal hostPath: path: /srv/docker/nginx - name: requestcert secret: secretName: requests-certificate - name: mysitecert secret: secretName: mysitecert containers: - name: nginx image: nginx:mainline-alpine # Use 1.15.0 volumeMounts: - name: nginxlocal subPath: config/nginx.conf mountPath: /etc/nginx/nginx.conf - name: requestcert mountPath: /etc/nginx/ssl - name: mysitecert mountPath: /etc/nginx/ssl - name: nginxlocal subPath: logs mountPath: /etc/nginx/logs ports: - containerPort: 443 </code></pre> <p>would it be possible to mount both SSL certs to the same directory (/etc/nginx/ssl/*)?</p> <p>If not, can storing the TLS cert+key as "Opaque" instead of kubernetes.io/tls type work? I tried to combine both certs+keys into one secret of tls type, but kubernetes expected it to be called tls.crt and tls.key, so I had to split it into two secret files. if they could be done as opaque, i think I could remove the two secret values and just use the one opaque entry. </p> <p>Thanks!</p>
Evan R.
<blockquote> <p>would it be possible to mount both SSL certs to the same directory (/etc/nginx/ssl/*)?</p> </blockquote> <p>No, because (at least when using a docker runtime) it uses volume mounts, which behave exactly the same as <code>mount -t ext4 /dev/something /path/something</code> in that <code>/path/something</code> will be last-one-wins.</p> <p>However, you have an only mildly smelly work-around available to you: mount secret <code>requestcert</code> as <code>/etc/nginx/.reqcert</code> (or similar), mount secret <code>mysitecert</code> as <code>/etc/nginx/.sitecert</code>, then supersede the <code>entrypoint</code> of the image and copy the files into place before delegating down to the actual entrypoint:</p> <pre><code>containers: - name: nginx image: etc etc command: - bash - -c - | mkdir -p /etc/nginx/ssl cp /etc/nginx/.*cert/* /etc/nginx/ssl/ # or whatever initialization you'd like # then whatever the entrypoint is for your image /usr/local/sbin/nginx -g "daemon off;" </code></pre> <p>Or, if that doesn't seem like a good idea, you can leverage a disposable, Pod-specific directory in combination with <code>initContainers:</code>:</p> <pre><code>spec: volumes: # all the rest of them, as you had them - name: temp-config emptyDir: {} initContainers: - name: setup-config image: busybox # or whatever command: - sh - -c - | # "stage" all the config files, including certs # into /nginx-config which will evaporate on Pod destruction volumeMounts: - name: temp-config mountPath: /nginx-config # and the rest containers: - name: nginx # ... volumeMounts: - name: temp-config mountPath: /etc/nginx </code></pre> <p>They differ in complexity based on whether you want to have to deal with keeping track of the upstream image's entrypoint command, versus leaving the upstream image untouched, but expending a lot more initialization energy</p>
mdaniel
<p>I work on a Kubernetes cluster based CI-CD pipeline. The pipeline runs like this:</p> <ul> <li>An ECR machine has Docker.</li> <li>Jenkins runs as a container.</li> <li>"Builder image" with Java, Maven etc is built.</li> <li>Then this builder image is run to build an app image(s)</li> <li>Then the app is run in kubernetes AWS cluster (using Helm).</li> <li>Then the builder image is run with params to run Maven-driven tests against the app.</li> </ul> <p>Now part of these steps doesn't require the image to be pushed. E.g. the builder image can be cached or disposed at will - it would be rebuilt if needed.</p> <p>So these images are named like <code>mycompany/mvn-builder:latest</code>.</p> <p>This works fine when used directly through Docker.</p> <p>When Kubernetes and Helm comes, it wants the images URI's, and try to fetch them from the remote repo. So using the "local" name <code>mycompany/mvn-builder:latest</code> doesn't work:</p> <pre><code>Error response from daemon: pull access denied for collab/collab-services-api-mvn-builder, repository does not exist or may require 'docker login' </code></pre> <p>Technically, I can name it <code>&lt;AWS-repo-ID&gt;/mvn-builder</code> and push it, but that breaks the possibility to run all this locally in <code>minikube</code>, because that's quite hard to keep authenticated against the silly AWS 12-hour token (remember it all runs in a cluster).</p> <p>Is it possible to mix the remote repo and local cache? In other words, can I have Docker look at the remote repository and if it's not found or fails (see above), it would take the cached image?</p> <p>So that if I use <code>foo/bar:latest</code> in a Kubernetes resource, it will try to fetch, find out that it can't, and would take the local <code>foo/bar:latest</code>?</p>
Ondra Žižka
<p>I <em>believe</em> an <code>initContainer</code> would do that, provided it had access to <code>/var/run/docker.sock</code> (and your cluster allows such a thing) by conditionally pulling (or <code>docker load</code>-ing) the image, such that when the "main" <code>container</code> starts, the image will always be cached.</p> <p>Approximately like this:</p> <pre><code>spec: initContainers: - name: prime-the-cache image: docker:18-dind command: - sh - -c - | if something_awesome; then docker pull from/a/registry else docker load -i some/other/path fi volumeMounts: - name: docker-sock mountPath: /var/run/docker.sock readOnly: true containers: - name: primary image: a-local-image volumes: - name: docker-sock hostPath: path: /var/run/docker.sock </code></pre>
mdaniel
<p>I'm having trouble accessing a Kubernetes environment variable in my python app's init.py file. It appears to be available in other files, however. </p> <p>My init.py file includes this code <code>app.config.from_object(os.environ['APP_SETTINGS'])</code>. The value of <code>APP_SETTINGS</code> depends on my environment with values being <code>config.DevelopmentConfig</code>, <code>config.StagingConfig</code> or <code>config.ProductionConfig</code>. From here, my app pulls configs from my config.py file, which looks like this:</p> <pre><code>import os basedir = os.path.abspath(os.path.dirname(__file__)) class Config(object): WTF_CSRF_ENABLED = True SECRET_KEY = 'you-will-never-guess' APP_SETTINGS = os.environ['APP_SETTINGS'] # For debug purposes class DevelopmentConfig(Config): TEMPLATES_AUTO_RELOAD = True DEBUG = True class StagingConfig(Config): DEBUG = True class ProductionConfig(Config): DEBUG = False </code></pre> <p>When I set APP_SETTINGS locally in my dev environment in my docker-compose, like so...</p> <pre><code>environment: - APP_SETTINGS=config.DevelopmentConfig </code></pre> <p>everything works just fine. When I deploy to my Staging pod in Kubernetes with <code>APP_SETTINGS=config.StagingConfig</code> set in my Secrets file, I'm greeted with the following error:</p> <pre><code>Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string return getattr(module, obj_name) AttributeError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) ImportError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 3, in &lt;module&gt; from app import app File "/root/app/__init__.py", line 11, in &lt;module&gt; app.config.from_object(os.environ['APP_SETTINGS']) File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object obj = import_string(obj) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string sys.exc_info()[2]) File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are: - missing __init__.py in a package; - package or module path not included in sys.path; - duplicated package or module name taking precedence in sys.path; - missing module, class, function or variable; Debugged import: - 'config' found in '/root/config.py'. - 'config.StagingConfig\n' not found. Original exception: ImportError: module 'config' has no attribute 'StagingConfig ' upgrading database schema... 
Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string return getattr(module, obj_name) AttributeError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) ImportError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 3, in &lt;module&gt; from app import app File "/root/app/__init__.py", line 11, in &lt;module&gt; app.config.from_object(os.environ['APP_SETTINGS']) File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object obj = import_string(obj) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string sys.exc_info()[2]) File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are: - missing __init__.py in a package; - package or module path not included in sys.path; - duplicated package or module name taking precedence in sys.path; - missing module, class, function or variable; Debugged import: - 'config' found in '/root/config.py'. - 'config.StagingConfig\n' not found. Original exception: ImportError: module 'config' has no attribute 'StagingConfig ' starting metriculous web server... Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string return getattr(module, obj_name) AttributeError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) ImportError: module 'config' has no attribute 'StagingConfig ' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 3, in &lt;module&gt; from app import app File "/root/app/__init__.py", line 11, in &lt;module&gt; app.config.from_object(os.environ['APP_SETTINGS']) File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object obj = import_string(obj) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string sys.exc_info()[2]) File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string raise ImportError(e) werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are: - missing __init__.py in a package; - package or module path not included in sys.path; - duplicated package or module name taking precedence in sys.path; - missing module, class, function or variable; Debugged import: - 'config' found in '/root/config.py'. - 'config.StagingConfig\n' not found. 
Original exception: ImportError: module 'config' has no attribute 'StagingConfig </code></pre> <p>However, when I hard code the APP_SETTINGS value in my init.py file like so <code>app.config.from_object('config.StagingConfig')</code> and deploy to Kubernetes, it works fine. When I do it this way, I can even confirm that my APP_SETTINGS env var declared in my Settings in Kubernetes exists by logging into my pod and running <code>echo $APP_SETTINGS</code>.</p> <p>Any thoughts about what I'm doing wrong?</p> <p>EDIT #1 - Adding my deployment.yaml file</p> <pre><code>kind: Deployment apiVersion: apps/v1beta2 metadata: annotations: deployment.kubernetes.io/revision: '4' selfLink: /apis/apps/v1beta2/namespaces/tools/deployments/met-staging-myapp resourceVersion: '51731234' name: met-staging-myapp uid: g1fce905-1234-56y4-9c15-12de61100d0a creationTimestamp: '2018-01-29T17:22:14Z' generation: 6 namespace: tools labels: app: myapp chart: myapp-1.0.1 heritage: Tiller release: met-staging spec: replicas: 1 selector: matchLabels: app: myapp release: met-staging template: metadata: creationTimestamp: null labels: app: myapp release: met-staging spec: containers: - name: myapp-web image: 'gitlab.ourdomain.com:4567/ourspace/myapp:web-latest' ports: - containerPort: 80 protocol: TCP env: - name: APP_SETTINGS valueFrom: secretKeyRef: name: myapp-creds key: APP_SETTINGS - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: myapp-creds key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: myapp-creds key: AWS_SECRET_ACCESS_KEY resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: Always - name: myapp-celery image: 'gitlab.ourdomain.com:4567/ourspace/myapp:celery-latest' env: - name: APP_SETTINGS valueFrom: secretKeyRef: name: myapp-creds key: APP_SETTINGS - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: myapp-creds key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: myapp-creds key: AWS_SECRET_ACCESS_KEY resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: Always - name: rabbit image: 'rabbitmq:alpine' env: - name: RABBITMQ_DEFAULT_USER value: rabbit_user - name: RABBITMQ_DEFAULT_PASS value: fake_pw resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst securityContext: {} imagePullSecrets: - name: gitlab-registry schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 10 progressDeadlineSeconds: 600 status: observedGeneration: 6 replicas: 1 updatedReplicas: 1 readyReplicas: 1 availableReplicas: 1 conditions: - type: Available status: 'True' lastUpdateTime: '2018-01-29T17:22:14Z' lastTransitionTime: '2018-01-29T17:22:14Z' reason: MinimumReplicasAvailable message: Deployment has minimum availability. - type: Progressing status: 'True' lastUpdateTime: '2018-05-25T10:20:49Z' lastTransitionTime: '2018-02-16T20:29:45Z' reason: NewReplicaSetAvailable message: &gt;- ReplicaSet "met-staging-myapp-2615c4545f" has successfully progressed. </code></pre>
hugo
<blockquote> <p><code>werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are:</code></p> </blockquote> <p>It very clearly shows you that the module name has a trailing newline character, which is a very, very, very common error for people who try to <code>echo something | base64</code> and put that value into a kubernetes <code>Secret</code>. The <em>correct</em> way of doing that is either via <code>kubectl create secret generic myapp-creds --from-literal=APP_SETTINGS=config.StagingConfig</code>, or <code>printf '%s' config.StagingConfig | base64</code>. Or, of course, stop putting non-Secret text into a Secret and use either a <code>ConfigMap</code> or just a traditional environment <code>value: config.StagingConfig</code> setting, reserving the <code>Secret</code> construct for actual secret values.</p>
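<p>If the value must stay in a <code>Secret</code>, a <code>stringData</code> block lets the API server handle the base64 encoding for you, so there is no newline to sneak in. A minimal sketch reusing the names from the deployment above (treat it as illustrative, not the one true layout):</p> <pre><code># Secret written with stringData: no hand-rolled base64, no trailing newline
apiVersion: v1
kind: Secret
metadata:
  name: myapp-creds
type: Opaque
stringData:
  APP_SETTINGS: config.StagingConfig

# ...or skip the Secret entirely and set it inline in the container spec:
#   env:
#   - name: APP_SETTINGS
#     value: config.StagingConfig
</code></pre>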
mdaniel
<p>I am trying to deploy my Docker images using Kubernetes orchestration tools. While reading about Kubernetes I have seen the documentation and many YouTube tutorials, but they only cover creating pods and services and writing the corresponding .yml files. I still have the following doubts:</p> <ol> <li>When I am using Kubernetes, how can I create clusters and nodes?</li> <li>Can I deploy the image built by my current docker-compose directly, using pods only? Why do I need to create a services yml file?</li> </ol> <p>I am new to the containerization, Docker and Kubernetes world.</p>
Mr.DevEng
<ol> <li><p>My favorite way to create clusters is <a href="https://github.com/kubernetes-incubator/kubespray#readme" rel="nofollow noreferrer">kubespray</a> because I find <a href="https://github.com/ansible/ansible#readme" rel="nofollow noreferrer">ansible</a> very easy to read and troubleshoot, unlike more monolithic &quot;run this binary&quot; mechanisms for creating clusters. The kubespray repo has a <a href="https://www.vagrantup.com" rel="nofollow noreferrer">vagrant</a> configuration file, so you can even try out a full cluster on your local machine, to see what it will do &quot;for real&quot;</p> <p>But with the popularity of kubernetes, I'd bet if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to <em>debug</em> those mechanisms when something inevitably goes wrong</p> </li> <li><p>The short version, as Hitesh said, is &quot;yes,&quot; but the long version is that one will need to be careful because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one could not easily swap one in place of the other.</p> <p>As for the second part of your question, a <code>Service</code> in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived &quot;promise&quot; that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a <code>Service</code> if you indicate that is what you would like (and similar behavior for other cloud providers)</p> </li> </ol> <p>If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Else, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque</p>
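<p>To make the <code>Service</code> point concrete, here is a deliberately minimal, hypothetical example (every name in it is invented for illustration): consumers keep talking to <code>my-web</code> no matter how many times the Pods behind it are replaced, because the Service matches backends by label rather than by IP.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer      # on a cloud provider this provisions an ELB-style load balancer
  selector:
    app: my-web           # any Pod carrying this label becomes a backend
  ports:
  - port: 80              # port the Service exposes
    targetPort: 8080      # port the container actually listens on
</code></pre>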
mdaniel
<p>I'm trying to make my life easier and am writing some bash scripts. One of them allows me to kube into a pod with postgres access, get the credentials I need, and run the interactive psql shell. </p> <p>However, upon running</p> <pre><code>kubectl &lt;flags&gt; exec $podname -- bash -c ' get_credentials &amp;&amp; psql &lt;psql args&gt; -i -t </code></pre> <p>the terminal hangs. </p> <p>I can't directly connect to the database, and the process to get the credentials is kinda cumbersome. Is there some bash concept I'm not understanding?</p>
Oliver
<blockquote> <p><code>kubectl &lt;flags&gt; exec $podname</code></p> </blockquote> <p>That <code>exec</code> is missing its <code>-i</code> and <code>-t</code> for <code>--stdin=true</code> and <code>--tty=true</code> to describe to kubernetes that you wish your terminal and the remote terminal to be associated with one another:</p> <p><code>kubectl exec -it $podname -- etc etc</code></p> <p>If you are intending the <code>-i</code> and <code>-t</code> present at the end of your cited example above to be passed to <code>exec</code>, be aware that the double dashes <em>explicitly</em> switch off argument parsing from <code>kubectl</code>, so there is no way it will see them</p>
mdaniel
<p>I have a pod <code>test-1495806908-xn5jn</code> with 2 containers. I'd like to restart one of them called <code>container-test</code>. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?</p> <p>The pod was created using a <code>deployment.yaml</code> with:</p> <pre><code>kubectl create -f deployment.yaml </code></pre>
s5s
<blockquote> <p>Is it possible to restart a single container</p> </blockquote> <p>Not through <code>kubectl</code>, although depending on the setup of your cluster you can "cheat" and <code>docker kill the-sha-goes-here</code>, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)</p> <blockquote> <p>how do I restart the pod</p> </blockquote> <p>That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just <code>kubectl delete pod test-1495806908-xn5jn</code> and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect <code>kubectl get pods</code> to return <code>test-1495806908-xn5jn</code> ever again)</p>
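<p>If the underlying goal is to have kubernetes restart one container on its own, the closest built-in mechanism is a liveness probe: when the probe fails, the kubelet restarts just that container (per the Pod's <code>restartPolicy</code>), without recreating the whole Pod. A hedged sketch, with the probe endpoint and port entirely made up:</p> <pre><code>spec:
  restartPolicy: Always
  containers:
  - name: container-test
    image: your-image:tag          # placeholder
    livenessProbe:                 # kubelet restarts only this container when the probe fails
      httpGet:
        path: /healthz             # assumed health endpoint
        port: 8080                 # assumed port
      periodSeconds: 10
      failureThreshold: 3
</code></pre>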
mdaniel
<p>I have a Kubernetes cluster hosted in Google Cloud. </p> <p>I deployed my deployment and added an <code>hpa</code> rule for scaling. </p> <pre><code>kubectl autoscale deployment MY_DEP --max 10 --min 6 --cpu-percent 60 </code></pre> <p>After waiting a minute I run the <code>kubectl get hpa</code> command to verify my scale rule - as expected, I have 6 pods running (according to the min parameter).</p> <pre><code>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE MY_DEP Deployment/MY_DEP &lt;unknown&gt;/60% 6 10 6 1m </code></pre> <p>Now, I want to change the min parameter:</p> <pre><code>kubectl patch hpa MY_DEP -p '{"spec":{"minReplicas": 1}}' </code></pre> <p>I wait for 30 minutes and run the command again:</p> <pre><code>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE MY_DEP Deployment/MY_DEP &lt;unknown&gt;/60% 1 10 6 30m </code></pre> <p><strong>expected replicas: 1, actual replicas: 6</strong></p> <p>More information:</p> <ol> <li>You can assume that the system is not computing anything (0% CPU utilization). </li> <li>I waited for more than an hour. Nothing changed. </li> <li>The same behavior is seen when I deleted the scaling rule and deployed it again. The <code>replicas</code> parameter has not changed.</li> </ol> <h2>Question:</h2> <p>If I changed the <code>MINPODS</code> parameter to "1" - why I still have 6 pods? How do I make Kubernetes actually change the <code>min</code> pods in my deployment?</p>
No1Lives4Ever
<blockquote> <p>If I changed the MINPODS parameter to "1" - why I still have 6 pods?</p> </blockquote> <p>I believe the answer is because of the <code>&lt;unknown&gt;/60%</code> present in the output. <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">The fine manual</a> states:</p> <blockquote> <p>Please note that if some of the pod's containers do not have the relevant resource request set, CPU utilization for the pod will not be defined and the autoscaler will not take any action for that metric</p> </blockquote> <p>and one can see an example of <code>0% / 50%</code> in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">the walkthrough page</a>. Thus, I would believe that since kubernetes cannot prove <em>what</em> percentage of CPU is being consumed -- neither above nor below the target -- it takes no action for fear of making <em>whatever</em> the situation is worse.</p> <p>As for why there is a <code>&lt;unknown&gt;</code>, I would hazard a guess it's the dreaded heapster-to-metrics-server cutover that might be obfuscating that information from the kubernetes API. Regrettably, I don't have first-hand experience testing that theory, in order to offer you concrete steps beyond "see if your cluster is collecting metrics in a place that kubernetes can see them."</p>
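<p>In practical terms, the way to turn that <code>&lt;unknown&gt;</code> into a number is to give every container in the target Deployment a CPU request, since the HPA computes utilization as a percentage of that request. A minimal sketch (the container name, image and the 200m figure are just placeholders):</p> <pre><code>spec:
  template:
    spec:
      containers:
      - name: my-container            # assumed name
        image: my-image:latest        # placeholder image
        resources:
          requests:
            cpu: 200m                 # the 60% target is measured against this value
</code></pre>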
mdaniel
<p>I am using Kubespray with Kubernetes 1.9</p> <p>What I'm seeing is the following when I try to interact with pods on my new nodes in anyway through kubectl. Important to note that the nodes are considered to be healthy and are having pods scheduled on them appropriately. The pods are totally functional.</p> <pre><code> ➜ Scripts k logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host </code></pre> <p>I am able to ping to the kubeworker nodes both locally where I am running kubectl and from all masters by both IP and DNS.</p> <pre><code>➜ Scripts ping kubeworker-rwva1-prod-14 PING kubeworker-rwva1-prod-14 (10.0.0.111): 56 data bytes 64 bytes from 10.0.0.111: icmp_seq=0 ttl=63 time=88.972 ms ^C pubuntu@kubemaster-rwva1-prod-1:~$ ping kubeworker-rwva1-prod-14 PING kubeworker-rwva1-prod-14 (10.0.0.111) 56(84) bytes of data. 64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=1 ttl=64 time=0.259 ms 64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=2 ttl=64 time=0.213 ms ➜ Scripts k get nodes NAME STATUS ROLES AGE VERSION kubemaster-rwva1-prod-1 Ready master 174d v1.9.2+coreos.0 kubemaster-rwva1-prod-2 Ready master 174d v1.9.2+coreos.0 kubemaster-rwva1-prod-3 Ready master 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-1 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-10 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-11 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-12 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-14 Ready node 16d v1.9.2+coreos.0 kubeworker-rwva1-prod-15 Ready node 14d v1.9.2+coreos.0 kubeworker-rwva1-prod-16 Ready node 6d v1.9.2+coreos.0 kubeworker-rwva1-prod-17 Ready node 4d v1.9.2+coreos.0 kubeworker-rwva1-prod-18 Ready node 4d v1.9.2+coreos.0 kubeworker-rwva1-prod-19 Ready node 6d v1.9.2+coreos.0 kubeworker-rwva1-prod-2 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-20 Ready node 6d v1.9.2+coreos.0 kubeworker-rwva1-prod-21 Ready node 6d v1.9.2+coreos.0 kubeworker-rwva1-prod-3 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-4 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-5 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-6 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-7 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-8 Ready node 174d v1.9.2+coreos.0 kubeworker-rwva1-prod-9 Ready node 174d v1.9.2+coreos.0 </code></pre> <p>When I describe a broken node, it looks identical to one of my functioning ones.</p> <pre><code>➜ Scripts k describe node kubeworker-rwva1-prod-14 Name: kubeworker-rwva1-prod-14 Roles: node Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=kubeworker-rwva1-prod-14 node-role.kubernetes.io/node=true role=app-tier Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Tue, 17 Jul 2018 19:35:08 -0700 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasSufficientMemory kubelet has 
sufficient memory available DiskPressure False Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:08 -0700 KubeletHasNoDiskPressure kubelet has no disk pressure Ready True Fri, 03 Aug 2018 18:44:59 -0700 Tue, 17 Jul 2018 19:35:18 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled Addresses: InternalIP: 10.0.0.111 Hostname: kubeworker-rwva1-prod-14 Capacity: cpu: 32 memory: 147701524Ki pods: 110 Allocatable: cpu: 31900m memory: 147349124Ki pods: 110 System Info: Machine ID: da30025a3f8fd6c3f4de778c5b4cf558 System UUID: 5ACCBB64-2533-E611-97F0-0894EF1D343B Boot ID: 6b42ba3e-36c4-4520-97e6-e7c6fed195e2 Kernel Version: 4.4.0-130-generic OS Image: Ubuntu 16.04.4 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.1 Kubelet Version: v1.9.2+coreos.0 Kube-Proxy Version: v1.9.2+coreos.0 ExternalID: kubeworker-rwva1-prod-14 Non-terminated Pods: (5 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system calico-node-cd7qg 150m (0%) 300m (0%) 64M (0%) 500M (0%) kube-system kube-proxy-kubeworker-rwva1-prod-14 150m (0%) 500m (1%) 64M (0%) 2G (1%) kube-system nginx-proxy-kubeworker-rwva1-prod-14 25m (0%) 300m (0%) 32M (0%) 512M (0%) prometheus prometheus-prometheus-node-exporter-gckzj 0 (0%) 0 (0%) 0 (0%) 0 (0%) rabbit-relay rabbit-relay-844d6865c7-q6fr2 0 (0%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 325m (1%) 1100m (3%) 160M (0%) 3012M (1%) Events: &lt;none&gt; ➜ Scripts k describe node kubeworker-rwva1-prod-11 Name: kubeworker-rwva1-prod-11 Roles: node Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=kubeworker-rwva1-prod-11 node-role.kubernetes.io/node=true role=test Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Fri, 09 Feb 2018 21:03:46 -0800 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Fri, 03 Aug 2018 18:46:31 -0700 Fri, 09 Feb 2018 21:03:38 -0800 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletHasNoDiskPressure kubelet has no disk pressure Ready True Fri, 03 Aug 2018 18:46:31 -0700 Mon, 16 Jul 2018 13:24:58 -0700 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 10.0.0.218 Hostname: kubeworker-rwva1-prod-11 Capacity: cpu: 32 memory: 131985484Ki pods: 110 Allocatable: cpu: 31900m memory: 131633084Ki pods: 110 System Info: Machine ID: 0ff6f3b9214b38ad07c063d45a6a5175 System UUID: 4C4C4544-0044-5710-8037-B3C04F525631 Boot ID: 4d7ec0fc-428f-4b4c-aaae-8e70f374fbb1 Kernel Version: 4.4.0-87-generic OS Image: Ubuntu 16.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.1 Kubelet Version: v1.9.2+coreos.0 Kube-Proxy Version: v1.9.2+coreos.0 ExternalID: kubeworker-rwva1-prod-11 Non-terminated Pods: (6 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- ingress-nginx-internal default-http-backend-internal-7c8ff87c86-955np 10m (0%) 10m (0%) 20Mi (0%) 20Mi (0%) kube-system calico-node-8fzk6 150m (0%) 300m (0%) 64M (0%) 500M (0%) kube-system kube-proxy-kubeworker-rwva1-prod-11 150m (0%) 500m (1%) 64M (0%) 2G (1%) kube-system nginx-proxy-kubeworker-rwva1-prod-11 25m (0%) 300m (0%) 32M (0%) 512M (0%) prometheus prometheus-prometheus-kube-state-metrics-7c5cbb6f55-jq97n 0 (0%) 0 (0%) 0 (0%) 0 (0%) prometheus prometheus-prometheus-node-exporter-7gn2x 0 (0%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 335m (1%) 1110m (3%) 176730Ki (0%) 3032971520 (2%) Events: &lt;none&gt; </code></pre> <p>What's going on?</p> <pre><code>➜ k logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host ➜ cat /etc/hosts | head -n1 10.0.0.111 kubeworker-rwva1-prod-14 ubuntu@kubemaster-rwva1-prod-1:~$ ping kubeworker-rwva1-prod-14 PING kubeworker-rwva1-prod-14 (10.0.0.111) 56(84) bytes of data. 64 bytes from kubeworker-rwva1-prod-14 (10.0.0.111): icmp_seq=1 ttl=64 time=0.275 ms ^C --- kubeworker-rwva1-prod-14 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms ubuntu@kubemaster-rwva1-prod-1:~$ kubectl logs -f -n prometheus prometheus-prometheus-node-exporter-gckzj Error from server: Get https://kubeworker-rwva1-prod-14:10250/containerLogs/prometheus/prometheus-prometheus-node-exporter-gckzj/prometheus-node-exporter?follow=true: dial tcp: lookup kubeworker-rwva1-prod-14 on 10.0.0.3:53: no such host </code></pre>
Brando__
<blockquote> <p>What's going on?</p> </blockquote> <p>That name needs to be resolvable from your workstation, because for <code>kubectl logs</code> and <code>kubectl exec</code>, the API sends down the URL for the client to interact <em>directly</em> with the <code>kubelet</code> on the target Node (to ensure that all traffic in the world doesn't travel through the API server).</p> <p>Thankfully, kubespray has a knob through which you can tell kubernetes to prefer the Node's <code>ExternalIP</code> (or, of course, <code>InternalIP</code> if you prefer): <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/roles/kubernetes/master/defaults/main.yml#L82" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray/blob/v2.5.0/roles/kubernetes/master/defaults/main.yml#L82</a></p>
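<p>For reference, the override goes into your kubespray group vars and looks roughly like the line below; the variable name is taken from the linked defaults file, so double-check it against your kubespray version before relying on it (it ultimately feeds the apiserver's <code>--kubelet-preferred-address-types</code> flag):</p> <pre><code># group_vars/k8s-cluster.yml (kubespray override; verify the exact key in your version)
kubelet_preferred_address_types: 'InternalIP,Hostname,ExternalDNS,ExternalIP'
</code></pre>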
mdaniel
<h1>Context</h1> <p>I have a java application built as a docker image.</p> <p>The image is deployed in a k8s cluster.</p> <p>In the java application, I want to connect to the api server and save something in Secrets.</p> <p>How can I do that with the k8s java client?</p> <h1>Current Attempts</h1> <p>The <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">k8s official document</a> says:</p> <blockquote> <p>From within a pod the recommended ways to connect to API are:</p> <ul> <li>run kubectl proxy in a sidecar container in the pod, or as a background process within the container. This proxies the Kubernetes API to the localhost interface of the pod, so that other processes in any container of the pod can access it.</li> <li>use the Go client library, and create a client using the rest.InClusterConfig() and kubernetes.NewForConfig() functions. They handle locating and authenticating to the apiserver.</li> </ul> </blockquote> <p>But I can't find similar functions or similar examples in the java client.</p>
kevin807210561
<p>With the assumption that your Pod has a <code>serviceAccount</code> automounted -- which is the default unless you have specified otherwise -- the <a href="https://github.com/kubernetes-client/java/blob/client-java-parent-2.0.0/util/src/main/java/io/kubernetes/client/util/ClientBuilder.java#L119" rel="nofollow noreferrer"><code>ClientBuilder.cluster()</code></a> method reads the API URL from the environment, reads the cluster CA from the well known location, and similarly the <code>ServiceAccount</code> token from that same location.</p> <p>Then, while not exactly "create a Secret," this <a href="https://github.com/kubernetes-client/java/blob/client-java-parent-2.0.0/examples/src/main/java/io/kubernetes/client/examples/PatchExample.java" rel="nofollow noreferrer"><code>PatchExample</code></a> performs a mutation operation which one could generalize into "create or update a Secret."</p>
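<p>One prerequisite the client library cannot supply on its own is authorization: the Pod's <code>ServiceAccount</code> needs RBAC permission to write Secrets in its namespace. A hedged sketch of what that could look like (namespace, role name and subject are all placeholders):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace           # placeholder
  name: secret-writer
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: my-namespace
  name: secret-writer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-writer
subjects:
- kind: ServiceAccount
  name: default                     # or whichever serviceAccount the Pod actually runs as
  namespace: my-namespace
</code></pre>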
mdaniel
<p>Our customer uses an internal network. We have a k8s application, and some yaml files need to download images from the internet. I have a win10 computer from which I can ssh to the internal server and also access the internet. How can I download the images and then upload them to the internal server?</p> <p>Some of the images to download would be:</p> <pre><code>chenliujin/defaultbackend (nginx-default-backend.yaml) quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 </code></pre>
user84592
<blockquote> <p>How can I download the images and then upload them to the internal server?</p> </blockquote> <p>The shortest path to success is</p> <pre><code>ssh the-machine-with-internet -- 'bash -ec \ "docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 1&gt;&amp;2 ; \ docker save quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"' \ | ssh the-machine-without-internet -- 'docker load' </code></pre> <p>The <code>1&gt;&amp;2</code> keeps <code>docker pull</code>'s progress output out of the tar stream that <code>docker load</code> reads on the other end. You'll actually need to repeat that <code>ssh machine-without-internet -- docker load</code> bit for every Node in the cluster, otherwise they'll attempt to pull the image when they don't find it already in <code>docker images</code>, which brings us to ...</p> <p>You are also free to actually cache the intermediate file, if you wish, as in:</p> <pre><code>ssh machine-with-internet -- 'bash -ec \ "docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0 ; \ docker save -o /some/directory/nginx-ingress-0.15.0.tar quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"' scp machine-with-internet:/some/directory/nginx-ingress-0.15.0.tar /some/other/place # and so forth, including only optionally running the first pull-and-save step </code></pre> <p>It is entirely possible to use an <code>initContainer:</code> in the PodSpec to implement any kind of pre-loading of docker images before the main Pod's containers attempt to start, but that's likely going to clutter your PodSpec unless it's pretty small and straightforward.</p> <p>Having said all of that, as @KonstantinVustin already correctly said: having a local docker repository for mirroring the content will save you a ton of heartache</p>
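<p>Once the image has been <code>docker load</code>ed on every Node, it also helps to make the pull policy explicit in the manifest so kubernetes is content with the local copy. A fragment of the container spec (the container name is a placeholder; the image is the one from the question):</p> <pre><code>containers:
- name: nginx-ingress-controller      # placeholder name
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
  imagePullPolicy: IfNotPresent       # use the pre-loaded copy instead of contacting a registry
</code></pre>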
mdaniel
<p>I have a question regarding what is the best approach with K8S in AWS.</p> <p>The way I see it, either I use the EBS volume directly for the PV and PVC, or I mount the EBS volume as a regular folder on my EC2 instance and then use those mounted folders for my PV and PVC.</p> <p>Which approach is better in your opinion? It is important to note that I want my K8s to be cloud agnostic, so maybe forcing an EBS configuration is worse than using a folder, so that the EC2 instance does not care where the folder comes from.</p> <p>many thanks</p>
eran meiri
<blockquote> <p>Which approach is better in your opinion?</p> </blockquote> <p>Without question: using the PV and PVC. Half the reason will go here, and the other half below. By declaring those as <em>managed resources</em>, kubernetes will cheerfully take care of attaching the volumes to the Node it is scheduling the Pod upon, and detaching it from the Node when the Pod is unscheduled. That will matter in a huge way if a Node reboots, for example, because the attach-detach cycle will happen transparently, no Pager Duty involved. That will not be true if you are having to coordinate amongst your own instances who is alive and should have the volume attached at this moment.</p> <blockquote> <p>It is important to note that I want my K8s to be cloud agnostic, so maybe forcing an EBS configuration is worse than using a folder, so that the EC2 instance does not care where the folder comes from.</p> </blockquote> <p>It still will be cloud agnostic, because what you have told kubernetes -- declaratively, I'll point out, using just text in a yaml file -- is that you wish for some persistent storage to be volume mounted into your container(s) before they are launched. Only drilling down into the nitty gritty will surface the fact that it's provided by an AWS EBS volume. I would almost guarantee you could move those descriptors over to GKE (or Azure's thing) with about 90% of the text exactly the same.</p>
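<p>To illustrate the cloud-agnostic point, the application side of the manifest only ever has to say something like the sketch below; whether an EBS volume, a GCE persistent disk, or something else ends up backing it is decided by the cluster's storage provisioning, not by this YAML (names and the size are placeholders):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
# ...and inside the PodSpec that consumes it:
# volumes:
# - name: data
#   persistentVolumeClaim:
#     claimName: data-claim
</code></pre>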
mdaniel
<p>I do understand RBAC and I'm able to create a role using rules and subjects, and bind it to a user.</p> <p>For example, this role can only list the pods:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: namespace: development name: list-pods rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "list"] </code></pre> <p>Now we are using namespaces for every environment we have (dev, staging, prd). Now how do I have to provide this role to users in different namespaces? Do I have to create a clusterRole and bind it with a normal rolebinding or do I have to write the above .yaml once for dev, once for uat and once for prd? Are there somewhere rules written about how to handles those cases? </p>
DenCowboy
<blockquote> <p>Now how do I have to provide this role to users in different namespaces?</p> </blockquote> <p>You do if you want to be able to constrain the ability to list <code>Pod</code>s for that <code>Subject</code> on a per <code>Namespace</code> basis. For example, you might not want people to be able to see <code>Pod</code>s in <code>kube-system</code> (or a hypothetical <code>internal-security</code> namespace). Using the ability to list <code>Pod</code>s as an example makes this hard to imagine, but the ability to list, or view, or both, <code>Secret</code>s or <code>ConfigMap</code>s may make this more tangible. Presumably a <code>Subject</code> can view <code>Secret</code>s for their <em>own</em> project -- or even maybe not -- but not for other projects within the company. That kind of thing.</p> <p>That gets even more real when one thinks about the ability to <code>exec</code> into arbitrary <code>Pod</code>s -- because that's the biggest risk that I can think of to the safety and confidentiality of applications in the cluster.</p> <blockquote> <p>Do I have to create a clusterRole and bind it with a normal rolebinding</p> </blockquote> <p>No, one uses a <code>ClusterRoleBinding</code> for that purpose. "Have to" is not the right question; it depends on whether you want the binding to apply to all namespaces, current <strong>and future</strong>.</p> <blockquote> <p>or do I have to write the above .yaml once for dev, once for uat and once for prd?</p> </blockquote> <p>That also depends on whether those <code>Namespaces</code> have identical risk and identical <code>Subject</code>s who access them.</p> <blockquote> <p>Are there somewhere rules written about how to handles those cases?</p> </blockquote> <p>Definitely not; there's not one-size-fits-all for cluster security. It all depends on the kind of risk one is trying to drive down.</p> <p>As a for-your-consideration, you're not obligated to use the RBAC constraints: you can certainly create a <code>ClusterRoleBinding</code> for every <code>Subject</code> to the <code>cluster-admin</code> <code>ClusterRole</code> and voilà, no more permission management. No more safeguards, either, but that's the spectrum.</p>
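<p>If the rules really are identical for dev, staging and prd, one common pattern is to declare them once as a <code>ClusterRole</code> and then bind it per namespace with ordinary <code>RoleBinding</code>s, which preserves the per-namespace choice described above. A sketch reusing the <code>list-pods</code> rules from the question (the subject name is a placeholder):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: list-pods
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: list-pods
  namespace: development        # repeat this binding in staging / prd only where desired
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: list-pods
subjects:
- kind: User
  name: jane                    # placeholder subject
  apiGroup: rbac.authorization.k8s.io
</code></pre>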
mdaniel
<p>I have a dockerfile that looks like this at the moment:</p> <pre><code>FROM golang:1.8-alpine COPY ./ /src ENV GOOGLE_CLOUD_PROJECT = "snappy-premise-118915" RUN apk add --no-cache git &amp;&amp; \ apk --no-cache --update add ca-certificates &amp;&amp; \ cd /src &amp;&amp; \ go get -t -v cloud.google.com/go/pubsub &amp;&amp; \ CGO_ENABLED=0 GOOS=linux go build main.go # final stage FROM alpine ENV LATITUDE "-121.464" ENV LONGITUDE "36.9397" ENV SENSORID "sensor1234" ENV ZIPCODE "95023" ENV INTERVAL "3" ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915" ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json" ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO" RUN apk --no-cache --update add ca-certificates WORKDIR /app COPY --from=0 /src/main /app/ COPY --from=0 /src/key.json /app/ ENTRYPOINT /app/main </code></pre> <p>and the pod config looks like this:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: sensorpub spec: template: metadata: labels: app: sensorpub spec: volumes: - name: google-cloud-key secret: secretName: pubsub-key containers: - name: sensorgen image: gcr.io/snappy-premise-118915/sensorgen:v1 volumeMounts: - name: google-cloud-key mountPath: /var/secrets/google env: - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/secrets/google/key.json </code></pre> <p>I want to be able to pass in these environment vars:</p> <pre><code>ENV LATITUDE "-121.464" ENV LONGITUDE "36.9397" ENV SENSORID "sensor1234" ENV ZIPCODE "95023" ENV INTERVAL "3" ENV GOOGLE_CLOUD_PROJECT "snappy-premise-118915" ENV GOOGLE_APPLICATION_CREDENTIALS "/app/key.json" ENV GRPC_GO_LOG_SEVERITY_LEVEL "INFO" </code></pre> <p>I want to be able to set the environment variables in the pod config so that the docker file can use those...how do I do that instead of just coding them into the docker image directly?</p>
lightweight
<blockquote> <p>I want to be able to set the environment variables in the pod config so that the docker file can use those...how do I do that instead of just coding them into the docker image directly?</p> </blockquote> <p>There is no need to specify <strong>any</strong> <code>ENV</code> directive in a Dockerfile; those directives only provide defaults in the case where (as in your example <code>PodSpec</code>) they are not provided at runtime.</p> <p>The "how" is to do exactly what you have done in your example <code>PodSpec</code>: populate the <code>env:</code> array with the environment variables you wish to appear in the Pod</p>
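<p>Spelled out, the remaining variables from the Dockerfile can simply join the same <code>env:</code> array next to the secret-backed entries, as plain values (the values below are just the defaults copied from the question):</p> <pre><code>env:
- name: LATITUDE
  value: "-121.464"
- name: LONGITUDE
  value: "36.9397"
- name: SENSORID
  value: sensor1234
- name: ZIPCODE
  value: "95023"
- name: INTERVAL
  value: "3"
- name: GOOGLE_CLOUD_PROJECT
  value: snappy-premise-118915
- name: GRPC_GO_LOG_SEVERITY_LEVEL
  value: INFO
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/secrets/google/key.json   # already present in the PodSpec above
</code></pre>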
mdaniel
<p>We have multiple development teams who work on and deploy their applications on kubernetes. We use helm to deploy our applications on kubernetes.</p> <p>Currently the challenge we are facing is with one of our shared clusters. We would like to deploy a separate tiller for each team, so that each team only has access to its own resources. The default cluster-admin role will not help us here, and we don't want that.</p> <p>Let's say we have multiple namespaces for one team. I would want to deploy tiller which has permission to work with resources exist or need to be created in these namespaces.</p> <p>In other words: team &gt; multiple namespaces, with tiller using a service account that has a role (with full access to those namespaces - not to all of them) associated with it.</p>
Tarun Prakash
<blockquote> <p>I would want to deploy tiller which has permission to work with resources exist or need to be created in these namespaces</p> </blockquote> <p>According to <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">the fine manual</a>, you'll need a <code>ClusterRole</code> per team, defining the kinds of operations on the kinds of resources, but then use a <strong><code>RoleBinding</code></strong> to scope those rules to a specific namespace. The two ends of the binding target will be the team's tiller's <code>ServiceAccount</code> and the team's <code>ClusterRole</code>, and then one <code>RoleBinding</code> instance per <code>Namespace</code> (even though they will be textually identical except for the <code>namespace:</code> portion)</p> <p>I actually would expect you could make an internal helm chart that would automate the specifics of that relationship, and then <code>helm install --name team-alpha --set team-namespaces=ns-alpha,ns-beta my-awesome-chart</code> and then grant <em>your</em> tiller <code>cluster-admin</code> or whatever more restrictive <code>ClusterRole</code> you wish.</p>
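<p>Spelled out for one hypothetical team (every name below is invented), the binding side looks roughly like this; the team's <code>ClusterRole</code> holds the allowed verbs and resources as described above, and the <code>RoleBinding</code> block is repeated once per namespace the team owns:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller-team-alpha
  namespace: team-alpha-ns1
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: team-alpha-tiller
  namespace: team-alpha-ns1          # stamp this out again for team-alpha-ns2, ns3, ...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: team-alpha-tiller            # the team's ClusterRole with its allowed rules
subjects:
- kind: ServiceAccount
  name: tiller-team-alpha
  namespace: team-alpha-ns1
</code></pre>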
mdaniel
<p>Done according to <a href="https://kubernetes.io/docs/tasks/administer-cluster/setup-ha-etcd-with-kubeadm/" rel="nofollow noreferrer">this article</a>.</p> <p>I installed Kubernetes. Then installed etcd cluster that works via HTTPS and listens to the localhost interface only (reachable from inside any Docker container). Now I need persistent volume to install DB cluster. Chose Portworx. It generated daemonset YAML-config. Here is the description of installed daemonset:</p> <pre><code># kubectl describe daemonset portworx --namespace=kube-system Name: portworx Selector: name=portworx Node-Selector: &lt;none&gt; Labels: name=portworx Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"portworx.com/install-source":"http://install.portworx.com/?c=bp_clust... portworx.com/install-source=http://install.portworx.com/?c=bp_cluster&amp;k=etcd:https://127.0.0.1:2379&amp;kbver=1.11.0&amp;s=/dev/xvda1&amp;d=ens3&amp;m=ens3&amp;stork=false&amp;ca=/etc/kubernetes/pki/etcd/ca.crt%%20&amp;cert=/etc... Desired Number of Nodes Scheduled: 2 Current Number of Nodes Scheduled: 2 Number of Nodes Scheduled with Up-to-date Pods: 2 Number of Nodes Scheduled with Available Pods: 0 Number of Nodes Misscheduled: 0 Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: name=portworx Service Account: px-account Containers: portworx: Image: portworx/oci-monitor:1.3.4 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: -k etcd:https://127.0.0.1:2379 -c bp_cluster -d ens3 -m ens3 -s /dev/xvda1 -ca /etc/kubernetes/pki/etcd/ca.crt -cert /etc/kubernetes/pki/etcd/server.crt -key /etc/kubernetes/pki/etcd/server.key -x kubernetes Liveness: http-get http://127.0.0.1:9001/status delay=840s timeout=1s period=30s #success=1 #failure=3 Readiness: http-get http://127.0.0.1:9015/health delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: PX_TEMPLATE_VERSION: v3 Mounts: /etc/pwx from etcpwx (rw) /etc/systemd/system from sysdmount (rw) /host_proc/1/ns from proc1nsmount (rw) /opt/pwx from optpwx (rw) /var/run/dbus from dbusmount (rw) /var/run/docker.sock from dockersock (rw) Volumes: dockersock: Type: HostPath (bare host directory volume) Path: /var/run/docker.sock HostPathType: etcpwx: Type: HostPath (bare host directory volume) Path: /etc/pwx HostPathType: optpwx: Type: HostPath (bare host directory volume) Path: /opt/pwx HostPathType: proc1nsmount: Type: HostPath (bare host directory volume) Path: /proc/1/ns HostPathType: sysdmount: Type: HostPath (bare host directory volume) Path: /etc/systemd/system HostPathType: dbusmount: Type: HostPath (bare host directory volume) Path: /var/run/dbus HostPathType: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 22m daemonset-controller Created pod: portworx-67w7m Normal SuccessfulCreate 22m daemonset-controller Created pod: portworx-mxtr8 </code></pre> <p>But in the log of portworx I see that it is trying to connect to etcd via plain HTTP and obviously get error because cannot interpreter the response wrapped to SSL:</p> <pre><code># kubectl logs -f pod/portworx-67w7m --namespace=kube-system &lt;some logs are erased du to lack of relevance&gt; Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: time="2018-07-02T13:19:25Z" level=error msg="Could not load config file /etc/pwx/config.json due to: Error in obtaining etcd version: Get http://127.0.0.1:2379/version: net/http: HTTP/1.x transport connection broken: malformed HTTP 
response \"\\x15\\x03\\x01\\x00\\x02\\x02\". Please visit http://docs.portworx.com for more information." Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: PXPROCS: px daemon exited with code: 1 Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: 2107 Jul 02 13:19:25 ip-172-31-18-91 px-runc[25417]: 2018-07-02 13:19:25,474 INFO exited: pxdaemon (exit status 1; not expected) </code></pre> <p>What I am doing wrong?</p>
Michael A.
<p>I have no idea why they didn't surface the "cannot read the <code>-cert</code> file" error, but you specified <code>/etc/kubernetes/pki/etcd/server.crt</code> in the options but did not volume mount <code>/etc/kubernetes/pki</code> into the container. For obvious reasons, kubernetes will not <em>automatically</em> volume mount its pki directory, thus, you must specify it.</p> <p>If that <code>DaemonSet</code> was generated for you (as it appears based on the annotation), then what happened is that <em>they</em> are <a href="https://docs.portworx.com/scheduler/kubernetes/etcd-certs-using-secrets.html#edit-portworx-spec" rel="nofollow noreferrer">expecting the certs to live in <code>/etc/pwx/etcdcerts</code></a> (it's in their <a href="https://docs.portworx.com/scheduler/kubernetes/install.html#secure-etcd-and-certificates" rel="nofollow noreferrer">manual provisioning docs</a> also), so when you provided a non-<code>/etc</code> path, the two worlds collided.</p>
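<p>In DaemonSet terms the missing piece is roughly the fragment below; the volume name is made up, and the host path simply mirrors the <code>-ca</code>/<code>-cert</code>/<code>-key</code> arguments already present in the spec (adjust it if you relocate the certs to the path the Portworx docs expect):</p> <pre><code># add under the portworx container's volumeMounts:
- name: etcdcerts
  mountPath: /etc/kubernetes/pki
  readOnly: true
# ...and under the pod template's volumes:
- name: etcdcerts
  hostPath:
    path: /etc/kubernetes/pki
</code></pre>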
mdaniel
<p>I'm having a dockerfile that runs fine with CentOS as a baseimage and enabled systemd, as suggested on CentOS official docker hub image documentation - <a href="https://hub.docker.com/_/centos/" rel="nofollow noreferrer">https://hub.docker.com/_/centos/</a>.</p> <p>I'll have to start my container using this following command - </p> <pre><code>docker run -d -p 8080:8080 -e "container=docker" --privileged=true -d --security-opt seccomp:unconfined --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro myapplicationImage bash -c "/usr/sbin/init" </code></pre> <p>Till here, everything works like a charm, I can run my image and everything works fine. I'm trying to deploy my image to Azure Container service, so I was trying to create a yaml file that uses this docker image and creates a cluster. </p> <p><strong>My Yaml file looks like this.</strong></p> <pre><code>apiVersion: apps/v2beta1 kind: Deployment metadata: name: myapp-test spec: replicas: 1 template: metadata: labels: app: myapp-test spec: containers: - name: myapp-test image: myappregistry.azurecr.io/myapp-test:1.0 ports: - containerPort: 8080 args: ["--allow-privileged=true","bash"] securityContext: capabilities: add: ["SYS_ADMIN"] privileged: true command: [ "-c","/usr/sbin/init" ] imagePullSecrets: - name: myapp-secret-test --- apiVersion: v1 kind: Service metadata: name: myapp-test spec: type: LoadBalancer ports: - port: 8080 selector: app: myapp-test </code></pre> <p>This doesn't spin-up my image. The above is a kubernetes cluster yaml file. I've also tried Docker-Compose.</p> <pre><code>version: '3' services: myapp-test: build: ./myapp-folder environment: - container=docker volumes: - ./sys/fs/cgroup:/sys/fs/cgroup:ro ports: - "8082:8080" privileged: true cap_add: - SYS_ADMIN security_opt: - seccomp:unconfined command: "/usr/sbin/init" </code></pre> <p>Both of these configurations fails to create containers. I'm using same configuration as mentioned in above docker run time command and converted that into yaml. But runtime command works and I can access my application, but yaml files fail. Am I missing anything?</p> <p><strong>here is my kubernetes error:</strong></p> <pre><code>➜ $ kubectl get po --watch NAME READY STATUS RESTARTS AGE myapp-test-6957c57f6c-zmbt6 0/1 RunContainerError 4 9m myapp-test-6957c57f6c-zmbt6 0/1 CrashLoopBackOff 4 9m ➜ $ kubectl get svc --watch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE myapp-test LoadBalancer 10.0.X.XXX 120.XXX.XX.XXX 8080:30150/TCP 12m kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 43m </code></pre> <p><strong>In case of Docker Compose:</strong></p> <p>The container gets kicked in fine, but the service inside my application fails to start. I cannot reach my localhost:8080, but container keeps running.</p> <p>I'm thinking if it has something to do with my systemd enabled container while accessing it on compose or cluster?</p> <p>Thanks!</p>
Sidd Thota
<p>According to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">the fine manual</a>, if you provide <code>command:</code> it supersedes <code>ENTRYPOINT</code>, and <code>args:</code> supersedes <code>CMD</code>, meaning your final "command" that image runs is:</p> <pre><code>-c /usr/sbin/init --allow-privileged=true bash </code></pre> <p>which looks very suspicious with the leading <code>-c</code>, especially since your <code>docker-compose.yml</code> only contains <code>/usr/sbin/init</code>.</p>
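<p>Put differently, if the goal is just to reproduce the working <code>docker run ... bash -c "/usr/sbin/init"</code> invocation, the container spec would look closer to this hedged fragment (everything not shown is left as in the original Deployment; the <code>--allow-privileged=true</code> argument is dropped because it is a kubelet flag, not something the container consumes):</p> <pre><code>containers:
- name: myapp-test
  image: myappregistry.azurecr.io/myapp-test:1.0
  command: ["bash", "-c", "/usr/sbin/init"]   # supersedes both ENTRYPOINT and CMD
  ports:
  - containerPort: 8080
  securityContext:
    privileged: true
    capabilities:
      add: ["SYS_ADMIN"]
</code></pre>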
mdaniel