<p>I need to get a work-item from a work-queue and then sequentially run a series of containers to process each work-item. This can be done using initContainers (<a href="https://stackoverflow.com/a/46880653/94078">https://stackoverflow.com/a/46880653/94078</a>)</p> <p>What would be the recommended way of restarting the process to get the next work-item?</p> <ul> <li>Jobs seem ideal but don't seem to support an infinite/indefinite number of completions.</li> <li>Using a single Pod doesn't work because initContainers aren't restarted (<a href="https://github.com/kubernetes/kubernetes/issues/52345" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/52345</a>).</li> <li>I would prefer to avoid the maintenance/learning overhead of a system like argo or brigade.</li> </ul> <p>Thanks!</p>
<p>Jobs <em>should</em> be used for working with work queues. When using work queues you should <strong>not set</strong> the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#jobspec-v1-batch" rel="noreferrer"><code>.spec.completions</code></a> (or set it to <code>null</code>). In that case Pods will keep getting created until one of the Pods exits successfully. It is a little awkward exiting from the (main) container with a failure state on purpose, but this is the specification. You may set <code>.spec.parallelism</code> to your liking irrespective of this setting; I've set it to <code>1</code> as it appears you do not want any parallelism.</p> <p>In your question you did not specify what you want to do if the work queue gets empty, so I will give two solutions, one if you want to wait for new items (infinite) and one if you want to end the job if the work queue gets empty (finite, but indefinite number of items).</p> <p>Both examples use redis, but you can apply this pattern to your favorite queue. Note that the part that pops an item from the queue is not safe; if your Pod dies for some reason after having popped an item, that item will remain unprocessed or not fully processed. See the <a href="https://redis.io/commands/rpoplpush#pattern-reliable-queue" rel="noreferrer">reliable-queue pattern</a> for a proper solution.</p> <p>To implement the sequential steps on each work item I've used <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a>. Note that this really is a primitive solution, but you have limited options if you don't want to use some framework to implement a proper pipeline.</p> <blockquote> <p><strong>There is an <a href="https://asciinema.org/a/174054" rel="noreferrer">asciinema</a> if anyone would like to see this at work without deploying redis, etc.</strong></p> </blockquote> <h1>Redis</h1> <p>To test this you'll need to create, at a minimum, a redis Pod and a Service. I am using the example from <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="noreferrer">fine parallel processing work queue</a>. 
You can deploy that with:</p> <pre><code>kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-pod.yaml kubectl apply -f https://rawgit.com/kubernetes/website/master/docs/tasks/job/fine-parallel-processing-work-queue/redis-service.yaml </code></pre> <p>The rest of this solution expects that you have a service name <code>redis</code> in the same namespace as your Job and it does not require authentication and a Pod called <code>redis-master</code>.</p> <h1>Inserting items</h1> <p>To insert some items in the work queue use this command (you will need bash for this to work):</p> <pre><code>echo -ne "rpush job "{1..10}"\n" | kubectl exec -it redis-master -- redis-cli </code></pre> <h1>Infinite version</h1> <p>This version waits if the queue is empty thus it will never complete.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: primitive-pipeline-infinite spec: parallelism: 1 completions: null template: metadata: name: primitive-pipeline-infinite spec: volumes: [{name: shared, emptyDir: {}}] initContainers: - name: pop-from-queue-unsafe image: redis command: ["sh","-c","redis-cli -h redis blpop job 0 &gt;/shared/item.txt"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-1 image: busybox command: ["sh","-c","echo step-1 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-2 image: busybox command: ["sh","-c","echo step-2 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-3 image: busybox command: ["sh","-c","echo step-3 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] containers: - name: done image: busybox command: ["sh","-c","echo all done with `cat /shared/item.txt`; sleep 1; exit 1"] volumeMounts: [{name: shared, mountPath: /shared}] restartPolicy: Never </code></pre> <h1>Finite version</h1> <p>This version stops the job if the queue is empty. Note the trick that the pop init container checks if the queue is empty and all the subsequent init containers <strong>and</strong> the main container immediately exit if it is indeed empty - this is the mechanism that signals Kubernetes that the Job is completed and there is no need to create new Pods for it.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: primitive-pipeline-finite spec: parallelism: 1 completions: null template: metadata: name: primitive-pipeline-finite spec: volumes: [{name: shared, emptyDir: {}}] initContainers: - name: pop-from-queue-unsafe image: redis command: ["sh","-c","redis-cli -h redis lpop job &gt;/shared/item.txt; grep -q . 
/shared/item.txt || :&gt;/shared/done.txt"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-1 image: busybox command: ["sh","-c","[ -f /shared/done.txt ] &amp;&amp; exit 0; echo step-1 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-2 image: busybox command: ["sh","-c","[ -f /shared/done.txt ] &amp;&amp; exit 0; echo step-2 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] - name: step-3 image: busybox command: ["sh","-c","[ -f /shared/done.txt ] &amp;&amp; exit 0; echo step-3 working on `cat /shared/item.txt` ...; sleep 5"] volumeMounts: [{name: shared, mountPath: /shared}] containers: - name: done image: busybox command: ["sh","-c","[ -f /shared/done.txt ] &amp;&amp; exit 0; echo all done with `cat /shared/item.txt`; sleep 1; exit 1"] volumeMounts: [{name: shared, mountPath: /shared}] restartPolicy: Never </code></pre>
<p>I'm trying to create a Kubernetes cluster using Azure Management API.</p> <pre><code> var credentials = SdkContext.AzureCredentialsFactory .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION")); var azure = Azure .Configure() .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic) .Authenticate(credentials) .WithDefaultSubscription(); var kubernetesCluster = azure.KubernetesClusters.Define("aks").WithRegion(Region.EuropeWest) .WithNewResourceGroup("aksResourceGroup").WithLatestVersion().WithRootUsername("aksUsername") .WithSshKey(sshPublicKey).WithServicePrincipalClientId("clientId") .WithServicePrincipalSecret("secret").DefineAgentPool("ap").WithVirtualMachineCount(1) .WithVirtualMachineSize(ContainerServiceVirtualMachineSizeTypes.StandardA0).Attach() .WithDnsPrefix("dns-aks").Create(); </code></pre> <p>In the last line, a CloudException is thrown with the message: Subscription [] could not be found.</p> <p>Even though an exception is thrown, the resource group is created but it is empty.</p> <p>I have logged-in using Azure CLI with that service principal and I have run </p> <pre><code>az account list </code></pre> <p>with the following response:</p> <pre><code>[ { "cloudName": "AzureCloud", "id": "SUBSCRIPTION ID FROM EXCEPTION ABOVE", "isDefault": true, "name": "Pay-As-You-Go", "state": "Enabled", "tenantId": "xxx", "user": { "name": "xxxx", "type": "servicePrincipal" } } ] </code></pre> <p>The App registration exists In Azure Active Directory > App registrations > All apps. I even gave permissions to all possible APIs.</p> <p>Is there anything I did wrong in order to receive that exception message?</p>
<p>According to the error log, it seems you haven't set a default subscription for your service principal. You could use <code>az account set --subscription &lt;name or id&gt;</code> to set it.</p> <p>If it still does not work, I suggest using the following code.</p> <pre><code> var azure = Azure .Configure() .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic) .Authenticate(credentials) .WithSubscription("subscription id"); </code></pre> <p>Note: You should give your service principal the Owner role at your subscription level. See this <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#assign-application-to-role" rel="nofollow noreferrer">link</a>. It seems you have already done that, but I suggest double-checking.</p>
<p>I have a Kubernetes cluster having some application pods which are generating multiple log files inside each pod. I want to log those files in a centralized logging solution like elasticsearch. The logs are neither part of stdout/stderr of pod, nor they are mounted as host volume. So basically I need some solution which reads a file from my pod and sends it to elasticsearch or some other logging solution.</p> <p>Also, I need a solution for same use case but in case of standalone Docker containers not running on Kubernetes.</p>
<blockquote> <p>nor they are mounted as host volume</p> </blockquote> <p>There's your problem: you will want to expose a path that <em>is</em> a host volume, such that your existing centralized log slurping tool can see them. The only asterisk I know of when doing that is to be mindful of the permissions, which won't be an issue if your process is running as root, but will be if non-root -- you'll need to volume mount a host directory that has the permissions enabling the container process to open files.</p> <p>But, I've been around long enough to know that there are always "yes, but"s when dealing with containerizing software, so there are two other alternatives you may consider.</p> <p>If you are using one of the existing log aggregation tools that expects all logging output to appear on stdout and stderr of the container, then you will want to take advantage of the ability of a root process inside the container to write to any file to send the logs to the "pid 1" of the container's stdout and stderr, in one of at least two ways I know of:</p> <h3>log directly to stdout</h3> <p>Some logging frameworks will tolerate being given a "file path" and will cheerfully just open it and begin writing to it:</p> <pre><code>&lt;log4j:configuration&gt; &lt;appender name="DOCKER_STDOUT" class="org.apache.log4j.FileAppender"&gt; &lt;param name="File" value="/proc/1/fd/1"/&gt; </code></pre> <p><em>BTW: that is only an example, I don't know right now if log4j tolerates such a thing</em></p> <h3>redirect to stdout</h3> <p>Similar to the previous tactic, with the advantage of not requiring an application configuration change, and working 100% of the time; but with the disadvantage of making the in-cluster deployment a lot more complicated.</p> <p>I had to do this very trick with the <a href="https://hub.docker.com/_/kong/" rel="nofollow noreferrer">kong:0.10</a> container, because their logging situation wrote only to files, and did not tolerate pointing at the file descriptors like above.</p> <p>One would need to modify the <code>command:</code> block to launch the application, and then spawn a <code>tail</code> for the in-container log, or use some kind of post-deployment trick to <code>exec</code> in and start a <code>tail</code>:</p> <pre><code>tail -f /the/inside/file.log &gt; /proc/1/fd/1 </code></pre> <p>choosing to use "tail -F" if the application rotates the file out from underneath <code>tail</code>, and using <code>nohup tail -F ... &amp;</code> if you need to protect the process from termination when the post-deployment <code>exec</code> shell exits</p>
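<p>To make the redirect-to-stdout tactic concrete, here is a rough sketch of what such a <code>command:</code> block could look like in a Kubernetes container spec; the image name, log path and start script are illustrative placeholders, not taken from any real chart:</p> <pre><code>containers:
- name: app
  image: your-app-image:latest          # placeholder image
  command: ["sh", "-c"]
  args:
    - |
      # start the application in the background (placeholder launcher),
      # then stream its file log to the container's stdout ("pid 1")
      /usr/local/bin/start-app &amp;
      exec tail -F /var/log/app/app.log &gt; /proc/1/fd/1
</code></pre>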
<p>I want to set an http/https proxy in pods and pass these variables via the environment. However, I need to set no_proxy as well. Which values should I put in it so that I don't break anything in Kubernetes-internal communication? As far as I know, there are some default services such as "kubernetes.default.svc".</p>
<p>You could, as illustrated in <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1467776" rel="nofollow noreferrer">OpenShift Container Platform bug 1467776</a>, add <code>.svc</code> to <code>no_proxy</code> (as in <a href="https://github.com/openshift/openshift-ansible/pull/4678/files" rel="nofollow noreferrer">PR 4678</a>)</p> <blockquote> <p><code>.svc</code> domain was added into service env file after installation.</p> </blockquote> <pre><code>[root@qe-gpei-etcd-sc-master-1 sysconfig]# grep NO_PROXY * -r atomic-openshift-master:NO_PROXY=.cluster.local,.svc,qe-gpei-etcd-sc-master-1,172.30.0.0/16,10.128.0.0/14 docker:NO_PROXY='.cluster.local,.svc,qe-gpei-etcd-sc-master-1' [root@qe-gpei-etcd-sc-master-1 sysconfig]# docker info |grep "No Proxy" WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled No Proxy: .cluster.local,.svc,qe-gpei-etcd-sc-master-1 </code></pre>
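<p>On the pod side, the proxy variables could then be passed along these lines; the proxy URL and the CIDR ranges are examples that must be adjusted to your own service and pod networks, while <code>.svc</code> and <code>.cluster.local</code> are the suffixes discussed above:</p> <pre><code>env:
- name: HTTP_PROXY
  value: "http://proxy.example.com:3128"   # example proxy address
- name: HTTPS_PROXY
  value: "http://proxy.example.com:3128"
- name: NO_PROXY
  value: "localhost,127.0.0.1,.svc,.cluster.local,kubernetes.default.svc,10.96.0.0/12,10.244.0.0/16"
</code></pre> <p>Note that not every client library understands CIDR notation in <code>NO_PROXY</code>, so the domain suffixes are the more portable part of the list.</p>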
<p>I am trying to create Kubernetes cluster using three VMs(Master – 10.x.x.4, Node1 – 10.x.x.150, Node2 – 10.x.x.160).</p> <p>I was able to create the guestbook application successfully following this link: <a href="http://kubernetes.io/v1.0/examples/guestbook/" rel="noreferrer">http://kubernetes.io/v1.0/examples/guestbook/</a>. Only one change I made to frontend-service.yaml: to use NodePort. I can access the frontend service using nodes IP and port number(10.x.x.150:30724 or 10.x.x.160:30724). So everything is working as expected but I am not able to access the frontend service using ClusterIP address(in my case 10.x.x.79).</p> <p>My understanding of NodePort is that the service can be accessed through cluster IP and also on a port on each node of the cluster. How can I access the service through ClusterIP so that I don’t have to access the each node? Am I missing something here?</p> <p><strong>service and pod details</strong></p> <p><strong>$sudo kubectl describe service frontend</strong></p> <pre><code>Name: frontend Namespace: default Labels: name=frontend Selector: name=frontend Type: NodePort IP: 10.x.x.79 Port: &lt;unnamed&gt; 80/TCP NodePort: &lt;unnamed&gt; 30724/TCP Endpoints: 172.x.x.13:80,172.x.x.14:80,172.x.x.11:80 Session Affinity: None No events. </code></pre> <p><strong>$sudo kubectl describe pod frontend-2b5us</strong></p> <pre><code>Name: frontend-2b5us Namespace: default Image(s): gcr.io/google_samples/gb-frontend:v3 Node: 10.x.x.150/10.x.x.150 Labels: name=frontend Status: Running Reason: Message: IP: 172.x.x.11 Replication Controllers: frontend (3/3 replicas created) Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 State: Running Started: Fri, 30 Oct 2015 04:00:40 -0500 Ready: True Restart Count: 0 </code></pre> <p>I tried to search but would not find any solution for my exact problem but I did find similar problem that looks like for GCE.</p> <p><a href="https://stackoverflow.com/questions/32618437/why-cant-i-access-my-kubernetes-service-via-its-ip">Why can&#39;t I access my Kubernetes service via its IP?</a></p>
<p>You do not have ClusterIP service. You do have a NodePort service. To access it, you connect to the NodePort on any of your nodes in the cluster, as you've already discovered. You do get load-balancing here. Even though you connect to a cluster node, the pod you get does not necessarily run on that particular node.</p> <p>Read the relevant section in the documentation at <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types</a> to learn about additional service types. You probably do not want NodePort on GCP.</p> <p>Talking about ClusterIP. To access a ClusterIP service for debugging purposes, you can run <code>kubectl port-forward</code>. You will not actually access the service, but you will directly connect to one of the pods.</p> <p>For example</p> <pre><code>kubectl port-forward frontend-2b5us 80 8080 </code></pre> <p>Now connect to localhost:8080</p> <p>More sophisticated command, which discovers the port on its own, given namespace <code>-n weave</code> and a selector. Taken from <a href="https://www.weave.works/docs/scope/latest/installing/" rel="noreferrer">https://www.weave.works/docs/scope/latest/installing/</a></p> <pre><code>kubectl port-forward -n weave \ "$(kubectl get -n weave pod \ --selector=weave-scope-component=app \ -o jsonpath='{.items..metadata.name}')" \ 4040 </code></pre>
<p>I tried to use configMap to mount some configs in a subdirectory. For example:</p> <pre><code>spec.template.spec.containers.[0].volumeMounts: - name: fh16-volume mountPath: /etc/fh-16/application.log subPath: my-config.txt spec.template.spec.volumes: - name: fh16-volume configMap: name: my-config </code></pre> <p>In this scenario, everything mounts as expected. But after any change to the configMap, the changes are not applied in the container; I need to recreate the pod for them to take effect.</p> <p>It looks like a bug, but maybe I made some mistake in my configuration? When I don't use the subPath directive, everything works as expected.</p>
<p>See this note in the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="nofollow noreferrer">Kubernetes docs</a>:</p> <blockquote> <p>Note: A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.</p> </blockquote>
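<p>A common workaround, if nothing else needs to live in that directory, is to drop <code>subPath</code> and mount the ConfigMap as a whole directory, since directory mounts do receive updates (after a short delay). A rough sketch using the names from the question:</p> <pre><code>spec.template.spec.containers.[0].volumeMounts:
- name: fh16-volume
  mountPath: /etc/fh-16          # whole directory, no subPath
spec.template.spec.volumes:
- name: fh16-volume
  configMap:
    name: my-config
    items:
    - key: my-config.txt
      path: my-config.txt        # appears as /etc/fh-16/my-config.txt
</code></pre>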
<p>I'm trying to use rewrite target from kubernetes with <code>gcloud</code> but it doesn't seem to be respected. My code is the following. Maybe there's something I'm not seeing:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: demo-ingress annotations: ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: projectip spec: rules: - host: mycustomdomain.com http: paths: - path: /* backend: serviceName: frontend servicePort: 80 - path: /api/* backend: serviceName: backend servicePort: 80 </code></pre> <p>When I do <code>curl mycustomdomain.com/api/something</code> my backend is always receiving <code>backend/api/something</code> instead of <code>backend/something</code>. I'm really out of ideas and I could use some help. </p>
<p>I am assuming you are using the kubernetes ingress-nginx.</p> <p>Looking at your ingress manifest, it seems like the rewrite annotation is wrong.</p> <p>According to the documentation it should be: <code>nginx.ingress.kubernetes.io/rewrite-target</code></p> <p>Here's the link to the documentation:</p> <p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#rewrite" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#rewrite</a></p>
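<p>Assuming it really is the nginx ingress controller serving this Ingress, the change would be just the annotation key, roughly:</p> <pre><code>metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>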
<p>I have set up a cluster where there are 2 nodes. One is Master and Other is a node, both on different Azure ubuntu VMs. For networking, I used Canal tool. <code> $ kubectl get nodes NAME STATUS ROLES AGE VERSION ubuntu-aniket1 Ready master 57m v1.10.0 ubutu-aniket Ready &lt;none&gt; 56m v1.10.0 </code></p> <p><code> $ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system canal-jztfd 3/3 Running 0 57m kube-system canal-mdbbp 3/3 Running 0 57m kube-system etcd-ubuntu-aniket1 1/1 Running 0 58m kube-system kube-apiserver-ubuntu-aniket1 1/1 Running 0 58m kube-system kube-controller-manager-ubuntu-aniket1 1/1 Running 0 58m kube-system kube-dns-86f4d74b45-8zqqr 3/3 Running 0 58m kube-system kube-proxy-k5ggz 1/1 Running 0 58m kube-system kube-proxy-vx9sq 1/1 Running 0 57m kube-system kube-scheduler-ubuntu-aniket1 1/1 Running 0 58m kube-system kubernetes-dashboard-54865c6fb9-kg5zt 1/1 Running 0 26m </code> When I tried to create kubernetes Dashboard with</p> <p><code> $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code> and set proxy as</p> <p><code>sh $ kubectl proxy --address 0.0.0.0 --accept-hosts '.*' Starting to serve on [::]:8001 </code> When I hit url <code>http://&lt;master IP&gt;:8001</code> in browser, it shows following output <code> { "paths": [ "/api", "/api/v1", "/apis", "/apis/", "/apis/admissionregistration.k8s.io", "/apis/admissionregistration.k8s.io/v1beta1", "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1", "/apis/apiregistration.k8s.io", "/apis/apiregistration.k8s.io/v1", "/apis/apiregistration.k8s.io/v1beta1", "/apis/apps", "/apis/apps/v1", "/apis/apps/v1beta1", "/apis/apps/v1beta2", "/apis/authentication.k8s.io", "/apis/authentication.k8s.io/v1", "/apis/authentication.k8s.io/v1beta1", "/apis/authorization.k8s.io", "/apis/authorization.k8s.io/v1", "/apis/authorization.k8s.io/v1beta1", "/apis/autoscaling", "/apis/autoscaling/v1", "/apis/autoscaling/v2beta1", "/apis/batch", "/apis/batch/v1", "/apis/batch/v1beta1", "/apis/certificates.k8s.io", "/apis/certificates.k8s.io/v1beta1", "/apis/crd.projectcalico.org", "/apis/crd.projectcalico.org/v1", "/apis/events.k8s.io", "/apis/events.k8s.io/v1beta1", "/apis/extensions", "/apis/extensions/v1beta1", "/apis/networking.k8s.io", "/apis/networking.k8s.io/v1", "/apis/policy", "/apis/policy/v1beta1", "/apis/rbac.authorization.k8s.io", "/apis/rbac.authorization.k8s.io/v1", "/apis/rbac.authorization.k8s.io/v1beta1", "/apis/storage.k8s.io", "/apis/storage.k8s.io/v1", "/apis/storage.k8s.io/v1beta1", "/healthz", "/healthz/autoregister-completion", "/healthz/etcd", "/healthz/ping", "/healthz/poststarthook/apiservice-openapi-controller", "/healthz/poststarthook/apiservice-registration-controller", "/healthz/poststarthook/apiservice-status-available-controller", "/healthz/poststarthook/bootstrap-controller", "/healthz/poststarthook/ca-registration", "/healthz/poststarthook/generic-apiserver-start-informers", "/healthz/poststarthook/kube-apiserver-autoregistration", "/healthz/poststarthook/rbac/bootstrap-roles", "/healthz/poststarthook/start-apiextensions-controllers", "/healthz/poststarthook/start-apiextensions-informers", "/healthz/poststarthook/start-kube-aggregator-informers", "/healthz/poststarthook/start-kube-apiserver-informers", "/logs", "/metrics", "/openapi/v2", "/swagger-2.0.0.json", "/swagger-2.0.0.pb-v1", "/swagger-2.0.0.pb-v1.gz", "/swagger.json", "/swaggerapi", "/version" ] } </code> But when I tries 
to hit <code>http://&lt;master IP&gt;:8001/ui</code> I am not able to see Kubernetes dashboard. Instead I see following output <code> { "paths": [ "/apis", "/apis/", "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1", "/healthz", "/healthz/etcd", "/healthz/ping", "/healthz/poststarthook/generic-apiserver-start-informers", "/healthz/poststarthook/start-apiextensions-controllers", "/healthz/poststarthook/start-apiextensions-informers", "/metrics", "/openapi/v2", "/swagger-2.0.0.json", "/swagger-2.0.0.pb-v1", "/swagger-2.0.0.pb-v1.gz", "/swagger.json", "/swaggerapi", "/version" ] } </code></p> <p>Could you please help me resolving dashboard issue?</p> <p>Thanks in advance</p>
<p>Try going to:</p> <pre><code>http://&lt;master IP&gt;:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre> <p>As mentioned here: <a href="https://github.com/kubernetes/dashboard" rel="noreferrer">https://github.com/kubernetes/dashboard</a></p>
<p>I use a Kubernetes Init container to provision the application's database. After this is done I want to provide the DB's credentials to the main container via environment variables.</p> <p>How can this be achieved?</p> <p>I don't want to create a Kubernetes Secret inside the Init container, since I don't want to save the credentials there!</p>
<p>I see several ways to achieve what you want:</p> <ol> <li><p>From my perspective, the best way is to use a Kubernetes Secret. @Nebril has already provided that idea in the comments. You can generate it from the Init Container and remove it with a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="noreferrer">PreStop</a> hook, for example. But you don't want to go that way. </p></li> <li><p>You can use a shared volume which is used by both the Init Container and your main container. The Init Container generates the environment variables file <code>db_cred.env</code> in the volume, which you can mount, for example, at the <code>/env</code> path. After that, you can load it by modifying the <code>command</code> of your container in the Pod spec to run <code>source /env/db_cred.env</code> before the main script which starts your application. @user2612030 already gave you that idea; a rough sketch of this approach is shown at the end of this answer.</p></li> <li><p>Another alternative is <a href="https://www.vaultproject.io/" rel="noreferrer">Vault</a> by HashiCorp; you can use it as storage for all your credentials.</p></li> <li><p>You can use some custom solution to write and read directly to etcd from Kubernetes apps. Here is a library example: <a href="https://github.com/rusenask/k8s-kv" rel="noreferrer">k8s-kv</a>.</p></li> </ol> <p>But anyway, the best and most proper way to store credentials in Kubernetes is Secrets. It is more secure and easier than almost any other way.</p>
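<p>A minimal sketch of option 2; the images, the password-generating command and the application entrypoint are placeholders for illustration only:</p> <pre><code>spec:
  volumes:
  - name: env-share
    emptyDir: {}
  initContainers:
  - name: provision-db
    image: your-db-provisioner:latest      # placeholder image
    command: ["sh", "-c", "echo \"export DB_PASSWORD=$(generate-password)\" &gt; /env/db_cred.env"]
    volumeMounts: [{name: env-share, mountPath: /env}]
  containers:
  - name: app
    image: your-app:latest                 # placeholder image
    command: ["sh", "-c", ". /env/db_cred.env &amp;&amp; exec /usr/local/bin/start-app"]
    volumeMounts: [{name: env-share, mountPath: /env}]
</code></pre>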
<p>My ultimate goal is to run Kubernetes on a 3-node CoreOS cluster (if anyone has better suggestions, I'm all ears: at this stage, I'm considering CoreOS a complete waste of my time).</p> <p>I've started following the <a href="https://coreos.com/blog/coreos-clustering-with-vagrant.html" rel="nofollow noreferrer">CoreOS guide to run a Vagrant cluster</a> (I even have the <a href="https://amzn.to/2pWIznU" rel="nofollow noreferrer">CoreOs in Action book</a> and that's not much help either) - I have obtained a fresh <code>discovery token</code> and modified the <code>user-data</code> and <code>config.rb</code> files as described there:</p> <pre><code>#cloud-config --- coreos: etcd2: discovery: https://discovery.etcd.io/7b9a3e994a14c2bf530ed88676e3fc97 </code></pre> <p>and:</p> <pre><code>$ cat config.rb # Size of the CoreOS cluster created by Vagrant $num_instances = 3 $update_channel = "stable" </code></pre> <p>the rest is all as in the original <code>coreos-vagrant</code> repository.</p> <p>When first started, it appears that <code>etcd</code> is not started as a service; starting it up with <code>systemctl</code> seems to get it going, but it then does not discover its peers:</p> <pre><code>core@core-01 ~ $ etcdctl member list 8e9e05c52164694d: name=4adff068c464446a8423e9b9f7c28711 peerURLs=http://localhost:2380 clientURLs=http://localhost:2379 isLeader=true </code></pre> <p>the same on all other three Vagrant VMs.</p> <p>It seems to me that either the modified <code>user-data</code> is not picked up, or somehow the discovery token is ignored.</p> <p>I have been googling around, but nothing seems to come up.</p> <p>The main difficulty I'm finding is that virtually all the instructions around CoreOS/etcd point to these YAML files and then state one has to use <code>ct</code> to generate the "real" configuration: but they don't really show how to do this on a running VM, or how to change the running configuration.</p> <p>A couple of questions:</p> <p>1) what would be the "right way" to get the three VMs' <code>etcd</code> to start and finding each other? Reading up <a href="https://coreos.com/etcd/docs/latest/platforms/container-linux-systemd.html" rel="nofollow noreferrer">the guide here</a> is really not that helpful.</p> <p>The three VM are on a 'host-only' network:</p> <pre><code>core@core-01 ~ $ ip address show ... 3: eth1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 08:00:27:76:a6:cf brd ff:ff:ff:ff:ff:ff inet 172.17.8.101/16 brd 172.17.255.255 scope global eth1 </code></pre> <p>(the other two are on <code>102</code> and <code>103</code>)</p> <p>2) is there a "better" (for some definition of "better") way of getting a Kubernetes cluster running on 3 VirtualBox VMs?</p> <p>It seems to me that CoreOS are trying to be too clever for their own good: I've been using various flavors of Linux for the past 10 years+ and getting the etcd cluster to find each other is proving frustratingly difficult.</p> <p>I'm running Ubuntu 17.10 (well, all going well, it'll soon be 18.04 LTS) and Virtualbox 5.2.8.</p> <p>Thanks in advance!</p>
<p>Unfortunately, the documentation you used is currently outdated. Right now <code>ETCD version 3</code> is used as a <code>Kubernetes</code> data storage. It provisions with <code>Ignition</code> (VirtualBox Provider (default)):</p> <blockquote> <p>When using the VirtualBox provider for Vagrant (the default), Ignition is used to provision the machine.</p> </blockquote> <p><strong>1.</strong> Install <a href="https://github.com/coreos/vagrant-ignition" rel="noreferrer">vagrant-ignition</a> plugin (just in case if this plugin isn't automatically installed when using the default Vagrantfile from <code>coreos-vagrant</code> repo):</p> <pre><code>git clone https://github.com/coreos/vagrant-ignition cd vagrant-ignition gem build vagrant-ignition.gemspec vagrant plugin install vagrant-ignition-0.0.3.gem </code></pre> <p><strong>2.</strong> Install <a href="https://github.com/coreos/container-linux-config-transpiler" rel="noreferrer">ct</a>.</p> <p><strong>3.</strong> Clone <a href="https://github.com/coreos/coreos-vagrant" rel="noreferrer">coreos-vagrant</a> repo:</p> <pre><code>git clone https://github.com/coreos/coreos-vagrant cd coreos-vagrant </code></pre> <p><strong>4.</strong> Create <code>config.rb</code> to start three CoreOS VMs:</p> <pre><code>cp config.rb.sample config.rb sed -i 's/$num_instances=1/$num_instances=3/g' config.rb </code></pre> <p><strong>5.</strong> Get etcd discovery token and put it into <code>cl.conf</code>:</p> <pre><code>discovery_token=$(curl -s https://discovery.etcd.io/new\?size\=3) sed -i "s|https://discovery.etcd.io/&lt;token&gt;|$discovery_token|g" cl.conf </code></pre> <p><strong>6.</strong> Use <a href="https://github.com/coreos/container-linux-config-transpiler" rel="noreferrer">config transpiler</a> to write the Ignition config to <code>config.ign</code>:</p> <pre><code>ct --platform=vagrant-virtualbox &lt; cl.conf &gt; config.ign </code></pre> <p><strong>7.</strong> Create etcd cluster:</p> <pre><code>vagrant up </code></pre> <p><code>ETCD</code> cluster is ready:</p> <pre><code>core@core-01 ~ $ etcdctl member list 3655a3141d6f953b: name=core-01 peerURLs=http://172.17.8.101:2380 clientURLs=http://172.17.8.101:2379 isLeader=false 951a7a7a97c94116: name=core-02 peerURLs=http://172.17.8.102:2380 clientURLs=http://172.17.8.102:2379 isLeader=true fd056871037fdb55: name=core-03 peerURLs=http://172.17.8.103:2380 clientURLs=http://172.17.8.103:2379 isLeader=false </code></pre>
<p>As a k8s cluster administrator, I want to specify on which nodes (using labels) pods will be scheduled, but without modifying any PodSpec section.</p> <p>So, nodeSelector, affinity and taints can't be options. Is there any other solution ?</p> <p>PS: the reason I can't modify the PodSpec is that deployed applications are available as Helm charts and I don't have hand on those files. Moreover, if I change the PodSpec, it will be lost on next release upgrade.</p>
<p>You can use the <a href="https://kubernetes.io/docs/admin/admission-controllers/#podnodeselector" rel="nofollow noreferrer">PodNodeSelector</a> admission controller for this:</p> <blockquote> <p>This admission controller has the following behavior:</p> <ol> <li>If the Namespace has an annotation with a key <code>scheduler.kubernetes.io/nodeSelector</code>, use its value as the node selector.</li> <li>If the namespace lacks such an annotation, use the <code>clusterDefaultNodeSelector</code> defined in the PodNodeSelector plugin configuration file as the node selector.</li> <li>Evaluate the pod’s node selector against the namespace node selector for conflicts. Conflicts result in rejection.</li> <li>Evaluate the pod’s node selector against the namespace-specific whitelist defined the plugin configuration file. Conflicts result in rejection.</li> </ol> </blockquote> <p>First of all you will need to <a href="https://kubernetes.io/docs/admin/admission-controllers/#how-do-i-turn-on-an-admission-controller" rel="nofollow noreferrer">enable this admission controller</a>. The way to enable it depends on your environment, but it's done via the parameter <code>kube-apiserver --enable-admission-plugins=PodNodeSelector</code>.</p> <p>Then create a namespace and annotate it with whatever node label you want all Pods in that namespace to have:</p> <pre><code>kubectl create ns node-selector-test kubectl annotate ns node-selector-test \ scheduler.alpha.kubernetes.io/node-selector=mynodelabel=mynodelabelvalue </code></pre> <p>To test it you could do something like this:</p> <pre><code>kubectl run busybox \ -n node-selector-test -it --restart=Never --attach=false --image=busybox kubectl get pod busybox -n node-selector-test -o yaml </code></pre> <p>It should output something like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: busybox .... spec: ... nodeSelector: mynodelabel: mynodelabelvalue </code></pre> <p>Now, unless that label exists on some nodes, this Pod will never be scheduled, so put this label on a node to see it scheduled:</p> <pre><code>kubectl label node myfavoritenode mynodelabel=mynodelabelvalue </code></pre>
<p>The Kubernetes <code>client-go</code> package includes a <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="noreferrer">nice example</a> of creating a single deployment using the <code>client-go</code> api.</p> <p>I want to create and destroy many kubernetes resources without waiting for each http request to complete.</p> <p>Is it possible to use the <code>client-go</code> api asynchronously?</p> <p>Are methods like the ones below safe for concurrent calls from multiple goroutines? </p> <pre><code>resultPod, err := clientset.CoreV1().Pods("default").Create(desiredPod) </code></pre>
<p>The k8s client uses <code>http.Client</code> internally which is safe to call concurrently. But it is probably wise to limit the number of concurrent API calls to a reasonable upper limit (I'd start with 4; anything above that is probably not going to improve performance much).</p>
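<p>As an illustration (a sketch, not a complete program), bounding the number of in-flight calls with a buffered channel used as a semaphore could look roughly like this; <code>desiredPods</code>, the <code>sync</code>/<code>log</code> imports and the <code>v1</code> pod type are assumed to be set up as in the linked example:</p> <pre><code>sem := make(chan struct{}, 4) // allow at most 4 concurrent API calls
var wg sync.WaitGroup
for _, p := range desiredPods {
    wg.Add(1)
    go func(pod *v1.Pod) {
        defer wg.Done()
        sem &lt;- struct{}{}        // acquire a slot
        defer func() { &lt;-sem }() // release it when done
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            log.Printf("creating pod %s: %v", pod.Name, err)
        }
    }(p)
}
wg.Wait() // all create calls have returned
</code></pre>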
<p>jwilder/nginx-proxy has 1.3K STARS and 10M+ PULLS on Docker Hub. And Watch 262, Star 7701, Fork 1546 on GitHub. <a href="https://github.com/jwilder/nginx-proxy" rel="nofollow noreferrer">https://github.com/jwilder/nginx-proxy</a></p> <p>kubernetes/ingress-nginx has 13 stars on kubeapps.com (one of the most starred charts) and Watch 137, Star 1596, Fork 918 on GitHub. <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> <ol> <li>What's the difference between the two?</li> <li>When would you use one over the other?</li> </ol>
<p>These are two different applications, but both are based on Nginx and serve a similar function.</p> <ol> <li><p>Nginx-proxy by jwilder is a proxy server for Docker containers which includes <code>docker-gen</code> to generate a configuration for Nginx automatically. You can use it for SSL termination, load balancing, etc. But it will be hard to manage nginx-proxy in Kubernetes.</p></li> <li><p>Ingress-nginx by Kubernetes is an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="noreferrer">Ingress Controller</a> which provides <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="noreferrer">Ingress</a> functionality for your Kubernetes cluster. It can also do SSL termination and some other things, but it was created specifically for use in Kubernetes and its abstractions. That means you can create an Ingress object which uses <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Services</a> as backends, use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">selectors</a>, etc.</p></li> </ol> <p>So, if you are using Kubernetes, Ingress-nginx is the best choice. If you are using just Docker containers without an orchestrator, use Nginx-proxy.</p>
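<p>For illustration, the kind of Ingress object meant in point 2 looks roughly like this; the host and service names are placeholders:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com            # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service    # an existing Service used as the backend
          servicePort: 80
</code></pre>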
<p>I am struggling to understand the concept of the Capacity parameter on the PersistentVolume, and similarly the storage Request on the PersistentVolumeClaim when dealing with ReadOnlyMany storage. If the storage is mounted in read only- what exactly is the Capacity/Request in relation to? </p> <p>i.e. </p> <pre><code>spec: storageClassName: manual capacity: storage: 50Gi accessModes: - ReadOnlyMany hostPath: path: "/foo/bar" spec: storageClassName: manual accessModes: - ReadOnlyMany resources: requests: storage: 1Mi </code></pre>
<p><strong>PersistentVolume</strong>(PV) is an object usually created by an administrator or auto-provisioned by a cloud provider. </p> <p>You can imagine it as flash drives laying on a table and available for claiming. </p> <p><strong>PersistentVolumeClaim</strong>(PVC) is a minimum specification that PV should comply with. </p> <p>Continuing the flash drive analogy, we can imagine that we need a flash drive that is: </p> <ol> <li>red, (color=red) (strict clause) </li> <li>Produced in the USA (manufactured=usa) (strict clause) </li> <li>Has a minimum capacity of 2Gb (capacity>=2gb) (I put “more or equal” here because that is how kubernetes selects PV by capacity) (not so strict) </li> </ol> <p>(Workaround: You can put capacity value to label and use a selector to filter only the size you want, not less, not more. We skip it for now) </p> <p>And we can find on the table two flash drives that are: </p> <ol> <li>red, (exact match) (don’t care about other colors) </li> <li>Manufactured in the USA (exact match) (don’t care about those produced in China or Vietnam) </li> <li>Have a capacity of 16 Gb and 4Gb </li> </ol> <p>As soon as we need only 2Gb, we take the flash-drive with 4Gb capacity, which is closer to our needs. </p> <p>After you choose the correct “flash drive” (bound PV to your container), you can use its full capacity (4Gb available) even if your claim was smaller (2Gb would be enough). </p> <p>But when you take the flash drive from the table, nobody else can claim it (in case you took the last one). In kubernetes, PV and PVC bind as 1-to-1. So, if you have one PV of 50Gb and two claims of 5GB, only one claim will be satisfied. </p> <p>You can imagine that <strong>ReadOnlyMany</strong> and <strong>ReadWriteOnce</strong> mode in PV as a pile of CD or DVD disks with the same data. You can write something on it and then anybody can read it many times. <strong>ReadWriteOnce</strong> PV can be bound to only one container with write access, but still can be bound to many containers with read only access (take one disk from the pile, there are more of them). </p> <p>There is not much sense to search information by size in our case, so in PVC you can use any number that is smaller than expected from PV, and use selector, access mode, and storage class to find exactly what you need. 
</p> <p>For example, you can create PV that has NFS share under the hood: </p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-with-data labels: version: stable capacity: 50Gi &lt;-(workaround I’ve mentioned before) spec: capacity: storage: 50Gi accessModes: - ReadWriteOnce - ReadOnlyMany &lt;- (yes, you can have several modes in PV specification) persistentVolumeReclaimPolicy: Retain nfs: path: /opt/export server: 10.10.0.10 </code></pre> <p>The PVC would be something like this: </p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 25Gi selector: matchLabels: version: "stable" matchExpressions: - key: capacity operator: In values: [25Gi, 50Gi] </code></pre> <p>More (boring) information can be found in the documentation: </p> <p><a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">Volumes</a><br> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">Persistent Volumes</a><br> <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">Storage Classes</a><br> <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noreferrer">Dynamic Volume Provisioning</a><br> <a href="https://www.packtpub.com/virtualization-and-cloud/mastering-kubernetes" rel="noreferrer">Gigi Sayfan, Mastering Kubernetes</a> </p>
<p>I am trying to deploy a <code>jupyter notebook</code> from Kubernetes; however, when I start <code>jupyter</code> and it prints a local host link, I am unable to open it on my computer because it's a "local host." Hence, it needs to be opened within the container. </p> <p>However, I was unable to find any type of GUI desktop for kubernetes and I'm unsure how to open a browser to fire up the link. I saw some things about minikube. Is there a way to do this without using minikube?</p> <p>The reason I am trying to install without minikube is because minikube requires hyper V and I have Windows 10 Home which is not compatible with hyper V. </p>
<p>The most common way to access an application in a pod is to use Service. </p> <p>After creation, a Service object is assigned with a unique IP address (ClusterIP) which remains the same during the whole lifespan of the Service object. Pods can use this ClusterIP and port to access a subset of pods with labels matched to Service selector. When several pods are matched, Service chooses one of them as a destination by round-robin principle. </p> <p>For example: </p> <p>You can create a Service for your 2 nginx replicas with kubectl expose: </p> <pre><code>$ kubectl expose deployment/my-nginx service "my-nginx" exposed </code></pre> <p>This is equivalent to <code>kubectl create -f nginx-svc.yaml</code></p> <p>with nginx-svc.yaml content as: </p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-nginx labels: run: my-nginx spec: ports: - port: 80 protocol: TCP selector: run: my-nginx </code></pre> <p>How to check your Service: </p> <pre><code>$ kubectl get svc my-nginx NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx 10.0.162.149 &lt;none&gt; 80/TCP 21s </code></pre> <p>In some parts of your applications, you may want to expose Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. <br> NodePort mode reserves one port on all cluster nodes and forwards traffic coming to this port to the pod which is matched to the selector. <br> In LoadBalancer mode, Service creates cloud load balancer and forwards traffic from the load balancer to the pod which is matched to the selector. <br> You can read more about it in the document <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">Connecting Applications with Services</a> </p> <p>To avoid creating all these objects manually, you can use helm to generate and run objects based on a template for a particular application. Here is helm repository for jupiter notebook: </p> <p><a href="https://github.com/UNINETT/helm-charts" rel="nofollow noreferrer">https://github.com/UNINETT/helm-charts</a> </p> <p>Kubernetes has WebUI called Dashboard. It doesn´t deploy by default, but it´s easy to deploy when you need it. </p> <p>To deploy Dashboard, execute the following command: </p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>To access Dashboard from your local workstation, you must create a secure channel to your Kubernetes cluster. Run the following command: </p> <pre><code>$ kubectl proxy </code></pre> <p>Now access Dashboard at: </p> <p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a>. </p> <p>To find out how to create sample user and log in, follow Creating <a href="https://github.com/kubernetes/dashboard/wiki/Creating-sample-user" rel="nofollow noreferrer">sample user guide</a>. </p> <p>Since version 1.7 Dashboard by default has a minimal set of privileges and can only be accessed over HTTPS. <a href="https://github.com/kubernetes/dashboard/wiki/Access-control" rel="nofollow noreferrer">Access Control guide</a> can help you to extend user permissions. 
</p> <p>In case you are limited with OS version, you can use free <a href="https://rominirani.com/tutorial-getting-started-with-kubernetes-on-your-windows-laptop-with-minikube-3269b54a226" rel="nofollow noreferrer">Oracle VirtualBox for minikube</a>.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes docs</a>:</p> <blockquote> <p>For Kubernetes-native applications, Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.</p> </blockquote> <p>What is the exact difference between Kubernetes-native and non-native applications?</p>
<p>I found the same section and interpret it as follows:</p> <ul> <li>Native apps are packaged up and run inside k8s as some 'kind'. All dockerized apps should be in that category.</li> <li>Non-native apps are connected to the k8s cluster infrastructure but not deployed within it. A legacy app, an Oracle cluster or your backup robot may fall into this category.</li> </ul>
<p>Kafka noob here! I have two questions:</p> <p>1) Consider few Kafka consumers running on different Kubernetes pods as part of the same Kafka consumer group. The topics being consumed by these pods are compacted topics. Now, let’s say one of the pods goes down and comes up in a while. Now my question is, will the consumer in question, receive all messages from that compacted Kafka topic ? Or, will it receive only the topics which arrived after it had come back from failure?</p> <p>2) I understand that Kafka consumers receive topics from partitions starting from a “committed offset” . How will it work in case of “compacted topics”, as Kafka will send events with the latest offset only</p>
<p>Consumers work the same way for compacted topics as for non-compacted ones. During compaction, if there are several offsets with the same key, only the latest record for that key stays, but the compacted offsets are never reassigned; they effectively point to the latest record for that key. E.g. if offsets 10, 11 and 12 have the same key, then only the record at offset 12 would be retained after compaction, and any consumer fetching offset 10, 11 or 12 would fetch the same result, i.e. the key-value stored at offset 12 (as the key-values for 10 and 11 are deleted).</p> <p>Coming to your questions:</p> <p>1. The consumer can fetch from the desired offset; the only thing is that if some offsets have been compacted, you will get the latest value for those compacted offsets.</p> <p>2. As explained, consumers will continue to fetch from the last committed offset, and if the offsets to be fetched have been compacted you might get duplicate messages.</p> <p>The compaction logic is described in detail at the Kafka link below: <a href="https://kafka.apache.org/documentation.html#design_compactionbasics" rel="nofollow noreferrer">https://kafka.apache.org/documentation.html#design_compactionbasics</a></p>
<p>I have a file for a Job resource, which looks something like below.I need to run multiple instances with this definition with separate arguments for each.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: generateName: abc- spec: template: spec: containers: - name: abc image: index.docker.io/some/image:latest imagePullPolicy: Always imagePullSecrets: - name: some_secret restartPolicy: Never backoffLimit: 4 </code></pre> <p>I can successfully run this job resource with <code>kubectl create -f my-job.yml</code> But, I'm not sure how I pass my arguments corresponding to <strong>command:['arg1','arg2']</strong></p> <p>I think updating the file with my dynamic args for each request is just messy.</p> <p>I tried <code>kubectl patch -f my-job.yml --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/", "value": {"command": ["arg1","arg2"] } }]'</code>, which works well for a Deployment kind but for Job it doesn't work</p> <p>I tried <code>sudo kubectl run explicitly-provide-name-which-i-dont-want-to --image=index.docker.io/some/image:latest --restart=Never -- arg1 arg2</code>, but for this I won't be able to pass the <strong>imagePullSecrets</strong>.</p>
<p>Kind of a generic answer here, just trying to guide you. In general, what you are expressing is the need to 'parameterize' your Kubernetes deployment descriptors. There are different ways: some are simple, some are a bit hacky, and finally there is <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">github.com/kubernetes/helm</a>.</p> <p>Personally, I would strongly suggest installing Helm on your cluster and then 'migrating' your Job, or any vanilla Kubernetes deployment descriptor, into a Helm chart. This will eventually give you the 'parameterization' power that you need to spin up jobs in different ways and with different configs.</p> <p>But if this sounds like too much for you, I can recommend something that I was doing before I discovered Helm. Using things like 'bash' / 'envsubst', I was manually templating parts of the yaml file with placeholders (e.g. env variables), and then feeding the yaml to tools like 'envsubst', which replaced the placeholders with values from the environment. Ugly? Yes. Maintainable? Maybe for a couple of simple examples. An example of envsubst is <a href="https://unix.stackexchange.com/questions/294378/replacing-only-specific-variables-with-envsubst">here</a>, and a minimal sketch of the flow is shown at the end of this answer.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: spec: template: spec: containers: - name: abc image: index.docker.io/some/image:latest imagePullPolicy: Always imagePullSecrets: - name: $SOME_ENV_VALUE restartPolicy: Never backoffLimit: 4 </code></pre> <p>Hope that helps... but seriously, if you have time, consider checking out Helm.</p>
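<p>For completeness, the envsubst flow mentioned above would be driven roughly like this, reusing the file name from the question and the placeholder variable from the YAML:</p> <pre><code>export SOME_ENV_VALUE=my-pull-secret     # the value to substitute
envsubst &lt; my-job.yml | kubectl create -f -
</code></pre>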
<p>I had sticky session working in my dev environment with minibike with following configurations:</p> <p>Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gl-ingress annotations: nginx.ingress.kubernetes.io/affinity: cookie kubernetes.io/ingress.class: "gce" kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip" spec: backend: serviceName: gl-ui-service servicePort: 80 rules: - http: paths: - path: /api/* backend: serviceName: gl-api-service servicePort: 8080 </code></pre> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: gl-api-service labels: app: gl-api annotations: ingress.kubernetes.io/affinity: 'cookie' spec: type: NodePort ports: - port: 8080 protocol: TCP selector: app: gl-api </code></pre> <p>Now that I have deployed my project to GKE sticky session no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Anyone have any luck wiring this up? Any help would be appreciated. I wanting to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. Its an API Gateway (built with Zuul).</p>
<p>Session affinity is not available yet in the GCE/GKE Ingress controller.</p> <p>In the meantime and as workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.</p> <ol> <li>Use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">NodePort</a> for the Kubernetes Service. Set the value of the port in <code>spec.ports[*].nodePort</code>, otherwise a random one will be assigned</li> <li><a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport" rel="nofollow noreferrer">Disable kube-proxy SNAT load balancing</a></li> <li>Create a Load Balancer from the GCE API, with <a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service#generated_cookie_affinity" rel="nofollow noreferrer">cookie session affinity</a> enabled. As backend use the port from 1.</li> </ol>
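<p>For step 1, the Service from the question would change roughly like this; the <code>nodePort</code> value is just an example from the default 30000-32767 range:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    nodePort: 30080    # fixed value so the GCE backend service can reference it
  selector:
    app: gl-api
</code></pre>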
<p>I am working on a project using Django on Google Kubernetes Engine and have a few questions on the best architecture format might be. Right now I have several containers in one pod because they are all dependent on each other and communicate frequently. Here is my setup:</p> <p>Pod: webapp</p> <p>Containers:</p> <ul> <li>NGINX (serves static content, sends traffic to django website)</li> <li>website (Django, serves http responses)</li> <li>DB Proxy (MySQL server for website and websocket server)</li> <li>websocket (Django worker to manage websockets)</li> <li>redis (asynchronous functionality)</li> </ul> <p>My plan is to create a deployment that runs several replicas and scales, but is it better to break this pod into multiple pods or a leave it as a single pod?</p> <p>If I were to break it down, what might that architecture look like after adding in the necessary services? I'm new if this doesn't show it lol</p>
<p>It would be better for you to break these containers into multiple pods in order to scale without wasting resources. If you were to scale to for example two instances, a second instance of each container would be started. Your containers will most likely not have identical load and therefore it makes sense to only scale on the containers that are under load, hence it makes sense breaking your 1 pod into several pods. It also makes replacing individual components easier.</p> <p>The following is an example of several pods, each running a single container. The example shows how you can scale the number of pods based on the component that is under load.</p> <p><a href="https://i.stack.imgur.com/MZyhA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MZyhA.png" alt="Kubernetes example"></a></p>
<p>I'm trying to use rewrite target from kubernetes with <code>gcloud</code> but it doesn't seem to be respected. My code is the following. Maybe there's something I'm not seeing:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: demo-ingress annotations: ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: projectip spec: rules: - host: mycustomdomain.com http: paths: - path: /* backend: serviceName: frontend servicePort: 80 - path: /api/* backend: serviceName: backend servicePort: 80 </code></pre> <p>When I do <code>curl mycustomdomain.com/api/something</code> my backend is always receiving <code>backend/api/something</code> instead of <code>backend/something</code>. I'm really out of ideas and I could use some help. </p>
<p>The term <code>rewrite-target</code> is misleading: it is about the path of the <strong>incoming</strong> request.</p> <p>This might work: use one ingress with <code>rewrite-target: /</code> and <code>path: /</code> for the frontend only, and one ingress with <code>rewrite-target: /api</code> and <code>path: /</code> for the backend only.</p>
<p>I have a GKE k8s cluster and wanted to reboot one of the nodes (a VM reboot, not just the kubelet).</p> <p>I was looking for the correct way to do this (if there is one) rather than just resetting the VM directly, but I couldn't find anything on the web.</p> <p>So, my plan is to use these steps:</p> <ol> <li>drain the node</li> <li>reboot</li> </ol> <p>Is there a correct (other) way?</p>
<p>No, that is the correct way -- and you don't have to <code>drain</code> the Node first unless there is some extenuating circumstance. One of the major features of kubernetes is that it will route around the "damage" of having a Node disappear suddenly.</p> <p>You <em>could</em> <code>cordon</code> the Node, if you wish to prevent <em>future</em> Pods from being scheduled on the soon-to-be-rebooted Node, but that's merely a time-saver, and shouldn't affect the reboot process.</p> <p>Just be sure to verify the "schedulable" status of the Node after the reboot if you do use <code>cordon</code> or <code>drain</code>; I can't this very second recall whether they automatically re-register in a schedulable state. </p>
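<p>For reference, the commands involved are along these lines (the node name is a placeholder):</p> <pre><code>kubectl cordon my-node        # optional: stop new Pods from being scheduled here
kubectl drain my-node --ignore-daemonsets   # optional: evict the running Pods first
# ... reboot the VM ...
kubectl uncordon my-node      # mark the node schedulable again afterwards
</code></pre>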
<p>Is it possible to add existing instances in GCP to a Kubernetes cluster?</p> <p>The inputs I can see while creating the cluster in graphical mode are:</p> <p>Cluster Name, Location, Zone, Cluster Version, Machine type, Size.</p> <p>In command mode: </p> <pre><code>gcloud container clusters create cluster-name --num-nodes=4
</code></pre> <p>I have 10 running instances.</p> <p>I need to create the Kubernetes cluster from the already existing running instances.</p>
<p>On your instance run the following:</p> <pre><code>kubelet --api_servers=http://&lt;API_SERVER_IP&gt;:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://&lt;API_SERVER_IP&gt;:8080 --v=2
</code></pre> <p>This will connect your slave node to your existing cluster. It's actually surprisingly simple.</p>
<p>We have a requirement to connect from a pod in GKE to a service running on a VM on its internal IP address. </p> <p><a href="https://i.stack.imgur.com/nPr6k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nPr6k.png" alt="enter image description here"></a></p> <p>The K8s cluster and the VM are on different networks, so we set up VPC peering between them: </p> <p><a href="https://i.stack.imgur.com/Ncviz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ncviz.png" alt="enter image description here"></a></p> <p>To point to an external IP, we applied a service without a selector, as discussed here: </p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors</a></p> <p>The pod should connect to the internal IP of the VM through this service. The service and endpoint description is: </p> <p>kubectl describe svc vm-proxy</p> <pre><code>Name:              vm-proxy
Namespace:         test-environment
Labels:            &lt;none&gt;
Annotations:       &lt;none&gt;
Selector:          &lt;none&gt;
Type:              ClusterIP
IP:                10.59.251.146
Port:              &lt;unset&gt;  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.164.0.10:8080
Session Affinity:  None
Events:            &lt;none&gt;
</code></pre> <p>The Endpoint is the internal IP of the VM, and the Service IP is allocated by K8s. </p> <p>The pod simply sets up an HTTP connection to the IP of the Service, but the connection is refused (connection timeout, eventually). </p> <p>The use case is pretty straightforward and documented in the k8s documentation, which gives the example of connecting to a DB running on a VM. However it doesn't work in our case, and we are not sure if our setup is wrong or this is simply not possible using an internal IP of a VM. </p>
<p>I reproduced your issue and it worked fine for me. This is what I did:</p> <ol> <li>Create 2 networks (one of them (demo) on 172.16.0.0/16, the other one is my default network, set on 10.132.0.0/20).</li> <li>Set up VPC peering.</li> <li>Created a VM in demo network. It got assigned 172.16.0.2</li> <li>Created the service as you described (with the endpoint pointing to 172.16.0.2).</li> <li>curl from the pod to the service IP and got 200!</li> </ol> <p>If the steps are right, but your configuration is not working, I'd like to know your network IP ranges. Both of them.</p>
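<p>For comparison, a minimal version of step 4 (a selector-less Service plus a manually created Endpoints object, using the VM IP from my test) looks like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: vm-proxy
spec:
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-proxy      # must match the Service name
subsets:
- addresses:
  - ip: 172.16.0.2    # internal IP of the VM in the peered network
  ports:
  - port: 8080
</code></pre>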
<p>I'm trying to figure out how to get this setup to work:</p> <ul> <li>I am using Kube 1.7 (no RBAC) spun up from kops in AWS</li> <li>I have a single nginx ingress controller for my entire cluster that is using a <code>LoadBalancer</code> service in the <code>kube-system</code> namespace, installed via Helm</li> <li>I have <code>cert-manager</code> set up in <code>kube-system</code>, installed via Helm and using <code>ClusterIssuers</code></li> <li>I have <code>external-dns</code> set up in <code>kube-system</code>, installed via Helm</li> <li>I have multiple applications, one per namespace, with associated <code>Ingress</code> objects in each namespace.</li> <li>I am annotating the Ingresses with the appropriate annotations for both <code>cert-manager</code> (<code>certmanager.k8s.io/cluster-issuer: letsencrypt-prod</code>) and <code>external-dns</code> (<code>dns.alpha.kubernetes.io/external: app.contoso.com</code>)</li> </ul> <p>In this scenario, <code>cert-manager</code> is reacting appropriately to the <code>Ingress</code> object (modifying it to complete the ACME challenge), but <code>external-dns</code> is not doing anything (the logs say all hostnames are up to date). If I manually add a Route53 record for the ELB associated with the LB service, everything works as expected. Inspecting the Ingress object, I see that the status block looks like so:</p> <pre><code>status:
  loadBalancer:
    ingress:
    - {}
</code></pre> <p>which I suppose is why <code>external-dns</code> isn't reacting? How do I get this to work per the documentation?</p> <p>More troubleshooting information (pod definitions, ingress definitions, controller logs, etc.) can be found here: <a href="https://gist.github.com/DWSR/f6d596850346223393bec23b289c9731" rel="nofollow noreferrer">https://gist.github.com/DWSR/f6d596850346223393bec23b289c9731</a></p>
<p>I solved this myself. The nginx ingress controller has a <code>--publish-service</code> command line argument which will cause it to update the status fields on the ingress objects which, in turn, will cause <code>external-dns</code> to create the appropriate DNS records. When installing via Helm, simply set <code>.Values.controller.publishService.enabled</code> to <code>true</code> and this will take effect.</p> <p>Sources: </p> <ul> <li><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md</a></li> <li><a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration</a></li> </ul>
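<p>With the stable/nginx-ingress chart that looks something like the following (release and namespace names are whatever you used originally):</p> <pre><code>helm upgrade --install nginx-ingress stable/nginx-ingress \
  --namespace kube-system \
  --set controller.publishService.enabled=true
</code></pre>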
<p>I can run this command to create a docker registry secret for a kubernetes cluster:</p> <pre><code>kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  [email protected] \
  --namespace mynamespace
</code></pre> <p>I would like to create the same secret from a YAML file. Does anyone know how this can be set in a YAML file?</p> <p>I need this as a YAML file so that it can be used as a Helm template, which allows for a Helm install command such as this (simplified) one:</p> <pre><code>helm install ... --set docker.user=peter,docker.pw=foobar,docker.email=...
</code></pre>
<p>You can write that yaml by yourself, but it will be faster to create it in 2 steps using <code>kubectl</code>:</p> <ol> <li>Generate a 'yaml' file. You can use the same command but in dry-run mode and output mode <code>yaml</code>.</li> </ol> <p>Here is an example of a command that will save a secret into a 'docker-secret.yaml' file for <code>kubectl</code> version &lt; 1.18 (check the version by <code>kubectl version --short|grep Client</code>):</p> <pre><code>kubectl create secret docker-registry --dry-run=true $secret_name \ --docker-server=&lt;DOCKER_REGISTRY_SERVER&gt; \ --docker-username=&lt;DOCKER_USER&gt; \ --docker-password=&lt;DOCKER_PASSWORD&gt; \ --docker-email=&lt;DOCKER_EMAIL&gt; -o yaml &gt; docker-secret.yaml </code></pre> <p>For <code>kubectl</code> version &gt;= 1.18:</p> <pre><code>kubectl create secret docker-registry --dry-run=client $secret_name \ --docker-server=&lt;DOCKER_REGISTRY_SERVER&gt; \ --docker-username=&lt;DOCKER_USER&gt; \ --docker-password=&lt;DOCKER_PASSWORD&gt; \ --docker-email=&lt;DOCKER_EMAIL&gt; -o yaml &gt; docker-secret.yaml </code></pre> <ol start="2"> <li><p>You can apply the file like any other Kubernetes 'yaml':</p> <p><code>kubectl apply -f docker-secret.yaml</code></p> </li> </ol> <p><strong>UPD</strong>, as a question has been updated.</p> <p>If you are using Helm, here is an official <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#creating-image-pull-secrets" rel="noreferrer">documentation</a> about how to create an <code>ImagePullSecret</code>.</p> <p>From a doc:</p> <ol> <li>First, assume that the credentials are defined in the <code>values.yaml</code> file like so:</li> </ol> <pre><code>imageCredentials: registry: quay.io username: someone password: sillyness </code></pre> <ol start="2"> <li>We then define our helper template as follows:</li> </ol> <pre><code>{{- define &quot;imagePullSecret&quot; }} {{- printf &quot;{\&quot;auths\&quot;: {\&quot;%s\&quot;: {\&quot;auth\&quot;: \&quot;%s\&quot;}}}&quot; .Values.imageCredentials.registry (printf &quot;%s:%s&quot; .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }} {{- end }} </code></pre> <ol start="3"> <li>Finally, we use the helper template in a larger template to create the <code>Secret</code> manifest:</li> </ol> <pre><code>apiVersion: v1 kind: Secret metadata: name: myregistrykey type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: {{ template &quot;imagePullSecret&quot; . }} </code></pre>
<p>I have an Autobahn Twisted websocket server running in Python which works correctly in a dev VM, but I have been unable to get it working when the server is running in OpenShift.</p> <p>Here is the shortened code which works for me in a VM.</p> <pre><code>from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
from autobahn.twisted.resource import WebSocketResource

class MyServerProtocol(WebSocketServerProtocol):

    def onConnect(self, request):
        stuff...

    def onOpen(self):
        stuff...

    def onMessage(self,payload):
        stuff...

factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)
root = File(".")
root.putChild(b"ws", resource)
site = Site(root)
reactor.listenTCP(8080, site)
reactor.run()
</code></pre> <p>The connection part of the client is as follows:</p> <pre><code>var wsuri;
var hostname = window.document.location.hostname;
wsuri = "ws://" + hostname + ":8080/ws";

if ("WebSocket" in window) {
   sock = new WebSocket(wsuri);
} else if ("MozWebSocket" in window) {
   sock = new MozWebSocket(wsuri);
} else {
   log("Browser does not support WebSocket!");
   window.location = "http://autobahn.ws/unsupportedbrowser";
}
</code></pre> <p>The OpenShift configuration is as follows: 1 pod running with app.py listening on port 8080, TLS not enabled, and a non-TLS route 8080 &gt; 8080.</p> <p>Firefox gives the following message in the console:</p> <pre><code>Firefox can’t establish a connection to the server at ws://openshiftprovidedurl.net:8080/ws.
</code></pre> <p>When I use wscat to connect to the websocket:</p> <pre><code>wscat -c ws://openshiftprovidedurl.net/ws
</code></pre> <p>I get the following error:</p> <pre><code>error: Error: unexpected server response (400)
</code></pre> <p>and the application log in OpenShift shows the following:</p> <pre><code>2018-04-03 01:14:24+0000 [-] failing WebSocket opening handshake ('missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)')
2018-04-03 01:14:24+0000 [-] dropping connection to peer tcp4:173.21.2.1:38940 with abort=False: missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)
2018-04-03 01:14:24+0000 [-] WebSocket connection closed: connection was closed uncleanly (missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False))
</code></pre> <p>Any assistance would be appreciated!</p>
<p>Graham Dumpleton hit the nail on the head, I modified the code from </p> <pre><code>factory = WebSocketServerFactory(u"ws://0.0.0.0:8080") </code></pre> <p>to </p> <pre><code>factory = WebSocketServerFactory(u"ws://0.0.0.0:8080", externalPort=80) </code></pre> <p>and it corrected the issue. I had to modify my index to point to the correct websocket but I am now able to connect.</p> <p>Thanks!</p>
<p>I have a Kubernetes 1.10.0, Docker 17.03.2-ce, and Jenkins 2.107.1 running on an Ubuntu 17.04 VM with Kubernetes Plugin 1.5 installed in Jenkins. I have 4 other Ubuntu VM(s) successfully set up as nodes in the cluster, including the untainted master. I can deploy nginx-based services directly and have unfettered access to the dashboard. So, Kubernetes itself seems happy enough.</p> <p>Before you mention it, let me say that we don't have short term plans to run Jenkins master inside Kubernetes itself. So, I'd prefer to get this strategy working.</p> <p>The plugin config for a Kubernetes Cloud is thus:</p> <p>"Name": kubernetes</p> <p>"Kubernetes URL": <a href="https://172.20.43.30:6443" rel="noreferrer">https://172.20.43.30:6443</a></p> <p>from</p> <pre><code># kubectl describe pods/kube-apiserver-jenkins-kube-master --namespace=kube-system | grep Liveness Liveness: http-get https://172.20.43.30:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 </code></pre> <p>after accepting the insecure cert, a browser to <a href="https://172.20.43.30:6443/" rel="noreferrer">https://172.20.43.30:6443/</a> will show</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 } </code></pre> <p>"Kubernetes server certificate key" obtained from</p> <pre><code># kubectl get pods/kube-apiserver-jenkins-kube-master -o yaml --namespace=kube-system | grep tls - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt # cat /etc/kubernetes/pki/apiserver.crt -----BEGIN CERTIFICATE----- MIIDZ****** ******************* ****PP5wigl -----END CERTIFICATE----- </code></pre> <p>"Kubernetes Namespace": jenkins-slaves</p> <p>the jenkins-slaves namespace setup like this ...</p> <p>create jenkins-namespace.yaml and add this:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: jenkins-slaves labels: name: jenkins-slaves spec: finalizers: - kubernetes </code></pre> <p>then</p> <pre><code># kubectl create -f jenkins-namespace.yaml namespace "jenkins-slaves" created # kubectl -n jenkins-slaves create sa jenkins serviceaccount "jenkins" created # kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods role.rbac.authorization.k8s.io "jenkins" created # kubectl create rolebinding jenkins --role=jenkins --serviceaccount=jenkins-slaves:jenkins rolebinding.rbac.authorization.k8s.io "jenkins" created # kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=jenkins-slaves:jenkins clusterrolebinding.rbac.authorization.k8s.io "jenkins" created </code></pre> <p>added a Jenkins credential of "secret text" using the token spit out from</p> <pre><code># kubectl get -n jenkins-slaves sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n jenkins-slaves get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d - </code></pre> <p>a "Test Connection" shows "Connection test successful"</p> <p>It should be noted that that same token can be used to login to the Kubernetes dashboard with full access rights.</p> <p>"Jenkins URL": <a href="http://172.20.43.30:8080" rel="noreferrer">http://172.20.43.30:8080</a></p> <p>"Kubernetes Pod Template:Name": jnlp slave</p> <p>"Kubernetes Pod Template:Namespace": jenkins-slaves</p> <p>"Kubernetes Pod Template:Labels": jenkins-slaves</p> <p>"Kubernetes 
Pod Template:Usage": Only build jobs with label expressions matching this node</p> <p>"Kubernetes Pod Template:Container Template:Name": jnlp-slave</p> <p>"Kubernetes Pod Template:Container Template:Docker image": jenkins/jnlp-slave</p> <p>"Kubernetes Pod Template:Container Template:Working directory": ./.jenkins-agent</p> <p>At this point, if I create a job and "Restrict where this project can be run" to a "Label Expression" of "jenkins-slaves", I get:</p> <pre><code>Label jenkins-slaves is serviced by no nodes and 1 cloud. Permissions or other restrictions provided by plugins may prevent this job from running on those nodes. </code></pre> <p>If I try to build the job, it will sit in the build queue and the "Build Executor Status" will periodically say "jnlp-slave-##### (offline) (suspended)" and then disappear a couple seconds later.</p> <p>The system log says:</p> <pre><code>Apr 03, 2018 12:16:21 PM SEVERE org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher logLastLines Error in provisioning; agent=KubernetesSlave name: jnlp-slave-t8004, template=PodTemplate{inheritFrom='', name='jnlp slave', namespace='jenkins-slaves', label='jenkins-slaves', nodeSelector='', nodeUsageMode=EXCLUSIVE, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@44dcba2d, containers=[ContainerTemplate{name='jnlp-slave', image='jenkins/jnlp-slave', workingDir='./.jenkins-agent', command='/bin/sh -c', args='cat', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@58f0ceec}]}. Container jnlp exited with error 255. Logs: Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior Warning: SECRET is defined twice in command-line arguments and the environment variable Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable Apr 03, 2018 4:16:16 PM hudson.remoting.jnlp.Main createEngine INFO: Setting up agent: jnlp-slave-t8004 Apr 03, 2018 4:16:16 PM hudson.remoting.jnlp.Main$CuiListener &lt;init&gt; INFO: Jenkins agent is running in headless mode. Apr 03, 2018 4:16:16 PM hudson.remoting.Engine startEngine INFO: Using Remoting version: 3.19 Apr 03, 2018 4:16:16 PM hudson.remoting.Engine startEngine WARNING: No Working Directory. Using the legacy JAR Cache location: /home/jenkins/.jenkins/cache/jars Apr 03, 2018 4:16:17 PM hudson.remoting.jnlp.Main$CuiListener status INFO: Locating server among [http://172.20.43.30:8080/] Apr 03, 2018 4:16:17 PM hudson.remoting.jnlp.Main$CuiListener error SEVERE: http://172.20.43.30:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found java.io.IOException: http://172.20.43.30:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:197) at hudson.remoting.Engine.innerRun(Engine.java:518) at hudson.remoting.Engine.run(Engine.java:469) Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate Terminating Kubernetes instance for agent jnlp-slave-t8004 Apr 03, 2018 12:16:21 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. Apr 03, 2018 12:16:21 PM INFO okhttp3.internal.platform.Platform log ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path? 
Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate Terminated Kubernetes instance for agent jenkins-slaves/jnlp-slave-t8004 Apr 03, 2018 12:16:21 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate Disconnected computer jnlp-slave-t8004 Apr 03, 2018 12:16:25 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision Excess workload after pending Kubernetes agents: 1 Apr 03, 2018 12:16:25 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision Template: Kubernetes Pod Template Apr 03, 2018 12:16:25 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. Apr 03, 2018 12:16:25 PM INFO okhttp3.internal.platform.Platform log ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path? Apr 03, 2018 12:16:25 PM INFO hudson.slaves.NodeProvisioner$StandardStrategyImpl apply Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: 0 Apr 03, 2018 12:16:35 PM WARNING io.fabric8.kubernetes.client.Config tryServiceAccount Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. Apr 03, 2018 12:16:35 PM INFO hudson.slaves.NodeProvisioner$2 run Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s) Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision Excess workload after pending Kubernetes agents: 0 Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision Template: Kubernetes Pod Template Apr 03, 2018 12:16:35 PM INFO okhttp3.internal.platform.Platform log ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path? Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch Created Pod: jnlp-slave-bnz94 in namespace jenkins-slaves Apr 03, 2018 12:16:35 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch </code></pre> <p>-Steve Maring</p> <p>Orlando, FL</p>
<p>I went to <a href="http://172.20.43.30:8080/configureSecurity/" rel="noreferrer">http://172.20.43.30:8080/configureSecurity/</a> and set "Agents:TCP port for JNLP agents" to "random"</p> <p>I then got a "jnlp-slave-ttm5v (suspended)" that stays in the "Build Executor Status"</p> <p>and the log said:</p> <pre><code>Container is waiting jnlp-slave-ttm5v [jnlp-slave]: ContainerStateWaiting(message=Error response from daemon: the working directory './.jenkins-agent' is invalid, it needs to be an absolute path, reason=CreateContainerError, additionalProperties={}) </code></pre> <p>After setting "Working directory" to "/home/jenkins" I saw a pod actually get created on k8s:</p> <pre><code># kubectl get pods --namespace=jenkins-slaves NAME READY STATUS RESTARTS AGE jnlp-slave-1ds27 2/2 Running 0 42s </code></pre> <p>and my job ran successfully!</p> <pre><code>Started by user Buildguy Agent jnlp-slave-1ds27 is provisioned from template Kubernetes Pod Template Agent specification [Kubernetes Pod Template] (jenkins-slaves): * [jnlp-slave] jenkins/jnlp-slave(resourceRequestCpu: , resourceRequestMemory: , resourceLimitCpu: , resourceLimitMemory: ) Building remotely on jnlp-slave-1ds27 (jenkins-slaves) in workspace /home/jenkins/workspace/maven-parent-poms </code></pre>
<p>Dockerfile has a parameter for <em>ENTRYPOINT</em> and while writing Kubernetes deployment YAML file, there is a parameter in Container spec for <em>COMMAND</em>.</p> <p>I am not able to figure out what's the difference and how each is used?</p>
<p>Kubernetes provides us with multiple options for how to use these commands:</p> <p>When you override the default Entrypoint and Cmd in a Kubernetes .yaml file, these rules apply:</p> <ul> <li>If you do not supply command or args for a Container, the defaults defined in the Docker image are used.</li> <li>If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.</li> <li>If you supply a command for a Container, only the supplied command is used. The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with the args supplied (or no args if none are supplied).</li> </ul> <p><em>Here is an example:</em></p> <p>Dockerfile:</p> <pre><code>FROM alpine:latest
COPY "executable_file" /
ENTRYPOINT [ "./executable_file" ]
</code></pre> <p>Kubernetes yaml file:</p> <pre><code> spec:
  containers:
  - name: container_name
    image: image_name
    args: ["arg1", "arg2", "arg3"]
</code></pre> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a></p>
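<p>And to illustrate the third rule, supplying <code>command</code> replaces the image's Entrypoint entirely; for example:</p> <pre><code> spec:
  containers:
  - name: container_name
    image: image_name
    command: ["/bin/sh", "-c"]        # overrides the image ENTRYPOINT
    args: ["echo hello; sleep 3600"]  # passed as arguments to the command above
</code></pre>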
<p>I have installed a cluster of one master and one node using kubadm</p> <p><code>kubeadm version: &amp;version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></p> <p>Whenever I try and install a pod (ngix, gafana, influxdb, heapster, tiller), it always stays in a state of ContainerCreating. </p> <p>I can't figure out how to diagnose the issue to try and get the containers to move to a Running state.</p>
<p>Use the following commands to check the kubelet logs and diagnose the issue accordingly:</p> <pre><code>systemctl status kubelet
journalctl -xeu kubelet
</code></pre> <p>For details: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/</a></p>
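<p>Before digging into the kubelet logs, it is often quicker to look at the events Kubernetes attaches to the pod itself; they usually name the reason (image pull, volume mount, CNI, etc.) directly:</p> <pre><code>kubectl describe pod &lt;pod-name&gt; -n &lt;namespace&gt;   # see the Events section at the bottom
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>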
<p>I'm new to <code>Kubernetes</code>. I'm using <code>OpenStack</code> and I would like to create a load balancer to access my <code>NodeJs</code> server running on 3 pods. I get a pending loop when my load balancer tries to get its public IP. I'm using <code>kubeadm</code> with <code>calico</code>.</p> <p><a href="https://i.stack.imgur.com/YmaTV.png" rel="nofollow noreferrer">Screen: Pending External IP</a></p>
<p>This is a workaround method. You can specify the external IP explicitly:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  type: LoadBalancer
  externalIPs:
  - 10.240.0.4
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    name: node-js
</code></pre>
<p>I am trying to create a little Pod example with two containers that share data via an emptyDir volume. In the first container I am waiting a couple of seconds before it gets destroyed.</p> <p>In the postStart I am writing a file to the shared volume with the name "started", in the preStop I am writing a file to the shared volume with the name "finished".</p> <p>In the second container I am looping for a couple of seconds outputting the content of the shared volume but the "finished" file never gets created. Describing the pod doesn't show an error with the hooks either. </p> <p>Maybe someone has an idea what I am doing wrong</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: shared-data-example labels: app: shared-data-example spec: volumes: - name: shared-data emptyDir: {} containers: - name: first-container image: ubuntu command: ["/bin/bash"] args: ["-c", "for i in {1..4}; do echo Welcome $i;sleep 1;done"] imagePullPolicy: Never env: - name: TERM value: xterm volumeMounts: - name: shared-data mountPath: /myshareddata lifecycle: preStop: exec: command: ["/bin/sh", "-c", "echo First container finished &gt; /myshareddata/finished"] postStart: exec: command: ["/bin/sh", "-c", "echo First container started &gt; /myshareddata/started"] - name: second-container image: ubuntu command: ["/bin/bash"] args: ["-c", "for i in {1..20}; do ls /myshareddata;sleep 1;done"] imagePullPolicy: Never env: - name: TERM value: xterm volumeMounts: - name: shared-data mountPath: /myshareddata restartPolicy: Never </code></pre>
<p>It is happening because the final status of your pod is <code>Completed</code>: the applications inside the containers stopped on their own, without any external stop request.</p> <p>Kubernetes runs the <code>preStop</code> hook only when the pod receives an external signal to stop. Hooks were made to implement a graceful custom shutdown for applications inside a pod when you need to stop it. In your case, your application already stops gracefully by itself, so Kubernetes has no reason to call the hook.</p> <p>If you want to check how the hook works, you can try to create a <code>Deployment</code> and update its image (for example with <code>kubectl set image</code>). In that case, Kubernetes will stop the old version of the application, and the <code>preStop</code> hook will be called.</p>
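<p>A quick way to see the hook fire with the pod from the question is to delete it while the first container is still inside its loop (it only runs for a few seconds), so that it receives an external stop and <code>preStop</code> gets a chance to run:</p> <pre><code>kubectl apply -f shared-data-example.yaml   # whatever file the manifest is saved in
kubectl delete pod shared-data-example      # within the first few seconds
</code></pre>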
<p>I'd like to (programatically) get a list of all schedulable nodes in my kubernetes cluster.</p> <p>I'm fairly sure this used to be possible by looking at <code>.spec.unschedulable</code> in the full output of <code>kubectl get nodes</code> (using JSON or template output), but now it seems this info is inside the <code>scheduler.alpha.kubernetes.io/taints</code> key, which is much harder to parse and just doesn't feel like the right place.</p> <p>Is there some other way to find this info? Am I missing something obvious? I'm using version 1.5.1 currently.</p> <p>UPDATE: I can <em>almost</em> get there with some Go templating:</p> <pre><code>$ kubectl get nodes -o go-template='{{range .items}}{{with $x := index .metadata.annotations "scheduler.alpha.kubernetes.io/taints"}}{{.}}{{end}}{{end}}' [{"key":"dedicated","value":"master","effect":"NoSchedule"}] </code></pre> <p>But that leaves me with a blob of JSON that I can't parse in the template, and I still have to invert the results and get the node name out.</p> <p>UPDATE 2: Apparently unschedulable nodes should have <code>.spec.unschedulable</code> set. This doesn't seem to always be the case; not sure if it's due to a bug or a misunderstanding on my part.</p>
<p>Though quite late to the party, I needed something similar today to determine when the scheduling of pods would actually start succeeding:</p> <pre><code>kubectl get nodes -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints[?(@.effect=='NoSchedule')].effect}{\"\n\"}{end}" | awk 'NF==1 {print $0}' </code></pre> <p><s>This is essentially just a more compact template than the one the OP posted, with the inversion done by filtering with <code>awk</code>.</s></p> <p><em>Edit:</em> after trying to use the command I had posted earlier, I realized that it would only work in the edge case of a single master. Thus I have extended the command to first print all node names along with their <code>NoSchedule</code> taints (if present) and afterwards remove all lines with more than one column using <code>awk</code> (this is what <code>NF==1</code> does).</p>
<p>How can I list, using command line, all the pods running in the nodes of a particular instance group?</p> <p>For example, if I have instancegroup "Foo", which has let's say three nodes, N1, N2, and N3, that in turn have pods A and B running on N1, pods C, D and E running on N2, and pod F running on N3.... how can I, using kops/kubectl, make query with input "Foo" and output "A, B, C, D, E, F"?</p> <p>I know that you can query a particular node and list all the pods in it, but I want to query an instancegroup, with many nodes, and get all the pods across all the nodes, with no namespace constraint.</p> <p>Thanks!</p>
<p>One way would be to assign labels to the nodes in your InstanceGroup, perhaps with the name of the instance group (for simplicity), and then use a combination of kubectl commands to query.</p> <p>So in your InstanceGroup spec set <code>nodeLabels</code> to <code>ig: Foo</code> (see <a href="https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md#adding-taints-or-labels-to-an-instance-group" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md#adding-taints-or-labels-to-an-instance-group</a> for details, and the sketch below) and then run:</p> <p><code>kubectl get pods -o wide --all-namespaces | grep -F -f &lt;(kubectl get nodes -l ig=Foo --no-headers | awk '{print $1}')</code></p>
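<p>The InstanceGroup part would look roughly like this (edit it with <code>kops edit ig Foo</code>; the label key and value are up to you, and the rest of the existing spec stays as-is):</p> <pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: Foo
spec:
  role: Node
  nodeLabels:
    ig: Foo
  # ... machineType, minSize, maxSize, etc. unchanged
</code></pre> <p>Remember to apply the change with <code>kops update cluster --yes</code> (and a rolling update if kops asks for one) before the node labels show up.</p>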
<p><b>Disclosure :</b> <br> </p> <ul> <li>I have a few questions about the containerization and orchestration tools available in the market today.</li> <li>I have worked with Docker Swarm, Kubernetes, and Elastic Beanstalk.</li> </ul> <p><b>Problem</b>: I want to automate scaling without having to deal with EC2 instances, so that I don't have to worry about scaling instances myself. I know that GKE provides that, but I want to stick with AWS. I want a system where I can define scaling triggers based on requests, memory, and CPU on a dashboard (the same as Elastic Beanstalk, but I will need to run multiple services, and each service will have different scaling triggers). From what I read, one thing Kubernetes and ECS have in common is that I have to write scripts based on CloudWatch events. </p> <hr> <p><b>Q.1: For Docker Swarm:</b> <br><br> How is Docker Swarm better at balancing load and auto scaling when I already have to provide more than one virtual machine (created by docker-machine) as workers for my manager? <br><br> <b>My View:</b> <br></p> <ul> <li>This is not good cost-wise, as I will have to pay for those two instances anyway. </li> <li>These VMs will still be present when there is low load.</li> <li>I don't think any auto-scaling is possible here except via a manually run script.</li> <li>I will be managing a single docker-compose.yml here.</li> </ul> <hr> <p><b>Q.2: For Kubernetes:</b> <br><br> Does Kubernetes scale at the instance level? <br><br> <b>My View:</b> <br></p> <ul> <li>Kubernetes provides options for autoscaling (like horizontal scaling etc.), but this all happens at the service level; in the end there will be multiple pods and containers. </li> <li>As far as I know, in production everything will happen in a Kubernetes cluster managed by kops. So if it scales at the instance level, how does it do that? It doesn't have a virtual machine concept like Swarm does in Docker. </li> <li>I will be managing multiple YAML files here based on my services.</li> </ul> <hr> <p><b>Q.3 For Elastic Beanstalk:</b> <br><br> If Elastic Beanstalk can manage my entire containerization along with autoscaling and load balancing, then why are the two options above so much more in demand and considered better to use? <br><br> <b>My View:</b> <br></p> <ul> <li>Elastic Beanstalk is moving more towards Fargate nowadays, which is not available in all zones at present.</li> <li>I have seen in the process that it gives full control by providing a complete configuration dashboard for my services.</li> <li>It will create new instances according to my load and autoscale.</li> </ul> <hr> <p><i> I am confused and <b>unable to convince</b> people who say no to Kubernetes and Docker Swarm. Can someone please provide me a detailed overview of what to use in production on AWS and why? I can't really answer for autoscaling and load balancing in production even though I know the tools above. </i></p> <blockquote> <p>The questions listed above assume AWS as the cloud deployment platform. I would also like to let you know that I have a successfully running docker-compose.yml on Docker Swarm and 4 different YAML files for Kubernetes which also work great on Minikube.</p> </blockquote>
<blockquote> <p>I am confused and unable to convince people who say no to Kubernetes and Docker Swarm. Can someone please provide me a detailed overview of what to use in production on AWS and why? </p> </blockquote> <p>Two out of the three solutions you listed are platform-agnostic, so we can talk about them without concentrating on AWS.</p> <p>I recommend you use Kubernetes, and I will try to explain why below.</p> <blockquote> <p>How is Docker Swarm better at balancing load and auto scaling when I already have to provide more than one virtual machine (created by docker-machine) as workers for my manager? </p> </blockquote> <p>Docker Swarm is a relatively simple platform for orchestrating Docker applications, with quite simple logic. To implement node-based autoscaling, you need external tools (in AWS, for instance, you can use an Auto Scaling group with rules based on CPU load), and you will have to add custom scripts to add and remove nodes from the Docker Swarm cluster. All those things are possible, but you will need to develop them yourself.</p> <blockquote> <p>Does Kubernetes scale at the instance level? </p> </blockquote> <p>Yes, it does. Kubernetes can scale using the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">cluster-autoscaler</a> daemon, which can run inside a cluster and automatically scale your instances up and down based on several metrics, including custom ones. You do not have to create any scripts; all the logic is already implemented, you just need to set up the rules.</p> <blockquote> <p>If Elastic Beanstalk can manage my entire containerization along with autoscaling and load balancing, then why are the two options above so much more in demand and considered better to use? </p> </blockquote> <p>Elastic Beanstalk is a solution for running applications inside AWS, but you will be limited by its feature set. Yes, it can do many things for you, but if you need something custom or need to build a hybrid cloud solution, it is not an option.</p> <p>Finally, I can tell you that with Kubernetes you will get:</p> <ol> <li>Tons of documentation and community experience.</li> <li>Auto-magic for almost everything, from auto-scaling to A/B testing and automatically issuing Let's Encrypt certificates for your services. You would spend a lot of time implementing all those features in Docker Swarm or Elastic Beanstalk, and some of them are almost impossible in other orchestrators. </li> <li>Platform agnosticism. You can migrate to any platform (even on-premise) with minimal changes to the configuration of your applications. Docker Swarm also works almost everywhere, but it is less functional. </li> <li>A lot of other things for scheduling, jobs, application distribution, different types of volumes, and many more.</li> </ol> <p>Also, I can recommend some Kubernetes modules and apps which can be useful for you on (not only) AWS:</p> <ul> <li><a href="https://github.com/jtblin/kube2iam" rel="noreferrer">Kube2iam</a>, a tool which provides AWS IAM roles assigned directly to your pods rather than to instances.</li> <li><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">Autoscaling</a> module.</li> <li><a href="https://github.com/jetstack/cert-manager/" rel="noreferrer">Cert-manager</a> to generate LetsEncrypt SSL keys.
It has Route53 integration for the DNS challenge.</li> <li><a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Nginx-ingress</a> as an ingress controller, which provides you with a lot of features and the best Nginx experience.</li> <li><a href="https://github.com/kubernetes/kops" rel="noreferrer">Kops</a>. But you already know it.</li> </ul>
<p>I'm new to Kubernetes and created a cluster on Google Cloud Platform. Now I'm trying to set up automated deployment from VSTS and need to create a Kubernetes user for this, in order to get a kubeconfig file for authentication.</p> <p>My question is: how can I do this? Do I need to create this user with kubectl (if yes, how?)? Or is there a way to create the user through the GCP console?</p> <p>I searched the web but found nothing that worked. Thanks for any suggestions!</p> <p><strong>Edit:</strong> I know how to connect to my cluster using this gcloud command: <code>gcloud container clusters get-credentials</code>. This works perfectly fine on my local dev machine. But on my VSTS build agent I don't have gcloud installed (and also don't want to install it) and need to use only kubectl to connect to my cluster, without the gcloud command.</p> <p>I have already figured out that the gcloud command creates a kubeconfig file with the gcloud command as auth provider (so I can't just copy the created kubeconfig file because it depends on gcloud being installed). When I then run kubectl, it creates a temporary access token in the kubeconfig. But this token is only valid for about 30 minutes. I need a token that is valid indefinitely, so I can use it on my build server.</p>
<p>To connect to a Kubernetes cluster in GCP, you can use either a user account or a service account. </p> <p>If you choose a user account, run this command: </p> <pre><code>gcloud init
</code></pre> <p>or </p> <pre><code>gcloud init --console-only
</code></pre> <p>This will bring up the GCP authentication dialog. Once you pass authentication, you'll be able to operate with the permissions of the authenticated user. </p> <p>If you choose a service account, you need to create it and generate a new key for it. </p> <p>You can do it using the <em>GCP console</em> -> <em>IAM &amp; admin</em> -> <em>IAM</em> -> <em>Service accounts</em>.<br> Click on <code>Create service account</code>, select a name for the account, select the appropriate role, and click <code>Create</code>.<br> You can generate the key by selecting <code>Furnish a new private key</code> in the account creation dialog box, or generate a new key by clicking on the three dots on the right side of the existing service account's row and selecting <code>Create key</code>. Select JSON format and save the file to disk. </p> <p>Then run the command: </p> <pre><code>gcloud auth activate-service-account &lt;[email protected]&gt; --key-file=&lt;previously_saved_file.json&gt;
</code></pre> <p>At this stage, you are authenticated with GCP and ready to connect to your Kubernetes cluster. </p> <p>The next command will update your kubectl configuration to work with your cluster: </p> <pre><code>gcloud container clusters get-credentials &lt;cluster_name&gt; --zone &lt;gcp_availability_zone&gt; --project &lt;gcp_project_name&gt;
</code></pre> <p>You can extend or reduce the account's permissions by selecting another role for it in the GCP IAM management interface. </p> <p>Official documentation:<br> <a href="https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account" rel="nofollow noreferrer">gcloud auth activate-service-account</a><br> <a href="https://cloud.google.com/sdk/gcloud/reference/init" rel="nofollow noreferrer">gcloud init</a><br> <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer">gcloud container clusters get-credentials</a></p>
<p>I have a gcloud Kubernetes cluster initialized, and I'm using a Dask Client on my local machine to connect to the cluster, but I can't seem to find any documentation on how to upload my dataset to the cluster.</p> <p>I originally tried to just run Dask locally with my dataset loaded in my local RAM, but obviously that's sending it over the network and the cluster is only running at 2% utilization when performing the task.</p> <p>Is there a way to put the dataset onto the Kubernetes cluster so I can get 100% CPU utilization?</p>
<p>Many people store data on a cloud object store, like Amazon's S3, Google Cloud Storage.</p> <p>If you're interested about Dask in particular these data stores are supported in most of the data ingestion functions by using a protocol like the following:</p> <pre><code>import dask.dataframe as dd df = dd.read_csv('gcs://bucket/2018-*-*.csv') </code></pre> <p>You will need to also have the relevant Python library installed to access this cloud storage (gcsfs in this case). See <a href="http://dask.pydata.org/en/latest/remote-data-services.html#known-storage-implementations" rel="nofollow noreferrer">http://dask.pydata.org/en/latest/remote-data-services.html#known-storage-implementations</a> for more information.</p>
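<p>If you are not baking the library into the worker image, installing it is just the following (it needs to be present on the scheduler and workers as well as on the client):</p> <pre><code>pip install gcsfs
# or
conda install -c conda-forge gcsfs
</code></pre>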
<p>I have an idea for what I think should happen with my project, but I want to check and see if it works on a theoretical level first. Basically I am working on a Django site that runs on Kubernetes, but I am struggling a little bit with how I should set up my ReplicaSet/StatefulSet to manage uploaded content (images).</p> <p>My biggest concern is figuring out how to scale and maintain uploaded content. My first idea is that I need to have a single volume that these files are written to, but can I have multiple pods write to the same volume that way while scaling?</p> <p>From what I have gathered, it doesn't seem to work that way. It sounds more like each pod, or at least each node, would have its own volume. But would a request for an image reach the volume it is stored on? Or should I create a custom backend program to move things around so that it is served off of an NGINX server like my other static content?</p> <p>FYI - this is my first scalable project. But I am really just trying to find the best way to manage uploads... or a way in general. I would appreciate any explanations, thoughts, or fancy diagrams on how something like this might work!</p>
<p>I think you should forget Kubernetes for a moment and think about the architecture and capabilities of your Django application. I guess you have built a web app that offers some 'upload image' functionality, and then you have code that 'stores' this image somewhere. In the simplest scenario, you run your app on your laptop and the web app is configured to save this content to a local folder. A more advanced example is that you deploy your application to a VM or a cloud VM, e.g. an AWS EC2 instance, and your app saves the files to the local storage of that instance. The question is twofold: what happens if we have 2 instances of your web app deployed - can they be configured and run so that they 'share' the same folder to save the images? I guess this is what you want; otherwise your app would not scale horizontally, and each user would have to hit one specific instance in order to upload or retrieve specific images. With that in mind, this is a design decision of your application, which I am pretty sure you have already worked out: you need to think about how you can share a folder or a bucket so that all instances of your web app can save files to it. If you spun up 3 different VMs on any cloud, you would have to use some kind of cloud storage so that all three instances point to the same physical storage location - an NFS drive, or a cloud storage service such as S3.</p> <p>Having all the above in mind, and clearly understanding that you need to decouple your application from the notion of local storage (especially if you want to make it as stateless as it gets, whatever that means to you): with your web app packaged as a Docker container and deployed in a Kubernetes cluster as a pod, saving files to local storage is not going to get you far, since each pod (each Docker container) will use the underlying Kubernetes worker's (VM's) storage to save files, so another instance will be saving files on some other VM, and so on.</p> <p>Kubernetes provides this kind of abstraction for applications (pods) that want to 'share' some storage within the Kubernetes cluster and, of course, persist it. Something I did not add above is that pod and worker storage (meaning files you save inside the Kubernetes worker or pod) is lost once that VM/instance is restarted. So you want something durable. </p> <p>To cut a long story short, </p> <p>1) You can deploy your application / pod along with a Persistent Volume Claim, assuming that your Kubernetes cluster supports it. What happens is that you mount into your pod some kind of folder / storage which is backed by whatever is available to your cluster - some kind of NFS store, for example. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p> <p>2) You can 'outsource' the need for a shared storage location to some external provider, e.g. a common case is an S3 bucket, and not tackle the problem in Kubernetes at all - just keep and provision the app within Kubernetes.</p> <p>I hope I gave you some basic ideas.</p>
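<p>If you go with option 1, a minimal sketch of the claim might look like the snippet below. Note this is an illustration, not your exact setup: the name and size are made up, and <code>ReadWriteMany</code> only works if your cluster has a storage backend that supports shared mounts (NFS, a cloud file store, etc.):</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media-uploads
spec:
  accessModes:
  - ReadWriteMany        # all replicas mount the same volume read-write
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Each replica of the Django Deployment would then mount this claim (via <code>volumes</code> / <code>volumeMounts</code>) at whatever path <code>MEDIA_ROOT</code> points to.</p>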
<p>I have a Kubernetes 1.8 cluster. I've deployed a lot of services, but I have a problem removing some pods; they never get deleted.</p> <p>This is the pod describe:</p> <pre><code>Name:                      project-settlement-api-798c8b6688-ldclr
Namespace:                 project
Node:                      10.93.96.208/10.93.96.208
Start Time:                Fri, 10 Nov 2017 18:39:08 -0300
Labels:                    app=project-settlement-api
                           pod-template-hash=3547462244
Annotations:               kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"project","name":"project-settlement-api-798c8b6688","uid":"955c2781-c65f-11e7-ba5...
Status:                    Terminating (expires Fri, 17 Nov 2017 10:25:24 -0300)
Termination Grace Period:  0s
IP:
Created By:                ReplicaSet/project-settlement-api-798c8b6688
Controlled By:             ReplicaSet/project-settlement-api-798c8b6688
Containers:
  project-settlement-api:
    Container ID:
    Image:
    Image ID:
    Port:           &lt;none&gt;
    State:          Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      SPRING_PROFILES_ACTIVE:  docker
    Mounts:
      /opt from project-volume (rw)
      /var/project from project-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qw23t (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  project-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  project-pvc
    ReadOnly:   false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     &lt;none&gt;
Events:          &lt;none&gt;
</code></pre> <p>And this is the pod list, where you can find 3 pods that cannot be deleted:</p> <pre><code>NAME                                        READY     STATUS        RESTARTS   AGE
project-companies-api-5847b84599-6tpbq      0/1       Terminating   0          9m
project-companies-api-5847b84599-ph4xl      1/1       Running       0          4m
project-companies-api-5847b84599-z5vr2      0/1       Terminating   3          11m
project-middleware-85888df786-8nxlj         1/1       Running       0          14d
project-settlement-api-5cb5d5859d-qrq8b     1/1       Running       0          2m
project-settlement-api-798c8b6688-ldclr     0/1       Terminating   0          20d
project-transactions-api-779cd489b7-xvgd8   1/1       Running       0          14d
project-users-api-7d7c5cd99d-bqg88          1/1       Running       0          14d
project-web-57c869d6fc-7qpk5                1/1       Running       0          14d
monitor-app-6455cf9dcf-p4ww4                1/1       Running       0          16m
</code></pre> <p>I've tried a lot of commands to delete them but nothing happened, for example: <code>kubectl delete po/mypod --grace-period=0 --force --namespace mynamespace</code></p> <p>Can anybody help me?</p> <p>Regards</p>
<p>Try restarting the docker daemon on the master node</p> <p>This issue might be related to what you are experiencing: <a href="https://github.com/kubernetes/kubernetes/issues/51835" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/51835</a></p>
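<p>For example, once you can SSH to the node in question (assuming a systemd-based node; the describe output shows the stuck pod was scheduled on 10.93.96.208, so that node is worth trying too):</p> <pre><code># on the node itself
sudo systemctl restart docker
# restarting the kubelet afterwards sometimes helps as well
sudo systemctl restart kubelet

# then check whether the pods are finally gone
kubectl get pods -n project
</code></pre>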
<h1>Issue</h1> <p>I am having trouble applying TLS to the DNS name of my LoadBalancer service for my Kubernetes cluster, and I am at a bit of a loss.</p> <p>This is the first time I have worked with Kubernetes as well as Azure's Managed Container Services. For reasons that are out of my control this api is required to run on Azure's Managed Container Services. </p> <h1>Environment</h1> <p>Cluster is running on Azure using Managed Container Services (preview). I created my environment by following the steps here: <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster</a></p> <p>I created a static IP in Azure to use in the yaml for the loadbalancer service. Furthermore, I created a <code>myprefix.cloudapp.azure.com</code> DNS name for the IP using the following commands (<a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a>)</p> <p><code>IP="XX.XX.XX.XX"</code></p> <p><code>DNSNAME="myprefix"</code></p> <p><code>RESOURCEGROUP=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[resourceGroup]" --output tsv)</code></p> <p><code>PIPNAME=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[name]" --output tsv)</code></p> <p><code>az network public-ip update --resource-group $RESOURCEGROUP --name $PIPNAME --dns-name $DNSNAME</code></p> <h1>Deployment</h1> <p>This is the yaml I am using for my deployment:</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-node-express-api-deployment
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: my-node-express-api
    spec:
      containers:
      - name: my-node-express-api-container
        image: myrepo/my-node-express-api-image:latest
        ports:
        - containerPort: 3000
      volumes:
      - name: tls
        secret:
          secretName: my-tls-secret
</code></pre> <h1>Service</h1> <p>This is the yaml for my LoadBalancer Service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-node-express-api-loadbalancer
spec:
  loadBalancerIP: 52.176.148.91
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
    port: 443
    targetPort: 3000
  selector:
    app: my-node-express-api
</code></pre> <h1>Secret</h1> <p>YAML for the secret:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
  namespace: default
data:
  tls.crt: (base64 for myprefix.cloudapp.azure.com.crt)
  tls.key: (base64 for myprefix.cloudapp.azure.com.key)
</code></pre> <h1>Note:</h1> <p>Everything works correctly over http when I remove the Secret from my deployment and remove port 443 from the LoadBalancer Service.</p>
<p>On Azure, if you need TLS termination on Kubernetes, you can use the <strong>Nginx Ingress controller</strong> (Microsoft is also working on an Azure ingress controller that uses Application Gateway).</p> <p>To achieve this, we can follow these steps:<br> 1. Deploy the Nginx Ingress controller<br> 2. Create TLS certificates<br> 3. Deploy a test HTTP service<br> 4. Configure TLS termination </p> <p>For more information about configuring the Nginx ingress controller for TLS termination on Kubernetes on Azure, please refer to this <a href="https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/" rel="nofollow noreferrer">blog</a>.</p>
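<p>As a rough sketch of step 4, the Ingress just references your existing TLS secret (host and names taken from the question; the secret should be of type <code>kubernetes.io/tls</code>, and the app's Service would normally be switched back to a plain <code>ClusterIP</code> so that only the ingress controller's LoadBalancer owns the public IP):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-node-express-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - myprefix.cloudapp.azure.com
    secretName: my-tls-secret
  rules:
  - host: myprefix.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-node-express-api-loadbalancer
          servicePort: 80
</code></pre>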
<p>I have a gcloud Kubernetes cluster running and a Google bucket that holds some data I want to run on the cluster.</p> <p>In order to use the data in the bucket, I need <code>gcsfs</code> installed on the nodes. How do I install packages like this on the cluster using gcloud, kubectl, etc.?</p>
<p>Check if a recipe like "<a href="https://github.com/pangeo-data/pangeo/wiki/Launch-development-cluster-on-Google-Cloud-Platform-with-Kubernetes-and-Helm" rel="nofollow noreferrer">Launch development cluster on Google Cloud Platform with Kubernetes and Helm</a>" could help.</p> <p>Using <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, you can define workers with <a href="https://github.com/pangeo-data/pangeo/wiki/Launch-development-cluster-on-Google-Cloud-Platform-with-Kubernetes-and-Helm#configure-containers" rel="nofollow noreferrer">additional pip packages</a>:</p> <pre><code>worker: replicas: 8 limits: cpu: 2 memory: 7500000000 pipPackages: &gt;- git+https://github.com/gcsfs/gcsfs.git git+https://github.com/xarray/xarray.git condaPackages: &gt;- -c conda-forge zarr blosc </code></pre>
<p>I am new to Kubernetes and Helm. I am coming from a plain Docker/docker-compose world.</p> <p>I have some complex services running multiple Docker containers that require a lot of configuration parameters and logic. The docker-ized services require a lot of different configuration files, keys and command line arguments on start up. I also require some configuration logic at runtime (some configuration elements have to be generated) that can only execute inside of the container.</p> <p>What I ended up doing is to write a shell script (to use as <code>CMD</code>) that expects environment variables, defines default values, translates those environment variables to command arguments and configuration files.</p> <hr> <p>This is a non-working example of how I build it, without having Kubernetes and Helm in mind.</p> <p><strong>Dockerfile</strong></p> <pre><code>... CMD [ "./bootstrap.sh" ] </code></pre> <p><strong>bootstrap.sh (packaged in image)</strong></p> <pre><code># Define default values, if no environment variables provided on # on "docker run" export CONFIG_VALUE_A=${CONFIG_VALUE_A:="a"} export CONFIG_VALUE_B=${CONFIG_VALUE_B:="b"} export CONFIG_VALUE_C=${CONFIG_VALUE_C:="c"} # write CONFIG_VALUE_A to file echo ${CONFIG_VALUE_A} &gt; ./some-config-file-a.cfg ARGS="--config-file-a ./some-config-file-a.cfg --config-value-b ${CONFIG_VALUE_B} --config-value-c ${CONFIG_VALUE_C}" exec ./my-app ${ARGS} </code></pre> <p>This has the advantage that using the environment variables, I have a standard configuration interface and don't need to deal with volumes for configuration files. </p> <hr> <p>Now, I am stepping into Kubernetes in Helm. Helm has its own parameter concept using the <code>values.yaml</code>. To combine it with what I already have above, I would just map values from the <code>values.yaml</code> with those environment variables.</p> <p><strong>deployment.yaml</strong></p> <pre><code>... spec: ... template: ... spec: containers: - name: my-app ... env: - name: "CONFIG_VALUE_A" value: {{ .Values.config.value_a }} - name: "CONFIG_VALUE_B" value: {{ .Values.config.value_b }} - name: "CONFIG_VALUE_C" value: {{ .Values.config.value_c }} </code></pre> <p><strong>values.yaml</strong></p> <pre><code>config: value_a: a value_b: b value_c: c </code></pre> <p>However, having three configuration layers where I map values back and forth (helm templates => Container environment variables => Config files/CLI arguments) violates the DRY principle and adds a lot of potential for typos/errors that will be hard to find later.</p> <hr> <p>Ideally,</p> <ul> <li>I would just define my configuration structure in the <code>deployment.yaml</code> and my defaults once in Helm's <code>default.yaml</code></li> <li>I would pass those values straight through to the container and let some sort of configuration script build the command line arguments and configuration file, without the use of environment variables as a middle layer</li> <li>Use some sort of type-safe configuration format</li> <li>Keep the total # of lines as low as possible</li> <li>Keep configuration files readable and don't mix up different languages (i.e. JSON defined in a YAML file)</li> </ul> <p>How do you solve complex configuration management with Kubernetes, Helm and Docker? </p>
<p>In the Kubernetes world, configuration is usually managed through ConfigMaps, which are the primary store for configuration data.</p> <p>In your situation, I would approach it like this (a sketch of steps 1 and 3 follows at the end of this answer):</p> <ol> <li>Create an additional template in your Helm chart of kind ConfigMap and describe the structure of the application's <code>.cfg</code> file there. Helm uses the Go template format, so it is easy to build any structure, including iterations.</li> <li>Add all default values to the <code>values.yaml</code> file.</li> <li>Edit <code>deployment.yaml</code>: mount the <code>.cfg</code> file from the ConfigMap to a path in the container and point the application to it.</li> <li>Use an additional values file (or several) to override the defaults per environment.</li> </ol> <p>With that in place you have:</p> <ul> <li>A ConfigMap with the static configuration in the application's own format, which you can inspect at any time.</li> <li>Only one place to edit it - the default and override YAML files.</li> <li>A readable <code>key: value</code> YAML format.</li> <li>All config-file generation logic lives outside the container, so you don't need to build a new image just to change the order of options.</li> </ul>
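<p>A minimal sketch of what steps 1 and 3 could look like, reusing the <code>config</code> values from the question (chart, app and file names here are assumptions):</p> <p><strong>templates/configmap.yaml</strong></p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  app.cfg: |
    value-a = {{ .Values.config.value_a }}
    value-b = {{ .Values.config.value_b }}
    value-c = {{ .Values.config.value_c }}
</code></pre> <p><strong>deployment.yaml (relevant parts)</strong></p> <pre><code>spec:
  containers:
  - name: my-app
    volumeMounts:
    - name: config
      mountPath: /etc/my-app        # the app reads /etc/my-app/app.cfg
  volumes:
  - name: config
    configMap:
      name: my-app-config
</code></pre> <p>Overrides then come from an extra values file passed with <code>-f values-prod.yaml</code> (file name is a placeholder).</p>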
<p>I am trying to deploy a stateful set mounted on a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a>.</p> <p>I installed Kubernetes on AWS via <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">kops</a>.</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>According to <a href="https://github.com/kubernetes/kubernetes/issues/28962" rel="nofollow noreferrer">this issue</a> I need to create the PVC first:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: zk-data-claim spec: storageClassName: default accessModes: - ReadWriteOnce resources: requests: storage: 2Gi --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: zk-logs-claim spec: storageClassName: default accessModes: - ReadWriteOnce resources: requests: storage: 2Gi </code></pre> <p>The <code>default</code> storage class exists, and the PVC binds a PV successfully:</p> <pre><code>$ kubectl get sc NAME PROVISIONER AGE default kubernetes.io/aws-ebs 20d gp2 (default) kubernetes.io/aws-ebs 20d ssd (default) kubernetes.io/aws-ebs 20d $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE zk-data-claim Bound pvc-5584fdf7-3853-11e8-a73b-02bb35448afe 2Gi RWO default 11m zk-logs-claim Bound pvc-5593e249-3853-11e8-a73b-02bb35448afe 2Gi RWO default 11m </code></pre> <p>I can see these two volumes in the EC2 EBS Volumes list as "available" at first, but then later becomes "in-use". </p> <p>And then ingest it in my StatefulSet</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: zk spec: serviceName: zk-cluster replicas: 3 template: metadata: labels: app: zookeeper spec: volumes: - name: zk-data persistentVolumeClaim: claimName: zk-data-claim - name: zk-logs persistentVolumeClaim: claimName: zk-logs-claim containers: .... volumeMounts: - name: zk-data mountPath: /opt/zookeeper/data - name: zk-logs mountPath: /opt/zookeeper/logs </code></pre> <p><strong>Which fails with</strong> </p> <pre><code>Unable to mount volumes for pod "zk-0_default(83b8dc93-3850-11e8-a73b-02bb35448afe)": timeout expired waiting for volumes to attach/mount for pod "default"/"zk-0". list of unattached/unmounted volumes=[zk-data zk-logs] </code></pre> <p>I'm working in the default namespace.</p> <p>Any ideas what could be causing this failure?</p>
<p>The problem was that my cluster was built with C5 nodes. C5 and M5 instances use a different device naming convention (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html" rel="nofollow noreferrer">NVMe</a>), and that naming is not recognised.</p> <p>The workaround was to recreate the cluster with t2-type nodes.</p>
<p>When I ran the KCL as <code>./bin/kcl-bootstrap -p app.properties -j /usr/bin/java -e</code> at the terminal, I did <code>ps -ef | grep java</code> to get its PID and issued <code>kill -s TERM $PID_ABOVE</code>. I noticed the node recordProcessor logged this: <code>2018-03-31T11:13:46.998 INFO recordProcessor - Shutdown requested 2018-03-31T11:13:48.024 INFO recordProcessor - Shutting down...</code></p> <p>When I ran the equivalent via a docker run command, <code>docker run -v /tmp:/tmp -v ~/.aws:/root/.aws -ti cde3946e2cf9</code>, issuing <code>Ctrl-C</code>, <code>docker stop cde3946e2cf9</code> or <code>docker kill --signal=SIGTERM cde3946e2cf9</code> to terminate the container did not produce the logs above, i.e. the recordProcessor did not get notified about the shutdown request. </p> <p>Our container is deployed in a Kubernetes cluster, and when we redeploy it, I can see that <code>Process terminanted, will initiate shutdown.</code> is logged by the main processor, but nothing is reported about the worker shutdown request.</p> <p>I would like to trap these events when redeploying the container to make sure we handle shutdown gracefully. </p> <p>Does anyone know how to make sure the worker gets notified when the container is redeployed in the cluster?</p> <p>Thanks</p>
<p>For anyone facing a similar issue, here is how I solved mine:</p> <p>Add a <code>preStop</code> hook to the Kubernetes pod template that executes a shell command. The command is <code>kill -s TERM $(ps -ef | grep "MultiLangDaemon" | grep -v \"grep\" | awk '{print $1}')</code></p>
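<p>A minimal sketch of what that could look like in the container spec (the container and image names are placeholders; the exact <code>awk</code> column depends on your image: with a full <code>ps -ef</code> the PID is usually in column 2, while busybox <code>ps</code> puts it in column 1):</p> <pre><code>containers:
- name: kcl-consumer
  image: my-kcl-image            # placeholder image name
  lifecycle:
    preStop:
      exec:
        command:
        - /bin/sh
        - -c
        - kill -s TERM $(ps -ef | grep MultiLangDaemon | grep -v grep | awk '{print $1}')
</code></pre> <p>Also consider raising <code>terminationGracePeriodSeconds</code> on the pod so the KCL worker has time to finish shutting down before the kubelet sends SIGKILL.</p>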
<p>I am trying to do spark-submit on minikube(Kubernetes) from local machine CLI with command</p> <pre><code>spark-submit --master k8s://https://127.0.0.1:8001 --name cfe2 --deploy-mode cluster --class com.yyy.Test --conf spark.executor.instances=2 --conf spark.kubernetes.container.image docker.io/anantpukale/spark_app:1.1 local://spark-0.0.1-SNAPSHOT.jar </code></pre> <p>I have a simple spark job jar built on verison 2.3.0. I also have containerized it in docker and minikube up and running on virtual box. Below is exception stack:</p> <pre><code>Exception in thread &quot;main&quot; org.apache.spark.SparkException: Must specify the driver container image at org.apache.spark.deploy.k8s.submit.steps.BasicDriverConfigurationStep$$anonfun$3.apply(BasicDriverConfigurationStep.scala:51) at org.apache.spark.deploy.k8s.submit.steps.BasicDriverConfigurationStep$$anonfun$3.apply(BasicDriverConfigurationStep.scala:51) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.deploy.k8s.submit.steps.BasicDriverConfigurationStep.&lt;init&gt;(BasicDriverConfigurationStep.scala:51) at org.apache.spark.deploy.k8s.submit.DriverConfigOrchestrator.getAllConfigurationSteps(DriverConfigOrchestrator.scala:82) at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:229) at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:227) at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2585) at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:227) at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:192) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 2018-04-06 13:33:52 INFO ShutdownHookManager:54 - Shutdown hook called 2018-04-06 13:33:52 INFO ShutdownHookManager:54 - Deleting directory C:\Users\anant\AppData\Local\Temp\spark-6da93408-88cb-4fc7-a2de-18ed166c3c66 </code></pre>
<p>This looks like a bug with the default value of the parameter <code>spark.kubernetes.driver.container.image</code>, which should fall back to <code>spark.kubernetes.container.image</code>. So try specifying the driver/executor container images directly (see the example after the list):</p> <ul> <li>spark.kubernetes.driver.container.image</li> <li>spark.kubernetes.executor.container.image</li> </ul>
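<p>A sketch of the submit command from the question with those two properties set explicitly (image name and class are taken from the question):</p> <pre><code>spark-submit \
  --master k8s://https://127.0.0.1:8001 \
  --name cfe2 \
  --deploy-mode cluster \
  --class com.yyy.Test \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.driver.container.image=docker.io/anantpukale/spark_app:1.1 \
  --conf spark.kubernetes.executor.container.image=docker.io/anantpukale/spark_app:1.1 \
  local://spark-0.0.1-SNAPSHOT.jar
</code></pre> <p>Note that the command in the question is also missing the <code>=</code> after <code>spark.kubernetes.container.image</code>, which on its own would make <code>spark-submit</code> treat the image name as an application argument rather than a configuration value.</p>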
<p>I am creating a server group and I want to add a label to the deployment. I can't find any option in the Spinnaker UI to add one. Any help on this?</p>
<p>The current version of the Kubernetes cloud provider (v1) does not support configuring labels on Server Groups.</p> <p>The new Kubernetes Provider (v2), which is manifest-based, allows you to configure labels. This version, however, is still in alpha.</p> <p><strong>Sources</strong></p> <p><a href="https://github.com/spinnaker/spinnaker/issues/1624" rel="nofollow noreferrer">https://github.com/spinnaker/spinnaker/issues/1624</a> <a href="https://www.spinnaker.io/reference/providers/kubernetes-v2/" rel="nofollow noreferrer">https://www.spinnaker.io/reference/providers/kubernetes-v2/</a></p>
<p>How do I select pods in the Kubernetes dashboard using labels? </p> <p>Something similar to this:</p> <pre><code>$ kubectl get pods -l environment=production,tier=frontend </code></pre>
<p>As of Dashboard v1.8.1, it is not possible to filter pods by label in the UI.</p>
<p>I've been reading about how to setup audit in kubernetes <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-policy" rel="nofollow noreferrer">here</a> which basically says that in order to enable audit I have to specify a yaml policy file to kube-apiserver when starting it up, by using the flag <code>--audit-policy-file</code>.</p> <p>Now, there are two things I don't understand about how to achieve this:</p> <ol> <li>What's the proper way to add/update a startup parameter of the command that runs kube-apiserver? I cannot update the pod, so do I need to clone the pod somehow? Or should I use <code>kops edit cluster</code> as suggested here: <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#kubeapiserver" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#kubeapiserver</a>. Surprisingly, kubernetes does not create a deployment for this, should I create it myself?</li> <li>In particular to setup audit I have to pass a yaml file as a startup argument. How do I upload/make available this yaml file in order to make a <code>--audit-policy-file=/some/path/my-audit-file.yaml</code>. Do I create a configMap with it and/or a volume? How can I reference this file afterwards, so it's available in the filesystem when kube-apiserver startup command runs?</li> </ol> <p>Thanks!</p>
<blockquote> <p>What's the proper way to add/update a startup parameter of the command that runs kube-apiserver?</p> </blockquote> <p>In 99% of the ways that I have seen kubernetes clusters deployed, the <code>kubelet</code> binary on the Nodes reads the kubernetes descriptors in <code>/etc/kubernetes/manifests</code> on the host filesystem and runs the Pods described therein. So, the answer to the first question is to edit -- or cause the configuration management tool you are using to update -- the file <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> (or hopefully a very similarly named file). If you have multiple master Nodes, you will need to repeat that process for all master Nodes. In <em>most</em> cases, the <code>kubelet</code> binary will see the change to the manifest file and will restart the apiserver's Pod automatically, but in the worst case restarting <code>kubelet</code> may be required.</p> <p>Be sure to watch the output of the newly started apiserver's docker container to check for errors, and only roll that change out to the other apiserver manifest files after you have confirmed it works correctly.</p> <blockquote> <p>How can I reference this file afterwards, so it's available in the filesystem when kube-apiserver startup command runs?</p> </blockquote> <p>Roughly the same answer: either via ssh or any on-machine configuration management tool. The only asterisk to this one is that since the apisever's manifest file is a normal <code>Pod</code> declaration, one will wish to be mindful of the <code>volume:</code>s and <code>volumeMount:</code>s just like you would for any other in-cluster <code>Pod</code>. That is likely to be fine if your <code>audit-policy.yaml</code> lives in or under <code>/etc/kubernetes</code>, since that directory is already volume mounted into the Pod (again: most of the time). It's writing out the audit log file that will most likely require changes, since unlike the rest of the config the log file path cannot be <code>readOnly: true</code> and thus will at minimum require a 2nd <code>volumeMount</code> without the <code>readOnly: true</code>, and likely will require a 2nd <code>volume: hostPath:</code> to make the log directory visible into the Pod.</p> <p>I actually haven't tried using a <code>ConfigMap</code> for the apiserver itself, as that's very meta. But, in a multi-master setup, I don't know that it's <em>impossible</em>, either. Just be cautious, because in such a self-referential setup it would be very easy to bring down all masters with a bad configuration since they wouldn't be able to communicate with themselves to read the updated config.</p>
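<p>A sketch of the relevant parts of <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> after such an edit (paths and volume names are assumptions; adapt them to where you actually place the policy file and want the log written):</p> <pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    volumeMounts:
    # ... existing mounts ...
    - name: audit-log
      mountPath: /var/log/kubernetes/audit
  volumes:
  # ... existing volumes ...
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/audit
</code></pre> <p>If <code>/etc/kubernetes</code> is already mounted into the apiserver Pod (as noted above, it usually is), the policy file itself needs no extra volume; only the writable log directory does.</p>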
<p>Is it good practice to give developers access to the Kubernetes Dashboard so they can view raw pod logs directly from the dashboard console? If yes, with read-only access to the dashboard, can anyone help me create a read-only user for the Kubernetes Dashboard? Currently I can see that one admin user with admin privileges is created upon deploying the Kubernetes Dashboard.</p>
<p>There is documentation on how to create a user with limited namespace access; see <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access" rel="nofollow noreferrer">here</a>.</p>
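<p>A minimal sketch of a read-only account for the dashboard, assuming a ServiceAccount token is used to log in (names and namespace are placeholders):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-readonly
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: readonly
rules:
- apiGroups: ["", "apps", "extensions", "batch"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: readonly
subjects:
- kind: ServiceAccount
  name: dashboard-readonly
  namespace: kube-system
</code></pre> <p>The token of that ServiceAccount can then be used on the dashboard login screen; it will only be able to view resources (including pod logs, since the <code>pods/log</code> subresource is covered by the wildcard above).</p>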
<p>I'm trying to edit deployment in kubernetes by:</p> <pre><code>kubectl -n &lt;namespace&gt; edit deployment &lt;depolyment_name&gt;. </code></pre> <p>after entering the command, vi windows for editing appears, then I make some changes for example in the command section or in volumeMounts section.</p> <p>but I get the following error:</p> <pre><code>A copy of your changes has been stored to "/tmp/kubectl-edit-hv5dh.yaml" error: map: map[] does not contain declared merge key: name </code></pre> <p>someone can help with it?</p> <p>attached the edit deployment file of apiserver:</p> <p>kubectl -n federation-system edit deployment apiserver</p> <p>(codes between ** ** are the lines i added)</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" federation.alpha.kubernetes.io/federation-name: fed creationTimestamp: 2018-04-01T13:26:40Z generation: 1 labels: app: federated-cluster name: apiserver namespace: federation-system resourceVersion: "393140" selfLink: /apis/extensions/v1beta1/namespaces/federation-system/deployments/apiserver uid: &lt;uid&gt; spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: federated-cluster module: federation-apiserver strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: annotations: federation.alpha.kubernetes.io/federation-name: fed creationTimestamp: null labels: app: federated-cluster module: federation-apiserver name: apiserver spec: containers: - command: - /fcp - federation-apiserver - --admission-control=NamespaceLifecycle - --advertise-address=&lt;master-ip&gt; - --bind-address=0.0.0.0 - --client-ca-file=/etc/federation/apiserver/ca.crt - --etcd-servers=http://localhost:2379 - --secure-port=8443 - --tls-cert-file=/etc/federation/apiserver/server.crt - --tls-private-key-file=/etc/federation/apiserver/server.key **- --enable-admission-plugins=SchedulingPolicy - --admission-control-config-file=/etc/kubernetes/admission/config.yml** image: gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v1.9.0-alpha.3 imagePullPolicy: IfNotPresent name: apiserver ports: - containerPort: 8443 name: https protocol: TCP - containerPort: 8080 name: local protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/federation/apiserver name: apiserver-credentials readOnly: true **volumeMounts: - mountPath: /etc/kubernetes/admission name: admission-config** - command: - /usr/local/bin/etcd - --data-dir - /var/etcd/data image: gcr.io/google_containers/etcd:3.1.10 imagePullPolicy: IfNotPresent name: etcd resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst imagePullSecrets: - {} restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: apiserver-credentials secret: defaultMode: 420 secretName: apiserver-credentials **- name: admission-config configMap: name: admission** status: availableReplicas: 1 conditions: - lastTransitionTime: 2018-04-01T13:26:40Z lastUpdateTime: 2018-04-01T13:26:40Z message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available - lastTransitionTime: 2018-04-01T13:26:40Z lastUpdateTime: 2018-04-01T13:27:20Z message: ReplicaSet "apiserver-8484fd45f8" has successfully progressed. 
reason: NewReplicaSetAvailable status: "True" type: Progressing observedGeneration: 1 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre> <p>it's happened after I created configMap file:</p> <pre><code>kubectl create -f scheduling-policy-admission.yaml apiVersion: v1 kind: ConfigMap metadata: name: admission namespace: federation-system data: config.yml: | apiVersion: apiserver.k8s.io/v1alpha1 kind: AdmissionConfiguration plugins: - name: SchedulingPolicy path: /etc/kubernetes/admission/scheduling-policy-config.yml scheduling-policy-config.yml: | kubeconfig: /etc/kubernetes/admission/opa-kubeconfig opa-kubeconfig: | clusters: - name: opa-api cluster: server: http://opa.federation-system.svc.cluster.local:8181/v0/data/kubernetes/placement users: - name: scheduling-policy user: token: deadbeefsecret contexts: - name: default context: cluster: opa-api user: scheduling-policy current-context: default </code></pre> <p>I'm trying to configure Admission Controller in the Federation API.</p> <p>Thanks,</p>
<pre><code> dnsPolicy: ClusterFirst # DELETE imagePullSecrets: # DELETE - {} restartPolicy: Always </code></pre> <p>I would strongly recommend removing that <code>imagePullSecrets</code> block. Since those objects have a mergeKey of <code>name</code>, but that object has no <code>name</code>, it would very easily cause the error you are experiencing. If the YAML was given to your editor in that condition, then I am almost certain that is a kubernetes bug: it should always(?) allow round-tripping YAML via <code>kubectl edit</code>, if for no other reason than this situation right here.</p>
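<p>If you actually do need an image pull secret there, give the entry a <code>name</code> instead of leaving an empty object, for example (the secret name is a placeholder):</p> <pre><code>imagePullSecrets:
- name: my-registry-secret
</code></pre>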
<p>While following the kubernetes article on <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">Using kubeadm to Create a Cluster</a>, I was stuck when the AddOn pods I was trying to install (Nginx, Tiller, Grafana, InfluxDB, Dashboard) would always stay in a state of <strong>Pending</strong>. </p> <p>Checking the message from <code>kubectl describe pod tiller-deploy-df4fdf55d-jwtcz --namespace=kube-system</code> resulted in the following message:</p> <pre><code>Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 51s (x15 over 3m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. </code></pre> <p>When I ran the command from the <strong>Master Isolation</strong> section <code>kubectl taint nodes --all node-role.kubernetes.io/master-</code>, the AddOns would install as expected.</p> <p>At this point I can only suspect (because they are already installed on the master node) that the reason was that I hadn't connected a worker node to the cluster yet for the scheduler to schedule the pods on.</p> <p>The documentation states "your cluster will not schedule pods on the master for security reasons". I know that this is a non-production environment so there is little risk in this situation but what is the risk of removing that taint in a production cluster?</p> <p>Follow-up: If this is a risk, how can I re-add that taint so I can then uninstall the AddOn pods and try to have the scheduler install them on my Worker Node?</p> <p>Environment Details: Operating System - CentOS 7.4.1708 (Core) Kubernetes Version - 1.10</p>
<blockquote> <p>the reason was that I hadn't connected a worker node to the cluster yet for the scheduler to schedule the pods on.</p> </blockquote> <p>100% correct. You will for sure want some worker nodes, otherwise the idea of "scheduling work" becomes very weird.</p> <blockquote> <p>but what is the risk of removing that taint in a production cluster?</p> </blockquote> <p>I am not a kubernetes security expert, but a <em>pragmatic</em> risk is CPU, I/O, and/or memory exhaustion on the master nodes, which would have very severe consequences to the health of the cluster. There is almost never a reason to run any workload on a master node, and almost entirely an increase in risk, so the advice "just don't do it" is well founded.</p> <blockquote> <p>how can I re-add that taint so I can then uninstall the AddOn pods and try to have the scheduler install them on my Worker Node?</p> </blockquote> <p>I'm not sure I follow that question, but I would for sure start by just adding a worker node before trying to do complicated stuff with taints and tolerations.</p>
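<p>That said, for completeness: the master taint that kubeadm applies can be re-added with something like the following (the node name is a placeholder):</p> <pre><code>kubectl taint nodes &lt;master-node-name&gt; node-role.kubernetes.io/master=:NoSchedule
</code></pre> <p>Once the taint is back, pods without a matching toleration will no longer be scheduled onto the master; the add-on pods can then be deleted so the scheduler recreates them on the worker node.</p>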
<p>I have a kubeadm cluster deployed on CentOS VMs. While trying to deploy the <code>ingress controller</code> following <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="nofollow noreferrer">github</a>, I noticed that I'm unable to see logs: </p> <pre><code>kubectl logs -n ingress-nginx nginx-ingress-controller-697f7c6ddb-x9xkh --previous Error from server: Get https://192.168.56.34:10250/containerLogs/ingress-nginx/nginx-ingress-controller-697f7c6ddb-x9xkh/nginx-ingress-controller?previous=true: dial tcp 192.168.56.34:10250: getsockopt: connection timed out </code></pre> <p>On 192.168.56.34 (node1) netstat returns: </p> <pre><code>tcp6 0 0 :::10250 :::* LISTEN 1068/kubelet </code></pre> <p>In fact I'm unable to see any logs, regardless of the status of the pod.</p> <p>I disabled both <code>firewalld</code> and <code>SELinux</code>. </p> <p>I used a <code>proxy</code> to enable Kubernetes to download images; I have now removed the proxy.</p> <p>When navigating to the URL in the error above I get <code>Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)</code></p> <p>I'm also able to fetch my nodes: </p> <pre><code>kubectl get node NAME STATUS ROLES AGE VERSION k8s-master Ready master 32d v1.9.3 k8s-node1 Ready &lt;none&gt; 30d v1.9.3 k8s-node2 NotReady &lt;none&gt; 32d v1.9.3 </code></pre>
<blockquote> <p>getsockopt: connection timed out</p> </blockquote> <p>Is 99.99999% a firewall issue. If it was "connection refused" then showing the output of netstat would be meaningful, but (as you can see) <code>kubelet</code> is listening on that port just fine -- it's the networking configuration between the machine that is running <code>kubectl</code> and "192.168.56.34" that is incorrectly configured to allow traffic.</p> <p>The apiserver expects that everyone who would want to view logs (or use <code>kubectl exec</code>) can reach that port on every Node in the cluster; so be sure you don't just fix the firewall rule(s) for that one Node -- fix it for all of them.</p>
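<p>A quick way to narrow this down: test the port with curl from the master and from the machine running <code>kubectl</code>. Any HTTP response, even a 401/403, means the port is reachable; a hang means something in between is dropping the traffic. Then open 10250 in whatever firewall sits in the path, for example with firewalld (only relevant if it is re-enabled on the nodes):</p> <pre><code># connectivity test
curl -k https://192.168.56.34:10250/

# on each node, if firewalld is the culprit
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
</code></pre>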
<p>I was trying to submit a example job to k8s cluster from binary release of spark 2.3.0, the submit command is shown below. However, I have met an wrong master error all the time. I am really sure my k8s cluster is working fine. </p> <pre><code>bin/spark-submit \ --master k8s://https://&lt;k8s-master-ip&gt; \ --deploy-mode cluster \ --name spark-pi \ --class org.apache.spark.examples.SparkPi \ --conf spark.executor.instances=3 \ --conf spark.kubernetes.container.image= &lt;image-built-from-dockerfile&gt; \ --conf spark.kubernetes.driver.pod.name=spark-pi-driver \ local:///opt/examples/jars/spark-examples_2.11-2.3.0.jar </code></pre> <p>and the error comes out</p> <blockquote> <p>Error: Master must either be yarn or start with spark, mesos, local</p> </blockquote> <p>and this is the output of <code>kubectl cluster-info</code></p> <pre><code>Kubernetes master is running at https://192.168.0.10:6443 KubeDNS is running at https://192.168.0.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre>
<p>My fix was to check <code>$SPARK_HOME</code> and point it at the Spark 2.3.0 installation that has Kubernetes support. <code>spark-submit</code> uses <code>${SPARK_HOME}</code> by default, so if you have two Spark environments on the same machine (as I did), the command may keep picking up the older one, which does not recognise the <code>k8s://</code> master URL. I hope this answer helps you.</p>
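<p>A sketch of that check (the installation path is a placeholder for wherever your Spark 2.3.0 lives):</p> <pre><code>echo $SPARK_HOME                          # does this point at an older, pre-2.3.0 Spark?
export SPARK_HOME=/opt/spark-2.3.0-bin-hadoop2.7
$SPARK_HOME/bin/spark-submit --version    # should report 2.3.0 before retrying the k8s submit
</code></pre>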
<p>I am trying to run a job on GKE for 5 minutes on 50 nodes. However, when I scale the instances down it happens sequentially, which costs me much more for a 4-5 minute job.</p> <p>Is there any way to delete GKE instances in parallel?</p>
<p>Kubernetes cluster has an underlying Instance Group.</p> <p>I was able to delete the nodes in parallel by directly changing the number of nodes in Instance Group from 50 to 5.</p> <p>All nodes were deleted within 30 seconds and GKE had also automatically updated the cluster size with the new value.</p>
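<p>A sketch of doing the same from the command line (the instance group name and zone are placeholders; <code>gcloud compute instance-groups managed list</code> shows the group backing your node pool):</p> <pre><code>gcloud compute instance-groups managed resize gke-mycluster-default-pool-xxxx-grp \
  --size=5 --zone=us-central1-a
</code></pre>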
<p>I install the latest version of Kubernetes with the following command on Raspberry PI 3 running Raspbian Stretch.</p> <pre><code>$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - &amp;&amp; \ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list &amp;&amp; \ sudo apt-get update -q &amp;&amp; \ sudo apt-get install -qy kubeadm </code></pre> <p>Currently this will install v1.10.0.</p> <p>How can I install a specific version of Kubernetes? Let's say v1.9.6.</p>
<p>To install specific version of the package it is enough to define it during the <code>apt-get install</code> command:</p> <pre><code>apt-get install -qy kubeadm=&lt;version&gt; </code></pre> <p>But in the current case <code>kubectl</code> and <code>kubelet</code> packages are installed by dependencies when we install <code>kubeadm</code>, so all these three packages should be installed with a specific version:</p> <pre><code>$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - &amp;&amp; \ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list &amp;&amp; \ sudo apt-get update -q &amp;&amp; \ sudo apt-get install -qy kubelet=&lt;version&gt; kubectl=&lt;version&gt; kubeadm=&lt;version&gt; </code></pre> <p>where available <code>&lt;version&gt;</code> is:</p> <pre><code>curl -s https://packages.cloud.google.com/apt/dists/kubernetes-xenial/main/binary-amd64/Packages | grep Version | awk '{print $2}' </code></pre> <p>For your particular case it is:</p> <pre><code>$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - &amp;&amp; \ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list &amp;&amp; \ sudo apt-get update -q &amp;&amp; \ sudo apt-get install -qy kubelet=1.9.6-00 kubectl=1.9.6-00 kubeadm=1.9.6-00 </code></pre>
<p>I have created a YAML file for my Kubernetes cluster which holds two Docker images (2 microservices), and created the workload from it with <code>kubectl create -f pod.yaml</code>. The cluster is running now.</p> <p>I wish to add another new Docker image (a new microservice to be deployed) or remove an existing Docker image from that cluster.</p> <p>Is this possible in the same cluster? </p>
<p>I would hold off on going into the part where you said <code>which holds two docker images(2 microservices)</code>, which might be a separate discussion and is more subjective.</p> <p>You can add another container to the pod spec in your YAML and apply it:</p> <pre><code>spec:
  containers:
  - name: container1
    # .. more stuff
  - name: container2
    # .. more stuff
  - name: container3
    # .. more stuff
</code></pre> <p>Additionally, set the deployment's <code>strategy.type</code> to <code>RollingUpdate</code> so that not all pods are taken down at the same time and they are replaced in a controlled manner without affecting end-user traffic.</p> <p>Of course, if your two versions of the application (one with 2 containers and one with 3) are not compatible with the rest of the system, then this won't solve your problem.</p> <p>In that case it is best to stand up another deployment for the new version of the application and divert traffic to it using DNS/routing mechanisms.</p>
<p>When creating a Kubernetes cluster using kubeadm on CentOS 7, it creates a one-year kube-apiserver certificate. For me this is too short a lifetime for the cluster. How can I create a 5-year certificate during cluster setup?</p> <pre><code>* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 * Server certificate: * subject: CN=kube-apiserver * start date: Dec 20 14:32:00 2017 GMT * expire date: Dec 20 14:32:00 2018 GMT * common name: kube-apiserver * issuer: CN=kubernetes </code></pre> <p>I tried this, but it didn't work:</p> <pre><code>openssl genrsa -out ca.key 2048 export MASTER_IP=192.168.16.171 openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt kubeadm reset rm -rf /etc/kubernetes mkdir -p /etc/kubernetes/ca/pki cp ca.key ca.crt /etc/kubernetes/ca/pki/ kubeadm init curl -k -v https://localhost:6443 Server certificate: * subject: CN=kube-apiserver * start date: Apr 15 21:07:24 2018 GMT * expire date: Apr 15 21:07:25 2019 GMT * common name: kube-apiserver * issuer: CN=kubernetes </code></pre> <p>Thanks SR</p>
<p>Follow the Kubernetes documentation on <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">certificates</a> to the CA certificate.</p> <p>If you choose <code>openssl</code> or <code>easyrsa</code> use <code>--days=1825</code>, if you are going with <code>cfssl</code> then in <code>ca-config.json</code> specify <code>5y</code> for <code>.signing.default.expiry</code>.</p> <p>Put the resulting <code>ca.crt</code> and <code>ca.key</code> in <code>/etc/kubernetes/ca/pki</code>. When you run <code>kubeadm init</code> it will detect those files and will not overwrite them; it will use that CA key &amp; certificate to sign the other certificates needed.</p>
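<p>A sketch of the <code>openssl</code> variant with a five-year validity (the subject here is a placeholder; keep whatever CN convention you already use):</p> <pre><code>openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes" -days 1825 -out ca.crt
</code></pre>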
<p>Specifically, why do I end up with two external IP addresses when I follow the <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">directions</a> on Google's website for setting up nginx ingress on GKE? </p> <p>The two IP addresses are for an Ingress resource and a Service resource of type LoadBalancer:</p> <pre><code>&gt; kubectl get ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress example.com 1.1.1.1 80, 443 1d &gt; kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-app ClusterIP 10.31.251.77 &lt;none&gt; 8080/TCP 1d kubernetes ClusterIP 10.31.240.1 &lt;none&gt; 443/TCP 1d nginx-ingress-controller LoadBalancer 10.31.246.62 2.2.2.2 80:32603/TCP,443:31763/TCP 1d nginx-ingress-default-backend ClusterIP 10.31.241.48 &lt;none&gt; 80/TCP 1d </code></pre> <p>Here is how I thought it works:</p> <pre><code>User ^ | Service resource of type LoadBalancer &lt;-- Ingress annotated as class nginx ^ | Pod resource with Nginx acting as ingress controller ^ | Service resource of type ClusterIP ^ | Pod resource with server serving message at /hello </code></pre> <p>This is basically the diagram on the tutorial page I linked to. So I expect the load balancer to be of L4 type and have an external IP (and not cost any money to use!). And I expect the Ingress (despite its name) <em>not</em> to have an external IP, because I mark it with the annotation</p> <pre><code>annotations: kubernetes.io/ingress.class: nginx </code></pre> <p>which Google is supposed to recognize as saying I do not want the Ingress resource to use their paid L7 HTTP Load Balancer but my own Nginx controller.</p> <p>I do notice that my <code>/hello</code> page is accessible via the load balancer's IP address, but accessing the ingress' address gives a connection attempt refused error. However it is the Ingress resource which has <code>host:</code> and <code>tls:</code> settings. So which resource do I associate my TLS certificate with? And why does the Ingress resource specify a domain name when it is the LoadBalancer IP at which my website is accessible?</p>
<p>I do not really understand your question; I believe you are a bit confused regarding the Ingress resource.</p> <p>Let me explain a bit. After you run the commands from the tutorial:</p> <pre><code>helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true kubectl apply -f ingress-resource.yaml </code></pre> <p>You will have the following situation:</p> <pre><code>$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller LoadBalancer 10.11.245.77 external-ip-ONE 80:32172/TCP,443:31908/TCP 12m $ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE ingress-resource * external-ip-TWO 80 1m </code></pre> <p>Checking the external IPs in use, you will notice that:</p> <ul> <li><p><strong>external-ip-ONE</strong> corresponds to the forwarding rule, therefore it is the IP you will see on the Load Balancer page</p></li> <li><p><strong>external-ip-TWO</strong> corresponds to the IP of the virtual machine where the <code>ingress controller</code> Pod is running</p></li> </ul> <p>Therefore no extra IP is "wasted". Basically you connect to the ingress controller, which redirects the traffic to the different backends according to the specification of the Ingress resources.</p>
<p>I have a local 3-nodes-cluster of Kubernetes, with weave-Net as the Overlay-Network-Plugin installed. My goal is to try out network policies of kubernetes and to export log messages of this process towards an elk-stack.</p> <p>Unfortunately, I cannot proceed, because I cannot solve my issues with kube-dns. The name resolution seems to work, but the network connection from a pod to a service is problematic.</p> <p>Here some facts about my setup (See below for versions/general config details):</p> <ul> <li>I am logged in to a pod "<code>busybox</code>"</li> <li>I have a service called "<code>nginx</code>", connected with an nginx-pod that is up and running</li> <li>From the busybox, I cannot ping the dns:<code>25 packets transmitted, 0 packets received, 100% packet loss</code></li> <li><p>If I try "<code>nslookup nginx</code>" I get:</p> <pre><code>Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: nginx Address 1: 10.104.126.131 nginx.default.svc.cluster.local </code></pre></li> <li><p>Also I Changed changed the config-File on the busybox-Pod manually, to make the name resolution without FQDN:</p> <pre><code>/ # cat /etc/resolv.conf nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local nginx XXX.XXX.de options ndots:5 </code></pre> <p>This doesn't seem like a good workaround for me, but at least it's working, and the nslookup is giving me the correct IP of the nginx-Service:</p> <pre><code>user@controller:~$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx ClusterIP 10.104.126.131 &lt;none&gt; 80/TCP 3d </code></pre></li> <li><p>Now, back to my networking issue: There doesn't seem to be the correct network interface on the pod that the connection to the service can be established:</p> <pre><code>/ # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02 inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0 inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:272 errors:0 dropped:0 overruns:0 frame:0 TX packets:350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:23830 (23.2 KiB) TX bytes:32140 (31.3 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) </code></pre></li> </ul> <p>The busybox-pod has this IP: <code>172.17.0.2</code> while the DNS is in the subnet that starts with 10. IP of dns: <code>10.96.0.10</code>.</p> <ul> <li>Weavenet sometimes crashes on one worker-Node, but in the general case it is shown as "running" and I don't think that this can be the reason.</li> </ul> <p>--> Can anybody see the underlaying configuration mistake in my networking? I'd be glad for hints! 
:)</p> <p><em>General information:</em></p> <p>Kubernetes/kubectl: <code>v1.9.2</code></p> <p>I used kubeadm to install.</p> <p>uname -a: <code>Linux controller 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux</code></p> <p>docker version:</p> <pre><code> Client: Version: 1.13.1 API version: 1.26 Go version: go1.6.2 Git commit: 092cba3 Built: Thu Nov 2 20:40:23 2017 OS/Arch: linux/amd64 Server: Version: 1.13.1 API version: 1.26 (minimum version 1.12) Go version: go1.6.2 Git commit: 092cba3 Built: Thu Nov 2 20:40:23 2017 OS/Arch: linux/amd64 Experimental: false </code></pre> <p>Weavenet: <code>2.2.0</code></p>
<p><strong>None of the Service IPs (aka. Cluster IPs, aka. Portal IPs) respond to ping</strong>, so that's not a good test! A good test is to try the Service IP with an appropriate client, like nslookup for dns, curl for http, etc (and make sure you do that on the correct port too).</p> <p>As you've seen from <code>nslookup</code>, the kube-dns Service is functioning properly.</p> <p>There is a very good document about how these <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">virtual IPs</a> work in Kubernetes. Long story short: you will not find a network interface for this network, it is created via redirections in the kernel (configured by iptables).</p>
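<p>A concrete way to test this from the <code>busybox</code> pod in the question, using the Service IPs it already resolved (expect an HTML response from nginx and a DNS answer from kube-dns, not a ping reply):</p> <pre><code># DNS service: query it explicitly
nslookup nginx 10.96.0.10

# nginx service: talk HTTP to the ClusterIP on its service port
wget -qO- http://10.104.126.131:80
</code></pre>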
<p>I have a keystore.jks file which I need to pass as a env variable for my docker process.</p> <p>I used the below command to store the file as a secret.</p> <blockquote> <p>kubectl create secret generic ssl-keystore-cert --from-file=./keystore.jks</p> </blockquote> <p>Using the above secret in my deployment.yaml as below.</p> <pre><code>{ "name": "SERVER_SSL_KEYSTORE", "valueFrom": { "secretKeyRef": { "name": "ssl-keystore-cert", "key": "keystore.jks" } } } </code></pre> <blockquote> <p>Error: failed to start container "app-service": Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"setenv: invalid argument\"" Back-off restarting failed container</p> </blockquote> <p>Is there anyway to store the keystore.jks in secret or configmap?</p> <p><strong>Debug :-</strong></p> <blockquote> <p>kubectl describe secret ssl-keystore-cert</p> </blockquote> <pre><code>Name: ssl-keystore-cert Labels: &lt;none&gt; Annotations: &lt;none&gt; Type: Opaque Data ==== keystore.jks: 4818 bytes </code></pre>
<p>In your create secret command, you refer to <em>keystore.jks</em>.</p> <p>But in your YAML, you refer to <em>server-ssl.jks</em>.</p> <p>These should be the same key, but they are different.</p> <p>To verify the correct key, run:</p> <p><code>kubectl describe secrets/ssl-keystore-cert</code></p>
<p>I am launching Jobs and I'm trying to use the lifecycle hooks to launch a script at start and another one at shutdown of the container. </p> <p>I am also specifying resource limits, and they look like this:</p> <pre><code>resources: required: memory: 1Gi cpu: 1 limits: memory: 1Gi cpu: 1 </code></pre> <p>My cluster currently has <strong>4 nodes with 1 CPU and 4 GB of RAM each</strong>, and is running on EC2 machines.</p> <p>The <code>postStart</code> script is at the moment very simple, and looks like this:</p> <pre><code>export SOME_VAR=some_value node someScript.js </code></pre> <p>The only thing the Node script does is update a value on a database, so it's not an especially intensive task.</p> <p>After launching the job, the following events happen:</p> <p><a href="https://i.stack.imgur.com/aGXKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aGXKe.png" alt="Kubectl Events"></a></p> <p>As you can see the <code>postStart</code> hook fails with error 137, and gives no error message.</p> <p>Any help for solving this issue is highly welcome and appreciated.</p> <h2>Edit 1</h2> <p>Since the first answer has pointed to the fact that the command executed for the cook might not be correctly built, I think it's important to say that I build the jobs using the API Kubernetes publishes through <code>kubectl proxy</code>.</p> <p>This is how I specify the <code>lifecycle</code> instructions:</p> <pre><code>"lifecycle": { "postStart": { "exec": { "command": [ "/bin/sh", "postStart.sh" ] } }, "preStop": { "exec": { "command": [ "/bin/sh", "preStop.sh" ] } } } </code></pre> <p>I think this translates to YAML the way it's supposed to; please correct me if I am wrong on this.</p>
<p>You have 2 problems, so you get 2 answers :-)</p> <h1>Problem 1: too high cpu requirement</h1> <p>Your pod specifies the requirement of <code>cpu: 1</code> - this means 1 cpu core. Your nodes have 1 cpu core in total, but are already running some pods, like kube-proxy. So none of them have a full core available for your application, so the scheduling fails.</p> <p>The error message <code>No nodes are available that match all of the predicates: Insufficient cpu (4), PodToleratesNodeTaints (1)</code> means:</p> <ul> <li>Scheduling is not possible at the moment</li> <li>Of all nodes, 4 do not have enough cpu to schedule this pod. <ul> <li>You can verify this by executing <code>kubectl describe node nameofyournode</code>, and look at the <code>Allocatable:</code> and the <code>Allocated resources:</code> part of the output. In <code>Non-terminated Pods:</code> you will see what is taking up some of your cpu, possibly a kube-proxy pod.</li> </ul></li> <li>Of all nodes, 1 has a <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taint</a> that is not <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">tolerated</a> by the pod (this is the master I imagine)</li> </ul> <p>The solution is to lower the requirement for your pod (<code>500m</code> means 500 millicores, or 0.5 cores):</p> <pre><code>resources: requests: memory: 1Gi cpu: 500m limits: memory: 1Gi cpu: 500m </code></pre> <p>... or resize your machines so they have 2 cores instead of 1.</p> <h1>Problem 2: bad postStart command</h1> <p>Now what is most curious is that somehow in the end the pod did get scheduled, but was killed thereafter. Code 126 means <code>Command invoked cannot execute</code>, so the <code>postStart:</code> command is probably invalid. You did not post the full yaml file, but from the error message it looks like you have specified something like:</p> <pre><code>lifecycle: postStart: exec: command: ["/bin/sh postStart.sh"] </code></pre> <p>Please check if that is the case. If so, it is incorrect. You need to separate each parameter into a different element in the <code>command</code> array like so:</p> <pre><code>lifecycle: postStart: exec: command: ["/bin/sh", "postStart.sh"] </code></pre> <p>Alternatively, make sure that <code>postStart.sh</code> is marked executable in the container image and specify a shell shebang in the first line (<code>#!/bin/bash</code>). If you do that you can define the postStart hook like this:</p> <pre><code>lifecycle: postStart: exec: command: ["/path/to/postStart.sh"] </code></pre>
<p>kubernetes PodSecurityPolicy set to runAsNonRoot, pods are not getting started post that Getting error Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root</p> <p>We are creating the user (appuser) uid -> 999 and group (appgroup) gid -> 999 in the docker container, and we are starting the container with that user. </p> <p>But the pod creating is throwing error.</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 53s default-scheduler Successfully assigned app-578576fdc6-nfvcz to appmagent01 Normal SuccessfulMountVolume 52s kubelet, appagent01 MountVolume.SetUp succeeded for volume "default-token-ksn46" Warning DNSConfigForming 11s (x6 over 52s) kubelet, appagent01 Search Line limits were exceeded, some search paths have been omitted, the applied search line is: app.svc.cluster.local svc.cluster.local cluster.local Normal Pulling 11s (x5 over 51s) kubelet, appagent01 pulling image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982" Normal Pulled 11s (x5 over 51s) kubelet, appagent01 Successfully pulled image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982" Warning Failed 11s (x5 over 51s) kubelet, appagent01 Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root . </code></pre>
<p>Here is the <a href="https://github.com/kubernetes/kubernetes/blob/5648200571889140ad246feb82c8f80a5946f167/pkg/kubelet/kuberuntime/security_context.go#L88" rel="noreferrer">implementation</a> of the verification:</p> <pre><code>case uid == nil &amp;&amp; len(username) &gt; 0: return fmt.Errorf("container has runAsNonRoot and image has non-numeric user (%s), cannot verify user is non-root", username) </code></pre> <p>And here is the <a href="https://github.com/kubernetes/kubernetes/blob/5648200571889140ad246feb82c8f80a5946f167/pkg/kubelet/kuberuntime/kuberuntime_container.go#L196" rel="noreferrer">validation call</a> with the comment:</p> <pre><code>// Verify RunAsNonRoot. Non-root verification only supports numeric user. if err := verifyRunAsNonRoot(pod, container, uid, username); err != nil { return nil, cleanupAction, err } </code></pre> <p>As you can see, the only reason of that messages in your case is <code>uid == nil</code>. Based on the comment in the source code, we need to set a numeric user value.</p> <p>So, for the user with UID=999 you can do it in your pod definition <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">like that</a>:</p> <pre><code>securityContext: runAsUser: 999 </code></pre>
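<p>A hedged alternative to setting <code>runAsUser</code> in every pod spec: since the image already creates <code>appuser</code> with UID 999, you can declare the user numerically in the Dockerfile, so the kubelet can verify it is non-root from the image metadata alone:</p> <pre><code># last line of the Dockerfile, after the appuser/appgroup with UID/GID 999 have been created
USER 999
</code></pre>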
<p>I currently have two clusters on GKE - one in <code>eu-west1-b</code> and another in <code>us-east1-b</code>. The pods deployed to the nodes in these clusters need to make location-based requests (for latency testing purposes).</p> <p>I also need to connect to my postgres instance on RDS, which uses IP-based whitelisting for external connections. The nodes in my clusters have ephemeral IPs so I can't use them.</p> <p>I have done a lot of research and gone through lots of SO answers and docs and tutorials and come to the solution that routing traffic through a NAT is pretty much the best/only way to do this right now on GKE.</p> <p><a href="https://serverfault.com/questions/835425/kubernetes-external-connection-through-single-ip">https://serverfault.com/questions/835425/kubernetes-external-connection-through-single-ip</a></p> <p>Similar to that question above, I don't want to route all of my traffic through the NAT. My reason is because I need my requests to come from the internet gateway associated with the current node so it is coming from a particular region.</p> <p>The above question has some answers that almost get me there, but doesn't include any kube-specific configuaration. This is a great tutorial:</p> <p><a href="https://docs.tenable.com/pvs/deployment/Content/GoogleCloudInstructionsNatGateway.htm" rel="nofollow noreferrer">https://docs.tenable.com/pvs/deployment/Content/GoogleCloudInstructionsNatGateway.htm</a></p> <p>But again, is not based on kube.</p> <p>My thinking is that I need to define a service for postgres in my kube cluster, and then tell that to route to the external service through the NAT. Not entirely sure where to start and would appreciate help.</p>
<p>A solution:</p> <ol> <li><p><a href="https://cloud.google.com/vpc/docs/add-remove-network-tags" rel="nofollow noreferrer">Tag</a> your instances in different zones/regions with different tags</p></li> <li><p>Create static IP addresses for each zone/region</p></li> <li><p>Create NAT exit nodes (GCE instances or instance groups) using the external addresses from above</p></li> <li><p>Create a <a href="https://console.cloud.google.com/networking/routes/add" rel="nofollow noreferrer">route</a> through each of the NAT exit nodes. Restrict each route with a destination IP range of your RDS ingress IP/32 and the network tags from step 1, so the tagged instances use the correct gateway (a sketch of such a route is shown after this list)</p></li> </ol>
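<p>A sketch of step 4 with <code>gcloud</code> (all names, the RDS IP and the zone are placeholders):</p> <pre><code>gcloud compute routes create rds-via-nat-eu \
  --network=default \
  --destination-range=203.0.113.10/32 \
  --next-hop-instance=nat-gateway-eu \
  --next-hop-instance-zone=europe-west1-b \
  --tags=eu-west-nodes \
  --priority=800
</code></pre>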
<p>I have a Java application which uses JAAS authentication and hence it needs a below system property.</p> <blockquote> <p>-Djava.security.auth.login.config=/jaas/conf/client_jaas.conf</p> </blockquote> <p>We set this system property in our startup script thru JAVA_OPTS.</p> <blockquote> <p>JAVA_OPTS="${JAVA_OPTS} -Djava.security.auth.login.config=/jaas/conf/client_jaas.conf"</p> </blockquote> <p>I am trying to move this app to Kubernetes and setting as below.</p> <pre><code>"containers": [ { "env": [ { "name": "JAVA_OPTS", "value": "-Djava.security.auth.login.config=/jaas/conf/client_jaas.conf" }, </code></pre> <p>But, I am getting the below error in my application logs.</p> <pre><code>Caused by: java.lang.IllegalArgumentException: Could not find a 'appClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set </code></pre> <p>Is there any other way to set this?</p> <p>Thanks</p>
<p>I don't know if this is a related problem to the following: JAVA_OPTS is not an out-of-the-box environment variable, but a convention. If you take a look at <a href="https://github.com/xetys/jhipsterSampleApplication/blob/master/src/main/docker/Dockerfile" rel="nofollow noreferrer">this example Dockerfile</a> </p> <pre><code>FROM openjdk:8-jre-alpine ENV SPRING_OUTPUT_ANSI_ENABLED=ALWAYS \ JHIPSTER_SLEEP=0 \ JAVA_OPTS="" # add directly the war ADD *.war /app.war EXPOSE 8081 CMD echo "The application will start in ${JHIPSTER_SLEEP}s..." &amp;&amp; \ sleep ${JHIPSTER_SLEEP} &amp;&amp; \ java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /app.war </code></pre> <p>you see that the JAVA_OPTS is first defined as a variable and later been used for the java command itself. With this configuration, you are then able to pass custom java options using ENV vars.</p> <p>So I assume you did everything correctly in kubernetes, but the underlying docker image does not handle it properly</p>
<p>I'm trying to launch Google Container Engine (GKE) in a private GCP network subnet.</p> <p>I have created a custom Google Cloud VPC, and then created a custom subnet with Private Google Access under that VPC.</p> <p>1) When I create a GKE cluster in the private subnet, my Kubernetes nodes are still assigned public IPs. Why is that? According to the Google documentation, a private instance should get a private IP.</p> <p>2) If I create the cluster as private, can I connect my container application to a Google SQL instance?</p> <p>3) Is the recommendation that a GKE cluster should be launched in a public subnet only, not in a private subnet?</p>
<p>Private Clusters on GKE are now available in beta. They allow you to restrict public internet from connecting to the master.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters</a></p>
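<p>A hedged sketch of creating one with the beta <code>gcloud</code> surface (flag names have changed between gcloud versions, newer releases use <code>--enable-private-nodes</code>, and a VPC-native/IP-alias cluster is required):</p> <pre><code>gcloud beta container clusters create private-cluster \
  --private-cluster \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-ip-alias \
  --create-subnetwork ""
</code></pre>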
<p>I run a small flask application with gunicorn and multiple worker processes on kubernetes. I would like to collect metrics from this application with prometheus, but the metrics should only be accessible cluster internally on a separate port (as this required in our current setting). </p> <p>For one gunicorn worker process I could use the <code>start_http_server</code> fucntion from the python client library to expose metrics on a different port than the flask app. </p> <p>A minimal example may look like this:</p> <pre><code>from flask import Flask from prometheus_client import start_http_server, Counter NUM_REQUESTS = Counter("num_requests", "Example counter") app = Flask(__name__) @app.route('/') def hello_world(): NUM_REQUESTS.inc() return 'Hello, World!' start_http_server(9001) </code></pre> <p>To start the app do the following: </p> <pre><code>gunicorn --bind 127.0.0.1:8082 -w 1 app:app </code></pre> <p>However this only works for one worker process. </p> <p>In the documentation of the client library is also a <a href="https://github.com/prometheus/client_python#multiprocess-mode-gunicorn" rel="noreferrer">section</a> on how to use prometheus and gunicorn with multiple worker processes by specifying a shared directory for the worker processes as an environment variable where metrics are written to (<code>prometheus_multiproc_dir</code>). </p> <p>So following the documentation the above example for multiple workers would be:</p> <p>A gunicorn config file: </p> <pre><code>from prometheus_client import multiprocess def worker_exit(server, worker): multiprocess.mark_process_dead(worker.pid) </code></pre> <p>The application file:</p> <pre><code>import os from flask import Flask from prometheus_client import Counter NUM_REQUESTS = Counter("num_requests", "Example counter") app = Flask(__name__) @app.route('/') def hello_world(): NUM_REQUESTS.inc() return "[PID {}]: Hello World".format(os.getpid()) </code></pre> <p>To start the app do:</p> <pre><code>rm -rf flask-metrics/ mkdir flask-metrics export prometheus_multiproc_dir=flask-metrics gunicorn --bind 127.0.0.1:8082 -c gunicorn_conf.py -w 3 app:app </code></pre> <p>However in this setting I don't really know how to accesses the metrics stored in flask-metrics on a separate port. Is there way to get this done?</p> <p>I am a bit new to these things, so if I am approaching the problem in the wrong way I am also happy for advice what would be the best way to address my case. </p>
<p>What you would want to do here is start up a separate process just to serve the metrics. Put the <code>app</code> function in <a href="https://github.com/prometheus/client_python#multiprocess-mode-gunicorn" rel="noreferrer">https://github.com/prometheus/client_python#multiprocess-mode-gunicorn</a> in an app of its own, and make sure that <code>prometheus_multiproc_dir</code> is the same for both it and the main application.</p>
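<p>A sketch of that separate metrics app (the file name <code>metrics_app.py</code> is a placeholder), using the multiprocess collector from the client library:</p> <pre><code># metrics_app.py
from prometheus_client import CollectorRegistry, make_wsgi_app, multiprocess

# Aggregate the metrics all gunicorn workers wrote into prometheus_multiproc_dir
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)
app = make_wsgi_app(registry)
</code></pre> <p>Run it as a second gunicorn process (a single worker is enough) with the same environment variable, e.g.:</p> <pre><code>export prometheus_multiproc_dir=flask-metrics
gunicorn --bind 127.0.0.1:8082 -c gunicorn_conf.py -w 3 app:app &amp;
gunicorn --bind 127.0.0.1:9001 -w 1 metrics_app:app
</code></pre> <p>In Kubernetes you would typically run the metrics app as a second container in the same pod, sharing the metrics directory through an <code>emptyDir</code> volume, and only expose port 9001 cluster-internally.</p>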
<p><strong><em>Summary:</em></strong> Jenkins in K8s minikkube works fine and scales well in case of default jnlp agent but stuck with "Waiting for agent to connect" in case of custom jnlp image. </p> <p><strong><em>Detailed description:</em></strong></p> <p>I'm running the local minikube with Jenkins setup. </p> <p><strong>Jenkins master dockerfile:</strong></p> <pre><code>from jenkins/jenkins:alpine # Distributed Builds plugins RUN /usr/local/bin/install-plugins.sh ssh-slaves # install Notifications and Publishing plugins RUN /usr/local/bin/install-plugins.sh email-ext RUN /usr/local/bin/install-plugins.sh mailer RUN /usr/local/bin/install-plugins.sh slack # Artifacts RUN /usr/local/bin/install-plugins.sh htmlpublisher # UI RUN /usr/local/bin/install-plugins.sh greenballs RUN /usr/local/bin/install-plugins.sh simple-theme-plugin # Scaling RUN /usr/local/bin/install-plugins.sh kubernetes # install Maven USER root RUN apk update &amp;&amp; \ apk upgrade &amp;&amp; \ apk add maven USER jenkins </code></pre> <p><strong>Deployment:</strong></p> <pre><code> apiVersion: extensions/v1beta1 kind: Deployment metadata: name: jenkins spec: replicas: 1 template: metadata: labels: app: jenkins spec: containers: - name: jenkins image: ybushnev/my-jenkins-image:1.3 env: - name: JAVA_OPTS value: -Djenkins.install.runSetupWizard=false ports: - name: http-port containerPort: 8080 - name: jnlp-port containerPort: 50000 volumeMounts: - name: jenkins-home mountPath: /var/jenkins_home volumes: - name: jenkins-home emptyDir: {} </code></pre> <p><strong>Service:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: jenkins spec: type: NodePort ports: - port: 8080 name: "http" targetPort: 8080 - port: 50000 name: "slave" targetPort: 50000 selector: app: jenkins </code></pre> <p><strong>After deployment I have such services:</strong></p> <pre><code>Yuris-MBP-2% kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins NodePort 10.108.30.10 &lt;none&gt; 8080:30267/TCP,50000:31588/TCP 1h kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 1h </code></pre> <p><strong>Kubernetes master running on:</strong></p> <pre><code>Yuris-MBP-2% kubectl cluster-info | grep master Kubernetes master is running at https://192.168.99.100:8443 </code></pre> <p><strong>Based on configuration above I specify the cloud config in Jenkins:</strong></p> <p><a href="https://i.stack.imgur.com/PUxHF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PUxHF.png" alt="enter image description here"></a></p> <p><strong>And finally I put such configuration for slave pod template:</strong> <a href="https://i.stack.imgur.com/zloN0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zloN0.png" alt="enter image description here"></a></p> <p>As a result, via k8s logs I see such logs on the master:</p> <pre><code>Waiting for agent to connect (41/100): kubernetes-agent-tgskx Waiting for agent to connect (42/100): kubernetes-agent-tgskx Waiting for agent to connect (43/100): kubernetes-agent-tgskx Waiting for agent to connect (44/100): kubernetes-agent-tgskx Waiting for agent to connect (45/100): kubernetes-agent-tgskx </code></pre> <p>Jenkins container seems to be green. 
There are no logs in K8s, but these events happened:</p> <pre><code>Successfully assigned kubernetes-agent-517tl to minikube MountVolume.SetUp succeeded for volume "workspace-volume" MountVolume.SetUp succeeded for volume "default-token-8sgh6" </code></pre> <p><strong>IMPORTANT</strong> If I do not put 'jnlp' as the container name (I guess this is the important part, since otherwise it takes some default jnlp agent image), the slave spins up and connects to the master just fine. However, even if I have a custom docker image in the 'Docker image' field, it is not actually used as a reference: I can see that the Jenkins slave doesn't have the tools/files it is supposed to have based on the provided image. Last time I tried to use this image: "gcr.io/cloud-solutions-images/jenkins-k8s-slave", but for me it fails for any image whenever I put 'jnlp' as the container template name. I tried to play with many images with no luck... I will be very glad for any hint!</p>
<p>I think you should give your Jenkins master the credentials (a service account with the right RBAC permissions) it needs to start new pods.</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: jenkins --- kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: jenkins rules: - apiGroups: [""] resources: ["pods"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/exec"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get","list","watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: jenkins roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: jenkins subjects: - kind: ServiceAccount name: jenkins </code></pre> <p>And then use the account in your Deployment:</p> <pre><code>spec: serviceAccountName: jenkins </code></pre> <p>Take a look at my previous answer at <a href="https://stackoverflow.com/a/47874390/2718151">https://stackoverflow.com/a/47874390/2718151</a></p> <p>I hope this helps.</p>
<p>I am trying to wrap my brain around the suggested workarounds for the lack of built-in HTTP->HTTPS redirection in ingress-gce, using GLBC. What I am struggling with is how to use this custom backend that is suggested as one option to overcome this limitation (e.g. in <a href="https://stackoverflow.com/q/37001557/2745865">How to force SSL for Kubernetes Ingress on GKE</a>).</p> <p>In my case the application behind the load-balancer does not itself have apache or nginx, and I just can't figure out how to include e.g. apache (which I know way better than nginx) in the setup. Am I supposed to set apache in front of the application as a proxy? In that case I wonder what to put in the proxy config as one can't use those convenient k8s service names there...</p> <p>Or should apache be set up as some kind of a separate backend, which would only get traffic when the client uses plain HTTP? In that case I am missing the separation of backends by protocol in the GCE load-balancer, and while I can see how that could be done manually, the ingress needs to be configured for that, and I can't seem to find any resources explaining how to actually do that.</p> <p>For example, in <a href="https://github.com/kubernetes/ingress-gce#redirecting-http-to-https" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce#redirecting-http-to-https</a> the "application" takes care of the forwaring (it seems to be built on nginx), and while that example works beautifully, it's not possible to do the same thing with the application I am talking about.</p> <p>Basically, my setup is currently this:</p> <pre><code>http://&lt;public ip&gt;:80 -\ &gt; GCE LB -&gt; K8s pod running the application https://&lt;public_ip&gt;:443 -/ (ingress-gce) </code></pre> <p>I know I could block HTTP altogether, but that'd ruin user experience when someone just typed in the domain name in the browser.</p> <p>Currently I have these services set up for the LB:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: myapp spec: type: LoadBalancer ports: - name: http port: 80 targetPort: myapp protocol: TCP selector: app: myapp --- kind: Ingress apiVersion: extensions/v1beta1 metadata: name: myapp-ingress annotations: ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: "my-ip" ingress.gcp.kubernetes.io/pre-shared-cert: "my-cert" spec: backend: serviceName: myapp servicePort: 80 rules: - host: my.domain.name http: paths: - path: / backend: serviceName: myapp servicePort: 80 </code></pre> <p>In addition I have GLBC bundled together with the application deployment:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: glbc-configmap data: gce.conf: | [global] node-tags = myapp-k8s-nodepool node-instance-prefix = gke-myapp-k8s-cluster --- kind: Deployment apiVersion: apps/v1beta2 metadata: name: myapp spec: replicas: 1 selector: matchLabels: app: myapp template: metadata: name: myapp labels: app: myapp spec: containers: # START application container - name: myapp image: eu.gcr.io/myproject/myapp:latest imagePullPolicy: Always readinessProbe: httpGet: path: /ping port: 8080 ports: - name: myapp containerPort: 8080 # END application container # START GLBC container - name: myapp-glbc image: gcr.io/google_containers/glbc:0.9.7 livenessProbe: httpGet: path: /ping port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 volumeMounts: - mountPath: /etc/glbc-configmap name: cloudconfig readOnly: true args: - --apiserver-host=http://localhost:8080 - --default-backend-service=myapp - 
--sync-period=300s - --config-file-path=/etc/glbc-configmap/gce.conf </code></pre> <p>I'd greatly appreciate any pointers in addition to more complete solutions.</p>
<p><strong>Edit in May 2020: "HTTP(S) Load Balancing Rewrites and Redirects support is now in General Availability" as stated in <a href="https://issuetracker.google.com/issues/35904733#comment95" rel="nofollow noreferrer">https://issuetracker.google.com/issues/35904733#comment95</a> seems to mean that now it finally would be possible to implement proper rediction rules in the LB itself, without having to resort to having an extra pod or any other tweak of that kind. However, in case the below is of use to someone, I'll leave it there for reference.</strong></p> <p>I was able to find a solution, where the GCE LB directs traffic to Apache (of course this should work for any proxy) which runs as a deployment in K8s cluster. In Apache config, there's a redirect based on X-Forwarded-Proto header, and a reverse proxy rules that point to the application in the cluster.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: apache-httpd-configmap data: httpd.conf: | # Apache httpd v2.4 minimal configuration # This can be reduced further if you remove the accees log and mod_log_config ServerRoot "/usr/local/apache2" # Minimum modules needed LoadModule mpm_event_module modules/mod_mpm_event.so LoadModule log_config_module modules/mod_log_config.so LoadModule mime_module modules/mod_mime.so LoadModule dir_module modules/mod_dir.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule unixd_module modules/mod_unixd.so LoadModule alias_module modules/mod_alias.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_http_module modules/mod_proxy_http.so TypesConfig conf/mime.types PidFile logs/httpd.pid # Comment this out if running httpd as a non root user User nobody # Port to Listen on Listen 8081 # In a basic setup httpd can only serve files from its document root DocumentRoot "/usr/local/apache2/htdocs" # Default file to serve DirectoryIndex index.html # Errors go to stderr ErrorLog /proc/self/fd/2 # Access log to stdout LogFormat "%h %l %u %t \"%r\" %&gt;s %b" common CustomLog /proc/self/fd/1 common Mutex posixsem proxy # Never change this block &lt;Directory /&gt; AllowOverride None Require all denied &lt;/Directory&gt; # Deny documents to be served from the DocumentRoot &lt;Directory "/usr/local/apache2/htdocs"&gt; Require all denied &lt;/Directory&gt; &lt;VirtualHost *:8081&gt; ServerName my.domain.name # Redirect HTTP to load balancer HTTPS URL &lt;If "%{HTTP:X-Forwarded-Proto} -strcmatch 'http'"&gt; Redirect / https://my.domain.name:443/ &lt;/If&gt; # Proxy the requests to the application # "myapp" in the rules relies a K8s cluster add-on for DNS aliases # see https://kubernetes.io/docs/concepts/services-networking/service/#dns ProxyRequests Off ProxyPass "/" "http://myapp:80/" ProxyPassReverse "/" "http://myapp:80/" &lt;/VirtualHost&gt; --- kind: Service apiVersion: v1 metadata: name: apache-httpd spec: type: NodePort ports: - name: http port: 80 targetPort: apache-httpd protocol: TCP selector: app: apache-httpd --- kind: Deployment apiVersion: apps/v1beta2 metadata: name: apache-httpd spec: replicas: 1 selector: matchLabels: app: apache-httpd template: metadata: name: apache-httpd labels: app: apache-httpd spec: containers: # START apache httpd container - name: apache-httpd image: httpd:2.4-alpine imagePullPolicy: Always readinessProbe: httpGet: path: / port: 8081 command: ["/usr/local/apache2/bin/httpd"] args: ["-f", "/etc/apache-httpd-configmap/httpd.conf", "-DFOREGROUND"] ports: - name: apache-httpd containerPort: 8081 volumeMounts: - mountPath: 
/etc/apache-httpd-configmap name: apacheconfig readOnly: true # END apache container # END containers volumes: - name: apacheconfig configMap: name: apache-httpd-configmap # END volumes # END template spec # END template </code></pre> <p>In addition to the above new manifest yaml, the rule for "myapp-ingress" needed to change so that instead of <code>serviceName: myapp</code> it has <code>serviceName: apache-httpd</code> to make the LB direct traffic to Apache.</p> <p>It seems that this rather minimal Apache setup requires very little CPU and RAM, so it fits just fine in the existing cluster and thus doesn't really cause any direct extra cost.</p>
<p>I have my Kubernetes cluster running in GKE I want to run an application outside the cluster and talk to the Kubernetes API.</p> <p>By using password retrieved from running:</p> <pre><code>gcloud container clusters get-credentials cluster-2 --log-http </code></pre> <p>I am able to access the API with Basic authentication. But I want to create multiple Kubernetes service accounts and configure them with required authorization and use appropriately.</p> <p>So, I created service accounts and obtained the tokens using:</p> <pre><code>kubectl create serviceaccount sauser1 kubectl get serviceaccounts sauser1 -o yaml kubectl get secret sauser1-token-&lt;random-string-as-retrieved-from-previous-command&gt; -o yaml </code></pre> <p>If I try to access the Kubernetes API with the obtained token using Bearer authentication then I get a 401 HTTP error. I thought that some permissions may have to be set for the service account, so based on the documentation <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">here</a>, I created below YAML file:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pod-reader rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "watch", "list"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pod-reader subjects: - kind: ServiceAccount name: sauser1 namespace: default roleRef: kind: ClusterRole name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>and tried to apply it using the below command:</p> <pre><code>kubectl apply -f default-sa-rolebinding.yaml </code></pre> <p>I got the following error:</p> <pre><code>clusterrolebinding "pod-reader" created Error from server (Forbidden): error when creating "default-sa-rolebinding.yaml" : clusterroles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to g rant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["g et"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule {Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&amp;{xyz@gmail. com [system:authenticated] map[authenticator:[GKE]]} ownerrules=[PolicyRule{Res ources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:[ "create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healt hz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/versi on"], Verbs:["get"]}] ruleResolutionErrors=[] </code></pre> <p>I dont know how to proceed from here. Is my approach correct or am I missing something here?</p> <p><strong>UPDATE</strong>: As per the post referred by @JanosLenart in the comments, modified the kubectl command and the above error is not observed. But accessing the API, still gives 401 error. The curl command that I am using is: </p> <pre><code>curl -k -1 -H "Authorization: Bearer &lt;token&gt;" https://&lt;ip-address&gt;/api/v1/namespaces/default/pods -v </code></pre>
<p>@Janos pointed out the potential problem; however, I think you will need an actual Cloud IAM Service Account as well, because you said: </p> <blockquote> <p>I want to run an application outside the cluster [...]</p> </blockquote> <p>If you're authenticating to GKE from outside, I believe you can only use the Google IAM identities. (I might be wrong; if so, please let me know.)</p> <p>In this case, what you need to do:</p> <ol> <li>Create an IAM service account and download a JSON key file for it.</li> <li>Set <code>GOOGLE_APPLICATION_CREDENTIALS</code> to that file.</li> <li><p>Either:</p> <ul> <li><p>use RBAC like in your question to give permissions to the email address of the IAM Service Account (see the sketch below this list)</p></li> <li><p>use IAM roles to grant the IAM Service Account access to the GKE API (e.g. the <code>Container Developer</code> role is usually sufficient)</p></li> </ul></li> <li><p>Use the <code>kubectl</code> command against the cluster (make sure you have a <code>.kube/config</code> file with the cluster's IP/CA cert beforehand) with the environment variable above, and it should work.</p></li> </ol>
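<p>As an illustration of the RBAC option above (a sketch only, with placeholder names): the subject of the binding would be a <code>User</code> whose name is the IAM service account's email address, for example:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-iam-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
subjects:
  # email of the Cloud IAM service account (placeholder value)
  - kind: User
    name: sauser1@my-project.iam.gserviceaccount.com
    apiGroup: rbac.authorization.k8s.io
</code></pre>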
<p>I receive the following error after running</p> <pre><code>kubectl apply -f node.deployment.yml </code></pre> <p>With the following YAML config:</p> <pre><code> 1 apiVersion: apps/v1 2 kind: Deployment 3 metadata: 4 name: node-deployment 5 labels: 6 app: node-app 7 spec: 8 replicas: 2 9 selector: 10 app: node-app 11 template: 12 metadata: 13 labels: 14 app: node-app 15 spec: 16 containers: 17 - name: node-app 18 image: ubuntu/node 19 ports: 20 - containerPort: 8080 </code></pre> <p>Error:</p> <pre><code>error: error validating "node.deployment.yml": error validating data: ValidationError(Deployment.spec.selector): unknown field "app" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I'm using minikube as a local cluster.</p>
<p>Found the error: I forgot to add the <code>matchLabels</code> property:</p> <pre><code> 9 selector: 10 matchLabels: 11 app: node-app </code></pre>
<p>I have a statefulset with 3 members. They are accessible from inside the cluster with something like:</p> <pre><code>podname-{0..n}.service.default.svc.cluster.local </code></pre> <p>I'm using the Kubernetes API from a Controller. I just created the Statefulset with:</p> <pre class="lang-go prettyprint-override"><code>import ( "k8s.io/client-go/kubernetes" appsv1 "k8s.io/api/apps/v1" ) ... s := appsv1.StatefulSet{ /* some data */ } kubernetes.AppsV1().StatefulSets(nameSpace).Create(s) </code></pre> <p>Let's assume everything worked correctly, and after a few minutes I have 3 running Pods. How can I get the Pod DNS entries for the newly created Pods?</p> <p>I know I can build the DNS with:</p> <pre><code>fmt.Sprintf("%s-%d.%s.%s.svc.cluster.local", s.Name, idx, service, s.Namespace) </code></pre> <p>But I'm finding some issues:</p> <ol> <li>I will have to calculate the index depending on the number of replicas</li> <li>I need to know the service name used for this <code>StatefulSet</code></li> <li>I'm assuming the cluster domain is <code>cluster.local</code>, which is not necessarily true</li> </ol> <p>What I think should exist (but I'm not sure it really does) is an API that, given a StatefulSet, will let me know the DNS names of the replicas that were created. Does an API like this exist?</p>
<p>When you create the headless <code>Service</code> with the <code>StatefulSet</code>, it should create DNS SRV entries. You can query those to get a list of all the host names of the pods corresponding to the headless <code>Service</code> without knowing how many there are. For example, if using Python (with the dnspython package) you can do:</p> <pre><code>&gt;&gt;&gt; import dns.resolver &gt;&gt;&gt; for item in dns.resolver.query('your-app-name-headless', 'SRV'): item ... &lt;DNS IN SRV rdata: 10 25 0 2555bb89.your-app-name-headless.your-project.svc.cluster.local.&gt; &lt;DNS IN SRV rdata: 10 25 0 830bdb18.your-app-name-headless.your-project.svc.cluster.local.&gt; &lt;DNS IN SRV rdata: 10 25 0 aeb532de.your-app-name-headless.your-project.svc.cluster.local.&gt; &lt;DNS IN SRV rdata: 10 25 0 a432c21f.your-app-name-headless.your-project.svc.cluster.local.&gt; </code></pre> <p>See:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#headless-services</a></li> </ul>
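<p>For completeness, the SRV records above come from a headless <code>Service</code>, i.e. one with <code>clusterIP: None</code> whose name matches the <code>serviceName</code> of the <code>StatefulSet</code>. A minimal sketch (all names are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: your-app-name-headless   # must match spec.serviceName of the StatefulSet
spec:
  clusterIP: None                # headless: DNS returns records for the individual pods
  selector:
    app: your-app-name
  ports:
    - name: web
      port: 80
</code></pre>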
<p>I using kbuernetes &amp; docker in a isolate environment without Internet, and I always pull image and save to .tar file in other machine and load to the isolate environment, but sometimes kubernetes's Pod cann't launch successful and says the Pod is pull image but network is not ok. But I check docker's image, the image has loaded, why I also need to pull the same image again?</p> <p><strong>this is docker's image, MYSQL's image has loaded:</strong></p> <pre><code>[root@localhost kubecfg]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.12.0 4a9cd8a2008a 3 weeks ago 230.5 MB docker.io/mysql latest 5195076672a7 3 weeks ago 371.4 MB gcr.io/google_containers/kube-apiserver-amd64 v1.9.2 7109112be2c7 11 weeks ago 210.4 MB </code></pre> <p><strong>this is kubernetes's error log</strong></p> <pre><code>[root@localhost kubecfg]# kubectl describe pod mysql-vmwdw Name: mysql-vmwdw Namespace: default Node: localhost.localdomain/192.168.88.129 Start Time: Mon, 02 Apr 2018 14:14:07 +0800 Labels: app=mysql Annotations: &lt;none&gt; Status: Running IP: 192.168.0.61 Controlled By: ReplicationController/mysql Containers: mysql: Container ID: docker://9aa3128eaa1f330dfd0d6ebf732dca5a99ad49d7d6d4002a2384bdb03e056d7d Image: docker.io/mysql Image ID: docker-pullable://docker.io/mysql@sha256:691c55aabb3c4e3b89b953dd2f022f7ea845e5443954767d321d5f5fa394e28c Port: 3306/TCP State: Waiting Reason: ImagePullBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Tue, 10 Apr 2018 14:56:04 +0800 Finished: Wed, 11 Apr 2018 08:56:04 +0800 Ready: False Restart Count: 3 Environment: MYSQL_ROOT_PASSWORD: 123456 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7kq2 (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-s7kq2: Type: Secret (a volume populated by a Secret) SecretName: default-token-s7kq2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulMountVolume 19m kubelet, localhost.localdomain MountVolume.SetUp succeeded for volume "default-token-s7kq2" Normal SandboxChanged 19m (x12 over 19m) kubelet, localhost.localdomain Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 19m (x12 over 19m) kubelet, localhost.localdomain Failed create pod sandbox. Warning Failed 9m (x6 over 18m) kubelet, localhost.localdomain Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/mysql/images. You may want to check your internet connection or if you are behind a proxy. Warning Failed 4m (x41 over 18m) kubelet, localhost.localdomain Error: ImagePullBackOff </code></pre> <p>It says: <strong>" Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to <a href="https://index.docker.io/v1/repositories/library/mysql/images" rel="nofollow noreferrer">https://index.docker.io/v1/repositories/library/mysql/images</a>. You may want to check your internet connection or if you are behind a proxy."</strong></p> <p>I had load the image, why docker would pull the same image again? And what should I do to resolve the problem?</p>
<p>It is not clear what <code>imagePullPolicy</code> is specified. In your case it should be <code>Never</code> (see the sketch below). </p> <p><a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulling-images" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#pre-pulling-images</a></p>
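<p>A sketch of what that could look like in the deployment's pod spec (the image name is taken from the question, the rest is illustrative):</p> <pre><code>spec:
  containers:
    - name: mysql
      image: docker.io/mysql
      imagePullPolicy: Never   # never contact a registry; use the image already loaded on the node
</code></pre>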
<p>I'm fighting with Kubernetes. I've googled a lot and looked at a few answers such as <a href="https://stackoverflow.com/questions/40600419/why-am-i-getting-an-errimagepull-error-in-this-kubernetes-deployment">this one</a> - for example - but can't seem to get it to work.</p> <p>I've created a docker container and push to local registry:</p> <pre><code>sudo docker run -d -p 5000:5000 --name registry registry:2 sudo docker tag i-a/i-a:latest localhost:5000/i-a sudo docker push localhost:5000/i-a </code></pre> <p>The last command gives:</p> <pre><code>The push refers to a repository [localhost:5000/i-a] e0a33c56cca0: Pushed 54ab83ede54d: Pushed f5a58f369605: Pushed cd7100a72410: Pushed latest: digest: sha256:0f30cdf6b4a4e0e382a6cae50c1325103c3b987d9e51c42edea2244a82ae1331 size: 1164 </code></pre> <p>Doing <code>sudo docker pull localhost:5000/i-a</code> gives:</p> <pre><code>Using default tag: latest latest: Pulling from i-a Digest: sha256:0f30cdf6b4a4e0e382a6cae50c1325103c3b987d9e51c42edea2244a82ae1331 Status: Image is up to date for localhost:5000/i-a:latest </code></pre> <p>The config file i-a.yaml:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: run: i-a name: i-a namespace: default spec: replicas: 3 selector: matchLabels: run: i-a strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: run: i-a spec: containers: - image: localhost:5000/i-a imagePullPolicy: IfNotPresent name: i-a ports: - containerPort: 8090 dnsPolicy: ClusterFirst restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: run: i-a name: i-a namespace: default spec: ports: - port: 80 protocol: TCP targetPort: 8090 selector: run: i-a sessionAffinity: None type: ClusterIP </code></pre> <p>When I do <code>sudo kubectl get pods --all-namespaces</code> I get:</p> <pre><code>... 
default i-a-3400848339-0x6c9 0/1 ImagePullBackOff 0 13m default i-a-3400848339-7ltp1 0/1 ImagePullBackOff 0 13m default i-a-3400848339-wv092 0/1 ImagePullBackOff 0 13m </code></pre> <p>The <code>ImagePullBackOff</code> turns to <code>ErrImagePull</code>.</p> <p>When I run <code>kubectl describe pod i-a-3400848339-0x6c9</code> I get an error <code>Failed to pull image "localhost:5000/i-a": Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused</code>:</p> <pre><code>Name: i-a-3400848339-0x6c9 Namespace: default Node: minikube/192.168.99.100 Start Time: Mon, 09 Apr 2018 21:11:15 +0200 Labels: pod-template-hash=3400848339 run=i-a Status: Pending IP: 172.17.0.7 Controllers: ReplicaSet/i-a-3400848339 Containers: i-a: Container ID: Image: localhost:5000/i-a Image ID: Port: 8090/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Volume Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-bmwkd (ro) Environment Variables: &lt;none&gt; Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-bmwkd: Type: Secret (a volume populated by a Secret) SecretName: default-token-bmwkd QoS Class: BestEffort Tolerations: &lt;none&gt; Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 18m 18m 1 {default-scheduler } Normal Scheduled Successfully assigned i-a-3400848339-0x6c9 to minikube 18m 2m 8 {kubelet minikube} spec.containers{i-a} Normal Pulling pulling image "localhost:5000/i-a" 18m 2m 8 {kubelet minikube} spec.containers{i-a} Warning Failed Failed to pull image "localhost:5000/i-a": Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused 18m 2m 8 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "i-a" with ErrImagePull: "Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused" 18m 13s 75 {kubelet minikube} spec.containers{i-a} Normal BackOff Back-off pulling image "localhost:5000/i-a" 18m 13s 75 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "i-a" with ImagePullBackOff: "Back-off pulling image \"localhost:5000/i-a\"" </code></pre> <p>I'm not sure where to look next... (I get 404 when I browse to <a href="http://localhost:5000/v1/repositories/i-a/images" rel="nofollow noreferrer">http://localhost:5000/v1/repositories/i-a/images</a>)</p>
<p>Try with:</p> <pre><code>spec: containers: - image: localhost:5000/i-a imagePullPolicy: Never name: i-a ports: - containerPort: 8090 dnsPolicy: ClusterFirst restartPolicy: Always </code></pre>
<p>I using kbuernetes &amp; docker in a isolate environment without Internet, and I always pull image and save to .tar file in other machine and load to the isolate environment, but sometimes kubernetes's Pod cann't launch successful and says the Pod is pull image but network is not ok. But I check docker's image, the image has loaded, why I also need to pull the same image again?</p> <p><strong>this is docker's image, MYSQL's image has loaded:</strong></p> <pre><code>[root@localhost kubecfg]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.12.0 4a9cd8a2008a 3 weeks ago 230.5 MB docker.io/mysql latest 5195076672a7 3 weeks ago 371.4 MB gcr.io/google_containers/kube-apiserver-amd64 v1.9.2 7109112be2c7 11 weeks ago 210.4 MB </code></pre> <p><strong>this is kubernetes's error log</strong></p> <pre><code>[root@localhost kubecfg]# kubectl describe pod mysql-vmwdw Name: mysql-vmwdw Namespace: default Node: localhost.localdomain/192.168.88.129 Start Time: Mon, 02 Apr 2018 14:14:07 +0800 Labels: app=mysql Annotations: &lt;none&gt; Status: Running IP: 192.168.0.61 Controlled By: ReplicationController/mysql Containers: mysql: Container ID: docker://9aa3128eaa1f330dfd0d6ebf732dca5a99ad49d7d6d4002a2384bdb03e056d7d Image: docker.io/mysql Image ID: docker-pullable://docker.io/mysql@sha256:691c55aabb3c4e3b89b953dd2f022f7ea845e5443954767d321d5f5fa394e28c Port: 3306/TCP State: Waiting Reason: ImagePullBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Tue, 10 Apr 2018 14:56:04 +0800 Finished: Wed, 11 Apr 2018 08:56:04 +0800 Ready: False Restart Count: 3 Environment: MYSQL_ROOT_PASSWORD: 123456 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7kq2 (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-s7kq2: Type: Secret (a volume populated by a Secret) SecretName: default-token-s7kq2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulMountVolume 19m kubelet, localhost.localdomain MountVolume.SetUp succeeded for volume "default-token-s7kq2" Normal SandboxChanged 19m (x12 over 19m) kubelet, localhost.localdomain Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 19m (x12 over 19m) kubelet, localhost.localdomain Failed create pod sandbox. Warning Failed 9m (x6 over 18m) kubelet, localhost.localdomain Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/mysql/images. You may want to check your internet connection or if you are behind a proxy. Warning Failed 4m (x41 over 18m) kubelet, localhost.localdomain Error: ImagePullBackOff </code></pre> <p>It says: <strong>" Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to <a href="https://index.docker.io/v1/repositories/library/mysql/images" rel="nofollow noreferrer">https://index.docker.io/v1/repositories/library/mysql/images</a>. You may want to check your internet connection or if you are behind a proxy."</strong></p> <p>I had load the image, why docker would pull the same image again? And what should I do to resolve the problem?</p>
<p>You are using the <code>:latest</code> tag for the image. When the image tag is <code>latest</code>, Kubernetes will set <code>imagePullPolicy</code> to <code>Always</code>. For details, <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">see the official doc</a>.</p> <p>You can either change the tag to something else (e.g. <code>docker.io/mysql:8.0</code>) or simply set <code>imagePullPolicy</code> to <code>Never</code> yourself.</p>
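<p>A hedged example of combining both suggestions: pin an explicit tag (the tag <code>8.0</code> here is only an example; use whatever version you actually loaded) and state the pull policy explicitly:</p> <pre><code>spec:
  containers:
    - name: mysql
      image: docker.io/mysql:8.0      # explicit tag instead of :latest
      imagePullPolicy: IfNotPresent   # pull only if the image is not already present on the node
</code></pre>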
<p>I'm new to Kubernetes and I'm trying to add a PVC in my <code>StatefulSet</code> on Minikube. PV and PVC are shown here:</p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE neo4j-backups 5Gi RWO Retain Bound default/backups-claim manual 1h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE backups-claim Bound neo4j-backups 5Gi RWO manual 51m </code></pre> <p>Basically I want all pods of the StatefulSet to see the contents of that volume as backup files are stored there.</p> <p>StatefulSet used can be found <a href="https://github.com/neo4j-contrib/kubernetes-neo4j/blob/master/cores/statefulset.yaml" rel="nofollow noreferrer">here</a></p> <p>Minikube version: <code>minikube version: v0.25.2</code><br /> Kubernetes version: <code>GitVersion:&quot;v1.9.4&quot;</code></p>
<p>If you use <code>volumeClaimTemplates</code> in <code>StatefulSet</code> k8s will do dynamic provisioning &amp; create one PVC and corresponding PV for each pod, so each one of them gets their own storage. </p> <p>What you want is to create one PV &amp; one PVC and use it in all replicas of Statefulset. </p> <p>Below is example on Kubernetes 1.10 how you can do it, where <code>/var/www/html</code> will be shared by all three Pods, just change <code>/directory/on/host</code> to some local directory on your machine. Also I ran this example on minikube v0.26.0</p> <p>Ofcourse below is just an example to illustrate the idea, but in a real example processes in Pod should be aware of syncronizing access to shared storage. </p> <hr> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolume metadata: name: example-pv spec: capacity: storage: 100Gi # volumeMode field requires BlockVolume Alpha feature gate to be enabled. volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /directory/on/host nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - minikube --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-local-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: local-storage --- apiVersion: "apps/v1beta1" kind: StatefulSet metadata: name: nginx spec: serviceName: nginx replicas: 3 template: metadata: labels: app: nginx spec: containers: - name: nginx-container image: "nginx:1.12.2" imagePullPolicy: "IfNotPresent" volumeMounts: - name: localvolume mountPath: /var/www/html volumes: - name: localvolume persistentVolumeClaim: claimName: example-local-claim </code></pre>
<p>I have an app written in angular4, I am running on production and sandbox,</p> <p>I create an image and then deploy on kubernetes</p> <p>I have some environment variables differnt to sandbox and production , currently I build two differnt images one for sandbox and one for production:</p> <p>environments under <code>src/envirnments</code>:</p> <p><strong>environment.prod.ts</strong></p> <pre><code>export const environment = { production: true, server_url: 'https://api.example.com/app/', }; </code></pre> <p><strong>environment.sandbox.ts</strong></p> <pre><code>export const environment = { production: false, server_url: 'https://api-sandbox.example.com/app/', }; </code></pre> <p>building image:</p> <p><strong>production</strong> : <code>ng build --prod</code></p> <p><strong>sandbox</strong>: <code>ng build--prod --env=sandbox</code></p> <p>now, how can I use external environment varibales instead? somthing like <code>applicatoion.getEnvirnment('server_url')</code>, like this I dont need to create an image for each environment?</p> <p>this is my <code>deployment.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: angular-web-app namespace: production spec: replicas: 1 revisionHistoryLimit: 1 strategy: type: RollingUpdate template: metadata: labels: app: angular-web-app spec: containers: - name: angular-web-app image: us.gcr.io/my-com/angular-web-app:06.01.2018 ports: - containerPort: 80 env: - name: SERVER_URL value: https://api.example.com </code></pre> <p>here is my dockerfile:</p> <pre><code>FROM nginx COPY dist /usr/share/nginx/html EXPOSE 80 EXPOSE 443 </code></pre> <p>building the image:</p> <pre><code>ng build --prod --env=sandbox docker build --rm -t ${REGISTRY}/${CONTAINER}:${TAG} . </code></pre> <p>I added the environment variable to the deployment and I want the app to take the value from there </p>
<p>alternativly you can read from local config json , and then use volumes, having seperate config map for each of you environments <a href="https://stackoverflow.com/questions/43275995/angular4-how-do-access-local-json">see</a></p> <p>under <code>src/assets/config</code> put the config json,</p> <p>in your code read the url from the config file:</p> <pre><code>private getConfigJSONFile() { return this.http.get("/assets/config/env-vars.json").map((res:any) =&gt; res.json()) } </code></pre> <p>now create the config on kubernetes:</p> <p>here is the config file:</p> <p><strong>configs/env-variables.json</strong></p> <pre><code>{ "api_server_url": "https://api.example.com" } </code></pre> <p>create the config</p> <pre><code>kubectl create configmap angular-env-vars --from-file=env-vars.json=configs/env-variables.json </code></pre> <p>use volumes in your deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: angular-web-app spec: replicas: 1 revisionHistoryLimit: 1 strategy: type: RollingUpdate template: metadata: labels: app: angular-web-app spec: containers: - name: angular-web-app image: us.gcr.io/my-com/angular-web-app:06.01.2018 volumeMounts: - name: env-vars mountPath: /usr/share/nginx/html/assets/config ports: - containerPort: 80 volumes: - name: env-vars configMap: name: angular-env-vars </code></pre>
<p>I am new to Kubernetes and I followed this tutorial for deploying a WordPress application with k8s: <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">Deploying WordPress</a>.</p> <p>Let's say I have 3 pods with the WordPress:4.8-apache image and a shared volume, mounted into each pod, that holds the code of WordPress 4.8, and let's say I want to update my WordPress to 4.9 without downtime. Then there has to be a point in time when some pods have the code of WordPress 4.8 and some have the code of WordPress 4.9, but all the pods use the same volume, so how is that possible? Are the new pods going to overwrite the code in the shared folder?</p>
<p>Potentially, this could happen. In this sense, it looks to me like this is not the proper way to deploy WordPress if you follow the immutability principles (<a href="https://engineering.bitnami.com/articles/why-your-next-web-service-should-be-immutable.html" rel="nofollow noreferrer">https://engineering.bitnami.com/articles/why-your-next-web-service-should-be-immutable.html</a>). In order to avoid this kind of issue, my advice would be:</p> <ul> <li>Use environment variables instead of persisting the wp-config.php configuration</li> <li>Store all assets in a service like S3. There are plugins for this (<a href="https://deliciousbrains.com/wp-offload-s3/" rel="nofollow noreferrer">https://deliciousbrains.com/wp-offload-s3/</a>).</li> <li>If you want to use plugins, build an image that bundles all of them (this may be a bit more difficult, but it would save you from the pain of having inconsistencies between the pods).</li> </ul> <p>There could be issues if the newer version implies changes in the database. In that case, I would add some kind of readiness check to the pod definition (see the sketch below) so that, in case the old version fails, it gets removed from the service and only the newer WordPress pods are available. In this case, doing a database backup prior to upgrading would be a good choice as well.</p>
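<p>As a rough sketch of such a readiness check on the WordPress container (the probe path and timings below are only examples and should be tuned to your setup):</p> <pre><code>containers:
  - name: wordpress
    image: wordpress:4.9-apache
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /wp-login.php     # any URL that only responds when WordPress is actually working
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
</code></pre>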
<p>AFAIK administrators can use exec to execute commands in a container running in Kubernetes. This means that they can see all the secrets, correct?</p> <p>Now if a secret is used to connect to something external, which that administrator does not have access to, how can I prevent the administrator from getting access to that external system?</p> <p>Do I need to use something like HashiCorp's Vault? </p>
<blockquote> <p>Do I need to use something like Hashicorps Vault? </p> </blockquote> <p>Generally, yes: you need an external encrypted source in order to separate secret management (readable by admins with the right RBAC) and secrets.</p> <p>For instance, something like <a href="https://github.com/hashicorp/vault-plugin-auth-kubernetes" rel="nofollow noreferrer"><code>hashicorp/vault-plugin-auth-kubernetes</code></a> can help and allows for Kubernetes Service Accounts to authenticate with Vault.</p>
<p>I'm new to kubernetes and I'm struggling with starting my frist pods. I installed kubernetes on my Ubuntu virtual machine, proceeded with </p> <pre><code>kubeadm init </code></pre> <p>followed by other instuctions </p> <pre><code>mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ </code></pre> <p>I can see that the node is up and running </p> <pre><code>kubectl get nodes NAME STATUS ROLES AGE VERSION vm24740 Ready master 12m v1.10.0 </code></pre> <p>Nevertheless my pods won't start:</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE myappdeployment-588bc8ddf4-28jzj 0/1 Pending 0 11m myappdeployment-588bc8ddf4-9bbb9 0/1 Pending 0 11m myappdeployment-588bc8ddf4-fptft 0/1 Pending 0 11m myappdeployment-588bc8ddf4-lxj8p 0/1 Pending 0 11m myappdeployment-588bc8ddf4-xhg5f 0/1 Pending 0 11m </code></pre> <p>This is a detailed view on a pod:</p> <pre><code>kubectl describe pod myappdeployment-588bc8ddf4-28jzj Name: myappdeployment-588bc8ddf4-28jzj Namespace: default Node: &lt;none&gt; Labels: app=myapp pod-template-hash=1446748890 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/myappdeployment-588bc8ddf4 Containers: myapp: Image: jamesquigley/exampleapp:v1.0.0 Port: 9000/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-6rcjb (ro) Conditions: Type Status PodScheduled False Volumes: default-token-6rcjb: Type: Secret (a volume populated by a Secret) SecretName: default-token-6rcjb Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 1m (x37 over 11m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. </code></pre> <p>Would somebody more experienced than me know, why the pods don't start?</p>
<p>It seems like you are running a single-node (master-only) k8s cluster: </p> <p>From the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">Documentation</a>: </p> <blockquote> <p>Master Isolation </p> <p>By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:</p> <p><code>kubectl taint nodes --all node-role.kubernetes.io/master-</code> </p> <p>With output looking something like:</p> <p><code>node "test-01" untainted taint key="dedicated" and effect="" not found. taint key="dedicated" and effect="" not found</code>. </p> <p>This will remove the <code>node-role.kubernetes.io/master</code> taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.</p> </blockquote>
<p>I have a spring boot app image which needs the following property.</p> <p>server.ssl.keyStore=/certs/keystore.jks</p> <p>I am loading the keystore file to secrets using the bewloe command.</p> <p>kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks</p> <p>I use the below secret reference in my deployment.yaml</p> <pre><code> { "name": "SERVER_SSL_KEYSTORE", "valueFrom": { "secretKeyRef": { "name": "ssl-keystore-cert", "key": "server-ssl.jks" } } } </code></pre> <p>With the above reference, I am getting the below error.</p> <blockquote> <p>Error: failed to start container "app-service": Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"setenv: invalid argument\"" Back-off restarting failed container</p> </blockquote> <p>If i go with the volume mount option,</p> <pre><code>"spec": { "volumes": [ { "name": "keystore-cert", "secret": { "secretName": "ssl-keystore-cert", "items": [ { "key": "server-ssl.jks", "path": "keycerts" } ] } } ], "containers": [ { "env": [ { "name": "JAVA_OPTS", "value": "-Dserver.ssl.keyStore=/certs/keystore/keycerts" } ], "name": "app-service", "ports": [ { "containerPort": 8080, "protocol": "TCP" } ], "volumeMounts": [ { "name": "keystore-cert", "mountPath": "/certs/keystore" } ], "imagePullPolicy": "IfNotPresent" } ] </code></pre> <p>I am getting the below error with the above approach.</p> <blockquote> <p>Caused by: java.lang.IllegalArgumentException: Resource location must not be null at org.springframework.util.Assert.notNull(Assert.java:134) ~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at org.springframework.util.ResourceUtils.getURL(ResourceUtils.java:131) ~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory.configureSslKeyStore(JettyEmbeddedServletContainerFactory.java:301) ~[spring-boot-1.4.5.RELEASE.jar!/:1.4.5.RELEASE]</p> </blockquote> <p>I tried with the below option also, instead of JAVA_OPTS,</p> <pre><code>{ "name": "SERVER_SSL_KEYSTORE", "value": "/certs/keystore/keycerts" } </code></pre> <p>Still the error is same.</p> <p>Not sure what is the right approach.</p>
<p>I tried to reproduce the situation with your configuration. I created a secret using the command:</p> <pre><code>kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks </code></pre> <p>I used this YAML as a test environment:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - image: busybox command: - sleep - "3600" imagePullPolicy: IfNotPresent name: busybox env: - name: JAVA_OPTS value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks" ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: secret-volume readOnly: true mountPath: "/certs/keystore" volumes: - name: secret-volume secret: secretName: ssl-keystore-cert </code></pre> <p>As you can see, I used the "server-ssl.jks" file name in the variable. If you create the secret from a file, Kubernetes will store this file in the secret. When you mount this secret somewhere, you just get that file. You tried to use <code>/certs/keystore/keycerts</code> but it doesn't exist, which you see in the logs:</p> <blockquote> <p>Resource location must not be null at org.springframework.util.Assert.notNull</p> </blockquote> <p>Because your mounted secret is here <code>/certs/keystore/keycerts/server-ssl.jks</code> </p> <p>It should work, but just fix the paths </p>
<p>I using kbuernetes &amp; docker in a isolate environment without Internet, and I always pull image and save to .tar file in other machine and load to the isolate environment, but sometimes kubernetes's Pod cann't launch successful and says the Pod is pull image but network is not ok. But I check docker's image, the image has loaded, why I also need to pull the same image again?</p> <p><strong>this is docker's image, MYSQL's image has loaded:</strong></p> <pre><code>[root@localhost kubecfg]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.12.0 4a9cd8a2008a 3 weeks ago 230.5 MB docker.io/mysql latest 5195076672a7 3 weeks ago 371.4 MB gcr.io/google_containers/kube-apiserver-amd64 v1.9.2 7109112be2c7 11 weeks ago 210.4 MB </code></pre> <p><strong>this is kubernetes's error log</strong></p> <pre><code>[root@localhost kubecfg]# kubectl describe pod mysql-vmwdw Name: mysql-vmwdw Namespace: default Node: localhost.localdomain/192.168.88.129 Start Time: Mon, 02 Apr 2018 14:14:07 +0800 Labels: app=mysql Annotations: &lt;none&gt; Status: Running IP: 192.168.0.61 Controlled By: ReplicationController/mysql Containers: mysql: Container ID: docker://9aa3128eaa1f330dfd0d6ebf732dca5a99ad49d7d6d4002a2384bdb03e056d7d Image: docker.io/mysql Image ID: docker-pullable://docker.io/mysql@sha256:691c55aabb3c4e3b89b953dd2f022f7ea845e5443954767d321d5f5fa394e28c Port: 3306/TCP State: Waiting Reason: ImagePullBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Tue, 10 Apr 2018 14:56:04 +0800 Finished: Wed, 11 Apr 2018 08:56:04 +0800 Ready: False Restart Count: 3 Environment: MYSQL_ROOT_PASSWORD: 123456 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7kq2 (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-s7kq2: Type: Secret (a volume populated by a Secret) SecretName: default-token-s7kq2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulMountVolume 19m kubelet, localhost.localdomain MountVolume.SetUp succeeded for volume "default-token-s7kq2" Normal SandboxChanged 19m (x12 over 19m) kubelet, localhost.localdomain Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 19m (x12 over 19m) kubelet, localhost.localdomain Failed create pod sandbox. Warning Failed 9m (x6 over 18m) kubelet, localhost.localdomain Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to https://index.docker.io/v1/repositories/library/mysql/images. You may want to check your internet connection or if you are behind a proxy. Warning Failed 4m (x41 over 18m) kubelet, localhost.localdomain Error: ImagePullBackOff </code></pre> <p>It says: <strong>" Failed to pull image "docker.io/mysql": rpc error: code = Unknown desc = Network timed out while trying to connect to <a href="https://index.docker.io/v1/repositories/library/mysql/images" rel="nofollow noreferrer">https://index.docker.io/v1/repositories/library/mysql/images</a>. You may want to check your internet connection or if you are behind a proxy."</strong></p> <p>I had load the image, why docker would pull the same image again? And what should I do to resolve the problem?</p>
<p>In your case creating the deployment by</p> <pre><code> kubectl run mysql --image=docker.io/mysql --imagePullPolicy=Never </code></pre> <p>should solve the problem. Alternatively you could set <code>imagePullPolicy</code> to <code>Never</code> or <code>IfNotPresent</code> in the <code>yaml</code> spec of your deployment to fix this error.</p>
<p>I am trying to setup Kubernetes <strong>autoscaler</strong> with Amazon AWS as described here: <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md" rel="nofollow noreferrer">DOCS</a> but I am getting this error in my <strong>cluster-autoscaler</strong> pod logs:</p> <pre><code>E0411 09:23:25.529212 1 static_autoscaler.go:118] Failed to update node registry: RequestError: send request failed caused by: Post https://autoscaling.us-west-2a.amazonaws.com/: dial tcp: lookup autoscaling.us-west-2a.amazonaws.com on 10.96.0.10:53: no such host </code></pre> <blockquote> <p>Context:</p> </blockquote> <p>I've created AWS Autoscaling Group named <strong>KubeAutoscale</strong> from Launch Configration with my custom instance AMI which has installed Ubuntu server 16.04 LTS (HVM) and Docker with Kubernetes (just raw install). </p> <p>In AWS Autoscaling Group I've put 2 instances as minimum and maximum of 5 instances (they are in us-west-2a region) and I logged in on one of those 2 and setup Kubernetes cluster, logged in on other instance and add it to created cluster and logged again on master (first) instance run Autoscaler with configuration:</p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-addon: cluster-autoscaler.addons.k8s.io k8s-app: cluster-autoscaler name: cluster-autoscaler namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: cluster-autoscaler labels: k8s-addon: cluster-autoscaler.addons.k8s.io k8s-app: cluster-autoscaler rules: - apiGroups: [""] resources: ["events","endpoints"] verbs: ["create", "patch"] - apiGroups: [""] resources: ["pods/eviction"] verbs: ["create"] - apiGroups: [""] resources: ["pods/status"] verbs: ["update"] - apiGroups: [""] resources: ["endpoints"] resourceNames: ["cluster-autoscaler"] verbs: ["get","update"] - apiGroups: [""] resources: ["nodes"] verbs: ["watch","list","get","update"] - apiGroups: [""] resources: ["pods","services","replicationcontrollers","persistentvolumeclaims","persistentvolumes"] verbs: ["watch","list","get"] - apiGroups: ["extensions"] resources: ["replicasets","daemonsets"] verbs: ["watch","list","get"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["watch","list"] - apiGroups: ["apps"] resources: ["statefulsets"] verbs: ["watch","list","get"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["watch","list","get"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: cluster-autoscaler namespace: kube-system labels: k8s-addon: cluster-autoscaler.addons.k8s.io k8s-app: cluster-autoscaler rules: - apiGroups: [""] resources: ["configmaps"] verbs: ["create"] - apiGroups: [""] resources: ["configmaps"] resourceNames: ["cluster-autoscaler-status"] verbs: ["delete","get","update"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: cluster-autoscaler labels: k8s-addon: cluster-autoscaler.addons.k8s.io k8s-app: cluster-autoscaler roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-autoscaler subjects: - kind: ServiceAccount name: cluster-autoscaler namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: cluster-autoscaler namespace: kube-system labels: k8s-addon: cluster-autoscaler.addons.k8s.io k8s-app: cluster-autoscaler roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cluster-autoscaler subjects: - kind: ServiceAccount 
name: cluster-autoscaler namespace: kube-system --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cluster-autoscaler namespace: kube-system labels: app: cluster-autoscaler spec: replicas: 1 selector: matchLabels: app: cluster-autoscaler template: metadata: labels: app: cluster-autoscaler spec: serviceAccountName: cluster-autoscaler containers: - image: k8s.gcr.io/cluster-autoscaler:v0.6.0 name: cluster-autoscaler resources: limits: cpu: 100m memory: 300Mi requests: cpu: 100m memory: 300Mi command: - ./cluster-autoscaler - --v=4 - --stderrthreshold=info - --cloud-provider=aws - --skip-nodes-with-local-storage=false - --nodes=2:5:KubeAutoscale env: - name: AWS_REGION value: us-west-2a volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs/ca-certificates.crt readOnly: true imagePullPolicy: "Always" volumes: - name: ssl-certs hostPath: path: "/etc/ssl/certs/ca-certificates.crt" </code></pre>
<p>You have a configuration issue:</p> <pre><code> env: - name: AWS_REGION value: us-west-2a </code></pre> <p>Your AWS region is <code>us-west-2</code>, but the AZ is <code>us-west-2a</code>. That's why, when the autoscaler generates the URL of the autoscaling endpoint, the result is <code>https://autoscaling.us-west-2a.amazonaws.com/</code> instead of <code>https://autoscaling.us-west-2.amazonaws.com/</code>, which is the correct one.</p> <p>To fix it, just set <code>AWS_REGION</code> to <code>us-west-2</code> instead of <code>us-west-2a</code>.</p>
<p>I am running into issues with Kubernetes Service Discovery in Spring Boot applications. </p> <p>I should be able to discover the services whether my Spring Boot application is running inside or outside of the Kubernetes cluster. Our local development won't be on a k8s cluster.</p> <p>I am using Service Discovery via DNS. I tried using <a href="https://github.com/spring-cloud-incubator/spring-cloud-kubernetes" rel="nofollow noreferrer">spring-cloud-starter-kubernetes</a></p> <pre><code> &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-starter-kubernetes&lt;/artifactId&gt; &lt;version&gt;0.2.0.RELEASE&lt;/version&gt; &lt;/dependency&gt; </code></pre> <p>As per the documentation you should be able to autowire DiscoveryClient and be good to go:</p> <pre><code>@Autowired private DiscoveryClient discoveryClient; </code></pre> <p>DiscoveryClient is part of spring-cloud-commons. spring-cloud-starter-kubernetes doesn't have it. </p> <p>Has anyone solved a similar problem using the same library or a different one? Please share the solution.</p> <p>Thanks!</p>
<p>I have solved this issue using the Spring Cloud Kubernetes dependencies:</p> <p><code>&lt;spring.cloud.kubernetes&gt;0.2.0.RELEASE&lt;/spring.cloud.kubernetes&gt;</code></p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-kubernetes-ribbon&lt;/artifactId&gt; &lt;version&gt;${spring.cloud.kubernetes}&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-kubernetes-config&lt;/artifactId&gt; &lt;version&gt;${spring.cloud.kubernetes}&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-kubernetes-core&lt;/artifactId&gt; &lt;version&gt;${spring.cloud.kubernetes}&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-kubernetes-discovery&lt;/artifactId&gt; &lt;version&gt;${spring.cloud.kubernetes}&lt;/version&gt; &lt;/dependency&gt; </code></pre> <p>What was very important for me was the ribbon dependency, as it makes use of a load-balanced rest template in order to substitute service names with the correct pod IPs found in your Kubernetes cluster.</p> <p>I have created a git repo as part of answering a larger set of questions, but it should be more than sufficient if someone is looking for a way to implement Kubernetes service discovery in place of Eureka or Consul.</p> <p><a href="https://github.com/foundery-rmb/kubernetes-service-discovery" rel="nofollow noreferrer">https://github.com/foundery-rmb/kubernetes-service-discovery</a></p>
<p>We are migrating some Windows components (.NET 4.5) to Linux containers (.NET Core). The existing system is a file processor, which watches a shared location and processes files from there. We can't force the existing system to start dropping files at some other location, so the new containerised application has to access the same shared location. Can a Windows share be seen from Docker containers? If yes, how can I use a Kubernetes deployment file to achieve it? Please advise. Regards,</p>
<p>This does not work out of the box, but yes, you can do it. <a href="https://github.com/juliohm1978/kubernetes-cifs-volumedriver" rel="nofollow noreferrer">Here is a project</a> which allows you to do this in a proper way. After setting up the volume driver, you can use the Windows (CIFS) share as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">PersistentVolume</a> in your Kubernetes cluster.</p> <p><strong>Update</strong></p> <p>You can also expose the Windows share over NFS and mount it as an NFS volume in Kubernetes. <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">Here</a> you can find some examples of using NFS in Kubernetes; a hedged PersistentVolume sketch for that route follows below.</p>
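<p>If you go the NFS route (i.e. the Windows host, or a gateway in front of it, exposes the share over NFS), a PersistentVolume/PersistentVolumeClaim pair could look roughly like this. The server address, export path and size are placeholders, not values from your environment:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: legacy-share-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5       # NFS endpoint exposing the share (placeholder)
    path: /exports/legacy  # exported path (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-share-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Your Deployment then mounts <code>legacy-share-pvc</code> like any other volume, so the .NET Core container sees the same directory the Windows components write to.</p>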
<p>Is there a way to change the logging level on a running cluster, specifically for a single system component such as kube-proxy? </p> <p>I see some discussion related to this:<br> <a href="https://github.com/kubernetes/test-infra/pull/4311" rel="nofollow noreferrer">https://github.com/kubernetes/test-infra/pull/4311</a> and it seems like some sort of a mechanism was put in place, but it is not clear to me how to use this mechanism.</p> <p>This question came about because we are troubleshooting connections to a NodePort service (which should go through kube-proxy), and at the default level of --v=2 kube-proxy doesn't seem to log any of the connections that it proxies, so seeking to increase it. </p>
<blockquote> <p>it seems like some sort of a mechanism was put in place, but it is not clear to me how to use this mechanism</p> </blockquote> <p>I didn't see anything in that PR that would lead me to believe there is a dynamic (that is: without terminating <code>kube-proxy</code>) mechanism for altering log levels. There is no dynamic logging adjustment mechanism that I'm aware of in any of the kubernetes components.</p> <p>However, <code>kube-proxy</code> (traditionally) runs in a docker container just like any other Pod, and thus is subject to being restarted on termination. So just update its <code>--v</code> in the manifest, kill the container (the one without <code>Pod</code> in its name), and <code>kubelet</code> will start <code>kube-proxy</code> back up, now with the new <code>--v</code> level.</p> <blockquote> <p>(which should go through kube-proxy)</p> </blockquote> <p>Just for clarity, <code>kube-proxy</code> only manages the <code>iptables</code> rules in its default configuration, and so no traffic flows through it that I'm aware of. That's actually why it's safe to just restart <code>kube-proxy</code> at will.</p> <p>You can examine the rules it puts into place with the regular <code>iptables -t nat -L</code> command, and <code>kube-proxy</code> is even helpful enough to add comments to the rules, showing which kubernetes service they represent.</p>
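<p>A rough sketch of that workflow (the DaemonSet name, label and manifest path below are typical for kubeadm-style clusters but vary by installer, so treat them as assumptions):</p> <pre><code># if kube-proxy runs as a DaemonSet
kubectl -n kube-system edit daemonset kube-proxy           # raise verbosity, e.g. --v=4
kubectl -n kube-system delete pod -l k8s-app=kube-proxy    # pods are recreated with the new flag

# if it runs from a static manifest instead, edit the file and kill the container
sudo vi /etc/kubernetes/manifests/kube-proxy.manifest      # path varies by installer
docker ps | grep kube-proxy         # pick the container whose name does not contain POD
docker kill &lt;container-id&gt;          # kubelet restarts it with the new --v

# inspect the NAT rules kube-proxy programmed for NodePort services
sudo iptables -t nat -L KUBE-NODEPORTS -n
</code></pre>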
<p>I have Docker and OpenShift client installed on Ubuntu 16.04.3 LTS </p> <pre><code>[vagrant@desktop:~] $ docker --version Docker version 18.01.0-ce, build 03596f5 [vagrant@desktop:~] $ oc version oc v3.7.1+ab0f056 kubernetes v1.7.6+a08f5eeb62 features: Basic-Auth GSSAPI Kerberos SPNEGO Server https://127.0.0.1:8443 openshift v3.7.1+282e43f-42 kubernetes v1.7.6+a08f5eeb62 [vagrant@desktop:~] $ </code></pre> <p>Notice server URL <a href="https://127.0.0.1:8443" rel="nofollow noreferrer">https://127.0.0.1:8443</a>. </p> <p>I can start a cluster using <code>oc cluster up</code> </p> <pre><code>vagrant@desktop:~] $ oc cluster up --public-hostname='ocp.devops.ok' --host-data-dir='/var/lib/origin/etcd' --use-existing-config --routing-suffix='cloudapps.lab.example.com' Starting OpenShift using openshift/origin:v3.7.1 ... OpenShift server started. The server is accessible via web console at: https://ocp.devops.ok:8443 </code></pre> <p>I can access the server using <a href="https://ocp.devops.ok:8443" rel="nofollow noreferrer">https://ocp.devops.ok:8443</a> but then the OCP will redirect to <a href="https://127.0.0.1:8443" rel="nofollow noreferrer">https://127.0.0.1:8443</a>. So it redirect to kubernetes server URL I think.</p> <p>This raises the question about <code>public-hostname</code>. What does it do? It is not used by OpenShift I think because it redirects to Kubernetes server URL.</p> <p>How do I change this setting in Kubernetes?</p>
<p>I think that because <code>--public-hostname</code> does not specify the IP to be bound, and that IP is currently 127.0.0.1, some of the config is set to that value, and hence the OAuth challenge redirects you there. This may be solved in 3.10.</p> <p>See this issue described in <a href="http://github.com/openshift/origin/issues/13383" rel="nofollow noreferrer">OpenShift's Origin GitHub</a>.</p>
<p>I am using the following Helm chart: <a href="https://github.com/kubernetes/charts/tree/master/incubator/elasticsearch-curator" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/incubator/elasticsearch-curator</a> and passing the following in my values.yaml file:</p> <pre><code>config: elasticsearch: hosts: - my-es-aws-endpoint port: 443 ssl: True </code></pre> <p>In the pods logs I see the following exception:</p> <pre><code>Preparing Action ID: 1, "delete_indices" Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen chunked=chunked) File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request six.raise_from(e, None) File "&lt;string&gt;", line 2, in raise_from File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request httplib_response = conn.getresponse() File "/usr/local/lib/python3.6/http/client.py", line 1331, in getresponse response.begin() File "/usr/local/lib/python3.6/http/client.py", line 297, in begin version, status, reason = self._read_status() File "/usr/local/lib/python3.6/http/client.py", line 266, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response </code></pre> <p>It seems like it is trying to connect to HTTP, not HTTPS. I have tested the connection from my k8s cluster to es:443 and it works.</p> <p>Do you know if HTTPS is not supported or am I doing something wrong?</p> <p>...</p>
<p>It looks like I was passing the config in the wrong section and it was not picking it up properly. I passed it here and it works:</p> <pre><code> # Having config_yaml WILL override the other config config_yml: |- --- client: hosts: - my-es-aws-endpoint port: 443 use_ssl: True </code></pre>
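<p>For completeness, a hedged sketch of how the values file is applied with Helm 2 (the release name is arbitrary, and the incubator repo URL is the one that was standard at the time; it may have moved since):</p> <pre><code>helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm install --name curator -f values.yaml incubator/elasticsearch-curator

# after later edits to values.yaml
helm upgrade curator -f values.yaml incubator/elasticsearch-curator
</code></pre>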
<p>I have actively been looking for info on the internet concerning this, but it seems to be a somewhat particular use case. </p> <p>I am trying to deploy a docker container running a legacy back-end in a Kubernetes/Openshift cluster. </p> <p>The container is started using an entrypoint.sh script which will initialize dependencies that the back-end requires before booting. </p> <p>I want the back-end as PID 1 - as to capture the back-end logs with docker/openshift. </p> <p>To do this I have an exec command at the end of the entrypoint.sh script which launches my back-end and which thus replaces the entrypoint.sh process - which is assigned PID 1 by docker - with my back-end.</p> <p><strong>The problem:</strong> </p> <p>At the moment the exec is executed in the entrypoint.sh, docker stops capturing the logs, and I thus don't have any logs from my back-end process captured by docker when performing "docker logs $MY_CONTAINER_ID".</p> <p>When entering the container, I do see that everything functions properly: </p> <p>My back-end process is running as PID 1, and the process file descriptors 1/2 are correctly setup capturing STDOUT and STDERR of my back-end process.</p> <p>Does anyone know if this is a missing configuration issue? Or is docker simply designed to work like this considering that I am replacing PID 1 with exec?</p>
<p>I can't see anything wrong in what you are describing. The stdout/stderr of process ID 1 should be captured, as will any sub process if they inherit stdout/stderr of the parent process (process ID 1).</p> <p>Where you can have problems is if an application is set up to log to a normal file and doesn't use stdout/stderr. In these cases if they will only accept a file, use <code>/proc/1/fd/1</code> as the log file path. This will result in the log messages getting output through stdout of process ID 1.</p> <p>Do note that if your application uses a logging framework that wants to do its own log file rotation on the path you give it, you will need to disable that, you want it to keep using the same file path and not try and rename or truncate it. </p>
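<p>A minimal sketch of an entrypoint that follows this pattern; the back-end path, the init step and the log option are placeholders for whatever your legacy process actually expects:</p> <pre><code>#!/bin/sh
# entrypoint.sh - this script is PID 1 until the final exec
/opt/backend/prepare-dependencies.sh          # placeholder for your init work

# if the back-end insists on a log file, point that file at PID 1's stdout
BACKEND_LOG_FILE=/proc/1/fd/1                 # hypothetical option your back-end reads

# replace PID 1 with the back-end; its stdout/stderr stay attached to the container
exec /opt/backend/backend --log-file "$BACKEND_LOG_FILE"
</code></pre>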
<p>I'm having troubles with this Vagrantfile that I've defined <a href="https://github.com/pablotoledo/kubernetes-poc/blob/master/Vagrantfile" rel="nofollow noreferrer">https://github.com/pablotoledo/kubernetes-poc/blob/master/Vagrantfile</a>.</p> <p>In this Vagrant file I set:</p> <ul> <li>1 Master</li> <li>2 Workers</li> </ul> <p>And I've defined a few scripts to be runned on each VMs:</p> <ul> <li>SSH Keygen for Master -> script_generate_ssh_key</li> <li>SSH Copy Key to copy id_rsa from master to workers -> script_copy_key</li> <li>A script to install common software on each VM -> script_install_common_software this script is based on <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a></li> <li><strong>Another script to setup the Master node role -> script_setup_master &lt; This is the problematic section</strong></li> <li>The last script is used to join the workers with the master -> script_setup_worker</li> </ul> <p>When I've run "vagrant up" the command "sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.40.10" always it's hang.</p> <p>If I ssh to the master node, I can see that the kube-apiserver container its always being recreated after around 3 minutes.</p> <p>This is the output of a crashed kube-apiserver instance:</p> <pre><code>Flag --insecure-port has been deprecated, This flag will be removed in a future version. Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version. I0408 12:24:48.977898 1 server.go:135] Version: v1.10.0 I0408 12:24:48.978217 1 server.go:679] external host was not specified, using 10.0.2.15 I0408 12:24:50.350706 1 plugins.go:149] Loaded 9 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota. I0408 12:24:50.357766 1 master.go:228] Using reconciler: master-count W0408 12:24:50.456319 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources. W0408 12:24:50.472096 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0408 12:24:50.475201 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0408 12:24:50.489986 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources. 
[restful] 2018/04/08 12:24:50 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi [restful] 2018/04/08 12:24:50 log.go:33: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/ [restful] 2018/04/08 12:24:51 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi [restful] 2018/04/08 12:24:51 log.go:33: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/ I0408 12:24:55.219070 1 serve.go:96] Serving securely on [::]:6443 I0408 12:24:55.219144 1 available_controller.go:262] Starting AvailableConditionController I0408 12:24:55.219153 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0408 12:24:55.219698 1 apiservice_controller.go:90] Starting APIServiceRegistrationController I0408 12:24:55.219712 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0408 12:24:55.219755 1 crd_finalizer.go:242] Starting CRDFinalizer I0408 12:24:55.220516 1 crdregistration_controller.go:110] Starting crd-autoregister controller I0408 12:24:55.220529 1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller I0408 12:24:55.220552 1 customresource_discovery_controller.go:174] Starting DiscoveryController I0408 12:24:55.220571 1 naming_controller.go:276] Starting NamingConditionController I0408 12:24:55.227100 1 controller.go:84] Starting OpenAPI AggregationController I0408 12:25:05.259553 1 trace.go:76] Trace[439388531]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:24:55.25803138 +0000 UTC m=+6.462614551) (total time: 10.001458879s): Trace[439388531]: [10.001458879s] [10.001376262s] END I0408 12:25:05.475536 1 trace.go:76] Trace[1147394168]: "Create /api/v1/nodes" (started: 2018-04-08 12:24:55.473779876 +0000 UTC m=+6.678363122) (total time: 10.001690768s): Trace[1147394168]: [10.001690768s] [10.001489339s] END I0408 12:25:15.264150 1 trace.go:76] Trace[2095398311]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:05.262783532 +0000 UTC m=+16.467366812) (total time: 10.001282694s): Trace[2095398311]: [10.001282694s] [10.001123521s] END I0408 12:25:22.617868 1 trace.go:76] Trace[351185622]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:12.612633316 +0000 UTC m=+23.817216837) (total time: 10.005165894s): Trace[351185622]: [10.005165894s] [10.004689717s] END I0408 12:25:25.268040 1 trace.go:76] Trace[460596942]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:15.267221356 +0000 UTC m=+26.471804605) (total time: 10.000777946s): Trace[460596942]: [10.000777946s] [10.000596999s] END I0408 12:25:30.744179 1 trace.go:76] Trace[1400508077]: "Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests" (started: 2018-04-08 12:25:20.742377206 +0000 UTC m=+31.946960452) (total time: 10.001739846s): Trace[1400508077]: [10.001739846s] [10.00156572s] END I0408 12:25:35.271775 1 trace.go:76] Trace[850178247]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:25.270857617 +0000 UTC m=+36.475440866) (total time: 10.000858266s): Trace[850178247]: [10.000858266s] [10.00070839s] END I0408 12:25:39.786386 1 trace.go:76] Trace[2021645803]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:29.770900237 +0000 UTC m=+40.975483430) (total time: 10.015433752s): Trace[2021645803]: [10.015433752s] [10.015299731s] END I0408 12:25:45.285287 1 trace.go:76] Trace[2302986]: "Create 
/api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:35.276453578 +0000 UTC m=+46.481036913) (total time: 10.008728056s): Trace[2302986]: [10.008728056s] [10.008596155s] END E0408 12:25:55.242069 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes) E0408 12:25:55.279175 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints) E0408 12:25:55.279561 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets) E0408 12:25:55.280109 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas) E0408 12:25:55.280477 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io) E0408 12:25:55.280611 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io) E0408 12:25:55.281036 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.VolumeAttachment: the server was unable to return a response in the time allotted, but may still be processing the request (get volumeattachments.storage.k8s.io) E0408 12:25:55.282907 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces) E0408 12:25:55.283131 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts) E0408 12:25:55.283626 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.MutatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get mutatingwebhookconfigurations.admissionregistration.k8s.io) E0408 12:25:55.284185 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges) E0408 12:25:55.285586 1 reflector.go:205] 
k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) E0408 12:25:55.286253 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services) E0408 12:25:55.286750 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0408 12:25:55.287667 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io) E0408 12:25:55.292724 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ValidatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io) E0408 12:25:55.293137 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io) E0408 12:25:55.293191 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io) I0408 12:25:55.294035 1 trace.go:76] Trace[448038888]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:25:45.290257352 +0000 UTC m=+56.494840693) (total time: 10.00357948s): Trace[448038888]: [10.00357948s] [10.003312246s] END E0408 12:25:55.294860 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods) E0408 12:25:56.224200 1 storage_rbac.go:157] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) I0408 12:25:56.801282 1 trace.go:76] Trace[703945258]: "Create /api/v1/nodes" (started: 2018-04-08 12:25:46.799631549 +0000 UTC m=+58.004214890) (total time: 10.001618084s): Trace[703945258]: [10.001618084s] [10.001087054s] END I0408 12:26:02.808827 1 trace.go:76] Trace[1631269070]: "Create /apis/certificates.k8s.io/v1beta1/certificatesigningrequests" (started: 2018-04-08 12:25:52.784610063 +0000 UTC m=+63.989193403) (total time: 10.024138244s): Trace[1631269070]: [10.024138244s] [10.023949067s] END I0408 12:26:05.300199 1 trace.go:76] Trace[494561622]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 
12:25:55.29934912 +0000 UTC m=+66.503932586) (total time: 10.00079884s): Trace[494561622]: [10.00079884s] [10.000554488s] END I0408 12:26:06.234261 1 trace.go:76] Trace[108596673]: "Create /api/v1/namespaces" (started: 2018-04-08 12:25:56.225584357 +0000 UTC m=+67.430167698) (total time: 10.008614333s): Trace[108596673]: [10.008614333s] [10.00842738s] END E0408 12:26:06.236146 1 client_ca_hook.go:78] namespaces "kube-system" is forbidden: not yet ready to handle request E0408 12:26:07.582170 1 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller I0408 12:26:07.582234 1 apiservice_controller.go:94] Shutting down APIServiceRegistrationController E0408 12:26:07.582293 1 cache.go:35] Unable to sync caches for AvailableConditionController controller E0408 12:26:07.582358 1 controller_utils.go:1022] Unable to sync caches for crd-autoregister controller E0408 12:26:07.582384 1 customresource_discovery_controller.go:177] timed out waiting for caches to sync I0408 12:26:07.582408 1 naming_controller.go:280] Shutting down NamingConditionController I0408 12:26:07.582438 1 crd_finalizer.go:246] Shutting down CRDFinalizer I0408 12:26:07.582177 1 controller.go:90] Shutting down OpenAPI AggregationController I0408 12:26:07.582559 1 serve.go:136] Stopped listening on [::]:6443 I0408 12:26:07.584582 1 available_controller.go:266] Shutting down AvailableConditionController I0408 12:26:07.585842 1 crdregistration_controller.go:115] Shutting down crd-autoregister controller I0408 12:26:07.587352 1 customresource_discovery_controller.go:178] Shutting down DiscoveryController I0408 12:26:13.822786 1 trace.go:76] Trace[1412481370]: "Create /api/v1/nodes" (started: 2018-04-08 12:26:03.817519799 +0000 UTC m=+75.022103139) (total time: 10.005184469s): Trace[1412481370]: [10.005184469s] [10.004863636s] END I0408 12:26:15.304918 1 trace.go:76] Trace[38092900]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-04-08 12:26:05.303564401 +0000 UTC m=+76.508147685) (total time: 10.001274076s): Trace[38092900]: [10.001274076s] [10.001122791s] END </code></pre> <p>Could anyone help me, please?</p>
<p>Usually, when you use kubeadm to create a kubernetes cluster, you follow a typical sequence: </p> <ol> <li>prepare the VMs (<em>configure CPU, RAM, network, drives, vagrant boxes</em>, etc.) </li> <li>add <strong>gpg-keys</strong> and <strong>repositories</strong> </li> <li>configure <strong>sysctl</strong> (<em>bridge-nf-call-ip6tables, bridge-nf-call-iptables</em>) </li> <li>install packages, depending on your system <ul> <li>ubuntu (<em>ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl</em>) </li> <li>centos (<em>go git wget docker kubelet kubectl kubeadm</em>) (<em>crictl</em> on the master) </li> </ul></li> <li>run <code>kubeadm init</code> </li> <li>configure <code>kubectl</code> (create ~/.kube/config) </li> <li>configure the kubernetes network subsystem (<em>calico, weave</em>, etc.) </li> <li>join the workers to the cluster </li> </ol> <p>At this point, you usually have a ready-to-use kubernetes cluster (a condensed sketch of the master-side commands is included at the end of this answer). </p> <p>After checking your Vagrantfile, I would suggest that you: </p> <ol> <li>change the kubernetes baseurl in the repo config step: </li> </ol> <blockquote> <p><code>baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\\$basearch</code></p> </blockquote> <ol start="2"> <li>move these lines to $script_install_common_software, </li> </ol> <blockquote> <p>sudo bash -c "echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf"<br> sudo bash -c "echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf"<br> sudo sysctl --system</p> </blockquote> <ol start="3"> <li>copy the binary to /usr/bin after installing crictl (you need this binary only on the master) </li> </ol> <blockquote> <p>sudo cp ~/go/bin/crictl /usr/bin</p> </blockquote> <ol start="4"> <li>put these lines <strong>before</strong> deploying calico with kubectl </li> </ol> <blockquote> <p>mkdir -p $HOME/.kube<br> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config<br> sudo chown $(id -u):$(id -g) $HOME/.kube/config</p> </blockquote> <ol start="5"> <li>uncomment the worker join commands </li> </ol> <p>This should be enough to bring your cluster to the "Ready" state. </p>
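<p>To tie the sequence together, here is a condensed, hedged sketch of the master-side commands. The pod CIDR and the advertise address come from your Vagrantfile/question; the Calico manifest URL is a placeholder for whatever version you pin:</p> <pre><code># sysctl prerequisites (step 3)
sudo bash -c "echo net.bridge.bridge-nf-call-ip6tables = 1 &gt;&gt; /etc/sysctl.conf"
sudo bash -c "echo net.bridge.bridge-nf-call-iptables = 1 &gt;&gt; /etc/sysctl.conf"
sudo sysctl --system

# initialise the control plane on the master's private interface (step 5)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 \
    --apiserver-advertise-address=192.168.40.10

# kubectl config for the current user (step 6)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# pod network, e.g. Calico (step 7)
kubectl apply -f &lt;calico-manifest-url-for-your-version&gt;
</code></pre>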
<p>I managed to create and deploy a k8s cluster with minikube, running 4 replicas of a simple hello-world node.js app, using the following configuration.</p> <p>Dockerfile for the app:</p> <pre><code>FROM ubuntu:latest RUN apt-get update RUN apt-get -qq update RUN apt-get install -y nodejs npm # TODO could uninstall some build dependencies # debian installs `node` as `nodejs` RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10 COPY package.json package.json RUN npm install COPY . . CMD ["node", "app.js"] </code></pre> <p>k8s Deployment yml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: node-deployment labels: app: node-app spec: replicas: 4 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: node-app image: my-repo/ubuntu-node:sectry ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: node-service labels: app: node-app spec: type: NodePort ports: - port: 3000 protocol: TCP selector: app: node-app </code></pre> <p>My question is how can add an nginx container for it? I know k8s cluster already load-balancing, but I really wish to use NGINX features.</p>
<p>I would add a third option: use an Ingress Controller, e.g. <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress</a>.</p> <p>It gives you the NGINX features you are looking for while building on native Kubernetes resources; a minimal Ingress sketch is shown below. </p>
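<p>A minimal Ingress sketch for the Service in your question (this assumes an NGINX Ingress Controller is already installed in the cluster; the hostname is a placeholder, while <code>node-service</code> and port 3000 come from your manifest):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: node-app.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: node-service
              servicePort: 3000
</code></pre> <p>The controller then terminates HTTP for you, and NGINX features (rewrites, TLS, rate limiting, etc.) are exposed through annotations rather than a hand-written nginx.conf.</p>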
<p>I installed kubernetes using these commands on centos7 </p> <pre><code>cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF setenforce 0 yum install -y kubelet kubeadm kubectl </code></pre> <p>This will install the latest version, so i'm looking for a method to install a specific version. for example 1.8.1</p> <p>Thank you</p>
<p>To install a specific version of the package it is enough to specify it in the <code>yum install</code> command:</p> <pre><code>yum install -y kubelet-&lt;version&gt; kubectl-&lt;version&gt; kubeadm-&lt;version&gt; </code></pre> <p>But in your particular case, when you want to install <code>kubernetes</code> version <code>1.8.1</code>, the <code>kubernetes-cni</code> package must be version <code>0.5.1</code> to satisfy the dependency requirements:</p> <pre><code>--&gt; Finished Dependency Resolution Error: Package: kubelet-1.8.1-1.x86_64 (kubernetes) Requires: kubernetes-cni = 0.5.1 Available: kubernetes-cni-0.3.0.1-0.07a8a2.x86_64 (kubernetes) kubernetes-cni = 0.3.0.1-0.07a8a2 Available: kubernetes-cni-0.5.1-0.x86_64 (kubernetes) kubernetes-cni = 0.5.1-0 Available: kubernetes-cni-0.5.1-1.x86_64 (kubernetes) kubernetes-cni = 0.5.1-1 Installing: kubernetes-cni-0.6.0-0.x86_64 (kubernetes) kubernetes-cni = 0.6.0-0 </code></pre> <p>So, the final command is:</p> <pre><code>yum install -y kubelet-1.8.1 kubectl-1.8.1 kubeadm-1.8.1 kubernetes-cni-0.5.1 </code></pre>
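<p>If you are unsure which exact builds are published in the repo, or want to keep a later <code>yum update</code> from moving the packages, something along these lines can help (the lock step assumes the <code>yum-plugin-versionlock</code> package):</p> <pre><code># list every version available in the repo
yum --showduplicates list kubelet kubeadm kubectl

# optionally pin the installed versions
yum install -y yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl kubernetes-cni
</code></pre>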
<p>I managed to create and deploy a k8s cluster with minikube, running 4 replicas of a simple hello-world node.js app, using the following configuration.</p> <p>Dockerfile for the app:</p> <pre><code>FROM ubuntu:latest RUN apt-get update RUN apt-get -qq update RUN apt-get install -y nodejs npm # TODO could uninstall some build dependencies # debian installs `node` as `nodejs` RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10 COPY package.json package.json RUN npm install COPY . . CMD ["node", "app.js"] </code></pre> <p>k8s Deployment yml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: node-deployment labels: app: node-app spec: replicas: 4 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: node-app image: my-repo/ubuntu-node:sectry ports: - containerPort: 3000 --- apiVersion: v1 kind: Service metadata: name: node-service labels: app: node-app spec: type: NodePort ports: - port: 3000 protocol: TCP selector: app: node-app </code></pre> <p>My question is how can add an nginx container for it? I know k8s cluster already load-balancing, but I really wish to use NGINX features.</p>
<p>I guess you want to use nginx as a reverse proxy to your Node.js app. If it is the case, you can use one of the following ways to achieve it. </p> <p><strong>Option 1</strong></p> <ol> <li><p>Build nginx and your node.js app into one Docker image. In this image, configure nginx as a reverse proxy and forward the request to your node.js app. For example, the following nginx configure forwards the request to port 3000 in the same container. </p> <pre><code>server { listen 80; server_name localhost; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:3000; } } </code></pre></li> <li><p>You can then deploy this image to k8s cluster, and create a service for it. </p></li> </ol> <p><strong>Option 2</strong></p> <ol> <li>Create 2 docker images: 1 for nginx and 1 for your node.js.</li> <li>Deploy both of them to k8s and create a service for each of them. Use <code>ClusterIP</code> as the service type for node.js image, and <code>LoadBalancer</code> for nginx image. </li> <li>Configure nginx as the reverse proxy, and forward the request to the corresponding cluster ip of the service for node.js image. </li> </ol> <p>To test it on minikube, Option 1 is easier. Option 2 is recommended for a production k8s cluster. </p>
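<p>For Option 2, a hedged sketch of supplying the nginx configuration as a ConfigMap; note that instead of hard-coding the cluster IP you can usually use the Service's DNS name (<code>node-service</code>, matching the Service in your question). Mount this ConfigMap at <code>/etc/nginx/conf.d/</code> in the nginx container:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://node-service:3000;   # resolved by cluster DNS
      }
    }
</code></pre>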
<p>I've just created a new cluster using Google Container Engine running Kubernetes 1.7.5, with the new RBAC permissions enabled. I've run into a problem allocating permissions for some of my services which lead me to the following:</p> <p>The <a href="https://cloud.google.com/container-engine/docs/role-based-access-control" rel="noreferrer">docs</a> for using container engine with RBAC state that the user must be granted the ability to create authorization roles by running the following command:</p> <pre><code>kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=&lt;user-name&gt;] </code></pre> <p>However, this fails due to lack of permissions (which I would assume are the very same permissions which we are attempting to grant by running the above command).</p> <pre><code>Error from server (Forbidden): User "&lt;user-name&gt;" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope.: "Required \"container.clusterRoleBindings.create\" permission." (post clusterrolebindings.rbac.authorization.k8s.io) </code></pre> <p>Any help would be much appreciated as this is blocking me from creating the permissions needed by my cluster services.</p>
<p>Janos's answer will work for GKE clusters that have been created with a password, but I'd recommend avoiding using that password wherever possible (or creating your GKE clusters without a password).</p> <p>Using IAM: To create that <code>ClusterRoleBinding</code>, the caller must have the <code>container.clusterRoleBindings.create</code> permission. Only the <code>OWNER</code> and <code>Kubernetes Engine Admin</code> IAM Roles contain that permission (because it allows modification of access control on your GKE clusters).</p> <p>So, to allow <code>[email protected]</code> to run that command, they must be granted one of those roles. E.g.:</p> <pre><code>gcloud projects add-iam-policy-binding $PROJECT \ --member=user:[email protected] \ --role=roles/container.admin </code></pre>
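<p>Once the IAM role is in place, re-fetch credentials and create the binding for the same account; <code>$CLUSTER</code> and <code>$ZONE</code> are placeholders here:</p> <pre><code>gcloud container clusters get-credentials $CLUSTER --zone $ZONE

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)
</code></pre> <p>Using <code>gcloud config get-value account</code> avoids typos in the email address, which has to match exactly what GKE sees for the authenticated user.</p>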
<p>I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to be working fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the deployment goes into Crashloop Backoff because uWSGI complains: </p> <p><code>uwsgi: unrecognized option '--http 127.0.0.1:8080'</code>.</p> <p>The image definitely has the http option because: <code> a. uWSGI was installed via pip3 which includes the http plugin. b. When I run the deployment with --list-plugins, the http plugin is listed. c. The http option is recognized correctly when run locally. </code></p> <p>I am running the Docker image locally with:</p> <p><code>$: docker run &lt;image_name&gt; uwsgi --http 127.0.0.1:8080</code> </p> <p>The container Kubernetes YAML config is:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: launch-service-example name: launch-service-example spec: replicas: 1 template: metadata: labels: app: launch-service-example spec: containers: - name: launch-service-example image: &lt;image_name&gt; command: ["uwsgi"] args: - "--http 127.0.0.1:8080" - "--module code.experimental.launch_service_example.__main__" - "--callable APP" - "--master" - "--processes=2" - "--enable-threads" - "--pyargv --test1=3--test2=abc--test3=true" ports: - containerPort: 8080 --- kind: Service apiVersion: v1 metadata: name: launch-service-example-service spec: selector: app: launch-service-example ports: - protocol: TCP port: 8080 targetPort: 8080 </code></pre> <p>The container is exactly the same which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing all the args via a list of commands with no args which leads to the same result. Any help would be greatly appreciated.</p>
<p>This happens because of the difference in how arguments are processed on the command line versus in the container spec: the shell splits <code>--http 127.0.0.1:8080</code> into two tokens, while each element of the YAML <code>args</code> list is passed to uWSGI as a single argument, so it receives the literal string <code>--http 127.0.0.1:8080</code> and does not recognize it.</p> <p>To fix it, just split your args like this:</p> <pre><code>args:
  - "--http"
  - "127.0.0.1:8080"
  - "--module code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"
</code></pre>
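<p>Equivalently (a hedged variant of the same fix), you can drop <code>args</code> and put the whole invocation into <code>command</code>, one token per element, which makes the tokenization explicit:</p> <pre><code>command:
  - "uwsgi"
  - "--http"
  - "127.0.0.1:8080"
  - "--module"
  - "code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"
</code></pre>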
<p>I installed the puppet kubernetes module to manage pods of my kubernetes cluster with <a href="https://github.com/garethr/garethr-kubernetes/blob/master/README.md" rel="nofollow noreferrer">https://github.com/garethr/garethr-kubernetes/blob/master/README.md</a></p> <p>I am not able to get any pod information back when I run</p> <p>puppet resource kubernetes_pod</p> <p>It just returns an empty line.</p> <p>I am using a minikube k8s cluster to test the puppet module against.</p> <p><code>cat /etc/puppetlabs/puppet/kubernetes.conf</code></p> <p><code>apiVersion: v1 clusters: - cluster: certificate-authority: /root/.minikube/ca.crt server: https://&lt;ip address&gt;:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config preferences: {} users: - name: minikube user: client-certificate: /root/.minikube/apiserver.crt client-key: /root/.minikube/apiserver.key</code></p> <p>I am able to use curl with the certs to talk to the K8s REST API</p> <p><code>curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://&lt;minikube ip&gt;:844/api/v1/pods/ </code></p>
<p>It looks like the <a href="https://github.com/garethr/garethr-kubernetes/" rel="nofollow noreferrer">garethr-kubernetes</a> package hasn't been updated since August 2017, so you probably need a version of the <a href="https://rubygems.org/gems/kubeclient" rel="nofollow noreferrer">kubeclient gem</a> at least that old. It seems kubeclient 3.0 came out quite recently, so you might want to try the latest version from the 2.5 major (currently 2.5.2).</p>
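<p>A hedged way to pin that on the node running Puppet; the gem binary path assumes a standard AIO puppet-agent layout under <code>/opt/puppetlabs</code>, so adjust it if your Ruby lives elsewhere:</p> <pre><code># remove a too-new kubeclient if one is already installed (optional)
sudo /opt/puppetlabs/puppet/bin/gem uninstall kubeclient

# install a version old enough for garethr-kubernetes
sudo /opt/puppetlabs/puppet/bin/gem install kubeclient --version 2.5.2

# then re-try the query
puppet resource kubernetes_pod
</code></pre>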
<p>I am getting issues when trying to getting the information about the nodes created using AKS(Azure Connected Service) for Kubernetes after the execution of creating the clusters and getting the credentials.</p> <p>I am using the azure-cli on ubuntu linux machine.</p> <p>Followed the Url for creation of clusters: <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough</a></p> <p>I get the following error when using the command <code>kubectl get nodes</code> after execution of connecting to cluster using </p> <pre><code>az aks get-credentials --resource-group &lt;resource_group_name&gt; --name &lt;cluster_name&gt; </code></pre> <p>Error:</p> <pre><code> kubectl get nodes </code></pre> <blockquote> <p><strong>Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)</strong></p> </blockquote> <p>I do get the same error when i use :</p> <pre><code>kubectl get pods -n kube-system -o=wide </code></pre> <p>When i connect back as another user by the following commands i.e., </p> <pre><code> mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <p>I will be able to retrieve the nodes i.e..,</p> <pre><code> kubectl get nodes NAME STATUS ROLES AGE VERSION &lt;host-name&gt; Ready master 20m v1.10.0 ~$ kubectl get pods -n kube-system -o=wide NAME READY STATUS RESTARTS AGE etcd-actaz-prod-nb1 1/1 Running 0 kube-apiserver-actaz-prod-nb1 1/1 Running 0 kube-controller-manager-actaz-prod-nb1 1/1 Running 0 kube-dns-86f4d74b45-4qshc 3/3 Running 0 kube-flannel-ds-bld76 1/1 Running 0 kube-proxy-5s65r 1/1 Running 0 kube-scheduler-actaz-prod-nb1 1/1 Running 0 </code></pre> <p>But this is actually overwriting newly clustered information from file $HOME/.kube/config</p> <p>Am i missing something when we connect to AKS-cluster get-credentials command-let that's leading me to the error </p> <p><strong><code>*Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)*</code></strong></p>
<p>After you run</p> <pre><code>az aks get-credentials -n cluster-name -g resource-group </code></pre> <p>it should have merged the AKS cluster into your local configuration:</p> <pre><code>/home/user-name/.kube/config </code></pre> <p>Can you check your config</p> <pre><code>kubectl config view </code></pre> <p>and check if it is pointing to the right cluster. </p>
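<p>A short sketch of checking and switching contexts, so the AKS entry (rather than the previously copied admin.conf) is the active one; the context name is whatever <code>az aks get-credentials</code> merged in, typically the cluster name:</p> <pre><code>kubectl config get-contexts                  # list everything merged into ~/.kube/config
kubectl config use-context &lt;cluster_name&gt;    # switch to the AKS context
kubectl config current-context               # confirm
kubectl get nodes
</code></pre>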
<p>My company has a small pipeline library that we implicitly load for every build. Is there a way to overload the <code>node {</code> block of every build transparently?</p> <p>My specific case is that I'm provisioning kubernetes slaves with the kubernetes plugin, and I want to provide a default YAML template, while allowing users to pick another template or override specific values. Eg:</p> <pre><code>node { // Gets you a Pod with a DinD engine with a low CPU/Mem request/limit } </code></pre> <p>Optionally overridden by name:</p> <pre><code>node('2-core') { // Gets you a Pod with a DinD engine with 2 CPU/ more Mem request/limit } </code></pre> <p>Or overridden with a template:</p> <pre><code>import com.foo.utils.PodTemplates slaveTemplates = new PodTemplates() slaveTemplates.bigPod { node { // Big node } } </code></pre> <p>Or:</p> <pre><code>def label = "mypod-${UUID.randomUUID().toString()}" podTemplate(label: label, yaml: """ apiVersion: v1 kind: Pod metadata: labels: some-label: some-label-value spec: containers: - name: redis image: redis """ ) { node (label) { // Same small pod as before PLUS a redis container } } </code></pre> <p>This seems trickiest, since you want the values of the parent to override the values of the child.</p>
<p>You can do this, but, in my opinion, it will lead to confusing behavior and possibly strange error cases.</p> <p>For example:</p> <p><code>echo.groovy</code></p> <pre><code>def call(String string) {
    steps.echo "Calling step echo: $string"
}
</code></pre> <p><code>Jenkinsfile</code></p> <pre><code>echo 'hello'
</code></pre> <p>Output:</p> <pre><code>Calling step echo: hello
</code></pre> <ul> <li>There is a <a href="http://unethicalblogger.com/2017/08/03/overriding-builtin-steps-pipeline.html" rel="nofollow noreferrer">blog post here</a> that demonstrates this a little more in depth</li> <li>Paid support for some pipeline restriction tools is offered by <a href="https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#pipeline-custom-factories" rel="nofollow noreferrer">CloudBees</a> and might solve your use case</li> </ul> <p>The heaviest way to accomplish this is, of course, to write a plugin.</p>
<h1>Context</h1> <p>I am deploying a set of services that are containerised using Docker into AWS. No matter which deployment solution is chosen (e.g. raw EC2/ECS/Elastic Beanstalk/Fargate) we will face the issue of "service discovery".</p> <p>To name just a few of the options for service discovery that I've considered:</p> <ul> <li>AWS Route 53 Service Registry</li> <li>Kubernetes</li> <li>Hashicorp Consul</li> <li>Spring Cloud Netflix Eureka </li> </ul> <h1>Specifics Of My Stack</h1> <p>I am developing Java Spring Boot applications using Spring Cloud with the target deployment environment being AWS.</p> <p>Given that my stack is Spring based, spring cloud eureka made sense to me while developing locally. It was easy to set up a single node, integrates well with the stack and ecosystem of choice and required very little set up.</p> <p>Locally, we are using docker compose (not swarm) to deploy services - one of the containers deployed is a single node Eureka service discovery server.</p> <p>However, when we progress outside of local development and into staging or production environment we are considering options like Kubernetes.</p> <h1>My Own Assessment Of Pros/Cons</h1> <h2>AWS Route 53 Service Registry</h2> <p>Requires us to couple code specifically to AWS services. Not a problem per se, we are quite tied in anyway on other parts of the stack (SNS/SQS).</p> <p>Makes running the stack locally slightly more difficult as it relies on Route 53, I suppose we could open up a certain hosted zone for local development.</p> <p>AWS native, no managing service registries or extra "moving parts".</p> <h2>Spring Cloud Eureka</h2> <p>Downside is that thus requires us to deploy and manage a high availability service registry cluster and requires more resources. Another "moving part" to manage. </p> <p>Advantages are that it fits into our stack well (spring ecosystem, spring boot, spring cloud, feign and zuul work well with this). Also can be run locally trivially.</p> <p>I presume we need to configure the networks and registry zone to ensure that that clients publish their host address rather and docker container internal IP address. e.g. if service A is on host A and wants to talk to service B on host B, service B needs to advertise its EC2 address rather than some internal docker IP.</p> <h1>Questions</h1> <p>If we use Kubernetes for orchestration, are there any disadvantages to using something like Spring Cloud Eureka over the built in service discovery options described here <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services</a></p> <p>Given Kube provides this, it seems suboptimal to then use eureka deployed using kube to perform discovery. I presume kube can make some optimisations that impact avaialbility and stability that might nit be possible using eureka. e.g kube would know when deploying a new service - eureka will have to rely on heartbeats/health checks and depending on how that is configured (e.g. frequency) this could result in stale records whereas i presume kube might not suffer from this for planned service shutdown/restarts. I guess it still does for unplanned failures such as a host failure or network partition.</p> <p>Does anyone have any advice on this, do people use services like Kubernetes but use other mechanisms for service discovery rather than those provided by kube. 
Is there a good reason to do one or the other?</p> <h2>Possible Challenges I Anticipate</h2> <p>We could replace Eureka, but relying on Kube to perform discovery will mean that we need to run Kube locally to deploy, whereas currently we have a simple, tiny docker-compose file. Also, I'll have to look at how easy it'll be to ensure that Ribbon, Zuul and Feign play nicely with this.</p> <p>Currently we have Ribbon configured with a Eureka client so that service A can talk to service B simply as "service-b", for example, and have Ribbon resolve a healthy host via a Eureka client. I guess we can configure Ribbon to not use Eureka and use an external Kube service name which will be resolved by Kube DNS at runtime...</p> <h3>Final Note</h3> <p>Thanks in advance for any contribution or advice. I know this might elicit a primarily opinion-focused response, but I am hoping someone can provide objective guidance on when one solution might be preferable to another.</p>
<p>Service discovery is something you get out of the box with Kubernetes. Adding another external registry to your platform means one more application to maintain and deploy, and one more potential point of failure. So I would stick with the service discovery provided by Kubernetes.</p>
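<p>As a concrete illustration of what you get for free: once a Service exists, every Pod can reach it through cluster DNS by name, so the registry lookup step disappears. The names below are placeholders:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: service-b
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>Service A then simply calls <code>http://service-b</code> (or <code>http://service-b.&lt;namespace&gt;.svc.cluster.local</code> across namespaces), and DNS plus kube-proxy take care of routing the request to a ready Pod, which is the piece Ribbon and Eureka were doing for you before.</p>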
<p>I have a Kubernetes PersistentVolume that mounts a directory using HostPath and it is cumbersome to boot up a Pod and exec onto it to view or change the data in that volume. Is it possible to locate the data in this volume on my local file system instead?</p>
<p>Docker stores these volumes in <code>/Users/username/.docker/Volumes/your-pv-name/pvc-SOME-UUID</code>.</p> <p>You can actually find the exact location by inspecting the PersistentVolume directly:</p> <pre><code>$ kubectl get pv -o=jsonpath='{.spec.hostPath.path}' some-persistent-volume-claim /Users/youruser/.docker/Volumes/your-pv-name/pvc-4eafca67-3e94-11e8-9f11-025000000001 </code></pre>
<p>HPA status displays <code>132500m / 500</code> in a situation when the actual metric value is lower then <code>100/500</code> (according to the Prometheus). </p> <pre><code>$ kubectl get hpa -n frontend --context testing NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE frontend Deployment/streaming 50237440 / 629145600, 132500m / 500 2 5 2 4d </code></pre> <p>HPA manifest is:</p> <pre><code>--- apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: frontend namespace: streaming spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: streaming minReplicas: 2 maxReplicas: 5 metrics: - type: Pods pods: metricName: redis_memory_used_rss_bytes targetAverageValue: 629145600 - type: Pods pods: metricName: redis_db_keys targetAverageValue: 500 </code></pre> <p>It should print normal results, like:</p> <pre><code>$ kubectl get hpa -n streaming --context streaming-eu NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE frontend Deployment/streaming 50237440 / 629145600, 87 / 500 2 5 2 4d </code></pre> <p>The problem is in that <code>132500m</code> value, which is wrong (Prometheus query reports a normal value). And as <code>HPA</code> didn't scale up on that metric, so it saw it's value somewhat different, I suppose.</p> <p>Use <a href="https://github.com/oliver006/redis_exporter" rel="nofollow noreferrer">oliver006/redis_exporter</a> and it's metrics as a custom <code>Pod</code> metrics with HPA to reproduce this issue.</p> <p><em>Kubernetes version</em>:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}` Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.4-gke.1", GitCommit:"10e47a740d0036a4964280bd663c8500da58e3aa", GitTreeState:"clean", BuildDate:"2018-03-13T18:00:36Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><em>Cloud provider</em>:</p> <pre><code>GKE 1.9.4 </code></pre>
<p>I think that is a metric conversion problem.</p> <p>Here is a good <a href="https://github.com/kubernetes/kubernetes/issues/57365#issuecomment-359467755" rel="nofollow noreferrer">comment</a> from the contributor on the related issue, but it's about the <code>http_requests</code> metric:</p> <blockquote> <p>if you look at the documentation for the Prometheus adapter, you'll see that all cumulative (counter) metrics are converted to rate metrics, since the HPA's algorithm in fundamentally incompatible with scaling on cumulative metrics directly (scaling on cumulative metrics directly doesn't make much sense in general).</p> <p>In your case, your http_requests_total is being converted into http_requests, so it will always show up as milli-requests from the metrics API when using the Prometheus adapter.</p> </blockquote> <p>So, in your case, the metrics API is returning something like 132500 milli-units (<code>132500m</code>). Just divide the value by 1000 and you will get the actual average value.</p>
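<p>If you want to see exactly the value the HPA receives (i.e. already converted by the adapter), you can query the custom metrics API directly; this assumes you are running the Prometheus adapter (or a similar component) that serves <code>custom.metrics.k8s.io</code>, and the namespace, resource and metric name mirror your manifest:</p> <pre><code>kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/streaming/pods/*/redis_db_keys"
</code></pre> <p>Comparing that output with the raw Prometheus query makes the milli-unit conversion obvious.</p>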