<p>We have an application deployed on GKE that would benefit from having fast temporary storage on disk.</p> <p>The GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">local SSD</a> feature is almost perfect, however we have multiple pod replicas and would ideally like to support multiple pods on the same node. Mounting the local SSD using <code>hostPath</code> makes that difficult.</p> <p><a href="https://stackoverflow.com/questions/38720828/mount-local-ssd-drive-in-container">This 2016 SO question</a> mentions the idea of mounting <code>emptyDir</code>s on the local SSD which would be perfect, but I understand still isn't an option.</p> <p>There is a <a href="https://groups.google.com/forum/#!topic/kubernetes-users/IVg6QasyxV0" rel="nofollow noreferrer">2017 mailing list thread</a> with the same idea, but the answer was still not positive.</p> <p>The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">GCP docs for local SSDs</a> were recently updated to describe using them via the <code>PersistentVolume</code> abstraction, which sounds sort of promising. Could I use that to achieve what I'm after?</p> <p>The examples seem to show mounting the full local SSD as a <code>PersistentVolume</code>, when my preference is to use an isolated part of it for each pod. We also don't need the data to be persistent - once the pod is deleted we'd be happy for the data to be deleted as well.</p>
<p>Kubernetes 1.11 added an alpha feature called <a href="https://github.com/kubernetes/kubernetes/issues/48677" rel="nofollow noreferrer">Downward API support in volume subPath</a>, which allows volumeMount subpaths to be set using the downward API.</p> <p>I tested this by creating a GKE 1.11 alpha cluster:</p> <pre><code>gcloud container clusters create jh-test --enable-kubernetes-alpha --zone=asia-southeast1-a --cluster-version=1.11.3-gke.18 --local-ssd-count=1 --machine-type=n1-standard-2 --num-nodes=2 --image-type=cos --disk-type=pd-ssd --disk-size=20Gi --no-enable-basic-auth --no-issue-client-certificate --no-enable-autoupgrade --no-enable-autorepair </code></pre> <p>I then created a 2-replica deployment with the following config:</p> <pre><code> env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: scratch-space mountPath: /tmp/scratch subPath: $(POD_NAME) volumes: - name: scratch-space hostPath: path: "/mnt/disks/ssd0" </code></pre> <p>If I <code>kubectl exec</code>'d into each pod, I had a <code>/tmp/scratch</code> directory that was isolated and very performant.</p> <p>If I SSHd into the host, then I could see a directory for each pod:</p> <pre><code>$ ls -l /mnt/disks/ssd0/ drwx--x--x 14 root root 4096 Dec 1 01:49 foo-6dc57cb589-nwbjw drwx--x--x 14 root root 4096 Dec 1 01:50 foo-857656f4-dzzzl </code></pre> <p>I also tried applying the deployment to a non-alpha GKE 1.11 cluster, but the SSD content ended up looking like this:</p> <pre><code>$ ls -l /mnt/disks/ssd0/ drwxr-xr-x 2 root root 4096 Dec 1 04:51 '$(POD_NAME)' </code></pre> <p>Unfortunately it's not realistic to run our workload on an alpha cluster, so this isn't a pragmatic solution for us yet. We'll have to wait for the feature to reach beta and become available on standard GKE clusters. It does seem to be <a href="https://github.com/kubernetes/kubernetes/issues/64604" rel="nofollow noreferrer">slowly progressing</a>, although the API <a href="https://github.com/kubernetes/community/pull/2871" rel="nofollow noreferrer">will probably change slightly</a>.</p> <p>For kubernetes 1.14, the syntax for <code>volumeMounts</code> has changed to use a new <code>subPathExpr</code> field. The feature remains alpha-only:</p> <pre><code> env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name volumeMounts: - name: scratch-space mountPath: /tmp/scratch subPathExpr: $(POD_NAME) volumes: - name: scratch-space hostPath: path: "/mnt/disks/ssd0" </code></pre>
<p>As <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">direct mount</a> or <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer">persistent volume claim</a> the Azure docs show how to mount an Azure Files storage account to a Kubernetes pod. This mounts the entire storage as the mounted path. How do I instead mount a folder within the Azure Files storage to Kubernetes?</p> <p>On Azure Files, I have the following:</p> <pre><code>AzureFiles |- folder1 |- file1 |- folder2 |- file2 </code></pre> <p>When I mount the Azure Files storage account to Kubernetes (to <code>/mnt/azure</code>) I see this:</p> <pre><code>/mnt |- azure |- folder1 |- file1 |- folder2 |- file2 </code></pre> <p>Instead I'd like to see this when I mount Azure Files' path <code>folder1</code>:</p> <pre><code>/mnt |- azure |- file1 </code></pre> <p>How do I change my <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume#mount-the-file-share-as-a-volume" rel="nofollow noreferrer">Pod definition</a> to specify this path:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - // ... snip ... volumeMounts: - name: azure mountPath: /mnt/azure volumes: - name: azure azureFile: secretName: azure-secret shareName: aksshare readOnly: false // TODO: how to specify path in aksshare </code></pre>
<p><strong>Edit</strong></p> <p>After searching for several days, I figured out how to mount a sub-folder of the Azure File Share into the AKS pod. You can set the yaml file like this:</p> <pre><code>volumes:
- name: azure
  azureFile:
    secretName: azure-secret
    shareName: share/subfolder
    readOnly: false
</code></pre> <p>Just set the share name to include the directory; take care not to append a trailing <code>/</code>. The screenshot of the result is here:</p> <p><a href="https://i.stack.imgur.com/7aaoO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7aaoO.png" alt="enter image description here"></a></p> <p>For more details, see <a href="https://learn.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names" rel="noreferrer">Naming and Referencing Shares, Directories, Files, and Metadata</a>.</p>
<p>I want to deploy metallb using terraform. metallb configuration is as follows.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 172.16.99.1-172.16.99.255 </code></pre> <p>Is it possible deploy such --from-file configuration using kubernetes_config_map resource?</p>
<p>You can use the <a href="https://www.terraform.io/docs/configuration/interpolation.html#file-path-" rel="nofollow noreferrer">file() interpolation function</a> to read the file contents. That might look like:</p> <pre><code>resource "kubernetes_config_map" "config" {
  metadata {
    namespace = "metallb-system"
    name      = "config"
  }

  data {
    config = "${file("${path.module}/config.yml")}"
  }
}
</code></pre> <p>Note that the path passed to <code>file()</code> has to be quoted, and the namespace should match the one in your manifest (<code>metallb-system</code>). Unlike <code>kubectl create configmap --from-file</code>, you do have to specify the filename twice: once as the data key and once in the <code>file()</code> call.</p>
<p>I've dockerized a python project that requires the use of several CSVs (~2gb). In order to keep image size down I didn't include the CSVs in the build, instead opting to give the running container the data from a directory outside the container through a volume. Locally, when running through docker, I can just do</p> <pre><code>docker run -v ~/local/path/:/container/path my-image:latest </code></pre> <p>This works, but I'm not sure how to go about doing this in Kubernetes. I've been reading the documentation and am confused by the number of volume types, where the actual CSVs should be stored, etc.</p> <p>Based on the information about the project that I've provided, is there an obvious solution?</p>
<p>If you'd like to replicate that exact behavior from Docker the most common way to do it is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a>. Something like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: my-image:latest name: my-container volumeMounts: - mountPath: /container/path name: test-volume volumes: - name: test-volume hostPath: path: /usr/local/path type: Directory </code></pre>
<p>I have set up a Node app on Kubernetes. I'm running a single replica and I want 0 down-time when the image is updated. I update my Pod using <code>set Image</code> on Kubernetes.</p> <pre><code>'set', 'image', 'deployment/dev-web' </code></pre> <p>Here's my YAML file </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "2" generation: 2 labels: io.kompose.service: dev-web name: dev-web namespace: default spec: replicas: 1 selector: matchLabels: io.kompose.service: dev-web strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: io.kompose.service: dev-web spec: containers: - env: image: gcr.io/my-project-link/my-image-link imagePullPolicy: Always name: dev-web-container ports: - containerPort: 2000 protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: / port: 2000 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 20m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: 2018-12-07T11:13:21Z lastUpdateTime: 2018-12-07T11:13:21Z message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 2 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre> <p>My app does give 200 response on '/' get therefore Readiness Probe works but when I update the Image, and test it but continuously hitting CURL, it gives me downtime which lasts for like 20-40 seconds. </p>
<p>You set <code>maxUnavailable</code> to 1 even though you have only one replica; you should set <code>maxUnavailable</code> to 0.</p> <pre><code>strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
</code></pre> <p>This tells Kubernetes that there should be zero unavailable pods while deploying (<code>maxUnavailable: 0</code>) and that at most one new pod is created at a time (<code>maxSurge: 1</code>).</p> <p>I am hoping you set up the <code>readiness</code> probe something like this (using your container's port 2000):</p> <pre><code>readinessProbe:
  httpGet:
    path: /
    port: 2000
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
</code></pre> <p>This is the check Kubernetes performs to make sure your pod is ready to receive traffic. Until the pod is ready, Kubernetes will not send traffic to it.</p>
<p>I am trying to fetch Nodeport from a specific pod using kubectl describe command. I came to know from this <a href="https://stackoverflow.com/questions/37464518/how-to-format-the-output-of-kubectl-describe-to-json">question</a> that -o option doesn't work with describe, hence I am trying with following way but I am not getting required value, can someone correct me.</p> <pre><code>kubectl -n core describe svc/pg-debug Name: pg-debug Namespace: core Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=postgresql-default Type: NodePort IP: 172.17.17.19 Port: &lt;unset&gt; 5432/TCP TargetPort: 5432/TCP NodePort: &lt;unset&gt; 24918/TCP Endpoints: 172.16.90.10:5432 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>Following is the command that i am trying to get the value <code>"24918"</code></p> <pre><code>kubectl -n core describe svc/pg-debug | grep NodePort |awk -F: '/nodePort/{gsub(/ /,"",$2)}' </code></pre>
<p>You can use following command to get nodePort from service</p> <pre><code>kubectl get svc pg-debug -n core -o jsonpath='{.spec.ports[].nodePort}' </code></pre> <p>Refs:</p> <ul> <li><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a> </li> <li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></li> <li>and see also: <a href="https://github.com/kubernetes/kubectl/issues/25" rel="noreferrer">https://github.com/kubernetes/kubectl/issues/25</a></li> </ul>
<p>I am trying to set up a testing framework for my kubernetes cluster using jenkins and the <a href="https://github.com/dfci/cidc-devops/tree/master/kubernetes/configuration/configmaps" rel="nofollow noreferrer">jenkins kubernetes plugin</a>.</p> <p>I can get jenkins to provision pods and run basic unit tests, but what is less clear is how I can run tests that involve coordination between multiple pods.</p> <p>Essentially I want to do something like this:</p> <pre><code>podTemplate(label: 'pod 1', containers: [ containerTemplate(...)]) { node('pod1') { container('container1') { // start service 1 } } } podTemplate(label: 'pod 2', containers[ containerTemplate(...)]) { node('pod2') { container('container2') { // start service 2 } } stage ('Run test') { node { sh 'run something that causes service 1 to query service 2' } } </code></pre> <p>I have two main problems:</p> <ol> <li><p>Pod lifecycle: As soon as the block after the podtemplate is cleared, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met?</p></li> <li><p>ContainerTemplate from docker image: I am using a docker image to provision the containers inside each kubernetes pod, however the files that should be inside those images do not seem to be visible/accessable inside the 'container' blocks, even though the environments and dependencies installed are correct for the repo. How do I actually get the service defined in the docker image to run in a jenkins provisioned pod?</p></li> </ol>
<p>It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking, though maybe not as neatly as I would have liked.</p> <p>The solution to multi-service tests ended up being simply using a pod template that has the <a href="https://cloud.google.com/python/docs/reference/" rel="nofollow noreferrer">google cloud library</a>, and assigning that worker a service-account credential plus a secret key so that it can run <code>kubectl</code> commands on the cluster.</p> <p>Dockerfile for the worker, replace the "X"s with the desired versions:</p> <pre><code>FROM google/cloud-sdk:alpine

# Install some utility packages.
RUN apk add --no-cache \
    git \
    curl \
    bash \
    openssl

# Used to install a custom version of kubectl.
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl &amp;&amp; \
    chmod +x ./kubectl &amp;&amp; \
    mv ./kubectl /usr/local/bin/kubectl

# Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get &gt; get_helm.sh &amp;&amp; \
    chmod 700 get_helm.sh &amp;&amp; ./get_helm.sh --version vX.XX.X
</code></pre> <p>Then in the groovy pipeline:</p> <pre><code>pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gcloud
    image: your-docker-repo-here
    command:
    - cat
    tty: true
"""
    }
  }
  environment {
    GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install something somerepo/somechart'
        }
      }
    }
  }
}
</code></pre> <p>Now that I can run both helm and kubectl commands, I can bring pods or services up and down at will. It still doesn't solve the problem of using their internal "context" to access files, but at least it gives me a way to run integration tests.</p> <p>NOTE: For this to work properly you will need a Kubernetes service account matching the <code>serviceAccount</code> name used in the pipeline, and the credentials stored in the Jenkins credentials store. For the helm commands to work, you will need to make sure Tiller is installed on your Kubernetes cluster. Also, do not change the name of the env key from <code>GOOGLE_APPLICATION_CREDENTIALS</code>, as the Google Cloud tools will be looking for that environment variable.</p>
<p>TLDR: I'm running a kubernetes cluster using AKS. I have created an external name service to proxy to an external address. I would like to add a 'Host' header to the request. How should I go about doing this?</p> <p>For more context I have an ingress controller that handles all incoming traffic. I want to route a subset of that traffic (based on route) to an external Azure Function. I have set up an ExternalName service with the hostname of my Azure Function and am routing the traffic to it. However, because of the way Microsoft handles routing to it's functions a 'Host' value with the correct FQDN is required in the header.</p> <p>I don't want to make whoever sends the original request include this in the header so I'd like to add it on their behalf when traffic is proxy'd to the Azure Function.</p> <p>Here is my service file:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: azure-function-proxy-service labels: app: proxy-service spec: type: ExternalName externalName: azure-function.azurewebsites.net </code></pre> <p>And the relevant ingress rules code:</p> <pre><code>- host: hostto.proxy.net http: paths: - path: /route/to/proxy backend: serviceName: azure-function-proxy-service servicePort: 80 </code></pre>
<p>I found that what I really wanted to do was modify the header of a request as it passes through my ingress controller. The best option I found for that was the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> annotation, but it did not give me the fine-grained control I wanted.</p> <p>Ultimately I had to stand up an additional nginx instance to proxy the requests so that I could have total control over the nginx config.</p>
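<p>For reference, a minimal sketch of the annotation approach (assuming the nginx ingress controller; the host and paths are taken from my question and may need adjusting) sets the upstream <code>Host</code> header with a <code>proxy_set_header</code> directive:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azure-function-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
    # Rewrite the Host header before the request is proxied upstream.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Host "azure-function.azurewebsites.net";
spec:
  rules:
  - host: hostto.proxy.net
    http:
      paths:
      - path: /route/to/proxy
        backend:
          serviceName: azure-function-proxy-service
          servicePort: 80
</code></pre> <p>This should be enough for the simple case of forcing a fixed <code>Host</code> value; as noted above, it wasn't flexible enough for my full use case, which is why I fell back to a dedicated nginx proxy.</p>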
<p>How does one list all the resourceVersion of a Kubernetes object?</p> <p>For instance, I want to retrieve all the resourceVersions of a configmap from the past. If this is possible I would like to watch all the changes from the past resourceVersion(s) of the configmap (this is my workaround, which I would like to try). Because currently, K8s does not support rollback of ConfigMaps.</p> <p>Here is the ongoing feature request for that:- <a href="https://github.com/kubernetes/kubernetes/issues/22368" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/22368</a></p>
<blockquote> <p>How does one list all the resourceVersion of a Kubernetes object?</p> </blockquote> <p>Not supported on the API side yet as of this writing (as described in the issue you linked). However, old and deleted objects can be kept in etcd for the duration set by <a href="https://coreos.com/etcd/docs/latest/op-guide/configuration.html" rel="nofollow noreferrer"><code>--auto-compaction-retention</code></a>; I suppose you can change that value if you want to keep the objects longer and query etcd directly with <code>etcdctl</code>.</p> <p>Another, more brute-force option is to <a href="https://support.coreos.com/hc/en-us/articles/115000323894-Creating-etcd-backup" rel="nofollow noreferrer">back up</a> etcd and then <a href="https://coreos.com/etcd/docs/latest/op-guide/recovery.html" rel="nofollow noreferrer">restore</a> it on some other node, then manually query etcd for that particular snapshot.</p> <p>For example, to get the <code>kube-proxy</code> ConfigMap:</p> <pre><code>$ etcdctl --endpoints=https://:2379 get "/registry/configmaps/kube-system/kube-proxy" --cert=client.crt --key=client.key --cacert=ca.crt
</code></pre> <p>These are some other backup tools:</p> <ul> <li><a href="https://github.com/giantswarm/etcd-backup" rel="nofollow noreferrer">https://github.com/giantswarm/etcd-backup</a></li> <li><a href="https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html" rel="nofollow noreferrer">https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html</a></li> </ul> <p>As you mentioned, there's already a feature request for historical versions.</p>
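<p>A rough sketch of the backup/restore step with <code>etcdctl</code> (v3 API; the endpoints, certificate paths and target directory are examples and depend on how your etcd is deployed):</p> <pre><code># Take a snapshot of the current etcd keyspace.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=ca.crt --cert=client.crt --key=client.key

# Restore it into a fresh data directory on another machine, then run a
# standalone etcd against that directory and query the historical objects.
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restore
</code></pre>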
<p>I have an EKS cluster setup in a VPC. The worker nodes are launched in private subnets. I can successfully deploy pods and services.</p> <p>However, I'm not able to perform DNS resolution from within the pods. (It works fine on the worker nodes, outside the container.)</p> <p>Troubleshooting using <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a> results in the following from nslookup (timeout after a minute or so):</p> <p>Server: 172.20.0.10 Address 1: 172.20.0.10</p> <p>nslookup: can't resolve 'kubernetes.default'</p> <p>When I launch the cluster in an all-public VPC, I don't have this problem. Am I missing any necessary steps for DNS resolution from within a private subnet?</p> <p>Many thanks, Daniel</p>
<p>I feel like I have to give this a proper answer because coming upon this question was the answer to 10 straight hours of debugging for me. As @Daniel said in his comment, the issue I found was that my network ACL was blocking outbound traffic on UDP port 53, which Kubernetes uses to resolve DNS records.</p> <p>The process was especially confusing for me because one of my pods actually worked the entire time since (I think?) it happened to be in the same zone as the Kubernetes DNS resolver.</p>
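<p>For anyone hitting the same thing, a quick way to reproduce the symptom from inside the cluster is a throwaway pod with a working <code>nslookup</code> (busybox 1.28 is a common choice here, since the <code>nslookup</code> in newer busybox images is known to be unreliable):</p> <pre><code># Should return the cluster IP of the kubernetes service almost instantly;
# a timeout points at DNS traffic being dropped somewhere.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default

# If it times out, check the network ACLs / security groups on the private
# subnets: both UDP and TCP on port 53 need to be allowed between the worker
# node subnets and wherever the kube-dns/CoreDNS pods run.
</code></pre>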
<p>Is it possible to configure an ingress controller in Kubernetes to route the HTTP requests to a service only if the incoming requests have a certain value for a header?</p> <p>Example</p> <p>An HTTP request with following header</p> <pre><code>X-MY-CUSTOM-HEADER: accepted-value </code></pre> <p>should be forwarded to service1</p> <p>An HTTP request with following header</p> <pre><code>X-MY-CUSTOM-HEADER: invalid-value </code></pre> <p>should be blocked</p> <p>If is possible could you detail a bit or point to some documentation as I wasn't able to find documentation for such usecase </p>
<p>If you are using an nginx ingress controller you can do it with a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">Configuration snippet</a> annotation. Then you can add something like this:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | map $http_x_custom_header $not_ok { default "1"; Value1 "0"; Value2 "0"; Value3 "0"; } if ($not_ok) { return 403; } </code></pre> <p>Some more info <a href="https://stackoverflow.com/a/18972508/2989261">here</a>.</p>
<p>I try to install Traefik Web UI, but under Vagrant, without Minikube. This is a Multi-Node setup. On the host machine, I tried to reach UI both under <code>localhost:8081</code> (guest 8080 port exposed to host in Vagrantfile) and <code>https://localhost:6443/api/v1/namespaces/kube-system/services/https:traefik-web-ui:/proxy/</code> without luck. Can you recommend a working setting? On this last URL, I'm getting this error:</p> <pre class="lang-json prettyprint-override"><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "no endpoints available for service \"https:traefik-web-ui:\"", "reason": "ServiceUnavailable", "code": 503 } </code></pre> <p>SSHing into the master machine, the Web UI is working:</p> <pre class="lang-bash prettyprint-override"><code>vagrant@my-project-master:/vagrant$ kubectl get services --namespace=kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ... traefik-ingress-service ClusterIP 10.110.210.58 &lt;none&gt; 80/TCP,8080/TCP 4m30s traefik-web-ui ClusterIP 10.98.235.112 &lt;none&gt; 80/TCP 4m29s vagrant@my-project-master:/vagrant$ curl 10.110.210.58:8080/dashboard/ &lt;!doctype html&gt;&lt;html class="has-navbar-fixed-top"&gt;&lt;head&gt;&lt;meta charset="utf-8"&gt;&lt;title&gt;Traefik&lt;/title&gt;&lt;base href="./"&gt;&lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;&lt;link rel="icon" type="image/x-icon" href="./assets/images/traefik.icon.png"&gt;&lt;link href="styles.e21e21d47be645f690e6.bundle.css" rel="stylesheet"/&gt;&lt;/head&gt;&lt;body&gt;&lt;app-root&gt;&lt;/app-root&gt;&lt;script type="text/javascript" src="inline.318b50c57b4eba3d437b.bundle.js"&gt;&lt;/script&gt;&lt;script type="text/javascript" src="polyfills.1457c99db4b6dba06e8d.bundle.js"&gt;&lt;/script&gt;&lt;script type="text/javascript" src="scripts.ef668c5c0d42ec4c5e83.bundle.js"&gt;&lt;/script&gt;&lt;script type="text/javascript" src="main.f341693bf9ed22060b42.bundle.js"&gt;&lt;/script&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>The current <code>ui.yml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: Service metadata: name: traefik-web-ui namespace: kube-system spec: selector: k8s-app: traefik-ingress-lb ports: - name: web port: 80 targetPort: 8080 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-web-ui namespace: kube-system spec: rules: #- host: traefik-ui.minikube - host: traefik-ui.local http: paths: - path: / backend: serviceName: traefik-web-ui servicePort: web </code></pre> <p>Installed with:</p> <pre class="lang-bash prettyprint-override"><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml #kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml kubectl apply -f /vagrant/hashicorp/manifests/traefik/ui.yml </code></pre> <p>How can I reach the Web UI on the host machine, preferably make it available for the host browser with only an installed certificate?</p> <p>I use the Kubernetes Dashboard similarly, with the certificate installed on the host browser.</p>
<p>I found the correct URL under the API to reach the UI:</p> <p><code>https://localhost:6443/api/v1/namespaces/kube-system/services/http:traefik-ingress-service:8080/proxy/dashboard/</code></p> <p>Making these commands obsolete:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
# or
kubectl apply -f /vagrant/hashicorp/manifests/traefik/ui.yml
</code></pre>
<p>I am trying to get mongo-replicaset chart working.</p> <p>Kubelet report this error while mongo-replicaset chart deployment:</p> <pre><code>MountVolume.SetUp failed for volume "mongo-two": lstat /mongo/data: no such file or directory </code></pre> <p>On each node, /mongo/data folder <strong>exist</strong>, driving me crazy. Note: on nodes, lstat command doesn't exists, but I suspect kubelet container to bring it.? <a href="https://i.stack.imgur.com/4cSod.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4cSod.png" alt="enter image description here"></a></p> <p>I have 3 persistent volumes:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mongo-[one/two/three] spec: capacity: storage: 40Gi accessModes: - ReadWriteOnce storageClassName: local-storage local: path: /mongo/data nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - [one/two/three] </code></pre> <p>StatefulSet yaml: (mongo-replicaset helm chart 3.8.0)</p> <pre><code>... volumeMounts: - mountPath: /data/db name: datadir ... volumeClaimTemplates: - metadata: creationTimestamp: null name: datadir spec: accessModes: - ReadWriteOnce storageClassName: local-storage resources: requests: storage: 40Gi ... </code></pre> <p>Still having "no such directory" on an existing directory.. What's incorrect? I can give additionnal data if needed.</p> <p>Thanks you</p>
<p>The problem came from kubelet, which is containerized (because of the Rancher installation).</p> <p>I added a volume definition to the kubelet container and it's OK now.</p> <p>For those interested in creating persistent local volumes in a Rancher Kubernetes installation, just add this to your cluster yaml so that kubelet can mount your volume:</p> <pre><code>services:
  kubelet:
    extra_binds:
      - /path_to_mount:/path_to_mount:rshared
</code></pre> <p>Don't forget the <code>:rshared</code> suffix after the second colon.</p>
<p>I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.</p> <hr> <p><strong>Update</strong></p> <p>It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and vm and starting from scratch. Now I've got working. I changed the deployment to a statefulset, though I think it could work either way.</p> <p>Setup and test:</p> <pre><code>kubectl --context=minikube create -f postgres-statefulset.yaml kubectl --context=minikube create -f postgres-service.yaml url=$(minikube service postgres --url --format={{.IP}}:{{.Port}}) psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \ --command='SELECT refobjid FROM pg_depend LIMIT 1' Password for user postgres: refobjid ---------- 1247 </code></pre> <p>postgres-service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres role: service spec: selector: app: postgres type: NodePort ports: - name: postgres port: 5432 targetPort: 5432 protocol: TCP </code></pre> <p>postgres-statefulset.yaml</p> <pre><code>apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: postgres labels: app: postgres role: service spec: replicas: 1 selector: matchLabels: app: postgres role: service serviceName: postgres template: metadata: labels: app: postgres role: service spec: containers: - name: postgres image: postgres:9.6 env: - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres - name: POSTGRES_DB value: postgres ports: - containerPort: 5432 name: postgres protocol: TCP </code></pre> <hr> <p><strong>Original question</strong></p> <p>I created a deployment running one container (<code>postgres-container</code>) and a NodePort (<code>postgres-service</code>). I can connect to postgresql from within the pod itself:</p> <pre><code>$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \ -- psql --port=5432 --username=postgres --dbname=postgres </code></pre> <p>But I can't connect through the service.</p> <pre><code>$ minikube service --url postgres-service http://192.168.99.100:32254 $ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres psql: could not connect to server: Connection refused Is the server running on host "192.168.99.100" and accepting TCP/IP connections on port 32254? </code></pre> <p>I think postgres is correctly configured to accept remote TCP connections:</p> <pre><code>$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \ -- tail /var/lib/postgresql/data/pg_hba.conf host all all 127.0.0.1/32 trust ... 
host all all all md5 $ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \ -- grep listen_addresses /var/lib/postgresql/data/postgresql.conf listen_addresses = '*' </code></pre> <p>My service definition looks like:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app: postgres-container type: NodePort ports: - port: 5432 targetPort: 5432 protocol: TCP </code></pre> <p>And the deployment is:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: postgres-deployment spec: replicas: 1 selector: matchLabels: app: postgres-container template: metadata: labels: app: postgres-container spec: containers: - name: postgres-container image: postgres:9.6 env: - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: postgres - name: POSTGRES_DB value: postgres ports: - containerPort: 5432 </code></pre> <p>The resulting service configuration:</p> <pre><code>$ kubectl --context=minikube get service postgres-service -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-12-07T05:29:22Z name: postgres-service namespace: default resourceVersion: "194827" selfLink: /api/v1/namespaces/default/services/postgres-service uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02 spec: clusterIP: 10.109.120.251 externalTrafficPolicy: Cluster ports: - nodePort: 32254 port: 5432 protocol: TCP targetPort: 5432 selector: app: postgres-container sessionAffinity: None type: NodePort status: loadBalancer: {} </code></pre> <p>I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?</p>
<p>I just deployed postgres and exposed its service through a NodePort; the following is my pod and service.</p> <pre><code>[root@master postgres]# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
postgres-7ff9df5765-2mpsl   1/1       Running   0          1m

[root@master postgres]# kubectl get svc postgres
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.100.199.212   &lt;none&gt;        5432:31768/TCP   20s
</code></pre> <p>And this is how I connected to postgres through the NodePort:</p> <pre><code>[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.

postgresdb=#
</code></pre> <p>In the above, 10.6.35.83 is my node/host IP (not the pod IP or cluster IP), and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.</p>
<p>Please, any idea why deleting the secret does not work?</p> <p>Thanks!</p> <pre><code>Cloning repository... Cloning into '/builds/systemxy/actuator-sample'... Checking out e503a7da as master... Skipping Git submodules setup $ echo "$GOOGLE_KEY" &gt; key.json $ gcloud auth activate-service-account --key-file key.json Activated service account credentials for: [[email protected]] $ gcloud config set compute/zone europe-west1-c Updated property [compute/zone]. $ gcloud config set project actuator-sample-224932 Updated property [core/project]. $ gcloud config set container/use_client_certificate True Updated property [container/use_client_certificate]. $ gcloud container clusters get-credentials actuator-sample Fetching cluster endpoint and auth data. kubeconfig entry generated for actuator-sample. $ kubectl delete secret registry.gitlab.com **Error from server (Forbidden): secrets "registry.gitlab.com" is forbidden: User "client" cannot delete secrets in the namespace "default"** **ERROR: Job failed: exit code 1`enter code here** </code></pre>
<p>It's kind of odd since <code>gcloud container clusters get-credentials actuator-sample </code> should set the right credentials and the right context. However, it seems that you may have the wrong 'current context' in your <code>~/.kube/config</code> file. You can check with:</p> <pre><code>$ kubectl config get-contexts </code></pre> <p>Since it looks like you are using GKE, your current context should be something like this:</p> <pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE * gke_&lt;project_name&gt;_&lt;region&gt;-&lt;cluster-name&gt; gke_&lt;project_name&gt;_&lt;region&gt;-&lt;cluster-name&gt; gke_&lt;project_name&gt;_&lt;region&gt;-&lt;cluster-name&gt; </code></pre> <p>If not you have to choose that context:</p> <pre><code>$ kubectl config use-context gke_&lt;project_name&gt;_&lt;region&gt;-&lt;cluster-name&gt; </code></pre>
<p>I have a multiplayer game based on microservices architecture which I am trying to figure how to <strong>scale horizontally</strong>. It is currently orchestrated in Docker Swarm but I am considering moving to Kubernetes.</p> <p>Here are the details about the game:</p> <ul> <li>It is a table game with cards</li> <li>Multiple players sit on the same table and play with each other</li> </ul> <p>As it works now, I have a single container that is responsible for all tables. When a player joins the table, he sits down and establishes a websocket connection that is routed to that particular container. All players on all tables are connected to the same container. The game logic and the game events can be easily pushed to all clients. </p> <p>It's currently like that. <strong>All clients that sit on the same table have a connection to the same container</strong>, so it's easy to push dynamic game data back and forth.</p> <pre><code>Client 1+ | Container A +Client 3 | +---------------+ | +---&gt; |---------------| &lt;----+ || Table 1 || |Client 4 Client 2+----&gt; |---------------| &lt;----+ |---------------| || Table 2 || |---------------| |---------------| || Table 3 || +---------------+ | . | | . | | . | +---------------+ </code></pre> <p>However, when you try to scale this by just increasing the number of containers you run into the problem that clients sitting on the same table are connected to different containers. This means that every game action and all shared dynamic game data have to be updated in a database sitting between these containers. However this becomes increasingly hard to write and maintain:</p> <pre><code> Container 1 Container 2 Client 1+ +-------------+ +-------------+ +Client 3 +----&gt; |-------------| |-------------| &lt;------+ || Table 1 || || Table 1 || +----&gt; |-------------| |-------------| &lt;------+Client 4 Cleint 2+ |-------------| |-------------| || Table 2 || || Table 2 || +-------------+ +-------------+ | | | | | | | | | | | | +----+--------+ +-----------+-+ | | | | | | | +------------------------+ | +&gt; | Redis DB | &lt;+ +------------------------+ </code></pre> <p>Rather than designing the components like that, it would be much simpler to somehow route clients that have to sit on the same table to the same container. This is to avoid writing every player action and every public table update into the DB. It would look like this:</p> <pre><code> Game Service +-----------------+ Client 1+ | | + Client 3 | | Container 1 | | +------&gt; +-----------+ &lt;-------+ | |-----------| | Client 2 +-----&gt; || Table 1 || &lt;-------+ Client 4 | |-----------| | | |-----------| | | || Table 2 || | | |-----------| | | +-----------+ | | | | Container 2 | | +-----------+ | | |-----------| | | || Table 3 || | | |-----------| | | |-----------| | | || Table 4 || | | |-----------| | | +-----------+ | | | +-----------------+ </code></pre> <p>Having the above architecture would dramatically decrease the complexity of the app. The problem is that <strong>connections coming from different clients have to be identified and routed to the correct container</strong>. I haven't found a way to do that. Is routing to specific containers within the service possible and with what tool?</p> <p>What is the correct approach to use in my scenario? Also, if manually routing requests to the target container is not a viable option, what would be the correct way to architect this service?</p>
<p>This can be achieved with the help of third-party tooling such as Istio.</p> <p><a href="https://istio.io/docs/tasks/traffic-management/request-routing/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/request-routing/</a></p> <p>You will have to define VirtualServices depending on your config. For your game service you should use a StatefulSet; that way each pod has a stable identity, so you can tell which backend a given table's traffic should be routed to.</p>
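<p>A rough sketch of what that could look like (the service name, header name and pod names below are placeholders; the real routing key would be whatever your clients send to identify their table). StatefulSet pods automatically get a <code>statefulset.kubernetes.io/pod-name</code> label, which a DestinationRule can use to define per-pod subsets, and a VirtualService can then match on a header to pick the subset:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: game-service
spec:
  host: game-service
  subsets:
  - name: game-0
    labels:
      statefulset.kubernetes.io/pod-name: game-0
  - name: game-1
    labels:
      statefulset.kubernetes.io/pod-name: game-1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: game-service
spec:
  hosts:
  - game-service
  http:
  - match:
    - headers:
        x-table-group:
          exact: group-0
    route:
    - destination:
        host: game-service
        subset: game-0
  - route:
    - destination:
        host: game-service
        subset: game-1
</code></pre> <p>Websocket connections are routed on their initial HTTP upgrade request, so the header (or a cookie) only needs to be present when the connection is established; all players at the same table just have to present the same value.</p>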
<p>I am using Helm charts to create and deploy applications into my K8s cluster. </p> <p>One of my pods requires a config file with a SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secret Manager. I don't include the secret data in my Docker image. I want to be able to mount this config file at runtime. A ConfigMap seems to be a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:</p> <pre><code> data: app.conf: | [sdkkey] # I want to be able to retrieve sdk from aws secrets manager </code></pre> <p>I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap. </p> <p>Any ideas or alternative solutions?</p> <p>Cheers K</p>
<p><em>tl;dr;</em> You can't execute a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>, it is just a static manifest. Use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> instead.</p> <p>ConfigMaps are a static manifest that can be read from the Kubernetes API or injected into a container at runtime as a file or environment variables. There is no way to execute a ConfigMap. </p> <p>Additionally, ConfigMaps should not be used for secret data, Kubernetes has a specific resource, called <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secrets</a>, to use for secret data. It can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.</p> <p>Given your description it sounds like your best option would be to use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> to retrieve the credentials and write them to a shared <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir Volume</a> mounted into the container with the application that will use the credentials. </p>
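<p>A minimal sketch of that pattern (the image, secret id and paths are illustrative, and it assumes the pod has IAM permissions to call Secrets Manager, for example via the node's instance role or a tool like kube2iam):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
  - name: app-config
    emptyDir: {}
  initContainers:
  - name: fetch-sdk-key
    image: my-aws-cli-image   # placeholder: any image with the AWS CLI installed
    command: ["/bin/sh", "-c"]
    # Pull the SDK key out of Secrets Manager and write it where the app expects it.
    # In practice you might render the full app.conf around the retrieved value.
    args:
    - &gt;
      aws secretsmanager get-secret-value --secret-id my-sdk-key
      --query SecretString --output text &gt; /config/app.conf
    volumeMounts:
    - name: app-config
      mountPath: /config
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: app-config
      mountPath: /etc/app
      readOnly: true
</code></pre> <p>The main container then reads <code>/etc/app/app.conf</code> at startup, and the secret is never baked into the image or stored in a ConfigMap.</p>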
<p>The first question I have is a design question really. This is my first time writing a service that uses a Queue and I am also new to Go. I am trying to determine whether I should write my worker in such a way that it simply pops one message off the queue, processes it, and the dies off. With things like Kubernetes this <em>seems</em> fairly trivial.</p> <p>Or should I have a long-lived worker constantly waiting for new messages but that is relaunched if it dies (by a bug or accident)?</p> <p>The reason I ask this question is that in order to implement the former it feels a little more "hacked up" because I have to write the following using the common go AMQP library from <code>streadway/amqp</code> (read the comments):</p> <pre><code>// Pop will extract a message from the AMQP queue func (v *Queue) Pop() (data []byte, err error) { msgs, err := v.Channel.Consume( v.QueueName, // queue v.ConsmerID, // consumer true, // auto-ack false, // exclusive false, // no-local false, // no-wait nil, // args ) if err != nil { return nil, err } // We have to use for .. range because Consume returns // "&lt;-chan Delivery" but if we only want ONE message popped off // we return on the first one for data := range msgs { return data.Body, nil } // We should never get this far... return nil, errors.New("Something went wrong") } </code></pre> <p>Furthermore what is <code>&lt;-chan Delivery</code> in this case? It seems like some sort of "stream" or object that you can plug into. Is there a way to not have to write a for-loop for these data types?</p> <p>EDIT: I have also discovered that it appears that this code will dequeue the ENTIRE queue even though it only does a for-loop iteration once (as show in the code above). I am not sure why this happens either?</p> <p>Relevant links to code:</p> <ul> <li><a href="https://github.com/streadway/amqp/blob/master/channel.go#L1049" rel="nofollow noreferrer">https://github.com/streadway/amqp/blob/master/channel.go#L1049</a></li> <li><a href="https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/go/worker.go" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-tutorials/blob/master/go/worker.go</a></li> <li><a href="https://github.com/fatih/amqp-examples/blob/master/consumer.go#L132" rel="nofollow noreferrer">https://github.com/fatih/amqp-examples/blob/master/consumer.go#L132</a></li> </ul>
<p>To simply take a single object from a <code>&lt;-chan Delivery</code>, don't use a <code>range</code> loop, but the <a href="https://tour.golang.org/concurrency/2" rel="noreferrer">channel operator <code>&lt;-</code></a>:</p> <pre><code>data := &lt;- msgs return data.Body, nil </code></pre> <p>As to why your entire queue is emptied as soon as you fetch one message: This is most likely due to the <a href="https://www.rabbitmq.com/consumer-prefetch.html" rel="noreferrer">Consumer prefetch</a>. When consuming messages, the client will actually not pop them from the broker one-by-one, but in batches of configurable size (if I recall correctly, around the order of 32 or 64 messages by default). As soon as the broker has published this batch of messages to your consumer, they'll be in your <code>msgs</code> channel; and if you don't read from that channel any more after getting the first message, the rest of them will be gone (at least, with <code>auto-ack</code> enabled -- otherwise, they'll be requeued after the channel is closed).</p> <p>To only fetch one message at a time, use the channel's <a href="https://godoc.org/github.com/streadway/amqp#Channel.Qos" rel="noreferrer"><code>QoS</code> function</a> (with the first parameter being the prefetch count):</p> <pre><code>err := v.Channel.Qos(1, 0, false) </code></pre>
<p>I've previously installed tiller into a specific namespace.</p> <p>I set an environment variable to set the 'tiller' namespace - but I can't remember what this environment variable was called - and can't seem to find it via a web search.</p> <p>What is this key?</p>
<p>By executing <em>helm -h</em> you can see the list of environment variables it uses:</p> <pre><code>Environment: $HELM_HOME set an alternative location for Helm files. By default, these are stored in ~/.helm $HELM_HOST set an alternative Tiller host. The format is host:port $HELM_NO_PLUGINS disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. $TILLER_NAMESPACE set an alternative Tiller namespace (default "kube-system") $KUBECONFIG set an alternative Kubernetes configuration file (default "~/.kube/config") $HELM_TLS_CA_CERT path to TLS CA certificate used to verify the Helm client and Tiller server certificates (default "$HELM_HOME/ca.pem") $HELM_TLS_CERT path to TLS client certificate file for authenticating to Tiller (default "$HELM_HOME/cert.pem") $HELM_TLS_KEY path to TLS client key file for authenticating to Tiller (default "$HELM_HOME/key.pem") $HELM_TLS_VERIFY enable TLS connection between Helm and Tiller and verify Tiller server certificate (default "false") $HELM_TLS_ENABLE enable TLS connection between Helm and Tiller (default "false") </code></pre>
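<p>So the one you are after is <code>TILLER_NAMESPACE</code>. For example, assuming Tiller was installed into a namespace called <code>my-tiller-ns</code> (replace with your own):</p> <pre><code>export TILLER_NAMESPACE=my-tiller-ns
helm ls

# or as a one-off flag, without setting the environment variable
helm ls --tiller-namespace my-tiller-ns
</code></pre>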
<p>I'm new to OpenShift and Kubernetes.</p> <p>I need to access kube-apiserver on existing OpenShift environment</p> <pre><code>oc v3.10.0+0c4577e-1 kubernetes v1.10.0+b81c8f8 </code></pre> <p>how do I know kube-apiserver is already installed, or how to get it installed?</p> <p>I checked all the containers and there is no even such path /etc/kubernetes/manifests.</p> <p>Here is the list of docker processes on all clusters, could it hide behind one of these?</p> <p><code>k8s_fluentd-elasticseark8s_POD_logging k8s_POD_tiller-deploy k8s_api_master-api-ip-...ec2.internal_kube-system k8s_etcd_master-etcd-...ec2.internal_kube-system k8s_POD_master-controllers k8s_POD_master-api-ip- k8s_POD_kube-state k8s_kube-rbac-proxy k8s_POD_node-exporter k8s_alertmanager-proxy k8s_config-reloader k8s_POD_alertmanager_openshift-monitoring k8s_POD_prometheus k8s_POD_cluster-monitoring k8s_POD_heapster k8s_POD_prometheus k8s_POD_webconsole k8s_openvswitch k8s_POD_openshift-sdn k8s_POD_sync k8s_POD_master-etcd</code></p>
<p>If you just need to verify that the cluster is up and running then you can simply run <code>oc get nodes</code> which communicates with the kube-apiserver to retrieve information.</p> <p><code>oc config view</code> will show where kube-apiserver is hosted under the clusters -> cluster -> server section. On that host machine you can run command <code>docker ps</code> to display the running containers, which should include the kube-apiserver</p>
<p>I'm mounting a local folder into minikube and using that folder inside a pod. The folder contains the code I am developing. It works great but changes I make are not being reflected in the browser. If I exec into the pod I can see my code changes, just not in the browser. </p> <p>If I delete the pod when it is automatically recreated the changes are reflected in the browser. Is this a limitation of the solution? </p> <p>Can anybody please advise a novice?</p>
<p>Have a look at <a href="https://github.com/GoogleContainerTools/skaffold/" rel="nofollow noreferrer">Skaffold</a> and its <a href="https://github.com/GoogleContainerTools/skaffold/blob/master/examples/annotated-skaffold.yaml" rel="nofollow noreferrer"><code>sync</code></a> feature; it deploys your yaml manifests into Minikube for you and copies files of your selection straight into the running containers as you change them.</p>
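<p>A minimal <code>skaffold.yaml</code> sketch of that setup (the image name, file pattern and manifest path are placeholders, and the exact sync schema depends on your Skaffold version; newer releases use a <code>sync.manual</code> list of src/dest pairs instead of the map shown here):</p> <pre><code>apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
  - image: my-image
    sync:
      # Files matching the pattern are copied into the running container
      # instead of triggering a full rebuild and redeploy.
      '*.py': .
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
</code></pre> <p>Running <code>skaffold dev</code> then watches the source folder, so edits show up in the pod (and hence the browser) without deleting the pod by hand.</p>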
<p>I have a project that uses the following resources to work:</p> <ul> <li><p>A JSF application running under JBoss and using PostgreSQL</p></li> <li><p>2 Spring Boot APIs using MongoDB.</p></li> </ul> <p>So, I have the following Docker containers:</p> <ul> <li><p>JSF + JBoss in the same container</p></li> <li><p>a PostgreSQL container</p></li> <li><p>a Mongo container</p></li> <li><p>one container for each Spring Boot app.</p></li> </ul> <p>In Kubernetes I need to organize these containers into pods, so my idea is to create the following:</p> <ul> <li><p>A pod for the JSF + JBoss container</p></li> <li><p>Another for PostgreSQL</p></li> <li><p>Another pod for MongoDB</p></li> <li><p>Only one pod for both Spring Boot apps, because they need each other.</p></li> </ul> <p>So, I have 4 pods and 6 containers. Thinking about Kubernetes best practices, is this a good way to organize my project?</p>
<p>tl;dr; This doesn't follow Kubernetes best practices. Each application should be a separate <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p> <p>A better way to run this in Kubernetes would be using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> for each individual application, so it would be:</p> <p>One Deployment with a single container for jsf+JBoss</p> <p>One StatefulSet for PostgreSQL (though I would suggest looking at an Operator to manage your PostgreSQL cluster, i.e. <a href="https://kubedb.com/" rel="nofollow noreferrer">kubedb</a></p> <p>One StatefulSet for MongoDB (again, strongly suggest using an Operator to manage your MongoDB cluster, which <a href="https://kubedb.com/" rel="nofollow noreferrer">kubedb</a> can also handle)</p> <p>One deployment <em>each</em> for your Spring Boot applications, assuming they communicate with each other via a network. You can then manage and scale each independently of each other, regardless of their dependency on each other.</p>
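<p>As an illustration, one of the Spring Boot APIs would get its own Deployment and Service along these lines (the names, image and ports are placeholders); the second API gets an analogous pair, and they reach each other through their Service DNS names, so there is no need to co-locate them in one pod:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-api-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spring-api-1
  template:
    metadata:
      labels:
        app: spring-api-1
    spec:
      containers:
      - name: spring-api-1
        image: my-registry/spring-api-1:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: spring-api-1
spec:
  selector:
    app: spring-api-1
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>With that in place the other API simply calls <code>http://spring-api-1</code> (or <code>spring-api-1.&lt;namespace&gt;.svc.cluster.local</code>), and each application can be scaled, updated and restarted independently.</p>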
<p>I was testing Skaffold and it is a great tool for microservices development. But I could not find any tutorial on how to use it with Java. Is there any support for Maven builds?</p>
<p>Skaffold now supports JIB out of the box which will be more efficient than multistage Dockerfile building! Check out the <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/jib" rel="nofollow noreferrer">JIB Maven example</a> in Skaffold.</p>
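<p>A minimal sketch of what that looks like (the image name and manifest path are placeholders; the builder key is <code>jibMaven</code> in the beta schemas of this era and was later renamed to <code>jib</code>, so check the schema version you are on). It assumes your <code>pom.xml</code> already declares the <code>com.google.cloud.tools:jib-maven-plugin</code>:</p> <pre><code>apiVersion: skaffold/v1beta13
kind: Config
build:
  artifacts:
  - image: my-registry/my-spring-app
    # Delegates the image build to the Jib Maven plugin, so no Dockerfile is needed.
    jibMaven: {}
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
</code></pre> <p>Then <code>skaffold dev</code> (or <code>skaffold run</code>) rebuilds with Maven/Jib and redeploys on every change.</p>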
<p>I am trying to clone a private git repository(gitLab) into a kubernetes pod, using SSH keys for authentication. I have stored my keys in a secret. Here is the yaml file for the job that does the desired task.</p> <p>Heres the same question, but doesnt give the exact solution : </p> <p><a href="https://stackoverflow.com/questions/41067668/clone-a-secure-git-repo-in-kubernetes-pod">Clone a secure git repo in Kubernetes pod</a></p> <p>Logs of the init container after execution:</p> <pre><code>fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz v3.7.1-66-gfc22ab4fd3 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main] v3.7.1-55-g7d5f104fa7 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community] OK: 9064 distinct packages available OK: 23 MiB in 23 packages Cloning into '/tmp'... Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. </code></pre> <p>The yaml file which works perfectly for public repo:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: nest-build-kaniko labels: app: nest-kaniko-example spec: template: spec: containers: - image: 'gcr.io/kaniko-project/executor:latest' name: kaniko args: ["--dockerfile=/workspace/Dockerfile", "--context=/workspace/", "--destination=aws.dest.cred"] volumeMounts: - mountPath: /workspace name: source - name: aws-secret mountPath: /root/.aws/ - name: docker-config mountPath: /kaniko/.docker/ initContainers: - name: download image: alpine:3.7 command: ["/bin/sh","-c"] args: ['apk add --no-cache git &amp;&amp; git clone https://github.com/username/repo.git /tmp/'] volumeMounts: - mountPath: /tmp name: source restartPolicy: Never volumes: - emptyDir: {} name: source - name: aws-secret secret: secretName: aws-secret - name: docker-config configMap: name: docker-config </code></pre> <p>The yaml file after using git-sync for cloning private repository:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: nest-build-kaniko labels: app: nest-kaniko-example spec: template: spec: containers: - image: 'gcr.io/kaniko-project/executor:latest' name: kaniko args: ["--dockerfile=/workspace/Dockerfile", "--context=/workspace/", "--destination=aws.dest.cred"] volumeMounts: - mountPath: /workspace name: source - name: aws-secret mountPath: /root/.aws/ - name: docker-config mountPath: /kaniko/.docker/ initContainers: - name: git-sync image: gcr.io/google_containers/git-sync-amd64:v2.0.4 volumeMounts: - mountPath: /git/tmp name: source - name: git-secret mountPath: "/etc/git-secret" env: - name: GIT_SYNC_REPO value: "[email protected]:username/repo.git" - name: GIT_SYNC_SSH value: "true" - name: GIT_SYNC_DEST value: "/tmp" - name: GIT_SYNC_ONE_TIME value: "true" securityContext: runAsUser: 0 restartPolicy: Never volumes: - emptyDir: {} name: source - name: aws-secret secret: secretName: aws-secret - name: git-secret secret: secretName: git-creds defaultMode: 256 - name: docker-config configMap: name: docker-config </code></pre>
<p>You can use <a href="https://github.com/kubernetes/git-sync/" rel="noreferrer">git-sync</a></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: git-sync-test spec: selector: matchLabels: app: git-sync-test serviceName: "git-sync-test" replicas: 1 template: metadata: labels: app: git-sync-test spec: containers: - name: git-sync-test image: &lt;your-main-image&gt; volumeMounts: - name: service mountPath: /var/magic initContainers: - name: git-sync image: k8s.gcr.io/git-sync-amd64:v2.0.6 imagePullPolicy: Always volumeMounts: - name: service mountPath: /magic - name: git-secret mountPath: /etc/git-secret env: - name: GIT_SYNC_REPO value: &lt;repo-path-you-want-to-clone&gt; - name: GIT_SYNC_BRANCH value: &lt;repo-branch&gt; - name: GIT_SYNC_ROOT value: /magic - name: GIT_SYNC_DEST value: &lt;path-where-you-want-to-clone&gt; - name: GIT_SYNC_PERMISSIONS value: "0777" - name: GIT_SYNC_ONE_TIME value: "true" - name: GIT_SYNC_SSH value: "true" securityContext: runAsUser: 0 volumes: - name: service emptyDir: {} - name: git-secret secret: defaultMode: 256 secretName: git-creds # your-ssh-key </code></pre> <p>For more details check <a href="https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md" rel="noreferrer">this</a> link.</p>
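<p>The <code>git-creds</code> secret referenced above holds the SSH private key; it can be created from an existing key like this (the key path is an example, and the key name <code>ssh</code> is what git-sync expects to find under <code>/etc/git-secret</code>):</p> <pre><code>kubectl create secret generic git-creds \
  --from-file=ssh=$HOME/.ssh/id_rsa
</code></pre> <p>The matching public key has to be registered as a deploy key (or on a user account) in GitLab, otherwise the clone will still fail with an access error.</p>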
<p>I was able to get a kubernetes job up and running on AKS (uses docker hub image to process a biological sample and then upload the output to blob storage - this is done with a bash command that I provide in the args section of my yaml file). However, I have 20 samples, and would like to spin up 20 nodes so that I can process the samples in parallel (one sample per node). How do I send each sample to a different node? The "parallelism" option in a yaml file processes all of the 20 samples on each of the 20 nodes, which is not what I want. </p> <p>Thank you for the help.</p>
<p>If you want each instance of the job to be on a different node, you can use a DaemonSet; that's exactly what it does: it provisions one pod per worker node.</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
</code></pre> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p> <p>Another way of doing that is pod anti-affinity:</p> <pre><code>  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values:
                  - zk
          topologyKey: "kubernetes.io/hostname"
</code></pre> <p>The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes scheduler that it should never co-locate two pods which have the app label set to zk in the domain defined by the topologyKey. The topologyKey kubernetes.io/hostname indicates that the domain is an individual node. Using different rules, labels, and selectors, you can extend this technique to spread your ensemble across physical, network, and power failure domains.</p>
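<p>For the original question about running one sample per node, a Job can combine <code>parallelism</code> with the anti-affinity shown above, keyed on the <code>job-name</code> label that the Job controller adds to its pods. This is only a rough sketch (image, command and the way each pod picks its sample are placeholders, untested):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sample-processing
spec:
  parallelism: 20
  completions: 20
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: job-name          # label added automatically by the Job controller
                operator: In
                values:
                - sample-processing
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: worker
        image: your-registry/sample-processor:latest   # placeholder image
        args: ["process-and-upload.sh"]                # placeholder command
      restartPolicy: Never
</code></pre> <p>Because no two pods carrying the <code>job-name: sample-processing</code> label may share a node, the 20 pods should spread across 20 nodes; distributing which sample each pod processes still has to be handled by the workload itself, for example via a work queue.</p>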
<p>I am working on the migration of an existing AWS, Spring Boot based system with 50+ independent repositories into Kubernetes. I am preparing a file containing naming conventions for artifacts, docker images and kubernetes resources (e.g. services, deployments, configmaps, secrets, ingresses, labels etc.) to streamline the process. I am in a dilemma over whether I should use a single file or separate files for defining kubernetes resources. I know both will work, however I am inclined towards preparing a separate resource file for each resource for better version control and modularity. </p> <p>I would appreciate it if you could share your feedback on which one should be preferred: a single file for all k8s resources, or a separate k8s specification file for each resource?</p>
<p>Try to go for separate resource files; these help in managing the resources better while ensuring modularity as well. Also, most Kubernetes deployments are now preferably packaged as Helm charts, which gives you a better way to manage the resource files. </p>
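<p>As an illustration only (repository and service names are made up), a layout along these lines keeps each resource in its own file while a Helm chart groups them per service:</p> <pre><code>orders-service/            # one repo (or folder) per service
  helm/
    Chart.yaml
    values.yaml            # defaults
    values-dev.yaml        # per-environment overrides
    values-prod.yaml
    templates/
      deployment.yaml
      service.yaml
      configmap.yaml
      secret.yaml
      ingress.yaml
</code></pre> <p>Each Kubernetes resource then has its own version-controlled file, and environment differences live in the values files rather than in copies of the manifests.</p>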
<p>I'm having a hard time with the application's release process. The app is developed in .NET Core and uses 'appsettings.json', which holds the connection string to a database. The app should be deployed to a Kubernetes cluster in Azure. We have build and release processes in Azure DevOps, so the process is automated, but the problem comes from the need to deploy the same build to multiple environments (DEV/QA/UAT), where every environment uses its own database. When we build the Docker image, the 'appsettings.json' that holds the connection string is baked into the image. The next step pushes the image to a container repository, which the Release process then uses to deploy the image to a cluster (the steps are classic).</p> <p>Replacing or putting the connection parameters into variables in the build step is not a big deal. However, it is the Release process that controls the deployment to multiple environments. I don't see how I can substitute the database connection string in the Release pipeline... or, better said, how to deploy to three different environments with the database connection string properly set for each of them.</p> <p>Please suggest how this can be achieved. The only option I came up with is having a separate build pipeline for every environment, which doesn't look pretty. The entire idea behind Release is that you can manage the approval process before rolling out the changes to the next environment.</p>
<p>I decided to proceed with Kubernetes Secrets. I found a good article about this approach here: <a href="https://strive2code.net/post/2018/12/07/devops-friday-build-a-deployment-pipeline-using-k8s-secrets" rel="nofollow noreferrer">https://strive2code.net/post/2018/12/07/devops-friday-build-a-deployment-pipeline-using-k8s-secrets</a></p>
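<p>For anyone landing here, a minimal sketch of the idea (resource names and the configuration key are placeholders; it assumes the standard .NET Core convention that the environment variable <code>ConnectionStrings__DefaultConnection</code> overrides <code>ConnectionStrings:DefaultConnection</code> from appsettings.json):</p> <pre><code># created once per environment/namespace, outside the image:
# kubectl create secret generic app-secrets -n dev \
#   --from-literal=connectionString='Server=dev-db;Database=app;...'

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry.azurecr.io/my-app:latest   # same image for every environment (placeholder)
        env:
        - name: ConnectionStrings__DefaultConnection
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: connectionString
</code></pre> <p>The same image is then promoted through DEV/QA/UAT by the Release pipeline, and only the per-environment Secret differs.</p>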
<p>When using a ConfigMap as a mounted volume, all the existing files and folders inside the mounted location of the pod are deleted.</p> <p>The ConfigMap is created using the command :</p> <pre><code>$ kubectl create configmap tomcat-configmap --from-file=./config/tomcat-config.xml </code></pre> <p>The deployement yaml file : </p> <pre><code>kind: Service apiVersion: v1 metadata: name: tomcat-svc spec: selector: app: tomcat-container ports: - protocol: TCP port: 83 targetPort: 8080 --- apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deployment spec: selector: matchLabels: app: tomcat-container replicas: 1 template: metadata: labels: app: tomcat-container spec: containers: - name: tomcat image: bitnami/tomcat:latest ports: - containerPort: 8080 volumeMounts: - name: tomcat-configmap-volume mountPath: /usr/local/tomcat/webapps/examples/WEB-INF/classes/com/test/app-conf.xml subPath: app-conf.xml volumes: - name: tomcat-configmap-volume configMap: name: tomcat-configmap </code></pre> <p>I tried different config but without success :(</p> <p>Any suggestion would be very helpful</p> <p>I am using kubernetes provided with docker for windows :</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: tomcat-svc
spec:
  selector:
    app: tomcat-container
  ports:
  - protocol: TCP
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  selector:
    matchLabels:
      app: tomcat-container
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat-container
    spec:
      containers:
      - name: tomcat
        image: bitnami/tomcat:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: tomcat-configmap-volume
          mountPath: /usr/local/tomcat/webapps/examples/WEB-INF/classes/com/test
          #subPath: app-conf.xml
      volumes:
      - name: tomcat-configmap-volume
        configMap:
          name: tomcat-configmap
</code></pre> <p>The answer provided above by Rico should work; anyway, I am providing the implementation result below.</p> <p>Comment out the subPath; subPath is really useful when you want to override a single existing config file. Also, the Tomcat path is /opt/bitnami/tomcat in the image you are using, and you are mounting a file in a custom, dynamically created location under /usr/local/tomcat/, so I am not sure what you mean by it overriding the contents.</p> <p><a href="https://i.stack.imgur.com/vtOfA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vtOfA.jpg" alt="Screenshot"></a></p>
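<p>If the goal is to keep the other files in that directory and add only the one file, a <code>subPath</code> mount should also work, provided the subPath matches the key that <code>kubectl create configmap --from-file</code> generated (the original file name, here <code>tomcat-config.xml</code>). A fragment of the pod template as a sketch (untested):</p> <pre><code>        volumeMounts:
        - name: tomcat-configmap-volume
          mountPath: /usr/local/tomcat/webapps/examples/WEB-INF/classes/com/test/tomcat-config.xml
          subPath: tomcat-config.xml   # must match the key inside the ConfigMap
      volumes:
      - name: tomcat-configmap-volume
        configMap:
          name: tomcat-configmap
</code></pre> <p>Note that with <code>subPath</code> the mounted file is not refreshed when the ConfigMap changes; the pod has to be restarted to pick up updates.</p>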
<p>I tried to install KubeFlow but use the wrong region, how to delete it? I tried to do it from Kubernetes clsuter but keep getting the same error when I try to create a new one: </p> <pre><code>Error 409: 'projects/dpe-cloud-mle/global/deployments/kubeflow' already exists and cannot be created., duplicate </code></pre>
<p>This is a feature of Deployment Manager, which is used to create the cluster. If you create any resource using DM, but edit or delete it manually (=elsewhere in the console), the record of it remains unchanged in the DM. </p> <p>To fix your issue, navigate to <a href="https://console.cloud.google.com/dm/deployments" rel="nofollow noreferrer">Deployment Manager in your GCP Console</a> and delete the relevant deployment. Then you will be able to re-install KubeFlow without this error. </p>
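<p>If you prefer the command line, the same cleanup can likely be done with gcloud (the deployment name <code>kubeflow</code> is taken from the error message in the question):</p> <pre><code>gcloud deployment-manager deployments list
gcloud deployment-manager deployments delete kubeflow
</code></pre>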
<p>I am trying to create a secret from a JSON value but I keep getting this error executing <code>"secrets.yaml" at &lt;b64enc&gt;: wrong type for value; expected string; got map[string]interface {}</code> when I do <code>helm install</code>.</p> <p><strong>secrets.yaml</strong></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: cloudsql-instance-credentials namespace: wp-{{ .Values.name }} labels: app: wp-{{ .Values.name }} type: Opaque data: credentials.json: {{ .Values.dbCred | b64enc }} </code></pre> <p><strong>values.yaml</strong></p> <pre><code>dbCred: { 'type': '', 'project_id': '', 'private_key_id': '', 'private_key': '-----BEGIN PRIVATE KEY-----\n\n-----END PRIVATE KEY-----\n', 'client_email': '', 'client_id': '', 'auth_uri': '', 'token_uri': '', 'auth_provider_x509_cert_url': '', 'client_x509_cert_url': '' } </code></pre>
<p>You can fix this by changing <code>{{ .Values.dbCred | b64enc }}</code> to <code>{{ toJson .Values.dbCred | b64enc }}</code></p>
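<p>For context, <code>b64enc</code> expects a string, while <code>.Values.dbCred</code> is parsed as a map; <code>toJson</code> serializes the map back to a JSON string first. The fixed template would look roughly like this (same resource as in the question, untested):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: cloudsql-instance-credentials
  namespace: wp-{{ .Values.name }}
  labels:
    app: wp-{{ .Values.name }}
type: Opaque
data:
  credentials.json: {{ toJson .Values.dbCred | b64enc }}
</code></pre>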
<p>An API written in Java Spring Boot was deployed in K8s with two pods, and it had been running successfully for 3 weeks. But yesterday it stopped working and produced 503 Service Unavailable. </p> <p>The K8s admin told us that the pods kept getting recreated multiple times throughout the day. Though it started working again after restarting my build from Drone, I want to know: </p> <ol> <li>Which scenarios can make pods stop working?</li> <li>Why is K8s recreating the pods again and again?</li> <li>If it is a memory issue, as I have developed this API in Java, doesn't Java's garbage collection work here?</li> </ol> <p>Regards, Hearaman.</p>
<p>Which scenarios can make pods stop working?</p> <ol> <li>Memory limits, request limits, quotas in general</li> <li>Your pod has a QoS class (in K8s) of Burstable, meaning it can be destroyed to let other pods live</li> <li>Nodes/workers are down or drained to be updated/maintained</li> <li>Your Java heap is causing the app to be destroyed (generally that's the case)</li> <li>Liveness probe issues</li> </ol> <p>Why is K8s recreating the pods again and again?</p> <ul> <li>To make them available again (you might have a readiness issue), or some volume issues (it depends)</li> </ul> <p>If it is a memory issue, doesn't Java's garbage collection work here?</p> <ul> <li><p>If you are using Java 8, you might want to add some control over the heap size when starting the app, as follows (see the sketch after this list):</p> <p>-XX:+PrintFlagsFinal -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap </p></li> </ul> <p>This is a Java 8+ feature, so you might need to check the Java documentation.</p> <p>Hope this helps</p>
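<p>A sketch of how those JVM flags and the container memory limit could be wired together in the Deployment (image name and sizes are placeholders; the cgroup flags assume Java 8u131+). If the heap is allowed to grow past the container limit, the container is OOM-killed and the Deployment recreates the pod, which matches the symptoms described:</p> <pre><code>    spec:
      containers:
      - name: api
        image: your-registry/spring-boot-api:latest   # placeholder
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"           # the cgroup limit the JVM heap must stay under
            cpu: "1"
        env:
        - name: JAVA_TOOL_OPTIONS   # picked up automatically by the JVM at startup
          value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2"
</code></pre> <p>Checking <code>kubectl describe pod &lt;pod&gt;</code> for a last state of <code>OOMKilled</code>, or the events for failed liveness probes, usually tells you which of the scenarios above applies.</p>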
<p>I deploy a cluster (neo4j) with kubeadm based on <a href="https://github.com/neo4j-contrib/kubernetes-neo4j" rel="noreferrer">this</a> guide. Now I have these pods :</p> <pre><code>NAME READY STATUS RESTARTS AGE neo4j-core-0 1/1 Running 0 20h neo4j-core-1 1/1 Running 0 20h neo4j-core-2 1/1 Running 0 20h </code></pre> <p>and these services :</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 60d neo4j ClusterIP None &lt;none&gt; 7474/TCP,6362/TCP 20h nginx ClusterIP None &lt;none&gt; 80/TCP 25h </code></pre> <p>Then I install kubernetes dashboard :</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>So when I do <code>kubectl proxy</code> to access the dashboard, with below link, it says <strong><em>127.0.0.1 refused to connect</em></strong>.</p> <p><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code>.</p> <p>What should I do to access the dashboard?</p> <p>I also create sample user following <a href="https://github.com/kubernetes/dashboard/wiki/Creating-sample-user" rel="noreferrer">this</a> guide. </p>
<p>The Kubernetes dashboard relies fully on the API server. <code>Connection refused</code> means there is an issue with communication to the API server. Please see <a href="https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above#kubectl-proxy" rel="noreferrer">https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above#kubectl-proxy</a></p> <p>Also, you can try to run </p> <pre><code>kubectl proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*' </code></pre> <p>and check whether it works on another interface (port 8002) rather than <code>127.0.0.1</code>.</p>
<p>The documentation does not specify how to add an intermediate SSL certificate: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#tls</a></p> <p>I suppose the next step would be to read the Kubernetes source code.</p>
<p>If you add multiple certificates to the <code>tls.crt</code> key of the secret used by a Kubernetes TLS Ingress, concatenate them as standard PEM blocks, like this:</p> <pre><code>-----BEGIN CERTIFICATE-----
&lt;your server certificate&gt;
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
&lt;your intermediate certificate&gt;
-----END CERTIFICATE-----
</code></pre> <p>Otherwise, you'll get an error in SSL certificate chain validation.</p> <p>Always put the main (server) certificate first.</p>
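<p>A convenient way to build the secret from a chained file (file names are placeholders):</p> <pre><code># order matters: server certificate first, then the intermediate(s)
cat server.crt intermediate.crt &gt; chain.crt

kubectl create secret tls my-tls-secret --cert=chain.crt --key=server.key
</code></pre> <p>The resulting secret's <code>tls.crt</code> then already contains the full chain referenced by the Ingress <code>tls</code> section.</p>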
<p>I created a custom <code>HTTPS</code> <code>LoadBalancer</code> (<a href="https://i.stack.imgur.com/AHCLr.png" rel="nofollow noreferrer">details</a>) and I need my Kubernetes <code>Workload</code> to be exposed with this <code>LoadBalancer</code>. For now, if I send a request to this endpoint I get the error 502. </p> <p>When I choose the <code>Expose</code> option in the <code>Workload</code> Console page, there are only <code>TCP</code> and <code>UDP</code> service types available, and a <code>TCP</code> <code>LoadBalancer</code> is created automatically. </p> <p>How do I expose a Kubernetes <code>Workload</code> with an existing <code>LoadBalancer</code>? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (<a href="https://i.stack.imgur.com/JGUEO.png" rel="nofollow noreferrer">healthcheck</a>) </p>
<p>You need to create a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a>. First, you need to expose the deployment from k8s; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP (you can also test that by accessing the IP or by port forwarding).</p> <p>Then you need to create the ingress.</p> <p>Inside the yaml file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment. For example:</p> <pre><code>- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
</code></pre> <p>On GCP, when the ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too. Then, if you want to use the already created load balancer, you just need to select the backend services from the LB that was created by the ingress and add them there.</p> <p>Also, the load balancer will only work if the health checks pass. You need to use a route that returns a <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts#criteria-protocol-http" rel="nofollow noreferrer">200 HTTPS response</a> for that.</p>
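<p>A minimal sketch of such an ingress (names are placeholders; on GKE the backing Service usually needs to be of type NodePort, and the GCE ingress controller provisions the HTTPS load balancer from the <code>tls</code> section):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - secretName: my-tls-secret        # certificate served by the load balancer
  rules:
  - http:
      paths:
      - path: /some-route
        backend:
          serviceName: your-service-name
          servicePort: 443
</code></pre> <p>Once the backends report healthy (the health check must receive a 200), the 502 errors should disappear.</p>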
<p>I'm trying to scale a Structured Streaming pipeline using the Apache Spark 2.3 Scala API running on Kubernetes. The basic flow of the job is like this:</p> <ul> <li>Read a static Dataset that contains ~1,000,000 records that map individual source ids to an output aggregation</li> <li>Read a streaming Dataset from Kafka that contains the time series metrics to be aggregated mapped to their source id</li> <li>Repartition each Dataset based on the source id</li> <li>Join the 2 Datasets on source id (This maps the metrics to the correct output aggregation while also filtering out data from kafka that should not be aggregated)</li> <li>Apply watermark</li> <li>Drop duplicates</li> <li>Aggregate the data</li> <li>Write to Kafka output sink</li> </ul> <p>I'm running on Kubernetes and have configured a cluster with 30 executors each with 3 cores. Kafka is currently streaming 600000 metrics per second per source id and is configured with 600 partitions. I am trying aggregate all of them into 10 distinct outputs (i.e, each output aggregation consists of 60000 distinct source ids). I have the pipeline trigger every 10 seconds to process ~6,000,000 records from Kafka. My aggregation windows are 1 minute non-overlapping and I have my watermark set to 30 seconds. Ideally, I'd like a longer watermark to account for late arriving data, but the drop duplicates/watermark stage seems to be a bottleneck particularly when the Garbage Collector is invoked. Here is some data from a recent run of my pipeline:</p> <p><a href="https://i.stack.imgur.com/ThKpf.png" rel="noreferrer">Processed And Input Rows Per Second</a></p> <p>The graph shows that the pipeline keeps up with the input rows per second for about 8-9 minutes, but then the orange line drops below the green line (~10:01 on the time axis) and the pipeline has a hard time keeping up with the input data rate. I looked into the Spark UI for clues as to why the slow down occurred and found that one executor was taking 55 seconds to perform a GC during the drop duplicates/watermark stage. Here are the summary statistics from the stage and a zoom in on the event timeline:</p> <ul> <li><a href="https://i.stack.imgur.com/shHdG.png" rel="noreferrer">Summary Statistics</a> </li> <li><a href="https://i.stack.imgur.com/2Lrln.png" rel="noreferrer">Single Executor GC slow down</a></li> </ul> <p>I've tried a number of techniques suggested <a href="https://spark.apache.org/docs/latest/tuning.html" rel="noreferrer">here</a>, with mixed results. In particular:</p> <ul> <li>Kryo serialization seemed to have little effect.</li> <li>Using these settings -XX:+UseG1GC -XX:MaxGCPauseMillis=500, reduces the frequency of long pauses but they still occur.</li> <li>I turned on the GC logs and and processed them via <a href="http://gceasy.io" rel="noreferrer">gceasy</a> and tried to follow their recommendations. This indicated that long pause is coming from a Full GC event and the logs do not show the symptom that increasing the number of GC threads would help. 
The average creation rate is 182.18 mb/sec and the average promotion rate is 49.8 mb/sec</li> <li>I tried reducing the NewRatio to 1, but this resulted in more frequent long pauses with smaller durations (i.e, ~25 seconds per pause instead of 50+ seconds)</li> <li>It's hard to know how much memory my streaming Dataset is using because if I try to cache it, there is an error.</li> </ul> <p>The rest of the memory advice is like "try modifying this parameter or that parameter", but it is tough to try every permutation and it doesn't indicate what behavior I should expect. Can someone point me in the direction of the next steps to follow? I feel like 55 seconds for a GC is unreasonable and there should be some way to tune it so that my job is not hindered by 1 executor.</p>
<p>So I should have replied to this sooner while the solution was fresh in my mind, but I ended up doing a few things that contributed to decreasing the garbage collection time. I don't remember all of the documentation sources that contributed to me being able to resolve this, but I spent a lot of time researching on SO, the gceasy recommendations, and general Java GC literature. Anyway here's what ended up helping:</p> <ul> <li><strong>Limited the number of cores that participate in a full GC event:</strong> I believe this was the biggest contributor to increased performance. I noticed that certain executors would have large GC times during a given micro-batch, and other executors on the same kubernetes VM would have large computation times that were close to (if not exactly) the duration of the GC pause. This correlation led me down a research path where I eventually discovered that the JVM (at least for Java 8) gets its defaults for the GC from the underlying kubernetes VM rather than the limited resources dedicated to the container on which the JVM runs. Since each container had a different instance of the JVM, each executor had default GC parameters assuming it was the only JVM running on the underlying kubernetes VM. The GC parameter that specifies the number of threads available for a Full GC event is ParallelGCThreads. This is set by default by the JVM as a percentage of the total number of cores on the VM. For a 32 core kubernetes VM, it ended up being 23, if I recall correctly. So when a Full GC event occurred, the GC would cause contention on the CPUs being used by the other executors which were conducting normal computations. My theory is that this contention was pushing up the GC/computation runtimes that occurred on the same underlying kubernetes VM. For my particular test, I ended up overriding the default parameters for ConcGCThreads (to 1) and ParallelGCThreads( to 5) since I was running 6 executors per 32 core kubernetes VM.</li> <li><strong>Increased the memory on each executor:</strong> The gceasy graphs never really showed the memory plateau. It only increased as the pipeline continued to run. I ended up increasing the memory dedicated from each executor to ~15 GB from 8 GB and was getting plateaus around ~10 GB after that. The actual amount of memory you need will probably depend on your code.</li> <li><strong>Enabled string de-duplication:</strong> Most of my dataset was strings so this helped decrease the overall memory foot print for my application</li> <li><strong>Modified the initial heap occupancy:</strong> This was recommended in gceasy as well as some SO threads.</li> </ul> <p>So here are the final set of JVM parameters I am using after all that. I hope this helps.</p> <pre><code>-XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:InitiatingHeapOccupancyPercent=35 -XX:+UseStringDeduplication -XX:ConcGCThreads=1 -XX:ParallelGCThreads=5 </code></pre>
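<p>For reference, one way these options are commonly passed to the executors when submitting against the Kubernetes scheduler; the API server URL, image, class and jar path are placeholders, and the values mirror the 30-executor / 3-core / ~15 GB setup described above (untested):</p> <pre><code>spark-submit \
  --master k8s://https://&lt;apiserver-host&gt;:443 \
  --deploy-mode cluster \
  --class com.example.StreamingAggregator \
  --conf spark.kubernetes.container.image=&lt;your-spark-image&gt; \
  --conf spark.executor.instances=30 \
  --conf spark.executor.cores=3 \
  --conf spark.executor.memory=15g \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:InitiatingHeapOccupancyPercent=35 -XX:+UseStringDeduplication -XX:ConcGCThreads=1 -XX:ParallelGCThreads=5" \
  local:///opt/spark/jars/app.jar
</code></pre>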
<p>I want to upgrade the kubectl client version to 1.11.3.</p> <p>I executed <code>brew install kubernetes-cli</code>, but the version doesn't seem to be updating. </p> <pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:06:30Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>I'm trying to get the logs for a cell by running this command:</p> <pre><code>kubectl logs -l groupname/cell=my-cell --all-containers=true </code></pre> <p>This works in my VM, which has client version 1.11.3. But on my Mac it gives me an error saying the <code>--all-containers=true</code> flag is not available for the <code>kubectl logs</code> command.</p>
<p>Install specific version of <code>kubectl</code></p> <pre class="lang-bash prettyprint-override"><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/&lt;specific-kubectl-version&gt;/bin/darwin/amd64/kubectl </code></pre> <p>For your case if you want to install version <code>v1.11.3</code> then replace <strong>specific-kubectl-version</strong> with <code>v1.11.3</code></p> <p>Then make this binary executable</p> <pre class="lang-bash prettyprint-override"><code>chmod +x ./kubectl </code></pre> <p>Then move this binary to your <code>PATH</code></p> <pre class="lang-bash prettyprint-override"><code>sudo mv ./kubectl $(which kubectl) </code></pre>
<p>I'd like to configure cluster autoscaler on AKS. When scaling down it fails due to PDB:</p> <pre><code>I1207 14:24:09.523313 1 cluster.go:95] Fast evaluation: node aks-nodepool1-32797235-0 cannot be removed: no enough pod disruption budget to move kube-system/metrics-server-5cbc77f79f-44f9w I1207 14:24:09.523413 1 cluster.go:95] Fast evaluation: node aks-nodepool1-32797235-3 cannot be removed: non-daemonset, non-mirrored, non-pdb-assignedkube-system pod present: cluster-autoscaler-84984799fd-22j42 I1207 14:24:09.523438 1 scale_down.go:490] 2 nodes found to be unremovable in simulation, will re-check them at 2018-12-07 14:29:09.231201368 +0000 UTC m=+8976.856144807 </code></pre> <p>All system pods have <code>minAvailable: 1</code> PDB assigned manually. I can imagine that this is not working for PODs with only a single replica like the <code>metrics-server</code>:</p> <pre><code>❯ k get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-nodepool1-32797235-0 Ready agent 4h v1.11.4 10.240.0.4 &lt;none&gt; Ubuntu 16.04.5 LTS 4.15.0-1030-azure docker://3.0.1 aks-nodepool1-32797235-3 Ready agent 4h v1.11.4 10.240.0.6 &lt;none&gt; Ubuntu 16.04.5 LTS 4.15.0-1030-azure docker://3.0.1 ❯ ks get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cluster-autoscaler-84984799fd-22j42 1/1 Running 0 2h 10.244.1.5 aks-nodepool1-32797235-3 &lt;none&gt; heapster-5d6f9b846c-g7qb8 2/2 Running 0 1h 10.244.0.16 aks-nodepool1-32797235-0 &lt;none&gt; kube-dns-v20-598f8b78ff-8pshc 4/4 Running 0 3h 10.244.1.4 aks-nodepool1-32797235-3 &lt;none&gt; kube-dns-v20-598f8b78ff-plfv8 4/4 Running 0 1h 10.244.0.15 aks-nodepool1-32797235-0 &lt;none&gt; kube-proxy-fjvjv 1/1 Running 0 1h 10.240.0.6 aks-nodepool1-32797235-3 &lt;none&gt; kube-proxy-szr8z 1/1 Running 0 1h 10.240.0.4 aks-nodepool1-32797235-0 &lt;none&gt; kube-svc-redirect-2rhvg 2/2 Running 0 4h 10.240.0.4 aks-nodepool1-32797235-0 &lt;none&gt; kube-svc-redirect-r2m4r 2/2 Running 0 4h 10.240.0.6 aks-nodepool1-32797235-3 &lt;none&gt; kubernetes-dashboard-68f468887f-c8p78 1/1 Running 0 4h 10.244.0.7 aks-nodepool1-32797235-0 &lt;none&gt; metrics-server-5cbc77f79f-44f9w 1/1 Running 0 4h 10.244.0.3 aks-nodepool1-32797235-0 &lt;none&gt; tiller-deploy-57f988f854-z9qln 1/1 Running 0 4h 10.244.0.8 aks-nodepool1-32797235-0 &lt;none&gt; tunnelfront-7cf9d447f9-56g7k 1/1 Running 0 4h 10.244.0.2 aks-nodepool1-32797235-0 &lt;none&gt; </code></pre> <p>What needs be changed (number of replicas? PDB configuration?) for down-scaling to work?</p>
<p>Basically, this is an administration issue that arises when draining nodes whose pods are covered by a PDB (Pod Disruption Budget).</p> <p>This is because the evictions are forced to respect the PDB you specify.</p> <p>You have two options:</p> <p>Either force the hand:</p> <pre><code>kubectl drain foo --force --grace-period=0 </code></pre> <p>You can check the other options in the docs: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain</a></p> <p>or use the eviction API:</p> <pre><code>{
  "apiVersion": "policy/v1beta1",
  "kind": "Eviction",
  "metadata": {
    "name": "quux",
    "namespace": "default"
  }
}
</code></pre> <p>Either way, the drain or the eviction API attempts to delete the pods so they can be scheduled elsewhere before the node is completely drained.</p> <p>As mentioned in the docs, the API can respond in one of three ways:</p> <ol> <li>If the eviction is granted, then the pod is deleted just as if you had sent a DELETE request to the pod’s URL, and you get back 200 OK.</li> <li>If the current state of affairs wouldn’t allow an eviction by the rules set forth in the budget, you get back 429 Too Many Requests. This is typically used for generic rate limiting of any requests.</li> <li>If there is some kind of misconfiguration, like multiple budgets pointing at the same pod, you will get 500 Internal Server Error.</li> </ol> <p>For a given eviction request, there are two cases:</p> <ol> <li><p>There is no budget that matches this pod. In this case, the server always returns 200 OK.</p></li> <li><p>There is at least one budget. In this case, any of the three above responses may apply.</p></li> </ol> <p>If it gets stuck, then you might need to do it manually.</p> <p>You can read more <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">here</a> or <a href="https://banzaicloud.com/blog/drain/" rel="nofollow noreferrer">here</a>.</p>
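<p>In the autoscaler case above, the root cause is that a single-replica deployment with a PDB of <code>minAvailable: 1</code> can never be evicted. A sketch of a more permissive budget (the label selector has to be adapted to the actual pod labels):</p> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: metrics-server-pdb
  namespace: kube-system
spec:
  maxUnavailable: 1        # allows the single replica to be moved during scale-down
  selector:
    matchLabels:
      k8s-app: metrics-server
</code></pre> <p>Alternatively, raising the deployment to two replicas keeps <code>minAvailable: 1</code> satisfiable during evictions.</p>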
<p><strong>Version of Helm and Kubernetes</strong>: helm version: 2.12 kubernetes: 1.10.11</p> <p><strong>Which chart</strong>: stable/jenkins</p> <p><strong>What happened</strong>: Jenkins pod gets Init:CrashLoopBackOff.</p> <p>From the <code>descript pod</code> section, it says <code>/var/jenkins_config/apply_config.sh</code> failed with error code 1. </p> <p><strong>How to reproduce it</strong> (as minimally and precisely as possible):</p> <pre><code>helm install --name jenkins -f \ helm-values/jenkins.yaml stable/jenkins \ --namespace kube-system </code></pre> <p>jenkins.yaml</p> <pre><code># Default values for jenkins. # This is a YAML-formatted file. # Declare name/value pairs to be passed into your templates. # name: value ## Overrides for generated resource names # See templates/_helpers.tpl # nameOverride: # fullnameOverride: Master: Name: jenkins-master Image: "jenkins/jenkins" ImageTag: "jdk11" ImagePullPolicy: "Always" # ImagePullSecret: jenkins Component: "jenkins-master" UseSecurity: true # SecurityRealm: # Optionally configure a different AuthorizationStrategy using Jenkins XML # AuthorizationStrategy: |- # &lt;authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy"&gt; # &lt;denyAnonymousReadAccess&gt;true&lt;/denyAnonymousReadAccess&gt; # &lt;/authorizationStrategy&gt; HostNetworking: false AdminUser: admin # AdminPassword: &lt;defaults to random&gt; resources: requests: cpu: "500m" memory: "512Mi" limits: cpu: "2000m" memory: "2048Mi" # Environment variables that get added to the init container (useful for e.g. http_proxy) # InitContainerEnv: # - name: http_proxy # value: "http://192.168.64.1:3128" # ContainerEnv: # - name: http_proxy # value: "http://192.168.64.1:3128" # Set min/max heap here if needed with: JavaOpts: "-Xms512m -Xmx1024m" # JenkinsOpts: "" # JenkinsUrl: "" # If you set this prefix and use ingress controller then you might want to set the ingress path below # JenkinsUriPrefix: "/jenkins" # Enable pod security context (must be `true` if RunAsUser or FsGroup are set) UsePodSecurityContext: true # Set RunAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image. # When setting RunAsUser to a different value than 0 also set FsGroup to the same value: RunAsUser: 1000 FsGroup: 1000 ServicePort: 8080 # For minikube, set this to NodePort, elsewhere use LoadBalancer # Use ClusterIP if your setup includes ingress controller ServiceType: ClusterIP # Master Service annotations ServiceAnnotations: {} # Master Service Labels ServiceLabels: {} # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https # Used to create Ingress record (should used with ServiceType: ClusterIP) HostName: jenkins.aws.tapfeeds.com # NodePort: &lt;to set explicitly, choose port between 30000-32767 # Enable Kubernetes Liveness and Readiness Probes # ~ 2 minutes to allow Jenkins to restart when upgrading plugins. Set ReadinessTimeout to be shorter than LivenessTimeout. 
HealthProbes: true HealthProbesLivenessTimeout: 90 HealthProbesReadinessTimeout: 60 HealthProbeLivenessFailureThreshold: 12 SlaveListenerPort: 50000 DisabledAgentProtocols: - JNLP-connect - JNLP2-connect CSRF: DefaultCrumbIssuer: Enabled: true ProxyCompatability: true CLI: false # Kubernetes service type for the JNLP slave service # SETTING THIS TO "LoadBalancer" IS A HUGE SECURITY RISK: https://github.com/kubernetes/charts/issues/1341 SlaveListenerServiceType: ClusterIP SlaveListenerServiceAnnotations: {} LoadBalancerSourceRanges: - 0.0.0.0/0 # Optionally assign a known public LB IP # LoadBalancerIP: 1.2.3.4 # Optionally configure a JMX port # requires additional JavaOpts, ie # JavaOpts: &gt; # -Dcom.sun.management.jmxremote.port=4000 # -Dcom.sun.management.jmxremote.authenticate=false # -Dcom.sun.management.jmxremote.ssl=false # JMXPort: 4000 # Optionally configure other ports to expose in the Master container ExtraPorts: # - name: BuildInfoProxy # port: 9000 # List of plugins to be install during Jenkins master start InstallPlugins: - kubernetes:1.13.7 - workflow-job:2.30 - workflow-aggregator:2.6 - credentials-binding:1.17 - git:3.9.1 - blueocean:1.9.0 # Used to approve a list of groovy functions in pipelines used the script-security plugin. Can be viewed under /scriptApproval # ScriptApproval: # - "method groovy.json.JsonSlurperClassic parseText java.lang.String" # - "new groovy.json.JsonSlurperClassic" # List of groovy init scripts to be executed during Jenkins master start InitScripts: # - | # print 'adding global pipeline libraries, register properties, bootstrap jobs...' # Kubernetes secret that contains a 'credentials.xml' for Jenkins # CredentialsXmlSecret: jenkins-credentials # Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory, # useful to manage encryption keys used for credentials.xml for instance (such as # master.key and hudson.util.Secret) # SecretsFilesSecret: jenkins-secrets # Jenkins XML job configs to provision # Jobs: |- # test: |- # &lt;&lt;xml here&gt;&gt; CustomConfigMap: false # By default, the configMap is only used to set the initial config the first time # that the chart is installed. Setting `OverwriteConfig` to `true` will overwrite # the jenkins config with the contents of the configMap every time the pod starts. 
OverwriteConfig: false # Node labels and tolerations for pod assignment # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature NodeSelector: {} Tolerations: {} PodAnnotations: {} Ingress: ApiVersion: extensions/v1beta1 Annotations: kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" # Set this path to JenkinsUriPrefix above or use annotations to rewrite path Path: "/" TLS: - secretName: jenkins-ingress-tls hosts: - jenkins.aws.tapfeeds.com AdditionalConfig: {} Agent: Enabled: true Image: jenkins/jnlp-slave ImageTag: latest-jdk11 CustomJenkinsLabels: [] # ImagePullSecret: jenkins Component: "jenkins-slave" Privileged: false resources: requests: cpu: "200m" memory: "512Mi" limits: cpu: "500m" memory: "1024Mi" # You may want to change this to true while testing a new image AlwaysPullImage: false # Controls how slave pods are retained after the Jenkins build completes # Possible values: Always, Never, OnFailure PodRetention: Never # You can define the volumes that you want to mount for this container # Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret # Configure the attributes as they appear in the corresponding Java class for that type # https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes volumes: # - type: Secret # secretName: mysecret # mountPath: /var/myapp/mysecret NodeSelector: {} # Key Value selectors. Ex: # jenkins-agent: v1 Persistence: Enabled: true ## A manually managed Persistent Volume and Claim ## Requires Persistence.Enabled: true ## If defined, PVC must be created manually before volume will be bound # ExistingClaim: ## jenkins data Persistent Volume Storage Class ## If defined, storageClassName: &lt;storageClass&gt; ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS &amp; OpenStack) ## # StorageClass: "-" Annotations: {} AccessMode: ReadWriteOnce Size: 8Gi volumes: # - name: nothin # emptyDir: {} mounts: # - mountPath: /var/nothing # name: nothing # readOnly: true NetworkPolicy: # Enable creation of NetworkPolicy resources. 
Enabled: false # For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1' # For Kubernetes v1.7, use 'networking.k8s.io/v1' ApiVersion: networking.k8s.io/v1 ## Install Default RBAC roles and bindings rbac: install: false serviceAccountName: default # Role reference roleRef: cluster-admin # Role kind (RoleBinding or ClusterRoleBinding) roleBindingKind: ClusterRoleBinding </code></pre> <p><strong>Anything else we need to know</strong>:</p> <pre><code>kubectl logs jenkins-7bdb5b97b9-2h8bp </code></pre> <p>Error from server (BadRequest): container "jenkins" in pod "jenkins-7bdb5b97b9-2h8bp" is waiting to start: PodInitializing</p> <pre><code>kubectl logs jenkins-7bdb5b97b9-2h8bp -p </code></pre> <p>Error from server (BadRequest): previous terminated container "jenkins" in pod "jenkins-7bdb5b97b9-2h8bp" not found</p> <pre><code>kubectl describe pod jenkins-7bdb5b97b9-2h8bp </code></pre> <p>--</p> <pre><code>Name: jenkins-7bdb5b97b9-2h8bp Namespace: kube-system Node: ip-172-20-151-70.cn-northwest-1.compute.internal/172.20.151.70 Start Time: Mon, 10 Dec 2018 14:45:29 +0800 Labels: app=jenkins chart=jenkins-0.25.0 component=jenkins-jenkins-master heritage=Tiller pod-template-hash=3686165365 release=jenkins Annotations: checksum/config: e94dd0b017b820686f611035f38940ea53c172af104b0ee8da928e068a5966e8 Status: Pending IP: 100.96.5.13 Controlled By: ReplicaSet/jenkins-7bdb5b97b9 Init Containers: copy-default-config: Container ID: docker://c4dd267ad6c5400caba29f1aa1ff8f5b8fba2c2c6bb573ac4e9bb8bc2bc67cb7 Image: jenkins/jenkins:jdk11 Image ID: docker-pullable://jenkins/jenkins@sha256:cb7dfc139faf74eb37c860600ccd4a8c8df683699bc80db4b1766873c20de0c9 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh /var/jenkins_config/apply_config.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Mon, 10 Dec 2018 15:16:16 +0800 Finished: Mon, 10 Dec 2018 15:19:04 +0800 Ready: False Restart Count: 7 Limits: cpu: 2 memory: 2Gi Requests: cpu: 500m memory: 512Mi Environment: &lt;none&gt; Mounts: /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (rw) /var/jenkins_home from jenkins-home (rw) /var/jenkins_plugins from plugin-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-slv79 (ro) Containers: jenkins: Container ID: Image: jenkins/jenkins:jdk11 Image ID: Ports: 8080/TCP, 50000/TCP Host Ports: 0/TCP, 0/TCP Args: --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD) --argumentsRealm.roles.$(ADMIN_USER)=admin State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 2 memory: 2Gi Requests: cpu: 500m memory: 512Mi Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=12 Readiness: http-get http://:http/login delay=60s timeout=1s period=10s #success=1 #failure=3 Environment: JAVA_TOOL_OPTIONS: -Xms512m -Xmx1024m JENKINS_OPTS: ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /usr/share/jenkins/ref/plugins/ from plugin-dir (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (ro) /var/jenkins_home from jenkins-home (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-slv79 (ro) Conditions: Type Status Initialized False Ready False PodScheduled True Volumes: jenkins-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: 
jenkins Optional: false plugin-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: secrets-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: jenkins-home: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: jenkins ReadOnly: false default-token-slv79: Type: Secret (a volume populated by a Secret) SecretName: default-token-slv79 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulMountVolume 35m kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal MountVolume.SetUp succeeded for volume "plugin-dir" Normal SuccessfulMountVolume 35m kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal MountVolume.SetUp succeeded for volume "secrets-dir" Normal SuccessfulMountVolume 35m kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal MountVolume.SetUp succeeded for volume "jenkins-config" Normal SuccessfulMountVolume 35m kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-slv79" Normal SuccessfulAttachVolume 35m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-246ced27-fc43-11e8-bf48-0204830a75be" Normal Scheduled 35m default-scheduler Successfully assigned jenkins-7bdb5b97b9-2h8bp to ip-172-20-151-70.cn-northwest-1.compute.internal Normal SuccessfulMountVolume 35m kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal MountVolume.SetUp succeeded for volume "pvc-246ced27-fc43-11e8-bf48-0204830a75be" Normal Pulling 26m (x4 over 35m) kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal pulling image "jenkins/jenkins:jdk11" Normal Pulled 26m (x4 over 35m) kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal Successfully pulled image "jenkins/jenkins:jdk11" Normal Created 26m (x4 over 35m) kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal Created container Normal Started 26m (x4 over 35m) kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal Started container Warning BackOff 57s (x57 over 30m) kubelet, ip-172-20-151-70.cn-northwest-1.compute.internal Back-off restarting failed container </code></pre>
<p>I figured out this issue with the steps below:</p> <ol> <li>SSH into the k8s node</li> <li>Run <code>journalctl -fu docker.service</code></li> </ol> <p>It returned the error: <code>Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting</code>. </p> <p>gcr.io is a Google site, which is blocked in China. I need a VPN to get through.</p>
<p>I installed <code>ingress-nginx</code> in a cluster. I tried exposing the service with the <code>type: NodePort</code> option, but this only allows a port range between <code>30000-32767</code> (AFAIK)... I need to expose the service at port <code>80</code> for HTTP and <code>443</code> for TLS, so that I can point <code>A records</code> for the domains directly at the service. Does anyone know how this can be done?</p> <p>I tried <code>type: LoadBalancer</code> before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress controller and expose that directly on <code>80/443</code>.</p>
<p>If you are on bare metal, change your ingress-controller service type to <code>NodePort</code> and add a reverse proxy to route traffic to your ingress-controller service on the selected <code>NodePort</code>.</p> <p>As <a href="https://stackoverflow.com/users/9581141/pramod-v">@Pramod V</a> answered, if you use <code>externalIP</code> in the ingress-controller service, you lose the real remote address in your endpoints.</p> <p>A more complete answer can be found <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">here</a>.</p>
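<p>As a sketch of the reverse-proxy part (node IPs and NodePorts are placeholders), an nginx <code>stream</code> block in front of the cluster can pass TCP straight through to the ingress controller's NodePorts, so TLS still terminates inside the cluster:</p> <pre><code># nginx.conf on the proxy host (top level, next to the http block)
stream {
    upstream ingress_http {
        server 10.0.0.11:30080;   # worker node IP : http NodePort
        server 10.0.0.12:30080;
    }
    upstream ingress_https {
        server 10.0.0.11:30443;   # worker node IP : https NodePort
        server 10.0.0.12:30443;
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }
    server {
        listen 443;
        proxy_pass ingress_https;
    }
}
</code></pre> <p>If the real client address matters, the proxy protocol can be enabled on both nginx and the ingress controller.</p>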
<p>So I have been building my application mostly as a 12-factor app and am now looking at the config part.</p> <p>Right now, as it stands, I have separate config files for dev and production, and through the build process we either build a dev or a production image. The code is 100% the same; the only thing that changes is the config.</p> <p>Now, I fully understand that in a 12-factor app the config should come from an external source, such as environment variables, or maybe a safe store like Vault, etc.</p> <p>What the various articles and blogs fail to mention about the config is how the config is stored/processed. If the code is separated into its own git repo and it has no config stored with it, then how do we handle the config?</p> <p>Do we store the actual config values in a separate git repo and then somehow merge/push/execute those on the target environment (Kubernetes ConfigMap, Marathon JSON config, Vault, etc.) through the build process using some kind of trigger?</p>
<p>There is no standard, but what I've been observing are some common behaviors like:</p> <ol> <li><p><strong>Sensitive information</strong> never gets into the version control system, especially git, which is a DVCS (you can clone the repo to other locations). If you don't follow that, remember that our existing "security system" is based on the infeasibility of reading encrypted info within a certain time, but at some point you might be able to read it. Usually on Kubernetes I see operators managing the service accounts across multiple namespaces, with everything else only referring to the service account; tools like KMS, cert-manager, Vault, etc. are welcome. </p></li> <li><p><strong>Configuration</strong> like env vars and endpoints is stored and versioned with its own "lifecycle". </p></li> </ol> <p><a href="https://12factor.net/config" rel="nofollow noreferrer">12factor</a> is not meant to separate the configuration of your app from your <strong>repository</strong>; instead it suggests not putting it into your <strong>app</strong> (such as into your container or even the binary distribution). </p> <p>In fact, if you want to use a separate repo only for config you can do it, but if you want to keep the configuration alongside your project source code, you can do that as well. It is more a decision based on the size of the project, complexity, segregation of duties and team context (IMHO).</p> <p>In my case of study, for instance, it makes sense to separate config into a dedicated repository, as the production environment has more than 50 clusters, each with its own isolation stack, and there are different teams managing their own services and using common backing services (db, api, streams...). In my opinion, as things get more complex and cross-shared, it makes more sense to separate config into an independent repository, as there are several teams and resources over multiple clusters.</p>
<p>I have followed the steps in <a href="https://www.kubeflow.org/docs/started/getting-started-gke/" rel="nofollow noreferrer">Kubernetes Engine for Kubeflow</a>.</p> <p>The deployment went fine and all pods/services are up, including the endpoint at <code>https://&lt;name&gt;.endpoints.&lt;project&gt;.cloud.goog/</code>, with the correct <code>&lt;name&gt;</code> and <code>&lt;project&gt;</code> of course.</p> <p>When I went to the above url, I was redirected to a "Sign in with Google" page. I assumed that OAuth was also configured correctly.</p> <p>However, after signing in, I was shown an Access Denied page below.</p> <p>Is there another way to provide access? I thought it was handled by OAuth.</p> <p><img src="https://i.stack.imgur.com/T7L7m.jpg" alt="Access Denied" title="Access Denied"></p>
<p>The deployment created by <code>kfctl.sh</code> in "Deploy Kubeflow on GKE using the command line" also creates a load balancer resource for the ingress into the cluster and secures it using Cloud Identity-Aware Proxy (IAP).</p> <p>To allow access to the resource for new users, go to:</p> <p>Google Cloud Console > IAM &amp; Admin > Identity-Aware Proxy</p> <p>Select the desired resource and click "Add Member".</p> <p>Fill in the user in the "Access Denied" page and select "Cloud IAP > IAP-Secured Web App User" for role.</p> <p>Once the policy change is propagated, the user will be able to access the URL successfully.</p>
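<p>The same grant can presumably also be done from the command line; note that this variant applies at the project level rather than per IAP resource (project ID and user are placeholders):</p> <pre><code>gcloud projects add-iam-policy-binding &lt;project-id&gt; \
    --member=user:someone@example.com \
    --role=roles/iap.httpsResourceAccessor
</code></pre>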
<p>I have a kubernetes cluster perfectly working fine. I use below command to get the status of all the pods running in my namespace <code>dataspace</code></p> <pre><code>kubectl get pods -n dataspace -o wide | sort -k7 </code></pre> <p>Above command gives me the output with below columns.</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE deployment-qw3ed 3/3 Running 3 3d 10.38.0.10 MACHINE01 </code></pre> <p>Can we adjust above command so that the output looks like below. <code>NODE</code> column should come after the <code>STATUS</code>:</p> <pre><code>NAME READY STATUS NODE RESTARTS AGE IP deployment-qw3ed 3/3 Running MACHINE01 3 3d 10.38.0.10 </code></pre>
<p>You can re-arrange the output with <code>awk</code> which loses the pretty column separation but you can then make it pretty again with <code>column</code>:</p> <p><code>kubectl get pods -n dataspace -o wide | sort -k7 | awk '{ print $1, $2, $3, $7, $4, $5, $6}' | column -t</code></p>
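<p>If the exact <code>READY</code>/<code>RESTARTS</code> columns are not needed, <code>kubectl</code> can also build the layout directly with custom columns, which avoids the shell pipeline (column order is whatever you list):</p> <pre><code>kubectl get pods -n dataspace -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName,IP:.status.podIP --sort-by=.spec.nodeName
</code></pre> <p>READY and RESTARTS are aggregated per container, so they are awkward to express as a single JSONPath column; for those, the <code>awk</code>/<code>column</code> approach above is the simpler option.</p>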
<p>I´d like to login on my newly installed kubernetes dashboard (k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0) via token and it doesn´t work.</p> <p>I have the exact same problem as on <a href="https://stackoverflow.com/questions/50553233/how-to-log-in-to-kubernetes-dashboard-ui-with-service-accounts-token/50554037#50554037">How to log in to Kubernetes Dashboard UI with Service Account&#39;s token</a> but I verified my token and it fits. I also DON´T get the "Authentication failed..." error.</p> <p>When I enter the token just nothing happens, but I see new entries in the logfile:</p> <pre><code>{"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/login request from 192.168.178.10:60092: { contents hidden }\n","stream":"stdout","time":"2018-12-07T14:59:49.655298186Z"} {"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.655840444Z"} {"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 POST /api/v1/login request from 192.168.178.10:60092: { contents hidden }\n","stream":"stdout","time":"2018-12-07T14:59:49.665272088Z"} {"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.670318659Z"} {"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 192.168.178.10:60092: {}\n","stream":"stdout","time":"2018-12-07T14:59:49.688294191Z"} {"log":"2018/12/07 14:59:49 [2018-12-07T14:59:49Z] Outcoming response to 192.168.178.10:60092 with 200 status code\n","stream":"stdout","time":"2018-12-07T14:59:49.691135283Z"} {"log":"2018/12/07 14:59:52 Metric client health check failed: the server could not find the requested resource (get services heapster). 
Retrying in 30 seconds.\n","stream":"stdout","time":"2018-12-07T14:59:52.237740364Z"} </code></pre> <p>What I´ve done:</p> <pre><code>kubectl create serviceaccount myservice kubectl get serviceaccount myservice -o yaml </code></pre> <p>Token:</p> <pre><code>TOKEN=$(echo "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltMTVjMlZ5ZG1salpTMTBiMnRsYmkxa09ISnlaQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp0ZVhObGNuWnBZMlVpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUpoWWpFeVlUVmpOUzFtWVRKakxURXhaVGd0WVRZNE55MHdNRFV3TlRZNE9EZzRNak1pTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpHVm1ZWFZzZERwdGVYTmxjblpwWTJVaWZRLm0yR2F4VmNsOTYzVkVjbUltb3dzY25aeWdrd2hQTTBlZmNjUnVoaGNmdlNWXzU5Y29wNkdMc2t0bTRtY1FqcjBnaWhzMTZXZjFrd1VkVjBlTFJNVE1zaWZudlQxR2J6Smd3ZURydTVMbHVteW5tY3Y3Sm1GVDFGLXpJSjI0SFRERVhlVTNtMV9OVjJHcUZHdTNmVTlxOVFscG44ZVRxR2FuNDZLdEM2OTZGUVBqbjFhVnRER28wMlVrU2NwVGRHckNkenFMUjFBT0ZMTXVyUWFjWldIbHlhTmZ4Sy02bU16aDBZdG1seHdfcEFSeVlySXJMVlR2dXlLeDRmQzRvWUx2elVia1pkWmp1eUlJWnFmYXVUMTFKQUFad243MHZyZW1xbVVHTXBsdXNaYVdiU2h3SlJkRWZmMzdjTEd3R3lwdU1SeXI2a3NsVlJiLW50eXdWbHYxQQ==" | base64 -d) echo $TOKEN eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im15c2VydmljZS10b2tlbi1kOHJyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJteXNlcnZpY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYjEyYTVjNS1mYTJjLTExZTgtYTY4Ny0wMDUwNTY4ODg4MjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpteXNlcnZpY2UifQ.m2GaxVcl963VEcmImowscnZygkwhPM0efccRuhhcfvSV_59cop6GLsktm4mcQjr0gihs16Wf1kwUdV0eLRMTMsifnvT1GbzJgweDru5Llumynmcv7JmFT1F-zIJ24HTDEXeU3m1_NV2GqFGu3fU9q9Qlpn8eTqGan46KtC696FQPjn1aVtDGo02UkScpTdGrCdzqLR1AOFLMurQacZWHlyaNfxK-6mMzh0Ytmlxw_pARyYrIrLVTvuyKx4fC4oYLvzUbkZdZjuyIIZqfauT11JAAZwn70vremqmUGMplusZaWbShwJRdEff37cLGwGypuMRyr6kslVRb-ntywVlv1A </code></pre> <p>I start </p> <pre><code>kubectl proxy --port=9999 --address='192.168.178.10' --accept-hosts="^*$" </code></pre> <p>Does it work just on localhost (I don´t want to install a browser nor desktop)?</p> <p>I´d also like to know, to get the dashboard permanently run, as after "ctrl + c" the "kubectl proxy" command.</p> <p>I found the workarroud</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system EOF </code></pre> <p>Running this command and "skip" on the dashboard and I´m logged in, but how to get rid of this user as I can´t find it anymore via</p> <pre><code>kubectl get serviceaccounts --all-namespaces </code></pre> <p>nor</p> <pre><code>kubectl get serviceaccounts -n kube-system </code></pre> <p>?</p> <p>How to get it run via https?</p> <p>Thanks in advance Tom</p>
<p>I found the answer to all my questions at</p> <p><a href="http://www.joseluisgomez.com/containers/kubernetes-dashboard/" rel="nofollow noreferrer">http://www.joseluisgomez.com/containers/kubernetes-dashboard/</a></p> <p>Access via <code>kubectl proxy</code> is not recommended for productive use (but it is unfortunately the only way explained in the Kubernetes documentation).</p> <p>It's possible to access the dashboard out of the box via HTTPS, but some additional steps are required.</p> <ol> <li>Create a certificate:</li> </ol> <pre><code>grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d &gt;&gt; kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d &gt;&gt; kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
</code></pre> <p>You'll get a kubecfg.p12 which you have to download from the Kubernetes master and install on your client (double-click, next, next, next - the Chrome browser is recommended).</p> <ol start="2"> <li>Install a service account with a ClusterRoleBinding:</li> </ol> <pre><code>cat &lt;&lt;EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
</code></pre> <pre><code>cat &lt;&lt;EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
</code></pre> <ol start="3"> <li>Get the bearer token for the account "admin-user":</li> </ol> <pre><code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
</code></pre> <ol start="4"> <li>Access <code>https://&lt;master-ip&gt;:&lt;apiserver-port&gt;/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy</code>, choose "Token", put in the bearer token you got from the previous step, and you're done.</li> </ol> <p>Note: Information concerning your cluster can be obtained via <code>kubectl cluster-info</code>.</p>
<p>I am using Kubernetes v1.13.0. Since Heapster is deprecated as of v1.11, I am stuck enabling the metrics API server for cluster metrics in order to implement HPA.</p> <p><a href="https://i.stack.imgur.com/qEYBB.png" rel="noreferrer">Attached image for reference</a> </p> <p>Can someone guide me step by step through enabling the metrics API server, or point me to a demo video? It would be really helpful for proceeding further. </p> <p>Please let me know if any further information is needed.</p> <p>Thanks Deena </p>
<p>I am able to implement HPA using <code>metrics-server</code>, as Heapster is deprecated. I followed these steps:</p> <ol> <li>Clone the metrics-server github repo: <code>git clone https://github.com/kubernetes-incubator/metrics-server.git</code></li> </ol> <p>Go into the directory with <code>cd deploy/1.8+</code> and apply the following yaml files:</p>
<pre><code>[root@ip-10-0-1-91 1.8+]# kubectl apply -f aggregated-metrics-reader.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f auth-reader.yaml
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f auth-delegator.yaml
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-apiservice.yaml
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f resource-reader.yaml
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-server-deployment.yaml
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-server-service.yaml
service/metrics-server created
</code></pre>
<p>Now create a pod you want to test autoscaling with (taken from the official Kubernetes docs):</p>
<pre><code>[root@ip-10-0-1-91 auto]# kubectl run --generator=run-pod/v1 php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
service/php-apache created
deployment.apps/php-apache created
</code></pre>
<p>Now create an autoscaler for the deployment:</p>
<pre><code>[root@ip-10-0-1-91 auto]# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
</code></pre>
<p>Now check the HPA to see whether metrics are coming through:</p>
<pre><code>[root@ip-10-0-1-91 manifests]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          2m
</code></pre>
<p>Now generate load from another window using:</p>
<pre><code>kubectl run -i --tty load-generator --image=busybox /bin/sh
</code></pre>
<p>It will open a sh terminal, and you can generate load from that terminal using:</p>
<pre><code>while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
</code></pre>
<p>It will take a minute or so to put enough load on your pod, and then you will see a boom:</p>
<pre><code>[root@ip-10-0-1-91 manifests]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   120%/50%   1         10        4          7m
</code></pre>
<p>And the pods scaling:</p>
<p><a href="https://i.stack.imgur.com/uPffU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uPffU.png" alt="enter image description here"></a></p>
<p>Hope this helps to get your HPA working.</p>
<p>EDIT:</p>
<p>Replace the <code>metrics-server-deployment.yaml</code> file in <code>deploy/1.8+</code> with the following yaml file:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
</code></pre>
<p>Also, enable <code>--authentication-token-webhook</code> in kubelet.conf; then you will be able to get the HPA.</p>
<p>EDIT2: You need to set the following properties in the deployment file (in your case it is tomcat) for which you are creating the HPA; only then can your HPA fetch metrics from your deployment.</p>
<pre><code>resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
</code></pre>
<p>I am using a Kubernetes cluster to deploy an image with <code>kubectl create -f dummy.yaml</code>. My image is public on Docker Hub and its size is 1.3 GB. The image is pulled successfully, but the pod does not run; it goes into <code>CrashLoopBackOff</code>. After I run the deployment creation command <code>kubectl create -f dummy.yaml</code> I get:</p>
<pre><code>Name                         READY     STATUS             RESTARTS   AGE
dummy-ser-5459bf444d-9b7sz   0/1       CrashLoopBackOff   118        10h
</code></pre>
<p>I tried using<br> <code>command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 30; done;" ]</code> in my yaml file. It works with a 700 MB image, but it shows CrashLoopBackOff when I use it with the other, 1.3 GB image. It seems the container cannot run after pulling, even though the image was pulled successfully. </p>
<p>The output of describe pods shows: </p>
<pre><code>Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              12m                default-scheduler  Successfully assigned dummy-ser-7797db4cd4-djqdz to node02
  Normal   SuccessfulMountVolume  12m                kubelet, node02    MountVolume.SetUp succeeded for volume "default-token-8p9lq"
  Normal   Created                1m (x4 over 2m)    kubelet, node02    Created container
  Normal   Started                1m (x4 over 2m)    kubelet, node02    Started container
  Warning  BackOff                53s (x8 over 2m)   kubelet, node02    Back-off restarting failed container
  Normal   Pulling                41s (x5 over 12m)  kubelet, node02    pulling image "xxx/dummyenc:bani"
  Normal   Pulled                 40s (x5 over 2m)   kubelet, node02    Successfully pulled image "xxx
</code></pre>
<p>Thank you in advance</p>
<p>I fixed this problem. I got this error because the image was not compatible with the hardware I was trying to run it on (an ARMv7 Raspberry Pi). I had built the image on 64-bit Ubuntu using docker build with a Dockerfile, so that image cannot run on the Raspberry Pi. </p>
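<p>If you run into the same situation, a quick sketch of how to confirm an architecture mismatch is to compare the node's platform with the image's (the image name below is taken from the question and is only illustrative):</p>
<pre><code># on the node (e.g. the Raspberry Pi):
uname -m
# armv7l

# after pulling the image, inspect the platform it was built for:
docker image inspect --format '{{.Os}}/{{.Architecture}}' xxx/dummyenc:bani
# linux/amd64  (built for x86_64, so it cannot run on ARMv7)
</code></pre>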
<p>Is it possible to use a local image in a Kubernetes pod yml file?</p>
<p>This is my pod yml file, and the question is whether I can use a local image in the containers' image tag (locally I have all the files for my API project, Dockerfile, etc.).</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: api-service
spec:
  selector:
    matchLabels:
      api-name: api-service
  replicas: 2
  template:
    metadata:
      labels:
        api-name: api-service
    spec:
      containers:
        - name: api-service
          image: #HERE
</code></pre>
<p>By local you mean it doesn't pull from Docker Hub or any public registry. Yes, it's possible if you run a single-node Kubernetes: you will utilize the Docker cache on the node where your kubelet is running.</p> <p>The first thing is to set <code>imagePullPolicy: IfNotPresent</code>. Then, when you build your image, you need to point your build at the Docker instance your Kubernetes is using.</p> <p>I do this mostly with minikube, so the dev iteration is faster without pushing to my registry.</p>
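<p>As a rough sketch of that workflow on minikube (the image name here is just a placeholder):</p>
<pre><code># point the docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# build the image directly into that daemon's cache
docker build -t api-service:dev .
</code></pre>
<p>and then reference the locally built tag in the pod spec:</p>
<pre><code>containers:
  - name: api-service
    image: api-service:dev
    imagePullPolicy: IfNotPresent   # use the cached image, never try to pull it
</code></pre>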
<p>I am trying to deploy an application into my Kubernetes cluster. It is a multi-node cluster. There are <code>3 m4.2xlarge</code> aws instances.</p>
<pre><code>m4.2xlarge
vCPU   :- 8
Memory :- 32
</code></pre>
<p>Now, in my <code>deployment.yaml</code> file for that service, I have mentioned</p>
<pre><code>limit:
  cpu: 11
request:
  cpu: 11
</code></pre>
<p>It gives the error <code>insufficient cpu</code> and the container is not scheduled. I already have <code>(8*3)=24</code> CPUs available in total and I requested <code>11</code> CPUs out of that. Shouldn't it share the CPU resources across nodes? Are the <code>limit</code> and <code>request</code> CPU values applicable to the containers per node? That is, should I have at least <code>11</code> CPUs per aws instance?</p>
<p>A Pod is scheduled on a single Node. The resource <code>requests:</code> help decide where it can be scheduled. If you say <code>requests: {cpu: 11}</code> then there must be some single node with 11 (unreserved) cores available; but if your cluster only has 8-core m4.2xlarge nodes, no single node will be able to support this. Kubernetes can’t “aggregate” cores across nodes in any useful way at this level.</p> <p>If you’re requesting a lot of CPU because your process has a lot of threads to do concurrent processing, consider turning the number of threads down (maybe even to just 1) but then changing the <code>replicas:</code> in a Deployment spec to run many copies of it. Each individual Pod will get scheduled on a single Node, but with many replicas you’ll get many Pods which can be spread across the three Nodes.</p> <p>If your process really needs more than 8 cores to run, then you need individual systems with more than 8 cores; consider an m4.4xlarge (same RAM-to-CPU ratio) or a c4.4xlarge (same total RAM, twice the cores).</p>
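<p>As a sketch of the replicas approach described above (the names and numbers are only illustrative): instead of one Pod asking for 11 cores, run several smaller replicas that each fit on an 8-core m4.2xlarge:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                  # three Pods, spread over the nodes by the scheduler
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:latest   # placeholder image
          resources:
            requests:
              cpu: "4"         # 3 x 4 = 12 cores in total, but only 4 per Pod
            limits:
              cpu: "4"
</code></pre>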
<p>I am trying to see all the Kubernetes objects used to create a deployment. How can I see the objects created by <code>kubectl run mydep --image=busybox</code>?</p>
<p>After running <code>kubectl run mydep --image=busybox</code> , Existing objects can be viewed in a ready to use YAML output using below command:</p> <pre><code> kubectl get deployments mydep --export -o yaml </code></pre>
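<p>If the goal is to see every object that gets created (not only the Deployment itself), here is a sketch of how to list them as well; it assumes the default <code>run=mydep</code> label that <code>kubectl run</code> puts on the objects it creates:</p>
<pre><code># the Deployment, its ReplicaSet and the Pods it manages all carry the run=mydep label
kubectl get deployment,replicaset,pods -l run=mydep

# full YAML of each of them
kubectl get deployment,replicaset,pods -l run=mydep -o yaml
</code></pre>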
<p>I try to change default context in kubernetes but I get <code>config.lock: permission denied</code> error.</p> <pre><code>$ kubectl config set-context $(kubectl config current-context) --namespace=custom_namespace error: open /home/vagrant/.kube/config.lock: permission denied </code></pre>
<p>Make sure that the <code>kubernetes config directory</code> has the same permissions as the <code>kubernetes config file</code>.</p>
<p><strong>Solution</strong>: besides the steps specified in the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#more-information" rel="nofollow noreferrer">kubernetes documentation</a>,</p>
<pre><code>$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>also change the ownership of the <code>$HOME/.kube/</code> directory itself:</p>
<pre><code>$ sudo chown -R $(id -u):$(id -g) $HOME/.kube/
</code></pre>
<p>I wrote deployment.yml file in k8s:- </p> <pre><code>spec: containers: - name: consul image: xyz ports: - name: http containerPort: 8500 - name: http1 containerPort: 53 env: - name: SERVICE_8500_IGNORE value: "true" - name: CONSUL_LOCAL_CONFIG value: "{"node_name": "docker","acl_datacenter": "dc1", "acl_default_policy": "deny", "acl_down_policy": "deny", "acl_master_token": "fleeture", "acl_agent_token":"xyz", "enable_script_checks": true}" </code></pre> <p>After running I am getting errors </p> <blockquote> <p>Error : Unexpected characters near "node_name": "docker","acl_datacenter": "dc1", "acl_default_policy": "deny", "acl_down_policy": "deny", "acl_master_token": "xyz", "acl_agent_token":"xyz", "enable_script_checks": true}"". Line : 14 value: "{"node_name": "docker","acl_datacenter": "dc1", "acl_default_policy": "deny", "acl_down_policy": "deny", "acl_master_token": "fleeture", "acl_agent_token":"xyz", "enable_script_checks": true}"</p> </blockquote> <p>Any Idea how to write ENV for line 14?</p>
<p>You need to use <code>\</code> to escape the double-quotes within the value for <code>CONSUL_LOCAL_CONFIG</code> so that entry becomes:</p> <pre><code>- name: CONSUL_LOCAL_CONFIG value: "{\"node_name\": \"docker\",\"acl_datacenter\": \"dc1\", \"acl_default_policy\": \"deny\", \"acl_down_policy\": \"deny\", \"acl_master_token\": \"fleeture\", \"acl_agent_token\":\"xyz\", \"enable_script_checks\": true}" </code></pre>
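<p>As an alternative sketch, a YAML single-quoted scalar also works here and avoids the backslashes, because double quotes need no escaping inside single quotes:</p>
<pre><code>- name: CONSUL_LOCAL_CONFIG
  value: '{"node_name": "docker", "acl_datacenter": "dc1", "acl_default_policy": "deny", "acl_down_policy": "deny", "acl_master_token": "fleeture", "acl_agent_token": "xyz", "enable_script_checks": true}'
</code></pre>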
<p>According to the <a href="https://istio.io/docs/concepts/traffic-management/#virtual-services" rel="nofollow noreferrer">Istio documentation</a>, VirtualServices should be able to route requests to "a completely different service than was requested". I would like to use this feature give services different aliases in different applications.</p> <p>I'm starting with a VirtualService definition like this:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: my-vs spec: hosts: - my-alias http: - route: - destination: host: my-service </code></pre> <p>The intention is that a client pod in the mesh should be able to send requests to <a href="http://my-alias" rel="nofollow noreferrer">http://my-alias</a> and have them routed to my-service. In the future I'll expand this with match rules to make the alias behave differently for different clients, but even the simple version isn't working.</p> <p>With no other setup, the client fails to resolve my-alias via DNS. I can solve this by adding a selectorless k8s service named my-alias so its DNS resolves, but then the VirtualService doesn't seem to do the redirect. If I add an external host like google.com to the VirtualService, then it does successfully redirect any requests to google.com over to my-service. Using the full hostname (my-alias.default.svc.cluster.local) doesn't help.</p> <p>So it seems like the VirtualService is not allowing me to redirect traffic bound for another service in the mesh. Is this expected, and is there a way I can work around it?</p>
<p>You have to rewrite <code>authority</code> (the HOST header) of your request, since the HTTP routing in Istio is performed by the HOST header.</p>
<p>Add an <a href="https://preliminary.istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite" rel="nofollow noreferrer">HTTPRewrite</a> clause to the <code>http</code> clause:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-vs
spec:
  hosts:
  - my-alias
  http:
  - route:
    - destination:
        host: my-service
    rewrite:
      authority: my-service
</code></pre>
<p>I am creating a kubernetes cluster in <code>AWS</code> with <code>kops</code>. </p>
<p>I understand that I can change the nodes' instance type using the following command.</p>
<p><code>kops edit ig nodes --name ${NAME}</code></p>
<p>It will open this file and I can edit the config:</p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-12-07T11:09:37Z
  labels:
    kops.k8s.io/cluster: &lt;cluster-name&gt;.k8s.local
  name: nodes
spec:
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: t2.large
  maxSize: 5
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - eu-west-2a
  - eu-west-2b
  - eu-west-2c
</code></pre>
<p>What if I want to create one node of <code>xlarge</code> type and two others of <code>large</code> type? I don't see any option to set these kinds of values.</p>
<p>The reason for this is that in my current infrastructure the DB resides on a 2xlarge instance and all the other small components are on large instances, so I want the nodes to be as big as possible. Another solution would be to create only two 2xlarge nodes instead of three and let Kubernetes manage the deployment. </p>
<p>Create a new instance group, then preview and finally apply the changes.</p>
<pre><code>Create:
$ kops create ig &lt;new-instance-group-name&gt; --state &lt;your-s3-bucket-name&gt;

Preview:
$ kops update cluster --name &lt;cluster-name&gt; --state="&lt;your-s3-bucket-name&gt;"

Apply:
$ kops update cluster --name &lt;cluster-name&gt; --state="&lt;your-s3-bucket-name&gt;" --yes
</code></pre>
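<p>To get the mix described in the question (one xlarge node plus two large nodes), a sketch would be to keep the existing <code>nodes</code> group at two large instances and give the new group a single xlarge instance. The spec that <code>kops edit ig &lt;new-instance-group-name&gt;</code> opens would then look roughly like this (the group name, machine type and subnet are placeholders):</p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: &lt;cluster-name&gt;.k8s.local
  name: nodes-xlarge          # the new instance group
spec:
  image: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17
  machineType: m4.xlarge      # a different size from the existing "nodes" group
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-west-2a
</code></pre>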
<p>We have a Rancher cluster composed of 8 machines: </p>
<ul> <li>1 rancher server</li> <li>2 controlplanes</li> <li>3 etcd</li> <li>2 workers</li> </ul>
<p>Recently, we lost our Rancher server. In the <code>.kube/config</code> generated by Rancher, the server referenced is the Rancher server:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XXX
    server: https://1.2.3.4/k8s/clusters/c-qz7pq
  name: k8s
</code></pre>
<p>However, even without the Rancher server up, the Kubernetes cluster is still up and running, but it's not possible to connect to it with the kubectl command line because the Rancher server referenced in the <code>.kube/config</code> is down. I tried putting the control plane IP in place of the Rancher server IP in the <code>.kube/config</code>, but it doesn't work.</p>
<p>Is there a way to address the Kubernetes API directly when working with Rancher?</p>
<p>Rancher version: 2.1.3</p>
<p>Kubernetes version: 1.11.5</p>
<p>There currently is not a way to do this, but we are working on it and you can track its progress here: <a href="https://github.com/rancher/rancher/issues/13698" rel="nofollow noreferrer">https://github.com/rancher/rancher/issues/13698</a></p> <p>If the cluster is provided by GKE, AKS, or EKS, you have the direct access provided by the cloud providers. Meaning, you can log into their UIs and basically obtain a super admin level kubeconfig</p> <p>For RKE based clusters, a direct-to-k8s, super-admin kubeconfig is not easily exposed, but this would be addressed by the above feature I mentioned</p>
<p>Why service external IP address is in a pending state?</p> <p>Should I enable any load balancer for my kubernetes master? </p> <p>I'm using kubernetes on digital ocean cloud.</p> <p><a href="https://i.stack.imgur.com/BgJ3Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BgJ3Q.png" alt="enter image description here"></a></p>
<p>To provision load balancers or persistent volumes in the cloud, Kubernetes uses the cloud controller manager. Follow this guide to learn more: <a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="noreferrer">Kubernetes Cloud Controller Manager</a></p> <p>It might be that at the time of setting up your cluster you did not specify any cloud controller to use, which is why the service is stuck in the pending state.</p> <p>After configuring a cloud controller manager you should be able to provision a load balancer and see an external IP.</p> <p>Also, this answer might help: <a href="https://stackoverflow.com/questions/35884643/kubernetes-external-load-balancer-service-on-digitalocean?rq=1">Kubernetes External Load Balancer Service on DigitalOcean</a></p>
<p>The metric <code>kube_pod_container_resource_requests_cpu_cores</code> from <code>kube-state-metrics</code> comes with the labels: </p> <pre><code>kube_pod_container_resource_requests_cpu_cores{app="prometheus",chart="prometheus-8.1.0",component="kube-state-metrics",container="autoscaler",heritage="Tiller",instance="10.233.65.93:8080",job="kubernetes-service-endpoints",kubernetes_name="prometheus-kube-state-metrics",kubernetes_namespace="kube-system",kubernetes_node="k8st01",namespace="kube-system",node="k8snode",pod="kubedns-autoscaler-5db95c6cf5-cvqz4",release="prometheus"} </code></pre> <p>I want to make another label pod_name from pod , with the same values. In the end i want to have both <code>pod="kubedns-autoscaler-5db95c6cf5-cvqz4"</code> and <code>pod_name="kubedns-autoscaler-5db95c6cf5-cvqz4"</code></p> <p>I used the followimng </p> <pre><code>relabel_configs: - source_labels: [pod] target_label: pod_name </code></pre> <p>but it didn't work.</p>
<p>Try <code>metric_relabel_configs</code> instead. <code>relabel_configs</code> is applied to the target's labels before the scrape, so it cannot see the <code>pod</code> label, which only exists on the scraped samples; <code>metric_relabel_configs</code> is applied to the samples after the scrape. Your config block would look something like this:</p>
<pre><code>metric_relabel_configs:
  - source_labels: [pod]
    target_label: pod_name
</code></pre>
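<p>For context, a sketch of where that block sits in <code>prometheus.yml</code>, assuming the <code>kubernetes-service-endpoints</code> job from your metric's labels is the one scraping kube-state-metrics:</p>
<pre><code>scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # ... existing target relabelling kept as-is ...
    metric_relabel_configs:
      - source_labels: [pod]
        target_label: pod_name
</code></pre>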
<p>I want to run 'kubectl' commands in the container, so I want to install kubectl in the container while building the Docker image. Any help is appreciated!</p>
<p>put this in your Dockerfile</p> <pre><code>RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl RUN chmod +x ./kubectl RUN mv ./kubectl /usr/local/bin </code></pre>
<p>I am working to get an existing application consisting of a bunch of stateless, scalable microservices (and of course also a few stateful ones serving as backends) running on Docker Swarm and Kubernetes. Changing the code of the application is mostly out of the question, so I have to make some existing mechanisms that we have for, e.g., service discovery, work in the Swarm and K8s contexts.</p> <p>One thing that helped me a lot with getting things up and running with Docker Swarm was the template feature of Swarm's "service create" command (<a href="https://docs.docker.com/engine/reference/commandline/service_create/#create-services-using-templates" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/commandline/service_create/#create-services-using-templates</a>), where I can do something like</p> <pre><code>-e my_env_var=foo{{.Task.Slot}} </code></pre> <p>Inside each container that is part of my Swarm service, this will set the env var my_env_var to a value of the form fooX, where "X" is the container's "slot number". To grasp what a slot number is, consider a service with N instances (i.e., scale=N). Each container occupies one slot, and the slots are numbered from 1 to N. </p> <p>That way, I can get a ID inside my container that is unique among all currently alive containers of my service, but at the same time, it is NOT totally random. If I scale a service from, e.g., 1 to 5, the five containers in my service will get the slots 1, 2, 3, 4, and 5. If I scale it down to, e.g., 3, two containers will be stopped (e.g., 2 and 4, leaving me with 1, 3, and 5). But if I scale it up again to 5, the slot numbers will (in general) again be 1 to 5 (and even if they were, e.g., 2-6, that is still better than being totally random).</p> <p>This has proven to be very useful for Swarm-enabling my application and I am desperately looking for something similar in K8s (in particular in the context of K8s deployments, which I am using for our stateless microservices as they seem to be the most suitable K8s concept). I found the possibility to pass the pod name into the container with</p> <pre><code> env: - name: metadata_name valueFrom: fieldRef: fieldPath: metadata.name </code></pre> <p>Alas, the name of a container is a) rather lengthy b) random (i.e., scaling down and up will NOT reuse names), e.g. the pods of a deployment named foo-deployment will be named something like</p> <p>foo-deployment-64db944b84-bwrxx</p> <p>foo-deployment-64db944b84-5jf7c</p> <p>etc. So as far as I understand, the last five characters are guaranteed by K8s to be unique among all active pods of a deployment, but they are NOT reused (rare collisions nonwithstanding) when scaling up and down.</p> <p>Is there any mechanism that corresponds to Swarm's "slot" concept?</p> <p>Regards PalatinateJ</p>
<p>11 Months late, but here is the solution:</p> <p>To get stable container (pod) names within K8S you <em>must</em> use a <code>StatefulSet</code>. <code>StatefulSets</code> are designed for applications that must maintain state, however, if you aren't using K8S volumes for state (keeping them ephemeral) you can use <code>StatefulSets</code> without an issue. There is a simple process of converting your <code>Deployment</code> to a <code>StatefulSet</code>:</p> <ol> <li>Change your <code>apiVersion:</code> to <code>apps/v1</code></li> <li>Change the <code>kind:</code> to <code>StatefulSet</code></li> <li>Under <code>spec:</code> add the <code>selector:</code> tag. This tag will contain everything you use to select the appropriate service. <strong>In addition</strong> you must ensure that any items you have under <code>spec:template:metadata:labels</code> match those under <code>spec:selector:matchLabels</code> (This will make more sense in the provided example)</li> <li>If you have any sort of update strategy, called <code>strategy</code> in a <code>Deployment</code>, change this to <code>updateStrategy</code>. For future reference, if you do not update this, you will end up with both a <code>StatefulSet</code> and a <code>ReplicaSet</code> being deployed as K8S is trying to fill the <code>strategy</code> and your <code>StatefulSet</code> requirements. </li> </ol> <p>After applying these changes, you will have your <code>StatefulSet</code> deployed. How do we get task slots from this? The hostname.</p> <p>Since K8S is maintaining stable pod names, you will have names such as:</p> <pre><code>pod/mypod-0 1/1 Running 0 10m pod/mypod-1 1/1 Running 0 9m </code></pre> <p>when running <code>kubetctl</code>. After that, it is a simple matter of parsing the number out of your pod name.</p> <p>Below is the YAML for a <code>StatefulSet</code>:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: myStatefulSetName labels: app: SomeLabel1 label2: SomeLabel2 spec: replicas: 100 selector: matchLabels: app: SomeLabel1 label2: SomeLabel2 updateStrategy: type: RollingUpdate template: metadata: labels: app: SomeLabel1 label2: SomeLabel2 spec: containers: - name: myPodName image: myPod:latest imagePullPolicy: Always ports: - name: myPodPort containerPort: 8080 </code></pre> <p>The differences become apparent on a equivalent <code>Deployment</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myDeploymentName labels: app: SomeLabel1 label2: SomeLabel2 spec: replicas: 100 strategy: type: RollingUpdate template: metadata: labels: app: SomeLabel1 label2: SomeLabel2 spec: containers: - name: myPodName image: myPod:latest imagePullPolicy: Always ports: - name: myPodPort containerPort: 8080 </code></pre>
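<p>As a small sketch of that last parsing step: each replica's hostname ends in its ordinal, so inside the container you can recover a stable slot number analogous to Swarm's <code>.Task.Slot</code> (the variable name is a placeholder). One thing to watch when adapting the YAML above: the StatefulSet API also expects <code>spec.serviceName</code> to reference a governing (usually headless) Service, which is what gives the pods their stable network identity.</p>
<pre><code># each replica's hostname is &lt;statefulset-name&gt;-&lt;ordinal&gt;, e.g. mypod-3
ORDINAL="${HOSTNAME##*-}"     # -&gt; "3"
echo "running as slot ${ORDINAL}"
</code></pre>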
<p>I've followed the steps to set up an EKS cluster and successfully have one service which exposes port 31515 from a pod, but I'm stuck at finding out what my public URL is. EKS seems to have no such thing, so how do I access it from the outside? Or am I not looking in the right place?</p>
<p>All your external IPs (public IPs) should be visible from the <code>kubectl</code> command line. </p>
<p>To show this information run: </p>
<pre><code>kubectl get services --all-namespaces -o wide
</code></pre>
<p>If needed, point at your kubeconfig file with the <code>--kubeconfig</code> flag.</p>
<p>Example:</p>
<pre><code>NAME         TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   ClusterIP   10.3.245.137   104.198.205.71   8080/TCP   54s
</code></pre>
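<p>On EKS there is no public URL for a NodePort out of the box; the usual way to get one is to expose the deployment through a Service of type <code>LoadBalancer</code>, which provisions an AWS ELB and gives the Service an external hostname. A minimal sketch (names and ports are placeholders for your port-31515 setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer        # EKS provisions an AWS ELB for this Service
  selector:
    app: my-app             # must match your pod labels
  ports:
    - port: 80              # port exposed on the ELB
      targetPort: 8080      # port your container listens on
</code></pre>
<p>After a minute or two, <code>kubectl get service my-service -o wide</code> should show the ELB's DNS name in the EXTERNAL-IP column, and that hostname is your public URL.</p>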
<p>I've been tasked with getting our k8s deployment running as non-root. A third-party library we use needs the user's home directory to exist and be writable. I've tried using initContainers, but any changes I make are essentially discarded, and of course I don't have permissions in the real container.</p> <p>I would like to avoid setting all this up in the Dockerfile. Is there any way to create the home directory for the user the Kubernetes container runs as?</p>
<p>Kubernetes uses a container engine to run a pod with a specific image. <em>Most</em> Pods can die at any time, therefore they must either be based on an image with the desired state or have their state preserved in a different way. </p>
<p>If there is state that you would like to preserve and using a Dockerfile / custom image is not an option, I suggest you either:</p>
<ol> <li><p>Set up a volume. Volume-mount the user directory so that the files are never deleted and permissions are preserved as well. <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p></li> <li><p>Run the container together with a sidecar. Create a pod that has a sidecar container which can run commands in the same network and can share a volume mount with the main container.</p></li> </ol>
<p>I.e., perhaps you can create a shared volume, then create an image that chowns the directory as root and use this image for a sidecar container. <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/#creating-a-pod-that-runs-two-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/#creating-a-pod-that-runs-two-containers</a></p>
<p>However, the simplest solution could be to just modify permissions in the Dockerfile by adding a few lines. If you are adding lines to your existing Dockerfile, make sure to add them before your CMD / ENTRYPOINT.</p>
<p>If you are basing this image off of your already existing one, make sure to put this at the top of your Dockerfile:</p>
<p><code>FROM Myoriginalimage</code></p>
<p>Dockerfile commands:</p>
<pre><code>USER root
RUN chown -R $USER:$USER_GROUP $USER_HOME_DIR
USER $USER
</code></pre>
<p>Make sure your CMD/ENTRYPOINT comes after this if you are editing an existing Dockerfile.</p>
<p>More info on Dockerfiles: <a href="https://www.linode.com/docs/applications/containers/how-to-use-dockerfiles/" rel="nofollow noreferrer">https://www.linode.com/docs/applications/containers/how-to-use-dockerfiles/</a></p>
<p>After that you can push to a registry, such as Docker Hub, assuming you don't have a private registry or can't get access to one for whatever reason.</p>
<ul> <li>create an account at hub.docker.com </li> <li>tag your image appropriately</li> <li><code>docker login</code> </li> <li><code>docker push your-image</code></li> </ul>
<p>See here for more details: <a href="https://docs.docker.com/docker-hub/repos/" rel="nofollow noreferrer">https://docs.docker.com/docker-hub/repos/</a></p>
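<p>For completeness, here is a rough sketch of option 1 without touching the Dockerfile. The key point is that the initContainer and the app container must share a volume mounted at the home path; otherwise the initContainer's changes are thrown away with its own filesystem, which matches what you observed. Paths, UID and image names below are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nonroot-app
spec:
  securityContext:
    runAsUser: 1000              # the non-root UID the app runs as
  volumes:
    - name: home
      emptyDir: {}               # lives as long as the Pod
  initContainers:
    - name: init-home
      image: busybox
      command: ["sh", "-c", "mkdir -p /home/appuser; chown -R 1000:1000 /home/appuser"]
      securityContext:
        runAsUser: 0             # the init step runs as root only to set ownership
      volumeMounts:
        - name: home
          mountPath: /home/appuser
  containers:
    - name: app
      image: my-app:latest       # placeholder
      env:
        - name: HOME
          value: /home/appuser
      volumeMounts:
        - name: home
          mountPath: /home/appuser
</code></pre>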
<p>I started the cluster on Google Compute Engine with the help of kube-up.sh. This script created the master node and the minion group. When I don't need it anymore I want to stop the cluster and shut down all VMs so I don't waste money on running instances. When I shut it down (I just shut down all my cluster VMs, because I don't know another way to do it) and then start it again some time later, my cluster doesn't work anymore. "kubectl get nodes" displays incorrect information about the nodes (for example I have nodes A, B, C as minions, but it displays only D, which doesn't even exist) and all commands work very, very slowly. Maybe I am not shutting it down correctly. How do I properly stop the cluster and stop the VMs so I can start them again later (not delete them)?</p>
<p>The cluster I have:</p>
<pre><code>kubernetes-master            | us-central1-b
kubernetes-minion-group-nq7f | us-central1-b
kubernetes-minion-group-gh5k | us-central1-b
</code></pre>
<p>What the "kubectl get nodes" command displays:</p>
<pre><code>[root@common frest0512]# kubectl get nodes
NAME                           STATUS                     AGE       VERSION
kubernetes-master              Ready,SchedulingDisabled   7h        v1.8.0
kubernetes-minion-group-02s7   Ready                      7h        v1.8.0
kubernetes-minion-group-92rn   Ready                      7h        v1.8.0
kubernetes-minion-group-kn2c   Ready                      7h        v1.8.0
</code></pre>
<p>Before shutting down the master node this was displayed correctly (the names and count of minions were the same).</p>
<p>Thanks to <a href="https://stackoverflow.com/users/4517659/carlos">Carlos</a> for the tip.</p> <p>You can follow steps below to detach all active nodes from Kubernetes cluster.</p> <p>1- Go to Kubernetes Engine dashboard and select the cluster.<br> <a href="https://console.cloud.google.com/kubernetes" rel="noreferrer">https://console.cloud.google.com/kubernetes</a></p> <p><a href="https://i.stack.imgur.com/I0Iui.png" rel="noreferrer"><img src="https://i.stack.imgur.com/I0Iui.png" alt="enter image description here"></a></p> <p>2- Go to details and click edit, set pool size to zero (0). </p> <p><a href="https://i.stack.imgur.com/u0wR2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/u0wR2.png" alt="enter image description here"></a></p> <p>3- Validate nodes are shutting down at Compute Engine dashboard<br> <a href="https://console.cloud.google.com/compute" rel="noreferrer">https://console.cloud.google.com/compute</a></p> <p><a href="https://i.stack.imgur.com/RYsyr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RYsyr.png" alt="enter image description here"></a></p>
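<p>For a kube-up.sh cluster on plain Compute Engine (as in the question) the same idea applies, except that the nodes live in a managed instance group rather than a GKE node pool. A sketch, using the group name and zone visible in the question:</p>
<pre><code># scale the minions down to zero without deleting the group
gcloud compute instance-groups managed resize kubernetes-minion-group \
    --size 0 --zone us-central1-b

# later, scale back up to the original size
gcloud compute instance-groups managed resize kubernetes-minion-group \
    --size 2 --zone us-central1-b

# the master is a standalone VM, so it can simply be stopped and started
gcloud compute instances stop kubernetes-master --zone us-central1-b
gcloud compute instances start kubernetes-master --zone us-central1-b
</code></pre>
<p>Note that scaling the instance group back up creates fresh VMs, so the minions rejoin the cluster under new names, which matches the renamed nodes seen in the question.</p>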
<p>I was following instructions at <a href="https://kubernetes.io/docs/setup/independent/high-availability/#stacked-control-plane-and-etcd-nodes" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/#stacked-control-plane-and-etcd-nodes</a> and I can't get the secondary master node to join the primary master.</p> <pre><code>$&gt; kubeadm join LB_IP:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH --experimental-control-plane [preflight] running pre-flight checks [discovery] Trying to connect to API Server "LB_IP:6443" [discovery] Created cluster-info discovery client, requesting info from "https://LB_IP:6443" [discovery] Requesting info from "https://LB_IP:6443" again to validate TLS against the pinned public key [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "LB_IP:6443" [discovery] Successfully established connection with API Server "LB_IP:6443" [join] Reading configuration from the cluster... [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' One or more conditions for hosting a new control plane instance is not satisfied. unable to add a new control plane instance on a cluster that doesn't use an external etcd Please ensure that: * The cluster has a stable controlPlaneEndpoint address. * The cluster uses an external etcd. * The certificates that must be shared among control plane instances are provided. </code></pre> <p>Here is my admin init config:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha3 kind: ClusterConfiguration kubernetesVersion: "1.12.3" apiServer: certSANs: - "LB_IP" controlPlaneEndpoint: "LB_IP:6443" networking: podSubnet: "192.168.128.0/17" serviceSubnet: "192.168.0.0/17" </code></pre> <p>And I initialized the primary master node like:</p> <pre><code>kubeadm init --config=./kube-adm-config.yaml </code></pre> <p>I have also copied all the certs to the secondary node and kubectl works on the secondary:</p> <pre><code>[root@secondary ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION primary.fqdn Ready master 8h v1.12.3 </code></pre> <p>I was really hoping to not set up external etcd nodes. The instructions seem pretty straightforward and I don't understand what I am missing.</p> <p>Any advice to help get this stacked control plane multi-master setup with local etcd to work would be appreciated. Or any debugging ideas. Or at least "stacked control plane doesn't work, you must use external etcd".</p>
<p>Upgrading to k8s version 1.13.0 resolved my issue. I think the instructions were specifically for this newer version.</p>
<p>I understand that pods are isolated in another subnet (the pod CIDR, managed by the CNI plugin, etc.). Is it possible to reach a pod directly (using a tool)? I saw that you can use ClusterIP, LoadBalancer and ExternalName services, but I cannot reach my pod's IP address.</p>
<p>That means something is wrong with your networking. Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p> <ol> <li>all containers can communicate with all other containers without NAT</li> <li>all nodes can communicate with all containers (and vice-versa) without NAT</li> <li>the IP that a container sees itself as is the same IP that others see it as</li> </ol> <p><a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a></p>
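<p>A quick sketch of how to check those requirements from inside the cluster (the pod name and IP below are placeholders):</p>
<pre><code># find the target pod's IP
kubectl get pod my-pod -o wide

# start a throwaway pod and try to reach that IP directly
kubectl run test --rm -it --image=busybox --restart=Never -- sh

# inside the resulting shell:
ping 10.244.1.23
wget -qO- http://10.244.1.23:8080
</code></pre>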
<p>My pod does the following:</p> <blockquote> <p>Warning Failed 21m (x4 over 23m) kubelet, 10.76.199.35 Failed to pull image "registryname/image:version1.2": rpc error: code = Unknown desc = Error response from daemon: unauthorized: authentication required</p> </blockquote> <p>but other images will work. The output of</p> <pre><code>ibmcloud cr images </code></pre> <p>doesn't show anything different about the images that don't work. What could be going wrong here?</p>
<p>Given this is in Kubernetes and you can see the image in <code>ibmcloud cr images</code>, it is most likely a misconfiguration of your imagePullSecrets. </p> <p>If you do <code>kubectl get pod &lt;pod-name&gt; -o yaml</code> you will be able to see what <code>imagePullSecrets</code> are in scope for the pod and check whether they look correct (it could be worth comparing it to a pod that is working). </p> <p>It's worth noting that if your cluster is an instance of the IBM Cloud Kubernetes Service, a default imagePullSecret for your account is added to the <code>default</code> namespace only; therefore, if you are running the pod in a different Kubernetes namespace you will need to do additional steps to make that work. This is a good place to start for information on this topic.</p> <p><a href="https://console.bluemix.net/docs/containers/cs_images.html#other" rel="nofollow noreferrer">https://console.bluemix.net/docs/containers/cs_images.html#other</a> </p>
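<p>As a hedged sketch of those "additional steps" (the secret and namespace names are placeholders): one common pattern is to copy the registry pull secret from <code>default</code> into the namespace the pod runs in, and reference it from the pod spec:</p>
<pre><code># copy the registry pull secret into the target namespace
kubectl get secret &lt;registry-secret-name&gt; -n default -o yaml \
  | sed 's/namespace: default/namespace: my-namespace/' \
  | kubectl create -f -
</code></pre>
<pre><code>spec:
  imagePullSecrets:
    - name: &lt;registry-secret-name&gt;
  containers:
    - name: app
      image: registryname/image:version1.2
</code></pre>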
<p>I would like to know how you manage your services with Helm.</p> <p>I already know that we are going to have more than <strong>200 microservices</strong>. <em>How can we manage them easily?</em></p> <p>Should each microservice have <strong>its own yaml files</strong> (<strong>deployment, service, ingress, values</strong>, etc.), or should there be <strong>several large</strong> yaml files (deployment, ingress, etc.) <strong>for all</strong> microservices, where I push the values yaml file with the specific params for each application?</p>
<p>I'd suggest aiming for an umbrella chart that includes lots of subcharts for the individual services. You can deploy each chart individually but using a single umbrella makes it easier to deploy the whole setup consistently to different environments.</p> <p>Perhaps some microservices will be similar enough that for them you could use the same chart with different parameters (maybe including docker image parameter) but you'll have to work through them to see whether you can do that. You can include the same chart as a dependency multiple times within an umbrella chart to represent different services. </p> <p>Ideally you also want a chart for a service to be individually-deployable so you can deploy and check that service in isolation. To do this you would give each chart its own resources including its own Ingress. But you might decide that for the umbrella chart you prefer to disable the Ingresses in the subcharts and put in a single fan-out Ingress for everything - that comes down to what works best for you. </p>
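<p>As a rough sketch of that layout with Helm 2-style charts (the chart, service and registry names are placeholders): the umbrella chart's <code>requirements.yaml</code> can pull in each service, including the same generic chart more than once under different aliases, and its <code>values.yaml</code> can switch the subcharts' ingresses off in favour of a single fan-out ingress:</p>
<pre><code># umbrella/requirements.yaml
dependencies:
  - name: generic-service
    version: 0.1.0
    repository: "file://../generic-service"
    alias: orders             # the same chart reused for two services
  - name: generic-service
    version: 0.1.0
    repository: "file://../generic-service"
    alias: payments

# umbrella/values.yaml
orders:
  image: registry.example.com/orders:1.0
  ingress:
    enabled: false            # the umbrella provides one fan-out Ingress instead
payments:
  image: registry.example.com/payments:1.0
  ingress:
    enabled: false
</code></pre>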
<p>I want to set up the feature-gates argument in the apiserver pod. How can I do it? I have tried it by using the solution from <a href="https://stackoverflow.com/questions/48718465/kubeadm-kubectl-kube-apiserver-turn-on-feature-gate/">here</a>:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  feature-gates: "RuntimeClass=true"
</code></pre>
<p>But it is not working: </p>
<pre><code>kubectl apply -f update_api_server.yaml
error: unable to recognize "update_api_server.yaml": no matches for kind "MasterConfiguration" in version "kubeadm.k8s.io/v1alpha1"
</code></pre>
<p>I'm using k8s v1.13.0. Thanks</p>
<p>It can be done directly in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. Adding this line to the command: <code>- --feature-gates=RuntimeClass=true</code></p>
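<p>For illustration, the relevant part of that static pod manifest ends up looking roughly like this (the neighbouring flags are just context and will differ per cluster; only the added line matters):</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.0.10        # existing, cluster-specific flags
    - --allow-privileged=true
    - --feature-gates=RuntimeClass=true       # the added line
</code></pre>
<p>Since it is a static pod, the kubelet restarts the apiserver automatically once the file is saved.</p>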
<p>I am a Kubernetes noobie so my apologies if this is a basic question - I am trying to get a good understanding of this amazing technology.</p> <p>Let's say I have some processing on a pod and I have some variables and arrays. Now I will have a heavy load, and a new pod will be dispensed/replicated - now I have 2 pods rather than 1 to handle my load. What will happen to the memory of the second pod? Is pod 2 replicated with the same memory contents? will memory be shared? If there is some communication across my microservices which requires memory to be consistent, in other words there is some variables dependency, then replicating a pod with fresh memory is catastrophic. For clarifcation, let us say there is a key, value map on one pod. A pod is replicated, then we have two key,value map (one on pod1 and another on pod2). But we want a key,value map containing all the data that is common between these two pods - a shared key,value map across replicated pods - NOT two isolated key,value maps.</p> <p>Sure, I suppose one solution would be to have a shared, central, database for reference. But this is a massive overhead for situations where we just want to share an array of a few hundred elements! What is the Kubernetes approach to this issue?</p>
<p>@Zeruno, maybe I am thinking out loud here, but what if you used the existing <code>etcd</code> cluster as a key-value store for your application?</p> <ol> <li><p>There is no overhead of maintaining it, since it is already there, managed by Kubernetes itself, and running in a distributed cluster mode. </p></li> <li><p>It could act as a single point of storage that is accessible by all the pods in the cluster.</p></li> </ol>
<p>I'm trying to make a simple test to build a simple nginx on kubernetes from terraform. This is the first time working terraform.</p> <p>This is the basic terraform file:</p> <pre><code>provider "kubernetes" { host = "https://xxx.xxx.xxx.xxx:8443" client_certificate = "${file("~/.kube/master.server.crt")}" client_key = "${file("~/.kube/master.server.key")}" cluster_ca_certificate = "${file("~/.kube/ca.crt")}" username = "xxxxxx" password = "xxxxxx" } resource "kubernetes_service" "nginx" { metadata { name = "nginx-example" } spec { selector { App = "${kubernetes_pod.nginx.metadata.0.labels.App}" } port { port = 80 target_port = 80 } type = "LoadBalancer" } } resource "kubernetes_pod" "nginx" { metadata { name = "nginx-example" labels { App = "nginx" } } spec { container { image = "nginx:1.7.8" name = "example" port { container_port = 80 } } } } </code></pre> <p>I'm getting the following error after running the terraform apply.</p> <blockquote> <p>Error: Error applying plan:</p> <p>1 error(s) occurred:</p> <ul> <li><p>kubernetes_pod.nginx: 1 error(s) occurred:</p></li> <li><p>kubernetes_pod.nginx: the server has asked for the client to provide credentials (post pods)</p></li> </ul> <p>Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.</p> </blockquote> <p>I have admin permissions on kubernetes and everything is working correctly. But for some reason I'm getting that error.</p> <p>What I'm doing wrong?</p> <p>Thanks</p> <p><strong>Regarding @matthew-l-daniel question</strong></p> <p>When I'm only using the username/password I get this error:</p> <blockquote> <p>Error: Error applying plan:</p> <p>1 error(s) occurred:</p> <ul> <li><p>kubernetes_pod.nginx: 1 error(s) occurred:</p></li> <li><p>kubernetes_pod.nginx: Post <a href="https://xxx.xxx.xxx.xxx:8443/api/v1/namespaces/default/pods" rel="noreferrer">https://xxx.xxx.xxx.xxx:8443/api/v1/namespaces/default/pods</a>: x509: certificate signed by unknown authority</p></li> </ul> <p>Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.</p> </blockquote> <p>I tried using the server name or the server ip and got the same error everytime.</p> <p>When using the certs I got the error from the original post, regarding the "credentials"</p> <p>I forgot to mention that this is an openshift installation. I don't believe it will have any impact in the end, but I thought I should mention it.</p>
<p>The solution was rather simple: I was using the master cert and key from OpenShift in Terraform. When I switched to the admin cert and key from OpenShift instead, it worked.</p>
<p>I have an Istio-enabled EKS Cluster, and my nodes are constantly running out of disk space.</p> <p>Calculating the overall disk usage lead me to the istio-mixer container, which has a <strong>log file</strong> using more than <strong>50GB</strong> of disk space in only <strong>12 days</strong> of uptime:</p> <pre><code>[root@ip-some-ip containers]# pwd /var/lib/docker/containers [root@ip-some-ip containers]# du -schx .[!.]* * | sort -h | tail -n 10 66M 8bf5e8ee5a03096c589ad8f53b9e1a3d3088ca67b0064f3796e406f00336b532 73M 657eca261461d10c5b1b81ab3078d2058b931a357395903808b0145b617c1662 101M bb338296ff06ef42ae6177c8a88e63438c26c398a457dc3f5301ffcb4ef2682b 127M 21f2da86055ad76882730abf65d4465386bb85598f797f451e7ad66726243613 134M 9c2be24e8b9345659b6f208c9f2d4650bb1ece11e0c4b0793aa01fdfebadb44e 383M 5d5fdbe6813ddc3ff2f6eb96f62f8317bd73e24730e2f44ebc537367d9987142 419M 475f8dfc74c3df2bc95c47df56e37d1dfb9181fae9aa783dafabba8283023115 592M 9193c50e586e0c7ecaeb87cecd8be13714a5d6ccd6ea63557c034ef56b07772f 52G 9c6b3e4f26603471d0aa9b6a61b3da5a69001e6b9be34432ffa62d577738c149 54G total [root@ip-192-168-228-194 containers]# du -hs 9c6b3e4*/*.log 52G 9c6b3e4f26603471d0aa9b6a61b3da5a69001e6b9be34432ffa62d577738c149-json.log [root@ip-ip-some-ip containers]# docker ps -a | grep 9c6b3e4f2660 9c6b3e4f2660 d559bdcd7a88 "/usr/local/bin/mi..." 12 days ago Up 12 days k8s_mixer_istio-telemetry-6b5579595f-fvm5x_istio-system_6324c262-f3b5-11e8-b615-0eccb0bb4724_0 </code></pre> <p><strong>My questions are:</strong> </p> <ul> <li>This amount of log output is expected?</li> <li>The mixer log level can be decreased? How? Changing it affects my telemetry metrics?</li> <li>There is a way to configure a log "retention period"?</li> </ul> <p><strong>Additional info:</strong></p> <ul> <li>Istio v1.0.2 (deployed with the offical helm charts; no custom configs)</li> <li>k8s v1.10.11-eks</li> <li>The cluster has approximately 20 pods running in Istio-enabled namespaces</li> </ul>
<p>The default value of logging level in <strong>Mixer</strong> is info. And the logs provided by you, confirms that you have this settings. Therefore, a lot of redundant information gathered in logs and it is possible to decrease logging level for some sources.</p> <p>You can change it in two ways:</p> <ol> <li><p><strong>On working pod without restart.</strong></p> <p>In your logs you can find the following line:</p> <pre><code>2018-12-12T17:54:55.461261Z info ControlZ available at 192.168.87.249:9876 </code></pre> <p>It means, that in the mixer container on 9876 port you can find Istio ControlZ web-interface. To get an access to it from a computer with installed <code>kubectl</code>, you need to run the following command:</p> <pre><code>kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l istio=mixer,istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') 9876:9876 &amp; </code></pre> <p>After that, in your browser go to the link <a href="http://localhost:9876/scopez/" rel="nofollow noreferrer">http://localhost:9876/scopez/</a>, and you will see the following dashboard, where you can change log levels:</p> <p><a href="https://i.stack.imgur.com/yHkID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yHkID.png" alt="enter image description here"></a></p></li> <li><p><strong>Add <code>--log_output_level</code> flag to the <code>istio-telemetry</code> deployment for the <code>mixer</code> container.</strong></p> <p>Here is the description for the flag from the mixer's documentation:</p> <blockquote> <p><strong>--log_output_level string</strong><br> Comma-separated minimum per-scope logging level of messages to output, in the form of :,:,... where scope can be one of [adapters, api, attributes, default, grpcAdapter, loadshedding] and level can be one of [debug, info, warn, error, none] (default "default:info")</p> </blockquote> <p>Note, that for key <code>--log_output_level attributes:warn,api:error</code> in yaml file you need to use one of the following:</p> <ul> <li>value <code>- --log_output_level=attributes:warn,api:error</code> or</li> <li>values <code>- --log_output_level</code> and <code>- attributes:warn,api:error</code> on different lines </li> </ul> <p>The example of the deployment: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: labels: chart: mixer-1.0.4 istio: mixer release: istio name: istio-telemetry namespace: istio-system spec: progressDeadlineSeconds: 600 replicas: 2 revisionHistoryLimit: 10 selector: matchLabels: app: telemetry istio: mixer istio-mixer-type: telemetry strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" sidecar.istio.io/inject: "false" creationTimestamp: null labels: app: telemetry istio: mixer istio-mixer-type: telemetry spec: containers: - args: #Flags for the Mixer process - --address #Flag on two different lines - unix:///sock/mixer.socket - --configStoreURL=k8s:// #Flag with '=' - --configDefaultNamespace=istio-system - --trace_zipkin_url=http://zipkin:9411/api/v1/spans - --log_output_level=attributes:warn,api:error # &lt;------ THIS LINE IS WHAT YOU ARE LOOKING FOR env: - name: GODEBUG value: gctrace=2 image: docker.io/istio/mixer:1.0.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /version port: 9093 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 1 name: mixer ports: - containerPort: 
9093 protocol: TCP - containerPort: 42422 protocol: TCP resources: requests: cpu: 10m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /sock name: uds-socket - args: - proxy - --serviceCluster - istio-telemetry - --templateFile - /etc/istio/proxy/envoy_telemetry.yaml.tmpl - --controlPlaneAuthPolicy - MUTUAL_TLS env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP image: docker.io/istio/proxyv2:1.0.4 imagePullPolicy: IfNotPresent name: istio-proxy ports: - containerPort: 15090 name: http-envoy-prom protocol: TCP resources: requests: cpu: 10m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/certs name: istio-certs readOnly: true - mountPath: /sock name: uds-socket dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: istio-mixer-service-account serviceAccountName: istio-mixer-service-account terminationGracePeriodSeconds: 30 volumes: - name: istio-certs secret: defaultMode: 420 optional: true secretName: istio.istio-mixer-service-account - emptyDir: {} name: uds-socket </code></pre></li> </ol> <p><strong>Additionally, you can configure log rotation for mixer process using the following flags:</strong></p> <blockquote> <p>--log_rotate string The path for the optional rotating log file</p> <p>--log_rotate_max_age int The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default 30)</p> <p>--log_rotate_max_backups int The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default 1000)</p> <p>--log_rotate_max_size int The maximum size in megabytes of a log file beyond which the file is rotated (default 104857600)</p> </blockquote> <p>However, I have no possibility to generate a huge amount of such logs and test how it works.</p> <p><strong>Links:</strong></p> <p>Unfortunately, the <a href="https://istio.io/help/ops/component-logging/" rel="nofollow noreferrer">official documentation</a> is not good, but maybe it helps somehow.</p> <p>And as a bonus, <a href="https://istio.io/docs/reference/commands/mixs/#mixs-server" rel="nofollow noreferrer">here</a> is the list of all mixer server flags.</p>
<p>The intention is to execute Gatling performance tests from the command line. The equivalent docker command is:</p>
<pre><code>docker run --rm -w /opt/gatling-fundamentals/ tarunkumard/tarungatlingscript:v1.0 ./gradlew gatlingRun-simulations.RuntimeParameters -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30
</code></pre>
<p>Now, to map the above docker run to Kubernetes using kubectl, I have created a pod whose gradlewcommand.yaml file is below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gradlecommandfromcommandline
  labels:
    purpose: gradlecommandfromcommandline
spec:
  containers:
  - name: gradlecommandfromcommandline
    image: tarunkumard/tarungatlingscript:v1.0
    workingDir: /opt/gatling-fundamentals/
    command: ["./gradlew"]
    args: ["gatlingRun-simulations.RuntimeParameters", "-DUSERS=500",
           "-DRAMP_DURATION=5", "-DDURATION=30"]
  restartPolicy: OnFailure
</code></pre>
<p>The pod is then created using the command below:</p>
<pre><code>kubectl apply -f gradlewcommand.yaml
</code></pre>
<p>Now comes my actual requirement, or question: how do I run or trigger a kubectl command so as to run the container inside the pod created above? Mind you, the pod name is gradlecommandfromcommandline.</p>
<p>Here is the command which solves the problem:</p> <pre><code> kubectl exec gradlecommandfromcommandline -- \ ./gradlew gatlingRun-simulations.RuntimeParameters \ -DUSERS=500 -DRAMP_DURATION=5 -DDURATION=30 </code></pre>
<p>When I do <code>kubectl top pods/nodes</code> I am getting the following error:</p> <blockquote> <p>metrics not available yet</p> </blockquote> <p>When I check the logs I am getting</p> <pre><code>$ kubectl logs metrics-server-7df4c4484d-h75wr -n kube-system -c metrics-server I1008 12:02:55.159069 1 serving.go:273] Generated self-signed cert(apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) [restful] 2018/10/08 12:02:56 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi [restful] 2018/10/08 12:02:56 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/ I1008 12:02:56.358063 1 serve.go:96] Serving securely on [::]:443 E1008 12:03:04.225519 1 reststorage.go:101] unable to fetch node metrics for node &quot;hpa-test&quot;: no metrics known for node &quot;hpa-test&quot; E1008 12:03:07.619489 1 reststorage.go:101] unable to fetch node metrics for node &quot;hpa-test&quot;: no metrics known for node &quot;hpa-test&quot; </code></pre> <p>Also, I am able to ping to hpa-test node from:</p> <pre><code>$ kubectl exec -it metrics-server-7df4c4484d-h75wr -n kube-system sh </code></pre> <p>Also, I have tried looking for solution everywhere but nothing fixed the issue</p>
<p>There are two ways to fix this problem:</p> <p>1) using <strong>heapster</strong> : installing heapster will allow <em>'kubectl top nodes'</em> to work out of the box. However heapster has been deprecated, so you probably should switch to metrics-server.</p> <p>2) using <strong>metrics-server</strong> : unfortunately it may not work out of the box when installing metrics-server... in case it doesn't, you need to update the end of the metrics-server-deployment.yaml (1.8+) file you used for installation and <strong>add a command section</strong> with the right parameters as follow:</p> <pre><code>containers: - name: metrics-server image: k8s.gcr.io/metrics-server-amd64:v0.3.1 imagePullPolicy: Always volumeMounts: - name: tmp-dir mountPath: /tmp command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP </code></pre> <p>then simply apply the changes:</p> <blockquote> <p>kubectl apply -f metrics-server-deployment.yaml</p> </blockquote> <p>you should then be able to get results with</p> <blockquote> <p>kubectl top nodes</p> </blockquote> <p>and</p> <blockquote> <p>kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"</p> </blockquote>
<p>For one of my requirements, I created the resource below in my default namespace using this yaml file:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr1
spec:
  limits:
  - max:
      memory: 5Gi
    min:
      memory: 900Mi
    type: Container
</code></pre>
<p>Now I need to remove this LimitRange from the default namespace in Kubernetes. How do I do that?</p>
<p>You created a LimitRange named <code>mem-min-max-demo-lr1</code> in the default namespace. To verify, run <code>kubectl get LimitRange -n default</code>; then delete it with <code>kubectl delete LimitRange mem-min-max-demo-lr1</code>. To understand this scenario further, please check <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/</a></p>
<p>I understand that pods are isolated in another subnet (POD-CIDR, CNI...). Is it possible to reach a pod directly (using some tool)? I saw that you can use ClusterIP, LoadBalancer, or ExternalName services, but I cannot reach my pod's IP address.</p>
<p>Assuming you have a service defined for your pod (deployment), you can use kubectl to forward a local port to that service. For example:</p> <p><code>$ kubectl port-forward service/redis-service 6379:6379 --namespace=default</code></p> <p>This would allow you to access your pod/service through local port 6379.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Kubernetes Documentation</a></p> <hr> <p>If you are interested in making your pod publicly available, your best resource would be to define an ingress. This will allow you to map a public DNS hostname and path to your internal kubernetes service.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress Documentation</a></p>
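<p>As a minimal sketch of the ingress approach (the hostname and backend service name are placeholders, and this assumes an ingress controller such as nginx is already running in the cluster):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress          # hypothetical name
spec:
  rules:
  - host: app.example.com       # replace with your DNS name
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service   # the service fronting your pod
          servicePort: 80
</code></pre>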
<p>I installed Kubernetes via kubeadm back on 1.9 and upgraded it to 1.10.0 a while back using <code>kubeadm upgrade</code>. I'm quite happy with 1.10.x and don't want to deal with potential issues from upgrading to anything newer just yet.</p> <p>So I would like to upgrade to 1.10.11.</p> <p>I tried </p> <pre><code>$ kubeadm upgrade plan v1.10.11 [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade/plan] computing upgrade possibilities [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.10.0 [upgrade/versions] kubeadm version: v1.10.11 [upgrade/versions] Latest stable version: v1.13.1 [upgrade/versions] Latest version in the v1.1 series: v1.1.8 [upgrade/versions] WARNING: No recommended etcd for requested kubernetes version (v1.13.1) Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT AVAILABLE Kubelet 5 x v1.10.0 v1.13.1 2 x v1.10.3 v1.13.1 1 x v1.10.5 v1.13.1 2 x v1.9.6 v1.13.1 Upgrade to the latest stable version: COMPONENT CURRENT AVAILABLE API Server v1.10.0 v1.13.1 Controller Manager v1.10.0 v1.13.1 Scheduler v1.10.0 v1.13.1 Kube Proxy v1.10.0 v1.13.1 Kube DNS 1.14.8 1.14.8 Etcd 3.1.12 N/A You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.13.1 Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.1. _____________________________________________________________________ </code></pre> <p>It seems to want me to upgrade to 1.13!</p> <p>How can I <em>just</em> upgrade from my v1.10.0 setup to v1.10.11?</p>
<p>I was able to upgrade my kubernetes cluster from 1.10.0 to 1.10.11. </p> <pre><code>[root@ centos]# kubectl get nodes NAME STATUS ROLES AGE VERSION ip-.ec2.internal Ready master 2m v1.10.0 </code></pre> <p>Here are the step-by-step instructions:</p> <pre><code>export VERSION="1.10.11" export ARCH=amd64 wget -O /usr/bin/kubeadm https://storage.googleapis.com/kubernetes-release/release/v${VERSION}/bin/linux/${ARCH}/kubeadm chmod a+rx /usr/bin/kubeadm </code></pre> <p>Now check the kubeadm upgrade plan:</p> <pre><code>[root@ centos]# kubeadm upgrade plan [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade/plan] computing upgrade possibilities [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.10.11 [upgrade/versions] kubeadm version: v1.10.11 [upgrade/versions] Latest stable version: v1.13.1 [upgrade/versions] Latest version in the v1.1 series: v1.1.8 [upgrade/versions] WARNING: No recommended etcd for requested kubernetes version (v1.13.1) Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT AVAILABLE Kubelet 1 x v1.10.0 v1.13.1 Upgrade to the latest stable version: COMPONENT CURRENT AVAILABLE API Server v1.10.11 v1.13.1 Controller Manager v1.10.11 v1.13.1 Scheduler v1.10.11 v1.13.1 Kube Proxy v1.10.11 v1.13.1 Kube DNS 1.14.8 1.14.8 Etcd 3.1.12 N/A You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.13.1 Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.1. </code></pre> <p>Now, instead of applying v1.13.1, apply 1.10.11:</p> <pre><code>kubeadm upgrade apply 1.10.11 [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade/version] You have chosen to change the cluster version to "v1.10.11" [upgrade/versions] Cluster version: v1.10.11 [upgrade/versions] kubeadm version: v1.10.11 [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y ....... [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.10.11". Enjoy! </code></pre> <p>Now follow the official upgrade documentation <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/" rel="nofollow noreferrer">here</a> and you will be able to get the system up and running:</p> <p>Drain the master nodes using:</p> <pre><code>kubectl drain ip-.ec2.internal --ignore-daemonsets </code></pre> <p>Upgrade the kubelet and kubeadm packages using:</p> <pre><code>yum upgrade kubelet-1.10.11 kubeadm-1.10.11 --disableexcludes=kubernetes </code></pre> <p>Restart kubelet and uncordon the host:</p> <pre><code>systemctl restart kubelet kubectl uncordon $HOST </code></pre> <p>And now you will be able to get your system up and running:</p> <pre><code>[root@ip- centos]# kubectl get nodes NAME STATUS ROLES AGE VERSION ip-.ec2.internal Ready master 32m v1.10.11 [root@ip- centos]# kubectl get pods NAME READY STATUS RESTARTS AGE nginx-deployment-75675f5897-pmmkc 1/1 Running 0 9m nginx-deployment-75675f5897-wshgz 1/1 Running 0 9m </code></pre>
<p>I want to have a setup where my dns points to my EKS cluster, and my EKS cluster points api.server.com/static</p> <p>to files on S3 and api.server.com/api to my tomcat deployment.</p> <p>I'm curious if and how this is possible.</p>
<p>Yes, you can add a service of type "ExternalName" pointing to an external domain name. This has to be an FQDN, not a URL, though; the path-based routing logic goes into your ingress.</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: external namespace: default labels: app: external spec: type: ExternalName # &lt;-- this is the service type for external resources externalName: s3.amazon.fqdn # &lt;-- put your external domain here --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: external namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / # &lt;-- your routing logic? spec: rules: - http: paths: - path: /static # &lt;-- your routing logic backend: serviceName: external # &lt;-- your service name servicePort: 80 </code></pre>
<p>Since yesterday, I have been struggling with this strange issue: node "kmaster" not found. <a href="https://i.stack.imgur.com/VIqtp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VIqtp.png" alt="img_1"></a></p> <p>I tried multiple combinations of installing kubernetes on a jetstream instance:</p> <ol> <li>using calico in ubuntu</li> <li>using flannel in centos</li> <li>and a few other ways</li> </ol> <p>I looked online and found that many people have the same issue: <a href="https://github.com/kubernetes/kubernetes/issues/61277" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/61277</a> <a href="https://i.stack.imgur.com/dAupU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dAupU.png" alt="[kubeissue_2.jpg]"></a></p> <p>If someone has run into a similar issue, please let me know what steps are needed to resolve it.</p> <p>Thanks.</p>
<p>I have found my solution for this. I was having issues running the Kubernetes cluster because the Kubernetes components were distributed across multiple servers. Once I created the master node and the slave (worker) node on the same machine, the issue got resolved.<br> The steps I took to resolve the issue:<br> 1. On the slave/worker machine, run: <code>kubeadm reset</code><br> 2. On the master node, generate a token by running: <code>kubeadm token create</code><br> 3. Use the token generated on the master machine to run <code>kubeadm join</code> on the slave node, so that the node can join the Kubernetes cluster.</p> <p>Cheers!!</p>
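<p>As a hedged sketch of those steps with concrete commands (the master address, token and hash below are placeholders):</p> <pre><code># on the worker node: wipe any previous state
kubeadm reset

# on the master: create a fresh bootstrap token and print the full join command
kubeadm token create --print-join-command

# on the worker node: run the printed command, which looks like
kubeadm join &lt;master-ip&gt;:6443 --token &lt;token&gt; --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre>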
<p>I'm running my application in AWS-EKS and my Jenkins instance is not yet containerized. To deploy our application we are using the <code>kubectl</code> command in the Jenkins Pipeline. I'm wondering whether there is any best practice or plugin for Kubernetes deployment via Jenkins.</p> <p>So far I found the plugins below, which do not completely work with my use case.</p> <p><a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a> - To run dynamic Jenkins agents. May not be suitable for my case.</p> <p><a href="https://github.com/jenkinsci/kubernetes-cd-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-cd-plugin</a> - This plugin provides the K8s deployment functionality, but interacting with EKS is a bit of a concern with this plugin, as EKS authentication is done by <code>aws-iam-authenticator</code></p> <p>Is there any plugin available in Jenkins to handle Kubernetes deployment?</p> <p>Any pointers would be helpful.</p>
<p>Check out <a href="https://jenkins.io/projects/jenkins-x/" rel="nofollow noreferrer">Jenkins X</a>; it was created specifically for Kubernetes applications.</p> <p>Helpful links:</p> <p><a href="https://aws.amazon.com/blogs/opensource/continuous-delivery-eks-jenkins-x/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/continuous-delivery-eks-jenkins-x/</a></p> <p><a href="https://jenkins-x.io/commands/jx_create_cluster_eks/#jx-create-cluster-eks" rel="nofollow noreferrer">https://jenkins-x.io/commands/jx_create_cluster_eks/#jx-create-cluster-eks</a></p>
<p>Running <code>kubectl explain pod.spec.containers</code> shows: </p> <pre><code> volumeDevices &lt;[]Object&gt; volumeDevices is the list of block devices to be used by the container. This is a beta feature. volumeMounts &lt;[]Object&gt; Pod volumes to mount into the container's filesystem. Cannot be updated. </code></pre> <p>Is there a relationship between these two containers properties?</p> <p>Note that <code>kubectl version</code> shows:</p> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} </p> <p>Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}</p> </blockquote>
<p>'volumeDevices' is part of a new beta feature in 1.13 that allows a pod to access a raw block volume instead of a mounted filesystem volume. This is useful for some advanced applications like databases that may have their own filesystem format.</p> <p>You can find the official documentation <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support" rel="nofollow noreferrer">here</a> although it does not seem to be updated for 1.13 yet.</p>
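<p>A minimal sketch of how the two fields differ (the names are illustrative): a PVC with <code>volumeMode: Block</code> is consumed through <code>volumeDevices</code>/<code>devicePath</code>, whereas a regular filesystem PVC is consumed through <code>volumeMounts</code>/<code>mountPath</code>:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block            # raw device, no filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeDevices:             # the raw block device appears at this path in the container
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
</code></pre>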
<p>I am a Kubernetes newbie, so my apologies if this is a basic question - I am trying to get a good understanding of this amazing technology.</p> <p>Let's say I have some processing on a pod and I have some variables and arrays. Now I get a heavy load, and a new pod is dispensed/replicated - now I have 2 pods rather than 1 to handle my load. What will happen to the memory of the second pod? Is pod 2 replicated with the same memory contents? Will memory be shared? If there is some communication across my microservices which requires memory to be consistent, in other words there is some variable dependency, then replicating a pod with fresh memory is catastrophic. For clarification, let us say there is a key-value map on one pod. The pod is replicated, and then we have two key-value maps (one on pod1 and another on pod2). But we want a key-value map containing all the data that is common between these two pods - a shared key-value map across replicated pods - NOT two isolated key-value maps.</p> <p>Sure, I suppose one solution would be to have a shared, central database for reference. But this is a massive overhead for situations where we just want to share an array of a few hundred elements! What is the Kubernetes approach to this issue?</p>
<p>Addressing your question about the memory of the second pod: the second <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">pod</a> will be assigned new, separate memory.</p> <p>Addressing your question about the replicated memory content of the new pod: the new pod is replicated only from the container spec, so it starts with fresh memory rather than a copy of the first pod's memory. </p> <p>Addressing your question about shared memory between pods: no, memory is not shared between pods. If you want to share data between pods, you need to make sure it comes from something declared in the container spec, such as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volume</a> or <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">environment variables</a>.</p> <p>Addressing your question about the key-value map: if you are looking for storage solutions, you could consider volumes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">1</a>, <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">2</a>. If you are just looking to pass some key-values to pods, you might consider using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">configmaps</a>.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">1</a> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p> <p><a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">2</a> <a href="https://kubernetes.io/docs/concepts/storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/</a></p>
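<p>For the "few hundred elements" case, here is a minimal sketch of the ConfigMap route (static, read-only data that every replica sees identically; names and values are placeholders, and truly mutable shared state would still need an external store):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-map
data:
  FEATURE_X: "enabled"
  MAX_ITEMS: "250"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-using-shared-map
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env | grep -E 'FEATURE_X|MAX_ITEMS' &amp;&amp; sleep 3600"]
    envFrom:
    - configMapRef:            # every replica gets the same key-value data
        name: shared-map
</code></pre>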
<p>I know just the bare minimum of Kubernetes. However, I wanted to know if there would be any benefit in running 2 containers in a single pod:</p> <ul> <li>1 Container running the application (e.g. a NodeJS app)</li> <li>1 Container running the corresponding local database (e.g. a PouchDB database)</li> </ul> <p>Would this increase performance, or would the downsides of coupling the two containers outweigh any benefits?</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pods</a>, are designed to put together containers that share the same lifecyle. Containers inside the same pod, share some namespaces (like networking) and volumes.<br/> This way, coupling an app with its database could look like a good idea, because the app could just connect to the database through localhost, etc. But it is not! As Diego Velez pointed out, one of the first limitations you could face is scaling your app. If you couple your app with your database, you are forced to scale your database whenever you scale your app, what is not optimal at all and prevents you from benefit of one of the main benefits of using a container orchestrator like kubernetes.<br/> Some good use cases are:</p> <ul> <li>Container with app + container with agent for app metrics, ci agents, etc.</li> <li>CI/CD container (like jenkins agents) + container(s) with tools for CI/CD.</li> <li>Container with app + container with proxy (like in istio making use of the sidecar pattern).</li> </ul>
<p>I am new to Kubernetes. I have read that you can use a <code>LoadBalancer</code> or <code>NodePort</code> to expose services. I am trying to follow the tutorials on <a href="https://kubernetes.io/docs/tutorials/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/</a>.</p> <p>I am able to expose a service with <code>LoadBalancer</code> but not with <code>NodePort</code>. Here is the command I used.</p> <pre><code>kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080 </code></pre>
<p>I guess you're specifically following <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">'Use a Service to Access an Application in a Cluster'</a>.</p> <p>Just to capture in an answer what you already came to in the comments: in that example you don't set a <code>--port</code> parameter on the <code>expose</code>, only on the <code>run</code>. The <code>run</code> command sets the port the Pod responds on, while <code>expose</code> does not set the externally exposed port - k8s will choose that for you automatically from the NodePort range and then tell you which one it chose. Then you find a public IP for a node in order to access it.</p>
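<p>Concretely, the flow from that tutorial looks roughly like this (the allocated node port shown is just an example of what Kubernetes might pick):</p> <pre><code>kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

# find out which port in the NodePort range was allocated
kubectl get service kubernetes-bootcamp
# NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
# kubernetes-bootcamp   NodePort   10.104.75.20   &lt;none&gt;        8080:31234/TCP   10s

# then reach the app via any node's public IP on that port
curl http://&lt;node-public-ip&gt;:31234
</code></pre>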
<p>I would like to delete a pod periodically, for example every day at 3:00 AM, or equivalently, scale it down to zero and back up to 1 again; either mechanism would be good.</p> <p>What would be the most appropriate way to do this in Kubernetes?</p>
<p>Kubernetes has two different resources for this kind of thing: one is the CronJob, which is native to Kubernetes for repeated jobs (like crontab in Linux), and the other is the Job, which is also native and is meant for one-time tasks.</p> <p>Here is the related link.</p> <p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/</a></p> <p>For your scenario you also have to solve the authorization side, so that the CronJob has the right permissions to delete or scale the pod; a sketch is shown below.</p>
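<p>A hedged sketch of such a CronJob (the label selector, service account name and kubectl image are assumptions; the service account must be bound to a Role that allows deleting pods, which is the authorization part mentioned above):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-pod-restart
spec:
  schedule: "0 3 * * *"                    # every day at 3:00 AM
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-restarter    # needs RBAC rights to delete pods
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl            # assumed helper image that ships kubectl
            command:
            - /bin/sh
            - -c
            - kubectl delete pod -l app=my-app
</code></pre>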
<h3>Context</h3> <p>We are currently using a few clusters with v1.8.7 (which was created by currently unavailable developers, months ago) and are trying to upgrade to a higher version. However, we wanted to try the same on an cluster we use for experimental &amp; POCs.</p> <h3>What we tried</h3> <p>In doing the same, we tried to run a few kubeadm commands on one of the master nodes, but kubeadm was not found.</p> <p>So, we tried installing the same with commands -</p> <pre><code>apt-get update &amp;&amp; apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl apt-mark hold kubelet kubeadm kubectl </code></pre> <h3>What happened</h3> <p>However, now that node has status Not Ready and kubelet service is failing</p> <p>Any pointers on how to fix this and what we should've done ?</p> <pre><code>root@k8s-master-dev-0:/home/azureuser# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master-dev-0 NotReady master 118d v1.8.7 k8s-master-dev-1 Ready master 118d v1.8.7 k8s-master-dev-2 Ready master 163d v1.8.7 k8s-agents-dev-0 Ready agent 163d v1.8.7 k8s-agents-dev-1 Ready agent 163d v1.8.7 k8s-agents-dev-2 Ready agent 163d v1.8.7 root@k8s-master-dev-0:/home/azureuser# systemctl status kubelet.service ● kubelet.service - Kubelet Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: failed (Result: start-limit-hit) since Thu 2018-12-13 14:33:25 UTC; 18h ago Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Control process exited, code=exited status=2 Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: Failed to start Kubelet. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Unit entered failed state. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: Stopped Kubelet. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Start request repeated too quickly. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: Failed to start Kubelet. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Unit entered failed state. Dec 13 14:33:25 k8s-master-dev-0 systemd[1]: kubelet.service: Failed with result 'start-limit-hit'. </code></pre>
<p>The reason your kubelet went into a bad state is that you upgraded the kubelet package, which renewed the kubelet service file, so any changes you had made to it earlier were lost.</p> <p>Here are some things you can try:</p> <ol> <li>Disable your swap memory: <code>swapoff -a</code></li> <li>Check your kubelet service file; for kubeadm it is located at <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code>. Check the value of <code>--cgroup-driver</code>, and if it is <code>systemd</code>, make it <code>cgroupfs</code>, and then:</li> </ol> <p>Reload the daemon and restart kubelet:</p> <pre><code>systemctl daemon-reload systemctl restart kubelet </code></pre> <p>Now check whether your kubelet started or not. </p> <p>PS: A live upgrade of the kubeadm control plane should be done carefully; check my answer on how to upgrade kubeadm:</p> <p><a href="https://stackoverflow.com/questions/53771883/how-to-upgrade-kubernetes-from-v1-10-0-to-v1-10-11/53773310#53773310">how to upgrade kubernetes from v1.10.0 to v1.10.11</a></p>
<p>My cluster contains 1 master with 3 worker nodes in which 1 POD with 2 replica sets and 1 service are created. When I try to access the service via the command <code>curl &lt;ClusterIP&gt;:&lt;port&gt;</code> either from 2 worker nodes, sometimes it can feedback Nginx welcome, but sometimes it gets stuck and connection refused and timeout.</p> <p>I checked the Kubernetes Service, POD and endpoints are fine, but no clue what is going on. Please advise.</p> <pre><code>vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s-master Ready master 23d v1.12.2 192.168.205.10 &lt;none&gt; Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2 k8s-worker1 Ready &lt;none&gt; 23d v1.12.2 192.168.205.11 &lt;none&gt; Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2 k8s-worker2 Ready &lt;none&gt; 23d v1.12.2 192.168.205.12 &lt;none&gt; Ubuntu 16.04.4 LTS 4.4.0-139-generic docker://17.3.2 vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get pod -o wide --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE default my-nginx-756f645cd7-pfdck 1/1 Running 0 5m23s 10.244.2.39 k8s-worker2 &lt;none&gt; default my-nginx-756f645cd7-xpbnp 1/1 Running 0 5m23s 10.244.1.40 k8s-worker1 &lt;none&gt; kube-system coredns-576cbf47c7-ljx68 1/1 Running 18 23d 10.244.0.38 k8s-master &lt;none&gt; kube-system coredns-576cbf47c7-nwlph 1/1 Running 18 23d 10.244.0.39 k8s-master &lt;none&gt; kube-system etcd-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master &lt;none&gt; kube-system kube-apiserver-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master &lt;none&gt; kube-system kube-controller-manager-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master &lt;none&gt; kube-system kube-flannel-ds-54xnb 1/1 Running 2 2d5h 192.168.205.12 k8s-worker2 &lt;none&gt; kube-system kube-flannel-ds-9q295 1/1 Running 2 2d5h 192.168.205.11 k8s-worker1 &lt;none&gt; kube-system kube-flannel-ds-q25xw 1/1 Running 2 2d5h 192.168.205.10 k8s-master &lt;none&gt; kube-system kube-proxy-gkpwp 1/1 Running 15 23d 192.168.205.11 k8s-worker1 &lt;none&gt; kube-system kube-proxy-gncjh 1/1 Running 18 23d 192.168.205.10 k8s-master &lt;none&gt; kube-system kube-proxy-m4jfm 1/1 Running 15 23d 192.168.205.12 k8s-worker2 &lt;none&gt; kube-system kube-scheduler-k8s-master 1/1 Running 18 23d 192.168.205.10 k8s-master &lt;none&gt; kube-system kubernetes-dashboard-77fd78f978-4r62r 1/1 Running 15 23d 10.244.1.38 k8s-worker1 &lt;none&gt; vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 23d &lt;none&gt; my-nginx ClusterIP 10.98.9.75 &lt;none&gt; 80/TCP 75s run=my-nginx vagrant@k8s-master:~/_projects/tmp1$ sudo kubectl get endpoints NAME ENDPOINTS AGE kubernetes 192.168.205.10:6443 23d my-nginx 10.244.1.40:80,10.244.2.39:80 101s </code></pre>
<p>This sounds odd but it could be that one of your pods is serving traffic and the other is not. You can try shelling into the pods:</p> <pre><code>$ kubectl exec -it my-nginx-756f645cd7-rs2w2 sh $ kubectl exec -it my-nginx-756f645cd7-vwzrl sh </code></pre> <p>You can see if they are listening on port 80:</p> <pre><code>$ curl localhost:80 </code></pre> <p>You can also see if your service has the two endpoints <code>10.244.2.28:80</code> and <code>10.244.1.29:80</code>.</p> <pre><code>$ kubectl get ep my-nginx $ kubectl get ep my-nginx -o=yaml </code></pre> <p>Also, try to connect to each one of your endpoints from a node:</p> <pre><code>$ curl 10.244.2.28:80 $ curl 10.244.2.29:80 </code></pre>
<p>I have a similar issue to the one below with exposing Kubernetes services via type: LoadBalancer on my local minikube cluster. <a href="https://github.com/kubernetes/minikube/issues/384" rel="nofollow noreferrer">LoadBalancer support</a></p> <p>I saw that there is a workaround for this and it can be fixed by running the command <code>minikube tunnel</code>. <a href="https://github.com/kubernetes/minikube/blob/157eeeff2219afde6f085f515a5283476af6c280/docs/tunnel.md#windows" rel="nofollow noreferrer">Minikube Tunnel Design Doc</a></p> <p>However, the command 'minikube tunnel' doesn't work for me and it throws an "Unknown command" error. Does anyone know if I am missing anything? </p> <p>Thanks for the help! </p>
<p>The tunnel feature is <a href="https://github.com/kubernetes/minikube/pull/3015#issuecomment-443392547" rel="nofollow noreferrer">quite new and may not be in your minikube version</a>, so try <a href="https://stackoverflow.com/questions/45002364/how-to-upgrade-minikube">upgrading to the latest minikube</a>.</p>
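<p>For example (the install path assumes a Linux host; adjust for your OS):</p> <pre><code># check the currently installed version
minikube version

# upgrade to the latest release (Linux example)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  &amp;&amp; chmod +x minikube &amp;&amp; sudo mv minikube /usr/local/bin/

# the tunnel subcommand should then be available
minikube tunnel
</code></pre>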
<p>I created two PersistentVolumeClaims(one for redis, one for persistent logs) and tried to mount both in a single deployment, but after creating the deployment, I get the following error:</p> <p><code>nodes are available: 3 node(s) didn't match node selector, 4 node(s) had no available volume zone.</code></p> <p>However as soon as I remove one PVC from the deployment yaml file, it works fine. I am running it on Google Cloud Platform using Kubernetes engine.</p> <p>pvc1:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-log spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 20Gi storageClassName: standard </code></pre> <p>pvc2:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-redis spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 20Gi storageClassName: standard </code></pre> <p>deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: 'prod-deployment' spec: replicas: 1 template: metadata: labels: app: foo release: canary environment: production spec: containers: - name: api-server image: 'foo:latest' volumeMounts: - mountPath: /logs name: log-storage - name: redis image: 'redis' volumeMounts: - mountPath: /data name: redis-data volumes: - name: redis-data persistentVolumeClaim: claimName: pvc-redis - name: log-storage persistentVolumeClaim: claimName: pvc-log </code></pre>
<p>This is similar to <a href="https://github.com/kubernetes/kubernetes/issues/34583" rel="nofollow noreferrer">this</a>. It's most likely due to a PVC trying to create a volume in an availability zone where you don't have a node. You can try restricting the standard <a href="https://kubernetes.io/docs/concepts/storage/storage-classes" rel="nofollow noreferrer"><code>StorageClass</code></a> to just the availability zones where you have Kubernetes nodes. Something like this:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard allowedTopologies: - matchLabelExpressions: - key: failure-domain.beta.kubernetes.io/zone values: - us-central1-a - us-central1-b </code></pre>
<p>I have deployed a stateful set app on my Kubernetes cluster. It has persistent volumes etc and a replica of one. The issue I face is that when I turn off the node where the stateful set is running, the pod does not restart on a new node. It continues waiting for the node to be up and eventually restarts on the same node. Is there some setting in the StatefulSet spec which I am missing. I have followed the example mentioned in Kubernetes guides to set up the same: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components</a></p> <p>What am I missing?</p>
<p>This is by design. When a node goes "down", the master does not know whether it was a safe, deliberate shutdown or a network partition. Thus the PVC on that node stays where it is, and the master marks the pods on that node as <code>Unknown</code>.</p> <p>By default, Kubernetes always tries to create the pod on the same node where the PVC is provisioned, which is the reason the pod always comes up on the same node when deleted.</p> <p>The PVC moves to another node only when you <code>cordon</code> the node, <code>drain</code> the node and <code>delete</code> the node from the cluster. Now the master knows this node no longer exists in the cluster, so it moves the PVC to another node and the pod comes up on that node.</p>
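<p>For completeness, the cordon/drain/delete sequence referred to above looks like this (the node name is a placeholder):</p> <pre><code>kubectl cordon &lt;node-name&gt;
kubectl drain &lt;node-name&gt; --ignore-daemonsets --delete-local-data
kubectl delete node &lt;node-name&gt;
</code></pre>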
<p>I installed K8s on my two Ubuntus in VirtualBox (Master and Node01). After installation (I proceeded according K8s doc site) I typed <code>kubectl get nodes</code> and got bot servers in status <em>Ready</em>. But after restart systems I got this: </p> <pre><code># kubectl get nodes The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>I checked kubelet service and it is running: </p> <pre><code># systemctl status kubelet kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Mon 2017-04-24 10:01:51 CEST; 15min ago Docs: http://kubernetes.io/docs/ Main PID: 13128 (kubelet) Tasks: 21 Memory: 48.2M CPU: 58.014s CGroup: /system.slice/kubelet.service ├─13128 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local └─13164 journalctl -k -f Apr 24 10:16:40 master kubelet[13128]: I0424 10:16:40.204156 13128 kuberuntime_manager.go:752] Back-off 5m0s restarting failed container=weave pod=weave-net-5qgvz_kube-system(4b7bb2f0-2691-11e7-bfb6-080027229776) Apr 24 10:16:40 master kubelet[13128]: E0424 10:16:40.204694 13128 pod_workers.go:182] Error syncing pod 4b7bb2f0-2691-11e7-bfb6-080027229776 ("weave-net-5qgvz_kube-system(4b7bb2f0-2691-11e7-bfb6-080027229776)"), skipping: fail Apr 24 10:16:42 master kubelet[13128]: I0424 10:16:42.972302 13128 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2b59d0d9-2692-11e7-bfb6-080027229776-default-token-h3v7c" (spec.Name: " Apr 24 10:16:48 master kubelet[13128]: I0424 10:16:48.949731 13128 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2bb42bc1-2692-11e7-bfb6-080027229776-default-token-h3v7c" (spec.Name: " Apr 24 10:16:51 master kubelet[13128]: I0424 10:16:51.978663 13128 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2b023c31-2692-11e7-bfb6-080027229776-default-token-h3v7c" (spec.Name: " Apr 24 10:16:52 master kubelet[13128]: I0424 10:16:52.909589 13128 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/4b7bb2f0-2691-11e7-bfb6-080027229776-default-token-gslqd" (spec.Name: " Apr 24 10:16:53 master kubelet[13128]: I0424 10:16:53.186057 13128 kuberuntime_manager.go:458] Container {Name:weave Image:weaveworks/weave-kube:1.9.4 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env: Apr 24 10:16:53 master kubelet[13128]: I0424 10:16:53.188091 13128 kuberuntime_manager.go:742] checking backoff for container "weave" in pod "weave-net-5qgvz_kube-system(4b7bb2f0-2691-11e7-bfb6-080027229776)" Apr 24 10:16:53 master kubelet[13128]: I0424 10:16:53.188717 13128 kuberuntime_manager.go:752] Back-off 5m0s restarting failed container=weave pod=weave-net-5qgvz_kube-system(4b7bb2f0-2691-11e7-bfb6-080027229776) Apr 24 10:16:53 master kubelet[13128]: E0424 10:16:53.189136 13128 pod_workers.go:182] Error syncing pod 4b7bb2f0-2691-11e7-bfb6-080027229776 ("weave-net-5qgvz_kube-system(4b7bb2f0-2691-11e7-bfb6-080027229776)"), skipping: fail </code></pre> <p>Here is systemd log file with restarted kubelet: <a href="https://drive.google.com/open?id=0B3EINM-iyF06SXNvUWRoQWZzS2M" rel="noreferrer">Google Drive</a>.</p> 
<p>... I'm not sure what I missed in doc or what happend with kubelet. Can I ask you for help? :] </p> <p>• Ubuntu version </p> <pre><code>cat /etc/os-release NAME="Ubuntu" VERSION="16.04.2 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.2 LTS" VERSION_ID="16.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial </code></pre> <p>• Kernel </p> <pre><code># uname -a Linux ubuntu 4.4.0-72-generic #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <p>• Kubectl version </p> <pre><code># kubectl version Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>• Kubeadm version </p> <pre><code># kubeadm version kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>• Kubelet version </p> <pre><code># kubelet --version Kubernetes v1.6.1 </code></pre> <p>• Docker version </p> <pre><code># docker version Client: Version: 1.11.2 API version: 1.23 Go version: go1.5.4 Git commit: b9f10c9 Built: Wed Jun 1 22:00:43 2016 OS/Arch: linux/amd64 Server: Version: 1.11.2 API version: 1.23 Go version: go1.5.4 Git commit: b9f10c9 Built: Wed Jun 1 22:00:43 2016 OS/Arch: linux/amd64 </code></pre>
<p>I had the same problem with kubernetes 1.12.3 and ubuntu 16.04.05. I then looked at the kubernetes log by running the command</p> <p><code>$ journalctl -u kubelet</code></p> <p>and then in the log i saw that k8s was complaining (exiting with status 255) about swap being on.</p> <p>So i then turned swap off by running</p> <p><code>$ swapoff -a</code></p> <p>Then i edited fstab and commented out the entry for swap</p> <pre><code>$ vi /etc/fstab #comment out line with swap </code></pre> <p>and then rebooting the system. After the system came back up, i verified that swap was disabled by running</p> <p><code>$ free -m</code></p> <p>and checking whether the row for swap has 0. It did.</p> <p>Then i verified that kubeapi service had successfully started by executing</p> <p><code>$ systemctl status kubelet</code></p> <p>It has successfully started. I verified by also re-checking journalctl logs. Did not see swap error this time.</p> <p>I verified k8s node status by running </p> <p><code>$ kubectl get nodes</code></p> <p>which was now working and showing expected output.</p> <p>NOTE: I had KUBECONFIG set in my .bash_profile file as well, previously.</p> <pre><code>root@k8s-master:~# cat .bash_profile export KUBECONFIG="/etc/kubernetes/admin.conf" </code></pre>
<p>I have set up a basic 2-node k8s cluster on AWS using kops. I had issues connecting to and interacting with the cluster using <code>kubectl</code>, and I keep getting the error <code>The connection to the server api.euwest2.dev.avi.k8s.com was refused - did you specify the right host or port?</code> when trying to run any kubectl command.</p> <p>I have done a basic <code>kops export kubecfg --name xyz.hhh.kjh.k8s.com --config=~$KUBECONFIG</code> to export the kubeconfig for the cluster I have created. Not sure what else I'm missing to make a successful connection to the kube-apiserver and make kubectl work?</p>
<p>Sounds like either:</p> <ul> <li>Your kube-apiserver is not running. <ul> <li>Check with <code>docker ps -a | grep apiserver</code> on your Kubernetes master.</li> </ul></li> <li><code>api.euwest2.dev.avi.k8s.com</code> is resolving to an IP address where your nothing is listening. <ul> <li><code>208.73.210.217</code>?</li> </ul></li> <li>You have the wrong port configured for your kube-apiserver on your ~/.kube/config <ul> <li><code>server: https://api.euwest2.dev.avi.k8s.com:6443</code>?</li> </ul></li> </ul>
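<p>A few quick checks that correspond to the points above (the hostname is the one from the question):</p> <pre><code># on the master: is the kube-apiserver container up?
docker ps -a | grep apiserver

# does the API hostname resolve to your master / load balancer?
dig +short api.euwest2.dev.avi.k8s.com

# which endpoint (and port) is kubectl actually talking to?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
</code></pre>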
<p>I am designing the architecture of my software instance provisioning system. For this I will use Kubernetes in such a way that each client will have their own namespace with their own pods. However, the Kubernetes cluster will have a common entry point to all the instances, which will be an nginx server.</p> <p>My question is as follows: since the provided software allows file uploads, if several of my clients decide to upload files at the same time, do I run the risk that the nginx server gets overloaded and nobody can access the instance they hired?</p> <p>Is there any good practice I should follow to design my architecture?</p>
<p>You could use the <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md" rel="nofollow noreferrer">nginx ingress controller</a> and deploy it with multiple replicas so that it can be scaled up to handle the load. Then your nginx is part of the cluster (rather than a separate server) and can take advantage of the kubernetes cluster's capacity for horizontal scaling.</p> <p>Unless you are running on-prem with NodePort/HostPort - then you might want to run your nginx as an external load balancer, as in that case you don't have one from a cloud provider. In that setup you can configure rate-limiting and throttling in nginx itself. If you are in the cloud, you can also use annotations to do this with the nginx ingress, as sketched below.</p>
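<p>A rough sketch of the rate-limiting idea with the nginx ingress controller (the annotation values and service name are arbitrary examples):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: uploads
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "5"           # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "10"  # concurrent connections per client IP
spec:
  rules:
  - http:
      paths:
      - path: /upload
        backend:
          serviceName: upload-service    # placeholder service name
          servicePort: 80
</code></pre>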
<p>I'm running Kubernetes with AWS EKS. I'm performing some load tests for a nodeport service and seeing a concurrent connection limit of ~16k-20k when hitting a node the pod is not running on. I'm wondering if there's some way to increase the number of concurrent connections.</p> <p>So I'm running a nodeport service with only 1 pod that is scheduled on node A. The load test I'm running tries to connect as many concurrent websocket connections as possible. The websockets just sleep and send heartbeats every 30s to keep the connection alive.</p> <p>When I point the load tester (tsung) at node A, I can get upwards of 65k concurrent websockets before the pod gets OOMKilled so memory is the limiting factor and that's fine. The real problem is when I point the load tester at node B, and kube-proxy's iptables forward the connection to node A, all of sudden, I can only get about 16k-20k concurrent websocket connections before the connections start stalling. According to netstat, they are getting stuck in the <code>SYN_SENT</code> state.</p> <pre><code>netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n ... 20087 ESTABLISHED 30969 SYN_SENT </code></pre> <p>The only thing I can think of to check is my conntrack limit and it looks to be fine. Here is what I get for node B.</p> <pre><code>net.netfilter.nf_conntrack_buckets = 16384 net.netfilter.nf_conntrack_max = 131072 net.nf_conntrack_max = 131072 </code></pre> <p>Here is the port range. I'm not sure if it matters (I'm not sure if DNAT and SNAT use up ports), but the range seems to be well above 16k.</p> <pre><code>net.ipv4.ip_local_port_range = 32768 60999 </code></pre> <p>The file descriptor limit and kernel TCP settings are the same for node A and node B so I <em>think</em> that rules them out.</p> <p>Is there anything else that could be limiting the number of concurrent connections forwarded through iptables/netfilter?</p>
<p>You are always going to get worse performance when hitting a NodePort on a node where your pod is not running. Essentially, your packets are going through extra hops (through iptables) trying to get to their final destination.</p> <p>I'd recommend using <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">source IP</a> for your <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">NodePort</a> service. Basically, patch your service with this:</p> <pre><code>$ kubectl patch svc &lt;your-service&gt; -p '{"spec":{"externalTrafficPolicy":"Local"}}' </code></pre> <p>Then let your load balancer forward traffic only to NodePorts that are serving traffic.</p> <p>Alternatively, if you'd like something better performing, you could consider using proxy mode <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">ipvs</a> or something like <a href="https://cilium.readthedocs.io/en/latest/intro/" rel="nofollow noreferrer">BPF/Cilium</a> for your overlay.</p>
<p>I'm running a web service whose specification I cannot change. I want to use a liveness probe with HTTP POST on Kubernetes. I couldn't find anything available, and all of my efforts with busybox and netcat have failed. </p> <p>Is there a solution? Is it possible to build a custom liveness probe from any Linux distro?</p>
<p>Kubernetes Probes only support HTTP GET, TCP &amp; Command.</p> <p>If you must check something over HTTP POST you could use a command approach and just <code>curl -XPOST ..</code></p> <p>An example would be (note that each list item under <code>command</code> is a single argument, and the URL should target the container's port):</p> <pre><code>... containers: - name: k8-byexamples-spring-rest image: gcr.io/matthewdavis-byexamples/k8-byexamples-spring-rest:1d4c1401c9485ef61322d9f2bb33157951eb351f ports: - containerPort: 8080 name: http livenessProbe: exec: command: - curl - -XPOST - http://localhost:8080/test123 initialDelaySeconds: 5 periodSeconds: 5 ... </code></pre> <p>For more explanation see: <a href="https://matthewdavis.io/kubernetes-health-checks-demystified/" rel="noreferrer">https://matthewdavis.io/kubernetes-health-checks-demystified/</a>.</p> <p>Hope that helps!</p>
<p>I have a use case where I need to integrate two different services using a fanout ingress (different paths). I decided to do so because both services are actually related, so I don't want to separate the domains. Another reason is that I don't want to use another ingress. However, this problem arises: I understand that once I activate Google IAP, a new clientId and secretId will be created, and the authorized redirect/JavaScript URL should be a domain (it can't include any path or be in wildcard form). </p> <p>I'm trying to access https://{my-domain}/{some-path}. It turns out I get the following error.</p> <ol start="400"> <li>That’s an error.</li> </ol> <p>Error: redirect_uri_mismatch</p> <blockquote> <p>The redirect URI in the request, https://{my-domain}/_gcp_gatekeeper/authenticate, does not match the ones authorized for the OAuth client. To update the authorized redirect URIs, visit: <a href="https://console.developers.google.com/apis/credentials/oauthclient/" rel="nofollow noreferrer">https://console.developers.google.com/apis/credentials/oauthclient/</a></p> </blockquote> <p>This is my ingress config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.allow-http: "false" kubernetes.io/ingress.global-static-ip-name: &lt;some-static-ip&gt; name: &lt;name&gt; namespace: &lt;namespace&gt; spec: backend: serviceName: &lt;service-1&gt; servicePort: &lt;port-of-service-1&gt; rules: - http: paths: - backend: serviceName: &lt;service-2&gt; servicePort: &lt;port-of-service-2&gt; path: /&lt;some-path&gt;/* tls: - secretName: &lt;secret-name&gt; </code></pre> <p>However, when I try to access my 1st service, it works just fine, as expected.</p> <p>Just wondering if it's possible to enable IAP for two different backend services using the same ingress. I didn't manage to find more about this, though. Thanks!</p> <p>Best,</p>
<p>After I read this:</p> <ol> <li><a href="https://cloud.google.com/iap/docs/reference/compute-engine-apis#set_iap_properties_by_updating" rel="nofollow noreferrer">https://cloud.google.com/iap/docs/reference/compute-engine-apis#set_iap_properties_by_updating</a></li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig</a></li> <li><a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto" rel="nofollow noreferrer">https://cloud.google.com/iap/docs/enabling-kubernetes-howto</a></li> </ol> <p>I realize that I just need to enable IAP for my second service and then override the value of clientId and secretId of my 2nd service by the value provided by IAP config for my 1st service. Go to your IAP console:</p> <pre><code>kubectl create secret generic backend-config-secret --namespace {namespace} \ --from-literal=client_id=client_id_key \ --from-literal=client_secret=client_secret_key </code></pre> <p>Create your BackendConfig:</p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: {name} namespace: {namespace} spec: iap: enabled: true oauthclientCredentials: secretName: backend-config-secret </code></pre> <p>Add this to your service:</p> <pre><code>beta.cloud.google.com/backend-config: '{"default": "{name}"}' </code></pre>
<p>I'd like to know the current limit on the RAM. (No limit/request was explicitly configured.) </p> <p>How do I see the current configuration of an existing pod?</p> <p>[Edit] That configuration would include not only how much memory is now in use, but also the max-limit, the point at which it would be shut down.</p> <p>(If I blow up the heap with huge strings, I see a limit of approx 4 GB, and the Google Cloud Console shows a crash at 5.4 GB (which of course includes more than the Python interpreter), but I don't know where this comes from. The Nodes have up to 10 GB.)</p> <p>I tried <code>kubectl get pod id-for-the-pod -o yaml</code>, but it shows nothing about memory.</p> <p>I am using Google Container Engine.</p>
<p>Use kubectl top command</p> <p><code>kubectl top pod id-for-the-pod</code></p> <blockquote> <p><strong>kubectl top</strong> --help</p> <p>Display Resource (CPU/Memory/Storage) usage.</p> <p>The top command allows you to see the resource consumption for nodes or pods.</p> <p>This command requires Heapster to be correctly configured and working on the server.</p> <p>Available Commands: node Display Resource (CPU/Memory/Storage) usage of nodes pod Display Resource (CPU/Memory/Storage) usage of pods</p> <p>Usage: kubectl top [flags] [options]</p> </blockquote>
<p>I have 3 Linux VMs on my MBP, and all 3 VMs can share the same disk on the MBP (I have no NFS). K8S can dispatch the docker images to the K8S nodes. When I kill the process, it seems to restart on the <em>same</em> node. I am pretty sure the other node has the same docker image installed, and I guess the behavior is limited by the .yaml file, which binds to the same PVC and PV on that node.</p> <p>If so, how can I configure my .yaml file, especially the PV and PVC, so that when the process is killed, K8S can dispatch it from the node where it was killed to the other node?</p> <p>Thanks, Derek</p>
<p>I don't believe this is possible. The PV is bound to a node, as that's where it physically exists; so if your pod has a PVC bound to that PV, it will always be scheduled on that node.</p> <p>You'd need to use a different provider, such as Ceph/RBD, in order to maintain freedom of movement for your PVs/PVCs.</p> <p>Maybe Rook.io would be something useful for you to experiment with :)</p>
<p>I have a Service in a Kubernetes cluster which has a label selector, several pods have this label, therefore are "connected" to the Service. Is it possible to retrieve the internal IP addresses of these pods using the Service IP?</p> <p>I know that service endpoints can be retrieved in the Kubernetes CLI, however, I was wondering whether this could be achieved in Java? i.e., Nslookup.</p> <p>Maybe an addition to this question would be: How do you perform a DNS query on a headless service, do you pass the service name?</p> <p>Thanks,</p>
<p>You can get the endpoints from the API server. For example to get the IP addresses for the first endpoint returned by the API server:</p> <pre><code>$ curl -s -k -H 'Authorization: Bearer &lt;REDACTED&gt;' \ https://&lt;apiserver-address&gt;:6443/api/v1/endpoints | \ jq .items[0].subsets[0].addresses[].ip </code></pre> <p>You can also use this to retrieve the node names where your pods are running calling the above endpoint. </p> <pre><code>$ curl -s -k -H 'Authorization: Bearer &lt;REDACTED&gt;' \ https://&lt;apiserver-address&gt;:6443/api/v1/endpoints | \ jq .items[0].subsets[0].addresses[].nodeName </code></pre> <p>This can also be done using your favorite language API like <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Java</a>, <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Go</a> and <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python</a></p> <p>Described <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records" rel="nofollow noreferrer">here</a> the SRV records that you can use to find the CLUSTER-IP address of your service (not your pods IP addresses). From a pod:</p> <pre><code> $ dig srv _portname._tcp.servicename.default.svc.cluster.local ... servicename.default.svc.cluster.local. 5 IN A 10.x.x.x &lt;== service IP ... </code></pre>
<p>I'm trying to set up the LetsEncrypt SSL certificate using cert-manager. I have successfully deployed cert-manager via Helm and am stuck at configuring <code>ingress.yaml</code>.</p> <pre><code>$ sudo kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/master/docs/tutorials/quick-start/example/ingress.yaml </code></pre> <p>I've got this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: kuard namespace: default spec: rules: - host: example.example.com http: paths: - backend: serviceName: kuard servicePort: 80 path: / tls: - hosts: - example.example.com secretName: quickstart-example-tls </code></pre> <p>So I just replaced the example.example.com hosts with my external IP and got this:</p> <pre><code>A copy of your changes has been stored to "/tmp/kubectl-edit-qx3kw.yaml" The Ingress "kuard" is invalid: spec.rules[0].host: Invalid value: must be a DNS name, not an IP address </code></pre> <p>Is there any way to set it up using just my external IP? I haven't yet chosen the domain name for my app and want to use just a plain IP for demoing and playing around.</p>
<p>No. You cannot use an IP address for the Ingress. To use an IP address, you'd need to configure it to point to your worker nodes and create a NodePort Service, which will allow you to browse to <code>http://IP:NODEPORT</code>.</p>
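<p>A minimal NodePort service for that scenario might look like this (the names follow the kuard quick-start example from the question; the node port value is arbitrary):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kuard-nodeport
spec:
  type: NodePort
  selector:
    app: kuard              # must match the labels on the kuard pods
  ports:
  - port: 80
    targetPort: 8080        # kuard listens on 8080
    nodePort: 30080         # optional; must fall in the 30000-32767 range
</code></pre> <p>You could then browse to <code>http://&lt;node-external-ip&gt;:30080</code> while waiting to pick a domain name.</p>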
<p>We have configured Kubernetes cluster on EC2 machines in our AWS account using kops tool (<a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">https://github.com/kubernetes/kops</a>) and based on AWS posts (<a href="https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/</a>) as well as other resources. </p> <p>We want to setup a K8s cluster of master and slaves such that:</p> <ol> <li>It will automatically resize (both masters as well as nodes/slaves) based on system load.</li> <li>Runs in Multi-AZ mode i.e. at least one master and one slave in every AZ (availability zone) in the same region for e.g. us-east-1a, us-east-1b, us-east-1c and so on. </li> </ol> <p>We tried to configure the cluster in the following ways to achieve the above.</p> <ol> <li><p>Created K8s cluster on AWS EC2 machines using kops this below configuration: node count=3, master count=3, zones=us-east-1c, us-east-1b, us-east-1a. We observed that a K8s cluster was created with 3 Master &amp; 3 Slave Nodes. Each of the master and slave server was in each of the 3 AZ’s.</p></li> <li><p>Then we tried to resize the Nodes/slaves in the cluster using (<a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml</a>). We set the node_asg_min to 3 and node_asg_max to 5. When we increased the workload on the slaves such that auto scale policy was triggered, we saw that additional (after the default 3 created during setup) slave nodes were spawned, and they did join the cluster in various AZ’s. This worked as expected. There is no question here.</p></li> <li><p>We also wanted to set up the cluster such that the number of masters increases based on system load. Is there some way to achieve this? We tried a couple of approaches and results are shared below:</p></li> </ol> <p>A) We were not sure if the cluster-auto scaler helps here, but nevertheless tried to resize the Masters in the cluster using (<a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml</a>). This is useful while creating a new cluster but was not useful to resize the number of masters in an existing cluster. We did not find a parameter to specify node_asg_min, node_asg_max for Master the way it is present for slave Nodes. Is there some way to achieve this?</p> <p>B) We increased the count MIN from 1 to 3 in ASG (auto-scaling group), associated with one the three IG (instance group) for each master. We found that new instances were created. However, they did not join the master cluster. Is there some way to achieve this?</p> <p>Could you please point us to steps, resources on how to do this correctly so that we could configure the number of masters to automatically resize based on system load and is in Multi-AZ mode?</p> <p>Kind regards, Shashi</p>
<p>There is no need to scale <code>Master</code> nodes based on application load: user containers run only on the worker nodes (see the quote below), so your workload does not add masters' load in the way it does for slaves.</p> <blockquote> <p>Master components provide the cluster’s control plane. Master components make global decisions about the cluster (for example, scheduling), and detecting and responding to cluster events (starting up a new pod when a replication controller’s ‘replicas’ field is unsatisfied).</p> <p>Master components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all master components on the same machine, and do not run user containers on this machine. See <a href="https://kubernetes.io/docs/admin/high-availability/" rel="nofollow noreferrer">Building High-Availability Clusters</a> for an example multi-master-VM setup.</p> </blockquote> <h2>A master node consists of the following components:</h2> <p><strong>kube-apiserver</strong></p> <blockquote> <p>Component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane.</p> </blockquote> <p><strong>etcd</strong></p> <blockquote> <p>Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.</p> </blockquote> <p><strong>kube-scheduler</strong></p> <blockquote> <p>Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.</p> </blockquote> <p><strong>kube-controller-manager</strong></p> <blockquote> <p>Component on the master that runs <a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow noreferrer">controllers</a>.</p> </blockquote> <p><strong>cloud-controller-manager</strong></p> <blockquote> <p>runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.</p> </blockquote> <p>For a more detailed explanation, please read the <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">Kubernetes Components</a> docs. Also, if you are thinking about HA, you can read about <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">Creating Highly Available Clusters with kubeadm</a>.</p>
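<p>If a master ever does become resource-constrained, the usual approach with kops is to resize it vertically rather than autoscale it horizontally. A rough sketch, assuming an instance group named <code>master-us-east-1a</code> (list your actual instance group names with <code>kops get ig</code>; cluster name and state store are placeholders):</p> <pre><code># Open the master instance group definition in an editor
kops edit ig master-us-east-1a --name=&lt;cluster-name&gt; --state=s3://&lt;state-store&gt;

# In the editor, raise spec.machineType to a larger instance type, then roll it out
kops update cluster --name=&lt;cluster-name&gt; --state=s3://&lt;state-store&gt; --yes
kops rolling-update cluster --name=&lt;cluster-name&gt; --state=s3://&lt;state-store&gt; --yes
</code></pre>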
<p>I got this structure:</p> <pre><code>├── root
│   ├── app A
│   ├── app B
│   ├── app C
</code></pre> <p>The root folder is initialized as a git folder so I can easily push the whole project. Now I am wondering where to place the <code>kubernetes yaml</code> files. My idea would be something like this:</p> <pre><code>├── root
│   ├── app A
│   ├── app B
│   ├── app C
│   ├── kubernetes.conf.d
│   │   ├── appA.yaml
</code></pre> <p><strong>Question</strong>: is there a common location to place the config files for kubernetes?</p>
<p>This is completely subjective and you should do whatever works for you. My preference, however, is:</p> <pre><code># Each application has
- opt
  - kubernetes
    - deployment.yaml
  - helm
    - values.yaml
</code></pre> <p>I like using <code>./opt</code>, as it is familiar to most with Unix experience.</p>
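<p>As a rough usage sketch with that layout (the application, release and chart names below are placeholders), deploying app A could look like:</p> <pre><code># Apply app A's raw manifests
kubectl apply -f appA/opt/kubernetes/

# Or, if app A is deployed via Helm, point the release at its values file
helm upgrade --install appA &lt;chart&gt; -f appA/opt/helm/values.yaml
</code></pre>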
<p>I would like to suspend the main process in a docker container running in a kubernetes pod. I have attempted to do this by running </p> <pre><code>kubectl exec &lt;pod-name&gt; -c &lt;container-name&gt; kill -STOP 1 </code></pre> <p>but the signal will not stop the container. Investigating other approaches, it looks like <code>docker stop --signal=SIGSTOP</code> or <code>docker pause</code> might work. However, as far as I know, kubectl exec always runs in the context of a container, and these commands would need to be run in the pod outside the context of the container. Does kubectl's interface allow for anything like this? Might I achieve this behavior through a call to the underlying kubernetes API?</p>
<p>You could scale the Deployment down to 0 replicas, which stops all of its running pods. This isn't quite a pause, but it does stop the workload until you set the replica count back above 0.</p> <pre><code>kubectl scale --replicas=0 deployment/&lt;deployment name&gt; --namespace=&lt;namespace&gt;
</code></pre>
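<p>To resume later, scale the replica count back up:</p> <pre><code>kubectl scale --replicas=1 deployment/&lt;deployment name&gt; --namespace=&lt;namespace&gt;
</code></pre> <p>Note that this terminates the pods rather than freezing their processes, so any in-memory state is lost.</p>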
<p>I have a situation in which I run a Kubernetes Job and then want to delete the pod in which the Job's containers run when the job completes. The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="noreferrer">Kubernetes documentation says</a> that it is the responsibility of the user to delete the pod.</p> <p>However, my goal is to run a CPU-intensive job by spinning up a big honking pod, let the job run, and then upon job completion, automagically nuke the pod. My thinking is that paying for an expensive, resource-intensive pod is worth it while the revenue-producing job runs. Once the job is over, the revenue stream ends and I need to cut the expense, hence nuking the pod.</p> <p>I am thinking of having my code in the job container send out a message upon program completion to a publisher that is picked up by a subscriber. The subscriber knows how to run the command <code>kubectl delete jobs/myContainer</code> to nuke the pod.</p> <p>But maybe there is a better way. Thus, I ask.</p> <p>Thanks in advance.</p>
<p>There is a new API that deletes Jobs, together with their dependent objects (including the pods), after they finish: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/" rel="nofollow noreferrer">TTLAfterFinished</a>.</p> <p>You can define it in the <code>.spec.ttlSecondsAfterFinished</code> field of your Job. The cleanup will happen <code>X</code> seconds after the job completes or fails.</p> <p>As of 16 March 2019, this API is in alpha (introduced in version 1.12) and can only be used in clusters with alpha features enabled.</p>
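<p>A minimal sketch of a Job using this field (the image and command are placeholders for your own workload):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: cpu-intensive-job
spec:
  ttlSecondsAfterFinished: 60      # Job and its pods are removed 60s after finishing
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: my-registry/worker:latest   # placeholder image
        command: ["run-heavy-job"]         # placeholder command
</code></pre>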