<p>I have a 3-node [host a, host b, host c] kubernetes cluster (version 1.12.2). I am trying to run the spark-pi example jar as mentioned in the <a href="https://kubernetes.io/blog/2018/03/apache-spark-23-with-native-kubernetes/" rel="nofollow noreferrer">kubernetes document</a>.</p> <p>Host a is my kubernetes master. <code>&gt;&gt; kubectl get nodes</code> lists all three nodes.</p> <p>I have built the spark docker image using what's provided in the spark 2.3.0 binary folder.</p> <pre><code>&gt;&gt; sudo ./bin/docker-image-tool.sh -r docker.io/spark/spark -t spark230 build </code></pre> <p>I got the message that the image was built successfully.</p> <pre><code>&gt;&gt; docker images ls REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/spark/spark spark230 6a2b645d7efe About an hour ago 346 MB docker.io/weaveworks/weave-npc 2.5.0 d499500e93d3 7 days ago 49.5 MB docker.io/weaveworks/weave-kube 2.5.0 a5103f96993a 7 days ago 148 MB docker.io/openjdk 8-alpine 97bc1352afde 2 weeks ago 103 MB k8s.gcr.io/kube-proxy v1.12.2 15e9da1ca195 2 weeks ago 96.5 MB k8s.gcr.io/kube-apiserver v1.12.2 51a9c329b7c5 2 weeks ago 194 MB k8s.gcr.io/kube-controller-manager v1.12.2 15548c720a70 2 weeks ago 164 MB k8s.gcr.io/kube-scheduler v1.12.2 d6d57c76136c 2 weeks ago 58.3 MB k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 7 weeks ago 220 MB k8s.gcr.io/coredns 1.2.2 367cdc8433a4 2 months ago 39.2 MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 10 months ago 742 kB </code></pre> <pre><code>&gt; ./bin/spark-submit --master k8s://https://&lt;api-server&gt;:&lt;api &gt; server port&gt; --deploy-mode cluster --name spark-pi &gt; --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=5 --conf &gt; spark.kubernetes.container.image=spark/spark:spark230 --conf &gt; spark.kubernetes.authenticate.driver.serviceAccountName=spark &gt; local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar </code></pre> <p>When I submit the above command, it sometimes gives proper output. Other times it throws the error below. </p> <pre><code>&gt; code = Unknown desc = repository docker.io/spark/spark not found: does not exist or no pull access, reason=ErrImagePull </code></pre> <p>When I debugged it further, it looks like whenever the node is host b or host c, it throws the above error message.</p> <p>When the node is host a, it runs fine. It looks like the other nodes are unable to locate the image. </p> <p>Questions: </p> <ol> <li>Should I install spark on all nodes &amp; build the docker image on all nodes? </li> <li>Is it possible to pass the image reference from a single node [host a] to the other nodes? i.e. how can I make the other nodes refer to the same image from host a?</li> </ol>
<p>Yes, you need to build the Spark image on all the nodes. You can write a wrapper script that invokes 'rebuild-image.sh' on all the nodes, as below:</p> <pre><code>for h in $hostnames; do
  rsync -av /opt/spark ${h}:/opt
  ssh ${h} /opt/spark/rebuild-image.sh
done
</code></pre>
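<p>A hedged alternative (not part of the original answer): instead of rebuilding on every node, you can push the image once to a registry that every node can pull from and point <code>spark.kubernetes.container.image</code> at that reference. The registry host below is a placeholder:</p> <pre><code># assumes a registry reachable from all nodes; &lt;registry&gt; is a placeholder
docker tag spark/spark:spark230 &lt;registry&gt;/spark/spark:spark230
docker push &lt;registry&gt;/spark/spark:spark230
# then submit with --conf spark.kubernetes.container.image=&lt;registry&gt;/spark/spark:spark230
</code></pre>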
<p>I have set up a kubernetes cluster using kubeadm.</p> <p><strong>Environment</strong></p> <ol> <li>Master node installed in a PC with public IP.</li> <li>Worker node behind NAT address (the interface has a local internal IP, but needs to be accessed using the public IP)</li> </ol> <p><strong>Status</strong></p> <p>The worker node is able to join the cluster and, running</p> <pre><code>kubectl get nodes </code></pre> <p>the status of the node is ready. </p> <p>Kubernetes can deploy and run pods on that node.</p> <p><strong>Problem</strong></p> <p>The problem that I have is that I'm not able to access the pods deployed on that node. For example, if I run </p> <pre><code>kubectl logs &lt;pod-name&gt; </code></pre> <p>where pod-name is the name of a pod deployed on the worker node, I have this error:</p> <pre><code>Error from server: Get https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage: dial tcp 192.168.0.17:10250: i/o timeout </code></pre> <p>because it is trying to use the local IP 192.168.0.17, which is not accessible externally. </p> <p>I have seen that the node had this annotation:</p> <pre><code>flannel.alpha.coreos.com/public-ip: 192.168.0.17 </code></pre> <p>So, I have tried to modify the annotation, setting the external IP, in this way:</p> <pre><code>flannel.alpha.coreos.com/public-ip: &lt;my_external_ip&gt; </code></pre> <p>and I see that the node is correctly annotated, but it is still using 192.168.0.17.</p> <p>Is there something else that I have to set up in the worker node or in the cluster configuration?</p>
<p><em>there were a metric boatload of Related questions in the sidebar, and I'm about 90% certain this is a FAQ, but can't be bothered to triage the Duplicate</em></p> <blockquote> <p>Is there something else that I have to set up in the worker node or in the cluster configuration?</p> </blockquote> <p>No, that situation is not a misconfiguration of your worker Node, nor your cluster configuration. It is just a side-effect of the way kubernetes handles Pod-centric traffic. It does mean that if you choose to go forward with that setup, you will not be able to use <code>kubectl exec</code> nor <code>kubectl logs</code> (and I think <code>port-forward</code>, too), since serving those commands requires a direct connection to the <code>kubelet</code> port on the Node which hosts the Pod you are interacting with. That's primarily to offload the traffic from traveling through the API server, but it can also be a scaling issue if you have a sufficiently large number of exec/log/port-forward/etc commands happening simultaneously, since TCP ports are not infinite.</p> <p>I think it is <em>theoretically</em> possible to have your workstation join the overlay network, since by definition it's not related to the outer network, but I don't have a ton of experience with trying to get an overlay to play nice-nice with NAT, so that's the "theoretically" part.</p> <p>I have personally gotten Wireguard to work across NAT, meaning you could VPN into your Node's network, but it was some gear turning, and is likely more trouble than it's worth.</p>
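<p>To see which address the API server will dial for <code>logs</code>/<code>exec</code>, you can inspect the addresses the node advertises; the preference order is controlled by the API server's <code>--kubelet-preferred-address-types</code> flag (a hedged pointer, not something the answer above relies on):</p> <pre><code># shows the InternalIP/ExternalIP/Hostname entries registered for the node
kubectl get node &lt;node-name&gt; -o jsonpath='{.status.addresses}'
</code></pre>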
<p>I have a GKE cluster running in us-central1 with a preemptable node pool. I have nodes in each zone (us-central1-b,us-central1-c,us-central1-f). For the last 10 hours, I get the following error for the underlying node vm:</p> <pre><code>Instance '[instance-name]' creation failed: The zone '[instance-zone]' does not have enough resources available to fulfill the request. Try a different zone, or try again later. </code></pre> <p>I tried creating new clusters in different regions with different machine types, using HA (multi-zone) settings and I get the same error for every cluster.</p> <p>I saw an issue on <a href="https://status.cloud.google.com/incident/container-engine/18005" rel="nofollow noreferrer">Google Cloud Status Dashboard</a> and tried with the console, as recommended, and it errors out with a timeout error.</p> <p>Is anyone else having this problem? Any idea what I may be dong wrong?</p> <p><strong>UPDATES</strong></p> <ul> <li>Nov 11 <ul> <li>I stood up a cluster in us-west2, this was the only one which would work. I used gcloud command line, it seems the UI was not effective. There was a note similar to this situation, use gcloud not ui, on the Google Cloud Status Dashboard.</li> <li>I tried creating node pools in us-central1 with the gcloud command line, and ui, to no avail.</li> <li>I'm now federating deployments across regions and standing up multi-region ingress.</li> </ul></li> <li>Nov. 12 <ul> <li>Cannot create HA clusters in us-central1; same message as listed above.</li> <li>Reached out via twitter and received a response.</li> <li>Working with the <a href="https://kubernetes.io/docs/tasks/administer-federation/cluster/" rel="nofollow noreferrer">K8s guide to federation</a> to see if I can get multi-cluster running. Most likely going to use <a href="https://github.com/kelseyhightower/kubernetes-cluster-federation" rel="nofollow noreferrer">Kelsey Hightowers approach</a></li> <li>Only problem, can't spin up clusters to federate.</li> </ul></li> </ul> <p><strong>Findings</strong></p> <ul> <li>Talked with google support, need a $150/mo. package to get a tech person to answer my questions.</li> <li>Preemptible instances are not a good option for a primary node pool. I did this because I'm cheap, it bit me hard. <ul> <li>The new architecture is a primary node pool with <a href="https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts" rel="nofollow noreferrer">committed use</a> VMs that do not autoscale, and a secondary node pool with preemptible instances for autoscale needs. The secondary pool will have minimum nodes = 0 and max nodes = 5 (for right now); this cluster is regional so instances are across all zones.</li> <li>Cost for an n1-standard-1 <a href="https://cloud.google.com/compute/docs/sustained-use-discounts" rel="nofollow noreferrer">sustained use</a> (assuming 24/7) a 30% discount off list.</li> <li>Cost for a 1-year n1-standard-1 <a href="https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts" rel="nofollow noreferrer">committed use</a> is about ~37% discount off list.</li> <li>Preemptible instances are re-provisioned every 24hrs., if they are not taken from you when resource needs spike in the region.</li> <li>I believe I fell prey to a resource spike in the us-central1.</li> </ul></li> <li>A must-watch for people looking to federate K8s: <a href="https://www.youtube.com/watch?v=kwOvOLnFYck" rel="nofollow noreferrer">Kelsey Hightower - CNCF Keynote | Kubernetes Federation</a></li> </ul>
<p>Issue appears to be resolved as of Nov 13th.</p>
<p>I am trying to create a Role and RoleBinding so I can use Helm. What are the equivalent <code>kubectl</code> commands to create the following resources? Using the command line makes dev-ops simpler in my scenario.</p> <h2>Role</h2> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: tiller-manager-foo namespace: foo rules: - apiGroups: ["", "batch", "extensions", "apps"] resources: ["*"] verbs: ["*"] </code></pre> <h2>RoleBinding</h2> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: tiller-binding-foo namespace: foo subjects: - kind: ServiceAccount name: tiller-foo namespace: foo roleRef: kind: Role name: tiller-manager-foo apiGroup: rbac.authorization.k8s.io </code></pre> <h2>Update</h2> <p>According to @nightfury1204 I can run the following to create the <code>Role</code>:</p> <blockquote> <p><code>kubectl create role tiller-manager-foo --namespace foo --verb=* --resource=*.,*.apps,*.batch,*.extensions -n foo --dry-run -o yaml</code></p> </blockquote> <p>This outputs:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: tiller-manager-foo rules: - apiGroups: - "" resources: - '*' verbs: - '*' - apiGroups: - apps resources: - '*' verbs: - '*' - apiGroups: - batch resources: - '*' verbs: - '*' - apiGroups: - extensions resources: - '*' verbs: - '*' </code></pre> <p>The <code>namespace</code> is missing and, secondly, is this equivalent?</p>
<p><strong>For Role:</strong></p> <pre><code>kubectl create role tiller-manager-foo --verb=* --resource=*.batch,*.extensions,*.apps,*. -n foo </code></pre> <blockquote> <p><code>--resource=*</code> support was added in kubectl 1.12</p> </blockquote> <p>Regarding the missing namespace: the Role is created in the namespace given by <code>-n foo</code>; the <code>--dry-run -o yaml</code> output simply does not print it in <code>metadata</code>.</p> <p><strong>For RoleBinding:</strong></p> <pre><code>kubectl create rolebinding tiller-binding-foo --role=tiller-manager-foo --serviceaccount=foo:tiller-foo -n foo </code></pre>
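<p>A quick sanity check after creating both objects (not part of the original answer, just a hedged verification sketch using impersonation):</p> <pre><code># confirm the Role landed in the foo namespace
kubectl get role tiller-manager-foo -n foo -o yaml

# confirm the ServiceAccount is actually granted the expected verbs
kubectl auth can-i create deployments -n foo --as=system:serviceaccount:foo:tiller-foo
</code></pre>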
<p>I have a chart for Helm that works fine.</p> <p>I updated a couple of lines of the "template" files to have it set up differently and ran <code>helm install -n &lt;release name&gt; &lt;chart dir&gt;</code>.</p> <p>But I found that the change never gets applied.</p> <p>When I tried <code>helm install --dry-run --debug</code>, I don't see my updates. (It might be getting the chart from remote ...)</p> <p>Does Helm cache stuff? I wasn't able to find anything about it...</p> <p>I am trying to set up HDFS on my cluster using this <a href="https://github.com/apache-spark-on-k8s/kubernetes-HDFS" rel="nofollow noreferrer">link</a></p>
<p>It is possible to make changes to a chart that make no difference to the application when it runs, or that are not even included in the Kubernetes resources that are generated (e.g. a change within an if block whose condition evaluates to false). You can use <code>--dry-run --debug</code> to see what the template evaluates to and check whether your change is present in the Kubernetes resources that would result from the chart installation. This gives you a quick way to check a chart change without it being installed.</p> <p>If you were publishing the chart then you could see a delay between publishing and getting it from the hosted repo and might need to run <code>helm repo update</code>, but you seem to be using the chart source code directly, so I would not expect any delay. </p>
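<p>One way to see exactly what the cluster is running versus what your local chart renders (a hedged suggestion, assuming Helm 2 as in the question) is to compare the stored manifest of the release with a fresh dry run:</p> <pre><code># what the cluster actually holds for the release
helm get manifest &lt;release-name&gt;

# what your local chart directory would render right now
helm install --dry-run --debug &lt;chart-dir&gt;
</code></pre>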
<p>In OpenShift, is there a more elegant way of obtaining the name of the most recently created pod in application <code>my_app</code> than this one?</p> <pre><code>name=$(oc get pods -l app=my_app -o=jsonpath='{range.items[*]}{.status.startTime}{"\t"}{.metadata.name}{"\n"}{end}' | sort -r | head -1 | awk '{print $2}') </code></pre> <p>The idea is to sort by <code>.status.startTime</code> and to output one <code>.metadata.name</code>. So far, I have not been successful in using <code>oc get</code> with both options <code>--sort-by</code> and <code>-o jsonpath</code> at the same time, so I have fallen back to Unix pipes in this version.</p> <p>I am using OpenShift v3.9. I am also tagging this question for Kubernetes because it presumably applies to <code>kubectl</code> (instead of <code>oc</code>) in an analogous manner (without the <code>-l app=my_app</code>). </p>
<p>Try this. Note that <code>--sort-by</code> sorts in ascending order, so the most recently created pod is the <em>last</em> item (<code>.items[0]</code> would return the oldest one):</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp -o jsonpath="{.items[-1:].metadata.name}"
</code></pre> <p>If your client's jsonpath does not accept the negative slice, piping works as well: <code>kubectl get pods --sort-by=.metadata.creationTimestamp -o name | tail -1</code>.</p>
<p>This is an excerpt of my deployment config:</p> <pre><code>... spec: containers: - env: - name: GIT_USERNAME valueFrom: secretKeyRef: key: username name: git - name: GIT_PASSWORD valueFrom: secretKeyRef: key: password name: git initContainers: - args: - clone - '--single-branch' - '--' - 'https://$(GIT_USERNAME):$(GIT_PASSWORD)@someurl.com/something.git' - '/testing/' image: alpine/git imagePullPolicy: Always name: init-clone-repo resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /testing name: test-volume volumes: - emptyDir: {} name: test-volume ... </code></pre> <p>The initContainer fails, because <em>$(GIT_USERNAME)</em> and <em>$(GIT_PASSWORD)</em> are used as is and not expanded. I have tried <em>$GIT_USERNAME</em>, <em>${GIT_USERNAME}</em> and I am pretty much out of ideas.</p> <p>How do I correctly use environment variables in args for init containers?</p>
<p>Add the environment variables to the init container itself; the <code>$(VAR)</code> references in <code>args</code> are only expanded from variables defined in that same container's <code>env</code>:</p> <pre><code>spec: initContainers: - args: - clone - '--single-branch' - '--' - 'https://$(GIT_USERNAME):$(GIT_PASSWORD)@someurl.com/something.git' - '/testing/' image: alpine/git imagePullPolicy: Always name: init-clone-repo env: - name: GIT_USERNAME valueFrom: secretKeyRef: key: username name: git - name: GIT_PASSWORD valueFrom: secretKeyRef: key: password name: git resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /testing name: test-volume volumes: - emptyDir: {} name: test-volume </code></pre>
<p><strong>I would like to sh myself inside a kubernetes pod and execute a curl command. Unfortunately I can't find anywhere a working image with curl available (and compatible with kubernetes)...</strong></p> <ol> <li>I tried some docker images with Alpine and curl but each time it ended with CrashLoopBackOff. I guess it means the container exited because the docker image exits after executing itself...</li> <li>I also tried using the alpine and ubuntu images alone, but each time it also ended with CrashLoopBackOff.</li> <li>I managed to exec into a few images, but they never had curl installed and neither apt-get nor apk was working.</li> </ol> <p>To exec into a container I'm doing a simple <code>kubectl exec -it POD_ID /bin/bash</code></p> <p><em>Does someone know of a minimal docker image that contains a curl binary and won't crash in kubernetes?</em></p> <p>PS: This is for testing purposes so it does not need to be rock solid or anything</p> <p>Thx</p> <hr> <p>UPDATE 1: This is the yaml I use to deploy every candidate image:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: blue namespace: default spec: replicas: 1 template: metadata: labels: name: blue spec: containers: - name: blue-website image: SOME_IMAGE:latest resources: requests: cpu: 0.1 memory: 200 </code></pre> <p>I don't think it's broken, because it works with certain images.</p>
<p>You can skip the manifest and use <code>kubectl run</code> to spin up one of these pods on demand. i.e.</p> <pre><code>kubectl run curl -it --rm --image=curlimages/curl -- sh </code></pre> <p>This would create a deployment named <code>curl</code> from the <code>curlimages/curl</code> image and give you an interactive (<code>-it</code>) shell inside it. When you exit, the deployment will be deleted (<code>--rm</code>).</p>
<p>I use Minikube for simulating my Kubernetes production architecture. In the cluster, I need to create a website and I decided to use Sails.js.</p> <p>Here is my Kubernetes configuration :</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: white-label-storage-persistent-volume labels: type: local app: white-label role: master tier: backend spec: storageClassName: manual capacity: storage: 5Gi accessModes: - ReadWriteMany hostPath: path: "/white-label-data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: white-label-storage-persistent-volume-claim labels: app: white-label role: master tier: backend spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 5Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: white-label-deployment labels: app: white-label role: master tier: backend spec: replicas: 1 strategy: type: RollingUpdate selector: matchLabels: app: white-label role: master tier: backend template: metadata: labels: app: white-label role: master tier: backend spec: containers: - name: white-label image: pastel-white-label:v1 imagePullPolicy: IfNotPresent workingDir: "/usr/src/app" resources: requests: memory: 2Gi cpu: 1 limits: memory: 4Gi cpu: 2 ports: - containerPort: 1337 protocol: TCP volumeMounts: - mountPath: "/data" name: white-label-persistent-volume volumes: - name: white-label-persistent-volume persistentVolumeClaim: claimName: white-label-storage-persistent-volume-claim --- apiVersion: v1 kind: Service metadata: name: white-label-service labels: app: white-label role: master tier: backend spec: type: LoadBalancer ports: - port: 1337 protocol: TCP nodePort: 30003 selector: app: white-label role: master tier: backend sessionAffinity: None --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: white-label-hpa labels: app: white-label role: master tier: backend namespace: default spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: extensions/v1 kind: Deployment name: white-label-deployment targetCPUUtilizationPercentage: 80 </code></pre> <p>And here is the pastel-white-label:v1 Docker image :</p> <pre><code>FROM node:10.13.0-stretch WORKDIR /usr/src/app COPY . ./ RUN npm install -g sails npm-check-updates RUN npm install @sailshq/connect-redis --save RUN npm install CMD ["sails", "lift"] </code></pre> <p>When I start my cluster and build my pod, everything works like a charm. My Sails.js log is spotless, I can see the home page in the browser: no problem at all. I use Sails.js v1.1.0 in Web app mode out of the box BTW. I can see as well that Grunt is launched and is watching.</p> <p>Now if I edit a .less file though, I get an unfriendly:</p> <pre><code>debug: ------------------------------------------------------- error: ** Grunt :: An error occurred. ** error: ------------------------------------------------------------------------ Aborted due to warnings. Running "watch" task Waiting... &gt;&gt; File "assets/styles/styleguide/colors.less" changed. Loading "sync.js" tasks...ERROR &gt;&gt; TypeError: Cannot read property 'length' of undefined Warning: Task "sync:dev" not found. </code></pre> <p>I am sure my .less file has no error (hexa code edition), my .tmp folder is writable (touch .tmp/foo is working for instance) and I believe Grunt is correctly installed as it comes out of the box...</p> <p>Then I really don't know what is going on here...</p> <p>Do you guys have an idea, please ?</p> <p>Thank you ahead</p>
<p>I think you are running into exactly <a href="https://github.com/balderdashy/sails/issues/4513" rel="nofollow noreferrer">this</a>. Looks like it's specific to the node version. You can try an earlier version for your node docker image:</p> <pre><code>FROM node:8.12.0-stretch </code></pre>
<p>I am not able to attach to a container in a pod. I receive the message below:</p> <pre><code>Error from server (Forbidden): pods "sleep-76df4f989c-mqvnb" is forbidden: cannot exec into or attach to a privileged container </code></pre> <p>Could someone please let me know what I am missing?</p>
<p>This seems to be a permission (possibly <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a>) issue.<br> See <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Kubernetes pod security-policy</a>.</p> <p>For instance <a href="https://github.com/gluster/gluster-kubernetes/issues/432" rel="nofollow noreferrer"><code>gluster/gluster-kubernetes</code> issue 432</a> points to <a href="https://github.com/Azure/acs-engine/pull/1961" rel="nofollow noreferrer">Azure PR 1961</a>, which disable the <code>cluster-admin</code> rights (although you can <a href="https://github.com/Azure/acs-engine/issues/2200#issuecomment-363070771" rel="nofollow noreferrer">customize/override the admission-controller flags passed to the API server</a>).</p> <p>So it depends on the nature of your Kubernetes environment.</p>
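<p>To narrow down where the restriction comes from, a couple of checks you can try (hedged, since the exact setup varies by environment):</p> <pre><code># is your user allowed the exec subresource at the RBAC level?
kubectl auth can-i create pods/exec -n &lt;namespace&gt;

# are pod security policies in play on this cluster?
kubectl get psp
</code></pre>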
<p>I set up datadog and kubernetes to test to out monitoring, although in datadog i can see some logs and metrics, in the agent in kubernetes I have the following errors:</p> <pre><code> TRACE ] trace-agent exited with code 0, disabling [ AGENT ] 2018-10-17 08:18:24 UTC | WARN | (datadog_agent.go:149 in LogMessage) | (base.py:212) | DEPRECATION NOTICE: device_name is deprecated, please use a device: tag in the tags list instead [ AGENT ] 2018-10-17 08:18:26 UTC | ERROR | (kubeutil.go:50 in GetKubeletConnectionInfo) | connection to kubelet failed: temporary failure in kubeutil, will retry later: try delay not elapsed yet [ AGENT ] 2018-10-17 08:18:26 UTC | ERROR | (runner.go:289 in work) | Error running check kubelet: [{"message": "Unable to detect the kubelet URL automatically.", "traceback": "Traceback (most recent call last):\n File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/checks/base.py", line 352, in run\n self.check(copy.deepcopy(self.instances[0]))\n File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/kubelet/kubelet.py", line 107, in check\n raise CheckException("Unable to detect the kubelet URL automatically.")\nCheckException: Unable to detect the kubelet URL automatically.\n"}] [ AGENT ] 2018-10-17 08:18:28 UTC | ERROR | (autoconfig.go:604 in collect) | Unable to collect configurations from provider Kubernetes: temporary failure in kubeutil, will retry later: try delay not elapsed yet image: repository: datadog/agent tag: 6.4.2 </code></pre> <p>As the logs state the agent cannot connect to Kubectl, has anyone come across this? </p>
<p>This might be a problem that <a href="https://github.com/DataDog/integrations-core/issues/1829" rel="nofollow noreferrer">other people are running into too</a>. kubelet is no longer listening on the ReadOnlyPort in newer Kubernetes versions, and the port is being deprecated. Samuel Cormier-Iijima reports that the issue can be solved by adding <code>KUBELET_EXTRA_ARGS=--read-only-port=10255</code> in <code>/etc/default/kubelet</code> on the node host.</p>
<p>I have a container running inside a pod and I want to be able to monitor its content every week. I want to write a Kube cronjob for it. Is there a best way to do this?</p> <p>At the moment I am doing this by running a script in my local machine that does <code>kubectl exec my-container</code> and monitors the content of the directory in that container.</p>
<p><code>kubectl exec my-container</code> sounds perfectly fine to me. You might want to look at <a href="https://stackoverflow.com/questions/42642170/kubernetes-how-to-run-kubectl-commands-inside-a-container">this</a> if you want to run <code>kubectl</code> in a pod (Kubernetes CronJob).</p> <p>There are other ways but depending on what you are trying to do in the long term it might be an overkill. For example:</p> <ul> <li><p>You can set up a <a href="https://www.fluentd.org/" rel="nofollow noreferrer">Fluentd</a> or <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent" rel="nofollow noreferrer">tail/grep sidecar</a> (or <code>ls</code>, if you are using a binary file?) to send the content or part of the content of that file to an Elasticsearch cluster. </p></li> <li><p>You can set up <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> in <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Ckubernetes_sd_config%3E" rel="nofollow noreferrer">Kubernetes</a> to scrape metrics on the pod mounted filesystems. You will probably have to use a custom exporter in the pod or something else that exports files in mount points in the pod. <a href="https://github.com/giantswarm/kubernetes-prometheus/blob/master/manifests/prometheus/node-directory-size-metrics/daemonset.yaml" rel="nofollow noreferrer">This is a similar example</a>.</p></li> </ul>
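<p>If you do go the CronJob route, a minimal sketch could look like the one below. Everything here is illustrative: the <code>pod-inspector</code> ServiceAccount is assumed to already exist with RBAC permissions for <code>pods/exec</code> (plus <code>get</code>/<code>list</code> on pods), and the image, pod name and path are placeholders:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: weekly-content-check
spec:
  schedule: "0 3 * * 0"            # every Sunday at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-inspector
          restartPolicy: OnFailure
          containers:
          - name: check
            image: bitnami/kubectl   # any image with kubectl on the PATH
            command:
            - /bin/sh
            - -c
            - kubectl exec my-pod -- ls -la /data
</code></pre>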
<h1>Notes</h1> <p>I am trying to deploy a service and ingress for a demo service (from 'Kubernetes in Action') to an AWS EKS cluster in which the <code>traefik</code> ingress controller has been Helm installed.</p> <p>I am able to access the traefik dashboard from the <code>traefik.example.com</code> hostname after manually adding the IP address of the AWS ELB provisioned by <code>traefik</code> to that hostname in my local <code>/etc/hosts</code> file.</p> <p>If I describe the service and ingress of the <code>traefik-dashboard</code>:</p> <pre><code>$ kubectl describe svc -n kube-system traefik-dashboard Name: traefik-dashboard Namespace: kube-system Labels: app=traefik chart=traefik-1.52.6 heritage=Tiller release=traefik Annotations: &lt;none&gt; Selector: app=traefik,release=traefik Type: ClusterIP IP: 10.100.164.81 Port: &lt;unset&gt; 80/TCP TargetPort: 8080/TCP Endpoints: 172.31.27.70:8080 Session Affinity: None Events: &lt;none&gt; $ kubectl describe ing -n kube-system traefik-dashboard Name: traefik-dashboard Namespace: kube-system Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- traefik.example.com traefik-dashboard:80 (172.31.27.70:8080) Annotations: Events: &lt;none&gt; </code></pre> <p>The service and ingress controller seem to be using the running <code>traefik-575cc584fb-v4mfn</code> pod in the <code>kube-system</code> namespace.</p> <p>Given this info and looking at the traefik docs, I try to expose a demo service through its ingress with the following YAML:</p> <pre><code>apiVersion: apps/v1beta2 kind: ReplicaSet metadata: name: kubia spec: replicas: 3 selector: matchLabels: app: kubia template: metadata: labels: app: kubia spec: containers: - name: kubia image: luksa/kubia --- apiVersion: v1 kind: Service metadata: name: kubia namespace: default spec: selector: app: traefik release: traefik ports: - name: web port: 80 targetPort: 8080 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubia namespace: default spec: rules: - host: kubia.int http: paths: - path: / backend: serviceName: kubia servicePort: web </code></pre> <p>After applying this, I am unable to access the <code>kubia</code> service from the <code>kubia.int</code> hostname after manually adding the IP address of the AWS ELB provisioned by <code>traefik</code> to that hostname in my local <code>/etc/hosts</code> file. Instead, I get a <code>Service Unavailable</code> in the response. Describing the created resources shows some differing info.</p> <pre><code>$ kubectl describe svc kubia Name: kubia Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"ports":[{"name":"web","por... 
Selector: app=traefik,release=traefik Type: ClusterIP IP: 10.100.142.243 Port: web 80/TCP TargetPort: 8080/TCP Endpoints: &lt;none&gt; Session Affinity: None Events: &lt;none&gt; $ kubectl describe ing kubia Name: kubia Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- kubia.int / kubia:web (&lt;none&gt;) Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kubia","namespace":"default"},"spec":{"rules":[{"host":"kubia.int","http":{"paths":[{"backend":{"serviceName":"kubia","servicePort":"web"},"path":"/"}]}}]}} Events: &lt;none&gt; </code></pre> <p>I also notice that the demo <code>kubia</code> service has no endpoints, and the corresponding ingress shows no available backends.</p> <p>Another thing I notice is that the demo <code>kubia</code> service and ingress is in the <code>default</code> namespace, while the <code>traefik-dashboard</code> service and ingress are in the <code>kube-system</code> namespace.</p> <p>Does anything jump out to anyone? Any suggestions on the best way to diagnose it?</p> <p>Many thanks in advance!</p>
<p>It would seem that you are missing the <code>kubernetes.io/ingress.class: traefik</code> annotation that tells your Traefik ingress controller to serve that Ingress definition.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubia namespace: default annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: kubia.int http: paths: - path: / backend: serviceName: kubia servicePort: web </code></pre> <p>If you look at the examples in the <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">docs</a> you can see that the only Ingress that doesn't have the annotation is <code>traefik-web-ui</code>, which points to the Traefik Web UI.</p>
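<p>Separately, an observation based on the output in the question: <code>Endpoints: &lt;none&gt;</code> on the <code>kubia</code> Service means its selector (<code>app: traefik, release: traefik</code>) does not match the <code>kubia</code> pods, so even with the annotation Traefik would have no backends. A sketch of a matching Service, assuming the kubia container listens on 8080 as in the book's example:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubia
  namespace: default
spec:
  selector:
    app: kubia          # must match the ReplicaSet's pod template labels
  ports:
  - name: web
    port: 80
    targetPort: 8080
</code></pre>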
<p>I want to set up an ingress controller on AWS EKS for several microservices that are accessed from an external system.</p> <p>The microservices are accessed via virtual host-names like <code>svc1.acme.com</code>, <code>svc2.acme.com</code>, ...</p> <p>I set up the nginx ingress controller with a helm chart: <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a></p> <p>My idea was to reserve an Elastic IP Address and bind the nginx-controller to that IP by setting the variable externalIP.</p> <p>This way I should be able to access the services with a stable wildcard DNS entry <code>*.acme.com --&gt; 54.72.43.19</code></p> <p>I can see that the ingress controller service get the externalIP, but the IP is not accessible.</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-ingress-controller LoadBalancer 10.100.45.119 54.72.43.19 80:32104/TCP,443:31771/TCP 1m </code></pre> <p>Any idea why?</p> <p>Update:</p> <p>I installed the ingress controller with this command:</p> <p><code> helm install --name ingress -f values.yaml stable/nginx-ingress </code></p> <p>Here is the gist for values, the only thing changed from the default is</p> <p><code> externalIPs: ["54.72.43.19"] </code></p> <p><a href="https://gist.github.com/christianwoehrle/3b136023b1e0085b028a67ca6a0959b7" rel="nofollow noreferrer">https://gist.github.com/christianwoehrle/3b136023b1e0085b028a67ca6a0959b7</a></p>
<p>Maybe you can achieve that by using a Network Load Balancer (<a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html" rel="noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html</a>), that supports fixed IPs, as the backing for your Nginx ingress, eg (<a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/" rel="noreferrer">https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/</a>):</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx namespace: default labels: app: nginx annotations: service.beta.kubernetes.io/aws-load-balancer-type: "nlb" spec: externalTrafficPolicy: Local ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: LoadBalancer </code></pre>
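<p>If you are installing Nginx through the <code>stable/nginx-ingress</code> chart mentioned in the question, the same annotation can be applied to the controller's Service via the chart values (a sketch; check your chart version's <code>values.yaml</code> for the exact keys):</p> <pre><code>controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
</code></pre>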
<p>When I try to run my elasticsearch container through kubernetes deployments, my elasticsearch pod fails after some time, While it runs perfectly fine when directly run as docker container using docker-compose or Dockerfile. This is what I get as a result of <code>kubectl get pods</code></p> <pre><code>NAME READY STATUS RESTARTS AGE es-764bd45bb6-w4ckn 0/1 Error 4 3m </code></pre> <p>below is the result of <code>kubectl describe pod</code></p> <pre><code>Name: es-764bd45bb6-w4ckn Namespace: default Node: administrator-thinkpad-l480/&lt;node_ip&gt; Start Time: Thu, 30 Aug 2018 16:38:08 +0530 Labels: io.kompose.service=es pod-template-hash=3206801662 Annotations: &lt;none&gt; Status: Running IP: 10.32.0.8 Controlled By: ReplicaSet/es-764bd45bb6 Containers: es: Container ID: docker://9be2f7d6eb5d7793908852423716152b8cefa22ee2bb06fbbe69faee6f6aa3c3 Image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4 Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:9ae20c753f18e27d1dd167b8675ba95de20b1f1ae5999aae5077fa2daf38919e Port: 9200/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 78 Started: Thu, 30 Aug 2018 16:42:56 +0530 Finished: Thu, 30 Aug 2018 16:43:07 +0530 Ready: False Restart Count: 5 Environment: ELASTICSEARCH_ADVERTISED_HOST_NAME: es ES_JAVA_OPTS: -Xms2g -Xmx2g ES_HEAP_SIZE: 2GB Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-nhb9z: Type: Secret (a volume populated by a Secret) SecretName: default-token-nhb9z Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m default-scheduler Successfully assigned default/es-764bd45bb6-w4ckn to administrator-thinkpad-l480 Normal Pulled 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Container image &quot;docker.elastic.co/elasticsearch/elasticsearch:6.2.4&quot; already present on machine Normal Created 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Created container Normal Started 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Started container Warning BackOff 1m (x15 over 5m) kubelet, administrator-thinkpad-l480 Back-off restarting failed container </code></pre> <p>Here is my elasticsearc-deployment.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.1.0 (36652f6) creationTimestamp: null labels: io.kompose.service: es name: es spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: io.kompose.service: es spec: containers: - env: - name: ELASTICSEARCH_ADVERTISED_HOST_NAME value: es - name: ES_JAVA_OPTS value: -Xms2g -Xmx2g - name: ES_HEAP_SIZE value: 2GB image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4 name: es ports: - containerPort: 9200 resources: {} restartPolicy: Always status: {} </code></pre> <p>When i try to get logs using <code>kubectl logs -f es-764bd45bb6-w4ckn</code>, I get</p> <pre><code>Error from server: Get https://&lt;slave node ip&gt;:10250/containerLogs/default/es-764bd45bb6-w4ckn/es?previous=true: dial tcp &lt;slave node ip&gt;:10250: i/o timeout </code></pre> <p>What could be the reason and solution for this problem 
?</p>
<p>I had the same problem; there can be a couple of reasons for this issue. In my case the jar file was missing. @Lakshya has already answered this problem, I would like to add the steps that you can take to troubleshoot it. </p> <ol> <li>Get the pod status, Command - <strong>kubectl get pods</strong> </li> <li>Describe the pod to have a further look - <strong>kubectl describe pod "pod-name"</strong> The last few lines of output give you the events and where your deployment failed </li> <li>Get logs for more details - <strong>kubectl logs "pod-name"</strong></li> <li>Get container logs - <strong>kubectl logs "pod-name" -c "container-name"</strong> Get the container name from the output of the describe pod command </li> </ol> <p>If your container is up, you can use the <strong>kubectl exec -it</strong> command to further analyse the container </p> <p>Hope it helps community members in future issues. </p>
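<p>One more command that helps with <code>CrashLoopBackOff</code> specifically: since the container keeps restarting, the logs of the previous (crashed) instance are often the useful ones.</p> <pre><code>kubectl logs "pod-name" --previous
</code></pre>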
<p>How can I mount a 'single' file from a secret?</p> <p>I've created a secret with:</p> <pre><code>kubectl create secret generic oauth \ --from-file=./.work-in-progress/oauth_private.key \ --from-file=./.work-in-progress/oauth_public.key \ </code></pre> <p>How can I mount the <code>oauth_private.key</code> file as a single file, rather than overriding the entire path with a directory that ONLY contains the two files (and potentially removing files that existed on the container initially)?</p>
<p>You can do as below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: "/etc/foo" readOnly: true volumes: - name: foo secret: secretName: mysecret items: - key: username path: my-group/my-username </code></pre> <p>Suppose <code>mysecret</code> contains <code>username</code> and <code>password</code>. The above yaml will mount only <code>username</code>, as the file <code>/etc/foo/my-group/my-username</code>.</p> <p>For more details check this: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="noreferrer">Using Secrets as Files from a Pod</a></p>
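<p>If the goal is to drop a single key into a directory that already contains other files (rather than shadowing the whole directory), a <code>subPath</code> mount is another option. A fragment of the container spec, reusing the <code>oauth</code> secret from the question; the target path is a placeholder:</p> <pre><code>    volumeMounts:
    - name: oauth
      mountPath: /some/existing/dir/oauth_private.key   # only this file is shadowed
      subPath: oauth_private.key
  volumes:
  - name: oauth
    secret:
      secretName: oauth
</code></pre>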
<p>How do I download the latest chart of a package which already has multiple versions?</p> <p>I tried adding the incubator repo using <code>helm repo add &lt;repo-name&gt; &lt;repo-url&gt;</code> and then did a <code>helm repo update</code>. When I tried to download/install the latest chart using the below command:</p> <pre><code>helm install helm-name repo/chart-name </code></pre> <p>it throws the error below:</p> <pre><code>Error: [debug] Created tunnel using local port: '37220' [debug] SERVER: "127.0.0.1:37220" [debug] Original chart version: "" Error: chart "chart-name" matching not found in repo index. (try 'helm repo update'). No chart version found for chart-name- </code></pre> <p>Any ideas on how to download the latest chart instead of specifying the chart version every time? Or does this download the latest charts only if semver is used for versioning the charts?</p>
<p>It means that the chart you want to install doesn't exist in the repository. Try listing the charts that actually exist: <code>helm search &lt;repo-name&gt;/</code> lists the charts in that repository (<code>helm repo list</code> only shows the configured repositories, not their charts).</p> <p>I've just tried <code>helm install incubator/vdfgdfgdfgfdg --dry-run --debug</code> to simulate the install of some non-existing chart and got the same error:</p> <pre><code>helm install incubator/vdfgdfgdfgfdg --dry-run --debug [debug] Created tunnel using local port: '45830' [debug] SERVER: "127.0.0.1:45830" [debug] Original chart version: "" Error: chart "vdfgdfgdfgfdg" matching not found in incubator index. (try 'helm repo update'). no chart name found </code></pre>
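<p>To see what is available and which version would be picked as the latest, a hedged follow-up (Helm 2 syntax; the names are placeholders):</p> <pre><code># list the charts published in the repo
helm search repo-name/

# if your client supports it, list every version of a single chart
helm search repo-name/chart-name --versions
</code></pre> <p>When the chart exists, <code>helm install repo-name/chart-name</code> without <code>--version</code> installs the newest version in the repo index, which does assume the charts are versioned with semver.</p>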
<p>I am trying to deploy a microservices architecture on a kubernetes cluster. Does anyone know how to create an ingress for AWS?</p>
<p>I recommend you use the ALB Ingress Controller <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-alb-ingress-controller</a>, as it is recommended by AWS and creates Application Load Balancers for each Ingress.</p> <p>Alternatively, know that you can use any kind of Ingress, such as Nginx, in AWS. You will create the Nginx Service of type LoadBalancer, so that all requests to that address are redirected to Nginx. Nginx itself will take care to redirect the requests to the correct service inside Kubernetes.</p>
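<p>With the ALB ingress controller, the Ingress resources mainly need the right annotations. A minimal hedged sketch (the service name, port and path are placeholders and depend on your setup):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>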
<p>I am new to the kubernetes world. Can someone explain (or point me to resources on):<br/> what is Kubernetes CNI? <br/> why is it used? <br/> what are its use cases? <br/> what are the best CNI plugins? <br/></p>
<p>You can go through the following blog post to understand what CNI is and why it is used:</p> <p><a href="https://thenewstack.io/kubernetes-and-cni-whats-next-making-it-easier-to-write-networking-plugins/" rel="nofollow noreferrer">https://thenewstack.io/kubernetes-and-cni-whats-next-making-it-easier-to-write-networking-plugins/</a></p> <p>The following link has some good information about the different types of CNI plugins available and when to use which:</p> <p><a href="https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/" rel="nofollow noreferrer">https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/</a></p> <p>Hope this helps.</p>
<p>I have a web application tar file and have created a docker image for it. I will be using a private docker registry (due to security reasons). I have written Helm charts to use the image in Kubernetes (kept in a private Helm repo). So if anyone wants to install the app using the docker image on the EKS feature of AWS, what would be the best way to package my app and give it to them? </p> <p>The basic requirement is that it shouldn't be available to everyone for installation; only the ones approved by me can install it. </p> <p>Thanks in advance.</p>
<p>You can push it to their private container registry. If they are using AWS you can use <a href="https://aws.amazon.com/ecr/" rel="nofollow noreferrer">ECR</a>. You can find more information on how to push the image <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html" rel="nofollow noreferrer">here</a></p> <p>Basically, they would need to create an <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html" rel="nofollow noreferrer">IAM user/role</a> for you to be able to push to their AWS account.</p>
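<p>A rough sketch of the push itself (AWS CLI v1 syntax of that era; the account id, region and repository name are placeholders, and the ECR repository must already exist):</p> <pre><code>$(aws ecr get-login --no-include-email --region us-east-1)
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
</code></pre>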
<p>I have kubeadm and Kubernetes v1.12 without AWS or Google Cloud.</p> <p>I want to know if the installed Kubernetes cluster already has an ingress controller and, if it has two, which one is the default.</p> <p>Thanks :)</p>
<p>You can check for pods implementing ingress controllers (actually with ingress in the name) with:</p> <p><code>kubectl get pods --all-namespaces | grep ingress</code></p> <p>And services exposing them with:</p> <p><code>kubectl get service --all-namespaces | grep ingress</code></p> <p>As @<a href="https://stackoverflow.com/users/6843187/prafull-ladha">Prafull Ladha</a> says, you won't have an ingress controller by default. The <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="noreferrer">documentation states</a> that in "environments other than GCE/Google Kubernetes Engine, you need to deploy a controller as a pod".</p>
<p>On my macOS (not using Minikube), I have modeled my Kubernetes cluster after <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/storage/redis" rel="noreferrer">this</a> example, which means I have executed this verbatim and in this order:</p> <pre><code># Adding my own service to redix-proxy kubectl create -f ./redis/redis-service.yaml # Create a bootstrap master kubectl create -f examples/storage/redis/redis-master.yaml # Create a service to track the sentinels kubectl create -f examples/storage/redis/redis-sentinel-service.yaml # Create a replication controller for redis servers kubectl create -f examples/storage/redis/redis-controller.yaml # Create a replication controller for redis sentinels kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml # Scale both replication controllers kubectl scale rc redis --replicas=3 kubectl scale rc redis-sentinel --replicas=3 # Adding my own NodeJS web client server kubectl create -f web-deployment.yaml </code></pre> <p>The only difference is in <code>redis-proxy.yaml</code> I used the image <code>image: kubernetes/redis-proxy</code> instead of <code>image: kubernetes/redis-proxy:v2</code> because I wasn't able to pull the latter.</p> <p>These are the objects I pass to <a href="https://github.com/luin/ioredis" rel="noreferrer">ioredis</a> to create my Redis instances (one for sessions and one as the main one):</p> <p><strong>config.js</strong></p> <pre><code>main: { host: 'redis', port: 6379, db: 5 }, session: { host: 'redis', port: 6379, db: 6 } </code></pre> <hr /> <h2>Error logs:</h2> <p>In my web client <code>web-3448218364-sf1q0</code> pod, I get this repeated in the logs:</p> <pre><code>INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' } INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' } INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' } WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' } WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' } WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' } WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' } INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' } INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event </code></pre> <p>In my Redis <code>redis-proxy</code> pod, I get this repeated in the logs:</p> <pre><code>Error connecting to read: dial tcp :0: connection refused </code></pre> <hr /> <p><strong>Cluster info:</strong></p> <pre><code>$ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.91.240.1 &lt;none&gt; 
443/TCP 2d redis 10.91.251.170 &lt;none&gt; 6379/TCP 31m redis-sentinel 10.91.250.118 &lt;none&gt; 26379/TCP 31m web 10.91.240.16 &lt;none&gt; 80/TCP 31m $ kubectl get po NAME READY STATUS RESTARTS AGE redis-2frd0 1/1 Running 0 34m redis-master 2/2 Running 0 34m redis-n4x6f 1/1 Running 0 34m redis-proxy 1/1 Running 0 34m redis-sentinel-k8tbl 1/1 Running 0 34m redis-sentinel-kzd66 1/1 Running 0 34m redis-sentinel-wlzsb 1/1 Running 0 34m web-3448218364-sf1q0 1/1 Running 0 34m $ kubectl get deploy NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE web 1 1 1 1 39m </code></pre> <p><strong>Question 1</strong>) Now, I need to actually connect my application to a Redis pod. I should be connecting to the <code>redis-proxy</code> pod right? So, I created this <code>redis-service.yaml</code> service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: redis spec: ports: - port: 6379 targetPort: 6379 selector: name: redis-proxy role: proxy </code></pre> <p>I believe I have connected to <code>redis</code> at port 6379 since I usually will get another error message if this is so. Going into the bash shell of my web container <code>web-3448218364-sf1q0</code>, I see the <code>printenv</code> variables of <code>REDIS_SERVICE_PORT=6379</code> and <code>REDIS_SERVICE_HOST=10.91.251.170</code>.</p> <p><strong>Question 2</strong>) From my error logs, what does it mean by <code>dial tcp :0:</code>? From my interactive Kubernetes console under Services and in the Internal Endpoints column, I see this for the <code>redis</code> service:</p> <pre><code>redis:6379 TCP redis:0 TCP </code></pre> <p>Is this <code>0 TCP</code> related to that? All of my services have 0 TCP listed in the console, but as you can see, not from the CLI in <code>kubectl get svc</code>.</p>
<p>Always the first thing to check when a kubernetes service does not behave as expected is to check the endpoints of the corresponding service. In your case <code>kubectl get ep redis</code>.</p> <p>If my assumption is correct it should show you something like this</p> <pre><code>NAME ENDPOINTS AGE redis &lt;none&gt; 42d </code></pre> <p>This means that your service does not select/match any pods.</p> <p>In your service spec there is the key <code>selector:</code> this selector has to match the labels of the actual deployment you have. You are selecting for all pods with the labels <code>name: redis-proxy</code> and <code>role: proxy</code> which are potentially not matching any pod.</p> <p>You can run <code>kubectl get pod --show-labels=true</code> to show the labels on the pods and change your service accordingly.</p> <p>I don't know what the port 0 means in this context. Sometimes it is used to do only DNS resolution with the service.</p>
<p>I am busy writing a model to predict types of text like names or dates on a pdf document.</p> <p>The model uses nltk.word_tokenize and nltk.pos_tag</p> <p>When I try to use this on Kubernetes on Google Cloud Platform I get the following error:</p> <pre><code> from nltk.tag import pos_tag from nltk.tokenize import word_tokenize tokenized_word = tokenize_word('x') tagges_word = pos_tag(['x']) </code></pre> <p>stacktrace:</p> <pre><code> Resource punkt not found. Please use the NLTK Downloader to obtain the resource: &gt;&gt;&gt; import nltk &gt;&gt;&gt; nltk.download('punkt') Searched in: - '/root/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' - '/env/nltk_data' - '/env/share/nltk_data' - '/env/lib/nltk_data' - '' </code></pre> <p>But obviously downloading it to your local device will not solve the problem if it has to run on Kubernetes and we do not have NFS set up on the project yet.</p>
<p>How I ended up solving this problem was to add the download of the nltk packages in an <strong>init</strong> function</p> <pre><code>import logging import nltk from nltk import word_tokenize, pos_tag LOGGER = logging.getLogger(__name__) LOGGER.info('Catching broad nltk errors') DOWNLOAD_DIR = '/usr/lib/nltk_data' LOGGER.info(f'Saving files to {DOWNLOAD_DIR} ') try: tokenized = word_tokenize('x') LOGGER.info(f'Tokenized word: {tokenized}') except Exception as err: LOGGER.info(f'NLTK dependencies not downloaded: {err}') try: nltk.download('punkt', download_dir=DOWNLOAD_DIR) except Exception as e: LOGGER.info(f'Error occurred while downloading file: {e}') try: tagged_word = pos_tag(['x']) LOGGER.info(f'Tagged word: {tagged_word}') except Exception as err: LOGGER.info(f'NLTK dependencies not downloaded: {err}') try: nltk.download('averaged_perceptron_tagger', download_dir=DOWNLOAD_DIR) except Exception as e: LOGGER.info(f'Error occurred while downloading file: {e}') </code></pre> <p>I realize that this many try/except blocks are not needed. I also specify the download dir because it seemed that if you do not do that, it downloads and unzips 'tagger' to /usr/lib and nltk does not look for the files there.</p> <p>This will download the files on every first run on a new pod and the files will persist until the pod dies.</p> <p>The error was solved on a Kubernetes stateless set, which means this can deal with non-persistent applications like App Engine, but it will not be the most efficient because the data will need to be downloaded every time the instance spins up.</p>
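<p>A possible refinement (a hedged sketch rather than what I actually ran): baking the corpora into the Docker image at build time avoids the download on every new pod entirely, e.g. in the Dockerfile:</p> <pre><code>RUN pip install nltk &amp;&amp; \
    python -m nltk.downloader -d /usr/lib/nltk_data punkt averaged_perceptron_tagger
</code></pre>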
<p>I've created a secret from a file using a command like:</p> <pre><code>kubectl create secret generic laravel-oauth \ --from-file=./.work-in-progress/oauth_private.key \ --from-file=./.work-in-progress/oauth_public.key </code></pre> <p>However it seems new lines are stripped from the files (when using the secrets as ENV variables).</p> <p>There is a 'encoding' note in the docs that state:</p> <blockquote> <p>The serialized JSON and YAML values of secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the base64 utility on Darwin/macOS users should avoid using the -b option to split long lines. Conversely Linux users should add the option -w 0 to base64 commands or the pipeline base64 | tr -d '\n' if -w option is not available.</p> </blockquote> <p>However I assumed this only applies for 'manually' created secrets via YAML files.</p>
<p>The new lines are not stripped; the files are just <a href="https://en.wikipedia.org/wiki/Base64" rel="nofollow noreferrer">base64</a>-encoded, as mentioned in the other answers too. For example:</p> <pre><code># mycert.pem -----BEGIN CERTIFICATE----- xxxxxx xxxxxx ... -----END CERTIFICATE----- </code></pre> <p>Then:</p> <pre><code>$ kubectl create secret generic mysecret --from-file=./cert.pem </code></pre> <p>Then:</p> <pre><code>$ kubectl get secret mysecret -o=yaml apiVersion: v1 data: cert.pem: &lt;base64 encoded string&gt; kind: Secret metadata: creationTimestamp: 2018-11-14T18:11:46Z name: mysecret namespace: default resourceVersion: "20180431" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: xxxxxx type: Opaque </code></pre> <p>Then if you decode it, you will get the original secret.</p> <pre><code>$ echo '&lt;base64 encoded string&gt;' | base64 -D -----BEGIN CERTIFICATE----- xxxxxx xxxxxx ... -----END CERTIFICATE----- </code></pre> <p>Also, this is not necessarily secure at rest. If you are looking for more security you can use something like <a href="https://www.vaultproject.io/" rel="nofollow noreferrer">Hashicorp Vault</a> or, as alluded to by @Alex, <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">Bitnami's sealed secrets</a>.</p>
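<p>A quick way to check that the newlines survive is to decode the stored value directly from the API object; note the key's dot has to be escaped in the jsonpath expression (use <code>base64 --decode</code> on Linux instead of <code>-D</code>):</p> <pre><code>kubectl get secret mysecret -o jsonpath='{.data.cert\.pem}' | base64 -D
</code></pre> <p>The decoded output should match the original multi-line file byte for byte.</p>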
<p>After running <code>helm list</code> I got the following error:</p> <blockquote> <p>Error: incompatible versions client[v2.9.0] server[v2.8.2]</p> </blockquote> <p>I did a <code>helm init</code> to install the compatible tiller version, but got: "Warning: Tiller is already installed in the cluster. (Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)". </p> <p>Any pointers?</p>
<p>Like the OP, I had this error:</p> <pre><code>$ helm list Error: incompatible versions client[v2.10.0] server[v2.9.1] </code></pre> <p>Updating the server wasn't an option for me so I needed to brew install a previous version of the client. I hadn't previously installed client[v2.9.1] (or any previous client version) and thus couldn't just <code>brew switch kubernetes-helm 2.9.1</code>. I ended up having to follow the steps in this SO answer: <a href="https://stackoverflow.com/a/17757092/2356383">https://stackoverflow.com/a/17757092/2356383</a></p> <p>Which basically says</p> <ul> <li>Look on Github for the correct kubernetes-helm.rb file for the version you want (2.9.1 in my case): <a href="https://github.com/Homebrew/homebrew-core/search?q=kubernetes-helm&amp;type=Commits" rel="noreferrer">https://github.com/Homebrew/homebrew-core/search?q=kubernetes-helm&amp;type=Commits</a></li> <li>Click the commit hash (78d6425 in my case)</li> <li>Click the "View" button to see the whole file</li> <li>Click the "Raw" button</li> <li>And copy the url: <a href="https://raw.githubusercontent.com/Homebrew/homebrew-core/78d64252f30a12b6f4b3ce29686ab5e262eea812/Formula/kubernetes-helm.rb" rel="noreferrer">https://raw.githubusercontent.com/Homebrew/homebrew-core/78d64252f30a12b6f4b3ce29686ab5e262eea812/Formula/kubernetes-helm.rb</a></li> </ul> <p>Now that I had the url for the correct kubernetes-helm.rb file, I ran the following:</p> <pre><code>$ brew unlink kubernetes-helm $ brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/78d64252f30a12b6f4b3ce29686ab5e262eea812/Formula/kubernetes-helm.rb $ brew switch kubernetes-helm 2.9.1 </code></pre> <p>Hope this helps someone.</p>
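<p>If juggling brew formulas is not an option, another route (an assumption based on how Helm v2 clients were distributed; verify the URL and platform suffix for your setup) is to download the matching client binary directly from the release archive:</p> <pre><code>curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-darwin-amd64.tar.gz
tar -xzf helm-v2.9.1-darwin-amd64.tar.gz
./darwin-amd64/helm version
</code></pre>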
<p>I have a 3 Node cluster running on GCP in Kubernetes. I am able to forward the port and connect with my DB Tool to the cluster:</p> <pre><code>$ kubectl port-forward elassandra-0 9042 </code></pre> <p>When I try to connect to the cassandra cluster from my Spring Boot application, I get the following error:</p> <pre><code>2018-11-14 17:43:36,339 INFO [5914] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /10.4.3.3:9042 added 2018-11-14 17:43:36,339 INFO [5914] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /10.4.2.4:9042 added 2018-11-14 17:43:36,340 INFO [5915] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /127.0.0.1:9042 added 2018-11-14 17:43:41,391 WARN [10966] [cluster1-nio-worker-2] c.d.d.c.HostConnectionPool [HostConnectionPool.java:184] Error creating connection to /10.4.2.4:9042 com.datastax.driver.core.exceptions.TransportException: [/10.4.2.4:9042] Cannot connect at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:167) at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:150) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121) </code></pre> <p>I am trying to connect to <code>127.0.0.1:9042</code>, the other hosts are being pulled from the cluster by the spring-data framework.</p> <p>What am I doing wrong here?</p> <p>Thanks.</p>
<p>Answering my own question:</p> <p>You need to specify custom address resolution on the driver side as described here:</p> <p><a href="https://docs.datastax.com/en/developer/java-driver/2.1/manual/address_resolution/" rel="nofollow noreferrer">https://docs.datastax.com/en/developer/java-driver/2.1/manual/address_resolution/</a></p> <p>In detail, when you want to use only one node:</p> <pre><code>import java.net.InetSocketAddress;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.AddressTranslator;

// Maps every node address the cluster reports back to the locally forwarded port
public class MyAddressTranslator implements AddressTranslator {

    @Override
    public void init(Cluster cluster) {}

    @Override
    public InetSocketAddress translate(InetSocketAddress address) {
        return new InetSocketAddress("127.0.0.1", 9042);
    }

    @Override
    public void close() {}
}
</code></pre>
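<p>For completeness, a sketch of wiring the translator in when building the <code>Cluster</code> directly with the 3.x Java driver (if you build the cluster through spring-data-cassandra configuration instead, the translator has to be registered there):</p> <pre><code>Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")                    // the port-forwarded address
    .withPort(9042)
    .withAddressTranslator(new MyAddressTranslator())
    .build();
</code></pre>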
<p>I'm about to deploy a new K8S baremetal cluster using KubeSpray. On my agents, I have systemd-resolved running, which does not take the DNS settings from <code>/etc/resolv.conf</code> but rather takes them from <code>/etc/systemd/resolved.conf</code>.</p> <p>So which is the best DNS setting to use? CoreDNS? KubeDNS? I just want to make sure that the pods I deploy use the same DNS servers as configured on my agent nodes.</p> <p>What should be my selection for</p> <pre><code># Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
dns_mode: kubedns

# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x

# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
</code></pre> <p>?</p>
<p>As per <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction" rel="nofollow noreferrer">official documentation</a>:</p> <p>As of Kubernetes v1.12, <strong>CoreDNS is the recommended DNS Server, replacing kube-dns</strong>. However, kube-dns may still be installed by default with certain Kubernetes installer tools. Refer to the documentation provided by your installer to know which DNS server is installed by default.</p> <p>The CoreDNS Deployment is exposed as a Kubernetes Service with a static IP. Both the CoreDNS and kube-dns Service are named <code>kube-dns</code> in the <code>metadata.name</code> field. This is done so that there is greater interoperability with workloads that relied on the legacy <code>kube-dns</code> Service name to resolve addresses internal to the cluster. It abstracts away the implementation detail of which DNS provider is running behind that common endpoint.</p> <p>If a Pod’s <code>dnsPolicy</code> is set to “<code>default</code>”, it inherits the name resolution configuration from the node that the Pod runs on. The Pod’s DNS resolution should behave the same as the node. But see <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="nofollow noreferrer">Known issues</a>.</p> <p>If you don’t want this, or if you want a different DNS config for pods, you can use the kubelet’s <code>--resolv-conf</code> flag. Set this flag to “” to prevent Pods from inheriting DNS. Set it to a valid file path to specify a file other than <code>/etc/resolv.conf</code> for DNS inheritance.</p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="nofollow noreferrer">Known Issue</a>:</p> <p>Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces <code>/etc/resolv.conf</code> with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s <code>--resolv-conf</code> flag to point to the correct <code>resolv.conf</code>(With <code>systemd-resolved</code>, this is <code>/run/systemd/resolve/resolv.conf</code>). kubeadm 1.11 automatically detects <code>systemd-resolved</code>, and adjusts the kubelet flags accordingly.</p> <p>Kubernetes installs do not configure the nodes’ <code>resolv.conf</code> files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.</p> <p>Linux’s libc is impossibly stuck (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=168253" rel="nofollow noreferrer">see this bug from 2005</a>) with limits of just 3 DNS <code>nameserver</code> records and 6 DNS <code>search</code> records. Kubernetes needs to consume 1 <code>nameserver</code>record and 3 <code>search</code> records. This means that if a local installation already uses 3 <code>nameserver</code>s or uses more than 3 <code>search</code>es, some of those settings will be lost. As a partial workaround, the node can run <code>dnsmasq</code> which will provide more <code>nameserver</code> entries, but not more <code>search</code> entries. You can also use kubelet’s <code>--resolv-conf</code> flag.</p> <p>If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. 
Check <a href="https://github.com/kubernetes/kubernetes/issues/30215" rel="nofollow noreferrer">here</a> for more information.</p>
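<p>So in your KubeSpray group vars I would lean towards CoreDNS; as a sketch (these are the same variables from your snippet, and <code>docker_dns</code> is just one reasonable choice for nodes running systemd-resolved):</p> <pre><code># group_vars/k8s-cluster/k8s-cluster.yml (exact path varies by KubeSpray version)
dns_mode: coredns            # recommended DNS server as of Kubernetes v1.12
resolvconf_mode: docker_dns  # containers get the cluster DNS via the docker daemon
</code></pre>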
<p>My task is to add a label named "app" to all <code>deployments</code>, <code>daemonsets</code>, and <code>cronjobs</code> so that it's easier to query our apps across the stack in our monitoring tools. This way, we can build dashboards that use a single selector, namely app.</p> <p>To avoid downtime I've decided to resolve this issue in the following steps:</p> <ol> <li>Add labels to dev, test &amp; stage environments.</li> <li>Add labels to prod env's.</li> <li>Deploy (1)</li> <li>Deploy (2)</li> <li>Delete old labels &amp; update the services of dev to use the new labels. Then test &amp; deploy. (<strong>currently on this step</strong>)</li> <li>Repeat (5) for stage.</li> <li>Repeat (5) for prod.</li> </ol> <p>When using <code>$ kubectl apply</code> to update the resources where I've added the "app" label (or replaced the "service" label with "app"), I run into the following error:</p> <blockquote> <p>Error from server (Invalid): error when applying patch: {<em>longAssPatchWhichIWon'tIncludeButYaGetThePoint</em>} to: &amp;{0xc421b02f00 0xc420803650 default provisioning manifests/prod/provisioning-deployment.yaml 0xc42000c6f8 3942200 false} for: "manifests/prod/provisioning-deployment.yaml": Deployment.apps "provisioning" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"provisioning", "component":"marketplace"}: <code>selector</code> does not match template <code>labels</code></p> </blockquote> <p>I need some insights on why it's throwing this error.</p>
<p>It seems you are in trouble. Check this section: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates" rel="noreferrer">Label selector updates</a></p> <blockquote> <p>Note: In API version <code>apps/v1</code>, a Deployment’s label selector is immutable after it gets created.</p> </blockquote> <p>So, this line says you cannot update the <code>selector</code> once the deployment is created. The selector cannot be changed for any API version except <code>apps/v1beta1</code> and <code>extensions/v1beta1</code>. Ref: <a href="https://github.com/kubernetes/kubernetes/blob/dad6741530b0c69587a71a6c08544b4c9412fa01/test/integration/deployment/deployment_test.go#L226" rel="noreferrer">TestDeploymentSelectorImmutability</a>.</p> <p>One possible workaround might be to keep the old labels and add the new labels alongside them. This way, you don't have to update the <code>selector</code>. The Deployment will select pods using the old labels, but your dashboard can select using the new labels. This might not meet your requirement, but I don't see any better way.</p>
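<p>To illustrate the workaround, a minimal sketch using the names from your error message (the image is a placeholder): the selector keeps matching the existing <code>service</code> label, and the pod template simply gains the new <code>app</code> label, which is allowed because template labels only need to be a superset of the selector.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: provisioning
spec:
  selector:
    matchLabels:
      service: provisioning      # unchanged - immutable in apps/v1
  template:
    metadata:
      labels:
        service: provisioning    # keep everything the selector matches
        app: provisioning        # new label for dashboards
        component: marketplace
    spec:
      containers:
      - name: provisioning
        image: example/provisioning:latest   # placeholder image
</code></pre>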
<p>We have a K8s cluster on Azure (AKS). On this cluster, we added a loadbalancer to the setup, which installed an nginx-ingress controller.</p> <p>Looking at the deployments:</p> <pre><code>addon-http-application-routing-default-http-backend        1
addon-http-application-routing-external-dns                1
addon-http-application-routing-nginx-ingress-controller    1
</code></pre> <p>I see there is 1 of each running. Now I find very little information on whether these should be scaled (there is 1 pod of each) and, if they should, how?</p> <p>I've tried running</p> <pre><code>kubectl scale deployment addon-http-application-routing-nginx-ingress-controller --replicas=3
</code></pre> <p>which temporarily scales it to 3 pods, but after a few moments, it is downscaled again.</p> <p>So again, are these supposed to be scaled? Why? How?</p> <p><strong>EDIT</strong></p> <p>For those that missed it like I did: the AKS addon-http-application-routing is <strong>not</strong> ready for production; it is there to quickly set you up and start experimenting, which is why I wasn't able to scale it properly. </p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing" rel="nofollow noreferrer">Read more</a></p>
<p>That's generally how you do it:</p> <pre><code>$ kubectl scale deployment addon-http-application-routing-nginx-ingress-controller --replicas=3
</code></pre> <p>However, I suspect you have an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a> configured which will scale up/down depending on the load or some metrics and has the <code>minReplicas</code> spec set to <code>1</code>. You can check with:</p> <pre><code>$ kubectl get hpa
$ kubectl describe hpa &lt;hpa-name&gt;
</code></pre> <p>If that's the case, you can scale up by just patching the HPA:</p> <pre><code>$ kubectl patch hpa &lt;hpa-name&gt; -p '{"spec": {"minReplicas": 3}}'
</code></pre> <p>or edit it manually:</p> <pre><code>$ kubectl edit hpa &lt;hpa-name&gt;
</code></pre> <p>More information on HPAs <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">here</a>.</p> <p>And yes, the ingress controllers are supposed to be scaled up and down depending on the load.</p>
<p>I am aware that we can create node groups with labels via kubelet-extra-args:</p> <pre><code>--kubelet-extra-args --node-labels=foo=bar </code></pre> <p>This syntax was a bit of a surprise to me so I'm not exactly sure how to add multiple labels.</p>
<p>Found it! I should have guessed, but this passes things off to kubelet, which takes comma-separated key-value pairs joined with '=':</p> <pre><code>--kubelet-extra-args --node-labels=alabel=foo,another=bar
</code></pre>
<p>I want to expose some Helm Charts through Istio ingress.</p> <p>For example, today I can expose Kubernetes Dashboard via <code>Ingress</code> type (with NginX Ingress): <code> helm install stable/kubernetes-dashboard --set ingress.enabled=true </code></p> <p>However, for Istio <strong>would I have to fork</strong> the Kubernetes Dashboard Helm chart to add the required <code>Gateway</code> and <code>VirtualService</code> yaml?</p> <p>Or is there a better way to patch opensource charts to work with Istio ingress?</p>
<p>You could create your own chart that includes the <code>stable/kubernetes-dashboard</code> as dependency in the <code>requirements.yaml</code>. Then you effectively have a wrapper chart that includes the dashboard and you can include the Istio ingress configuration at the wrapper level. </p>
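<p>As a sketch, the wrapper chart's <code>requirements.yaml</code> (Helm 2) could look roughly like this; the version and repository URL are illustrative, so check the chart's current values:</p> <pre><code># mychart/requirements.yaml
dependencies:
  - name: kubernetes-dashboard
    version: "0.10.0"
    repository: "https://kubernetes-charts.storage.googleapis.com"
</code></pre> <p>Then put your Istio <code>Gateway</code> and <code>VirtualService</code> templates under <code>mychart/templates/</code>, run <code>helm dependency update mychart</code>, and install the wrapper chart instead of the upstream one.</p>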
<p>I have a 3 Node cluster running on GCP in Kubernetes. I am able to forward the port and connect with my DB Tool to the cluster:</p> <pre><code>$ kubectl port-forward elassandra-0 9042 </code></pre> <p>When I try to connect to the cassandra cluster from my Spring Boot application, I get the following error:</p> <pre><code>2018-11-14 17:43:36,339 INFO [5914] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /10.4.3.3:9042 added 2018-11-14 17:43:36,339 INFO [5914] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /10.4.2.4:9042 added 2018-11-14 17:43:36,340 INFO [5915] [localhost-startStop-1] c.d.d.c.Cluster [Cluster.java:1587] New Cassandra host /127.0.0.1:9042 added 2018-11-14 17:43:41,391 WARN [10966] [cluster1-nio-worker-2] c.d.d.c.HostConnectionPool [HostConnectionPool.java:184] Error creating connection to /10.4.2.4:9042 com.datastax.driver.core.exceptions.TransportException: [/10.4.2.4:9042] Cannot connect at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:167) at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:150) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511) at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424) at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:121) </code></pre> <p>I am trying to connect to <code>127.0.0.1:9042</code>, the other hosts are being pulled from the cluster by the spring-data framework.</p> <p>What am I doing wrong here?</p> <p>Thanks.</p>
<p>To add more details: since you have automatic address resolution enabled, your client is trying to add contact points for all the nodes in your cluster:</p> <pre><code>10.4.3.3:9042
10.4.2.4:9042
127.0.0.1:9042
</code></pre> <p>However, the top 2 nodes cannot be reached from your localhost because they are not being proxied to. If you added them as proxies it wouldn't work either, because you can't proxy on the same port from your laptop. The solution, as @AlexTbk mentioned, is to use a single contact point while specifying address resolution on the client, with the single <code>127.0.0.1:9042</code> contact point.</p>
<p>I am experimenting with Spark 2.3 on a K8s cluster. Wondering how checkpointing works? Where is it stored? If the main driver dies, what happens to the existing processing?</p> <p>When consuming from Kafka, how is the offset maintained? I tried to look this up online but could not find any answer to those questions. Our application is consuming a lot of Kafka data, so it is essential to be able to restart and pick up from where it was stopped.</p> <p>Any gotchas on running Spark Streaming on K8s?</p>
<p><a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">The Kubernetes Spark Controller</a> doesn't know anything about checkpointing, AFAIK. It's just a way for Kubernetes to schedule your Spark driver and the Workers that it needs to run a job.</p> <p>Storing the offset is really up to your application and where you want to store the Kafka offset, so that when it restarts it picks up that offset and starts consuming from there. This <a href="https://elang2.github.io/myblog/posts/2017-09-20-Kafak-And-Zookeeper-Offsets.html" rel="nofollow noreferrer">is an example</a> on how to store it in Zookeeper.</p> <p>You could, for example, write ZK offset manager functions in Scala:</p> <pre><code>import com.metamx.common.scala.Logging import org.apache.curator.framework.CuratorFramework ... object OffsetManager extends Logging { def getOffsets(client: CuratorFramework, ... = { } def setOffsets(client: CuratorFramework, ... = { } ... </code></pre> <p>Another way would be storing your Kafka offsets in something reliable like <a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html" rel="nofollow noreferrer">HDFS</a>.</p>
<p>I have <code>kubernetes</code> cluster running on 4 <code>Raspberry-pi</code> devices, out of which 1 is acting as <code>master</code> and other 3 are working as <code>worker</code> i.e <code>w1</code>, <code>w2</code>, <code>w3</code>. I have started a daemon set deployment, so each worker is running a pod of 2 containers.</p> <p><code>w2</code> is running pod of 2 container. If I <code>exec</code> into any container and ping <code>www.google.com</code> from the container, I get the response. But if I do the same on <code>w1</code> and <code>w3</code> it says <code>temporary failure in name resolution</code>. All the pods in kube-system are running. I am using <code>weave</code> for networking. Below are all the pods for kube-system</p> <pre><code>NAME READY STATUS RESTARTS AGE etcd-master-pi 1/1 Running 1 23h kube-apiserver-master-pi 1/1 Running 1 23h kube-controller-manager-master-pi 1/1 Running 1 23h kube-dns-7b6ff86f69-97vtl 3/3 Running 3 23h kube-proxy-2tmgw 1/1 Running 0 14m kube-proxy-9xfx9 1/1 Running 2 22h kube-proxy-nfgwg 1/1 Running 1 23h kube-proxy-xbdxl 1/1 Running 3 23h kube-scheduler-master-pi 1/1 Running 1 23h weave-net-7sh5n 2/2 Running 1 14m weave-net-c7x8p 2/2 Running 3 23h weave-net-mz4c4 2/2 Running 6 22h weave-net-qtgmw 2/2 Running 10 23h </code></pre> <p>If I am starting the containers using the normal docker container command but not from the kubernetes deployment then I do not see this issue. I think this is because of <code>kube-dns</code>. How can I debug this issue.?</p>
<p>You can start by checking if the DNS is working.</p> <p>Run nslookup on kubernetes.default from inside the pod and check if it is working:</p> <pre><code>[root@metrics-master-2 /]# nslookup kubernetes.default
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1
</code></pre> <p>Check the local DNS configuration inside the pods:</p> <pre><code>[root@metrics-master-2 /]# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
</code></pre> <p>Lastly, check the kube-dns container logs while you run the ping command; they will give you the possible reasons why the name is not resolving.</p> <pre><code>kubectl logs kube-dns-86f4d74b45-7c4ng -c kubedns -n kube-system
</code></pre> <p>Hope this helps.</p>
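<p>If the image in your pod doesn't ship <code>nslookup</code>, you can run the same check from a throwaway pod; the <code>busybox:1.28</code> tag is a deliberate choice, since newer busybox images have a known nslookup quirk:</p> <pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
</code></pre>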
<p>kubeadm version 1.12.2</p> <p><code>$ sudo kubeadm init --config kubeadm_new.config --ignore-preflight-errors=all</code></p> <p>/var/log/syslog shows:</p> <pre><code>Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438374 5101 server.go:1013] Started kubelet Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438406 5101 server.go:133] Starting to listen on 0.0.0.0:10250 Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438446 5101 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438492 5101 server.go:753] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438968 5101 server.go:318] Adding debug handlers to kubelet server. Nov 15 08:44:13 khteh-T580 kubelet[5101]: F1115 08:44:13.439455 5101 server.go:145] listen tcp 0.0.0.0:10250: bind: address already in use </code></pre> <p>I have tried <code>sudo systemctl stop kubelet</code> and manually kill kubelet process but to no avail. Any advice and insights are appreciated.</p>
<p>Here is what you can do:</p> <p>Try the following command to find out which process is holding port 10250:</p> <pre><code>[root@master admin]# ss -lntp | grep 10250
LISTEN   0   128   :::10250   :::*   users:(("kubelet",pid=23373,fd=20))
</code></pre> <p>It will give you the PID and the name of that process. If it is an unwanted process holding the port, you can always kill it so the port becomes available for kubelet to use.</p> <p>After killing the process, run the above command again; it should return no output.</p> <p>Just to be on the safe side, run <code>kubeadm reset</code> and then <code>kubeadm init</code>, and it should go through.</p>
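<p>Putting those steps together, a sketch of the sequence (the PID is whatever your <code>ss</code> output shows, and the config file name is taken from your original command):</p> <pre><code>sudo ss -lntp | grep 10250          # note the pid=... in the output
sudo kill -9 &lt;pid-from-above&gt;
sudo ss -lntp | grep 10250          # should now print nothing
sudo kubeadm reset
sudo kubeadm init --config kubeadm_new.config --ignore-preflight-errors=all
</code></pre>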
<p>My real question is: if secrets are mounted as volumes in pods, can they be read if someone gains root access to the host OS?</p> <p>For example, by accessing /var/lib/docker and drilling down to the volume.</p>
<p>If someone has root access to your host with containers, he can do pretty much whatever he wants... Don't forget that pods are just a bunch of containers, which in fact are processes with pids. So for example, if I have a pod called sleeper:</p> <pre><code>kubectl get pods sleeper-546494588f-tx6pp -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE sleeper-546494588f-tx6pp 1/1 Running 1 21h 10.200.1.14 k8s-node-2 &lt;none&gt; </code></pre> <p>running on the node k8s-node-2. With root access to this node, I can check what pid this pod and its containers have (I am using containerd as container engine, but points below are very similar for docker or any other container engine): </p> <pre><code>[root@k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock pods -name sleeper-546494588f-tx6pp -q ec27f502f4edd42b85a93503ea77b6062a3504cbb7ac6d696f44e2849135c24e [root@k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock ps -p ec27f502f4edd42b85a93503ea77b6062a3504cbb7ac6d696f44e2849135c24e CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID 70ca6950de10b 8ac48589692a5 2 hours ago Running sleeper 1 ec27f502f4edd [root@k8s-node-2 /]# crictl -r unix:///var/run/containerd/containerd.sock# inspect 70ca6950de10b | grep pid | head -n 1 "pid": 24180, </code></pre> <p>And then finally with those information (pid number), I can access "/" mountpoint of this process and check its content including secrets: </p> <pre><code>[root@k8s-node-2 /]# ll /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/ total 0 lrwxrwxrwx. 1 root root 13 Nov 14 13:57 ca.crt -&gt; ..data/ca.crt lrwxrwxrwx. 1 root root 16 Nov 14 13:57 namespace -&gt; ..data/namespace lrwxrwxrwx. 1 root root 12 Nov 14 13:57 token -&gt; ..data/token [root@k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/namespace ; echo default [root@k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/token | cut -d'.' -f 1 | base64 -d ;echo {"alg":"RS256","kid":""} [root@k8s-node-2 serviceaccount]# cat /proc/24180/root/var/run/secrets/kubernetes.io/serviceaccount/token | cut -d'.' -f 2 | base64 -d 2&gt;/dev/null ;echo {"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"default","kubernetes.io/serviceaccount/secret.name":"default-token-6sbz9","kubernetes.io/serviceaccount/service-account.name":"default","kubernetes.io/serviceaccount/service-account.uid":"42e7f596-e74e-11e8-af81-525400e6d25d","sub":"system:serviceaccount:default:default"} </code></pre> <p>It is one of the reasons why it is super important to properly secure access to your kubernetes infrastructure.</p>
<p>I've got a K8s instance (ACS) v1.8.1 deployed on Azure using their v1.7.7 orchestrator (NOT the acs-engine CLI). Our VM's default disk (standard disk 30GiB) is bottlenecking our pods so I attached a premium SSD disk (300GiB) to our VM's per <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk" rel="nofollow noreferrer">these instructions</a>.</p> <p>What's the proper procedure for pointing the kubelet (v1.8.1) to this new disk?</p> <hr> <p>I thought I could just edit /etc/systemd/system/kubelet.service and point it to that new disk but I get all kinds of errors when doing that and I think I've bricked the kubelet on this instance because reverting the edits doesn't get me back to a working state.</p> <hr> <p><strong>Update 2:</strong><br> I created a new cluster (ACS) with a single agent and updated <code>/etc/systemd/system/docker.service.d/exec_start.conf</code> to point docker to the new attached disk; no other changes were made to the machine.</p> <p>The pods attempt to start but I get "Error syncing pod" and "Pod sandbox changed, it will be killed and re-created." errors for every single pod on the agent.</p> <ul> <li>Docker is started and running on the machine.</li> <li><code>docker ps</code> shows the hyperkube-amd64 image running.</li> <li><code>docker logs &lt;hyperkube container&gt;</code> shows a bunch of errors regarding resolv.conf</li> </ul> <p><strong>Hyperkube container log:</strong></p> <pre><code>E1126 07:50:23.693679 1897 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "kubernetes-dashboard-86cf46546d-vjzqd": ResolvConfPath "/poddisk/docker/containers/aaa27116bb39092f27ec6723f70be35d9bcb48d66e49811566c19915ff804516/resolv.conf" does not exist E1126 07:50:23.693744 1897 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "kubernetes-dashboard-86cf46546d-vjzqd": ResolvConfPath "/poddisk/docker/containers/aaa27116bb39092f27ec6723f70be35d9bcb48d66e49811566c19915ff804516/resolv.conf" does not exist E1126 07:50:23.693781 1897 kuberuntime_manager.go:632] createPodSandbox for pod "kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod "kubernetes-dashboard-86cf46546d-vjzqd": ResolvConfPath "/poddisk/docker/containers/aaa27116bb39092f27ec6723f70be35d9bcb48d66e49811566c19915ff804516/resolv.conf" does not exist E1126 07:50:23.693868 1897 pod_workers.go:182] Error syncing pod 1d90eb2e-ee18-11e8-a6f7-000d3a727bf3 ("kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)"), skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)\" failed: rpc error: code = Unknown desc = rewrite resolv.conf failed for pod \"kubernetes-dashboard-86cf46546d-vjzqd\": ResolvConfPath \"/poddisk/docker/containers/aaa27116bb39092f27ec6723f70be35d9bcb48d66e49811566c19915ff804516/resolv.conf\" does not exist" I1126 07:50:23.746435 1897 kubelet.go:1871] SyncLoop (PLEG): "kubernetes-dashboard-924040265-sr9v7_kube-system(8af36209-ec52-11e8-b632-000d3a727bf3)", event: 
&amp;pleg.PodLifecycleEvent{ID:"8af36209-ec52-11e8-b632-000d3a727bf3", Type:"ContainerDied", Data:"410897d41aebe92b0d10a47572405c326228cc845bf8875d4bec27be8dccbf6f"} W1126 07:50:23.746674 1897 pod_container_deletor.go:77] Container "410897d41aebe92b0d10a47572405c326228cc845bf8875d4bec27be8dccbf6f" not found in pod's containers I1126 07:50:23.746700 1897 kubelet.go:1871] SyncLoop (PLEG): "kubernetes-dashboard-924040265-sr9v7_kube-system(8af36209-ec52-11e8-b632-000d3a727bf3)", event: &amp;pleg.PodLifecycleEvent{ID:"8af36209-ec52-11e8-b632-000d3a727bf3", Type:"ContainerStarted", Data:"5c32fa4c57009725adfef3df7034fe1dd6166f6e0b56b60be1434f41f33a2f7d"} I1126 07:50:23.835783 1897 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-v20-765f4cf698-zlms6_kube-system(1d10753c-ee18-11e8-a6f7-000d3a727bf3)", event: &amp;pleg.PodLifecycleEvent{ID:"1d10753c-ee18-11e8-a6f7-000d3a727bf3", Type:"ContainerDied", Data:"192e8df5e196e86235b7d79ecfb14d7ed458ec7709a09115ed8b995fbc90371f"} W1126 07:50:23.835972 1897 pod_container_deletor.go:77] Container "192e8df5e196e86235b7d79ecfb14d7ed458ec7709a09115ed8b995fbc90371f" not found in pod's containers I1126 07:50:23.939929 1897 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-v20-3003781527-lw6p9_kube-system(1cf6caec-ee18-11e8-a6f7-000d3a727bf3)", event: &amp;pleg.PodLifecycleEvent{ID:"1cf6caec-ee18-11e8-a6f7-000d3a727bf3", Type:"ContainerDied", Data:"605635bedd890c597a9675c23030d128eadd344e3c73fd5efaba544ce09dfa76"} W1126 07:50:23.940026 1897 pod_container_deletor.go:77] Container "605635bedd890c597a9675c23030d128eadd344e3c73fd5efaba544ce09dfa76" not found in pod's containers I1126 07:50:23.951129 1897 kuberuntime_manager.go:401] Sandbox for pod "heapster-342135353-x07fk_kube-system(a9810826-ec52-11e8-b632-000d3a727bf3)" has no IP address. Need to start a new one I1126 07:50:24.047879 1897 kuberuntime_manager.go:401] Sandbox for pod "kubernetes-dashboard-924040265-sr9v7_kube-system(8af36209-ec52-11e8-b632-000d3a727bf3)" has no IP address. Need to start a new one I1126 07:50:24.137353 1897 kuberuntime_manager.go:401] Sandbox for pod "kube-dns-v20-765f4cf698-zlms6_kube-system(1d10753c-ee18-11e8-a6f7-000d3a727bf3)" has no IP address. Need to start a new one I1126 07:50:24.241774 1897 kuberuntime_manager.go:401] Sandbox for pod "kube-dns-v20-3003781527-lw6p9_kube-system(1cf6caec-ee18-11e8-a6f7-000d3a727bf3)" has no IP address. Need to start a new one W1126 07:50:24.343902 1897 docker_service.go:333] Failed to retrieve checkpoint for sandbox "ce0304d171adf24619dac12e47914f2e3670d29d26b5bc3bec3358b631ebaf06": checkpoint is not found. 
</code></pre> <p><strong><code>service kubelet status</code> output:</strong></p> <pre><code>● kubelet.service - Kubelet Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2018-11-22 06:03:42 UTC; 4 days ago Process: 1822 ExecStartPre=/sbin/iptables -t nat --list (code=exited, status=0/SUCCESS) Process: 1815 ExecStartPre=/sbin/ebtables -t nat --list (code=exited, status=0/SUCCESS) Process: 1810 ExecStartPre=/sbin/sysctl -w net.ipv4.tcp_retries2=8 (code=exited, status=0/SUCCESS) Process: 1806 ExecStartPre=/bin/mount --make-shared /var/lib/kubelet (code=exited, status=0/SUCCESS) Process: 1797 ExecStartPre=/bin/bash -c if [ $(mount | grep "/var/lib/kubelet" | wc -l) -le 0 ] ; then /bin/mount --bind /var/lib/kubelet /var/lib/kubelet ; fi (code=exited, status=0/SUCCESS) Process: 1794 ExecStartPre=/bin/mkdir -p /var/lib/kubelet (code=exited, status=0/SUCCESS) Process: 1791 ExecStartPre=/bin/bash /opt/azure/containers/kubelet.sh (code=exited, status=0/SUCCESS) Main PID: 1828 (docker) Tasks: 9 Memory: 4.2M CPU: 5min 9.259s CGroup: /system.slice/kubelet.service └─1828 /usr/bin/docker run --net=host --pid=host --privileged --rm --volume=/dev:/dev --volume=/sys:/sys:ro --volume=/var/run:/var/run:rw --volume=/var/lib/docker/:/var/lib/docker:rw --volume=/var/lib/kubelet/:/var/lib/kubelet:shared --volume=/var/log:/var/log:rw --volume=/etc/kubernetes/:/etc/kubernetes:ro --volume=/srv/kubernetes/:/srv/kubernetes:ro --volume=/var/lib/waagent/ManagedIdentity-Settings:/var/lib/waagent/ManagedIdentity-Settings:ro gcrio.azureedge.net/google_containers/hyperkube-amd64:v1.8.1 /hyperkube kubelet --kubeconfig=/var/lib/kubelet/kubeconfig --require-kubeconfig --pod-infra-container-image=gcrio.azureedge.net/google_containers/pause-amd64:3.0 --address=0.0.0.0 --allow-privileged=true --enable-server --pod-manifest-path=/etc/kubernetes/manifests --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --node-labels=kubernetes.io/role=agent,agentpool=agent --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json --azure-container-registry-config=/etc/kubernetes/azure.json --network-plugin=kubenet --max-pods=110 --node-status-update-frequency=10s --image-gc-high-threshold=85 --image-gc-low-threshold=80 --v=2 --feature-gates=Accelerators=true Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: W1126 16:52:43.631813 1897 pod_container_deletor.go:77] Container "9e535d1d87c7c52bb154156bed1fbf40e3509ed72c76f0506ad8b6ed20b6c82d" not found in pod's containers Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:43.673560 1897 kuberuntime_manager.go:401] Sandbox for pod "kube-dns-v20-3003781527-rn3fz_kube-system(89854e4f-ec52-11e8-b632-000d3a727bf3)" has no IP address. Need to start a new one Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:43.711945 1897 kuberuntime_manager.go:401] Sandbox for pod "kubernetes-dashboard-86cf46546d-vjzqd_kube-system(1d90eb2e-ee18-11e8-a6f7-000d3a727bf3)" has no IP address. 
Need to start a new one Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:43.783121 1897 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-v20-3003781527-lw6p9_kube-system(1cf6caec-ee18-11e8-a6f7-000d3a727bf3)", event: &amp;pleg.PodLifecycleEvent{ID:"1cf6caec-ee18-11e8-a6f7-000d3a727bf3", Type:"ContainerDied", Data:"26e592abff8bf63a8d9b7a57778dd6768240112a6edafb6de55e7217258d764f"} Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: W1126 16:52:43.783563 1897 pod_container_deletor.go:77] Container "26e592abff8bf63a8d9b7a57778dd6768240112a6edafb6de55e7217258d764f" not found in pod's containers Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:43.783591 1897 kubelet.go:1871] SyncLoop (PLEG): "kube-dns-v20-3003781527-lw6p9_kube-system(1cf6caec-ee18-11e8-a6f7-000d3a727bf3)", event: &amp;pleg.PodLifecycleEvent{ID:"1cf6caec-ee18-11e8-a6f7-000d3a727bf3", Type:"ContainerStarted", Data:"2ac43b964e4c7b4172fb6ab0caa0c673c5526fd4947a9760ee390a9d7f78ee14"} Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: W1126 16:52:43.863377 1897 docker_service.go:333] Failed to retrieve checkpoint for sandbox "cfec0b7704e0487ee3122488284adc0520ba1b57a5a8675df8c40deaeee1cf2e": checkpoint is not found. Nov 26 16:52:43 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:43.935784 1897 kuberuntime_manager.go:401] Sandbox for pod "heapster-342135353-x07fk_kube-system(a9810826-ec52-11e8-b632-000d3a727bf3)" has no IP address. Need to start a new one Nov 26 16:52:44 k8s-agent-CA50C8FA-0 docker[1828]: I1126 16:52:44.085925 1897 kuberuntime_manager.go:401] Sandbox for pod "kube-dns-v20-3003781527-lw6p9_kube-system(1cf6caec-ee18-11e8-a6f7-000d3a727bf3)" has no IP address. Need to start a new one Nov 26 16:52:44 k8s-agent-CA50C8FA-0 docker[1828]: E1126 16:52:44.394661 1897 summary.go:92] Failed to get system container stats for "/docker/a29aa11ff8933b350e339bb96c02932a78aba63917114e505abd47b89460d453": failed to get cgroup stats for "/docker/a29aa11ff8933b350e339bb96c02932a78aba63917114e505abd47b89460d453": failed to get container info for "/docker/a29aa11ff8933b350e339bb96c02932a78aba63917114e505abd47b89460d453": unknown container "/docker/a29aa11ff8933b350e339bb96c02932a78aba63917114e505abd47b89460d453" </code></pre> <hr> <p><strong>Update 1:</strong><br> Per @Rico's answer I attempted updating <code>/etc/default/docker</code> but it had no affect. I then located and updated <code>/etc/systemd/system/docker.service.d/exec_start.conf</code>. This caused docker to re-create all of its files at the <code>/kubeletdrive/docker</code> location. <code>exec_start.conf</code> now looks like this:</p> <pre><code>[Service] ExecStart= ExecStart=/usr/bin/docker daemon -H fd:// -g /kubeletdrive/docker --storage-driver=overlay2 --bip=172.17.0.1/16 </code></pre> <p>Running <code>service docker status</code> shows this output and none of the containers are actually creating now. I see that something is adding the option <code>--state-dir /var/run/docker/libcontainerd/containerd</code> to the mix but I have yet to find the file this is coming from. 
I think updating this to the same location will fix this?</p> <pre><code>docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/docker.service.d └─clear_mount_propagation_flags.conf, exec_start.conf Active: active (running) since Thu 2018-11-15 06:09:40 UTC; 4 days ago Docs: https://docs.docker.com Main PID: 1175 (dockerd) Tasks: 259 Memory: 3.4G CPU: 1d 15h 9min 15.414s CGroup: /system.slice/docker.service ├─ 1175 dockerd -H fd:// -g /kubeletdrive/docker --storage-driver=overlay2 --bip=172.17.0.1/16 ├─ 1305 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc ├─ 1716 docker-containerd-shim 32387d1bf7a0fc58e26f0146b9a2cb21c7f0d673a730a71007d13cff3505cb5a /var/run/docker/libcontainerd/32387d1bf7a0fc58e26f0146b9a2cb21c7f0d673a730a71007d13cff3505cb5a docker-runc ├─ 1837 docker-containerd-shim 73fd1a01f7cb7a2c44f5d40ad0a6136398469e290dc6c554ff66a9971ba3fb1f /var/run/docker/libcontainerd/73fd1a01f7cb7a2c44f5d40ad0a6136398469e290dc6c554ff66a9971ba3fb1f docker-runc ├─ 1901 docker-containerd-shim ce674925877ba963b8ba8c85598cdc40137a81a70f996636be4e15d880580b05 /var/run/docker/libcontainerd/ce674925877ba963b8ba8c85598cdc40137a81a70f996636be4e15d880580b05 docker-runc ├─20715 docker-containerd-shim 83db148b4e55a8726d40567e7f66d84a9f25262c685a6e0c2dc2f4218b534378 /var/run/docker/libcontainerd/83db148b4e55a8726d40567e7f66d84a9f25262c685a6e0c2dc2f4218b534378 docker-runc ├─21047 docker-containerd-shim 4b1acbfcbec5ae3989607b16d59c5552e4bf9a4d88ba5f6d89bb7ef1d612cd63 /var/run/docker/libcontainerd/4b1acbfcbec5ae3989607b16d59c5552e4bf9a4d88ba5f6d89bb7ef1d612cd63 docker-runc ├─23319 docker-containerd-shim c24c646afc59573e54be3645bdfb691ce31a108d76052423618fcb767eaa8775 /var/run/docker/libcontainerd/c24c646afc59573e54be3645bdfb691ce31a108d76052423618fcb767eaa8775 docker-runc ├─23516 docker-containerd-shim 59e0414a170be175eb66221458e5811e3d8b15a5ed07b146a7be265dcc85e234 /var/run/docker/libcontainerd/59e0414a170be175eb66221458e5811e3d8b15a5ed07b146a7be265dcc85e234 docker-runc ├─23954 docker-containerd-shim da267fd3b43a3601b2d4938575bc3529cf174bd1291f2f6696cdc4981293b64f /var/run/docker/libcontainerd/da267fd3b43a3601b2d4938575bc3529cf174bd1291f2f6696cdc4981293b64f docker-runc ├─24396 docker-containerd-shim a8f843981f6f24144d52b77b659eb71f4b2bf30df9c6c74154f960e208af4950 /var/run/docker/libcontainerd/a8f843981f6f24144d52b77b659eb71f4b2bf30df9c6c74154f960e208af4950 docker-runc ├─26078 docker-containerd-shim 1345ae86c3fc7242bb156785230ebf7bdaa125ba48b849243388aa3d9506bf7e /var/run/docker/libcontainerd/1345ae86c3fc7242bb156785230ebf7bdaa125ba48b849243388aa3d9506bf7e docker-runc ├─27100 docker-containerd-shim 0749c242003cfa542ef9868f001335761be53eb3c52df00dcb4fa73f9e94a57b /var/run/docker/libcontainerd/0749c242003cfa542ef9868f001335761be53eb3c52df00dcb4fa73f9e94a57b docker-runc ├─28254 docker-containerd-shim 7934ba2701673f7e3c6567e4e35517625d14b97fe9b7846e716c0559a2442241 /var/run/docker/libcontainerd/7934ba2701673f7e3c6567e4e35517625d14b97fe9b7846e716c0559a2442241 docker-runc └─29917 docker-containerd-shim 26f8f5963396a478e37aebdacdc0943af188d32dbe5bbe28f3ccc6edef003546 /var/run/docker/libcontainerd/26f8f5963396a478e37aebdacdc0943af188d32dbe5bbe28f3ccc6edef003546 docker-runc Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: 
time="2018-11-19T16:38:42.722015704Z" level=error msg="Handler for POST /v1.24/containers/7409f3546ffa9e42da8b6cc694ba37571e908df43bfa2001e449a4cca3c50801/stop returned error: Container 7409f3546ffa9e42da8b6cc694ba37571e908df43bfa2001e449a4cca3c50801 is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.759151399Z" level=error msg="Handler for POST /v1.24/containers/ebbeed144e3768758c62749763977b65e4e2b118452bcf342b3f9d79ff0a5362/stop returned error: Container ebbeed144e3768758c62749763977b65e4e2b118452bcf342b3f9d79ff0a5362 is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.792131939Z" level=error msg="Handler for POST /v1.24/containers/85ff0f9d9feb893eb87062b00dc0f034ee47e639289a401e1c9f4e2ca7a5a202/stop returned error: Container 85ff0f9d9feb893eb87062b00dc0f034ee47e639289a401e1c9f4e2ca7a5a202 is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.830289673Z" level=error msg="Handler for POST /v1.24/containers/dc8f0fbaacfaba68453895996976706581aab817790bb1694f1a15de6cd2861f/stop returned error: Container dc8f0fbaacfaba68453895996976706581aab817790bb1694f1a15de6cd2861f is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.830618185Z" level=error msg="Handler for GET /v1.24/containers/702807afde28063ae46e321e86d18861440b691f254df52591c87ff732383467/json returned error: No such container: 702807afde28063ae46e321e86d18861440b691f254df52591c87ff732383467" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.864109644Z" level=error msg="Handler for POST /v1.24/containers/702807afde28063ae46e321e86d18861440b691f254df52591c87ff732383467/stop returned error: No such container: 702807afde28063ae46e321e86d18861440b691f254df52591c87ff732383467" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.874873849Z" level=error msg="Handler for GET /v1.24/containers/6f1f542a4f6bb30f21d0a747d915b458a34a5c6cedc66b301faa32a12a502d0f/json returned error: No such container: 6f1f542a4f6bb30f21d0a747d915b458a34a5c6cedc66b301faa32a12a502d0f" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.898141823Z" level=error msg="Handler for POST /v1.24/containers/816c48d0dd01fb66769cc6275e799e68a1301fc0f6623a0a99558350a414ee7c/stop returned error: Container 816c48d0dd01fb66769cc6275e799e68a1301fc0f6623a0a99558350a414ee7c is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.928695972Z" level=error msg="Handler for POST /v1.24/containers/f35a7847f9d2563101337389968f2265e5d0cd8bc78e0a2790a52be7dd3a0f3a/stop returned error: Container f35a7847f9d2563101337389968f2265e5d0cd8bc78e0a2790a52be7dd3a0f3a is already stopped" Nov 19 16:38:42 k8s-agent-D24C3A06-0 docker[1175]: time="2018-11-19T16:38:42.998395591Z" level=error msg="Handler for POST /v1.24/containers/a9d0acecee28b17992cbd99e8c782513157f0bca57acaa22d5688c07062d3346/stop returned error: Container a9d0acecee28b17992cbd99e8c782513157f0bca57acaa22d5688c07062d3346 is already stopped" </code></pre>
<p>If the pod space is what you want to change, and assuming you are using Docker (which most people are), you have to change the graph directory for Docker. You can do it with the <code>-g</code> option on your Docker daemon:</p> <pre><code>-g /mount/to/your-new-disk
</code></pre> <p>Depending on your setup you might be able to change it in the <code>/etc/default/docker</code> file.</p> <pre><code>DOCKER_OPTS="-g /mount/to/your-new-disk"
</code></pre> <p>Or in your systemd service:</p> <pre><code># /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
...
[Service]
Type=notify
...
ExecStart=/usr/bin/dockerd -H fd:// -g /mount/to/your-new-disk
...
[Install]
WantedBy=multi-user.target
</code></pre> <p>Another option is to add it to the <a href="https://stackoverflow.com/a/50726177/2989261"><code>/etc/docker/daemon.json</code></a> file.</p> <p>If you are using <a href="https://containerd.io/" rel="nofollow noreferrer">containerd</a> instead of Docker you can change the value for <code>root</code> in the <a href="https://github.com/containerd/containerd/blob/master/docs/ops.md" rel="nofollow noreferrer"><code>/etc/containerd/config.toml</code></a> file.</p> <p>If you are using <a href="http://cri-o.io/" rel="nofollow noreferrer">CRIO</a> you can also use the <code>root</code> option in the <a href="https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crio-table" rel="nofollow noreferrer"><code>crio.conf</code></a></p>
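<p>A sketch of that <code>daemon.json</code> route (on recent Docker releases the key is <code>data-root</code>; older ones used <code>graph</code>, so match it to your Docker version):</p> <pre><code># /etc/docker/daemon.json
{
  "data-root": "/mount/to/your-new-disk"
}
</code></pre> <p>Then reload and restart the daemon, e.g. <code>sudo systemctl daemon-reload &amp;&amp; sudo systemctl restart docker</code>. Note that existing images and containers are not moved automatically; Docker starts fresh in the new location unless you copy the old data over.</p>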
<p>kubeadm version 1.12.2</p> <p><code>$ sudo kubeadm init --config kubeadm_new.config --ignore-preflight-errors=all</code></p> <p>/var/log/syslog shows:</p> <pre><code>Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438374 5101 server.go:1013] Started kubelet Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438406 5101 server.go:133] Starting to listen on 0.0.0.0:10250 Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438446 5101 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438492 5101 server.go:753] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438968 5101 server.go:318] Adding debug handlers to kubelet server. Nov 15 08:44:13 khteh-T580 kubelet[5101]: F1115 08:44:13.439455 5101 server.go:145] listen tcp 0.0.0.0:10250: bind: address already in use </code></pre> <p>I have tried <code>sudo systemctl stop kubelet</code> and manually kill kubelet process but to no avail. Any advice and insights are appreciated.</p>
<p>Have you tried using netstat to see what other process is running that has already bound to that port?</p> <pre><code>sudo netstat -tulpn | grep 10250 </code></pre>
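<p>If <code>netstat</code> isn't installed on the host, <code>lsof</code> gives you the same information:</p> <pre><code>sudo lsof -i :10250
</code></pre>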
<p>I would like to add multiple node groups to EKS, each one with different labels. I have successfully deployed a second CloudFormation stack and can see the new EC2 instances, but I cannot see the new nodes in the k8s dashboard. Am I missing something?</p>
<p>I was able to fix this by going back and updating the aws-auth configmap, adding a second role map: </p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: OLD ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: NEW ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre>
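<p>For reference, this is how I applied the change (the file name is whatever you saved the ConfigMap as) and verified the new nodes joined:</p> <pre><code>kubectl apply -f aws-auth-cm.yaml
# or edit it in place:
kubectl -n kube-system edit configmap aws-auth
# then watch the new nodes register:
kubectl get nodes --watch
</code></pre>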
<p>I need to move my helm binary (<code>/usr/local/bin/helm</code>) to another server, and I can't work out how to get helm to connect to a remote tiller or a remote Kubernetes server.</p> <p>Helm is currently running locally on server B (the k8s cluster). I want it to run on server A and be able to connect to server B to execute the YAML files.</p>
<p>IIRC Helm should act against whichever cluster is set as your current context for <code>kubectl</code> on server A. </p> <p>Set up kubectl on your server. Use <code>kubectl config use-context</code> to target the cluster and helm should follow. </p> <p>You'll probably want to do <code>helm init --client-only</code> on the server to initialize helm without reinstalling tiller.</p> <blockquote> <p>NOTE: This only applies to Helm 2. Tiller has been removed in Helm 3.</p> </blockquote>
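<p>A quick sketch of those steps on server A (the context name is hypothetical; it's whatever the kubeconfig for server B's cluster calls it):</p> <pre><code># copy/merge the kubeconfig for server B's cluster onto server A first
kubectl config use-context my-cluster-b
helm init --client-only
helm version   # should now report both the client and the remote tiller version
</code></pre>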
<p>A fairly common setup I see for docker is to have a container spin up perform a task and then exit. This is something I do quite often with docker-compose where I have a node container that performs a build process and doesn't need to stay up once the static files have been built. In these cases if I look at the <code>docker-compose ps</code> output, while my other containers are up and exposed on a port, the node containers state will be "Exit 0". Although otherwise dormant if I need to access this container it's available to be spun up.</p> <p>What's a good practice for translating this setup to Kubernetes?</p> <p>My initial approach was to place everything in one pod but the container exiting causes a CrashLoopBackOff and due to the pod restart policy the pod keeps restarting. If I were to keep this setup I'd only want the pod to restart if one of the other containers were to fail. It already moves the build static files into a volume that is accessible by the other containers.</p> <p>Should this container be moved into another pod that does not restart? Seems like this would unnecessarily complicate the deployment.</p>
<p>Generally, to prevent a Pod from restarting, use <code>restartPolicy: Never</code> (<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer">more on Restart Policy</a>).</p> <p>Also, for the thing which you want to run "to completion", use the k8s component called <code>Job</code> (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">more on Job</a>):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: &lt;job_name&gt;
spec:
  template:
    spec:
      containers: &lt;...&gt;
</code></pre> <p>To run the Job until its first success (which is <code>exit code 0</code>), set <code>restartPolicy: OnFailure</code>.</p>
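<p>To flesh that out for the build-container case from the question, a minimal sketch (name, image, command, and mount path are placeholders; the volume would realistically be a PVC shared with the serving Deployment):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: static-build
spec:
  backoffLimit: 3            # retry the build a few times before giving up
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: build
        image: node:10       # placeholder build image
        command: ["npm", "run", "build"]
        volumeMounts:
        - name: static-files
          mountPath: /app/dist
      volumes:
      - name: static-files
        persistentVolumeClaim:
          claimName: static-files-pvc   # hypothetical claim shared with the web pods
</code></pre>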
<p>I have an application that works just fine when deployed on regular K8s. I installed Istio on K8s along with my application. I configured a gateway and virtual service. Most thing appear to work except for internal connections to MySQL.</p> <p>There are a few services that use MySQL and they can no longer connect to the database with Istio.</p> <p>Any idea what broke? I am guessing it's something to do with the automatically injected sidecar proxy messing with the traffic. I am new to Istio and the docs are a bit scarce in places. Do I need to configure anything special for MySQL? Interestingly calls to MongoDB and Redis appear to be working. Confused :-(</p>
<p>There are multiple bugs in istio 1.0.3 preventing this. One is the / in the name, which the developers don't seem to think is a big deal, but breaks all stateful sets as they use a slash in the name. Once this is resolved, you can get a statefulset mysql up, but the connection is fubar, it connects but immediately gives a <code>MySQL has gone away</code>. The newer 1.1 versions appear just as bad. I think 1.0.2 has the last "working" version of Istio, but there were still major issues that made me try newer versions.</p> <p>You can find the istio.yaml change here: <a href="https://github.com/istio/istio/issues/9982" rel="nofollow noreferrer">https://github.com/istio/istio/issues/9982</a></p>
<p>I have a few <strong>internal</strong> services which talk to one or more <strong>internal</strong> <strong>&lt;service_name&gt;.example.com</strong>. How can I deploy a cluster where calls to <strong>&lt;service_name&gt;.example.com</strong> would route to the actual service? NOTE: There are no</p> <p>Note, I might need to create aliases such as <strong>&lt;service_name&gt;.internal.example.com ---&gt; &lt;service_name&gt;.example.com</strong></p> <p>The idea is that a lot of the components in the architecture make HTTP calls to the <code>.example.com</code> domain, and for the migration to work I want Kubernetes to take care of mapping the appropriate <code>.example.com</code> name to the service within the cluster, and not the outside one, without having to rename all of .example.com to <code>.svc.cluster.local</code>.</p> <p>These services shouldn't be exposed externally; only the ingress is exposed externally.</p> <p>What would be the best way to achieve this? </p>
<p>This works, the assumption here is that a service, <code>&lt;service_name&gt;.example.com</code> maps to <code>&lt;service_name&gt;.svc.cluster.local</code>. Usually a namespace will be involved, so the rewrite would look more like <code>{1}.{1}.svc.cluster.local</code> (wherein <code>&lt;service_name&gt;</code> is also the <code>&lt;namespace_name&gt;</code>), or the namespace can be hard coded as needed <code>{1}.&lt;namespace_name&gt;.svc.cluster.local</code>. </p> <p>Keep in mind to not set <code>kubernetes.io/cluster-service: "true"</code> to <code>true</code> hence it is commented out, otherwise if it set to <code>true</code> GKE keeps removing the service. I did not look into why this was happening.</p> <p>CoreDNS <a href="https://coredns.io/plugins/proxy/" rel="nofollow noreferrer">proxy plugin</a> will not take a DNS name, it takes a IP, IP:PORT or a FILENAME (such as /etc/resolv.conf). </p> <p>The proxy/upstream is needed because once the DNS resolution is handed to CoreDNS and CoreDNS rewrites it to a local cluster service, that local cluster service DNS entry has to be resolved, please see <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#effects-on-pods" rel="nofollow noreferrer">effects on pods</a> from the kubernetes documentation. The final <em>resolving</em> to an IP happens with the proxy or perhaps even using an upstream server which points back to <code>kube-dns.kube-system.svc.cluster.local</code>.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: internal-dns namespace: kube-system data: Corefile: | example.com:53 { log errors health prometheus :9153 rewrite name regex (.*).example.com {1}.svc.cluster.local proxy . 10.10.10.10 ### ip of kube-dns.kube-system.svc.cluster.local } --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: internal-dns namespace: kube-system labels: k8s-app: internal-dns kubernetes.io/name: "CoreDNS" spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: internal-dns template: metadata: labels: k8s-app: internal-dns spec: tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule - key: "CriticalAddonsOnly" operator: "Exists" containers: - name: coredns image: coredns/coredns:1.2.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: internal-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: internal-dns #kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: internal-dns ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP </code></pre> <p>As pointed out by the in the comments above by @patrick-w and @danny-l, a stubdomain needs to be inserted into kube-dns, which then delegates the calls to example.com o CoreDNS.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: 
kube-dns namespace: kube-system data: stubDomains: | {"example.com": ["10.20.20.20"]} ### ip of internal-dns.kube-system.svc.cluster.local. </code></pre> <p>The stubdomain has the capability of taking a DNS name, <code>internal-dns.kube-system.svc.cluster.local</code> would have worked, but because of <a href="https://github.com/kubernetes/dns/issues/82" rel="nofollow noreferrer">bug in kube-dns (dnsmasq)</a> the dnsmasq container fails to start and ends up in a CrashLoopBackOff.</p> <p><code>internal-dns.kube-system.svc.cluster.local</code> is the name of the CoreDNS/internal-dns service.</p> <p>dnsmasq error:</p> <pre><code>I1115 17:19:20.506269 1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I1115 17:19:20.506570 1 sync.go:167] Updated stubDomains to map[example.com:[internal-dns.kube-system.svc.cluster.local]] I1115 17:19:20.506734 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --server /example.com/internal-dns.kube-system.svc.cluster.local] I1115 17:19:20.507923 1 nanny.go:116] I1115 17:19:20.507952 1 nanny.go:116] dnsmasq: bad command line options: bad address I1115 17:19:20.507966 1 nanny.go:119] W1115 17:19:20.507970 1 nanny.go:120] Got EOF from stderr I1115 17:19:20.507978 1 nanny.go:119] W1115 17:19:20.508079 1 nanny.go:120] Got EOF from stdout F1115 17:19:20.508091 1 nanny.go:190] dnsmasq exited: exit status 1 </code></pre> <p>dnsmasq successful when using ip in the stubdomain:</p> <pre><code>I1115 17:24:18.499937 1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I1115 17:24:18.500605 1 sync.go:167] Updated stubDomains to map[example.com:[10.20.20.20]] I1115 17:24:18.500668 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --server /example.com/10.20.20.20] I1115 17:24:18.850687 1 nanny.go:119] W1115 17:24:18.850726 1 nanny.go:120] Got EOF from stdout I1115 17:24:18.850748 1 nanny.go:116] dnsmasq[15]: started, version 2.78 cachesize 1000 I1115 17:24:18.850765 1 nanny.go:116] dnsmasq[15]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify I1115 17:24:18.850773 1 nanny.go:116] dnsmasq[15]: using nameserver 10.20.20.20#53 for domain example.com I1115 17:24:18.850777 1 nanny.go:116] dnsmasq[15]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I1115 17:24:18.850780 1 nanny.go:116] dnsmasq[15]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I1115 17:24:18.850783 1 nanny.go:116] dnsmasq[15]: using nameserver 127.0.0.1#10053 for domain cluster.local I1115 17:24:18.850788 1 nanny.go:116] dnsmasq[15]: reading /etc/resolv.conf I1115 17:24:18.850791 1 nanny.go:116] dnsmasq[15]: using nameserver 10.20.20.20#53 for domain example.com I1115 17:24:18.850796 1 nanny.go:116] dnsmasq[15]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I1115 17:24:18.850800 1 nanny.go:116] dnsmasq[15]: using 
nameserver 127.0.0.1#10053 for domain in-addr.arpa I1115 17:24:18.850803 1 nanny.go:116] dnsmasq[15]: using nameserver 127.0.0.1#10053 for domain cluster.local I1115 17:24:18.850850 1 nanny.go:116] dnsmasq[15]: read /etc/hosts - 7 addresses </code></pre>
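<p>Once both pieces are in place, a quick way to check the whole chain (kube-dns stub domain to CoreDNS rewrite to cluster service) is to resolve an <code>example.com</code> name from a throwaway pod. This is just a sketch; <code>myservice</code> is a hypothetical service name, substitute your own, and the returned address should match the ClusterIP of the corresponding cluster service:</p> <pre><code># resolve through the cluster DNS from a temporary pod
kubectl run -it --rm dnstest --image=busybox --restart=Never -- \
  nslookup myservice.example.com
</code></pre>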
<p>I was originally trying to run a Job that seemed to get stuck in a CrashBackoffLoop. Here was the service file:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: es-setup-indexes namespace: elk-test spec: template: metadata: name: es-setup-indexes spec: containers: - name: es-setup-indexes image: appropriate/curl command: ['curl -H "Content-Type: application/json" -XPUT http://elasticsearch.elk-test.svc.cluster.local:9200/_template/filebeat -d@/etc/filebeat/filebeat.template.json'] volumeMounts: - name: configmap-volume mountPath: /etc/filebeat/filebeat.template.json subPath: filebeat.template.json restartPolicy: Never volumes: - name: configmap-volume configMap: name: elasticsearch-configmap-indexes </code></pre> <p>I tried deleting the job but it would only work if I ran the following command:</p> <pre><code>kubectl delete job es-setup-indexes --cascade=false </code></pre> <p>After that I noticed when running:</p> <pre><code>kubectl get pods -w </code></pre> <p>I would get a TON of pods in an Error state and I see no way to clean them up. Here is just a small sample of the output when I run get pods:</p> <pre><code>es-setup-indexes-zvx9c 0/1 Error 0 20h es-setup-indexes-zw23w 0/1 Error 0 15h es-setup-indexes-zw57h 0/1 Error 0 21h es-setup-indexes-zw6l9 0/1 Error 0 16h es-setup-indexes-zw7fc 0/1 Error 0 22h es-setup-indexes-zw9bw 0/1 Error 0 12h es-setup-indexes-zw9ck 0/1 Error 0 1d es-setup-indexes-zwf54 0/1 Error 0 18h es-setup-indexes-zwlmg 0/1 Error 0 16h es-setup-indexes-zwmsm 0/1 Error 0 21h es-setup-indexes-zwp37 0/1 Error 0 22h es-setup-indexes-zwzln 0/1 Error 0 22h es-setup-indexes-zx4g3 0/1 Error 0 11h es-setup-indexes-zx4hd 0/1 Error 0 21h es-setup-indexes-zx512 0/1 Error 0 1d es-setup-indexes-zx638 0/1 Error 0 17h es-setup-indexes-zx64c 0/1 Error 0 21h es-setup-indexes-zxczt 0/1 Error 0 15h es-setup-indexes-zxdzf 0/1 Error 0 14h es-setup-indexes-zxf56 0/1 Error 0 1d es-setup-indexes-zxf9r 0/1 Error 0 16h es-setup-indexes-zxg0m 0/1 Error 0 14h es-setup-indexes-zxg71 0/1 Error 0 1d es-setup-indexes-zxgwz 0/1 Error 0 19h es-setup-indexes-zxkpm 0/1 Error 0 23h es-setup-indexes-zxkvb 0/1 Error 0 15h es-setup-indexes-zxpgg 0/1 Error 0 20h es-setup-indexes-zxqh3 0/1 Error 0 1d es-setup-indexes-zxr7f 0/1 Error 0 22h es-setup-indexes-zxxbs 0/1 Error 0 13h es-setup-indexes-zz7xr 0/1 Error 0 12h es-setup-indexes-zzbjq 0/1 Error 0 13h es-setup-indexes-zzc0z 0/1 Error 0 16h es-setup-indexes-zzdb6 0/1 Error 0 1d es-setup-indexes-zzjh2 0/1 Error 0 21h es-setup-indexes-zzm77 0/1 Error 0 1d es-setup-indexes-zzqt5 0/1 Error 0 12h es-setup-indexes-zzr79 0/1 Error 0 16h es-setup-indexes-zzsfx 0/1 Error 0 1d es-setup-indexes-zzx1r 0/1 Error 0 21h es-setup-indexes-zzx6j 0/1 Error 0 1d kibana-kq51v 1/1 Running 0 10h </code></pre> <p>But if I look at the jobs I get nothing related to that anymore:</p> <pre><code>$ kubectl get jobs --all-namespaces NAMESPACE NAME DESIRED SUCCESSFUL AGE kube-system configure-calico 1 1 46d </code></pre> <p>I've also noticed that kubectl seems much slow to respond. 
I don't know if the pods are continuously trying to be restarted or in some broken state but would be great if someone could let me know how to troubleshoot as I have not come across another issue like this in kubernetes.</p> <p>Kube info:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p><code>kubectl delete pods --field-selector status.phase=Failed -n &lt;your-namespace&gt;</code></p> <p>...cleans up any failed pods in your-namespace.</p>
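<p>If your kubectl is too old to support <code>--field-selector</code>, a rough equivalent is the sketch below; it assumes the default <code>kubectl get pods</code> column layout where STATUS is the third column:</p> <pre><code>kubectl get pods -n &lt;your-namespace&gt; --no-headers \
  | awk '$3=="Error" {print $1}' \
  | xargs kubectl delete pod -n &lt;your-namespace&gt;
</code></pre>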
<p>I've got a k8s cronjob that consists of an init container and a one pod container. If the init container fails, the Pod in the main container never gets started, and stays in "PodInitializing" indefinitely.</p> <p>My intent is for the job to fail if the init container fails.</p> <pre><code>--- apiVersion: batch/v1beta1 kind: CronJob metadata: name: job-name namespace: default labels: run: job-name spec: schedule: "15 23 * * *" startingDeadlineSeconds: 60 concurrencyPolicy: "Forbid" successfulJobsHistoryLimit: 30 failedJobsHistoryLimit: 10 jobTemplate: spec: # only try twice backoffLimit: 2 activeDeadlineSeconds: 60 template: spec: initContainers: - name: init-name image: init-image:1.0 restartPolicy: Never containers: - name: some-name image: someimage:1.0 restartPolicy: Never </code></pre> <p>a kubectl on the pod that's stuck results in: </p> <pre><code>Name: job-name-1542237120-rgvzl Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: my-node-98afffbf-0psc/10.0.0.0 Start Time: Wed, 14 Nov 2018 23:12:16 +0000 Labels: controller-uid=ID job-name=job-name-1542237120 Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container elasticsearch-metrics; cpu request for init container elasticsearch-repo-setup; cpu requ... Status: Failed IP: 10.0.0.0 Controlled By: Job/job-1542237120 Init Containers: init-container-name: Container ID: docker://ID Image: init-image:1.0 Image ID: init-imageID Port: &lt;none&gt; Host Port: &lt;none&gt; State: Terminated Reason: Error Exit Code: 1 Started: Wed, 14 Nov 2018 23:12:21 +0000 Finished: Wed, 14 Nov 2018 23:12:32 +0000 Ready: False Restart Count: 0 Requests: cpu: 100m Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro) Containers: some-name: Container ID: Image: someimage:1.0 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 100m Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-wwl5n (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True </code></pre>
<p>To try and figure this out I would run the command:</p> <p><code>kubectl get pods</code> - Add the namespace param if required.</p> <p>Then copy the pod name and run:</p> <p><code>kubectl describe pod {POD_NAME}</code></p> <p>That should give you some information as to why it's stuck in the initializing state.</p>
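<p>In addition to <code>describe</code>, the logs of the failed init container usually tell you exactly why it exited non-zero. A short sketch, using the pod name from the question and a placeholder for the init container name:</p> <pre><code># list the init containers of the stuck pod
kubectl get pod job-name-1542237120-rgvzl \
  -o jsonpath='{.status.initContainerStatuses[*].name}'

# fetch the logs of a specific init container
kubectl logs job-name-1542237120-rgvzl -c &lt;init-container-name&gt;
</code></pre>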
<p>I have a docker image that uses a volume to write files: </p> <pre><code>docker run --rm -v /home/dir:/out/ image:cli args </code></pre> <p>when I try to run this inside a pod the container exit normally but no file is written. </p> <p>I don't get it. </p> <p>The container throw errors if it does not find the volume, for example if I run it without the <code>-v</code> option it throws: </p> <pre><code>Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'. </code></pre> <p>But I don't have any error from the container. It finishes like it wrote files, but files do not exist.</p> <p>I'm quite new to Kubernetes but this is getting me crazy. </p> <p>Does kubernetes prevent files from being written? or am I missing something obvious? </p> <p>The whole Kubernetes context is managed by GCP composer-airflow, if it helps...</p> <pre><code>docker -v: Docker version 17.03.2-ce, build f5ec1e2 </code></pre>
<p>If you want to have that behavior in Kubernetes you can use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> volume.</p> <p>Essentially, you specify it in your pod spec; the volume is mounted from the node where your pod runs, so the file should be there on the node after the pod exits. Note that in your <code>docker run</code> example <code>/home/dir</code> is the host directory and <code>/out</code> is the path inside the container, so the pod spec should map them the same way:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: image:cli
    name: test-container
    volumeMounts:
    - mountPath: /out
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/dir
      type: Directory
</code></pre>
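<p>A minimal way to verify this, assuming the spec above is saved as <code>test-pd.yaml</code>: apply it, note which node the pod was scheduled on, and check the directory on that node. With <code>hostPath</code> the file lives on whichever node ran the pod, so on multi-node clusters you may also want a <code>nodeSelector</code>:</p> <pre><code>kubectl apply -f test-pd.yaml
kubectl get pod test-pd -o wide   # shows the node the pod ran on
# then, on that node:
ls -l /home/dir
</code></pre>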
<p>We have an application which is deployed on gcloud using Kubernetes. The application can be deployed from the master branch of git as well as any git branches we create. We are now moving to a multi-regional deployment of our application.</p> <p>The question is will branch deployment be supported in multi-regional deployments? For a multi-regional deployment, I am using the kubemci tool.</p> <p>Has anyone used this tool and done something similar? Please help.</p>
<p>How you manage your git branches is not directly related to managing a multi-cluster ingress with <code>kubemci</code>.</p> <p>To manage your branches and deployments (even multi-region) I suggest you look at <a href="https://www.weave.works/technologies/gitops/" rel="nofollow noreferrer">GitOps</a> tools in Kubernetes. Some of them:</p> <ul> <li><a href="https://github.com/weaveworks/flux" rel="nofollow noreferrer">Flux</a></li> <li><a href="https://github.com/GoogleContainerTools/skaffold" rel="nofollow noreferrer">Skaffold</a></li> <li><a href="https://github.com/hasura/gitkube" rel="nofollow noreferrer">GitKube</a></li> <li><a href="https://github.com/Azure/draft" rel="nofollow noreferrer">Draft</a></li> <li><a href="https://ksonnet.io/" rel="nofollow noreferrer">Ksonnet</a></li> <li><a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">Argo</a></li> </ul> <p>You can still use <code>kubemci</code> to manage your ingresses in your clusters across multi-regions.</p>
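<p>For the kubemci side, the usual flow is to point it at a regular Ingress spec plus a kubeconfig that contains a context for every regional cluster. The flag names below are from memory of the kubemci README of that time, so treat this as a sketch and double-check with <code>kubemci --help</code>:</p> <pre><code>kubemci create my-app-mci \
  --ingress=ingress.yaml \
  --gcp-project=&lt;project-id&gt; \
  --kubeconfig=&lt;kubeconfig-with-a-context-per-cluster&gt;
</code></pre>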
<p>Azure Container/Kubernetes Service - Virtual Network : IPs and network interface are not deleted/cleaned after pods deletion.</p> <p>No more available address in the subnet that kubernetes use after a few deployments.</p> <p>Is there a way to clean those network interface?</p>
<p>RESOLVED: By default, AKS reserves 31 IPs for each node in the cluster. So my problem was not that IPs were not being released, but simply that a lot of IPs were reserved up front :)</p>
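<p>For reference, a sketch rather than a definitive recipe: with the Azure CNI the number of pre-allocated IPs per node follows the max-pods setting, so you can either lower it at cluster creation or use kubenet so pods don't consume VNet IPs at all. Check <code>az aks create --help</code> for the exact flags on your CLI version:</p> <pre><code># Azure CNI with fewer pre-allocated IPs per node
az aks create -g &lt;resource-group&gt; -n &lt;cluster&gt; --network-plugin azure --max-pods 30

# or kubenet, where pod IPs come from an overlay instead of the subnet
az aks create -g &lt;resource-group&gt; -n &lt;cluster&gt; --network-plugin kubenet
</code></pre>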
<p>The second example policy from the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies" rel="noreferrer">PodSecurityPolicy documentation</a> consists of the following PodSecurityPolicy snippet</p> <pre><code>... spec: privileged: false # Required to prevent escalations to root. allowPrivilegeEscalation: false # This is redundant with non-root + disallow privilege escalation, # but we can provide it for defense in depth. requiredDropCapabilities: - ALL ... </code></pre> <p>Why is dropping all capabilities redundant for non-root + disallow privilege escalation? You can have a container process without privilege escalation that is non-root but has effective capabilities right?</p> <p>It seems like this is not possible with Docker:</p> <pre><code>$ docker run --cap-add SYS_ADMIN --user 1000 ubuntu grep Cap /proc/self/status CapInh: 00000000a82425fb CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a82425fb CapAmb: 0000000000000000 </code></pre> <p>All effective capabilities have been dropped even when trying to explicitly add them. But other container runtimes could implement it, so is this comment just Docker specific?</p>
<blockquote> <p>Why is dropping all capabilities redundant for non-root + disallow privilege escalation?</p> </blockquote> <p>Because you need privilege escalation to be able to gain 'new' capabilities: effectively, <code>allowPrivilegeEscalation: false</code> sets the <em>no_new_privs</em> bit on the <a href="https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt" rel="noreferrer">execve</a> system call (disabling <em>setuid</em> behavior), which prevents the process from picking up any new capabilities.<br> Also, as shown in the docs: <em>"Once the bit is set, it is inherited across fork, clone, and execve and cannot be unset"</em>. More info <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation" rel="noreferrer">here</a>. </p> <p>This, in combination with <code>privileged: false</code>, renders <code>requiredDropCapabilities: [ALL]</code> redundant.</p> <p>The equivalent Docker options here are:</p> <ul> <li><code>--user=whatever</code> =&gt; <code>privileged: false</code></li> <li><code>--security-opt=no-new-privileges</code> =&gt; <code>allowPrivilegeEscalation: false</code></li> <li><code>--cap-drop=all</code> =&gt; <code>requiredDropCapabilities: [ALL]</code></li> </ul> <blockquote> <p>It seems like this is not possible with Docker</p> </blockquote> <p>That's what Docker looks to be doing: the moment you specify a non-root user, all of the effective capabilities are dropped (<code>CapEff: 0000000000000000</code>), even if you specify <code>--cap-add SYS_ADMIN</code>.</p> <p>This, combined with the <code>--security-opt=no-new-privileges</code> option, renders <code>--cap-drop=all</code> redundant.</p> <p>Note that Docker's default capability mask (<code>00000000a80425fb</code>, below) differs from the bounding set in your first example (<code>00000000a82425fb</code>) only by the <code>SYS_ADMIN</code> bit that <code>--cap-add</code> put there:</p> <pre><code>$ docker run --rm ubuntu grep Cap /proc/self/status
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000

$ capsh --decode=00000000a82425fb
0x00000000a82425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_admin,cap_mknod,cap_audit_write,cap_setfcap
</code></pre> <p>So <code>--cap-add</code> does extend the inheritable/bounding sets, but for a non-root user the effective set still ends up empty.</p> <blockquote> <p>But other container runtimes could implement it, so is this comment just Docker specific?</p> </blockquote> <p>I suppose so: you could have a runtime where <code>privileged: false</code> and <code>allowPrivilegeEscalation: false</code> do not effectively disable capabilities, and those could then be dropped with <code>requiredDropCapabilities:</code> (although I don't see why another runtime would want to change the Docker behavior).</p>
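<p>For completeness, the pod-level combination being discussed (non-root user, no privilege escalation, drop all capabilities) looks roughly like the sketch below; the image and user ID are placeholders. With a non-zero <code>runAsUser</code> and <code>allowPrivilegeEscalation: false</code>, the explicit <code>drop: ["ALL"]</code> is the "defense in depth" part:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: no-escalation-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root
  containers:
  - name: app
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs
      capabilities:
        drop: ["ALL"]                   # redundant here, kept for defense in depth
</code></pre>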
<p>I spun a two node cluster in AWS and installed traefik using helm. I see that the service external IP is stuck at pending status. Checked several sources but couldn't find anything to resolve the issue. ANy help is appreciated</p> <pre><code>helm install stable/traefik ubuntu@ip-172-31-34-78:~$ kubectl get pods -n default NAME READY STATUS RESTARTS AGE unhinged-prawn-traefik-67b67f55f4-tnz5w 1/1 Running 0 18m ubuntu@ip-172-31-34-78:~$ kubectl get services -n default NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 55m unhinged-prawn-traefik LoadBalancer 10.102.38.210 &lt;pending&gt; 80:30680/TCP,443:32404/TCP 18m ubuntu@ip-172-31-34-78:~$ kubectl describe service unhinged-prawn-traefik Name: unhinged-prawn-traefik Namespace: default Labels: app=traefik chart=traefik-1.52.6 heritage=Tiller release=unhinged-prawn Annotations: &lt;none&gt; Selector: app=traefik,release=unhinged-prawn Type: LoadBalancer IP: 10.102.38.210 Port: http 80/TCP TargetPort: http/TCP NodePort: http 30680/TCP Endpoints: 10.32.0.6:80 Port: https 443/TCP TargetPort: httpn/TCP NodePort: https 32404/TCP Endpoints: 10.32.0.6:8880 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; ubuntu@ip-172-31-34-78:~$ kubectl get svc unhinged-prawn-traefik --namespace default -w NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE unhinged-prawn-traefik LoadBalancer 10.102.38.210 &lt;pending&gt; 80:30680/TCP,443:32404/TCP 24m </code></pre>
<p>I'm not sure how you installed your cluster, but basically, the <code>kube-controller-manager/kubelet/kube-apiserver</code> cannot talk to the AWS API to create a load balancer to serve traffic for your <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>. </p> <ul> <li><p>It could be just as simple as your instance missing the required <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html" rel="nofollow noreferrer">instance profile</a> with the permissions to create a load balancer and routes.</p></li> <li><p>It could also be that you need to add this flag to all your kubelets, your kube-apiserver, and your kube-controller-manager (a sketch of where to set it follows at the end of this answer):</p> <pre><code>--cloud-provider=aws
</code></pre></li> <li><p>It could also be that you are missing these <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html" rel="nofollow noreferrer">EC2 tags</a> on your instances:</p> <pre><code>KubernetesCluster=&lt;yourclustername&gt;
kubernetes.io/cluster/kubernetes=owned
k8s.io/role/node=1
</code></pre></li> </ul> <p>Note that you might also need the <code>KubernetesCluster=&lt;yourclustername&gt;</code> tag on the subnet where your nodes are.</p> <ul> <li><p>It could also be that your K8s nodes don't have a <code>ProviderID:</code> spec that looks like this:</p> <pre><code>ProviderID: aws:///&lt;aws-region&gt;/&lt;instance-id&gt;
# You can add it with kubectl edit &lt;node-name&gt;
</code></pre></li> </ul> <p>Note that the <code>--cloud-provider</code> flag is being deprecated in favor of the <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">Cloud Providers</a> controller.</p>
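<p>Where exactly the <code>--cloud-provider=aws</code> flag goes depends on how the cluster was installed; as a rough sketch for a kubeadm-style setup (paths can differ per distro and installer):</p> <pre><code># kubelet: add to KUBELET_EXTRA_ARGS (e.g. /etc/default/kubelet or the systemd drop-in)
KUBELET_EXTRA_ARGS=--cloud-provider=aws

# kube-apiserver and kube-controller-manager: add to the command list in the
# static pod manifests under /etc/kubernetes/manifests/, e.g.
#   - --cloud-provider=aws
# then restart the kubelet so the static pods are recreated
</code></pre>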
<p>I have a chart for Helm that works fine.</p> <p>I updated couple lines of "template" files to have it set up differently and ran <code>helm install -n &lt;relaese name&gt; &lt;char dir&gt;</code>.</p> <p>But I found that change never gets applied.</p> <p>when I tried <code>helm install --dry-run --debug</code>, I don't see my updates. (It might be getting the chart from remote ...)</p> <p>Does Helm cache stuff? I wasn't able to find anything about it...</p> <p>I am trying to setup hdfs on my cluster using this <a href="https://github.com/apache-spark-on-k8s/kubernetes-HDFS" rel="nofollow noreferrer">link</a></p>
<p>I found that I had to rebuild the chart dependencies after making changes.</p>
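<p>For reference, a sketch of the Helm 2 commands that refresh the packaged dependencies before installing, so that local edits get picked up:</p> <pre><code>helm dependency update &lt;chart-dir&gt;    # or: helm dependency build &lt;chart-dir&gt;
helm install -n &lt;release-name&gt; &lt;chart-dir&gt;
</code></pre>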
<p>I have a docker image that uses a volume to write files: </p> <pre><code>docker run --rm -v /home/dir:/out/ image:cli args </code></pre> <p>when I try to run this inside a pod the container exit normally but no file is written. </p> <p>I don't get it. </p> <p>The container throw errors if it does not find the volume, for example if I run it without the <code>-v</code> option it throws: </p> <pre><code>Unhandled Exception: System.IO.DirectoryNotFoundException: Could not find a part of the path '/out/file.txt'. </code></pre> <p>But I don't have any error from the container. It finishes like it wrote files, but files do not exist.</p> <p>I'm quite new to Kubernetes but this is getting me crazy. </p> <p>Does kubernetes prevent files from being written? or am I missing something obvious? </p> <p>The whole Kubernetes context is managed by GCP composer-airflow, if it helps...</p> <pre><code>docker -v: Docker version 17.03.2-ce, build f5ec1e2 </code></pre>
<blockquote> <p>when I try to run this inside a pod the container exit normally but no file is written</p> </blockquote> <p>First of all, there is no need to run the <code>docker run</code> command inside the pod :). A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">spec file</a> (yaml) should be written for the pod, and Kubernetes will run the container in the pod using docker for you. Ideally, you don't need to run <code>docker</code> commands when using Kubernetes (unless you are debugging docker-related issues).</p> <p><a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">This link</a> has useful <code>kubectl</code> commands for docker users.</p> <p>If you are used to <code>docker-compose</code>, refer to <code>Kompose</code> to go from <code>docker-compose</code> to Kubernetes (see the conversion example below):</p> <ul> <li><a href="https://github.com/kubernetes/kompose" rel="nofollow noreferrer">https://github.com/kubernetes/kompose</a></li> <li><a href="http://kompose.io" rel="nofollow noreferrer">http://kompose.io</a></li> </ul> <p>Some options for getting a host directory (or other files) into a container in Kubernetes (only <code>hostPath</code> actually exposes a directory from the host):</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a></li> <li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a></li> <li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="nofollow noreferrer">configMap</a></li> </ul>
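<p>A minimal Kompose example, assuming your compose file is named <code>docker-compose.yml</code>; it generates Kubernetes manifests that you can then apply:</p> <pre><code>kompose convert -f docker-compose.yml   # writes *-deployment.yaml / *-service.yaml files
kubectl apply -f &lt;generated-file&gt;.yaml
</code></pre>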
<p>We have a service which is fairly idle most of the time, hence it would be great for us if we could delete all the pods when the service is not getting any request for say 30 minutes, and in the next time when a new request comes Kubernetes will create the first pod and process the response.</p> <p>Is it possible to set the min pod instance count to 0?</p> <p>I found that currently, Kubernetes does not support this, is there a way I can achieve this?</p>
<p>This is not supported in Kubernetes the way it's supported by web servers like nginx, apache or app engines like <a href="https://github.com/puma/puma" rel="nofollow noreferrer">puma</a>, <a href="https://github.com/phusion/passenger" rel="nofollow noreferrer">passenger</a>, <a href="https://gunicorn.org/" rel="nofollow noreferrer">gunicorn</a>, <a href="https://bogomips.org/unicorn/" rel="nofollow noreferrer">unicorn</a> or even <a href="https://cloud.google.com/appengine/docs/standard/" rel="nofollow noreferrer">Google App Engine Standard</a>, where workers can be soft-started and only brought up the moment the first request comes in; the downside is that those first requests will always be slower. (There may have been some rationale behind Kubernetes pods not behaving this way, and I can see it requiring a lot of design changes, or a new type of workload, for this very specific case.)</p> <p>If a pod is sitting idle it would not be consuming that many resources. You could tweak your pod's <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu" rel="nofollow noreferrer">resources</a> request/limit values so that you request a small amount of CPU/memory and set the limit to a higher amount (see the snippet below). The upside of having a pod always running is that, in theory, your first requests will never have to wait a long time to get a response.</p>
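<p>As a sketch of that second suggestion (the values are placeholders, tune them for your workload): a small request keeps the idle cost low, while a higher limit lets the pod burst when traffic arrives:</p> <pre><code>resources:
  requests:
    cpu: 10m
    memory: 32Mi
  limits:
    cpu: 500m
    memory: 256Mi
</code></pre>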
<p>i use Kubernetes v1.11.3 ,it use coredns to resolve host or service name,but i find in pod ,the resolve not work correctly,</p> <pre><code># kubectl get services --all-namespaces -o wide NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 50d &lt;none&gt; kube-system calico-etcd ClusterIP 10.96.232.136 &lt;none&gt; 6666/TCP 50d k8s-app=calico-etcd kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 50d k8s-app=kube-dns kube-system kubelet ClusterIP None &lt;none&gt; 10250/TCP 32d &lt;none&gt; testalex grafana NodePort 10.96.51.173 &lt;none&gt; 3000:30002/TCP 2d app=grafana testalex k8s-alert NodePort 10.108.150.47 &lt;none&gt; 9093:30093/TCP 13m app=alertmanager testalex prometheus NodePort 10.96.182.108 &lt;none&gt; 9090:30090/TCP 16m app=prometheus </code></pre> <p>following command no response </p> <pre><code># kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash [root@k8s-monitor-7ddcb74b87-n6jsd /]# ping k8s-alert PING k8s-alert.testalex.svc.cluster.local (10.108.150.47) 56(84) bytes of data. </code></pre> <p>and no cordons output log </p> <pre><code># kubectl logs coredns-78fcdf6894-h78sd -n kube-system </code></pre> <p>i think maybe something is wrong,but i can not locate the problem,another question is why the two coredns pods on the master node,it suppose to one on each node </p> <h2><strong>UPDATE</strong></h2> <p>it seems coredns work fine ,but i do not understand the ping command no return </p> <pre><code>[root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# nslookup kubernetes.default Server: 10.96.0.10 Address: 10.96.0.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.96.0.1 [root@k8s-monitor-7ddcb74b87-n6jsd yum.repos.d]# cat /etc/resolv.conf nameserver 10.96.0.10 search testalex.svc.cluster.local svc.cluster.local cluster.local options ndots:5 # kubectl get ep kube-dns --namespace=kube-system NAME ENDPOINTS AGE kube-dns 192.168.121.3:53,192.168.121.4:53,192.168.121.3:53 + 1 more... 50d </code></pre> <p>also dns server can not be reached</p> <pre><code># kubectl exec -it k8s-monitor-7ddcb74b87-n6jsd -n testalex /bin/bash [root@k8s-monitor-7ddcb74b87-n6jsd /]# cat /etc/resolv.conf nameserver 10.96.0.10 search testalex.svc.cluster.local svc.cluster.local cluster.local options ndots:5 [root@k8s-monitor-7ddcb74b87-n6jsd /]# ping 10.96.0.10 PING 10.96.0.10 (10.96.0.10) 56(84) bytes of data. ^C --- 10.96.0.10 ping statistics --- 9 packets transmitted, 0 received, 100% packet loss, time 8000ms </code></pre> <p>i think maybe i misconfig the network this is my cluster init command </p> <pre><code> kubeadm init --kubernetes-version=v1.11.3 --apiserver-advertise-address=10.100.1.20 --pod-network-cidr=172.16.0.0/16 </code></pre> <p>and this is calico ip pool set </p> <pre><code># kubectl exec -it calico-node-77m9l -n kube-system /bin/sh Defaulting container name to calico-node. Use 'kubectl describe pod/calico-node-77m9l -n kube-system' to see all of the containers in this pod. / # cd /tmp /tmp # ls calicoctl tunl-ip /tmp # ./calicoctl get ipPool CIDR 192.168.0.0/16 </code></pre>
<p>You can start by checking whether DNS is working.</p> <p>Run nslookup on kubernetes.default from inside the pod k8s-monitor-7ddcb74b87-n6jsd and check whether it works:</p> <pre><code>[root@k8s-monitor-7ddcb74b87-n6jsd /]# nslookup kubernetes.default
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1
</code></pre> <p>If this returns output like the above, CoreDNS is working. If the output is not okay, then look into /etc/resolv.conf inside the pod k8s-monitor-7ddcb74b87-n6jsd; it should contain something like this:</p> <pre><code>[root@metrics-master-2 /]# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
</code></pre> <p>Finally, check that the CoreDNS endpoints are exposed:</p> <pre><code>kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53   1h
</code></pre> <p>You can verify whether queries are being received by CoreDNS by adding the <code>log</code> plugin to the CoreDNS configuration (aka Corefile). The CoreDNS Corefile is held in a ConfigMap named <code>coredns</code> (see the example below).</p> <p>Hope this helps.</p> <p>EDIT:</p> <p>You might be hitting this issue, please have a look:</p> <p><a href="https://github.com/kubernetes/kubeadm/issues/1056" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/1056</a></p>
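<p>As a sketch, enabling the <code>log</code> plugin means editing the ConfigMap (<code>kubectl -n kube-system edit configmap coredns</code>) and adding <code>log</code> to the server block; your default Corefile may differ slightly depending on the kubeadm version:</p> <pre><code>.:53 {
    errors
    log          # logs every query CoreDNS receives
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
</code></pre>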
<p>I have frontend container running with below php code;</p> <pre><code>&lt;?php $hn=file_get_contents('/var/secrets/hostname.txt'); $hn=str_replace("\n", "", $hn); $pass=file_get_contents('/var/secrets/password.txt'); $pass=str_replace("\n", "", $pass); $cname=$_POST['name']; $email=$_POST['email']; $hostname=$hn; $username='root'; $password=$pass; $dbname='test'; $usertable='testuser'; $con=mysqli_connect($hostname,$username, $password) OR DIE ('Unable to connect to database! Please try again later.'); mysqli_select_db($con,$dbname); $query = "select * from testuser"; $result = $con-&gt;query($query) or die("Not Updated!"); while($row = $result-&gt;fetch_assoc()) { echo "&lt;tr&gt;&lt;td&gt; " . $row["name"] . "&lt;td&gt; " . $row["email"] . "&lt;/tr&gt; "; } $con-&gt;close(); ?&gt; </code></pre> <p>I have hostname name and password saved in text file as mentioned above on frontend container. I have metioned <strong>hostname as SeriveName</strong>. Please find below sample backend service, here backend service name is "userdatabase-service". Same I have mentioned in hostname.txt on frontend container.</p> <pre><code> apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-11-13T12:01:25Z labels: app: userdatabase name: userdatabase-service namespace: default resourceVersion: "32448" selfLink: /api/v1/namespaces/default/services/userdatabase-service uid: d85cf471-e73b-11e8-8506-42010aa60fca spec: clusterIP: 10.7.255.80 externalTrafficPolicy: Cluster ports: - nodePort: 30198 port: 80 protocol: TCP targetPort: 3306 selector: app: userdatabase sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 35.228.105.176 </code></pre> <p>But while accessing frontend service which pulls record from database gives message: Unable to connect to database! Please try again later.</p> <p>I tried with Ingress Ip but still it gives same error. Can you please guide here? Regards, Vikas</p>
<p>This was solved in the comments by @<a href="https://stackoverflow.com/users/225016/matthew-l-daniel">Matthew L Daniel</a>. The connection string was being formed with 'userdatabase-service' but without any explicit port. It worked when the port 80 was added to the connection string. </p>
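<p>For reference, a sketch of what the fixed connection call could look like: <code>mysqli_connect()</code> accepts the port as its fifth argument, and here it has to be the Service port (80), which the Service then maps to the container's 3306:</p> <pre><code>$con = mysqli_connect($hostname, $username, $password, $dbname, 80)
    OR DIE('Unable to connect to database! Please try again later.');
</code></pre>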
<p>I deployed docker Linux to gcloud gke pod.</p> <p>I added the code bellow, trying to set up the time zone in the dockerfile. This code is running correctly in a local docker. But it does not work in gcloud gke pod. The timezones are in local PST, timezones in GKE Pod are still in UTC. Please help!</p> <pre><code>ENV TZ=America/Los_Angeles RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone </code></pre>
<p>I'm not sure how this is working in your local environment. It looks like you are missing this step (on Ubuntu/Debian):</p> <pre><code>dpkg-reconfigure -f noninteractive tzdata
</code></pre> <p>So in summary, something like this:</p> <pre><code>echo America/Los_Angeles &gt;/etc/timezone &amp;&amp; \
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime &amp;&amp; \
dpkg-reconfigure -f noninteractive tzdata
</code></pre> <p>This <a href="https://www.ivankrizsan.se/2015/10/31/time-in-docker-containers/" rel="nofollow noreferrer">blog</a> has a very good explanation, including how to do it on <a href="https://alpinelinux.org/" rel="nofollow noreferrer">Alpine Linux</a>.</p>
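<p>Putting it together in Dockerfile form, a sketch that assumes a Debian/Ubuntu base image where the <code>tzdata</code> package may still need to be installed:</p> <pre><code>ENV TZ=America/Los_Angeles
RUN apt-get update &amp;&amp; apt-get install -y tzdata \
 &amp;&amp; ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
 &amp;&amp; echo $TZ &gt; /etc/timezone \
 &amp;&amp; dpkg-reconfigure -f noninteractive tzdata
</code></pre>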
<p>I'm following the official Helm <a href="https://docs.helm.sh/using_helm/#role-based-access-control" rel="nofollow noreferrer">documentation</a> for "Deploy Tiller in a namespace, restricted to deploying resources only in that namespace". Here is my bash script:</p> <pre><code>Namespace="$1" kubectl create namespace $Namespace kubectl create serviceaccount "tiller-$Namespace" --namespace $Namespace kubectl create role "tiller-role-$Namespace" / --namespace $Namespace / --verb=* / --resource=*.,*.apps,*.batch,*.extensions kubectl create rolebinding "tiller-rolebinding-$Namespace" / --namespace $Namespace / --role="tiller-role-$Namespace" / --serviceaccount="$Namespace:tiller-$Namespace" helm init / --service-account "tiller-$Namespace" / --tiller-namespace $Namespace --override "spec.template.spec.containers[0].command'='{/tiller,--storage=secret}" --upgrade --wait </code></pre> <p>Running <code>helm upgrade</code> gives me the following error:</p> <blockquote> <p>Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"</p> </blockquote> <p>Is there a bug in the official documentation? Have I read it wrong?</p>
<p>I'm not sure the <code>--resource</code> flag syntax in your script is correct, in particular whether asterisk symbols "*" are allowed there; have a look at this <a href="https://github.com/kubernetes/kubernetes/issues/62989" rel="nofollow noreferrer">issue</a> reported on GitHub.</p> <pre><code>$ kubectl create role "tiller-role-$Namespace" \
    --namespace $Namespace \
    --verb=* \
    --resource=*.,*.apps,*.batch,*.extensions
the server doesn't have a resource type "*"
</code></pre> <p>But you can check this role object in your cluster:</p> <p><code>kubectl get role tiller-role-$Namespace -n $Namespace -o yaml</code></p> <p>Otherwise, try to create the role for <code>tiller</code> from a YAML file, as guided in the documentation:</p> <pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
</code></pre> <p>Moreover, keep in mind that if you have installed <code>tiller</code> in a non-default namespace, you need to specify the namespace where <code>tiller</code> resides whenever you invoke a <code>Helm</code> command:</p> <pre><code>$ helm --tiller-namespace $Namespace version
Client: &amp;version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &amp;version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
</code></pre>
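<p>For completeness, the documentation pairs that Role with a RoleBinding for the tiller service account; a sketch following the same <code>tiller-world</code> example names (adjust them to your namespace and service-account naming scheme):</p> <pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
</code></pre>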
<p>The plan is to move my dockerized application to Kubernetes.</p> <p>The docker container uses couple of files - which I used to mount on the docker volumes by specifying in the docker-compose file:</p> <pre><code>volumes: - ./license.dat:/etc/sys0/license.dat - ./config.json:/etc/sys0/config.json </code></pre> <p>The config file would be different for different environments, and the license file would be the same across.</p> <p>How do I define this in a helm template file (yaml) so that it is available for the running application?</p> <p>What is generally the best practise for this? Is it also possible to define the configuration values in values.yaml and the config.json file could get it?</p>
<p>Since you are dealing with json a good example to follow might be the <a href="https://github.com/helm/charts/tree/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/stable/centrifugo" rel="noreferrer">official stable/centrifugo chart</a>. It defines a ConfigMap that contains a config.json file:</p> <pre><code>data: config.json: |- {{ toJson .Values.config| indent 4 }} </code></pre> <p>So it takes a <code>config</code> section from the values.yaml and transforms it to json using the toJson function. The config can be whatever you want define in that yaml - the chart has:</p> <pre><code>config: web: true namespaces: - name: public anonymous: true publish: true ... </code></pre> <p>In the deployment.yaml it <a href="https://github.com/helm/charts/blob/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/stable/centrifugo/templates/deployment.yaml#L67" rel="noreferrer">creates a volume from the configmap</a>:</p> <pre><code> volumes: - name: {{ template "centrifugo.fullname" . }}-config configMap: name: {{ template "centrifugo.fullname" . }}-config </code></pre> <p>Note that <code>{{ template "centrifugo.fullname" . }}-config</code> <a href="https://github.com/helm/charts/blob/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/stable/centrifugo/templates/configmap.yaml#L4" rel="noreferrer">matches the name of the ConfigMap</a>.</p> <p>And <a href="https://github.com/helm/charts/blob/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/stable/centrifugo/templates/deployment.yaml#L50" rel="noreferrer">mounts it into the deployment's pod/s</a>:</p> <pre><code> volumeMounts: - name: "{{ template "centrifugo.fullname" . }}-config" mountPath: "/centrifugo" readOnly: true </code></pre> <p>This approach would let you populate the json config file from the values.yaml so that you can set different values for different environments by supplying custom values file per env to override the default one in the chart. </p> <p>To handle the license.dat you can add an extra entry to the ConfigMap to define an additional file but with static content embedded. Since that is a license you may want to switch the ConfigMap to a Secret instead, which is a simple change of replacing the word ConfigMap for Secret in the definitions. You could try it with ConfigMap first though.</p>
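<p>For the license, a sketch of what such a Secret template could look like, assuming <code>license.dat</code> is placed inside the chart directory so that <code>.Files.Get</code> can read it; the helper name <code>mychart.fullname</code> is a placeholder for whatever your chart defines:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mychart.fullname" . }}-license
type: Opaque
data:
  license.dat: {{ .Files.Get "license.dat" | b64enc | quote }}
</code></pre> <p>You would then mount it into the pod the same way as the ConfigMap above, just with <code>secret:</code> instead of <code>configMap:</code> in the volume definition.</p>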
<p>Currently, I'm running a Kubernetes cluster on GCloud K8s Engine.</p> <p>Now I'm running an OpenVPN Server on there to create a network where multiple clients can talk together in a client-to-client fashion. If I'm using just a single VPN server it also already works. The client can connect to the K8s Pod and communicate with other clients or even the server itself.</p> <p>However now I want to make that a little bit more available and want to have at least two servers which means I create another VPN network, which is relatively simple, by using the same configuration I used in server1 (I just need to adjust the Subnet).</p> <p>But the tricky part is how can I make it happen that both pods can correctly route the networks?</p> <p>i.e. I have the VPN networks <code>172.40.0.0/16</code> (Pod 1) and <code>172.41.0.0/16</code> (Pod 2). Does K8s or GCloud have any way of announcing the VPN network so that the pods will correctly route requests from <code>172.40.0.0/16</code> to <code>172.41.0.0/16</code></p> <p>(OpenVPN will have both routes pushed to the client, so either Pod 1 will be the gateway or Pod 2)</p> <p>I wouldn't bother writing code so that I can correctly communicate with the pods i.e. if I create a GCloud Route with the POD IP as a gateway with the networks would that work?</p>
<blockquote> <p>Does K8s or GCloud have any way of announcing the VPN network so that the pods will correctly route requests from 172.40.0.0/16 to 172.41.0.0/16</p> </blockquote> <p>Kubernetes doesn't have any such mechanisms. However, you could look at <a href="https://docs.projectcalico.org/v2.0/usage/configuration/bgp" rel="nofollow noreferrer">BGP peering</a> with <a href="https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/" rel="nofollow noreferrer">Calico</a> as an overlay.</p> <p>The other option, I guess, is to create manual routes on both servers that point to each other, so that traffic flows both ways (see the sketch below). Traffic to the PodCidr is going to be trickier because it's generally masqueraded with <a href="https://en.wikipedia.org/wiki/Iptables" rel="nofollow noreferrer">iptables</a> and in a Kubernetes cluster the PodCidr is cluster-wide.</p>
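<p>A sketch of the manual-route option; the pod IPs are placeholders, the containers need <code>NET_ADMIN</code> to change their routing tables, and pod IPs change on reschedule, so pushing both routes from the OpenVPN configs or peering via BGP is more robust:</p> <pre><code># inside the pod serving 172.40.0.0/16
ip route add 172.41.0.0/16 via &lt;ip-of-pod-2&gt;

# inside the pod serving 172.41.0.0/16
ip route add 172.40.0.0/16 via &lt;ip-of-pod-1&gt;
</code></pre>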
<p>I have created a local ubuntu Kubernetes cluster, having 1 master and 2 slave nodes.</p> <p>I deployed 2 applications in 2 pods and created service for both of the pods, it's working fine. I entered inside pod by typing this command ,</p> <pre><code>$ kubectl exec -it firstpod /bin/bash # apt-get update </code></pre> <p>Unable to make update and I'm getting an error:</p> <pre><code>Err http://security.debian.org jessie/updates InRelease Err http://deb.debian.org jessie InRelease Err http://deb.debian.org jessie-updates InRelease Err http://security.debian.org jessie/updates Release.gpg Temporary failure resolving 'security.debian.org' Err http://deb.debian.org jessie-backports InRelease Err http://deb.debian.org jessie Release.gpg Temporary failure resolving 'deb.debian.org' Err http://deb.debian.org jessie-updates Release.gpg Temporary failure resolving 'deb.debian.org' Err http://deb.debian.org jessie-backports Release.gpg Temporary failure resolving 'deb.debian.org' Reading package lists... Done W: Failed to fetch http://deb.debian.org/debian/dists/jessie/InRelease W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/InRelease W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease W: Failed to fetch http://deb.debian.org/debian/dists/jessie-backports/InRelease W: Failed to fetch http://security.debian.org/dists/jessie/updates/Release.gpg Temporary failure resolving 'security.debian.org' W: Failed to fetch http://deb.debian.org/debian/dists/jessie/Release.gpg Temporary failure resolving 'deb.debian.org' W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/Release.gpg Temporary failure resolving 'deb.debian.org' W: Failed to fetch http://deb.debian.org/debian/dists/jessie-backports/Release.gpg Temporary failure resolving 'deb.debian.org' W: Some index files failed to download. They have been ignored, or old ones used instead. </code></pre> <p>I'm trying to ping my second pod service:</p> <pre><code># ping secondservice (This is the service name of secondpod) PING secondservice.default.svc.cluster.local (10.100.190.196): 56 data bytes unable to ping. </code></pre> <p>How can I ping/call the second service from the first node?</p>
<p>There are two (unrelated) questions I see there. I'm going to focus on the second one since the first is unclear to me (what is the ask?).</p> <p>So, you wonder why the following doesn't work:</p> <pre><code># ping secondservice </code></pre> <p>This is not a bug or unexpected (actually, I wrote about it <a href="https://blog.openshift.com/kubernetes-services-by-example/" rel="nofollow noreferrer">here</a>). In short: the FQDN <code>secondservice.default.svc.cluster.local</code> gets resolved via the DNS plugin to a virtual IP (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">VIP</a>), the very essence of this VIP is that it is virtual, that is, it's not attached to a network interface, it's just a bunch of iptables rules. Hence, the ICMP-based ping has nothing to work against, since it's not a 'real' IP. You can <code>curl</code> the service, though. Assuming the service runs on port 9876, the following should work:</p> <pre><code># curl secondservice:9876 </code></pre>
<p>I am working through the persistent disks tutorial found <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk" rel="noreferrer">here</a> while also creating it as a StatefulSet instead of a deployment.</p> <p>When I run the yaml file into GKE the database fails to start, looking at the logs it has the following error.</p> <blockquote> <p>[ERROR] --initialize specified but the data directory has files in it. Aborting.</p> </blockquote> <p>Is it possible to inspect the volume created to see what is in the directory? Otherwise, what am I doing wrong that is causing the disk to be non empty?</p> <p>Thanks</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: datalayer-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi --- apiVersion: v1 kind: Service metadata: name: datalayer-svc labels: app: myapplication spec: ports: - port: 80 name: dbadmin clusterIP: None selector: app: database --- apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: datalayer spec: selector: matchLabels: app: myapplication serviceName: "datalayer-svc" replicas: 1 template: metadata: labels: app: myapplication spec: terminationGracePeriodSeconds: 10 containers: - name: database image: mysql:5.7.22 env: - name: "MYSQL_ROOT_PASSWORD" valueFrom: secretKeyRef: name: mysql-root-password key: password - name: "MYSQL_DATABASE" value: "appdatabase" - name: "MYSQL_USER" value: "app_user" - name: "MYSQL_PASSWORD" value: "app_password" ports: - containerPort: 3306 name: mysql volumeMounts: - name: datalayer-pv mountPath: /var/lib/mysql volumes: - name: datalayer-pv persistentVolumeClaim: claimName: datalayer-pvc </code></pre>
<p>This issue could be caused by the <code>lost+found</code> directory on the filesystem of the PersistentVolume.</p> <p>I was able to verify this by adding a <code>k8s.gcr.io/busybox</code> container (in PVC set <code>accessModes: [ReadWriteMany]</code>, OR comment out the <code>database</code> container):</p> <pre><code>- name: init image: "k8s.gcr.io/busybox" command: ["/bin/sh","-c","ls -l /var/lib/mysql"] volumeMounts: - name: database mountPath: "/var/lib/mysql" </code></pre> <p>There are a few potential workarounds...</p> <p>Most preferable is to use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer"><code>subPath</code></a> on the <code>volumeMounts</code> object. This uses a subdirectory of the PersistentVolume, which should be empty at creation time, instead of the volume root:</p> <pre><code>volumeMounts: - name: database mountPath: "/var/lib/mysql" subPath: mysql </code></pre> <p>Less preferable workarounds include:</p> <ul> <li>Use a one-time container to <code>rm -rf /var/lib/mysql/lost+found</code> (not a great solution, because the directory is managed by the filesystem and is likely to re-appear)</li> <li>Use <code>mysql:5</code> image, and add <code>args: ["--ignore-db-dir=lost+found"]</code> to the container (this option was removed in mysql 8)</li> <li>Use <code>mariadb</code> image instead of <code>mysql</code></li> </ul> <p>More details might be available at docker-library/mysql issues: <a href="https://github.com/docker-library/mysql/issues/69" rel="noreferrer">#69</a> and <a href="https://github.com/docker-library/mysql/issues/186" rel="noreferrer">#186</a></p>
<p>In a vmware environment, should the external address become populated with the VM's (or hosts) ip address? </p> <p>I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run <code>kubectl get nodes -o wide</code>. It is my understanding that the "cloud provider" plugin (GCP, AWS, Vmware, etc) is what assigns the public ip address to the node. </p> <p>KOPS deployed to GCP = external address is the real public IP addresses of the nodes. </p> <p>Kubeadm deployed to vwmare, using vmware cloud provider = external address is the same as the internal address (a private range). </p> <p>Kubeadm deployed, NO cloud provider = no external ip. </p> <p>I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that is finds, using the "external ip". This only works with my first two clusters. </p> <p>My tool runs on the local network of the clusters, should it be targeting the "internal ip" instead? In other words, is the internal ip ALWAYS the IP address of the VM or physical host (when installed on bare metal). </p> <p>Thank you</p>
<p>Baremetal will not have an "external-IP" for the nodes and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.</p> <p>When using k8s on baremetal, the external IP and loadbalancer functions don't natively exist. If you want to expose an "External IP" (in quotes because in most cases it would still be a 10.X.X.X address) from your baremetal cluster, you would need to install something like MetalLB (a sample config follows below). </p> <p><a href="https://github.com/google/metallb" rel="nofollow noreferrer">https://github.com/google/metallb</a></p>
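<p>As a sketch, MetalLB in layer-2 mode just needs a ConfigMap with a pool of addresses from your LAN to hand out to <code>LoadBalancer</code> Services. The range below is an example, and the exact config format depends on the MetalLB release, so check its docs:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>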
<p>Does anyone know how to override what seems to be the default LimitRange in GKE which sets the default request for CPU to 100m? </p> <p>I've previously updated the limit to be 10m which is still overkill but better than the default using the following manifest; </p> <pre><code>apiVersion: v1 kind: LimitRange metadata: name: limits namespace: default spec: limits: - defaultRequest: cpu: 10m type: Container </code></pre> <p>This has since been overwritten back to 100m. Can I disable this behaviour? </p> <p>Clearly I could update my manifest file to always include the request amount on containers but I'm interested to understand how this works in GKE and if it's documented. </p>
<p>From reading the <a href="https://github.com/pachyderm/pachyderm/issues/1892" rel="nofollow noreferrer">attached</a> GitHub issue, there is a LimitRange set within the default namespace on GKE. If you would like a different default, you can create your own LimitRange in a non-default namespace; you can read about how to do that <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">here</a>. </p>
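<p>A sketch of that approach, reusing the manifest from the question but in a dedicated namespace (the namespace name is a placeholder; create it first with <code>kubectl create namespace my-app</code>):</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: my-app
spec:
  limits:
  - defaultRequest:
      cpu: 10m
    type: Container
</code></pre> <p>Pods deployed into <code>my-app</code> then pick up the 10m default instead of GKE's 100m.</p>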
<p>I have a Kubernetes Cluster and want to know how much disk space my containers use. I am not talking about mounted Volumes.</p> <p>I can get this information by using docker commands like <code>docker system df -v</code> or <code>docker ps -s</code>, but I don't want to connect to every single worker node.</p> <p>Is there a way to get a container's disk usage via <code>kubectl</code> or are there kubelet metrics where I can get this information from?</p>
<p>Yes, but currently not with kubectl. You can get metrics from the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet</a>, either through the kube-apiserver (proxied) or by directly calling the kubelet HTTP(S) server endpoint (default port <code>10250</code>). Disk metrics are generally available on the <code>/stats/summary</code> endpoint and you can also find some <a href="https://github.com/google/cadvisor" rel="noreferrer">cAdvisor</a> metrics on the <code>/metrics/cadvisor</code> endpoint.</p> <p>For example, to get the 'usedBytes' for the first container in the first pod returned through the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">kube-apiserver</a>:</p> <pre><code>$ curl -k -s -H 'Authorization: Bearer &lt;REDACTED&gt;' \
    https://kube-apiserver:6443/api/v1/nodes/&lt;node-name&gt;/proxy/stats/summary \
    | jq '.pods[0].containers[0].rootfs.usedBytes'
</code></pre> <p>The Bearer token can be a service account token tied to a ClusterRole like this:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
  name: myrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  - /api/*
  verbs:
  - get
</code></pre>
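<p>A sketch of wiring that up; the account and binding names are placeholders, and this relies on the auto-created service account token Secret of clusters from that era:</p> <pre><code>kubectl create serviceaccount metrics-reader
kubectl create clusterrolebinding metrics-reader \
  --clusterrole=myrole --serviceaccount=default:metrics-reader

TOKEN=$(kubectl get secret \
  $(kubectl get sa metrics-reader -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode)

curl -k -s -H "Authorization: Bearer $TOKEN" \
  https://&lt;node-name-or-ip&gt;:10250/stats/summary | jq '.pods[].podRef.name'
</code></pre>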
<p>I am using <strong>Helm</strong> as for <strong>Kubernetes</strong> deployment (Grafana and Prometheus) specifically. I have specified <strong>values.yaml</strong> files for both of them. It works amazingly.</p> <p>Since I have changed <strong>Grafana</strong> datasource from default <strong>sqlite3</strong> to <strong>PostgreSQL</strong> - data-source configuration is now stored in <strong>PostgreSQL database</strong>.</p> <p>Well, the problem is that in my <strong>values.yaml file* for **Grafana</strong> I have specified datasource as following:</p> <pre><code>datasources: {} datasources.yaml: apiVersion: 1 datasources: - name: on-premis type: prometheus url: http://prom-helmf-ns-monitoring-prometheus-server access: direct isDefault: true ... ... grafana.ini: paths: data: /var/lib/grafana/data logs: /var/log/grafana plugins: /var/lib/grafana/plugins analytics: check_for_updates: true log: mode: console grafana_net: url: https://grafana.net database: ## You can configure the database connection by specifying type, host, name, user and password ## # as separate properties or as on string using the URL property. ## # Either "mysql", "postgres" or "sqlite3", it's your choice type: postgres host: qa.com:5432 name: grafana user: grafana # If the password contains # or ; you have to wrap it with trippel quotes. Ex """#password;""" password: passwd ssl_mode: disable </code></pre> <p>Unfortunately this does not take an effect and I have to configure connection to in <strong>Grafana web interface</strong> manually - which is not what I need. How do I specify this section correctly?</p> <pre><code>datasources: {} datasources.yaml: apiVersion: 1 datasources: - name: on-premis type: prometheus url: http://prom-helmf-ns-monitoring-prometheus-server access: direct isDefault: true </code></pre>
<p>Remove the '{}' after the <code>datasources</code> section, like this:</p> <pre><code>datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server
      access: proxy
      isDefault: true
</code></pre>
<p>Can I add some config so that my daemon pods start before other pods can be scheduled or nodes are designated as ready?</p> <p>Adding post edit:</p> <p>These are 2 different pods altogether, the daemonset is a downstream dependency to any pods that might get scheduled on the host. </p>
<p>There's no such thing as a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">Pod hierarchy</a> in Kubernetes between multiple separate types of pods, meaning pods belonging to different Deployments, StatefulSets, DaemonSets, etc. In other words, there is no notion of a master pod and child pods. If you'd like to create your own custom hierarchy, you can build your own tooling around it, for example waiting for the status of all pods in a DaemonSet before you start or create a new Pod or other Kubernetes workload resource (see the init-container sketch below).</p> <p>The closest thing in terms of pod dependency in K8s is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a>.</p> <p>As per the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</p> </blockquote>
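<p>One common workaround for "daemon first" ordering is an init container in the dependent pods that blocks until the node-local daemon answers. A sketch, assuming the daemon exposes some port on the host (9999 here is purely hypothetical, as are the image names):</p> <pre><code>spec:
  initContainers:
  - name: wait-for-daemon
    image: busybox
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command: ['sh', '-c', 'until nc -z "$HOST_IP" 9999; do echo waiting for daemon; sleep 2; done']
  containers:
  - name: app
    image: my-app:latest
</code></pre>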
<p>I'm creating a cluster with <code>kubeadm init --with-stuff</code> (Kubernetes 1.8.4, for reasons). I can setup nodes, <code>weave</code>, etc. But I have a problem setting the cluster name. When I open the <code>admin.conf</code> or a different config file I see:</p> <pre><code>name: kubernetes </code></pre> <p>When I run <code>kubectl config get-clusters</code>:</p> <pre><code>NAME kubernetes </code></pre> <p>Which is the default. Is there a way to set the cluster name during <code>init</code> (there is no command line parameter)? Or is there a way to change this after the <code>init</code>? The current <code>name</code> is referenced in many files in <code>/etc/kubernetes/</code></p> <p>Best Regrads<br> Kamil</p>
<p>You can now do so using kubeadm's config file. PR here:</p> <p><a href="https://github.com/kubernetes/kubernetes/pull/60852" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/60852</a></p> <p>Using the kubeadm config file, you just set the following at the top level:</p> <pre><code>clusterName: kubernetes
</code></pre>
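<p>A sketch of what that config file could look like with a recent kubeadm; the <code>apiVersion</code>/<code>kind</code> differ between kubeadm releases, and the option is not available in 1.8.4, so this only helps once you can move to a newer kubeadm:</p> <pre><code># kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: my-cluster

# then: kubeadm init --config kubeadm.yaml
</code></pre>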
<p><strong>Kubernetes newbie (or rather basic networking) question:</strong> Installed single node minikube (0.23 release) on a ubuntu box running in my lan (on IP address 192.168.0.20) with virtualbox. </p> <p>minikube start command completes successfully as well</p> <pre><code>minikube start Starting local Kubernetes v1.8.0 cluster... Starting VM... Getting VM IP address... Moving files into cluster... Setting up certs... Connecting to cluster... Setting up kubeconfig... Starting cluster components... Kubectl is now configured to use the cluster. </code></pre> <p>minikube dashboard also comes up successfully. (running on 192.168.99.100:30000)</p> <p>what i want to do is access minikube dashboard from my macbook (running on 192.168.0.11) in the same LAN.</p> <p>Also I want to access the same minikube dashboard from the internet. </p> <p><strong>For LAN Access:</strong> Now from what i understand i am using virtualbox (the default vm option), i can change the networking type (to NAT with port forwarding) using vboxnet command </p> <pre><code>VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,,22" </code></pre> <p>as listed <a href="https://superuser.com/questions/901422/virtualbox-command-line-setting-up-port-forwarding">here</a></p> <p>In my case it will be something like this </p> <pre><code>VBoxManage modifyvm "VM name" --natpf1 "guesthttp,http,,30000,,8080" </code></pre> <p>Am i thinking along the right lines here?</p> <p><strong>Also for remotely accessing</strong> the same minikube dashboard address, i can setup a no-ip.com like service. They asked to install their utility on linux box and also setup port forwarding in the router settings which will port forward from host port to guest port. Is that about right? Am i missing something here?</p>
<p>I was able to get this running with something as simple as:</p> <pre><code>kubectl proxy --address='0.0.0.0' --disable-filter=true
</code></pre>
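<p>With the proxy listening on all interfaces of the Ubuntu box (default port 8001), the dashboard can then be reached from the MacBook through the apiserver proxy path. The exact service name and namespace depend on the minikube version, so treat the URL below as a sketch:</p> <pre><code># on the ubuntu box (192.168.0.20)
kubectl proxy --address='0.0.0.0' --disable-filter=true --port=8001

# from the macbook (192.168.0.11)
open http://192.168.0.20:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
</code></pre>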
<p>I'm new to kubernetes and am still trying to extract log from a few lines and write it, if anyone can help me what commands i should execute.</p> <p>If the pod is named bino, and i wanted to extract the lines corresponding to the error unable-to-access-website, and then write them to a certain location, say John/Doe/bino. How would i do this is there a easy command?</p> <p>I tried using kubectl log bino, but it just dumps all the output on the terminal, if i wanted to write certain parts how can i do it? Thanks!</p> <p>Or if anyone has played around in katacoda i would appreciate a link to a similar example.</p>
<p>You can use grep in Linux to fetch the relevant log messages you want:</p> <pre><code>kubectl logs bino | grep "error unable-to-access-website" &gt;&gt; John/Doe/bino/log.txt </code></pre> <p>Hope this helps.</p>
<p>I migrated our docker registry that was running on an external dedicated server into our Kubernetes cluster </p> <p>Now I can still push and pull images to the registry from every external machine but when I try to deploy images from the registry to the Kubernetes cluster itself it is not able to pull it. I get the following error log:</p> <pre><code> Warning Failed 47s (x3 over 1m) kubelet, gke-kube-1-default-pool-c5e11d0f-zxm8 Failed to pull image "myregistry.example.com/appimage:1": rpc error: code = Unknown desc = Error response from daemon: Get https://myregistry.example.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Warning Failed 47s (x3 over 1m) kubelet, gke-kube-1-default-pool-c5e11d0f-zxm8 Error: ErrImagePull </code></pre> <p>The registry is configured to be accessible via <a href="https://myregistry.example.com" rel="noreferrer">https://myregistry.example.com</a> by a traefik ingress controller and it looks like Kubernetes is internally trying to take a different route?</p>
<p>It's an odd path, but it should work; taking a wild guess, it looks like a <a href="https://github.com/docker/for-win/issues/611" rel="nofollow noreferrer">DNS issue</a>. (It works for me when connecting to an externally facing service.) Some things to look at (see the example commands below the list):</p> <ul> <li>Can you resolve <code>myregistry.example.com</code> from any other running pod?</li> <li>What does the pod's <code>/etc/resolv.conf</code> look like?</li> <li>What about your K8s nodes' <code>/etc/resolv.conf</code>?</li> <li>Can you resolve <code>myregistry.example.com</code> from your nodes?</li> </ul>
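<p>One quick way to run the first two checks, assuming a throwaway debug pod is acceptable (busybox here is just an arbitrary small image):</p> <pre><code># start a temporary pod with an interactive shell
kubectl run -it --rm dns-debug --image=busybox --restart=Never -- sh

# then, inside the pod:
nslookup myregistry.example.com
cat /etc/resolv.conf
</code></pre>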
<p>I followed this post <a href="https://stackoverflow.com/questions/44325048/kubernetes-configmap-only-one-file">Kubernetes configMap - only one file</a> to pass a config file to a deployment, but got an error. Why?</p> <p>The config file <code>config-prom-prometheus.yml</code>:</p> <pre><code>scrape_configs: - job_name: job-leo-prometheus kubernetes_sd_configs: - role: endpoints </code></pre> <p>The .yaml file <code>prom-prometheus.yaml</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: prom-prometheus-deployment spec: selector: matchLabels: app: prom-prometheus replicas: 1 template: metadata: labels: app: prom-prometheus spec: containers: - name: prom-prometheus image: 127.0.0.1:30400/prom/prometheus ports: - name: port9090 containerPort: 9090 volumeMounts: - name: volume-prometheus mountPath: /etc/prometheus/prometheus.yml subPath: prometheus.yml volumes: - name: volume-prometheus configMap: name: config-prom --- apiVersion: v1 kind: Service metadata: name: prom-prometheus spec: type: NodePort ports: - name: port9090 protocol: TCP port: 9090 targetPort: 9090 nodePort: 30090 selector: app: prom-prometheus </code></pre> <p>Commands:</p> <pre><code>kubectl create configmap config-prom --from-file=config-prom-prometheus.yml kubectl -f prom-prometheus.yaml apply </code></pre> <p>Results:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 17s default-scheduler Successfully assigned prom-prometheus-deployment-66887dcdbf-bfqd4 to minikube Normal SuccessfulMountVolume 17s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-ml6w5" Normal SuccessfulMountVolume 17s kubelet, minikube MountVolume.SetUp succeeded for volume "volume-prometheus" Warning Failed 9s kubelet, minikube Error: failed to start container "prom-prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/ec99da92-e994-11e8-a578-08002742f2a3/volume-subpaths/volume-prometheus/prom-prometheus/0\\\" to rootfs \\\"/var/lib/docker/overlay2/12c7da1c07c55fe2ec5dff61e5c457fa8aeaa32d47232c28a1d7e127c4f81bf0/merged\\\" at \\\"/var/lib/docker/overlay2/12c7da1c07c55fe2ec5dff61e5c457fa8aeaa32d47232c28a1d7e127c4f81bf0/merged/etc/prometheus/prometheus.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? 
Check if the specified host path exists and is the expected type Normal Pulling 7s (x2 over 13s) kubelet, minikube pulling image "127.0.0.1:30400/prom/prometheus" Normal Pulled 7s (x2 over 13s) kubelet, minikube Successfully pulled image "127.0.0.1:30400/prom/prometheus" Normal Created 6s (x2 over 10s) kubelet, minikube Created container Warning Failed 4s kubelet, minikube Error: failed to start container "prom-prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/ec99da92-e994-11e8-a578-08002742f2a3/volume-subpaths/volume-prometheus/prom-prometheus/0\\\" to rootfs \\\"/var/lib/docker/overlay2/7b07728ae4439e4d479386eab6b042948e2cb586c54171941f24d03352a7c8b4/merged\\\" at \\\"/var/lib/docker/overlay2/7b07728ae4439e4d479386eab6b042948e2cb586c54171941f24d03352a7c8b4/merged/etc/prometheus/prometheus.yml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type </code></pre>
<p>This is not well documented, but in my experience the file name you pass to <code>--from-file</code> (config-prom-prometheus.yml in your case) becomes the key inside the ConfigMap, and that key has to match the file name used in <code>mountPath</code>/<code>subPath</code> in the <code>Deployment</code>.</p> <p>Since you use <code>subPath: prometheus.yml</code>, rename <code>config-prom-prometheus.yml</code> to <code>prometheus.yml</code>, recreate the ConfigMap, and try again.</p>
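<p>Alternatively (a small untested variation), you can keep the original file name on disk and just set the ConfigMap key explicitly so that it matches the <code>subPath</code>:</p> <pre><code>kubectl create configmap config-prom --from-file=prometheus.yml=config-prom-prometheus.yml
</code></pre>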
<p>I have Prometheus running in k8s. Could you please advise how I can change the running config <code>prometheus.yaml</code> in the cluster? I simply want to change:</p> <pre><code>scrape_configs: - job_name: my-exporter scrape_interval: 15s scrape_timeout: 10s metrics_path: /metrics scheme: http </code></pre> <p>How can I do this?</p> <p>Thanks.</p>
<p>The recommended way is to provide the <code>prometheus.yml</code> via a ConfigMap. That way changes in the ConfigMap will be propagated into the pod that consumes the configMap. However, that is not enough for prometheus to pick up the new config.</p> <p>Prometheus supports <a href="https://www.robustperception.io/reloading-prometheus-configuration" rel="noreferrer">runtime reload of the config</a>, so that you don't need to stop prometheus in order to pickup the new config. You can either do that manually by sending a POST request as described in the link above, or automate this process by having a sidecar container inside the same prometheus pod that watch for updates to the config file and does the reload POST request.</p> <p>The following is an example on the second approach: <a href="https://www.weave.works/blog/prometheus-configmaps-continuous-deployment/" rel="noreferrer">prometheus-configmaps-continuous-deployment</a> </p>
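<p>For reference, the manual reload mentioned above is just an HTTP POST to Prometheus' <code>/-/reload</code> endpoint (on Prometheus 2.x this requires the server to be started with <code>--web.enable-lifecycle</code>). A rough, untested sketch using port-forwarding, with the pod name and namespace as placeholders:</p> <pre><code># forward the Prometheus port to your machine
kubectl port-forward &lt;prometheus-pod&gt; 9090:9090 -n &lt;namespace&gt; &amp;

# ask Prometheus to re-read its configuration
curl -X POST http://localhost:9090/-/reload
</code></pre>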
<p>I am running pyspark application, v2.4.0, on Kubernetes, my spark application depends on numpy and tensorflow modules, please suggest the way to add these dependencies to Spark executors.</p> <p>I have checked the documentation, we can include the remote dependencies using --py-files, --jars etc. but nothing mentioned about library dependencies.</p>
<p>Found a way to add the library dependencies to Spark applications on K8S, thought of sharing it here.</p> <p>Add the required dependency installation commands to the Dockerfile and rebuild the Spark image; when we submit the Spark job, the new container will be instantiated with the dependencies as well.</p> <p>Dockerfile (/{spark_folder_path}/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/bindings/python/Dockerfile) contents:</p> <pre><code>RUN apk add --no-cache python &amp;&amp; \ apk add --no-cache python3 &amp;&amp; \ python -m ensurepip &amp;&amp; \ python3 -m ensurepip &amp;&amp; \ # We remove ensurepip since it adds no functionality since pip is # installed on the image and it just takes up 1.6MB on the image rm -r /usr/lib/python*/ensurepip &amp;&amp; \ pip install --upgrade pip setuptools &amp;&amp; \ # You may install python3 packages by using pip3.6 pip install numpy &amp;&amp; \ # Removed the .cache to save space rm -r /root/.cache </code></pre>
<p>I'm using AWS' EKS which is Kubernetes v1.10 and I'm using client-go v7.0.0.</p> <p>What I'm trying to do is parse a .yml file with multiple Kubernetes resource definitions in a file and submit those resources to the Kubernetes API. I can successfully parse the files using this code <code>scheme.Codecs.UniversalDeserializer().Decode</code>, and I get back an array of <code>runtime.Object</code>.</p> <p>I know that all the Kubernetes resources conform to the <code>runtime.Object</code> interface, but I can't find a way to submit the generic interface to the API. Most methods I've seen use the methods on the concrete types like Deployment, Pod, etc.</p> <p>I've seen some code around a generic RESTClient like this <code>clientset.RESTClient().Put().Body(obj).Do()</code>, but that doesn't work and I can't figure it out.</p> <p>I know my clientset is configured correctly because I can successfully list all Pods.</p>
<p>If you have a "generic" <code>runtime.Object</code>, you can use the <a href="https://github.com/kubernetes/client-go/tree/3dda0e178874793c2401fcf92356ab800727785b/dynamic" rel="noreferrer">dynamic client</a> in client-go for this. The dynamic client deals with <code>unstructured.Unstructured</code> objects and all <code>runtime.Object</code>s can be converted to it. Here is an example:</p> <pre><code>// create the dynamic client from kubeconfig dynamicClient, err := dynamic.NewForConfig(kubeconfig) if err != nil { return err } // convert the runtime.Object to unstructured.Unstructured unstructuredObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(obj) if err != nil { return err } // create the object using the dynamic client nodeResource := schema.GroupVersionResource{Version: "v1", Resource: "Node"} createdUnstructuredObj, err := dynamicClient.Resource(nodeResource).Namespace(ns).Create(unstructuredObj) if err != nil { return err } // convert unstructured.Unstructured to a Node var node *corev1.Node if err = runtime.DefaultUnstructuredConverter.FromUnstructured(createdUnstructuredObj, node); err != nil { return err } </code></pre>
<p>I'm trying to bootstrap a etcd cluster within my kubernetes cluster, here is the relevant section of the pod definition</p> <pre><code> - name: etcd image: quay.io/coreos/etcd:v2.2.0 ports: - containerPort: 2379 - containerPort: 2380 - containerPort: 4001 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP args: - "-name" - "etcd0" - "-advertise-client-urls" - http://${POD_IP}:2379,http://${POD_IP}:4001 - "-initial-advertise-peer-urls" - "http://${POD_IP}:2380" - "-listen-peer-urls" - "http://0.0.0.0:2380" - "-initial-cluster" - 'etcd0=http://${POD_IP}:2380' - "-initial-cluster-state" - "new" </code></pre> <p>However when I apply the POD_IP environment variable seems to get mangled, evidenced by the log:</p> <pre><code> advertise URLs of "etcd0" do not match in --initial-advertise-peer-urls [http://$%7BPOD_IP%7D:2380] and --initial-cluster [http://$%7BPOD_IP%7D:2380] </code></pre> <p>Has anyone seen anything similar to this?</p>
<p>The arguments are not interpreted by a shell, so curly braces don't get you the behavior you want. If you want to use an envvar value in an arg, variable references like <code>$(VAR_NAME)</code> are expanded using the container's environment.</p>
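<p>Applied to the snippet in the question, that just means switching the variable syntax in the args (nothing else changes):</p> <pre><code>args:
- "-name"
- "etcd0"
- "-advertise-client-urls"
- "http://$(POD_IP):2379,http://$(POD_IP):4001"
- "-initial-advertise-peer-urls"
- "http://$(POD_IP):2380"
- "-listen-peer-urls"
- "http://0.0.0.0:2380"
- "-initial-cluster"
- "etcd0=http://$(POD_IP):2380"
- "-initial-cluster-state"
- "new"
</code></pre>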
<p>Is there a recommended way to use <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Kubernetes Secrets</a>? They can be exposed as environment variables or using a volume mount. Is one more secure than the other?</p>
<p><a href="https://www.oreilly.com/library/view/velocity-conference-2017/9781491985335/video316233.html" rel="noreferrer">https://www.oreilly.com/library/view/velocity-conference-2017/9781491985335/video316233.html</a></p> <p>Kubernetes secrets exposed by environment variables may be able to be enumerated on the host via /proc/. If this is the case it's probably safer to load them via volume mounts.</p>
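<p>If you go the volume-mount route, a minimal sketch looks like the following (the pod, image and secret names are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: my-image          # placeholder
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret  # each key in the Secret becomes a file under /etc/secrets
</code></pre>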
<p>Within the same kubernetes cluster,</p> <ol> <li><p>Can I have multiple StatefulSets attached to one headless service or should each StatefulSet have it's own headless service? What are the pros and cons of doing this?</p></li> <li><p>Can I mix standard and headless services in the same cluster? Specifically, I would like to use LoadBalancer service to load balance headless services. Can I define a service of type LoadBalancer and have headless services (ClusterIP = None) attached to it? If yes, how can I achieve this?</p></li> </ol> <p>Here is my intended architecture:</p> <pre><code>Load Balancer Service - Headless Service (Database-service) - MySql - BlazeGraph - Headless Service (Web / Tomcat) - Web Service (RESTful / GraphQL) </code></pre> <p>Any advice and insight is appreciated.</p> <p><strong>My setup</strong></p> <p>My service and the statefulsets attached to it have different labels.</p> <pre><code>database-service: app=database mysqlset: app=mysql </code></pre> <p>My pods</p> <pre><code>khteh@khteh-T580:~ 2007 $ k get pods -l app=mysql -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE mysql-0 1/1 Running 1 18h 10.1.1.4 khteh-t580 &lt;none&gt; khteh@khteh-T580:~ 2008 $ k get pods -l app=blazegraph -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE blazegraph-0 1/1 Running 1 18h 10.1.1.254 khteh-t580 &lt;none&gt; khteh@khteh-T580:~ 2009 $ k describe service database-service Name: database-service Namespace: default Labels: app=database Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"database"},"name":"database-service","namespace":"defaul... Selector: app=database,tier=database Type: ClusterIP IP: None Port: mysql 3306/TCP TargetPort: 3306/TCP Endpoints: &lt;none&gt; Port: blazegraph 9999/TCP TargetPort: 9999/TCP Endpoints: &lt;none&gt; Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Notice the service Endpoints is <code>&lt;none&gt;</code>. I am not sure this is the right setup.</p>
<p>Headless Service you should use in any case where you want to automatically discover all pods under the service as opposed to regular Service where you get ClusterIP instead. As an illustration from above mentioned example here is difference between DNS entries for Service (with ClusterIP) and Headless Service (without ClusterIP):</p> <ul> <li><p>Standard service you will get the clusterIP value:</p> <pre><code>kubectl exec zookeeper-0 -- nslookup zookeeper Server: 10.0.0.10 Address: 10.0.0.10#53 Name: zookeeper.default.svc.cluster.local Address: 10.0.0.213 </code></pre></li> <li><p>Headless service you will get IP of each pod</p> <pre><code>kubectl exec zookeeper-0 -- nslookup zookeeper Server: 10.0.0.10 Address: 10.0.0.10#53 Name: zookeeper.default.svc.cluster.local Address: 172.17.0.6 Name: zookeeper.default.svc.cluster.local Address: 172.17.0.7 Name: zookeeper.default.svc.cluster.local Address: 172.17.0.8 </code></pre></li> </ul> <p>Now, If you connect two statefulset with single headless service, it will return the address of each pod in both the statefulset. There will be no way to differentiate the pods from two applications if you create two statefulset and one headless service for that. See the following article to understand <a href="https://akomljen.com/stateful-applications-on-kubernetes/" rel="nofollow noreferrer">why headless services are used</a></p> <p>Headless service allow developer to reduce coupling from kubernetes system by allowing them to do discovery their own way. For such services, clusterIP is not allocated, kube-proxy doesn't handle these services and there is no load balancing and proxying done by platform for them. So, If you define <code>clusterIP: None</code> in your service there will be no load-balancing will be done from kubernetes end.</p> <p>Hope this helps.</p> <p>EDIT:</p> <p>I did a little experiment to answer your queries, created two statefulsets of mysql database named mysql and mysql2, with 1 replica for each statefulset. They have their own PV, PVC but bound by only single headless service.</p> <pre><code>[root@ip-10-0-1-235 centos]# kubectl get pods -l app=mysql -o wide NAME READY STATUS RESTARTS AGE IP NODE mysql-0 1/1 Running 0 4m 192.168.13.21 ip-10-0-1-235.ec2.internal mysql2-0 1/1 Running 0 3m 192.168.13.22 ip-10-0-1-235.ec2.internal </code></pre> <p>Now you can see the single headless service attached to both the pods</p> <pre><code>[root@ip-10-0-1-235 centos]# kubectl describe svc mysql Name: mysql Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=mysql Type: ClusterIP IP: None Port: &lt;unset&gt; 3306/TCP TargetPort: 3306/TCP Endpoints: 192.168.13.21:3306,192.168.13.22:3306 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Now when you lookup the service from some other pod, it returns IP address of both the pods:</p> <pre><code>[root@rtp-worker-0 /]# nslookup mysql Server: 10.96.0.10 Address: 10.96.0.10#53 Name: mysql.default.svc.cluster.local Address: 192.168.13.21 Name: mysql.default.svc.cluster.local Address: 192.168.13.22 </code></pre> <p>Now, it is impossible to identify which address(pod) is of which statefulset. Now I tried to identify the statefulset using its metadata name, but couldn't</p> <pre><code>[root@rtp-worker-0 /]# nslookup mysql2.mysql.default.svc.cluster.local Server: 10.96.0.10 Address: 10.96.0.10#53 ** server can't find mysql2.mysql.default.svc.cluster.local: NXDOMAIN </code></pre> <p>Hope it clarifies. </p>
<p>Here is my service.yaml code:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: login spec: selector: app: login ports: - protocol: TCP name: http port: 5555 targetPort: login-http type: NodePort </code></pre> <p>I set the service type to</p> <pre><code>type: NodePort </code></pre> <p>but when I run the command below it does not show the external IP as 'nodes':</p> <pre><code>kubectl get svc </code></pre> <p>Here is the output:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.100.0.1 &lt;none&gt; 443/TCP 7h login NodePort 10.100.70.98 &lt;none&gt; 5555:32436/TCP 5m </code></pre> <p>Please help me understand the mistake.</p>
<p>There is nothing wrong with your service, you should be able to access it using <code>&lt;your_vm_ip&gt;:32436</code>.</p> <p>NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service. So, On your node port 32436 is open and will receive all the external traffic on this port and forward it to the login service.</p> <p>EDIT:</p> <p>nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).</p> <p>nodePort is unique, so 2 different services cannot have the same nodePort assigned. Once declared, the k8s master reserves that nodePort for that service. nodePort is then opened on EVERY node (master and worker) - also the nodes that do not run a pod of that service - k8s iptables magic takes care of the routing. That way you can make your service request from outside your k8s cluster to any node on nodePort without worrying whether a pod is scheduled there or not. </p> <p>See the following article, it shows different ways to expose your services:</p> <p><a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p>
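<p>If you would rather have a fixed, predictable port instead of the randomly allocated 32436, you can (optionally) pin <code>nodePort</code> yourself; for example (30555 is just an arbitrary value within the default range):</p> <pre><code>spec:
  type: NodePort
  selector:
    app: login
  ports:
  - name: http
    protocol: TCP
    port: 5555
    targetPort: login-http
    nodePort: 30555   # must be within the cluster's NodePort range (default 30000-32767)
</code></pre>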
<p>I am trying to write a kubernetes crd validation schema. I have an array (vc) of structures and one of the fields in those structures is required (<code>name</code> field).</p> <p>I tried looking through various examples but it doesn't generate error when <code>name</code> is not there. Any suggestions what is wrong ?</p> <pre><code>vc: type: array items: type: object properties: name: type: string address: type: string required: - name </code></pre>
<p>If you are on v1.8, you will need to enable the <code>CustomResourceValidation</code> feature gate for using the validation feature. This can be done by using the following flag on kube-apiserver:</p> <pre><code>--feature-gates=CustomResourceValidation=true </code></pre> <p>Here is an example of it working (I tested this on v1.12, but this should work on earlier versions as well):</p> <p>The CRD:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: foos.stable.example.com spec: group: stable.example.com versions: - name: v1 served: true storage: true version: v1 scope: Namespaced names: plural: foos singular: foo kind: Foo validation: openAPIV3Schema: properties: spec: properties: vc: type: array items: type: object properties: name: type: string address: type: string required: - name </code></pre> <p>The custom resource:</p> <pre><code>apiVersion: "stable.example.com/v1" kind: Foo metadata: name: new-foo spec: vc: - address: "bar" </code></pre> <ol> <li>Create the CRD.</li> </ol> <p><code>kubectl create -f crd.yaml</code> <code>customresourcedefinition.apiextensions.k8s.io/foos.stable.example.com created</code></p> <ol start="2"> <li>Get the CRD and check if the validation field exists in the output. If it doesn't, you probably don't have the feature gate turned on.</li> </ol> <p><code>kubectl get crd foos.stable.example.com -oyaml</code></p> <ol start="3"> <li>Try to create the custom resource. This should fail with:</li> </ol> <p><code>kubectl create -f cr-validation.yaml</code></p> <p><code>The Foo "new-foo" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"creationTimestamp":"2018-11-18T19:45:23Z", "generation":1, "uid":"7d7f8f0b-eb6a-11e8-b861-54e1ad9de0be", "name":"new-foo", "namespace":"default"}, "spec":map[string]interface {}{"vc":[]interface {}{map[string]interface {}{"address":"bar"}}}, "apiVersion":"stable.example.com/v1", "kind":"Foo"}: validation failure list: spec.vc.name in body is required</code></p>
<p>I am using Prometheus to monitor my Kubernetes cluster. I have set up Prometheus in a separate namespace. I have multiple namespaces and multiple pods are running. Each pod container exposes a custom metrics at this end point, <code>:80/data/metrics</code> . I am getting the Pods CPU, memory metrics etc, but how to configure Prometheus to pull data from <code>:80/data/metrics</code> in each available pod ? I have used this tutorial to set up Prometheus, <a href="https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/" rel="noreferrer">Link</a></p>
<p>You have to add these three annotations to your pods:</p> <pre><code>prometheus.io/scrape: 'true' prometheus.io/path: '/data/metrics' prometheus.io/port: '80' </code></pre> <p>How will it work?</p> <p>Look at the <code>kubernetes-pods</code> job of the <code>config-map.yaml</code> you are using to configure Prometheus:</p> <pre><code>- job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name </code></pre> <p>Check these three relabel configurations:</p> <pre><code>- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 target_label: __address__ </code></pre> <p>Here, <code>__metrics_path__</code>, the <code>port</code>, and whether to scrape metrics from this pod are being read from the pod annotations.</p> <p>For more details on how to configure Prometheus see <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration" rel="noreferrer">here</a>. </p>
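<p>For completeness, those annotations belong on the pod template (so that they end up on each pod), not on the Deployment object itself; roughly:</p> <pre><code>spec:
  template:
    metadata:
      labels:
        app: my-app                        # placeholder
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/data/metrics'
        prometheus.io/port: '80'
</code></pre>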
<p>I'm running a <em>CronJob</em> in <em>kubernetes</em>, the jobs complete successfully and I log output to the log file inside <code>(path: storage/logs)</code> but I cannot access that file due to the container being in a completed state.</p> <p>Here is my job <em>yaml</em>:</p> <pre><code>apiVersion: v1 items: - apiVersion: batch/v1beta1 kind: CronJob metadata: labels: chart: cronjobs-0.1.0 name: cron-cronjob1 namespace: default spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: labels: app: cron cron: cronjob1 spec: containers: - args: - /usr/local/bin/php - -c - /var/www/html/artisan bulk:import env: - name: DB_CONNECTION value: postgres - name: DB_HOST value: postgres - name: DB_PORT value: &quot;5432&quot; - name: DB_DATABASE value: xxx - name: DB_USERNAME value: xxx - name: DB_PASSWORD value: xxxx - name: APP_KEY value: xxxxx image: registry.xxxxx.com/xxxx:2ecb785-e927977 imagePullPolicy: IfNotPresent name: cronjob1 ports: - containerPort: 80 name: http protocol: TCP imagePullSecrets: - name: xxxxx restartPolicy: OnFailure terminationGracePeriodSeconds: 30 schedule: '* * * * *' successfulJobsHistoryLimit: 3 </code></pre> <p>Is there anyway I can get my log file content displayed with <code>kubectl log &lt;podname&gt;</code> command or other alternatives?</p>
<p><code>Cronjob</code> runs a pod according to the <code>spec.schedule</code>. After completing the task the pod's status will be set to <code>Completed</code>, but the <code>cronjob</code> controller doesn't delete the pod afterwards, and the log file content is still there in the pod's container filesystem. So you need to do:</p> <pre class="lang-sh prettyprint-override"><code># here you can get the pod_name from the stdout of the cmd `kubectl get pods` $ kubectl logs -f -n default &lt;pod_name&gt; </code></pre>
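<p>To narrow the pod list down to the ones created by this particular CronJob, you can filter by the labels you set in the <code>jobTemplate</code>, e.g.:</p> <pre><code>kubectl get pods -n default -l cron=cronjob1
kubectl logs -n default &lt;pod_name&gt;
</code></pre>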
<p>I Integrated Google Kubernetes Engine with my Gitlab repo and created a cluster from gitlab. Now I am trying to build my docker image using <code>gitlab-ci</code> and push it to Google Container Registry. But I keep getting this error:</p> <pre><code>Running with gitlab-runner 11.2.0 (35e8515d) on gitlab runner vm instance 4e6e33ed Using Docker executor with image docker:latest ... Starting service docker:dind ... Pulling docker image docker:dind ... Using docker image sha256:edbe3f3ad406799b528fe6633c5553725860566b638cdc252e0520010436869f for docker:dind ... Waiting for services to be up and running... *** WARNING: Service runner-4e6e33ed-project-8016623-concurrent-0-docker-0 probably didn't start properly. Health check error: ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-4e6e33ed-project-8016623-concurrent-0-docker-0 AS /runner-4e6e33ed-project-8016623-concurrent-0-docker-0-wait-for-service/service (executor_docker.go:1305:0s) Service container logs: 2018-11-14T13:02:37.917684152Z mount: permission denied (are you root?) 2018-11-14T13:02:37.917743944Z Could not mount /sys/kernel/security. 2018-11-14T13:02:37.917747902Z AppArmor detection and --privileged mode might break. 2018-11-14T13:02:37.917750733Z mount: permission denied (are you root?) ********* Pulling docker image docker:latest ... Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest ... Running on runner-4e6e33ed-project-8016623-concurrent-0 via gitlab-runners.. ### # Running before_script commands here ### # Error Comes on Docker build command Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running? ERROR: Job failed: exit code 1 </code></pre> <p>This is my gitlab-ci.yml.</p> <pre><code>services: - docker:dind before_script: - apk update &amp;&amp; apk upgrade &amp;&amp; apk add --no-cache bash openssh variables: DOCKER_DRIVER: overlay2 stages: - build build: stage: build image: docker:latest variables: DOCKER_HOST: tcp://localhost:2375 before_script: # Pre-requisites required to install google cloud sdk on gitlab runner - export COMMIT_SHA=$(echo $CI_COMMIT_SHA | cut -c1-8) - apk update - apk upgrade - apk add --update ca-certificates - apk add --update -t deps curl - apk del --purge deps - rm /var/cache/apk/* script: # Build our image using docker - docker build -t $GCP_PROJECT_ID/$CI_PROJECT_NAME:$COMMIT_SHA . 
# Write our GCP service account private key into a file - echo $GCLOUD_SERVICE_KEY | base64 -d &gt; ${HOME}/gcloud-service-key.json # Download and install Google Cloud SDK - wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz - tar zxvf google-cloud-sdk.tar.gz &amp;&amp; ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true # Update gcloud components - google-cloud-sdk/bin/gcloud --quiet components update # Give access to gcloud project - google-cloud-sdk/bin/gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json || die "unable to authenticate service account for gcloud" # Get current projects credentials to access it - google-cloud-sdk/bin/gcloud container clusters get-credentials gitlab-kube --zone cluster-zone --project project-id # Configure container registry to push using docker - docker login -u _json_key --password-stdin https://gcr.io &lt; ${HOME}/gcloud-service-key.json # Push the image using docker - docker push $GCP_PROJECT_ID/$CI_PROJECT_NAME:$COMMIT_SHA </code></pre> <p>The docker image is building on local. Also I saw on various posts to update the <code>config.toml</code> file but I dont have one in my project. Where to add that file?</p> <p>Thanks</p>
<p>First of all: you don't need <code>gcloud</code> to push your images to GCP. The authentication by service account (like you do) is enough. (See: <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file</a>)</p> <p>However... If you really want to use the Cloud SDK, use the <code>google/cloud-sdk</code> image instead of the <code>docker</code> image in your job (Docker is already present inside the <code>google/cloud-sdk</code> image).</p> <p>Next, to use the Docker daemon, you need to point at the right endpoint. You use the <code>docker:dind</code> service, so the Docker host will be <code>tcp://docker:2375/</code> (<code>docker</code> is the hostname of your service).</p> <p>Finally, your runner will need "privileged" mode (to do DinD). (See: <a href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor" rel="nofollow noreferrer">https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor</a>)</p> <p>This is a quick example (sorry, not tested) of how to do that:</p> <pre><code>stages: - build build: stage: build image: google/cloud-sdk services: - docker:dind variables: DOCKER_DRIVER: overlay2 DOCKER_HOST: tcp://docker:2375/ before_script: - echo $GCLOUD_SERVICE_KEY | base64 -d &gt; ${HOME}/gcloud-service-key.json script: - docker build -t $GCP_PROJECT_ID/$CI_PROJECT_NAME:$COMMIT_SHA . - docker login -u _json_key --password-stdin https://gcr.io &lt; ${HOME}/gcloud-service-key.json - docker push $GCP_PROJECT_ID/$CI_PROJECT_NAME:$COMMIT_SHA </code></pre>
<p>I have a multizone (3 zones) GKE cluster (1.10.7-gke.1) of 6 nodes and want each zone to have at least one replica of my application.</p> <p>So I've tried preferred podAntiAffinity:</p> <pre><code> affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: component operator: In values: - app topologyKey: failure-domain.beta.kubernetes.io/zone </code></pre> <p>Everything looks good the first time I install (scale from 1 to 3 replicas) my application. After the next rolling update, everything gets mixed up and I can have 3 copies of my application in one zone. Since additional replicas are created and the old ones are terminated.</p> <p>When I am trying the same term with <em>requiredDuringSchedulingIgnoredDuringExecution</em> everything looks good but rolling updates don't work because new replicas can't be scheduled (pods with "component" = "app" already exist in each zone).</p> <p>How to configure my deployment to be sure I have replica in each availability zone?</p> <p>UPDATED:</p> <p>My workaround now is to have hard anti-affinity and deny additional pods (more than 3) during the rolling update:</p> <pre><code> replicaCount: 3 affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: component operator: In values: - app topologyKey: failure-domain.beta.kubernetes.io/zone deploymentStrategy: type: RollingUpdate rollingUpdate: maxSurge: 0 maxUnavailable: 1 </code></pre>
<p>If you have two nodes in each zone, you can use below affinity rules to make sure rolling updates works as well and you have a pod in each zone.</p> <pre><code> affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: component operator: In values: - app topologyKey: "kubernetes.io/hostname" preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: component operator: In values: - app topologyKey: failure-domain.beta.kubernetes.io/zone </code></pre>
<p>Hey so for sake of project need I have configure application that will response on port <code>8083</code> for that I configured following deployment, gateway, service and virtual service inside <strong>dedicated namespace</strong></p> <pre><code>apiVersion: v1 data: my.databag.1: need_triage kind: ConfigMap metadata: name: my-service-env-variables namespace: api --- apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: my-service name: my-service-service-deployment namespace: api spec: replicas: 1 template: metadata: annotations: traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0 labels: app: my-service-service-deployment spec: containers: - env: - name: my.variable valueFrom: secretKeyRef: key: my_token name: my.variable envFrom: - configMapRef: name: my-service-env-variables image: imaagepath:tag name: my-service-pod ports: - containerPort: 8080 name: mysvcport resources: limits: cpu: 700m memory: 1.8Gi requests: cpu: 500m memory: 1.7Gi --- apiVersion: v1 kind: Service metadata: name: my-service namespace: api spec: ports: - port: 8083 protocol: TCP targetPort: mysvcport selector: app: my-service-service-deployment --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: my-service-ingress namespace: api spec: gateways: - http-gateway hosts: - my-service.example.com http: - route: - destination: host: my-service port: number: 8083 --- apiVersion: v1 items: - apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: clusterName: "" creationTimestamp: 2018-11-07T13:17:00Z name: http-gateway namespace: api resourceVersion: "11778445" selfLink: /apis/networking.istio.io/v1alpha3/namespaces/api/gateways/http-gateway uid: 694f66a4-e28f-11e8-bc21-0ac9e31187a0 spec: selector: istio: ingressgateway servers: - hosts: - '*.example.com' port: name: http number: 80 protocol: HTTP - hosts: - '*.example.com' port: name: http-tomcat number: 8083 protocol: TCP kind: List metadata: resourceVersion: "" selfLink: "" </code></pre> <p>kubectl -n istio-system get service istio-ingressgateway -o yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","chart":"gateways-1.0.1","heritage":"Tiller","istio":"ingressgateway","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"http2","nodePort":31380,"port":80,"targetPort":80},{"name":"https","nodePort":31390,"port":443},{"name":"tcp","nodePort":31400,"port":31400},{"name":"tcp-pilot-grpc-tls","port":15011,"targetPort":15011},{"name":"tcp-citadel-grpc-tls","port":8060,"targetPort":8060},{"name":"tcp-dns-tls","port":853,"targetPort":853},{"name":"http2-prometheus","port":15030,"targetPort":15030},{"name":"http2-grafana","port":15031,"targetPort":15031}],"selector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"type":"LoadBalancer"}} creationTimestamp: 2018-09-06T02:43:34Z labels: app: istio-ingressgateway chart: gateways-1.0.1 heritage: Tiller istio: ingressgateway release: istio name: istio-ingressgateway namespace: istio-system resourceVersion: "12960680" selfLink: /api/v1/namespaces/istio-system/services/istio-ingressgateway uid: a6455551-b17e-11e8-893c-0a872c53b2c0 spec: clusterIP: 100.64.235.167 externalTrafficPolicy: Cluster ports: - name: http2 nodePort: 31380 port: 80 protocol: TCP targetPort: 80 - name: https nodePort: 31390 port: 443 protocol: TCP targetPort: 443 - name: 
tcp nodePort: 31400 port: 31400 protocol: TCP targetPort: 31400 - name: tcp-pilot-grpc-tls nodePort: 30052 port: 15011 protocol: TCP targetPort: 15011 - name: tcp-citadel-grpc-tls nodePort: 30614 port: 8060 protocol: TCP targetPort: 8060 - name: tcp-dns-tls nodePort: 30085 port: 853 protocol: TCP targetPort: 853 - name: http2-prometheus nodePort: 30518 port: 15030 protocol: TCP targetPort: 15030 - name: http2-grafana nodePort: 31358 port: 15031 protocol: TCP targetPort: 15031 **_- name: http-tomcat nodePort: 30541 port: 8083 protocol: TCP targetPort: 8083_** selector: app: istio-ingressgateway istio: ingressgateway sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - hostname: &lt;elb_endpoint&gt; </code></pre> <p>As we can see I edited the port in service <code>ingress-gateway</code>. But when I hit endpoint it gets response on port 80 and i am getting gateway timeout on <code>8083</code> I am wondering why its happening I added everywhere I can think of or get to know from docs and community. :) </p> <p>Would really appreciate any help I can get in this.</p>
<p>By the way, is istio-ingressgateway in a healthy state? I'm asking because the number of ports seems to exceed the load balancer's limit. On our cluster we discovered one day: <code>Error creating load balancer (will retry): failed to ensure load balancer for service istio-system/istio-ingressgateway: googleapi: Error 400: Invalid value for field 'resource.ports[5]': '853'. Too many ports specified. Maximum is 5., invalid</code></p> <p>You can check that by running <code>kubectl describe svc istio-ingressgateway -n istio-system</code>.</p>
<p>I have started 2 Ubuntu 16 EC2 instances (one for the master and the other for a worker). Everything is working OK. I need to set up the dashboard to view it on my machine. I have copied admin.conf and executed this command in my machine's terminal:</p> <pre><code> kubectl --kubeconfig ./admin.conf proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*' </code></pre> <p>Everything is fine. But in the browser, when I use the link below</p> <pre><code>http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre> <p>I am getting Error: 'dial tcp 192.168.1.23:8443: i/o timeout' Trying to reach: '<a href="https://192.168.1.23:8443/" rel="nofollow noreferrer">https://192.168.1.23:8443/</a>'</p> <p>I have enabled all traffic in the security policy for AWS. What am I missing? Please point me to a solution.</p>
<p>If you only want to reach the dashboard then it is pretty easy: get the IP address of your EC2 instance and the port on which it is serving the dashboard (<code>kubectl get services --all-namespaces</code>) and then reach it as follows. First:</p> <p><code>kubectl proxy --address 0.0.0.0 --accept-hosts '.*'</code></p> <p>And in your browser:</p> <p><code>http://&lt;IP&gt;:&lt;PORT&gt;/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login</code></p> <p><strong>Note that this is a possible security vulnerability, as you are accepting all traffic</strong> (AWS firewall rules) and also all connections for your <code>kubectl proxy</code> (<code>--address 0.0.0.0 --accept-hosts '.*'</code>), so please narrow it down or use a different approach. If you have more questions feel free to ask.</p>
<p>I read the calico docs, it says calico will start an etcd instance when it starts, but I noticed that the K8s cluster will start an etcd pod, when the cluster starts. I want calico use that etcd node, so I do the following action:</p> <p>Use calicoctl do test, create a config file:</p> <pre><code># cat myconfig.yml apiVersion: projectcalico.org/v3 kind: CalicoAPIConfig metadata: spec: datastoreType: etcdv3 etcdEndpoints: https://10.100.1.20:2379 etcdKeyFile: /etc/kubernetes/pki/etcd/server.key etcdCertFile: /etc/kubernetes/pki/etcd/server.crt etcdCACertFile: /etc/kubernetes/pki/etcd/ca.crt </code></pre> <p>the etcd config info came from /etc/kubernetes/manifests/etcd.yaml</p> <pre><code># cat /etc/kubernetes/manifests/etcd.yaml apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: etcd tier: control-plane name: etcd namespace: kube-system spec: containers: - command: - etcd - --advertise-client-urls=https://127.0.0.1:2379 - --cert-file=/etc/kubernetes/pki/etcd/server.crt - --client-cert-auth=true - --data-dir=/var/lib/etcd - --initial-advertise-peer-urls=https://127.0.0.1:2380 - --initial-cluster=t-k8s-a1=https://127.0.0.1:2380 - --key-file=/etc/kubernetes/pki/etcd/server.key - --listen-client-urls=https://127.0.0.1:2379 - --listen-peer-urls=https://127.0.0.1:2380 - --name=t-k8s-a1 - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt - --peer-client-cert-auth=true - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt - --snapshot-count=10000 - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt image: k8s.gcr.io/etcd-amd64:3.2.18 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /bin/sh - -ec - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo failureThreshold: 8 initialDelaySeconds: 15 timeoutSeconds: 15 name: etcd resources: {} volumeMounts: - mountPath: /var/lib/etcd name: etcd-data - mountPath: /etc/kubernetes/pki/etcd name: etcd-certs hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /var/lib/etcd type: DirectoryOrCreate name: etcd-data - hostPath: path: /etc/kubernetes/pki/etcd type: DirectoryOrCreate name: etcd-certs status: {} </code></pre> <p>still refused </p> <pre><code># calicoctl get nodes --config ./myconfig.yml Failed to create Calico API client: dial tcp 10.100.1.20:2379: connect: connection refused # kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE kube-system calico-node-5nbwz 2/2 Running 0 22h 10.100.1.21 t-k8s-b2 &lt;none&gt; kube-system calico-node-m967m 2/2 Running 0 22h 10.100.1.20 t-k8s-a1 &lt;none&gt; kube-system calico-typha-64fc9d86dd-g8m54 1/1 Running 0 22h 10.100.1.21 t-k8s-b2 &lt;none&gt; kube-system coredns-78fcdf6894-5thqv 1/1 Running 0 1d 192.168.1.2 t-k8s-b2 &lt;none&gt; kube-system coredns-78fcdf6894-gm5zs 1/1 Running 0 1d 192.168.1.3 t-k8s-b2 &lt;none&gt; kube-system etcd-t-k8s-a1 1/1 Running 0 1d 10.100.1.20 t-k8s-a1 &lt;none&gt; kube-system kube-apiserver-t-k8s-a1 1/1 Running 0 1d 10.100.1.20 t-k8s-a1 &lt;none&gt; kube-system kube-controller-manager-t-k8s-a1 1/1 Running 0 1d 10.100.1.20 t-k8s-a1 &lt;none&gt; kube-system kube-proxy-9rgmd 1/1 Running 0 1d 10.100.1.20 t-k8s-a1 &lt;none&gt; kube-system kube-proxy-z75kc 1/1 Running 0 1d 10.100.1.21 
t-k8s-b2 &lt;none&gt; kube-system kube-scheduler-t-k8s-a1 1/1 Running 0 1d 10.100.1.20 t-k8s-a1 &lt;none&gt; testalex etcd-deployment-5b5d67bb84-nr7vc 1/1 Running 0 1d 192.168.1.26 t-k8s-b2 &lt;none&gt; testalex k8s-alert-76f97ccf49-gffgb 1/1 Running 0 1d 192.168.1.18 t-k8s-b2 &lt;none&gt; testalex k8s-monitor-7ddcb74b87-75vxb 1/1 Running 0 1d 192.168.1.27 t-k8s-b2 &lt;none&gt; testalex mysql-deployment-858464457f-nznq5 1/1 Running 0 1d 192.168.1.16 t-k8s-b2 &lt;none&gt; </code></pre>
<p>When you <a href="https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/calico" rel="noreferrer">install Calico on Kubernetes</a> by default it will use the Kubernetes datastore (which uses etcdv3). Your <code>calicoctl</code> config under <code>/etc/calico/calicoctl.cfg</code> should looks something like this:</p> <pre><code>apiVersion: projectcalico.org/v3 kind: CalicoAPIConfig metadata: spec: datastoreType: "kubernetes" kubeconfig: "/path/to/.kube/config" </code></pre> <p>It works for me:</p> <pre><code>calicoctl get nodes NAME ip-172-x-x-x.us-west-2.compute.internal ip-172-x-x-x.us-west-2.compute.internal ip-172-x-x-x.us-west-2.compute.internal ip-172-x-x-x.us-west-2.compute.internal ip-172-x-x-x.us-west-2.compute.internal ip-172-x-x-x.us-west-2.compute.internal </code></pre>
<p>I am wondering if it is possible to store a key-value pair in Secret.yml. I want to be able to store an encryption key as a value and an id as its key, which I can use to retrieve the encryption key stored in Secret.yml.</p> <p>Is such functionality available with Kubernetes?</p> <p><strong>EDIT</strong> I should have said in my original message that I want to be able to store multiple pairs, add pairs during the lifespan of my application, and use different encryption keys on the fly with no (or minimal) updates to my application.</p>
<p>Yes, <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secrets</a> are key value pairs. You can create them using kubectl:</p> <pre><code>kubectl create secret generic the-secret-name --from-literal=KEY=VALUE </code></pre> <p>Then, you can look at / edit the yaml specification via</p> <pre><code>#get kubectl get secret the-secret-name -o yaml #edit kubectl edit secret the-secret-name </code></pre> <p>A secret looks like this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: the-secret-name type: Opaque data: # the value is base64 encoded KEY: VkFMVUUK </code></pre> <p>Most of the time, it's better to use two key value pairs in your situation though. The reason for this is that it's more complicated/expensive to look for a key with an unknown name than to look up a value under a known key.</p>
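<p>To retrieve a single value back by its key later (your id to encryption key lookup), something along these lines should work:</p> <pre><code># read one value, base64-decoded
kubectl get secret the-secret-name -o jsonpath='{.data.KEY}' | base64 --decode

# add or update pairs without hand-editing YAML (regenerates the manifest and applies it)
kubectl create secret generic the-secret-name \
  --from-literal=KEY=VALUE --from-literal=ANOTHER_KEY=ANOTHER_VALUE \
  --dry-run -o yaml | kubectl apply -f -
</code></pre>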
<p>Specifically <code>kubernetes-cli</code>. I have 1.12.0 installed. I need 1.11.x, but I don't have that installed.</p> <p>I've reviewed and tried every answer in this thread and nothing worked: <a href="https://stackoverflow.com/questions/3987683/homebrew-install-specific-version-of-formula">Homebrew install specific version of formula?</a></p> <p>I've tried <code>brew search</code> but there are no tapped versions:</p> <pre><code>~ brew search kubernetes-cli ==&gt; Formulae kubernetes-cli ✔ </code></pre> <p>I've tried <code>brew versions</code> but that command has been removed:</p> <pre><code>~ brew versions Error: Unknown command: versions </code></pre> <p>I've tried <code>brew install [email protected]</code> and .1 and .2:</p> <pre><code>~ brew install [email protected] Error: No available formula with the name "[email protected]" ==&gt; Searching for a previously deleted formula (in the last month)... Error: No previously deleted formula found. ==&gt; Searching for similarly named formulae... Error: No similarly named formulae found. ==&gt; Searching taps... ==&gt; Searching taps on GitHub... Error: No formulae found in taps. </code></pre> <p><code>brew switch</code> requires that I have the older version installed, which I don't.</p> <pre><code>~ brew switch kubernetes-cli 1.11.0 Error: kubernetes-cli does not have a version "1.11.0" in the Cellar. kubernetes-cli installed versions: 1.12.0 </code></pre>
<ol> <li>Go to homebrew git repo: <a href="https://github.com/Homebrew/homebrew-core/" rel="noreferrer">https://github.com/Homebrew/homebrew-core/</a></li> <li>Identify the commit specific to kubernetes 1.11.x version</li> <li>Go to <code>Formula</code> folder</li> <li>Open raw version of <code>kubernetes-cli.rb</code> file</li> <li>Copy the raw link of the file <code>https://raw.githubusercontent.com/Homebrew/homebrew-core/3e8f5503dde7069c5ff49b82d5e1576e6ebe3a5d/Formula/kubernetes-cli.rb</code></li> <li>Run <code>brew install &lt;raw_link&gt;</code></li> </ol>
<p>I would like to know/get opinions on how to set up a liveness probe for a RabbitMQ queue consumer. I am not sure how to verify that the consumer is still processing messages from the queue. I have already tried searching for clues on the internet but could not find any, so I am asking here to see if anyone has an idea.</p> <p>The code block which I want to make sure is working fine is:</p> <pre><code> var consumer = new EventingBasicConsumer(channel); consumer.Received += (model, ea) =&gt; { var body = ea.Body; var message = Encoding.UTF8.GetString(body); Console.WriteLine($"Message Received: {message}"); }; </code></pre> <p>Thank you.</p>
<p>First thing, you will need to expose an HTTP endpoint in your application code that checks whether the consumer is alive or not.</p> <p>There are many ways to test the liveness of the consumer, for example, you can check the timestamp of the last message that was consumed. If it's too old, you could declare the consumer as dead by returning an HTTP 500 error, otherwise, return HTTP 200. It depends on your business logic, you might want to use what I proposed, or any other method that fits your needs.</p> <p>Once you have an HTTP endpoint, you can define a liveness probe in your Kubernetes manifest.</p> <pre><code>livenessProbe: httpGet: path: /healthz port: 8080 httpHeaders: - name: X-Custom-Header value: Awesome initialDelaySeconds: 3 periodSeconds: 3 </code></pre> <p>(Taken from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a>)</p>
<p>I'm trying to run a service exposed via port 80 and 443. The SSL termination happens on the pod. I specified only port 80 for liveness probe but for some reasons kubernates is probing https (443) as well. Why is that and how can I stop it probing 443?</p> <p><strong>Kubernates config</strong></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: myregistrykey namespace: default data: .dockerconfigjson: xxx== type: kubernetes.io/dockerconfigjson --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: example-com spec: replicas: 0 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 50% minReadySeconds: 30 template: metadata: labels: app: example-com spec: imagePullSecrets: - name: myregistrykey containers: - name: example-com image: DOCKER_HOST/DOCKER_IMAGE_VERSION imagePullPolicy: Always ports: - containerPort: 80 protocol: TCP name: http - containerPort: 443 protocol: TCP name: https livenessProbe: httpGet: scheme: "HTTP" path: "/_ah/health" port: 80 httpHeaders: - name: Host value: example.com initialDelaySeconds: 35 periodSeconds: 35 readinessProbe: httpGet: scheme: "HTTP" path: "/_ah/health" port: 80 httpHeaders: - name: Host value: example.com initialDelaySeconds: 35 periodSeconds: 35 resources: requests: cpu: 250m limits: cpu: 500m --- apiVersion: v1 kind: Service metadata: name: example-com spec: type: LoadBalancer ports: - port: 80 protocol: TCP targetPort: 80 nodePort: 0 name: http - port: 443 protocol: TCP targetPort: 443 nodePort: 0 name: https selector: app: example-com </code></pre> <p>The error/logs on pods clearly indicate that kubernates is trying to access the service via https.</p> <pre><code> kubectl describe pod example-com-86876875c7-b75hr Name: example-com-86876875c7-b75hr Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: aks-agentpool-37281605-0/10.240.0.4 Start Time: Sat, 17 Nov 2018 19:58:30 +0200 Labels: app=example-com pod-template-hash=4243243173 Annotations: &lt;none&gt; Status: Running IP: 10.244.0.65 Controlled By: ReplicaSet/example-com-86876875c7 Containers: example-com: Container ID: docker://c5eeb03558adda435725a0df3cc2d15943966c3df53e9462e964108969c8317a Image: example-com.azurecr.io/example-com:2018-11-17_19-58-05 Image ID: docker-pullable://example-com.azurecr.io/example-com@sha256:5d425187b8663ecfc5d6cc78f6c5dd29f1559d3687ba9d4c0421fd0ad109743e Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Sat, 17 Nov 2018 20:07:59 +0200 Last State: Terminated Reason: Error Exit Code: 2 Started: Sat, 17 Nov 2018 20:05:39 +0200 Finished: Sat, 17 Nov 2018 20:07:55 +0200 Ready: False Restart Count: 3 Limits: cpu: 500m Requests: cpu: 250m Liveness: http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3 Readiness: http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3 Environment: NABU: nabu KUBERNETES_PORT_443_TCP_ADDR: agile-kube-b3e5753f.hcp.westeurope.azmk8s.io KUBERNETES_PORT: tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443 KUBERNETES_PORT_443_TCP: tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443 KUBERNETES_SERVICE_HOST: agile-kube-b3e5753f.hcp.westeurope.azmk8s.io Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-rcr7c (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-rcr7c: Type: Secret (a volume populated by a Secret) SecretName: default-token-rcr7c Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: 
node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned default/example-com-86876875c7-b75hr to aks-agentpool-37281605-0 Warning Unhealthy 3m46s (x6 over 7m16s) kubelet, aks-agentpool-37281605-0 Liveness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Normal Pulling 3m45s (x3 over 10m) kubelet, aks-agentpool-37281605-0 pulling image "example-com.azurecr.io/example-com:2018-11-17_19-58-05" Normal Killing 3m45s (x2 over 6m5s) kubelet, aks-agentpool-37281605-0 Killing container with id docker://example-com:Container failed liveness probe.. Container will be killed andrecreated. Normal Pulled 3m44s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Successfully pulled image "example-com.azurecr.io/example-com:2018-11-17_19-58-05" Normal Created 3m42s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Created container Normal Started 3m42s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Started container Warning Unhealthy 39s (x9 over 7m4s) kubelet, aks-agentpool-37281605-0 Readiness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre>
<p>As per your comments, you are doing an HTTP to HTTPS redirect in the pod and basically, the probe cannot connect to it. If you still want to serve a probe on port 80 you should consider using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP probes</a>. For example:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: example-com spec: ... minReadySeconds: 30 template: metadata: labels: app: example-com spec: imagePullSecrets: - name: myregistrykey containers: - name: example-com ... livenessProbe: httpGet: scheme: "HTTP" path: "/_ah/health" port: 80 httpHeaders: - name: Host value: example.com initialDelaySeconds: 35 periodSeconds: 35 readinessProbe: tcpSocket: port: 80 initialDelaySeconds: 35 periodSeconds: 35 ... </code></pre> <p>Or you can ignore some redirects in your application depending on the URL, just like mentioned in @night-gold's answer.</p>
<p>Why am I getting: </p> <p><code>kube-system 1m 1h 245 kube-dns-fcd468cb-8fhg2.156899dbda62d287 Pod Warning FailedScheduling default-scheduler no nodes available to schedule pods</code></p> <p>UPDATE - I've now migrated the entire cluster to <code>us-west-2</code> rather than <code>eu-west-1</code> so I can run the code out of the box to prevent introducing any errors. The <code>tfstate</code> file shows that the correct EKS AMI is being referenced.</p> <p>E.g. </p> <p><code>720: "image_id": "ami-00c3b2d35bddd4f5c",</code></p> <p>FWIW, I'm following along with <a href="https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html" rel="noreferrer">https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html</a> and using the code it links to on GitHub, i.e. <a href="https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started" rel="noreferrer">https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started</a></p> <p>Note: looking in EC2 Instances, I can see 2 EKS nodes running with the correct AMI IDs.</p> <p>==== UPDATES</p> <p>Checking nodes:</p> <pre><code>kubectl get nodes No resources found. </code></pre> <p>SSHing into one of the nodes and running <code>journalctl</code> shows:</p> <pre><code>Nov 21 12:28:25 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:25.419465 4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unauthorized Nov 21 12:28:25 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:25.735882 4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unauthorized Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:26.237953 4417 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: W1121 12:28:26.418327 4417 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Nov 21 12:28:26 ip-10-0-0-247.us-west-2.compute.internal kubelet[4417]: E1121 12:28:26.418477 4417 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: n </code></pre> <p>Given that auth may be an issue, I checked the Terraform code, which seems to be correct. E.g.:</p> <p><a href="https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/eks-getting-started/outputs.tf#L9-L20" rel="noreferrer">https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/eks-getting-started/outputs.tf#L9-L20</a></p> <p>Is there any way I can test this in a bit more detail? Or any further suggestions?</p>
<p>I'm guessing you don't have any nodes registered with your cluster. Even if the EC2 instances are up, it doesn't mean that your cluster is able to use them. You can check with:</p> <pre><code>$ kubectl get nodes </code></pre> <p>Another possibility is that your nodes are available but they don't have enough resources (which is unlikely).</p> <p>Another possibility is that your nodes are tainted with something like this:</p> <pre><code>$ kubectl taint node node1 key=value:NoSchedule </code></pre> <p>You can check and remove it:</p> <pre><code>$ kubectl describe node node1 $ kubectl taint node node1 key:NoSchedule- </code></pre> <p>Another possibility is that you have a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer"><code>nodeSelector</code></a> in your pod spec and you don't have the nodes labeled with that node selector. Check with:</p> <pre><code>$ kubectl get nodes --show-labels </code></pre>
<p>I'm trying to set up PostgreSQL on Minikube with the data path being a host folder mounted into Minikube (I'd like to keep my data on the host). </p> <p>With the Kubernetes object created (below) I get a permission error, the same one as here <a href="https://stackoverflow.com/questions/41856108/how-to-solve-permission-trouble-when-running-postgresql-from-minikube">How to solve permission trouble when running Postgresql from minikube?</a> although the question mentioned doesn't answer the issue. It advises mounting Minikube's VM dir instead.</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres volumeMounts: - mountPath: /var/lib/postgresql/data name: storage env: - name: POSTGRES_PASSWORD value: user - name: POSTGRES_USER value: pass - name: POSTGRES_DB value: k8s volumes: - name: storage hostPath: path: /data/postgres </code></pre> <p>Is there any other way to do that, other than building my own image on top of Postgres and playing with the permissions somehow? I'm on macOS with Minikube 0.30.0 and I'm experiencing this with both the VirtualBox and hyperkit drivers for Minikube.</p>
<p>Look at these lines from the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> documentation:</p> <blockquote> <p>the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</p> </blockquote> <p>So, either you have to run as root or you have to change the file permissions of the <code>/data/postgres</code> directory.</p> <p>However, you can run your Postgres container as root without rebuilding the Docker image.</p> <p>You have to add the following to your container:</p> <pre><code>securityContext: runAsUser: 0 </code></pre> <p>Your YAML should look like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres volumeMounts: - mountPath: /var/lib/postgresql/data name: storage env: - name: POSTGRES_PASSWORD value: user - name: POSTGRES_USER value: pass - name: POSTGRES_DB value: k8s securityContext: runAsUser: 0 volumes: - name: storage hostPath: path: /data/postgres </code></pre>
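<p>If you would rather not run the container as root, the other option from the quote above, modifying the file permissions on the host, can be done from inside the Minikube VM. This is only a rough sketch; the UID 999 is an assumption based on the user the official <code>postgres</code> image typically runs as, so verify it for your image:</p> <pre><code># open a shell inside the Minikube VM
$ minikube ssh

# create the hostPath directory and give ownership to the postgres UID
$ sudo mkdir -p /data/postgres
$ sudo chown -R 999:999 /data/postgres
</code></pre>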
<p>When I create a Kubernetes cluster and specify <code>--master-zones us-west-2a,us-west-2b,us-west-2c</code>, I end up with 3 masters (which is fine), but they are in different instance groups. </p> <p>i.e. </p> <pre><code>$ kops get ig Using cluster from kubectl context: kube2.mydomain.net NAME ROLE MACHINETYPE MIN MAX ZONES master-us-west-2a Master m4.large 1 1 us-west-2a master-us-west-2b Master m4.large 1 1 us-west-2b master-us-west-2c Master m4.large 1 1 us-west-2c nodes Node m4.large 3 3 us-west-2a,us-west-2b,us-west-2c </code></pre> <p>I'm not sure whether this is correct, or whether this is a best practice. </p> <p>I would think that all the masters should be in one instance group.</p>
<p>I'm assuming you mean multiple availability zones. This is the default behavior for redundancy. Cloud providers like AWS recommend spreading your control plane (and your workloads, for that matter) across different availability zones.</p> <p>If you want to create them in a single zone, you can run something like this, for example:</p> <pre><code>$ kops create cluster --zones=us-east-1c --master-count=3 k8s.example.com </code></pre> <p>Or </p> <pre><code>$ kops create cluster --zones=us-east-1b,us-east-1c --master-zones=us-east-1c --master-count=3 </code></pre> <p>More info <a href="https://github.com/kubernetes/kops/issues/732" rel="nofollow noreferrer">here</a>.</p> <p>I believe the rationale behind having one instance group per zone (instance groups map to ASGs in AWS) is that if you specify multiple availability zones in a single ASG, there is no guarantee that the nodes will be spread so that there is one in each availability zone.</p>
<p>I am currently configuring a Heketi server (deployed on K8S clusterA) to interact with my GlusterFS cluster that is deployed as a DaemonSet on another K8S cluster, ClusterB.</p> <p>One of the configurations required by Heketi to connect to the GlusterFS K8S cluster is:</p> <pre><code> "kubeexec": { "host" :"https://&lt;URL-OF-CLUSTER-WITH-GLUSTERFS&gt;:6443", "cert" : "&lt;CERTIFICATE-OF-CLUSTER-WITH-GLUSTERFS&gt;", "insecure": false, "user": "WHERE_DO_I_GET_THIS_FROM", "password": "&lt;WHERE_DO_I_GET_THIS_FROM&gt;", "namespace": "default", "backup_lvm_metadata": false }, </code></pre> <p>As you can see, it requires a user and password. I have no idea where to get those from. One thing that comes to mind is creating a service account on ClusterB and using the token to authenticate, but Heketi does not seem to accept that as an authentication mechanism. </p> <p>The cert is something that I got from <code>/usr/local/share/ca-certificates/kube-ca.crt</code>, but I have no idea where to get the user/password from. Any idea what could be done?</p>
<p>That could only mean one thing: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-password-file" rel="nofollow noreferrer">basic HTTP auth</a>.</p> <p>You can specify a username/password in a file when you start the kube-apiserver with the <code>--basic-auth-file=SOMEFILE</code> option.</p>
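<p>For reference, that static password file is a CSV with one <code>password,user,uid</code> entry per line (an optional fourth column lists groups), passed to the API server via <code>--basic-auth-file=/path/to/file.csv</code>. A minimal sketch with made-up values, just to show the shape:</p> <pre><code>MySecretPassword,heketi-user,1001
</code></pre> <p>The <code>user</code> and <code>password</code> in Heketi's <code>kubeexec</code> block would then be <code>heketi-user</code> and <code>MySecretPassword</code>.</p>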
<p>How can I configure a LoadBalancer to retrieve an ephemeral rather than a static IP?</p> <p>I need this so I don't have to WAIT FOR GOOGLE TO INCREASE THE QUOTA FROM 1 IP ADDRESS... (It's been a long day...)</p> <pre><code> Normal EnsuringLoadBalancer 3m (x7 over 8m) service-controller Ensuring load balancer Warning CreatingLoadBalancerFailed 3m (x7 over 8m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/subzero-react: failed to ensure a static IP for load balancer (*****************(default/subzero-react)): error creating gce static IP address: googleapi: Error 403: Quota 'STATIC_ADDRESSES' exceeded. Limit: 1.0 in region us-central1., quotaExceeded </code></pre> <p>Upon removing the <code>loadBalancerIP</code> field and recreating the service, it still remains pending.</p> <p>This is the output of <code>kubectl get service ****** -o yaml</code>:</p> <pre><code>kubectl get service **** -o yaml apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"api"},"name":"subzero-react","namespace":"default"},"spec":{"ports":[{"name":"http","port":80}],"selector":{"app":"initial-pp3subzero"},"type":"LoadBalancer"}} creationTimestamp: 2018-11-19T18:04:24Z labels: app: api name: ***************** namespace: default resourceVersion: "584211" selfLink: /api/v1/namespaces/default/services/********** uid: 8c140d40-ec25-11e8-b7b3-42010a8000c2 spec: clusterIP: 10.7.242.176 externalTrafficPolicy: Cluster ports: - name: http nodePort: 31853 port: 80 protocol: TCP targetPort: 80 selector: app: ****************** sessionAffinity: None type: LoadBalancer status: loadBalancer: {} </code></pre>
<p>To not assign a static IP on a GCP load balancer with Kubernetes (default behavior) you generally don't need to specify anything in the <code>loadBalancerIP</code> service spec as described <a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/hello-app/manifests/helloweb-service-static-ip.yaml" rel="nofollow noreferrer">here</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">here</a>.</p> <p>You can delete your service and re-create it without the <code>loadBalancerIP</code> or you can patch it:</p> <pre><code>$ kubectl patch service &lt;service-name&gt; -p '{"spec": { "loadBalancerIP": null }}' </code></pre>
<p>From time to time, when I submit a Spark job to my Google Kubernetes cluster, I get 401 Unauthorized, so I run <code>gcloud container clusters get-credentials my-cluster</code>, but it is almost always followed by a 403 error saying client system:anonymous etc. The weird part is that I just need to do a simple <code>kubectl get namespace</code> and then it works again. Why is that?</p>
<blockquote> <p>From time to time, when I submit a Spark job to my Google Kubernetes cluster, I get 401 Unauthorized</p> </blockquote> <p>Your credentials expire.</p> <blockquote> <p>I run <code>gcloud container clusters get-credentials my-cluster</code>, but it is almost always followed by a 403 error saying client system:anonymous etc.</p> </blockquote> <p>Sounds like a timing issue where your new credentials are still getting propagated in the cluster.</p>
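<p>A hedged example of refreshing the kubeconfig entry (note the subcommand is <code>get-credentials</code>, plural; the zone and project values here are placeholders for your own):</p> <pre><code>$ gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project

# any subsequent kubectl call refreshes the cached access token if it has expired
$ kubectl get namespaces
</code></pre>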
<p>I have a deployment configured in yml using RollingUpdate:</p> <pre><code> strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 0 maxSurge: 10% </code></pre> <p>I'd like to be able to slow down the deployment to give a longer window in which I can pause and possibly roll back.</p> <p>Is there a way to configure this?</p>
<p>Kubernetes doesn't really have a way of controlling this (the speed of the rolling updates). <code>maxUnavailable: 0, maxSurge: 10%</code> seems like a stop-gap hack. </p> <p>If you are concerned about your update being ready and having the ability to roll back, you should consider creating a canary <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. In other words, another Deployment with a small number of replicas that you can delete if something goes wrong.</p> <p>Another alternative is looking at a service mesh like <a href="https://istio.io" rel="nofollow noreferrer">Istio</a>, which allows you to do <a href="https://istio.io/blog/2017/0.1-canary/" rel="nofollow noreferrer">Canary Deployments</a>.</p>
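<p>A minimal sketch of such a canary Deployment (the names, labels and image are illustrative, not taken from your manifest): it runs one replica of the new version behind the same Service selector, so you can watch it and simply delete it if something goes wrong.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp        # the label your Service selects on
        track: canary     # distinguishes canary pods from the stable ones
    spec:
      containers:
      - name: myapp
        image: myapp:new-version
</code></pre>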
<p>I have successfully connected my Kubernetes cluster with GitLab. I was also able to install Helm through the GitLab UI (Operations->Kubernetes). My problem is that when I click the "Install" button for Ingress, GitLab creates all the resources needed for the Ingress controller, but one thing is missing: the external IP. The external IP is shown as "?". </p> <p>And if I run this command: </p> <pre><code>kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo </code></pre> <p>it shows nothing, as if I don't have a LoadBalancer that exposes an external IP. </p> <p><strong>Kubernetes Cluster</strong></p> <p>I installed Kubernetes with kubeadm, using flannel as the CNI.</p> <p><strong>kubectl version:</strong></p> <pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"} Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"} </code></pre> <p>Is there something that I have to configure before installing Ingress? Do I need an external load balancer (my thought: GitLab will create that service for me)?</p> <p>One more hint: after installation, the state of the nginx-ingress-controller Service stays pending. The reason is that it is not able to detect an external IP. I also modified the YAML file of the service and manually added an "externalIPs: -External-IP" line. After that it was no longer pending, but I still couldn't find an external IP by typing the above command, and GitLab also couldn't find any external IP.</p> <p>EDIT: This happens after installation: <a href="https://i.stack.imgur.com/GyE03.png" rel="nofollow noreferrer">see picture</a></p> <p>EDIT2: By running the following command: </p> <pre><code>kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps </code></pre> <p>I get the following result:</p> <p><a href="https://i.stack.imgur.com/gav1U.png" rel="nofollow noreferrer">see picture</a></p> <p>In the event log you will see that I switched the type to "NodePort" once and then back to "LoadBalancer", and I added the "externalIPs: -192.168.50.235" line in the YAML file. As you can see there is an external IP, but GitLab is not detecting it. </p> <p>Btw, I'm not using any cloud provider like AWS or GCE, and I found out that LoadBalancer does not work that way without one. But there must be a solution for this without a cloud load balancer.</p>
<p>I would consider looking at <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> as the provisioner of load-balancing services in your cluster. If you don't have a cloud provider to supply the entry point (external IP) for the <code>Ingress</code> resource, the option for bare-metal environments is <code>MetalLB</code>, which implements Kubernetes services of type <code>LoadBalancer</code> in clusters that don't run on a cloud provider; it therefore also works for the <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p> <p>Generally, <code>MetalLB</code> can be installed via a Kubernetes manifest file or with the <a href="https://docs.helm.sh/" rel="nofollow noreferrer">Helm</a> package manager, as described <a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">here</a>. </p> <p><code>MetalLB</code> deploys its own components across the Kubernetes cluster and requires a reserved pool of IP addresses in order to take ownership of the <code>ingress-nginx</code> service. This pool can be defined in a <code>ConfigMap</code> called <code>config</code> located in the same namespace as the <code>MetalLB</code> controller:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 </code></pre> <p>An external IP will be assigned to your <code>LoadBalancer</code> once the ingress service obtains an IP address from this address pool. </p> <p>Find more details about the <code>MetalLB</code> implementation for the NGINX Ingress Controller in the official <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb" rel="nofollow noreferrer">documentation</a>.</p>
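<p>Once MetalLB is running and the <code>ConfigMap</code> above has been applied, the pending service should pick up an address from the pool. A quick way to verify, reusing the namespace and service name from your own output (the config file name is just an example):</p> <pre><code>$ kubectl apply -f metallb-config.yaml

# EXTERNAL-IP should change from &lt;pending&gt; to an address from the pool
$ kubectl get svc -n gitlab-managed-apps ingress-nginx-ingress-controller
$ kubectl get svc -n gitlab-managed-apps ingress-nginx-ingress-controller \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
</code></pre>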
<p>People, I am trying to create a simple file <em>/tmp/tarte.test</em> with initContainers. I have a constraint: I must use an alpine image for the container. Please let me know what is missing in this simple YAML file. </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: initonpod namespace: prod labels: app: myapp spec: containers: - name: mycont-nginx image: alpine initContainers: - name: myinit-cont image: alpine imagePullPolicy: IfNotPresent command: - touch - "/tmp/tarte.test" - sleep 200 </code></pre> <p>The describe output of the pod:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned prod/initonpod to k8s-node-1 Normal Pulled 8s kubelet, k8s-node-1 Container image "alpine" already present on machine Normal Created 8s kubelet, k8s-node-1 Created container Normal Started 7s kubelet, k8s-node-1 Started container Normal Pulling 4s (x2 over 7s) kubelet, k8s-node-1 pulling image "alpine" Normal Pulled 1s (x2 over 6s) kubelet, k8s-node-1 Successfully pulled image "alpine" Normal Created 1s (x2 over 5s) kubelet, k8s-node-1 Created container Normal Started 1s (x2 over 5s) kubelet, k8s-node-1 Started container Warning BackOff 0s kubelet, k8s-node-1 Back-off restarting failed container </code></pre> <p>And if I change the alpine image to an nginx image for the container... it works fine.</p>
<p>You get <code>Back-off restarting failed container</code> because of your container spec.</p> <pre><code>spec: containers: - name: mycont-nginx image: alpine </code></pre> <p>This <code>alpine</code> container doesn't run forever. In Kubernetes, a Pod's containers (with the default restart policy) are expected to keep running; when the main container exits right away, the kubelet keeps restarting it, and that's why you are getting the error. When you use the <code>nginx</code> image, it keeps running, so it works. So to use the <code>alpine</code> image, change the spec as below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: busypod labels: app: busypod spec: containers: - name: busybox image: alpine command: - "sh" - "-c" - &gt; while true; do sleep 3600; done initContainers: - name: myinit-cont image: alpine imagePullPolicy: IfNotPresent command: - touch - "/tmp/tarte.test" - sleep 200 </code></pre>
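<p>One extra caveat, not covered above: each container has its own filesystem, so a file the init container writes to its own <code>/tmp</code> will not be visible in the main container. If the goal is for the main container to see <code>/tmp/tarte.test</code>, a shared <code>emptyDir</code> volume is one way to do it. This is only a sketch building on the spec above:</p> <pre><code>spec:
  volumes:
  - name: shared-tmp
    emptyDir: {}
  initContainers:
  - name: myinit-cont
    image: alpine
    command: ["touch", "/tmp/tarte.test"]
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp
  containers:
  - name: busybox
    image: alpine
    command: ["sh", "-c", "while true; do sleep 3600; done"]
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp
</code></pre>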
<p>I've gone through the Azure Cats&amp;Dogs tutorial described <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app" rel="noreferrer">here</a> and I am getting an error in the final step where the apps are launched in AKS. Kubernetes is reporting that I have insufficient pods but I'm not sure why this would be. I ran through this same tutorial a few weeks ago without problems.</p> <pre><code>$ kubectl apply -f azure-vote-all-in-one-redis.yaml deployment.apps/azure-vote-back created service/azure-vote-back created deployment.apps/azure-vote-front created service/azure-vote-front created $ kubectl get pods NAME READY STATUS RESTARTS AGE azure-vote-back-655476c7f7-mntrt 0/1 Pending 0 6s azure-vote-front-7c7d7f6778-mvflj 0/1 Pending 0 6s $ kubectl get events LAST SEEN TYPE REASON KIND MESSAGE 3m36s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient pods. 84s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient pods. 70s Warning FailedScheduling Pod skip schedule deleting pod: default/azure-vote-back-655476c7f7-l5j28 9s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient pods. 53m Normal SuccessfulCreate ReplicaSet Created pod: azure-vote-back-655476c7f7-kjld6 99s Normal SuccessfulCreate ReplicaSet Created pod: azure-vote-back-655476c7f7-l5j28 24s Normal SuccessfulCreate ReplicaSet Created pod: azure-vote-back-655476c7f7-mntrt 53m Normal ScalingReplicaSet Deployment Scaled up replica set azure-vote-back-655476c7f7 to 1 99s Normal ScalingReplicaSet Deployment Scaled up replica set azure-vote-back-655476c7f7 to 1 24s Normal ScalingReplicaSet Deployment Scaled up replica set azure-vote-back-655476c7f7 to 1 9s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient pods. 3m36s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient pods. 53m Normal SuccessfulCreate ReplicaSet Created pod: azure-vote-front-7c7d7f6778-rmbqb 24s Normal SuccessfulCreate ReplicaSet Created pod: azure-vote-front-7c7d7f6778-mvflj 53m Normal ScalingReplicaSet Deployment Scaled up replica set azure-vote-front-7c7d7f6778 to 1 53m Normal EnsuringLoadBalancer Service Ensuring load balancer 52m Normal EnsuredLoadBalancer Service Ensured load balancer 46s Normal DeletingLoadBalancer Service Deleting load balancer 24s Normal ScalingReplicaSet Deployment Scaled up replica set azure-vote-front-7c7d7f6778 to 1 $ kubectl get nodes NAME STATUS ROLES AGE VERSION aks-nodepool1-27217108-0 Ready agent 7d4h v1.9.9 </code></pre> <p>The only thing I can think of that has changed is that I have other (larger) clusters running now as well, and the main reason I went through this Cats&amp;Dogs tutorial again was because I hit this same problem today with my other clusters. Is this a resource limit issue with my Azure account?</p> <p><strong>Update 10-20/3:15 PST:</strong> Notice how these three clusters all show that they use the same nodepool, even though they were created in different resource groups. Also note how the "get-credentials" call for gem2-cluster reports an error. I did have a cluster earlier called gem2-cluster which I deleted and recreated using the same name (in fact I deleted the whole resource group). 
What's the correct process for doing this?</p> <pre><code>$ az aks get-credentials --name gem1-cluster --resource-group gem1-rg Merged "gem1-cluster" as current context in /home/psteele/.kube/config $ kubectl get nodes -n gem1 NAME STATUS ROLES AGE VERSION aks-nodepool1-27217108-0 Ready agent 3h26m v1.9.11 $ az aks get-credentials --name gem2-cluster --resource-group gem2-rg A different object named gem2-cluster already exists in clusters $ az aks get-credentials --name gem3-cluster --resource-group gem3-rg Merged "gem3-cluster" as current context in /home/psteele/.kube/config $ kubectl get nodes -n gem1 NAME STATUS ROLES AGE VERSION aks-nodepool1-14202150-0 Ready agent 26m v1.9.11 $ kubectl get nodes -n gem2 NAME STATUS ROLES AGE VERSION aks-nodepool1-14202150-0 Ready agent 26m v1.9.11 $ kubectl get nodes -n gem3 NAME STATUS ROLES AGE VERSION aks-nodepool1-14202150-0 Ready agent 26m v1.9.11 </code></pre>
<p>What is your max-pods set to? This is a normal error when you've reached the limit of pods per node.</p> <p>You can check your current maximum number of pods per node with:</p> <pre><code>$ kubectl get nodes -o yaml | grep pods pods: "30" pods: "30" </code></pre> <p>And your current pod count with:</p> <pre><code>$ kubectl get pods --all-namespaces | grep Running | wc -l 18 </code></pre>
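<p>If the limit of 30 shown above is what you're hitting, AKS lets you raise the per-node pod limit, but generally only at cluster (or node pool) creation time. A hedged example; the exact flags may vary with your CLI version:</p> <pre><code>$ az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --max-pods 100 \
    --generate-ssh-keys
</code></pre>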