<p>I am trying to deploy a Docker image that I built myself and that is not on a public or private registry. </p> <p>I use the <code>imagePullPolicy: IfNotPresent</code> for the Kubernetes deployment.</p> <p>I use kubeadm v1.12. The error:</p> <pre><code>Normal Scheduled 35s default-scheduler Successfully assigned default/test-777dd9bc96-chgc7 to ip-10-0-1-154 Normal SandboxChanged 32s kubelet, ip-10-0-1-154 Pod sandbox changed, it will be killed and re-created. Normal BackOff 30s (x3 over 31s) kubelet, ip-10-0-1-154 Back-off pulling image "test_kube" Warning Failed 30s (x3 over 31s) kubelet, ip-10-0-1-154 Error: ImagePullBackOff Normal Pulling 15s (x2 over 34s) kubelet, ip-10-0-1-154 pulling image "test" Warning Failed 13s (x2 over 33s) kubelet, ip-10-0-1-154 Failed to pull image "test": rpc error: code = Unknown desc = Error response from daemon: pull access denied for test_kube, repository does not exist or may require 'docker login' Warning Failed 13s (x2 over 33s) kubelet, ip-10-0-1-154 Error: ErrImagePull </code></pre> <p>My deployment file: </p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: test-kube spec: template: metadata: labels: app: test spec: containers: - name: test image: test imagePullPolicy: IfNotPresent ports: - containerPort: 3000 env: - name: SECRET-KUBE valueFrom: secretKeyRef: name: secret-test key: username </code></pre> <blockquote> <p>docker images</p> </blockquote> <pre><code>REPOSITORY TAG test latest test test </code></pre> <p>In the deployment file I tried with </p> <blockquote> <p>image: test and with image: test:test</p> </blockquote> <p>The same error:</p> <blockquote> <p>Error: ErrImagePull</p> </blockquote>
<ul> <li>create a secret based on docker registry user with pull/push rights</li> <li>use it as imagePullSecret</li> </ul> <p>OR</p> <ul> <li>pre-pull the image on the deployment node</li> </ul> <p><strong>Details of creating secret and usage:</strong></p> <p>A Kubernetes cluster uses the Secret of docker-registry type to authenticate with a container registry to pull a private image.</p> <p>Create this Secret, naming it regcred:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>where:</p> <pre><code>&lt;your-registry-server&gt; is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub) &lt;your-name&gt; is your Docker username. &lt;your-pword&gt; is your Docker password. &lt;your-email&gt; is your Docker email. </code></pre> <p>Then create a pod that uses that secret:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: &lt;your-private-image&gt; imagePullSecrets: - name: regcred </code></pre> <p>For the local image use case please see this post:</p> <p><a href="https://stackoverflow.com/questions/40144138/pull-a-local-image-to-run-a-pod-in-kubernetes">Pull a local image to run a pod in Kubernetes</a></p>
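<p>For the local-image case in the question, a common approach (assuming the image has been built or loaded on every node that might run the pod) is to give it an explicit tag other than <code>latest</code> and keep <code>imagePullPolicy: IfNotPresent</code>, or set it to <code>Never</code>, so the kubelet uses the local copy instead of attempting a registry pull. A minimal sketch; the <code>v1</code> tag is an assumption:</p> <pre><code># on the node where the image was built
docker tag test test:v1
</code></pre> <pre><code>containers:
- name: test
  image: test:v1                  # non-latest tag, present locally on the node
  imagePullPolicy: IfNotPresent   # or Never, to forbid registry pulls entirely
</code></pre> <p>Note that with a bare <code>:latest</code> tag Kubernetes defaults to <code>imagePullPolicy: Always</code>, which forces a registry pull even when the image exists locally.</p>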
<p>I'm having a problem with my GKE cluster, all the pods are stuck with ContainerCreating status. When I run the kubectl get events I see this error:</p> <pre><code>Failed create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>Anyone knows what the hell is happening? I can't find this solution anywhere.</p> <p><strong>EDIT</strong> I saw this post <a href="https://github.com/kubernetes/kubernetes/issues/44273" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/44273</a> saying that the GKE instances that are small than the default google instance for GKE(n1-standard-1) can have network problems. So I changed my instances to the default type, but without success. Here are my node and pod descriptions:</p> <pre><code>Name: gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/fluentd-ds-ready=true beta.kubernetes.io/instance-type=n1-standard-1 beta.kubernetes.io/os=linux cloud.google.com/gke-nodepool=pool-nodes-dev failure-domain.beta.kubernetes.io/region=southamerica-east1 failure-domain.beta.kubernetes.io/zone=southamerica-east1-a kubernetes.io/hostname=gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true CreationTimestamp: Thu, 27 Sep 2018 20:27:47 -0300 Taints: &lt;none&gt; Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- KernelDeadlock False Fri, 28 Sep 2018 09:58:58 -0300 Thu, 27 Sep 2018 20:27:16 -0300 KernelHasNoDeadlock kernel has no deadlock FrequentUnregisterNetDevice False Fri, 28 Sep 2018 09:58:58 -0300 Thu, 27 Sep 2018 20:32:18 -0300 UnregisterNetDevice node is functioning properly NetworkUnavailable False Thu, 27 Sep 2018 20:27:48 -0300 Thu, 27 Sep 2018 20:27:48 -0300 RouteCreated NodeController create implicit route OutOfDisk False Fri, 28 Sep 2018 09:59:03 -0300 Thu, 27 Sep 2018 20:27:47 -0300 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Fri, 28 Sep 2018 09:59:03 -0300 Thu, 27 Sep 2018 20:27:47 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 28 Sep 2018 09:59:03 -0300 Thu, 27 Sep 2018 20:27:47 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 28 Sep 2018 09:59:03 -0300 Thu, 27 Sep 2018 20:27:47 -0300 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 28 Sep 2018 09:59:03 -0300 Thu, 27 Sep 2018 20:28:07 -0300 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 10.0.0.2 ExternalIP: Hostname: gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 Capacity: cpu: 1 ephemeral-storage: 98868448Ki hugepages-2Mi: 0 memory: 3787608Ki pods: 110 Allocatable: cpu: 940m ephemeral-storage: 47093746742 hugepages-2Mi: 0 memory: 2702168Ki pods: 110 System Info: Machine ID: 1e8e0ecad8f5cc7fb5851bc64513d40c System UUID: 1E8E0ECA-D8F5-CC7F-B585-1BC64513D40C Boot ID: 971e5088-6bc1-4151-94bf-b66c6c7ee9a3 Kernel Version: 4.14.56+ OS Image: Container-Optimized OS from Google Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.2 Kubelet Version: v1.10.7-gke.2 Kube-Proxy Version: v1.10.7-gke.2 PodCIDR: 10.0.32.0/24 ProviderID: gce://aditumpay/southamerica-east1-a/gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 Non-terminated Pods: (11 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system event-exporter-v0.2.1-5f5b89fcc8-xsvmg 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-scaler-7c5db745fc-vttc9 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-v3.1.0-sz8r8 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system heapster-v1.5.3-75486b456f-sj7k8 138m (14%) 138m (14%) 301856Ki (11%) 301856Ki (11%) kube-system kube-dns-788979dc8f-99xvh 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%) kube-system kube-dns-788979dc8f-9sz2b 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%) kube-system kube-dns-autoscaler-79b4b844b9-6s8x2 20m (2%) 0 (0%) 10Mi (0%) 0 (0%) kube-system kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system kubernetes-dashboard-598d75cb96-6nhcd 50m (5%) 100m (10%) 100Mi (3%) 300Mi (11%) kube-system l7-default-backend-5d5b9874d5-8wk6h 10m (1%) 10m (1%) 20Mi (0%) 20Mi (0%) kube-system metrics-server-v0.2.1-7486f5bd67-fvddz 53m (5%) 148m (15%) 154Mi (5%) 404Mi (15%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 891m (94%) 396m (42%) memory 817952Ki (30%) 1391392Ki (51%) Events: &lt;none&gt; </code></pre> <p>The other node: </p> <pre><code>Name: gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/fluentd-ds-ready=true beta.kubernetes.io/instance-type=n1-standard-1 beta.kubernetes.io/os=linux cloud.google.com/gke-nodepool=pool-nodes-dev failure-domain.beta.kubernetes.io/region=southamerica-east1 failure-domain.beta.kubernetes.io/zone=southamerica-east1-a kubernetes.io/hostname=gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true CreationTimestamp: Thu, 27 Sep 2018 20:30:05 -0300 Taints: &lt;none&gt; Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- KernelDeadlock False Fri, 28 Sep 2018 10:11:03 -0300 Thu, 27 Sep 2018 20:29:34 -0300 KernelHasNoDeadlock kernel has no deadlock FrequentUnregisterNetDevice False Fri, 28 Sep 2018 10:11:03 -0300 Thu, 27 Sep 2018 20:34:36 -0300 UnregisterNetDevice node is functioning properly NetworkUnavailable False Thu, 27 Sep 2018 20:30:06 -0300 Thu, 27 Sep 2018 20:30:06 -0300 RouteCreated NodeController create implicit route OutOfDisk False Fri, 28 Sep 2018 10:11:49 -0300 Thu, 27 Sep 2018 20:30:05 -0300 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Fri, 28 Sep 2018 10:11:49 -0300 Thu, 27 Sep 2018 20:30:05 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 28 Sep 2018 10:11:49 -0300 Thu, 27 Sep 2018 20:30:05 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 28 Sep 2018 10:11:49 -0300 Thu, 27 Sep 2018 20:30:05 -0300 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 28 Sep 2018 10:11:49 -0300 Thu, 27 Sep 2018 20:30:25 -0300 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 10.0.0.3 ExternalIP: Hostname: gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz Capacity: cpu: 1 ephemeral-storage: 98868448Ki hugepages-2Mi: 0 memory: 3787608Ki pods: 110 Allocatable: cpu: 940m ephemeral-storage: 47093746742 hugepages-2Mi: 0 memory: 2702168Ki pods: 110 System Info: Machine ID: f1d5cf2a0b2c5472cf6509778a7941a7 System UUID: F1D5CF2A-0B2C-5472-CF65-09778A7941A7 Boot ID: f35bebb8-acd7-4a2f-95d6-76604638aef9 Kernel Version: 4.14.56+ OS Image: Container-Optimized OS from Google Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.2 Kubelet Version: v1.10.7-gke.2 Kube-Proxy Version: v1.10.7-gke.2 PodCIDR: 10.0.33.0/24 ProviderID: gce://aditumpay/southamerica-east1-a/gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default aditum-payment-7d966c494c-wpk2t 100m (10%) 0 (0%) 0 (0%) 0 (0%) default aditum-portal-dev-5c69d76bb6-n5d5b 100m (10%) 0 (0%) 0 (0%) 0 (0%) default aditum-vtexapi-5c758fcfb7-rhvsn 100m (10%) 0 (0%) 0 (0%) 0 (0%) default admin-mongo-dev-7d9f7f7d46-rrj42 100m (10%) 0 (0%) 0 (0%) 0 (0%) default mongod-0 200m (21%) 0 (0%) 200Mi (7%) 0 (0%) kube-system fluentd-gcp-v3.1.0-pgwfx 0 (0%) 0 (0%) 0 (0%) 0 (0%) kube-system kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz 100m (10%) 0 (0%) 0 (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 700m (74%) 0 (0%) memory 200Mi (7%) 0 (0%) Events: &lt;none&gt; </code></pre> <p>All the cluster's pods are stucked.</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default aditum-payment-7d966c494c-wpk2t 0/1 ContainerCreating 0 13h default aditum-portal-dev-5c69d76bb6-n5d5b 0/1 ContainerCreating 0 13h default aditum-vtexapi-5c758fcfb7-rhvsn 0/1 ContainerCreating 0 13h default admin-mongo-dev-7d9f7f7d46-rrj42 0/1 ContainerCreating 0 13h default mongod-0 0/1 ContainerCreating 0 13h kube-system event-exporter-v0.2.1-5f5b89fcc8-xsvmg 0/2 ContainerCreating 0 13h kube-system fluentd-gcp-scaler-7c5db745fc-vttc9 0/1 ContainerCreating 0 13h kube-system fluentd-gcp-v3.1.0-pgwfx 0/2 ContainerCreating 0 16h kube-system fluentd-gcp-v3.1.0-sz8r8 0/2 ContainerCreating 0 16h kube-system heapster-v1.5.3-75486b456f-sj7k8 0/3 ContainerCreating 0 13h kube-system kube-dns-788979dc8f-99xvh 0/4 ContainerCreating 0 13h kube-system kube-dns-788979dc8f-9sz2b 0/4 ContainerCreating 0 13h kube-system kube-dns-autoscaler-79b4b844b9-6s8x2 0/1 ContainerCreating 0 13h kube-system kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6 0/1 ContainerCreating 0 13h kube-system kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz 0/1 ContainerCreating 0 13h kube-system kubernetes-dashboard-598d75cb96-6nhcd 0/1 ContainerCreating 0 13h kube-system l7-default-backend-5d5b9874d5-8wk6h 0/1 ContainerCreating 0 13h kube-system metrics-server-v0.2.1-7486f5bd67-fvddz 0/2 ContainerCreating 0 13h </code></pre> <p>A stucked pod.</p> <pre><code>Name: aditum-payment-7d966c494c-wpk2t Namespace: default Node: gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz/10.0.0.3 Start Time: Thu, 27 Sep 2018 20:30:47 -0300 Labels: io.kompose.service=aditum-payment pod-template-hash=3852270507 Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container aditum-payment Status: Pending IP: 
Controlled By: ReplicaSet/aditum-payment-7d966c494c Containers: aditum-payment: Container ID: Image: gcr.io/aditumpay/aditumpaymentwebapi:latest Image ID: Port: 5000/TCP Host Port: 0/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 100m Environment: CONNECTIONSTRING: &lt;set to the key 'CONNECTIONSTRING' of config map 'aditum-payment-config'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-qsc9k (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-qsc9k: Type: Secret (a volume populated by a Secret) SecretName: default-token-qsc9k Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 3m (x1737 over 13h) kubelet, gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz Failed create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>Thanks!</p>
<p>Sorry for taking too long to respond. It was a very silly problem. After I reached Google Cloud support, I noticed that my NAT machine was not working properly: the PrivateAccess route was passing through my NAT. Thanks everyone for the help.</p>
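<p>For anyone hitting the same symptom on GKE with a custom NAT setup: nodes without external IPs need Private Google Access enabled on their subnet so they can reach <code>k8s.gcr.io</code> and other Google endpoints without going through the NAT. A possible check and fix with <code>gcloud</code>; the subnet name below is a placeholder:</p> <pre><code># is Private Google Access enabled on the node subnet?
gcloud compute networks subnets describe my-node-subnet \
    --region southamerica-east1 --format="value(privateIpGoogleAccess)"

# enable it if it is off
gcloud compute networks subnets update my-node-subnet \
    --region southamerica-east1 --enable-private-ip-google-access
</code></pre>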
<p>How does Istio support IP based routing between pods in the same Service (or ReplicaSet to be more specific)?</p> <p>We would like to deploy a Tomcat application with replica > 1 within an Istio mesh. The app runs Infinispan, which is using JGroups to sort out communication and clustering. JGroups need to identify its cluster members and for that purpose there is the KUBE_PING (Kubernetes discovery protocol for JGroups). It will consult K8S API with a lookup comparable to <em>kubectl get pods</em>. The cluster members can be both pods in other services and pods within the same Service/Deployment.</p> <p>Despite our issue being driven by rather specific needs the topic is generic. How do we enable pods to communicate with each other within a replica set?</p> <p>Example: as a showcase we deploy the demo application <a href="https://github.com/jgroups-extras/jgroups-kubernetes" rel="noreferrer">https://github.com/jgroups-extras/jgroups-kubernetes</a>. The relevant stuff is:</p> <pre><code>apiVersion: v1 items: - apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ispn-perf-test namespace: my-non-istio-namespace spec: replicas: 3 &lt; -- edited for brevity -- &gt; </code></pre> <p>Running <strong>without Istio</strong>, the three pods will find each other and form the cluster. Deploying the same <strong>with Istio</strong> in <em>my-istio-namespace</em> and adding a basic Service definition:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ispn-perf-test-service namespace: my-istio-namespace spec: selector: run : ispn-perf-test ports: - protocol: TCP port: 7800 targetPort: 7800 name: "one" - protocol: TCP port: 7900 targetPort: 7900 name: "two" - protocol: TCP port: 9000 targetPort: 9000 name: "three" </code></pre> <p>Note that output below is wide - you might need to scroll right to get the IPs</p> <pre><code>kubectl get pods -n my-istio-namespace -o wide NAME READY STATUS RESTARTS AGE IP NODE ispn-perf-test-558666c5c6-g9jb5 2/2 Running 0 1d 10.44.4.63 gke-main-pool-4cpu-15gb-98b104f4-v9bl ispn-perf-test-558666c5c6-lbvqf 2/2 Running 0 1d 10.44.4.64 gke-main-pool-4cpu-15gb-98b104f4-v9bl ispn-perf-test-558666c5c6-lhrpb 2/2 Running 0 1d 10.44.3.22 gke-main-pool-4cpu-15gb-98b104f4-x8ln kubectl get service ispn-perf-test-service -n my-istio-namespace NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ispn-perf-test-service ClusterIP 10.41.13.74 &lt;none&gt; 7800/TCP,7900/TCP,9000/TCP 1d </code></pre> <p>Guided by <a href="https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration" rel="noreferrer">https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration</a>, let's peek into the resulting Envoy conf of one of the pods:</p> <pre><code>istioctl proxy-config listeners ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace ADDRESS PORT TYPE 10.44.4.63 7900 TCP 10.44.4.63 7800 TCP 10.44.4.63 9000 TCP 10.41.13.74 7900 TCP 10.41.13.74 9000 TCP 10.41.13.74 7800 TCP &lt; -- edited for brevity -- &gt; </code></pre> <p>The Istio doc describes the listeners above as</p> <blockquote> <p>Receives outbound non-HTTP traffic for relevant IP:PORT pair from listener <code>0.0.0.0_15001</code></p> </blockquote> <p>and this all makes sense. The pod <em>ispn-perf-test-558666c5c6-g9jb5</em> can reach itself on 10.44.4.63 and the service via 10.41.13.74. But... what if the pod sends packets to 10.44.4.64 or 10.44.3.22? 
Those IPs are not present among the listeners, so as far as I can tell the two "sibling" pods are unreachable for <em>ispn-perf-test-558666c5c6-g9jb5</em>.</p> <p>Can Istio support this today, and if so, how?</p>
<p>You are right that HTTP routing only supports local access or remote access by service name or service VIP.</p> <p>That said, for your particular example, above, where the service ports are named "one", "two", "three", the routing will be plain TCP as described <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">here</a>. Therefore, your example should work. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the other pods at 10.44.4.64 and 10.44.3.22.</p> <p>If you rename the ports to "http-one", "http-two", and "http-three" then HTTP routing will kick in and the RDS config will restrict the remote calls to ones using recognized service domains.</p> <p>To see the difference in the RDS config, look at the output from the following command when the port is named "one", and when it is changed to "http-one".</p> <pre><code>istioctl proxy-config routes ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace --name 7800 -o json </code></pre> <p>With the port named "one" it will return no routes, so TCP routing will apply, but in the "http-one" case, the routes will be restricted.</p> <p>I don't know if there is a way to add additional remote pod IP addresses to the RDS domains in the HTTP case. I would suggest opening an Istio issue, to see if it's possible.</p>
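<p>For reference, the renaming described above would only touch the port names in the Service from the question; everything else stays the same. A sketch:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: ispn-perf-test-service
  namespace: my-istio-namespace
spec:
  selector:
    run: ispn-perf-test
  ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "http-one"     # "http-" prefix makes Istio apply HTTP (RDS) routing
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "http-two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "http-three"
</code></pre> <p>With the plain names ("one", "two", "three") the traffic stays TCP-routed, which is what allows the direct pod-to-pod IP access needed in this scenario.</p>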
<p>I have a Kubernetes Pod that has </p> <ul> <li>Requested Memory of 1500 MiB</li> <li>Memory Limit of 2048 MiB</li> </ul> <p>I have 2 containers running inside this pod, one is the actual application (heavy Java app) and a lightweight log shipper.</p> <p>The pod consistently reports a usage of 1.9-2 GiB of memory. Because of this, the deployment is scaled (an autoscaling configuration is set which scales pods if memory consumption > 80%), naturally resulting in more pods and more costs.</p> <p><strong>Yellow Line represents application memory usage</strong></p> <p><a href="https://i.stack.imgur.com/K3vpL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/K3vpL.png" alt="enter image description here"></a></p> <p>However, on deeper investigation, this is what I found.</p> <p>On <code>exec</code>ing inside the application container, I ran the <code>top</code> command, and it reports a total of <code>16431508 KiB</code> or roughly 16 GiB of memory available, which is the memory available on the machine.</p> <p>There are 3 processes running inside the application container, out of which the root process (application) takes <em>5.9%</em> of memory, which roughly comes out to 0.92 GiB.</p> <p>The log-shipper simply takes 6 MiB of memory.</p> <p>Now, what I don't understand is <em>WHY</em> my pod consistently reports such high usage metrics. Am I missing something? We're incurring significant costs due to the unintended auto-scaling and would like to fix the same.</p>
<p>In Linux, unused memory is considered wasted memory; that's why all "free" RAM, i.e. memory not used by the application or the kernel itself, is actively used for caching I/O operations, file system metadata, etc., but it is handed back to your application whenever it needs it.</p> <p>You can get detailed information about your container's memory consumption here:</p> <blockquote> <p>/sys/fs/cgroup/memory/docker/{id}/memory.stat</p> </blockquote> <p>If you want to scale your cluster based on memory usage, it is better to count only the memory your application actually uses (its working set), not the container's total reported memory usage. </p>
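<p>A quick way to see that breakdown from inside the pod (a sketch assuming a cgroup v1 node, which was typical at the time; the pod and container names are placeholders): much of the gap between <code>top</code> and the pod metric is usually page cache, which appears as <code>cache</code>/<code>inactive_file</code> in <code>memory.stat</code> and is reclaimable.</p> <pre><code># total memory charged to the container's cgroup (includes page cache)
kubectl exec my-app-pod -c app -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes

# detailed breakdown: compare rss vs cache and inactive_file
kubectl exec my-app-pod -c app -- cat /sys/fs/cgroup/memory/memory.stat
</code></pre> <p>A rough "working set" is <code>usage_in_bytes</code> minus <code>inactive_file</code>, which is much closer to what autoscaling decisions should be based on than the raw usage figure.</p>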
<p>I'm starting to introduce <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">liveness and readiness probes</a> in my services, and I'm not sure if I've succeeded in getting it working or not, because I can't confidently interpret the status reported by <code>kubectl</code>.</p> <p><code>kubectl describe pod mypod</code> gives me something like this:</p> <pre><code>Name: myapp-5798dd798c-t7dqs Namespace: dev Node: docker-for-desktop/192.168.65.3 Start Time: Wed, 24 Oct 2018 13:22:54 +0200 Labels: app=myapp pod-template-hash=1354883547 Annotations: version: v2 Status: Running IP: 10.1.0.103 Controlled By: ReplicaSet/myapp-5798dd798c Containers: myapp: Container ID: docker://5d39cb47d2278eccd6d28c1eb35f93112e3ad103485c1c825de634a490d5b736 Image: myapp:latest Image ID: docker://sha256:61dafd0c208e2519d0165bf663e4b387ce4c2effd9237fb29fb48d316eda07ff Port: 80/TCP Host Port: 0/TCP State: Running Started: Wed, 24 Oct 2018 13:23:06 +0200 Ready: True Restart Count: 0 Liveness: http-get http://:80/healthz/live delay=0s timeout=10s period=60s #success=1 #failure=3 Readiness: http-get http://:80/healthz/ready delay=3s timeout=3s period=5s #success=1 #failure=3 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvnc2 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-gvnc2: Type: Secret (a volume populated by a Secret) SecretName: default-token-gvnc2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 84s default-scheduler Successfully assigned myapp-5798dd798c-t7dqs to docker-for-desktop Normal SuccessfulMountVolume 84s kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gvnc2" Normal Pulled 75s kubelet, docker-for-desktop Container image "myapp:latest" already present on machine Normal Created 74s kubelet, docker-for-desktop Created container Normal Started 72s kubelet, docker-for-desktop Started container Warning Unhealthy 65s kubelet, docker-for-desktop Readiness probe failed: Get http://10.1.0.103:80/healthz/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>Now, I note that the <code>container</code> has <code>Status: Ready</code>, but the last event in the events list lists the state as <code>Unhealthy</code> because of a failed readiness probe. (Looking in the application logs I can see that there has been many more incoming requests to the readiness probe since, and that they all succeeded.)</p> <p>How should I interpret this information? Does Kubernetes consider my pod to be ready, or not ready?</p>
<p>A pod is ready when the readiness probes of all its containers return success. In your case the readiness probe failed on the first attempt, but the next probe succeeded and the container went into the ready state. For contrast, here is an example of a readiness probe that keeps failing.</p> <p>The readiness probe below was attempted 58 times over the last 11 minutes and failed each time.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 11m default-scheduler Successfully assigned default/upnready to mylabserver.com Normal Pulling 11m kubelet, mylabserver.com pulling image "luksa/kubia:v3" Normal Pulled 11m kubelet, mylabserver.com Successfully pulled image "luksa/kubia:v3" Normal Created 11m kubelet, mylabserver.com Created container Normal Started 11m kubelet, mylabserver.com Started container Warning Unhealthy 103s (x58 over 11m) kubelet, mylabserver.com Readiness probe failed: Get http://10.44.0.123:80/: dial tcp 10.44.0.123:80: connect: </code></pre> <p>Accordingly, the container status is not ready, as can be seen below:</p> <pre><code>kubectl get pods -l run=upnready NAME READY STATUS RESTARTS AGE upnready 0/1 Running 0 17m </code></pre> <p>In your case the readiness probe passed the health check and your pod is in the ready state. </p> <p>You can make use of <code>initialDelaySeconds</code>, <code>periodSeconds</code> and <code>timeoutSeconds</code> to get better results. Here is an article:</p> <p><a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">article on readiness probe and liveness probe</a></p>
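<p>If the very first probe only fails because the application is still starting (as in the question, where the probe runs with <code>delay=3s timeout=3s</code>), slightly more generous settings avoid the noisy <code>Unhealthy</code> event altogether. A sketch with illustrative values:</p> <pre><code>readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 80
  initialDelaySeconds: 10   # give the app time to start before the first probe
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3       # consecutive failures before the pod is marked NotReady
</code></pre>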
<p>I'm working on installing a three node kubernetes cluster on a CentOS 7 with flannel for a some time, however the CoreDNS pods cannot connect to API server and constantly restarting.</p> <p>The reference HowTo document I followed is <a href="https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/" rel="nofollow noreferrer">here</a>.</p> <h2>What Have I Done so Far?</h2> <ul> <li>Disabled SELinux,</li> <li>Disabled <code>firewalld</code>,</li> <li>Enabled <code>br_netfilter</code>, <code>bridge-nf-call-iptables</code>,</li> <li>Installed kubernetes on three nodes, set-up master's pod network with flannel default network (<code>10.244.0.0/16</code>),</li> <li>Installed other two nodes, and joined the master.</li> <li>Deployed flannel,</li> <li>Configured Docker's BIP to use flannel default per-node subnet and network.</li> </ul> <h2>Current State</h2> <ul> <li>The kubelet works and the cluster reports nodes as ready.</li> <li>The Cluster can schedule and migrate pods, so CoreDNS are spawned on nodes.</li> <li>Flannel network is connected. No logs in containers and I can ping <code>10.244.0.0/24</code> networks from node to node.</li> <li>Kubernetes can deploy and run arbitrary pods (Tried <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">shell demo</a>, and can access its shell via <code>kubectl</code> even if the container is on a different node. <ul> <li>However, since DNS is not working, they cannot resolve any IP addresses.</li> </ul></li> </ul> <h2>What is the Problem?</h2> <ul> <li><p>CoreDNS pods report that they cannot connect to API server with error:</p> <pre><code>Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host </code></pre></li> <li><p>I cannot see <code>10.96.0.0</code> routes in routing tables:</p> <pre><code>default via 172.16.0.1 dev eth0 proto static metric 100 10.1.0.0/24 dev eth1 proto kernel scope link src 10.1.0.202 metric 101 10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 10.244.1.0/24 dev docker0 proto kernel scope link src 10.244.1.1 10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 172.16.0.0/16 dev eth0 proto kernel scope link src 172.16.0.202 metric 100 </code></pre></li> </ul> <h2>Additional Info</h2> <ul> <li>Cluster init is done with the command <code>kubeadm init --apiserver-advertise-address=172.16.0.201 --pod-network-cidr=10.244.0.0/16</code>.</li> <li>I have torn down the cluster and rebuilt with 1.12.0 The problem still persists.</li> <li>The workaround in Kubernetes <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-pods-have-crashloopbackoff-or-error-state" rel="nofollow noreferrer">documentation</a> doesn't work.</li> <li>Problem is present and same both with <code>1.11-3</code>and <code>1.12-0</code> CentOS7 packages.</li> </ul> <h2>Progress so Far</h2> <ul> <li>Downgraded Kubernetes to <code>1.11.3-0</code>.</li> <li>Re-initialized Kubernetes with <code>kubeadm init --apiserver-advertise-address=172.16.0.201 --pod-network-cidr=10.244.0.0/16</code>, since the server has another external IP which cannot be accessed via other hosts, and Kubernetes tends to select that IP as API Server IP. 
<code>--pod-network-cidr</code> is mandated by <a href="https://coreos.com/flannel/docs/latest/kubernetes.html" rel="nofollow noreferrer">flannel</a>.</li> <li><p>Resulting <code>iptables -L</code> output after initialization <em>with no joined nodes</em></p> <pre><code>Chain INPUT (policy ACCEPT) target prot opt source destination KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ KUBE-FIREWALL all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */ DOCKER-USER all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */ KUBE-FIREWALL all -- anywhere anywhere Chain DOCKER-USER (1 references) target prot opt source destination RETURN all -- anywhere anywhere Chain KUBE-EXTERNAL-SERVICES (1 references) target prot opt source destination Chain KUBE-FIREWALL (2 references) target prot opt source destination DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000 Chain KUBE-FORWARD (1 references) target prot opt source destination ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000 Chain KUBE-SERVICES (1 references) target prot opt source destination REJECT udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:domain reject-with icmp-port-unreachable REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:domain reject-with icmp-port-unreachable </code></pre></li> <li><p>Looks like API Server is deployed as it should</p> <pre><code>$ kubectl get svc kubernetes -o=yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-10-25T06:58:46Z labels: component: apiserver provider: kubernetes name: kubernetes namespace: default resourceVersion: "6" selfLink: /api/v1/namespaces/default/services/kubernetes uid: 6b3e4099-d823-11e8-8264-a6f3f1f622f3 spec: clusterIP: 10.96.0.1 ports: - name: https port: 443 protocol: TCP targetPort: 6443 sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre></li> <li><p>Then I've applied flannel network pod with </p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </code></pre></li> <li><p>As soon as I apply the flannel network, CoreDNS pods start and start to give the same error:</p> <pre><code>Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500\u0026resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host </code></pre></li> <li><p>I've found out that <code>flanneld</code> is using the wrong network interface, and changed it in the <code>kube-flannel.yml</code> file before deployment. However the outcome is still the same.</p></li> </ul> <p>Any help is greatly appreciated.</p>
<p>This is basically saying that your coredns pod cannot talk to the kube-apiserver. The kube-apiserver is exposed in the pod through these environment variables: <code>KUBERNETES_SERVICE_HOST=10.96.0.1</code> and <code>KUBERNETES_SERVICE_PORT_HTTPS=443</code></p> <p>I believe that the routes that you posted are routes on the host, since this is what you get when you run <code>ip route</code> in a pod container:</p> <pre><code>root@xxxx-xxxxxxxxxx-xxxxx:/# ip route default via 169.254.1.1 dev eth0 169.254.1.1 dev eth0 scope link root@xxxx-xxxxxxxxxx-xxxxx:/# </code></pre> <p>In any case, you wouldn't see <code>10.96.0.1</code> since that's exposed in the cluster using iptables. So what is that address? It happens to be a <code>service</code> in the default namespace called <code>kubernetes</code>. That service's <code>ClusterIP</code> is <code>10.96.0.1</code>, it's listening on port <code>443</code>, and it maps to <code>targetPort</code> <code>6443</code>, which is where your kube-apiserver is running.</p> <p>Since you can deploy pods, etc., it seems like the kube-apiserver is not down and that's not your problem. So most likely you are missing that service (or there's some iptables rule not allowing you to connect to it). You can see it here, for example:</p> <pre><code>$ kubectl get svc kubernetes NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 92d </code></pre> <p>The full output is something like this:</p> <pre><code>$ kubectl get svc kubernetes -o=yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-07-23T21:10:22Z labels: component: apiserver provider: kubernetes name: kubernetes namespace: default resourceVersion: "24" selfLink: /api/v1/namespaces/default/services/kubernetes uid: xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx spec: clusterIP: 10.96.0.1 ports: - name: https port: 443 protocol: TCP targetPort: 6443 sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>So if you are missing it, you can create it like this (note that the heredoc is piped into <code>kubectl apply</code>):</p> <pre><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Service metadata: labels: component: apiserver provider: kubernetes name: kubernetes namespace: default spec: clusterIP: 10.96.0.1 ports: - name: https port: 443 protocol: TCP targetPort: 6443 sessionAffinity: None type: ClusterIP EOF </code></pre>
<p>I have a Kubernetes cluster running on IBM Cloud and I'm trying to deploy the Couchbase operator. </p> <p>When running the command:</p> <pre><code>cbopctl apply --kubeconfig /home/jenkins/.bluemix/cluster.yml -f couchbase-autonomous-operator-kubernetes_1.0.0-linux_x86_64/couchbase-cluster.yaml </code></pre> <p>I get the following error.</p> <pre><code>panic: No Auth Provider found for name "oidc" goroutine 1 [running]: github.com/couchbase/couchbase-operator/pkg/client.MustNew(0xc4201e2e00, 0xc4201e2e00, 0x0) /var/tmp/foo/goproj/src/github.com/couchbase/couchbase-operator/pkg/client/client.go:21 +0x71 main.(*ApplyContext).Run(0xc4207e8570) </code></pre> <p>How do I authenticate this service?</p>
<p>Looks like you have your <code>~/.kube/config</code> file configured to use OpenID with the oidc authenticator. The <code>~/.kube/config</code> file is what the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> library uses to authenticate, and <a href="https://docs.couchbase.com/operator/1.0/cbopctl.html" rel="nofollow noreferrer">cbopctl</a> uses the client-go library.</p> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">This</a> explains how to set it up in Kubernetes. If you are using an <a href="https://www.ibm.com/cloud/container-service" rel="nofollow noreferrer">IBM Cloud managed Kubernetes cluster</a>, it's probably already configured on the kube-apiserver and you would have to follow <a href="https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install" rel="nofollow noreferrer">this</a>.</p> <p>To manually configure <code>kubectl</code> you would have to do something like <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator" rel="nofollow noreferrer">this</a>.</p>
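<p>For reference, manually wiring the oidc auth provider into a kubeconfig user looks roughly like the following; every issuer, client and token value here is a placeholder, and on IBM Cloud the CLI tooling normally generates the kubeconfig for you:</p> <pre><code>kubectl config set-credentials my-oidc-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://example-issuer.example.com \
  --auth-provider-arg=client-id=my-client-id \
  --auth-provider-arg=refresh-token=my-refresh-token \
  --auth-provider-arg=id-token=my-id-token
</code></pre>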
<p>I am experimenting with minikube for learning purposes, on a CentOS 7 Linux machine with Docker 18.06.010ce installed.</p> <p>I installed minikube using</p> <pre><code>minikube start --vm-driver=none </code></pre> <p>I deployed a few applications, but only to discover they couldn't talk to each other using their hostnames.</p> <p>I deleted minikube using</p> <pre><code>minikube delete </code></pre> <p>I re-installed minikube using</p> <pre><code>minikube start --vm-driver=none </code></pre> <p>I then followed the instructions under "Debugging DNS Resolution" (<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a>), but only to find out that the DNS system was not functional.</p> <p>More precisely, I run:</p> <p>1.</p> <pre><code>kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml </code></pre> <p>2.</p> <pre><code># kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 </code></pre> <p>3.</p> <pre><code># kubectl exec busybox cat /etc/resolv.conf nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local contabo.host options ndots:5 </code></pre> <p>4.</p> <pre><code># kubectl get pods --namespace=kube-system -l k8s-app=kube-dns NAME READY STATUS RESTARTS AGE coredns-c4cffd6dc-dqtbt 1/1 Running 1 4m kube-dns-86f4d74b45-tr8vc 2/3 Running 5 4m </code></pre> <p>Surprisingly, both kube-dns and coredns are running. Should this be a concern?</p> <p>I have looked everywhere for a solution, without success; step 2 always returns an error.</p> <p>I simply cannot accept that something so simple has become such a huge problem for me. Please assist.</p>
<p>Mine is working with coredns enabled and kube-dns disabled.</p> <pre><code>C02W84XMHTD5:ucp iahmad$ minikube addons list - addon-manager: enabled - coredns: enabled - dashboard: enabled - default-storageclass: enabled - efk: disabled - freshpod: disabled - heapster: disabled - ingress: disabled - kube-dns: disabled - metrics-server: disabled - nvidia-driver-installer: disabled - nvidia-gpu-device-plugin: disabled - registry: disabled - registry-creds: disabled - storage-provisioner: enabled </code></pre> <p>you may disable the kube-dns:</p> <pre><code>minikube addons disable kube-dns </code></pre>
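<p>After disabling the kube-dns addon so that only CoreDNS serves cluster DNS, it is worth re-running the checks from the question to confirm that a single DNS deployment remains and that the lookup from step 2 now resolves:</p> <pre><code># only the coredns pod should remain behind the kube-dns service label
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# the lookup from step 2 should now succeed
kubectl exec -ti busybox -- nslookup kubernetes.default
</code></pre>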
<p>I have a 3-node Kubernetes cluster running on Vagrant using the Oracle Kubernetes Vagrant boxes from <a href="http://github.com/oracle/vagrant-boxes.git" rel="nofollow noreferrer">http://github.com/oracle/vagrant-boxes.git</a>.</p> <p>I want to add a pod containing an Oracle database and persist the data so that, in case all nodes go down, I don't lose my data.</p> <p>According to my reading of the Kubernetes documentation, persistent volumes cannot be created on a local filesystem, only on a cloud-backed device. I want to configure the persistent volume and persistent volume claim on my Vagrant boxes as a proof of concept and training exercise for my Kubernetes learning.</p> <p>Are there any examples of how I might go about creating the PV and PVC in this configuration?</p> <p>As a complete Kubernetes newbie, any code samples would be greatly appreciated.</p>
<p>Use a hostPath volume.</p> <p>Create the PV:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" </code></pre> <p>Create the PVC:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>Use it in a pod:</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: task-pv-pod spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim containers: - name: task-pv-container image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - mountPath: "/usr/share/nginx/html" name: task-pv-storage </code></pre> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">documentation</a></p> <p>This is just an example, for testing only.</p> <p>For a production use case, you will need dynamic provisioning using a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a> for the PVC, so that the volume/data is available when the pod moves across the cluster.</p>
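<p>If you want something between a hand-made hostPath PV and full dynamic provisioning on the Vagrant boxes, a sketch of a StorageClass for statically provisioned local volumes looks like this (the class name is an assumption); PVCs referencing it stay Pending until a matching PV exists and a pod that uses the claim is scheduled:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning only, no external provisioner
volumeBindingMode: WaitForFirstConsumer     # bind when a consuming pod is scheduled
</code></pre>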
<p>I'm setting up an instance of ghost and I'm trying to secure the /ghost path with client cert verification. </p> <p>I've got an initial ingress up and running that serves the site quite happily with the path specified as /. </p> <p>I'm trying to add a second ingress (that's mostly the same) for the /ghost path. If I do this and add the annotations for basic auth, everything seems to work. i.e. If I browse to /ghost I am prompted for credentials in the basic-auth secret, if I browse to any other URL it is served without auth. </p> <p>I then switched to client cert verification based on this example: <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/client-certs" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/client-certs</a></p> <p>When I try this either the whole site or none of the site is secured, rather than the path-based separation, I got with basic-auth. Looking at the nginx.conf from the running pod the <code>proxy_set_header ssl-client-verify</code>, <code>proxy_set_header ssl-client-subject-dn</code> &amp; <code>proxy_set_header ssl-client-issuer-dn</code> elements are added under the root / path and the /ghost path. I've tried removing those (from the root only) and copying the config directly back to the pod but not luck there either.</p> <p>I'm pulling nginx-ingress (Chart version 0.23.0) in as a dependency via Helm</p> <p>Ingress definition for <code>/</code> location - this one works</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: letsencrypt-staging kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" labels: app: my-app chart: my-app-0.1.1 heritage: Tiller release: my-app name: my-app namespace: default spec: rules: - host: example.com http: paths: - backend: serviceName: my-app servicePort: http path: / tls: - hosts: - example.com secretName: mysite-tls </code></pre> <p>Ingress definition for <code>/ghost</code> location - this one doesn't work</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-tls-verify-client: "on" nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain" nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1" nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html" nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false" kubernetes.io/ingress.class: "nginx" labels: app: my-app chart: my-app-0.1.1 heritage: Tiller release: my-app name: my-app-secure namespace: default spec: rules: - host: example.com http: paths: - backend: serviceName: my-app servicePort: http path: /ghost tls: - hosts: - example.com secretName: mysite-tls </code></pre>
<p>You need a <code>'*'</code> on your path on your second ingress if you want to serve all the pages securely under <code>/ghost</code> and if you want just <code>/ghost</code> you need another rule. Something like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-tls-verify-client: "on" nginx.ingress.kubernetes.io/auth-tls-secret: "default/auth-tls-chain" nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1" nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html" nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false" kubernetes.io/ingress.class: "nginx" labels: app: my-app chart: my-app-0.1.1 heritage: Tiller release: my-app name: my-app-secure namespace: default spec: rules: - host: example.com http: paths: - backend: serviceName: my-app servicePort: http path: /ghost - backend: serviceName: my-app servicePort: http path: /ghost/* tls: - hosts: - example.com secretName: mysite-tls </code></pre> <p>However, if you want something like <code>/</code> unsecured and <code>/ghost</code> secured, I believe you won't be able to do it. For example, if you are using nginx, this is a limitation from nginx itself, when you configure a <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#server" rel="nofollow noreferrer"><code>server {}</code></a> block with TLS in nginx it looks something like this:</p> <pre><code>server { listen 443 ssl; server_name example.com; ssl_certificate example.com.crt; ssl_certificate_key example.com.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ... } </code></pre> <p>The ingress controller creates paths like this:</p> <pre><code>server { listen 443 ssl; server_name example.com; ssl_certificate example.com.crt; ssl_certificate_key example.com.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ... location / { ... } location /ghost { ... } } </code></pre> <p>So when you configure another <code>server {}</code> block with the same hostname and with no SSL it will override the first one.</p> <p>You could do it with different <code>- host:</code> rules in your ingress for example <code>ghost.example.com</code> with TLS and <code>main.example.com</code> without TLS. So in your nginx.conf you would have different <code>server {}</code> blocks.</p> <p>You can always shell into the ingress controller pod to check the configs, for example:</p> <pre><code>$ kubectl exec -it nginx-ingress-controller-xxxxxxxxx-xxxxx bash www-data@nginx-ingress-controller-6bd7c597cb-8kzjh:/etc/nginx$ cat nginx.conf </code></pre>
<p><strong>Background</strong></p> <p>So, on the journey from monolith(s) mess, to microservices, we've decided to go down the k8s route (already a WIN), on google cloud (likewise), and we're looking for an authentication and authorization solution.</p> <p>So we're considering using Istio, which again, the RBAC element looks like a WIN, and will allow us to keep authorization outside of the applications, as well as other niceties.</p> <p>And, cloud IAP. Sweet, we don't need to care about authentication, just grant users (all of which already have g-suite accounts) access via cloud iam.</p> <p><strong>Question</strong></p> <p>How do we manage &amp; inject auth data for users? IAP lets us grant access to projects, and presents data via JWT (perfect so far), but we can't add custom application permissions.</p> <p>We would like to be able to use fine-grained permissions for endpoints, and groups/roles to grant these. </p> <p>After much searching, I can't find any solution, and this seems like a super common requirement. Have I missed something (am I looking at this wrong?).</p>
<p>There's a couple of solutions I can think of:</p> <ul> <li><p>Istio (like you mentioned). Which supports:</p> <ul> <li><a href="https://istio.io/docs/concepts/security/#authentication" rel="nofollow noreferrer">Transport Authentication</a></li> <li><a href="https://istio.io/docs/concepts/security/#authentication" rel="nofollow noreferrer">End-user Authentication</a>. <a href="https://istio.io/help/ops/security/end-user-auth/" rel="nofollow noreferrer">Here's an example on how to use it</a>.</li> <li>For fine-grained authorization, you can use Istio <a href="https://istio.io/docs/concepts/security/#authorization" rel="nofollow noreferrer">authorization</a></li> </ul></li> <li><p><a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a> with <a href="https://www.consul.io/docs/platform/k8s/index.html" rel="nofollow noreferrer">Kubernetes</a> and use <a href="https://www.consul.io/docs/guides/acl.html" rel="nofollow noreferrer">ACLs</a> with ACL tokens. The tokens could also be managed by <a href="https://www.vaultproject.io/docs/secrets/consul/index.html" rel="nofollow noreferrer">Vault</a>. As of this writing, it doesn't integrate with OpenID or Oauth2 providers. Consul will help you provide that fine-grained authorization with <a href="https://learn.hashicorp.com/consul/advanced/day-1-operations/acl-guide" rel="nofollow noreferrer">ACLs</a></p></li> </ul>
<p>I need to add <code>kubectl apply</code> functionality to my application.</p> <p>I've looked through <code>kubectl go-client</code>, it has no provisions for the apply command.</p> <ol> <li>Can I create an instance of <code>kubectl</code> in my go-application? </li> <li>If not 1, can I use the <code>k8s.io/kubernetes</code> package to emulate an <code>kubectl apply</code> command?</li> </ol> <p>Questions and clarifications if needed, will be given.</p>
<blockquote> <ol> <li>Can I create an instance of kubectl in my application?</li> </ol> </blockquote> <p>You can wrap the <code>kubectl</code> command in your application and start it in a new child process, like you would do in a shell script. See the <code>exec</code> package in Go for more information: <a href="https://golang.org/pkg/os/exec/" rel="nofollow noreferrer">https://golang.org/pkg/os/exec/</a></p> <p>This works pretty well for us, and <code>kubectl</code> usually has the <code>-o</code> parameter that lets you control the output format, so you get machine-readable text back.</p> <p>There are already some open-source projects that use this approach:</p> <ul> <li><a href="https://github.com/box/kube-applier" rel="nofollow noreferrer">https://github.com/box/kube-applier</a></li> <li><a href="https://github.com/autoapply/autoapply" rel="nofollow noreferrer">https://github.com/autoapply/autoapply</a></li> </ul> <blockquote> <ol start="2"> <li>If not 1, can I use the k8s.io/kubernetes package to emulate an kubectl apply command?</li> </ol> </blockquote> <p>Have you found <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/apply/apply.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/apply/apply.go</a> while searching the kubectl source code? Take a look at the Run function:</p> <pre> func (o *ApplyOptions) Run() error { ... r := o.Builder. Unstructured(). Schema(o.Validator). ContinueOnError(). NamespaceParam(o.Namespace).DefaultNamespace(). FilenameParam(o.EnforceNamespace, &o.DeleteOptions.FilenameOptions). LabelSelectorParam(o.Selector). IncludeUninitialized(o.ShouldIncludeUninitialized). Flatten(). Do() ... err = r.Visit(func(info *resource.Info, err error) error { ... </pre> <p>It is not very readable, I guess, but this is what <code>kubectl apply</code> does. One possible approach would be to debug the code and see what it does in more detail.</p>
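<p>To make option 1 concrete, this is roughly the invocation such a wrapper would execute; piping the manifest over stdin avoids temporary files, and <code>-o name</code> or <code>-o json</code> gives output that is easy to parse in the parent process (the file and resource names are just examples):</p> <pre><code># what the exec-based wrapper effectively runs
cat deployment.yaml | kubectl apply -f - -o name

# or request structured output for programmatic parsing
kubectl apply -f deployment.yaml -o json
</code></pre>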
<p><a href="https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator" rel="nofollow noreferrer">The Cloud Composer documentation</a> explicitly states that:</p> <blockquote> <p>Due to an issue with the Kubernetes Python client library, your Kubernetes pods should be designed to take no more than an hour to run.</p> </blockquote> <p>However, it doesn't provide any more context than that, and I can't find a definitively relevant issue on the Kubernetes Python client project.</p> <p>To test it, I ran a pod for two hours and saw no problems. What issue creates this restriction, and how does it manifest? </p>
<p>I'm not deeply familiar with either the Cloud Composer or Kubernetes Python client library ecosystems, but sorting the GitHub issue tracker by most comments shows this open item near the top of the list: <a href="https://github.com/kubernetes-client/python/issues/492" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/issues/492</a></p> <p>It sounds like there is a token expiration issue:</p> <blockquote> <p>@yliaog this is an issue for us, as we are running kubernetes pods as batch processes and tracking the state of the pods with a static client. Once the client object is initialized, it does no refresh, and therefore any job that takes longer than 60 minutes will fail. Looking through python-base, it seems like we could make a wrapper class that generates a new client (or refreshes the config) every n minutes, or checks status prior to every call (as @mvle suggested). The best fix would be in swagger-codegen, but a temporary solution would probably be very useful for a lot of people.</p> <p>- @flylo, <a href="https://github.com/kubernetes-client/python/issues/492#issuecomment-376581140" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/issues/492#issuecomment-376581140</a></p> </blockquote>
<p>Is it possible to copy one stage of a multi-stage Dockerfile into another?</p> <p>For various business reasons I have been instructed to use a multi-stage Dockerfile, but what I really need to do is combine the appserver image and webserver image. This is fine in docker-compose as you can reference each section - but I am not sure if this can be done over GCP and Kubernetes.</p> <p>My Dockerfile code is below.</p> <pre><code>FROM php:7.1-fpm as appserver RUN apt-get update &amp;&amp; apt-get install -y libpq-dev \ &amp;&amp; docker-php-ext-install pdo pdo_pgsql pgsql RUN apt-get update &amp;&amp; \ apt-get install -y \ zlib1g-dev \ &amp;&amp; docker-php-ext-install zip COPY ./app /var/www/html FROM nginx:stable-alpine as webserver COPY ./app /var/www/html/ COPY vhost-prod.conf /etc/nginx/conf.d/default.conf </code></pre>
<p>I'm not sure what you are trying to achieve with the Dockerfile above.</p> <ul> <li>A multi-stage build is not meant for this purpose.</li> <li>You cannot throw just any problem at a multi-stage build and expect it to solve it.</li> <li>A multi-stage build is for building the binaries/code and copying them into the final image, so that you don't carry the dev tools into production.</li> </ul> <p>For your use case, please make two Dockerfiles, one for the app and one for the proxy/nginx, build them independently, and run/scale them independently, as sketched below.</p> <p>If you are serving static content from nginx, you just need to run nginx with the static content mounted as a volume.</p>
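<p>On Kubernetes that split could look roughly like the following: two Deployments built from the two separate Dockerfiles, scaled independently. The image names, ports and replica counts below are assumptions, not taken from the question:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: appserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: appserver
  template:
    metadata:
      labels:
        app: appserver
    spec:
      containers:
      - name: php-fpm
        image: myregistry/appserver:1.0    # built from the php:7.1-fpm Dockerfile
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: myregistry/webserver:1.0    # built from the nginx:stable-alpine Dockerfile
        ports:
        - containerPort: 80
</code></pre> <p>The nginx Deployment would then reach php-fpm through a Service in front of the appserver Deployment, rather than through a shared image.</p>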
<p>In Kubernetes, do containers inside a Pod have their own IP addresses?</p> <p>In Kubernetes, nodes have IP addresses and Pods have IP addresses. Do the containers inside the Pods also have IP addresses of their own?</p>
<p>No, they share the same networking namespace. Hence, if you bind to a port on 127.0.0.1 in one container, you can connect to that port from the other container, etc.</p>
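<p>A minimal way to see this (the container images and port are arbitrary choices for the demo): both containers below live in one Pod, share one Pod IP, and the second container can reach the first over <code>localhost</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:alpine              # listens on port 80 inside the pod
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
</code></pre> <p>From the sidecar, <code>kubectl exec shared-netns-demo -c sidecar -- wget -qO- http://127.0.0.1:80</code> returns the nginx welcome page, and <code>kubectl get pod shared-netns-demo -o wide</code> shows a single IP for the whole Pod.</p>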
<p>I have an application pod which will be deployed on a k8s cluster, but the Kubernetes scheduler decides on which node this pod runs.</p> <p>Now I want to dynamically add a taint with NoSchedule to the node where my application pod is running, so that no new pods will be scheduled on this node.</p> <p>I know that we can use kubectl taint node with NoSchedule if I know the node name, but I want to achieve this dynamically, based on which node this application pod is running on.</p> <p>The reason I want to do this is that this is a critical application pod which shouldn’t have downtime, and for good reasons I have only 1 pod for this application across the cluster.</p> <p>Please suggest.</p>
<p>In addition to @Rico's answer:</p> <p>You can use a feature called <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">node affinity</a>; it is still in beta, but some functionality is already implemented.</p> <p>You should add a <code>label</code> to your node, for example <code>test-node-affinity: test</code>. Once this is done you can add a <code>nodeAffinity</code> section under the <code>affinity</code> field in the PodSpec.</p> <pre><code>spec: ... affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: test-node-affinity operator: In values: - test </code></pre> <p>This means the <code>Pod</code> will look for a node with the key <code>test-node-affinity</code> and the value <code>test</code> and will be deployed there.</p> <p>I recommend reading the blog post <a href="https://banzaicloud.com/blog/k8s-taints-tolerations-affinities/" rel="nofollow noreferrer">Taints and tolerations, pod and node affinities demystified</a> by Toader Sebastian.</p> <p>Also familiarise yourself with <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a> from the Kubernetes docs.</p>
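<p>To tie this back to the "dynamically, based on where the pod landed" requirement: you can look up the node name from the running pod and then taint (or label) that node. A sketch; the pod name, taint key and value are placeholders:</p> <pre><code># find the node the critical pod was scheduled onto
NODE=$(kubectl get pod my-critical-pod -o jsonpath='{.spec.nodeName}')

# taint it so no new pods (without a matching toleration) are scheduled there
kubectl taint node "$NODE" dedicated=critical-app:NoSchedule

# or, for the node-affinity approach above, label it instead
kubectl label node "$NODE" test-node-affinity=test
</code></pre>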
<p>I'm looking at the helm chart versions from the jupyterhub helm repo: <code>https://jupyterhub.github.io/helm-chart/index.yaml</code></p> <p>When I use <code>helm search -l jupyterhub/jupyterhub</code>, the versions come out in the order that they appear in the <code>index.yaml</code>, which is not the order in which they are created (according to the <code>created</code> field in <code>index.yaml</code>)</p> <p>Is there a way to get the version list sorted by date created?</p>
<p>From the helm point of view no. But you can tweak the output to get what you want, although it's pretty tricky since the versioning/tagging hasn't been consistent for <code>jupyterhub/jupyterhub</code> for example. </p> <p>Anyhow, I came up with this bash/Ruby one-liner but it's picking it up directly from: <code>https://jupyterhub.github.io/helm-chart/index.yaml</code></p> <pre><code>$ curl -s https://jupyterhub.github.io/helm-chart/index.yaml | ruby -ryaml -rjson -rdate -e 'puts YAML.load(ARGF)["entries"]["binderhub"].sort_by {|hsh| hsh["created"] }' </code></pre>
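<p>A variation of the same idea that prints only the chart version and its creation date, sorted by date (swap in the chart key you care about, e.g. <code>jupyterhub</code> instead of <code>binderhub</code>; this assumes the index entries carry <code>version</code> and <code>created</code> fields, as Helm repo indexes normally do):</p> <pre><code>$ curl -s https://jupyterhub.github.io/helm-chart/index.yaml | \
    ruby -ryaml -e 'YAML.load(ARGF)["entries"]["jupyterhub"].sort_by { |e| e["created"] }.each { |e| puts "#{e["version"]}\t#{e["created"]}" }'
</code></pre>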
<p>I have two network interfaces on a node. One is on an internal network and the other is on an external network. The internal network is <code>192.168.50.0/255.255.255.0</code> and the external network is <code>192.168.0.0/255.255.255.0</code>. Kubernetes runs on the internal network <code>192.168.50.0/255.255.255.0</code>. I want to reach the internal network from other local nodes without using the internal network interface. How can I solve this problem?</p>
<p>Without knowing the subnet masks in use, I do not fully understand how these are different networks.</p> <p>But in any case, you need to enable routing packets from one interface to the other. I assume you are on a Linux node; there you can enable IP forwarding:</p> <pre><code>echo 1 &gt; /proc/sys/net/ipv4/ip_forward
</code></pre> <p>Then set up some rules in iptables to perform the NAT and forwarding (a persistence sketch follows below):</p> <p><strong>Example rules:</strong></p> <pre><code># Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# We allow traffic from the LAN side
iptables -A INPUT -i eth0 -j ACCEPT

######################################################################
#
#                         ROUTING
#
######################################################################

# eth0 is LAN
# eth1 is WAN

# Allow established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# forwarding
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
</code></pre> <p><a href="https://serverfault.com/questions/453254/routing-between-two-networks-on-linux">https://serverfault.com/questions/453254/routing-between-two-networks-on-linux</a></p>
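<p>To make the forwarding setting survive a reboot, the usual approach (standard sysctl file locations assumed) is:</p> <pre><code>sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" &gt;&gt; /etc/sysctl.conf
sysctl -p
</code></pre>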
<p>I'm currently working on a traditional monolith application, but I am in the process of breaking it up into spring microservices managed by kubernetes. The application allows the uploading/downloading of large files and these files are normally stored on the host filesystem. I'm wondering what would be the most viable method of persisting these files in a microservice architecture?</p>
<p>You have a bunch of different options; if you Google your question you'll find many answers for any budget and taste. Basically you'd want high-availability storage like AWS S3. You could set up your own dedicated server to store these files as well if you wanted to cut costs, but then you'd have to worry about backups and availability. If you need low-latency access to these files then you'd want to have them behind a CDN as well.</p>
<p>On my OS X host, I'm using Docker CE (18.06.1-ce-mac73 (26764)) with Kubernetes enabled and using Kubernetes orchestration. From this host, I can run a stack deploy to deploy a container to Kubernetes using this simple docker-compose file (kube-compose.yml):</p> <pre><code>version: '3.3' services: web: image: dockerdemos/lab-web volumes: - "./web/static:/static" ports: - "9999:80" </code></pre> <p>and this command-line run from the directory containing the compose file:</p> <pre><code>docker stack deploy --compose-file ./kube-compose.yml simple_test </code></pre> <p>However, when I attempt to run the same command from my Jenkins container, Jenkins returns: </p> <blockquote> <p>this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again</p> </blockquote> <p>I do not want the docker client in the Jenkins container to be initialized for a swarm since I'm not using Docker swarm on the host. </p> <p>The Jenkins container is defined in a docker-compose to include a volume mount to the docker host socket endpoint:</p> <pre><code>version: '3.3' services: jenkins: # contains embedded docker client &amp; blueocean plugin image: jenkinsci/blueocean:latest user: root ports: - "8080:8080" - "50000:50000" volumes: - ./jenkins_home:/var/jenkins_home # run Docker from the host system when the container calls it. - /var/run/docker.sock:/var/run/docker.sock # root of simple project - .:/home/project container_name: jenkins </code></pre> <p>I have also followed this guide to proxy requests to the docker host with socat: <a href="https://github.com/docker/for-mac/issues/770" rel="nofollow noreferrer">https://github.com/docker/for-mac/issues/770</a> and here: <a href="https://stackoverflow.com/questions/40737389/docker-compose-deploying-service-in-multiple-hosts">Docker-compose: deploying service in multiple hosts</a>. </p> <p>Finally, I'm using the following Jenkins definition (Jenkinsfile) to call stack to deploy on my host. Jenkins has the Jenkins docker plug-in installed:</p> <pre><code>node { checkout scm stage ('Deploy To Kube') { docker.withServer('tcp://docker.for.mac.localhost:1234') { sh 'docker stack deploy app --compose-file /home/project/kube-compose.yml' } } } </code></pre> <p>I've also tried changing the withServer signature to:</p> <pre><code>docker.withServer('unix:///var/run/docker.sock') </code></pre> <p>and I get the same error response. I am, however, able to telnet to the docker host from the Jenkins container so I know it's reachable. Also, as I mentioned earlier, I know the message is saying to run swarm init, but I am not deploying to swarm. </p> <p>I checked the version of the docker client in the Jenkins container and it is the same version (Linux variant, however) as I'm using on my host: </p> <blockquote> <p>Docker version 18.06.1-ce, build d72f525745</p> </blockquote> <p>Here's the code I've described: <a href="https://github.com/ewilansky/localstackdeploy.git" rel="nofollow noreferrer">https://github.com/ewilansky/localstackdeploy.git</a></p> <p>Please let me know if it's possible to do what I'm hoping to do from the Jenkins container. The purpose for all of this is to provide a simple, portable demonstration of a pipeline and deploying to Kubernetes is the last step. I understand that this is not the approach that would be taken anywhere outside of a local development environment.</p>
<p>Here is an approach that's working well for me until the Jenkins Docker plug-in or the Kubernetes Docker Stack Deploy command can support the remote deployment scenario I described. </p> <p>I'm now using the Kubernetes client kubectl from the Jenkins container. To minimize the size increase of the Jenkins container, I added just the Kubernetes client to the jenkinsci/blueocean image that was built on Alpine Linux. This Dockerfile shows the addition:</p> <pre><code>FROM jenkinsci/blueocean
USER root
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN mkdir /root/.kube
COPY kube-config /root/.kube/config
</code></pre> <p>I took this approach, which added ~100 MB to the image size, rather than installing the Alpine Linux Kubernetes package, which almost doubled the size of the image in my testing. Granted, the Kubernetes package has all Kubernetes components, but all I needed was the Kubernetes client. This is similar to the requirement that the docker client be resident in the Jenkins container in order to run Docker commands on the host. </p> <p>Notice in the Dockerfile that there is a reference to the Kubernetes config file:</p> <pre><code>kube-config /root/.kube/config
</code></pre> <p>I started with the Kubernetes configuration file on my host machine (the computer running Docker for Mac). I believe that if you enable Kubernetes in Docker for Mac, the Kubernetes client configuration will be present at ~/.kube/config. If not, install the Kubernetes client tools separately. In the Kubernetes configuration file that you will copy over to the Jenkins container via the Dockerfile, just change the server value so that the Jenkins container is pointing at the Docker for Mac host:</p> <pre><code> server: https://docker.for.mac.localhost:6443
</code></pre> <p>If you're using a Windows machine, I think you can use docker.for.win.localhost. There's a discussion about this here: <a href="https://github.com/docker/for-mac/issues/2705" rel="nofollow noreferrer">https://github.com/docker/for-mac/issues/2705</a> and other approaches described here: <a href="https://github.com/docker/for-linux/issues/264" rel="nofollow noreferrer">https://github.com/docker/for-linux/issues/264</a>.</p> <p>After recomposing the Jenkins container, I was then able to use kubectl to create a deployment and service for my app that's now running in the Kubernetes Docker for Mac host. In my case, here are the two commands I added to my Jenkinsfile:</p> <pre><code> stage ('Deploy To Kube') {
     sh 'kubectl create -f /kube/deploy/app_set/sb-demo-deployment.yaml'
 }
 stage('Configure Kube Load Balancer') {
     sh 'kubectl create -f /kube/deploy/app_set/sb-demo-service.yaml'
 }
</code></pre> <p>There are loads of options for Kubernetes container deployments. In my case, I simply needed to deploy my web app (with replicas) behind a load balancer. All of that is defined in the two yaml files called by kubectl. This is a bit more involved than docker stack deploy, but achieves the same end result. </p>
<p>Is it possible to launch a pod in a K8s cluster named B from a pod running in a K8s cluster named C?</p>
<p>Yes. It's a hacky way but it's possible.</p> <p>In addition, the guidelines are generally the same as in this answer:</p> <p><a href="https://stackoverflow.com/questions/52901435/how-i-create-new-namespace-in-kubernetes/52901519#52901519">How I create new namespace in Kubernetes</a></p> <p>In a pod in cluster C you can call:</p> <pre><code>$ kubectl apply -f &lt;your pod-definition&gt;
</code></pre> <p>which creates a pod in cluster B, provided the <code>~/.kube/config</code> inside that pod points to cluster B.</p> <p>You can also set up a role and the whole RBAC and issue a <code>curl</code> from cluster C against cluster B's API server:</p> <pre><code>$ curl -k -X POST -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer &lt;token&gt;' \
    https://&lt;cluster-B-apiserver&gt;:6443/api/v1/namespaces/&lt;namespace&gt;/pods -d '
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "mynamespace"
  },
  "spec": {
    "containers": [
      {
        ...
      }
    ]
    ...
  }
}'
</code></pre> <p>Or also use a library like <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and/or <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes-client/python</a>.</p>
<p>Is it possible to launch a pod in a K8s cluster named B from a pod running in a K8s cluster named C?</p>
<p>Yes: run the pod in cluster C with a service account (or some other embedded credentials) that has the authentication and authorization to launch pods in cluster B. In addition, the API server of cluster B needs to be reachable from the pod on cluster C.</p> <p>The simple way to test this is to run a pod on C with kubectl installed and the kubeconfig of cluster B, and just run the kubectl command (see the sketch below).</p> <p>It doesn't matter from where (cluster, pod, container, machine) you are accessing the API server. You just need kubectl or a client library with the proper credentials.</p>
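<p>A minimal sketch of that test, assuming cluster B's kubeconfig has been copied into the pod on cluster C (the paths and manifest name below are made up):</p> <pre><code># from inside the pod running on cluster C
kubectl --kubeconfig /etc/clusterB/kubeconfig get nodes          # sanity check: can we reach cluster B?
kubectl --kubeconfig /etc/clusterB/kubeconfig apply -f pod.yaml  # create the pod on cluster B
</code></pre>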
<p>How can we use a URL as the data source when creating a ConfigMap in Kubernetes, e.g. something like <code>kubectl create configmap config-from-file --from-file=</code><a href="https://url-to-file-location" rel="nofollow noreferrer">https://url-to-file-location</a>?</p>
<p>No, currently it is not possible to <em>directly</em> use URL as source for a configMap property.</p> <p>But this will do the trick:</p> <pre><code>kubectl create configmap config-from-url --from-literal=propkey="$(curl -k https://url-to-file-location)" </code></pre> <p>You can specify the namespace where to create the configMap with <code>-n</code> or <code>--namespace</code> - see <code>kubectl options</code>.</p> <p>The <code>-k</code> option for <code>curl</code> allows connections to sites with untrusted (e.g. self-signed) certs.</p> <p>Using <code>wget</code> instead of <code>curl</code> can be another option.</p>
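<p>If you prefer to keep the file semantics of <code>--from-file</code> (so the key behaves like a file name), a simple two-step variant works too; the temp path and key name below are just examples:</p> <pre><code>curl -sk -o /tmp/app.properties https://url-to-file-location
kubectl create configmap config-from-url --from-file=app.properties=/tmp/app.properties
</code></pre>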
<p>I am using a POD directly to manage our C* cluster in a K8s cluster, not using any high-level controller. When I want to upgrade C*, I want to do an image update. Is this a good pattern to update image for an upgrade? </p> <p>I saw the high-level deployment controller support image update too, but that causes the POD to delete and recreate, which in turn causes the IP to change. I don't want to change the IP and I found if I directly update the POD image, it can cause a restart and also keep the IP. This is the exact behavior I want, is this pattern right?</p> <p>Is it safe to use in production? </p>
<p>I believe you can follow the K8s <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets" rel="nofollow noreferrer">documentation</a> for a more 'production ready' upgrade strategy. Basically, use <code>updateStrategy=RollingUpdate</code>:</p> <pre><code>$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
</code></pre> <p>and then update the image:</p> <pre><code>$ kubectl patch statefulset cassandra --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cassandra:next-version"}]'
</code></pre> <p>and watch your updates:</p> <pre><code>$ kubectl get pod -l app=cassandra -w
</code></pre> <p>There's also <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#staging-an-update" rel="nofollow noreferrer">Staging the Update</a> in case you'd like to update each C* node individually; for example, if the new version turns out to be incompatible, you can revert that C* node back to the original version.</p> <p>Also, familiarize yourself with all the Cassandra release notes before doing the upgrade.</p>
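<p>If it helps, you can also follow the rollout itself rather than just watching the pods (same statefulset name as above, and only meaningful once the RollingUpdate strategy is set):</p> <pre><code>$ kubectl rollout status statefulset/cassandra
</code></pre>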
<p>I'm currently working on a traditional monolith application, but I am in the process of breaking it up into spring microservices managed by kubernetes. The application allows the uploading/downloading of large files and these files are normally stored on the host filesystem. I'm wondering what would be the most viable method of persisting these files in a microservice architecture?</p>
<p>We are mostly on-prem. We ended up using NFS: the path of least resistance, but probably not the most performant, and making it highly available is tough. If you have the chance, I agree with Denis Pshenov that an S3-like system, for example MinIO, might be a better alternative.</p>
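<p>For reference, wiring an NFS share into the cluster is just a PersistentVolume whose backend happens to be NFS (the server name and export path below are made up):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: uploads-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.internal   # hypothetical NFS server
    path: /exports/uploads
</code></pre>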
<p>I am trying to run a Redis cluster in Kubernetes in DigitalOcean. As a poc, I simply tried running an example I found online (<a href="https://github.com/sanderploegsma/redis-cluster/blob/master/redis-cluster.yml" rel="noreferrer">https://github.com/sanderploegsma/redis-cluster/blob/master/redis-cluster.yml</a>), which is able to spin up the pods appropriately when running locally using minikube.</p> <p>However, when running it on Digital Ocean, I always get the following error: </p> <blockquote> <p>Warning FailedScheduling 3s (x8 over 17s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 4 times) </p> </blockquote> <p>Given that I am not changing anything, I am not sure why this would not work. Does anyone have any suggestions?</p> <p>EDIT: some additional info </p> <pre><code>$ kubectl describe pvc Name: data-redis-cluster-0 Namespace: default StorageClass: Status: Pending Volume: Labels: app=redis-cluster Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 3m19s (x3420 over 14h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set Mounted By: &lt;none&gt; </code></pre> <p>EDIT: setting the default storage class partially resolved the problem! However, the node is now not able to find available volumes to bind:</p> <p>kubectl describe pvc:</p> <pre><code>Name: data-redis-cluster-0 Namespace: default StorageClass: local-storage Status: Pending Volume: Labels: app=redis-cluster Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal WaitForFirstConsumer 12m (x9 over 13m) persistentvolume-controller waiting for first consumer to be created before binding Normal WaitForFirstConsumer 3m19s (x26 over 9m34s) persistentvolume-controller waiting for first consumer to be created before binding </code></pre> <p>kubectl describe pod redis-cluster-0</p> <pre><code>.... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 16m (x25 over 17m) default-scheduler 0/5 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 4 node(s) didn't find available persistent volumes to bind. 
</code></pre> <p>kubectl describe sc</p> <pre><code>Name: local-storage IsDefaultClass: Yes Annotations: storageclass.kubernetes.io/is-default-class=true Provisioner: kubernetes.io/no-provisioner Parameters: &lt;none&gt; AllowVolumeExpansion: &lt;unset&gt; MountOptions: &lt;none&gt; ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: &lt;none&gt; </code></pre> <p>kubernetes manager pod logs:</p> <pre><code>I1028 15:30:56.154131 1 event.go:221] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"redis-cluster", UID:"7528483e-dac6-11e8-871f-2e55450d570e", APIVersion:"apps/v1", ResourceVersion:"2588806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Claim data-redis-cluster-0 Pod redis-cluster-0 in StatefulSet redis-cluster success I1028 15:30:56.166649 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-redis-cluster-0", UID:"76746506-dac6-11e8-871f-2e55450d570e", APIVersion:"v1", ResourceVersion:"2588816", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding I1028 15:30:56.220464 1 event.go:221] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"redis-cluster", UID:"7528483e-dac6-11e8-871f-2e55450d570e", APIVersion:"apps/v1", ResourceVersion:"2588806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod redis-cluster-0 in StatefulSet redis-cluster successful I1028 15:30:57.004631 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-redis-cluster-0", UID:"76746506-dac6-11e8-871f-2e55450d570e", APIVersion:"v1", ResourceVersion:"2588825", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding </code></pre>
<p>This:</p> <blockquote> <p>no storage class is set</p> </blockquote> <p>And an empty output for <code>kubectl describe sc</code> means that there's no storage class.</p> <p>I recommend installing the <a href="https://github.com/digitalocean/csi-digitalocean" rel="noreferrer">CSI-driver</a> for Digital Ocean. That will create a <code>do-block-storage</code> class using the Kubernetes CSI interface. </p> <p>Another option is to use local storage. Using a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="noreferrer">local</a> storage class:</p> <pre><code>$ cat &lt;&lt;EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
</code></pre> <p>Then, for either case, you may need to set it as the default storage class if you don't specify <code>storageClassName</code> in your PVC:</p> <pre><code>$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre> <p>or</p> <pre><code>$ kubectl patch storageclass do-block-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre>
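<p>One caveat worth adding for the local-storage route (it matches the <code>didn't find available persistent volumes to bind</code> message in your edit): <code>kubernetes.io/no-provisioner</code> does not provision anything dynamically, so you have to create the local PersistentVolumes yourself, one per node/disk. A minimal sketch, with the path and node name as placeholders:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-local-pv-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/redis-0        # must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - &lt;node-name&gt;
</code></pre>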
<p>If not specified, pods are run under a default service account.</p> <ul> <li>How can I check what the default service account is authorized to do?</li> <li>Do we need it to be mounted there with every pod?</li> <li>If not, how can we disable this behavior on the namespace level or cluster level.</li> <li>What other use cases the default service account should be handling?</li> <li>Can we use it as a service account to create and manage the Kubernetes deployments in a namespace? For example we will not use real user accounts to create things in the cluster because users come and go.</li> </ul> <p>Environment: Kubernetes 1.12 , with RBAC</p>
<ol> <li>A default service account is automatically created for each namespace.</li> </ol> <pre><code>kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         1d
</code></pre> <ol start="2"> <li><p>Service accounts can be added when required. Each pod is associated with exactly one service account, but multiple pods can use the same service account.</p> </li> <li><p>A pod can only use one service account from the same namespace.</p> </li> <li><p>Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign it explicitly, the pod will use the default service account.</p> </li> <li><p>The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state, let alone modify it in any way.</p> </li> <li><p>By default, the default service account in a namespace has no permissions other than those of an unauthenticated user.</p> </li> <li><p>Therefore pods by default can't even view cluster state. It's up to you to grant them appropriate permissions to do that.</p> </li> </ol> <pre><code>kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  &quot;kind&quot;: &quot;Status&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;metadata&quot;: {},
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;services is forbidden: User \&quot;system:serviceaccount:foo:default\&quot; cannot list resource \&quot;services\&quot; in API group \&quot;\&quot; in the namespace \&quot;foo\&quot;&quot;,
  &quot;reason&quot;: &quot;Forbidden&quot;,
  &quot;details&quot;: {
    &quot;kind&quot;: &quot;services&quot;
  },
  &quot;code&quot;: 403
}
</code></pre> <p>As can be seen above, the default service account cannot list services.</p> <p>But when given a proper Role and RoleBinding like below:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - &quot;&quot;
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
</code></pre> <p>now I am able to list the service resource:</p> <pre><code>kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  &quot;kind&quot;: &quot;ServiceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;metadata&quot;: {
    &quot;selfLink&quot;: &quot;/api/v1/namespaces/foo/services&quot;,
    &quot;resourceVersion&quot;: &quot;457324&quot;
  },
  &quot;items&quot;: []
}
</code></pre> <ol start="8"> <li><p>Giving all your service accounts the <code>cluster-admin</code> ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.</p> </li> <li><p>It's a good idea to create a specific service account for each pod and then associate it with a tailor-made Role or ClusterRole through a RoleBinding.</p> </li> <li><p>If one of your pods only needs to read pods while the other also needs to modify them, then create two different service accounts and make those pods use them by specifying the <code>serviceAccountName</code> property in the pod spec.</p> </li> </ol> <p>You can refer to the link below for an in-depth explanation.</p> <p><a href="https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/" rel="noreferrer">Service account example with roles</a></p> <p>You can check <code>kubectl explain serviceaccount.automountServiceAccountToken</code> and edit the service account:</p> <p><code>kubectl edit serviceaccount default -o yaml</code></p> <pre><code>apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: &quot;459688&quot;
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
</code></pre> <p>Once this change is done, any pod you spawn does not have a service account token mounted, as can be seen below.</p> <pre><code>kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
</code></pre>
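<p>To make points 9 and 10 concrete, here is a minimal sketch of a pod pinned to its own service account (the account and pod names are made up; the RoleBinding that grants the account its permissions is omitted):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: foo
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
  namespace: foo
spec:
  serviceAccountName: pod-reader-sa   # instead of the default service account
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
</code></pre>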
<ul> <li><p>I'm working locally on OSX.</p> </li> <li><p>I'm using Kafka and zookeeper in local mode. Meaning Zookeeper from my Kafka installation. One node cluster.</p> </li> <li><p>Both work on the loopback localhost <code>zookeeper.connect=localhost:2181</code></p> </li> <li><p>My <code>/etc/host</code> looks as follows:</p> <pre><code> 127.0.0.1 localhost MaatPro.local 255.255.255.255 broadcasthost ::1 localhost MaatPro.local fe80::1%lo0 localhost MaatPro.local </code></pre> </li> <li><p>I have docker for Mac set on my machine, with the Kubernetes extension.</p> </li> </ul> <p><strong>My scenario</strong></p> <p>I have an Akka-stream micro-service dockerized, that reads data from an external database and write it in a Kafka topic. It uses as bootstrap server <code>&quot;localhost:9092&quot;</code>.</p> <p><strong>Issue</strong></p> <p>When I run my service on my machine directly (e.g. command line or from within Intellij) everything works fine. When I run it on my Local Docker or Kubernetes I get the following error:</p> <p><code>(o.a.k.clients.NetworkClient) [Producer clientId=producer-1] Connection to node 0 could not be established. The broker may not be available.</code></p> <p>With Kubernetes I build the following YAML File to deploy my pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: fetcher spec: hostNetwork: true containers: - image: entellectextractorsfetchers:0.1.0-SNAPSHOT name: fetcher </code></pre> <p>I took the precaution to set <strong>hostNetwork: true</strong></p> <p>With Docker daemon directly I originally tried to set the network as host too but discovered that this does not work with Docker for Mac. Hence, I abandoned that route. I understood that it has to do with virtualization.</p> <ol> <li><p>Does the virtualization issue that happens with docker is actually the same as my local Kubernetes? Basically, the host network is the virtual machine and not my mac?</p> </li> <li><p>I try to change my code and add as a bootstrap server the following address: <strong>host.docker.internal</strong> as per the documentation. But the problem persists. Is the fundamental problem the fact that I am working on a loopback address? Shall I work on my network address indeed? To which address does host.docker.internal point to? How can I make it work with the loopback address? If I'm completely off, any idea of what I need to implement to get this working?</p> </li> </ol>
<p>Based on @cricket_007's guidance <a href="http://rmoff.net/2018/08/02/kafka-listeners-explained" rel="noreferrer">Kafka Listeners - Explained</a> and the many reads I have had here and there over this issue, including the official docker documentation <a href="https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds" rel="noreferrer">I want to connect from a container to a service on the host</a>,</p> <p>I came up with the following solution. </p> <p>I added the following to my default local kafka configuration (i.e. server.properties):</p> <blockquote> <pre><code>listeners=EXTERNAL://0.0.0.0:19092,INTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://127.0.0.1:9092,EXTERNAL://docker.for.mac.localhost:19092
inter.broker.listener.name=INTERNAL
</code></pre> </blockquote> <p>In fact, EXTERNAL here is <strong><em>expected</em></strong> to be the <strong><em>docker network</em></strong>. This config is only for my OSX machine, for my local development purposes. I do not expect people connecting to my laptop to use my local kafka, hence I can use <code>EXTERNAL://docker.for.mac.localhost:19092</code>. This is what is advertised to my container in docker/kube. From within that network, docker.for.mac.localhost is reachable. </p> <p><em>Note this would probably not work with Minikube. This is specific to Docker for Mac. The kubernetes that I run on my machine is the one coming with Docker for Mac and not minikube.</em> </p> <p>Finally, in my code I use both in a list: </p> <pre><code>"localhost:9092, docker.for.mac.localhost:19092"
</code></pre> <p>I use Typesafe config, so that in prod this is overridden by the env variable. When the env variable is not specified, this is what is used. When I run my micro-service from IntelliJ, <code>localhost:9092</code> is used. That's because in that case I am in the same network as my kafka/zookeeper on my machine. However, when I run the same micro-service from docker/kube, <code>docker.for.mac.localhost:19092</code> is used. </p> <p><strong>Answers to the side questions I had</strong></p> <ol> <li><p><strong>Yes. Docker for Mac uses HyperKit as a lightweight virtual machine, running a Linux on it, and Docker Engine is run on it</strong>. The Docker for Mac Kubernetes extension is basically about running the kubernetes cluster services/infrastructure as containers in the docker daemon. <a href="https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment" rel="noreferrer">Docker for Mac vs. Docker Toolbox</a>. In other words, the host is HyperKit and not OSX. But as the above doc explains, the Docker for Mac implementation is all about making it appear to the user as if there were no virtualization involved between OSX and Docker.</p></li> <li><ul> <li><p><strong>Connecting to the host using the loopback address is an issue that has not been solved.</strong> I'm not even sure that it works perfectly even if the host is Linux. Not sure, it might have been resolved at this point. Nonetheless, it would require running the container (or the pod, in the case of kube) on the host network. But in Docker for Mac, that functionality will never work based on my readings online. Hence the solution of using <strong>docker.for.mac.localhost</strong> or <strong>host.docker.internal</strong>, which Docker for Mac set up to refer to the mac host and not the HyperKit host. </p></li> <li><p><strong>host.docker.internal</strong> and <strong>docker.for.mac.localhost</strong> are one and the same, and the latest recommendation at this point is host.docker.internal. This being said, this address did not originally work for me because my Kafka setup was not good. It is worth reading @cricket_007's link to understand that well: <a href="http://rmoff.net/2018/08/02/kafka-listeners-explained" rel="noreferrer">http://rmoff.net/2018/08/02/kafka-listeners-explained</a>.</p></li> </ul></li> </ol>
<p>How do I use the standard <code>kafka-avro-console-consumer</code> tool with Kafka running via the Confluent Helm Charts? The <code>confluentinc/cp-kafka:5.0.0</code> image recommended for running cli utilities doesn't contain <code>kafka-avro-console-consumer</code>.</p> <p>If I shell into the schema-registry pod to use <code>kafka-avro-console-consumer</code></p> <pre><code>kubectl exec -it my-confluent-oss-cp-schema-registry-6c8546c86d-pjpmd -- /bin/bash /usr/bin/kafka-avro-console-consumer --bootstrap-server my-confluent-oss-cp-kafka:9092 --topic my-test-avro-records --from-beginning Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 5555; nested exception is: java.net.BindException: Address already in use (Bind failed) sun.management.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 5555; nested exception is: java.net.BindException: Address already in use (Bind failed) at sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:480) at sun.management.Agent.startAgent(Agent.java:262) at sun.management.Agent.startAgent(Agent.java:452) Caused by: java.rmi.server.ExportException: Port already in use: 5555; nested exception is: java.net.BindException: Address already in use (Bind failed) </code></pre>
<blockquote> <p>java.rmi.server.ExportException: Port already in use: 5555;</p> </blockquote> <p>Sounds like you've enabled JMX as part of that container via the <code>KAFKA_JMX_PORT</code> variable.</p> <p>If that's the case, you will need to temporarily override it within the shell session, by exporting it to a different value or <code>unset</code>ting it, before running any other Kafka scripts (see the sketch below).</p>
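<p>For reference, one way to try that from the same shell session (this assumes the Kafka wrapper scripts in the image read the JMX settings from <code>JMX_PORT</code>/<code>KAFKA_JMX_PORT</code> in the environment, which is how the stock <code>kafka-run-class.sh</code> behaves):</p> <pre><code>unset JMX_PORT KAFKA_JMX_PORT
/usr/bin/kafka-avro-console-consumer \
  --bootstrap-server my-confluent-oss-cp-kafka:9092 \
  --topic my-test-avro-records --from-beginning
</code></pre>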
<p>I'm currently working on a traditional monolith application, but I am in the process of breaking it up into spring microservices managed by kubernetes. The application allows the uploading/downloading of large files and these files are normally stored on the host filesystem. I'm wondering what would be the most viable method of persisting these files in a microservice architecture?</p>
<p>Maybe you should have a look at the rook project (<a href="https://rook.io/" rel="nofollow noreferrer">https://rook.io/</a>). It's easy to set up and provides different kinds of storage and persistence technologies to your CNAs.</p>
<p>We have a spring boot application that is deployed to Kubernetes. We are adding i18n capabilities to this application and want to place the messages.properties file outside the application jar/war. I have been able to do that in spring boot. How will this work when I deploy it on Kubernetes? Do I need to use the configmaps? Following is the code snippet</p> <pre><code>@Configuration public class AppConfig { @Bean public MessageSource messageSource() { ReloadableResourceBundleMessageSource messageSource = new ReloadableResourceBundleMessageSource(); //Path to the messages.properties files messageSource.setBasenames("file:/messages/messages", "classpath:messages"); messageSource.setDefaultEncoding("UTF-8"); messageSource.setCacheSeconds(60); return messageSource; } } </code></pre>
<p>Yes you can do this with a configmap. It is much the same as accessing an external application.properties file. First you can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="noreferrer">create a ConfigMap directly from the file</a> or create a <a href="https://github.com/ryandawsonuk/secrets-treasurehunt/blob/master/treasurehunt/config.yaml" rel="noreferrer">ConfigMap representing the file</a>:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: treasurehunt-config namespace: default data: application.properties: | treasurehunt.max.attempts=5 </code></pre> <p>Then in your kubernetes Deployment you create a <a href="https://github.com/ryandawsonuk/secrets-treasurehunt/blob/master/treasurehunt/deployment.yaml#L39" rel="noreferrer">Volume for the ConfigMap</a> and <a href="https://github.com/ryandawsonuk/secrets-treasurehunt/blob/master/treasurehunt/deployment.yaml#L35" rel="noreferrer">mount that into the Pod under the directory you use for the external configuration</a>:</p> <pre><code> volumeMounts: - name: application-config mountPath: "/config" readOnly: true volumes: - name: application-config configMap: name: treasurehunt-config items: - key: application.properties path: application.properties </code></pre> <p>These snippets come from an <a href="https://dzone.com/articles/hunting-treasure-with-kubernetes-configmaps-and-se" rel="noreferrer">example of mounting a Volume from ConfigMap</a> for an application.properties file so they use the spring boot <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-application-property-files" rel="noreferrer">default external properties file path</a> of <code>/config</code>. You can <a href="https://github.com/ryandawsonuk/secrets-treasurehunt/blob/master/treasurehunt/deployment.yaml#L37" rel="noreferrer">set that in the yaml for the mount</a> so you could mount the file to use the same relative path that you are already using when running outside of kubernetes.</p>
<pre><code>Spec: v1.PodSpec{
    Containers: []v1.Container{
        v1.Container{
            Name:            podName,
            Image:           deploymentName,
            ImagePullPolicy: "IfNotPresent",
            Ports:           []v1.ContainerPort{},
            Env: []v1.EnvVar{
                v1.EnvVar{
                    Name:  "RASA_NLU_CONFIG",
                    Value: os.Getenv("RASA_NLU_CONFIG"),
                },
                v1.EnvVar{
                    Name:  "RASA_NLU_DATA",
                    Value: os.Getenv("RASA_NLU_DATA"),
                },
            },
            Resources: v1.ResourceRequirements{},
        },
    },
    RestartPolicy: v1.RestartPolicyOnFailure,
},
</code></pre> <p>I want to provide resource limits for the container, corresponding to YAML like this:</p> <blockquote> <pre><code>resources:
  limits:
    cpu: "1"
  requests:
    cpu: "0.5"
args:
- -cpus
- "2"
</code></pre> </blockquote> <p>How do I go about doing that? I tried adding Limits and its map key-value pair, but it seems to be quite a nested structure. There doesn't seem to be any example of how to provide resources in the kube client-go.</p>
<p>I struggled with the same when I was creating a StatefulSet. Maybe my code snippet will help you:</p> <pre><code>Resources: apiv1.ResourceRequirements{
    Limits: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuLimit),
        "memory": resource.MustParse(memLimit),
    },
    Requests: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuReq),
        "memory": resource.MustParse(memReq),
    },
},
</code></pre> <p>The vars cpuReq, memReq, cpuLimit and memLimit are supposed to be strings (e.g. "500m", "512Mi").</p>
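<p>A self-contained sketch of the same idea, showing which packages the types come from (the image and values are arbitrary; <code>apiv1</code> above corresponds to <code>corev1</code> here):</p> <pre><code>package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // Container with CPU/memory requests and limits, built with client-go types.
    container := corev1.Container{
        Name:  "app",
        Image: "nginx",
        Resources: corev1.ResourceRequirements{
            Limits: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("1"),
                corev1.ResourceMemory: resource.MustParse("512Mi"),
            },
            Requests: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("500m"),
                corev1.ResourceMemory: resource.MustParse("256Mi"),
            },
        },
    }
    fmt.Printf("%+v\n", container.Resources)
}
</code></pre>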
<p>I am running minikube using the instructions at </p> <p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noreferrer">https://kubernetes.io/docs/tutorials/hello-minikube/</a></p> <p>I started minikube:</p> <pre><code>$ minikube start --vm-driver=hyperkit </code></pre> <p>and verified that it is successfully running.</p> <p>I am running 'Docker Community Edition' version 18.06.1-ce-mac73.</p> <pre><code>$ minikube ssh </code></pre> <p>is working fine.</p> <p>However when I do</p> <pre><code>$ docker ps </code></pre> <p>on my mac os host, it doesn't show any containers. However, when I do</p> <pre><code>$ docker ps </code></pre> <p>after doing minikube ssh, I see about 20 containers.</p> <p>So, where are the docker containers really running? Why does docker ps not show any containers on my mac?</p> <p>Thank you.</p>
<p>You can use the following command to configure your Docker Host address:</p> <pre><code>eval $(minikube docker-env) </code></pre> <p>Then, when you run <code>docker ps</code>, you should see your containers. Read more <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">here</a>.</p>
<p>I'm currently working on a traditional monolith application, but I am in the process of breaking it up into spring microservices managed by kubernetes. The application allows the uploading/downloading of large files and these files are normally stored on the host filesystem. I'm wondering what would be the most viable method of persisting these files in a microservice architecture?</p>
<p>There are many places to store your data. It also depends on the budget that you are able to spend (holding duplicate data also means more storage, which costs money) and mostly on your business requirements:</p> <ul> <li>Is all data needed at all times?</li> <li>Are there geo/region-related cases?</li> <li>How fast does a read / write operation need to be?</li> <li>Do things need to be cached?</li> <li>Stateful or stateless?</li> <li>Are there operational requirements? How should this be maintained?</li> <li>...</li> </ul> <p>Apart from this, your microservices should not know where the data is actually stored. In Kubernetes you can use Persistent Volumes (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a>) that can link to a storage offering of your cloud provider or something else. The microservice should just mount the volume and be able to treat it like a local file (see the sketch after this answer).</p> <p>Note that the cloud provider storage offerings already include solutions for scaling, concurrency, etc., so I would probably use a single blob storage under the hood.</p> <p>However, it has to be said that there is a trend to understand a microservice as a package of data and logic coupled together, and to accept duplicating the data, which leads to better scalability.</p> <p>See for more information:</p> <ul> <li><a href="http://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/" rel="nofollow noreferrer">http://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/</a></li> <li><a href="https://github.com/katopz/best-practices/blob/master/best-practices-for-building-a-microservice-architecture.md#stateless-service-instances" rel="nofollow noreferrer">https://github.com/katopz/best-practices/blob/master/best-practices-for-building-a-microservice-architecture.md#stateless-service-instances</a></li> <li><a href="https://12factor.net/backing-services" rel="nofollow noreferrer">https://12factor.net/backing-services</a></li> <li><a href="https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale.html" rel="nofollow noreferrer">https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale.html</a> </li> </ul>
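<p>To make the "mount it and treat it like a local file" part concrete, a minimal sketch (names, image and sizes are made up; the claim is satisfied by whatever default StorageClass the cluster offers):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-store
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: file-service
  template:
    metadata:
      labels:
        app: file-service
    spec:
      containers:
      - name: app
        image: example/file-service:latest   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data                   # the service just reads/writes /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: file-store
</code></pre>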
<p>I have a few remote virtual machines, on which I want to deploy some Mongodb instances and then make them accessible remotely, but for some reason I can't seem to make this work. </p> <p>These are the steps I took:</p> <ul> <li>I started a Kubernetes pod running Mongodb on a remote virtual machine.</li> <li>Then I exposed it through a Kubernetes NodePort service. </li> <li>Then I tried to connect to the Mongodb instance from my laptop, but it didn't work.</li> </ul> <p>Here is the command I used to try to connect: </p> <pre><code>$ mongo host:NodePort </code></pre> <p>(by "host" I mean the Kubernetes master). </p> <p>And here is its output:</p> <pre><code>MongoDB shell version v4.0.3 connecting to: mongodb://host:NodePort/test 2018-10-24T21:43:41.462+0200 E QUERY [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException: Error connecting to host:NodePort :: caused by :: Connection refused : connect@src/mongo/shell/mongo.js:257:13 @(connect):1:6 exception: connect failed </code></pre> <p>From the Kubernetes master, I made sure that the Mongodb pod was running. Then I ran a shell in the container and checked that the Mongodb server was working properly. Moreover, I had previously granted remote access to the Mongodb server, by specifying the "--bind-ip=0.0.0.0" option in its yaml description. To make sure that this option had been applied, I ran this command inside the Mongodb instance, from the same shell:</p> <pre><code>db._adminCommand( {getCmdLineOpts: 1} </code></pre> <p>And here is the output:</p> <pre><code>{ "argv" : [ "mongod", "--bind_ip", "0.0.0.0" ], "parsed" : { "net" : { "bindIp" : "0.0.0.0" } }, "ok" : 1 } </code></pre> <p>So the Mongodb server should actually be accessible remotely.</p> <p>I can't figure out whether the problem is caused by Kubernetes or by Mongodb.</p> <p>As a test, I followed exactly the same steps by using MySQL instead, and that worked (that is, I ran a MySQL pod and exposed it with a Kubernetes service, to make it accessible remotely, and then I successfully connected to it from my laptop). This would lead me to think that the culprit is Mongodb here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.</p> <p>Could someone help me shed some light on this? 
Or tell me how to debug this problem?</p> <p>EDIT:</p> <p>Here is the output of the <code>kubectl describe deployment &lt;mongo-deployment&gt;</code> command, as per your request:</p> <pre><code>Name: mongo-remote Namespace: default CreationTimestamp: Thu, 25 Oct 2018 06:31:24 +0000 Labels: name=mongo-remote Annotations: deployment.kubernetes.io/revision=1 Selector: name=mongo-remote Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 1 max unavailable, 1 max surge Pod Template: Labels: name=mongo-remote Containers: mongocontainer: Image: mongo:4.0.3 Port: 5000/TCP Host Port: 0/TCP Command: mongod --bind_ip 0.0.0.0 Environment: &lt;none&gt; Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: mongo-remote-655478448b (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mongo-remote-655478448b to 1 </code></pre> <p>For the sake of completeness, here is the yaml description of the deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mongo-remote spec: replicas: 1 template: metadata: labels: name: mongo-remote spec: containers: - name: mongocontainer image: mongo:4.0.3 imagePullPolicy: Always command: - "mongod" - "--bind_ip" - "0.0.0.0" ports: - containerPort: 5000 name: mongocontainer nodeSelector: kubernetes.io/hostname: xxx </code></pre>
<p>I found the mistake (and as I suspected, it was a silly one).<br> The problem was in the yaml description of the deployment. As no port was specified in the <code>mongod</code> command, mongodb was listening on the default port (27017), but the container was listening on another specified port (5000). </p> <p>So the solution is to either set the containerPort as the default port of mongodb, like so: </p> <pre><code> command:
 - "mongod"
 - "--bind_ip"
 - "0.0.0.0"
 ports:
 - containerPort: 27017
   name: mongocontainer
</code></pre> <p>Or to set the port of mongodb as the one of the containerPort, like so:</p> <pre><code> command:
 - "mongod"
 - "--bind_ip"
 - "0.0.0.0"
 - "--port"
 - "5000"
 ports:
 - containerPort: 5000
   name: mongocontainer
</code></pre>
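<p>For the remote access part, the NodePort Service then just has to target the same port; a minimal sketch matching the labels from the deployment in the question (the NodePort value is chosen by Kubernetes unless you pin it):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo-remote
spec:
  type: NodePort
  selector:
    name: mongo-remote
  ports:
  - port: 27017
    targetPort: 27017
</code></pre> <p>After that, <code>mongo &lt;node-ip&gt;:&lt;nodePort&gt;</code> from the laptop should work.</p>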
<p>I tried to deploy a k8s environment for ver1.12_rc.1. It includes one master and 2 nodes. All of them are CentOS Linux release 7.4.1708 (Core). The related info is as follows:</p> <pre><code>[root@bogon174 dashboard]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
heapster-684777c4cb-fm6kd                1/1     Running   0          32m
kubernetes-dashboard-77fd78f978-khc4f    1/1     Running   0          32m
metrics-server-v0.3.1-6879897646-c7rwz   2/2     Running   0          37m
monitoring-grafana-56b668bccf-29277      1/1     Running   0          32m
monitoring-influxdb-5c5bf4949d-l8ttc     1/1     Running   0          32m
[root@bogon174 dashboard]# kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   169.169.120.140   &lt;none&gt;        80/TCP          33m
kubernetes-dashboard   NodePort    169.169.151.109   &lt;none&gt;        443:26007/TCP   33m
metrics-server         NodePort    169.169.218.252   &lt;none&gt;        443:10521/TCP   38m
monitoring-grafana     ClusterIP   169.169.170.53    &lt;none&gt;        80/TCP          33m
monitoring-influxdb    ClusterIP   169.169.248.0     &lt;none&gt;        8086/TCP        33m
[root@bogon174 dashboard]# kubectl get nodes
NAME             STATUS     ROLES    AGE   VERSION
192.168.20.171   Ready      &lt;none&gt;   10d   v1.12.0-rc.1
192.168.20.172   NotReady   &lt;none&gt;   10d   v1.12.0-rc.1
</code></pre> <p><img src="https://i.stack.imgur.com/BVxPW.png" alt="enter image description here"></p> <p>I researched previous questions, but I cannot get to the correct solution.</p>
<p>Version 1.12 is no longer using heapster, and the <code>top</code> command is not yet ported to the new metrics system; there is a GitHub issue tracking making <code>top</code> work with the new metrics system.</p> <p>Look at the options available: the only backend offered is heapster, but heapster is no longer used.</p> <pre><code>[iahmad@web-prod-ijaz001 ~]$ kubectl top node --help
Display Resource (CPU/Memory/Storage) usage of nodes.

The top-node command allows you to see the resource consumption of nodes.

Aliases:
node, nodes, no

Examples:
  # Show metrics for all nodes
  kubectl top node

  # Show metrics for a given node
  kubectl top node NODE_NAME

Options:
      --heapster-namespace='kube-system': Namespace Heapster service is located in
      --heapster-port='': Port name in service to use
      --heapster-scheme='http': Scheme (http or https) to connect to Heapster as
      --heapster-service='heapster': Name of Heapster service
  -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)

Usage:
  kubectl top node [NAME | -l label] [options]
</code></pre>
<p>I am newbie on Docker and Kubernetes. And now I am developing Restful APIs which later be deployed to Docker containers in a Kubernetes cluster. </p> <p>How the path of the endpoints will be changed? I have heard that Docker-Swarm and Kubernetes add some ords on the endpoints.</p>
<p>The "path" part of the endpoint URLs themselves (for this SO question, the <code>/questions/53008947/...</code> part) won't change. But the rest of the URL might.</p> <p>Docker publishes services at a TCP-port level (<code>docker run -p</code> option, Docker Compose <code>ports:</code> section) and doesn't look at what traffic is going over a port. If you have something like an Apache or nginx proxy as part of your stack that might change the HTTP-level path mappings, but you'd probably be aware of that in your environment.</p> <p>Kubernetes works similarly, but there are more layers. A container runs in a Pod, and can publish some port out of the Pod. That's not used directly; instead, a Service refers to the Pod (by its labels) and republishes its ports, possibly on different port numbers. The Service has a DNS name <code>service-name.namespace.svc.cluster.local</code> that can be used within the cluster; you can also configure the Service to be reachable on a fixed TCP port on every node in the service (<code>NodePort</code>) or, if your Kubernetes is running on a public-cloud provider, to create a load balancer there (<code>LoadBalancer</code>). Again, all of this is strictly at the TCP level and doesn't affect HTTP paths.</p> <p>There is one other Kubernetes piece, an Ingress controller, which acts as a declarative wrapper around the nginx proxy (or something else with similar functionality). That <em>does</em> operate at the HTTP level and could change paths.</p> <p>The other corollary to this is that the URL to reach a service might be different in different environments: <code>http://localhost:12345/path</code> in a local development setup, <code>http://other_service:8080/path</code> in Docker Compose, <code>http://other-service/path</code> in Kubernetes, <code>https://api.example.com/other/path</code> in production. You need some way to make that configurable (often an environment variable).</p>
<p>I'm using <code>kubectl run</code> with environment parameters to create temporary docker containers for me (e.g. some forwarding for debugging purposes). Since several weeks <code>kubectl</code> is complaining about <code>kubectl run</code> being deprecated. Unfortunately I can't find an appropriate replacement.</p> <p>This is the old command:</p> <pre><code>$KUBECTL run -i -t --attach=false --image djfaze/port-forward --env="REMOTE_HOST=$REMOTE_HOST" --env="REMOTE_PORT=$REMOTE_PORT" $POD_NAME </code></pre> <p>When issuing this, <code>kubectl</code> complains with this message:</p> <blockquote> <p><code>kubectl run --generator=deployment/apps.v1beta1</code> is DEPRECATED and will be removed in a future version. Use kubectl create instead.</p> </blockquote> <p>Any ideas how to replace this run command?</p>
<p>As the author of this change, let me explain a little bit the intention behind this deprecation. Just like Brendan explains in <a href="https://stackoverflow.com/a/52902113">his answer</a>, <code>kubectl run</code> per se is not being deprecated, only all the generators, except for the one that creates a Pod for you.</p> <p>The reason for this change is twofold:</p> <ol> <li><p>The vast majority of input parameters for the <code>kubectl run</code> command are overwhelming for newcomers, as well as for old-timers. It's not that easy to figure out what the result of your invocation will be. You need to take into consideration several passed options as well as the server version.</p></li> <li><p>The code behind it is also a mess to maintain, given that the matrix of possibilities is growing faster than we can handle.</p></li> </ol> <p>That's why we're trying to move people away from using <code>kubectl run</code> for their daily workflows and convince them that using explicit <code>kubectl create</code> commands is more straightforward. Finally, we want newcomers who have played with docker or any other container engine, where they run a container, to have the same experience with Kubernetes, where <code>kubectl run</code> will just run a Pod in a cluster.</p> <p>Sorry for the initial confusion and I hope this will clear things up.</p> <p>UPDATE (2020/01/10): As of <a href="https://github.com/kubernetes/kubernetes/pull/87077" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/87077</a> <code>kubectl run</code> will ONLY create Pods. All generators will be removed entirely. </p>
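<p>In the meantime, one way to keep the original one-liner without the deprecation warning (valid at the time of this answer; note that per the update above, the generator flag itself disappears later) is to ask explicitly for the Pod generator:</p> <pre><code>kubectl run $POD_NAME --generator=run-pod/v1 -i -t --attach=false \
  --image=djfaze/port-forward \
  --env="REMOTE_HOST=$REMOTE_HOST" --env="REMOTE_PORT=$REMOTE_PORT"
</code></pre>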
<p>When I list the pods in a cluster (on a specific node and in all namespaces) then each pod listed also contains the container statuses, and therein I get the container runtime engine IDs of each of the containers listed.</p> <p>To illustrate, I'm using this Python3 script to access the cluster API via the official Kubernetes Python client; this is a slightly modified version from <a href="https://stackoverflow.com/questions/52785515/how-to-find-all-kubernetes-pods-on-the-same-node-from-a-pod-using-the-official-p">How to find all Kubernetes Pods on the same node from a Pod using the official Python client?</a></p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config import os def main(): # it works only if this script is run by K8s as a POD config.load_incluster_config() # use this outside pods # config.load_kube_config() # grab the node name from the pod environment vars node_name = os.environ.get('KUHBERNETES_NODE_NAME', None) v1 = client.CoreV1Api() print("Listing pods with their IPs on node: ", node_name) # field selectors are a string, you need to parse the fields from the pods here field_selector = 'spec.nodeName='+node_name ret = v1.list_pod_for_all_namespaces(watch=False, field_selector=field_selector) for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) for c in i.status.container_statuses: print("\t%s\t%s" % (c.name, c.container_id)) if __name__ == '__main__': main() </code></pre> <p>N.B. The Pod uses a suitable ServiceAccount which enables it to list pods in all namespaces.</p> <p>A typical result output when run on a minikube setup might look like this:</p> <pre><code>Listing pods with their IPs on node: minikube 172.17.0.5 cattle-system cattle-cluster-agent-c949f5b48-llm65 cluster-register docker://f12fcb1acbc2e7c01c24dbd831ed53ab2a6df2353abe80988ae132c39f7c68c6 10.0.2.15 cattle-system cattle-node-agent-hmq86 agent docker://e335a3d30ea37887ac2a1a1cc339eabb0a0098471f86db1926cfe02eef2c6b8f 172.17.0.6 gw pyk8s py8ks docker://1272747b52983e8f745bd118b2d935c1d314e9c6cc310e88013021ba974bc030 172.17.0.4 kube-system coredns-c4cffd6dc-7lsdn coredns docker://8b0c3c67532ee2d7d16958a33cb942d5bd09ed37ded1d570830b5f7e5f7a09ab 10.0.2.15 kube-system etcd-minikube etcd docker://5e0e0ee48248e9779a2a5f9347a39c58743562b10719a31d7d6fc0af5e79e093 10.0.2.15 kube-system kube-addon-manager-minikube kube-addon-manager docker://96908bc5d5fd9b87779c8a8544591e5aeda2d58956fb365ab595681605b01001 10.0.2.15 kube-system kube-apiserver-minikube kube-apiserver docker://0711ec9a2321b1b5a801ab2b19409a1edc731058aa994978f989185efc4c8294 10.0.2.15 kube-system kube-controller-manager-minikube kube-controller-manager docker://16d2e11a8dea2a46cd44bc97a5f894e7ff9da2da70f3c24376b4189dd912336e 172.17.0.2 kube-system kube-dns-86f4d74b45-wbdf6 dnsmasq docker://653c7ef27760a820449ee518b59e39ab4a7f65cade996ed85313c98038827f67 kubedns docker://6cf6aaeac1192cf1d580293e03164db57bc70bce41cf91e5cac081010fe48cf7 sidecar docker://9816e10d8455988aa400f98df32cfa69ce89fbfc3e3e1554145d9d6418c02157 10.0.2.15 kube-system kube-proxy-ll7lq kube-proxy docker://6b8c7ce1ae3c8fbc487bf05ccca9105dffaf675f916cdb62a595d8be7902e69b 10.0.2.15 kube-system kube-scheduler-minikube kube-scheduler docker://ab79e46ba900753d86b7000061720551a199c0ea6eee923fcd86bda2d86cc54a 172.17.0.3 kube-system kubernetes-dashboard-6f4cfc5d87-bmnl8 kubernetes-dashboard docker://a73ef6b30fb87826a4a71ba428a01511278a759d69fade82ddd654911ec3f14f 10.0.2.15 kube-system storage-provisioner 
storage-provisioner docker://51eaf90bc3ae11baa354a436e366730c19206c73743c6517a0ad9eb8f0b89896 </code></pre> <p>Please note that this lists the container IDs of the pod containers, except the pause container IDs. Is there an API method to also get/list the container IDs of the pause containers in pods?</p> <p>I tried searching for things like "kubernetes api pod pause container id" ... but I did not get any useful answers, except the usual API results for containerStatuses, etc.</p>
<p>After some research into how Kubernetes' Docker shim works, it's clear that the pause containers are not visible through the Kubernetes cluster API. That's because pause containers are an artefact required with some container engines, such as Docker, but not in others (CRI-O if I'm not mistaken).</p> <p>However, when the low-level Docker container view is necessary and needs to be related to the Kubernetes node-scheduled pod view, then the predictable Docker container naming scheme used in the Kubernetes Docker shim can be used. The shim creates the container names in the form of <code>k8s_container_pod_namespace_uid_attempt</code> with an optional <code>_random</code> suffix in case of hitting the Docker &lt;=1.11 name conflict bug.</p> <ul> <li>k8s is the fixed prefix which triggers the shim to regard this container as a Kubernetes container.</li> <li>container is the name as specified in the pod spec. Please note that Kubernetes only allows lowercase a-z, 0-9, and dashes. Pause containers thus get the "reserved" name "POD" in all-uppercase.</li> <li>pod is the pod name.</li> <li>namespace is the namespace name as assigned, or "default".</li> <li>uid is the pod UID, with varying formats.</li> <li>attempt is a counter starting from 0 that the shim needs in order to correctly manage pod updates, that is, container cleanup, etc.</li> </ul> <p>See also:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/blob/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/pkg/kubelet/dockershim/naming.go#L29" rel="nofollow noreferrer">container names implementation</a></li> <li><a href="https://github.com/kubernetes/kubernetes/blob/2e357e39c81673f916a81a0a4f485ed080043e25/pkg/kubelet/leaky/leaky.go" rel="nofollow noreferrer">name of pause pod</a></li> <li><a href="https://github.com/moby/moby/issues/23371" rel="nofollow noreferrer">Docker name conflict bug</a></li> </ul>
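<p>As a consequence, the pause container IDs have to be queried on the node itself rather than through the cluster API. A minimal sketch, assuming a Docker-based node and relying on the naming scheme above (the all-uppercase <code>POD</code> element is what marks a pause container):</p> <pre><code># list the pause container IDs and names on this node
docker ps --filter "name=k8s_POD_" --format "{{.ID}}\t{{.Names}}"
</code></pre> <p>The pod name, namespace, and UID can then be parsed out of each container name to relate the pause container back to its pod.</p>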
<p>We have a Kubernetes cluster setup using AWS EC2 instances which we created using KOPS. We are experiencing problems with internal pod communication through kubernetes services (which will load balance traffic between destination pods). The problem emerges when the source and destination pod are on the same EC2 instance (node). Kubernetes is setup with flannel for internode communication using vxlan, and kubernetes services are managed by kube-proxy using iptables.</p> <p>In a scenario where:</p> <ul> <li>PodA running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.240</li> <li>PodB running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.247</li> <li>ServiceB (service where PodB is a possible destination endpoint): 100.67.30.133</li> </ul> <p>If we go inside PodA and execute "curl -v <a href="http://ServiceB/" rel="nofollow noreferrer">http://ServiceB/</a>", no answer is received and finally, a timeout is produced.</p> <p>When we inspect the traffic (cni0 interface in instance 1), we observe:</p> <ol> <li>PodA sends a SYN package to ServiceB IP</li> <li>The package is mangled and the destination IP is changed from ServiceB IP to PodB IP</li> <li><p>Conntrack registers that change:</p> <pre><code>root@ip-172-20-121-84:/home/admin# conntrack -L|grep 100.67.30.133 tcp 6 118 SYN_SENT src=100.96.54.240 dst=100.67.30.133 sport=53084 dport=80 [UNREPLIED] src=100.96.54.247 dst=100.96.54.240 sport=80 dport=43534 mark=0 use=1 </code></pre></li> <li><p>PodB sends a SYN+ACK package to PodA</p></li> <li>The source IP for the SYN+ACK package is not reverted back from the PodB IP to the ServiceB IP</li> <li>PodA receives a SYN+ACK package from PodB, which was not expected and it send back a RESET package</li> <li>PodA sends a SYN package to ServiceB again after a timeout, and the whole process repeats</li> </ol> <p>Here the tcpdump annotated details:</p> <pre><code>root@ip-172-20-121-84:/home/admin# tcpdump -vv -i cni0 -n "src host 100.96.54.240 or dst host 100.96.54.240" TCP SYN: 15:26:01.221833 IP (tos 0x0, ttl 64, id 2160, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.240.43534 &gt; 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect -&gt; 0x3e31), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0 15:26:01.221866 IP (tos 0x0, ttl 63, id 2160, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.240.43534 &gt; 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -&gt; 0x25a2), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0 Level 2: 15:26:01.221898 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 100.96.54.240 tell 100.96.54.247, length 28 15:26:01.222050 ARP, Ethernet (len 6), IPv4 (len 4), Reply 100.96.54.240 is-at 0a:58:64:60:36:f0, length 28 TCP SYN+ACK: 15:26:01.222151 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.247.80 &gt; 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -&gt; 0xc318), seq 2871879716, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372198 ecr 153372198,nop,wscale 9], length 0 TCP RESET: 15:26:01.222166 IP (tos 0x0, ttl 64, id 32433, offset 0, flags [DF], proto TCP (6), length 40) 100.96.54.240.43534 &gt; 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0 TCP SYN (2nd time): 15:26:02.220815 IP (tos 0x0, ttl 64, id 2161, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.240.43534 &gt; 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect 
-&gt; 0x3d37), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0 15:26:02.220855 IP (tos 0x0, ttl 63, id 2161, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.240.43534 &gt; 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -&gt; 0x24a8), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0 15:26:02.220897 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60) 100.96.54.247.80 &gt; 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -&gt; 0x91f0), seq 2887489130, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372448 ecr 153372448,nop,wscale 9], length 0 15:26:02.220915 IP (tos 0x0, ttl 64, id 32492, offset 0, flags [DF], proto TCP (6), length 40) 100.96.54.240.43534 &gt; 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0 </code></pre> <p>The relevant iptable rules (automatically managed by kube-proxy) on instance 1 (ip-172-20-121-84, us-east-1c):</p> <pre><code>-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A KUBE-SERVICES ! -s 100.96.0.0/11 -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ -A KUBE-SERVICES -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3IL52ANAN3BQ2L74 -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.10000000009 -j KUBE-SEP-4XYJJELQ3E7C4ILJ -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.11110999994 -j KUBE-SEP-2ARYYMMMNDJELHE4 -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.12500000000 -j KUBE-SEP-OAQPXBQCZ2RBB4R7 -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.14286000002 -j KUBE-SEP-SCYIBWIJAXIRXS6R -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.16667000018 -j KUBE-SEP-G4DTLZEMDSEVF3G4 -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-NXPFCT6ZBXHAOXQN -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-7DUMGWOXA5S7CFHJ -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-LNIY4F5PIJA3CQPM -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SLBETXT7UIBTZCPK -A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -j KUBE-SEP-FMCOTKNLEICO2V37 -A KUBE-SEP-OAQPXBQCZ2RBB4R7 -s 100.96.54.247/32 -m comment --comment "prod/export:" -j KUBE-MARK-MASQ -A KUBE-SEP-OAQPXBQCZ2RBB4R7 -p tcp -m comment --comment "prod/export:" -m tcp -j DNAT --to-destination 100.96.54.247:80 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE </code></pre> <p>This is the service definition:</p> <pre><code>root@adsvm010:/yamls# kubectl describe service export Name: export Namespace: 
prod Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: run=export Type: ClusterIP IP: 100.67.30.133 Port: &lt;unset&gt; 80/TCP TargetPort: 80/TCP Endpoints: 100.96.5.44:80,100.96.54.235:80,100.96.54.247:80 + 7 more... Session Affinity: None Events: &lt;none&gt; </code></pre> <p>If instead of the service we use PodB's IP directly (so there is no need to mangle packets), the connection works.</p> <p>If we use the service but the randomly selected destination pod is running on a different instance, then the connection tracking mechanism works properly and it mangles the packet back so that PodA sees the SYN+ACK packet as it expected (coming from the ServiceB IP). In this case, traffic goes through the cni0 and flannel.0 interfaces.</p> <p>This behavior started some weeks ago; before that we had not observed any problems for over a year, and we do not recall any major change to the cluster setup or to the pods we are running. Does anybody have any idea that would explain why the SYN+ACK packet is not mangled back to the expected src/dst IPs?</p>
<p>I finally found the answer. The cni0 interface is in bridge mode with all the pod virtual interfaces (one veth0 per pod running on that node):</p> <pre><code>root@ip-172-20-121-84:/home/admin# brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.0a5864603601       no              veth05420679
                                                        veth078b53a1
                                                        veth0a60985d
...

root@ip-172-20-121-84:/home/admin# ip addr
5: cni0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 8951 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:58:64:60:36:01 brd ff:ff:ff:ff:ff:ff
    inet 100.96.54.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::1c66:76ff:feb6:2122/64 scope link
       valid_lft forever preferred_lft forever
</code></pre> <p>The traffic that goes from/to the bridged interface to/from some other interface is processed by netfilter/iptables, but the traffic that does not leave the bridged interface (e.g. from one veth0 to another, both belonging to the same bridge) is NOT processed by netfilter/iptables.</p> <p>In the example I described in the question, PodA (100.96.54.240) sends a SYN packet to ServiceB (100.67.30.133), which is not in the cni0 subnet (100.96.54.1/24), so this packet does not stay within the bridged cni0 interface and iptables processes it. That is why we see that the DNAT happened and got registered in conntrack. But if the selected destination pod is on the same node, for instance PodB (100.96.54.247), then PodB sees the SYN packet and responds with a SYN+ACK where the source is 100.96.54.247 and the destination is 100.96.54.240. These are IPs inside the cni0 subnet and do not need to leave it, hence netfilter/iptables does not process the packet and does not mangle it back based on the conntrack information (i.e., the real source 100.96.54.247 is not replaced by the expected source 100.67.30.133).</p> <p>Fortunately, there is the <a href="http://ebtables.netfilter.org/documentation/bridge-nf.html" rel="nofollow noreferrer">bridge-netfilter</a> kernel module that can enable netfilter/iptables to process traffic that happens on the bridged interfaces:</p> <pre><code>root@ip-172-20-121-84:/home/admin# modprobe br_netfilter
root@ip-172-20-121-84:/home/admin# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
</code></pre> <p>To fix this in a Kubernetes cluster set up with KOPS (<a href="https://github.com/kubernetes/kops/issues/4391#issuecomment-364321275" rel="nofollow noreferrer">credits</a>), edit the cluster manifest with <code>kops edit cluster</code> and under <code>spec:</code> include:</p> <pre><code>  hooks:
  - name: fix-bridge.service
    roles:
    - Node
    - Master
    before:
    - network-pre.target
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/sbin/modprobe br_netfilter
      [Unit]
      Wants=network-pre.target
      [Install]
      WantedBy=multi-user.target
</code></pre> <p>This will create a systemd service in <code>/lib/systemd/system/fix-bridge.service</code> on your nodes that will run at startup and make sure the <code>br_netfilter</code> module is loaded before kubernetes (i.e., kubelet) starts. If we do not do this, what we experienced with AWS EC2 instances (Debian Jessie images) is that sometimes the module is loaded during startup and sometimes it is not (I do not know why there is such variability), so depending on that the problem may manifest itself or not.</p>
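<p>Outside of kops, the same idea applies to any node provisioning method. A minimal sketch, assuming a systemd-based distribution, that persists both the module load and the corresponding sysctl:</p> <pre><code># load it now
modprobe br_netfilter

# make sure it is loaded on every boot
echo br_netfilter &gt; /etc/modules-load.d/br_netfilter.conf

# and make sure bridged traffic is handed to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' &gt; /etc/sysctl.d/99-bridge-nf.conf
sysctl --system
</code></pre>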
<p>Looking for a <code>kubectl</code> command to get the list of all the objects on which quotas can be applied, or a complete list somewhere else. In the documentation I could not find a complete list. Or does it mean all of them support quotas?</p> <p><strong>Use case:</strong></p> <p>We need to apply quotas on all the namespace resources, but I doubt that all of them support quotas.</p>
<p>Here is a list of the objects that are supported by Kubernetes so far. They are categorized into Compute Resource Quota, Storage Resource Quota, and Object Count Quota. Below is the Kubernetes doc link:</p> <p><a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">supported api objects for quotas</a></p>
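<p>For the use case of capping object counts per namespace, a minimal sketch of a <code>ResourceQuota</code> (the namespace name and numbers are only examples):</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: mynamespace
spec:
  hard:
    pods: "20"
    services: "10"
    configmaps: "10"
    secrets: "10"
    persistentvolumeclaims: "5"
    count/deployments.apps: "10"   # generic count/&lt;resource&gt;.&lt;group&gt; syntax, Kubernetes 1.9+
</code></pre>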
<p>Recently, prometheus-operator has been promoted to a stable helm chart (<a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a>).</p> <p>I'd like to understand how to add a custom application to the monitoring done by prometheus-operator in a k8s cluster. An example for, say, a gitlab runner, which by default provides metrics on 9252, would be appreciated (<a href="https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server" rel="noreferrer">https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server</a>).</p> <p>I have a rudimentary yaml that obviously doesn't work, but it also doesn't provide any feedback on <em>what</em> isn't working:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gitlab-monitor
  # Change this to the namespace the Prometheus instance is running in
  namespace: default
  labels:
    app: gitlab-runner-gitlab-runner
    release: prometheus
spec:
  selector:
    matchLabels:
      app: gitlab-runner-gitlab-runner
  namespaceSelector:
    # matchNames:
    # - default
    any: true
  endpoints:
  - port: http-metrics
    interval: 15s
</code></pre> <p>This is the prometheus configuration:</p> <pre><code>&gt; kubectl get prometheus -o yaml
...
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
...
</code></pre> <p>So the selectors should match. By "not working" I mean that the endpoints do not appear in the prometheus UI.</p>
<p>Thanks to Peter, who showed me that the idea in principle wasn't entirely incorrect, I've found the missing link. As a <code>ServiceMonitor</code> does monitor services (haha), I missed the part about creating a Service, which isn't part of the gitlab helm chart. Finally this yaml did the trick for me and the metrics appear in Prometheus:</p> <pre><code># Service targeting gitlab instances
apiVersion: v1
kind: Service
metadata:
  name: gitlab-metrics
  labels:
    app: gitlab-runner-gitlab-runner
spec:
  ports:
  - name: metrics # expose metrics port
    port: 9252 # defined in gitlab chart
    targetPort: metrics
    protocol: TCP
  selector:
    app: gitlab-runner-gitlab-runner # target gitlab pods
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gitlab-metrics-servicemonitor
  # Change this to the namespace the Prometheus instance is running in
  # namespace: default
  labels:
    app: gitlab-runner-gitlab-runner
    release: prometheus
spec:
  selector:
    matchLabels:
      app: gitlab-runner-gitlab-runner # target gitlab service
  endpoints:
  - port: metrics
    interval: 15s
</code></pre> <p>Nice to know: the <code>metrics</code> <code>targetPort</code> is defined in the gitlab runner chart.</p>
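<p>A few hedged sanity checks that are useful when the targets still do not show up (assuming the names from the yaml above):</p> <pre><code># does the Service actually select the runner pods?
kubectl get endpoints gitlab-metrics

# does the ServiceMonitor carry the label the Prometheus CR selects on?
kubectl get servicemonitor gitlab-metrics-servicemonitor --show-labels

# can the metrics endpoint be scraped at all?
kubectl port-forward svc/gitlab-metrics 9252:9252 &amp;
curl -s localhost:9252/metrics | head
</code></pre>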
<p>We are a software vendor that is currently investigating Docker in general and Kubernetes in particular as a way to allow customers to install our software stack in their environment. Our software comes with a whole bunch of command line tools, which are mainly used for administration purposes. While most admin tasks can be done through the UI, others can only be done through these tools.</p> <p>We thought about ways to provide these tools to customers running our software on Docker. Our first thought was to just make a big archive available to them, which they can download and run on their admin machine. But we quickly came up with the idea to instead provide an "admin tools" container that contains all these tools. Different from our other containers, this one is not meant to be run as a server, but to be run either from scripts or interactively. Even though the corresponding docker or kubectl command lines get a bit lengthy, this mostly works and I think the approach has quite some merit, as it is an "in-band" way to also publish our command line tools.</p> <p>However, some administration tasks require you to pass files to the respective command line tool, or the tools generate files (e.g., backups). So you need a way to get these files into or out of your container. In a Docker environment, it is quite straightforward to use "docker run" and mount a volume or host directory into your container which contains the files. In Kubernetes, this seems to be not as straightforward, though (see <a href="https://stackoverflow.com/questions/37555281/create-kubernetes-pod-with-volume-using-kubectl-run">Create kubernetes pod with volume using kubectl run</a> for an approach to do this with "kubectl run"... Yikes!). Running the admin tools container in a regular pod (created with a YAML file) with a volume mount and attaching to it is actually simpler.</p> <p>In the end, I would like to get your thoughts on the title question: What is the "best" way to make command line tools for the administration of containerized applications available to their users?</p> <ul> <li>Ship them as a dedicated archive and let users use them as they did in the pre-container world?</li> <li>Ship them in containers that are meant to be used interactively (as described above)?</li> <li>Any other ideas?</li> </ul> <p>Regards PJ</p>
<p>You want <strong>Kubernetes Operators</strong>.</p> <p>People don't want to use your CLI; they are only using it to "get stuff done". If you can put your CLI actions into a Kubernetes Operator, you get a standard API that any other tool can use.</p> <p>For example, let's say our software is kinda like a database. Your operator could have an entry representing each database, which would have fields for "how often to back up" and "how many replicas". Any tool (or even the <code>kubectl</code> CLI) can set those fields, and your operator does all the dirty work to make it happen. Your customers install a single operator, and they don't have to manage the transient containers that "get work done", nor do they have to understand your CLI.</p>
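<p>To make the database example a bit more tangible, here is a purely hypothetical custom resource such an operator might expose (every name in it is made up for illustration):</p> <pre><code>apiVersion: example.vendor.com/v1alpha1
kind: Database
metadata:
  name: customers-db
spec:
  replicas: 3
  backup:
    schedule: "0 2 * * *"   # nightly backup, handled by the operator
</code></pre> <p>The customer edits this object with <code>kubectl apply</code> (or any other tool that talks to the API server), and the operator reconciles the actual containers, backups, and so on behind the scenes.</p>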
<p>Though <code>resourceQuotas</code> may limit the number of configmaps in a namespace, is there any such option to limit the size of an individual configmap? I would not like some user to start uploading large text files as configmaps.</p> <p>What is the max ConfigMap size that etcd supports? If there is a reasonable limit on the etcd side, that should be fine.</p>
<p>There are no <a href="https://github.com/kubernetes/kubernetes/issues/19969" rel="noreferrer">hard limits</a> on either the ConfigMap or Secret objects as of this writing.</p> <p>There is, however, a 1MB limit on the <a href="https://github.com/kubernetes/kubernetes/issues/19781" rel="noreferrer">etcd</a> side, which is where Kubernetes stores its objects.</p> <p>From the API side, if you look at the API <a href="https://github.com/kubernetes/api/blob/master/core/v1/types.go#L5027" rel="noreferrer">code</a> and the ConfigMap type, you'll see that its <a href="https://github.com/kubernetes/api/blob/master/core/v1/types.go#L5040" rel="noreferrer">data</a> field is a Golang map of strings, so it appears memory-bound and managed at runtime unless it gets defined with <code>make()</code> somewhere else. Technically, the maximum number of keys in a hashmap is the map length, which is an <code>int</code>; its maximum value is explained here: <a href="https://stackoverflow.com/questions/32600763/maximum-number-of-elements-in-map">Maximum number of elements in map</a>. That would also be the theoretical limit for each value in data, as the maximum value of <code>len(string)</code>.</p> <p>If you want more insight from the API side, where the kube-apiserver receives protobufs (or JSON for that matter), you can take a look at the <a href="https://stackoverflow.com/questions/34128872/google-protobuf-maximum-size">google protobuf maximum size</a>. That will give you some measure of the limitations on sending the <code>data</code> field over the wire. There may be other limitations from the kube-apiserver itself when it comes to processing any large message.</p>
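<p>A quick, hedged way to see the etcd-side limit in practice (the file size is just an example, and the exact error text depends on the cluster version):</p> <pre><code># roughly 2.7 MB of text once base64-encoded, well above the ~1 MB limit
head -c 2M /dev/urandom | base64 &gt; big.txt

# the API server rejects the resulting ConfigMap as too large
kubectl create configmap big-test --from-file=big.txt
</code></pre>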
<p>My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.</p> <p>How do I do this?</p> <p>Everything I'm seeing online involves Ingress and nginx-ingress in particular.</p> <p>But my deployment is not a website, it's a Dropwizard service with a REST API on one port and an admin service on another port. I don't want to map the ports to a path on port 80, I want to keep the ports as is. Why is HTTPS tied to ingress?</p> <p>I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?</p>
<p>A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without Ingress. <a href="https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs" rel="nofollow noreferrer">This</a> seems to be a good example, using the <a href="https://hub.docker.com/r/ployst/nginx-ssl-proxy/" rel="nofollow noreferrer">nginx-ssl-proxy container</a>.</p>
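<p>A minimal sketch of what that pod spec could look like, assuming a TLS Secret named <code>api-tls</code> and an nginx config in a ConfigMap named <code>tls-proxy-conf</code> that terminates TLS and proxies to the Dropwizard ports on localhost (all names and images here are illustrative):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dropwizard-with-tls
spec:
  containers:
  - name: app
    image: myorg/dropwizard-service:latest   # your existing image
    ports:
    - containerPort: 8080   # REST API
    - containerPort: 8081   # admin
  - name: tls-proxy
    image: nginx:1.15
    ports:
    - containerPort: 8443   # exposed via the LoadBalancer Service
    volumeMounts:
    - name: certs
      mountPath: /etc/nginx/certs
      readOnly: true
    - name: conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: certs
    secret:
      secretName: api-tls
  - name: conf
    configMap:
      name: tls-proxy-conf
</code></pre> <p>The LoadBalancer Service then targets the proxy's port instead of the application ports, so nothing about the application itself has to change.</p>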
<p>I am migrating an application in Nodejs to kubernetes in GCP. In CI tutorials, I see the updated application being copied to a new docker image and sent to GCR.</p> <p>The process of uploading an image is slow compared to updating only the code. So what exactly is the gain of sending a new image containing the application?</p>
<p>The philosophy of Docker is simple - layers are reusable [1]. As long as the layers have not changed, they are reused across images. As long as you keep your application's layers as the last few, the base layers can be reused, keeping the number of layers pushed to a minimum. You should consider using multi-stage builds to minimise shipping build-stage dependencies with your container. Hasura.io has an excellent post[2] on using multi-stage builds for NodeJS apps effectively.</p> <ol> <li><a href="https://www.infoworld.com/article/3077875/linux/containers-101-docker-fundamentals.html" rel="nofollow noreferrer">https://www.infoworld.com/article/3077875/linux/containers-101-docker-fundamentals.html</a></li> <li><a href="https://blog.hasura.io/an-exhaustive-guide-to-writing-dockerfiles-for-node-js-web-apps-bbee6bd2f3c4" rel="nofollow noreferrer">https://blog.hasura.io/an-exhaustive-guide-to-writing-dockerfiles-for-node-js-web-apps-bbee6bd2f3c4</a></li> </ol>
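<p>For illustration, a minimal multi-stage Dockerfile sketch for a Node.js app along those lines (script names, paths, and base image tags are only examples):</p> <pre><code># build stage: dev dependencies and compilation live only here
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# runtime stage: small image, only production dependencies and the build output
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
</code></pre> <p>Because the base image and dependency layers rarely change, a typical push only uploads the last few application layers, which is what keeps the image-based workflow fast enough in practice.</p>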
<p>Data showing "xxx" has been masked.</p> <p>Problem description:</p> <p>Success Scenario: When i make my image public in docker registry, my pod is getting created successfully.</p> <p>Failure Scenario: When i make my image private in docker registry. My image pull fails on kubernetes cluster.</p> <p>Please details below and help.</p> <p>I have my image published to docker registry.</p> <p>Following is my kubernetes secret:</p> <pre><code>c:\xxxxxxx\temp&gt;kubectl get secret regcredx -o yaml apiVersion: v1 data: .dockerconfigjson: xxxxxx kind: Secret metadata: creationTimestamp: 2018-10-25T21:38:18Z name: regcredx namespace: default resourceVersion: "1174545" selfLink: /api/v1/namespaces/default/secrets/regcredx uid: 49a71ba5-d89e-11e8-8bd2-005056b7126c type: kubernetes.io/dockerconfigjson </code></pre> <p>Here is my pod.yaml file:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: whatever spec: containers: - name: whatever image: xxxxxxxxx/xxxxxx:123 imagePullPolicy: Always command: [ "sh", "-c", "tail -f /dev/null" ] imagePullSecrets: - name: regcredx </code></pre> <p>Here is my pod config in cluster:</p> <pre><code>c:\Sharief\temp&gt;kubectl get pod whatever -o yaml apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/podIP: 100.96.1.81/32 creationTimestamp: 2018-10-26T20:49:11Z name: whatever namespace: default resourceVersion: "1302024" selfLink: /api/v1/namespaces/default/pods/whatever uid: 9783b81f-d960-11e8-94ca-005056b7126c spec: containers: - command: - sh - -c - tail -f /dev/null image: xxxxxxxxx/xxxxxxx:123 imagePullPolicy: Always name: whatever resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-4db4c readOnly: true dnsPolicy: ClusterFirst imagePullSecrets: - name: regcredx nodeName: xxxx-pvt priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: default-token-4db4c secret: defaultMode: 420 secretName: default-token-4db4c status: conditions: - lastProbeTime: null lastTransitionTime: 2018-10-26T20:49:33Z status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 2018-10-26T20:49:33Z message: 'containers with unready status: [whatever]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: 2018-10-26T20:49:11Z status: "True" type: PodScheduled containerStatuses: - image: xxxxxxxxx/xxxxxxx:123 imageID: "" lastState: {} name: whatever ready: false restartCount: 0 state: waiting: message: Back-off pulling image "xxxxxxxxx/xxxxxxx:123" reason: ImagePullBackOff hostIP: xx.xxx.xx.xx phase: Pending podIP: xx.xx.xx.xx qosClass: BestEffort startTime: 2018-10-26T20:49:33Z </code></pre> <p>Here is my pod discription:</p> <pre><code>c:\xxxxxxx\temp&gt;kubectl describe pod whatever Name: whatever Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: co2-vmkubwrk01company-pvt/xx.xx.xx.xx Start Time: Fri, 26 Oct 2018 15:49:33 -0500 Labels: &lt;none&gt; Annotations: cni.projectcalico.org/podIP=xxx.xx.xx.xx/xx Status: Pending IP: xxx.xx.x.xx Containers: whatever: Container ID: Image: xxxxxxxxx/xxxxxxx:123 Image ID: Port: &lt;none&gt; 
Host Port: &lt;none&gt; Command: sh -c tail -f /dev/null State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-4db4c (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-4db4c: Type: Secret (a volume populated by a Secret) SecretName: default-token-4db4c Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 27m default-scheduler Successfully assigned whatever to xxx Normal SuccessfulMountVolume 26m kubelet, co2-vmkubwrk01company-pvt MountVolume.SetUp succeeded for volume "default-token-4db4c" Normal Pulling 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt pulling image "xxxxxxxxx/xxxxxxx:123" Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Failed to pull image "xxxxxxxxx/xxxxxxx:123": rpc error: code = Unknown desc = repository docker.io/xxxxxxxxx/xxxxxxx not found: does not exist or no pull access Warning Failed 25m (x4 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ErrImagePull Normal BackOff 16m (x41 over 26m) kubelet, co2-vmkubwrk01company-pvt Back-off pulling image "xxxxxxxxx/xxxxxxx:123" Warning Failed 1m (x106 over 26m) kubelet, co2-vmkubwrk01company-pvt Error: ImagePullBackOff </code></pre>
<p>Kubernetes could not find your repository; the image path is wrong and you need to fix this:</p> <pre><code>image: xxxxxxxxx/xxxxxx:123 </code></pre> <p>One thing you can do to test that assumption is to pre-pull the image on the node on which the deployment is going to happen: run <code>docker images</code>, note the correct uri/repo:tag, and update it in your deployment.</p>
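<p>A hedged sketch of that check from the node (the placeholders stand in for the masked registry, user, and repository names):</p> <pre><code># confirm the node can authenticate and that the exact path and tag exist
docker login
docker pull docker.io/&lt;your-user&gt;/&lt;your-repo&gt;:123

# then use the same fully qualified reference in the pod spec
#   image: docker.io/&lt;your-user&gt;/&lt;your-repo&gt;:123
</code></pre> <p>If the pull from the node works but the pod still fails, the remaining suspect is the contents of the <code>regcredx</code> secret (registry server, username, password) rather than the image path.</p>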
<p>I would like to see all resources in a namespace.</p> <p>Doing <code>kubectl get all</code> will, despite the name, not list things like services and ingresses.</p> <p>If I know the type I can explicitly ask for that particular type, but it seems there is also no command for listing all possible types. (In particular, <code>kubectl get</code> does not, for example, list custom types.)</p> <p>How can I show all resources before, for example, deleting that namespace?</p>
<p>Based on <a href="https://github.com/kubernetes/kubectl/issues/151#issuecomment-402003022" rel="noreferrer">this comment</a>, the supported way to list all resources is to iterate through all the api versions listed by <code>kubectl api-resources</code>:</p> <blockquote> <p>kubectl api-resources enumerates the resource types available in your cluster.</p> <p>this means you can combine it with kubectl get to actually list every instance of every resource type in a namespace:</p> </blockquote> <pre><code>kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -l &lt;label&gt;=&lt;value&gt; -n &lt;namespace&gt;
</code></pre>
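<p>A variant of the same idea that issues a single <code>kubectl get</code> call instead of one per resource type (assuming a GNU userland for <code>paste</code>; drop the label selector entirely if you want everything, and expect harmless errors for types you are not allowed to list):</p> <pre><code>kubectl get "$(kubectl api-resources --verbs=list --namespaced -o name | paste -sd, -)" \
  --ignore-not-found --show-kind -n &lt;namespace&gt;
</code></pre>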
<p>I'm trying to dive into K8s networking model and I think I have a pretty good understanding of it so far, but there is one thing that I can't get my head around. In the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model" rel="nofollow noreferrer">Cluster Networking</a> guide, the following is mentioned:</p> <blockquote> <p>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p> <ul> <li>all containers can communicate with all other containers without NAT</li> <li><strong>all nodes can communicate with all containers (and vice-versa) without NAT</strong></li> <li>the IP that a container sees itself as is the same IP that others see it as</li> </ul> </blockquote> <p>The second bullet point specifies that x-node container communication should be possible without NAT. This is however not true when kube-proxy runs in <code>iptables</code> mode. This is the dump of the iptables from one of my nodes:</p> <pre><code>Chain POSTROUTING (policy ACCEPT) target prot opt source destination KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */ Chain KUBE-POSTROUTING (1 references) target prot opt source destination MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000 /* sample target pod chain being marked for MASQ */ Chain KUBE-SEP-2BKJZA32HM354D5U (1 references) target prot opt source destination KUBE-MARK-MASQ all -- xx.yyy.zzz.109 anywhere /* kube-system/heapster: */ DNAT tcp -- anywhere anywhere /* kube-system/heapster: */ tcp to:xx.yyy.zzz.109:8082 Chain KUBE-MARK-MASQ (156 references) target prot opt source destination MARK all -- anywhere anywhere MARK or 0x4000 </code></pre> <p>Looks like K8s is changing the source IP of marked outbound packets to the node's IP (for a ClusterIP service). And they even explicitly mention this in <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip" rel="nofollow noreferrer">Source IP for Services with Type=ClusterIP</a>:</p> <blockquote> <p>Packets sent to ClusterIP from within the cluster are never source NAT’d if you’re running kube-proxy in iptables mode, which is the default since Kubernetes 1.2. <strong>If the client pod and server pod are in the same node, the client_address is the client pod’s IP address. However, if the client pod and server pod are in different nodes, the client_address is the client pod’s node flannel IP address.</strong></p> </blockquote> <p>This starts by saying packets within the cluster are never SNAT'd but then proceedes to say packages sent to pods in other nodes are in fact SNAT'd. I'm confused about this - am I misinterpreting the <em>all nodes can communicate with all containers (and vice-versa) without NAT</em> requirement somehow?</p>
<p>If you read <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">point 2</a>:</p> <blockquote> <p>Pod-to-Pod communications: this is the primary focus of this document.</p> </blockquote> <p>This still applies to all the containers and pods running in your cluster, because all of them are in the <code>PodCidr</code>:</p> <ul> <li>all containers can communicate with all other containers without NAT</li> <li>all nodes can communicate with all containers (and vice-versa) without NAT</li> <li>the IP that a container sees itself as is the same IP that others see it as</li> </ul> <p>Basically, all pods have unique IP addresses, are in the same space, and can talk to each other at the IP layer.</p> <p>Also, if you look at the routes on one of your Kubernetes nodes you'll see something like this for Calico, where the podCidr is <code>192.168.0.0/16</code>:</p> <pre><code>default via 172.0.0.1 dev ens5 proto dhcp src 172.0.1.10 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.31.0.0/20 dev ens5 proto kernel scope link src 172.0.1.10
172.31.0.1 dev ens5 proto dhcp scope link src 172.0.1.10 metric 100
blackhole 192.168.0.0/24 proto bird
192.168.0.42 dev calixxxxxxxxxxx scope link
192.168.0.43 dev calixxxxxxxxxxx scope link
192.168.4.0/24 via 172.0.1.6 dev tunl0 proto bird onlink
192.168.7.0/24 via 172.0.1.55 dev tunl0 proto bird onlink
192.168.8.0/24 via 172.0.1.191 dev tunl0 proto bird onlink
192.168.9.0/24 via 172.0.1.196 dev tunl0 proto bird onlink
192.168.11.0/24 via 172.0.1.147 dev tunl0 proto bird onlink
</code></pre> <p>You can see that packets with a <code>192.168.x.x</code> destination are directly forwarded to a tunnel interface connected to the nodes, so no NATing there.</p> <p>Now, when you are connecting from outside the PodCidr, your packets are definitely NATed, say through Services or through an external host. You will also definitely see iptables rules like this:</p> <pre><code># Completed on Sat Oct 27 00:22:39 2018
# Generated by iptables-save v1.6.1 on Sat Oct 27 00:22:39 2018
*nat
:PREROUTING ACCEPT [65:5998]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [28:1757]
:POSTROUTING ACCEPT [61:5004]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
</code></pre>
<p>I have 4 microservices running on my laptop listening at various ports. Can I use Istio to create a service mesh on my laptop so the services can communicate with each other through Istio? All the links on google about Istio include kubernetes but I want to run Istio without Kubernetes. Thanks for reading.</p>
<p>In practice, not really as of this writing, since pretty much all the Istio runbooks and guides are available for Kubernetes.</p> <p>In theory, yes. Istio components are designed to be <a href="https://istio.io/docs/concepts/what-is-istio/#security" rel="noreferrer">'platform independent'</a>. Quote from the docs:</p> <blockquote> <p>While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.</p> </blockquote> <p>But unless you know the details of each of the components really well: <a href="https://www.envoyproxy.io/docs/envoy/latest/" rel="noreferrer">Envoy</a>, <a href="https://istio.io/docs/concepts/what-is-istio/#mixer" rel="noreferrer">Mixer</a>, <a href="https://istio.io/docs/concepts/what-is-istio/#pilot" rel="noreferrer">Pilot</a>, <a href="https://istio.io/docs/concepts/what-is-istio/#citadel" rel="noreferrer">Citadel</a>, and <a href="https://istio.io/docs/concepts/what-is-istio/#galley" rel="noreferrer">Galley</a>, and you are willing to spend a lot of time, it is not practically feasible to get it running outside of Kubernetes.</p> <p>If you want to use something less tied to Kubernetes you can take a look at <a href="https://www.consul.io/" rel="noreferrer">Consul</a>; although it doesn't have all the functionality Istio has, it overlaps with some of its features.</p>
<p>Kubernetes documentation <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource" rel="nofollow noreferrer">example here</a> shows how a network policy can be applied for a source specified by either a pod selector OR a namespace selector. Can I specify a source the fulfills both constraints at the same time.</p> <p>e.g. Can a source be a pod with label "tier=web" which is deployed in namespace "ingress".</p> <p><strong>P.S.</strong> For now, I have it working by adding namespace name as pod-labels.</p>
<p>Yes, this is possible, but not immediately intuitive. If you look at the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">section below</a> the chunk you linked, it gives a pretty good explanation (this appears to have been added after you asked your question). The NetworkPolicy API documentation <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io" rel="nofollow noreferrer">here</a> is generally helpful as well.</p> <p>Basically, if you put each selector as two separate items in the list like the example does, it is using a logical OR. If you put them as two items in the same array element in the list (no dash in front of the second item) like the example below to AND the podSelector and namespaceSelector, it will work. It may help to see these in a yaml to json converter.</p> <p>Here's an ingress chunk from their policy, modified to AND the conditions</p> <pre><code> ingress: - from: - namespaceSelector: matchLabels: project: myproject podSelector: matchLabels: role: frontend </code></pre> <p>This same sort of logic applies to using the <code>ports</code> rule if you use that alongside of the <code>to</code> or <code>from</code> statements. You'll notice in the example that they do not have a dash in front of <code>ports</code> under the ingress rule. If they had put a dash in front, it would OR the conditions of ingress and ports.</p> <p>Here are some GitHub links from when they were discussing how to implement combining selectors:</p> <ol> <li>This comment may give a little more background. The API already supported the OR, so doing it otherwise would've broken some functionality for people with that implemented: <a href="https://github.com/kubernetes/kubernetes/issues/50451#issuecomment-336305625" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/50451#issuecomment-336305625</a></li> <li><a href="https://github.com/kubernetes/kubernetes/pull/60452" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/60452</a></li> </ol>
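<p>For contrast with the ANDed example above, the same two selectors written as separate list items (each with its own dash) are ORed, which is what the documentation's example does:</p> <pre><code>  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
</code></pre> <p>So the only syntactic difference between "namespace AND pod label" and "namespace OR pod label" is whether the second selector starts a new list item.</p>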
<p><strong>Summary</strong></p> <p>I'm trying to figure out how to properly use the OR <code>|</code> operator in a Prometheus query because my imported Grafana dashboard is not working.</p> <p><strong>Long version</strong></p> <p>I'm trying to debug a Grafana dashboard based on some data scraped from my Kubernetes pods running <a href="https://github.com/AppMetrics/Prometheus" rel="nofollow noreferrer">AppMetrics/Prometheus</a>; the dashboard is <a href="https://grafana.com/dashboards/2204" rel="nofollow noreferrer">here</a>. Basically what happens is that when the value "All" for the <code>server</code> is selected on the Grafana dashboard (<code>server</code> is an individual pod in this case), no data appears. However, when I select an individual pod, then data does appear.</p> <p>Here's an example of the same metric scraped from the two pods:</p> <pre><code># HELP application_httprequests_transactions # TYPE application_httprequests_transactions summary application_httprequests_transactions_sum{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test"} 5.006965628 application_httprequests_transactions_count{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test"} 1367 application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.5"} 0.000202825 application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.75"} 0.000279318 application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.95"} 0.000329862 application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.99"} 0.055584233 # HELP application_httprequests_transactions # TYPE application_httprequests_transactions summary application_httprequests_transactions_sum{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test"} 6.10214788 application_httprequests_transactions_count{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test"} 1363 application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.5"} 0.000218548 application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.75"} 0.000277483 application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.95"} 0.033821094 application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.99"} 0.097113234 </code></pre> <p>I ran the Query inspector in Grafana to find out which query it is calling, and then ran the PromQL query in Prometheus itself. 
Basically, when I execute the following PromQL queries individually, they return data:</p> <pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-l9tdv"}[15m])*60 rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-jdq78"}[15m])*60 </code></pre> <p>However, when I try to use PromQL's <code>|</code> operator to combine them, I don't get data back:</p> <pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-l9tdv|myapp-test-58d94bf78d-jdq78"}[15m])*60 </code></pre> <p>Here's the raw output from Grafana's query inspector:</p> <pre><code>xhrStatus:"complete" request:Object method:"GET" url:"api/datasources/proxy/56/api/v1/query_range?query=rate(application_httprequests_transactions_count%7Benv%3D%22test%22%2Capp%3D%22MyApp%22%2Cserver%3D%22myapp-test-58d94bf78d-jdq78%7Cmyapp-test-58d94bf78d-l9tdv%7Cmyapp-test-5b8c9845fb-7lklm%7Cmyapp-test-5b8c9845fb-8jf7n%7Cmyapp-test-5b8c9845fb-d9x5c%7Cmyapp-test-5b8c9845fb-fw4gj%7Cmyapp-test-5b8c9845fb-vtl9z%7Cmyapp-test-5b8c9845fb-vv7xv%7Cmyapp-test-5b8c9845fb-wq9bs%7Cmyapp-test-5b8c9845fb-xqfrt%7Cmyapp-test-69999d58b5-549vd%7Cmyapp-test-69999d58b5-lmp8x%7Cmyapp-test-69999d58b5-nbvt9%7Cmyapp-test-69999d58b5-qphj2%7Cmyapp-test-6b8dcc5ffb-gjjvj%7Cmyapp-test-6b8dcc5ffb-rxfk2%7Cmyapp-test-7fdf446767-bzhm2%7Cmyapp-test-7fdf446767-hp46w%7Cmyapp-test-7fdf446767-rhqhq%7Cmyapp-test-7fdf446767-wxmm2%22%7D%5B1m%5D)*60&amp;start=1540574190&amp;end=1540574505&amp;step=15" response:Object status:"success" data:Object resultType:"matrix" result:Array[0] =&gt; [] </code></pre> <p>I opened a GitHub issue for this as well; it has a quick GIF screen recording showing what I mean: <a href="https://github.com/AppMetrics/Prometheus/issues/43" rel="nofollow noreferrer">AppMetrics/Prometheus#43</a></p>
<p><code>|</code> is for regular expressions; PromQL doesn't have a <code>|</code> operator (but it does have an <code>or</code> operator). You need to specify that the matcher is a regex rather than an exact match by using <code>=~</code>:</p> <pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server=~"myapp-test-58d94bf78d-l9tdv|myapp-test-58d94bf78d-jdq78"}[15m])*60 </code></pre>
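<p>For completeness, the PromQL <code>or</code> operator mentioned above works on whole vectors rather than on label values, so an equivalent (if more verbose) form would look roughly like this:</p> <pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-l9tdv"}[15m])*60
or
rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-jdq78"}[15m])*60
</code></pre> <p>For the Grafana "All" case, though, the regex matcher with <code>=~</code> is the form the dashboard's templated <code>server</code> variable expects.</p>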
<p>Docker supports <a href="https://docs.docker.com/engine/security/userns-remap/" rel="nofollow noreferrer">user namespace remapping</a>, so that the user namespace is completely separated from the host.</p> <p>The current default behavior ensures that containers get their own user and group management, i.e. their own version of <code>/etc/passwd</code> and <code>/etc/group</code>, but container processes are run under the same identical UIDs on the host system. This means if your container runs with UID 1 (root), it will also run as root on the host. By the same token, if your container has user &quot;john&quot; with UID 1001 installed and starts its main process with that user, on the host it will also run with UID 1001, which might belong to user &quot;Will&quot; and could also have admin rights.</p> <p>To make user namespace isolation complete, one needs to enable <strong>remapping</strong>, which maps the UIDs in the container to different UIDs on the host. So, UID 1 on the container would be mapped to a &quot;non-privileged&quot; UID on the host.</p> <p>Is there any support in Kubernetes for this feature to be enabled on the underlying Container Runtime? Will it work out of the box without issues?</p>
<p>So, it's not supported yet the way it is in <a href="https://docs.docker.com/engine/security/userns-remap/" rel="noreferrer">Docker</a>, as per <a href="https://github.com/kubernetes/enhancements/issues/127" rel="noreferrer">this</a> (as alluded to in the comments) and <a href="https://github.com/kubernetes/kubernetes/issues/59152" rel="noreferrer">this</a>.</p> <p>However, if you are looking at isolating your workloads, there are other alternatives (it's not the same, but the options are pretty good):</p> <p>You can use <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy" rel="noreferrer">Pod Security Policies</a> and specifically you can use <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups" rel="noreferrer">RunAsUser</a>, together with <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation" rel="noreferrer">AllowPrivilegeEscalation=false</a>. Pod Security Policies can be tied to RBAC so you can restrict how users run their pods.</p> <p>In other words, you can force your users to run pods only as 'youruser' and disable the <code>privileged</code> flag in the pod <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer"><code>securityContext</code></a>. You can also disable <code>sudo</code> in your container images.</p> <p>Furthermore, you can drop Linux <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#capabilities" rel="noreferrer">Capabilities</a>, specifically <code>CAP_SETUID</code>. And, even more advanced, use a <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp" rel="noreferrer">seccomp</a> profile, an <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#selinux" rel="noreferrer">SELinux</a> profile, or an <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor" rel="noreferrer">AppArmor</a> profile.</p> <p>Other alternatives to run untrusted workloads (in alpha as of this writing):</p> <ul> <li><a href="https://katacontainers.io/" rel="noreferrer">Kata Containers</a></li> <li><a href="https://nabla-containers.github.io/" rel="noreferrer">Nabla Containers</a></li> <li><a href="https://github.com/google/gvisor" rel="noreferrer">gVisor</a></li> </ul>
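<p>Going back to the Pod Security Policy route, a minimal sketch (the policy name is only an example) that forces non-root users and blocks privilege escalation:</p> <pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-nonroot
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - SETUID
  - SETGID
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
</code></pre> <p>Bound to the right RBAC subjects, this keeps container UIDs away from root on the host even though the user namespace itself is still shared.</p>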
<p>I am trying to create a kubernetes Custom Resource Definition (named <code>Block</code>) but keep hitting the following error: </p> <pre><code>Failed to list *v1alpha1.Block: the server could not find the requested resource (get blocks.kubechain.com). </code></pre> <p>This issue is raised from a call to <code>List</code> on a Controller for this CRD:</p> <pre><code>indexer, controller := cache.NewIndexerInformer( &amp;cache.ListWatch{ ListFunc: func(lo metav1.ListOptions) (result k8sruntime.Object, err error) { return clientSet.Block(ns).List(lo) }, WatchFunc: func(lo metav1.ListOptions) (watch.Interface, error) { return clientSet.Block(ns).Watch(lo) }, }, &amp;v1alpha1.Block{}, 1*time.Minute, cache.ResourceEventHandlerFuncs{}, cache.Indexers{}, ) </code></pre> <p>For some context here is the <code>register.go</code> file where I register the above resourced to the a scheme builder:</p> <pre><code>// GroupName is the api prefix. const GroupName = "kubechain.com" // GroupVersion is the version of the api. const GroupVersion = "v1alpha1" // SchemeGroupVersion is the group version object. var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: GroupVersion} var ( // SchemeBuilder adds the new CRDs Block and Blockchain. SchemeBuilder = runtime.NewSchemeBuilder(AddKnownTypes) // AddToScheme uses SchemeBuilder to add new CRDs. AddToScheme = SchemeBuilder.AddToScheme ) // AddKnownTypes . func AddKnownTypes(scheme *runtime.Scheme) error { scheme.AddKnownTypes(SchemeGroupVersion, &amp;Block{}, &amp;BlockList{}, ) metav1.AddToGroupVersion(scheme, SchemeGroupVersion) return nil } </code></pre> <p>And here is the <code>scheme.go</code> file where I actually run <code>AddToScheme</code> from the former file:</p> <pre><code>var Scheme = runtime.NewScheme() var Codecs = serializer.NewCodecFactory(Scheme) var ParameterCodec = runtime.NewParameterCodec(Scheme) var localSchemeBuilder = runtime.SchemeBuilder{ v1alpha1.AddToScheme, } var AddToScheme = localSchemeBuilder.AddToScheme func init() { metav1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"}) if err := AddToScheme(Scheme); err != nil { panic(err) } } </code></pre> <p>Can anyone share some information as to what I am doing wrong here??</p> <p>This work is following <a href="https://www.martin-helmich.de/en/blog/kubernetes-crd-client.html" rel="noreferrer">this</a> blog post.</p>
<p>I have seen a similar error. It was an RBAC issue, but the error message was misleading.</p> <p>If your cluster has RBAC enabled, make sure your controller has <code>get</code>, <code>list</code> and <code>watch</code> permissions for the <code>blocks.kubechain.com</code> resource.</p>
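<p>A hedged sketch of what that could look like, assuming the controller runs under a ServiceAccount called <code>block-controller</code> in the <code>default</code> namespace (both names are assumptions):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: block-reader
rules:
- apiGroups: ["kubechain.com"]
  resources: ["blocks"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: block-reader-binding
subjects:
- kind: ServiceAccount
  name: block-controller
  namespace: default
roleRef:
  kind: ClusterRole
  name: block-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>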
<p>I am trying to configure my GKE cluster to pull from a private GCR repo in the same project. I am not using OAuth scopes but have associated a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa" rel="noreferrer">least privilege service account</a> with the default node pool and provided it with the <code>roles/storage.objectViewer</code> permission.</p> <p>However, I am still receiving the following when trying to access this image: <code> Failed to pull image "eu.gcr.io/&lt;project&gt;/&lt;image&gt;": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication </code></p> <p>Do I also need to configure <a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="noreferrer"><code>imagePullSecrets</code></a> or should the <code>roles/storage.objectViewer</code> permission be sufficient?</p>
<p>The root cause of this issue was not setting access (OAuth) scopes on the cluster instances preventing the service account from working as intended.</p> <p>From the GCP docs about <a href="https://cloud.google.com/compute/docs/access/service-accounts#usingroles" rel="noreferrer">Compute service accounts</a> :</p> <blockquote> <p><strong>You must set access scopes on the instance to authorize access.</strong></p> <p>You cannot set only IAM roles on the service account and omit access scopes when creating the virtual machine instance. The level of access a service account has is determined by a combination of access scopes and IAM roles so you must configure both access scopes and IAM roles for the service account to work properly.</p> </blockquote> <p>The minimal scopes required when accessing private images in GCR can be found <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#reduce_node_sa_scopes" rel="noreferrer">here</a> with the meaning of these scopes found <a href="https://developers.google.com/identity/protocols/googlescopes" rel="noreferrer">here</a>. A least privilege service account for the cluster nodes can then be created following the instructions <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa" rel="noreferrer">here</a>.</p> <p>As described <a href="https://cloud.google.com/compute/docs/access/service-accounts#service_account_permissions" rel="noreferrer">here</a> an alternative would be to only grant the <code>https://www.googleapis.com/auth/cloud-platform</code> scope to the cluster nodes which authorises access to all Cloud Platform services and then limit access through IAM roles on node service accounts.</p> <p>By configuring the cluster nodes as above, <code>imagePullSecrets</code> are not required for pulling private images from GCR in the same project.</p>
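<p>For reference, a hedged sketch of the gcloud side of this (project, cluster, pool, and service account names are placeholders):</p> <pre><code>gcloud container node-pools create private-pool \
  --cluster=my-cluster \
  --service-account=gke-least-priv@&lt;project&gt;.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring
</code></pre> <p>With the storage read-only scope present and <code>roles/storage.objectViewer</code> granted to that service account, pulls from <code>eu.gcr.io/&lt;project&gt;/&lt;image&gt;</code> work without any <code>imagePullSecrets</code>.</p>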
<p>I have a folder named "myspec" which has some kube-spec files, let's say</p> <ul> <li>pod.yaml, service.yaml, secret.yaml</li> </ul> <p>When I run the command <strong>"kubectl create -f myspec"</strong> it creates everything: pod, service and secret.</p> <p>Now I wish to perform the same thing using the Kubernetes Go client library.</p>
<p>I believe the previous poster meant to post this:</p> <p>1) You first convert the string to bytes.</p> <p>2) Then unmarshal it into a Pod object.</p> <p>3) Then create the pod like any other object.</p> <p>This can be done, without loss of generality, for Pods, Services, ReplicationControllers, Deployments, ConfigMaps, Secrets, and any other Kubernetes API object.</p> <p><strong>example</strong></p> <pre><code>func CreatePodFromYaml(podAsYaml string, ns string) error {
    var p v1.Pod
    // json.Unmarshal expects the manifest as JSON; convert YAML to JSON first if needed
    err := json.Unmarshal([]byte(podAsYaml), &amp;p)
    if err != nil {
        return err
    }
    pod, poderr := kubernetesConfig().CoreV1().Pods(ns).Create(&amp;p)
    if poderr != nil {
        return poderr
    }
    fmt.Printf("Created Pod %q.\n", pod.GetObjectMeta().GetName())
    return nil
}
</code></pre> <p>To compile this code, you'll also need to make the kubernetesConfig object:</p> <pre><code>func kubernetesConfig() *kubernetes.Clientset {
    // clientcmd.RecommendedHomeFile resolves to ~/.kube/config
    // (note: "$HOME" is not expanded inside a Go string literal)
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        fmt.Println(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        fmt.Println(err.Error())
    }
    return clientset
}
</code></pre>
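<p>A hedged sketch of a variant that accepts genuine YAML input (it assumes the <code>sigs.k8s.io/yaml</code> package, which converts YAML to JSON and then uses the same JSON tags as the API types; the function name is made up):</p> <pre><code>package main

import (
    v1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml"
)

// podFromYAML decodes a Pod manifest written in YAML (or JSON, since
// YAML is a superset of JSON) into the typed client-go object.
func podFromYAML(podAsYAML string) (*v1.Pod, error) {
    var p v1.Pod
    if err := yaml.Unmarshal([]byte(podAsYAML), &amp;p); err != nil {
        return nil, err
    }
    return &amp;p, nil
}
</code></pre> <p>The returned object can then be passed to the same <code>Pods(ns).Create(...)</code> call shown above.</p>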
<p>Is there some python/go library to generate Terraform code from a JSON file, or is there any templating language to generate terraform code.</p> <p><strong>Use case:</strong></p> <p>I need to register all the Kubernetes services of type node port on my cluster, on external load balancer, and there is a terraform provider for that external load balancer that takes the service and namespace name and some other parameters and registers the service on a pool, once the pool is created with correct service , and namespace name, then the load balancer automatically do discovery of nodeports and node IPs and keep the pool updated all the time. Currently, I have to edit the terraform code manually to add the service names and other stuff, to update the load balancer config to create the pool for k8s service.</p> <p>Once I figure out how to generate/update the terraform file from the given state of the cluster, I will put this process as cluster daemon in a pod, that will run the script every X interval so that the load balancer is updated when users create and delete k8s services.</p> <p><strong>Example record in terraform file:</strong></p> <pre><code> # Pool automatically populated by K8s Service Discovery resource "vtm_pool" "&lt;cluster name&gt;_&lt;k8s namsepace&gt;_&lt;k8s service name&gt;" { name = ""&lt;cluster name&gt;_&lt;k8s namsepace&gt;_&lt;service name&gt;"" monitors = ["Ping"] service_discovery_enabled = "true" service_discovery_interval = "15" service_discovery_plugin = "${var.k8s_discovery_plugin}" service_discovery_plugin_args = "-s &lt;k8s service name &gt; -n &lt;k8s namsepace&gt; -c &lt;kubeconf file name&gt;" } </code></pre>
<p>Terraform has a couple of hook points that might work for you.</p> <p>The overall approach you're suggesting will probably work fine. The HCL syntax itself isn't that complicated, and you could just write it out. Hashicorp has <a href="https://github.com/hashicorp/hcl" rel="nofollow noreferrer">a Go library</a> that can read and write it.</p> <p>Terraform also <a href="https://www.terraform.io/docs/configuration/syntax.html#json-syntax" rel="nofollow noreferrer">directly supports JSON input</a> as well. The syntax is harder to hand-write but probably easier to machine-generate. There are a couple of subtleties in how the HCL syntax converts to JSON and it's probably worth reading through that whole page (even the HCL part), but this might be easiest to machine-generate.</p> <p>If I had to do this, I might reach for a Terraform <a href="https://www.terraform.io/docs/providers/external/data_source.html" rel="nofollow noreferrer">external data source</a>. That can run any program that produces JSON output, but once you have that it's "natively" in Terraform space. You could write something like (untested):</p> <pre><code>data "external" "services" { program = ["kubectl", "get", "service", "-o", "json", "--field-selector", "spec.type==LoadBalancer"] } resource "vtm_pool" "k8s_services" { count = "${length(data.external.services.result)}" name = "${data.external.services.*.metadata.name[count.index]}" } </code></pre> <p>(In the past, I've had trouble where Terraform can get very fixated on specific indices, and so if you do something like this and the <code>kubectl</code> output returns the Service objects in a different order, it might want to go off and swap which load balancer is which; or if something gets deleted, every other load balancer might get reassigned.)</p> <p>The "most right" (but hardest) answer would be to teach Kubernetes about your cloud load balancer. There is a standard (Go) <a href="https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/cloud-provider" rel="nofollow noreferrer">k8s.io/cloud-provider</a> interface you can implement, and a <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers" rel="nofollow noreferrer">handful of providers</a> in the main Kubernetes source tree. I'd guess the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/fake/fake.go" rel="nofollow noreferrer">fake cloud provider</a> is as good a starting point as any.</p>
<p>I am migrating an application in Nodejs to kubernetes in GCP. In CI tutorials, I see the updated application being copied to a new docker image and sent to GCR.</p> <p>The process of uploading an image is slow compared to updating only the code. So what exactly is the gain of sending a new image containing the application?</p>
<p>You are missing the whole Docker philosophy and the concept of immutable infrastructure. Docker and other container-based technologies were originally adopted to address the "matrix from hell" below. <a href="https://i.stack.imgur.com/ka3SN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ka3SN.jpg" alt="enter image description here"></a></p> <p>Solution <a href="https://i.stack.imgur.com/YVBVH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YVBVH.png" alt="enter image description here"></a></p> <p>Entire books have been written to answer the question of why you should ship a new image rather than copy the code, but the short answer is: use Docker images, and address the slowness with optimizations such as minimal base images, fewer layers, build caching, etc.</p> <p><a href="https://medium.com/skills-matter/3-simple-tricks-for-smaller-docker-images-cf2760645621?mkt_tok=eyJpIjoiWTJNeFpXTmtZamd4TXpVeSIsInQiOiJQTU8wUFR1d1RtSFpabUdFVmJoQTJWaTZ5Q2did0lJR3FacVRJYndYQUdQZXhIOE95ZmhSN3oyMXBpVmRpQ09nd3lOcU43Q1dHak1vcnMyWm1ObllKMXRyTG14aE5URHdFU3dzYzlzbXAyTUkrNzVKYkRqOUxscHhYdG80bHRDciJ9" rel="nofollow noreferrer">Minimal docker images</a></p>
<p>I have set up a Kubernetes cluster on gcloud via gitlab.</p> <p>I have some trouble pulling my images when I deploy my application.</p> <p>I use a gcloud cluster with a registry on the same gcloud project. Normally, I'm able to pull my image directly without any modification (supposed to use the <strong>Compute Engine default service account?</strong>).</p> <p>But I get a unauthorized on my pod when he try to pull the image : </p> <pre><code> Warning Failed 3m (x2 over 3m) kubelet, gke-production-default-pool-********-**** Failed to pull image "eu.gcr.io/[My-Project]/services-identity:715bfffa": rpc error: code = Unknown desc = unauthorized: authentication required Warning Failed 3m (x2 over 3m) kubelet, gke-production-default-pool-********-**** Error: ErrImagePull Normal BackOff 2m (x6 over 3m) kubelet, gke-production-default-pool-********-**** Back-off pulling image "eu.gcr.io/[My-Project]/services-identity:715bfffa" Warning Failed 2m (x6 over 3m) kubelet, gke-production-default-pool-********-**** Error: ImagePullBackOff Normal Pulling 2m (x3 over 3m) kubelet, gke-production-default-pool-********-**** pulling image "eu.gcr.io/[My-Project]/services-identity:715bfffa" </code></pre> <p>I deploy via gitlab-ci with the following command line: </p> <pre><code>helm upgrade --install services-identity -f ./deploy/env/production-values.yml ./deploy/ --set image.tag=${CI_COMMIT_SHA:0:8} --namespace=production --wait </code></pre> <p>For information, I can pull the registry when this one is public, I can also pull the image locally via a docker login(using my gcloud account).</p> <p>Thanks in advance for your advice.</p>
<p>This is very similar to this: <a href="https://stackoverflow.com/questions/53008125/whats-the-minimal-permissions-i-need-to-configure-for-a-gke-node-pool-to-pull-f/53018981#53018981">What&#39;s the minimal permissions I need to configure for a GKE node pool to pull from a private GCR repo in the same project?</a>, except that you are not mentioning that it's on GKE, so I assume it is on GCE.</p> <p>You can use a <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file" rel="nofollow noreferrer"><code>json_key_file</code></a>.</p> <p>On all your nodes (assuming you are using Docker):</p> <pre><code>$ cat keyfile.json | docker login -u _json_key --password-stdin https://eu.gcr.io </code></pre> <p>Or use the same json_key_file with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer"><code>ImagePullSecrets</code></a> in the pod spec as described <a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="nofollow noreferrer">here</a>.</p> <p>Or, on all your Kubernetes nodes, you can use:</p> <pre><code>$ gcloud auth configure-docker </code></pre>
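<p>For the <code>ImagePullSecrets</code> route, a sketch of the commands (the secret name and key file path are placeholders; any syntactically valid email works):</p>
<pre><code>kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  --docker-email=any@valid.email
</code></pre>
<p>and then reference it from the pod spec (or from the chart values, if the chart exposes <code>imagePullSecrets</code>):</p>
<pre><code>imagePullSecrets:
- name: gcr-json-key
</code></pre>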
<p>I want to (temporarily) use localhost bound directories to persist application state of SonarQube. Below I describe how I achieved this in a self-hosted Kubernetes (1.11.3) cluster.</p> <p>The problem I encounter is that despite everything working Kubernetes does not use the host path to persist the data (<code>/opt/sonarqube/postgresql</code>). Upon <code>docker inspect</code> of the SonarQube containers it uses the binds below.</p> <p><strong>How can I use the host mounted path for the mounts?</strong></p> <pre><code> "Binds": [ "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/0:/opt/sonarqube/conf", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~configmap/startup:/tmp-script/:ro", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/2:/opt/sonarqube/data", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/3:/opt/sonarqube/extensions", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~secret/default-token-zrjdj:/var/run/secrets/kubernetes.io/serviceaccount:ro", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/etc-hosts:/etc/hosts", "/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/containers/sonarqube/95053a5c:/dev/termination-log" ] </code></pre> <p><strong>Here is what I did to set up the application</strong></p> <p>I created a <code>StorageClass</code> to create PVs that mount local paths:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-storage-nowait provisioner: kubernetes.io/no-provisioner </code></pre> <p>Then I created two PVs to be used with the <a href="https://github.com/helm/charts/tree/master/stable/sonarqube" rel="nofollow noreferrer">SonarQube helm chart</a> like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: sonarqube-pv-postgresql labels: type: local spec: capacity: storage: 20Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage hostPath: path: /opt/sonarqube/postgresql type: DirectoryOrCreate nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - myhost </code></pre> <p>I launched the SonarQube helm chart with this additional config to use the PVs I just created</p> <pre><code>image: tag: 7.1 persistence: enabled: true storageClass: local-storage accessMode: ReadWriteOnce size: 10Gi postgresql: persistence: enabled: true storageClass: local-storage accessMode: ReadWriteOnce size: 10Gi </code></pre>
<p>If you see the docs <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="noreferrer">here</a></p> <blockquote> <ul> <li>HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)</li> </ul> </blockquote> <p>Hence, that's probably why you are seeing it in a different place. I tried it myself and my PVC remains in the pending state. So you can either use <code>local</code> like this (note that the label must be a key/value pair so the PVC selector below can match it):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    vol: myvolume
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
</code></pre> <p>Then you have to create the corresponding PVC:</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      vol: "myvolume"
</code></pre> <p>Then in the pod spec:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: myclaim
</code></pre> <p>You can also use <code>hostPath</code> directly in the pod spec if you don't care which node the pod lands on and don't mind having different data on each node:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: DirectoryOrCreate
</code></pre>
<p>How can I change the securityContext of my pods in the Jenkins Kubernetes Plugin? For example, to run docker-in-docker images in privileged mode in a Docker environment. </p>
<p>I believe <a href="https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates" rel="nofollow noreferrer">this</a> should work (as per the docs):</p> <pre><code>def label = "mypod-${UUID.randomUUID().toString()}" podTemplate(label: label, yaml: """ apiVersion: v1 kind: Pod metadata: labels: some-label: some-label-value spec: containers: - name: busybox image: busybox command: - cat tty: true securityContext: allowPrivilegeEscalation: true """ ) { node (label) { container('busybox') { sh "hostname" } } } </code></pre>
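<p>Note that <code>allowPrivilegeEscalation</code> is not the same as a fully privileged container; docker-in-docker typically needs <code>privileged: true</code>. A sketch of such a pod template body (image and container name are just examples, not a complete DinD setup):</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:dind
    tty: true
    securityContext:
      privileged: true
</code></pre>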
<p>I am trying to run a docker container registry in Minikube for testing a CSI driver that I am writing. </p> <p>I am running minikube on mac and am trying to use the following minikube start command: <code>minikube start --vm-driver=hyperkit --disk-size=40g</code>. I have tried with both kubeadm and localkube bootstrappers and with the virtualbox vm-driver.</p> <p>This is the resource definition I am using for the registry pod deployment. </p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: registry labels: app: registry namespace: docker-registry spec: containers: - name: registry image: registry:2 imagePullPolicy: Always ports: - containerPort: 5000 volumeMounts: - mountPath: /var/lib/registry name: registry-data volumes: - hostPath: path: /var/lib/kubelet/plugins/csi-registry type: DirectoryOrCreate name: registry-data </code></pre> <p>I attempt to create it using <code>kubectl apply -f registry-setup.yaml</code>. Before running this my minikube cluster reports itself as ready and with all the normal minikube containers running.</p> <p>However, this fails to run and upon running <code>kubectl describe pod</code>, I see the following message:</p> <pre><code> Name: registry Namespace: docker-registry Node: minikube/192.168.64.43 Start Time: Wed, 08 Aug 2018 12:24:27 -0700 Labels: app=registry Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"registry"},"name":"registry","namespace":"docker-registry"},"spec":{"cont... Status: Running IP: 172.17.0.2 Containers: registry: Container ID: docker://42e5193ac563c2b2e2a2b381c91350d30f7e7c5009a30a5977d33b403a374e7f Image: registry:2 ... TRUNCATED FOR SPACE ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned registry to minikube Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "registry-data" Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-kq5mq" Normal Pulling 1m kubelet, minikube pulling image "registry:2" Normal Pulled 1m kubelet, minikube Successfully pulled image "registry:2" Normal Created 1m kubelet, minikube Created container Normal Started 1m kubelet, minikube Started container ... TRUNCATED ... Name: storage-provisioner Namespace: kube-system Node: minikube/192.168.64.43 Start Time: Wed, 08 Aug 2018 12:24:38 -0700 Labels: addonmanager.kubernetes.io/mode=Reconcile integration-test=storage-provisioner Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provis... 
Status: Pending IP: 192.168.64.43 Containers: storage-provisioner: Container ID: Image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /storage-provisioner State: Waiting Reason: ErrImagePull Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /tmp from tmp (rw) /var/run/secrets/kubernetes.io/serviceaccount from storage-provisioner-token-sb5hz (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: tmp: Type: HostPath (bare host directory volume) Path: /tmp HostPathType: Directory storage-provisioner-token-sb5hz: Type: Secret (a volume populated by a Secret) SecretName: storage-provisioner-token-sb5hz Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned storage-provisioner to minikube Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "tmp" Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "storage-provisioner-token-sb5hz" Normal Pulling 23s (x3 over 1m) kubelet, minikube pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" Warning Failed 21s (x3 over 1m) kubelet, minikube Failed to pull image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1": rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /storage-provisioner: no space left on device Warning Failed 21s (x3 over 1m) kubelet, minikube Error: ErrImagePull Normal BackOff 7s (x3 over 1m) kubelet, minikube Back-off pulling image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" Warning Failed 7s (x3 over 1m) kubelet, minikube Error: ImagePullBackOff ------------------------------------------------------------ ... </code></pre> <p>So while the registry container starts up correctly, a few of the other minikube services (including dns, http ingress service, etc) begin to fail with reasons such as the following: <code>write /storage-provisioner: no space left on device</code>. Despite allocating a 40GB disk-size to minikube, it seems as though minikube is trying to write to <code>rootfs</code> or <code>devtempfs</code> (depending on the vm-driver) which has only 1GB of space.</p> <pre><code>$ df -h Filesystem Size Used Avail Use% Mounted on rootfs 919M 713M 206M 78% / devtmpfs 919M 0 919M 0% /dev tmpfs 996M 0 996M 0% /dev/shm tmpfs 996M 8.9M 987M 1% /run tmpfs 996M 0 996M 0% /sys/fs/cgroup tmpfs 996M 8.0K 996M 1% /tmp /dev/sda1 34G 1.3G 30G 4% /mnt/sda1 </code></pre> <p>Is there a way to make minikube actually use the 34GB of space that was allocated to /mnt/sda1 instead of rootfs when pulling images and creating containers?</p> <p>Thanks in advance for any help!</p>
<p>You can also use the minikube <code>--docker-opt</code> option to set the <code>--data-root</code> <a href="https://docs.docker.com/engine/reference/commandline/dockerd/" rel="nofollow noreferrer">option</a> of the <code>dockerd</code> daemon running inside minikube. <code>--docker-opt</code> can be used as a pass-through for any parameter to <code>dockerd</code>. </p> <p>For example, in the case you describe above it would look like: </p> <p><code>minikube start --vm-driver=hyperkit --disk-size=40g --docker-opt="--data-root /mnt/sda1"</code></p> <p>Keep in mind that if you try to modify an existing minikube cluster you either have to copy <code>/var/lib/docker</code> to <code>/mnt/sda1</code> (as the previous answer also suggested) before restarting, or delete and rebuild the cluster. </p> <p><strong>update:</strong> After experimentation, I noticed that the above solution will not work the <strong>first time</strong> you run <code>minikube start</code>, as it somehow interferes with minikube's own core-system build and boot-up process. In practice this means that you need to run <code>minikube start</code> at least once without the <code>--docker-opt</code> to build the core system, and then re-run it with <code>--docker-opt</code>. </p>
<p>I am trying to setup mongodb on kubernetes with istio. My statefulset is as follows:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: treeservice namespace: staging spec: serviceName: tree-service-service replicas: 1 selector: matchLabels: app: treeservice template: metadata: labels: app: treeservice spec: containers: - name: mongodb-cache image: mongo:latest imagePullPolicy: Always ports: - containerPort: 30010 volumeMounts: - name: mongodb-cache-data mountPath: /data/db resources: requests: memory: "4Gi" # 4 GB cpu: "1000m" # 1 CPUs limits: memory: "4Gi" # 4 GB cpu: "1000" # 1 CPUs readinessProbe: exec: command: - mongo - --eval "db.stats()" --port 30010 initialDelaySeconds: 60 #wait this period after staring fist time periodSeconds: 30 # polling interval every 5 minutes timeoutSeconds: 60 livenessProbe: exec: command: - mongo - --eval "db.stats()" --port 30010 initialDelaySeconds: 60 #wait this period after staring fist time periodSeconds: 30 # polling interval every 5 minutes timeoutSeconds: 60 command: ["/bin/bash"] args: ["-c","mongod --port 30010 --replSet test"] #bind to localhost volumeClaimTemplates: - metadata: name: mongodb-cache-data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: fast resources: requests: storage: 300Gi </code></pre> <p>however, the pod is not created and I see the following error:</p> <pre><code>kubectl describe statefulset treeservice -n staging Warning FailedCreate 1m (x159 over 1h) statefulset-controller create Pod treeservice-0 in StatefulSet treeservice failed error: Pod "treeservice-0" is invalid: spec.containers[1].env[7].name: Invalid value: "ISTIO_META_statefulset.kubernetes.io/pod-name": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') </code></pre> <p>I assum <code>treeservice</code> is a valid pod name. Am I missing something?</p>
<p>I guess it's due to this issue <a href="https://github.com/istio/istio/issues/9571" rel="nofollow noreferrer">https://github.com/istio/istio/issues/9571</a> which is still open</p> <p>I made it work temporarily using the following:</p> <pre><code>annotations: sidecar.istio.io/inject: "false" </code></pre>
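<p>For reference, the annotation has to go on the pod template inside the StatefulSet (not on the StatefulSet's own metadata), roughly like this:</p>
<pre><code>spec:
  template:
    metadata:
      labels:
        app: treeservice
      annotations:
        sidecar.istio.io/inject: "false"
</code></pre>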
<p>I created 2 simple Spring Boot applications and deployed them in Google Kubernetes using Docker containers, by following this link: <a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#1" rel="nofollow noreferrer">Deploy spring boot in Kubernetes</a></p> <p>Now when I run <code>kubectl get services</code> I can see the 2 services (Spring Boot applications) listed. </p> <p>I understand that using expose I can reserve a static IP for the services. But what I need is <strong>to access the two services using a single IP</strong> (similar to routing), so that the end user only needs to know about one IP address for multiple services. How can I achieve that? I am very new to this, please help. </p>
<p>The easiest way to access your services publicly, using the same IP address, would be to use the <code>ingress-controller</code> and <code>Ingress</code> resource. This will allow you to use dns based hostnames and/or path matching to access each service individually.</p> <p>I have an easy to use starter repository that I recommend you use to get started. Simply run the following commands:</p> <p>Install the ingress-controller:</p> <pre><code>$ git clone https://github.com/mateothegreat/k8-byexamples-ingress-controller $ cd k8-byexamples-ingress-controller $ git submodule update --init $ make install LOADBALANCER_IP=&lt;your static ip here&gt; </code></pre> <p>Create your <code>Ingress</code> resource:</p> <pre><code>$ make issue HOST=&lt;your dns hostname&gt; SERVICE_NAME=&lt;your service&gt; SERVICE_PORT=&lt;the service port&gt; </code></pre> <p>You can repeat this last step as many times as you need. For additional information you can go to <a href="https://github.com/mateothegreat/k8-byexamples-ingress-controller" rel="nofollow noreferrer">https://github.com/mateothegreat/k8-byexamples-ingress-controller</a>.</p>
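<p>If you prefer not to pull in a third-party setup, a plain <code>Ingress</code> resource achieves the same thing on GKE (your services are already of type NodePort, which the GKE ingress controller requires). A minimal sketch, with illustrative paths:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - http:
      paths:
      - path: /one/*
        backend:
          serviceName: hello-java-1
          servicePort: 8080
      - path: /two/*
        backend:
          serviceName: hello-java-2
          servicePort: 8080
</code></pre>
<p>This provisions a single HTTP load balancer IP, and requests are routed to <code>hello-java-1</code> or <code>hello-java-2</code> based on the request path.</p>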
<p>We had a situation where we had to drain a Kubernetes node. <a href="https://i.stack.imgur.com/mNLzW.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/mNLzW.jpg" alt="K8S drained node"></a></p> <p>Is there anything I can do to enable pod scheduling back again? Please suggest.</p>
<p>You've most likely used the <code>kubectl cordon &lt;node name&gt;</code> to set your node as un-scheduleable.</p> <p>To revert this you can run <code>kubectl uncordon &lt;node name&gt;</code> to make it schedulable again.</p> <p>If this doesn't allow you to schedule on this node please provide the outputs of <code>kubectl describe node &lt;node name&gt;</code>.</p> <p>Good luck!</p>
<p>Fairly new to K8s and I am trying to find a way by which I can share a configuration file across pods. Using hostPath mean I have to </p> <ol> <li>Mount an NFS drive</li> <li>Add the configuration file to all of the nodes</li> </ol> <p>If I provision multiple nodes then I have to mount the drive on all nodes. </p> <p>Is there a way (S3) by which I can share the configuration? Or if NFS is the best way to tackle it.</p>
<p>The standard way to share configuration files is using a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer"><code>ConfigMap</code></a>. Essentially once you create one and assign it to a pod spec as a volume it will get injected in all the pods in all the nodes where your pod is running.</p> <p>There are multiple ways of using ConfigMaps described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap" rel="nofollow noreferrer">here</a>.</p> <p>Note, that there's a <a href="https://github.com/kubernetes/kubernetes/issues/19781" rel="nofollow noreferrer">1mb limit</a> on a ConfigMap size. This is an etcd limitation.</p> <p>If you are looking at storing larger files an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">NFS Volume</a> would be an option.</p> <p>IMO, S3 (or any public cloud object storage) doesn't make sense to store configs, since it doesn't have the best performance, meaning you have to go outside your cluster to fetch a file. Also, there's no direct support in Kubernetes for object storage configs. </p>
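<p>As a minimal sketch (file and object names below are placeholders), you would create the ConfigMap from your file:</p>
<pre><code>kubectl create configmap app-config --from-file=app.properties
</code></pre>
<p>and then mount it in the pod template of your workload, so every replica on every node sees the same file:</p>
<pre><code>spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/config   # file shows up as /etc/config/app.properties
  volumes:
  - name: config
    configMap:
      name: app-config
</code></pre>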
<p>While deploying to kubernetes , redis connection is not able to establish connection because of jedis connection refused error.</p> <pre><code>"message": "Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused (Connection refused)", </code></pre> <p>Deployment yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: selector: matchLabels: app: redis replicas: 1 template: metadata: labels: app: redis spec: containers: - name: redis-master image: gcr.io/google_containers/redis:e2e ports: - containerPort: 6379 volumeMounts: - name: redis-storage mountPath: /data/redis volumes: - name: redis-storage --- apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis spec: ports: - port: 6379 selector: app: redis </code></pre> <p>---Sample Jedis code used in project:</p> <pre><code>JedisConnectionFactory jedisConnectionFactoryUpdated() { RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration(); redisStandaloneConfiguration.setHostName("redis-master"); redisStandaloneConfiguration.setPort(6379); JedisClientConfigurationBuilder jedisClientConfiguration = JedisClientConfiguration.builder(); jedisClientConfiguration.connectTimeout(Duration.ofSeconds(60));// 60s connection timeout JedisConnectionFactory jedisConFactory = new JedisConnectionFactory(redisStandaloneConfiguration, jedisClientConfiguration.build()); return jedisConFactory; } </code></pre> <p>Does anybody overcome this issue? TIA.</p>
<p>You need to first update your service to reflect: </p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
</code></pre> <p>Once you have done so you can check whether or not your redis service is up and responding by using nmap. Here is an example using my nmap image:</p> <pre><code>kubectl run --image=appsoa/docker-alpine-nmap --rm -i -t nm -- -Pn 6379 redis-master
</code></pre> <p>Also, make sure that both redis &amp; your spring boot app are deployed to the same namespace. If not, you need to explicitly qualify the hostname with the namespace using dot notation (i.e.: "redis-master.mynamespace").</p>
<p>The kubernetes go client has tons of methods and I can't find how I can get the current CPU &amp; RAM usage of a specific (or all pods).</p> <p>Can someone tell me what methods I need to call to get the current usage for pods &amp; nodes?</p> <p><strong>My NodeList:</strong></p> <pre><code>nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{}) </code></pre> <p>Kubernetes Go Client: <a href="https://github.com/kubernetes/client-go" rel="noreferrer">https://github.com/kubernetes/client-go</a></p> <p>Metrics package: <a href="https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/metrics" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/metrics</a></p> <p>As far as I got the metrics server implements the Kubernetes metrics package in order to fetch the resource usage from pods and nodes, but I couldn't figure out where &amp; how they do it: <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">https://github.com/kubernetes-incubator/metrics-server</a></p>
<p>It is correct that go-client does not have support for metrics type, but in the metrics package there is a pregenerated <a href="https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/metrics/pkg/client/clientset/versioned" rel="noreferrer">client</a> that can be used for fetching metrics objects and assign them right away to the appropriate structure. The only thing you need to do first is to generate a config and pass it to metrics client. So a simple client for metrics would look like this:</p> <pre><code>package main import ( "k8s.io/client-go/tools/clientcmd" metrics "k8s.io/metrics/pkg/client/clientset/versioned" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) func main() { var kubeconfig, master string //empty, assuming inClusterConfig config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig) if err != nil{ panic(err) } mc, err := metrics.NewForConfig(config) if err != nil { panic(err) } mc.MetricsV1beta1().NodeMetricses().Get("your node name", metav1.GetOptions{}) mc.MetricsV1beta1().NodeMetricses().List(metav1.ListOptions{}) mc.MetricsV1beta1().PodMetricses(metav1.NamespaceAll).List(metav1.ListOptions{}) mc.MetricsV1beta1().PodMetricses(metav1.NamespaceAll).Get("your pod name", metav1.GetOptions{}) } </code></pre> <p>Each of the above methods from metric client returns an appropriate structure (you can check those <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/metrics/pkg/apis/metrics/v1beta1/types.go" rel="noreferrer">here</a>) and an error (if any) which you should process according to your requirements. </p>
<p>I have a very simple node.js application (HTTP service), which "talks" to redis. I want to create a deployment and run it with minikube.</p> <p>From my understanding, I need a kubernetes Pod for my app, based on the docker image. Here's my Dockerfile:</p> <pre><code>FROM node:8.9.1 WORKDIR /usr/src/app COPY package*.json ./ RUN npm install COPY . . EXPOSE 8080 CMD ["npm", "start"] </code></pre> <p>I build the docker image with <code>docker build -t my-app .</code></p> <p>Next, I created a Pod definition for my app's Pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-app spec: containers: - name: my-app image: my-app:latest imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>So far, so good. But from now on, I have no clear idea how to proceed with redis:</p> <ol> <li><p>should redis be another Pod, or a Service (in terms of Kubernetes kind)?</p></li> <li><p>How do I reference redis from inside my app? Based on whether redis will be defined as a Pod/Service, how do I obtain a connection URL and port? I read about environment variables being created by Kubernetes, but I am not sure whether these work for Pods or Services.</p></li> <li><p>How do I aggregate both (my app &amp; redis) under single configuration? How do I make sure that redis starts first, then my app (which requires running redis instance), and how do I expose my HTTP endpoints to the "outside world"? I read about Deployments, but I am not sure how to connect these pieces together.</p></li> </ol> <p>Ideally, I would like to have all configurations inside YAML files, so that at the end of the day the whole infrastructure could be started with a single command.</p>
<p>I think I figured out a solution (using a Deployment and a Service).</p> <p>For my deployment, I used two containers (webapp + redis) within one Pod, since it doesn't make sense for a webapp to run without active redis instance, and additionally it connects to redis upon application start. I could be wrong in this reasoning, so feel free to correct me if you think otherwise.</p> <p>Here's my deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app-deployment spec: selector: matchLabels: app: my-app-deployment template: metadata: labels: app: my-app-deployment spec: containers: - name: redis image: redis:latest ports: - containerPort: 6379 volumeMounts: - mountPath: /srv/www name: redis-storage - name: my-app image: my-app:latest imagePullPolicy: Never ports: - containerPort: 8080 volumes: - name: redis-storage emptyDir: {} </code></pre> <p>And here's the Service definition:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-app-service spec: ports: - port: 8080 protocol: TCP type: NodePort selector: app: my-app-deployment </code></pre> <p>I create the deployment with: <code>kubectl create -f deployment.yaml</code> Then, I create the service with <code>kubectl create -f service.yaml</code> I read the IP with <code>minikube ip</code> and extract the port from the output of <code>kubectl describe service my-app-service</code>.</p>
<p>I have a pod in a Kubernetes(k8s) which has a Java application running on it in a docker container. This application produces logs. I want to move these log files to another amazon EC2 machine. Both the machines are linux based. How can this be done. Is it possible to do so using simple <code>scp</code> command?</p>
<p>For moving logs from pods to your log store, you can use one of the following options to ship them continuously, instead of doing a one-time copy:</p> <ul> <li>filebeat</li> <li>fluentd</li> <li>fluentbit</li> </ul> <p><a href="https://github.com/fluent/fluent-bit-kubernetes-logging" rel="nofollow noreferrer">https://github.com/fluent/fluent-bit-kubernetes-logging</a></p> <p><a href="https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd" rel="nofollow noreferrer">https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd</a></p>
<p>To copy a file from a pod to the machine where you run <code>kubectl</code>, you can use the following command:</p> <pre><code>kubectl cp &lt;namespace&gt;/&lt;pod&gt;:/path/inside/container /path/on/your/host
</code></pre>
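<p>From there you can push the file to the EC2 machine with plain scp, for example (user, host and paths are placeholders):</p>
<pre><code>scp /path/on/your/host ec2-user@&lt;ec2-host&gt;:/destination/path
</code></pre>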
<p>How exactly does it help if the recommended labels from Kubernetes 1.12 are added in Helm charts?</p>
<p>Since this question (as revealed in the comments) is about the application-related <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="nofollow noreferrer">recommended labels</a> that are prefixed with <code>app.kubernetes.io</code>, the appropriate place to look is the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="nofollow noreferrer">kubernetes documentation on this</a>. These labels serve to identify various kubernetes objects (Pods, Services, ConfigMaps etc.) as part of a single application. Having a "common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand". The idea is that you should be able to go into tools like the kubernetes dashboard or a monitoring tool and see a list of applications and then drill into the individual objects under the applications. However, 1.12 <a href="https://discuss.kubernetes.io/t/k8s-1-12-release-information/1364" rel="nofollow noreferrer">was only released a month ago</a> so it will take time for the common labels to be adopted and for tools to offer support for querying based on them. Having the labels present in helm charts is a step towards adoption.</p>
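<p>For reference, the recommended labels look like this on an object's metadata (the values are illustrative):</p>
<pre><code>metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
</code></pre>
<p>In a Helm chart these are typically templated, e.g. <code>app.kubernetes.io/instance: {{ .Release.Name }}</code> and <code>app.kubernetes.io/managed-by: {{ .Release.Service }}</code>.</p>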
<p>I created a BackendConfig as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig" rel="nofollow noreferrer">here</a> and a Cloud Armor policy. Then I set the backend config on one of my service's ports. It seems that the ingress ignores the BackendConfig.</p> <p>I use the nginx ingress <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">controller</a>. </p>
<p>By following the official documentation you might run into some issues that have to do with your quota: you have a limit of 9 backend services by default, and the GCE configuration (unlike nginx-ingress) counts each service exposed through an Ingress as a backend service. The best way to troubleshoot it is by issuing</p> <pre><code>kubectl describe ing
</code></pre> <p>This will give you the events you need. The other issue that needs troubleshooting is when you don't have a cluster version that supports BackendConfig.</p>
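<p>Also note that <code>BackendConfig</code> is only interpreted by the GCE ingress controller; an Ingress handled by nginx-ingress will ignore it. For the GCE controller the wiring looks roughly like the sketch below (names are placeholders, and the exact API version may differ with your cluster version, so double-check the linked docs):</p>
<pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  securityPolicy:
    name: my-cloud-armor-policy
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"http": "my-backend-config"}}'
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 8080
</code></pre>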
<p>I have installed two nodes <code>kubernetes 1.12.1</code> in cloud VMs, both behind internet proxy. Each VMs have floating IPs associated to connect over SSH, <code>kube-01</code> is a master and <code>kube-02</code> is a node. Executed export:</p> <pre><code>no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01 </code></pre> <p>before running <code>kubeadm init</code>, but I am getting the following status for <code>kubectl get nodes</code>:</p> <pre><code>NAME STATUS ROLES AGE VERSION kube-01 NotReady master 89m v1.12.1 kube-02 NotReady &lt;none&gt; 29s v1.12.2 </code></pre> <p>Am I missing any configuration? Do I need to add <code>192.168.0.153</code> and <code>192.168.0.25</code> in respective VM's <code>/etc/hosts</code>?</p>
<p>Looks like the pod network is not installed yet on your cluster. You can install Weave, for example, with the command below:</p> <pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
</code></pre> <p>After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.</p> <p>You can install the pod network of your choice. Here is a <a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/" rel="noreferrer">list</a>.</p> <p>After this, check</p> <pre><code>$ kubectl describe nodes
</code></pre> <p>and verify that everything looks fine, like below:</p> <pre><code>Conditions:
  Type            Status
  ----            ------
  OutOfDisk       False
  MemoryPressure  False
  DiskPressure    False
  Ready           True
Capacity:
 cpu:     2
 memory:  2052588Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  1950188Ki
 pods:    110
</code></pre> <p>Next, SSH to the node that is not ready and inspect the kubelet logs. The most likely errors are related to certificates and authentication.</p> <p>You can also use journalctl on systemd to check kubelet errors.</p> <pre><code>$ journalctl -u kubelet
</code></pre>
<p>During development of a custom K8s flexvolume, I observed that containers are able to view the complete list of block devices present on the K8s minion (host) they are running on. Basically, the "lsblk" output from the host OS is also visible when "lsblk" is executed inside the containers. Also, if container c1 has flexvolume v1 assigned and c2 has v2, and both c1 and c2 run on the same K8s host, then c1's OS can see both v1 and v2 in the "lsblk" output. c1 only has access to v1 and not v2, which is as expected, but for security reasons we do not want c1's OS to even view any block devices being accessed by c2, or specifically by the K8s host. K8s is using Docker as the containerization service.</p> <p>Can anyone please guide me on how to achieve the expected configuration? Is a K8s Namespace the way to go? If yes, can you provide an example? Thanks in advance.</p>
<ul> <li>Persistent volumes and host volumes/paths are not namespaced</li> <li>However, persistent volume claims (PVCs) belong to a single namespace</li> <li>You may consume the storage through PVCs so that each namespace has access to its own PVCs</li> <li>Separate containers/pods by namespaces</li> <li>Don't use host volumes in production</li> <li>Use a PodSecurityPolicy to allow specific volume types and deny others such as hostPath (a sketch follows below)</li> </ul> <blockquote> <p>Additionally, a PV that is already bound to one PVC cannot be bound to another, regardless of namespace. This means that even if a user attempts to craft a PVC which claims an existing PV from a different namespace, it will fail. When using Trident, the PV and PVC are destroyed at the same time by default. This behavior can be changed so that PVs are retained, but a PV that was bound to a PVC once and then unbound can never be bound again.</p> </blockquote> <p><a href="https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/" rel="nofollow noreferrer">https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/</a></p> <p>Use pod security policies to restrict pods from accessing host paths and to allow only specific volume types and PVs:</p> <p><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems</a></p> <pre><code>AllowedHostPaths - See Volumes and file systems.

Volumes and file systems

Volumes - Provides a whitelist of allowed volume types. The allowable
values correspond to the volume sources that are defined when creating
a volume. For the complete list of volume types, see Types of Volumes.
Additionally, * may be used to allow all volume types.

The recommended minimum set of allowed volumes for new PSPs are:

configMap
downwardAPI
emptyDir
persistentVolumeClaim
secret
projected
</code></pre>
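<p>A minimal sketch of such a PodSecurityPolicy (names are illustrative) that whitelists only a few volume types and leaves out <code>hostPath</code>:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-volumes
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - secret
  - projected
</code></pre>
<p>Pods admitted under this policy (bound to their service accounts via RBAC) can then only use the listed volume types, so hostPath mounts are rejected.</p>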
<p>I'm ramping up on Docker and k8s, and am running into an issue with a 3rd-party application I'm containerizing: the application is configured via flat text files, with no environment-variable overrides. </p> <p>What is the best way to dynamically configure this app? I'm immediately leaning towards a sidecar container that accepts environment variables, writes the text config file to a shared volume in the pod, and then the application container reads the config file from there. Is this correct?</p> <p>What is the best practice here?</p>
<p>Create a <code>ConfigMap</code> with this configuration file. Then, mount the <code>ConfigMap</code> into the pod. This will create the configuration file in mounted directory. Then, you can use this configuration file as usual.</p> <p>Here are related example: </p> <ol> <li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">Create ConfigMap from file</a>.</p></li> <li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Mount ConfigMap as volume</a>.</p></li> </ol>
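<p>A minimal sketch, assuming the application expects its config at <code>/etc/myapp/app.conf</code> (all names, keys and values here are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.conf: |
    listen_port=8080
    log_level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    volumeMounts:
    - name: config
      mountPath: /etc/myapp/app.conf
      subPath: app.conf
  volumes:
  - name: config
    configMap:
      name: myapp-config
</code></pre>
<p>Using <code>subPath</code> places just that one file at the target path without hiding the rest of the directory shipped in the image. One caveat: subPath mounts are not refreshed automatically when the ConfigMap is updated, so the pod has to be restarted to pick up changes.</p>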
<p>I deployed two services in a cluster in google cloud.</p> <p>When I run: <code>kubectl get services</code> I get-></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-java-1 NodePort 10.7.254.204 &lt;none&gt; 8080:31848/TCP 21m hello-java-2 NodePort 10.7.246.52 &lt;none&gt; 8080:30624/TCP 19m kubernetes ClusterIP 10.7.240.1 &lt;none&gt; 443/TCP 23m </code></pre> <p>Now, I followed as per google cloud docs: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">Ingress</a> and configured the fanout-ingress as:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: fanout-ingress spec: rules: - http: paths: - path: /product/* backend: serviceName: hello-java-1 servicePort: 8080 - path: /list/* backend: serviceName: hello-java-2 servicePort: 8080 </code></pre> <p>Now:</p> <pre><code>$kubectl get ingress fanout-ingress NAME HOSTS ADDRESS PORTS AGE fanout-ingress * 35.190.55.204 80 17m </code></pre> <p>I get these results.</p> <p>I checked the command: <code>kubectl describe ingress fanout-ingress</code></p> <p>The output is:</p> <pre><code> * /product/* hello-java-1:8080 (&lt;none&gt;) /list/* hello-java-2:8080 (&lt;none&gt;) Annotations: ingress.kubernetes.io/backends: {"k8s-be-30624--e761000d52fd1c80":"HEALTHY","k8s-be-31726--e761000d52fd1c80":"HEALTHY","k8s-be-31848--e761000d52fd1c80":"HEALTHY"} ingress.kubernetes.io/forwarding-rule: k8s-fw-default-fanout-ingress--e761000d52fd1c80 ingress.kubernetes.io/target-proxy: k8s-tp-default-fanout-ingress--e761000d52fd1c80 ingress.kubernetes.io/url-map: k8s-um-default-fanout-ingress--e761000d52fd1c80 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ADD 18m loadbalancer-controller default/fanout-ingress Normal CREATE 17m loadbalancer-controller ip: 35.190.55.204 Normal Service 8m (x4 over 17m) loadbalancer-controller no user specified default backend, using system default </code></pre> <p><strong>Now when I access <code>http://35.190.55.204/product/home</code> I get spring whitelabel error.. but \home is defined in the application! Why this happens?</strong></p>
<p>I got the issue! For the Ingress path rule <code>/product/*</code>, <strong>all the URL mappings of our first service</strong> (the request paths in the Spring application) should start with <code>/product/</code></p> <blockquote> <p>ex: /product/list, /product/add, /product/delete etc</p> </blockquote> <p>Likewise, for the Ingress path rule /list/*, <strong>all the URL mappings in our second service</strong> (the request paths in the Spring application) should start with <code>/list/</code></p> <blockquote> <p>ex: /list/sort, /list/add, /list/delete etc</p> </blockquote>
<p>I am using Stackdriver Logging for Python and a Python logger at the same time. I am using the function <code>google.cloud.logging.logger.log_struct</code> (<a href="https://gcloud-python.readthedocs.io/en/latest/logging/logger.html" rel="nofollow noreferrer">https://gcloud-python.readthedocs.io/en/latest/logging/logger.html</a>) to log a JSON to the Stackdriver.</p> <p>I am able to view the logs as expected in the log viewer with the selected resource <code>Global</code> when I am running my script using a <code>Google Compute Engine VM instance</code>. The struct I am logging is recorded properly in <code>jsonPayload</code>.</p> <p>However, when the logging is coming from a <code>Google Kubernetes Engine</code>, the logger view does not show the structs I passed, but rather what is printed on <code>stdout</code>. How do I make sure I observe the same behaviour from the <code>Google Compute Engine VM instance</code> and a <code>Google Kubernetes Engine</code>?</p> <p>This is a snippet showing how I am doing the logging:</p> <pre><code>import google.cloud.logging import logging logging_client = google.cloud.logging.Client() # connects the logger to the root logging handler cloud_logger = logging_client.logger('email_adaptor') formatter = logging.Formatter( '%(asctime)s - %(levelname)s - %(message)s - %(lineno)d - %(filename)s') # get the logger a name logger = logging.getLogger('email_adaptor') # set a level for the logger logger.setLevel(logging.DEBUG) # make the logger write on stdout stdout_alarm_log = logging.StreamHandler(sys.stdout) stdout_alarm_log.setFormatter(formatter) logger.addHandler(stdout_alarm_log) struct = {'message':'Processed Alarm', 'valid': True} cloud_logger.log_struct(struct, severity='INFO') logger.info(str(struct)) </code></pre> <p>This is an example of what I get on the STDOUT on both the <code>VM instance</code> and the <code>Kubernetes Engine</code>:</p> <pre><code>2018-10-26 12:30:20,008 - INFO - Processed Alarm {'valid': True} - 79 - log.py INFO:email_adaptor:Processed Alarm {'valid': True} </code></pre> <p>This is what I see under the resource <code>Global</code> in the Google Log Viewer (the logs are ones from my VM instance, and do not correspond to the example I gave in the snippet code): <a href="https://i.stack.imgur.com/yH452.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yH452.png" alt="enter image description here"></a></p> <p>This is what I see under the resource <code>Google Kubernetes Engine</code>: The struct do not show, instead I see what is printed on <code>STDOUT</code>: <a href="https://i.stack.imgur.com/CYNl7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CYNl7.png" alt="enter image description here"></a></p>
<p>The Stackdriver library calls in the code write against “global” and the structured log entries from your GKE containers will show under “Global” resource.</p> <p>The “GKE Container” resource will show logging written to stdout and stderr, which are ingested by default.</p> <p>To write structured logs to stdout/stderr and access them in Stackdriver, the only structured format that the logging agent will accept is JSON. You must configure your logger to output JSON for it to be picked up as a structured log entry. More information at <a href="https://cloud.google.com/logging/docs/agent/configuration#process-payload" rel="nofollow noreferrer">https://cloud.google.com/logging/docs/agent/configuration#process-payload</a> .</p>
<p>I explain my case.</p> <p>I have three pods running in my Kubernetes. In one pod, there is a Flask framework running. In the two other pods, there is a Java application with a REST API. ( The Java application is the same on the two pods ).</p> <p>My pod with Flask has to ask the two pods with the Java application <strong>individually</strong> using HTTP requests.</p> <p>I have created a service that points toward my two pods with Java Application. When my pod with Flask uses the service to ask the two others, I just have one response.</p> <p>How could I target my pods individually ? Is it possible in my case to get endpoints from the pod with Flask ? I could have x pods with my Java Application.</p> <p>Best regards,</p> <p>Nico.</p>
<p>IMO, the right way to do this is to have 3 <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a>, each managing one of your pods individually with a <code>replica</code> count of <code>1</code>:</p> <ul> <li>One for your Flask app</li> <li>One for your Java app</li> <li>Another one for the same Java app. </li> </ul> <p>If the Java apps are the only ones that you need to connect to from the Flask app, you can expose those deployments with 2 different <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> that will manage the endpoints. Services are supposed to manage the endpoints unless you are trying to connect to an <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">external</a> endpoint.</p>
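<p>A sketch of one Java deployment and its service (names, image and port are illustrative; repeat with different names for the second Java instance):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app-1
  template:
    metadata:
      labels:
        app: java-app-1
    spec:
      containers:
      - name: java-app
        image: my-java-app:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-1
spec:
  selector:
    app: java-app-1
  ports:
  - port: 8080
</code></pre>
<p>The Flask pod can then address each instance individually through its own service DNS name, e.g. <code>http://java-app-1:8080</code> and <code>http://java-app-2:8080</code> within the same namespace.</p>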
<p>I'm attempting to configure a Horizontal Pod Autoscaler to scale a deployment based on the duty cycle of attached GPUs.</p> <p>I'm using GKE, and my Kubernetes master version is 1.10.7-gke.6 .</p> <p>I'm working off the tutorial at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling</a> . In particular, I ran the following command to set up custom metrics:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml </code></pre> <p>This appears to have worked, or at least I can access a list of metrics at /apis/custom.metrics.k8s.io/v1beta1 .</p> <p>This is my YAML:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: images-srv-hpa spec: minReplicas: 1 maxReplicas: 10 metrics: - type: External external: metricName: container.googleapis.com|container|accelerator|duty_cycle targetAverageValue: 50 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: images-srv-deployment </code></pre> <p>I believe that the metricName exists because it's listed in /apis/custom.metrics.k8s.io/v1beta1 , and because it's described on <a href="https://cloud.google.com/monitoring/api/metrics_gcp" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_gcp</a> .</p> <p>This is the error I get when describing the HPA:</p> <pre><code> Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetExternalMetric 18s (x3 over 1m) horizontal-pod-autoscaler unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API Warning FailedComputeMetricsReplicas 18s (x3 over 1m) horizontal-pod-autoscaler failed to get container.googleapis.com|container|accelerator|duty_cycle external metric: unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API </code></pre> <p>I don't really know how to go about debugging this. Does anyone know what might be wrong, or what I could do next?</p>
<p>You are using ‘type: External’. For External Metrics List, you need to use ‘kubernetes.io’ instead of ‘container.googleapis.com’ [1]</p> <p>Replace the ‘metricName:container.googleapis.com|container|accelerator|duty_cycle’ </p> <p>with </p> <p>‘metricName: kubernetes.io|container|accelerator|duty_cycle’</p> <p>[1]<a href="https://cloud.google.com/monitoring/api/metrics_other#other-kubernetes.io" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_other#other-kubernetes.io</a></p>
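<p>With that change, the <code>metrics</code> block from the HPA in the question would look something like:</p>
<pre><code>metrics:
- type: External
  external:
    metricName: kubernetes.io|container|accelerator|duty_cycle
    targetAverageValue: 50
</code></pre>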
<p>Is it possible to create a Kubernetes cluster admin without the ability to read namespace secrets? </p> <p>I know you can create a ClusterRole and list every single resource and omit secret but seems unintuitive. </p> <p>Can you use Aggregated ClusterRoles to remove a permission? so using ClusterRole cluster-admin and have a role that uses:</p> <pre><code>rules: - apiGroups: [""] resources: ["secrets"] verbs: [""] </code></pre>
<p>Not really. <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles" rel="nofollow noreferrer"><code>Aggregated ClusterRoles</code></a> are a <a href="https://en.wikipedia.org/wiki/Union_(set_theory)" rel="nofollow noreferrer">set union</a> of several <code>ClusterRole</code>s. To get the behavior you want you would need a <a href="https://en.wikipedia.org/wiki/Complement_(set_theory)" rel="nofollow noreferrer">set subtraction</a>: the cluster-admin role minus the rules that you have defined. <a href="https://github.com/kubernetes/kubernetes/issues/70387" rel="nofollow noreferrer">It's not supported in K8s as of this writing</a>.</p>
<p>I followed this great tutorial to install Jenkins on my GKE kubernetes cluster, and start playing with CI. Everything went fine until I tried to use the docker plugin in my pipeline.</p> <p>Here is a link to the issue I added to the github project I'm refering to:</p> <p><a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/65" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/65</a></p> <p>When I try to use the docker plugin like this:</p> <pre><code>stage "Prepare environment" docker.image('node:4.1.2').inside { print "inside a node server" sh("echo test"); //sh("npm install"); } </code></pre> <p>I got the following error:</p> <pre><code>[Pipeline] stage (Prepare environment) Using the ‘stage’ step without a block argument is deprecated Entering stage Prepare environment Proceeding [Pipeline] sh [play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA] Running shell script + docker inspect -f . node:4.1.2 . [Pipeline] withDockerContainer $ docker run -t -d -u 0:0 -w /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA -v /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA:/root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA:rw -v /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA@tmp:/root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat node:4.1.2 [Pipeline] // withDockerContainer [Pipeline] } [Pipeline] // node [Pipeline] End of Pipeline java.io.IOException: Failed to run image 'node:4.1.2'. Error: docker: Error response from daemon: mkdir /root/workspace: read-only file system. 
at org.jenkinsci.plugins.docker.workflow.client.DockerClient.run(DockerClient.java:125) at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:175) at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:184) at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:126) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:18) at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:122) at org.jenkinsci.plugins.docker.workflow.Docker.node(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:63) at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:116) at WorkflowScript.run(WorkflowScript:12) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82) at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46) at com.cloudbees.groovy.cps.Next.step(Next.java:58) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Finished: FAILURE </code></pre> <p>Basically this:</p> <pre><code>java.io.IOException: Failed to run image 'node:4.1.2'. Error: docker: Error response from daemon: mkdir /root/workspace: read-only file system. </code></pre> <p>There is more investigation in the link I specified above. Using Docker-in-Docker seems to be tricky in this situation. I'm new to Jenkins, but I guess there must be a way to make the docker plugin work in this context (GKE Kubernetes cluster).</p> <p>Thanks a lot in advance</p> <p>Philippe</p>
<p>I have stayed far away from Docker "abstractions" through Jenkins plugins. You can simply execute shell commands directly to perform your Docker workload(s), such as:</p> <pre><code>sh "docker build -t $SOME_NAME:$GIT_COMMIT_SHA ."
sh "docker run .."
</code></pre> <p>or, closer to your example (note that <code>-it</code> is dropped because there is no TTY in a pipeline step, and the <code>tail</code> just keeps the container alive so you can <code>exec</code> into it):</p> <pre><code>sh "docker run -d --name node node:4.1.2 tail -f /dev/null"
sh "docker exec node /bin/sh -c 'npm install'"
</code></pre>
<p>Kubernetes supports <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#finalizers" rel="nofollow noreferrer">Finalizers in CRs</a> to prevent hard deletion. I had a hard time finding sample code, though. Can someone please point me to a real code snippet?</p>
<p>This sample repository shows a demo use of <code>Finalizer</code> and <code>Initializer</code>. Finalizers are used here for garbage collection.</p> <p>Repository: <a href="https://github.com/hossainemruz/k8s-initializer-finalizer-practice" rel="noreferrer">k8s-initializer-finalizer-practice</a></p> <p>Here, I have created a custom controller for pods, just like Deployment.</p> <ol> <li>I have used an <code>Initializer</code> to add a <code>busybox</code> sidecar or a <code>finalizer</code> to the underlying pods. See <a href="https://github.com/hossainemruz/k8s-initializer-finalizer-practice/blob/a63a7a543c747df3f37399876780cdf4f74a7d42/pkg/controller/deploymentcontroller.go#L343" rel="noreferrer">here</a>.</li> <li>When a <code>CustomDeployment</code> CR is deleted, Kubernetes sets the <code>DeletionTimestamp</code> but does not delete the object while it still has a finalizer. The controller then checks for the finalizer; if it is present, it deletes the underlying pods and removes the finalizer, after which the CR is actually deleted. See <a href="https://github.com/hossainemruz/k8s-initializer-finalizer-practice/blob/a63a7a543c747df3f37399876780cdf4f74a7d42/pkg/controller/deploymentcontroller.go#L315" rel="noreferrer">here</a>.</li> </ol>
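<p>For a quick mental model, a finalizer is just an entry in the object's metadata; a minimal sketch of what it looks like on a CR (the API group and finalizer name here are hypothetical):</p> <pre><code>apiVersion: example.com/v1
kind: CustomDeployment
metadata:
  name: sample
  finalizers:
  - example.com/cleanup-pods     # hypothetical finalizer name
spec:
  replicas: 2
</code></pre> <p>While that entry exists, a delete only sets <code>metadata.deletionTimestamp</code>; the controller is expected to do its cleanup and then remove the entry so the object can actually go away.</p>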
<p>Can I enable, at the cluster level, default seccomp and AppArmor profiles for pods, or do I need to write my own admission controller to insert the annotations into the objects?</p> <p>Leaving it to users is not an option.</p>
<p>There is already the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer"><code>PodSecurityPolicy</code></a> object which essentially is an implementation of an admission controller. You can control the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp" rel="nofollow noreferrer"><code>seccomp</code></a> and <a href="https://kubernetes.io/docs/tutorials/clusters/apparmor/#podsecuritypolicy-annotations" rel="nofollow noreferrer"><code>apparmor</code></a> profiles using annotations in the PodSecurityPolicy:</p> <p>For example (as described in the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies" rel="nofollow noreferrer">docs</a>), notice the 'default' in the annotation:</p> <pre><code>apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restricted annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default' apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' spec: ... </code></pre> <p>Note that <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp" rel="nofollow noreferrer">Seccomp</a> is alpha and <a href="https://kubernetes.io/docs/tutorials/clusters/apparmor/" rel="nofollow noreferrer">Apparmor</a> is beta as of this writing.</p>
<p>I want to copy an S3 bucket onto the Kubernetes nodes as a DaemonSet, so that any new node also gets a copy of the bucket as soon as it is launched. I prefer copying S3 to the node rather than directly into each pod via the AWS API, because that would mean multiple calls (one per pod that needs the data) and would slow down every pod launch while the content is copied.</p>
<p>Assuming that your S3 content is static and doesn't change often, I believe that rather than a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><code>DaemonSet</code></a> it makes more sense to use a one-time <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> to copy the whole S3 bucket to a local disk (a sketch is shown below). It's not clear how you would signal the kube-scheduler that your node is not ready until the S3 bucket is fully copied, but perhaps you can <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer"><code>taint</code></a> your node before the job is finished and remove the taint after the job finishes.</p> <p>Note also that S3 is inherently slow and meant for processing (reading/writing) single files at a time, so if your bucket has a large amount of data it will take a long time to copy to the node disk.</p> <p>If your S3 content is dynamic (constantly changing) then it would be more challenging, since you would have to keep the files in sync. Your apps would probably need a cache architecture where they first look on the local disk and, if the files are not there, make a request to S3.</p>
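<p>A minimal sketch of such a Job, assuming an image that contains the AWS CLI (<code>amazon/aws-cli</code> is used here as an assumption), a hypothetical bucket name and a hostPath target directory; credentials still have to come from somewhere, e.g. a node IAM role or a Secret:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: s3-sync
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: sync
        image: amazon/aws-cli             # assumption: any image with the aws CLI works
        command: ["aws", "s3", "sync", "s3://my-bucket", "/data"]   # hypothetical bucket
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        hostPath:
          path: /mnt/s3-cache             # hypothetical directory on the node
          type: DirectoryOrCreate
</code></pre>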
<p>I want to run a mongodb command in Kubernetes deployment. In my yaml file, I want to run the following:</p> <pre><code>command: ["mongo --port ${MONGODBCACHE_PORT} --host ${MONGODBCACHE_BIND_IP} \ --eval "rs.initiate('{ _id: \"test\", members: [ { _id: 0, host: \"${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}\" },]}')" &amp;&amp; \ ./mycommand "] </code></pre> <p>I checked that the environment variables are present correctly. How do I escape the characters when running this command?</p>
<p>Wrap the command in a shell: put <code>/bin/bash -c</code> in <code>command</code> and pass the whole mongo invocation as a single string in <code>args</code>. That way bash expands the <code>${...}</code> environment variables, and using single quotes for the JavaScript strings keeps the escaping manageable. Like,</p> <pre><code>command: ["/bin/bash", "-c"]
args:
  - &gt;-
    mongo --port ${MONGODBCACHE_PORT} --host ${MONGODBCACHE_BIND_IP}
    --eval "rs.initiate({ _id: 'test', members: [ { _id: 0, host: '${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}' } ] })"
    &amp;&amp; ./mycommand
</code></pre> <p>Note that with <code>bash -c</code> only the first element of <code>args</code> is executed as the script, so everything has to go into that one string. Hope this will help.</p>
<p>I'm trying to run a socket.io app using Google Container Engine. I've setup the ingress service which creates a Google Load Balancer that points to the cluster. If I have one pod in the cluster all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster and I suspect that is the problem with all the polling and upgrading socket.io is doing.</p> <p>I setup the load balancer to use sticky sessions based on IP. </p> <p>Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?</p> <p>How can I set it up to ensure session affinity to a particular POD in the cluster?</p> <p>NOTE: I manually set the sessionAffinity on the cloud load balancer. <a href="https://i.stack.imgur.com/oZcNd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oZcNd.png" alt="enter image description here"></a></p> <p>Here would be my ingress yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.global-static-ip-name: my-static-ip spec: backend: serviceName: my-service servicePort: 80 </code></pre> <h1>Service</h1> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service labels: app: myApp spec: sessionAffinity: ClientIP type: NodePort ports: - port: 80 targetPort: http-port selector: app: myApp </code></pre>
<p>First off, you need to set "sessionAffinity" at the <code>Ingress</code> resource level, not your load balancer (this is only related to a specific node in the target group):</p> <p>Here is an example <code>Ingress</code> spec:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-test-sticky annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/session-cookie-name: "route" nginx.ingress.kubernetes.io/session-cookie-hash: "sha1" spec: rules: - host: $HOST http: paths: - path: / backend: serviceName: $SERVICE_NAME servicePort: $SERVICE_PORT </code></pre> <p>Second, you probably need to tune your <code>ingress-controller</code> to allow longer connection times. Everything else, by default, supports websocket proxying.</p> <p>If you are still having issues please provide outputs for <code>kubectl describe -oyaml pod/&lt;ingress-controller-pod&gt;</code> and <code>kubectl describe -oyaml ing/&lt;your-ingress-name&gt;</code></p> <p>Hope this helps, good luck!</p>
<p>Is it possible to put multiple grpc services behind a google Cloud Loadbalancer (specifically one running inside container engine) to allow load balancing of multiple grpc backend services?</p>
<p>This is currently not supported (see: <a href="https://cloud.google.com/load-balancing/docs/backend-service#HTTP2-limitations" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/backend-service#HTTP2-limitations</a>).</p> <p>Your other option is to use an L4 LoadBalancer.</p>
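<p>For the L4 route, a plain Service of type <code>LoadBalancer</code> in front of each gRPC backend works; a minimal sketch (names and ports are hypothetical):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: grpc-service           # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: grpc-server           # hypothetical label on your pods
  ports:
  - port: 443
    targetPort: 50051          # adjust to whatever port your gRPC server listens on
</code></pre> <p>With L4 the load balancer just forwards TCP, so TLS/HTTP2 termination (and per-call balancing, if you need it) happens in the pods themselves.</p>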
<p>When I try to </p> <pre><code>kubectl create -f cloudflare-argo-rolebinding.yml </code></pre> <p>this RoleBinding</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cloudflare-argo-rolebinding namespace: default subjects: - kind: ServiceAccount name: cloudflare-argo apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: cloudflare-argo-role apiGroup: rbac.authorization.k8s.io </code></pre> <p>I get this error :</p> <pre><code>The RoleBinding "cloudflare-argo-rolebinding" is invalid: subjects[0].apiGroup: Unsupported value: "rbac.authorization.k8s.io": supported values: "" </code></pre> <p>Any idea ? I'm on DigitalOcean using their new Kubernetes service if it helps.</p>
<p>I think the problem is using the wrong <code>apiGroup</code> for the subject: for a <code>ServiceAccount</code> it must be the empty string (the core API group), not <code>rbac.authorization.k8s.io</code>.</p> <pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloudflare-argo-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: cloudflare-argo
  # apiGroup is "" (core/v1) for ServiceAccount
  apiGroup: ""
roleRef:
  kind: Role
  name: cloudflare-argo-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>I'm trying to create a kafka cluster deployed on kubernetes. I have the following configuration:</p> <p>Kafka service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka labels: app: kafka namespace: kafka spec: ports: - name: kafka-port port: 9093 protocol: TCP selector: app: kafka type: NodePort </code></pre> <p>Kafka StatefullSet:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: kafka namespace: kafka labels: app: kafka spec: replicas: 1 serviceName: kafka podManagementPolicy: Parallel updateStrategy: type: RollingUpdate selector: matchLabels: app: kafka template: metadata: labels: app: kafka spec: containers: - name: kafka imagePullPolicy: Always image: wurstmeister/kafka ports: - containerPort: 9093 command: - sh - -c - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \ --override listeners=PLAINTEXT://:9093 \ --override zookeeper.connect=zookeeper:2181 \ --override log.dir=/var/lib/kafka \ --override auto.create.topics.enable=true \ --override auto.leader.rebalance.enable=true \ --override background.threads=10 \ --override compression.type=producer \ --override delete.topic.enable=false \ --override leader.imbalance.check.interval.seconds=300 \ --override leader.imbalance.per.broker.percentage=10 \ --override log.flush.interval.messages=9223372036854775807 \ --override log.flush.offset.checkpoint.interval.ms=60000 \ --override log.flush.scheduler.interval.ms=9223372036854775807 \ --override log.retention.bytes=-1 \ --override log.retention.hours=168 \ --override log.roll.hours=168 \ --override log.roll.jitter.hours=0 \ --override log.segment.bytes=1073741824 \ --override log.segment.delete.delay.ms=60000 \ --override message.max.bytes=1000012 \ --override min.insync.replicas=1 \ --override num.io.threads=8 \ --override num.network.threads=3 \ --override num.recovery.threads.per.data.dir=1 \ --override num.replica.fetchers=1 \ --override offset.metadata.max.bytes=4096 \ --override offsets.commit.required.acks=-1 \ --override offsets.commit.timeout.ms=5000 \ --override offsets.load.buffer.size=5242880 \ --override offsets.retention.check.interval.ms=600000 \ --override offsets.retention.minutes=1440 \ --override offsets.topic.compression.codec=0 \ --override offsets.topic.num.partitions=50 \ --override offsets.topic.replication.factor=1 \ --override offsets.topic.segment.bytes=104857600 \ --override queued.max.requests=500 \ --override quota.consumer.default=9223372036854775807 \ --override quota.producer.default=9223372036854775807 \ --override replica.fetch.min.bytes=1 \ --override replica.fetch.wait.max.ms=500 \ --override replica.high.watermark.checkpoint.interval.ms=5000 \ --override replica.lag.time.max.ms=10000 \ --override replica.socket.receive.buffer.bytes=65536 \ --override replica.socket.timeout.ms=30000 \ --override request.timeout.ms=30000 \ --override socket.receive.buffer.bytes=102400 \ --override socket.request.max.bytes=104857600 \ --override socket.send.buffer.bytes=102400 \ --override unclean.leader.election.enable=true \ --override zookeeper.session.timeout.ms=6000 \ --override zookeeper.set.acl=false \ --override broker.id.generation.enable=true \ --override connections.max.idle.ms=600000 \ --override controlled.shutdown.enable=true \ --override controlled.shutdown.max.retries=3 \ --override controlled.shutdown.retry.backoff.ms=5000 \ --override controller.socket.timeout.ms=30000 \ --override default.replication.factor=1 \ --override 
fetch.purgatory.purge.interval.requests=1000 \ --override group.max.session.timeout.ms=300000 \ --override group.min.session.timeout.ms=6000 \ --override inter.broker.protocol.version=0.11.0 \ --override log.cleaner.backoff.ms=15000 \ --override log.cleaner.dedupe.buffer.size=134217728 \ --override log.cleaner.delete.retention.ms=86400000 \ --override log.cleaner.enable=true \ --override log.cleaner.io.buffer.load.factor=0.9 \ --override log.cleaner.io.buffer.size=524288 \ --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \ --override log.cleaner.min.cleanable.ratio=0.5 \ --override log.cleaner.min.compaction.lag.ms=0 \ --override log.cleaner.threads=1 \ --override log.cleanup.policy=delete \ --override log.index.interval.bytes=4096 \ --override log.index.size.max.bytes=10485760 \ --override log.message.timestamp.difference.max.ms=9223372036854775807 \ --override log.message.timestamp.type=CreateTime \ --override log.preallocate=false \ --override log.retention.check.interval.ms=300000 \ --override max.connections.per.ip=2147483647 \ --override num.partitions=1 \ --override producer.purgatory.purge.interval.requests=1000 \ --override replica.fetch.backoff.ms=1000 \ --override replica.fetch.max.bytes=1048576 \ --override replica.fetch.response.max.bytes=10485760 \ --override reserved.broker.max.id=1000 " volumeMounts: - name: datadir mountPath: /var/lib/kafka readinessProbe: exec: command: - sh - -c - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093" volumeClaimTemplates: - metadata: name: datadir spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 10Gi </code></pre> <p>Zookeeper service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: zookeeper namespace: kafka labels: name: zookeeper spec: ports: - name: client port: 2181 protocol: TCP - name: follower port: 2888 protocol: TCP - name: leader port: 3888 protocol: TCP selector: name: zookeeper </code></pre> <p>Zookeeper Deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: zookeeper namespace: kafka spec: replicas: 1 template: metadata: labels: name: zookeeper spec: containers: - env: - name: ZOOKEEPER_ID value: "1" - name: ZOOKEEPER_SERVER_1 value: zookeeper name: zookeeper image: digitalwonderland/zookeeper ports: - containerPort: 2181 </code></pre> <p>With this configuration all works well. But I want to add more replicas of kafka. If i try to add another replica, I receive this error:</p> <pre><code>Error connecting to node kafka-1.kafka.kafka.svc.cluster.local:9093 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient) java.io.IOException: Can't resolve address: kafka-1.kafka.kafka.svc.cluster.local:9093 </code></pre> <p>I can’t figure out where "kafka-1.kafka.kafka.svc.cluster.local" is set and if I can come from that. I can see a solution would be to create a service for every pod but with this approach the cluster doesn’t scale easily.</p> <p>Can I configure my kafka stateful to discover each other without any other service?</p> <p>EDIT:</p> <p>The configuration with one replica works only from cluster, if I'm trying to access broker from outside I receive the exception:</p> <pre><code>Can't resolve address: kafka-0.kafka.kafka.svc.cluster.local:9093 </code></pre> <p>Thanks</p>
<p>You need to set the <code>.metadata.name</code> in the Service definition and the <code>.spec.serviceName</code> in the StatefulSet definition to the same name, and the Service must be a headless service (<code>clusterIP: None</code>). With that in place, <code>kafka-0.kafka.kafka.svc.cluster.local</code> resolves to the pod IP, and the same goes for every other replica.</p>
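<p>A sketch of what the headless Service could look like for the setup in the question (same namespace and labels as above; treat it as an illustration, not a drop-in file):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kafka
  labels:
    app: kafka
spec:
  clusterIP: None        # headless: gives each StatefulSet pod a stable DNS record
  selector:
    app: kafka
  ports:
  - name: kafka-port
    port: 9093
    protocol: TCP
</code></pre> <p>Each broker then gets a stable name of the form <code>kafka-&lt;ordinal&gt;.kafka.kafka.svc.cluster.local</code>. These names only resolve inside the cluster, so for the external access mentioned in the edit you still need a separate NodePort/LoadBalancer Service (and matching advertised listeners).</p>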
<p>Has anyone tried deploying Cassandra (POC) on GCP using kubernetes (not GKE). If so can you please share info on how to get it working?</p>
<p>You could start by looking at <a href="https://developer.ibm.com/patterns/deploy-a-scalable-apache-cassandra-database-on-kubernetes/" rel="nofollow noreferrer">IBM's Scalable-Cassandra-deployment-on-Kubernetes</a>.</p> <p>For seeds discovery you can use a headless service, similar to this <a href="http://node.mu/2015/09/18/multi-node-cassandra-cluster-made-easy-with-kubernetes/" rel="nofollow noreferrer">Multi-node Cassandra Cluster Made Easy with Kubernetes</a>.</p> <p>Some difficulties:</p> <ul> <li><a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">fast local storage for K8s is still in beta</a>; of course, you can use what k8s already has; there are <a href="https://lists.apache.org/thread.html/578d5f801cbf2d845c6ef2941149ee1065ef33d09400d08067ac2d53@%3Cuser.cassandra.apache.org%3E" rel="nofollow noreferrer">some users reporting</a> that they use Ceph RBD with 8 C* nodes each of them having 2TB of data on K8s.</li> <li>at some point in time you will realize that you need a C* operator - here is some good startup - <a href="https://github.com/instaclustr/cassandra-operator" rel="nofollow noreferrer">Instaclustr's Cassandra Operator</a> and <a href="https://www.pipersgates.net/2018/10/23/cassandra-operator-for-kubernetes/" rel="nofollow noreferrer">Pantheon Systems' Cassandra Operator</a> </li> <li>you need a way to <a href="https://medium.com/@marko.luksa/graceful-scaledown-of-stateful-apps-in-kubernetes-2205fc556ba9" rel="nofollow noreferrer">scale in gracefully stateful applications</a> (should be also covered by the operator; this is a solution if you don't want an operator, but you still need to use a controller).</li> </ul> <p>You could also check the <a href="https://lists.apache.org/[email protected]:lte=18M:kubernetes" rel="nofollow noreferrer">Cassandra mailing list,</a> since there are people there already using Cassandra over K8s in production.</p>
<p>I have the following Spring YAML structure:</p> <pre><code>catalogParamter:
  - name: kota
    street: xxx
  - name: yyy
    street: kkkk
</code></pre> <p>Now I want those values to come from the values YAML of a Helm chart.</p> <p>In the Spring YAML I have <code>catalogParamter: ${CATALOG_PARAMETER}</code>, and I defined it in the deployment YAML and config YAML, with the values coming from the values YAML.</p> <p>How do I represent <code>catalogParamter</code> in the values YAML?</p>
<p>There are different options here. I'd suggest putting the <a href="https://github.com/trisberg/devoxx-spring-boot-k8s/blob/master/actors/config/actors-config.yaml" rel="nofollow noreferrer">application.yaml in a ConfigMap</a>. You could then expose properties in your values file with:</p> <pre><code>properties: catalogParamter: - name: kota street: xxx - name: yyy street: kkkk </code></pre> <p>And populate the data section of your ConfigMap with:</p> <pre><code>data: application.yaml: |- {{ toYaml .Values.properties | indent 4 }} </code></pre> <p>You can <a href="https://github.com/trisberg/devoxx-spring-boot-k8s/blob/master/actors/config/actors-deployment.yaml#L51" rel="nofollow noreferrer">then mount that ConfigMap</a> into the <code>/config</code> directory so that the spring boot app will <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-application-property-files" rel="nofollow noreferrer">automatically read it</a>.</p> <p>If instead you use environment variables then you can map complex properties to environment variables with Spring boot's <a href="https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0" rel="nofollow noreferrer">relaxed binding</a>. Coding that conversion in your chart could be tricky but your case seems to be name-value pairs so I think it could be done with helm's range function. </p> <p>Another different pattern is to expose the <a href="https://hackernoon.com/the-art-of-the-helm-chart-patterns-from-the-official-kubernetes-charts-8a7cafa86d12" rel="nofollow noreferrer">option to set env vars directly to the user</a>. But the disadvantage of this is the user would have to understand the env var format, which may not be intuitive with relaxed binding. </p>
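<p>Putting the ConfigMap approach together, a sketch of the template and the corresponding mount (the names are illustrative, not from an existing chart):</p> <pre><code># templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.yaml: |-
{{ toYaml .Values.properties | indent 4 }}
</code></pre> <p>and in the Deployment template, under the pod spec:</p> <pre><code>        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: {{ .Release.Name }}-config
</code></pre> <p>Spring Boot then picks up <code>/config/application.yaml</code> automatically.</p>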
<p>On my local machine I can do <code>mkdir</code> and trust that this operation is atomic. I can thus use it as a lock. Can I similarly use <code>mkdir</code> on an EC2 instance with EBS attached and have it be atomic?</p> <p>For further context, I am thinking of a situation where I have multiple Kubernetes Pods running on a Kubernetes Node with one <code>persistentVolume</code> (AWS EBS) between them. If one of these pods is looking to get exclusive access to a folder in this volume, can it do so? The pods are sharing read-only data, but I want to trigger an <code>aws s3 sync</code> only once per week, not once-per-pod-per-week.</p>
<p>Since EBS is not a filesystem...</p> <blockquote> <p>Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances.</p> <p><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html</a></p> </blockquote> <p>...there is no direct answer to the question, because it isn't the correct question. EBS -- as a block device -- is not responsible for or even aware of the actual filesystem operations or its guarantees.</p> <p>tl;dr: If <code>mkdir()</code> is atomic in the filesystem you are using, then it will still be atomic if that filesystem is on an EBS volume.</p> <p>Whether using <code>mkdir()</code> for locking is a good practice is a <a href="https://stackoverflow.com/q/7208447/1695906">different question</a>.</p>
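<p>If you do go the <code>mkdir</code>-lock route anyway, the pattern the question alludes to looks roughly like this (the paths and bucket are hypothetical, and the caveat above about whether this is good practice still applies):</p> <pre><code>LOCKDIR=/shared-volume/.s3-sync-lock     # hypothetical path on the shared volume
if mkdir "$LOCKDIR" 2&gt;/dev/null; then
  # this pod won the race, so it is the only one running the sync
  aws s3 sync s3://my-bucket /shared-volume/data
  rmdir "$LOCKDIR"
else
  echo "another pod is already syncing, skipping"
fi
</code></pre>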
<p>I run a Kubernetes cluster on my mac using the latest Docker community edition. I usually do:</p> <pre><code>$ minikube start --vm-driver=hyperkit </code></pre> <p>and it works well for me.</p> <p>Today, I ran that command multiple times in a script. Now, how do I know how many minikube VMs are running on a mac? How do I delete all but one of them? Can I see a list of all minikube vms running? </p> <pre><code>$ minikube status </code></pre> <p>shows:</p> <pre><code>minikube: Running cluster: Running kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.3 </code></pre> <p>Is running minikube start twice not harmful?</p> <p>I am running minikube version: v0.30.0 on Mac OS High Sierra.</p> <pre><code>$ kubectl version </code></pre> <p>shows:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"} </code></pre> <p>Thanks for reading.</p>
<p>You are using the <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer"><code>Hyperkit</code></a> minikube driver that uses the <code>/usr/local/bin/hyperkit</code> command line (in reality it uses the <a href="https://github.com/mist64/xhyve" rel="nofollow noreferrer">xhyve</a> Hypervisor). So a simple:</p> <pre><code>$ ps -Af | grep hyperkit 0 9445 1 0 1:07PM ttys002 1:45.27 /usr/local/bin/hyperkit -A -u -F /Users/youruser/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2caa5ca9-d55c-11e8-92a0-186590def269 -s 2:0,virtio-blk,/Users/youruser/.minikube/machines/minikube/minikube.rawdisk -s 3,ahci-cd,/Users/youruser/.minikube/machines/minikube/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/youruser/.minikube/machines/minikube/tty,log=/Users/youruser/.minikube/machines/minikube/console-ring -f kexec,/Users/youruser/.minikube/machines/minikube/bzimage,/Users/youruser/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube </code></pre> <p>will tell you how many Hyperkit processes/VMs you are running. AFAIK, <a href="https://github.com/kubernetes/minikube/issues/94" rel="nofollow noreferrer">minikube only supports one</a>, but you could have another one if you have <a href="https://docs.docker.com/v17.12/docker-for-mac/install/" rel="nofollow noreferrer">Docker for Mac</a> installed.</p> <p>Then if you follow this: <a href="https://stackoverflow.com/questions/39739560/how-to-access-the-vm-created-by-dockers-hyperkit">How to access the VM created by docker&#39;s HyperKit?</a>. You can connect to VM an see what's running inside:</p> <pre><code>$ sudo screen /Users/youruser/.minikube/machines/minikube/tty Welcome to minikube minikube login: root _ _ _ _ ( ) ( ) ___ ___ (_) ___ (_)| |/') _ _ | |_ __ /' _ ` _ `\| |/' _ `\| || , &lt; ( ) ( )| '_`\ /'__`\ | ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/ (_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____) # docker ps ... &lt;== shows a bunch of K8s containers </code></pre>
<p>im Running Kubernetes (minikube) on Windows via Virtualbox. I've got a Services running on my Host-Service i dont want to put inside my Kubernetes Cluster, how can i access that Services from Inside my Pods?</p> <p>Im new to to Kubernetes i hope this Questions isnt to stupid to ask.</p> <p>I tried to create a Service + Endpoint but it didnt work:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Endpoints apiVersion: v1 metadata: name: vetdb subsets: - addresses: - ip: 192.168.99.100 ports: - port: 3307</code></pre> </div> </div> </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>kind: Service apiVersion: v1 metadata: name: vetdb spec: selector: app: vetdb type: ClusterIP ports: - port: 3306 targetPort: 3307</code></pre> </div> </div> </p> <p>i started a ubuntu image inside the same cluster the pod should be running on later and tested the connection:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>$ root@my-shell-7766cd89c6-rtxt2:/# curl vetdb:3307 --output test Host '192.168.99.102' is not allowed to connect to this MariaDB serverroot@my-shell</code></pre> </div> </div> </p> <p>This is the same Output i get running (except other Host-IP)</p> <pre><code>curl 192.168.99.100:3307 </code></pre> <p>on my Host PC ==> Itworks.</p> <p>But i cant access the Host from inside my Microservices where i really need to access the URL.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE eureka-77f949499-g2l82 1/1 Running 0 2h my-shell-7766cd89c6-rtxt2 1/1 Running 0 2h vet-ms-54b89f9c86-29psf 1/1 Running 10 18m vet-ms-67c774fd9b-2fnjc 0/1 CrashLoopBackOff 7 18m</code></pre> </div> </div> </p> <p>The Curl Response i posted above was from Pod: <code>my-shell-7766cd89c6-rtxt2</code> But i need to access vetdb from <code>vet-ms-*</code></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>$ kubectl logs -f vet-ms-67c774fd9b-2fnjc ... Caused by: java.net.UnknownHostException: vetdb ...</code></pre> </div> </div> </p> <p>Spring URL Settings i tried</p> <pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb:3307/vetdb?useSSL=false&amp;allowPublicKeyRetrieval=true </code></pre> <blockquote> <pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb:3306/vetdb?useSSL=false&amp;allowPublicKeyRetrieval=true </code></pre> </blockquote> <pre><code>spring.profiles.datasource.url: jdbc:mysql://vetdb/vetdb?useSSL=false&amp;allowPublicKeyRetrieval=true </code></pre> <p>Ty guys</p> <hr> <hr> <p>Edit:// i allowed every Host to Connect to the DB to remove this error</p> <pre><code>Host '192.168.99.102' is not allowed to connect to this MariaDB </code></pre> <p>but i still get the Same Unknown Host Exception inside of my Microservices.</p>
<p>I think the Ubuntu image test is the most informative here.</p> <p>From the error message I think the problem is in the MySQL config. You must configure the server to listen on your host IP address (i.e. not only on localhost or a socket file).</p> <p>In addition, you must ensure that IP addresses from the pod subnets are allowed to connect.</p>
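<p>For the first part, a sketch of the server config change, assuming MariaDB on the host and the port from the question (file locations differ per distribution):</p> <pre><code># my.cnf (or a file under mariadb.conf.d/)
[mysqld]
bind-address = 0.0.0.0   # listen on all interfaces, not only 127.0.0.1
port         = 3307
</code></pre> <p>The second part is the allowed-hosts/<code>GRANT</code> change you already made; just make sure it covers the VM/pod source addresses (for example the <code>192.168.99.*</code> range shown in the error message).</p>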
<p>The closest answer I found is <a href="https://stackoverflow.com/questions/45928798/docker-volume-vs-kubernetes-persistent-volume-for-databases">this</a>.</p> <p>But what I want to know is: will the Dockerfile VOLUME command be totally ignored by Kubernetes? Or will data be persisted in two places: one in a Docker volume (on the host where the pod is running) and another in the Kubernetes PV?</p> <p>The reason for asking is that I deploy some containers from Docker Hub which contain a VOLUME command, and I also attach a PVC to my pod. I am wondering whether a local volume (a Docker volume, not a K8s PV) will be created on the node. If my pod is scheduled to another node, is another new volume created?</p> <hr> <p>On top of this, thanks to @Rico for pointing out that the <code>-v</code> flag and Kubernetes's mount take precedence over the Dockerfile VOLUME command, but what about a scenario like the one below:</p> <ul> <li><p>Dockerfile VOLUME onto '/myvol'</p></li> <li><p>Kubernetes mounts the PVC to '/anotherMyVol'</p></li> </ul> <p>In this case, will <code>myvol</code> be mounted on my local node's hard disk, and cause data to be unknowingly persisted locally?</p>
<p>It will not be ignored unless you override it on your Kubernetes pod spec. For example, if you follow <a href="https://docs.docker.com/engine/reference/builder/#volume" rel="noreferrer">this</a> example from the Docker documentation:</p> <pre><code>$ docker run -it container bash root@7efcf5ef12a2:/# mount | grep myvol /dev/nvmeXnXpX on /myvol type ext4 (rw,relatime,discard,data=ordered) root@7efcf5ef12a2:/# </code></pre> <p>You'll see that it's mounted on the root drive of the host where the container is running on. Docker actually creates a volume on the host filesystem under <code>/var/lib/docker/volumes</code> (<code>/var/lib/docker</code> is your Docker graph directory):</p> <pre><code>$ pwd /var/lib/docker/volumes $ find . | grep greeting ./d0bc20d085243c39c4f386dce2f6cafcd8146128d6b0c8f9dcb27cfb61a7ecab/_data/greeting </code></pre> <p>You can override this with the <a href="https://docs.docker.com/storage/volumes/" rel="noreferrer"><code>-v</code></a> option in Docker:</p> <pre><code>$ docker run -it -v /mnt:/myvol container bash root@1c7211cf43d0:/# cd /myvol/ root@1c7211cf43d0:/myvol# touch hello root@1c7211cf43d0:/myvol# exit exit $ pwd # &lt;= on the host /mnt $ ls hello </code></pre> <p>So on Kubernetes you can override it in the pod spec:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mycontainer image: container volumeMounts: - name: storage mountPath: /myvol volumes: - name: storage hostPath: path: /mnt type: Directory </code></pre>
<p>Does it make sense to create a separate <strong><em>Kubernetes</em></strong> cluster for my <em>Cassandra</em> instances and one cluster for the <em>application layer</em>? Is the DB cluster accessible from the service cluster when both are in the same region and zone?</p> <p>Or is it better to have one cluster with different pools - one pool for the service layer and one pool the DB nodes? </p> <p>Thanks</p>
<p>This is more of a toss-up or opinion in terms of how you want to design your whole architecture. Here are some things to consider:</p> <p>Same cluster:</p> <ul> <li>Pros <ul> <li>Workloads don't need to go to a different podCidr to get its data.</li> <li>You can optimize your resources in the same set of servers. <ul> <li>This is one of the main reasons people use containers orchestrators and containers.</li> <li>It allows you to run multiple different types of workloads on the same set of resources.</li> </ul></li> </ul></li> <li>Cons <ul> <li>If you have an issue with your cluster running Cassandra you risk losing your data. Or temporarily lose data if you have backups. (Longer downtime)</li> <li>If you'd like to super isolate the db and app in terms of security, it may be harder.</li> </ul></li> </ul> <p>Different clusters:</p> <ul> <li><p>Pros</p> <ul> <li>'Safer' if one of your clusters goes down.</li> <li>More separation in terms of security for your data at rest.</li> </ul></li> <li><p>Cons</p> <ul> <li>Resources may not be optimally utilized. Leaving some CPUs, memory, etc idle.</li> <li>More infrastructure management.</li> </ul></li> </ul> <p>Different node pools:</p> <ul> <li>Pros <ul> <li>Separation of data at rest</li> <li>Still going through the same PodCidr.</li> </ul></li> <li>Cons <ul> <li>More management of different node pools.</li> <li>Resources may not be optimally utilized.</li> </ul></li> </ul>
<p>Does kubectl provide a way to copy files from pod in one namespace to another? I see we can copy files from pod to local machine and then copy them on another pod of different namespace. But can we copy directly from one namespace to another?</p> <p>I tried:</p> <p><code>kubectl cp &lt;namespace1&gt;/&lt;pod1&gt;:/tmp/foo.txt &lt;namespace2&gt;/&lt;pod1&gt;:/tmp/foo.txt</code></p> <p>Looking at <code>kubectl cp</code> command help options I don't think there is any way to do that.</p>
<p>Not really. <code>kubectl cp</code> can only copy remote-to-local or local-to-remote, so unfortunately it's a 2-step process:</p> <pre><code>$ kubectl cp &lt;namespace1&gt;/&lt;pod1&gt;:/tmp/foo.txt foo.txt
$ kubectl cp foo.txt &lt;namespace2&gt;/&lt;pod1&gt;:/tmp/foo.txt
</code></pre> <p>It would be nice to have a 1-step process, like <a href="https://rsync.samba.org/" rel="noreferrer">rsync</a>, but it is what it is as of this writing. I opened this <a href="https://github.com/kubernetes/kubectl/issues/551" rel="noreferrer">issue</a> to track it.</p>
<p>Kubernetes ends up with long running pods when an image specified for a container is purged from an image repository. These deployments are created by a continuous integration system and sometimes pipelines are run or rerun when images have been purged.</p> <p>The status from <code>kubectl get pods</code> shows <code>ImagePullBackOff</code>.</p> <p>What should be set in the kube config yaml file to stop these pods from running for days? Ideally we just want the Image to be pulled a couple of times and then fail if it's unsuccessful.</p> <p>The pod definition is </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-missing-image spec: containers: - image: missingimage name: test resources: limits: memory: "10000Mi" readinessProbe: httpGet: port: 5678 path: /somePath initialDelaySeconds: 360 periodSeconds: 30 timeoutSeconds: 30 restartPolicy: Never terminationGracePeriodSeconds: 0 </code></pre> <p>Thanks!</p>
<p>AFAIK, the only way to control this as of this writing is with the <a href="https://kubernetes.io/docs/concepts/configuration/overview/#container-images" rel="nofollow noreferrer">imagePullPolicy</a> in the container spec.</p> <p>You may set it to <code>Never</code>, but then your pod will not run since the image is not present locally. Or you can set it to <code>IfNotPresent</code>, but then you somehow have to create an image with that specific tag locally on your K8s nodes. Neither option is ideal, and I believe there is a rationale for having it go into <code>ImagePullBackOff</code>: people want to know why their pod is not running.</p> <p>So IMO the bigger question is: why would you want to delete/invalidate images in your docker registry that are still running in your cluster? Why not update the <code>pods/deployments/daemonsets/replicasets/statefulsets</code> with the latest images prior to deleting or invalidating an image in the docker registry (also called a deploy)?</p> <p>The general practice could be something like this:</p> <pre><code>create new image =&gt; deploy it =&gt; make sure everything is ok =&gt; {
  ok     =&gt; invalidate the old image tag.
  not ok =&gt; rollback =&gt; delete new image tag =&gt; go back to create new image =&gt; create new image tag.
}
</code></pre> <p>Note that layers and images are not deleted in a docker registry; you can delete or overwrite tags: <a href="https://stackoverflow.com/questions/25436742/how-to-delete-images-from-a-private-docker-registry?answertab=votes#tab-top">How to delete images from a private docker registry?</a></p>
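<p>For completeness, the policy goes on the container spec of the pod from the question, e.g. (only helpful if the image:tag is already present on the node):</p> <pre><code>containers:
- name: test
  image: missingimage
  imagePullPolicy: IfNotPresent
</code></pre>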
<p>I'm using the <a href="https://github.com/helm/charts/tree/master/stable/mcrouter" rel="nofollow noreferrer">mcrouter helm chart</a> to setup mcrouter on GKE. For my setup I'd like to have a dedicated node pool for the memcached statefulset and a daemonset for mcrouter.</p> <p>I'm <a href="https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create" rel="nofollow noreferrer">creating the node pool</a> with a taint using the <code>--node-taints</code> flag. To ensure that the memcached statefulset can run on this node pool I need to specify <code>tolerations</code> as described in <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a>.</p> <p>How do I specify the toleration? I'm currently creating my setup using helm as follows:</p> <blockquote> <p>helm install stable/mcrouter --name=mycache --set memcached.replicaCount=15 --set memcached.resources.requests.memory=10Gi --set memcached.resources.requests.cpu=2 --set resources.requests.memory=512Mi --set resources.requests.cpu=1.5 --set resources.limits.memory=512Mi --set resources.limits.cpu=2 --set memcached.memcached.maxItemMemory=8432</p> </blockquote>
<p>The <a href="https://github.com/helm/charts/tree/master/stable/mcrouter" rel="nofollow noreferrer">Helm Chart</a> doesn't support it.</p> <p>After you create the DaemonSet you can patch it. For example (change to whatever toleration you want):</p> <pre><code>$ kubectl patch daemonset mycache-mcrouter -p='{"spec":{"template": {"spec": {"tolerations": [{"key": "key1", "operator": "Equal", "value": "value1", "effect": "NoSchedule"}]}}}}' </code></pre> <p>You can also upstream a change to support tolerations. You would have to add variables in the <code>values.yaml</code>(<a href="https://github.com/helm/charts/blob/master/stable/mcrouter/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/mcrouter/values.yaml</a>) file and then optionally use it in the <a href="https://github.com/helm/charts/blob/master/stable/mcrouter/templates/daemonset.yaml" rel="nofollow noreferrer"><code>daemonset.yaml</code></a> template.</p>