<p>What approach do you take in order to identify feasible values for resource requests and limits for <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer"><code>ResourceQuota</code></a> and <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/" rel="nofollow noreferrer"><code>LimitRange</code></a> objects in OpenShift/Kubernetes projects?</p>
<p>I am <strong>not</strong> asking how to create quotas or limit ranges, but rather how to rationalize cpu/memory request/limit values instead of simply guessing.</p>
<p>I have come across <a href="https://blog.openshift.com/full-cluster-part-3-capacity-management/" rel="nofollow noreferrer">this blog post</a> which is a good starting point, and was hoping to find more resources or recommendations for best practices. I understand that I will have to tinker with these settings as there is no one-size-fits-all solution.</p>
| <p>I think you kind of answered the question yourself. There's no one-size-fits-all; it really depends on the types of workloads. It's also partly a matter of opinion about how much slack you want to leave on the resources.</p>
<p><strong>IMO</strong></p>
<p>For compute resources:</p>
<ul>
<li>CPUs and Memory: Anything under 10% is probably underutilized, anything over 80% overutilized. You always want to strive here for more utilization on both of these because these resources tend to be the ones that cost the most.</li>
<li>Disk: Anything at 80% means you probably need to increase the size of the disk, or do garbage collection.</li>
</ul>
<p>For Kubernetes object-count limits like the number of ConfigMaps, there's no real maximum per se; it's just a way to make sure cluster users don't abuse resource creation, since cluster resources are never infinite. One example: if every Deployment has on average 2 ConfigMaps and you'd like to allow 100 Deployments, you might set a limit of around 220 ConfigMaps to leave some headroom.</p>
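<p>A minimal sketch of what that could look like as a <code>ResourceQuota</code> (the namespace name and the numbers are just illustrative values taken from the example above):</p>
<pre><code># resource quota capping object counts and compute requests in one namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: my-namespace   # hypothetical namespace
spec:
  hard:
    configmaps: "220"       # object-count limit from the example above
    requests.cpu: "10"      # optional: cap total CPU requests in the namespace
    requests.memory: 20Gi   # optional: cap total memory requests in the namespace
</code></pre>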
|
<p>In the book <a href="https://www.goodreads.com/book/show/26759355-kubernetes" rel="nofollow noreferrer">Kubernetes: Up & Running</a>, on the section "Creating Deployments", it has a yaml file that is being used for deployments that starts like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
</code></pre>
<p>What's the point of setting up <code>deployment.kubernetes.io/revision: "1"</code>?</p>
<p>This is a file that is going to be applied, not the result of querying the server.</p>
| <p>This annotation is set by Kubernetes. The Deployment uses it to identify its respective ReplicaSet.</p>
<p>Let me explain. A Deployment creates a ReplicaSet, and that ReplicaSet is responsible for creating the Pods.</p>
<p>Whenever you change the Deployment's pod template, it creates a new ReplicaSet. It does not delete the old ReplicaSet, because that one is needed if you want to roll back to a previous version.</p>
<p>So how does the Deployment know which ReplicaSet is currently in use? That is where the <code>deployment.kubernetes.io/revision:</code> annotation comes in. The ReplicaSet carries the same annotation, so the Deployment matches the revision number in its own annotation with the revision number in the ReplicaSet's annotation.</p>
<p>You can read this nice article to understand more: <a href="https://thenewstack.io/kubernetes-deployments-work/" rel="noreferrer">How Kubernetes Deployments Work</a>.</p>
<p>To learn how to roll back a deployment to a previous version, see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-to-a-previous-revision" rel="noreferrer">here</a>.</p>
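<p>As a quick way to see the revisions a Deployment has recorded (and to roll back), you can use <code>kubectl rollout</code>:</p>
<pre><code># list the rollout revisions recorded for a deployment
kubectl rollout history deployment/<deployment-name>

# roll back to a specific revision
kubectl rollout undo deployment/<deployment-name> --to-revision=1
</code></pre>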
|
<p>So I am using the helm chart <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="noreferrer">stable/traefik</a> to deploy a reverse proxy to my cluster. I need to customise it beyond what is possible with the variables I can set for the template.</p>
<p>I want to enable the dashboard service while not creating an ingress for it (I set up OpenVPN to access the traefik dashboard only via VPN).
Both <code>dashboard-ingress.yaml</code> and <code>dashboard-service.yaml</code> conditionally include the ingress or the respective service based on the same variable <code>{{- if .Values.dashboard.enabled }}</code></p>
<p>From my experience I would fork the helm chart and push the customised version to my own repository.</p>
<p><strong>Is there a way to add that customization but keep the original helm chart from the stable repository?</strong></p>
| <p>You don't necessarily have to push to your own repository, as you could take the source code and include the chart in your own chart as source. For example, if you dig into the <a href="https://gitlab.com/charts/gitlab/tree/master" rel="noreferrer">gitlab chart</a>, in their <a href="https://gitlab.com/charts/gitlab/tree/master/charts" rel="noreferrer">charts</a> dependencies they've included multiple other charts as source there, not as packaged .tgz files. That enables you to make changes to the chart within your own source (much as the gitlab folks have). You could get the source using <code>helm fetch stable/traefik --untar</code>.</p>
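<p>For example, a rough sketch of pulling the chart source into your own chart's <code>charts/</code> directory (the parent chart name here is hypothetical):</p>
<pre><code># from the root of your own (parent) chart
cd my-parent-chart/charts
helm fetch stable/traefik --untar
# the unpacked chart source now lives in my-parent-chart/charts/traefik
# and can be edited and versioned alongside your own templates
</code></pre>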
<p>However, including the chart as source is still quite close to forking. If you want to upgrade to get fixes then you still have to reapply your changes. I believe your only other option is to raise the issue <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="noreferrer">on the official chart repo</a>. Perhaps for your case you could suggest to the <a href="https://github.com/helm/charts/blob/master/stable/traefik/OWNERS" rel="noreferrer">maintainers</a> that the ingress be included only when .Values.dashboard.enabled and a separate ingress condition is met. </p>
|
<p>I am using Jenkins-X for a relatively large project, which consists of approximately 30 modules, 15 of which are services (and therefore, contain Dockerfiles, and a respective Helm chart for deployment). </p>
<p>During some of these relatively large builds, I am intermittently (~every other build) seeing a build pod become evicted. Using <code>kubectl describe pod <podname></code> I can investigate, and I've noticed that the pod is evicted due to the following: </p>
<p><code>the node was low on resource imagefs</code></p>
<p>Full data: </p>
<pre><code>Name: maven-96wmn
Namespace: jx
Node: ip-192-168-66-176.eu-west-1.compute.internal/
Start Time: Tue, 06 Nov 2018 10:22:54 +0000
Labels: jenkins=slave
jenkins/jenkins-maven=true
Annotations: <none>
Status: Failed
Reason: Evicted
Message: The node was low on resource: imagefs.
IP:
Containers:
maven:
Image: jenkinsxio/builder-maven:0.0.516
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
Args:
cat
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 400m
memory: 512Mi
Environment:
JENKINS_SECRET: 131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
JENKINS_TUNNEL: jenkins-agent:50000
DOCKER_CONFIG: /home/jenkins/.docker/
GIT_AUTHOR_EMAIL: [email protected]
GIT_COMMITTER_EMAIL: [email protected]
GIT_COMMITTER_NAME: jenkins-x-bot
_JAVA_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Xms10m -Xmx192m
GIT_AUTHOR_NAME: jenkins-x-bot
JENKINS_NAME: maven-96wmn
XDG_CONFIG_HOME: /home/jenkins
JENKINS_URL: http://jenkins:8080
HOME: /home/jenkins
Mounts:
/home/jenkins from workspace-volume (rw)
/home/jenkins/.docker from volume-2 (rw)
/home/jenkins/.gnupg from volume-3 (rw)
/root/.m2 from volume-1 (rw)
/var/run/docker.sock from volume-0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-smvvp (ro)
jnlp:
Image: jenkinsci/jnlp-slave:3.14-1
Port: <none>
Host Port: <none>
Args:
131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
maven-96wmn
Requests:
cpu: 100m
memory: 128Mi
Environment:
JENKINS_SECRET: 131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
JENKINS_TUNNEL: jenkins-agent:50000
DOCKER_CONFIG: /home/jenkins/.docker/
GIT_AUTHOR_EMAIL: [email protected]
GIT_COMMITTER_EMAIL: [email protected]
GIT_COMMITTER_NAME: jenkins-x-bot
_JAVA_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Xms10m -Xmx192m
GIT_AUTHOR_NAME: jenkins-x-bot
JENKINS_NAME: maven-96wmn
XDG_CONFIG_HOME: /home/jenkins
JENKINS_URL: http://jenkins:8080
HOME: /home/jenkins
Mounts:
/home/jenkins from workspace-volume (rw)
/home/jenkins/.docker from volume-2 (rw)
/home/jenkins/.gnupg from volume-3 (rw)
/root/.m2 from volume-1 (rw)
/var/run/docker.sock from volume-0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-smvvp (ro)
Volumes:
volume-0:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType:
volume-2:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-docker-cfg
Optional: false
volume-1:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-maven-settings
Optional: false
workspace-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
volume-3:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-release-gpg
Optional: false
jenkins-token-smvvp:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-smvvp
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Created container
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "workspace-volume"
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-0"
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-1"
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-2"
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-3"
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "jenkins-token-smvvp"
Normal Pulled 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Container image "jenkinsxio/builder-maven:0.0.516" already present on machine
Normal Scheduled 7m default-scheduler Successfully assigned maven-96wmn to ip-192-168-66-176.eu-west-1.compute.internal
Normal Started 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Started container
Normal Pulled 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Container image "jenkinsci/jnlp-slave:3.14-1" already present on machine
Normal Created 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Created container
Normal Started 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Started container
Warning Evicted 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal The node was low on resource: imagefs.
Normal Killing 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Killing container with id docker://jnlp:Need to kill Pod
Normal Killing 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Killing container with id docker://maven:Need to kill Pod
</code></pre>
<p>How can I remedy this issue? I generally do not fully understand what imagefs is, how I configure / increase it, or avoid saturating it. </p>
<p>ps. sorry this post is written so passively, I had to use an active tone to make the wording wordy enough for SO to allow me to not just post a code snippet. </p>
| <p>Resolved: the underlying node storage was only 20 GB. Increasing the EBS volumes to 50 GB and rebooting the nodes (which increased nodefs) removed the problem, as imagefs was no longer saturated. For context, imagefs is the filesystem the container runtime uses to store images and container writable layers, so large builds that pull many images can fill it up quickly.</p>
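<p>To see whether a node is under disk pressure (which is what triggers imagefs/nodefs evictions), something like the following can help; the node name is the one from the describe output above, and the Docker data root path is an assumption (the default):</p>
<pre><code># check node conditions such as DiskPressure in the Conditions section
kubectl describe node ip-192-168-66-176.eu-west-1.compute.internal

# on the node itself, check usage of the filesystem holding images (assuming Docker's default data root)
df -h /var/lib/docker
</code></pre>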
|
<p>I have deployed a pod on a Kubernetes Cluster on GCP. I have used a Persistent volume using a PVC as the volume mount.
I need to input data(.doc files) residing in Google Cloud Storage buckets into the the pod's data path.</p>
<p>How do I mount this external storage or ingest the data into the pod?</p>
<p>I also require this to be either a live connection or for the ingestion to happen at regular intervals.</p>
| <p>I found a way to mount Google Cloud Storage buckets into Kubernetes: GCS Fuse has to be built into the image, and then the Cloud Storage buckets can be mounted as directories. I referred to these links to implement this:</p>
<p><a href="https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/" rel="nofollow noreferrer">https://karlstoney.com/2017/03/01/fuse-mount-in-kubernetes/</a></p>
<p><a href="https://github.com/maciekrb/gcs-fuse-sample" rel="nofollow noreferrer">https://github.com/maciekrb/gcs-fuse-sample</a></p>
|
<p>I'm trying to create an OrientDB (version 3.0.10) cluster using Kubernetes. OrientDB uses Hazelcast (version 3.10.4) in its distributed mode, which is why I had to set up the KubernetesHazelcast plugin. I used <a href="https://github.com/shivahr/Scalable-OrientDB-deployment-on-Kubernetes" rel="nofollow noreferrer">this repository</a> as an example.
I have created all the necessary configuration files and defined the hazelcast-kubernetes dependency (version 1.3.1) in my project's build.sbt file, and this dependency appears on the classpath.
However, the logs on each pod show this error message:</p>
<pre><code>com.orientechnologies.orient.server.distributed.ODistributedStartupException: Error on starting distributed plugin
Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={service-dns=orientdbservice2.default.svc.cluster.local, service-dns-timeout=10}, className='com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
</code></pre>
<p>So it looks like the Hazelcast Kubernetes dependency is set up in a wrong way. How can this error be fixed?</p>
<p>Here is my config hazelcast.xml file:</p>
<pre><code> <properties>
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<join>
<multicast enabled="false"/>
<tcp-ip enabled="false" />
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<properties>
<property name="service-dns">orientdbservice2.default.svc.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
</network>
</code></pre>
<p>For the cluster creation, I use StatefulSet with OrientDB image and mount all the config files as config maps. I am pretty sure that the problem is not in my config files as with multicast instead of the dns strategy everything works fine. Also, there are no network problems in the Kubernetes cluster itself.</p>
| <p>First of all, the OrientDB version should be updated to the latest (3.0.10), which embeds the newest Hazelcast version. Also, I mounted the hazelcast-kubernetes.jar dependency file directly into the /orientdb/lib folder and it started to work properly. The HazelcastKubernetes plugin is discovered and the nodes join the cluster:</p>
<pre><code>INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Kubernetes Discovery activated resolver: DnsEndpointResolver [DiscoveryService]
INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Activating Discovery SPI Joiner [Node]
INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks) [OperationExecutorImpl]
Members {size:3, ver:3} [
Member [172.17.0.3]:5701 - hash
Member [172.17.0.4]:5701 - hash
Member [172.17.0.8]:5701 - hash
]
</code></pre>
|
<p>I launched a <code>jenkins-k8s-slave</code> which should pull from a local registry. Why does docker ignore the local DNS settings (<code>/etc/hosts</code> and <code>/etc/resolv.conf</code>) managed by kubernetes?</p>
<p>When I do:</p>
<p><code>docker pull service.namespace.svc.cluster.local:5000/test:latest</code></p>
<p>I get: <code>dial tcp: lookup service.namespace.svc.cluster.local: no such host</code></p>
<p>but this works:</p>
<p><code>curl https://service.namespace.svc.cluster.local:5000/v2/_catalog -k</code>
<code>{"repositories":[...]}</code></p>
| <p>You will need to configure Docker to use your DNS, whatever that may be; in this case, it seems that you need to tell Docker to use the Kubernetes DNS:</p>
<p><a href="https://github.com/moby/moby/issues/23910" rel="nofollow noreferrer">https://github.com/moby/moby/issues/23910</a></p>
<p>Example config:</p>
<pre><code>cat /etc/docker/daemon.json
{
"hosts": [ "unix:///var/run/docker.sock","tcp://0.0.0.0:2376"],
"live-restore": true,
"tls": true,
"tlscacert": "/etc/docker/ssl/ca.pem",
"tlscert": "/etc/docker/ssl/cert.pem",
"tlskey": "/etc/docker/ssl/key.pem",
"tlsverify": true,
"dns":["172.21.1.100","172.16.1.100"]
}
</code></pre>
<p>See Also:
<a href="https://forums.docker.com/t/docker-pull-not-using-correct-dns-server-when-private-registry-on-vpn/11117/29" rel="nofollow noreferrer">https://forums.docker.com/t/docker-pull-not-using-correct-dns-server-when-private-registry-on-vpn/11117/29</a></p>
|
<p>We want to use RabbitMQ with Kubernetes, but we have found some opinions saying that it is not very easy, or even impossible.
For example, people say that when the pods go down it is not easy to re-establish the RabbitMQ nodes correctly afterwards.</p>
<p>My question: is it really impossible, and if not, are there some best practices to know about for the implementation?</p>
<p>Thanks for any help</p>
| <p>The question is quite generic, but yes, it is possible to run RabbitMQ on Kubernetes.</p>
<p>You can find the documentation about that here:</p>
<p><a href="http://www.rabbitmq.com/cluster-formation.html#peer-discovery-k8s" rel="nofollow noreferrer">http://www.rabbitmq.com/cluster-formation.html#peer-discovery-k8s</a></p>
<blockquote>
<p>It is highly recommended that RabbitMQ clusters are deployed using a
stateful set. If a stateless set is used recreated nodes will not have
their persisted data and will start as blank nodes. This can lead to
data loss and higher network traffic volume due to more frequent eager
synchronisation of newly joining nodes. Stateless sets are also prone
to the natural race condition during initial cluster formation, unlike
stateful sets that initialise pods one by one.</p>
</blockquote>
<p>and you can find a full example here:</p>
<p><a href="https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets</a></p>
|
<p>I'm following along with the Istio Rate Limits section of this task from the docs:
<a href="https://istio.io/docs/tasks/policy-enforcement/rate-limiting/" rel="nofollow noreferrer">https://istio.io/docs/tasks/policy-enforcement/rate-limiting/</a></p>
<p>I have the bookinfo application set up properly, I have a virtual service for productpage (and all of the other components of bookinfo), and I'm running their code as is, but rate limiting is not working for me.</p>
<p>Every time I hit the url for productpage, it works, no rate limiting happens at all. However, every time I hit the url I do see this message in the mixer logs:</p>
<pre><code>kubectl -n istio-system logs $(kubectl -n istio-system get pods -lapp=policy -o jsonpath='{.items[0].metadata.name}') -c mixer
2018-09-21T16:06:28.456449Z warn Requested quota 'requestcount' is not configured
</code></pre>
<p>requestcount quota is definitely set up though:</p>
<pre><code>apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
</code></pre>
<p>When I apply the full yaml file I see:</p>
<pre><code> memquota.config.istio.io "handler" configured
quota.config.istio.io "requestcount" configured
quotaspec.config.istio.io "request-count" configured
quotaspecbinding.config.istio.io "request-count" configured
rule.config.istio.io "quota" configured
</code></pre>
<p>And I see the following when I run</p>
<pre><code>kubectl get quota requestcount -n istio-system -o yaml
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"config.istio.io/v1alpha2","kind":"quota","metadata":{"annotations":{},"name":"requestcount","namespace":"istio-system"},"spec":{"dimensions":{"destination":"destination.labels[\"app\"] | destination.service | \"unknown\"","destinationVersion":"destination.labels[\"version\"] | \"unknown\"","source":"request.headers[\"x-forwarded-for\"] | \"unknown\""}}}
  clusterName: ""
  creationTimestamp: 2018-09-21T16:02:23Z
  generation: 1
  name: requestcount
  namespace: istio-system
  resourceVersion: "263792"
  selfLink: /apis/config.istio.io/v1alpha2/namespaces/istio-system/quotas/requestcount
  uid: ba4d2510-bdb7-11e8-b8c9-025000000001
spec:
  dimensions:
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    source: request.headers["x-forwarded-for"] | "unknown"
</code></pre>
<p>So why do I receive this message? </p>
| <p>I ran into this and fixed it by making sure the <code>rule</code> was in the same namespace as the <code>quota</code></p>
<pre><code>---
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  # quota only applies if you are not logged in.
  # match: match(request.headers["cookie"], "user=*") == false
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
</code></pre>
|
<p>I have two services that I would like to access a PersistentVolumeClaim.</p>
<p>One is a StatefulSet that reads from the volume (and serves content to end users); the other is a Kubernetes CronJob that periodically updates the contents of the data obtained through the PVC.</p>
<p>Right now I'm running into the issue that my PVC is backed by a PV (not NFS, Ceph, or the like) and one service grabs the volume making the other not start.</p>
<p>How can I make it so both of these services have access to the volume?</p>
<p>And is there a way to add a CronJob to my StatefulSet the same way I add more containers?</p>
| <p>Have you checked the <code>accessModes</code> of your <code>pv</code> and <code>pvc</code>?</p>
<p>If you want more than one pod to be able to mount the volume you'll need to use <code>ReadOnlyMany</code> or <code>ReadWriteMany</code></p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Persistent Volume Docs</a></p>
<p>As for your second question, no, there's no way to "add a CronJob to [a] StatefulSet". They are separate and distinct API objects.</p>
|
<p>I have some services running in Kubernetes. I need an NGINX in front of them, to redirect traffic according to the URLs, handle SSL encryption and load balancing.</p>
<p>There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.</p>
<p>Is it correct to launch a StatefulSet with nginx and have a load-balancing Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 + 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?</p>
| <p>You don't really need a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> will do since nginx is already being fronted by a gcloud TCP load balancer, if for any reason one of your nginx pods is down the gcloud load balancer will not forward traffic to it. Since you already have a gcloud load balancer you will have to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> <a href="https://kubernetes.io/docs/concepts/services-networking/service" rel="nofollow noreferrer">Service</a> type and you will have to point your gcloud load balancer to all the nodes on your K8s cluster on that specific port.</p>
<p>Note that your <code>nginx.conf</code> will have to know how to route to all the services internally in your K8s cluster. I recommend you set up an <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a>, which will basically manage the <code>nginx.conf</code> for you through an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource and you can also expose it as a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> Service type.</p>
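<p>A minimal sketch of exposing such an nginx Deployment through a <code>NodePort</code> Service (the name, labels and node ports are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx          # must match the labels on the nginx Deployment's pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080     # the gcloud LB would forward port 80 to this node port
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
</code></pre>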
|
<p>I'm trying to get kubelet config.yaml file on my broken production cluster with no luck. The command I am using is:</p>
<pre><code>kubeadm alpha phase kubelet config write-to-disk --config=/var/lib/kubelet/config.yaml
</code></pre>
<p>This returns the following error:</p>
<pre><code>no InitConfiguration or ClusterConfiguration kind was found in the YAML file
</code></pre>
<p>Could somebody please help me resolve this? Thanks.</p>
| <p>You basically need to print the default config first (the one that contains the <code>InitConfiguration</code> and the <code>ClusterConfiguration</code>):</p>
<pre><code>$ kubeadm config print-default > cluster.yaml
</code></pre>
<p>Then:</p>
<pre><code>$ kubeadm alpha phase kubelet config write-to-disk --config=cluster.yaml
</code></pre>
|
<p>We are using docker-ee </p>
<pre><code>Docker Enterprise 2.1
18.09.0-beta3
</code></pre>
<p>I installed UCP on one node and added worker nodes to it. The UCP shows node error as:
<code>"Calico-node pod is unhealthy: unexpected calico-node pod condition Ready".</code>
When I run kubectl on the node it shows the following:</p>
<pre><code>kubectl get pods --all-namespaces
kube-system calico-kube-controllers-549679 1/1 Running 2 5h
kube-system calico-node-6fk4j 1/2 CrashLoopBackOff 85 5h
kube-system calico-node-6xldl 1/2 Running 78 5h
</code></pre>
<p>The pod describe shows:</p>
<pre><code>kubectl describe pod calico-node-6fk4j -n kube-system:
Warning Unhealthy 17m (x210 over 2h) kubelet, tclasapid004.tiffco.net Liveness probe failed: Get http://localhost:9099/liveness: dial tcp 127.0.0.1:9099: connect: connection refused
Warning BackOff 7m (x410 over 2h) kubelet, tclasapid004.tiffco.net Back-off restarting failed container
Warning Unhealthy 2m (x231 over 2h) kubelet, tclasapid004.tiffco.net Readiness probe failed: calico/node is not ready: felix is not ready: Get http://localhost:9099/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>We cannot run a few operations because of this error (calico-node). Please help here.</p>
<p>Appreciate your help</p>
| <p>Do you have multiple interfaces on that host? If so, you need to configure the IP auto-detection settings.</p>
<p>Related issue:</p>
<p><a href="https://github.com/projectcalico/calico/issues/2042" rel="nofollow noreferrer">https://github.com/projectcalico/calico/issues/2042</a></p>
<blockquote>
<p>you need to set auto-detect to use another method suitable for your
network. E.g. adding following to calico yaml:</p>
</blockquote>
<pre><code> - name: IP_AUTODETECTION_METHOD
value: "interface=eth.*"
</code></pre>
<p>Please go through that issue; there are multiple possible reasons, with many solutions:</p>
<blockquote>
<p>I was finally able to resolve the issue. Thanks to @tmjd for the hint.
I had two interfaces on each of my Ubuntu VMs, enp0s3 and enp0s8. the
enp0s8 interface had the same IP on all the three VMs hence the calico
nodes on the slave were complaining about the IP conflict. To resolve
this problem I edited my /etc/network/interfaces file and assigned
static IPs to enpos8 interface. this resolved the problem.</p>
</blockquote>
|
<p>I followed this tutorial to get let's encrypt in kubernetes : <a href="https://github.com/ahmetb/gke-letsencrypt/blob/master/" rel="noreferrer">https://github.com/ahmetb/gke-letsencrypt/blob/master/</a></p>
<p>I encountered some problems: cert-manager doesn't create the needed secret.
Could you please help me resolve this problem?</p>
<p>Cert-manager ERRORS : </p>
<pre><code>Found status change for Certificate "mydomain.fr" condition "Ready": "False" -> "False"; setting lastTransitionTime to 2018-11-06 17:37:20.683089649 +0000 UTC m=+5887.364224968
Error preparing issuer for certificate coffeer-ci/mydomain.fr: http-01 self check failed for domain "mydomain.fr"
[coffeer-ci/mydomain.fr] Error getting certificate 'domain-tls': secret "domain-tls" not found
</code></pre>
<p>Here are my Kubernetes objects:</p>
<p><code>kubectl -n kube-system describe pod cert-manager</code></p>
<pre><code>Name: cert-manager-7bb46cc6b-scqrp
Namespace: kube-system
Node: gke-inkubator-default-pool-68c0309d-b86b/10.132.0.3
Start Time: Tue, 06 Nov 2018 16:59:10 +0100
Labels: app=cert-manager
pod-template-hash=366027726
release=cert-manager
Annotations: <none>
Status: Running
IP: 10.16.1.132
Controlled By: ReplicaSet/cert-manager-7bb46cc6b
Containers:
cert-manager:
Container ID: docker://d4795cfa85aacd2cbd0c5fd51246c436e3cf953632f4ca4a26e683c5867bf113
Image: quay.io/jetstack/cert-manager-controller:v0.5.0
Image ID: docker-pullable://quay.io/jetstack/cert-manager-controller@sha256:fd89c3c33fd89ffe0a9f91df2f54423397058d4180eccfe90b831859ba46b6e5
Port: <none>
Host Port: <none>
Args:
--cluster-resource-namespace=$(POD_NAMESPACE)
--leader-election-namespace=$(POD_NAMESPACE)
State: Running
Started: Tue, 06 Nov 2018 16:59:13 +0100
Ready: True
Restart Count: 0
Environment:
POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from cert-manager-token-9ck7b (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
cert-manager-token-9ck7b:
Type: Secret (a volume populated by a Secret)
SecretName: cert-manager-token-9ck7b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p><code>kubectl describe clusterissuer</code></p>
<pre><code>Name: letsencrypt-staging
Namespace:
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: ClusterIssuer
Metadata:
Cluster Name:
Creation Timestamp: 2018-11-06T16:00:23Z
Generation: 1
Resource Version: 10184529
Self Link: /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-staging
UID: 11e44fe0-e1dd-11e8-8bc6-42010a840078
Spec:
Acme:
Email: [email protected]
Http 01:
Private Key Secret Ref:
Key:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/7297218
Conditions:
Last Transition Time: 2018-11-06T16:00:33Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
</code></pre>
<p><code>kubectl -n coffeer-ci describe certificate</code></p>
<pre><code>Name: mydomain.fr
Namespace: coffeer-ci
Labels: <none>
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Cluster Name:
Creation Timestamp: 2018-11-06T16:10:57Z
Generation: 1
Resource Version: 10197662
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/coffeer-ci/certificates/mydomain.fr
UID: 8b6d508a-e1de-11e8-8bc6-42010a840078
Spec:
Acme:
Config:
Domains:
mydomain.fr
Http 01:
Ingress: coffee-ingress
Common Name: mydomain.fr
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: domain-tls
Status:
Acme:
Order:
Challenges:
Authz URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz/wm5MvoFA12U37qdXdBCccyIWezpEsLoxHUGVDacmHpI
Domain: mydomain.fr
Http 01:
Ingress: coffee-ingress
Key: RjHMkquS8Hh4dvJWZp2jLGW-MrSKEba-y8B8PzmVQ-M.4LwovuRj4ZgjrwLuye1cd5ftBRYaGIvtK__igMmDUD8
Token: RjHMkquS8Hh4dvJWZp2jLGW-MrSKEba-y8B8PzmVQ-M
Type: http-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/challenge/wm5MvoFA12U37qdXdBCccyIWezpEsLoxHUGVDacmHpI/192521366
Wildcard: false
URL: https://acme-staging-v02.api.letsencrypt.org/acme/order/7297218/12596140
Conditions:
Last Transition Time: 2018-11-06T17:47:28Z
Message: http-01 self check failed for domain "mydomain.bap.fr"
Reason: ValidateError
Status: False
Type: Ready
Events: <none>
</code></pre>
<p><code>kubectl -n coffeer-ci describe ingress</code></p>
<pre><code>Name: coffee-ingress
Namespace: coffeer-ci
Address: 35.233.8.223
Default backend: default-http-backend:80 (10.16.1.5:8080)
Rules:
Host Path Backends
---- ---- --------
mydomain.fr
/ coffee-service:80 (<none>)
/.well-known/acme-challenge/RjHMkquS8Hh4dvJWZp2jLGW-MrSKEba-y8B8PzmVQ-M cm-acme-http-solver-kw2w4:8089 (<none>)
Annotations:
ingress.kubernetes.io/forwarding-rule: k8s-fw-coffeer-ci-coffee-ingress--4b1e5690f5d3853f
ingress.kubernetes.io/target-proxy: k8s-tp-coffeer-ci-coffee-ingress--4b1e5690f5d3853f
ingress.kubernetes.io/url-map: k8s-um-coffeer-ci-coffee-ingress--4b1e5690f5d3853f
kubernetes.io/ingress.global-static-ip-name: coffeer-ci-static
kubernetes.io/tls-acme: true
ingress.kubernetes.io/backends: {"k8s-be-32603--4b1e5690f5d3853f":"HEALTHY"}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 40m nginx-ingress-controller Ingress coffeer-ci/coffee-ingress
Normal CreateCertificate 34m cert-manager Successfully created Certificate "domain-tls"
Warning Sync 25m (x23 over 59m) loadbalancer-controller Could not find TLS certificates. Continuing setup for the load balancer to serve HTTP. Note: this behavior is deprecated and will be removed in a future version of ingress-gce
Normal UPDATE 15m (x8 over 39m) nginx-ingress-controller Ingress coffeer-ci/coffee-ingress
Warning Sync 3m (x49 over 1h) loadbalancer-controller Error during sync: googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0 globally., quotaExceeded
</code></pre>
<p>I have also the error <strong>Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0 globally., quotaExceeded</strong> in the ingress.</p>
<p>Thanks</p>
| <blockquote>
<p>Error preparing issuer for certificate coffeer-ci/mydomain.fr: http-01 self check failed for domain "mydomain.fr"</p>
</blockquote>
<p>This means that it can't perform the HTTP check that proves you actually own the domain. Do you own <code>mydomain.fr</code>? If yes, you need to add a DNS entry to make <code>mydomain.fr</code> resolve to the external IP of the load balancer (an A record, or a CNAME record if the load balancer only has a DNS name, as is the case with AWS ELBs). This way Let's Encrypt can use it to verify that you own the domain.</p>
<p>The other error:</p>
<blockquote>
<p>Warning Sync 3m (x49 over 1h) loadbalancer-controller Error during sync: googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 9.0 globally., quotaExceeded</p>
</blockquote>
<p>looks like a byproduct of not being able to verify the domain. If you don't specify an Ingress, it looks like cert-manager creates one for you with a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">'LoadBalancer'</a> type of service. It appears to create it initially, but it keeps trying to sync it on GCP (perhaps to check whether it can configure port 443), and after a while the GCP API starts throttling you.</p>
|
<p>So, is it possible to share the same pod among helm packages with a common reference. Example:</p>
<p>Scenario:</p>
<pre><code>Package A
...
- requirements.yml
require: C
Package B
...
- requirements.yml
require: C
</code></pre>
<p>When I run:</p>
<pre><code>helm install A
helm install B
</code></pre>
<p>These two pods, for projects A and B, should use the same C pod.
Is that possible? Is there documentation to help me with that?</p>
<p>PS: The C package in my case is a broker, but both A & B package can be deployed separately.</p>
| <p>This should work fine with Helm. A little bit of background <a href="https://docs.helm.sh/developing_charts/#operational-aspects-of-using-dependencies" rel="nofollow noreferrer">here</a>. One key aspect here is <code>update</code>:</p>
<blockquote>
<ul>
<li>created/updated in that order.</li>
</ul>
</blockquote>
<p>When you update an object, i.e. run <code>kubectl apply</code> on a Pod/Deployment/Service/etc., and the object already exists with the same spec, it won't be changed, so you'll end up with the same object in the end.</p>
<p>Also, Kubernetes objects with the same name use the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#standard-api-terminology" rel="nofollow noreferrer">idempotency principle</a>:</p>
<blockquote>
<p>All objects will have a unique name to allow idempotent creation and retrieval</p>
</blockquote>
<p>In your example:</p>
<pre><code>helm install stable/packageA => which also installs PackageC
helm install stable/packageB => will update PackageC, but it's already present and won't change.
</code></pre>
<p>You have to make sure that the dependencies of <code>PackageA</code> and <code>PackageB</code> point to exactly the same version of <code>PackageC</code>.</p>
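<p>For illustration, both parent charts could pin the same dependency version in their <code>requirements.yaml</code> (the chart names, version and repository URL here are hypothetical):</p>
<pre><code># requirements.yaml in both packageA and packageB
dependencies:
- name: packageC
  version: 1.2.3
  repository: https://charts.example.com
</code></pre>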
|
<p>I'm trying to set up a CI pipeline using Cloud Build. My build file builds and pushes the Docker images, and then uses the <code>kubectl</code> builder to update the images in a kubernetes deployment. However I'm getting the following error:</p>
<pre><code>Error from server (NotFound): deployments.extensions "my-app" not found
Running: kubectl set image deployment my-app my-app-api=gcr.io/test-project-195004/my-app-api:ef53550e2ahy784a14iouyh79712c9f
</code></pre>
<p>I've verified via the UI that the deployment is active and has that name. Thought it could be a permissions issue but as far as I know the Cloud Build service account has the Kubernetes Engine Admin role, and is successfully able to pull the cluster auth data in the previous step.</p>
<p>EDIT: As requested, here is my build script:</p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
</code></pre>
<p>And the deployment.yaml --</p>
<pre><code>metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-11-04T17:34:44Z
  generation: 1
  labels:
    app: my-app
  name: my-app
  namespace: my-app
  resourceVersion: "4370"
  selfLink: /apis/extensions/v1beta1/namespaces/my-app/deployments/my-app
  uid: 65kj54g3-e057-11e8-81bc-42010aa20094
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: gcr.io/test-project/my-app-api:f16erogierjf1abd436e733398a08e1b76ce6b712
        imagePullPolicy: IfNotPresent
        name: my-appi-api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2018-11-04T17:37:59Z
    lastUpdateTime: 2018-11-04T17:37:59Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
</code></pre>
| <p>You need to define the namespace for your deployment when using <code>kubectl</code>
if you're not using the default namespace.</p>
<p>Because your deployment uses a namespace (<code>namespace: my-app</code>), you will need to add it to the command using the <code>--namespace</code> flag.</p>
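<p>For reference, the equivalent plain <code>kubectl</code> command (the one from your error output, with the namespace added) would look like this:</p>
<pre><code>kubectl set image deployment my-app my-app-api=gcr.io/test-project-195004/my-app-api:ef53550e2ahy784a14iouyh79712c9f --namespace my-app
</code></pre>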
<p>Here's how to do it inside <code>cloudbuild.yaml</code></p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  - --namespace
  - my-app
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
</code></pre>
|
<p>Need an advice to dockerize and run a node JS static-content app on K8s cluster.</p>
<p>I have static web content; I run "npm run build" in the terminal, which generates /build, and I point my IIS webserver to /build/Index.html.</p>
<p>Now, I've started creating a Dockerfile. How do I point my Node.js image to serve the /build/Index.html file?</p>
<pre><code>FROM node:carbon
WORKDIR /app
COPY /Core/* ./app
npm run build
EXPOSE 8080
CMD [ "node", ".app/build/index.html" ]
</code></pre>
<p>How can I run this app only on node v8.9.3 and npm 5.6.0?</p>
<p>Any inputs please ?</p>
| <p>You can specify the version of node specifically:</p>
<pre><code>FROM node:8.9.3
</code></pre>
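<p>If you also need to pin the npm version, one way (a sketch, not the only option) is to install it explicitly in the image:</p>
<pre><code>FROM node:8.9.3
# pin the npm version the build should use
RUN npm install -g npm@5.6.0
</code></pre>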
|
<p>I have Kubernetes v1.12.1 installed on my cluster.
I downloaded the metrics-server from the following repo:
<a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server</a></p>
<p>and then run the following command:</p>
<pre><code>kubectl create -f metrics-server/deploy/1.8+/
</code></pre>
<p>and then I tried autoscaling a deployment using:</p>
<pre><code>kubectl autoscale deployment example-app-tier --min 1 --max 3 --cpu-percent 70 --spacename example
</code></pre>
<p>but the targets column here shows unknown/70%:</p>
<pre><code>kubectl get hpa --spacename example
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example example-app-tier Deployment/example-app-tier <unknown>/70% 1 3 1 3h35m
</code></pre>
<p>and when I try running the kubectl top nodes or pods I get an error saying:</p>
<pre><code>error: Metrics not available for pod default/pi-ss8j6, age: 282h48m5.334137739s
</code></pre>
<p>So I'm looking for any tutorial that helps me step by step enabling autoscaling using metrics-server or Prometheus (and not Heapster as it is deprecated and will not be supported anymore)</p>
<p>Thank you!</p>
| <p>You need to register your metrics-server with the API server and make sure they can communicate.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/59438" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/59438</a></p>
<p>If that is done already, check the help for the <code>kubectl top</code> command in your version of k8s; the command may default to using Heapster, and you may need to tell it to use the new service instead.</p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/56206" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/56206</a></p>
<p>From the help output it looks like it has not yet been ported to the new metrics server and still looks for Heapster by default.</p>
<pre><code>C02W84XMHTD5:tmp iahmad$ kubectl top node --help
Display Resource (CPU/Memory/Storage) usage of nodes.
The top-node command allows you to see the resource consumption of nodes.
Aliases:
node, nodes, no
Examples:
# Show metrics for all nodes
kubectl top node
# Show metrics for a given node
kubectl top node NODE_NAME
Options:
--heapster-namespace='kube-system': Namespace Heapster service is located in
--heapster-port='': Port name in service to use
--heapster-scheme='http': Scheme (http or https) to connect to Heapster as
--heapster-service='heapster': Name of Heapster service
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l
key1=value1,key2=value2)
Usage:
kubectl top node [NAME | -l label] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
</code></pre>
<p><strong>Note:</strong> I am using 1.10; the options may be different in your version.</p>
|
<p>I am writing a Go binary that will run on my local machine. I wish to authenticate with the kubernetes API for a GKE cluster. How can I get a client key and certificate?</p>
<p>(Note that a kubernetes service account does not seem appropriate because my binary does not itself run on the cluster. And I do not want to have to install gcloud locally because I may want to distribute my binary to others, so I cannot use the gcloud auth helper flow.)</p>
| <p>You can't get it from GKE because GCP doesn't expose the CA key for you to create client certificate/key pairs for you to authenticate with the cluster. That key lives in the Kubernetes master(s) and GKE doesn't give you direct access to them (They manage them). I recommend you use a <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file" rel="nofollow noreferrer">token</a>. </p>
<p>Check my other <a href="https://stackoverflow.com/a/53182585/2989261">answer</a> with more details. Basically, create a ServiceAccount and bind it to a Role or ClusterRole (RBAC). You can actually authenticate outside your cluster using a token tied to a ServiceAccount.</p>
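<p>A rough sketch of that flow (the names are illustrative, <code>view</code> is just an example ClusterRole, and this assumes a cluster from the era this was written, where a token secret is created automatically for every ServiceAccount):</p>
<pre><code># create a service account and give it read-only cluster access
kubectl create serviceaccount my-client -n default
kubectl create clusterrolebinding my-client-view --clusterrole=view --serviceaccount=default:my-client

# extract the bearer token from the service account's auto-created secret
SECRET=$(kubectl get serviceaccount my-client -n default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n default -o jsonpath='{.data.token}' | base64 --decode
</code></pre>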
|
<p>I am trying to invoke kubectl from within my CI system. I wish to use a google cloud service account for authentication. I have a secret management system in place that injects secrets into my CI system.</p>
<p>However, my CI system does not have gcloud installed, and I do not wish to install that. It only contains kubectl. Is there any way that I can use a credentials.json file containing a gcloud service account (not a kubernetes service account) directly with kubectl?</p>
| <p>The easiest way to skip the gcloud CLI is to probably use the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-2-use-the-token-option" rel="noreferrer"><code>--token</code></a> option. You can get a token with RBAC by creating a service account and tying it to a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noreferrer"><code>ClusterRole</code> or <code>Role</code></a> with either a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noreferrer"><code>ClusterRoleBinding</code> or <code>RoleBinding</code></a>.</p>
<p>Then from the command line:</p>
<pre><code>$ kubectl --token <token-from-your-service-account> get pods
</code></pre>
<p>You still will need a <code>context</code> in your <code>~/.kube/config</code>:</p>
<pre><code>- context:
    cluster: kubernetes
  name: kubernetes-token
</code></pre>
<p>Otherwise, you will have to use:</p>
<pre><code>$ kubectl --insecure-skip-tls-verify --token <token-from-your-service-account> -s https://<address-of-your-kube-api-server>:6443 get pods
</code></pre>
<p>Note that if you don't want the token to show up on the logs you can do something like this:</p>
<pre><code>$ kubectl --token $(cat /path/to/a/file/where/the/token/is/stored) get pods
</code></pre>
<p>Also, note that this doesn't prevent you from someone running <code>ps -Af</code> on your box and grabbing the token from there, for the lifetime of the <code>kubectl</code> process (It's a good idea to rotate the tokens)</p>
<p>Edit:</p>
<p>You can use the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file" rel="noreferrer"><code>--token-auth-file=/path/to/a/file/where/the/token/is/stored</code></a> with <code>kubectl</code> to avoid passing it through <code>$(cat /path/to/a/file/where/the/token/is/stored)</code></p>
|
<p>I am able to use a imagePullSecret in my spec so that my container section images are able to connect to a private repository. If I have initContainer section also, it is not using the imagePullSecret and the deployment fails.</p>
| <p><code>imagePullSecrets</code> works for both normal containers and initContainers; you only need to define it once in your pod spec.</p>
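<p>A minimal sketch (the pod, image, and secret names are hypothetical):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  imagePullSecrets:
  - name: my-registry-secret        # used for pulling both images below
  initContainers:
  - name: init
    image: myregistry.example.com/init-image:1.0
    command: ["sh", "-c", "echo init done"]
  containers:
  - name: app
    image: myregistry.example.com/app-image:1.0
</code></pre>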
|
<p>Good morning,</p>
<p>I'm very new to Docker and Kubernetes, and I do not really know where to start looking for help. I created a database container with Docker, and I want to manage and scale it with Kubernetes. I started by installing minikube on my machine, and tried to create a Deployment first and then a StatefulSet for the database container. But I have a problem with the StatefulSet when creating a Pod with a database (MariaDB or MySQL). When I use a Deployment, the Pods load and work fine. However, the same Pods do not work when used in a StatefulSet, returning errors asking for the MYSQL constants. This is the Deployment, which I create with the command <code>kubectl create -f deployment.yaml</code>:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mydb-deployment
spec:
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: mydb
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
</code></pre>
<p>And when listing the deployments: <code>kubectl get Deployments</code>:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
mydb-deployment 1 1 1 1 2m
</code></pre>
<p>And the pods: <code>kubectl get pods</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mydb-deployment-59c867c49d-4rslh 1/1 Running 0 50s
</code></pre>
<p>But since I want to create a persistent database, I try to create a statefulSet object with the same container, and a persistent volume.
Thus, when creating the following StatefulSet with <code>kubectl create -f statefulset.yaml</code>:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: aravo-database
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      volumes:
      - name: volume-mydb
        persistentVolumeClaim:
          claimName: config-mydb
</code></pre>
<p>With the service <code>kubectl create -f service-db.yaml</code>: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    name: mydb-pod
</code></pre>
<p>And the permission file <code>kubectl create -f permissions.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-mydb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre>
<p>The pods do not work. They give an error:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
statefulset-mydb-0 0/1 CrashLoopBackOff 1 37s
</code></pre>
<p>And when analyzing the logs kubectl logs statefulset-mydb-0:</p>
<pre><code>`error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD`
</code></pre>
<p>How is it possible that it asks for these variables when the container already has an initialization script and works perfectly? And why does it ask only when launching as a StatefulSet, and not when launching the Deployment?</p>
<p>Thanks in advance.</p>
| <p>I pulled your image <code>ignasiet/aravomysql</code> to try to figure out what went wrong. As it turns out, your image already has an initialized MySQL data directory at <code>/var/lib/mysql</code>:</p>
<pre><code>$ docker run -it --rm --entrypoint=sh ignasiet/aravomysql:latest
# ls -al /var/lib/mysql
total 110616
drwxr-xr-x 1 mysql mysql 240 Nov 7 13:19 .
drwxr-xr-x 1 root root 52 Oct 29 18:19 ..
-rw-rw---- 1 root root 16384 Oct 29 18:18 aria_log.00000001
-rw-rw---- 1 root root 52 Oct 29 18:18 aria_log_control
-rw-rw---- 1 root root 1014 Oct 29 18:18 ib_buffer_pool
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile0
-rw-rw---- 1 root root 50331648 Oct 29 18:18 ib_logfile1
-rw-rw---- 1 root root 12582912 Oct 29 18:18 ibdata1
-rw-rw---- 1 root root 0 Oct 29 18:18 multi-master.info
drwx------ 1 root root 2696 Nov 7 13:19 mysql
drwx------ 1 root root 12 Nov 7 13:19 performance_schema
drwx------ 1 root root 48 Nov 7 13:19 yypy
</code></pre>
<p>However, when mounting a <code>PersistentVolume</code> or just a simple Docker volume to <code>/var/lib/mysql</code>, it's initially empty and therefore the script thinks your database is uninitialized. You can reproduce this issue with:</p>
<pre><code>$ docker run -it --rm --mount type=tmpfs,destination=/var/lib/mysql ignasiet/aravomysql:latest
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
</code></pre>
<p>If you have a bunch of scripts you need to run to initialize the database, you have two options:</p>
<ol>
<li>Create a Dockerfile based on the <code>mysql</code> Dockerfile, and add shell scripts or SQL scripts to <code>/docker-entrypoint-initdb.d</code>. More details available <a href="https://docs.docker.com/samples/library/mysql/#docker-secrets" rel="nofollow noreferrer">here</a> under "Initializing a fresh instance".</li>
<li>Use the <code>initContainers</code> property in the PodTemplateSpec, something like:</li>
</ol>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-mydb
spec:
  serviceName: mydb-pod
  template:
    metadata:
      labels:
        name: mydb-pod
    spec:
      containers:
      - name: aravo-database
        image: ignasiet/aravomysql
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      initContainers:
      - name: aravo-database-init
        command:
        - /script/to/initialize/database
        image: ignasiet/aravomysql
        volumeMounts:
        - name: volume-mydb
          mountPath: /var/lib/mysql
      volumes:
      - name: volume-mydb
        persistentVolumeClaim:
          claimName: config-mydb
</code></pre>
|
<p>We are currently evaluating various Kubernetes aaS providers and Azure AKS was looking like a front runner. However, recently some information has emerged that the master and etcd nodes are not run in a HA configuration?</p>
<p>Is this correct? I find it hard to believe! There is nothing on their website with regard to the architecture of AKS, which is always a worry, but since they provide the master nodes as a service I don't understand how they would leave such a gaping hole.</p>
<p>Does any one have any more in depth information regarding the AKS architecture?</p>
| <p>The worker nodes are in an Availability set meaning they are spread across update domains and fault domains. In a service level agreement (SLA), the provider agrees to reimburse the customer for the cost of the service if the published service level isn't met. Since AKS itself is free, there is no cost available to reimburse and thus no formal SLA. However, AKS seeks to maintain availability of at least 99.5% for the Kubernetes API server. (<a href="https://learn.microsoft.com/en-us/azure/aks/faq#does-aks-offer-a-service-level-agreement" rel="nofollow noreferrer">source</a>)</p>
|
<p>I'm a newbie in the Kubernetes world and I'm trying to figure out how a volumeClaim or volumeClaimTemplates defined in a StatefulSet can be linked to a specific PersistentVolume.</p>
<p>I've followed some tutorials to understand and set a local PersistentVolume. If I take Elasticsearch as an example, when the StatefulSet starts, the PersistantVolumeClaim is bound to the PersistantVolume.</p>
<p>As you know, for a local PersistentVolume we must define the local path to the storage destination.</p>
<p>For Elasticsearch I've defined something like this </p>
<pre><code>local:
path: /mnt/kube_data/elasticsearch
</code></pre>
<p>But in a real project, there are more than one persistent volume. So, I will have more than one folder in path /mnt/kube_data. How does Kubernetes select the right persistent volume for a persistent volume claim? </p>
<p>I don't want Kubernetes to put Database data in a persistent volume created for another service.</p>
<p>Here is the configuration for Elasticsearch :</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: elasticsearch-sts
spec:
serviceName: elasticsearch
replicas: 1
[...]
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-storage
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-elasticsearch
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/elasticsearch
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/master
operator: Exists
---
</code></pre>
| <p>You need a <code>claimRef</code> in the PersistentVolume definition that names the PVC you want to bind the PV to. The <code>claimRef</code> should also carry the namespace of that PVC, because PVs are cluster-scoped while PVCs are namespaced. A PVC with the same name can exist in two different namespaces, so the namespace is required along with the PVC name even when the PVC lives in the <code>default</code> namespace.</p>
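<p>As a rough sketch (not tested against your manifests), pre-binding the PV from the question to the PVC that the StatefulSet generates could look like this. Note that a <code>volumeClaimTemplates</code> entry produces PVCs named <code><template-name>-<statefulset-name>-<ordinal></code>, and the namespace below is an assumption:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-elasticsearch
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  claimRef:
    namespace: default
    name: elasticsearch-data-elasticsearch-sts-0
  local:
    path: /mnt/elasticsearch
  # nodeAffinity stays the same as in the question
</code></pre>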
<p>You can refer following answer for PV,PVC and statefulset yaml files for local storage.</p>
<p><a href="https://stackoverflow.com/questions/52948124/is-it-possible-to-mount-different-pods-to-the-same-portion-of-a-local-persistent/52952505#52952505">Is it possible to mount different pods to the same portion of a local persistent volume?</a></p>
<p>Hope this helps.</p>
|
<p>I am trying to create a Kubernetes application in which I have created one pod and a service for the backend (a Spring Boot microservice), plus a frontend pod and a LoadBalancer service.</p>
<p>I wanted to know how would I call the backend API from frontend pod in Kubernetes?</p>
<p>Here are the running services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
angular LoadBalancer 10.100.15.215 a17f17fd2e25011e886100a0e002191e-1613530232.us-east-1.elb.amazonaws.com 4200:30126/TCP 12s app=angular
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 35m <none>
login ClusterIP 10.100.99.52 <none> 5555/TCP 13m app=login,tier=backend
</code></pre>
<p>I am calling the following API from frontend and it is showing name not resolved error:</p>
<pre><code>http://login/login
</code></pre>
<p>I have also tried to call the API with cluster IP but that failed.</p>
| <p>Looks like your backend service is running on port <code>5555</code>, so you would have to call your backend service like this:</p>
<pre><code>http://login:5555/login
</code></pre>
<p>This assuming the pods for your frontend are on the same Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a>. If they are on a different namespace you would call something like this:</p>
<pre><code>http://login.<namespace>.svc.cluster.local:5555/login
</code></pre>
<p>Also as described <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns" rel="nofollow noreferrer">here</a>.</p>
<p>Note that this will work only within the cluster, if you are hitting your Angular frontend from a web browser outside of the cluster, this will not work, because the web browser would have no idea of where your backend is in the cluster. So either you will have to expose your backend using another LoadBalancer type of service or you may consider using a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a> with an ingress controller.</p>
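<p>To verify the in-cluster DNS resolution from the frontend side, you could run something like the following (the pod name is a placeholder, and this assumes the frontend image ships <code>nslookup</code>/<code>wget</code>):</p>
<pre><code>$ kubectl exec -it <angular-pod-name> -- nslookup login
$ kubectl exec -it <angular-pod-name> -- wget -qO- http://login:5555/login
</code></pre>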
|
<p>I'm trying to install a helm package on a kubernetes cluster which allegedly has RBAC disabled.
I'm getting a permission error mentioning <code>clusterroles.rbac.authorization.k8s.io</code>, which is what I'd expect if RBAC was <em>enabled</em>.</p>
<p>Is there a way to check with <code>kubectl</code> whether RBAC really is disabled?</p>
<p>What I've tried:</p>
<ul>
<li><code>kubectl describe nodes --all-namespaces | grep -i rbac</code> : nothing comes up</li>
<li><code>kubectl describe rbac --all-namespaces | grep -i rbac</code> : nothing comes up</li>
<li><code>kubectl config get-contexts | grep -i rbac</code> : nothing comes up</li>
<li><code>k get clusterroles</code> it says "No resources found", not an error message. So does that mean that RBAC <em>is</em> enabled?</li>
<li><code>kuebctl describe cluster</code> isn't a thing</li>
</ul>
<p>I'm aware that maybe this is the <a href="https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">x-y problem</a> because it's possible the helm package I'm installing is expecting RBAC to be enabled. But still, I'd like to know how to check whether or not it is enabled/disabled.</p>
| <p>You can check this by executing the command <code>kubectl api-versions</code>; if RBAC is enabled you should see the API group version <code>rbac.authorization.k8s.io/v1</code> in the output.</p>
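<p>For example (the exact versions listed depend on your cluster):</p>
<pre><code>$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
</code></pre>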
<p>In AKS, the best way is to check the cluster's resource details at <a href="https://resources.azure.com" rel="noreferrer">resources.azure.com</a>.
If you can spot <code>"enableRBAC": true</code>, your cluster has RBAC enabled.
Please note that existing non-RBAC enabled AKS clusters cannot currently be updated for RBAC use. (thanks @DennisAmeling for the clarification)</p>
|
<p>I have an NGINX in front of several PODs, exposed through ClusterIP: none.</p>
<p>NGINX is forwarding traffic to these nodes like that:</p>
<pre><code>upstream api {
server my-api:1066;
}
</code></pre>
<p>Will this configuration distribute traffic evenly among all PODs behind the <code>my-api</code> hostname?
Will failing PODs be removed from the hostname resolution?</p>
| <p>The default traffic distribution for Kubernetes services is random based on the default <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">proxy mode: iptables</a>. (This is likely your case)</p>
<p>In very old Kubernetes versions (<1.1) the default <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">proxy mode: userspace</a> would default to round-robin (you can still switch to that mode if you'd like to)</p>
<p>The newer way (< 1.8) of doing round robin (optionally) is to use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">proxy mode: ipvs</a>.</p>
<p>You can also look at other solutions like <a href="https://cilium.io/" rel="nofollow noreferrer">Cilium</a> that provide load-balancing capabilities.</p>
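<p>If you are unsure which proxy mode your cluster is currently running, on kubeadm-provisioned clusters the kube-proxy configuration usually lives in a ConfigMap (this assumes that layout; an empty <code>mode</code> means the iptables default):</p>
<pre><code>$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
    mode: ""
</code></pre>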
|
<p>I get this error when i tried to deploy one deployment with ten replicas.</p>
<blockquote>
<p>0/2 nodes are available: 1 Insufficient memory, 1 node(s) had taints that the pod didn't tolerate.</p>
</blockquote>
<p>I don't understand why it says two nodes. Is it the same node reporting the same problem?</p>
<p>I have a lot of RAM (1GB) free. </p>
<p>How can I fix this error without adding another node?</p>
<p>I have in deployment yaml file this for resources:</p>
<pre><code>limits:
  cpu: 1000m
  memory: 1000Mi
requests:
  cpu: 100m
  memory: 200Mi
</code></pre>
<p>Server:</p>
<ol>
<li><p>Master: </p>
<pre><code>CPU: 2
RAM: 2 - 1 Free
</code></pre></li>
<li><p>Slave:</p>
<pre><code>CPU: 2
RAM: 2 - 1 Free
</code></pre></li>
</ol>
| <p>I think you have multiple issues here.</p>
<p>First, consider the format of the error message you get:</p>
<blockquote>
<p>0/2 nodes are available: 1 Insufficient memory, 1 node(s) had taints that the pod didn't tolerate.</p>
</blockquote>
<p>The first part is clear: you have 2 nodes in total and the pod could not be scheduled on either of them. Then comes a list of conditions which prevent scheduling on each node. One node can be affected by multiple issues, for example low memory and insufficient CPU, so the numbers can add up to more than the total number of nodes.</p>
<p>The second issue is that the requests you write into your YAML file apply per replica. If you instantiate the same pod with a 100Mi memory request 5 times, they need 500Mi in total. You want to run 10 pods which each request 200Mi of memory, so you need 2000Mi of free memory.</p>
<p>Your error message already implies that there is not enough memory on one node. I would recommend you inspect both nodes via <code>kubectl describe node <node-name></code> to find out how much free memory Kubernetes "sees" there. Kubernetes always blocks the full amount of memory a pod requests regardless how much this pod uses.</p>
<p>The taint in your error message tells you that the other node, possibly the master, has a taint which is not tolerated by the deployment. For more about taints and tolerations see the <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">documentation</a>. In short, find out which taint on the node prevents the scheduling and remove it via <code>kubectl taint nodes <node-name> <taint-name>-</code>.</p>
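<p>A quick sketch of that inspection (the node name is a placeholder, and the taint shown is just the common master taint used as an example):</p>
<pre><code>$ kubectl describe node <node-name> | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
$ kubectl taint nodes <node-name> node-role.kubernetes.io/master-
</code></pre>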
|
<p>I am trying to figure out how tainting works in Kubernetes. I have the following settings in my kubelet spec, and I am unsure what value to use for <code>register-with-taints</code>, given that I only want to allow certain pods to be placed on this node. All other pods should be rejected; for any pod other than the specific one, the node should behave as unschedulable.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code> --container-runtime=docker \
--register-node=true \
--allow-privileged=true \
--register-schedulable=false \
--register-with-taints=
--pod-manifest-path=/etc/kubernetes/manifests \</code></pre>
</div>
</div>
</p>
| <p>The <code>--register-with-taints</code> argument to <code>kubelet</code> is a node-level argument and registers the node with the given list of taints.</p>
<p><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">Here</a> is the documentation about <code>--register-with-taints</code>:</p>
<pre><code>--register-with-taints []api.Taint
    Register the node with the given list of taints (comma separated "<key>=<value>:<effect>").
No-op if register-node is false.
</code></pre>
<p>If <code>--register-with-taints</code> is set, it should be of the form <code><key>=<value>:<effect></code> (or comma separated like <code><key1>=<value1>:<effect1>,<key2>=<value2>:<effect2></code>).</p>
<blockquote>
<p>i want to only allow certain pods to be placed on this node</p>
</blockquote>
<p>To do this:</p>
<ol>
<li>Pass something like <code>--register-with-taints=key=value:NoSchedule</code> to <code>kubelet</code>. This means that no pod will be able to schedule onto this node unless it has a matching toleration.</li>
<li><p>Now, to allow a certain pod to be placed on this node, specify a toleration matching the above taint for the pod in the PodSpec yaml. Both of the following tolerations "match" the above taint, and thus a pod with either toleration below would be able to schedule onto the node:</p>
<pre><code>tolerations:
- key: "key"
operator: "Equal"
value: "value"
effect: "NoSchedule"
---- OR ----
tolerations:
- key: "key"
operator: "Exists"
effect: "NoSchedule"
</code></pre></li>
</ol>
<p>More information about taints and tolerations in Kubernetes is <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">here</a>.</p>
|
<p>How to secure <code>.kube/config</code>, so that even if our computer containing that file is compromised, our cluster is still secure? </p>
<p>e.g. It's not as straightforward as running <code>kubectl delete deployment</code> to delete our deployment (assuming we are the super admin in RBAC)</p>
| <p>There are multiple ways of doing this, in case your machine gets compromised and you want to disable access to the cluster. Note that no solution will prevent a small window where a hacker can gain access and do some damage.</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator" rel="nofollow noreferrer">OIDC</a> authentication (OpenID Connect). Mitigation -> Disable the OIDC user on the OIDC provider and enable a lifetime for the session in the OIDC provider.</p></li>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication" rel="nofollow noreferrer">Webhook</a> authentication. Mitigation -> disable client certs on the webhook service and the token lifetime is controlled by <code>--authentication-token-webhook-cache-ttl</code> which defaults to 2 minutes. In this case, the webhook service manages the tokens on your K8s cluster.</p></li>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authenticating-proxy" rel="nofollow noreferrer">Authenticating Proxy</a>. Mitigation -> disable users on the proxy.</p></li>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins" rel="nofollow noreferrer">Client Go credential plugins</a>. Mitigation -> Disable user in the provider where the plugin is authenticating with. For example, the <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="nofollow noreferrer">AWS IAM Authenticator</a> uses this, so you would delete or disable the IAM user on AWS.</p></li>
</ul>
|
<p>Is kubectl top the current memory / CPU value or is it an average of a certain time period ?</p>
| <p>It's the actual usage of the pods and nodes at the moment you issue the command.</p>
<p>Example:</p>
<pre><code>$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-xxx-xx-x-xxx.us-west-2.compute.internal 62m 6% 1014Mi 27%
</code></pre>
<p>Using 62 millicores (0.062 cores), which is 6% of the node's CPU, and 1014 mebibytes, which is 27% of the memory, at the moment the API returned.</p>
<p>You can also find more information <a href="https://stackoverflow.com/a/45206488/2989261">here</a>.</p>
|
<p>I have a kubernetes deployment which uses secrets for environment variables. So the pod has env variables which are using the secret. </p>
<p>Now I updated my deployment manifest json and the secret json to remove some of the variables, but when I apply these files, I get <strong>CreateContainerConfigError</strong> for the pod, and in its description I see:</p>
<blockquote>
<p>Couldn't find key FOO in Secret mynamespace/secret-mypod</p>
</blockquote>
<p>I've deleted this key from my secret json and from the deployment manifest json. Why am I still getting this error? </p>
<p>I understand that I can delete the deployment and apply it to get it to work, but I don't want to do that. </p>
<p>What is the right way to handle this scenario?</p>
| <p>I would do a <code>replace</code> on the deployment json. If the original deployment had the usual update settings, a new pod will be created with the new deployment config. If it starts ok, the old one will be deleted.</p>
<p>To be safer, you could create a new secret with some version indicator in the name, refer to the new secret name in the new deployment definition. That way if there's a problem with the new pod or the old needs to be redeployed, the old secret will still be there.</p>
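<p>A minimal sketch of that flow (the file, secret and deployment names here are made up):</p>
<pre><code># create a new secret without the FOO key, under a versioned name
$ kubectl apply -f secret-mypod-v2.json
# point the deployment's env section at secret-mypod-v2, then roll it out
$ kubectl replace -f deployment-mypod.json
# watch the new pods come up before the old ones are removed
$ kubectl rollout status deployment/mypod
</code></pre>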
|
<p>I have a question based on my experience trying to implement memory requests/limits correctly in an OpenShift OKD cluster. I started by setting no request, then watching to see what cluster metrics reported for memory use, then setting something close to that as a request. I ended up with high-memory-pressure nodes, thrashing, and oom kills. I have found I need to set the requests to something closer to the VIRT size in ‘top’ (include the program binary size) to keep performance up. Does this make sense? I'm confused by the asymmetry between request (and apparent need) and reported use in metrics.</p>
| <p>You always need to leave a bit of memory headroom for overhead and memory spills. If for some reason the container exceeds its memory limit, whether because of your application, your binary, or some garbage collection system, it will get killed. For example, this is common in Java apps, where you specify a heap and you need extra overhead for the garbage collector and other things such as:</p>
<ul>
<li>Native JRE</li>
<li>Perm / metaspace</li>
<li>JIT bytecode</li>
<li>JNI</li>
<li>NIO</li>
<li>Threads</li>
</ul>
<p><a href="https://jaxenter.com/nobody-puts-java-container-139373.html" rel="nofollow noreferrer">This blog</a> explains some of them.</p>
|
<p>I'm trying to configure an ingress on gke to serve two different ssl certificates on two different hosts.</p>
<p>My SSl certificates are stored as secrets and my k8s version is 1.10.9-gke.0 (I'm currently trying to upgrade to 1.11 to see if that changes anything).</p>
<p>Here is my ingress configuration :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30086--b1574396a1d7162f":"HEALTHY","k8s-be-31114--b1574396a1d7162f":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/https-target-proxy: k8s-tps-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/ssl-cert: k8s-ssl-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/static-ip: k8s-fw-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/target-proxy: k8s-tp-default-si-preproduction-ingress--b1574396a1d7162f
ingress.kubernetes.io/url-map: k8s-um-default-si-preproduction-ingress--b1574396a1d7162f
creationTimestamp: 2018-10-26T09:45:46Z
generation: 9
name: si-preproduction-ingress
namespace: default
resourceVersion: "1846219"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/si-preproduction-ingress
uid: e9bba9ad-d903-11e8-872e-42010a840feb
spec:
tls:
- hosts:
- domain_1
secretName: cert_1
- hosts:
- domain_2
secretName: cert_2
rules:
- host: domain_1
http:
paths:
- backend:
serviceName: si-preproduction-service
servicePort: 80
path: /*
- host: domain_2
http:
paths:
- backend:
serviceName: si-preproduction-service
servicePort: 80
path: /*
status:
loadBalancer:
ingress:
- ip: our_ip
</code></pre>
<p>My cert_1 is correctly served on domain_1, but it is also served on domain_2 (instead of cert_2) and therefore does not secure my connections as it is supposed to. </p>
<p>I'm also opening an issue on github <a href="https://github.com/kubernetes/ingress-nginx/issues/3371" rel="noreferrer">here</a>.</p>
| <p>Upgrading to k8s 1.11+ solved the problem. </p>
|
<p>Team,</p>
<p>I already have a cluster running and I need to update the OIDC value. is there a way I can do it without having to recreate the cluster?</p>
<p>ex: below is my cluster info and I need to update the <code>oidcClientID: spn:</code> </p>
<p>How can I do this as I have 5 masters running?</p>
<pre><code>kubeAPIServer:
storageBackend: etcd3
oidcClientID: spn:45645hhh-f641-498d-b11a-1321231231
oidcUsernameClaim: upn
oidcUsernamePrefix: "oidc:"
oidcGroupsClaim: groups
oidcGroupsPrefix: "oidc:"
</code></pre>
| <p>You update the kube-apiserver on your masters one by one (update/restart). If your cluster is set up correctly, when you get to the active kube-apiserver it should automatically fail over to another kube-apiserver master in standby. </p>
<p>You can add the oidc options in the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> pod manifest file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --authorization-mode=Node,RBAC
- --advertise-address=172.x.x.x
- --allow-privileged=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --oidc-client-id=...
- --oidc-username-claim=...
- --oidc-username-prefix=...
- --oidc-groups-claim=...
- --oidc-groups-prefix=...
...
</code></pre>
<p>Then you can restart your <code>kube-apiserver</code> container, if you are using docker:</p>
<pre><code>$ sudo docker restart <container-id-for-kube-apiserver>
</code></pre>
<p>Or if you'd like to restart all the components on the master:</p>
<pre><code>$ sudo systemctl restart docker
</code></pre>
<p>Watch for logs on the kube-apiserver container</p>
<pre><code>$ sudo docker logs -f <container-id-for-kube-apiserver>
</code></pre>
<p>Make sure you never have less running nodes than your <a href="https://blogs.msdn.microsoft.com/clustering/2011/05/27/understanding-quorum-in-a-failover-cluster/" rel="nofollow noreferrer">quorum</a> which should be 3 for your 5 master cluster, to be safe. If for some reason your etcd cluster falls out of quorum you will have to recover by recreating the etcd cluster and <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster" rel="nofollow noreferrer">restoring</a> from a backup. </p>
|
<p>I followed this tutorial to create a Kubernetes cluster on Azure to run build agents: <a href="http://www.chrisjohnson.io/2018/07/07/using-azure-kubernetes-service-aks-for-your-vsts-build-agents/" rel="nofollow noreferrer">http://www.chrisjohnson.io/2018/07/07/using-azure-kubernetes-service-aks-for-your-vsts-build-agents/</a></p>
<p>To recap what is there: a helm chart to do a deployment with a secret and a config map. For this deployment, I created a kubernetes cluster on Azure with all default settings and it is pulling an image from the docker hub with vsts build agent installed.</p>
<p>All was working fine, but recently pods started to be evicted pretty regularly, the message on them is:</p>
<blockquote>
<p>Message: Pod The node was low on resource: [DiskPressure].</p>
</blockquote>
<p>How can I fix this issue?</p>
| <p>Either/Or:</p>
<ul>
<li><p>You upgrade the size of your main node disks with something like <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks" rel="nofollow noreferrer">this</a>.</p></li>
<li><p>Check what pods are taking up space. Is it logs? Is it cached data? is it swap? Every application is different so you will have to go case by case.</p></li>
<li><p>Set <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">local ephemeral storage</a> requests and limits at the pod level for your workloads so that they don't go over; pods exceeding their limit will get evicted (see the sketch after this list).</p></li>
<li><p>Use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a> for your workloads, especially some that are not local and just reserved for your applications.</p></li>
</ul>
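<p>As a sketch of the ephemeral-storage option (the names, image and sizes below are placeholders), the requests and limits look roughly like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: vsts-agent
spec:
  containers:
  - name: agent
    image: my-registry/vsts-agent:latest
    resources:
      requests:
        ephemeral-storage: "1Gi"
      limits:
        ephemeral-storage: "2Gi"   # the kubelet evicts the pod if it writes more than this
</code></pre>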
|
<p>I have a Jenkins image, and I exposed it with a NodePort service. It works well. Since I will add more services, I need to use ingress-nginx to route traffic to the different services. </p>
<p>At this moment, I use my win10 to set up two vms (Centos 7.5). One vm as master1, it has two internal IPv4 address (<code>10.0.2.9 and 192.168.56.103</code>) and one vm as worker node4 (<code>10.0.2.6 and 192.168.56.104</code>). </p>
<p>All images are local. I have downloaded into local docker image repository. The problem is that Nginx ingress does not run.</p>
<p>My configuration as follows:</p>
<p>ingress-nginx-ctl.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ingress-nginx
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
name: ingress-nginx
imagePullPolicy: Never
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
</code></pre>
<p>ingress-nginx-res.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
spec:
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: shinyinfo-jenkins-svc
servicePort: 8080
</code></pre>
<p>nginx-default-backend.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
namespace: default
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: chenliujin/defaultbackend
imagePullPolicy: Never
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>shinyinfo-jenkins-pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: shinyinfo-jenkins
labels:
app: shinyinfo-jenkins
spec:
containers:
- name: shinyinfo-jenkins
image: shinyinfo_jenkins
imagePullPolicy: Never
ports:
- containerPort: 8080
containerPort: 50000
volumeMounts:
- mountPath: /devops/password
name: jenkins-password
- mountPath: /var/jenkins_home
name: jenkins-home
volumes:
- name: jenkins-password
hostPath:
path: /jenkins/password
- name: jenkins-home
hostPath:
path: /jenkins
</code></pre>
<p>shinyinfo-jenkins-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: shinyinfo-jenkins-svc
labels:
name: shinyinfo-jenkins-svc
spec:
selector:
app: shinyinfo-jenkins
type: NodePort
ports:
- name: tcp
port: 8080
nodePort: 30003
</code></pre>
<p>There is something wrong with nginx ingress, the console output is as follows:</p>
<pre><code>[master@master1 config]$ sudo kubectl apply -f ingress-nginx-ctl.yaml
service/ingress-nginx created
deployment.extensions/ingress-nginx created
[master@master1 config]$ sudo kubectl apply -f ingress-nginx-res.yaml
ingress.extensions/my-ingress created
</code></pre>
<p>The pod is in CrashLoopBackOff. Why?</p>
<pre><code>[master@master1 config]$ sudo kubectl get po
NAME READY STATUS RESTARTS AGE
ingress-nginx-66df6b6d9-mhmj9 0/1 CrashLoopBackOff 1 9s
nginx-default-backend-645546c46f-x7s84 1/1 Running 0 6m
shinyinfo-jenkins 1/1 Running 0 20m
</code></pre>
<p>describe pod:</p>
<pre><code>[master@master1 config]$ sudo kubectl describe po ingress-nginx-66df6b6d9-mhmj9
Name: ingress-nginx-66df6b6d9-mhmj9
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: node4/192.168.56.104
Start Time: Thu, 08 Nov 2018 16:45:46 +0800
Labels: app=ingress-nginx
pod-template-hash=228926285
Annotations: <none>
Status: Running
IP: 100.127.10.211
Controlled By: ReplicaSet/ingress-nginx-66df6b6d9
Containers:
ingress-nginx:
Container ID: docker://2aba164d116758585abef9d893a5fa0f0c5e23c04a13466263ce357ebe10cb0a
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
Image ID: docker://sha256:a3f21ec4bd119e7e17c8c8b2bf8a3b9e42a8607455826cd1fa0b5461045d2fa9
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 08 Nov 2018 16:46:09 +0800
Finished: Thu, 08 Nov 2018 16:46:09 +0800
Ready: False
Restart Count: 2
Liveness: http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-66df6b6d9-mhmj9 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-24hnm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-24hnm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-24hnm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40s default-scheduler Successfully assigned default/ingress-nginx-66df6b6d9-mhmj9 to node4
Normal Pulled 18s (x3 over 39s) kubelet, node4 Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0" already present on machine
Normal Created 18s (x3 over 39s) kubelet, node4 Created container
Normal Started 17s (x3 over 39s) kubelet, node4 Started container
Warning BackOff 11s (x5 over 36s) kubelet, node4 Back-off restarting failed container
</code></pre>
<p>logs of pod:</p>
<pre><code>[master@master1 config]$ sudo kubectl logs ingress-nginx-66df6b6d9-mhmj9
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.20.0
Build: git-e8d8103
Repository: https://github.com/kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------
nginx version: nginx/1.15.5
W1108 08:47:16.081042 6 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1108 08:47:16.081234 6 main.go:196] Creating API client for https://10.96.0.1:443
I1108 08:47:16.122315 6 main.go:240] Running in Kubernetes cluster version v1.11 (v1.11.3) - git (clean) commit a4529464e4629c21224b3d52edfe0ea91b072862 - platform linux/amd64
F1108 08:47:16.123661 6 main.go:97] ✖ The cluster seems to be running with a restrictive Authorization mode and the Ingress controller does not have the required permissions to operate normally.
</code></pre>
<p>Could experts here drop me some hints?</p>
| <p>You need to set ingress-nginx to use a separate service account and give the necessary privileges to that service account. </p>
<p>Here is an example:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: lb
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-normal
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-minimal
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- "ingress-controller-leader-dev"
- "ingress-controller-leader-prod"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-minimal
subjects:
- kind: ServiceAccount
name: lb
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-normal
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-normal
subjects:
- kind: ServiceAccount
name: lb
namespace: kube-system
</code></pre>
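<p>With the RBAC objects in place, the ingress-nginx Deployment also has to run under that service account, e.g. (only the relevant part of the pod spec is shown; this assumes the controller and the service account end up in the same namespace — either move the controller to <code>kube-system</code> or create the account in <code>default</code> instead):</p>
<pre><code>spec:
  template:
    spec:
      serviceAccountName: lb
      containers:
      - name: ingress-nginx
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
</code></pre>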
|
| <p>I am trying to create a CD pipeline in Spinnaker, and my applications are packaged as Helm charts. </p>
<p>I set the Kubernetes namespace on the following page, but when I trigger the pipeline, Spinnaker didn't create the applications in that namespace; they were created in the default namespace "spinnaker" in which I set up Spinnaker.</p>
<p>Is it a spinnaker bug or configuration mistake?
Can someone point me on how to troubleshoot/solve this?</p>
<p><a href="https://i.stack.imgur.com/YiQSZ.png" rel="nofollow noreferrer">configuration of spinnaker pipeline</a></p>
<p>And i found below log info from spin-rosco:</p>
<pre><code>2018-11-07 06:48:49.146 INFO 1 --- [0.0-8087-exec-6] c.n.s.rosco.jobs.local.JobExecutorLocal : Starting job: [helm, template, /tmp/52a04675-210e-44a4-a0d8-d008222d527a/84C4D3AF1AA88C049E8175B4F068D7EE, --name, mytest, --namespace, mynamespace]...
2018-11-07 06:48:49.147 INFO 1 --- [0.0-8087-exec-6] c.n.s.rosco.jobs.local.JobExecutorLocal : Polling state for e8521f11-ef81-4d72-a172-b578a8c4c10a...
2018-11-07 06:48:49.148 INFO 1 --- [ionThreadPool-1] c.n.s.rosco.jobs.local.JobExecutorLocal : Executing e8521f11-ef81-4d72-a172-b578a8c4c10a with tokenized command: [helm, template, /tmp/52a04675-210e-44a4-a0d8-d008222d527a/84C4D3AF1AA88C049E8175B4F068D7EE, --name, mytest, --namespace, mynamespace]
2018-11-07 06:48:50.147 INFO 1 --- [0.0-8087-exec-6] c.n.s.rosco.jobs.local.JobExecutorLocal : Polling state for e8521f11-ef81-4d72-a172-b578a8c4c10a...
2018-11-07 06:48:50.149 INFO 1 --- [0.0-8087-exec-6] c.n.s.rosco.jobs.local.JobExecutorLocal : State for e8521f11-ef81-4d72-a172-b578a8c4c10a changed with exit code 0.
</code></pre>
| <p>Solved! My mistake: the Helm templates should contain the namespace, as the official website notes:</p>
<pre><code>The release namespace (optional)
The Kubernetes namespace to install release into. If parameter is not specified default namespace will be used.
Note: Not all Helm charts contain namespace definitions in their manifests. Make sure that your manifests contain the following code:
metadata:
  namespace: {{ .Release.Namespace }}
</code></pre>
|
<p>I have a pod in kubernetes where it's slowing down under heavy load. The pod has two containers -- Nginx and PHP.</p>
<p>I see that Nginx is set to use "100m" CPU and PHP is set to use 1 CPU (1 core). I'm suspecting that Nginx might be bottleneck, but I'm having hard time to determine it.</p>
<p>This is my Kube setting for the nginx.</p>
<pre><code> resources:
limits:
cpu: 100m
memory: 128M
requests:
cpu: 100m
memory: 128M
</code></pre>
<p>When I SSH into Nginx container and use "top" command, I see 1% CPU pretty much all the time. I never see it goes over 1%.</p>
<p><code>7 nginx 20 0 31972 2360 972 S 0.7 0.0 7:07.94 nginx
1 root 20 0 31480 1784 976 S 0.0 0.0 0:00.01 nginx
</code></p>
<p>Is that mean that Nginx container is limited to 1% CPU usage because I set the CPU request to 100m?</p>
<p>Am I reading it correctly? or is there a standard way of reading CPU usage of a single container in a pod?</p>
| <p>To check the pods that use the most memory and CPU, you can use the <code>kubectl top</code> command, but it doesn't sort the pods on the basis of highest usage. You can only see the current usage of the pods.</p>
<pre><code>$ kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system kube-proxy-gke-rel3170-default-pool-3459fe6a 2m 12Mi
kube-system kube-proxy-gke-rel3170-default-pool-3459fe6a 2m 12Mi
kube-system fluentd-gcp-v2.0.9-5t9q6 8m 85Mi
kube-system fluentd-gcp-v2.0.9-pd4s9 10m 84Mi
</code></pre>
<p>However it doesn't show the quota limits of pods. It just shows current CPU and memory usage of pod.
Hope this helps.</p>
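<p>To see per-container usage inside a single pod, which is closer to what you are asking about, <code>kubectl top pod</code> also accepts a <code>--containers</code> flag (the output below is purely illustrative):</p>
<pre><code>$ kubectl top pod <your-pod-name> --containers
POD            NAME    CPU(cores)   MEMORY(bytes)
my-app-xxxxx   nginx   1m           12Mi
my-app-xxxxx   php     250m         300Mi
</code></pre>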
|
<p>We are currently evaluating various Kubernetes aaS providers and Azure AKS was looking like a front runner. However, recently some information has emerged that the master and etcd nodes are not run in a HA configuration?</p>
<p>Is this correct? I find it hard to believe! There is nothing on their website with regard to the architecture of AKS which is always a worry but as they provide the master nodes as a service I don't understand how they would leave such a gaping hole?</p>
<p>Does any one have any more in depth information regarding the AKS architecture?</p>
| <p>As most of the information about AKS have been provided above I just wanted to relate to this sentence:</p>
<blockquote>
<p>We are currently evaluating various Kubernetes aaS providers and Azure
AKS was looking like a front runner.</p>
</blockquote>
<p>I am not here to tell you which Cloud provider is the best, as there is no simple answer to that question. However what I can do is share with you the resources that I have used for such cases. </p>
<p><a href="https://kubedex.com/google-gke-vs-microsoft-aks-vs-amazon-eks/" rel="nofollow noreferrer">Here</a> you can find a great comparison not only as an article but also as an <a href="https://docs.google.com/spreadsheets/d/1U0x4-NQegEPGM7eVTKJemhkPy18LWuHW5vX8uZzqzYo/edit#gid=0" rel="nofollow noreferrer">excel sheet</a> that shows major differences and possibilities of the biggest Kubernetes aaS providers. </p>
<p>One thing for sure, AWS is the biggest of them all and it has a pretty big number of people involved into developing and upgrading the solutions related to Kubernetes all around the Internet so you can find a good number of solutions and resources. </p>
<p>Microsoft from what I know is also very active and their goal is to make the experience better and better, but quoting the article it seems there are still some areas that should be improved. </p>
<p>Google Cloud Platform. As we know, Kubernetes was born at Google, and I think Google has made it a goal to offer the best-quality Kubernetes experience on its platform because of that. There is a lot of material and buzz generated by Google around GKE: conferences, YouTube videos, blog posts, solid introductions to Kubernetes, etc.<br>
Judging by the article, it seems like they are doing it well.
Anyway, this is my opinion, based also on hands-on experience and the linked article itself, which I recommend you check. </p>
|
<p>I am really new to Kubernetes.</p>
<p>I have deployed Kubernetes using kops.
My question is how can I shutdown my instances (not terminate them) so my data, deployments and services will not be lost. </p>
<p>Currently after editing ig of master and nodes, by changing max and min instance size to 0 inside auto scaling group of EC2, changes my instances into terminated stance. Which also makes me loose my pods and data inside of them?</p>
<p>How to overcome on this issue??</p>
| <p>You have actually answered the question yourself. All that is required is to scale the instance groups to 0.
Following this <a href="https://perrohunter.com/how-to-shutdown-a-kops-kubernetes-cluster-on-aws/" rel="nofollow noreferrer">tutorial</a>, the steps are:</p>
<ul>
<li><code>kops edit ig nodes</code> change <code>minSize</code> and <code>maxSize</code> to 0 </li>
<li><code>kops get ig</code>- to get master node name </li>
<li><code>kops edit ig</code> - change min and max size to 0</li>
<li><code>kops update cluster --yes</code> </li>
<li><code>kops rolling-update cluster</code></li>
</ul>
<p>After that you can see in EC2, that all of the cluster machines are terminated. When you will want to start it again just repeat the steps but change the values to desired number of machines (min 1 master).</p>
<p>I can confirm that all the pods, services and deployments are running again after scaling the cluster back to its initial size. In my case those were nginx pods and hello-minikube pod from Kubernetes <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-deployment" rel="nofollow noreferrer">documentation</a> example. Did you miss any of these steps that it did not work in your case? Do you have an s3 bucket that <a href="https://kubernetes.io/docs/setup/custom-cloud/kops/#3-5-create-an-s3-bucket-to-store-your-clusters-state" rel="nofollow noreferrer">stores the cluster state?</a>
You need to run these commands before running the kops cluster:</p>
<p><code>aws s3api create-bucket --bucket ... --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
aws s3api put-bucket-versioning --bucket ... --versioning-configuration Status=Enabled</code></p>
<blockquote>
<p>kops lets you manage your clusters even after installation. To do
this, it must keep track of the clusters that you have created, along
with their configuration, the keys they are using etc. This
information is stored in an S3 bucket. S3 permissions are used to
control access to the bucket.</p>
</blockquote>
<p>This one is after scaling down to 0:</p>
<p><a href="https://i.stack.imgur.com/FMhS0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FMhS0.png" alt=""></a></p>
<p>And this screenshot after scaling back.
<a href="https://i.stack.imgur.com/x4Ewv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x4Ewv.png" alt=""></a></p>
|
<p>Is it possible to setup a delay for pod shutdowns after a rolling update on Kubernetes?</p>
<p>For example I roll out a new version and want the old Pods to run for further 15secs after the new instance has been started.</p>
<p>How can I manage that?</p>
| <p>Yes, you can use <code>PreStop</code> Hook to achieve that.</p>
<p><code>PreStop</code> hooks are executed after a Pod is marked as terminating. See what happens when you delete a pod <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">here</a>.</p>
<p>You just have to run <code>sleep 15</code> on <code>PreStop</code> Hook.</p>
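<p>A minimal sketch of such a hook in the Deployment's pod template (the container name and image are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: my-app                       # hypothetical container
        image: my-registry/my-app:latest   # placeholder image
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 15"]
</code></pre>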
<p>For more details see <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">Container hooks</a>.</p>
<p>See how to add a <code>PreStop</code> hook from here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">Define postStart and preStop handlers</a>.</p>
|
<p>I have 2 Postgres databases each in their own CloudSQL instance and a .NET web app running in GKE.</p>
<p>Goal: Connect web app utilizing EntityFramework Core to both CloudSQL instances using a single CloudSQL proxy.</p>
<p>I followed <a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/cloudsql/postgres_deployment.yaml" rel="nofollow noreferrer">this</a> setup and modified it following <a href="https://stackoverflow.com/questions/40793222/connecting-to-multiple-cloudsql-instances-using-cloud-sql-proxy">this</a> S.O. answer.</p>
<p>There is an EF Core DbContext for each CloudSQL Instance.
The context connections are set using 2 environment variables:</p>
<pre><code>new Context1(
{
optionsBuilder.UseNpgsql(Environment.GetEnvironmentVariable("CONNECTION_1"));
});
new Context2(
{
optionsBuilder.UseNpgsql(Environment.GetEnvironmentVariable("CONNECTION_2"));
});
</code></pre>
<p>The environment variables are set as:</p>
<pre><code>CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password2"
</code></pre>
<p><strong>Current Behavior:</strong></p>
<p>Context1 interacts with CloudSQL instance1 as normal.</p>
<p>Context2 throws PostgresException <code>"42P01: relation {my_Table_Name} does not exist."</code> when trying to access a table.</p>
<p>Note: <code>"my_Table_Name"</code> is a table in CloudSQL instance2</p>
<p>This leads me to believe Context2 is trying to access CloudSQL instance1 instead of instance2.</p>
<p>How can I point Context2 through the SQL Proxy to CloudSQL instance2?</p>
| <p>Basically this:</p>
<blockquote>
<pre><code>CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password2"
</code></pre>
</blockquote>
<p>means that you are connecting to the exact same Cloud SQL instance, with just two different passwords (same username). Not sure how CONNECTION_2 might even connect to the Cloud SQL instance1 though.</p>
<p>You would want to have something like this:</p>
<blockquote>
<pre><code>CONNECTION_1 = "Host=127.0.0.1;Port=5432;Database=postgres;Username=postgres;Password=password"
CONNECTION_2 = "Host=127.0.0.1;Port=5433;Database=postgres;Username=postgres;Password=password2"
</code></pre>
</blockquote>
<p>and on your cloud-proxy command line (running on the same pod):</p>
<pre><code>-instances=project:region:sqlinstance1=tcp:5432,project:region:sqlinstance2=tcp:5433
</code></pre>
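<p>Putting it together, the proxy sidecar in your pod could look roughly like this (a sketch based on the linked sample; the project, region, instance names and image tag are placeholders):</p>
<pre><code>- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=my-project:my-region:sqlinstance1=tcp:5432,my-project:my-region:sqlinstance2=tcp:5433",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
</code></pre>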
|
<p>I have a Kubernetes cluster on Google Cloud Platform. It has a persistent Volume Claim with a Capacity of 1GB. The persistent volume claim is bound to many deployments. </p>
<p>I would like to identify the space left in the persistent Volume Claim in order to know if 1GB is sufficient for my application. </p>
<p>I have used the command "kubectl get pv" but this does not show the storage space left.</p>
| <p>If there's a running pod with mounted PV from the PVC,</p>
<pre><code>kubectl -n <namespace> exec <pod-name> -- df -ah
</code></pre>
<p>...will list all file systems, including the mounted volumes, and their free disk space.</p>
|
<p>We have several microservices implemented in Java/Kotlin and Spring MVC, running in Tomcat docker images. These services provide public APIs which are authenticated by user's cookies/sessions. These work correctly.</p>
<p>Now, we would like to create an internal endpoint, which wouldn't be accessible either outside of GKE or via some kind of internal authentication. </p>
<p>What would be the good way to go especially for Spring MVC and GKE?</p>
<p>EDIT:</p>
<p>I would like to achieve to authenticate different endpoints on one service. For instance:</p>
<ul>
<li><code>/public/</code> - no auth</li>
<li><code>/private/</code> - user must be logged in</li>
<li><code>/internal/</code> - only other microservices can access</li>
</ul>
<p>I would prefer to implement such auth on the application level, but I am not sure what would be the best way. IP range of internal Google IPs? Some other way of securely identifying the caller?</p>
<p>Maybe my idea is bad, if so, I will be happy to change my mind.</p>
| <p>Your question isn't GKE specific. It's broadly a Kubernetes question.</p>
<p>I encourage you to search for "Kubernetes service authentication".</p>
<p>There are many ways to do this, including rolling your own auth model. One feature that can help here is the Kubernetes NetworkPolicy resource (it's like a firewall); you can learn more about it here <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a> and see here for some examples: <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes</a> (keep in mind that this is a firewall, not authentication).</p>
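<p>As a rough illustration (the labels and namespace are made up), a policy that only lets pods labelled <code>role: internal-caller</code> reach the pods behind your internal endpoint could look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-callers
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-microservice        # the pods you want to protect
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: internal-caller   # only pods with this label may connect
</code></pre>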
<p>If you want to get this automatically, you can use Istio (<a href="https://istio.io" rel="nofollow noreferrer">https://istio.io</a>) which allows you to automatically set up mutual TLS between all your services without any code changes. Istio also gives a strong identity to each workload. You can use Istio's authentication policies to set up auth between your microservices <em>without changing your application code</em> which is really cool: <a href="https://istio.io/docs/tasks/security/authn-policy/" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/authn-policy/</a></p>
|
| <p>I searched the previous posts for this but I couldn't find a solution, sorry.</p>
<p>I installed the metrics server on kubeadm v1.12 and I get this error in the logs:</p>
<p>1 master node and 1 slave node, in private network.</p>
<pre><code>Get https://ip-10-0-1-154:10250/stats/summary/: x509: a certificate signed by an unknown authority, unable to fully scrape metrics from source
</code></pre>
<p>I didn't install any certificates myself.</p>
<p>How can I install a new certificate, and where do I need to change this, without setting up a new Kubernetes cluster?</p>
<p>Sorry for the noob question; I tried to create a new certificate but I cannot make the kubelet pick it up.</p>
| <p>It's a problem with kubeadm, where it generates the <code>kubelet</code> certificates on the nodes under <code>/var/lib/kubelet/pki</code> (<code>kubelet.crt</code>, <code>kubelet.key</code>) signed by a different CA from the one used for the master(s) under <code>/etc/kubernetes/pki</code> (<code>ca.crt</code>). Some background <a href="https://github.com/kubernetes-incubator/metrics-server/issues/130" rel="nofollow noreferrer">here</a>.
You'll have to regenerate the certificates for your kubelets, signed by the CA on the master(s): <code>/etc/kubernetes/pki/ca.crt</code>.</p>
<p>You can follow something like <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">this</a>. For example use <a href="https://github.com/cloudflare/cfssl" rel="nofollow noreferrer"><code>cfssl</code></a></p>
<p>Something like this:</p>
<pre><code>$ mkdir ~/mycerts; cd ~/mycerts
$ cp /etc/kubernetes/pki/ca.crt ca.pem
$ cp /etc/kubernetes/pki/ca.key ca-key.pem
</code></pre>
<p>Create a file <code>kubelet-csr.json</code> with something like this:</p>
<pre><code>{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"<your-node-name>",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "US",
"ST": "NY",
"L": "City",
"O": "Org",
"OU": "Unit"
}]
}
</code></pre>
<p>Create a <code>ca-config.json</code> file:</p>
<pre><code>{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
</code></pre>
<p>Create a <code>config.json</code> file:</p>
<pre><code>{
"signing": {
"default": {
"expiry": "168h"
},
"profiles": {
"www": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
}
}
}
}
</code></pre>
<p>Generate the certs:</p>
<pre><code>$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
--config=ca-config.json -profile=kubernetes \
kubelet-csr.json | cfssljson -bare kubelet
</code></pre>
<p>Copy the files to your nodes:</p>
<pre><code>$ scp kubelet.pem <node-ip>:/var/lib/kubelet/pki/kubelet.crt
$ scp kubelet-key.pem <node-ip>:/var/lib/kubelet/pki/kubelet.key
</code></pre>
<p>Restart the kubelet on your node:</p>
<pre><code>$ systemctl restart kubelet
</code></pre>
<p>PD. Opened <a href="https://github.com/kubernetes/kubeadm/issues/1223" rel="nofollow noreferrer">this</a> to track the issue.</p>
|
<p>How do I define a network policy YML file so it only allows traffic to and from a hostname like console.cloud.ibm.com?</p>
<p>I only can find how to block from certain IP addresses or namespaces.</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a> with the IP address that the hostname resolves to:</p>
<pre><code>$ dig +short console.cloud.ibm.com
23.204.34.209
</code></pre>
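<p>A sketch of an egress policy pinned to that address (the namespace and selector are assumptions; if you restrict egress like this, you usually also have to allow DNS traffic separately):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-ibm-console
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 23.204.34.209/32
</code></pre>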
<p>The downside is that the IP might change. Network policies are a layer 3 or IP layer feature so you won't see hostnames there. Some background <a href="https://github.com/kubernetes/kubernetes/issues/50453" rel="nofollow noreferrer">here</a></p>
<p>If you want to block on hostname you might consider using a Service Mesh like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> with <a href="https://www.envoyproxy.io/" rel="nofollow noreferrer">Envoy</a> which will allow you to control traffic to a <code>host</code> by using <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">Egress Control</a>.</p>
|
<p>I have encountered a problem where a pod does not stop immediately even when I delete it.</p>
<p>What should be fixed in order to terminate normally?</p>
<h2>manifest file.</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: cmd-example
spec:
replicas: 1
selector:
matchLabels:
app: cmd-example
template:
metadata:
labels:
app: cmd-example
spec:
terminationGracePeriodSeconds: 30
containers:
- name: cmd-container
image: alpine:3.8
resources:
requests:
cpu: 100m
memory: 100Mi
command: ["/bin/sh"]
args: ["-c", "while true; do exec sleep 100;done"]
</code></pre>
<h2>Reproduction procedure</h2>
<ol>
<li>create deployment.<br>
<code>$ kubectl apply -f deployments.yaml</code></li>
<li>delete deployment.<br>
<code>kubectl delete -f deployments.yaml</code></li>
</ol>
<p><code>kubectl get po -w</code>output is.</p>
<pre><code>cmd-example-5cccf79598-zpvmz 1/1 Running 0 2s
cmd-example-5cccf79598-zpvmz 1/1 Terminating 0 6s
cmd-example-5cccf79598-zpvmz 0/1 Terminating 0 37s
cmd-example-5cccf79598-zpvmz 0/1 Terminating 0 38s
cmd-example-5cccf79598-zpvmz 0/1 Terminating 0 38s
</code></pre>
<p>This should finish faster.<br>
It took about 30 seconds to complete. Perhaps it is due to the SIGKILL sent when terminationGracePeriodSeconds (30s) expires.<br>
<strong>Why is the pod not cleaned up immediately with SIGTERM?</strong></p>
<p>What should be fixed?</p>
<h2>Environment</h2>
<p>I confirmed it in the following environment.</p>
<ul>
<li>Docker for Mac:18.06.1-ce,Kubernetes :v1.10.3</li>
<li>Docker for Windows:18.06.1-ce,Kubernetes :v1.10.3</li>
<li>Google Kubernete Engine:1.11.2-gke.15</li>
</ul>
| <h2>Cause of a problem</h2>
<p>The problem is that the shell does not stop even when it receives the SIGTERM signal.</p>
<h2>Solution</h2>
<p>Using the trap command.</p>
<p><strong>Changed place</strong></p>
<pre><code> command: ["/bin/sh"]
args: ["-c", "trap 'exit 0' 15;while true; do exec sleep 100 & wait $!; done"]
</code></pre>
<h2>Result</h2>
<p>After deleting, the pod was cleaned up right away!</p>
<pre><code>img-example-d68954677-mwsqp 1/1 Running 0 2s
img-example-d68954677-mwsqp 1/1 Terminating 0 8s
img-example-d68954677-mwsqp 0/1 Terminating 0 10s
img-example-d68954677-mwsqp 0/1 Terminating 0 11s
img-example-d68954677-mwsqp 0/1 Terminating 0 11s
</code></pre>
|
<p>Want to have a single command to have continue streaming of logs from particular pod. Currently I have to do</p>
<ul>
<li><code>kubectl get pods</code> (lists all the running pods)</li>
<li>check for the running pod from the list and copy the pod name</li>
<li><code>kubectl logs <pod name> -f</code> (continuous streaming of logs from pod)</li>
</ul>
<p>Bonus points: list pods starting with certain word like, <code>kubectl get pods asset*</code>. Which would just display pods with names starting with asset</p>
| <p>Finally, I was able to figure out the solution. It is somewhat hacky, but I basically use <code>--field-selector=status.phase=Running</code> and just get the name with the <code>-o=name</code> flag.
My final command would be something like this:</p>
<pre><code>kubectl logs -f $(kubectl get pods --field-selector=status.phase=Running -o=name | awk '/asset/ {print $1;exit}')
</code></pre>
<p>Links: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">Field Selectors</a></p>
|
<p>I'm running an ASP.NET Core API in Kubernetes behind a reverse proxy that's sending <code>X-Forwarded-For</code>, <code>X-Forwarded-Proto</code>, and <code>X-Forwarded-Host</code> headers.</p>
<p>I've found that I need to use <code>UseForwardedHeaders()</code> to accept the values from the proxy, so I've written the following code:</p>
<pre><code>var forwardedOptions = new ForwardedHeadersOptions()
{
ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.All
};
forwardedOptions.KnownNetworks.Add(new IPNetwork(IPAddress.Parse(configuration["network:address"]), int.Parse(configuration["network:cidrMask"])));
app.UseForwardedHeaders(forwardedOptions);
</code></pre>
<p>I'm running my API and reverse proxy within Kubernetes, and the API is only visible in the cluster. Because of this, I'm not worried about somebody on the cluster network spoofing the headers. What I would like to do is to automatically detect the cluster's internal subnet and add this to the <code>KnownNetworks</code> list. Is this possible? If so, how?</p>
| <p>I've created a method that calculates the start in the range and the CIDR subnet mask for each active interface:</p>
<pre><code>private static IEnumerable<IPNetwork> GetNetworks(NetworkInterfaceType type)
{
foreach (var item in NetworkInterface.GetAllNetworkInterfaces()
.Where(n => n.NetworkInterfaceType == type && n.OperationalStatus == OperationalStatus.Up) // get all operational networks of a given type
.Select(n => n.GetIPProperties()) // get the IPs
.Where(n => n.GatewayAddresses.Any())) // where the IPs have a gateway defined
{
var ipInfo = item.UnicastAddresses.FirstOrDefault(i => i.Address.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork); // get the first cluster-facing IP address
if (ipInfo == null) { continue; }
// convert the mask to bits
var maskBytes = ipInfo.IPv4Mask.GetAddressBytes();
if (!BitConverter.IsLittleEndian)
{
Array.Reverse(maskBytes);
}
var maskBits = new BitArray(maskBytes);
// count the number of "true" bits to get the CIDR mask
var cidrMask = maskBits.Cast<bool>().Count(b => b);
// convert my application's ip address to bits
var ipBytes = ipInfo.Address.GetAddressBytes();
if (!BitConverter.IsLittleEndian)
{
            Array.Reverse(ipBytes);
}
var ipBits = new BitArray(ipBytes);
// and the bits with the mask to get the start of the range
var maskedBits = ipBits.And(maskBits);
// Convert the masked IP back into an IP address
var maskedIpBytes = new byte[4];
maskedBits.CopyTo(maskedIpBytes, 0);
if (!BitConverter.IsLittleEndian)
{
Array.Reverse(maskedIpBytes);
}
var rangeStartIp = new IPAddress(maskedIpBytes);
// return the start IP and CIDR mask
yield return new IPNetwork(rangeStartIp, cidrMask);
}
}
</code></pre>
<p>Examples:</p>
<ul>
<li>192.168.1.33 with mask 255.255.255.252 returns 192.168.1.32/30</li>
<li>10.50.28.77 with mask 255.252.0.0 returns 10.50.0.0/14</li>
</ul>
<p>I've then changed my options code to look like this:</p>
<pre><code>var forwardedOptions = new ForwardedHeadersOptions()
{
ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.All
};
foreach (var network in GetNetworks(NetworkInterfaceType.Ethernet))
{
forwardedOptions.KnownNetworks.Add(network);
}
app.UseForwardedHeaders(forwardedOptions);
</code></pre>
|
<p>Want to have a single command to have continue streaming of logs from particular pod. Currently I have to do</p>
<ul>
<li><code>kubectl get pods</code> (lists all the running pods)</li>
<li>check for the running pod from the list and copy the pod name</li>
<li><code>kubectl logs <pod name> -f</code> (continuous streaming of logs from pod)</li>
</ul>
<p>Bonus points: list pods starting with certain word like, <code>kubectl get pods asset*</code>. Which would just display pods with names starting with asset</p>
| <p>You can use <a href="https://www.gnu.org/software/gawk/manual/gawk.html" rel="nofollow noreferrer">awk</a> to achieve this:</p>
<pre><code>kubectl logs -n <namespace> $(kubectl get pod -n <namespace> | awk '/<pattern>/{print $1}') -f
</code></pre>
<p>Here, you can specify regex in <code><pattern></code> field. <code>print $1</code> prints only first column of matched object. In our case, it is pod name.</p>
<p>For example:</p>
<pre><code>kubectl logs -n kube-system $(kubectl get pod -n kube-system | awk '/kube-proxy/{print $1}') -f
</code></pre>
|
<p>I want to deploy a few <code>Spring Boot</code> microservices on <code>Kubernetes</code> cluster. One of them is <em>authorization server</em> serving <code>OAuth 2.0</code> tokens. With current deployment (no k8s) only two services are visible to the outer world: <code>api-gateway</code> (Zuul) and <code>authorization-server</code> (Spring OAuth). The rest is hidden behind the <code>api-gateway</code>. During <code>k8s</code> deployment Zuul proxy probably will be substituted by Kubernetes Ingress. </p>
<p>Now the questions: </p>
<ul>
<li>Should I put <code>authorization-server</code> behind the Ingress or not?</li>
<li>What are pros and cons concerning these two solutions?</li>
<li>What are <em>best practices</em>?</li>
<li>Maybe I shouldn't get rid of Zuul at all?</li>
</ul>
| <p>Getting rid of Zuul is perfectly reasonable. Ingress should be the only outer-cluster accessible component that provides access to the cluster through ingress rules.
So yes, authorization-server and microservices should be accessible through ingress.</p>
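<p>For example, a minimal Ingress exposing both services could look like the sketch below (host, paths, service names and ports are placeholders for whatever your Services actually use):</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: edge
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /oauth
        backend:
          serviceName: authorization-server
          servicePort: 8080
      - path: /
        backend:
          serviceName: api-gateway
          servicePort: 8080
EOF
</code></pre>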
|
<p>I just installed Kubernetes on a cluster of Ubuntu's using the explanation of <a href="https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-1-11-cluster-using-kubeadm-on-ubuntu-18-04" rel="nofollow noreferrer">digital ocean</a> with Ansible. Everything seems to be fine; but when verifying the cluster, the master node is in a not ready status:</p>
<pre><code># kubectl get nodes
NAME STATUS ROLES AGE VERSION
jwdkube-master-01 NotReady master 44m v1.12.2
jwdkube-worker-01 Ready <none> 44m v1.12.2
jwdkube-worker-02 Ready <none> 44m v1.12.2
</code></pre>
<p>This is the version:</p>
<pre><code># kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>When I check the master node, the kube-proxy is hanging in a starting mode:</p>
<pre><code># kubectl describe nodes jwdkube-master-01
Name: jwdkube-master-01
Roles: master
...
LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 104.248.207.107
Hostname: jwdkube-master-01
Capacity:
cpu: 1
ephemeral-storage: 25226960Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1008972Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 23249166298
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 906572Ki
pods: 110
System Info:
Machine ID: 771c0f669c0a40a1ba7c28bf1f05a637
System UUID: 771c0f66-9c0a-40a1-ba7c-28bf1f05a637
Boot ID: 2532ae4d-c08c-45d8-b94c-6e88912ed627
Kernel Version: 4.18.0-10-generic
OS Image: Ubuntu 18.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-jwdkube-master-01 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-jwdkube-master-01 250m (25%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-jwdkube-master-01 200m (20%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-p8cbq 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-jwdkube-master-01 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (55%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientDisk 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48m (x5 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48m kubelet, jwdkube-master-01 Updated Node Allocatable limit across pods
Normal Starting 48m kube-proxy, jwdkube-master-01 Starting kube-proxy.
</code></pre>
<p><strong>update</strong></p>
<p>running <code>kubectl get pods -n kube-system</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-8p7k2 1/1 Running 0 4h47m
coredns-576cbf47c7-s5tlv 1/1 Running 0 4h47m
etcd-jwdkube-master-01 1/1 Running 1 140m
kube-apiserver-jwdkube-master-01 1/1 Running 1 140m
kube-controller-manager-jwdkube-master-01 1/1 Running 1 140m
kube-flannel-ds-5bzrx 1/1 Running 0 4h47m
kube-flannel-ds-bfs9k 1/1 Running 0 4h47m
kube-proxy-4lrzw 1/1 Running 1 4h47m
kube-proxy-57x28 1/1 Running 0 4h47m
kube-proxy-j8bf5 1/1 Running 0 4h47m
kube-scheduler-jwdkube-master-01 1/1 Running 1 140m
tiller-deploy-6f6fd74b68-5xt54 1/1 Running 0 112m
</code></pre>
| <p>It seems to be a compatibility problem between Flannel <code>v0.9.1</code> and Kubernetes <code>v1.12.2</code>. Once you replace the URL in the master configuration playbook, it should help:</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p>
<p>To enforce this solution on the current cluster:</p>
<hr>
<ol>
<li><p>On the master node delete relevant objects for Flannel v0.9.1:</p>
<p><code>kubectl delete clusterrole flannel -n kube-system</code></p>
<p><code>kubectl delete clusterrolebinding flannel -n kube-system</code></p>
<p><code>kubectl delete serviceaccount flannel -n kube-system</code></p>
<p><code>kubectl delete configmap kube-flannel-cfg -n kube-system</code></p>
<p><code>kubectl delete daemonset.extensions kube-flannel-ds -n kube-system</code></p>
<p>Proceed also with Flannel Pods deletion:</p>
<p><code>kubectl delete pod kube-flannel-ds-5bzrx -n kube-system</code></p>
<p><code>kubectl delete pod kube-flannel-ds-bfs9k -n kube-system</code></p>
<p>And check whether no more objects related to Flannel exist:</p>
<p><code>kubectl get all --all-namespaces</code></p></li>
</ol>
<hr>
<ol start="2">
<li>Install the latest Flannel version to your cluster:</li>
</ol>
<p><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p>
<p>This works for me; however, if you discover any further problems, write a comment below this answer.</p>
|
<p>kubernetes 1.7.x</p>
<p>kubelet store some data in /var/lib/kubelet, how can I change it to somewhere else ?</p>
<p>Because my /var is every small.</p>
| <p>if your <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> is loading environment from <code>/etc/sysconfig/kubelet</code>, as does mine, you can update it to include your extra args.</p>
<pre><code># /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--root-dir=/data/k8s/kubelet
</code></pre>
<p>Entire <code>10-kubeadm.conf</code>, for reference:</p>
<pre><code># /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
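<p>After editing the file, reload systemd and restart the kubelet so it picks up the new root directory (note that existing data under <code>/var/lib/kubelet</code> is not moved automatically):</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>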
|
<p>While reading the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#dry-run" rel="nofollow noreferrer">API concepts</a> page of Kubernetes documentation, I got a bit confused with the given definition:</p>
<blockquote>
<p>In version 1.12, if the dry run alpha feature is enabled, the modifying verbs (POST, PUT, PATCH, and DELETE) can accept requests in a dry run mode. Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. <strong>The response body for the request is as close as possible to a non dry run response.</strong> The system guarantees that dry run requests will not be persisted in storage or have any other side effects.</p>
</blockquote>
<p>So, dry run requests are meant to have as most as possible the same behavior from the client point of view.</p>
<p>What is the main idea behind this concept, and what use-cases does it covers?</p>
| <p><strong>Dry run</strong> is not a concept exclusive to Kubernetes. It's an expression used to indicate <em>a rehearsal of a performance or procedure before the real one</em>. Dry run mode gives you the possibility of issuing a command without side effects for testing an actual command that you intend to run.</p>
<p>Having said that, <em>read again</em> the following quote and it should make sense now:</p>
<blockquote>
<p>Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. [...] The system guarantees that dry run requests will not be persisted in storage or have any other side effects.</p>
</blockquote>
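<p>For example, with the feature gate enabled you can exercise it by adding the <code>dryRun=All</code> query parameter to a modifying request: the request goes through admission and validation, but nothing is persisted. The resource body and names below are placeholders, and later kubectl releases expose the same behaviour behind a flag such as <code>--server-dry-run</code>:</p>
<pre><code># proxy the API server locally, then send a dry-run create request
kubectl proxy --port=8001 &
curl -X POST 'http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments?dryRun=All' \
  -H 'Content-Type: application/json' \
  -d @my-deployment.json
</code></pre>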
|
<p>Got a list of config files that reside outside of templates folder that we feed into a helm chart like the following: </p>
<pre><code>├── configs
│ ├── AllEnvironments
│ │ ├── Infrastructure
│ │ └── Services
│ │ ├── ConfigFile1
│ │ ├── ConfigFile2
│ ├── Apps
│ │ ├── App1
│ │ ├── App2
│ │ └── App3
│ ├── ManagementEnvironments
│ │ ├── Database
│ │ │ ├── DbFile1
│ │ │ └── DbFile2
│ │ ├── Infrastructure
│ ├── TestEnvironments
│ │ ├── Pipeline
│ │ │ └── Pipeline1
│ │ ├── Database
│ │ │ ├── Pipeline2
│ ├── Console
│ │ ├── Console1
│ │ ├── Console2
</code></pre>
<p>It works out good for us so far. Now we need to parse the folders and get a list of all the folders under configs that do not end with Environments. So basically a range that would include Apps and Console in this case.</p>
<p>Doing the following I get a repetition of Apps 3 times as that many files are under it and also Console 2 times. </p>
<p>I would like to get a list of folders that do not end with Environments only once.</p>
<p>I tried to look at the Go templates and some helm chart toolkits but I have no experience in Go which seems like a requirement to make this happen and I will probably take on the next few days. But for now I'm stuck, so any help is appreciated.</p>
<pre><code>{{- range $path, $bytes := $.Files.Glob "configs/**" }}
{{- if not (or (dir $path | regexFind "configs.*Environments.*") (dir $path | regexFind "configs$")) }}
{{ base (dir $path) }}
{{- end }}
{{- end }}
</code></pre>
| <p>Here is a way to do it if it will help anyone else:</p>
<p>Helm charts use Go templates and the Sprig library. Using a dict from Sprig, we keep track of the previous folder we listed and print a folder only when it differs from the previous one. This works because the files are listed in alphabetical order, so files in the same folder are consecutive; if they were read in no particular order, this approach would not work.</p>
<pre><code>{{- $localDict := dict "previous" "-"}}
{{- range $path, $bytes := $.Files.Glob "configs/**" }}
{{- if not (or (dir $path | regexFind "configs.*Environments.*") (dir $path | regexFind "configs$")) }}
{{- $folder := base (dir $path) }}
{{- if not (eq $folder $localDict.previous)}}
{{$folder -}}
{{- end }}
{{- $_ := set $localDict "previous" $folder -}}
{{- end }}
{{- end }}
</code></pre>
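<p>One convenient way to check what this actually renders, assuming your Helm client is recent enough to have <code>helm template</code> (<code>./mychart</code> below is a placeholder for your chart directory), is to render the templates locally:</p>
<pre><code>helm template ./mychart | less
</code></pre>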
|
<p>We use <code>helm</code> to manage our <code>kubernetes applications</code>.
We use kubectl commands like this :
<code>kubectl --token $TOKEN get pods</code></p>
<p>So we use token based authentication system to run <code>kubectl</code> as opposed to auth info stored at <code>~/.kube/config</code> file.</p>
<p>Now problem is : We are not able to run any of helm commands as all helm commands throw : <code>Error: Unauthorized</code>.</p>
<p>So question is how to use helm with token based authentication system.</p>
<p>Context :
Recently our DevOps team moved from <code>self managed K8S</code> to amazon's EKS. Only mechanism they provided to authenticate kubectl is through token.</p>
| <p>Currently, Helm does not support a token-based authorization mechanism with the Kubernetes API, so there is no option for user-provided bearer token verification. You can find a GitHub <a href="https://github.com/helm/helm/issues/1918" rel="nofollow noreferrer">issue</a> with a proposal for Authentication and Authorization support in Tiller.</p>
<p>The Helm 3 Design <a href="https://github.com/helm/community/blob/master/helm-v3/000-helm-v3.md" rel="nofollow noreferrer">Proposal</a> has been announced with significant changes to be implemented, such as the new Security <a href="https://github.com/helm/community/blob/master/helm-v3/007-security.md" rel="nofollow noreferrer">concepts</a>.</p>
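<p>As a stopgap, if you are able to store the token in the kubeconfig that <code>kubectl</code> (and therefore the Helm client) reads, Helm commands may work without any Tiller-level support. A rough sketch, with placeholder names:</p>
<pre><code># store the bearer token as kubeconfig credentials (names are placeholders)
kubectl config set-credentials eks-user --token="$TOKEN"
kubectl config set-context eks --cluster=<your-cluster> --user=eks-user
kubectl config use-context eks
helm list
</code></pre>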
|
<p>I have setup a Kubernetes cluster which somehow cannot have internet connectivity because of organizaion policies. Now there are some services which I need to communicate via internet. To resolve this I have setup a forward proxy (Squid) which is outside of K8s cluster. All the nodes of my K8s cluster can access "google.com" using forward proxy. But I am not able to make my pods communicate through that proxy.</p>
<p>I have setup following variable on all the master and worker nodes:</p>
<pre><code>export http_proxy="http://10.x.x.x:3128"
export https_proxy="https://10.x.x.x:3128"
</code></pre>
<p>I am able to curl google.com from master and worker nodes. But when I attach into my container I notice that there are no variable http_proxy and https_proxy. and it cannot perform successful curl.</p>
<p>My pods and service network is different than my VM network</p>
<pre><code>pod-network-cidr=192.167.0.0/16
service-cidr 192.168.0.0/16
</code></pre>
<p>and my VM network is like:</p>
<pre><code>Master -> 10.2.2.40
Worker1 -> 10.2.2.41
Worker2 -> 10.2.2.42
Worker3 -> 10.2.2.43
</code></pre>
<p>And my forward proxy is running at</p>
<pre><code>Forward Proxy: 10.5.2.30
</code></pre>
<p>I am using kubernetes version v1.11.2. Any help here like where should I put my http_proxy setting for kubernetes cluster to make it effective for all pods and services?</p>
| <p>So I figured it out: to set the proxy for particular containers, set the environment variable in the Dockerfile.</p>
<pre><code>ENV HTTP_PROXY http://10.x.x.x:PORT
</code></pre>
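<p>To confirm the variable actually made it into a running container (the pod name below is a placeholder), something like this can be used:</p>
<pre><code>kubectl exec -it <pod-name> -- env | grep -i proxy
</code></pre>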
|
<p>I have defined a ClusterRole for Prometheus:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
labels:
k8s-app: prometheus
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- endpoints
- services
- nodes
- pods
verbs:
- get
- watch
- list
- nonResourceURLs:
- /metrics
- /api/*
verbs:
- get
</code></pre>
<p>Prometheus is able to access the API-Servers /metrics route:</p>
<pre><code>https://10.0.1.104:443/metrics
https://10.0.2.112:443/metrics
</code></pre>
<p>But I get "server returned HTTP status 403 Forbidden" on </p>
<pre><code>https://kubernetes.default.svc:443/api/v1/nodes/ip-10-0-0-219.eu-west-1.compute.internal/proxy/metrics
</code></pre>
<p>and </p>
<pre><code>https://kubernetes.default.svc:443/api/v1/nodes/ip-10-0-0-219.eu-west-1.compute.internal/proxy/metrics/cadvisor
</code></pre>
<p>I thought I had this covered by </p>
<pre><code>- nonResourceURLs:
- /api/*
</code></pre>
<p>What am I missing?</p>
| <p>I tried this myself and yes <code>nodes/proxy</code> is missing. (it works for me after adding it)</p>
<pre><code>rules:
- apiGroups: [""]
resources:
- namespaces
- endpoints
- services
- nodes
- nodes/proxy <===
- pods
</code></pre>
<hr>
<pre><code># From my K8s master
$ curl -k -H 'Authorization: Bearer <redacted>' https://localhost:6443/api/v1/nodes/ip-x-x-x-x.us-west-2.compute.internal/proxy/stats/summary
{
"node": {
"nodeName": "ip-x-x-x-x.us-west-2.compute.internal",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2018-10-19T21:02:19Z",
"cpu": {
"time": "2018-11-09T23:51:15Z",
"usageNanoCores": 30779949,
"usageCoreNanoSeconds": 59446529195638
},
....
</code></pre>
<hr>
<p>Removing it:</p>
<pre><code>$ curl -k -H 'Authorization: Bearer <redacted>' https://localhost:6443/api/v1/nodes/ip-x-x-x-x.us-west-2.compute.internal/proxy/stats/summary
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "nodes \"ip-x-x-x-x.us-west-2.compute.internal\" is forbidden: User \"system:serviceaccount:default:prometheus-k8s\" cannot get resource \"nodes/proxy\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"name": "ip-x-x-x-x.us-west-2.compute.internal",
"kind": "nodes"
},
"code": 403
}
</code></pre>
|
<p>Let's say I have a deployment template like this</p>
<pre><code>spec:
minReadySeconds: 15
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 5
</code></pre>
<p>How will this affect the newly versions of my app? Will the <code>minReadySeconds</code> and <code>initialDelaySeconds</code> count at the same time? Will the <code>initialDelaySeconds</code> come first then <code>minReadySeconds</code>?</p>
| <p>From Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#min-ready-seconds" rel="noreferrer">Deployment documentation</a>:</p>
<blockquote>
<p><code>.spec.minReadySeconds</code> is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">Container Probes</a></p>
</blockquote>
<p>So your newly created app pod have to be ready for <code>.spec.minReadySeconds</code> seconds to be considered as available.</p>
<blockquote>
<p><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated.</p>
</blockquote>
<p>So <code>initialDelaySeconds</code> comes before <code>minReadySeconds</code>.</p>
<p>Let's say the container in the pod starts at <code>t</code> seconds. The readiness probe is initiated at <code>t+initialDelaySeconds</code> seconds. Assume the Pod becomes ready at <code>t1</code> seconds (<code>t1 > t+initialDelaySeconds</code>). Then this pod will be considered available at <code>t1+minReadySeconds</code> seconds.</p>
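<p>For reference, here is a minimal sketch of where each field lives in a Deployment (names and image are placeholders): <code>minReadySeconds</code> sits at the Deployment <code>spec</code> level, while the <code>readinessProbe</code> with its <code>initialDelaySeconds</code> belongs to the container:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: probe-example
spec:
  minReadySeconds: 15        # Deployment-level field
  replicas: 2
  selector:
    matchLabels:
      app: probe-example
  template:
    metadata:
      labels:
        app: probe-example
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:      # container-level field
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 20
          periodSeconds: 20
EOF
</code></pre>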
|
<h1>TL;DR</h1>
<blockquote>
<p>Can I reference a previously-defined key-value pair in a ConfigMap?</p>
</blockquote>
<h1>Full Version</h1>
<p>I'm writing a deployment specification for an application which accepts
its configuration from environment variables at startup. Some of the
environment variables are derived from others. If I were setting
them up as file to be sourced at application startup, I would simply
do:</p>
<pre><code>[me@myserver ~]$ cat ./myenv.sh
export FOO=foo
export BAR=bar
export FOOBAR=$FOO$BAR
[me@myserver ~]$ . ./myenv.sh
[me@myserver ~]$ printenv FOOBAR
foobar
</code></pre>
<p>However, the analagous way to do this in a ConfigMap by referencing
previously-defined key-value pairs doesn't work (see sample ConfigMap
and Pod, below). Here are the results:</p>
<pre><code>[me@myserver ~]$ kubectl create -f my-app-config.yaml -f my-app-pod.yaml
configmap "my-app" created
pod "my-app" created
[me@myserver ~]$ kubectl exec -it my-app -- printenv | grep MY_CONFIGMAP
MY_CONFIGMAP_FOO=foo
MY_CONFIGMAP_FOOBAR=$(MY_CONFIGMAP_FOO)$(MY_CONFIGMAP_BAR)
MY_CONFIGMAP_BAR=bar
</code></pre>
<p>Hoped-for value of <code>MY_CONFIGMAP_FOOBAR</code> is <code>foobar</code>.</p>
<p>I attempted to cross-apply the following instructions under
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#use-configmap-defined-environment-variables-in-pod-commands" rel="nofollow noreferrer">Use ConfigMap-defined environment variables in Pod commands</a>
from the documentation:</p>
<blockquote>
<p>You can use ConfigMap-defined environment variables in the <strong><code>command</code></strong>
section of the Pod specification using the <strong><code>$(VAR_NAME)</code></strong>
Kubernetes substitution syntax.</p>
</blockquote>
<p>However, this obviously did not work, and I've been unable to find an
answer in my research.</p>
<p>I suspect I already know the answer, but is there any way to accomplish
this?</p>
<h1><a href="https://stackoverflow.com/help/mcve">MCVE</a></h1>
<h2>ConfigMap</h2>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-app
data:
MY_CONFIGMAP_FOO: foo
MY_CONFIGMAP_BAR: bar
# Desired concatenation would be `foobar`
MY_CONFIGMAP_FOOBAR: $(MY_CONFIGMAP_FOO)$(MY_CONFIGMAP_BAR)
</code></pre>
<h2>Pod</h2>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-app
image: alpine
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: my-app
command:
- /bin/sh
args:
- -c
- "while true; do sleep 3600; done"
</code></pre>
<h1>Epilogue: Twenty-Nine Months Later</h1>
<p>After working with Kubernetes for an additional two-and-a-half years, my ultimate solution (and recommendation) is to use a Helm chart, preferrably with an Ansible-managed deployment.</p>
| <p>You can achieve your expected result with help of this tool: <a href="https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html" rel="nofollow noreferrer">envsubst</a>.</p>
<p>Create the <code>ConfigMap</code> following way,</p>
<pre><code>envsubst < my-app-config.yaml | kubectl apply -f -
</code></pre>
<p>Now, all environment variable references in your <code>ConfigMap</code> will be replaced with their resolved values.</p>
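<p>Note that <code>envsubst</code> only expands <code>$VAR</code>/<code>${VAR}</code> references that exist in the calling shell's environment, so the values have to be exported first and the manifest has to use that syntax rather than <code>$(VAR)</code>. A rough sketch (values are just examples):</p>
<pre><code>export MY_CONFIGMAP_FOO=foo
export MY_CONFIGMAP_BAR=bar
# with the manifest containing:
#   MY_CONFIGMAP_FOOBAR: ${MY_CONFIGMAP_FOO}${MY_CONFIGMAP_BAR}
envsubst < my-app-config.yaml | kubectl apply -f -
</code></pre>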
|
<p>I want to deploy many Pods in Google Kubernetes Engine and then establish a TCP connection to each specific Pod by Subdomain like pod-name-or-label.mydomain.com or path routing like protocol://mydomain.com:7878/pod-name-or-label.</p>
<p>I have looked in different directions like Istio or nginx-ingress, but that seems to me to be too complicated. </p>
<p>Is not there a simple solution for that?</p>
| <p>For <strong>Istio</strong>, you can use a <a href="https://istio.io/docs/concepts/traffic-management/#virtual-services" rel="nofollow noreferrer">VirtualService</a> to control the <strong>routing rules</strong> toward a target <strong>subset</strong>, which is defined by a <a href="https://istio.io/docs/concepts/traffic-management/#destination-rules" rel="nofollow noreferrer">DestinationRule</a>.</p>
<p>The <strong>DestinationRule</strong> routes to the target <strong>Pods</strong> selected by the <strong>specified labels</strong>.</p>
<p>The request flow will look like this:</p>
<pre><code>+--------------------+
| |
| Istio Gateway |
| |
| |
+---------+----------+
|traffic incoming
|
+---------v----------+
| |
| VirtualService |
| |
| |
+---------+----------+
|route to subset by the routing rules
v
+--------------------+
| |
| DestinationRules |
| |
| |
+---------+----------+
|route traffic to target pods
v
+--------------------+
| |
| |
| Pods |
| |
+--------------------+
</code></pre>
<p>So, as @ericstaples said, you should create different <strong>Deployments</strong> with different <strong>pod labels</strong> to separate <strong>traffic</strong> to <strong>the target pods</strong>. Example (a YAML sketch follows the list):</p>
<ol>
<li>create a deployment with pod label: t1</li>
<li>create a subset in <em>DestinationRule</em>: select t1 label pod as subset s1</li>
<li>control your traffic in <strong>VirtualService</strong> that route to <code>s1</code> subset</li>
<li><code>s1</code> route to the target pods</li>
</ol>
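<p>A minimal sketch of steps 2 and 3 (all names, hosts, the <code>version: t1</code> label and the URI prefix are placeholders):</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service
  subsets:
  - name: s1
    labels:
      version: t1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /pod-group-1
    route:
    - destination:
        host: my-service
        subset: s1
EOF
</code></pre>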
<p>Also, to expose the <strong>Gateway</strong>, you can use <strong>ClusterIP</strong> or <strong>NodePort</strong> just like any other <strong>Kubernetes</strong> service does; see more in <a href="https://istio.io/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">Istio Traffic</a>.</p>
<p>There are some references maybe it's helpful:</p>
<p><a href="https://istio.io/docs/concepts/traffic-management/" rel="nofollow noreferrer">https://istio.io/docs/concepts/traffic-management/</a></p>
<p><a href="https://istio.io/docs/tasks/traffic-management/request-routing/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/request-routing/</a></p>
|
<p>I have a deployment with a php image that connects with a redis deployment in the 6379 port.</p>
<p>The problem is that the php application connects in the host 127.0.0.1 of its own pod, but the redis is in another pod (and have its own ClusterIP service). </p>
<p>I can't change the app code, so I want to redirect the 6379 port of php pod for the same port on the redis port.</p>
<p>How I can do this?</p>
| <p>kubernetes uses <a href="http://www.dest-unreach.org/socat/" rel="nofollow noreferrer"><code>socat</code></a> for doing port-forwarding from <code>kubectl</code>, so if they trust it that much, it seems reasonable you could trust it, too.</p>
<p>Place it in a 2nd container that runs alongside your php container, run <code>socat</code> in forwarding mode, and hope for the best:</p>
<pre><code>containers:
- name: my-php
# etc
- name: redis-hack
image: my-socat-container-image-or-whatever
command:
- socat
- TCP4-LISTEN:6379,fork
- TCP4:$(REDIS_SERVICE_HOST):$(REDIS_SERVICE_PORT)
# assuming, of course, your Redis ClusterIP service is named `redis`; adjust accordingly
</code></pre>
<p>Since all containers in a Pod share the same network namespace, the second container will also be listening on "127.0.0.1".</p>
<p>Having said all of that, as commentary on your situation, it is a <strong>terrible</strong> situation to introduce this amount of hackery to work around a very, very simple problem of just not having the app hard-code "127.0.0.1" as the redis connection host</p>
|
<p>Just finished reading Nigel Poulton's <strong>The Kubernetes Book</strong>. I'm left with the question of whether or not a Deployment can specify multiple ReplicaSets.</p>
<p>When I think Deployment, I think of it in the traditional sense of an entire application being deployed. Or is there meant to be a Deployment for each microservice?</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: hello-deploy
spec:
replicas: 10
selector:
matchLabels:
app: hello-world
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-pod
        image: nigelpoulton/k8sbook:latest
ports:
- containerPort: 8080
</code></pre>
| <p>It is meant to be a Deployment per microservice.</p>
<p>You also manage the number of replicas of each microservice type through its Deployment.
So for instance, if you want to deploy Service A (a Docker image with a Java service) 5 times, you have one Deployment that results in 5 pods, each running the Service A image.</p>
<p>If you deploy a new version of this Service A (a new Docker image), Kubernetes is able to do a rolling update: it manages the shutdown of the old pods and creates 5 new pods running the new Service A image.</p>
<p>Thus your whole microservices application/infrastructure is built from multiple Deployments, each generating Kubernetes pods, which are in turn exposed by Kubernetes Services.</p>
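<p>For example, rolling out a new image version for one such Deployment could look like this (names and tags are placeholders):</p>
<pre><code>kubectl set image deployment/service-a service-a=myrepo/service-a:2.0
kubectl rollout status deployment/service-a
</code></pre>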
|
<h2>Context</h2>
<p>I'm writing a script that uses the <a href="https://github.com/kubernetes/client-go/" rel="noreferrer">k8s.io/client-go</a> library (<a href="https://godoc.org/k8s.io/client-go/" rel="noreferrer">godocs here</a>) to manipulate Deployments. In particular, I want to add a label selector to every Deployment in my cluster. Deployment label selectors are <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates" rel="noreferrer">immutable</a>. So my approach is to:</p>
<ol>
<li>Create a copy of each Deployment with the only difference being the name is suffixed with "-temp". This is to minimize downtime of existing Deployments.</li>
<li>Delete the original Deployments.</li>
<li>Recreate the original Deployments with the only difference being an additional label selector.</li>
<li>Delete the temporary Deployments.</li>
</ol>
<p>I can't just use the client-go library to go through steps 1-4 sequentially because I only want to go onto the next step when the API server considers the previous step to be done. For example, I don't want to do step 3 until the API server says the original Deployments have been deleted. Otherwise, I'll get the error that the Deployment with the same name already exists.</p>
<h2>Question</h2>
<p>What's the best way to use the client-go library to detect when a Deployment is done being created and deleted and to attach callback functions? I came across the following packages.</p>
<ul>
<li><a href="https://godoc.org/k8s.io/apimachinery/pkg/watch" rel="noreferrer">watch</a></li>
<li><a href="https://godoc.org/k8s.io/client-go/informers" rel="noreferrer">informers</a></li>
<li><a href="https://godoc.org/k8s.io/client-go/tools/cache" rel="noreferrer">cache/informers</a></li>
</ul>
<p>But I'm not sure what the differences are between them and which one to use.</p>
<p>I read examples of <a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899" rel="noreferrer">watch here</a> and <a href="https://medium.com/firehydrant-io/stay-informed-with-kubernetes-informers-4fda2a21da9e" rel="noreferrer">informer here</a>. Here's <a href="https://stackoverflow.com/questions/40975307/how-to-watch-events-on-a-kubernetes-service-using-its-go-client">two</a> <a href="https://stackoverflow.com/questions/52567334/watch-kubernetes-pod-status-to-be-completed-in-client-go">related</a> SO questions.</p>
<h3>Update</h3>
<p>It seems like <a href="https://godoc.org/k8s.io/apimachinery/pkg/watch" rel="noreferrer">watch</a> provides a lower-level way to watch for changes to resources and receive events about changes. Seems like using the <a href="https://godoc.org/k8s.io/client-go/informers#SharedInformerFactory" rel="noreferrer">SharedInformerFactory</a> to create a SharedInformer is the way to go.</p>
<p>So far I have</p>
<pre><code>import (
"encoding/json"
"errors"
"flag"
"fmt"
"io/ioutil"
"k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
typedv1 "k8s.io/client-go/kubernetes/typed/apps/v1"
"k8s.io/client-go/tools/cache"
"path/filepath"
"strings"
// We need this import to load the GCP auth plugin which is required to authenticate against GKE clusters.
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/clientcmd"
"log"
"os"
)
func main() {
...
factory := informers.NewSharedInformerFactory(kubeclient, 0)
informer := factory.Apps().V1().Deployments().Informer()
stopper := make(chan struct{})
defer close(stopper)
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
d := obj.(v1.Deployment)
fmt.Printf("Created deployment in namespace %s, name %s.\n", d.GetNamespace(), d.GetName())
if _, ok := d.GetLabels()[tempLabelKey]; ok {
fmt.Printf("Detected temporary deployment created in namespace %s, name %s.\n", d.GetNamespace(), d.GetName())
deploymentToDelete := strings.Replace(d.GetName(), tempSuffix, "", -1)
fmt.Printf("Now deleting previous deployment in namespace %s, name %s.\n", d.GetNamespace(), deploymentToDelete)
deleteDeployment(deploymentToDelete, d.GetNamespace(), kubeclient)
}
},
DeleteFunc: func(obj interface{}) {
d := obj.(v1.Deployment)
fmt.Printf("Deleted deployment in namespace %s, name %s.\n", d.GetNamespace(), d.GetName())
if _, ok := d.GetLabels()[stageLabelKey]; !ok {
fmt.Printf("Detected deployment without stage label was deleted in namespace %s, name %s.\n", d.GetNamespace(), d.GetName())
fmt.Printf("Now creating normal deployment with stage label in namespace %s, name %s.\n", d.GetNamespace(), d.GetName())
deployment := createDeploymentWithNewLabel(stageLabelKey, "production", d)
createDeploymentsOnApi(deployment, kubeclient)
}
},
})
informer.Run(stopper)
}
</code></pre>
| <p>I ended up using a <a href="https://godoc.org/k8s.io/client-go/informers#NewSharedInformerFactory" rel="noreferrer">SharedInformer</a>.</p>
<p>These resources were helpful.</p>
<ul>
<li><a href="https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/" rel="noreferrer">https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/</a></li>
<li><a href="https://github.com/kubernetes/community/blob/8decfe4/contributors/devel/controllers.md" rel="noreferrer">https://github.com/kubernetes/community/blob/8decfe4/contributors/devel/controllers.md</a></li>
</ul>
<p>.</p>
<pre><code>package main
import (
"encoding/json"
"errors"
"flag"
"fmt"
"io/ioutil"
"k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"path/filepath"
"strings"
// We need this import to load the GCP auth plugin which is required to authenticate against GKE clusters.
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/clientcmd"
"log"
"os"
)
const manifestsDir = "manifests"
// Use an empty string to run on all namespaces
const namespace = ""
const newLabelKey = "new-label-to-add"
const tempLabelKey = "temporary"
const tempSuffix = "-temp"
const componentLabelKey = "component"
func main() {
var kubeconfig *string
if home := homeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
// TODO (dxia) How can I specify a masterUrl or even better a kubectl context?
cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
exitOnErr(err)
kubeclient, err := kubernetes.NewForConfig(cfg)
exitOnErr(err)
fmt.Printf("Getting deployments with '%s' label.\n", componentLabelKey)
deployments, err := kubeclient.AppsV1().Deployments(namespace).List(metav1.ListOptions{
LabelSelector: componentLabelKey,
})
fmt.Printf("Got %d deployments.\n", len(deployments.Items))
exitOnErr(err)
deployments = processDeployments(deployments)
fmt.Println("Saving deployment manifests to disk as backup.")
err = saveDeployments(deployments)
exitOnErr(err)
tempDeployments := appendToDeploymentName(deployments, tempSuffix)
tempDeployments = createDeploymentsWithNewLabel(tempLabelKey, "true", tempDeployments)
factory := informers.NewSharedInformerFactory(kubeclient, 0)
informer := factory.Apps().V1().Deployments().Informer()
stopper := make(chan struct{})
defer close(stopper)
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
d := obj.(*v1.Deployment)
labels := d.GetLabels()
if _, ok := labels[tempLabelKey]; ok {
labelsStr := joinLabelKeyVals(labels)
fmt.Printf("2: Temporary deployment created in namespace %s, name %s, labels '%s'.\n", d.GetNamespace(), d.GetName(), labelsStr)
deploymentToDelete := strings.Replace(d.GetName(), tempSuffix, "", -1)
deployment := getDeployment(d.GetNamespace(), deploymentToDelete, componentLabelKey, kubeclient)
if deployment != nil {
fmt.Printf("3: Now deleting previous deployment in namespace %s, name %s.\n", d.GetNamespace(), deploymentToDelete)
if err := deleteDeployment(d.GetNamespace(), deploymentToDelete, kubeclient); err != nil {
exitOnErr(err)
}
} else {
fmt.Printf("4: Didn't find deployment in namespace %s, name %s, label %s. Skipping.\n", d.GetNamespace(), deploymentToDelete, componentLabelKey)
}
} else if labelVal, ok := labels[newLabelKey]; ok && labelVal == "production" {
fmt.Printf("Normal deployment with '%s' label created in namespace %s, name %s.\n", newLabelKey, d.GetNamespace(), d.GetName())
deploymentToDelete := d.GetName() + tempSuffix
fmt.Printf("6: Now deleting temporary deployment in namespace %s, name %s.\n", d.GetNamespace(), deploymentToDelete)
if err := deleteDeployment(d.GetNamespace(), deploymentToDelete, kubeclient); err != nil {
exitOnErr(err)
}
}
},
DeleteFunc: func(obj interface{}) {
d := obj.(*v1.Deployment)
labels := d.GetLabels()
if _, ok := labels[newLabelKey]; !ok {
if _, ok := labels[tempLabelKey]; !ok {
fmt.Printf("Deployment without '%s' or '%s' label deleted in namespace %s, name %s.\n", newLabelKey, tempLabelKey, d.GetNamespace(), d.GetName())
fmt.Printf("5: Now creating normal deployment with '%s' label in namespace %s, name %s.\n", newLabelKey, d.GetNamespace(), d.GetName())
deploymentToCreate := createDeploymentWithNewLabel(newLabelKey, "production", *d)
if err := createDeploymentOnApi(deploymentToCreate, kubeclient); err != nil {
exitOnErr(err)
}
}
}
},
})
fmt.Println("1: Creating temporary Deployments.")
err = createDeploymentsOnApi(tempDeployments, kubeclient)
exitOnErr(err)
informer.Run(stopper)
}
func getDeployment(namespace string, name string, labelKey string, client *kubernetes.Clientset) *v1.Deployment {
d, err := client.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
if err != nil {
return nil
}
if _, ok := d.GetLabels()[labelKey]; !ok {
return nil
}
return d
}
func createDeploymentWithNewLabel(key string, val string, deployment v1.Deployment) v1.Deployment {
newDeployment := deployment.DeepCopy()
labels := newDeployment.GetLabels()
if labels == nil {
labels = make(map[string]string)
newDeployment.SetLabels(labels)
}
labels[key] = val
podTemplateSpecLabels := newDeployment.Spec.Template.GetLabels()
if podTemplateSpecLabels == nil {
podTemplateSpecLabels = make(map[string]string)
newDeployment.Spec.Template.SetLabels(podTemplateSpecLabels)
}
podTemplateSpecLabels[key] = val
labelSelectors := newDeployment.Spec.Selector.MatchLabels
if labelSelectors == nil {
labelSelectors = make(map[string]string)
newDeployment.Spec.Selector.MatchLabels = labelSelectors
}
labelSelectors[key] = val
return *newDeployment
}
func createDeploymentsWithNewLabel(key string, val string, deployments *v1.DeploymentList) *v1.DeploymentList {
newDeployments := &v1.DeploymentList{}
for _, d := range deployments.Items {
newDeployment := createDeploymentWithNewLabel(key, val, d)
newDeployments.Items = append(newDeployments.Items, newDeployment)
}
return newDeployments
}
func setAPIVersionAndKindForDeployment(d v1.Deployment, apiVersion string, kind string) {
// These fields are empty strings.
// Looks like an open issue: https://github.com/kubernetes/kubernetes/issues/3030.
d.APIVersion = apiVersion
d.Kind = kind
}
func processDeployments(deployments *v1.DeploymentList) *v1.DeploymentList {
newDeployments := &v1.DeploymentList{}
for _, d := range deployments.Items {
// Set APIVersion and Kind until https://github.com/kubernetes/kubernetes/issues/3030 is fixed
setAPIVersionAndKindForDeployment(d, "apps/v1", "Deployment")
d.Status = v1.DeploymentStatus{}
d.SetUID(types.UID(""))
d.SetSelfLink("")
d.SetGeneration(0)
d.SetCreationTimestamp(metav1.Now())
newDeployments.Items = append(newDeployments.Items, d)
}
return newDeployments
}
func saveDeployments(deployments *v1.DeploymentList) error {
for _, d := range deployments.Items {
if err := saveManifest(d); err != nil {
return err
}
}
return nil
}
func saveManifest(resource interface{}) error {
var path = manifestsDir
var name string
var err error
switch v := resource.(type) {
case v1.Deployment:
path = fmt.Sprintf("%s%s/%s/%s", path, v.GetClusterName(), v.GetNamespace(), "deployments")
name = v.GetName()
default:
return errors.New(fmt.Sprintf("Got an unknown resource kind: %v", resource))
}
bytes, err := json.MarshalIndent(resource, "", " ")
if err != nil {
return err
}
err = os.MkdirAll(path, 0755)
if err != nil {
return err
}
err = ioutil.WriteFile(fmt.Sprintf("%s/%s", path, name), bytes, 0644)
if err != nil {
return err
}
return nil
}
func deleteDeployment(namespace string, name string, client *kubernetes.Clientset) error {
if err := client.AppsV1().Deployments(namespace).Delete(name, &metav1.DeleteOptions{}); err != nil {
return err
}
return nil
}
func appendToDeploymentName(deployments *v1.DeploymentList, suffix string) *v1.DeploymentList {
newDeployments := &v1.DeploymentList{}
for _, d := range deployments.Items {
d.SetName(fmt.Sprintf("%s%s", d.GetName(), suffix))
newDeployments.Items = append(newDeployments.Items, d)
}
return newDeployments
}
func createDeploymentOnApi(d v1.Deployment, client *kubernetes.Clientset) error {
d.SetResourceVersion("")
if _, err := client.AppsV1().Deployments(d.GetNamespace()).Create(&d); err != nil {
return err
}
return nil
}
func createDeploymentsOnApi(deployments *v1.DeploymentList, client *kubernetes.Clientset) error {
for _, d := range deployments.Items {
if err := createDeploymentOnApi(d, client); err != nil {
return err
}
}
return nil
}
func joinLabelKeyVals(labels map[string]string) string {
labelKeyVals := make([]string, 0, len(labels))
for k, v := range labels {
labelKeyVals = append(labelKeyVals, fmt.Sprintf("%v=%v", k, v))
}
return strings.Join(labelKeyVals, ", ")
}
func homeDir() string {
if h := os.Getenv("HOME"); h != "" {
return h
}
return os.Getenv("USERPROFILE") // windows
}
func exitOnErr(err error) {
if err != nil {
log.Fatal(err)
}
}
</code></pre>
|
<p>What are the steps for recovering the deleted Google Kubernetes Engine cluster in Google Cloud Platform?</p>
| <p>Once you delete it, it's gone. You can't recover it unless you backed it up.</p>
<p>There are a couple of popular tools to backup your cluster:</p>
<ul>
<li><a href="https://github.com/heptio/ark" rel="nofollow noreferrer">Ark</a></li>
<li><a href="https://github.com/pieterlange/kube-backup" rel="nofollow noreferrer">kube-backup</a></li>
</ul>
<p>If you have stateful applications, Ark is the better solution, since it handles things like <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent volumes</a>. If you have stateless applications, kube-backup is good enough, since it basically backs up all your Kubernetes cluster resources.</p>
<p>For stateful applications such as databases, the approach may also vary on a case-by-case basis; for example, you'd back up a MySQL database with <a href="https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html" rel="nofollow noreferrer"><code>mysqldump</code></a>.</p>
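<p>A rough sketch of the <code>mysqldump</code> route (pod, credentials and database names are placeholders):</p>
<pre><code>kubectl exec my-mysql-0 -- sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" mydb' > mydb-backup.sql
</code></pre>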
|
<p>When I <code>helm delete <deploymentname></code> a deployment its PVs and PVCs are deleted as well. How can I avoid the actual data in the PVs from being deleted and be able to reclaim those PVs when I <code>helm install <deploymentname></code> again?</p>
<p>I am using <code>helm upgrade</code> regularly but it makes me very uncomfortable if all it takes to delete all data is a helm delete (even without --purge option it removes all PVs) </p>
| <p>Assuming you are using the <code>default</code> StorageClass, the only way to stop a Helm chart from deleting the PVs/PVCs it uses is to create the PVCs beforehand, so they are not managed by the Helm release.</p>
<p>The only exception is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations" rel="noreferrer">StatefulSets</a>, which by definition never delete their PVCs, even when they are created by the Helm release.</p>
<p>The other option, if your Helm charts allow it, is to use a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy" rel="noreferrer">StorageClass with <code>reclaimPolicy: Retain</code></a>, which prevents your PVs from being deleted when the PVCs of your Deployment and DaemonSet pods are detached and deleted.</p>
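<p>A minimal sketch of such a StorageClass (the provisioner below is just an example for GCE/GKE; use whatever your cluster provides):</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-storage
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
EOF
</code></pre>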
|
<p>I'm new to Terraform, and using it to create a Kubernetes cluster on GKE. I also want to create a Kubernetes deployment using the kubernetes_deployment resource type. This works perfectly, until I make a change to the deployment and run 'terraform apply' again. This results in the following error:</p>
<p>kubernetes_deployment.example: replicationcontrollers "terraform-example" not found</p>
<p>It looks like the Kubernetes provider starts looking for a replication controller to modify instead of a deployment. Am I doing something wrong, or might this be a bug in the provider?</p>
<p>I'm trying this with the following example: <a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/kubernetes/r/deployment.html</a></p>
<p>Thanks,
Daniel</p>
| <p>It looks like a bug, and there's already an issue open <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/210" rel="nofollow noreferrer">here</a>.</p>
<p>You can also see a gist with the logs <a href="https://gist.github.com/enozero/9e60d362b7801b779555133241608059" rel="nofollow noreferrer">here</a>.</p>
<p>A deployment creates and managers a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a> and not a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">ReplicationController</a>. A replication controller is the older Kubernetes way of managing replicas and you may still some K8s resource definitions out there, and it's still supported. However, a lot of new management of stateless replicas has been centered mostly around Deployments/ReplicaSets.</p>
|
<p>In my minikube I'm getting an error <code>persistentvolumeclaim "jenkins-volume-claim" not found</code></p>
<p>I'm installing jenkins using helm with the command below:</p>
<pre><code>helm install --name jenkins -f kubernetes/jenkins-values.yaml stable/jenkins --namespace jenkins-system
</code></pre>
<p>the snippet about <code>Persistence</code> in <code>jenkins-values.yaml</code> is below:</p>
<pre><code>Persistence:
Enabled: true
## A manually managed Persistent Volume and Claim
## Requires Persistence.Enabled: true
## If defined, PVC must be created manually before volume will be bound
ExistingClaim: jenkins-volume-claim
</code></pre>
<p>I've created a persistence volume using the command below:</p>
<pre><code>kubectl create -f persistence.yaml
</code></pre>
<p>persistence.yaml looks like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
hostPath:
path: /data/jenkins-volume/
</code></pre>
<p><strong>Question</strong></p>
<p>I have persistence volume <code>jenkins-volume</code> created but am still getting error <code>persistentvolumeclaim "jenkins-volume-claim" not found</code>. How can I resolve this?</p>
| <p>The error message points to missing <code>PersistentVolumeClaim</code> named <code>jenkins-volume-claim</code>. To create one, execute:</p>
<pre><code>kubectl -n <namespace> create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-volume-claim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
EOF
</code></pre>
<p>Executing after that <code>kubectl get pv</code> should show the <code>jenkins-volume</code> PV in <code>Bound</code> status (assuming the PV has been created already with capacity of at least 5Gi).</p>
<p>Use selector(s) as described <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector" rel="nofollow noreferrer">here</a> to make sure the claim binds to the desired pre-created PV (persistent volume) in case more than one PV with sufficient capacity is available.</p>
|
<p>I am learning kubernetes by playing with minikube.</p>
<p>This is my pod deployment file which is fine.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 2
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: myapp
image: myid/myimage
</code></pre>
<p>I am exposing the above pods using NodePort. I am able to access using minikube IP at port 30002.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-ip-service
spec:
type: NodePort
externalIPs:
- 192.168.99.100
selector:
component: web
ports:
- port: 3000
nodePort: 30002
targetPort: 8080
</code></pre>
<p>Now i would like to use ingress to access the application at port 80 which will forward the request the ip-service at port 3000. It does NOT work</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: my-ip-service
servicePort: 3000
</code></pre>
<p>If i try to access to ingress, address is blank. </p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 41m
</code></pre>
<p>How to use ingress with minikube? Or how to bind the minikube ip with ingress service - so that the app can be exposed outside without using nodeport</p>
| <p>You can get your minikube node's IP address with:</p>
<pre><code>minikube ip
</code></pre>
<p>The ingress' IP address will not populate in minikube because minikube lacks a load balancer. If you'd like something that behaves like a load balancer for your minikube cluster, <a href="https://github.com/knative/serving/blob/master/docs/creating-a-kubernetes-cluster.md#loadbalancer-support-in-minikube" rel="nofollow noreferrer">https://github.com/knative/serving/blob/master/docs/creating-a-kubernetes-cluster.md#loadbalancer-support-in-minikube</a> suggests running the following commands to patch your cluster:</p>
<pre><code>sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
</code></pre>
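<p>As a quick sanity check that the service itself is reachable before involving the ingress, the NodePort from your service definition can be hit directly on the node IP:</p>
<pre><code>curl http://$(minikube ip):30002/
</code></pre>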
|
<p>[ Disclaimer: this question was originally posted <a href="https://serverfault.com/questions/939527/kubernetes-calico-on-oracle-cloud-oci">on ServerFault</a>. However, since the official K8s <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/" rel="nofollow noreferrer">documentation</a> states "ask your questions on StackOverflow", I am also adding it here ]</p>
<p>I am trying to deploy a test Kubernetes cluster on Oracle Cloud, using OCI VM instances - however, I'm having issues with pod networking.</p>
<p>The networking plugin is Calico - it seems to be installed properly, but no traffic gets across the tunnels from one host to another. For example, here I am trying to access nginx running on another node:</p>
<pre><code>root@kube-01-01:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-th9ns 1/1 Running 0 38s 192.168.181.1 kube-01-06 <none>
root@kube-01-01:~# curl 192.168.181.1
[ ... timeout... ]
</code></pre>
<p>Using tcpdump, I see the IP-in-IP (protocol 4) packets leaving the first host, but they never seem to make it to the second one (although all other packets, including BGP traffic, make it through just fine).</p>
<pre><code>root@kube-01-01:~# tcpdump -i ens3 proto 4 &
[1] 16642
root@kube-01-01:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
root@kube-01-01:~# curl 192.168.181.1
09:31:56.262451 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661065 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
09:31:57.259756 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661315 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
09:31:59.263752 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661816 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
root@kube-01-06:~# tcpdump -i ens3 proto 4 &
[1] 12773
root@kube-01-06:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
</code></pre>
<p>What I have checked so far:</p>
<ol>
<li>The Calico routing mesh comes up just fine. I can see the BGP traffic on the packet capture, and I can see all nodes as "up" using calicoctl</li>
</ol>
<p>root@kube-01-01:~# ./calicoctl node status
Calico process is running.</p>
<pre><code>IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.13.23.123 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.124 | node-to-node mesh | up | 09:12:49 | Established |
| 10.13.23.126 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.129 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.127 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.128 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.130 | node-to-node mesh | up | 09:12:52 | Established |
+--------------+-------------------+-------+----------+-------------+
</code></pre>
<ol start="2">
<li>The security rules for the subnet allow all traffic. All the nodes are in the same subnet, and I have a stateless rule permitting all traffic from other nodes within the subnet (I have also tried adding a rule permitting IP-in-IP traffic explicitly - same result).</li>
<li>The source/destination check is disabled on all the vNICs on the K8s nodes.</li>
</ol>
<p>Other things I have noticed:</p>
<ol>
<li>I can get calico to work if I disable IP in IP encapsulation for same-subnet traffic, and use regular routing inside the subnet (as described <a href="https://docs.projectcalico.org/v3.2/reference/public-cloud/aws" rel="nofollow noreferrer">here</a> for AWS)</li>
<li>Other networking plugins (such as weave) seem to work correctly.</li>
</ol>
<p>So my question here is - what is happening to the IP-in-IP encapsulated traffic? Is there anything else I can check to figure out what is going on? </p>
<p>And yes, I know that I could have used managed Kubernetes engine directly, but where is the fun (and the learning opportunity) in that? :D </p>
<p>Edited to address Rico's answer below:</p>
<p>1) I'm also not getting any pod-to-pod traffic to flow through (no communication between pods on different hosts). But I was unable to capture that traffic, so I used node-to-pod as an example.</p>
<p>2) I'm also getting a similar result if I hit a NodePort svc on another node than the one the pod is running on - I see the outgoing IP-in-IP packets from the first node, but they never show up on the second node (the one actually running the pod):</p>
<pre><code>root@kube-01-01:~# tcpdump -i ens3 proto 4 &
[1] 6499
root@kube-01-01:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
root@kube-01-01:~# curl 127.0.0.1:32137
20:24:08.460069 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444115 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:09.459768 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444365 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:11.463750 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444866 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:15.471769 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19445868 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
</code></pre>
<p>Nothing on the second node ( <code>kube-01-06</code>, the one that is actually running the nginx pod ):</p>
<pre><code>root@kubespray-01-06:~# tcpdump -i ens3 proto 4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
</code></pre>
<p>I used 127.0.0.1 for ease of demonstration - of course, the exact same thing happens when I hit that NodePort from an outside host:</p>
<pre><code>20:25:17.653417 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56630 > 192.168.181.1.http: Flags [S], seq 980178400, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:17.654371 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56631 > 192.168.181.1.http: Flags [S], seq 3932412963, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:17.667227 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56632 > 192.168.181.1.http: Flags [S], seq 2017119223, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.653656 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56630 > 192.168.181.1.http: Flags [S], seq 980178400, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.654577 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56631 > 192.168.181.1.http: Flags [S], seq 3932412963, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.668595 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56632 > 192.168.181.1.http: Flags [S], seq 2017119223, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
</code></pre>
<p>3) As far as I can tell (please correct me if I'm wrong here), the nodes <em>are</em> aware of routes to pod networks, and pod-to-node traffic is also encapsulated IP-in-IP (notice the protocol 4 packets in the first capture above)</p>
<pre><code>root@kube-01-01:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
alpine-9d85bf65c-2wx74 1/1 Running 1 23m 192.168.82.194 kube-01-08 <none>
nginx-dbddb74b8-th9ns 1/1 Running 0 10h 192.168.181.1 kube-01-06 <none>
root@kube-01-01:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
<snip>
192.168.181.0 10.13.23.127 255.255.255.192 UG 0 0 0 tunl0
</code></pre>
| <p>Maybe it is a MTU issue:</p>
<blockquote>
<p>Typically the MTU for your workload interfaces should match the network MTU. If you need IP-in-IP then the MTU size for both the workload and tunnel interfaces should be 20 bytes less than the network MTU for your network. This is due to the extra 20 byte header that the tunnel will add to each packet.</p>
</blockquote>
<p>Read more <a href="https://docs.projectcalico.org/v3.2/usage/configuration/mtu" rel="nofollow noreferrer">here</a>.</p>
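<p>For illustration only, here is a hedged sketch of how the tunnel MTU could be lowered on the <code>calico-node</code> DaemonSet via Felix's IP-in-IP MTU setting; the exact key and manifest layout depend on your Calico version and install method, so verify it against the linked docs (with a 9000-byte network MTU you would use 8980, with 1500 you would use 1480):</p>
<pre><code># Snippet of the calico-node DaemonSet container spec (values are assumptions)
containers:
  - name: calico-node
    image: quay.io/calico/node:v3.2.3
    env:
      # MTU for the IP-in-IP tunnel: network MTU minus the 20-byte IPIP header
      - name: FELIX_IPINIPMTU
        value: "8980"
</code></pre>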
|
<p>I'm trying to use Kubernetes to manage a scenario where I need to run several instances of an application (that is, several Pods). These are my requirements:</p>
<ol>
<li>When I need to scale up my application, I want to deploy one single Pod on a specific Node (not a random one).</li>
<li>When I need to scale down my application, I want to remove a specific Pod from a specific Node (not a random one).</li>
<li>When a new Pod is deployed, I want it to mount a specific PersistentVolume (not a random one) that I have manually provisioned.</li>
<li>After a Pod has been deleted, I want its PersistentVolume to be re-usable by a different Pod.</li>
</ol>
<p>So far, I used this naive solution to do all of the above: every time I needed to create a new instance of my application, I created one new Deployment (with exactly one replica) and one PersistentVolumeClaim. So for example, if I need five instances of my application, then I need five Deployments. Though, this solution is not very scalable and it doesn't exploit the full potential of Kubernetes.</p>
<p>I think it would be a lot smarter to use one single template for all the Pods, but I'm not sure whether I should use a Deployment or a Statefulset. </p>
<p>I've been experimenting with Labels and Node Affinity, and I found out that I can satisfy requirement 1, but I cannot satisfy requirement 2 this way. In order to satisfy requirement 2, <strong>would it be possible to delete a specific Pod by writing my own custom scheduler?</strong></p>
<p>I don't understand how Kubernetes decides to tie a specific PersistentVolume to a specific PersistentVolumeClaim. <strong>Is there a sort of volume scheduler? Can I customize it somehow?</strong> This way, every time a new Pod is created, I'd be able to tie it to a specific volume.</p>
| <p>There may be a good reason for these requirements so I'm not going to try to convince you that it may not be a good idea to use Kubernetes for this...</p>
<p>Yes - with nodeSelector using labels, <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">node affinity</a>, and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">anti-affinity</a> rules, pods can be scheduled on "appropriate" nodes.</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="nofollow noreferrer">Static Pods</a> may be something close to what you are looking for. I've never used static pods/bare pods on Kubernetes...they kind of don't (to quote something from the question) "...exploit the full potential of Kubernetes" ;-)</p>
<p>Otherwise, here is what I think will work with out-of-the-box constructs for the four requirements:</p>
<p>Use Deployment like you have - this will give you requirements #1 and #2. I don't believe requirement #2 (nor #1, actually) can be satisfied with StatefulSet. Neither with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>.</p>
<p>Use statically provisioned PVs and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector" rel="nofollow noreferrer">selector(s)</a> to (quote) "...tie a specific PersistentVolume to a specific PersistentVolumeClaim" for requirement #3.</p>
<p>Then requirement #4 will be possible - just make sure the PVs use the proper <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy" rel="nofollow noreferrer">reclaim policy</a>.</p>
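<p>As a hedged sketch of the last two points (names, sizes and the hostPath backend are made up for illustration), a statically provisioned PV can carry a label plus a reclaim policy, and the PVC selects exactly that PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv-01
  labels:
    volume-id: data-01            # label used by the claim's selector
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # data survives claim deletion; a Released PV must be cleaned up before re-binding
  hostPath:
    path: /mnt/data-01            # assumption: any statically provisioned backend works here
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc-01
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      volume-id: data-01          # ties this claim to that specific PV
</code></pre>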
|
<p>I built Docker image on server that can run CI-CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inside Docker to run, I had to give it <code>--privilege</code>.</p>
<p>All works good, but I would like to run the docker in docker, on Openshift (or Kubernetes). The problem is with getting the <code>--privilege</code> permissions.</p>
<p>Is running privilege container on Openshift is dangerous, and if so why and how much?</p>
| <p>A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities" rel="nofollow noreferrer">among other things</a>. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.</p>
<p>I'd suggest that using <code>--privileged</code> at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if <em>e.g.</em> your container needs to <strong>mlock</strong>(2) you should <code>--cap-add IPC_LOCK</code> for the specific privilege you need, instead of opening up the whole world.</p>
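<p>In Kubernetes/OpenShift terms the same idea looks roughly like the sketch below - request only the capability you actually need in the container's <code>securityContext</code> instead of the fully privileged mode:</p>
<pre><code># Container-level securityContext: prefer a single capability...
securityContext:
  capabilities:
    add: ["IPC_LOCK"]
# ...over the equivalent of docker run --privileged:
# securityContext:
#   privileged: true
</code></pre>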
<p>(My understanding is still that trying to run Docker inside Docker is <a href="https://hub.docker.com/_/docker/" rel="nofollow noreferrer">generally considered a mistake</a> and using the host's Docker daemon is preferable. Of course, this <em>also</em> gives unlimited control over the host...)</p>
|
<p>I don't really understand this after reading through <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu" rel="noreferrer">the document</a>. Some use a term like "CPU", but some use "core".</p>
<p>I am running Kubernetes in my laptop for testing purpose. My laptop has one CPU (2.2 GHz) and four cores.</p>
<p>If I want to set the CPU request/limit for pod, should the maximum resource that I have be 1000m or 4000m?</p>
<p>What is the difference (CPU vs. core) here in a Kubernetes context?</p>
| <p>To clarify what's described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="noreferrer">here</a> in the Kubernetes context, 1 CPU is the same as a core (Also more information <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units" rel="noreferrer">here</a>).</p>
<pre><code>1000m (milicores) = 1 core = 1 vCPU = 1 AWS vCPU = 1 GCP Core.
100m (milicores) = 0.1 core = 0.1 vCPU = 0.1 AWS vCPU = 0.1 GCP Core.
</code></pre>
<p>For example, an Intel Core i7-6700 has four cores, but it has Hyperthreading which doubles what the system sees in terms of cores. So in essence, it will show up in Kubernetes as:</p>
<pre><code>8000m = 8 cores = 8 vCPUs
</code></pre>
<p>Some extra information: These resources are managed by the kube-scheduler using the <a href="https://en.wikipedia.org/wiki/Completely_Fair_Scheduler" rel="noreferrer">Completely Fair Scheduler</a> (CFS), and there are no guarantees in terms of overruns within the same machine and your pod may be moved around.</p>
<p>If you'd like to have stronger guarantees, you might consider the <code>--cpu-manager-policy=static</code> (CPU Manager) option in the kubelet. More information is <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies" rel="noreferrer">here</a> and <a href="https://kubernetes.io/blog/2018/07/24/feature-highlight-cpu-manager/" rel="noreferrer">here</a>.</p>
<p>To see what your Linux system reports as CPUs (and how many vCPUs you have), run <code>cat /proc/cpuinfo</code>.</p>
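<p>For reference, a request/limit on your 4-core laptop could look like the sketch below (the values are just an example); anything up to <code>4000m</code> (i.e. <code>"4"</code>) fits within the machine:</p>
<pre><code>resources:
  requests:
    cpu: "500m"      # 0.5 of a core
    memory: "256Mi"
  limits:
    cpu: "2"         # 2 cores, i.e. 2000m
    memory: "512Mi"
</code></pre>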
|
<p>I'm trying to install Kubernetes on my Ubuntu server/desktop version 18.04.1.
But, when I want to add kubernetes to the apt repository using the following command:</p>
<pre><code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-bionic main"
</code></pre>
<p>I get the following error:</p>
<pre><code>Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Ign:3 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://dl.google.com/linux/chrome/deb stable Release
Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:7 https://download.docker.com/linux/ubuntu bionic InRelease
Ign:8 https://packages.cloud.google.com/apt kubernetes-bionic InRelease
Err:10 https://packages.cloud.google.com/apt kubernetes-bionic Release
404 Not Found [IP: 216.58.211.110 443]
Reading package lists... Done
E: The repository 'http://apt.kubernetes.io kubernetes-bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
</code></pre>
<p>If I then try to install <code>kubeadm</code>, it does not work because I don't have the repository added to apt</p>
<p>I hope someone can shed some light on my issue..</p>
<p>All of this is running inside a VM on Hyper-V</p>
<p>PS: I'm not a die hard Linux expert but coming from Windows!</p>
| <p>At the moment (nov. 2018) there is no bionic folder. You can see the supported distributions here: </p>
<p><a href="https://packages.cloud.google.com/apt/dists" rel="noreferrer">https://packages.cloud.google.com/apt/dists</a></p>
<p>The last kubernetes version there is: kubernetes-yakkety</p>
<p>This should still work with bionic.</p>
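<p>So, assuming you stick with that repository name, the command from the question would become something like:</p>
<pre><code>sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-yakkety main"
sudo apt-get update
</code></pre>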
|
<p>I am using <code>Raspberry pi</code> for <code>kubernetes</code> cluster setup. I was using below docker version:</p>
<pre><code>Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:30:52 2018
OS/Arch: linux/arm
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:26:37 2018
OS/Arch: linux/arm
Experimental: false
</code></pre>
<p>It looks like the latest Docker version is now <code>18.09.0</code>, and the latest <code>kubernetes</code> version does not support it. I have even tried installing older Kubernetes versions like <code>1.9.7</code> or <code>1.9.6</code>, but when running <code>kubeadm init</code> I get the error below:</p>
<pre><code>[ERROR SystemVerification]: unsupported docker version: 18.09.0
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
</code></pre>
<p>Which versions should I specify for <code>kubernetes</code> and <code>docker</code> so they run together properly? Also, how can I specify a version while installing Docker? I normally use the command below to install Docker:</p>
<pre><code>curl -sSL get.docker.com | sh
</code></pre>
| <p>In Kubernetes there is no notion of <em>supported</em> Docker versions. Instead they use <em>validated</em> - meaning that all features were tested and validated against a given Docker version.</p>
<p>And validated Docker versions are still the same from Kubernetes version 1.8 until 1.11: <strong>Docker 1.11.2 to 1.13.1 and 17.03.x</strong>. See here:
<a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#external-dependencies" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#external-dependencies</a>
and here <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#external-dependencies" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#external-dependencies</a></p>
<p>Starting from Kubernetes version 1.12 <strong>Docker 17.06, 17.09 and 18.06</strong> started to be also validated. See here:
<a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#external-dependencies" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#external-dependencies</a></p>
<p>As far as I know, the final version of Docker 18.09 was released only 4 days ago, so we cannot expect this version to be validated for Kubernetes yet.</p>
<p><strong>Update (9.4.2019):</strong> Docker 18.09 is validated against newly released Kubernetes 1.14: <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#external-dependencies" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#external-dependencies</a></p>
<hr>
<p>You can specify exact Docker version for the <em>get.docker.com</em> script by <code>VERSION</code> variable:</p>
<pre><code>export VERSION=18.03 && curl -sSL get.docker.com | sh
</code></pre>
|
<p>Here is my situation, I'm on kubernetes (ingress), with two docker images: one dedicated to the web and the second one to the api.</p>
<p>Under the next configuration (at the end of the message): <code>/web</code> will show the front-end that will make some calls to <code>/api</code>, <strong>all good there</strong>.</p>
<p>But <code>/</code> is a 404 since nothing is defined for it; I couldn't find a way to tell the ingress config that <code>/</code> should redirect to <code>/web</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-ingress
annotations:
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- demo.com
secretName: tls-secret
rules:
- host: demo.com
http:
paths:
- path: /api
backend:
serviceName: api-app
servicePort: 8080
- path: /web
backend:
serviceName: web-app
servicePort: 80
</code></pre>
| <p>This depends on what your frontend and backend apps expect in terms of paths. Normally the frontend will <a href="https://stackoverflow.com/questions/53148660/communication-from-pod-to-pod-on-same-node-inside-kubernetes-in-gcp">need to be able to find the backend on a certain external path</a> and in your case it sounds like your backend needs to be made available on a different path externally (<code>/api</code>) from what it works on within the cluster (<code>/</code>). You can rewrite the target for requests to the api so that <code>/api</code> will go to <code>/</code> when the request is routed to the backend:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-ingress-backend
annotations:
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- demo.com
secretName: tls-secret
rules:
- host: demo.com
http:
paths:
- path: /api
backend:
serviceName: api-app
servicePort: 8080
</code></pre>
<p>And you can also define a separate ingress (with a different name) for the frontend that does not rewrite the target, so that a request to <code>/web</code> will go to <code>/web</code> for it:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-ingress-frontend
annotations:
kubernetes.io/tls-acme: "true"
spec:
tls:
- hosts:
- demo.com
secretName: tls-secret
rules:
- host: demo.com
http:
paths:
- path: /web
backend:
serviceName: web-app
servicePort: 80
</code></pre>
|
<p>I just installed Kubernetes with minkube on my desktop(running Ubuntu 18.10) and was then trying to install Postgresql on the desktop machine using Helm.</p>
<p>After installing helm, I did:</p>
<pre><code>helm install stable/postgresql
</code></pre>
<p>When this completed successfully, I forwarded postgres port with:</p>
<pre><code>kubectl port-forward --namespace default svc/wise-beetle-postgresql 5432:5432 &
</code></pre>
<p>and then I tested connecting to it locally from my desktop with:</p>
<pre><code>psql --host 127.0.0.1 -U postgres
</code></pre>
<p>which succeeds.</p>
<p>I attempted to connect to postgres from my laptop and that fails with:</p>
<pre><code>psql -h $MY_DESKTOP_LAN_IP -p 5432 -U postgres
psql: could not connect to the server: Connection refused
Is the server running on host $MY_DESKTOP_LAN_IP and accepting TCP/IP connections on port 5432?
</code></pre>
<p>To ensure that my desktop was indeed listening on 5432, I did:</p>
<pre><code>netstat -natp | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 17993/kubectl
tcp6 0 0 ::1:5432 :::* LISTEN 17993/kubectl
</code></pre>
<p>Any help anyone? I'm lost.</p>
| <p>You need to configure <code>postgresql.conf</code> (it lives in your Postgres data directory) to allow external client connections: look for the <code>listen_addresses</code> parameter and set it to <code>*</code>. Then add your laptop's IP to <code>pg_hba.conf</code>, which controls client access to your PostgreSQL server; more on this here - <a href="https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html" rel="nofollow noreferrer">https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html</a></p>
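<p>As a sketch (the subnet below is an assumption - use whatever range your laptop is actually in):</p>
<pre><code># postgresql.conf
listen_addresses = '*'

# pg_hba.conf - allow password-authenticated connections from the laptop's subnet
host    all    all    192.168.1.0/24    md5
</code></pre>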
|
<p>It seems that on Windows Kubernetes starts a <code>pause</code> image for each pod that is created. What is the purpose of this pause image? Where can I find more documentation about it?</p>
| <p>The <code>pause</code> container is a container which holds the network namespace for the pod. Kubernetes creates pause containers to acquire the respective pod’s IP address and set up the network namespace for all other containers that join that pod.</p>
<p>references:</p>
<p>1, <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">https://www.ianlewis.org/en/almighty-pause-container</a></p>
<p>2, <a href="https://groups.google.com/forum/#!topic/kubernetes-users/jVjv0QK4b_o" rel="noreferrer">https://groups.google.com/forum/#!topic/kubernetes-users/jVjv0QK4b_o</a></p>
<p>3, <a href="https://github.com/kubernetes-sigs/cri-o/issues/91" rel="noreferrer">https://github.com/kubernetes-sigs/cri-o/issues/91</a></p>
|
<p>I have two services (client and server) both of types of load balancer. I would like to make the client service talking to server service. How do I do programmatically (via yaml file etc).</p>
<p>More explicitly, the client runs Python code which needs access to the server's external IP. However, I only know the server's service name, not its external IP.</p>
<p>Of course, one way to solve this is to launch the server first, get its external IP and add it to the client's yaml file. I am looking for a solution where the server can be reached just by knowing its service name.</p>
<p>Client and server resides in the same kubernetes cluster. </p>
<p><strong>Client.yaml</strong> Client runs a web server and if someone hits the url advertised by web server <code>demo.py</code> should make a call to the server's service. I was thinking of a way to get the server's <code>external-ip</code> programmatically here, however I am open for suggestions.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: client
spec:
containers:
- name: client-container
image: client_image:latest
command:
- "python"
- "/root/demo.py"
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
labels:
run: client-service
name: client-service
spec:
ports:
- port: 5000
targetPort: 5000
selector:
    app: client
type: LoadBalancer
</code></pre>
<p><strong>Server.yaml</strong>: It exposes a service at port 9000, which client should call.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: server-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: server
spec:
containers:
- name: server-container
image: server_image:latest
command:
- <Command to start a server which listens on port 9000>
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
labels:
run: server-service
name: server-service
spec:
ports:
- port: 9000
targetPort: 9000
selector:
app: server
type: LoadBalancer
</code></pre>
| <p>The client only needs to know the Kubernetes Service name linked to the server; Kubernetes will handle the routing to the desired server pod. Specifying a load balancer also implicitly creates a ClusterIP, which your client Python application can use.</p>
<p>You can also give the client a readiness probe that checks whether the server is up; that way you can be sure that the client only starts taking requests once the server is ready.</p>
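<p>Concretely (a sketch - the variable name is made up), the client never needs the external IP at all; it can reach the server through the in-cluster DNS name of the server's Service:</p>
<pre><code># In the client Deployment's container spec
env:
  - name: SERVER_URL                      # hypothetical variable read by demo.py
    value: "http://server-service:9000"   # resolves via cluster DNS to the server Service
</code></pre>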
|
<p>I have been following the Kubernetes dashboard wiki on GitHub <a href="https://github.com/kubernetes/dashboard/wiki/Access-control#basic" rel="nofollow noreferrer">here</a> to change from basic to token-based authentication. It says to change</p>
<blockquote>
<p>--authentication-mode=basic <br> to <br>
--authentication-mode=token</p>
</blockquote>
<p>But my question is: where do I change this? Which file? Which yml?</p>
<p>It would be really great if you could provide an example of the yaml configuration file.</p>
| <p>The <code>--authentication-mode</code> flag is a flag of the Kubernetes Dashboard itself. Add/change this flag in the Kubernetes Dashboard deployment.</p>
<p>If you are using <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-the-dashboard-ui" rel="nofollow noreferrer">this</a> to deploy kubernetes dashboard, then add/change flag in the deployment yaml.</p>
<pre><code># ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --authentication-mode=basic
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
</code></pre>
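<p>For your case the relevant change is just that one argument, switched to token mode:</p>
<pre><code>args:
  - --auto-generate-certificates
  - --authentication-mode=token
</code></pre>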
|
<p>I'm just starting to explore Kubernetes, and there is one thing I find unclear. Suppose I have a master node that I have set up with <code>kubeadm</code>, and another two worker nodes that I had joined to master. Now I have a yaml file that specifies the details of a <code>Deployment</code> and I need to run:</p>
<pre><code>kubectl create -f dep.yaml
</code></pre>
<p>Do I need to run this command on master only? and then the master may or may not decide to use both worker nodes for the deployment according to optimal load distribution? Or do I need to run this in all worker nodes?</p>
| <p>You don't have to. You just need to run this command once, and the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler" rel="nofollow noreferrer">kube-scheduler</a> will schedule the pods on appropriate nodes.</p>
<p>Look at the diagram of this page: <a href="https://kubernetes.io/docs/concepts/architecture/cloud-controller/" rel="nofollow noreferrer">Concepts Underlying the Cloud Controller Manager</a></p>
<p>Master nodes run the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver" rel="nofollow noreferrer">kube-apiserver</a>, <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">etcd</a>, <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler" rel="nofollow noreferrer">kube-scheduler</a>, <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">kube-controller-manager</a> and <a href="https://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager" rel="nofollow noreferrer">cloud-controller-manager</a>.</p>
<p>kube-scheduler watches for newly created pods and schedules them onto an appropriate node based on resource requirements, hardware/software/policy constraints, affinity and anti-affinity rules, etc.</p>
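<p>To verify which worker node each replica actually landed on, you can for example run:</p>
<pre><code>kubectl get pods -o wide
</code></pre>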
|
<p>As we know, kubernetes supports 2 primary modes of finding a Service - environment variables and DNS, could we disable the first way and only choose the DNS way?</p>
| <p>As shown in this <a href="https://github.com/kubernetes/kubernetes/pull/68754" rel="noreferrer">PR</a>, this feature will land with Kubernetes v1.13. From the PR (as Docs are not available yet) I expect it to be the field <code>enableServiceLinks</code> in the pod spec with true as default.</p>
<p>Edit: It has been a while and the PR finally landed. The <code>enableServiceLinks</code> was added as an optional Boolean to the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#podspec-v1-core" rel="noreferrer">Kubernetes PodSpec</a>.</p>
<p>For the record: <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#dns" rel="noreferrer">using DNS</a> to discover service endpoints is the recommended approach. The <code>docker link</code> behavior, from where the environment variables originate, has long been <a href="https://docs.docker.com/compose/compose-file/compose-file-v1/#link-environment-variables" rel="noreferrer">deprecated</a>.</p>
|
<p>I am new with Kubernetes and am trying to setup a Kubernetes cluster on local machines. Bare metal. No OpenStack, No Maas or something.</p>
<p>After <code>kubeadm init ...</code> on the master node, <code>kubeadm join ...</code> on the slave nodes and <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">applying flannel</a> at the master I get the message from the slaves:</p>
<blockquote>
<p>runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized</p>
</blockquote>
<p>Can anyone tell me what I have done wrong or missed any steps?
Should flannel be applied to all the slave nodes as well? If yes, they do not have a <code>admin.conf</code>...</p>
<p>Thanks a lot!</p>
<p>PS. All the nodes do not have internet access. That means all files have to be copied manually via ssh.</p>
| <p>The problem was the missing internet connection. After loading the Docker images manually onto the worker nodes, they now appear to be ready.</p>
<p>Unfortunately I did not find a helpful error message pointing to this.</p>
|
<p>I have a 3-node Kubernetes cluster setup with Vagrant/Virtualbox. I am setting up a 4th VM that is not attached to the cluster.</p>
<p>I want to configure my 4th node so that it routes all traffic in the Service IP CIDR to a node on the Kubernetes cluster. The specific node doesn't matter since once traffic hits a node it will route to the correct pod as I expect.</p>
<p>For example, let's say I deploy a Rabbit broker on my k8s cluster behind a Service with IP <code>10.0.0.5</code> and my cluster service CIDR is <code>10.0.0.0/24</code>. On my 4th VM, I set up a python script to publish messages to <code>10.0.0.5</code>. However, <code>10.0.0.5</code> is virtual since it is a Service ClusterIP and therefore doesn't know how to route. I want to add a routing rule to automatically send <code>10.0.0.0/24</code> traffic to any of the 3 nodes in my cluster.</p>
<p>Can anyone help me out?</p>
| <p>Although you might be able to make routing work with <a href="http://linux-ip.net/html/routing-tables.html" rel="nofollow noreferrer">route tables</a> and <a href="https://en.wikipedia.org/wiki/Iptables" rel="nofollow noreferrer">iptables</a>, I would recommend using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> type of service so that you talk to the IP address of your nodes and not a virtual IP that is only reachable within the cluster.</p>
|
<p>Starting from a ~empty AWS account, I am trying to follow <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html</a> </p>
<p>So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with Programmatic access and AmazonEKSAdminPolicy directly
attached.</p>
<p>Then I used the website to create my EKS cluster and used <code>aws configure</code> to set the access key and secret of my IAM user.</p>
<p><code>aws eks update-kubeconfig --name wr-eks-cluster</code> worked fine, but:</p>
<pre><code>kubectl get svc
error: the server doesn't have a resource type "svc"
</code></pre>
<p>I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:</p>
<pre><code>kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p><code>aws-iam-authenticator token -i <my cluster name></code> seems to work fine.</p>
<p>The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?</p>
<p>Or ultimately, how do I proceed and gain access to my cluster using kubectl?</p>
| <ol>
<li>As mentioned <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="noreferrer">in docs</a>, the AWS IAM user that created the EKS cluster automatically receives <code>system:masters</code> permissions, and that's enough to get <code>kubectl</code> working. You need to use this user's credentials (<code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code>) to access the cluster. In case you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In this case, you can use the root user credentials (<a href="https://docs.aws.amazon.com/en_us/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_add-key" rel="noreferrer">Creating Access Keys for the Root User</a>).</li>
<li>The main magic is inside the <code>aws-auth</code> ConfigMap in your cluster – it contains the mapping from IAM entities to Kubernetes users/groups (RBAC).</li>
</ol>
<p>I'm not sure about how do you pass credentials for the <code>aws-iam-authenticator</code>:</p>
<ul>
<li>If you have <code>~/.aws/credentials</code> with <code>aws_profile_of_eks_iam_creator</code> then you can try <code>$ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces</code></li>
<li>Also, you can use environment variables <code>$ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces</code></li>
</ul>
<p>Both of them should work, because <code>kubectl ...</code> will use generated <code>~/.kube/config</code> that contains <code>aws-iam-authenticator token -i cluster_name</code> command. <code>aws-iam-authenticator</code> uses environment variables or <code>~/.aws/credentials</code> to give you a token.</p>
<p>Also, <a href="https://stackoverflow.com/a/51181905/3838486">this answer</a> may be useful for the understanding of the first EKS user creation.</p>
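<p>For reference, a hedged sketch of what that <code>aws-auth</code> ConfigMap typically looks like (the ARNs and names are placeholders - adapt them to your account and the worker node role from the getting-started guide):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role   # placeholder
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-admin               # placeholder
      username: my-admin
      groups:
        - system:masters
</code></pre>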
|
<p>Helm is failing when upgrading a chart that contains a new sub-chart</p>
<p>e.g.:</p>
<pre><code>chart
/templates
/charts
/sub-1
values.yaml
</code></pre>
<p>Now this chart get's updated, and a new sub-chart is added, which contains a configmap etc..</p>
<pre><code>chart
/templates
/charts
/sub-1
/sub-2
/templates
configmap.yaml #config
values.yaml
</code></pre>
<p>When we run <code>helm upgrade <release> <chart> --install</code> we keep getting:</p>
<p><code>Error: UPGRADE FAILED: no ConfigMap with the name "config" found</code></p>
<p>My guess is that helm tries to diff it against the 'previous' version of <code>config</code>, which does not yet exist - and thus the error. However, how can I make this work without deleting and re-installing the chart? That is not optimal for production scenarios.</p>
| <p>I would just create a blank ConfigMap in whichever Kubernetes namespace you are installing your Chart.</p>
<pre><code>$ kubectl -n <namespace> create cm config
</code></pre>
<p>If for some reason it complains about the <code>data</code> field or another field not being available, you can always create a dummy one:</p>
<pre><code>$ kubectl -n <namespace> edit cm config
</code></pre>
<p>or</p>
<pre><code>$ kubectl -n <namespace> patch cm config -p '{"data": {"dummy": "dummy1"}}'
</code></pre>
|
<p>I recently upgraded Docker for Mac to v18.06 and noticed it can run a local k8s cluster. I was excited to try it out and ran one of our services via <code>kubectl apply -f deployments/staging/tom.yml</code>. The yml manifest does not have a restart policy specified. I then shut it down using <code>kubectl delete -f ...</code>. Since then, each time I start docker that container is starting automatically. Here is the output of <code>docker ps</code>, truncated for brevity</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED NAMES
2794eae1f31e b06778bfe205 "/bin/sh -c 'java -c…" 27 minutes ago k8s_tom-joiner_tom-joiner-66fcfd84bc...
8dd19dd65486 b06778bfe205 "/bin/sh -c 'java -c…" 27 minutes ago k8s_tom-loader_tom-loader-6cb9f7f4fb...
...
</code></pre>
<p>However, the image is not managed by Kubernetes, so I cannot do <code>kubectl delete -f</code></p>
<pre><code>kubectl get pods
No resources found.
</code></pre>
<p>How do I permanently shut down the image and prevent if from restarting automatically? I tried <code>docker update --restart=no $(docker ps -a -q)</code> with no luck</p>
| <p>This depends on your specific deployment, but <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer">Kubernetes specifices that</a></p>
<blockquote>
<p>A PodSpec has a <code>restartPolicy</code> field with possible values <code>Always</code>, <code>OnFailure</code>, and <code>Never</code>. The default value is <code>Always</code>. [...]</p>
</blockquote>
<p>So, if you don't want your pod to restart for any reason, you have to explicitly tell it not to. You don't provide the example contents of your YAML file so I cannot guess the best place to do that, but that's enough for general guidance I think.</p>
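<p>As a minimal sketch, assuming the manifest describes a bare Pod (note that pods managed by a Deployment/ReplicaSet only accept <code>Always</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tom               # name and image are assumptions
spec:
  restartPolicy: Never    # or OnFailure
  containers:
    - name: tom
      image: your-image:latest
</code></pre>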
<hr>
<p>Now, for the problem that you face: Docker is probably using a custom namespace. Use</p>
<pre><code>kubectl get namespaces
</code></pre>
<p>to see what you get, then search for pods in those namespaces with</p>
<pre><code>kubectl -n <namespace> get pods
</code></pre>
<p>Or if you're impatient just get it over with:</p>
<pre><code>kubectl --all-namespaces get pods
</code></pre>
<p>Reference: <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="noreferrer">kubectl cheat sheet</a></p>
|
<p>kubectl logs command intermittently fails with "getsockopt: no route to host" error. </p>
<pre><code># kubectl logs -f mypod-5c46d5c75d-2Cbtj
</code></pre>
<blockquote>
<p>Error from server: Get
<a href="https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true" rel="nofollow noreferrer">https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true</a>:
dial tcp X.X.X.X:10250: getsockopt: no route to host</p>
</blockquote>
<p>If I run the same command 5-6 times it works. I am not sure why this is happening. Any help will be really appreciated. </p>
| <p>Just FYI, I tried using another VPC, 172.18.X.X, on EKS, and all kubectl commands work fine.</p>
<p>Also, I noticed that kops uses 172.18.X.X as Docker's internal CIDR when I am on a 172.17.X.X VPC, so I speculate that kops changes Docker's default CIDR so it does not collide with the cluster IPs. I hope we will be able to configure Docker's CIDR when EKS worker nodes are created, maybe via a CloudFormation yaml template or something.</p>
|
<p>I am trying to use the container from <a href="https://github.com/cybermaggedon/accumulo-docker" rel="nofollow noreferrer">https://github.com/cybermaggedon/accumulo-docker</a> to create a 3 node deployment in the Google Kubernetes Engine. My main problem is how to make the nodes aware of each other. For example, the <code>accumulo/conf/slaves</code> config file contains a list of all the nodes (either names or IPs, one per line), and needs to be replicated across all the nodes. Also, a single Accumulo node is designated as a master, and all slave nodes point to it by making it the only name/IP in the conf/masters file. </p>
<p>The documentation for the Accumulo docker container configures each container in this manner by providing environment variables, which are in turn used by the container startup script to rewrite the configuration files for that container, e.g.</p>
<pre><code> docker run -d --ip=10.10.10.11 --net my_network \
-e ZOOKEEPERS=10.10.5.10,10.10.5.11,10.10.5.12 \
-e HDFS_VOLUMES=hdfs://hadoop01:9000/accumulo \
-e NAMENODE_URI=hdfs://hadoop01:9000/ \
-e MY_HOSTNAME=10.10.10.11 \
-e GC_HOSTS=10.10.10.10 \
-e MASTER_HOSTS=10.10.10.10 \
-e SLAVE_HOSTS=10.10.10.10,10.10.10.11,10.10.10.12 \
-e MONITOR_HOSTS=10.10.10.10 \
-e TRACER_HOSTS=10.10.10.10 \
--link hadoop01:hadoop01 \
--name acc02 cybermaggedon/accumulo:1.8.1h
</code></pre>
<p>This is a startup of one of the slave nodes, it includes itself in <code>SLAVE_HOSTS</code> and points to the master in <code>MASTER_HOSTS</code>. </p>
<p>If I implement my scaling as a stateful set under Kubernetes, how I can achieve a similar result? I can modify the container as needed, I have no problem creating my own version.</p>
| <p>Disclaimer: Just because it runs on docker it doesn't necessarily mean that it can run on Kubernetes. <a href="https://accumulo.apache.org/" rel="nofollow noreferrer">Accumulo</a> is part of the Hadoop/HDFS ecosystem and lots of the components are not necessarily production ready. Check my other answers: <a href="https://stackoverflow.com/a/53160540/2989261">1</a>, <a href="https://stackoverflow.com/a/53137701/2989261">2</a>.</p>
<p>Kubernetes runs its pods in a pod CIDR that is only visible within the cluster. Also, the IP address of each pod is not fixed - it can change as the pod is rescheduled from one node to another or as pods are stopped/started. The way services/pods are generally discovered in a cluster is using <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a>. So, for example for the master and slave options, you will probably have to specify Kubernetes DNS names (and considering you are using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, the pods get ordinal numbers in their names):</p>
<pre><code>MASTER_HOSTS=acummulo-0.accumulo.default.svc.cluster.local
SLAVE_HOSTS=acummulo-0.accumulo.default.svc.cluster.local,acummulo-1.accumulo.default.svc.cluster.local,acummulo-2.accumulo.default.svc.cluster.local
</code></pre>
<p>Since Accumulo is a distributed K/V store, you can take cues from how <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">Cassandra</a> could be deployed on a Kubernetes cluster. Hope it helps!</p>
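<p>For those DNS names to exist, the StatefulSet needs a matching headless Service. A minimal sketch, assuming a StatefulSet named <code>accumulo</code> with <code>serviceName: accumulo</code> and pods labelled <code>app: accumulo</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: accumulo
spec:
  clusterIP: None          # headless: gives each pod a stable per-pod DNS record
  selector:
    app: accumulo          # must match the StatefulSet's pod labels
  ports:
    - name: tserver
      port: 9997           # example port - adjust to the Accumulo services you expose
</code></pre>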
|
<p>I am learning kubernetes by playing with minikube.</p>
<p>This is my pod deployment file which is fine.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 2
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: myapp
image: myid/myimage
</code></pre>
<p>I am exposing the above pods using NodePort. I am able to access using minikube IP at port 30002.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-ip-service
spec:
type: NodePort
externalIPs:
- 192.168.99.100
selector:
component: web
ports:
- port: 3000
nodePort: 30002
targetPort: 8080
</code></pre>
<p>Now I would like to use an ingress to access the application at port 80, which will forward the request to the ip-service at port 3000. It does NOT work.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: my-ip-service
servicePort: 3000
</code></pre>
<p>If I try to access the ingress, the address is blank.</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 41m
</code></pre>
<p>How to use ingress with minikube? Or how to bind the minikube ip with ingress service - so that the app can be exposed outside without using nodeport</p>
| <p>I think you are missing the ingress controller resource on minikube itself. There are many possible ways to create an ingress-controller resource on K8s, but I think for you the best way to start on minikube is to follow <a href="https://kubernetes.github.io/ingress-nginx/deploy/#minikube" rel="nofollow noreferrer">this</a> documentation.</p>
<p>Don't forget to read about <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> in general once you get this working.</p>
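<p>At the time of writing, that documentation essentially boils down to enabling minikube's bundled NGINX ingress controller addon:</p>
<pre><code>minikube addons enable ingress
</code></pre>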
|
<p>I have a Jenkins deployment pipeline which involves kubernetes plugin. Using kubernetes plugin I create a slave pod for building a node application using <strong>yarn</strong>. The requests and limits for CPU and Memory are set.</p>
<p>When the Jenkins master schedules the slave, sometimes (as I haven’t seen a pattern, as of now), the pod makes the entire node unreachable and changes the status of node to be Unknown. On careful inspection in Grafana, the CPU and Memory Resources seem to be well within the range with no visible spike. The only spike that occurs is with the Disk I/O, which peaks to ~ 4 MiB.</p>
<p>I am not sure if that is the reason for the node being unable to report itself as a cluster member. I would need help with a few things here:</p>
<p>a) How to diagnose in depth the reasons for node leaving the cluster.</p>
<p>b) If, the reason is Disk IOPS, is there any default requests, limits for IOPS at Kubernetes level?</p>
<p>PS: I am using EBS (gp2)</p>
| <p>As per the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">docs</a>, for the node to be 'Ready':</p>
<blockquote>
<p>True if the node is healthy and ready to accept pods, False if the node is not healthy and is not accepting pods, and Unknown if the node controller has not heard from the node in the last node-monitor-grace-period (default is 40 seconds)</p>
</blockquote>
<p>It would seem that when you run your workloads your kube-apiserver doesn't hear from your node (kubelet) within 40 seconds. There could be multiple reasons; some things that you can try:</p>
<ul>
<li><p>To see the 'Events' in your node run:</p>
<pre><code>$ kubectl describe node <node-name>
</code></pre></li>
<li><p>To see if you see anything unusual on your kube-apiserver. On your active master run:</p>
<pre><code>$ docker logs <container-id-of-kube-apiserver>
</code></pre></li>
<li><p>To see if you see anything unusual on your kube-controller-manager when your node goes into 'Unknown' state. On your active master run:</p>
<pre><code>$ docker logs <container-id-of-kube-controller-manager>
</code></pre></li>
<li><p>Increase the <code>--node-monitor-grace-period</code> option in your kube-controller-manager. You can add it to the command line in the <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> and restart the <code>kube-controller-manager</code> container.</p></li>
<li><p>When the node is in the 'Unknown' state can you <code>ssh</code> into it and see if you can reach the <code>kubeapi-server</code>? Both on <code><master-ip>:6443</code> and also the <code>kubernetes.default.svc.cluster.local:443</code> endpoints.</p></li>
</ul>
|
<p>I came to the realization that Windows 10 Docker has the Kubernetes options in it now, so I want to completely uninstall minikube and use the Kubernetes version that comes with docker windows instead.</p>
<p>How can I completely uninstall minikube in windows 10? </p>
| <p>This as simple as running:</p>
<pre><code>minikube stop & REM stops the VM
minikube delete & REM deleted the VM
</code></pre>
<p>Then delete the <code>.minikube</code> and <code>.kube</code> directories usually under:</p>
<pre><code>C:\users\{user}\.minikube
</code></pre>
<p>and</p>
<pre><code>C:\users\{user}\.kube
</code></pre>
<p>Or if you are using chocolatey:</p>
<pre><code>C:\ProgramData\chocolatey\bin\minikube stop
C:\ProgramData\chocolatey\bin\minikube delete
choco uninstall minikube
choco uninstall kubectl
</code></pre>
|
<p>My question is regarding quotas of Google Kubernetes Engine.</p>
<p>I have an instance running 4 pods, each pod is referring to a microservice (api) containing 3 containers:</p>
<ul>
<li>Spring Boot App</li>
<li>esp: endpoints</li>
<li>cloudsqlproxy</li>
</ul>
<p>For each pod (microservice), I have a deployment yaml which includes a NodePort service. Along with that, there is an ingress mapping all these services. Now I need to deploy another microservice (a pod with the same 3 containers), but the quota of 5 backend services is already at its limit.</p>
<p>I don't know if I'm doing something wrong or if this quota is simply very small. Four microservices seems very little for a technology that supports this approach.</p>
<p>So, Am I missing something in this architecture / configuration? Something that I'm doing wrong?</p>
<p>Here is my Ingress configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sigma-ingress
annotations:
# type of controller to use:
kubernetes.io/ingress.class: "gce"
# CORS
ingress.kubernetes.io/enable-cors: "true"
#
# ingress.kubernetes.io/rewrite-target: /
# Don't rediret to HTTPS
ingress.kubernetes.io/ssl-redirect: "false"
# Block HTTP requests
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: sigma-ssl
rules:
- http:
paths:
- path: /agro/*
backend:
serviceName: api-agro
servicePort: 443
- path: /fazendas
backend:
serviceName: api-fazenda
servicePort: 443
- path: /fazendas/*
backend:
serviceName: api-fazenda
servicePort: 443
- path: /clima
backend:
serviceName: api-clima
servicePort: 443
- path: /clima/*
backend:
serviceName: api-clima
servicePort: 443
- path: /ocorrencias
backend:
serviceName: api-inspecao
servicePort: 443
- path: /ocorrencias/*
backend:
serviceName: api-inspecao
servicePort: 443
</code></pre>
<p>Thanks in advance</p>
<p>Peter</p>
| <p>You are probably hitting the <a href="https://cloud.google.com/load-balancing/docs/backend-service" rel="nofollow noreferrer">GCP load balancer backend service</a> limit, which is usually low (mine is 9). This happens if you have several other load balancers with configured backends that together push you over that limit - and you mention that you already have other NodePort services (are they using another load balancer with a different backend?).</p>
<p>Your ingress should use just one backend that points to all the nodes in your cluster.</p>
<p>Besides the gcloud command line, you can see the limit from the UI.</p>
<p><a href="https://i.stack.imgur.com/S3PMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3PMs.png" alt="backend limit"></a></p>
<p>You can also see the details on the load balancer that is serving your Ingress.</p>
<p><a href="https://i.stack.imgur.com/MLPmH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MLPmH.png" alt="backend"></a></p>
<p>And you can see all your load balancer too:</p>
<p><a href="https://i.stack.imgur.com/1vUNy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1vUNy.png" alt="lbs"></a></p>
<p>You can <a href="https://cloud.google.com/compute/quotas#requesting_additional_quota" rel="nofollow noreferrer">request a backend quota increase</a> from GCP, or you can remove unused load balancers.</p>
|
<p>I have a service running on Kubernetes processing files passed from another resource. Single file size can vary between 10MB - 1GB.</p>
<p>Recently I've been seeing the pod dead because of OOMKilled Error:</p>
<pre><code>State: Running
Started: Sun, 11 Nov 2018 07:28:46 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 09 Nov 2018 18:49:46 +0000
Finished: Sun, 11 Nov 2018 07:28:45 +0000
</code></pre>
<p>I mitigated the issue by bumping the resource (memory) limit on the pod. But I am concerned that whenever there is a traffic or file-size spike, we will run into this OOMKilled issue again. And if I set the memory limit too high, I am concerned it will cause trouble on the host of this pod.</p>
<p>I read through the best practices given by Kubernetes: <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#best-practices" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#best-practices</a>. But I am not sure whether adding --eviction-hard and --system-reserved=memory would resolve the issue.</p>
<p>Has anyone had experience with a similar issue before? </p>
<p>Any help would be appreciated.</p>
| <p>More than a Kubernetes/container runtime issue, this is mostly about memory management in your application, and it will vary depending on the language runtime - for example, whether something like the JVM is running your application.</p>
<p>You generally want to set an upper limit on the memory usage in the application, for example, a maximum heap space in your JVM, then leave a little headroom for garbage collection and overruns. </p>
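<p>For example, for a JVM-based file processor that pattern could look like the sketch below (the numbers are only illustrative, and how the heap flag reaches the JVM depends on your entrypoint):</p>
<pre><code>containers:
  - name: file-processor              # hypothetical name/image
    image: your-registry/file-processor:latest
    env:
      - name: JAVA_OPTS               # assumption: the entrypoint passes this to java
        value: "-Xmx1536m"            # heap capped below the container limit
    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "2Gi"                 # leaves headroom for metaspace, threads, buffers
</code></pre>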
<p>Another example is the Go runtime and looks like <a href="https://github.com/golang/go/issues/16843" rel="noreferrer">they have talked</a> about memory management but with no solution as of this writing. For these cases, it might be good to manually set the <a href="https://unix.stackexchange.com/a/44987/54556"><code>ulimit</code></a> the virtual memory for the specific process of your application. (If you have a leak you will see other types of errors) or using <a href="https://unix.stackexchange.com/a/44988/54556">timeout</a></p>
<p>There's also <a href="https://unix.stackexchange.com/a/125024/54556">manual cgroup management</a>, but then again that's exactly what Docker and Kubernetes are supposed to do.</p>
<p><a href="https://jaxenter.com/nobody-puts-java-container-139373.html" rel="noreferrer">This is a good article</a> with some insights about managing a JVM in containers.</p>
|
<p>I have mounted a hostpath volume in a Kubernetes container. Now I want to mount a configmap file onto the hostpath volume.</p>
<p>Is that possible? </p>
| <p>Not really; a larger question would be why you'd want to do that.</p>
<p>The standard way to add configurations in Kubernetes is using ConfigMaps. They are stored in etcd and the size limit is <a href="https://github.com/kubernetes/kubernetes/issues/19781" rel="nofollow noreferrer"><code>1MB</code></a>. When your pod comes up the configuration is mounted on a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">pod mount point</a> that you can specify in the pod spec.</p>
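<p>For reference, the usual ConfigMap route looks roughly like this (the names here are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  app.conf: |
    key=value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/config/app.conf && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config       # the ConfigMap keys show up as files here
  volumes:
  - name: config
    configMap:
      name: app-config
</code></pre>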
<p>You may want the opposite, which is to use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer"><code>hostPath</code></a> that holds some configuration, and that's possible. Say that you want some config that is larger than 1MB (which is not usual) and have your pod use it. The gotcha here is that you need to put this hostPath and its files on every cluster node where your pod may start.</p>
|
<p>I am new to Kubernetes and I am not really sure how to correctly implement a watch; in particular, I am not sure how to deal with the resourceVersion parameter.</p>
<p>The goal is to watch for new pods with a specific label, and in case of an error or disconnection from the cluster, to be able to restart the watch from the last event that occurred.</p>
<p>I am doing something like this:</p>
<pre><code>// after setting up the connection and some parameters
String lastResourceVersion = null; // at beginning version is unknown
while (true) {
try {
Watch<V1Pod> watcher = Watch.createWatch(
client,
api.listNamespacedPodCall(namespace, pretty, fieldSelector, labelSelector, lastResourceVersion, forEver, true, null, null),
new TypeToken<Watch.Response<V1Pod>>() {}.getType()
);
for (Watch.Response<V1Pod> item : watcher) {
//increment the version
lastResourceVersion = item.object.getMetadata().getResourceVersion();
// do some stuff with the pod
}
} catch (ApiException apiException) {
log.error("restarting the watch from "+lastResourceVersion, apiException);
}
}
</code></pre>
<p>Is it correct to use the resourceVersion of a Pod to reinitialize the watch call? Is this number a kind of timestamp for all the events in the cluster, or different api will use different sequences?</p>
<p>Do I need to watch for specific exceptions? eg. in case of the resourceVersion is to old?</p>
<p>thanks</p>
| <p>Adam is right.</p>
<p>This is best explained by <strong><a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="noreferrer">https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes</a></strong></p>
<p>Quoting relevant parts (emphasis mine):</p>
<blockquote>
<p>When retrieving a <strong>collection of resources</strong> (either namespace or cluster scoped), the response from the server will contain a resourceVersion value that can be used to initiate a watch against the server. </p>
</blockquote>
<p>... snip ...</p>
<blockquote>
<p>When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code 410 Gone, clearing their local cache, <strong>performing a list operation, and starting the watch from the resourceVersion returned by that new list operation.</strong></p>
</blockquote>
<p>So before you call watch, you should list and pull the resourceVersion from the list (not the objects inside of it). Then start the watch with that resourceVersion. If the watch fails for some reason, you will have to list again and then use the resourceVersion from that list to re-establish the watch.</p>
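<p>A rough sketch of that pattern, reusing the names from your snippet (<code>client</code>, <code>api</code>, <code>namespace</code>, <code>forEver</code>, etc., set up as in your code), could look like the following. The exact parameter list of <code>listNamespacedPod</code> and the package names differ between client versions, so treat them as assumptions to adapt to your client:</p>
<pre><code>import com.google.gson.reflect.TypeToken;
import io.kubernetes.client.ApiException;
import io.kubernetes.client.models.V1Pod;
import io.kubernetes.client.models.V1PodList;
import io.kubernetes.client.util.Watch;

String resourceVersion = null;
while (true) {
    try {
        if (resourceVersion == null) {
            // list first and take the resourceVersion from the *list* metadata,
            // not from the individual pods (signature varies by client version)
            V1PodList list = api.listNamespacedPod(
                    namespace, pretty, fieldSelector, labelSelector,
                    null, null, Boolean.FALSE);
            resourceVersion = list.getMetadata().getResourceVersion();
            // optionally rebuild any local cache from list.getItems() here
        }
        Watch<V1Pod> watcher = Watch.createWatch(
                client,
                api.listNamespacedPodCall(namespace, pretty, fieldSelector,
                        labelSelector, resourceVersion, forEver, true, null, null),
                new TypeToken<Watch.Response<V1Pod>>() {}.getType());
        for (Watch.Response<V1Pod> item : watcher) {
            // remember the last event's version so a plain reconnect can resume
            resourceVersion = item.object.getMetadata().getResourceVersion();
            // handle the pod event
        }
    } catch (ApiException apiException) {
        if (apiException.getCode() == 410) { // "Gone": our resourceVersion is too old
            resourceVersion = null;          // force a fresh list on the next iteration
        }
        log.error("watch failed, restarting from " + resourceVersion, apiException);
    }
}
</code></pre>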
|
<p>I have written a simple spring boot application(version springboot 2.0) which uses mysql(version 5.7).</p>
<p><strong>application.properties</strong> snippet</p>
<pre><code>spring.datasource.url = jdbc:mysql://localhost:3306/test?useSSL=false
spring.datasource.username = testuser
spring.datasource.password = testpassword
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
</code></pre>
<p>When I run it locally, it works fine.
If I want to run this spring boot application in docker then I can change</p>
<pre><code>spring.datasource.url = jdbc:mysql://mysql-container:3306/test?useSSL=false
</code></pre>
<p><em>mysql-container is run using mysql:5.7 image from dockerhub.</em></p>
<p>However, I want to change the value of the host from some placeholder properties file, so that this looks something like:</p>
<pre><code>spring.datasource.url = jdbc:mysql://${MYSQL_HOST}:3306/test?useSSL=false
</code></pre>
<p>note: I am not sure about placeholder format. Is it ${MYSQL_HOST} or @MYSQL_HOST@ ?</p>
<p>You can name this placeholder file <em>placeholder.properties</em> or <em>placeholder.conf</em> or <em>.env</em> or anything. The content of that file should be something like:</p>
<pre><code>MYSQL_HOST=localhost
</code></pre>
<p>or</p>
<pre><code>MYSQL_HOST=some ip address
</code></pre>
<p>I can create .env or .env.test or .env.prod and refer to that env file based on where I want to run the application.</p>
<hr>
<p>UPDATE -</p>
<p>I have two questions:</p>
<ol>
<li><p>Where should I keep placeholder.properties? Is it under /config/ or under some specific directory?</p></li>
<li><p>how to invoke placeholder inside application.properties ?</p></li>
</ol>
<p>can someone suggest?</p>
| <p>SUGGESTION: If you have a relatively small number of properties, why not just have a different application.properties file for each environment?</p>
<p>You'd specify the environment at runtime with <code>-Dspring.profiles.active=myenv</code>.</p>
<p>Look <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html" rel="nofollow noreferrer">here</a> and <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-profile-specific-properties" rel="nofollow noreferrer">here</a>.</p>
<p>PS:</p>
<p>To answer your specific question: the syntax is <code>${MYSQL_HOST}</code></p>
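<p>For instance (a sketch, not a full configuration), you could keep a single application.properties with a sensible default and override the host per environment through an environment variable, since an OS variable named <code>MYSQL_HOST</code> is resolved by the <code>${MYSQL_HOST}</code> placeholder:</p>
<pre><code># application.properties (falls back to localhost when MYSQL_HOST is not set)
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/test?useSSL=false
</code></pre>
<p>and then, when running in Docker (the image name is hypothetical):</p>
<pre><code>docker run -e MYSQL_HOST=mysql-container my-spring-app
</code></pre>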
|
<p>Kubernetes breaks (no response from <code>kubectl</code>) when I have <strong>too many Pods</strong> running in the cluster (1000 pods). </p>
<p>There are <strong>more than enough resources</strong> (CPU and memory), so it seems to me that some kind of controller is breaking and unable to handle a large number of Pods. </p>
<p>The workload I need to run can be massively parallel processed, hence I have a high number of Pods. </p>
<p>Actually, I would like to be able to run many more times 1000 Pods. Maybe even <strong>100,000 Pods</strong>. </p>
<p>My Kubernetes master node is an <code>AWS EC2 m4.xlarge</code> instance. </p>
<p>My intuition tells me that it is the master node's network performance that is holding the cluster back? </p>
<p>Any ideas? </p>
<p><strong>Details:</strong><br>
I am running 1000 Pods in a Deployment.<br>
when I do <code>kubectl get deploy</code><br>
it shows: </p>
<pre><code>DESIRED CURRENT UP-TO-DATE AVAILABLE
1000 1000 1000 458
</code></pre>
<p>and through my application-side DB, I can see that there are only 458 Pods working. </p>
<p>when I do <code>kops validate cluster</code><br>
I receive the warning: </p>
<pre><code>VALIDATION ERRORS
KIND NAME MESSAGE
ComponentStatus controller-manager component is unhealthy
ComponentStatus scheduler component is unhealthy
Pod kube-system/kube-controller-manager-<ip>.ec2.internal
kube-system pod
"kube-controller-manager-<ip>.ec2.internal" is not healthy
Pod
kube-system/kube-scheduler-<ip>.ec2.internal
kube-system pod "kube-scheduler-<ip>.ec2.internal" is not healthy
</code></pre>
| <p>The fact that it takes a long time to list your pods is not really about your nodes; they will handle as many pods as their resources (CPU and memory) allow.</p>
<p>The issue you are seeing is more about the <code>kubeapi-server</code> being able to query and return a large number of pods or resources.</p>
<p>So the two contention points here are the kube-apiserver and etcd where the state for everything in a Kubernetes cluster is stored. So you can focus on optimizing those two components and the faster you'll get responses from say <code>kubectl get pods</code> (Networking is another contention point but that's if you are issuing kubectl commands from a slow broadband connection).</p>
<p>You can try:</p>
<ul>
<li><p>Setting up an <a href="https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/" rel="nofollow noreferrer">HA external etcd cluster</a> with pretty beefy machines and fast disks.</p></li>
<li><p>Upgrade the machines where your <code>kubeapi-server</code>(s) lives.</p></li>
<li><p>Follow more guidelines described <a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">here</a>.</p></li>
</ul>
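<p>To get a quick feel for where the control plane is hurting, you can, for example, check component health and etcd status directly (the etcdctl endpoint and certificate paths below are placeholders that depend on how your cluster was provisioned):</p>
<pre><code>kubectl get componentstatuses

# on the master, roughly:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<ca.crt> --cert=<client.crt> --key=<client.key> \
  endpoint status --write-out=table
</code></pre>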
|
<p>I have a 3-node [host a, host b, host c] Kubernetes cluster (version 1.12.2). I am trying to run the spark-pi example jar as mentioned in the <a href="https://kubernetes.io/blog/2018/03/apache-spark-23-with-native-kubernetes/" rel="nofollow noreferrer">Kubernetes document</a>.</p>
<p>Host a is my Kubernetes master. >> kubectl get nodes lists all three nodes.</p>
<p>I have built the Spark Docker image using what's provided in the Spark 2.3.0 binary folder.</p>
<pre><code>>> sudo ./bin/docker-image-tool.sh -r docker.io/spark/spark -t spark230 build
</code></pre>
<p>I got the message the image got built successfully.</p>
<pre><code>>> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/spark/spark spark230 6a2b645d7efe About an hour ago 346 MB
docker.io/weaveworks/weave-npc 2.5.0 d499500e93d3 7 days ago 49.5 MB
docker.io/weaveworks/weave-kube 2.5.0 a5103f96993a 7 days ago 148 MB
docker.io/openjdk 8-alpine 97bc1352afde 2 weeks ago 103 MB
k8s.gcr.io/kube-proxy v1.12.2 15e9da1ca195 2 weeks ago 96.5 MB
k8s.gcr.io/kube-apiserver v1.12.2 51a9c329b7c5 2 weeks ago 194 MB
k8s.gcr.io/kube-controller-manager v1.12.2 15548c720a70 2 weeks ago 164 MB
k8s.gcr.io/kube-scheduler v1.12.2 d6d57c76136c 2 weeks ago 58.3 MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 7 weeks ago 220 MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 2 months ago 39.2 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 10 months ago 742 kB
</code></pre>
<pre><code>./bin/spark-submit \
  --master k8s://https://<api-server>:<api-server-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=spark/spark:spark230 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>When I submit the above command, it sometimes gives the proper output. Other times it throws the error below. </p>
<pre><code>> code = Unknown desc = repository docker.io/spark/spark not found:
does not exist or no pull access, reason=ErrImagePull
</code></pre>
<p>When I debugged it further, it looks like whenever the node is host b or host c, it throws the above error message.</p>
<p>When the node is host a, it runs fine. It looks like the other nodes are unable to locate the image. </p>
<p>Questions: </p>
<ol>
<li>Should I install Spark on all nodes & build the Docker image on all nodes? </li>
<li>Is it possible to pass the image reference from a single node [host a] to the other nodes, i.e. how do I make the other nodes refer to the same image from host a?</li>
</ol>
| <p>You can always save a Docker image as a tar file and then copy that tar file to the other hosts and load the image there.</p>
<p>To save docker image as tar file:</p>
<pre><code>sudo docker save -o <path for generated tar file> <image name>
</code></pre>
<p>Now copy your tar file to the other hosts using scp or some other copy tool, and load the Docker image using:</p>
<pre><code>sudo docker load -i <path to image tar file>
</code></pre>
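<p>For example, end to end (hostnames and paths are just placeholders):</p>
<pre><code>docker save -o /tmp/spark-spark230.tar spark/spark:spark230
scp /tmp/spark-spark230.tar user@host-b:/tmp/
ssh user@host-b "docker load -i /tmp/spark-spark230.tar"
</code></pre>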
<p>Hope this helps</p>
|