prompt | response |
---|---|
<p>Are there any CLI tools or libraries that allow updating container images (and other parameters) in K8s YAML/JSON configuration files?</p>
<p>For example, I have this YAML:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
<...>
spec:
  template:
    spec:
      containers:
      - name: dmp-reports
        image: example.com/my-image:v1
<...>
</code></pre>
<p>And I want to automatically update the image for this deployment in this file (basically, this is necessary for the CI/CD system).</p>
| <p>We have the same issue on the <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> project where we have many git repositories and as we change things like libraries or base docker images we need to change lots of versions in <code>pom.xml, package.json, Dockerfiles, helm charts</code> etc.</p>
<p>We use a simple CLI tool called <a href="https://github.com/jenkins-x/updatebot" rel="nofollow noreferrer">UpdateBot</a> which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g. here are the <a href="https://github.com/pulls?q=is%3Apr+archived%3Afalse+user%3Ajenkins-x+is%3Aclosed+label%3Aupdatebot" rel="nofollow noreferrer">current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories</a>.</p>
<p>Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
<a href="https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29" rel="nofollow noreferrer">https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29</a></p>
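<p>If all you need is to bump the image tag of one manifest in place from a CI job, a lighter-weight sketch (not UpdateBot; it assumes mikefarah's yq v4 is installed, that the target container is the first in the list, and that the deployment is named <code>dmp-reports</code>) could be:</p>
<pre><code># rewrite the image field in the YAML file itself (yq v4 syntax)
yq e -i '.spec.template.spec.containers[0].image = "example.com/my-image:v2"' deployment.yaml

# or, if the manifest is already applied, patch the live object instead
kubectl set image deployment/dmp-reports dmp-reports=example.com/my-image:v2
</code></pre>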
|
<p>I am facing a problem with Kubernetes nodes deployed on AWS.
(Cluster with 3 nodes and 1 master running on m3.large instances, each with about 25 GB of disk.)</p>
<p>After about 3 days there is 0 KB left on disk and the cluster gets stuck.</p>
<p>All the storage (more or less) is used by /var/lib/docker/overlay/.
Inside this folder there are 500 or more directories like these:</p>
<pre><code>drwx------ 3 root root 4096 Jun 20 15:33 ed4f90bd7a64806f9917e995a02974ac69883a06933033ffd5049dd31c13427a
drwx------ 3 root root 4096 Jun 20 15:28 ee9344fea422c38d71fdd2793ed517c5693b0c8a890964e6932befa1ebe5aa63
drwx------ 3 root root 4096 Jun 20 16:17 efed310a549243e730e9796e558b2ae282e07ea3ce0840a50c0917a435893d42
drwx------ 3 root root 4096 Jun 20 14:39 eff7f04f17c0f96cff496734fdc1903758af1dfdcd46011f6c3362c73c6086c2
drwx------ 3 root root 4096 Jun 20 15:29 f5bfb696f5a6cad888f7042d01bfe146c0621517c124d58d76e77683efa1034e
drwx------ 3 root root 4096 Jun 20 15:26 f5fa9d5d2066c7fc1c8f80970669634886dcaccc9e73ada33c7c250845d2fe8c
drwx------ 3 root root 4096 Jun 20 14:38 f8fd64fb1e0ab26708d5458dddd2d5a70018034237dfed3db48ada5666fcf77f
drwx------ 3 root root 4096 Jun 20 14:46 faa143ebd7a4079eaa45ddbf17dcfc9163e3035983f2e334e32a60e89452fa94
drwx------ 3 root root 4096 Jun 20 14:48 fb93c0c64e0d4935bf67fc8f70df2b8a4cffe59e294ee8a876dfdf6b57486da5
drwx------ 3 root root 4096 Jun 20 14:46 fd0a420d5655fb7d022c397effdb95968ff7e722c58fcc7915f97e8df47cd080
</code></pre>
<p>Cluster runs on Kubernetes 1.6.4 and Docker 1.12.6.</p>
<p>It seems to be a problem with the Kubernetes garbage collector. Running cAdvisor's /validate endpoint gives me the following message:</p>
<pre><code> None of the devices support 'cfq' I/O scheduler. No disk stats can be reported.
Disk "xvda" Scheduler type "none".
</code></pre>
<p>Running the command <code>journalctl -u kubelet | grep -i garbage</code> also gives an error message:
<code>Jun 20 14:35:21 ip-172-21-4-239 kubelet[1551]: E0620 14:35:21.986898 1551 kubelet.go:1165] Image garbage collection failed: unable to find data for container /</code></p>
<p>Any ideas how to get the garbage collector working again?</p>
| <p>I was able to resolve a similar issue with recurring high IO on the nodes due to <code>du -s /var/lib/docker/overlay/</code> by editing <code>kops cluster.spec</code> with <code>kops edit cluster [cluster_name]</code>.
I added the following under <code>spec</code>:</p>
<pre><code>docker:
  logDriver: json-file
  logLevel: warn
  storage: overlay2
</code></pre>
<p>It looks like, by default, <code>kops</code> configures Docker to use overlay as the storage driver, while Docker recommends <code>overlay2</code> as the newer, more stable, and faster option.</p>
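<p>For completeness, after editing the cluster spec the change still has to be rolled out, roughly like this (a sketch, assuming a standard kops setup with the state store configured and that replacing the nodes is acceptable):</p>
<pre><code># apply the spec change
kops update cluster [cluster_name] --yes

# replace the nodes so Docker restarts with the new storage driver
kops rolling-update cluster [cluster_name] --yes

# verify on a node afterwards
docker info | grep -i "storage driver"
</code></pre>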
|
<p>Prometheus.</p>
<p>Any hint about how to collect the total amount of memory specified in the deployments? I mean the memory you specify as a limit or a request in the deployments.</p>
<p>I can do it using Heapster and InfluxDB, but I have not found how to do it with Prometheus and metrics-server; I cannot find any metric with this information.</p>
<p>Is there any way to gather this information without using heapster?</p>
| <p>You can do it with <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">Kube State Metrics</a>. Just deploy it in your cluster and scrape its metrics with Prometheus.</p>
<p>These metrics give you what you are looking for:</p>
<ul>
<li>kube_pod_container_resource_requests_memory_bytes</li>
<li>kube_pod_container_resource_limits_memory_bytes</li>
</ul>
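<p>For example, once kube-state-metrics is being scraped, queries along these lines should give cluster-wide totals (a sketch, assuming the default metric names listed above):</p>
<pre><code># total memory requested by all containers
sum(kube_pod_container_resource_requests_memory_bytes)

# total memory limits, broken down by namespace
sum(kube_pod_container_resource_limits_memory_bytes) by (namespace)
</code></pre>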
|
<p>I am attempting to create an HA Kubernetes cluster in Azure using <code>kubeadm</code>, as documented here: <code>https://kubernetes.io/docs/setup/independent/high-availability/</code></p>
<p>I have everything working when using only 1 master node, but when changing to 3 master nodes, kube-dns keeps crashing with apiserver issues.</p>
<p>I can see when running <code>kubectl get nodes</code> that the 3 master nodes are ready</p>
<pre><code>NAME STATUS ROLES AGE VERSION
k8s-master-0 Ready master 3h v1.9.3
k8s-master-1 Ready master 3h v1.9.3
k8s-master-2 Ready master 3h v1.9.3
</code></pre>
<p>but the DNS and dashboard pods keep crashing:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
kube-apiserver-k8s-master-0 1/1 Running 0 3h
kube-apiserver-k8s-master-1 1/1 Running 0 2h
kube-apiserver-k8s-master-2 1/1 Running 0 3h
kube-controller-manager-k8s-master-0 1/1 Running 0 3h
kube-controller-manager-k8s-master-1 1/1 Running 0 3h
kube-controller-manager-k8s-master-2 1/1 Running 0 3h
kube-dns-6f4fd4bdf-rmqbf 1/3 CrashLoopBackOff 88 3h
kube-proxy-5phhf 1/1 Running 0 3h
kube-proxy-h5rk8 1/1 Running 0 3h
kube-proxy-ld9wg 1/1 Running 0 3h
kube-proxy-n947r 1/1 Running 0 3h
kube-scheduler-k8s-master-0 1/1 Running 0 3h
kube-scheduler-k8s-master-1 1/1 Running 0 3h
kube-scheduler-k8s-master-2 1/1 Running 0 3h
kubernetes-dashboard-5bd6f767c7-d8kd7 0/1 CrashLoopBackOff 42 3h
</code></pre>
<p>The logs <code>kubectl -n kube-system logs kube-dns-6f4fd4bdf-rmqbf -c kubedns</code> indicate there is an api server issue</p>
<pre><code>I0521 14:40:31.303585 1 dns.go:48] version: 1.14.6-3-gc36cb11
I0521 14:40:31.304834 1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0521 14:40:31.304989 1 server.go:112] FLAG: --alsologtostderr="false"
I0521 14:40:31.305115 1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0521 14:40:31.305164 1 server.go:112] FLAG: --config-map=""
I0521 14:40:31.305233 1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0521 14:40:31.305285 1 server.go:112] FLAG: --config-period="10s"
I0521 14:40:31.305332 1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0521 14:40:31.305394 1 server.go:112] FLAG: --dns-port="10053"
I0521 14:40:31.305454 1 server.go:112] FLAG: --domain="cluster.local."
I0521 14:40:31.305531 1 server.go:112] FLAG: --federations=""
I0521 14:40:31.305596 1 server.go:112] FLAG: --healthz-port="8081"
I0521 14:40:31.305656 1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0521 14:40:31.305792 1 server.go:112] FLAG: --kube-master-url=""
I0521 14:40:31.305870 1 server.go:112] FLAG: --kubecfg-file=""
I0521 14:40:31.305960 1 server.go:112] FLAG: --log-backtrace-at=":0"
I0521 14:40:31.306026 1 server.go:112] FLAG: --log-dir=""
I0521 14:40:31.306109 1 server.go:112] FLAG: --log-flush-frequency="5s"
I0521 14:40:31.306160 1 server.go:112] FLAG: --logtostderr="true"
I0521 14:40:31.306216 1 server.go:112] FLAG: --nameservers=""
I0521 14:40:31.306267 1 server.go:112] FLAG: --stderrthreshold="2"
I0521 14:40:31.306324 1 server.go:112] FLAG: --v="2"
I0521 14:40:31.306375 1 server.go:112] FLAG: --version="false"
I0521 14:40:31.306433 1 server.go:112] FLAG: --vmodule=""
I0521 14:40:31.306510 1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0521 14:40:31.306806 1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0521 14:40:31.306926 1 dns.go:146] Starting endpointsController
I0521 14:40:31.306996 1 dns.go:149] Starting serviceController
I0521 14:40:31.307267 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0521 14:40:31.307350 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0521 14:40:31.807301 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:40:32.307629 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0521 14:41:01.307985 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0521 14:41:01.308227 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0521 14:41:01.807271 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:02.307301 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:02.807294 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:03.307321 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0521 14:41:03.807649 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
</code></pre>
<p>The output from <code>kubectl -n kube-system logs kube-apiserver-k8s-master-0</code> looks relatively normal, except for all the TLS errors</p>
<pre><code> I0521 11:09:53.982465 1 server.go:121] Version: v1.9.7
I0521 11:09:53.982756 1 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
I0521 11:09:55.934055 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.935038 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.938929 1 feature_gate.go:190] feature gates: map[Initializers:true]
I0521 11:09:55.938945 1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0521 11:09:55.942042 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:09:55.948001 1 master.go:225] Using reconciler: lease
W0521 11:10:01.032046 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0521 11:10:03.333423 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0521 11:10:03.340119 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0521 11:10:04.188602 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi
[restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi
[restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/
I0521 11:10:06.424379 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0521 11:10:10.910296 1 serve.go:96] Serving securely on [::]:6443
I0521 11:10:10.919244 1 crd_finalizer.go:242] Starting CRDFinalizer
I0521 11:10:10.919835 1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0521 11:10:10.919940 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0521 11:10:10.920028 1 controller.go:84] Starting OpenAPI AggregationController
I0521 11:10:10.921417 1 available_controller.go:262] Starting AvailableConditionController
I0521 11:10:10.922341 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0521 11:10:10.927021 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49208: EOF
I0521 11:10:10.932960 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49210: EOF
I0521 11:10:10.937813 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49212: EOF
I0521 11:10:10.941682 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49214: EOF
I0521 11:10:10.945178 1 logs.go:41] http: TLS handshake error from 127.0.0.1:56640: EOF
I0521 11:10:10.949275 1 logs.go:41] http: TLS handshake error from 127.0.0.1:56642: EOF
I0521 11:10:10.953068 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49442: EOF
---
I0521 11:10:19.912989 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/admin
I0521 11:10:19.941699 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/edit
I0521 11:10:19.957582 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/view
I0521 11:10:19.968065 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0521 11:10:19.998718 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0521 11:10:20.015536 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0521 11:10:20.032728 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0521 11:10:20.045918 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node
I0521 11:10:20.063670 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0521 11:10:20.114066 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0521 11:10:20.135010 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0521 11:10:20.147462 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0521 11:10:20.159892 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0521 11:10:20.181092 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0521 11:10:20.197645 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0521 11:10:20.219016 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0521 11:10:20.235273 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0521 11:10:20.245893 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0521 11:10:20.257459 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0521 11:10:20.269857 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0521 11:10:20.286785 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 11:10:20.298669 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 11:10:20.310573 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 11:10:20.347321 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 11:10:20.364505 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 11:10:20.365888 1 trace.go:76] Trace[1489234739]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2018-05-21 11:10:15.961686997 +0000 UTC m=+22.097873350) (total time: 4.404137704s):
Trace[1489234739]: [4.000707016s] [4.000623216s] About to store object in database
Trace[1489234739]: [4.404137704s] [403.430688ms] END
E0521 11:10:20.366636 1 client_ca_hook.go:112] configmaps "extension-apiserver-authentication" already exists
I0521 11:10:20.391784 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 11:10:20.404492 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
W0521 11:10:20.405827 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.240.0.231 10.240.0.233]
I0521 11:10:20.423540 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 11:10:20.476466 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 11:10:20.495934 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0521 11:10:20.507318 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 11:10:20.525086 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0521 11:10:20.538631 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 11:10:20.558614 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 11:10:20.586665 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 11:10:20.600567 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 11:10:20.617268 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 11:10:20.628770 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0521 11:10:20.655147 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 11:10:20.672926 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0521 11:10:20.694137 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 11:10:20.718936 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 11:10:20.731868 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 11:10:20.752910 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 11:10:20.767297 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 11:10:20.788265 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0521 11:10:20.801791 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0521 11:10:20.815924 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0521 11:10:20.828531 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0521 11:10:20.854715 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0521 11:10:20.864554 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0521 11:10:20.875950 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0521 11:10:20.900809 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0521 11:10:20.913751 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0521 11:10:20.924284 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 11:10:20.940075 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 11:10:20.969408 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 11:10:20.980017 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 11:10:21.016306 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 11:10:21.047910 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 11:10:21.058829 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0521 11:10:21.083536 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 11:10:21.100235 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 11:10:21.127927 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0521 11:10:21.146373 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 11:10:21.160099 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0521 11:10:21.184264 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 11:10:21.204867 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 11:10:21.224648 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 11:10:21.742427 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 11:10:21.758948 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 11:10:21.801182 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0521 11:10:21.832962 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 11:10:21.860369 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0521 11:10:21.892241 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 11:10:21.931450 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 11:10:21.963364 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 11:10:21.980748 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 11:10:22.003657 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 11:10:22.434855 1 controller.go:538] quota admission added evaluator for: { endpoints}
...
I0521 11:12:06.609728 1 logs.go:41] http: TLS handshake error from 168.63.129.16:64981: EOF
I0521 11:12:21.611308 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65027: EOF
I0521 11:12:36.612129 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65095: EOF
I0521 11:12:51.612245 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65141: EOF
I0521 11:13:06.612118 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65177: EOF
I0521 11:13:21.612170 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65235: EOF
I0521 11:13:36.612218 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65305: EOF
I0521 11:13:51.613097 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65354: EOF
I0521 11:14:06.613523 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65392: EOF
I0521 11:14:21.614148 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65445: EOF
I0521 11:14:36.614143 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65520: EOF
I0521 11:14:51.614204 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49193: EOF
I0521 11:15:06.613995 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49229: EOF
I0521 11:15:21.613962 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49284: EOF
I0521 11:15:36.615026 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49368: EOF
I0521 11:15:51.615991 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49413: EOF
I0521 11:16:06.616993 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49454: EOF
I0521 11:16:21.616947 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49510: EOF
I0521 11:16:36.617859 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49586: EOF
I0521 11:16:51.618921 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49644: EOF
I0521 11:17:06.619768 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49696: EOF
I0521 11:17:21.620123 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49752: EOF
I0521 11:17:36.620814 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49821: EOF
</code></pre>
<p>The output from a second API server, however, looks a lot more broken:</p>
<pre><code>E0521 11:11:15.035138 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.040764 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.717294 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.721875 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.728534 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:15.734572 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.036398 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.041735 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.730094 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.736057 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.741505 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:16.741980 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:17.037722 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
E0521 11:11:17.042680 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]]
</code></pre>
| <p>I eventually got to the bottom of this. I had not copied the same Service Account signing keys onto each master node (<code>sa.key</code>, <code>sa.pub</code>).</p>
<p>These keys are documented here: <a href="https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.7.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.7.md</a></p>
<p><code>a private key for signing ServiceAccount Tokens (sa.key) along with its public key (sa.pub)</code></p>
<p>And the step that I had missed is documented here: <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/</a></p>
<p><code>Copy the contents of /etc/kubernetes/pki/ca.crt, /etc/kubernetes/pki/ca.key, /etc/kubernetes/pki/sa.key and /etc/kubernetes/pki/sa.pub and create these files manually on master1 and master2</code></p>
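<p>In practice that means something like the following on master-0 before joining the other masters (a rough sketch, assuming root SSH access and that <code>/etc/kubernetes/pki</code> already exists on the targets):</p>
<pre><code># copy the CA and the ServiceAccount signing key pair to the other masters
for host in k8s-master-1 k8s-master-2; do
  scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
      /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
      root@${host}:/etc/kubernetes/pki/
done
</code></pre>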
|
<p>We are developing a CI/CD pipeline leveraging Docker/Kubernetes in AWS. This topic is touched in <a href="https://stackoverflow.com/questions/46036853/kubernetes-ci-cd-pipeline">Kubernetes CI/CD pipeline</a>.</p>
<p>We want to create (and destroy) a new environment for each SCM branch, from the moment a Git pull request is opened until it is merged.</p>
<p>We will have a Kubernetes cluster available for that.</p>
<p>During prototyping by the dev team, we came up to Kubernetes namespaces. It looks quite suitable: For each branch, we create a namespace <code>ns-<issue-id></code>.</p>
<p>But that idea was dismissed by the dev-ops prototyper without much explanation, just stating that "we are not doing that because it's complicated due to RBAC", and it's quite hard to get more detailed reasons.</p>
<p>However, for CI/CD purposes we need no RBAC - everything can run with unlimited privileges and no quotas; we just need a separate network for each environment.</p>
<p>Is using namespaces for such purposes a good idea? I am still not sure after reading <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noreferrer">Kubernetes docs on namespaces</a>.</p>
<p>If not, is there a better way? Ideally, we would like to avoid using Helm as it adds a level of complexity we probably don't need.</p>
| <p>We're working on an open source project called <a href="https://jenkins-x.io/" rel="nofollow noreferrer">Jenkins X</a> which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.</p>
<p>When you submit a Pull Request we automatically create a <em>Preview Environment</em>, which is exactly what you describe - a temporary environment used to deploy the pull request for validation, testing & approval before it is merged.</p>
<p>We now use Preview Environments all the time for many reasons and are big fans of them! Each Preview Environment is in a separate namespace so you get all the usual RBAC features from Kubernetes with them.</p>
<p>If you're interested here's <a href="https://jenkins-x.io/demos/devoxx-uk-2018/" rel="nofollow noreferrer">a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps</a> for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).</p>
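<p>Independently of Jenkins X, the namespace-per-branch idea from the question is only a few commands in a CI job; a rough sketch (assuming the CI exposes the issue/PR id as <code>$ISSUE_ID</code> and the manifests live under <code>k8s/</code>):</p>
<pre><code># create (or reuse) the environment for this branch and deploy into it
kubectl create namespace "ns-${ISSUE_ID}" --dry-run -o yaml | kubectl apply -f -
kubectl apply -n "ns-${ISSUE_ID}" -f k8s/

# tear the whole environment down once the pull request is merged or closed
kubectl delete namespace "ns-${ISSUE_ID}"
</code></pre>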
|
<p>I'm trying to get a Kubernetes cluster working with some nodes behind NAT without a public IP address. (Why I need it is a different story.)</p>
<p>There are 3 nodes:</p>
<ol>
<li>Kubernetes cluster master (with public IP address)</li>
<li>Node1 (with public IP address)</li>
<li>Node2 (works behind NAT on my laptop as a VM, no public IP address)</li>
</ol>
<p>All 3 nodes are running Ubuntu 18.04 with Kubernetes v1.10.2(3), Docker 17.12</p>
<p>Kubernetes cluster was created like this:</p>
<p><code>kubeadm init --pod-network-cidr=10.244.0.0/16</code></p>
<p>Flannel network is used:</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p>
<p>Node1 and Node2 joined the cluster:</p>
<pre><code>NAME          STATUS    ROLES     AGE   VERSION
master-node   Ready     master    3h    v1.10.2
node1         Ready     <none>    2h    v1.10.3
node2         Ready     <none>    2h    v1.10.2
</code></pre>
<p>Nginx deployment + service (type=NodePort) created and scheduled for the Node1 (with public IP):</p>
<p><a href="https://pastebin.com/6CrugunB" rel="noreferrer">https://pastebin.com/6CrugunB</a></p>
<pre><code>kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h
my-nginx     NodePort    10.110.202.32   <none>        80:31742/TCP   16m
</code></pre>
<p>This deployment is accessible through <a href="http://MASTER_NODE_PUBLIC_IP:31742" rel="noreferrer">http://MASTER_NODE_PUBLIC_IP:31742</a> and <a href="http://NODE1_PUBLIC_IP:31742" rel="noreferrer">http://NODE1_PUBLIC_IP:31742</a> as expected.</p>
<p>Another Nginx deployment + service (type=NodePort) created and scheduled for the Node2 (without public IP):</p>
<p><a href="https://pastebin.com/AFK42UNW" rel="noreferrer">https://pastebin.com/AFK42UNW</a></p>
<pre><code>kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP        3h
my-nginx           NodePort    10.110.202.32    <none>        80:31742/TCP   22m
nginx-behind-nat   NodePort    10.105.242.178   <none>        80:32350/TCP   22m
</code></pre>
<p>However this service is not accessible through <a href="http://MASTER_NODE_PUBLIC_IP:32350" rel="noreferrer">http://MASTER_NODE_PUBLIC_IP:32350</a> nor <a href="http://NODE1_PUBLIC_IP:32350" rel="noreferrer">http://NODE1_PUBLIC_IP:32350</a>.</p>
<p>It is only accessible through <a href="http://MY_VM_IP:32350" rel="noreferrer">http://MY_VM_IP:32350</a> from my laptop.</p>
<p>Moreover, I cannot get inside the <code>nginx-behind-nat</code> pods via <code>kubectl exec</code> either.</p>
<p>Is there any way to achieve it?</p>
| <p>As mentioned in the Kubernetes <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Kubernetes imposes the following fundamental requirements on any
networking implementation (barring any intentional network
segmentation policies):</p>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
<p>What this means in practice is that you can not just take two
computers running Docker and expect Kubernetes to work. You must
ensure that the fundamental requirements are met.</p>
</blockquote>
<p>By default, the connections from api-server to a node, port or service are just plain HTTP without authentication and encryption.<br>
They can work over HTTPS, but by default, apiserver will not validate the HTTPS endpoint certificate, and therefore, it will not provide any guarantees of integrity and could be subject to man-in-the-middle attacks.</p>
<p>For details about securing connections inside the cluster, please check this <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/" rel="nofollow noreferrer">document</a></p>
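<p>As a quick check of the symptom: <code>kubectl exec</code> and <code>kubectl logs</code> require the API server to open a connection <em>to</em> the kubelet on the target node (port 10250 by default), which is exactly what NAT without port forwarding breaks. Something like this from the master should show the difference (a diagnostic sketch, not a fix; the placeholders are the addresses from the question):</p>
<pre><code># can the API server host reach the kubelet API on each node?
nc -vz NODE1_PUBLIC_IP 10250   # expected to succeed
nc -vz MY_VM_IP 10250          # expected to fail or time out for the node behind NAT
</code></pre>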
|
<p>As you know, in k8s we can set an additional entry in /etc/hosts with hostAliases in deployment.yaml, like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
</code></pre>
<p>But I want the IP to be the Pod's own IP, so that I can assign a hostname to the Pod, e.g.:</p>
<pre><code>hostAliases:
- ip: "$POD_IP"
  hostnames:
  - "myname"
</code></pre>
<p>Is it possible? And how?</p>
| <p>I don't think it's possible that way. Kubernetes validates that <code>hostAliases[].ip</code> must be a valid IP address, so there is no way to insert anything but an IP there.</p>
<p>That said, there are other solutions:</p>
<ul>
<li><p>By default Kubernetes adds an entry for the pod IP and pod hostname to /etc/hosts, so maybe you can use that.</p></li>
<li><p>You can always modify the entrypoint of the container to write that entry in /etc/hosts. Here is an example using the downward API:</p></li>
</ul>
<p><code>
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox:1.24
    command: [ "sh", "-c"]
    args:
    - echo $MY_POD_IP myname >> /etc/hosts;
      <INSERT YOUR ENTRYPOINT HERE>
    env:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never
</code></p>
|
<p>I am learning k8s. My question is: how can I get a service URL in k8s the way the minikube command <code>minikube service xxx --url</code> does?
I ask because when a pod goes down and is created/initiated again, there should be no need to change the URL used to reach the service. When
I deploy a pod as NodePort, I can access the pod with the host IP and port, but if it is re-created, the port changes.</p>
<p>My case is illustrated below: I have </p>
<pre><code>one master(172.16.100.91) and
one node(hostname node3, 172.16.100.96)
</code></pre>
<p>I create the pods and services as below; hellocomm is deployed as NodePort, and helloext as ClusterIP. hellocomm and helloext are both
Spring Boot hello-world applications.</p>
<pre><code>docker build -t jshenmaster2/hellocomm:0.0.2 .
kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080
kubectl expose deployment hellocomm --type NodePort
docker build -t jshenmaster2/helloext:0.0.1 .
kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080
kubectl expose deployment helloext --type ClusterIP
[root@master2 shell]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hellocomm NodePort 10.108.175.143 <none> 8080:31666/TCP 8s run=hellocomm
helloext ClusterIP 10.102.5.44 <none> 8080/TCP 2m run=helloext
[root@master2 hello]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hellocomm-54584f59c5-7nxp4 1/1 Running 0 18m 192.168.136.2 node3
helloext-c455859cc-5zz4s 1/1 Running 0 21m 192.168.136.1 node3
</code></pre>
<p>In the above, my pod is deployed on node3 (172.16.100.96), so I can access hellocomm via 172.16.100.96:31666/hello.
With this scenario, one can easily see that when node3 goes down and a new pod is created/initiated, the port changes as well,
so my client loses the connection. I do not want this solution.</p>
<p>My current question: since helloext is deployed as ClusterIP and it is also a service as shown above, does that mean ClusterIP
10.102.5.44 and port 8080 would be the service URL, <a href="http://10.102.5.44:8080/hello" rel="noreferrer">http://10.102.5.44:8080/hello</a>?</p>
<p>Do I need to create the service with a YAML file again? What is the difference between a service created by command and one created from a YAML
file? How do I write the following YAML file if I have to create the service by YAML?</p>
<p>Below is the YAML definition template I need to fill in. How should I fill it?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: string helloext
  namespace: string default
  labels:
  - name: string helloext
  annotations:
  - name: string hello world
spec:
  selector: [] ?
  type: string ?
  clusterIP: string anything I could give?
  sessionAffinity: string ? (yes or no)
  ports:
  - name: string helloext
    protocol: string tcp
    port: int 8081? (port used by host machine)
    targetPort: int 8080? (spring boot uses 8080)
    nodePort: int ?
status: since I am not using loadBalancer in deployment, I could forget this.
  loadBalancer:
    ingress:
      ip: string
      hostname: string
</code></pre>
| <p>NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service. By default the port is allocated randomly - that's why you get a new one every time the service is re-created. However, you can specify a port as well (3rd paragraph <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">here</a>) - and you will be able to access the service on the same port even after it has been re-created.</p>
<p>The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can <code>exec</code> / <code>ssh</code> into any running container/node and try it out.</p>
<p>Yaml files can be version controlled, documented, templatized (<a href="https://www.helm.sh/" rel="nofollow noreferrer">Helm</a>), etc. </p>
<p>Check <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core</a> for details on each field.</p>
<p><strong>EDIT</strong>:
More detailed info on services here: <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0</a></p>
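<p>As an illustration of how the template in the question might be filled in (a sketch only: it relies on the <code>run=helloext</code> label that <code>kubectl run</code> put on the pods, and picks an arbitrary fixed nodePort in the default 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: helloext
  namespace: default
spec:
  type: NodePort
  selector:
    run: helloext        # must match the pod labels created by `kubectl run helloext`
  ports:
  - name: http
    protocol: TCP
    port: 8080           # port exposed on the ClusterIP
    targetPort: 8080     # containerPort of the Spring Boot app
    nodePort: 31000      # fixed (hypothetical choice), so it survives re-creation of the service
</code></pre>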
|
<p>I have a chart with two containers in it:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: catalog
  labels:
    app: catalog
    chart: catalog-0.1.0
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: catalog:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      - name: myproxy
        image: myproxy:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8008
          protocol: TCP
        env:
        - name: PROXY_PORT
          value: '8080'
---
apiVersion: v1
kind: Service
metadata:
  name: catalog
  labels:
    app: catalog
    chart: catalog-0.1.0
    heritage: Tiller
spec:
  type: NodePort
  ports:
  - port: 8008
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: catalog
</code></pre>
<p>I need to redirect all outbound traffic from the catalog container to the myproxy container via localhost.</p>
<p>Then, inside that proxy container, I want to decide whether catalog may send the requests, log them, etc.</p>
<p>Please advise whether it is possible to implement this using Kubernetes.</p>
<p>Thanks.</p>
<hr>
<p><strong>Update:</strong></p>
<p>The problem is that I cannot change the code in the catalog container to make it send queries to localhost.</p>
<p>The container also does not have iptables available to do something like this:</p>
<pre><code>containers:
- name: catalog
  image: catalog:v1
  imagePullPolicy: IfNotPresent
  command:
  - 'iptables -t nat -A OUTPUT -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8008'
  ports:
  - name: http
    containerPort: 8080
    protocol: TCP
</code></pre>
<p>Ideally this would be done with Kubernetes.</p>
| <p>If the catalog application respects the <code>http_proxy</code> environment variable, it is easy. Just add an environment variable to the catalog container:</p>
<pre><code>- name: catalog
  image: catalog:v1
  imagePullPolicy: IfNotPresent
  ports:
  - name: http
    containerPort: 8080
    protocol: TCP
  env:
  - name: HTTP_PROXY
    value: localhost:8008
</code></pre>
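<p>If the application does not honour <code>http_proxy</code>, another common pattern (the one service meshes such as Istio use) is to set up the redirect from an init container instead of from the application container, since all containers in a pod share one network namespace. A rough sketch, assuming the cluster allows the <code>NET_ADMIN</code> capability and that an image which can install iptables is acceptable - the image choice and ports below are illustrative:</p>
<pre><code>spec:
  initContainers:
  - name: init-redirect
    image: alpine:3.7              # hypothetical choice; any image providing iptables works
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
    command:
    - sh
    - -c
    # redirect the pod's outbound traffic on port 8080 to myproxy on 8008;
    # a real setup must also exclude the proxy's own traffic (e.g. with an
    # iptables owner match), which this sketch omits
    - >
      apk add --no-cache iptables &&
      iptables -t nat -A OUTPUT -p tcp --dport 8080 -j REDIRECT --to-ports 8008
</code></pre>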
|
<p>I have a problem trying exec'ing into a container:</p>
<pre><code>kubectl exec -it busybox-68654f944b-hj672 -- nslookup kubernetes
Error from server: error dialing backend: dial tcp: lookup worker2 on 127.0.0.53:53: server misbehaving
</code></pre>
<p>Or getting logs from a container:</p>
<pre><code>kubectl -n kube-system logs kube-dns-598d7bf7d4-p99qr kubedns
Error from server: Get https://worker3:10250/containerLogs/kube-system/kube-dns-598d7bf7d4-p99qr/kubedns: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
</code></pre>
<p>I'm running out of ideas...
I have mostly followed kubernetes-the-hard-way, but have installed it on DigitalOcean and am using <code>Flannel</code> for pod networking (I'm also using <code>digitalocean-cloud-manager</code>, which seems to be working well).</p>
<p>Also, <code>kube-proxy</code> seems to work: everything looks good in the logs, and the <code>iptables</code> config looks good (to me, a noob).</p>
<h3>Networks:</h3>
<ul>
<li>10.244.0.0/16 Flannel / Pod network</li>
<li>10.32.0.0/24 kube-proxy(<em>?</em>) / Service cluster </li>
<li>kube3 <em>206.x.x.211</em> / <em>10.133.55.62</em></li>
<li>kube1 <em>206.x.x.80</em> / <em>10.133.52.77</em></li>
<li>kube2 <em>206.x.x.213</em> / <em>10.133.55.73</em></li>
<li>worker1 <em>167.x.x.148</em> / <em>10.133.56.88</em></li>
<li>worker3 <em>206.x.x.121</em> / <em>10.133.55.220</em></li>
<li>worker2 <em>206.x.x.113</em> / <em>10.133.56.89</em></li>
</ul>
<h2>So, my logs:</h2>
<h3>kube-dns:</h3>
<pre><code>E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
I0522 12:22:32 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0522 12:22:34 dns.go:167] Timeout waiting for initialization
</code></pre>
<h3>Kube-proxy:</h3>
<pre><code>I0522 12:36:37 flags.go:27] FLAG: --alsologtostderr="false"
I0522 12:36:37 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0522 12:36:37 flags.go:27] FLAG: --cleanup="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-iptables="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-ipvs="true"
I0522 12:36:37 flags.go:27] FLAG: --cluster-cidr=""
I0522 12:36:37 flags.go:27] FLAG: --config="/var/lib/kube-proxy/kube-proxy-config.yaml"
I0522 12:36:37 flags.go:27] FLAG: --config-sync-period="15m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max="0"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max-per-core="32768"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-min="131072"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --feature-gates=""
I0522 12:36:37 flags.go:27] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0522 12:36:37 flags.go:27] FLAG: --healthz-port="10256"
I0522 12:36:37 flags.go:27] FLAG: --help="false"
I0522 12:36:37 flags.go:27] FLAG: --hostname-override=""
I0522 12:36:37 flags.go:27] FLAG: --iptables-masquerade-bit="14"
I0522 12:36:37 flags.go:27] FLAG: --iptables-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --iptables-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-scheduler=""
I0522 12:36:37 flags.go:27] FLAG: --ipvs-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-burst="10"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-qps="5"
I0522 12:36:37 flags.go:27] FLAG: --kubeconfig=""
I0522 12:36:37 flags.go:27] FLAG: --log-backtrace-at=":0"
I0522 12:36:37 flags.go:27] FLAG: --log-dir=""
I0522 12:36:37 flags.go:27] FLAG: --log-flush-frequency="5s"
I0522 12:36:37 flags.go:27] FLAG: --logtostderr="true"
I0522 12:36:37 flags.go:27] FLAG: --masquerade-all="false"
I0522 12:36:37 flags.go:27] FLAG: --master=""
I0522 12:36:37 flags.go:27] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0522 12:36:37 flags.go:27] FLAG: --nodeport-addresses="[]"
I0522 12:36:37 flags.go:27] FLAG: --oom-score-adj="-999"
I0522 12:36:37 flags.go:27] FLAG: --profiling="false"
I0522 12:36:37 flags.go:27] FLAG: --proxy-mode=""
I0522 12:36:37 flags.go:27] FLAG: --proxy-port-range=""
I0522 12:36:37 flags.go:27] FLAG: --resource-container="/kube-proxy"
I0522 12:36:37 flags.go:27] FLAG: --stderrthreshold="2"
I0522 12:36:37 flags.go:27] FLAG: --udp-timeout="250ms"
I0522 12:36:37 flags.go:27] FLAG: --v="4"
I0522 12:36:37 flags.go:27] FLAG: --version="false"
I0522 12:36:37 flags.go:27] FLAG: --vmodule=""
I0522 12:36:37 flags.go:27] FLAG: --write-config-to=""
I0522 12:36:37 feature_gate.go:226] feature gates: &{{} map[]}
I0522 12:36:37 iptables.go:589] couldn't get iptables-restore version; assuming it doesn't support --wait
I0522 12:36:37 server_others.go:140] Using iptables Proxier.
I0522 12:36:37 proxier.go:346] minSyncPeriod: 0s, syncPeriod: 30s, burstSyncs: 2
I0522 12:36:37 server_others.go:174] Tearing down inactive rules.
I0522 12:36:37 server.go:444] Version: v1.10.2
I0522 12:36:37 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0522 12:36:37 server.go:470] Running in resource-only container "/kube-proxy"
I0522 12:36:37 healthcheck.go:309] Starting goroutine for healthz on 0.0.0.0:10256
I0522 12:36:37 server.go:591] getConntrackMax: using conntrack-min
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0522 12:36:37 conntrack.go:52] Setting nf_conntrack_max to 131072
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0522 12:36:37 bounded_frequency_runner.go:170] sync-runner Loop running
I0522 12:36:37 config.go:102] Starting endpoints config controller
I0522 12:36:37 config.go:202] Starting service config controller
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0522 12:36:37 reflector.go:202] Starting reflector *core.Endpoints (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Endpoints from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:202] Starting reflector *core.Service (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Service from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kubernetes-dashboard:" to [10.244.0.2:8443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/hostnames:" to [10.244.0.3:9376 10.244.0.4:9376 10.244.0.4:9376]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/kubernetes:https" to [10.133.52.77:6443 10.133.55.62:6443 10.133.55.73:6443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns" to []
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns-tcp" to []
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for service config controller
I0522 12:36:37 config.go:210] Calling handler.OnServiceSynced()
I0522 12:36:37 proxier.go:623] Not syncing iptables until Services and Endpoints have been received from master
I0522 12:36:37 proxier.go:619] syncProxyRules took 38.306µs
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for endpoints config controller
I0522 12:36:37 config.go:110] Calling handler.OnEndpointsSynced()
I0522 12:36:37 service.go:310] Adding new service port "default/kubernetes:https" at 10.32.0.1:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns" at 10.32.0.10:53/UDP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.32.0.10:53/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kubernetes-dashboard:" at 10.32.0.175:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "default/hostnames:" at 10.32.0.16:80/TCP
I0522 12:36:37 proxier.go:642] Syncing iptables rules
I0522 12:36:37 iptables.go:321] running iptables-save [-t filter]
I0522 12:36:37 iptables.go:321] running iptables-save [-t nat]
I0522 12:36:37 iptables.go:381] running iptables-restore [--noflush --counters]
I0522 12:36:37 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/hostnames"
I0522 12:36:37 proxier.go:619] syncProxyRules took 62.713913ms
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
</code></pre>
<h3>iptables -L -t nat</h3>
<pre><code>Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 anywhere
RETURN all -- 10.244.0.0/16 10.244.0.0/16
MASQUERADE all -- 10.244.0.0/16 !base-address.mcast.net/4
RETURN all -- !10.244.0.0/16 worker3/24
MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16
CNI-9f557b5f70a3ef9b57012dc9 all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
CNI-3f77e9111033967f6fe3038c all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-3f77e9111033967f6fe3038c (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-9f557b5f70a3ef9b57012dc9 (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x8000
Chain KUBE-MARK-MASQ (10 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-372W2QPHULAJK7KN (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.52.77 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255 tcp to:10.133.52.77:6443
Chain KUBE-SEP-F5C5FPCVD73UOO2K (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.73 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255 tcp to:10.133.55.73:6443
Chain KUBE-SEP-LFOBDGSNKNVH4XYX (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.62 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255 tcp to:10.133.55.62:6443
Chain KUBE-SEP-NBPTKIZVPOJSUO47 (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
Chain KUBE-SEP-OT5RYZRAA2AMYTNV (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.2 anywhere /* kube-system/kubernetes-dashboard: */
DNAT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: */ tcp to:10.244.0.2:8443
Chain KUBE-SEP-XDZOTYYMKVEAAZHH (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.3 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.3:9376
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-SVC-XGLOHA7QRQ3V22RZ tcp -- anywhere 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-SVC-NWV5X2332I4OT4T3 tcp -- anywhere 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.33332999982
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */
Chain KUBE-SVC-NWV5X2332I4OT4T3 (1 references)
target prot opt source destination
KUBE-SEP-XDZOTYYMKVEAAZHH all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.33332999982
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.50000000000
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */
Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (1 references)
target prot opt source destination
KUBE-SEP-OT5RYZRAA2AMYTNV all -- anywhere anywhere /* kube-system/kubernetes-dashboard: */
</code></pre>
<h3>kubelet</h3>
<pre><code>W12:43:36 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:36 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:46 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:46 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:56 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:56 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:44:06 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:44:06 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<h1>Config:</h1>
<h2>Worker:</h2>
<h3>kubelet:</h3>
<p>systemd service:</p>
<pre><code>/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--register-node=true \
--v=2 \
--cloud-provider=external \
--allow-privileged=true
</code></pre>
<p>kubelet-config.yaml:</p>
<pre><code>kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "10.244.0.0/16"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/worker3.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/worker3-key.pem"
</code></pre>
<h3>kube-proxy:</h3>
<p>systemd service:</p>
<pre><code>ExecStart=/usr/local/bin/kube-proxy \
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml -v 4
</code></pre>
<p>kube-proxy-config.yaml:</p>
<pre><code>kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"
</code></pre>
<p>kubeconfig:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ASLDJL...ALKJDS=
server: https://206.x.x.7:6443
name: kubernetes-the-hard-way
contexts:
- context:
cluster: kubernetes-the-hard-way
user: system:kube-proxy
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:kube-proxy
user:
client-certificate-data: ASDLJAL ... ALDJS
client-key-data: LS0tLS1CRUdJ...ASDJ
</code></pre>
<h2>Controller:</h2>
<h3>kube-apiserver:</h3>
<pre><code>ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=10.133.55.62 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--enable-swagger-ui=true \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://10.133.55.73:2379,https://10.133.52.77:2379,https://10.133.55.62:2379 \
--event-ttl=1h \
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/service-account.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2
</code></pre>
<h3>kube-controller-manager</h3>
<pre><code>ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.244.0.0/16 \
--allocate-node-cidrs=true \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
</code></pre>
<h3>Flannel config/Log:</h3>
<p><a href="https://pastebin.com/hah0uSFX" rel="noreferrer">https://pastebin.com/hah0uSFX</a>
(since the post is too long!)</p>
<h2>Edit:</h2>
<p><code>route</code>:</p>
<pre><code>Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 0 0 0 eth0
10.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.133.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cnio0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
206.189.96.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0
</code></pre>
<p><code>ip route get 10.32.0.1</code>: <code>10.32.0.1 via 206.189.96.1 dev eth0 src 206.189.96.121 uid 0</code></p>
<pre><code>curl -k https://10.32.0.1:443/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.2",
"gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
"gitTreeState": "clean",
"buildDate": "2018-04-27T09:10:24Z",
"goVersion": "go1.9.3",
"compiler": "gc",
"platform": "linux/amd64"
}
</code></pre>
<p>A reboot brought up all workers and pods, including kube-dns, so they are not crashing anymore, but when trying to do exec or run, I still have some issues:</p>
<pre><code>kubectl run test --image=ubuntu -it --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: error dialing backend: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
Error from server: Get https://worker3:10250/containerLogs/default/test-6954947c4f-6gkdl/test: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehavin
</code></pre>
| <blockquote>
<p>I have a problem trying exec'ing into a container</p>
</blockquote>
<p>As you can see, Kubernetes is trying to connect to your nodes using names like <code>worker3</code>, which cannot be resolved in your network.</p>
<p>You have 2 ways to fix it:</p>
<ol>
<li>Use real, resolvable FQDNs for all your nodes. VMs in most clouds have resolvable DNS names, but it looks like in DO they do not, so you need to create domain names manually and point them at your servers. Have a look at the <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-dns" rel="noreferrer">DO DNS service</a>. I recommend this way; it is always good to have your own DNS names for your infrastructure.</li>
<li>Make the node names (like <code>worker3</code>) resolvable for the Kubernetes components, for example with a custom DNS server or with records in <code>/etc/hosts</code> (see the sketch below the list).</li>
</ol>
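<p>For the second option, a minimal sketch, assuming placeholder private IPs that you would replace with your droplets' actual addresses; the entries go into <code>/etc/hosts</code> on the controller nodes (where kube-apiserver runs), since that is the component dialing <code>worker3:10250</code>:</p>
<pre><code># /etc/hosts on each controller -- placeholder IPs, replace with your workers' private IPs
10.133.55.70   worker1
10.133.55.71   worker2
10.133.55.72   worker3
</code></pre>
<p>After that, <code>kubectl exec</code>, <code>kubectl logs</code> and <code>kubectl run -it</code> should be able to reach the kubelet on the workers.</p>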
<p><strong>UPD:</strong></p>
<p>From @Richard87, for future reference: the third way is to use option <code>--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname</code>.</p>
|
<p>I have a chart in it two containers:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: catalog
labels:
app: catalog
chart: catalog-0.1.0
heritage: Tiller
spec:
replicas: 1
selector:
matchLabels:
app: catalog
template:
metadata:
labels:
app: catalog
spec:
containers:
- name: catalog
image: catalog:v1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: myproxy
image: myproxy:v1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8008
protocol: TCP
env:
- name: PROXY_PORT
value: '8080'
---
apiVersion: v1
kind: Service
metadata:
name: catalog
labels:
app: catalog
chart: catalog-0.1.0
heritage: Tiller
spec:
type: NodePort
ports:
- port: 8008
targetPort: http
protocol: TCP
name: http
selector:
app: catalog
</code></pre>
<p>I need to redirect all outbound traffic from the catalog container to the myproxy container over localhost.</p>
<p>Inside the proxy container I then want to decide whether the catalog is allowed to send requests, log them, etc.</p>
<p>Please advise whether it is possible to implement this with Kubernetes.</p>
<p>Thanks.</p>
<hr>
<p><strong>Update:</strong></p>
<p>The problem is that I cannot change the code in the catalog container to send its queries to localhost.</p>
<p>The container also does not have iptables available, so I cannot do something like this inside it:</p>
<pre><code>containers:
- name: catalog
image: catalog:v1
imagePullPolicy: IfNotPresent
command:
- 'iptables -t nat -A OUTPUT -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8008'
ports:
- name: http
containerPort: 8080
protocol: TCP
</code></pre>
<p>Ideally done with kubernetes</p>
| <p>For your update: if you need to manipulate iptables, you can add an <code>initContainer</code> to the pod, for example:</p>
<pre><code> initContainers:
- image: centos
imagePullPolicy: Always
name: run-iptables
securityContext:
privileged: true
command:
- "sh"
- "-c"
- 'yum -y install iptables; iptables -t nat -A OUTPUT -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8008'
</code></pre>
<p>Since all containers in a pod share the same network namespace, the rule affects the catalog container as well.</p>
|
<p>I need to resize a bunch of PVCs. It seems the easiest way to do it is through
the ExpandPersistentVolumes feature. I am however having trouble getting the
configuration to cooperate.</p>
<p>The <code>ExpandPersistentVolumes</code> feature gate is set in kubelet on all three
masters, as shown:</p>
<p>(output trimmed to relevant bits for sanity)</p>
<pre><code>$ parallel-ssh -h /tmp/masters -P "ps aux | grep feature"
172.20.53.249: root 15206 7.4 0.5 619888 83952 ? Ssl 19:52 0:02 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[1] 12:53:08 [SUCCESS] 172.20...
172.20.58.111: root 17798 4.5 0.5 636280 87328 ? Ssl 19:51 0:04 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[2] 12:53:08 [SUCCESS] 172.20...
172.20.53.240: root 9287 4.0 0.5 645276 90528 ? Ssl 19:50 0:06 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[3] 12:53:08 [SUCCESS] 172.20..
</code></pre>
<p>The apiserver has the <code>PersistentVolumeClaimResize</code> admission controller, as shown:</p>
<pre><code>$ kubectl --namespace=kube-system get pod -o yaml | grep -i admission
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
</code></pre>
<p>However, when I create or edit a storage class to add <code>allowVolumeExpansion</code>,
it is removed on save. For example:</p>
<pre><code>$ cat new-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: null
labels:
k8s-addon: storage-aws.addons.k8s.io
name: gp2-2
selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2
parameters:
encrypted: "true"
kmsKeyId: arn:aws:kms:us-west-2:<omitted>
type: gp2
zone: us-west-2a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
allowVolumeExpansion: true
$ kubectl create -f new-sc.yaml
storageclass "gp2-2" created
$ kubectl get sc gp2-2 -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: 2018-05-22T20:00:17Z
labels:
k8s-addon: storage-aws.addons.k8s.io
name: gp2-2
resourceVersion: "2546166"
selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2-2
uid: <omitted>
parameters:
encrypted: "true"
kmsKeyId: arn:aws:kms:us-west-2:<omitted>
type: gp2
zone: us-west-2a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
</code></pre>
<p>What am I missing? What is erasing this key from my storageclass configuration?</p>
<p><strong>EDIT:</strong> Here is the command used by the kube-apiserver pods. It does not say anything about feature gates. The cluster was launched using Kops.</p>
<pre><code>- /bin/sh
- -c
- mkfifo /tmp/pipe; (tee -a /var/log/kube-apiserver.log < /tmp/pipe & ) ; exec
/usr/local/bin/kube-apiserver --address=127.0.0.1 --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
--allow-privileged=true --anonymous-auth=false --apiserver-count=3 --authorization-mode=RBAC
--basic-auth-file=/srv/kubernetes/basic_auth.csv --client-ca-file=/srv/kubernetes/ca.crt
--cloud-provider=aws --etcd-cafile=/srv/kubernetes/ca.crt --etcd-certfile=/srv/kubernetes/etcd-client.pem
--etcd-keyfile=/srv/kubernetes/etcd-client-key.pem --etcd-servers-overrides=/events#https://127.0.0.1:4002
--etcd-servers=https://127.0.0.1:4001 --insecure-port=8080 --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.cert --proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key
--requestheader-allowed-names=aggregator --requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.cert
--requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User --secure-port=443 --service-cluster-ip-range=100.64.0.0/13
--storage-backend=etcd3 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key
--token-auth-file=/srv/kubernetes/known_tokens.csv --v=1 > /tmp/pipe 2>&1
</code></pre>
| <p>It could happen if you did not enable the alpha feature gate for this option.</p>
<p>Did you set the <code>--feature-gates</code> option for <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a>?</p>
<pre><code>--feature-gates mapStringBool - A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
...
ExpandPersistentVolumes=true|false (ALPHA - default=false)
...
</code></pre>
<p>Update: If you don't see this option in the command line arguments, you need to add it (<code>--feature-gates=ExpandPersistentVolumes=true</code>). </p>
<p>In case you run kube-apiserver as a static pod, you should edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and add the feature-gate option to the other arguments; kube-apiserver will restart automatically.</p>
<p>In case you run kube-apiserver as a process maintained by systemd, you should edit <code>kube-apiserver.service</code> or the service options <code>$KUBE_API_ARGS</code> in a separate file, and append the feature-gate option there. Restart the service with the <code>systemctl restart kube-apiserver.service</code> command.</p>
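<p>As a hedged sketch, the only change needed in the command you posted is one extra flag appended to the kube-apiserver invocation (existing flags abbreviated with "..."):</p>
<pre><code>/usr/local/bin/kube-apiserver --address=127.0.0.1 \
  --admission-control=...,PersistentVolumeClaimResize,ResourceQuota \
  --feature-gates=ExpandPersistentVolumes=true \
  ...   # all other existing flags stay unchanged
</code></pre>
<p>Since the cluster was launched with Kops, it may be cleaner to set the feature gate through the Kops cluster spec (which, as far as I know, exposes feature gates for the API server) and roll the masters, rather than editing the generated manifest by hand.</p>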
<p>After enabling it, you can create a StorageClass object with allowVolumeExpansion option:</p>
<pre><code># kubectl get sc -o yaml --export
apiVersion: v1
items:
- allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: 2018-05-23T14:38:43Z
labels:
k8s-addon: storage-aws.addons.k8s.io
name: gp2-2
namespace: ""
resourceVersion: "1385"
selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2-2
uid: fe516dcf-5e96-11e8-a86d-42010a9a0002
parameters:
encrypted: "true"
kmsKeyId: arn:aws:kms:us-west-2:<omitted>
type: gp2
zone: us-west-2a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
|
<p>I have a Kubernetes ReplicationController which is running 2 pods of a PHP application. Now I need to schedule 3 cron jobs which run some PHP scripts in those pods. How can I achieve that? (The base image for the container is centos:7.)</p>
| <p>As said in the comments, your best option is to create <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJobs</a>.</p>
<p>If you don't want to expose your PHP functions, you can always use a container that executes commands in your PHP pods.</p>
<p>Here is an example that creates a CronJob. It runs <code>kubectl exec</code> for every pod with the label <code>app=my-php</code>, calling <code>ls</code> in each pod.</p>
<pre><code> apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello-cron
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: kubectl
image: lachlanevenson/k8s-kubectl
command:
- /bin/sh
- -c
args:
- kubectl get pod -l app=my-php -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -i kubectl exec {} ls
restartPolicy: Never
</code></pre>
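<p>Note that on an RBAC-enabled cluster the CronJob's service account needs permission to list pods and to create <code>pods/exec</code>. A minimal sketch (names are illustrative; create these in the namespace where your PHP pods run):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: cron-exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cron-exec
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cron-exec
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cron-exec
subjects:
- kind: ServiceAccount
  name: cron-exec
</code></pre>
<p>Then add <code>serviceAccountName: cron-exec</code> to the CronJob's pod template spec above.</p>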
|
<p>I have a cluster that scales based on the CPU usage of my pods. The documentation states that I should prevent <em>thrashing</em> by not scaling too fast. I want to play around with the autoscaling speed, but I can't seem to find where to apply the following flags:</p>
<ul>
<li>--horizontal-pod-autoscaler-downscale-delay</li>
<li>--horizontal-pod-autoscaler-upscale-delay</li>
</ul>
<p>My goal is to set the cooldown timer lower than <em>5m</em> or <em>3m</em>. Does anyone know how this is done or where I can find documentation on how to configure it? Also, if this has to be configured in the HPA YAML file, does anyone know what definition should be used for this or where I can find documentation on how to configure the YAML?
<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="noreferrer">This is the link to the Kubernetes documentation about scaling cooldowns I used.</a></p>
| <p>The HPA controller is part of the controller manager and you'll need to pass the flags to it, see also the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="noreferrer">docs</a>. It is not something you'd do via kubectl. It's part of the control plane (master) so depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible, in Minikube you'd ssh into the node, etc.</p>
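<p>For a self-hosted cluster set up with kubeadm, a hedged sketch of where the flags would go (the file path is the kubeadm default; adjust to your own setup) is the controller manager's static pod manifest:</p>
<pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, illustrative)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ... existing flags ...
    - --horizontal-pod-autoscaler-downscale-delay=2m
    - --horizontal-pod-autoscaler-upscale-delay=1m
</code></pre>
<p>The kubelet picks up the change and restarts the controller manager automatically.</p>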
|
<p>I've set up a Kubernetes cluster using <a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">kubespray</a>, and now I am trying to follow <a href="https://blog.heptio.com/how-to-deploy-web-applications-on-kubernetes-with-heptio-contour-and-lets-encrypt-d58efbad9f56" rel="nofollow noreferrer">this guide</a>.</p>
<pre><code>root@node1 ~ # kubectl get -n heptio-contour service contour -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour LoadBalancer 10.233.55.94 <pending> 80:32414/TCP,443:30149/TCP 42m app=contour
</code></pre>
<p>It seems <code>EXTERNAL-IP</code> is pending because I am on a bare metal machine (not AWS/GKE etc.)</p>
<p>What do I need to do in order to get an external ip showing there?</p>
| <p><a href="https://kubernetes.io" rel="noreferrer">Kubernetes</a> offers three ways to expose a service:</p>
<p>1) <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">L4 LoadBalancer</a>: Available only on cloud providers such as GCE and AWS</p>
<p>2) Expose Service via NodePort: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">The NodePort</a> directive allocates a port on every worker node, which proxy the traffic to the respective Pod.</p>
<p>3) L7 Ingress: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">The Ingress</a> is a dedicated load balancer (eg. nginx, HAProxy, traefik, vulcand) that redirects incoming HTTP/HTTPS traffic to the respective endpoints</p>
<p>Kubernetes does not offer implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters.</p>
<p>If you’re not running Kubernetes cluster on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state regardless of the time they were created.</p>
<p>The reason is that there is no built-in integration between the external network and a bare-metal cluster: nothing allocates an external IP for the Service or routes traffic from the outside world into the cluster's internal network.</p>
<p>There are external projects to provide bare-metal even in federation clusters mode to be part of standalone or hybrid solution.</p>
<p>It depends on the scale and the maturity of projects you have, so it should begin with choosing a proper load balancer or <a href="https://en.wikipedia.org/wiki/Virtual_IP_address" rel="noreferrer">VIP</a> provider:</p>
<p><a href="https://github.com/google/metallb" rel="noreferrer">https://github.com/google/metallb</a></p>
<p><a href="https://docs.traefik.io/" rel="noreferrer">https://docs.traefik.io/</a></p>
<p><a href="https://github.com/kubernetes/contrib/tree/master/keepalived-vip" rel="noreferrer">https://github.com/kubernetes/contrib/tree/master/keepalived-vip</a></p>
<p><a href="http://vulcand.github.io/" rel="noreferrer">http://vulcand.github.io/</a></p>
<p>and deprecated:</p>
<p><a href="http://www.linuxvirtualserver.org/software/ipvs.html" rel="noreferrer">http://www.linuxvirtualserver.org/software/ipvs.html</a></p>
<p>Please note that in federated setups (more than one bare-metal Kubernetes cluster) you also need to export the IP address of each physical machine to the central API provider, which is probably not covered by the links above.</p>
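<p>As a quick workaround while you decide on one of the options above, you can simply use the NodePorts that were already allocated for the Contour service (32414/30149 in your output), or switch the Service type explicitly; a hedged example:</p>
<pre><code># Reach Contour through any node's IP on the allocated NodePort
curl http://<any-node-ip>:32414/

# Or patch the service to type NodePort so it no longer waits for an external IP
kubectl -n heptio-contour patch service contour -p '{"spec": {"type": "NodePort"}}'
</code></pre>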
|
<p>I am working on a Flask application which communicates with Google cloud storage using python client library. Currently, on a local development, I am using a service account for authenticating the application and making interactions.</p>
<p>I am planning to build a Docker image of the application and deploy it on a Kubernetes cluster. My concern is: how should I provide the Google credentials?</p>
<p>I might be wrong here, but when I ran this Python file on a VM it was able to create a new bucket without needing credentials or a service account.</p>
<pre><code># Imports the Google Cloud client library
from google.cloud import storage
# Instantiates a client
storage_client = storage.Client()
# The name for the new bucket
bucket_name = 'my-new-bucket'
# Creates the new bucket
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))
</code></pre>
<p>If I dockerize the same code into a Flask application and deploy it on a cluster, will it still pick up the default Google credentials?
I would like to know the best practice for doing this on a Kubernetes cluster.</p>
| <p>The best way is to deploy a Kubernetes Secret:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: google-application-credentials
data:
  key.json: "<base64-encoded contents of your service account key.json>"
</code></pre>
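<p>Or, as a hedged shortcut (the key file path is illustrative), create the same secret straight from the downloaded key file:</p>
<pre><code>kubectl create secret generic google-application-credentials \
  --from-file=key.json=/path/to/your-service-account-key.json
</code></pre>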
<p>For the Pod / Deployment, mount the secret and point the client at the key file:</p>
<pre><code>    volumeMounts:
    - name: google-application-credentials
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: google-application-credentials
    secret:
      secretName: google-application-credentials
</code></pre>
<p>Then set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable in the container to the mounted file path (for example <code>/var/secrets/google/key.json</code>). The Google client library picks it up automatically, and from Python you can also read it with <code>os.environ['GOOGLE_APPLICATION_CREDENTIALS']</code>.</p>
<p>Once you build the image, push it to the container registry.</p>
<p>That should work.</p>
|
<p>What I see: Kubernetes takes into account only the memory used by its components when scheduling new Pods, and considers the remaining memory as free, even if it's being used by other system processes outside Kubernetes. So, when creating new deployments, it attempts to schedule new pods on a suffocated node.</p>
<p>What I expected to see: Kubernetes <strong>automatically</strong> take in consideration the total memory usage (by kubernetes components + system processes) and schedule it on another node.</p>
<p>As a work-around, is there a configuration parameter that I need to set or is it a bug?</p>
| <p>Yes, there are a few parameters for that:
you can manually reserve memory and CPU for Kubernetes components and for your system daemons, so that only the remainder is treated as allocatable for pods. <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#example-scenario" rel="nofollow noreferrer">The documentation</a> shows how it works with an example:</p>
<h2>Example Scenario</h2>
<p>Here is an example to illustrate Node Allocatable computation:</p>
<ul>
<li>Node has <code>32Gi</code> of <code>memory</code>, <code>16 CPUs</code> and <code>100Gi</code> of <code>Storage</code></li>
<li><code>--kube-reserved</code> is set to <code>cpu=1,memory=2Gi,ephemeral-storage=1Gi</code></li>
<li><code>--system-reserved</code> is set to <code>cpu=500m,memory=1Gi,ephemeral-storage=1Gi</code></li>
<li><code>--eviction-hard</code> is set to <code>memory.available<500Mi,nodefs.available<10%</code></li>
</ul>
<p>Under this scenario, <code>Allocatable</code> will be <code>14.5 CPUs</code>, <code>28.5Gi</code> of memory and <code>98Gi</code> of local storage. Scheduler ensures that the total memory <code>requests</code> across all pods on this node does not exceed <code>28.5Gi</code> and storage doesn’t exceed <code>88Gi</code>. Kubelet evicts pods whenever the overall memory usage across pods exceeds <code>28.5Gi</code>, or if overall disk usage exceeds <code>88Gi</code>. If all processes on the node consume as much CPU as they can, pods together cannot consume more than <code>14.5 CPUs</code>.</p>
<p>If <code>kube-reserved</code> and/or <code>system-reserved</code> is not enforced and system daemons exceed their reservation, <code>kubelet</code> evicts pods whenever the overall node memory usage is higher than <code>31.5Gi</code> or <code>storage</code> is greater than <code>90Gi</code>.</p>
<p>You can reserve as much as you need for Kubernetes daemons with the <code>--kube-reserved</code> flag and for the operating system with the <code>--system-reserved</code> flag.</p>
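<p>A hedged example of the corresponding kubelet flags (the sizes are placeholders; tune them to what your system processes actually use):</p>
<pre><code>kubelet \
  --kube-reserved=cpu=500m,memory=1Gi \
  --system-reserved=cpu=500m,memory=2Gi \
  --eviction-hard=memory.available<500Mi \
  ...   # your existing flags
</code></pre>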
<p>Additionally, if you need stricter rules for spawning pods, you could try to use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Pod Affinity.</a> </p>
|
<p>This is our environment:</p>
<ol>
<li>I have a Kubernetes cluster running on Amazon.</li>
<li>Jenkins CI/CD running on Amazon that connects to a private GitLab and builds our services as Docker images.</li>
<li>Amazon ECR that stores our Docker images.</li>
</ol>
<p>My questions:</p>
<ol>
<li>How can I auto deploy images from ECR to Kubernetes (as pods) once the Jenkins pipeline pushes newly built images to ECR?</li>
<li>Can I do that within Jenkins pipeline? I've read lots of material but I couldn't find how should it be done.</li>
</ol>
<p>There is also a 3rd party tool like <a href="https://github.com/keel-hq/keel" rel="noreferrer">Keel</a> but it doesn't support Amazon ECR (Webhook problem with ECR).</p>
<p>Any help would be appreciated. </p>
| <p>I have a similar workflow and I hope this helps you get some direction. I am using bitbucket pipeline for CI, but I am sure Jenkins will work fine as well.</p>
<p>This is what I do in my CI flow :</p>
<ul>
<li>Build my code and install dependencies</li>
<li>Create a container with a unique tag ( commit-id ) > <code>my-cntnr:12</code></li>
<li>Push to ECR</li>
<li>Curl Rancher API for my-pod > set(image:<code>my-cntnr:12</code>)</li>
<li>Kubernates updates the pod and pulls the container with tag 12 from ECR</li>
</ul>
<p>Here is the script for reference : </p>
<pre><code> - composer install --no-interaction
- docker build -t cms .
- docker tag myrepo:latest 123456789.dkr.ecr.my-region.amazonaws.com/myrepo:$BITBUCKET_BUILD_NUMBER
- aws ecr get-login --no-include-email --region my-region >> login.sh
- sh login.sh
- docker push 123456799.dkr.ecr.my-region.amazonaws.com/myrepo:$BITBUCKET_BUILD_NUMBER
- sh .docker/workload-update.sh // my curl script calling rancher API
</code></pre>
<p>note: Since I am using Rancher, I can use Rancher API to update pods and
their configuration.</p>
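<p>If you are not on Rancher, a hedged equivalent on plain Kubernetes is to let the CI job update the Deployment image directly with kubectl (the deployment and container names below are illustrative):</p>
<pre><code>kubectl set image deployment/my-app \
  my-app=123456789.dkr.ecr.my-region.amazonaws.com/myrepo:$BUILD_NUMBER
kubectl rollout status deployment/my-app
</code></pre>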
<hr>
<p>Now for the ECR credentials part for Kubernetes: you have to create a secret (a Kubernetes-only entity) built from your AWS ECR details. Then you can use this secret in your pod.yml as an image-pull secret. This tells k8s to use the secret to pull the image from ECR.</p>
<p>I have a simple script to quickly do that.</p>
<pre><code>#
# RUN me where kubectl is available,& make sure to replace account,region etc
#
ACCOUNT=123456789
REGION=my-region
SECRET_NAME=${REGION}-ecr-registry
EMAIL=user@example.com          # can be anything
#
# Fetch token (which will expire in 12 hours)
#
TOKEN=`aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
#
# Create or replace registry secret
#
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
</code></pre>
<p>And this is how you can use it in your pod.yml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
spec:
containers:
- image: 123456789.dkr.ecr.my-region.amazonaws.com/my-repo
name: -cntnr
ports:
- containerPort: 8080
imagePullSecrets:
- name: my-secret-name ( this will be same as name of secret we created earlier)
</code></pre>
<hr>
<p>I've written a detailed article about the process as well. Please find it <a href="https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c" rel="noreferrer">here</a>.</p>
|
<p>I've been exploring the <a href="https://github.com/kubernetes-client/java" rel="noreferrer">Kubernetes-Client/Java library</a> and I can't figure out the API call to get all the deployments.</p>
<p>I'm looking for the K8-Client/Java API call for this command:</p>
<pre><code>kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
appservice1 3 3 3 3 5d
appservice2 3 3 3 3 1d
appservice3 1 1 1 1 22d
</code></pre>
<p><strong>More specifically, I'm interested in determining the number of desired & current pods for each deployment (like above).</strong></p>
| <p>You can find all methods of the Kubernetes java client here:
<a href="https://github.com/kubernetes-client/java/tree/master/kubernetes/docs" rel="nofollow noreferrer">https://github.com/kubernetes-client/java/tree/master/kubernetes/docs</a></p>
<p>What you're searching for is the <code>listNamespacedDeployment</code> or <code>listDeploymentForAllNamespaces</code>.</p>
<p>The return type of those methods is <code>V1DeploymentList</code>, and each item exposes a <code>V1DeploymentStatus</code>, which contains all the information about the desired and current number of pods controlled by the deployment.</p>
|
<p>This is how to run a simple batch command in a Kubernetes YAML file (helloworld.yaml):</p>
<pre><code>...
image: "ubuntu:14.04"
command: ["/bin/echo", "hello", "world"]
...
</code></pre>
<p>In Kubernetes I can deploy that like this:</p>
<pre><code>$ kubectl create -f helloworld.yaml
</code></pre>
<p>Suppose I have a batch script like this (script.sh):</p>
<pre><code>#!/bin/bash
echo "Please wait....";
sleep 5
</code></pre>
<p>Is there a way to include script.sh in what <code>kubectl create -f</code> creates, so it can run the script? Suppose helloworld.yaml is now edited like this:</p>
<pre><code>...
image: "ubuntu:14.04"
command: ["/bin/bash", "./script.sh"]
...
</code></pre>
| <p>I'm using this approach in OpenShift, so it should be applicable in Kubernetes as well.</p>
<p>Try to put your script into a configmap key/value, mount this configmap as a volume and run the script from the volume.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: hello-world-job
spec:
parallelism: 1
completions: 1
template:
metadata:
name: hello-world-job
spec:
volumes:
- name: hello-world-scripts-volume
configMap:
name: hello-world-scripts
containers:
- name: hello-world-job
image: alpine
volumeMounts:
- mountPath: /hello-world-scripts
name: hello-world-scripts-volume
env:
- name: HOME
value: /tmp
command:
- /bin/sh
- -c
- |
echo "scripts in /hello-world-scripts"
ls -lh /hello-world-scripts
echo "copy scripts to /tmp"
cp /hello-world-scripts/*.sh /tmp
echo "apply 'chmod +x' to /tmp/*.sh"
chmod +x /tmp/*.sh
echo "execute script-one.sh now"
/tmp/script-one.sh
restartPolicy: Never
---
apiVersion: v1
items:
- apiVersion: v1
data:
script-one.sh: |
echo "script-one.sh"
date
sleep 1
echo "run /tmp/script-2.sh now"
/tmp/script-2.sh
script-2.sh: |
echo "script-2.sh"
sleep 1
date
kind: ConfigMap
metadata:
creationTimestamp: null
name: hello-world-scripts
kind: List
metadata: {}
</code></pre>
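<p>A hedged usage example, assuming the manifest above is saved as <code>hello-world-job.yaml</code>:</p>
<pre><code>kubectl create -f hello-world-job.yaml
kubectl logs -f job/hello-world-job
</code></pre>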
|
<ol>
<li>I have setup a kubernetes using kubeadm v1.8.5</li>
<li>Setup a dashboard using:</li>
</ol>
<pre><code>wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.0/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard-admin.rbac.yaml
</code></pre>
<ol start="3">
<li><p>Then setup kubectl proxy, using <code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code> as recommended.</p></li>
<li><p>I am trying to log in using the kubernetes-dashboard-admin token. The token was obtained with the command:</p></li>
</ol>
<pre><code>kubectl -n kube-system get secret | grep -i dashboard-admin | awk '{print $1}' | \
  xargs -I {} kubectl -n kube-system describe secret {}
</code></pre>
<p><strong>Here comes my problem: I can't access the dashboard via token. When I paste the token and click the "Sign in" button, nothing happens. And I get nothing in my logs (using tail -f /var/log/messages and journalctl -xeu kubelet). I am a newbie on k8s; maybe someone could tell me where the relevant log is?</strong><br>
<a href="https://i.stack.imgur.com/7ZabE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7ZabE.png" alt="enter image description here"></a></p>
<p>Here are my k8s cluster-info:</p>
<p>[root@k8s-1 pki]# <code>kubectl cluster-info</code></p>
<pre><code>Kubernetes master is running at https://172.16.1.15:6443
KubeDNS is running at https://172.16.1.15:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://172.16.1.15:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>[root@k8s-1 pki]# <code>kubectl get nodes</code></p>
<pre><code>NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 4d v1.8.5
k8s-2 Ready <none> 4d v1.8.5
k8s-3 Ready <none> 4d v1.8.5
</code></pre>
<p>[root@k8s-1 pki]# <code>kubectl get pods --all-namespaces</code></p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-1 1/1 Running 2 4d
kube-system kube-apiserver-k8s-1 1/1 Running 2 4d
kube-system kube-controller-manager-k8s-1 1/1 Running 1 4d
kube-system kube-dns-545bc4bfd4-94vxx 3/3 Running 3 4d
kube-system kube-flannel-ds-97frd 1/1 Running 2 4d
kube-system kube-flannel-ds-bl9tp 1/1 Running 2 4d
kube-system kube-flannel-ds-bn9hp 1/1 Running 1 4d
kube-system kube-proxy-9ncdm 1/1 Running 0 4d
kube-system kube-proxy-qjm9k 1/1 Running 1 4d
kube-system kube-proxy-rknz4 1/1 Running 0 4d
kube-system kube-scheduler-k8s-1 1/1 Running 2 4d
kube-system kubernetes-dashboard-7486b894c6-tszq9 1/1 Running 0 2h
</code></pre>
<p>The kubernetes-dashboard-admin-rbac.yaml is:</p>
<p>[root@k8s-1 dashboards]# <code>cat kubernetes-dashboard-admin.rbac.yaml</code> </p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
name: kubernetes-dashboard-admin
namespace: kube-system
</code></pre>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard-admin
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard-admin
namespace: kube-system
</code></pre>
<p>Any suggestions? Thank you!!!</p>
| <p>Try connecting over HTTPS. I had the same problem, and this worked for me.</p>
<p>Kubernetes Manual:</p>
<blockquote>
<p>NOTE: Dashboard should not be exposed publicly using kubectl proxy
command as it only allows HTTP connection. For domains other than
localhost and 127.0.0.1 it will not be possible to sign in. Nothing
will happen after clicking Sign in button on login page. Logging in is
only available when accessing Dashboard over HTTPS or when domain is
either localhost or 127.0.0.1. It's done this way for security
reasons. Closing as this works as intended.</p>
</blockquote>
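<p>A hedged way to get a localhost/HTTPS connection without exposing the dashboard publicly is to port-forward the dashboard pod (pod name taken from your <code>kubectl get pods</code> output; the recommended v1.8 deployment serves HTTPS on 8443):</p>
<pre><code>kubectl -n kube-system port-forward kubernetes-dashboard-7486b894c6-tszq9 8443:8443
# then open https://localhost:8443 and sign in with the token
</code></pre>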
|
<p>I have a Kafka StatefulSet. I need to expand the disk size; I tried without success to use the automatic resize feature of k8s 1.9.</p>
<p>Here : <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims</a></p>
<p>I did activate the feature gate and the admission plugin; I think it works because I can successfully change the size of the PVC after the modification.</p>
<p>But nothing happened after I modified the size of the PVC from 50Gi to 250Gi.</p>
<p>The capacity did change everywhere in the PVC, but not on AWS: the EBS volume is still 50GB, and a df -h in the pod still shows 50GB.</p>
<p>Did I miss something? Do I have to manually resize on AWS?</p>
<p>thank you</p>
| <p>That is an alpha feature which has some problems and limitations.</p>
<p>Try these GitHub issues, which are related to your problem:</p>
<ul>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/61259" rel="nofollow noreferrer">Support automatic resizing of volumes</a> </p></li>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/61259" rel="nofollow noreferrer">[pvresize]Display of pvc capacity did not make corresponding changes when pv resized</a></p></li>
</ul>
<p>Also, check that <a href="https://github.com/kubernetes/kubernetes/issues/35941#issuecomment-380442340" rel="nofollow noreferrer">comment</a>, it can be useful:</p>
<blockquote>
<p>@discordianfish please try EBS PVC resize with 1.10. Currently the user experience of resizing volumes with file systems is not ideal. You will have to edit the pvc and then wait for <code>FileSystemResizePending</code> condition to appear on PVC and then delete and recreate the pod that was using the PVC. If there was no pod using the PVC, then once condition <code>FileSystemResizePending</code> appears on PVC then you will have to start a pod using it for file system resize to finish.</p>
</blockquote>
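<p>In practice, a hedged sketch of the steps described above (the PVC and pod names are placeholders; use the names from your StatefulSet):</p>
<pre><code># 1. Bump the request in the PVC spec
kubectl edit pvc data-kafka-0        # set spec.resources.requests.storage: 250Gi

# 2. Wait for the resize condition to show up
kubectl describe pvc data-kafka-0    # look for FileSystemResizePending under Conditions

# 3. Recreate the pod that uses the PVC so the filesystem is grown
kubectl delete pod kafka-0           # the StatefulSet controller recreates it
</code></pre>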
|
<p>I have 5 worker nodes and a master node in a Kubernetes cluster. I want to add a new master to that cluster so that the cluster will be highly available. What could be the best approach to add a new master to a cluster?</p>
| <p><a href="https://thenewstack.io/kubernetes-high-availability-no-single-point-of-failure/" rel="noreferrer">This article</a> helped me a lot with understanding how HA cluster looks like in real life, so I recommend to check it out first.</p>
<p>Here is a quote from <a href="https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/" rel="noreferrer">Kubernetes documentation</a> that describes creating HA cluster:</p>
<blockquote>
<h3>Starting an HA-compatible cluster To create a new HA-compatible cluster, you must set the following flags in your kube-up script:</h3>
<ul>
<li><p>MULTIZONE=true - to prevent removal of master replicas kubelets from zones different than the server's default zone. Required if you
want to run master replicas in different zones, which is recommended.</p>
</li>
<li><p>ENABLE_ETCD_QUORUM_READS=true - to ensure that reads from all API servers will return the most up-to-date data. If true, reads will be
directed to leader etcd replica. Setting this value to true is
optional: reads will be more reliable but will also be slower.</p>
</li>
</ul>
<p>Optionally, you can specify a GCE zone where the first master replica
is to be created. Set the following flag:</p>
<ul>
<li><p>KUBE_GCE_ZONE=zone - zone where the first master replica will run. The following sample command sets up a HA-compatible cluster in the
GCE zone europe-west1-b:</p>
<p>$ MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh</p>
</li>
</ul>
<p>Note that the commands above create a cluster with one master;
however, you can add new master replicas to the cluster with
subsequent commands.</p>
<h3>Adding a new master replica After you have created an HA-compatible cluster, you can add master replicas to it. You add master replicas by</h3>
<p>using a kube-up script with the following flags:</p>
<ul>
<li><p>KUBE_REPLICATE_EXISTING_MASTER=true - to create a replica of an existing master.</p>
</li>
<li><p>KUBE_GCE_ZONE=zone - zone where the master replica will run. Must be in the same region as other replicas’ zones.</p>
</li>
</ul>
<p>You don’t need to set the MULTIZONE or ENABLE_ETCD_QUORUM_READS flags,
as those are inherited from when you started your HA-compatible
cluster.</p>
<p>The following sample command replicates the master on an existing
HA-compatible cluster:</p>
<p>$ KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true
./cluster/kube-up.sh</p>
</blockquote>
<p>You can also find these resources useful:</p>
<ul>
<li><a href="https://github.com/salmanb/Kubernetes-HA-on-baremetal" rel="noreferrer">Setup an HA kubernetes cluster on Bare Metal</a></li>
<li><a href="https://medium.com/@bambash/ha-kubernetes-cluster-via-kubeadm-b2133360b198" rel="noreferrer">HA Kubernetes with Kubeadm</a></li>
<li><a href="https://elastisys.com/2018/01/25/setting-highly-available-kubernetes-clusters/" rel="noreferrer">On setting up highly available Kubernetes clusters</a></li>
<li><a href="http://www.tothenew.com/blog/how-to-setup-kubernetes-master-ha-on-centos/" rel="noreferrer">How to Setup Kubernetes Master HA on CentOS?</a></li>
<li><a href="https://www.youtube.com/watch?v=ppY9cqTvBVE" rel="noreferrer">Managing HA Kubernetes clusters</a> (Rancher Labs meetup video)</li>
</ul>
|
<p>I have deployed an application in Kubernetes which is served by the Google Ingress Controller (Service as ELB). The application works fine, but the moment I apply the HTTPS-related configuration, HTTPS works while the WebSocket connections fail.</p>
<p>Below is the service file and configmap </p>
<p>for http:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app: ingress-nginx
annotations:
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
# Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---------------------------------------------------------------------------------------------------
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app: ingress-nginx
data:
use-proxy-protocol: "true"
</code></pre>
<p>for https:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:2xxxxxxxxxxxxxxxxxxx56:certificate/3fxxxxxxxxxxxxxxxxxxxxxxxxxx80"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
# Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: http
------------------------------------------------------------------------------------------
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app: ingress-nginx
data:
use-proxy-protocol: "false"
</code></pre>
<p>Am I missing any annotations or data in the ConfigMap? Please help me out.</p>
| <p>I think the problem is the annotation:</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
</code></pre>
<p>The backend-protocol in ELBs must be TCP for websocket connections.</p>
<p>Also, I see you're using the NGINX Ingress Controller; you may want to set these variables in its ConfigMap:</p>
<pre><code>proxy-read-timeout: "3600"
proxy-send-timeout: "3600"
</code></pre>
<p>This avoids idle WebSocket connections being closed by the proxy.</p>
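<p>Putting it together, a hedged sketch of the changed pieces of your HTTPS Service and ConfigMap (everything else stays as you already have it):</p>
<pre><code>kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    ...
---
kind: ConfigMap
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
</code></pre>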
|
<p>My application is running in Kubernetes pods and I would like to use <a href="https://github.com/golang/groupcache" rel="nofollow noreferrer">golang/groupcache</a> to distribute key/value entities amongst instances of the app.</p>
<p>The groupcache library in each of my app instances is required to know the connection details (IP & port) of all the other app instances so it can distribute the key/value entities as it sees fit. How can I let groupcache know the connection details of the other running instances of my application? Would it be preferable/possible to use the Kubernetes <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Go Client Library</a>?</p>
| <p>Go Client Library is one of the options. Another (more simple) option is to setup a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> for groupcache, so with a single DNS lookup you can get all IP addresses of all groupcache pods.</p>
|
<p>As I have been using kubernetes more I keep on seeing the reference that a pod can contain 1 container or more and I have even looked at examples.</p>
<p>My question is whether there is a case where this would be best practice and more efficient to create multi container pods since you can scale and replicate your pods coupling it with a service. </p>
<p>Thanks in advance </p>
| <p>A Pod can contain multiple containers, but in most situations it makes perfect sense for the Pod to be simply an abstraction over a single running container.</p>
<p><strong>In what situations does it make sense to have a multi-container deployed Pod?</strong></p>
<p>What comes to my mind are the scenarios where you have a primary Pod running, but you need to tightly couple helper processes, such as a log watcher. In those situations, it makes perfect sense to actually have multiple containers running inside a single pod.</p>
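<p>As an illustration of the log-watcher case, a hedged sketch (the image names and paths are placeholders) of a pod with a main container and a sidecar sharing a log volume:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-watcher
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest            # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-watcher
    image: busybox
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
</code></pre>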
<p>Another big example that comes to my mind is from the <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> project, which is a platform made to connect, manage and secure microservices and is generally referred as a Service Mesh. </p>
<p>A huge part of what it accomplishes in providing greater control and customization over the deployed microservices network comes from the fact that it deploys a sidecar proxy, called Envoy, alongside every workload, intercepting all network communication between microservices.</p>
<p><a href="https://istio.io/docs/concepts/traffic-management/load-balancing.html" rel="nofollow noreferrer">Here</a>, you can check an example of load balancing in an Istio service mesh. As you can see, the proxy is deployed inside the Pod, intercepting all communication that goes through it.</p>
|
<p>I have an umbrella chart with multiple sub-charts, I simply want to make sure that subchart1 runs before subchart2, etc. How can we define the order of subchart execution?</p>
<p>Seems like hook-weights only apply relative to the chart that declares them.</p>
| <p>Here is a part of the Helm <a href="https://helm.sh/docs/topics/charts/#operational-aspects-of-using-dependencies" rel="noreferrer">documentation</a> related to execution order of charts:</p>
<blockquote>
<p>The above sections explain how to specify chart dependencies, but how
does this affect chart installation using helm install and helm
upgrade?</p>
<p>Suppose that a chart named "A" creates the following Kubernetes
objects</p>
<p>namespace "A-Namespace" <br>
statefulset "A-StatefulSet" <br>
service "A-Service"</p>
</blockquote>
<blockquote>
<p>Furthermore, A is dependent on chart B that creates
objects</p>
<p>namespace "B-Namespace" <br>
replicaset "B-ReplicaSet" <br>
service "B-Service"<br></p>
</blockquote>
<blockquote>
<p>After installation/upgrade of chart A a single Helm release is
created/modified. The release will create/update all of the above
Kubernetes objects in the following order:</p>
<p>A-Namespace <br>
B-Namespace <br>
A-StatefulSet <br>
B-ReplicaSet <br>
A-Service <br>
B-Service<br></p>
</blockquote>
<blockquote>
<p>This is because when Helm installs/upgrades charts, the Kubernetes
objects from the charts and all its dependencies are aggregated into a single set; then sorted by type followed by name;
and then created/updated in that order. <br>
Hence a single release is created with all the objects for the chart and its dependencies.</p>
<p>The install order of Kubernetes types is given by the enumeration
InstallOrder in kind_sorter.go (<a href="https://github.com/helm/helm/blob/9ad53aac42165a5fadc6c87be0dea6b115f93090/pkg/tiller/kind_sorter.go#L29" rel="noreferrer">Helm v2 source file</a>).</p>
</blockquote>
<p>Part of kind_sorter.go (<a href="https://github.com/helm/helm/blob/9b42702a4bced339ff424a78ad68dd6be6e1a80a/pkg/releaseutil/kind_sorter.go#L27" rel="noreferrer">Helm v3 source</a>) is related to install charts:</p>
<pre><code>var InstallOrder KindSortOrder = []string{
"Namespace",
"NetworkPolicy",
"ResourceQuota",
"LimitRange",
"PodSecurityPolicy",
"PodDisruptionBudget",
"Secret",
"ConfigMap",
"StorageClass",
"PersistentVolume",
"PersistentVolumeClaim",
"ServiceAccount",
"CustomResourceDefinition",
"ClusterRole",
"ClusterRoleList",
"ClusterRoleBinding",
"ClusterRoleBindingList",
"Role",
"RoleList",
"RoleBinding",
"RoleBindingList",
"Service",
"DaemonSet",
"Pod",
"ReplicationController",
"ReplicaSet",
"Deployment",
"HorizontalPodAutoscaler",
"StatefulSet",
"Job",
"CronJob",
"Ingress",
"APIService",
}
</code></pre>
<p>There is a workaround, that can change the default behaviour, shared by <strong>elementalvoid</strong> in this <a href="https://github.com/kubernetes/helm/issues/1228" rel="noreferrer">issue</a>:</p>
<blockquote>
<p>I've been setting my services, secrets, and configmaps as pre-install
hooks to achieve this behavior.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: foo
annotations:
"helm.sh/hook": "pre-install"
</code></pre>
</blockquote>
<p>--</p>
<blockquote>
<p>It is possible to define a weight for a hook which will help build a
deterministic executing order. Weights are defined using the following
annotation:</p>
<pre><code> annotations:
"helm.sh/hook-weight": "5"
</code></pre>
<p>Hook weights can be positive or negative numbers but must be
represented as strings. When Tiller starts the execution cycle of
hooks of a <strong>particular Kind</strong> it will sort those hooks in ascending
order.</p>
</blockquote>
<p>More detailed information about hooks can be found here (<a href="https://github.com/helm/helm/blob/9ad53aac42165a5fadc6c87be0dea6b115f93090/docs/charts_hooks.md#hooks" rel="noreferrer">v2 doc</a>, <a href="https://helm.sh/docs/topics/charts_hooks/" rel="noreferrer">v3 doc</a>) and in the source file (<a href="https://github.com/helm/helm/blob/9ad53aac42165a5fadc6c87be0dea6b115f93090/pkg/tiller/hooks.go" rel="noreferrer">helm v2 source</a>, <a href="https://github.com/helm/helm/blob/02ad2b118731d012ea248e3ede800aa28fa71301/pkg/releaseutil/manifest_sorter.go#L54" rel="noreferrer">helm v3 source</a>)</p>
|
<p>I'm working on a Java Play project, and in my <code>application.conf</code> file I have a Redis cluster set-up that receives an array of Redis server nodes.</p>
<p>Now, I want to inject that value in Kubernetes deployment as an environment variable and can't find the right syntax to do so.</p>
<p>My current <code>application.conf</code> looks something like this:</p>
<pre><code>play.cache.redis {
# enable cluster mode
source: cluster
# nodes are defined as a sequence of objects:
cluster: [
{
# required string, defining a host the node is running on
host: localhost
# required integer, defining a port the node is running on
port: 6379
# optional string, defines a password to use
password: null
}
]
}
</code></pre>
<p>Can someone please tell me how to pass the <code>play.cache.redis.cluster</code> variable to a Kubernetes deployment so it stays like this?</p>
| <p>You can inject your entire application.conf using the ConfigMap mechanism:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.conf: |
play.cache.redis {
# enable cluster mode
source: cluster
# nodes are defined as a sequence of objects:
cluster: [
{
# required string, defining a host the node is running on
host: localhost
# required integer, defining a port the node is running on
port: 6379
# optional string, defines a password to use
password: null
}
]
}
</code></pre>
<p>and then mount it directly to your container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: ....
spec:
containers:
- name: ...
image: ...
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: app-config
</code></pre>
<p>The app can then access it at /etc/config/application.conf.</p>
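<p>If your container starts Play through its default start script, one way (a sketch, not verified against your image) to make the app pick up the mounted file is the standard <code>config.file</code> system property of the underlying Typesafe Config library; the container name, image and exact startup flags below are assumptions:</p>
<pre><code>containers:
  - name: play-app                 # assumed container name
    image: my-play-image           # assumed image
    args: ["-Dconfig.file=/etc/config/application.conf"]
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
</code></pre>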
|
<p>I have a Kafka StatefulSet. I need to expand the disk size; I tried without success to use the automatic resize feature of k8s 1.9.</p>
<p>Here : <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims</a></p>
<p>I did activate the feature gate and the admission plugin; I think it works because I can successfully change the size of the PVC.</p>
<p>But nothing happened after I modified the size of the PVC from 50Gi to 250Gi.</p>
<p>The capacity did change everywhere in the PVC, but not on AWS: the EBS volume is still 50GB and a <code>df -h</code> in the pod still shows 50GB.</p>
<p>Did I miss something? Do I have to manually resize on AWS?</p>
<p>thank you</p>
| <p>I made the feature work, but in a very very dirty way. </p>
<ol>
<li>Modify the size of the PVC</li>
<li>Modify the size of the EBS manually</li>
<li>Force unmount the volume on AWS</li>
<li>The pod crashes and is
rescheduled by the StatefulSet; when the pod is up again the volume and partition have the correct size</li>
</ol>
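<p>For reference, a rough sketch of the non-manual path, assuming the <code>ExpandPersistentVolumes</code> feature gate and the <code>PersistentVolumeClaimResize</code> admission plugin are enabled, and that the StorageClass and PVC names below (which are assumptions) match your setup:</p>
<pre><code># the StorageClass backing the PVC must allow expansion
kubectl patch storageclass gp2 -p '{"allowVolumeExpansion": true}'

# then request the new size on the claim
kubectl patch pvc data-kafka-0 -p '{"spec":{"resources":{"requests":{"storage":"250Gi"}}}}'
</code></pre>
<p>On these Kubernetes versions the filesystem inside the pod is typically only resized after the pod is restarted, which matches the behaviour described above.</p>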
|
<p>Having some issue with my kubernetes cluster and DNS.</p>
<p>We recently updated to RHEL 7.5 and one of the machines was the master. Once it came back online most everything worked, but I just noticed that external connections from the cluster do not resolve. Internal communication works great.</p>
<p>Here's the busybox nslookup results:</p>
<pre><code>kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
</code></pre>
<p>kubedns status</p>
<pre><code>Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.32.0.18:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.32.0.18:53
Session Affinity: None
Events: <none>
</code></pre>
<p>All pods say they are up</p>
<pre><code>NAME READY STATUS RESTARTS AGE
kube-dns-86f4d74b45-9m292 3/3 Running 26 44d
</code></pre>
<p>Lastest logs from the kubedns pods</p>
<pre><code>I0523 15:59:14.291623 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0523 15:59:14.291638 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0523 15:59:14.791440 1 dns.go:170] Initialized services and endpoints from apiserver
I0523 15:59:14.791560 1 server.go:135] Setting up Healthz Handler (/readiness)
I0523 15:59:14.791579 1 server.go:140] Setting up cache handler (/cache)
I0523 15:59:14.791588 1 server.go:126] Status HTTP port 8081
I0523 16:48:05.175159 1 dns.go:555] Could not find endpoints for service "kube-prometheus-exporter-kube-controller-manager" in namespace "kube-system". DNS records will be created once endpoints show up.
I0523 16:48:05.236418 1 dns.go:555] Could not find endpoints for service "kube-prometheus-exporter-kube-dns" in namespace "kube-system". DNS records will be created once endpoints show up.
I0523 16:48:05.252139 1 dns.go:555] Could not find endpoints for service "kube-prometheus-exporter-kube-etcd" in namespace "kube-system". DNS records will be created once endpoints show up.
I0523 16:48:05.264292 1 dns.go:555] Could not find endpoints for service "kube-prometheus-exporter-kube-scheduler" in namespace "kube-system". DNS records will be created once endpoints show up.
I0524 13:04:38.008038 1 logs.go:41] skydns: failure to forward request "read udp 10.32.0.18:48431->10.101.24.100:53: i/o timeout"
I0524 13:04:41.973900 1 logs.go:41] skydns: failure to forward request "read udp 10.32.0.18:45738->10.101.24.100:53: i/o timeout"
I0524 17:26:54.056913 1 logs.go:41] skydns: failure to forward request "read udp 10.32.0.18:52242->10.101.24.100:53: i/o timeout"
I0524 18:28:16.762002 1 logs.go:41] skydns: failure to forward request "read udp 10.32.0.18:43051->10.101.24.100:53: i/o timeout"
I0524 18:28:20.762912 1 logs.go:41] skydns: failure to forward request "read udp 10.32.0.18:57624->10.101.24.100:53: i/o timeout"
</code></pre>
<p>I am assuming this has something to do with the reboot.</p>
<p>Kubernetes: version 1.10.3</p>
<p>DNS: kube-dns</p>
<p>Network: Weavenet</p>
<p>Host OS: RHEL 7.5</p>
<p>Any help would be great.</p>
<p>Thanks</p>
| <p>Not sure of the details of your upgrade, but check to see if your kube-dns version changed when you upgraded your OS. There’s an issue with kube-dns version 1.14.9 resolving external name services. You’ll need to roll back kube-dns to a previous build, or upgrade to 1.14.10.</p>
<p>See this issue: <a href="https://github.com/kubernetes/kops/issues/4986" rel="nofollow noreferrer">https://github.com/kubernetes/kops/issues/4986</a></p>
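<p>To check which kube-dns build the cluster is running (a quick sketch; adjust the deployment name if yours differs):</p>
<pre><code>kubectl -n kube-system get deployment kube-dns \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
# an image tag ending in 1.14.9 would be the affected build
</code></pre>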
|
<p>I am having a lot of issues configuring my Dockerized Django + PostgreSQL application to work on a Kubernetes cluster, which I have created using Google Cloud Platform.</p>
<p>How do I specify DATABASES.default.HOST from my settings.py file when I deploy an image of PostgreSQL from Docker Hub and an image of my Django web application to the Kubernetes cluster?</p>
<p>Here is how I want my app to work. When I run the application locally, I want to use an SQLite DB; in order to do that I have made the following changes in my settings.py file:</p>
<pre><code>if(os.getenv('DB')==None):
print('Development - Using "SQLITE3" Database')
DATABASES = {
'default':{
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR,'db.sqlite3'),
}
}
else:
print('Production - Using "POSTGRESQL" Database')
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'agent_technologies_db',
'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': , #???
'PORT': , #???
}
}
</code></pre>
<p>The main idea is that when I deploy the application to the Kubernetes cluster, a Docker container (my Dockerized Django application) will run inside a Kubernetes Pod object. When creating the container I am also creating an environment variable <code>DB</code> and setting it to true, so when I deploy the application it uses the PostgreSQL database.</p>
<p><strong>NOTE</strong>: If anyone has any other suggestions how I should separate Local from Production development, please leave a comment. </p>
<p>Here is how my Dockerfile looks like:</p>
<pre><code>FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /agent-technologies
WORKDIR /agent-technologies
COPY . /agent-technologies
RUN pip install -r requirements.txt
EXPOSE 8000
</code></pre>
<p>And here is how my docker-compose file looks like:</p>
<pre><code>version: '3'
services:
web:
build: .
command: python src/manage.py runserver --settings=agents.config.settings
volumes:
- .:/agent-technologies
ports:
- "8000:8000"
environment:
- DB=true
</code></pre>
<p>When running the application locally it works perfectly fine. But when I try to deploy it to the Kubernetes cluster, the Pod objects which run my application containers are crashing in an infinite loop, because I don't know how to specify DATABASES.default.HOST when running the app in the production environment. And of course the command specified in the docker-compose file (<code>command: python src/manage.py runserver --settings=agents.config.settings</code>) probably produces an exception and makes the Pods crash in an infinite loop.</p>
<p>NOTE: I have already created all necessary configuration files for Kubernetes ( Deployment definitions / Services / Secret / Volume files ). Here is my github link: <a href="https://github.com/StefanCepa/agent-technologies-bachelor" rel="nofollow noreferrer">https://github.com/StefanCepa/agent-technologies-bachelor</a></p>
<p>Any help would be appreciated! Thank you all in advance!</p>
| <p>You will have to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> (ClusterIP) for your postgres pod to make it "accessible". When you create a service, you can <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">access</a> it via <code><service name>.default:<port></code>. However, running postgres (or any db) as a simple pod is dangerous (you will lose data as soon as you or k8s re-creates the pod or scales it up). You can use a managed <a href="https://cloud.google.com/sql/docs/postgres/" rel="nofollow noreferrer">service</a> or install it properly using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">statefulSets</a>.</p>
<p>Once you have the address, you can put it in <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">env variable</a> and access it from your settings.py</p>
<p><strong>EDIT</strong>:
Put this in your deployment yaml (<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">example</a>):</p>
<pre><code>env:
- name: POSTGRES_HOST
value: "postgres-service.default"
- name: POSTGRES_PORT
value: "5432"
- name: DB
value: "DB"
</code></pre>
<p>And in your settings.py</p>
<pre><code>'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': os.getenv('POSTGRES_HOST'),
'PORT': os.getenv('POSTGRES_PORT'),
</code></pre>
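<p>For completeness, a minimal sketch of the ClusterIP Service referenced above as <code>postgres-service</code>; the selector label is an assumption and must match the labels on your PostgreSQL pod or deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  selector:
    app: postgres        # must match your PostgreSQL pod labels
  ports:
    - port: 5432
      targetPort: 5432
</code></pre>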
|
<p>Currently I have a piece of go code that is based on other examples. I can list all the pods, jobs... etc but I am encountering a rather tricky problem with the creation of a Job on <strong>Openshift</strong>.</p>
<p>The following parts of my code are supposed to create a Job, I even get a response, but no job is being created on the mentioned namespace:</p>
<pre><code>func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
jobsClient := clientset.BatchV1().Jobs("gitlab")
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "demo-job",
},
Spec: batchv1.JobSpec{
Template: apiv1.PodTemplateSpec{
Spec: apiv1.PodSpec{
Containers: []apiv1.Container{
{
Name: "demo",
Image: "myimage",
},
},
},
},
},
}
fmt.Println("Creating job... ")
result1, err1 := jobsClient.Create(job)
if err != nil {
fmt.Println(err1)
panic(err1)
}
fmt.Printf("Created job %q.\n", result1)
}
</code></pre>
<p>As a result, all I get is this:</p>
<pre><code>Created job "&Job{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:JobSpec{Parallelism:nil,Completions:nil,ActiveDeadlineSeconds:nil,Selector:nil,ManualSelector:nil,Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[],RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},},BackoffLimit:nil,},Status:JobStatus{Conditions:[],StartTime:<nil>,CompletionTime:<nil>,Active:0,Succeeded:0,Failed:0,},}".
</code></pre>
<p>Checking on the "gitlab" namespace and there is no new job.</p>
| <p>Based on the work that I did in the past (you can see <a href="https://github.com/rancher/rancher/pull/11615/commits/6dabdc78dfcdaee6990dc6e3b54fa45c0cc8f8da#diff-9953a124b5cf32451b22b398a0b4d62eR38" rel="noreferrer">here</a>), I think you should also specify the Namespace in the ObjectMeta of the Job resource; it must match the namespace you pass to the jobs client (client-go's <code>Jobs(namespace string)</code> still requires that argument).</p>
<pre><code>jobsClient := clientset.BatchV1().Jobs("gitlab")
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "demo-job",
Namespace: "gitlab",
},
Spec: batchv1.JobSpec{
Template: apiv1.PodTemplateSpec{
Spec: apiv1.PodSpec{
Containers: []apiv1.Container{
{
Name: "demo",
Image: "myimage",
},
},
},
},
},
}
</code></pre>
|
<p>I am deploying a sample Spring Boot application using the fabric8 Maven deploy. The build fails with an SSLHandshakeException.</p>
<pre><code>F8: Cannot access cluster for detecting mode: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred.
</code></pre>
<p>So, I downloaded the public certificate from the OpenShift web console and added it to the JVM using </p>
<pre><code>C:\...\jdk.\bin>keytool -import -alias rootcert -file C:\sample\RootCert.cer -keystore cacerts
</code></pre>
<p>and got a message that it was successfully added to the keystore; the list command shows the certificates added.</p>
<pre><code> C:\...\jdk.\bin>keytool -list -keystore cacerts
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 2 entries
rootcert, May 18, 2018, trustedCertEntry,
Certificate fingerprint (SHA1): XX:XX:XX:..........
</code></pre>
<p>But the mvn:fabric8 deploy build still fails with the same exception.</p>
<p>Can someone shed some light on this issue? Am I missing anything?</p>
| <p>Adding the certificates to the "cacerts" keystore in $JAVA_HOME/jre/lib/security solved the issue.</p>
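<p>Note that running <code>keytool -keystore cacerts</code> from the JDK's bin directory most likely created a new local <code>cacerts</code> file in that directory rather than updating the JRE-wide trust store. A hedged example of importing into the trust store the JVM actually uses (the default store password is <code>changeit</code>; the path depends on your JDK layout):</p>
<pre><code>keytool -import -alias openshift-root -file C:\sample\RootCert.cer ^
  -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit
</code></pre>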
|
<p>I have a K8s cluster, currently running on a single node (master+kubelet, 172.16.100.81). I have a config server image which I will run in a pod. The image talks to another pod named eureka server. Both images are Spring Boot applications, and the eureka server's HTTP address and port are defined by me. I need to pass the eureka server's HTTP address and port to the config pod so that it can talk to the eureka server.</p>
<p>I start eureka server: ( pesudo code)</p>
<pre><code>kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
</code></pre>
<p>Then I use command "docker pull" to download config server image and run it as below:</p>
<pre><code>kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
</code></pre>
<p>With these steps, I did not find a way to transfer the eureka-server HTTP
address (master IP address 172.16.100.81:31000) to the config server. Are there
methods I could use to transfer the variable eureka-server=172.16.100.81:31000 to the config server Pod? I know I should use Ingress in K8s networking, but currently I use NodePort. </p>
| <p>Generally, you don't need nodePort when you want two pods to communicate with each other. A simpler clusterIP is enough.</p>
<p>Whenever you expose a deployment with a service, it becomes internally discoverable through DNS. Both of your exposed services can be accessed (using the <em>service</em> port, i.e. the application port here, not the nodePort) as:
<code>http://config-server.default:8888</code> and <code>http://eureka-server.default:8761</code>. <code>default</code> is the namespace here.</p>
<p><code>172.16.100.81:31000</code> (the nodePort) is only needed to make it accessible from <em>outside</em> the cluster.</p>
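<p>For example (a sketch; the variable name <code>EUREKA_SERVER_URL</code> is an assumption, use whatever property your config server actually reads), you can pass the in-cluster address to the config-server deployment as an environment variable:</p>
<pre><code>containers:
  - name: config-server
    image: config-server-image
    env:
      - name: EUREKA_SERVER_URL
        value: "http://eureka-server.default:8761"
</code></pre>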
|
<p>I'm trying to understand why kubernetes docs <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#organizing-resource-configurations" rel="noreferrer">recommend</a> to specify service before deployment in one configuration file:</p>
<blockquote>
<p>The resources will be created in the order they appear in the file. Therefore, it’s best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.</p>
</blockquote>
<p>Does it mean spread pods between kubernetes cluster nodes?</p>
<p>I tested with the following configuration where a deployment is located before a service and pods are distributed between nods without any issues. </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: incorrect-order
namespace: test
spec:
selector:
matchLabels:
app: incorrect-order
replicas: 2
template:
metadata:
labels:
app: incorrect-order
spec:
containers:
- name: incorrect-order
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: incorrect-order
namespace: test
labels:
app: incorrect-order
spec:
type: NodePort
ports:
- port: 80
selector:
app: incorrect-order
</code></pre>
<p>Another <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="noreferrer">explanation</a> is that some environment variables with service URL will not be set for pods in this case. However it also works ok in case a configuration is inside one file like the example above.</p>
<p>Could you please explain why it is better to specify service before the deployment in case of one configuration file? Or may be it is some outdated recommendation.</p>
| <p>If you use <code>DNS</code> as service discovery, the order of creation doesn't matter.</p>
<p>In case of <code>Environment Vars</code> (the second way K8S offers service discovery) the order matters, because once those vars are passed to the <strong>starting</strong> pod, they cannot be modified later if the service definition changes.</p>
<p>So if your service is deployed <strong>before</strong> you start your pod, the service envvars are injected inside the linked pod.</p>
<p>If you create a Pod/Deployment resource with labels, this resource will be exposed through a service once the latter is created (with a proper selector indicating which resource to expose).</p>
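<p>You can see the environment-variable side of this by inspecting a pod that was started after the Service already existed; for the <code>incorrect-order</code> Service above, Kubernetes injects variables like these (the IP is just an example value):</p>
<pre><code>kubectl exec -n test <pod-name> -- env | grep INCORRECT_ORDER
# INCORRECT_ORDER_SERVICE_HOST=10.108.12.34
# INCORRECT_ORDER_SERVICE_PORT=80
</code></pre>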
|
<p>We are following a microservice-based architecture, where I deploy my application inside a Kubernetes cluster via a Helm package. I have a nodejs application which queries cron jobs from a database and should add these jobs to the crontab file inside the same container.</p>
<p>So when I expose my container as a nodejs app on a port, I will make an "add job" request; these jobs are formatted as cron jobs, and the entries need to be added to crontab so the jobs execute in the background.</p>
<p>Can this be achieved?</p>
<p>If yes, on what base image should I build my node application?</p>
<p>And what steps should I include in my Dockerfile, to also verify my cron execution logs and to run both the node application and the cron jobs?</p>
| <p>I understand that you are looking for a job scheduler. In my experience, putting more than one process per container is not the best idea; maybe you can use another approach with an extra microservice that runs those jobs.</p>
<p>I recommend using Agenda</p>
<p><a href="https://github.com/agenda/agenda" rel="nofollow noreferrer">https://github.com/agenda/agenda</a></p>
<p>You can create an Agenda worker that has the code for the different jobs; the only thing you need to do is send it a scheduled job or an immediate execution. With this architecture you can have multiple Agenda workers, or even run one as a sidecar container inside the same pod as your node application.</p>
|
<p>We sometimes use Python scripts to spin up and monitor Kubernetes Pods running on Google Kubernetes Engine using the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Official Python client library for kubernetes</a>. We also enable auto-scaling on several of our node pools. </p>
<p>According to <a href="https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html" rel="nofollow noreferrer">this</a>, "Master VM is automatically scaled, upgraded, backed up and secured". The post also seems to indicate that some automatic scaling of the control plane / Master VM occurs when the node count increases from 0-5 to 6+ and potentially at other times when more nodes are added.</p>
<p>It seems like the control plane can go down at times like this, when many nodes have been brought up. In and around when this happens, our Python scripts that monitor pods via the control plane often crash, seemingly unable to find the KubeApi/Control Plane endpoint triggering some of the following exceptions:</p>
<blockquote>
<p>ApiException, urllib3.exceptions.NewConnectionError, urllib3.exceptions.MaxRetryError.</p>
</blockquote>
<p>What's the best way to handle this situation? Are there any properties of the autoscaling events that might be helpful?</p>
<p>To clarify what we're doing with the Python client is that we are in a loop reading the status of the pod of interest via <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod" rel="nofollow noreferrer">read_namespaced_pod</a> every few minutes, and catching exceptions similar to the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod" rel="nofollow noreferrer">provided example</a> (in addition we've tried also catching exceptions for the underlying <em>urllib</em> calls). We have also added retrying with exponential back-off, but things are unable to recover and fail after a specified max number of retries, even if that number is high (e.g. keep retrying for >5 minutes). </p>
<p>One thing we haven't tried is recreating the <code>kubernetes.client.CoreV1Api</code> object on each retry. Would that make much of a difference?</p>
| <p>When a nodepool size changes, depending on the size, this can initiate a change in the size of the master. Here are the <a href="https://kubernetes.io/docs/admin/cluster-large/#size-of-master-and-master-components" rel="nofollow noreferrer"><strong>nodepool sizes mapped with the master sizes</strong></a>. In the case where the nodepool size requires a larger master, automatic scaling of the master is initiated on GCP. During this process, the master will be unavailable for approximately 1-5 minutes. Please note that these events are not available in <a href="https://cloud.google.com/logging/docs/view/overview#sd-accts-for-logging" rel="nofollow noreferrer"><strong>Stackdriver Logging</strong></a>. </p>
<p>At this point all API calls to the master will fail, including the ones from the Python API client and kubectl. However, after 1-5 minutes the master should be available again and calls from both the client and kubectl should work. I was able to test this by scaling my cluster from 3 nodes to 20 nodes, and for 1-5 minutes the master wasn't available.
I obtained the following errors from the Python API client: </p>
<pre><code>Max retries exceeded with url: /api/v1/pods?watch=False (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at>: Failed to establish a new connection: [Errno 111] Connection refused',))
</code></pre>
<p>With kubectl I had : </p>
<pre><code>“Unable to connect to the server: dial tcp”
</code></pre>
<p>After 1-5 minutes the master was available and the calls were successful. There was no need to recreate <strong><em>kubernetes.client.CoreV1Api</em></strong> object as this is just an API endpoint. </p>
<p>According to your description, your master wasn't accessible after 5 minutes, which signals a potential issue with your master or with the setup of the Python script. To troubleshoot this further, while your Python script runs you can check the availability of the master by running any kubectl command. </p>
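<p>While reproducing this, a simple way (just a sketch) to watch for the window in which the control plane is unreachable:</p>
<pre><code># print a timestamped status line every 10 seconds while the nodepool resizes
while true; do
  if kubectl get nodes --request-timeout=5s > /dev/null 2>&1; then
    echo "$(date +%T) control plane reachable"
  else
    echo "$(date +%T) control plane NOT reachable"
  fi
  sleep 10
done
</code></pre>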
|
<p>I have a Kubernetes cluster of 2 bare-metal machines. The master machine contains the Docker image which I want to deploy only on the slave node, but I don't want to use a cloud repository for image storage. I am building the Docker image from a Jenkins job which runs on the master machine. If I use a Docker image from Docker Hub the deployment works fine, but it gives me an error when deploying on the slave node while the image only exists on the master machine: </p>
<pre><code>NAME READY STATUS RESTARTS AGE
dash-deploy-5bd7cb4484-wvwcr 0/1 ImagePullBackOff 0 22h
hello-deploy-5756df7ff4-mwzcw 1/1 Running 0 21h
</code></pre>
<p>hello-deploy uses a Docker image from Docker Hub, while dash-deploy uses a Docker image present on the master node; both deployments are scheduled on the slave node.
I know there are a lot of other methods to do this, but any help on this would be appreciated.
I am using Ubuntu 16.04. </p>
<blockquote>
<p>kubeadm version: &version.Info{Major:"1", Minor:"10",
GitVersion:"v1.10.2",
GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335",
GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z",
GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}</p>
<p>kubectl version Client Version: version.Info{Major:"1", Minor:"9",
GitVersion:"v1.9.3",
GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b",
GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z",
GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}</p>
<p>Server Version: version.Info{Major:"1", Minor:"10",
GitVersion:"v1.10.3",
GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0",
GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z",
GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}</p>
</blockquote>
<p>My yaml file looks like </p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: dash-deploy
spec:
replicas: 1
minReadySeconds: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: dashboard
spec:
containers:
- name: dash-pod
image: 32:32
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: dash-svc
labels:
app: dashboard
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30002
protocol: TCP
selector:
app: dashboard
---
</code></pre>
| <p>The slave needs to download the image from somewhere. Images are not automatically distributed to the slaves.</p>
<p>If you don't want to use dockerhub, you need to host your own Docker Registry in the cluster to download images from it. <a href="https://medium.com/@jmarhee/in-cluster-docker-registry-with-tls-on-kubernetes-758eecfe8254" rel="nofollow noreferrer">This guide will help you do it</a>.</p>
|
<p>I set up a Kubernetes cluster with a single master node and two worker nodes using <code>kubeadm</code>, and I am trying to figure out how to recover from node failure.</p>
<p>When a worker node fails, recovery is straightforward: I create a new worker node from scratch, run <code>kubeadm join</code>, and everything's fine.</p>
<p>However, I cannot figure out how to recover from master node failure (without interrupting the deployments running on the worker nodes). Do I need to backup and restore the original certificates or can I just run <code>kubeadm init</code> to create a new master from scratch? How do I join the existing worker nodes?</p>
| <p>I ended up writing a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> backing up the etcd data. If you are interested: I wrote a blog post about it: <a href="https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html" rel="nofollow noreferrer">https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html</a></p>
<p>In addition to that you may want to backup all of <code>/etc/kubernetes/pki</code> to avoid issues with secrets (tokens) having to be renewed.</p>
<p>For example, kube-proxy uses a secret to store a token and this token becomes invalid if only the etcd certificate is backed up.</p>
|
<p>OS: CentOS 7
docker version 1.13.1</p>
<p>I am trying to install Kubernetes on CentOS to run in-house. I built it using the Docker-based build since the build with Go would not work. Documentation is extremely poor regarding dependencies and specifics.</p>
<p>I followed the instructions on the kubernetes site here : <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes</a></p>
<pre><code>[kubernetes]$ git clone https://github.com/kubernetes/kubernetes
[kubernetes]$ cd kubernetes
[kubernetes]$ make quick-release
+++ [0521 22:31:10] Verifying Prerequisites....
+++ [0521 22:31:17] Building Docker image kube-build:build-e7afc7a916-5-v1.10.2-1
+++ [0521 22:33:45] Creating data container kube-build-data-e7afc7a916-5-v1.10.2-1
+++ [0521 22:34:57] Syncing sources to container
+++ [0521 22:35:15] Running build command...
+++ [0521 22:36:02] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0521 22:36:14] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0521 22:36:21] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0521 22:36:31] Building go targets for linux/amd64:
./vendor/k8s.io/code-generator/cmd/openapi-gen
+++ [0521 22:36:40] Building go targets for linux/amd64:
./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0521 22:36:42] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/cloud-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/hyperkube
cmd/kube-scheduler
vendor/k8s.io/kube-aggregator
vendor/k8s.io/apiextensions-apiserver
cluster/gce/gci/mounter
+++ [0521 22:40:24] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kubeadm
cmd/kubelet
+++ [0521 22:41:08] Building go targets for linux/amd64:
cmd/kubectl
+++ [0521 22:41:31] Building go targets for linux/amd64:
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0521 22:44:24] Building go targets for linux/amd64:
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e_node/e2e_node.test
+++ [0521 22:45:24] Syncing out of container
+++ [0521 22:46:39] Building tarball: src
+++ [0521 22:46:39] Building tarball: manifests
+++ [0521 22:46:39] Starting tarball: client darwin-386
+++ [0521 22:46:39] Starting tarball: client darwin-amd64
+++ [0521 22:46:39] Starting tarball: client linux-386
+++ [0521 22:46:39] Starting tarball: client linux-amd64
+++ [0521 22:46:39] Starting tarball: client linux-arm
+++ [0521 22:46:39] Starting tarball: client linux-arm64
+++ [0521 22:46:39] Starting tarball: client linux-ppc64le
+++ [0521 22:46:39] Starting tarball: client linux-s390x
+++ [0521 22:46:39] Starting tarball: client windows-386
+++ [0521 22:46:39] Starting tarball: client windows-amd64
+++ [0521 22:46:39] Waiting on tarballs
+++ [0521 22:47:19] Building tarball: server linux-amd64
+++ [0521 22:47:19] Building tarball: node linux-amd64
+++ [0521 22:47:47] Starting docker build for image: cloud-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-apiserver-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-scheduler-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-aggregator-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-proxy-amd64
+++ [0521 22:47:47] Building hyperkube image for arch: amd64
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-scheduler:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-aggregator:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:41] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:43] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:46] Deleting docker image k8s.gcr.io/kube-apiserver:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:58] Deleting docker image k8s.gcr.io/kube-proxy:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Deleting hyperkube image k8s.gcr.io/hyperkube-amd64:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Docker builds done
+++ [0521 22:50:54] Building tarball: final
+++ [0521 22:50:54] Building tarball: test
</code></pre>
<ol>
<li><p>my first question is, why at the end of the build, does docker delete the kube-apiserver, kube-proxy, etc.. These are the tools I was expecting to use.</p></li>
<li><p>second question, why do I now just have a 'kube-build' image. How do I interact with this? I was expecting to see kubeadm and kubectl in addition to the kube build.<br>
The documentation says nothing else about what to do next. How to create the a pod, deploy a container, and manage it. I was expecting to do this with docker attach on the kubectl/kubeadm images, but there are none.</p>
<pre><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kube-build build-e7afc7a916-5-v1.10.2-1 8d27a8ba87fd About an hour ago 2.58 GB
docker.io/node latest f697cb5f31f8 12 days ago 675 MB
docker.io/redis latest bfcb1f6df2db 2 weeks ago 107 MB
docker.io/mongo latest 14c497d5c758 3 weeks ago 366 MB
docker.io/nginx latest ae513a47849c 3 weeks ago 109 MB
</code></pre></li>
</ol>
<p>So what exactly is someone supposed to do with the 'kube-build' image. Any help would be great. thanks!</p>
<p>Additionally, I tried to tag this 'kube-build' since that is the exact image name, but I don't have enough reputation to make a new tag. </p>
| <p>First of all, the result of the build is located in the folder <code>_output</code>:</p>
<pre><code>[@_output]# ls
dockerized images release-images release-stage release-tars
</code></pre>
<p>In the folder <code>release-images/$your_architecture</code>, you can find the images as tarballs:</p>
<pre><code>[@release-images]# cd amd64/
[@amd64]# ls
cloud-controller-manager.tar hyperkube-amd64.tar kube-aggregator.tar kube-apiserver.tar kube-controller-manager.tar kube-proxy.tar kube-scheduler.tar
</code></pre>
<p>You can import them into your local docker repo with the following command:</p>
<pre><code>cat kube-apiserver.tar | docker import - kube-api:new
</code></pre>
<p>You will find the result in the local docker image repo:</p>
<pre><code>[@amd64]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kube-api new 4bd734072676 7 minutes ago 183MB
</code></pre>
<p>You can also find tarballs with binaries in folder <code>release-tars</code>.</p>
<p>Usually, Kubernetes was built on one server and then used on another, that's why you have folder <code>_output</code> with the results of you build. </p>
|
<p>Suppose we have a kubernetes stack running on AWS and we would like to config our component directly via kubernetes chart files to be able to provision some AWS services (for example a DynamoDB table).</p>
<p>What would be the best practice if we want to achieve this and hopefully also allow our kubernetes component can connect to the provisioned services via IAM way (not just using simple key secret pairs)</p>
<p>So once achieve that, one can let a kubernetes component has a char file such as:</p>
<pre><code>apiVersion: alpha1/v1
Kind: Database
type: dynamodb
table_name: a_test_table
hash_key: some_id
range_key: some_order
</code></pre>
<p>Then, when deploying this component onto the Kubernetes stack, it would provision a DynamoDB table named <code>a_test_table</code> that this service can then successfully connect to.</p>
| <p>What you are describing here is more or less what the <a href="https://github.com/awslabs/aws-servicebroker/" rel="nofollow noreferrer">AWS Service Broker</a> is trying to achieve. Using the <a href="https://www.openservicebrokerapi.org/" rel="nofollow noreferrer">Open Service Broker API</a>, the AWS broker would react to the creation of specific kubernetes objects, creating the required resources on AWS. <a href="https://aws.amazon.com/es/blogs/opensource/provision-aws-services-kubernetes-aws-service-broker/" rel="nofollow noreferrer">Here you can find an AWS post explaining how this works</a>.</p>
<p>There are different brokers for different cloud providers, not just AWS.</p>
|
<p>Can someone give a simple example on how to route the following URLs:</p>
<pre><code>http://monitor.app.com/service-one
http://monitor.app.com/service-two
http://monitor.app.com/service-three
http://monitor.app.com/service-four
</code></pre>
<p>To the following backend services?</p>
<pre><code>http://service-one/monitor
http://service-two/monitor
http://service-three/monitor
http://service-four/monitor
</code></pre>
<p>Preferably using the [file] syntax of Traefik, although any is fine.</p>
| <p>Here is a configuration for your example. Adjust it according to your real cluster configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-one
spec:
selector:
k8s-app: service-one-app
ports:
- port: 80
targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: service-two
spec:
selector:
k8s-app: service-two-app
ports:
- port: 80
targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: service-three
spec:
selector:
k8s-app: service-three-app
ports:
- port: 80
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: monitor.app
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/rewrite-target: /monitor # set path to result request
spec:
rules:
- host: monitor.app.com
http:
paths:
      - path: /service-one # path used for routing; the request path is rewritten to /monitor by the rewrite-target annotation
backend:
serviceName: service-one
servicePort: 80
      - path: /service-two # path used for routing; the request path is rewritten to /monitor by the rewrite-target annotation
backend:
serviceName: service-two
servicePort: 80
      - path: /service-three # path used for routing; the request path is rewritten to /monitor by the rewrite-target annotation
backend:
serviceName: service-three
servicePort: 80
</code></pre>
<p>Additional information could be found here:</p>
<ul>
<li><a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">Kubernetes Ingress Controller</a></li>
<li><a href="https://docs.traefik.io/configuration/backends/kubernetes/" rel="nofollow noreferrer">Kubernetes Ingress Provider</a></li>
<li><a href="https://github.com/containous/traefik/pull/1723/files" rel="nofollow noreferrer">kubernetes ingress rewrite-target implementation #1723</a></li>
</ul>
|
<p><code>List</code> API objects and triple dashes (<code>---</code>) can both be used to denote multiple objects in a single YAML file. Therefore, why do Lists exist when triple dashes accomplish the same thing (in my opinion) in a cleaner way? Are there any cases in which a List would be preferred over triple dashes, or is this purely a stylistic choice?</p>
<p>For example, these two YAML files both produce the same two <code>ServiceAccount</code> objects (chosen for brevity):</p>
<p>my-serviceaccounts1.yaml</p>
<pre><code>apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
- apiVersion: v1
kind: ServiceAccount
metadata:
name: my-other-app
</code></pre>
<p>my-serviceaccounts2.yaml</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-other-app
</code></pre>
| <p>I can think of two reasons:</p>
<ol>
<li>Because the Kubernetes API works with JSON and in JSON there is no ---</li>
<li>Maybe the kind List is meant only for responses.</li>
</ol>
|
<p>This question is about the behavior of PersistentVolume and PersistentVolumeClaim configurations within Kubernetes. We have read through the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">documentation</a> and are left with a few lingering questions. </p>
<p>We are using Azure Kubernetes Service to host our cluster and we want to provide a shared persistent storage backend for many of our Pods. We are planning on using PersistentVolumes to accomplish this. </p>
<p>In this scenario, we want to issue a PersistentVolume backed by an AzureFile storage resource. We will deploy Jenkins to our cluster and store the jenkins_home directory in the PersistentVolume so that our instance can survive pod and node failures. We will be running multiple Master Jenkins nodes, all configured with a similar deployment yaml.</p>
<p>We have created all the needed storage accounts and applicable shares ahead of time, as well as the needed secrets. </p>
<p>First, we issued the following PersistentVolume configuration;</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-azure-file-share
labels:
usage: jenkins-azure-file-share
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
azureFile:
secretName: azure-file-secret
shareName: jenkins
readOnly: false
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
</code></pre>
<p>Following that, we then issued the following PersistentVolumeClaim configuration;</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-file-claim
annotations:
volume.beta.kubernetes.io/storage-class: ""
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
volumeName: "jenkins-azure-file-share"
</code></pre>
<p>Next, we use this claim within our deployments in the following manner;</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-instance-name
spec:
replicas: 1
template:
metadata:
labels:
role: jenkins
app: jenkins-instance-name
spec:
containers:
- name: jenkins-instance-name
image: ContainerRegistry.azurecr.io/linux/jenkins_master:latest
ports:
- name: jenkins-port
containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
subPath: "jenkins-instance-name"
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: "jenkins-file-claim"
imagePullSecrets:
- name: ImagePullSecret
</code></pre>
<p>This is all working as expected. We have deployed multiple Jenkins Masters to our Kubernetes cluster and each one is correctly allocating a new folder on the share specific to each master instance.
<a href="https://i.stack.imgur.com/utRpj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/utRpj.png" alt="enter image description here"></a></p>
<p><strong>Now for my questions</strong></p>
<hr>
<p>The PersistentVolume is configured with 100Gig of Storage. Does this mean that Kubernetes will only allow a maximum of 100Gig of total storage in this volume? </p>
<hr>
<p>When the PersistentVolumeClaim is bound to the PersistentVolume, the PersistentVolumeClaim seems to show that it has 100Gig of total storage available, even though the PersistentVolumeClaim was configured for 10Gig of storage;</p>
<pre><code>C:\ashley\scm\kubernetes>kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins-azure-file-share 100Gi RWX Retain Bound default/jenkins-file-claim 2d
C:\ashley\scm\kubernetes>kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-homes-file-claim Bound jenkins-azure-file-share 100Gi RWX 2d
</code></pre>
<p>Is this just bad output from the get pvc command or am I misinterpreting the output of the get pvc command?</p>
<hr>
<p>When sharing a PersistentVolumeClaim in this way;</p>
<ol>
<li>Does each deployment ONLY have access to the configured maximum of 10Gig of storage from the PersistentVolume's 100Gig capacity?</li>
<li>Or, does each deployment have access to its own 10Gig slice of the total 100Gig of storage configured for the PersistentVolume? </li>
</ol>
<hr>
<p>With this configuration, what happens when a single PersistentVolumeClaim capacity gets fully utilized? Do all the Deployments using this single PersistentVolumeClaim stop working? </p>
| <p>So for the PVC it is definitely the case that it has only 10Gig available with this config. For the PV I assume it is the same; in this case I don't know for sure, but it should be, for consistency. And it stops working if any of these limits is reached, so if you have 11 Jenkins masters running it can fail even though you have not reached the limit on a single PVC.</p>
|
<p>I'm creating a pod with a volumeMount set to <code>mountPropagation: Bidirectional</code>. When created, the container is mounting the volume with <code>"Propagation": "rprivate"</code>. </p>
<p>From the k8s <a href="https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation" rel="nofollow noreferrer">docs</a> I would expect <code>mountPropagation: Bidirectional</code> to result in a volume mount propagation of <code>rshared</code></p>
<p>If I start the container directly with <code>docker</code> this is working. </p>
<p>Some info:</p>
<p><strong>Deployment Yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test
spec:
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: test
spec:
containers:
- image: gcr.io/google_containers/busybox:1.24
command:
- sleep
- "36000"
name: test
volumeMounts:
- mountPath: /tmp/test
mountPropagation: Bidirectional
name: test-vol
volumes:
- name: test-vol
hostPath:
path: /tmp/test
</code></pre>
<p><strong>Resulting mount section from <code>docker inspect</code></strong></p>
<pre><code>"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}…..
</code></pre>
<p><strong>Equivalent Docker run</strong></p>
<pre><code>docker run --restart=always --name test -d --net=host --privileged=true -v /tmp/test:/tmp/test:shared gcr.io/google_containers/busybox:1.24
</code></pre>
<p><strong>Resulting Mounts section from <code>docker inspect</code> when created with <code>docker run</code></strong></p>
<pre><code>"Mounts": [
{
"Type": "bind",
"Source": "/tmp/test",
"Destination": "/tmp/test",
"Mode": "shared",
"RW": true,
"Propagation": "shared"
}...
</code></pre>
<p><strong>Output of kubectl version</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-13T22:29:03Z", GoVersion:"go1.9.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Using <code>rke version v0.1.6</code></p>
| <p>This was a regression, fixed in 1.10.3 by <a href="https://github.com/kubernetes/kubernetes/pull/62633" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/62633</a></p>
|
<p>repo: <a href="https://github.com/Yolean/kubernetes-kafka" rel="nofollow noreferrer">yolean/kubernetes-kafka</a></p>
<p>I'm going to follow the steps to deploy a Kafka cluster in the cloud, but I don't quite understand the instructions. Specifically, under /kafka and /zookeeper, do I need to "kubectl create -f" all the YAMLs? What do the numerical prefixes of each YAML represent? And what order do I need to follow?</p>
| <blockquote>
<p>I need to "kubectl create -f" all YAMLs? </p>
</blockquote>
<p>If you want a fully installed cluster, yes. </p>
<blockquote>
<p>what do the numerical prefix of each YAML represent? </p>
</blockquote>
<p>Ordering. For example, configure the broker pods, then install Kafka, then configure Kafka </p>
<blockquote>
<p>And in what order I need to follow?</p>
</blockquote>
<p>Ascending order. Zookeeper before Kafka </p>
|
<p>I defined the following parameters in values.yaml</p>
<pre><code>resources:
nvidia.com/gpu: 1
</code></pre>
<p>and in templates, I'd like to add following logic in <code>templates/deployment.yaml</code></p>
<pre><code>{{- if .Values.resources.nvidia.com/gpu }}
****
{{- end}}
</code></pre>
<p>But it fails; please help me understand how to use it in <code>templates/deployment.yaml</code>.</p>
| <p>You can always use the function index:</p>
<pre><code> {{- if index .Values.resources "nvidia.com/gpu" }}
x: {{ index .Values.resources "nvidia.com/gpu" }}
{{- end}}
</code></pre>
|
<p>I have deployed my application image into Kubernetes.</p>
<p>I am trying to send application logs to Splunk. One option would be to use DeamonSet. But, due to some restrictions, I would like to use sidecar.</p>
<p>Is there any sidecar for sending logs from Kubernetes Docker to Splunk?</p>
| <p>As you say, you could add a docker container to your pod with a shared volume, for example</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: app-with-sidecar-logs
spec:
volumes:
- name: logs
emptyDir: {}
containers:
- name: app
image: nginx
volumeMounts:
- name: logs
mountPath: /var/logs
- name: fluentd
image: fluent/fluentd
volumeMounts:
- name: logs
mountPath: /var/logs
</code></pre>
<p>You could use the splunk plugin for fluentd configuring and running the docker container properly.</p>
<pre><code><match pattern>
type splunk
host <splunk_host>
port <splunk_port>
</match>
</code></pre>
<p>More info:</p>
<p><a href="https://www.fluentd.org/plugins" rel="nofollow noreferrer">https://www.fluentd.org/plugins</a></p>
<p><a href="https://github.com/parolkar/fluent-plugin-splunk" rel="nofollow noreferrer">https://github.com/parolkar/fluent-plugin-splunk</a></p>
<p><a href="https://www.loggly.com/blog/how-to-implement-logging-in-docker-with-a-sidecar-approach/" rel="nofollow noreferrer">https://www.loggly.com/blog/how-to-implement-logging-in-docker-with-a-sidecar-approach/</a> . <strong>Notice this is for loggly, but the idea is the same.</strong></p>
|
<p>I'm trying to deploy Traefik as an ingress controller on my GKE cluster.
It's a basic cluster with 3 nodes.</p>
<p>I usually deploy Traefik using manifests on a Kubernetes cluster deployed by Kubespray, but we are migrating some of our infrastructure to GCP.</p>
<p>So I tried to deploy Traefik using the <a href="https://github.com/kubernetes/charts/tree/master/stable/traefik" rel="nofollow noreferrer">community helm chart</a> with the following configuration:</p>
<pre><code>image: traefik
imageTag: 1.6.2
serviceType: LoadBalancer
loadBalancerIP: X.X.X.X
kubernetes:
ingressClass: traefik
ssl:
enabled: false
enforced: false
insecureSkipVerify: false
acme:
enabled: false
email: [email protected]
staging: true
logging: false
challengeType: http-01
dashboard:
enabled: true
domain: traefik.mydomain.com
ingress:
annotations:
kubernetes.io/ingress.class: traefik
gzip:
enabled: true
accessLogs:
enabled: true
format: common
</code></pre>
<p>And then launch it with the following command:</p>
<pre><code>helm install --namespace kube-system --name traefik --values values.yaml stable/traefik
</code></pre>
<p>All is well deployed on my K8S cluster, except the dashboard-ingress with the following error:</p>
<pre><code>kevin@MBP-de-Kevin ~/W/g/s/traefik> kubectl describe ingress traefik-dashboard -n kube-system
Name: traefik-dashboard
Namespace: kube-system
Address:
Default backend: default-http-backend:80 (10.20.2.6:8080)
Rules:
Host Path Backends
---- ---- --------
traefik.mydomain.com
traefik-dashboard:80 (10.20.1.14:8080)
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Sync 4m loadbalancer-controller googleapi: Error 400: Invalid value for field 'namedPorts[2].port': '0'. Must be greater than or equal to 1, invalid
</code></pre>
<p>Any idea where my error is?</p>
<p>Thanks a lot!</p>
| <pre><code>Invalid value for field 'namedPorts[0].port': '0'
</code></pre>
<p>This error happens when the <code>Service</code> that's being used by GKE Ingress is of type <code>ClusterIP</code> (and not <code>NodePort</code>). GKE Ingress requires backing Services to be of type NodePort.</p>
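<p>You can check, and if necessary change, the type of the backing Service (the name <code>traefik-dashboard</code> is taken from the Ingress output above):</p>
<pre><code>kubectl -n kube-system get svc traefik-dashboard -o jsonpath='{.spec.type}'

# switch it to NodePort so the GCE ingress controller can create named ports for it
kubectl -n kube-system patch svc traefik-dashboard -p '{"spec":{"type":"NodePort"}}'
</code></pre>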
|
<p>I am trying to create a cluster with kops. I bought the domain name megatest.com in AWS Route53 and created a public hosted zone for it.</p>
<pre><code>megatest.com.
NS
ns-1092.awsdns-08.org.
ns-1917.awsdns-47.co.uk.
ns-69.awsdns-08.com.
ns-801.awsdns-36.net.
megatest.com.
SOA
ns-801.awsdns-36.net. awsdns-hostmaster.amazon.com.
</code></pre>
<p>But when I want to create my cluster I get this error: </p>
<blockquote>
<p>error doing DNS lookup for NS records for "artistesemergents.com": lookup artistesemergents.com on 127.0.0.53:53: no such host</p>
</blockquote>
<p>The command that I use looks like this</p>
<pre><code>kops create cluster --name=megatest.com --state=s3://kops-state-megatest123 --zones=us-east-1a --node-count=3 --node-size=t2.micro --master-size=t2.micro --dns-zone=megatest.com
</code></pre>
| <p>When you create a hosted zone, AWS does not change the name servers on your registered domain. </p>
<p>Click on your domain under Registered domains and update its name servers to the name servers listed in your new hosted zone.</p>
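<p>You can verify that the delegation has propagated before re-running kops (the domain is the one from the question):</p>
<pre><code>dig NS megatest.com +short
# should return the four awsdns name servers listed in the hosted zone;
# an empty answer means the registered domain still points at the old name servers
</code></pre>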
|
<p>I'm having trouble accessing a Kubernetes environment variable in my python app's init.py file. It appears to be available in other files, however. </p>
<p>My init.py file includes this code <code>app.config.from_object(os.environ['APP_SETTINGS'])</code>. The value of <code>APP_SETTINGS</code> depends on my environment with values being <code>config.DevelopmentConfig</code>, <code>config.StagingConfig</code> or <code>config.ProductionConfig</code>. From here, my app pulls configs from my config.py file, which looks like this:</p>
<pre><code>import os
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
WTF_CSRF_ENABLED = True
SECRET_KEY = 'you-will-never-guess'
APP_SETTINGS = os.environ['APP_SETTINGS'] # For debug purposes
class DevelopmentConfig(Config):
TEMPLATES_AUTO_RELOAD = True
DEBUG = True
class StagingConfig(Config):
DEBUG = True
class ProductionConfig(Config):
DEBUG = False
</code></pre>
<p>When I set APP_SETTINGS locally in my dev environment in my docker-compose, like so...</p>
<pre><code>environment:
- APP_SETTINGS=config.DevelopmentConfig
</code></pre>
<p>everything works just fine. When I deploy to my Staging pod in Kubernetes with <code>APP_SETTINGS=config.StagingConfig</code> set in my Secrets file, I'm greeted with the following error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string
return getattr(module, obj_name)
AttributeError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
ImportError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 3, in <module>
from app import app
File "/root/app/__init__.py", line 11, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object
obj = import_string(obj)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string
sys.exc_info()[2])
File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Debugged import:
- 'config' found in '/root/config.py'.
- 'config.StagingConfig\n' not found.
Original exception:
ImportError: module 'config' has no attribute 'StagingConfig
'
upgrading database schema...
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string
return getattr(module, obj_name)
AttributeError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
ImportError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 3, in <module>
from app import app
File "/root/app/__init__.py", line 11, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object
obj = import_string(obj)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string
sys.exc_info()[2])
File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Debugged import:
- 'config' found in '/root/config.py'.
- 'config.StagingConfig\n' not found.
Original exception:
ImportError: module 'config' has no attribute 'StagingConfig
'
starting metriculous web server...
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 434, in import_string
return getattr(module, obj_name)
AttributeError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
ImportError: module 'config' has no attribute 'StagingConfig
'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 3, in <module>
from app import app
File "/root/app/__init__.py", line 11, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/usr/local/lib/python3.6/site-packages/flask/config.py", line 168, in from_object
obj = import_string(obj)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 443, in import_string
sys.exc_info()[2])
File "/usr/local/lib/python3.6/site-packages/werkzeug/_compat.py", line 137, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/werkzeug/utils.py", line 436, in import_string
raise ImportError(e)
werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Debugged import:
- 'config' found in '/root/config.py'.
- 'config.StagingConfig\n' not found.
Original exception:
ImportError: module 'config' has no attribute 'StagingConfig
</code></pre>
<p>However, when I hard code the APP_SETTINGS value in my init.py file like so <code>app.config.from_object('config.StagingConfig')</code> and deploy to Kubernetes, it works fine. When I do it this way, I can even confirm that my APP_SETTINGS env var declared in my Settings in Kubernetes exists by logging into my pod and running <code>echo $APP_SETTINGS</code>.</p>
<p>Any thoughts about what I'm doing wrong?</p>
<p>EDIT #1 - Adding my deployment.yaml file</p>
<pre><code>kind: Deployment
apiVersion: apps/v1beta2
metadata:
annotations:
deployment.kubernetes.io/revision: '4'
selfLink: /apis/apps/v1beta2/namespaces/tools/deployments/met-staging-myapp
resourceVersion: '51731234'
name: met-staging-myapp
uid: g1fce905-1234-56y4-9c15-12de61100d0a
creationTimestamp: '2018-01-29T17:22:14Z'
generation: 6
namespace: tools
labels:
app: myapp
chart: myapp-1.0.1
heritage: Tiller
release: met-staging
spec:
replicas: 1
selector:
matchLabels:
app: myapp
release: met-staging
template:
metadata:
creationTimestamp: null
labels:
app: myapp
release: met-staging
spec:
containers:
- name: myapp-web
image: 'gitlab.ourdomain.com:4567/ourspace/myapp:web-latest'
ports:
- containerPort: 80
protocol: TCP
env:
- name: APP_SETTINGS
valueFrom:
secretKeyRef:
name: myapp-creds
key: APP_SETTINGS
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: myapp-creds
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: myapp-creds
key: AWS_SECRET_ACCESS_KEY
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: myapp-celery
image: 'gitlab.ourdomain.com:4567/ourspace/myapp:celery-latest'
env:
- name: APP_SETTINGS
valueFrom:
secretKeyRef:
name: myapp-creds
key: APP_SETTINGS
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: myapp-creds
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: myapp-creds
key: AWS_SECRET_ACCESS_KEY
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: rabbit
image: 'rabbitmq:alpine'
env:
- name: RABBITMQ_DEFAULT_USER
value: rabbit_user
- name: RABBITMQ_DEFAULT_PASS
value: fake_pw
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: gitlab-registry
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 6
replicas: 1
updatedReplicas: 1
readyReplicas: 1
availableReplicas: 1
conditions:
- type: Available
status: 'True'
lastUpdateTime: '2018-01-29T17:22:14Z'
lastTransitionTime: '2018-01-29T17:22:14Z'
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
- type: Progressing
status: 'True'
lastUpdateTime: '2018-05-25T10:20:49Z'
lastTransitionTime: '2018-02-16T20:29:45Z'
reason: NewReplicaSetAvailable
message: >-
ReplicaSet "met-staging-myapp-2615c4545f" has successfully
progressed.
</code></pre>
| <blockquote>
<p><code>werkzeug.utils.ImportStringError: import_string() failed for 'config.StagingConfig\n'. Possible reasons are:</code></p>
</blockquote>
<p>It very clearly shows you that the module name has a trailing newline character, which is a very common error for people who try to <code>echo something | base64</code> and put that value into a kubernetes <code>Secret</code>. The <em>correct</em> way of doing that is either via <code>kubectl create secret generic myapp-creds --from-literal=APP_SETTINGS=config.StagingConfig</code>, or <code>printf '%s' config.StagingConfig | base64</code>. Or, of course, stop putting non-Secret text into a Secret and use either a <code>ConfigMap</code> or just a traditional environment <code>value: config.StagingConfig</code> setting, and reserve the <code>Secret</code> construct for actual secret values.</p>
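<p>For completeness, a minimal sketch of the plain environment variable alternative in the Deployment for this one variable (no <code>Secret</code> involved, since the value is not sensitive):</p>
<pre><code> env:
 - name: APP_SETTINGS
 value: config.StagingConfig # plain string; no base64, no trailing newline to worry about
</code></pre>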
|
<p>We are unable to grab logs from the containers running on our GKE cluster if Stackdriver is disabled on GCP. I understand that it is proxying stderr/stdout, but it seems rather heavy-handed to block these outputs when Stackdriver is disabled.</p>
<p>How does one get an EFK stack going on GKE without being billed for Stackdriver, i.e. by disabling it entirely? Or is it so much a part of GKE that this is not doable?</p>
<p>From the article linked on a similar question regarding GCP:</p>
<p>"Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node." (<a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#exposing-logs-directly-from-the-application" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#exposing-logs-directly-from-the-application</a>) </p>
<p>Perhaps our understanding of Stackdriver billing is wrong? </p>
<p>But we don't want to be billed for Stackdriver, as the 150MB of logs outside of the GCP metrics is not going to be enough, and we have some expertise in setting up EFK for logging that we'd like to use. </p>
| <p>You can disable Stackdriver logging/monitoring on Kubernetes by editing your cluster and setting "Stackdriver Logging" and "Stackdriver Monitoring" to disabled.</p>
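<p>The same can be done from the command line (a sketch; double-check the flags against your gcloud version, and the cluster name is a placeholder):</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
  --logging-service none \
  --monitoring-service none
</code></pre>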
<p>I would still suggest sticking to GCP over AWS, as you get the whole Kubernetes-as-a-service experience. Amazon's solution is still a little way off, and they are planning to charge for the service in addition to the EC2 node prices (last I heard).</p>
|
<p>There is a container for using certbot in Kubernetes:
<a href="https://hub.docker.com/r/choffmeister/kubernetes-certbot/" rel="nofollow noreferrer">https://hub.docker.com/r/choffmeister/kubernetes-certbot/</a></p>
<p>It uses the 1.3.0 release. I want to build the same Docker image, but with the latest version, 1.10.2.
I just replaced the version in the file, and when I try to build the image, it exits with this error:</p>
<pre><code> Step 1/9 : FROM debian:8.1
---> 73fe606256a5
Step 2/9 : MAINTAINER *** <***>
---> Using cache
---> 0d91ae291e44
Step 3/9 : RUN apt-get update && apt-get install ---yes git wget && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
---> Using cache
---> 2192a26717c9
Step 4/9 : RUN git clone https://github.com/certbot/certbot /opt/certbot && ln -s /opt/certbot/letsencrypt-auto /usr/local/bin/letsencrypt-auto && letsencrypt-auto; exit 0
---> Using cache
---> e5e4082a2e72
Step 5/9 : RUN wget https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin
---> Using cache
---> 3159428e91d3
Step 6/9 : WORKDIR /opt/kubernetes-certbot
---> Using cache
---> 71eb2643ce9f
Step 7/9 : COPY run.sh ./run.sh
lstat run.sh: no such file or directory
</code></pre>
<p>I get the same problem when I try to build the same Dockerfile from <a href="https://github.com/choffmeister/kubernetes-certbot/blob/master/Dockerfile" rel="nofollow noreferrer">https://github.com/choffmeister/kubernetes-certbot/blob/master/Dockerfile</a>.
Please help.</p>
| <p>It looks like the run.sh file is not available in the build context. Can you confirm that the file is in the same folder as the Dockerfile? Here is the link to the run.sh file:
<a href="https://github.com/choffmeister/kubernetes-certbot/blob/master/run.sh" rel="nofollow noreferrer">https://github.com/choffmeister/kubernetes-certbot/blob/master/run.sh</a></p>
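<p>For example, building from a fresh clone of the repository keeps run.sh in the build context (the image tag below is just an example):</p>
<pre><code>git clone https://github.com/choffmeister/kubernetes-certbot.git
cd kubernetes-certbot
# bump the kubectl version in the Dockerfile, then build from this directory
docker build -t my-registry/kubernetes-certbot:latest .
</code></pre>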
|
<p>I am trying to run my Django + PostgreSQL application on Kubernetes Google Cloud cluster. I've successfully deployed the following files:</p>
<ol>
<li>Django App Deployment</li>
<li>Django App Service</li>
<li>Kubernetes Secret object for DB credentials</li>
<li>PersistentVolumeClaim</li>
</ol>
<p>But I am having trouble when deploying my PostgreSQL DB to the cluster. Here is the definition of my .yml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
tier: backend
ports:
- protocol: TCP
port: 5432
targetPort: 5432
type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
tier: backend
template:
metadata:
labels:
app: postgres-container
tier: backend
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
- name: POSTGRES_DB
value: agent_technologies_db
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-volume-mount
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
- name: postgres-credentials
secret:
secretName: postgres-credentials
</code></pre>
<p>And here is the error I get when I run <code>kubectl logs postgres-85c56dfb9b-95c74</code> command:</p>
<pre><code>initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
</code></pre>
<p>Could someone please explain this error to me. Thanks in advance!</p>
<p>*****UPDATE******</p>
<p>When I run <code>kubectl logs $pod</code> I am getting the following error (even though container is RUNNING on cluster):</p>
<pre><code>Host: 10.52.1.5
Production - Using "POSTGRESQL" Database
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
Host: 10.52.1.5
Production - Using "POSTGRESQL" Database
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
Performing system checks...
System check identified no issues (0 silenced).
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f62f4c948c8>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/utils/autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django/core/management/commands/runserver.py", line 124, in inner_run
self.check_migrations()
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 427, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/loader.py", line 206, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/recorder.py", line 61, in applied_migrations
if self.has_table():
File "/usr/local/lib/python3.6/site-packages/django/db/migrations/recorder.py", line 44, in has_table
return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 255, in cursor
return self._cursor()
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 232, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 168, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
</code></pre>
<p>This is from my settings.py file:</p>
<pre><code>import socket
print("Host: "+socket.gethostbyname(socket.gethostname()))
if(os.getenv('POSTGRES_DB_HOST')==None):
print('Development - Using "SQLITE3" Database')
DATABASES = {
'default':{
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR,'db.sqlite3'),
}
}
else:
print('Production - Using "POSTGRESQL" Database')
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'agent_technologies_db',
'USER': 'stefan_radonjic',
'PASSWORD': 'cepajecar995',
'HOST': os.getenv('POSTGRES_HOST'),
'PORT': os.getenv('POSTGRES_PORT'),
}
}
</code></pre>
<p>Could the error I am getting when I run the <code>kubectl logs $pod</code> command be due to the fact that the PostgreSQL container is not running, so it cannot be found? If anyone is interested in the other files of my project, here is the GitHub link: <a href="https://github.com/StefanCepa/agent-technologies-bachelor" rel="nofollow noreferrer">https://github.com/StefanCepa/agent-technologies-bachelor</a></p>
| <p>It does not look like an error to me, more like a warning. PostgreSQL is complaining that you should use a totally empty directory as the Postgres data folder. What it is telling you to do is to create an empty directory inside your volume and then mount that specific subdirectory. That can be done as follows:</p>
<ul>
<li><p>Create a subdirectory in your volume; you can use, for example, an initContainer that mounts the volume and creates the directory (see the sketch after this list).</p></li>
<li><p>Now that the directory exists in the volume, you can modify the postgres-volume-mount in the Postgres container and add a subPath pointing to the newly created directory.</p>
<pre><code> - name: postgres-volume-mount
mountPath: /var/lib/postgresql/data
subPath: NAME_OF_YOUR_SUBDIRECTORY
</code></pre></li>
</ul>
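<p>A minimal sketch of such an initContainer (the subdirectory name <code>pgdata</code> is only an example and must match whatever you put in <code>subPath</code> above):</p>
<pre><code> initContainers:
 - name: init-pgdata
 image: busybox
 # create the subdirectory inside the mounted volume before PostgreSQL starts
 command: ["sh", "-c", "mkdir -p /var/lib/postgresql/data/pgdata"]
 volumeMounts:
 - name: postgres-volume-mount
 mountPath: /var/lib/postgresql/data
</code></pre>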
|
<p>I'm starting with k8s, and I have a little problem with parallel processing in my pods.</p>
<p>Currently I'm using the .NET Core platform with C# 7.2 for my application, which is running in pods.
I'm trying to use parallel tasks in the app, but it looks like the application is using only one core.</p>
<p><strong>So I'm thinking that I should use only the async/await pattern in this application and handle parallel processing through the number of pods in the deployment settings.
Is this opinion correct?</strong> </p>
<p>Thanks for help.</p>
| <blockquote>
<p>When to use Parallel API ?</p>
</blockquote>
<p>You have a CPU-intensive task and want to ensure all the CPU cores are effectively utilized. Parallel calls are always a blocking operation for the main/UI thread.</p>
<blockquote>
<p>When to use Async Await ?</p>
</blockquote>
<p>When your aim is to do processing asynchronously (in the background), allowing the main/UI thread to remain responsive. The main use case is calling remote processing logic such as a database query, which should not block the server thread. Async/await used for in-memory processing is mainly meant to keep the UI thread responsive for the end user, but that still uses a thread-pool thread; for IO processing, by contrast, no pool threads are used.</p>
<blockquote>
<p>Regarding Kubernetes set up ?</p>
</blockquote>
<p>Since Kubernetes is an orchestration mechanism for Docker, which virtualizes the OS resources used to set up the containers, you may have to ensure there is no configuration setting that restricts the total assigned CPU cores, as that would have an adverse impact on overall performance. This aspect is outside the ambit of the .NET Parallel APIs.</p>
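<p>As an illustration, a CPU limit on the container caps how many cores the Parallel APIs can actually use (the values below are only examples):</p>
<pre><code>resources:
  requests:
    cpu: "1"
  limits:
    cpu: "2"   # the container is throttled to roughly two cores, regardless of node size
</code></pre>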
|
<p>In our Kuberenetes cluster, we are running into sporadic situations where a cluster node runs out of memory and Linux invokes OOM killer. Looking at the logs, it appears that the Pods scheduled onto the Node are requesting more memory than can be allocated by the Node.</p>
<p>The issue is that, when OOM killer is invoked, it prints out a list of processes and their memory usage. However, as all of our Docker containers are Java services, the "process name" just appears as "java", not allowing us to track down which particular Pod is causing the issues.</p>
<p>How can I get the history of which Pods were scheduled to run on a particular Node and when? </p>
| <p>You can now use the kube-state-metrics metric <code>kube_pod_container_status_terminated_reason</code> to detect OOM events:</p>
<pre><code>kube_pod_container_status_terminated_reason{reason="OOMKilled"}
kube_pod_container_status_terminated_reason{container="addon-resizer",endpoint="http-metrics",instance="100.125.128.3:8080",job="kube-state-metrics",namespace="monitoring",pod="kube-state-metrics-569ffcff95-t929d",reason="OOMKilled",service="kube-state-metrics"}
</code></pre>
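<p>If Prometheus scrapes kube-state-metrics, a simple query (a sketch) shows which pods were OOM-killed, grouped by namespace:</p>
<pre><code>sum by (namespace, pod) (kube_pod_container_status_terminated_reason{reason="OOMKilled"}) > 0
</code></pre>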
|
<p>I use <code>minikube</code> to create local kubernetes cluster.</p>
<p>I create <code>ReplicationController</code> via <code>webapp-rc.yaml</code> file.</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: webapp
spec:
replicas: 2
template:
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: tomcat
ports:
- containerPort: 8080
</code></pre>
<p>and, I print the pods' ip to stdout:</p>
<p><code>kubectl get pods -l app=webapp -o yaml | grep podIP</code></p>
<pre><code>podIP: 172.17.0.18
podIP: 172.17.0.1
</code></pre>
<p>and, I want to access pod using <code>curl</code></p>
<p><code>curl 172.17.0.18:8080</code></p>
<p>But, the stdout give me: <code>curl: (52) Empty reply from server</code></p>
<p>I know I can access my application in docker container in pod via service.</p>
<p>I find this code in a book. But the book does not give the <code>context</code> for executing this code.</p>
<p>Using <code>minikube</code>, how to access pod via pod ip using curl in host machine?</p>
<p><strong>update 1</strong></p>
<p>I find a way using <code>kubectl proxy</code>:</p>
<pre><code>➜ ~ kubectl proxy
Starting to serve on 127.0.0.1:8001
</code></pre>
<p>and then I can access pod via curl like this:</p>
<p><code>curl http://localhost:8001/api/v1/namespaces/default/pods/webapp-jkdwz/proxy/</code></p>
<p><code>webapp-jkdwz</code> can be found by command <code>kubectl get pods -l app=webapp</code></p>
<p><strong>update 2</strong></p>
<ol>
<li><p><code>minikube ssh</code> - log into minikube VM</p></li>
<li><p>and then, I can use <code>curl <podIP>:<podPort></code>, for my case is <code>curl 172.17.0.18:8080</code></p></li>
</ol>
| <p>First of all, the tomcat image exposes port 8080, not 80, so the correct YAML would be:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: webapp
spec:
replicas: 2
template:
metadata:
name: webapp
labels:
app: webapp
spec:
containers:
- name: webapp
image: tomcat
ports:
- containerPort: 8080
</code></pre>
<p>minikube is executed inside a virtual machine, so the curl 172.17.0.18:8080 would only work from inside that virtual machine.</p>
<p>You can always create a service to expose your apps:</p>
<pre><code>kubectl expose rc webapp --type=NodePort
</code></pre>
<p>And use the following command to get the URL:</p>
<pre><code>minikube service webapp --url
</code></pre>
<p>If you need to query a specific pod, use <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">port forwarding</a>:</p>
<pre><code>kubectl port-forward <POD NAME> 8080
</code></pre>
<p>Or just ssh into minikube's virtual machine and query from there.</p>
|
<p>I've enabled heapster on minikube:</p>
<pre><code>minikube addons start heapster
</code></pre>
<p>And custom metrics with</p>
<pre><code>minikube start --extra-config kubelet.EnableCustomMetrics=true
</code></pre>
<p>My deployment looks like</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubia
spec:
replicas: 1
template:
metadata:
name: kubia
labels:
app: kubia
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "setup",
"image": "busybox",
"imagePullPolicy": "IfNotPresent",
"command": ["sh", "-c", "echo \"{\\\"endpoint\\\": \\\"http://$POD_IP:8080/metrics\\\"}\" > /etc/custom-metrics/definition.json"],
"env": [{
"name": "POD_IP",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "status.podIP"
}
}
}],
"volumeMounts": [
{
"name": "config",
"mountPath": "/etc/custom-metrics"
}
]
}
]'
spec:
containers:
- image: luksa/kubia:qps
name: nodejs
ports:
- containerPort: 8080
volumeMounts:
- name: config
mountPath: /etc/custom-metrics
resources:
requests:
cpu: 100m
volumes:
- name: config
emptyDir:
</code></pre>
<p>My hpa looks like</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: kubia
annotations:
alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "20"}]}'
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: kubia
targetCPUUtilizationPercentage: 1000000
</code></pre>
<p>However, the target shows as unknown:</p>
<pre><code>jonathan@ubuntu ~> kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
kubia Deployment/kubia <unknown> / 1000000% 1 5 1 31m
</code></pre>
<p>And the following warnings from the hpa</p>
<pre><code> Warning FailedGetResourceMetric 27m (x12 over 33m) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from heapster
Warning FailedComputeMetricsReplicas 27m (x12 over 33m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster
</code></pre>
| <p>Ensure the <code>metrics-server</code> addon is enabled on minikube.</p>
<p>When I start minikube I have the following addons enabled by default:</p>
<pre><code>$ minikube addons list
- addon-manager: enabled
- coredns: disabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: enabled
- ingress: disabled
- kube-dns: enabled
- metrics-server: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
</code></pre>
<p>Enable the metrics server and HPAs appear to work great.</p>
<pre><code>$ minikube addons enable metrics-server
metrics-server was successfully enabled
</code></pre>
|
<p>I have server nodes and powerful, expensive worker nodes.
The worker nodes are set to be autoscaled from/to zero and run a few hours per week.</p>
<p>When the server makes an HTTP request to the worker Service (whose corresponding Job runs on the worker nodes), I want a worker node to come up, start the Job pod and process the request. Something similar to systemd socket-based service activation.</p>
| <p>You probably need to create your own Custom Resource Definition (CRD) and write the trigger you need.</p>
<p>You may find this link useful: <a href="https://kubeless.io/docs/implementing-new-trigger/" rel="nofollow noreferrer">https://kubeless.io/docs/implementing-new-trigger/</a></p>
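<p>A minimal sketch of what such a CRD could look like (the group and names are purely hypothetical; the controller that watches these objects, scales the node pool up and starts the Job still has to be written by you):</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: httptriggers.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: httptriggers
    singular: httptrigger
    kind: HTTPTrigger
</code></pre>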
|
<p>I'm using Arch linux<br>
I had virtualbox 5.2.12 installed<br>
I had the minikube 0.27.0-1 installed<br>
I had the Kubernetes v1.10.0 installed<br></p>
<p>When I try to start minikube with <code>sudo minikube start</code> I get this error:</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0527 12:58:18.929483 22672 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: running command:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: exit status 1
</code></pre>
<p>I have already tried starting minikube with other options like:</p>
<pre><code>sudo minikube start --kubernetes-version v1.10.0 --bootstrapper kubeadm
sudo minikube start --bootstrapper kubeadm
sudo minikube start --vm-driver none
sudo minikube start --vm-driver virtualbox
sudo minikube start --vm-driver kvm
sudo minikube start --vm-driver kvm2
</code></pre>
<p>I always get the same error. Can someone help me?</p>
| <p>The minikube VM is usually started for simple experiments without any important payload.
That's why it's much easier to recreate the minikube cluster than to try to fix it.</p>
<p>To delete existing minikube VM execute the following command:</p>
<pre><code>minikube delete
</code></pre>
<p>This command shuts down and deletes the minikube virtual machine. No data or state is preserved.</p>
<p>Check if you have all dependencies at place and run command:</p>
<pre><code>minikube start
</code></pre>
<p>This command creates a “kubectl context” called “minikube”. This context contains the configuration to communicate with your minikube cluster. minikube sets this context to default automatically, but if you need to switch back to it in the future, run:</p>
<pre><code>kubectl config use-context minikube
</code></pre>
<p>Or pass the context on each command like this: </p>
<pre><code>kubectl get pods --context=minikube
</code></pre>
<p>More information about command line arguments can be found <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#installation" rel="noreferrer">here</a>.</p>
|
<p>I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:</p>
<pre><code>kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash
</code></pre>
<p>I download the correct version of kubectl inside the running container, make it executable and try to run</p>
<pre><code>./kubectl get pods
</code></pre>
<p>but get the following error:</p>
<pre><code>Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"
</code></pre>
<p>Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the serviceaccount to list the pods? My final goal will be to run <code>helm</code> inside the container. According to the docs I found, this should work fine as soon as <code>kubectl</code> is working fine.</p>
| <blockquote>
<p>Does this mean, that kubectl detected it is running inside a cluster and is automatically connecting to that one?</p>
</blockquote>
<p>Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST envvars to locate the API server, and the credential in the auto-injected <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> file to authenticate itself.</p>
<blockquote>
<p>How do I allow the serviceaccount to list the pods?</p>
</blockquote>
<p>That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.</p>
<p>See <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions</a> for more information.</p>
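<p>For example, if listing pods in the <code>default</code> namespace is all you need, a much narrower grant than cluster-admin would look roughly like this:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>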
<p>I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the <code>serviceAccountName</code> in the helm pods). Then, to grant superuser permissions to that service account, run: </p>
<pre><code>kubectl create clusterrolebinding helm-superuser \
--clusterrole=cluster-admin \
--serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
</code></pre>
|
<p>I am currently trying to use a Kubernetes cluster for the Gitlab CI.
While following the not-so-good docs (<a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a>), what I did was manually register a runner with the token from the GitLab CI section so I could get another token and use it in the ConfigMap I use for the deployment.</p>
<p>-ConfigMap</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner
namespace: gitlab
data:
config.toml: |
concurrent = 4
[[runners]]
name = "Kubernetes Runner"
url = "https://url/ci"
token = "TOKEN"
executor = "kubernetes"
[runners.kubernetes]
namespace = "gitlab"
</code></pre>
<p>-Deployment</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab
spec:
replicas: 4
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
containers:
- args:
- run
image: gitlab/gitlab-runner:latest
imagePullPolicy: Always
name: gitlab-runner
volumeMounts:
- mountPath: /etc/gitlab-runner
name: config
restartPolicy: Always
volumes:
- configMap:
name: gitlab-runner
name: config
</code></pre>
<p>With these two I get to see the runner in the GitLab Runner section, but whenever I start a job, the newly created pods stay in Pending status.</p>
<p>I would like to fix it but all I know is that the nodes and pods get these events:</p>
<p>-Pods:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
35s 4s 7 {default-scheduler } Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (2).
</code></pre>
<p>-Nodes:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4d 31s 6887 {kubelet gitlab-ci-hc6k3ffax54o-master-0} Warning FailedNodeAllocatableEnforcement Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3783761920 to memory.limit_in_bytes: write /rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument
</code></pre>
<p><strong>Any idea of why this is happening?</strong></p>
<p>EDIT: kubectl describe added:</p>
<pre><code>Name: runner-45384765-project-1570-concurrent-00mb7r
Namespace: gitlab
Node: /
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
build:
Image: blablabla:latest
Port:
Command:
sh
-c
if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
else
echo shell not found
exit 1
fi
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1qm5n (ro)
/vcs from repo (rw)
Environment Variables:
CI_PROJECT_DIR: blablabla
CI_SERVER: yes
CI_SERVER_TLS_CA_FILE: -----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
CI: true
GITLAB_CI: true
CI_SERVER_NAME: GitLab
CI_SERVER_VERSION: 9.5.5-ee
CI_SERVER_REVISION: cfe2d5c
CI_JOB_ID: 5625
CI_JOB_NAME: pylint
CI_JOB_STAGE: build
CI_COMMIT_SHA: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_COMMIT_REF_NAME: master
CI_COMMIT_REF_SLUG: master
CI_REGISTRY_USER: gitlab-ci-token
CI_BUILD_ID: 5625
CI_BUILD_REF: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_BUILD_BEFORE_SHA: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_BUILD_REF_NAME: master
CI_BUILD_REF_SLUG: master
CI_BUILD_NAME: pylint
CI_BUILD_STAGE: build
CI_PROJECT_ID: 1570
CI_PROJECT_NAME: blablabla
CI_PROJECT_PATH: blablabla
CI_PROJECT_PATH_SLUG: blablabla
CI_PROJECT_NAMESPACE: vcs
CI_PROJECT_URL: https://blablabla
CI_PIPELINE_ID: 2574
CI_CONFIG_PATH: .gitlab-ci.yml
CI_PIPELINE_SOURCE: push
CI_RUNNER_ID: 111
CI_RUNNER_DESCRIPTION: testing on kubernetes
CI_RUNNER_TAGS: docker-image-build
CI_REGISTRY: blablabla
CI_REGISTRY_IMAGE: blablabla
PYLINTHOME: ./pylint-home
GITLAB_USER_ID: 2277
GITLAB_USER_EMAIL: blablabla
helper:
Image: gitlab/gitlab-runner-helper:x86_64-a9a76a50
Port:
Command:
sh
-c
if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
else
echo shell not found
exit 1
fi
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1qm5n (ro)
/vcs from repo (rw)
Environment Variables:
CI_PROJECT_DIR: blablabla
CI_SERVER: yes
CI_SERVER_TLS_CA_FILE: -----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
blablabla
-----END CERTIFICATE-----
CI: true
GITLAB_CI: true
CI_SERVER_NAME: GitLab
CI_SERVER_VERSION: 9.5.5-ee
CI_SERVER_REVISION: cfe2d5c
CI_JOB_ID: 5625
CI_JOB_NAME: pylint
CI_JOB_STAGE: build
CI_COMMIT_SHA: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_COMMIT_REF_NAME: master
CI_COMMIT_REF_SLUG: master
CI_REGISTRY_USER: gitlab-ci-token
CI_BUILD_ID: 5625
CI_BUILD_REF: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_BUILD_BEFORE_SHA: ece31293f8eeb3a36a8585b79d4d21e0ebe8008f
CI_BUILD_REF_NAME: master
CI_BUILD_REF_SLUG: master
CI_BUILD_NAME: pylint
CI_BUILD_STAGE: build
CI_PROJECT_ID: 1570
CI_PROJECT_NAME: blablabla
CI_PROJECT_PATH: blablabla
CI_PROJECT_PATH_SLUG: blablabla
CI_PROJECT_NAMESPACE: vcs
CI_PROJECT_URL: blablabla
CI_PIPELINE_ID: 2574
CI_CONFIG_PATH: .gitlab-ci.yml
CI_PIPELINE_SOURCE: push
CI_RUNNER_ID: 111
CI_RUNNER_DESCRIPTION: testing on kubernetes
CI_RUNNER_TAGS: docker-image-build
CI_REGISTRY: blablabla
CI_REGISTRY_IMAGE: blablabla
PYLINTHOME: ./pylint-home
GITLAB_USER_ID: 2277
GITLAB_USER_EMAIL: blablabla
Conditions:
Type Status
PodScheduled False
Volumes:
repo:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-1qm5n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1qm5n
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
39s 8s 7 {default-scheduler } Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (2).
</code></pre>
| <p>@djuarez As long as the Deployment selector matches the pod labels in the template section, and in this case, from what I can see, it does:</p>
<pre><code>selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
</code></pre>
<p>it should not be a problem, provided the correct API is used, and in this case <code>apiVersion: extensions/v1beta1</code> is also correct. The <code>describe</code> output shows <code>MatchNodeSelector</code>, which has nothing to do with the Deployment selector. My guess is that the full Deployment config is not being shown here and something else is wrong, such as trying to schedule the pods onto specific nodes via a <code>nodeSelector</code> that requests a label none of the nodes have.</p>
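<p>Two quick checks that usually pinpoint a <code>MatchNodeSelector</code> failure (the pod name is a placeholder):</p>
<pre><code># labels actually present on the nodes
kubectl get nodes --show-labels

# nodeSelector the pending pod is asking for
kubectl get pod PENDING_POD_NAME -n gitlab -o jsonpath='{.spec.nodeSelector}'
</code></pre>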
|
<p>I have a conflict between Kubernetes and Spring Boot environment variables. Details are as follows:</p>
<p>When creating my zipkin server pod, I need to set env variable <code>RABBITMQ_HOST=http://172.16.100.83,RABBITMQ_PORT=5672</code>. </p>
<p>Initially I define zipkin_pod.yaml as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: gearbox-rack-zipkin-server
labels:
app: gearbox-rack-zipkin-server
purpose: platform-demo
spec:
containers:
- name: gearbox-rack-zipkin-server
image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
ports:
- containerPort: 9411
env:
- name: EUREKA_SERVER
value: http://172.16.100.83:31501
- name: RABBITMQ_HOST
value: http://172.16.100.83
- name: RABBITMQ_PORT
value: 31503
</code></pre>
<p>With this configuration, when I do command </p>
<pre><code>kubectl apply -f zipkin_pod.yaml
</code></pre>
<p>The console throws error:</p>
<pre><code>[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
Error from server (BadRequest): error when creating "zipkin_pod.yaml": Pod in version "v1" cannot be handled as a Pod: v1.Pod: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, parsing 1018 ...,"value":3... at {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"gearbox-rack-zipkin-server\",\"purpose\":\"platform-demo\"},\"name\":\"gearbox-rack-zipkin-server\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"EUREKA_SERVER\",\"value\":\"http://172.16.100.83:31501\"},{\"name\":\"RABBITMQ_HOST\",\"value\":\"http://172.16.100.83\"},{\"name\":\"RABBITMQ_PORT\",\"value\":31503}],\"image\":\"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server\",\"name\":\"gearbox-rack-zipkin-server\",\"ports\":[{\"containerPort\":9411}]}]}}\n"},"labels":{"app":"gearbox-rack-zipkin-server","purpose":"platform-demo"},"name":"gearbox-rack-zipkin-server","namespace":"default"},"spec":{"containers":[{"env":[{"name":"EUREKA_SERVER","value":"http://172.16.100.83:31501"},{"name":"RABBITMQ_HOST","value":"http://172.16.100.83"},{"name":"RABBITMQ_PORT","value":31503}],"image":"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server","name":"gearbox-rack-zipkin-server","ports":[{"containerPort":9411}]}]}}
</code></pre>
<p>so I modified the last line of the zipkin_pod.yaml file as follows, using brute force to make the port number an int:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: gearbox-rack-zipkin-server
labels:
app: gearbox-rack-zipkin-server
purpose: platform-demo
spec:
containers:
- name: gearbox-rack-zipkin-server
image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
ports:
- containerPort: 9411
env:
- name: EUREKA_SERVER
value: http://172.16.100.83:31501
- name: RABBITMQ_HOST
value: http://172.16.100.83
- name: RABBITMQ_PORT
value: !!31503
</code></pre>
<p>Then the pod is successfully created, but Spring's getProperties throws an exception.</p>
<pre><code>[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
pod "gearbox-rack-zipkin-server" created
</code></pre>
<p>When I check logs:</p>
<pre><code>[root@master3 sup]# kubectl logs gearbox-rack-zipkin-server
2018-05-28 07:56:26.792 INFO [zipkin-server,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@4ac68d3e: startup date [Mon May 28 07:56:26 UTC 2018]; root of context hierarchy
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.autoconfigure.amqp.RabbitProperties@324c64cd failed:
Property: spring.rabbitmq.port
Value:
Reason: Failed to convert property value of type 'java.lang.String' to required type 'int' for property 'port'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [int]
Action:
Update your application's configuration
</code></pre>
<p>My question is how to let Kubernetes treat the port number as an int without breaking Spring Boot's string-to-int conversion, because Spring Boot could not convert <code>!!31503</code> to the int <code>31503</code>.</p>
| <p>As @Bal Chua and @Pär Nilsson mentioned, for environment variables you can only use strings, because Linux environment variables can only be strings. </p>
<p>So, if you use YAML, you need to place the value in quotes to force Kubernetes to treat it as a string.</p>
<p>For example:</p>
<pre><code>- name: RABBITMQ_PORT
value: '31503'
</code></pre>
|
<p>I want to export <em>already templated</em> Helm Charts as YAML files. I cannot use Tiller on my Kubernetes cluster at the moment, but still want to make use of Helm Charts. Basically, I want Helm to export the YAML that gets sent to the Kubernetes API, with values that have been templated by Helm. After that, I will upload the YAML files to my Kubernetes cluster.</p>
<p>I tried to run <code>.\helm.exe install --debug --dry-run incubator\kafka</code> but I get the error <code>Error: Unauthorized</code>. </p>
<p>Note that I run Helm on Windows (version helm-v2.9.1-windows-amd64).</p>
| <p>We need logs to check the <code>Unauthorized</code> issue.</p>
<p>But you can easily generate templates locally:</p>
<pre><code>helm template mychart
</code></pre>
<blockquote>
<p>Render chart templates locally and display the output.</p>
<p>This does not require Tiller. However, any values that would normally
be looked up or retrieved in-cluster will be faked locally.
Additionally, none of the server-side testing of chart validity (e.g.
whether an API is supported) is done.</p>
</blockquote>
<p>More info: <a href="https://helm.sh/docs/helm/helm_template/" rel="noreferrer">https://helm.sh/docs/helm/helm_template/</a></p>
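<p>For example, to render the chart from the question locally with your own values and apply the result without Tiller (assuming the incubator repo is already added; file names are illustrative):</p>
<pre><code>helm fetch incubator/kafka --untar
helm template ./kafka --name my-kafka --values my-values.yaml > kafka-manifests.yaml
kubectl apply -f kafka-manifests.yaml
</code></pre>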
|
<p>I am searching for the command to <strong>print out the podname (or hostname)</strong> when I call my echoserver (<code>gcr.io/google_containers/echoserver</code>. I saw that in a video, regarding <strong>loadbalancing and ingress as a proof of concept</strong>, to show which server responds when I hit the refresh button in the browser. But I cannot remember how that worked or where that was. I searched the web but didn't find any hint.</p>
<p>At the moment my ReplicaSet looks like this:
Maybe I am missing an env variable or something like this.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
name: echoserver
spec:
replicas: 1
template:
metadata:
name: echoserver
labels:
project: chapter5
service: echoserver
spec:
containers:
- name: echoserver
image: gcr.io/google_containers/echoserver:1.4
ports:
- containerPort: 8080
</code></pre>
| <p>I got it: I have to raise the Version!</p>
<p>With versions greater than 1.4 it works :-)</p>
<p>So the correct one is the current version, 1.10:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
name: echoserver
spec:
replicas: 1
template:
metadata:
name: echoserver
labels:
project: chapter5
service: echoserver
spec:
containers:
- name: echoserver
image: gcr.io/google_containers/echoserver:1.10
ports:
- containerPort: 8080
</code></pre>
|
<p>Is it possible to get a list of pods that are running on the master from kubectl?</p>
<p>I have tried this </p>
<pre><code>kubectl get pods -o wide --sort-by="{.spec.nodeName}"
</code></pre>
<p>but this doesn't say whether the node is a master or a worker</p>
| <p>As mentioned in <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/" rel="nofollow noreferrer">the overview</a>:</p>
<blockquote>
<p>A Pod always runs on a <strong><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a></strong>.<br>
<strong>A Node is a worker machine in Kubernetes</strong> and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master</p>
</blockquote>
<p>So by definition (even if it runs on the same physical machine as the master), any node is a "worker machine"</p>
<p><a href="https://i.stack.imgur.com/190yK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/190yK.png" alt="https://d33wubrfki0l68.cloudfront.net/5cb72d407cbe2755e581b6de757e0d81760d5b86/a9df9/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></a></p>
<p>Only <code>kubectl get node</code> displays a ROLE:</p>
<pre><code>vonc@voncvb:~/.kube$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
serv0.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
serv1.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
serv2.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
servm.server Ready master 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
^^^^^^^
</code></pre>
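<p>Once you know which node has the <code>master</code> role, you can list the pods scheduled on it by node name (a sketch using the node name from the output above):</p>
<pre><code># simple filter on the wide output
kubectl get pods --all-namespaces -o wide | grep servm.server

# or, with a recent kubectl, filter server-side by node name
kubectl get pods --all-namespaces --field-selector spec.nodeName=servm.server
</code></pre>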
|
<p>I was one of those having trouble with the above-mentioned issue where, after a "kubectl delete -f", my container would be stuck on "Terminating".
I could not see anything in the Docker logs to help me narrow it down.
After a Docker restart the pod would be gone and I could continue as usual, but this is not the way to live your life.</p>
<p>I Googled for hours and finally got something on a random post somewhere.</p>
<p>Solution:
When I installed Kubernetes on Ubuntu 16.04 I followed a guide that said to install "docker.io".
In this article it said to remove "docker.io" and rather use a "docker-ce or docker-ee" installation.</p>
<p>BOOM, I did it (and disabled swap with swapoff), and my troubles are no more.</p>
<p>I hope this helps people that are also stuck with this.</p>
<p>Cheers</p>
| <p>As <a href="https://stackoverflow.com/users/8019337/kleuf">kleuf</a> mentioned in comments, the solution to the stuck docker container in his case was the following:</p>
<blockquote>
<p>When i installed Kubernetes on Ubuntu 16.04 i followed a guide that
said to install "docker.io". In this article it said to remove
"docker.io" and rather use a "docker-ce or docker-ee" installation.</p>
</blockquote>
<pre><code>sudo apt-get remove docker docker-engine docker-ce docker.io
sudo apt-get remove docker docker-engine docker.io -y
curl -fsSL download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce -y
sudo service docker restart
</code></pre>
<blockquote>
<p>BOOM, i did it, disabled the swappoff function and my troubles are no
more.</p>
<p>I hope this helps people that are also stuck with this.</p>
</blockquote>
|
<p>I am creating some secrets when <code>helm install</code> is executed via <code>pre-install</code> hooks.</p>
<p>Everything works great. However when <code>helm delete</code> is performed the secrets created are not deleted. This is because any resource installed using <code>pre-install</code> is considered to be self managed. So I read this could be done using <code>post-delete</code> hooks.</p>
<p>So questions are:</p>
<ol>
<li><p>How do I delete secrets in post delete?</p></li>
<li><p>If we remove <code>pre-install</code> hooks then then delete works just fine. But then how to guarantee that secrets are created before the pods are even created when we perform <code>helm install</code>?</p></li>
</ol>
| <p>Tiller creates resources in a specific order (find it in the source code here: <a href="https://github.com/kubernetes/helm/blob/master/pkg/tiller/kind_sorter.go#L26" rel="noreferrer">https://github.com/kubernetes/helm/blob/master/pkg/tiller/kind_sorter.go#L26</a>)</p>
<p>So for this specific use case there is no need for hooks or any other mechanism; just include your secret and your pods and magic will happen ;)</p>
<p>That said, there is still the issue with <em>pre-installed</em> objects. The documentation states that this is the desired behaviour:</p>
<blockquote>
<p>Practically speaking, this means that if you create resources in a
hook, you cannot rely upon helm delete to remove the resources. To
destroy such resources, you need to either write code to perform this
operation in a pre-delete or post-delete hook or add
"helm.sh/hook-delete-policy" annotation to the hook template file.</p>
</blockquote>
<p>Otherwise, the solution is to add a job to the chart, annotated with the <em>post-delete</em> hook, that deletes those resources; a sketch of such a job follows.</p>
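<p>A minimal sketch of such a cleanup job. The secret name, the kubectl image, and the assumption that the job's service account is allowed to delete secrets are all placeholders you would need to adapt to your chart:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-cleanup-secrets"
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: cleanup
        # placeholder image - use any image that ships kubectl and that you trust
        image: lachlanevenson/k8s-kubectl:latest
        command: ["kubectl", "delete", "secret", "{{ .Release.Name }}-my-secret", "--ignore-not-found"]
</code></pre>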
|
<p>I'd like to know if the client-go library for Kubernetes contains a function that validates a JSON/YAML manifest file. Ideally, it would catch errors such as names not being DNS-1123 compliant or invalid fields being specified. It would also be ideal if a list of errors was returned, as opposed to the function returning after the first error encountered.</p>
<p>One thought I have tried is doing an exec to call <code>kubectl --validate --dry-run</code> but this does not fully validate a manifest (meaning it's possible to pass here but fail when you actually apply the file). It also stops at the first error. Plus, it would get expensive quickly if you have a list of manifests to go through.</p>
<p>Another option I looked at was here <a href="https://github.com/kubernetes/client-go/issues/193" rel="nofollow noreferrer">Kubernetes GitHub Issue 193</a> but that's not really the appropriate function nor does it do the checks I'm looking for. </p>
| <p>The client-go library for Kubernetes contains no validation functions for YAML/JSON configuration files.</p>
<p>But take a look at this <a href="https://github.com/garethr/kubeval" rel="nofollow noreferrer">utility</a> (kubeval): you can use it for validation on the client side and also use its code as an example of a validation implementation.</p>
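<p>Usage is straightforward once the binary is on your PATH; a typical invocation looks like the following (flags can differ between kubeval versions, so double-check with <code>kubeval --help</code>):</p>
<pre><code>kubeval my-deployment.yaml
kubeval --kubernetes-version 1.9.0 manifests/*.yaml
</code></pre>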
|
<p>I have just installed my kubernetes cluster on azure using AKS. I have not installed anything and I noticed that the 'tunnelfront' pod was running:</p>
<p><a href="https://i.stack.imgur.com/15pCP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/15pCP.png" alt="tunnelfront"></a></p>
<p>I have tried to find out what this pod is for and why it is running on my cluster, cannot find any reasons for it being there. I used kubectl to describe the pod:</p>
<pre><code>Name: tunnelfront-597b4868b8-8rz4w
Namespace: kube-system
Node: aks-agentpool-22029027-0/10.240.0.5
Start Time: Mon, 07 May 2018 19:51:22 +0200
Labels: component=tunnel
pod-template-hash=1536042464
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"tunnelfront-597b4868b8","uid":"d46dab68-449e-11e8-961c-0a58a...
Status: Running
IP: 10.244.1.72
Controlled By: ReplicaSet/tunnelfront-597b4868b8
Containers:
tunnel-front:
Container ID: docker://a69b8d6dcaef7253d41d44fbd57fd776a0dfbf70dbbbb8303a691bebab169c26
Image: dockerio.azureedge.net/deis/hcp-tunnel-front:v1.9.2-v3.0.3
Image ID: docker-pullable://dockerio.azureedge.net/deis/hcp-tunnel-front@sha256:378db6f97778c6d86de94f72573a97975cd7b5ff6f1f02c1618616329fd94f1f
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 15 May 2018 09:40:10 +0200
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 07 May 2018 19:56:15 +0200
Finished: Tue, 15 May 2018 09:40:09 +0200
Ready: True
Restart Count: 1
Liveness: exec [/lib/tunnel-front/check-tunnel-connection.sh] delay=10s timeout=1s period=10s #success=1 #failure=12
Environment:
OVERRIDE_TUNNEL_SERVER_NAME: t_XXXXXX-66f17513.hcp.westeurope.azmk8s.io
KUBE_CONFIG: /etc/kubernetes/kubeconfig/kubeconfig
Mounts:
/etc/kubernetes/certs from certificates (ro)
/etc/kubernetes/kubeconfig from kubeconfig (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xkj92 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet
HostPathType:
certificates:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/certs
HostPathType:
default-token-xkj92:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xkj92
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 1m (x54 over 7d) kubelet, aks-agentpool-22029027-0 Liveness probe failed:
</code></pre>
<p>I can see that the image is from <code>deis</code>, but I have not installed <code>helm</code> or any such tool. What is TunnelFront? And do I need it? </p>
| <p><code>tunnelfront</code> is an AKS system component, installed on every cluster, that facilitates secure communication between your hosted Kubernetes control plane and your nodes. It's needed for certain operations like <code>kubectl exec</code>, and will be redeployed to your cluster on version upgrades (note that the tunnelfront version matches the cluster version).</p>
<p>If you run into problems with tunnelfront, please do file an issue on <a href="https://github.com/Azure/AKS/issues" rel="noreferrer">https://github.com/Azure/AKS/issues</a></p>
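<p>For a quick health check you can rely on the <code>component=tunnel</code> label that is visible in your pod description:</p>
<pre><code>kubectl -n kube-system get pods -l component=tunnel
kubectl -n kube-system describe pod -l component=tunnel
</code></pre>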
|
<p>I need to deploy GitLab with Helm on Kubernetes.
The problem I have: the PVCs stay Pending.</p>
<p>I see <code>volume.alpha.kubernetes.io/storage-class: default</code> in the PVC description, but I set the value <code>gitlabDataStorageClass: gluster-heketi</code> in values.yaml.
And I can deploy the simple nginx example from this article just fine: <a href="https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md" rel="nofollow noreferrer">https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md</a>
Yes, I use the distributed storage GlusterFS <a href="https://github.com/gluster/gluster-kubernetes" rel="nofollow noreferrer">https://github.com/gluster/gluster-kubernetes</a></p>
<pre><code># kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gitlab1-gitlab-data Pending 19s
gitlab1-gitlab-etc Pending 19s
gitlab1-postgresql Pending 19s
gitlab1-redis Pending 19s
gluster1 Bound pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 43m
</code></pre>
<p>The structure of a single pending claim:</p>
<pre><code># kubectl get pvc gitlab1-gitlab-data -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.alpha.kubernetes.io/storage-class: default
creationTimestamp: 2018-05-29T19:43:18Z
finalizers:
- kubernetes.io/pvc-protection
name: gitlab1-gitlab-data
namespace: default
resourceVersion: "263950"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gitlab1-gitlab-data
uid: 8958d4f5-6378-11e8-8f10-4ccc6a60fcbe
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
status:
phase: Pending
</code></pre>
<p>In describe I see:</p>
<pre><code># kubectl describe pvc gitlab1-gitlab-data
Name: gitlab1-gitlab-data
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: volume.alpha.kubernetes.io/storage-class=default
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m (x43 over 12m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>My values.yaml file:</p>
<pre><code># Default values for kubernetes-gitlab-demo.
# This is a YAML-formatted file.
# Required variables
# baseDomain is the top-most part of the domain. Subdomains will be generated
# for gitlab, mattermost, registry, and prometheus.
# Recommended to set up an A record on the DNS to *.your-domain.com to point to
# the baseIP
# e.g. *.your-domain.com. A 300 baseIP
baseDomain: my-domain.com
# legoEmail is a valid email address used by Let's Encrypt. It does not have to
# be at the baseDomain.
legoEmail: [email protected]
# Optional variables
# baseIP is an externally provisioned static IP address to use instead of the provisioned one.
#baseIP: 95.165.135.109
nameOverride: gitlab
# `ce` or `ee`
gitlab: ce
gitlabCEImage: gitlab/gitlab-ce:10.6.2-ce.0
gitlabEEImage: gitlab/gitlab-ee:10.6.2-ee.0
postgresPassword: NDl1ZjNtenMxcWR6NXZnbw==
initialSharedRunnersRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
mattermostAppSecret: NDl1ZjNtenMxcWR6NXZnbw==
mattermostAppUID: aadas
redisImage: redis:3.2.10
redisDedicatedStorage: true
redisStorageSize: 5Gi
redisAccessMode: ReadWriteOnce
postgresImage: postgres:9.6.5
# If you disable postgresDedicatedStorage, you should consider bumping up gitlabRailsStorageSize
postgresDedicatedStorage: true
postgresAccessMode: ReadWriteOnce
postgresStorageSize: 30Gi
gitlabDataAccessMode: ReadWriteOnce
#gitlabDataStorageSize: 30Gi
gitlabRegistryAccessMode: ReadWriteOnce
#gitlabRegistryStorageSize: 30Gi
gitlabConfigAccessMode: ReadWriteOnce
#gitlabConfigStorageSize: 1Gi
gitlabRunnerImage: gitlab/gitlab-runner:alpine-v10.6.0
# Valid values for provider are `gke` for Google Container Engine. Leaving it blank (or any othervalue) will disable fast disk options.
#provider: gke
# Gitlab pages
# The following 3 lines are needed to enable gitlab pages.
# pagesExternalScheme: http
# pagesExternalDomain: your-pages-domain.com
# pagesTlsSecret: gitlab-pages-tls # An optional reference to a tls secret to use in pages
## Storage Class Options
## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
## If not defined, but provider is gke, will use SSDs
## Otherwise default: volume.alpha.kubernetes.io/storage-class: default
gitlabConfigStorageClass: gluster-heketi
gitlabDataStorageClass: gluster-heketi
gitlabRegistryStorageClass: gluster-heketi
postgresStorageClass: gluster-heketi
redisStorageClass: gluster-heketi
healthCheckToken: 'SXBAQichEJasbtDSygrD'
# Optional, for GitLab EE images only
#gitlabEELicense: base64-encoded-license
# Additional omnibus configuration,
# see https://docs.gitlab.com/omnibus/settings/configuration.html
# for possible configuration options
#omnibusConfigRuby: |
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.example.org"
gitlab-runner:
checkInterval: 1
# runnerRegistrationToken must equal initialSharedRunnersRegistrationToken
runnerRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
# resources:
# limits:
# memory: 500Mi
# cpu: 600m
# requests:
# memory: 500Mi
# cpu: 600m
runners:
privileged: true
## Build Container specific configuration
##
# builds:
# cpuLimit: 200m
# memoryLimit: 256Mi
# cpuRequests: 100m
# memoryRequests: 128Mi
## Service Container specific configuration
##
# services:
# cpuLimit: 200m
# memoryLimit: 256Mi
# cpuRequests: 100m
# memoryRequests: 128Mi
## Helper Container specific configuration
##
# helpers:
# cpuLimit: 200m
# memoryLimit: 256Mi
# cpuRequests: 100m
# memoryRequests: 128Mi
</code></pre>
<p>You can see I have the StorageClass:</p>
<pre><code># kubectl get sc
NAME PROVISIONER AGE
gluster-heketi kubernetes.io/glusterfs 48m
</code></pre>
| <p>Without a link to the actual helm chart you used, it's impossible for anyone to troubleshoot why the go-template isn't correctly consuming your <code>values.yaml</code>.</p>
<blockquote>
<p>I see <code>volume.alpha.kubernetes.io/storage-class: default</code> in PVC description, but I set value <code>gitlabDataStorageClass: gluster-heketi</code> in values.yaml</p>
</blockquote>
<p>I can appreciate you set whatever you wanted in values.yaml, but as long as that <code>StorageClass</code> doesn't match any existing <code>StorageClass</code>, I'm not sure what positive thing will materialize from there. You can certainly try creating a <code>StorageClass</code> named <code>default</code> containing the same values as your <code>gluster-heketi</code> SC, or update the PVC to use the correct SC.</p>
<p>To be honest, this may be a bug in the helm chart, but until it is fixed (and/or we get the link to the chart so you know how to adjust your yaml), you will need to work around this situation manually if you want your GitLab to deploy.</p>
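<p>If you go the workaround route, a sketch of a <code>default</code>-named class would look like the following. Copy the real <code>parameters</code> from <code>kubectl get sc gluster-heketi -o yaml</code>; the values here are only placeholders:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8081"
  restauthenabled: "false"
</code></pre>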
|
<p><strong>traefik.toml</strong>:</p>
<pre><code>defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.forwardedHeaders]
trustedIPs = ["0.0.0.0/0"]
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[entryPoints.https.forwardedHeaders]
trustedIPs = ["0.0.0.0/0"]
[api]
</code></pre>
<p><strong>traefik Service</strong>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: http
- protocol: TCP
port: 443
name: https
type: LoadBalancer
</code></pre>
<p><strong>Then:</strong></p>
<pre><code>kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
deployment "source-ip-app" created
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
service "clusterip" exposed
kubectl get svc clusterip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.5.55.102 <none> 80/TCP 2h
</code></pre>
<p><strong>Create ingress for clusterip:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: clusterip-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: clusterip.staging
http:
paths:
- backend:
serviceName: clusterip
servicePort: 80
</code></pre>
<p><strong>clusterip.staging ip: 192.168.0.69</strong></p>
<p><strong>From other pc with ip: 192.168.0.100:</strong></p>
<pre><code>wget -qO - clusterip.staging
</code></pre>
<p><strong>and get results:</strong></p>
<pre><code>CLIENT VALUES:
client_address=10.5.65.74
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://clusterip.staging:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
cache-control=max-age=0
host=clusterip.staging
upgrade-insecure-requests=1
x-forwarded-for=10.5.64.0
x-forwarded-host=clusterip.staging
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-server=traefik-ingress-controller-755cc56458-t8q9k
x-real-ip=10.5.64.0
BODY:
-no body in request-
</code></pre>
<p><strong>kubectl get svc --all-namespaces</strong></p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default clusterip NodePort 10.5.55.102 <none> 80:31169/TCP 19h
default kubernetes ClusterIP 10.5.0.1 <none> 443/TCP 22d
kube-system kube-dns ClusterIP 10.5.0.3 <none> 53/UDP,53/TCP 22d
kube-system kubernetes-dashboard ClusterIP 10.5.5.51 <none> 443/TCP 22d
kube-system traefik-ingress-service LoadBalancer 10.5.2.37 192.168.0.69 80:32745/TCP,443:30219/TCP 1d
kube-system traefik-web-ui NodePort 10.5.60.5 <none> 80:30487/TCP 7d
</code></pre>
<p><strong>How do I get the real IP (192.168.0.100) in my installation? Why is x-real-ip 10.5.64.0?</strong> I could not find the answers in the documentation.</p>
| <p>When <code>kube-proxy</code> uses the <code>iptables</code> mode, it uses NAT to send data to the node where the payload (pod) runs, and you lose the original <code>SourceIP</code> address in that case.</p>
<p>As I understand it, you use <code>MetalLB</code> behind the <code>Traefik</code> Ingress Service (because its type is <code>LoadBalancer</code>). That means traffic from the client to the backend goes this way:</p>
<p><code>Client -> MetalLB -> Traefik LB -> Traefik Service -> Backend pod</code>.</p>
<p>Traefik works correctly and adds the <code>x-*</code> headers, including <code>x-forwarded-for</code> and <code>x-real-ip</code>, but they contain the NATed address instead of the client's, and here is why:</p>
<p>From the <code>Metallb</code> <a href="https://metallb.universe.tf/usage/" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>MetalLB understands the service’s <code>externalTrafficPolicy</code> option and implements different announcements modes depending on the policy and announcement protocol you select.</p>
<ul>
<li><p><strong>Layer2</strong></p>
<p>This policy results in uniform traffic distribution across all pods in the service. However, <code>kube-proxy</code> will <strong>obscure the source IP address</strong> of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the cluster’s leader node.</p></li>
<li><p><strong>BGP</strong></p>
<ul>
<li><p><strong>“Cluster” traffic policy</strong></p>
<p>With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load-balancing (provided by kube-proxy), which directs the traffic to individual pods.</p>
<p>......</p>
<p>The other downside of the “Cluster” policy is that <code>kube-proxy</code> will <strong>obscure the source IP address</strong> of the connection when it does its load-balancing, so your pod logs will show that external traffic appears to be coming from your cluster’s nodes.</p></li>
<li><p><strong>“Local” traffic policy</strong></p>
<p>With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no “horizontal” traffic flow between nodes.</p>
<p>This policy provides the most efficient flow of traffic to your service. Furthermore, because <code>kube-proxy</code> doesn’t need to send traffic between cluster nodes, <strong>your pods can see the real source IP address</strong> of incoming connections.</p></li>
</ul></li>
</ul>
</blockquote>
<p>Finally, the only way to get the real source IP address is to use the "Local" <code>externalTrafficPolicy</code> on the Traefik Service.</p>
<p>If you set it up, you will get what you want.</p>
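<p>Concretely, that is a one-line addition to the Traefik Service you already posted (note that with <code>Local</code>, nodes that do not run a Traefik pod will not receive traffic):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  externalTrafficPolicy: Local
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
</code></pre>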
|
<p>I'm planning to upgrade my Kubernetes cluster from v1.7.10 to v1.8.12, but as this <a href="https://github.com/kubernetes/kubernetes/issues/53644" rel="nofollow noreferrer">issue</a> states, all containers will be restarted because of the spec hash change. So, what's the suggested upgrade procedure? Do I need to drain the node before upgrading the kubelet version, or just do an in-place upgrade and let all the containers restart? What's the difference?</p>
<p>Also, since upgrading to v1.9.0 will also cause containers to restart, can I upgrade v1.7.10 directly to v1.10.3? That way I can avoid the two time-consuming upgrades to v1.8 and v1.9 at least. Are there any constraints preventing me from doing this?</p>
<p>Any suggestion will be appreciated.</p>
| <p>After some testing and research, I come to some conclusion:</p>
<ol>
<li><p>Draining the node is not a must. For one thing, drain can't evict daemonsets. But draining a node is the recommended way to upgrade Kubernetes, since it minimizes the impact on applications that are deployed via a Deployment (a typical drain/upgrade/uncordon sequence is sketched after this list).</p>
<p>Also, sometimes draining a node is necessary. For instance, from Kubernetes v1.10 the log files of all pods have changed from /var/log/pods/pod-id/container_id.log to /var/log/pods/pod-id/container/id.log, so when upgrading to v1.10 all pods have to restart to use the new log file, otherwise you can't access their logs through the 'kubectl logs' command. In this case draining the node is helpful, and those pods that can't be evicted, like daemonsets, have to be restarted manually.</p></li>
<li><p>Skipping minor versions when upgrading is not supported, especially in an HA setup with multiple masters, which is also reflected in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster" rel="nofollow noreferrer">GKE's supported upgrades policy</a>. Skipping minor versions also carries some risk.</p>
<p>Again, take the upgrade to v1.10 as an example. In 1.10, objects in the apps API group began persisting in etcd in the apps/v1 format, which v1.9 can handle very well but v1.8 cannot. So, when you upgrade Kubernetes from v1.8 to 1.10 in an HA setup, the period where some masters have been upgraded and some haven't brings weird problems, like deployments/daemonsets not being handled properly; for more info refer to <a href="https://stackoverflow.com/questions/50595823/kubernetes-v1-8-12-cant-list-extensions-deployment">my other question</a>. Such upgrades should be avoided as much as possible.</p></li>
</ol>
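<p>A typical per-node sequence looks roughly like this (the flags are from the kubectl versions of that era; how you upgrade the kubelet package depends on your installation method):</p>
<pre><code># move workloads off the node (DaemonSet pods stay, as noted above)
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
# upgrade kubelet/kubeadm on the node and restart the kubelet, then:
kubectl uncordon <node-name>
</code></pre>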
|
<p>I'm using kubectl from within a docker container running on a Mac. I've already successfully configured the bash completion for kubectl to work on the Mac, however, it doesn't work within the docker container. I always get <code>bash: _get_comp_words_by_ref: command not found</code>. </p>
<p>The docker image is based on <code>ubuntu:16.04</code> and kubectl is installed via the line (snippet from the dockerfile)</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
mv kubectl /usr/local/bin
</code></pre>
<p><code>echo $BASH_VERSION</code> gives me <code>4.3.48(1)-release</code>, and according to apt, the <code>bash-completion</code>package is installed. </p>
<p>I'm using iTerm2 as terminal.</p>
<p>Any idea why it doesn't work or how to get it to work?</p>
| <p>Ok, I found it - I simply needed to do a <code>source /etc/bash_completion</code> before or after the <code>source <(kubectl completion bash)</code>.</p>
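<p>To bake this into the image itself, one option is to append both lines to root's <code>.bashrc</code> in the Dockerfile (this assumes the <code>bash-completion</code> package is already installed, as it is here):</p>
<pre><code>RUN echo 'source /etc/bash_completion' >> /root/.bashrc && \
    echo 'source <(kubectl completion bash)' >> /root/.bashrc
</code></pre>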
|
<p>When running docker-compose, all containers get network aliases that resolve to the IPs of the other containers in the network.
How are these aliases created?</p>
<p>I need to recreate such an alias in an nginx container within a Kubernetes cluster, because nginx conf doesn't allow environment variables and I reverse proxy requests to another container from there.
I would normally edit the /etc/hosts file, but since docker-compose doesn't, I'm wondering how the alias is created and whether I could do it the same way within my Kubernetes cluster.</p>
| <p>In a kubernetes cluster, there is a DNS server service deployed by default.</p>
<p><code>kubectl get svc</code> should show the DNS service IP address.</p>
<p>You can find the DNS server IP address inside the containers <code>/etc/resolv.conf</code> file too.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">here</a> for more info.</p>
<p>You can specify additional entries in container's <code>/etc/hosts</code> file using HostAliases. See <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">here</a>.</p>
<p>Snip:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
</code></pre>
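<p>For the nginx reverse-proxy case specifically, you usually don't need host aliases at all: every Service gets a predictable DNS name of the form <code><service>.<namespace>.svc.cluster.local</code>, which you can hard-code in <code>nginx.conf</code>. A minimal sketch, assuming a Service named <code>backend</code> in the <code>default</code> namespace listening on port 8080:</p>
<pre><code>server {
    listen 80;
    location / {
        # resolved by the cluster DNS service
        proxy_pass http://backend.default.svc.cluster.local:8080;
    }
}
</code></pre>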
|
<p>I had a working kops cluster. I deleted some unneeded <code>ig</code>s and updated the cluster. Now kubectl won't connect to the cluster. I get the following error: <code>Unable to connect to the server: dial tcp {ip} i/o timeout</code>.</p>
<p>How do I go about debugging the issue?</p>
| <p>As a first step I would try to run it again with a higher log level; the logs are really good.
Note you probably want to redirect them to a file ...</p>
<pre><code>kops <whatever> -v 10 &> log.txt
</code></pre>
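<p>Two other things worth trying (the cluster name and state store below are placeholders):</p>
<pre><code># check what kops thinks of the cluster, again with verbose logging
kops validate cluster --name my-cluster.example.com --state s3://my-kops-state -v 10
# regenerate the kubeconfig entry in case the API endpoint or certs changed
kops export kubecfg --name my-cluster.example.com --state s3://my-kops-state
</code></pre>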
|
<p>The following link describes how to add secret (encrypted) data: <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/</a></p>
<p>How do I get those key-value pairs from the Java client?</p>
| <p>You can use the official Java client for Kubernetes’ REST API and read the secret as defined in this <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CoreV1Api.md#readNamespacedSecret" rel="noreferrer">doc</a>. You will get a result of return type <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/V1Secret.md" rel="noreferrer">V1Secret</a>. </p>
<p><code>V1Secret result = apiInstance.readNamespacedSecret(name, namespace, pretty, exact, export)</code>;</p>
<p>This object <code>result</code> has a property <code>data</code> of type <code>Map<String, byte[]></code> to get the key-value pairs from (the byte arrays are the base64-decoded secret values).</p>
|
<p>I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine. </p>
<p>The way to link them is via labelSelector.
I am creating the objects with this definition:</p>
<p>volume.yml</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-test
namespace: mynamespace
labels:
pv-owner: owner
pv-usage: pv-test
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-test
namespace: mynamespace
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
pv-usage: pv-test
</code></pre>
<p>and running:</p>
<pre><code>kubectl apply -f volume.yml
</code></pre>
<p>Both objects are successfully created, however, VolumeClaim apparently keeps pending forever awaiting a Volume that matches its requirements.</p>
<p>Could you please help me?</p>
<p>Thanks!</p>
| <p>First of all, <code>PersistentVolume</code> resources don’t belong to any namespace. They’re cluster-level resources like nodes, but <code>PersistentVolumeClaim</code> objects can only be created in a specific namespace.</p>
<p>Seems like when you created the claim earlier, it was immediately bound to the <code>PersistentVolume</code>. Can you show output of the commands:</p>
<pre><code>$ kubectl get pv
$ kubectl get pvc
</code></pre>
<p>Highly likely your <code>persistentVolumeReclaimPolicy</code> was set to <code>Retain</code>, so your <code>PersistentVolume</code> is in <code>Released</code> status now. Since there is no other <code>PersistentVolume</code> resource that matches your claim's requirements, your <code>PersistentVolumeClaim</code> is in <code>Pending</code> status.</p>
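<p>If that is the case, one way to make a <code>Released</code> volume bindable again (assuming you no longer need the old claim reference) is to drop its <code>claimRef</code>:</p>
<pre><code>kubectl patch pv pv-test --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
</code></pre>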
|
<p>How do I deploy in Kubernetes a message consumer for Kafka, AMQP or any other message broker in a way that scales up and down? My hypothesis is that the consumer runs a loop which pulls messages.</p>
<p>I'd like Kubernetes to create more pods when many messages arrive in the broker queue and remove some pods when too few messages arrive in the queue.</p>
<p>Which component initiates the termination of the pods? The pod itself, because it can't fetch a message from the queue? Or Kubernetes, because the pod doesn't consume CPU?</p>
<p>If a pod exits whenever the queue is empty, I'm afraid pods will keep being born and dying for as long as the queue is empty.</p>
| <p>The Kubernetes Horizontal Pod Autoscaler has support for <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="noreferrer">custom and external metrics</a>. With more traditional messaging brokers like AMQP (1 queue / many competing consumers) you should be able to easily scale the consumer based on queue depth (such as <em>If queue depth is >= 10000 msg, scale up. If queue depth is <= 1000 msg scale down</em>). You could also do it based on the average client throughput (such as <em>if average throughput is >= 5000 msg/s, scale up</em>) or average latency. The Horizontal Pod Autoscaler would do the scale up and scale down for you. It will observe the metrics and decide when a pod should be shut down or started. The consumer application is not aware of this - it doesn't need any special support for this. But you will need to get these metrics and expose them so that Kubernetes can consume them, which is currently not completely trivial.</p>
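<p>Purely as an illustration - the metric name depends entirely on which metrics adapter you deploy (Prometheus adapter, a broker-specific exporter, etc.), so treat every name below as a placeholder - an HPA driven by an external queue-depth metric could look roughly like this:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: queue-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-consumer
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metricName: queue_messages_ready   # placeholder, exposed by your metrics adapter
      targetAverageValue: "1000"
</code></pre>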
<p>With Kafka, this will be a bit harder since Kafka implements competing consumers very differently from more traditional messaging brokers like AMQP. Kafka topics are split into partitions. And each partition can have only one consumer from a single consumer group. So whatever autoscaling you do, it will not be able to handle situations such as:</p>
<ul>
<li>Small number of partitions for given topic (you will never have more active consumers than the number of partitions) </li>
<li>Asymmetric partition load (some partitions being very busy while other are empty)</li>
</ul>
<p>Kafka also doesn't have anything like queue depth. But you can, for example, use the consumer lag (which shows how far the consumer is behind the producer for a given partition) to drive the scaling.</p>
|
<p>I've created some secrets and I'm trying to import the file in this way:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: {{ template "amq.broker.fullname" . }}-tls
labels:
app: {{ template "amq.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
component: "{{ .Values.broker.name }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: Opaque
data:
{{ (.Files.Glob "secrets/broker.ks").AsSecrets | indent 2 }}
</code></pre>
<p>The file is under the <code>secrets</code> dir. When I run the install, the <code>broker.ks</code> secret is not there. However the secret is under the <code>secrets</code> folder. Any idea?</p>
<p>Here the dir struct</p>
<pre><code>├── Chart.yaml
├── README.md
├── secrets
│ ├── broker.ks
│ ├── broker_cert
│ ├── client.ks
│ └── client.ts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
└── values.yaml
</code></pre>
| <p>The solution to this is, as per the <a href="https://helm.sh/docs/chart_template_guide/accessing_files/#configmap-and-secrets-utility-functions" rel="nofollow noreferrer">docs</a>, the following:</p>
<pre><code>{{- $root := . -}}
{{- range $path, $bytes := .Files.Glob "secrets/broker.ks" }}
{{ base $path }}: '{{ $root.Files.Get $path | b64enc }}'
{{- end }}
</code></pre>
<p>You can also pull all the files of a specific type in the folder with <code>.Files.Glob "secrets/*.ks"</code></p>
<p>Also make sure that the folder has not been added to <code>.helmignore</code> otherwise the files won't be accessible.</p>
|
<p>I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine. </p>
<p>The way to link them is via labelSelector.
I am creating the objects with this definition:</p>
<p>volume.yml</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-test
namespace: mynamespace
labels:
pv-owner: owner
pv-usage: pv-test
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-test
namespace: mynamespace
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
pv-usage: pv-test
</code></pre>
<p>and running:</p>
<pre><code>kubectl apply -f volume.yml
</code></pre>
<p>Both objects are successfully created, however, VolumeClaim apparently keeps pending forever awaiting a Volume that matches its requirements.</p>
<p>Could you please help me?</p>
<p>Thanks!</p>
| <p>Thanks for your help @konstantin-vustin</p>
<p>I found the solution. I had to specify <strong>storageClassName: manual</strong> attribute in the spec of both objects.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class</a></p>
<p>According to the doc</p>
<blockquote>
<p>A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.</p>
</blockquote>
<p>So IMO it should have worked before, so I am not sure if I clearly understood it.</p>
<p>This was the status before</p>
<pre><code>kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Available manual 26s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Pending standard 26s
</code></pre>
<p>The updated definitions</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-test
namespace: mynamespace
labels:
pv-owner: owner
pv-usage: pv-test
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-test
namespace: mynamespace
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
pv-usage: pv-test
</code></pre>
<p>This is the status after</p>
<pre><code>kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Bound openwhisk/pv-test manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Bound pv-test-vol 2Gi RWO manual 4s
</code></pre>
|
<p>I've got a username and password, how do I authenticate kubectl with them?</p>
<p>Which command do I run?</p>
<p>I've read through: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authorization/</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> though can not find any relevant information in there for this case.</p>
<hr>
<pre><code>kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
</code></pre>
<p><a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/" rel="nofollow noreferrer">https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/</a></p>
<hr>
<p>The above does not seem to work:</p>
<pre><code>kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
</code></pre>
| <p>Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.</p>
<p>The syntax you've listed appears right, assuming that the cluster supports basic authentication.</p>
<p>The error you're seeing is similar to the one <a href="https://stackoverflow.com/questions/49075723/what-does-unknown-user-client-mean">here</a> which may suggest that the cluster you're using doesn't currently support the authentication method you're using.</p>
<p>Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.</p>
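<p>For completeness, the credentials only take effect once a context references them and that context is in use; something along these lines (cluster and context names are placeholders):</p>
<pre><code>kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
kubectl config set-context my-context --cluster=my-cluster --user=cluster-admin
kubectl config use-context my-context
kubectl get pods
</code></pre>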
|
<p>Is there any configuration snapshot mechanism on kubernetes?</p>
<p>The goal is to take a snapshot of all deployments/services/config-maps etc and apply them to a kubernetes cluster.</p>
<p>The steps that should be taken.</p>
<ul>
<li>Take a configuration snapshot</li>
<li>Delete the cluster</li>
<li>Create a new cluster</li>
<li>Apply the configuration snapshot to the new cluster</li>
<li>New cluster works like the old one</li>
</ul>
| <p>These are the 3 that spring to mind, with <code>kubed</code> being, at least according to their readme, the closest to your stated goals:</p>
<ul>
<li><a href="https://github.com/heptio/ark#readme" rel="nofollow noreferrer">Ark</a></li>
<li><a href="https://github.com/appscode/kubed#readme" rel="nofollow noreferrer">kubed</a></li>
<li><a href="https://github.com/pieterlange/kube-backup#readme" rel="nofollow noreferrer">kube-backup</a></li>
</ul>
<p>I run Ark in my cluster, but (to my discredit) I have not yet attempted to do a D.R. drill using it; I only checked that it is, in fact, making config backups.</p>
|
<p>I am just trying to deploy kubernetes Dashboard in a namespace called "test".</p>
<p><a href="https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml</a></p>
<p>I just replaced namespace from kube-system to test from the above yaml file and executed as below.</p>
<pre><code>kubectl apply -f kubernetes-dashboard.yaml -n test
</code></pre>
<p>But, it is still trying to do something with namespace kube-system and getting the below error.</p>
<p><strong>Image:</strong></p>
<blockquote>
<p>gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3</p>
</blockquote>
<p><strong>Error:-</strong></p>
<pre><code>2018/05/31 16:56:55 Starting overwatch
2018/05/31 16:56:55 Using in-cluster config to connect to apiserver
2018/05/31 16:56:55 Using service account token for csrf signing
2018/05/31 16:56:55 No request provided. Skipping authorization
2018/05/31 16:56:55 Successful initial request to the apiserver, version: v1.10.2
2018/05/31 16:56:55 Generating JWE encryption key
2018/05/31 16:56:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/05/31 16:56:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:55 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: unexpected object: &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string][]byte{},Type:,StringData:map[string]string{},}
2018/05/31 16:56:57 Restarting synchronizer: kubernetes-dashboard-key-holder-kube-system.
2018/05/31 16:56:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/05/31 16:56:57 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
2018/05/31 16:56:59 Storing encryption key in a secret
panic: secrets is forbidden: User "system:serviceaccount:test:dashboard" cannot create secrets in the namespace "kube-system"
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc420254e00)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a7ee00, 0xc42037a5a0, 0xc42037a5a0, 0x127b962)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x83
main.initAuthManager(0x1a7e300, 0xc4201e2240, 0xc42066dc68, 0x1)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:183 +0x12f
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:101 +0x28c
</code></pre>
<p>I created the Secret, RoleBinding, ServiceAccount, Deployment, Service & Ingress in the namespace "test". I removed the namespace from the yaml file and supplied it through -n "test" while creating.</p>
| <p>That happened because you created the ServiceAccount in a different namespace, namely <code>test</code>, but as it says, it needs to be deployed in <code>kube-system</code> in order to be able to function.</p>
<p>You can find a nice walkthrough and possibly some clarifications <a href="https://github.com/kubernetes/dashboard/wiki/Installation#recommended-setup" rel="nofollow noreferrer">here</a> </p>
<p>However, if you still want to deploy on a different namespace, you would have to add the following role and rolebinding to your cluster:</p>
<pre><code># ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: test
---
</code></pre>
<p>I am afraid there is no way around it: you have to allow the service account to create secrets in the kube-system namespace.</p>
|
<p>I have created a <strong>Dockerfile</strong> (for a Node <strong>JNLP</strong> slave which can be used with the <strong>Kubernetes Plugin of Jenkins</strong> ). I am extending from from the official image <code>jenkinsci/jnlp-slave</code></p>
<pre><code>FROM jenkinsci/jnlp-slave
USER root
MAINTAINER Aryak Sengupta <[email protected]>
LABEL Description="Image for NodeJS slave"
COPY cert.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install -y nodejs
ENTRYPOINT ["jenkins-slave"]
</code></pre>
<p>I have this image saved inside my Pod template (in K8s plugin configuration). Now, when I'm trying to run a build on this <strong>slave</strong>, I find that two containers are getting spawned up inside the Pod (A screenshot to prove the same.).</p>
<p><a href="https://i.stack.imgur.com/kkbMH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kkbMH.png" alt="enter image description here"></a> </p>
<p>My Pod template looks like this:</p>
<p><a href="https://i.stack.imgur.com/ZqT2u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZqT2u.png" alt="enter image description here"></a></p>
<p>And my Kubernetes configuration looks like this:
<a href="https://i.stack.imgur.com/RHaRH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RHaRH.png" alt="enter image description here"></a></p>
<p>Now if I do a simple <code>docker ps</code>, I find that there are two containers which started up (Why?):</p>
<p><a href="https://i.stack.imgur.com/x2oYp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x2oYp.png" alt="enter image description here"></a></p>
<p>Now, inside the Jenkins Job configuration of Jenkins, whatever I add in the build step, the steps get executed in the first container . </p>
<p>Even if I use the official <code>Node</code> container inside my <code>PodTemplate</code>, the result is still the same:</p>
<p><a href="https://i.stack.imgur.com/w9lYP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9lYP.png" alt="enter image description here"></a></p>
<p>I have tried to print the Node version inside my Jenkins Job, and the output is <strong>"Node not found"</strong> . Also, to verify my haunch, I have done a <code>docker exec</code> into my second container and tried to print the <code>Node</code> version. In this case, it works absolutely fine. </p>
<p>This is what my build step looks like:</p>
<p><a href="https://i.stack.imgur.com/Chprj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Chprj.png" alt="enter image description here"></a></p>
<p>So, to boil it down, I have two major questions:</p>
<ol>
<li>Why does <strong>two</strong> separate (one for JNLP and one with all custom changes) containers start up whenever I fire up the Jenkins Job? </li>
<li>Why is my job running on the first container where Node isn't installed? How do I achieve the desired behaviour of building my project with <code>Node</code> using this configuration?</li>
</ol>
<p>What am I missing? </p>
<p>P.S. - Please do let me know if the question turns out to be unclear in some parts. </p>
<p>Edit: I understand that this can be done using the <code>Pipeline</code> Jenkins plugin where I can explicitly mention the <code>container</code> name, but I need to do this from the Jenkins UI. Is there any way to specify the container name along with the slave name which I am already doing like this:</p>
<p><a href="https://i.stack.imgur.com/XvMBn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XvMBn.png" alt="enter image description here"></a></p>
| <p>The Jenkins kubernetes plugin will always create a JNLP slave container inside the pod that is created to perform the build. The podTemplate is where you define the other containers you need in order to perform your build.</p>
<p>In this case it seems you would want to add a Node container to your podTemplate. In your build you would then have the build happen inside the named Node container.</p>
<p>You shouldn't really care where the Pod runs. All you need to do is make sure you add a container that has the resources you need (like Node in this case). You can add as many containers as you want to a podTemplate. I have some with 10 or more containers for steps like PMD, Maven, curl, etc.</p>
<p>I use a Jenkinsfile with pipelines. </p>
<pre><code>podTemplate(cloud: 'k8s-houston', label: 'api-hire-build',
containers: [
containerTemplate(name: 'maven', image: 'maven:3-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'pmd', image: 'stash.company.com:8443/pmd:pmd-bin-5.5.4', alwaysPullImage: false, ttyEnabled: true, command: 'cat')
],
volumes: [
persistentVolumeClaim(claimName: 'jenkins-pv-claim', mountPath: '/mvn/.m2nrepo')
]
)
{
node('api-hire-build') {
stage('Maven compile') {
container('maven') {
sh "mvn -Dmaven.repo.local=/mvn/.m2nrepo/repository clean compile"
}
}
stage('PMD SCA (docker)') {
container('pmd') {
sh 'run.sh pmd -d "$PWD"/src -f xml -reportfile "$PWD"/target/pmd.xml -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
sh 'run.sh pmd -d "$PWD"/src -f html -reportfile "$PWD"/target/pmdreport.html -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
sh 'run.sh cpd --files "$PWD"/src --minimum-tokens 100 --failOnViolation false --language java --format xml > "$PWD"/target/duplicate-code.xml'
}
archive 'target/duplicate-code.xml'
step([$class: 'PmdPublisher', pattern: 'target/pmd.xml'])
}
}
}
</code></pre>
|
<p>Is there any configuration snapshot mechanism on kubernetes?</p>
<p>The goal is to take a snapshot of all deployments/services/config-maps etc and apply them to a kubernetes cluster.</p>
<p>The steps that should be taken.</p>
<ul>
<li>Take a configuration snapshot</li>
<li>Delete the cluster</li>
<li>Create a new cluster</li>
<li>Apply the configuration snapshot to the new cluster</li>
<li>New cluster works like the old one</li>
</ul>
| <p>The state of Kubernetes is stored in etcd, so backing up the etcd data and restoring it would restore the cluster. However, this does not back up any information stored in persistent volumes; that needs to be handled separately.</p>
<p>The backup operator provided by CoreOS is a good option:</p>
<ul>
<li><a href="https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html" rel="nofollow noreferrer">https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html</a></li>
</ul>
<p>Taking backups with etcdctl:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/</a></li>
<li><a href="https://github.com/coreos/etcd/blob/master/etcdctl/README.md" rel="nofollow noreferrer">https://github.com/coreos/etcd/blob/master/etcdctl/README.md</a></li>
</ul>
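<p>For reference, a snapshot with the v3 API looks roughly like this (the certificate paths are the kubeadm defaults; adjust them to your setup):</p>
<pre><code>ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db
</code></pre>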
<p>Heptio Ark has the capability to back up configuration and also volumes:</p>
<ul>
<li><a href="https://github.com/heptio/ark" rel="nofollow noreferrer">https://github.com/heptio/ark</a></li>
</ul>
<p>If you want a UI-based option, these would be good:</p>
<ul>
<li><a href="https://github.com/kaptaind/kaptaind" rel="nofollow noreferrer">https://github.com/kaptaind/kaptaind</a></li>
<li><a href="https://github.com/mhausenblas/reshifter" rel="nofollow noreferrer">https://github.com/mhausenblas/reshifter</a></li>
</ul>
|
<p>I'm trying to run a container on Kubernetes; the container doesn't start and fails with this error:</p>
<pre><code>Error: failed to start container "tbsp-dev-container":
Error response from daemon: invalid header field value "oci runtime error:
container_linux.go:247: starting container process caused "process_linux.go:320: writing syncT run type caused
"write parent: broken pipe
</code></pre>
<p>Can you help me figure this out please?</p>
| <p>Check that you are base64 encoding all the secrets you are passing to the container.</p>
<p>Based on this <a href="https://github.com/kubernetes/kubernetes/issues/52481" rel="noreferrer">issue</a>, that might be what causes the error to trigger.</p>
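<p>For example, if the secret is defined in a YAML manifest, every value under <code>data:</code> has to be base64 encoded first (values and names below are just placeholders):</p>
<pre><code>$ echo -n 'admin' | base64
YWRtaW4=
</code></pre>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret
type: Opaque
data:
  username: YWRtaW4=
</code></pre>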
|
<p>I am trying using Kubernetes Java client for few use cases.</p>
<p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p>
<p>Our Kubernetes cluster is been implemented with OpenId authentication.</p>
<p>Unfortunately, the Java client doesn't support OpenId auth.</p>
<p><strong>Java code:</strong></p>
<pre><code>final ApiClient client = io.kubernetes.client.util.Config.defaultClient();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>13:25:22.549 [main] ERROR io.kubernetes.client.util.KubeConfig - Unknown auth provider: oidc
Exception in thread "main" io.kubernetes.client.ApiException: Forbidden
at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882)
at io.kubernetes.client.ApiClient.execute(ApiClient.java:798)
at io.kubernetes.client.apis.CoreV1Api.listPodForAllNamespacesWithHttpInfo(CoreV1Api.java:18462)
at io.kubernetes.client.apis.CoreV1Api.listPodForAllNamespaces(CoreV1Api.java:18440)
</code></pre>
<p>Is there any plan to support OpenId auth with the Java client. Or, is there any other way?</p>
| <p><a href="https://github.com/kubernetes-client/java/tree/client-java-parent-2.0.0-beta1/util/src/main/java/io/kubernetes/client/util/credentials" rel="nofollow noreferrer">Apparently not</a>, but by far the larger question is: what would you <em>expect</em> to happen with an <code>oidc</code> <code>auth-provider</code> in a Java setting? Just use the <code>id-token</code>? Be able to use the <code>refresh-token</code> and throw an exception if unable to reacquire an <code>id-token</code>? Some callback system for you to manage that lifecycle on your own?</p>
<p>Trying to do oidc from a <em>library</em> is fraught with peril, since it is almost certain that there is no "user" to interact with.</p>
<blockquote>
<p>Is there any plan to support OpenId auth with the Java client</p>
</blockquote>
<p>Only the project maintainers could answer that, and it is unlikely they know to prioritize that kind of work when there is no issue describing what you would expect to happen. Feel free to <a href="https://github.com/kubernetes-client/java/issues" rel="nofollow noreferrer">create one</a>.</p>
<blockquote>
<p>Or, is there any other way?</p>
</blockquote>
<p>In the meantime, you still have <a href="https://github.com/kubernetes-client/java/blob/client-java-parent-2.0.0-beta1/util/src/main/java/io/kubernetes/client/util/Config.java#L63" rel="nofollow noreferrer"><code>Config.fromToken()</code></a> where you can go fishing in your <code>.kube/config</code> and pull out the existing <code>id-token</code> then deal with any subsequent <code>ApiException</code> which requires using the <code>refresh-token</code>, because you will know more about what tradeoffs your client is willing to make.</p>
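<p>If you go that route, you can pull the current <code>id-token</code> out of your kubeconfig from the command line, e.g. (the user entry name is a placeholder):</p>
<pre><code>kubectl config view --raw \
  -o jsonpath='{.users[?(@.name=="my-oidc-user")].user.auth-provider.config.id-token}'
</code></pre>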
|
<p>I created a Docker image based on microsoft/dotnet-framework of a C#.NET console application built for Windows containers, then ensured I can run the image in a container locally. I successfully pushed the image to our Azure Container registry. Now I'm trying to create a deployment in our Azure Kubernetes service, but I'm getting an error: </p>
<blockquote>
<p>Failed to pull image "container-registry/image:tag": rpc error: code = Unknown desc = unknown blob</p>
</blockquote>
<p>I see this error on my deployment, pods, and replica sets in the Kubernetes dashboard.</p>
<p>We already have a secret that works with the azure-vote app, so I wouldn't think this is related to secrets, but I could be wrong.</p>
<p>So far, I've tried to create this deployment by pasting the following YAML into the Kubernetes dashboard Create dialog:</p>
<pre><code>apiVersion:
kind: Deployment
metadata:
name: somename
spec:
selector:
matchLabels:
app: somename
tier: backend
replicas: 2
template:
metadata:
labels:
app: somename
tier: backend
spec:
containers:
- name: somename
image: container-registry/image:tag
ports:
- containerPort: 9376
</code></pre>
<p>And I also tried running variations of this kubectl command:</p>
<pre><code>kubectl run deploymentname --image=container-registry/image:tag
</code></pre>
<p>In my investigation so far, I've tried reading about different parts of k8s to understand what may be going wrong, but it's all fairly new to me. I think it may have to do with this being a Windows Server 2016 based image. A team member successfully added the azure-vote tutorial code to our AKS, so I'm wondering if there is a restriction on a single AKS service running deployments for both Windows and Linux based containers. I see by running <code>az aks list</code> that the AKS has an agentPoolProfile with "osType": "Linux", but I don't know if that means simply that the orchestrator is in Linux or if the containers in the pods have to be Linux based. I have found stackoverflow questions about the "unknown blob" error, and it seems <a href="https://stackoverflow.com/questions/45138558/pull-image-from-and-connect-to-the-acs-engine-kubernetes-cluster/45141031#45141031">the answer to this question</a> might support my hypothesis, but I can't tell if that question is related to my questions.</p>
<p>Since the error has to do with failing to pull an image, I don't think this has to do with configuring a service for this deployment. Adding a service didn't change anything. I've tried rebuilding my app under the suspicion that the image was corrupted, but rebuilding and re-registering had no effect. Another thing that doesn't seem relevant that I read about is <a href="https://stackoverflow.com/questions/48765821/unable-to-pull-public-images-with-kubernetes-using-kubectl">this question and answer</a> regarding a manifest mismatch (which I don't completely understand yet).</p>
<p>I have not tried creating a local Kubernetes. I don't know if that's something folks typically do. </p>
<p>Summary of questions:</p>
<ol>
<li>What causes this unknown blob error? Does it have to do with a Windows container/Linux container mismatch?</li>
<li>Does the agent pool profile affect all the nodes in the cluster, or just the "master" nodes? </li>
</ol>
<p>Let me know if you need more information. Thanks.</p>
| <p><strong>1. What causes this unknown blob error? Does it have to do with a Windows container/Linux container mismatch?</strong>
It's because you're trying to run a Windows-based Docker container on a Linux host. It has nothing directly to do with Kubernetes or AKS. Currently AKS is in preview and supports only Linux environments. To be more precise, when you provision your AKS cluster (<code>az aks create</code>), all your k8s minions (worker nodes) will be Linux boxes and thus will not be able to run Windows-based containers.</p>
<p><strong>2. Does the agent pool profile affect all the nodes in the cluster, or just the "master" nodes?</strong>
It affects the worker nodes and is used to group them together logically so you can better manage workload distribution. In the future, when AKS supports both Linux and Windows, you will be able to, for example, create agent pools based on OS type and instruct k8s to deploy your Windows-based services only to the Windows-based hosts (agents).</p>
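<p>As a quick way to confirm this yourself, you can list the nodes and their OS label (a hedged sketch; on clusters of this vintage the label key is typically <code>beta.kubernetes.io/os</code>):</p>
<pre><code>kubectl get nodes -o wide
kubectl get nodes -L beta.kubernetes.io/os
</code></pre>
<p>If every node reports <code>linux</code>, a Windows-based image cannot be scheduled anywhere in the cluster.</p>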
|
<p>I have a folder on my server that I want to update inside a container in a pod. How do I do that without deleting the Secret?</p>
<p>I know that the following creates my secret:
<code>
oc create secret generic privkey --from-file=/opt/privkey
</code></p>
<p>I know that the following deletes the secret:</p>
<p><code>
oc delete secret generic privkey
</code></p>
<p>Is there a way to do that without deleting?
I am fine if I have to delete the pod.</p>
| <p>try this:</p>
<pre><code>oc create secret generic privkey --from-file=/opt/privkey --dry-run -o yaml | oc replace -f -
</code></pre>
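<p>One hedged follow-up: pods only read the secret when they start (or when the volume is first mounted), so an existing pod will not automatically pick up the new key if it consumes the secret as an environment variable or reads it once at startup. Since deleting the pod is acceptable to you, the simplest approach is to delete it and let its controller recreate it (the pod name below is a placeholder):</p>
<pre><code>oc delete pod <your-pod-name>
</code></pre>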
|
<p>We're using a >1.8 version of k8s on gcloud. Unfortunately EventStore stops pushing data until it is rebooted. Thus we'd like to run <code>kubectl --namespace=$NAMESPACE delete pod eventstore-0</code> every 6 hours. Thus we have a cron job like:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: eventstore-restart
spec:
# Run every full hour, 15 past, 30 past, 45 past every other time-unit.
schedule: "0,15,30,45 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 5
jobTemplate:
spec:
template:
spec:
containers:
- name: eventstore-restart
image: eu.gcr.io/$PROJECT_ID/kubectl:latest
imagePullPolicy: Always
command: [ "/bin/sh", "-c" ]
args:
- 'set -x; kubectl --namespace=$NAMESPACE get pods
| grep -ho "eventstore-\d+"
| xargs -n 1 -I {} kubectl --namespace=$NAMESPACE delete pod {}'
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccount: restart-eventstore
</code></pre>
<p>However, this seems to expand to <code>kubectl get pods ...</code>, piped with <code>| { ... }</code>, which causes <code>"/bin/sh: syntax error: unexpected end of file (expecting "}")</code> to fail the script.</p>
<p><strong>How do I write the command to delete a pod on a schedule</strong>?</p>
| <p>I would do this:</p>
<pre><code>kubectl delete po $(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep eventstore) -n $NAMESPACE
</code></pre>
<p>or (your way)</p>
<pre><code>kubectl get pods -n $NAMESPACE -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep eventstore | xargs -n 1 -I {} kubectl delete po {}
</code></pre>
<p>Now, if you know you want to delete pod "eventstore-0", why not simply run <code>kubectl delete pod eventstore-0</code> directly?</p>
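<p>If you go that route, a hedged sketch of the CronJob's container command becomes much simpler and avoids the shell-quoting problem entirely (assuming the pod keeps the stable StatefulSet name <code>eventstore-0</code>):</p>
<pre><code>command: [ "/bin/sh", "-c" ]
args:
- kubectl --namespace=$NAMESPACE delete pod eventstore-0
</code></pre>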
|
<p>Given:</p>
<ul>
<li>a simple pod running an nginx </li>
<li>a nodeport service</li>
<li>an ingress</li>
</ul>
<p>When calling the <em>pod</em> from within the cluster we get a 200 response code</p>
<p>When calling the <em>service</em> from within the cluster we get a 200 response code</p>
<p>The ingress shows as annotation:</p>
<p><code>ingress.kubernetes.io/backends: '{"k8s-be-30606--559b9972f521fd4f":"UNHEALTHY"}'</code></p>
<p>To top things off, we have a different kubernetes cluster with the exact same configuration (apart from the namespace dev vs qa & timestamps & assigned ips & ports) where everything is working properly.</p>
<p>We've already tried removing the ingress, deleting pods, upscaling pods, explicitly defining the readiness probe, all without any change in the result.</p>
<p>Judging from the above, it's the health check on the pod that's failing for some reason, even though performing it manually (curl to a node's internal IP plus the service's node port, from within the cluster) returns 200, and in qa it works fine with the same container image.</p>
<p>Is there any log available in Stackdriver Logging (or elsewhere) where we can see what exact request is being done by that health check and what the exact response code is? (or if it timed out for some reason?)</p>
<p>Is there any way to get more view on what's happening in the google processes?</p>
<p>We use the default gke ingress controller. </p>
<p>Some additional info:
When comparing with an entirely different application, I see tons of requests like these:</p>
<pre><code>10.129.128.10 - - [31/May/2018:11:06:51 +0000] "GET / HTTP/1.1" 200 1049 "-" "GoogleHC/1.0"
10.129.128.8 - - [31/May/2018:11:06:51 +0000] "GET / HTTP/1.1" 200 1049 "-" "GoogleHC/1.0"
10.129.128.12 - - [31/May/2018:11:06:51 +0000] "GET / HTTP/1.1" 200 1049 "-" "GoogleHC/1.0"
10.129.128.10 - - [31/May/2018:11:06:51 +0000] "GET / HTTP/1.1" 200 1049 "-" "GoogleHC/1.0"
</code></pre>
<p>Which I assume are the health checks. I don't see any similar logs for the failing application nor for the working version in qa. So I imagine the health checks are ending up somewhere entirely different & by chance in qa it's something that also returns 200. So question remains: where can I see the actual requests performed by a health check?</p>
<p>Also for this particular application I see about 8 health checks <em>per second</em> for that single pod which seems to be a bit much to me (the configured interval is 60 seconds). Is it possible health checks for other applications are ending up in this one?</p>
| <p>GKE manages a firewall rule. For some reason, new (node) ports used by ingresses aren't automatically added to this rule anymore. After adding the new ports <strong>manually</strong> to this rule in the console, the backend service became healthy.</p>
<p>Still need to find out:</p>
<ul>
<li>why is the port not added automatically anymore?</li>
<li>why don't I see the health checks in the access log?</li>
</ul>
<p>In any case I hope this can help someone else since we wasted a huge amount of time finding this out.</p>
<p><strong>edit</strong>:</p>
<p>The error turned out to be an invalid certificate used by tls termination by an unrelated (except that it's managed by the same controller) ingress. Once that was fixed, the rule was updated automatically again.</p>
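<p>For anyone who does need to add the port by hand, a hedged sketch of doing it from the CLI instead of the console (the rule name and port are placeholders - use the NodePort your ingress backend reports; note that <code>--allow</code> replaces the existing allow list, so include the ports already present):</p>
<pre><code>gcloud compute firewall-rules list --filter="name~gke"
gcloud compute firewall-rules update <rule-name> --allow tcp:30606
</code></pre>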
|
<p>I'm trying to resize a kubernetes cluster to zero nodes using</p>
<pre><code>gcloud container clusters resize $CLUSTER_NAME --size=0 --zone $ZONE
</code></pre>
<p>I get a success message but the size of the node-pool remains the same (I use only one node pool)</p>
<p>Is it possible to resize the cluster to zero?</p>
| <p>Sometimes you just need to wait 10-20 minutes before the autoscale operation takes effect.<br>
In other cases, you may need to check whether the conditions for scaling a node down are met.</p>
<p>According to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">autoscaler documentation</a>:</p>
<blockquote>
<p>Cluster autoscaler also measures the usage of each node against the node pool's total demand for capacity. If a node has had no new Pods scheduled on it for a set period of time, and all Pods running on that node can be scheduled onto other nodes in the pool, the autoscaler moves the Pods and deletes the node.</p>
<p>Note that cluster autoscaler works based on Pod resource requests, that is, how many resources your Pods have requested. Cluster autoscaler does not take into account the resources your Pods are actively using. Essentially, cluster autoscaler trusts that the Pod resource requests you've provided are accurate and schedules Pods on nodes based on that assumption.</p>
<p><strong>Note:</strong> Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads. However, while a node pool can scale to a zero size, the overall cluster size does not scale down to zero nodes (as at least one node is always required to run system Pods)</p>
<p>Cluster autoscaler has the following limitations:</p>
<ul>
<li>When scaling down, cluster autoscaler supports a graceful termination period for a Pod of up to 10 minutes. A Pod is always killed after a maximum of 10 minutes, even if the Pod is configured with a higher grace period.</li>
</ul>
<p><strong>Note:</strong> Every change you make to the cluster autoscaler causes the Kubernetes master to restart, which takes several minutes to complete.</p>
</blockquote>
<p>However, there are cases mentioned in <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">FAQ</a> that can prevent CA from removing a node:</p>
<blockquote>
<h3>What types of pods can prevent CA from removing a node?</h3>
<ul>
<li>Pods with restrictive PodDisruptionBudget.</li>
<li>Kube-system pods that:
<ul>
<li>are not run on the node by default, *</li>
<li>don't have PDB or their PDB is too restrictive (since CA 0.6).</li>
</ul></li>
<li>Pods that are not backed by a controller object (so not created by deployment, replica set, job, stateful set etc). *</li>
<li>Pods with local storage. *</li>
<li>Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, etc). *</li>
</ul>
<p>* Unless the pod has the following annotation (supported in CA 1.0.3 or later):</p>
<p><code>"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"</code></p>
<h3>How can I scale my cluster to just 1 node?</h3>
<p>Prior to version 0.6, Cluster Autoscaler was not touching nodes that were running important kube-system pods like DNS, Heapster, Dashboard etc. If these pods landed on different nodes, CA could not scale the cluster down and the user could end up with a completely empty 3 node cluster. In 0.6, we added an option to tell CA that some system pods can be moved around. If the user configures a <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">PodDisruptionBudget</a> for the kube-system pod, then the default strategy of not touching the node running this pod is overridden with PDB settings. So, to enable kube-system pods migration, one should set <a href="https://kubernetes.io/docs/api-reference/v1.7/#poddisruptionbudgetspec-v1beta1-policy" rel="nofollow noreferrer">minAvailable</a> to 0 (or <= N if there are N+1 pod replicas.) See also <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#i-have-a-couple-of-nodes-with-low-utilization-but-they-are-not-scaled-down-why" rel="nofollow noreferrer">I have a couple of nodes with low utilization, but they are not scaled down. Why?</a></p>
<h3>How can I scale a node group to 0?</h3>
<p>From CA 0.6 for GCE/GKE and CA 0.6.1 for AWS, it is possible to scale a node group to 0 (and obviously from 0), assuming that all scale-down conditions are met.</p>
<p>For AWS, if you are using nodeSelector, you need to tag the ASG with a node-template key "k8s.io/cluster-autoscaler/node-template/label/".</p>
<p>For example, for a node label of foo=bar, you would tag the ASG with:</p>
<p><code>{
"ResourceType": "auto-scaling-group",
"ResourceId": "foo.example.com",
"PropagateAtLaunch": true,
"Value": "bar",
"Key": "k8s.io/cluster-autoscaler/node-template/label/foo"
}</code></p>
</blockquote>
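<p>As a hedged example for GKE (the flag values are placeholders), enabling the autoscaler with a minimum of zero nodes on a pool looks like this; the pool can then shrink to zero once the scale-down conditions above are satisfied:</p>
<pre><code>gcloud container clusters update $CLUSTER_NAME \
  --enable-autoscaling --min-nodes 0 --max-nodes 3 \
  --node-pool default-pool --zone $ZONE
</code></pre>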
|
<p>I'm working on kube-proxy development and I'm at the stage of understanding the purpose and execution of kube-proxy.</p>
<p>I know that kube-proxy will add iptables rules to enable user to access the exposed pods (which is kubernetes service in iptables mode). </p>
<p>What makes me wonder is the fact that those rules are added on the host node where the kube-proxy pod is running, and it's not clear how this pod is able to obtain those privileges on the host node.</p>
<p>I have taken a look at the Kubernetes code without success in finding this specific part, so any idea, resource, or documentation that would help me figure this out would be appreciated.</p>
| <p><a href="https://github.com/philips/real-world-kubernetes/blob/master/k8s-setup/kube-proxy.yaml" rel="nofollow noreferrer">kube-proxy.yaml</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: gcr.io/google_containers/hyperkube:v1.0.6
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>According to <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies</a> document:</p>
<blockquote>
<p><strong>Privileged</strong> - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a “privileged” container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices.</p>
</blockquote>
<p>In other words, it gives the container or the pod (depending on a <a href="https://kubernetes-v1-4.github.io/docs/user-guide/security-context/" rel="nofollow noreferrer">context</a>) most of the root privileges. </p>
<p>There are many more options to control pods capabilities in the securityContext section:</p>
<ul>
<li>Privilege escalation</li>
<li>Linux Capabilities</li>
<li>SELinux</li>
<li>Volumes</li>
<li>Users and groups</li>
<li>Networking</li>
</ul>
<p>Consider reading the full <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">article</a> for details and code snippets.</p>
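<p>As an illustrative, hedged sketch (not the actual kube-proxy manifest): if a process only needs to manipulate iptables, granting the <code>NET_ADMIN</code> capability is often sufficient and is less permissive than full privileged mode:</p>
<pre><code>securityContext:
  capabilities:
    add: ["NET_ADMIN"]
</code></pre>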
|
<p>It seems a deployment has gotten stuck. How can I diagnose this further? </p>
<pre><code>kubectl rollout status deployment/wordpress
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
</code></pre>
<p>It's stuck on that for ages already. It is not terminating the two older pods: </p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-r6g6w 1/1 Running 0 2h
redis-679c597dd-67rgw 1/1 Running 0 2h
wordpress-64c944d9bd-dvnwh 4/4 Running 3 3h
wordpress-64c944d9bd-vmrdd 4/4 Running 3 3h
wordpress-f59c459fd-qkfrt 0/4 Pending 0 22m
wordpress-f59c459fd-w8c65 0/4 Pending 0 22m
</code></pre>
<p>And the events:</p>
<pre><code>kubectl get events --all-namespaces
NAMESPACE LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
default 25m 2h 333 wordpress-686ccd47b4-4pbfk.153408cdba627f50 Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the predicates: Insufficient cpu (1), Insufficient memory (2), MatchInterPodAffinity (1).
default 25m 2h 337 wordpress-686ccd47b4-vv9dk.153408cc8661c49d Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the predicates: Insufficient cpu (1), Insufficient memory (2), MatchInterPodAffinity (1).
default 22m 22m 1 wordpress-686ccd47b4.15340e5036ef7d1c ReplicaSet Normal SuccessfulDelete replicaset-controller Deleted pod: wordpress-686ccd47b4-4pbfk
default 22m 22m 1 wordpress-686ccd47b4.15340e5036f2fec1 ReplicaSet Normal SuccessfulDelete replicaset-controller Deleted pod: wordpress-686ccd47b4-vv9dk
default 2m 22m 72 wordpress-f59c459fd-qkfrt.15340e503bd4988c Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the predicates: Insufficient cpu (1), Insufficient memory (2), MatchInterPodAffinity (1).
default 2m 22m 72 wordpress-f59c459fd-w8c65.15340e50399a8a5a Pod Warning FailedScheduling default-scheduler No nodes are available that match all of the predicates: Insufficient cpu (1), Insufficient memory (2), MatchInterPodAffinity (1).
default 22m 22m 1 wordpress-f59c459fd.15340e5039d6c622 ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: wordpress-f59c459fd-w8c65
default 22m 22m 1 wordpress-f59c459fd.15340e503bf844db ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: wordpress-f59c459fd-qkfrt
default 3m 23h 177 wordpress.1533c22c7bf657bd Ingress Normal Service loadbalancer-controller no user specified default backend, using system default
default 22m 22m 1 wordpress.15340e50356eaa6a Deployment Normal ScalingReplicaSet deployment-controller Scaled down replica set wordpress-686ccd47b4 to 0
default 22m 22m 1 wordpress.15340e5037c04da6 Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set wordpress-f59c459fd to 2
</code></pre>
| <p>You can use <code>kubectl describe po wordpress-f59c459fd-qkfrt</code> for more detail, but from the events it is clear the pods cannot be scheduled on any of the nodes.</p>
<p>Provide more capacity, for example by adding a node, so the pods can be scheduled.</p>
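<p>A hedged way to confirm this before adding a node is to compare what is already allocated on each node against its capacity (the second command assumes heapster/metrics-server is installed):</p>
<pre><code>kubectl describe nodes | grep -A 5 "Allocated resources"
kubectl top nodes
</code></pre>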
|
<p>I'm trying to run Elasticsearch 6.2.4 on OpenShift, but it is not running and the container exits with code 137.</p>
<pre><code>[2018-06-01T14:24:58,148][INFO ][o.e.p.PluginsService ] [jge060C]
loaded module [ingest-common]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-expression]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-mustache]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [lang-painless]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [mapper-extras]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [parent-join]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [percolator]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [rank-eval]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [reindex]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [repository-url]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [transport-netty4]
[2018-06-01T14:24:58,149][INFO ][o.e.p.PluginsService ] [jge060C] loaded module [tribe]
[2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [ingest-geoip]
[2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [ingest-user-agent]
[2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-core]
[2018-06-01T14:24:58,150][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-deprecation]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-graph]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-logstash]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-ml]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-monitoring]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-security]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-upgrade]
[2018-06-01T14:24:58,151][INFO ][o.e.p.PluginsService ] [jge060C] loaded plugin [x-pack-watcher]
[2018-06-01T14:25:01,592][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/131] [Main.cc@128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
[2018-06-01T14:25:03,271][INFO ][o.e.d.DiscoveryModule ] [jge060C] using discovery type [zen]
[2018-06-01T14:25:04,305][INFO ][o.e.n.Node ] initialized
[2018-06-01T14:25:04,305][INFO ][o.e.n.Node ] [jge060C] starting ...
[2018-06-01T14:25:04,497][INFO ][o.e.t.TransportService ] [jge060C] publish_address {10.131.3.134:9300}, bound_addresses {[::]:9300}
[2018-06-01T14:25:04,520][INFO ][o.e.b.BootstrapChecks ] [jge060C] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-06-01T14:25:04,531][INFO ][o.e.n.Node ] [jge060C] stopping ...
[2018-06-01T14:25:04,623][INFO ][o.e.n.Node ] [jge060C] stopped
[2018-06-01T14:25:04,624][INFO ][o.e.n.Node ] [jge060C] closing ...
[2018-06-01T14:25:04,634][INFO ][o.e.n.Node ] [jge060C] closed
</code></pre>
<p>As you can see from the logs, the <code>vm.max_map_count</code> value has to be increased. Since it turns out to be a kernel parameter, how do I change it for the pod that is running ES?</p>
| <p>Kernel <em>command line</em> parameters can't be changed per pod, but <code>vm.max_map_count</code> is a parameter you can change via sysctl.</p>
<p>See these two similar SO questions for a solution:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/44439372/how-to-pass-sysctl-flags-to-docker-from-k8s">How to pass `sysctl` flags to docker from k8s?</a></li>
<li><a href="https://stackoverflow.com/questions/49961956/enabling-net-ipv4-ip-forward-for-a-container">Enabling net.ipv4.ip_forward for a container</a></li>
</ul>
<p>There is also a more general explanation in the official <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">Kubernetes documentation on sysctl</a></p>
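<p>For Elasticsearch specifically, a commonly used pattern is a privileged init container that raises the setting before the main container starts. This is only a hedged sketch - on OpenShift the pod's service account must also be allowed to run privileged containers (for example via the <code>privileged</code> SCC):</p>
<pre><code>spec:
  initContainers:
  - name: set-max-map-count
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
  containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    ...
</code></pre>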
|
<p>I'm pretty much new to Kubernetes and don't have hands-on experience with it.</p>
<p>My team is facing an issue regarding the log format pushed by <a href="/questions/tagged/kubernetes" class="post-tag" title="show questions tagged 'kubernetes'" rel="tag">kubernetes</a> to <a href="/questions/tagged/splunk" class="post-tag" title="show questions tagged 'splunk'" rel="tag">splunk</a>.</p>
<h3>Application is pushing log to stdout in this format</h3>
<pre><code>{"logname" : "app-log", "level" : "INFO"}
</code></pre>
<h3>Splunk eventually get this format (splunkforwarder is used)</h3>
<pre><code>{
"log" : "{\"logname\": \"app-log\", \"level\": \"INFO \"}",
"stream" : "stdout",
"time" : "2018-06-01T23:33:26.556356926Z"
}
</code></pre>
<p>This format kind of make things harder in Splunk to query based on properties.</p>
<p>Is there any options in <code>Kubernetes</code> to forward <code>raw logs</code> from app rather than grouping into another json ?</p>
<p>I came across <a href="https://answers.splunk.com/answers/592640/kubernetesdocker-json-logs.html" rel="nofollow noreferrer">this</a> post in Splunk, but the configuration is done on Splunk side </p>
<p>Please let me know if we have any option from <code>Kubernetes</code> side to send raw logs from application</p>
| <p>Kubernetes architecture provides three ways to gather logs:</p>
<p><strong>1. Use a node-level logging agent that runs on every node.</strong></p>
<p>You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.</p>
<p>The logs format depends on Docker settings. You need to set up <code>log-driver</code> parameter in <code>/etc/docker/daemon.json</code> on every node.</p>
<p>For example,</p>
<pre><code>{
"log-driver": "syslog"
}
</code></pre>
<p>or</p>
<pre><code>{
"log-driver": "json-file"
}
</code></pre>
<ul>
<li>none - no logs are available for the container and docker logs does not
return any output. </li>
<li>json-file - the logs are formatted as JSON. The
default logging driver for Docker. </li>
<li>syslog - writes logging messages to
the syslog facility.</li>
</ul>
<p>For more options, check the <a href="https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers" rel="nofollow noreferrer">link</a></p>
<p><strong>2. Include a dedicated sidecar container for logging in an application pod.</strong></p>
<p>You can use a sidecar container in one of the following ways:</p>
<ul>
<li>The sidecar container streams application logs to its own stdout. </li>
<li>The sidecar container runs a logging agent, which is configured to pick up logs from an application container.</li>
</ul>
<p>By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream.</p>
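<p>A minimal, hedged sketch of the sidecar layout (names and paths are assumptions): the application writes its raw JSON lines to a file on a shared <code>emptyDir</code>, and a sidecar container reads that file. If the sidecar is a real forwarder (for example a Splunk Universal Forwarder watching the file), the raw lines reach the backend without Docker's <code>{"log": ...}</code> wrapping; the busybox container below is only a stand-in to show the wiring:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-node-app:latest   # assumption: your application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-streamer
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>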
<p><strong>3. Push logs directly to a backend from within an application.</strong></p>
<p>You can implement cluster-level logging by exposing or pushing logs directly from every application.</p>
<p>For more information, you can check <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures" rel="nofollow noreferrer">official documentation</a> of Kubernetes</p>
|
<h1>My question is 'probably' specific to Azure.</h1>
<blockquote>
<p>How can I review the Kube-Proxy logs?</p>
</blockquote>
<p>After SSH'ing into an Azure AKS Node (done) I can use the following to view the Kubelet logs:</p>
<pre><code>journalctl -u kubelet -o cat
</code></pre>
<p>Azure docs on the Azure Kubelet logs can be found here:
<a href="https://learn.microsoft.com/en-us/azure/aks/kubelet-logs" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubelet-logs</a></p>
<p>I have reviewed the following Kubernetes resource regarding logs but Kube-Proxy logs on Azure do not appear in any of the suggested locations on the AKS node:
<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs</a></p>
<p>This is part of a troubleshooting effort related to a Kubernetes nginx Ingress temporarily returning a '504 Gateway Time-out' when a service has not been accessed / has gone idle for some period of time (perhaps 5 to 10 minutes) but then becomes accessible on the next attempt(s).</p>
| <p>On AKS, kube-proxy runs as a DaemonSet in the kube-system namespace.</p>
<p>You can list the kube-proxy pods + node information with:</p>
<pre><code>kubectl get pods -l component=kube-proxy -n kube-system -o wide
</code></pre>
<p>And then you can review the logs by running:</p>
<pre><code>kubectl logs kube-proxy-<suffix> -n kube-system
</code></pre>
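<p>A couple of hedged additions that can help when correlating with a specific node (the node name and pod suffix are placeholders):</p>
<pre><code>kubectl get pods -n kube-system -l component=kube-proxy \
  --field-selector spec.nodeName=<node-name>
kubectl logs -n kube-system kube-proxy-<suffix> --tail=100 -f
</code></pre>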
|
<p>I have a sample Node.js application which uses an envVar environment variable, and I have deployed it on a Kubernetes cluster. I am passing the env variable through a ConfigMap.</p>
<p>Once it is deployed and the pods are all running, suppose I change my ConfigMap with a new value. Do I need to redeploy my Node.js application after this?</p>
<p>configmap.yaml</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: app1-config
namespace: default
data:
envVal: '12345' # initial value
apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
</code></pre>
<p>after updating the configmap.yaml</p>
<p>configmap.yaml</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: app1-config
namespace: default
data:
envVal: '56789' # changed value
apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
</code></pre>
| <p>When you mount the keys from the ConfigMap as environment variables, you would need to restart your pod for the changes to take effect.</p>
<p>When you mount it as a volume into your pod, the files in the volume will be updated automatically. The update is not immediate; there is a TTL configured in the <code>kubelet</code> before it checks for changes and applies the update, but it is normally quite quick. However, it still depends on your application how it loads the data from the file - whether it is able to update itself <em>on the fly</em> when the files change or whether the data is loaded only once at startup.</p>
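<p>A minimal, hedged sketch of the volume approach using the same ConfigMap from the question (the container image and mount path are assumptions): each key becomes a file under the mount path, so <code>/etc/app1/envVal</code> would eventually reflect the new value without recreating the pod:</p>
<pre><code>spec:
  containers:
  - name: app1
    image: my-node-app:latest   # assumption
    volumeMounts:
    - name: app1-config
      mountPath: /etc/app1
  volumes:
  - name: app1-config
    configMap:
      name: app1-config
</code></pre>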
|
<p>I couldn't find any information on whether the connection created between a cluster's pod and localhost is encrypted when running the "kubectl port-forward" command.</p>
<p>It seems like it uses the "<a href="https://linux.die.net/man/1/socat" rel="noreferrer">socat</a>" library, which supports encryption, but I'm not sure whether Kubernetes actually uses that feature.</p>
| <p>As far as I know, when you port-forward the port of your choice to your machine, kubectl connects to one of the masters of your cluster, so yes, communication is normally encrypted. How your master communicates with the pod, though, depends on how you set up internal comms.</p>
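<p>A hedged way to see this for yourself: check that your cluster endpoint is an HTTPS URL, then run port-forward with verbose client logging and note that kubectl opens an HTTPS connection to the API server (the tunnel is then upgraded inside that TLS session). The pod name and ports below are placeholders:</p>
<pre><code>kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
kubectl port-forward pod/<pod-name> 8080:80 -v=6
</code></pre>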
|