prompt | response |
---|---|
<p>I successfully deployed a Kubernetes cluster of 3 nodes in Azure. I saw in the resource group that only one master was created (same as with kubeadm).</p>
<p><a href="https://i.stack.imgur.com/vhktg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vhktg.png" alt="enter image description here"></a></p>
<p>Does anyone know if we can have a cluster with more than one master, to ensure a highly available Kubernetes cluster?</p>
<p>Thank you.</p>
| <p>I don't believe that the ACS engine currently supports HA for the kubernetes master. <a href="https://github.com/Azure/acs-engine" rel="nofollow noreferrer">https://github.com/Azure/acs-engine</a></p>
<p>At this time, you will need to create a cluster manually using the HA documentation on the Kubernetes website.</p>
<p>I know that kops will be supporting Azure in its next major release: <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">https://github.com/kubernetes/kops</a>, though I'm not sure whether it will have HA support at the beginning.</p>
<p>Currently we are using our on-premises installer on pre-provisioned machines in Azure to get what we think is a production-ready cluster.</p>
|
<p>The fabric8 kubernetes Java & Scala client API is great for talking to kubernetes (or OpenShift), but its documentation is very sparse. What is a code example for adding resource requirements to containers running in a kubernetes pod?</p>
| <p>If you are using the fabric8 kubernetes-client API for Java and Scala, here is a snippet of code that demonstrates how to add resource requirements to a container running in a pod. The example is written in Scala, but the Java code would be very similar:</p>
<pre><code>// other fabric8 imports not included; just focusing on resource
// requirements logic in this example
import io.fabric8.kubernetes.api.model.PodBuilder
import io.fabric8.kubernetes.api.model.Quantity
import io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder

// Use a Java-style Map (as opposed to Scala's Map class)
val reqMap: java.util.Map[String, Quantity] =
  new java.util.HashMap[String, Quantity]()

// add CPU and memory requirements to the map
reqMap.put("cpu", new Quantity("1"))
reqMap.put("memory", new Quantity("1500Mi"))

// Build a ResourceRequirements object from the map
val reqs = new ResourceRequirementsBuilder()
  .withRequests(reqMap)
  .build()

// pass the ResourceRequirements object to the container spec
val pod = new PodBuilder()
  .withNewMetadata()
    .withName(podName)
  .endMetadata()
  .withNewSpec()
    .withRestartPolicy("OnFailure")
    .addNewContainer()
      .withName(containerName)
      .withImage(containerImage)
      .withImagePullPolicy("Always")
      .withResources(reqs)   // <-- resource reqs here
      .withCommand(commandName)
      .withArgs(commandArguments)
    .endContainer()
  .endSpec()
  .build()

// create the new pod with resource requirements via the
// fabric8 kube client:
client.pods().inNamespace(nameSpace).withName(podName).create(pod)
</code></pre>
|
<p>I'm seeing a weird issue on kubernetes and I'm not sure how to debug it. The k8s environment was installed by kube-up for vsphere using the 2016-01-08 kube.vmdk</p>
<p>The symptom is that DNS for a container in a pod is not working correctly. When I log on to the kube-dns service to check the settings, everything looks correct. When I ping outside the local network it works as it should, but when I ping inside my local network it cannot reach any of the hosts.</p>
<p>For the following my host network is 10.1.1.x, the gateway / dns server is 10.1.1.1.</p>
<h3>inside the kube-dns container:</h3>
<p>(I can ping outside the network by IP and I can ping the container's gateway, 10.244.2.1, just fine. DNS isn't working since the nameserver, 10.1.1.1, is unreachable.)</p>
<pre><code>kube@kubernetes-master:~$ kubectl --namespace=kube-system exec -ti kube-dns-v20-in2me -- /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.1.1.1
options ndots:5
/ # ping google.com
^C
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=13.542 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=13.862 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 13.542/13.702/13.862 ms
/ # ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1): 56 data bytes
^C
--- 10.1.1.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.244.2.1 0.0.0.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 eth0
/ # ping 10.244.2.1
PING 10.244.2.1 (10.244.2.1): 56 data bytes
64 bytes from 10.244.2.1: seq=0 ttl=64 time=0.249 ms
64 bytes from 10.244.2.1: seq=1 ttl=64 time=0.091 ms
^C
--- 10.244.2.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.091/0.170/0.249 ms
</code></pre>
<h3>on the master:</h3>
<pre><code>kube@kubernetes-master:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.246.0.0 * 255.255.255.0 U 0 0 0 cbr0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-master:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.409 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 10.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.409/0.445/0.481/0.036 ms
</code></pre>
<h3>version:</h3>
<pre><code>kube@kubernetes-master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:38:40Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.5", GitCommit:"5a0a696437ad35c133c0c8493f7e9d22b0f9b81b", GitTreeState:"clean", BuildDate:"2016-10-29T01:32:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h3>kubernetes-minion-2 (10.244.2.1):</h3>
<p>(Per @der's response adding info from 10.244.2.1)</p>
<pre><code>kube@kubernetes-minion-2:~$ ip addr show cbr0
5: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default
link/ether 8a:ef:b5:fc:28:f4 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.1/24 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::38b5:44ff:fe8a:6d79/64 scope link
valid_lft forever preferred_lft forever
kube@kubernetes-minion-2:~$ ping google.com
PING google.com (216.58.192.14) 56(84) bytes of data.
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=1 ttl=52 time=11.8 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=2 ttl=52 time=11.6 ms
64 bytes from nuq04s29-in-f14.1e100.net (216.58.192.14): icmp_seq=3 ttl=52 time=10.4 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 10.477/11.343/11.878/0.624 ms
kube@kubernetes-minion-2:~$ ping 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.442 ms
^C
--- 10.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.369/0.422/0.456/0.041 ms
kube@kubernetes-minion-2:~$ netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 10.1.1.1 0.0.0.0 UG 0 0 0 eth0
10.1.1.0 * 255.255.255.0 U 0 0 0 eth0
10.244.0.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.1.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 * 255.255.255.0 U 0 0 0 cbr0
10.244.3.0 kubernetes-mini 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
kube@kubernetes-minion-2:~$ routel
target gateway source proto scope dev tbl
default 10.1.1.1 eth0
10.1.1.0 24 10.1.1.86 kernel link eth0
10.244.0.0 24 10.1.1.88 eth0
10.244.1.0 24 10.1.1.87 eth0
10.244.2.0 24 10.244.2.1 kernel link cbr0
10.244.3.0 24 10.1.1.85 eth0
172.17.0.0 16 172.17.0.1 kernel linkdocker0
10.1.1.0 broadcast 10.1.1.86 kernel link eth0 local
10.1.1.86 local 10.1.1.86 kernel host eth0 local
10.1.1.255 broadcast 10.1.1.86 kernel link eth0 local
10.244.2.0 broadcast 10.244.2.1 kernel link cbr0 local
10.244.2.1 local 10.244.2.1 kernel host cbr0 local
10.244.2.255 broadcast 10.244.2.1 kernel link cbr0 local
127.0.0.0 broadcast 127.0.0.1 kernel link lo local
127.0.0.0 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.0 broadcast 172.17.0.1 kernel linkdocker0 local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
::1 local kernel lo
fe80:: 64 kernel eth0
fe80:: 64 kernel cbr0
fe80:: 64 kernel veth6129284
default unreachable kernel lo unspec
::1 local none lo local
fe80::250:56ff:fe8e:d580 local none lo local
fe80::38b5:44ff:fe8a:6d79 local none lo local
fe80::88ef:b5ff:fefc:28f4 local none lo local
ff00:: 8 eth0 local
ff00:: 8 cbr0 local
ff00:: 8 veth6129284 local
default unreachable kernel lo unspec
</code></pre>
<p>How can I diagnose what is going on here?</p>
<p>thanks!</p>
| <p>Turns out this is an issue with the default NAT routing rules on the minions.</p>
<pre><code>$ iptables -t nat -vnxL
...
...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
...
80 4896 MASQUERADE all -- * * 0.0.0.0/0 !10.0.0.0/8 /* kubelet: SNAT outbound cluster traffic */ ADDRTYPE match dst-type !LOCAL
...
...
</code></pre>
<p>This shows that traffic from the pods to anything in 10.0.0.0/8 (which includes the 10.1.1.x host network) is excluded from the MASQUERADE rule, so it leaves the node with the pod's source address and the replies never make it back.</p>
<p>If anyone runs across this, fix it with:</p>
<pre><code>$ iptables -t nat -I POSTROUTING 1 -s 10.244.0.0/16 -d 10.1.1.1/32 -j MASQUERADE
</code></pre>
<p>where 10.244.0.0/16 is the container network and 10.1.1.1 is the gateway IP.</p>
|
<p>I have a Kubernetes cluster that I setup with <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html" rel="noreferrer">kube-aws</a>. I'm trying to run a custom NGINX configuration which uses DNS resolutions to proxy_pass. Here is the NGINX block of code</p>
<pre><code>location /api/v1/lead {
resolver 10.3.0.10 ipv6=off;
set $container lead-api;
proxy_pass http://$container:3000;
}
</code></pre>
<p>10.3.0.10 comes from the cluster IP of the DNS service found in Kubernetes. I've also tried 127.0.0.11 which is what we use in the docker-compose/docker environments. </p>
<pre><code>$ kubectl describe --namespace=kube-system service kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.3.0.10
Port: dns 53/UDP
Endpoints: 10.2.26.61:53
Port: dns-tcp 53/TCP
Endpoints: 10.2.26.61:53
Session Affinity: None
</code></pre>
<p>This configuration works well on three different environments which use docker-compose. However I get the following error in the NGINX logs of the Kubernetes cluster</p>
<blockquote>
<p>[error] 9#9: *20 lead-api could not be resolved (2: Server failure), client: 10.2.26.0, server: , request: "GET /api/v1/lead/661DF757-722B-41BB-81BD-C7FD398BBC88 HTTP/1.1"</p>
</blockquote>
<p>If I run nslookup within the NGINX pod I can resolve the host with the same dns server:</p>
<pre><code>$ kubectl exec nginx-1855584872-kdiwh -- nslookup lead-api
Server: 10.3.0.10
Address: 10.3.0.10#53
Name: lead-api.default.svc.cluster.local
Address: 10.3.0.167
</code></pre>
<p>I don't know if it matters or not, but notice the "server" part of the error is empty. When I look at the pod logs for dnsmasq I don't see anything relevant. If I change the NGINX block to hardcode the proxy_pass then it resolves fine. However, I have other configurations that require dynamic proxy names. I could hard code every upstream this way, but I want to know how to make the DNS resolver work.</p>
<pre><code>location /api/v1/lead {
proxy_pass http://lead-api:3000;
}
</code></pre>
| <p>Resolving the name fails because you need to use the Fully Qualified Domain Name (FQDN). That is, you should use:</p>
<p><code>lead-api.<namespace>.svc.cluster.local</code></p>
<p>not just </p>
<p><code>lead-api</code></p>
<p>Using just the hostname usually works because in Kubernetes <code>resolv.conf</code> is configured with search domains, so you don't normally need to provide a service's FQDN, e.g.:</p>
<pre><code>search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.3.240.10
options ndots:5
</code></pre>
<p>However, specifying the FQDN is necessary when you tell nginx to use a custom resolver, because nginx's resolver does not apply these search domains.</p>
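<p>To make that concrete, here is a minimal sketch of the original <code>location</code> block rewritten with the FQDN (this assumes the <code>lead-api</code> service lives in the <code>default</code> namespace; adjust it to your namespace):</p>
<pre><code>location /api/v1/lead {
    # nginx's resolver ignores the search domains in resolv.conf,
    # so spell out the full service name
    resolver 10.3.0.10 ipv6=off;
    set $container lead-api.default.svc.cluster.local;
    proxy_pass http://$container:3000;
}
</code></pre>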
|
<p>In short I have two containers in a pod, named <code>a</code> and <code>b</code>, on ports 80 and 9000 respectively. They can reach each other through <code>127.0.0.1:80</code> and <code>127.0.0.1:9000</code>. Can they also reach each other by name?</p>
<p>I've tried <code>a.podname:90</code>, <code>b.podname:9000</code>, <code>podname:80</code> and other variations without luck.</p>
<p>There is <a href="http://kubernetes.io/docs/admin/dns/#a-records" rel="nofollow noreferrer">http://kubernetes.io/docs/admin/dns/#a-records</a> but those are based on the PODs assigned IP which isn't known beforehand.</p>
<p>There is also <a href="http://kubernetes.io/docs/admin/dns/#srv-records" rel="nofollow noreferrer">http://kubernetes.io/docs/admin/dns/#srv-records</a> but they depend on the service, and thus the readiness criteria. Also SRV is a DNS extension that can sometimes not be used.</p>
<p>Background: To keep containers compatible with both Kubernetes and other docker environments you must avoid dependencies to localhost, but can depend on fixed port numbers. For example if there is a name that resolves to 127.0.0.1 or the POD's IP in Kubernetes, that same name can be used as a <a href="https://docs.docker.com/compose/compose-file/#/links" rel="nofollow noreferrer">https://docs.docker.com/compose/compose-file/#/links</a> alias in docker-compose.</p>
<p>In particular I'm trying to solve <a href="https://github.com/Reposoft/docker-svn/issues/8" rel="nofollow noreferrer">https://github.com/Reposoft/docker-svn/issues/8</a></p>
| <p>If you're always going to run the two containers in a pod, you <em>can</em> rely on <code>localhost:xyz</code> to be accessible, no matter where that pod gets scheduled. </p>
<p>If you want to use DNS names, you can create <a href="http://kubernetes.io/docs/user-guide/services/#headless-services" rel="nofollow noreferrer">a headless service</a> for the SRV records. You can override the readiness behavior using an annotation: <a href="https://github.com/kubernetes/kubernetes/blob/10aee82ae3bf8ec8f8926e5198efceae1384e0b8/pkg/controller/endpoint/endpoints_controller.go#L58-L64" rel="nofollow noreferrer">service.alpha.kubernetes.io/tolerate-unready-endpoints</a>. That should give you a DNS name to reach your pod.</p>
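<p>As a rough sketch of what that headless service could look like (the label selector is an assumption; the port names <code>a</code> and <code>b</code> just mirror the containers in the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: podname
  annotations:
    # publish endpoints even before the pod reports ready
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None      # headless: DNS resolves straight to the pod IPs / SRV records
  selector:
    app: podname       # hypothetical label on your pod
  ports:
    - name: a
      port: 80
    - name: b
      port: 9000
</code></pre>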
|
<p>I'm using GCE with Kubernetes to host my rails app but the ingress reports the pod as UNHEALTHY. Below is my setup</p>
<p><strong>Ingress:</strong></p>
<pre><code>spec:
  backend:
    serviceName: my-service
    servicePort: 80
</code></pre>
<p><strong>Service:</strong></p>
<pre><code>spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
</code></pre>
<p><strong>Deployment</strong></p>
<pre><code>readinessProbe:
  httpGet:
    path: /health_check
    port: 3000
  initialDelaySeconds: 20
  timeoutSeconds: 5
</code></pre>
<p>The ingress reports the pod as UNHEALTHY and I don't see /health_check show up in the health checks list in the GCE console. It seems it's not picked up by the Google load balancer controller.</p>
<p>Thank you very much.</p>
| <p>It turned out the ingress didn't pick up the new path of the readiness probe that I changed earlier. The problem was solved after I recreated the ingress (which was definitely not an optimal solution). </p>
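<p>For anyone hitting the same thing, recreating the ingress amounts to something like the following (the resource and file names are just examples):</p>
<pre><code>kubectl delete ingress my-ingress
kubectl create -f my-ingress.yaml
</code></pre>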
|
<p>This question is discussed many times but I'd like to hear some best practices and real-world examples of using each of the approaches below:</p>
<ol>
<li><p>Designing containers which are able to check the health of dependent services. A simple script like <a href="https://github.com/vishnubob/wait-for-it" rel="nofollow noreferrer">wait-for-it</a> can be useful for developing this kind of container, but it isn't suitable for more complex deployments. For instance, the database could accept connections while migrations haven't been applied yet.</p></li>
<li><p>Make the container able to post its own status in Consul/etcd. All dependent services will poll a certain endpoint which contains the status of the needed service. Looks nice but seems redundant, doesn't it?</p></li>
<li><p>Manage startup order of containers by external scheduler. </p></li>
</ol>
<p>Which of the approaches above is preferable in the context of the absence/presence of orchestrators like Swarm/Kubernetes/etc. in the delivery process?</p>
| <p>I can take a stab at the kubernetes perspective on those.</p>
<blockquote>
<p>Designing containers which are able to check the health of dependent services. Simple script whait-for-it can be useful for this kind of developing containers, but aren't suitable for more complex deployments. For instance, database could accept connections but migrations aren't applied yet.</p>
</blockquote>
<p>This sounds like you want to differentiate between liveness and readiness. Kubernetes allows for <a href="http://kubernetes.io/docs/user-guide/production-pods/#liveness-and-readiness-probes-aka-health-checks" rel="nofollow noreferrer">both types of probes</a> for these, that you can use to check health and wait before serving any traffic. </p>
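<p>A minimal sketch of both probe types on a container (the image, paths, port and timings here are assumptions for illustration):</p>
<pre><code>containers:
  - name: api
    image: example/api:1.0
    livenessProbe:            # restart the container if this ever starts failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
    readinessProbe:           # keep the pod out of the service until this passes
      httpGet:
        path: /ready          # e.g. only return 200 once migrations have been applied
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 1
</code></pre>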
<blockquote>
<p>Make the container able to post its own status in Consul/etcd. All dependent services will poll a certain endpoint which contains the status of the needed service. Looks nice but seems redundant, doesn't it?</p>
</blockquote>
<p>I agree. Having to maintain state separately is not preferred. However, in cases where it is absolutely necessary, if you really want to store the state of a resource, it is possible to use a <a href="http://kubernetes.io/docs/user-guide/thirdpartyresources/" rel="nofollow noreferrer">third party resource</a>.</p>
<blockquote>
<p>Manage startup order of containers by external scheduler.</p>
</blockquote>
<p>This seems tangential to the discussion mostly. However, <a href="http://kubernetes.io/docs/user-guide/petset/" rel="nofollow noreferrer">Pet Sets</a>, soon to be replaced by Stateful Sets in Kubernetes v1.5, give you deterministic order of initialization of pods. For containers on a single pod, there are <a href="http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization" rel="nofollow noreferrer">init-containers</a> which run serially and in order prior to running the main container.</p>
|
<p>I have a ruby on rails application running on AWS. As usual, each application server has an nginx and multiple unicorn workers of the application instance. </p>
<p>I am going to move the workload to Kubernetes. I have a couple of questions regarding this; please help if anyone out there has kubernetised their RoR application.</p>
<ul>
<li>What will be the role of nginx? Do I need to install nginx in all the pods, or should I have an nginx pod which will reverse proxy to all rails/unicorn pods?</li>
<li>Which one is best for RoR in kubernetes, passenger or unicorn?</li>
</ul>
| <p><strong>How will you use nginx?</strong></p>
<p>A kubernetes service can be backed by several kubernetes pods. Whenever anyone makes a request to the kubernetes service, the request is sent to one of the upstream pods in a round robin fashion.</p>
<p>If you were planning to use nginx as a 'load-balancer' or reverse proxy to front your rails app, you don't really need that anymore. Each pod, of course, will need to have something like passenger/unicorn to serve the rails app.</p>
<p>Here's a guide I found that talks about a rails deployment from start to end: <a href="http://www.thagomizer.com/blog/2015/07/01/kubernetes-and-deploying-to-google-container-engine.html" rel="nofollow noreferrer">http://www.thagomizer.com/blog/2015/07/01/kubernetes-and-deploying-to-google-container-engine.html</a></p>
<p>If you're planning to use nginx as a static file server, my recommendation would be to have a different pod for the static files that just contains nginx.</p>
<p><strong>What is better to use with k8s?</strong></p>
<p>K8s doesn't really care, because this is outside k8s's concern. Use whatever you like, or whatever you think works better in a container environment. The better question to ask might be <em>which one of passenger/unicorn is a better fit for containerised rails apps</em>. </p>
|
<p>Does anyone know if Google’s HTTPS loadbalancer is working?
I was working on setting up an NGINX ingress service but I noticed the Google load balancer was automatically being set up by Kubernetes. I was getting two external IPs instead of one. So instead of setting up the NGINX load balancer I decided to use the Google service. I deleted my container cluster and created a brand new one. I started my HTTP pod and HTTP service on port 80. I then created my ingress service and L7 controller pod. Now I'm getting the following error when I review the load balancer logs: </p>
<blockquote>
<p>Event(api.ObjectReference{Kind:"Ingress", Namespace:"default",
Name:"echomap", UID:"9943e74c-76de-11e6-8c50-42010af0009b",
APIVersion:"extensions", ResourceVersion:"7935", FieldPath:""}): type:
'Warning' reason: 'GCE' googleapi: Error 400: Validation failed for
instance
'projects/mundolytics/zones/us-east1-c/instances/gke-airportal-default-pool-7753c577-129e':
instance may belong to at most one load-balanced instance group.,
instanceInMultipleLoadBalancedIgs</p>
</blockquote>
| <p>Probably you have one or more hanging backend services. Run <code>gcloud compute backend-services list</code> to find them and then <code>gcloud compute backend-services delete [SERVICE-NAME]</code> for each service to remove it.</p>
<pre><code>$ gcloud compute backend-services list
NAME BACKENDS PROTOCOL
my-hanging-service us-central1-a/instanceGroups/gke-XXXXXXX-default-pool-XXXXXXX-grp HTTP
$ gcloud compute backend-services delete my-hanging-service
</code></pre>
|
<p>I am using GCP Container Engine in my project and now I am facing some issue that I don't know if it can be solved via secrets.</p>
<p>One of my deployments is node-js app server, there I use some npm modules which require my GCP service account key (.json file) as an input.</p>
<p>The input is the path where this json file is located. Currently I managed to make it work by providing this file as part of my docker image and then in the code I put the path to this file and it works as expected. The problem is that I think that it is not a good solution because I want to decouple my nodejs image from the service account key because the service account key may be changed (e.g. dev,test,prod) and I will not be able to reuse my existing image (unless I will build and push it to a different registry).</p>
<p>So how could I upload this service account json file as a secret and then consume it inside my pod? I saw it is possible to create secrets out of files, but I don't know if it is possible to specify the path to the place where this json file is stored. If it is not possible with secrets (because maybe secrets are not saved in files...), how (if at all) can it be done?</p>
| <p>You can make your json file a secret and consume it in your pod. See the following link for secrets (<a href="http://kubernetes.io/docs/user-guide/secrets/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/secrets/</a>), but I'll summarize next:</p>
<p>First create a secret from your json file:</p>
<pre><code>kubectl create secret generic nodejs-key --from-file=./key.json
</code></pre>
<p>Now that you've created the secret, you can consume in your pod (in this example as a volume):</p>
<pre><code>{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nodejs"
  },
  "spec": {
    "containers": [{
      "name": "nodejs",
      "image": "node",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "nodejs-key"
      }
    }]
  }
}
</code></pre>
<p>So when your pod spins up, the file will be dropped in the "file system" at /etc/foo/key.json.</p>
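<p>If the npm module expects that path via an environment variable (many Google client libraries read <code>GOOGLE_APPLICATION_CREDENTIALS</code>, for example - treat that as an assumption about your module), you could add something like the following to the container entry above, so only the secret differs between dev/test/prod while the image stays identical:</p>
<pre><code>"env": [{
    "name": "GOOGLE_APPLICATION_CREDENTIALS",
    "value": "/etc/foo/key.json"
}]
</code></pre>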
|
<p>I have a setup with a webserver (NGINX) and a react-based frontend that uses webpack to build the final static sources.</p>
<p>The webserver has its own kubernetes <code>deployment</code> + <code>service</code>.</p>
<p>The frontend needs to be built before the webserver can serve the static html/js/css files - but after that, the <code>pod</code>/<code>container</code> can stop.</p>
<p>My idea was to share a <code>volume</code> between the webserver and the frontend <code>pod</code>. The frontend will write the generated files to the <code>volume</code> and the webserver can serve them from there. Whenever there is an update to the frontend sourcecode, the files need to be regenerated.</p>
<p>What is the best way to accomplish that using kubernetes tools?
Right now, I'm using an <code>init-container</code> to build - but this leads to a restart of the webserver <code>pod</code> as well, which wouldn't be necessary.</p>
<p>Is this the best/only solution to this problem, or should I use kubernetes' <code>jobs</code> for this kind of task?</p>
| <p>There are multiple ways to do this. Here's how I think about this:</p>
<p><strong>Option 1: The static files represent built source code</strong></p>
<p>In this case, the static files that you want to serve should actually be packaged and built into the docker image of your nginx webserver (in the html directory say). When you want to update your frontend, you update the version of the image used and update the pod.</p>
<p><strong>Option 2: The static files represent state</strong></p>
<p>In this case, your approach is correct. Your 'state' (like a database) is stored in a folder. You then run an init container/job to initialise 'state' and then your webserver pod works fine.</p>
<p>I believe option 1 to be better for 2 reasons:</p>
<ol>
<li>You can horizontally scale your webserver trivially by increasing the pod replica number. In option 2, you're actually dealing with state so that's a problem when you want to add more nodes to your underlying k8s cluster (you'll have to copy files/folders from one volume/folder to another).</li>
<li>The static files are actually the source code of your app. These are not uploaded media files or similar. In this case, it absolutely makes sense to make them a part of your docker image. Otherwise, it kind of defeats the advantage of containerising and deploying.</li>
</ol>
|
<p>We are splitting our monolith API into micro services.</p>
<p>We do not need rate-limiting, auth, caching or any other gateway like abilities. </p>
<p>Would it be a valid approach to use very simple stateless Nginx containers that route to the underlying services?</p>
| <p>Yes, Nginx can be a deployment and a service (of loadbalancer or externalIP type) and can forward to upstream services. </p>
<p>You might have to frequently change the nginx.conf though (when you add/remove services), so I would recommend using a ConfigMap to keep your nginx.conf and mounting that as a volume in your deployment. Refer: <a href="http://kubernetes.io/docs/user-guide/configmap/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/configmap/</a> and scroll down to consume configmap via volumes.</p>
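<p>A rough sketch of that wiring, assuming you create the ConfigMap with <code>kubectl create configmap nginx-conf --from-file=default.conf</code> (the names and image tag are made up; the official nginx image includes everything under /etc/nginx/conf.d):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-router
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-router
    spec:
      containers:
        - name: nginx
          image: nginx:1.11
          ports:
            - containerPort: 80
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/conf.d   # default.conf from the ConfigMap lands here
      volumes:
        - name: conf
          configMap:
            name: nginx-conf
</code></pre>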
<p>Another thing to keep in mind is that if you delete and create a service that is referred to in nginx.conf as an upstream service, you'll have to restart your deployment because nginx resolves all service DNS labels when nginx starts.</p>
|
<p>I have a setup with a webserver (NGINX) and a react-based frontend that uses webpack to build the final static sources.</p>
<p>The webserver has its own kubernetes <code>deployment</code> + <code>service</code>.</p>
<p>The frontend needs to be built before the webserver can serve the static html/js/css files - but after that, the <code>pod</code>/<code>container</code> can stop.</p>
<p>My idea was to share a <code>volume</code> between the webserver and the frontend <code>pod</code>. The frontend will write the generated files to the <code>volume</code> and the webserver can serve them from there. Whenever there is an update to the frontend sourcecode, the files need to be regenerated.</p>
<p>What is the best way to accomplish that using kubernetes tools?
Right now, I'm using an <code>init-container</code> to build - but this leads to a restart of the webserver <code>pod</code> as well, which wouldn't be necessary.</p>
<p>Is this the best/only solution to this problem, or should I use kubernetes' <code>jobs</code> for this kind of task?</p>
| <p><strong>Jobs</strong>, <strong>Init</strong> containers, or alternatively a <strong>gitRepo</strong> type of <strong>Volume</strong> would work for you.</p>
<p><a href="http://kubernetes.io/docs/user-guide/volumes/#gitrepo" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/volumes/#gitrepo</a></p>
<p>It is not clear in your question why you want to update the static content without simply re-deploying / updating the <strong>Pod</strong>. </p>
<p>Since somewhere, somehow, you have to build the webserver Docker image, it seems best to build the static content into the image: no moving parts once deployed, no need for volumes or storage. Overall it is simpler.</p>
<p>If you use any kind of automation tool for Docker builds, it's easy.
I personally use Jenkins to build Docker images based on a hook from git repo, and the image is simply rebuilt and deployed whenever the code changes.</p>
<p>Running a <strong>Job</strong> or <strong>Init</strong> container doesn't gain you much: sure, the web server keeps running, but it's just as easy to have a <strong>Deployment</strong> with rolling updates, which will deploy the new <strong>Pod</strong> before the old one is torn down, and your server will always be up too.</p>
<p>Keep it simple...</p>
|
<p>With Kubernetes configured to point to an external OpenID provider, it seems from browsing through the code that Kubernetes makes a call to the OpenID provider to get a refresh token. It expects an <code>id_token</code> to come back. It seems that Kubernetes respects the expiry time of the bearer token and does not make a call to the OpenID provider until the bearer token expires.</p>
<p>Is that the correct description of how the refresh tokens work in Kubernetes?</p>
| <p>Kubernetes doesn't have any concept of refresh tokens because the Kubernetes API server isn't a client of the OpenID provider; it simply validates <code>id_token</code>s issued for a specific client.</p>
<p>Clients of the OpenID provider which wish to talk to the API server on the end user's behalf must manage the refresh tokens to issue more <code>id_token</code>s as the current one expires. The API server won't do it for you. </p>
|
<p>this question is similar to <a href="https://stackoverflow.com/questions/38901877/kubernetes-petset-dns-not-working">Kubernetes PetSet DNS not working</a> but (I believe) distinct. My problem is: I want to use a Kubernetes PetSet to run a sharded database (RethinkDB). I need to pass each shard the dns address of another shard in the database, so that the shards can connect to each other run as a cluster. I also need other services to connect to the database and query it, and I'd like to do that through a k8s NodePort service (I think that if other pods connect to RethinkDB through a service, each client pod will connect to a random Rethink pod, providing a basic kind of load balancing. Using a NodePort service also means I can connect to the Rethink admin console from outside the cluster).</p>
<p>I believe Kubernetes should assign each RethinkDB shard a consistent domain name, and I should be able to pass each shard e.g. <code>rethink-0.rethink-service.default.svc.cluster.local</code> for clustering. However, I've tried two ways of configuring my PetSet and neither seems to assign the domain name <code>rethink-0.rethink-service.default.svc.cluster.local</code>:</p>
<p>1) I created a non-headless service for talking to the PetSet and that's it. In this configuration, the only rethink pet I create seems to be getting a random name:</p>
<pre><code>$ kc get all
NAME DESIRED CURRENT READY AGE
rc/etcd 1 1 1 46s
rc/pachd 1 1 1 46s
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/etcd 10.0.0.206 <none> 2379/TCP,2380/TCP 46s
svc/kubernetes 10.0.0.1 <none> 443/TCP 3d
svc/pachd 10.0.0.176 <nodes> 650/TCP,651/TCP 46s
svc/rethink-service 10.0.0.3 <nodes> 8080/TCP,28015/TCP,29015/TCP 46s
NAME READY STATUS RESTARTS AGE
po/etcd-x02ou 1/1 Running 0 46s
po/pachd-cqdus 1/1 Running 1 46s
po/rethink-0 1/1 Running 0 46s
info: 2 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/rethink-volume-claim-rethink-0 Bound rethink-volume-0 1Gi RWO 46s
$ kubectl run -i --tty --image ubuntu dns-test --restart=Never /bin/sh
...
# nslookup -type=srv rethink-service.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53
rethink-service.default.svc.cluster.local service = 10 100 0 3231383531646337.rethink-service.default.svc.cluster.local.
</code></pre>
<p>Here my RethinkDB pet seems to get the name <code>3231383531646337.rethink-service.default.svc.cluster.local</code></p>
<p>2) I created both a non-headless service (for external services to talk to Rethink) and a headless service (for domain name assignment) and I still seem to get random DNS names:</p>
<pre><code>$ kc get all
NAME DESIRED CURRENT READY AGE
rc/etcd 1 1 1 6m
rc/pachd 1 1 1 6m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/etcd 10.0.0.59 <none> 2379/TCP,2380/TCP 6m
svc/kubernetes 10.0.0.1 <none> 443/TCP 3d
svc/pachd 10.0.0.222 <nodes> 650/TCP,651/TCP 6m
svc/rethink-headless None <none> 6m
svc/rethink-service 10.0.0.30 <nodes> 8080/TCP,28015/TCP,29015/TCP 6m
NAME READY STATUS RESTARTS AGE
po/etcd-anc7v 1/1 Running 0 6m
po/pachd-i1anr 1/1 Running 1 6m
po/rethink-0 1/1 Running 0 6m
info: 2 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/rethink-volume-claim-rethink-0 Bound rethink-volume-0 1Gi RWO 6m
$ kubectl run -i --tty --image ubuntu dns-test --restart=Never /bin/sh
...
# nslookup -type=srv rethink-service.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53
rethink-service.default.svc.cluster.local service = 10 100 0 6638393531396237.rethink-service.default.svc.cluster.local.
</code></pre>
<p>Here my RethinkDB pet seems to get the name <code>6638393531396237.rethink-service.default.svc.cluster.local</code> which still seems arbitrary.</p>
<p>My basic questions are: <strong>Do I need to connect the nodes to a headless service, in addition to my non-headless NodePort service, to get stable DNS addresses? Can I even have two services for the same set of nodes? Why do neither of these setups give <code>rethink-0</code> the domain name <code>rethink-0.rethink-<something>.default.svc.cluster.local</code>?</strong></p>
<p>Thank you so much for your help!!!</p>
<p>Edit: two updates:</p>
<p>1) Here's the complete k8s manifest I'm using. It's long, but I'd be happy to extract certain parts if that's helpful: <a href="http://pastebin.com/nm73Xtxi" rel="nofollow noreferrer">http://pastebin.com/nm73Xtxi</a></p>
<p>2) I can't seem to do any DNS resolution related to my headless RethinkDB service, <code>rethink-headless</code>:</p>
<pre><code># nslookup rethink-headless.default
Server: 10.0.0.10
Address: 10.0.0.10#53
** server can't find rethink-headless.default: NXDOMAIN
# nslookup rethink-headless
Server: 10.0.0.10
Address: 10.0.0.10#53
** server can't find rethink-headless: SERVFAIL
</code></pre>
| <p>You can also use a headless Service for the DB access between nodes, and a regular Service for external access. </p>
<p>There is no issue having two Services pointing to the same set of pods for different purposes.</p>
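<p>A rough sketch of such a pair for the setup in the question (the label selector and ports are assumptions; note that the PetSet's <code>spec.serviceName</code> should reference the headless service so the pets get the stable <code>rethink-0.rethink-headless...</code> names):</p>
<pre><code># headless service: stable per-pet DNS names for clustering
apiVersion: v1
kind: Service
metadata:
  name: rethink-headless
spec:
  clusterIP: None
  selector:
    app: rethink
  ports:
    - name: cluster
      port: 29015
---
# regular NodePort service: load-balanced access for clients and the admin console
apiVersion: v1
kind: Service
metadata:
  name: rethink-service
spec:
  type: NodePort
  selector:
    app: rethink
  ports:
    - name: admin
      port: 8080
    - name: driver
      port: 28015
</code></pre>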
|
<p>I'd like to implement a sticky-session Ingress controller. Cookies or IP hashing would both be fine; I'm happy as long as the same client is <em>generally</em> routed to the same pod.</p>
<p>What I'm stuck on: it seems like the Kubernetes service model means my connections are going to be proxied randomly no matter what. I can configure my Ingress controller with session affinity, but as soon as the connection gets past that and hits a service, <code>kube-proxy</code> is just going to route me randomly. There's the <code>sessionAffinity: ClientIP</code> flag on services, but that doesn't help me -- the Client IP will always be the internal IP of the Ingress pod.</p>
<p>Am I missing something? Is this possible given Kubernetes' current architecture?</p>
| <p>I evaluated the haproxy controller but could not get it running reliably with session affinity. After some research I discovered the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer" title="Nginx Ingress Controller">Nginx Ingress Controller</a>, which since version 0.61 also includes the <em>nginx-sticky-module-ng</em> module and has now been running reliably for a couple of days in our test environment. I created a <a href="https://gist.github.com/tillkuhn/c2dba43d60d3e7a2462928812ca9e4bf" rel="nofollow noreferrer">Gist</a> that sets up the required Kubernetes pieces, since some important configuration is a bit hard to locate in the existing documentation. Good luck!</p>
|
<p>I want to know the high level benefit of Kubernetes running in bare metal machines.</p>
<p>So let's say we have 100 bare metal machines ready with <code>kubelet</code> deployed on each. Doesn't that mean that when the application only runs on 10 machines, we are wasting the other 90 machines, which just stand by unused?</p>
<p>For cloud, does Kubernetes launch new VMs as needed, so that clients do not pay for idle machines?</p>
<p>How does Kubernetes handle the extra machines that are needed at the moment?</p>
| <p>Yes, if you have 100 bare metal machines and use only 10, you are wasting money. You should only deploy the machines you need.</p>
<p>The Node Autoscaler works at certain Cloud Providers like AWS, GKE, or Open Stack based infrastructures.</p>
<p>Now, Node Autoscaler is useful if your load is not very predictable and/or scales up and down widely over the course of a short period of time (think Jobs or cyclic loads like a Netflix type use case). </p>
<p>If you're running services that just need to scale gradually as your customer base grows, it is less useful, since it is just as easy to add new nodes manually.</p>
<p>Kubernetes will handle some amount of auto-scaling with an assigned number of nodes (i.e. you can run many Pods on one node, and you would usually pick your machines to run in a safe range but still allow handling of spikes in traffic by spinning up more Pods on those nodes).</p>
<p>As a side note: with bare metal, you typically gain in performance, since you don't have the overhead of a VM / hypervisor, but you need to supply distributed storage, which a cloud provider would typically provide as a service.</p>
|
<p>In minikube, how do you expose a service using NodePort?</p>
<p>For example, I start a kubernetes cluster using the following command and create and expose a port like this:</p>
<pre><code>$ minikube start
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
$ curl $(minikube service hello-minikube --url)
CLIENT VALUES:
client_address=192.168.99.1
command=GET
real path=/ ....
</code></pre>
<p>Now how to access the exposed service from the host? I guess the minikube node needs to be configured to expose this port as well. </p>
| <p>I am not exactly sure what you are asking as it seems you already know about the <code>minikube service <SERVICE_NAME> --url</code> command which will give you a url where you can access the service. In order to open the exposed service, the <code>minikube service <SERVICE_NAME></code> command can be used:</p>
<pre><code>$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment "hello-minikube" created
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.102 <nodes> 8080/TCP 7s
kubernetes 10.0.0.1 <none> 443/TCP 13m
$ minikube service hello-minikube
Opening kubernetes service default/hello-minikube in default browser...
</code></pre>
<p>This command will open the specified service in your default browser.</p>
<p>There is also a <code>--url</code> option for printing the url of the service which is what gets opened in the browser:</p>
<pre><code>$ minikube service hello-minikube --url
http://192.168.99.100:31167
</code></pre>
|
<p>I have a simple Kubernetes job (based on the <a href="http://kubernetes.io/docs/user-guide/jobs/work-queue-2/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/jobs/work-queue-2/</a> example) which uses a Docker image that I have placed as a public image on my dockerhub account. It all looks like this:</p>
<p><strong>job.yaml</strong>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
</code></pre>
<p>Now I want to try to instead use a private Docker registry which requires authentication as in:</p>
<p><code>docker login https://myregistry.com</code></p>
<p>But I can't find anything about how I add username and password to my job.yaml file. How is it done?</p>
| <p>You need to use <a href="http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod" rel="noreferrer">ImagePullSecrets</a>.</p>
<p>Once you create a secret object, you can refer to it in your pod spec (the <code>spec</code> value that is the parent of <code>containers</code>):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
</code></pre>
<p>Of course, you'll have to create the secret (as per the docs). This is what it will look like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: mynamespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>The value of <code>.dockerconfigjson</code> is a base64 encoding of this file: <code>.docker/config.json</code>.</p>
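<p>Rather than hand-crafting that Secret, you can also have <code>kubectl</code> generate it for you (the registry URL and credentials below are placeholders):</p>
<pre><code>kubectl create secret docker-registry myregistrykey \
  --docker-server=https://myregistry.com \
  --docker-username=YOUR_USER \
  --docker-password=YOUR_PASSWORD \
  --docker-email=YOUR_EMAIL
</code></pre>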
<p>The key point: <strong>A job spec contains a pod spec</strong>. So whatever knowledge you gain about pod specs can be applied to jobs as well.</p>
|
<p>I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup). </p>
<p>I dockerized them and moved to kubernetes on AWS (set up with kops). The last piece is converting my nginx config.</p>
<p>I'd like </p>
<ol>
<li>All 6 to have SSL termination (not in the docker image)</li>
<li>4 need websockets and client IP session affinity (Meteor, Socket.io) </li>
<li>5 need http->https forwarding</li>
<li>1 serves the same content on http and https</li>
</ol>
<p>I did 1 (SSL termination) by setting the service type to LoadBalancer and <a href="https://github.com/kubernetes/kubernetes/issues/22854" rel="nofollow noreferrer">using AWS specific annotations</a>. This created AWS load balancers, but this seems like a <a href="https://github.com/kubernetes/kubernetes/issues/13892" rel="nofollow noreferrer">dead end for the other requirements</a>.</p>
<p>I looked at Ingress, but don't see how to do it on AWS. Will this <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/README.md" rel="nofollow noreferrer">Ingress Controller</a> work on AWS?</p>
<p>Do I need an nginx controller in each pod? <a href="https://stackoverflow.com/questions/35322432/running-meteor-app-with-nginx-ssl-proxy-on-kubernetes">This</a> looked interesting, but I'm not sure how recent/relevant it is.</p>
<p>I'm not sure what direction to start in. What will work?</p>
<p>Mike</p>
| <p>You should be able to use the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="noreferrer">nginx ingress controller</a> to accomplish this.</p>
<ol>
<li><a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#https" rel="noreferrer">SSL termination</a></li>
<li><a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/configuration.md#websockets" rel="noreferrer">Websocket support</a></li>
<li><a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#server-side-https-enforcement" rel="noreferrer">http->https</a></li>
<li>Turn off the http->https redirect, as described in the link above</li>
</ol>
<p>The <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="noreferrer">README</a> walks you through how to set it up, and there are plenty of <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples" rel="noreferrer">examples</a>.</p>
<p>The basic pieces you need to make this work are:</p>
<ul>
<li>A <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/rc.yaml" rel="noreferrer">default backend</a> that will respond with 404 when there is no matching Ingress rule</li>
<li>The <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/rc.yaml" rel="noreferrer">nginx ingress controller</a> which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.</li>
<li>One or more <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/ingress.yaml" rel="noreferrer">ingress rules</a> that describe how traffic should be routed to your services.</li>
</ul>
<p>The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.</p>
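<p>As an illustration, a minimal ingress rule of the kind the controller acts on might look like this (the hostname, secret and service names are hypothetical):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # TLS cert/key stored as a Kubernetes Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 80
</code></pre>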
|
<p>I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm-client in most cases.</p>
<p><strong>This is what I currently do</strong></p>
<p>Let's say I have a project composed of 3 services viz. <code>postgres</code>, <code>express</code>, <code>nginx</code>.</p>
<p>I create a directory called <code>product-release</code> that is as follows:</p>
<pre><code>product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml   # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml               # postgres schema migration
      update.sh              # k8s API server scripts to patch/replace existing k8s objects, and runs the state change job
</code></pre>
<p>The usual git stuff can apply now. Every time I make a change, I make changes to the spec files, test them, write the update scripts to help move from the last version to this current version, and then commit it and tag it.</p>
<p><strong>Questions</strong>:</p>
<ol>
<li>This works for me so far, but is this "the right way"?</li>
<li>Why does <code>helm</code> have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?</li>
</ol>
| <p>It seems that <a href="https://redspread.com/" rel="nofollow noreferrer">https://redspread.com/</a> (open source) addresses this particular issue, but it needs more development before it'll be production-ready - at least from my team's quick glance at it.</p>
<p>We'll stick with keeping yaml files in git together with the deployed application for now, I guess. </p>
|
<p>How can I call these commands via the REST API:</p>
<pre><code>$ kubectl describe po xyz
$ kubectl describe svc xyz
$ kubectl describe node zyx
</code></pre>
<p>I need to get service endpoints and node capacity usage, but I can't find in the docs how to do this. The GET command does not provide this information.</p>
| <p>Use <code>kubectl --v=8 ...</code> for fun and profit!</p>
<p>For example, a <code>describe pod</code> is actually a combination of results from the pod and events APIs:</p>
<pre><code>GET /api/v1/namespaces/default/pods/xyz
GET /api/v1/namespaces/default/events?fieldSelector=involvedObject.name%3Dxyz%2CinvolvedObject.namespace%3Ddefault%2CinvolvedObject.uid%3Dd4605fd6-b152-11e6-a208-02e9103bab42
</code></pre>
|
<p>I have an Openshift Origin installation (v. 1.2.1, but also reproduced this issue on 1.3.0), and I'm trying to get pods' IPs from DNS by service name. Assume my node has IP 192.168.58.6, and I look for pods of headless service 'hz' in project 'hz-test'. When I try to send DNS request to dnsmasq (which is installed on nodes and forwards requests to Kubernetes' SkyDNS) over UDP, everything goes well:</p>
<pre><code># dig +notcp +noall +answer hz.hz-test.svc.cluster.local @192.168.58.6
hz.hz-test.svc.cluster.local. 14 IN A 10.1.2.5
<and so on...>
</code></pre>
<p>However, when I switch transport protocol to TCP, I receive the following error:</p>
<pre><code># dig +tcp +noall +answer hz.hz-test.svc.cluster.local @192.168.58.6
;; communications error to 192.168.58.6#53: end of file
</code></pre>
<p>After looking at the tcpdump output, I've discovered that after establishing a TCP connection (SYN - SYN/ACK - ACK) dnsmasq immediately sends back FIN/ACK, and when the DNS client tries to send its request using this connection, dnsmasq sends back an RST packet instead of a DNS answer. I've tried to perform the same DNS query over TCP from the node itself, and dnsmasq gave me the usual response, i.e. it worked normally over TCP; the problem arose only when I performed the request from a pod. Also, I've tried to send the same query over TCP directly from the pod to Kubernetes' DNS (avoiding dnsmasq), and this query was OK too.</p>
<p>So why does dnsmasq on the nodes ignore TCP requests from pods, while all other communication is okay? Is this the intended behavior?</p>
<p>Any help and ideas are appreciated.</p>
| <p>Finally, the reason was that dnsmasq was configured to listen on the node's IP (listen-address=192.168.58.6). With such a configuration dnsmasq binds to <em>all</em> of the node's network interfaces, but tries to reject "wrong" connections in userspace (i.e. on its own).</p>
<p>I don't really understand why dnsmasq decided that requests from pods to 192.168.58.6 were forbidden with that configuration, but I got it working by changing "listen-address" to</p>
<pre><code>interface=eth0
bind-interfaces
</code></pre>
<p>which forced dnsmasq to actually bind only to the NIC with IP 192.168.58.6. After that, dnsmasq started to accept all TCP requests.</p>
|
<p>I deployed my java web application in kubernetes using DEPLOYMENTS and was able to scale it and connect it to a database POD, but then I wanted to scale the database too. As you know, that is not possible in kubernetes, and MySQL replication is not recommended for production. So I tried Vitess and was able to scale my database, but I don't know how or where I should create my java web application DEPLOYMENTS/REPLICAS and connect them to the database through vtgate.
And is there another way of scaling a MySQL database through kubernetes?</p>
| <p>It's important to note that Vitess is not a transparent proxy that you can just insert between the app and MySQL at the connection level. Vitess turns a set of MySQL servers into a clustered database, and it requires you to build your app against a Vitess driver instead of the plain MySQL driver.</p>
<p>If you're already using JDBC, you shouldn't need a lot of code changes other than connection management, since there is a <a href="https://github.com/youtube/vitess/blob/master/java/example/src/main/java/com/youtube/vitess/example/VitessJDBCExample.java" rel="nofollow noreferrer">Vitess implementation of the JDBC interface</a>. However, some query constructs may not be supported yet by Vitess, so you may need to rewrite them into an equivalent form that is supported.</p>
<p>Once your app is compatible with Vitess, deploying it in Kubernetes will be the same as you did before, except you will point the app pods to <a href="https://github.com/youtube/vitess/blob/faed5ff5fc854ec3a33125318a791391faaaf638/examples/kubernetes/guestbook/main.py#L84" rel="nofollow noreferrer">connect to the VTGate service via DNS</a>.</p>
<p>As for other ways to scale MySQL in Kubernetes without Vitess, there's an important new feature entering Beta in Kubernetes 1.5 called StatefulSet that will help you scale databases like MySQL similar to the way a Deployment can scale stateless Pods. Vitess itself will also become more convenient to scale in Kubernetes by taking advantage of StatefulSet.</p>
<p>However, StatefulSet with pure MySQL will mostly only help you scale read-only traffic by increasing the number of slaves. If you need to scale write traffic, you will likely need to implement application-defined sharding. At that point, the required changes to your app will almost certainly be much more than if you modify it to support Vitess.</p>
|
<p>The example in the documentation (<a href="http://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">http://kubernetes.io/docs/user-guide/configmap/</a>) for consuming values is based on a ConfigMap where each data entry is a single key/value pair. Example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
</code></pre>
<p>However, when we create a ConfigMap from property files, every data entry value is itself a list of key/value pairs. Example:</p>
<pre><code>$ kubectl get configmaps game-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config
  [...]
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
[...]
</code></pre>
<p>In such a case:</p>
<ol>
<li>How do we consume a single entry (example: enemies.cheat) as an
environment variable?</li>
<li>How do we consume all the entries (example: all game.properties entries)
as a set of environment variables, assuming we just use each key as
the environment variable name?</li>
</ol>
| <p>You can't consume a single entry since it is just one big blob of text. You have two options that I see:</p>
<ol>
<li><p>Don't create the config map from a file. Instead create each entry in the ConfigMap manually. You'll have to consume each key separately though, at least until <a href="https://github.com/kubernetes/kubernetes/issues/26299" rel="nofollow noreferrer">this issue</a> is resolved.</p></li>
<li><p>Don't use the ConfigMap as environment variables. Instead mount that key as a volume and have your application read the key/values.</p></li>
</ol>
<p>It seems like the second option would work nicely for you. It would let you continue to generate your ConfigMap from a file, and also allow you to consume all declared key/values without having to constantly change your Kubernetes manifests.</p>
<p>Another advantage of mounting your ConfigMap as a volume is that it will allow you to perform in-place updates to your configs (assuming that your app tolerates that). If you mount ConfigMap keys as environment variables the only way to update them is to restart the app.</p>
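<p>For completeness, if you do go with the first option (creating each entry in the ConfigMap individually), a single key can be consumed as an environment variable roughly like this (the container, variable name and ConfigMap key here are assumptions):</p>
<pre><code>containers:
  - name: game
    image: example/game:1.0
    env:
      - name: ENEMIES_CHEAT
        valueFrom:
          configMapKeyRef:
            name: game-config
            key: enemies.cheat
</code></pre>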
|
<p>I am using kubernetes on a single machine for testing. I have built a custom image from the nginx docker image, but when I try to use the image in kubernetes I get an image pull error.</p>
<p><strong>MY POD YAML</strong></p>
<pre class="lang-none prettyprint-override"><code>kind: Pod
apiVersion: v1
metadata:
name: yumserver
labels:
name: frontendhttp
spec:
containers:
- name: myfrontend
image: my/nginx:latest
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: mypd
imagePullSecrets:
- name: myregistrykey
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim-1
</code></pre>
<p><strong>MY KUBERNETES COMMAND</strong></p>
<p>kubectl create -f pod-yumserver.yaml</p>
<p><strong>THE ERROR</strong></p>
<pre class="lang-none prettyprint-override"><code>kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
</code></pre>
| <blockquote>
<p>So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16</p>
</blockquote>
<p>For some reason Timo Reimann only posted this as a comment, but it definitely should be the official answer to this question, so I'm posting it again.</p>
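<p>As a sketch, the relevant part of the pod spec would then look something like the following (the tag is just an example; <code>IfNotPresent</code> tells the kubelet to use a locally available image instead of pulling):</p>
<pre><code>containers:
  - name: myfrontend
    image: my/nginx:v1          # any tag other than "latest", or keep latest together with the explicit policy below
    imagePullPolicy: IfNotPresent
</code></pre>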
|
<p>Let's say I set up a fresh Kubernetes cluster. I assume the both <code>kube-system</code> and <code>default</code> namespaces will get a service account named <code>default</code>? Which permissions does that service account have? Full read/write permissions?</p>
<p>I'm essentially asking this to understand best practises to give a custom Go controller write access to resources.</p>
| <p>Service accounts have no inherent permissions. The permissions they have depend entirely on the authorization mode configured (<code>--authorization-mode</code> flag passed to the apiserver)</p>
<p>Defining RBAC roles is a good method for specifying the permissions required for a controller.</p>
<p>There are existing role definitions for in-tree controllers at <a href="https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy</a></p>
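<p>As an illustrative sketch (names are placeholders, and the RBAC API group version depends on your cluster - older clusters use v1alpha1/v1beta1), a Role plus RoleBinding giving a controller's service account write access to deployments might look like:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-controller
  namespace: default
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-controller
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-controller
</code></pre>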
|
<p>I am trying to install <strong>Kubernetes on bare metal with IPv6-only networks</strong>. I get stuck with an <strong>Invalid Arguments</strong> error whenever I put an IPv6 address instead of an IPv4 address. </p>
<hr>
<p>Can anyone suggest any guidelines on how to install Kubernetes on IPv6-only networks?
I know it's not IPv6 production ready, but in the source code it seems a few components do have support for IPv6 -- that's why I am trying.</p>
| <p>IPv6 is not supported at this time, and it's very clearly stated in the docs that it's a work in progress: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/networking.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/networking.md</a></p>
|
<p>I find Docker Swarm and Kubernetes quite similar, and then there is Docker, which is a company, while the above two are container clustering tools. So what exactly are all these tools, and what are the differences between them?</p>
| <p>There are lots of articles out there which will explain the differences. In a nutshell:</p>
<ul>
<li>Both are trying to solve the same problem - container orchestration over a large number of hosts. Essentially these problems can be broken down like so:
<ul>
<li>Scheduling containers across multiple hosts (taking into account resource utilization etc)</li>
<li>Grouping containers into logical units</li>
<li>Scaling of containers</li>
<li>Load balancing/access to these containers once they're deployed</li>
<li>Attaching storage to the containers, whether it be shared or not</li>
<li>Communication/networking between the containers/grouped containers</li>
<li>Service discovery of the containers (ie where is X service)</li>
</ul></li>
</ul>
<p>Both Kubernetes & Docker Swarm can solve these problems for you, but they have different naming conventions and ideas on how to solve them. The differences are relatively conceptual. There are articles that break this down quite well:</p>
<p><a href="https://platform9.com/blog/compare-kubernetes-vs-docker-swarm/">https://platform9.com/blog/compare-kubernetes-vs-docker-swarm/</a>
<a href="https://torusware.com/blog/2016/09/hands-on-orchestration-docker-1-12-swarm-vs-kubernetes/">https://torusware.com/blog/2016/09/hands-on-orchestration-docker-1-12-swarm-vs-kubernetes/</a>
<a href="http://containerjournal.com/2016/04/07/kubernetes-vs-swarm-container-orchestrator-best/">http://containerjournal.com/2016/04/07/kubernetes-vs-swarm-container-orchestrator-best/</a></p>
<p>Essentially, they are similar products in the same space. There are a few caveats to bear in mind though:</p>
<ul>
<li>Kubernetes is developed with a container agnostic mindset (at present it supports Docker, <a href="https://coreos.com/rkt/">rkt</a> and has some support for <a href="https://www.hyper.sh/">hyper</a> whereas docker swarm is docker only)</li>
<li>Kubernetes is "cloud native" in that it can run equally well across <a href="http://kubernetes.io/docs/getting-started-guides/coreos/azure/">Azure</a>, <a href="https://cloud.google.com/container-engine/">Google Container Engine</a> and <a href="http://kubernetes.io/docs/getting-started-guides/aws/">AWS</a> - I am not currently aware of this being a feature of Docker Swarm, although you could configure it yourself</li>
<li>Kubernetes is an entirely open source product. If you require commercial support, you need to go to a third party to get it. Docker provides enterprise support for Swarm</li>
<li>If you are familiar with the docker-compose workflow, Docker Swarm makes use of this so it may be familiar to you and easier to get started. Kubernetes requires learning pod/service/deployment definitions which, while pure YAML, are an adjustment.</li>
</ul>
|
<p>Kubernetes scheduler includes two parts: <strong>predicate</strong> and <strong>priority</strong>. The source code is in <em>kubernetes/plugin/pkg/scheduler</em>. I want to add a new priority algorithm to the default priorities. Can anyone guide me through the detailed steps? Thanks a lot!</p>
<p>Maybe I should do the following steps:</p>
<ol>
<li>Add my own priority algorithm to the path: kubernetes/plugin/pkg/scheduler/algorithm/priorities</li>
<li>Register that priority algorithm</li>
<li>Build/Recompile the whole k8s project and install\deploy a new k8s cluster</li>
<li>Test if that priority takes effect, maybe by giving it a high weight.</li>
</ol>
<p>If there are more detailed articles and documents, they will help me a lot!
The more detailed the better! Thanks a lot!</p>
<p><em>k8s version: 1.2.0, 1.4.0 or later.</em></p>
| <p>You can run your scheduler as a kubernetes deployment.</p>
<p>Kelsey Hightower has an example scheduler coded up on <a href="https://github.com/kelseyhightower/scheduler" rel="nofollow noreferrer">Github</a></p>
<p>The meat and bones of this is here: <a href="https://github.com/kelseyhightower/scheduler/blob/master/bestprice.go" rel="nofollow noreferrer">https://github.com/kelseyhightower/scheduler/blob/master/bestprice.go</a></p>
<p>And the deployment yaml is <a href="https://github.com/kelseyhightower/scheduler/blob/master/deployments/scheduler.yaml" rel="nofollow noreferrer">here</a></p>
<p>Essentially, you can package it up as a docker container and deploy it. </p>
<p>Take note of the way you interact with the k8s API using <a href="https://github.com/kelseyhightower/scheduler/blob/master/kubernetes.go" rel="nofollow noreferrer">this package</a>; in order to do it this way you'll need a similar wrapper, but it's much easier than building/recompiling the whole k8s package.</p>
|
<p>I'm trying to make use of the new <code>subPath</code> feature implemented in <a href="https://github.com/kubernetes/kubernetes/pull/22575" rel="noreferrer">this</a> pull request (recently released in v1.3).</p>
<p>However, the output of <code>mount</code> shows it ignoring the <code>subPath</code>, mounting the same NFS directory for both volume mounts:</p>
<pre><code>nfs-server:/mnt/nfs/exports/apps/my-app on /home/share/foo type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.128.0.4,local_lock=none,addr=nfs-server)
nfs-server:/mnt/nfs/exports/apps/my-app on /home/share/bar/baz type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.128.0.4,local_lock=none,addr=nfs-server)
</code></pre>
<p>The relevant bits of my deployment YAML:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app
spec:
replicas: 1
template:
metadata:
labels:
name: app
spec:
containers:
- name: app
image: my-org/my-app:latest
volumeMounts:
- mountPath: /home/share/foo
name: nfs
subPath: foo-resources
- mountPath: /home/share/bar/baz
name: nfs
subPath: baz-resources
volumes:
- name: nfs
nfs:
path: /mnt/nfs/exports/apps/my-app
server: nfs-server
</code></pre>
| <p>I'm not 100% sure about this, as I'm using a <code>configMap</code> volume rather than NFS, but I had to make the <code>mountPath</code> match the <code>subPath</code> as seen below before it worked for me.</p>
<p>FYI, I'm using Kubernetes v1.4.5.</p>
<p>If I'm reading this correctly, you are wanting to:</p>
<ul>
<li>Mount the NFS file or directory <code>/mnt/nfs/exports/apps/my-app/foo-resources</code> such that its path in the container is <code>/home/share/foo/foo-resources</code>.</li>
<li>Mount the NFS file or directory <code>/mnt/nfs/exports/apps/my-app/baz-resources</code> such that its path in the container is <code>/home/share/bar/baz/baz-resources</code>.</li>
</ul>
<p>Try this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app
spec:
replicas: 1
template:
metadata:
labels:
name: app
spec:
containers:
- name: app
image: my-org/my-app:latest
volumeMounts:
- mountPath: /home/share/foo/foo-resources
name: nfs
subPath: foo-resources
- mountPath: /home/share/bar/baz/baz-resources
name: nfs
subPath: baz-resources
volumes:
- name: nfs
nfs:
path: /mnt/nfs/exports/apps/my-app
server: nfs-server
</code></pre>
<p>The differences:</p>
<pre><code>16c16
< - mountPath: /home/share/foo/foo-resources
---
> - mountPath: /home/share/foo
19c19
< - mountPath: /home/share/bar/baz/baz-resources
---
> - mountPath: /home/share/bar/baz
</code></pre>
|
<p>What I am trying to do:</p>
<p>I have set up a kubernetes cluster using the documentation available on the Kubernetes website (http_kubernetes.io/v1.1/docs/getting-started-guides/aws.html). Using kube-up.sh, I was able to bring a kubernetes cluster up with 1 master and 3 minions (as highlighted in the blue rectangle in the diagram below). From the documentation, as far as I know, we can add minions as and when required, so from my point of view the k8s master instance is a single point of failure when it comes to high availability.</p>
<p><a href="http://i.stack.imgur.com/ZrtZI.png" rel="nofollow">Kubernetes Master HA on AWS</a></p>
<p>So I am trying to setup HA k8s master layer with the three master nodes as shown above in the diagram. For accomplishing this I am following kubernetes high availability cluster guide, http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
What I have done:</p>
<p>Setup k8s cluster using kube-up.sh and provider aws (master1 and minion1, minion2, and minion3)
Setup two fresh master instance’s (master2 and master3)
I then started configuring etcd cluster on master1, master 2 and master 3 by following below mentioned link:
http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
So in short i have copied etcd.yaml from the kubernetes website (http_kubernetes.io/v1.1/docs/admin/high-availability/etcd.yaml) and updated Node_IP, Node_Name and Discovery Token on all the three nodes as shown below. </p>
<blockquote>
<p>NODE_NAME NODE_IP DISCOVERY_TOKEN</p>
<p>Master1
172.20.3.150 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed</p>
<p>Master2
172.20.3.200 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed</p>
<p>Master3
172.20.3.250 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed</p>
</blockquote>
<p>And on running etcdctl member list on all the three nodes, I am getting:</p>
<pre><code>$ docker exec <container-id> etcdctl member list
ce2a822cea30bfca: name=default peerURLs=http_localhost:2380,http_localhost:7001 clientURLs=http_127.0.0.1:4001
</code></pre>
<p>As per documentation we need to keep etcd.yaml in /etc/kubernete/manifest, this directory already contains etcd.manifest and etcd-event.manifest files. For testing I modified etcd.manifest file with etcd parameters.</p>
<p>After making above changes I forcefully terminated docker container, container was existing after few seconds and I was getting below mentioned error on running kubectl get nodes:
error: couldn't read version from server: Get httplocalhost:8080/api: dial tcp 127.0.0.1:8080: connection refused</p>
<p>So please kindly suggest how can I setup k8s master highly available setup on AWS.</p>
| <p>There is also <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> project</p>
<p>From the project README:</p>
<blockquote>
<p>Operate HA Kubernetes the Kubernetes Way</p>
</blockquote>
<p>also:</p>
<blockquote>
<p>We like to think of it as <code>kubectl</code> for clusters</p>
</blockquote>
<p>Download <a href="https://github.com/kubernetes/kops/releases/latest" rel="nofollow noreferrer">the latest release</a>, e.g.:</p>
<pre><code>cd ~/opt
wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-linux-amd64
mv kops-linux-amd64 kops
chmod +x kops
ln -s ~/opt/kops ~/bin/kops
</code></pre>
<p>See <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops.md" rel="nofollow noreferrer">kops usage</a>, especially:</p>
<ul>
<li><a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md" rel="nofollow noreferrer">kops create cluster</a></li>
<li><a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_update_cluster.md" rel="nofollow noreferrer">kops update cluster</a></li>
</ul>
<p>Assuming you already have <code>s3://my-kops</code> bucket and <code>kops.example.com</code> hosted zone.</p>
<p>Create configuration:</p>
<pre><code>kops create cluster --state=s3://my-kops --cloud=aws \
--name=kops.example.com \
--dns-zone=kops.example.com \
--ssh-public-key=~/.ssh/my_rsa.pub \
--master-size=t2.medium \
--master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
--network-cidr=10.0.0.0/22 \
--node-count=3 \
--node-size=t2.micro \
--zones=eu-west-1a,eu-west-1b,eu-west-1c
</code></pre>
<p>Edit configuration:</p>
<pre><code>kops edit cluster --state=s3://my-kops
</code></pre>
<p>Export terraform scripts:</p>
<pre><code>kops update cluster --state=s3://my-kops --name=kops.example.com --target=terraform
</code></pre>
<p>Apply changes directly:</p>
<pre><code>kops update cluster --state=s3://my-kops --name=kops.example.com --yes
</code></pre>
<p>List cluster:</p>
<pre><code>kops get cluster --state s3://my-kops
</code></pre>
<p>Delete cluster:</p>
<pre><code>kops delete cluster --state s3://my-kops --name=kops.example.com --yes
</code></pre>
|
<p>I have installed Kubernetes using <a href="https://github.com/kubernetes/contrib" rel="noreferrer">contrib/ansible</a> scripts.
When I run cluster-info:</p>
<pre><code>[osboxes@kube-master-def ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
</code></pre>
<p>The cluster is exposed on localhost with insecure port, and exposed on secure port 443 via ssl</p>
<p><code>kube 18103 1 0 12:20 ? 00:02:57 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=https://10.57.50.161:443 -- kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt
kube 18217 1 0 12:20 ? 00:00:15 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=https://10.57.50.161:443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
root 27094 1 0 12:21 ? 00:00:00 /bin/bash /usr/libexec/kubernetes/kube-addons.sh
kube 27300 1 1 12:21 ? 00:05:36 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.57.50.161:2379 --insecure-bind-address=127.0.0.1 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/certs/server.crt
</code></p>
<p>I have copied the certificates from the kube-master machine to my local machine and I have installed the CA root certificate. The Chrome/Safari browsers are accepting the CA root certificate.
When I try to access <a href="https://10.57.50.161/ui" rel="noreferrer">https://10.57.50.161/ui</a>
I'm getting 'Unauthorized'.</p>
<p>How can I access the kubernetes ui?</p>
| <p>You can use kubectl proxy.</p>
<p>Depending on whether you are using a config file, run via the command line:</p>
<pre><code>kubectl proxy
</code></pre>
<p>or</p>
<pre><code>kubectl --kubeconfig=kubeconfig proxy
</code></pre>
<p>You should get a similar response</p>
<blockquote>
<p>Starting to serve on 127.0.0.1:8001</p>
</blockquote>
<p>Now open your browser and navigate to</p>
<p><s><a href="http://127.0.0.1:8001/ui/" rel="noreferrer">http://127.0.0.1:8001/ui/</a></s> (deprecated, see <a href="https://github.com/kubernetes/dashboard#getting-started" rel="noreferrer">kubernetes/dashboard</a>)<br>
<a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p>
<p>You need to make sure the ports match up.</p>
|
<p>I want to expose a HTTP service running in Google Container Engine over <strong>HTTPS only</strong> load balancer.</p>
<p>How to define in ingress object that I want <code>HTTPS</code> only load balancer instead of default HTTP?</p>
<p>Or is there a way to permanently drop <code>HTTP</code> protocol from created load balancer? When I add <code>HTTPS</code> protocol and then drop <code>HTTP</code> protocol, <code>HTTP</code> is recreated after few minutes by the platform.</p>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myapp-ingress
spec:
backend:
serviceName: myapp-service
servicePort: 8080
</code></pre>
| <p>In order to have HTTPs service exposed only, you can block traffic on port 80 as mentioned on this <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce#blocking-http" rel="nofollow noreferrer">link</a>:</p>
<blockquote>
<p>You can block traffic on :80 through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through https and you don't want to waste the extra GCE forwarding rule, eg:</p>
</blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
# This assumes tls-secret exists.
# To generate it run the make in this directory.
- secretName: tls-secret
backend:
serviceName: echoheaders-https
servicePort: 80
</code></pre>
|
<p>I need to install a Kubernetes cluster in complete offline mode. I can follow all the instructions at <a href="http://kubernetes.io/docs/getting-started-guides/scratch/" rel="noreferrer">http://kubernetes.io/docs/getting-started-guides/scratch/</a> and install from binaries but that seems like an involved setup. The installation using <code>kubeadm</code> is pretty easy but I don't see any docs on whether I can install the cluster by downloading the <code>.deb</code> packages locally.</p>
<p>Any pointers to that direction are much appreciated.</p>
| <p>I don't think that anyone has documented this yet. The biggest thing needed is to get the right images pre-loaded on every machine in the cluster. After that things should just work.</p>
<p>There was some discussion of this in this PR: <a href="https://github.com/kubernetes/kubernetes/pull/36759" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/36759</a>.</p>
<p>If I had the bandwidth I'd implement a <code>kubeadm list-images</code> so we could do <code>docker save $(kubeadm list-images) | gzip > kube-images.tar.gz</code>. You could manually construct that list by reading code and such.</p>
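<p>Until something like that exists, a rough manual sketch would be to save and load the images yourself (the image names and tags below are only illustrative - check which images your kubeadm/Kubernetes version actually needs):</p>
<pre><code># on a machine with internet access
IMAGES="gcr.io/google_containers/kube-apiserver-amd64:v1.5.1
gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1
gcr.io/google_containers/kube-scheduler-amd64:v1.5.1
gcr.io/google_containers/kube-proxy-amd64:v1.5.1
gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm
gcr.io/google_containers/pause-amd64:3.0"

for img in $IMAGES; do docker pull "$img"; done
docker save $IMAGES | gzip > kube-images.tar.gz

# copy kube-images.tar.gz (plus the kubeadm/kubelet/kubectl .deb files) to every offline machine, then
gunzip -c kube-images.tar.gz | docker load
</code></pre>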
|
<p>I have a docker image that serves a simple static web page.
I have a working Kubernetes cluster of 4 nodes (physical servers not in the cloud anywhere).</p>
<p>I want to run that docker image on 2 of the 4 Kubernetes nodes, have it be accessible to the world outside the cluster, load balanced, and have it move to another node if one dies.</p>
<p>Do I need to make a pod, then a replication controller, then a kube proxy or something?
Or do I need to just make a replication controller and expose it somehow?
Do I need to make a service?</p>
<p>I don't need help with how to make any of those things, as that seems well documented, but I can't tell what I need to make.</p>
| <p>What you need is to <strong>expose</strong> your <strong>service</strong> (that consists of <strong>pods</strong> which are run/scaled/restarted by your <strong>replication controller</strong>). Using <strong>deployment</strong> instead of replication controller has additional benefits (mainly for updating the app).</p>
<p>If you are on bare metal then you probably wish to expose your service via <code>type: NodePort</code> - so every node in your cluster will open a static port that routes traffic to pods.</p>
<p>You can then either point your load balancer to that nodes on that port, or make a DNS entry with all Kubernetes nodes.</p>
<p>Docs: <a href="http://kubernetes.io/docs/user-guide/quick-start/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/quick-start/</a></p>
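<p>A minimal sketch of such a NodePort service (names, labels and ports are placeholders) could be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-static-site
spec:
  type: NodePort
  selector:
    app: my-static-site      # must match the labels in your pod template
  ports:
  - port: 80                 # cluster-internal port
    targetPort: 80           # port the container listens on
    nodePort: 30080          # static port opened on every node (30000-32767 by default)
</code></pre>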
|
<p>I need to delete a kubernetes deployment resource using REST API. That's possible and it works, but I just found out that while the deployment resource is deleted ok, its associated ReplicaSet is not.</p>
<p>That means its pods are still running.</p>
<p>I don't know how to find name of a ReplicaSet associated to a Deployment.</p>
<p>I can see it when using kubectl: <em>kubectl describe deployment mydeployment</em>, but I can't find a REST method to get that information.</p>
<p>Is there a way?</p>
| <p>An easy way to find the associated API calls is to run the corresponding <code>kubectl</code> command with a higher level of verbosity (<code>--v=6</code> or <code>--v=9</code>). </p>
<pre><code>#~ kubectl delete deployment nginx-deployment --v=6
I1201 12:26:16.511683 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment 200 OK in 50 milliseconds
I1201 12:26:16.568980 6235 round_trippers.go:318] PUT https://XXX/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment 200 OK in 50 milliseconds
I1201 12:26:17.621751 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment 200 OK in 50 milliseconds
I1201 12:26:17.680228 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets?labelSelector=app%3Dnginx 200 OK in 50 milliseconds
I1201 12:26:17.738684 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 56 milliseconds
I1201 12:26:18.790243 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 49 milliseconds
I1201 12:26:18.843446 6235 round_trippers.go:318] PUT https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 50 milliseconds
I1201 12:26:18.894538 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 49 milliseconds
I1201 12:26:19.946417 6235 round_trippers.go:318] GET https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 49 milliseconds
I1201 12:26:20.001367 6235 round_trippers.go:318] DELETE https://XXX/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473 200 OK in 53 milliseconds
I1201 12:26:20.055669 6235 round_trippers.go:318] DELETE https://XXX/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment 200 OK in 53 milliseconds
</code></pre>
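<p>So, for the REST API the useful call is the <code>GET .../replicasets?labelSelector=...</code> you can see above: kubectl looks up the ReplicaSet(s) through the label selector of the Deployment's pod template, scales them down with the PUT calls, and only then issues the DELETEs. A rough sketch with curl (token, apiserver address, namespace and label are placeholders taken from the trace above) would be:</p>
<pre><code># find the ReplicaSet(s) belonging to the deployment via its label selector
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://$APISERVER/apis/extensions/v1beta1/namespaces/default/replicasets?labelSelector=app%3Dnginx"

# after scaling them down (or deleting their pods), delete the deployment and the replica set
curl -k -X DELETE -H "Authorization: Bearer $TOKEN" \
  "https://$APISERVER/apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment"
curl -k -X DELETE -H "Authorization: Bearer $TOKEN" \
  "https://$APISERVER/apis/extensions/v1beta1/namespaces/default/replicasets/nginx-deployment-4087004473"
</code></pre>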
|
<p>We run our cluster with two nodes of type standard 2: 7.5Gb and 2vCPU</p>
<p>Is there any recommended minimum size for a cluster on GKE? I assume there is no real master as this is a managed "service"?</p>
<p>I'm struggling to deal with resource limits.</p>
| <p>There is no recommended minimum size. However, pods have both <a href="http://kubernetes.io/docs/user-guide/compute-resources/" rel="nofollow noreferrer">CPU and memory requests and limits.</a> </p>
<p>Requests define how much free CPU/memory there must be on a node so a pod can be scheduled there; that amount is then reserved for that pod and won't be considered 'free' for scheduling of a next pod.<br>
On the other hand, limits define the maximum amount a pod can ask for - these can be overcommitted. </p>
<p>Try looking at your <code>kubectl describe nodes</code> output, which lists all pods and their requests and limits. By default, the requests are 100m (10% of a core) - if you know that some of your pods don't need that much, set this lower. Then you will be able to schedule more pods on a node or at least work out the number of nodes you need. </p>
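<p>For illustration, requests and limits are set per container in the pod spec, e.g. (the numbers are just an example):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  resources:
    requests:
      cpu: 50m        # 5% of a core reserved for scheduling purposes
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi
</code></pre>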
|
<p>I have been trying to setup Jenkins to utilize Kubernetes as in the tutorials. I have everything working really well, but I have been trying to add some custom images using the Kubernetes Jenkins plugin. It seems that any public images work just fine, but when I create an image and put it in my private Container Registry, the Jenkins slave fails miserably. </p>
<p>I wanted to find out how best to utilize the images in my Container Registry within Jenkins. I found this tutorial (<a href="https://cloud.google.com/solutions/jenkins-on-container-engine#customizing_the_docker_image" rel="nofollow noreferrer">https://cloud.google.com/solutions/jenkins-on-container-engine#customizing_the_docker_image</a>). When I tried those steps by building the jenkins-slave image and pushing it to my repo, it did not work. Every time it complains the slave is offline and is unable to be reached.</p>
| <p>I've never tried Google Container Registry, but from what I understand you can just use the complete repo + image name. Something like:
gcr.io/my_project/image:tag</p>
<p>Make sure your images/repo are under the same service account as your kubernetes and jenkins on google cloud!</p>
|
<p>I have a (Spring Boot/Spring Cloud) application (micro-service 'MS' architecture) built with Netflix tools, which I want to deploy on a Kubernetes cluster (one master and 2 minions) to take advantage of its orchestration.</p>
<p>By the way, I created a kube-dns service on the cluster and I also tried to mount a Eureka service (named eurekaservice) with 3 pods.
On the other side, I run a micro-service with the following Eureka configuration:</p>
<pre><code>client:
serviceUrl:
defaultZone: http://eurekaservice:8761/eureka/
</code></pre>
<p>The good news is that every Eureka pod on the cluster gets notified about the newly mounted MS instance. The bad news is that when the MS goes down, only one Eureka pod gets notified and the others don't.
Another thing is that when I look at the MS log file, while mounted, it shows me the following errors:</p>
<pre><code>Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: 2016-12-01 06:01:54.469 ERROR 1 --- [nio-8761-exec-1] c.n.eureka.resources.StatusResource: Could not determine if the replica is available
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]:
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: java.lang.NullPointerException: null
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at com.netflix.eureka.resources.StatusResource.isReplicaAvailable(StatusResource.java:90)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at com.netflix.eureka.resources.StatusResource.getStatusInfo(StatusResource.java:70)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at org.springframework.cloud.netflix.eureka.server.EurekaController.status(EurekaController.java:63)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at java.lang.reflect.Method.invoke(Method.java:606)
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.918 WARN 1 --- [nio-8761-exec-8] com.netflix.eureka.InstanceRegistry : DS: Registry: lease doesn't exist, registering resource: MS - gateway-bbn50:MS:8090
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.919 WARN 1 --- [nio-8761-exec-8] c.n.eureka.resources.InstanceResource : Not Found (Renew): MS - gateway-bbn50:MS:8090
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.927 INFO 1 --- [nio-8761-exec-5] com.netflix.eureka.InstanceRegistry : Registered instance id 12.16.64.2 with status UP
Dec 01 09:02:17 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:17.061 INFO 1 --- [io-8761-exec-10] com.netflix.eureka.InstanceRegistry : Registered instance id 12.16.64.2 with status UP
Dec 01 09:02:46 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:46.932 WARN 1 --- [nio-8761-exec-9] com.netflix.eureka.InstanceRegistry : DS: Registry: lease doesn't exist, registering resource: MS - gateway-bbn50:MS:8090
</code></pre>
<p>I think what was causing the problem is that the replicas were unable to see each other.</p>
<p>How can I resolve this issue?</p>
| <p>The problem here is that Eureka is a stateful app and you cannot scale it by just increasing the number of replicas.</p>
<p>See the Eureka "peer awereness" docs : <a href="http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html" rel="nofollow">http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html</a></p>
|
<p>The status of nodes is reported as <code>unknown</code></p>
<pre><code>"conditions": [
{
"type": "Ready",
"status": "Unknown",
"lastHeartbeatTime": "2015-11-12T06:03:19Z",
"lastTransitionTime": "2015-11-12T06:04:03Z",
"reason": "Kubelet stopped posting node status."
}
</code></pre>
<p>while <code>kubectl get nodes</code> returns a NotReady status. What does this imply and how do I fix it?</p>
| <h3>Get nodes</h3>
<pre><code>kubectl get nodes
</code></pre>
<p>Result:</p>
<pre><code>NAME STATUS AGE
192.168.1.157 NotReady 42d
192.168.1.158 Ready 42d
192.168.1.159 Ready 42d
</code></pre>
<h3>Describe node</h3>
<p>Here the node <code>192.168.1.157</code> is <strong>NotReady</strong>. To debug this NotReady node, you can read the official documents - <a href="http://kubernetes.io/docs/user-guide/introspection-and-debugging/" rel="noreferrer">Application Introspection and Debugging</a>.</p>
<pre><code>kubectl describe node 192.168.1.157
</code></pre>
<p>Partial Result:</p>
<pre><code>Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk Unknown Sat, 28 Dec 2016 12:56:01 +0000 Sat, 28 Dec 2016 12:56:41 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Sat, 28 Dec 2016 12:56:01 +0000 Sat, 28 Dec 2016 12:56:41 +0000 NodeStatusUnknown Kubelet stopped posting node status.
</code></pre>
<p>There is an <strong>OutOfDisk</strong> condition on my node, hence <strong>Kubelet stopped posting node status.</strong>
So, I must free some disk space. Using the <code>df</code> command on my <em>Ubuntu 14.04</em> I can check the disk usage details, and using <code>docker rmi image_id/image_name</code> as <code>su</code> I can remove the unused images.</p>
<h3>Log in to the node</h3>
<p>Log in to <code>192.168.1.157</code> using <em>ssh</em>, like <code>ssh [email protected]</code>, and switch to root with <code>sudo su</code>;</p>
<h3>Restart kubelet</h3>
<pre><code>/etc/init.d/kubelet restart
</code></pre>
<p>Result:</p>
<pre><code>stop: Unknown instance:
kubelet start/running, process 59261
</code></pre>
<h3>Get nodes again</h3>
<p>On the master:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>Result:</p>
<pre><code>NAME STATUS AGE
192.168.1.157 Ready 42d
192.168.1.158 Ready 42d
192.168.1.159 Ready 42d
</code></pre>
<p>Ok, that node works fine.</p>
<p>Here is a reference: <a href="https://opensource.ncsa.illinois.edu/confluence/display/~lambert8/Kubernetes" rel="noreferrer">Kubernetes</a></p>
|
<p>I would like to be informed whenever a service is changed on Kubernetes, using client-go.</p>
| <p>this can be done like this:</p>
<pre><code>package main
import (
"fmt"
"flag"
"time"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/pkg/api/v1"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/pkg/fields"
)
var (
kubeconfig = flag.String("kubeconfig", "./config", "absolute path to the kubeconfig file")
)
func main() {
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
    // watch Service objects in the default namespace
    watchlist := cache.NewListWatchFromClient(clientset.Core().RESTClient(), "services", v1.NamespaceDefault,
        fields.Everything())
_, controller := cache.NewInformer(
watchlist,
&v1.Service{},
time.Second * 0,
cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
fmt.Printf("service added: %s \n", obj)
},
DeleteFunc: func(obj interface{}) {
fmt.Printf("service deleted: %s \n", obj)
},
UpdateFunc:func(oldObj, newObj interface{}) {
fmt.Printf("service changed \n")
},
},
)
    stop := make(chan struct{})
    // run the informer's controller in the background; it invokes the handlers above on add/update/delete
    go controller.Run(stop)
    // block so the program keeps watching
    for {
        time.Sleep(time.Second)
    }
}
</code></pre>
|
<p>I would like to run a Cassandra cluster under Kubernetes on Google Container Engine using the examples given here: <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra</a></p>
<p>The file describes 3 ways to setup the cluster - PetSet(StatefulSet), Replication Controller and DaemonSet. Each one of them has its pros and cons.</p>
<p>While trying to choose the best setup for me, I noticed that I cannot figure out what to do with the storage and backups. </p>
<ol>
<li>How can I set or scale the storage size (increase/decrease node/cluster <strong>data storage</strong> size without data loss) ?</li>
<li>How do I manage backups and restores?</li>
</ol>
| <p>You should definitely check out Flocker and Flockerhub from ClusterHQ. I've been playing around with their products in order to prove with a POC that containerized sharded db's can be done in an easy and manageable way. Make sure to check them out:
<a href="https://clusterhq.com/" rel="nofollow noreferrer">https://clusterhq.com/</a></p>
<p>They are handling data the same way as docker images are being handled. So you will be able to push and pull data volumes into a hub/repository.</p>
|
<p>I have a VM installed on my desktop, which is running Windows 8, and with minikube I
am running a single-node Kubernetes cluster on the VM. Now I want to expose a service so as to access it from outside my VM, e.g. from the Chrome browser on my desktop or from anywhere else. </p>
<p>I have already tried the "kubectl expose" command but didn't succeed. So what should i do to implement this?</p>
| <p>Did you use the NodePort service type when you made the Kubernetes service? If so, you can access your service through your VM's IP and the port you assigned with the NodePort k8s service.</p>
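<p>For example (deployment/service names are placeholders), something along these lines:</p>
<pre><code># expose your deployment on a NodePort
kubectl expose deployment my-app --type=NodePort --port=80

# find out where to reach it
minikube ip                          # the VM's IP
kubectl describe service my-app      # shows the assigned NodePort (30000-32767)

# or let minikube print the URL directly
minikube service my-app --url
</code></pre>
<p>You should then be able to open that URL from the Chrome browser on your desktop.</p>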
|
<p>We are debating the best node size for our production GKE cluster.</p>
<p>Is it better to have more smaller nodes or less larger nodes in general?</p>
<p>e.g. we are choosing between the following two options</p>
<ol>
<li>3 x n1-standard-2 (7.5GB 2vCPU)</li>
<li>2 x n1-standard-4 (15GB 4vCPU)</li>
</ol>
<p>We run on these nodes:</p>
<ul>
<li>Elastic search cluster</li>
<li>Redis cluster</li>
<li>PHP API microservice</li>
<li>Node API microservice</li>
<li>3 x seperate Node / React websites</li>
</ul>
| <p>Two things to consider in my opinion:</p>
<ul>
<li><strong>Replication</strong>: </li>
</ul>
<p>services like Elasticsearch or Redis cluster / sentinel are only able to provide reliable redundancy if there are enough Pods running the service: if you have 2 nodes and 5 Elasticsearch Pods, chances are 3 Pods will be on one node and 2 on the other: your maximum replication will be 2. If you happen to have 2 replica Pods on the same node and it goes down, you lose the whole index.</p>
<p>[EDIT]: if you use persistent block storage (this is best for persistence but is complex to set up since each node needs its own block, making scaling tricky), you would not 'lose the whole index', but this is only true if you rely on local storage.</p>
<p>For this reason, more nodes is better.</p>
<ul>
<li><strong>Performance</strong>:</li>
</ul>
<p>Obviously, you need enough resources. Smaller nodes have fewer resources, so if a Pod starts getting lots of traffic, it will reach its limit more easily and Pods will be evicted. </p>
<p>Elasticsearch is quite a memory hog. You'll have to figure out whether running all these Pods requires bigger nodes.</p>
<p>In the end, as your needs grow, you will probably want to use a mix of different capacity nodes, which in GKE will have labels for capacity that can be used to set resource quotas and limits for memory and CPU. You can also add your own labels to ensure certain Pods end up on certain types of nodes.</p>
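<p>As a small sketch of that last point (label name and value are arbitrary), you could label the bigger nodes and pin the memory-hungry Pods to them:</p>
<pre><code># label the larger nodes once
kubectl label nodes my-big-node-1 nodepool=highmem

# then in the pod template of the Elasticsearch deployment/statefulset
spec:
  nodeSelector:
    nodepool: highmem
  containers:
  - name: elasticsearch
    image: elasticsearch:2.4
    resources:
      requests:
        memory: 4Gi
</code></pre>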
|
<p>AWS + Kubeadm (k8s 1.4)
I tried following the README at: </p>
<blockquote>
<p><a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a></p>
</blockquote>
<p>but that doesn't seem to work. I asked around in Slack, and it seems the YAMLs are outdated, so I had to modify them as follows.</p>
<p>First I deployed default-http-backend using the yaml found on git:</p>
<blockquote>
<p><a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default-backend.yaml" rel="nofollow">https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default-backend.yaml</a></p>
</blockquote>
<p>Next, the ingress-RC i had to modify:</p>
<blockquote>
<p><a href="https://gist.github.com/lilnate22/5188374" rel="nofollow">https://gist.github.com/lilnate22/5188374</a></p>
</blockquote>
<p>(note the change to get path to <code>healthz</code> to reflect <code>default-backend</code> as well as the port change to <code>10254</code> which is apparently needed according to slack)</p>
<p>Everything is running fine:
with <code>kubectl get pods</code> I see the ingress-controller, and
with <code>kubectl get rc</code> I see 1 1 1 for the ingress-rc.</p>
<p>I then deployed the simple <strong>echoheaders</strong> application (according to the git readme):</p>
<pre><code>kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
</code></pre>
<p>Next I created a simple ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: echoheaders-x
servicePort: 80
</code></pre>
<p>Both <code>get ing</code> and <code>describe ing</code> give me a good sign:</p>
<pre><code>Name: test-ingress
Namespace: default
Address: 172.30.2.86 <---this is my private ip
Default backend: echoheaders-x:80 (10.38.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
* * echoheaders-x:80 (10.38.0.2:8080)
</code></pre>
<p>But attempting to go to the node's public IP doesn't seem to work, as I am getting "unable to reach server".</p>
| <p>Unfortunately it seems that using ingress controllers with Kubernetes clusters set up using kubeadm is not supported at the moment.</p>
<p>The reason for this is that the ingress controllers specify a <strong>hostPort</strong> in order to become available on the public IP of the node, but the cluster created by kubeadm uses the <a href="http://kubernetes.io/docs/admin/network-plugins/#cni" rel="noreferrer">CNI</a> network plugin <a href="https://github.com/kubernetes/kubernetes/issues/31307" rel="noreferrer">which does not support</a> <strong>hostPort</strong> at the moment.</p>
<p>You may have better luck <a href="http://kubernetes.io/docs/getting-started-guides/" rel="noreferrer">picking a different way to set up the cluster</a> which does not use CNI.</p>
<p>Alternatively, you can edit your ingress-rc.yaml to declare "hostNetwork: true" under the "spec:" section. Specifying <a href="http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec" rel="noreferrer">hostNetwork</a> will cause the containers to run using the host's network namespace, giving them access to the network interfaces, routing tables and iptables rules of the host. Think of this as equivalent to "docker run" with the option --network="host".</p>
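<p>For the second alternative, the change is only one line under the pod spec of the controller's template, roughly like this (the image shown is only an example - use whichever nginx ingress controller image your RC already declares):</p>
<pre><code>spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      hostNetwork: true        # use the node's network namespace, like docker run --network="host"
      containers:
      - name: nginx-ingress-lb
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        ports:
        - containerPort: 80
</code></pre>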
|
<p>I am having trouble using the HTTP Load Balancer (Layer 7) with a service in GKE. I originally exposed the service in GKE using the LoadBalancer service type:</p>
<p>kubectl expose deployment myservice --type="LoadBalancer"</p>
<p>This will create a public IP address for my cluster in the same zone and it has been working well. My goal is to use a global IP address with SSL support for my service. This is what I have tried:</p>
<ol>
<li>expose the GKE service as NodePort type
kubectl expose deployment myservice --type=“NodeType”</li>
<li>follow the tutorial <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">here</a> to create an ingress object. It didn’t work because the service in GKE is already in a managed instance group, and the ingress object would create a new instance group with zero instance. I found a discussion <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/49" rel="nofollow noreferrer">here</a>.</li>
<li>follow the suggestion in the discussion. Deleted to ingress object, and try to set up the HTTP Load Balancer manually from google cloud console.
<ul>
<li>Added a firewall rules to allow 130.211.0.0/22 on port 80 and 8081 (my service port) on all targets.</li>
<li>Backend service is the managed Instance group created by GKE.</li>
<li>Created a health check on port 8081, path /health.
It still doesn’t work because the load balancer can recognize the correct number of instances in the managed instance group, but there are zero healthy nodes. My service has a health check endpoint myservice:8081/health and the root path "/" also returns HTTP 200 OK. I played with the health check configurations but the load balancer can never find a healthy node. </li>
</ul></li>
</ol>
<p>What am I missing? </p>
| <p>It turns out that if I want to follow the ingress tutorial, I cannot define an HTTP(S) Load Balancer on the same endpoint. You either choose to follow the tutorial and let the ingress object define an LB automatically, which worked for me after deleting my testing LB, or do not use ingress and define the LB manually (which I never got working).</p>
|
<p>Is it possible to create a replication controller and service for a containerized application using one configuration file (yml/json)</p>
| <p>Yes, you can have a normal YAML array of objects under the <code>List</code> type; a typical example can be found in the main repo, e.g. <a href="https://raw.githubusercontent.com/kubernetes/kubernetes/master/hack/testdata/list.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/kubernetes/master/hack/testdata/list.yaml</a></p>
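<p>A minimal sketch combining a Service and a ReplicationController in a single file (names and image are placeholders) could look like this; alternatively you can simply separate the two objects with a <code>---</code> line and <code>kubectl create -f</code> will handle that as well:</p>
<pre><code>apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    selector:
      app: my-app
    ports:
    - port: 80
- apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-app
  spec:
    replicas: 2
    selector:
      app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: my-app
          image: nginx
          ports:
          - containerPort: 80
</code></pre>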
|
<p><code>kubectl get nodes</code> shows all nodes with status <code>NotReady</code>. </p>
<p>What actions should I take to diagnose and fix the issue?</p>
<p>I tried <code>kubectl drain</code> and the status changed to <code>NotReady,SchedulingDisabled</code>, and then <code>kubectl uncordon</code> changed it back to NotReady.</p>
<p>Master version 1.4.6</p>
| <p>If you run <code>kubectl get nodes</code> you'll get the node IDs.</p>
<p>Then, usually, you can find relevant info about such problems by running <code>kubectl describe node node_id</code> (<code>node_id</code> being the one you've seen listed by the previous command).</p>
|
<p>I have a (Spring Boot/Spring Cloud) application (micro-service 'MS' architecture) built with Netflix tools, which I want to deploy on a Kubernetes cluster (one master and 2 minions) to take advantage of its orchestration.</p>
<p>By the way, I created a kube-dns service on the cluster and I also tried to mount a Eureka service (named eurekaservice) with 3 pods.
On the other side, I run a micro-service with the following Eureka configuration:</p>
<pre><code>client:
serviceUrl:
defaultZone: http://eurekaservice:8761/eureka/
</code></pre>
<p>The good news is that every Eureka pod on the cluster gets notified about the newly mounted MS instance. The bad news is that when the MS goes down, only one Eureka pod gets notified and the others don't.
Another thing is that when I look at the MS log file, while mounted, it shows me the following errors:</p>
<pre><code>Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: 2016-12-01 06:01:54.469 ERROR 1 --- [nio-8761-exec-1] c.n.eureka.resources.StatusResource: Could not determine if the replica is available
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]:
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: java.lang.NullPointerException: null
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at com.netflix.eureka.resources.StatusResource.isReplicaAvailable(StatusResource.java:90)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at com.netflix.eureka.resources.StatusResource.getStatusInfo(StatusResource.java:70)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at org.springframework.cloud.netflix.eureka.server.EurekaController.status(EurekaController.java:63)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Dec 01 09:01:54 ctc-cicd3 docker-current[1465]: at java.lang.reflect.Method.invoke(Method.java:606)
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.918 WARN 1 --- [nio-8761-exec-8] com.netflix.eureka.InstanceRegistry : DS: Registry: lease doesn't exist, registering resource: MS - gateway-bbn50:MS:8090
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.919 WARN 1 --- [nio-8761-exec-8] c.n.eureka.resources.InstanceResource : Not Found (Renew): MS - gateway-bbn50:MS:8090
Dec 01 09:02:16 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:16.927 INFO 1 --- [nio-8761-exec-5] com.netflix.eureka.InstanceRegistry : Registered instance id 12.16.64.2 with status UP
Dec 01 09:02:17 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:17.061 INFO 1 --- [io-8761-exec-10] com.netflix.eureka.InstanceRegistry : Registered instance id 12.16.64.2 with status UP
Dec 01 09:02:46 ctc-cicd3 docker-current[1465]: 2016-12-01 06:02:46.932 WARN 1 --- [nio-8761-exec-9] com.netflix.eureka.InstanceRegistry : DS: Registry: lease doesn't exist, registering resource: MS - gateway-bbn50:MS:8090
</code></pre>
<p>I think what was causing the problem is that the replicas were unable to see each other.</p>
<p>How can I resolve this issue?</p>
| <p>As it seems you want to replace Eureka discovery with Kubernetes discovery, here is another answer (from <a href="http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html" rel="nofollow noreferrer">http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html</a>):</p>
<p>First disable eureka support in Ribbon:</p>
<pre><code>ribbon:
eureka:
enabled: false
</code></pre>
<p>Then add a config property like this in your gateway for every microservice you want to access through your gateway (you could load those properties with a config server).</p>
<pre><code>app1:
ribbon:
listOfServers: app1-k8s-service-name
app2:
ribbon:
listOfServers: app2-k8s-service-name
</code></pre>
<p>Then your gateway should be able to correctly route calls to your microservices.</p>
|
<p>I am new to Kubernetes and so I'm wondering what are the best practices when it comes to putting your app's source code into container run in Kubernetes or similar environment?</p>
<p>My app is a PHP so I have PHP(fpm) and Nginx containers(running from Google Container Engine)</p>
<p>At first, I had a git volume, but there was no way of changing app versions like this, so I switched to emptyDir, keeping my source code in a zip archive in one of the images, which would unzip it into this volume upon start. Now I have the source code separately in both images via git, with a separate git directory, so I have /app and /app-git.</p>
<p>This is good because I do not need to share or configure volumes(less resources and configuration), the app's layer is reused in both images so no impact on space and since it is git the "base" is built in so I can simply adjust my dockerfile command at the end and switch to different branch or tag easily.</p>
<p>I wanted to download an archive with the source code directly from the repository by providing credentials as arguments during the build process, but that did not work because my repo, Bitbucket, creates archives with the last commit id appended to the directory, so there was no way of knowing what unpacking the archive would result in, so I got stuck with git itself.</p>
<p>What are your ways of handling the source code?</p>
| <p>Ideally, you would use continuous delivery patterns, which means use Travis CI, Bitbucket pipelines or Jenkins to build the image on code change.</p>
<p>That is, every time your code changes, your automated build will get triggered and build a new Docker image, which will contain your source code. Then you can trigger a Deployment rolling update to update the Pods with the new image.</p>
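<p>For example (registry, deployment and container names are placeholders), the CI job could end with something like:</p>
<pre><code>docker build -t my-registry/my-app:$GIT_COMMIT .
docker push my-registry/my-app:$GIT_COMMIT

# triggers a rolling update of the Pods to the new image
kubectl set image deployment/my-app my-app=my-registry/my-app:$GIT_COMMIT
</code></pre>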
<p>If you have dynamic content, you likely put this in persistent storage, which will be re-mounted on Pod update.</p>
|
<p>Sometimes people create a deployment without a liveness/readiness probe. How can we patch a probe into that deployment? I tried to use PATCH with "Content-Type: application/strategic-merge-patch+json" but it doesn't work.</p>
<p>On the other hand, if we created a deployment with probe how can we remove it?</p>
| <p>You should be able to do </p>
<pre><code>kubectl edit deployment <your deployment>
</code></pre>
<p>and the yaml from the currently running deployment should pop up in your default editor. </p>
<p>Edit it (add/remove probe) and save and kubectl will apply the new file automatically.</p>
<p>Of course, a better way is to have the deployment yaml on disk, change it to contain the probe and run</p>
<pre><code>kubectl apply -f <the yaml file>
</code></pre>
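<p>As a sketch, the probe you add or remove lives under the container in the deployment's pod template, e.g. (path, port and timings are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
</code></pre>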
|
<p>I would like to be able to get a description of my current state of the cluster so that in the future I would be able to recover from a failure. Aside from recreating all of the services from source/cli individually, what solutions are available?</p>
<p>Update: this is a really old method. We now have much better tools to back up k8s clusters, like <a href="https://heptio.github.io/velero/v0.11.0/" rel="nofollow noreferrer">velero</a>.</p>
<p>I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. I'm using it more for cluster migration, but at some level it can be used for backups too.</p>
<pre><code>for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
if { [ "$ns" != "kube-system" ]; }; then
kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,petsets | \
jq '.items[] |
select(.type!="kubernetes.io/service-account-token") |
del(
.spec.clusterIP,
.metadata.uid,
.metadata.selfLink,
.metadata.resourceVersion,
.metadata.creationTimestamp,
.metadata.generation,
.status,
.spec.template.spec.securityContext,
.spec.template.spec.dnsPolicy,
.spec.template.spec.terminationGracePeriodSeconds,
.spec.template.spec.restartPolicy
)' >> "./my-cluster.json"
fi
done
</code></pre>
<p>In case you need to revocer the state after, you just need to execute <code>kubectl create -f ./my-cluster.json</code></p>
|
<p>I've got a 5 node kubernetes cluster set up with 3 HA masters working nicely. Unfortunately DNS is not working or doesn't exist as a service to my knowledge.</p>
<p>The api-server, controller-manager, and scheduler are all running in pods and working correctly using the hyperkube 1.4.6 image on quay. I've created manifests for the dns service following <a href="https://coreos.com/kubernetes/docs/latest/deploy-addons.html" rel="nofollow noreferrer">https://coreos.com/kubernetes/docs/latest/deploy-addons.html</a> in /etc/kubernetes/addons but they don't seem to have an effect. I was under the impression that DNS was built in to kubernetes at this point but I'm having a hard time figuring out what component it's built in to or how to start it.</p>
<p>Does the <code>apiserver</code> or the <code>kubelet</code> read the contents of <code>/etc/kubernetes/addons</code>? I'm wondering if I need to mount <code>/etc/kubernetes/addons</code> as a volume on the <code>apiserver</code> pod.</p>
| <p>The addons are handled differently by different deployment methods. The <a href="https://coreos.com/kubernetes/docs/latest/deploy-addons.html" rel="nofollow noreferrer">CoreOS method</a> that you linked to simply has you start them by hand using <code>kubectl create -f dns-addon.yml</code>; there is no automation around that. If you did not run that command, give it a try and see if that solves your issue.</p>
<p>As mentioned above, other deployment methods do this as part of the deployment. The Salt-based <code>kube-up.sh</code> method, for example, uses a "watcher" pod called the <code>kube-addon-manager</code>(<a href="https://github.com/kubernetes/kubernetes/blob/ef0e13bd7d41d678d14d591341982e41db16bef5/cluster/saltbase/salt/kube-addons/kube-addon-manager.yaml" rel="nofollow noreferrer">manifest</a>, <a href="https://github.com/kubernetes/kubernetes/tree/v1.4.3/cluster/addons/addon-manager" rel="nofollow noreferrer">code</a>). The <code>kops</code> deployment method deploys <code>kube-dns</code>, but uses <code>kubectl</code> for other addons, as outlined <a href="https://github.com/kubernetes/kops/blob/master/docs/addons.md" rel="nofollow noreferrer">here</a>. Since these addons are really not different from normal applications running on Kubernetes in the sense that they are just using normal Kubernetes manifests, there is some variation out there. You basically can take what your deployment method gives you and alter it to the needs of your environment.</p>
|
<p>I have an off-the-shelf Kubernetes cluster running on AWS, installed with the <code>kube-up</code> script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:</p>
<pre><code> > kubectl get pod
NAME READY STATUS RESTARTS AGE
maestro-kubetest-d37hr 0/1 Error: image csats/maestro:latest not found 0 22m
</code></pre>
<p>I've created a secret containing a <code>.dockercfg</code> file. I've confirmed it works by running the script posted <a href="https://github.com/kubernetes/kubernetes/issues/499#issuecomment-131162957" rel="noreferrer">here</a>:</p>
<pre><code> > kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
</code></pre>
<p>I've confirmed I'm not using <a href="https://github.com/kubernetes/kubernetes/issues/12626" rel="noreferrer">the new format of .dockercfg script</a>, mine looks like this:</p>
<pre><code>> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"[email protected]"}}
</code></pre>
<p>I've tried <a href="https://github.com/kubernetes/kubernetes/issues/499#issuecomment-132504379" rel="noreferrer">running the Base64 encode on Debian instead of OS X</a>, no luck there. (It produces the same string, as might be expected.)</p>
<p>Here's the YAML for my Replication Controller:</p>
<pre><code>---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
name: "maestro-kubetest"
spec:
replicas: 1
selector:
app: "maestro"
ecosystem: "kubetest"
version: "1"
template:
metadata:
labels:
app: "maestro"
ecosystem: "kubetest"
version: "1"
spec:
imagePullSecrets:
- name: "docker-hub-csatsinternal"
containers:
- name: "maestro"
image: "csats/maestro"
imagePullPolicy: "Always"
restartPolicy: "Always"
dnsPolicy: "ClusterFirst"
</code></pre>
<p><code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
</code></pre>
<p>Any ideas?</p>
| <p>Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container. </p>
<p>For example, if your Deployment yaml looks like</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mydeployment
namespace: kube-system
</code></pre>
<p>Then you must make sure the Secret yaml uses a matching namespace:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
namespace: kube-system
data:
.dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.</p>
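<p>As a sketch, the secret can be created directly in the matching namespace (all values here are placeholders):</p>
<pre><code>kubectl create secret docker-registry mysecret \
  --namespace=kube-system \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
</code></pre>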
|
<p>Assuming I have two multi-container pods and services for exposing the pods to the cluster. I also have Replication controller for maintaining the number of pods at any time. </p>
<p>The above cluster is an trivial example. In this point I have two pod files, two service files and two RC files. This makes the file management difficult. </p>
<p>I know that all the files can be put in a directory and <strong>kubectl create -f directory</strong> can be used to execute the whole thing in a single command. But I feel the file management is an overhead. Is there something like docker-compose.yml, where we can include all pods in a single file?</p>
<p>I would like to know the best practices for using Kubernetes in production. Changing many files in production does not seem to be a good idea.</p>
| <p>You can use triple dashes or List resource as in these examples:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/8fd414537b5143ab039cb910590237cabf4af783/hack/testdata/list.yaml" rel="noreferrer">List</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/blob/8fd414537b5143ab039cb910590237cabf4af783/test/fixtures/doc-yaml/user-guide/multi-pod.yaml" rel="noreferrer">Triple Dashes</a></li>
</ul>
<p>examples:</p>
<p><strong>List</strong></p>
<pre><code>apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: Service
metadata:
name: list-service-test
spec:
ports:
- protocol: TCP
port: 80
selector:
app: list-deployment-test
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: list-deployment-test
labels:
app: list-deployment-test
spec:
replicas: 1
template:
metadata:
labels:
app: list-deployment-test
spec:
containers:
- name: nginx
image: nginx
</code></pre>
<p><strong>Triple dash</strong>:</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
redis-sentinel: "true"
role: master
name: redis-master
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.5"
volumeMounts:
- mountPath: /redis-master-data
name: data
- name: sentinel
image: kubernetes/redis:v1
env:
- name: SENTINEL
value: "true"
ports:
- containerPort: 26379
volumes:
- name: data
emptyDir: {}
---
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis-proxy
role: proxy
name: redis-proxy
spec:
containers:
- name: proxy
image: kubernetes/redis-proxy:v1
ports:
- containerPort: 6379
name: api
</code></pre>
|
<p>I have a multi-tier app in Google Container Engine and I have a single ingress with single static IP address where I'll be forwarding multiple domains. Each domain will require its own ssl certificate(let's encrypt). The issue I am facing is that I do not see any option to use multiple certificates in the load balancer. It looks like each IP address has a single global forwarding rule that can forward to only one SSL(port 443) HTTPS proxy which can use only one SSL certificate.</p>
<p>I do not want to use multi-domain SSL certificate because it would be huge pita to manage.</p>
<p>This isn't some weird exotic use case but quite normal requirement. What is the proper solution here?</p>
<p>I was thinking of maybe setting up static IP for each domain and using routes to route to the single IP "front end", but each IP costs some money so having tens or hundreds of domains would be financially just crazy(100 domains would cost 1800 USD per month, data/traffic not included).</p>
| <p>Update:
GCE now supports SNI by attaching at most 10 SSL certificates to a load balancer. You can specify the SSL certificates list with TargetHttpsProxy or TargetSslProxy.</p>
<p>Reference:
<a href="https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates#multiple_ssl_certificate_example" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates#multiple_ssl_certificate_example</a></p>
<p>=======================<br>
You can use network load balancer and terminate the ssl connection on your VM instances. Note that network load balancing cannot forward your traffic to different regions. If you want that, you'll need to set up them separately in different regions.</p>
<p>Reference:
<a href="https://cloud.google.com/compute/docs/load-balancing/network/" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/load-balancing/network/</a></p>
|
<p>I have a multi-tier app in Google Container Engine and I have a single ingress with single static IP address where I'll be forwarding multiple domains. Each domain will require its own ssl certificate(let's encrypt). The issue I am facing is that I do not see any option to use multiple certificates in the load balancer. It looks like each IP address has a single global forwarding rule that can forward to only one SSL(port 443) HTTPS proxy which can use only one SSL certificate.</p>
<p>I do not want to use multi-domain SSL certificate because it would be huge pita to manage.</p>
<p>This isn't some weird exotic use case but quite normal requirement. What is the proper solution here?</p>
<p>I was thinking of maybe setting up static IP for each domain and using routes to route to the single IP "front end", but each IP costs some money so having tens or hundreds of domains would be financially just crazy(100 domains would cost 1800 USD per month, data/traffic not included).</p>
| <p>The <a href="http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingressspec" rel="nofollow noreferrer">TLS config is an array</a> where you can define <a href="http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingresstls" rel="nofollow noreferrer">multiple host to secret</a> mappings.</p>
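<p>As a rough sketch (hostnames, secret and service names are made up), an Ingress carrying several certificates could look like this; whether the cloud controller actually provisions all of them is a separate question:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-cert-ingress
spec:
  tls:
  - hosts:
    - site-a.example.com
    secretName: site-a-tls
  - hosts:
    - site-b.example.com
    secretName: site-b-tls
  rules:
  - host: site-a.example.com
    http:
      paths:
      - backend:
          serviceName: site-a-service
          servicePort: 80
  - host: site-b.example.com
    http:
      paths:
      - backend:
          serviceName: site-b-service
          servicePort: 80
</code></pre>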
|
<p>How does Kubernetes create Pods?</p>
<p>I.e. what are the sequential steps involved in creating a Pod, is it implemented in Kubernetes? </p>
<p>Any code reference in Kubernetes repo would also be helpful.</p>
| <p>A <strong>Pod</strong> is described in a definition file, and ran as a set of Docker containers on a given host which is part of the Kubernetes cluster, much like <code>docker-compose</code> does, but with several differences.</p>
<p>Precisely, a Pod always contains more Docker containers than the ones the user defines, even though only the user-defined containers are usually visible through the API: each Pod has one placeholder ("infrastructure") container, generated by Kubernetes, that holds the IP for the Pod. When a Pod is restarted, it's actually the user containers that are restarted, while the placeholder container remains and keeps the same IP, unlike in straight Docker or docker-compose, where recreating a composition or container changes the IP.</p>
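<p>If you have shell access to a node, you can see this placeholder for yourself; on a Docker-based node something along these lines should show the infrastructure ("pause") containers next to the user containers (exact image names vary with the setup):</p>
<pre><code># run on a worker node
docker ps | grep pause
</code></pre>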
<p>How Pods are scheduled, created, started, restarted if needed, re-scheduled and so on is a much longer story and a very broad question.</p>
|
<p>I am trying to connect to a JMX port that I have defined to be 1099 (as is the default) on my Java application (Java 8 and with Spring Boot 1.4 and Tomcat) using a client such as JConsole, Java Mission Control, or Java VisualVM, but I am reaching an error: </p>
<pre><code>java.rmi.ConnectException: Connection refused to host: 10.xxx.xxx.xx, nested exception is:
java.net.ConnectException: Connection timed out
...
</code></pre>
<p>Note that I hid the exact host IP, but it is the pod IP of the particular Kubernetes-managed Docker container that my service is deployed in. I try to connect to the JMX port using the following service URL:</p>
<pre><code>jconsole service:jmx:rmi://<nodeIP>:<nodePort>/jndi/rmi://<nodeIP>:<nodePort>/jmxrmi
</code></pre>
<p>I know that JMX opens a <a href="https://stackoverflow.com/questions/7163173/jmx-enabled-java-application-appears-to-open-a-random-high-order-port-when-jmx-c">random high port</a>, and I have tried to resolve that by including a <a href="https://stackoverflow.com/a/33089641">custom @Configuration class that forces that high port to also serve on port 1099</a>. I have gone into the actual container and pod and ran </p>
<pre><code>netstat -tulpn
</code></pre>
<p>to see the ports that are opened, and I have confirmed that the only ports opened are 8443 (which my application is running on) and 1099 (the JMX port); this indicates that my class works. I also ensured that port 1099 is open on the Kubernetes side, so that is not what is blocking it.</p>
<p>As many answers surrounding a JMX remote connection have suggested, I have tried many variations of the following Java options to no avail:</p>
<pre><code>-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.jmi.port=1099 -Djava.rmi.server.hostname=0.0.0.0
</code></pre>
<p>The answers have suggested forcing the JMX port and the RMI registry port to be the same, but none of these have worked. </p>
<p>I believe the issue may be because the hostname (as I tried to make dynamic with -Djava.rmi.server.hostname=0.0.0.0) can not resolve to the different hostnames (pod IPs) that are created every time my service is deployed. </p>
<p>Thus, it looks like the connection cannot complete because JMX is unable to see what hostname Kubernetes is assigning after my service is deployed. </p>
<p>Is there a way for JMX to recognize the Kubernetes hostname? Or, is there some other way to connect remotely to a JMX port through a Kubernetes-deployed service?</p>
<p><strong>EDIT 1:</strong> I have done some additional research, and maybe an optional JMXMP instead of RMI may work? Has anyone gotten this working with Tomcat?</p>
| <p>The JMX remote connection is a pain to work with, and proxying it is practically impossible in my view. I had similar problems and in the end I just used Jolokia to connect.</p>
<p>Jolokia is a JMX-HTTP bridge giving an alternative to JSR-160 connectors. It is an agent based approach with support for many platforms. In addition to basic JMX operations it enhances JMX remoting with unique features like bulk requests and fine grained security policies. -> <a href="http://jolokia.org" rel="nofollow noreferrer">http://jolokia.org</a></p>
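<p>As a rough sketch (the agent path, port and MBean below are just examples, not taken from the question), you attach the Jolokia JVM agent to the Java process in the container and then talk plain HTTP to it, which is much easier to route through Kubernetes services than RMI:</p>
<pre><code># start the app with the Jolokia JVM agent (path and port are assumptions)
java -javaagent:/opt/jolokia/jolokia-jvm-agent.jar=port=8778,host=0.0.0.0 -jar app.jar

# query MBeans over HTTP, e.g. heap usage, via the pod or service address
curl http://<pod-or-service>:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
</code></pre>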
|
<p>I deployed Kubernetes on AWS with KOPS and the nginx-ingress. </p>
<p>To evaluate multiple clouds (and cut costs), I want to deploy on GKE. Everything worked, except the darn Ingress's. (That was the hardest part on AWS).</p>
<p>Below is the Ingress I'm using on GKE. It makes two Ingresses in the dashboard, each with an IP address.</p>
<p>If I point my DNS at those addresses, the connection is refused. I'm checking the DNS resolution with ping.</p>
<p>All HTTPS fail to connect with "Unable to establish SSL connection.", except button which is "502 Bad Gateway"</p>
<p>HTTP fails to connect with 502 except admin which is 503.</p>
<p>In the Google Cloud Platform dashboard, I see two load balancers. "all" points to my SSL cert. "button" isn't doing HTTPS, but that's another problem.</p>
<p>Clearly I'm missing something. What did I miss?</p>
<p>I'm using kubectl v1.4.6 and whatever version on GKE would have installed yesterday.</p>
<pre><code>```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
# this is for nginx ingress controler on AWS
# kubernetes.io/ingress.class: "nginx"
name: all-ingress
spec:
tls:
- hosts:
- admin-stage.example.com
- dashboard-stage.example.com
- expert-stage.example.com
- signal-stage.example.com
- stage.example.com
secretName: tls-secret
rules:
- host: admin-stage.example.com
http:
paths:
- backend:
serviceName: admin-service
servicePort: http-port
path: /
- host: dashboard-stage.example.com
http:
paths:
- backend:
serviceName: dashboard-service
servicePort: http-port
path: /
- host: expert-stage.example.com
http:
paths:
- backend:
serviceName: expert-service
servicePort: http-port
path: /
- host: signal-stage.example.com
http:
paths:
- backend:
serviceName: signal-service
servicePort: http-port
path: /
- host: stage.example.com
http:
paths:
- backend:
serviceName: www-service
servicePort: http-port
path: /
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
# this is for nginx ingress controler on AWS
# kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "false"
name: button-ingress
spec:
tls:
- hosts:
- button-stage.example.com
secretName: tls-secret
rules:
- host: button-stage.example.com
http:
paths:
- backend:
serviceName: button-service
servicePort: http-port
path: /
```
</code></pre>
| <p>Prashanth's comments were helpful. In the end, native cloud Ingress (AWS/GCE) isn't finished enough in Kubernetes to be useful for my purposes. There's no point learning an abstraction that is more complicated and less functional than the thing underneath. </p>
<p>I ended up using the nginx-ingress from this answer: <a href="https://stackoverflow.com/questions/40745885/kubernetes-1-4-ssl-termination-on-aws">Kubernetes 1.4 SSL Termination on AWS</a></p>
<p>On the resulting Ingress there is an IP you can point DNS at (not the "External Endpoints" on the service). Good luck!</p>
|
<p>I'm using minikube on macOS 10.12 and trying to use a private image hosted at docker hub. I know that minikube launches a VM that as far as I know will be the unique node of my local kubernetes cluster and that will host all my pods.</p>
<p>I read that I could use the VM's docker runtime by running <code>eval $(minikube docker-env)</code>. So I used those variables to change from my local docker runtime to the other. Running <code>docker images</code> I could see that the change was done effectively.</p>
<p>My next step was to log in to Docker Hub using <code>docker login</code> and then pull my image manually, which ended without error. After that I thought the image would be ready to be used by any pod in the cluster, but I always get <code>ImagePullBackOff</code>. I also tried to ssh into the VM via <code>minikube ssh</code> and the result is the same: the image is there to be used, but for some reason I don't know, the pod refuses to use it.</p>
<p>In case it helps, this is my deployment description file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: godraude/nginx
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
</code></pre>
<p>And this is the output of <code>kubectl describe pod <podname></code>:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned web-deployment-2451628605-vtbl8 to minikube
1m 23s 4 {kubelet minikube} spec.containers{nginx} Normal Pulling pulling image "godraude/nginx"
1m 20s 4 {kubelet minikube} spec.containers{nginx} Warning Failed Failed to pull image "godraude/nginx": Error: image godraude/nginx not found
1m 20s 4 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "Error: image godraude/nginx not found"
1m 4s 5 {kubelet minikube} spec.containers{nginx} Normal BackOff Back-off pulling image "godraude/nginx"
1m 4s 5 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image \"godraude/nginx\""
</code></pre>
| <p>I think what you need is to create a secret that tells Kubernetes which registry to pull your private image from and what credentials to use:</p>
<pre><code>kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
</code></pre>
<p>The command below lists your secrets:</p>
<pre><code>kubectl get secret
NAME TYPE DATA AGE
my-secret kubernetes.io/dockercfg 1 100d
</code></pre>
<p>Now, in the deployment definition, you need to specify which secret to use:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: godraude/nginx
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
imagePullSecrets:
- name: my-secret
</code></pre>
|
<p>I've run into an interesting situation where I need to clone a private github repo into a docker container that I'm running in Kubernetes. Originally I tried using a gitRepo mount; however, having an OAuth key in my deployment manifest is unacceptable, and I would like to use a repo deploy key rather than an OAuth key attached to my GitHub account.</p>
<p>Ideally I would use a gitRepo mount that auths using a secret, but this feature isn't available at the time of writing.</p>
<h2>Constraints</h2>
<p>I need the following:</p>
<ul>
<li>A repo inside the container that I can pull intermittently while my container's running</li>
<li>The repo must be accessed using a GitHub deploy key</li>
<li>The key must be kept secure (in a Kubernetes secret) and not stored in the docker image</li>
<li>The repo must be shared between two containers in the same pod—one that writes and one that reads</li>
</ul>
<h2>Possible Solutions:</h2>
<h3>Mount SSH keys as secrets and clone:</h3>
<p>I tried cloning the repo into an emptydir with a bash script running in a separate pod (this script has to run anyway, I'm using it for other things as well), however I then ran into the issue of getting ssh keys into the pod. <a href="https://stackoverflow.com/questions/39568412/creating-ssh-secrets-key-file-in-kubernetes">This question</a> is about this issue, however it doesn't seem to have a way to do it. I was able to get the keys in with a secret mount, but then the permissions are set to 777. To get around this, I mounted the keys into a <code>/test/</code> directory and then tried to <code>cp</code> them into <code>/root/.ssh/</code>. This gave me these strange errors:</p>
<pre><code>cp: '/test/id_rsa' and '/root/.ssh/id_rsa' are the same file
cp: '/test/id_rsa.pub' and '/root/.ssh/id_rsa.pub' are the same file
</code></pre>
<p>I also tried using <code>cat</code> and piping them to their files but that didn't work. At first it gave me these errors when I had the paths wrong:</p>
<pre><code>cat: /keys/id_rsa: input file is output file
cat: /keys/id_rsa.pub: input file is output file
</code></pre>
<p>Once I fixed the paths, it did nothing and silently failed. <code>kubectl exec</code>ing into the containter showed no files in <code>/root/.ssh/</code>.</p>
<p>I think I've pretty much reached the bottom of this path so I don't think it will be the solution.</p>
<h3>Configuring ssh to ignore key permissions</h3>
<p>If SSH has a way to ignore the permissions on keys—by default it enforces 644 or less I believe—then the above solution could be possible. I'm surprised I haven't found any ways to do this, but my google-fu always turned up results saying that you just have to set the permissions properly.</p>
<h3>Some other way of getting the keys into the container securely</h3>
<p>Ideally, I would like to have a key in the container for potential future expansions of this project using other repos. There could be some other way of doing this that I haven't thought of or tried.</p>
<h3>Clone using OAuth key within the container</h3>
<p>I've thought about trying to use an OAuth key in an environment variable and then using that to clone the repo by HTTPS. This is less than ideal but if it works I'll take it. The only thing preventing me from doing this now is that I can't use a deploy key. If there is a way to use OAuth with a deploy key, I haven't found it yet but it could be the solution if someone knows more.</p>
<h3>Cloning in the docker image</h3>
<p>There isn't anything in the repo that I would be uncomfortable having in the docker image so I could go this route. The problem with this is that I need to be able to pull down updates to the repo. If I put it in the container, I won't be able to pull without my keys. There could be some workaround to this that I haven't tried.</p>
<p>I'm at the point where I feel like I have nothing left to try so any suggestions could be worth a shot. </p>
<hr>
<h3>Similar Questions</h3>
<p><a href="https://stackoverflow.com/questions/33373846/cloning-private-git-repo-in-the-docker-container">This question</a> is very similar to what I'm trying to do, however I feel that since they aren't using Kubernetes and I am, it's worth posting separately because of secrets and the different methods of mounting files.</p>
<p><a href="https://stackoverflow.com/questions/39568412/creating-ssh-secrets-key-file-in-kubernetes">This question</a> talks about getting SSH keys into containers in Kubernetes, however I'm not dead-set on using git over SSH so I think this should be it's own question.</p>
| <p>Try to mount your secret containing your deploy key like this:</p>
<pre><code>volumeMounts:
- mountPath: /root/.ssh
name: ssh-key
volumes:
- name: ssh-key
secret:
secretName: ssh-key
defaultMode: 256
</code></pre>
<p>Here is the full example of how I'm using it:</p>
<pre><code>apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
name: transporter
spec:
schedule: 0 5 * * *
jobTemplate:
spec:
template:
spec:
nodeSelector:
role: mysqldump
containers:
- name: transporter
image: camil/mysqldump
command: ["/bin/bash", "-c"]
args:
- ssh-keyscan -t rsa $TARGET_HOST > ~/.ssh/known_hosts && ssh -i /root/.ssh/private/id_rsa $LINUX_USER@$TARGET_HOST 'mkdir mysqldump || true' && scp -i /root/.ssh/private/id_rsa /mysqldump/* $LINUX_USER@$TARGET_HOST:/home/$LINUX_USER/mysqldump
env:
- name: TARGET_HOST
valueFrom:
configMapKeyRef:
name: transporter
key: target.host
- name: LINUX_USER
valueFrom:
configMapKeyRef:
name: transporter
key: linux.user
imagePullPolicy: Always
volumeMounts:
- mountPath: /mysqldump
name: mysqldump
- mountPath: /root/.ssh/private
name: ssh-key
volumes:
- name: mysqldump
hostPath:
path: /home/core/mysqldump
- name: ssh-key
secret:
secretName: ssh-key
defaultMode: 256
restartPolicy: OnFailure
</code></pre>
|
<p>According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">docs</a> -</p>
<blockquote>
<p>Failed containers that are restarted by Kubelet, are restarted with an
exponential back-off delay, the delay is in multiples of
sync-frequency 0, 1x, 2x, 4x, 8x … capped at 5 minutes and is reset
after 10 minutes of successful execution.</p>
</blockquote>
<p>Is there any way to define a custom RestartPolicy? I want to minimize the back-off delay as much as possible and drop off the exponential behavior.</p>
<p>As far as I can find, you can't even configure the RestartPolicy, let alone make a new one...</p>
| <p>The backoff delay is not tunable because tuning it could severely affect the reliability of the kubelet. Imagine you have some pods that keep crashing on the node: the kubelet would continuously restart all those pods/containers with no break, consuming a lot of resources.</p>
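<p>For completeness, the only related knob that is exposed is the Pod-level <code>restartPolicy</code> itself (<code>Always</code>, <code>OnFailure</code> or <code>Never</code>), which controls whether containers get restarted at all, not how quickly. A minimal sketch (name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: OnFailure   # Always (default) | OnFailure | Never
  containers:
  - name: worker
    image: example/worker:latest
</code></pre>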
<p>Why do you want to change the restart backoff delay? </p>
|
<p>We have a <code>aiohttp</code> based web services which uses <code>ZMQ</code> to send jobs to workers and waits for the result. We are of course using the <em>ZMQ eventloop</em>, so we can wait for <em>ZMQ sockets</em>. "Sometimes" the process crashes and we get this stack trace:</p>
<pre><code>...
await socket.send(z, flags=flags)
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 165, in send
kwargs=dict(flags=flags, copy=copy, track=track),
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 276, in _add_send_event
timeout_ms = self._shadow_sock.sndtimeo
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 45, in _getattr_
return self._get_attr_opt(upper_key, opt)
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 49, in _get_attr_opt
return self.get(opt)
File "zmq/backend/cython/socket.pyx", line 449, in zmq.backend.cython.socket.Socket.get (zmq/backend/cython/socket.c:4920)
File "zmq/backend/cython/socket.pyx", line 221, in zmq.backend.cython.socket._getsockopt (zmq/backend/cython/socket.c:2860)
</code></pre>
<p>"Sometimes" means, that the code works fine, if I just run it on my test machine. We encountered the problem in some rare cases when using docker containers, but were never able to reproduce it in an reliable way. Since we moved our containers into a Kubernetes cluster, it occurs much more often. Does anybody know, what could be the source of the above stack trace? </p>
| <p>aiohttp is not intended to be used with vanilla pyzmq.
Use <a href="https://github.com/aio-libs/aiozmq" rel="nofollow noreferrer">aiozmq</a> loopless streams instead.</p>
<p>See also <a href="https://github.com/zeromq/pyzmq/issues/894" rel="nofollow noreferrer">https://github.com/zeromq/pyzmq/issues/894</a> and <a href="https://github.com/aio-libs/aiozmq/blob/master/README.rst" rel="nofollow noreferrer">https://github.com/aio-libs/aiozmq/blob/master/README.rst</a></p>
|
<p>Is it possible to turn off/remove/disable the basic auth in GKE that was added by default?</p>
<p>It's possible to authenticate towards the GKE master using a number of ways, <a href="http://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">as listed in the documentation</a>.</p>
<p>When you create a cluster using GKE it creates a username/password for basic authentication to the master. </p>
<p>I want to turn this off to tighten up security (the other authentication methods are significantly better and are used transparently by the tooling AFAIK).</p>
<p>Is it possible? I have searched the <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">kubernetes github issues</a> list but not found anyone with the exact same problem (yet).</p>
<p>(The default password is 16 characters, and should be OK, but it is not possible to change without tearing down the entire cluster. I just want to disable basic auth.)</p>
<p>Thanks.</p>
| <p>It is not currently possible to disable basic auth in GKE. </p>
<p>On the bright side, <a href="https://github.com/kubernetes/kubernetes/pull/36778" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/36778</a> was recently merged into Kubernetes core, which makes it possible to disable basic auth when launching Kubernetes clusters on GCE; I would expect something similar to be added to GKE in the future.</p>
|
<p>I am using a Kubernetes cluster for deploying our bunch of microservices. I am able to manage blue-green deployment for all microservices at the same time, like below:<a href="https://i.stack.imgur.com/zveaq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zveaq.png" alt="enter image description here"></a></p>
<p>My problem is that sometimes I want to deploy only App1 or App2, or both, not all microservices. Is it possible to manage this using blue-green deployment?</p>
<p>Implemented things (I want to deploy only App3 using the blue-green strategy):</p>
<p>If I am running BlueApp3 with a blue deployment, and GreenApp1 and GreenApp2 referring to BlueApp3, then I test my whole application.</p>
<p><a href="https://i.stack.imgur.com/em3lU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/em3lU.png" alt="enter image description here"></a></p>
<p>Once it works fine I'll convert BlueApp3 to GreenApp3, like below:<a href="https://i.stack.imgur.com/5cFlz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5cFlz.png" alt="enter image description here"></a></p>
<ol>
<li>is this strategy fine?</li>
<li>if not, then why?</li>
<li>do we need to deploy the whole bunch of microservices at a time to implement blue-green deployment (that would cause unnecessary deployments)?</li>
<li>what are the pros and cons of the blue-green deployment strategy I am following for one particular microservice?</li>
</ol>
| <p>I would suggest switching your deployment strategy from combined to fully per-microservice. That means you will no longer run an all-blue or all-green deployment.</p>
<p>You can launch a new Deployment for a given service, and when it is in place switch the selector on your Kubernetes Service from, say, <code>app: app2, flavor: green</code> to <code>app: app2, flavor: blue</code>; once the new version is validated, simply delete the green Deployment object.</p>
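<p>A minimal sketch of the Service whose selector you flip (labels and ports are examples):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app2
spec:
  selector:
    app: app2
    flavor: blue      # switch this between "green" and "blue" to cut traffic over
  ports:
  - port: 80
    targetPort: 8080
</code></pre>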
<p>The one drawback of doing blue-green on k8s is that you don't really utilise the potential of k8s Deployments and their native support for rolling updates.</p>
|
<p>I'm pretty new to Docker orchestration and managing a fleet of containers. I'm wanting to build an app that would give the user a container when they ran a command. What is the best tool and best way to accomplish this?</p>
<p>I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container.</p>
<p>Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc is how to provision multiple clusters of containers all doing the same thing. I'm wanting to be able to create a single container based on a users command and then be able to communicate with an API on that container. Anyone have experience with this?</p>
| <p>I'd look at Kubernetes + the <a href="http://kubernetes.io/docs/user-guide/jobs/#what-is-a-job" rel="nofollow noreferrer">Jobs API</a> (short lived) or <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="nofollow noreferrer">Deployments</a> (long lived)</p>
<p>I'm not sure exactly what you mean by command, but I'll assume it's some sort of dev env triggered by a CLI, <code>make-dev</code>.</p>
<ol>
<li>User triggers <code>make-dev</code>, which sends a webhook to your app sitting in front of the Jobs API, ideally doing rate-limiting and/or auth.</li>
<li>Your app takes the command, sanity checks it, then fires off a Job/Deployment request + an <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">Ingress rule</a> + <a href="http://kubernetes.io/docs/user-guide/services/" rel="nofollow noreferrer">Service</a></li>
<li>Kubernetes will schedule it out across your fleet of machines</li>
<li>Your app waits for the pod to start, then returns back the address of the API with a unique identifier (the same thing in the ingress rule) like <code>devclusters.com/foobar123</code></li>
<li>User now accesses their service at that address. Internally Kubernetes uses the ingress and service to route the requests to your pod</li>
</ol>
<p>This should scale well, and if your different environments use the same base container image, they should start really fast.</p>
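<p>For step 2 above, a minimal sketch of the Job your app could submit per user command (names, labels and image are made up):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: dev-env-foobar123          # one Job per user session
spec:
  template:
    metadata:
      labels:
        app: dev-env
        session: foobar123         # reuse this label in the Service/Ingress
    spec:
      containers:
      - name: dev-env
        image: example/dev-env:latest
        ports:
        - containerPort: 8080
      restartPolicy: Never
</code></pre>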
<p>Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try <a href="https://coreos.com/tectonic" rel="nofollow noreferrer">https://coreos.com/tectonic</a></p>
|
<p>I would like to set up an HA Swarm / Kubernetes cluster based on a low-power architecture (ARM).
My main objective is to learn how an HA web cluster works, how it reacts to failures and recovers from them, and how easy it is to scale.</p>
<p>I would like to host a blog on it as well as other services once it is working (git / custom services / home automation / CI server / ...).</p>
<p>Here are my first questions:</p>
<ol>
<li><p>Regading the hardware, which is the more appropriate ? Rpi3 or Odroid-C2 or something else? I intend to have 4-6 nodes to start. Low power consumption is important to me since it will be running 24/7 at home </p></li>
<li><p>What is the best architecure to follow ? I would like to run everything in container (for scalability and redudancy), and have redundant load balancer, web servers and databases. Something like this: <a href="https://i.stack.imgur.com/TFu4G.png" rel="nofollow noreferrer">architecture</a> </p></li>
<li><p>Would it be possible to have web server / databases distributed on all the cluster, and load balancing on 2-3 nodes ? Or is it better to separate it physically? </p></li>
<li><p>Which technology is the more suited (swarm / kubernetes / ansible to deploy / flocker for storage) ? I read about this topic a lot lately, but there are a lot of choices.</p></li>
</ol>
<p>Thanks for your answers !</p>
<hr>
<h2>EDIT1: infrastructure deployment and management</h2>
<p>I have almost all the material and I am now looking in a way to easily manage and deploy the 5 (or more) PIs. I want the procedure to be as scalable as possible.</p>
<p>Is there some way to:</p>
<ul>
<li>retrieve an image from network the first time (PXE boot like)</li>
<li>apply custom settings for each node: network config (IP), SSH access, ...</li>
<li>automatically deploy / update new software on servers </li>
<li>easily add new nodes on the cluster</li>
</ul>
<p>I can have a dedicated PI or my PC that would act as deployment server.</p>
<p>Thanks for your inputs !</p>
| <p>Raspberry Pi, ODroid, CHIP and BeagleBoard are all suitable hardware.
Note that flash cards have a limited lifetime if you constantly read from and write to them.</p>
<p>Kubernetes is a great option to learn clustering containers.
Docker Swarm is also good.
None of these solutions provide distributed storage, so if you're talking about a PHP-type web server and a SQL database which are not distributed, then you can't really be redundant even with Kubernetes or Swarm.
To be effectively redundant, you need a master/slave setup for the DB, or better, a clustered database like Elasticsearch or maybe the clustered version of MariaDB for SQL, so that redundancy is provided by the database cluster itself (which is not a replacement for backups, but is better than a single container)</p>
<p>For real distributed storage, you need to look at technologies like Ceph or GlusterFS. These do not work well with Kubernetes or Swarm because they need to be tied to the hardware. There is a docker/kubernetes Ceph project on Github, but I'd say it is still a bit hacky.
You are better off provisioning this separately, or directly on the host.</p>
<p>As far as load balancing is concerned, you may want to have a couple of nodes with external load balancers for redundancy. If you build a Kubernetes cluster, you don't really choose what else may run on the same node, except by specifying CPU/RAM quotas and limits, or affinity.</p>
|
<p>Can <a href="https://github.com/coreos/etcd" rel="noreferrer">etcd</a> be used as reliable database replacement? Since it is distributed and stores key/value pairs in a persistent way, it would be a great alternative nosql database. In addition, it has a great API. Can someone explain why this is not a thing?</p>
| <p><strong>etcd</strong></p>
<ul>
<li><a href="http://kubernetes.io/docs/admin/etcd/" rel="noreferrer">etcd</a> is a highly available key-value store which Kubernetes uses for persistent storage of all of its objects like <strong>deployment, pod, service</strong> information. </li>
<li>etcd has strict access control: it can be accessed only through the API server on the master node. Nodes in the cluster other than the master <strong>do not have access</strong> to the etcd store.</li>
</ul>
<p><strong>nosql database</strong></p>
<ul>
<li><p>There are currently more than <a href="http://nosql-database.org/" rel="noreferrer">255</a> NoSQL databases, which can be broadly classified into <a href="https://www.digitalocean.com/community/tutorials/a-comparison-of-nosql-database-management-systems-and-models" rel="noreferrer">key-value based, column based, document based and graph based</a>. Considering <strong>etcd</strong> as a key-value store, let's see the available NoSQL key-value data stores.</p></li>
<li><p><a href="http://bigdata-madesimple.com/a-deep-dive-into-nosql-a-complete-list-of-nosql-databases/" rel="noreferrer">Redis, memcached and memcacheDB</a> are popular key-value stores. These are general-purpose distributed memory caching systems, often used to speed up dynamic database-driven websites by caching data and objects in memory.</p></li>
</ul>
<p><strong>Why etcd not an alternative</strong></p>
<ul>
<li><p>etcd data cannot be kept only in memory (RAM); it is always persisted to disk storage, whereas Redis can be cached in RAM and can also be persisted to disk.</p></li>
<li><p>etcd does not have various data types; in a Kubernetes cluster it just stores cluster objects as plain key-value pairs (illustrated below), whereas Redis and other key-value stores offer data-type flexibility.</p></li>
<li><p>etcd guarantees only high availability; it does not give you fast querying and indexing, whereas all the NoSQL key-value stores are built with the goal of fast querying and searching.</p></li>
</ul>
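<p>To make the data-type point concrete, here is a small contrast (keys and values are made up; the etcd commands use the v2 <code>etcdctl</code> API):</p>
<pre><code># etcd: plain string values only
etcdctl set /config/feature-x "on"
etcdctl get /config/feature-x

# redis: richer data types, e.g. lists and hashes
redis-cli LPUSH jobs "job-1"
redis-cli HSET user:42 name "alice"
</code></pre>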
<p>Even though it may already be obvious that etcd cannot be used as a NoSQL database replacement, I think the explanation above shows why it is not a suitable alternative.</p>
|
<p>Is there a groovy script out there to setup a kubernetes cloud config in jenkins?</p>
<p>I had one for mesos but have since moved to kubernetes. I have multiple masters and would like to keep them all up to date with the active kubernetes cluster and current list of containers.</p>
| <p>Found this</p>
<p><a href="https://gist.github.com/jhoblitt/ce91b458526e3a03d365e2689db825f0" rel="nofollow noreferrer">https://gist.github.com/jhoblitt/ce91b458526e3a03d365e2689db825f0</a></p>
<pre><code>import org.csanchez.jenkins.plugins.kubernetes.*
import jenkins.model.*
def j = Jenkins.getInstance()
def k = new KubernetesCloud(
'jenkins-test',
null,
'https://130.211.146.130',
'default',
'https://citest.lsst.codes/',
'10', 0, 0, 5
)
k.setSkipTlsVerify(true)
k.setCredentialsId('ec5cf56b-71e9-4886-9f03-42934a399148')
def p = new PodTemplate('centos:6', null)
p.setName('centos6')
p.setLabel('centos6-docker')
p.setRemoteFs('/home/jenkins')
k.addTemplate(p)
p = new PodTemplate('lsstsqre/centos:7-docker', null)
p.setName('centos7')
p.setLabel('centos7-docker')
p.setRemoteFs('/home/jenkins')
k.addTemplate(p)
j.clouds.replace(k)
j.save()
</code></pre>
|
<p>I'm getting acquainted with Kubernetes, and haven't found a simple solution for deploying stateful services in Kubernetes.</p>
<ul>
<li>Every pod has to be bootstrapped with contact points (list of other pod IPs), which can't be load balanced (i'm afraid of split-brain in case of bad luck: in worst case load-balancer may load-balance every pod to itself, making several self-enclosed clusters of one node)</li>
<li>Every pod has to have persistent storage that, in worst case, has to be accessed manually (e.g. consul's peers.json)</li>
<li>Every pod should be reconfigurable; if i've forgot to do something with my consul cluster, rebuilding it from scratch would simply result in downtime. In case kubernetes prevents this, feel free to tell me, i'm still not familiar enough with deployment mechanics.</li>
<li>Increasing service cluster dynamically with newly-configured instances and then draining older ones may be highly undesired (i'm not consul expert, but from my point of view that drops split-brain protection in consul cluster).</li>
</ul>
<p>AFAIK the most applicable thing is Pet Set; however, it is still in alpha and can only be deleted completely; also, I don't feel I understand how persistent volumes should be managed to survive Pet Set recreation. Another option I've come up with is splitting service deployments into a bootstrap-node deployment, a bootstrap-node service and an all-other-nodes deployment, which allows me to use the bootstrap-node service as a contact point (that's not completely safe, though).</p>
<p>What are the popular approaches for this case and what pros and cons do they have?</p>
| <p>If you're looking at a fixed number of Pods for your stateful cluster inside the Kubernetes cluster, PetSets (now called StatefulSets, I believe) are the answer... or you can define a Service per Pod to achieve the same.</p>
<p>For Pods to be aware of other Pods' IPs, you can use Headless Services, which provide you with the list of IPs associated with a label.</p>
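<p>A headless Service is just a normal Service with <code>clusterIP: None</code>; a rough sketch (name, label and port are made up):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-stateful-app
spec:
  clusterIP: None      # headless: DNS returns the individual Pod IPs
  selector:
    app: my-stateful-app
  ports:
  - port: 2181
</code></pre>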
<p>For Storage, if you use emptyDir, you have local storage but you lose it when the Pod is removed / re-scheduled. </p>
<p>I use Zookeeper in Kubernetes and it is a bit of a pain to set up, but Zookeeper provides a 'reconfigure' API which allows reconfiguring the cluster when a node changes, so it is fairly easy to redefine the cluster on startup of a new node when a Pod is rescheduled. I'm not sure if Consul has the same type of feature, but it probably does.</p>
|
<p>I'm currently using Kubernetes for our staging environment - and because it is only a small one, I'm only using one node for <code>master</code> and for running my application pods on there.</p>
<p>When we switch over to production, there will be more than one node - at least one for master and one bigger node for the application pods. Do I have to make sure that all my pods are running on a different node than <code>master</code> or does Kubernetes take care of that automagically?</p>
| <p>If you look at the output of <code>kubectl get nodes</code>, you'll see something like:</p>
<pre><code>~ kubectl get nodes
NAME STATUS AGE VERSION
test-master Ready,SchedulingDisabled 23h v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-f635 Ready 23h v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-fzu7 Ready 23h v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-vc1p Ready 23h v1.6.0-alpha.0.1862+59cfdfb8dba60e
</code></pre>
<p>The <code>SchedulingDisabled</code> tag ensures that we do not schedule any pods onto that node, and each of your HA master nodes should have that by default. </p>
<p>It is possible to set other nodes to <code>SchedulingDisabled</code> as well by using <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_cordon/" rel="nofollow noreferrer">kubectl cordon</a>.</p>
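<p>For example (the node name is a placeholder):</p>
<pre><code># stop new pods from being scheduled onto a node
kubectl cordon my-node-name

# allow scheduling again
kubectl uncordon my-node-name
</code></pre>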
|
<p>I've deployed Spring Cloud Data Flow server on a local Kubernetes cluster. All seems fine.
Then I create an App of type <code>Task</code>, giving the URL of a Spring-Boot JAR.
Then I create a task 'definition' and launch it.
The task definition hangs in status 'launching'. </p>
<p>Here are my findings: </p>
<ol>
<li><p>Looking at Kubernetes, I see the a pod corresponding to the task correctly created but failing to start, with status <code>ImagePullBackOff</code></p></li>
<li><p>This pod is configured with <code>image: /tmp/deployer-resource-cache5494152820122807128/https-60030cec0dd24157b95f59cd3e5b0819916e4adc</code>, and the logs show the message:</p>
<blockquote>
<p>Failed to pull image "/tmp/deployer-resource-cache5494152820122807128/https-60030cec0dd24157b95f59cd3e5b0819916e4adc": couldn't parse image reference "/tmp/deployer-resource-cache5494152820122807128/https-60030cec0dd24157b95f59cd3e5b0819916e4adc": invalid reference format</p>
</blockquote></li>
<li><p>I connect to the SCDF server pod shell, check out the <code>/tmp</code> folder, and see the <code>deployer-resource-cache5494152820122807128</code> folder there. </p></li>
</ol>
<p>My understanding is SCDF creates a temp image to be executed in the Kubernetes pod, but this image is created <em>inside</em> the scdf server pod, so it's obviously not available from the task pod.</p>
<p>My question is how is this supposed to work? </p>
<p>In my opinion the image should be pushed to a registry, or stored on a shared volume somehow, but I didn't find anything on the topic in the documentation.
Any idea or suggestion would be appreciated. </p>
| <p>What you are trying to do will not work for Spring Cloud Dataflow for Kubernetes. In the Kubernetes implementaton, only Docker images are supported as a deployment artifact.</p>
<p>Currently this fact is not explicitly mentioned in the documentation.
Check out <a href="https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes/issues/157#issuecomment-265722115" rel="nofollow noreferrer">this</a> and <a href="https://github.com/spring-cloud/spring-cloud-dataflow/issues/1049" rel="nofollow noreferrer">this</a> issue</p>
|
<p>I am trying to spin up a Kube cluster on AWS.</p>
<p>The documentation (<a href="http://kubernetes.io/docs/getting-started-guides/aws/" rel="nofollow noreferrer">http://kubernetes.io/docs/getting-started-guides/aws/</a>) specifically mentions that kube-up.sh is deprecated.</p>
<p>What is the correct way to configure the cluster? What is the alternative to kube-up.sh?</p>
| <p>The information is in the article you linked to.</p>
<p>You can either use <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">Kube Ops</a> or coreos's <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html" rel="nofollow noreferrer">kube-aws</a></p>
|
<p>I created the <a href="/questions/tagged/kubernetes" class="post-tag" title="show questions tagged 'kubernetes'" rel="tag"><img src="https://i.stack.imgur.com/8UH0j.png" height="16" width="18" alt="" class="sponsor-tag-img">kubernetes</a> cluster by using <a href="/questions/tagged/kubeadm" class="post-tag" title="show questions tagged 'kubeadm'" rel="tag">kubeadm</a> <code>kubeadm init</code>.</p>
<p>I am getting error messages in <code>/var/log/messages</code>.</p>
<pre><code>Oct 20 10:09:52 aws08 kubelet: I1020 10:09:52.015921 7116
docker_manager.go:1787] DNS ResolvConfPath exists:
/var/lib/docker/containers/717adf7a8481637ac20a9ba103d8f97635a88bf05f18bd4299f0d164e48f2920/resolv.conf.
Will attempt to add ndots option: options ndots:5 Oct 20 10:09:52
aws08 kubelet: I1020 10:09:52.015963 7116 docker_manager.go:2121]
Calling network plugin cni to setup pod for
kube-dns-2247936740-cjij4_kube-system(3b296413-96aa-11e6-8c40-02fff663a168)
Oct 20 10:09:52 aws08 kubelet: E1020 10:09:52.015982 7116
docker_manager.go:2127] Failed to setup network for pod
"kube-dns-2247936740-cjij4_kube-system(3b296413-96aa-11e6-8c40-02fff663a168)"
using network plugins "cni": cni config unintialized; Skipping pod Oct
20 10:09:52 aws08 kubelet: I1020 10:09:52.018824 7116
docker_manager.go:1492] Killing container
"717adf7a8481637ac20a9ba103d8f97635a88bf05f18bd4299f0d164e48f2920
kube-system/kube-dns-2247936740-cjij4" with 30 second grace period
</code></pre>
<p>The DNS pod is failing:</p>
<pre><code>kube-system kube-dns-2247936740-j5rtc 0/3 ContainerCreating 21 1h
</code></pre>
<p>If I disabled <a href="https://github.com/containernetworking/cni" rel="nofollow noreferrer">CNI</a>, the DNS pod is running. But the issue for DNS persists.</p>
<p>The method to disable cni is to comment the <code>KUBELET_NETWORK_ARGS</code> line in <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> and restart <code>kubelet</code> service</p>
<pre><code>[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
# Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=100.64.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--v=4"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
<p>followed by:</p>
<p>sudo systemctl restart kubelet</p>
| <p>I'm guessing that you forgot to <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/#installing-a-pod-network" rel="nofollow noreferrer">setup the pod network</a>.</p>
<p>From the documentation:</p>
<blockquote>
<p>It is necessary to do this before you try to deploy any applications to your cluster, and before <code>kube-dns</code> will start up. Note also that <code>kubeadm</code> only supports CNI based networks and therefore <code>kubenet</code> based networks will not work.</p>
</blockquote>
<p>You can install a pod network add-on with the following command:</p>
<pre><code>kubectl apply -f <add-on.yaml>
</code></pre>
<p>Example:</p>
<pre><code>kubectl create -f https://git.io/weave-kube
</code></pre>
<p>To install <a href="https://www.weave.works/docs/net/latest/kube-addon/" rel="nofollow noreferrer">Weave Net</a> add-on.</p>
<p>After you have done this, you might need to recreate kube-dns pod.</p>
|
<p>I need help to figure out how to port a current-working VM-based solution to a container-based solution using Kubernetes.</p>
<p><strong>Scenario</strong></p>
<p>An application is made of two components (lets'call them master and slave). A master instance is always up, while 0 or more slaves can be running.</p>
<p><strong>Current flow is:</strong></p>
<ol>
<li>master is assumed up</li>
<li>one (or more) slaves start-up</li>
<li>slave sends a HELLOWORLD message to <a href="https://MASTER_IP:9090">https://MASTER_IP:9090</a>, with some specs like own CPU and RAM</li>
<li>slave start listening on port 8080</li>
<li>master infers slave's IP from TCP headers (step 3)</li>
<li>master fills a running-slaves table with information found during steps 3 and 5</li>
<li>when a new job is available, master sends it to <a href="https://A_SPECIFIC_SLAVE_IP:8080">https://A_SPECIFIC_SLAVE_IP:8080</a></li>
<li>slave does its job</li>
<li>slave sends output to <a href="https://MASTER_IP:9090">https://MASTER_IP:9090</a></li>
</ol>
<p><strong>Notes and requirements:</strong></p>
<p>A. during steps 3 and 9 slave acts as client and master acts as server (I mean slave begins the communication, while master is listening)</p>
<p>B. during step 3 slave doesn't need to discover master's IP, it's a configuration setting.</p>
<p>C. during step 7 slave acts as server and master acts as client (opposite to A)</p>
<p>D. slaves never sends their own IP explicitly to the master (steps 3 and 5)</p>
<p>E. slaves will be containerized but not the master</p>
<p>F. Master lives on the same local network where k8s-nodes live, but master is out of kubernetes' control. It should be seen as an external service/api to connect to.</p>
<p>Using a POD for each slave, I can get an IP for each slave, but as far as I can see, this IP is part of k8s' internal network:</p>
<p>X. how to let the master deduce POD's IP? (step 3-4)</p>
<p>Y. how to reach a specific POD from the outside? (step 7)</p>
<p>I'm looking into ingress now, but I feel something is still missing.</p>
<p>Thank you.</p>
| <p>Does it matter what job goes to what slave?
Because in Kubernetes, you would have a load balancer (like an nginx instance) as your proxy from the outside, and then you need to use a Kubernetes Service targeting the slave Pods.</p>
<p>The point of Kubernetes is not to worry about where the Pods live, just to be able to reach one of them when needed, which is what a Service does: it looks at Pods with a specific label (or set of labels) and proxies traffic to one of them in a round-robin or client-IP based fashion.</p>
<p>There are some ways you could reach specific Pods: </p>
<ul>
<li><p>use a Service per slave Pod: then your nginx proxy can forward traffic to a specific Pod, wherever it is in the cluster. Obviously this is not very convenient to automate.</p></li>
<li><p>use StatefulSets (formerly PetSets) behind an Ingress: with a StatefulSet you can access a Pod by name+index, and with Ingress you can specify a parametric URL to proxy your traffic.</p></li>
<li><p>maybe the easiest: use a VPN into the cluster: then you can access each Pod or Service by its FQDN (usually servicename.namespace.svc.cluster.local)</p></li>
</ul>
|
<p>My application needs to run a lot of containers as worker nodes (to do various batch processing jobs) and I'm not really interested in keeping up web servers or databases - just short jobs that can take anywhere between 1 second to 1 hour. My idea is to work against a cloud of nodes without me having to worry about what machine from these nodes has the available resources to process my job (mesos is pretty good at this - as advertised). </p>
<p>I'm playing right now with DC/OS and I was wondering if any of the other clustering technologies offer this feature: <code>given I need 1CPU, 2GB RAM and 2GB of disk - run X docker container against my nodes</code>.</p>
<p>I like the idea of swarm due to the fact that I'm very familiar with docker itself and I believe it's the easiest to setup and automate (scale up or down). I like kubernetes (no experience however) because it's free and I'm pretty sure it will stay that way for a long time. I like DC/OS because it bundles a lot but I'm not sure of their future plans and I'm used to projects cutting off features to include them in a plan that charges your soul for x number of nodes.</p>
<p>What are your thoughts?</p>
| <p>Kubernetes, Swarm, and Mesos can all technically schedule jobs for you and handle constraining resources for you. </p>
<p>Unlike the other two, Mesos was designed primarily to handle distribution, task, and resource management at a lower level. Focusing on these bits led to greater power and flexibility, but also more complexity at a lower level. That's why DC/OS exists, to give you a bundle of microservice tools that work well as a higher level platform.</p>
<p>Mesos was also designed to allow you to bring your own scheduler to handle task lifecycle needs, which tend to be needed for stateful tasks. Kubernetes and Swarm were designed primarily to handle the stateless services use case and then adapted later to handle stateful services and jobs, with the included scheduler. </p>
<p>DC/OS is built on Mesos and comes with built-in schedulers for jobs and services, while still allowing you to build your own custom scheduler if needed.</p>
<p>Kubernetes recently added support for custom schedulers as well, but it's significantly less mature than the Mesos implementation and ecosystem, and it still revolves around using the core pods & replica set primitives, which may be empowering or limiting, depending on your needs.</p>
<p>Mesosphere recently built a new dcos-commons framework to make it trivial to build JVM-based Mesos schedulers, as well. So that may boost your productivity on DC/OS. <a href="https://github.com/mesosphere/dcos-commons" rel="nofollow noreferrer">https://github.com/mesosphere/dcos-commons</a></p>
<p>Mesos & DC/OS also gives you more options on containerization. You can use Docker images and Docker containers, if you like. Or you can use the Mesos container runtime with or without Docker images, which gives you more flexibility in terms of workloads and packaging.</p>
<p>DC/OS and Kubernetes both have package managers, as well, which can be useful for installing dependencies like Spark, Kafka, or Cassandra. But DC/OS tends to have more robust data services, because they're built with their own custom schedulers, whereas the Kubernetes ecosystem tends to make complex lifecycle managing Docker container wrappers around their systems due to the late arrival of custom schedulers. Docker also sort of includes package management if you consider docker images "packages". The difference is that DC/OS and Kubernetes package higher level abstractions (apps & pods) which may include multiple containers. More recently, Docker has added "stacks" which are higher level abstractions, but I don't think there's any external repository mechanism or much package management around them.</p>
<p>Swarm is definitely the simplest, but its original API was designed to be the same as the node API, which was great for familiarity and onboarding, but rather limiting as a higher level abstraction. They've since effectively rewritten the swarm API and bundled it into docker-engine as "swarm-mode". This bundling of the orchestration engine and container runtime makes it easier for the user to install and manage but also combines what was previously two different abstraction levels. So instead of being just a dependency of orchestration engines, Docker engine now competes with them as well, going against the unix philosophy of doing one thing well and making for a bit of a political mess in the respective open source communities. Twitter, hacker news, and chat conversations escalated into talk of <a href="http://thenewstack.io/docker-fork-talk-split-now-table/" rel="nofollow noreferrer">forking docker</a> which lead to <a href="http://thenewstack.io/oci-building-way-kubernetes-run-containers-without-docker/" rel="nofollow noreferrer">K8s experimenting on alternatives</a>, <a href="https://mesosphere.com/blog/2016/09/30/dcos-universal-container-runtime/" rel="nofollow noreferrer">DC/OS supporting Docker images without using Docker engine</a>, and <a href="https://www.docker.com/docker-news-and-press/docker-extracts-and-donates-containerd-its-core-container-runtime-accelerate" rel="nofollow noreferrer">Docker extracting containerd</a>.</p>
<p>They all work fine. Selecting one kind of depends on your needs. I generally recommend DC/OS because it tackles a larger set of problems and is made up of many distinct microservice tools and layers, allowing you support multiple use cases by programming against the layer than makes the most sense. Disclosure tho, I do work for Mesosphere! ;)</p>
|
<p>I just installed a k8s cluster, but the URLs are all using localhost, and I would like to change them to use the hostname or even the IP address. Because of this, the cluster can be accessed only from the master node. I am unable to find the right place to make this change. Any help is really appreciated. </p>
<pre><code>OS: Redhat 7.1
Kubernetes version: 1.2
[rakeshk@ kubernetes]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubedash is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubedash
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
</code></pre>
| <p>Based on the port number (<code>8080</code>, which is the default value of <a href="http://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer"><code>--insecure-port</code></a> of <code>kube-apiserver</code>), I am guessing that you are running the <code>kubectl cluster-info</code> command on the same machine where the <code>kube-apiserver</code> is running.</p>
<p>If the above assumption is correct, then copy the <code>/etc/kubernetes/admin.conf</code> file (from the machine that's running the <code>kube-apiserver</code>) to <code>~/.kube/config</code> on your local machine. Run <code>kubectl cluster-info</code> on your local machine (install the <a href="http://kubernetes.io/docs/user-guide/prereqs/" rel="nofollow noreferrer">kubectl</a> program on your local machine, if you haven't already). This should give you the cluster address as either a hostname or an IP address. Whether it shows an IP address or a hostname depends on whether a reverse lookup of the IP address resolves to a DNS record.</p>
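<p>As a rough sketch (assuming you have ssh access to the master and that the cluster was set up with kubeadm, so the file lives at <code>/etc/kubernetes/admin.conf</code>):</p>
<pre><code># run from your local machine; adjust the user and master address
scp root@MASTER_IP:/etc/kubernetes/admin.conf ~/.kube/config

# kubectl now talks to the advertised address instead of localhost:8080
kubectl cluster-info
</code></pre>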
<blockquote>
<p>If you are initialising kubernetes using the kubeadm program: <code>kubeadm init</code> will autodetect the network interface to advertise the master on as the interface with the default gateway. If the default gateway IP address is not a routable IP address, ensure that you set <code>--api-advertise-addresses</code> to a routable IP address. </p>
</blockquote>
|
<p>Following are the steps I followed.</p>
<ol>
<li>Installed and configured etcd, kube apiserver, kube controller
manager , kube-scheduler, flannel on master. </li>
<li>List item kubectl get nodes doesnt display any nodes initially. </li>
<li><p>Installed and configured flannel network, kubernetes, docker on node.When node starts up and trying to register with api server, it gives this error in journalctl -xe</p>
<p>Attempting to register node 192.168.6.103
E1222 02:20:03.487534 2030 kubelet.go:1222] Unable to register node "192.168.6.103" with API server: the body of the request was in an unknown format - accepted media ty
E1222 02:20:03.490982 2030 event.go:198] Server rejected event ' &api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"192.168
E1222 02:20:03.493741 2030 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"192.168</p></li>
</ol>
<p>Following are the version installed on the node. I guess it is an issue with the node.</p>
<p>Docker client version - 1.10.3 API server 1.22
Docker server version - 1.10.3 API server 1.22</p>
<p>kubectl version gave an error </p>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
<p>I had done this </p>
<pre><code>$ kubectl config set-cluster demo-cluster --server=http://master.example.com:8080
$ kubectl config set-context demo-system --cluster=demo-cluster
$ kubectl config use-context demo-system
</code></pre>
<p>Then the version is displayed as </p>
<p>Client version: Major 1, Minor 3
Server version: Major 1, Minor 2</p>
<p>Anything I am doing wrong? Any suggestions would be very helpful.</p>
<p>Thanks</p>
| <p>The issue was caused by a kubectl version mismatch between the master and the nodes: the minor version on the master was 2, whereas on the nodes it was 3. Updating (via sudo) on the master and nodes and restarting fixed the issue.</p>
|
<p>Does anyone have any advice on how to pull from Azure Container Registry whilst running within Azure Container Service (kubernetes)?</p>
<p>I've tried a sample deployment like the following but the image pull is failing:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: jenkins-master
spec:
replicas: 1
template:
metadata:
name: jenkins-master
labels:
name: jenkins-master
spec:
containers:
- name: jenkins-master
image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
imagePullPolicy: Always
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 20
timeoutSeconds: 5
ports:
- name: jenkins-web
containerPort: 8080
- name: jenkins-agent
containerPort: 50000
</code></pre>
| <p>I got this working after reading this info.</p>
<p><a href="http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod" rel="noreferrer">http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod</a></p>
<p>So firstly create the registry access key</p>
<pre><code>kubectl create secret docker-registry myregistrykey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
</code></pre>
<p>Replace the server address with your ACR address, and the USERNAME, PASSWORD and EMAIL address with the values from the admin user for your ACR. Note: The email address can be any value.</p>
<p>Then in the deploy you simply tell kubernetes to use that key for pulling the image like so:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: jenkins-master
spec:
replicas: 1
template:
metadata:
name: jenkins-master
labels:
name: jenkins-master
spec:
containers:
- name: jenkins-master
image: myregistry.azurecr.io/infrastructure/jenkins-master:1.0.0
imagePullPolicy: Always
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 20
timeoutSeconds: 5
ports:
- name: jenkins-web
containerPort: 8080
- name: jenkins-agent
containerPort: 50000
imagePullSecrets:
- name: myregistrykey
</code></pre>
|
<p>Volumes created through GKE can easily be resized using <code>gcloud compute disks resize [volume name from kubectl get pv]</code>. The pod will keep running.</p>
<p>However a <code>df</code> in the pod will still report the same size. More importantly <code>kubectl describe pv</code> will still report the original "capacity". Is there a way to grow the pod's actual storage space on the volume?</p>
<p>Official support may be in the roadmap, according to <a href="https://github.com/kubernetes/kubernetes/issues/24255#issuecomment-210227126" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/24255#issuecomment-210227126</a>, but where is that discussion taking place?</p>
| <p>Usually containers in pods are designed to be ephemeral. If this is the design of your deployment then kill the pod and once the pod restarts it should reattach to the gcePersistentDisk with the correct size. </p>
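<p>A minimal sketch of that flow, with placeholder names (the disk resize command is the one from the question):</p>
<pre><code># grow the underlying GCE disk
gcloud compute disks resize VOLUME_NAME --size 200GB

# delete the pod; its controller recreates it and re-attaches the resized disk
kubectl delete pod POD_NAME
</code></pre>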
|
<p>I would like to update Heapster configuration (add sink for influxdb). The problem is that, since we created cluster via Google Container Engine, heapster was created by default and have configuration file on kubernetes master. I can't connect to kubernetes master the same way i can connect to minion nodes (ssh). I would like to know if there is a way to update heapster pod configuration either directly via configuration file on k8s master or via kubernetes API</p>
| <p>I have the same usecase, so I can share what I've found so far.</p>
<p>Heapster runs as a cluster addon, and it seems there's no way to add/delete/modify cluster addons on hosted Kubernetes in Google Container Engine (GKE).
You can, however, control two of them: "HorizontalPodAutoscaling" and "HttpLoadBalancing" (source: <a href="https://cloud.google.com/container-engine/docs/clusters/operations#updating_a_container_cluster" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/clusters/operations#updating_a_container_cluster</a>).</p>
<p>In Kubernetes 1.4 I was able to apply my custom Heapster Deployment (with influxdb sink) and it would effectively replace the built-in heapster addon.
In Kubernetes 1.5.1 my changes are reverted, and that makes sense, as there's probably a cluster addon manager which keeps all addons in sync (source: <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/README.md</a>).</p>
<p>What I've done in the end is create a separate Heapster deployment with an influxdb sink (with a different name and pod labels).
Pros: I have full control over its configuration. Cons: two heapsters use more resources than one.</p>
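<p>As a rough sketch of what such a separate deployment can look like (the image tag and heapster flags below follow the upstream heapster/influxdb addon manifests of that era; double-check them against the heapster version you actually run):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-influxdb          # a different name than the built-in addon
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: heapster-influxdb # different labels, so the addon manager ignores it
    spec:
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster:v1.2.0
        command:
        - /heapster
        - --source=kubernetes:''
        - --sink=influxdb:http://monitoring-influxdb:8086   # your influxdb service
</code></pre>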
<p>If you find a way to edit or disable the built-in heapster cluster addon, please share how to do it.</p>
|
<p>I would like to connect my Kubernetes cluster to Google Cloud SQL.</p>
<p>I have at least 10 different deployed pods which presently connect to MySQL [docker image deployed to k8s] using a JDBC url + username/password.</p>
<p>It it possible to use a single instance of the Google Cloud SQL Proxy and connect all the pods through this proxy to the Cloud SQL database? Ideally I would like to replace the mysql running in the container with the proxy.</p>
<p>I would prefer not having to run the proxy inside each deployment. The only samples I found seem to indicate the proxy needs to be declared in each deployment.</p>
| <p>I found a solution. </p>
<p>Deploy the proxy with the yml below, and expose the deployment as a service. Most importantly, make the proxy listen on 0.0.0.0 instead of the default 127.0.0.1. All the secrets are set up as per the Google Cloud SQL documentation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
template:
metadata:
name: mysql
labels:
name: mysql
spec:
containers:
- image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=MYSQL:ZONE:DATABASE_INSTANCE=tcp:0.0.0.0:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
ports:
- containerPort: 3306
name: mysql
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
</code></pre>
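<p>The answer mentions exposing the deployment as a service but doesn't show it; a minimal sketch, matching the <code>name: mysql</code> pod label from the Deployment above, could be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql
</code></pre>
<p>Clients in other pods would then connect to <code>mysql:3306</code> inside the cluster.</p>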
<p>The solution is slightly more expensive than having the proxy in the same deployment as the client software, since there is an extra TCP connection. </p>
<p>However there are many benefits:</p>
<ul>
<li>Much simpler and doesn't require modifying existing K8S deployment files</li>
<li>Allows switching the implementation to a MySQL Docker container or using the Google Cloud SQL proxy without any modifications to the client configuration.</li>
</ul>
|
<p>I have a node outside of my Kubernetes cluster running a web service that I need to access from inside a Pod. The documentation mentions using a Service without a Selector here:
<a href="http://kubernetes.io/docs/user-guide/services/" rel="noreferrer">http://kubernetes.io/docs/user-guide/services/</a></p>
<p>So I created a service like so:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8082,
"targetPort": 8082
}
]
}
}
</code></pre>
<p>Then created my endpoint:</p>
<pre><code>{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"subsets": [
{
"addresses": [
{ "ip": "128.115.198.7" }
],
"ports": [
{ "port": 8082 }
]
}
]
}
</code></pre>
<p>Test App:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: ta-p
spec:
restartPolicy: Never
containers:
- name: ta-c
image: "centos:7"
command: ["/bin/bash","-c", "sleep 100000"]
nodeSelector:
node: "kube-minion-1"
</code></pre>
<p>Remote into Pod doing:</p>
<pre><code>kubectl exec ta-p -c ta-c -i --tty -- /bin/bash
</code></pre>
<p>Then whenever I <code>kubectl exec</code> into a container in my pod and try to ping or curl my-service like so:</p>
<pre><code>curl http://my-service/api/foo
</code></pre>
<p>it times out. I have verified DNS is setup and working correctly. However, I have even tried using the IP address directly bound to the service:</p>
<pre><code>curl http://10.0.124.106:8082/api/foo
</code></pre>
<p>Anyone have any suggestions?</p>
| <p>Note: A better way to solve this now is using <code>externalName</code> on the Service. This will add a <code>CNAME</code> record to the internal Kubernetes DNS:
<a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/service-external-name.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/service-external-name.md</a></p>
<p>This feature was shipped with Kubernetes 1.4.</p>
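<p>A minimal sketch of such a Service (note that <code>externalName</code> takes a DNS name, not an IP, so for a bare address like the one in the question the Service-plus-Endpoints approach above is still the way to go; the hostname here is hypothetical):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my-backend.example.com
</code></pre>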
|
<p>I'm using kubernetes secret as my environment variable
(<a href="http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-environment-variables" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-environment-variables</a>).</p>
<p>I've checked whether the env vars are set correctly by </p>
<pre><code>kubectl exec -it my_pod bash
</code></pre>
<p>1.</p>
<pre><code>echo $RAILS_ENV #=> staging
</code></pre>
<p>2.</p>
<pre><code>bin/rails c;
puts ENV['RAILS_ENV'] #=> staging
</code></pre>
<p>It works fine for my rails application and bash command, but doesn't work when executing cron process.</p>
<p>I've read some post to understand cron process (e.g. <a href="https://unix.stackexchange.com/questions/27289/how-can-i-run-a-cron-command-with-existing-environmental-variables">https://unix.stackexchange.com/questions/27289/how-can-i-run-a-cron-command-with-existing-environmental-variables</a>) and know that I should source .profile or .bashrc before executing cron command like.</p>
<pre><code>$ crontab -l
* * * * * . $HOME/.profile; /bin/bash -l -c 'echo rails_env=$RAILS_ENV >> /tmp/cron_test.log'
* * * * * . $HOME/.bashrc; /bin/bash -l -c 'echo rails_env=$RAILS_ENV >> /tmp/cron_test.log'
</code></pre>
<p>but both don't work for me ˊˋ</p>
<p>result</p>
<pre><code>rails_env=
</code></pre>
<p>Any help will be appreciated!</p>
<p>===</p>
<p><strong>[update: add kubernetes yaml config]</strong></p>
<p>Both environment variables assign by kubernetes config and kubernetes secret can read in bash and rails application, but doesn't work in cron process.</p>
<p>deployment.yaml</p>
<pre><code>spec:
containers:
- name: my_rails
...
command:
- bash
args:
- /tmp/run_cron.sh
...
env:
- name: RAILS_ENV
value: staging
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: rails-secret
key: MYSQL_PASSWORD
</code></pre>
<p>secret.yaml</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: rails-secret
type: Opaque
data:
MYSQL_PASSWORD: ENCODED_PASSWORD
</code></pre>
| <p>I found the solution after discussion with some guys.</p>
<p>The problem seems to be at the Docker level, not the Kubernetes level.
<a href="https://ypereirareis.github.io/blog/2016/02/29/docker-crontab-environment-variables/" rel="nofollow noreferrer">https://ypereirareis.github.io/blog/2016/02/29/docker-crontab-environment-variables/</a></p>
<p>A cron process doesn't run as a normal login shell, so it doesn't read the environment variables set by docker/k8s the way we expected.</p>
<p>To solve the problem, we have to make the cron process read the environment variables first.</p>
<p>STEP1.</p>
<p>Add one line to dump environment variables at run_cron.sh</p>
<pre><code>#!/bin/bash
# dump environment variables
printenv | sed 's/^\(.*\)$/export \1/g' >> /etc/profile.d/rails_env.sh
cron -f
</code></pre>
<p>STEP2.</p>
<p>Make cron read environment variables before executing command.</p>
<pre><code>* * * * * . /etc/profile.d/rails_env.sh; ruby my_job.rb >> /tmp/cron.log 2>&1
</code></pre>
<p>or use bash's <strong>--login</strong> option, which sources every file under /etc/profile.d</p>
<pre><code>* * * * * /bin/bash -l -c 'ruby my_job.rb >> /tmp/cron.log 2>&1'
</code></pre>
<p>Then cron work as expectation!</p>
|
<h3>Problem</h3>
<p>I'd like to issue certs to many different developers (different subjects) all within the dev group, and have them all have access to create and modify things within the dev namespace, but not touch anything outside it, and definitely not see secrets outside it. I suspect the roles, role bindings, etc. I'm creating in step 2 below are not correct, can anyone suggest corrections?</p>
<h3>Attempt</h3>
<ol>
<li>Deployed Kubernetes with API Server flags to support "RBAC,AlwaysAllow" authorization modes, set RBAC super user, and enable RBAC API via <code>--runtime-config</code>.</li>
<li>Created a namespace, role, and role binding with the intent that (a) service accounts and system components can effectively still have "AlwaysAllow" access, and (b) any entity in group <code>dev</code> can access anything in namespace <code>dev</code> using <a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#dev-accessyml" rel="noreferrer">this YAML file</a>. <strong>NOTE: contents of this link have changed, see YAML files I got working at bottom of question.</strong></li>
<li>Updated Kubernetes to only allow "RBAC" authorization mode.</li>
<li>Generated client TLS data where the certificate subject flag (for openssl) was <code>-subj "/[email protected]/O=dev"</code>.</li>
<li>Generated a kubeconfig file following <a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#dev-kube-configyml" rel="noreferrer">this template</a>.</li>
</ol>
<h3>Actual Result</h3>
<p>I get the following errors when running: <code>kubectl -v 8 --kubeconfig=/tmp/dev-kube-config.yml create -f /tmp/busybox.yml</code>:</p>
<pre><code>I1219 16:12:37.584657 44323 loader.go:354] Config loaded from file /tmp/dev-kube-config.yml
I1219 16:12:37.585953 44323 round_trippers.go:296] GET https://api.kubernetes.click/api
I1219 16:12:37.585968 44323 round_trippers.go:303] Request Headers:
I1219 16:12:37.585983 44323 round_trippers.go:306] Accept: application/json, */*
I1219 16:12:37.585991 44323 round_trippers.go:306] User-Agent: kubectl/v1.5.1+82450d0 (darwin/amd64) kubernetes/82450d0
I1219 16:12:38.148994 44323 round_trippers.go:321] Response Status: 403 Forbidden in 562 milliseconds
I1219 16:12:38.149056 44323 round_trippers.go:324] Response Headers:
I1219 16:12:38.149070 44323 round_trippers.go:327] Content-Type: text/plain; charset=utf-8
I1219 16:12:38.149081 44323 round_trippers.go:327] Content-Length: 17
I1219 16:12:38.149091 44323 round_trippers.go:327] Date: Tue, 20 Dec 2016 00:12:38 GMT
I1219 16:12:38.149190 44323 request.go:904] Response Body: Forbidden: "/api"
I1219 16:12:38.149249 44323 request.go:995] Response Body: "Forbidden: \"/api\""
I1219 16:12:38.149567 44323 request.go:1151] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'Forbidden: "/api"'
...
I1219 16:12:38.820672 44323 round_trippers.go:296] GET https://api.kubernetes.click/swaggerapi/api/v1
I1219 16:12:38.820702 44323 round_trippers.go:303] Request Headers:
I1219 16:12:38.820717 44323 round_trippers.go:306] User-Agent: kubectl/v1.5.1+82450d0 (darwin/amd64) kubernetes/82450d0
I1219 16:12:38.820731 44323 round_trippers.go:306] Accept: application/json, */*
I1219 16:12:38.902256 44323 round_trippers.go:321] Response Status: 403 Forbidden in 81 milliseconds
I1219 16:12:38.902306 44323 round_trippers.go:324] Response Headers:
I1219 16:12:38.902327 44323 round_trippers.go:327] Content-Type: text/plain; charset=utf-8
I1219 16:12:38.902345 44323 round_trippers.go:327] Content-Length: 31
I1219 16:12:38.902363 44323 round_trippers.go:327] Date: Tue, 20 Dec 2016 00:12:38 GMT
I1219 16:12:38.902456 44323 request.go:904] Response Body: Forbidden: "/swaggerapi/api/v1"
I1219 16:12:38.902512 44323 request.go:995] Response Body: "Forbidden: \"/swaggerapi/api/v1\""
F1219 16:12:38.903025 44323 helpers.go:116] error: error validating "/tmp/busybox.yml": error validating data: the server does not allow access to the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<h3>Expected Result</h3>
<p>Expected to create busybox pod in <code>dev</code> namespace.</p>
<h3>Additional details:</h3>
<ul>
<li><p><code>$ kubectl version</code></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1+82450d0", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"not a git tree", BuildDate:"2016-12-14T04:09:31Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:16:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre></li>
<li><p>GitHub issue: <a href="https://github.com/kubernetes/kubernetes/issues/38997" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/38997</a></p></li>
<li>Mailing list post: <a href="https://groups.google.com/forum/#!topic/kubernetes-dev/6TBTu1AC2L8" rel="noreferrer">https://groups.google.com/forum/#!topic/kubernetes-dev/6TBTu1AC2L8</a></li>
</ul>
<hr>
<p><strong>EDIT: Working solution based on answer <em>and</em> comments</strong></p>
<p>Based on Jordan's answer below, I upgraded to Kubernetes v1.5.1 and then got the following two YAML files to construct the namespace and all the correct RBAC resources so that everything works as desired:</p>
<p><a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#system-accessyml" rel="noreferrer"><code>system-access.yml</code></a> (because the out-of-the-box cluster roles and cluster role bindings didn't seem to work):</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: system:node--kubelet
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- kind: User
name: kubelet
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: cluster-admin--kube-system:default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: system:node-proxier--kube-proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier
subjects:
- kind: User
name: kube-proxy
</code></pre>
<p><a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#dev-accessyml" rel="noreferrer"><code>dev-access.yml</code></a>:</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
namespace: dev
name: dev-all
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: dev-role-dev-all-members
namespace: dev
subjects:
- kind: Group
name: dev
- kind: Group
name: system:serviceaccounts:dev
roleRef:
kind: Role
name: dev-all
apiGroup: "rbac.authorization.k8s.io"
</code></pre>
| <p>First, you need to allow access to the URLs kubectl uses for API discovery and validation (swagger, listings of API groups and resource types, etc).</p>
<p>The easiest way to do that is to load the default bootstrap <a href="https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml" rel="noreferrer">cluster roles</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-role-bindings.yaml" rel="noreferrer">cluster role bindings</a>:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-role-bindings.yaml
</code></pre>
<p>That will create a <code>system:discovery</code> ClusterRole and bind all users (authenticated and unauthenticated) to it, allowing them to access swagger and API group information.</p>
<p>Second, you shouldn't include the dev service account in the <code>all</code> cluster role binding. That would give that service account (and anyone with access to the secret in the dev namespace containing the dev service account credentials) cluster-wide access.</p>
|
<p>I am trying to set up EFK stack on my k8s cluster using <a href="https://github.com/kubernetes/contrib/blob/master/ansible/roles/kubernetes-addons/tasks/cluster-logging.yml" rel="nofollow noreferrer">ansible repo</a>.</p>
<p>When I try to browse the Kibana dashboard, it shows me the following output:</p>
<p><a href="https://i.stack.imgur.com/2P12B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2P12B.png" alt="kibana dash"></a></p>
<p>After doing some research, I found out that Fluentd does not detect any logs.
I am running k8s 1.2.4 on the minions and 1.2.0 on the master.
What I have come to understand is that kubelet creates the /var/log/containers directory and makes symlinks into it from all containers running in the cluster. Fluentd then mounts the /var/log volume from the minion and so eventually has access to all container logs, which it can send to Elasticsearch.</p>
<p>In my case /var/log/containers was created, but it is empty; even /var/lib/docker/containers does not contain any log files.
I used the following controllers and services for the EFK stack setup:</p>
<h1>es-controller.yaml</h1>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}</code></pre>
<h1>es-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging</code></pre>
<h1>fluentd-es.yaml</h1>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: fluentd-es-v1.20
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v1.20
spec:
  containers:
  - name: fluentd-es
    image: gcr.io/google_containers/fluentd-elasticsearch:1.20
    command:
    - '/bin/sh'
    - '-c'
    - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
    resources:
      limits:
        cpu: 100m
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  terminationGracePeriodSeconds: 30
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers</code></pre>
<h1>kibana-controller.yaml</h1>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP</code></pre>
<h1>kibana-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging</code></pre>
<h1>update:</h1>
<p>I changed fluentd-es.yaml as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  containers:
  - name: fluentd-elasticsearch
    image: gcr.io/google_containers/fluentd-elasticsearch:1.15
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  terminationGracePeriodSeconds: 30
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers</code></pre>
<p>But when I run a pod named "gateway", I get the following error in the fluentd log:
<code>/var/log/containers/gateway-c3cuu_default_gateway-d5966a86e7cb1519329272a0b900182be81f55524227db2f524e6e23cd75ba04.log unreadable. It is excluded and would be examined next time.</code></p>
| <p>Finally I found out what was causing the issue.
When installing docker from the CentOS 7 repo, there is an option (--log-driver=journald) which forces docker to send its log output to journald. The default behavior is to write these logs to json.log files. So, the only thing I had to do was delete that option from /etc/sysconfig/docker.</p>
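<p>As a sketch (the exact contents of <code>/etc/sysconfig/docker</code> vary per install; the point is just to drop the journald driver and restart the daemon):</p>
<pre><code># /etc/sysconfig/docker
# before:
#   OPTIONS='--selinux-enabled --log-driver=journald'
# after:
OPTIONS='--selinux-enabled'

# then, on each node:
systemctl restart docker kubelet
</code></pre>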
|
<p>I initialized Kubernetes with <code>kubeadm init</code>, and after I used <code>kubeadm reset</code> to reset it I found <code>--pod-network-cidr</code> was wrong. After correcting it I tried to use <code>kubeadm</code> to init Kubernetes again like this:</p>
<pre><code>kubeadm init --use-kubernetes-version v1.5.1 --external-etcd endpoints=http://10.111.125.131:2379 --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>Then I got some errors on the nodes</p>
<pre><code>12月 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838700 13333 cni.go:255] Error adding network: no IP addresses available in network: cbr0
12月 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838727 13333 cni.go:209] Error while adding to cni network: no IP addresses available in network: cbr0
12月 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838781 13333 docker_manager.go:2201] Failed to setup network for pod "test-701078429-tl3j2_default(6945191b-ccce-11e6-b53d-78acc0f9504e)" using network plugins "cni": no IP addresses available in network: cbr0; Skipping pod
12月 28 15:30:56 ydtf-node-137 kubelet[13333]: E1228 15:30:56.205596 13333 pod_workers.go:184] Error syncing pod 6945191b-ccce-11e6-b53d-78acc0f9504e, skipping: failed to "SetupNetwork" for "test-701078429-tl3j2_default" with SetupNetworkError: "Failed to setup network for pod \"test-701078429-tl3j2_default(6945191b-ccce-11e6-b53d-78acc0f9504e)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"
</code></pre>
<p>or</p>
<pre><code>Dec 29 10:20:02 ydtf-node-137 kubelet: E1229 10:20:02.065142 22259 pod_workers.go:184] Error syncing pod 235cd9c6-cd6c-11e6-a9cd-78acc0f9504e, skipping: failed to "SetupNetwork" for "test-701078429-zmkdf_default" with SetupNetworkError: "Failed to setup network for pod \"test-701078429-zmkdf_default(235cd9c6-cd6c-11e6-a9cd-78acc0f9504e)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.244.1.1/24; Skipping pod"
</code></pre>
<p>Why can't I create a network for the new pods?</p>
<p>By the way, I use flannel as network provider and it works fine.</p>
<pre><code>[root@ydtf-master-131 k8s151]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default test-701078429-tl3j2 0/1 ContainerCreating 0 2h <none> ydtf-node-137
kube-system dummy-2088944543-hd7b7 1/1 Running 0 2h 10.111.125.131 ydtf-master-131
kube-system kube-apiserver-ydtf-master-131 1/1 Running 7 2h 10.111.125.131 ydtf-master-131
kube-system kube-controller-manager-ydtf-master-131 1/1 Running 0 2h 10.111.125.131 ydtf-master-131
kube-system kube-discovery-1769846148-bjgp8 1/1 Running 0 2h 10.111.125.131 ydtf-master-131
kube-system kube-dns-2924299975-q8x2m 4/4 Running 0 2h 10.244.0.3 ydtf-master-131
kube-system kube-flannel-ds-3fsjh 2/2 Running 0 2h 10.111.125.137 ydtf-node-137
kube-system kube-flannel-ds-89r72 2/2 Running 0 2h 10.111.125.131 ydtf-master-131
kube-system kube-proxy-7w8c4 1/1 Running 0 2h 10.111.125.137 ydtf-node-137
kube-system kube-proxy-jk6z6 1/1 Running 0 2h 10.111.125.131 ydtf-master-131
kube-system kube-scheduler-ydtf-master-131 1/1 Running 0 2h 10.111.125.131 ydtf-master-131
</code></pre>
| <p>I figured it out, if you change <strong>--pod-network-cidr</strong> when you reinitialize kubernetes via <strong>kubeadm init</strong>, you should delete some auto-created things, just follow the steps below before you execute <strong>kubeadm init</strong> again:</p>
<ol>
<li><p>execute <strong>kubeadm reset</strong> on master and nodes.</p>
</li>
<li><p>execute <strong>etcdctl rm --recursive registry</strong> reset data in etcd.</p>
</li>
<li><p><strong>rm -rf /var/lib/cni</strong> on master and nodes</p>
</li>
<li><p><strong>rm -rf /run/flannel</strong> on master and nodes</p>
</li>
<li><p><strong>rm -rf /etc/cni</strong> on master and nodes</p>
</li>
<li><p><strong>ifconfig cni0 down</strong> on master and nodes</p>
</li>
<li><p><strong>brctl delbr cni0</strong> on master and nodes</p>
</li>
</ol>
<p>Now, my Kubernetes works fine :)</p>
|
<p>I'm trying to publish to an existing pubsub topic from a Scala application running in Google Container Engine (i.e. running in Kubernetes). </p>
<p>I have enabled (I think) the correct permissions for the underlying cluster: </p>
<p><a href="https://i.stack.imgur.com/WnEwS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WnEwS.png" alt="permissions"></a></p>
<p>However, when I try run my Scala application, I get the following error: </p>
<pre><code>2016-12-10T22:22:57.811982246Z Caused by:
com.google.cloud.pubsub.PubSubException: java.lang.IllegalStateException:
No NameResolverProviders found via ServiceLoader, including for DNS.
This is probably due to a broken build. If using ProGuard, check your configuration
</code></pre>
<p>Full stack trace <a href="https://gist.github.com/ciaranarcher/58fa789c2f4ffe5d15f902bff7233295" rel="noreferrer">here</a>.</p>
<p>My Scala code is pretty much right out of the quick start guide: </p>
<pre><code>val TopicName = "my-topic"
val pubsub = PubSubOptions.getDefaultInstance.getService
val topic = pubsub.getTopic(TopicName)
...
topic.publish(Message.of(json))
</code></pre>
<p>I think I might be missing some vital Kubernetes configuration, so any and all help is very much appreciated.</p>
| <p>I've found that this problem happens when sbt manages the "com-google-cloud-pubsub" dependency. My workaround is this: I created a maven project and built a jar with only that dependency. Then I added that jar to my classpath, and in my build.sbt I marked the "com-google-cloud-pubsub" dependency as "provided". I hope this works for you. </p>
<pre><code><dependencies>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pubsub</artifactId>
<version>0.8.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>assemble-all</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</code></pre>
|
<p>I have a kubernetes cluster of 3 hosts where each Host has a unique id label.
On this cluster, there is a software that has 3 instances (replicas).</p>
<p>Each replica requires to talk to all other replicas. In addition, there is a service that contains all pods so that this application is permanently available.</p>
<p>So I have:</p>
<pre><code>Instance1 (with labels run: theTool,instanceid: 1)
Instance2 (with labels run: theTool,instanceid: 2)
Instance3 (with labels run: theTool,instanceid: 3)
</code></pre>
<p>and</p>
<pre><code>Service1 (selecting pods with label instanceid=1)
Service2 (selecting pods with label instanceid=2)
Service3 (selecting pods with label instanceid=3)
Service (selecting pods with label run=theTool)
</code></pre>
<p>This approach works, but I cannot scale or use the rolling-update feature.</p>
<p>I would like to define a deployment with 3 replicas, where each replicate gets a unique generic label (for instance the replica-id like 1/3, 2/3 and so on).</p>
<p>Within the services, I could use the selector to fetch this label which will exist even after an update.</p>
<p>Another solution might be to select the pod/deployment, depending on the host where it is running on. I could use a DaemonSet or just a pod/deployment with affinity to ensure that each host has an exact one replica of my deployment.</p>
<p>But I didn't know how to select a pod based on a host label where it runs on.</p>
<p>Using the hostname is not an option as hostnames will change in different environments.</p>
<p>I have searched the docs but didn't find anything matching this use case. Hopefully, someone here has an idea how to solve this.</p>
| <p>The feature you're looking for is called <a href="http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/" rel="noreferrer">StatefulSets</a>, which just launched to beta with <a href="http://blog.kubernetes.io/2016/12/kubernetes-1.5-supporting-production-workloads.html" rel="noreferrer">Kubernetes 1.5</a> (note that it was previously available in alpha under a different name, PetSets).</p>
<p>In a StatefulSet, each replica has a unique name that is persisted across restarts. In your example, these would be instance-1, instance-2, instance-3. Since the instance names are persisted (even if the pod is recreated on another node), you don't need a service-per-instance.</p>
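<p>A minimal sketch of the idea (the image and port are placeholders; with Kubernetes 1.5 the StatefulSet lives under <code>apps/v1beta1</code> and needs a headless Service to get stable per-replica DNS names like <code>instance-0.instance.default.svc.cluster.local</code>):</p>
<pre><code># headless Service that gives each replica a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: instance
spec:
  clusterIP: None
  selector:
    run: theTool
  ports:
  - port: 80
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: instance
spec:
  serviceName: instance
  replicas: 3          # creates instance-0, instance-1, instance-2
  template:
    metadata:
      labels:
        run: theTool
    spec:
      containers:
      - name: the-tool
        image: my-registry/the-tool:latest   # placeholder image
        ports:
        - containerPort: 80
</code></pre>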
<p>The documentation has more details:</p>
<ul>
<li><a href="http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/" rel="noreferrer">Using StatefulSets</a></li>
<li><a href="http://kubernetes.io/docs/tasks/manage-stateful-set/scale-stateful-set/" rel="noreferrer">Scaling a StatefulSet</a></li>
<li><a href="http://kubernetes.io/docs/tasks/manage-stateful-set/deleting-a-statefulset/" rel="noreferrer">Deleting a StatefulSet</a></li>
<li><a href="http://kubernetes.io/docs/tasks/manage-stateful-set/deleting-a-statefulset/" rel="noreferrer">Debugging a StatefulSet</a></li>
</ul>
|
<p>From what I can tell in the documentation, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noreferrer"><code>ReplicaSet</code></a> is created when running a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer"><code>Deployment</code></a>. It seems to support some of the same features of a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="noreferrer"><code>ReplicationController</code></a> - scale up/down and auto restart, but it's not clear if it supports rolling upgrades or autoscale.</p>
<p>The v1.1.8 user guide shows how to create a deployment in <a href="http://kubernetes.io/docs/user-guide/deploying-applications/" rel="noreferrer">Deploying Applications</a> (which automatically creates a <code>ReplicaSet</code>), yet the <code>kubectl get replicasets</code> command is not available until v1.2.0. I cannot find any other information about <code>ReplicaSet</code> in the documentation. </p>
<p>Will <code>ReplicaSet</code> eventually replace <code>ReplicationController</code>? Why would I want to use <code>Deployment</code> and <code>ReplicaSet</code> instead of <code>ReplicationController</code>?</p>
| <p>Replica Set is the next generation of Replication Controller. Replication controller is kinda imperative, but replica sets try to be as declarative as possible.</p>
<p>1.The main difference between a Replica Set and a Replication Controller right now is the selector support.</p>
<pre><code>+--------------------------------------------------+-----------------------------------------------------+
| Replica Set | Replication Controller |
+--------------------------------------------------+-----------------------------------------------------+
| Replica Set supports the new set-based selector. | Replication Controller only supports equality-based |
| This gives more flexibility. for eg: | selector. for eg: |
| environment in (production, qa) | environment = production |
| This selects all resources with key equal to | This selects all resources with key equal to |
| environment and value equal to production or qa | environment and value equal to production |
+--------------------------------------------------+-----------------------------------------------------+
</code></pre>
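<p>As an illustration of the set-based selector (a sketch only; the image and names are placeholders, and the API group shown is the one ReplicaSets used at the time):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchExpressions:
    - {key: environment, operator: In, values: [production, qa]}
  template:
    metadata:
      labels:
        environment: production   # must satisfy the selector above
    spec:
      containers:
      - name: app
        image: nginx
</code></pre>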
<p>2.The second thing is the updating the pods.</p>
<pre><code>+-------------------------------------------------------+-----------------------------------------------+
| Replica Set | Replication Controller |
+-------------------------------------------------------+-----------------------------------------------+
| rollout command is used for updating the replica set. | rolling-update command is used for updating |
| Even though replica set can be used independently, | the replication controller. This replaces the |
| it is best used along with deployments which | specified replication controller with a new |
| makes them declarative. | replication controller by updating one pod |
| | at a time to use the new PodTemplate. |
+-------------------------------------------------------+-----------------------------------------------+
</code></pre>
<p>These are the two things that differentiates RS and RC. Deployments with RS is widely used as it is more declarative. </p>
|
<p>Trying to understand how sticky sessions should be configured when working with a service of type=LoadBalancer in AWS.
My backend is 2 pods running a tomcat app.
I see that the service creates the AWS LB as well, and I set the right cookie value in the AWS LB configuration, but when accessing the system I keep switching between my pods/tomcat instances.</p>
<p>My service configuration </p>
<pre><code>kind: Service
apiVersion: v1
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
labels:
app: app1
name: AWSELB
namespace: local
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: 8080
selector:
app: app1
</code></pre>
<p>Are there any additional settings that are missing?
Thank you
Jack</p>
| <p>Can you try setting Client-IP based session affinity by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). (<a href="http://kubernetes.io/docs/user-guide/services/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/services/</a>)</p>
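<p>Applied to the Service from the question, that would only add one line under <code>spec</code>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1
</code></pre>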
<p>You can also try running an ingress controller which can better manage the routing internal, see: <a href="https://github.com/kubernetes/kubernetes/issues/13892#issuecomment-223731222" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/13892#issuecomment-223731222</a></p>
|
<p>Do you know if there is a way to dynamically change the namespace of a running RC or pod in kubernetes?</p>
<p>Thanks for your help.</p>
| <p>You can not change the namespace of a running resource:</p>
<p><strong>A resource with the same name might already exist in another namespace, making a rename difficult and unpredictable</strong></p>
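<p>Not part of the original answer, but the usual workaround is to recreate the resource in the target namespace (names below are placeholders):</p>
<pre><code>kubectl get rc my-rc --namespace=old-ns -o yaml > my-rc.yaml
# edit metadata.namespace in my-rc.yaml, then:
kubectl create -f my-rc.yaml
kubectl delete rc my-rc --namespace=old-ns
</code></pre>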
|
<p>I'd like to deploy kubernetes on a large physical server (24 cores) and I'm uncertain as to a number of things.</p>
<p>What are the pros and cons of creating virtual machines for the k8s cluster other than running on bare-metal.</p>
<p>I have the following considerations:</p>
<ul>
<li>Creating vms will allow for work load isolation. New vms for experiments can be created and assigned to devs.</li>
<li>On the other hand, with k8s running on bare metal a new NAMESPACE can be created for each developer for experimentation and they can run their code in it. After all their code should be running in docker containers.</li>
</ul>
<p><strong>Security:</strong></p>
<ul>
<li>Having vms would limit the amount of access given to future maintainers, limiting the amount of damage that could be done. While on the other hand the primary task for any future maintainers would be adding/deleting nodes and they would require bare metal access to do that.</li>
</ul>
<p><strong>Authentication:</strong></p>
<ul>
<li>At the moment devs would only touch the server when their code runs through the CI pipeline and their running deployments are deployed. But what about viewing logs? Could we setup tiered kubectl authentication to allow devs to only access whatever namespaces have been assigned to them (I believe this should be possible with the k8s namespace authorization plugin).</li>
</ul>
<p>A number of vms already exist on the server. Would this be an issue?</p>
| <p>128 cores and doubts.... That is a lot of cores for a single server.</p>
<p>For kubernetes however this is not relevant:
Kubernetes can use different sized servers and utilize them to the maximum. However if you combine the master server processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.</p>
<p>What we do is use continuous integration with namespaces in a single dev/qa kubernetes environment in which changes have their own namespace (so we run many, many namespaces) and run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this. This works with a large server like yours just as well as with smaller (or virtual) boxes. The benefit of virtualization for you would mainly be in splitting the large box into smaller ones so that you can also use it for purposes other than just kubernetes (which runs on pretty much anything except MS Windows; no desktops, no kernel modules for VPN purposes, etc).</p>
|
<p>I'm using kubernetes secret as my environment variable
(<a href="http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-environment-variables" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-environment-variables</a>).</p>
<p>I've checked whether the env vars are set correctly by </p>
<pre><code>kubectl exec -it my_pod bash
</code></pre>
<p>1.</p>
<pre><code>echo $RAILS_ENV #=> staging
</code></pre>
<p>2.</p>
<pre><code>bin/rails c;
puts ENV['RAILS_ENV'] #=> staging
</code></pre>
<p>It works fine for my rails application and bash command, but doesn't work when executing cron process.</p>
<p>I've read some post to understand cron process (e.g. <a href="https://unix.stackexchange.com/questions/27289/how-can-i-run-a-cron-command-with-existing-environmental-variables">https://unix.stackexchange.com/questions/27289/how-can-i-run-a-cron-command-with-existing-environmental-variables</a>) and know that I should source .profile or .bashrc before executing cron command like.</p>
<pre><code>$ crontab -l
* * * * * . $HOME/.profile; /bin/bash -l -c 'echo rails_env=$RAILS_ENV >> /tmp/cron_test.log'
* * * * * . $HOME/.bashrc; /bin/bash -l -c 'echo rails_env=$RAILS_ENV >> /tmp/cron_test.log'
</code></pre>
<p>but both don't work for me ˊˋ</p>
<p>result</p>
<pre><code>rails_env=
</code></pre>
<p>Any help will be appreciated!</p>
<p>===</p>
<p><strong>[update: add kubernetes yaml config]</strong></p>
<p>Both environment variables assign by kubernetes config and kubernetes secret can read in bash and rails application, but doesn't work in cron process.</p>
<p>deployment.yaml</p>
<pre><code>spec:
containers:
- name: my_rails
...
command:
- bash
args:
- /tmp/run_cron.sh
...
env:
- name: RAILS_ENV
value: staging
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: rails-secret
key: MYSQL_PASSWORD
</code></pre>
<p>secret.yaml</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: rails-secret
type: Opaque
data:
MYSQL_PASSWORD: ENCODED_PASSWORD
</code></pre>
| <p>Instead of running cron scripts inside the container, you should probably use <a href="http://kubernetes.io/docs/user-guide/cron-jobs/" rel="nofollow noreferrer">CronJobs</a>, where you can set env vars from secrets in the same way you do for deployments.</p>
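<p>A minimal sketch of such a CronJob, reusing the env/secret wiring from the question (the image is a placeholder, and at the time of this answer the resource sat in the alpha <code>batch/v2alpha1</code> API group, which has to be enabled on the apiserver):</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: my-rails-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: my-job
            image: my-registry/my-rails:latest   # placeholder image
            command: ["ruby", "my_job.rb"]
            env:
            - name: RAILS_ENV
              value: staging
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rails-secret
                  key: MYSQL_PASSWORD
</code></pre>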
|
<p>I'm trying to set up environment for microservices. I'm using fabric8 to do that.</p>
<p>I'm using <code>mvn fabric8:cluster-start -Dfabric8.cluster.kind=openshift</code> command. while executing i'm getting following error...</p>
<pre><code> [INFO] gofabric8> Downloading https://github.com/openshift/origin/releases/download/v1.3.1/openshift-origin-client-tools-v1.3.1-dad658de7465ba8a234a4fb40b5b446a45a4cee1-mac.zip...
[INFO] gofabric8> **Unable to unzip /Users/apple/.fabric8/bin/oc.zip zip: not a valid zip fileUnable to download client zip: not a valid zip file**
[INFO] gofabric8> using the executable /Users/apple/.fabric8/bin/minishift
[INFO] gofabric8> running: /Users/apple/.fabric8/bin/minishift start --vm-driver=xhyve --memory=4096 --cpus=1
[INFO] gofabric8> Starting local OpenShift cluster...
[INFO] gofabric8> Downloading ISO
[INFO] gofabric8>
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18:50 min
[INFO] Finished at: 2016-11-14T16:05:32+05:30
[INFO] Final Memory: 21M/224M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.49:cluster-start (default-cli) on project demo: Failed to execute gofabric8 start --batch --minishift --console. java.io.IOException: Failed to execute process stdin for gofabric8 start --batch --minishift --console: java.util.UnknownFormatConversionException: Conversion = ''' -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.49:cluster-start (default-cli) on project demo: Failed to execute gofabric8 start --batch --minishift --console. java.io.IOException: Failed to execute process stdin for gofabric8 start --batch --minishift --console: java.util.UnknownFormatConversionException: Conversion = '''
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
</code></pre>
<p>Any Idea?</p>
| <p>I had a similar problem today when I attempted to follow the fabric8 getting-started instructions here: <a href="https://fabric8.io/guide/getStarted/gofabric8.html" rel="nofollow noreferrer">https://fabric8.io/guide/getStarted/gofabric8.html</a>.
I used <code>gofabric8 start --minishift</code> and received this error:</p>
<pre><code>DSKTP-000003:~ usr$ gofabric8 start --minishift
fabric8 recommends OSX users use the xhyve driver
xhyve driver already installed
Downloading https://github.com/jimmidyson/minishift/releases/download/v1.0.0-beta.1/minishift-darwin-amd64...
Downloaded /Users/brent.fisher/.fabric8/bin/minishift
kubectl is already available on your PATH
Downloading https://github.com/openshift/origin/releases/download/v1.3.1/openshift-origin-client-tools-v1.3.1-dad658de7465ba8a234a4fb40b5b446a45a4cee1-mac.zip...
Unable to unzip /Users/brent.fisher/.fabric8/bin/oc.zip zip: not a valid zip fileUnable to download client zip: not a valid zip file
using the executable /Users/brent.fisher/.fabric8/bin/minishift
Unable to get status fork/exec /Users/brent.fisher/.fabric8/bin/minishift: exec format error
DSKTP-000003:~ usr$</code></pre>
<p>I am using gofabric8 version 0.4.112:</p>
<pre><code>gofabric8 version
gofabric8, version 0.4.112 (branch: 'master', revision: '50d5d75')
  build date: '20161129-10:39:49'
  go version: '1.7.1'
</code></pre>
<p>It seems that the minishift option tries to download a version of the OpenShift client (oc) that no longer exists [1.3.1] at that URL. I was able to get around the error by manually downloading OpenShift from here: <a href="https://github.com/openshift/origin/releases/tag/v1.3.2" rel="nofollow noreferrer">https://github.com/openshift/origin/releases/tag/v1.3.2</a>, extracting it, and renaming the extracted executable to <code>oc</code>.</p>
|
<p>How do I set the maximum size of the log files, or enable log rotation?
I have not found anything about this in the documentation.</p>
<p>Or is it necessary to write a script for this?</p>
| <p>I don't think kubernetes provides a log rotation feature right now. You can put a logrotate configuration on your host machine, something like this one:</p>
<pre><code>/var/lib/docker/containers/*/*.log {
rotate 7
daily
size=10M
compress
missingok
delaycompress
copytruncate
}
</code></pre>
|
<p>I'm wondering about a graceful way to reduce the number of nodes in a Kubernetes cluster on GKE.</p>
<p>I have some nodes, each of which has some pods watching a shared job queue and executing jobs. I also have a script which monitors the length of the job queue and increases the number of instances when the length exceeds a threshold by executing the <code>gcloud compute instance-groups managed resize</code> command, and it works OK.</p>
<p>But I don't know a graceful way to reduce the number of instances when the length falls below the threshold.</p>
<p>Is there any good way to stop the pods working on the terminating instance before the instance gets terminated, or any other good practice?</p>
<p>Note</p>
<ul>
<li>Each job can take between 30m and 1h</li>
<li>It is acceptable if a job gets executed more than once (in the worst case...)</li>
</ul>
| <p>I think the best approach is, instead of using a pod to run your tasks, to use the kubernetes Job object. That way, when the task is completed, the Job terminates the container. You would only need a small pod that initiates kubernetes Jobs based on the queue.</p>
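<p>A minimal sketch of such a Job (the image and command are placeholders for your actual worker, which would pull one item from the queue, process it, and exit):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: queue-worker-1        # the dispatcher pod would generate a unique name per task
spec:
  template:
    spec:
      restartPolicy: OnFailure      # the pod is re-created until it exits successfully
      containers:
      - name: worker
        image: my-worker-image      # placeholder: your worker image
        command: ["process-one-job"]   # placeholder: pull one item, process it, exit 0
</code></pre>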
<p>The more kube Jobs get created, the more resources will be consumed, and the cluster autoscaler will see that it needs to add more nodes. A kube Job needs to run to completion, so even if its pod gets terminated it will be re-scheduled to complete.</p>
<p>There is no direct information in the GKE docs about whether a downsize will happen if a Job is running on the node, but the stipulation seems to be that if a pod can be easily moved to another node and the node's resources are under-utilized, the autoscaler will drain the node.</p>
<p><strong>References</strong></p>
<ul>
<li><p><a href="https://cloud.google.com/container-engine/docs/cluster-autoscaler" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/cluster-autoscaler</a></p></li>
<li><p><a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/</a></p></li>
<li><p><a href="http://kubernetes.io/docs/user-guide/jobs/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/jobs/</a></p></li>
</ul>
|
<p>I've successfully deployed a Kubernetes cluster using the <a href="https://github.com/kubernetes/kube-deploy" rel="nofollow noreferrer">docker-multinode configuration</a> as well as a Ceph cluster and am able to mount a CephFS device manually using the following:</p>
<p><code>sudo mount -t ceph monitor1:6789:/ /ceph -o name=admin,secretfile=/etc/ceph/cephfs.secret</code></p>
<p>I'm now attempting to launch a pod using the kubernetes example <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/cephfs" rel="nofollow noreferrer">here</a>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: my-ceph-secret-key
---
apiVersion: v1
kind: Pod
metadata:
name: cephfs2
spec:
containers:
- name: cephfs-rw
image: kubernetes/pause
volumeMounts:
- mountPath: "/mnt/cephfs"
name: cephfs
volumes:
- name: cephfs
cephfs:
monitors:
- "monitor1:6789"
- "monitor2:6789"
- "monitor3:6789"
user: admin
secretRef:
name: ceph-secret
readOnly: false
</code></pre>
<p>When I run:</p>
<p><code>sudo kubectl create -f cephfs.yml</code></p>
<p>I am receiving the following error:</p>
<blockquote>
<p>Warning FailedMount MountVolume.SetUp failed for volume
"kubernetes.io/cephfs/445ee063-d1f1-11e6-a3e3-1418776a29a6-cephfs"
(spec.Name: "cephfs") pod "445ee063-d1f1-11e6-a3e3-1418776a29a6" (UID:
"445ee063-d1f1-11e6-a3e3-1418776a29a6") with: CephFS: mount failed:
mount failed: fork/exec /bin/mount: invalid argument Mounting
arguments: monitor1:6789,monitor2:6789,monitor3:6789:/data
/var/lib/kubelet/pods/445ee063-d1f1-11e6-a3e3-1418776a29a6/volumes/kubernetes.io~cephfs/cephfs
ceph [name=admin,secret=secret]</p>
</blockquote>
<p>Do the kubernetes manager containers need to have the ceph-fs-common package installed in order to perform a successful mount? I cannot find any further debugging information to determine the cause of the error.</p>
| <p>AFAIK you might have 2 problems here:</p>
<ul>
<li>Ceph requires the IP addresses of the machines to work</li>
<li>The OS you are running the container on is the one that mounts the storage: the ceph tooling needs to be installed on that machine. The container is completely unaware of the mounted disks.</li>
</ul>
|