<p>I'm attempting to create an FTP service with Kubernetes via Google Cloud. I've created the Docker image and exposed the necessary ports with: <code>EXPOSE 20 21 50000-52000</code>.</p>
<p>I've run into several problems so far; the biggest involves port ranges. ProFTPD needs a good number of ports available to handle passive connections, so I'm not quite sure how to create a service that allows this.</p>
<p>This led me to <a href="https://stackoverflow.com/questions/35603658/configure-port-range-mapping-into-containers-yaml-for-google-container-engine">this issue</a>, which mentions I should use <code>hostNetwork: true</code>, but that doesn't address the fact that each service needs well-defined ports. After some configuration changes, I was able to add the 2000 ports by defining them manually. When I did this, though, the Google API returned an error when trying to create the load balancer, because it only allows 100 ports in the array (though the console does appear to support ranges).</p>
<p>How do I go about adding this FTP service, and supporting the passive range?</p>
| <p>Kubernetes does not currently support port ranges. It's difficult to implement with the legacy (but still supported) userspace proxy.</p>
<p>I think there are a few GH issues open on this but <a href="https://github.com/kubernetes/kubernetes/issues/20420" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/20420</a> is one.</p>
|
<p>Everything in my Kubernetes configuration works great with a single TLS certificate. I define the cert as a <code>Secret</code> and reference it in my <code>Ingress</code> object configuration by name and my HTTPS traffic is served with the certificate. </p>
<p>My problem comes in when I try to use my other certificate as well. One of my certs has <code>www.DomainA.com</code> as the CN and <code>DomainA.com,DomainB.com,DomainC.com</code> as SANs. The other cert is a wildcard for <code>*.DomainA.com</code>.</p>
<p>When I add both certs to my Kubernetes configuration (with or without a defined <a href="http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingresstls" rel="nofollow"><code>hosts</code></a> field for either/both), only the first specified cert is added in the interface and served to any traffic. </p>
<p>What is odd is that when I run a <code>kubectl describe ingress my-ingress</code> I see:</p>
<pre><code>TLS:
non-wildcard-cert terminates DomainA.com,DomainB.com,DomainC.com
wildcard-cert terminates *.DomainA.com
</code></pre>
<p>So I know both certs were picked up by Kubernetes, it just doesn't seem to be used (or stored) in GCP.</p>
<p>How can I use both of these certs?</p>
| <p>Looks like, for now, GCP doesn't support the <a href="http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingresstls" rel="nofollow">Kubernetes spec</a> correctly according to the <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md#tls" rel="nofollow">GCE Ingress Controller doc</a>. When they do add the full SNI support from the spec, I should be able to do what I need.</p>
<p>I see that the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/multi-tls" rel="nofollow">nginx Ingress controller has SNI support</a> but I only want to use the official, transparent, Google-maintained Ingress controller.</p>
|
<p>I have the following deployment in kubernetes:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-node
  name: hello-node
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      run: hello-node
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-node
    spec:
      containers:
      - image: <image>:<tag>
        imagePullPolicy: Always
        name: hello-node
        livenessProbe:
          httpGet:
            path: /rest/hello
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 1
        ports:
        - containerPort: 8081
          protocol: TCP
        resources:
          requests:
            cpu: 400m
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
</code></pre>
<p>The issue is that when I update my deployment to, let's say, a new version of my image, <code>Kubernetes</code> will instantly kill both pods with the old image and bring up two new pods with the new image. While the new pods are booting up I experience an interruption of service.</p>
<p>Because of the <code>rollingUpdate</code> and the <code>livenessProbe</code> I'm expecting <code>Kubernetes</code> to do the following:</p>
<ol>
<li>Start one pod with the new image</li>
<li>Wait for the new pod to be healthy based on the <code>livenessProbe</code></li>
<li>Kill one pod with the old image</li>
<li>Repeat until all pods have been migrated</li>
</ol>
<p>Am I missing something here?</p>
| <p>What you need is <code>readinessProbe</code>.</p>
<p>The default state of <code>Liveness</code> before the initial delay is <code>Success</code>, whereas the default state of <code>Readiness</code> before the initial delay is <code>Failure</code>.</p>
<p>If you’d like your container to be killed and restarted if a probe fails, then specify a <code>LivenessProbe</code> and a <code>RestartPolicy</code> of <code>Always</code> or <code>OnFailure</code>.</p>
<p>If you’d like to start sending traffic to a pod only when a probe succeeds, specify a <code>ReadinessProbe</code>.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">container probes</a> for more details.</p>
<p>To have the rolling update behavior you described, set <code>maxSurge</code> to <code>1</code> (default value). This tells the Deployment to "scale up at most one more replica at a time". See <a href="http://kubernetes.io/docs/user-guide/deployments/#max-surge" rel="nofollow noreferrer">docs of <code>maxSurge</code></a> for more details.</p>
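<p>For illustration, a minimal sketch of a <code>readinessProbe</code> added alongside the existing <code>livenessProbe</code> in the container spec from the question (it reuses the question's <code>/rest/hello</code> endpoint purely as an example; in practice you'd point it at an endpoint that reflects readiness to serve traffic):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /rest/hello
    port: 8081
  initialDelaySeconds: 15
  timeoutSeconds: 1
</code></pre>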
|
<p>I have by mistake added a pod in the system namespace "kube-system". And now I am unable to remove this pod. It also seems to have created a replica set. Every time I delete these items, they are recreated. </p>
<p>I can't seem to find a way to delete pods or replica sets belonging to the system namespace "kube-system".</p>
| <p>If you created the pod using <code>kubectl run</code>, then you will need to delete the deployment (which created the replica set, which created the pod). Otherwise, the higher level controllers will continue to ensure that the objects they are responsible for keeping running stay around in the system, even if you try to delete them manually. Try <code>kubectl get deployment --namespace=kube-system</code> to see if you have a deployment in the <code>kube-system</code> namespace. If so, deleting it should also delete the replica set and the pods that you created. </p>
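<p>A minimal sketch of those commands (the deployment name is whatever you used with <code>kubectl run</code>):</p>
<pre><code># Find the deployment you created by accident
kubectl get deployments --namespace=kube-system

# Deleting it also removes the replica set and pods it manages
kubectl delete deployment <deployment-name> --namespace=kube-system
</code></pre>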
|
<p>Is there a way to use Fabric8 for a PHP project?<br>
All the samples I can find are for Java and Node.js.<br>
I am using Fabric8 on Google Container Engine with Kubernetes.</p>
| <p>There isn't at the moment, but we can look to add one for the next release. There's Java, Node.js, Go, Swift and Ruby already, and we can hopefully add PHP and .NET in the next few days. </p>
|
<p>Below is how I am using Kubernetes on Google.</p>
<p>I have one Node application, let's say <strong>Book-portal</strong>.</p>
<p>The Node app uses <strong>environment variables for configuration</strong>.</p>
<p><strong>Step 1:</strong> I created a Dockerfile and pushed</p>
<pre><code>gcr.io/<project-id>/book-portal:v1
</code></pre>
<p><strong>Step 2:</strong> deployed with the following command</p>
<pre><code>kubectl run book-portal --image=gcr.io/<project-id>/book-portal:v1 --port=5555 --env ENV_VAR_KEY1=value1 --env ENV_VAR_KEY2=value2 --env ENV_VAR_KEY3=value3
</code></pre>
<p><strong>Step3:</strong></p>
<pre><code>kubectl expose deployment book-portal --type="LoadBalancer"
</code></pre>
<p><strong>Step 4:</strong> Get the public IP with</p>
<pre><code>kubectl get services book-portal
</code></pre>
<p>Now assume I added new features and new configuration in the next release.</p>
<p><strong>So to roll out the new version v2:</strong></p>
<p><strong>Step 1:</strong> I created a Dockerfile and pushed</p>
<pre><code>gcr.io/<project-id>/book-portal:v2
</code></pre>
<p><strong>Step2:</strong> Edit deployment</p>
<pre><code>kubectl edit deployment book-portal
---------------yaml---------------
...
spec:
  replicas: 1
  selector:
    matchLabels:
      run: book-portal
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: book-portal
    spec:
      containers:
      - env:
        - name: ENV_VAR_KEY1
          value: value1
        - name: ENV_VAR_KEY2
          value: value2
        - name: ENV_VAR_KEY3
          value: value3
        image: gcr.io/<project-id>/book-portal:v1
        imagePullPolicy: IfNotPresent
        name: book-portal
...
----------------------------------
</code></pre>
<p>I am successfully able to change </p>
<pre><code>image:gcr.io/<project-id>/book-portal:v1
</code></pre>
<p>to</p>
<pre><code>image:gcr.io/<project-id>/book-portal:v2
</code></pre>
<p>But I cannot add/change environment variables:</p>
<pre><code>      - env:
        - name: ENV_VAR_KEY1
          value: value1
        - name: ENV_VAR_KEY2
          value: value2
        - name: ENV_VAR_KEY3
          value: value3
        - name: ENV_VAR_KEY4
          value: value4
</code></pre>
<ol>
<li>Can anyone suggest best practices for passing configuration to a Node app on Kubernetes?</li>
<li>How should I handle environment variable changes during rolling updates?</li>
</ol>
| <p>I think your best bet is to use ConfigMaps in k8s and then change your pod template to get the env variable values from the ConfigMap; see <a href="http://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">Consuming ConfigMap in pods</a>.</p>
<p>Edit: I apologize, I put the wrong link here. I have updated it, but for the TL;DR
you can do the following.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
</code></pre>
<p>and then pod usage can look like this.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: SPECIAL_TYPE_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.type
  restartPolicy: Never
</code></pre>
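<p>As a side note, a quick sketch of creating the same ConfigMap from the command line instead of a YAML file (the key names match the example above):</p>
<pre><code>kubectl create configmap special-config \
  --from-literal=special.how=very \
  --from-literal=special.type=charm
</code></pre>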
|
<p>I had some issues running some pods on a cluster. I want to know how to detect which pod (and RC) is causing OOM on my nodes after the exception is thrown. I cannot access the node to check logs, and <code>kubectl describe node</code> doesn't give me much information about this.</p>
<p>Thanks :)</p>
| <p>Have you tried running <code>kubectl get events --watch</code> to monitor the events on k8s, and monitoring the pod as well with <code>kubectl logs -f podname</code>?</p>
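<p>A minimal sketch of that approach, plus a grep over recent events to spot OOM kills (assumes a Unix shell; the pod name is a placeholder):</p>
<pre><code># Watch cluster events as they happen
kubectl get events --watch

# Or scan recent events across all namespaces for OOM-related messages
kubectl get events --all-namespaces | grep -i oom

# Follow the logs of a suspect pod
kubectl logs -f <podname>
</code></pre>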
|
<p>I am seeing a lot of errors in my logs relating to watches. Here's a snippet from my apiserver log on one machine:</p>
<pre><code>W0517 07:54:02.106535 1 reflector.go:289] pkg/storage/cacher.go:161: watch of *api.Service ended with: client: etcd cluster is unavailable or misconfigured
W0517 07:54:02.106553 1 reflector.go:289] pkg/storage/cacher.go:161: watch of *api.PersistentVolumeClaim ended with: client: etcd cluster is unavailable or misconfigured
E0517 07:54:02.120217 1 reflector.go:271] pkg/admission/resourcequota/admission.go:86: Failed to watch *api.ResourceQuota: too old resource version: 790115 (790254)
E0517 07:54:02.120390 1 reflector.go:271] pkg/admission/namespace/lifecycle/admission.go:126: Failed to watch *api.Namespace: too old resource version: 790115 (790254)
E0517 07:54:02.134209 1 reflector.go:271] pkg/admission/serviceaccount/admission.go:102: Failed to watch *api.ServiceAccount: too old resource version: 790115 (790254)
</code></pre>
<p>As you can see, there are two types of errors:</p>
<ul>
<li><code>etcd cluster is unavailable or misconfigured</code><br>
I am passing <code>--etcd-servers=http://k8s-master-etcd-elb.eu-west-1.i.tst.nonprod-ffs.io:2379</code> to the apiserver (this is definitely reachable). <a href="https://stackoverflow.com/questions/35673283/why-does-kubernetes-apiserver-present-a-bad-certificate-to-the-etcd-server">Another question</a> seems to suggest that this does not work, but <code>--etcd-cluster</code> is not a recognised option in the version I'm running (1.2.3)</li>
<li><code>too old resource version</code><br>
I've seen various mentions of this (eg. <a href="https://github.com/kubernetes/kubernetes/issues/22024" rel="nofollow noreferrer">this issue</a>) but nothing conclusive as to what causes it. I understand the default cache window is 1000, but the delta between versions in the example above is less than 1000. Could the error above be the cause of this?</li>
</ul>
| <p>I see that you are accessing etcd through an ELB proxy on AWS.</p>
<p>I have a similar setup, except that etcd is decoupled from the kube master onto its own 3-node cluster, hidden behind an internal ELB.</p>
<p>I can see the same errors from the kube-apiserver when it is configured to use the ELB. Without the ELB, configured as usual with a list of etcd endpoints, I don't see any errors.</p>
<p>Unfortunately, I don't know the root cause or why this is happening; I will investigate more.</p>
|
<p>I am trying to run a Kubernetes cluster on Google cloud. I am using the below link - <a href="http://kubernetes.io/docs/hellonode/" rel="nofollow">http://kubernetes.io/docs/hellonode/</a></p>
<p>When I execute the below command: <code>gcloud container clusters get-credentials hello-world</code></p>
<p>I get the error <code>Request had insufficient authentication scopes</code>.</p>
<p>What is the possible solution to this problem?</p>
| <p>The Container Engine API requires the <code>cloud-platform</code> <a href="https://cloud.google.com/storage/docs/authentication#oauth-scopes" rel="nofollow">OAuth2 Scope</a> for authentication. If you are running these commands from a Google Compute Engine instance, you'll need to create the instance with that authentication scope:</p>
<ul>
<li><p>With the gcloud CLI, add <code>--scopes=cloud-platform</code> to your <code>gcloud compute instances create</code> command.</p></li>
<li><p>In the Developer Console UI, select "Allow full access to all Cloud APIs" on the "Create an Instance" page.</p></li>
</ul>
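<p>For illustration, a minimal sketch of creating an instance with that scope (the instance name is a placeholder):</p>
<pre><code>gcloud compute instances create my-instance --scopes=cloud-platform
</code></pre>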
|
<p>I'm running a kubernetes cluster in which I am deploying a "cloud native hazelcast" following the instructions on the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/hazelcast" rel="nofollow noreferrer">kubernetes-hazelcast github page</a>. Once I have a number of hazelcast instances running, I try to connect a java client to one of the instances but for some reason the connection fails.</p>
<h3>Some background</h3>
<p>Using a kubernetes external endpoint I can connect to hazelcast from outside the kubernetes cluster. When I do a REST call with <code>curl kubernetes-master:32469/hazelcast/rest/cluster</code>, I get a correct response from hazelcast with it's cluster information. So I know my endpoint works.</p>
<p>The hazelcast-kubernetes deployment uses the <a href="https://github.com/pires/hazelcast-kubernetes-bootstrapper" rel="nofollow noreferrer">hazelcast-kubernetes-bootstrapper</a> which allows some configuration by setting environment variables with the replication controller, but I'm using all defaults. So my group and password are "someGroup" and "someSecret".</p>
<h3>The java client</h3>
<p>My Java client code is really straightforward:</p>
<pre><code>ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(0);
clientConfig.getNetworkConfig().setConnectionTimeout(10000);
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(2000);
clientConfig.getNetworkConfig().addAddress("kubernetes-master:32469");
clientConfig.getGroupConfig().setName("someGroup");
clientConfig.getGroupConfig().setPassword("someSecret");
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
</code></pre>
<p>When I start my client, this is the log output of the Hazelcast container:</p>
<pre><code>2016-07-05 12:54:38.143 INFO 5 --- [thread-Acceptor] com.hazelcast.nio.tcp.SocketAcceptor : [172.16.15.4]:5701 [someGroup] [3.5.2] Accepting socket connection from /172.16.29.0:54333
2016-07-05 12:54:38.143 INFO 5 --- [ cached4] c.h.nio.tcp.TcpIpConnectionManager : [172.16.15.4]:5701 [someGroup] [3.5.2] Established socket connection between /172.16.15.4:5701
2016-07-05 12:54:38.157 INFO 5 --- [.IO.thread-in-1] c.h.nio.tcp.SocketClientMessageReader : [172.16.15.4]:5701 [someGroup] [3.5.2] Unknown client type: <
</code></pre>
<p>And the console output of the client</p>
<pre><code>jul 05, 2016 2:54:37 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTING
jul 05, 2016 2:54:38 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_someGroup][3.6.2] is STARTED
jul 05, 2016 2:54:48 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 1 of 2147483647.
jul 05, 2016 2:54:58 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
WARNING: Unable to get alive cluster connection, try in 0 ms later, attempt 2 of 2147483647.
jul 05, 2016 2:55:08 PM com.hazelcast.client.spi.impl.ClusterListenerSupport
etc...
</code></pre>
<p>The client just keeps trying to connect but no connection is ever established.</p>
<h3>What am I missing?</h3>
<p>So why won't my client connect to the hazelcast instance? Is it some configuration part I'm missing?</p>
| <p>Not sure about the official Kubernetes support, however Hazelcast has a Kubernetes discovery plugin (based on the new discovery SPI) that works on both client and nodes: <a href="https://github.com/noctarius/hazelcast-kubernetes-discovery" rel="nofollow noreferrer">https://github.com/noctarius/hazelcast-kubernetes-discovery</a></p>
|
<p>I have installed Kubernetes in Ubuntu server using instructions <a href="https://github.com/kubernetes/minikube/blob/master/README.md" rel="noreferrer">here</a>. I am trying to create pods using <code>kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080</code> as listed in the example. However, when I do <code>kubectl get pod</code> I get the status of the container as <code>pending</code>. I further did <code>kubectl describe pod</code> for debugging and I see the message:</p>
<p><code>FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts</code>.</p>
<p>I am further trying to delete this pod with <code>kubectl delete pod hello-minikube-3383150820-1r4f7</code>, but when I then do <code>kubectl get pod</code> I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.</p>
| <p>The <code>PodFitsHostPorts</code> predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running <code>kubectl describe svc</code>.</p>
<p><code>kubectl run</code> creates a <code>deployment</code> object (you can see it with <code>kubectl describe deployments</code>) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run <code>kubectl delete deployments hello-minikube</code>.</p>
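<p>A minimal sketch of that cleanup, and of re-creating the deployment without the host-port mapping so scheduling no longer depends on a free port on the node (only do the second step if you don't actually need <code>--hostport</code>):</p>
<pre><code># Delete the deployment (and with it the pods it keeps recreating)
kubectl delete deployments hello-minikube

# Re-create it without --hostport
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
</code></pre>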
|
<p>Today when I launch an app using Kubernetes over AWS, it exposes a publicly visible LoadBalancer Ingress URL; however, to link that to my domain and make the app accessible to the public, I currently need to go into the AWS Route 53 console in a browser on every launch. Can I update the AWS Route 53 Resource Type A record to match the latest Kubernetes LoadBalancer Ingress URL from the command line?</p>
<p>Kubernetes over gcloud shares this challenge of having to either predefine a Static IP which is used in launch config or manually do a browser based domain linkage post launch. On aws I was hoping I could use something similar to this from the command line </p>
<pre><code>aws route53domains update-domain-nameservers ???
</code></pre>
<p>__ OR __ can I predefine an aws kubernetes LoadBalancer Ingress similar to doing a predefined Static IP when over gcloud ?</p>
<p>To show the deployed app's LoadBalancer Ingress URL, issue:</p>
<pre><code>kubectl describe svc
</code></pre>
<p>... output</p>
<pre><code>Name: aaa-deployment-407
Namespace: ruptureofthemundaneplane
Labels: app=bbb
pod-template-hash=4076262206
Selector: app=bbb,pod-template-hash=4076262206
Type: LoadBalancer
IP: 10.0.51.82
LoadBalancer Ingress: a244bodhisattva79c17cf7-61619.us-east-1.elb.amazonaws.com
Port: port-1 80/TCP
NodePort: port-1 32547/TCP
Endpoints: 10.201.0.3:80
Port: port-2 443/TCP
NodePort: port-2 31248/TCP
Endpoints: 10.201.0.3:443
Session Affinity: None
No events.
</code></pre>
<p>UPDATE:</p>
<p>Getting error trying new command line technique (hat tip to @error2007s comment) ... issue this</p>
<pre><code>aws route53 list-hosted-zones
</code></pre>
<p>... outputs</p>
<pre><code>{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 6,
            "CallerReference": "2D58A764-1FAC-DEB4-8AC7-AD37E74B94E6",
            "Config": {
                "PrivateZone": false
            },
            "Id": "/hostedzone/Z3II3949ZDMDXV",
            "Name": "chainsawhaircut.com."
        }
    ]
}
</code></pre>
<p>Important bit used below : hostedzone Z3II3949ZDMDXV</p>
<p>now I craft following <a href="http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html" rel="noreferrer">using this Doc</a> <a href="http://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html" rel="noreferrer">(and this Doc as well)</a> as file /change-resource-record-sets.json (NOTE I can successfully change Type A using a similar cli call ... however I need to change Type A with an Alias Target of LoadBalancer Ingress URL)</p>
<pre><code>{
  "Comment": "Update record to reflect new IP address of fresh deploy",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "chainsawhaircut.com.",
      "Type": "A",
      "TTL": 60,
      "AliasTarget": {
        "HostedZoneId": "Z3II3949ZDMDXV",
        "DNSName": "a244bodhisattva79c17cf7-61619.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
</code></pre>
<p>On the command line I then issue:</p>
<pre><code>aws route53 change-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --change-batch file:///change-resource-record-sets.json
</code></pre>
<p>which give this error message</p>
<pre><code>An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request
</code></pre>
<p>Any insights ?</p>
| <p>Here is the logic needed to update aws route53 Resource Record Type A with value from freshly minted kubernetes LoadBalancer Ingress URL</p>
<p>step 1 - identify your hostedzone Id by issuing</p>
<pre><code>aws route53 list-hosted-zones
</code></pre>
<p>... from output here is clip for my domain</p>
<pre><code>"Id": "/hostedzone/Z3II3949ZDMDXV",
</code></pre>
<p>... importantly, never populate the JSON with hostedzone Z3II3949ZDMDXV; it's only used as a CLI parameter ... there is a second, similarly named token HostedZoneId which is entirely different</p>
<p>step 2 - see current value of your route53 domain record ... issue :</p>
<pre><code>aws route53 list-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --query "ResourceRecordSets[?Name == 'scottstensland.com.']"
</code></pre>
<p>... output</p>
<pre><code>[
    {
        "AliasTarget": {
            "HostedZoneId": "Z35SXDOTRQ7X7K",
            "EvaluateTargetHealth": false,
            "DNSName": "dualstack.asomepriorvalue39e7db-1867261689.us-east-1.elb.amazonaws.com."
        },
        "Type": "A",
        "Name": "scottstensland.com."
    },
    {
        "ResourceRecords": [
            {
                "Value": "ns-1238.awsdns-26.org."
            },
            {
                "Value": "ns-201.awsdns-25.com."
            },
            {
                "Value": "ns-969.awsdns-57.net."
            },
            {
                "Value": "ns-1823.awsdns-35.co.uk."
            }
        ],
        "Type": "NS",
        "Name": "scottstensland.com.",
        "TTL": 172800
    },
    {
        "ResourceRecords": [
            {
                "Value": "ns-1238.awsdns-26.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"
            }
        ],
        "Type": "SOA",
        "Name": "scottstensland.com.",
        "TTL": 900
    }
]
</code></pre>
<p>... in above notice value of </p>
<pre><code>"HostedZoneId": "Z35SXDOTRQ7X7K",
</code></pre>
<p>which is the second, similarly named token. <strong><em>Do NOT use the wrong Hosted Zone ID</em></strong></p>
<p>step 3 - put below into your change file aws_route53_type_A.json <a href="https://oliverhelm.me/sys-admin/updating-aws-dns-records-from-cli" rel="nofollow noreferrer">(for syntax Doc see link mentioned in comment above)</a></p>
<pre><code>{
  "Comment": "Update record to reflect new DNSName of fresh deploy",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "EvaluateTargetHealth": false,
          "DNSName": "dualstack.a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com."
        },
        "Type": "A",
        "Name": "scottstensland.com."
      }
    }
  ]
}
</code></pre>
<p>To identify the value for the above field "DNSName" ... after the Kubernetes app deploy on AWS, it responds with a LoadBalancer Ingress as shown in the output of the CLI command:</p>
<pre><code>kubectl describe svc --namespace=ruptureofthemundaneplane
</code></pre>
<p>... as in </p>
<pre><code>LoadBalancer Ingress: a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com
</code></pre>
<p>... even though my goal is to execute a command line call I can do this manually by getting into the aws console browser ... pull up my domain on route53 ... </p>
<p><a href="https://i.stack.imgur.com/UIzqu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UIzqu.png" alt="Notice the green circle where the correct value of my LoadBalancer Ingress URL will auto appear in a self populating picklist (thanks aws)"></a> </p>
<p>... In this browser picklist editable text box (circled in green) I noticed the URL gets magically prepended with : dualstack. Previously I was missing that magic string ... so json key "DNSName" wants this </p>
<pre><code>dualstack.a0b82c81f47d011e6b98a0a28439e7db-1867261689.us-east-1.elb.amazonaws.com.
</code></pre>
<p>finally execute the change request</p>
<pre><code>aws route53 change-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV --change-batch file://./aws_route53_type_A.json
</code></pre>
<p>... output</p>
<pre><code>{
    "ChangeInfo": {
        "Status": "PENDING",
        "Comment": "Update record to reflect new DNSName of fresh deploy",
        "SubmittedAt": "2016-07-13T14:53:02.789Z",
        "Id": "/change/CFUX5R9XKGE1C"
    }
}
</code></pre>
<p>... now to confirm the change is live, run this to show the record:</p>
<pre><code>aws route53 list-resource-record-sets --hosted-zone-id Z3II3949ZDMDXV
</code></pre>
|
<p>I was following along with the <a href="http://kubernetes.io/docs/hellonode/#create-your-cluster" rel="noreferrer">Hello, World example</a> in Kubernetes getting started guide.</p>
<p>In that example, a cluster with 3 nodes/instances is created on Google Container Engine.</p>
<p>The <code>container</code> to be deployed is a basic nodejs http server, which listens on port 8080.</p>
<p>Now when I run <br>
<code>kubectl run hello-node --image <image-name> --port 8080</code> <br>
it creates a <code>pod</code> and a <code>deployment</code>, deploying the <code>pod</code> on one of nodes.</p>
<p>Running the <br>
<code>kubectl scale deployment hello-node --replicas=4</code> <br>
command increases the number of pods to 4.</p>
<p><em>But since each pod exposes port 8080, will it not create a port conflict on the node where two pods are deployed?
I can see 4 pods when I do <code>kubectl get pods</code>, however what will the behaviour be in this case?</em></p>
| <p>Got some help in <code>#kubernetes-users</code> <a href="http://slack.k8s.io" rel="noreferrer">channel</a> on slack :</p>
<ol>
<li>The port specified in <code>kubectl run ...</code> is that of a <code>pod</code>. And each pod has its unique IP address. So, there are no port conflicts.</li>
<li>The pods won’t serve traffic until and unless you expose them as a <code>service</code>.</li>
<li>Exposing a <code>service</code> by running <code>kubectl expose ...</code> assigns a <code>NodePort</code> (which is in range 30000-32000) on <em>every</em> <code>node</code>. This port must be unique for every service.</li>
<li>If a node has multiple pods <code>kube-proxy</code> balances the traffic between those pods. </li>
</ol>
<p>Also, when I accessed my service from the browser, I was able to see logs in all the 4 pods, so the traffic was served from all the 4 pods.</p>
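<p>For illustration, a minimal sketch of point 3, exposing the deployment as a service (Kubernetes picks the NodePort from the configured range unless you set one explicitly):</p>
<pre><code>kubectl expose deployment hello-node --type=NodePort --port=8080
</code></pre>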
|
<p>Trying to start this pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tinyproxy
spec:
  containers:
  - name: master
    image: asdrepo.isus.emc.com:8091/francisbesset/tinyproxy
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /tinyproxy-data
      name: data
  volumes:
  - name: data
    emptyDir: {}
</code></pre>
<p>This gets stuck in pending state. I looked in the troubleshooting guide, but this pod does not seem to have any events</p>
<pre><code>$ kubectl describe pods tinyproxy
Name: tinyproxy
Namespace: default
Node: /
Labels: name=tinyproxy
Status: Pending
IP:
Controllers: <none>
Containers:
master:
Image: asdrepo.isus.emc.com:8091/francisbesset/tinyproxy
Port: 6379/TCP
QoS Tier:
cpu: Guaranteed
memory: BestEffort
Limits:
cpu: 100m
Requests:
cpu: 100m
Environment Variables:
MASTER: true
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
No events.
</code></pre>
<p>Also</p>
<pre><code>$ kubectl get events
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
13m 13m 1 10.0.0.5 Node Normal Starting {kubelet 10.0.0.5} Starting kubelet.
13m 13m 2 10.0.0.5 Node Warning MissingClusterDNS {kubelet 10.0.0.5} kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "kube-proxy-10.0.0.5_kube-system(9fa6e0ea64b9f19ad6996367402408eb)". Falling back to DNSDefault policy.
13m 13m 1 10.0.0.5 Node Normal NodeHasSufficientDisk {kubelet 10.0.0.5} Node 10.0.0.5 status is now: NodeHasSufficientDisk
13m 13m 1 10.0.0.5 Node Normal Starting {kubelet 10.0.0.5} Starting kubelet.
13m 13m 1 10.0.0.5 Node Normal NodeHasSufficientDisk {kubelet 10.0.0.5} Node 10.0.0.5 status is now: NodeHasSufficientDisk
13m 13m 1 k8-dvawxybzux-0-a7m3diiryehx-kube-minion-itahxn4icom6 Node Normal Starting {kube-proxy k8-dvawxybzux-0-a7m3diiryehx-kube-minion-itahxn4icom6} Starting kube-proxy.
</code></pre>
<p>The proxy does seem to be running and is not restarting</p>
<pre><code>bash-4.3# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6dd779b301f gcr.io/google_containers/hyperkube:v1.2.0 "/hyperkube proxy --m" 15 minutes ago Up 15 minutes k8s_kube-proxy.d87e83d4_kube-proxy-10.0.0.5_kube-system_9fa6e0ea64b9f19ad6996367402408eb_caae92ac
8191770f15d9 gcr.io/google_containers/pause:2.0 "/pause" 15 minutes ago Up 15 minutes k8s_POD.6059dfa2_kube-proxy-10.0.0.5_kube-system_9fa6e0ea64b9f19ad6996367402408eb_e4da5a30
</code></pre>
<p>How do I debug this?</p>
| <p>Looks like the scheduler service did not start (this is in an openstack VM). All services were supposed to be configured and started automatically. This worked after I started the service manually.</p>
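<p>For anyone hitting the same thing, a rough sketch of checking and starting it manually, assuming the scheduler runs as a systemd unit named <code>kube-scheduler</code> (the unit name may differ in your setup):</p>
<pre><code># Check whether the scheduler is running
sudo systemctl status kube-scheduler

# Start it and enable it on boot
sudo systemctl start kube-scheduler
sudo systemctl enable kube-scheduler
</code></pre>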
|
<p>What exactly is the link between these two? How can I specify that a PersistentVolumeClaim must use a specific PersistentVolume? It seems to be sharing files between all PersistentVolumeClaims.</p>
| <p>Yes, this sharing, as you stated it, is the case, and you could say that it is at least very troubling if you want specific volumes for a given purpose. It is only beneficial if you have interchangeable volumes, which is often not the case.</p>
<blockquote>
<p>Scenario: Create an NFS volume for one database and a second volume for a second database. The databases have to be retained between restarts of the pods/complete system reboots and have to be mounted again without issues later on.</p>
</blockquote>
<p>To solve this scenario (within the constraints of Kubernetes) there are several possible solution paths:</p>
<ul>
<li><p>Use namespaces to prevent cross-use of the volumes. This, however, results in namespace issues, since containers have to talk over the external (or flat) network to communicate with each other when crossing namespaces. </p></li>
<li><p>Another possible solution is to create the mount points using OS mounts and then use the resulting local volume. This will work, but requires maintenance of the OS template, something which we were trying to prevent by using Kubernetes.</p></li>
<li><p>A third possible solution is to have the NFS mount executed from within your container, thus avoiding the persistent volume approach completely; see <a href="https://stackoverflow.com/questions/33552277/how-do-you-mount-an-external-nfs-share-in-kubernetes">How do you mount an external nfs share in Kubernetes?</a> for this</p></li>
</ul>
|
<p>As per the official documentation <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="nofollow">here</a> on running Kubernetes locally within a Container -- I have followed all the steps carefully, and I am still getting the message <code>connection refused</code> when I type <code>kubectl get nodes</code>. </p>
<p><code>docker ps</code> shows that <em>api-server</em> is not running, and <code>docker logs kubelet</code> does indeed verify so:</p>
<pre><code>[kubelet.go:1137] Unable to register 127.0.0.1 with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused.
</code></pre>
<p>After a little while, <code>docker logs kubelet</code></p>
<pre><code>E0711 16:07:06.814735 33792 event.go:202] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
</code></pre>
<p>Apparently, I am not alone in experiencing this problem. </p>
<hr>
<p>UPDATE:
After several hours, <code>docker logs kubelet</code></p>
<pre><code>E0712 08:28:03.528010 33792 pod_workers.go:138] Error syncing pod 4c6ab43ac4ee970e1f563d76ab3d3ec9, skipping: [failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=controller-manager pod=k8s-master-127.0.0.1_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
, failed to "StartContainer" for "apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=apiserver pod=k8s-master-127.0.0.1_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
, failed to "StartContainer" for "setup" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=setup pod=k8s-master-127.0.0.1_default(4c6ab43ac4ee970e1f563d76ab3d3ec9)"
]
</code></pre>
| <p>The easiest way to run Kubernetes locally now is <a href="http://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow">Minikube</a>, though I'd expect the local docker method to still be functional.</p>
<p>Does <code>docker ps -a</code> show any crashed kube-apiserver containers that might have any clues in their logs?</p>
|
<p>Hi, I'm trying to set up Stackdriver to monitor my containers, but the CPU metrics don't seem to work. I'm working with the following versions:</p>
<pre><code>Master Version 1.2.5
Node Version 1.2.4
heapster-v1.0.2-594732231-sil32
</code></pre>
<p>This is a group I created for the databases (it also happens for the WildFly pod and mod_cluster). I have a couple of other questions: </p>
<ol>
<li>Is it possible to monitor Postgres, or do I have to install the agent on
the Docker image?</li>
<li>Can I monitor the images on Kubernetes, or the disks on Google Cloud?</li>
</ol>
<p><a href="https://i.stack.imgur.com/XYZje.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XYZje.jpg" alt="enter image description here"></a></p>
| <p>Do your containers have CPU limits specified on them? The CPU Usage graph on that page is supposed to show utilization, which is defined as <code>cores used / cores reserved</code>. If a container hasn't specified a maximum number of cores, then it won't have a utilization either, as <a href="https://cloud.google.com/monitoring/api/metrics#gcp-container" rel="nofollow">mentioned in the description of the CPU utilization metric</a>.</p>
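<p>For reference, a minimal sketch of what a container spec fragment with a CPU limit looks like (the values are purely illustrative):</p>
<pre><code>resources:
  requests:
    cpu: 250m
  limits:
    cpu: 500m
</code></pre>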
|
<p>What's the suggested way to update the cluster version from 1.2 to 1.3?</p>
<p>Is there a structured way to do it or I have to create a new cluster from scratch?</p>
<p>Couldn't find any documentation regarding this.</p>
| <p>The answer partially depends on how you set up your cluster in the first place. If you used the <code>kube-up.sh</code> script with the environment set to <code>AWS</code>, then they don't currently provide an upgrade mechanism. If you used <a href="https://github.com/kubernetes/kops" rel="nofollow">kops</a> then you can use the built in <code>upgrade</code> command. </p>
<p>The reason that I said "partially" above is that many Kubernetes users have found it easier to lift and shift rather than upgrade in place when they are running on cloud infrastructure. The idea is that cluster deployment is a more well tested code path than cluster upgrades (especially on AWS). So you'd deploy a second cluster, re-provision your applications and services, shift your traffic from your existing cluster to your new cluster, and then delete your old cluster. </p>
<p>Once you have this strategy working, you can do it to shift to any desired cluster software version (upgrade or downgrade), and depending on the mechanism you use to shift traffic, you can also move across zones, regions, or even cloud providers. </p>
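<p>If the cluster happens to be kops-managed, a rough sketch of the in-place path looks like this (the cluster name is a placeholder; review each step's preview output before adding <code>--yes</code>):</p>
<pre><code>kops upgrade cluster --name my-cluster.example.com         # preview the proposed version change
kops upgrade cluster --name my-cluster.example.com --yes   # apply it to the cluster spec
kops update cluster --name my-cluster.example.com --yes    # push the new configuration
kops rolling-update cluster --name my-cluster.example.com --yes  # roll the nodes
</code></pre>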
|
<p>From the kubernetes <a href="http://kubernetes.io/docs/user-guide/pods/" rel="noreferrer">docs</a>: </p>
<blockquote>
<p>The applications in a pod all use the same network namespace (same IP and port space), and can thus “<em>find</em>” each other and communicate using localhost.</p>
</blockquote>
<p><em>Is it possible to use some container specific names instead of <code>localhost</code>?</em> </p>
<p>For example, with <code>docker-compose up</code>, you use <em>name of the service</em> to communicate. <a href="https://docs.docker.com/compose/networking/" rel="noreferrer">[docs]</a></p>
<p>So, if my <strong>docker-compose.yml</strong> file is</p>
<pre><code>version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  srv:
    build: .
    ports:
      - "3000:3000"
</code></pre>
<p>Then I access <code>srv</code> from within <code>web</code> by calling <code>http://srv:3000/</code>, <strong>not</strong> <code>http://localhost:3000</code></p>
<p>How can I achieve the same behaviour in kubernetes? Any way to specify what name to use in pods' yaml configuration?</p>
| <p><code>localhost</code> is just a name for the network loopback device (usually <code>127.0.0.1</code> for IPv4 and <code>::1</code> for IPv6). This is usually specified in your <code>/etc/hosts</code> file.</p>
<p>A pod has its own IP, so each container inside shares that IP. If these containers should be independent (i.e. don't need to be collocated), they should each be in their own pod. Then, you can define a <a href="http://kubernetes.io/docs/user-guide/services/#defining-a-service" rel="noreferrer">service</a> for each that allows DNS lookups as either "$SERVICENAME" from pods in the same namespace, or "$SERVICENAME.$NAMESPACE" from pods in different namespaces.</p>
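<p>For illustration, a minimal sketch of such a service for the <code>srv</code> container from the question, assuming its pod carries the label <code>app: srv</code>; pods in the same namespace could then reach it at <code>http://srv:3000/</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: srv
spec:
  selector:
    app: srv
  ports:
  - port: 3000
    targetPort: 3000
</code></pre>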
|
<p>I am still experiencing similar issues on 1.3.0 and up to 1.4.0-alpha.0.</p>
<p>In my case (a Docker-based setup), either trusty or kubedns would get unauthorized responses from the API server. </p>
<p>Strangely, I see that the secrets are not there inside the instances, under the path /var/run/secrets/kubernetes.io/serviceaccount: </p>
<pre><code>[root@ ... ]# kubectl exec -it kube-dns-v13-htfjo ls /bin/sh
/ #
/ # ls /var/run/secrets/kubernetes.io/serviceaccount
/ #
</code></pre>
<p>While it seems they are in the node and in the proxy instance </p>
<pre><code>tmpfs on /var/lib/kubelet/pods/3de53b0c-45bb-11e6-9f03-08002776167a/volumes/kubernetes.io~secret/default-token-8axd8 type
tmpfs on /var/lib/kubelet/pods/3de5591e-45bb-11e6-9f03-08002776167a/volumes/kubernetes.io~secret/default-token-8axd8 type
tmpfs on /var/lib/kubelet/pods/f29f35c7-45cc-11e6-9f03-08002776167a/volumes/kubernetes.io~secret/default-token-ql88q type
</code></pre>
<ul>
<li>Deleting the secret and deleting the pods then recreating them has no effect </li>
<li>Restarting cluster after unmounting & deleting the folders has no effect either </li>
</ul>
<p>Naturally this results in kubedns being unable to start. Log below: </p>
<pre><code>I0709 09:04:11.578816 1 dns.go:394] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0709 09:04:11.578873 1 dns.go:427] records:[], retval:[], path:[local cluster svc default kubernetes]
I0709 09:04:11.579657 1 dns.go:394] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0709 09:04:11.579677 1 dns.go:427] records:[], retval:[], path:[local cluster svc default kubernetes]
E0709 09:04:11.786646 1 reflector.go:216] pkg/dns/dns.go:128: Failed to list *api.Service: serializer for text/html; charset=utf-8 doesn't exist
E0709 09:04:11.786995 1 reflector.go:216] pkg/dns/dns.go:127: Failed to list *api.Endpoints: serializer for text/html; charset=utf-8 doesn't exist
I0709 09:04:12.488674 1 dns.go:145] Ignoring error while waiting for service default/kubernetes: serializer for text/html; charset=utf-8 doesn't exist. Sleeping 1s before retrying.
E0709 09:04:12.879701 1 reflector.go:216] pkg/dns/dns.go:128: Failed to list *api.Service: serializer for text/html; charset=utf-8 doesn't exist
E0709 09:04:12.880000 1 reflector.go:216] pkg/dns/dns.go:127: Failed to list *api.Endpoints: serializer for text/html; charset=utf-8 doesn't exist
I0709 09:04:13.582561 1 dns.go:145] Ignoring error while waiting for service default/kubernetes: serializer for text/html; charset=utf-8 doesn't exist. Sleeping 1s before retrying.
</code></pre>
| <p>This one seems to be a bug still open</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/26943">https://github.com/kubernetes/kubernetes/issues/26943</a></p>
<p>The workaround that works is to add the rslave option to the kubelet mount, like <strong>--volume=/var/lib/kubelet:/var/lib/kubelet:rw,rslave</strong>, as shown below. </p>
<p>This solution is also platform dependent. Read the notes in the bug report. </p>
<pre><code>## Start kubernetes master
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
**--volume=/var/lib/kubelet:/var/lib/kubelet:rw,rslave** \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
</code></pre>
|
<p>I have a Kubernetes cluster set up in AWS.
We are using the EC2 Container Registry to store our Docker images.
We have the master/minions all set up and everything seems to be working with the cluster.</p>
<p>My spec file is as followed:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: apim-mysql
  labels:
    app: apim
spec:
  ports:
    # the port that this service should serve on
    - port: 3306
  selector:
    app: apim
    tier: mysql
  clusterIP: None
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: apim-mysql
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apim
        tier: mysql
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: mysql
          image: <This points to our EC2 Container Registry and retrieves the image>
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: WSO2_ZIP_FILE
              value: wso2am-1.10.1-SNAPSHOT.zip
            - name: WSO2_VERSION
              value: 1.10.1
            - name: MYSQL_RANDOM_ROOT_PASSWORD
              value: 'yes'
            - name: MYSQL_USER
              value: username
            - name: MYSQL_USER_PASSWD
              value: password
            - name: GET_HOSTS_FROM
              value: dns
              # If your cluster config does not include a dns service, then to
              # instead access environment variables to find service host
              # info, comment out the 'value: dns' line above, and uncomment the
              # line below.
              #value: env
          ports:
            - containerPort: 3306
              name: mysql
</code></pre>
<p>What this container does is just set up MySQL.
We will need other nodes within the cluster to connect to this node,
because they will need to use the MySQL DB.</p>
<p>I guess my first question is: does everything look okay with this spec file?
Or does anyone see something wrong?</p>
<p>I do the kubectl create command and it runs successfully:</p>
<pre><code>kubectl create -f mysql.yaml
service "apim-mysql" created
deployment "apim-mysql" created
</code></pre>
<p>kubectl get pods shows the pod running:</p>
<pre><code>apim-mysql-545962574-w2qz1 1/1 Running 1 8m
</code></pre>
<p>Sometimes when doing <code>kubectl logs</code> I get an error showing this:</p>
<pre><code>kubectl logs apim-mysql-545962574-w2qz1
Error from server: dial unix /var/run/docker.sock: no such file or directory
</code></pre>
<p>But with enough retries it eventually goes through... if anyone has information on why that occurs, it would be great.</p>
<p>When it does work, I get something like this:</p>
<pre><code>kubectl logs apim-mysql-545962574-w2qz1
Initializing database
2016-07-13T15:51:47.375052Z 0 [Warning] InnoDB: New log files created, LSN=45790
2016-07-13T15:51:52.029915Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2016-07-13T15:51:53.531183Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: b837bd45-4911-11e6-99ba-02420af40208.
2016-07-13T15:51:53.746173Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2016-07-13T15:51:53.746621Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2016-07-13T15:52:19.891437Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2016-07-13T15:52:19.891705Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-07-13T15:52:19.891733Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2016-07-13T15:52:19.891778Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2016-07-13T15:52:19.891831Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Database initialized
MySQL init process in progress...
2016-07-13T15:52:34.632188Z 0 [Note] mysqld (mysqld 5.7.13) starting as process 49 ...
2016-07-13T15:52:49.814764Z 0 [Note] InnoDB: PUNCH HOLE support available
2016-07-13T15:52:49.814846Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-07-13T15:52:49.814859Z 0 [Note] InnoDB: Uses event mutexes
2016-07-13T15:52:49.814870Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2016-07-13T15:52:49.814928Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-07-13T15:52:49.814932Z 0 [Note] InnoDB: Using Linux native AIO
2016-07-13T15:52:50.243657Z 0 [Note] InnoDB: Number of pools: 1
2016-07-13T15:52:52.175079Z 0 [Note] InnoDB: Using CPU crc32 instructions
MySQL init process in progress...
MySQL init process in progress...
MySQL init process in progress...
MySQL init process in progress...
MySQL init process in progress...
MySQL init process in progress...
MySQL init process in progress...
</code></pre>
<p>After a little bit the pod seems to restart and it says "Initializing database" again.</p>
<p>A couple of days ago, when I ran kubectl logs it would return everything right away and be very fast; now it is very slow and doesn't really show anything. I have not really changed my spec file at all, so I have no idea what is going on. To me it seems like the container isn't getting executed properly even when it does show some logs... but I'm unsure.</p>
<p>If anyone has any clue on how to debug this further with some commands I can run it would be very appreciated. I am very stuck at this point and have google'd so much but with no luck.</p>
<p>thanks!</p>
| <p>My experience using Kubernetes on AWS while getting unhelpful errors such as</p>
<pre><code>Error from server: dial unix /var/run/docker.sock: no such file or directory
</code></pre>
<p>was resolved by <a href="https://aws.amazon.com/ec2/instance-types/" rel="nofollow">choosing a more hefty aws cluster instance type</a> ... here are relevant env vars</p>
<pre><code>export MASTER_SIZE=t2.medium
export NODE_SIZE=t2.medium
export NUM_NODES=2 # if not defined aws will auto guess
</code></pre>
<p>... also remove mention of resource limiting settings under tag <code>resources</code> until after it runs OK</p>
<p>The following commands are essential ... just leave off the namespace if you are not using one:</p>
<pre><code>kubectl describe svc --namespace=xxx

kubectl get pods --namespace=xxx

kubectl describe pods --namespace=xxx

kubectl describe nodes
</code></pre>
<p>Also nice is the ability to perform a live edit of a deployment ... first see your deployments ... issue </p>
<pre><code>kubectl get deployments --namespace=ruptureofthemundaneplane
</code></pre>
<p>... output</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
loudspeed-deployment 1 1 1 1 1h
mongo-deployment 1 1 1 1 1h
</code></pre>
<p>So now we know the name of the deployment; to do a live edit, issue</p>
<pre><code>kubectl edit deployment/mongo-deployment
</code></pre>
<p>which will open an edit session in your terminal using the default editor, where you can change settings at will</p>
<p>I find when troubleshooting a database deployment it's handy to also launch an image using the below Dockerfile ... this allows you to log in using exec, as per</p>
<pre><code>kubectl exec -ti $(kubectl get pods --namespace=${PROJECT_ID}|grep ${GKE_NODEDEPLOYMENT}|cut -d' ' -f1) --namespace=${PROJECT_ID} -c ${GKE_NGINX} -- bash
</code></pre>
<p>where you are free to run an interactive database login session (once you install the needed client code, or put the same into the Dockerfile below) ... here is the matching Dockerfile for this troubleshooting deployment container</p>
<pre><code>FROM ubuntu:16.04
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y wget curl
COPY .bashrc /root/
# ENTRYPOINT ["/bin/bash"]
CMD ["/bin/bash"]
#
# docker build --tag stens_ubuntu .
#
# docker run -d stens_ubuntu sleep infinity
#
# docker ps
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti 3cea1993ed28 bash
#
#
</code></pre>
|
<p>We're running a kubernetes cluster on the Google Cloud Platform, which creates a Deployment with 8 hazelcast-based replicas. We've had this running fine for over a month, but recently, we started receiving the below error message whenever we try to start our deployment (non-relevant stack frames omitted):</p>
<pre><code>2016-07-15 12:58:02,117 [My-hazelcast.my-deployment-368708980-8v7ig @ my-deployment-368708980-8v7ig] ERROR - [10.68.5.3]:5701 [MyProject] [3.6.2] Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/my-service. Cause: Received fatal alert: protocol_version
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/my-service. Cause: Received fatal alert: protocol_version
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:272) ~[kubernetes-client-1.3.66.jar:na]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:205) ~[kubernetes-client-1.3.66.jar:na]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:196) ~[kubernetes-client-1.3.66.jar:na]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:483) ~[kubernetes-client-1.3.66.jar:na]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:108) ~[kubernetes-client-1.3.66.jar:na]
at com.noctarius.hazelcast.kubernetes.ServiceEndpointResolver.resolve(ServiceEndpointResolver.java:62) ~[hazelcast-kubernetes-discovery-0.9.2.jar:na]
at com.noctarius.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy.discoverNodes(HazelcastKubernetesDiscoveryStrategy.java:74) ~[hazelcast-kubernetes-discovery-0.9.2.jar:na]
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.discoverNodes(DefaultDiscoveryService.java:74) ~[hazelcast-all-3.6.2.jar:3.6.2]
....
Caused by: javax.net.ssl.SSLException: Received fatal alert: protocol_version
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[na:1.7.0_95]
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154) ~[na:1.7.0_95]
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1991) ~[na:1.7.0_95]
...
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:203) ~[kubernetes-client-1.3.66.jar:na]
... 18 common frames omitted
</code></pre>
<p>When I google this error, I get a lot of hits about TLS protocol version mismatch. Apparently, Java 8 assumes a different TLS protocol version (TLS 1.2) than Java 7 and 6(TLS 1.0). However, all of our containers run the same docker image (based off of the <a href="https://hub.docker.com/r/hazelcast/hazelcast/" rel="nofollow">hazelcast/hazelcast:3.6.2</a> image), which is based off of Java 7, so there should be no protocol version mismatch (and this layer of our image has not changed).</p>
<p>We've tried to revert all of our recent changes in an attempt to resolve this error, to no avail. And frankly, nobody on our team has changed anything recently related to SSL or the Hazelcast Kubernetes discovery mechanism. We recently updated our Google Cloud SDK components (<code>gcloud components update</code>) at the urging of the Cloud SDK tools ("Updates are available for some Cloud SDK components."). We're now running Google Cloud SDK version 117.0.0, but I don't see any breaking changes related to SSL or TLS in the <a href="https://cloud.google.com/sdk/release_notes" rel="nofollow">release notes</a>.</p>
<p>Why would we suddenly start seeing this "<code>fatal alert: protocol_version</code>" error message in our kubernetes pods, and how can I resolve it?</p>
| <p>The initial google searches indicating this was a TLS version error (version 1.0 vs 1.2 incompatibility) turned out to be useful. <a href="https://stackoverflow.com/a/33494593/13140">This answer</a> to a question about a similar SSLException protocol_version error is what pointed me in the right direction.</p>
<p>I got a test container to run, and using <code>kubectl exec my-test-pod -i -t -- /bin/bash -il</code> to launch an interactive bash shell into the container, I determined that the Hazelcast discovery service could <em>NOT</em> connect using TLS 1.0, but could using TLS 1.2:</p>
<pre><code>/opt/hazelcast# curl -k --tlsv1.0 https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/my-service
curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
/opt/hazelcast# curl -k --tlsv1.2 https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/my-service
Unauthorized # <-- Unauthorized is expected, as I didn't specify a user/passwd.
</code></pre>
<p>I am still not sure what exactly changed: possibly a layer of a public Docker container we use, possibly something within the Google cloud service (Java 7 is End of Life, after all), and the fine folks at Hazelcast suggested perhaps the REST API had been updated. But evidently <em>something</em> changed that was causing the endpoint the discovery service talks to to expect TLS version 1.2 from clients.</p>
<p>The solution was to <a href="https://github.com/hazelcast/hazelcast-docker/releases" rel="nofollow noreferrer">download the Hazelcast Docker image</a> we were using, and tweak it to use Java 8 instead of Java 7, and then rebuild the image in our own development sandbox:</p>
<pre><code>$ pwd
/home/jdoe/devel/hazelcast-docker-3.6.2/hazelcast-oss
$ head -n3 Dockerfile
FROM java:8
ENV HZ_VERSION 3.6.2
ENV HZ_HOME /opt/hazelcast/
</code></pre>
<p>Voila! Our Deployment is running again.</p>
|
<p>New to Kubernetes. What I understood is that each Kubernetes node uses the Docker pause image to hold namespace information. My question is: which pause image goes with which Kubernetes version, and how do I find that out? If I am using Kubernetes 1.3.2, which pause image version should I use?</p>
| <p>kubelet has a default infra container image that it uses and it is hard-coded in each version. In normal circumstances, users should not need to manually set the image. In some cases, where people want to use their customized image, they can override this by passing a <code>--pod-infra-container-image</code> flag to kubelet.</p>
<p><a href="http://kubernetes.io/docs/admin/kubelet" rel="nofollow">http://kubernetes.io/docs/admin/kubelet</a></p>
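<p>For example, a hypothetical override looks like this; the image name and tag below are only placeholders, so match them to whatever custom infra image you actually built:</p>
<pre><code># illustrative only -- the image reference is a placeholder
kubelet --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
        ...other kubelet flags...
</code></pre>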
|
<p>My kubernetes cluster is hosted on Google Cloud on <code>europe-west1-d</code> region</p>
<p>My local setup have [email protected] and [email protected]</p>
<p>I managed to deploy without any issue when my cluster was on version 1.2.5</p>
<p>But since I upgraded to 1.3.0, I've got this:
<code>
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean", BuildDate:"2016-07-17T18:30:39Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
error: failed to negotiate an api version; server supports: map[], client supports: map[federation/v1beta1:{} apps/v1alpha1:{} authorization.k8s.io/v1beta1:{} authentication.k8s.io/v1beta1:{} autoscaling/v1:{} policy/v1alpha1:{} batch/v1:{} batch/v2alpha1:{} v1:{} rbac.authorization.k8s.io/v1alpha1:{} componentconfig/v1alpha1:{} extensions/v1beta1:{}]
</code></p>
<p>Notice the: <code>server supports: map[]</code></p>
| <p>Did you set a custom user name (other than admin) when you created your cluster? Kubernetes 1.3.0 on GKE has a known issue (see the <a href="https://cloud.google.com/container-engine/release-notes#july_11_2016" rel="nofollow">July 11, 2016 release notes</a>) where authorization fails if you try to authenticate using http basic auth. As described in the release note, you can use client certificate authentication until a fix is available. </p>
|
<p>Heeey all, I need some help with getting the dashboard to work. My dashboard pod has status "Pending" and if I do a curl call to <a href="http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard" rel="nofollow">http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard</a> then I get this result: </p>
<p>"no endpoints available for service \"kubernetes-dashboard\""</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
</code></pre>
<p>All pods</p>
<pre><code>core@helena-coreos ~ $ ./kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-apiserver-146.185.128.27 1/1 Running 0 5d
kube-apiserver-37.139.31.151 1/1 Running 0 7d
kube-controller-manager-146.185.128.27 1/1 Running 0 19h
kube-controller-manager-37.139.31.151 1/1 Running 0 16h
kube-dns-v11-ika0m 0/4 Pending 0 19h
kube-proxy-146.185.128.27 1/1 Running 0 5d
kubernetes-dashboard-1775839595-1h0lt 0/1 Pending 0 19h
</code></pre>
<p>Describe pod:</p>
<pre><code>core@helena-coreos ~ $ ./kubectl describe pod kubernetes-dashboard-1775839595-1h0lt --namespace="kube-system"
Name: kubernetes-dashboard-1775839595-1h0lt
Namespace: kube-system
Node: /
Labels: app=kubernetes-dashboard,pod-template-hash=1775839595
Status: Pending
IP:
Controllers: ReplicaSet/kubernetes-dashboard-1775839595
Containers:
kubernetes-dashboard:
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
Port: 9090/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment Variables:
Volumes:
default-token-mn7e9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mn7e9
No events.
</code></pre>
<p>Service configuration:</p>
<pre><code>core@helena-coreos ~ $ ./kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-07-10T22:25:03Z
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "58669"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: 25d56060-46ed-11e6-9817-040124359901
spec:
clusterIP: 10.3.0.67
ports:
- nodePort: 32014
port: 80
protocol: TCP
targetPort: 9090
selector:
app: kubernetes-dashboard
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>I also tried to find some logs:</p>
<pre><code>core@helena-coreos ~ $ ./kubectl logs kubernetes-dashboard-1775839595-1h0lt --namespace=kube-system
core@helena-coreos ~ $
</code></pre>
<p>I'm having a really hard time figuring out why these pods are stuck in Pending.</p>
<p>Already thanks in advance.</p>
| <p>The other possibility is that you have no resources in the cluster and the scheduler fails to schedule the Dashboard UI. If this is true you can see this in cluster events (<code>kubectl get events --namespace=kube-system</code>).</p>
|
<p>According to the user guide it should be possible to <a href="http://kubernetes.io/docs/user-guide/deployments/#updating-a-deployment" rel="noreferrer">update a deployment's image</a> with <code>kubectl set/edit</code> and still get a meaningful <a href="http://kubernetes.io/docs/user-guide/deployments/#checking-rollout-history-of-a-deployment" rel="noreferrer">history</a>.</p>
<p>However I'm getting some errors instead, and no meaningful history; could this be a bug? </p>
<pre><code>$ kubectl create -f kubernetes/deployment.yml --record
deployment "nginx" created
$ kubectl set image deployment/nginx nginx=nginx:0.2.0
deployment "nginx" image updated
changes to deployments/nginx can't be recorded: Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
$ kubectl set image deployment/nginx nginx=nginx:0.2.1
deployment "nginx" image updated
changes to deployments/nginx can't be recorded: Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
$ kb rollout history deployment/nginx
deployments "nginx":
REVISION CHANGE-CAUSE
1 kubectl create -f kubernetes/deployment.yml --record
2 kubectl create -f kubernetes/deployment.yml --record
3 kubectl create -f kubernetes/deployment.yml --record
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:13Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"clean", BuildDate:"2016-07-01T19:19:19Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Thanks!</p>
| <p>This is due to an update conflict when we update the deployment's annotation (for recording its change history). This means that the deployment object is modified (most likely by the server) when <code>kubectl</code> updates the deployment change history. <code>kubectl set image</code> currently won't retry the change history update on conflict for you. </p>
<p>I've filed a <a href="https://github.com/kubernetes/kubernetes/pull/29300" rel="nofollow">fix</a> to mitigate this.</p>
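<p>Until that lands, one possible workaround is to write the change cause yourself after each image update. This just sets the <code>kubernetes.io/change-cause</code> annotation that the rollout history displays; the command text can be whatever you want to appear in the history:</p>
<pre><code>kubectl set image deployment/nginx nginx=nginx:0.2.0
kubectl annotate deployment/nginx \
  kubernetes.io/change-cause="kubectl set image deployment/nginx nginx=nginx:0.2.0" --overwrite
</code></pre>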
|
<p>I managed to have ribbon dynamically discover instances in a <code>k8s</code> cluster using <a href="https://github.com/fabric8io/kubeflix" rel="nofollow">kubeflix</a> and <a href="https://github.com/fabric8io/spring-cloud-kubernetes" rel="nofollow">spring-cloud-kubernetes</a></p>
<p>This was when I manually used <code>Ribbon</code> to communicate between my microservices.</p>
<p><code>Zuul</code> automatically uses <code>Ribbon</code> for the routes defined in its configuration.</p>
<p>Has anyone managed to enable <code>Ribbon</code> discovery for <code>Zuul</code>? I think I would need to override the instance of <code>LoadBalancer</code> for each of the routes. Any ideas how to do that?</p>
| <p>That was actually quite easy. You only need to specify the <code>NIWSServerListClassName</code>, the <code>k8s</code> namespace and the k8s port name in the <code>Ribbon</code> configuration:</p>
<pre><code>service-name:
ribbon:
NIWSServerListClassName: io.fabric8.kubeflix.ribbon.KubernetesServerList
KubernetesNamespace: uat
PortName: tcp80 #make sure you this matches the port name in the k8s service (kubectl describe svc service-name)
</code></pre>
<p>Then the <code>Zuul</code> route can refer to the service:</p>
<pre><code>zuul:
routes:
rm-data-store:
path: /foo/**
retryable: true
service-id: service-name
</code></pre>
|
<p>In the docker run command, we can specify a host port range to bind to an EXPOSEd container port. I want to do the same thing through Kubernetes. Does anyone know how to do that? My current pod definition is as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx-testing
spec:
containers:
- name: nginx-container
image: docker.io/nginx
ports:
- containerPort: 80
hostPort: 9088
</code></pre>
<p>At the last line, instead of specifying a single port number, I want a range of port numbers. I tried something like <code>hostPort: 9088-9999 or 9088..9999</code>, but that didn't work. </p>
| <p>Port ranges are not currently supported in any of the Kubernetes API objects. There is an open <a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="noreferrer">issue</a> discussing port ranges in services. Please add your use case and your thoughts!</p>
|
<p>I have a kubernetes cluster setup where I am trying to publish a message to google cloud pub/sub from my pod. When the POST call (created by the API behind the scenes) is being made by the pod, it fails citing the issue below:</p>
<p><code>2016/07/21 10:31:24 Publish failed, Post https://pubsub.googleapis.com/v1/projects/<project-name>/topics/MyTopic:publish?alt=json: x509: certificate signed by unknown authority</code></p>
<p>I have already put a self-signed certificate in the /etc/ssl/certs of my Docker Debian image. Do I need to purchase an SSL certificate signed by a certificate authority, or will a self-signed one do the job and I am missing something here?</p>
| <p>Self-signed certificates will not work. The certificate needs to be signed by a certificate authority.</p>
|
<p><a href="http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a></p>
<p>"This guide will only get ONE node working. Multiple nodes requires a functional networking configuration done outside of kubernetes." </p>
<p>So, is a node made up of many hosts?<br>
I thought a cluster is made up of many hosts. Is the cluster made up of many nodes instead? </p>
<p>Does each node have a master and minions, so that a cluster has more than one master?</p>
| <ul>
<li><strong>Host</strong>: some machine (physical or virtual)</li>
<li><strong>Master</strong>: a host running Kubernetes API server and other master systems</li>
<li><strong>Node</strong>: a host running <code>kubelet</code> + <code>kube-proxy</code> that pods can be scheduled onto</li>
<li><strong>Cluster</strong>: a collection of one or more masters + one or more nodes</li>
</ul>
|
<p>Is there an already built-in j2 template processor in Kubernetes or Docker? I am doing the configuration below and want to plug the values into the template.</p>
<p>Note that using hostPath is not an option since this is using openshift and no pv/pvc can be used. </p>
<pre><code>containers:
- image: some-docker-image:latest
name: some-docker-image
volumeMounts:
- mountPath: /etc/app/conf
name: configuration-volume
.
. Do some j2 template processing here if possible.
.
volumes:
- name: configuration-volume
gitRepo:
repository: "https://gitrepo/repo/example.git
</code></pre>
| <p>There isn't any templating support built into Kubernetes. You can easily build a templating system on top of the yaml/json files that you pass into <code>kubectl create -f</code> though. I know some folks that are using <a href="http://jsonnet.org" rel="nofollow">jsonnet</a> to accomplish this. </p>
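<p>If you don't want to pull in jsonnet, plain shell substitution over a template file is another common stop-gap. A minimal sketch (the file name and variable are made up for illustration):</p>
<pre><code># template.yaml.in contains placeholders such as ${GIT_REPOSITORY}
export GIT_REPOSITORY="https://gitrepo/repo/example.git"
envsubst < template.yaml.in | kubectl create -f -
</code></pre>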
<p>The discussion around adding templates is happening in <a href="https://github.com/kubernetes/kubernetes/issues/23896" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/23896</a> if you'd like to contribute. </p>
|
<p>I tried to use the instructions from this link <a href="https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md" rel="nofollow noreferrer">https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md</a> but I was not able to install it. Specifically, I don't know what this instruction means: "Ensure that kubecfg.sh is exported." I don't even know where to find this; I ran <code>sudo find / -name "kubecfg.sh"</code> and got no results. </p>
<p>Moving on to the next step, <code>"kubectl create -f deploy/kube-config/influxdb/"</code>: when I ran this it said kube-system was not found. I am using the latest version of Kubernetes, version 1.0.1. </p>
<p>These instructions seem broken; can anyone provide working instructions? I have a Kubernetes cluster up and running, I was able to create and delete pods and so on, and <strong>default</strong> is the only namespace I have when I do <code>kubectl get pods,svc,rc --all-namespaces</code></p>
<p>Changing kube-system to default in the yaml files just gets me one step further, but I am still unable to access the UI and so on. So installing kube-system makes more sense, however I don't know how to do it. Any instructions on installing InfluxDB and Grafana and getting them up and running will be very helpful.</p>
<p><a href="https://i.stack.imgur.com/SZT8e.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SZT8e.jpg" alt="Displaying all pods and namespaces"></a></p>
<p><a href="https://i.stack.imgur.com/4kycM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4kycM.jpg" alt="InfluxDB seems like its working"></a></p>
<p><a href="https://i.stack.imgur.com/thvcz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/thvcz.jpg" alt="Grafana link is not working for some reason but you can see the screen shot below that shows IP"></a></p>
<p><a href="https://i.stack.imgur.com/zUD2k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zUD2k.jpg" alt="cluster-info"></a></p>
| <p>We had the same issue deploying grafana/influxdb. So we dug into it:</p>
<p>Per <a href="https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md" rel="nofollow">https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md</a> since we don’t have an external load balancer, we changed the port type on the grafana service to NodePort which made it accessible at port 30397.</p>
<p>Then looked at the controller configuration here: <a href="https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/influxdb-grafana-controller.yaml" rel="nofollow">https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/influxdb-grafana-controller.yaml</a> and noticed the comment about using the api-server proxy which we wouldn’t be doing by exposing the NodePort, so we deleted the GF_SERVER_ROOT_URL environment variable from the config. At that point Grafana at least seemed to be running, but it looked like it was having trouble reaching influxdb.</p>
<p>We then changed the datasource to use localhost instead of monitoring-influxdb and were able to connect. We're getting data on the cluster usage now, though individual pod data doesn't seem to be working. </p>
|
<p>Using Kubernetes' <code>kubectl</code> I can execute arbitrary commands on any pod such as <code>kubectl exec pod-id-here -c container-id -- malicious_command --steal=creditcards</code></p>
<p>Should that ever happen, I would need to be able to pull up a log saying who executed the command and what command they executed. This includes if they decided to run something else by simply running <code>/bin/bash</code> and then stealing data through the tty.</p>
<p>How would I see which authenticated user executed the command as well as the command they executed?</p>
| <p>Audit logging is not currently offered, but the Kubernetes community <a href="https://github.com/kubernetes/features/issues/22" rel="nofollow">is working to get it available in the 1.4 release</a>, which should come around the end of September.</p>
|
<p>I'm using the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin" rel="nofollow">Kubernetes Jenkins</a> plugin to orchestrate jenkins slaves </p>
<p>I want to run all the jobs in Docker (build docker images and execute tests/builds in docker).</p>
<p>example jenkins job: </p>
<pre><code>docker run -e NEXUS_USERNAME=${NEXUS_USERNAME} -e NEXUS_PASSWORD=${NEXUS_PASSWORD} common-dropwizard:latest mvn deploy
</code></pre>
<p>I am using the jenkinsci/jnlp-slave from here: <a href="https://hub.docker.com/r/jenkinsci/jnlp-slave/" rel="nofollow">https://hub.docker.com/r/jenkinsci/jnlp-slave/</a></p>
<p>Unfortunately, the slave image doesn't appear to support running docker. My question is what is the best approach to accomplish this?</p>
<p>thanks</p>
| <p>You need to install the docker client and mount the docker socket so you can access the Docker host. Then you can interact with that Docker host:</p>
<p><a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/" rel="nofollow">https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/</a>
<a href="https://github.com/jenkinsci/docker-workflow-plugin/tree/master/demo" rel="nofollow">https://github.com/jenkinsci/docker-workflow-plugin/tree/master/demo</a></p>
|
<p>I've looked over the documentation and browsed the source, but I can't seem to figure out how to do this. Is there any way to send query string parameters along with the path when implementing a Kubernetes liveness probe?</p>
<p>The string I am sending looks something like this:</p>
<pre><code>/api/v1?q=...
</code></pre>
<p>becomes URL-encoded and hits the server as:</p>
<pre><code>/api/v1%3fq=...
</code></pre>
<p>As I have no such route on this particular API, I get a 404, and Kube reaps the pods after the allotted timeout.</p>
<p>Is there any way to define query string parameters to liveness probes and/or trick the URI encoder to allow query string parameters?</p>
| <p>EDIT: This should now be fixed in Kubernetes 1.3. Thanks to Rudi C for pointing that out.</p>
<p>Liveness probes in Kubernetes v1.2 don't support passing query parameters.</p>
<p><a href="https://github.com/deis/controller/issues/774" rel="nofollow">This Issue</a> in the Deis Controller repo has a good explanation. The gist is that the LivenessProbe.HttpGet.Path is treated as a true URL path (which needs the "?" to be escaped as "%3f").</p>
<p>I've opened a <a href="https://github.com/kubernetes/kubernetes/issues/29470" rel="nofollow">feature request Issue</a> against Kubernetes to discuss adding query parameter(s).</p>
<p>As a workaround, you could use an exec livenessProbe that includes the query parameters (as long as your container includes something like wget or curl). Note that wget needs a full URL, so point it at the container's own port; the port below is a placeholder:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - wget
    - --spider
    - -q
    - http://localhost:8080/api/v1?q=...
</code></pre>
|
<p>I have some containers that will be running users' code in them. In order to strengthen security, I want to prevent them from having access to the Kubernetes API via the service account mechanism, but I don't want to turn it off globally. The documentation says you can switch the service account name, but only to another valid name. Are there alternatives that I missed? Can you restrict the account to have 0 permissions? Can you overmount the volume with a different one that's empty? Any other ideas?</p>
| <p>The easiest hack is to mount an emptyDir over the location that the serviceAccount secret would have been mounted. Something like:</p>
<pre><code>containers:
- name: running-user-code
image: something-i-dont-trust
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: no-api-access-please
readOnly: true
volumes:
- name: no-api-access-please
emptyDir: {}
</code></pre>
<p>There is more discussion in Kubernetes <a href="https://github.com/kubernetes/kubernetes/issues/16779" rel="noreferrer">Issue #16779</a> on potential solutions (and that's where I stole the emptyDir example from).</p>
|
<p>I am testing and learning Kubernetes. I am using Ubuntu 16.04 and have been looking for a simple and straightforward installation guide, but have failed to find one... Any suggestions? My aim is to be able to run Kubernetes as master on one Ubuntu 16.04 laptop and later set up a second Ubuntu 16.04 laptop to easily join the cluster. I wonder if this can be achieved with the current version of Kubernetes and the 16.04 version of Ubuntu... Any pointer to a guide or useful resource will be appreciated... Best regards. </p>
| <p>You could check the way to bring up a single-node cluster which is via <a href="https://github.com/kubernetes/kubernetes/blob/ef0c9f0c5b8efbba948a0be2c98d9d2e32e0b68c/cluster/get-kube-local.sh" rel="nofollow">cluster/get-kube-local.sh</a>. It shows you how one could use hyperkube to bring up a cluster.</p>
<p>If you want to get into the underlying details, the other method is to check out the contents of <a href="https://github.com/kubernetes/kubernetes/blob/9fc1d61ab70c3d72cb6bf16d5d6b002074a0f6cd/hack/local-up-cluster.sh" rel="nofollow">hack/local-up-cluster.sh</a>. This brings up each component separately, such as:</p>
<ul>
<li>kube-apiserver</li>
<li>kube-proxy</li>
<li>kube-dns</li>
<li>kube-controller-manager</li>
</ul>
<p>One could potentially use the same steps to create a two-node cluster as you stated in your question.</p>
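<p>For reference, running it from a source checkout looks roughly like this (a sketch; it assumes the build prerequisites such as Docker, etcd and Go are available on the machine):</p>
<pre><code>git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
hack/local-up-cluster.sh
</code></pre>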
|
<p>I'm trying to start a new Kubernetes cluster on AWS with the following settings:</p>
<pre><code>export KUBERNETES_PROVIDER=aws
export KUBE_AWS_INSTANCE_PREFIX="k8-update-test"
export KUBE_AWS_ZONE="eu-west-1a"
export AWS_S3_REGION="eu-west-1"
export ENABLE_NODE_AUTOSCALER=true
export NON_MASQUERADE_CIDR="10.140.0.0/20"
export SERVICE_CLUSTER_IP_RANGE="10.140.1.0/24"
export DNS_SERVER_IP="10.140.1.10"
export MASTER_IP_RANGE="10.140.2.0/24"
export CLUSTER_IP_RANGE="10.140.3.0/24"
</code></pre>
<p>After running <code>$KUBE_ROOT/cluster/kube-up.sh</code> the master appears and 4 (default) minions are started. Unfortunately only one minion gets ready. The result of <code>kubectl get nodes</code> is:</p>
<pre><code>NAME STATUS AGE
ip-172-20-0-105.eu-west-1.compute.internal NotReady 19h
ip-172-20-0-106.eu-west-1.compute.internal NotReady 19h
ip-172-20-0-107.eu-west-1.compute.internal Ready 19h
ip-172-20-0-108.eu-west-1.compute.internal NotReady 19h
</code></pre>
<p>Please note that one node is running while 3 are not ready. If I look at the details of a NotReady node I get the following error:</p>
<blockquote>
<p>ConfigureCBR0 requested, but PodCIDR not set. Will not configure CBR0
right now.</p>
</blockquote>
<p>If I try to start the cluster without the settings NON_MASQUERADE_CIDR, SERVICE_CLUSTER_IP_RANGE, DNS_SERVER_IP, MASTER_IP_RANGE and CLUSTER_IP_RANGE everything works fine. All minions get ready as soon as they are started.</p>
<p>Does anyone have an idea why the PodCIDR was set on only one node but not on the other nodes?</p>
<p>One more thing: The same settings worked fine on kubernetes 1.2.4.</p>
| <p>Your Cluster IP range is too small. You've allocated a /24 for your entire cluster (256 addresses), and Kubernetes by default will give a /24 to each node. This means that the first node will be allocated <code>10.140.3.0/24</code> and then you won't have any further /24 ranges to allocate to the other nodes in your cluster.</p>
<p>The fact that this worked in 1.2.4 was a bug, because the CIDR allocator wasn't checking that it didn't allocate ranges beyond the cluster ip range (which it now does). Try using a larger range for your cluster (GCE uses a /14 by default, which allows you to scale to 1000 nodes, but you should be fine with a /20 for a small cluster). </p>
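<p>For illustration only, here is one way the ranges from the question could be widened. The exact addresses are assumptions and have to fit your own network plan; the key point is that CLUSTER_IP_RANGE needs room for one /24 per node:</p>
<pre><code>export NON_MASQUERADE_CIDR="10.140.0.0/16"
export SERVICE_CLUSTER_IP_RANGE="10.140.1.0/24"
export DNS_SERVER_IP="10.140.1.10"
export MASTER_IP_RANGE="10.140.2.0/24"
export CLUSTER_IP_RANGE="10.140.16.0/20"   # 16 x /24 node ranges
</code></pre>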
|
<p>Following the docs to create a Deployment, I have a .yaml file like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
...
</code></pre>
<p>I wasn't sure what to make of the "extensions/v1beta1", so I ended up <a href="http://kubernetes.io/docs/api/#v1beta1-v1beta2-and-v1beta3-are-deprecated-please-move-to-v1-asap" rel="noreferrer">here in the API docs</a>.</p>
<p>That makes it sound like I should use a value of "v1", but that doesn't seem to be valid when I try to <code>kubectl apply</code> my .yaml file.</p>
<p>Could someone help me to better understand what the <strong><em>apiVersion</em></strong> values mean and how I can determine the best value to use for each component?</p>
<p>Oh, and I'm using minikube and "kubectl version" reports that client and server are "GitVersion:"v1.3.0".</p>
| <p>The docs you linked to are from before the release of Kubernetes 1.0 (a year ago). At that time, we had beta versions of the API and were migrating to the v1 API. Since then, we have introduced multiple API groups, and each API group can have a different version. The version indicates the maturity of the API (alpha is under active development, beta means it will have compatibility/upgradability guarantees, and v1 means it's stable). The deployment API is currently in the second category, so using <code>extensions/v1beta1</code> is correct. </p>
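<p>If you're ever unsure which groups and versions your server actually offers, you can ask it directly with <code>kubectl api-versions</code>; the output below is just illustrative:</p>
<pre><code>$ kubectl api-versions
apps/v1alpha1
autoscaling/v1
batch/v1
extensions/v1beta1
v1
</code></pre>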
|
<p>I'm following the <a href="http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch/" rel="nofollow noreferrer">k8s logging instructions</a> on how to configure cluster-level logging. I'm using the <a href="https://github.com/coreos/coreos-kubernetes/releases" rel="nofollow noreferrer">kube-aws CLI tool</a> to configure the cluster, and I can't seem to find a way to make it work.
I've tried setting the env vars as they mentioned in the k8s logging guide (KUBE_ENABLE_NODE_LOGGING and KUBE_LOGGING_DESTINATION) before running <code>kube-aws up</code> but that didn't seem to change anything.</p>
<p>After that, I've tried running the es and kibana rc's and services manually by taking them from the <strong>cluster/addons/fluentd-elasticsearch</strong> directory on k8s github repo, but that ran only those specific services and not the <strong>fluentd-elasticsearch</strong> service which supposed to run also by the tutorial example.</p>
<p>running <code>kubectl get pods --namespace=kube-system</code> returns the following:</p>
<p><a href="https://i.stack.imgur.com/acR1k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/acR1k.png" alt="enter image description here" /></a></p>
<p>where we can see that the <code>fluentd-elasticsearch-kubernetes-node</code> pod is missing.</p>
<p>Also tried connecting to the cluster but failed with:</p>
<blockquote>
<p>unauthorized</p>
</blockquote>
<p>Following the <a href="http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch/" rel="nofollow noreferrer">k8s logging instructions</a>, running the command <code>kubectl config view</code> didn't return any username and password, and when I tried accessing the ES url I didn't get any dialog asking for a username and password. Not sure if this is related to the first issue.</p>
<p>Not sure what I'm missing here.</p>
<p>Thanks.</p>
| <p>I've managed to get the cluster-level logging running on a small testing cluster started through the CoreOS <code>kube-aws</code> tool using the following steps. Please be aware that although I've had this running, I haven't really played with it sufficiently to be able to guarantee that all works correctly!</p>
<p><em>Enable log collection on nodes</em></p>
<p>You'll need to edit the <code>cloud-config-worker</code> and <code>cloud-config-controller</code> to export kubelet-collected logs and create the log directory </p>
<pre><code>[Service]
Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
Environment=KUBELET_VERSION=v1.2.4_coreos.1
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --config=/etc/kubernetes/manifests
  ...other flags...
</code></pre>
<p>(taken from the 'Use the cluster logging add-on' section <a href="https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html" rel="nofollow">here</a>)</p>
<p><em>Install the logging components</em>
I used the components from <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow">here</a> (as you've already attempted). As you noticed, this does not run fluentd, and assumes that it is run as part of the cluster bootstrapping. To get fluentd running I've extracted the fluentd Daemonset definition discussed <a href="https://github.com/coreos/coreos-kubernetes/issues/320" rel="nofollow">here</a> into a separate file:</p>
<pre><code>{
  "apiVersion": "extensions/v1beta1",
  "kind": "DaemonSet",
  "metadata": {
    "name": "fluent-elasticsearch",
    "namespace": "kube-system",
    "labels": {
      "k8s-app": "fluentd-logging"
    }
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "fluentd-elasticsearch",
        "namespace": "kube-system",
        "labels": {
          "k8s-app": "fluentd-logging"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "fluentd-elasticsearch",
            "image": "gcr.io/google_containers/fluentd-elasticsearch:1.15",
            "resources": {
              "limits": {
                "memory": "200Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "200Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "varlog",
                "mountPath": "/var/log"
              },
              {
                "name": "varlibdockercontainers",
                "mountPath": "/var/lib/docker/containers",
                "readOnly": true
              }
            ]
          }
        ],
        "terminationGracePeriodSeconds": 30,
        "volumes": [
          {
            "name": "varlog",
            "hostPath": {
              "path": "/var/log"
            }
          },
          {
            "name": "varlibdockercontainers",
            "hostPath": {
              "path": "/var/lib/docker/containers"
            }
          }
        ]
      }
    }
  }
}
</code></pre>
<p>This DaemonSet runs fluentd on each of the cluster nodes. </p>
<p>(NOTE: Whilst I've only tried adding these components after the cluster is already running, there's no reason you shouldn't be able to add these to to the <code>cloud-config-controller</code> in order to bring these up at the same time the cluster is started - which is more inline with that discussed on the referenced <a href="https://github.com/coreos/coreos-kubernetes/issues/320" rel="nofollow">issue</a>)</p>
<p>These instruction all assume that you're working with a cluster that you're happy to restart, or haven't yet started, in order to get the logging running - which I assume from your question is the situation you're in. I've also been able to get this working on a pre-existing cluster, by manually editing the AWS settings, and can add additional information on doing this if that is in fact what you are trying to do.</p>
|
<p>Has anyone tried to run VMs for production on a Kubernetes cluster? Is there a way to run a KVM instance inside a pod? I know that Google runs all its VMs inside containers; is this planned for Kubernetes?
Thank you</p>
| <p>Running VMs inside(!) Kubernetes can have legitimate use cases.</p>
<p>The most native way, as of recently, to run VMs and manage them in Kubernetes is <a href="http://blog.kubernetes.io/2016/07/rktnetes-brings-rkt-container-engine-to-Kubernetes.html" rel="nofollow">using rkt</a>. You can then use rkt's <a href="https://coreos.com/rkt/docs/latest/running-lkvm-stage1.html" rel="nofollow">(L)KVM stage1</a> to run containers as VMs.</p>
<p>For your use case you would want something like an "empty" container with a Linux for your customers most probably, so it would still be different from actually running VM images, but maybe there's a work around there.</p>
<p>Another cool use case is running and managing several Kubernetes instances inside VMs that are themselves managed by Kubernetes. This way you could build fully-isolated multi-tenant Kubernetes clusters.</p>
|
<p>I am using google container engine and getting tons of dns errors in the logs. </p>
<p>Like:</p>
<pre><code>10:33:11.000 I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
</code></pre>
<p>And:</p>
<pre><code>10:46:11.000 I0720 17:46:11.546237 1 dns.go:539] records:[0xc8203153b0], retval:[{10.71.240.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3465623435313164}], path:[local cluster svc default kubernetes]
</code></pre>
<p>This is the payload.</p>
<pre><code>{
metadata: {
severity: "ERROR"
serviceName: "container.googleapis.com"
zone: "us-central1-f"
labels: {
container.googleapis.com/cluster_name: "some-name"
compute.googleapis.com/resource_type: "instance"
compute.googleapis.com/resource_name: "fluentd-cloud-logging-gke-master-cluster-default-pool-f5547509-"
container.googleapis.com/instance_id: "instanceid"
container.googleapis.com/pod_name: "fdsa"
compute.googleapis.com/resource_id: "someid"
container.googleapis.com/stream: "stderr"
container.googleapis.com/namespace_name: "kube-system"
container.googleapis.com/container_name: "kubedns"
}
timestamp: "2016-07-20T17:33:11.000Z"
projectNumber: ""
}
textPayload: "I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false"
log: "kubedns"
}
</code></pre>
<p>Everything is working; the logs are just polluted with these errors. Any ideas on why this is happening or whether I should be concerned?</p>
| <p>Thanks for the question, Aaron. Those error messages are actually just tracing/debugging output from the container and don't indicate that anything is wrong. The fact that they get written out as error messages has been fixed in Kubernetes at head and will be better in the next release of Kubernetes.</p>
|
<p>I'm running a kubernetes application on GKE, which serves HTTP requests on port 80 and websocket on port 8080.</p>
<p>Now, the HTTP part needs to know the client's IP address, so I have to use the HTTP load balancer as the ingress service. The websocket part then has to use the TCP load balancer, as it's clearly stated in the docs that the HTTP LB doesn't support it.</p>
<p>I got them both working, but on different IPs, and I need to have them on one.</p>
<p>I would expect that there is something like iptables on GCE, so I could forward traffic from port 80 to the HTTP LB and from 8080 to the TCP LB, but I can't find anything like that. Everything involving forwarding allows only one of them.</p>
<p>I guess I could have one instance with nginx/HAProxy doing only this, but that seems like overkill.</p>
<p>Appreciate any help!</p>
| <p>There's not a great answer to this right now. Ingress objects are really HTTP only right now, and we don't really support multiple grades of ingress in a single cluster (though we want to).</p>
<p>GCE's HTTP LB doesn't do websockets yet.</p>
<p>Services have a flaw in that they lose the client IP (we are working on that). Even once we solve this, you won't be able to use GCE's L7 balancer because of the extra port you need.</p>
<p>The best workaround I can think of, and one that has been used by a number of users until we preserve source IP, is this:</p>
<p>Run your own haproxy or nginx or even your own app as a Daemonset on some or all nodes (label controlled) with HostPorts.</p>
<p>Run a GCE Network LB (outside of Kubernetes) pointing at the nodes with HostPorts.</p>
<p>Once we can properly preserve external IPs, you can turn this back into a plain Service.</p>
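<p>For what it's worth, a rough sketch of the DaemonSet-with-HostPorts piece; the name, image, labels and ports here are placeholders:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: edge-proxy                # hypothetical
spec:
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      nodeSelector:
        role: edge                # optional: only run on labelled nodes
      containers:
      - name: haproxy
        image: haproxy:1.6        # or nginx, or your own app
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 8080
          hostPort: 8080
</code></pre>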
|
<p>I am using an external database that requires you to whitelist IPs for use, and I want a particular service in my k8s cluster to have access to this database.</p>
<p>I don't know which IP address to add to the whitelist. I tried whitelisting the <code>IP</code> field from <code>kubectl describe svc <service_name></code>. That did not appear to work, so I then tried whitelisting the IP field from <code>kubectl describe pod <pod_name></code>, which also didn't work. </p>
<p>Ideally I would be able to whitelist the IP from the service instead of the pod, as the pod IP is not static.</p>
| <p>You cannot whitelist the service IP because there is a sort of NAT connecting pods to pods and containers to containers, etc. But you can whitelist your cluster's global (external) IP, which means you will have access to the database from every pod or service.</p>
<p>Read more about the network <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-networking.html" rel="nofollow">here</a></p>
|
<p>I have a coreos kubernetes cluster, which I started by following this article: </p>
<p><a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html" rel="nofollow">kubernetes coreos cluster on AWS</a></p>
<p>TLDR; </p>
<pre><code>> kube-aws init
> kube-aws render
> kube-aws up
</code></pre>
<p>Everything worked well and I had a Kubernetes CoreOS cluster on AWS.
In the article there is a warning that says:</p>
<blockquote>
<p>PRODUCTION NOTE: the TLS keys and certificates generated by kube-aws
should not be used to deploy a production Kubernetes cluster. Each
component certificate is only valid for 90 days, while the CA is valid
for 365 days. If deploying a production Kubernetes cluster, consider
establishing PKI independently of this tool first.</p>
</blockquote>
<p>So I wanted to replace the default certificates, so I followed the following article: </p>
<p><a href="https://coreos.com/kubernetes/docs/latest/openssl.html" rel="nofollow">coreos certificates</a></p>
<p>TLDR;</p>
<ol>
<li>created the following self signed certificates: ca.pem, ca-key.pem</li>
<li>created the certificates for the controller: apiserver.pem, apiserver-key.pem</li>
<li>Replaced the certificates in the controller with the certificates created above, and rebooted the controller</li>
<li>created worker certificates and replaced the certificates on the workers and rebooted them</li>
<li>configured kubectl to use the new certificates i created and also configured the context and user</li>
</ol>
<p>I'm getting a communication error between kubectl and the cluster, complaining about the certificate:</p>
<blockquote>
<p>Unable to connect to the server: x509: certificate signed by unknown
authority</p>
</blockquote>
<p>I also tried to use a signed certificate for kubectl which points to the cluster DNS; I set up a DNS name for the cluster. </p>
<p>How do I make kubectl communicate with my cluster? </p>
<p>Thanks in advance</p>
<p>EDIT:</p>
<p>My <strong>~/.kube/config</strong> looks like this:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/ca2.pem
server: https://kubernetes.bugeez.io
name: bugeez
contexts:
- context:
cluster: bugeez
user: bugeez-admin
name: bugeez-system
current-context: bugeez-system
kind: Config
preferences: {}
users:
- name: bugeez-admin
user:
client-certificate: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/admin2.pem
client-key: /Users/Yariv/Development/workspace/bugeez/bugeez-kubernetes/credentials/admin-key2.pem
</code></pre>
<p>EDIT:</p>
<p>All my certificates are signed by ca2.pem; I also validated this fact by running: </p>
<pre><code>openssl verify -CAfile ca2.pem <certificate-name>
</code></pre>
<p>EDIT:</p>
<p>What I think is the cause of the error is this:
when I switch the keys on the controller and workers, it seems like cloud-config is overwriting my new keys with the old ones. How do I replace the keys and also change cloud-config to reflect my change?</p>
| <p>An alternative solution that worked for me was to start a new cluster, and use custom certificates initially, without ever relying on the default temporary credentials.</p>
<p>Following the same <a href="https://coreos.com/kubernetes/docs/latest/openssl.html" rel="nofollow">tutorial</a> that you used, I made the following changes:</p>
<pre><code>> kube-aws init
> kube-aws render
</code></pre>
<p>Before <code>kube-aws up</code>, I created the certificates by following the tutorial. The only issue with the tutorial is that it is geared toward creating new certificates for an existing cluster. Therefore, the following changes are necessary:</p>
<ul>
<li><p>This line: <code>$ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"</code> needs to be replaced by: <code>$ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem</code></p></li>
<li><p>In the openssl.cnf file, remove the lines that define the IP for the master host, and the loadbalancer, since we don't know what they will be yet. The final openssl.cnf should look something like this: </p></li>
</ul>
<p><strong>openssl.cnf</strong></p>
<pre><code>[req]
...
[req_distinguished_name]
[ v3_req ]
...
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = mydomain.net
IP.1 = ${K8S_SERVICE_IP} # 10.3.0.1
IP.2 = ${MASTER_IP} # 10.0.0.50
</code></pre>
<p>I also used the same worker certificate for all the worker nodes. </p>
<p>After the certificates are in place, enter <code>kube-aws up</code>. </p>
<p>I hope this helps you get off the ground</p>
|
<p>Heeey all, I've been working on getting Kubernetes running for days now and I've learned a lot, but I'm still struggling with the dashboard. I can't get it working on my CoreOS machines.</p>
<p>The message that I get is:
while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get <a href="https://146.185.XXX.XXX:443/version" rel="nofollow">https://146.185.XXX.XXX:443/version</a>: x509: failed to load system roots and no roots provided</p>
<p>I don't know how to test whether the certificates are really the problem. I can hardly believe it, because I can successfully use curl on my worker machine. On the other hand, I'm wondering how the dashboard knows which certificates to use?</p>
<p>I really did my best to provide you with the right info; if you need additional info I'll add it to this ticket. </p>
<p>So everything seems to work except the dashboard.</p>
<pre><code>core@amanda ~ $ ./bin/kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-apiserver-146.185.XXX.XXX 1/1 Running 0 3h
kube-controller-manager-146.185.XXX.XXX 1/1 Running 0 3h
kube-dns-v11-nb4aa 4/4 Running 0 1h
kube-proxy-146.185.YYY.YYY 1/1 Running 0 1h
kube-proxy-146.185.XXX.XXX 1/1 Running 0 3h
kube-scheduler-146.185.XXX.XXX 1/1 Running 0 3h
kubernetes-dashboard-2597139800-hg5ik 0/1 CrashLoopBackOff 21 1h
</code></pre>
<p>The kubernetes-dashboard container logs:</p>
<pre><code>core@amanda ~ $ ./bin/kubectl logs kubernetes-dashboard-2597139800-hg5ik --namespace=kube-system
Starting HTTP server on port 9090
Creating API server client for https://146.185.XXX.XXX:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://146.185.XXX.XXX:443/version: x509: failed to load system roots and no roots provided
</code></pre>
<p>Curl calls succeeds with certificates</p>
<pre><code>core@amanda ~ $ curl -v --cert /etc/kubernetes/ssl/worker.pem --key /etc/kubernetes/ssl/worker-key.pem --cacert /etc/ssl/certs/ca.pem https://146.185.XXX.XXX:443/version
* Trying 146.185.XXX.XXX...
* Connected to 146.185.XXX.XXX (146.185.XXX.XXX) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca.pem
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS handshake, CERT verify (15):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
...
> GET /version HTTP/1.1
> Host: 146.185.XXX.XXX
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sun, 24 Jul 2016 11:19:38 GMT
< Content-Length: 269
<
{
"major": "1",
"minor": "3",
"gitVersion": "v1.3.2+coreos.0",
"gitCommit": "52a0d5141b1c1e7449189bb0be3374d610eb98e0",
"gitTreeState": "clean",
"buildDate": "2016-07-19T17:45:13Z",
"goVersion": "go1.6.2",
"compiler": "gc",
"platform": "linux/amd64"
* Connection #0 to host 146.185.XXX.XXX left intact
}
</code></pre>
<p>Dashboard deployment settings:</p>
<pre><code>./bin/kubectl edit deployment kubernetes-dashboard --namespace=kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "3"
creationTimestamp: 2016-07-19T22:27:24Z
generation: 36
labels:
app: kubernetes-dashboard
version: v1.1.0
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "553126"
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kubernetes-dashboard
uid: f7793d2f-4dff-11e6-b31e-04012dd8e901
spec:
replicas: 1
selector:
matchLabels:
app: kubernetes-dashboard
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: kubernetes-dashboard
spec:
containers:
- args:
- --apiserver-host=https://146.185.XXX.XXX:443
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 9090
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
status:
observedGeneration: 36
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
</code></pre>
<p>Dashboard service settings:</p>
<pre><code>./bin/kubectl edit service kubernetes-dashboard --namespace=kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-07-19T22:27:24Z
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "408001"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: f7a57f1a-4dff-11e6-b31e-04012dd8e901
spec:
clusterIP: 10.3.0.80
ports:
- nodePort: 30009
port: 80
protocol: TCP
targetPort: 9090
selector:
app: kubernetes-dashboard
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>Kube proxy settings on worker node:</p>
<pre><code>core@amanda ~ $ cat /etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.3.2_coreos.0
command:
- /hyperkube
- proxy
- "--master=https://146.185.XXX.XXX"
- "--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"
- "--proxy-mode=iptables"
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/worker-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
volumes:
- name: "ssl-certs"
hostPath:
path: "/usr/share/ca-certificates"
- name: "kubeconfig"
hostPath:
path: "/etc/kubernetes/worker-kubeconfig.yaml"
- name: "etc-kube-ssl"
hostPath:
path: "/etc/kubernetes/ssl"
</code></pre>
<p>Worker kube config (/etc/kubernetes/worker-kubeconfig.yaml)</p>
<pre><code>core@amanda ~ $ cat /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
user:
client-certificate: /etc/kubernetes/ssl/worker.pem
client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-context
current-context: kubelet-context
</code></pre>
| <p>You can assign a kubeconfig with token/ssl configuration to the dashboard. </p>
<p>Then depending on your installation you may need to mount the kubeconfig and the certificates. </p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: kubernetes-dashboard-v1.1.0-beta3
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.1.0-beta3
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: v1.1.0-beta3
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
        env:
          - name: KUBECONFIG
            value: /etc/kubernetes/kubeconfig
ports:
- containerPort: 9090
volumeMounts:
- name: "etcpki"
mountPath: "/etc/pki"
readOnly: true
- name: "config"
mountPath: "/etc/kubernetes"
readOnly: true
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: "etcpki"
hostPath:
path: "/etc/pki"
- name: "config"
hostPath:
path: "/etc/kubernetes"
</code></pre>
|
<p>I'd like a multi-container pod with a couple of components:</p>
<ul>
<li>A "main" container which contains a build job</li>
<li>A "sidecar" container which contains an HTTP proxy, used by the "main" container</li>
</ul>
<p>This seems to fit well with the pod design philosophy as <a href="http://kubernetes.io/docs/user-guide/pods/#uses-of-pods" rel="noreferrer">described in the Kubernetes documentation</a>, but I believe so long as the "sidecar" runs, the pod is kept alive. In my case, the "main" container is not long-lived; once it exits, the "sidecar" should be terminated.</p>
<p>How can I achieve this?</p>
| <p>A pod is running as long as one of the containers is running. If you need them to exit together, you have to arrange that the sidecar dies. We do not have a notion of "primary" vs "secondary" containers wrt lifecycle, though that's sort of interesting.</p>
<p>One option would be to use an emptyDir volume and write a file telling the sidecar "time to go". The sidecar would exit when it sees that file.</p>
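<p>A minimal sketch of that pattern; the images and commands are placeholders, the only real point is the shared emptyDir used as a signal:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: build-with-proxy                 # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: main
    image: my-build-image                # placeholder
    command: ["sh", "-c", "run-build; touch /shared/done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: sidecar-proxy
    image: my-proxy-image                # placeholder
    command: ["sh", "-c", "start-proxy & while [ ! -f /shared/done ]; do sleep 1; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
</code></pre>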
|
<p>How can I configure a specific pod to run on a multi-node kubernetes cluster so that it would restrict the containers of the POD to a subset of the nodes. </p>
<p>E.g. let's say I have A, B, C three nodes running mu kubernetes cluster. </p>
<p>How to limit a Pod to run its containers only on A & B, and not on C? </p>
| <p>You can add label to nodes that you want to run pod on and add nodeSelector to pod configuration. The process is described here:</p>
<p><a href="http://kubernetes.io/docs/user-guide/node-selection/" rel="noreferrer">http://kubernetes.io/docs/user-guide/node-selection/</a></p>
<p>So basically you want to</p>
<pre><code>kubectl label nodes A node_type=foo
kubectl label nodes B node_type=foo
</code></pre>
<p>And you want to have this nodeSelector in your pod spec:</p>
<pre><code>nodeSelector:
node_type: foo
</code></pre>
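<p>Put together, a minimal pod manifest with the selector might look like this (name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod        # hypothetical
spec:
  nodeSelector:
    node_type: foo
  containers:
  - name: app
    image: nginx              # placeholder
</code></pre>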
|
<p>I'm following <a href="http://kubernetes.io/docs/hellonode/" rel="noreferrer">Kubernete's getting started guide</a>. Everything went smoothly until I ran </p>
<p><code>$ gcloud docker push gcr.io/<PROJECT ID>/hello-node:v1</code></p>
<p>(Where <code><PROJECT ID></code> is, well, my project ID). For some reason, I am not able to push to the registry. This is what I get:</p>
<pre><code>Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
Warning: '--email' is deprecated, it will be removed soon. See usage.
Login Succeeded
The push refers to a repository [gcr.io/kubernetes-poc-1320/hello-node]
18465c0e312f: Preparing
5f70bf18a086: Preparing
9f7afc4ce40e: Preparing
828b3885b7b1: Preparing
5dce5ebb917f: Preparing
8befcf623ce4: Waiting
3d5a262d6929: Waiting
6eb35183d3b8: Waiting
denied: Unable to create the repository, please check that you have access to do so.
</code></pre>
<p>Any ideas on what I might be doing wrong? Note that I have run. <code>$ gcloud init</code>, so I've logged in.</p>
<p>Thanks in advance!</p>
| <p>This solved it in my case:</p>
<hr>
<p><strong>Short version:</strong></p>
<p>Press <code>Enable billing</code> in the <code>Container Engine</code> screen in the <code>https://console.cloud.google.com</code>.</p>
<hr>
<p><strong>Long version:</strong></p>
<p>In my case I got the error because of an issue with setting billing in the google cloud platform console.</p>
<p>Although I entered all my credit card information and the screen of my <code>Container Engine</code> Screen in the google cloud platform console said <code>Container Engine is getting ready. This may take a minute or more.</code>, it didn't work before I pressed <code>Enable billing</code> on the same screen. Then the <code>gcloud docker push</code> command finally worked.</p>
<p>Oddly enough after later returning to the <code>Container Engine</code> screen, it shows me <code>Container Engine is getting ready. This may take a minute or more.</code> and the button <code>Enable billing</code> again.. must be a bug in the console.</p>
|
<p>I came across this article <a href="http://www.networkworld.com/article/3100383/cloud-computing/the-worlds-of-openstack-and-containers-are-colliding.html" rel="nofollow">http://www.networkworld.com/article/3100383/cloud-computing/the-worlds-of-openstack-and-containers-are-colliding.html</a>. It talks about OpenStack running atop Kubernetes. What does that actually mean? Going by what they do, OpenStack is lower level (IaaS) compared to Kubernetes (between IaaS and PaaS), as per my understanding.</p>
| <p>There are many ways containers and OpenStack are being mixed. The article you quoted refers to a new-ish approach of running OpenStack servers inside containers. The advantage is maintainability and scalability. Basically, Kubernetes is in charge of orchestrating the various pieces that make up an OpenStack service, instead of the more general approach of installing OpenStack services on bare metal or VMs (see <a href="http://tripleo.org/" rel="nofollow">TripleO</a>).</p>
|
<p>When listing resources such as pods running on a cluster, how do I know which physical node they are on?</p>
<p><code>kubectl get {resource-type}</code> command returns the following columns. </p>
<p><code>NAMESPACE NAME READY STATUS RESTARTS AGE</code> </p>
<p>Could not find a way to list the actual nodes (could be more than one for a resource) side by side. </p>
| <p>The <code>-o wide</code> flag seems to work: </p>
<pre><code>[root@kubernetes1 temp]# kubectl get pod --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
k8s-master-127.0.0.1 4/4 Running 0 33m 127.0.0.1 127.0.0.1
k8s-proxy-127.0.0.1 1/1 Running 0 32m 127.0.0.1 127.0.0.1
kube-addon-manager-127.0.0.1 2/2 Running 0 33m 127.0.0.1 127.0.0.1
kube-dns-v18-z9igq 3/3 Running 0 33m 10.1.49.2 127.0.0.1
</code></pre>
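<p>If you only want specific fields such as the pod name and node, custom columns are another option (a sketch; whether the flag is available depends on your kubectl version):</p>
<pre><code>kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
</code></pre>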
|
<p>I'm trying to create a new ThirdPartyResource as per Kelsey Hightowers <a href="https://github.com/kelseyhightower/kube-cert-manager/blob/master/docs/certificate-third-party-resource.md" rel="nofollow">kube-cert-manager guide</a> but I'm getting this error:</p>
<pre><code>Error from server: error when creating "certificate.yaml": the server could not find the requested resource
</code></pre>
<p>Something interesting from the verbose log:</p>
<pre><code>POST https://104.155.48.255/apis/extensions/v1beta1/namespaces/default/thirdpartyresources 404 Not Found in 15 milliseconds
</code></pre>
<p>My cluster is created using GKE. Has just a single node running Kubernetes 1.3.2:</p>
<pre><code>clusterIpv4Cidr: 10.244.0.0/14
createTime: '2016-08-01T09:35:39+00:00'
currentMasterVersion: 1.3.2
currentNodeCount: 1
currentNodeVersion: 1.3.2
endpoint: 104.155.48.255
initialClusterVersion: 1.3.2
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/cs-cisco/zones/europe-west1-d/instanceGroupManagers/gke-minimesos-sonar-default-pool-3d02eeb3-grp
locations:
- europe-west1-d
loggingService: logging.googleapis.com
</code></pre>
| <p>ThirdPartyResources were namespace-scoped alpha objects in 1.2, and they are now cluster-scoped in 1.3 (see the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#thirdpartyresource" rel="nofollow">1.3.0 Known Issues</a>). Unfortunately, that means that a 1.2.x client will not know the right place to look for them (hence the 404 on the <code>/namespaces/default/thirdpartyresources</code> path).</p>
<p>You can either wait for kubectl 1.3.x to be rolled out with cloudsdk, or you can download the kubectl binaries directly from the <a href="https://github.com/kubernetes/kubernetes/releases" rel="nofollow">Kubernetes Releases</a> page.</p>
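<p>As a quick sanity check (a sketch, not GKE-specific guidance), you can compare your client and server versions and retry the request with a 1.3.x binary once you have downloaded it:</p>
<pre><code># Compare the kubectl client version against the cluster version
kubectl version

# After downloading a 1.3.x kubectl binary, use it for the request
./kubectl create -f certificate.yaml
</code></pre>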
|
<p>I want to use volumes for deployments with more than one replica. How do I define a <code>PersistentVolumeClaim</code> so it will be generated for each replica? At the moment (see example below) I am able to generate a volume and assign it to the pods. The problem is that only one volume gets generated, which causes these error messages:</p>
<pre><code> 38m 1m 18 {kubelet worker-1.loc} Warning FailedMount Unable to mount volumes for pod "solr-1254544937-zblou_default(610b157c-549e-11e6-a624-0238b97cfe8f)": timeout expired waiting for volumes to attach/mount for pod "solr-1254544937-zblou"/"default". list of unattached/unmounted volumes=[datadir]
38m 1m 18 {kubelet worker-1.loc} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "solr-1254544937-zblou"/"default". list of unattached/unmounted volumes=[datadir]
</code></pre>
<p>How can I tell Kubernetes to generate a volume for each replica?</p>
<p>I am using Kubernetes 1.3.</p>
<hr>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: solr-datadir
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: solr
labels:
team: platform
tier: search
app: solr
spec:
revisionHistoryLimit: 3
replicas: 3
template:
metadata:
name: solr
labels:
team: platform
tier: search
app: solr
spec:
containers:
- name: solr
image: solr:6-alpine
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
resources:
requests:
cpu: 512m
memory: 512Mi
command:
- /bin/bash
args:
- -c
- /opt/solr/bin/solr start -f -z zookeeper:2181
volumeMounts:
- mountPath: "/opt/solr/server/solr/mycores"
name: datadir
volumes:
- name: datadir
persistentVolumeClaim:
claimName: solr-datadir
</code></pre>
<p>Generated pods:</p>
<pre><code>$ kubectl get pods -lapp=solr
NAME READY STATUS RESTARTS AGE
solr-1254544937-chenr 1/1 Running 0 55m
solr-1254544937-gjud0 0/1 ContainerCreating 0 55m
solr-1254544937-zblou 0/1 ContainerCreating 0 55m
</code></pre>
<p>Generated volumes:</p>
<pre><code>$ kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pvc-3955e8f1-549e-11e6-94be-060ea3314be5 50Gi RWO Bound default/solr-datadir 57m
</code></pre>
<p>Generated claims:</p>
<pre><code>$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
solr-datadir Bound pvc-3955e8f1-549e-11e6-94be-060ea3314be5 0 57m
</code></pre>
| <p>ReplicaSets treat volumes as stateless. If your replicaset pod template specifies a volume that can only be attached read-write once, then the same volume is used by all pods in that replicaset. If that volume can only be attached read-write to one node at a time (like GCE PDs), then after the first pod is successfully scheduled and started, subsequent instances of the pod will fail to start if they are scheduled to a different node, because the volume will not be able to attach to the second node.</p>
<p>What you are looking for is Pet Sets, which enable you to generate a volume for each replica. See <a href="http://kubernetes.io/docs/user-guide/petset/" rel="noreferrer">http://kubernetes.io/docs/user-guide/petset/</a>. The feature is currently in alpha but should address your use case.</p>
<p><strong>Update:</strong> In Kubernetes 1.5+ PetSets were renamed to StatefulSets. See the documentation <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">here</a>.</p>
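<p>For illustration only, a rough sketch of the alpha PetSet API in 1.3 with <code>volumeClaimTemplates</code> (one claim is generated per replica); the names and sizes are taken from the question, it assumes a headless service called <code>solr</code> exists, and the exact fields may differ in your cluster version:</p>
<pre><code>apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: solr
spec:
  serviceName: solr          # headless service governing the pets
  replicas: 3
  template:
    metadata:
      labels:
        app: solr
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: solr
        image: solr:6-alpine
        volumeMounts:
        - name: datadir
          mountPath: /opt/solr/server/solr/mycores
  volumeClaimTemplates:      # one PVC per replica is created from this
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
</code></pre>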
|
<p>Using a Google Container Engine cluster running Kubernetes, what would the process be in order to point <a href="http://mydomain.co.uk" rel="nofollow">http://mydomain.co.uk</a> onto a LoadBalanced ReplicationController?</p>
<p>I'm aware Kubernetes supports SkyDNS - how would I go about delegating Google Cloud DNS for a domain name onto the internal Kubernetes cluster DNS service? </p>
| <p>You will need to create a <a href="http://kubernetes.io/docs/user-guide/services/" rel="noreferrer">service</a> that maps onto the pods in your replication controller and then expose that service outside of your cluster. You have two options to expose your web service externally:</p>
<ol>
<li><a href="http://kubernetes.io/docs/user-guide/services/#type-loadbalancer" rel="noreferrer">Set your service</a> to be <code>type: LoadBalancer</code> which will provision a <a href="https://cloud.google.com/compute/docs/load-balancing/network/" rel="noreferrer">Network load balancer</a>. </li>
<li>Use the <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="noreferrer">ingress support</a> in Kubernetes to create an <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="noreferrer">HTTP(S) load balancer</a>.</li>
</ol>
<p>The end result of either option is that you will have a public IP address that is routed to the service backed by your replication controller.</p>
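<p>For option 1, a minimal service manifest might look like the following (the name, selector and ports are placeholders you would adjust to match the pods behind your replication controller):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  type: LoadBalancer      # provisions a GCE network load balancer
  selector:
    app: my-web-app       # must match the labels on your RC's pods
  ports:
  - port: 80              # port exposed on the external IP
    targetPort: 8080      # port your container listens on
</code></pre>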
<p>Once you have that IP address, you will need to manually configure a DNS record to point your domain name at the IP address. </p>
|
<p>Playing a bit with Kubernetes (v1.3.2) I’m checking the ability to load balance calls inside the cluster (3 on-premise CentOS 7 VMs).<br>
If I understand correctly the documentation in <a href="http://kubernetes.io/docs/user-guide/services/" rel="nofollow">http://kubernetes.io/docs/user-guide/services/</a> ‘Virtual IPs and service proxies’ paragraph, and as I see in my tests, the load balance is per node (VM). I.e., if I have a cluster of 3 VMs and deployed a service with 6 pods (2 per VM), the load balancing will only be between the pods of the same VM which is somehow disappointing.<br>
At least this is what I see in my tests: Calling the service from within the cluster using the service’s ClusterIP, will load-balance between the 2 pods that reside in the same VM that the call was sent from.
(BTW, the same goes when calling the service from out of the cluster (using NodePort) and then the request will load-balance between the 2 pods that reside in the VM which was the request target IP address).<br>
Is the above correct?<br>
If yes, how can I make internal cluster calls load-balance between all the 6 replicas? (Must I employ a load balancer like nginx for this?)</p>
| <p>No, the statement is not correct. The load balancing should be across nodes (VMs). This <a href="https://github.com/kubernetes/contrib/tree/master/micro-demos/services" rel="nofollow">demo</a> demonstrates it. I have run this demo on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then it SSHes into one GCE node and visits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you haven't set up your cluster network properly, which might have caused what you observed.</p>
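<p>A rough way to check this in your own cluster (assuming your backends return something pod-specific, e.g. their hostname): see which node each pod landed on, then hit the ClusterIP repeatedly from a single VM and watch which pods answer.</p>
<pre><code># Which node is each backend pod running on?
kubectl get pods -o wide -l app=<your-app-label>

# From one of the VMs, call the service's ClusterIP a number of times
for i in $(seq 1 20); do curl -s http://<cluster-ip>:<port>/; echo; done
</code></pre>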
|
<p>I've two docker images, one is a webserver and the other is a backend Rest application. I deployed those images in an Openshift cluster. I want to configure my pods where the webserver is running to access the pods where the backend Rest application is running but I can't figure out how I can specify to my front-end pods that they have to communicate with my back-end service. I can only reach the pod ip but that's not what I want as I want to keep scalability advantage.</p>
<p>I tried to access it like this:</p>
<ol>
<li>via a defined route: svc-backend.router.default.svc.cluster.local</li>
<li>via his service name: svc-backend.environment.svc.cluster.local</li>
<li>via his ip adress (internal): 172.30.214.192</li>
<li>via master host + service name: master.svc-backend.environment.svc.cluster.local</li>
</ol>
<p>Nothing worked sadly. Can anyone explain to me how to communicate in openshift between pods and services?</p>
| <p>The best thing you can do is deploy those 2 pods in the same project so you can keep the communication internally:</p>
<pre><code>$ oc new-project test
$ oc new-app registry:5000/frontend-image
$ oc new-app registry:5000/backend-image
</code></pre>
<p>This will automatically create a deploymentconfig and create your pod + container + a replication controller (for high availability, it will check if the pod is still running) + a service.</p>
<p>A service is an important aspect. This is actually a load balancer which will distribute the traffic between multiple pods. The <code>oc new-app</code> command will check which ports are exposed and create a service on top of those ports.
So for example you can scale up your frontend pod to 3, then the service will distribute visitor1 to pod1, another visitor to pod2, etc. A service is stable, so its IP will not change. A service IP starts with 172.30.xx.xx. Traffic sent to this IP will be forwarded to your pod(s). So to keep traffic on the internal network, it's best to connect to services. You can connect to the service name, which will be translated to the service IP. (If for some reason you have to recreate your service, you can create it with the same name so you don't have to change your app configs.)</p>
<p>E.g.
I have an application which is connected to a mysql database.
In the config of my application I point it at host: <code>mysql</code>. This is the name of my MySQL service. </p>
<pre><code> connection: {
host: 'mysql',
user: 'xx-user',
password: 'xx',
database: 'db',
charset: 'utf8'
  }
</code></pre>
<p>You can check your service:</p>
<pre><code>$ oc get svc
</code></pre>
<p>or in the webconsole</p>
<p><a href="https://i.stack.imgur.com/DCmKu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DCmKu.png" alt="enter image description here"></a></p>
<p>So for your application you have to point to the service-name of your backend. (I first have to start the database because otherwise the deployment of my app will fail because it would not find the database). So you first have to deploy your backend + create the service and point to that service name in the config of your frontend.</p>
<p>Sometimes you aren't able to keep everything internal. Then you have to create routes on your services. This will expose your service to the outside and you can communicate over routes.
Then you have to point to those routes in your configs. The routes will be translated by the OpenShift router, and the router will forward the traffic to the right service.
Give some feedback if things aren't clear.</p>
<p><strong>EDIT 1:</strong></p>
<pre><code>nslookup mysql
Server: 172.30.0.1
Address: 172.30.0.1#53
Name: mysql.test.svc.cluster.local
Address: 172.30.195.xx
</code></pre>
<p><strong>EDIT 2:</strong>
Start mysql in OpenShift (use the ephemeral template: user=test, password=test, database=test).
Go inside your container and try to authenticate in the following way:
you define your user, password and host (host = service name). This also works with the service IP (172.30.xxx).</p>
<pre><code>sh-4.2$ mysql -utest -ptest -hmysql
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 48880
Server version: 5.6.26 MySQL Community Server (GPL)
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
</code></pre>
|
<p>To clarify, this is not a question about running Jenkins in Kubernetes, this is about deploying to Kubernetes from Jenkins.</p>
<p>I have recently settled on using Jenkins (and the workflow/pipeline plugin) to orchestrate our delivery process. Currently, I'm using the imperative style to deploy as per below:</p>
<pre><code>stage 'Deploy to Integ'
// Clean up old releases
sh "kubectl delete svc,deployment ${serviceName} || true"
def cmd = """kubectl run ${serviceName} --image=${dockerRegistry}/${serviceName}:${env.BUILD_NUMBER} --replicas=2 --port=${containerPort} --expose --service-overrides='{ "spec": { "type": "LoadBalancer" }}' """
// execute shell for the command above
sh cmd
</code></pre>
<p>This works well because the <code>${env.BUILD_NUMBER}</code> persists through the pipeline, making it easy for me to ensure the version I deploy is the same all the way through. The problem I have is that I would like to use the declarative approach as this isn't scalable, and I would like the definition in VCS.</p>
<p>Unfortunately, the declarative approach comes with the adverse effect of needing to explicitly state the version of the image (to be deployed) in the yaml. One way around this might be to use the <code>latest</code> tag, however this comes with its own risks. For example, let's take the scenario where I'm about to deploy <code>latest</code> to production and a new version gets tagged <code>latest</code>. The new <code>latest</code> may not have gone through testing.</p>
<p>I could get into changing the file programmatically, but that feels rather clunky, and doesn't help developers who have the file checked out to understand what is latest.</p>
<p>What have you done to solve this issue? Am I missing something obvious? What workflow are you using?</p>
| <p>In my yaml file (server.origin.yml), I set my image as <code>image-name:$BUILD_NUMBER</code></p>
<p>Then I run: <code>envsubst < ./server.origin.yml > ./server.yml</code><br>
This command will replace the string <code>$BUILD_NUMBER</code> by the value of the environment variable</p>
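<p>A small sketch of how that fits into the pipeline from the question (the image name is a placeholder; <code>BUILD_NUMBER</code> is already exported by Jenkins, so <code>envsubst</code> can see it):</p>
<pre><code># server.origin.yml contains a line like:
#   image: my-registry/my-service:$BUILD_NUMBER

stage 'Deploy to Integ'
sh "envsubst < ./server.origin.yml > ./server.yml"
sh "kubectl apply -f ./server.yml"
</code></pre>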
|
<p>UPDATE:
I connected to the minikube VM and I can see my host directory mounted, but there are no files in it. Also, when I create a file there, it does not show up on my host machine. Is there any link between them?</p>
<p>I am trying to mount a host directory for developing my app with kubernetes.</p>
<p>As the docs recommend, I am using minikube to run my kubernetes cluster on my PC. The goal is to create a development environment with docker and kubernetes for developing my app. I want to mount a local directory so docker will read the app code from there, but it does not work. Any help would be really appreciated.</p>
<p>my test app (server.js):</p>
<pre><code>var http = require('http');
var handleRequest = function(request, response) {
response.writeHead(200);
response.end("Hello World!");
}
var www = http.createServer(handleRequest);
www.listen(8080);
</code></pre>
<p>my Dockerfile:</p>
<pre><code>FROM node:latest
WORKDIR /code
ADD code/ /code
EXPOSE 8080
CMD server.js
</code></pre>
<p>my pod kubernetes configuration: (pod-configuration.yaml)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: apiserver
spec:
containers:
- name: node
image: myusername/nodetest:v1
ports:
- containerPort: 8080
volumeMounts:
- name: api-server-code-files
mountPath: /code
volumes:
- name: api-server-code-files
hostPath:
path: /home/<myuser>/Projects/nodetest/api-server/code
</code></pre>
<p>my folder are:</p>
<pre><code>/home/<myuser>/Projects/nodetest/
- pod-configuration.yaml
- api-server/
- Dockerfile
- code/
- server.js
</code></pre>
<p>When I run my docker image without the hostPath volume it works, of course, but the problem is that on each change I must recreate my image, which is really not practical for development. That's why I need the hostPath volume.</p>
<p>Any idea why I don't succeed in mounting my local directory?</p>
<p>Thanks for the help.</p>
| <p>EDIT: Looks like the solution is to either use a <a href="http://kubernetes.io/docs/user-guide/pods/#privileged-mode-for-pod-containers" rel="noreferrer">privileged container</a>, or to manually mount your home folder to allow the MiniKube VM to read from your hostPath -- <a href="https://github.com/boot2docker/boot2docker#virtualbox-guest-additions" rel="noreferrer">https://github.com/boot2docker/boot2docker#virtualbox-guest-additions</a>. (Credit to Eliel for figuring this out.)</p>
<p>It is absolutely possible to configure a hostPath volume with minikube - but there are a lot of quirks and there isn't very good support for this particular issue.</p>
<p>Try removing <code>ADD code/ /code</code> from your Dockerfile. <a href="https://docs.docker.com/engine/reference/builder/#/add" rel="noreferrer">Docker's "ADD" instruction</a> is copying the code from your host machine into your container's <code>/code</code> directory. This is why rebuilding the image successfully updates your code.</p>
<p>When Kubernetes tries to mount the container's <code>/code</code> directory to the host path, it finds that this directory is already full of the code that was baked into the image. If you take this out of the build step, Kubernetes should be able to successfully mount the host path at runtime.</p>
<p>Also be sure to check the permissions of the <code>code/</code> directory on your host machine.</p>
<p>My only other thought is related to mounting in the root directory. I had issues when mounting Kubernetes hostPath volumes to/from directories in the root directory (I assume this was permissions related). So, something else to try would be a mountPath like <code>/var/www/html</code>.</p>
<p>Here's an example of a functional hostPath volume:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
spec:
volumes:
- name: example-volume
hostPath:
path: '/Users/example-user/code'
containers:
- name: example-container
image: example-image
volumeMounts:
- mountPath: '/var/www/html'
name: example-volume
</code></pre>
|
<p>I have to set resource limits for my kubernetes apps, and they use the "millicore" unit "m". </p>
<p>When analyzing my apps in Datadog, I see a unit called M% for CPU usage.</p>
<p>How do I convert 1.5M% to m?</p>
<p>Kubernetes resources: <a href="http://kubernetes.io/docs/user-guide/compute-resources/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/compute-resources/</a></p>
<p><a href="https://i.stack.imgur.com/S6dS8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S6dS8.png" alt="Datadog usage"></a></p>
| <p>This is not the correct graph for determining a resource limit. Your graph shows the CPU usage of your app across the cluster, but a resource limit is per pod (container). We (and you as well) don't know from the graph how many containers were up and running. You can determine the right CPU limit from the container CPU usage graph(s). You will need the Datadog-Docker integration:</p>
<blockquote>
<p>Please be aware that Kubernetes relies on Heapster to report metrics,
rather than the cgroup file directly. The collection interval for
Heapster is unknown which can lead to innacurate time-related data,
such as CPU usage. If you require more precise metrics, we recommend
using the Datadog-Docker Integration.</p>
</blockquote>
<p>Then it depends on how Datadog measures CPU utilization per container. If container CPU utilization maxes out at 100%, then 100% container CPU utilization ~ 1000m ~ 1. </p>
<p>I recommend you read how and when cgroups limit CPU - <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html" rel="nofollow">https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html</a></p>
<p>You will need deep knowledge to set proper CPU limits. If you don't need to prioritize any container, then IMHO the best practice is to set 1 (<code>resources.requests.cpu</code>) for all your containers - they will always have equal CPU time.</p>
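<p>For reference, a container spec with an explicit CPU request/limit looks roughly like this (the values are placeholders, not a recommendation for your workload):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  resources:
    requests:
      cpu: "1"          # guaranteed share used for scheduling
      memory: 256Mi
    limits:
      cpu: "1"          # container is throttled above this
      memory: 512Mi
</code></pre>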
|
<p>I have configured a kubernetes master on CentOS 7 and a kubernetes node on another CentOS 7 machine.</p>
<p><strong>services running on kube master:</strong></p>
<ul>
<li>kube-controller-manager</li>
<li>kube-apiserver</li>
<li>kube-scheduler </li>
<li>etcd </li>
<li>flanneld</li>
</ul>
<p><strong>service running on kube node:</strong></p>
<ul>
<li>flanneld</li>
<li>docker</li>
<li>kube-proxy</li>
<li>kubelet </li>
</ul>
<p>All services are up and running and I can see the API URL successfully returning all endpoints: <a href="http://kube-master:8080" rel="nofollow">http://kube-master:8080</a>.
However, when I run <code>kubectl get nodes</code>, I get the following error:</p>
<p><code>skipping pod synchronization. container runtime is down</code></p>
<p>I don't understand what this error means or how to resolve it. Please suggest.</p>
| <p><code>kubelet</code> is the only component with a dependency on the container runtime (Docker in your case). If I were you I'd start investigating the <code>kubelet</code> logs and search for references to Docker. Maybe the user configured to run kubelet doesn't have the necessary permissions to interact with the Docker socket at <code>/var/run/docker.sock</code>.</p>
<p>Posting the content of your logs may help if you need further assistance.</p>
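<p>On CentOS 7 (systemd), a rough starting point would be the following; the unit names are assumed to be <code>kubelet</code> and <code>docker</code> and may differ on your install:</p>
<pre><code># Check that Docker itself is healthy
systemctl status docker
docker info

# Follow the kubelet logs and look for Docker-related errors
journalctl -u kubelet -f
</code></pre>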
|
<p>As per kubernetes docs: <a href="http://kubernetes.io/docs/user-guide/configmap/" rel="nofollow">http://kubernetes.io/docs/user-guide/configmap/</a></p>
<blockquote>
<p>Kubernetes has a ConfigMap API resource which holds key-value pairs
of configuration data that can be consumed in pods.</p>
</blockquote>
<p>This looks like a very useful feature as many containers require configuration via some combination of config files, and environment variables</p>
<p>Is there a similar feature in docker1.12 swarm ?</p>
| <p>Sadly, Docker (even in 1.12 with swarm mode) does not support the variety of use cases that you could solve with ConfigMaps (also no Secrets).</p>
<p>The only things supported are external env files in both Docker (
<a href="https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file" rel="nofollow">https://docs.docker.com/engine/reference/commandline/run/#/set-environment-variables-e-env-env-file</a>) and Compose (<a href="https://docs.docker.com/compose/compose-file/#/env-file" rel="nofollow">https://docs.docker.com/compose/compose-file/#/env-file</a>).</p>
<p>These are good to keep configuration out of the image, but they rely on environment variables, so you cannot just externalize your whole config file (e.g. for use in nginx or Prometheus). Also you cannot update the env file separately from the deployment/service, which is possible with K8s.</p>
<p>Workaround: You could build your configuration files in a way that uses those variables from the env file maybe.</p>
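<p>A minimal sketch of that workaround with Compose (file, image and variable names are made up for illustration): keep the values in an env file and let the container render its own config from the resulting environment variables at startup.</p>
<pre><code># docker-compose.yml
version: '2'
services:
  web:
    image: my-web-image
    env_file:
      - ./web.env        # e.g. contains BACKEND_URL=http://backend:8080

# the image's entrypoint could then do something like:
#   envsubst < /etc/app/config.tmpl > /etc/app/config.conf
</code></pre>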
<p>I'd guess sooner or later Docker will add that functionality. Currently, Swarm is still in its early days, so for advanced use cases you'd need to either wait (mid to long term all platforms will have similar features), build your own hack/workaround, or go with K8s, which has that stuff integrated.</p>
<p>Sidenote: For Secrets storage I would recommend Hashicorp's Vault. However, for configuration it might not be the right tool.</p>
|
<p>Having set up a kubernetes cluster with calico for the one-ip-per-pod networking, I'm wondering what the best practise is to expose services to the outside world.</p>
<p>IMHO I got two options here, BGP'ing the internal pod IP's (172...) to an edge router/firewall (vyos in my case) and do an SNAT on the firewall / router. But then I'd need one public IP per pod to expose.</p>
<p>Pro: less public IP's need to be used
Con: Pod changes need updated firwall rules?!</p>
<p>Or 2nd: Taking the provided public network and hand it over to calico as an IP pool to be used for the pods.
Con: lots of public IP's wasted for internal services which won't get exposed to the internet</p>
<p>Hope someone could enlighten me or point me in the right direction.</p>
<p>Thanks!</p>
| <p>Calico doesn't provide any special way to expose services in Kubernetes. You should use standard Kubernetes services, node ports and the like to expose your services. In the future, there's a possibility that Calico will offer some of the features that kube-proxy currently does for Kubernetes (such as exposing service IPs) but right now, Calico fits in at the low-level networking API layer only. Calico's real strength in the Kubernetes integration is the ability to define network security policy using the new Kubernetes NetworkPolicy API.</p>
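<p>As a rough example of the standard approach, a NodePort service (the name, label and ports are placeholders) exposes your pods on every node's IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # matches the pods to expose
  ports:
  - port: 80             # cluster-internal service port
    targetPort: 8080     # container port
    nodePort: 30080      # reachable on <any-node-ip>:30080
</code></pre>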
<p>Source: I'm one of Calico's core developers.</p>
|
<p>I have a disk image with mirrors of some protein databases (HHsearch, BLAST, PDB, etc.) That I build with some CI tooling, and write to a GCE disk to run against. I'd like to access this <code>ReadOnlyMany</code> PV in <code>Pods</code> created by <code>ReplicationControllers</code> in multiple namespaces via <code>PersistentVolumeClaims</code> but I'm not getting the expected result.</p>
<p>The PersistentVolume configuration looks like this;</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: "databases"
spec:
capacity:
storage: 500Gi
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Retain
gcePersistentDisk:
pdName: "databases-us-central1-b-kube"
fsType: "ext4"
</code></pre>
<p>How it looks when loaded into kubernetes;</p>
<pre><code>$ kubectl describe pv
Name: databases
Labels: <none>
Status: Bound
Claim: production/databases
Reclaim Policy: Retain
Access Modes: ROX
Capacity: 500Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: databases-us-central1-b-kube
FSType: ext4
Partition: 0
ReadOnly: false
</code></pre>
<p>The PVC configurations are all identical, and look like this;</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: databases
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage:
volumeName: databases
</code></pre>
<p>And the <code>PVC</code>s as they look in the system;</p>
<pre><code>$ for ns in {development,staging,production}; do kubectl describe --namespace=$ns pvc; done
Name: databases
Namespace: development
Status: Pending
Volume: databases
Labels: <none>
Capacity: 0
Access Modes:
Name: databases
Namespace: staging
Status: Pending
Volume: databases
Labels: <none>
Capacity: 0
Access Modes:
Name: databases
Namespace: production
Status: Bound
Volume: databases
Labels: <none>
Capacity: 0
Access Modes:
</code></pre>
<p>I'm seeing lots of <code>timeout expired waiting for volumes to attach/mount for pod "mypod-anid""[namespace]". list of unattached/unmounted volumes=[databases]</code> when I do <code>$ kubectl get events --all-namespaces</code></p>
<p>When I scale the RC 1->2 in production (where one pod <em>did</em> manage to bind the PV), the second Pod fails to mount the same PVC. When I create a second <code>ReplicationController</code> and <code>PersistentVolumeClaim</code> in my production namespace (recall that this is where the pod that successfully mounted the pv lives) backed by the same <code>PersistentVolume</code>, the second Pod/PVC cannot bind.</p>
<p>Am I missing something? How is one supposed to actually <em>use</em> an ROX <code>PersistentVolume</code> with <code>PersistentVolumeClaim</code>s?</p>
| <p>A single PV can only be bound to a single PVC at a given time, regardless of whether it is ReadOnlyMany or not (once a PV/PVC binds, the PV can't bind to any other PVC).</p>
<p>Once a PV/PVC is bound, ReadOnlyMany PVCs may be referenced from multiple pods. In Peter's case, however, he can't use a single PVC object since he is trying to refer to it from multiple namespaces (PVCs are namespaced while PV objects are not).</p>
<p>To make this scenario work, create multiple PV objects that are identical (referring to the same disk) except for the name. This will allow each PVC object (in all namespaces) to find a PV object to bind to.</p>
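<p>Concretely, based on the PV from the question, that would mean something like this (only the metadata names differ; you will likely also want <code>readOnly: true</code> on the <code>gcePersistentDisk</code> so the disk can attach to multiple nodes, and each namespace's PVC then sets <code>volumeName</code> to its own PV):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: databases-staging
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"
    readOnly: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: databases-development   # identical except for the name
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: "databases-us-central1-b-kube"
    fsType: "ext4"
    readOnly: true
</code></pre>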
|
<p>I have created a Kubernetes v1.3.3 cluster on CoreOS based on the <a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="nofollow">contrib repo</a>. My cluster appears healthy, and I would like to use the Dashboard but I am unable to access the UI, even when all authentication is disabled. Below are details of the <code>kubernetes-dashboard</code> components, as well as some API server configs/output. What am I missing here?</p>
<p><strong>Dashboard Components</strong></p>
<pre><code>core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2016-07-28T23:40:57Z
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "345970"
selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
- ip: 172.16.100.9
targetRef:
kind: Pod
name: kubernetes-dashboard-v1.1.0-nog8g
namespace: kube-system
resourceVersion: "345969"
uid: d4791722-5908-11e6-9697-02b43b6aa639
ports:
- port: 9090
protocol: TCP
core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-07-28T23:40:57Z
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "109199"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
clusterIP: 172.20.164.194
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --
namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 172.20.164.194
Port: <unset> 80/TCP
Endpoints: 172.16.100.9:9090
Session Affinity: None
No events.
core@ip-10-178-153-240 ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
creationTimestamp: 2016-08-02T23:28:34Z
generateName: kubernetes-dashboard-v1.1.0-
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
version: v1.1.0
name: kubernetes-dashboard-v1.1.0-nog8g
namespace: kube-system
resourceVersion: "345969"
selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
containers:
- image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 9090
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-lvmnw
readOnly: true
dnsPolicy: ClusterFirst
nodeName: ip-10-178-153-57.us-west-2.compute.internal
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-lvmnw
secret:
secretName: default-token-lvmnw
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:34Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:35Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:34Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
lastState: {}
name: kubernetes-dashboard
ready: true
restartCount: 0
state:
running:
startedAt: 2016-08-02T23:28:34Z
hostIP: 10.178.153.57
phase: Running
podIP: 172.16.100.9
startTime: 2016-08-02T23:28:34Z
</code></pre>
<p><strong>API Server config</strong></p>
<pre><code>/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws
</code></pre>
<p><strong>API Server is accessible from remote host (laptop)</strong></p>
<pre><code>$ curl http://10.178.153.240:8080/
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1alpha1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1alpha1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/healthz",
"/healthz/ping",
"/logs/",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
</code></pre>
<p><strong>UI is not accessible remotely</strong></p>
<pre><code>$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'
</code></pre>
<p><strong>UI is accessible from Minion Node</strong></p>
<pre><code>core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
<!doctype html> <html ng-app="kubernetesDashboard">...
</code></pre>
<p><strong>API Server route tables</strong></p>
<pre><code>core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1
</code></pre>
<p><strong>Minion (where pod lives) route table</strong></p>
<pre><code>core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1
</code></pre>
<p><strong>Flannel Logs</strong>
It seems that this one route is misbehaving with Flannel. I'm getting these errors in the logs but restarting the daemon does not seem to resolve it.</p>
<pre><code>...Watch subnets: client: etcd cluster is unavailable or misconfigured
... L3 miss: 172.16.100.9
... calling NeighSet: 172.16.100.9
</code></pre>
| <p>Either you have to expose your service outside of the cluster using a service of type NodePort as mentioned in the previous answer, or if you enabled Basic Auth on your API Server you can reach your service using the following URL:</p>
<p><code>http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name</code></p>
<p>See: <a href="http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls" rel="nofollow">http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls</a></p>
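<p>With the values from your question (insecure API port 8080 on the master, dashboard service in <code>kube-system</code>), that URL would look roughly like:</p>
<pre><code>http://10.178.153.240:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/
</code></pre>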
|
<p>I've set cpu limits on my Kubernetes pods, but they do not seem to cap cpu usage at all when running on Google Container Engine version 1.3.3</p>
<p>Reading <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints</a> this has to be enabled on the kubelet as follows:</p>
<pre><code>kubelet --cpu-cfs-quota=true
</code></pre>
<p>However when checking the process when logging into one of the nodes of my cluster it seems the kubelet is missing this flag:</p>
<pre><code>/usr/local/bin/kubelet --api-servers=https://xxx.xxx.xxx.xxx --enable-debugging-handlers=true --cloud-provider=gce --config=/etc/kubernetes/manifests --allow-privileged=True --v=2 --cluster-dns=10.223.240.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --node-labels=cloud.google.com/gke-nodepool=default-pool --babysit-daemons=true --eviction-hard=memory.available<100Mi
</code></pre>
<p>Is any Googler able to confirm whether it's enabled or not and, if not, tell us why? Right now it seems I don't have the option to use cpu limits, whereas if it is enabled I can just leave the cpu limit out of my spec if I don't wish to use it.</p>
| <p>That flag's <a href="https://github.com/kubernetes/kubernetes/blob/release-1.3/cmd/kubelet/app/server.go#L570" rel="nofollow">default value is true</a> :)</p>
<p>So yes, it is enabled in Container Engine.</p>
<p>edit: I was wrong - the flag is enabled, but the default operating system used by GKE doesn't support it. Vishnu Kannan's answer is correct!</p>
|
<p>I have a Kubernetes cluster (1.3.2) in the the GKE and I'd like to connect VMs and services from my google project which shares the same network as the cluster.</p>
<p>Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?</p>
<p>I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the ENVs and DNS...but the clusterIP is not reachable outside of the cluster (obviously).</p>
<p>Is there something I'm missing? An important component to this is that this is meant to be a service "public" to the project, such that I don't know which VMs on the project will want to connect to the service (this <em>could</em> rule out loadBalancerSourceRanges). I understand the endpoint which the services actually wraps is the internal IP I can hit, but the only good way to get to that IP is though the Kube API or kubectl, both of which are not prod-ideal ways of hitting my service.</p>
| <p>Check out my more thorough answer <a href="https://stackoverflow.com/questions/31664060/how-to-call-a-service-exposed-by-a-kubernetes-cluster-from-another-kubernetes-cl/31665248#31665248">here</a>, but the most common solution to this is to create bastion routes in your GCP project.</p>
<p>In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.</p>
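<p>A hedged sketch of such a route (the route name is made up, and the network, destination range and node instance are placeholders you would replace with your own service CIDR and one of your GKE nodes):</p>
<pre><code>gcloud compute routes create gke-services-bastion \
    --network=default \
    --destination-range=<service-cluster-ip-range> \
    --next-hop-instance=<gke-node-instance-name> \
    --next-hop-instance-zone=<zone>
</code></pre>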
<p>If that management overhead isn't something you want to do going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint, and make sure that you have a live bastion route to at least N nodes at any given time.</p>
<p><a href="https://cloud.google.com/compute/docs/load-balancing/internal/" rel="nofollow noreferrer">GCP internal load balancing</a> was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.</p>
|
<p>Can Kubernetes be deployed with Docker locally now?</p>
<p>I see the tutorial for deploying Kubernetes with docker on the official Kubernetes website has been removed. In the <a href="http://kubernetes.io/docs/getting-started-guides/binary_release/" rel="nofollow">Kubernetes' download link</a>, Docker is no longer one of the providers. And I have tried to deploy Kubernetes with the following commands, which are similar to the old official tutorial.</p>
<pre><code>docker run -d \
--net=host \
gcr.io/google_containers/etcd:2.0.9 \
/usr/local/bin/etcd \
--addr=127.0.0.1:4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
docker run -d \
--net=host \
-v /var/run/docker.sock:/var/run/docker.sock \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet \
--api_servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable_server \
--hostname_override=127.0.0.1 \
--config=/etc/kubernetes/manifests
docker run -d \
--net=host \
--privileged \
gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube proxy \
--master=http://127.0.0.1:8080 \
--v=2
</code></pre>
<p>The result is that only etcd, kubelet and proxy are created, and I cannot connect to the Kubernetes server with kubectl. The results of <code>docker ps</code>:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c21652ceae44 gcr.io/google_containers/hyperkube:v1.0.1 "/hyperkube proxy --m" 28 seconds ago Up 27 seconds sleepy_bardeen
ee4568ed948c gcr.io/google_containers/hyperkube:v1.0.1 "/hyperkube kubelet -" About a minute ago Up About a minute elegant_hugle
533c459ec7d4 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd " About a minute ago Up About a minute condescending_bhabha
</code></pre>
| <p>Kubernetes is an orchestration (scheduling) system for docker containers and doesn't run inside docker, because kubernetes needs the docker daemon to schedule and orchestrate containers.</p>
<p>Kubernetes needs a physical (bare metal or other) or virtual machine to run.
To run kubernetes locally you can use <a href="https://github.com/kubernetes/minikube" rel="nofollow">minikube</a>.</p>
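<p>For example, after installing minikube and kubectl, something like the following should give you a single-node local cluster (driver flags depend on your platform):</p>
<pre><code># Start a local single-node cluster in a VM
minikube start

# kubectl is pointed at the new cluster automatically
kubectl get nodes
kubectl cluster-info
</code></pre>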
|
<p>I am using Minikube and I am trying to configure Heapster with Grafana and InfluxDB. I followed the instructions <a href="https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md" rel="nofollow">here</a> and all the ReplicationControllers, Pods and Services were created successfully except for the monitoring-grafana service.</p>
<pre><code>$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 4d
kubernetes-dashboard 10.0.0.122 <nodes> 80/TCP 12m
monitoring-influxdb 10.0.0.66 <nodes> 8083/TCP,8086/TCP 1h
$ kubectl get rc --namespace=kube-system
NAME DESIRED CURRENT AGE
heapster 1 1 1h
influxdb-grafana 1 1 34m
kubernetes-dashboard 1 1 13m
$ kubectl get po --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-hrgv3 1/1 Running 1 1h
influxdb-grafana-9pqv8 2/2 Running 0 34m
kube-addon-manager-minikubevm 1/1 Running 6 4d
kubernetes-dashboard-rrpes 1/1 Running 0 13m
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
kubernetes-dashboard is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
</code></pre>
<p>I only changed the grafana-service.yaml to add type: NodePort:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: monitoring-grafana
name: monitoring-grafana
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 3000
selector:
name: influxGrafana
type: NodePort
</code></pre>
<p>When I type <code>kubectl create -f grafana-service.yaml</code> it seems that Kubernetes creates the service successfully, but it really doesn't. It simply creates it and 10 seconds later it disappears.</p>
<pre><code>$ kubectl create -f grafana-service.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30357) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "monitoring-grafana" created
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 4d
kubernetes-dashboard 10.0.0.122 <nodes> 80/TCP 20m
monitoring-grafana 10.0.0.251 <nodes> 80/TCP 3s
monitoring-influxdb 10.0.0.66 <nodes> 8083/TCP,8086/TCP 1h
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 4d
kubernetes-dashboard 10.0.0.122 <nodes> 80/TCP 20m
monitoring-influxdb 10.0.0.66 <nodes> 8083/TCP,8086/TCP 1h
</code></pre>
<p>I have already checked the logs of the containers (InfluxDB, Grafana and Heapster) and everything seems to be fine.</p>
<pre><code>$ kubectl logs influxdb-grafana-9pqv8 grafana --namespace=kube-system
Influxdb service URL is provided.
Using the following URL for InfluxDB: http://monitoring-influxdb:8086
Using the following backend access mode for InfluxDB: proxy
Starting Grafana in the background
Waiting for Grafana to come up...
2016/08/09 16:51:04 [I] Starting Grafana
2016/08/09 16:51:04 [I] Version: 2.6.0, Commit: v2.6.0, Build date: 2015-12-14 14:18:01 +0000 UTC
2016/08/09 16:51:04 [I] Configuration Info
Config files:
[0]: /usr/share/grafana/conf/defaults.ini
[1]: /etc/grafana/grafana.ini
Command lines overrides:
[0]: default.paths.data=/var/lib/grafana
[1]: default.paths.logs=/var/log/grafana
Environment variables used:
[0]: GF_SERVER_HTTP_PORT=3000
[1]: GF_SERVER_ROOT_URL=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
[2]: GF_AUTH_ANONYMOUS_ENABLED=true
[3]: GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
[4]: GF_AUTH_BASIC_ENABLED=false
Paths:
home: /usr/share/grafana
data: /var/lib/grafana
logs: /var/log/grafana
2016/08/09 16:51:04 [I] Database: sqlite3
2016/08/09 16:51:04 [I] Migrator: Starting DB migration
2016/08/09 16:51:04 [I] Migrator: exec migration id: create migration_log table
2016/08/09 16:51:04 [I] Migrator: exec migration id: create user table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index user.login
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index user.email
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_user_login - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_user_email - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table user to user_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create user table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_user_login - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_user_email - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data_source v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table user_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create temp user table v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_email - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_org_id - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_code - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_temp_user_status - v1-7
2016/08/09 16:51:04 [I] Migrator: exec migration id: create star table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index star.user_id_dashboard_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: create org table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_org_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create org_user table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_org_user_org_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_org_user_org_id_user_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data account to org
2016/08/09 16:51:04 [I] Migrator: skipping migration id: copy data account to org, condition not fulfilled
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data account_user to org_user
2016/08/09 16:51:04 [I] Migrator: skipping migration id: copy data account_user to org_user, condition not fulfilled
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table account
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table account_user
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index dashboard.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index dashboard_account_id_slug
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_tag table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index dashboard_tag.dasboard_id_term
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_dashboard_tag_dashboard_id_term - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table dashboard to dashboard_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_dashboard_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_org_id_slug - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy dashboard v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop table dashboard_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: alter dashboard.data to mediumtext v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create data_source table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index data_source.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add unique index data_source.account_id_name
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index IDX_data_source_account_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_data_source_account_id_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table data_source to data_source_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create data_source table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_data_source_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_data_source_org_id_name - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy data_source v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table data_source_v1 #2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Add column with_credentials
2016/08/09 16:51:04 [I] Migrator: exec migration id: create api_key table
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.account_id
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.key
2016/08/09 16:51:04 [I] Migrator: exec migration id: add index api_key.account_id_name
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index IDX_api_key_account_id - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_api_key_key - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop index UQE_api_key_account_id_name - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: Rename table api_key to api_key_v1 - v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create api_key table v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_api_key_org_id - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_api_key_key - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_api_key_org_id_name - v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: copy api_key v1 to v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: Drop old table api_key_v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_snapshot table v4
2016/08/09 16:51:04 [I] Migrator: exec migration id: drop table dashboard_snapshot_v4 #1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create dashboard_snapshot table v5 #2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_key - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_delete_key - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index IDX_dashboard_snapshot_user_id - v5
2016/08/09 16:51:04 [I] Migrator: exec migration id: alter dashboard_snapshot to mediumtext v2
2016/08/09 16:51:04 [I] Migrator: exec migration id: create quota table v1
2016/08/09 16:51:04 [I] Migrator: exec migration id: create index UQE_quota_org_id_user_id_target - v1
2016/08/09 16:51:04 [I] Created default admin user: admin
2016/08/09 16:51:04 [I] Listen: http://0.0.0.0:3000/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
.Grafana is up and running.
Creating default influxdb datasource...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 242 100 37 100 205 2222 12314 --:--:-- --:--:-- --:--:-- 12812
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=5d74e6fdfa244c4c; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 37
{"id":1,"message":"Datasource added"}
Importing default dashboards...
Importing /dashboards/cluster.json ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 71639 100 49 100 71590 376 537k --:--:-- --:--:-- --:--:-- 541k
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=b7bc3ca23c09d7b3; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 49
{"slug":"cluster","status":"success","version":0}
Done importing /dashboards/cluster.json
Importing /dashboards/pods.json ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 32141 100 46 100 32095 2476 1687k --:--:-- --:--:-- --:--:-- 1741k
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=79de9b266893d792; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly
Date: Tue, 09 Aug 2016 16:51:06 GMT
Content-Length: 46
{"slug":"pods","status":"success","version":0}
Done importing /dashboards/pods.json
Bringing Grafana back to the foreground
exec /usr/sbin/grafana-server --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini cfg:default.paths.data=/var/lib/grafana cfg:default.paths.logs=/var/log/grafana
$ kubectl logs influxdb-grafana-9pqv8 influxdb --namespace=kube-system
8888888 .d888 888 8888888b. 888888b.
888 d88P" 888 888 "Y88b 888 "88b
888 888 888 888 888 888 .88P
888 88888b. 888888 888 888 888 888 888 888 888 8888888K.
888 888 "88b 888 888 888 888 Y8bd8P' 888 888 888 "Y88b
888 888 888 888 888 888 888 X88K 888 888 888 888
888 888 888 888 888 Y88b 888 .d8""8b. 888 .d88P 888 d88P
8888888 888 888 888 888 "Y88888 888 888 8888888P" 8888888P"
2016/08/09 16:51:04 InfluxDB starting, version 0.9.4.1, branch 0.9.4, commit c4f85f84765e27bfb5e58630d0dea38adeacf543
2016/08/09 16:51:04 Go version go1.5, GOMAXPROCS set to 1
2016/08/09 16:51:04 Using configuration at: /etc/influxdb.toml
[metastore] 2016/08/09 16:51:04 Using data dir: /data/meta
[metastore] 2016/08/09 16:51:04 Node at localhost:8088 [Follower]
[metastore] 2016/08/09 16:51:05 Node at localhost:8088 [Leader]. peers=[localhost:8088]
[metastore] 2016/08/09 16:51:05 Created local node: id=1, host=localhost:8088
[monitor] 2016/08/09 16:51:05 Starting monitor system
[monitor] 2016/08/09 16:51:05 'build' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'runtime' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'network' registered for diagnostics monitoring
[monitor] 2016/08/09 16:51:05 'system' registered for diagnostics monitoring
[store] 2016/08/09 16:51:05 Using data dir: /data/data
[handoff] 2016/08/09 16:51:05 Starting hinted handoff service
[handoff] 2016/08/09 16:51:05 Using data dir: /data/hh
[tcp] 2016/08/09 16:51:05 Starting cluster service
[shard-precreation] 2016/08/09 16:51:05 Starting precreation service with check interval of 10m0s, advance period of 30m0s
[snapshot] 2016/08/09 16:51:05 Starting snapshot service
[copier] 2016/08/09 16:51:05 Starting copier service
[admin] 2016/08/09 16:51:05 Starting admin service
[admin] 2016/08/09 16:51:05 Listening on HTTP: [::]:8083
[continuous_querier] 2016/08/09 16:51:05 Starting continuous query service
[httpd] 2016/08/09 16:51:05 Starting HTTP service
[httpd] 2016/08/09 16:51:05 Authentication enabled: false
[httpd] 2016/08/09 16:51:05 Listening on HTTP: [::]:8086
[retention] 2016/08/09 16:51:05 Starting retention policy enforcement service with check interval of 30m0s
[run] 2016/08/09 16:51:05 Listening for signals
[monitor] 2016/08/09 16:51:05 Storing statistics in database '_internal' retention policy '', at interval 10s
[metastore] 2016/08/09 16:51:05 database '_internal' created
[metastore] 2016/08/09 16:51:05 retention policy 'default' for database '_internal' created
[metastore] 2016/08/09 16:51:05 retention policy 'monitor' for database '_internal' created
2016/08/09 16:51:05 Sending anonymous usage statistics to m.influxdb.com
[wal] 2016/08/09 16:51:15 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 20971520 partition size threshold
[wal] 2016/08/09 16:51:15 WAL writing to /data/wal/_internal/monitor/1
[wal] 2016/08/09 16:51:20 Flush due to idle. Flushing 1 series with 1 points and 143 bytes from partition 1
[wal] 2016/08/09 16:51:20 write to index of partition 1 took 496.995µs
[wal] 2016/08/09 16:51:30 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:30 write to index of partition 1 took 436.627µs
[wal] 2016/08/09 16:51:40 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:40 write to index of partition 1 took 360.64µs
[wal] 2016/08/09 16:51:50 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:51:50 write to index of partition 1 took 383.191µs
[wal] 2016/08/09 16:52:00 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:00 write to index of partition 1 took 362.55µs
[wal] 2016/08/09 16:52:10 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:10 write to index of partition 1 took 337.138µs
[wal] 2016/08/09 16:52:20 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:20 write to index of partition 1 took 356.146µs
[wal] 2016/08/09 16:52:30 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:30 write to index of partition 1 took 398.484µs
[wal] 2016/08/09 16:52:40 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:40 write to index of partition 1 took 473.95µs
[wal] 2016/08/09 16:52:50 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:52:50 write to index of partition 1 took 255.661µs
[wal] 2016/08/09 16:53:00 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:53:00 write to index of partition 1 took 352.629µs
[wal] 2016/08/09 16:53:10 Flush due to idle. Flushing 5 series with 5 points and 364 bytes from partition 1
[wal] 2016/08/09 16:53:10 write to index of partition 1 took 373.52µs
[http] 2016/08/09 16:53:12 172.17.0.2 - root [09/Aug/2016:16:53:12 +0000] GET /ping HTTP/1.1 204 0 - heapster/1.2.0-beta.0 c2197fd8-5e51-11e6-8001-000000000000 80.938µs
[http] 2016/08/09 16:53:12 172.17.0.2 - root [09/Aug/2016:16:53:12 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 404 50 - heapster/1.2.0-beta.0 c21e2912-5e51-11e6-8002-000000000000 18.498818ms
[wal] 2016/08/09 16:53:20 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:20 write to index of partition 1 took 463.429µs
[wal] 2016/08/09 16:53:30 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:30 write to index of partition 1 took 486.92µs
[wal] 2016/08/09 16:53:40 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:40 write to index of partition 1 took 489.395µs
[wal] 2016/08/09 16:53:50 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:53:50 write to index of partition 1 took 502.615µs
[wal] 2016/08/09 16:54:00 Flush due to idle. Flushing 6 series with 6 points and 408 bytes from partition 1
[wal] 2016/08/09 16:54:00 write to index of partition 1 took 526.287µs
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] GET /ping HTTP/1.1 204 0 - heapster/1.2.0-beta.0 e183bf22-5e51-11e6-8003-000000000000 77.559µs
[query] 2016/08/09 16:54:05 CREATE DATABASE k8s
[metastore] 2016/08/09 16:54:05 database 'k8s' created
[metastore] 2016/08/09 16:54:05 retention policy 'default' for database 'k8s' created
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] GET /query?db=&q=CREATE+DATABASE+k8s HTTP/1.1 200 40 - heapster/1.2.0-beta.0 e183d606-5e51-11e6-8004-000000000000 1.435103ms
[wal] 2016/08/09 16:54:05 WAL starting with 30720 ready series size, 0.50 compaction threshold, and 20971520 partition size threshold
[wal] 2016/08/09 16:54:05 WAL writing to /data/wal/k8s/default/2
[http] 2016/08/09 16:54:05 172.17.0.2 - root [09/Aug/2016:16:54:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 e1860e09-5e51-11e6-8005-000000000000 30.444828ms
[wal] 2016/08/09 16:54:10 Flush due to idle. Flushing 8 series with 8 points and 514 bytes from partition 1
[wal] 2016/08/09 16:54:10 write to index of partition 1 took 530.292µs
[wal] 2016/08/09 16:54:11 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:54:11 write to index of partition 1 took 32.567355ms
[wal] 2016/08/09 16:54:20 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:20 write to index of partition 1 took 1.549305ms
[wal] 2016/08/09 16:54:30 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:30 write to index of partition 1 took 572.059µs
[wal] 2016/08/09 16:54:40 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:40 write to index of partition 1 took 580.618µs
[wal] 2016/08/09 16:54:50 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:54:50 write to index of partition 1 took 641.815µs
[wal] 2016/08/09 16:55:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:01 write to index of partition 1 took 385.986µs
[http] 2016/08/09 16:55:05 172.17.0.2 - root [09/Aug/2016:16:55:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 05482b86-5e52-11e6-8006-000000000000 10.363919ms
[wal] 2016/08/09 16:55:10 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:55:10 write to index of partition 1 took 19.304596ms
[wal] 2016/08/09 16:55:11 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:11 write to index of partition 1 took 638.219µs
[wal] 2016/08/09 16:55:21 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:21 write to index of partition 1 took 409.537µs
[wal] 2016/08/09 16:55:31 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:31 write to index of partition 1 took 442.186µs
[wal] 2016/08/09 16:55:41 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:41 write to index of partition 1 took 417.074µs
[wal] 2016/08/09 16:55:51 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:55:51 write to index of partition 1 took 434.209µs
[wal] 2016/08/09 16:56:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:01 write to index of partition 1 took 439.568µs
[http] 2016/08/09 16:56:05 172.17.0.2 - root [09/Aug/2016:16:56:05 +0000] POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1 204 0 - heapster/1.2.0-beta.0 290b8b5e-5e52-11e6-8007-000000000000 5.954015ms
[wal] 2016/08/09 16:56:10 Flush due to idle. Flushing 261 series with 261 points and 4437 bytes from partition 1
[wal] 2016/08/09 16:56:10 write to index of partition 1 took 16.643255ms
[wal] 2016/08/09 16:56:11 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:11 write to index of partition 1 took 479.833µs
[wal] 2016/08/09 16:56:21 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:21 write to index of partition 1 took 631.107µs
[wal] 2016/08/09 16:56:31 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:31 write to index of partition 1 took 694.61µs
[wal] 2016/08/09 16:56:41 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:41 write to index of partition 1 took 708.474µs
[wal] 2016/08/09 16:56:51 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
[wal] 2016/08/09 16:56:51 write to index of partition 1 took 627.979µs
[wal] 2016/08/09 16:57:01 Flush due to idle. Flushing 9 series with 9 points and 612 bytes from partition 1
</code></pre>
<p>I also tried to create the service from the Kubernetes dashboard with the same result. It creates the service and almost immediately there's no service.</p>
<p>Sorry for the huge post. I hope you can help me out. Thanks.</p>
<p><strong>EDIT</strong></p>
<p>Thanks to @Pixel_Elephant. After removing the label 'kubernetes.io/cluster-service: 'true'' from both files (grafana-service.yaml and heapster-service.yaml), the service could survive.</p>
<p>Just one more step:
in influxdb-grafana-controller.yaml, change:</p>
<pre><code>- name: GF_SERVER_ROOT_URL
value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
</code></pre>
<p>to</p>
<pre><code>- name: GF_SERVER_ROOT_URL
value: /
</code></pre>
<p>and I could finally access the Grafana dashboard at <a href="http://192.168.99.100/" rel="nofollow">http://192.168.99.100</a>:<NODE_PORT>/</p>
| <p>Remove the <code>kubernetes.io/cluster-service: 'true'</code> label.</p>
<p>See <a href="https://github.com/kubernetes/kops/issues/13" rel="nofollow">https://github.com/kubernetes/kops/issues/13</a></p>
|
<p>I'm looking for a simple application to deploy on a Kubernetes environment that is divided into Docker microservices, where some of the microservices talk to each other via a REST API.</p>
<p>Somebody?</p>
<p>I'll be happy to get full details of how to install it on my environment (I have 3 host VMs).</p>
<p>Thanks!</p>
| <p>You can find a list of application examples in the kubernetes <a href="https://github.com/kubernetes/kubernetes/tree/e3fa83177c4d2d1331d41e7ce70b1f145475587a/examples" rel="nofollow">github repo</a>. The <a href="https://github.com/kubernetes/kubernetes/tree/e3fa83177c4d2d1331d41e7ce70b1f145475587a/examples/guestbook#guestbook-example" rel="nofollow">Guestbook application</a> will be a good start.</p>
|
<p>I have created a Kubernetes v1.3.3 cluster on CoreOS based on the <a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="nofollow">contrib repo</a>. My cluster appears healthy, and I would like to use the Dashboard but I am unable to access the UI, even when all authentication is disabled. Below are details of the <code>kubernetes-dashboard</code> components, as well as some API server configs/output. What am I missing here?</p>
<p><strong>Dashboard Components</strong></p>
<pre><code>core@ip-10-178-153-240 ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2016-07-28T23:40:57Z
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "345970"
selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
- ip: 172.16.100.9
targetRef:
kind: Pod
name: kubernetes-dashboard-v1.1.0-nog8g
namespace: kube-system
resourceVersion: "345969"
uid: d4791722-5908-11e6-9697-02b43b6aa639
ports:
- port: 9090
protocol: TCP
core@ip-10-178-153-240 ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2016-07-28T23:40:57Z
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "109199"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
clusterIP: 172.20.164.194
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
core@ip-10-178-153-240 ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 172.20.164.194
Port: <unset> 80/TCP
Endpoints: 172.16.100.9:9090
Session Affinity: None
No events.
core@ip-10-178-153-240 ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
creationTimestamp: 2016-08-02T23:28:34Z
generateName: kubernetes-dashboard-v1.1.0-
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
version: v1.1.0
name: kubernetes-dashboard-v1.1.0-nog8g
namespace: kube-system
resourceVersion: "345969"
selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
containers:
- image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 9090
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-lvmnw
readOnly: true
dnsPolicy: ClusterFirst
nodeName: ip-10-178-153-57.us-west-2.compute.internal
restartPolicy: Always
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-lvmnw
secret:
secretName: default-token-lvmnw
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:34Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:35Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2016-08-02T23:28:34Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
lastState: {}
name: kubernetes-dashboard
ready: true
restartCount: 0
state:
running:
startedAt: 2016-08-02T23:28:34Z
hostIP: 10.178.153.57
phase: Running
podIP: 172.16.100.9
startTime: 2016-08-02T23:28:34Z
</code></pre>
<p><strong>API Server config</strong></p>
<pre><code>/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws
</code></pre>
<p><strong>API Server is accessible from remote host (laptop)</strong></p>
<pre><code>$ curl http://10.178.153.240:8080/
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1alpha1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1alpha1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/healthz",
"/healthz/ping",
"/logs/",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
</code></pre>
<p><strong>UI is not accessible remotely</strong></p>
<pre><code>$ curl -L http://10.178.153.240:8080/ui
Error: 'dial tcp 172.16.100.9:9090: i/o timeout'
Trying to reach: 'http://172.16.100.9:9090/'
</code></pre>
<p><strong>UI is accessible from Minion Node</strong></p>
<pre><code>core@ip-10-178-153-57 ~$ curl -L 172.16.100.9:9090
<!doctype html> <html ng-app="kubernetesDashboard">...
</code></pre>
<p><strong>API Server route tables</strong></p>
<pre><code>core@ip-10-178-153-240 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1
</code></pre>
<p><strong>Minion (where pod lives) route table</strong></p>
<pre><code>core@ip-10-178-153-57 ~ $ ip route show
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024
172.16.0.0/12 dev flannel.1
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1
</code></pre>
<p><strong>Flannel Logs</strong>
It seems that this one route is misbehaving with Flannel. I'm getting these errors in the logs but restarting the daemon does not seem to resolve it.</p>
<pre><code>...Watch subnets: client: etcd cluster is unavailable or misconfigured
... L3 miss: 172.16.100.9
... calling NeighSet: 172.16.100.9
</code></pre>
| <p>For anyone who finds their way to this question, I wanted to post the final resolution: it was not a Flannel, Kubernetes, or SkyDNS issue, it was an inadvertent firewall rule. As soon as I opened up the firewall on the API server, my Flannel routes were fully functional and I could access the Dashboard (assuming basic auth was enabled on the API Server).</p>
<p>So in the end, user error :)</p>
|
<p>May I ask if it is possible to use Kubernetes in Spring Cloud instead of a Eureka server?
The reason I am asking is that we already have Kubernetes in our environment. I know Kubernetes also provides service discovery, so I don't want to start a Eureka server just for service discovery purposes. If I can replace Eureka with Kubernetes, that would be great.</p>
| <p>How about spring-cloud-kubernetes by fabric8io? </p>
<p>There is an implementation of the Spring Cloud DiscoveryClient interface that works with Kubernetes services.</p>
|
<p>We're expanding our microservice herd application and I was looking into Kubernetes for our needs. Before my dive into modern orchestration I was thinking about service discovery in the following way:</p>
<ul>
<li>Cluster is bootstrapped with some kind of distributed service registry (Consul in our case)</li>
<li>Every service is launched with service registry endpoints passed in somehow</li>
<li>Every service self-registers itself in registry</li>
<li>Whenever service needs some other service addresses, it fetches contact points from registry</li>
</ul>
<p>In that case, if any service fails or some kind of network disruption occurs, a client service may proceed with the next contact point and eventually succeed (in case it is not totally cut off). As far as I've understood, Kubernetes uses a completely different model:</p>
<ul>
<li>All pods are self-registered in kubernetes</li>
<li>Kubernetes provides single load balancer instance to pass traffic through to services</li>
<li>Load balancer itself may be discovered via environment variables or DNS query (and that may result in creepy things such as fetching port from DNS records or just stale environment variable)</li>
</ul>
<p>And that confuses me a little. If I'm correct (feel free to tell me I'm not if that's the case), this basically turns the load balancer into a SPOF that may stop the whole application the moment it dies. Am I right? Are there any guarantees made by Kubernetes that such a situation won't happen or would be resolved in <N> <time units>?</p>
| <p>The in-cluster load balancer in Kubernetes (kube-proxy) is distributed among all of your cluster's nodes. It syncs all service endpoints to iptables rules on the node. Kube-proxy is healthchecked by kubelet, and will be restarted if it is unhealthy for 30 seconds (that's configurable, I think). The actual traffic does not go through the kube-proxy binary, so if kube-proxy does get stuck, the worst thing that happens is your view of service endpoints gets stale. Take a look at the docs and the <a href="http://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies" rel="nofollow">Virtual IPs section</a> of the Service Docs, <a href="https://github.com/kubernetes/kubernetes/wiki/Services-FAQ" rel="nofollow">Services FAQs</a>, and <a href="https://speakerdeck.com/cjcullen/kubernetes-networking?slide=46" rel="nofollow">slides 46-61</a> of this kubernetes networking deck for more details on kube-proxy.</p>
<p>For service discovery, each service is accessible by name through kube-dns. Pods have kube-dns in their search path, so no environment variable magic is necessary. <a href="https://speakerdeck.com/cjcullen/kubernetes-networking?slide=68" rel="nofollow">Slides 68-83</a> of that same deck has a full walkthrough of DNS->Virtual IP->Pod traffic.</p>
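<p>As a quick illustration (using a hypothetical service named <code>my-service</code> in the <code>default</code> namespace), you can inspect both pieces yourself:</p>
<pre><code># the stable virtual IP that the DNS name my-service.default.svc.cluster.local resolves to
kubectl get svc my-service

# the pod IPs that kube-proxy programs into iptables behind that virtual IP
kubectl get endpoints my-service
</code></pre>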
<p>So, the load balancer is not really a SPOF. It should mostly share fate with the workloads running on the same node. Kube-dns could be a SPOF for service discovery, but it can be replicated however much you like.</p>
|
<p>I'm trying to push an image to the container registry via Jenkins. It was working at first, but now I get "access denied":</p>
<pre><code>docker -- push gcr.io/xxxxxxx-yyyyy-138623/myApp:master.1
The push refers to a repository [gcr.io/xxxxxxx-yyyyy-138623/myApp]
bdc3ba7fdb96: Preparing
5632c278a6dc: Waiting
denied: Access denied.
</code></pre>
<p>the Jenkinsfile look like :</p>
<pre><code> sh("gcloud docker --authorize-only")
sh("docker -- push gcr.io/xxxxxxx-yyyyy-138623/hotelpro4u:master.1")
</code></pre>
<p>Remarks:</p>
<ul>
<li>Jenkins is running in Google Cloud</li>
<li>If I try in Google Shell or from my computer, it's working</li>
<li>I followed this tutorial : <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes" rel="nofollow">https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes</a></li>
</ul>
<p>I've been stuck for 12 hours... I need help.</p>
| <p>That error means that the GKE node is not authorized to push to the GCS bucket that is backing your repository.</p>
<p>This could be because:</p>
<ol>
<li>The cluster does not have the correct scopes to authenticate to GCS. Did you create the cluster with <code>--scopes storage-rw</code>? (A sketch of creating a cluster with that scope follows this list.)</li>
<li>The service account that the cluster is running as does not have permissions on the bucket. Check the <a href="https://console.cloud.google.com/iam-admin" rel="nofollow">IAM & Admin section</a> on your project to make sure that the service account has the necessary role.</li>
</ol>
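<p>For the first case, node scopes are set at cluster creation time, so a rough sketch (the cluster name and zone here are placeholders, not taken from your setup) would be:</p>
<pre><code># create the cluster with read/write access to Cloud Storage, which backs gcr.io
gcloud container clusters create jenkins-cluster \
    --zone us-central1-a \
    --scopes storage-rw
</code></pre>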
|
<p>I've set cpu limits on my Kubernetes pods, but they do not seem to cap cpu usage at all running on Google Container Engine version 1.3.3</p>
<p>Reading <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints</a> this has to be enabled on the kubelet as follows:</p>
<pre><code>kubelet --cpu-cfs-quota=true
</code></pre>
<p>However when checking the process when logging into one of the nodes of my cluster it seems the kubelet is missing this flag:</p>
<pre><code>/usr/local/bin/kubelet --api-servers=https://xxx.xxx.xxx.xxx --enable-debugging-handlers=true --cloud-provider=gce --config=/etc/kubernetes/manifests --allow-privileged=True --v=2 --cluster-dns=10.223.240.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --node-labels=cloud.google.com/gke-nodepool=default-pool --babysit-daemons=true --eviction-hard=memory.available<100Mi
</code></pre>
<p>Is any Googler able to confirm whether its enabled or not and if not tell us why? Now it seems I don't have the choice to use cpu limits whereas as it's enabled I can just leave cpu limit out of my spec if I don't wish to use it.</p>
| <p>Google Container Engine does not support CPU Quota by default. If you'd like to use CPU quota you can switch to using GCI Node image - <a href="https://cloud.google.com/container-engine/docs/gci" rel="nofollow">https://cloud.google.com/container-engine/docs/gci</a>.
GCI has support for CPU quota and Container Engine would automatically start supporting CPU <code>limits</code> on containers.</p>
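<p>A hedged sketch of creating such a cluster (the cluster name and zone are placeholders, and the <code>--image-type</code> value assumes the GCI image is available in your project and zone):</p>
<pre><code># create a cluster whose nodes run the GCI image, which supports CPU CFS quota
gcloud container clusters create quota-test \
    --zone us-central1-a \
    --image-type GCI
</code></pre>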
|
<p>Sometimes pod creation fails with a 500 error on our GKE cluster:</p>
<pre><code>1m 1m 1 installer-u57ab1f7707b03 Pod Normal Scheduled {default-scheduler } Successfully assigned installer-u57ab1f7707b03 to gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l
1m 1m 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container ff8573fbf0b90a25b5565b1feb36671f13367115dde74e581cf249be772d8e4e: [8] System error: read parent: connection reset by peer\n"
1m 1m 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container fbd7151d4489ed3ac9b21ef9ee3268039374fe3aee1f5933dc27d003f5388e7d: [8] System error: read parent: connection reset by peer\n"
1m 1m 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container c6b7969fd036fd187f8b5b815106887d718780b290b81e6dde12162d15c22728: [8] System error: read parent: connection reset by peer\n"
49s 49s 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container 5b0d78ee31759a3472f15fe375ef4f2542dcc65518023a1bd06593fe7d28a448: [8] System error: read parent: connection reset by peer\n"
32s 32s 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container 7ff5941a30ce432aa1b1382e4b20d272a08a7113f79f7f1ff2f8898a00ca8f06: [8] System error: read parent: connection reset by peer\n"
18s 18s 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container a91ae7d6dc9dee5196e73457d817bc46f8009c26147cc81727920aebfa52cc38: [8] System error: read parent: connection reset by peer\n"
2s 2s 1 installer-u57ab1f7707b03 Pod Warning FailedSync {kubelet gke-oro-cloud-v1-1445426963-ffbcc283-node-bo1l} Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: API error (500): Cannot start container ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508: [8] System error: read parent: connection reset by peer\n"
</code></pre>
<p>In docker.log I found:</p>
<pre><code>time="2016-08-10T12:37:24.458097892Z" level=warning msg="failed to cleanup ipc mounts:\nfailed to umount /var/lib/docker/containers/ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508/shm: invalid argument\nfailed to umount /var/lib/docker/containers/ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508/mqueue: invalid argument"
time="2016-08-10T12:37:24.458280187Z" level=error msg="Handler for POST /containers/ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508/start returned error: Cannot start container ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508: [8] System error: read parent: connection reset by peer"
time="2016-08-10T12:37:24.458315257Z" level=error msg="HTTP Error" err="Cannot start container ad8b7bbe72410232d7fe6197e057d15e9003e24f6d8aad15bc7068430cfea508: [8] System error: read parent: connection reset by peer" statusCode=500
time="2016-08-10T12:37:40.151776337Z" level=warning msg="signal: killed"
</code></pre>
<p>Kubernetes version v1.2.5<br>
Docker version 1.9.1</p>
<p>Any ideas how to fix it?</p>
| <p>This is probably due to the <a href="https://github.com/docker/docker/issues/14203" rel="nofollow">runc bug</a> in Docker 1.9 where the container reads its config, but closes the read pipe before the parent is done writing.</p>
<p>A fixed runc is included in Docker 1.10. Kubernetes 1.3 uses Docker 1.11.2, but until you upgrade, you may be able to work around the issue by <a href="https://github.com/docker/docker/issues/14203#issuecomment-204976854" rel="nofollow">adding extra characters</a> to your container's command line.</p>
|
<p>What is the username/password/keys to ssh into the Minikube VM?</p>
| <p>You can use the Minikube binary for this, <code>minikube ssh</code>.</p>
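<p>If you need a plain ssh session instead (for example from a script), here is a sketch that assumes the default docker-machine style layout used by the Minikube VM (user <code>docker</code>, key generated under <code>~/.minikube</code>):</p>
<pre><code># the supported way
minikube ssh

# assumed manual equivalent: default "docker" user plus the generated machine key
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
</code></pre>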
|
<p>After creating a new app using <code>oc new-app location/nameofapp</code>, many things are created: a deploymentConfig, an imagestream, a service, etc. I know you can run <code>oc delete <label></code>. I would like to know how to delete all of these given the label.</p>
| <p>When using <code>oc new-app</code>, it would normally add a label on each resource created call <code>app</code> with value being the name given to the application. That name would be based on the name of the git repository, or could have been supplied using the <code>--name</code> option. Knowing that to delete everything you can then run:</p>
<pre><code>oc delete all --selector app=appname
</code></pre>
<p>Before you delete anything you should be able to check what would matche by running:</p>
<pre><code>oc get all --selector app=appname
</code></pre>
<p>Note that if creating from a template, rather than a repository, how things are labelled can depend on what the template itself sets up, so the instructions above may not apply.</p>
|
<p>I have a local Kubernetes cluster on a single machine, and a container that needs to receive a url (like <a href="https://www.wikipedia.org/" rel="noreferrer">https://www.wikipedia.org/</a>) and extract the text content from it. Essentially I need my pod to connect to the outside world. Since I am using v1.2.5, I need some DNS add-on like SkyDNS, but I cannot find any working example or tutorial on how to set it up. Tutorials like <a href="http://www.projectatomic.io/blog/2015/10/setting-up-skydns/" rel="noreferrer">this</a> usually only tell me how to make pods within the cluster talk to each other by DNS look-up. </p>
<p>Therefore, could anyone give me some advice on how to set up and configure an add-on of Kubernetes so that pods can access the public Internet? Thank you very much!</p>
| <p>You can simply create your pods with "dnsPolicy: Default". This will give them a resolv.conf just like on the host, and they will be able to resolve wikipedia.org. They will not be able to resolve cluster-local services. If you're looking to actually deploy kube-dns so you can also resolve cluster-local services, this is probably the best starting point: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
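<p>A minimal sketch of such a pod (the pod name and image are just examples) that resolves an external name using the node's resolv.conf:</p>
<pre><code># create a pod with dnsPolicy: Default and have it resolve an external hostname
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-default-example
spec:
  dnsPolicy: Default
  restartPolicy: Never
  containers:
  - name: lookup
    image: busybox
    command: ["nslookup", "www.wikipedia.org"]
EOF

# then check the output
kubectl logs dns-default-example
</code></pre>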
|
<p>I'm querying the Kubernetes kubelet API (curl -s <a href="http://localhost:10255/stats/summary" rel="nofollow">http://localhost:10255/stats/summary</a>) for CPU/Memory statistics and the CPU info is showing up as follows.</p>
<pre><code> "cpu": {
"time": "2016-08-04T22:48:22Z",
"usageNanoCores": 6392499,
"usageCoreNanoSeconds": 3270519504746
},
</code></pre>
<p>How do I convert usageNanoCores or usageCoreNanoSeconds to CPU utilization percentage? </p>
| <blockquote>
<p>If a process were to run on one cpu continuously for a second, its
usage will be 1e+9 nanoseconds. If it ran on <code>n</code> cores continuously
its usage will n * 1e+9 nanoseconds. </p>
<p>Percentage will be usage_in_nanoseconds / (capacity_in_absolute_cores * 1e+9).</p>
</blockquote>
<p>*source: <a href="https://github.com/kubernetes/heapster/issues/650#issuecomment-147795824" rel="nofollow">https://github.com/kubernetes/heapster/issues/650#issuecomment-147795824</a></p>
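<p>As a worked example with the sample above, assuming the node has 2 cores (the core count here is an assumption; in practice read it from the node's capacity):</p>
<pre><code># usageNanoCores / (cores * 1e9) * 100 = CPU utilization in percent
awk 'BEGIN { usage = 6392499; cores = 2; printf "%.3f%%\n", usage / (cores * 1e9) * 100 }'
# prints 0.320%
</code></pre>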
|
<p>root@k8s:/opt/k8s/kubernetes/cluster/ubuntu/binaries# ./kubectl logs jnlp-slave-1c45182a61</p>
<pre><code>Aug 16, 2016 6:56:38 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up slave: jnlp-slave-1c45182a61
Aug 16, 2016 6:56:38 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Aug 16, 2016 6:56:38 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins_server_ip:8080]
Aug 16, 2016 6:56:38 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: http://jenkins_server_ip:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found
java.lang.Exception: http://jenkins_server_ip:8080/tcpSlaveAgentListener/ is invalid: 404 Not Found
at hudson.remoting.Engine.run(Engine.java:215)
</code></pre>
<p>root@k8s:/opt/k8s/kubernetes/cluster/ubuntu/binaries# curl <a href="http://jenkins_server_ip:8080/tcpSlaveAgentListener/" rel="noreferrer">http://jenkins_server_ip:8080/tcpSlaveAgentListener/</a></p>
<pre><code><html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /tcpSlaveAgentListener/. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
</code></pre>
| <p>Please check
<a href="http://jenkins_server_ip:8080/configureSecurity/" rel="noreferrer">http://jenkins_server_ip:8080/configureSecurity/</a></p>
<p><strong>[TCP port for JNLP slave agents]</strong> should <strong>NOT</strong> be set to [Disable].</p>
|
<p>My understanding is that Docker creates an image layer at every stage of a dockerfile.</p>
<p>If I have X containers running on the same machine (where X >=2) and every container has a common underlying image layer (ie. debian), will docker keep only one copy of the base image on that machine, or does it have multiple copies for each container?</p>
<p>Is there a point this breaks down, or is it true for every layer in the dockerfile?</p>
<p>How does this work?</p>
<p>Does Kubernetes affect this in any way?</p>
| <p>Dockers <a href="https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/" rel="noreferrer">Understand images, containers, and storage drivers</a> details most of this.</p>
<p>From Docker 1.10 onwards, all the layers that make up an image have an SHA256 secure content hash associated with them at build time. This hash is consistent across hosts and builds, as long as the content of the layer is the same. </p>
<p>If any number of images share a layer, only the 1 copy of that layer will be stored and used by all images on that instance of the Docker engine.</p>
<p>A tag like <code>debian</code> can refer to multiple SHA256 image hash's over time as new releases come out. Two images that are built with <code>FROM debian</code> don't necessarily share layers, only if the SHA256 hash's match. </p>
<p>Anything that runs the Docker Engine underneath will use this storage setup. </p>
<p>This sharing also works in the Docker Registry (>2.2 for the best results). If you were to push images with layers that already exist on that registry, the existing layers are skipped. Same with pulling layers to your local engine. </p>
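<p>You can see the sharing on a host by comparing layer digests. A rough sketch (the image names are just examples, and the assumption that <code>httpd</code> is built on <code>debian:jessie</code> may not hold for every tag):</p>
<pre><code>docker pull debian:jessie
docker pull httpd:2.4

# list the content-addressed layer digests of each image
docker inspect -f '{{json .RootFS.Layers}}' debian:jessie
docker inspect -f '{{json .RootFS.Layers}}' httpd:2.4
# any sha256 digest that appears in both lists is stored exactly once on disk
</code></pre>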
|
<p>Hi, I tried the new annotation for Ingress explained <a href="https://stackoverflow.com/questions/37001557/how-to-force-ssl-for-kubernetes-ingress-on-gke">here</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ssl-iagree-ingress
annotations:
kubernetes.io/ingress.allowHTTP: "false"
spec:
tls:
- secretName: secret-cert-myown
backend:
serviceName: modcluster
servicePort: 80
</code></pre>
<p>but I can still access it through HTTP. This is my setup on gcloud: <strong>ingress</strong> -- <strong>apache:80</strong></p>
| <p>Well, I was able to resolve the issue thanks to Mr Danny. As shown in this pull request <a href="https://github.com/kubernetes/contrib/pull/1462" rel="noreferrer">here</a>, there was a typo in</p>
<pre><code>kubernetes.io/ingress.allowHTTP: "false"
</code></pre>
<p>change it to </p>
<pre><code>kubernetes.io/ingress.allow-http: "false"
</code></pre>
<p>and it works fine now.</p>
<p>PS: this only works for master version 1.3.5.</p>
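<p>A quick way to double-check which annotation actually ended up on the Ingress object:</p>
<pre><code>kubectl get ingress ssl-iagree-ingress -o yaml | grep allow-http
kubectl describe ingress ssl-iagree-ingress
</code></pre>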
|
<p>I have a Kubernetes PetSet with name == <code>elasticsearch</code> and serviceName == <code>es</code>. It does create pods and, as expected, they have names like <code>elasticsearch-0</code> and <code>elasticsearch-1</code>. However, DNS does not seem to be working. <code>elasticsearch-0.es</code> does not resolve (nor does <code>elasticsearch-0.default</code>, etc.). If you look at the generated srv records they seem to be random instead of predictable:</p>
<pre><code># nslookup -type=srv elasticsearch
Server: 10.1.0.2
Address: 10.1.0.2#53
elasticsearch.default.svc.cluster.local service = 10 100 0 9627d60e.elasticsearch.default.svc.cluster.local.
</code></pre>
<p>Anyone have any ideas?</p>
<hr>
<p><strong>Details</strong></p>
<p>Here's the actual PetSet and Service definition:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
ports:
- name: rest
port: 9200
- name: native
port: 9300
clusterIP: None
selector:
app: elasticsearch
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
name: elasticsearch
spec:
serviceName: "es"
replicas: 2
template:
metadata:
labels:
app: elasticsearch
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
terminationGracePeriodSeconds: 0
containers:
- name: elasticsearch
image: 672129611065.dkr.ecr.us-west-2.amazonaws.com/elasticsearch:v1
ports:
- containerPort: 9200
- containerPort: 9300
volumeMounts:
- name: es-data
mountPath: /usr/share/elasticsearch/data
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ES_CLUSTER_NAME
value: EsEvents
volumeClaimTemplates:
- metadata:
name: es-data
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
</code></pre>
| <p>This was an issue of me mis-reading the documentation. <a href="http://kubernetes.io/docs/user-guide/petset/#peer-discovery" rel="noreferrer">The docs</a> say:</p>
<blockquote>
<p>The network identity has 2 parts. First, we created a headless Service that controls the domain within which we create Pets. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. As each pet is created, it gets a matching DNS subdomain, taking the form: $(petname).$(governing service domain), where the governing service is defined by the serviceName field on the Pet Set.</p>
</blockquote>
<p>I took this to mean that the value of the <code>serviceName</code> field is the value of the "governing service domain", but that's not what it means. It means that the value of <code>serviceName</code> must match the name of an existing headless service, and that service will be used as the governing service domain. If no such service exists you don't get an error - you just get random DNS names for your pets.</p>
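<p>So the fix is either to rename the headless Service to <code>es</code> or to set <code>serviceName: "elasticsearch"</code> in the PetSet. A sketch of verifying the per-pet records afterwards (this assumes the PetSet now points at the existing <code>elasticsearch</code> service in the <code>default</code> namespace):</p>
<pre><code># resolve the stable per-pet name from a throwaway pod
kubectl run -it --rm dns-check --image=busybox --restart=Never -- \
    nslookup elasticsearch-0.elasticsearch.default.svc.cluster.local
</code></pre>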
|
<p>We are in the process of move all our services over to Docker hosted on Google Container Engine. In the mean time we have have some services in docker and some not.</p>
<p>Within Kubernetes services discovery is easy via DNS, but how do I resolve services from outside my container cluster? ie, How do I connect from a Google Compute Engine instance to a service running in Kubernetes?</p>
| <p>The solution I have for now is to use the service clusterIP address.</p>
<p>You can see this IP address by executing <code>kubectl get svc</code>. This ip address is by default not static, but you can assign it when defining you service.</p>
<p>From the documentation:</p>
<blockquote>
<p>You can specify your own cluster IP address as part of a Service creation request. To do this, set the spec.clusterIP</p>
</blockquote>
<p>The services are accessed outside the cluster via IP address instead of DNS name.</p>
<h3>Update</h3>
<p>After deploying another cluster the above solution did not work. It turns out that the new IP range could not be reached and that you do need to add a network route.</p>
<p>You can get the cluster IP range by running
<code>$ gcloud container clusters describe CLUSTER NAME --zone ZONE</code></p>
<p>In the output the ip range is shown with the key <code>clusterIpv4Cidr</code>, in my case it was <code>10.32.0.0/14</code>.</p>
<p>Then create a route for that ip range that points to one of the nodes in your cluster. <code>$ gcloud compute routes create --destination-range 10.32.0.0/14 --next-hop-instance NODE0 INSTANCE NAME</code></p>
|
<p>I deployed Heapster with InfluxDB and Grafana by following the <a href="https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fkubernetes%2Fheapster%2Fblob%2Fmaster%2Fdocs%2Finfluxdb.md&sa=D&sntz=1&usg=AFQjCNEoYG3zwdLq02lqnQanNidvqs9fww" rel="nofollow noreferrer">heapster-influxDB guide</a>. When accessing the Grafana instance I couldn't see any data in the graphs (the Grafana service is exposed externally via NodePort). There are no errors in the Heapster and InfluxDB logs, as attached below.</p>
<p>What could be the issue here? I would really appreciate any feedback.</p>
<pre><code>$ kubectl version
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
</code></pre>
<p>Grafana dashboard</p>
<p><a href="https://i.stack.imgur.com/qEdRm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qEdRm.png" alt="enter image description here"></a></p>
<p>Grafana Datasource settings</p>
<p><a href="https://i.stack.imgur.com/QUb0k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QUb0k.jpg" alt="enter image description here"></a></p>
<p>heapster logs </p>
<pre><code>I0510 10:33:12.556974 1 heapster.go:60] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086
I0510 10:33:12.557111 1 heapster.go:61] Heapster version 1.1.0-beta1
I0510 10:33:12.557394 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version "v1"
I0510 10:33:12.557414 1 configs.go:61] Using kubelet port 10255
I0510 10:33:12.619309 1 influxdb.go:199] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s
I0510 10:33:12.619546 1 heapster.go:87] Starting with InfluxDB Sink
I0510 10:33:12.619601 1 heapster.go:87] Starting with Metric Sink
I0510 10:33:12.637683 1 heapster.go:166] Starting heapster on port 8082
I0510 10:33:35.000319 1 manager.go:79] Scraping metrics start: 2016-05-10 10:33:00 +0000 UTC, end: 2016-05-10 10:33:30 +0000 UTC
I0510 10:33:35.292539 1 manager.go:152] ScrapeMetrics: time: 292.067849ms size: 78
I0510 10:33:35.300239 1 influxdb.go:177] Created database "k8s" on influxDB server at "monitoring-influxdb:8086"
I0510 10:34:05.000270 1 manager.go:79] Scraping metrics start: 2016-05-10 10:33:30 +0000 UTC, end: 2016-05-10 10:34:00 +0000 UTC
I0510 10:34:05.274965 1 manager.go:152] ScrapeMetrics: time: 274.615057ms size: 78
I0510 10:34:35.000246 1 manager.go:79] Scraping metrics start: 2016-05-10 10:34:00 +0000 UTC, end: 2016-05-10 10:34:30 +0000 UTC
I0510 10:34:35.247562 1 manager.go:152] ScrapeMetrics: time: 247.236807ms size: 78
I0510 10:35:05.000265 1 manager.go:79] Scraping metrics start: 2016-05-10 10:34:30 +0000 UTC, end: 2016-05-10 10:35:00 +0000 UTC
</code></pre>
<p>Influxdb logs
<a href="https://drive.google.com/open?id=0B4f4RNm4mfqWZGtqbVBnWUJ2QjA" rel="nofollow noreferrer">https://drive.google.com/open?id=0B4f4RNm4mfqWZGtqbVBnWUJ2QjA</a></p>
<p>Refer the yaml files I used.
<a href="https://drive.google.com/open?id=0B4f4RNm4mfqWY2pZRmViWHFuMFk" rel="nofollow noreferrer">https://drive.google.com/open?id=0B4f4RNm4mfqWY2pZRmViWHFuMFk</a></p>
<p>InfluxDB data
<a href="https://drive.google.com/open?id=0B4f4RNm4mfqWQTFxcFVhdko2Vms" rel="nofollow noreferrer">https://drive.google.com/open?id=0B4f4RNm4mfqWQTFxcFVhdko2Vms</a></p>
<p>Heapster api data
<a href="https://drive.google.com/open?id=0B4f4RNm4mfqWQVhEZ3oxdEs1VTA" rel="nofollow noreferrer">https://drive.google.com/open?id=0B4f4RNm4mfqWQVhEZ3oxdEs1VTA</a></p>
| <p>It turns out that the <strong>Heapster</strong> master branch version is not compatible with some <strong>Kubernetes</strong> clusters. I tried the latest beta release below and it works:</p>
<pre><code>https://github.com/kubernetes/heapster/releases/tag/v1.2.0-beta.0
</code></pre>
|
<p>I have configured one Ubuntu system (192.168.1.2) to be a master and minion.</p>
<p>Another Ubuntu machine (192.168.1.3) is a minion only. I performed an upgrade on the Ubuntu machine that was both master and minion and rebooted the system.</p>
<p>Now when I perform:</p>
<p><code>kubectl get nodes</code></p>
<p>The node (192.168.1.2) is down. </p>
<p>I went through the documentation but could not find anything that explains how to add back the lost node.</p>
<p>I need some help to understand how to bring the node back into the Kubernetes cluster. Is there any script to do so?</p>
<p>Thanks</p>
| <p>I am not aware of any script, but here are a few basic things that might help (a consolidated sketch follows the list):</p>
<ol>
<li>Please ensure that <code>kubelet</code> and <code>docker</code> services are running on the node in question.</li>
<li>Run <code>kubectl describe node <nodename></code> to get additional details.</li>
<li>Get the logs of <code>kubelet</code> service if it is already running by executing <code>journalctl -u kubelet</code></li>
</ol>
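<p>Putting those together, a consolidated sketch (this assumes the node uses systemd-managed services; adjust for upstart if your Ubuntu release predates systemd):</p>
<pre><code># on the affected node (192.168.1.2)
sudo systemctl status docker kubelet
sudo systemctl restart docker kubelet
journalctl -u kubelet --since "1 hour ago"

# back on the master, confirm the node reports Ready again
kubectl get nodes
kubectl describe node 192.168.1.2
</code></pre>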
|
<p>I have deployed a Service into a kubernetes cluster and it looks like so:</p>
<pre><code>$ kubectl get svc my-service
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-service 192.168.202.23 <none> 8080/TCP name=my-service 38d
</code></pre>
<p>The spec part of YAML config looks like so:</p>
<pre><code>"spec": {
"ports": [
{
"name": "http-port",
"protocol": "TCP",
"port": 8080,
"targetPort": 8080
}
],
"selector": {
"name": "my-service"
},
"clusterIP": "192.168.202.23",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<p>Now, I want to expose this service to be externally accessible using LoadBalancer. Using kubectl expose service gives an error like so:</p>
<pre><code>$ kubectl expose service my-service --type="LoadBalancer"
Error from server: services "my-service" already exists
</code></pre>
<p>Is it not possible to 'edit' an existing deployed service and make it externally accessible?</p>
| <p>The type of the service that you have created is <code>ClusterIP</code> which is not visible outside the cluster. If you edit the service and change the <code>type</code> field to either <code>NodePort</code>, or <code>LoadBalancer</code>, it would expose it.</p>
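<p>One way to change it in place (a sketch; <code>kubectl edit svc my-service</code> works just as well, and an external IP will only be provisioned if the cluster runs on a cloud provider that supports load balancers):</p>
<pre><code># switch the existing service to type LoadBalancer without recreating it
kubectl patch service my-service -p '{"spec": {"type": "LoadBalancer"}}'

# watch for the external IP to be assigned
kubectl get svc my-service -w
</code></pre>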
<p>Documentation on what those service types are and what they mean is at:
<a href="http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types" rel="noreferrer">http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types</a></p>
|
<p>When deleting Openshift apps, I have noticed that deleting its resources (e.g. deploymentConfig) before scaling the app's replicas can cause some odd behavior. Is it advisable to always scale down first (and why)? </p>
| <p>Yes. I'd say that it is advisable, especially if you're having odd behavior occur because of it. However, it isn't necessary. </p>
<p>Openshift does a great job of managing an application and all of its resources. Unfortunately, it doesn't do a great 'clean up' job. For instance, deleting the deployment config before deleting all relative deployed PODs may lead to orphaned deployments.</p>
<p>To avoid this, first scale down your app (like you suggested in your question):</p>
<pre><code>oc scale dc <app-name> --replicas=0
</code></pre>
<p>Then you can delete all resources in one fell swoop with:</p>
<pre><code>oc delete all --selector app=appname
</code></pre>
<p>This should do the trick. Personally, creating a script to do this helps save time. Here's a simple sample one:</p>
<pre><code>#!/bin/bash
# scale down app to 0
oc scale dc $1 --replicas=0
# delete all resources
oc delete all --selector app=$1
</code></pre>
<p>This would allow you to pass in an actual variable. Let's say you named this script 'oc-delete-app' and the app you wanted to delete was named 'hello-world'. You'd run:</p>
<pre><code>./oc-delete-app hello-world
</code></pre>
|
<p>Following the instructions in the book "Kubernetes Cookbook", I created a Kubernetes cluster with one master and two nodes:</p>
<pre><code>master: 198.11.175.18
etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler
minion:
etcd, flannel, kubelet, kube-proxy
minion1: 120.27.94.15
minion2: 114.215.142.7
</code></pre>
<p>OS version is:</p>
<pre><code>[user1@iZu1ndxa4itZ ~]$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.2.1511 (Core)
Release: 7.2.1511
Codename: Core
[user1@iZu1ndxa4itZ ~]$ uname -a
Linux iZu1ndxa4itZ 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>Kuberneters version is:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}
</code></pre>
<p>I can get the status of the two nodes by issuing kubectl on the master.</p>
<pre><code>[user1@iZu1ndxa4itZ ~]$ kubectl get nodes
NAME STATUS AGE
114.215.142.7 Ready 23m
120.27.94.15 Ready 14h
</code></pre>
<p>The components on the master work well:</p>
<pre><code> [user1@iZu1ndxa4itZ ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
</code></pre>
<p>But after starting an nginx container, there is no pod status:</p>
<pre><code>[user1@iZu1ndxa4itZ ~]$ kubectl run --image=nginx nginx-test
deployment "nginx-test" created
[user1@iZu1ndxa4itZ ~]$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-first-nginx 2 0 0 0 20h
my-first-nginx01 1 0 0 0 20h
my-first-nginx02 1 0 0 0 19h
nginx-test 1 0 0 0 5h
[user1@iZu1ndxa4itZ ~]$ kubectl get pods
</code></pre>
<p>Any clues to help diagnose the problem? Thanks.</p>
<p>BTW, I attempted to run two Docker containers manually on different nodes, and the two containers can communicate with each other using ping.</p>
<p><strong><em>Update 2016-08-19</em></strong></p>
<p>Found a clue in the service logs of kube-apiserver and kube-controller-manager; the problem may be caused by an incorrect security configuration:</p>
<p>sudo service kube-apiserver status -l</p>
<pre><code> Aug 19 14:59:53 iZu1ndxa4itZ kube-apiserver[21393]: E0819 14:59:53.118954 21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
Aug 19 15:00:08 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:08.120253 21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
Aug 19 15:00:23 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:23.121345 21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
Aug 19 15:00:38 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:38.122638 21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
</code></pre>
<p>sudo service kube-controller-manager status -l</p>
<pre><code> Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:01:52.138742 21415 replica_set.go:446] unable to create pods: pods "my-first-nginx02-1004561501-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:01:52.138799 21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"my-first-nginx02-1004561501", UID:"ba35be11-652a-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"120", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "my-first-nginx02-1004561501-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:01:52.144583 21415 replica_set.go:446] unable to create pods: pods "my-first-nginx-3671155609-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:01:52.144657 21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"my-first-nginx-3671155609", UID:"d6c8288c-6529-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"54", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "my-first-nginx-3671155609-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Aug 19 15:04:17 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:04:17.149320 21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-test-863723326", UID:"624ed0ea-65a2-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"12247", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-test-863723326-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Aug 19 15:04:17 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:04:17.148513 21415 replica_set.go:446] unable to create pods: pods "nginx-test-863723326-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service accoun
</code></pre>
| <p>Resolved the problem with following procedure:</p>
<pre><code># Generate a key used for signing and verifying service account tokens
openssl genrsa -out /tmp/service_account.key 2048
sudo cp /tmp/service_account.key /etc/kubernetes/service_account.key

# Disable the secure port (no apiserver.crt available) and point the apiserver at the key so it can verify tokens
sudo vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--secure-port=0 --service-account-key-file=/etc/kubernetes/service_account.key"
sudo service kube-apiserver restart

# Point the controller-manager at the same key so it can create/sign the tokens
sudo vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/service_account.key"
sudo service kube-controller-manager restart
</code></pre>
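<p>As a quick sanity check (not part of the original fix, just how I would verify it), the default service account should get a token secret shortly after the controller-manager restarts with the signing key:</p>

<pre><code># The "secrets:" list should now contain an entry like default-token-xxxxx
kubectl get serviceaccounts default -o yaml

# The token secret itself should show up here as well
kubectl get secrets
</code></pre>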
|
<p>I would like to be able to see all of the various things that happened to a kube cluster on a timeline, including when nodes were found to be dead, when new nodes were added, when pods crashed and when they were restarted.</p>
<p>So far the best that we have found is <code>kubectl get event</code> but that seems to have a few limitations:</p>
<ul>
<li>it doesn't go back in time that far (I'm not sure how far it goes back. A day?)</li>
<li>it combines similar events and orders the resulting list by the time of the latest event in each group. This makes it impossible to know what happened during some time range since events in that range may have been combined with later events outside the range.</li>
</ul>
<p>One idea that I have is to write a pod that will use the API to watch the stream of events and log them to a file. This would let us control retention and it seems that events that occur while we are watching will not be combined, solving the second problem as well.</p>
<p>What are other people doing about this?</p>
| <ul>
<li><p>My understanding is that Kubernetes itself dedups events, documented here:
<a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/event_compression.md" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/design/event_compression.md</a>
Once that happens, there is no way to get the individual events back.</p>
<p>See <a href="https://github.com/kubernetes/kubernetes/issues/36304" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/36304</a> for complaints how that loses info. <a href="https://github.com/kubernetes/kubernetes/pull/46034" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/46034</a> at least improved the message. See also <a href="https://github.com/kubernetes/enhancements/pull/1291" rel="noreferrer">https://github.com/kubernetes/enhancements/pull/1291</a> KEP for recent discussion and proposal to improve usability in kubectl.</p></li>
<li><p>How long are events retained? Their "time-to-live" is apparently controlled by the kube-apiserver <code>--event-ttl</code> option, which defaults to 1 hour:
<a href="https://github.com/kubernetes/kubernetes/blob/da53a247633/cmd/kube-apiserver/app/options/options.go#L71-L72" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/da53a247633/cmd/kube-apiserver/app/options/options.go#L71-L72</a></p>
<p>You can raise this; it might require more resources for <code>etcd</code>. From what I saw in some 2015 GitHub discussions, the event TTL used to be 2 days, and events were the main thing stressing <code>etcd</code>...</p></li>
</ul>
<p>In a pinch, it might be possible to figure out what happened earlier from various logs, especially the kubelet logs?</p>
<h3>Saving events</h3>
<ul>
<li><p>Running <code>kubectl get event -o yaml --watch</code> into a persistent file sounds like a simple thing to do. I <em>think</em> when you watch events as they arrive, you see them pre-dedup.</p></li>
<li><p>Heapster can send events to some of the supported sinks:
<a href="https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md" rel="noreferrer">https://github.com/kubernetes/heapster/blob/master/docs/sink-configuration.md</a></p></li>
<li><p><a href="https://github.com/heptiolabs/eventrouter" rel="noreferrer">Eventrouter</a> can send events to various sinks: <a href="https://github.com/heptiolabs/eventrouter/tree/master/sinks" rel="noreferrer">https://github.com/heptiolabs/eventrouter/tree/master/sinks</a></p></li>
</ul>
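<p>As a rough sketch of the "watch events into a persistent file" idea (the file path below is just an assumption), something like this, run from a pod or a supervised process, captures events before they age out or get compressed further:</p>

<pre><code># Stream events from all namespaces as they arrive and append them to a log file
kubectl get events --all-namespaces -o yaml --watch >> /var/log/k8s-events.yaml
</code></pre>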
|
<p>I installed minikube as instructed here <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow">https://github.com/kubernetes/minikube/releases</a>
and started it with a simple <code>minikube start</code> command.</p>
<p>But the next step, which is as simple as <code>kubectl get pods --all-namespaces</code> fails with</p>
<p><code>Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout</code></p>
<p>What did I miss?</p>
| <p>I ran into the same issue on my Mac. Basically, I uninstalled both Minikube and kubectl and reinstalled them as follows:</p>
<ol>
<li>Installed Minikube.</li>
</ol>
<p>curl -Lo minikube <a href="https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-darwin-amd64" rel="nofollow">https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-darwin-amd64</a> && chmod +x minikube && sudo mv minikube /usr/local/bin/</p>
<ol start="2">
<li>Installed Kubectl.</li>
</ol>
<p>curl -Lo kubectl <a href="http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/darwin/amd64/kubectl" rel="nofollow">http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/darwin/amd64/kubectl</a> && chmod +x kubectl && sudo mv kubectl /usr/local/bin/</p>
<ol start="3">
<li>Start a cluster, run the command:</li>
</ol>
<p>minikube start</p>
<ol start="4">
<li>Minikube will also create a “minikube” context, and set it to default in kubectl. To switch back to this context later, run this command:</li>
</ol>
<p>kubectl config use-context minikube</p>
<ol start="5">
<li>Now to get the list of all pods run the command:</li>
</ol>
<p>kubectl get pods --all-namespaces</p>
<p>Now you should be able to get the list of pods. Also make sure that you don't have a firewall within your network that blocks the connections.</p>
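<p>To double-check that the cluster is actually reachable after these steps (assuming the "minikube" context is active):</p>

<pre><code># Confirm the VM is up and kubectl points at it
minikube status
kubectl config current-context   # should print "minikube"
kubectl cluster-info             # should show the master at https://192.168.99.100:8443
</code></pre>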
|
<p>I am trying to mount an NFS volume to my pods but with no success.</p>
<p>I have a server running the NFS mount point. When I try to connect to it from some other running server,</p>
<p><code>sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt</code> works fine</p>
<p>Another thing worth mentioning: when I remove the volume from the deployment, the pod runs fine. I can log into it and telnet to 10.0.0.4 on ports 111 and 2049 successfully, so there really doesn't seem to be any communication problem.</p>
<p>as well as: </p>
<blockquote>
<pre><code>showmount -e 10.0.0.4
Export list for 10.0.0.4:
/export/drive 10.0.0.0/16
/export 10.0.0.0/16
</code></pre>
</blockquote>
<p>So I can assume that there are no network or configuration problems between the server and the client (I am using Amazon, and the server I tested from is in the same security group as the k8s minions).</p>
<p>P.S.: the NFS server is a simple Ubuntu machine with a 50 GB disk.</p>
<p>Kubernetes v1.3.4</p>
<p>So I start creating my PV</p>
<blockquote>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 50Gi
accessModes:
- ReadWriteMany
nfs:
server: 10.0.0.4
path: "/export"
</code></pre>
</blockquote>
<p>And my PVC</p>
<blockquote>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nfs-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
</code></pre>
</blockquote>
<p>Here is how kubectl describes them:</p>
<blockquote>
<pre><code> Name: nfs
Labels: <none>
Status: Bound
Claim: default/nfs-claim
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 50Gi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.0.0.4
Path: /export
ReadOnly: false
No events.
</code></pre>
</blockquote>
<p>AND</p>
<blockquote>
<pre><code> Name: nfs-claim
Namespace: default
Status: Bound
Volume: nfs
Labels: <none>
Capacity: 0
Access Modes:
No events.
</code></pre>
</blockquote>
<p>Pod deployment:</p>
<blockquote>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mypod
labels:
name: mypod
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
name: mypod
labels:
# Important: these labels need to match the selector above, the api server enforces this constraint
name: mypod
spec:
containers:
- name: abcd
image: irrelevant to the question
ports:
- containerPort: 80
env:
- name: hello
value: world
volumeMounts:
- mountPath: "/mnt"
name: nfs
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs-claim
</code></pre>
</blockquote>
<p>When I deploy my pod I get the following:</p>
<blockquote>
<pre><code>Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-claim
ReadOnly: false
default-token-6pd57:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6pd57
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13m 13m 1 {default-scheduler } Normal Scheduled Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal
11m 7s 6 {kubelet ip-10-0-0-157.us-west-2.compute.internal} Warning FailedMount Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
11m 7s 6 {kubelet ip-10-0-0-157.us-west-2.compute.internal} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
</code></pre>
</blockquote>
<p>I've tried everything I know and everything I can think of. What am I missing or doing wrong here?</p>
| <p>I tested versions 1.3.4 and 1.3.5 of Kubernetes and the NFS mount didn't work for me. Later I switched to 1.2.5, and that version gave me more detailed info (<code>kubectl describe pod ...</code>). It turned out that <code>nfs-common</code> is missing from the hyperkube image. After I added <code>nfs-common</code> to all container instances based on the hyperkube image on the master and worker nodes, the NFS share started working normally (the mount was successful). So that's the case here. I tested it in practice and it solved my problem.</p>
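<p>For reference, adding the client tooling looks roughly like this; a minimal sketch assuming a Debian/Ubuntu based node or image:</p>

<pre><code># Install the NFS client utilities the kubelet shells out to when mounting NFS volumes
apt-get update
apt-get install -y nfs-common
</code></pre>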
|
<p>I'm trying to learn Kubernetes, so I've deployed a single-node cluster of Kubernetes 1.3.5 on an Ubuntu 14.04 server.</p>
<p>When I try to run a docker image <code>nginx</code> I get the error message <code>Failed to start with docker id [id-removed] with error: API error (400): {"message":"starting container with HostConfig was deprecated since v1.10 and removed in v1.12"}</code></p>
<p><code>kubectl version</code> output:</p>
<pre><code>Client Version: version.Info{Major:"0", Minor:"19", GitVersion:"v0.19.3", GitCommit:"3103c8ca0f24514bc39b6e2b7d909bbf46af8d11", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"19", GitVersion:"v0.19.3", GitCommit:"3103c8ca0f24514bc39b6e2b7d909bbf46af8d11", GitTreeState:"clean"}
</code></pre>
<p><code>docker --version</code> output:</p>
<pre><code>Docker version 1.12.0, build 8eab29e
</code></pre>
| <p>From your <code>kubectl version</code> output, you are running Kubernetes v0.19.3, which is not compatible with docker v1.12. You may want to re-build/re-deploy a newer version of Kubernetes.</p>
<p>The version you wanted to run, "v1.3.5", should be compatible with docker v1.12.</p>
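<p>A minimal sketch of checking what is actually installed and fetching a client that matches the intended v1.3.5 server; the download URL follows the usual Kubernetes release layout, so treat it as an assumption, and the server components also need to be redeployed at v1.3.5:</p>

<pre><code># See which kubectl binary is being picked up -- an old v0.19.3 install may be shadowing a newer one
which kubectl
kubectl version

# Fetch a v1.3.5 client
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.3.5/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
</code></pre>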
|
<p>I have been digging through the Kubernetes documentation for hours. I understand the core design, and the notion of services, controllers, pods, etc. </p>
<p>What I don't understand, however, is the process by which I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc., without running a ton of CLI commands.</p>
<p><strong><em>Is there <a href="https://docs.docker.com/compose/overview/" rel="noreferrer">docker-compose</a> functionality for Kubernetes?</em></strong></p>
<p>I want my application to be defined in git and version controlled, without relying on manual CLI interactions.</p>
<p>Is this possible to do in a concise way? Is there a reference that is more clear than the official documentation?</p>
| <p>If you're still looking, maybe this tool can help: <a href="https://github.com/kelseyhightower/compose2kube" rel="noreferrer">https://github.com/kelseyhightower/compose2kube</a></p>
<p>You can create a compose file:</p>
<pre><code># sample compose file with 3 services
web:
image: nginx
ports:
- "80"
- "443"
database:
image: postgres
ports:
- "5432"
cache:
image: memcached
ports:
- "11211"
</code></pre>
<p>Then use the tool to convert it to kubernetes objects:</p>
<pre><code>compose2kube -compose-file docker-compose.yml -output-dir output
</code></pre>
<p>Which will create these files:</p>
<pre><code>output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
</code></pre>
<p>Then you can use <code>kubectl</code> to apply them to kubernetes.</p>
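<p>The last step ("use kubectl to apply them") would look roughly like this, assuming the <code>output</code> directory from above:</p>

<pre><code># Create the replication controllers generated by compose2kube
kubectl create -f output/

# Watch the resulting controllers and pods come up
kubectl get rc,pods
</code></pre>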
|
<p>About 7 hours ago I was working with kubectl without problems. Now (after a few hours of sleep and a reboot of my computer) all <code>kubectl get</code> commands give me this error:</p>
<pre><code>Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>I did not do anything since it last worked, besides shutting down my computer.</p>
<p>Since I'm new to Kubernetes and GCE, I need a few hints on what this could be and where to look.</p>
| <p>So I found the problem.</p>
<p>kubectl was set to use the wrong context and cluster (I had created a GC project, deleted it again, and then created a new project).</p>
<p>I got the new credentials from GC:</p>
<pre><code>gcloud container clusters get-credentials CLUSTER_NAME_FROM_GC
</code></pre>
<p>To get the new context name and cluster name I used:</p>
<pre><code>kubectl config view
</code></pre>
<p>And to update the current context and cluster I used:</p>
<pre><code>kubectl config set-cluster CLUSTER_NAME_FROM_CREDENTIALS
kubectl config set-context CONTEXT_NAME_FROM_CREDENTIALS
</code></pre>
<p>This fixed the problem.</p>
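<p>For anyone hitting the same thing, this is how I would verify and switch the active context; the context name is whatever <code>kubectl config view</code> shows for the new credentials:</p>

<pre><code># Show which context kubectl is currently using
kubectl config current-context

# Switch to the context created by "gcloud container clusters get-credentials"
kubectl config use-context CONTEXT_NAME_FROM_CREDENTIALS
</code></pre>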
|
<p>I'm using the preStop command to gracefully shut down my server application when I delete a pod. What is the state of the pod/container when it runs the preStop command? For example, does it stop the network interfaces before running the preStop command?</p>
<pre><code>lifecycle:
preStop:
exec:
command: ["kill", "-SIGTERM", "`pidof java`"]
</code></pre>
| <p>When a pod should be terminated, Kubernetes does the following:</p>
<ol>
<li>Switch the Pod to the <code>Terminating</code> state</li>
<li>Invoke the <code>preStop</code> hook (if any)</li>
<li>Once the <code>preStop</code> hook ends, it sends a <code>SIGTERM</code> to the main process in the container (PID 1)</li>
<li>If the container doesn't terminate within the termination grace period (30 seconds by default - starts counting at point #1), Kubernetes will send a <code>SIGKILL</code> to the container's main process to forcibly stop it</li>
</ol>
<p>More details here:
<a href="https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html" rel="nofollow">Graceful shutdown of Kubernetes Pods</a></p>
|
<p>I am unable to add a label to my kubernetes pod. Why is this not working?</p>
<pre><code>$ kubectl describe pods secure-monolith | grep Label
Labels: app=monolith
$ kubectl label pods secure-monolith "secure=enabled"
pod "secure-monolith" labeled
$ kubectl describe pods secure-monolith | grep Label
Labels: app=monolith
$ kubectl label pods secure-monolith "secure=enabled"
'secure' already has a value (enabled), and --overwrite is false
</code></pre>
<p>As you can see, it says the label was successfully added; however, the label does not appear when "describing" the pod, and it also cannot be added again.</p>
| <p>You are grepping through the <code>describe</code> output, but only the first line of the labels description contains the <code>Label</code> string.</p>
<p>Labels output for two labels looks as follows:</p>
<pre><code>Labels: a=b
c=d
</code></pre>
<p>So <code>secure=enabled</code> is there, you just filtered it out.</p>
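<p>Two ways to see every label without the grep pitfall (standard kubectl options):</p>

<pre><code># Show all labels in a dedicated column
kubectl get pod secure-monolith --show-labels

# Or keep grep, but include the continuation lines after "Labels:"
kubectl describe pod secure-monolith | grep -A 5 Labels
</code></pre>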
|
<p>I have a kubernetes cluster running on GCE.</p>
<p>I created a setup in which I have 2 <em>pods</em> <code>glusterfs-server-1</code> and <code>glusterfs-server-2</code> that are my gluster server.</p>
<p>The 2 <code>glusterfsd</code> daemon correctly communicate and I am able to create replicated volumes, write files to them and see the files correctly replicated on both pods.</p>
<p>I also have 1 <em>service</em> called <code>glusterfs-server</code> that automatically balances the traffic between my 2 glusterfs pods.</p>
<p>From inside another pod, I can issue <code>mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume</code> and everything works perfectly.</p>
<p>Now, what I really want is to be able to use the <code>glusterfs</code> volume type inside my .yaml files when creating a container:</p>
<pre><code>...truncated...
spec:
  volumes:
  - name: myvolume
    glusterfs:
      endpoints: glusterfs-server
      path: myvolume
...truncated...
</code></pre>
<p>Unfortunately, this doesn't work. I was able to find out why it doesn't work:</p>
<p>When connecting directly to a kubernetes <em>node</em>, issuing a <code>mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume</code> does not work, this is because from my node's perspective <code>glusterfs-server</code> does not resolve to any IP address. (That is <code>getent hosts glusterfs-server</code> returns nothing)</p>
<p>And also, due to how glusterfs works, even directly using the service's IP will fail as glusterfs will still eventually try to resolve the name <code>glusterfs-server</code> (and fail).</p>
<p>Now, just for fun and to validate that this is the issue, I edited my node's <code>resolv.conf</code> (by putting in my kube-dns IP address and search domains) so that it would correctly resolve my pod and service IP addresses. I was then finally able to successfully issue <code>mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume</code> on the node. I was then also able to create a pod using a glusterfs volume (using the PodSpec above).</p>
<p>Now, I'm fairly certain modifying my node's <code>resolv.conf</code> is a terrible idea: since Kubernetes has the notion of namespaces, if 2 services in 2 different namespaces share the same name (say, glusterfs-service), a <code>getent hosts glusterfs-service</code> would resolve to 2 different IPs living in 2 different namespaces.</p>
<p>So my question is:</p>
<p><em>What can I do for my node to be able to resolve my pods/services IP addresses?</em></p>
| <p>You can modify <code>resolv.conf</code> and use the full service names to avoid collisions. They usually look like this: <code>service_name.default.svc.cluster.local</code> and <code>service_name.kube-system.svc.cluster.local</code>, or whatever the namespace is named.</p>
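<p>For illustration, the node-side configuration would end up looking something like this; the kube-dns IP and cluster domain below are placeholders for whatever this cluster actually uses:</p>

<pre><code># /etc/resolv.conf on the node
nameserver 10.0.0.10      # the kube-dns service ClusterIP (placeholder)
search svc.cluster.local cluster.local

# Mount using the fully qualified service name to avoid cross-namespace collisions
mount -t glusterfs glusterfs-server.default.svc.cluster.local:/myvolume /mnt/myvolume
</code></pre>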
|
<p>I have a composite service S.c which consumes 2 atomic services, S.a and S.b, where all three services are running in a Kubernetes cluster. Which would be the better pattern?</p>
<p>1) Create Sa and Sb as headless services and let Sc integrate with them via an external load balancer like NGINX+ (which uses a DNS resolver to maintain an updated set of backend pods)</p>
<p>2) Create Sa and Sb with a clusterIP and let Sc access/resolve them via cluster DNS (the skyDNS add-on), which will internally leverage iptables-based load balancing to the pods.</p>
<p>Note: my k8s cluster is running on a custom solution (on-premise VMs). We have many composite services which consume 1 to many atomic services (like the example above).</p>
<p>Edit: In a few scenarios I would also need to expose services to the external network; for example, Sb would need to be accessible both from Sc and from outside. In that case it would make more sense to create Sb as a headless service, otherwise the DNS resolver would always return only the clusterIP address and all external requests would also get routed to the clusterIP address. My challenge is that the two scenarios (intra vs. inter) conflict with each other.</p>
<p>example: nginx-service (which has clusterIP) and nginx-headless-service (headless)</p>
<pre><code> / # nslookup nginx-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 172.16.50.29 nginx-service.default.svc.cluster.local
/ # nslookup nginx-headless-service
Server: 172.16.48.11
Address 1: 172.16.48.11 kube-dns.kube-system.svc.cluster.local
Name: nginx-headless-service
Address 1: 11.11.1.13 wrkfai1asby2.my-company.com
Address 2: 11.11.1.15 imptpi1asby.my-company.com
Address 3: 11.11.1.4 osei623a-sby.my-company.com
Address 4: 11.11.1.6 osei511a-sby.my-company.com
Address 5: 11.11.1.7 erpdbi02a-sbyold.my-company.com
</code></pre>
| <p>Using DNS + cluster IPs is the simpler approach, and doesn't require exposing your services to the public internet. Unless you want specific load-balancing features from nginx, I'd recommend going with #2.</p>
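<p>For the "Sb also needs to be reachable from outside" edit, one pattern that keeps option #2 intact is to keep the ClusterIP service for in-cluster consumers and add a second service of type NodePort (or LoadBalancer, where available) over the same pods. A minimal sketch with assumed names, labels and ports:</p>

<pre><code># Internal service: Sc resolves sb.default.svc.cluster.local and gets the ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: sb
spec:
  selector:
    app: sb
  ports:
  - port: 80
    targetPort: 8080
---
# External service: same pods, exposed on a fixed node port for traffic from outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: sb-external
spec:
  type: NodePort
  selector:
    app: sb
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
</code></pre>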
|
<p>I know that Kubernetes has 5 modules: <code>kube-apiserver, kube-controller-manager, kube-scheduler</code> on master nodes and <code>kubelet, kube-proxy</code> on minion nodes. How do they communicate with etcd? Will they all query or set data in etcd, or do only some of them?</p>
<p><a href="http://kubernetes.io/docs/admin/etcd/" rel="nofollow">The docs</a> said that etcd is only accessed by <code>kube-apiserver</code>:</p>
<blockquote>
<p>Access Control: give only kube-apiserver read/write access to etcd.
You do not want apiserver’s etcd exposed to every node in your cluster
(or worse, to the internet at large), because access to etcd is
equivalent to root in your cluster.</p>
</blockquote>
<p>But some blogs and architecture figures from Google believe etcd is also accessed by other modules, like what <a href="http://cloud-mechanic.blogspot.jp/2014/09/kubernetes-under-hood-etcd.html" rel="nofollow">this blog</a> said:</p>
<blockquote>
<p>The etcd services are the communications bus for the Kubernetes
cluster. The app-service posts cluster state changes to the etcd
database in response to commands and queries. The kubelets read the
contents of the etcd database and act on any changes they detect.</p>
</blockquote>
<p>So which is correct? Thanks!</p>
| <p>The docs you linked to are correct -- only the apiserver communicates directly with etcd. The rest of the system components communicate with etcd <em>indirectly</em> through the apiserver. </p>
<p>Also note that the blog post you linked to is just about 2 years old, and it may have been accurate at the time, but the docs you linked to are for the current release of Kubernetes. </p>
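<p>One way to see the indirection in practice: the node components only ever talk to the apiserver's REST/watch API, never to etcd directly. A rough sketch (assuming the apiserver's insecure local port 8080 is enabled):</p>

<pre><code># A watch stream like the one the kubelet and controllers use, but served by the apiserver:
curl -s "http://localhost:8080/api/v1/pods?watch=true"

# Only the apiserver itself is configured with etcd endpoints, e.g.:
#   kube-apiserver --etcd-servers=http://127.0.0.1:2379 ...
</code></pre>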
|