prompt | response
---|---|
<p>I want to implement a simple Layer 7 Load Balancer in my kubernetes cluster which will allow me to expose kubernetes services to external consumers.</p>
<p>I will create a simple HAProxy-based container which will observe kubernetes services and their respective endpoints and reload its backend/frontend configuration (complemented with a SYN-eating rule during reloads).</p>
<p>This will allow me to access kubernetes services like SVCa, SVCb, SVCc over</p>
<pre><code>http://load-balancer-ip:port/SVCa -------> Pod endpoints.....
http://load-balancer-ip:port/SVCb -------> Pod endpoints.....
http://load-balancer-ip:port/SVCc -------> Pod endpoints.....
</code></pre>
<p>How would the above approach work compared to:</p>
<p>(1) ha-proxy forwarding all the requests to the clusterIP addresses of the kubernetes services.</p>
<pre><code> http://load-balancer-ip:port/SVCa ------->clusterIP-SVCa
http://load-balancer-ip:port/SVCb ------->clusterIP-SVCb
http://load-balancer-ip:port/SVCc ------->clusterIP-SVCc
</code></pre>
<p>(2) ha-proxy load-balancing requests to worker-node-ip:port obtained by creating NodePort type services</p>
<pre><code>http://load-balancer-ip:port/SVCa --------> node1:p1, node2:p1, node3:p1
http://load-balancer-ip:port/SVCb --------> node1:p2, node2:p2, node3:p2
http://load-balancer-ip:port/SVCc --------> node1:p3, node2:p3, node3:p3
</code></pre>
<p>Note: My k8s cluster is running on a custom solution (on-premise VMs).</p>
| <p>I think an nginx IngressController can work better in this case.
You only have to set the backend service and the hostname inside the ingress definition.</p>
<p>Take a look here:
<a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a></p>
|
<p>When creating a service of type <a href="http://kubernetes.io/docs/user-guide/services/#type-loadbalancer" rel="nofollow">LoadBalancer</a> on AWS, Kubernetes auto-provisions an elastic load balancer. I am wondering how I can automatically associate that load balancer with a Route 53 alias?</p>
<p>Alternatively, can I make Kubernetes re-use an elastic load balancer (which I've assigned a Route 53 alias to)?</p>
| <p>There is a project that accomplishes this: <a href="https://github.com/wearemolecule/route53-kubernetes" rel="nofollow">https://github.com/wearemolecule/route53-kubernetes</a></p>
<p>A side note here: there are some issues with being able to select the TLD that this uses; it seems to use the first matching public record set.</p>
<p>Also, this doesn't work with internal ELBs. There is an issue opened under the project for this request.</p>
|
<p>I am running a kubernetes cluster on google cloud (version 1.3.5).
I found a <a href="https://github.com/kubernetes/contrib/tree/master/pets" rel="nofollow">redis.yaml</a>
that uses a petset to create a redis cluster, but when I run <code>kubectl create -f redis.yaml</code> I get the following error:
<code>error validating "redis.yaml": error validating data: the server could not find the requested resource (get .apps); if you choose to ignore these errors, turn validation off with --validate=false</code></p>
<p>I can't find why I get this error or how to solve it.</p>
| <p>PetSet is currently an alpha feature (which you can tell because the <code>apiVersion</code> in the linked yaml file is <code>apps/v1alpha1</code>). It may not be obvious, but alpha features are not supported in Google Container Engine. </p>
<p>As described in <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api_changes.md#alpha-beta-and-stable-versions">api_changes.md</a>, alpha level API objects are disabled by default, have no guarantees that they will exist in future versions, can break compatibility with older versions at any time, and may destabilize the cluster. </p>
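<p>Abridged, the top of a manifest like the linked one looks as follows (the object name here is illustrative); the alpha API group is the giveaway:</p>
<pre><code># apps/v1alpha1 marks this as an alpha API group
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis        # illustrative name
</code></pre>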
|
<p>I'm trying to create two kubernetes services, one which is a loadbalancer with a cluster IP, and another which is a headless (no cluster IP), but instead returns an A record round robin collection of the pod ip addresses (as it should do, according to <a href="http://kubernetes.io/docs/user-guide/services/#headless-services" rel="nofollow">http://kubernetes.io/docs/user-guide/services/#headless-services</a>).</p>
<p>I need to do this because I need a dynamic collection of pod IPs in order to do auto clustering and service discovery.</p>
<p>My services look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
    tier: messaging
spec:
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
    tier: messaging
spec:
  clusterIP: None
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
</code></pre>
<p>With these two services, I get the following:</p>
<pre><code>$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq 10.23.255.174 <none> 5672/TCP 7m
rabbitmq-cluster None <none> 5672/TCP 7m
</code></pre>
<p>And DNS (from another pod) for the cluster IP works:</p>
<pre><code>[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq.td-integration
Server: 10.23.240.10
Address: 10.23.240.10#53
Name: rabbitmq.td-integration.svc.cluster.local
Address: 10.23.255.174
</code></pre>
<p>However, DNS for the 'headless' service doesn't return anything:</p>
<pre><code>[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq-cluster.td-integration
Server: 10.23.240.10
Address: 10.23.240.10#53
** server can't find rabbitmq-cluster.td-integration: NXDOMAIN
</code></pre>
| <p>It seems like there is no pod matching these labels within your cluster; therefore, the DNS query doesn't return anything. This is expected.</p>
<p>Start the corresponding pods and you should see a list of A records.</p>
<p>Please be aware that these A records are not shuffled as far as I know, so your clients are expected to consume the DNS answer and perform their own round robin.</p>
|
<p>I work in a company where almost all private ipv4 space is already used, so using 10.254.0.0/16 for service address space is a non-starter. I have carved out a /64 of ipv6 space that I can use, but I can't seem to make it work.</p>
<p>Here's my apiserver config:</p>
<pre><code># The address on the local server to listen to.
KUBE_API_ADDRESS="--address=::"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
# KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=fc00:dead:beef:cafe::/64"
# Add your own!
KUBE_API_ARGS=""
</code></pre>
<p>But when I try to start <code>kube-apiserver.service</code> I get an error about "invalid argument". Is it possible to use IPv6 for kubernetes? </p>
| <p><a href="https://github.com/kubernetes/kubernetes/issues/1443" rel="nofollow noreferrer">I don't think IPv6 is fully supported</a>. I don't think there is a strong motivation among the developers of the project to add IPv6 support, because the largest group of contributors is <a href="http://stackalytics.com/?project_type=kubernetes-group&metric=commits" rel="nofollow noreferrer">Google employees</a>. Google Compute Engine (and thus Google Container Engine) doesn't support IPv6, so it wouldn't benefit Google directly to pay their employees to support IPv6. Best thing to do would probably be to pull in employees of companies that run their hosted product on AWS (as AWS has IPv6 support) such as RedHat, or try to contribute some of the work yourself.</p>
<p>From the linked issue, it looks like Brian Grant (Google) is, for whatever reason, somewhat interested and able to contribute IPv6 support. He'd probably be a good resource to query if you're interested in contributing this functionality to Kubernetes yourself.</p>
|
<p>I've been trying to figure out what happens when the Kubernetes master fails in a cluster that only has one master. Do web requests still get routed to pods if this happens, or does the entire system just shut down?</p>
<p>According to the OpenShift 3 documentation, which is built on top of Kubernetes, (<a href="https://docs.openshift.com/enterprise/3.2/architecture/infrastructure_components/kubernetes_infrastructure.html" rel="noreferrer">https://docs.openshift.com/enterprise/3.2/architecture/infrastructure_components/kubernetes_infrastructure.html</a>), if a master fails, nodes continue to function properly, but the system loses its ability to manage pods. Is this the same for vanilla Kubernetes?</p>
| <p>In typical setups, the master nodes run both the API and etcd and are either largely or fully responsible for managing the underlying cloud infrastructure. When they are offline or degraded, the API will be offline or degraded.</p>
<p>In the event that they, etcd, or the API are fully offline, the cluster ceases to be a cluster and is instead a bunch of ad-hoc nodes for this period. The cluster will not be able to respond to node failures, create new resources, move pods to new nodes, etc., until both:</p>
<ol>
<li>Enough etcd instances are back online to form a quorum and make progress (for a visual explanation of how this works and what these terms mean, see <a href="https://raft.github.io/" rel="noreferrer">this page</a>).</li>
<li>At least one API server can service requests</li>
</ol>
<p>In a partially degraded state, the API server may be able to respond to requests that only read data.</p>
<p>However, in any case, life for applications will continue as normal unless nodes are rebooted, or there is a dramatic failure of some sort during this time, because TCP/UDP services, load balancers, DNS, the dashboard, etc. should all continue to function for at least some time. Eventually, these things will all fail on different timescales. In single-master setups or complete API failure, DNS failure will probably happen first as caches expire (on the order of minutes, <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="noreferrer">though the exact timing is configurable</a>, see <a href="https://coredns.io/plugins/cache/" rel="noreferrer">the coredns cache plugin documentation</a>). This is a good reason to consider a multi-master setup: DNS and service routing can continue to function indefinitely in a degraded state, even if etcd can no longer make progress.</p>
<p>There are actions that you could take as an operator which would accelerate failures, especially in a fully degraded state. For instance, rebooting a node would break DNS queries, and in fact probably all pod and service networking functionality, until at least one master comes back online. Restarting DNS pods or kube-proxy would also be bad.</p>
<p>If you'd like to test this out yourself, I recommend <a href="https://github.com/kubernetes-sigs/kubeadm-dind-cluster" rel="noreferrer">kubeadm-dind-cluster</a>, <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> or, for more exotic setups, kubeadm on VMs or bare metal. Note: <code>kubectl proxy</code> will not work during API failure, as that routes traffic through the master(s).</p>
|
<p>I am a bit confused about when to use a build-time environment variable vs. a runtime environment variable in OpenShift Enterprise. Would someone please help me understand and provide example use cases for each?</p>
| <p>An environment variable that might only be needed at build time is one that sets up a proxy so that external package repositories can be reached while building, allowing dependencies to be pulled down. At runtime you probably wouldn't need that, and since it may contain account/password information, you wouldn't want to leave it defined: someone breaking into your application would find it, and that may be valuable to them.</p>
<p>An environment variable that would only be set for the deployment is one telling the application where a database it uses can be found. This wouldn't generally be available at build time, since the database may not even have been started at that point.</p>
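<p>As a rough sketch of how the two cases can look in manifests (the names, proxy address and database URL below are made-up placeholders, and both objects are abridged to the relevant fields):</p>
<pre><code># Build-time variable: set on the BuildConfig's build strategy.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: myapp-base:latest
      env:
      - name: HTTP_PROXY                    # only needed while pulling dependencies
        value: "http://proxy.example.com:3128"
---
# Runtime variable: set on the container in the DeploymentConfig.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        env:
        - name: DATABASE_URL                # only needed by the running application
          value: "postgresql://db.example.com:5432/app"
</code></pre>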
|
<p>I've successfully deployed a Kubernetes 1.3.5 cluster on 1 master + 6 nodes (all running CentOS) including the DNS and Kubernetes Dashboard addons. Everything seemed to be working OK at first. However, when I tried to run</p>
<pre><code>kubectl proxy --address=<master-external-ip> --port=9090 --disable-filter
</code></pre>
<p>and access <code>http://<master-external-ip>:9090/ui</code> I got the following output</p>
<pre><code>Error: 'dial tcp 172.16.38.2:9090: i/o timeout'
Trying to reach: 'http://172.16.38.2:9090/'
</code></pre>
<p>However, if I start <code>flanneld</code> on the master everything works and I can actually reach the Dashboard. Now, I've used <code>kube-up.sh</code> to install the cluster and it didn't install any configs or <code>systemd</code> service for Flannel, which leaves me confused—should Flannel also run on the master?</p>
| <p>Yes, it should, otherwise packets going through the API server proxy can not be routed to their final destination: the dashboard pod running on some other machine.</p>
|
<p>I'm building a 3 VM (CentOS 7) cluster of Kubernetes 1.3.2.
According to this kubernetes documentation page, <a href="http://kubernetes.io/docs/admin/networking/" rel="nofollow">Networking in Kubernetes</a>: “We give every pod its own IP address”, so there should be no port collisions when a few pods use the same ports on the same node.
But as seen here, the pods do get the same IP addresses:</p>
<pre><code>[root@gloom kuber-test]# kubectl get pods -o wide -l app=userloc
NAME READY STATUS RESTARTS AGE IP NODE
userloc-dep-857294609-0am9d 1/1 Running 0 27m 172.17.0.5 157.244.150.86
userloc-dep-857294609-a4538 1/1 Running 0 27m 172.17.0.7 157.244.150.96
userloc-dep-857294609-c4wzy 1/1 Running 0 6h 172.17.0.3 157.244.150.86
userloc-dep-857294609-hbl9i 1/1 Running 0 6h 172.17.0.5 157.244.150.96
userloc-dep-857294609-rpgyd 1/1 Running 0 27m 172.17.0.5 157.244.150.198
userloc-dep-857294609-tnnho 1/1 Running 0 6h 172.17.0.3 157.244.150.198
</code></pre>
<p>What am I missing?</p>
<p><strong>EDIT - 31/07/16:</strong><br>
Following Sven Walter's comments, maybe the issue is that somehow the IPs the pods received are from the docker bridge subnet 172.17.0.0/16 (which is not distinct per node) instead of flannel’s 10.x.x.x/24 subnets (which are distinct per node).
Could this be the issue?</p>
<p>In case needed, here is the deployment yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userloc-dep
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: userloc
    spec:
      containers:
      - name: userloc
        image: globe:5000/openlso/userlocation-ms:0.1
        ports:
        - containerPort: 8081
</code></pre>
| <p>The issue occurred because, following the <a href="https://docs.docker.com/engine/admin/systemd/" rel="nofollow">docker documentation</a>, I had added additional docker config in <em>/etc/systemd/system/docker.service.d/docker.conf</em> that overrides the config in <em>/usr/lib/systemd/system/docker.service</em>. Unfortunately, the scripts I used to set up the cluster (master.sh and worker.sh) don't refer to the first file but to the second one.<br>
Once I removed the <em>docker.conf</em> file the pods got flannel’s subnet.</p>
|
<p>OK, so I am definitely not a security expert and have been battling with this for a few days now.</p>
<p>I am using the CoreOS <code>kube-aws</code> cloud-formation template maker, and I want to deploy my cluster to production, but because of this little comment:</p>
<blockquote>
<p>PRODUCTION NOTE: the TLS keys and certificates generated by kube-aws should not be used to deploy a production Kubernetes cluster. Each component certificate is only valid for 90 days, while the CA is valid for 365 days. If deploying a production Kubernetes cluster, consider establishing PKI independently of this tool first</p>
</blockquote>
<p>I need to generate my own keys, but I don't seem to understand how to do that. Their documentation (IMHO, as someone who's not an expert) is seriously outrageous if you are not already familiar with it.</p>
<p><strong>So my requirements are like so:</strong></p>
<ol>
<li>I would like the certs/keys to last for X years (a long time)</li>
<li>I would like them to be valid for the entire domain *.company.com (I don't care about the internals, just the admin key for kubectl)</li>
<li>I would like to repeat the process 2 times (once for production, and once for QA/Staging) and eventually have 2 sets of credentials</li>
<li>override the default credentials and use <code>kube-aws up --export</code> to get the <code>userdata</code> I need for my cluster</li>
</ol>
<p><strong>My problems are:</strong></p>
<ol>
<li>how to make the admin cert valid for *.company.com</li>
<li>In the documentation they say that you need to create a <code>key-pair</code> for each node in the cluster... WHAT! <code>kube-aws render</code> generates only 1 worker <code>key-pair</code>, so what's up with that?</li>
</ol>
<p>Well now for the "fun" part:</p>
<blockquote>
<pre><code>$ openssl genrsa -out ca-key.pem 2048
$ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
</code></pre>
</blockquote>
<p>I guess that the <code>-days 10000</code> solves my first problem with the expiration. Cool.</p>
<p><strong>API-SERVER key pair</strong> </p>
<p>openssl.cnf</p>
<blockquote>
<pre><code> [req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
is it
DNS.5 = mycompany.com
or
DNS.5 = *.mycompany.com
IP.1 = 10.3.0.1
IP.2 = 10.0.0.50
</code></pre>
</blockquote>
<p>and run the commands</p>
<blockquote>
<pre><code>$ openssl genrsa -out apiserver-key.pem 2048
$ openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
$ openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf
</code></pre>
</blockquote>
<p>Fine; besides the <code>subjectAltName</code> entries, which I don't know how to fill in, I could have made a few attempts to see what works, so that's cool.</p>
<p><strong>Worker Keypairs</strong></p>
<p>Here is where I am really stuck. What am I supposed to do with this sentence:</p>
<pre><code>This procedure generates a unique TLS certificate for every Kubernetes worker node in your cluster
</code></pre>
<p>Fine, security and all, but this is really unrealistic and overkill IMO on an Amazon autoscaling group.</p>
<p>So in case I don't want to have a key for each node but one key for all, how should my <code>worker-openssl.cnf</code> look?</p>
<blockquote>
<pre><code>[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP <- what am i supposed to do here?
</code></pre>
</blockquote>
<p>After this, creating the admin key pair is straightforward.</p>
<p>Please help!</p>
| <p>I was able to get it to work with this <code>worker-openssl.conf</code>, using <strong>the same certificate</strong> for all workers. Probably not the most secure setup.</p>
<pre><code>[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.*.cluster.internal
</code></pre>
|
<p>I have cross-posted this problem from <a href="https://github.com/kubernetes/kubernetes/issues/4825#issuecomment-153567540" rel="noreferrer">here</a>.</p>
<p>I am running sharded mongodb in a kubernetes environment, with 3 shards and 3 instances on each shard. For some reason, my mongodb instances have been rescheduled to other machines.</p>
<p>The problem is that when a mongodb instance has been rescheduled to another machine, its <code>replica config</code> is invalidated, resulting in the error below.</p>
<pre><code> > rs.status()
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 2110,
"optime" : Timestamp(1448462710, 6),
"optimeDate" : ISODate("2015-11-25T14:45:10Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93
}
>
</code></pre>
<p>this is the config</p>
<pre><code> > rs.config().members
[
{
"_id" : 0,
"host" : "mongodb-shard2-service:27038",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "shard2-slave2-service:27039",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 2,
"host" : "shard2-slave1-service:27033",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
]
</code></pre>
<p>and a sample of <code>db.serverStatus()</code> of a rescheduled mongodb instance</p>
<pre><code> > db.serverStatus()
{
"host" : "mongodb-shard2-master-ofgrb",
"version" : "3.0.7",
"process" : "mongod",
"pid" : NumberLong(8),
</code></pre>
<p>I hope I am making sense, because I will be using this in live production very soon. Thank you!</p>
| <p>For those who want to use the old way of setting up mongo (using ReplicationControllers or Deployments instead of PetSet), the problem seems to be in the hostname assignment delay of kubernetes Services. The solution is to add a 10-second delay in the container entrypoint (before starting the actual mongo):</p>
<pre><code>spec:
  containers:
  - name: mongo-node1
    image: mongo
    command: ["/bin/sh", "-c"]
    args: ["sleep 10 && mongod --replSet rs1"]
    ports:
    - containerPort: 27017
    volumeMounts:
    - name: mongo-persistent-storage1
      mountPath: /data/db
</code></pre>
<p>related discussion: <a href="https://jira.mongodb.org/browse/SERVER-24778" rel="nofollow">https://jira.mongodb.org/browse/SERVER-24778</a></p>
|
<p>When I deploy each app using kubernetes/cluster/kube-up.sh over aws, I set the context using:</p>
<pre><code>CONTEXT=$(kubectl config view | grep current-context | awk '{print $2}')
kubectl config set-context $CONTEXT --namespace=${PROJECT_ID}
</code></pre>
<p>I do this for multiple apps and each deploys fine. However, I then need to be able to toggle between kubernetes contexts to interact with an arbitrary deployed app (view logs / do a <code>kubectl exec</code>).</p>
<p>Here is how to show all my contexts:</p>
<pre><code>kubectl config view --output=json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "aws_kubernetes",
      "cluster": {
        "server": "https://52.87.88.888",
        "certificate-authority-data": "REDACTED"
      }
    },
    {
      "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
      "cluster": {
        "server": "https://104.196.888.888",
        "certificate-authority-data": "REDACTED"
      }
    }
  ],
  "users": [
    {
      "name": "aws_kubernetes",
      "user": {
        "client-certificate-data": "REDACTED",
        "client-key-data": "REDACTED",
        "token": "taklamakan"
      }
    },
    {
      "name": "aws_kubernetes-basic-auth",
      "user": {
        "username": "admin",
        "password": "retrogradewaif"
      }
    },
    {
      "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
      "user": {
        "client-certificate-data": "REDACTED",
        "client-key-data": "REDACTED",
        "username": "admin",
        "password": "emptyadjacentpossible"
      }
    }
  ],
  "contexts": [
    {
      "name": "aws_kubernetes",
      "context": {
        "cluster": "aws_kubernetes",
        "user": "aws_kubernetes",
        "namespace": "ruptureofthemundaneplane"
      }
    },
    {
      "name": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
      "context": {
        "cluster": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
        "user": "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster",
        "namespace": "primacyofdirectexperience"
      }
    }
  ],
  "current-context": "aws_kubernetes"
}
</code></pre>
<p>You can see above that I have deployed two apps... when I try the obvious way to choose my kubernetes context:</p>
<pre><code>kubectl config set-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster --namespace=${PROJECT_ID}
... outputs
context "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster" set.
kubectl config set-cluster "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster"
... outputs
cluster "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster" set.
</code></pre>
<p>it then just hangs when I issue commands like</p>
<pre><code>kubectl describe pods --namespace=primacyofdirectexperience
</code></pre>
<p>Perhaps I am missing the command to also set the user, since in the above JSON each deployed app gets its own user name?</p>
<p><strong><em>UPDATE</em></strong></p>
<pre><code>kubectl config use-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster
... outputs
switched to context "gke_primacyofdirectexperience_us-east1-b_loudhttpscluster".
</code></pre>
<p>However, now when I issue any kubectl command, for example:</p>
<pre><code>kubectl get pods
.... outputs
Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre>
<p>which is an error I have never seen before... no doubt due to the toggle issue.</p>
<p>Even with the above error message, /kubernetes/cluster/kube-down.sh was able to tear down the cluster, so there is hope toggling will work!</p>
| <p>To switch between contexts use <code>use-context</code>:</p>
<pre><code>kubectl config use-context gke_primacyofdirectexperience_us-east1-b_loudhttpscluster
</code></pre>
<p>Any kubectl commands applied now will be applied to that cluster (using the <code>primacyofdirectexperience</code> namespace, since you set that as the default for that context).</p>
<pre><code>kubectl get pods
</code></pre>
<p>will now get all pods in the <code>gke_primacyofdirectexperience_us-east1-b_loudhttpscluster</code> cluster, in the <code>primacyofdirectexperience</code> namespace. To use a different namespace, you can apply the namespace flag:</p>
<pre><code>kubectl get pods --namespace=someothernamespace
</code></pre>
<p>To switch contexts again, just run <code>use-context</code> again:</p>
<pre><code>kubectl config use-context aws_kubernetes
</code></pre>
<p>Now,</p>
<pre><code>kubectl get pods
</code></pre>
<p>will run on the <code>aws_kubernetes</code> cluster, using the <code>default</code> namespace.</p>
<p>You can always see which context <code>kubectl</code> is currently using by running:</p>
<pre><code>kubectl config current-context
</code></pre>
|
<p>I'm trying to spin up a new pod that doesn't automatically include a JWT for talking to the k8s cluster.</p>
<p>I've created a new Service Account and tried removing the built in secret, but it is automatically regenerated after each delete.</p>
<p>According to the <a href="http://kubernetes.io/docs/user-guide/secrets/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials" rel="nofollow">secrets</a> documentation, I should be able to disable the creation of tokens for service accounts.</p>
<blockquote>
<p>The automatic creation and use of API credentials can be disabled or overridden if desired.</p>
</blockquote>
<p>Where/how do I do that?</p>
| <p>Removing the <code>--service-account-private-key-file</code> argument from the controller manager will prevent auto-creation of tokens.</p>
<p>Removing the ServiceAccount admission plugin will prevent auto-mounting of tokens.</p>
<p>However, many services are likely to depend on those tokens being present.</p>
<p>If you are concerned about access to the API, it is generally better to set an authorization mode other than AlwaysAllow, and use one of the modes that lets you specify policy around which users can perform which actions.</p>
|
<p>I set up k8s with <a href="https://github.com/kubernetes/kube-deploy/tree/master/docker-multinode" rel="nofollow">docker-multinode</a>:</p>
<pre><code>$ https_proxy=http://10.25.30.127:7777 IP_ADDRESS=10.25.24.116 MASTER_IP=10.25.30.127 ./worker.sh
+++ [0828 15:38:35] K8S_VERSION is set to: v1.3.6
+++ [0828 15:38:35] ETCD_VERSION is set to: 3.0.4
+++ [0828 15:38:35] FLANNEL_VERSION is set to: v0.6.1
+++ [0828 15:38:35] FLANNEL_IPMASQ is set to: true
+++ [0828 15:38:35] FLANNEL_NETWORK is set to: 10.1.0.0/16
+++ [0828 15:38:35] FLANNEL_BACKEND is set to: udp
+++ [0828 15:38:35] RESTART_POLICY is set to: unless-stopped
+++ [0828 15:38:35] MASTER_IP is set to: 10.25.30.127
+++ [0828 15:38:35] ARCH is set to: amd64
+++ [0828 15:38:35] IP_ADDRESS is set to: 10.25.24.116
+++ [0828 15:38:35] USE_CNI is set to: false
+++ [0828 15:38:35] --------------------------------------------
+++ [0828 15:38:35] Killing all kubernetes containers...
+++ [0828 15:38:35] Launching docker bootstrap...
+++ [0828 15:38:36] Launching flannel...
b70f4cc14aac8315740a916fae459d0b354280d97d8da743c67e9e692beea601
+++ [0828 15:38:37] FLANNEL_SUBNET is set to: 10.1.102.1/24
+++ [0828 15:38:37] FLANNEL_MTU is set to: 1472
$ k cluster-info
Kubernetes master is running at http://10.25.30.127:8080
KubeDNS is running at http://10.25.30.127:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://10.25.30.127:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ k get nodes
NAME STATUS AGE
10.25.17.232 Ready 9h
10.25.19.197 Ready 3h
10.25.24.116 Ready 7h
139.1.1.1 Ready 9h
</code></pre>
<p>DNS started properly</p>
<pre><code>$ k logs kube-dns-v17.1-qaygj -c kubedns --namespace=kube-system |head -50
I0828 04:49:14.079474 1 server.go:91] Using https://10.0.0.1:443 for kubernetes master
I0828 04:49:14.081339 1 server.go:92] Using kubernetes API <nil>
I0828 04:49:14.081923 1 server.go:132] Starting SkyDNS server. Listening on port:10053
I0828 04:49:14.082071 1 server.go:139] skydns: metrics enabled on :/metrics
I0828 04:49:14.082181 1 dns.go:166] Waiting for service: default/kubernetes
I0828 04:49:14.082462 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0828 04:49:14.082607 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0828 04:49:14.480396 1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I0828 04:49:14.483012 1 dns.go:660] DNS Record:&{10.0.0.1 0 10 10 false 30 0 }, hash:24c3d825
I0828 04:49:14.483065 1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10 false 30 0 }, hash:c3f6ae26
I0828 04:49:14.483115 1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:b9b7d845
I0828 04:49:14.483160 1 dns.go:660] DNS Record:&{10.0.0.24 0 10 10 false 30 0 }, hash:d8b58e70
I0828 04:49:14.483194 1 dns.go:660] DNS Record:&{kubernetes-dashboard.kube-system.svc.cluster.local. 0 10 10 false 30 0 }, hash:529066a8
I0828 04:49:14.483237 1 dns.go:660] DNS Record:&{10.0.0.10 0 10 10 false 30 0 }, hash:2d9aa69
I0828 04:49:14.483266 1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10 false 30 0 }, hash:fdbb4e78
I0828 04:49:14.483309 1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 53 10 10 false 30 0 }, hash:fdbb4e78
I0828 04:49:14.483337 1 dns.go:660] DNS Record:&{kube-dns.kube-system.svc.cluster.local. 0 10 10 false 30 0 }, hash:d1247c4e
I0828 04:49:16.678334 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:49:16.678405 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:49:16.777991 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:49:16.778100 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:49:16.778886 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:49:46.778352 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:49:46.778406 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:49:46.778932 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:50:16.879611 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:50:16.879685 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:50:16.880274 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:50:16.880332 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:50:16.880900 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:50:46.878037 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:50:46.878094 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:50:46.978007 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:51:16.778397 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:51:16.778455 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:51:16.779062 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:51:16.779110 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:51:16.779588 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:51:46.778319 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:51:46.778374 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:51:46.779048 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:52:16.878240 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:52:16.878309 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:52:16.878848 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:52:16.878886 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:52:16.977642 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:52:46.678628 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0828 04:52:46.678685 1 dns.go:539] records:[0xc820356af0], retval:[{10.0.0.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3234633364383235}], path:[local cluster svc default kubernetes]
I0828 04:52:46.878096 1 dns.go:583] Received ReverseRecord Request:1.0.0.10.in-addr.arpa.
I0828 04:53:16.679056 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
</code></pre>
<p>When I start the guestbook I notice these DNS log entries:</p>
<pre><code>I0828 14:39:07.977746 1 dns.go:660] DNS Record:&{redis-master.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:606d0bd5
I0828 14:39:14.178122 1 dns.go:660] DNS Record:&{10.0.0.147 0 10 10 false 30 0 }, hash:e8b710be
I0828 14:39:14.178212 1 dns.go:660] DNS Record:&{myapp.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:2077e618
I0828 14:39:14.178254 1 dns.go:660] DNS Record:&{10.0.0.145 0 10 10 false 30 0 }, hash:d93e9f2c
I0828 14:39:14.178296 1 dns.go:660] DNS Record:&{redis-master.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:606d0bd5
I0828 14:39:14.178336 1 dns.go:660] DNS Record:&{10.0.0.171 0 10 10 false 30 0 }, hash:cb7a4915
I0828 14:39:14.178364 1 dns.go:660] DNS Record:&{redis-slave.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:ff7d45d8
I0828 14:39:14.178404 1 dns.go:660] DNS Record:&{10.0.0.139 0 10 10 false 30 0 }, hash:9afc24c9
I0828 14:39:14.178431 1 dns.go:660] DNS Record:&{guestbook.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:78248924
I0828 14:39:14.178456 1 dns.go:660] DNS Record:&{10.0.0.1 0 10 10 false 30 0 }, hash:24c3d825
I0828 14:39:14.178513 1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10 false 30 0 }, hash:c3f6ae26
</code></pre>
<p>But DNS lookup is not working.</p>
<p>Guestbook info report:</p>
<pre><code>PANIC: dial tcp: lookup redis-master on 10.0.0.10:53: read udp 10.1.102.2:46755->10.0.0.10:53: i/o timeout
goroutine 277 [running]:
github.com/codegangsta/negroni.(*Recovery).ServeHTTP.func1(0x7f1d98e5ba90, 0xc820323580, 0xc8200b8ec0)
/go/src/github.com/codegangsta/negroni/recovery.go:34 +0xe9
panic(0x7a8c60, 0xc820327ae0)
</code></pre>
<p>Guestbook env page</p>
<pre><code>{
"GUESTBOOK_PORT": "tcp://10.0.0.139:3000",
"GUESTBOOK_PORT_3000_TCP": "tcp://10.0.0.139:3000",
"GUESTBOOK_PORT_3000_TCP_ADDR": "10.0.0.139",
"GUESTBOOK_PORT_3000_TCP_PORT": "3000",
"GUESTBOOK_PORT_3000_TCP_PROTO": "tcp",
"GUESTBOOK_SERVICE_HOST": "10.0.0.139",
"GUESTBOOK_SERVICE_PORT": "3000",
"HOME": "/",
"HOSTNAME": "guestbook-advba",
"KUBERNETES_PORT": "tcp://10.0.0.1:443",
"KUBERNETES_PORT_443_TCP": "tcp://10.0.0.1:443",
"KUBERNETES_PORT_443_TCP_ADDR": "10.0.0.1",
"KUBERNETES_PORT_443_TCP_PORT": "443",
"KUBERNETES_PORT_443_TCP_PROTO": "tcp",
"KUBERNETES_SERVICE_HOST": "10.0.0.1",
"KUBERNETES_SERVICE_PORT": "443",
"KUBERNETES_SERVICE_PORT_HTTPS": "443",
"MYAPP_PORT": "tcp://10.0.0.147:8765",
"MYAPP_PORT_8765_TCP": "tcp://10.0.0.147:8765",
"MYAPP_PORT_8765_TCP_ADDR": "10.0.0.147",
"MYAPP_PORT_8765_TCP_PORT": "8765",
"MYAPP_PORT_8765_TCP_PROTO": "tcp",
"MYAPP_SERVICE_HOST": "10.0.0.147",
"MYAPP_SERVICE_PORT": "8765",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"REDIS_MASTER_PORT": "tcp://10.0.0.35:6379",
"REDIS_MASTER_PORT_6379_TCP": "tcp://10.0.0.35:6379",
"REDIS_MASTER_PORT_6379_TCP_ADDR": "10.0.0.35",
"REDIS_MASTER_PORT_6379_TCP_PORT": "6379",
"REDIS_MASTER_PORT_6379_TCP_PROTO": "tcp",
"REDIS_MASTER_SERVICE_HOST": "10.0.0.35",
"REDIS_MASTER_SERVICE_PORT": "6379",
"REDIS_SLAVE_PORT": "tcp://10.0.0.171:6379",
"REDIS_SLAVE_PORT_6379_TCP": "tcp://10.0.0.171:6379",
"REDIS_SLAVE_PORT_6379_TCP_ADDR": "10.0.0.171",
"REDIS_SLAVE_PORT_6379_TCP_PORT": "6379",
"REDIS_SLAVE_PORT_6379_TCP_PROTO": "tcp",
"REDIS_SLAVE_SERVICE_HOST": "10.0.0.171",
"REDIS_SLAVE_SERVICE_PORT": "6379"
}
</code></pre>
<p>I also tried to resolve names inside the DNS pod:</p>
<pre><code>$ k get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 10h
kubernetes-dashboard 10.0.0.24 <none> 80/TCP 10h
$ k exec -it kube-dns-v17.1-qaygj -c kubedns --namespace=kube-system -- /bin/sh
/ # nslookup redis-master
Server: 10.143.22.118
Address 1: 10.143.22.118
nslookup: can't resolve 'redis-master'
/ # nslookup baidu.com
Server: 10.143.22.118
Address 1: 10.143.22.118
Name: baidu.com
Address 1: 111.13.101.208
Address 2: 220.181.57.217
Address 3: 123.125.114.144
Address 4: 180.149.132.47
/ # nslookup redis-master 10.0.0.10
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'redis-master'
/ # nslookup baidu.com 10.0.0.10
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: baidu.com
Address 1: 123.125.114.144
Address 2: 180.149.132.47
Address 3: 111.13.101.208
Address 4: 220.181.57.217
</code></pre>
<p>Why is kube-dns not working?</p>
| <p>This problem is caused by an IP conflict: the provider uses the 10.0.0.0/8 range as its internal network, and docker-multinode uses 10/8 as the default cluster IP range, so something got weird.</p>
<p>This is what I did:</p>
<ul>
<li>Copy /etc/kubernetes outside the container</li>
<li>Replace <code>10.0.0.1</code> to <code>172.16.0.1</code> in <code>/etc/kubernetes/master-multi/master-multi.json</code></li>
<li><p>Change <code>addon</code> volume in <code>/etc/kubernetes/master-multi/addon-manager.json</code> to </p>
<pre><code>{
  "name": "addons",
  "hostPath": {"path": "/path/to/you/own/etc/kubernetes/"}
}
</code></pre></li>
<li><p>Change <code>clusterIP</code> in <code>/etc/kubernetes/addon/skydns-svc.yaml</code> to <code>172.16.0.10</code></p></li>
<li>Change all <code>--cluster-dns=10.0.0.10</code> in <code>common.sh</code> to <code>--cluster-dns=172.16.0.10</code></li>
<li>Add <code>-v /path/to/you/own/etc/kubernetes/:/etc/kubernetes/ \</code> to KUBECTL_MOUNT in <code>common.sh</code></li>
<li>Then <code>FLANNEL_IPMASQ=false FLANNEL_NETWORK=172.16.0.0/16 ./master.sh</code></li>
<li>Done</li>
</ul>
<p>I also add an issues to kube-deploy <a href="https://github.com/kubernetes/kube-deploy/issues/215" rel="nofollow">kube-deploy#215</a></p>
|
<p>I have a rather recent kubernetes cluster running on GCE. I am trying to get my application to log to Cloud Logging / Stackdriver.</p>
<p>I can see all the kubernetes cluster logs there but no container output ever materializes.</p>
<p>So when I follow this guide: <a href="http://kubernetes.io/docs/getting-started-guides/logging/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/logging/</a>, I can see the output of the pod</p>
<pre><code>kubectl logs counter
2163: Wed Aug 31 15:02:52 UTC 2016
</code></pre>
<p>This never makes it to the Logging Interface</p>
<p><a href="http://i.stack.imgur.com/N7qlF.png" rel="nofollow">Pod not showing in selector</a></p>
<p>The fluentd-cloud-logging pods give no logging output</p>
<pre><code>kubectl logs --namespace=kube-system fluentd-cloud-logging-staging-minion-group-20hk
</code></pre>
<p>The /var/log/google-fluentd/google-fluentd.log file looks happy</p>
<pre><code>...
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/containers/node-problem-detector-v0.1-hgtcr_kube-system_POD-07e5b134c9f8ff48f73f1df41473a84a07738ac750840f09938d604694c4bd6e.log
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/containers/rails-2607986313-s7r5e_default_POD-9f1dd02f23de552a40297f761d09c03b50e5a2cd9789ef498139d24602d9847e.log
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/salt/minion
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/startupscript.log
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/docker.log
2016-08-31 14:07:16 +0000 [info]: following tail of /var/log/kubelet.log
2016-08-31 14:07:22 +0000 [info]: Successfully sent to Google Cloud Logging API.
2016-08-31 14:07:22 +0000 [info]: Successfully sent to Google Cloud Logging API.
</code></pre>
<p>Kubernetes Version is</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:29:08Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:21:58Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Cluster was started with</p>
<pre><code>export KUBE_GCE_ZONE=europe-west1-d
export NODE_SIZE=n1-standard-2
export NUM_NODES=2
export KUBE_GCE_INSTANCE_PREFIX=staging
export ENABLE_CLUSTER_AUTOSCALER=true
export KUBE_ENABLE_CLUSTER_MONITORING=true
export KUBE_ENABLE_CLUSTER_MONITORING=google
</code></pre>
<p>Any ideas what I might be doing wrong? To my understanding this should work out of the box, right?</p>
| <p>Bit of a long shot, but have you enabled the logging API?</p>
<p>"You can do so from the Developers Console, <a href="https://console.developers.google.com/apis/api/logging/overview" rel="noreferrer">here</a>. Try going there, clicking the Enable API button, and seeing whether the errors keep coming."</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/20516" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/20516</a></p>
<p><a href="https://stackoverflow.com/questions/35165973/google-cloud-logging-google-fluentd-dropping-messages">Google Cloud Logging + google-fluentd Dropping Messages</a></p>
|
<p>I'm running a service written in go in a pod in Kubernetes. The service doesn't expose an HTTP interface; it's processing work from a queue.</p>
<p>I could:</p>
<ul>
<li>Use an executable liveness check to see if the process is running</li>
<li>Expose an HTTP health check endpoint</li>
<li>Use expvars to expose basic health data.</li>
</ul>
<p>Is there a common / idiomatic way of doing this in go/Kubernetes?</p>
| <p>In general I recommend the HTTP mechanism because it is SO easy to add in Go. If you already have an <code>exec</code>able command that will return a useful status, go for it. Or you might consider <a href="https://github.com/kubernetes/contrib/tree/master/exec-healthz" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/exec-healthz</a></p>
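<p>If you go the HTTP route, the Kubernetes side is just an <code>httpGet</code> liveness probe pointing at whatever path and port your worker exposes. A minimal sketch, assuming the process serves a <code>/healthz</code> endpoint on port 8080 (both are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: queue-worker
spec:
  containers:
  - name: worker
    image: my-registry/queue-worker:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz       # health endpoint served by the Go process
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
</code></pre>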
|
<p>I have put together a simple cluster with several deploys that interact nicely, DNS works, etc. However, as I'm using Deployments, I have a few questions that I could not find answered in the docs.</p>
<ul>
<li><p>How do I non-destructively update a deployment with a new copy of the deploy file? I've got edit and replace, but I'd really like to just pass in the file proper with it's changed fields (version, image, ports, etc.)</p></li>
<li><p>What's the preferred way of exposing a deployment as a service? There's a standalone file, there's an expose command... anything else I should consider? Is it possible to bundle the service into the deployment file?</p></li>
</ul>
| <blockquote>
<p>How do I non-destructively update a deployment</p>
</blockquote>
<p>You can use <code>kubectl replace</code> or <code>kubectl apply</code>. Replace is a full replacement. Apply tries to do a selective patch operation.</p>
<blockquote>
<p>What's the preferred way of exposing a deployment as a service?</p>
</blockquote>
<p>All of your suggestions are valid. Some people prefer a script, and for that <code>kubectl expose</code> is great. Some people want more control and versioning, so YAML files + <code>kubectl apply</code> or <code>kubectl replace</code> are appropriate. You can bundle multiple YAML "documents" into a single file, just join the blocks with "---" on a line by itself.</p>
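<p>For example, a Service and a Deployment for the same app can live in one file and be applied together (the names and image below are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: my-registry/myapp:1.0   # placeholder image
        ports:
        - containerPort: 8080
</code></pre>
<p>A single <code>kubectl apply -f myapp.yaml</code> then creates or selectively patches both objects.</p>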
|
<p>I have a GCE Container Cluster composed of 3 nodes. On every node I run a POD like this one:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: none
        track: stable
    spec:
      containers:
      - name: hello
        image: gcr.io/persistent-volumes-test/alpine:v1.2
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        volumeMounts:
        - mountPath: "/persistentDisk"
          name: persistent-disk
        ports:
        - containerPort: 65535
          name: anti-affinity
          hostPort: 65535
      volumes:
      - name: persistent-disk
        persistentVolumeClaim:
          claimName: myclaim
</code></pre>
<p>The trick of defining the "anti-affinity" port ensures that every POD runs on a different node. I've created 3 PersistentVolumes defined like this:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume-1
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
  labels:
    release: "dev"
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: persistent-disk-1
    fsType: ext4
</code></pre>
<p>and they are deployed correctly:</p>
<pre><code>NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
persistent-volume-1 10Gi RWO Released default/myclaim 13h
persistent-volume-2 10Gi RWO Released default/myclaim 5h
persistent-volume-3 10Gi RWO Available 5h
</code></pre>
<p>the claim definition is the following:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      release: "dev"
</code></pre>
<p>What I noticed is that the claim binds to only one of the volumes I created, so only one of my PODs gets successfully deployed. What I expected was that the claim, when used by a POD, would find one available volume to bind to, matching the selector rules.
In other words, my interpretation of PersistentVolumeClaims is that a POD uses the claim to search for an available volume in a set of PersistentVolumes matching the PVC spec. So that's my question:</p>
<p>Can the same PersistentVolumeClaim be used by different instances of the same POD to connect to different PersistentVolumes? Or is the claim bound to one and only one volume once it is created, unable to bind to any other volume?</p>
<p>If the second is right, how can I make a POD bind dynamically to a PersistentVolume (chosen from a set) when deployed, without creating a claim per POD and thus having to create a specific POD for every volume I need to connect to?</p>
| <p>A <code>PersistentVolumeClaim</code> reserves a specific instance of storage that satisfies its request. Using that same <code>PersistentVolumeClaim</code> in multiple <code>Pods</code> will attempt to use the same bound <code>PersistentVolume</code> in each of the <code>Pods</code>, which will not be possible in the case of a <code>gcePersistentDisk</code>.</p>
<p>Try creating a separate <code>PersistentVolumeClaim</code> for each <code>Pod</code>.</p>
<p>The <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#lifecycle-of-a-volume-and-claim" rel="noreferrer">Lifecycle</a> section of the <a href="http://kubernetes.io/docs/user-guide/persistent-volumes" rel="noreferrer">Persistent Volumes doc</a> provides a nice overview.</p>
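<p>For example, a second claim with the same selector (identical to the one in the question apart from the name) would bind to one of the remaining matching volumes:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-2              # one claim per pod; the name is illustrative
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      release: "dev"
</code></pre>
<p>Each pod then references its own claim via <code>persistentVolumeClaim.claimName</code>.</p>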
|
<p>I have exposed a service on an external port on all nodes in a kubernetes
cluster by running:</p>
<p><code>kubectl create -f nginx-service.yaml</code></p>
<p>You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30002) to serve traffic.</p>
<p>See <a href="http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md" rel="noreferrer">http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md</a> for more details.
service "nginx-service" created.`</p>
<p>Is there anyway to get the external ports of the kubernetes cluster?</p>
| <p><code>kubectl get svc --all-namespaces -o go-template='{{range .items}}{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}{{end}}'</code></p>
<p>This gets all services in all namespaces, and does basically: "for each service, for each port, if nodePort is defined, print nodePort".</p>
|
<p>I want to experiment with PetSet on GKE.
I have a 1.3.5 Kubernetes cluster on GKE, but PetSet does not seem to be activated.</p>
<pre><code> > kubectl get petset
Unable to list "petsets": the server could not find the requested resource
</code></pre>
<p>Do I need to activate the v1alpha1 features on GKE?</p>
| <p>I'm using <code>PetSet</code> in zone <code>europe-west1-d</code> but got the error you're seeing when I tried in zone <code>europe-west1-c</code>.</p>
<p>Update:</p>
<p>Today, September 1, I got an email from Google Cloud Platform announcing that PetSet was "accidentally enabled" and will be disabled on September 30.</p>
<blockquote>
<p>Dear Google Container Engine customer,</p>
<p>Google Container Engine clusters running Kubernetes 1.3.x versions accidentally enabled Kubernetes alpha features (e.g. PetSet), which are not production ready. Access to alpha features has already been disabled for clusters not using them, but cannot be safely disabled in clusters that are currently using alpha resources. The following clusters in projects owned by you have been identified as running alpha resources:</p>
<p>Please delete the alpha resources from your cluster. Continued usage of these features after September 30th may result in an unstable or broken cluster, as access to alpha features will be disabled.</p>
<p>The full list of unsupported alpha resources that are currently enabled (and will be disabled) is below:</p>
<pre><code>Resource               API Group
petset                 apps/v1alpha1
clusterrolebindings    rbac.authorization.k8s.io/v1alpha1
clusterroles           rbac.authorization.k8s.io/v1alpha1
rolebindings           rbac.authorization.k8s.io/v1alpha1
roles                  rbac.authorization.k8s.io/v1alpha1
poddisruptionbudgets   policy/v1alpha1
</code></pre>
</blockquote>
|
<p>I followed the load balancer tutorial: <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/container-engine/docs/tutorials/http-balancer</a>, which works fine when I use the Nginx image; when I try to use my own application image, though, the backend switches to unhealthy.</p>
<p>My application redirects on / (returns a 302), but I added a <code>livenessProbe</code> in the pod definition:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.bar.com
</code></pre>
<p>My ingress looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  backend:
    serviceName: foo
    servicePort: 80
  rules:
  - host: foo.bar.com
</code></pre>
<p>Service configuration is:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 4001
</code></pre>
<p>Backend health in <code>kubectl describe ing</code> looks like:</p>
<pre><code>backends: {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}
</code></pre>
<p>and the rules on the ingress look like:</p>
<pre><code>Rules:
Host Path Backends
---- ---- --------
* * foo:80 (10.0.0.7:4001,10.0.1.6:4001)
</code></pre>
<p>Any pointers greatly received, I've been trying to work this out for hours with no luck.</p>
<p><strong>Update</strong></p>
<p>I have added the <code>readinessProbe</code> to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like: </p>
<pre><code>readinessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.com
</code></pre>
<p>I changed my service to: </p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 4001
    targetPort: 4001
</code></pre>
<p><strong>Update2</strong></p>
<p>After I removed the custom headers from the <code>readinessProbe</code> it started working! Many thanks.</p>
| <p>You need to add a readinessProbe (just copy your livenessProbe).</p>
<p>It's explained in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">GCE L7 Ingress Docs</a>.</p>
<blockquote>
<p>Health checks</p>
<p>Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer: 1. Respond with a 200 on '/'. The content does not matter. 2. Expose an arbitrary url as a readiness probe on the pods backing the Service.</p>
</blockquote>
<p>Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port, if you add another one you may run into trouble.</p>
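<p>Based on your Update2, the working probe is just the plain <code>httpGet</code> without the custom headers, for example:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /ping        # must be reachable on the port the Service targets
    port: 4001
</code></pre>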
|
<p>I have been using kubernetes for a while now.</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a
498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoV
ersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db
386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
</code></pre>
<p>I usually set an Ingress, Service and Replication Controller for each project.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: portifolio
name: portifolio-ingress
spec:
rules:
- host: www.cescoferraro.xyz
http:
paths:
- path: /
backend:
serviceName: portifolio
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
namespace: portifolio
name: portifolio
labels:
name: portifolio
spec:
selector:
name: portifolio
ports:
- name: web
port: 80
targetPort: 80
protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
namespace: portifolio
name: portifolio
labels:
name: portifolio
spec:
replicas: 1
selector:
name: portifolio
template:
metadata:
namespace: portifolio
labels:
name: portifolio
spec:
containers:
- image: cescoferraro/portifolio:latest
imagePullPolicy: Always
name: portifolio
env:
- name: KUBERNETES
value: "true"
- name: BRANCH
value: "production"
</code></pre>
<p>My "problem" is that for deploying my app I usually do:</p>
<pre><code>kubectl -f delete kubernetes.yaml
kubectl -f create kubernetes.yaml
</code></pre>
<p>I wish I could use a single command to deploy, whether my app is already up or not. Rolling updates do not work when I use the same image (I think it's a bug in my kubernetes server version). They also do not work when the app has never been deployed at all.</p>
<p>I have read about Deployments, I wonder how it would help me?</p>
<p><strong>Goals</strong></p>
<ol>
<li>Deploy if the app is brand new</li>
<li>Replace existing pods with new ones using a new image from the docker registry</li>
</ol>
| <p>I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.</p>
<p>If all you want to do is roll out new pods, I would recommend you to replace your ReplicationController with a <strong>Deployment</strong>. Manifests have almost the exact same syntax so it's easy to migrate from standard RCs, and you could perform a server-side rolling update with a single <code>kubectl replace -f manifest.yml</code>.</p>
<p>Please note that even with a Deployment resource you can't trigger a redeployment if nothing changed in your manifest. <code>kubectl replace</code> would just do nothing. Therefore you could for example increment or change a tag inside your manifest in order to force the deployment, if needed (eg. <code>revision: 003</code>).</p>
|
<p><strong>Question</strong>: How can I provide reliable access from (non-K8s) services running in an GCE network to other services running inside Kubernetes?</p>
<p><strong>Background</strong>: We are running a <strong>hosted K8s</strong> setup in the <strong>Google Cloud Platform</strong>. Most services are <strong>12factor apps</strong> and <strong>run</strong> just <strong>fine</strong> within K8s. Some <strong>backing stores</strong> (databases) are run <strong>outside of K8s</strong>. Accessing them is easy by using headless services with manually defined endpoints to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.</p>
<p>But <strong>some services</strong> running in the internal GCE network (but <strong>outside of K8s</strong>) need to <strong>access services running within K8s</strong>. We can expose the K8s services using <code>spec.type: NodePort</code> and talk to this port on any of the K8s Nodes IPs. But how can we automatically find the right NodePort and a valid Worker Node IP? Or maybe there is even a better way to solve this issue.</p>
<p>This setup is probably not a typical use-case for a K8s deployment, but we'd like to go this way until PetSets and Persistent Storage in K8s have matured enough.</p>
<p>As we are talking about internal services I'd like to avoid using an external loadbalancer in this case.</p>
| <p>You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see <a href="https://stackoverflow.com/questions/31664060/how-to-call-a-service-exposed-by-a-kubernetes-cluster-from-another-kubernetes-cl/31665248#31665248">this answer</a>).</p>
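<p>As a rough sketch of the bastion route approach (the service CIDR and node IP below are placeholders; use your cluster's service range and the address of a node running kube-proxy):</p>

<pre><code># on the GCE VM outside the cluster
sudo ip route add 10.0.0.0/16 via 10.240.0.2
</code></pre>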
<p>I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.</p>
|
<p>I'm trying to use kubectl locally after creating a cluster via the cloud console but I keep getting an error. Below are the steps I took:</p>
<p>Via Cloud Console</p>
<blockquote>
<p>gcloud containers cluster create test</p>
</blockquote>
<p>Locally</p>
<blockquote>
<p>gcloud container clusters get-credentials test</p>
<p>kubectl cluster-info</p>
</blockquote>
<p>I'm getting the following error:</p>
<blockquote>
<p>error: failed to negotiate an api version; server supports: map[], client supports: map[componentconfig/v1alpha1:{} rbac.authorization.k8s.io/v1alpha1:{} authentication.k8s.io/v1beta1:{} apps/v1alpha1:{} batch/v1:{} authorization.k8s.io/v1beta1:{} autoscaling/v1:{} batch/v2alpha1:{} v1:{} extensions/v1beta1:{} policy/v1alpha1:{} federation/v1beta1:{}]</p>
</blockquote>
<p>Below is the output of kubectl version</p>
<blockquote>
<p>Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:29:08Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"windows/amd64"}</p>
</blockquote>
<p>Below is extracted from from the output of kubectl cluster-info --v=8</p>
<blockquote>
<p>212 request.go:891] Response Body: Forbidden: "/api"</p>
</blockquote>
<p>Any help is greatly appreciated</p>
| <p>Run again with --v=8 to see a trace of all the network calls</p>
|
<p>Question is similar to following SO question. But I am not looking forward to create classic load balancer. </p>
<p><a href="https://stackoverflow.com/questions/31611503/how-to-create-kubernetes-load-balancer-on-aws">How to create Kubernetes load balancer on aws</a> </p>
<p>AWS now provide 2 types of loadbalancer, classic load balancer and application load balancer. Please read following document for more information,</p>
<p><a href="https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/</a></p>
<p>I already know how classic load balancer work with kubernetes. I wonder if there is any flag/ tool exist so that we can also configure application loadbalancer.</p>
| <p>I can tell you that as of K8 v1.2.3/4 there is no built-in support for Application Load Balancers. </p>
<p>That said, what I do is expose internally load balanced pods via a service NodePort. You can then implement any type of AWS load balancing you would like, including new Application Load Balancing features such as Content-Based Routing, by setting up your own AWS ALB that directs a URL path like /blog to a specific NodePort.</p>
<p>You can read more about NodePorts here: <a href="http://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow">http://kubernetes.io/docs/user-guide/services/#type-nodeport</a></p>
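<p>A minimal sketch of such a NodePort service (names and ports are placeholders):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: NodePort
  selector:
    app: blog
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
</code></pre>

<p>Your ALB target group would then point at port 30080 on the worker nodes, with the ALB's path rule (e.g. <code>/blog</code>) in front of it.</p>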
<p>For bonus points, you could script the creation of the ALB via something like BOTO3 and have it provisioned when you provision the K8 services/pods/rc.</p>
|
<p>With the rise of containers, Kubernetes, 12 Factor etc, it has become easier to replicate an identical environment across dev, staging and production. However, there appears to be no common standard for domain name conventions.</p>
<p>As far as I can see it, there are two ways of doing it:</p>
<ul>
<li>Use subdomains:
<ul>
<li><code>*.dev.foobar.tld</code></li>
<li><code>*.staging.foobar.tld</code></li>
<li><code>*.foobar.tld</code></li>
</ul></li>
<li>Use separate domains:
<ul>
<li><code>*.foobar-dev.tld</code></li>
<li><code>*.foobar-staging.tld</code></li>
<li><code>*.foobar.tld</code></li>
</ul></li>
</ul>
<p>I can see ups and downs with both approaches, but I'm curious what the common practice is.</p>
<p>As a side-note, Cloudflare will not issue you certificates for sub-sub domains (e.g. <code>*.stage.foobar.tld</code>).</p>
| <blockquote>
<p>There are only two hard things in Computer Science: cache invalidation
and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>Depends on the company size.</p>
<p>Small businesses usually go for dashes and get the wildcard certificate.
So they would have <code>dev.example.com, test.example.com</code></p>
<p>In larger enterprises they usually have a DNS infrastructure rolled out and the provisioning processes takes care of the assignment. It usually looks like</p>
<pre><code>aws-eu-central-1.appName.staging.[teamName].example.com
</code></pre>
<p>They would either use their own self-signed certs with the CA on all servers or have the money for the SANs.</p>
<p>For more inspiration:</p>
<p><a href="https://blog.serverdensity.com/server-naming-conventions-and-best-practices/" rel="noreferrer">https://blog.serverdensity.com/server-naming-conventions-and-best-practices/</a></p>
<p><a href="https://mnx.io/blog/a-proper-server-naming-scheme/" rel="noreferrer">https://mnx.io/blog/a-proper-server-naming-scheme/</a></p>
<p><a href="https://namingschemes.com/" rel="noreferrer">https://namingschemes.com/</a></p>
|
<p>I have had kubelet configured and running properly for some days. However, now I want to add some image GC parameters to kubelet. I am curious about whether I can just add the command-line parameters and restart kubelet. </p>
<p>Anything I should take care of before or after doing this?</p>
| <p>It should be completely safe to bring Kubelet down for a short while. If it is down too long (40 seconds, I think), the master might detect the node as dead, though :)</p>
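<p>As a sketch, assuming a systemd-managed kubelet, adding the image GC flags (for example <code>--image-gc-high-threshold</code> / <code>--image-gc-low-threshold</code>) to the unit or its environment file and then restarting would look like:</p>

<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet
</code></pre>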
|
<p>We wrote some Go code to talk to our Kubernetes cluster and fetch the IP of a Service exposed. We do it like so:</p>
<pre><code>(import "gopkg.in/kubernetes/kubernetes.v1/pkg/client/restclient")
(import kubectl "gopkg.in/kubernetes/kubernetes.v1/pkg/client/unversioned")
svc, err := c.Services(k8sNS).Get(svcName)
if err != nil {
panic(l.Errorf("Could not retrieve svc details. %s", err.Error()))
}
svcIP := svc.Status.LoadBalancer.Ingress[0].IP
</code></pre>
<p><code>go get</code> works fine, and our script executes when we do <code>go run ...</code> and everybody is happy. Now, as of yesterday (from the time this question is posted) on the same script - <code>go get</code> fails. The error is like so:</p>
<pre><code>[09.07.2016 10:56 AM]$ go get
package k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install: cannot find package "k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install" in any of:
/usr/local/go/src/k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install (from $GOROOT)
/home/ckotha/godir/src/k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install (from $GOPATH)
</code></pre>
<p>We have not specifically used the <code>authentication</code> package in our code. Are we importing the kubernetes libraries correctly? Is there another way to do this?</p>
<p>I ran <code>ls</code> on <code>$GOPATH/k8s.io/kubernetes/pkg/apis/</code> and found this:</p>
<pre><code>:~/godir/src/k8s.io/kubernetes/pkg/apis
[09.07.2016 10:53 AM]$ ls
abac apps authentication authorization autoscaling batch certificates componentconfig extensions imagepolicy OWNERS policy rbac storage
</code></pre>
| <p>It looks like a package you imported has changed.</p>
<p>You can update existing repositories:</p>
<pre><code>go get -u
</code></pre>
<blockquote>
<p>The -u flag instructs get to use the network to update the named
packages and their dependencies. By default, get uses the network to
check out missing packages but does not use it to look for updates to
existing packages.</p>
</blockquote>
<p>You do use <a href="http://labix.org/gopkg.in" rel="nofollow">gopkg.in</a> to pin the version to v1, but I think you want to be more specific, e.g. v1.3.6 (EDIT: this won't work because gopkg.in doesn't permit package selectors more specific than the major version). </p>
<p>Alternatively, a good way to ensure code stays the same is to compile your binary and execute that, instead of using <code>go run</code>.</p>
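<p>For example (the binary name is arbitrary):</p>

<pre><code>go build -o svc-ip .
./svc-ip
</code></pre>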
|
<p>When I do a <code>systemctl status docker</code> I get: "Dependency failed for Docker Application Container Engine. May 20 13:06:52 localhost systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'".</p>
<p>I'm using the CoreOS install documentation, with the kubelet (master) all on the same node.</p>
<p>Where would I start looking to debug this from?</p>
| <p>I know it's a little late, but I think you need to ensure the <code>flanneld.service</code> unit is running. If you're following the CoreOS step-by-step documentation for building a Kubernetes cluster with CoreOS, then flanneld is a dependency for the docker engine.</p>
<p>If you made a systemd drop-in replacement in <code>/etc/systemd/system/docker.service.d/40-flannel.conf</code>, then that is most likely the case as evidenced here:</p>
<pre><code>[Unit]
Requires=flanneld.service
After=flanneld.service
</code></pre>
|
<p>First of all, any help is appreciated.</p>
<p>I want to execute a command in a container, so I run:</p>
<pre><code>kubectl exec -ti busybox bash
</code></pre>
<p>But if I input more than about 70 characters in bash, the output gets truncated and the lines break, which makes it unreadable.</p>
<p>Is there a way to increase the characters per line when using kubectl exec? </p>
<pre><code>Environment (also see manifests for more detailed info):
Centos 7.0.1406
Kubernetes: 1.2.0
etcd: 2.3.7
flannel: 0.5.3
docker:1.10.3
</code></pre>
<p>Thanks a lot for any suggestions.</p>
| <p>This will be supported in the upcoming Kubernetes 1.4 release (if you're interested, see its <a href="https://github.com/kubernetes/kubernetes/pull/25273" rel="nofollow">fix</a>). </p>
|
<p>My kubernetes pods are all able to resolve hostnames and ping servers that are on the wider Internet, but they can't do either for our VMs running in the same zone & region on Google Compute Engine.</p>
<p>How does one tell kubernetes / docker to allow outbound traffic to the Google Compute Engine environment (our subnet is 10.240.0.0) and to resolve hostnames for that subnet using 10.240.0.1?</p>
| <p>Very silly mistake on my part. </p>
<p>Our Google Container Cluster was configured to use a custom network in the Google Developer Console, while our Google Compute Engine VMs were all configured to use the default network. </p>
<p>That explains that. Make sure the machines are all on the same network and then everything works as you'd hope.</p>
|
<p>I've set up a Kubernetes cluster on Ubuntu (trusty) based on the <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="nofollow">Running Kubernetes Locally via Docker</a> guide, deployed a DNS and run Heapster with an InfluxDB backend and a Grafana UI.</p>
<p>Everything seems to run smoothly except for Grafana, which doesn't show any graphs but the message <code>No datapoints</code> in its diagrams: <a href="http://i.stack.imgur.com/W2eLv.png" rel="nofollow">Screenshot</a></p>
<p>After checking the Docker container logs I found out that Heapster is unable to access the kubelet API (?) and therefore no metrics are persisted into InfluxDB:</p>
<pre><code>user@host:~$ docker logs e490a3ac10a8
I0701 07:07:30.829745 1 heapster.go:65] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086
I0701 07:07:30.830082 1 heapster.go:66] Heapster version 1.2.0-beta.0
I0701 07:07:30.830809 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version v1
I0701 07:07:30.831284 1 configs.go:61] Using kubelet port 10255
E0701 07:09:38.196674 1 influxdb.go:209] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 10.0.0.223:8086: getsockopt: connection timed out, will retry on use
I0701 07:09:38.196919 1 influxdb.go:223] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s
I0701 07:09:38.197048 1 heapster.go:92] Starting with InfluxDB Sink
I0701 07:09:38.197154 1 heapster.go:92] Starting with Metric Sink
I0701 07:09:38.228046 1 heapster.go:171] Starting heapster on port 8082
I0701 07:10:05.000370 1 manager.go:79] Scraping metrics start: 2016-07-01 07:09:00 +0000 UTC, end: 2016-07-01 07:10:00 +0000 UTC
E0701 07:10:05.008785 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused
I0701 07:10:05.009119 1 manager.go:152] ScrapeMetrics: time: 8.013178ms size: 0
I0701 07:11:05.001185 1 manager.go:79] Scraping metrics start: 2016-07-01 07:10:00 +0000 UTC, end: 2016-07-01 07:11:00 +0000 UTC
E0701 07:11:05.007130 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused
I0701 07:11:05.007686 1 manager.go:152] ScrapeMetrics: time: 5.945236ms size: 0
W0701 07:11:25.010298 1 manager.go:119] Failed to push data to sink: InfluxDB Sink
I0701 07:12:05.000420 1 manager.go:79] Scraping metrics start: 2016-07-01 07:11:00 +0000 UTC, end: 2016-07-01 07:12:00 +0000 UTC
E0701 07:12:05.002413 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused
I0701 07:12:05.002467 1 manager.go:152] ScrapeMetrics: time: 1.93825ms size: 0
E0701 07:12:12.309151 1 influxdb.go:150] Failed to create infuxdb: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 10.0.0.223:8086: getsockopt: connection timed out
I0701 07:12:12.351348 1 influxdb.go:201] Created database "k8s" on influxDB server at "monitoring-influxdb:8086"
I0701 07:13:05.001052 1 manager.go:79] Scraping metrics start: 2016-07-01 07:12:00 +0000 UTC, end: 2016-07-01 07:13:00 +0000 UTC
E0701 07:13:05.015947 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused
...
</code></pre>
<p>I found a few issues on GitHub describing similar problems that made me understand that Heapster doesn't access the kubelet (via the node's loopback) but itself (via the container's loopback) instead. However, I fail to reproduce their solutions:</p>
<p><strong>github.com/kubernetes/heapster/issues/1183</strong></p>
<blockquote>
<p>You should either use host networking for Heapster pod or configure your cluster in a way that the node has a regular name not 127.0.0.1. The current problem is that node name is resolved to Heapster localhost. Please reopen in case of more problems.</p>
</blockquote>
<p><em>-@piosz</em></p>
<ul>
<li>How do I enable "host networking" for my Heapster pod?</li>
<li>How do I configure my cluster/node to use a regular name not 127.0.0.1?</li>
</ul>
<p><strong>github.com/kubernetes/heapster/issues/744</strong></p>
<blockquote>
<p>Fixed by using better options in hyperkube, thanks for the help!</p>
</blockquote>
<p><em>-@ddispaltro</em></p>
<ul>
<li>Is there a way to solve this issue by adding/modifying kubelet's option flags in <code>docker run</code>? <br> I tried setting<code>--hostname-override=<host's eth0 IP></code> and <code>--address=127.0.0.1</code> (as suggested in the last answer of this GitHub issue) but Heapster's container log then states: <br> <br><code>I0701 08:23:05.000566 1 manager.go:79] Scraping metrics start: 2016-07-01 08:22:00 +0000 UTC, end: 2016-07-01 08:23:00 +0000 UTC
E0701 08:23:05.000962 1 kubelet.go:279] Node 127.0.0.1 is not ready
E0701 08:23:05.003018 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://<host's eth0 IP>:10255/stats/container/": Post http://<host's eth0 IP>/stats/container/: dial tcp <host's eth0 IP>:10255: getsockopt: connection refused
</code></li>
</ul>
<p><strong>Namespace issue</strong></p>
<p>Could this problem be caused by the fact that I'm running Kubernetes API in <code>default</code> namespace and Heapster in <code>kube-system</code>?</p>
<pre><code>user@host:~$ kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default k8s-etcd-127.0.0.1 1/1 Running 0 18h
default k8s-master-127.0.0.1 4/4 Running 1 18h
default k8s-proxy-127.0.0.1 1/1 Running 0 18h
kube-system heapster-lizks 1/1 Running 0 18h
kube-system influxdb-grafana-e0pk2 2/2 Running 0 18h
kube-system kube-dns-v10-4vjhm 4/4 Running 0 18h
</code></pre>
<hr>
<p><em>OS: Ubuntu 14.04.4 LTS (trusty) |
Kubernetes: v1.2.5 |
Docker: v1.11.2</em></p>
| <p>Heapster gets the list of nodes from Kubernetes and then tries to pull stats from the kubelet process on each node (which has a built-in cAdvisor collecting stats on the node). In this case there's only one node and it's known to Kubernetes as 127.0.0.1. And there's the problem: the Heapster container is trying to reach the node at 127.0.0.1, which is itself, and of course finds no kubelet process to interrogate within the Heapster container.</p>
<p>Two things need to happen to resolve this issue.</p>
<ol>
<li>We need to reference the kubelet worker node (our host machine running kubernetes) by something other than the loopback network address of 127.0.0.1 </li>
<li>The kubelet process needs to accept traffic from the new network interface/address</li>
</ol>
<p>Assuming you are using the local installation guide and starting kubernetes off with</p>
<pre><code>hack/local-up-cluster.sh
</code></pre>
<p>Changing the hostname by which the kubelet is referenced is pretty simple. You can take more elaborate approaches, but setting this to your eth0 IP worked fine for me (ifconfig eth0). The downside is that you need an eth0 interface and this is subject to DHCP, so your mileage may vary as to how convenient this is. </p>
<pre><code>export HOSTNAME_OVERRIDE=10.0.2.15
</code></pre>
<p>Getting the kubelet process to accept traffic from any network interface is just as simple. </p>
<pre><code>export KUBELET_HOST=0.0.0.0
</code></pre>
|
<p>I've gone through the steps to create a persistent disk in google compute engine and attach it to a running VM instance. I've also created a docker image with a VOLUME directive. It runs fine locally; in the docker run command I can pass a -v option to mount a host directory as the volume. I thought there would be a similar command in kubectl, but I don't see one. How can I mount my persistent disk as the docker volume?</p>
| <p>In your pod spec, you may specify a Kubernetes <a href="http://kubernetes.io/docs/user-guide/volumes/#gcepersistentdisk" rel="nofollow"><code>gcePersistentDisk</code></a> volume (the <code>spec.volumes</code> field) and where to mount that volume into containers (the <code>spec.containers.volumeMounts</code> field). Here's an example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
# This GCE PD must already exist.
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
</code></pre>
<p>Read more about Kubernetes volumes: <a href="http://kubernetes.io/docs/user-guide/volumes" rel="nofollow">http://kubernetes.io/docs/user-guide/volumes</a></p>
|
<p>I've set up prometheus to monitor kubernetes metrics by following the prometheus <a href="https://github.com/prometheus/docs/blob/master/content/docs/operating/configuration.md" rel="noreferrer">documentation</a>.</p>
<p>A lot of useful metrics now show up in prometheus.</p>
<p>However, I can't see any metrics referencing the status of my pods or nodes.</p>
<p>Ideally - I'd like to be able to graph the pod status (Running, Pending, CrashLoopBackOff, Error) and nodes (NodeReady, Ready).</p>
<p>Is this metric anywhere? If not, can I add it somewhere? And how?</p>
| <p>The regular kubernetes setup does not expose these metrics - further discussion <a href="https://github.com/kubernetes/kubernetes/pull/31107#discussion_r75673822" rel="noreferrer">here</a>. </p>
<p>However, another service can be used to collect these cluster level metrics: <a href="https://github.com/kubernetes/kube-state-metrics" rel="noreferrer">https://github.com/kubernetes/kube-state-metrics</a>.</p>
<p>This currently provides node_status_ready and pod_container_restarts which sound like what I want.</p>
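<p>Once those are scraped you could graph them with queries along these lines (a sketch, assuming the metric names above):</p>

<pre><code># number of nodes reporting Ready
sum(node_status_ready)

# total container restarts
sum(pod_container_restarts)
</code></pre>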
|
<p>If I do <code>kubectl get deployments</code>, I get:</p>
<pre><code>$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
analytics-rethinkdb 1 1 1 1 18h
frontend 1 1 1 1 6h
queue 1 1 1 1 6h
</code></pre>
<p>Is it possible to rename the deployment to <code>rethinkdb</code>? I have tried googling <code>kubectl edit analytics-rethinkdb</code> and changing the name in the yaml, but that results in an error:</p>
<pre><code>$ kubectl edit deployments/analytics-rethinkdb
error: metadata.name should not be changed
</code></pre>
<p>I realize I can just <code>kubectl delete deployments/analytics-rethinkdb</code> and then do <code>kubectl run analytics --image=rethinkdb --command -- rethinkdb etc etc</code> but I feel like it should be possible to simply rename it, no?</p>
| <p>Object names are immutable in Kubernetes. If you want to change a name, you can export, edit, and recreate the object with a different name.</p>
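<p>A sketch of that flow for the deployment in question:</p>

<pre><code>kubectl get deployment analytics-rethinkdb -o yaml > rethinkdb.yaml
# edit metadata.name in rethinkdb.yaml to "rethinkdb"
kubectl create -f rethinkdb.yaml
kubectl delete deployment analytics-rethinkdb
</code></pre>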
|
<p>in a kubernetes Deployment yaml file is there a simple way to run multiple commands in the postStart hook of a container?</p>
<p>I'm trying to do something like this:</p>
<pre><code>lifecycle:
postStart:
exec:
command: ["/bin/cp", "/webapps/myapp.war", "/apps/"]
command: ["/bin/mkdir", "-p", "/conf/myapp"]
command: ["touch", "/conf/myapp/ready.txt"]
</code></pre>
<p>But it doesn't work.
(looks like only the last command is executed)</p>
<p>I know I could embed a script in the container image and simply call it there... But I would like to be able to customize those commands in the yaml file without touching the container image.</p>
<p>thanks</p>
| <p>Only one <code>command</code> allowed, but you can use <code>sh -c</code> like this</p>
<pre><code> lifecycle:
postStart:
exec:
command:
- "sh"
- "-c"
- >
if [ -s /var/www/mybb/inc/config.php ]; then
rm -rf /var/www/mybb/install;
fi;
if [ ! -f /var/www/mybb/index.php ]; then
cp -rp /originroot/var/www/mybb/. /var/www/mybb/;
fi
</code></pre>
|
<p>As per documentation to enable cluster metrics, I should create re-encrypting route as per the below statement</p>
<pre><code>$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics
--dest-ca-cert /path/to/internal-ca.crt
</code></pre>
<ul>
<li>What exactly should I use for these keys and certificates? </li>
<li>Are these already exists somewhere or I need to create them?</li>
</ul>
| <p>Openshift Metrics developer here.</p>
<p>Sorry if the docs were not clear enough.</p>
<p>The route is used to expose Hawkular Metrics, particularly to the browser running the OpenShift console.</p>
<p>If you don't specify any certificates, the system will use a self signed certificate instead. The browser will complain that this self signed certificate is not trusted, but you can usually just click through to accept it anyways. If you are ok with this, then you don't need to do any extra steps.</p>
<p>If you want the browser to trust this connection by default, then you will need to provide your own certificates signed by a trusted certificate authority. This is exactly similar to how you would have to generate your own certificate if you are running a normal site under https.</p>
<p>From the following command:</p>
<blockquote>
<p>$ oc create route reencrypt hawkular-metrics-reencrypt \ --hostname hawkular-metrics.example.com \ --key /path/to/key \ --cert /path/to/cert \ --ca-cert /path/to/ca.crt \ --service hawkular-metrics --dest-ca-cert /path/to/internal-ca.crt</p>
</blockquote>
<p>'cert' corresponds to your certificate signed by the certificate authority</p>
<p>'key' corresponds to the key for your certificate</p>
<p>'ca-cert' corresponds to the certificate authorities certificate</p>
<p>'dest-ca-cert' corresponds to the certificate authority which signed the self signed certificate generated by the metrics deployer</p>
<p>The docs <a href="https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route" rel="nofollow">https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route</a> should explain how to get the dest-ca-cert from the system</p>
|
<p>I have a Kubernetes pod running on Google Container Engine. It has been running for several days, writing to the log, without any restarts.</p>
<p>Why is the command <code>kubectl logs</code> only showing log rows from today?</p>
<p>Where does this limitation come from and is it based on time or number of log rows?</p>
| <p>Logrotate is enabled by default on container engine VMs. You should be able to check conf at below location for docker container logs.</p>
<pre><code>cat /etc/logrotate.d/docker-containers
</code></pre>
<p>So when you run <code>kubectl logs</code>, it streams you from current log file. Past logs are already gzipped and also only N compressed files are available as per logrotate configuration.</p>
<p>You can check all containers log files at location <code>/var/log/containers</code> also sym linked to <code>/var/lib/docker/containers/$containerId/</code></p>
<p>You may also want to refer to documentation <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/" rel="noreferrer">http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/</a> and see if additional options can come to rescue.</p>
|
<pre><code>kubectl logs <pod-id>
</code></pre>
<p>gets the latest logs from my deployment. I am working on a bug and am interested in seeing the logs at runtime. How can I get a continuous stream of logs?</p>
<p>edit: corrected question at the end.</p>
| <pre><code>kubectl logs -f <pod-id>
</code></pre>
<p>You can use the <code>-f</code> flag:</p>
<p><code>-f, --follow=false: Specify if the logs should be streamed.</code></p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs</a></p>
|
<p>I have setup a Kubernetes cluster (on GKE). Running multiple services but mainly serving a webapp through a stock Ingress controller.</p>
<p>Are there any benefits to running a reverse proxy behind the Ingress? The TLS is terminated at this point, so it's not for that. Maybe for some server hardening?</p>
| <p>The only advantage of chaining the GKE ingress and Nginx is the ability to use the more advanced configuration that Nginx provides.</p>
<p>Currently there are benefits to running your own reverse proxy and skipping the ingress, or to using a <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow">custom ingress controller</a> instead of the GKE one. The main benefits of doing this are:</p>
<ul>
<li>Cost saving, each GKE ingress resource costs money.</li>
<li>Advanced routing/config offered by Nginx or other proxy.</li>
<li>Bypasses the current GKE Ingress beta limitations.</li>
</ul>
|
<p>I'm trying to create an autoscaled container cluster on GKE.
When I use the "--enable-autoscaling" option (like the documentation indicates here: <a href="https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling" rel="nofollow">https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling</a>):</p>
<pre><code>$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
</code></pre>
<p>but the MIG (Managed Instanced Group) is not displayed as 'autoscaled' as shown by both the web interface and the result of the following command :</p>
<pre><code>$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
</code></pre>
<p>Why ?</p>
<p>Then, I tried the other way indicated in the kubernetes docs (<a href="http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling" rel="nofollow">http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling</a>) but got an error caused by the '=true' apparently :</p>
<pre><code>$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
</code></pre>
<p>Is the doc wrong on this?
Here are my gcloud version results:</p>
<pre><code>$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
</code></pre>
<p>One last detail: the autoscaler seems to be 'on' in the description of the cluster:</p>
<pre><code>$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
</code></pre>
<p>Any idea to explain this behaviour please ?</p>
| <p>Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a <code>cluster-autoscaler</code> controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">The code</a> is in the <code>autoscaler</code> repo if you want more info.</p>
<p>I've also sent out <a href="https://github.com/kubernetes/kubernetes.github.io/pull/1211" rel="nofollow noreferrer">a PR</a> to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!</p>
|
<p>I’m running into issues while creating the container. I’m using Ubuntu 16.04, docker 1.12.1, flannel 0.5.5, and an etcd datastore.</p>
<pre><code>sudo systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2016-09-12 14:23:02 EDT; 3h 6min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 15788 (kubelet)
Tasks: 9
Memory: 848.0K
CPU: 815ms
CGroup: /system.slice/kubelet.service
Sep 12 17:19:40 vm3-VirtualBox kubelet[15788]: W0912 17:19:40.585677 15788 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docke
Sep 12 17:20:40 vm3-VirtualBox kubelet[15788]: W0912 17:20:40.615756 15788 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docke
Sep 12 17:21:40 vm3-VirtualBox kubelet[15788]: W0912 17:21:40.624172 15788
Sep 12 17:23:40 vm3-VirtualBox kubelet[15788]: W0912 17:23:40.657396 15788 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker
belet[15788]: W0912 16:47:40.051784 15788 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docke
Sep 12 16:48:06 vm3-VirtualBox sudo[19448]: pam_unix(sudo:session): session closed for user root
Sep 12 16:48:40 vm3-VirtualBox kubelet[15788]: W0912 16:48:40.073855 15788 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docke
</code></pre>
<p>Master node</p>
<pre><code>kubectl describe pods my-first-nginx-a9bgy
Replication Controllers: my-first-nginx (1/1 replicas created)
Containers:
my-first-nginx:
Container ID:
Image: nginx
Image ID:
State: Waiting
Reason: ContainerCreating
1m 1m 1 {kubelet implicitly required container POD Created Created with docker id 9fc5d67d3921
1m 1m 1 {kubelet implicitly required container POD Failed Failed to start with docker id 9fc5d67d3921 with error: API error (400): {"message":"starting container with HostConfig was deprecated since v1.10 and removed in v1.12"}
{kubelet } implicitly required container POD Created Created with docker id f55e2b6538b5
1m 6s 10 {kubelet FailedSync Error syncing pod, skipping: API error (400): {"messag
"starting container with HostConfig was deprecated since v1.10 and removed in v1.12"}
</code></pre>
<p>Do I need to make any changes in /lib/systemd/system/docker.service or in
/etc/default/docker? Is there any workaround? I've read in a few posts that kubernetes has some problems with the latest docker version.</p>
<p>Any help and suggestion on this will be really appreciated.</p>
| <p>HostConfig is deprecated in docker v1.12. Kubernetes <a href="https://github.com/kubernetes/kubernetes/pull/20615" rel="nofollow">made a corresponding switch</a> to deprecate HostConfig in v1.2, so you will need a newer version (v1.2+) kubernetes to work with docker v1.12.</p>
<p>Another caveat is that only the coming kubernetes 1.4 release claims to be compatible with docker v1.12. All older versions of kubernetes were not tested against docker v1.12. You might be better off using an older version of docker, or simply switch to kubernetes v1.4 beta. </p>
|
<p>I'm hosting on google container engine. Recently one of my team mates left the company and I want to revoke his access rights to the cluster. I removed his account from the compute engine project already, yet he can still access the cluster.</p>
<p>He got access through <code>gcloud container clusters get-credentials <cluster></code>. The entries I see in my <code>~/.kube/config</code> look as if I get the same certificate as all of my colleagues.</p>
<p>What do I need to do to remove him from the cluster? To me it seems as if there is zero documentation on this topic.</p>
<p><em>Additional Note:</em>
The cluster is still on kubernetes 1.2.5</p>
| <p>When using the per-cluster certificate, there is currently no way to revoke/rotate certificates (see <a href="https://github.com/kubernetes/kubernetes/issues/4672" rel="nofollow">Issue #4672</a>). The only way to completely revoke access is to delete and recreate the cluster.</p>
<p>If you instead use Google OAuth2 credentials to access your cluster (the default w/ a 1.3 cluster and an up-to-date client), permissions are tied to your project's <a href="http://console.developers.google.com/iam-admin/iam/project" rel="nofollow">IAM configuration</a>, and can be revoked/changed at any time.</p>
<p>Retrieving the cluster certificate requires the caller to have <code>container.clusters.getCredentials</code> permission, which is contained by the <code>Container Engine Admin</code> and <code>Editor</code> roles. As long as the roles that you give to your team members do not contain that permission (e.g. <code>Container Engine Developer</code>), they will not be able to retrieve cluster certificates.</p>
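<p>For example, something along these lines should grant a team member only the developer role (project ID and e-mail are placeholders):</p>

<pre><code>gcloud projects add-iam-policy-binding my-project \
    --member=user:teammate@example.com \
    --role=roles/container.developer
</code></pre>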
<p>Here are the <a href="https://cloud.google.com/container-engine/docs/iam-integration" rel="nofollow">GKE IAM docs</a> for more info on GKE permissions and roles.</p>
|
<p>Suppose I want to create a k8s cluster on bare metal servers, with 1 master and 2 nodes. What ports do I have to open in my firewall so that the master and nodes can communicate over the Internet? (I know I can just use VPN, but I just want to know which ports I need). I guess I need at least the following ports. Do I need more? How about if I'm using Flannel or Calico? I want to create a <em>comprehensive</em> list of all possible k8s services and needed ports. Thank you.</p>
<p>kubectl - 8080</p>
<p>ui - 80 or 443 or 9090</p>
<p>etcd - 2379, 2380</p>
| <p>the ports for kubernetes are the following:</p>
<p><a href="https://i.stack.imgur.com/GY4ae.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GY4ae.png" alt="enter image description here"></a></p>
<p>from the CoreOS <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-networking.html" rel="noreferrer">docs</a>.</p>
|
<p>I've set up a kubernetes cluster with three masters. The kube-apiserver should be stateless. To properly access them from the worker nodes, I've configured an haproxy which is configured to provide the ports (8080) of the apiserver.</p>
<pre><code>frontend http_front_8080
bind *:8080
stats uri /haproxy?stats
default_backend http_back_8080
backend http_back_8080
balance roundrobin
server m01 192.168.33.21:8080 check
server m02 192.168.33.22:8080 check
server m03 192.168.33.23:8080 check
</code></pre>
<p>But when I run the nodes with the load balancer's IP as the address of the apiserver, I receive these errors:</p>
<pre><code>Apr 20 12:35:07 n01 kubelet[3383]: E0420 12:35:07.308337 3383 reflector.go:271] pkg/kubelet/kubelet.go:240: Failed to watch *api.Service: too old resource version: 4001 (4041)
Apr 20 12:36:48 n01 kubelet[3383]: E0420 12:36:48.321021 3383 reflector.go:271] pkg/kubelet/kubelet.go:240: Failed to watch *api.Service: too old resource version: 4011 (4041)
Apr 20 12:37:31 n01 kube-proxy[3408]: E0420 12:37:31.381042 3408 reflector.go:271] pkg/proxy/config/api.go:47: Failed to watch *api.Service: too old resource version: 4011 (4041)
Apr 20 12:41:42 n01 kube-proxy[3408]: E0420 12:41:42.409604 3408 reflector.go:271] pkg/proxy/config/api.go:47: Failed to watch *api.Service: too old resource version: 4011 (4041)
</code></pre>
<p>If I change the load balancer's IP to one of the master nodes it works as expected (without the error messages above).</p>
<p>Am I missing something in my haproxy configuration which is vital for running this setup?</p>
| <p>I had the same issue as you. I assume the watch requires some sort of state on the api server side.
The solution is to change the configuration so all the requests from a client go to the same server, using balance source. I assume you only have multiple api servers to make kubernetes highly available, rather than for load balancing.</p>
<pre><code>frontend http_front_8080
bind *:8080
stats uri /haproxy?stats
default_backend http_back_8080
backend http_back_8080
balance source
server m01 192.168.33.21:8080 check
server m02 192.168.33.22:8080 check
server m03 192.168.33.23:8080 check
</code></pre>
|
<p>I need to let the container run for 5 minutes after <code>kubectl</code> initiates its termination. It needs to do some work before it's destroyed. It seems that kubernetes contains exactly what I need:</p>
<pre><code>terminationGracePeriodSeconds: 300
</code></pre>
<p>so I defined it within my yaml. I've updated running <code>RCs</code>, delete current pods so new ones were created and now I can see that a pod contains exactly this setting via <code>get pod xyz -o=yaml</code>.</p>
<p>Unfortunately, when I tried to do <code>rolling-update</code>, the original pod was killed after exactly 1 minute, not after 5 minutes. I does ssh to the target machine and I could see that docker termineted the container after this time.</p>
<p>I tried to do some investigation how the feature works. I finally found the documentation to <code>kubectl delete</code> where there is a notion about graceful termination period:</p>
<p><a href="http://kubernetes.io/docs/user-guide/pods/" rel="noreferrer">http://kubernetes.io/docs/user-guide/pods/</a></p>
<blockquote>
<p>By default, all deletes are graceful within 30 seconds. The kubectl delete command supports the --grace-period= option which allows a user to override the default and specify their own value. The value 0 indicates that delete should be immediate, and removes the pod in the API immediately so a new pod can be created with the same name. On the node pods that are set to terminate immediately will still be given a small grace period before being force killed</p>
</blockquote>
<p>So I took one pod, nginx, and tried to delete it with <code>grace-period=30</code>. It turned out that the original pod was immediately deleted and <code>get pods</code> showed that a new one was being started.</p>
<p>So no 30 seconds. What am I doing wrong? It seems that kubernetes does not take these values into account for any pods.
Note that I'm using kubernetes v1.2.2</p>
<p>I also found this issue <a href="https://github.com/kubernetes/kubernetes/issues/24695" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/24695</a> where the reporter had the same problem and solved it in the same fashion. So e.g. 300 seconds is not too much for kubernetes.</p>
| <p>You can use the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">container lifecycle hook</a> <code>preStop</code> to have your pod sleep before terminating. This hook will be executed prior to <code>kubectl</code> sending <code>SIGTERM</code> to your container.</p>
<p>Example configuration from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">the docs</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sleep","300"]
</code></pre>
|
<p>This is the way I understand the flow in question:</p>
<ol>
<li>When requesting a kubernetes service (via http for example) I am using port 80.</li>
<li>The request is forwarded to a pod (still on port 80)</li>
<li>The port forwards the request to the (docker) container that exposes port 80</li>
<li>The container handles the request</li>
</ol>
<p>However, my container exposes a different port, let's say 3000.
How can I make a port mapping like 80:3000 in step 2 or 3?</p>
<p>There are confusing options like <code>targetport</code> and <code>hostport</code> in the kubernetes docs which didn't help me. <code>kubectl port-forward</code> seems to forward only my local (development) machine's port to a specific pod for debugging.</p>
<p>These are the commands I use for setting up a service in the google cloud:</p>
<pre><code>kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
</code></pre>
| <p>I found that I needed to add some arguments to my second command:</p>
<pre><code>kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
</code></pre>
<p>This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.</p>
<p>A nicer way to do this whole thing is with yaml files <code>service.yaml</code> and <code>deployment.yaml</code> and calling</p>
<pre><code>kubectl create -f deployment.yaml
kubectl create -f service.yaml
</code></pre>
<p>where the files have these contents</p>
<pre><code># deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-deployment
spec:
replicas: 2
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: user-app
image: eu.gcr.io/myproject/my_app
ports:
- containerPort: 3000
</code></pre>
<p>and</p>
<pre><code># service.yaml
apiVersion: v1
kind: Service
metadata:
name: app-service
spec:
selector:
app: test-app
ports:
- port: 80
targetPort: 3000
type: LoadBalancer
</code></pre>
<p>Note that the selector of the service must match the label of the deployment.</p>
|
<p>Here is my deployment template:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
name: XXX
spec:
replicas: 1
revisionHistoryLimit : 0
strategy:
type : "RollingUpdate"
rollingUpdate:
maxUnavailable : 0%
maxSurge : 100%
selector:
matchLabels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
template:
metadata:
labels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
spec:
containers:
- image: docker-registry:{{ xxx-version }}
name: XXX
ports:
- name: XXX
containerPort: 9000
</code></pre>
| <p>The key section in the documentation that's relevant to this issues is:</p>
<blockquote>
<p>Existing Replica Set controlling Pods whose labels match <code>.spec.selector</code>but whose template does not match <code>.spec.template</code> are scaled down. Eventually, the new Replica Set will be scaled to <code>.spec.replicas</code> and all old Replica Sets will be scaled to 0.</p>
</blockquote>
<p><a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">http://kubernetes.io/docs/user-guide/deployments/</a></p>
<p>So the spec.selector should not vary across multiple deployments:</p>
<pre><code>selector:
matchLabels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
</code></pre>
<p>should become:</p>
<pre><code>selector:
matchLabels:
name: XXX
</code></pre>
<p>The rest of the labels can remain the same</p>
|
<p>I have installed minikube and started up its built in Kubernertes cluster</p>
<pre><code>$ minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.100:443.
Kubectl is now configured to use the cluster.
</code></pre>
<p>I also have kubectl installed</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"clean", BuildDate:"2016-07-01T19:26:38Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>But I can't successfully use kubectl to speak to the running Kubernetes cluster</p>
<pre><code>$ kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<hr>
<p>EDIT</p>
<pre><code>$ minikube logs
E0712 19:02:08.767815 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:02:08.767875 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:02:23.767380 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:02:23.767464 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:02:36.766696 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:02:36.766760 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:02:51.767621 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:02:51.767672 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:03:02.766548 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:03:02.766609 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:03:16.766831 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
E0712 19:03:16.766904 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
E0712 19:04:15.829223 1257 docker_manager.go:1955] Failed to create pod infra container: ErrImagePull; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 74.125.28.82:443: i/o timeout)
E0712 19:04:15.829326 1257 pod_workers.go:183] Error syncing pod 48abed82af93bb0b941173334110923f, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp 74.125.28.82:443: i/o timeout)"
E0712 19:04:31.767536 1257 docker_manager.go:1955] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-addon-manager-minikubevm_kube-system(48abed82af93bb0b941173334110923f)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
</code></pre>
| <p>To get it running behind a proxy you need to set up things a bit differently from the documentation.
a. You need to ensure that the docker daemon running within the VM can reach the internet via the proxy.
b. You need to ensure kubectl running on the host can get to the VM without going out through the proxy.</p>
<p>Using the default kubectl example</p>
<ol>
<li>Ensure that the proxy is passed into the VM that is created by minikube (this ensures that the docker daemon within the VM can get out to the internet)</li>
</ol>
<p><code>minikube start --vm-driver="kvm" --docker-env="http_proxy=xxx" --docker-env="https_proxy=yyy" start</code></p>
<p>Note: Replace xxx and yyy with your proxy settings</p>
<ol start="2">
<li>Get the IP that the VM gets at startup.</li>
</ol>
<p><code>minikube ip</code></p>
<p>Note: You need to do this each time you setup minikube as it can change</p>
<ol start="3">
<li>Ensure that kubectl can talk to this VM without going to the proxy</li>
</ol>
<p><code>export no_proxy="127.0.0.1,[minikube_ip]"</code></p>
<ol start="4">
<li>Now kick off the POD and test it</li>
</ol>
<p><code>kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080</code></p>
<p><code>kubectl expose deployment hello-minikube --type=NodePort</code></p>
<p><code>kubectl get pod</code></p>
<p><code>curl $(minikube service hello-minikube --url)</code></p>
|
<p>on GKE I created some pods and a headless service. The headless service has a selector and I am expecting the endpoint to get the IP of the Pod that matches the selector.</p>
<p>However the endpoint remains empty</p>
<pre><code>$ kubectl get pods -lservice=front-end
NAME READY STATUS RESTARTS AGE
front-end-1567472915-tei91 1/1 Running 0 12m
$ kubectl get svc -lapp=sockshop
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end None <none> 11m
$ kubectl get endpoints -lapp=sockshop
NAME ENDPOINTS AGE
front-end <none> 11m
$ more svc.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: sockshop
name: front-end
spec:
clusterIP: None
ports: null
selector:
service: front-end
</code></pre>
<p>I would expect an endpoint to get the IP of the Pod so that the DNS registration works.</p>
| <p>if <code>ports</code> is set to <code>null</code> the endpoint will not get populated.</p>
<p>You need to add a port (even a dummy one) for the endpoint to get populated with the PodIPs of the Pods that match the selector.</p>
<pre><code>ports:
- port: 1234
  protocol: TCP
  targetPort: 1234
</code></pre>
|
<p>Created a 2 node Kubernetes cluster as:</p>
<pre><code>KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
</code></pre>
<p>This shows the output as:</p>
<pre><code>Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
</code></pre>
<p>I can see the instances in EC2 console. How do I ssh into the master node?</p>
| <p>Here is the exact command that worked for me:</p>
<pre><code>ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
</code></pre>
<p><code>kube_aws_rsa</code> is the default key generated, otherwise controlled with <code>AWS_SSH_KEY</code> environment variable. For AWS, it is specified in the file <code>cluster/aws/config-default.sh</code>.</p>
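<p>If you generated the key under a different name, a sketch of pointing at it explicitly (the key path is a placeholder):</p>
<pre><code># tell kube-up.sh which key to use (set before running kube-up.sh)
export AWS_SSH_KEY=$HOME/.ssh/my_aws_key

# the master IP is the host shown as "Kubernetes master is running at https://..."
ssh -i $HOME/.ssh/my_aws_key [email protected]
</code></pre>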
<p>More details about the cluster can be found using <code>kubectl.sh config view</code>.</p>
|
<p>There's a similar question <a href="https://stackoverflow.com/questions/37559704/kubectl-yaml-config-file-equivalent-of-kubectl-run-i-tty">here</a>, but I think I want a different thing.
For those who are familiar with docker-compose, there's a brilliant command which runs a command in a container just once; this is insanely helpful for launching migrations before each deploy:</p>
<pre><code>docker-compose -f docker-compose.prod.yml run web npm run migrate
</code></pre>
<p>Also, because this is a one-line command, it's convenient for automation purposes: for example with a Makefile or ansible/chef/saltstack. </p>
<p>The only thing I've found is <code>kubectl run</code>, which is more similar to <code>docker run</code>. But <code>docker-compose run</code> allows us to use a config file, whereas <code>docker run</code> does not:</p>
<pre><code> kubectl run rp2migrate --command -- npm run migrate
</code></pre>
<p>This would probably work, but I need to list 20 environment variables and really don't want to do that on the command line. Instead, I'd like to pass a flag which would specify a yaml config, like this:</p>
<pre><code> kubectl run rp2migrate -f k8s/rp2/rp2-deployment.yaml --command -- npm run migrate
</code></pre>
| <p>Edit:</p>
<p>Kubernetes also got <code>init containers</code> as a beta feature (as of now) - <a href="http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization" rel="nofollow">http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization</a></p>
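<p>A minimal sketch of that approach, assuming the <code>initContainers</code> pod field (the stable form of this feature in later Kubernetes releases; the beta feature linked above used pod annotations instead). The image name and ConfigMap below are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: rp2migrate
spec:
  initContainers:
  - name: migrate
    image: your-app-image            # placeholder: same image as the web container
    command: ["npm", "run", "migrate"]
    envFrom:
    - configMapRef:
        name: rp2-env                # placeholder ConfigMap holding the ~20 environment variables
  containers:
  - name: web
    image: your-app-image            # placeholder
</code></pre>
<p>The init container runs to completion before the main container starts, so the migration happens exactly once per pod start.</p>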
<hr>
<p>You should probably leverage Kubernetes PostStart hook. Something like below:</p>
<pre><code>lifecycle:
postStart:
exec:
command:
- "npm"
- "run"
- "migrate"
</code></pre>
<p><a href="http://kubernetes.io/docs/user-guide/container-environment/" rel="nofollow">http://kubernetes.io/docs/user-guide/container-environment/</a></p>
<p>The environment variables specified for your pod will be available too:</p>
<blockquote>
<p>Additionally, user-defined environment variables from the pod definition, are also available to the container, as are any environment variables specified statically in the Docker image</p>
</blockquote>
|
<p>I'm trying to configure the spinnaker to deploy applications in kubernetes environment. </p>
<p>I followed a <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod" rel="nofollow">documentation</a>,
at <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod#section-3-create-a-demo-server-group" rel="nofollow">step-3</a> the containers are not showing up as shown in the <a href="https://files.readme.io/JRMxxbaSQ1mmD5VH8EtD_firstSG1.png" rel="nofollow">screenshot</a>. Then I moved to next <a href="http://www.spinnaker.io/docs/kubernetes-source-to-prod#section-4-git-to-_dev_-pipeline" rel="nofollow">step</a>(Pipeline creation), when I select <code>type: Docker</code> in the <code>Automated Trigger</code>, again the <code>Repo name</code> is not showing up as shown in <a href="https://files.readme.io/ZV0WoYPyTQSwLJ1CvysC_dockertrigger.png" rel="nofollow">screenshot</a>. </p>
<p><strong>So, I'm suspecting there is problem with spinnaker and docker hub repo(Authentication/Misconfiguration?)</strong></p>
<p>I have copied the Kubernetes Authentication config file to <code>~/.kube/config</code>. I think there is no problem with spinnaker and kubernetes. When I create a <code>Load Balancer</code> in spinnaker I can see <code>Kube Services</code> are creating (test-dev & test-prod)</p>
<pre><code>root@veeru:~# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 192.168.3.1 <none> 443/TCP 91d
test-dev 192.168.3.113 <none> 80/TCP 6h
test-prod 192.168.3.185 80/TCP 6h
</code></pre>
<p>My <code>spinnaker-local.yml</code></p>
<pre><code><Content removed for brevity>
kubernetes:
# For more information on configuring Kubernetes clusters (kubernetes), see
# http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup
# NOTE: enabling kubernetes also requires enabling dockerRegistry.
enabled: true
primaryCredentials:
# These credentials use authentication information at ~/.kube/config
# by default.
name: veerendrav2
namespace: default
dockerRegistryAccount: veerendrav2
dockerRegistry:
# If you want to deploy containers to a container management solution,
# you must specifiy where these container images exist first.
# NOTE: Enabling dockerRegistry is independent of other providers.
# However, for convienience, we tie docker and kubernetes together
# since kubernetes (and only kubernetes) depends on this docker provider
# configuration.
enabled: true
primaryCredentials:
name: veerendrav2
address: https://hub.docker.com
repository: veerendrav2/spin-kub-demo
<Content removed for brevity>
</code></pre>
<p>My <code>/opt/spinnaker/config/clouddriver-local.yml</code></p>
<pre><code>dockerRegistry:
enabled: true
accounts:
- name: veerendrav2
address: https://hub.docker.com/ # Point to registry of choice
username: veerendrav2
password: password
repositories:
- veerendrav2/spin-kub-demo
</code></pre>
<p>My Sample application <a href="https://github.com/veerendra2/spin-kub-demo" rel="nofollow">github repo</a> and <a href="https://hub.docker.com/r/veerendrav2/spin-kub-demo/" rel="nofollow">docker hub repo</a></p>
<p>Thanks</p>
| <p>In <code>/opt/spinnaker/config/clouddriver-local.yml</code> you likely need to change the <code>dockerRegistry.accounts[0].address</code> field to <code>https://index.docker.io</code>, since DockerHub's registry isn't hosted on <code>hub.docker.com</code>, but on <code>index.docker.io</code>.</p>
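<p>Based on the config shown in the question, the account section would then look roughly like this (a sketch):</p>
<pre><code>dockerRegistry:
  enabled: true
  accounts:
  - name: veerendrav2
    address: https://index.docker.io   # DockerHub's actual registry endpoint
    username: veerendrav2
    password: password
    repositories:
    - veerendrav2/spin-kub-demo
</code></pre>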
|
<p>According to Kubernetes API docs it is possible to create/list/delete pods, replication controllers and services:</p>
<p><a href="http://kubernetes.io/third_party/swagger-ui/#!/v1beta1" rel="noreferrer">http://kubernetes.io/third_party/swagger-ui/#!/v1beta1</a></p>
<p>However in the Google Container Engine documentation they don't seem to expose this API. The only resources you can manage through a REST API are clusters. Pods, replication controllers and services have to be managed using gcloud.</p>
<p>Is it possible to access the Kubernetes API when using Google Container Engine?</p>
| <p>I created a <a href="http://blog.ctaggart.com/2016/09/accessing-kubernetes-api-on-google.html" rel="nofollow">blog post</a> just for this topic. It includes a video walkthrough of the code and demo. Essentially, you can get the Kubernetes credentials from the Google Container Engine API. Here is how to do it in golang:</p>
<pre><code>func newKubernetesClient(clstr *container.Cluster) (*kubernetes.Clientset, error) {
cert, err := base64.StdEncoding.DecodeString(clstr.MasterAuth.ClientCertificate)
if err != nil {
return nil, err
}
key, err := base64.StdEncoding.DecodeString(clstr.MasterAuth.ClientKey)
if err != nil {
return nil, err
}
ca, err := base64.StdEncoding.DecodeString(clstr.MasterAuth.ClusterCaCertificate)
if err != nil {
return nil, err
}
config := &rest.Config{
Host: clstr.Endpoint,
TLSClientConfig: rest.TLSClientConfig{CertData: cert, KeyData: key, CAData: ca},
Username: clstr.MasterAuth.Username,
Password: clstr.MasterAuth.Password,
// Insecure: true,
}
kbrnts, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
return kbrnts, nil
}
</code></pre>
|
<p>The following sections are: the errors, the configuration and the kubernetes version and the etcd version.</p>
<pre><code>[root@xt3 kubernetes]# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES ; done
etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:25 CST; 58ms ago
Main PID: 6382 (etcd)
CGroup: /system.slice/etcd.service
           └─6382 /usr/bin/etcd
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: data dir = /var/lib/etcd/default.etcd
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: member dir = /var/lib/etcd/default.etcd/member
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: heartbeat = 100ms
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: election = 1000ms
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: snapshot count = 10000
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: advertise client URLs = http://localhost:2379,http://localhost:4001
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: loaded cluster information from store: default=http://localhost:2380,default=http://localhost:7001
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: restart member ce2a822cea30bfca in cluster 7e27652122e8b2ae at commit index 10686
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 raft: ce2a822cea30bfca became follower at term 8
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 raft: newRaft ce2a822cea30bfca [peers: [ce2a822cea30bfca], term: 8, commit: 10686, applied: 10001, lastindex: 10686, lastterm: 8]
Job for kube-apiserver.service failed. See 'systemctl status kube-apiserver.service' and 'journalctl -xn' for details.
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2016-03-25 11:11:35 CST; 58ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6401 (code=exited, status=255)
Mar 25 11:11:35 xt3 systemd[1]: Failed to start Kubernetes API Server.
Mar 25 11:11:35 xt3 systemd[1]: Unit kube-apiserver.service entered failed state.
kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:35 CST; 73ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6437 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
           └─6437 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.954951 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dia... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955075 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes: dial tcp 127.... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955159 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955222 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes: dial tcp 127.... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955248 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: ge... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955331 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims: dia... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955379 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: ge... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955430 6437 resource_quota_controller.go:62] Synchronization error: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: getsockopt: connection refused (&url....or)(0xc8204f2000)})
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955576 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955670 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?fieldSelector=meta... connection refused
Hint: Some lines were ellipsized, use -l to show in full.
kube-scheduler.service - Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:36 CST; 71ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6466 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
           └─6466 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
Mar 25 11:11:36 xt3 systemd[1]: Started Kubernetes Scheduler Plugin.
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031318 6466 reflector.go:180] pkg/scheduler/factory/factory.go:194: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dial tcp 127.0.0.1:...: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031421 6466 reflector.go:180] pkg/scheduler/factory/factory.go:189: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031564 6466 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D: dial tcp 127....: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031644 6466 reflector.go:180] pkg/scheduler/factory/factory.go:184: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127...: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031677 6466 reflector.go:180] pkg/scheduler/factory/factory.go:177: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D: dial tcp 127.0.0.1:8080:...: connection refused
Hint: Some lines were ellipsized, use -l to show in full.
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
</code></pre>
<p>The error details are the following.</p>
<pre><code>[root@xt3 kubernetes]# journalctl -xn
-- Logs begin at Sat 2016-03-19 15:30:07 CST, end at Fri 2016-03-25 11:11:42 CST. --
Mar 25 11:11:41 xt3 kube-controller-manager[6437]: E0325 11:11:41.958470 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?fieldSelector=metadata.name%3Ddefault: d
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034315 6466 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D: dial tcp 127.0.0.1:8080: getsockopt:
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034325 6466 reflector.go:180] pkg/scheduler/factory/factory.go:184: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127.0.0.1:8080: getsockopt
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034324 6466 reflector.go:180] pkg/scheduler/factory/factory.go:189: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034413 6466 reflector.go:180] pkg/scheduler/factory/factory.go:194: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dial tcp 127.0.0.1:8080: getsockopt: conne
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034434 6466 reflector.go:180] pkg/scheduler/factory/factory.go:177: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D: dial tcp 127.0.0.1:8080: getsockopt: connection
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206743 6487 reflector.go:180] pkg/admission/namespace/lifecycle/admission.go:95: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: getsockopt: connection refus
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206767 6487 reflector.go:180] pkg/admission/limitranger/admission.go:102: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206816 6487 reflector.go:180] pkg/admission/namespace/exists/admission.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206831 6487 reflector.go:180] pkg/admission/resourcequota/admission.go:59: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: getsockopt: connection ref
[root@xt3 kubernetes]#
</code></pre>
<p>The configurations are the following:</p>
<pre><code>[root@xt3 kubernetes]# pwd
/etc/kubernetes
[root@xt3 kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# ls
apiserver apiserver.rpmsave config config.rpmsave controller-manager kubelet proxy scheduler
[root@xt3 kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""
</code></pre>
<p>The versions of the kubernetes and etcd:</p>
<pre><code>[root@xt3 kubernetes]# rpm -qa | grep kuber
kubernetes-node-1.1.0-0.4.git2bfa9a1.el7.x86_64
</code></pre>
<p>I do all the configurations as the kubernetes sites told.(<a href="http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a>)</p>
<pre><code> kubernetes-client-1.1.0-0.4.git2bfa9a1.el7.x86_64
kubernetes-1.1.0-0.4.git2bfa9a1.el7.x86_64
kubernetes-master-1.1.0-0.4.git2bfa9a1.el7.x86_64
[root@xt3 kubernetes]# rpm -qa | grep etcd
etcd-2.0.9-1.el7.x86_64
</code></pre>
<p>I look forward to replying for the answers. Please contact me. Thanks very much.</p>
| <p>The log you provided is not enough.
You can see the full detailed log by using:</p>
<pre><code>tail -n 1000 /var/log/messages
</code></pre>
<p>In my case, I stopped the kube-apiserver service, started it again, and then searched /var/log/messages. My error turned out to be a file that had been removed because I restarted the machine. That may not be your reason, but you can find yours with the command above.</p>
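<p>For example (a sketch; the unit names match the services shown in the question):</p>
<pre><code># restart only the apiserver and read its own unit log for the real failure reason
systemctl restart kube-apiserver
journalctl -u kube-apiserver -n 200 --no-pager

# or follow the aggregated syslog while restarting
tail -f /var/log/messages | grep kube-apiserver
</code></pre>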
<p>good luck.</p>
|
<p>I am looking for best option in handling DEV, TEST, CERT and PROD environments in Kubernetes.</p>
| <p>You can use namespaces in Kubernetes. Create one namespace per environment.</p>
<p><a href="http://kubernetes.io/docs/user-guide/namespaces/" rel="nofollow">http://kubernetes.io/docs/user-guide/namespaces/</a></p>
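<p>For example (a sketch):</p>
<pre><code>kubectl create namespace dev
kubectl create namespace test
kubectl create namespace cert
kubectl create namespace prod

# deploy the same manifests into a specific environment
kubectl --namespace=dev apply -f app.yaml
</code></pre>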
<p>Once things get more involved, you can may be move to one cluster per environment, or something like DEV, TEST in a cluster and CERT and PROD in their own clusters.</p>
|
<p>If i create a secret from an id_rsa file using kubectl as:</p>
<pre><code>kubectl create secret generic hcom-secret --from-file=ssh-privatekey=./.ssh/id_rsa
</code></pre>
<p>And then mount the secret into the container</p>
<pre><code>"volumeMounts": [
{"name": "cfg", "readOnly": false, "mountPath": "/home/hcom/.ssh"}
]
"volumes": [
{"name": "cfg", "secret": { "secretName": "hcom-ssh" }}
],
</code></pre>
<p>The resultant file is not id_rsa but ssh-privatekey, and its permissions are not the 600 that ssh expects.</p>
<p>Is this a correct approach, or can anyone please detail how this should be done?</p>
| <p>The official Kubernetes docs for secrets cover <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys" rel="nofollow noreferrer">this exact use-case</a>.</p>
<p>To create the secret, use:</p>
<pre><code>$ kubectl create secret generic my-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
</code></pre>
<p>To mount the secret in your containers, use the following Pod config:</p>
<pre><code>{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "secret-test-pod",
"labels": {
"name": "secret-test"
}
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "my-secret"
}
}
],
"containers": [
{
"name": "ssh-test-container",
"image": "mySshImage",
"volumeMounts": [
{
"name": "secret-volume",
"readOnly": true,
"mountPath": "/etc/secret-volume"
}
]
}
]
}
}
</code></pre>
<p>Kubernetes doesn't actually have a way to control file permissions for a secret as of now, but <a href="https://github.com/kubernetes/kubernetes/pull/25285" rel="nofollow noreferrer">a recent Pull Request</a> did add support for changing the path of secrets. This support was added with <code>1.3</code> as per <a href="https://github.com/kubernetes/kubernetes/issues/26663#issuecomment-223346911" rel="nofollow noreferrer">this comment</a></p>
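<p>With that in place, you can also map the secret key onto the file name ssh expects, using the secret volume's <code>items</code>/<code>path</code> fields (a sketch based on the volume definition above):</p>
<pre><code>"volumes": [
    {
        "name": "secret-volume",
        "secret": {
            "secretName": "my-secret",
            "items": [
                {"key": "ssh-privatekey", "path": "id_rsa"}
            ]
        }
    }
]
</code></pre>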
<p>Here are the permissions related Github Issues:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/4789" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/4789</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/28317" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/28317</a></li>
</ul>
|
<p>Whether the readiness probe succeeds determines whether the pod is ready or not ready. If I set <code>.spec.minReadySeconds = 60</code> and the readiness probe succeeds (<code>.readinessProbe.initialDelaySeconds = 1</code>), then between 1 second and 60 seconds after the deployment is created the pod enters the ready state, but the deployment's status looks like this:</p>
<pre><code>kubectl describe deployment readiness-minreadyseconds
Name: readiness-minreadyseconds
Namespace: default
CreationTimestamp: Wed, 21 Sep 2016 10:34:42 +0800
Labels: add=readiness-minreadyseconds
Selector: name=readiness-minreadyseconds
Replicas: 2 updated | 2 total | 0 available | 2 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 45
RollingUpdateStrategy: 1 max unavailable, 1 max surge
OldReplicaSets: <none>
NewReplicaSet: readiness-minreadyseconds-536553145 (2/2 replicas created)
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2s 2s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set readiness-minreadyseconds-536553145 to 2
</code></pre>
<p>I found that we can still access the resource in the container through a NodePort service, so if some pods are counted as unavailable in the deployment, how does that affect me?</p>
| <p>This may be a misunderstanding of the terminology. From the <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="nofollow">deployment documentation</a>, one has:</p>
<blockquote>
<p>.spec.minReadySeconds is an optional field that specifies the minimum
number of seconds for which a newly created Pod should be ready
without any of its containers crashing, for it to be considered
available.</p>
</blockquote>
<p>So when <code>minReadySeconds</code> is set to 60, a pod needs to be up for 60 seconds without any crashes to be considered "available". What you're seeing is that even though your pods have been marked ready, they haven't yet satisfied that <code>minReadySeconds</code> condition.</p>
|
<p>I'm trying to setup a Kubernetes PetSet as described in the documentation. When I create the PetSet I can't seem to get the Persistent Volume Claim to bind to the persistent volume. Here is my Yaml File for defining the PetSet:</p>
<pre><code>apiVersion: apps/v1alpha1
kind: PetSet
metadata:
name: 'ml-nodes'
spec:
serviceName: "ml-service"
replicas: 1
template:
metadata:
labels:
app: marklogic
tier: backend
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
containers:
- name: 'ml'
image: "192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1"
imagePullPolicy: Always
ports:
- containerPort: 8000
name: ml8000
protocol: TCP
- containerPort: 8001
name: ml8001
- containerPort: 7997
name: ml7997
- containerPort: 8002
name: ml8002
- containerPort: 8040
name: ml8040
- containerPort: 8041
name: ml8041
- containerPort: 8042
name: ml8042
volumeMounts:
- name: ml-data
mountPath: /data/vol-data
lifecycle:
preStop:
exec:
# SIGTERM triggers a quick exit; gracefully terminate instead
command: ["/etc/init.d/MarkLogic stop"]
volumes:
- name: ml-data
persistentVolumeClaim:
claimName: ml-data
terminationGracePeriodSeconds: 30
volumeClaimTemplates:
- metadata:
name: ml-data
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
</code></pre>
<p>If I do a 'describe' on my created PetSet I see the following:</p>
<pre><code>Name: ml-nodes
Namespace: default
Image(s): 192.168.201.7:5000/dcgs-sof/ml8-docker-final:v1
Selector: app=marklogic,tier=backend
Labels: app=marklogic,tier=backend
Replicas: 1 current / 1 desired
Annotations: <none>
CreationTimestamp: Tue, 20 Sep 2016 13:23:14 -0400
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
33m 33m 1 {petset } Warning FailedCreate pvc: ml-data-ml-nodes-0, error: persistentvolumeclaims "ml-data-ml-nodes-0" not found
33m 33m 1 {petset } Normal SuccessfulCreate pet: ml-nodes-0
</code></pre>
<p>I'm trying to run this in a minikube environment on my local machine. Not sure what I'm missing here???</p>
| <p>There is an <a href="https://github.com/kubernetes/minikube/issues/524" rel="nofollow">open issue</a> on minikube for this. Persistent volume provisioning support appears to be unfinished in minikube at this time.</p>
<p>For it to work with local storage, it needs the following flag on the controller manager and that isn't currently enabled on minikube. </p>
<blockquote>
<p>--enable-hostpath-provisioner[=false]: Enable HostPath PV
provisioning when running without a cloud provider. This allows
testing and development of provisioning features. HostPath
provisioning is not supported in any way, won't work in a multi-node
cluster, and should not be used for anything other than testing or
development.</p>
</blockquote>
<p>Reference: <a href="http://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow">http://kubernetes.io/docs/admin/kube-controller-manager/</a></p>
<p>For local development/testing, it would work if you were to use <code>hack/local-up-cluster.sh</code> to start a local cluster, after setting an environment variable:</p>
<pre><code>export ENABLE_HOSTPATH_PROVISIONER=true
</code></pre>
|
<p>I have been trying to wrap my head around how Rancher (or DC/OS) is different from Kubernetes. Both of them say they are Container management tools. Why we do we need both? How are they different?</p>
| <h1>Author's note</h1>
<p>This question was originally posted 3 years ago. Since then the technology landscape has moved on. </p>
<p>For example Mesosphere, the company behind DCOS has <a href="https://techcrunch.com/2019/08/05/mesosphere-changes-name-to-d2iq-shifts-focus-to-kubernetes-cloud-native/" rel="nofollow noreferrer">renamed itself and refocused it's efforts on Kubernetes</a>. Similarily <a href="https://rancher.com/" rel="nofollow noreferrer">Rancher</a> positioned itself as a Kubernetes installation and management layer.</p>
<p>If this issue is still a puzzle I'd suggest posing new question.</p>
<hr>
<h1>Original answer</h1>
<p>Rancher is a neat tool that is best described as a deployment tool for Kubernetes that additionally has integrated itself to provide networking and load balancing support.</p>
<p>Rancher initially created it's own framework, called Cattle, to coordinate docker containers across multiple hosts. At that time Docker was limited to running on a single host. Rancher offered an interesting solution to this problem by providing networking between hosts, something that was eventually to become part of Docker Swarm.</p>
<p>Now Rancher enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos (upstream project for DCOS) or Kubernetes to manage your containers.</p>
<hr>
<p><strong>Response to <a href="https://stackoverflow.com/users/113173/jdc0589">jdc0589</a></strong></p>
<p>You're quite correct. To the container user Kubernetes abstracts away the underlying implementation details of compute, networking and storage. It's in the setup of this underlying detail where Rancher helps. Rancher's networking provides a consistent solution across a variety of platforms. I have found it particularly useful when running on bare metal or standard (non cloud) virtual servers. </p>
<p>If you're only using AWS, I would use <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> and take advantage the native integration you've mentioned. </p>
<p>While I'm k8s fixated, it must be acknowledged that Rancher also allows the easy install of other frameworks (Swarm and Mesos). I recommend trying it out, if only to understand why you don't need it. </p>
<ul>
<li><a href="http://docs.rancher.com/rancher/v1.5/en/quick-start-guide/" rel="nofollow noreferrer">http://docs.rancher.com/rancher/v1.5/en/quick-start-guide/</a></li>
<li><a href="http://docs.rancher.com/rancher/v1.5/en/kubernetes/" rel="nofollow noreferrer">http://docs.rancher.com/rancher/v1.5/en/kubernetes/</a></li>
</ul>
<hr>
<h2>Update 2017-10-11</h2>
<p>Rancher have announced a preview of <a href="https://rancher.com/rancher2-0/" rel="nofollow noreferrer">Rancher 2.0</a>. The new answer to your question is that soon Rancher will be an admin UI and set of additional services designed to be deployed on top of Kubernetes.</p>
|
<p>I'm trying to run an API (based on Symfony) with Kubernetes, thanks to Google Container Engine (GKE).
This API also allows users to store and download files, which are supposed to be saved somewhere.</p>
<p>I tried to run it with 1 replica, and noticed a downtime of the service during the creation of the new container. It looks like at least 2 replicas are needed to avoid downtime.</p>
<p>Taking that into consideration, I'm interested in these options:</p>
<ul>
<li>A volume based on Google Persistent Disk. Would this mean that all my replicas would be on the same node ? (ReadWriteOnce access mode). If so, in case of a node failure, my service would not be available.</li>
<li>A volume based on Flocker (Backend Persistent Disk). What is the recommended way to install it on GKE ?</li>
</ul>
<p>Is there another interesting option ? What would you recommend ?</p>
| <p>Using GCS (like tex mentioned) is probably the simplest solution (and will be very fast from a GKE cluster). Here is <a href="https://stackoverflow.com/questions/33591580/how-to-upload-files-to-google-cloud-storage-with-symfony-gauferette-vichuplo">an answer that may help</a>.</p>
<p>If you have a specific need for local persistent storage, you can use Google Persistent Disks, but they can only be mounted as writable in one place.</p>
<p><a href="http://kubernetes.io/docs/user-guide/petset/" rel="nofollow noreferrer">Petsets</a> (currently alpha) will provide better support for distributed persistent in-cluster storage, so you can also look into that if GCS doesn't work for you.</p>
|
<p>Kubernetes API server is not getting started due to DefaultStorageClass.</p>
<p><strong>The connection to the server 10.85.40.165:8080 was refused - did you specify the right host or port?
(kubectl failed, will retry 2 times)</strong></p>
<p>Ubuntu: 14.04.4</p>
<p><strong>kubectl version</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
</code></pre>
<p><strong>Docker info</strong></p>
<pre><code>docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.2.0-27-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.67 GiB
Name: stage-kube01
ID: 45T4:TF3N:VK4V:ELG3:NZA2:V5KJ:N6WE:W5RD:5F2Q:RRIZ:ZZJ4:TBZP
WARNING: No swap limit support
</code></pre>
<hr>
<p><strong>Kube-apiserver.log</strong></p>
<pre><code> mv: extensions/__internal
I0922 17:30:27.725853 9012 genericapiserver.go:82] Adding storage destination for group batch
F0922 17:30:28.418600 9012 plugins.go:107] Unknown admission plugin: DefaultStorageClass
I0922 17:30:28.455859 9025 server.go:188] Will report 10.85.40.165 as public IP address.
I0922 17:30:28.455983 9025 plugins.go:71] No cloud provider specified.
I0922 17:30:28.456165 9025 server.go:112] constructing etcd storage interface. sv: v1 mv: __internal
I0922 17:30:28.456335 9025 genericapiserver.go:82] Adding storage destination for group
I0922 17:30:28.456382 9025 server.go:296] Configuring extensions/v1beta1 storage destination
I0922 17:30:28.456411 9025 server.go:112] constructing etcd storage interface. sv: extensions/v1beta1
</code></pre>
| <p>I was having the same issue.</p>
<p>I removed "DefaultStorageClass" from the ADMISSION_CONTROL variable in ubuntu/config-default.sh, and I was able to start my cluster. However, I haven't fully tested everything out to make sure it's all still working.</p>
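<p>For reference, the edited line would look something like this (a sketch; keep whatever other plugins your config already lists, just drop <code>DefaultStorageClass</code>):</p>
<pre><code>ADMISSION_CONTROL=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
</code></pre>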
|
<p>How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way that you can mount host files / directories with Docker, e.g.</p>
<pre><code>docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
</code></pre>
<p>Thanks</p>
| <p>It is possible to use ConfigMaps to achieve that goal:</p>
<p>The following example mounts a mariadb configuration file into a mariadb POD:</p>
<p><strong>ConfigMap</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
charset.cnf: |
[client]
# Default is Latin1, if you need UTF-8 set this (also in server section)
default-character-set = utf8
[mysqld]
#
# * Character sets
#
# Default is Latin1, if you need UTF-8 set all this (also in client section)
#
character-set-server = utf8
collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
name: mariadb-configmap
</code></pre>
<p><strong>MariaDB deployment</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mariadb
labels:
app: mariadb
spec:
replicas: 1
template:
metadata:
labels:
app: mariadb
version: 10.1.16
spec:
containers:
- name: mariadb
image: mariadb:10.1.16
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mariadb
key: rootpassword
volumeMounts:
- name: mariadb-data
mountPath: /var/lib/mysql
- name: mariadb-config-file
mountPath: /etc/mysql/conf.d
volumes:
- name: mariadb-data
hostPath:
path: /var/lib/data/mariadb
- name: mariadb-config-file
configMap:
name: mariadb-configmap
</code></pre>
<p>It is also possible to use the subPath feature that is available in Kubernetes from version 1.3, as stated <a href="https://github.com/kubernetes/kubernetes/issues/23748#issuecomment-230390309" rel="noreferrer">here</a>.</p>
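<p>For the nginx case from the question, the ConfigMap can be created directly from the file and mounted the same way (a sketch; the ConfigMap name is a placeholder):</p>
<pre><code>kubectl create configmap nginx-config --from-file=default.conf=./nginx.ssl.conf
</code></pre>
<p>Mounting that ConfigMap at <code>/etc/nginx/conf.d</code> in the container gives the same effect as the host mount in the docker run example.</p>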
|
<p>I'm having a bit of a hard time figuring out whether the Guestbook example is working in Minikube. My main issue is possibly that the example description <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook" rel="nofollow">here</a> details all the steps, but <em>there is no indication of how to connect to the web application</em> once it's running from the default YAML files.</p>
<p>I'm using Minikube v. <code>0.10.0</code> in Mac OS X 10.9.5 (Mavericks) and this is what I eventually ended up with (which seems pretty good according to what I read from the example document):</p>
<pre><code>PolePro:all-in-one poletti$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.0.140 <none> 80/TCP 8s
kubernetes 10.0.0.1 <none> 443/TCP 2h
redis-master 10.0.0.165 <none> 6379/TCP 53m
redis-slave 10.0.0.220 <none> 6379/TCP 37m
PolePro:all-in-one poletti$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
frontend 3 3 3 3 20s
redis-master 1 1 1 1 42m
redis-slave 2 2 2 2 37m
PolePro:all-in-one poletti$ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-708336848-0h2zj 1/1 Running 0 29s
frontend-708336848-ds8pn 1/1 Running 0 29s
frontend-708336848-v8wp9 1/1 Running 0 29s
redis-master-2093957696-or5iu 1/1 Running 0 43m
redis-slave-109403812-12k68 1/1 Running 0 37m
redis-slave-109403812-c7zmo 1/1 Running 0 37m
</code></pre>
<p>I thought that I might connect to <code>http://10.0.0.140:80/</code> (i.e. the <code>frontend</code> address and port as returned by <code>kubectl get svc</code> above) and see the application running, but I'm getting a <code>Connection refused</code>:</p>
<pre><code>PolePro:all-in-one poletti$ curl -v http://10.0.0.140:80
* About to connect() to 10.0.0.140 port 80 (#0)
* Trying 10.0.0.140...
* Adding handle: conn: 0x7fb0f9803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fb0f9803a00) send_pipe: 1, recv_pipe: 0
* Failed connect to 10.0.0.140:80; Connection refused
* Closing connection 0
curl: (7) Failed connect to 10.0.0.140:80; Connection refused
</code></pre>
<p>It's somewhat suspicious that the example description misses such an important step, though. What am I missing?</p>
| <p>Well, it seems I figured it out myself (I'll probably send a PR too)</p>
<p>The main thing is that, at least in the Minikube setup, the <code>kubectl</code> command is run in Mac OS X but all the cool stuff happens inside a virtual machine. In my case, it's a VirtualBox VM (I'm still on Mavericks).</p>
<p>When <code>kubectl</code> shows addresses for services, like in this case:</p>
<pre><code>PolePro:all-in-one poletti$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.0.140 <none> 80/TCP 8s
kubernetes 10.0.0.1 <none> 443/TCP 2h
redis-master 10.0.0.165 <none> 6379/TCP 53m
redis-slave 10.0.0.220 <none> 6379/TCP 37m
</code></pre>
<p>these addresses are accessible from within the node, not necessarily from the outside. In my case, they were <em>not</em> accessible from the outside.</p>
<p>So what can you do about it?</p>
<p>First of all, to just check that it's actually running, you can log into the node and run curl from there:</p>
<pre><code># get the list of nodes, to get the name of the node we're interested into
PolePro:all-in-one poletti$ kubectl get nodes
NAME STATUS AGE
minikube Ready 3h
# that was easy. Now we can get the address of the node
PolePro:all-in-one poletti$ kubectl describe node/minikube | grep '^Address'
Addresses: 192.168.99.100,192.168.99.100
# now we can log into the node. The username is "docker", the password is "tcuser"
# by default (without quotes):
PolePro:all-in-one poletti$ ssh [email protected]
[email protected]'s password:
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.11.1, build master : 901340f - Fri Jul 1 22:52:19 UTC 2016
Docker version 1.11.1, build 5604cbe
docker@minikube:~$ curl -v http://10.0.0.140/
* Trying 10.0.0.140...
* Connected to 10.0.0.140 (10.0.0.140) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.0.0.140
> User-Agent: curl/7.49.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 19 Sep 2016 13:37:56 GMT
< Server: Apache/2.4.10 (Debian) PHP/5.6.20
< Last-Modified: Wed, 09 Sep 2015 18:35:04 GMT
< ETag: "399-51f54bdb4a600"
< Accept-Ranges: bytes
< Content-Length: 921
< Vary: Accept-Encoding
< Content-Type: text/html
<
<html ng-app="redis">
<head>
<title>Guestbook</title>
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
<script src="controllers.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/ui-bootstrap-tpls.js"></script>
</head>
<body ng-controller="RedisCtrl">
<div style="width: 50%; margin-left: 20px">
<h2>Guestbook</h2>
<form>
<fieldset>
<input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input"><br>
<button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
</fieldset>
</form>
<div>
<div ng-repeat="msg in messages track by $index">
{{msg}}
</div>
</div>
</div>
</body>
</html>
* Connection #0 to host 10.0.0.140 left intact
</code></pre>
<p>Yay! There's actually something running on port 80.</p>
<p>Anyway, this is still a bit cumbersome and we would like to see this inside a browser in Mac OS X. One way to do this is to use <code>NodePort</code> to make the node map a Service's port to a Node's port; this is accomplished adding the following line in the <code>frontend</code> service definition, which becomes:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# if your cluster supports it, uncomment the following to automatically create
# an external load-balanced IP for the frontend service.
# type: LoadBalancer
type: NodePort
ports:
# the port that this service should serve on
- port: 80
selector:
app: guestbook
tier: frontend
</code></pre>
<p>This change might be requested in either <code>frontend-service.yaml</code>, <code>all-in-one/frontend.yaml</code> or <code>all-in-one/guestbook-all-in-one.yaml</code> depending on which file you are using.</p>
<p>If you re-create the whole guestbook (I don't know if it's necessary but I'll remain on the safe side) you will get a message about ports and firewalls, like this:</p>
<pre><code># delete previous instance to start from "scratch"
PolePro:all-in-one poletti$ kubectl delete deployments,svc -l 'app in (redis, guestbook)'
deployment "frontend" deleted
deployment "redis-master" deleted
deployment "redis-slave" deleted
service "frontend" deleted
service "redis-master" deleted
service "redis-slave" deleted
# we'll use the all-in-one here to get quickly to the point
PolePro:all-in-one poletti$ vi guestbook-all-in-one.yaml
# with the new NodePort change in place, we're ready to start again
PolePro:all-in-one poletti$ kubectl create -f guestbook-all-in-one.yaml
service "redis-master" created
deployment "redis-master" created
service "redis-slave" created
deployment "redis-slave" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30559) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "frontend" created
deployment "frontend" created
</code></pre>
<p>Now, port <code>30559</code> on the node maps onto the frontend port <code>80</code>, so we can open the browser at address <code>http://192.168.99.100:30559/</code> (i.e. <code>http://<NODE-IP>:<EXTERNAL-PORT>/</code>) and we can use the guestbook!</p>
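<p>On Minikube there is also a shortcut that looks up the node IP and NodePort for you (a sketch):</p>
<pre><code>minikube service frontend --url
# prints something like http://192.168.99.100:30559
</code></pre>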
|
<p>I'm trying to delete a Job in Kubernetes, but every time I run "kubectl delete job [JOBNAME]" it just "hangs" indefinitely.</p>
<p>How can I diagnose this issue to try and determine why the Job's not able to be deleted?</p>
| <p>Turn up your debugging by setting the verbosity to 9. You will see that <code>kubectl</code> is actually clearing out a lot of different resources created by the job. <code>ctrl-c</code> out of it.</p>
<p>Use <code>--cascade=false</code> and it will actually complete shortly; see <a href="https://github.com/kubernetes/kubernetes/issues/8598#issuecomment-104078236" rel="nofollow">issue 8598</a>.</p>
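<p>Concretely (a sketch):</p>
<pre><code># watch what kubectl is actually doing during the delete
kubectl delete job [JOBNAME] --v=9

# delete the job object itself without waiting on its dependents
kubectl delete job [JOBNAME] --cascade=false
</code></pre>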
|
<p>I am attempting to use Minikube for local kubernetes development. I have set up my docker environment to use the docker daemon running in the provided Minikube VM (boot2docker) as suggested:</p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>It sets up these environment variables:</p>
<pre><code>export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/jasonwhite/.minikube/certs"
</code></pre>
<p>When I attempt to pull an image from our private docker repository:</p>
<pre><code>docker pull oururl.com:5000/myimage:v1
</code></pre>
<p>I get this error:</p>
<pre><code>Error response from daemon: Get https://oururl.com:5000/v1/_ping: x509: certificate signed by unknown authority
</code></pre>
<p>It appears I need to add a trusted ca root certificate somehow, but have been unsuccessful so far in my attempts. </p>
<p>I can hit the repository fine with curl using our ca root cert:</p>
<pre><code>curl --cacert /etc/ssl/ca/ca.pem https://oururl.com:5000/v1/_ping
</code></pre>
| <p>I've been unable to find any way to get the cert into the minikube VM. However, minikube has a command-line parameter to mark the registry as insecure.</p>
<pre><code>minikube start --insecure-registry=<HOST>:5000
</code></pre>
<p>Then to configure authentication on the registry, create a secret.</p>
<pre><code>kubectl create secret docker-registry tp-registry --docker-server=<REGISTRY>:5000 --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL> --insecure-skip-tls-verify=true
</code></pre>
<p>Add secret to the default service account as described in the <a href="http://kubernetes.io/docs/user-guide/service-accounts/#adding-imagepullsecrets-to-a-service-account" rel="nofollow">kubernetes docs</a>.</p>
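<p>As a sketch, the patch from those docs looks like this (using the secret name created above):</p>
<pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "tp-registry"}]}'
</code></pre>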
|
<p>I want to create a replication controller with a POD which will have a PVC (persistent volume claim). My PVC will use an NFS storage for PV(Persistent Volume).</p>
<p>Once the POD is operational the RC would maintain the PODs up and running. In this situation would the data in the POD be available / persistent when</p>
<ol>
<li>the POD is stopped/deleted by a delete command and RC re-launches it? That means Kubernetes was not shutdown. In this case can the new POD have the same data from the same volume?</li>
<li>the POD was stopped, Kubernetes process and the nodes were restarted. The NFS storage however was still attached as PV. </li>
<li>A new PV is attached to Kubernetes and the old PV is detached. </li>
</ol>
| <p>That depends a lot on how you define your PV/PVC. In my experience it is pretty easy to use an NFS-based PV to retain data across pod deletions and recreations. I go with the following approach for an NFS volume shared by multiple pods.</p>
<p>Volume :</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pvname
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
server: <nfs IP>
path: <nfs path>
</code></pre>
<p>Claim :</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvcname
spec:
volumeName: pvname
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
</code></pre>
<p>This ensures that whatever I delete in k8s, I can still get back to my data at a known path on the NFS server, and reuse it by recreating the PV/PVC/pod in k8s, so it should survive all three cases you mentioned.</p>
|
<p>I'm very new to kubernetes/docker, so apologies if this is a silly question.</p>
<p>I have a pod that is accessing a few services. In my container I'm running a python script and need to access the service. Currently I'm doing this using the services' IP addresses.</p>
<p>Is the service IP address stable or is it better to use <a href="http://kubernetes.io/docs/user-guide/services/#environment-variables" rel="noreferrer">environment variables</a>? If so, some tips on doing that would be great.</p>
<p>The opening paragraph of the <a href="http://kubernetes.io/docs/user-guide/services/" rel="noreferrer">Services Documentation</a> gives a motivation for services which implies stable IP addresses, but I never see it explicitly stated:</p>
<blockquote>
<p>While each Pod gets its own IP address, <strong>even those IP addresses cannot be relied upon to be stable over time</strong>. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?</p>
<p>Enter Services.</p>
</blockquote>
<p>My pod spec for reference:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: fetchdataiso
labels:
name: fetchdataiso
spec:
containers:
- name: fetchdataiso
image: 192.111.1.11:5000/ncllc/fetch_data
command: ["python"]
args: ["feed/fetch_data.py", "-hf", "10.222.222.51", "-pf", "8880", "-hi", "10.223.222.173", "-pi","9101"]
</code></pre>
| <p>The short answer is "Yes, the service IP can change"</p>
<pre><code>$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.12.0.1 <none> 443/TCP 10d
test 10.12.172.156 <none> 80/TCP 6s
$ kubectl delete svc test
service "test" deleted
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.12.0.1 <none> 443/TCP 10d
test 10.12.254.241 <none> 80/TCP 3s
</code></pre>
<p>The long answer is that if you use it right, you will have no problem with it. What is even more important for your question is that ENV variables are far worse than DNS/IP coupling.
You should refer to your service by <strong>service</strong> or <strong>service.namespace</strong> or even the full path, something along the lines of <strong>test.default.svc.cluster.local</strong>. This gets resolved to the service ClusterIP, and in contrast to your ENVs it can be re-resolved to a new IP (which will probably never happen unless you explicitly delete and recreate the service), while the ENVs of a running process will never change.</p>
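<p>Applied to the pod spec in the question, that means passing service DNS names instead of raw IPs; the service names below are hypothetical placeholders:</p>
<pre><code>args: ["feed/fetch_data.py",
       "-hf", "feed-db.default.svc.cluster.local",   # hypothetical service name in place of 10.222.222.51
       "-pf", "8880",
       "-hi", "ingest.default.svc.cluster.local",    # hypothetical service name in place of 10.223.222.173
       "-pi", "9101"]
</code></pre>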
|
<p>I’m investigating this letsencrypt controller (<a href="https://github.com/tazjin/kubernetes-letsencrypt" rel="noreferrer">https://github.com/tazjin/kubernetes-letsencrypt</a>).</p>
<p>It requires pods have permission to make changes to records in Cloud DNS. I thought with the pods running on GKE I’d get that access with the default service account, but the requests are failing. What do I need to do do to allow the pods access to Cloud DNS?</p>
| <p>The Google Cloud DNS API's <a href="https://cloud.google.com/dns/api/v1/changes/create#auth" rel="noreferrer">changes.create call</a> requires either the <code>https://www.googleapis.com/auth/ndev.clouddns.readwrite</code> or <code>https://www.googleapis.com/auth/cloud-platform</code> scope, neither of which are enabled by default on a GKE cluster.</p>
<p>You can add a new <a href="https://cloud.google.com/container-engine/docs/node-pools" rel="noreferrer">Node Pool</a> to your cluster with the DNS scope by running:</p>
<pre><code>gcloud container node-pools create np1 --cluster my-cluster --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite
</code></pre>
<p>Or, you can create a brand new cluster with the scopes you need, either by passing the <code>--scopes</code> flag to <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create" rel="noreferrer"><code>gcloud container clusters create</code></a>, or in the <a href="https://pantheon.corp.google.com/kubernetes/add" rel="noreferrer">New Cluster dialog</a> in Cloud Console, click "More", and set the necessary scopes to "Enabled".</p>
|
<p>I am trying to get the stats of the containers that are running inside the kubernetes nodes via <strong>docker stats</strong> command. But unfortunately I am getting all the values as "0" for all the pod containers.</p>
<pre><code>CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
7dc5af9923b2 0.00% 0 B / 0 B 0.00% 0 B / 0 B 0 B / 0 B 0
</code></pre>
<p>I did the same with containers that I brought up manually via <strong>docker run</strong> command in the same node and I am getting proper values for those containers.</p>
<pre><code>CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8be93c039a25 0.12% 133.3 MB / 3.892 GB 3.43% 0 B / 648 B 2.208 MB / 0 B 0
</code></pre>
<p>Is there any specific method to get the stats for the pod containers other than this?</p>
<p>Note: docker version is 1.11.2 and kube version is 1.3.7</p>
| <p>I resolved this one. I used the kubelet API to get the metrics for the node as well as for individual containers. The following API will return the metrics for a pod container.</p>
<pre><code>http://<nodeIP>:10255/stats/<podName>/<containerName> - POST
</code></pre>
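<p>As a rough sketch (the node IP, pod name, and container name below are placeholders for values from your own cluster; depending on the kubelet version the endpoint may also accept a plain GET in addition to the POST noted above):</p>
<pre><code>curl -X POST http://10.0.0.5:10255/stats/my-pod/my-container
</code></pre>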
<p><a href="https://stackoverflow.com/questions/35212008/kubernetes-how-to-get-disk-cpu-metrics-of-a-node/35212987#35212987">This post </a> that was suggested in the comments was much helpful. </p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/pkg/kubelet/server.go#L971" rel="nofollow noreferrer">This</a> document has some more APIs to collect the metrics.</p>
|
<p>I have a docker image pushed to Container Registry with <code>docker push gcr.io/go-demo/servertime</code> and a pod created with <code>kubectl run servertime --image=gcr.io/go-demo-144214/servertime --port=8080</code>.</p>
<p>How can I enable an automatic update of the pod every time I push a new version of the image?</p>
| <p>I would suggest switching to some kind of CI to manage the process and, instead of triggering on docker push, triggering the process on pushing the commit to the git repository. Also, if you switch to using a higher-level Kubernetes construct such as a <code>deployment</code>, you will be able to run a rolling update of your pods to your new image version. Our process is roughly as follows:</p>
<pre><code>git commit #triggers CI build
docker build yourimage:gitsha1
docker push yourimage:gitsha1
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
kubectl apply -f deployment.yml
</code></pre>
<p>Where deployment.yml is a template for our deployment that will be updated to new tag version.</p>
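<p>For reference, a minimal sketch of such a template (the names and the <code>{{TAG}}</code> placeholder are illustrative, not taken from a real setup):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: yourdeployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourcontainer
        image: yourimage:{{TAG}}   # replaced with the git sha by the sed step above
        ports:
        - containerPort: 8080
</code></pre>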
<p>If you do it manually, it might be easier to simply update the image in an existing deployment by running <code>kubectl set image deployment/yourdeployment <containernameinpod>=yourimage:gitsha1</code></p>
|
<p>We are using the hosted version of Kubernetes inside Google Container Engine (GKE).</p>
<p>Currently we are at Version 1.3.x, which comes with an Kubernetes Dashboard v1.1.1.</p>
<p>Some days ago Kubernetes Dashboard v1.4.0 was released which includes some very nice enhancements.</p>
<p>My <strong>Question</strong>: <strong>What is the recommended way to update the Kubernetes Dashboard</strong> on a hosted (<strong>GKE</strong>) Kubernetes cluster?</p>
<p>The cluster comes with a Dashboard controlled by a Replication Controller. We could just dump the RC config, edit the image tag and labels and apply it. But I don't want to break the dashboard. So I'd like to know what the "official" or suggested way of doing this is.</p>
| <p>The official way is to update your cluster to 1.4. It should be available a few days after Kubernetes 1.4 is released. You can do this via gcloud CLI or Google Cloud Console (click "Upgrade available" next to your cluster). </p>
|
<p>I have installed Kubernetes on Bare-metal/Ubuntu. I am on <code>6b649d7f9f2b09ca8b0dd8c0d3e14dcb255432d1</code> commit in git. I used <code>cd kubernetes/cluster; KUBERNETES_PROVIDER=ubuntu ./kube-up.sh</code> followed by <code>cd kubernetes/cluster/ubuntu; ./deployAddons.sh</code> to start the cluster. Everything went fine and the cluster got up.</p>
<p>My <code>/ubuntu/config-default.sh</code> is as follows:</p>
<pre><code># Define all your cluster nodes, MASTER node comes first"
# And separated with blank space like <user_1@ip_1> <user_2@ip_2> <user_3@ip_3>
export nodes=${nodes:-"[email protected] [email protected]"}
# Define all your nodes role: a(master) or i(minion) or ai(both master and minion), must be the order same
role=${role:-"ai i"}
# If it practically impossible to set an array as an environment variable
# from a script, so assume variable is a string then convert it to an array
export roles=($role)
# Define minion numbers
export NUM_NODES=${NUM_NODES:-2}
# define the IP range used for service cluster IPs.
# according to rfc 1918 ref: https://tools.ietf.org/html/rfc1918 choose a private ip range here.
export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24} # formerly PORTAL_NET
# define the IP range used for flannel overlay network, should not conflict with above SERVICE_CLUSTER_IP_RANGE
export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}
# Optionally add other contents to the Flannel configuration JSON
# object normally stored in etcd as /coreos.com/network/config. Use
# JSON syntax suitable for insertion into a JSON object constructor
# after other field name:value pairs. For example:
# FLANNEL_OTHER_NET_CONFIG=', "SubnetMin": "172.16.10.0", "SubnetMax": "172.16.90.0"'
export FLANNEL_OTHER_NET_CONFIG
FLANNEL_OTHER_NET_CONFIG=''
# Admission Controllers to invoke prior to persisting objects in cluster
export ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,SecurityContextDeny
# Path to the config file or directory of files of kubelet
export KUBELET_CONFIG=${KUBELET_CONFIG:-""}
# A port range to reserve for services with NodePort visibility
SERVICE_NODE_PORT_RANGE=${SERVICE_NODE_PORT_RANGE:-"30000-32767"}
# Optional: Enable node logging.
ENABLE_NODE_LOGGING=false
LOGGING_DESTINATION=${LOGGING_DESTINATION:-elasticsearch}
# Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_LOGGING=false
ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1}
# Optional: When set to true, heapster, Influxdb and Grafana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-true}"
# Extra options to set on the Docker command line. This is useful for setting
# --insecure-registry for local registries.
DOCKER_OPTS=${DOCKER_OPTS:-""}
# Extra options to set on the kube-proxy command line. This is useful
# for selecting the iptables proxy-mode, for example.
KUBE_PROXY_EXTRA_OPTS=${KUBE_PROXY_EXTRA_OPTS:-""}
# Optional: Install cluster DNS.
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
# DNS_SERVER_IP must be a IP in SERVICE_CLUSTER_IP_RANGE
DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}
DNS_DOMAIN=${DNS_DOMAIN:-"cluster.local"}
DNS_REPLICAS=${DNS_REPLICAS:-1}
# Optional: Install Kubernetes UI
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
# Optional: Enable setting flags for kube-apiserver to turn on behavior in active-dev
RUNTIME_CONFIG="--basic-auth-file=password.csv"
# Optional: Add http or https proxy when download easy-rsa.
# Add envitonment variable separated with blank space like "http_proxy=http://10.x.x.x:8080 https_proxy=https://10.x.x.x:8443"
PROXY_SETTING=${PROXY_SETTING:-""}
DEBUG=${DEBUG:-"false"}
</code></pre>
<p>Then, I created a pod using the following yml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>And a service using the following yml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
ports:
- port: 8000
targetPort: 80
protocol: TCP
selector:
app: nginx
type: NodePort
</code></pre>
<p>Then, I got into the started container terminal using <code>docker exec -it [CONTAINER_ID] bash</code>. There are mainly two problems:</p>
<ol>
<li>I cannot ping external domains like google.com, but I can ping external IPs like 8.8.8.8. So the container has internet access.</li>
<li>Internal services resolve to correct Internal ClusterIPs, but I cannot ping that IP from inside the container.</li>
</ol>
<p>The host's <code>/etc/resolv.conf</code> file is as follows:</p>
<pre><code>nameserver 8.8.8.8
nameserver 127.0.1.1
</code></pre>
<p>The container's <code>/etc/resolv.conf</code> file is as follows:</p>
<pre><code>search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.3.10
nameserver 8.8.8.8
nameserver 127.0.1.1
options ndots:5
</code></pre>
<p>Regarding the first problem, I think it could be related to either a SkyDNS nameserver misconfiguration or some custom configuration that I have to do but am not aware of.</p>
<p>However, I don't have any idea why the containers cannot ping ClusterIPs.</p>
<p>Any workarounds?</p>
| <p>I can answer your <code>ping clusterIP</code> problem.
I hit the same problem when trying to ping a service's cluster IP from a Pod.</p>
<p>The explanation is that the cluster IP is a virtual IP and cannot be pinged, but the endpoint behind it can be accessed with curl as long as you include the service port.</p>
<p>I have not dug into all the details, but the short version is that kube-proxy only programs rules for the service's TCP/UDP ports, so ICMP sent to the virtual IP is simply never handled.</p>
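<p>A quick way to see the difference from inside a pod (the cluster IP and port below are placeholders for your own service's values):</p>
<pre><code># inside a pod
ping 192.168.3.42                     # times out: the cluster IP is virtual and does not answer ICMP
wget -qO- http://192.168.3.42:8000/   # works: the service port is proxied to a backend pod
</code></pre>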
|
<pre><code>kubectl get pod run-sh-1816639685-xejyk
NAME READY STATUS RESTARTS AGE
run-sh-1816639685-xejyk 2/2 Running 0 26m
</code></pre>
<p>What's the meaning of "READY=2/2"? The same with "1/1"?</p>
| <p>It shows how many of the containers in the pod are considered ready. Some containers can start faster than others, or have readiness checks that are not yet fulfilled (or are still within their initial delay). In such cases fewer containers will be ready in the pod than the total number (i.e. 1/2), and hence the whole pod will not be considered ready.</p>
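<p>As an illustration (a hedged sketch, not taken from your pod spec), a container with a readiness probe like the one below will be counted as not ready until the probe succeeds, which is exactly what the READY column reflects:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
</code></pre>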
|
<p>I've followed a few guides, and I've got CI set up with Google Container Engine and Google Container Registry. The problem is my updates aren't being applied to the deployment.</p>
<p>So this is my deployment.yml which contains a Kubernetes Service and Deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my_app
labels:
app: my_app
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
selector:
app: my_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my_app
spec:
replicas: 1
template:
metadata:
labels:
app: my_app
spec:
containers:
- name: node
image: gcr.io/me/my_app:latest
ports:
- containerPort: 3000
resources:
requests:
memory: 100
- name: phantom
image: docker.io/wernight/phantomjs:2.1.1
command: ["phantomjs", "--webdriver=8910", "--web-security=no", "--load-images=false", "--local-to-remote-url-access=yes"]
ports:
- containerPort: 8910
resources:
requests:
memory: 1000
</code></pre>
<p>As part of my CI process I run a script which updates the image in the Google Cloud registry, then runs <code>kubectl apply -f /deploy/deployment.yml</code>. Both tasks succeed, and I'm notified that the Deployment and Service have been updated:</p>
<pre><code>2016-09-28T14:37:26.375Zgoogleclouddeploymentservice "my_app" configured
2016-09-28T14:37:27.370Zgoogleclouddeploymentdeployment "my_app" configured
</code></pre>
<p>Since I've included the <code>:latest</code> tag on my image, I thought the image would be <a href="http://kubernetes.io/docs/user-guide/images/#updating-images" rel="nofollow">downloaded each time</a> the deployment is updated. According to the <a href="http://kubernetes.io/docs/user-guide/deployments/#strategy" rel="nofollow">docs</a> a <code>RollingUpdate</code> should also be the default strategy.</p>
<p>However, when I run my CI script which updates the deployment - the updated image isn't downloaded and the changes aren't applied. What am I missing? I'm assuming that since nothing is changing in <code>deployment.yml</code>, no update is being applied. How do I get Kubernetes to download my updated image and use a <code>RollingUpdate</code> to deploy it?</p>
| <p>You can force an update of a deployment by changing any field, such as a label. So in my case, I just added this at the end of my CI script:</p>
<pre><code>kubectl patch deployment fb-video-extraction -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
|
<p>I'm using gRPC with Python as client/server inside kubernetes pods...
I would like to be able to launch multiple pods of the same type (gRPC servers) and let the client connect to them (randomly).</p>
<p>I dispatched 10 pods of the server and set up a 'service' to target them. Then, in the client, I connected to the DNS name of the service - meaning Kubernetes should do the load-balancing and direct me to a random server pod.
In reality, the client calls the gRPC functions (which works well), but when I look at the logs I see that all calls are going to the same server pod.</p>
<p>I presume the client is doing some kind of DNS caching which leads to all calls being sent to the same server. Is this the case? Is there any way to disable it and have the same client stub make a "new" call, fetching a new IP from DNS with each call?</p>
<p>I am aware of the overhead I might cause if it queries the DNS server each time, but distributing the load is much more important for me at the moment.</p>
| <p>Let me take the opportunity to answer by describing how things are supposed to work.</p>
<p>The way client-side LB works in the gRPC C core (the foundation for all but the Java and Go flavors or gRPC) is as follows (the authoritative doc can be found <a href="https://github.com/grpc/grpc/blob/master/doc/load-balancing.md#load-balancing-in-grpc" rel="noreferrer">here</a>):</p>
<p>Client-side LB is kept simple and "dumb" on purpose. The way we've chosen to implement complex LB policies is through an external LB server (as described in the aforementioned doc). You aren't concerned with this scenario. Instead, you are simply creating a channel, which will use the (default) <em>pick-first</em> LB policy.</p>
<p>The input to an LB policy is a list of resolved addresses. When using DNS, if foo.com resolves to <code>[10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4]</code>, the policy will try to establish a connection to all of them. The first one to successfully connect will become the chosen one <em>until it disconnects</em>. Thus the name "pick-first". A longer name could have been "pick first and stick with it for as long as possible", but that made for a very long file name :). If/when the picked one gets disconnected, the pick-first policy will move over to returning the next successfully connected address (internally referred to as a "connected subchannel"), if any. Once again, it'll continue to choose this connected subchannel for as long as it stays connected. If all of them fail, the call would fail.</p>
<p>The problem here is that DNS resolution, being intrinsically pull based, is only triggered 1) at channel creation and 2) upon disconnection of the chosen connected subchannel. </p>
<p>As of right now, a hacky solution would be to create a new channel for every request (very inefficient, but it'd do the trick given your setup).</p>
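<p>A minimal sketch of that workaround in Python (the service DNS name, port, and generated stub names are assumptions, not from your setup):</p>
<pre><code>import grpc

# Hypothetical generated module for your service; substitute your own.
import my_service_pb2_grpc as pb2_grpc

SERVICE_ADDR = 'my-grpc-service.default.svc.cluster.local:50051'

def call_backend(request):
    # A fresh channel per call means a fresh TCP connection, so kube-proxy
    # (or DNS, if the service is headless) can send each call to a different
    # backend pod -- inefficient, but it spreads the load.
    channel = grpc.insecure_channel(SERVICE_ADDR)
    stub = pb2_grpc.MyServiceStub(channel)
    return stub.MyMethod(request)
</code></pre>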
<p>Changes coming in Q1 2017 (see <a href="https://github.com/grpc/grpc/issues/7818" rel="noreferrer">https://github.com/grpc/grpc/issues/7818</a>) will allow clients to choose a different LB policy, namely Round Robin. In addition, we may look into introducing a "randomize" bit to that client config, which would shuffle the addresses prior to doing Round-Robin over them, effectively achieving what you intend.</p>
|
<p>I'd like to parse ingress nginx logs using fluentd in Kubernetes. That was quite easy in Logstash, but I'm confused regarding fluentd syntax.</p>
<p>Right now I have the following rules:</p>
<pre><code><source>
type tail
path /var/log/containers/*.log
pos_file /var/log/es-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%NZ
tag kubernetes.*
format json
read_from_head true
keep_time_key true
</source>
<filter kubernetes.**>
type kubernetes_metadata
</filter>
</code></pre>
<p>And as a result I get this log but it is unparsed:</p>
<pre><code>127.0.0.1 - [127.0.0.1] - user [27/Sep/2016:18:35:23 +0000] "POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1475000747571 HTTP/2.0" 200 37593 "http://localhost/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Centos Chromium/52.0.2743.116 Chrome/52.0.2743.116 Safari/537.36" 951 0.408 10.64.92.20:5601 37377 0.407 200
</code></pre>
<p>I'd like to apply filter rules to be able to search by IP address, HTTP method, etc in Kibana. How can I implement that?</p>
| <p>Pipelines are quite different in Logstash and fluentd, and it took some time to build a working Kubernetes -> Fluentd -> Elasticsearch -> Kibana solution.</p>
<p>The short answer to my question is to install the <strong>fluent-plugin-parser</strong> plugin (I wonder why it doesn't ship with the standard package) and put this rule after the <strong>kubernetes_metadata</strong> filter:</p>
<pre><code><filter kubernetes.var.log.containers.nginx-ingress-controller-**.log>
type parser
format /^(?<host>[^ ]*) (?<domain>[^ ]*) \[(?<x_forwarded_for>[^\]]*)\] (?<server_port>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+[^\"])(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")? (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?:\[(?<proxy_upstream_name>[^\]]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*)$/
time_format %d/%b/%Y:%H:%M:%S %z
key_name log
types server_port:integer,code:integer,size:integer,request_length:integer,request_time:float,upstream_response_length:integer,upstream_response_time:float,upstream_status:integer
reserve_data yes
</filter>
</code></pre>
<p>Long answer with lots of examples is here: <a href="https://github.com/kayrus/elk-kubernetes/" rel="noreferrer">https://github.com/kayrus/elk-kubernetes/</a></p>
|
| <p>I'm writing an application to monitor a kubernetes cluster running on Google Container Engine. On the host where my application is deployed, the <code>kubectl</code> and <code>gcloud</code> CLIs are not available, nor are they allowed to be installed. So I am trying to do everything through the REST API.</p>
<p>For creating the cluster through REST, I can use <a href="https://cloud.google.com/container-engine/reference/rest/" rel="nofollow noreferrer">GCE Rest API</a> with bearer token retrieved from <a href="https://developers.google.com/oauthplayground/" rel="nofollow noreferrer">Google OAuth Playground</a>. Something like:</p>
<pre><code>curl -i -X GET -H "Accept: application/json" -H "Content-Type: application/json" -H "Content-Length: 0" -H "Authorization: Bearer $MyBearerToken https://container.googleapis.com/v1/projects/$PROJECT_ID/zones/$ZONE/serverconfig
</code></pre>
<p>I can also find <a href="http://kubernetes.io/kubernetes/third_party/swagger-ui/" rel="nofollow noreferrer">Kubernetes REST API reference here</a>. So my question is: <strong>How do I retrieve, say pod information, from my GCE Kubernetes cluster, using REST api and REST api only?</strong></p>
<p>I tried with <code>kubectl get pods --v=8</code>, and it's using <code>GET https://${Kubenetes_IP}/api/v1/namespaces/default/pods</code>. But when I use the same API endpoint with curl and my GCE bearer token, it gives me an <code>Unauthorized</code> error message.</p>
<pre><code># curl --insecure -H "Authorization: Bearer $MyBearerToken" https://${Kubenetes_IP}/api/v1/namespaces/default/pods
Unauthorized
</code></pre>
<p>I am guessing because I need to use a different bearer token, or some other authentication method. I am wondering if anyone got a quick programtic one-liner? (Without resorting to <code>kubectl</code> or <code>gcloud</code>)</p>
<hr>
<p><strong>Reference</strong></p>
<p><a href="https://stackoverflow.com/a/34598405/3806343">This answer</a> affirms that there <em>is</em> a way using bearer token, but didn't give a pointer or example</p>
<p><a href="https://stackoverflow.com/a/28700532/3806343">This answer</a> also seems promising, but all the link provided are broken (and api are deprecated as well)</p>
<p><a href="http://blog.ctaggart.com/2016/09/accessing-kubernetes-api-on-google.html" rel="nofollow noreferrer">This answer</a> assumes <code>kubectl</code> and <code>gcloud</code> are installed, which is not allowed in my current use case.</p>
| <p>The token can be retrieved from the <a href="https://developers.google.com/oauthplayground/" rel="nofollow">Google OAuth Playground</a>.</p>
<p>Kubernetes can be reached by the following <code>curl</code> command via REST API</p>
<pre><code># curl --insecure -H "Authorization: Bearer $MyBearerToken" https://${Kubenetes_IP}/api/v1/namespaces/default/pods
</code></pre>
<p>The Kubernetes master IP can be retrieved with <code>kubectl get pods --v=8</code>, and it can probably also be found somewhere in the GCE web console.</p>
<p>Full Kubernetes REST API can be found <a href="http://kubernetes.io/kubernetes/third_party/swagger-ui/" rel="nofollow">here</a></p>
<p>Make sure the token has not yet expired, and I think right now the default TTL is 1 hour.</p>
|
<p>Like most applications we have three distinct running environments:</p>
<ul>
<li>Production</li>
<li>Staging / QA</li>
<li>Development</li>
</ul>
<p>These are all basically configured through ENV variables.</p>
<p><strong>How is it best to run all the services/pods/containers in our environments? Through labels? or namespaces?</strong></p>
| <p>I am not sure if there is an official best practice, but I have always preferred to separate environments using namespaces for the following reasons:</p>
<ol>
<li><p>It allows you to use the exact same YAML files for your deployments, services, etc in all three environments. To switch environments, all you have to do is add <code>--namespace=${YOUR_NS}</code> to your kubectl commands or even just specify one context for every namespace in your kubectl configuration, so you can say something like <code>kubectl config use-context production</code>. Check out the <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_config/" rel="nofollow" title="docs">docs</a>!</p></li>
<li><p>You can use <a href="http://kubernetes.io/docs/admin/resourcequota/" rel="nofollow" title="Resource Quotas">Resource Quotas</a> to put limits on the amount of compute resources that should be available to each environment.</p></li>
<li><p>You could use <a href="http://kubernetes.io/docs/admin/authorization/" rel="nofollow" title="RBAC">RBAC</a> to control access to your environments. For example you could allow only a small group of people to make changes to the production environment, but have all developers do whatever they want in your development environment.</p></li>
</ol>
<p>Inside of every namespace you could then use labels to structure your app into different tiers, for example. This configuration would then be the same in every environment.</p>
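<p>To set up the per-namespace contexts mentioned in the first point, something along these lines could be used (a sketch — the cluster and user names must match entries that already exist in your kubeconfig):</p>
<pre><code>kubectl config set-context production --cluster=my-cluster --user=my-user --namespace=production
kubectl config set-context staging --cluster=my-cluster --user=my-user --namespace=staging
kubectl config use-context production
</code></pre>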
|
<p>My goal is to make my web application (deployed on Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the <code>service.alpha.kubernetes.io/external-traffic: OnlyLocal</code> service annotation introduced in 1.4 is not applicable for me.</p>
<p>Looking for alternatives, I've found <a href="https://stackoverflow.com/questions/32112922/how-to-read-client-ip-addresses-from-http-requests-behind-kubernetes-services">this question</a> which is proposing to set up an Ingress to achieve my goal. So, I've set up the Ingress and <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="noreferrer">the NginX Ingress Controller</a>. The deployment went smoothly and I was able to connect to my web app via the Ingress Address and port 80. However in the logs I still see cluster-internal IP (from 172.16.0.0/16) range - and that means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what do I need to configure in addition to the above to make it work?</p>
<p>My Ingress' config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myWebApp
spec:
backend:
serviceName: myWebApp
servicePort: 8080
</code></pre>
| <p>As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the <a href="http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt" rel="noreferrer">Proxy protocol</a> (the link points to the HAProxy documentation, but Nginx also supports it).</p>
<p>For this to work however, the upstream server (meaning the <code>myWebApp</code> service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented <a href="https://www.nginx.com/resources/admin-guide/proxy-protocol/" rel="noreferrer">in the official documentation</a>.</p>
<p>According to the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#proxy-protocol" rel="noreferrer">Nginx Ingress Controller's documentation</a>, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-ingress-controller
data:
use-proxy-protocol: "true"
</code></pre>
<p>Specify the name of the ConfigMap in your Ingress controller manifest, by adding the <code>--nginx-configmap=<insert-configmap-name></code> flag to the command-line arguments.</p>
|
<p>I have a small kubernetes (1.3) cluster (basically one node) and would like to install gogs in there. Gogs is "installed" using Helm. I do have the following templates in my helm chart:</p>
<ul>
<li>Deployment (using image gogs:0.9.97, exposing containerPort 3000 (http) as well as 2222 (ssh))</li>
<li>Ingress (this is only for Port 80)</li>
<li>Service (Port 80 (http) as well as 2222 (ssh))</li>
</ul>
<p>The http stuff is configured correctly and I can access the container as well as the contained git repositories via http without any trouble. Now I would like to use ssh for the git connections as well. I tried the --tcp-services-configmap configuration of nginx-ingress, but to no avail. The log of the Ingress Controller states that the configured service does not have any active endpoints, which I find rather strange, since the http stuff is working.</p>
<p><strong>UPDATE</strong></p>
<p>I just did an nmap against the DNS name and port 2222 is not open. This looks like a configuration problem. The port is open on the container (tested by connecting to the cluster IP from the node).</p>
<p>I guess the problem is that the log of the Ingress Controller states that the configured service does not have any active endpoints.</p>
<p>My Service configuration is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ template "fullname" . }}
spec:
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
- name: ssh
port: 2222
targetPort: ssh
protocol: TCP
selector:
app: {{ template "fullname" . }}
</code></pre>
<p>The config-map is:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-configmap-ssh
data:
2222: "default/{{ template "fullname" . }}:2222"
</code></pre>
| <p>Answering my own question. This turned out to be a configuration problem caused by my own mistake.</p>
<p>Basically, I hadn't posted the ReplicationController of the Nginx-Ingress resource, and that one was missing port 2222. It now looks like this:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: {{ template "fullname" . }}
labels:
k8s-app: "{{ .Chart.Name }}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
replicas: 1
selector:
k8s-app: "{{ .Chart.Name }}"
template:
metadata:
labels:
name: {{ template "fullname" . }}
k8s-app: "{{ .Chart.Name }}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: "{{ .Chart.Name }}"
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
# we do need to expose 2222 to be able to access this port via
# the tcp-services
- containerPort: 2222
hostPort: 2222
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap-ssh
</code></pre>
|
| <p>I am new to Kubernetes. In Docker I can use the official mongo image and run <code>docker run --name some-mongo -d mongo --auth</code>.</p>
<p>I can then connect to it and create a default db user. How can I pass <code>--auth</code> in a Kubernetes controller spec?</p>
<p>Replication Controller</p>
<pre><code># db-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: mongo
name: mongo-controller
spec:
replicas: 1
template:
metadata:
labels:
name: mongo
spec:
containers:
- image: mongo
name: mongo
ports:
- name: mongo
containerPort: 27017
hostPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage
gcePersistentDisk:
pdName: mongo-disk
fsType: ext4
</code></pre>
| <p>You can configure a pod to run a specific command with any arguments; in your case it should be something like this:</p>
<pre><code>containers:
- image: mongo
name: mongo
command: ["mongo", "--auth"]
</code></pre>
<p>or </p>
<pre><code>containers:
- image: mongo
name: mongo
args: ["--auth"]
</code></pre>
<p>more details could be found <a href="http://kubernetes.io/docs/user-guide/configuring-containers/" rel="noreferrer">here</a></p>
|
<p>Most container services have some sort of built in service discovery mechanism. So why is it that someone would still incorporate Consul into their setup? I know that Consul provides other benefits besides service discovery, such as K/V store, Consul templates, and service monitoring, however none of these seem like a big enough benefit.</p>
| <p>It's simple - it depends on your application. If it needs features that Consul has and other systems don't, then you can go for it.</p>
<p>Besides that, there are a lot of HashiCorp products tightly integrated with Consul for HA: Vault, Nomad and Terraform. Consul can also be used as a Raft-based alternative to ZooKeeper; we use it that way.</p>
<p>Another consideration is that you may want an extra layer of indirection in your system; a Consul cluster can, for example, be used to verify connectivity between the various remote parts of it.</p>
<p>So it's not about how big the advantage is, it's about your specific requirements and constraints at the moment.</p>
|
| <p>I am deploying Kubernetes 1.4 on Ubuntu 16 on a Raspberry Pi 3, following the instructions at <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/kubeadm/</a>. The master starts and the minion joins without a problem, but when I add Weave, kube-dns won't start. Here are the pods:</p>
<pre><code>k8s@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-master 1/1 Running 1 23h
kube-system kube-apiserver-k8s-master 1/1 Running 3 23h
kube-system kube-controller-manager-k8s-master 1/1 Running 1 23h
kube-system kube-discovery-1943570393-ci2m9 1/1 Running 1 23h
kube-system kube-dns-4291873140-ia4y8 0/3 ContainerCreating 0 23h
kube-system kube-proxy-arm-nfvvy 1/1 Running 0 1h
kube-system kube-proxy-arm-tcnta 1/1 Running 1 23h
kube-system kube-scheduler-k8s-master 1/1 Running 1 23h
kube-system weave-net-4gqd1 0/2 CrashLoopBackOff 54 1h
kube-system weave-net-l758i 0/2 CrashLoopBackOff 44 1h
</code></pre>
<p>The events log doesn't show anything. Getting logs for kube-dns doesn't return anything either.</p>
<p>What can I do to debug?</p>
| <p><code>kube-dns</code> won't start until the network is up.</p>
<p>Look in the <code>kubelet</code> logs on each machine for more information about the crash that is causing the CrashLoopBackoff.</p>
<p><del>How did you get ARM images for Weave Net? The <code>weaveworks/weave-kube</code> image on DockerHub is only built for x64.</del></p>
<p>Edit: as @pidster says <a href="https://www.weave.works/weave-net-1-9-released-encrypted-fast-datapath-arm/" rel="nofollow noreferrer">Weave Net now supports ARM</a></p>
|
| <p>I'm trying to get <code>kubectl</code> running on a VM. I followed the steps given <a href="https://coreos.com/kubernetes/docs/latest/configure-kubectl.html" rel="nofollow">here</a> and could go through the installation. I copied my local Kubernetes config (from <code>/Users/me/.kube/config</code>) to the VM in the <code>.kube</code> directory. However, when I run any command such as <code>kubectl get nodes</code>, it returns <code>error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information</code></p>
<p>Is there a way I can run <code>kubectl</code> on a VM ?</p>
| <p>To use kubectl to talk to Google Container Engine cluster in a non-Google VM, you can create a user-managed <a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow">IAM Service Account</a>, and use it to authenticate to your cluster:</p>
<pre><code># Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME
# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL
# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer
# Configure Application Default Credentials (what kubectl uses) to use that service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE
</code></pre>
<p>And then go ahead and use kubectl as you normally would.</p>
|
<p>If I set to autoscale a deployment using the kubectl autoscale command (<a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/" rel="noreferrer">http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/</a>), how can I turn it off and go back to manual scaling?</p>
| <p>When you autoscale, it creates a <a href="http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/" rel="noreferrer">HorizontalPodAutoscaler</a>.</p>
<p>You can delete it by:</p>
<p><code>kubectl delete hpa NAME-OF-HPA</code>.</p>
<p>You can get <code>NAME-OF-HPA</code> from:</p>
<p><code>kubectl get hpa</code>.</p>
|
<p>I'm trying to get a GKE ingress to require basic auth like this <a href="https://github.com/kubernetes/contrib/blob/45bdc249bc27bdf427498b42859e1a98d634bff0/ingress/controllers/nginx/examples/auth/README.md" rel="noreferrer">example from github.</a></p>
<p>The ingress works fine and routes to the service, but the authentication isn't working: it allows all traffic right through. Has GKE not rolled this feature out yet? Is something obviously wrong in my specs?</p>
<p>Here's the ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: super-ingress
annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: basic-auth
ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
rules:
- host: zzz.host.com
http:
paths:
- backend:
serviceName: super-service
servicePort: 9000
path: /*
</code></pre>
<p>And the <code>basic-auth</code> secret:</p>
<pre><code>$ kubectl get secret/basic-auth -o yaml
apiVersion: v1
data:
auth: XXXXXXXXXXXXXXXXXXX
kind: Secret
metadata:
creationTimestamp: 2016-10-03T21:21:52Z
name: basic-auth
namespace: default
resourceVersion: "XXXXX"
selfLink: /api/v1/namespaces/default/secrets/basic-auth
uid: XXXXXXXXXXX
type: Opaque
</code></pre>
<p>Any insights are greatly appreciated!</p>
| <p>The example you linked to is for nginx ingress controller. GKE uses <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce" rel="noreferrer">GLBC</a>, which doesn't support auth.</p>
<p>You can <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#deployment" rel="noreferrer">deploy</a> an nginx ingress controller in your gke cluster. Note that you need to <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers" rel="noreferrer">annotate</a> your ingress to avoid the GLBC claiming the ingress. Then you can expose the nginx controller directly, or create a glbc ingress to redirect traffic to the nginx ingress (see this <a href="https://gist.github.com/bprashanth/8975c55789d201c7d8a7575f5c2a4565" rel="noreferrer">snippet</a> written by bprashanh).</p>
|
| <p>I am probably missing something basic. The <code>kubectl logs</code> command usage is the following:</p>
<pre><code>"kubectl logs [-f] [-p] POD [-c CONTAINER] [options]"
</code></pre>
<p>list of my pods is the following:</p>
<pre><code>ubuntu@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 24m
kube-system kube-apiserver-master 1/1 Running 0 24m
kube-system kube-controller-manager-master 1/1 Running 0 24m
kube-system kube-discovery-982812725-3kt85 1/1 Running 0 24m
kube-system kube-dns-2247936740-kimly 3/3 Running 0 24m
kube-system kube-proxy-amd64-gwv99 1/1 Running 0 20m
kube-system kube-proxy-amd64-r08h9 1/1 Running 0 24m
kube-system kube-proxy-amd64-szl6w 1/1 Running 0 14m
kube-system kube-scheduler-master 1/1 Running 0 24m
kube-system kubernetes-dashboard-1655269645-x3uyt 1/1 Running 0 24m
kube-system weave-net-4g1g8 1/2 CrashLoopBackOff 7 14m
kube-system weave-net-8zdm3 1/2 CrashLoopBackOff 8 20m
kube-system weave-net-qm3q5 2/2 Running 0 24m
</code></pre>
<p>I assume POD for the logs command is anything from the second "NAME" column above. So I try the following commands.</p>
<pre><code>ubuntu@master:~$ kubectl logs etcd-master
Error from server: pods "etcd-master" not found
ubuntu@master:~$ kubectl logs weave-net-4g1g8
Error from server: pods "weave-net-4g1g8" not found
ubuntu@master:~$ kubectl logs weave-net
Error from server: pods "weave-net" not found
ubuntu@master:~$ kubectl logs weave
Error from server: pods "weave" not found
</code></pre>
<p><strong>So, what is the POD in the logs command?</strong></p>
<p>I have got the same question about services as well. <strong>How do I identify a SERVICE to supply to a command, for example the 'describe' command?</strong></p>
<pre><code>ubuntu@master:~$ kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 100.64.0.1 <none> 443/TCP 40m
kube-system kube-dns 100.64.0.10 <none> 53/UDP,53/TCP 39m
kube-system kubernetes-dashboard 100.70.83.136 <nodes> 80/TCP 39m
ubuntu@master:~$ kubectl describe service kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
ubuntu@master:~$ kubectl describe services kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
</code></pre>
<p>Also, <strong>is it normal that weave-net-8zdm3 is in CrashLoopBackOff state?</strong> It seems I have got one for each connected worker. <strong>If it is not normal, how can I fix it?</strong> I have found a similar question here: <a href="https://stackoverflow.com/questions/39810713/kube-dns-and-weave-net-not-starting">kube-dns and weave-net not starting</a> but it does not give a practical answer.</p>
<p>Thanks for your help!</p>
| <p>It seems you are running your pods in a different namespace than <code>default</code>.</p>
<p><code>ubuntu@master:~$ kubectl get pods --all-namespaces</code> returns your pods, but <code>ubuntu@master:~$ kubectl logs etcd-master</code> returns not found because it only looks in the <code>default</code> namespace. Pass the namespace explicitly, e.g. <code>kubectl logs etcd-master --namespace=kube-system</code> (the pods in your listing live in the <code>kube-system</code> namespace).</p>
<p>The same thing goes for your services.</p>
|
<p>How do I get the pod associated with a job via the Kubernetes REST API? I cannot figure out how the two are related/linked other than by similar (but not the same) generated names.</p>
| <p>You can send a LIST request for pods, with <a href="https://github.com/kubernetes/apimachinery/blob/d16981aedf33f88b7434bd3bdbcc2e1f940aecd0/pkg/apis/meta/v1/types.go#L355" rel="nofollow noreferrer">ListOptions.LabelSelector</a> set to the <a href="https://github.com/kubernetes/api/blob/dcce3486da333605791aa8b98536ca01893194d6/batch/v1/types.go#L99" rel="nofollow noreferrer">selector of the Job</a>.</p>
<p>If you are not using the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">go client</a> for kubernetes, you need to spell the LabelSelector as a query parameter, e.g., <code>https://<host_address>/api/v1/namespaces/default/pods?labelSelector=component%3Dspark-master</code></p>
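<p>A hedged sketch for the Job case (the namespace, job name, and bearer token are placeholders; <code>job-name</code> is the label the Job controller normally adds to the pods it creates — double-check the actual labels on your pods first):</p>
<pre><code>curl -s -H "Authorization: Bearer $TOKEN" \
  "https://<host_address>/api/v1/namespaces/default/pods?labelSelector=job-name%3Dmy-job"
</code></pre>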
|
<p>I'm trying to start FIWARE Orion in Kubernetes.
Here is the manifest:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: broker
spec:
replicas: 1
template:
metadata:
labels:
name: broker
spec:
containers:
- name: mongo
image: waziup/mongodb:latest
args: ["--nojournal"]
ports:
- containerPort: 27017
- name: orion
image: waziup/orion:latest
ports:
- containerPort: 1026
args: ["-dbhost", "localhost:27017", "-logLevel", "INFO"]
- name: cygnus
image: waziup/cygnus:latest
ports:
- containerPort: 8081
- containerPort: 5050
---
apiVersion: v1
kind: Service
metadata:
name: broker
labels:
name: broker
spec:
type: LoadBalancer
ports:
- port: 1026
targetPort: 8026
selector:
name: broker
</code></pre>
<p>To be deployed with:</p>
<pre><code>kubectl apply -f manifest.yaml
</code></pre>
<p>The service is exposed:</p>
<pre><code>$ kubectl describe svc broker
Name: broker
Namespace: default
Labels: name=broker
Selector: name=broker
Type: LoadBalancer
IP: 100.69.249.225
Port: <unset> 1026/TCP
NodePort: <unset> 30458/TCP
Endpoints: 10.40.0.13:8026
Session Affinity: None
No events.
</code></pre>
<p>However it is not responding:</p>
<pre><code>curl <my public IP>:30458/version
</code></pre>
<p>The command above hangs forever. If I run it directly on the master node, it works.
Any ideas?
It seems that the TCP connection is not established... Orion will not send back the ACK, or it will not be routed.</p>
| <p>The problem was linked to Kubernetes networking.
It seems that adding and then deleting the "sock shop" does not remove the network "Deny Policy".
The solution is to run:</p>
<p><code>kubectl annotate namespace default net.beta.kubernetes.io/network-policy-</code></p>
<p>That will remove old policies.</p>
|
<p>I am following the <a href="http://kubernetes.io/docs/hellonode/" rel="nofollow">hellonode</a> tutorial, deploying to Google Container Engine, but running into the error below:</p>
<pre><code>kubectl run simple-gke-server --image=us.gcr.io/cloud-assets-henry/simple-gke-server:v1 --port=8888
Error from server: the server does not allow access to the requested resource (post replicationcontrollers)
</code></pre>
<p>Even though I am able to get the credentials:</p>
<pre><code>gcloud container clusters get-credentials simplecluster
</code></pre>
<p>I get this problem, even when trying to get version info.</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"darwin/amd64"}
Couldn't read server version from server: the server does not allow access to the requested resource
</code></pre>
<p>I did have to update my kubectl to 1.4.0, which matches the version of my cluster.</p>
<p>I have also initialized my gcloud with a config, and also did auth login.</p>
<p>Is there anything else that I can do?</p>
| <p>kubectl uses <a href="https://developers.google.com/identity/protocols/application-default-credentials">Application Default Credentials</a> for authenticating to GKE clusters. It is possible that your Application Default Credentials are configured for a different user than your gcloud credentials if you have previously configured ADC.</p>
<p>Try running <code>gcloud auth application-default login</code>, and make sure that the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable isn't pointing somewhere unexpected.</p>
|
| <p>Trying several options to resolve the issue with weave-net (<a href="https://stackoverflow.com/questions/39872332/how-to-fix-weave-net-crashloopbackoff-for-the-second-node">How to fix weave-net CrashLoopBackOff for the second node?</a>), I have decided to try calico instead of weave-net. The Kubernetes documentation says I need only one or the other. The command (as per the documentation here: <a href="https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm" rel="nofollow noreferrer">https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm</a>) fails:</p>
<pre><code>vagrant@vm-master:~$ sudo kubectl create -f https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
yaml: line 6: mapping values are not allowed in this context
</code></pre>
<p><strong>What am I doing wrong? Is it a known issue? How can I fix or work around it?</strong></p>
| <p>You need to reference the raw YAML file in your command, instead of the full GitHub HTML document:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/projectcalico/calico-containers/master/docs/cni/kubernetes/manifests/kubeadm/calico.yaml
</code></pre>
|
| <p>This is a similar question to <a href="https://stackoverflow.com/questions/38733801/unable-to-access-kubernetes-dashboard">this</a> but I could not find a resolution in it. I have set up a Kubernetes cluster with CoreOS, 2 masters and 3 nodes on AWS, by following this <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">step by step guide</a>. The k8s version is 1.4.0 and all servers are in a private subnet, so I built a bastion VPN server in a different VPC and connect to the k8s cluster via the bastion server with VPC peering.</p>
<p>It basically works pretty well, but I noticed that I cannot access the Kubernetes dashboard from a web browser.
These are my Kubernetes dashboard svc and rc yaml files.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
---
apiVersion: v1
kind: ReplicationController
metadata:
name: kubernetes-dashboard-v1.4.0
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
</code></pre>
<p>If I just access <code>https://master-host/ui</code>, it returns an authentication error. I understand that and it is not a problem, because the API server needs authentication. But when I run <code>kubectl proxy --port=8001</code> and then access <code>http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/</code>, the browser returns</p>
<pre><code>Error: 'dial tcp 10.10.93.3:9090: i/o timeout'
Trying to reach: 'http://10.10.93.3:9090/'
</code></pre>
<p>while a request to the API server itself works fine; for example <code>http://localhost:8001/static</code> returns:</p>
<pre><code>{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1alpha1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1alpha1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/ping",
"/logs",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
}
</code></pre>
<p>It looks like pods on the master cannot connect to pods on the nodes. From busybox on a node,</p>
<pre><code>kubectl exec busybox -- wget 10.10.93.3:9090
</code></pre>
<p>can fetch index.html, so node-to-node communication should be OK.</p>
<p>A result of service describe:</p>
<pre><code>❯❯❯ kubectl describe svc kubernetes-dashboard --namespace=kube-system ⏎ master ⬆ ✭ ✚ ✱ ➜ ◼
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.11.0.82
Port: <unset> 80/TCP
Endpoints: 10.10.93.9:9090
Session Affinity: None
No events.
</code></pre>
<p>What else am I missing? If I use a NodePort I can see the dashboard, but I don't want to expose the dashboard that way. I suspect there is either some missing port I have to open in the AWS security group settings, or some flanneld/docker/CNI network setting has gone wrong and is causing the issue.</p>
<p>This is a log of the dashboard pod.</p>
<pre><code>Starting HTTP server on port 9090
Creating API server client for https://10.11.0.1:443
Successful initial request to the apiserver, version: v1.4.0+coreos.1
Creating in-cluster Heapster client
</code></pre>
<p>so it looks like nothing actually reached the dashboard.</p>
<p><em>[Updated]</em>
I found these logs on the api-server pod.</p>
<pre><code> proxy.go:186] Error proxying data from backend to client: write tcp [master-ip-address]:443->[vpn-ip-address]:61980: write: connection timed out
</code></pre>
<p>So obviously something went wrong when proxying between the API server and the VPN server.</p>
| <p>Ah... I finally noticed that there was a mistake in my AWS security settings. I had opened port 8472 for flanneld's master => node communication as TCP, but it should be UDP. I knew it should be UDP, so it took a very long time until I re-checked the rule and noticed the mistake.</p>
<p>After I updated the setting, <code>kubectl proxy</code> instantly worked and I can now see the Kubernetes dashboard.</p>
|
| <p>I'm creating a (TLS-enabled) ingress resource using the following configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app-apis
spec:
tls:
- secretName: tls-secret
backend:
serviceName: my-web-service
servicePort: 80
</code></pre>
<p>A new static IP address is provisioned every time.
Is it possible to reuse an existing one?</p>
<p>(I'm using Kubernetes running on GKE)</p>
| <p>You can specify the IP address in an annotation on the Ingress (it looks like you specify it by name rather than IP address). This is only picked up by the GCE controller so don't expect it to work anywhere other than GCE/GKE.</p>
<p><a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/controller/utils.go#L48" rel="nofollow">https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/controller/utils.go#L48</a></p>
<p>Something like this should work:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
"kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
...
</code></pre>
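<p>For completeness, the named global address referenced by the annotation can be reserved beforehand with something like this (<code>my-ip-name</code> is a placeholder):</p>
<pre><code>gcloud compute addresses create my-ip-name --global
</code></pre>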
|
| <p>I am trying to connect to Google Container Engine from my local machine using the <code>gcloud</code> SDK, but I am getting the error below.</p>
<pre><code>C:\Program Files (x86)\Google\Cloud SDK>gcloud container clusters get-credential
s cluster-2 --zone us-central1-a --project myapp-00000
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) environment variable HOME or
KUBECONFIG must be set to store credentials for kubectl
</code></pre>
<p>I have checked the HOME location; there is no .kube folder created and no environment variable set by default, so I created the <code>KUBECONFIG</code> environment variable myself. After that I am getting the error below:</p>
<pre><code>ERROR: gcloud crashed (OSError): [Errno 13] Permission denied: 'C:\\Tool\\config'
</code></pre>
<p>I have started the <code>gcloud</code> SDK as admin and it has all the correct permissions.</p>
<p><strong>EDIT</strong></p>
<p>I am using the versions below (which are the latest as of today):</p>
<pre><code>Google Cloud SDK 129.0.0
kubectl
kubectl-windows-x86_64 1.4.0
C:\Program Files (x86)\Google\Cloud SDK>kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0",
</code></pre>
| <p>I take it you set <code>KUBECONFIG</code> env to 'C:\Tool\config'? That error is gcloud failing to write due to missing admin privileges; I don't know if you need to run the shell as admin. You might also try the <code>HOME</code> directory. Note that gcloud will attempt to create any missing directories on the path to the kubeconfig file.</p>
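<p>As a sketch of what that could look like in a Windows command prompt (the paths are assumptions — point them at a directory your user can actually write to):</p>
<pre><code>set HOME=%USERPROFILE%
set KUBECONFIG=%USERPROFILE%\.kube\config
gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project myapp-00000
</code></pre>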
|
| <p>I am using <code>yo jhipster:kubernetes</code> to generate the Kubernetes file. I want to set <code>lower_case_table_names=1</code> for <code>MySQL</code> to make <code>MySQL</code> case-insensitive. The file below was generated by the command:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-mysql
spec:
replicas: 1
template:
metadata:
labels:
app: app-mysql
spec:
volumes:
- name: data
emptyDir: {}
containers:
- name: mysql
image: mysql:5.6.22
env:
- name: MYSQL_USER
value: root
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: 'yes'
- name: MYSQL_DATABASE
value: app
command:
- mysqld
- --lower_case_table_names=1
- --skip-ssl
ports:
- containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
name: app-mysql
spec:
selector:
app: app-mysql
ports:
port: 3306
</code></pre>
<p><code>MySQL</code> is not starting; I am getting the error below on <code>MySQL</code> startup on a Linux machine, due to the command node in the file:</p>
<pre><code>2016-10-09 10:08:35 1 [Note] Plugin 'FEDERATED' is disabled.
mysqld: Table 'mysql.plugin' doesn't exist
2016-10-09 10:08:35 1 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2016-10-09 10:08:35 1 [Note] InnoDB: Using atomics to ref count buffer pool pages
2016-10-09 10:08:35 1 [Note] InnoDB: The InnoDB memory heap is disabled
2016-10-09 10:08:35 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-10-09 10:08:35 1 [Note] InnoDB: Memory barrier is not used
2016-10-09 10:08:35 1 [Note] InnoDB: Compressed tables use zlib 1.2.7
2016-10-09 10:08:35 1 [Note] InnoDB: Using Linux native AIO
2016-10-09 10:08:35 1 [Note] InnoDB: Using CPU crc32 instructions
2016-10-09 10:08:35 1 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2016-10-09 10:08:35 1 [Note] InnoDB: Completed initialization of buffer pool
2016-10-09 10:08:35 1 [Note] InnoDB: Highest supported file format is Barracuda.
2016-10-09 10:08:35 1 [Note] InnoDB: Log scan progressed past the checkpoint lsn 49463
2016-10-09 10:08:35 1 [Note] InnoDB: Database was not shutdown normally!
2016-10-09 10:08:35 1 [Note] InnoDB: Starting crash recovery.
2016-10-09 10:08:35 1 [Note] InnoDB: Reading tablespace information from the .ibd files...
2016-10-09 10:08:35 1 [Note] InnoDB: Restoring possible half-written data pages
2016-10-09 10:08:35 1 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 1600607
2016-10-09 10:08:35 1 [Note] InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
2016-10-09 10:08:35 1 [Note] InnoDB: 128 rollback segment(s) are active.
2016-10-09 10:08:35 1 [Note] InnoDB: Waiting for purge to start
2016-10-09 10:08:35 1 [Note] InnoDB: 5.6.22 started; log sequence number 1600607
2016-10-09 10:08:35 1 [Note] Server hostname (bind-address): '*'; port: 3306
2016-10-09 10:08:35 1 [Note] IPv6 is available.
2016-10-09 10:08:35 1 [Note]   - '::' resolves to '::';
2016-10-09 10:08:35 1 [Note] Server socket created on IP: '::'.
2016-10-09 10:08:35 1 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
</code></pre>
<p>Any idea how to set <code>lower_case_table_names=1</code> in the Kubernetes yml file?</p>
| <p>Could you try using <code>args</code> instead of <code>command</code>? Setting <code>command</code> replaces the image's entrypoint, so the MySQL image's init script (<code>docker-entrypoint.sh</code>) never runs and the system tables (<code>mysql.user</code>, <code>mysql.plugin</code>) are never created, which matches the errors in your log. <code>args</code> only overrides the image's CMD, so the flags are passed through to <code>mysqld</code> after initialization has run. That is to say,</p>
<pre><code>args:
- --lower_case_table_names=1
- --skip-ssl
</code></pre>
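<p>In the generated deployment that would look roughly like the sketch below (container name, image and env are copied from your file; the only change is replacing <code>command</code> with <code>args</code>, so the image's own entrypoint still initializes the database):</p>
<pre><code>containers:
- name: mysql
  image: mysql:5.6.22
  env:
  - name: MYSQL_USER
    value: root
  - name: MYSQL_ALLOW_EMPTY_PASSWORD
    value: 'yes'
  - name: MYSQL_DATABASE
    value: app
  # no command: here, so docker-entrypoint.sh still runs and creates the system tables
  args:
  - --lower_case_table_names=1
  - --skip-ssl
  ports:
  - containerPort: 3306
</code></pre>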
<p>If it still doesn't work, how about creating a config volume?
In the yaml file for the mysql pod, you can define it like this:</p>
<pre><code>spec:
containers:
- name: mysql
image: mysql:5.6
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /var/lib/mysql
name: data
- mountPath: /etc/mysql/conf.d/
name: config
readOnly: true
ports:
- containerPort: 3306
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "yes"
volumes:
- name: data
hostPath:
path: /var/lib/data
- name: config
configMap:
name: mysql-config
</code></pre>
<p>And then you can pass additional config parameters by loading <code>mysql-config</code> written as </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config
data:
  my.cnf: |
[mysqld]
lower_case_table_names=1
skip_ssl
</code></pre>
<p>Then no modification of the <code>command</code> or <code>args</code> values in the Kubernetes yaml is required. At least in our local development environment, we set options such as <code>innodb_file_format=barracuda</code> in the latter way.</p>
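<p>To check that the setting actually took effect once the pod is running, something like the following should work; the pod name is a placeholder, and the empty root password matches the <code>MYSQL_ALLOW_EMPTY_PASSWORD</code> setting above:</p>
<pre><code>kubectl exec -it <mysql-pod-name> -- mysql -uroot -e "SHOW VARIABLES LIKE 'lower_case_table_names';"
</code></pre>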
|
<p>I am deploying <code>symmetricds</code> on <code>google container engine</code>, so I have built the <code>symmetricds</code> <code>war</code> file and created a <code>docker</code> <code>tomcat</code> image as below:</p>
<pre><code>FROM tomcat
ENV JAVA_OPTS="-Dcom.sun.management.jmxremote.port=1109 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
ENV CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=1109 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
ADD ./symmetric-ds.war /usr/local/tomcat/webapps/
ADD ./mysql-connector-java-5.1.30.jar /usr/local/tomcat/lib/
COPY ./context.xml /usr/local/tomcat/conf/context.xml
COPY ./server.xml /usr/local/tomcat/conf/server.xml
COPY ./tomcat-users.xml /usr/local/tomcat/conf/tomcat-users.xml
RUN sh -c 'touch /usr/local/tomcat/webapps/symmetric-ds.war'
VOLUME /usr/local/tomcat/webapps/
EXPOSE 8080 1109
</code></pre>
<p>After that I have pushed it to the repository and I am using <code>kubernetes</code> to deploy it.
My <code>kubernetes</code> <code>yml</code> file is below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: symserver
spec:
replicas: 1
template:
metadata:
labels:
app: symserver
spec:
containers:
- name: symserver
image: symserver:v1
ports:
- containerPort: 8080
- containerPort: 1109
---
apiVersion: v1
kind: Service
metadata:
name: symserver
spec:
selector:
app: symserver
type: LoadBalancer
ports:
- port: 8080
- port: 1109
</code></pre>
<p>I have two problems for which I am looking for a solution:</p>
<ol>
<li><p>As <code>docker</code> images are read only, whatever properties I have defined in <code>symmetricds.properties</code> (which is part of the war file; the war file is inside <code>tomcat</code>, and I named the <code>tomcat</code> image <code>symserver</code> for <code>docker</code>) are fixed and read only, for example:</p>
<p>sync.url=http://$(hostName):8080/symmtric-ds/sync/$(engineName)</p></li>
</ol>
<p>When I deploy it to Google Cloud I get a different IP for the pods and for the service's external link, so how do I solve this problem? I have to set this IP in the <code>symmetricds.properties</code> file so my other store nodes can connect to it, and when I restart the server, will <code>symmetricds</code> pick up the new IP or the same IP again from the file?</p>
<ol start="2">
<li>How do I use <code>JMX</code> in the case of docker and <code>kubernetes</code>? I have added the <code>JMX</code> options in the build file but somehow I am not able to connect to it using <code>jconsole</code>. I have exposed port 1109 to the local machine using the port-forward command. </li>
</ol>
| <p>The <code>symmetricds.properties</code> file either has to be kept outside of the war file and manipulated before starting the server, so the placeholders can be replaced with concrete values, or you can use the notation <code>${env.variable.value}</code> and see whether Spring replaces them with environment variables.</p>
<p>To externalize the file <code>symmetricds.properties</code> add this section to the file <code>WEB-INF\web.xml</code></p>
<pre><code><context-param>
<param-name>multiServerMode</param-name>
<param-value>true</param-value>
</context-param>
</code></pre>
<p>Store the file on the file system in a directory, let's say <code>/opt/symm/</code>, and set the Java system property <code>symmetric.engines.dir</code> to that directory path.</p>
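<p>A rough sketch of how that could be wired into the container; the <code>SYNC_HOST</code> variable, the file paths and the use of the Kubernetes service DNS name <code>symserver</code> are assumptions for illustration, not SymmetricDS-prescribed steps:</p>
<pre><code>#!/bin/sh
# entrypoint.sh - run this instead of starting Tomcat directly
# point SymmetricDS at the externalized engine properties directory
export CATALINA_OPTS="$CATALINA_OPTS -Dsymmetric.engines.dir=/opt/symm"
# replace the $(hostName) placeholder with a stable name; inside the cluster the
# service DNS name (assumed to be "symserver") is more stable than a pod IP
SYNC_HOST="${SYNC_HOST:-symserver}"
sed -i "s|\$(hostName)|${SYNC_HOST}|g" /opt/symm/*.properties
exec catalina.sh run
</code></pre>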
|
<p>Is it correct to assume that one PV can be consumed by several PVCs and each pod instance needs one binding of PVC? I'm asking because I created a PV and then a PVC with different size requirements such as:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: k8sdisk
labels:
type: amazonEBS
spec:
capacity:
storage: 200Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
volumeID: vol-xxxxxx
fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: couchbase-pvc
labels:
type: amazonEBS
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<p>But when I use the PVC with the pod, it shows as 200GB available instead of the 5GB. </p>
<p>I'm sure I'm mixing things, but could not find a reasonable explanation.</p>
| <p>When you have a PVC it will look for a PV that satisfies its requirements, but unless it is a volume and claim in multi-access mode (and there is a limited number of backends that support it, e.g. NFS, details in <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes" rel="noreferrer">http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes</a>), the PV will not be shared by multiple PVCs. Furthermore, the size in the PVC is not intended as a quota on the amount of data saved to the volume during the pod's life, but as a way to match a big enough PV, and that's it. That is why your pod sees 200GB: the 5Gi claim was bound to the whole 200Gi volume, and the pod gets the entire volume.</p>
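<p>You can see the one-to-one binding, and the fact that the claim reports the capacity of the volume it bound to, with something like this; the claim's CAPACITY column should show 200Gi rather than the 5Gi that was requested:</p>
<pre><code>kubectl get pv k8sdisk
kubectl get pvc couchbase-pvc
</code></pre>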
|
<p>I configured docker on the same host as my kubernetes-master for the private docker registry.<br>Pushing to the private docker registry without https was successful.<br>I can also pull the image just using docker.<br></p>
<p>When I run kubernetes for this image, I get the following log from 'kubectl describe pods':</p>
<pre><code>kubectl describe pods
Name: fgpra-250514157-yh6vb
Namespace: default
Node: 5.179.232.64/5.179.232.64
Start Time: Tue, 11 Oct 2016 18:06:59 +0200
Labels: pod-template-hash=250514157,run=fgpra
Status: Pending
IP: <removed myself>
Controllers: ReplicaSet/fgpra-250514157
Containers:
fgpra:
Container ID:
Image: 5.179.232.65:5000/some_api_image
Image ID:
Port: 3000/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-q7u3x:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q7u3x
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4s 4s 1 {default-scheduler } Normal Scheduled Successfully assigned fgpra-250514157-yh6vb to 5.179.232.64
4s 4s 1 {kubelet 5.179.232.64} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
4s 4s 1 {kubelet 5.179.232.64} spec.containers{fgpra} Normal Pulling pulling image "5.179.232.65:5000/some_api_image"
4s 4s 1 {kubelet 5.179.232.64} spec.containers{fgpra} Warning Failed Failed to pull image "5.179.232.65:5000/some_api_image": unable to ping registry endpoint https://5.179.232.65:5000/v0/
v2 ping attempt failed with error: Get https://5.179.232.65:5000/v2/: http: server gave HTTP response to HTTPS client
v1 ping attempt failed with error: Get https://5.179.232.65:5000/v1/_ping: http: server gave HTTP response to HTTPS client
4s 4s 1 {kubelet 5.179.232.64} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fgpra" with ErrImagePull: "unable to ping registry endpoint https://5.179.232.65:5000/v0/\nv2 ping attempt failed with error: Get https://5.179.232.65:5000/v2/: http: server gave HTTP response to HTTPS client\n v1 ping attempt failed with error: Get https://5.179.232.65:5000/v1/_ping: http: server gave HTTP response to HTTPS client"
3s 3s 1 {kubelet 5.179.232.64} spec.containers{fgpra} Normal BackOff Back-off pulling image "5.179.232.65:5000/some_api_image"
3s 3s 1 {kubelet 5.179.232.64} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fgpra" with ImagePullBackOff: "Back-off pulling image \"5.179.232.65:5000/some_api_image\""
</code></pre>
<p>I already configured my /etc/init.d/sysconfig/docker to use my insecure private registry. <br></p>
<p>This is the command to start the kubernetes deployment :</p>
<pre><code>kubectl run fgpra --image=5.179.232.65:5000/some_api_image --port=3000
</code></pre>
<p>How can I set kubernetes to pull from my private docker registry without using ssl?<br></p>
| <p>This is rather a docker issue than a kubernetes one. You need to add your http registry as an <code>insecure-registry</code> to the docker daemon on each kubernetes node. </p>
<p><code>docker daemon --insecure-registry=5.179.232.65:5000</code></p>
<p>In most environments there is a file like <code>/etc/default/docker</code> where you can add this parameter. </p>
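<p>For example, a sketch assuming a Debian/Ubuntu-style node where the daemon options live in <code>/etc/default/docker</code>; the exact file and service name vary by distribution and init system:</p>
<pre><code># on every Kubernetes node that has to pull from the registry
echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=5.179.232.65:5000"' | sudo tee -a /etc/default/docker
# the daemon must be restarted for the flag to take effect
sudo service docker restart
</code></pre>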
|
<p>I have a problem logging into one container of a multi-container pod.
I get the container id from <code>kubectl describe pod <pod-name></code>:</p>
<pre><code>kubectl describe pod ipengine-net-benchmark-488656591-gjrpc -c <container id>
</code></pre>
<p>When I try: </p>
<pre><code>kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c 70761432854f /bin/bash
</code></pre>
<p>It says: Error from server: container 70761432854f is not valid for pod ipengine-net-benchmark-488656591-gjrpc</p>
| <p>Ah, a more careful reading of the help output for <code>kubectl exec</code> explains it:</p>
<p>Flags:</p>
<pre><code> -c, --container="": Container name. If omitted, the first container in the pod will be chosen
-p, --pod="": Pod name
-i, --stdin[=false]: Pass stdin to the container
-t, --tty[=false]: Stdin is a TTY
</code></pre>
<p>So I just used the container name from my manifest.yaml and it worked like a charm.</p>
<p><code>Name of the container: ipengine-net-benchmark-iperf-server</code></p>
<pre><code>kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c ipengine-net-benchmark-iperf-server /bin/bash
</code></pre>
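<p>If you don't have the manifest at hand, you can also list a pod's container names directly (a small sketch; assumes your kubectl version supports jsonpath output):</p>
<pre><code>kubectl get pod ipengine-net-benchmark-488656591-gjrpc -o jsonpath='{.spec.containers[*].name}'
</code></pre>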
|